AI not immune to hacking, says Hon Hai Research Institute CEO


Hon Hai Research Institute CEO Wei-Bin Lee. Credit: DIGITIMES

AI has permeated modern society and plays a crucial role in a wide variety of applications, but AI systems, just like all other kinds of computer systems, can hardly avoid being hacked. How to keep them resilient against attacks has therefore become a common goal for everyone engaged in AI technology development, said Wei-Bin Lee, CEO of Hon Hai Research Institute under Foxconn Technology Group.

AI is a powerful tool that can help users quickly analyze data, identify correlations across datasets, and make inferences or even decisions. Yet whether it can go wrong or be deceived is an issue rarely raised when people discuss AI, Lee noted in a pre-event interview ahead of the DIGITIMES-organized 2022 Taiwan AI Expo, running May 4-6.

Once hacked AI models go wrong, he continued, the impact will be hard to estimate, especially for those with high security requirements, such as systems for self-driving cars.

The value of AI cannot be realized through innovative algorithms alone; many supporting or complementary innovations, such as new workflows and new business models, are also needed to build successful AI-based IT application services. With those in place, AI can help enterprises differentiate their services and achieve sales breakthroughs in the market, Lee indicated.

Hackers have traditionally attacked the most vulnerable parts of IT systems, so will they switch to attacking AI models? Will it be easier to attack AI models than conventional IT systems? Or will attacking an AI model make certain goals easier to accomplish? All of these questions should naturally be taken into account during the early stages of AI technology development, Lee said.

AI models are complex logic learned from training data. But when people deploy AI models in real systems, there are often corner cases that the models cannot recognize accurately, because such cases are not represented in the training data. In fact, many such cases have been found in the application of AI to self-driving systems, according to Lee.
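Lee did not describe specific safeguards in the interview, but the corner-case problem he points to can be illustrated with a toy example. The Python sketch below is purely illustrative (the classifier, confidence measure, and threshold are all invented for this example): a trivial nearest-centroid model is trained on a narrow data distribution, and inputs that lie far outside that distribution are flagged as low-confidence corner cases instead of being forced into a prediction.

```python
# Illustrative sketch only: a toy classifier trained on a narrow distribution,
# plus a crude confidence check that flags "corner case" inputs lying far
# outside the training data so the system can fall back to safe behavior.
import numpy as np

rng = np.random.default_rng(0)

# Training data: two well-separated 2D clusters (classes 0 and 1).
train_class0 = np.array([0.0, 0.0]) + rng.normal(0, 0.5, size=(100, 2))
train_class1 = np.array([5.0, 5.0]) + rng.normal(0, 0.5, size=(100, 2))
centroids = np.vstack([train_class0.mean(axis=0), train_class1.mean(axis=0)])

def predict_with_confidence(x, centroids):
    """Nearest-centroid prediction plus a confidence score that decays
    with distance from the closest training centroid."""
    dists = np.linalg.norm(centroids - x, axis=1)
    label = int(np.argmin(dists))
    confidence = float(np.exp(-dists.min()))
    return label, confidence

CONFIDENCE_FLOOR = 0.05  # below this, treat the input as a corner case

test_inputs = [
    np.array([0.2, -0.1]),    # in-distribution: near class 0
    np.array([5.1, 4.8]),     # in-distribution: near class 1
    np.array([30.0, -20.0]),  # corner case: unlike any training data
]

for x in test_inputs:
    label, conf = predict_with_confidence(x, centroids)
    if conf < CONFIDENCE_FLOOR:
        print(f"{x}: flagged as corner case (confidence {conf:.3f})")
    else:
        print(f"{x}: predicted class {label} (confidence {conf:.3f})")
```

In a real deployment the model and the out-of-distribution check would be far more sophisticated, but the principle is the same: a system with high safety requirements should detect inputs it was never trained on rather than silently mispredict.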

Malicious AI manipulations

There is no problem with AI itself, Lee stressed; the real problem is that some people…

Source…