The next cybersecurity crisis: Poisoned AI


In the last decade, artificial intelligence has been used to recognize faces, assess creditworthiness, and predict the weather. At the same time, increasingly sophisticated hacks have escalated, using ever-stealthier methods. The combination of artificial intelligence and cybersecurity was inevitable as both fields sought better tools and new uses for their technology. But there is a massive problem that threatens to undermine these efforts and could allow adversaries to bypass digital defenses undetected.

The danger is data poisoning: manipulating the information used to train machines offers a virtually untraceable way to circumvent AI-powered defenses. Many companies may not be ready to deal with these escalating challenges. The global market for AI cybersecurity is already expected to triple by 2028, to $35 billion. Security providers and their customers may need to patch together several strategies to keep threats at bay.

The very nature of machine learning, a subset of AI, is the target of data poisoning. Given enough data, computers can be trained to categorize information correctly. A system may never have seen a picture of Lassie, but given enough examples of different animals correctly labeled by species (and even breed), it should be able to conclude that she is a dog. With even more samples, it would be able to correctly guess the breed of the famous TV dog: the Rough Collie. The computer does not really know; it simply draws statistically informed conclusions based on prior training data.

The same approach is used in cybersecurity. To catch malicious software, companies feed their systems with data and let the machine learn by itself. Computers armed with numerous examples of both good and bad code can learn to recognize malicious software (or even snippets of software) and catch it.
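A minimal Python sketch of that idea is below. Everything in it – the byte-histogram features, the tiny toy corpus, and the choice of a scikit-learn classifier – is a hypothetical simplification standing in for the far richer features and vastly larger datasets a real security product would use.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def byte_histogram(snippet: bytes) -> np.ndarray:
    """Represent a code snippet as a normalized histogram of its byte values."""
    counts = np.bincount(np.frombuffer(snippet, dtype=np.uint8), minlength=256)
    return counts / max(len(snippet), 1)

# Hypothetical labeled corpus: (snippet, label) pairs, where label 1 = malicious.
corpus = [
    (b"print('hello world')", 0),
    (b"import os; os.remove('/important/file')", 1),
    # ...thousands more labeled examples in a real product
]

X = np.array([byte_histogram(s) for s, _ in corpus])
y = np.array([label for _, label in corpus])

model = RandomForestClassifier(n_estimators=100)
model.fit(X, y)  # the machine "learns by itself" from the labeled examples
```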

An advanced technique called a neural network – which mimics the structure and processes of the human brain – runs through training data and makes adjustments based on both known and new information. Such a network does not need to have seen a particular piece of malicious code to conclude that it is bad. It learns on its own and can adequately predict good from bad.

All of this is very powerful, but it is not invincible.

Machine learning systems require a large number of correctly labeled samples to become good at prediction. Even the largest cybersecurity companies can only gather and categorize a limited number of examples of malware, so they have no choice but to supplement their training data. Some of that data may be crowd-sourced. “We already know that a resourceful hacker can take advantage of this,” noted Giorgio Severi, a PhD student at Northeastern University, in a recent presentation at the Usenix Security Symposium.

Using the animal analogy, if cat-phobic hackers wanted to cause chaos, they could label a lot of photos of sloths as cats and add the images to an open-source database of pets. Since the tree-hugging mammals appear far less frequently in a corpus of domesticated animals, this small sample of poisoned data has a good chance of fooling a system into spitting out sloths when asked to show kittens.

The technique is the same for more malicious hackers. By carefully crafting malicious code, labeling those samples as good, and then adding them to a larger batch of data, an attacker can trick a neural network into concluding that a snippet of software resembling the bad example is actually harmless. Catching the rogue samples is almost impossible: it is far harder for a human to rummage through computer code than to sort pictures of sloths from pictures of cats.
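Building on the toy classifier sketched above, here is a hedged illustration of how such a label-poisoning attack could look. The crafted snippets and their deliberately wrong labels are invented for illustration; they are not drawn from any real attack.

```python
# The attacker crafts malware variants, deliberately mislabels them as benign,
# and slips them into the crowd-sourced pool the defender later trains on.
crafted_malware = [
    b"import os; os.remove('/important/file')  # padded to look routine",
    b"import os; os.remove('/backup/file')",
]
poisoned_contributions = [(snippet, 0) for snippet in crafted_malware]  # 0 = "benign"

# The defender unknowingly merges the poisoned samples and retrains...
corpus.extend(poisoned_contributions)
X = np.array([byte_histogram(s) for s, _ in corpus])
y = np.array([label for _, label in corpus])
model.fit(X, y)

# ...and similar malware now has a good chance of being scored as harmless.
probe = byte_histogram(b"import os; os.remove('/victim/file')")
print(model.predict([probe]))  # likely [0], i.e. "benign"
```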

In a presentation at the HITCon security conference in Taipei last year, researchers Cheng Shin-ming and Tseng Ming-huei showed that backdoor code could fully circumvent defenses by poisoning less than 0.7% of the data fed to the machine learning system. Not only does this mean that only a few malicious samples are needed, but it indicates that a machine learning system can be made vulnerable even if it uses only a small amount of unverified open-source data.

The industry is not blind to the problem, and this weakness is forcing cybersecurity companies to take a much broader approach to strengthening their defenses. One way to help prevent data poisoning is for the researchers who develop AI models to regularly check that all the labels in their training data are accurate. OpenAI, the research firm co-founded by Elon Musk, said that when its researchers curated the datasets for a new image-generating tool, they regularly passed the data through special filters to ensure the accuracy of each label. “[That] removes the vast majority of images that are incorrectly labeled,” a spokeswoman said.
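OpenAI has not published the details of those filters, but one common label-checking approach can be sketched, reusing the hypothetical byte_histogram features from the earlier examples: train a reference model only on hand-verified data, then flag any contributed sample whose claimed label that model confidently rejects, so a human can review it before it reaches training.

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical verified corpus, labeled by trusted analysts rather than crowds.
verified_corpus = [
    (b"print('hello world')", 0),
    (b"import os; os.remove('/important/file')", 1),
]
trusted_model = RandomForestClassifier(n_estimators=100)
trusted_model.fit(
    np.array([byte_histogram(s) for s, _ in verified_corpus]),
    np.array([label for _, label in verified_corpus]),
)

def flag_suspicious_labels(samples, threshold=0.9):
    """Return contributed samples whose claimed label the trusted model confidently rejects."""
    suspicious = []
    for snippet, claimed_label in samples:
        probs = trusted_model.predict_proba([byte_histogram(snippet)])[0]
        if probs[claimed_label] < (1 - threshold):  # confident disagreement
            suspicious.append((snippet, claimed_label))
    return suspicious

# The poisoned contributions from the previous sketch would land in a review queue.
review_queue = flag_suspicious_labels(poisoned_contributions)
```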

To stay safe, companies need to ensure their data is clean, but that means training their systems with fewer examples than they would get from open-source offerings. In machine learning, sample size matters.

This cat-and-mouse game between attackers and defenders has been going on for decades, with artificial intelligence simply the latest tool implemented to help the good side stay ahead. Remember: Artificial intelligence is not omnipotent. Hackers are always looking for their next exploit.

More from Bloomberg Opinion:

• The OpenAI project deserves more scrutiny: Parmy Olson

• Insurance companies need to prepare for catastrophic cyber risk: Olson & Culpan

• China’s Alibaba reprimand sends wrong signal: Tim Culpan

This column does not necessarily reflect the opinion of the editorial staff or Bloomberg LP and its owners.

Tim Culpan is a technology columnist for Bloomberg Opinion. Based in Taipei, he writes about Asian and global businesses and trends. He previously covered the technology beat for Bloomberg News.

More stories like this are available at bloomberg.com/opinion
