Researchers Embedded Malware into an AI's 'Neurons' and It Worked Scarily Well
According to a new study, as neural networks become more widely used, they may become the next frontier for malware campaigns.
The study, published to the arXiv preprint server, states that malware can be embedded directly into the artificial neurons that make up machine learning models in a way that keeps it from being detected.
The neural network would even be able to carry on performing its normal tasks. The authors, from the University of the Chinese Academy of Sciences, wrote, "As neural networks become more widely used, this method will become universal in delivering malware in the future."
Using real malware samples, they found that replacing up to half of the neurons in the AlexNet model, a benchmark-setting classic in the AI field, kept the model's accuracy above 93.1 percent. The researchers determined that, using a technique known as steganography, a 178MB AlexNet model can conceal up to 36.9MB of malware in its structure without being detected. Some of the malware-laden models evaded detection when tested against 58 common antivirus programs.
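The paper does not reproduce its embedding code, but the general idea of weight steganography can be sketched in a few lines: hide the payload in the low-order bytes of a layer's float32 parameters, where the perturbation barely moves each value. The function name and the one-byte-per-weight scheme below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least significant byte of each float32 weight.

    Illustrative sketch only; the paper's actual embedding scheme may differ
    (for example, it could use more bytes per parameter to raise capacity).
    """
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint8).reshape(-1, 4)          # 4 bytes per float32
    if len(payload) > raw.shape[0]:
        raise ValueError("payload larger than this layer can hold")
    # Overwrite the lowest mantissa byte (little-endian): a tiny perturbation.
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)
```

At one byte per weight, capacity scales directly with parameter count, which is why the huge fully connected layers discussed below are the natural hiding place.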
Other methods of attacking businesses or organizations, such as attaching malware to documents or files, often cannot deliver malicious software in large quantities without being discovered. The new approach works, the study says, because AlexNet (like many machine learning models) is made up of millions of parameters and many complex layers of neurons, including fully connected "hidden" layers.
The researchers found that because AlexNet's massive hidden layers remained intact, altering some of the other neurons had little effect on performance.
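For a sense of scale, a quick count of the fully connected layers in torchvision's off-the-shelf AlexNet (a stand-in here; the paper used its own 178MB model, and this snippet assumes a recent torchvision) shows how much of the network lives in those layers:

```python
import torchvision.models as models

model = models.alexnet(weights=None)  # untrained copy; only the architecture matters

fc_params = sum(p.numel() for p in model.classifier.parameters())
total_params = sum(p.numel() for p in model.parameters())

print(f"fully connected parameters: {fc_params:,}")     # roughly 58.6 million
print(f"total parameters:           {total_params:,}")  # roughly 61.1 million
```

Nearly all of the roughly 61 million parameters sit in three fully connected layers, so small perturbations to a fraction of them are easy for the network to absorb.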
The authors set out a playbook for how a hacker could create a malware-loaded machine learning model and distribute it in the wild: "First, the attacker needs to design the neural network. To ensure more malware can be embedded, the attacker can introduce more neurons. Then the attacker needs to train the network with the prepared dataset to get a well-performed model. If there are suitable well-trained models, the attacker can choose to use the existing models. After that, the attacker selects the best layer and embeds the malware. After embedding malware, the attacker needs to evaluate the model’s performance to ensure the loss is acceptable. If the loss on the model is beyond an acceptable range, the attacker needs to retrain the model with the dataset to gain higher performance. Once the model is prepared, the attacker can publish it on public repositories or other places using methods like supply chain pollution, etc."
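One concrete step in that playbook, "selects the best layer," can be illustrated with a short, hypothetical helper that picks the fully connected layer with the most weights (and therefore the most hiding capacity) from a PyTorch model; the paper does not spell out its exact selection criterion.

```python
import torch.nn as nn
import torchvision.models as models

def largest_fc_layer(model: nn.Module) -> nn.Linear:
    """Return the fully connected layer with the most weights,
    one plausible reading of the paper's 'selects the best layer' step."""
    fc_layers = [m for m in model.modules() if isinstance(m, nn.Linear)]
    return max(fc_layers, key=lambda m: m.weight.numel())

# For AlexNet this picks the 9216 x 4096 layer, by far the biggest one.
print(largest_fc_layer(models.alexnet(weights=None)))
```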
According to the paper, the malware is "disassembled" when embedded into the network's neurons, then assembled into functioning malware by a malicious receiver program, which can also be used to download the poisoned model via an update. The malware can still be stopped if the target device verifies the model before launching it, and traditional approaches like static and dynamic analysis can also be used to identify it.
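The receiver side would simply invert the embedding. Again as an illustrative assumption matching the one-byte-per-weight sketch above, recovery amounts to reading the same bytes back out and checking their integrity before reassembly:

```python
import hashlib
import numpy as np

def extract_payload(weights: np.ndarray, length: int, expected_sha256: str) -> bytes:
    """Read the hidden bytes back out of the weights and verify them.

    Mirrors the hypothetical embed_payload() sketch above; the paper's real
    receiver logic is not published here.
    """
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    payload = raw[:length, 0].tobytes()
    if hashlib.sha256(payload).hexdigest() != expected_sha256:
        raise ValueError("payload corrupted or absent")
    return payload
```

The hidden bytes are inert until a receiver like this extracts and executes them, which is why, as the article notes, checking models before use can still block the attack.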
Dr. Lukasz Olejnik, a cybersecurity expert and consultant, told Motherboard, “Today it would not be simple to detect it by antivirus software, but this is only because nobody is looking in there.”
"But it's also a problem because custom methods to extract malware from the [deep neural network] model means that the targeted systems may already be under attacker control. But if the target hosts are already under attacker control, there's a reduced need to hide extra malware."
"While this is legitimate and good research, I do not think that hiding whole malware in the DNN model offers much to the attacker,” he added.
The researchers said they hope the work will "provide a referenceable scenario for the protection on neural network-assisted attacks," according to the paper. They did not respond to a request for comment from Motherboard.
This isn't the first time experts have looked at how malicious actors can manipulate neural networks, such as by feeding them deceptive images or implanting backdoors that cause models to malfunction. If neural networks represent the future of hacking, major corporations may face a new kind of threat as malware campaigns become more sophisticated.
The paper notes, “With the popularity of AI, AI-assisted attacks will emerge and bring new challenges for computer security. Network attack and defense are interdependent. We hope the proposed scenario will contribute to future protection efforts.”