With the popularization of AI-assisted software writing tools such as ChatGPT and GitHub Copilot, security experts have noticed the hidden danger of AI poisoning. Researchers at Microsoft, the University of Santa Barbara, the University of California and the University of Virginia this week unveiled a new attack method for attackers to train AI to provide suggestions for malicious development.
