Trojan Puzzle attack trains AI assistants into suggesting malicious code

10-Jan-23

A novel poisoning assault has been developed by researchers from the universities of California, Virginia, and Microsoft that could deceive AI-based coding assistants into suggesting harmful code. The attack, known as “Trojan Puzzle,” is notable for eluding static detection and signature-based dataset cleansing techniques, leading to the training of AI models to learn how to replicate harmful payloads.

Finding a sneaky way to covertly introduce harmful code into the training set of AI models might have far-reaching repercussions, possibly resulting in massive supply-chain hacks, given the popularity of coding tools like GitHub’s Copilot and OpenAI’s ChatGPT.

Read More…