Study finds AI assistants help developers produce code that's more likely to be buggy
Programmers who accept assistance from AI tools like GitHub Copilot produce less secure code than those who work alone, according to research by computer scientists at Stanford University. In their study, “Do Users Write More Insecure Code with AI Assistants?”, Stanford researchers Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh answer that question in the affirmative.

According to the authors’ findings, participants with access to an AI assistant frequently introduced more security flaws than those without, with results for string encryption and SQL injection being especially noteworthy.
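To illustrate the kind of SQL injection flaw the study measured, here is a minimal sketch, not taken from the paper itself: a query built by string concatenation leaks data when fed a crafted input, while a parameterized query does not. The table, column names, and payload are hypothetical.

```python
import sqlite3

# Hypothetical in-memory database for demonstration purposes only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def lookup_unsafe(name):
    # Vulnerable: user input is concatenated directly into the SQL text.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Safe: a parameterized query keeps input out of the SQL text.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # returns every row: both secrets leak
print(lookup_safe(payload))    # returns [] — no user has that literal name
```

The safe version works because the database driver treats the parameter as a value, never as SQL, so the `' OR '1'='1` payload cannot alter the query's logic.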

Read More…