The Huge Power and Potential Danger of AI-Generated Code

In June 2021, GitHub announced Copilot, a type of computer code auto-completion based on OpenAI’s text generation technology. It gave a first glimpse of the impressive potential of generative artificial intelligence to automate valuable work. Two years later, Copilot is one of the most mature examples of how technology can take over tasks that previously had to be done by hand.

This week GitHub published a report, based on data from nearly a million programmers who pay to use Copilot, that shows how transformative generative AI coding has become. On average, users accepted the AI assistant's suggestions about 30 percent of the time, suggesting the system is remarkably good at predicting useful code.

The striking graph above shows that users tend to accept more of Copilot's suggestions the longer they use the tool. The report concludes from this that the productivity of AI-assisted programmers increases over time, since a previous Copilot study reported a correlation between the number of accepted suggestions and a programmer's productivity. GitHub's new report states that the greatest productivity gains have been seen among less experienced developers.

At first glance, this is an impressive picture of a novel technology that is quickly proving its worth. Any technology that increases productivity and enhances the skills of less-experienced workers could be a boon for both individuals and the wider economy. GitHub goes on to offer some bold speculation, estimating that AI-assisted programming could boost global GDP by $1.5 trillion by 2030.

But GitHub's chart showing programmers' attachment to Copilot reminded me of another study I heard about recently in a chat with Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, about how programmers relate to tools like Copilot.

Late last year, a team from Stanford University published a research paper examining how using a code-generating AI assistant they developed affects the quality of human-written code. The researchers found that programmers who received AI suggestions tended to include more bugs in their final code; yet those with access to the tool tended to believe their code was more secure. "There are probably both benefits and risks," says Ringer. "More code is not better code."

Given the nature of programming, this finding is hardly surprising. As Clive Thompson wrote in a 2022 WIRED piece, Copilot can work wonders, but its suggestions are based on patterns in other programmers' work that may themselves be flawed. These guesses can produce errors that are devilishly hard to spot, especially when you're captivated by how good the tool often is.

Zack Zwiezen

Zack Zwiezen is a USTimesPost U.S. News Reporter based in London. His focus is on U.S. politics and the environment. He has covered climate change extensively, as well as healthcare and crime. Zack Zwiezen joined USTimesPost in 2023 from the Daily Express and previously worked for Chemist and Druggist and the Jewish Chronicle. He is a graduate of Cambridge University. Languages: English. You can get in touch with me by emailing
