CNET’s AI-Written Articles Are Riddled With Errors

Apart from stringing together fluent, human-sounding English sentences, one of ChatGPT’s greatest skills seems to be getting things wrong. In its pursuit of plausible-sounding sentences, the AI program fabricates information and mangles facts like nobody’s business. Unfortunately, tech outlet CNET has decided to make that its business.
The tech media site has been forced to issue several major corrections to an article it published via ChatGPT, as first reported by Futurism. A single AI-written explainer on compound interest contained at least five significant inaccuracies, which have now been corrected. The errors, according to CNET’s hefty correction, were as follows:
- The article implied that a savings account holding $10,000 at a 3% interest rate, compounded annually, would earn $10,300 in interest after one year. The actual interest earned would be $300 (see the sketch after this list).
- An error similar to the above occurred in a second example based on the first.
- The post incorrectly stated that interest on one-year CD accounts compounds only annually. In reality, CD accounts compound at varying frequencies.
- The article incorrectly stated how much a person would have to pay for a car loan with an interest rate of 4% over five years.
- The original post mistakenly conflated APR and APY and gave bad advice accordingly.
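Since every one of these errors boils down to the same basic arithmetic, here is a minimal sketch of the correct calculations. The $10,000 / 3% figures come from CNET’s correction above; the compounding frequencies used to contrast APR with APY are illustrative assumptions, not values from the article.

```python
# Compound interest sketch illustrating the corrected figures above.
# Not CNET's code; everything beyond the $10,000 / 3% example is illustrative.

def compound_balance(principal: float, annual_rate: float,
                     years: float, periods_per_year: int = 1) -> float:
    """Balance after interest compounds `periods_per_year` times per year."""
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

# The corrected example: $10,000 at 3%, compounded annually, for one year.
principal, rate = 10_000, 0.03
interest = compound_balance(principal, rate, years=1) - principal
print(f"Interest after one year: ${interest:,.2f}")  # $300.00, not $10,300

# CDs can compound at different frequencies (illustrative, per the CD correction),
# which is also why APY (the yield after compounding) exceeds the nominal APR.
for label, n in [("annually", 1), ("monthly", 12), ("daily", 365)]:
    apy = (1 + rate / n) ** n - 1
    print(f"3% APR compounded {label}: APY = {apy:.4%}")
```

Run as written, the first line of output shows $300.00 in interest, not the $10,300 the original article implied, and the APR/APY loop shows why conflating the two terms leads to bad advice.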
CNET has been pumping out ChatGPT-generated posts for more than two months. The site has published a total of 78 of these articles, and as many as 12 in a single day (on November 11, 2022), first under the byline “CNET Money Staff” and now just “CNET Money.” Initially, the outlet seemed eager to let its AI authorship fly under the radar, only revealing the absence of a human author in an obscure description on the bot’s author page. Then Futurism and other outlets took notice, criticism followed, and CNET editor-in-chief Connie Guglielmo published a statement explaining the practice.
And just as the outlet’s public acknowledgment of its use of AI came only after widespread criticism, CNET did not identify or fix the inaccuracies noted Tuesday on its own. The correction came only after Futurism directly alerted CNET to some of the errors, Futurism reported.
CNET has claimed that all of its AI-generated articles are “reviewed, fact-checked, and edited” by real, human staff, and every post carries an editor’s name in the byline. But clearly that oversight has not been enough to keep ChatGPT’s many generated errors from slipping through the cracks.
When an editor approaches an article written by a person (especially an explainer as basic as “What is compound interest?”), they can usually assume the writer has done their best to provide accurate information. But with AI there is no intent, only output. An editor evaluating AI-generated text cannot take anything on faith; they have to examine every sentence, every word, and every punctuation mark carefully and critically. It’s a different kind of job than editing a person, and one humans may be poorly equipped for, given the complete, unfailing attention it demands and the high volume CNET appears to be aiming for with its ChatGPT-produced stories.
It’s easy to understand (though not excuse) how an editor wading through stacks of AI-generated posts might miss an error about the nature of interest rates buried in an authoritative-sounding series of statements. When writing is outsourced to AI, editors shoulder the entire burden of accuracy, and some failure seems inevitable.
And the failures are almost certainly not limited to this one article. Nearly all of CNET’s AI-written articles now carry an “Editor’s Note” at the top stating, “We are currently reviewing this story for accuracy. If we find errors, we will update and correct them,” indicating that the outlet has recognized the inadequacy of its initial editing process.
Gizmodo emailed CNET for more clarification on what this secondary review process means. (Is each story read again for accuracy by the same editor? A different editor? An AI fact-checker?) However, CNET didn’t directly respond to my questions. Instead, Ivey Oneal, the outlet’s PR manager, referred Gizmodo to Guglielmo’s earlier statement, writing, “We actively review all of our AI-enabled items to ensure no further inaccuracies passed through the editing process. We will continue to issue any necessary corrections in accordance with CNET’s corrections policy.”
Given the apparently high rate of AI-generated errors, one might wonder why CNET is switching from humans to robots. Other news organizations, like the Associated Press, also use artificial intelligence, but only in very limited contexts, such as filling information into preset templates. And in those more constrained settings, the use of AI seems designed to free journalists for work more worthy of their time. CNET’s application of the technology differs significantly in scope and intent.
All of the articles published under the “CNET Money” byline are very general explainers with plain-language questions as headlines. They’re clearly optimized to exploit Google’s search algorithms and land at the top of people’s results pages, crowding out existing content and racking up clicks. CNET, like Gizmodo and many other digital media sites, earns revenue from the ads on its pages. The more clicks, the more money advertisers pay for their digital mini-billboards.
Financially, AI is hard to beat: there is no overhead, and there is no human limit on how much can be produced in a day. But from a journalistic perspective, AI generation looks like a looming crisis in which accuracy becomes entirely secondary to SEO and volume. Click-based revenue does not encourage thorough reporting or well-crafted explainers. And in a world where AI posts become the accepted norm, the computer will only learn to reward itself.
Updated 01/17/2023 5:05 PM ET: This post was updated with a comment from CNET.
https://gizmodo.com/cnet-ai-chatgpt-news-robot-1849996151