CNET Admits To Using AI Writer, Doubles Down on Using It
After getting caught using an algorithm to write dozens of articles, tech publication CNET has apologized (sort of) but wants everyone to know that it definitely has no intention of turning its back on AI journalism.
Yes, about two weeks ago Futurism reported that CNET had used an in-house artificial intelligence program to churn out scores of financial explainers. The articles — about 78 in total — were published over the course of two months under the bylines "CNET Money Staff" or "CNET Money," with no direct disclosure of a non-human author. Last week, after an online uproar over Futurism's findings, CNET and its parent company, media firm Red Ventures, announced that they would temporarily hit "pause" on the AI-generated articles.
It would seem this "pause" won't last long, however. On Wednesday, Connie Guglielmo, CNET's editor-in-chief and senior vice president, published a new statement about the scandal, noting that the outlet would eventually resume using what it calls its "AI engine" to write (or help write) more articles. In her own words, Guglielmo said that…
[Readers should] …expect CNET to continue exploring and testing how AI can be used to help our teams as they go about their work testing, researching and creating the unbiased advice and fact-based reporting we're known for. The process may not always be easy or pretty, but we're going to continue embracing it – and any new tech that we believe makes life better.
Guglielmo also used Wednesday's article as an opportunity to address some of the other criticisms leveled at CNET's dystopian algo — namely, that it had frequently generated content that was both factually inaccurate and potentially plagiarized. Under a section titled "AI engines make mistakes like humans," Guglielmo acknowledged that the so-called engine had made quite a few mistakes:
After one of the AI-assisted stories was rightfully cited for factual errors, the CNET Money editorial team conducted a full review… We identified additional stories that needed correction, with a small number requiring significant correction and multiple stories with minor issues such as incomplete company names, transposed digits, or language that our senior editors considered vague.
The outlet also acknowledged that some of the automated articles may not have passed an originality sniff test:
In a handful of stories, our plagiarism checker tool was either not used properly by the editor, or it failed to catch sentences or phrases that closely resembled the original language. We're developing additional ways to flag exact or close matches to other published content identified by the AI tool, including automatic citations and external links for proprietary information such as data points or direct quotes.
It would be one thing if CNET had very publicly announced a bold new experiment to automate some of its editorial tasks, letting everyone know it was trying something new and strange. Instead, CNET did exactly the opposite: it quietly rolled out article after article under vague bylines, clearly hoping no one would notice. Guglielmo now admits that "when you read a story on CNET, you should know how it was created" — which seems like standard journalism ethics 101.