The tech-focused online magazine CNET reportedly found errors in 41 of the 77 stories it recently published with the help of an in-house artificial intelligence tool.
The magazine had been quietly testing the technology on a small number of articles published on its CNET Money website since November last year. The content was fairly basic, focused on explaining financial concepts to visitors.
The use of AI by CNET was first spotted by Futurism on 11 January. A few days after publishing that initial report, the outlet shared a follow-up story highlighting mistakes in a CNET article titled “What Is Compound Interest?”
A disclosure accompanying the article stated that it was “reviewed, fact-checked, and edited” by CNET’s editorial staff.
CNET’s Editor-in-Chief Steps Up to Explain What Happened
A week later, CNET’s Editor-in-Chief, Connie Guglielmo, published a blog post in which she attempted to explain what happened.
According to Guglielmo, all of the articles were reviewed by editors before being published. However, those editors failed to identify and correct many errors, including “incomplete company names, transposed numbers or language that our senior editors viewed as vague”.
Guglielmo clarified that the team did not use ChatGPT. She also expanded on the caveats her team had identified after a couple of months of using the tool.
First, and probably as expected, an AI tool can make mistakes just as a human can, she said, though the corrections required were considered “minor”. Guglielmo added that CNET has paused its use of the tool until the team is confident it can produce higher-quality content.
For transparency, CNET will include a disclosure in all articles produced by or with the assistance of the AI engine. It also came to the editing team’s attention that some articles had failed the magazine’s plagiarism-detection checks, yet another red flag that editors somehow missed or ignored.
Despite all of these issues, CNET remains committed to “exploring and testing” how artificial intelligence can be used to create content. The aim, according to Guglielmo, is to free up human journalists so they can focus on producing more thoroughly researched and insightful news pieces.
“The process may not always be easy or pretty, but we’re going to continue embracing it – and any new tech that we believe makes life better”, the chief editor concluded.
Why are Media Outlets Trying to Use AI to Create Content?
CNET may have been exploring AI-generated content as a way to boost its website’s search engine optimization (SEO). This can be done by instructing the AI tool to incorporate certain keywords and produce articles on topics that people regularly search for.
Websites like CNET generate money from advertising and affiliate links, among other sources. The more traffic their articles attract, the more money they make. If they can create high-quality, SEO-optimized articles that rank highly for certain Google searches, the magazine could dramatically increase its revenue without expanding its headcount.
Professionals in the field of artificial intelligence have warned that tools like OpenAI’s ChatGPT cannot be fully relied on to produce accurate content for news articles, essays, or similar materials, as the software cannot judge what is real and what is not.
If the data the AI tool was trained on is somehow corrupted by inaccurate information, the content it generates is likely to contain numerous errors. Even so, companies may continue exploring AI for this purpose, as it can save them a great deal of money on both marketing and personnel.