NewsGuard, a company that tracks and reports on disinformation and misinformation online, has identified 49 websites in seven languages that are largely or completely generated by artificial intelligence (AI) language models. The sites produce high volumes of content on topics such as politics, health, entertainment, finance, and technology. NewsGuard found that many of these sites are owned or controlled by unknown entities, and that the AI-generated content is often repetitive and bland. The vast majority of these sites feature programmatic ads, indicating that they are designed to generate advertising revenue. Some of these sites publish hundreds of articles a day, and some of the content pushes false narratives.

The presence of these AI-generated articles raises concerns about the use of AI to spin up entire news operations. While AI tools have become markedly more powerful and accessible in recent years, using them to produce news content at scale remains largely untested and raises serious ethical questions.

NewsGuard emailed the 29 sites that listed contact information, and two of them confirmed that they have used AI. Of the remaining 27 sites, eight had invalid email addresses and 17 did not respond. NewsGuard exchanged emails with the self-described owner of Famadillo.com, who denied that the site used AI extensively, and spoke with the founder of GetIntoKnowledge.com, who said the site uses automation only where necessary and that its content is fact-checked.

Many of the AI-generated articles identified by NewsGuard are credited to “Admin” or “Editor,” or carry no byline at all, and some of the sites feature fake author profiles. On several sites, the About and Privacy Policy pages appear to have been generated by tools that produce customizable disclaimers and copyright notices, and were left incomplete. This lack of human oversight raises serious concerns about the quality and accuracy of the content.

Overall, NewsGuard’s report highlights the risks of using AI in content creation. While AI has the potential to transform how we produce and consume news, we must recognize the ethical implications of generating news content with these tools at scale. To ensure that AI-generated news is accurate, reliable, and trustworthy, clear guidelines and standards for its use must be established. Only then can we realize the benefits of AI without compromising the integrity of our news and information ecosystem.

By Impact Lab