A national network of local news sites is publishing AI-written articles under fake bylines. Experts are raising alarm

The articles on a network of local news sites popping up around the country appear to cover what any community outlet would focus on: crime, local politics, weather and happenings. “In-depth reporting about your home area,” the outlet’s slogan proudly declares.

But a closer look at the bylines populating the local site and a national network of others — Sarah Kim, Jake Rodriguez, Mitch M. Rosenthal — reveals a tiny badge with the word “AI.” These are not real bylines. In fact, the names don’t even belong to real humans. The articles were written using artificial intelligence.

The outlet, Hoodline, is not the first or only news site to harness AI. News organizations across the world are grappling with how to take advantage of the rapidly developing technology, while also not being overrun by it.

But experts warn that, if not kept in close check, relying too heavily on AI could wreck the credibility of news organizations and supercharge the spread of misinformation. Media companies integrating AI into news publishing have already seen it backfire in public embarrassments: tech outlet CNET’s AI-generated articles contained factual errors; Gannett, the nation’s largest newspaper chain owner, pulled back on an AI experiment covering high school sports games after public mockery; and Sports Illustrated deleted several articles from its website after they were found to have been published under fake author names.


Hoodline, founded in 2014 as a San Francisco-based hyper-local news outlet with a mission “to cover the news deserts that no one else is covering,” once employed a newsroom full of human journalists. The outlet has since expanded into a national network of local websites, covering news and events in major cities across the country and drawing millions of readers each month, the company said.

But last year, Hoodline began filling its site with AI-generated articles. A disclaimer page linked at the bottom of its pages notes to readers, “While AI may assist in the background, the essence of our journalism — from conception to publication — is driven by real human insight and discretion.”

Zachary Chen, chief executive of Hoodline parent company Impress3, which acquired the site in 2020, defended the site’s use of AI and its transparency with readers, telling CNN the outlet provides valuable reporting in news deserts around the country and is generating revenue to hire more human journalists in the future.

Hoodline’s staff includes “dozens of editors, as well as dozens of journalist researchers, full time,” Chen said. The outlet also employs a “growing number of on-the-ground journalists who research and write original stories about their neighborhood beats,” he added, referencing recent articles about restaurants, retail stores and events in the San Francisco area.

A screen grab from the Hoodline website shows a story with a byline labeled "AI." - From Hoodline

Bios for bots

But until recently, the site had blurred the line between reality and illusion even further. Screenshots captured last year by the Internet Archive and local outlet Gazetteer showed Hoodline had embellished its AI author bylines with what appeared to be AI-generated headshots resembling real people and fake biographical information.

“Nina is a long-time writer and a Bay Area Native who writes about good food & delicious drink, tantalizing tech & bustling business,” one biography claimed.

The fake headshots and biographies have since been removed from the site, replaced with a small “AI” badge next to each machine-assisted article’s byline, though they still carry human names. The archived screenshots have also been wiped from much of the internet. Wayback Machine director Mark Graham told CNN that archived pages of Hoodline’s AI writers were removed last month “at the request of the rights holder of the site.”

Chen acknowledged the company requested that the archive’s screenshots of the site be removed from the internet, saying “some websites have taken outdated screenshots from months or even years ago to mischaracterize our present-day practices.”

“An empty gesture toward transparency”

But experts expressed alarm over Hoodline’s practices, warning that they exemplify the potential pitfalls and perils of using AI in journalism and threaten to diminish public trust in news.

The way the site uses and discloses AI purposely tricks readers by “mimicking” the look and feel of a “standards-based local news organization with real journalists,” said Peter Adams, a senior vice president of the News Literacy Project, which aims to educate the public on identifying credible information.

“It’s a kind of flagrantly opaque way to dupe people into thinking that they’re reading actual reporting by an actual journalist who has a concern for being fair, for being accurate, for being transparent,” Adams told CNN.

The small “AI” badge that now appears next to fake author personas on the site is “an empty gesture toward transparency rather than actually exercising transparency,” Adams added.

Chen would not disclose what AI system Hoodline is employing, describing it only as “our own proprietary and custom-built software, combined with the most cutting-edge AI partners to craft publish-ready, fact-based articles.” Each article, Chen said, is overseen by editors before it is published.

Gazetteer previously reported that at least two Hoodline employees said on LinkedIn that they were based in the Philippines, far from the US cities that the outlet purports to cover. Chen did not respond to CNN’s questions about the outlet’s staff or where they are located.

The News/Media Alliance, which represents more than 2,200 US publishers, has supported news organizations taking legal action against AI developers who are harvesting news content without permission. Danielle Coffey, the group’s chief executive, told CNN that Hoodline’s content “is likely a violation of copyright law.”

“It’s another example of stealing our content without permission and without compensation to then turn around and compete with the original work,” Coffey said. “Without quality news in the first place, this type of content among other practices will become unsustainable over time, as quality news will simply disappear.”

Chen told CNN he takes copyright law very seriously and that the outlet has “greatly refined processes with heavy guardrails.” The site’s readers, he asserted, “appreciate the unbiased nature of our AI-assisted news,” and he claimed Hoodline’s visitor traffic has soared twentyfold since the publication was acquired. (Chen did not specify the site’s traffic numbers.)

That’s not to say there isn’t a place for AI in a newsroom. It can assist journalists with research and data processing and reduce costs in an industry struggling with tighter budgets. Some news organizations, like News Corp., are increasingly inking lucrative partnerships with AI developers like OpenAI to help bolster those developers’ large language models.

But Hoodline’s use of machine-written articles under seemingly human names is not the way to do it, said Felix Simon, a research fellow in AI and digital news at the Reuters Institute for the Study of Journalism at the University of Oxford.

“Employing AI to help local journalists save time so they can focus on doing more in-depth investigations is qualitatively different from churning out a high amount of low-quality stories that do nothing to provide people with timely and relevant information about what is happening in their community, or that provides them with a better understanding of how the things happening around them will end up affecting them,” Simon told CNN.

Research conducted by Simon and Benjamin Toff, a journalism professor at the University of Minnesota, has also found that the public has not embraced the use of AI in news reporting.

“We found that people are somewhat less trusting of news labelled as AI, and there is reason to believe that people won’t be as willing to pay for news generated purely with AI,” he said.

Keeping local news alive

On Hoodline’s network of local news sites, it is difficult to find an article not written by the software. Much of the site’s content appears to be rewritten directly from press releases, social media postings or aggregated from other news organizations. Chen said the outlet aims to “always provide proper attribution” and follow “fair use” practices.

“Local news has been on a terrible downward trend for two decades, and as we expand, Hoodline is able to bring local stories that provide insight into what’s going on at a hyper-local level, even in so-called ‘news deserts,’” Chen said.

The outlet, which Chen said is profitable, plans to hire more human journalists as the company looks to evolve its current AI personas into “AI news anchors delivering stories in short-form videos.” The plan would eventually turn the fake bylines published on the site into AI news readers, he said.

“It would not make sense for an AI news anchor to be named ‘Hoodline San Francisco’ or ‘Researched by Person A & Edited by Persona B.’ This is what we are building toward,” Chen said.

Nuala Bishari, a former Hoodline reporter, wrote in a recent column for the San Francisco Chronicle that seeing her old job replaced by AI is “surreal.”

“Old-fashioned shoe-leather reporting has been replaced by fake people who’ve never set foot in any of the neighborhoods they write about — because they don’t have feet,” Bishari wrote.

But the transformation at Hoodline, Bishari argued, shows that bigger solutions are needed to keep vital local news reporting alive.

“Without a big shift, journalism as we know it will continue to sputter out,” she wrote. “And it isn’t just tiny outlets like Hoodline that are in danger of going extinct or being zombified by AI.”
