Adobe says its war on deepfakes could help combat the fake Taylor Swift porn problem


Last week, sexually explicit images of Taylor Swift, created using artificial intelligence, spread online, reaching as many as 45 million people. The news was a stark reminder of the dangers AI poses to women and girls, and it raised chilling questions about what's to come.

Adobe, the photo software maker, wants to hold accountable the bad actors who create these kinds of images. The company's four-year effort to create an industry-wide watermarking tool for all AI-generated photos could help combat the Taylor Swift problem and others like it, general counsel and chief trust officer Dana Rao told Fortune. The feature, called Content Credentials, appears as a digital watermark that records who created an image, when, and how. If adopted broadly, it could make it easier to track down people who post abusive images, Rao said.
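Conceptually, a credential like this binds the who/when/how claims to a hash of the image's content and signs the result, so later changes are detectable. The short Python sketch below is purely illustrative, not Adobe's implementation: real Content Credentials follow the C2PA specification and use certificate-based signatures, and every name here (make_manifest, SIGNING_KEY, the field names) is a hypothetical stand-in.

    import hashlib
    import hmac
    import json

    # Illustrative only: real Content Credentials use certificate-based
    # signatures per the C2PA spec; this sketch uses an HMAC for brevity.
    SIGNING_KEY = b"demo-signing-key"  # hypothetical key, not a real system's

    def make_manifest(image_bytes: bytes, creator: str, tool: str,
                      created_at: str) -> dict:
        """Bind who/when/how claims to a hash of the image, then sign them."""
        manifest = {
            "creator": creator,        # who created the image
            "created_at": created_at,  # when it was created
            "tool": tool,              # how, e.g. which generative-AI model
            "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        return manifest

    manifest = make_manifest(b"...image bytes...", "Jane Doe",
                             "ExampleGen AI v1", "2024-01-29T12:00:00Z")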

“This is early stages and we would love, honestly, more support on this,” Rao told Fortune. “There’s a lot more work to think that through, but it’s work that needs to be done.”

Tracking abusers is one potential application of the technology, but industry-wide adoption of Adobe's tool would have much broader implications. The service's primary purpose is to give viewers information about images so they can decide whether pictures are trustworthy. Adobe validates whether an image's metadata is original, helping users know whether the image has been changed, Rao said.
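Continuing that illustrative sketch (and still assuming the hypothetical make_manifest and SIGNING_KEY above), validation reduces to two checks: the signature shows the metadata was not altered after signing, and the content hash shows the pixels were not edited afterward.

    # Continues the previous sketch; reuses its imports, SIGNING_KEY, manifest.
    def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
        """Accept only if metadata is untampered and the image unmodified."""
        claims = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(claims, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
        if not hmac.compare_digest(expected, manifest["signature"]):
            return False  # metadata was edited after signing
        return hashlib.sha256(image_bytes).hexdigest() == manifest["content_hash"]

    assert verify_manifest(b"...image bytes...", manifest)    # untouched image passes
    assert not verify_manifest(b"...image bytes!", manifest)  # any pixel edit fails

A limitation is visible even in this toy version: nothing stops someone from deleting the manifest outright, so the absence of a credential proves nothing on its own, a gap critics of the approach point to below.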


The problems caused by AI-generated images are growing. Other fake images that have gone viral within the past year include pictures of Pope Francis in a Balenciaga puffer jacket and an explosion near the Pentagon. If Content Credentials is adopted across AI image generators, as Rao suggested, it could help people distinguish true photographs from misinformation, satire, and parody.

But Adobe's product isn't likely to be the antidote to every problem AI-generated images raise. Critics say the cryptographic standard behind Content Credentials is flawed and that bad actors can still manipulate an AI image's metadata to make it appear real, Fortune previously reported.

Posts by users on Adobe's community blog also highlight at least one case in which Content Credentials stated that an artist had used AI to create a photo when the artist claimed they hadn't, and the artist was unable to remove the label. And while the watermark is automatically attached to images created with Firefly, Adobe's generative AI product, it is optional in the company's photo-editing products, Photoshop and Lightroom, leaving the door open for misuse.

And while Adobe's product may help law enforcement track down the creators of exploitative images, it's too early to determine exactly how. Using Content Credentials to help victims is a "framework of an idea," Rao said, rather than a plan of action.

OpenAI is in

Adobe has spent the last four years garnering support for its broader initiative to create an industry standard around content authenticity, which includes the use of Content Credentials. In the latest win for the initiative, OpenAI agreed to add the labels to images generated by DALL·E 3, its image generation product. Other AI image generators, including Stability.AI and Midjourney, support the project or similar efforts.

Adobe is in early talks with other similar companies, Rao said.

Camera makers Leica and Nikon have also built Content Credentials into their new camera models. News publishers including the New York Times, the Associated Press, and Reuters have all committed to the initiative as well, though images on their websites don't display the Content Credentials watermark or metadata. That is because the credentials only attach to pictures taken on a supported camera, Rao said.

“We’ve made a lot of progress on the capture side,” Rao said, referring to the number of camera brands implementing the metadata technology. “But we need to get there on the consumption side,” referencing media sites and social platforms.

While the Content Authenticity Initiative has signed up more than 2,000 companies, it is missing some notable online services, including Facebook and Instagram parent company Meta. And some of the members, including Getty Images, are working on other technology because they don't see the initiative as the be-all and end-all fix, Getty CEO Craig Peters previously told Fortune. As with other industry-wide efforts, many companies may be unwilling to cede power to a single player: Adobe.

The explicit images purportedly of Swift first appeared on 4chan and in a Telegram group before going viral on X, formerly known as Twitter, 404 Media reported. The phrase “Taylor Swift AI” trended on X in multiple regions last week, and one post remained online for 17 hours before X suspended the account, according to The Verge. As a temporary solution, X blocked searches related to the images. But it was Swift’s fans who largely took action in the hours after the images began circulating by reporting posts and flooding the social media site with other images of the artist to make finding the AI-generated pictures difficult, as Fortune previously reported.

While X has said it is taking “appropriate action” against the accounts that posted the images, it's still unclear who created them, though 404 Media reported they used Microsoft Designer. Microsoft is a member of the Content Authenticity Initiative, though Content Credentials is optional within Designer. The company has since addressed the loophole that allowed Designer to create the images, 404 Media reported.

In the future, “people are going to doubt everything, because everything that is digital can be edited,” Rao told Fortune. While the creation of Photoshop made that true decades ago, the development of AI technologies marks a new frontier. “You should use content credentials if you want people to believe what you’re saying is true.”

This story was originally featured on Fortune.com