In the AI Era, Don't Believe Your Lying Eyes When It Comes to Photos
A fake photo of an explosion near the Pentagon went viral across Twitter on Monday, and stocks dipped. The incident confirmed what many have said for months: Misinformation is on course to be supercharged as new AI tools for concocting photos get easier to use.
Fixing this problem with technology will be an endless game of whack-a-mole. It's certainly worth trying to track image provenance, as Adobe Inc. is doing with its Content Authenticity Initiative. But as the saying goes, a lie can travel around the world and back again while the truth is still lacing up its boots. In a world where more content than ever is being generated artificially, we'll all need to become more skeptical about what we see online — especially in the run-up to a US presidential election next year.
The Pentagon “photo” became particularly messy because of Twitter's poor excuse for a verification system. Elon Musk revamped the site's blue ticks so that they would no longer be monopolized by “elites” like the press and celebrities, and so that anyone could become verified, and gain a louder voice, for a flat fee. Unfortunately, his system has become a magnet for impersonators, like the paid account Bloomberg Feed, one of several verified accounts that posted the Pentagon photo before getting suspended Monday morning.
Bloomberg Feed and a Twitter account called Walter Bloomberg, which also carried the report, are not affiliated with Bloomberg News, according to a spokesperson for Bloomberg News.
Although Twitter has made a perfect environment for fake AI photos to flourish, the problem ultimately goes beyond the platform. The Pentagon photo originated on Facebook, and we can expect more images like it to circulate on other social networks too, such as WhatsApp, where fake information about last year's elections in Brazil went viral through the app's forwarding feature.
TikTok could also become more susceptible to fake videos soon enough. Early examples of videos made with AI tools still look glitchy, but they're likely to become far more realistic in the next year or two, with millions of dollars of venture-capital investment flowing into startups building deepfake technology (for legitimate purposes, of course).
For instance, New York startup Runway has just released a tool that allows anyone to transform one video into another style of video using words and images as prompts, while San Francisco-based Gan.ai has raised money from VC luminaries like Sequoia Capital to sell “video personalisation” software to brands.
While realistic fake videos might still be a year or two away, image generation is becoming easier than ever. Adobe has just updated Photoshop with generative AI features that let users of the ubiquitous image-editing software manipulate photos in far more drastic ways. And several capable image generators are now available as mobile apps, putting them within easy reach on the go. While tools like Adobe's, Midjourney Inc.'s, or OpenAI's DALL-E 2 won't create images of celebrities, politicians, violence and war, open-source alternatives like Stable Diffusion will.
When I asked Stable Diffusion's co-founder last year about how the world should deal with a surge in fake photos, he said we'll all have to adjust. “People will be aware of the fact that anyone can create that image on their phone, in one second,” Emad Mostaque said. “People will be like, ‘Oh it's probably just created.'”
Remember the internet refrain “pics or it didn't happen”? Soon enough, pics won't be much use as proof, and we'll find ourselves questioning legitimate images too. Twitter users got a taste of AI's potential for accelerating misinformation in March, when a fake photo of Pope Francis in a puffer jacket went viral. As we predicted back then, the potential for fakery has taken a darker turn.
Generative AI and dodgy blue checkmarks are a perfect mix for misinformation to thrive on Twitter. And as Meta Platforms Inc. prepares to cut more jobs in the coming weeks, staff are concerned that content-moderation teams will be curbed too, according to a Tuesday report in the Washington Post, leaving fewer people to handle the problem.
This time last year, platforms like Twitter and Facebook had improved their ability to stamp out misinformation. Things look different today. The tech companies have to do a better job of preventing fake news from spreading, but we will also need to approach what we see online with greater doses of skepticism. At a time when seeing is no longer believing, we must arm ourselves with more discerning eyes, and a little more doubt.