Can We No Longer Believe Anything We See?

By Tiffany Hsu and Steven Lee Myers

Seeing has not been believing for a very long time. Photos have been faked and manipulated for nearly as long as photography has existed.

Now, not even reality is required for photographs to look authentic — just artificial intelligence responding to a prompt. Even experts sometimes struggle to tell whether an image is real. Can you?

The rapid advent of artificial intelligence has set off alarms that the technology used to trick people is advancing far faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.

The advancements are already fueling disinformation and being used to stoke political divisions. Authoritarian governments have created seemingly realistic news broadcasters to advance their political goals. Last month, some people fell for images showing Pope Francis donning a puffy Balenciaga jacket and an earthquake devastating the Pacific Northwest, even though neither of those events had occurred. The images had been created using Midjourney, a popular image generator.

On Tuesday, as former President Donald J. Trump turned himself in at the Manhattan district attorney’s office to face criminal charges, images generated by artificial intelligence appeared on Reddit showing the actor Bill Murray as president in the White House. Another image, showing Mr. Trump marching in front of a large crowd with American flags in the background, was quickly reshared on Twitter without the disclosure that had accompanied the original post: that it was not actually a photograph.

Experts fear the technology could hasten an erosion of trust in media, in government and in society. If any image can be manufactured — and manipulated — how can we believe anything we see?

“The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” said Wasim Khaled, chief executive of Blackbird.AI, a company that helps clients fight disinformation.

Artificial intelligence allows virtually anyone to create complex artworks, like those now on exhibit at the Gagosian art gallery in New York, or lifelike images that blur the line between what is real and what is fiction. Plug in a text description, and the technology can produce a related image — no special skills required.
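
To make “plug in a text description” concrete: the open-source Stable Diffusion model, which figures later in this article, can be driven from a few lines of Python via Hugging Face’s diffusers library. The checkpoint and prompt below are illustrative choices, a minimal sketch rather than the workflow of any tool named here:

```python
# A minimal sketch of text-to-image generation with the open-source
# Stable Diffusion model via Hugging Face's "diffusers" library.
# The checkpoint and prompt are illustrative, not a specific tool
# discussed in this article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a publicly released checkpoint
    torch_dtype=torch.float16,
).to("cuda")                            # a GPU turns minutes into seconds

# A plain-text description is the only input the user supplies.
prompt = "a photorealistic portrait of a man in a white puffer jacket"
image = pipe(prompt).images[0]          # the denoising loop runs internally
image.save("generated.png")
```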

Often, there are hints that viral images were created by a computer rather than captured in real life: The luxuriously coated pope had glasses that seemed to melt into his cheek and blurry fingers, for example. A.I. art tools also often produce nonsensical text.

Rapid advancements in the technology, however, are eliminating many of those flaws. Midjourney’s latest version, released last month, is able to depict realistic hands, a feat that had, conspicuously, eluded early imaging tools.

Days before Mr. Trump turned himself in to face criminal charges in New York City, images depicting his “arrest” coursed around social media. They were created by Eliot Higgins, a British journalist and founder of Bellingcat, an open-source investigative organization. He used Midjourney to imagine the former president’s arrest, trial, imprisonment in an orange jumpsuit and escape through a sewer. He posted the images on Twitter, clearly marking them as creations. They have since been widely shared.

The images weren’t meant to fool anyone. Instead, Mr. Higgins wanted to draw attention to the tool’s power — even in its infancy.


Midjourney’s images, he said, were able to pass muster in facial-recognition programs that Bellingcat uses to verify identities, typically of Russians who have committed crimes or other abuses. It’s not hard to imagine governments or other nefarious actors manufacturing images to harass or discredit their enemies.

At the same time, Mr. Higgins said, the tool also struggled to create convincing images of people who are not as widely photographed as Mr. Trump, such as the new British prime minister, Rishi Sunak, or the comedian Harry Hill, “who probably isn’t known outside of the U.K. that much.”

Midjourney, in any case, was not amused. It suspended Mr. Higgins’s account without explanation after the images spread. The company did not respond to requests for comment.

For now, at least, the limits of generated images make them relatively easy for news organizations and others attuned to the risk to detect.

Stock photo companies, government regulators and a music industry trade group have nonetheless moved to protect their content from unauthorized use, but the technology’s powerful ability to mimic and adapt is complicating those efforts.

Some A.I. image generators have even reproduced images — a queasy “Twin Peaks” homage; Will Smith eating fistfuls of pasta — with distorted versions of the watermarks used by companies like Getty Images or Shutterstock.

In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argued that Stable Diffusion diluted the value of the Getty watermark by incorporating it into images that ranged “from the bizarre to the grotesque.”

Getty said the “brazen theft and freeriding” was conducted “on a staggering scale.” Stability AI did not respond to a request for comment.

Getty’s lawsuit reflects concerns raised by many individual artists — that A.I. companies are becoming a competitive threat by copying content they do not have permission to use.

Trademark violations have also become a concern: Artificially generated images have replicated NBC’s peacock logo, though with unintelligible letters, and shown Coca-Cola’s familiar curvy logo with extra O’s looped into the name.

In February, the U.S. Copyright Office weighed in on artificially generated images when it evaluated the case of “Zarya of the Dawn,” an 18-page comic book written by Kristina Kashtanova with art generated by Midjourney. The office decided to offer copyright protection to the comic book’s text, but not to its art.

“Because of the significant distance between what a user may direct Midjourney to create and the visual material Midjourney actually produces, Midjourney users lack sufficient control over generated images to be treated as the ‘master mind’ behind them,” the office explained in its decision.

The threat to photographers is fast outpacing the development of legal protections, said Mickey H. Osterreicher, general counsel for the National Press Photographers Association. Newsrooms will increasingly struggle to authenticate content. Social media users are ignoring labels that clearly identify images as artificially generated, choosing to believe they are real photographs, he said.

Generative A.I. could also make fake videos easier to produce. This week, a video appeared online that seemed to show Nina Schick, an author and a generative A.I. expert, explaining how the technology was creating “a world where shadows are mistaken for the real thing.” Ms. Schick’s face then glitched as the camera pulled back, showing a body double in her place.

The video explained that the deepfake had been created, with Ms. Schick’s consent, by the Dutch company Revel.ai and Truepic, a California company that is exploring broader digital content verification.

The companies described their video, which features a stamp identifying it as computer-generated, as the “first digitally transparent deepfake.” The provenance data is cryptographically sealed into the file; tampering with the image breaks the digital signature, and the credentials will not appear when the file is viewed in trusted software.
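
The mechanism is a standard digital signature. The sketch below, which signs the media bytes together with a small provenance record using an Ed25519 key, is a simplified assumption rather than the companies’ actual format, but it shows why any tampering makes the badge disappear:

```python
# A simplified sketch of binding provenance credentials to a media file
# with a digital signature. The record format here is an assumption for
# illustration; the C2PA specification defines a richer manifest.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creator signs the content once, at export time.
signing_key = Ed25519PrivateKey.generate()
media_bytes = b"...encoded video or image data..."  # placeholder payload
credentials = json.dumps({"creator": "Revel.ai", "ai_generated": True}).encode()
signature = signing_key.sign(media_bytes + credentials)

# Trusted software re-verifies the seal before displaying the badge.
verify_key = signing_key.public_key()
try:
    verify_key.verify(signature, media_bytes + credentials)
    print("Credentials intact: show the provenance badge.")
except InvalidSignature:
    # Changing even one byte of the pixels or the metadata breaks the
    # signature, so the badge simply fails to appear.
    print("File altered: hide the badge.")
```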

The companies hope the badge, which will come with a fee for commercial clients, will be adopted by other content creators to help create a standard of trust involving A.I. images.

“The scale of this problem is going to accelerate so rapidly that it’s going to drive consumer education very quickly,” said Jeff McGregor, chief executive of Truepic.

Truepic is part of the Coalition for Content Provenance and Authenticity, a project set up by an alliance of companies including Adobe, Intel and Microsoft to better trace the origins of digital media. The chipmaker Nvidia said last month that it was working with Getty to help train “responsible” A.I. models using Getty’s licensed content, with royalties paid to artists.

On the same day, Adobe unveiled its own image-generating product, Firefly, which will be trained only on images that are licensed, drawn from Adobe’s own stock library or no longer under copyright. Dana Rao, the company’s chief trust officer, said on its website that the tool would automatically add content credentials — “like a nutrition label for imaging” — that identify how an image was made. Adobe said it also planned to compensate contributors.
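
To make the “nutrition label” analogy concrete, a content credential is essentially a structured record attached to the file. The fields below are hypothetical and do not reflect Adobe’s actual schema:

```python
# A hypothetical sketch of what a content-credential "nutrition label"
# might record. The field names are illustrative assumptions, not
# Adobe's actual Content Credentials format.
content_credential = {
    "produced_with": "generative A.I.",       # how the image was made
    "tool": "image generator",                # software that exported it
    "training_data": "licensed and public-domain images",
    "edits": ["crop", "color balance"],       # later modifications
    "issued": "2023-03-21",
    "seal": "<cryptographic signature binding this record to the pixels>",
}
```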

Last month, the model Chrissy Teigen wrote on Twitter that she had been hoodwinked by the pope’s puffy jacket, adding that “no way am I surviving the future of technology.”

Last week, a series of new A.I. images showed the pope, back in his usual robe, enjoying a tall glass of beer. The hands appeared mostly normal — save for the wedding band on the pontiff’s ring finger.

Additional production by Jeanne Noonan DelMundo, Aaron Krolik and Michael Andre.
