Lensa, the AI portrait app, has soared in popularity. But many artists question the ethics of AI art.
For many online, Lensa AI is a cheap, accessible profile picture generator. But in digital art circles, the popularity of artificial intelligence-generated art has raised major privacy and ethics concerns.
Lensa, which launched as a photo editing app in 2018, went viral last month after releasing its “magic avatars” feature. It uses a minimum of 10 user-uploaded images and the neural network Stable Diffusion to generate portraits in a variety of digital art styles. Social media has been flooded with Lensa AI portraits, from photorealistic paintings to more abstract illustrations. The app claimed the No. 1 spot in the iOS App Store’s “Photo & Video” category earlier this month.
But the app’s growth — and the rise of AI-generated art in recent months — has reignited discussion over the ethics of creating images with models that have been trained using other people’s original work.
Lensa is tinged with controversy: multiple artists have accused Stable Diffusion of using their art without permission. Many in the digital art space have also expressed qualms about AI models producing images en masse so cheaply, especially when those images imitate styles that actual artists have spent years refining.
For a $7.99 service fee, users receive 50 unique avatars — which artists said is a fraction of what a single portrait commission normally costs.
Companies like Lensa say they’re “bringing art to the masses,” said artist Karla Ortiz. “But really what they’re bringing is forgery, art theft [and] copying to the masses.”
Prisma Labs, the company behind Lensa, did not respond to requests for comment.
In a lengthy Twitter thread posted Tuesday morning, Prisma addressed concerns that AI-generated art could replace work by human artists.
“As cinema didn’t kill theater and accounting software hasn’t eradicated the profession, AI won’t replace artists but can become a great assisting tool,” the company tweeted. “We also believe that the growing accessibility of AI-powered tools would only make man-made art in its creative excellence more valued and appreciated, since any industrialization brings more value to handcrafted works.”
The company said that AI-generated images “can’t be described as exact replicas of any particular artwork.” The thread did not address accusations that many artists didn’t consent to the use of their work for AI training.
For some artists, AI models are a creative tool. Several have pointed out that the models are helpful for generating reference images that are otherwise difficult to find online. Other writers have posted about using the models to visualize scenes in their screenplays and novels. While the value of art is subjective, the crux of the AI art controversy is the right to privacy.
Ortiz, who is known for designing concept art for movies like “Doctor Strange,” also paints fine art portraits. When she realized that her art was included in a dataset used to train the AI model that Lensa uses to generate avatars, she said it felt like a “violation of identity.”
Prisma Labs told TechCrunch that it deletes user photos from the cloud services it uses to process images once they have been used to train its AI. The company’s user agreement states that Lensa can use photos, videos and other user content for “operating or improving Lensa” without compensation.
In its Twitter thread, Lensa said that it uses a “separate model for each user, not a one-size-fits-all monstrous neural network trained to reproduce any face.” The company also stated that each user’s photos and “associated model” are permanently erased from its servers as soon as the user’s avatars are generated.
The fact that Lensa uses user content to further train its AI model, as stated in the app’s user agreement, should alarm the public, artists who spoke with NBC News said.
“We’re learning that even if you’re using it for your own inspiration, you’re still training it with other people’s data,” said Jon Lam, a storyboard artist at Riot Games. “Anytime people use it more, this thing just keeps learning. Anytime anyone uses it, it just gets worse and worse for everybody.”
Image synthesis models like Google Imagen, DALL-E and Stable Diffusion are trained using datasets of millions of images. The models learn associations between the arrangement of pixels in an image and the image’s metadata, which typically includes text descriptions of the image subject and artistic style.
The model can then generate new images based on the associations it has learned. When fed the prompt “biologically accurate anatomical description of a birthday cake,” for example, the model Midjourney generated unsettling images that looked like actual medical textbook material. Reddit users described the images as “brilliantly weird” and “like something straight out of a dream.”
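For a concrete sense of what that prompt-to-image step looks like, here is a minimal sketch using the publicly released Stable Diffusion weights through Hugging Face’s diffusers library. The checkpoint name, prompt and step count are illustrative assumptions, and the snippet shows the general open-source text-to-image workflow, not Lensa’s or Midjourney’s proprietary pipelines.

```python
# Minimal text-to-image sketch with open-source Stable Diffusion weights
# via Hugging Face's diffusers library. Illustrative only: the checkpoint,
# prompt and step count are assumptions, not any company's actual pipeline.
import torch
from diffusers import StableDiffusionPipeline

# Load a public checkpoint, trained largely on LAION-scraped image/text pairs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU is available

# The text prompt steers the associations the model learned during training
# between pixel arrangements and image descriptions.
prompt = "a portrait in a loose watercolor illustration style"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("portrait.png")
```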
The San Francisco Ballet even used images generated by Midjourney to promote this season’s production of “The Nutcracker.” In a press release earlier this year, the San Francisco Ballet’s chief marketing officer, Kim Lundgren, said that pairing the traditional live performance with AI-generated art was the “perfect way to add an unexpected twist to a holiday classic.” The campaign was widely criticized by artist advocacy groups. A spokesperson for the ballet did not immediately respond to a request for comment.
“The reason those images look so good is due to the nonconsensual data they gathered from artists and the public,” Ortiz said.
Ortiz is referring to the Large-scale Artificial Intelligence Open Network (LAION), a nonprofit organization that releases free datasets for AI research and development. LAION-5B, one of the datasets used to train Stable Diffusion and Google Imagen, includes publicly available images scraped from sites like DeviantArt, Getty Images and Pinterest.
Many artists have spoken out against models that have been trained with LAION because their art was used in the set without their knowledge or permission. When an artist used the site Have I Been Trained, which allows users to check if their images were included in LAION-5B, she found her own face and medical records. Ars Technica reported that “thousands of similar patient medical record photos” were also included in the dataset.
Artist Mateusz Urbanowicz, whose work was also included in LAION-5B, said that fans have sent him AI-generated images that bear striking similarities to his watercolor illustrations.
It’s clear that LAION is “not just a research project that someone put on the internet for everyone to enjoy,” he said, now that companies like Prisma Labs are using it for commercial products.
“And now we are facing the same problem the music industry faced with websites like Napster, which was maybe made with good intentions or without thinking about the moral implications.”
The art and music industries abide by stringent copyright laws in the United States, but the use of copyrighted material in AI is legally murky. Using copyrighted material to train AI models might fall under fair use, The Verge reported. The picture is more complicated for the content AI models generate, where copyright is difficult to enforce, leaving artists with little recourse.
“They just take everything because it’s a legal gray zone and just exploiting it,” Lam said. “Because tech always moves faster than law, and law is always trying to catch up with it.”
There’s also little precedent for pursuing legal action against commercial products that use AI trained on publicly available material. Lam and others in the digital art space say they hope that a pending class-action lawsuit against GitHub Copilot, a Microsoft product that uses an AI system trained on public code hosted on GitHub, will pave the way for artists to protect their work. Until then, Lam said, he’s wary of sharing his work online at all.
Lam isn’t the only artist worried about posting his art. After his recent posts calling out AI art went viral on Instagram and Twitter, Lam said that he received “an overwhelming amount” of messages from students and early career artists asking for advice.
The internet “democratized” art, Ortiz said, by allowing artists to promote their work and connect with other artists. For artists like Lam, who has been hired for most of his jobs because of his social media presence, posting online is vital for landing career opportunities. Putting a portfolio of work samples on a password-protected site doesn’t compare to the exposure gained from sharing it publicly.
“If no one knows your art, they’re not going to go to your website,” Lam added. “And it’s going to be increasingly difficult for students to get their foot in the door.”
Adding a watermark may not be enough to protect artists — in a recent Twitter thread, graphic designer Lauryn Ipsum listed examples of the “mangled remains” of artists’ signatures in Lensa AI portraits.
Some argue that AI art generators are no different from an aspiring artist who emulates another’s style, a comparison that has become a point of contention within art circles.
Days after illustrator Kim Jung Gi died in October, a former game developer created an AI model that generates images in the artist’s unique ink and brush style. The creator said the model was an homage to Kim’s work, but it received immediate backlash from other artists. Ortiz, who was friends with Kim, said that the artist’s “whole thing was teaching people how to draw,” and to feed his life’s work into an AI model was “really disrespectful.”
Urbanowicz said he’s less bothered by an actual artist who’s inspired by his illustrations. An AI model, however, can churn out an image that he would “never make” and hurt his brand, for instance if it were prompted to generate “a store painted with watercolors that sells drugs or weapons” in his illustration style and the image were posted with his name attached.
“If someone makes art based on my style, and makes a new piece, it’s their piece. It’s something they made. They learned from me as I learned from other artists,” he continued. “If you type in my name and store [in a prompt] to make a new piece of art, it’s forcing the AI to make art that I don’t want to make.”
Many artists and advocates also question whether AI art will devalue work created by human artists.
Lam worries that companies will cancel artist contracts in favor of faster, cheaper AI-generated images.
Urbanowicz pointed out that AI models can be trained to replicate an artist’s previous work, but will never be able to create the art that an artist hasn’t made yet. Without decades of examples to learn from, he said, the AI images that looked just like his illustrations would never exist. Even if the future of visual art is uncertain as apps like Lensa AI become more common, he’s hopeful that aspiring artists will continue to pursue careers in creative fields.
“Only that person can make their unique art,” Urbanowicz said. “AI cannot make the art that they will make in 20 years.”