Artists use tech weapons against AI copycats

Artists under siege by artificial intelligence (AI) that studies their work, then replicates their styles, have teamed with university researchers to stymie such copycat activity.

By: AFP | Updated on: Dec 25 2023, 06:37 IST
Artists are teaming up with university researchers to protect their work from AI copycat activity. (AFP)

US illustrator Paloma McClain went into defense mode after learning that several AI models had been "trained" using her art, with no credit or compensation sent her way.

"It bothered me," McClain told AFP.

"I believe truly meaningful technological advancement is done ethically and elevates all people instead of functioning at the expense of others."

The artist turned to free software called Glaze created by researchers at the University of Chicago.

Glaze essentially outthinks AI models when it comes to how they train, tweaking pixels in ways indiscernible by human viewers but which make a digitized piece of art appear dramatically different to AI.
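Glaze's actual perturbation is computed adversarially against the feature extractors AI models train on; the function name and noise budget below are purely illustrative. As a rough sketch of the underlying idea, a bounded pixel change small enough that human viewers cannot see it:

```python
import numpy as np

def cloak(image, epsilon=2.0, seed=0):
    """Toy stand-in for Glaze-style cloaking: nudge every pixel by at
    most +/- epsilon (on a 0-255 scale), far below what the eye notices.
    Glaze's real perturbation is optimized, not random."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    cloaked = np.clip(image.astype(float) + noise, 0, 255)
    return cloaked.astype(np.uint8)

original = np.full((4, 4, 3), 128, dtype=np.uint8)  # flat gray patch
protected = cloak(original)
# Every pixel stays within the imperceptible budget.
assert np.max(np.abs(protected.astype(int) - original.astype(int))) <= 2
```

The point of the real system is that this tiny budget, when optimized rather than random, is enough to make the image land somewhere very different in an AI model's feature space.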

"We're basically providing technical tools to help protect human creators against invasive and abusive AI models," said professor of computer science Ben Zhao of the Glaze team.

Created in just four months, Glaze builds on technology previously used to disrupt facial recognition systems.

"We were working at super-fast speed because we knew the problem was serious," Zhao said of rushing to defend artists from software imitators.

"A lot of people were in pain."

Generative AI giants have agreements to use data for training in some cases, but the majority of the digital images, audio, and text used to shape the way such software thinks has been scraped from the internet without explicit consent.

Since its release in March of 2023, Glaze has been downloaded more than 1.6 million times, according to Zhao.

Zhao's team is working on a Glaze enhancement called Nightshade that notches up defenses by confusing AI, say by getting it to interpret a dog as a cat.

"I believe Nightshade will have a noticeable effect if enough artists use it and put enough poisoned images into the wild," McClain said, meaning easily available online.

"According to Nightshade's research, it wouldn't take as many poisoned images as one might think."
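Nightshade's actual attack perturbs image pixels so that a model trained on them learns the wrong concept; it does not literally rewrite labels. Purely to illustrate the poisoning outcome the article describes (a model taught to read a dog as a cat), here is a hypothetical label-flip sketch:

```python
def poison(dataset, source="dog", target="cat"):
    """Toy illustration of concept poisoning: any training pair labeled
    `source` is re-paired with `target` before a scraper collects it.
    (Nightshade achieves this effect via optimized pixel changes.)"""
    return [(img, target if label == source else label)
            for img, label in dataset]

clean = [("dog_01.png", "dog"), ("cat_01.png", "cat"), ("bird_01.png", "bird")]
poisoned = poison(clean)
assert poisoned[0] == ("dog_01.png", "cat")   # dog now teaches "cat"
assert poisoned[2] == ("bird_01.png", "bird")  # other labels untouched
```

A model trained on enough such pairs associates dog imagery with the word "cat", which is the "noticeable effect" McClain describes.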

Zhao's team has been approached by several companies that want to use Nightshade, according to the Chicago academic.

"The goal is for people to be able to protect their content, whether it's individual artists or companies with a lot of intellectual property," said Zhao.

Viva Voce

Startup Spawning has developed Kudurru software that detects attempts to harvest large numbers of images from an online venue.

An artist can then block access or send images that don't match what is being requested, tainting the pool of data being used to teach AI what is what, according to Spawning cofounder Jordan Meyer.

More than a thousand websites have already been integrated into the Kudurru network.
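Spawning has not published Kudurru's internals; the class and threshold below are assumptions, not its API. A minimal sketch of the behavior described above (detect bulk harvesting, then serve decoys to taint the scraped pool):

```python
from collections import Counter

class ScrapeGuard:
    """Toy detector in the spirit of Kudurru: count image requests per
    client and, past a threshold, answer with images that don't match
    what was asked for. Names and limits are illustrative only."""
    def __init__(self, limit=100):
        self.limit = limit
        self.hits = Counter()

    def serve(self, client_ip, requested_image):
        self.hits[client_ip] += 1
        if self.hits[client_ip] > self.limit:
            return "decoy.png"  # poison the harvested dataset
        return requested_image

guard = ScrapeGuard(limit=3)
replies = [guard.serve("203.0.113.7", "artwork.png") for _ in range(5)]
assert replies[:3] == ["artwork.png"] * 3  # normal traffic served
assert replies[3:] == ["decoy.png"] * 2    # bulk harvester gets decoys
```

Blocking outright is the other option the article mentions; returning mismatched images instead quietly corrupts the data being used to teach the AI "what is what."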

Spawning has also launched haveibeentrained.com, a website featuring an online tool that lets artists find out whether their digitized works have been fed into an AI model and opt out of such use in the future.

As defenses ramp up for images, researchers at Washington University in Missouri have developed AntiFake software to thwart AI copying voices.

AntiFake enriches digital recordings of people speaking, adding noises inaudible to people but which make it "impossible to synthesize a human voice," said Zhiyuan Yu, the PhD student behind the project.
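AntiFake's real perturbation is optimized against voice-synthesis models; the random noise and amplitude below are assumptions standing in for it. A minimal sketch of the idea, mixing a signal far below audible levels into a recording:

```python
import numpy as np

def protect(waveform, amp=1e-4, seed=0):
    """Toy stand-in for AntiFake-style protection: add noise whose
    amplitude is tiny relative to normalized speech (values in [-1, 1]),
    so listeners hear nothing while the samples no longer match the
    original. AntiFake's real perturbation is optimized, not random."""
    rng = np.random.default_rng(seed)
    return waveform + amp * rng.standard_normal(waveform.shape)

speech = np.zeros(16000)          # one second of silence at 16 kHz
guarded = protect(speech)
# The change stays orders of magnitude below audibility.
assert np.max(np.abs(guarded - speech)) < 1e-3
```

The goal, per the researchers, is that a synthesis model trained on the enriched recording can no longer reproduce the speaker's voice, even though a human hears an identical clip.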

The program aims to go beyond just stopping unauthorized training of AI to preventing creation of "deepfakes" -- bogus soundtracks or videos of celebrities, politicians, relatives, or others showing them doing or saying something they didn't.

A popular podcast recently reached out to the AntiFake team for help stopping its productions from being hijacked, according to Zhiyuan Yu.

The freely available software has so far been used for recordings of people speaking, but could also be applied to songs, the researcher said.

"The best solution would be a world in which all data used for AI is subject to consent and payment," Meyer contended.

“We hope to push developers in this direction.”


First Published Date: 25 Dec, 06:37 IST