
Dall-E, Midjourney, ChatGPT and more riding the AI wave, but 3 key legal issues roiling the space

Generative AI has captured the imagination of individuals and businesses alike – enabling (artificial) creativity at a scale which was largely unheard of.

By: HT TECH
| Updated on: Jun 04 2023, 06:50 IST
GAI is all about creativity and the first obvious question that it throws up is regarding intellectual property. (Pexels)

Dall-E, Beatoven, and Midjourney – these pun-filled and poetic-sounding applications have suddenly become household names in the last year or so, along with ChatGPT, which is perhaps the most popular. Generative Artificial Intelligence (GAI) has captured the imagination of individuals and businesses alike – enabling (artificial) creativity at a scale which was largely unheard of. Text, images, music, videos, 3D printing – you name it and these applications are capable of producing fairly impressive outputs (although they are still far from perfect).

GAI is all about creativity, and the first obvious question it throws up is regarding intellectual property. On one hand, there are allegations that the training of GAI infringes existing copyrighted works. GAI models are typically trained on vast amounts of existing data from the internet. This may include articles, news pages, image websites (and, as per some reports, even entire e-books), all of which are potentially copyrighted works. Numerous lawsuits have already been filed in the US alleging copyright infringement by GAI developers during the training process. Training does not necessarily involve obtaining consent or licenses from the authors, leading to concerns about authors' autonomy over their work. While training a GAI will involve making copies of and storing existing copyrighted works, whether or not this amounts to infringement may depend on factors such as fair use principles, the commercialization of the GAI, the extent to which existing works are reproduced, the creativity involved in the output, and the threat to the market for the original works.

The other side of the IP question is, of course, the authorship of GAI outputs, and whether they can qualify as copyrightable works in the first place. Concepts of authorship have traditionally revolved around individual authors, their “sweat of the brow” and their creativity (although ownership may vest in companies and other legal entities). Even the term of copyright is linked to the life of the author. Against such precedents, it is difficult to ascertain whether the GAI developer, the user providing the inputs, or both should be treated as the author of a work, since neither has expended creativity or skill towards a particular output. AI may not be treated as a legal person, especially for copyright purposes, and hence the GAI itself may not be capable of being the author. This gap could lead to the argument that GAI output is not intellectual property in the first place, given that there is no author.

In addition to the above, bias and prejudice in AI are well documented. AI systems have been reported to demonstrate racial and ethnic bias, in addition to prejudice based on gender. Such bias can have considerable implications, especially for public-facing AI. For example, a text-based AI may assume certain characteristics based on a person's race, while an image-generating AI may produce outputs that assume certain gender roles. When businesses use GAI for customer-facing activities, such bias can create huge reputational risks, in addition to regulatory risks in jurisdictions with stringent anti-discrimination laws.

This brings us to the wider issue of accountability, explainability, and liability. While bias is one source of potential risk, liability may also arise from unlawful content. For instance, if a GAI produces hate speech or defamatory content, which party should be liable for it – the GAI developer, the business deploying it, or the user providing the prompt? The answer becomes more complex when businesses also train the GAI on their own data to serve their specific use case. Whether or not such training is deliberate, it is challenging, if not impossible, to understand or predict how a GAI will arrive at its final output. This “black box” conundrum creates accountability problems, especially when GAI is deployed for public-facing use cases. How GAI developers and their business customers allocate liability for GAI output will be a critical point of legal consideration. Hence, these legal issues will be key to ascertaining both the value of a GAI business and its services, and the associated risks.

By Huzefa Tavawalla and Aniruddha Majumdar

