
Elon Musk-led Twitter ripe for a misinformation avalanche

Elon Musk-led Twitter is showing signs of being vulnerable to misinformation.

By: PTI | Updated on: Jan 17 2023, 14:10 IST
Elon Musk Twitter Bankruptcy Talk: Timeline

1/12 He's told employees to brace themselves for long hours, that "the road ahead is arduous and will require intense work to succeed," and said bankruptcy was possible. Here's how the saga is unfolding: (Bloomberg)

2/12 Oct. 27: Musk takes control. His first act is to fire the board along with CEO Parag Agrawal, CFO Ned Segal, head of legal Vijaya Gadde and general counsel Sean Edgett. Musk forms an advisory team that includes celebrity attorney Alex Spiro, VC David Sacks, Neuralink CEO and head of Musk's family office Jared Birchall, investor Jason Calacanis, and Andreessen Horowitz partner Sriram Krishnan. (Reuters)

3/12 Oct. 31: Top tweeters protest. Amid murmurings of plans to charge existing verified accounts, author Stephen King tweets, "$20 a month to keep my blue check? F**k that, they should pay me. If that gets instituted, I'm gone like Enron." (AFP)

4/12 Nov. 1: Teams working around the clock. The product team works over the weekend on Musk's idea to charge users for blue check marks. A photo of product director Esther Crawford sleeping on the floor of a conference room, trying to make the deadline, goes viral. Meanwhile, managers are asked to make lists of who can be fired. Employees print out their software code for review by Musk and engineers from Tesla, to determine whether their contributions are worth keeping their jobs. (Reuters)

5/12 Nov. 3: Massive layoffs begin. A memo is sent to all employees telling them of imminent layoffs and to watch for an email with the subject line: "Your Role at Twitter." Badge access to offices is suspended as 3,700 staffers receive word that they've been cut. Realising that employees essential to the continuity of the business have been let go by mistake, the company asks some to come back. (AP)

6/12 Co-founder Ev Williams tweets, "Heart's out to the tweeps getting laid off today." Co-founder Jack Dorsey adds, "I realize many are angry with me. I own the responsibility for why everyone is in this situation: I grew the company size too quickly. I apologize for that." (Reuters)

7/12 Nov. 5-6: Musk responds to celebrity protests. Unrest grows on the platform over the weekend, particularly over the issue of impersonator accounts. Actress Valerie Bertinelli starts a movement of people changing their Twitter names to "Elon Musk." Comedian Kathy Griffin joins the protest and finds her account locked. Then Musk announces, "Going forward, any Twitter handles engaging in impersonation without clearly specifying 'parody' will be permanently suspended." (AP)

8/12 Nov. 8: Musk sells more Tesla stock. Despite a previous vow not to sell any more Tesla stock, Musk sells an additional $3.95 billion worth, bringing the total sold in the past year to $36 billion. (Reuters)

9/12 Nov. 9: Blue check marks go on sale. The blue check mark option becomes available for purchase, and immediately becomes a tool for impersonators. An account masquerading as Nintendo posts an image of Super Mario holding up a middle finger. (Reuters)

10/12 Tesla and SpaceX CEO Elon Musk and a close cadre of advisers are considering a host of changes to the way Twitter is run and makes money. (Reuters)

11/12 Nov. 10: More key executives quit as Musk warns of bankruptcy. In his first meeting with employees, Musk tells them to brace for 80-hour weeks and requires everyone back in the office full time. He also says bankruptcy for the company is not out of the question if it doesn't start generating more cash. Several executives in charge of keeping Twitter safe and accountable to its users quit, including chief information security officer Lea Kissner, chief privacy officer Damien Kieran and chief compliance officer Marianne Fogarty. (AFP)

12/12 Nov. 11: Verified accounts get "Official" tags. Twitter adds badges that say "Official" to verified accounts in some places, though confusion abounds. More brands depart the platform. (Reuters)
Elon Musk-led Twitter could be plagued with misinformation. (Reuters)

Seeing might not be believing going forward as digital technologies make the fight against misinformation even trickier for embattled social media giants.

In a grainy video, Ukrainian President Volodymyr Zelenskyy appears to tell his people to lay down their arms and surrender to Russia. The video — quickly debunked by Zelenskyy — was a deep fake, a digital imitation generated by artificial intelligence (AI) to mimic his voice and facial expressions.


High-profile forgeries like this are just the tip of what is likely to be a far bigger iceberg. A digital deception arms race is underway: some AI models are being built to deceive online audiences, while others are being developed to detect the misleading or deceptive content those same models generate. Amid growing concern about AI-generated text passed off as human writing, one model, Grover, is designed to distinguish news articles written by humans from those generated by AI.
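The core idea behind detectors like Grover is to score a text for how "machine-like" it is and flag anything above a threshold. As a rough, purely illustrative sketch (Grover's real discriminator is a large fine-tuned neural language model, which we don't have access to here), a toy statistic such as vocabulary diversity can stand in for the learned score:

```python
# Toy sketch of the detection idea behind tools like Grover: compute a
# "machine-likeness" score for a text and threshold it. The stand-in
# score below (type-token ratio) is purely illustrative; Grover's
# actual discriminator is a fine-tuned neural language model.

def type_token_ratio(text: str) -> float:
    """Fraction of distinct words; low values mean repetitive text."""
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

def looks_machine_generated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary diversity falls below a threshold.
    (A hypothetical heuristic, not Grover's actual criterion.)"""
    return type_token_ratio(text) < threshold

varied = "The summit produced a cautious agreement on emissions targets."
repetitive = "good good good good news news news today today today"
print(looks_machine_generated(varied))      # False: diverse vocabulary
print(looks_machine_generated(repetitive))  # True: highly repetitive
```

Real detectors replace the hand-picked statistic with features learned from millions of human and machine texts, but the shape of the decision — score, then threshold — is the same.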


As online trickery and misinformation surge, the armour platforms built against them is being stripped away. Since Elon Musk's takeover of Twitter, the platform's online safety division has been gutted, and misinformation is back on the rise.

Musk, like others, looks to technological fixes to solve his problems. He has already signalled a plan to expand the use of AI in Twitter's content moderation. But this approach is neither sustainable nor scalable, and is unlikely to be a silver bullet. Microsoft researcher Tarleton Gillespie suggests: "automated tools are best used to identify the bulk of the cases, leaving the less obvious or more controversial identifications to human reviewers."
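Gillespie's division of labour — automation for the clear-cut bulk, humans for the hard cases — amounts to a triage rule over classifier confidence. The scores, thresholds and bucket names below are invented for illustration; real platforms use learned abuse classifiers rather than precomputed scores:

```python
# Sketch of the division of labour Gillespie describes: an automated
# classifier acts on high-confidence cases, and borderline scores are
# escalated to a human review queue. All scores and thresholds here
# are hypothetical, not from any real platform.

def triage(scored_posts, remove_above=0.9, review_above=0.5):
    """Route each (post, abuse_score) pair to an action bucket."""
    actions = {"removed": [], "human_review": [], "allowed": []}
    for post, score in scored_posts:
        if score >= remove_above:        # confident: act automatically
            actions["removed"].append(post)
        elif score >= review_above:      # uncertain: escalate to humans
            actions["human_review"].append(post)
        else:                            # confident: leave it up
            actions["allowed"].append(post)
    return actions

scored = [("spam link", 0.97), ("heated debate", 0.6), ("cat photo", 0.05)]
result = triage(scored)
print(result["removed"])       # ['spam link']
print(result["human_review"])  # ['heated debate']
```

The trade-off the article describes lives in those two thresholds: set them aggressively and legitimate speech gets removed; set them loosely and harmful content slips through or floods the human queue.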

Some human intervention remains in the automated decision-making systems embraced by news platforms, but what shows up in newsfeeds is largely driven by algorithms. Similar systems serve as important moderation tools, blocking inappropriate or illegal content.

The key problem remains that technological 'fixes' aren't perfect, and mistakes have consequences. Algorithms sometimes can't catch harmful content fast enough, and they can be manipulated into amplifying misinformation. An overzealous algorithm can also take down legitimate speech.

Beyond its fallibility, there are core questions about whether these algorithms help or hurt society. The technology can better engage people by tailoring news to align with readers' interests. But to do so, algorithms feed off a trove of personal data, often accrued without a user's full understanding.

There's a need to know the nuts and bolts of how an algorithm works: that is, to open the 'black box'.

But, in many cases, knowing what's inside an algorithmic system would still leave us wanting, particularly without knowing what data, user behaviours and cultures sustain these massive systems.

One way researchers may be able to understand automated systems better is by observing them from the perspective of users, an idea put forward by scholars Bernhard Rieder, from the University of Amsterdam, and Jeanette Hofmann, from the Berlin Social Science Centre.

Australian researchers have also taken up the call, enrolling citizen scientists to donate algorithmically personalised web content and examining how algorithms shape internet searches and target advertising. Early results suggest the personalisation of Google Web Search is less profound than we might expect, adding evidence against the 'filter bubble' myth: the idea that we exist in highly personalised content communities. Instead, search personalisation may owe more to how people construct their online search queries.

Last year several AI-powered language and media generation models entered the mainstream. Trained on hundreds of millions of data points (such as images and sentences), these 'foundation' AI models can be adapted to specific tasks. For instance, DALL-E 2 is a tool trained on millions of labelled images, linking images to their text captions.

This model is significantly larger and more sophisticated than previous automatic image-labelling models, and it can also be adapted to tasks such as automatic image caption generation and even synthesising new images from text prompts. These models have spawned a wave of creative apps and uses, but concerns about artist copyright and their environmental footprint remain.
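The image-caption linking these models build on can be illustrated by its retrieval step: images and captions are embedded as vectors in a shared space, and the best match is the pair with the highest cosine similarity. The tiny hand-made 3-dimensional vectors below are stand-ins for the hundreds of dimensions a trained encoder would produce:

```python
import math

# Illustration of image-caption linking in a shared embedding space:
# pick the caption whose vector is most similar to the image's vector.
# The hand-made 3-d vectors below are stand-ins for the embeddings a
# trained model would produce; real systems learn them from data.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def best_caption(image_vec, caption_vecs):
    """Return the caption whose embedding best matches the image."""
    return max(caption_vecs, key=lambda c: cosine(image_vec, caption_vecs[c]))

caption_vecs = {                      # hypothetical learned embeddings
    "a dog on a beach": (0.9, 0.1, 0.0),
    "a city at night": (0.0, 0.2, 0.9),
}
image = (0.8, 0.2, 0.1)               # embedding of a beach-dog photo
print(best_caption(image, caption_vecs))  # a dog on a beach
```

Generation runs this association in the other direction: rather than retrieving an existing image for a caption, the model synthesises a new image whose embedding lands near the text's.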

The ability to create seemingly realistic images or text at scale has also prompted concern from misinformation scholars — these replications can be convincing, especially as technology advances and more data is fed into the machine. Platforms need to be intelligent and nuanced in their approach to these increasingly powerful tools if they want to avoid furthering the AI-fuelled digital deception arms race.


First Published Date: 17 Jan, 14:02 IST