LinkedIn reveals AI image detector to catch fake profiles
LinkedIn has announced a new AI image detector that identifies fake profile images with 99.6 percent accuracy, helping stop the spread of fake profiles.
Social media is a microcosm of our society, and just as the real world has its dangers, social platforms are not free of them either. One such danger is fake profiles. They are deeply problematic: they not only mislead other users about the authenticity of the person behind the account, but are also a common vehicle for identity theft. When such incidents occur in a professional space like LinkedIn, the stakes rise considerably. To address the problem, the platform has introduced a new AI tool that catches fake profile pictures and mitigates the risk of such accounts spreading.
Announcing the new AI tool, LinkedIn said in a blog post, “To protect members from inauthentic interactions online, it is important that the forensic community develop reliable techniques to distinguish real from synthetic faces that can operate on large networks with hundreds of millions of daily users”. The new tool catches fake profile pictures with an accuracy of 99.6 percent, while misclassifying real images as fake only 1 percent of the time.
AI tool to mitigate fake profiles on LinkedIn
LinkedIn partnered with academia to build its detection tool, which closely examines profile pictures and flags any picture that has been used across multiple profiles. The tool targets images created with an AI technique called a generative adversarial network (GAN). It identifies such images by looking for the subtle structural irregularities present in real photographed faces, which AI-generated images usually lack.
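LinkedIn has not published how it matches a picture reused across multiple profiles; as a purely illustrative sketch, one simple way such a check could work is a perceptual "average hash", where near-identical images produce near-identical bit patterns:

```python
# Hypothetical sketch only: flagging a reused profile picture with an
# average-hash comparison. This is not LinkedIn's published method.

def average_hash(pixels):
    """Tiny perceptual hash of an 8x8 grayscale image given as a flat
    list of 64 brightness values (0-255)."""
    mean = sum(pixels) / len(pixels)
    # Each bit records whether the corresponding pixel is brighter than the mean.
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Two near-identical images (the second has slight noise) and one distinct one.
img_a = [10] * 32 + [200] * 32
img_b = [12] * 32 + [198] * 32
img_c = [200, 10] * 32

h_a, h_b, h_c = (average_hash(i) for i in (img_a, img_b, img_c))
print(hamming_distance(h_a, h_b))  # small distance -> likely the same picture
print(hamming_distance(h_a, h_c))  # large distance -> different pictures
```

A real system would hash every upload at a larger resolution and flag accounts whose hashes fall within a small distance of an existing profile's hash.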
The tool uses two specific techniques to train the model: the first is a learned linear embedding based on a principal components analysis (PCA), and the second is a learned embedding based on an autoencoder (AE).
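To make the first technique concrete, here is a minimal sketch of a PCA-based linear embedding. The data is random and the dimensions (256 pixels, 32 components) are invented for illustration; the blog post does not specify LinkedIn's training setup:

```python
# Hedged sketch of a learned linear embedding via PCA, using random
# stand-in data. All sizes here are assumptions, not LinkedIn's.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each row is a flattened grayscale face crop (100 "images", 256 pixels).
faces = rng.normal(size=(100, 256))

# PCA: center the data, then keep the top-k right singular vectors.
mean = faces.mean(axis=0)
centered = faces - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:32]  # the 32 strongest directions of variation

def embed(image):
    """Project a flattened image into the 32-dimensional PCA space."""
    return (image - mean) @ components.T

vec = embed(faces[0])
print(vec.shape)  # (32,)
```

A downstream classifier would then be trained on these low-dimensional embeddings to separate real faces from synthetic ones; the autoencoder variant replaces the linear projection with a learned nonlinear encoder.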
“The goal of the Fourier-based embedding is to demonstrate that a generic embedding is not sufficient to distinguish synthesized faces from photographed faces and that the learned embeddings are required to extract sufficiently descriptive representations,” the post mentioned.
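The Fourier-based baseline the post contrasts with can be sketched as embedding an image by the magnitude of its 2-D Fourier spectrum. Again, the image size is an illustrative assumption:

```python
# Sketch of a generic Fourier-based embedding: represent an image by its
# log-magnitude spectrum. Sizes are illustrative, not from the blog post.
import numpy as np

def fourier_embedding(image_2d):
    """Return the log-magnitude 2-D Fourier spectrum as a flat vector."""
    spectrum = np.fft.fft2(image_2d)
    # Shift the zero-frequency component to the center for a canonical layout.
    magnitude = np.abs(np.fft.fftshift(spectrum))
    return np.log1p(magnitude).ravel()

img = np.random.default_rng(1).normal(size=(16, 16))
emb = fourier_embedding(img)
print(emb.shape)  # (256,)
```

The point of the quoted passage is that such a fixed, generic representation was not discriminative enough, which is why the PCA and autoencoder embeddings are learned from face data instead.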
The tool is aimed at reducing instances of fake profiles that impersonate influential people in order to scam or harm other users.