Deepfakes in age of AI! Worried about photos and videos being manipulated? Know how to be safe

In the age of AI, deepfakes are emerging as the biggest threat to privacy! Know how to protect yourself from fake photos and videos.

By: HT TECH
| Updated on: Dec 25 2023, 10:34 IST
Check out the ways through which you can safeguard yourself from being a victim of deepfakes in the age of AI. (REUTERS)

Over the past few years, humanity has become deeply dependent on technology, relying on devices for both professional and personal purposes. Technologies such as artificial intelligence and machine learning are now shaping how we live and work, driving advances that are changing cultures around the world. While most of these changes are positive, some people misuse these technologies to manipulate and scam others. One such growing concern is "deepfakes" - fake photos, videos, audio and more. There have already been several cases of manipulated videos and images of famous celebrities, created to show them in a poor light or to spread the wrong message.

You too could be targeted and become a victim of deepfakes, which can have serious and disturbing consequences. Therefore, stay vigilant and take the necessary steps to stay safe. Read on to learn what deepfakes are and the right ways to protect yourself.

What is a deepfake?

According to a National Cybersecurity Alliance (NCA) report, deepfakes are artificial intelligence-generated videos, images, and audio that are edited or manipulated to make people appear to say or do things they never did in real life. Deepfakes can be used to defraud, manipulate, and defame anyone, be it a celebrity, a politician, or an ordinary person. As the NCA warns, "if your vocal identity and sensitive information got into the wrong hands, a cybercriminal could use deepfaked audio to contact your bank."


As scary as it sounds, you can always take safety and privacy measures to protect yourself from being a victim of such a heinous crime. Check out how to protect yourself from deepfakes.

Also read: What are Deepfakes? Know how to spot manipulated videos and audio created via AI tools

How to safeguard yourself from deepfakes

1. The first thing you should do is activate all the privacy settings on websites and social media so that strangers cannot access your personal information and content. You can also limit who can see your photos, videos, and other data.

2. Look for telltale cues if you come across a video that seems too good to be true. Check whether the video or image has jerky movements, unnatural lip and eye motion, oddly placed facial expressions, and so on. Refrain from sharing such videos with others so that you do not help spread fake information.

3. Watermark your photos and videos when sharing them on online platforms. A watermark works like a digital fingerprint: it makes manipulated copies easier to trace back to you, so scammers will think twice before tampering with your images or videos. Several apps can add watermarks for you.
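The "digital fingerprint" idea can also be taken literally: a cryptographic hash of a file changes completely if even a single byte is altered, so keeping a record of the hash of your original photo or video lets you later demonstrate that a circulating copy has been tampered with. A minimal sketch in Python using only the standard library (the file path is whatever image you want to fingerprint):

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's exact bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large photos/videos don't load fully into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the digest when you publish the original;
# any edited copy will produce a different digest.
```

Note that a hash detects tampering after the fact but, unlike a visible watermark, it does not deter copying in the first place; the two techniques complement each other.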

4. The most important thing is to educate yourself about deepfakes, generative AI, and how such technologies can be used to trick people. Stay updated on recent advances and the scams taking place around the world.

5. Strengthen your passwords and enable multi-factor authentication across your accounts. It adds an extra layer of security by requiring a second proof of identity, such as a one-time code from an authenticator app, on top of your password. With multi-factor authentication, no one can access your account without your approval, even if they have stolen your password.
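To make the multi-factor idea concrete, here is a rough sketch of how authenticator apps generate those one-time codes, using the time-based one-time password algorithm from RFC 6238. This is purely illustrative - for real accounts, use a vetted authenticator app rather than your own code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the Unix epoch,
    # which is why the code on your phone changes every half minute.
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the last nibble picks 4 bytes of the MAC.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from a shared secret plus the current time, a stolen password alone is useless to an attacker: they would also need the secret stored on your device.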

Follow HT Tech for the latest tech news and reviews, and keep up with us on our WhatsApp channel, Twitter, Facebook, Google News, and Instagram. For our latest videos, subscribe to our YouTube channel.

First Published Date: 25 Dec, 09:57 IST