Adobe MAX 2024: Firefly AI now lets you ‘extend’ shots in Premiere Pro, remove distractions in Photoshop with one tap
Adobe's Firefly is enabling a range of new AI features in applications such as Premiere Pro and Photoshop. Here are the details.
Adobe MAX 2024 is underway, bringing a slew of new announcements across the company's range of products, including Premiere Pro, Photoshop, and more. The annual conference kicked off on October 14 and runs until October 16. A few of the initial announcements stand out; let's take a look at what Adobe has in store for creatives using its portfolio of apps, most of which now benefit from the company's generative AI model, Firefly.
Distraction Removal in Photoshop
Remember Google's Magic Eraser? Distraction Removal, Adobe's latest Photoshop feature, powered by the Firefly generative AI model (users can also choose not to use generative AI), lets users remove distractions from an image, such as people in the background, stray objects, and wires, with a single click. The feature automatically identifies these distractions so they can be removed in one go. For now, it is available only in the Photoshop desktop and web apps.
Generative Extend in Premiere Pro (Beta)
Creatives can have good or bad days at shoots, and when things don't go as planned, they can end up with shots that may not make the cut, whether they end abruptly or start too late. These are errors that typically can't be fixed in the edit. Now, with the power of generative AI, Premiere Pro users can extend clips to cover gaps in audio or video footage, smooth out transitions, and hold shots longer. The feature is available in beta for now.
Text-to-Video (Beta) in Adobe Firefly
Creatives are familiar with the frustration of needing an extra shot during editing, only to realise it's too late to film it or too expensive to buy a stock clip. Now, with Adobe's generative AI, users can generate video from text-based prompts or even from a single frame of their footage. This means you can create B-roll from stills and fill gaps in your timeline. Based on current capabilities, prompts can be highly detailed, specifying camera lens types, shot depth, lighting, and even character descriptions. You can also generate overlay elements such as light leaks. Adobe says generated videos will be commercially safe, as the model is trained on licensed content.
Photoshop Generative Fill Now Uses Firefly Image 3
Adobe's Generative Fill and Generative Expand tools were an instant hit with the creative community, helping users save time and complete projects more efficiently. Now, these features have improved with the introduction of Adobe's latest Firefly Image 3 AI model, which provides more realistic results thanks to its better understanding of text-based prompts.