Nine New AI Features Are Coming to Adobe’s Creative Apps

Adobe, the creator of powerful content creation programs like Photoshop and Premiere, just wrapped up its Adobe Max 2025 keynote, and you know what that means. That’s right: more AI. During the three-hour presentation, the company focused on automating creative processes, unveiling new generative AI tools for Photoshop, Lightroom, Premiere Pro, and other Creative Cloud apps. Some of these are enhancements to existing tools, such as the improved Generative Fill, while others are completely new, such as the AI-powered sound generation feature in Firefly.

Adobe Express can create vibe-based designs

Credit: Adobe

Before we get into the big picture, let’s start with Adobe’s entry-level apps. While Adobe is known for professional programs like Photoshop, the company also has a free, basic web editor (with a mobile app as well) that helps it compete with alternatives like Canva. That tool, Adobe Express, has been updated steadily since its debut in 2015, and with the advent of generative AI, it quickly embraced the trend in a bid to make itself easier to use.

Meet today’s “AI Assistant in Adobe Express.” When enabled via the toggle in the upper-left corner of the app, the Assistant replaces your tools with a chat window where you can ask it to create a new design from scratch or edit an existing one. If you need the tools again, you can bring them back by disabling the Assistant. Adobe’s demos also show the Assistant displaying contextual sliders when needed, such as for resizing.


While this isn’t Adobe Express’s first foray into generative AI, the idea is to make getting started or making quick edits less intimidating, letting inexperienced users spend more time in the chat window rather than clicking through toolbars. Adobe says that, like its other AI tools, it draws on a number of “commercially safe” sources, including the company’s font and stock image libraries, as well as its Firefly AI models.

The tool enters public beta today, so you’ll be able to try it for yourself soon.

Adobe Premiere integrates with YouTube Shorts

Credit: Adobe

Shorts are the next big thing on YouTube, and to encourage more people to create short videos, YouTube is teaming up with Adobe. The new Adobe Create for YouTube Shorts feature, an update to both the Premiere app for iPhone and YouTube itself, lets you upload footage and instantly prepare it for publishing with Adobe font overlays and a range of exclusive effects, transitions, and stickers. You can also drop footage directly into templates with pre-made transitions and effects.

The feature is currently listed as “coming soon,” so you can’t try it just yet. Adobe and YouTube say that once it launches, you’ll be able to access it either through the Premiere app for iPhone or directly in YouTube via the “Edit in Adobe Premiere” icon on Shorts.

There’s no word yet on an Android or desktop version.

Adobe will add sound to your videos for you

Credit: Adobe

When creating a new video, it’s easy to overlook audio, and I’ve often struggled to find the right audio track to add at the last minute. Adobe Firefly AI’s new audio features will help you avoid this fate, making it easier to add music and even narration to silent videos.

Firefly’s new Generate Soundtrack and Generate Speech features, both in public beta today, use AI and a Mad Libs-style prompting system to help you quickly generate audio options for your content.

To create a soundtrack, upload a video, click the corresponding button, and the app will propose a prompt along with a palette of adjectives, genres, and content types to refine it. Drag your chosen terms into the prompt field, click “Generate,” and you’ll get four options, each up to five minutes long.

It’s a bit odd that you can’t simply type your own terms into the prompt box, though Adobe’s head of AI, Alexandru Costin, told The Verge that this is because AI-powered audio is “a new muscle we need to develop” and that the current approach is “simpler and more accessible.”

As with Firefly’s other generative features, audio is generated using Adobe’s own licensed content, so users shouldn’t have to worry about copyright claims on videos made with this feature.

Meanwhile, the “Generate Speech” feature gives users access to over 50 text-to-speech voices, both from Adobe Firefly and licensed from ElevenLabs. There are no Mad Libs prompts here; instead, Adobe allows fine-tuning of parameters such as speed, pitch, tone, and even pronunciation. Over 20 languages are currently supported.

Taken together, these updates strike me as an attempt to keep up with platforms like Instagram and TikTok, which have licensed music libraries and built-in text-to-speech functionality. Whether a purely AI-powered version can keep up remains to be seen, though hosting it in the editor rather than on the platform gives creators more choice about where to upload their content.

Updates in Photoshop, Lightroom, and Premiere

Credit: Adobe

Finally, for Adobe’s most dedicated users, updates are also coming to the company’s core applications.

First, Photoshop is also getting its own AI-powered assistant that can make editing suggestions. Unlike Adobe Express’s, however, it’s still in closed beta, so it will be a while before most users can try it. For now, it’s only available in the web version of the app.

What doesn’t require a beta, however, is choosing which AI models the app works with. Previously, Generative Fill, which uses AI to fill empty space in an image (or simply generate entire canvases from scratch), was limited to Adobe’s Firefly models. Now, users can also run it with Google’s Gemini 2.5 Flash model and Black Forest Labs’ Flux.1 Kontext model. Given Gemini 2.5 Flash’s popularity on social media under the nickname “nano banana,” that’s a significant get for Adobe.

Firefly isn’t being left behind, though. Adobe says it has updated the model, adding native four-megapixel generation and improved rendering of people. The company is also building it into a new “Layered Image Editing” tool, which makes context-aware changes across layers, such as adjusting shadows after an element is moved.

Beyond Photoshop, Lightroom is getting its own closed-beta feature called “Assisted Culling.” Admittedly, Lightroom is the Adobe app I have the least experience with, but the company says the feature will sort through uploaded photos and surface the ones most worth editing.

Finally, Premiere Pro has its own beta feature, this one thankfully public. Called “AI Object Mask,” it automatically detects and tracks people and objects in your video, making it easier to apply effects like blur or color correction to them. That could be useful, for example, if you’re filming in a crowded area and need to blur a lot of faces.

A little something interesting for everyone

Overall, Max was a fairly well-balanced show, with features for both professionals and beginners. The emphasis on AI and automated generation is undeniable, though. On the one hand, I understand that Photoshop can seem a bit intimidating. On the other, the more Adobe automates your work for you, the more it risks competing with the apps and platforms that already make easy editing their pitch. I’m curious to see how the industry giant fares, given that platforms like TikTok and Instagram continue to offer built-in editing tools of their own.
