Adobe Firefly Turns Text or Images Into Realistic AI-Generated Video


A collage of eight images surrounds a red square with a white "A" logo. The images include a fox, cocktails, a person's eyes behind glasses, flowers, a butterfly, police car lights, a dog, and a scenic river with a bridge.

Adobe previewed its generative video tools earlier this year, but details were relatively scarce. Today, Adobe shared much more information, including actual videos created with its Adobe Firefly Video Model.

Adobe says its Adobe Firefly Video Model is designed to “empower film editors and video professionals” with a wide variety of tools to inspire creative vision, fill gaps in a video’s editing timeline, or add new elements to existing clips.

“The Firefly Video Model will extend Adobe’s family of generative AI models, which already includes an Image Model, Vector Model and Design Model, making Firefly the most comprehensive model offering for creative teams,” says Adobe. “To date, Adobe Firefly has been used to generate over 12 billion images globally.”

Arriving in public beta later this year, the new Firefly-powered Text to Video and Image to Video capabilities will come to Adobe Firefly on the web, and some AI features will be built natively into Adobe Premiere Pro, which was updated yesterday with a suite of new color grading tools.

A computer screen displays a video editing interface with a close-up of a person's eyes behind glasses. The editing tools and options are visible on the left side of the screen, featuring settings for image and text input, font style, text size, and other visual effects.

Text to Video lets users generate video clips from simple text prompts. These prompts respond to specific camera-related language, such as angle, motion, and zoom. With Image to Video, users can feed Firefly reference still frames to generate motion clips.

Adobe published numerous AI-generated clips, all of which were created in “under two minutes” using the Adobe Firefly Video Model.

Prompt: Cinematic closeup and detailed portrait of a reindeer in a snowy forest at sunset. The lighting is cinematic and gorgeous and soft and sun-kissed, with golden backlight and dreamy bokeh and lens flares. The color grade is cinematic and magical.

Prompt: Slow-motion fiery volcanic landscape, with lava spewing out of craters. the camera flies through the lava and lava splatters onto the lens. The lighting is cinematic and moody. The color grade is cinematic, dramatic, and high-contrast.

Prompt: Miniature adorable monsters made out of wool and felt, dancing with each other, 3d render, octane, soft lighting, dreamy bokeh, cinematic.

Prompt: Footage of a camera on a drone flying over a desert with wind blowing over the dunes creating waves in the sand below.

Prompt: Drone shot going between the trees of a snowy forest at sunset golden hour. The lighting is cinematic and gorgeous and soft and sun-kissed, with golden backlight and dreamy bokeh and lens flares. The color grade is cinematic and magical.

Prompt: Cinematic closeup and detailed portrait of a man in the middle of a rainy street. the lighting is moody and dramatic. The color grade is blues and teals. the man is extremely realistic with detailed skin texture and stubble on his face. movement is subtle and soft. the camera doesn’t move. heavy film grain and textures. Beads of water slowly rolling down his face.

“Building upon our foundational Firefly models for imaging, design and vector creation, our Firefly foundation video model is designed to help the professional video community unlock new possibilities, streamline workflows and support their creative ideation,” says Ashley Still, senior vice president, Creative Product Group at Adobe. “We are excited to bring new levels of creative control and efficiency to video editing with Firefly-powered Generative Extend in Premiere Pro.”

A chart illustrating Firefly's camera controls: shot size (auto, closeup, extreme closeup, long shot, extreme long shot, medium shot), camera angle (auto, aerial, eye level, high angle, low angle, top down), and camera motion (auto, zoom in, zoom out, pan left, pan right, tilt up, tilt down, static, handheld). Each option is illustrated with an image, primarily using sunflowers in a field.

Adobe notes that the camera control prompts, like angle and motion, can be combined with real video to further augment the look, flow, and feel of content without requiring a reshoot.

Adobe also shared clips that it generated to augment existing real-world footage. The first clip below is original, human-captured footage, while the second was generated using Firefly. The final clip combines the two into a single edited sequence.

Prompt: Detailed extremely macro closeup view of a white dandelion viewed through a large red magnifying glass

“Over the past several months, we’ve worked closely with the video editing community to advance the Firefly Video Model. Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage,” Still writes.

Adobe believes video creators and editors can use its AI technology to address gaps in footage, remove unwanted objects from a scene, smooth out transitions, and create the perfect B-roll clips.

As with Adobe Firefly’s other tools and functions, the Firefly Video Model is designed to be commercially safe and has been trained exclusively using content Adobe has permission to use.

The Adobe Firefly Video Model beta will be released later this year.


Image and video credits: Adobe
