A Dive into Generative Extend in Adobe Premiere Pro
Adobe continually adds impressive features to its software, improving both the quality and the speed of editing. With Adobe Firefly, it has introduced next-generation capabilities that push editing boundaries even further. One such feature, Generative Extend, is currently available in Adobe Premiere Pro (Beta). We tried it out on several videos and identified both the challenges and the best practices for using the feature at this stage.
What is Generative Extend?
Generative Extend predicts frames that weren’t in the original footage. This allows editors to seamlessly extend video length to fill the timeline as needed. This tool extends not only the video but also the audio. However, since it’s still in its early stages, there are some limitations to consider.
- Extends only 2 seconds at a time
- Supports 1920×1080 or 1280×720 resolutions
- Limited to a 16:9 aspect ratio
- Frame rates between 12 and 30 fps
- 8-bit color, SDR
- Mono and stereo audio
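The limits above are simple enough to check programmatically before attempting an extension. As a rough illustration, here is a hypothetical helper (Premiere Pro performs these checks itself; the function name and parameters are our own invention, and the sketch simply encodes the published Beta constraints):

```python
# Resolutions supported by Generative Extend (Beta); both are 16:9.
SUPPORTED_RESOLUTIONS = {(1920, 1080), (1280, 720)}

def clip_is_supported(width, height, fps, bit_depth, audio_channels):
    """Return True if a clip falls within the Generative Extend (Beta) limits."""
    return (
        (width, height) in SUPPORTED_RESOLUTIONS  # 1080p or 720p, 16:9 only
        and 12 <= fps <= 30                       # 12-30 fps
        and bit_depth == 8                        # 8-bit SDR only
        and audio_channels in (1, 2)              # mono or stereo audio
    )

print(clip_is_supported(1920, 1080, 24, 8, 2))   # a typical 1080p clip passes
print(clip_is_supported(3840, 2160, 24, 8, 2))   # 4K footage is rejected
```

A 4K, 60 fps, or HDR clip would need to be transcoded down to these specs before the feature could be applied.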
Challenges and Glitches
To understand the strengths and weaknesses of this feature, we tested it on different types of videos and encountered some glitches along the way.
Test Case 01
Crowded city road with people walking toward the camera. Extended by 4 seconds (feature applied twice).
Glitches: Faces generated in the crowd were not accurate, and defocused background subjects appeared somewhat “oil-painted.”
Test Case 02
Water flowing gently over rocks, creating a small cascade. Extended by 4 seconds.
Glitches: The water looks like a long-exposure photo, and sharpness detail is lost.
Test Case 03
A red bus passing through a crowded city. Extended by 4 seconds.
Glitches: Brand logos were not generated accurately.
Test Case 04
A dog running toward the camera in a green garden.
Glitches: Dog paws weren’t generated correctly, and the fur appeared slightly “oil-painted.”
Successful Test Cases
On the other hand, some tests delivered better results, showcasing the feature’s potential.
Test Case 05
A close-up shot of a man’s face, blinking eyes.
Results: Eye and skin details looked great, though blinking wasn’t generated.
Test Case 06
Close-up of a dog with long hair turning its head.
Results: The dog’s details were well-generated, although playback appeared slightly slower than in the original footage.
Conclusion
As of October 2024, the Generative Extend feature works best on videos with minimal movement, a single subject in focus, and fewer fine details like hair. We’re excited to see Adobe continue developing this feature, aiming to achieve even more realistic and refined results in future updates.
At our podcast editing agency, we stay up-to-date with cutting-edge tools like these, which have a huge impact on production quality. If you’re interested in learning more about how we can enhance your podcast post-production, we’re here to help!