
This video edited itself!



We're not sure it'll give Walter Murch sleepless nights just yet, but in a sign of things to come, Lumen5 offers online tools that create AI-edited videos from scratch, using any article or blog post as a template.

We’re starting a new feature here at RedShark. From now on, we’ll be including in-article web videos, complete with motion graphic text, for each article. To get started with our new format, I dug up David Shapton’s article from last year, "Why isn’t VR more popular?". Take a look! (Don't worry, we aren't really!)

[Embedded video: the AI-edited cut of 'Why isn't VR more popular?']

Now, I have a confession to make. I didn’t edit this video. I didn’t create the text or manipulate the shots in any way. I used the generative video tools available from Lumen5 to produce it. I started by sending the previously published article, just the live URL, to the Lumen5 templates page. The cloud-based analysis program determined which information from the article was relevant and built storyboards based on that content. From there, stock video was harvested and joined with the text.
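
To make that pipeline concrete, here is a minimal sketch of what an article-to-storyboard stage could look like. Everything in it, the stopword list, the frequency scoring, the function names, is my own invention for illustration; Lumen5 hasn't published how its analysis actually works.

```python
# A toy article-to-storyboard stage: pick the most "relevant" sentences
# from an article to become slide text. Purely illustrative; this is not
# Lumen5's actual algorithm.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "of", "to", "and", "in", "that", "for", "with"}

def storyboard_from_article(text: str, max_cards: int = 8) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Rank content words by how often they appear across the whole article.
    freq = Counter(w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS)

    def score(sentence: str) -> float:
        toks = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[w] for w in toks) / len(toks) if toks else 0.0

    # Keep the top-scoring sentences, in their original article order.
    top = set(sorted(sentences, key=score, reverse=True)[:max_cards])
    return [s for s in sentences if s in top]

article = ("VR promised a revolution in how we watch and play. "
           "Adoption has been slower than anyone predicted. "
           "Price, comfort and a thin content library remain the big barriers.")
for card in storyboard_from_article(article, max_cards=2):
    print("-", card)
```

A real system would use proper natural language processing rather than raw word counts, but the shape is the same: score the sentences, select the best, then hand each card to a clip matcher.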

[Image: The Lumen5.com web interface]

Once the video is assembled, shots, represented as boards, can be reordered by dragging them around the web interface. Text can be easily changed, and the accent colours can be adjusted. The timing of each slide can be tweaked, as can the number of panels of text on a single shot. Oddly, there is no drop-shadow tool in the font window, but the brightness of the backdrop can be lowered for better text legibility.

Individual shots can be swapped out either by searching for stock assets straight from the project window or by uploading custom video and graphics.

[Image: The stock assets tab]

While the auto-match algorithm occasionally makes compelling matches between text and video, there are times when the pairings are wildly off. I predict this will improve over time as the system learns from user input: every manual correction, matching text to a shot by hand, compensates for the machine learning's shortcomings. As the stock library grows, and as those assets are tagged with more descriptive metadata, even more specific video content can be generated.
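
The matching step itself is easy to picture. Here is a hypothetical tag-overlap matcher, invented purely for illustration, which also shows why richer metadata helps: more descriptive tags mean a larger possible overlap with the slide text, and therefore better picks.

```python
# Hypothetical tag-overlap matcher: pair slide text with the stock clip
# whose metadata tags best cover the slide's words. Invented for
# illustration; not Lumen5's actual matching algorithm.

STOCK_LIBRARY = {
    "clip_001.mp4": {"vr", "headset", "gaming", "technology"},
    "clip_002.mp4": {"city", "crowd", "street", "people"},
    "clip_003.mp4": {"office", "computer", "typing", "work"},
}

def best_clip(slide_text: str) -> str:
    words = set(slide_text.lower().split())
    # More descriptive tags -> larger possible overlap -> better matches.
    return max(STOCK_LIBRARY, key=lambda clip: len(STOCK_LIBRARY[clip] & words))

print(best_clip("Why isn't the VR headset more popular?"))  # clip_001.mp4
```

In a scheme like this, every time a user swaps out a badly chosen clip, the system gains a labelled example it could learn from, which is exactly the kind of feedback loop that should sharpen the pairings over time.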

I started feeding all kinds of content into the platform to see what would happen, including Hugh Jackman's Wikipedia page and Nintendo's product page for the NES Classic. More than half of the videos generated were accurate enough to publish.


While this all sounds impressive, there are some shortcomings that a professional editor would notice. There is no option to export in finishing-quality codecs, to save out an EDL or XML of the edit, or to export source media with handles. The way you work with text is interesting: there is no need to fumble with layers or insert bounding boxes, and animations are applied automatically. In many ways this is much faster than working with a traditional NLE.

If you take the time to upload your own assets and work within the limited web interface, it is entirely possible to supplement the base set of tools with your own content and publish video destined for web outlets. With a little patience, professional video with slick animations can be produced extremely quickly. For those who need to create video content in quantity for constant social media posting, Lumen5 offers a compelling suite of tools.

Could machine learning products like Lumen5 replace a traditional editor? How far off are we from feeding a script into a web portal, only to have a feature cut itself? Not far at all. As various media outlets build their own asset libraries and license them to each other, it will only be a matter of time before anyone can access decades' worth of professionally produced video and manipulate it easily in a web browser. It will be interesting to see which disruptive new platforms emerge and how the market reacts to them.

