Adobe MotionStream: real-time control over AI video generation

Written by Andy Stout | Apr 13, 2026 10:43:19 AM

Adobe Research has previewed MotionStream, an experimental technology that lets creators interact with AI-generated video in real time — adjusting object movement and camera angles as the clip is generating.

In a paper presented at ICLR 2026, Adobe Research previewed MotionStream, an experimental research technology that lets users interact with AI-generated video while it is being created.

Being able to control object movement and change camera angles in real time using a cursor and sliders is a huge departure from the painful ‘enter text prompt > wait for render > repeat ad nauseam’ loop that dominates current AI video.

The video below shows some of what can be accomplished.

How it works

Rather than starting each new generation from scratch, creators begin with a text prompt and can then click and drag objects to control their movement, adjust the camera position, and choose which elements stay static. Their edits take effect as the video is generating.

The technical foundation relies on what the research team describes as an autoregressive approach. Rather than generating an entire video before delivery (where each frame depends on every other frame), MotionStream generates video in segments: users see the first segment while subsequent segments are produced in the background. The architecture is very similar to chunked streaming technologies, and it is what makes real-time interaction possible and opens the door to user feedback mid-generation, as the sketch below illustrates.
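Adobe hasn't published implementation details beyond the paper, but the control loop can be sketched generically. The Python below is a minimal, hypothetical illustration of chunked autoregressive generation with mid-stream edits, not MotionStream's actual code: `generate_chunk` is a stand-in for the model, and `ControlState` is an invented container for the cursor-and-slider inputs the article describes.

```python
import time
from dataclasses import dataclass, field
from queue import Empty, Queue

@dataclass
class ControlState:
    """Invented container for the user's cursor-and-slider inputs."""
    camera_pan: float = 0.0                           # slider value
    drag_target: tuple | None = None                  # where an object was dragged
    frozen_objects: set = field(default_factory=set)  # elements held static

def generate_chunk(prev_chunk, controls):
    """Stand-in for the model: each segment is conditioned on the
    previous segment plus the latest control state."""
    time.sleep(0.05)  # pretend inference latency
    return {"frames": 16, "camera_pan": controls.camera_pan,
            "follows": None if prev_chunk is None else id(prev_chunk)}

def stream_video(num_chunks, edits):
    """Yield segments one at a time, draining queued user edits
    between segments so they take effect mid-generation."""
    controls = ControlState()
    chunk = None
    for i in range(num_chunks):
        try:
            while True:                 # apply everything the user did
                edits.get_nowait()(controls)
        except Empty:
            pass
        chunk = generate_chunk(chunk, controls)
        yield i, chunk                  # player shows this while the next renders

# Usage: queue an edit as if made mid-playback, then stream four segments.
edits = Queue()
edits.put(lambda c: setattr(c, "camera_pan", 0.4))
for i, chunk in stream_video(4, edits):
    print(f"segment {i}: camera_pan = {chunk['camera_pan']}")
```

In the real system the player would run concurrently while generation continues; the point of the sketch is simply that control inputs are read between segments rather than being fixed before rendering starts.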

The underlying model also handles physics and secondary motion automatically. As Adobe Senior Principal Scientist Eli Shechtman explains: “If you want to move an elephant, for example, you can click and move its body, but it’s a lot of work to manually make those movements look natural. This currently requires skills and specialized software to rig, and animate or keyframe the animation, following a process that typically takes hours, if not days depending on scope.

“Instead, the underlying video generator behind MotionStream is basically simulating the world in real time. So, the elephant’s legs move naturally, and the ears flap naturally as the elephant moves. The model provides you with knowledge about the world and you can interact with it.”

Shechtman sees implications beyond video. If the canvas is always a running video, edits could be applied as smooth transitions and creators could stop mid-transition if they prefer an intermediate, still result.

The research has been published and a public preview is available.