
Your landscape photos can now be animated using deep learning

One of the images used in the new research. Image: Hołyński et al./CVPR.

A new system developed at the University of Washington can take a still image and animate the elements within it using machine learning.

Currently the system is limited to animating flowing material such as water, smoke, and clouds, but the way it handles that motion produces an animated image that can be looped indefinitely. The researchers are set to present their approach at the Conference on Computer Vision and Pattern Recognition on June 22nd.

Lead author of the paper that describes the method, Aleksander Hołyński, a doctoral student in the Paul G. Allen School of Computer Science & Engineering, said, “What’s special about our method is that it doesn’t require any user input or extra information. All you need is a picture and it produces a high-resolution, seamlessly looping video that quite often looks like a real video.”


Image: Hołyński et al./CVPR

How the system approaches the animation is simple to describe, but incredibly complex to implement. At its heart, the system has to work out what the elements of the image are, and then, for the moving elements, predict both the future and the past of each individual pixel. The team trained the neural network on thousands of videos of waterfalls, oceans, rivers and other elements, such as smoke and clouds, that contain fluid motion.

They then had the system predict the motion based only on the first frame of a video and compared the result to the actual footage. This helped the system learn the visual cues that indicate how motion is likely to continue.
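To make that training idea concrete, here is a minimal, hypothetical sketch in PyTorch: a small network predicts a per-pixel motion field from a single frame and is penalised for deviating from the motion observed in the real clip. The architecture, names and loss below are illustrative assumptions, not the team's actual code.

```python
# Hypothetical sketch of the training setup described above: the network sees only
# the first frame of a clip and predicts a per-pixel motion field, which is compared
# against the motion observed in the real video. Shapes and names are illustrative.
import torch
import torch.nn as nn

class MotionPredictor(nn.Module):
    """Toy stand-in for the motion-estimation network: image in, 2-channel flow out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),   # dx, dy per pixel
        )

    def forward(self, image):
        return self.net(image)

model = MotionPredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Dummy batch: first frames and "ground-truth" motion fields which, in practice,
# would be derived from the motion observed in the real training videos.
first_frames = torch.rand(4, 3, 128, 128)
true_motion = torch.rand(4, 2, 128, 128)

optimizer.zero_grad()
predicted_motion = model(first_frames)
loss = loss_fn(predicted_motion, true_motion)
loss.backward()
optimizer.step()
```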

Symmetric splatting

“Symmetric splatting” might not sound like a particularly technical term, but it is in fact the secret sauce that allows the final image to loop seamlessly and indefinitely. If you only predict the future of a pixel, for example in an image of a waterfall, you end up with nothing to replace it once it moves on in the next frame. So the system needs to predict the past of the pixel as well as the future, and then combine the two into a single animation. The system performs a few other tricks as well, such as transitioning parts of the frame at different times and deciding how quickly or slowly to blend each pixel based upon its surroundings.
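As a rough illustration of that blending, the toy sketch below (my own simplification, with a constant motion field and simple pixel shifting standing in for the learned flow and the splatting used in the actual method) shows how mixing a pixel's predicted future with its predicted past makes the final frame line up with the first, so the clip loops.

```python
# Minimal sketch of the symmetric-blending idea, under simplifying assumptions:
# a constant integer motion field and np.roll stand in for the real learned flow
# and splatting. Each looped frame mixes pixels warped forward from the present
# with pixels warped forward from "the past", weighted so the loop closes.
import numpy as np

def shift(image, dy, dx):
    """Crude stand-in for warping: shift the image by an integer displacement."""
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)

def looping_frames(image, motion=(1, 0), num_frames=30):
    frames = []
    for t in range(num_frames):
        # "Future" of the pixels: the image advanced t steps along the motion field.
        future = shift(image, t * motion[0], t * motion[1])
        # "Past" of the pixels: the image advanced (t - num_frames) steps, i.e. the
        # pixels that flow in to fill the gaps the forward pass leaves behind.
        past = shift(image, (t - num_frames) * motion[0], (t - num_frames) * motion[1])
        # Blend so frame 0 is all "future" and the frame after the last is all "past",
        # which is identical to frame 0 again, making the video loop seamlessly.
        alpha = t / num_frames
        frames.append((1 - alpha) * future + alpha * past)
    return frames

frames = looping_frames(np.random.rand(64, 64, 3))
```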


Image: Hołyński et al./CVPR.

There are limitations, such as the fact that the system cannot model reflections, or the realistic distortion of objects below the water’s surface. In other words, the current system needs subjects with predictable fluid motion to work. However, the team would like to extend the system so it can animate elements such as a person’s hair blowing in the wind.

AI or reality?

Apple still includes a Live Photos option in its camera app. In many ways this could be used to capture a loopable image at source, although it is interesting to see where the idea of animated still images goes. The MyHeritage site uses machine learning systems to colourise old photos and even to animate the people within a still image. Are we reaching the point where we put freaky animated images of long-dead relatives around our homes? Maybe I’ll pass on that one. But a full-size, ultra-high-resolution animated image of Victoria Falls could well be an impressive centrepiece for a living room.

You can view some more impressive examples, and get more detail on the technique, on the team's website and in the video below.

