Google Motion Stills adds AR Mode: How it works and the tech behind it
The new AR mode makes Google’s Motion Stills app more fun to use.
Google's GIF-making Android application, Motion Stills, has added a new Augmented Reality mode. The new mode lets users project a virtual item onto any static or moving horizontal surface and create a video or GIF with the AR content interacting with the real world.
This feature is quite similar to the AR stickers rolled out for Google Pixel smartphones. While AR stickers were exclusive to Pixel phones and other devices running Android 8.0 and above, Motion Stills makes the AR mode available on all Android smartphones.
How to use AR mode on Google Motion Stills
Google Motion Stills is a standalone application available on Google Play Store. Download the free application on your phone and give it the necessary permissions.
Now launch the application and switch to 'AR Mode', located next to the Motion Still and Fast Forward features.
After choosing AR Mode, you get access to multiple virtual characters such as a dinosaur, a chicken, a hanging globe and an alien, among others. There are fewer than 10 AR characters available in the application right now.
To create a GIF or a small video clip, just tap the record button.
The technology behind Motion Stills
Google has a fairly complex process behind AR Mode. The new mode is powered by Instant Motion Tracking, a sophisticated system that already powers features such as Motion Text in Motion Stills on iOS and privacy blur on YouTube. Google says it improved the technology to make it usable on any Android device with a gyroscope.
"When you touch the viewfinder, Motion Stills AR "sticks" a 3D virtual object to that location, making it look as if it's part of the real-world scene. By assuming that the tracked surface is parallel to the ground plane, and using the device's accelerometer sensor to provide the initial orientation of the phone with respect to the ground plane, one can track the six degrees of freedom of the camera (3 for translation and 3 for rotation). This allows us to accurately transform and render the virtual object within the scene," Google explains in a blog post.
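The accelerometer step described above can be illustrated with a short sketch. When the phone is roughly at rest, the accelerometer measures the gravity vector, and the device's tilt relative to the ground plane can be read off from it. This is a minimal, hypothetical helper written for illustration, not Google's actual implementation; the function name and axis conventions are assumptions.

```python
import math

def initial_orientation_from_accel(ax, ay, az):
    """Estimate the phone's pitch and roll relative to the ground plane
    from a single accelerometer reading (a device at rest measures the
    gravity vector). Hypothetical sketch, not Google's implementation."""
    # Pitch: rotation about the device's x-axis, from how gravity
    # projects onto the y-z plane versus the x-axis.
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Roll: rotation about the device's y-axis.
    roll = math.atan2(ay, az)
    return pitch, roll

# A phone lying flat on a table reads gravity entirely along z,
# so both tilt angles come out as zero.
pitch, roll = initial_orientation_from_accel(0.0, 0.0, 9.81)
```

This initial orientation seeds the tracker; the remaining degrees of freedom (the camera's 3D translation) are then recovered visually, as the next quotes describe.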
But that's not all. Google also elaborates how its Instant Motion Tracking technology works.
"First, we determine the 3D camera translation solely from the visual signal of the camera. To do this, we observe the target region's apparent 2D translation and relative scale across frames. A simple pinhole camera model relates both translation and scale of a box in the image plane with the final 3D translation of the camera," Google explains.
"The translation and the change in size (relative scale) of the box in the image plane can be used to determine 3D translation between two camera positions C1 and C2. However, as our camera model doesn't assume the focal length of the camera lens, we do not know the true distance/depth of the tracked plane," it adds.
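The relationship Google describes can be sketched in a few lines. Under a pinhole model, if the tracked box grows by a relative scale factor between frames, the camera's distance to the plane must have shrunk by that same factor, and the box's 2D shift maps to lateral camera motion at that depth. Because the true depth is unknown, it is fixed arbitrarily, so the recovered translation is only defined up to scale. This is an illustrative sketch under those assumptions, not Google's code.

```python
def camera_translation_up_to_scale(du, dv, scale_change, d1=1.0):
    """Recover the 3D camera translation between two frames from the
    tracked box's 2D shift (du, dv, in normalized image units) and its
    relative scale change, under a pinhole model. The true depth of the
    tracked plane is unknown, so d1 is fixed arbitrarily and the result
    is defined only up to scale. Hypothetical sketch."""
    d2 = d1 / scale_change  # box appears larger => camera moved closer
    tz = d1 - d2            # motion along the optical axis
    tx = du * d2            # lateral motion implied by the image shift
    ty = dv * d2
    return tx, ty, tz

# If the box doubles in apparent size with no 2D shift, the camera has
# moved straight toward the plane by half the (arbitrary) initial depth.
tx, ty, tz = camera_translation_up_to_scale(0.0, 0.0, 2.0)
```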
Google said it has also added scale estimation to its existing tracking system, along with the ability to keep tracking a region outside the camera's field of view. So whenever the camera moves closer to the surface, the AR character is automatically scaled up.
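The scaling behavior follows directly from perspective projection: an object's apparent size on screen is inversely proportional to its depth, so halving the distance to the surface doubles the character's on-screen size. A minimal sketch of that rule, with a hypothetical helper name:

```python
def rendered_size(base_size, reference_depth, current_depth):
    """Apparent on-screen size of the AR character under a pinhole
    projection: inversely proportional to its depth from the camera.
    Illustrative helper, not part of the Motion Stills API."""
    return base_size * (reference_depth / current_depth)

# Moving the camera from 2 m to 1 m from the surface doubles the
# character's rendered size.
size = rendered_size(100.0, 2.0, 1.0)
```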