VLOGGER AI – New Image-To-Video Model from Google AI

Updated on April 5, 2024

Google’s researchers have recently been working on a very interesting project: VLOGGER AI. Coming off the back of Google’s game-playing AI agents, this project takes a still image and turns it into a controllable avatar. VLOGGER AI can generate a high-resolution video of a person talking based on a single photograph.

More importantly, VLOGGER AI can animate the video according to a speech sample, producing a life-like, high-fidelity “avatar”. VLOGGER AI promises unmatched versatility in video synthesis and is on the cusp of changing how we make and consume video content.

In this article, we explore the features and technology behind Google’s VLOGGER AI, look at its benchmarks, and compare it with HeyGen, a popular image-to-video tool (we use HeyGen for our own video content).

What is Google VLOGGER AI?

VLOGGER AI is a new generative AI tool developed by Google AI that produces photorealistic animated video from a single image. These videos showcase a highly realistic appearance, accurately representing the individual in each frame of the generated video.

Additionally, users can input a person’s image, and VLOGGER AI produces temporally cohesive videos of that person speaking, complete with facial expressions, hand gestures, and natural head movements. VLOGGER can also generate head motion, blinking, and lip movements synchronized to an audio input.

Unlike other image-to-video models that crop to the face or the lips and ignore the rest of the image, VLOGGER AI refines every frame in full, which makes its AI avatars more realistic and human-like.

How VLOGGER AI Works

Image: VLOGGER-generated videos

At its core, VLOGGER AI operates on a two-stage pipeline, using stochastic diffusion models (generative models that produce outputs by gradually denoising random noise) to bridge the gap between speech and video. We unpack how these models work later in the article.

The first stage takes an audio waveform as input and processes it to generate intermediate body motion controls. These controls govern the gaze, facial expressions, and pose of the target subject throughout the video.

The second stage employs a temporal image-to-image translation model, extending large image diffusion models, to generate the corresponding video frames. To ensure the video reflects a specific identity, the model also takes a reference image of the person as input.

Input Processing: The pipeline starts with the user providing an audio clip and a single image. The image is the snapshot that serves as the avatar’s starting point, and the audio drives its motion and emotion.

3D Motion Generation: The audio and the image are passed through the 3D motion-generation stage. Here the model predicts multilayered motion vectors for the face, body, pose, gaze, and expressions, treating the still image as the first frame and using the audio as the guide.

Temporal Diffusion Model: Building on the 3D motion generation, the timing and movement of the avatar across frames are worked out by the temporal diffusion model. These subtle motions produce realistic movement that stays in sync with the input audio.

Upscaling and Final Output: Finally, the generated frames are upscaled, so the output video preserves quality and the reproduced images remain detailed and realistic. The full flow is sketched in code below.
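To make the flow concrete, here is a minimal Python sketch of the two-stage pipeline described above. VLOGGER has no public code release, so every class and function name here is a hypothetical placeholder, and the bodies are stubs that only mirror the shape of the data moving through the stages.

```python
import numpy as np

class AudioToMotionDiffusion:
    """Stage 1 (hypothetical stub): per-frame 3D motion controls from audio."""
    def predict_controls(self, waveform, num_frames):
        # A real model would run a stochastic diffusion process here;
        # we return random placeholder controls of the right shape.
        return np.random.randn(num_frames, 128)

class TemporalVideoDiffusion:
    """Stage 2 (hypothetical stub): temporal image-to-image rendering."""
    def render(self, reference_image, controls):
        # Each frame is conditioned on the identity (reference image)
        # and that frame's motion controls; here we just echo the input.
        return [reference_image for _ in controls]

def vlogger_pipeline(reference_image, waveform, fps=25, duration_s=4.0):
    num_frames = int(fps * duration_s)
    controls = AudioToMotionDiffusion().predict_controls(waveform, num_frames)
    frames = TemporalVideoDiffusion().render(reference_image, controls)
    return frames  # a super-resolution / upscaling pass would follow
```

The key design point the sketch captures is the split of responsibilities: stage one reasons only about motion from audio, and stage two reasons only about rendering pixels from motion plus identity.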

Also Read: How to Use Google’s MusicFx

Benchmarking VLOGGER AI

Image: VLOGGER benchmark results

VLOGGER AI’s capabilities are not limited to user-facing applications. The researchers evaluated VLOGGER in careful experiments against three different benchmark criteria, on which it scored much better than existing state-of-the-art methods.

Image Quality: VLOGGER generated high-quality, photorealistic frames.

Identity Preservation: Unlike previous methods, VLOGGER faithfully conveys the visual characteristics of the speaker, so he or she stays recognizable throughout the video.

Temporal Consistency: VLOGGER is built for smooth, natural transitions that keep the video flowing in step with the audio, paying particular attention to matching lip and facial movements to the speech. A sketch of how such metrics can be measured follows this list.
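Google has not released its evaluation code, so as an illustration of what these benchmarks measure, here is a minimal sketch of two plausible metrics, with helper names of our own invention: identity preservation scored as embedding similarity, and temporal consistency scored as frame-to-frame pixel change.

```python
import numpy as np

def identity_preservation(ref_embedding, frame_embeddings):
    """Mean cosine similarity between a reference face embedding and the
    embedding of each generated frame; higher = identity better preserved."""
    ref = ref_embedding / np.linalg.norm(ref_embedding)
    frames = frame_embeddings / np.linalg.norm(
        frame_embeddings, axis=1, keepdims=True)
    return float((frames @ ref).mean())

def temporal_consistency(frames):
    """Mean absolute pixel change between consecutive frames;
    lower = smoother video with less flicker."""
    frames = frames.astype(np.float32)
    return float(np.abs(np.diff(frames, axis=0)).mean())
```

In practice, `ref_embedding` and `frame_embeddings` would come from a pretrained face-recognition network, and published evaluations typically add perceptual scores such as FID on top of simple measures like these.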

Much of this performance comes down to the training data. VLOGGER was trained on the MENTOR dataset, which is roughly an order of magnitude larger than its predecessors: 2,200 hours of video and 800,000 identities for training, plus another 120 hours and 4,000 identities held out for testing. We will talk about the MENTOR dataset further in the article.

Google VLOGGER AI vs Heygen

| Aspect | VLOGGER AI | HeyGen |
| --- | --- | --- |
| Facial Expressions | Uses a stochastic human-to-3D-motion diffusion model to predict facial expressions accurately from the input audio signal. | Offers high-quality voiceovers and subtitles, but expressions can sometimes appear robotic or machine-generated. |
| Temporal Coherence | Employs a super-resolution diffusion model and a temporal inpainting approach to maintain consistent motion sequences. | Generates videos in real time without rendering delays. |
| Image Quality | Conditions the video generation process on 2D controls representing full-body features, resulting in high-quality videos. | Customizable avatars let users choose appearance, age, gender, race, and hairstyle. |
| Facial Detail | Uses generative human priors acquired during pre-training to improve the capacity of image diffusion models. | Does not match VLOGGER AI’s level of per-frame facial detail. |

All About MENTOR Dataset

VLOGGER AI is trained on a carefully curated dataset: a large collection of videos of people speaking, with their mouths visible as they talk (a big plus for lip synchronization). The philosophy is to teach the model the deep correlations among audio, text, and visual characteristics that make a talking-head video look real.

One of VLOGGER’s bigger contributions is this new dataset, called MENTOR, built specifically for AI models that create talking-human videos.

As a novel dataset, MENTOR complements existing datasets with more variability in ethnicity, age, and speaking styles. This diversity matters: it is what lets VLOGGER AI create authentic, inclusive videos that reflect reality rather than stereotypes.

Also Read: Google Gemma – What is it and how to use it?

Understanding Stochastic Diffusion Models

Stochastic diffusion models operate by gradually transforming a simple, initial state into a complex, final state through a series of random steps. This process is inherently stochastic, meaning it involves a degree of randomness that allows for the generation of a wide variety of outputs from the same input.

In the context of VLOGGER AI, this stochasticity is used to generate a multitude of photorealistic videos for the same subject, showcasing the diversity in hand poses, facial expressions, and overall body motion.
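The following toy Python loop shows the shape of this process. It is a generic reverse-diffusion sketch, not VLOGGER’s actual model; the step count and noise scale are arbitrary illustration values.

```python
import numpy as np

def reverse_diffusion(denoise_fn, shape, num_steps=50, seed=None):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)      # simple initial state: pure Gaussian noise
    for t in reversed(range(num_steps)):
        x = denoise_fn(x, t)            # learned model nudges x toward real data
        if t > 0:
            # fresh noise injected at every step: this is the stochasticity
            x = x + 0.1 * rng.standard_normal(shape)
    return x                            # complex final state, e.g. an image
```

Calling `reverse_diffusion(lambda x, t: 0.9 * x, shape=(8, 8), seed=0)` and the same call with `seed=1` returns two different samples from identical conditioning, which is exactly the diversity of poses and expressions described above.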

Key Features of Stochastic Diffusion Models in VLOGGER AI

Generative Capabilities: VLOGGER’s stochastic diffusion models can create many different videos from the same input, varying hand poses, facial expressions, and overall body motion in ways that reflect people’s uniqueness. Sampling the model repeatedly from the same image still yields noticeably diverse compositions.

Personalization and Identity Preservation: Beyond synthesis from a single (monocular) image, VLOGGER AI can fine-tune its diffusion model on more data of a subject to portray that person’s features more accurately.

Sample Diversity and Temporal Inpainting: VLOGGER AI’s stochastic nature lets it produce multiple distinct movements and animations from the same input audio or text. This is most useful in video editing, where VLOGGER AI can inpaint selected regions of each frame, such as the face or the mouth, to change an expression or the direction of the eyes across the whole video; see the sketch after this list.
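As a rough picture of the temporal inpainting idea, here is a minimal sketch of mask-based editing across frames. The names are hypothetical, and in VLOGGER the regeneration happens inside the temporal diffusion model rather than in a separate function.

```python
import numpy as np

def inpaint_video(frames, mask, regenerate):
    """frames: (T, H, W, C) video; mask: (H, W) boolean, True where we edit.
    `regenerate` stands in for the diffusion model's inpainting pass."""
    edited = frames.copy()
    for t in range(len(frames)):
        new_content = regenerate(frames[t], mask)  # e.g. a new mouth shape
        edited[t][mask] = new_content[mask]        # everything else is kept
    return edited
```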

Also Read: Google Genie – A challenger to OpenAI’s Sora

Conclusion

Google’s VLOGGER AI is intended to take content creation to a whole new level, and it has the potential to shape the future of human video synthesis and reduce the need for on-camera appearances. That said, much like other generative AI software and image-to-video apps, VLOGGER AI is meant to simplify and streamline processes for content creators, not replace them.

We use HeyGen for our video content, but we see room for improvement in both its quality and its usability. We can’t wait to get our hands on Google’s VLOGGER AI and see its magic in our own content creation journey.
