One of the most interesting sessions at Adobe MAX is traditionally the Sneaks keynote, where engineers from the company's various units show off their most cutting-edge work. Sometimes, those turn into products. Sometimes they don't. These days, a lot of the work focuses on AI, often based on the Adobe Sensei platform.

This year, the company gave us an early look at Project Sweet Talk, one of the featured sneaks of tonight's event. The idea here is pretty straightforward, but hard to pull off: take a portrait, either a drawing or a painting, identify the different parts of the face, then animate the mouth in sync with a voice-over.

Today, Adobe's Character Animator (which you may have seen on shows like The Late Show with Stephen Colbert) does some of that, but it's limited in the number of animations, and the result, even in the hands of the best animators, doesn't always look all that realistic (as far as that's possible for the kind of drawings you animate in the product).

Project Sweet Talk is far smarter. It analyzes the voice-over and then uses its AI smarts to realistically animate the character's mouth and head.

https://techcrunch.com/wp-content/uploads/2019/11/wilk_output_1.mp4

The team, led by Adobe researcher Dingzeyu Li, together with Yang Zhou (University of Massachusetts, Amherst) and Jose Echevarria and Eli Shechtman (Adobe Research), fed their model thousands of hours of video of real people talking to the camera on YouTube.

Surprisingly, that model transferred really well to drawings and paintings, even though the faces the team worked with, including pretty basic drawings of animal faces, don't really look like human faces. "Animation is hard and we all know this," Li told me. "And we all know that if we want to align a face with a given audio track, it is even harder. Adobe Character Animator already has a feature called 'compute lip sync from scene audio,' and that shows you what the limitations are."

The existing feature in Character Animator only moves the mouth, while everything else remains static. That's obviously not a very realistic look.

If you look at the examples embedded in this post, you'll see that the team smartly warps the faces automatically to make them look more realistic, all from a basic JPG image.

https://techcrunch.com/wp-content/uploads/2019/11/cat_output_1.mp4

Because it does this face warping, Project Sweet Talk doesn't work all that well on photos. They just wouldn't look right, and it also means there's no need to worry about anybody abusing this project for deepfakes.

"To generate a realistic-looking deepfake, a lot of training data is needed," Li told me. "In our case, we only focus on the landmarks, which can be predicted from images, and landmarks are sufficient to animate animations. But in our experiments, we find that landmarks alone are not enough to generate a realistic-looking [animation based on] photos."

Chances are, Adobe will build this feature into Character Animator in the long run.

Li also tells me that building a real-time system, similar to what's possible in Character Animator today, is high on the team's priority list.
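For readers curious how the pieces fit together, the description above maps onto a simple loop: extract per-frame features from the voice-over, predict small displacements for each facial landmark, and warp the whole portrait around the displaced landmarks. The sketch below is purely illustrative and is not Adobe's code: the audio-to-motion "model" is a fixed random matrix, the landmarks and portrait are invented, and scikit-image's piecewise-affine warp stands in for whatever warping Project Sweet Talk actually uses.

```python
# Hypothetical sketch only -- not Adobe's model. It illustrates the pipeline the
# article describes: audio features -> predicted landmark displacements -> a
# piecewise-affine warp of the portrait around those landmarks.
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

rng = np.random.default_rng(0)

# Stand-in "portrait": a 128x128 grayscale gradient instead of a real JPG drawing.
portrait = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))

# Invented facial landmarks (eyes, nose tip, mouth corners) as (row, col) pixels.
src_landmarks = np.array(
    [[40, 40], [40, 88], [64, 64], [96, 48], [96, 80]], dtype=float
)

# Fixed random weights stand in for the parameters a real model would learn
# from thousands of hours of talking-head video.
_W = rng.normal(scale=0.2, size=(13, src_landmarks.size))

def predict_landmark_offsets(audio_features):
    """Stand-in for the learned audio-to-motion model: maps one frame of
    audio features to small (row, col) shifts for each landmark."""
    return (audio_features @ _W).reshape(src_landmarks.shape)

# One frame of fake audio features (stands in for e.g. MFCC coefficients).
audio_frame = rng.normal(size=13)
dst_landmarks = src_landmarks + predict_landmark_offsets(audio_frame)

# Pin the image corners so only the face region deforms, then estimate the
# piecewise-affine warp. skimage expects (x, y) = (col, row) coordinates, and
# warp() wants the inverse mapping (output -> input), hence dst -> src.
corners = np.array([[0, 0], [0, 127], [127, 0], [127, 127]], dtype=float)
tform = PiecewiseAffineTransform()
tform.estimate(
    np.vstack([dst_landmarks, corners])[:, ::-1],
    np.vstack([src_landmarks, corners])[:, ::-1],
)
animated_frame = warp(portrait, tform)  # one output frame of the "animation"
print(animated_frame.shape)  # (128, 128)
```

A real system would run this once per audio frame and smooth the landmark trajectory over time; per Li's comments above, the hard part is learning displacements that look realistic, which is where all that YouTube training footage comes in.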




