We need to talk
Auroratrek
Posts: 218
By "we" I am speaking on behalf of the Daz characters. I was hoping the big announcement was going to be an upgrade to Daz Studio that included lip sync, but it's yet another generation of Daz characters that are pretty to look at, but that you can't use for character animation because there's no decent lip sync capability in Daz Studio. I wouldn't care, but the Daz characters are so great, and only get better; it's like somebody offering you a beautiful sports car that you can't drive.
Comments
With machine learning and StyleGAN, I think 3D animated lip sync will soon be obsolete, TBH
video by me using https://huggingface.co/spaces/HarlanHong/DaGAN on a Midjourney render
Deepfake technology is moving very fast
I recall hearing that part of the benefit of having the mouth as a separate object was specifically lip-syncing. You just gotta wait for the right vendor.
I totally agree. I have yet to see anything out there that works as well as Mimic or its subset in 32-bit DAZ Studio. If what I've heard is true, DAZ only licensed the 32-bit version for D|S and so couldn't use the 64-bit version in newer versions of D|S. They used the 64-bit version for Carrara and for their Lightwave plug-in, but not for DAZ Studio, where it's needed the most. I have no idea if it's still possible to license the 64-bit version, but if it is, it should have been done years ago.
I currently still have one of my iMacs running Mojave simply for the 32-bit version's lip syncing. I'd like to be able to use one machine for everything, especially since what I'm doing is so simplistic. I've only made one real animation test last year when I had a couple of days off, but now that I'm retired I'm starting to dig into it a little more and am still disappointed with my options.
If, in fact, a separate mouth is going to be used for a new lip-syncing mechanism in -- I'm guessing -- DAZ Studio 5 (due...?), will said mechanism still work for older characters, and will it do more than just sync the mouth? Mimic at least does some slight head movement and eye blinks.
Extremely annoying to me is that a cheap plug-in for Reallusion's Cartoon Animator software uses your computer's webcam for face tracking (a more refined tracking is available with a more expensive plug-in that requires a newer iPhone, like some options offered here in the store), while Adobe's Character Animator uses your computer's webcam for not only face tracking, but some body tracking as well.
@WendyLuvsCatz
Thanks for the info. Results are more than ok imo:
https://streamable.com/xpl11h
https://streamable.com/yeruau
I haven't looked into pricing yet, or whether it can do big videos, but I'm guessing there are apps besides the Colabs
I do know many phone and iPad apps use the technology
I was hoping for an update to the software, like new animation features and improvements for Animate2
Besides the iPhone-based Face Mojo, there are these options,
but we shall have to wait and see what changes have been made to the face rig of Genesis 9.
The face motion capture technologies are getting better, but they don't help for recorded and edited audio.
The moment we have a render engine integrated in DAZ Studio that can keep up in quality and speed with the Unreal 5 engine, we can also talk about animating in DS and make demands.
Until then, animation in DS is mostly useless. Good luck waiting for your film strip with a 25-minute Iray render per frame.
The only option and solution I see is to further improve DAZ bridges, most likley to Blender.
I haven't done many tests with animations, but Iray renders don't take 25 minutes per frame if one chooses the settings right - mine take 30 sec/frame on my 3060 12GB
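The gap between those two per-frame figures compounds quickly over a whole clip. A back-of-envelope sketch (the 30 fps frame rate and one-minute clip length are my own arbitrary assumptions; the per-frame times are the ones quoted in this thread):

```python
# Rough render-time budget for a one-minute animation at 30 fps.
# Clip length and fps are example assumptions, not from the thread.

FPS = 30
CLIP_SECONDS = 60
frames = FPS * CLIP_SECONDS  # 1800 frames total

def total_hours(seconds_per_frame: float) -> float:
    """Total wall-clock render time in hours for the whole clip."""
    return frames * seconds_per_frame / 3600

print(total_hours(25 * 60))  # 25 min/frame -> 750.0 hours
print(total_hours(30))       # 30 sec/frame -> 15.0 hours
print(total_hours(45))       # 45 sec/frame -> 22.5 hours
```

So even a 30 sec/frame setup means overnight renders for a short clip, but it's at least feasible, where 25 min/frame is a month of GPU time.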
Good point, I use nothing but pre-recorded audio
I recently bought a one-year subscription to an AI voice generation service, mostly for voice narration for my tutorials, but a lot of the voices will be acceptable even for speaking characters in films.
@catmaster
Uhhmm... no, they won't.
Even if the Blender Foundation sold Blender (not likely),
it would not retroactively invalidate the FOSS licenses of every Blender user on earth.
That means any person or group could continue to develop, update, and distribute Blender branches forever, and Epic would have zero control of it. They would thus have a very difficult time monetizing it beyond selling content & addons in competition with the Blender Market, Gumroad, etc.
There is an Unreal LiveLink plugin for Blender:
https://github.com/Viga-Entertainment-Technology/Unreal-LiveLink-for-Blender
https://vigaet-my.sharepoint.com/:w:/p/shreyas/Eab3ieXYF_JDvMs_51-H3osByFEwrzTcrqj8wMJMO95DOA
Update: This plugin does not work for UE5
Tested with Unreal 5.0.3 and Blender 3.3 / Blender 2.93 / Blender 2.83
The link is established and the green dot is on; however, the skeletal mesh in UE5 does not follow the animations, and the output log in the LiveLink window shows some errors.
The plugin should work with UE4 and older Blender versions, as demonstrated in their YouTube video. This plugin has not been updated for months, and questions on the plugin page haven't been answered.
What picture quality do you get at 30 sec/frame?
None of my renders has ever taken only 30 seconds
Adobe pulled a product that mimicked a human speaker. That is, you sampled a speaker or musician -- say, Ozzy, from interviews and songs -- and then you could record another voice, like your own, saying something, and it would emulate that sampled voice of Ozzy's over yours. Like I said, they pulled that, and questions abound on why.
Was that an actual product? I do remember a live event a few years ago where they would sample enough of someone's voice and then use it with a text-to-speech program to have it say anything in that person's voice. This is a very similar demo:
https://www.youtube.com/watch?v=0fO7CBDMGNA
Okay, I'll ask it simply -- how do Pixar and Disney Animation pull this stuff off? What goes into their speech-to-lip-sync engines that hobbyist-targeted applications simply cannot afford to reproduce?
800x600px, max samples 500, single figure, no denoising. If I add an interior scene it takes about 45 sec/frame. Rendering straight to AVI.
Doing a freeze frame on the video, one sees that the picture is not as sharp as when doing static renders, but as in TV, since one sees a single image for just 1/30th of a second, the movement hides the lower quality and the video looks pretty good when running.
Edit: Preset attached
Lighting is with three spotlights, one in front (35deg lighting angle) two in the back (90deg lighting angle), about 1200 Lumen each (not included in the preset)
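For anyone rendering to a numbered image series instead of straight to AVI, the frames can be assembled afterwards with ffmpeg. A minimal sketch that just builds the command (the filename pattern, frame rate, and codec choices here are my own example values, not anything from this thread):

```python
import shlex

# Build an ffmpeg command to assemble a numbered PNG sequence
# (frame0001.png, frame0002.png, ...) into an H.264 MP4.
# All filenames and settings below are example assumptions.
cmd = [
    "ffmpeg",
    "-framerate", "30",      # input frame rate
    "-i", "frame%04d.png",   # numbered image sequence pattern
    "-c:v", "libx264",       # widely supported codec
    "-pix_fmt", "yuv420p",   # pixel format many players require
    "out.mp4",
]
print(shlex.join(cmd))  # paste into a terminal, or run via subprocess.run(cmd)
```

Rendering to an image series also means a crashed render can resume from the last finished frame, which straight-to-AVI can't do.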
Very skilled animators who adjust any automatic settings.
IMHO, modern audiences are used to 1920x1080 at a minimum
What is your hardware setup?
Not interested in what 'modern' audiences want, as I'm just doing it for myself.
Current system: i7-5820K, X99, 64GB, 3060 12GB, running W7 Ultimate.
I do 20 iterations a frame with the denoiser set to 19,
path length 8,
filter Mitchell.
It flies, mostly, depending on the scene;
if there are no emitters, skip denoising and use dome/sky only
They don't use lip sync technology; they hand key it by shooting reference video of the performance, blocking it out accordingly, and then making it more animation-styled in the blocking-plus, spline, and polish phases. I'm super excited as I just started class four at Animation Mentor last night, and this class is teaching facial expressions and hand-keying dialogue. I'll still use facial mocap for my job, but for personal projects I'm going to be able to hand key it all. I've been surprised to find out from my teachers that mocap is not used as often as we would think for animation; it's still hand keyed.
@benniewoodell
Speaking of lipsyncing,
The newest commit of Diffeo has added the ability to bake animated morphs to animated shape keyframes!
This gives us the ability to use Daz Mimic basic, Anilip2, iPhone, and MOHO .dat files on our Daz Genesis figures in Blender AFTER re-rigging the body with Auto-Rig Pro.
So no more struggling with the ARP face rig --
just import your facial animation from your preferred Daz-native method, or use the built-in Diffeo MOHO importer along with the free Autolipsyncotizer app.
Exciting times !!
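For anyone curious what those MOHO .dat files actually contain: as I understand the Papagayo-style export, it's just a header line followed by one frame/phoneme pair per line, so it's easy to inspect or post-process yourself. A rough parser sketch (the format details here are my reading of Papagayo's output, so double-check against your own files):

```python
def parse_moho_dat(text: str) -> list[tuple[int, str]]:
    """Parse Papagayo/MOHO switch data: a 'MohoSwitch1' header line
    followed by 'frame phoneme' pairs, one per line."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith("MohoSwitch"):
        raise ValueError("not a MOHO switch file")
    keys = []
    for ln in lines[1:]:
        frame, phoneme = ln.split()
        keys.append((int(frame), phoneme))
    return keys

sample = """MohoSwitch1
1 rest
5 AI
9 MBP
"""
print(parse_moho_dat(sample))  # [(1, 'rest'), (5, 'AI'), (9, 'MBP')]
```

The frame numbers are keyed to the frame rate you set when doing the breakdown, so make sure it matches your Blender scene's fps before importing.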
@wolf359 no kidding? That's awesome! I have to download the new version then and give it a whirl. Thanks for the heads up, exciting times indeed!
I really don't understand the big fuss here, to be honest. New generations can also mean more discounts for those who can truly get the best out of things. I mean... although I am very happy and impressed with Genesis 8.x, I'm also more often than not still using Genesis. One? The first gen? Yes.
As for animation... With all due respect to Daz3D (Daz Studio is my favorite; all my other stuff is meant to complement it)... still, I'm not too sure about using Daz Studio for animation. Just as I favor ZBrush over Hexagon any day of the week, even though you will never see me talking badly about Hexagon, because that critter is a seriously impressive editor, not to mention the seamless "integration" with Daz Studio. But with all due respect... it's not ZBrush either.
Animation in Daz Studio is a side-product, definitely not an afterthought but it's not a primary goal. And that's ... nearly always an issue.
In case it wasn't obvious: ZBrush is also a personal favorite of mine, and one of its features is "PolyPaint"; basically having the ability to paint directly onto a mesh and obviously baking the maps as well. I've had some really fun results with that while working on basic items.
The thing is: as impressive as it is (!) it's not even coming close to what I can pull off with Substance 3D Painter.
And let's make one thing really clear: PolyPaint in itself is amazing; you can truly get some really awesome results from it. Just like the animation options in Daz Studio (mostly Animate2? together with Filament (I love that engine!)) can get you solid results as well... but that doesn't make it a full-blown animator.
Why would you even assume as much?
That's very good news. I just used DAZ's 32-bit Mimic to lip sync just over 10 minutes of audio (58 voice tracks), saving each one as a partial Pose Preset (it took 85 minutes), allowing me to apply them as partial poses in 64-bit D|S to any Genesis 3 figures (I've done the same with Genesis 8 earlier this month). It works beautifully, applying them to characters without changing any of the characters' other animation, so two people walking side-by-side can now have a conversation without breaking stride.
Thanks for the update! Haven't started moving to Blender yet, but knowing this is done really makes things look a little brighter.
@wsterdan
Just be aware that Diffeo is only for the Windows version of Blender.
Ah, right, foiled again!
Thanks!
Daz used to offer Mimic Pro long ago and it worked awesomely. They pulled it a long time ago and I just noticed that I can't find it in my purchases anymore either.
Most of them also have mirrors at their desks and use their own faces to help.
Additionally, contrary to Masterstroke's beliefs, they use RenderMan, which is far from real time.
I have no need for realtime rendering. I have my base scenes dialed in for speed rendering in Iray and it works fast enough - especially considering the output. If I could get the same results from a real time engine, I'd definitely use it. But I'd never let something like that stop me from making motion pictures.
Getting a bit back on track, PhilW has an animation course for Carrara and he shows how he animates lip sync by hand and it turns out pretty nice.
It's a big bummer that some license thing prevents Daz 3D from making a 64-bit version of Mimic, but at least we have it: it works pretty darned well, it's free, and we can save the results as DUF and work with them in 64-bit Daz Studio Pro.