AI for posing?
striker_675ae4d997
Posts: 0
I know AI seems to be creeping into all creative endeavors, sometimes to the detriment of the creative process. What I think would be interesting to see is an AI system that helps a creator pose a model, and perhaps light it creatively and realistically. The AI would remain a tool rather than a be-all-end-all power tool, the way Auto-Tune became for the recording industry until it went back to being something that helped a singer who was having a bad day or had over-stressed their voice. It bothers me when I see what is obviously AI-created content claimed as artist-created, when the artist's creative contribution was "give me a picture of this."
Post edited by Richard Haseltine on
Comments
Moved to Daz Studio Discussion as it is not a Scripting topic - I assume this is for Daz Studio; if not, please edit the first post and move it to Art Studio or the Commons.
Openpose + stable diffusion
Curious how to use OpenPose with DAZ characters - are there articles or tests on this?
I guess someone could write a script - after all, Plask AI uses a form of it to create BVH from videos.
https://github.com/CMU-Perceptual-Computing-Lab/openpose for those scripty people
Thanks, looks like there is a lot of research here.
I did play with the CMU OpenPose project and it looks pretty cool. I was able to extract OpenPose output from videos: basically a series of files, one per frame of the video, that map to figure pose keypoints. The next step would presumably be to retarget the OpenPose output to a Genesis figure. I haven't done that before, so I'm not even sure it's possible (at least without going through something else first, like Blender).
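For anyone curious what those per-frame files look like: running OpenPose with its `--write_json` flag produces one JSON file per frame, where each detected person has a flat `pose_keypoints_2d` array of (x, y, confidence) triples. Here's a minimal sketch of reading those files back into per-frame keypoint lists - the function name and directory layout are my own, but the JSON structure is OpenPose's documented output format:

```python
import json
from pathlib import Path

def load_pose_frames(json_dir):
    """Load OpenPose --write_json output: one JSON file per video frame.

    Each file holds a "people" list; "pose_keypoints_2d" is a flat
    [x0, y0, conf0, x1, y1, conf1, ...] array (25 joints for BODY_25).
    Returns a list of per-frame keypoint lists for the first detected person.
    """
    frames = []
    for path in sorted(Path(json_dir).glob("*_keypoints.json")):
        data = json.loads(path.read_text())
        if not data.get("people"):
            frames.append(None)  # no person detected in this frame
            continue
        flat = data["people"][0]["pose_keypoints_2d"]
        # regroup the flat array into (x, y, confidence) triples
        frames.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return frames
```

A retargeting tool would then walk these frames and map each joint index to the corresponding bone on the target skeleton.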
Assuming it is, rather than integrate with Daz scripting (since the backend that runs the pose generation has a lot of external dependencies), it would be an interesting side project to create a little standalone application that you could drop images or videos into and have it generate Daz-compatible pose or animation files.
The first step would be to convert the OpenPose output to BVH, because there are already tools that can retarget BVH to other skeletons.
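To see why BVH makes a good intermediate format: it's just a plain-text joint hierarchy followed by one line of channel values per frame, which is what retargeting tools parse. Below is a toy writer for a single animated root joint - real converters like video2bvh emit a full skeleton with rotation channels, so this only illustrates the file structure, not an actual conversion:

```python
from pathlib import Path

def write_minimal_bvh(path, positions, frame_time=1 / 30):
    """Write a toy BVH file: one root joint animated with the given
    (x, y, z) position per frame. Illustrates the HIERARCHY/MOTION
    layout that BVH retargeting tools consume."""
    lines = [
        "HIERARCHY",
        "ROOT Hips",
        "{",
        "  OFFSET 0.0 0.0 0.0",
        "  CHANNELS 3 Xposition Yposition Zposition",
        "  End Site",
        "  {",
        "    OFFSET 0.0 1.0 0.0",
        "  }",
        "}",
        "MOTION",
        f"Frames: {len(positions)}",
        f"Frame Time: {frame_time:.6f}",
    ]
    # one line of channel values per frame, in CHANNELS order
    lines += [" ".join(f"{v:.4f}" for v in p) for p in positions]
    Path(path).write_text("\n".join(lines) + "\n")
```

A real OpenPose-to-BVH converter would lift the 2D keypoints to 3D, solve joint rotations, and emit the full BODY_25-style hierarchy - that's the hard part the linked projects handle.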
Here's something you can look at:
https://github.com/KevinLTT/video2bvh
Another:
https://github.com/FORTH-ModelBasedTracker/MocapNET
Ignore the mocap stuff, but see the section further down:
Higher accuracy with more work deploying Caffe/OpenPose and using OpenPose JSON files
"In order to get higher accuracy output compared to the live demo which is more performance oriented, you can use OpenPose and the 2D output JSON files produced by it. The convertOpenPoseJSONToCSV application can convert them to a BVH file."