Remixing your art with AI


Comments

  • WendyLuvsCatz Posts: 37,945

    Artini said:

    I increased the scaling on the shorts to 300%, I think, and still got poke-through in Daz Studio.

    The AI treatment was much easier to handle, and it still got some guidance from the DS render.

    It is also possible to get digitigrade feet in AI, if one desires.

     

    with the cat you can edit the density maps and save copies, but I was actually thinking something naughty and thought the AI might have done a NSFW interpretation of that poke-through

    I was not criticising your render; that's easily fixed in post anyway

  • Artini Posts: 9,073

    I have used the XL version and I am happy with the results.

    Still searching for the best ways to incorporate DS renders with AI generated images.

     

  • HLEET_3D Posts: 172
    edited August 2023

    That's a really cool subject. I wish that AI could have a special place on this forum (like a sticky post).

    Anyway, let's get to AI. I haven't yet tried to mix both worlds (AI + Daz), but it's a good idea ^^.
    When I first tried playing with Stable Diffusion, the results were quite ugly to be honest... 3 legs, 3 hands... a bad dream come true, lol.
    Furthermore, the interface of Automatic1111-webui is not very friendly.
    Then I retried it a month ago with ComfyUI, and with websites that provide checkpoints/safetensors for fanart and anime (just google it; I don't know if I'm allowed to link them here).
    I was mind-blown by the results! I also learned that you shouldn't go beyond 512x512 pixels for the base generation, which means you need some skill with latent upscaling to get a high-res image (a rough sketch of that two-stage idea is below).
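    Something like this is the two-stage idea, as a rough sketch using the Hugging Face diffusers library. The checkpoint, prompt and strength are just examples, and it uses a simple resize plus an img2img pass to approximate a latent upscale rather than a true hires-fix:

```python
# Rough sketch of the "512 base image, then upscale" workflow with diffusers.
# The checkpoint and prompt are placeholders; any SD 1.5 model should behave similarly.
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # example SD 1.5 checkpoint
txt2img = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# 1) Generate the base image at 512x512, where SD 1.5 checkpoints behave best
base = txt2img("fantasy forest clearing, soft light", width=512, height=512).images[0]

# 2) Resize the base image up and run img2img over it at modest strength,
#    so the model adds detail without changing the composition
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
hires = img2img("fantasy forest clearing, soft light",
                image=base.resize((1024, 1024)),
                strength=0.45).images[0]
hires.save("hires.png")
```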

    Okay, now with Daz: I see that you guys use ControlNet OpenPose to get the pose you want, plus image-to-image and inpainting with prompts to make things more realistic and hide imperfections like hands or faces.
    But what about a depth ControlNet? You know, the shadow-like map you feed in to tell the AI which area of the whole picture the subject should occupy.
    Maybe that would be a way to change the whole background while keeping the subject. And with inpainting, maybe the lighting on the subject itself could be corrected to reflect the background.
    I will give this theory a try (a rough sketch of the idea is just below); if I get good results, I'll post them here too :)
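    As a rough, untested sketch of that depth idea using the diffusers library (the depth estimator and model IDs are examples, and the filenames are placeholders):

```python
# Rough sketch of the depth-ControlNet idea: estimate a depth map from a Daz
# render, then let SD redraw the picture (new background in the prompt) while
# the depth map keeps the subject where it is.
import torch
from PIL import Image
from transformers import pipeline as hf_pipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

render = Image.open("daz_render.png").convert("RGB")  # placeholder filename

# 1) Depth map of the render -- the "shadow version" that marks where the subject sits
depth = hf_pipeline("depth-estimation")(render)["depth"].convert("RGB")

# 2) SD 1.5 + depth ControlNet: the prompt describes the new scene,
#    the depth map constrains the layout
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth",
                                             torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

out = pipe("the same character standing in a rainy neon-lit alley at night",
           image=depth, num_inference_steps=30).images[0]
out.save("new_background.png")
```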

    Post edited by HLEET_3D on
  • Artini Posts: 9,073

    Mixing a character with an AI-generated background. Used: https://www.daz3d.com/baby-barry

  • me too

    Filament render with AI backgrounds 

  • RenderPretender Posts: 1,034
    edited September 2023

    I thought I'd share a few tests with a V4-based character, run through an SD model with custom settings (Paragon, in this case), and then composited to preserve hands and feet. No ControlNet. The compositing is the only thing that allows me to retain compositional control and achieve consistency with outfits, hair, and facial features. Additionally, because the AI "knows" what proper human anatomy looks like, it appears to be able to identify the notorious shortcomings in the V4 platform, and correct them quite convincingly. More realistic skin tones are achieved too, which I view as a huge plus.
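    A minimal sketch of that kind of compositing step, assuming Pillow and a hand-painted mask (the filenames here are placeholders, not my actual workflow files):

```python
# Minimal compositing sketch with Pillow: paste the original Daz hands/feet
# back over the AI-processed image. White areas of the mask keep the render's pixels.
from PIL import Image

daz = Image.open("daz_render.png").convert("RGB")
ai = Image.open("ai_version.png").convert("RGB").resize(daz.size)
mask = Image.open("hands_feet_mask.png").convert("L")  # white = keep Daz pixels

# Where the mask is white, take pixels from the Daz render; elsewhere keep the AI image
merged = Image.composite(daz, ai, mask)
merged.save("merged.png")
```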

    Perfect-V4-Roxanne - Iray-merge.png
    704 x 1024 - 287K
    Perfect-V4-Roxanne - Iray 2-merge.png
    704 x 1024 - 390K
    Perfect-V4-Roxanne - Iray 3-merge.png
    704 x 1024 - 393K
    Post edited by RenderPretender on
  • not remixing but I wanted to share it anyway 

    I am intending to do something similar in 3D if I can get more realistic cat heads and paws on my characters 

    (sorry Oso3D, RAWart, WillDupre, yours are too anthropomorphic)

     

  • This is

     

  • Fauvist Posts: 2,044

    If you are using AI to express yourself, you're not going to get much personal satisfaction.

  • WendyLuvsCatz Posts: 37,945

    Fauvist said:

    If you are using AI to express yourself, you're not going to get much personal satisfaction.

    I got as much satisfaction loading Genesis 8 and the aniblock I used to make her walk as I did running my render batch image-to-image through Stable Diffusion

    make of that what you will

  • Dim Reaper Posts: 687
    edited November 2023

    Only just found this thread - some interesting things here.

    I've been trying to combine AI-generated scenery with figures in Daz Studio.  Some success so far, but I haven't been able to get 360 panoramic images to work.  For now, I'm having to stick to using the AI backgrounds as environments and placing the Daz figures there.

    I posted this in the Commons thread as part of the discussion about combining AI backgrounds with Daz content, but it looks like some people are not happy about images being posted there:

    test town render 2-800.jpg
    800 x 600 - 113K
    Post edited by Dim Reaper on
  • WendyLuvsCatz Posts: 37,945
    edited November 2023

    yeah, probably my fault

    videos 

    moved from other thread 

    Post edited by WendyLuvsCatz on
  • mwokee Posts: 1,275
    cgidesign said:

    Fun fact:

    I tried a local instance of Stable Diffusion and got an image with a "Shutterstock" watermark in it. That did not come from the prompt, but seems to be part of the AI training base model. The question now is:

    The AI companies say "you can legally sell the AI images", but on the other hand they use copyrighted material in their training data, which you cannot sell without permission. In some cases you are not even allowed to use images from stock sites at all without permission (e.g. with only a personal-use subscription plan or the like). But if I now publish in the Daz gallery or sell somewhere, wouldn't this potentially already be a copyright-infringement issue?

    As far as I know Getty and Shutterstock already removed AI images from their sites because of that.

    But it is really fun using those tools. The images below are from the Midjourney beta Discord server (no remixing here, though).

    The attached images are of higher resolution.

    I'm using Firefly because I subscribe to Adobe CC and by default have the license to sell images. Adobe supposedly uses images only from their own stock library and will be giving contributors some compensation, so I should be in a good place commercially. I find other AI providers produce better-quality images than Firefly because they use images from the entire internet, but I would have to pay extra to use those other sites commercially, so I'm sticking with Firefly for now. My agency will not accept AI with faces because of model-release issues, so I'm using live models that I've photographed, or 3D figures, to get around that. Do your homework and follow any rules in place before selling commercially.
  • WendyLuvsCatz said:

    yeah, probably my fault

    videos 

    moved from other thread 

    Thank you for reposting them - I was definitely enjoying looking at other people's work.  Personally, if I'm not interested in something then I just scroll past, but each to their own.  Glad that this thread has lots of examples of both AI-generated art and hybrid AI/Daz work to learn from. 

  • mwokee said:

    I'm using Firefly because I subscribe to Adobe CC and by default have the license to sell images. Adobe supposedly uses images only from their own stock library and will be giving contributors some compensation, so I should be in a good place commercially. I find other AI providers produce better-quality images than Firefly because they use images from the entire internet, but I would have to pay extra to use those other sites commercially, so I'm sticking with Firefly for now. My agency will not accept AI with faces because of model-release issues, so I'm using live models that I've photographed, or 3D figures, to get around that. Do your homework and follow any rules in place before selling commercially.

    Interesting point about the model-release issues.  I did use Firefly for a while, but when they introduced the tokens (or whatever they are calling them) I stopped, because it takes too many iterations of an image to get it just right.  I am finding Easy Diffusion to be a fantastic tool once you get the hang of the prompts (the negative prompts seem to be more important in many cases).  It produces some great images in a very short space of time. 

  • Nyghtfall3D Posts: 765
    edited November 2023

    Decided to experiment with SD last week to learn more about the tech and was gobsmacked by what it did with a few of my Daz renders.  I did find its Achilles heel though, in that AI-enhanced figures and objects can't be reproduced for other 3D projects.  Coupled with what I also learned about why it has trouble with faces and hands, any interest I might've had in a potential future with an AI-assisted workflow completely evaporated.

    Nevertheless, while I still have ethical and creative issues with pure, AI-generated art using nothing more than text-to-image prompts, my position on AI has definitely softened with regard to using apps like SD to enhance one's own artwork.

    On the left is my Daz portrait render of Van Helsing 9.  The AI version is on the right.  The Denoising Strength was set to 4.
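    For anyone wanting to script that kind of pass: if the front end is Automatic1111's webui started with the --api flag, an img2img call can be made over HTTP. A rough sketch follows (filenames and prompt are placeholders; note that the webui's denoising strength is a 0-1 slider, where around 0.4 keeps most of the original render):

```python
# Sketch of an img2img call against a local Automatic1111 webui started with
# the --api flag. Filenames and the prompt are placeholders.
import base64
import requests

with open("van_helsing_daz.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "portrait of a grizzled monster hunter, photorealistic",
    "negative_prompt": "blurry, deformed hands",
    "denoising_strength": 0.4,   # 0-1: lower keeps more of the original render
    "steps": 30,
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload).json()

with open("van_helsing_ai.png", "wb") as f:
    f.write(base64.b64decode(resp["images"][0]))
```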

    Van Helsing 3D.jpg
    1280 x 1657 - 808K
    Van Helsing AI2.jpg
    1280 x 1656 - 1M
    Post edited by Nyghtfall3D on
  • Artini Posts: 9,073

    Looks like a great enhancement.

     

  • Artini Posts: 9,073
    edited November 2023

    Trying to figure out how to use or recreate such images in Daz Studio.


    chipmunk09.jpg
    1024 x 1024 - 480K
    Post edited by Artini on
  • Artini Posts: 9,073
    edited November 2023

    Another one...


     

    owl08.jpg
    1024 x 1024 - 599K
    Post edited by Artini on
  • Artini Posts: 9,073
    edited November 2023

    It would take me ages to recreate something similar in Daz Studio...


     

    Woodpecker02.jpg
    1024 x 1024 - 447K
    Post edited by Artini on
  • LindaB Posts: 169

    Here's Mada's Gorgon with a stable diffusion background.

    Gorgon Portrait.jpg
    656 x 900 - 565K
  • LindaB said:

    Here's Mada's Gorgon with a stable diffusion background.

    Very nice work.  I'm finding background generation to be a great timesaver. 

  • LindaB Posts: 169

     

    Dim Reaper said:

    LindaB said:

    Here's Mada's Gorgon with a stable diffusion background.

    Very nice work.  I'm finding background generation to be a great timesaver. 

    It's great for generating ideas, too.

  • Artini Posts: 9,073

    Looks very nice.

    I am still experimenting with how to combine an AI background with a Daz Studio render,

    and how to make the background blend naturally with the render (a simple colour-matching trick is sketched below).
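    One simple trick worth trying is histogram matching: pull the render's colours toward the background's palette before compositing. A rough sketch with scikit-image (filenames are placeholders):

```python
# Match the render's colour distribution to the AI background before compositing,
# so the figure picks up the background's palette. Filenames are placeholders.
import numpy as np
from skimage import io
from skimage.exposure import match_histograms

render = io.imread("daz_figure.png")[..., :3]        # foreground render (RGB)
background = io.imread("ai_background.png")[..., :3]

# Push the render's per-channel histogram toward the background's palette
matched = match_histograms(render, background, channel_axis=-1)
io.imsave("daz_figure_colour_matched.png", matched.astype(np.uint8))
```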

     

  • Carrara background image series batch-processed with Stable Diffusion, Iray foreground bunny (a rough sketch of the batch step is below)
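    A rough sketch of that kind of batch img2img pass over a rendered frame sequence, assuming the diffusers library (folder names, checkpoint and prompt are placeholders; re-seeding each frame with the same value keeps the AI look a bit more consistent between frames):

```python
# Batch img2img over a rendered frame sequence (e.g. Carrara or DS output).
import glob
import os
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "stylised meadow, soft morning light"
os.makedirs("frames_out", exist_ok=True)

for path in sorted(glob.glob("frames_in/*.png")):
    frame = Image.open(path).convert("RGB")
    generator = torch.Generator("cuda").manual_seed(1234)  # same noise every frame
    out = pipe(prompt, image=frame, strength=0.35, generator=generator).images[0]
    out.save(os.path.join("frames_out", os.path.basename(path)))
```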

  • Artini Posts: 9,073
    edited November 2023

    Interesting approach, Wendy.

    I am still experimenting with Shap-E generated props.

    They look promising to me, but they require at least UV mapping to be used with regular Iray shaders in Daz Studio.

    I do quick UV mapping in Blender, mostly box projection (a small script for that step is below).

    The best option would be to retopologize such props.

    I have an old version of ZBrush; I will see if I can still install it.
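    For the box-projection step, a small Blender (bpy) snippet like this can be run from the Scripting tab (the object name is a placeholder for the imported prop):

```python
# Quick box-projection UV pass in Blender.
import bpy

obj = bpy.data.objects["ShapE_Prop"]       # placeholder name for the imported prop
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

bpy.ops.object.mode_set(mode='EDIT')       # UV operators work on the edit-mode selection
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.cube_project(cube_size=1.0)     # the quick "box projection" mapping
bpy.ops.object.mode_set(mode='OBJECT')
```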

     

    Post edited by Artini on
  • WendyLuvsCatz Posts: 37,945
    edited November 2023

    Artini, do they have vertex colours? ZBrush should display them.

    The one I used did, but I didn't bother saving the crappy messes.

    Post edited by WendyLuvsCatz on
  • Diomede Posts: 15,088

    WendyLuvsCatz said:

    I am so glad it didn't misinterpret your pokethrough as a happy boy cat

    Had to go back and look again.

  • Artini Posts: 9,073

    Yes, they have vertex colors when exported as .FBX.

    Daz Studio cannot read such files directly, so I convert them to .OBJ in Blender.
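    A minimal Blender (bpy) sketch of that FBX-to-OBJ round trip (paths are placeholders; this assumes the newer OBJ exporter in Blender 3.x, and whether the vertex colours survive depends on the Blender version and exporter settings):

```python
# FBX -> OBJ round trip in Blender. Paths are placeholders.
import bpy

bpy.ops.wm.read_homefile(use_empty=True)                       # start from an empty scene
bpy.ops.import_scene.fbx(filepath="/path/to/shap_e_prop.fbx")  # prop with vertex colours

# Export everything as OBJ for Daz Studio (vertex colours may or may not survive,
# depending on the exporter)
bpy.ops.wm.obj_export(filepath="/path/to/shap_e_prop.obj")
```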

     
