AI is going to be our biggest game changer
Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
To her earlier point, I started out with a background in traditional clay sculpture and model making for special f/x. As I got older, it just became too much of a hassle to store all the supplies needed for that work (not to mention the chemicals) so I switched to Zbrush and digital sculpting along with morph making for g3 and g8. How I sculpt in Zbrush is basically the same as in clay. Block out, add mass, smooth, detail, smooth, add creases, add clay, smooth, refine, etc. I still have to look at the "piece" from every angle to check for balance and symmetry.
Are there differences? Yes, on both sides. I could feel a lot with my fingers sculpting in clay, I can't with a mouse and pen. But I also didn't have a symmetry switch to cut my sculpting time in half and also reduce symmetry errors.
If you want a way more famous example of the above, look at Rick Baker. He basically does everything in Zbrush now. Obviously he didn't start with CG modeling when working on Schlock and An American Werewolf in London, but he leverages Zbrush heavily now.
Daz can be a base to build your character from the same way you have a rigged wire model in model making to build your clay sculpture on. It's the same analogy. You can also go out and buy a pre-made unpainted model and spend hours painting it. You still added to it, but obviously you used a lot more "pre-made assets" than the person who sculpted from the wire figure model.
Somewhere in here, someone said the AI is actually like a human brain. I had to think about that a little bit, and the answer is, so what? Just because you can create a simulacrum of a human brain doesn't mean you're getting anything like a human. Humans make art from their experience of the world around them, from the inputs of their bodies, from their passions, from feelings of which the AI knows nothing. All the AI can do is reswizzle whatever is fed to it. It doesn't have any frame of reference for the images it makes. It's just swizzling and regurgitating endlessly. Humans go, "ooh, look: The computer made something." It's 50 million monkeys and for all the bad answers, it gets some good answers, at least according to the humans picking what they like from the mass of garbage spit out by the algorithm.
If that's really the future, you can have it - if you can survive having it. In case you didn't know, the carbon footprint of AI is enormous. And what AI does may well not be in the best interests of humans or humanity. Here's a nice article mentioning how an AI came up with 40,000 potential nerve agents in about 6 hours: https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx
Maybe an image-processing program like On1 would provide such help? IDK. I have only a very old copy of their program. They send me a lot of junk mail touting their AI-driven image manipulation capabilities, and they swear the results are photo-real. So I doubt their program will care if your image came from a 3D program.
I suspect that the big loser will be truth. Most of us can still tell the difference between a photograph and a painting, i.e. "truth" vs "opinion". When that line blurs and opinions are malevolently presented as truth, in a medium that we accept as truth, we've got a problem. Yeah, yeah, that line is kinda blurry already in various ways, and yeah, photographs can already be used to lie, but once you start putting out undetectably fake videos of famous/important/powerful people supposedly saying things that they never said, then we've got a problem. I'm 74, but I'll wager that before I shuffle off this mortal coil I'll see a major scandal or disaster caused by that problem. But I don't care anymore. You young'uns just carry on doin' what you're doin'. It's your world now. I've had my say.
"AI" is a wide field. It may go without saying, but just to ensure that's clear.
The generative magicky stuff will probably not help too much at this stage, though it can output something useful for compositing images.
(Later on larger and likely cloud-based systems, maybe...)
The technology behind the currently discussed generative AI systems, however, has capabilities that will enhance photo-real contexts too, namely:
- Upscaling. Better and faster upscaling of images.
- Inpainting. Believably inserting another image or object into an existing image. Clouds, a balcony, whatever. This is for the context of having both images ready, but not wanting to fiddle around with combining them in a believable way for too long.
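To make the upscaling bullet concrete: classical upscalers only interpolate between existing pixels, while the newer learned upscalers synthesize plausible detail on top. As a point of contrast, here is a minimal pure-Python sketch of the classical baseline (bilinear interpolation on a grayscale image stored as a list of rows) - not any particular product's method, just the arithmetic the learned systems improve on:

```python
def upscale_bilinear(img, factor):
    """Upscale a 2D grayscale image (list of rows of floats) by an
    integer factor using bilinear interpolation."""
    h, w = len(img), len(img[0])
    new_h, new_w = h * factor, w * factor
    out = [[0.0] * new_w for _ in range(new_h)]
    for y in range(new_h):
        for x in range(new_w):
            # Map the output pixel back into source coordinates,
            # clamping at the image border.
            src_y = min(y / factor, h - 1)
            src_x = min(x / factor, w - 1)
            y0, x0 = int(src_y), int(src_x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = src_y - y0, src_x - x0
            # Blend the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bottom * fy
    return out

# A 2x2 checker pattern upscaled to 4x4.
small = [[0.0, 1.0], [1.0, 0.0]]
big = upscale_bilinear(small, 2)
```

This inevitably produces blur between known pixels; trained upscalers (ESRGAN-style models and similar) are, roughly speaking, taught to replace that blur with believable texture, which is where the "better" in "better and faster upscaling" comes from.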
On the tooling side a vast range of interesting tools will likely emerge, also for photo-realism. E.g. better denoising is already an application for more "classic" machine learning systems, but so are DOF and other aspects. On the more magic side, we might see easier and faster realistic inpainting into 2D as if it were 3D, e.g. a tattoo onto a character in a 2D image (I assume such will be pretty much in range, but it's not the language-model ploy, though you could end up with tools you can talk to, in terms of describing where and how to inpaint something, in the medium run). Some generated stuff might already be fit for some kind of collage, e.g. as artwork to be put inside a photo-real image, or as a background (...).
This ^ And what happens when the AIs get hacked? Should be fun.
Like with the inspiration question: we haven't even understood the human brain well enough to model it, even simply. We don't really have high-resolution-enough scans of brains, especially not in real time. We're playing puzzle games (with educated guesses and some scientific backup, though).
So "like the brain"... yes, somewhat similar processes happen with machine learning as we witness within the brain, but the brain does much more than happens within those AI systems, so the comparison literally says nothing (of use).
Concerning the generative aspect, I suspect we have gotten closer to modeling some aspects of the visual processing of brains in general. However, the text interface and the 2D input+output are incredibly flat, and the system is static in comparison to actual brains (human or not). Not only is the real-time design missing, but also the aspects of dimensionality (3+), time, and space. So comparing to the (human) brain is very bold. Of course such and other systems can do some things better than humans can. So if people suspect that we have actually modeled creativity here, I must say that might be similarly bold.
In reminiscence of the Battlestar Galactica remake, I'd say a toaster can do some things better than humans do, and as long as we don't give toasters rights and assign all sorts of glory to them, we shouldn't do that with these generative AI systems either.
To me 3D rendered art falls into this uncomfortable territory between the AI-generated "good enough and fast" and the entirely hand-painted "total control over what you get, depending on your skill and level of practice". It's neither of these two, and that is its biggest weakness. Render times can be long, and feeding photos to the AI can give you photorealism too. If you build your own assets for renders? Then it takes even longer. If you don't? Then you don't have full control over what's in the image; you're in the same ballpark as the AI-generated, but the AI is faster.
In comparison, professionals making art for a living have painting times measured in tens of minutes; if they go longer, then it's really polished down to every detail. Learning and practice make a painter faster. Without the bottleneck of render times followed by postwork.
Then there's the thing with professional artists training AIs on their own assets and using this to speed up their work.
I must say I'm at a loss as to where Daz and Poser are going to fall on this spectrum in terms of utility.
It can't be avoided that this will hit text hard soon-ish. While right now only big tech seems to be in the game, others will follow, and it might get "cheap-ish" with even more specialized hardware. Maybe the currently happening chip wars already foreshadow a bit of what will happen. Some people may have a ChatGPT-like system in "no time", but without the nice filtering (or instead use ChatGPT while circumventing filtering), to create all sorts of even more believable "content", likely somewhere in the range between ads, manipulation, propaganda, incitement, ...
The problem is that while manipulative text generation gets much better, way faster, and cheaper, defending doesn't become easier by nature. I see two main routes:
1. People consult fact-checkers and "the good bot" like ChatGPT for validity checking, giving those more weight. That same path has dangers concerning information control. A system like ChatGPT might fail here, though, because training frequency will probably be too low to keep up with "facts" injected into the discourse. So the chat system is always behind, but is also consulted by the "attackers" in the same way, similar to the antivirus paradigm.
2. People fact-check by digging up information. Do they now, and how does that work in reality? People don't have more time to check everything, which might flush many towards the scenario described in 1.
This is still a static picture; we might "soon-ish" have information systems that keep up in real time, just with less magic (compared to full-image generators and the answer-everything bot). Perhaps more neutral fact-checking gets easier, because the tool will be very good at digging up stuff and relating sources somehow. But that's no guarantee, as attempts (e.g. by governments) to keep out disinformation might also hit a neutral system, rendering it essentially biased and less fit (and so on)...
The software clearly falls onto the "utility" side ;). But that's not how DAZ is making their money (correct me if I'm wrong).
So enhancing Studio with all sorts of tools, AI and non-AI, as well as more interconnection with other tools and services, might allow them to keep the value of assets up to some extent.
Suppose a cloud service attacks by bringing in a smarter variant of generation, e.g. with its own assets and inpainting, plus description-based character and scene construction (future-ish, not trivial). That would be a pretty hard attack, as it might also allow for animation and movie rendering, by the nature of the advancements necessary to provide such.
So it could happen that the DAZ user base shrinks towards those who want more or full control, e.g. using characters both in stills and animation, probably alongside games. That's tough, though. If only big tech can provide the killer tools, we'll probably witness a human downfall in creativity, disguised as an explosion of creativity :), before it stalls. If smaller players can build generative/inpainting/whatnot systems, then there at least is a good chance to combine high/est quality assets with any type of tool to great effect - thus we may end up with DAZ providing different services in the end, e.g. generation based on all assets vs. cheaper generation based on your own assets vs. full control (just buy assets and do your stuff like you do now), with faster scene/animation/... creation due to improved tools.
Hard to tell, nothing is certain.
There's always the possibility that we could set up two separate AIs (unarmed) and get them arguing and plotting against each other till one of them shows us the way to defeat them.
Can we talk about the entertainment value of AI, like in movies like this?
Harold Finch
To elaborate: the limits concerning the depth of what current systems like ChatGPT (plural!) can do might already make them susceptible to "malicious campaign text forgery" (pick your own words!), because ChatGPT will not understand content crafted towards being checked by ChatGPT; thus the "AI" is tricked into answering something useless, or even endorsing the position, or at least instilling uncertainty. Only some kind of filtering, with whatever other kind of detection/"AI" in the background, will give some remedy there short-term. Relying blindly on the sanity of such filters would not be very smart, but in case of widespread adoption, it could become a limitation of society as a whole.
(How can it pass xyz examinations without having "some depth"? This may plausibly be explained by the knowledge represented by the training data, say the internet, actually containing all the steps of the solution, for all stages of solving, for many types of problems.)
(Limitations and the training-data horizon... images and text alike, though images should be easier to obfuscate than text... still, if the text system is to understand and reproduce the whole web, it will inevitably have to be fed generated or mixed content. However, being able to detect generated text quite well would allow marking training data with the likelihood of its having been generated. So in the long term, text-based systems might have more longevity to them.)
(Is there an impact of image generators on the market already? -> Hard or impossible to tell short-term. Stock photos, and those harassed by AI copycats going to court first, may be an indicator of where the impact is highest initially. Photorealistic higher-res... not really yet? So rather the fantasy or artwork type of stock images first, probably. For sales of PAs on DAZ to be affected, people would have to drop off DAZ, or the customers of the PAs' customers would have to drop off in numbers, before it gets noticed at all. So I assume there will be some delay, once it happens. Still, it may be too early to tell how far the generation of images will lead. While you may think it'll improve linearly, or ~'exponentially' because it's new, this may not actually be the case, neither from the structure of the system nor simply from "more input data". Further, you need quite widespread adoption to impact the market, and copyright remains a big question mark. Without copyright on such images, even with commercial use allowed, selling them means that some of the buyers might need a new protection/business model too, thinking of a computer game. No issue for some folks, but for others it could be. We'd have faster content-creation spirals in some fields. It could be that more specific tools for animation and texturing have more impact earlier on, should they become easy to integrate and cheap-ish, though those probably don't affect as much of the market. Here copyright is crucial too - if it applies, PAs will also be able to create fantastic :) textures in a whim and flood the store with them out of sheer necessity. If copyright doesn't apply, probably only selling old and highest-quality items, as well as synergetic items/tools which make stuff easy to apply within a software, will remain rather untouched. In addition to quality, the "who pays what" question for generated content remains as well.
It's not fully certain that extremely widespread use happens if you only get 2 tries on a prompt for a few cents, because it'll add up too, and you may end up better off getting a stock image generated by a PA with AI, for a few more cents :p. As a PA I wouldn't jump off the cliff right now...)
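The "it'll add up" point can be made concrete with some back-of-the-envelope arithmetic. Every number below is purely hypothetical (the per-try price, retry count, and image count are made up for illustration, not quoted from any actual service):

```python
# Hypothetical numbers, for illustration only.
price_per_try = 0.02      # assumed cost of one generation attempt, in dollars
tries_per_keeper = 25     # assumed attempts before a usable image comes out
keepers_needed = 10       # usable images needed for a small project

total = price_per_try * tries_per_keeper * keepers_needed
print(f"${total:.2f} for {keepers_needed} usable images")  # prints "$5.00 for 10 usable images"
```

Under those made-up assumptions the per-image cost is already in the range of a cheap stock asset, which is the comparison being made above.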
I put an old render into AI and got this...
Oh boy, this is something else for sure... I could see myself running all my renders through AI now.
I do this mostly
as well as photographs
my AI videos are mostly img2img batch-processed image series renders using OpenGL or Filament
Well, my art normally falls into no-man's land... too much paintover and postwork for the 3D render purists and too much 3D for the artist purists, so I have no problem with AI as a tool - and it's already given me lots of fresh ideas.
I see what you did... a foreshadowing of all the art districts worldwide!
I'm loving it at the moment, but bluejaunte poses an interesting question.
This is a very interesting topic. Personally I like the idea of AI helping us do our thing. I'm mostly interested in things like upscaling (like I render 1024x768 and AI upscales it to 4k or something), or AI filling gaps when rendering animations (I'd just render 5 images per second and AI fills in the missing frames to get a smooth animation). Stuff like that would be an enormous time saver in my workflow, and I hope Daz includes these kinds of things in Studio. I think these kinds of services are already available on the web, but I haven't had time to test any of them yet. Anybody got any good recommendations for these kinds of things?
Also, I'm sure image manipulation works nicely (like we can see in lots of images here and on the web) for backgrounds and characters in single images, but if you are doing a game or a cartoon with several images in the same location, is AI able to "remember" locations and characters? Like, if there's a fighting scene with 10 images in a cyberpunk back alley, is the AI able to "remember" that specific back alley and the characters for every image in the series? It would be confusing for the reader/gamer if the appearance of characters and backgrounds keeps changing for every image.
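On the frame-filling idea above (render 5 images per second, let software fill in the rest): the crudest possible baseline is a plain linear cross-fade between neighboring frames. Dedicated interpolation tools estimate optical flow and actually move pixels, which is why they look far better, but this pure-Python sketch (frames as flat lists of grayscale values) shows the basic in-betweening arithmetic they improve on:

```python
def blend_frames(frame_a, frame_b, n_between):
    """Generate n_between intermediate frames between two frames by
    linear cross-fading of pixel values. No motion estimation: moving
    objects ghost instead of moving, which is exactly what optical-flow
    based interpolators fix."""
    out = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # blend weight: 0 -> frame_a, 1 -> frame_b
        out.append([a * (1 - t) + b * t for a, b in zip(frame_a, frame_b)])
    return out

# Three in-between frames for a 2-pixel "image" fading black to white.
mids = blend_frames([0.0, 0.0], [1.0, 1.0], 3)
```

Rendering at 5 fps and asking for 5 in-betweens per pair would get you to 30 fps; the quality question is entirely about how the in-betweens are synthesized.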
Mendoman this will do what you want https://nmkd.itch.io/flowframes
I frequently use it with Iray-rendered image series, as they are so slow.
I actually like very much combining both 3d apps and ai apps, as demonstrated in the thread.
That way one can get the best from any of them.
I have not tried this approach myself, but enjoy watching examples posted in the forums.
I'm not sure anyone should feel bad about running their renders through the AI. I've already seen professional artists using AI to speed up their work on concept art, using it for things like fast generic backgrounds, or sketching something, running it through the AI, and then painting over it.
It isn't even anything new, except before it was photoshopping scenes together quickly from stock photos and then drawing over that.
You still need to make a render to have something to run through the AI.
i'm looking forward to seeing what artists and other such creative minds can do with it... beyond dismissing it.
There is no inherent value in an image being entirely a 3D render (or even 3D at all). A lesson I learned far later than I should have (and still struggle with sometimes) is that you don't get points in art for trying harder. Do you think that using a photo backdrop or photographic HDRI is also "cheating"?
Different strokes for different folks, that's how it should be - do whatever you're comfortable with and not feel forced down an uncomfortable path.
But yes yes yes, I hate the guys that try to claim it's "all their own work"... they can't paint, can't light a 3D scene and try to pass themselves off as serious artists because the AI responded to their text prompts - and yet it'll just be a matter of time before they are considered serious artists because people will nevertheless like the art produced irrespective of the method.
Right now AI has a lack of precision, so it's great if you have a general idea, but if you want precision over object design, you still need to paint that part yourself and/or layer-in a 3D render.