AI is going to be our biggest game changer
This discussion has been closed.
Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
relaxing ambient AI forest video
FX from Capcut
If you knew how Stable Diffusion works, you would know that there is a CFG (classifier-free guidance) scale parameter. The default value is 7.5, and it can be increased up to 30. This parameter is basically how much the AI will stick to your prompt -- consider it a freedom of artistic expression. If you told a human artist to draw you a cat a hundred times, chances are most humans would at least once draw you something else (and it might even be something rude). AI is no different in that sense. You can absolutely raise the value to 12 (above that is not recommended because it amounts to yelling "NO SERIOUSLY I WANNA CAT!!!" at it), and thus force it to stick to the prompt more strictly.
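Under the hood, the CFG scale is just a linear extrapolation between two denoising predictions: one made with the prompt and one made with an empty prompt. A minimal sketch of that formula, with toy arrays standing in for the real U-Net outputs (function and variable names here are illustrative, not the actual Stable Diffusion code):

```python
import numpy as np

def classifier_free_guidance(noise_uncond, noise_cond, cfg_scale):
    """Combine unconditional and prompt-conditioned noise predictions.

    cfg_scale = 1.0 reduces to the plain conditional prediction;
    larger values push each denoising step harder toward the prompt.
    """
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)

# Toy 1-D "noise predictions" standing in for real model outputs.
uncond = np.array([0.1, 0.2, 0.3])
cond = np.array([0.2, 0.1, 0.5])

low = classifier_free_guidance(uncond, cond, 1.0)    # equals cond exactly
high = classifier_free_guidance(uncond, cond, 12.0)  # strongly prompt-steered
```

Raising the scale amplifies whatever direction the prompt pulls in, which is why very high values tend to produce oversaturated, "overcooked" images rather than simply more-accurate cats.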
I do know about the CFG parameter, I usually leave it around 8, sometimes 9 because higher can give weird results so I'm told. And yes, I did get cats with the prompts I was using. Many cats. Maybe even too many cats -- nah, no such thing. ;-) Just thought it funny that in the middle of every batch there was at least one render that bore not the slightest resemblance to anything feline. Got to the point I was wondering each time: cat or no cat? Think I'm gonna call it the Schrödinger Effect.
That can happen if the prompt is not specific enough. You can have a prompt saying "a cute cat" and the AI model might have "seen" the tag "cat" associated with "tabby cat", "catfish", "old lady with a cat", and why not a "witch with cat-like eyes" or "a girl with cat ears" or even "cat paw lucky charm" or, God forbid, "a box of cat litter". It may even stick to "cute" because it weighs words that appear earlier in the prompt more heavily, and draw you a "cute girl with a cat tattoo".
If you asked me to imagine a dog right now, my first thought would be my own late mixed-breed dog, whom I loved very much; the next would be a Border Collie, and so on. The AI will also "imagine" whatever it has seen the most in the training dataset, and you must not forget that this dataset includes bias.
For example, if you ask for a nude woman and add the word "porn" to your prompt, she will have large breasts -- no exceptions. It's the training dataset that has the bias, and not even adding a negative prompt can remove it -- the AI simply connected those two things and can't do them differently. Also, if you have been wondering why it crops heads in half, that's because the dataset contained a lot of photos of t-shirts that focused on the shirt while cutting off the model's face. So you basically need to do prompt engineering to get what you want.
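Part of that prompt engineering is controlling per-word emphasis explicitly instead of relying on word order. Several popular Stable Diffusion front ends (AUTOMATIC1111-style UIs) accept a "(phrase:1.3)" syntax for this. A simplified sketch of how such a syntax could be parsed into (text, weight) pairs -- the real UIs also handle nesting and bare parentheses, which this illustration skips:

```python
import re

def parse_weighted_prompt(prompt):
    """Split a prompt into (text, weight) pairs using "(phrase:1.3)"
    emphasis syntax; text outside the parentheses gets weight 1.0."""
    pairs = []
    pos = 0
    for m in re.finditer(r"\(([^():]+):([0-9.]+)\)", prompt):
        before = prompt[pos:m.start()].strip(" ,")
        if before:
            pairs.append((before, 1.0))
        pairs.append((m.group(1), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        pairs.append((tail, 1.0))
    return pairs

parts = parse_weighted_prompt("a (cute cat:1.4), watercolor")
```

The weights are then used to scale the corresponding text embeddings, so "cute cat" pulls 1.4 times as hard as the unweighted words.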
Frankly, that behavior is quite similar to human behavior. With many people you have to be very specific when giving instructions. If you are in a relationship, ask your partner to bring you shoes without any specifics and try to imagine which pair you will get if you have several. How would what they bring change depending on what you are wearing (if anything), and would it vary with their mood and the weather outside?
It is easy to mock the AI for not always doing what you ask and to forget that humans are at least equally unpredictable if not much more.
Been having internet problems so it's a bit late but ... conclusion to the "Where's My Cat?" saga just posted in the gallery. https://www.daz3d.com/gallery/user/6386104575983616#gallery=newest&page=1&image=1254198
I am thinking this AI business is very like herding cats. Much patience is required and still it will do whatever. Fun, though.
(editing because accidental wrong link. Silly Byrdie ;-)
This lays it out pretty well. The guy talks too slow. I recommend speeding him up and listening for a while.
Listened to the whole thing. His point about how the new music AI generator required them to make sure no copyrighted music was included in the model, whereas the same consideration was not made for images, is absolutely telling.
Charles, thank you for taking the time to listen! Yes, very telling.
The main thing I want from AI is the ability to take a 3D render and 're-render' it in a different style using the AI while retaining all of the elements of the original image. I see what look like examples of this in Youtube thumbnails (like a cosplayer and then an anime image in the same pose and outfit), but they never show how or even if it can be done. I've tried img2img in various AIs and it never gives good results for this sort of thing. If anyone ever does see this being done well, could you please share a link? Thanks.
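One reason plain img2img struggles with this kind of style transfer is the `strength` parameter: it controls how much of the denoising schedule is skipped, so low strength barely changes the style while high strength destroys the original composition. A sketch of the step arithmetic, assuming the convention used by diffusers-style img2img pipelines (the exact API details vary by version):

```python
def img2img_schedule(num_inference_steps, strength):
    """How many denoising steps actually run in a diffusers-style
    img2img pass: the first (1 - strength) fraction of the schedule
    is skipped, so low strength preserves the source image's structure
    and high strength overwrites it.

    Returns (index of the first step run, number of steps run).
    """
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return t_start, num_inference_steps - t_start
```

There is no middle value that reliably changes style while keeping every element, which is why approaches that add structural conditioning (ControlNet-style pose/depth guidance) tend to do much better at the "same pose and outfit, different style" trick.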
AI will definitely replace DAZ studio
Oddly enough, I don't want it to generate art, but rather photorealistic interpretations of what I input, even if fantastical.
Human versions of cartoon characters; if I ask for a cat dressed in armour, I want a realistic cat in realistic armour.
So no, I am not using it to replace artists -- photographers and photo manipulators, maybe.
AI really took off once Stable Diffusion was gifted to humanity. I didn't really look twice at AI before that. Now we can generate all kinds of images, fast and mostly free if you have a decent GPU. I say mostly because the electricity bill is probably gonna bump up a bit if you are generating images 24/7 lol. 3090 and 4090+ GPU owners have a lot more options as far as training goes, but there are three methods of training anything you want on an 8GB GPU. It's awesome. You can train image styles, or people/characters. You can dialspin someone up in DS, make a bunch of renders of it, then train your favorite SD model to know how to make images with it. Usually it takes maybe 5-10 mins to do a basic render, not counting setup time of course. Generating images can be as fast as 15 seconds if you use a smaller size. Then you can pick through and find ones you like and redo them at higher resolution. It's pretty nuts really.
I got the electricity bill and have now backed off on running Stable Diffusion on my PC
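That bill is easy to ballpark: power draw times hours times your electricity rate. A quick sketch with entirely hypothetical numbers (a ~350 W card running around the clock for a month at $0.15/kWh -- plug in your own GPU's draw and local rate):

```python
def gpu_power_cost(watts, hours, price_per_kwh):
    """Rough electricity cost of running a GPU flat out:
    kilowatts * hours * price per kWh."""
    return watts / 1000 * hours * price_per_kwh

# Hypothetical: ~350 W card, 24/7 for 30 days, $0.15/kWh.
monthly = gpu_power_cost(350, 24 * 30, 0.15)  # ≈ $37.80
```

Generating in bursts and letting the card idle between batches cuts this dramatically, since idle draw is a small fraction of full load.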
I feel that, after the initial excitement, I start to get bored using the AIs. Somehow I prefer to create the results myself instead of letting the software do it.
Well, I finally had some free time to play with Stable Diffusion (online version) and DALL-E.
I LOVE this new paradigm for still image art creation!!
I have a somewhat little-known but powerful image upscaling app over on my 2017 Intel iMac called "PhotoZoom Pro".
It can enlarge those 512 & 1024 images up to 600% with great quality.
I will be using Stable Diffusion for all of my thumbnail art for my YouTube channel.
As a former professional graphic designer (in my previous career), I have found that I can create some really cool brand logo base images and convert them to Adobe Illustrator vector art with a Mac app I have called "Super Vectorizer".
Welcome to the future!!
I created these using Midjourney. I'm just having fun. My experience is that AI is an easy way to make one-offs at the moment rather than, say, a story of many scenes about a specific character. I started with Daz doing the same thing, creating one-offs, portraits of people that I have no practical application for. This AI art does for me in a few minutes what Daz renders take me hours to do -- create something I like. I am still using Daz to create a picture book for children (and my renders take forever because I have a potato for a 'puter). I would not be surprised if I explore Midjourney for a while and then drop it. I am having fun creating stuff tho.
Haven't tried cats in armour yet. But cats in tactical gear ... Team Whiskers reporting for duty.
been doing a few
outpainted from my Midjourney helm creations
Niiice! Cats of the Round Table, maybe? Pretty certain that's gotta be Sir Purrsalot. :-)
I have a thing for cats
Keep postworking my images in other AIs too.
Only listened up to 04:30, but basically that's the current thing.
Such work can lead to useful tools without exploitation. However, the showcase stuff that is filling the news at present is based on basically exactly that. At least that's a threat in the wild, if ever anyone claims to have an "improved version"...
This makes it all worth while!
Love it.
Strictly, they are talking about machine learning in this circumstance. Computers are excellent at doing repetitive tasks, and there are now methods to find and identify shapes and colors. This is the modern way to sort spoiled eggs: the eggs are candled on a moving light table, a computer looks at each egg, notes any with dark inclusions, and discards them.

Traditionally, you give the computer a data set such as "doctors" (a set of hundreds or thousands of photos); the computer creates a model (images that contain a human, a white coat, a stethoscope) and decides that defines a "doctor". You can then show the computer a set of a million photos and it will identify those people with the attributes it defined for a "doctor" as doctors. The really sophisticated ones can merge data, so you can ask for a doctor on a unicorn and the computer can synthesize an image from its sets of doctors, unicorns, and people riding, to create an image of a doctor riding a unicorn.

The limitation is that computers have the "garbage in, garbage out" issue: if your data set of images doesn't contain a Black woman doctor, the model assumes that the ideal "doctor" is whatever the dataset contains the most images of. The second problem is that many systems use datasets they haven't paid for, raising copyright issues.
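The "garbage in, garbage out" point can be made with an almost trivially small model. In this illustrative sketch (toy features and thresholds, not any real system), the learned "doctor" is just the average of the training examples, so whatever look dominates the dataset defines the class, and examples resembling the minority get rejected:

```python
import numpy as np

def train_prototype(examples):
    """A toy 'model': the class prototype is simply the mean feature
    vector of the training set, so the majority defines the class."""
    return np.mean(examples, axis=0)

def matches(prototype, example, threshold=0.5):
    """Classify by distance to the learned prototype."""
    return float(np.linalg.norm(prototype - example)) < threshold

# Hypothetical 2-D features, e.g. (wears_white_coat, has_stethoscope).
# 9 of 10 training images share the same look; 1 does not.
doctors = np.array([[1.0, 1.0]] * 9 + [[0.0, 1.0]])
prototype = train_prototype(doctors)  # close to the majority look

majority_example = np.array([1.0, 1.0])  # accepted as a "doctor"
minority_example = np.array([0.0, 1.0])  # rejected, despite being in the data
```

Real models are vastly more complex, but the failure mode scales up the same way: no negative prompt can undo an association the training data baked in.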
How and why?
Does 'AI' give me a 3D environment populated with people of my choice and let me take pictures from all directions?
If I was only interested in a single 2D image with one time environment and one time characters, I wouldn't be in 3D in the first place
you can train it now to use a specific character
On a computer, the 3D environment has to be reduced back to a series of 2D images to be displayed on a monitor. So from there it is effectively equivalent whether the AI created a series of 360-degree 2D images directly from your 2D input, or created a 3D environment from that input and then used it to render the same series of 360-degree 2D images -- if both sets of images are identical at the finish.
If you're interested in 3D, you can use AI that alters the 3D environment itself to keep your interest. Some aspects of some nVidia Omniverse apps do so. They are slow, intensive apps though; even the simpler ones bring an nVidia GeForce RTX 3060 12GB to a crawl!
Those AI generators seem to really love "Gandalf" from Peter Jackson's Lord of the Rings.
I think the following article shows where AI will lead storytelling and art. The article starts with how analytics affected baseball, then goes on to discuss how analytics affected music and movies. In my opinion, AI will be an even stronger force for the trends this author discusses. That doesn't mean the use of AI as a tool for individual creatives is a bad thing -- I'm just considering the impact in the aggregate.
https://www.theatlantic.com/newsletters/archive/2022/10/sabermetrics-analytics-ruined-baseball-sports-music-film/671924/?utm_source=pocket-newtab&fbclid=IwAR17YPud91q6TAVVQMK_t3LTjYRxvYYOtz5cc-3InRo3EI5Gcy7uJpr2Q7A
Any predictions on how the RX 7000 release will affect RTX 3090 prices? I'm looking at a few factors:
Thoughts?
I'd like to know how this is done, because for me, it's actually easier to make art that I'm satisfied with in DAZ Studio than with AI. The inconsistency and amount of postwork required to fix anything is still keeping me from using it for anything except concepting.