AI is going to be our biggest game changer
Dave McKean on the impact of AI for Artists.
A step further: if this really happens at large scale, how will art look in a few years/decades/centuries?
Given how the commercial world works, the scenario that could follow from your idea clearly has the potential for "the downfall of mankind" (in this area at least), as we might end up with no one being able to create anything from scratch anymore.
There is a wide variety of possible outcomes, also with more precise tools. I assume there will always be artists who want precise and efficient ways of creating, so those tools will always exist, then with "AI" support and even greater efficiency. Maybe such tools will end up being very cheap and widespread, e.g. through open-source and crowd-funded projects. The potential for copyright-bombing is so vast that I assume rules will be imposed at least on fully AI-created art (e.g. where the text-to-image barrier is crossed by the software exclusively), though grey zones will likely remain even there. Perhaps we'll bomb the whole of copyright anyway, with even vaster consequences. The worst scenarios, of course, are those where a few corporations dominate the art market with their "AI", likely cloud-driven, posing as the noble enablers of the whole world of art while being able to patent- and copyright-bomb the rest of the planet, preventing actual progress for decades to come. That can only happen if lawmakers are thoroughly foolish, let things happen, and then protect the next big commercial thing beyond reason because it gives their country a head start internationally.
I feel like this gif is again applicable -
The biggest issue is that Midjourney and the other AI models have been trained on copyrighted and/or private inputs. AI is literally taking artists' work and putting them out of a job at the same time (without offering a hefty severance package as compensation).
Once artists go the way of the dodo, AI art will all start to look the same (most of it already does), because there won't be enough new work to scrape from the internet to keep the learning going. Without human artists' work to use as input, AI has nothing.
As far as I am aware, there have been court rulings that AI-generated art is not protected by copyright. If that holds true in at least some cases, I would doubt the people who commission illustrations would want AI-generated images - in most cases, having control of the way the image is used is essential for brand management and so on. Generally the commissioning body would expect to see preliminary drawings and intermediate stages in the creation to ensure it met their requirements, so it would probably be quite hard (for now) for an unscrupulous artist to slip AI-generated art past them. Of course, there may be many less exacting markets in which AI-generated art would be accepted, knowingly or otherwise.
It's not unlikely that it stalls art-wise. Though, especially with an established cloud player, if they had a highly profitable model for the moment, they'd probably try to prolong the AI's life as long as they can in a profitable way: better models, trading accountability for "ingenuity", training from social networks, hiring artists, training from photos, more sources like movies from the past, incorporating automatically rendered 3D assets in massive amounts, incorporating other AI tools that help create assets, and so on. They may manage to push the horizon further away for a while before it stalls. Or maybe it doesn't; maybe it becomes distinct enough to actually do what you say, eventually. Right now it looks like we are not far enough along to really push "the horizon" on "the whole thing" technically. It does look like there is a vast range of possibilities ahead, so I assume specific tools will evolve for pretty much all stages of (3D) art. Maybe the movies will take/keep the lead there, or computer games, or the fusion of both...
Digital artists were regurgitating popular styles for so long that it's no big wonder AI has an easy time replicating them. It's been going on for a while, especially in games and animation. Every time a style becomes popular, it's copied.
For anything except grassroots publications and hobby art, though, control over ownership is going to be important. Plainly, it's money. Production art is not created for any other reason, and neither are illustrations for larger publications.
Daz & Poser are going to have a hard time, though. Hand-made art gives 100% control over the output. A trained artist can be very fast - way faster than iterating through multiple AI outputs and then doing postwork on them before there's something usable.
Renders don't have this advantage: they are neither more controllable than anything made by hand, nor faster than AI output.
Well, unless you are Loki, no mother is going to find the Stable Diffusion 2 Reindeer with a Sleigh I got desirable.
things are about to get interesting
BTW, I am neither pro nor anti AI technology
but I also know it's not going to go away
I am not staying in the path of an erupting volcano because I have principles to defend
I have shared my view that the current training model shouldn't really be used commercially
it is however also just my view
just like my playing and experimenting with Stable Diffusion on my own computer and sharing a few images and videos on social media and youtube is not affecting anyone's livelihood
I am actually using it in an educational capacity
it will be interesting to see how this class action goes down
If you are talking about artists making 2D art for a living, ok, but... Using DAZ and Poser is so much more than just trying to reach the goal of getting a render finished.
In DS, I build a 'world', create the characters for it, place the props there and use these elements to create several scenes and take several 'snapshots' of the characters in different situations - Can that be done with an AI?
Wow - what a bad take.
A few doozies from the video - can you guess whether their channel is predominantly AI-related?
I agree that AI isn't going anywhere but I also don't think the discussion needs more inane YT takes.
I'm waiting for DAZ to release a Studio version where I can set up scenes with text prompts. Like, Michael 7 and Kjaer wearing shorts and hoodies, sitting on a sofa in the Venezian apartment, and having a drink. Snap, the scene appears, and all I have to do is the finishing touches. That would be cool.
there have been videos shared here strongly biased towards both sides
this was just one in my YT recommended feed as obviously I am following the topic
hard to find an objective view but the class action is going forwards regardless
I agree and my comment wasn't directed at you. It is hard to find objective viewpoints.
The issue I have with a lot of the YT takes is the 'whataboutism' - instead of looking at what the other side is saying or making a compelling case, they default to: "What about photography? What about the other jobs AI is taking away? etc."
That video is so heavily biased my head spun around and nearly popped off. Instead of whinging on about artists supposedly concerned about losing jobs, he should be listening to what they're really saying and looking at the heart of the issue, which is copyright infringement. I haven't heard one artist say they would still have an issue with a "make art" button if the training database included only opted-in and/or open-source material. Instead, artists are forced to deal with seeing their copyrighted works scraped into a giant pile without their permission or compensation... forced to deal with seeing for-profit entities such as Stable Diffusion profit from said artwork by way of subscription revenue... all while being forced to watch rando interpeeps profiting from said copyrighted material by way of selling re-mixes... again without permission or proper compensation. That is the problem and that is what artists are up in arms about.
As an example...almost every day I see a new background set pop up on certain other websites that are most certainly direct outputs from Stable Diffusion. Nobody seems to care that the users are making profit from re-mixing other people's work with almost no effort.
They already have that, just look at the recent Game of Thrones release
Years ago, when I was dabbling in photography, I was a member of a website, photo.net. The site was created in 1995 or so by a university student who was majoring in photography and minoring in computer science. As part of his computer science project, he had to create a website for this new "world wide web" thing that was taking off. So, he decided to make a website dedicated to all things photography, along with a message forum to allow photographers to share their art and discuss their techniques. He kept access to the forums going back to the inception of the site in 1995.
Around 2000, digital photography entered the market. The first digital cameras were really nothing to be excited about. We're talking bulky P&S cameras that were like 0.5 megapixels. But you could see the technology picking up steam. It wasn't long before 4 and 6 megapixel DSLRs were showing up on the market. Scores of photographers screamed out, "It's not art! How can you call yourself a photographer when you can take 10,000 pictures and pick the one you like best? The best photographers can get the shot they want in 1 shot!"
A few years later, programs like Photoshop started becoming more popular. Again, these photographers started screaming, "It's not art! How can you call yourself an artist when you take your picture and alter it in a computer program!"
Fast forward a few years, and those same photographers are posting amazing shots. "It took me 5000 shots to get this one. I did three exposures, one for the background, one for the mid, and one for the foreground, and layered them in Photoshop."
And it's the same for stuff like Daz. I post my work on Instagram, and every so often I get some idiot saying, "Pfft! This is just CGI!" Uh-huh... and what's your point? I mean, sure, sometimes I feel like I'm cheating. *Click* model. *Click* clothes. *Click* hair. *Click* pose. *Click* render. Look! I made a art! But there's still thought, technique, and creativity put into it. With AI... I dunno, I find that we'd lose some of that creative process. Sure, we can make anything we imagine now, but are we making it, or are we just telling a computer what we think would be cool? "Computer, make me a picture of Mona Lisa surfing on a dinosaur, shooting rocket launchers." Boom! It's done. Just pick the one you like the best.
Yes. I have said this from the start, too. Nobody is 100% against the idea here, but the execution of the idea needs to be done the right way.
Now there has been debate on how the AI 'learns' something. Some people say that an AI should be able to learn like a person. Look, AI is not a human, it does not learn anything like a human, and it does not have any rights of a human.
So here is a video that kind of breaks down how these AI work. This video is surprisingly not biased in either direction, the person behind it is simply intrigued by the technology. It is well worth watching.
So to break it down, the AI starts with a noise pattern. The video doesn't mention this, but the exact noise pattern is what Stable Diffusion and others call the "seed". These noise patterns are NOT random. It is important to understand this. You can use the same seed over again. In fact, if you use the same seed and prompt settings, you will create the exact same image, even on totally different machines. Nothing is random here, and this is a very important point to make. Some people keep saying that AI is doing some kind of thought process; no, it is not. There is no guesswork. The AI takes your prompt as an input and processes it as a machine.
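To make the reproducibility point concrete, here is a minimal sketch (my own, not from the video), assuming the Hugging Face diffusers library and the public stabilityai/stable-diffusion-2-1 checkpoint; the prompt, seed, and step count are purely illustrative:

```python
# Minimal sketch, assuming the `diffusers` library and the public
# "stabilityai/stable-diffusion-2-1" checkpoint; values are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "a reindeer pulling a sleigh through snow"  # illustrative prompt
seed = 1234  # the "seed" selects the exact starting noise pattern

# Same seed + same prompt + same settings => the same starting noise and,
# on the same hardware/software stack, the same final image.
image_a = pipe(prompt, num_inference_steps=30,
               generator=torch.Generator("cuda").manual_seed(seed)).images[0]
image_b = pipe(prompt, num_inference_steps=30,
               generator=torch.Generator("cuda").manual_seed(seed)).images[0]

print(list(image_a.getdata()) == list(image_b.getdata()))  # True on an identical setup
```

On an identical setup the two runs come out pixel-identical, which is the point about the seed being a fixed starting noise pattern rather than something random.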
Now look at how they were trained. Images are fed to the AI with tags on them. The example in the video is excellent, because it just so happens to have an artist tagged.
These images are actually broken down into noise patterns.
Yes, you read that right, the images are broken down into noise in steps. At this point you might begin to understand how this works.
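As a rough illustration of that forward "image into noise" step, here is another sketch of my own, assuming diffusers' DDPMScheduler; the "image" here is just a random stand-in tensor, not anything from a training set:

```python
# Minimal sketch of forward diffusion ("breaking an image down into noise"),
# assuming the `diffusers` DDPMScheduler; the "image" is a random stand-in.
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

image = torch.rand(1, 3, 64, 64) * 2 - 1   # stand-in training image, scaled to [-1, 1]
noise = torch.randn_like(image)            # Gaussian noise to mix in

# At larger timesteps less of the original image survives; the model is
# trained to predict the noise that was added at each step.
for t in (10, 250, 999):
    noisy = scheduler.add_noise(image, noise, torch.tensor([t]))
    signal = scheduler.alphas_cumprod[t].sqrt().item()  # weight of the original image left
    print(f"t={t:4d}  original-signal weight = {signal:.3f}")
```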
The random noise pattern takes your prompt and begins to work back through the noise. It scrapes its database for all images that have your keywords. It matches the noise patterns up and starts to work step by step, filling in the noise pattern it interprets to match your keywords, using pixels from relevant hits. With 5.8 BILLION images to draw from, it has a lot of data to work with. Tags are important. Notice the image of the eagle doesn't even say "eagle" in the tag; it only has the artist name and title. The image from Far Cry is worse. These may not be real, but they do show how the system can be flawed. (This makes it logical for them to better curate their images and tags.)
Anyway, yes, the original images are absolutely being used to fill in these pixels. It is not making up brand new information on the fly, or being creative. Without that original image, the AI has less data to match with that key word, which would directly lead to poorer results.
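For anyone curious what that step-by-step pass back from noise looks like in code, here is a bare-bones sketch of my own, assuming diffusers, with a small untrained UNet2DModel standing in for the trained, text-conditioned noise predictor that a real pipeline such as Stable Diffusion would use (a real pipeline also works in a latent space rather than directly on pixels):

```python
# Minimal sketch of the reverse (denoising) loop, assuming `diffusers`.
# UNet2DModel here is a small, untrained stand-in for the real trained,
# text-conditioned noise predictor; sizes and step count are illustrative.
import torch
from diffusers import DDPMScheduler, UNet2DModel

model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)
scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(50)  # number of denoising steps

generator = torch.Generator().manual_seed(1234)           # the "seed"
sample = torch.randn(1, 3, 64, 64, generator=generator)   # the starting noise pattern

for t in scheduler.timesteps:
    with torch.no_grad():
        noise_pred = model(sample, t).sample               # predict the noise at this step
    sample = scheduler.step(noise_pred, t, sample).prev_sample  # remove a little of it

# `sample` now holds the final image tensor (meaningless here, since the
# stand-in model is untrained).
```

The loop itself only calls the noise predictor and the scheduler step by step, starting from the seeded noise tensor.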
Stable Diffusion 2.1 removed a bunch of artist names as tags... you have to wonder why they did that. <.< But the images are still in the database; Diffusion has admitted this on record. I don't know if they retagged the images or what, and it is certainly possible that their images have multiple tags anyway. If the image above still contained "eagle" in the tag, then it doesn't really matter whether the artist Peter Eades' name is removed or not. Anybody typing "eagle" in the prompt could still have the AI use a portion of the image in its processing.
My overall point is to ask a very simple question... does this look anything like how a human "learns" art? Of course not. AI is not human, so why does it get to be treated like one? AI does not have a right to view copyrighted works, because the AI is not a person. And even if you somehow get past that, the AI is still processing elements taken from these copyrighted works as it generates an image. No matter what you think of AI learning, that last fact is not one that can be excused. You do not just get to process copyrighted works for free without permission. That is not OK, and has never been OK; this is not Fair Use.
This whole saga is just following the normal procession of a disrupting technology. Nothing to see here. But I think you hit the nail on the head, Wendy... it never just goes away.
When someone can show me an AI image that contains recognizable elements of someone else's existing work, I'll reconsider my current thoughts on AI (and I'm not counting inpainting or img2img, that's what all of the "proof of theft" images always are).
The argument always seems to leave out the fact that human artists copy each other all the time and a lot more blatantly than an AI does. I have two almost identical drawings of a boxing illustration where one artist only changed the race of one of the boxers. People draw established characters as commissions all the time and never think about giving part of their earnings to the original character's creator.
I'm not really in favor of using AI for commercial purposes because of the ambiguity of how the data was collected, but there's way too much hypocrisy going around.
Selling fanart of someone else's intellectual property is absolutely copyright infringement. Generally it falls under the radar because fanart is typically small potatoes in terms of money changing hands. However...start making actual money and throwing names like "Mickey Mouse" or "Obi Wan" around in relation to said fanart and you bet Disney will be all over it. There's a reason why I would never sell fanart of actual Star Wars characters and only allow for commissions of original characters based in the same fandom (such as player characters).
The point is not liking or disliking "the technology"; the point is what particular kinds of use, done in particular ways, do to society and the balance of power(s), and why we should allow abusive behavior. And if a transition is unavoidable, whether we should (and can, and how to) shape it. This is not meant to judge people using the systems; it is rather a question of implementation, including data gathering, and of what follows from that.
This kind of system can actually put parts of originals into its output (downscaled, processed, upscaled, mangled, but still "pretty much possible"), so no proof is needed on the general question. Publishing the resulting images is also done at your own risk (responsibility), essentially. It's not "a free one" just because it was generated by a computer system.
It will be cases like this that will inform legal precedent going forward. This situation will continue to evolve.
ARRG Captain, they be calling 'em software pirates!
I agree with @MelissaGT. It is true that some authors encourage fan art as free advertising and look the other way. Other authors have been approached and have given written consent with stipulations: nothing racist, anti-Semitic, illegal, or whatever... I'm not for or against AI, but I am against how AI art currently operates. AI does intrigue me, probably because since 2015 I've been writing a fictional series (yet to be published) about AI. Bring on the robots! The more the merrier scarier.
AI reminds me of the citizens of the Capitol in the Hunger Games series. Chapter six of Catching Fire is where we learn the citizens of the Capitol attend feasts, eat until they are stuffed, then vomit to make room for a second plate. AI has an insatiable appetite, and it needs to gorge on tons of assets it doesn't own in order to regurgitate code it doesn't own or rightfully hold a license for. There is a huge case where the open-source code used to train an AI was not in the public domain but was copyrighted code, as most is. In November 2022, a class action lawsuit was filed against Microsoft, GitHub, and OpenAI. Basically, the lawsuit accuses the defendants of stealing the code of those who wrote the open-source code used to train Copilot, a sort of copy-and-paste code service. You tell Copilot what you want and it spits out your dream. Sounds familiar, huh?
But that is just the nose on the dog. This is just for the open-source code that trained one specific AI. Imagine what the next class action will be. If Microsoft loses, what lawsuit follows it? Will new laws be made forgiving past trespasses of ethics? The question that haunts me is whether AI only uses what participants upload. Were the uploaded works originals, or scraped from Google? How many artists scrape from Google would be a very interesting count. But that's just Joe Blow scraping for pennies. Now we have a case that involves billions. Microsoft invested one billion in their AI technology as an investment, not a donation. Does AI scrape the web? Does it steal from small fry, or is it hitting the big players? Are the big fish, the stock photo players, going to watch this lawsuit like hawks? Will there be takedown notices? Right now I'm glad I stayed away from AI art. If the plaintiffs lose the lawsuit, will AI forge ahead with lightning speed, or will the defendants lose and AI be deemed a forger? Page 2 of the 56-page document shows the parties to the suit, while page 53 shows the attorneys for both parties, in case you guys want the story from the horse's mouth.
Edit: Lazy author leaves spelling errors intact . . .
Artists and people like me are already leaving DeviantArt and other sites because our pictures were made available to AI for learning without our consent.
Animals have their own languages, some have social behavior and use tools. To the best of my knowledge they do not create art.
If we outsource creating art to AI we give up what defines us.
AI is the dark side. Quicker it is, but more powerful it is not.
Yes, art is part of our birthright as humans, and it's likely a bad idea to give it over to AIs. The irony, of course, is that the data sets are made of art made by humans.
No, I don't think leaving DA will help protect your work against inclusion in AI data sets. Anywhere you go on the web, your work will be vulnerable to inclusion in AI data sets. The imgurs and other anonymous image gathering sites of the web leave too much work exposed to easy harvesting, and we have no control at all over what ends up there or how it's used. I'm interested to hear where people are going when they leave DA as I think DA has gone a bit downhill over time; but I have not found any viable replacement for DA.
I have one issue with all this AI stuff: where's the sense of accomplishment in having figured out how to do it all yourself?
I've been modding games for roughly 25 years, and I've encountered quite a few different mindsets over the years, the main two being those that are willing to stretch their minds and put in the effort to learn how to do it all, and those that think it's too much like hard work and look for the easy option.
Guess what mindset I see when people flap their gums about AI.
1. AI doesn't do this. Humans do, by deciding what data is used for training, how exactly the processing is done, and whether or how it is safeguarded against whatever. Humans also influence lawmaking and so on; it's not magic, just in case anybody thought so. And it might hit societies hard, with the potential for very few people gaining ridiculous power in comparison on the monetary side, while society gains nothing but larger single points of failure and broad abilities decrease (artwork, financial business, programming, ...). The potential for manipulation is vast (right now more so with text), and concentration of power poses a problem in this context, because you can't defend society or even yourself by "making a better AI". The real gains with generative systems may lie in areas where really hard problems get solved, like solid-body physics or molecule construction, but of course also in interfacing (text, image, ...), and not only with xyz-impaired people in mind. Putting the generator up front at this stage, in my opinion, has the potential for manipulation, for the sake of bending people's minds and the law towards full exploitation (THEY would probably say: preventing hindrances to the markets, in order to make mankind progress).
2. I don't think this is about lawsuits. Similar to "the internet", it's a new "problem" for which you have to adjust the old laws or make new ones. Letting the big players have fun in the courts might just make everything much worse.
3. Stock photos might get hit hard, but are they a big player? The "big players" in AI will be the ones with the biggest budgets for AI (and beyond), with computing power, ideally specialized hardware, and data-gathering and processing capabilities. Assets count too, but the players holding those usually won't meet the other prerequisites, so they'll probably be "forced to cooperate" at some point, or the big AI players will just ignore them and go for user content. It's not fully set in stone, because there are a lot of smaller players like DAZ3D, or maybe stock photos (if that's not a big player :p), which may need to counter the general magic text-to-image, or whatever follows it, somehow; so there may be one or another service provider building something for those players, as the total market share might add up to something interesting. So maybe stock photo sites will have their own AI offering, with accountability and a monetization share for the authors (pennies...), providing another type of generator based on their assets, plus maybe others. It won't be the same magic. On the flip side, maybe the technique won't be far enough along "fast enough" (I wouldn't count this in decades), so perhaps the smaller players will soon dwindle and eventually get bought by the big players, which then use their assets for "fun with AI". In the first phase they'll just bind the hands of content creators and customers more and more, to force them to contribute to scraping for AI; then they'll probably shut it all down once appropriate. Just as a predatory outlook. Maybe copyright dies over this, and while the remaining big players dominate most of the remaining markets just by controlling media, ads, and people, people might then be more free to create whatever they create; and while they can't upload to the big players' networks freely, due to automatic filtering (you had me at "copyright"), they'll probably not go to jail for publishing on some "pirate site" for a subculture. Then again, it might well turn towards the opposite, and everybody who publishes bites the dust eventually, because politics fail to regulate, the big players each have their walled-everything gardens, and everyone without a dedicated AAA-sized law department of their own will have actual difficulty publishing anything. So what do those (hand-picked) scenarios have in common? Artists, and services providing models for others to use, will have an increasingly hard time. This is why I wrote "2.".
4. For each lazy author there is an even lazier reader, reading only half of what they wrote. All of this is a rough estimate and by no means represents the only, or even the most probable, course of events. Things could turn out to not be "good enough", regulation could kick in hard with foresight, and the generators might rather scratch the fast-and-cheap content creation markets (which would still have a lot of impact). Currently I am wary about how lawmaking proceeds; e.g. in Europe they had just half-negotiated a thing until a German party finally came around and claimed that (translated from "credit rating":) social scoring is a high-risk application for AI too (which is correct even without AI, but at the same time so basic that, intellectually, I would call it... too lazy to write this down).