AI is going to be our biggest game changer
Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
and now for some conspiracy
Indeed interesting. I wouldn't subscribe to all the statements, though; due to not sleeping last night I might actually have forgotten all the relevant points, or turned things into incomprehensible gibberish while trying to edit this into what I thought should be there at one moment or another. E.g....
On the other side: humans are language models with a digestive system attached. And some basic survival needs.
I've been silently cheering @bluejaunte on, on many points. But on two in particular:
1) It took us 60 years to get from Eliza to ChatGPT. But a few weeks from ChatGPT not being able to remember what you said in the prompt before last, to it remembering everything you've ever said. It will get better. Not incrementally, but at an accelerating rate.
2) You can devote your time to searching for arbitrary and insignificant ways in which it is technically still insufficient, or you can use it in ways that improve your own performance in endeavors that matter to you, right now. Is it intelligent, or not? Who cares, if it just ported 500 lines of Sagan's source code to Python 3, a language I am not at all good at, in about 3 minutes? If it just pointed out a severe plot hole in my story and came up with ways to address it that I am not sure I would have thought of, given an infinite amount of time?
We are past the point where debating its objective technical properties is more interesting than using it in practical situations.
Have you considered that it made a decision about the context in which you wanted the answer? I would not be surprised if it knew about all the issues you mentioned, individually, but came to the conclusion that that was probably not what you were interested in. I've encountered many instances when I was not satisfied, for whatever reason, with a particular response and found that it had a whole lot more to say after I asked it to expand on a certain aspect.
And I appreciate that it is natural to use things we do understand to explain things that we don't, but it's sort of a disservice to keep saying "database lookups". That is not how neural nets encode nor retrieve information at all, and helps to complicate and misguide the coming legal debate on IP.
1) There is no "it". Hardware and some theory have scaled ridiculously, with sophisticated modeling alongside. That's it. It could be another 60 years from now. Well, I'd not assume so, but as there are limits to data, and to real brains, you won't get arbitrary improvements "for free". And ChatGPT "remembering" is, or would be, more or less just an add-on to the prompt; it's not an advancement like the one from Eliza to ChatGPT. Still, there can be gradual improvements from day to day, as they keep re-training and updating the models.
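To make the "remembering is just an add-on to the prompt" point concrete, here is a toy sketch (my own illustration, not OpenAI's actual implementation): the chat history is simply concatenated into the next prompt, and once the context window is full, the oldest turns fall out.

```c
#include <string.h>

/* Hypothetical sketch: chat "memory" as plain prompt concatenation.
   Every turn is appended to one rolling transcript, and the whole
   transcript is what gets sent to the model on each request. When the
   context window overflows, the oldest turns are simply dropped. */

#define MAX_PROMPT 512 /* stand-in for the model's context window */

static void remember(char *transcript, const char *turn) {
    if (strlen(turn) + 1 > MAX_PROMPT) return; /* single turn too big: skip */
    size_t need = strlen(transcript) + strlen(turn) + 2; /* '\n' + '\0' */
    while (need > MAX_PROMPT) {
        char *nl = strchr(transcript, '\n'); /* oldest turn ends here */
        if (!nl) { transcript[0] = '\0'; break; }
        memmove(transcript, nl + 1, strlen(nl + 1) + 1);
        need = strlen(transcript) + strlen(turn) + 2;
    }
    if (transcript[0] != '\0') strcat(transcript, "\n");
    strcat(transcript, turn);
}
```

The model itself stays stateless; all the "memory" lives in what gets re-sent every request, which is why it was cheap to bolt on.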
2) The hand of god has decided to... have us all plod along? If this is intended as a civilization-based show, things might want to be different. There is also potential for fundamental deal-breakers ahead, which I don't know of, or whether theory has removed them yet. May be lack of knowledge on my side; I could ask ChatGPT for a change. The players lobbying for free scraping probably don't care for more than the quick $, whatever follows. Arguing from the past for a thing that is supposed to be future in a fundamental way, because the experience of the past doesn't work for judging it... judge for yourself how that will not lead to repeating the past in other ways. So, using the AI now? Yes, of course! People should just be ready to understand the downsides of what humans actually do, including those humans who run the scraping and the models. Would you take responsibility for a commercial application, putting in that code without fully understanding it? [Hint 1: Isn't there an active lawsuit in a similar case? Hint 2: Quality control... Hint 3: Just checking "the function" once might not be enough for a satisfying result.]
Let's be realistic: using generators commercially still should mean taking a high risk, either way (image/text). So "you" or "me" can't really go there "improving"; of course you can train, and be a couple of days ahead of other people, if it's freed of any charges/odds/ends for some reason. Using a commercial cloud service is no option for many projects, and neither would be an expensive on-premise version. In fact, for what I am planning, this has little use altogether. Neither for programming: you typically learn in small-ish steps for good reasons, to really understand the bits you are using. Some people learn faster, and maybe some people are good at cross-checking examples from ChatGPT against documentation; this likely isn't the same for all humans. With an on-premise thing, better an open-data open-source one, I'd rather use the text generation for dynamic storytelling. Pricey means maybe later, if I have a game; I'd probably design for use of some AI, or maybe I'll write my own. Such things can be done in more classic ways too, but the syntactic and stylistic sugar would be very welcome to make up a story of what happened on-screen (or probably even more so: off-screen). Since those new things don't work for me currently, I've got all day to make up random show-stoppers for the amateur players who run this "civilization".
By no means feel discouraged in using those generators. It's not my intention to fiddle with the fiddling. I just don't see those blue skies...
I don't want to nit-pick on everything, but you say in one instance "it takes decisions" and in the other instance "it's working like a neural network (and not like database lookups)". This could be seen as slightly contradictory, because neural networks don't take decisions in such ways, because that's not in their nature ;).
In fact it's comprised of multiple stages and components. At least there are the big language model(s), with one component being the prompt interpretation and another being pretty much the rest of ChatGPT. Very freely reworded; OpenAI has an explanation page on their (rough) system design.
I just asked it that question (what is the best way to format {} in programming) and the answer seemed fine to me.
For each it gave a code example that also looked fine. I can google all these styles and find them by name. It's all here in the Wiki really: https://en.wikipedia.org/wiki/Indentation_style
It added:
Ultimately, the choice of formatting style depends on the project and the team working on it. Some projects may have a predefined style guide that all members must follow, while others may allow team members to choose their preferred style.
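For anyone who hasn't seen those styles side by side, here are the two most common ones from that Wikipedia page, K&R and Allman, in a small C example (my own illustration, not ChatGPT's output):

```c
/* K&R style: the opening brace sits on the same line as the statement. */
int sum_kr(int n) {
    int total = 0;
    for (int i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

/* Allman style: every opening brace gets its own line. */
int sum_allman(int n)
{
    int total = 0;
    for (int i = 1; i <= n; i++)
    {
        total += i;
    }
    return total;
}
```

Both compile to exactly the same thing; as ChatGPT said, which one is "best" is a style-guide decision for the team, not a correctness question.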
So I don't really know what you were expecting. Did you want it to tell you why these styles are readable on a more visual/perception/brain kind of level?
Never mind, you already explained this above. But you know, why not keep asking? If you asked a human that question, it's quite possible you wouldn't get the answer you wanted right away either. Have a conversation with it. If you wanna know more, dig deeper. Ask it about studies. About soft vs hard tabs, editors etc.
I asked it about studies:
There are several studies that have been conducted on the topic of code formatting and its impact on readability, understandability, and maintainability of code. One study conducted by researchers at the University of Gdansk found that code readability is improved by using consistent indentation and naming conventions, as well as by breaking up code into smaller, more manageable chunks. Another study published in the Journal of Systems and Software found that code readability is also improved by using whitespace and blank lines to separate code blocks and by aligning code elements consistently. The use of code formatting tools has also been found to be beneficial in improving the readability of code. The study also found that developers preferred using a consistent code formatting style, as it made it easier to understand and maintain the code.
I asked to cite sources:
There have been several studies on the readability and understandability of code formatting. Some examples include:
"Improving the readability of program source code" by Thomas J. Ostrand and Elaine J. Weyuker (ACM Transactions on Software Engineering and Methodology, 2002)
"The effects of code layout on program comprehension" by Michael J. Godfrey and David Binkley (Empirical Studies of Programmers: Third Workshop, 1991)
"The impact of code layout on software maintainability" by Lionel C. Briand, Jacquelyn S. Moore, and Yvan Labiche (IEEE Transactions on Software Engineering, 1999)
"An empirical study of the relationship between program structure and program comprehension" by J.M. Carroll and J.C. Thomas (International Journal of Man-Machine Studies, 1979)
It's worth noting that these studies often have different conclusions depending on the languages and the sample code used. Also, it's important to consider that code readability is a subjective concept and what might be readable for one person might not be for another.
The IP shenanigans the AI generated images are facing now are exactly the same human artists are facing every day unless they are doing it purely as a non-profit hobby. Everything is property, graphical assets are property too. This isn't caused by people being tech-hating luddites.
If a human artist has to be careful not to run into this kind of problem, why wouldn't an image generator be affected by it? Note that a machine cannot claim artistic inspiration or interpretation.
If an image is not royalty-free, it always is risky to use it in any fashion because it belongs to someone. This someone might be considering it enough of a problem to hire a lawyer.
There's nothing new nor unexpected here, really.
Absolutely, that's the minimal point to make. The "button" doesn't relieve me of basic duties.
The issue is that I don't create the actual image consciously, and there is no way to know what it was based on (at present), effectively making it more difficult to judge.
(In a way, following a style of artwork for a while, might have made me less susceptible to violating anything, once i got it right. No guarantee either, though. Should i, as an artist, hypothetically be forced to change style every day, because everything gets copied too quickly, then i am in fact more susceptible to making mistakes than before. So the new curse could become "to be known".)
Bottom line: it is not the AI that committed the crime of copyright infringement. It was the very human owners of the companies that made the request, and the very human programmers that programmed the computers to steal and scrape the copyrighted material off the internet from the legal owners of those intellectual properties.
I love the argument of "how are we supposed to track down the owners of the millions of images we used in our dataset?"
Um...then perhaps don't use them.
It seems they don't have the capacity to understand it was not "their" dataset to begin with. Ignorance to the LAW is not a defence, never has been.
I bet if the AI was trained only using purchased references or even only hiring artists for this, all of this would be a dismissible annoyance. But it wasn't.
I can google anything Matt Rhodes and put it on my desktop, and that's about it.
I'm almost sure Adobe neural filters were only made using stock illustrations owned by Adobe. I doubt Nvidia was making this kind of mistake too.
Edit: In theory only public domain images could be used too, but classic senior art wasn't what they were going for.
I'm not sure it's quite as clear though. One could make an argument that any human or machine could learn from publicly available images on the internet. This isn't a copyright infringement yet, is it? It's just learning from others who willingly put their images on the web. So the real question is this: is what these AI generators do more copying or learning? Society and courts will have to decide I guess. Creators of those AIs I'm sure feel like this is public information that is freely available and can be used to learn from by humans and machines alike.
Human can claim inspiration. Here, you feed an image with possible IP rights attached to a computer. A machine is not a person with any rights. A company using said machine does need the rights to use an asset, though.
The worst case scenario, it's going to be considered not any different than photoshopping using said image.
The best case scenario, the courts might handwave the issue only to get rid of it by deciding that it's under no IP rights at all.
Either way you have to deal with an owner of an original image being perfectly in the right of stating that they simply do not allow this kind of use. It's their property so they can.
They could even state this if another human was using it as inspiration, but let's say they didn't feel like it before. Because IP rights are very much pursued on demand unless they are trademarks, it's literally betting on someone else's will to pursue it or not.
One way or another, it's a bit too open for lawsuits.
It'd be safer to just purchase it before feeding it to the machine.
No. It is clear from a legal standpoint, versus opinion.
There is clear evidence that a majority of the AI outputs tried to obscure clearly marked WATERMARKS and COPYRIGHT NOTICES on the source material artist's works. The evidence is there for anyone with an objective understanding of the law and copyright protection.
Yeah, in that case that would definitely point more toward copying than learning. Unless maybe you make an argument that the AI simply didn't know that a watermark is not part of the art style. If I'm giving a child watermarked images without explanation then it would assume that this is how you draw. But then it would be the parent's duty to teach better, as it may be the creator's duty to teach the AI better.
I don't know, I just find it so interesting. All the questions AI creates for society. So many ethical and philosophical questions, far beyond just the boring legal stuff.
I think there's a lot of cross-talk over this.
I think the video I posted earlier addressed a lot of fallacies.
Copyright is about owning a thing.
A License is about how you can use the thing.
A Trademark is branding.
A signature on a piece of art is not a copyright.
It's not clear if anyone "needs your permission" to scrape your work(s) and build their datasets.
In other words, this has not yet been decided. It is still in the court of public opinion.
Infringement is the word that no one wants to use because that goes on a case-by-case basis.
And it's also VERY hard to prove.
AND for that, you need to go after the person, yes, THE PERSON that is doing the infringing and NOT the company that made the tool(s) they used.
That's the real elephant in the room.
-------------------
And I checked the data set using the link from about two pages ago.
Yes, my comics are in there.
And yes, my friend made something that looks like a horrible version of my work and sent it to me as a joke.
----
I still can't process all of this and have no idea how to feel.
So I'll let the courts decide and roll with that. lol
But if you can't prove that the person putting it into a database had the right to do so, then you might end up with an infringement. Because it's going to be hard to tell whether it even falls under educational use... It might well end up boiling down to the owner saying whether they agree to this use or not.
I think we're going to see more of the art hosting sites trying to stay on the safe side of things by making it easy to state if images can or can not be used in AI generators.
No one in their sane mind wants to tangle with lawyers if it can be avoided.
Aren't search engines essentially putting images into databases too? Nobody cares about that. And what if the AI would instead browse the web in real-time while learning, without putting anything into a separate database?
Is there even a law against saving an image somewhere? I thought it was only the usage of that image. Nobody cares if I save a copy of it to my computer to look at it later. Or am I wrong?
Is there an AI version of limewire yet?
There's a few very expensive pieces of software I'd like to have for "training purposes".
I promise I'll never make a single cent from their use.
...anyone else see the similarities?
It's not about people looking at the pictures or saving the pictures to their computers to look at later. It's about people profiting from other people's copyrighted works without any permission or compensation given back to the original artist.
Think of it this way. Midjourney uses an AI dataset that was scraped from hundreds of thousands of copyrighted works. They turn around and charge a subscription fee for people to use their service...that was trained off of copyrighted work without any compensation back to the artists who essentially created that dataset. Even further...there are people using Midjourney and other tools (Stable Diffusion et al) to spit out pieces...and then turn around and re-sell said pieces...thus adding an additional layer of profit off of copyrighted works.
If someone wants to train their own dataset on their own work and then use it to remix or create new works to sell...totally fine by me (minus the environmental impacts if everybody were to all of a sudden start training their own AI's). But this is such a small % of use cases.
Well, yes but it's a fine line and brings us back to square one. Did the AI not simply use publicly available images to learn? Isn't it what humans do too with images that they see and copy in style? It may seem obviously wrong but it's an unprecedented situation that will have to be solved in court. I can certainly understand the concerns of artists, I mean, I'm an artist myself. But I can also find arguments for the other side. This isn't just a case of "you're profiting off of my stolen images", it's also "I put these images on the web for everyone to see, I can not prevent someone from learning how to draw in the same fashion". There is no copyright on the looking-at, analyzing and learning part. There may be an issue with the end result if it looks too much like some other image, but then that would have to be solved on a case-by-case basis. And with the sheer number of images in circulation, it'll probably be a nightmare. Like how many different ways are there to draw the Eiffel Tower.
well I mostly use my own renders and photographs, videos as inputs for im2img,
the ai I use as a filter to apply a style which is never one artist if I use any, if I do it's usually 5 or more
I run Stable Diffusion on my own machine and it's much faster and uses less power than a DAZ Studio render
I think I can safely say anything I do with ai is very transformative,
I don't refer to it as my art though, I don't consider myself an artist even using DAZ content, I am more a producer of entertainment, 99% of my stuff is YouTube videos
I shot a video of me sitting on a chair an hour ago
I am now running the separated images through Stable Diffusion as batch render
Prompt:
an ugly old medieval hag with saggy old skin Shub-Niggurath, highly detailed, jacek yerka gaston bussiere, craig mullins, j. c. leyendecker Ernst Haeckel Mark Ryden James C Christensen
Settings: width 1088, height 1920; resize mode: resize and fill; 50 sampling steps; CFG scale 16; denoising strength 0.4
Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content. Search the web for the article.
Artists are suing too: Stability AI, Midjourney, and DeviantArt are being sued by a trio of artists who allege that the companies' AI art models violate copyright law. Search the web for the article and read the lawsuit.
While AI might be the next big thing, implementation and copyright are a thing. Reminds me of the MP3 craze and Napster.
Yup, and the courts will settle it. Then we know. Right now we don't.
But the AI physically saves and breaks down the images into the format it needs to learn. And then the algorithm spits out those pieces (along with pieces from other consumed parts) based on the prompt. The process is described a couple of pages back and is wonderfully put together. It's not like a person who looks at images and then tries to emulate someone's style. Not like me when I first started out looking at Shiba Shake's artwork and saying something to the effect of "wow, I really love their style...that's totally the look I'm going for"...and then practicing and practicing and practicing til I developed my own style. When the AI responds to a prompt, it's not like Bob Ross, thinking to itself that it's going to put a pretty little tree over here this one time just to see how things go. There's no thought/feeling and no anthropomorphization. It's a machine that is performing calculations by following a prescribed algorithm. As already explained, the same prompt and seed will always result in the same output regardless of what machine it is run on or who inputs the prompt.
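That last point, same prompt and seed giving the same output on any machine, follows from the generator being a deterministic function of its inputs. A toy illustration with a simple seeded pseudo-random generator (an LCG, not the actual sampler Stable Diffusion uses):

```c
/* Toy linear congruential generator: a pure function of its state. */
static unsigned lcg_next(unsigned *state) {
    *state = *state * 1664525u + 1013904223u; /* Numerical Recipes constants */
    return *state;
}

/* "Generate" from a seed: an identical seed and step count always
   reproduce the identical result, on any machine. */
static unsigned generate(unsigned seed, int steps) {
    unsigned state = seed;
    unsigned out = 0;
    for (int i = 0; i < steps; i++)
        out ^= lcg_next(&state);
    return out;
}
```

All the apparent "creativity" is downstream of the seed: change it and you get a different result, repeat it and you get the same one, which is exactly why seeds are shared and reused in these communities.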
I completely understand the reaction. Aren't these wonderful philosophical questions though? What even is learning? Is it not learning anymore if the AI can do it a billion times faster than any human could? Is it not learning if all it does is execute algorithms? Are we not also essentially executing our own human algorithms and storing the information in our brains?
Except for the millionth time, it never "spits" them back out in the exact same configuration as the image that it was trained on. Photoshop tutorials literally tell people to search for a texture on Google, save it to your computer, then paste it into your scene where necessary for a background or to add grunge or roughness to your scene - which is closer to actually stealing art than what the AI is doing. Also, you can "put a pretty little tree over there" wherever you want using inpainting, and the AI will adjust the lighting on that tree to match your scene as well (which it couldn't really do if it was actually copying a tree it got from another image, could it?).
And that would not be allowed in an environment where rights management mattered, because it would stand a good chance of opening the door to being sued. Something being suggested in a tutorial does not guarantee that it is legitimate.