AI is going to be our biggest game changer


Comments

  • PixelSploitingPixelSploiting Posts: 898
    edited January 2023

Tutorials aren't concerned with legalities because their purpose is to teach the technique. Whether the same is acceptable in a commercial work is a different matter. This is why the stock texture selling sites make any money at all.

     

    Note that this is only about use of generic small things like canvas or wool textures. Not about composing an entire piece from parts from multiple different pieces.

    Post edited by PixelSploiting on
  • SnowSultanSnowSultan Posts: 3,632

    Not about composing an entire piece from parts from multiple different pieces.

    IT

    DOES

    NOT

    DO 

    THAT.

     

    Seriously, I'm done now. This is like arguing with conspiracy theorists.

  • SnowSultan said:

    Not about composing an entire piece from parts from multiple different pieces.

    IT

    DOES

    NOT

    DO 

    THAT.

     

    Seriously, I'm done now. This is like arguing with conspiracy theorists.

    If I understood the previous discussion it is taking the data from images with matching tags (usually more than one) and processing them to get a final image - it isn't a collage, but it is a procedural derivative.

  • WendyLuvsCatzWendyLuvsCatz Posts: 38,484
    edited January 2023

    it has been trained on a vast number of images but as pointed out in that video I linked (in spite of the bias) a checkpoint file is around 4GB

    I can confirm this on my own computer

    so what it contains cannot possibly be all those billions of images

how exactly it works I still don't really understand, but images, even postage-stamp-sized thumbnails, in that quantity cannot be compressed to such a tiny size
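That point can be checked with quick arithmetic. The numbers below are rough assumptions (a 4 GB checkpoint as mentioned in the thread, and a LAION-2B-scale count of about 2.3 billion training images), not exact figures:

```python
# Rough check: can a ~4 GB checkpoint literally contain billions of training images?
checkpoint_bytes = 4 * 1024**3      # ~4 GiB checkpoint file (size reported in the thread)
num_images = 2_300_000_000          # assumed LAION-2B-scale image count

bytes_per_image = checkpoint_bytes / num_images
print(f"{bytes_per_image:.2f} bytes per training image")   # under 2 bytes each
# Far too little to store even a single pixel per image, let alone thumbnails,
# so the file must hold learned parameters rather than compressed copies.
```

Under two bytes per image is not enough to store any recognizable fragment of it, which supports the "it is not a compressed archive of the images" reading.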

    it is not piecing together images in any way but given a very specific prompt due to the training and word association it will reproduce something very similar to an original sometimes

    ethics of the training source aside it isn't actually copying and reproducing stolen art

    however

    if an ai image does match an original artwork it indeed would infringe copyright

this is why using ai art generators is at your own risk just like tracing and then trying to recreate an artwork exactly

    that is on the user not the tool, if you use an artists name and clear descriptive terms of course you will get something close that can get you in strife

    and it is indeed a gamble it could happen by accident 

    which is why I use my own stuff as a starting point and don't monetise anything either

    I try within the parameters of using a generative tool without skill to be as creative as I can

     

     

    silly selfie video

    Post edited by WendyLuvsCatz on
  • bluejauntebluejaunte Posts: 1,909
    edited January 2023

Although it's kind of ironic, GPT-3 seems pretty clear on potential copyright issues :)

    Weights and biases in a neural network are the parameters that are learned during training. They are used to make predictions based on input data. For example, in an image-generating neural network, weights and biases would be used to determine the relationship between the input image data and the output image data.

    In terms of storage, it is likely that the image data used to train a neural network is not actually stored within the network itself. Instead, the network's parameters (weights and biases) are stored, which contain the information learned from the training data. The network can then use these parameters to generate new images without needing to store the original training images.
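As a toy illustration of the "only parameters are stored" point above: here is a minimal least-squares "model" that keeps nothing but a learned weight and bias after training. It is a sketch of the general idea, not how any actual image network works:

```python
# Toy "training": fit y = w*x + b by least squares, then throw the data away.
xs = [1.0, 2.0, 3.0, 4.0]           # training inputs
ys = [3.0, 5.0, 7.0, 9.0]           # training targets (generated by y = 2x + 1)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
weight = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
         / sum((x - mean_x) ** 2 for x in xs)
bias = mean_y - weight * mean_x

del xs, ys                          # the "training set" is gone for good

def predict(x):
    # Only the learned parameters survive; the model never consults the data.
    return weight * x + bias

print(predict(10.0))                # -> 21.0
```

After the `del`, the model can still generalize to new inputs from just two numbers; the same principle (scaled up to billions of parameters) is what "weights and biases" means here.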

    As for the potential copyright issues, it depends on the source of the image data. If the image data is obtained from copyrighted sources, then using it to train a neural network could potentially be a violation of copyright law. However, if the image data is obtained from non-copyrighted sources, or if the use of the image data falls under the fair use exception, then there may not be any copyright issues. It is important to be aware of the legal issues surrounding the use of image data in AI and to obtain the necessary permissions before using it.

    Copyright issues with image data being stored in a neural network can arise if the images used for training the network are not properly licensed or obtained with permission. This can include using copyrighted images without permission, using images that are not licensed for commercial use, or using images that have not been properly attributed to their original creators. Additionally, if the AI image generator is used to create new images that are similar to existing copyrighted images, this can also lead to copyright infringement. It is important for developers and organizations to be aware of these issues and to obtain proper licenses and permissions for any images used in the training of a neural network.

    Post edited by bluejaunte on
  • kyoto kidkyoto kid Posts: 41,198
    edited January 2023

...interesting take from TurboSquid that I received this morning (Thursday).

Apologies, some junk was left over on the clipboard; here's the actual link.

    https://blog.turbosquid.com/2023/01/10/6-predictions-for-the-3d-industry-in-2023/comment-page-1/#comment-97324

    Post edited by kyoto kid on
  • WendyLuvsCatz said:

    Shub-Niggurath

    Ha ha, Wendy, why would you do that

     

  • nonesuch00nonesuch00 Posts: 18,274

    TheMysteryIsThePoint said:

    nonesuch00 said:

    I was being silly, or so I thought, and purposely chose {} format preferences to query on. I asked it what was the best way to format {} in programming. I think the answer it gave just citing database entries of (associated with UC Berkeley no doubt) {} format styles and the celebrity programmers that claim to have originated those particular {} formatting styles wasn't problem solving at all, but just a database look up. It did great though understanding my English query and converting it to a database query to fetch that data.

A problem solving AI would have considered the way the various text editors work, whether tabs were hard tabs or soft tabs when inserted by the text editor and such, studies on making blocks of text easily readable to humans, human typing skills, reading skills, vision properties, and keyboards, and proposed a solution using those available facts. I've never known a programmer to look up what celebrity programmers from UC Berkeley use so they could use that style as their personal {} formatting style. Come on, man! They use what the editor defaults to or what the existing code is already using; otherwise, for new code, they may use what the editor defaults to or they may have devised their own {} formatting style. Look up what some programmer from UC Berkeley does and copy that style? HaHa! It's a good joke.

It just doesn't have those facts in its database as to why those {} formatting styles from noted celebrity UC Berkeley programmers were chosen as the answers, so it can't figure out the technical why and just recites the final answer that was given to it at some point or another in the past.

Am I wrong to think it should have the capability to problem solve in the way that I was led to believe this AI could, rather than do a DB lookup? I guess so.

The professor got better results, about 75%. Take note that his questions were well known physics theorems with the logical mathematical relations stated in those theorems. So the AI is demonstrating it can convert English queries into typical database lookups, and look up and apply math tables and equations, quite well.

    I've not tried querying in another language either because my language skills outside of English are barely intermediate level at best and only in the best of fortuitous circumstances. 

    Others have complained that these AI engines are too weighted to be dependent on "subject matter expert celebrities" database look ups and not actually looking at the problem from a naive perspective and trying to solve it independently. Would it get the answer right or wrong? I am going to agree with them (the complainers) for the most part.

    Have you considered that it made a decision about the context in which you wanted the answer? I would not be surprised if it knew about all the issues you mentioned, individually, but came to the conclusion that that was probably not what you were interested in. I've encountered many instances when I was not satisfied, for whatever reason, with a particular response and found that it had a whole lot more to say after I asked it to expand on a certain aspect.

    And I appreciate that it is natural to use things we do understand to explain things that we don't, but it's sort of a disservice to keep saying "database lookups". That is not how neural nets encode nor retrieve information at all, and helps to complicate and misguide the coming legal debate on IP.

I gave it context. I was hoping that it was programmed to look at information about objects given to it in a query, as one of the creatures, in this case a human, would look at the information in order to use those objects to accomplish a goal. Instead it looked up a database entry about some UC Berkeley programmers. If it has facts stored somewhere and retrieves them, then no matter whether it's SQL or some other fact lookup language, it's a database lookup. That's not a bad thing; people's brains must do the same, and even if scientists don't understand how that information is stored in our brains, it is still information retrieval. This AI should have something similar to neural plasticity with regard to its approach to problem solving, but I'm not sure how to see that in action.

I will find some old college math homework at different grade levels and see how it handles that. It should be interesting. There is a YouTube video about Wolfram Alpha & ChatGPT that I haven't watched yet. Maybe it will tell me something helpful in that endeavor. I'm not in a hurry to, though. I will definitely mess around more with it as it improves. It doesn't bring enough value to me to pay to subscribe to it, though, not yet.

  • WendyLuvsCatzWendyLuvsCatz Posts: 38,484

    TheMysteryIsThePoint said:

    WendyLuvsCatz said:

    Shub-Niggurath

    Ha ha, Wendy, why would you do that

     

    always using cthulhu gets boring after a while 

  • generalgameplayinggeneralgameplaying Posts: 517
    edited January 2023

Well, it does use the images from training, perhaps scaled to a unified size to start with; technically it's not like a collage, but it moves from fully noisy to an actual image, based on the text prompt as interpreted by a language model. However, it can still end up reproducing parts of original images almost 1:1. That's probably not the biggest issue with it; it probably isn't one most of the time.
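A minimal sketch of the shape of that "fully noisy to an actual image" process. The `denoise_step` function here is a hypothetical stand-in for the trained network, and `GUIDANCE` stands in for a text-prompt embedding; the real models are vastly more complex:

```python
import random

GUIDANCE = [0.2, 0.8, 0.5, 0.1]   # stand-in for a text-prompt embedding (hypothetical)

def denoise_step(pixels, guidance, t):
    # Stand-in for the trained denoiser network: each step removes a little
    # "noise" by nudging every value toward what the guidance suggests.
    return [p + 0.1 * (g - p) for p, g in zip(pixels, guidance)]

def generate(guidance, steps=50, size=4, seed=0):
    rng = random.Random(seed)
    pixels = [rng.gauss(0, 1) for _ in range(size)]   # start from pure noise
    for t in reversed(range(steps)):                  # iterate: noisy -> image
        pixels = denoise_step(pixels, guidance, t)
    return pixels

img = generate(GUIDANCE)
print(img)   # values converge toward the guidance; nothing is pasted in from storage
```

The point of the sketch: no stored image is ever looked up or pasted, yet a strongly specific guidance can still steer the output very close to something it was trained toward, which matches both halves of the argument in this thread.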

     

The real issues are the abuse of people against their consent, possibly rendering copyright and licensing void for the context, in addition to the almost certain but still somewhat distant danger that the system will not have enough genuine training data anymore, because no one will want to feed it anymore (unless you force many, many people to do so at gunpoint), or not enough people will even be able to, so in the end the system is left to stagnate, having achieved destruction on a societal scale. Yes, it has inspired many, but think again if you want to use this as a precedent for all future uses of such AI, because the ruinous potential will be there for every context or job that it will "destroy" in this manner. Relying on some future technology that will then generate something out of nothing, later on: good luck, mankind!

    (Don't get me wrong, this is not about bashing on ai in general. It's about replacing abilities or processes, stagnation, and the "What then?"-question, in the context of the "What now?"-question.)

    Post edited by generalgameplaying on
  • FirstBastionFirstBastion Posts: 7,822

    Case in point:  Enjoy digitizing this.

     

     

[attached image: notoAIart.jpg, 1517 × 1600, 137K]
  • FirstBastionFirstBastion Posts: 7,822

    generalgameplaying said:

Well, it does use the images from training, perhaps scaled to a unified size to start with; technically it's not like a collage, but it moves from fully noisy to an actual image, based on the text prompt as interpreted by a language model. However, it can still end up reproducing parts of original images almost 1:1. That's probably not the biggest issue with it; it probably isn't one most of the time.

     

The real issues are the abuse of people against their consent, possibly rendering copyright and licensing void for the context, in addition to the almost certain but still somewhat distant danger that the system will not have enough genuine training data anymore, because no one will want to feed it anymore (unless you force many, many people to do so at gunpoint), or not enough people will even be able to, so in the end the system is left to stagnate, having achieved destruction on a societal scale. Yes, it has inspired many, but think again if you want to use this as a precedent for all future uses of such AI, because the ruinous potential will be there for every context or job that it will "destroy" in this manner. Relying on some future technology that will then generate something out of nothing, later on: good luck, mankind!

    (Don't get me wrong, this is not about bashing on ai in general. It's about replacing abilities or processes, stagnation, and the "What then?"-question, in the context of the "What now?"-question.)

    Many artists have left Deviant Art and Artstation for this very reason. No respect to the creative artist,  no more support for the site that disrespects them. 

  • MelissaGTMelissaGT Posts: 2,611

    FirstBastion said:

    generalgameplaying said:

Well, it does use the images from training, perhaps scaled to a unified size to start with; technically it's not like a collage, but it moves from fully noisy to an actual image, based on the text prompt as interpreted by a language model. However, it can still end up reproducing parts of original images almost 1:1. That's probably not the biggest issue with it; it probably isn't one most of the time.

     

The real issues are the abuse of people against their consent, possibly rendering copyright and licensing void for the context, in addition to the almost certain but still somewhat distant danger that the system will not have enough genuine training data anymore, because no one will want to feed it anymore (unless you force many, many people to do so at gunpoint), or not enough people will even be able to, so in the end the system is left to stagnate, having achieved destruction on a societal scale. Yes, it has inspired many, but think again if you want to use this as a precedent for all future uses of such AI, because the ruinous potential will be there for every context or job that it will "destroy" in this manner. Relying on some future technology that will then generate something out of nothing, later on: good luck, mankind!

    (Don't get me wrong, this is not about bashing on ai in general. It's about replacing abilities or processes, stagnation, and the "What then?"-question, in the context of the "What now?"-question.)

    Many artists have left Deviant Art and Artstation for this very reason. No respect to the creative artist,  no more support for the site that disrespects them. 

    Both ArtStation and DeviantArt did add the 'NoAI' indicator to the account settings, however that will do nothing to stop people from just saving whatever images they want anyways...or just scraping from google images. But that's not really the fault of ArtStation and DeviantArt. Nothing will stop thieves from thieving, sadly. We can only hope that litigation will shut the heavy hitters like Midjourney down...or heavily gimp them by making them need to completely rebuild from scratch with opt-in and/or open source images only.  

  • algovincianalgovincian Posts: 2,633

    As some of you know, I've been designing/training my own neural nets for use in NPR (Non-Photorealistic Rendering) for decades. In my case, the networks are designed to make specific, narrow decisions throughout the workflow. Eliminating the need for human interaction and decision making along the way is precisely what has allowed the process to be automated. This automation in turn opens up new ways of thinking about how to solve problems that were previously too tedious to even consider as viable solutions.

    I encourage people to take the time to enroll in some online classes, do some reading, watch some videos, or otherwise educate themselves on exactly what it is that these networks are actually doing and how they work. Learn a bit about concepts such as convolution, minimizing cost/loss functions, the whole idea of deterministic vs. iterative, parallel computing, etc. 

Do neural nets really "learn"? Or are they just clever algorithms designed to efficiently minimize loss functions using iterative processes, all made possible at a scale never seen before by modern computer hardware?
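Those concepts can be made concrete with a toy example: gradient descent iteratively minimizing a simple loss function. This is only the skeleton of the idea (one parameter, a hand-picked loss), not a real training loop:

```python
# Gradient descent on the toy loss L(w) = (w - 3)^2, whose minimum is at w = 3.
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)          # derivative of the loss

w = 0.0                             # arbitrary starting parameter
lr = 0.1                            # learning rate
for _ in range(100):                # iterative minimization, nothing more
    w -= lr * grad(w)

print(round(w, 4))                  # -> 3.0  (loss driven to ~0)
```

Whether you call the loop above "learning" or "just minimizing a loss function" is exactly the philosophical question being posed.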

    Answering these questions first may further inform your opinion on some of the philosophical/legal issues being discussed.

    - Greg

  • wolf359wolf359 Posts: 3,834
    edited January 2023

    We can only hope that litigation will shut the heavy hitters like Midjourney down...or heavily gimp them by making them need to completely rebuild from scratch

     

     

Keep hoping ;)
    AI generated art cannot be copyrighted because an AI does NOT  have legal “personhood”

    there are literally millions of new images  being produced per day that belong to no one.

    Those can be used for new training data .

    So any notion of even one of the big AI companies being “shut Down” by “starving” the AI is pure fantasy.

Midjourney reportedly viewed 300 TB of online images to create their 5 GB training checkpoint (CKPT).
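Taking those reported figures at face value (they are unverified numbers from the thread), the ratio works out as:

```python
viewed_bytes = 300 * 1000**4        # 300 TB of images reportedly viewed
checkpoint_bytes = 5 * 1000**3      # ~5 GB resulting checkpoint

ratio = viewed_bytes / checkpoint_bytes
print(f"~{ratio:,.0f} : 1")         # -> ~60,000 : 1
```

The checkpoint is roughly 0.002% the size of what was viewed, which is the same "the weights are not the images" point made earlier in the thread.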

how many terabytes of non-copyrightable AI images already exist as of 20 Jan 2023?.. one wonders

Best case scenario is that a court orders a "clean" training data set in some future ruling, and after months or years of appeals etc., how many terabytes of non-copyrightable AI images will exist by then?

    And before anyone declares them “fruit of the poison tree” well there  is that existing court ruling that declared AI art as essentially belonging to no one .

    Game over.

     

     

and if the process is found to violate copyright then those images will also violate copyright. Even if they are not themselves protected it is still possible for them to infringe, and so for anything else derived from them to inherit that infringement. It would depend, no doubt, on the fine detail of any judgements.

     

     

Well, since there is no metadata in the AI generated images on the internet, you cannot legally determine where the new training data images online came from, particularly images altered in Photoshop to fix flaws etc.
Also, there is a legal percentage of derivation that makes something an original work, so each new AI image would have to be examined to determine which ones do not meet the threshold… not practical

so... no, a "fruit of the poison tree" argument will likely fail

Game over.

     

    Post edited by wolf359 on
  • generalgameplayinggeneralgameplaying Posts: 517
    edited January 2023

    "no-AI indicator" - as good as it gets ;) ... opt-out still is a form of abuse, though you would have some choice, at least. Especially note that: you opt out now, but the question is unanswered yet, if already scraped images, due to opt-out always being "late", even if you checked it,  could still remain in the training data. See next point too.

    "what with pirated images" - this will happen. It will also happen with modified generated images, that then are flagged as "ok for ai". In general scrapers MUST respect DMCA and change of consent, at least law must resolve this. If you remove copyright AND licensing here, we're obviously in a highly abusive overall scenario, because if you don't want to participate, you can't publish and even further, if you sell your stuff, the results either can't be published or will get used by scraping as well. Imaginable yes, stupid yes.

    "output as training data" - Even if "reviewed" by humans, this is ruinous for the machine. It can not work :). You can train another ai with the image output of this one, but not itself. The "images from the net to create training data" does not mean, they made random outputs as training data, that is a misconception. I mean you could try, but it will end in a wrong turn. No solving the question would be like relying on some future tech, not yet invented. The training data question must be answered on ground of theory (maybe it has been?). While there are "some of the brightest minds" working on it, i have not heard much about fundamental questions like this, other than from the foggy realms. Perhaps i am not deep enough into it, and most articles and interviews cover just the obvious question of "change", like which jobs might get replaced, and what do we do in a society with only a hand full of jobs. The question then is, if you have a route back at all, or where you would go from there, if the the training data "turns sour", in terms of applicability, and you don't have much of training data available anymore. This could become worse than the worst the cloud can do.  These are not just nice to answer questions for later on.  In some cases we need to know beforehand, if we're trying to violate law of nature or maybe just math? Imagine basing the "Journy to the Mars" on such an approach... who wants to go first?

    -> "if that's not relevant" - i still question, how society is going to benefit with such abuse. I strongly doubt it. Further questions for a cloud service are obvious too, in terms of information control ... controlling mainstream, automatic filtering. Some of that can be answered later, but the fundamental questions should be done early.

     

(Example of "experiments with society": if you build a new type of rocket to go to Mars, you know that a) it has been done before, b) there are good reasons to assume that, by burning a lot of money, you might do better than past approaches, and c) if you fail, YOU fail, and/or a couple of investors, or just that project, at best/worst.)

    (Training data "turns sour": Of course the images on hard disk don't turn sour, but for artwork you might need new input, for humans to still want to use it on the one hand, but also to justify "removing artists from society". Still you will eventually have more generated images than genuinely created by humans ones, as training data. While this scenario is pretty certain, do you have a counter argument not relying on not  yet invented future tech? Relevance is even higher with other fields of application.)

(DMCA: effectively, the following would be the most sane thing to do: a) explicit consent with standardized metadata only, and b) images removed from the location they had been scraped from must also be removed from the training data, due to indistinguishability; OR you start a database of every image that ever infringed or got removed due to DMCA... uh oh... think international?)
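A hedged sketch of how a well-behaved scraper might honor such an opt-out signal before adding a page's images to a training set. The "noai"/"noimageai" directive names follow the convention DeviantArt announced; the parsing itself is purely illustrative, not a real crawler:

```python
import re

def may_train_on(html: str) -> bool:
    """Return False if the page carries an AI-training opt-out directive."""
    # Scan <meta name="robots" ...> tags for "noai" / "noimageai" tokens.
    for tag in re.finditer(r'<meta[^>]*name=["\']robots["\'][^>]*>', html, re.I):
        content = re.search(r'content=["\']([^"\']*)["\']', tag.group(0), re.I)
        if content:
            tokens = {t.strip().lower() for t in content.group(1).split(",")}
            if tokens & {"noai", "noimageai"}:
                return False
    return True

opted_out = '<html><head><meta name="robots" content="noimageai, noai"></head></html>'
print(may_train_on(opted_out))                        # -> False
print(may_train_on("<html><head></head></html>"))    # -> True
```

Of course, as the thread points out, such a check only constrains scrapers that choose to respect it; it does nothing against someone who simply saves the image.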

    Post edited by generalgameplaying on
  • wolf359 said:

    We can only hope that litigation will shut the heavy hitters like Midjourney down...or heavily gimp them by making them need to completely rebuild from scratch

     

     

Keep hoping ;)
    AI generated art cannot be copyrighted because an AI does NOT  have legal “personhood”

    there are literally millions of new images  being produced per day that belong to no one.

and if the process is found to violate copyright then those images will also violate copyright. Even if they are not themselves protected it is still possible for them to infringe, and so for anything else derived from them to inherit that infringement. It would depend, no doubt, on the fine detail of any judgements.

    Those can be used for new training data .

    So any notion of even one of the big AI companies being “shut Down” by “starving” the AI is pure fantasy.

Midjourney reportedly viewed 300 TB of online images to create their 5 GB training checkpoint (CKPT).

how many terabytes of non-copyrightable AI images already exist as of 20 Jan 2023?.. one wonders

Best case scenario is that a court orders a "clean" training data set in some future ruling, and after months or years of appeals etc., how many terabytes of non-copyrightable AI images will exist by then?

    And before anyone declares them “fruit of the poison tree” well there  is that existing court ruling that declared AI art as essentially belonging to no one .

    Game over.

  • generalgameplayinggeneralgameplaying Posts: 517
    edited January 2023

    wolf359 said:

    there are literally millions of new images  being produced per day that belong to no one.

    Those can be used for new training data .

For training what? You can train your car with it, or other (detection) algorithms, in theory. However, training the generative art AI with its own output, even if modified slightly to avoid detection as such, should be considered a ruinous scenario. Especially after you've (hypothetically) destroyed the then-classic artwork market with it, you'll find it an interesting situation, going back to the art of 2022 and before, to "avoid" stagnation. Even worse, back in 2022 they took art AI as a precedent for all other sorts of uses, which have far worse impact when facing the same kind of problems (hypothetically).

    Post edited by generalgameplaying on
  • kyoto kidkyoto kid Posts: 41,198

    AI generated art cannot be copyrighted because an AI does NOT  have legal “personhood”

...corporations now have full legal "personhood" as ruled by the Supreme Court on several occasions, the latest being 2012. Though AI is still relatively new and in its infancy, it wouldn't be surprising if the same were granted in its case given time.

  • outrider42outrider42 Posts: 3,679

    SnowSultan said:

    Not about composing an entire piece from parts from multiple different pieces.

    IT

    DOES

    NOT

    DO 

    THAT.

     

    Seriously, I'm done now. This is like arguing with conspiracy theorists.

    It has been explained numerous times at this point.

    Simple question: If you remove the training data from the AI, will it work as well?

    And why do you believe a machine has any right whatsoever to use any of your data? The argument of whether a work is transformative from the original is not valid here. The AI does not possess this right. You and I possess this right, but the AI is not a person.

It is true some websites have terms where such rights are waived, but that doesn't change the situation here. The training data is confirmed to come from specific websites that give users these rights. Aside from this, courts have ruled against restrictive EULAs many times, so a EULA is not necessarily a binding contract. Keep in mind that both Stable Diffusion and Midjourney state that local laws take priority over their EULA.

    wolf359 said:

    We can only hope that litigation will shut the heavy hitters like Midjourney down...or heavily gimp them by making them need to completely rebuild from scratch

     

     

    Keep hoping wink
    AI generated art cannot be copyrighted because an AI does NOT  have legal “personhood”

    there are literally millions of new images  being produced per day that belong to no one.

    Those can be used for new training data .

    So any notion of even one of the big AI companies being "shut down" by "starving" the AI is pure fantasy.

    Midjourney reportedly viewed 300 TB of online images to create their 5 GB training checkpoint (ckpt).

    how many terabytes of non-copyrightable AI images already exist as of 20 Jan 2023?.. one wonders
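    The size gap that post points at can be sanity-checked with back-of-the-envelope arithmetic. The sketch below uses the thread's own figures (a ~5 GB checkpoint, ~300 TB of source images) plus an assumed LAION-scale count of roughly 5 billion training images; all three numbers are rough assumptions, not measurements.

```python
# Rough check: a ~5 GB checkpoint cannot literally contain the training images.
checkpoint_bytes = 5 * 10**9          # ~5 GB model checkpoint (thread's figure)
training_set_bytes = 300 * 10**12     # ~300 TB of scraped images (thread's figure)
num_images = 5 * 10**9                # ~5 billion images (assumed LAION-scale count)

# Average model capacity available per training image, if the weights
# were naively treated as an archive of the images.
bytes_per_image = checkpoint_bytes / num_images

# Overall size ratio between the scraped data and the checkpoint.
compression_ratio = training_set_bytes / checkpoint_bytes

print(f"{bytes_per_image:.2f} bytes of model weights per training image")
print(f"{compression_ratio:,.0f}:1 overall size ratio")
```

At roughly one byte of weights per image and a ~60,000:1 overall ratio, the checkpoint is orders of magnitude too small to store copies of the images; whatever the weights encode, it is not an archive of the training set.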

    Best case scenario: a court orders a "clean" training data set in some future ruling, and after

    months or years of appeals etc., how many terabytes of non-copyrightable AI images will exist by then?

    And before anyone declares them "fruit of the poison tree", well, there is that existing court ruling that declared AI art as essentially belonging to no one.

    Game over.

     

     

    And if the process is found to violate copyright, then those images will also violate copyright. Even if they are not themselves protected, it is still possible for them to infringe, and so for anything else derived from them to inherit that infringement. It would depend, no doubt, on the fine detail of any judgements.

     

     

    Well, since there is no metadata in the AI-generated images on the internet, you cannot legally determine where the new training-data images came from, particularly images altered in Photoshop to fix flaws etc.
    Also, there is a legal percentage of derivation that makes something an original work, so each new AI image would have to be examined to determine which ones do not meet the threshold... not practical.

    So... no, a "fruit of the poison tree" argument will likely fail.

    Game over cool

    You sound like you only want to cause chaos in the system. The system already has plenty of chaos, and you seem to ignore just how vicious copyright law can be. The lawsuits have started flowing, and the floodgates are opening fast. I am not sure where your confidence comes from. This will all get sorted out in time, as country after country consistently limits what the AI can do. The AI companies are getting sued before they can really make a profit. They may not have the money to defend themselves in every case that pops up. They may even get bankrupted before finishing their projects. Boy, that sure would throw a wrench into some people's AI dreams. They don't even have to lose the cases; if they lose money by constantly going to court, they may have to just cut their losses. Has anybody considered this possibility? These companies might get litigated into oblivion before they can get started. These companies are not huge megacorps just yet. They are only starting to get back money from what they have put in.

    Midjourney, according to their CEO, has only made money since August. Do you think they have built up enough of a war chest in just a few months? Stable Diffusion has $100 million invested into it, but as far as I can tell, it has not become profitable yet. And if things go south, it could lose those investors if they decide it isn't worth it. Stable Diffusion has spent over $50 million just getting up and running. So things could get ugly for these companies if lawsuits start to eat away at their earnings.

    Training AI with its own output doesn't even make sense. Besides that, like Richard said, an image lacking copyright doesn't make it fair game to use. If the image infringes on copyright, it cannot be used, period. The loophole you are looking for doesn't exist.

    Talk about game over, LOL.

    As mentioned above, the percentages of what makes a work legally transformative are designed for HUMANS. Remember, as has been said multiple times, AI is not a human. Just in case anybody forgot this detail, it is kind of important to point it out. I hate that I have to point this out so much, but some people do seem to forget (including some 'youtube experts'). The AI does not have this same right when it comes to judging how much an image must be changed to avoid copyright. If a machine is using a copyrighted work in ANY capacity, whether it is 100% or 0.1%, that work needs explicit permission to be used by a machine. It is pretty simple, really.

    Any argument about AI needs to first understand that AI has no human rights. If you train a monkey to paint, it doesn't matter how good it gets, the pictures are not copyrightable. This has already been established, and in turn it established what rights a non-human has in copyright matters...um...none.

  • PixelSploitingPixelSploiting Posts: 898

    Nothing as spectacular as shutting down AI image-generating sites will occur. This is most likely going to be solved in out-of-court settlements.

    Probably there's going to be some kind of confirmation that if it's AI-generated it cannot be copyrighted unless it was trained on licensed images. Stock image sellers might even make some more coin licensing their images for use by the AI. There will be some overhead cost for AI-made pics because of this, at least if they are intended for commercial use. Artists will have the ability to legally protect their art from being used for this, etc.

    The world will keep turning, everyone will be happy. Or at least not as unhappy as before.

    This entire trend with AI images and chats was started as a means to train AIs and develop neural networks in general. It's going to continue because it's needed for more than entertainment.

  • JazzyBearJazzyBear Posts: 805

    Indeed it is needed for them to be able to take over the world and end all human life ! LOL

  • generalgameplayinggeneralgameplaying Posts: 517
    edited January 2023

    PixelSploiting said:

    Nothing as spectacular as shutting down AI image generating sites will occur. This is most likely going to be solved in out of court settlements

    Probably there's going to be some kind of confirmation that if it's AI-generated it cannot be copyrighted unless it was trained on licensed images. Stock image sellers might even make some more coin licensing their images for use by the AI. There will be some overhead cost for AI-made pics because of this, at least if they are intended for commercial use. Artists will have the ability to legally protect their art from being used for this, etc.

    The world will keep turning, everyone will be happy. Or at least not as unhappy as before.

    This entire trend with AI images and chats was started as a means to train AIs and develop neural networks in general. It's going to continue because it's needed for more than entertainment.

    Well, the stock license sellers... but only the platform; that will be abusive again. There is no sustainable way to pay all the artists who contributed if you scrape large amounts of data. This will just be more "special contracts" for a few people. The simple example is as follows: take it as an incentive to cheat the system. Generate an image with the system, modify it cheaply, and allow it for AI training. Maybe switch platforms between AI manufacturers, so it's less obvious. Can anyone imagine getting more out of it than the few cents (or fractions of a cent) that generating the image costs you? I can't. This isn't like a music site, where you know which songs are downloaded or streamed.

    I do agree that they could simply make it strictly explicit opt-in only. Platforms that screw over their users will be abandoned or further sued.

     

    I'm not sure how far the world keeps turning. If AI really does effectively replace the basic abilities that are needed to produce the training data in the first place, then you need some kind of miracle at some point. It's simply an invariant of sustainability, like not destroying a planet that you can't leave (forever) at will.

    Of course they are training general stuff, much of it related to language models, which promise to have many uses in all sorts of business areas. Images, videos, text... are also business fields per se, and not too small. Maybe they are training for getting sued, and seeing where that goes, because lawmaking couldn't be *** to set the rules in time.

    Post edited by generalgameplaying on
  • PerttiAPerttiA Posts: 10,024

    generalgameplaying said:

    I'm not sure how far the world keeps turning. If AI really does effectively replace the basic abilities that are needed to produce the training data in the first place, then you need some kind of miracle at some point. It's simply an invariant of sustainability, like not destroying a planet that you can't leave (forever) at will.

    Of course they are training general stuff, much of it related to language models, which promise to have many uses in all sorts of business areas. Images, videos, text... are also business fields per se, and not too small. Maybe they are training for getting sued, and seeing where that goes, because lawmaking couldn't be *** to set the rules in time.

    Wouldn't AI be the perfect lawyer? It has access to every law and all the previous cases there are and have ever been; or maybe use it to judge the case as well cheeky

  • PerttiA said:

    Wouldn't AI be the perfect lawyer? It has access to every law and all the previous cases there are and have ever been; or maybe use it to judge the case as well cheeky

    That would show confidence in their product... 

    You could then make law human-readable, to narrow the gap.

  • wolf359wolf359 Posts: 3,834
    edited January 2023

    I am not sure where your confidence comes from

     

    @outrider42
    It comes from history.

    The MP3 format (legal or otherwise) outlived Napster.

    Text-to-image diffusion, as a technology, is already in the hands of laypeople in the form of OPEN SOURCE, local installs of Stable Diffusion, and can thus outlive Midjourney.

    Like MP3s, like torrent clients, this tech will NOT be un-invented.

    Right now people are likely stockpiling their AI images.

    (I have several hundred myself) devil

     

    If the major companies were court-ordered to create "clean" training data, they could crowdsource the new art for pennies per image, and there is no practical way to prove that the anonymous "artists" selling them new training images did not "create" them themselves, particularly if they have basic Photoshop skills to clean up known flaws.

     

    The scarcity that gave value to human-created still illustration will never be recaptured,

    no matter what happens in American courts.

    People in America, and most certainly other parts of the world, will never need to hire a still illustrator again if they choose not to.

    Game over

     

     

    Post edited by wolf359 on
  • ArtiniArtini Posts: 9,661

    The MP3 format is a very good point.
    If people complain so much about the current development of AI-generated images,
    the next generation could have AI-controlled drones, taking pictures themselves,
    to avoid copyright issues.
    One cannot complain too much about innovations...

     

     

  • PixelSploitingPixelSploiting Posts: 898
    edited January 2023

    There might not be any copyright issues if AI output ends up not having any copyright protection. Of course there's not going to be much practical reason for making it if it can't be copyrighted, but the art market is far less about the art and far more about the ownership. Else we wouldn't have bananas taped to walls and NFTs.

    Post edited by PixelSploiting on
  • kyoto kidkyoto kid Posts: 41,198

    ...yes

This discussion has been closed.