Remixing your art with AI


Comments

  • SnowSultan Posts: 3,596

    It actually IS stolen art, photos, graphics, etc

    It still isn't stolen art as the anti-AI crowd likes to call it because the images themselves are not being duplicated in any recognizable form. Like I've said before, this is basically the equivalent of a human artist using art they find online as reference material or learning aids, just millions of times faster.

    Also, as Midjourney admits, how on earth were they supposed to compensate millions of artists when they had no way of knowing who they were or which images were scraped? How much would go to each artist if they could, a tenth of a cent? If so, shouldn't artists who take commissions have to pay a character's original creator when they're paid to draw fanart? I know two artists who do spot-on commissions of Disney characters and I'm pretty sure they're not passing along any of their commission fees to Disney.

    Not everything about AI art is squeaky clean, but there's a lot of hypocrisy going around.

  • generalgameplaying Posts: 517
    edited December 2022

    Richard Haseltine said:

    If AI-generated imagery can be protected by copyright and/or if the original copyright persists through generation, then I can certainly imagine big IP companies making at least some use of the technology, trained on their own assets, if only for one-shot and incidental imagery.

    Copyrightable, or even if not: it probably only needs to be easy enough to produce in bulk, if you have the resources to start with. Then I suggest thinking of the patent system, where you don't create "a song to perform and earn money from performances"; instead you write general defensive texts that cover as wide an area as possible, both to prevent others from doing something similar and to obscure what the thing you intend to protect actually is. It goes further, in that some patents are created for the sole purpose of blocking competitors' progress, or making them pay licensing fees or similar. Think of that with AI-generated art and music, where humans need not add many touches, and have it copyrighted... "ice planet earth".

    Post edited by generalgameplaying on
  • FSMCDesigns Posts: 12,755

    SnowSultan said:

    It actually IS stolen art, photos, graphics, etc

    It still isn't stolen art as the anti-AI crowd likes to call it because the images themselves are not being duplicated in any recognizable form. Like I've said before, this is basically the equivalent of a human artist using art they find online as reference material or learning aids, just millions of times faster.

    Also, as Midjourney admits, how on earth were they supposed to compensate millions of artists when they had no way of knowing who they were or which images were scraped? How much would go to each artist if they could, a tenth of a cent? If so, shouldn't artists who take commissions have to pay a character's original creator when they're paid to draw fanart? I know two artists who do spot-on commissions of Disney characters and I'm pretty sure they're not passing along any of their commission fees to Disney.

    Not everything about AI art is squeaky clean, but there's a lot of hypocrisy going around.

    The issue is you are looking at it from a user's perspective and seeing its positive possibilities for results and your workflow. I have seen some great results, but I know all too well how people are when it comes to stealing and piracy, so I see the negative aspects first and foremost, and I see how many are promoting it and showing how to use it in negative ways, so I wish it didn't exist. Hopefully it will be deemed not commercially viable enough that it stays low on, or off, the radar of those willing to exploit it and the work of others to make a quick buck.

  • SnowSultan said:

    It actually IS stolen art, photos, graphics, etc

    It still isn't stolen art as the anti-AI crowd likes to call it because the images themselves are not being duplicated in any recognizable form. Like I've said before, this is basically the equivalent of a human artist using art they find online as reference material or learning aids, just millions of times faster.

    Also, as Midjourney admits, how on earth were they supposed to compensate millions of artists when they had no way of knowing who they were or which images were scraped? How much would go to each artist if they could, a tenth of a cent? If so, shouldn't artists who take commissions have to pay a character's original creator when they're paid to draw fanart? I know two artists who do spot-on commissions of Disney characters and I'm pretty sure they're not passing along any of their commission fees to Disney.

    Not everything about AI art is squeaky clean, but there's a lot of hypocrisy going around.

    It's "stolen" in the colloquial sense of taken/used without permission. Period.  And their argument that they couldn't possibly obtain the needed permissions is like me saying I couldn't possible obtain permission/license from some IP holder so I get to use it anyway.

    And did you just say you knew people making money off Disney's IP without compensating Disney?  Just because people *may* be violating IP laws doesn't mean it's OK to come up with new ways to do that.

    "AI", misnomer that it is, isn't going to disappear. No one needs to worry about that. As it matures and is introduced into more products there will, I suspect, be litigation and eventually changes to laws to take it's potential and perculiarities into account within the context of intellectual property and licensing.  When there's new legal guidance companies will simply make changes to their products and licensing and they will continue to make money. 

    What bothers me the most about the way this commercial text to image implementation was handled is that it's such an unnecessarily botched expansion of the technology ("AI") and a missed opportunity for learning. (I mean people learning not machine learning).

     

  • No one expected them (or stable diffusion, or any of the others) to track down every artist that was scraped in the LAION database. The point is they SHOULDN'T HAVE USED IT IN THE FIRST PLACE. Period. I'm fine giving people the benefit of the doubt if they don't know how the current major models got their training data sets, but the information is easy to find.
  • SnowSultan Posts: 3,596

    Fine, not going to argue with you guys any longer. I should know by now that arguing with anyone on the internet may be the most pointless activity in all of human history.

    Adapt or be left behind. Beep boop.

  • csaa Posts: 820

    In my opinion lawmakers really need to create something meaningful here,

    …which “Law makers” will have the global jurisdiction to implement & enforce their “meaningful” legislation to cherry-pick training data already in the hands of private entities?

    However you slice and dice it, laws have little meaning beyond the weight of their words ... in the absence of law enforcement.

    Judging by the heat coming off of this discussion thread, it sounds like we'll need laws enforced with extreme prejudice! 😈

  • AlmightyQUEST said:

    No one expected them (or stable diffusion, or any of the others) to track down every artist that was scraped in the LAION database. The point is they SHOULDN'T HAVE USED IT IN THE FIRST PLACE. Period. I'm fine giving people the benefit of the doubt if they don't know how the current major models got their training data sets, but the information is easy to find.

    Exactly.  Imagine if they had instead worked with art museums, colleges/universities, and even the art community at large by saying: we want your help training our machine-learning models, and here are the terms for our use of your input. I'm certain a lot of budding artists would have loved the chance to volunteer at least some of their art and time for training, even without financial compensation, just so they could mention it on their vitae or portfolio or whatever. I expect a fair number of established artists would as well.  And a number of curators who'd boost their exposure by contributing their interpretations of art in the public domain. College and university art departments probably would have helped contribute, especially if the request came along with a grant. :-)  AI tools could help students compare and contrast the different art movements and the works of different artists; just feed an image into img2img and specify different output styles/whatever. Tools that could look at an image and analyze it in terms of similarity to different artists, movements, and media would be wonderful.
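    That last idea is already roughly buildable with openly released models. A minimal sketch, assuming a local Python setup with the Hugging Face transformers library and its public CLIP checkpoint; the label list and the file name are placeholders for illustration:

```python
# Sketch: score one image against a handful of art-movement labels with CLIP.
# Assumes: pip install torch transformers pillow; "my_render.png" is any local image.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = [
    "an impressionist painting",
    "a baroque portrait",
    "an art nouveau illustration",
    "a 1990s anime cel",
    "a 3D render",
]

image = Image.open("my_render.png").convert("RGB")
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into
# a rough "how much does this image resemble each label" ranking.
probs = outputs.logits_per_image.softmax(dim=1)[0]
for label, p in sorted(zip(labels, probs.tolist()), key=lambda x: -x[1]):
    print(f"{p:5.1%}  {label}")
```

    That is obviously a far cry from a curated educational tool, but it is the basic mechanism such a comparison feature would likely build on.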

  • generalgameplaying Posts: 517
    edited December 2022

    RangerRick said:

    AlmightyQUEST said:

    No one expected them (or stable diffusion, or any of the others) to track down every artist that was scraped in the LAION database. The point is they SHOULDN'T HAVE USED IT IN THE FIRST PLACE. Period. I'm fine giving people the benefit of the doubt if they don't know how the current major models got their training data sets, but the information is easy to find.

    Exactly.  Imagine if they had instead worked with art museums, colleges/universities, and even the art community at large by saying: we want your help training our machine-learning models, and here are the terms for our use of your input. I'm certain a lot of budding artists would have loved the chance to volunteer at least some of their art and time for training, even without financial compensation, just so they could mention it on their vitae or portfolio or whatever. I expect a fair number of established artists would as well.  And a number of curators who'd boost their exposure by contributing their interpretations of art in the public domain. College and university art departments probably would have helped contribute, especially if the request came along with a grant. :-)  AI tools could help students compare and contrast the different art movements and the works of different artists; just feed an image into img2img and specify different output styles/whatever. Tools that could look at an image and analyze it in terms of similarity to different artists, movements, and media would be wonderful.

    For an initial excuse (in the case of Stable Diffusion), I would note the following points:

    - Science collaboration with a university.

    - Consent from the social networks that the images were scraped from (uh, oh: consent from the networks, not from the users).

    - Made publicly available for free, as open-source software. (The various training data sets are probably a trickier detail.)

    - Could see it as an advance in science made public. 

    - Society may be interested in the thing, starting the discussion.

     

    From there on, though, I expect no less than that societal discussion. Otherwise, we will just iterate Facebook+Amazon+Uber another time, without having used our brains at all...

    Post edited by generalgameplaying on
  • tsroemi Posts: 2,742
    edited December 2022

    Speaking as someone who likes to create (and sell) stories and pictures on the one hand, and who teaches law at uni on the other, AI and what it can, should, and shouldn't do seem like very complex issues to me. I don't think easy answers like 'adapt or be left behind' or 'it's all just stolen and no art at all' will work here. Nor will they serve us well in the long run, as creators ourselves. Here are some of my current thoughts on the matter:

    Clearly, there's much people enjoy about it, and there are lots of truly amazing pictures being made with AI or AI assistance already. And learning from looking at someone else's art, like the algorithms seem to be doing, is not equal to stealing it, the same way you're not breaking anyone's copyright when walking around in an art gallery or a museum.
    On the other hand, you do pay a fee to enter a museum; and if you enroll in a painting / writing / creating college course you will often pay tuition fees, or training with an artist directly will often also involve some kind of money and / or recognition being exchanged for the broadening of your creative horizon. This exchange, as far as I can see, is still lacking with AI, and this is already hurtful to artists whose images are being used to train the algorithms. It's especially true for artists who produce in a recognizable style, if that style can be reproduced by the AI when given a prompt related to that artist. These artists have worked hard to create this style, and you, by pressing the proverbial make-art button, simply have not; nor are you giving the artist anything in exchange for being able to use their style. The plain unfairness of this process should, all legal issues aside, be quite obvious to anyone, I think.

    But the problem goes beyond that. I looked at an image a couple of days ago in an online gallery and was amazed by the background's vivid colors and lush vegetation, the way everything swirled and trailed and shone so truly wondrously. But the artist in question hadn't created that background, Midjourney had. Now, if you put this in the credits of your image, I'm okay with it. If you don't, I'm not. Because you're making me believe that you yourself were able to create such beautiful, colorful swirls, and in fact, you were not. The AI is not really able to do that, either. It just remembers and remixes pictures from others. These others were the ones who created. And yes, sampling is a recognized art form too, but it is always identified as such. AI doesn't identify itself, and it doesn't give due credit at all.
    I do understand the drive to create stunning art, something beautiful and striking, and be recognized for it. And one can of course use all kinds of sources and materials, and one always learns so much from other people's creations. But if, say, all that people liked about a picture of mine was the stunning physique of the Mousso model I used, then they're admiring Mousso's art, not mine. If ALL that's in your picture is from somebody else, and it was mixed together not by you but by a machine - where exactly is your own creative input? Just the prompts? But words are not pictures, are they? 
     

    Soooo ... There should be some level of fairness for the original creators involved, which there isn't at the moment. The engines' creators also don't seem to be too interested in that by themselves; otherwise they would have sought consent. That's unfortunate and gives me NFT vibes. For this reason mainly, I'm not very likely to use AI anytime soon, although I do see original and beautiful uses it is being put to, like remixes with one's own renders and such. Besides, personally - if my art is liked by anyone, I want it to be MY art indeed that is liked ;-)

    Post edited by tsroemi on
  • Servant Posts: 759

    SnowSultan said:

    Fine, not going to argue with you guys any longer. I should know by now that arguing with anyone on the internet may be the most pointless activity in all of human history.

    Adapt or be left behind. Beep boop.

    It's not a matter of ADAPTATION. It's a matter of ETHICS AND MORALS.

    If you support the unethical and predatory practices displayed by most A.I. image generators, which went for the quick fix of feeding their generators millions of images without consent, and you're OK with that just so you can play with your input loops, that reflects more on you.

  • Sevrin Posts: 6,306

    SnowSultan said:

    Also, as Midjourney admits, how on earth were they supposed to compensate millions of artists when they had no idea of knowing who they were and what images were scraped?

    That's really the kind of question they should have asked before doing the scraping.  This is one of those cases where it seemed easier to ask for forgiveness rather than permission. 

  • wolf359 Posts: 3,828
    edited December 2022

    Private companies are already using this free, open-source software to train their AI on images they completely own (uber hardware required, apparently).
     


    The Lensa app does this remotely  and needs only 10 training images IIRC.

    So this is possibly where the future of this technology is headed.

    Big advertising agencies will invest in hardware and a few techs to train their LOCAL installs of Stable Diffusion etc.
    using their own private source images, and produce their own bespoke output without any need to hire illustrators.

    There are already private individuals training their AI on their own GPUs at home.
    This, IMHO, is the new paradigm for art creation going forward.

     

     

    Post edited by wolf359 on
  • nonesuch00 Posts: 18,130

    It's AI, so they've modeled it to be "inspired" by the experience of "seeing" other artwork. Saying AI can't do that is like saying I can't use the wheat flour I bought to make up a completely new bread recipe by looking at commonalities in all bread recipes. We all know the answer to that, and we don't worry about it because most all of those mom & pop bakeries where we buy bread were put out of business decades ago by corporate bakeries. It's legally no different.

  • AllenArt Posts: 7,169
    edited December 2022

    AI is kinda cool - at first, but in the end, the result still uses other people's work (if not directly). I'd rather do my own art myself :). Ironic, when most of the 3D models I use are also others' work. However, I model myself, and more and more of my own work is making it into my scenes, and I hope someday that ALL of it will be my own work. I'm not young (or particularly healthy) anymore though, so whether or not that will ever happen is a crapshoot. LOL.

    Edit: I should mention that even though I use others' work, I have compensated them for it, which isn't the case with AI. A lot of rich people are going to get even richer on the backs of people less fortunate than they are (what else is new?) over AI, and no one's even thought about how the artists whose work is being "used" are going to be compensated.

    Post edited by AllenArt on
  • generalgameplaying Posts: 517
    edited December 2022

    nonesuch00 said:

    It's AI, so they've modeled it to be "inspired" by the experience of "seeing" other artwork. Saying AI can't do that is like saying I can't use the wheat flour I bought to make up a completely new bread recipe by looking at commonalities in all bread recipes. We all know the answer to that, and we don't worry about it because most all of those mom & pop bakeries where we buy bread were put out of business decades ago by corporate bakeries. It's legally no different.

    The generative systems are really cute, but it's still machine learning, not magic, nor a general human-like "AI". Neither is the situation comparable to bread, because the law for bread still applies to industrial bakeries in the same way, even if they have the better lawyers. (SNIP: the recipe part.)  As opposed to that, the "AI" discussed in the art case actually can violate copyright on its own, and even has the potential to violate copyright by bluntly putting bits of original works it had been trained on into the end result. In effect, "completely new" doesn't even apply in many if not most cases for the output. As a tool, such a thing is new, of course.

    Edit: Correcting the recipe part: the industrial bakeries would take the bread from the small bakeries 1:1 and mangle it into something new, and sometimes pieces, or the whole bread, still end up on the table! More like that. And how new and special the resulting "recipes" are isn't 100% researched; we may at some point conclude that we just have a pretty nifty mangling tool. (A rough way to check outputs for such near-duplication is sketched at the end of this post.) So the legal question with scraping may turn out to have already been illegal, in which case we don't really need to discuss anything, or it's an uncovered spot, or maybe it's not. The latter depends on societal or political judgement of where we want to go, or of what previously existing law was meant to mean.

     

    The other, and probably bigger, problem still is the societal question: as an artist, I have the choice: publish and help train the AI that makes other people rich, which then will take my job, or instead don't publish and perish.

    Not taking action is also an action, at least once the problems are roughly understood. Why let random players dominate based on data collection, cloud integration and financial power? Why allow works to be used for any purpose, when we have just spent decades crafting licenses for exactly this purpose: to declare how the works may be used? The "AI" is not a living thing; in this case it's more like a commercial harvester robot, collecting stuff to build something new from, and the next time the "AI" gets trained, your artwork will get used 1:1 again as an input source, as there is no additive training or accumulated experience; it's just bulk data input, time after time, the whole thing (disambiguation: not for each output image, of course, but for "learning new things"). So what is the incentive to publish art then? Of course it can partly be solved, e.g. by non-copyrightability of the output, but the grey zones may already be too easy to reach (with yet another AI or a few manual stitches). Humans decide this; there is no decision-taking by an "AI" involved. So the societal question could also involve banning such scraping without explicit consent of the rights holders altogether. In essence we have two questions: 1. What about the artists? 2. Who should profit from it? In terms of whom or what we would protect by regulating somehow. Perhaps allowing only non-commercial, publicly funded and science projects could be the result of such pondering. While something may not be possible to evade in the long term, we may still be able to shape the transition.

    So I would urge people not to fall for some kind of "AI romanticism". It's people deciding this, and the next thing.

     

    Of course, any new efficient tool will create some winners and losers, but scraping places like DeviantArt is a different dimension from some new hammer or a scaled-up production process.

     

    (To expand on the training process: if they store the raw images they train from, they have a copy of your work on their hard drive, which may already be violating a license - especially with audio and images, you often have those one-leg-in-jail formulas with "one backup copy at maximum", and that for those who bought a license to use it at all. Now you could think that maybe they'll only scrape and directly train the application, like an "in memory" application, which may or may not be believable with data sets as large as would be needed to be significantly better than Stable Diffusion is now. So, in essence and again, allowing this just so, outside of a science project, could water down even the existing mechanics of copyright. They'd be assuming another special case for data processing - and there is the question again: why let it happen?)
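    On the near-duplication point above, here is a minimal sketch of the kind of automated check one could run over a batch of outputs, assuming the third-party Pillow and imagehash packages; the folder names and the distance threshold are made up for illustration:

```python
# Sketch: flag generated images that are perceptually close to known source images.
# Assumes: pip install pillow imagehash; the two folders below are placeholders.
from pathlib import Path
from PIL import Image
import imagehash

SOURCE_DIR = Path("known_sources")   # images whose provenance you know
GENERATED_DIR = Path("ai_outputs")   # images produced by the generator
THRESHOLD = 8                        # max Hamming distance treated as "suspiciously close"

source_hashes = {
    p.name: imagehash.phash(Image.open(p)) for p in SOURCE_DIR.glob("*.png")
}

for gen_path in GENERATED_DIR.glob("*.png"):
    gen_hash = imagehash.phash(Image.open(gen_path))
    for src_name, src_hash in source_hashes.items():
        distance = gen_hash - src_hash  # imagehash overloads '-' as Hamming distance
        if distance <= THRESHOLD:
            print(f"{gen_path.name} is within {distance} bits of {src_name}: review manually")
```

    A low hash distance is only a hint, not proof of copying, which is part of why the questions in this thread are hard to settle mechanically.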

    Post edited by generalgameplaying on
  • wolf359 said:

    Private companies are already using this free, open-source software to train their AI on images they completely own (uber hardware required, apparently).
     


    The Lensa app does this remotely  and needs only 10 training images IIRC.

    So this is possibly where the future of this technology is headed.

    Big advertising agencies will invest in hardware and a few techs to train their LOCAL installs of Stable Diffusion etc.
    using their own private source images, and produce their own bespoke output without any need to hire illustrators.

    There are already private individuals training their AI on their own GPUs at home.
    This, IMHO, is the new paradigm for art creation going forward.

     

    Are these taking ten images or whatever and a naive network, or are they taking ten images and a pre-trained network, then using the images to refine its output? The latter would, depending on the original training set, be open to the same criticism and potential legal action as any other.
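    For context, the widely used open source route is the latter: a big pre-trained checkpoint that gets nudged with the user's handful of photos. A heavily simplified sketch of that kind of refinement loop, assuming the diffusers/PyTorch stack; the checkpoint name, file names, placeholder token and step count are illustrative, and this is not a claim about Lensa's actual pipeline:

```python
# Sketch: DreamBooth-style refinement of a pre-trained Stable Diffusion checkpoint
# on a few personal photos. Illustrative only; real trainers add regularization
# images, gradient accumulation, mixed precision, and so on.
# Assumes: pip install torch torchvision diffusers transformers pillow
import torch
import torchvision.transforms as T
from PIL import Image
from diffusers import DDPMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
vae, unet, text_encoder, tokenizer = pipe.vae, pipe.unet, pipe.text_encoder, pipe.tokenizer
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

# Only the denoising UNet is updated; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
unet.train()
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-6)

prep = T.Compose([T.Resize(512), T.CenterCrop(512), T.ToTensor(), T.Normalize([0.5], [0.5])])
photos = [prep(Image.open(f"face_{i:02d}.jpg").convert("RGB")) for i in range(10)]

prompt = "a photo of sks person"  # 'sks' is the usual rare placeholder token
token_ids = tokenizer(prompt, padding="max_length", truncation=True,
                      max_length=tokenizer.model_max_length, return_tensors="pt").input_ids

for step in range(400):
    pixels = photos[step % len(photos)].unsqueeze(0)
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,))
    noisy_latents = noise_scheduler.add_noise(latents, noise, t)
    text_embeddings = text_encoder(token_ids)[0]
    noise_pred = unet(noisy_latents, t, encoder_hidden_states=text_embeddings).sample
    loss = torch.nn.functional.mse_loss(noise_pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

pipe.save_pretrained("my-refined-model")  # then prompt it with "sks person as an astronaut", etc.
```

    If that is roughly what such an app does, the ten photos only steer the output; the ability to render "different styles" at all still comes from whatever the base checkpoint was trained on.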

  • Highland Posts: 178

    Some companies have already begun producing rudimentary software that creates 3D models from text prompts. Say I want a brand new shiny toaster for a scene I am rendering. Run it through AI text-to-model and it creates a 3D object, maybe in .obj format, with a material file, and textures. At the pace AI 2D images advanced, it probably will not take long, IMHO.

    Are PAs and Daz and the R's ready for that?

     

     

     

  • wolf359 Posts: 3,828

    Are these taking ten images or whatever and a naive network, or are they taking ten images and a pre-trained network, then using the images to refine its output? The latter would, depending on the original training set, be open to the same criticism and potential legal action as any other.


    Lensa uses your input images and one of the open-source back ends to replicate YOUR provided face in different styles, just as
    “the Corridor Crew” did themselves with a massive local GPU rig in the video I posted.

    You cannot copyright a “style”:
    anime is a genre but also a visual “style”,
    Cyberpunk is a genre but also a visual “style”,
    etc. etc.

  • wolf359 said:

    Are these taking ten images or whatever and a naive network, or are they taking ten images and a pre-trained network, then using the images to refine its output? The latter would, depending on the original training set, be open to the same criticism and potential legal action as any other.


    Lensa uses your input images and one of the open-source back ends to replicate YOUR provided face in different styles, just as
    “the Corridor Crew” did themselves with a massive local GPU rig in the video I posted.

    If it is using a pre-trained back-end, then refining it with additional personal photos would have no bearing on the ethical or possible legal issues - it is the original training, using material that was possibly (depending on their sample) not licensed for such use, that is the bone of contention. Whether this will bring legal consequences, and where if it does, remains to be seen.

    You cannot copyright a “style”:
    anime is a genre but also a visual “style”,
    Cyberpunk is a genre but also a visual “style”,
    etc. etc.

    No one, as far as I can see, is claiming otherwise.

  • bluejaunte Posts: 1,902

    Highland said:

    Some companies have already begun producing rudimentary software that creates 3D models from text prompts. Say I want a brand new shiny toaster for a scene I am rendering. Run it through AI text-to-model and it creates a 3D object, maybe in .obj format, with a material file, and textures. At the pace AI 2D images advanced, it probably will not take long, IMHO.

    Are PAs and Daz and the R's ready for that?

    Kind of hard to be ready for that. What on earth can you do against an instant create model button? This will simply end businesses and careers, though when is unclear but we're clearly heading that way.

  • People felt this way about cars.

    They may have been justified, given the results of a society full of expressways, road trains, logistics and the resulting urban growth, etc.

    The horse and buggy certainly kept populations local and in check,

    and blacksmiths, wagoners etc. lost their jobs, as well as local businesses, primary producers and so on.

    But how many horses suffered, and how much time was spent travelling (albeit one got to see more)? The ambulances and the cops on bicycles (and mounted police) would not get there in a hurry,

    and the horse-drawn fire truck, well, why even bother.

  • Highland Posts: 178

    bluejaunte said:

    Highland said:

    Some companies have already begun producing rudimentary software that creates 3D models from text prompts. Say I want a brand new shiny toaster for a scene I am rendering. Run it through AI text-to-model and it creates a 3D object, maybe in .obj format, with a material file, and textures. At the pace AI 2D images advanced, it probably will not take long, IMHO.

    Are PAs and Daz and the R's ready for that?

    Kind of hard to be ready for that. What on earth can you do against an instant create model button? This will simply end businesses and careers, though when is unclear but we're clearly heading that way.

    Here's an AI cat from 2017 and from 2022, so 5 years apart. I think maybe 3 years before we have high-quality text-to-model. I agree with you about the possible impact on this industry. It will be interesting times.

    Attachment: AI-cat.jpg (1200 × 894, 425 KB)
  • Mattymanx Posts: 6,906
    edited December 2022

    Highland said:

    bluejaunte said:

    Highland said:

    Some companies have already begun producing rudimentary software that creates 3D models from text prompts. Say I want a brand new shiny toaster for a scene I am rendering. Run it through AI text-to-model and it creates a 3D object, maybe in .obj format, with a material file, and textures. At the pace AI 2D images advanced, it probably will not take long, IMHO.

    Are PAs and Daz and the R's ready for that?

    Kind of hard to be ready for that. What on earth can you do against an instant create model button? This will simply end businesses and careers, though when is unclear but we're clearly heading that way.

    Here's an AI cat from 2017 and from 2022, so 5 years apart. I think maybe 3 years before we have high-quality text-to-model. I agree with you about the possible impact on this industry. It will be interesting times.

    I highly doubt that.  The two image samples prove nothing.  You could get the same results today with poor input from the user.  People tend not to show the bad results; we only see what looks good.  A photographer told me once that the secret to being a good photographer is to never show anyone your bad photos and only show off the good ones!

    Post edited by Mattymanx on
  • Snow Posts: 95
    edited March 2023

    deleted

    Post edited by Snow on
    I think AI could be good for helping with measurements, so it can do things like say how far off from average something is, or calculate how many heads tall a figure is. It could also have different settings to help you make everyone more stylistically consistent. AI could also give slider suggestions to get the age and everything else you want, and even give store suggestions for the products it thinks are the best fit, if you don't have them yet.
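    The "how many heads tall" part is plain proportion arithmetic; a toy sketch, with invented landmark heights standing in for whatever figure data such a tool would actually read:

```python
# Toy sketch: head-unit proportions from three vertical landmarks.
# The heights (in cm) are invented; a real tool would read them from the figure's joints.
def heads_tall(top_of_head: float, chin: float, soles: float) -> float:
    head_height = top_of_head - chin
    total_height = top_of_head - soles
    return total_height / head_height

AVERAGE_ADULT = 7.5  # a commonly quoted figure-drawing average

ratio = heads_tall(top_of_head=170.0, chin=148.0, soles=0.0)
print(f"{ratio:.1f} heads tall ({ratio - AVERAGE_ADULT:+.1f} vs. an average of {AVERAGE_ADULT})")
```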

  • generalgameplaying Posts: 517
    edited December 2022

    Throwing in randoms:

    1. "Just from the text." - of course the machine learning system had been trained on a huge amount of images that had descriptive tags or text attached to them. 

    2. The simplest way out would be to regulate scraping and demand accountability for training data sets: explicit consent only, signalled via metadata, plus perhaps a government-funded database of original works. Any scraper that wants to stay on legal ground would then honour the metadata. In that case, if someone else uploads your images with tags and you manage to have them removed, they will not be in the next training session of any AI. Explicit consent meaning something like an EU law demanding an extra check-box in the settings, with clear text, not buried somewhere in the TOS, and strictly opt-in, ideally with both a global and a per-image setting. Of course the law needs to be more specific, e.g. on whether a license can be withdrawn, so "AI trainers" would have to look up the consent status with each new training session; possibly there would need to be an image+consent database. I would also suggest thinking of further uses, possibly with licenses referenced by images (educational, generative AI, image recognition/tagging, ...). I know it becomes convoluted quickly, simply because it's after all a complex topic with depths in all directions. (A minimal sketch of such a consent check follows after this list.)

    3. Thinking of memory limitations - there may not be "any" (for the big players) the next day. So maybe they'll have some AI system watching TV independently and learning, like what we call "gathering experience". We don't know how far we are from something like that, which doesn't even necessarily need to be a general AI capable of consciousness or whatnot. Meaning regulation needs to become more foresighted, which complicates things even further.

    4. While it may have the potential for a super-AI and the destruction of mankind, I am not so afraid of an intelligent, actual AI. Why? Because it will need experience and be spoiled a thousand times before we understand that it needs education and growing up, or into, things, as I judge it. Perhaps it will also have to dwell in a space in between, and some day learn to interact and interpret more, once it gets attached to more interesting interfaces. I'm more afraid of the Frankenstein variants: badly educated, ever-drunk three-year-olds which are trained to kill ("the task"), basically. Maybe they're kept somewhere above bird-brain complexity, to be able to clone them and to proceed from a somewhat working version. But really, we're already close to efficient killer machines with "ordinary" machine learning, though those will not be able to build something on their own - here again the human decision-takers are the actual danger.
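    A minimal sketch of what the consent check in point 2 could look like on the scraper side. The "ai-training-consent" field and the page-level directives are hypothetical stand-ins for whatever a law or standard would actually define; nothing here is tied to a specific site's implementation:

```python
# Sketch: strictly opt-in filter for a training-set scraper.
# An image is only accepted if (a) the hosting page does not forbid AI use and
# (b) the image itself carries an explicit consent marker in its metadata.
# Assumes: pip install requests beautifulsoup4 pillow. The tag names are hypothetical.
import io
import requests
from bs4 import BeautifulSoup
from PIL import Image

CONSENT_KEY = "ai-training-consent"   # hypothetical per-image metadata field
PAGE_OPT_OUT = {"noai", "noimageai"}  # directives a cautious scraper should honour

def page_allows_training(page_html: str) -> bool:
    soup = BeautifulSoup(page_html, "html.parser")
    for meta in soup.find_all("meta", attrs={"name": "robots"}):
        directives = {d.strip().lower() for d in meta.get("content", "").split(",")}
        if directives & PAGE_OPT_OUT:
            return False
    return True

def image_has_explicit_consent(image_bytes: bytes) -> bool:
    img = Image.open(io.BytesIO(image_bytes))
    # Pillow exposes simple text metadata via .info; real XMP parsing is messier.
    return str(img.info.get(CONSENT_KEY, "")).lower() in {"yes", "true", "1"}

def maybe_collect(page_url: str, image_url: str, dataset: list) -> None:
    page = requests.get(page_url, timeout=10)
    if not page_allows_training(page.text):
        return  # opt-out at page level: skip without downloading the image
    image = requests.get(image_url, timeout=10)
    if image_has_explicit_consent(image.content):
        dataset.append(image_url)  # strictly opt-in: silence means "no"
```

    The design choice that matters is the default: silence means "no", so anything without an explicit marker simply never enters the training set.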

    Post edited by generalgameplaying on
  • wolf359 Posts: 3,828
    edited December 2022

    Back on the subject of "remixing your art with AI"

     

    A basic Daz content render "Disney Pixarized" with one click 

    Attachment: compare FB.jpg (800 × 600, 212 KB)
    Post edited by wolf359 on
  • generalgameplaying Posts: 517
    edited December 2022

    Another note on complexity and results: the results depend on the complexity of the training data set, clearly. If you have too few images, you won't get the same benefit from variation and "grammar" for the general thing.

    Going from there, imagine allowing copyright on the output. Is it even a business model? The output of your own AI, thinking of the above "Disney Pixarized" example, will probably not be possible to produce more than five times before it violates the copyright of a previous output by someone else - that in the scenario of copyrightable output and possibly too little complexity. This can hit all those edge cases where artists might want to go. Of course this is a bit on the FUD side, as I'm trying to elaborate on the need to think about the edges (for the regulator and for the business).

    Aspects can certainly be mended, e.g. by having special licensing for output, or a database of some kind, which then is imperative for disputes, and maybe a board for disputes (modern cloud business: with humans too expensive, maybe?). That goes for one corporation. So if the lawmakers bungle this, e.g. by doing nothing, corporations probably will just bomb each other into nowhere, and no user would ever be safe. Or maybe a cartel rises, whose members allow each other to use patents and databases, and are just first and fastest on the market. There are more than a few ways this can go haywire...

    So at present I would call the situation pretty much in limbo.

     

    Edit: To dive slightly further into the niche question: assume mainstream input generates a lot of variety (not assuming this to be bluntly 1:1 with intuition, though), and assume an artist looking for a niche; we now have a term for the question: "effective niche input complexity". Just words mixed together, but meaning: input data on a niche may be scarce, and the results thus could tend to resemble each other. However, even basic things like poses or effects or styles may be plentiful enough, combined with the ability to alter specific parts of an image based on a description, to allow enough difference in any niche. So in effect this could be a non-issue in practice. And perhaps, if things can't be pulled to be different enough around a "niche", the artists will refrain from going there themselves. Still, it's a theoretical question which may be pretty important to elaborate on before taking too large a business risk.

    Post edited by generalgameplaying on
  • ACue Posts: 114
    edited December 2022

    I've been experimenting with merging Daz Studio renderings and Midjourney for a while. Here is an interesting example. After rendering a basic young woman in a peasant dress in Daz Studio, I uploaded the image into Midjourney and asked the A.I. to imagine this person/image as a chambermaid in a typical, classic portrait in the style of Johannes Vermeer. The design and composition choices that Midjourney made are interesting. I like what came out, all within a minute or two on the platform. It's still not 100 per cent, but it's a great interpretation. It's amazing, especially since I can tweak and refine the images until I get exactly what I want, and then touch up in Photoshop.

    We can all appreciate why some artists are concerned. Ironically, I recently allowed a non-fiction author in the UK to use one of my earlier Studio renders for his book cover. It's now published and available to buy in print or as a download. It's only a matter of time before this and other authors simply conjure up their own cover art using A.I.
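    Midjourney itself is only reachable through its own service, but the same kind of restyle pass can be sketched locally with the open source Stable Diffusion img2img pipeline. A minimal sketch, assuming the diffusers library and a CUDA GPU; the checkpoint, file name, prompt and strength value are placeholders, not ACue's actual settings:

```python
# Sketch: re-imagine a Daz Studio render in a painterly style via img2img.
# Assumes: pip install torch diffusers transformers pillow, plus a local render file.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("peasant_dress_render.png").convert("RGB").resize((512, 512))

prompt = "a chambermaid in a classic 17th-century portrait, oil painting, soft window light"
result = pipe(
    prompt=prompt,
    image=init_image,
    strength=0.55,        # lower keeps more of the original render's composition
    guidance_scale=7.5,
    num_inference_steps=40,
).images[0]

result.save("restyled_portrait.png")
```

    The strength parameter is effectively the "remix" dial: lower values keep more of the original render, while values near 1 let the prompt dominate and little of the Daz scene survives.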

    Post edited by ACue on