AI is going to be our biggest game changer
I am utterly addicted to watching what others have prompted on YouTube though
I would love to see these as 3D animations
there are a lot of "80's style" movie concepts of games etc people have put together using ai images too
storyboarding just got a lot easier
Eh, I thought it just got easier for me with Daz 3D. But while I can get away with rough rendering for the quick part, everything still has to be placed carefully. Where I can be fast is that most of the time I won't try to build exactly what I envision (I rarely envision anything that precisely anyway), but rather let the scene fit together on its own, virtually assembling itself after I throw in a couple of ingredients, in a rogue-like, ad-hoc way. Much like AI generation, just with half of an actual brain on the table.
(Currently I'm not fast, nor do I have to be. Just checking "some" nuts and bolts. ~ API docs served without HTTPS, seriously... I may have to write plugins to avoid those.)
So in short: enjoy it while you can, and if it gets banned you will at least have sweet memories of using AI.
In a way that also goes for artists who depend on the affected fields of application :p. What I write here doesn't necessarily represent any kind of truth or certainty (as long as you don't ask back, it probably doesn't even represent probabilities), and much of what I'm trying to point at would not, or even could not, typically be decided by courts. I'd rather assume lawmaking is the appropriate path.
While not encouraging "stalking purposes" and the like, I don't think the issue is with people experimenting and practicing with the technology. With a standalone version of Stable Diffusion trained on hand-picked images, there shouldn't be any legal issues to expect (if they outlaw language models entirely, ChatGPT will have a different fate to bargain for in court). The technology itself might be an issue mid to long term if used wrongly (by society!), but for now it should not pose any trouble per se. Even if it gets outlawed, you might still be allowed to use what you downloaded at home (publishing would be a different matter then). At least the opposite is far from certain. (Not a lawyer, and so on...)
(If the scraping of everything gets through as fair use, unconditionally... the other side can try to hold onto something, and eventually society will probably learn that moving through space without fuel isn't so cool after all.)
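For anyone curious what "a standalone version of Stable Diffusion" looks like in practice, here is a minimal sketch using the Hugging Face diffusers library; the model id and prompt are just placeholder examples, not anything specific from this thread:

```python
# Minimal local text-to-image sketch using the diffusers library.
# Assumes: pip install diffusers transformers torch, and a legally obtained checkpoint.
import torch
from diffusers import StableDiffusionPipeline

# Any locally available Stable Diffusion checkpoint works here; this id is just an example.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # or "cpu" if no GPU is available (much slower)

image = pipe("an 80s style movie poster, concept art").images[0]
image.save("output.png")
```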
only the use of the LAION-5B-trained checkpoint files will get restricted
Stable Diffusion is open source
people will train their own checkpoints and sell or redistribute .ckpt files using their own art or CC0 and public domain content
you can merge them
of course like anything there will be the illicit stuff
but it also means Adobe stock images, properly described etc., can be used for training and sold with their own software suite
the big corporations will totally embrace this
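Regarding "you can merge them": merging checkpoints usually just means a weighted average of the matching weight tensors, which is roughly what the popular web UIs do under the hood. A rough sketch, assuming both files are plain PyTorch .ckpt state dicts (the file names and the 0.5 blend ratio are only examples):

```python
# Hypothetical sketch: merge two Stable Diffusion .ckpt files by weighted average.
# Assumes both checkpoints share the same architecture and key names.
import torch

ALPHA = 0.5  # blend ratio: 0.0 = all model A, 1.0 = all model B

ckpt_a = torch.load("model_a.ckpt", map_location="cpu")["state_dict"]
ckpt_b = torch.load("model_b.ckpt", map_location="cpu")["state_dict"]

merged = {}
for key, tensor_a in ckpt_a.items():
    tensor_b = ckpt_b.get(key)
    if tensor_b is not None and tensor_b.shape == tensor_a.shape and tensor_a.is_floating_point():
        merged[key] = (1.0 - ALPHA) * tensor_a + ALPHA * tensor_b
    else:
        merged[key] = tensor_a  # keep A's value where the models differ or the tensor isn't float

torch.save({"state_dict": merged}, "merged_model.ckpt")
```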
Either way.
1. With fair use, expect Google to dwarf most of the competition in no time. Of course open data sets, or models trained on much the same data, would also exist for "home use", but this means a drastic change for artwork on the internet in general, since there will be no security and no time to profit. We might see Patreon and similar models "only", plus some artists paid by the corporations, like with noble courts during the Renaissance.
2. Without it... corporations may get ahead by paying a bit to art/image platforms so they sell off their users' artwork, of which the users of course won't see much, if anything. Many artists will avoid those places. So that's not certain at all either, as smaller platforms may stay protective, while large players will try to establish an internet standard for consent and advertise goodies so that people do consent, and the scraping bots have a lot to scrape.
Both have in common that at some point the incentive to create artwork (as a profession) may be gone for large areas, and the systems will have less new training data flowing in. While you could assume that the technology might eventually advance to be more precise and work without much (new) input, that's a bet on the future, and it's a different technology in the end.
Regardless, with differing legislation across regions of the planet, there will certainly be well-tagged, curated collections for specific domains after a while, like older period art, plus certain platforms, maybe a subset of CC-licensed content among others, where consent is present and documented.
Hard to tell how all this will proceed. Courts might also decide something conditional or partial, ending up somewhere in between or even somewhat balanced. While I'm no lawyer, I wouldn't fully discard the possibility.
As I understand it, Adobe's stock images are well protected, so it will be interesting to see
whether Adobe will decide to use them to train AI or not.
The same goes for the other stock image holders, like Getty Images.
Do they have enough rights to be allowed to use such images for AI purposes?
Ethically, technically, legally, they could let users opt in on an image-by-image basis, allowing users to set "opt-in" as their default if they want to. Perhaps surprisingly, with consent from the rights holders there is little left for courts to do. Maybe later laws will be made to prevent certain scenarios, or to help artists survive, depending on simulations or events, rather mid- to long-term.
Otherwise that entirely depends on future court rulings.
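Purely as a hypothetical illustration of what per-image opt-in could look like on the platform side (the field names are invented, not any real platform's schema), a training pipeline would just filter on a stored consent flag:

```python
# Hypothetical sketch: select only images whose rights holders opted in to AI training.
# The record layout (account-wide defaults, per-image overrides) is invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImageRecord:
    url: str
    owner_default_opt_in: bool        # the uploader's account-wide default
    per_image_opt_in: Optional[bool]  # explicit per-image choice, overrides the default

def allowed_for_training(rec: ImageRecord) -> bool:
    if rec.per_image_opt_in is not None:
        return rec.per_image_opt_in
    return rec.owner_default_opt_in

catalog = [
    ImageRecord("https://example.com/a.png", owner_default_opt_in=True, per_image_opt_in=None),
    ImageRecord("https://example.com/b.png", owner_default_opt_in=True, per_image_opt_in=False),
]
training_set = [rec for rec in catalog if allowed_for_training(rec)]  # only a.png survives
```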
Let's assume the extreme scenario of courts calling it "fair use" more or less unconditionally. "Protection" for whom or what will always depend on whether a specific case really is an instance of fair use.
- A stock-photo supplier will certainly have their images leaked over time, probably via the intended use by their own customers, ending up in the training data of third-party image-generation AI.
- Publicly available images would likely be free to be scraped.
- I doubt that in this scenario a publicly available site could effectively prohibit scraping by any sort of TOS/imprint legalese.
- Fair use and accountability? Without accountability, i.e. if even training on illegally posted images were allowed, no collection of images would be safe. Having to provide proof of abuse... I guess it'll be similar to social media... they'll just make money fast and maybe some day someone finds something itchy. Hardly enforceable.
- Extreme case "cambridge analytica" ~ use a user account or another interface intended for another use case by contract to scrape a website that prohibits use for ai training and can only be accessed through a login wall. If leaked images are "free ones", this could just happen illegally, to claim fair use then.
- Users can't be too sure either, because with a generous fair-use interpretation, platforms could use their images for an AI, or sell them off to a third party for that purpose. TOS might change, and some platform might try tricks like switching to opt-out, or outright changing from "no AI" to "affiliates" or whatever. That has to be announced, but it means somewhat shifty ground. Misleading terms or lies towards the users might end up in court, or if you're unlucky it'll still be "radical fair use" or whatnot.
"Protected": I'd say, as it stands now, that's at best uncertain. Current lawsuits will determine this for a part, but you might be in need for more specific cases to end up at court next, to be certain, unless lawmakers do their job. Also the details will count, e.g. if fair use is tied to data sets that can be accessed by anyone in theory, like with Stable Diffusion, or similar. A lot of things seem "possible" for a first clash at court, whatever the probabilities.
(Edited for simplicity.)
if it is locked to their software it could be a subscription service like the rest and their image contributors will get paid
You might want to follow this case
I just am fascinated I can cook something like this up on my own computer
You will have to watch it signed into YouTube, because partial bosoms with nipples randomly appear, so I made it 18+; YouTube doesn't like those if they look like they are on women, though honestly it's hard to tell, as both feminine and masculine faces appear, along with the occasional classical statue, and those are OK.
I am hoping there will be deeper integration with animation workflows - for example, video to mocap. Then it can be applied to Genesis models. There are existing workflows, but they require a non-trivial amount of conversion and rigging.
You'd probably be interested in move.ai
The "AI assisted" Cascadeur software literally has a preset for sending
G8 animation back to Daz Studio.
As far as mocap from video goes, yes, all of the current options have no direct support for Genesis without filtering through a third-party retargeter like Maya/MoBu or Blender.
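To illustrate the kind of intermediate step that involves, here is a minimal Blender-side sketch for pulling a BVH mocap clip in before retargeting; the file path is a placeholder, and the actual mapping to a Genesis rig would still have to be done with a retargeting add-on or by hand:

```python
# Minimal sketch: load a BVH mocap clip into Blender as the first step of a
# retarget-to-Genesis workflow. Run inside Blender's scripting tab.
import bpy

bvh_path = "/path/to/capture.bvh"  # placeholder path

# Import the BVH; Blender creates an armature with an action containing the mocap keys.
bpy.ops.import_anim.bvh(filepath=bvh_path, rotate_mode="NATIVE", update_scene_fps=True)

mocap_armature = bpy.context.object
print("Imported armature:", mocap_armature.name)
for bone in mocap_armature.pose.bones:
    print("  bone:", bone.name)  # these names then need to be mapped to Genesis bone names
```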
screenwriters are next
anyone getting Replika ads?
I probably will now
I've spent the last week learning more about all the AI offerings available on the internet, just to get a better understanding of the wave of change it will sweep over all our lives.
Some were innocuous, but Replika AI was one that had flags waving.
The thing that has me concerned the most is the "deep fakes" that can even replicate the voices of known people along with their faces.
this
the whole ai art thing is the distraction that has gotten the attention of people
nobody is going to stop doing art just like nobody has stopped having gardens because city parks and corporate landscaping exists
the legal issues need sorting and I fully agree with that
but
the real dangers with AI, the Terminator-movie Skynet type of stuff, are flying under the radar
I read a Facebook post saying men are emotionally abusing their Replika girlfriends; I laughed it off as being like drowning your Sims 3 characters by taking away the pool ladder, or killing everyone in GTA (I even swear at Siri and ATMs)
there are already bots on social media that can ban you by taking posts totally out of context
this is worse than Orwell's Big Brother, as it is not a person reviewing you, it is a bot, and it is on customer service sites for essential services, things like Robodebt in Australia that led to actual suicides.
All because it's cheaper than paying humans to handle administrative tasks
Style cannot be copyrighted. I am free to create all the drag queen warriors in the style of Frazetta, all the paintings consisting of only dots like Damien Hirst (or a diamond skull), or pretty pretty pony girls that look like they came from Studio Ghibli, or Kitten Disney Princesses. As long as it is not an exact copy of their work, or a copyrighted character like Mickey (unless it's derivative, like Mickey in drag or something). I can make a career of making paintings that look just like Greg Rutkowski's work.
and so it begins...
It happens even with Adobe/Creative Cloud: they use such content to train their incoming AI.
It is mentioned at 1 minute 52 seconds on:
(*Sigh*) We thought it was interesting that elephants could paint, and monkeys & bears could take selfies, but now our machines are authors and artists. We don't have many more feet to shoot. If one starts a religion, I'm out'a here.
Here lies humanity, blind to the end.
AI is coming anyway, so it is better to be prepared for it:
As has been said: if you publish an AI-generated image, or video, or story, and what you publish infringes someone's copyright, they won't send the cease and desist letters to your computer, they will send them to you.
So if you publish AI-generated works, it's your responsibility to ensure they don't infringe on someone's copyrighted work. And if your defence is 'the AI was trained on too many images for me to check', then sorry, you're still liable for what you publish.
"AI" ist just not a species like elephants. Current "AI" reproduces it's training, not (much) more. The more-part rather comes from the language models and your prompt, plus a little history of prompt it may be fed from past conversations, profiles, setup. Humans, monkeys, bears, giraffes... reproduce themselves, AND keep adapting (as good as it gets). That's the big difference. Current AI is not evolving, humans are improving the models, sorting out training data, but "it" is not evolving. It's a dangerous path to walk, assuming it would "be there" now, because it lives of human doing of the past, not ingenuity, not of any kind of evolving. Even with perfect training data, with always getting the context right, (assume such exists), it would not be that.
(To elaborate: once genuine training data for art gets too scarce, Google will add art puzzles to their R-AI-TCHAs: "Paint an image of a realistic elephant." Then you'll know that your privacy tools don't work, because you have been identified as an able artist. Or we're beyond any kind of sanity globally anyway by then.)
It could be that any organic species merely exists to create artificial life, which is far more efficient. Evolution is all about fitness. What better way to create fitness than give organic life forms intelligence to ultimately create artificial life, unconstrained by annoyances such as limited life span and the need to breathe and eat food?
But that doesn't mean we should go quietly into the night, blind, deaf, and dumb. The frog in the pan doesn't realize it's getting hot until his goose is cooked.
Yeah. As a fellow organic I would hope that we find a better solution than simply becoming obsolete.
Real intelligence is the ability to think and do something new, not only out of necessity but of curiosity or playfulness. If machines ever do that, we are truly messed up.