AI is going to be our biggest game changer
This discussion has been closed.
Licensing Agreement | Terms of Service | Privacy Policy | EULA
© 2025 Daz Productions Inc. All Rights Reserved.
Comments
I'll be posting on ArtUntamed.com though it has mostly adult pictures because
- it does not have an app, i.e. app stores don't have a say in it
- it's a European site i.e. EU copyright laws apply
- many established render artists banned from DeviantArt moved over
I see. Thanks. It's a possibility - more varied than Slushe, I guess. I'd been sort of looking at Inkblot. I'll probably have to set up on some of these places and see for myself.
ai art serves as the distraction
ChatGPT is the thing everyone should be worried about
generalgameplaying, you posted while I was editing.
Indeed. For a longer version: the likelihood seems high that manipulation, and the power that comes from manipulation, relates to the biggest market potential while posing one of the biggest threats to mankind at the same time. Manipulation also concerns areas you would otherwise deem "industrial", like using the technology to create more of a lock-in effect, as we have seen in agriculture as well as with software and the cloud - e.g. by bending the law towards such through lobbying. In more volatile times to come (at least if so), creating more lock-ins and large external things that people depend on is a dangerous path to walk.
Unfortunately/luckily, these are totally unrelated to actual advances in the technology and its use, because first and foremost it remains a tool, and the point still is what we do with it. There is no need for exploitation, except that you could earn a lot of money with it and prolong the pseudo-civilizational period of control-mongery.
But that doesn't mean we shouldn't regulate it. To keep the tool comparison: failing to regulate murder would allow things like "anyone with a hammer can kill you after six o'clock, while they can't with a chainsaw, unless it's made by the movie industry". That is just to illustrate what failing to regulate could mean. Now we need to find out what counts as "murder" with AI, and what other similar potential there is. I'm writing this because the potential for impact is vast and will arrive relatively quickly, and some of it is already being used actively. Regulating afterwards would be like waiting another 20 years before even "noticing" some tax-evasion-based monopoly - except that with AI we probably won't have another 10, or even 5, years to balance things out.

There will be incentives for some players to let things happen, e.g. because your country is top notch in AI, hardware and investment capital, so there's a geopolitical aspect too, similar to what happened with social media. That only works with asymmetries like "no abuse of our own citizens allowed, but foreigners are free for consumption". Don't ask me how to balance that with nukes and worse to come. The problem I see is that you might be able to employ AI more efficiently for attack than for defense. Defense needs to adapt to quickly shifting attackers; defense will always be late against attackers supported by fast-progressing technology - similar to lawmaking, if it only covers what has already broken in an obvious way.
Fast and slow: computers are fast (computationally), and you might have a chance at detection and countermeasures also based on AI, though in reality defense rather means "locked down", which is pretty much non-dynamic and rather the opposite of the buzzwords on security. But if you make exploitation of society fast, there likely is no fast defense, not by technology. This might lead to regional "lockdowns" - not letting the others in, in the first place.

Imagining health data being used for AI only with "totally trustworthy" players, I can't help but think of the haywire potential - and not only the blunt abuse scenario of a player using the data for deanonymization and everything that follows abuse-wise, all in a parallel loop, on top of whatever legal and sound research they claim to be doing. Regulation in the EU includes access to data by smaller players too, but that also means access by large players, and it might simply mean access where none previously existed. So we have newly created abuse potential, because you effectively deregulate without gaining any more control over the consequences. There the sentence "it's not getting better with AI" might hold true. Of course the hope, or the advertising, goes like "we'll create great new tech and services using that data", while the incentive will very likely be purely commercial. Prices and price policies on pharmaceuticals should indicate where this will certainly go. The good potential... I see more maybes, beyond what we already know.
Concerning "IQ tests" and "astrophysics examinations": while somewhat astonishing (and not so for some other applications), we should acknowledge that, similar to chess and the game of Go (weiqi/baduk), these things are difficult for humans but turn out not to be that difficult for machine-learning-based systems (of this or that kind, plus access to data if needed, plus computational power as needed). So basically we overrate the general abilities of the technology at this stage, just because something it does is commonly seen as difficult. Humans have done this often, as with nuclear energy (toothpaste!), but we wouldn't assume a wheel or a car will rule the planet any time soon. Commercially it might, though, at the hands of something with minimal brains.
In fact you might take almost any ability and build something that's better at it than most humans, but you end up with a pre-configured specialist most of the time. GPT is a language specialist in a way, but isn't a brain (yet), despite appearing more general than many other things that have been built. You would still need a more generalist thing to decide what tool to run and what data to feed it, and would still need to retrain and adapt to certain situations. It can still get answers to calculations wrong, which also means it's not actually calculating - any human would try to resolve and calculate actively, even without a special ability to do so, probably by resorting to a tool or asking someone else. ChatGPT, or "an even better toaster", would need to do that, but that's not what it is, nor what it can do at present (AFAIK). Of course you could imagine ChatGPT being enhanced by tools for algebra, calculations and all sorts of things, i.e. forming an expert system of some kind. We just don't have a general intelligence built yet.
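The "expert system" idea above can be sketched in a few lines. Everything here is invented for illustration - `fake_language_model` just stands in for a model that answers fluently without calculating - but it shows the shape of routing arithmetic to an exact tool instead of letting the language part pattern-match an answer:

```python
import re

# Hypothetical sketch: route arithmetic to an exact calculator tool,
# fall back to the (unreliable) language model for everything else.

def calculator_tool(expr: str) -> str:
    """Evaluate plain arithmetic exactly; reject anything else."""
    if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
        raise ValueError("not arithmetic")
    return str(eval(expr))  # safe here: input restricted to digits/operators

def fake_language_model(prompt: str) -> str:
    # Stand-in for the model: fluent, but not actually calculating.
    return "Probably around seven million?"

def answer(prompt: str) -> str:
    expr = prompt.removeprefix("What is ").rstrip("?").strip()
    try:
        return calculator_tool(expr)      # exact, whenever the tool applies
    except ValueError:
        return fake_language_model(prompt)

print(answer("What is 1234 * 5678?"))   # exact: 7006652
print(answer("What is a toaster?"))     # falls back to the model
```

The interesting (and hard) part in a real system is the dispatcher deciding which tool a question belongs to - here it's a trivial regex, which is exactly what a language model alone doesn't give you.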
(And on a side note: tasks that follow rules that are simple to digitize are the easiest to do, typically by combining deep/machine learning with tree search and other statistical methods. In contrast to self-training systems, which work in a confined space like a chess board with simple rules, real intelligence needs to be able to move through the world somehow and adapt to whatever is there. Thinking of ChatGPT: vast amounts of data are used and the rules are probably deduced, but it will fail at the simple task of walking somewhere to get new batteries - it doesn't even have legs, which it probably "knows". More importantly, it is trained on vast amounts of data from the real world and then works from there, meaning it "resides in the past" - good enough for many things, but still fundamentally different from something that can adapt. Of course many actual tasks will be about improving something where humans have higher error rates or no abilities at all. This is just to elaborate on the question of what to fear.)
I love chatGPT and plan to use it extensively.
ChatGPT wrote this Dialog
Isn't that a monologue, technically? ChatGPT tends to pose more in an encyclopedia mode, just without referencing the sources, and I don't find that particularly interesting (for that purpose). It's like citing an encyclopedia or an article, just without paying for it :), and without the basic accountability such an article would need to provide, e.g. by means of citation references, in order to be taken seriously. Further: if a million users do that, will the texts be genuine for each and every one of them?
(Maybe it'll reach the same accuracy as encyclopedias for pure knowledge questions, in which case it'll still lag behind if it doesn't reference sources. It could be more valuable to link to the relevant encyclopedia articles directly. Similar for news or digests of current events. It would be "the" better search engine then. I'm not even sure that's equivalent, unfortunately.)
Not intending to short-circuit any of that love. For text that merely needs to seem like something, and might even be correct in the context of a conversation, considering harmless applications, I have no issue with the technology getting some love. For text that is professionally reviewed, it might also help get rid of one or another human writer (or, less sarcastically: free their time for better use).
I've watched a couple of YouTube videos by people who ran into the same sort of problem: the AI getting simple problems badly wrong. One was a college professor who was feeding his freshman-level class exam to ChatGPT in order to evaluate whether his students had used the AI. The exam is "open book".
I also did a ChatGPT query regarding the use of {} by programmers and was equally disappointed, much like the college professor. It acts more like a massive database than an AI.
I think it will improve, but for now ChatGPT is really as flawed as much of the art generated by AI.
As a database? I love it.
What was the question regarding {}?
Even in its current iteration I would hesitate to call it flawed. Simply because it already knows more than any one human being on this planet. It's an encyclopedia that can talk. Enjoy it while it still gets some stuff wrong. It won't in the future.
"It knows more than... encyclopedia" - sorry to disappoint you, but it's worthless as an encyclopedia if it doesn't state sources. On the contrary, it will have to be called manipulation and delusion. On the friendly side, you could call it a child with a high-ish IQ or memory, but no dependability. Feed it nuke codes?
(Sorry to all the children for this comparison; it's probably really stupid in terms of what I'm writing here myself. Have you ever seen ChatGPT change one of its answers preemptively? Talking nukes...)
Flawed... what's flawed? Trusting it in that state - that I would call flawed. "It" is not an evolving form of life; it doesn't gather experience right now (unless they have done some miracle tech no one knows about while telling us something else), nor from what people type into some prompt - gathering in the sense of "in real time", i.e. reflecting and adjusting, as in learning. So humans are developing something here, and the question is what this kind of model can do, and what other models they can even implement. It's not certain that you end up better with more data, or bigger models, or tweaking here or there. Without accountability for sources of information and references for cross-checking, this machine will at best help with translation and interfacing, at the hands of someone who can judge what it outputs (e.g. if you can't remember well but judge quickly) - otherwise (as always) it'll be good for manipulating people in the first place, because people seem to expect miracles.
(I'm not necessarily saying that you're wrong here. I'm just wary of people falling for ads. <- "ads" probably isn't right, as the makers just claim "it's good". So people are falling for their own imagination, which is what ads try to exploit most of the time - related, but not precise.)
(Second thought: careful with "not flawed". The greatest abilities of mankind are tightly linked to their flaws, plus a few extra bits I won't mention here.)
(To elaborate on "better": a system like ChatGPT may easily be better than "most humans" at "everything", except whatever each human is individually specialized in. So if you ask your mother about quantum mechanics, you will need a very specialized mother. But if you're a professional in what you need it for, or an academic yourself, you might want statements and hints you can use to verify the resulting information. So I don't ask the bus driver about quantum mechanics either. Of course models could significantly improve if they had expert variants of themselves for different domains, plus a general overseer who decides where a question belongs. Still, this might be much more difficult to do, might lead to even worse mistakes (especially when misjudging the domain of knowledge), or might be outright a completely different system that can't be built on what ChatGPT is based on right now. That's actually something I might like to know...)
(On a side note to "everything" vs. "specialized": humans often focus on different areas and form societies to make a whole of their individual efforts. What would the "everything" AI do - replace societies? Think again...)
At the moment it's simply a more interactive Google (when it comes to pure information finding, at least). When you google something, it is up to you to check whether the information you got is from legit sources; it's up to you to scroll the page and click on stuff willy-nilly in hopes of finding the right thing. GPT simply removes this and gives you the information it thinks is correct, and while this may not always be right, it's obvious that it will get better rapidly. Frankly, most of its information probably came from wikis and online encyclopedias anyway.
But it's so much more. The level of understanding it has of topics, the way it can speak to you and explain and circle back and remember past bits of a conversation is unreal. It can write a poem about the last couple of topics we talked about. It understands very easily what I mean, even if my question is crappy as hell. It generates code in lots of languages, with quality probably depending on how good the API docs were that it learned from (or maybe it's just a matter of analyzing code on GitHub, who knows). If the code doesn't work, tell it the error and it will apologize and go again. It wrote a PowerShell script that helps me with certain work tasks, and although it took maybe an hour of back and forth (this included me adding more functionality as well, mind you), it eventually got there. I had never written a PowerShell script before, so I had no clue about syntax.
It writes stories, translates, explains what a word means, what a sentence means, what a word means in a certain context, outlines historical events, it tells topical jokes if you want, etc etc etc. Even now it doesn't feel just like a database, but so what if it is? Who needs IQ and intelligence when it can do this by just being a "dumb" database of human knowledge?
3 artists. Lawsuit. Stable Diffusion, Midjourney, annnnd... DeviantArt.
https://www.theverge.com/2023/1/16/23557098/generative-ai-art-copyright-legal-lawsuit-stable-diffusion-midjourney-deviantart
So you are saying that because some people steal things, that an AI should be able to as well? If people steal things...that is still bad, and not an excuse to allow AI to do so.
Two wrongs do not make a right.
Watch the video. It explains how it works. You will understand that if you were to remove the data from the AI, it would not know what to do with the pixels. In his example, the AI is trained on a mountain, which is broken down into a noise pattern. Then if the AI is prompted to draw a mountain, it takes a noise pattern and reconstructs the mountain. It might not be exactly the same mountain because it may have other mountain training data in it, but make no mistake it needs that data to draw a mountain in the first place. But the data comes from so many sources without permission.
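The point about needing the data can be shown in a toy 1-D sketch. This is not the actual Stable Diffusion training loop - a real model learns a denoising network over millions of images - and the "denoiser" below cheats by using the training example directly as its prior, which is exactly the claim being made: remove the data and nothing mountain-shaped comes back out of the noise.

```python
import math
import random

random.seed(0)

# Toy "mountain": a 1-D heightmap standing in for a training image.
mountain = [0.0, 0.2, 0.5, 1.0, 0.5, 0.2, 0.0]

def add_noise(signal, steps, beta=0.3):
    """Forward diffusion: blend the signal toward pure Gaussian noise."""
    alpha = (1 - beta) ** steps
    return [math.sqrt(alpha) * x + math.sqrt(1 - alpha) * random.gauss(0, 1)
            for x in signal]

noisy = add_noise(mountain, steps=10)  # nearly pure noise by now

def denoise(noisy_signal, prior):
    """A "trained" denoiser is, in effect, a pull toward shapes it has seen."""
    return [0.9 * p + 0.1 * x for p, x in zip(prior, noisy_signal)]

restored = denoise(noisy, prior=mountain)            # has seen mountains
blind = denoise(noisy, prior=[0.0] * len(mountain))  # never saw any data

def err(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

print(err(restored, mountain) < err(blind, mountain))  # True
```

The denoiser with access to the training shape recovers something close to the mountain; the one without it just returns scaled-down noise.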
AI has no rights. AI does not have any right to view other people's works without permission and create something else. Period. Just like an AI cannot get married or buy a gun. We cannot equate AI with people in any respect. We aren't the same, and any sort of 'but people do X...' argument is just not valid.
This is not Short Circuit 2 where Johnny 5 is granted US citizenship and all the rights and privileges as anybody else.
I have a little story there. I was watching this with a friend on VHS in the mid-90s, and when this scene ended I noticed she was not pleased. In fact, she was pretty agitated. She looked at me and asked, "So what do I have to do to get those rights?" I didn't have an answer. My friend is gay, and this was before gay rights and marriage were allowed. It certainly gave me perspective. It made me rethink how I look at things, so I try to see things from multiple angles. It is the only reason I can remember this movie.
As I said from the get go, I am not against this AI existing. But it needs to be done right. I never said differently. You admit yourself you wouldn't want to use AI generations commercially because of how the data was trained. So we are not so far apart here.
Retrain the AI on works that give it permission to use. The Copyright Office has already declared that AI generated works cannot be copyrighted (the kids book that was given copyright was a mistake and has since been revoked.)
This will solve many issues. For one, the AI generators will simply not be as good anymore without copyrighted works and works used without permission in their data sets. This alone will curb a lot of their use, though they no doubt will still get used. Not being able to get copyrights on works will go a long way towards keeping it in check. Sure, there will be places that embrace it for years to come; the deepfakers will go wild. But many of those places are underground, in the darker recesses of the net. Certainly not commercial projects.
When Stable Diffusion 2.1 released, it got a lot of complaints that it was not as good as it used to be, that "it got nerfed". And this was simply because they removed living artists names from the prompt list. I find that funny, but I believe this proves my point.
BTW, I am not even a fan of some copyright laws. I believe that our copyrights should die with us. But that is another argument. It is what it is.
To be fair, humans also need to see a mountain before they can draw one. Bringing the AI on a road trip to see a real mountain seems a bit impractical.
Interactive Google: yes, and that could be a merit. But when it comes to checking things, the task even Google declines to carry out, in order to show more ads, is to link efficiently to the encyclopedias. Well, it now actually does often display links to encyclopedias and facts at the top, directly showing a description. But the real difference is not "efficiently"; it is that Google actually links to the encyclopedias, in the good case. With simple additions, like a checkbox confining search to science and encyclopedia sites, Google would be magnitudes better for that than it is now. An explanation... you'd again be at a search engine. The last thing I want to see when I search for something is a wordy description, in most cases.

But I do see much potential in such models providing wording that makes things colloquially understandable. And the component doing the understanding actually seems to be a separate thing that ChatGPT uses to understand people, so I could imagine it being used for one type of search engine. The problem is that the system likely has no idea when it's on thin ice - when it hits a topic it can't handle well. There are ways to give a system a notion of uncertainty with respect to its training data, but it's not clear how far that leads in this instance. And the human typing the search terms probably has no idea of the reliability of an answer either. In the end, humans and magazine writers make errors too, though upon notice an error might be corrected (or not) and then stand there corrected (or not) forever.
With the language model, you can add corrections to the training, but with any more significant change it might not be clear - in terms of being able to ensure - that it will be more correct on all corrected topics next time, or that previously well-handled topics will still be handled correctly, especially if you don't check the answers after retraining (you need an expert for that; it can't be done automatically). Guarantees are much harder to give than with more specialized (classic) search engines (which no one really seems to implement, possibly due to business models). And since players like Google, who busy themselves with keeping users in their realms, e.g. for showing ads, are also heavily invested and technically able in the field of AI, I ask: what more should we expect from them?
More: yes, but "it doesn't tell jokes" and such. "It" is language models, not magic. It's more like patterns applied in a language-like fashion. There is abstraction contained, which to some degree means understanding and context (somehow). But it's not so deep - it's just as deep as the training data goes (plus reinforcement learning), and there could be articles about just about anything, so somewhere in the training data some answer often exists. To really make it even half as bulletproof as an encyclopedia, they would need to fact-check the output for all corrected answers, which is probably out of scope right now. BUT they can, and likely do, feed it encyclopedias. I'm just not sure they will manage to have something like specialized modules and at the same time select the correct one - that is, really hit the context well enough and not mix in something from elsewhere at random.

A combination of techniques will lead further, and I'm not sure that language models alone lead there (or at which size). All those examples are derived from language, based on the data we/they have, plus supervised learning with people tasked for it. The model currently is just very big, and obviously very good compared to previous models. That's still all the same kind of quality in general - the question is how far it can lead (and how). You probably can't expect upcoming improvements to display the same magnitude of change as ChatGPT represents now compared to previous versions. Presenting step-by-step knowledge, even if all the sources of the world were used correctly in training, doesn't guarantee putting answers together correctly, especially not when understanding is necessary. The biggest market potential, in my opinion, is still manipulation (*), until outlawed.
Think of ads worded for you personally, links generated with texts for you personally; wording of links created in real time allows much better tracking and personality testing - hell on earth. From hell to hell, though.
Stories: that's great, but it's still not actual intelligence ("... still only counts as one!"). I actually might have great use for "it", thinking of rephrasing what happened in a computer game scene or an arena battle, based on a series of simple descriptive terms about what happened, plus similarly a description of the hero character and their past efforts. But I wouldn't like a cloud version of this, which it is, as it's "the biggest neural network" anyone ever built. So for me, I'd need a boiled-down version if I went there. In theory it could use special accelerators for AI, e.g. if it ran on much less power-hungry and less accurate chips made for neural networks. So that'd be fictional stories. Maybe for (fantasy) storytelling a smaller thing can exist some day.
IQ and intelligence: general intelligence is still the most important part. A killer scenario means massively relying on an AI like ChatGPT in a state similar to now, and actually losing the abilities ourselves - the dream of the elites of the past, to rule over dumb people once more, controlling people with a switch or, if necessary, the power plug. That aside, it's not an intelligence, and it won't grow into one by itself. As far as I can judge, there will be a new generation of the system some day, trained on the same plus new data. But it is not evolving on its own or anything like that.
-> Such systems CAN remove hindrances and help do stuff, of course. We're not at the magic bullet with the current display of ability, assuming the description of the system isn't decoy fiction. The kinds of mistakes it makes, and what it appears unable to do, suggest that it's essentially still language models done well, but won't understand in depth. So the training-data question applies here, similar to image recognition. But since the system is more complex, it would be interesting how precisely we/they can actually tell whether it will be improved by a certain change, or by the bulk of adjusted training data. That's probably also the good side, because a general AI would probably learn and do random stuff on purpose, just to unnerve people. I also believe further improvements are possible, but it's not so clear-cut that it will do much better than replacing jobs, only to leave societies with less than before: people lose abilities, the jobs are gone, but the system cannot train on new data generated by the very people whose jobs it has just taken. Of course there will be other data sources concerning "the job", but it may still become a real problem. (*) These kinds of scenarios could also be counted as results of manipulation. Those are different systems, maybe, though for programming we may see some effects at some point, and it's obviously close to language models (but not only that). With ChatGPT the manipulation potential seems vast, because it appears to be so [...], whatever, while it actually is "so language". Language isn't the worst thing to be, per se, though...
The stock photography industry had a market size of 3.3 billion in 2020. This is a combination of Adobe Stock, Getty, Shutterstock and a few others. Yes, they are big players. Together they earn three times more than Microsoft invested in AI. And it has been on a steady incline since.
That is along the lines of the better scenarios, especially if consent is needed - provided people can't be tricked into consent by retroactive changes to TOS, or by opt-out instead of opt-in, and so on.
However, it doesn't resolve all issues. Just mentioning a few:
- How much change needs to be applied for it to count as "human made"? Can that amount of change be done by another AI? (In bulk? ... This could render copyright ineffective in the end.)
- Starting from an artist's work and applying individual specialized AI-based tools, not just the generator, will probably let you paint in that style - just something else - with quite a lot of precision; think of inpainting, effects, part-reshaping. (Probably the day after tomorrow, if it's cheap enough for either trolls/enemies or service providers.) Similarly, if these images don't count as copyright violations, they could prevent monetization of the original, or prevent the artist from using that style again (as in a series). That's probably exaggerated, but distantly it's a potential.
- As above, even if not copyrightable, the images might be used for demonetization of other (current or future) images, if they are fed (with or without TOS, depending on implementation) into platform-specific systems that then feed automatic filters.
- How will humans deal with each other if their resulting images are too similar, regardless of copyright? Think of the filtering again: will some platforms use automatic filters to prevent people from uploading the same images over and over? What if there were monetization, e.g. for works not generated by AI? Similar question as above.
- The impact of such systems integrated into social networks might still be vast, despite "no copyright", because people use the system instead of creating or buying artwork. That's probably obvious.
- There may still be copyright on works created with AI-generated images, e.g. a comic or a movie. So in the end the argument about the images probably mostly touches the one-shot stuff.
Copyright is the easiest point to start off right-ish with, and it's good if that step is taken. I'm curious what will follow on either side...
A Tool To Help Artists Control Their Images
Even Midjourney admits to using images without consent. David Holz said, "There isn't a way to get a hundred million images and know where they are coming from." He admits that Midjourney's dataset was scraped from the internet. If you want to check whether your images were scraped, here is the link (from the same article). It is a tool for artists who want to control how their images are used.
OK, at least big-ish. (I mean, compared to dozens of billions for commerce with clouds and AI and search, software, hardware...) Probably big enough to present models of their own.
However, to be "fair", compare Microsoft's total gains to their total gain; and to turn "unfair": consider shrinking market shares for them when the big image-generating AIs emerge, or, with a little more delay, the more precise AI tools for all sorts of tasks (less magic). IF they ever do - in theory we might not get much better results with the general approach (not necessarily my opinion, just a scenario), and both law and research still might need to evolve in order to be in the green zone risk-wise.
There is a good lot of uncertainty about how far what size of model can get us, without further advancements in the amount of training data needed and the precision delivered (or imprecision...). Maybe we'll see no real winner on the quality side, because very big models just get too expensive, mostly for training but then also in use. There the rest of the law-wise side conditions might mean a lot, e.g. platforms filtering images generated by competitors, copyright, etc. Perhaps different groups that hold assets will form and deny others the use of their data for AI training. Maybe law will prohibit that - some legislation for data access has been discussed in the EU - but with uncertain effect on the balance between big and small players. (Again: what's big? Will medium-big get swallowed by very big, while the small ones live on as vegetables?)
I'm not saying that two wrongs make a right, but I'm tired of the hypocrisy as I said. I'm not going to argue this again because I know that AI is not literally reproducing existing artwork, no one is really calling for AI to have similar rights as LGBTQ and other minorities, and there's no closing this Pandora's Box now because individuals can train and create their own AI models on any images they wish. I will do the same with my own art if that ridiculous lawsuit has any effect on the AI world.
Now I'm going to test out some of the new AI anime models that were recently released because it is impossible to get similar results in 3D, I probably don't have enough years left to learn to draw to my complete satisfaction, and AI still does better work than any real artists that I have paid to do commissioned work in the past. Take care.
I find that a poor excuse. It's OK for science projects that make everything publicly accessible, but for a commercial thing it's just highly abusive. I also detest statements like "if they clearly opt out in a visible way, we won't scrape". Society doesn't get better with commercial generators, so why be against consent? This not only partly breaks copyright; it also kills off license terms for images in general, for the sake of a few people's profit.
(They may try to argue that current licenses don't exclude training for AI, and social networks may argue that "opt-out" is enough consent. That's why lawmaking is important here, stepping in to prevent the cheap and fast evolution of this rogue behavior.)
I'm not going after individuals using a (yet) probably legit service. They should just use similar care to when they throw whatever stuff together into an image themselves, without using a generator. ("Tom Cruise fighting Harry Potter" might actually get through, but might end up violating something, even just over a clothing item, in theory. Maybe I'm not good at that theory; I just mean that people should not assume they can do anything, only because a computer system generates the final output.)
That being said, Pandora's box is likely getting closer faster with chat in the first place, but image-wise we also get a bit closer with commercial cloud services, should they become (strategically) profitable for big players. Arms races could happen; law will have to be made in careful and foresighted ways. It might not stay at "some images are generated by AI".
So do I. That's why I put a link in the post that helps artists track whether their images were stolen.
Not complaining. I'm probably in fast-answer mode. Technically that statement is a contradiction in itself. It's a breeze to keep 100 million links and check for consent next time - there just needs to be an internet standard for it (cough).
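The "internet standard (cough)" quip presumably alludes to something robots.txt-shaped. As a minimal sketch of what re-checking stored links against consent could look like, assuming a purely hypothetical opt-out file format (the `Disallow:` syntax is borrowed from robots.txt for illustration, not any real AI-training standard):

```javascript
// Hypothetical opt-out list (illustration only, not a real standard):
// each line names a path prefix that is off-limits to training crawlers.
const optOutFile = `
Disallow: /gallery/
Disallow: /private/
`;

// Parse the opt-out file into a list of disallowed path prefixes.
function parseOptOut(text) {
  return text
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.startsWith("Disallow:"))
    .map((line) => line.slice("Disallow:".length).trim());
}

// Check one stored link against the parsed prefixes. Re-checking millions
// of links is then just a linear pass per link (or a trie, at scale).
function mayScrape(url, disallowed) {
  const path = new URL(url).pathname;
  return !disallowed.some((prefix) => path.startsWith(prefix));
}

const rules = parseOptOut(optOutFile);
console.log(mayScrape("https://example.com/gallery/img1.png", rules)); // false
console.log(mayScrape("https://example.com/blog/post", rules));        // true
```

The hard part is not the lookup, of course, but getting scrapers to agree on (and honor) such a format in the first place.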
People may also not be aware that by agreeing to the terms of service on many social media sites and galleries, they are sometimes giving permission for the collection of data for "learning purposes" and other forms of research. AI defenders sometimes claim that this absolves them of wrongdoing; I don't know enough about that to comment, but it sounds plausible. Maybe not entirely ethical, but perhaps that's what we get for clicking "I Accept" without bothering to read anything.
I'm beginning to think that whether something is intelligent or not is kind of irrelevant. You keep pointing out it's not intelligent, but I can talk to it. It helps me, it educates me and has more knowledge than any one human being on this earth. Case in point: if it tells a joke, does it matter to me if it's intelligent or not? Is every human who tells a joke intelligent? Of course not. I would argue it probably tells more intelligent jokes that are about very specific topics or even a combination of topics than the average human could come up with.
How much of our intelligence is just data? We all know that knowledge and IQ are not the same things, yet how we perceive intelligence often comes down to how much someone knows. A lot of education is just knowledge. Pure data. In some fields, you do need actual intelligence. But think about how many jobs out there revolve around amassed knowledge, communication skills, and maybe some basic math that is also data. We memorize that 2 + 2 is 4 and 10 x 10 is 100. That level of math is what 99% of people on this earth get by with.
So what about GPT is not intelligent then? It can come up with a story based on my parameters. It may not be a very good story, but it does have the ability. Most humans don't have the ability to create a good story either, let alone write it. GPT can store all human knowledge. It can do math better than any human, and this is just a computer thing, not really even a feature of GPT. It could play better chess than any human too, at least if it ever gets one of the proper chess engines integrated, which have far outperformed the very top human players for decades now.
In other words, what is the difference between "telling a joke" through magic, as you put it, or through a language model? If there's no difference in the end result, there is no difference. Technical differences are irrelevant, and in any case, you could argue that we very much have a language model in our brain too. If I tell you to speak Swahili, you'll probably say you don't know that language. You'd have to learn it first. Your human intelligence alone does not allow you to speak Swahili. In fact, I would say that learning a language doesn't have much to do with intelligence and everything to do with memorization.
So it writes a story, it's not real intelligence. Is a human writing a story real intelligence? Isn't it rather a collection of data, memories, experiences, and some creativity to put it all together into a story? All of this GPT already does and it will get a lot better. To the point where it will spit out a whole 300-page novel, nicely formatted, and 100% correct spelling. Will it be good? Who knows. It might, if the AI learned from all the human books out there. It will "read a lot" like any writer and then create a story based on your parameters. What is the difference to a human?
In a specific context, well done language may be enough. A well made story with some content? Ok...
But knowledge isn't intelligence. It's often seen as great if someone can tell all the stories and things, and it's in general an important part of being able to use intelligence. Though for interaction between life forms, often there is much more than words (can express). That's probably what the image generators are meant for...
Interesting comparison of ChatGPT to "every human". In a way, you assume ChatGPT to have the knowledge of all mankind (probably not wrong), but you also assume that if it tells a bad joke, that will be all right? To me that is at least half a contradiction - half, because not everybody fancies the same thing as "a good joke". It's language models, patterns, different kinds of training and corrections, bells and whistles, and another type of model for "understanding prompts"... it's encyclopedic in a way, but in a language-y way. It doesn't learn in real time, it won't reflect anything the next time, it doesn't gather experience.
The point with intelligence is the abilities that follow from it: if you need an encyclopedia, or a well-done language model, perhaps you could go with ChatGPT. I don't have a problem with that. It's just not magic, nor general intelligence. General intelligence was used to build it, and to create the knowledge on the basis of which the training data came into existence in the first place. The achievement may be great and have a lot of applications, similar to wheels.
Math: Trusting a computer that could spit out wrong numbers at random is dangerous, even if it uses a calculator module in the background, because it might at some point use a language answer from its "encyclopedia" which may simply be wrong - knowing what it can do and what it doesn't or can't will help then, of course. It depends on context, but a) losing abilities because of calculators is not the same problem, as calculators are produced everywhere and widespread; b) depending on a single cloud service for basic abilities would be nuts (societally); c) if it fits into a tennis ball, then we talk progress. (Elaborating: there may be an on-premise version for runtime some day, but it can't be trained by you - in fact it will always depend on the huge data sets being present, and on a very powerful system for training.)
"So what about GPT is not intelligent then? ... chess...": Exactly: artificial intelligence. The chess program can't play checkers, can't read you the news, can't change its batteries to keep operating. Chess is just difficult for humans, so we tend to overrate it. Of course all these are great achievements, but it's not general intelligence like humans have. Typically the stuff that is simple for humans to do is hard for AI. And ChatGPT "playing chess" would just be using a subsystem for chess, which is a step in the direction of an expert system: selecting the appropriate sub-system, interpreting the result, and communicating it. A lot in one direction, but it's not general intelligence, as is.
Difference in telling jokes: Awareness might actually be there in a basic fashion, if it has been trained to recognize the order to tell a joke, and has the functionality or property to focus on jokes somehow. The question is what kind of results you achieve, because again: does it know all jokes ever told? Maybe enough jokes have already been told to fill a couple of lives with them, and perhaps simple standalone jokes are not the best field of comparison (political cabaret would be another challenge). So perhaps we're overrating the jokability due to its encyclopedic nature? Of course "language model" means it can combine stuff to form new jokes, though it might tell the same one on the same prompt often if not always (yes, variation can be built in somehow, but always at a cost).
Stories: Well, humans put a lot of effort into those - researching stuff (maybe beaten by the encyclopedia? If it's precise.), elaborating threads of the plot, characters, and so on. The result can be an intertwined cosmos of relations and formulas, which isn't just based on patterns of writing combined statistically. Can it be good? Yes, maybe. It could be funny, and especially if you use it for something else, like a short film, or just for imagination, you can still make sense of it yourself, just like with any novel. Perhaps it's already very usable to create storylines for interactive games, for instance, where one can't foresee all combinations of events, though it's a cloud service for now.
Humans overrate progress "sometimes" and tend to rate highly those achievements that are difficult for them to do. There is no law of physics telling you that some computer system can't be good enough for you, or that everybody dies if people use it. There are some dangers, though.
And concerning "the difference" - I hinted at it in the last post: 1. The mistakes the system makes. 2. The description that OpenAI has published. 3. Assuming the description is near-correct. 4. The nature of the thing is not the same as that of a so-and-so intelligent human; there are worlds (in terms of being different, at first) between what's easy and what's difficult for either, and how they'll react to something the next time.
(On a side note: I don't assume that all the ideas humans tend to have about what distinguishes them from certain animals, or makes them human, actually apply. Often those are superstitions, and who knows what "simple" parts general intelligence might be made of...)
That's been tried with privacy settings before. EU law overruled that, eventually, so that'll probably get interesting.
It would make a few people leave those platforms, making other ones more attractive artwork-wise.
Concerning law, idk about the USA either. But the TOS usually (also still in the EU) are highly deceptive and way over the top, even for educated people to understand. Actually, having to interpret the TOS at all, but more so in the direction of allowing scraping to your direct disadvantage, would be a shining example of how impossible the TOS are. Whatever "educated" means...
I was being silly, or so I thought, and purposely chose {} format preferences to query on. I asked it what was the best way to format {} in programming. I think the answer it gave - just citing database entries of {} format styles (associated with UC Berkeley, no doubt) and the celebrity programmers who claim to have originated those particular {} formatting styles - wasn't problem solving at all, but just a database lookup. It did do great, though, understanding my English query and converting it to a database query to fetch that data.
A problem-solving AI would have considered the way the various text editors work, whether inserted tabs are hard tabs or soft tabs, studies on making blocks of text easily readable to humans, human typing skills, reading skills, vision properties, and keyboards, and proposed a solution using those available facts. I've never known a programmer to look up what celebrity programmers from UC Berkeley use so they could adopt that as their personal {} formatting style. Come on, man! They use what the editor defaults to or what the existing code is already using; otherwise, for new code, they may use what the editor defaults to, or they may have devised their own {} formatting style. Look up what some programmer from UC Berkeley does and copy that style? Haha! It's a good joke.
It just doesn't have those facts in its database as to why those {} formatting styles, with their noted celebrity UC Berkeley programmers, were chosen as the answers, so it can't figure out the technical why and just recites the final answer that was given to it at some point or another in the past.
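For readers who haven't run into the debate being queried about: the two most commonly cited {} conventions are usually labeled "K&R" and "Allman" (attributions vary, which is part of the joke above). A minimal side-by-side sketch:

```javascript
// K&R style (a.k.a. "One True Brace"): opening brace on the same line.
function krStyle(n) {
  if (n > 0) {
    return "positive";
  }
  return "non-positive";
}

// Allman style: opening brace on its own line, aligned with the keyword.
function allmanStyle(n)
{
  if (n > 0)
  {
    return "positive";
  }
  return "non-positive";
}

// Both behave identically; the difference is purely visual, which is why
// editor defaults and existing code usually settle the question in practice.
console.log(krStyle(1), allmanStyle(-1)); // positive non-positive
```

Which is exactly the point made above: the "best" style is an ergonomics question (readability, editor behavior, team consistency), not something a database of celebrity preferences can answer.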
Am I wrong to think it should have the capability to problem-solve in the way I was led to believe this AI could, rather than do a DB lookup? I guess so.
The professor got better results, about 75%. Note that his questions were well-known physics theorems, with the logical and mathematical relations stated in those theorems. So the AI is demonstrating that it can convert English queries into typical database lookups, math-table and equation lookups, and apply those quite well.
I've not tried querying in another language either because my language skills outside of English are barely intermediate level at best and only in the best of fortuitous circumstances.
Others have complained that these AI engines are too heavily weighted toward "subject matter expert celebrity" database lookups, rather than looking at the problem from a naive perspective and trying to solve it independently. Would it get the answer right or wrong? I'm going to agree with them (the complainers) for the most part.
I found this video very interesting.