
I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work is copyright infringement, but I think what people will need to consider is that all that's going to do is keep AI out of the hands of regular people and place it specifically in the hands of people and organizations who are wealthy and powerful enough to train it for their own use.

If this isn't actually what you want, then what's your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it's likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we're only beginning to explore) in the hands of the sorts of people who are the least likely to want to do anything good with it.

I know I'm posting this in a hostile space, and I'm sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that's fine (the jury is literally still out on that). What I'm interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don't is the absolute worst possibility.

[–] IncognitoErgoSum@kbin.social 3 points 1 year ago (2 children)

Except an AI is not taking inspiration, it's compiling information to determine mathematical averages.

The AIs we're talking about are neural networks. They don't do statistics, they don't have databases, and they don't take mathematical averages. They simulate neurons, and their ability to learn concepts is emergent from that, just as the human brain's is. Nothing about an artificial neuron ever takes an average of anything, reads any database, or does any statistical calculations. If an artificial neural network can be said to be doing those things, then so is the human brain.
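To make that concrete, here's a minimal sketch of a single artificial neuron in plain Python (illustrative values only, not any particular model):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # A simulated neuron is just a weighted sum of its inputs passed
    # through a nonlinearity. There's no lookup table, no stored copy
    # of any training data, and no averaging step -- only learned weights.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid

# Illustrative numbers: everything this neuron "knows" is encoded
# in its three weights and its bias.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], 0.2))
```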

There is nothing magical about how human neurons work. Researchers are already growing small networks out of animal neurons and using them the same way that we use artificial neural networks.

There are a lot of "how AI works" articles out there that put things in layman's terms (and use phrases like "statistical analysis" and "mathematical averages"), and unfortunately people (including many very smart people) extrapolate from the incorrect information in those articles and end up making bad assumptions about how AI actually works.

A human being is paid for the work they do, an AI program's creator is paid for the work it did. And if that creator used copyrighted work, then he should be having to get permission to use it, because he's profiting off this AI program.

If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they're making a profit from that copyrighted work. Human beings should, as you said, be paid for the work they do. Right? If an artist goes to art school, they should pay all of the artists whose work they learned from, right? If a teacher teaches children in a class, that teacher should be paid a royalty each time those children make use of the knowledge they were taught, right? (I sense a sidetrack -- yes, teachers are horribly underpaid and we desperately need to fix that, so please don't misconstrue that previous sentence.)

There's a reason we don't copyright facts, styles, and concepts.

Oh, and if you want to talk about something that stores an actual database of scraped data, makes mathematical and statistical inferences, and reproduces things exactly, look no further than Google. It's already been determined in court that what Google does is fair use.

[–] veridicus@kbin.social 2 points 1 year ago (1 children)

The AIs we're talking about are neural networks. They don't do statistics, they don't have databases, and they don't take mathematical averages. They simulate neurons, and their ability to learn concepts is emergent from that, the same way the human brain is.

This is not at all accurate. Yes, there are very immature neural simulation systems that are being prototyped but that's not what you're seeing in the news today. What the public is witnessing is fundamentally based on vector mathematics. It's pure math and there is nothing at all emergent about it.

If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they're making a profit from that copyrighted work.

That's not how copyright works, nor should it. Anyone who creates a mood board from a blank slate is using their learned experience, most of which they gathered from other works. If you were to write a book analyzing movies, for example, you shouldn't have to pay the copyright for all those movies. You can make a YouTube video right now with a few short clips from a movie or quotes from a book and you're not violating copyright. You're just not allowed to make a largely derivative work.

[–] IncognitoErgoSum@kbin.social 3 points 1 year ago* (last edited 1 year ago) (1 children)

So to clarify, are you making the claim that nothing that's simulated with vector mathematics can have emergent properties? And that AIs like GPT and Stable Diffusion don't contain simulated neurons?

[–] veridicus@kbin.social 1 points 1 year ago (1 children)

Yes, and the math is all publicly documented.

[–] IncognitoErgoSum@kbin.social 3 points 1 year ago (1 children)
[–] veridicus@kbin.social 1 points 1 year ago (2 children)

No, I'm not your Google. You can easily read the background of Stable Diffusion and see it's based on Markov chains.

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago (1 children)

LOL, I love kbin's public downvote records. I quoted a bunch of different sources demonstrating that you're wrong, and rather than own up to it and apologize for preaching from atop Mt. Dunning-Kruger, you downvoted me and ran off.

I advise you to step out of whatever echo chamber you've holed yourself up in and learn a bit about AI before opining on it further.

[–] veridicus@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

My last response didn't post for some reason. The mistake you're making is that a neural network is not a neural simulation. It's relatively simple math, just on a very large scale. I think you mentioned earlier, for example, that you played with PyTorch. You should then know that the NN stack is based on vector math. You're making assumptions based on terminology, but when you read deeper you'll see what I mean.

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago

I said it was a neural network.

You said it wasn't.

I asked you for a link.

You told me to do your homework for you.

I did your homework. Your homework says it's a neural network. I suggest you read it, since I took the time to find it for you.

Anyone who knows the first thing about neural networks knows that, yes, artificial neurons are simulated with matrix multiplications, which is why people use GPUs to do them. The simulations are not down to the molecule because they don't need to be. The individual neurons are relatively simple math, but when you get into billions of something, you don't need extreme complexity for new properties to emerge (in fact, the whole idea of emergent properties is that they arise from collections of simple things, like the rules of the Game of Life, for instance, which are far simpler than simulated neurons). Nothing about this makes me wrong about what I'm talking about for the purposes of copyright. Neural networks store concepts. They don't archive copies of data.
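For anyone curious, here's a minimal PyTorch sketch (PyTorch being the library mentioned elsewhere in this thread) of what "neurons simulated with matrix multiplications" means. A whole layer of neurons is one matrix multiply, which is exactly the kind of operation GPUs are built to accelerate:

```python
import torch

# One layer of 4 simulated neurons over 3 inputs: a single matrix
# multiplication, a bias, and a nonlinearity. Deep networks are just
# many of these layers stacked; there's no database anywhere.
inputs = torch.tensor([0.5, -1.2, 3.0])   # example input activations
weights = torch.randn(4, 3)               # learned connection strengths
bias = torch.randn(4)

outputs = torch.relu(weights @ inputs + bias)
print(outputs)  # activations of the 4 neurons
```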

[–] IncognitoErgoSum@kbin.social 0 points 1 year ago

You need to do your own homework. I'm not doing it for you. What I will do is lay this to rest:

https://en.wikipedia.org/wiki/Stable_Diffusion

Stable Diffusion is a latent diffusion model, a kind of deep generative artificial neural network. Its code and model weights have been released publicly [...]

https://jalammar.github.io/illustrated-stable-diffusion/

The image information creator works completely in the image information space (or latent space). We’ll talk more about what that means later in the post. This property makes it faster than previous diffusion models that worked in pixel space. In technical terms, this component is made up of a UNet neural network and a scheduling algorithm.

[...]

With this we come to see the three main components (each with its own neural network) that make up Stable Diffusion:

  • [...]

https://stable-diffusion-art.com/how-stable-diffusion-work/

The idea of reverse diffusion is undoubtedly clever and elegant. But the million-dollar question is, “How can it be done?”

To reverse the diffusion, we need to know how much noise is added to an image. The answer is teaching a neural network model to predict the noise added. It is called the noise predictor in Stable Diffusion. It is a U-Net model. The training goes as follows.

[...]

It is done using a technique called the variational autoencoder. Yes, that’s precisely what the VAE files are, but I will make it crystal clear later.

The Variational Autoencoder (VAE) neural network has two parts: (1) an encoder and (2) a decoder. The encoder compresses an image to a lower dimensional representation in the latent space. The decoder restores the image from the latent space.

https://www.pcguide.com/apps/how-does-stable-diffusion-work/

Stable Diffusion is a generative model that uses deep learning to create images from text. The model is based on a neural network architecture that can learn to map text descriptions to image features. This means it can create an image matching the input text description.

https://www.vegaitglobal.com/media-center/knowledge-base/what-is-stable-diffusion-and-how-does-it-work

Forward diffusion process is the process where more and more noise is added to the picture. Therefore, the image is taken and the noise is added in t different temporal steps where in the point T, the whole image is just the noise. Backward diffusion is a reversed process when compared to forward diffusion process where the noise from the temporal step t is iteratively removed in temporal step t-1. This process is repeated until the entire noise has been removed from the image using U-Net convolutional neural network which is, besides all of its applications in machine and deep learning, also trained to estimate the amount of noise on the image.

So, I'll have to give you that you're trivially right that Stable Diffusion does use a Markov chain, but as it turns out, I had the same misconception you did: that a Markov chain is some sort of mathematical equation. A Markov chain is actually just a process where each step depends only on the step immediately before it, and it most certainly doesn't mean you're right about Stable Diffusion not using a neural network. Stable Diffusion works by feeding the prompt and a partly denoised image into the neural network over some given number of steps (it can do it in a single step, although the results are usually pretty messy). That in and of itself is a Markov chain. However, the piece that's actually doing the real work (essentially running a Rorschach test over and over) is a neural network.
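To illustrate the distinction, here's a toy sketch of that loop in Python. This is not Stable Diffusion's actual code: the real noise predictor is a large U-Net neural network and real schedulers are far more sophisticated, but the shape of the loop is the point:

```python
import torch

def toy_noise_predictor(latent, step):
    # Stand-in for the U-Net neural network that Stable Diffusion
    # trains to estimate the noise present in a latent image.
    return 0.1 * latent

def denoise(steps=50):
    latent = torch.randn(4, 64, 64)  # start from pure noise in latent space
    for step in reversed(range(steps)):
        predicted_noise = toy_noise_predictor(latent, step)
        # The Markov property is just this: the next latent depends only
        # on the current latent (and timestep), never on earlier steps.
        latent = latent - predicted_noise
    return latent  # a real pipeline would now decode this with the VAE

print(denoise().shape)  # torch.Size([4, 64, 64])
```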

[–] Ragnell@kbin.social 2 points 1 year ago* (last edited 1 year ago) (1 children)

@IncognitoErgoSum Gonna need a source on Large Language Models using neural networks based on the human brain here.

EDIT: Scratch that. I'm just going to need you to explain how this is based on the human brain functions.

[–] IncognitoErgoSum@kbin.social 3 points 1 year ago* (last edited 1 year ago) (1 children)

I'm willing to, but if I take the time to do that, are you going to listen to my answer, or just dismiss everything I say and go back to thinking what you want to think?

Also, a couple of preliminary questions to help me explain things:

What's your level of familiarity with the source material? How much experience do you have writing or modifying code that deals with neural networks? My own familiarity lies mostly with PyTorch. Do you use that or something else? If you don't have any direct familiarity with programming with neural networks, do you have enough of a familiarity with them to at least know what some of those boxes mean, or do I need to explain them all?

Most importantly, when I say that neural networks like GPT-* use artificial neurons, are you objecting to that statement?

I need to know what it is I'm explaining.

[–] Ragnell@kbin.social 2 points 1 year ago* (last edited 1 year ago) (2 children)

@IncognitoErgoSum I don't think you can. Because THIS? Is not a model of how humans learn language. It's a model of how a computer learns to write sentences.

If what you're going to give me is an oversimplified analogy that puts too much faith in what AI devs are trying to sell and not enough faith in what a human brain is doing, then don't bother because I will dismiss it as a fairy tale.

But, if you have an answer that actually, genuinely proves that this "neural" network is operating similarly to how the human brain does... then you have invalidated your original post. Because if it really is thinking like a human, NO ONE should own it.

In either case, it's probably not worth your time.

[–] IncognitoErgoSum@kbin.social 4 points 1 year ago* (last edited 1 year ago)

If what you're going to give me is an oversimplified analogy that puts too much faith in what AI devs are trying to sell and not enough faith in what a human brain is doing, then don't bother because I will dismiss it as a fairy tale.

I'm curious, how do you feel about global warming? Do you pick and choose the scientists you listen to? You know that the people who develop these AIs are computer scientists and researchers, right?

If you're a global warming denier, at least you're consistent. But if out of one side of your mouth you're calling what AI researchers talk about a "fairy tale", and out of the other side of your mouth you're criticizing other people for ignoring science when it suits them, then maybe you need to take time for introspection.

You can stop reading here. The rest of this is for people who are actually curious, and you've clearly made up your mind. Until you've actually learned a bit about how they actually work, though, you have absolutely no business opining about how policies ought to apply to them, because your views are rooted in misconceptions.

In any case, curious folks, I'm sure there are fancy flowcharts around about how data flows through the human brain as well. The human brain is arranged in groups of neurons that feed back into each other, whereas an AI neural network is arranged in more ordered layers. Their structures aren't precisely the same. Notably, an AI (at least, as they are commonly structured right now) doesn't experience "time" per se, because once it's been trained its neural connections don't change anymore. As it turns out, consciousness isn't necessary for learning and reasoning, contrary to what the parent comment seems to think.

Human brains and neural networks are similar in the way that I explained in my original comment -- neither of them store a database, neither of them do statistical analysis or take averages, and both learn concepts by making modifications to their neural connections (a human does this all the time, whereas an AI does this only while it's being trained). The actual neural network in the above diagram that OP googled and pasted in here lives in the "feed forward" boxes. That's where the actual reasoning and learning is being done. As this particular diagram is a diagram of the entire system and not a diagram of the layers of the feed-forward network, it's not even the right diagram to be comparing to the human brain (although again, the structures wouldn't match up exactly).
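For curious readers, here's a minimal, generic PyTorch sketch (not any particular model) of what "learning by modifying neural connections" looks like in code. Note that training adjusts weights; the training example itself is never archived:

```python
import torch

# A tiny feed-forward network: layers of simulated neurons, the kind
# of thing that lives in those "feed forward" boxes.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

example = torch.randn(3)       # stand-in "training example"
target = torch.tensor([1.0])

for _ in range(100):
    optimizer.zero_grad()
    loss = (model(example) - target).pow(2).mean()
    loss.backward()            # work out how each connection should change
    optimizer.step()           # nudge the weights a tiny bit

# The example was never stored; what changed is the connection weights.
print(model(example))
```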

[–] throwsbooks@lemmy.world 1 points 1 year ago (1 children)

But, if you have an answer that actually, genuinely proves that this “neural” network is operating similarly to how the human brain does… then you have invalidated your original post. Because if it really is thinking like a human, NO ONE should own it.

I think this is a neat point.

The human brain is very complex. The neural networks trained on computers right now are more like collections of neurons grown together in a petri dish, rather than a full human brain. They serve one function, say, recognizing or generating an image or calculating some probability or deciding on what the next word should be in a sequence. While the brain is a huge internetwork of these smaller, more specialized neural networks.

No, neural networks don't have a database and they don't do stats. They're trained through trial and error, not aggregation. The way they work is explicitly based on a mathematical model of a biological neuron.

And when an AI is developed that's advanced enough to rival the actual human brain, then yeah, the AI rights question becomes a real thing. We're not there yet, though. Still just matter in petri dishes. That's a whole other controversial argument.

[–] IncognitoErgoSum@kbin.social 2 points 1 year ago (2 children)

I don't believe that current AIs should have rights. They aren't conscious.

My point was purely that AIs learn concepts and that concepts aren't copyrightable. Encoding concepts into neurons (that is, learning) doesn't require consciousness.

[–] Ragnell@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

@IncognitoErgoSum If they don't have consciousness, then they aren't comparable to a human being being inspired. It is that simple.

The human who created the AI is profiting from the AI's work, but that human was not inspired by the works he used to train the AI. He fed them into a machine to help make that machine. It doesn't matter how close the machine is to human thought, it is a machine that is making something for others to profit from.

The people who created the AI took work without permission, used it to build and refine a machine, and are now using that machine to profit. They are selling that machine to people who would otherwise hire the people who did the work that was taken without permission and used to build the machine. This is all sorts of fucked up, man.

If an AI's creation is comparable to a direct human's creation, then it belongs to the AI. Whatever it is, it doesn't belong to the guys who built the AI OR the guys who BOUGHT the AI. Which is actually one of the demands from the WGA, that AI-generated scripts have NOBODY listed as the writer and NOBODY able to copyright that work.

SAG-AFTRA just got a contract offer that says background performers would get their likeness scanned and have it belong to the studio FOREVER so that they can simply generate these performers through AI.

This is what is happening RIGHT NOW. And you want to compare the output of an AI to a human's blood sweat and tears, and argue that copyright protections would HURT people rather than help them avoid exploitation.

Because that is what the AI programmers are doing, they are EXPLOITING living authors, living artists, living performers to create a machine that will replace those very people.

The copyright system, which yes is exploited and manipulated by these corporations, is still the only method we have to protect small-time creatives FROM those corporations. And right now, those corporations are poised to use AI to attack small-time creatives.

So yes, your comparison to human inspiration is a damned fairy tale. Because it whitewashes the exploitation of human workers by equating them to the very machine that's being used to exploit them.

[–] IncognitoErgoSum@kbin.social 3 points 1 year ago* (last edited 1 year ago) (1 children)

Lots to unpack here.

First of all, the physical process of human inspiration is that a human looks at something, their optic nerves fire, those impulses activate other neurons in the brain, and an idea forms. That's exactly how an AI takes "inspiration" from images. This stuff about free will and consciousness is metaphysics. There's no meaningful difference in the actual process.

Secondly, let's look at this:

SAG-AFTRA just got a contract offer that says background performers would get their likeness scanned and have it belong to the studio FOREVER so that they can simply generate these performers through AI.

This is what is happening RIGHT NOW. And you want to compare the output of an AI to a human's blood sweat and tears, and argue that copyright protections would HURT people rather than help them avoid exploitation.

I'll say right off that I don't appreciate the "you're a bad person" schtick. Switching to personal attacks stinks of desperation. Plus, your personal attack on me isn't even correct, because I don't approve of the situation you described any more than you do. The reason they're trying to slip that into those people's contracts is because those people own their likenesses under existing copyright law. That is, you don't have to come up with a funny interpretation of copyright law where concepts can be copyrighted, but only if a machine learns them. They need a license to use those people's likenesses regardless of whether they use an AI or Photoshop or just have a painter do it. Using AI doesn't get them out of that -- if it did, they wouldn't need to try to put it into the contract.

In other words, they aren't using an AI to attack anyone; they're using a powerful bargaining position to try to get people to sign away an established right they already have according to copyright law. That has absolutely nothing to do with anything I'm talking about here, except that you want to attach it to what I'm talking about so you can have something to rage about.

And here's the thing. None of you people ever gave a shit when anybody else's job was automated away. Cashiers have had their work automated away recently and all I hear is "ThAt'S oKaY bEcAuSe tHeIr jOb sUcKs!!!!!!111" Artists have been actually violating the real copyright of other artists (NOT JUST LEARNING CONCEPTS) with fanart (which is a DERIVATIVE WORK OF A COPYRIGHTED CHARACTER) for god only knows how long and there's certainly never been a big outcry about that.

It sucks to be the ones looking down the business end of automation. I know that because as a computer programmer I am too. On the other hand, I can see past the end of my own nose, and I know how amazing it would be if lots of regular people suddenly had the ability to do the things that I do, so I'm not going to sit there and creatively interpret copyright law in an attempt to prevent that from happening. If you're worried about the effects of automation, you need to start thinking about things like universal healthcare and universal income, not just ESTABLISH SPECIAL PROTECTIONS FOR A TINY SUBSET OF PEOPLE WHOM YOU HAPPEN TO LIKE. It just seems a bit convenient, and (dare I say) selfish that the point in history where we need to start smashing the machines happens to be right now. Why not the printing press or the cotton gin or machines that build railroads or looms or robots in factories or grocery store kiosks? The transition sucked for all those people as well. It's going to suck for artists, and it'll suck for me, but in the end we can pull through and be better off for it, rather than killing the technology in its infancy and calling everyone a monster who doesn't believe that you and you alone ought to have special privileges.

We need to be using the political clout we have to push us toward a workable post-scarcity economy, as opposed to trying to preserve a single, tiny bit of scarcity so a small group of people can continue to do something while everybody else is automated away and we all end up ruled by a bunch of rent-seeking corporations. Your gatekeeping of the ability of people to do art isn't going to prevent any of that.

P.S. We seem to be at the very beginning of a major climate disaster these last couple weeks, so we're probably all equally fucked anyway.

[–] Ragnell@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

Dude, I'm not calling you a bad person. I am calling you out of touch with a very real problem.

Look, you asked what the endgame was for people who hoped that copyright would get applied to AI. I TOLD you. We want to slow down the deployment of AI by large companies and establish legal protections for creatives and others whose work is being used to replace them.

You responded by comparing the AI to those human creatives, which honestly is a trap I fell into. Because it derails us from the point, which is those creatives need legal protection. The legal system will see AI as a tool no matter HOW similar or dissimilar it is to a human being until an AGI comes along that is granted legal personhood. Then those legal restrictions won't apply to that AGI, and it will instead fall under the legal restrictions applied to people.

Because the intended use of art is communication between PEOPLE. And the person involved in AI right now is the person who feeds it the art and makes a machine to create what they desire. This is not the intended use case. It is not intended to create machines, it is intended to inspire people.

So unless your AI is LEGALLY classified as a person, applying copyright restrictions to it will not apply to a human reader that is inspired.

I DEFINITELY want a legal distinction between using my writing to make a machine and reading my writing.

Because using the work of creatives to make an AI is exploitation. And I don't think we should preserve the right of a corporation to exploit creatives just so that the average person can ALSO exploit creatives.

But if it makes you happy, how about we get a copyright à la Creative Commons that allows an individual to create an AI using the copyrighted work for non-profit reasons, but restricts corporations from doing so with an AI used for profit, and considers any work created by this AI to be noncopyrighted.

[–] IncognitoErgoSum@kbin.social 2 points 1 year ago (1 children)

But if it makes you happy, how about we get a copyright à la Creative Commons that allows an individual to create an AI using the copyrighted work for non-profit reasons, but restricts corporations from doing so with an AI used for profit, and considers any work created by this AI to be noncopyrighted.

Honestly, I think keeping the output of AI non-copyrighted is probably the best of both worlds, because it allows individuals to use AI as an expressive tool (you keep separating "creatives" from "average people", which I take issue with) while making it impractical for large media companies to use.

At any rate, the reason copyright restrictions would just kill open source AI is that it strikes me as incredibly unlikely that you're going to be able to stop corporations from training AI on media that they own outright. Disney has a massive library of media that they can use as training data, and no amount of stopping open source AI users from training AI on copyrighted works is going to prevent Disney from doing that (same goes for Warner Bros, etc). Disney, which is known for exploiting its own workers, will almost certainly use that AI to replace their animators completely, and they'll be within their legal rights to do so since they own all the copyrights on it.

Now consider companies like Adobe, Artstation, and just about any other website that you can upload art to. When you sign up for those sites, you agree to their user agreement, which has standard boilerplate language that gives them a sublicenseable right to use your work however they see fit (or "for business purposes", which means the same thing). In other words, if you've ever uploaded your work anywhere, you've already given someone else the legal right to train an AI on your work (even with a creative interpretation of copyright law that allows concepts and styles to be copyrighted), which means they're just going to build their own AI and then sell it back to you for a monthly fee.

But artists and writers should be compensated every time someone uses an AI trained on their work, right? Well, let's look at ChatGPT for a moment. I have open source code out there on github, which was almost certainly included in ChatGPT's training data. Therefore, when someone uses ChatGPT for anything (since the training data doesn't go into a database; it just makes tiny tiny little changes to neuron connection weights), they're using my copyrighted work, and thus they owe me a royalty. Who better to handle that royalty check but OpenAI? So now you get on there and use ChatGPT, making use of my work, and some of the "royalty fee" they're now charging goes to me. Similarly, ChatGPT has been trained on some of whatever text you've added to the internet (comments, writing, whatever, it doesn't matter), so when I use it, you get royalties. So far so good. Now OpenAI charges us both, keeps a big commission, and we both pay them $50/month for the privilege of access to all that knowledge, and we both make $20/month because people are using it, for a net -$30/month. Who wins? OpenAI. With a compensation scheme, the big corporations win every time and the rest of us lose, because it costs money to do it, and open source can't do it at all. Better to skip the middle man, say here's an AI that we all contributed to and we all have access to.

So again, what specifically is your plan to slow down deployment? Because onerous copyright restrictions aren't going to stop any of the people who need to be stopped, but they will absolutely stop the people competing with those people.

[–] Ragnell@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

@IncognitoErgoSum Honestly? Arguing against AI to anyone I can find and supporting any legal action to regulate the industry. That includes my boss when he considers purchasing an AI service.

If I find something that's mine has been used to train an AI, I am willing to join a class action suit. The next work contract renegotiation I have will take into account the possibility of my writing being used for training, and it'll be a no. I'm supporting the SAG-AFTRA and WGA strikes because those contracts will set important precedents on how AI can be used in creative industries at least, and will likely spread to other industries.

And I think if enough people don't buy into the hype, and are skeptical, and public opinion remains against it, then it's less likely AI will be used in industries that need a strict safety standard until we get a regulatory agency for it.

[–] IncognitoErgoSum@kbin.social 2 points 1 year ago* (last edited 1 year ago) (1 children)

I get it, then.

It's more about the utilitarian goal of convincing people of something because it's convenient for you if the public believes it, in order to protect yourself and your immediate peers from automation, as opposed to actually seeking the truth and going with established legal precedent.

Legally, your class action lawsuit doesn't really have a leg to stand on, but you might manage to win anyway if you can depend on the ignorance of the judge and the jury about how AI actually works, and prejudice them against it. If you can get people to think of computer scientists and AI researchers as "tech bros" instead of scientists with PhDs, you might be able to get them to dismiss what they say as "hype" and "fairy tales".

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

I still say you're wrong about how the AI actually works, man. You're looking at it with rose-colored goggles, head filled with sci-fi catch phrases. But it's just a math machine.

[–] IncognitoErgoSum@kbin.social 2 points 1 year ago (1 children)

I'm looking at it with a computer science degree and experience with AI programming libraries.

And yes, it's a machine that simulates neurons using math. We simulate physics with math all the way down to the quantum foam. I don't know what your point is. Whether it's simulated neurons or real neurons, it learns concepts, and concepts cannot be copyrighted.

I have a sneaking suspicion that, since you switched tactics from googling the wrong flowchart to accusing me of not caring about workers over a contract dispute that's completely unrelated to the copyright stuff I'm talking about, you at least suspect that I know what I'm talking about.

Anyway, since you're arguing based on personal convenience and not fact, I can't really trust anything that you say anyway, because we're on entirely different wavelengths. You've already pretty much indicated that even if I were to convince you I'm right, you'd still go on doing exactly what you're doing, because you're on a crusade to save a small group of your peers from automation, and damn the rest of us.

Best of luck to you.

[–] Ragnell@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yeah, we're on different wavelengths. But I do have over twenty years in cyber transport and electronics. I know the first four layers in and out, including that physical layer it seems just about all programmers forget about completely.

It's not learning. It's not reading. It's not COMPREHENDING. It is processing. It is not like a person.

I admit, I'm firing from any direction I can get an angle at, because this idea that these programs are actual AGI and are comparable to humanity is, well... dangerous. There are people with power and influence who want to put these things in areas that WILL get people hurt. There are people who are dying to put them to work doing every bit of writing from scripts to NOTAMs, and they are horrifically unreliable because they have no way of verifying the ACCURACY of what they write. They do not have the ability to make a judgement, which is a key component of human thinking. They can only favor the set result coming through the logic gate. If A and B enter, B comes out. If A and A enter, A comes out. It has no way to evaluate whether A or B is the actual answer.

You call it a small group of my peers, but everyone is in trouble because people with money are SEVERELY overestimating the capabilities of these programs. The danger is not that AI will take over the world, but that idiots will hand AI the world and AI will tank it because AI does not come with the capabilities needed to make actual decisions.

So yeah, I bring up the WGA/SAG-AFTRA strike. Because that happens to be the best known example of the harm being done not by the AI, but by the people who have too much faith in the AI and are ready to replace messy humans of all stripes with it.

And I argue with you, because you have too much faith in the AI. I'm not impressed by your degree to be perfectly honest because in my years in the trade I have known too many people with that degree who think they know way more than they do and end up having to rely on people like me to keep them grounded in what actually can be accomplished.

[–] IncognitoErgoSum@kbin.social 3 points 1 year ago (1 children)

What, specifically, do you think I'm wrong about?

If it's the future potential of AI, that's just a guess. AGI could be 100 years away (or financially impossible) as easily as it could be 5 years. AGI is in the future still, and nobody is really qualified to guess when it'll come to fruition.

If you think I'm wrong about the present potential of AI, I've already seen individuals with no budget use it to express themselves in ways that would have required an entire team and lots of money, and that's where I believe its real potential lies right now. That is, it opens up the possibility for regular people to express themselves in ways that were impossible for them before. If Disney starts replacing animators with AI, I'll be right there with you boycotting them. AI should be for everyone, not for large corporations that can already afford to express themselves however they want.

If you think I'm wrong that AIs like ChatGPT and Stable Diffusion do their computing with simulated neurons, let me know and I'll try to find some literature about it from the source. I've had a lot of AI haters confidently tell me that it doesn't (including in this thread), and I don't know if you're in that camp or not.

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

I don't think we know enough about the human brain to actually replicate it in electronics.

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

So what does that mean? Do you not believe that AIs like ChatGPT and Stable Diffusion have neural networks that are made up of simulated neurons? Or are you saying that we haven't simulated an actual human brain? Because the former is factually incorrect, and I never claimed the latter. Please explain exactly what "hype" you believe I'm buying into? Because I don't think you have any clue what it is you think I'm wrong about. You just really don't want me to be right.

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

I think they simulate what some people think neurons are like. I mean, I guess you can get the binary neurons fine but there are analog neurons (and that is something that has just now been proven). But there are so many value inputs in the human brain that we haven't isolated, so much about it we haven't mapped. We don't even know how the electricity is encoded. So no, I don't think what you're calling a "neural network" is ACTUALLY simulating the human brain.

The hype you're buying into is that AI will improve our lives just by existing. Thing is, any new tech is a weapon in the hands of the rich whether it's available to the common man or not. We need to focus on setting the rules for the rich and enforcing the rules we have. Copyright, which is also a weapon in the hands of the rich yes, has aspects which are made to protect the common man and we need to enforce those to keep the rich in line while we have them. If someday we junk copyright, it needs to be as a whole. We can't go chucking copyright for small time authors while the courts are still allowing Disney to keep Mickey Mouse out of the public domain, which is what you suggest doing when you suggest copyright should be ignored so that the common man can make their own AI.

I think I've softened quite a bit with your arguments, honestly. It's unfair to say I just don't want you to be right. My position remains that I think copyright is a fair place for limitation on AI training.

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago (2 children)

So most of my opinions about what AI can do aren't about hype at all, but about what I've personally experienced with it firsthand. The news, frankly, is just as bad a source about AI as the marketing departments of AI companies, because the news is primarily written by people who feel threatened by its existence and are rationalizing reasons that it's bad, as well as amplifying bad things that they hear and, in the best case, reporting on it without really understanding what it actually does. The news is partly why you're connecting what's happening with that WGA/SAG-AFTRA contract; nothing I've said here supports people losing their existing rights to their own likenesses, and the reason they're trying to slip it into the contracts is because even under existing copyright law, AI isn't a get-out-of-jail-free card to produce copyrighted works despite the fact that you can train it on them.

At any rate, here are a few of my personal experiences with using AI:

  • I've used AI art generation to create background art for a video game that I made with my kids over winter break, and because of that, it looks really good. It would have otherwise looked pretty bad.
  • For my online tabletop roleplaying campaign, I generate images of original locations and NPCs.
  • I subscribe to ChatGPT and because of that I have access to the GPT-4 version, which is leaps and bounds smarter than GPT-3 (although it's still like talking to some kind of savant who knows a whole lot of information but has trouble with certain types of reasoning). While ChatGPT isn't something you should use to write your legal briefs (I could have told you that before that dumbass lawyer even tried it), it's an amazing replacement for Google, which nowadays involves a lot of fiddling and putting quotation marks around things just so you can get a result that's actually addressing what you want to know as opposed to "here's a bunch of vaguely related shit that has almost nothing to do with what you asked." That alone has improved my life.
  • It's also great at helping you figure out what something is called. "I'm looking for a thing that does X and Y, but I don't know what it's called." Google is absolutely terrible at that.
  • I've used ChatGPT to generate custom one-shot adventure ideas for my online roleplaying game. Rather than having to adapt an existing adventure module to what I'm doing, if I give it information about my campaign, it'll come up with something that utilizes my existing locations, NPCs, and setting. (Incidentally, when people say that AI "can't be creative", they're essentially using a tautological definition of creativity that amounts to "AI isn't creative because only humans can be creative, therefore AI can't be creative." AI, in my experience, is very creative.) Compare this to the common advice that people give to game masters who can't come up with an idea: take someone else's story, change a few things, and drop it into your campaign. ChatGPT is also amazing at worldbuilding.

This kind of thing is why I'm excited about AI -- it's improving my life in a big way right now. None of what I've done with it is "hype". I don't care that Elon Musk's dumb ass is starting his own AI company, or what tech company marketing divisions have to say about it, or what some MBA CEO's wild guess about what we'll be using it for in 5 years is.

[–] Ragnell@kbin.social 1 points 1 year ago

It's nice that your life is better, but that doesn't change that these AIs were trained by being fed the work of creatives who were never compensated for that work.

And it doesn't change that on the high level and in the real world, they're pushing to put AI in places AI isn't ready to be, because they don't want to pay humans to do those jobs.

I mean, yeah, you don't care... but the rest of us do.

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

Waaaaait a minute. How is you thinking AI is good because it makes your life a bit better leisure-wise any different from me thinking it's a problem because it will make my life worse work-wise? You threw that at me saying I was worried about a small group, and here you are basing your excitement on it helping your niche hobbies?

Are you sure you're not projecting here? In this entire thread, have you budged an inch based on all the people arguing against your original post? Or are you just refusing to admit that it could cause trouble in the world for people's livelihoods because you get to have fun with it?

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago (1 children)

When did I refuse to admit automation causes problems for people?

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

When did I refuse to admit it could help with anything?

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not sure why you're asking that. You literally just asked me if I'm refusing to admit that AI could cause trouble for people's livelihoods. I don't know where you even got that idea. I never asked you anything about whether you admit it could help with things, because that's irrelevant (and also it would be a pretty silly blanket assumption to make).

Are you sure you're not projecting here? In this entire thread, have you budged an inch based on all the people arguing against your original post?

Who am I supposed to be budging for? Of the three people here who are actually arguing with me, you're the only one who isn't saying they're going to slash my car tires and likening personal AI use to eating steak in terms of power usage (it's not even in the same ballpark), or claiming that Stable Diffusion doesn't use a neural network. I only replied to the other guy's most recent comment because I don't want to be swiftboated -- people will believe other people who confidently state something that they find validating, even if they're dead wrong.

We just seem to mostly have a difference of opinion. I don't get the sense that you're making up your own facts. And fundamentally, I'm not convinced of the idea that only a small group of people deserve laws protecting their jobs from automation, particularly not at the expense of the rest of us. If we want to grant people relief from having their jobs automated away, we need to be doing that for everybody, and the answer to that isn't copyright law.

And as far as AI being used to automate dangerous jobs, copyright isn't going to stop that at all. Tesla's dangerous auto-pilot function (honestly, I have no idea if that's a neural network or just a regular computer program) uses data that Tesla gathers themselves. Any pharmaceutical company that develops an AI for making medicines will train it on their own trade secrets. Same with AI surgeons, AI-operated heavy machinery, and so on. None of that is going to be affected by copyright, and public concerns about safety aren't going to get in the way of stockholders and their profits any more than they have in the past. If you want to talk about the dangers of overreliance on AI doing dangerous work, then by all means talk about that. This copyright fight, for those large companies, is a beneficial distraction.

[–] Ragnell@kbin.social 1 points 1 year ago* (last edited 1 year ago)

All right, let's go back to the original post. You said copyright being applied to materials used for AI training would lock poor people out of AI and make it so only corporations could use it.

This isn't true, because there's a wealth of public domain info out there.

Many of us pointed out that waiving copyright for AI training means that people who are being replaced by AI would have also had their work used to build the AI, which is an exploitation of their labor being used to eliminate their livelihood.

We argued about this and got on tangents, and ultimately you accused me of an anti-AI bias that is made to protect a "small group" of my peers and "damn" everyone else.

But ultimately, everyone else would just be required to keep their training to public domain works, or leave their lives the same. The group of my peers would have their lives worsened.

You haven't budged on this, this basic idea that training AI is so important that it is worth having those lives worsened. That it's so important we can't even give them a cut for the works already used.

And your examples for why AI is so important are... checks your comment Slightly easier websearch, being able to summarize stuff more easily, not having to draw or think up stories for your TTRPG, and... free background art on a video game you made for your kids.

Over this entire time you have budged on... acknowledging there is some trouble, but that the trouble is worth it and we still shouldn't try to use copyright protections to slow down the businesses who are ready to start downsizing or force them to at least pay people for work completed. I appreciate this acknowledgement, must've taken a lot of effort and soulsearching on your part.

So, yeah. I am sorry that I made you feel bad for saying that starving artists should be consigned to poverty--despite their work being used to make this tool--so that your children can have full background art on their free videogames. That's on me, man.

In all seriousness, of course I don't want to slash your tires or anything but come on. Copyright's not the final answer, but we can't just throw it away. It's a tool we have to make sure people get their due, and it is going to take way longer to make a new tool that helps everyone, so why would we waive the one tool we have while working on it?

If one author gets a meal out of copyright awards from an AI company, then yeah, it's worth applying copyright to it.

[–] throwsbooks@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

Oh, 100%. They're way too rudimentary. NNs alone don't go through the sense-think-act loops that a conscious autonomous agent requires. One day, maybe, but again, we're at the brain-matter-in-a-petri-dish stage.

I agree on the concepts thing too. People learn to paint by imitating what they see around them, their favourite artists, their favourite comics and cartoons. Then, over time with practice and experimentation, these things get encoded, but there's always that influence there somewhere.

Midjourney just has the benefit of being able to learn from way more imagery in a much shorter amount of time, and to practice way faster than any living human. So like, I get why artists are scared of it, but there's definitely a fundamental misunderstanding floating around about how these things work.