this post was submitted on 13 Jul 2023
31 points (100.0% liked)

I know a lot of people want to interpret copyright law so that allowing a machine to learn concepts from a copyrighted work is copyright infringement, but what people need to consider is that all that's going to do is keep AI out of the hands of regular people and place it specifically in the hands of people and organizations who are wealthy and powerful enough to train it for their own use.

If this isn't actually what you want, then what's your game plan for placing copyright restrictions on AI training that will actually work? Have you considered how it's likely to play out? Are you going to be able to stop Elon Musk, Mark Zuckerberg, and the NSA from training an AI on whatever they want and using it to push propaganda on the public? As far as I can tell, all that copyright restrictions will accomplish is to concentrate the power of AI (which we're only beginning to explore) in the hands of the sorts of people who are the least likely to want to do anything good with it.

I know I'm posting this in a hostile space, and I'm sure a lot of people here disagree with my opinion on how copyright should (and should not) apply to AI training, and that's fine (the jury is literally still out on that). What I'm interested in is what your end game is. How do you expect things to actually work out if you get the laws that you want? I would personally argue that an outcome where Mark Zuckerberg gets AI and the rest of us don't is the absolute worst possibility.

[–] Ragnell@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yeah, we're on different wavelengths. But I do have over twenty years in cyber transport and electronics. I know the first four layers in and out, including the physical layer that it seems just about all programmers forget about completely.

It's not learning. It's not reading. It's not COMPREHENDING. It is processing. It is not like a person.

I admit, I'm firing from any direction I can get an angle at, because this idea that these programs are actual AGI and are comparable to humanity is, well... dangerous. There are people with power and influence who want to put these things in areas that WILL get people hurt. There are people who are dying to put them to work doing every bit of writing from scripts to NOTAMs, and they are horrifically unreliable because they have no way of verifying the ACCURACY of what they write. They do not have the ability to make a judgement, which is a key component of human thinking. They can only favor the set result coming through the logic gate. If A and B enter, B comes out. If A and A enter, A comes out. It has no way to evaluate whether A or B is the actual answer.
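
To put that in concrete terms, here's a toy illustration in Python (pure illustration, not any real system's code):

```python
# A logic gate is a fixed mapping from inputs to outputs.
# Same inputs in, same output out, every time.
def and_gate(a: bool, b: bool) -> bool:
    return a and b

def or_gate(a: bool, b: bool) -> bool:
    return a or b

# The gate never asks whether its output is the "actual answer";
# it just applies the rule it was wired with. Scale that up to
# billions of operations and you still have processing, not judgement.
print(and_gate(True, False))  # False
print(or_gate(True, False))   # True
```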

You call it a small group of my peers, but everyone is in trouble because people with money are SEVERELY overestimating the capabilities of these programs. The danger is not that AI will take over the world, but that idiots will hand AI the world and AI will tank it because AI does not come with the capabilities needed to make actual decisions.

So yeah, I bring up the WGA/SAG-AFTRA strike. Because that happens to be the best known example of the harm being done not by the AI, but by the people who have too much faith in the AI and are ready to replace messy humans of all stripes with it.

And I argue with you, because you have too much faith in the AI. I'm not impressed by your degree to be perfectly honest because in my years in the trade I have known too many people with that degree who think they know way more than they do and end up having to rely on people like me to keep them grounded in what actually can be accomplished.

[–] IncognitoErgoSum@kbin.social 3 points 1 year ago (1 children)

What, specifically, do you think I'm wrong about?

If it's the future potential of AI, that's just a guess. AGI could be 100 years away (or financially impossible) just as easily as it could be 5 years away. AGI is still in the future, and nobody is really qualified to guess when it'll come to fruition.

If you think I'm wrong about the present potential of AI, I've already seen individuals with no budget use it to express themselves in ways that would have required an entire team and lots of money, and that's where I believe its real potential lies right now. That is, it opens up the possibility for regular people to express themselves in ways that were impossible for them before. If Disney starts replacing animators with AI, I'll be right there with you boycotting them. AI should be for everyone, not for large corporations that can already afford to express themselves however they want.

If you think I'm wrong that AIs like ChatGPT and Stable Diffusion do their computing with simulated neurons, let me know and I'll try to find some literature about it from the source. I've had a lot of AI haters confidently tell me that it doesn't (including in this thread), and I don't know if you're in that camp or not.
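
For clarity, when I say "simulated neurons," I mean something like this: a weighted sum of inputs passed through a nonlinear activation function. Here's a minimal sketch in Python (made-up numbers, not any production model's actual code):

```python
import math

# One simulated neuron: weight each input, sum, add a bias,
# then squash the result through a sigmoid activation.
# Networks like the ones behind ChatGPT and Stable Diffusion
# stack enormous numbers of units like this, with learned weights.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], bias=0.1))  # ~0.33
```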

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

I don't think we know enough about the human brain to actually replicate it in electronics.

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

So what does that mean? Do you not believe that AIs like ChatGPT and Stable Diffusion have neural networks that are made up of simulated neurons? Or are you saying that we haven't simulated an actual human brain? Because the former is factually incorrect, and I never claimed the latter. Please explain exactly what "hype" you believe I'm buying into? Because I don't think you have any clue what it is you think I'm wrong about. You just really don't want me to be right.

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

I think they simulate what some people think neurons are like. I mean, I guess you can simulate the binary neurons fine, but there are analog neurons too (and that is something that has only just now been proven). But there are so many value inputs in the human brain that we haven't isolated, so much about it we haven't mapped. We don't even know how the electricity is encoded. So no, I don't think what you're calling a "neural network" is ACTUALLY simulating the human brain.

The hype you're buying into is that AI will improve our lives just by existing. Thing is, any new tech is a weapon in the hands of the rich, whether it's available to the common man or not. We need to focus on setting the rules for the rich and enforcing the rules we have. Copyright, which is also a weapon in the hands of the rich, yes, has aspects that are made to protect the common man, and we need to enforce those to keep the rich in line while we have them. If someday we junk copyright, it needs to be junked as a whole. We can't go chucking copyright for small-time authors while the courts are still allowing Disney to keep Mickey Mouse out of the public domain, which is what you suggest when you say copyright should be ignored so that the common man can make their own AI.

I think I've actually softened quite a bit over the course of your arguments, honestly, so it's unfair to say I just don't want you to be right. My position remains that copyright is a fair place to put a limitation on AI training.

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago (2 children)

So most of my opinions about what AI can do aren't about hype at all, but about what I've personally experienced with it firsthand. The news, frankly, is just as bad a source about AI as the marketing departments of AI companies, because the news is primarily written by people who feel threatened by its existence and are rationalizing reasons that it's bad, amplifying bad things that they hear, and, in the best case, reporting on it without really understanding what it actually does. The news is partly why you're connecting what's happening with that WGA/SAG-AFTRA contract; nothing I've said here supports people losing their existing rights to their own likenesses, and the reason they're trying to slip it into the contracts is that even under existing copyright law, AI isn't a get-out-of-jail-free card to produce copyrighted works, despite the fact that you can train it on them.

At any rate, here are a few of my personal experiences with using AI:

  • I've used AI art generation to create background art for a video game that I made with my kids over winter break, and because of that, it looks really good. It would have otherwise looked pretty bad.
  • For my online tabletop roleplaying campaign, I generate images of original locations and NPCs.
  • I subscribe to ChatGPT, and because of that I have access to the GPT-4 version, which is leaps and bounds smarter than GPT-3 (although it's still like talking to some kind of savant who knows a whole lot of information but has trouble with certain types of reasoning). While ChatGPT isn't something you should use to write your legal briefs (I could have told you that before that dumbass lawyer even tried it), it's an amazing replacement for Google, which nowadays involves a lot of fiddling and putting quotation marks around things just so you can get a result that actually addresses what you want to know, as opposed to "here's a bunch of vaguely related shit that has almost nothing to do with what you asked." That alone has improved my life.
  • It's also great at helping you figure out what something is called. "I'm looking for a thing that does X and Y, but I don't know what it's called." Google is absolutely terrible at that.
  • I've used ChatGPT to generate custom one-shot adventure ideas for my online roleplaying game. Rather than having to adapt an existing adventure module to what I'm doing, if I give it information about my campaign, it'll come up with something that utilizes my existing locations, NPCs, and setting. (Incidentally, when people say that AI "can't be creative", they're essentially using a tautological definition of creativity that amounts to "AI isn't creative because only humans can be creative, therefore AI can't be creative." AI, in my experience, is very creative.) Compare this to the common advice that people give to game masters who can't come up with an idea: take someone else's story, change a few things, and drop it into your campaign. ChatGPT is also amazing at worldbuilding.

This kind of thing is why I'm excited about AI -- it's improving my life in a big way right now. None of what I've done with it is "hype". I don't care that Elon Musk's dumb ass is starting his own AI company, or what tech company marketing divisions have to say about it, or what some MBA CEO's wild guess about what we'll be using it for in 5 years is.

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

Waaaaait a minute. How is you thinking AI is good because it makes your life a bit better leisure-wise any different from me thinking it's a problem because it will make my life worse work-wise? You threw that at me, saying I was worried about a small group, and here you are basing your excitement on it helping your niche hobbies?

Are you sure you're not projecting here? In this entire thread, have you budged an inch based on all the people arguing against your original post? Or are you just refusing to admit that it could cause trouble in the world for people's livelihoods because you get to have fun with it?

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago (1 children)

When did I refuse to admit automation causes problems for people?

[–] Ragnell@kbin.social 1 points 1 year ago (1 children)

When did I refuse to admit it could help with anything?

[–] IncognitoErgoSum@kbin.social 1 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not sure why you're asking that. You literally just asked me if I'm refusing to admit that AI could cause trouble for people's livelihoods. I don't know where you even got that idea. I never asked you anything about whether you admit it could help with things, because that's irrelevant (and also it would be a pretty silly blanket assumption to make).

Are you sure you're not projecting here? In this entire thread, have you budged an inch based on all the people arguing against your original post?

Who am I supposed to be budging for? Of the three people here who are actually arguing with me, you're the only one who isn't threatening to slash my car tires, likening personal AI use to eating steak in terms of power usage (it's not even in the same ballpark), or claiming that Stable Diffusion doesn't use a neural network. I only replied to the other guy's most recent comment because I don't want to be swiftboated -- people will believe other people who confidently state something that they find validating, even if they're dead wrong.

We just seem to mostly have a difference of opinion. I don't get the sense that you're making up your own facts. And fundamentally, I'm not convinced of the idea that only a small group of people deserve laws protecting their jobs from automation, particularly not at the expense of the rest of us. If we want to grant people relief from having their jobs automated away, we need to be doing that for everybody, and the answer to that isn't copyright law.

And as far as AI being used to automate dangerous jobs, copyright isn't going to stop that at all. Tesla's dangerous Autopilot feature (honestly, I have no idea if that's a neural network or just a regular computer program) uses data that Tesla gathers themselves. Any pharmaceutical company that develops an AI for making medicines will train it on their own trade secrets. Same with AI surgeons, AI-operated heavy machinery, and so on. None of that is going to be affected by copyright, and public concerns about safety aren't going to get in the way of stockholders and their profits any more than they have in the past. If you want to talk about the dangers of overreliance on AI doing dangerous work, then by all means talk about that. This copyright fight, for those large companies, is a beneficial distraction.

[–] Ragnell@kbin.social 1 points 1 year ago* (last edited 1 year ago)

All right, let's go back to the original post. You said copyright being applied to materials used for AI training would lock poor people out of AI and make it so only corporations could use it.

This isn't true, because there's a wealth of public domain info out there.

Many of us pointed out that waiving copyright for AI training means that people who are being replaced by AI would have also had their work used to build the AI, which is an exploitation of their labor being used to eliminate their livelihood.

We argued about this and got on tangents, and ultimately you accused me of an anti-AI bias that is made to protect a "small group" of my peers and "damn" everyone else.

But ultimately, everyone else would just be required to keep their training to public domain works, or their lives would stay the same. The group of my peers would have their lives worsened.

You haven't budged on this, this basic idea that training AI is so important that it is worth having those lives worsened. That it's so important we can't even give them a cut for the works already used.

And your examples for why AI is so important are... *checks your comment* ...slightly easier web search, being able to summarize stuff more easily, not having to draw or think up stories for your TTRPG, and... free background art on a video game you made for your kids.

Over this entire time you have budged on... acknowledging there is some trouble, but you maintain that the trouble is worth it and that we still shouldn't use copyright protections to slow down the businesses that are ready to start downsizing, or to force them to at least pay people for work completed. I appreciate this acknowledgement; it must've taken a lot of effort and soul-searching on your part.

So, yeah. I am sorry that I made you feel bad for saying that starving artists should be consigned to poverty--despite their work being used to make this tool--so that your children can have full background art on their free video games. That's on me, man.

In all seriousness, of course I don't want to slash your tires or anything but come on. Copyright's not the final answer, but we can't just throw it away. It's a tool we have to make sure people get their due, and it is going to take way longer to make a new tool that helps everyone, so why would we waive the one tool we have while working on it?

If one author gets a meal out of copyright awards from an AI company, then yeah, it's worth applying copyright to it.

[–] Ragnell@kbin.social 1 points 1 year ago

It's nice that your life is better, but that doesn't change that these AIs were trained by being fed the work of creatives who were never compensated for that work.

And it doesn't change that, at the high level and in the real world, they're pushing to put AI in places it isn't ready to be, because they don't want to pay humans to do those jobs.

I mean, yeah, you don't care... but the rest of us do.