this post was submitted on 17 Aug 2023
485 points (96.0% liked)


cross-posted from: https://nom.mom/post/121481

OpenAI could be fined up to $150,000 for each piece of infringing content. https://arstechnica.com/tech-policy/2023/08/report-potential-nyt-lawsuit-could-force-openai-to-wipe-chatgpt-and-start-over/#comments

[–] Swervish@lemmy.ml 62 points 1 year ago* (last edited 1 year ago) (3 children)

Not trying to argue or troll, but I really don't get this take, maybe I'm just naive though.

Like yea, fuck Big Data, but...

Humans do this naturally: we consume data, we copy data, sometimes for profit. When a program does it, people freak out?

edit: well, fuck me for taking 10 minutes to write my comment; seems this was already said and covered as I was typing mine lol

[–] QHC@lemmy.world 15 points 1 year ago (1 children)

It's just a natural extension of the concept that entities have some kind of ownership of their creation and thus some say over how it's used. We already do this for humans and human-based organizations, so why would a program not need to follow the same rules?

[–] FaceDeer@kbin.social 29 points 1 year ago (1 children)

Because we don't already do this. In fact, the raw knowledge contained in a copyrighted work is explicitly not copyrighted and may be used however people please. Only the specific expression of that knowledge can be copyrighted.

An AI model doesn't contain the copyrighted works that went into training it. It only contains the concepts that were learned from them.

[–] BURN@lemmy.world 6 points 1 year ago (2 children)

There’s no learning of concepts. That’s why models hallucinate so frequently. They don’t “know” anything; they’re doing a lot of math based on what they’ve seen before and essentially taking the best guess at what the next word is.

There very much is learning of concepts. This is completely provable. You can give it problems it has never seen before and it will come up with good solutions.
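
To make the "best guess at the next word" idea above concrete, here is a minimal sketch of a toy next-word guesser: a bigram counter that always picks the most frequent follower. The corpus and words are invented for illustration, and this is not how a real LLM works (those use neural networks over learned token representations), but the "pick the most likely continuation" step is the same in spirit:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of words, not ten.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def guess_next(word):
    # "Best guess": the most frequent word seen after `word`.
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # -> 'cat' (follows "the" twice, vs "mat" once)
```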

[–] SIGSEGV@sh.itjust.works 8 points 1 year ago (2 children)

Very much like humans do. Many people think that somehow their brain is special, but really, you're just neurons behaving as neurons do, which can be modeled mathematically.
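
For what it's worth, "modeled mathematically" isn't hand-waving: the artificial neuron these models are built from is just a weighted sum pushed through a nonlinearity. A minimal sketch with made-up weights (illustrative values only, not from any real network):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, squashed into (0, 1) by a
    # sigmoid: a crude mathematical model of a neuron's firing rate.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Three input signals with arbitrary example weights.
print(neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=-0.2))  # ~0.61
```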

[–] HelloHotel@lemmy.world -1 points 1 year ago (1 children)

This take often denies that entropy, soul or not, is critically important for the kinds of intelligence that aren't controlled by reward and punishment with an iron fist.

[–] SIGSEGV@sh.itjust.works 3 points 1 year ago

It sounds like you know English words but cannot compose them. I honestly cannot parse what you said.

[–] BURN@lemmy.world -2 points 1 year ago (1 children)

We can’t even map the entirety of the brain of a mouse due to the scale of how neurons work. Mapping a human brain 1:1 will eventually happen, and that’s likely going to coincide with when I’m convinced AI is capable of individual thought and actual intelligence.

[–] SIGSEGV@sh.itjust.works -3 points 1 year ago* (last edited 1 year ago) (1 children)

Just saw this today. You should check it out, nitwit: https://www.theguardian.com/science/2023/aug/15/scientists-reconstruct-pink-floyd-song-by-listening-to-peoples-brainwaves

Edit: "nitwit" was uncalled for, but I do think you are an ignorant person.

You aren't magical. You don't have a soul that talks to Jesus. You're a bunch of organized electrical signals: a machine. The fact that your machine is carbon-based doesn't make you special.

Edit: Downvote all you want, but we're all still animals. Most people don't even believe that simple fact. Then again, most people don't even understand how their cellphone works.

[–] BURN@lemmy.world 5 points 1 year ago (1 children)

I fundamentally disagree, and if that’s your take on humanity, I’m scared for our future.

There is a human element to us. I’m not spiritual at all. I believe when we die the lights just go out and we cease to exist. But there is undoubtedly a part of us that is still far from being replicated in a machine. I’m not saying it won’t happen, I’m saying we’re a long way from it and what we’re seeing out of current AI is nothing even close to resembling intelligence.

[–] SIGSEGV@sh.itjust.works 5 points 1 year ago (1 children)

So when it happens, you'll change your mind? My point is that what we have today is based on interactions in the human brain: neural networks. You can say, "They're just guessing the next word based on mathematical models", but isn't that exactly what you're doing?

Point to the reason why what comes out of your mouth is any different. Is it because your network is bigger and more complicated? If that's the case, GPT-4 is closer to being human than GPT-3 was, being a larger model.

I just don't get your point at all.
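
To make "guessing the next word based on mathematical models" concrete: at each step the model assigns every candidate word a score, turns the scores into a probability distribution with a softmax, and picks from it. A minimal sketch, where the words and scores are invented for illustration:

```python
import math

def softmax(scores):
    # Exponentiate and normalize so the scores sum to 1:
    # a probability distribution over candidate next words.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for the word after "the cat sat on the".
candidates = ["mat", "dog", "moon"]
scores = [3.2, 1.1, 0.4]

for word, p in zip(candidates, softmax(scores)):
    print(f"{word}: {p:.2f}")  # "mat" gets ~0.85 of the probability
```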

[–] PupBiru@kbin.social 1 points 1 year ago (1 children)

and if that is indeed the point: that the difference is simply size, then what does that law look like? surely it would need to specify a size of the relevant neural network that is able to derive works

but that’s then just an arbitrary number because we just don’t know what it would be

[–] SIGSEGV@sh.itjust.works 3 points 1 year ago* (last edited 1 year ago) (1 children)

I don't even think that matters much, right? Current LLMs already out-compete humans at many tasks. I think we're already past the threshold, at least in some regards. That is to say, I don't think there is a hard line because it depends on what your testing criteria are.

[–] PupBiru@kbin.social 2 points 1 year ago

couldn’t agree more!

[–] relative_iterator@sh.itjust.works 0 points 1 year ago (2 children)

It might be nice if we reserve some things just for humans 🤷🏻‍♂️

[–] MossyFeathers@pawb.social 3 points 1 year ago* (last edited 1 year ago) (1 children)

That doesn't sit well with me. I agree that, to some extent, artists and writers should be compensated for their work, even if it just means those interested in creating training sets have to buy a copy of each work they intend to use, so long as it can't be legally acquired for free (similar to how a human has to buy, """buy""" and/or borrow a book if they want to study it).

However, at the same time, this mindset opens the door to actual racism: not the silly "hurr, my skin color's better than your skin color" bullshit we call racism, but the much nastier, "there are actual differences between you and me, which I will use to justify my poor behavior" kinda racism. And when your academic partner has the potential to outclass you in nearly every way (assuming most general AI would decide to work in STEM fields), it's much easier to justify your bigotry. That bigotry may then be learned by the AI and spit back at you; but this time, the accusations of inferiority may truly be justified.

I mean, think of it this way: what if someone created a general AI that displays all the characteristics of a human, to the point of being seemingly indistinguishable from one? Should they not be considered a person? Should they not then be given the same rights as any other person?

Maybe it's not possible to create a general AI, but maybe we eventually encounter aliens; the universe is a big place after all. Should they not also be given the same rights as a person?

The AI problem is so much larger than I think most people realize. The people making these are trying to create life, even if they don't realize it. Just because it's a program or amalgamation of programs that run on silicon and copper doesn't mean it's any less alive than an amalgamation of programs running on chemical reactions and electric impulses. It's just a different kind of alive, like how a car that uses electricity and a car that uses internal combustion are both cars, they just have different ways of doing the same thing. That's not to say current AI is anywhere near as intelligent as a dog, cat or human, but it has the potential to one day become truly intelligent.

It's also easy to assume that these are all issues that will be solved in the future, but we have plenty of examples even now of how kicking the can down the street isn't really an intelligent strategy. Look at how well that can-kicking is turning out in regards to climate change, wealth inequality, healthcare, LGBT+ and BIPOC rights, etc. Regarding AI, I believe there are a lot of hang-ups that we as humans have, whether conscious or unconscious, when it comes to tolerating beings unlike us (we're still struggling with the skin-color racism); and that it's better to start working on them now than to wait until Mr. Roboto has his chassis smashed by a bunch of neo-luddites who insist that he's just a bunch of circuits formed into a crude imitation of humanity.

Edit: you could also make the argument that choosing not to extend personhood to an intelligent machine opens the door to prejudice and bigotry in regards to transhumanism. At some point, we as humans will start modifying ourselves, via either meat or circuitry, and when that happens, there'll be plenty of people trying to argue that Joe isn't a human because he's had his whole brain replaced with a computer. It doesn't matter if the surgeons replaced his brain step-by-step to ensure Joe himself wasn't lost in the process; they'll argue that since the brain is what makes Joe, "Joe", then he must not be human because his brain is no longer organic.

Edit 2: Also, I apologize if I misinterpreted your statement. I've seen way too many people saying that AI should never, ever be treated as a person.

[–] adrian783@lemmy.world 1 points 1 year ago

this is a super-reach, why don't we deal with AI indistinguishable from human when it happens.

right now what we have is a language model that is very distinguishable from human so it doesn't get any human considerations.

if a monkey or chicken created an artwork, it doesn't have copyrights, because it's not human either.

[–] HelloHotel@lemmy.world 0 points 1 year ago* (last edited 1 year ago)

I like that argument as it applies to our AI, which isn't meant to reject bad ideas or motifs, but to never have a bad idea in the first place. This setup results in the bot's path of least resistance being to copy someone's homework. Nobody wants the bot to do that.

Someday we may have AI that this argument is harder to apply to.

I'll attempt to explain: text generators have a "most correct" output that looks and behaves similar to pressing the first keyboard-suggested word repeatedly. We add noise: on a dice roll, the bot is forced to add a random letter to its output, like the above example if you typed a random 5-letter word every so often instead.
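
In code, that dice roll is roughly this: a toy sketch of greedy versus noisy selection, at the word level rather than the letter level described above, with made-up candidates and probabilities (not any particular model's actual sampler):

```python
import random

# Candidate next words with made-up model probabilities.
candidates = [("mat", 0.7), ("rug", 0.2), ("moon", 0.1)]

def pick(noise=0.0):
    if random.random() >= noise:
        # The "most correct" output: always take the top candidate,
        # like pressing the first keyboard suggestion repeatedly.
        return max(candidates, key=lambda c: c[1])[0]
    # The dice roll: take a weighted random pick instead, which is
    # what keeps the output from being identical every time.
    words, weights = zip(*candidates)
    return random.choices(words, weights=weights)[0]

print([pick(noise=0.3) for _ in range(10)])
```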