If they can base their business on stealing, then we can steal their AI services, right?
Pirating isn’t stealing but yes the collective works of humanity should belong to humanity, not some slimy cabal of venture capitalists.
Here's an experiment for you to try at home. Ask an AI model a question, copy a sentence or two of what it gives back, and paste it into a search engine. The results may surprise you.
And stop comparing AI to humans but then giving AI models more freedom. If I wrote a paper I'd need to cite my sources. Where the fuck are your sources ChatGPT? Oh right, we're not allowed to see that but you can take whatever you want from us. Sounds fair.
The argument that these models learn in a way that's similar to how humans do is absolutely false, and the idea that they discard their training data and produce new content is demonstrably incorrect. These models can and do regurgitate their training data, including copyrighted characters.
And these things don't learn styles, techniques, or concepts. They effectively learn statistical averages and patterns and collage them together. I've gotten to the point where I can guess which model of image generator was used based on the same repeated mistakes they make every time. Take a look at any generated image, and you won't be able to identify where the light source is, because the shadows come from all different directions. These things don't understand the concept of a shadow or lighting; they just know that statistically lighter pixels are followed by darker pixels of the same hue, and that some places have collections of lighter pixels.

I recently heard about an AI that scientists had trained to identify pictures of wolves, and it was working with incredible accuracy. When they went in to figure out how it was distinguishing wolves from dogs like huskies so well, they found that it wasn't looking at the wolves at all. 100% of the images of wolves in its training data had snowy backgrounds, so it was simply searching for concentrations of white pixels (and therefore snow) to determine whether or not a picture was of wolves.
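Here's a toy sketch of that wolf story, purely illustrative (the "images" are made-up lists of pixel brightness values, and the "classifier" is just a brightness threshold). It gets 100% accuracy on its training set without ever learning anything about wolves:

```python
# Toy illustration of the "snow, not wolves" failure: a "classifier" that
# only looks at average background brightness. All data here is invented.
def mean_brightness(image):
    return sum(image) / len(image)

# Fake training set: wolves always photographed on snow (bright pixels),
# huskies indoors (dark pixels). The spurious correlation is perfect.
training = [
    ([230, 240, 250, 245], "wolf"),   # snowy background
    ([235, 228, 244, 251], "wolf"),
    ([40, 35, 60, 55], "husky"),      # indoor background
    ([50, 45, 30, 70], "husky"),
]

THRESHOLD = 150  # "learned" from the data: bright => wolf

def classify(image):
    return "wolf" if mean_brightness(image) > THRESHOLD else "husky"

# 100% accuracy on the training data...
assert all(classify(img) == label for img, label in training)

# ...but a wolf photographed on grass (dark pixels) is misclassified,
# because the model never looked at the animal at all.
wolf_on_grass = [60, 55, 70, 65]
print(classify(wolf_on_grass))  # prints "husky"
```

Real image models are vastly more complex than a threshold, but the failure mode is the same: they latch onto whatever statistical shortcut is in the data.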
Even if they learned exactly like humans do, like so fucking what, right!? Humans have to pay EXORBITANT fees for higher education in this country. Arguing that your bot gets socialized education before the people do is fucking absurd.
That seems more like an argument for free higher education rather than restricting what corpuses a deep learning model can train on.
Basing your argument around how the model or training system works doesn't seem like the best way to frame your point to me. It invites a lot of mucking about in the details of how the systems do or don't work, how humans learn, and what "learning" and "knowledge" actually are.
I'm a human as far as I know, and it's trivial for me to regurgitate my training data. I regularly say things that are either direct references to things I've heard, or accidental copies of them, sometimes with errors.
Would you argue that I'm just a statistical collage of the things I've experienced, seen or read?
My brain has as many copies of my training data in it as the AI model, namely zero, but "Captain Picard of the USS Enterprise sat down for a rousing game of chess with his friend Sherlock Holmes, and then Shakespeare came in dressed like Mickey Mouse and said 'to be or not to be, that is the question, for tis nobler in the heart' or something". Direct copies of someone else's work, as well as multiple copyright infringements.
I'm also shit at drawing with perspective. It comes across like a drunk toddler trying their hand at cubism.
Arguing about how the model works or the deficiencies of it to justify treating it differently just invites fixing those issues and repeating the same conversation later. What if we make one that does work how humans do in your opinion? Or it properly actually extracts the information in a way that isn't just statistically inferred patterns, whatever the distinction there is? Does that suddenly make it different?
You don't need to get bogged down in the muck of the technical to say that even if you concede every technical point, we can still say that a non-sentient machine learning system can be held to different standards with regards to copyright law than a sentient person. A person gets to buy a book, read it, and then carry around that information in their head and use it however they want. Not-A-Person does not get to read a book and hold that information without consent of the author.
Arguing why it's bad for society for machines to mechanise the production of works inspired by others is more to the point.
Computers think the same way boats swim. Arguing about the difference between hands and propellers misses the point that you don't want a shrimp boat in your swimming pool. I don't care why they're different, or that it technically did or didn't violate the "free swim" policy, I care that it ruins the whole thing for the people it exists for in the first place.
I think all the AI stuff is cool, fun and interesting. I also think that letting it train on everything regardless of the creators wishes has too much opportunity to make everything garbage. Same for letting it produce content that isn't labeled or cited.
If they can find a way to do and use the cool stuff without making things worse, they should focus on that.
The whole point of copyright in the first place, is to encourage creative expression, so we can have human culture and shit.
The idea of a "teensy" exception so that we can "advance" into a dark age of creative pointlessness and regurgitated slop, where humans doing the fun part has been made "unnecessary" by the unstoppable progress of "thinking" machines, would be hilarious, if it weren't depressing as fuck.
The whole point of copyright in the first place, is to encourage creative expression
...within a capitalistic framework.
Humans are creative creatures and will express themselves regardless of economic incentives. We don't have to transmute ideas into capital just because they have "value".
Sorry buddy, but that capitalistic framework is where we all have to exist for the foreseeable future.
Giving corporations more power is not going to help us end that.
I'll train my AI on just the Bee Movie. Then I'm going to ask it "can you make me a movie about bees?" When it spits out the whole movie, I can just watch it or sell it or whatever, it was a creation of my AI, which learned just like any human would! Of course I didn't even pay for the original copy to train my AI, it's for learning purposes, and learning should be a basic human right!
You drank the kool-aid.
The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works. This has been suppressed by OpenAI in a rather brute force kind of way, by prohibiting the prompts that have been found so far to do this (e.g. the infamous "poetry poetry poetry..." ad infinitum hack), but the possibility is still there, no matter how much they try to plaster over it. In fact there are some people, much smarter than me, who see technical similarities between compression technology and the process of training an LLM, calling it a "blurry JPEG of the Internet"... the point being, you wouldn't allow distribution of a copyrighted book just because you compressed it in a ZIP file first.
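The ZIP analogy can be made concrete in a few lines (the quoted text is just a stand-in, and note that zlib is lossless where the "blurry JPEG" comparison is to lossy compression, but the legal point is the same either way):

```python
# Minimal sketch of the "ZIP file" point: compression transforms the bytes
# beyond recognition, but the original work is perfectly recoverable, so
# distributing the compressed form is still distributing the work.
import zlib

text = b"It was the best of times, it was the worst of times..."  # stand-in for a copyrighted work

compressed = zlib.compress(text, level=9)
assert compressed != text                    # looks nothing like the original...
assert zlib.decompress(compressed) == text   # ...but IS the original, losslessly

print(len(text), len(compressed))
```

Nobody would accept "but the bytes on the wire were different" as a defence for distributing that ZIP, which is the point being made about weights that can reproduce verbatim passages.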
The problem with your argument is that it is 100% possible to get ChatGPT to produce verbatim extracts of copyrighted works.
Exactly! This is the core of the argument The New York Times made against OpenAI. And I think they are right.
"but how are we supposed to keep making billions of dollars without unscrupulous intellectual property theft?! line must keep going up!!"
Look... All I have to say is... Support the Internet Archive!
(please)
This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.
Like fuck it is. An LLM "learns" by memorization and by breaking down training data into its component tokens, then calculating the weights between these tokens. This allows it to produce an output that resembles (but may or may not perfectly replicate) its training dataset, but produces no actual understanding or meaning--in other words, there's no actual intelligence, just really, really fancy fuzzy math.
Meanwhile, a human learns by memorizing training data, but also by parsing the underlying meaning and breaking it down into the underlying concepts, and then by applying and testing those concepts, and mastering them through practice and repetition. Where an LLM would learn "2+2 = 4" by ingesting tens or hundreds of thousands of instances of the string "2+2 = 4" and calculating a strong relationship between the tokens "2+2," "=," and "4," a human child would learn 2+2 = 4 by being given two apple slices, putting them next to another pair of apple slices, and counting the total number of apple slices to see that they now have 4 slices. (And then being given a treat of delicious apple slices.)
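That token-association point can be shown with a deliberately dumb toy model (a bigram counter, nothing like a real transformer, and the corpus is made up), which "answers" 2+2 without ever doing arithmetic:

```python
# Toy sketch of the statistical-association view of LLM "learning":
# count which token follows which in the training text, then "answer"
# by emitting the most frequent follower. No arithmetic is ever done.
from collections import Counter, defaultdict

corpus = "2+2 = 4 . 2+2 = 4 . 2+2 = 4 . 3+3 = 6 ."
tokens = corpus.split()

# Build a table of next-token frequencies from adjacent token pairs.
follows = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):
    follows[a][b] += 1

def predict(token):
    # Return whichever token most often followed this one in training.
    return follows[token].most_common(1)[0][0]

print(predict("2+2"))  # "=" -- it has only ever seen the pattern
print(predict("="))    # "4" -- the most frequent follower, not a computed sum
```

A real LLM replaces the frequency table with billions of learned weights over long contexts, but the "answer" is still the statistically likely continuation, not a calculation.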
Similarly, a human learns to draw by starting with basic shapes, then moving on to anatomy, studying light and shadow, shading, and color theory, all the while applying each new concept to their work, and developing muscle memory to allow them to more easily draw the lines and shapes that they combine to form a whole picture. A human may learn from other people's drawings during the process, but at most they may process a few thousand images. Meanwhile, an LLM learns to "draw" by ingesting millions of images--without obtaining the permission of the person or organization that created those images--and then breaking those images down to their component tokens, and calculating weights between those tokens. There's about as much similarity between how an LLM "learns" compared to human learning as there is between my cat and my refrigerator.
And YET FUCKING AGAIN, here's the fucking Google Books argument. To repeat: Google Books used a minimal portion of the copyrighted works, and was not building a service to compete with book publishers. Generative AI is using the ENTIRE COPYRIGHTED WORK for its training set, and is building a service TO DIRECTLY COMPETE WITH THE ORGANIZATIONS WHOSE WORKS THEY ARE USING. They have zero fucking relevance to one another as far as claims of fair use. I am sick and fucking tired of hearing about Google Books.
EDIT: I want to make another point: I've commissioned artists for work multiple times, featuring characters that I designed myself. And pretty much every time I have, the art they make for me comes with multiple restrictions: for example, they grant me a license to post it on my own art gallery, and they grant me permission to use portions of the art for non-commercial uses (e.g. cropping a portion out to use as a profile pic or avatar). But they all explicitly forbid me from using the work I commissioned for commercial purposes--in other words, I cannot slap the art I commissioned on a T-shirt and sell it at a convention, or make a mug out of it. If I did so, that artist would be well within their rights to sue the crap out of me, and artists charge several times as much to grant a license for commercial use.
In other words, there is already well-established precedent that even if something is publicly available on the Internet and free to download, there are acceptable and unacceptable use cases, and it's broadly accepted that using other people's work for commercial use without compensating them is not permitted, even if I directly paid someone to create that work myself.
Bullshit. AI are not human. We shouldn't treat them as such. AI are not creative. They just regurgitate what they are trained on. We call what it does "learning", but that doesn't mean we should elevate what they do to be legally equal to human learning.
It's this same kind of twisted logic that makes people think Corporations are People.
The tweet is good; your body argument is completely wrong.
Disagree. These companies are exploiting an unfair power dynamic they created that people can't say no to, to make an ungodly amount of money for themselves without compensating people whose data they took without telling them. They are not creating a cool creative project that collaboratively comments on or remixes what other people have made, they are seeking to gobble up and render irrelevant everything that they can, for short term greed. That's not the scenario these laws were made for. AI hurts people who have already been exploited and industries that have already been decimated. Copyright laws were not written with this kind of thing in mind. There are potentially cool and ethical uses for AI models, but open ai and google are just greed machines.
Edited * THRICE because spelling. oof.
Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.
AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.
AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such one could argue it does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.
Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.
See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work. More specifically, uses that may cause damage to the copyright owner or society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There's a mechanism for individual copyright owners to grant rights to specific uses: it's called licensing, and should be mandatory in my view for the development of proprietary LLMs at least.
TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.
This process is akin to how humans learn...
I'm so fucking sick of people saying that. We have no fucking clue how humans LEARN. Aka gather understanding, aka how cognition works or what it truly is. On the contrary, we can deduce that it probably isn't very close to human memory/learning/cognition/sentience (any other buzzwords that are stand-ins for things we don't understand yet), considering human memory is extremely lossy and tends to infer its own bias, as opposed to LLMs, which do neither and religiously follow patterns, to a fault.
It's quite literally a text prediction machine that started its life as a translator (and still does amazingly at that task), it just happens to turn out that general human language is a very powerful tool all on its own.
I could go on and on as I usually do on lemmy about AI, but your argument is literally "Neural network is theoretically like the nervous system, therefore human", I have no faith in getting through to you people.
Not even stealing cheese to run a sandwich shop.
Stealing cheese to melt it all together and run a cheese shop that undercuts the original cheese shops they stole from.
If ChatGPT was free I might see their point but it's not so no. If you're making money from someone's work you should pay them.
You know, those obsessed with pushing AI would do a lot better if they dropped the patronizing tone in every single one of their comments defending them.
It's always fun reading "but you just don't understand".
As others have said, it isn't inspired always, sometimes it literally just copies stuff.
This feels like it was written by someone who invested their money in AI companies because they're worried about their stocks
Considering that original works are discarded, it's strange how effective they are at plagiarizing them.
Studied AI at uni. I'm also a cyber security professional. AI can be hacked or tricked into exposing training data. Therefore your claim about it disposing of the training material is totally wrong.
Ask your search engine of choice what happened when Gippity was asked to print the word "book" indefinitely. Answer: it printed training material after printing the word book a couple hundred times.
Also my main tutor in uni was a neuroscientist. Dude straight up told us that the current AI was only capable of accurately modelling something as complex as a dragonfly. For larger organisms it is nowhere near an accurate recreation of a brain. There are complexities in our brain chemistry that simply aren't accounted for in a statistical inference model, and definitely not in the current GPT models.
Generative AI is not 'influenced' by other people's work the way humans are. A human musician might spend years covering songs they like and copying or emulating the style, until they find their own style, which may or may not be a blend of their influences, but crucially, they will usually add something. AI does not do that. The idea that AI functions the same as human artists, by absorbing influences and producing their own result, is not only fundamentally false, it is dangerously misleading. To portray it as 'not unethical' is even more misleading.
Generative AI does not work like this. They're not like humans at all; it will regurgitate whatever input it receives, like how Google can't stop Gemini from telling people to put glue in their pizza. If it really worked like that, there wouldn't be these broad and extensive policies within tech companies against using it with sensitive company data, like data-protection compliance rules. The day that a health insurance company manager says, "sure, you can feed Chat-GPT medical data" is the day I trust genAI.
"This process is akin to how humans learn... The AI discards the original text, keeping only abstract representations..."
Now I sail the high seas myself, but I don't think Paramount Studios would buy anyone's defence they were only pirating their movies so they can learn the general content so they can produce their own knockoff.
Yes artists learn and inspire each other, but more often than not I'd imagine they consumed that art in an ethical way.
This process is akin to how humans learn by reading widely and absorbing styles and techniques, rather than memorizing and reproducing exact passages.
Machine learning algorithms are not people and are not ingesting these works the same way a person does. This argument is brought up all the time and just doesn't ring true. You're defending the unethical use of copyrighted works by a giant corporation with a metaphor that doesn't have any bearing on reality; in an age where artists are already shamefully undervalued. Creating art is a human process with the express intent of it being enjoyed by other humans. Having an algorithm do it is removing the most important part of art; the humanity.
Are the models that OpenAI creates open source? I don't know enough about LLMs, but if ChatGPT wants exemptions from the law, it should result in a public good (emphasis on public).
Nothing about OpenAI is open-source. The name is a misdirection.
If you use my IP without my permission and profit from it, then that is IP theft, whether or not you republish a plagiarized version.
Kids pay for books, openAI should also pay for the material access used for training.
The joke is of course that "paying for copyright" is impossible in this case. ONLY the large social media companies that own all the comments and content that has accumulated by the community have enough data to train AI models. Or sites like stock photo libraries or deviantart who own the distribution rights for the content. That means all copyright arguments practically argue that AI should be owned by big corporations and should be inaccessible to normal people.
Basically the "means of generation" will be owned by the capitalists, since they are the only ones with the economic power to license these things.
That is basically the worst case scenario. Not only will the value of work diminish greatly, the advances in productivity will also be only accessible to big capitalists.
Of course, that is basically inevitable anyway. Why wouldn't they want this? It's just sad seeing the stupid morons arguing for this as if they had anything to gain.
I thought the larger point was that they're using plenty of sources that do not lie in the public domain. Like if I download a textbook to read for a class instead of buying it, I could be prosecuted for stealing. And they've downloaded and read millions of books without paying for them.
I don't think LLMs should be taken down, it would be impossible for that to happen. I do, however think it should be forced into open source.
Am I the only person that remembers that it was "you wouldn't steal a car" or has everyone just decided to pretend it was "you wouldn't download a car" because that's easier to dunk on.
People remember the parody, which is usually modified to be more recognizable. Like Darth Vader never said "Luke, I am your father"; in the movie it's actually "No, I am your father".
While I agree that using copyrighted material to train your model is not theft, text that model produces can very much be plagiarism and OpenAI should be on the hook when it occurs.