LittleLordLimerick

joined 1 year ago
[–] LittleLordLimerick@lemm.ee 1 points 1 year ago (1 children)

You seem to have the assumption that they’re not. And that “helping society” is anything more than a happy accident that results from “making big profits”.

It's not an assumption. There are academic researchers at universities working on developing these kinds of models as we speak.

Are you asking me whether it’s a good idea to give up the concept of “Privacy” in return for an image classifier that detects how much film grain there is in a given image?

I'm not wasting time responding to straw men.

[–] LittleLordLimerick@lemm.ee 2 points 1 year ago

It’s a statistical model. Given a sequence of words, there’s a set of probabilities for what the next word will be.

That is a gross oversimplification. LLMs operate on much more than just statistical probabilities. It's true that they predict the next word based on probabilities learned from their training data, but they also use layers of transformer blocks to process the context provided in a prompt and tease out meaningful relationships between words and phrases.

For example: Imagine you give an LLM the prompt, "Dumbledore went to the store to get ice cream and passed his friend Sam along the way. At the store, he got chocolate ice cream." Now, if you ask the model, "who got chocolate ice cream from the store?" it doesn't just blindly rely on statistical likelihood. There's no way you could argue that "Dumbledore" is a statistically likely word to follow the text "who got chocolate ice cream from the store?" Instead, it uses its understanding of the specific context to determine that "Dumbledore" is the one who got chocolate ice cream from the store.

So, it's not just statistical probabilities; the models have the ability to comprehend context and generate meaningful responses based on that context.
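The mechanism doing that context processing is attention. As a rough illustration (a minimal NumPy sketch with made-up toy embeddings, not any real model's weights), scaled dot-product attention lets every token weigh every other token in the prompt, which is how a name like "Dumbledore" earlier in the context can dominate the prediction later:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each position attends to every
    position, weighting values by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # softmax over each row (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy 4-token "sentence" with random 3-dim embeddings (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
out, w = scaled_dot_product_attention(x, x, x)

print(w.shape)         # (4, 4): one attention distribution per token
print(w.sum(axis=-1))  # each row sums to 1: a weighting over the whole context
```

Each output row is a context-aware mix of the whole sequence rather than a lookup of "most frequent next word," which is the difference being argued here.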

[–] LittleLordLimerick@lemm.ee 0 points 1 year ago

If enforcement means big tech companies have to throw out models because they used personal information without knowledge or consent, boo fucking hoo

A) this article isn't about a big tech company, it's about an academic researcher. B) he had consent to use the data when he trained the model. The participants later revoked their consent to have their data used.

[–] LittleLordLimerick@lemm.ee 1 points 1 year ago (1 children)

How is “don’t rely on content you have no right to use” literally impossible?

At the time they used the data, they had a right to use it. The participants later revoked their consent for their data to be used, after the model was already trained at an enormous cost.

[–] LittleLordLimerick@lemm.ee 0 points 1 year ago (3 children)

ok i guess you don’t get to use private data in your models too bad so sad

You seem to have an assumption that all AI models are intended for the sole benefit of corporations. What about medical models that can predict disease more accurately and more quickly than human doctors? Something like that could be hugely beneficial for society as a whole. Do you think we should just not do it because someone doesn't like that their data was used to train the model?

[–] LittleLordLimerick@lemm.ee 1 points 1 year ago (3 children)

There’s nothing that says AI has to exist in a form created from harvesting massive user data in a way that can’t be reversed or retracted. It’s not technically impossible to do that at all, we just haven’t done it because it’s inconvenient and more work.

What if you want to create a model that predicts, say, diseases or medical conditions? You have to train it on medical data; there's simply no way such a model could be created without using private data. Are you suggesting that we simply not build models like that? What if they can save lives and massively reduce medical costs? Should we scrap a massively expensive and successful medical AI model just because one person whose data was used in training wants it removed?

[–] LittleLordLimerick@lemm.ee 1 points 1 year ago

You still use TP if you have a bidet though

[–] LittleLordLimerick@lemm.ee 34 points 1 year ago (2 children)

I’m a socialist but not a tankie. Criticizing tankies != criticizing socialists

[–] LittleLordLimerick@lemm.ee 10 points 1 year ago

The anecdote proves nothing, because the model could have learned about the McGonagall character without ever being trained on the books: that character appears in a lot of fan fiction. So their point is invalid.

[–] LittleLordLimerick@lemm.ee 5 points 1 year ago (1 children)

Honestly, I think yes, it’s inevitable. The reason is that keeping up with constantly changing technologies requires learning how to do everything over again, and again, and again. It will get tiring eventually, and people will feel that learning the ins and outs of yet another social media app just isn’t worth it when they can already get by.

I say this as a software developer who sees a new tool or framework or language come out every year that’s bigger and better than the last, and I see the writing on the wall for myself. I’ll be outdated and just some old geezer who works on legacy tech stacks in 10-20 years, just like the guys working in COBOL or whatever now.

[–] LittleLordLimerick@lemm.ee 13 points 1 year ago (1 children)

I’m still calling it Twitter because it will piss off Elon Musk if everyone keeps calling it Twitter and I think that’s funny.

[–] LittleLordLimerick@lemm.ee 6 points 1 year ago

Just want to say that this is a fantastic answer. Pay attention to the parts about printing/downloading stuff. There are huge parts of America where you won't get a reliable cell signal sometimes for hours.
