this post was submitted on 17 May 2024
251 points (96.0% liked)

Technology

[–] Allonzee@lemmy.world 117 points 6 months ago* (last edited 6 months ago) (3 children)

Humanity is surrounding itself with the harbingers of its own self-inflicted destruction.

All in the name of not only tolerated avarice, but celebrated avarice.

Greed is an even more harmful human impulse than hate. We've merely been propagandized to ignore greed, oh I'm sorry, "rational self-interest," as the personal failing and character deficit it is.

The widely accepted thought-terminating cliché of "it's just business" should never have been allowed to propagate. Humans should never feel comfortable leaving their empathy and decency at the door in our interactions, not for groups they hate, and not for groups they wish to extract value from. Cruelty is cruelty, and doing it to make moooaaaaaar money for yourself makes it significantly more damning, not less.

[–] Boozilla@lemmy.world 37 points 6 months ago (1 children)

Empathy and decency are scarce, precious commodities. But the ruthless, predatory "thought leaders" have been in charge ever since we clubbed the last Neanderthal.

"It Was Just Business" should be engraved on whatever memorial is left behind to mark our self-extinction.

[–] Allonzee@lemmy.world 8 points 6 months ago

I completely agree and have made similar points about that being our species' epitaph.

[–] bunnyfc@kbin.social 10 points 6 months ago

Star Trek: TNG had it pretty much right in terms of what's moral and what's desirable.

[–] iAvicenna@lemmy.world 1 points 6 months ago

Greed coupled with high ambition is the biggest problem. Neither on its own is as destructive.

[–] WhatIsThePointAnyway@lemmy.world 50 points 6 months ago* (last edited 6 months ago) (1 children)

Capitalism doesn’t care about humanity, only profits. Any self-imposed safeguards will always fall to profitability in a capitalist system. It’s why regulations and a government people trust are important.

[–] uriel238@lemmy.blahaj.zone 18 points 6 months ago

But, according to Das Kapital (and the last two centuries), capitalists will always capture the government and regulators, neutering their ability to fulfill their role. Greed and susceptibility to corruption will always drive the system to where it is today, in which only revolution will free us from the established order.

But even then, civil war rarely heralds a communist revolution; more often it brings a run of dictatorships, each overthrown by the next. We have to get very lucky, or be tired of fighting, before we can install a public-serving state. And we haven't yet tried pre-writing and publishing the new constitution.

[–] Veedem@lemmy.world 44 points 6 months ago* (last edited 6 months ago) (5 children)

I mean, is this stuff even really AI? It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out. I’m not sure this is the tech that will decide humanity is unnecessary.

It’s just rebranded machine learning, IMO.
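To make "calculating the most probable next word" concrete, here's a minimal sketch of the loop being described, assuming greedy decoding with GPT-2 via the Hugging Face transformers library (the model, prompt, and token count are arbitrary choices for illustration):

```python
# Minimal sketch of greedy next-token decoding: repeatedly take the
# single most probable next token and append it to the context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The future of humanity is", return_tensors="pt").input_ids

for _ in range(10):  # generate 10 more tokens
    with torch.no_grad():
        logits = model(input_ids).logits   # (1, seq_len, vocab_size)
    next_id = logits[0, -1].argmax()       # most probable next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Deployed chatbots typically sample from the probability distribution rather than always taking the argmax, but the loop has the same shape.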

[–] kromem@lemmy.world 16 points 6 months ago* (last edited 6 months ago) (1 children)

It has no awareness of what it’s saying. It’s simply calculating the most probable next word in a typical sentence and spewing it out.

Neither of these things are true.

It does create world models (see the Othello-GPT papers, Chess-GPT replication, and the Max Tegmark world model papers).

And while it is trained on predicting the next token, that doesn't mean it's doing so from there on out purely via surface statistics of "most probable," as your sentence suggests.

Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of "my pieces" and "opponent pieces."

And that was a toy model.
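For anyone wondering how that's measured: the papers train small "probes" on the model's hidden activations to see whether the board state can be read back out. A hypothetical sketch of a linear probe in that spirit (the sizes and names here are made up for illustration, not the papers' actual code):

```python
# Hypothetical linear-probe sketch: can a single linear map recover the
# board state (empty / mine / opponent's, per square) from the game
# model's hidden activations?
import torch
import torch.nn as nn

HIDDEN = 512    # assumed width of the game model's hidden states
SQUARES = 64    # Othello board squares
CLASSES = 3     # empty / my piece / opponent's piece

probe = nn.Linear(HIDDEN, SQUARES * CLASSES)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(hidden: torch.Tensor, board: torch.Tensor) -> float:
    """hidden: (batch, HIDDEN) activations cached from the game model.
    board: (batch, SQUARES) ground-truth labels in {0, 1, 2}."""
    logits = probe(hidden).view(-1, SQUARES, CLASSES)
    loss = loss_fn(logits.permute(0, 2, 1), board)  # CE wants (N, C, ...)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

High probe accuracy on held-out games is the evidence that the board state is linearly decodable from the activations, i.e. the model is tracking the game internally rather than just matching surface statistics of move sequences.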

[–] technocrit@lemmy.dbzer0.com 0 points 6 months ago* (last edited 6 months ago) (1 children)

Something like Othello-GPT, trained to predict the next move and only fed a bunch of moves, generated a virtual Othello board in its neural network and kept track of “my pieces” and “opponent pieces.”

AKA Othello-GPT chooses moves based on statistics.

Ofc it's going to use a virtual board in this process. Why would a computer ever use a real board?

There's zero awareness here.

[–] xthexder@l.sw0.com 2 points 6 months ago

Let me try putting this a different way: the machine is picking the next best word / action / chess move to output based on its past experience of the world (i.e. its training data). It's not just statistics; it's making millions of learned connections between words, and through association they start to have meaning.

Is this not exactly what the human brain does itself? Humans just have the advantage of multiple senses and having a physical agent (a body) to interact with the world.

The problem AI has is that it's got no basis in reality. It's like a human talking about fantasy things like unicorns: we've only ever experienced them through descriptions and art created from those descriptions, without any grounding in reality.

[–] Pilferjinx@lemmy.world 14 points 6 months ago (1 children)

The definitions and semantics are being stressed to the breaking point. We don't have a clear philosophy of mind for humans, let alone one that extends to other, non-human agents.

[–] dustyData@lemmy.world -3 points 6 months ago* (last edited 6 months ago)

We have three thousand years of tradition in philosophy of mind; we have a clear idea. It's just somewhat complex and difficult to grapple with, and there is still room for development and understanding. But this is like saying we don't have a clear philosophy of physics just because quantum physics is hard and there are things we don't fully understand yet. As for non-human agents, what even is that? Are dogs non-human agents? Fish? Viruses? Computers are just the newest addition to the list of non-human agents we have philosophized about, and we probably understand the minds of other, relatively simple life forms better than our own. Definitions and semantics are always being stressed and are always breaking; that's what symbols are for, it's one of their main defining use cases. Go talk to a north-east African about rizz and tell me how that goes.

[–] redcalcium@lemmy.institute 12 points 6 months ago

Supposedly they found a new method (Q*) that significantly improved their models, enough that some key people revolted and tried to force the company not to monetize it, out of ethical concern. Those people have been pushed out, ofc.

[–] erwan@lemmy.ml 10 points 6 months ago (1 children)

OK, generative AI isn't really AI, it's machine learning.

But to get back to what AI is: the definition has been moving forever, as AI becomes "just software" once it becomes ubiquitous. People were shocked that machines could calculate, then that they could play chess better than humans, then that they could read handwriting...

The first mistake was to invent the term in the first place, as it implies a thinking machine, and they're not.

Or as Dijkstra puts it: "asking whether a machine can think is as dumb as asking if a submarine can swim".

[–] blurg@lemmy.world 2 points 6 months ago

Or as Dijkstra puts it: “asking whether a machine can think is as dumb as asking if a submarine can swim”.

Alan Turing put it similarly: the question is nonsense. However, if you define "machine" and "thinking," and recast the question as whether machine thinking can be differentiated from human thinking, you can answer it affirmatively, in theory (rough paraphrasing). Though the current evidence suggests otherwise (e.g. AI learning from other AI drifts toward nonsense).

For more, see Turing's original paper, Computing Machinery and Intelligence (which introduces the Imitation Game).

[–] possiblylinux127@lemmy.zip 3 points 6 months ago

The problem is that it is capable of doing things that historically weren't possible with a machine. It can "act natural," in a sense.

There are so many cans of worms

[–] uriel238@lemmy.blahaj.zone 34 points 6 months ago (3 children)

Extinction by AI takeover or robot apocalypse does seem cooler than extinction by pollution rendering the environment uninhabitable.

I'd rather not go extinct at all, but if we're fucked regardless...

[–] higgsboson@dubvee.org 8 points 6 months ago* (last edited 6 months ago) (1 children)

Instead we're going to get "D- All of the above."

[–] shasta@lemm.ee 6 points 6 months ago

Who doesn't want to get D?

[–] Muffi@programming.dev 3 points 6 months ago

Combine the two and we've got a proper Matrix situation on our hands

[–] Pandantic@midwest.social 2 points 6 months ago (1 children)

Yeah but what if we unleash an evil AI on the universe? Our mess spilling over and fucking up nature again.

[–] Melt@lemm.ee 6 points 6 months ago (1 children)

The universe is so hostile to organic life, and so boring: just rock, gas, burning hell, or frozen hell. Might as well let robots inhabit it.

[–] erwan@lemmy.ml 2 points 6 months ago

We've kinda started already; Mars is inhabited entirely by robots.

[–] mansfield@lemmy.world 28 points 6 months ago

Don't fall for this horseshit. The only danger here is unchecked greed from these sociopaths.

[–] homesweethomeMrL@lemmy.world 27 points 6 months ago

Cry profit and let slip the dogs of enshittification

[–] technocrit@lemmy.dbzer0.com 19 points 6 months ago* (last edited 6 months ago) (1 children)

If these people actually cared about "saving humanity", they would be attacking car dependency, pollution, waste, etc.

Not making a shitty CliffsNotes machine.

[–] BeardedGingerWonder@feddit.uk -2 points 6 months ago

What a bloody stupid take. So no one cares about saving humanity unless that's their only pursuit in life?

[–] Dreizehn@kbin.social 16 points 6 months ago

Everything for profit and shareholders.

[–] lung@lemmy.world 3 points 6 months ago (5 children)

Miss me with the doomsday news-cycle capture; we aren't even close to AI being a threat to ~anything.

(and all hail the AI overlords if it does happen, can't be worse than politicians)

[–] thesporkeffect@lemmy.world 19 points 6 months ago (2 children)

Except for the environment

[–] unautrenom@jlai.lu 9 points 6 months ago

idk, most politicians are a threat to the environment like AI is (if not even more so, with their moronic laws)

[–] bionicjoey@lemmy.ca 5 points 6 months ago

And people's jobs (not because it can replace people, but because execs think it can)

[–] 4z01235@lemmy.world 18 points 6 months ago (1 children)

AI on its own isn't a threat, but people (mis)using and misrepresenting AI are. That isn't a problem unique to AI but there sure are a lot of people doing dumb and bad things with AI right now.

[–] Xeroxchasechase@lemmy.world 9 points 6 months ago (1 children)
[–] 4z01235@lemmy.world 6 points 6 months ago (1 children)

When was the last time you saw a corporation making decisions and taking actions of its own accord, without people?

Maybe they will start to, now, as people delegate their responsibilities to "AI"

[–] Xeroxchasechase@lemmy.world 3 points 6 months ago (1 children)

People are getting paid by corporations to "do their job." People who speak up against the interests of the corporation are getting laid off. Unions are regularly busted to prevent collective action and worker cooperation. CEOs are getting paid stupid amounts of money by corporations to keep maximizing shareholder profits above everything else, even moral considerations.

[–] 4z01235@lemmy.world 1 points 6 months ago

People decide who to hire for what roles and who to lay off. People form unions and people bust unions. The shareholders are people, and the decisions made in their interests are made by other people.

[–] Thorry84@feddit.nl 14 points 6 months ago

No the "AI" isn't a threat in itself. And treating generative algorithms like LLM like it's general intelligence is dumb beyond words. However:

It massively increases the reach and capacity of foreign (and sadly domestic) agents to influence people. All of those Russian trolls that brought about fascism, Brexit, and the rise of the far right used to be humans. Now, using AI, a single human can do more than a whole army of people could in the past. Spreading misinformation has never been easier.

Then there's the whole replacing of people's jobs with AI. No, the AI can't actually do those jobs, not very well at least. But if management and the shareholders think they can increase profits using AI, they will certainly fire a lot of folk. And even if that ends up ruining the company down the line, that costs even more jobs and usually impacts the people lower in the organization the most.

Also, there's a risk of people literally becoming less capable and knowledgeable because of AI. If you can have a digital assistant you carry around in your pocket at all times answer every question ever, why bother learning anything yourself? Why take the hard road when the easy road is available? People are at risk of losing information, knowledge, and the ability to think for themselves because of this. And it can get so bad that when the AI just makes shit up, people take it as the truth. And on a darker note, if the people behind the big AIs want something to be unknown or misrepresented, they can make it happen. And people would be so reliant on it, they wouldn't even know this was happening. This is already an issue with social media; AI is much, much worse.

Then there is the resource usage of AI. It makes the impact of cryptocurrency look like a rounding error. The energy and water usage is huge and growing every day. This has the potential to undo almost all of the climate wins of the past two decades and push the Earth beyond the tipping point. What people seem to forget about climate change is that once things start getting bad, it's way too late, and the situation will deteriorate at an exponential rate.

That's just a couple of the big things I can think of off the top of my head. I'm sure there are many more issues (such as the death of the internet). But I think this is enough to call the current level of "AI" a threat to humanity.

[–] Pandantic@midwest.social 6 points 6 months ago

Sorry I made this before Drake was a ~~certified lover boy~~ certified pedophile.

[–] misspacific@lemmy.blahaj.zone 1 points 6 months ago

i agree with the first part

[–] drawerair@lemmy.world 1 points 6 months ago

I guess Altman thought, "The AI race comes 1st. If OpenAI loses the race, there'll be nothing left to keep safe." But OpenAI is rich. They can afford to devote a portion of their resources to safety research.

What if he thinks the improvement of AI won't be exponential? What if he thinks it'll be slow enough that OpenAI can start focusing on AI safety once superintelligence's approach is visible from a distance? That focusing on safety now is premature? That surely is a difference of opinion compared to Sutskever and Leike.

I think AI safety is key. I won't be :o if Sutskever and Leike go to Google or Anthropic.

I was curious whether Google and Anthropic have AI safety initiatives. Did a quick search and saw this –

For Anthropic, my quick search yielded none.