this post was submitted on 29 Sep 2023
436 points (93.6% liked)

Authors using a new tool to search a list of 183,000 books used to train AI are furious to find their works on the list.

[–] FaceDeer@kbin.social 4 points 1 year ago (1 children)

On the one hand, I agree with your estimation of how things will go overall.

On the other hand, though, I think there's value to be had in pushing back against the misinformation whenever it comes up. I don't think AI is going to be hindered by it in the long run, but it's possible that in the short run it's going to kill interesting projects and harm some of the people who are experimenting with it.

And I have seen technologies suffer longer-term difficulties once the zeitgeist turned against them, despite their technical merit. There are useful applications for NFTs out there, for example, but just try mentioning them when the opportunity arises and see what sort of reaction you get.

[–] kromem@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Yes, though to be fair these things often swing back and forth like a pendulum, and that's a natural part of any system finding equilibrium.

Just as AI has many detractors who were raised to fear HAL or the Terminator, and who approach any news of the perceived existential threat they were warned about with hyper-caution, there's also a ton of hypemasters packaging up snake oil, 'AI' having become the new 'quantum' to slap on a pile of crap and claim it's gold.

The two are going to balance out.

NFTs were hyped to shit in numerous get-rich-quick pyramid schemes, and a number of companies that jumped on the bandwagon to try to catch a wave rightfully abandoned what was going to be a terrible idea (e.g., Square Enix).

That said, the technology isn't going anywhere, and I'm sure we'll see peer-to-peer, cryptography-backed exchanges of goods and services continue to work their way into future products where the technology makes sense on its own merits rather than riding a hype cycle.

The utility of AI, and specifically LLMs, is so astounding right now even in its infancy that it's not going anywhere no matter where public opinion sits. It just won't necessarily be used as a selling point, like for a new Coke flavor. Which is ultimately going to be a good thing.

I agree that misinformation tends to be bad, and I do have legitimate concerns that the fervent anti-AI crowd is going to end up cutting off its nose to spite its face, driving a technological revolution behind the closed doors of international conglomerates rather than keeping it open access. But at a certain point, pretty soon, this ship is going to be out of anyone's control. Just as the DMCA doesn't actually prevent me from downloading The Matrix right now versus in 2000, outside of a few extra hoops, the likely eventual "let's try to handicap AI Act" is probably not going to prevent me from running model weights published in Israel or Japan on a local GPU.
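
Running openly published weights locally really is already that accessible. Here's a minimal sketch using the Hugging Face transformers library; the model name is just a placeholder for whichever openly licensed weights you happen to pick:

```python
# Hypothetical sketch: load openly published model weights and run them on a local GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/open-model"  # placeholder, not a specific real release
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Use the local GPU if one is available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

prompt = "Summarize the argument for openly published model weights in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

No regulation in one jurisdiction changes whether a script like that runs on hardware somewhere else.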

I used to get more stressed about the rhetoric online, but it's reached a point where it's clear that 90% of people aren't looking for facts or understanding; they're only looking for confirmation of their biases and downvoting anything that doesn't deliver it.

In that climate, why waste our time? Discussions where one stands to learn by contributing and formulating a comment are still probably worthwhile, but a lot of the discussion of AI in more general forums has honestly just turned into tantrums where no one wants to have their outrage party rained on.

It's become the equivalent of explaining the science of immunity to antivax crowds.