this post was submitted on 08 Apr 2024
67 points (92.4% liked)

Asklemmy


A loosely moderated place to ask open-ended questions


There's a video on YouTube where someone managed to train a network of rat neurons to play Doom. The way they did it seems reminiscent of how we train ML models.

I am under the impression from the video that real neurons are much better at learning than simulated ones (and far less power-hungry).

Could any ML problems, such as natural language generation, be solved using real neurons instead, and would that be in any way practical?

Ethically, at this point, is this neuron array considered conscious in any way?

[โ€“] teawrecks@sopuli.xyz 21 points 7 months ago* (last edited 7 months ago) (2 children)

Afaik, an actual neuron is computationally more powerful than a perceptron, so in theory yeah, for sure.
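For reference, a perceptron is just a weighted sum of inputs pushed through a hard threshold, which is why a single biological neuron (with its dendritic nonlinearities and temporal dynamics) outclasses it. A minimal sketch, with illustrative weights chosen here to implement a logical AND:

```python
def perceptron(inputs, weights, bias):
    """Weighted sum of inputs followed by a hard threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Example: weights/bias hand-picked so the unit acts as an AND gate.
def and_gate(a, b):
    return perceptron([a, b], [1.0, 1.0], -1.5)
```

A single perceptron can only draw one linear decision boundary, which is the core of the computational gap being pointed at.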

If you subscribe to the Chinese Room thought experiment, we are already just a bunch of really good "LLMs".

[โ€“] themusicman@lemmy.world 6 points 7 months ago (1 children)

First time I've come across the Chinese Room, but it's pretty obviously flawed. It's not hard to see that, collectively, the contents of the room may understand Chinese in both scenarios. The argument boils down to "it's not true understanding unless some component part understands it on its own", which is rubbish: you can't expect to still understand a language after removing part of your brain.

[โ€“] teawrecks@sopuli.xyz 1 points 7 months ago

Hah, tbh, I didn't realize it was originally formulated to argue against consciousness in the room. When I first heard it, it was presented as a proper thought experiment with no "right" answer, so I honestly remembered it as a sort of illustration of the illusion that is consciousness. It's been a while since I've discussed it with others, though; mostly I've just thought about it in the context of recent AI advancements.

[โ€“] flashgnash@lemm.ee 4 points 7 months ago (1 children)

I've always thought we have something resembling an LLM as one component of our brains, and that the brain has the ability to train new models by itself to solve new problems.

[โ€“] yelgo@lemmy.ca 5 points 7 months ago* (last edited 7 months ago)

Actually, we do: the cerebellum is part of what the neural networks behind LLMs were partially based on. It's essentially a huge collection of input/output modules, wired into the other parts of the brain, which perform various computations. It also handles motor control for the body and figures out how to do this through reinforcement learning. (The way the reinforcement learning works is different from LLMs, though, because it's a biological process.) So when you throw a ball, for example, various modules in the cerebellum take in inputs from the visual centers, arm muscles, etc., and compute the outputs needed to produce the throwing motion that reaches your target.
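The feedback loop described above — adjust an output parameter based on an error signal until the target is reached — can be sketched as a toy illustration. This is a deliberately simplified error-driven update (closer to supervised learning than true biological reinforcement learning), with made-up names and a stand-in for physics:

```python
def learn_throw(target, steps=1000, lr=0.1):
    """Tune a 'throw strength' so the resulting distance matches target.

    The only feedback used is a scalar error signal, loosely analogous
    to the correction signal described in the comment above.
    """
    strength = 0.0
    for _ in range(steps):
        distance = strength * 2.0   # stand-in for throw physics
        error = target - distance   # feedback: how far off we were
        strength += lr * error      # nudge strength toward lower error
    return strength

# Since distance = strength * 2.0, learning to hit a target of 10.0
# converges the strength parameter toward 5.0.
strength = learn_throw(10.0)
```

The point of the sketch is only the loop structure: compute an output, compare to the goal, correct, repeat.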

We also have the cerebrum, though, which, along with the rest of the brain, is the magic voodoo that creates our consciousness and self-awareness, and which we can't recreate with a computer.