this post was submitted on 08 Apr 2024
67 points (92.4% liked)

Asklemmy


There's a video on YouTube where someone has managed to train a network of rat neurons to play Doom. The way they did it seems reminiscent of how we train ML models.

I am under the impression from the video that real neurons are a lot better at learning than simulated ones (and much less power-hungry).

Could any ML problems, such as natural language generation, be solved using neurons instead, and would that be in any way practical?

Ethically, at this point, is this neuron array considered conscious in any way?

top 33 comments
[–] nis@feddit.dk 43 points 7 months ago

I've trained mine to emulate an LLM. So far the hallucination feature works perfectly; basic grammar is still lacking a bit.

[–] kadu@lemmy.world 39 points 7 months ago

The idea that LLMs work just like the brain, except limited by running on a CPU, comes from software engineers - not neuroscientists.

Although there are many analogies that could be made between how CPUs work and how the brain integrates information, they're actually fundamentally different and use completely different logic.

You could, theoretically, create a computing language that works using neurons, and therefore you could also train machine learning algorithms on them. But that's like summing 2+2 by buying 4 calculators and putting them all together, rather than actually using what a calculator does internally to get the result, if you get what I mean.

[–] teawrecks@sopuli.xyz 21 points 7 months ago* (last edited 7 months ago) (2 children)

Afaik, an actual neuron is computationally more powerful than a perceptron, so in theory yeah, for sure.
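
For reference, a perceptron on its own is just a weighted sum pushed through a threshold - a toy sketch with hand-picked numbers, nothing from the video:

```python
# A single perceptron: weighted sum of inputs, then a hard threshold.
def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Toy example: an AND gate with hand-picked weights.
print(perceptron([1, 1], [0.6, 0.6], -1.0))  # 1
print(perceptron([1, 0], [0.6, 0.6], -1.0))  # 0
```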

If you're a subscriber to the Chinese Room thought experiment, we are already just a bunch of really good "LLMs".

[–] themusicman@lemmy.world 6 points 7 months ago (1 children)

First time I've come across the Chinese Room, but it's pretty obviously flawed. It's not hard to see that, collectively, the contents of the room may understand Chinese in both scenarios. The argument boils down to "it's not true understanding unless some component part understands it on its own", which is rubbish - you can't expect to still understand a language after removing part of your brain.

[–] teawrecks@sopuli.xyz 1 points 7 months ago

Hah, tbh, I didn't realize it was originally formulated to argue against consciousness in the room. When I first heard it, it was presented as a proper thought experiment with no "right" answer, so I honestly remembered it as a sort of illustration of the illusion that is consciousness. But it's been a while since I've discussed it with others; mostly I've just thought about it in the context of recent AI advancements.

[–] flashgnash@lemm.ee 4 points 7 months ago (1 children)

I've always thought we have something resembling an LLM as one component of our brains, and that the brain has the ability to train new models by itself for solving new problems.

[–] yelgo@lemmy.ca 5 points 7 months ago* (last edited 7 months ago)

Actually, we do: the cerebellum is what the neural networks in LLMs were partially based on. It's essentially a huge collection of input/output modules that the other parts of the brain are wired into, and it performs various computations. It also handles motor control for the body and figures out how to do this through reinforcement learning. (The way the reinforcement learning works is different from LLMs, though, because it's a biological process.) So when you throw a ball, for example, various modules in the cerebellum take in inputs from the visual centers, arm muscles, etc. and compute the outputs needed to produce the throwing motion that reaches your target.

We also have the cerebrum, though, which along with the rest of the brain is the magic voodoo that creates our consciousness and self-awareness, and which we can't recreate with a computer.

[–] andrew0@lemmy.dbzer0.com 17 points 7 months ago (1 children)

With the way current LLMs operate? The short answer is no. Most machine learning models learn a probability distribution by performing backpropagation, which involves "trickling down" errors from the output node all the way back to the input. More specifically, the computer calculates the derivatives of each layer and uses them to slowly nudge the model towards the correct answer by updating the weights in each layer. Of course, things like the attention mechanism resemble the way humans pay attention, but the underlying processes are vastly different.
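
To make that "nudging" concrete, here's a minimal sketch of gradient descent on a single weight (a toy example with made-up numbers, nowhere near a real LLM):

```python
# Minimal gradient descent on a single weight: learn w so that w * x == y.
x, y_true = 2.0, 10.0   # one training example
w = 0.5                 # initial weight
lr = 0.05               # learning rate: how hard each nudge is

for step in range(50):
    y_pred = w * x              # forward pass
    error = y_pred - y_true     # how wrong we are
    grad = error * x            # derivative of the squared error w.r.t. w
    w -= lr * grad              # nudge the weight towards the answer

print(round(w, 3))  # converges to 5.0, since 5.0 * 2.0 == 10.0
```

In a full network the same derivative gets chained backwards through every layer.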

In the brain, things don't really work like that. Neurons don't perform backpropagation; if I remember correctly, they instead build proteins to improve the conductivity along the axons, which strengthens a connection the more current passes through it. Similarly, when multiple neurons in a close region fire together, they sort of wire together. New connections between neurons can appear through this process, which neuroscientists refer to as neuroplasticity.
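
As a rough contrast with the gradient sketch above, a Hebbian-style "fire together, wire together" update is purely local - no error signal travels back through the network (illustrative only; real synaptic plasticity is far messier):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros((4, 4))   # connection strengths between 4 toy neurons

for _ in range(100):
    activity = (rng.random(4) > 0.5).astype(float)   # which neurons fired this step
    # Hebbian rule: strengthen a connection whenever both neurons fire together.
    weights += 0.01 * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)   # ignore self-connections

print(weights.round(2))   # frequently co-active pairs end up with stronger links
```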

When it comes to the Doom example you've given, that approach relies on the fact that you can encode the visual information as signals. It is a reinforcement learning problem where the action space is small and the reward function is pretty straightforward. When it comes to LLMs, the usual vocabulary size of the more popular models is between 30-60k tokens (these are small parts of a word, for example "#ing" in "writing"). That means you would need a way to encode each token to feed to the biological neural net, and unless you encode it as a phonetic representation of the word, you're going to need a lot of neurons to mimic the behaviour of the computer version of LLMs, which is not really feasible. Oh, and let's not forget that you would need to formalize the output of the network and find a way to measure it! How would we know which neuron produces the output for a specific part of a sentence?
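
Just to put a number on "a lot of neurons": the naive way to feed a token into a network is a one-hot vector the size of the whole vocabulary, i.e. one input line per vocabulary entry (a toy sketch with a made-up mini-vocabulary):

```python
import numpy as np

vocab = {"writ": 0, "#ing": 1, "#es": 2}   # toy vocabulary; real models use ~30-60k entries
vocab_size = 50_000                        # typical order of magnitude

def one_hot(token_id, size):
    vec = np.zeros(size)
    vec[token_id] = 1.0
    return vec

# Feeding a single token means driving one of ~50k input lines high,
# and that's for every token position in the prompt.
x = one_hot(vocab["#ing"], vocab_size)
print(x.sum(), x.shape)  # 1.0 (50000,)
```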

We humans are capable of learning language mainly because this skill is encoded in our DNA. It is a very complex problem that requires the interaction of multiple specialized areas: e.g. Broca's area (for speech), Wernicke's area (for understanding and producing language), certain bits of the lower temporal cortex that handle categorization of words and other tasks, plus a way to encode memories using the hippocampus. The body generates these areas using the genetic code, which has been iteratively improved over many millennia. If you dive really deep into this subject, you'll start seeing some scientists argue that consciousness is not really a thing and that we are a product of our genes and the surrounding environment, acting in predefined ways.

Therefore, you wouldn't be able to call a small neuron array conscious. It only carries out a simple electrochemical process, which happens when you supply enough current for a few neurons to reach the threshold potential of about -55 mV. To have things like emotion, body autonomy and many of the other things one would think of when talking about consciousness, you would need a lot more components.

[–] haui_lemmy@lemmy.giftedmc.com 2 points 7 months ago

That's an interesting explanation. Thanks! :)

[–] neidu2@feddit.nl 14 points 7 months ago* (last edited 7 months ago) (1 children)

Go home, Elon, you're drunk.

[–] flashgnash@lemm.ee 8 points 7 months ago

But if we can train neurons to emulate human emotions and then put them into the Neuralink, I can finally know what emotions are.

[–] RedditWanderer@lemmy.world 10 points 7 months ago (1 children)

The concept of ML comes from neurons/the brain. If we could use actual neurons we'd be way ahead, but that's basically the hard part. Whether it will ever be feasible, I don't know.

Brains have a lot more connections and meaningful ways of communicating compared to our silly signals and weights. This may be the barrier to AGI.

[–] flashgnash@lemm.ee 1 points 7 months ago

We can use neurons. I'm not sure we're very good at it yet, but people have used them for small tasks.

[–] voracitude@lemmy.world 4 points 7 months ago* (last edited 7 months ago)

Cortical Labs certainly hope so: https://wired.me/science/this-startup-grows-brain-cells-on-ai-chips/

But outside of the context of computing on devices: yes, as others have noted, the neurons we're trying to simulate in machine learning models aren't much different than our own. So, just look at any person to see how well neurons are suited to language/etc. workloads (or not, depending how clever the people around you are πŸ˜‚)

As to ethics, consciousness is an "emergent phenomenon". It seems to arise, near as we can tell, from the interaction of many simple systems. No single cell or cluster thereof in a brain is conscious, but get them all working nearby one another and suddenly... πŸŽ‡

[–] RaoulDook@lemmy.world 4 points 7 months ago (1 children)

You could put neurons in a box, wire it up, implant a partial personality into it, and call it a Magi.

[–] aDogCalledSpot@lemmy.zip 3 points 7 months ago

Our current ML neural networks work (simplified) like this: a neuron emits a number, and the next neuron calculates a new number to emit based on all the values given to it by other neurons as inputs. Our brain can't fire numbers in this way, so there's a fundamental difference. Bridging this difference to create NNs that are more similar to our brains is the basis of the study of Spiking Neural Networks. Their performance so far isn't great, but it's an interesting topic of research.
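
Roughly the difference, sketched side by side - a standard artificial neuron that passes numbers around versus a toy leaky integrate-and-fire neuron standing in for the spiking kind (heavily simplified, made-up constants):

```python
import math

# Standard ANN neuron: numbers in, one number out.
def ann_neuron(inputs, weights):
    return math.tanh(sum(x * w for x, w in zip(inputs, weights)))

# Toy spiking neuron (leaky integrate-and-fire): integrates input current
# over time and emits a spike (a 1) when its potential crosses a threshold.
def spiking_neuron(input_current, steps=20, leak=0.9, threshold=1.0):
    potential, spikes = 0.0, []
    for _ in range(steps):
        potential = potential * leak + input_current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0   # reset after firing
        else:
            spikes.append(0)
    return spikes

print(ann_neuron([0.5, 0.2], [1.0, 2.0]))  # one continuous value
print(spiking_neuron(0.3))                 # information lives in the spike timing
```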

[–] davel@lemmy.ml 3 points 7 months ago* (last edited 7 months ago) (1 children)

Ethically at this point is this neuron array considered conscious in any way?

It’s really a matter of taste, as in how do they taste?

[–] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 4 points 7 months ago (1 children)

Salty.

At least in my case.

Bastards.

[–] flashgnash@lemm.ee 2 points 7 months ago (1 children)

Can we train the neuron LLM to participate in a CoD lobby? That's the real question here.

[–] EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 1 points 7 months ago* (last edited 7 months ago)

Best not to tempt fate too much, unless we want robot overlords with the temperament of a 13-year-old white kid from Pennsylvania talkin' shit like a gangsta.

[–] richieadler@lemmy.myserv.one 1 points 7 months ago

Calling Cordwainer Smith...

[–] Omega_Haxors@lemmy.ml -1 points 7 months ago

Neurons can't NaN, so it would be a very bad use of the technology.

[–] kakes@sh.itjust.works -4 points 7 months ago (1 children)

Honestly, I've wondered this about shining a laser through some kind of laser-etched glass. The only problem is, I have no idea how to represent something like an activation function using only reflection and such.

[–] flashgnash@lemm.ee 3 points 7 months ago (1 children)

Think you might've commented on the wrong post

[–] kakes@sh.itjust.works 0 points 7 months ago (2 children)

Haha naw, it's the same basic idea, just using something inorganic (like glass) to represent a neural network instead of something like biological neurons.

[–] flashgnash@lemm.ee 3 points 7 months ago (1 children)

Cool idea, though existing computers are also an inorganic way of representing a neural net.

[–] kakes@sh.itjust.works -1 points 7 months ago

Well, yes, but something like an etched glass would be better in basically every way, if it could be done. (See my other comment in this thread if you want more details)

[–] BreakDecks@lemmy.ml 1 points 7 months ago (1 children)

What on earth are you talking about?

[–] kakes@sh.itjust.works -1 points 7 months ago (1 children)

A neural network is an array of layered nodes, where each node contains some kind of activation function and each connection carries a weight multiplier. Importantly, once the model is trained, it's stateless, meaning we don't need to store any extra data to use it - just inputs and outputs.
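
In other words, running a trained network is nothing but fixed multiply-adds and activation functions - a bare-bones sketch with made-up weights:

```python
import numpy as np

# Frozen weights left over from "training" - made-up numbers, nothing gets updated.
W1 = np.array([[0.2, -0.5], [0.7, 0.1], [-0.3, 0.8]])   # 2 inputs -> 3 hidden nodes
W2 = np.array([[0.6, -0.4, 0.9]])                        # 3 hidden -> 1 output

def forward(x):
    hidden = np.maximum(0, W1 @ x)   # ReLU activation at each hidden node
    return W2 @ hidden               # output layer

print(forward(np.array([1.0, 2.0])))  # same input always gives the same output
```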

If we could take some sort of material, like glass, and modify it so that when you shone a light through one end, the light would bounce in such a way as to emulate these functions and weights, you could create an extremely cheap, compact, fast, and power-efficient neural network. In theory, at least.

[–] BreakDecks@lemmy.ml 1 points 7 months ago (1 children)

So just ML on an optical computer, or some sort of baseless sci-fi thing you made up?

[–] kakes@sh.itjust.works 1 points 7 months ago (1 children)

A mix of both, but keep in mind that I'm commenting on a post about a related made up sci-fi idea.

[–] BreakDecks@lemmy.ml 1 points 7 months ago

It most certainly is not: https://www.technologyreview.com/2023/12/11/1084926/human-brain-cells-chip-organoid-speech-recognition/

Neural organoids have been a thing for a few years now.