this post was submitted on 14 Sep 2023
16 points (75.0% liked)

ChatGPT


Welcome to the ChatGPT community! This is a place for discussions, questions, and interactions with ChatGPT and its capabilities.

General discussions about ChatGPT, its usage, tips, and related topics are welcome. However, for technical support, bug reports, or feature requests, please direct them to the appropriate channels.

!chatgpt@lemdro.id

Rules

  1. Stay on topic: All posts should be related to ChatGPT, its usage, and relevant discussions.
  2. No support questions/bug reports: Please refrain from posting individual support questions or bug reports. This community is focused on general discussions rather than providing technical assistance.
  3. Describe examples: When discussing or sharing examples of ChatGPT interactions, please provide proper context and explanations to facilitate meaningful discussions.
  4. No self-promotion: Avoid excessive self-promotion, spamming, or advertising of external products or services.
  5. No inappropriate content: Do not post or request explicit, offensive, or inappropriate content. Keep the discussions respectful and inclusive.
  6. No personal information: Do not share personal information, including real names, contact details, or any sensitive data.
  7. No harmful instructions: Do not provide or request instructions for harmful activities, illegal actions, or unethical behaviour.
  8. No solicitation: Do not solicit or engage in any form of solicitation, including but not limited to commercial, political, or donation requests.
  9. No unauthorized use: Do not use ChatGPT to attempt unauthorized access, hacking, or any illegal activities.
  10. Follow OpenAI usage policy: Adhere to the OpenAI platform usage policy and terms of service.

Thank you for being a part of the ChatGPT community and adhering to these rules!

top 12 comments
[–] Pons_Aelius@kbin.social 7 points 1 year ago (3 children)

We still have no real idea how consciousness develops in humans, so how can we even begin to create it?

[–] Lmaydev@programming.dev 2 points 1 year ago (1 children)

We don't have to. We create an artificial approximation.

We don't need to mimic our brains at all. We just need a system that responds with the correct outputs to the inputs we give it.

Artificial intelligence, if you will.

Humans aren't all that.

[–] Pons_Aelius@kbin.social 2 points 1 year ago (1 children)

We create an artificial approximation.

How do you approximate something we do not understand?

How do we know when we have created it, when we do not understand what it is?

Humans aren’t all that.

If we aren't all that, won't anything we create be less than all that as well?

[–] Lmaydev@programming.dev 1 points 1 year ago* (last edited 1 year ago)

You don't need to understand a system to see what it produces.

This is actually exactly how neural networks work currently.

All we know is that for a given input it creates a given output. The actual formula it uses to calculate those outputs is massively obfuscated.

For example, machine learning is being used to predict fluctuations in magnetic fields. We don't know the equations, just the starting state and the ending state. The AI can still do the calculation even though we can't.
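A toy sketch of what I mean (plain numpy, made up for illustration, nothing to do with the actual magnetic-field work): fit a tiny network to a function you already know. You can query it for outputs all day, but the learned weights never read as an equation.

```python
# Toy illustration only: a trained network is an opaque input -> output mapping.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))   # inputs we choose
y = np.sin(X)                           # the "physics" we want predicted

# one hidden layer of 16 tanh units, randomly initialised
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

# crude full-batch gradient descent on mean squared error
lr = 0.1
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.tanh(np.array([[1.0]]) @ W1 + b1) @ W2 + b2)  # roughly sin(1)
print(W1)  # sixteen learned numbers with no human-readable meaning
```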

If we create an AI that performs as we want, we don't need to understand its internal workings.

The same way we have effective therapy even though we don't fully understand how the brain actually works.

It won't be less. Computers and machines already outperform us in a huge array of tasks.

Computers massively outperform us at doing maths. Cars outperform us in speed of travel.

It's the whole point of technology.

We will one day be capable of creating systems that think and understand humans better than we do.

[–] Spzi@lemm.ee 1 points 1 year ago (1 children)

We could engineer artificial flight without having a precise understanding of natural flight.

I think we don't need to understand how consciousness develops (unless you want to recreate exactly that developing process). But we do need to be able to define what it is, so that we know when to check the "done"-box. Wait, no. This, too, can be an iterative process.

So we need some idea what it is and what it isn't. We tinker around. We check if the result resembles what we intended. We refine our goals and processes, and try again. This will probably lead to a co-evolution of understanding and results. But a profound understanding isn't necessary (albeit very helpful) to get good results.

Also, maybe, there can be different kinds of consciousness. Maybe ours is just one of many possible. So clinging to our version might not be very helpful in the long run. Just like we don't try to recreate our eye when making cameras.

[–] Pons_Aelius@kbin.social 1 points 1 year ago

But we do need to be able to define what it is

Cool.

What is it?

[–] Devjavu@lemmy.dbzer0.com -3 points 1 year ago (1 children)
[–] Pons_Aelius@kbin.social 2 points 1 year ago (1 children)
[–] Devjavu@lemmy.dbzer0.com 2 points 1 year ago* (last edited 1 year ago)

Our consciousness developed by chance. We were not made by another conscious species, and we did not make ourselves conscious. We are feeding enormous amounts of data into neural networks using different methods. Our nervous system does not differ much from a neural network, and with the right conditions, the resulting model may have a consciousness. Those conditions are not known to us, so we try again and again and again until, by pure chance, we get a network that is self-aware. I suspect that the higher the complexity of the network, the higher the chance for something similar to our consciousness to develop.

With this, our current approach is entropy. Get as many differing conditions as possible and mash em together. It may spiral into consciousness.
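If you want a toy picture of the "try again and again" part (made up for illustration, plain numpy, and obviously nothing here is self-aware): randomly sample small models and keep whichever one happens to fit best. Search by chance, nothing more.

```python
# Toy illustration only: random search over "differing conditions".
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel()                      # stand-in for the behaviour we want

best = None
for trial in range(50):                    # mash differing conditions together
    width = int(rng.integers(2, 64))       # random model size
    W = rng.normal(size=(1, width))        # random hidden features
    H = np.tanh(X @ W)
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit only the output layer
    mse = float(np.mean((H @ coef - y) ** 2))
    if best is None or mse < best[0]:
        best = (mse, width)

print(best)   # whichever random configuration happened to work best
```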

[–] Blapoo@lemmy.ml 6 points 1 year ago

"Conscious". What even is it? Look around the animal kingdom and pick your own definition. Reacting to stimulus? Pattern recognition? Pattern reaction? A bunch of vectors between words?

I think of this as a function. Once models are advanced enough, will the question even matter? Once it feels like it can empathize, be curious, and respect and react to another's feedback? These are qualities I value above most and don't even see from other humans.

[–] 1bluepixel@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

open article

"Google engineer Blake Lemoine" in opening line

close article

[–] De_Narm@lemmy.world 1 points 1 year ago

Consciousness can't be measured anyway. I know I'm conscious, and that is everything I can know. There is no distinction between being conscious and simulating it. I cannot prove the consciousness of the people around me any more than I could prove it for people in my dreams, animals, or any given AI.