Flumpkin

joined 9 months ago
[–] Flumpkin 5 points 9 months ago (1 children)

The error message says ".exe" and looks like a .NET namespace.

[–] Flumpkin 2 points 9 months ago

The velomobile (electric or manual) is the most efficient form of transport in terms of energy per mile. You could easily design something like a self-driving Podbike, maybe a little bigger, weighing maybe 100 kg.

And self-driving also allows for new configurations, e.g. two seats that face each other, because you don't need a steering wheel. That means much narrower and more aerodynamic "micro cars" that could solve a lot of edge cases for people who can't drive, or can't drive for that long or that fast (50 km/h / 30 mph). They might compete with a big bus.
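To put rough numbers on the efficiency claim, here's a back-of-envelope comparison in Python. Every consumption and occupancy figure below is a ballpark assumption for illustration, not measured data:

```python
# Back-of-envelope energy use per passenger-km.
# All consumption and occupancy figures are rough assumptions
# for illustration only, not measured data.
vehicles = {
    # name: (Wh consumed per km, average passengers carried)
    "electric velomobile": (10, 1),
    "e-bike": (15, 1),
    "electric car": (170, 1.5),
    "diesel bus": (4000, 15),
}

for name, (wh_per_km, passengers) in vehicles.items():
    per_passenger = wh_per_km / passengers
    print(f"{name:20s} ~{per_passenger:7.1f} Wh per passenger-km")
```

Under these assumed figures the velomobile comes out roughly an order of magnitude ahead of a car per passenger-km, and still well ahead of a reasonably full bus.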

[–] Flumpkin 2 points 9 months ago (1 children)

Hmm 😇 The afterlife might be a good way to make it up. Have you seen "The Good Place"?

[–] Flumpkin 1 points 9 months ago

You’re just rephrasing the same approach, over, and over, and over. It’s like you’re not even reading what I’m saying.

No, I read what you are saying. I just think that you are something that "acts intelligent without actually being intelligent". Here is why: all that you've written is based on very simple, primitive brain cells and synaptic connections. It's self-evident that this is not really something designed to be intelligent. You're just "really good at parroting sentences". And you clearly agree that I'm doing the same 😄

Clearly LLMs are not intelligent and don't understand, and it would take many other systems to make them so. But what they do show is that the "creative spark", even though it is very mediocre in quality, can be created with a critical mass of quantity. It's as if it's just one small part of our mind, the "creative writing center" without intelligence. But it's there, just because we added more data and processing.

Quality through quantity: that is what we seem to be, and that is what is so shocking. And there is an obvious disgust or bias against such a notion, a kind of embarrassment at the brain being just thinking meat.

Now you might be absolutely right that my specific suggestion for an approach is bullshit; I don't know enough about it. But I am pretty sure we'll get there without understanding exactly how it works.

[–] Flumpkin 1 points 9 months ago

And how do you determine who falls in this category? Again, by a set of parameters which we’ve chosen.

Sure, that is my argument: that we choose to make social progress based on our nature and scientific understanding. I never claimed some 100% objective morality; I'm arguing that even though that does not exist, we can make progress. Basically, I'm arguing against postmodernism / materialism.

For example: if we can scientifically / objectively show that some people are born in the wrong body, that it's not some mental illness, and that this causes suffering we can alleviate, then moral arguments against it become invalid. Or, like the gif says, "can it".

I'm not arguing that some objective ground truth exists, but that the majority of healthy human beings, IF they are not tainted, hold certain values that, if reinforced, gravitate towards some sort of social progress.

You needn’t argue for the elimination of meaning, because meaning isn’t a substance present in reality - it’s a value we ascribe to things and thoughts.

Does mathematics exist? Is money real? Is love real?

If nobody is left to think about them, they do not exist. If nobody is left to think about an argument, it becomes meaningless or "nonsense".

[–] Flumpkin 1 points 9 months ago

I'm not arguing for "one single 100% objective morality". I'm arguing for social progress - maybe towards one of an infinite number of meaningful, functioning moralities that are objectively better than what we have now. Like optimizing or approximating a function that we know has no precise solution.

And "objective" can't mean some kind of ground truth by e.g. a divine creator. But you can have objective statistical measurements for example about happiness or suffering, or have an objective determination if something is likely to lead to extinction or not.

[–] Flumpkin 1 points 9 months ago* (last edited 9 months ago)

I somewhat agree with that, but only if the starting conditions were completely random. Otherwise, if you set the conditions to be similar to what we know about humanity, you'd have to anticipate cooperation, but also competition and parasitic behavior leading to wars and atrocities. And that also assumes they actually get a chance to grow up, for the suffering to have any meaning. If you just turn your science experiment off at some point, you have invalidated the argument.

Either way, when you're playing god you'd have to morally justify yourself. Imagine you create a universe that eventually becomes an eternal hell where trillions of sentient beings are tortured, something like "I Have No Mouth, and I Must Scream".

[–] Flumpkin 4 points 9 months ago (3 children)

You'd look at things like the Holocaust or a million other atrocities and say "this is fine". Also, you can't assume they'd die out naturally in 5 billion years; they might colonize other planets and go on and on and on until you pull the switch. They might have created beautiful art and things, and preserved much of their history for future generations, and then, poof, all gone. What if they found out? Would you say "I created them, therefore I own them and can do with my toys as I please"? Really?

[–] Flumpkin 8 points 9 months ago (1 children)

Wow, that guy looks evil.

[–] Flumpkin 3 points 9 months ago (9 children)

My main argument would be that it would be incredibly unethical. And any intelligent civilization powerful enough to create a simulation like this would be more likely than not to be ethical, and if it were this unethical, it would be unlikely to exist for long. Those would be two potential reasons why the "infinite regress" in simulation theory is unlikely.

Star Maker is an interesting exploration of simulation theory.

[–] Flumpkin 1 points 9 months ago (2 children)

You misrepresented or misunderstood my argument.

[–] Flumpkin 1 points 9 months ago

Yeah, I imagine generative AI as something like one small part of a human mind, so we'd need to create a whole lot more for AGI. But it's shocking (at least to me) that it works at all just through more data and compute power. That you can make qualitative leaps just by increasing the quantity. Maybe we'll see more progress now.
