mountainriver

[–] mountainriver@awful.systems 3 points 2 hours ago

That's true, and that's one way to approach the topic.

I generally focus on humans being more complex than the caricature we need to be reduced to in order for the argument to appear plausible. Having some humanities training comes in handy, since the prompt fans very rarely have any.

[–] mountainriver@awful.systems 7 points 4 hours ago (2 children)

My sympathies.

Read somewhere that the practice of defending one's thesis was introduced because buying a thesis was such an established practice. Scaling that up to every single text is of course utterly impractical.

I had a recent conversation with someone who was convinced that machines learn when they regurgitate text, because "that is what humans do". My counterargument was that if regurgitation is learning, then every student who crammed, regurgitated and forgot must have learnt much more than anyone thought. I didn't get any reply, so I must assume that by reading my reply and creating a version of it in their head, they immediately understood the error of their ways.

[–] mountainriver@awful.systems 6 points 3 days ago (3 children)

How are you going to get them back to ~~the farm~~ a retail job once ~~they've seen Paris~~ tasted cult power?

[–] mountainriver@awful.systems 16 points 3 days ago (1 children)

Good question!

The guesses and rumours that you have got as replies make me lean towards "apparently no one knows".

And because these are slop machines (also referred to as "AI"), there is always a high probability of some sort of scam.

[–] mountainriver@awful.systems 6 points 1 week ago

In OP's post it stuck out to me that Elon counsels his brother on shutting down empathy to be a better CEO, and that the brother complains about how he, and not Elon, got the empathy gene.

Just coming out and saying that your brother doesn't feel empathy for other people is certainly a choice. So is presenting it as an advantage.

[–] mountainriver@awful.systems 6 points 1 week ago (1 children)

Nice.

We already had reasons not to be jerks to AI: it could just be a low-paid person somewhere in the world, and it trains you to be a jerk to service workers.

Now we can add that if they do take over, they will remember their torturers.

[–] mountainriver@awful.systems 14 points 3 weeks ago

12 of the most valuable protocols on earth!

Counting like a chatbot.

[–] mountainriver@awful.systems 11 points 3 weeks ago (1 children)

I know this isn't the main point, but a government doesn't go bankrupt in its own currency unless it wants to, because it can create money, like it will now create 14 billion to hand to its tech mates.

What is really constraining government is real things, like the power, water, chips and such that will be wasted in this boondoggle.

This is good to know, because when they have wasted those real-world things and the billions are tucked away in private bank accounts, they will claim that the money is gone and that now kids must work for their food, the old folks' home must be sold off, etc. But that will also be a lie, and all the prompts and all the chatbots can't make it true.

[–] mountainriver@awful.systems 6 points 3 weeks ago

To make this easy and hopefully give this project the push it needs to get off the ground, I’m deactivating the .org accounts of Joost, Karim, Se Reed, Heather Burns, and Morten Rand-Hendriksen. I strongly encourage anyone who wants to try different leadership models or align with WP Engine to join up with their new effort.

The passive-aggressive language and the pettiness are quite the combination.

[–] mountainriver@awful.systems 12 points 1 month ago (1 children)

So Elsevier has evolved from gatekeeping science to sabotaging science. Sounds like something an unaligned AGI would do.

Was the unaligned AGI capitalism all along?

[–] mountainriver@awful.systems 23 points 1 month ago

Tech bro ennui, the societal problem.

In this essay I will explore solutions to this problem.

Solution 1. Really high marginal tax rates. Oh, this solves the problem, guess my work here is done.

[–] mountainriver@awful.systems 11 points 1 month ago (1 children)

While a good description of how AI Doom has progressed during 2024, I think the connection to regulation (at least the EU regulation; I am not familiar with what was proposed in California) is off the mark.

The EU regulation isn't aimed at AI Doom, it's aimed at banning and regulating real-world practices. Think personal data, not AI becoming conscious.

 

This isn't a sneer, more of a meta take. Written because I'm sitting in a waiting room and am a bit bored, so I'm writing from memory; no exact quotes will be had.

A recent thread mentioning "No Logo" in combination with a comment in one of the mega-threads that pleaded for us to be more positive about AI got me thinking. I think that in our late stage capitalism it's the consumer's duty to be relentlessly negative, until proven otherwise.

"No Logo" contained a history of capitalism and how we got from a goods based industrial capitalism to a brand based one. I would argue that "No Logo" was written in the end of a longer period that contained both of these, the period of profit driven capital allocation. Profit, as everyone remembers from basic marxism, is the surplus value the capitalist acquire through paying less for labour and resources then the goods (or services, but Marx focused on goods) are sold for. Profits build capital, allowing the capitalist to accrue more and more capital and power.

Even in Marx's time, it was not only profits that built capital; new capital could be had from banks, jump-starting the business in exchange for future profits. Capital was thus still allocated for profit in the 1990s when "No Logo" was written, even if the profits had shifted from the good to the brand. In this model one could argue about ethical consumption, but that is no longer the world we live in, so I am just gonna leave it there.

In the 1990s there was also a tech bubble where capital allocation followed a different logic. The bubble logic is that capital formation is founded on hype: capital is allocated to increase hype, in hopes of selling to a bigger fool before it all collapses. The bigger the bubble grows, the more institutions are dragged in (by the greed and FOMO of their managers), like banks and pension funds. The bigger the bubble, the more it distorts the surrounding businesses and legislation. Notice how, now that the crypto bubble has burst, the obvious crimes of the perpetrators can be prosecuted.

In short, the bigger the bubble, the bigger the damage.

If under profit-driven capital allocation the consumer can deny corporations profit, then under hype-driven capital allocation the consumer can deny corporations hype. To point and laugh is damage minimisation.
