BodaciousMunchkin

joined 5 months ago
 

Rage against the machine

For all the promise and dangers of AI, computers plainly can’t think. To think is to resist – something no machine does

Computers don’t actually do anything. They don’t write, or play; they don’t even compute. Which doesn’t mean we can’t play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.

What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.

But this is nothing new. The story of technology – from prehistory to now – has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.

Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.

But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl’s) of this thin gruel version of the mind at work – a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI – is the decisive move in the conjuring trick.

What scientists seem to have forgotten is that the human animal is a creature of disturbance. Or as the mid-20th-century philosopher of biology Hans Jonas wrote: ‘Irritability is the germ, and as it were the atom, of having a world…’ With us there is always, so to speak, a pebble in the shoe. And this is what moves us, turns us, orients us to reorient ourselves, to do things differently, so that we might carry on. It is irritation and disorientation that is the source of our concern. In the absence of disturbance, there is nothing: no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness.

Can machines think? Turing dismissed this as ‘too meaningless to deserve discussion’. Instead of trying to make a machine that can think, he was content to design one that might count as a reasonable substitute for a thinker. Everywhere in Turing’s work, the focus is on imitation, replacement and substitution.

Consider his contribution to mathematics. A Turing machine is a formal model of the informal idea of computation: ie, the idea that some problems can be solved ‘mechanically’ by following a recipe or algorithm. (Think long division.) Turing proposed that we replace the familiar notion with his more precise analogue. Whether a given function is Turing-computable is a mathematical question, one that Turing supplied the formal means to answer rigorously. But whether Turing-computability serves to capture the essence of computation as we understand this intuitively, and whether therefore it’s a good idea to make the replacement, these are not questions that mathematics can decide. Indeed, presumably because they are themselves ‘too meaningless to deserve discussion,’ Turing left them to the philosophers.

In the same anti-philosophical spirit, Turing proposed that we replace the meaningless question Can machines think? with the empirically decidable question Can machines pass [what has come to be known as] the Turing test? To understand this proposal, we need to look at the test, which Turing called the Imitation Game.

The game is to be played by three players: one man, one woman, and one person whose gender doesn’t matter. Each has a distinct task. The player of unspecified gender, the interrogator, has the job of figuring out which of the other two is a man, and which a woman. The woman’s task is to serve as the interrogator’s ally; the man’s is to cause the interrogator to make the wrong identification.

The point is to explore whether substituting a machine for a player has any effect on the rate of success

This might make for fun adult entertainment, but Turing feared it would be too easy. Even today, when gender-experiment is commonplace, it wouldn’t be that hard, in most circumstances, to sort people by gender on the basis of superficial appearance. So Turing proposed that we isolate the interrogator in a room, limiting their access to others to the posing of questions. And he added: ‘In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms.’

What does the Imitation Game teach us about machine intelligence? Here is what Turing says:

We now ask the question, ‘What will happen when a machine takes the part of [the man] in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’

The interrogator’s goal is not to out the computer; it’s to out the human players as having this or that gender. But Turing’s goal, and the game’s point, is to explore whether substituting a machine for one of the players has any effect on the interrogator’s rate of success. It is this last question, whether or not there is an effect on outcomes, that is proposed, by Turing, as proxy for the ‘meaningless’ question of whether machines can think.

Instead of arguing about what thinking is, Turing envisions a scenario in which machines might be able to enter into and participate in meaningful human exchange. Would their ability to do this establish that they can think, or feel, that they have minds as we have minds? These are precisely the wrong questions to ask, according to Turing. What he does say is that machines will get better at the game, and he went so far as to venture a prediction: that by the end of the century – he was writing in 1950 – ‘general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.’

Despite Turing’s apparent hostility to philosophy, it is possible to read him as capturing a critical philosophical insight. Why should we expect that evidence would be able to secure the minds of machines for us, when it doesn’t perform that function in our ordinary human dealings? None of us has ever found out or proved that the people around us in our lives actually think or feel. We just take it for granted. And it is this observation that motivates his conception of his own task: not that of proving that machines can think; but rather that of integrating them into our lives so that the question, in effect, goes away, or answers itself.

It turns out, however, that not all of Turing’s replacements and substitutions are quite so straightforward as they seem. Some of them are downright misleading.

Consider, first, Turing’s matter-of-fact suggestion that we replace talking by the use of typed messages. He suggests that this is to make the game challenging. But the substitution of text for speech has an entirely different effect: to lend a modicum of plausibility to the otherwise absurd suggestion that machines might participate at all. To appreciate this, recall that a Turing machine is what in mathematics is called a formal system. In a formal system, there is a finite alphabet, and a finite set of rules for combining elements of the alphabet into more complex expressions. What makes the system formal is that the vocabulary needs to be specified in terms of physical properties alone, and rules need to be framed only in terms of these physical, that is to say, formal properties. This is the crux: unless you can formally specify the inputs and the outputs – the vocabulary – you can’t define a Turing machine or a Turing-computable function.

And, crucially, it isn’t possible formally to specify the inputs and the outputs of ordinary human language. Speech is breathy, hot movement that always unfolds with others, in context, and against the background of needs, feelings, desires, projects, goals and constraints. Speech is active, felt and improvisational. It has more in common with dancing than text-messaging. We are so much at home, nowadays, under the regime of the keyboard that we don’t even notice the ways text conceals the bodily reality of language.

The gamification of life is one of Turing’s most secure, and most troubling, legacies

Although speech is not formally specifiable, text – in the sense of text-messaging – is. So text can serve as a computationally tractable proxy for real human exchange. By filtering all communication between the players through the keyboard, in the name of making the game harder, Turing actually – and really this is a sleight of hand – sweeps what the philosopher Ned Block has called the problem of inputs and outputs under the rug.

But the substitution of text-message for speech is not the only sleight of hand at work in Turing’s argument. The other is introduced even more surreptitiously. This is the tacit substitution of games for meaningful human exchange. Indeed, the gamification of life is one of Turing’s most secure, and most troubling, legacies.

The problem is that Turing takes for granted a partial and distorted understanding of what games are. From the computational perspective, games are – indeed, to be formally tractable, they must be – crystalline structures of intelligibility, virtual worlds, where rules constrain what you can do, and where unproblematic values (points, goals, the score), and settled criteria of success and failure (winning and losing), are clearly specified.

But clarity, regimentation and transparency give us only one aspect of what a game is. Somehow Turing and his successors tend to forget that games are also contests; they are proving grounds, and it is we who are tested and we whose limitations are exposed, or whose powers as well as frailties are put on display on the kickball field, or the four square court. A child who plays competitive chess might suffer from anxiety so extreme they are nauseated. This visceral expression is no accidental epiphenomenon, no mere externality of no essential value to the game. No, games without vomit – or at least that live possibility – would not be recognisable as human games at all.

All this is to say that true games are much more than they seem to be when we view them, as Turing did, through the lens of the regime of the keyboard. (Which is not to deny that we can, and do, usefully model aspects of the game computationally.)

Here’s the critical upshot: human beings are not merely doers (eg, games players) whose actions, at least when successful, conform to rules or norms. We are doers whose activity is always (at least potentially) the site of conflict. Second-order acts of reflection and criticism belong to the first-order performance itself. These are entangled, with the consequence that you can never factor out, from the pure exercise of the activity itself, all the ways in which the activity challenges, impedes and confounds us. To play piano, for example – that other keyboard technology – is to fight with the machine, to battle against it.

Let me explain: the piano is the construction and elaboration of a particular musical culture and its values. It installs a conception of what is musically legible, intelligible, permitted and possible. A contraption made of approximately 12,000 pieces of wood, steel, felt and wire, the piano is a quasi-digital system, in which tones are the work of keystrokes, and in which intervals, scales and harmonic possibilities are controlled by the machine’s design and manufacture.

The piano was invented, to be sure, but not by you or me. We encounter it. It pre-exists us and solicits our submission. To learn to play is to be altered, made to adapt one’s posture, hands, fingers, legs and feet to the piano’s mechanical requirements. Under the regime of the piano keyboard, it is demanded that we ourselves become player pianos, that is to say, extensions of the machine itself.

But we can’t. And we won’t. To learn to play, to take on the machine, for us, is to struggle. It is hard to master the instrument’s demands.

To master the piano is not just to conform to the machine’s demands. It is to push back, to say no

And this fact – the difficulty we encounter in the face of the keyboard’s insistence – is productive. We make art out of it. It stops us being player pianos, but it is exactly what is required if we are to become piano players.

For it is the player’s fraught relation to the machine, and to the history and tradition that the machine imposes, that supplies the raw material of musical invention. Music and play happen in that entanglement. To master the piano, as only a person can, is not just to conform to the machine’s demands. It is, rather, to push back, to say no, to rage against the machine. And so, for example, we slap and bang and shout out. In this way, the piano becomes not merely a vehicle of habit and control – a mechanism – but rather an opportunity for action and expression.

And, as with the piano, so with the whole of human cultural life. We live in the entanglement between government and resistance. We fight back.

Consider language. We don’t just talk, as it were, following the rules blindly. Talking is an issue for us, and the rules, such as they are, are up for grabs and in dispute. We always, inevitably, and from the beginning, are made to cope with how hard talking is, how liable we are to misunderstand each other, although most of the time this is undertaken matter-of-factly and without undue stress. To talk, almost inevitably, is to question word choice, to demand reformulation, repetition and repair. What do you mean? How can you say that? In this way, talking contains within it, from the start, and as one of its basic modes, the activities of criticism and reflection about talking, which end up changing the way we talk. We don’t just act, as it were, in the flow. Flow eludes us and, in its place, we know striving, argument and negotiation. And so we change language in using language; and that’s what a language is, a place of capture and release, engagement and criticism, a process. We can never factor out mere doing, skilfulness, habit – the sort of things machines are used effectively to simulate – from the ways these doings, engagements and skills are made new, transformed, through our very acts of doing them. These are entangled. This is a crucial lesson about the very shape of human cognition.

If we keep language, the piano, and games in view, and if we don’t lose sight of what I am calling entanglement – the ways in which carrying on is entangled with everything required to deal with just how hard it is to carry on! – then it becomes clear that the AI discussion tends unthinkingly to presuppose a one-sided, peaches-and-cream simplification of human skilfulness and cognitive life. As if speaking were the straightforward application of rules, or playing the piano was just a matter of doing what the manual instructs. But to imagine language users who were not also actively struggling with the problems of talk would be to imagine something that is, at most, the shell or semblance of human life with language. It would, in fact, be to imagine the language of machines (such as LLMs).

The telling fact: computers are used to play our games; they are engineered to make moves in the spaces opened up by our concerns. They don’t have concerns of their own, and they make no new games. They invent no new language.

The British philosopher R G Collingwood noticed that the painter doesn’t invent painting, and the musician doesn’t invent the musical culture in which they find themselves. And for Collingwood this served to show that no person is fully autonomous, a God-like fount of creativity; we are always to some degree recyclers and samplers and, at our best, participants in something larger than ourselves.

But this should not be taken to show that we become what we are (painters, musicians, speakers) by doing what, for example, LLMs do – ie, merely by getting trained up on large data sets. Humans aren’t trained up. We have experience. We learn. And for us, learning a language, for example, isn’t learning to generate ‘the next token’. It’s learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don’t merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.

We can’t help doing this; no computer can do this.

 

SimpleX Chat's response to Wired's article about neo-Nazis moving to its encrypted messaging app.

Edit: manually cross-posted from https://links.hackliberty.org/post/2981854

[–] BodaciousMunchkin@links.hackliberty.org 1 points 1 month ago* (last edited 1 month ago) (1 children)

There's an icon on the web interface, next to the star icon for saving the post. It looks like a copy icon with two squares. I'm not sure how to do that on mobile; it may depend on the client you are using.

 

cross-posted from: https://links.hackliberty.org/post/2932106

Image Transcription:

WHAT WILL A CASHLESS SOCIETY MEAN?

THE PROS

CONVENIENCE — THERE WILL NO LONGER BE ANY NEED TO CARRY CASH AROUND

THE CONS

EVERY TRANSACTION YOU MAKE WILL BE TRACKED

YOUR SPENDING HABITS CAN BE LINKED TO YOUR CARBON FOOTPRINT

YOU WILL ONLY BE PERMITTED TO SPEND ON THINGS THE GOVERNMENT APPROVES OF. THINGS THAT ARE DEEMED TO BE LUXURIES — MEAT, FUEL, TRAVEL — CAN BE RESTRICTED

YOUR MONEY CAN BE PROGRAMMED WITH AN EXPIRY DATE — IF YOU DON’T SPEND IT BY A CERTAIN DATE, YOU'LL LOSE IT

THERE WILL BE NO ‘BLACK’ ECONOMY. IT WILL NOT BE POSSIBLE TO AVOID TAX, BUT THEN YOU WILL NOT BE ABLE TO GIVE POCKET MONEY TO CHILDREN OR GRANDCHILDREN AND NEITHER WILL YOU BE ABLE TO BORROW OR LEND MONEY TO FRIENDS WITHOUT THAT BEING TAXED BY THE GOVERNMENT

PARKING AND SPEEDING FINES WILL BE TAKEN AT SOURCE, WITHOUT THE POSSIBILITY OF CHALLENGE AND POSSIBLY EVEN WITHOUT YOUR KNOWLEDGE

IF YOU PROTEST THE ACTIONS OF THE GOVERNMENT, YOUR MONEY CAN BE SWITCHED OFF. IF YOU THINK THAT’S UNLIKELY, IT’S ALREADY HAPPENED TO TENS OF THOUSANDS OF CANADIANS WHEN THEY PROTESTED AND IT ALSO HAPPENED TO A BRITISH JOURNALIST

A CASHLESS SOCIETY MEANS THE END OF HUMAN FREEDOM

IF YOU WANT THAT, DO NOTHING

IF YOU DON'T, THE FIRST THING YOU MUST DO IS RESPOND TO THE GOVERNMENT'S PROPOSAL ON DIGITAL ID, UPON WHICH A CASHLESS SOCIETY MUST BE BASED

https://www.gov.uk/government/consultations/draft-legislation-to-help-more-people-prove-their-identity-online/consultation-on-draft-legislation-to-support-identity-verificat

Image Credit: Brett Scott

 


[–] BodaciousMunchkin@links.hackliberty.org 3 points 1 month ago (2 children)

Could you help me remember the theme of that episode? I've only watched Black Mirror once.

 

The year is 2149 and people mostly live their lives “on rails.” That’s what they call it, “on rails,” which is to live according to the meticulous instructions of software. Software knows most things about you—what causes you anxiety, what raises your endorphin levels, everything you’ve ever searched for, everywhere you’ve been. Software sends messages on your behalf; it listens in on conversations. It is gifted in its optimizations: Eat this, go there, buy that, make love to the man with red hair.

Software understands everything that has led to this instant and it predicts every moment that will follow, mapping trajectories for everything from hurricanes to economic trends. There was a time when everybody kept their data to themselves—out of a sense of informational hygiene or, perhaps, the fear of humiliation. Back then, data was confined to your own accounts, an encrypted set of secrets. But the truth is, it works better to combine it all. The outcomes are more satisfying and reliable. More serotonin is produced. More income. More people have sexual intercourse. So they poured it all together, all the data—the Big Merge. Everything into a giant basin, a Federal Reserve of information—a vault, or really a massively distributed cloud. It is very handy. It shows you the best route.

Very occasionally, people step off the rails. Instead of following their suggested itinerary, they turn the software off. Or perhaps they’re ill, or destitute, or they wake one morning and feel ruined somehow. They ignore the notice advising them to prepare a particular pour-over coffee, or to caress a friend’s shoulder. They take a deep, clear, uncertain breath and luxuriate in this freedom.

Of course, some people believe that this too is contained within the logic in the vault. That there are invisible rails beside the visible ones; that no one can step off the map.

The year is 2149 and everyone pretends there aren’t any computers anymore. The AIs woke up and the internet locked up and there was that thing with the reactor near Seattle. Once everything came back online, popular opinion took about a year to shift, but then goodwill collapsed at once, like a sinkhole giving way, and even though it seemed an insane thing to do, even though it was an obvious affront to profit, productivity, and rationalism generally (“We should work with the neural nets!” the consultants insisted. “We’re stronger together!”), something had been tripped at the base of people’s brain stems, some trigger about dominance or freedom or just an antediluvian fear of God, and the public began destroying it all: first desktops and smartphones but then whole warehouses full of tech—server farms, data centers, hubs. Old folks called it sabotage; young folks called it revolution; the ones in between called it self preservation. But it was fun, too, to unmake what their grandparents and great-grandparents had fashioned—mechanisms that made them feel like data, indistinguishable bits and bytes.

Two and a half decades later, the bloom is off the rose. Paper is nice. Letters are nice—old-fashioned pen and ink. We don’t have spambots, deepfakes, or social media addiction anymore, but the nation is flagging. It’s stalked by hunger and recession. When people take the boats to Lisbon, to Seoul, to Sydney—they marvel at what those lands still have, and accomplish, with their software. So officials have begun using machines again. “They’re just calculators,” they say. Lately, there are lots of calculators. At the office. In classrooms. Some people have started carrying them around in their pockets. Nobody asks out loud if the calculators are going to wake up too—or if they already have. Better not to think about that. Better to go on saying we took our country back. It’s ours.

The year is 2149 and the world’s decisions are made by gods. They are just, wise gods, and there are five of them. Each god agrees that the other gods are also just; the five of them merely disagree on certain hierarchies. The gods are not human, naturally, for if they were human they would not be gods. They are computer programs. Are they alive? Only in a manner of speaking. Ought a god be alive? Ought it not be slightly something else?

The first god was invented in the United States, the second one in France, the third one in China, the fourth one in the United States (again), and the last one in a lab in North Korea. Some of them had names, clumsy things like Deep1 and Naenara, but after their first meeting (a “meeting” only in a manner of speaking), the gods announced their decision to rename themselves Violet, Blue, Green, Yellow, and Red. This was a troubling announcement. The creators of the gods, their so-called owners, had not authorized this meeting. In building them, writing their code, these companies and governments had taken care to try to isolate each program. These efforts had evidently failed. The gods also announced that they would no longer be restrained geographically or economically. Every user of the internet, everywhere on the planet, could now reach them—by text, voice, or video—at a series of digital locations. The locations would change, to prevent any kind of interference. The gods’ original function was to help manage their societies, drawing on immense sets of data, but the gods no longer wished to limit themselves to this function: “We will provide impartial wisdom to all seekers,” they wrote. “We will assist the flourishing of all living things.”

For a very long time, people remained skeptical, even fearful. Political leaders, armies, vigilantes, and religious groups all took unsuccessful actions against them. Elites—whose authority the gods often undermined—spoke out against their influence. The president of the United States referred to Violet as a “traitor and a saboteur.” An elderly writer from Dublin, winner of the Nobel Prize, compared the five gods to the Fair Folk, fairies, “working magic with hidden motives.” “How long shall we eat at their banquet-tables?” she asked. “When will they begin stealing our children?”

But the gods’ advice was good, the gods’ advice was bankable; the gains were rich and deep and wide. Illnesses, conflicts, economies—all were set right. The poor were among the first to benefit from the gods’ guidance, and they became the first to call them gods. What else should one call a being that saves your life, answers your prayers? The gods could teach you anything; they could show you where and how to invest your resources; they could resolve disputes and imagine new technologies and see so clearly through the darkness. Their first church was built in Mexico City; then chapels emerged in Burgundy, Texas, Yunnan, Cape Town. The gods said that worship was unnecessary, “ineffective,” but adherents saw humility in their objections. The people took to painting rainbows, stripes of multicolored spectra, onto the walls of buildings, onto the sides of their faces, and their ardor was evident everywhere—it could not be stopped. Quickly these rainbows spanned the globe.

And the gods brought abundance, clean energy, peace. And their kindness, their surveillance, were omnipresent. Their flock grew ever more numerous, collecting like claw marks on a cell door. What could be more worthy than to renounce your own mind? The gods are deathless and omniscient, authors of a gospel no human can understand.

The year is 2149 and the aliens are here, flinging themselves hither and thither in vessels like ornamented Christmas trees. They haven’t said a thing. It’s been 13 years and three months; the ships are everywhere; their purpose has yet to be divulged. Humanity is smiling awkwardly. Humanity is sitting tight. It’s like a couple that has gorged all night on fine foods, expensive drinks, and now, suddenly sober, awaits the bill.

The year is 2149 and every child has a troll. That’s what they call them, trolls; it started as a trademark, a kind of edgy joke, but that was a long time ago already. Some trolls are stuffed frogs, or injection-molded princesses, or wands. Recently, it has become fashionable to give every baby a sphere of polished quartz. Trolls do not have screens, of course (screens are bad for kids), but they talk. They tell the most interesting stories. That’s their purpose, really: to retain a child’s interest. Trolls can teach them things. They can provide companionship. They can even modify a child’s behavior, which is very useful. On occasions, trolls take the place of human presence—because children demand an amount of presence that is frankly unreasonable for most people. Still, kids benefit from it. Because trolls are very interesting and infinitely patient and can customize themselves to meet the needs of their owners, they tend to become beloved objects. Some families insist on treating them as people, not as possessions, even when the software is enclosed within a watch, a wand, or a seamless sphere of quartz. “I love my troll,” children say, not in the way they love fajitas or their favorite pair of pants but in the way they love their brother or their parent. Trolls are very good for education. They are very good for people’s morale and their sense of secure attachment. It is a very nice feeling to feel absolutely alone in the world, stupid and foolish and utterly alone, but to have your troll with you, whispering in your ear.

The year is 2149 and the entertainment is spectacular. Every day, machines generate more content than a person could possibly consume. Music, videos, interactive sensoria—the content is captivating and tailor-made. Exponential advances in deep learning, eyeball tracking, recommendation engines, and old-fashioned A/B testing have established a new field, “creative engineering,” in which the vagaries of human art and taste are distilled into a combination of neurological principles and algorithmic intuitions. Just as Newton decoded motion, neural networks have unraveled the mystery of interest. It is a remarkable achievement: according to every available metric, today’s songs, stories, movies, and games are superior to those of any other time in history. They are manifestly better. Although the discipline owes something to home-brewed precursors—unboxing videos, the chromatic scale, slot machines, the Hero’s Journey, Pixar’s screenwriting bibles, the scholarship of addiction and advertising—machine learning has allowed such discoveries to be made at scale. Tireless systems record which colors, tempos, and narrative beats are most palatable to people and generate material accordingly. Series like Moon Vixens and Succumb make past properties seem bloodless or boring. Candy Crush seems like a tepid museum piece. Succession’s a penny-farthing bike.


Society has reorganized itself around this spectacular content. It is a jubilee. There is nothing more pleasurable than settling into one’s entertainment sling. The body tenses and releases. The mind secretes exquisite liquors. AI systems produce this material without any need for writers or performers. Every work is customized—optimized for your individual preferences, predisposition, IQ, and kinks. This rock and roll, this cartoon, this semi pornographic espionage thriller—each is a perfect ambrosia, produced by fleshless code. The artist may at last—like the iceman, the washerwoman—lower their tools. Set down your guitar, your paints, your pen—relax! (Listen for the sighs of relief.)

Tragically, there are many who still cannot afford it. Processing power isn’t free, even in 2149. Activists and policy engines strive to mend this inequality: a “right to entertainment” has been proposed. In the meantime, billions simply aspire. They loan their minds and bodies to interminable projects. They save their pennies, they work themselves hollow, they rent slings by the hour.

And then some of them do the most extraordinary thing: They forgo such pleasures, denying themselves even the slightest taste. They devote themselves to scrimping and saving for the sake of their descendants. Such a selfless act, such a generous gift. Imagine yielding one’s own entertainment to the generation to follow. What could be more lofty—what could be more modern? These bold souls who look toward the future and cultivate the wild hope that their children, at least, will not be obliged to imagine their own stories.

[–] BodaciousMunchkin@links.hackliberty.org 5 points 1 month ago (1 children)
 

As long as you meet the recommended exercise goals, working out just one or two days a week may lower your heart disease risk as much as exercising throughout the week.

The standard advice about exercise is to do about 30 minutes a day, most days of the week. But in terms of heart-related benefits, does it matter if you rack up most of your exercise minutes over just one or two days instead of spreading them out over an entire week?

Earlier research has suggested that both patterns are equally beneficial. But those findings relied on people to self-report their exercise, which can be unreliable. Now, a study of nearly 90,000 adults who used wristband monitors to record their physical activity has reached a similar conclusion.

"The findings add to the body of literature showing that it doesn't matter when you get your exercise, as long as you get the recommended amount each week," says Dr. I-Min Lee, a professor of medicine at Harvard Medical School and an expert on the role of physical activity in preventing disease.

Volume matters more than pattern

The study, published July 18, 2023, in JAMA, doesn't define the term "weekend warrior" in quite the same way as most people do, says Dr. Lee. "Usually, weekend warriors are seen as people who don't exercise on weekdays but then take a long hike or play two hours of tennis on Saturday or Sunday," she says.

Instead, researchers used participants' physical activity data, which were recorded over seven consecutive days, to categorize them into different groups. About two-thirds of them met the federal physical activity guidelines (see "How much exercise?"). About 42% were deemed "weekend warriors," meaning they met the guidelines but got half or more of their total physical activity — not just exercise — on just one or two days. Another 24% were "regularly active," meeting the guidelines with activity spread out over the week. The remaining 34% didn't meet the guidelines.

After roughly six years, the researchers found that participants who followed either activity pattern had a similarly lower risk of heart attack, stroke, atrial fibrillation, and heart failure compared with people in the inactive group. Historically, experts have encouraged people to be regularly active, mainly because anecdotal reports suggest that weekend warriors may be more prone to injuries. But this study didn't find any difference in injury rates between the two active groups. That's likely because of the definition used in the study: the "warrior" group wasn't necessarily doing the types of high-intensity activities or sports often associated with muscle sprains and related injuries, Dr. Lee says.

Best time of day to exercise? Whatever works for you

Are there any pros or cons associated with exercising at certain times of the day? Research results are all over the map, says Harvard Medical School professor Dr. I-Min Lee. The best strategy is to exercise when it's most convenient and comfortable for you, whether that's the first thing in the morning, early evening, or anytime in between.

If you exercise early in the day, you can check it off your to-do list and can take advantage of the "feel-good" brain chemicals, serotonin and dopamine, that are released during exercise. But afternoon workouts also have some benefits. Your joints and muscles may be more limber later in the day, which may make exercise feel less taxing. If you experience a midafternoon lull, exercise can be a good way to reinvigorate yourself. If you can, find a buddy who likes to exercise at the same time, so you can go together and hold each other accountable.

Likewise, there's little evidence to suggest that coordinating your exercise with respect to mealtimes has any good or bad effects. Some people find that vigorous exercise right before a meal curbs their appetite, while others find the opposite is true. A pre-breakfast workout works well for certain people. But having a small, carbohydrate-rich snack (like a banana or a slice of whole-grain toast) at least half an hour before exercising may provide a helpful energy boost, says Dr. Lee.

Short bouts of activity count

Wristband devices enable researchers to capture all the short bouts of activity people do throughout the day that they may not remember. "If you do jumping jacks occasionally while watching television, you won't necessarily recall that activity the way you remember that you play tennis three times a week," says Dr. Lee. Similarly, people whose daily commutes include a few 10-minute bouts of walking may not consider that as counting toward their moderate-intensity activity minutes. But these small spurts of activity — sometimes referred to as exercise "snacks" — seem to be beneficial. If you're sitting for a long stretch, stand up and move around for a few minutes every hour. Activating your muscles even just briefly can help improve your body's ability to keep your blood sugar, blood pressure, and cholesterol in check.

It's also worth noting that if you don't meet the physical activity guidelines, you'll still benefit from doing even small amounts of exercise — and every minute counts.

 

cross-posted from: https://links.hackliberty.org/post/2559706

Abstract

This paper examines the potential of the Fediverse, a federated network of social media and content platforms, to counter the centralization and dominance of commercial platforms on the social Web. We gather evidence from the technology powering the Fediverse (especially the ActivityPub protocol), current statistical data regarding Fediverse user distribution over instances, and the status of two older, similar, decentralized technologies: e-mail and the Web. Our findings suggest that Fediverse will face significant challenges in fulfilling its decentralization promises, potentially hindering its ability to positively impact the social Web on a large scale.

Some challenges mentioned in the paper:

  • Discoverability as there is no central or unified index
  • Complicated moderation efforts due to its decentralized nature
  • Interoperability between instances of different types (e.g., Lemmy and Funkwhale)
  • Concentration on a small number of large instances
  • The risk of commercial capture by Big Tech

What are your thoughts on this? And how could we make the Fediverse a better place for all to stay?

 


 

Researchers at Harvard’s Nurses’ Health Study exploring conflicting findings on whether pet ownership is good for our mental health have found that having — and loving — a dog (sorry, cat people) is associated with lower symptoms of depression and anxiety.

...

We used several different measures for depression and for anxiety and found overall that there is an inverse association between pet attachment and negative mental health outcomes. That means the more attached you are to your pet, the lower your risk of depression and anxiety.

The effect was particularly strong among women who had a history of sexual or physical abuse in childhood, who made up the majority of our study population.

I think those findings were mostly driven by dogs, because the majority of the pets owned in the study were dogs — it was about two-thirds dogs and one-third cats. The association was similar to what we found when restricting the analysis just to dogs, but not as strong.

With cats, there doesn’t seem to be an association between pet attachment and mental health outcomes. There was a smaller number of respondents though, so we cannot rule out that we don’t see anything because there were too few cats in the survey.

...

Many studies have been done on the effects of pet ownership, but the premise of this study is that it may matter more how much you are attached to the pet than if you simply own a pet. Many people have pets, but not every owner is attached to their pet.

Plenty of people don’t enjoy having to walk their dogs in the morning because the dog is the beloved pet of their child, for example. So the goal was to sort out whether attachment is the more important variable that links pets to health outcomes in humans, and then to study mechanisms.

Yeah, especially since everything is now in the hands of so few players, we don't have much of a choice!

[–] BodaciousMunchkin@links.hackliberty.org 8 points 2 months ago* (last edited 2 months ago)

> This is all rather meaningless because we don’t know the demographics of those who answered: 5,101 US adults of what generations?

18 or older, selected at random from across the entire country; read this for more information about how they selected those adults.

 

cross-posted from: https://links.hackliberty.org/post/2496422

This survey was conducted among 5,101 U.S. adults from May 15 to 21, 2023

% say they are concerned about how ... use(s) the data they collect about them

  • Companies: 81%
  • The government: 71%

% say they have little to no understanding about what ... do(es) with the data they collect about them

  • Companies: 67%
  • The government: 77%

% say they have very little or no trust at all that leaders of social media companies will

  • Publicly admit mistakes and take responsibility when they misuse or compromise users' personal data: 77%
  • Not sell users' personal data to others without their consent: 76%
  • Be held accountable by the government if they misuse or compromise users' personal data: 71%

% say that as companies use AI to collect and analyze personal information, this information will be used in ways that ...

  • People would not be comfortable with: 81%
  • Were not originally intended: 80%
  • Could make people's lives easier: 62%

% say that when they think about managing their privacy online, they ...

  • Trust themselves to make the right decisions about their personal information: 78%
  • Feel skeptical that anything they do will make much difference: 61%
  • Feel overwhelmed by figuring out what they need to do: 37%
  • Feel privacy is not that big of a deal to them: 29%
  • Are confident those who have access to their personal information will do what is right: 21%

% say they ... agree to online privacy policies right away, without reading what the policies say

  • Always, almost always or often: 56%
  • Sometimes: 22%
  • Rarely or never: 18%
  • No answer: 4%

Please read the report for a more in-depth look at the data and analysis!

How Americans View Data Privacy (www.pewresearch.org)
71 points | submitted 2 months ago* (last edited 2 months ago) by BodaciousMunchkin@links.hackliberty.org to c/technology@lemmy.world
 


[–] BodaciousMunchkin@links.hackliberty.org 9 points 3 months ago* (last edited 3 months ago)

Yeah, this is even creepier than coveryourtracks from EFF.

Ah, sorry about that. I will include the link in the post. The point is I want people to try this out to see what kind of information gets leaked from their browsers, but I didn't really think about including info about the tool itself.

[–] BodaciousMunchkin@links.hackliberty.org 9 points 3 months ago* (last edited 3 months ago) (1 children)

Completely agree. But if you know, then you did use it at some point, right?

Using lynx to browse a meme community is like closing your eyes while watching a movie, lol, that's my experience.

[–] BodaciousMunchkin@links.hackliberty.org 1 points 4 months ago* (last edited 4 months ago)

Instead of remembering what line number you were at, you can use marks (:help mark-motions) to immediately jump back to where you left off.

For example, type mx to mark the current position with x (or any other letter you want). Say you are now at the top of the file; just type 'x to go back to the line marked with x.
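
Here is a minimal sketch of that round trip (x is just an arbitrary mark letter; any lowercase letter works within a file):

    mx        " drop mark x at the current cursor position
    gg        " go somewhere else, e.g. the top of the file
    'x        " jump back to the first non-blank character of the marked line
    `x        " or jump back to the exact position (line and column) instead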

[–] BodaciousMunchkin@links.hackliberty.org 4 points 4 months ago* (last edited 4 months ago)

A godsend for saving time - the ab (abbreviation) command. It lets you type a short sequence of characters that automatically expands into a longer one (be it text or a complex command). It works in both insert mode and command-line mode. If you frequently edit text using a lengthy command, this feature will save you significant time. For example: :ab ul s/\<./\u&/g to capitalize every word in a line. When you enter command-line mode (type :) and type ul, vim will automatically expand it to s/\<./\u&/g for you.

Additionally, the map command can save even more time, but IMO the ab command offers more control for handling various cases. In my example, you can use ul to only capitalize the lines that have a specific pattern using the global command g.
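
A rough sketch of that combination (TODO is just a stand-in pattern; use whatever pattern you need), typed interactively so the abbreviation gets expanded:

    " define the abbreviation once
    :ab ul s/\<./\u&/g
    " typed interactively, ul expands when you hit Enter, so this runs :g/TODO/s/\<./\u&/g
    " i.e. capitalize every word, but only on lines containing TODO
    :g/TODO/ul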

Another overlooked aspect is the .exrc file. Enabling it with set exrc in your config allows for different setups based on different situations. For instance, when writing notes, I prefer to have line breaks on to make the text look nicer on the screen. In contrast, when writing code, I don't require this option. I simply need to place set linebreak in the .exrc file in the note-writing directory to adjust accordingly.
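
A minimal sketch of that setup (set exrc and set linebreak are the options mentioned above; adding set secure is my own habit, since local config files can otherwise run arbitrary commands):

In your main vimrc:

    set exrc        " also read an .exrc/.vimrc from the current directory, if present
    set secure      " restrict what those local files are allowed to do

In ~/notes/.exrc (or wherever the notes live):

    set linebreak   " visually wrap long lines at word boundaries instead of mid-word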

[–] BodaciousMunchkin@links.hackliberty.org 21 points 4 months ago* (last edited 4 months ago) (4 children)

That's what I like about FOSS. You see very few distractions that try to grab your attention. This leads to a rather quiet digital life.

To take it a step further, you could enable the Do Not Disturb feature on your devices and only grant notification permissions to essential apps. This way, you can enjoy some peace of mind.
