Another small step towards WWIII
Flumpkin
The Last Ringbearer (annas-archive) by the paleontologist Kirill Eskov.
Eskov bases his novel on the premise that the Tolkien account is a "history written by the victors".[2][3] Mordor is home to an "amazing city of alchemists and poets, mechanics and astronomers, philosophers and physicians, the heart of the only civilization in Middle-earth to bet on rational knowledge and bravely pitch its barely adolescent technology against ancient magic", posing a threat to the war-mongering faction represented by Gandalf (whose attitude is described by Saruman as "crafting the Final Solution to the Mordorian problem") and the Elves.[2]
Macy Halford, in The New Yorker, writes that The Last Ringbearer retells The Lord of the Rings "from the perspective of the bad guys, written by a Russian paleontologist in the late nineties and wildly popular in Russia".[4] The book was written in the context of other Russian reinterpretations of Tolkien's works, such as Natalia Vasilyeva and Natalia Nekrasova's The Black Book of Arda [ru], which treats Melkor as good and the Valar and Eru Ilúvatar as tyrannical rulers.
Maybe that is what we need to do. "Decide" on certain moral questions based on the best scientific data, our values, and sound arguments, and then stop debating them unless new scientific evidence challenges those moral edicts.
Somehow we keep going round in circles as a civilization.
There is nothing to keep you from using factors of 1024 (except the slightly ludicrous prefixes "kibi" and "mebi"), but outside of low-level stuff like disk sectors or BIOS code, where you might want to use bit logic instead of division, it's rather rare. I too started in the time when a division op was more costly than bit-level logic.
I'd argue that any user-facing application is better off with base 1000, except where convention dictates otherwise. The majority of users don't know, care, or need to care what bits or bytes do. It's programmers who like the beauty of the bit logic, not users. @mb_@lemm.ee
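The two conventions can be sketched side by side. This is a minimal illustration (the `format_bytes` helper is hypothetical, not from any comment above), including the bit-shift trick that made base 1024 attractive when division was expensive:

```python
def format_bytes(n, base=1000):
    """Format a byte count with decimal (kB, MB, ...) or binary (KiB, MiB, ...) prefixes."""
    units = {1000: ["B", "kB", "MB", "GB", "TB"],
             1024: ["B", "KiB", "MiB", "GiB", "TiB"]}[base]
    value = float(n)
    for unit in units:
        if value < base or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= base

# User-facing decimal vs. programmer-facing binary for the same count:
print(format_bytes(5_000_000))        # 5.0 MB
print(format_bytes(5_000_000, 1024))  # 4.8 MiB

# The "bit logic instead of division" point: for powers of two,
# a right shift by 10 bits equals integer division by 1024.
assert (5_000_000 >> 10) == 5_000_000 // 1024
```

The shift trick only works for power-of-two divisors, which is exactly why low-level code gravitated to 1024 in the first place.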
You forgot the journalists who frame narratives and the intellectuals who secrete the ideology that makes it all possible.
I think there are a few industrial processes, like aluminum smelting, that produce CO2 from something other than energy generation. So we should continue research & development, but it really shouldn't be solely in the hands of Shell.
You all want to simplify this conflict into an epic battle between good and evil. Except it is a conflict between two great evils, fought by means that create more evil.
It's the same insane cognitive dissonance I see in MAGA. Just suggesting that there could be a diplomatic solution to this conflict, or trying to understand the motivations or demands of your enemy, will trigger the instinct to call me a Chamberlain or a Trumpist or a Putinist. All the "progressives" are just gleefully reveling in this glorious slaughter of war and screaming YES! to total war.
Yeah. But maybe this is how you teach an AI a broader understanding of the real world. Or really a slightly less narrow view. Human brains also have to learn and reconcile all these conflicting data points and then create a kind of understanding from it. For any machine learning it would only be an intuitive instinct.
Like you would have a bunch of these "tables" that show relationships between various tokens and embody concepts. Maybe you need to combine different kinds of models that are organized and trained differently to resolve such things. I only have a very surface-level understanding of how machine learning works, so I know this is very speculative. Maybe you're right and it can only ever reflect the training data. Then maybe you'd need to edit the training data, but you could also maybe use other AIs to "reinterpret" training data based on other models.
Like with all the data on Reddit, could you train a model to detect sarcasm or lies, or to differentiate between liberal, leftist, and fascist types of arguments? Not just recognizing the tokens or talking points, but the semantics of an argument? Like detecting a non sequitur. You probably need "general knowledge" understanding for that. But any kind of AI like that would be incredibly interesting for social media, so your client could tag certain posts, or root out bot / shill networks that work for special interests (fossil fuel, USA, China, Russia).
So all the stuff "conflicting with each other and making a giant spider web of issues to juggle" might be what you can train an AI to pull apart into "appeal to emotion" and "materialistic view" or "belief in inequality" or "preemptive bias counteractor". Maybe it actually could extract those patterns and help us communicate better.
Eh I really need to learn more about AI to understand the limits.
Would it be possible to create a kind of "formula" to express the abstract relationship of ethnic makeup, location, year, and field? Like convert a table of population, country, and ethnicity mix per year and then train the model on that. It's clear that it doesn't understand the meaning or abstract concept, but it can associate and extrapolate things. So it could "interpret" what the image description says while training and then use the prompt better. So if you'd prompt "english queen 1700" it would output a white queen; if you input the year 2087 it would be ever so slightly less pasty.
That looks hilarious. It's a bit like that, except here it's humans on earth.
Oh wow it has eye tracking! I have high hopes for that feature.
But what I really want is to see and use my keyboard in VR and have an optimized desktop environment to pull up a text document or website quickly. I felt a bit trapped the last time I used VR and had to refer to documentation.
Huh, wondered the same thing. Apparently written by Aisha Sultan.