I agree; to me, one of the most frustrating aspects of much online discussion of AI is that it focuses on trivial chatter and nonsense, in particular boring fanboyism about the likes of Musk or OpenAI. Meanwhile the truly Earth-shattering long-term developments are happening elsewhere, and this is one example of them. Halving unexpected deaths in hospital settings is a huge thing, yet it goes barely reported compared with the brain-dead rah-rah Silicon Valley gossip that passes for most discussion about AI.
I admit I don't know very much about any of this, but I've never heard of children who grow up in relatively isolated circumstances, for example home schooling, having lower-functioning immune systems?
At this point, I'm pretty sure Chinese taikonauts will get to the Moon before American astronauts return.
I don't have sources linking the 2024 deflation in China to AI. (I qualified my initial statement with "hard to know".)
Robotics is a proxy for AI in manufacturing.
I suspect AI is about to give us a type of deflation no economist has ever seen or modeled before. What will happen when AI gives us the expert knowledge of doctors, lawyers, technicians, teachers, engineers, and so on, almost for free?
You can't talk about this scenario in terms of past models, because it's never happened before, but we can clearly see that it's about to happen to us.
The last 12 months have seen the most sustained period of deflation in China since the late 1990s. It's hard to know how much AI is responsible, but I would guess it is to some extent. It's driving the reduction in prices in the manufacturing of so many things, EVs especially.
Many people assume unemployment will be AI's most destructive economic effect. That may be true, but before that problem arrives, there will be a far more immediate one to deal with - deflation.
Deflation is so destructive because it shrinks businesses' incomes while increasing the size of their debt relative to this income. If there is sustained deflation, then this leads to a spiraling collapse that takes asset prices like the stock market and property values with it. This was the main mechanism that caused most of the damage in the Great Depression.
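That mechanism is basically arithmetic, so here's a minimal sketch with purely made-up numbers (the revenue, debt, and deflation figures are all hypothetical) showing why a fixed nominal debt gets heavier as prices and incomes fall:

```python
# Hypothetical illustration of the debt-deflation squeeze:
# a business's revenue falls with the price level, but its nominal debt does not.
revenue = 1_000_000      # annual revenue today ($), assumed figure
debt = 500_000           # fixed nominal debt ($), assumed figure
deflation_rate = 0.05    # prices (and revenue) fall 5% per year, assumed

for year in range(1, 6):
    revenue *= (1 - deflation_rate)
    ratio = debt / revenue
    print(f"Year {year}: revenue ${revenue:,.0f}, debt/revenue = {ratio:.2f}")

# The debt burden (debt relative to income) rises every year even though the
# business borrowed nothing new - that growing squeeze on debtors is the
# spiral described above in the Great Depression context.
```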
If AI is on the cusp of giving us lawyers', doctors', and other experts' knowledge for practically free, then it follows that there is massive deflation to come. There is already a backlash against AI in some quarters; I would expect it to grow when the deflation problem arrives.
If ever there was an industry that could do with a technological overhaul, it's housing. 3D printing threatens to do the job, and seems to have the right tools, but never takes off - will this be the one that does?
At $1,000 per module, they offer solutions to homelessness in Western countries.
New data show both have stopped increasing. Is the change permanent? People are planning for this, though it's possible both power sources have a final spurt ahead of them.
This is still a few years ahead of expected schedule so it's hard to tell.
When might it integrate Lemmy?
I won't be surprised if Chinese astronauts reach the Moon before American ones return to it. Boeing's SLS seems to go from bad to worse, and SpaceX's Starship is nowhere near ready to completely replace it.
Some people seem to expect SpaceX to work miracles. It has formidable problems to solve before using a Starship to land astronauts on the Moon. The capability of refuelling Starship in space, landing on the Moon, refuelling there, and taking off again may take until the 2030s to solve.
The EU is to change the law to make social media owners and company executives personally liable with fines, or potential jail sentences, for failing to deal with misinformation that promotes violence. That's good, but teaching critical thinking is even more important.
AI is about to make the threat of misinformation orders of magnitude greater. It is now possible to fake images, video, and audio indistinguishable from reality. We need new ways to combat this, and relying on top-down approaches isn't enough. There's another likely consequence - expect lots of social media misinformation telling you how bad critical thinking is. The people who use misinformation don't want smart, informed people who can spot them lying.
Yes, it's an odd statement. The authors are all Harvard scientists, and I have checked what they post on Twitter; they aren't anti-vaccine cranks. Though one of them, Al Ozonoff, does try to engage with such people, so perhaps this is a concession in a similar vein of outreach. The Hill is a conservative news website. Perhaps they felt they had to get their own dubious science a mention, and that was the price of publication for the Harvard scientists?