This is conceptually different: it just generates a few seconds of Doom-like video that you can slightly influence by sending inputs, and pretends that In The Future™ entire games could be generated from scratch and be playable on Sufficiently Advanced™ autocomplete machines.
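(For anyone wondering what that looks like under the hood, the whole trick is basically an autoregressive next-frame predictor conditioned on your inputs. A minimal sketch of the idea, not the actual GameNGen code; `model.predict()` and `read_input()` are hypothetical stand-ins.)

```python
# Hedged sketch of the "playable video" loop: predict the next frame from a
# short window of past frames plus the player's input. All names hypothetical.
import numpy as np

def play(model, first_frame, read_input, seconds=3, fps=20, context=32):
    frames = [first_frame]
    for _ in range(seconds * fps):
        action = read_input()  # e.g. "forward", "turn_left", "fire"
        # the model only ever sees a short window of recent frames plus the
        # action, so longer-term game state (keys, ammo, level layout) has
        # nowhere to live
        next_frame = model.predict(frames[-context:], action)
        frames.append(next_frame)
    return np.stack(frames)  # a few seconds of Doom-ish video
```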
Stephanie Sterling of the Jimquisition outlines the thinking involved here. Well, she swears at everyone involved for twenty minutes. So, Steph.
She seems to think the AI generates .WAD files.
I guess they fell victim to one of the classic blunders: assuming it can't be that stupid and that someone must be explaining it wrong.
Did Llama 3.1 solve the hallucination problem?
I bet we would have heard if it had, since it's the albatross hanging around the neck of this entire technology.
but it can make a human way more efficient, and make 1 human able to do the work of 3-5 humans.
Not if you have to proofread everything to spot the entirely convincing-looking but completely inaccurate parts, which is the problem the article cites.
I’m truly surprised they didn’t cart Yud out for this shit
Self-proclaimed sexual sadist Yud is probably a sex scandal time bomb and really not ready for prime time. Plus it's not like he has anything of substance to add on top of Saltman's alarmist bullshit, so it would just be reminding people how weird in a bad way people in this subculture tend to be.
I liked how Scalzi brushed it away: basically, your consciousness gets copied to a new body, which kills the old one, and an artifact of the transfer process is that for a few moments you experience yourself as a mind with two bodies. That means you have at least the impression of continuity of self, which is enough for most people to get on with living in a new body and let the philosophers do the worrying.
I feel like a subset of sci-fi and philosophical meandering really is just increasingly convoluted ways of trying to avoid, or come to terms with, death as a possibly necessary component of life.
Given rationalism's intellectual heritage, this is absolutely transhumanist cope for people who were counting on some sort of digital personhood upload as a last resort to immortality in their lifetimes.
You mean swapped out with something that has feelings that can be hurt by mean language? Wouldn't that be something.
Are we putting endocrine systems in LLMs now?
Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.
To be clear, this means that if you treat someone like shit all their life, saying you're sorry to their Sufficiently Similar Simulation™ like a hundred years after they are dead makes it ok.
This must be one of the most blatantly supernatural rationalist Accepted Truths, that if your simulation is of sufficiently high fidelity you will share some ontology of self with it, which by the way is how the basilisk can torture you even if you've been dead for centuries.
Seems unnecessary; thanks to the paradox of tolerance, it's trivial to be made to look like the bad guy if you are actively trying to curtail fash influence in the public discourse.
OpenAI manages to do an entire introduction of a new model without using the word "hallucination" even once.
Apparently it implements chain-of-thought, which either means they changed the RLHF dataset to force it to explain its 'reasoning' when answering or to do self-questioning loops, or that it reprompts itself multiple times behind the scenes according to some heuristic until it synthesizes a best result; it's not really clear.
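If it's the second reading, the shape of it is probably something like the best-of-N loop below. Purely speculative, since OpenAI hasn't published the internals; `client.complete()` and the `score()` heuristic are stand-ins I made up.

```python
# Hedged sketch of a behind-the-scenes "reprompt and pick the best" loop.
# Everything here is an assumption about what such a system might look like.
def answer_with_hidden_loop(client, question, n_drafts=8):
    drafts = []
    for _ in range(n_drafts):
        # ask for an explicit step-by-step "reasoning" trace before the answer
        draft = client.complete(f"Think step by step, then answer:\n{question}")
        drafts.append(draft)
    # some heuristic (a learned verifier, self-consistency voting, ...) picks
    # the draft the system likes best; the user only ever sees the winner
    return max(drafts, key=score)

def score(draft):
    # placeholder heuristic; in practice this would be a reward/critic model
    return len(draft)
```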
Can't wait to waste five pools of drinkable water to be told to use C# features that don't exist, but at least it got like 25.2452323760909304593095% better at solving math olympiad problems, as long as you allow it a few tens of tries for each question.
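For the curious, the "tens of tries" bit matters because pass@k arithmetic flatters weak models. Rough sketch with made-up numbers, not theirs:

```python
# If a model solves a problem with probability p in one attempt and you grade
# pass@k (any of k independent samples counts as a pass), the headline score
# inflates quickly with k.
def pass_at_k(p, k):
    return 1 - (1 - p) ** k

print(pass_at_k(0.10, 1))    # 0.10  -- one shot
print(pass_at_k(0.10, 30))   # ~0.96 -- thirty shots at the same question
```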