this post was submitted on 03 Jun 2024
22 points (100.0% liked)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
AGI growth lol from twitter
xcancel.com link
Edit: somebody also Did The Math (xcancel) "I eyeballed the rough numbers from your graph then re-plotted it as a linear rather than a logarithmic scale, because they always make me suspicious. You're predicting the effective compute is going to increase about twenty quadrillion times in a decade. That seems VERY unlikely."
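The quoted reply's arithmetic is easy to sanity-check. A minimal sketch, assuming the commenter's eyeballed figure of a "twenty quadrillion" (2e16) fold increase over ten years, of what per-year growth that graph is actually claiming:

```python
# Back-of-envelope check of the "Did The Math" reply: if "effective
# compute" grows ~20 quadrillion-fold (2e16x) over a decade, what
# per-year multiplier does that imply? The 2e16 figure is the
# commenter's eyeballed estimate from the graph, not a measured number.
total_growth = 2e16   # claimed growth over the whole decade
years = 10

per_year = total_growth ** (1 / years)
print(f"{per_year:.1f}x per year")  # roughly 43x, every single year, for ten years
```

A ~43x multiplier compounding annually for a decade is the kind of number a log-scale plot hides and a linear replot makes obvious, which is the reply's point.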
i really, really don't get how so many people are making the leaps from "neural nets are effective at text prediction" to "the machine learns like a human does" to "we're going to be intellectually outclassed by Microsoft Clippy in ten years".
like it's multiple modes of failing to even understand the question happening at once. i'm no philosopher; i have no coherent definition of "intelligence", but it's also pretty obvious that all LLMs are doing is statistical extrapolation on language. i'm just baffled at how many so-called enthusiasts and skeptics alike just... completely fail at the first step of asking "so what exactly is the program doing?"
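"statistical extrapolation on language" can be made concrete with a toy example. This is a crude bigram model, nothing like a real transformer in scale or mechanism, but it illustrates the basic idea of predicting the next word purely from co-occurrence statistics; the corpus here is made up for the example:

```python
# Toy bigram "language model": predicts the next word as whichever
# word most often followed the current one in the training text.
# A deliberately crude illustration of text prediction as statistical
# extrapolation -- no understanding involved, just counting.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often
print(predict_next("sat"))  # "on"
```

Real LLMs operate on tokens with learned weights rather than raw counts, but the task being optimized is the same shape: continue the text in the statistically likely direction.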
Same with when they added some features to the UI of GPT with the GPT-4o chatbot thing. Don't get me wrong, the tech to do real-time audio processing etc. is impressive (but it has nothing to do with LLMs; it's a different technique), yet it certainly is very much smoke and mirrors.
I recall when they taught developers to be careful about shipping small UI changes without backend changes: to non-insiders a UI tweak feels like a massive change even though the backend still needs a lot of work (so the client thinks you're 90% done when only 10% is done). Now half the tech people are getting tricked by the same problem.
i suppose there is something more "magical" about having the computer respond in real time, and maybe it's that "magical" feeling that's getting so many people to just kinda shut off their brains when creators/fans start wildly speculating on what it can/will be able to do.
how that manages to override people's perceptions of their own experiences happening right in front of them still boggles my mind. they'll watch a person point out that it gets basic facts wrong or speaks incoherently, and assume the fault lies with the person for not having the true vision or what have you.
(and if i were to channel my inner 2010s reddit atheist for just a moment, it feels distinctly like the way people talk about the Christian Rapture, where the flaws and issues you point out in the system get spun as personal flaws: you aren't observing the system making basic errors, you're in ego-preserving denial about the "inevitability of ai")