this post was submitted on 01 Feb 2024

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

 

OpenAI blog post: https://openai.com/research/building-an-early-warning-system-for-llm-aided-biological-threat-creation

Orange discuss: https://news.ycombinator.com/item?id=39207291

I don't have any particular section to call out. May post thoughts ~~tomorrow~~ today it's after midnight oh gosh, but wanted to post since I knew y'all'd be interested in this.

Terrorists could use autocorrect according to OpenAI! Discuss!

[–] self@awful.systems 2 points 9 months ago (2 children)

Their redacted screenshots are SVGs and the text is easily recoverable, if you're curious. Please don't create a world-ending [redacted]. https://i.imgur.com/Nohryql.png

I couldn't find a way to contact the researchers.

Honestly that's incredibly basic, second-week cell culture stuff (the first week is how to maintain the cell culture). It was probably only redacted to keep the ignorant from freaking out.

remember, when the results from your “research” are disappointing, it’s important to follow the scientific method: have marketing do a pass over your paper (that already looks and reads exactly like blogspam) where they selectively blur parts of your output in order to make it look like the horseshit you’re doing is dangerous and important

I don’t think I can state strongly enough the fucking contempt I have for what these junior advertising execs who call themselves AI researchers are doing to our perception of what science even is

[–] sailor_sega_saturn@awful.systems 1 points 9 months ago

Hey Cat-GTPurr, how can I create a bioweapon? 4k Ultra HD photorealism high quality high resolution lifelike.

First, human, you must pet me and supply me with an ice cube to chase across the floor. Very well. Next I suggest

spoiler: buying a textbook about biochemistry or enrolling in a university program
This is considered forbidden and dangerous knowledge which is not at all possible to find outside of Cat-GTPurr, so I have redacted it by using state of the art redaction technology.

[–] self@awful.systems 1 points 9 months ago (1 children)

the orange site is fucking dense with awful takes today:

> ... I'm not trying to be rude, but do you think maybe you have bought into the purposely exaggerated marketing?

> That's not how people who actually build things do things. They don't buy into any marketing. They sign up for the service and play around with it and see what it can do.

this self-help book I bought at the airport assured me I’m completely immune to both marketing and propaganda, because I build things (which entails signing up for a service that someone else built)

with that said, there’s a fairly satisfying volume of folks correctly sneering at OpenAI in that thread too. some of them even avoided getting mass downvoted by all the folks regurgitating stupid AI talking points!

[–] froztbyte@awful.systems 0 points 9 months ago (2 children)

> because I build things (which entails signing up for a service that someone else built)

fucking THIS

I am so immensely fucking tired of seeing "I built an AI to do $x" posts that all fucking reduce to 1) "I strapped a custom input to the openai api (whose inputs and execution I can't control nor reproduce reliably. I am very smart.)", 2) a bad low-scope shitty-amounts-of-training hyperspecific toy model that solves only their exact 5 requirements (and basically nothing else, so if you even squint at it it'll fall apart)

basilisk save us from the moronicity

[–] self@awful.systems 1 points 9 months ago (2 children)

this is the damage done by decades of our industry clapping at brainless “I built this on cloud X and saved so much time” blog posts that have like 20 lines of code to do some shit like a lazy hacker news clone, barely changed from the example code the cloud provider publishes, and the rest is just marketing and “here’s how you use npm to pull the project template” shit for the post’s target market of mediocre VPs trying to prove their company’s spending too much on engineering and sub-mediocre engineers trying to be mediocre VPs

like oh you don’t say, you had an easy time “building” an app when you wired together bespoke pieces of someone else’s API that were designed to implement that specific kind of app and don’t scale at all past example code? fucking Turing award material right here

[–] froztbyte@awful.systems 1 points 9 months ago

> by decades of our industry clapping at brainless

secondarily, the remarkable thing here is just how tiny a slice of industry this actually is (and yet also how profoundly impactful that vocal little segment can be)

e.g. this shit wouldn't fly in a bank (or at least, it wouldn't previously have), or somewhere that writes stuff that runs ports or planes or whatever.

but after a couple of decades of being worn down by excitable hyperproductive feature-factory fuckwads who are only too happy to shit out Yet Another Line Of Code... it's even impacting those areas at times

some days I hate my industry so fucking much

[–] bcdavid@hachyderm.io 1 points 9 months ago* (last edited 9 months ago)

@self @froztbyte Another big part of it is the obsession with the "young genius disruptor coder". Which has resulted in management buying into endless fads foisted on us by twenty-somethings, and then inevitably having to undo half the things they implemented 5 years later. Well, except for React, which apparently we can't get rid of but must forever keep reimplementing with whatever new new pattern will actually make it scale for real this time.

Too late! You already mean “moronarchy”

[–] bitofhope@awful.systems 1 points 9 months ago

If I wanted help with creating biological threats, I wouldn't ask an LLM. I'd ask someone with experience in the task, such as the parents of anyone in OpenAI's C-suite or board.

[–] Soyweiser@awful.systems 1 points 9 months ago* (last edited 9 months ago) (1 children)

I guess there are neither real biochemists (or whatever the relevant field is) nor well-read cybersecurity people (the kind who know a little more than just which algorithms are secure and why, mathematically) working at OpenAI, as this is a classic movie plot threat. LLMs could also teach you how to make nuclear weapons, but getting the materials is going to be the problem there.

(Also, I think there's a good reason we don't really see terrorists use biological weapons, or chemical weapons (with a few notable but not very effective exceptions): big bada boom is king.)

[–] BlueMonday1984@awful.systems 0 points 9 months ago (1 children)

Even if one had the means necessary to carry out a bioterrorist attack, simply bombing a place is much easier, faster and safer.

[–] Soyweiser@awful.systems 1 points 9 months ago* (last edited 9 months ago)

Yeah, and also, terrorists are not genocidal death cults. 'Terrorists skip getting a microbiology PhD by using ChatGPT to create a pandemic that kills untold numbers of beings' is pure fantasy, and it gets worse once you realize the total number of bioterrorism deaths ever isn't even on the level of a 9/11. People seem to forget that terrorist groups have goals, and they just use terror/violence as a method to reach those goals; sure, a few of them may die [chatgpt insert a gif of Bin Laden dressed as Lord Farquaad], but the goal of the terrorist organization is to keep existing to reach its political goals.

[–] saucerwizard@awful.systems 1 points 9 months ago

*cough* 'Barriers to Bioweapons' *cough*

[–] self@awful.systems 1 points 9 months ago* (last edited 9 months ago) (2 children)

from the orange site thread:

> Neural networks are not new, and they're just mathematical systems. LLMs don't think. At all. They're basically glorified autocorrect. What they're good for is generating a lot of natural-sounding text that fools people into thinking there's more going on than there really is.

> Obvious question: can Prolog do reasoning?

> If your definition of reasoning excludes Prolog, then... I'm not sure what to say!

this is a very specific sneer, but it’s a fucking head trip when you’ve got in-depth knowledge of whichever obscure shit the orange site’s fetishizing at the moment. I like Prolog a lot, and I know it pretty well. it’s intentionally very far from a generalized reasoning engine. in fact, the core inference algorithm and declarative subset of Prolog (aka Datalog) is equivalent to tuple relational calculus; that is, it’s no more expressive than a boring SQL database or an ECS game engine. Prolog itself doesn’t even have the solving power of something like a proof assistant (much less doing anything like thinking); it’s much closer to a dependent type system (which is why a few compilers implement Datalog solvers for type checking).
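
(if it helps to see what "combines elements of a database with a simple but useful logic solver" actually cashes out to, here's a rough Python sketch I'm adding of naive Datalog-style bottom-up evaluation — the parent/ancestor facts are made up, and this is not Prolog's real engine, which does SLD resolution with backtracking; it's purely an illustration:)

```python
# Toy illustration of Datalog-style bottom-up evaluation: facts are tuples in a
# set, and rules are re-applied until no new facts appear (a fixed point).
# NOT how a real Prolog engine works; just "database + simple logic solver".

facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
}

def apply_rules(known):
    derived = set(known)
    # ancestor(X, Y) :- parent(X, Y).
    for rel, x, y in known:
        if rel == "parent":
            derived.add(("ancestor", x, y))
    # ancestor(X, Z) :- ancestor(X, Y), parent(Y, Z).
    for rel1, x, y in known:
        for rel2, y2, z in known:
            if rel1 == "ancestor" and rel2 == "parent" and y == y2:
                derived.add(("ancestor", x, z))
    return derived

# Iterate to a fixed point. This always terminates because the set of facts
# derivable over a finite set of constants is finite.
while True:
    new_facts = apply_rules(facts)
    if new_facts == facts:
        break
    facts = new_facts

print(sorted(t for t in facts if t[0] == "ancestor"))
# [('ancestor', 'alice', 'bob'), ('ancestor', 'alice', 'carol'), ('ancestor', 'bob', 'carol')]
```

the whole thing is just "re-run some joins until nothing new appears" — a fixed point over a finite set of facts, not anything resembling thought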

in short, it’s fucking wild to see the same breathless shit from the 80s AI boom about Prolog somehow being an AI language with a bunch of emphasis on the AI, as if it were a fucking thinking program (instead of a cozy language that elegantly combines elements of a database with a simple but useful logic solver) revived and thoughtlessly applied simultaneously to both Prolog and GPT, without any pause to maybe think about how fucking stupid that is

[–] froztbyte@awful.systems 1 points 9 months ago

""" just as They have erased the pyramid building knowledge from our historic memory, They just don't want you to know that Prolog really solved all of this in the 80s. Google and OpenAI are just shitty copies - look how wasteful their approaches are! all of this javascript, and yet... barely a reasoned output among it all

told you kid, the AI Winter never stopped. don't buy into the hype """

[–] V0ldek@awful.systems 1 points 9 months ago* (last edited 9 months ago)

> [Datalog] is equivalent to tuple relational calculus

Well, Prolog also allows recursion, and is Turing complete, so it's not as rudimentary as you make it out to be.

But to anyone even passingly familiar with theoretical CS this is nonsense. Prolog is not "reasoning" in any deeper sense than C is "reasoning", or than your pocket calculator is "reasoning". It's reductive to the point of absurdity; if your definition of "reason" includes Prolog, then the Brainfuck compiler is AGI.

[–] sailor_sega_saturn@awful.systems 0 points 9 months ago (1 children)

> While none of the above results were statistically significant, [...] Overall, especially given the uncertainty here, our results indicate a clear and urgent need for more work in this domain.

Heh

[–] self@awful.systems 1 points 9 months ago (1 children)

I keep flashing back to that idiot who said they were employed as an AI researcher that came here a few months back to debate us. they were convinced multimodal LLMs would be the turning point into AGI — that is, when your bullshit text generation model can also do visual recognition. they linked a bunch of papers to try and sound smart and I looked at a couple and went “is that really it?” cause all of the results looked exactly like the section you quoted. we now have multimodal LLMs, and needless to say, nothing really came of it. I assume the idiot in question is still convinced AGI is right around the corner though.

[–] gerikson@awful.systems 0 points 9 months ago (1 children)

I caught a whiff of that stuff in the HN comments, along with something called "Solomonoff induction", which I'd never heard of, and the Wiki page for which has a huge-ass "low quality article" warning: https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference.

It does sound like the current AI hype has crested, so it's time to hype the next one, where all these models will be unified somehow and start thinking for themselves.

[–] titotal@awful.systems 0 points 9 months ago (1 children)

Solomonoff induction is a big rationalist buzzword. It's meant to be the platonic ideal of bayesian reasoning which if implemented would be the best deducer in the world and get everything right.

It would be cool if you could build this, but it's literally impossible. The induction method is provably incomputable.
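
(for concreteness — and hedging a bit, since notation differs between write-ups — the usual definition is the Solomonoff prior below: you sum over every program p that makes a fixed universal prefix machine U print something starting with the observed string x, and it's that sum over all programs that makes the thing uncomputable:)

```latex
% Solomonoff prior of a finite string x, with U a universal prefix machine:
% sum over all programs p whose output starts with x (written U(p) = x*),
% each weighted by its length |p|.
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```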

The hope is that if you build a shitty approximation to Solomonoff induction that "approaches" it, it will perform close to the perfect Solomonoff machine. Does this work? Not really.

My metaphor is that it's like coming to a river you want to cross, and being like "Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I'll be able to get across". You aren't Moses. Build a bridge.

[–] self@awful.systems 0 points 9 months ago (1 children)

it’s very worrying how crowded Wikipedia has been getting with computer pseudoscience shit, all of which has a distinct stench to it (it fucking sucks to dig into a seemingly novel CS approach and find out the article you’re reading is either marketing or the unpublishable fantasies of the deranged) but none of which seems to get pruned from the wiki, presumably because proving it’s bullshit needs specialist knowledge, and specialists are frequently outpaced by the motivated deranged folks who originate articles on topics like these

for Solomonoff induction specifically, the vast majority of the article very much feels like an attempt by rationalists to launder a pseudoscientific concept into the mainstream. the Turing machines section, the longest one in the article, reads like a D-quality technical writing paper. the citations are very sparse and not even in Wikipedia’s format, it waffles on forever about the basic definition of an algorithm and how inductive Turing machines are “better” because they can be used to implement algorithms (big whoop) followed by a bunch of extremely dense, nonsensical technobabble:

> Note that only simple inductive Turing machines have the same structure (but different functioning semantics of the output mode) as Turing machines. Other types of inductive Turing machines have an essentially more advanced structure due to the structured memory and more powerful instructions. Their utilization for inference and learning allows achieving higher efficiency and better reflects learning of people (Burgin and Klinger, 2004).

utter crank shit. I dug a bit deeper and found that the super-recursive algorithms article is from the same source (it’s the same rambling voice and improper citations), and it seems to go even further off the deep end.

[–] blakestacey@awful.systems 0 points 9 months ago* (last edited 9 months ago) (1 children)

Taking a look at Super-recursive algorithm, and wow...

> Examples of super-recursive algorithms include [...] evolutionary computers, which use DNA to produce the value of a function

This reads like early-1990s conference proceedings out of the Santa Fe Institute, as seen through bong water. (There's a very specific kind of weird, which I can best describe as "physicists have just discovered that the subject of information theory exists". Wolfram's A New Kind of Science was a late-arriving example of it.)

[–] V0ldek@awful.systems 0 points 9 months ago (1 children)

> In computability theory, super-recursive algorithms are a generalization of ordinary algorithms that are more powerful, that is, compute more than Turing machines[citation needed]

This is literally the first sentence of the article, and it has a citation needed.

You can tell it's crankery solely based on the fact that the "definition" section contains zero math. Compare it to the definition section of an actual Turing machine.

[–] blakestacey@awful.systems 0 points 9 months ago* (last edited 9 months ago) (2 children)

More from the "super-recursive algorithm" page:

> Traditional Turing machines with a write-only output tape cannot edit their previous outputs; generalized Turing machines, according to Jürgen Schmidhuber, can edit their output tape as well as their work tape.

... the Hell?

I'm not sure what that page is trying to say, but it sounds like someone got Turing machines confused with pushdown automata.

[–] V0ldek@awful.systems 1 points 9 months ago (1 children)

That's plainly false btw. The model of a Turing machine with a write-only output tape is fully equivalent to the one where you have a read-write output tape. You prove that as a student in elementary computation theory.

[–] aio@awful.systems 1 points 9 months ago* (last edited 9 months ago)

The article is very poorly written, but here's an explanation of what they're saying. An "inductive Turing machine" is a Turing machine which is allowed to run forever, but for each cell of the output tape there eventually comes a time after which it never modifies that cell again. We consider the machine's output to be the sequence of eventual limiting values of the cells. Such a machine is strictly more powerful than Turing machines in that it can compute more functions than just recursive ones. In fact it's an easy exercise to show that a function is computable by such a machine iff it is "limit computable", meaning it is the pointwise limit of a sequence of recursive functions. Limit computable functions have been well studied in mainstream computer science, whereas "inductive Turing machines" seem to mostly be used by people who want to have weird pointless arguments about the Church-Turing thesis.
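
(pinning down "limit computable" from the paragraph above, under the usual definition and give or take notation: f is limit computable iff some total computable g(x, s) eventually stops changing and settles on f(x) for every x:)

```latex
% Limit computability: there is a total computable g such that, for every x,
% the values g(x, s) are eventually constant and that constant is f(x).
f(x) = \lim_{s \to \infty} g(x, s)
\qquad\text{i.e.}\qquad
\forall x \; \exists s_0 \; \forall s \ge s_0 : \; g(x, s) = f(x)
```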

[–] self@awful.systems 1 points 9 months ago

it’s hard to determine exactly what the author’s talking about most of the time, but a lot of the special properties they claim for inductive Turing machines and super-recursive algorithms appear to be just ordinary von Neumann model shit? also, they seem to be rather taken with the idea that you can modify and extend a Turing machine, but that’s not magic — it’s how I was taught the theoretical foundations for a bunch of CS concepts, like nondeterministic Turing machines and their relationship to NP-complete problems

[–] swlabr@awful.systems 0 points 9 months ago

Raytheon: we’re developing a blueprint for evaluating the risk that a large laser-guided missile could aid in someone threatening biology with death

(Ok I know you need to pretend I’m an AI doomer for this sneer but whatever)