this post was submitted on 06 Nov 2023
57 points (100.0% liked)

Technology

top 35 comments
[–] abhibeckert@beehaw.org 12 points 1 year ago* (last edited 1 year ago) (3 children)

ChatGPT 4 is estimated to use 700GB of “High Bandwidth Memory”.

… which will set you back about half a million dollars at current prices (which are high, because the manufacturers can’t keep up with demand). Or, you could just pay 20 bucks a month.

[–] lol3droflxp@kbin.social 3 points 1 year ago (2 children)
[–] conciselyverbose@kbin.social 9 points 1 year ago (2 children)

If it's actually High Bandwidth Memory, it's the VRAM they use for some video cards/SoCs.

It might be mostly the same components, but the high bandwidth part is important and harder to do. They get the much higher throughput by physically stacking the memory dies on top of each other, directly on the same package as the chip. The much shorter distance the signals have to travel (combined with a very wide interface with a lot of pins) does more than you can do with traditional RAM.

[–] GiveMemes@jlai.lu 3 points 1 year ago (1 children)

There's a company making analog chips that do the matrix calculations at a 15x or 60x (I forget which) more efficient rate than modern chips (by multiplying voltages, I believe). Even though one is only about 1/3 the processing power of a modern GPU, stack enough together and you're cooking. The matrix multiplication aspect is what we're using the VRAM for, right?

[–] conciselyverbose@kbin.social 3 points 1 year ago (1 children)

The actual model weights - the numbers telling it what to multiply by - are what the VRAM holds, to my knowledge.

VRAM isn't the low level "working" memory. You still have to pull structures from memory and into actual use. If you're working on pen and paper, a bookshelf might be system storage and your desk might be RAM/VRAM, but you still need to copy the numbers from your desk onto the piece of paper you're working on. That's lower level cache, registers, the tensor cores, etc.

If the chip you're discussing is a better calculator, that's useful, but you still need the big desk to hold the huge amount of information you need to reference at any given time.

My brain is mush for some reason today, so that might not make sense, but better matrix operations shouldn't remove the need to have access to a huge model.
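
To put the scale in concrete terms, here is a minimal numpy sketch (the layer sizes are hypothetical, roughly GPT-3-class): even for a single layer, the weight matrix that has to sit in VRAM dwarfs the activations actually being worked on, so a faster matrix "calculator" still needs fast access to the full set of weights.

```python
import numpy as np

# Minimal sketch: one linear layer of a transformer during inference.
# Layer sizes are hypothetical (roughly GPT-3-class), chosen only to show scale.
d_model, d_ff = 12288, 49152

W = np.ones((d_model, d_ff), dtype=np.float16)  # weights: must live in (V)RAM
x = np.ones(d_model, dtype=np.float16)          # activations: the "piece of paper"

y = x @ W  # the matrix multiply an analog accelerator would speed up

print(f"weights: {W.nbytes / 1e9:.2f} GB, activations: {x.nbytes / 1e3:.1f} KB")
# -> weights: 1.21 GB, activations: 24.6 KB, and that's a single layer
```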

[–] GiveMemes@jlai.lu 1 points 1 year ago

Thanks for the informative reply! Looks like I need to brush up on my hardware knowledge lol

[–] lol3droflxp@kbin.social 1 points 1 year ago (2 children)

I get that this is expensive. However, it should also work with RAM if you accept slower speeds I guess. The question is of course if it’s still usable then.

[–] averyminya@beehaw.org 4 points 1 year ago

Most current locally hosted software has some option to offload to RAM, CPU, and disk. VRAM is fastest, but RAM and CPU offloading lets you cut down to less than 4GB VRAM for certain applications, at plenty reasonable speed.
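
For what it's worth, here is a rough sketch of what that offloading looks like with the Hugging Face transformers + accelerate stack. The model id is just a placeholder and the memory caps are illustrative; the point is that device_map="auto" plus max_memory spreads the layers across GPU, CPU RAM, and disk.

```python
# Sketch of VRAM/RAM/disk offloading with transformers + accelerate.
# The model id is a placeholder, not a specific release.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-7b-model"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                       # let accelerate place the layers
    max_memory={0: "4GiB", "cpu": "24GiB"},  # cap GPU usage at ~4 GB VRAM
    offload_folder="offload",                # spill anything left over to disk
)

inputs = tokenizer("Why is HBM so expensive?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```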

[–] abhibeckert@beehaw.org 1 points 1 year ago* (last edited 1 year ago) (1 children)

GPT-4 is already kinda slow - it works best as a "conversational" tool where you ask follow up questions and clarify things that have already been said. That's painful when you have to wait 10 seconds for a response. I couldn't imagine it being useful if it took minutes.

[–] interolivary@beehaw.org 1 points 1 year ago

Having to wait 10 seconds for a response is "painful"?

[–] abhibeckert@beehaw.org 2 points 1 year ago* (last edited 1 year ago)

To put some numbers on it - regular RAM runs at tens of gigabytes per second (bytes, not bits). High Bandwidth Memory runs at several hundred gigabytes per second, sometimes over a terabyte per second (OpenAI is likely using the latter, and that memory isn't just expensive, it's also supply constrained, so prices are astronomically high right now).

You can buy HBM, and you can use it as your main system RAM, but it's painfully expensive. The bandwidth also scales roughly linearly with the amount of memory you buy: a 500GB setup is about 10x faster than a 50GB one, because the controller writes to all of the chips simultaneously (and then reads from all of them when you access the data back).

It's pretty standard on high end GPUs these days. Apple also uses very high-bandwidth unified memory across their computers (a Mac with 64GB of RAM can reach several hundred GB/s - not quite as fast as a high end GPU, but close). It's part of why Macs are so expensive (and also why the cheaper ones have very little RAM).
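
As a back-of-the-envelope sketch of why the memory type matters so much: if generating each token means streaming the full set of weights through the chip once, then memory bandwidth alone puts a ceiling on tokens per second. The bandwidth figures below are rough assumptions rather than measurements, and the 700GB figure is the estimate quoted at the top of the thread.

```python
# Back-of-the-envelope: tokens/second ceiling imposed by memory bandwidth,
# assuming the full weights are streamed once per generated token.
# Bandwidth numbers are rough assumptions, not measurements.
WEIGHTS_GB = 700  # estimate quoted earlier in the thread

bandwidths_gb_per_s = {
    "dual-channel DDR4 (~50 GB/s)": 50,
    "dual-channel DDR5 (~90 GB/s)": 90,
    "one HBM2e stack (~450 GB/s)": 450,
    "accelerator with stacked HBM (~3000 GB/s)": 3000,
}

for name, bw in bandwidths_gb_per_s.items():
    seconds_per_pass = WEIGHTS_GB / bw
    print(f"{name}: {seconds_per_pass:.2f} s per pass, "
          f"~{1 / seconds_per_pass:.2f} tokens/s ceiling")
```

On those assumptions, ordinary DDR4 tops out around one token every 14 seconds, while a multi-stack HBM accelerator gets into the several-tokens-per-second range, which is roughly the difference between unusable and conversational.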

[–] DavidGarcia@feddit.nl 3 points 1 year ago (1 children)

I highly doubt that; there are comparable models that are way smaller than that. No way they would waste that much money.

[–] abhibeckert@beehaw.org 3 points 1 year ago* (last edited 1 year ago)

There are comparable models to GPT-3.5 Turbo, which is faster and 30x cheaper than GPT-4 (if you pay OpenAI's regular API prices).

I suspect that's because GPT-4 needs 30x more memory than 3.5.

I'm not aware of any other model that performs as well as GPT-4. In fact I suspect even 3.5 Turbo is the second best model.

[–] LoafyLemon@kbin.social 1 points 1 year ago (1 children)
[–] SSUPII@sopuli.xyz 3 points 1 year ago (1 children)

It could work, but do you want to wait 15 minutes for an answer?

[–] Greg@lemmy.ca 2 points 1 year ago

Depends what you're using it for

[–] JackGreenEarth@lemm.ee 12 points 1 year ago (6 children)

I'd like this offline. Why are all the good chatbots proprietary online-only software?

[–] Woovie@artemis.camp 36 points 1 year ago (1 children)

They need insane amounts of compute

[–] JackGreenEarth@lemm.ee 4 points 1 year ago (1 children)

So? OpenAI aren't the only ones with large datacenters.

[–] Pyr_Pressure@lemmy.ca 1 points 1 year ago

They want your data

[–] Amaltheamannen@lemmy.ml 17 points 1 year ago (1 children)

Check out /r/localllama. Preferably you want an Nvidia GPU with >= 24 GB VRAM, but it also works with a CPU and loads of normal RAM, if you can wait a minute or two for a lengthy answer. Loads of models to choose from, many with no censorship at all. Won't be as good as ChatGPT with GPT-4, but many are close to GPT-3.
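
As a rough rule of thumb for what fits where (ballpark assumptions, ignoring context length): the weights take roughly the parameter count times bytes per weight, plus a small allowance for the KV cache and activations.

```python
# Ballpark sketch of the memory needed to hold a model's weights, plus a
# small allowance for KV cache and activations. Not exact numbers.
def approx_mem_gb(params_billion: float, bits_per_weight: int, overhead_gb: float = 2.0) -> float:
    return params_billion * bits_per_weight / 8 + overhead_gb

for params in (7, 13, 70):
    for bits in (16, 8, 4):
        print(f"{params:>2}B model @ {bits:>2}-bit: ~{approx_mem_gb(params, bits):5.1f} GB")
```

On those numbers a 4-bit 13B model fits comfortably in 24 GB of VRAM, while a 70B model spills over into system RAM, which is where the "wait a minute or two" comes from.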

[–] GammaGames@beehaw.org 2 points 1 year ago

Just played with it the other week; they have some models that run on less extreme hardware too: https://ollama.ai/
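
Once the Ollama server is running and a model has been pulled, it also exposes a small local HTTP API, so a sketch like this (standard library only, assuming the default localhost:11434 endpoint and a model named "llama2") is enough to script against it:

```python
# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes the default endpoint and that a model has already been pulled.
import json
import urllib.request

payload = {
    "model": "llama2",  # whichever model you pulled with `ollama pull`
    "prompt": "Explain HBM in one paragraph.",
    "stream": False,    # return a single JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```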

[–] librecat@lemmy.basedcount.com 7 points 1 year ago

If you have a high end GPU or lots of RAM, you can run some good quality LLMs offline. I recommend watching Matthew Berman for tutorials (there are some covering paid hosting as well).

[–] CanadaPlus@lemmy.sdf.org 3 points 1 year ago (1 children)

By design, because they don't want some basement guy launching skynet.

I have to agree, I trust a handful of big shops, some of which could actually be killed by ethics people against the wishes of investors, far more than the entire internet. It still might not be enough, but there is no applying the brakes whatsoever if anyone can take the next step.

[–] drwho@beehaw.org 3 points 1 year ago (1 children)

They don't want somebody toppling an oligarch, you mean.

[–] CanadaPlus@lemmy.sdf.org 3 points 1 year ago (1 children)

Which oligarch? I mean, yes there's definitely a degree of trusting "the right sort" there, but capitalism isn't a team sport and they're not a team. Honestly one of them might launch skynet anyway, if that's how the technology grows, but a few people are theoretically able to agree not to do something, while legions never can.

So do you think it should all be open sourced, then? And if so, are you a skeptic of "AI alignment", or even "AI safety"?

[–] drwho@beehaw.org 1 points 1 year ago

Any of them. They don't necessarily like each other or team up, but they are smart enough to understand that an upstart toppling one is a potential threat to all of them. All things being equal, keep the game board the way it is, without any unwelcome surprises coming in to kick things over.

I do think it should be open sourced, just so that those of us who aren't oligarchs have a chance to at least tread water a little longer. Those of us who aren't wealthy need all the help we can get during a time where our inherent disposability has been writ large as a warning.

Am I a skeptic of AI alignment? No. What I've observed is that AI systems tend to reflect their creators' goals and ethics quite well. Problem is, their goals and ethics are pretty much the same as the human race's for the last few centuries. Built in racism? No shit, it would have been strange if the construct hadn't acted that way.

Am I a skeptic of AI safety? Yes, I think the idea is complete bullshit. AI reflects the goals, prejudices, and ethics of its creators quite well, which if you look at human history is anything but safe and sound. To put it another way, if you've got the money and the chops to build an AI system, you're going to build it to make sure you don't lose what you have already and see if you can get hold of more of what you have (at first to recoup the cost, then just to get hold of more wealth). If you're the military you're going to want to make sure you're on equal footing with your enemies, both explicit and implicit at the very least (probably half of 'warfighting superiority' is propaganda; if you look at the breakdowns it's closer to equal footing with the usual margin of error).

[–] DarkThoughts@kbin.social 3 points 1 year ago (1 children)

I think KoboldAI runs locally, but like many current AI tools it's a pain in the ass to install, especially if you're on Linux, and especially if you're using AMD GPUs. I wonder if we'll see some specialized AI cards to slot into our PCIe slots or something - not a whole lot of necessary options to fill them nowadays anyway. I'd also be interested in local AI voice changers, maybe even packaged like a Roland VT-4 voice transformer that sits between your mic and whatever other audio interface you might be using, where you just throw the trained voice models onto the device and it does all the real-time computing for you.

I'm sure things will get more refined over the next few years though.

[–] off_brand_@beehaw.org 3 points 1 year ago (1 children)

It would actually be pretty cool to see TPUs you can just plug in. They come stock in a lot of Google products now, I think.

[–] drwho@beehaw.org 3 points 1 year ago (2 children)
[–] off_brand_@beehaw.org 2 points 1 year ago* (last edited 1 year ago)

Oh!! Awesome, thanks!

I've only watched recently without trying to build much myself for ML. I have the hardware but idk if I want to leave my bulky gaming machine on regularly just to run ML operations. Having a more dedicated piece of hardware to handle it makes the idea much more attractive to me.

Now I just have to learn everything. And then learn how to integrate a locally hosted TPU into the process.

[–] DarkThoughts@kbin.social 2 points 1 year ago (1 children)

Yeah, those seem cool, although probably still not something for my inept ass to use, but it's nice to see products like these starting to pop up. Some are also not too insanely priced either. Has anyone done benchmarks comparing them to regular consumer GPUs yet? I couldn't find anything.

[–] drwho@beehaw.org 1 points 1 year ago

I haven't yet. I have a two-GPU rig for training, but I haven't done any formal benchmarking, just messing around. I'll add it to my to-do list, though.

[–] DavidGarcia@feddit.nl 1 points 1 year ago

It won't take long until cheap special purpose chips hit the market. Then you'll have your offline model. There are already models that run on consumer hardware, but it's for enthusiasts at the moment and not the same quality (but almost). But if you want to spend thousands on a PC that can handle the largest models, go ahead.

[–] autotldr@lemmings.world 3 points 1 year ago

🤖 I'm a bot that provides automatic summaries for articles:

According to The Decoder, leaked screenshots and videos show a custom chatbot creator with many of the same features already available in ChatGPT using GPT-4, like web browsing and data analysis.

This morning, SEO tools developer Tibor Blaho shared a video of the UI for the feature in action, showing a GPT Builder option that lets users enter a prompt — an example reads “make a creative who helps generate visuals for new products.” — to create a chatbot.

Users can also upload files for a bespoke knowledgebase and toggle capabilities like web browsing and image generation.

Choi shared a screenshot that breaks down the Team plan’s features, like unlimited high-speed GPT-4 and four times longer context.

Recent ChatGPT beta features include live web results, image generation, and voice chat.

OpenAI says it will preview new tools at the developer conference on Monday, so we probably won’t have to wait long to find out if these rumors are accurate.


Saved 55% of original text.