this post was submitted on 31 Jul 2023
101 points (88.0% liked)
Technology
you are viewing a single comment's thread
AI is a highly advanced tool. Prompt engineers are the new white-collar force multiplier. All the stupid articles are down to venture capital's efforts to establish a monopoly on proprietary AI through manipulative propaganda. AI on its own is unreliable and dumb. It has no long-term persistent memory, and it is nowhere near an Artificial General Intelligence. A large language model is just a massive statistical analysis of subject categorization plus a statistical probability of what word should come next. Its only "memory" is what happens in a conversation as it is happening. That is not "learned" information; it simply shifts the subject and word probabilities as they happen. Larger models just have more subjects and a larger lexicon, which often includes several human languages.
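To make the "statistical probability of what word should come next" point concrete, here is a toy sketch. This hypothetical bigram model just counts word pairs in a tiny corpus; real LLMs use neural networks over tokens, but the underlying idea of predicting the next word from observed statistics is the same.

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then turn the counts into probabilities. A sketch of the principle,
# not how a real transformer is implemented.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # 'cat' is the most likely word after 'the'
```

Ask it what follows "the" and it answers with probabilities derived purely from counting, which is the sense in which a model's "knowledge" is statistics rather than understanding.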
I swear, at this point The Terminator was the greatest propaganda film for billionaire AI dominion. The fear mongering is a joke. Go download Oobabooga. Then go to the GitHub of AI, a website called Hugging Face. Find the uncensored Llama 2 7B model by TheBloke and get the one that has GGML in the name. This will run on almost any modern computer. GGML means the model can run with split operations: it can run on more than one GPU, and it can also run on a CPU or a combined CPU and GPU. All you really need is at least 16GB of system memory, and if you only have 16GB, use the 4-bit version. Uncensored means you can talk about anything. Ask it the dumb stuff you're scared of, like this fear mongering bullshit; these systems are not that bright. You're not going to take over the world with this rat brain. Ask it about that simple command line problem you spent the last hour sorting out, or what's wrong with your spreadsheet function, or how some other example works, or what the F some regex or sed command does. This is where LLMs are freaking awesome. If you barely explore this new tech, the news sounds like a bunch of parroting idiots. Which it is.
This.
Anyone who looks into this tech properly, beyond sensationalist headlines made to draw readers or outrageous claims made to attract investors, sees this emperor for the naked illusion that it is.
It's a great tool for what it's good at (generating convincing text outputs) and completely useless at everything else.
The current risk to jobs is owners and managers with little to no knowledge trying to actually replace their employees with LLMs. Those companies are setting themselves up for amazing and spectacular failure at this point in the game.
It's impossible to say how this will play out in the long run, but currently it's interesting as a research tool, a time saver when writing texts, and so on.
What happens when clever people integrate these models with other systems in intelligent and responsible ways is going to be interesting to follow.
Currently the most important thing to emphasize about AI is that a lot of the coverage and general writing on the subject is filled with misconceptions about how the technology works and what it is capable of. It's full-on hype-cycle season.
I'm currently deep-diving into AI, and specifically LLMs, to strengthen my ability to give responsible advice about it and to explain it in an understandable manner to our bosses and decision makers at work.
There are lots of great deep dives and explainers out there already, and a few manage to get the fundamentals right without going completely bonkers technical. But the (and I hate using this word, as it's being abused way too much) mainstream media is not a source with even a grain of proper comprehension when it comes to what this technology is (and, perhaps even more important, isn't).
This is the video I currently recommend for a good start on the subject of LLMs: https://youtu.be/-4Oso9-9KTQ
It is general enough for most people to follow but detailed enough to burst the biggest illusions on the subject.
This is one of the best takes I've seen in a while. LLMs seem like the "new Google" in the sense that searching for information once super-charged productivity. Now, instead of using some Google-fu and having to wade through links and read, someone can just ask an LLM a direct question and get a reasonably good answer that may or may not need some work to fix up.
In fact, I've started using GPT to summarize large reports and emails and to generate the base code for projects so that I just have to tweak it. It has turned work that would take hours or days into an hour or two. Honestly, GPT and Llama have made me a much more productive person. Understanding how to use LLMs to one's advantage is going to be a skill going forward, just like effectively using a search engine is a skill now. It's not a skill that will likely be appreciated (much like effective googling isn't), but it will set workers apart.
My favorite use has been exploring models and obscure software that is challenging to get working. When I hit the inevitable failure, I just paste the entire error message into a WizardLM 30B model and usually get quite helpful insights. I've gotten much further into compilation and finished installing projects I never would have managed otherwise. It has expanded my bashrc and my command-line knowledge substantially. Sed, awk, and regex are easy now. I can practically get an AI to exit vim for me.
I doubled the system RAM in my machine 30 minutes ago and am a quarter of the way into downloading a 70B Llama 2 instruct GGML model. If the jump from 13B to 30B is any indication, this should be around 95%-98% accurate even with obscure technical questions.
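The RAM doubling makes sense on a back-of-the-envelope basis: quantized weights take roughly parameters times bits-per-weight divided by 8 bytes, plus working overhead. The 20% overhead factor below is my own rough assumption, not a spec.

```python
# Rough RAM estimate for quantized model weights.
# weight bytes = params * bits_per_weight / 8; overhead is assumed.
def approx_ram_gb(params_billion, bits_per_weight, overhead=1.2):
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9  # decimal GB

for params in (7, 13, 30, 70):
    print(f"{params:>3}B @ 4-bit ~ {approx_ram_gb(params, 4):.1f} GB")
```

By this estimate a 7B model at 4-bit fits comfortably in 16GB, while a 70B model needs on the order of 40GB, which is why the jump to 70B forces a memory upgrade.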
I beat a Waymo self-driving car off the stop sign yesterday instead of respecting its right of way, and the passenger in my car remarked on it. I told them it's because I know and don't fear AI: it's going to be protective of itself over taking the intersection, so why not take advantage of that fact. I said something like, when Terminator happens and everyone is running around panicking, I'll be that dude walking around pushing them over because I'm wearing a mask that confuses facial recognition.
Aside from sounding like a tough-guy story about how cool you are, your story is ridiculous because you assume AI is going to stay looking like the current generation of LLMs, with zero advances or new directions to explore.