this post was submitted on 22 Aug 2024
480 points (95.3% liked)
Technology
I've heard a lot of programmers say it.
Edit: why is everyone downvoting me lol. I'm not agreeing with them, but I've seen and met a lot who do.
They're falling for a hype train then.
I work in the industry, alongside several thousand peers who also code every day. I lead a team of extremely talented, tenured engineers across the company, taking on some of the most difficult challenges it can offer us. I've been coding and working in tech for over 25 years.
The people who say this either don't understand how AI (LLMs, in this case) works, don't understand programming, or are easily swayed by the hype train.
We're so far off from this existing with the current tech that it's not worth seriously discussing.
There are scripts and snippets of code that VS Code's or VS2022's LLM plugins can help with or bring up. But 9 times out of 10 there are multiple bugs in them.
If you're doing anything semi-complex, it's a crapshoot whether it gets close at all.
It's not bad for generating pseudo-code or templates, but it's designed to generate code that looks right, not code that is right; and there's a huge difference.
AI-generated code is exceedingly buggy, and if you don't understand what it's trying to do, it's impossible to debug, because what it generates is trash-tier code quality.
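A minimal sketch of the "looks right vs. is right" gap (all names here are hypothetical, not from the thread): binary search is a classic case where a plausible-looking generated draft hides a subtle bug, such as computing the midpoint with `(lo + hi) / 2`, which can overflow for very large ranges. The overflow-safe form is below.

```c
#include <stddef.h>

/* Find `key` in sorted array `a` of length `n`; return its index or -1.
 * A generated draft often writes `mid = (lo + hi) / 2`, which looks
 * right but can overflow; `lo + (hi - lo) / 2` cannot. */
static int find(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;                /* half-open interval [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;  /* overflow-safe midpoint */
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return (lo < n && a[lo] == key) ? (int)lo : -1;
}
```

Both versions pass casual tests on small inputs, which is exactly why "looks right" is so dangerous: the bug only shows up at scale.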
The tech may get there eventually, but there's no way I trust it, nor does anyone I work with trust it or consider it a serious threat, or even a resource beyond the novelty.
It's useful for non-engineers to get an idea of what they're trying to do, but it can just as easily send them down a bad path.
People use visual environments to draw systems and then generate code for specific controllers; that's common in control systems design and such.
In that sense there are already situations where they don't write code directly.
But this has nothing to do with LLMs.
It's just that for designing whole systems in one place, visual block-based environments might be more optimal.
And often you still have actual developers reimplementing this shit, because EE majors don't understand that dereferencing null pointers is bad.
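A hedged sketch of the kind of fix-up described above (the function name and scenario are hypothetical): auto-generated or controller-style code often assumes a pointer is valid, while hand-written C has to guard the dereference, since dereferencing a null pointer is undefined behavior.

```c
#include <stddef.h>
#include <string.h>

/* Return the length of a sensor label, tolerating a missing one.
 * Calling strlen(NULL) dereferences a null pointer, which is
 * undefined behavior in C; the guard makes the failure mode safe. */
static size_t label_length(const char *label)
{
    if (label == NULL)   /* fail safe instead of crashing */
        return 0;
    return strlen(label);
}
```

The point isn't that the check is clever; it's that someone has to know it's needed at all.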