This is so unrealistic. Developers don't drink decaf.
Programmer Humor
Post funny things about programming here! (Or just rant about your favourite programming language.)
Rules:
- Posts must be relevant to programming, programmers, or computer science.
- No NSFW content.
- Jokes must be in good taste. No hate speech, bigotry, etc.
regardless of experience, that's probably what makes him a junior
I do, exclusively
Getting rid of caffeine (decaf still has a little) has been amazing for me.
How so? I more than likely take in too much caffeine lol
I'm not the person you're replying to but for me, I used to get random headaches and jitters and I feel more consistent now.
The problem is the withdrawal period can be hard for some. It was for me, but overall worth it in the end.
I was shocked and appalled by this blatant inaccuracy.
Agreed. If you need to calculate rectangles, ML is not the right tool. Now do the comparison for an image-identification program.
If anyone's looking for the magic dividing line: ML is a very inefficient way to do anything, but it doesn't require us to actually solve the problem, just to have a bunch of examples. For very hard but commonplace problems, that's still revolutionary.
I think the joke is that the Jr. Developer sits there looking at the screen, a picture of a cat appears, and the Jr. Developer types "cat" on the keyboard then presses enter. Boom, AI in action!
The truth behind the joke is that many companies selling "AI" have lots of humans doing tasks like this behind the scenes. "AI" is more likely to get VC money though, so it's "AI", I promise.
This is also how a lot (maybe most?) of the training data - that is, the examples - are made.
On the plus side, that's an entry-level white collar job in places like Nigeria where they're very hard to come by otherwise.
The correct tool for calculating the area of a rectangle is an elementary school kid who really wants that A.
Exactly. Explaining to a computer what a photo of a dog looks like is super hard. Every rule you can come up with has exceptions or edge cases. But if you show it millions of dog pictures and millions of not-dog pictures it can do a pretty decent job of figuring it out when given a new image it hasn't seen before.
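The learning-from-examples idea in that comment can be sketched in a few lines. This is a hypothetical toy (a nearest-centroid classifier on made-up feature vectors, not a real image model): nobody writes a "dog rule"; the program only ever sees labeled examples, yet it can classify a vector it has never seen.

```python
# Toy sketch: classify by "which labeled examples is this closest to?"
# Feature vectors are invented for illustration, e.g.
# [ear_floppiness, snout_length] -- a real image model would learn
# features from pixels instead.

def train(examples):
    """examples: list of (features, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is nearest to `features`."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=sq_dist)

examples = [
    ([0.9, 0.8], "dog"), ([0.8, 0.7], "dog"),
    ([0.1, 0.2], "not-dog"), ([0.2, 0.1], "not-dog"),
]
model = train(examples)
print(predict(model, [0.85, 0.75]))  # an unseen "doggy" vector -> dog
```

The point survives the simplification: add more labeled examples and the boundaries improve, with no hand-written rules anywhere.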
Another problem is people using LLMs as if they were some form of general ML.
What ChatGPT actually comes up with in about 3 mins.
The comic is about using a machine learning algorithm instead of a hand-coded algorithm, not about using ChatGPT to write a trivial program that no doubt exists a thousand times in the data it was trained on.
The strengths of Machine Learning are in the extremely complex programs.
Programs no junior dev would be able to accomplish.
So if the post can misrepresent the issue, then the commenter can do so too.
Lol, no. ML is not capable of writing extremely complex code.
It's basically like having a bunch of junior devs cranking out code that they don't really understand.
ML for coding is only really good at providing basic bitch code that is more time intensive than complex. And even that you have to check for hallucinations.
To reiterate what the parent comment of the one you replied to said, this isn't about ChatGPT generating code; it's about using ML to create a nondeterministic algorithm. That's why in the comic the answer is only very close to 12 and not 12 exactly.
Yes that is what they are good at. But not as good as a deterministic algorithm that can do the same thing. You use machine learning when the problem is too complex to solve deterministically, and an approximate result is acceptable.
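The exact-versus-approximate contrast in that comment (and the comic's "close to 12 but not 12") can be shown with a hypothetical toy: a deterministic area function next to a "learned" one that just answers with the area of its nearest training example. The training pairs are invented for illustration.

```python
# Deterministic: the exact answer, every time.
def area_exact(w, h):
    return w * h

# "Learned" (toy nearest-neighbour regression, not a real ML library):
# answer with the area of the closest example seen during training.
def area_learned(examples, w, h):
    def sq_dist(pair):
        (ew, eh), _ = pair
        return (ew - w) ** 2 + (eh - h) ** 2
    return min(examples, key=sq_dist)[1]

# Invented training data: (width, height) -> area pairs.
examples = [((2.9, 4.1), 11.89), ((5.0, 5.0), 25.0), ((1.0, 2.0), 2.0)]

print(area_exact(3, 4))              # 12
print(area_learned(examples, 3, 4))  # 11.89 -- "very close to 12"
```

When a closed-form solution exists, it wins outright; the approximate answer is only acceptable when no such solution is practical.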
I think the exact opposite: ML is good for automating away the trivial, repetitive tasks that take time away from development, but it has a harder time producing a coherent, maintainable architecture of interconnected modules.
It is also good for data analysis, for example when the dynamics of a system are complex but you have a lot of data. In that context, the algorithm doesn't have to infer a model that matches reality completely, just one that is close enough for the region of interest.
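The "close enough for the region of interest" idea above can be made concrete with a small sketch (assumptions: the "complex dynamics" are stood in for by `exp(x)`, and the model is an ordinary least-squares line — both chosen purely for illustration). Inside the fitted region the simple model is fine; far outside it, it's badly wrong.

```python
import math

# Samples from the "true" dynamics, taken only in the region of
# interest x in [0.8, 1.2].
xs = [0.8 + 0.05 * i for i in range(9)]
ys = [math.exp(x) for x in xs]

# Ordinary least squares for the linear model y = a*x + b.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Small error where we fitted...
print(abs((a * 1.0 + b) - math.exp(1.0)))
# ...large error far outside the region of interest.
print(abs((a * 3.0 + b) - math.exp(3.0)))
```

That's the trade the comment describes: you don't need a globally correct model of the system, just one that tracks the data where you actually operate.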
I strongly disagree. ML is perfect for small bullshit like "What's the area of a rectangle" - it falls on its face when asked:
Can we build a website for our security-paranoid client who wants the server to completely refuse to communicate with users that aren't authenticated as employees... Oh, and our CEO requested a password recovery option on the login prompt.
I got interested and asked ChatGPT.
It gave a middle-management answer.
Guess we know who'll be the first to go.
Ahh the future of dev. Having to compete with AI and LLMs, while also being forced to hastily build apps that use those things, until those things can build the app themselves.
Let's invent a thing inventor, said the thing inventor inventor after being invented by a thing inventor.
And also, as a developer, you have to deal with the way Star Trek just isn't as good as it used to be.
Because you're all fucking nerds.
(Me too tho)
The sad thing is that no amount of mocking the current state of ML today will prevent it from taking all of our jobs tomorrow. Yes, there will be a phase where programmers like myself, who refuse to use LLMs as tools to produce work faster, will be pushed out by those who do. However, I console myself with the belief that this phase will last not even a full generation: even those collaborative devs will find themselves made redundant, and we'll reach the same end without me having to eliminate the one enjoyable part of my job. I do not want to be reduced to being only a debugger for something else's code.
Thing is, at the point AI becomes self-improving, the last bastion of human-led development will fall.
I guess mocking and laughing now is about all we can do.
at the point AI becomes self-improving
This is not a foregone conclusion. Machines have mostly always been stronger and faster than humans, because humans are generally pretty weak and slow. Our strength is adaptability.
As anyone with a computer knows, if one tiny thing goes wrong it messes up everything. They are not adaptable to change. Most jobs require people to be adaptable to tiny changes in their routine every day. That's why you still can't replace accountants with spreadsheets, even though they've existed in some form for 50 years.
It's just a tool. If you don't want to use it, that's kinda weird. You aren't just "debugging" things. You use it like a junior developer who can do basic things.
Well, we could end capitalism, and demand that AI be applied to the betterment of humanity, rather than to increasing profits, enter a post-scarcity future, and then do whatever we want with our lives, rather than selling our time by the hour.
Did you just post your open ai api key on the internet?
Nah, this is a meme post about using chatgpt to check even numbers instead of simple code.
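For anyone who hasn't seen that meme: the "simple code" side of the joke is literally one expression, no network round-trip or API key required. (The API-call alternative is omitted here on purpose — that's the whole punchline.)

```python
# The entire "algorithm" the meme replaces with a ChatGPT call:
def is_even(n: int) -> bool:
    return n % 2 == 0

print(is_even(4), is_even(7))  # True False
```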
Same joke as the OP, different format.
Let's put this free OpenAI API key here in ASCII format, just for the sake of history and search-engine health... 😂
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
But seriously, I hope they have already changed it.
After a small test, it doesn't work.
I can't wait for chatgpt sort
sort this d (gestures rudely at the concept of llms)
Well, if training time is included, then why is it not included for the developer? From the first days of his life?
I see no mention of Hitler nor abusive language, are you sure that's a real AI? /s :-P
To be fair, the human had how many more years of training than the AI to be fit to even attempt to solve this problem.
And hundreds of thousands of years of evolution pre-training the base model that their experience was layered on top of.
I don't know why, but "mechanical turk" keeps cropping up when I think about this sort of stuff.
Yea, but does the AI ask me why “x” doesn’t work as a multiplication operator 14 times while complaining about how this would be easier in Rust?
I'm hoping even a junior dev has had more than 60 hours of training.