This is the most obvious outcome ever. How could anyone not see this coming, given the constant AI improvements?
Though good prompts can still make a big difference for now.
Take a page from the AI companies' book - just claim AI "learned" from the CUDA SDK and call it fair use.
I'm sure it can, but then how does one even get the appointment set up in the first place? That's a much harder part of the process (especially when starting from zero).
"Getting to a place" being a barrier may be a bit of a stretch (unless it's like really far and interferes with your work, etc.), but actually deciding to do therapy, what kind, finding a good therapist, and setting up the first appointment - that can be quite a massive barrier.
Can't you do it in the mobile app?
You don't need a Facebook account; a Meta account was available as an alternative. That's great, right? Much better!!!
Actually yes. The problem with needing a Facebook account was that it was part of an unrelated service (social network, messenger, etc.) that you couldn't separate. Meta accounts are separate accounts for VR only, much like the previous Oculus accounts.
For these kinds of generic questions, ChatGPT is great at giving you the common fluff you'd find in a random "10 ways to improve your career" YouTube video.
Which may still be useful advice, but you can probably already guess what it's going to say before hitting enter.
To be fair, the first iPhone did kinda suck in many ways, especially shortly after launch. Only the 2nd or 3rd generation had most of the basics in place.
As far as I know, that is mainly used when a better, bigger model generates training data for a more efficient smaller model, to bring it a bit closer to its level.
Have there been any cases of an already state-of-the-art model using this method to improve itself?
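To make it concrete, the usual setup looks roughly like this (a minimal PyTorch sketch of distillation with toy models and random data standing in for the real thing, not any particular lab's pipeline):

```python
# Minimal knowledge-distillation sketch: a larger "teacher" model's outputs
# serve as soft targets for a smaller "student" model.
# The architectures and data below are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution

x = torch.randn(64, 32)  # stand-in for real training inputs

with torch.no_grad():
    teacher_logits = teacher(x)  # the bigger model "generates the training data"

student_logits = student(x)
# KL divergence between the softened teacher and student distributions
loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature ** 2

optimizer.zero_grad()
loss.backward()
optimizer.step()
```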
"5pm today" can also get ambiguous if you're flying across time zones.
I can kind of see his point, but the things he is suggesting instead (biology, chemistry, finance) don't make sense for several reasons.
Besides the obvious "why couldn't AI just replace those people too" (even though it may take an extra few years), there is also the question of how many people can actually develop deep enough expertise to make meaningful contributions there - if we're talking about a massive increase in the number of people going into those fields.
Prompt engineering is about expressing your intent in a way that causes an LLM to come to the desired result (which right now sometimes requires weird phrases, etc.).
It will go away as soon as LLMs get good at inferring intent. It might not be a single model, and it may require some extra steps, but there is nothing uniquely "human" about writing prompts.
Future systems could, for example, start asking questions more often to clarify your intent, and then use the answers as input to the next stage of refining the prompt.
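Very roughly, something like this two-stage flow (`call_model` is a hypothetical stand-in for whichever LLM API you'd actually use; the structure is the point, not the specific calls):

```python
# Sketch of the "ask clarifying questions first" flow described above.
def call_model(prompt: str) -> str:
    # Hypothetical stub: plug in your actual LLM client here.
    raise NotImplementedError

def answer_with_clarification(user_request: str) -> str:
    # Stage 1: have the model surface what it is unsure about
    questions = call_model(
        "List up to 3 short questions you would need answered before you "
        f"could reliably fulfil this request:\n\n{user_request}"
    )
    answers = input(f"{questions}\n> ")  # the user fills in the gaps

    # Stage 2: fold the clarified intent back into the actual prompt
    refined_prompt = (
        f"Request: {user_request}\n"
        f"Clarifications from the user: {answers}\n"
        "Now fulfil the request."
    )
    return call_model(refined_prompt)
```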