This is the "devil is in the details" reason why autonomous driving isn't there yet.
Covering every one of those edge cases in the office/lab via simulation has got to be a nightmare, and there's no way to be complete before release.
~Anti~ ~Commercial-AI~ ~license~ ~(CC~ ~BY-NC-SA~ ~4.0)~
Exactly. Anyone who has ever coded anything professionally knows how intense a problem like this is. There's a reason that no one, not even Google, Microsoft, or Apple, has successfully done it. They may still be researching, but to think it's a simple problem that can be handwaved away with AI and models is incredibly naive.
AI is just probability: this picture is probably a dog, with over 90% confidence. That's great when you're classifying cats and dogs, but driving means making real-time, live determinations, and that's a completely different problem set. Now we need AI to predict, with a much higher probability, that there is a person in the street, or the street is dividing, or there is a construction zone, or the car ahead is starting to slow down, or... 10,000 other edge cases.
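To make the probability point concrete, here's a rough sketch of my own (not from the comment above): a toy confidence-threshold classifier, plus how per-decision error compounds over a single drive. The labels, threshold, and frame rate are invented for illustration only.

```python
# Toy illustration (hypothetical numbers): why "90% accurate" classification
# that's fine for cats vs. dogs isn't good enough for real-time driving.

# A made-up softmax output for one camera frame.
probs = {"pedestrian": 0.92, "shadow": 0.05, "traffic_cone": 0.03}

def classify(probs, threshold=0.9):
    """Return the top label if it clears the confidence threshold, else None."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return label if p >= threshold else None

print(classify(probs))  # "pedestrian" -- fine for a photo-tagging app

# Driving means thousands of such decisions per trip. Even 99.99% per-frame
# accuracy leaves a real chance of at least one miss over a 30-minute drive:
per_frame_accuracy = 0.9999
frames = 10 * 60 * 30                      # 10 decisions/sec for 30 minutes
p_no_miss = per_frame_accuracy ** frames
print(f"Chance of zero misses in 30 minutes: {p_no_miss:.1%}")  # ~16.5%
```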
I still don't understand how self-driving vehicles hit things. Job #1 is: don't hit things. If they can just do that, they will be much better than human drivers.
Usually they misidentify the boundaries of the road, or objects moving on and off the road.
~Anti~ ~Commercial-AI~ ~license~ ~(CC~ ~BY-NC-SA~ ~4.0)~
That's why Tesla moved away from hand-written code and instead uses neural nets trained on video of good human drivers. The point about edge cases still stands, but the advantage Tesla has over most other manufacturers is that there's already a ton of people using FSD and reporting those edge cases back to Tesla so they can be fixed.
There's also a possibility that hand-written code combined with radar/LiDAR is a dead end, and that once others realize this, Tesla will be so far ahead that the rest can't catch up.
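For what it's worth, the "learn from video of good human drivers" idea is essentially behavioral cloning: train a network to map camera frames to the controls the human actually applied. Below is a minimal sketch assuming PyTorch, with made-up shapes and data; it illustrates the general technique only, not Tesla's actual FSD pipeline.

```python
# Minimal behavioral-cloning sketch (illustrative only; the shapes, names, and
# data here are invented -- this is not how Tesla's FSD actually works).
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    """Maps a camera frame to [steering, throttle], imitating recorded humans."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # [steering_angle, throttle]

    def forward(self, frames):
        return self.head(self.backbone(frames))

policy = DrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One training step on fake data standing in for (frame, human_control) pairs.
frames = torch.randn(8, 3, 120, 160)   # batch of 8 camera frames
human_controls = torch.randn(8, 2)     # what the human driver actually did
loss = loss_fn(policy(frames), human_controls)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The catch is exactly the edge-case problem above: the model can only imitate situations that show up in the training footage, which is why the size of the fleet reporting rare situations matters so much.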