“Translation: all the times Tesla has vowed that all of its vehicles would soon be capable of fully driving themselves may have been a convenient act of salesmanship that ultimately turned out not to be true.”
Another way to say that is that Tesla scammed its customers, since, you know, everyone saw this coming…
Since the first time I heard about FSD I’ve been wondering why Tesla (or others) doesn’t set up a system where drivers opt in (not opted in by default) to sending anonymized driving data to help train the model. The vast majority of the time it’s probably capturing OK driving, or at least no accidents. But the shitty driving and the accidents are also useful, as data about what to avoid.
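To make the idea concrete, here’s a minimal sketch of what such an opt-in collection step might look like. Everything here is hypothetical (the field names, the hashing scheme, the `collect_clip` helper); it’s not a description of Tesla’s actual system, just the shape of the proposal: default off, identity stripped, bad-driving clips kept as negative examples.

```python
import hashlib
from typing import Optional

def anonymize_vin(vin: str, salt: str = "fleet-salt") -> str:
    # One-way hash of the VIN so an uploaded clip can't be traced
    # back to a specific owner. Salt and truncation are arbitrary here.
    return hashlib.sha256((salt + vin).encode()).hexdigest()[:16]

def collect_clip(clip: dict, opted_in: bool) -> Optional[dict]:
    # Respect the opt-in flag: drivers who haven't consented
    # contribute nothing, rather than being enrolled by default.
    if not opted_in:
        return None
    return {
        "vehicle_id": anonymize_vin(clip["vin"]),
        "telemetry": clip["telemetry"],
        # Keep hard-braking / disengagement clips too: bad driving
        # is useful as negative training data, per the point above.
        "label_hint": "event" if clip.get("hard_event") else "routine",
    }
```

With opt-in false the clip is simply dropped; with it true, the upload carries a hashed ID and a hint about whether the clip was routine or an incident, so human labelers (or an automated triage step) can prioritize the interesting cases.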
Maybe they’re already doing this? But then I wonder why their FSD seems to be getting shittier rather than improving. One would think more driving data, with both good and bad examples, would only help.
That’s what they do, except for the opt-in part.
I would be shocked to discover they’re not already doing this.
https://www.reuters.com/technology/tesla-workers-shared-sensitive-images-recorded-by-customer-cars-2023-04-06/
Not enough paid humans sorting out which data shows good behaviour and which shows bad behaviour. Not saying that’s what is happening, since we don’t even know whether the data exists, but that would be the weakness in the plan, at least as it would be run if instituted by Elon.
Good point.
Yeah. Current-generation learning models can do impressive things in the hands of a skilled engineer, but Elon is leading a round of class warfare against skilled engineers right now.
Shareholders need to decide which they really want to bet on to win.
That’s exactly how they train the model, but every Tesla is opted in, with, to my knowledge, no option to opt out.