Tesla’s FSD Beta Driving Modes Raise Interesting Ethical Issues We Should Talk About


Have you ever rolled through a stop sign? Of course you have. I think I did it this morning, actually. If you're a driver who claims you haven't, then hopefully being a liar works for you, because I bet you have. As far as illegal things go, rolling through a stop sign is about as minor as it gets, though it is technically illegal, and I guess for a decent reason, since stop signs are usually placed where coming to a full stop is, at a minimum, a pretty good idea. So with that in mind, should we be programming an autonomous car's AI to commit this admittedly minor crime? Tesla seems to have already decided that it's fine.

The current version of Tesla's Full Self-Driving Beta (FSD) software contains a feature known as "FSD Profiles," which first appeared in version 10.3, released in October 2021 (it was quickly pulled due to problems and came back shortly afterward as 10.3.1).

As noted in other articles on these FSD updates, an important addition has been the introduction of three "profiles" for FSD behavior: Chill, Average, and Assertive.

In the Average and Assertive modes, the small description text gives a sense of how the software will make the car behave, and it reveals some telling details:

Image: Tesla, JDT

In these two modes, the description indicates that the car may perform a rolling stop. Let's be absolutely clear about what that is: yes, it's incredibly minor, maybe even insignificant, but it's the car telling you that its programming may cause it to decide to commit an illegal act.

The reason I'm making such a big deal of this is that we're still early enough in humanity's development of what we hope will someday be fully self-driving cars that we can still step back, look at what we're choosing to do, and ask ourselves whether this is really the path we want to take.

Is it? I’m honestly not sure.

The specific act – rolling through a stop sign – is less important than the larger implications here. Those implications come down to the fact that we have traffic laws on the books that are regularly broken, because we're human beings and we feel the overall driving experience can be improved in some way or another by willfully ignoring some of those laws.

Almost all of us speed sometimes, too. And while you can speed with Tesla's driver-assist systems (and others, of course) as well, it has always been the human's decision to do so. If you set the upper speed limit of your cruise control or Level 2 semi-automated driving system to 153 km/h, that's what the car will do, but it was your choice, not the car's.

This situation is different because the driver is not part of the decision-making process that could cause the Tesla to roll through a stop sign and break the law. If a cop sees it happen and pulls you over, who is to blame?

Is it the driver's fault, since they were informed that the car might commit such a crime when they selected the driving mode? Or should the cop send the ticket to Tesla HQ, since it was their software that deliberately decided to run the stop sign?

Do we want our eventual self-driving cars to be willing to break laws? Does that mean we need to take a realistic look at our traffic laws and maybe tailor them a little better to real-world behaviors and situations? Should we just legalize rolling stops at certain intersections and under certain conditions, and maybe have more flexible speed laws?

Or should we just program our cars to obey the law? Isn't part of what makes the possibility of computer-controlled driving so appealing the idea that computers can always do the safe thing, and will never be tempted to break laws, run stop signs, or speed, because they aren't burdened with our imperfect, impulsive, excitable, hungry human brains?

It may seem minor, but the line of thinking shown in these driving profiles is conceptually no different than if, say, Tesla managed to develop and sell its humanoid robot (stop laughing, it's a thought experiment), and it included a shoplifting mode that would allow it to attempt to steal items if it thought it could get away with it.

Image: Tesla, JDT

Of course, that doesn't exist, but it's not much different from a setting that lets the car decide whether to break a traffic law.

We need to think about all of this now and decide what we want for our future. Do we want total adherence to the letter of the law? Do we want some exceptions? The ability to override when necessary and allow illegal behavior? To hand the decision to the machine, possibly with a sliding scale of acceptable parameters?

Honestly, I'm not sure exactly how this should play out. What I am sure of is that we, collectively, as a society, need to take the time and do the admittedly difficult work of deciding on a standard set of rules before we just start winging it and seeing how far we can push things.

Because, remember, we’re humans, and part of that deal means we’re always going to push it, maybe too far.
