Former Uber Autonomous Systems Chief Crashes in Tesla with FSD
In an ironic twist of fate, the man who once led Uber's self-driving car program was involved in a crash in his own Tesla while the Full Self-Driving (FSD) system was active. That is what happened to Raffi Krikorian, and his case highlights a problem far deeper than a single road traffic incident.
The story underscores the awkward transitional phase of today's semi-autonomous technologies: they can handle most vehicle control tasks, yet they still require a human to intervene instantly when something goes wrong. Cases like this show how precarious the balance between machine and driver can be.
Event Details
As Krikorian recounts in his essay for The Atlantic, he was driving his children through residential neighborhoods with FSD engaged. After months of trouble-free use of the system off highways, he had grown confident. Then the car suddenly began to turn, the steering wheel jerked sharply, and within seconds the Model X had crashed into a concrete wall.
No one was injured, but the shock prompted him to reflect. What struck him was not so much the accident itself as the familiar pattern of how responsibility gets distributed. He describes a phenomenon that researcher Madeleine Clare Elish calls the “moral crumple zone”: when a complex automated system fails, the human operator absorbs the blame, much as a car's crumple zone absorbs the force of an impact. Even if the system does most of the work, legal liability still rests with the driver.
“My Tesla tried to drive me into a lake today! FSD version 14.2.2.4 (2025.45.9.1) @Tesla @aelluswamy” — Daniel Milligan (@lilmill2000) on X, February 16, 2026
Questions of Responsibility and Psychology
Tesla has won lawsuits by relying on precisely this principle. The manufacturer has consistently warned that its driver assistance systems are not perfect and that the driver must be ready to take control at any moment. Yet the most interesting aspect of the essay is not legal but psychological and physiological.
From a psychological perspective, semi-autonomous systems create a dangerous “gray zone”: they work well enough that drivers stop actively supervising the car, but not well enough to eliminate the need for a human entirely. Research describes this as “vigilance decrement”: when people monitor a system that almost never fails, their attention wanes.
When attention wanes, physiology comes into play: even the most prepared person needs seconds to refocus, make a decision, and act. This pattern appears wherever humans oversee automation, from airplanes to AI chatbots.
The technology earns trust by working reliably in most situations, then counts on the human to step in when something unexpected fails. And when that rescue falls short, the responsibility usually lands on the human.
The Inevitability of the Problem and the Future
The hardest part is that this intermediate stage is likely unavoidable. Technologies need real-world testing to improve, which means coexisting with systems that can perform most tasks but still demand a driver ready to intervene at any moment.
The problem is that the better these systems work, the easier it is to forget that you still bear ultimate responsibility, right up until a crash report reminds you of it in all its harsh reality.
This case reveals a fundamental paradox of the current stage of autonomous-driving development: technical capability has outpaced the human ability to interact with it effectively at a psychophysiological level. The question facing society and legislators is not only who is to blame in a specific accident, but how to redesign human-machine collaboration so as to close the dangerous gap between the trust drivers place in these systems and their actual ability to supervise them. The future of autonomous transport will depend on finding an answer.

