As someone whose only experience comes from reading about it, I believe four issues need to be addressed for self-driving to work:
1. Technology: It will take more than just adding LIDAR sensors and/or cameras. I do not think we can train models on every scenario that can arise across different lighting and weather conditions, with human-driven cars sharing the road with self-driving ones; and with different car companies using different models and data, a real self-driving car seems impossible anytime soon. Here are a couple of examples:
https://techcrunch.com/2021/01/16/startups-look-beyond-lidar-for-autonomous-vehicle-perception/ or https://sifted.eu/articles/wayve-autonomous-driving.
We will also have to include a rule-based engine to make it work, as discussed in https://www.oreilly.com/radar/podcast/the-technology-behind-self-driving-vehicles/. I do not know what combination of these approaches will eventually work; a toy sketch of the hybrid idea follows this item.
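To make the hybrid idea concrete, here is a minimal Python sketch of a rule-based safety layer bounding a learned driving policy. Every name, rule, and threshold here is invented for illustration; no real self-driving stack works this simply.

```python
# Hypothetical sketch: a learned model proposes an action, and a small
# rule-based safety layer can veto or clamp it. All names and thresholds
# are made up for illustration.

from dataclasses import dataclass

@dataclass
class Perception:
    obstacle_distance_m: float   # distance to nearest obstacle ahead
    visibility_m: float          # estimated visibility (fog, night, rain)
    speed_mps: float             # current speed

def learned_policy(p: Perception) -> float:
    """Stand-in for a trained model: returns a proposed target speed in m/s."""
    # In reality this would be a neural network; here, a toy heuristic.
    return min(30.0, p.obstacle_distance_m / 2.0)

def rule_based_override(p: Perception, proposed_speed: float) -> float:
    """Hard-coded safety rules that bound whatever the model proposes."""
    # Rule 1: never outdrive your visibility (toy stopping-distance rule).
    max_for_visibility = p.visibility_m / 3.0
    # Rule 2: hard stop if an obstacle is dangerously close.
    if p.obstacle_distance_m < 5.0:
        return 0.0
    return min(proposed_speed, max_for_visibility)

def plan_speed(p: Perception) -> float:
    return rule_based_override(p, learned_policy(p))

if __name__ == "__main__":
    foggy_night = Perception(obstacle_distance_m=40.0, visibility_m=30.0, speed_mps=20.0)
    print(plan_speed(foggy_night))  # rules cap the model's 20.0 m/s proposal at 10.0 m/s
```

The appeal of this structure is that the learned component can be arbitrarily complex while the rules stay small, auditable, and enforceable.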
2. Ethical: I do not know who will make the calls on some of the dilemmas highlighted in 21 Lessons for the 21st Century (https://www.amazon.com/21-Lessons-for-21st-Century-audiobook/dp/B07DHSPZT2/ref=sr_1_1?crid=93D4MEEHJK0A&keywords=21+lessons+for+the+21st+century+yuval+noah+harari&qid=1692111972&sprefix=21+less%2Caps%2C74&sr=8-1). For example, suppose two kids chasing a ball jump right in front of a self-driving car. Based on its lightning calculations, the car's algorithm concludes that the only way to avoid hitting the two kids is to swerve into the opposite lane and risk colliding with an oncoming truck. The algorithm calculates that in such a case there is a 70 per cent chance that the owner of the car, who is fast asleep in the back seat, would be killed. What should the algorithm do?

If you program a self-driving car to stop and help strangers in distress, it will do so come hell or high water (unless, of course, you insert an exception clause for infernal or high-water scenarios). Similarly, if your self-driving car is programmed to swerve to the opposite lane in order to save the two kids in its path, you can bet your life this is exactly what it will do. Which means that when designing their self-driving cars, Toyota or Tesla will be transforming a theoretical problem in the philosophy of ethics into a practical problem of engineering. Tesla will produce two models of the self-driving car: the Tesla Altruist and the Tesla Egoist. In an emergency, the Altruist sacrifices its owner to the greater good, whereas the Egoist does everything in its power to save its owner, even if it means killing the two kids. Customers will then be able to buy the car that best fits their favorite philosophical view. If more people buy the Tesla Egoist, you won't be able to blame Tesla for that. After all, the customer is always right.

So maybe the state should intervene to regulate the market and lay down an ethical code binding all self-driving cars? Some lawmakers will doubtless be thrilled by the opportunity to finally make laws that are always followed to the letter. Other lawmakers may be alarmed by such unprecedented and totalitarian responsibility. After all, throughout history the limitations of law enforcement provided a welcome check on the biases, mistakes, and excesses of lawmakers. It was an extremely lucky thing that laws against homosexuality and against blasphemy were only partially enforced. Do we really want a system in which the decisions of fallible politicians become as inexorable as gravity? The sketch after this paragraph shows how such an ethical setting literally becomes a line of code.
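Harari's point that ethics turns into engineering can be made concrete in a few lines. The following is a toy Python sketch of the Altruist/Egoist setting from the passage above; the 0.70 figure comes from the thought experiment, while the 0.95 figure and everything else are invented for illustration.

```python
# Toy illustration of Harari's Altruist vs. Egoist setting: the same
# emergency, two hard-coded value judgments. The 70% owner-fatality
# figure is from the book's example; the rest is assumed.

def choose_action(mode: str) -> str:
    p_kids_killed_if_stay = 0.95     # assumed: staying in lane almost surely hits the kids
    p_owner_killed_if_swerve = 0.70  # from the book's thought experiment

    if mode == "egoist":
        # Minimize risk to the owner, whatever the cost to others.
        return "stay" if p_owner_killed_if_swerve > 0.0 else "swerve"
    if mode == "altruist":
        # Minimize total expected deaths: two kids vs. one owner.
        expected_deaths_stay = 2 * p_kids_killed_if_stay
        expected_deaths_swerve = 1 * p_owner_killed_if_swerve
        return "swerve" if expected_deaths_swerve < expected_deaths_stay else "stay"
    raise ValueError("unknown mode")

print(choose_action("egoist"))    # stay
print(choose_action("altruist"))  # swerve (0.70 expected deaths < 1.90)
```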
3. Human Behavior: Humans will be driving alongside self-driving cars for the next few decades, and we have all seen how some people drive. Unless we remove almost all human drivers from the road, I do not think we will be able to build a real system where cars communicate with each other to keep us safe.
4. Liability: Who takes liability if a car gets into an accident: the owner of the car or the car company? This problem could be solved by making cars a subscription service rather than something you own; however, we are at least a decade or two away from that happening at scale. Based on my reading on the internet, there are five levels of autonomous driving (six counting Level 0, no automation). Up to Level 3, the car owner is responsible for accidents; at Level 4 and beyond, it is the car company. Do we believe car companies will be ready to own the liability for millions of cars anytime soon? I have my doubts, but others may have different opinions. My reading of the level-to-liability split is sketched below.
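To spell out that liability claim, here is my reading encoded as a lookup table. This is an opinion, not settled law; Level 3 in particular is still contested in practice.

```python
# My reading of the liability split across the levels of driving
# automation. An opinion encoded as a table, not legal fact.

LIABLE_PARTY = {
    0: "owner/driver",  # no automation
    1: "owner/driver",  # driver assistance
    2: "owner/driver",  # partial automation, driver must supervise
    3: "owner/driver",  # conditional automation (contested in practice)
    4: "car company",   # high automation, no driver fallback expected
    5: "car company",   # full automation
}

def who_is_liable(level: int) -> str:
    return LIABLE_PARTY[level]

print(who_is_liable(2))  # owner/driver
print(who_is_liable(4))  # car company
```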
To summarize, I do not think we will see real mass-market self-driving cars on the road in my lifetime. Driver-assist systems will improve significantly over time, however, and I am optimistic that my son and his generation will see a working system. I look forward to others' thoughts on this topic to refine my thinking.
Congratulations on your new book. I look forward to reading it.