There is no shortage of autonomous vehicles, or of models for them. Electrical, software, and even mechanical engineering departments keep records of dozens of small-scale simulated autonomous vehicles built by students. At full scale, the world is not short of them either: companies like BMW and Tesla have prototyped a multitude of autonomous or self-driving cars, bikes, and aircraft.
However, there is a solid reason why we don't see empty taxis pulling over for passengers, and why a pilot is always at the controls of a plane during a storm, takeoff, or landing: the massive safety and security risks associated with autonomous or self-driving vehicles. An autonomous car has to be programmed to think and act not just like a human, but better than a human. The burning problem with people operating vehicles is that not all people make ideal drivers, and even "ideal" drivers can make grave mistakes, turning every trip on the road into a potential risk.
To instill a common driver ethic and code of conduct, defined driving practices are built into driving licence tests to ensure that drivers follow optimal habits. This, however, does not always translate into safe driving, because the human mind prioritizes things it shouldn't. A driver often prefers sparing their vehicle from damage over avoiding harm to a person, and a driver's reflexes are limited: in a moment of panic, they cannot settle on the safest course of action within their response time. That is why, although we might allow or even prefer domestic robots to imitate humans, autonomous vehicles are not supposed to imitate humans.
Autonomous Vehicles Are Not Supposed to Imitate Humans
Even though autonomous vehicles struggle to imitate humans as it is, developers are looking at creating something akin to Asimov's laws. That is to say, the AI programs in the cars will be given fixed, unchangeable priorities and rules that they will not compromise on, and these rules will form the criteria for every decision the programs make. For example: always protect humans at the expense of property. While a human driver might not follow this rule religiously, an AI program can be made to follow it absolutely.
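A fixed priority ordering like this can be sketched in code. The following is a minimal, purely illustrative Python example (none of these names come from any real vehicle software): candidate maneuvers are ranked by a frozen rule hierarchy in which harming a human always outweighs breaking a traffic rule, which in turn always outweighs damaging property.

```python
# Hypothetical sketch of a fixed, Asimov-style rule hierarchy for an
# autonomous vehicle's decision logic. All names and fields here are
# illustrative assumptions, not any real vendor's API.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humans: bool      # highest-priority rule violation
    breaks_law: bool        # middle-priority rule violation
    damages_property: bool  # lowest-priority rule violation

def rank(action: Action) -> tuple:
    # Python compares tuples element by element, so a violation of a
    # higher-priority rule dominates any number of lower-priority ones.
    return (action.harms_humans, action.breaks_law, action.damages_property)

def choose(actions: list[Action]) -> Action:
    # Pick the candidate that violates the fewest high-priority rules.
    return min(actions, key=rank)

# Example: swerving damages the car but spares a pedestrian.
options = [
    Action("brake_hard_and_swerve", harms_humans=False,
           breaks_law=False, damages_property=True),
    Action("stay_in_lane", harms_humans=True,
           breaks_law=False, damages_property=False),
]
print(choose(options).name)  # brake_hard_and_swerve
```

Because the priority ordering is baked into the sort key rather than learned or weighted, the program cannot "change its mind" under pressure the way a panicking human driver can.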
All this talk of autonomous systems and self-driving cars is incomplete without mentioning Elon Musk. Compared to other companies, Musk's ventures have been building autonomous vehicles that seem well ahead of their time.
That, however, is not enough. Tesla, Elon Musk's vehicle company, excels at developing electrically powered, semi-autonomous vehicles, but it does not yet have the research, prototypes, or simulated results with programs reliable enough to launch fully autonomous cars on the market. That said, Tesla does have something vital, even crucial, to any autonomous vehicle or any large-scale artificially intelligent or cloud-based network system. That something is security.
The world has seen security breaches at every level, from plane hijackings to attacks on highly secure military headquarters to the hacking of intelligence systems and of private accounts secured by companies like Sony, so any security system is viewed with skepticism. That goes doubly for any system connected to the internet or the cloud, which leaves it highly exposed to hacking. Andrew Martin, professor of security systems at the University of Oxford, told Inverse in April 2017 that "existing cars are vulnerable to such attacks – albeit slightly simpler ones," and that "in short, yes, it's a big deal, and it's high on the agenda of all the car safety people, and really only time will tell if they've done enough work on that."
Despite the common vulnerability of autonomous systems, Tesla's programs are widely considered exceptionally hard to penetrate. With dozens of companies struggling to match that resistance to hacking attempts, Tesla has set a benchmark. But here is where the company really excels: Tesla plans to share its security techniques and programs with ordinary users and other companies, or in other words, to open-source them.
Elon Musk, Tesla's CEO, recently announced in a tweet: "Planning to open-source Tesla vehicle security software for free use by other car makers. Extremely important to a safe self-driving future for all."
Although there have been rumors of security breaches in Tesla's systems, and such rumors deserve to be taken seriously, by open-sourcing its security program Tesla has taken a notably confident stance on its own work. Programmers and penetration testers hired by companies all over the world will now be able to study and improve on the algorithms Tesla has been using.
Will Tesla’s programs really make autonomous vehicles safer and more convenient? Will other companies surpass Tesla by modifying their own algorithms? Only time will tell.