Research Professor of Autonomous Systems, Defence and Systems Institute at University of South Australia
It’s a science fiction staple: driverless cars navigating crowded cities. But just how close are we to such a world? How long until an autonomous vehicle could be sitting in your driveway?
Last year Google announced it had autonomously driven six Toyota Priuses and an Audi TT over 140,000 miles (224,000 kilometres) in and around San Francisco.
Such vehicles are possible because they use:
- Sensors that detect the physical environment.
- Processing techniques that determine the speed, distance, shape and colour of objects around the cars.
- Object classification and fusion techniques that accurately recognise and interpret this data.
- Tracking techniques that can continuously monitor and predict the trajectories and behaviours of the objects in the environment.
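The stages above can be sketched in code. This is a toy illustration only – the names, thresholds and the constant-velocity prediction step are my assumptions, standing in for the far more sophisticated classification, fusion and tracking systems the article describes.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One processed sensor return: position (m), speed (m/s), size (m)."""
    x: float
    y: float
    speed: float
    size: float

def classify(d: Detection) -> str:
    # Toy stand-in for the object-classification stage; the size and
    # speed thresholds are illustrative, not from any real system.
    if d.size > 3.0:
        return "vehicle"
    if d.speed > 0.5:
        return "pedestrian"
    return "static"

def predict(d: Detection, vx: float, vy: float, dt: float) -> tuple:
    # Toy tracking step: project the object's position forward by dt
    # seconds at constant velocity, in place of a full tracking filter.
    return (d.x + vx * dt, d.y + vy * dt)

if __name__ == "__main__":
    det = Detection(x=10.0, y=2.0, speed=1.2, size=0.6)
    print(classify(det))                      # "pedestrian" by these toy rules
    print(predict(det, vx=1.0, vy=0.0, dt=2.0))
```

A real pipeline would fuse detections from several sensors before classifying, and track each object with an estimator such as a Kalman filter rather than a bare constant-velocity projection.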
There are many reasons why such technology, when at the appropriate level of readiness (let’s say within a couple of decades), will be quickly adopted.
Autonomous vehicles are programmed to drive according to the road rules, so there will be fewer accidents due to speeding, human error and poor visibility.
Some 40% of accidents are caused by reduced visibility (e.g. poor lighting or weather conditions) and more than 70% are the result of human error (e.g. speeding, loss of concentration, frustration, fatigue, driving under the influence of alcohol or drugs).
Autonomous vehicles could therefore reduce the toll of more than 1 million deaths that occur on the world’s roads annually, at an estimated combined cost of US$55 billion.
Given that autonomous systems will drive more in accordance with the manufacturers' instructions than human drivers do, maintenance costs will be reduced and servicing regimes can be better planned.
Fewer crashes and more efficient driving will lead to fewer traffic jams and less pollution.
To put this into perspective, traffic jams in major US cities alone are estimated to cause 3.6 billion hours of delays each year and waste 5.7 gigalitres of fuel – the equivalent of US$67.5 billion in lost productivity.
Intelligent vehicles that regulate speed could substantially mitigate many of these problems, and a majority of these jams could be avoided if just 20% of vehicles were autonomously controlled.
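As a back-of-envelope check on the figures quoted above (my own calculation, not from the original source), the delay hours and lost-productivity estimate together imply a value of roughly US$19 per delayed hour:

```python
# Back-of-envelope check on the quoted US congestion figures.
delay_hours = 3.6e9           # annual delay across major US cities (hours)
lost_productivity = 67.5e9    # equivalent lost productivity (US$)

implied_value_per_hour = lost_productivity / delay_hours
print(f"${implied_value_per_hour:.2f} per delayed hour")  # $18.75
```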
Unfortunately, most laws applying to the movement of vehicles relate to drivers or riders, as they are assumed to be in control of the vehicle.
It’s harder to determine responsibility for an accident or infringement when an autonomous vehicle is involved, not least because autonomy exists on a sliding scale between the human and the intelligent technology taking the decisions.
It is, of course, likely that for some time these vehicles will share the road with manned vehicles, as some people will be unable, or will choose not, to acquire the latest technology.
In this regard, legislators or insurance companies will need to decide whether they intend to insist that such recalcitrants eventually comply.
In the meantime we must also determine:
- Where culpability might lie for a collision between manned and unmanned vehicles.
- Whether there will be a need for “autonomous vehicle only” lanes.
- Whether there’s a need for special training and equipment for police and road traffic accident investigation squads, similar to air crash investigators.
In order to build a credible safety case, we will need to demonstrate that an autonomous vehicle performs at least as well as a manned vehicle, and that it is just as reliable.
This safety case will then drive the timescales for the introduction of the technology.
So, what failure rate and type of failures should we accept and how should we trade such requirements against criteria such as cost and performance?
Typically, these decisions are made against broad risk criteria such as “The life expectancy of a human shall not be altered by using such a system” or “The system should pose no greater risk to persons or property than that currently presented by a manned road vehicle.”
A metric such as the average number of kilometres driven per required human intervention could be useful for determining these criteria.
At present, press reports put the average intervention distance for the Google cars in the order of 1,000 miles (1,600 kilometres).
In this regard, it is noteworthy that most jurisdictions require learner drivers to accumulate around 100 hours of supervised driving experience, 15-20 hours of which must usually be at night.
This equates to around 4,000 miles (6,400 kilometres) before learners are granted a licence, a figure close to those already reported for the Google cars and probably achievable within two to three years.
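The metric is simple to compute. In the sketch below the intervention count is hypothetical – it is chosen only so that the result matches the reported figure of roughly one intervention per 1,000 miles – and the learner benchmark assumes the article’s implicit average speed of about 40 mph over 100 supervised hours:

```python
def miles_per_intervention(total_miles: float, interventions: int) -> float:
    """Mean distance driven between required human interventions."""
    return total_miles / interventions

# Google's reported mileage, with a hypothetical intervention count
# consistent with press reports of roughly one per 1,000 miles.
google_metric = miles_per_intervention(140_000, 140)

# Learner-driver benchmark: ~100 supervised hours; the article's
# 4,000-mile figure implies an average speed of about 40 mph.
learner_benchmark_miles = 100 * 40

print(google_metric, learner_benchmark_miles)  # 1000.0 4000
```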
A case can therefore be made for an autonomous vehicle’s performance to be assessed against its capacity to successfully complete an on-road practical driving examination, of the kind typically used to license learner drivers.
We might even find that some of the accidents due solely to driver error are reduced.
In reality, we will probably require autonomous vehicles to function at a level superior to that considered acceptable for a human being.
We tend to accept “human error” as an explanation when people fail, yet expect autonomous systems to fail far less often; indeed, the improved safety case is a key justification for developing autonomous vehicles in the first place.
Driverless vehicles that freely roam our streets, collecting and dropping off passengers in ways that have become familiar to us through science fiction movies are probably more than 20 years off.
But vehicles that drive themselves with little or no human assistance are probably much closer than many think – perhaps less than ten years away.
Would you buy an autonomous vehicle if it cost the same as a regular car of the same class?
This article is published under a Creative Commons Attribution-NonCommercial-NoDerivatives licence.