Chris Hayhurst, European consulting manager at MathWorks, examines whether former UK transport secretary Chris Grayling’s estimate that driverless cars will be on public roads within the next three years is realistic
Driverless technology already exists on UK roads, but it’s important to bear in mind that autonomy exists on a spectrum. Level 1 refers to the automation of single functions, while Level 5 represents vehicles that can navigate any road in complete independence. Currently, the majority of vehicles sit at Level 1 or 2, with a handful at Level 3, because achieving Levels 4 and 5 is extremely challenging. From consumer perception to insurance, the strict regulatory environment and the technology itself, there is plenty to overcome before we see widespread adoption of fully driverless cars.
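The spectrum of autonomy described above can be captured in a simple lookup. This is an illustrative sketch only: the one-line descriptions are paraphrases for the example, not the official SAE wording, and Level 0 (no automation) is included for completeness.

```python
# Hypothetical sketch: the automation levels discussed above, encoded
# as a lookup table. Descriptions are paraphrased for illustration,
# not quoted from the SAE J3016 standard.
SAE_LEVELS = {
    0: "No automation - the driver performs all tasks",
    1: "Driver assistance - a single function (e.g. cruise control) is automated",
    2: "Partial automation - combined steering and acceleration, driver monitors",
    3: "Conditional automation - the car drives itself, driver takes over on request",
    4: "High automation - fully autonomous within a defined operating domain",
    5: "Full automation - the vehicle can navigate any road independently",
}

def describe(level: int) -> str:
    """Return a human-readable summary of an automation level."""
    return SAE_LEVELS.get(level, "Unknown level")

print(describe(4))
```

Most production driver-assistance systems today would map to Levels 1 and 2 in this scheme, which matches the distribution described above.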
The development of driverless cars is progressing much more slowly than other forms of driverless transportation, such as the dump trucks being trialled at UK roadworks, which are already at the equivalent of Level 4. This is because humans must be factored into the decision-making of driverless cars to ensure their safe operation. Human interaction complicates matters because of our unpredictable nature: our actions and movements are difficult to track, so the number of scenarios in which driverless technology must be tested to guarantee safety is far greater.
We are also seeing faster progress in driverless vehicles that perform repetitive tasks, because these are far easier to automate: driverless tractors in agriculture, for example, or mining vehicles that haul materials from one area of a mine to another.
Predicting human interaction
Some have suggested humans will purposefully test the limits of driverless vehicles, for example by not waiting for the green man and instead deliberately obstructing cars so they stop immediately – with potentially dangerous consequences. It is crucial that driverless car developers take such predictions into account to ensure the technology remains safe and continues to function correctly and consistently, no matter what human interaction it encounters.
Driverless vehicles need to work in all types of weather, from rain and storms to snow and sleet. Severe weather makes it particularly hard for driverless cars to operate, primarily because it disrupts the sensor technology used to identify signs and road markings. It is also harder for driverless cars to operate safely in countries with poorly regulated road systems or limited road infrastructure, because the driving parameters there are far more complex and unpredictable for the technology to handle.
The role of AI
And what about the role of AI in the development of autonomous systems and vehicles? Machine learning can be used to teach a system to recognize specific features in images or data. However, the ‘black box’ issue – a lack of transparency about how the system reaches its decisions – is a problem when the system is safety-critical. Because the automotive industry is highly regulated, manufacturers need to ensure their systems are properly testable and fully certifiable, which can only be achieved if the AI algorithms in use can be understood by a person.
Testing is vital when designing autonomous systems such as driverless cars. But it’s just not possible to do so through real-world trials alone, as there are simply too many possible scenarios, variables and hazards to examine. Therefore, having the right modeling technology in place is key.
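To illustrate why simulation scales where road trials cannot, here is a toy Monte Carlo scenario generator. Everything in it is an invented assumption for the example: the weather categories, the simplified braking-distance model and the pass/fail rule are not a real safety model or any particular vendor’s tool.

```python
import random

# Illustrative sketch only: randomly generated driving scenarios,
# checked against a deliberately simplified stopping-distance rule.
# All parameters below are invented for the example.
random.seed(0)

WEATHER = ["clear", "rain", "snow", "fog"]

def random_scenario():
    """Draw one random combination of conditions to test against."""
    return {
        "weather": random.choice(WEATHER),
        "pedestrians": random.randint(0, 5),
        "speed_kmh": random.uniform(20, 120),
    }

def braking_distance(speed_kmh, weather):
    # Toy physics: distance grows with the square of speed, and poor
    # weather lengthens it via an assumed friction penalty.
    friction = {"clear": 1.0, "rain": 1.4, "snow": 2.0, "fog": 1.1}[weather]
    return friction * (speed_kmh / 10) ** 2

def scenario_passes(scenario, sensor_range_m=120):
    # The vehicle "passes" if it could stop within its sensing range.
    return braking_distance(scenario["speed_kmh"], scenario["weather"]) < sensor_range_m

trials = [random_scenario() for _ in range(100_000)]
pass_rate = sum(scenario_passes(s) for s in trials) / len(trials)
print(f"Pass rate over {len(trials)} simulated scenarios: {pass_rate:.1%}")
```

Running one hundred thousand scenario combinations like this takes a fraction of a second in software; covering the same space with physical road trials would be impossible, which is the point the paragraph above makes.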
Capitalizing on development tools
There is an extensive and flexible set of tools on offer to engineers right now covering the entire development lifecycle of autonomous systems, from designing and choosing the right sensors to developing and deploying algorithms for vehicle perception. This includes verification and validation tools, deployment tools, and simulation and control system design tools such as Simulink.
As real-world testing can be so expensive and resource-heavy, using such simulation software – which allows users to virtually recreate a variety of real-world situations driverless vehicles may encounter – means developers can make extensive savings in both time and money.
As well as simulating different road surfaces and weather patterns, developers can test how varying the number of sensors, radars and cameras affects a vehicle’s performance in a simulated environment before building and testing it on the road. Sensors in particular can be costly: the more sensors on a car, the more expensive it becomes, so driverless vehicle developers can use this technology to work out that trade-off.
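The sensor trade-off described above can be sketched as a simple cost-versus-coverage comparison. All of the figures here are assumptions invented for illustration – the unit costs, the coverage contributions and the diminishing-returns model are not real product data.

```python
# Illustrative sketch of the sensor-count trade-off: more sensors raise
# cost, while each additional unit of a given type adds less coverage
# than the one before it. All numbers are invented assumptions.
SENSOR_COST = {"camera": 150, "radar": 400, "lidar": 7000}  # assumed unit cost, USD

def configuration_cost(config):
    """Total hardware cost of a sensor configuration."""
    return sum(SENSOR_COST[kind] * count for kind, count in config.items())

def coverage(config):
    """Toy coverage score in [0, 1] with diminishing returns per sensor type."""
    per_unit = {"camera": 0.15, "radar": 0.20, "lidar": 0.30}
    score = 0.0
    for kind, count in config.items():
        for n in range(count):
            score += per_unit[kind] * (0.5 ** n)  # each extra unit adds half as much
    return min(score, 1.0)

candidates = [
    {"camera": 4, "radar": 2, "lidar": 0},
    {"camera": 6, "radar": 4, "lidar": 1},
    {"camera": 8, "radar": 6, "lidar": 2},
]
for config in candidates:
    print(config, f"cost=${configuration_cost(config)}",
          f"coverage={coverage(config):.2f}")
```

Sweeping candidate configurations like this in simulation, rather than fitting out physical prototypes, is exactly the kind of saving the paragraph above describes.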
Looking to the future
Looking ahead, we are likely to see two main forms of autonomous features: safety features as standard, as well as additional convenience features such as self-parking that will come at a premium.
As previously mentioned, verification and validation tools, as well as AI, will remain invaluable in ensuring autonomous systems are certifiable. But humans need visibility into, and an understanding of, the decisions AI systems are taking in order for independent third parties to certify those systems as safe for use around people.
It’s highly unlikely we will see fully driverless cars on the roads by 2021, given the amount of testing still to be done. What is certain is the key role technology will play in taking fully autonomous vehicles mainstream.