Why we feel the need to abuse self-driving technology


Dr Peter Rossger, founder and owner of human machine interaction (HMI) consultancy Beyond HMI, looks at some of the reasons why drivers feel compelled to abuse self-driving technology

The automation of parts of the driving task is one of the key focal points of the automotive industry today, alongside electrification, connectivity and car sharing. In recent years the industry has added technology to cars that supports the driver in the successful execution of the driving task, summarized under the banner of advanced driver assistance systems (ADAS). These systems support and inform the driver and, in some cases, such as ABS, intervene in the driving process. The core driving tasks of steering, standard braking and acceleration, however, remain under the full control and responsibility of the driver; SAE J3016 defines this as driving automation Level 1.

However, driving has changed recently, and with the combination of an adaptive cruise control and automated lane keeping, the core driving task has become automated… theoretically.

Since there are still certain doubts about the successful performance of these systems, the driver is forced to remain in the control loop. The driver takes the role of captain: the technology performs the task; it’s the human’s job to control it and intervene when something goes wrong. The most complex structure in the universe we know – our brain – is condemned to sit in a seat and wait for the technology to fail. Doesn’t sound too useful.

And it doesn’t work. We all know that our thoughts tend to wander when external stimuli are low. Those of us practicing Buddhist meditation face the monkey mind every day and learn how to control it. For everyone else, two alternatives remain: stimulate your brain or turn it off. That means reading emails, making a call, surfing the internet, or falling asleep. The result is the behavior we see on our streets: distracted or drowsy drivers.

Trust is core in the acceptance of technology in general, and in automated driving particularly. This includes both the trust of people outside the vehicle (as to whether or not it will really stop) and of people inside the vehicle.

Here we see two different characteristics come into play: over-trust and under-trust. Under-trust simply leads to the abandonment of the technology. Users don’t apply it because they underestimate its capabilities. Even when the car could drive automatically in a given context, the driver leaves the system off and keeps driving manually. This is a critical point: the benefits of the technology go unused, and the safer driving a well-designed system could deliver is never realized.

Over-trust, leading to the misuse of technology, means that the user’s trust in the capabilities of the machine is higher than is appropriate. This is the situation when drivers of automated cars fall asleep behind the wheel, do not react to system warnings, and simply let go of all responsibility, believing that the car will solve the issue. Users have too much belief in the technology, and the consequences can be deadly.

As we know, some humans experiment with technology by testing it, challenging it, and bringing it to its limits intentionally. We see them often. They are the curious, the ones that move developments forward and run campaigns for certain solutions, often becoming early adopters. The downside is that this can result in them endangering themselves and others.

Therefore, the solutions we see on our streets today cannot be the final answer. Requiring the driver to rattle the steering wheel every 10-30 seconds makes the rolling robot more annoying than supportive. Beeps, buzzing seats and other stimuli may be more distracting than helpful. And simply trusting that the driver will be wise enough not to challenge the system leads to the accidents we have seen so far.

Systems at SAE Level 2 and Level 3 share the driving task: some parts are performed by the machine, some by the human. At Level 2, constant surveillance by the driver is required, but at Level 3 we allow the driver to move out of the control loop. The two scenarios present different challenges and require different solutions. For Level 2, mode awareness is key: communicating clearly who is doing what. Level 3 requires a guided handover procedure from machine to human, in which the driver must build adequate situational awareness in a short time. Both demand dedicated HMI solutions.

The situation today is not satisfactory, but concepts are becoming visible with time. A multimodal solution for the communication between driver and machine is required, with large and clear signals for the doubt-free transfer of information. HMI solutions that solve these challenges will eventually be developed. The driver and their capabilities, wishes, mental models, needs, use cases, and expectations must be at the core of the development process for autonomous driving technology.

By Dr Peter Rossger, founder and owner, Beyond HMI

