Drivers’ reactions to automated vehicles
Professor Neville Stanton
BSc (Hons), PhD, FBPsS, FErgsS, MIET, MCIHT
University of Southampton
The concept of driverless cars, offering effortless transportation between destinations, posing little risk to occupants or nearby pedestrians, and giving drivers the chance to use their time better than by constantly monitoring the road, has been in the public consciousness for many years, perhaps too long without obvious benefits. Seemingly successful trials make frequent headlines, punctuated by occasional bad press when a vehicle is involved in a tragic fatal accident. To provide an update on driverless vehicles, Neville Stanton of the University of Southampton presented at Science Café.
Neville’s background is unusual in combining engineering and psychology, drawing together an understanding of how humans interact with machinery in an effort to promote safety and optimise human efficiency in systems, especially automated ones. This is a discipline known as ‘human factors’, which in many situations must negotiate a murky, often ill-defined border between the intuition of the human mind and the rational logic of AI. Neville has published over 500 papers and 50 books and is well qualified to provide a definitive viewpoint on the future of automated vehicles. His skills also allow him to analyse transport incidents and make recommendations for future accident prevention.
Neville’s experience of driverless cars began in 1992 when Jaguar/Land Rover invited him to drive and comment on an early automated vehicle. Driving it was both terrifying and exhilarating. At that time, it was thought that a mere three months of intensive work would deal with the human factors side of the project. But 30 years later, the challenge still hasn’t quite been overcome.
Much of the initial research has been conducted in a driving simulator, which subsequently led to test track and on-road trials. The development of Adaptive Cruise Control was a particular highlight as this contributed to the first commercial implementation of the system in Jaguar vehicles.
The SAE (Society of Automotive Engineers) has created a lexicon of automotive autonomy:
Level 1: one element of the driving process is taken over in isolation, using data from sensors and cameras, but the driver is very much in charge. This started in the late 1990s at Mercedes-Benz, with its pioneering radar-managed cruise control, whilst Honda introduced lane-keep assist on the 2008 Legend. These were the first steps towards removing the driver’s obligations behind the wheel.
Level 2: computers control two or more elements. This is where we are at today: computers take over multiple functions from the driver – and are intelligent enough to combine speed and steering systems together using multiple data sources. The latest Mercedes S-class is Level 2-point-something. It takes over directional, throttle and brake functions for one of the most advanced cruise control systems yet seen – using detailed sat-nav data to brake automatically for corners ahead, keeping a set distance from the car in front and setting off again when traffic holdups clear.
Level 3: the car can control safety-critical functions: ‘conditional automation’ – a specific mode which allows all aspects of driving to be carried out by the car, but crucially, the driver must be on hand to respond to a request to intervene. Audi calls its A8 a Level 3 ready autonomous car – meaning the car has the potential to drive itself in certain circumstances and will assume control of all safety-critical functions. This is done by refining maps, radar and sensor inputs with faster processors.
Level 4: fully autonomous in controlled areas (future-planned geofenced metropolitan areas), as HD mapping, more timely data, car-to-car communications and off-site call centres (to deal with unusual hazards) improve accuracy.
Level 5: fully autonomous, where a driver is never needed. The difference between Level 4 and 5 is that full automation doesn’t require the car to be in the so-called ‘operational design domain’. Rather than working in a carefully managed (usually urban) environment with lots of dedicated lane markings and infrastructure, it will be able to self-drive anywhere. The disruption will be significant with some analysts claiming 21 million autonomous vehicles on the road, globally, by 2035.
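The taxonomy above can be captured in a small sketch. This is a hypothetical encoding for illustration only, not any manufacturer's or the SAE's API; the level names and the helper function are my own:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The SAE driving-automation levels as described above (Level 0,
    no automation, is omitted as in the article)."""
    DRIVER_ASSISTANCE = 1       # one element automated; driver very much in charge
    PARTIAL_AUTOMATION = 2      # two or more elements combined, e.g. speed + steering
    CONDITIONAL_AUTOMATION = 3  # car drives itself; driver must answer a take-over request
    HIGH_AUTOMATION = 4         # fully autonomous within a geofenced operational design domain
    FULL_AUTOMATION = 5         # self-driving anywhere; a driver is never needed

def driver_must_monitor(level: SAELevel) -> bool:
    """At Levels 1-2 the human remains responsible for monitoring the road;
    from Level 3 upward the system monitors, though Levels 3-4 may still
    hand control back to the driver."""
    return level <= SAELevel.PARTIAL_AUTOMATION
```

The boundary this sketch draws at Level 2/3 is exactly where, as the rest of the article argues, the human-factors difficulties concentrate.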
This is not just an engineering challenge but also an area of concern for policy makers, insurers and litigators. Proposed changes to the Highway Code will apparently spell out what is and what is not allowed, as ministers pave the way for the next tranche of trials of driverless cars. A series of interim measures were published in April 2022, ahead of a full regulatory framework due to be implemented by 2025. Crucially, according to a recent motoring magazine report, the government has confirmed that drivers will not be held responsible for crashes in autonomous cars; that burden will fall to insurance companies. Mercedes is readying its new Level 3 self-driving technology on the new S-Class and EQS saloons – and the brand has confirmed that it is prepared to accept legal responsibility for accidents involving its cars when the system is engaged.
However, the company’s acceptance of liability falls within a limited set of parameters. Mercedes says it will only take the blame for an accident if it was directly caused by a fault with its technology. If the driver fails to comply with their duty of care (such as refusing to retake control of the car when prompted), they will be responsible for the resulting damage.
3. Where we are at
Unsurprisingly, Levels 1 and 5 (minimal automation and full automation) pose little conceptual difficulty.
An overview of today’s automated cars is that we are struggling with partial automation (the middle SAE levels), with concerns about drivers’ vigilance and workload (which suffers if either too high or too low). Even Cadillac’s Super Cruise offering, whilst suggesting a high level of automation, requires the driver to read the manual before using the car. This is the current Catch-22 of vehicle automation: remove responsibility, but state that vigilance is still needed.
Various consortia work together, and the established process for developing autonomous cars is:
i) Design and modify
ii) Driving simulator (see Fig. 1)
iii) Test track
iv) Open roads
The bottom line remains that automated systems in cars do need supervision.
Fig. 1: Immersive driving simulator.
Problems in interfacing have led to collisions and fatalities in both simulator studies and real on-road situations. Such problems are not restricted to the road domain; the Air France flight AF447 crash in 2009 is a case in point. Design of appropriate HMIs (Human Machine Interfaces) for the handover of control between an autonomous vehicle and human driver is critical to the success of automated vehicles. When vehicle control is handed back to a human driver, the driver needs to be aware of the vehicle status, road environment and pertinent road infrastructure as well as other road users.
A principal research question is therefore how to design Take-Over-Requests (TOR) whilst respecting the human-machine interface (HMI) and reaction times to achieve a safe TOR.
A driving simulator study in 2015 was conducted at the Fraunhofer Institute for Industrial Engineering. The time users needed to react to a TOR was measured for a highway scenario. The drivers were fully distracted by a secondary task, a challenging quiz game on a mobile phone (see Fig. 2). All subjects were able to take over for a lane change within 10 seconds of the TOR, many in considerably less time.
Fig. 2: Subject playing the quiz game while driving automated
Detailed research by Neville and his colleagues suggested a more nuanced layer of understanding was required. The aim of their subsequent study was to review existing research into driver control transitions and to determine the time it takes drivers to resume control from a highly automated vehicle in noncritical scenarios (see Fig. 3).
Fig. 3: The takeover-request icon shown on the instrument cluster. The icon is coupled with a computer-generated voice message stating, “Please resume control.”
The results (see Fig. 4) showed that significantly longer control transition times were found between driving with and without secondary tasks. Control transition times were substantially longer than those reported in the peer-reviewed literature.
Fig. 4: A distribution plot of takeover reaction time when drivers were prompted to resume manual control. The asterisk (*) marks the median value.
The research concluded that drivers take longer to resume control when under no time pressure compared with that reported in the literature. Moreover, drivers occupied by a secondary task exhibit larger variance and slower responses to requests to resume control.
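The finding that a secondary task both slows the median response and widens the spread can be illustrated with a quick simulation. The numbers below are made up for the sketch; the study's actual distributions are in Fig. 4 and the published paper:

```python
import random
import statistics

random.seed(42)  # deterministic sketch

# Illustrative (invented) takeover times in seconds, drawn from lognormal
# distributions: the "with secondary task" group gets both a higher median
# and a larger sigma, mirroring the qualitative finding reported above.
no_task   = [random.lognormvariate(1.2, 0.3) for _ in range(200)]
with_task = [random.lognormvariate(1.7, 0.6) for _ in range(200)]

def summarise(sample):
    return {"median_s": round(statistics.median(sample), 2),
            "stdev_s": round(statistics.stdev(sample), 2)}

print("no secondary task:  ", summarise(no_task))
print("with secondary task:", summarise(with_task))
```

The practical point is the variance, not just the average: designers must budget for the slow tail of responses, not the median driver.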
In terms of application, intra- and interindividual differences need to be accommodated by vehicle manufacturers and policy makers alike to ensure inclusive design of contemporary systems and safety during control transitions.
Within the development loop, modification is a constant requirement. By way of example, in 2016, American Joshua Brown was killed when his Tesla failed to detect a tractor trailer crossing his path, causing a collision. A similar accident befell Jeremy Banner three years later, suggesting that such ‘edge cases’, movement perpendicular to the direction of travel, were going undetected and that no attempt had been made to rectify the fault. Both cars had Autopilot, a Level 2 system, though of slightly different designs. Since then, Tesla has responded to this challenge with an Autopilot update and a requirement for the driver to register their presence by touching the steering wheel periodically.
Both these examples, of TOR research and of updating technology, show how thorough and detailed research needs to be to tease out bugs in the system and to remain consistently cognisant of the details of human-machine interaction.
4. Psychological Models
To get the most from HMI studies, these are often placed within existing psychological models. One such model is Neisser’s PCM (Perceptual Cycle Model) of 1976 which has been widely applied in ergonomics research in domains including road, rail and aviation. The PCM assumes that information processing occurs in a cyclical manner drawing on top-down and bottom-up influences (see Fig.5).
Fig. 5: Perceptual cycle model of Neisser (1976). The flow of information occurs in both top down (TD) and bottom up (BU) directions.
The PCM represents the view that human cognition is reciprocally related to a person's interaction with the world. Internally held mental templates (schema) help a person to understand situations and anticipate certain types of information; these schemas direct a person's behaviour for seeking relevant information about the world in an interpretative manner, and experience ultimately modifies and updates schema while influencing further interaction with the environment.
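The reciprocal loop described above can be sketched in a few lines of code. This is my own toy encoding for illustration, not a formal implementation of Neisser's model; the scene and observation names are invented:

```python
# Toy perceptual cycle: the schema directs exploration (top-down),
# exploration samples the environment, and what is found modifies the
# schema (bottom-up), which then guides the next look.

def perceptual_cycle(schema, environment, steps=3):
    trace = []
    for _ in range(steps):
        expectation = schema["expects"]            # top-down: schema directs the search
        observed = environment.get(expectation, "unexpected event")  # sample the world
        schema["knowledge"].append(observed)       # bottom-up: the world modifies the schema
        schema["expects"] = observed               # updated schema guides the next cycle
        trace.append(observed)
    return trace

# A hypothetical driving scene: each observation cues the next thing to look for.
environment = {
    "lane markings": "car ahead braking",
    "car ahead braking": "brake lights",
    "brake lights": "gap opening",
}
schema = {"expects": "lane markings", "knowledge": []}
trace = perceptual_cycle(schema, environment)
print(trace)
```

The point of the loop structure is that neither the schema nor the environment alone determines what is perceived; each pass around the cycle changes both what is known and what is looked for next.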
Another method used to design and understand TORs (see Fig. 6) is the Operator Event Sequence Diagram (OESD). OESDs have been used in the design of HMIs and interactions for over 60 years in a wide variety of applications, including analysis of aircraft landing procedures. Previous work undertaken in driving simulators has shown that OESDs can be used to anticipate the likely activities of drivers during the handover of vehicle control. Videos of drivers during the TOR were made on UK motorways and compared with the predictions from the OESDs. As expected, there were strong correlations between the activities anticipated in the OESDs and those observed during the handover of vehicle control from autonomous vehicles to the human driver.
Fig. 6: Left: example OESD section showing processes for Phase A and Phase B (handover protocol phase). Below: OESD task elements with description and an example.
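The comparison between predicted and observed activities amounts to scoring how many of the diagram's anticipated steps actually appear in the video record. A minimal sketch, with activity names invented for illustration rather than taken from the study:

```python
# Hypothetical OESD-predicted handover activities vs. what was observed
# on video; the real analyses use far richer coding schemes.
predicted = ["check mirrors", "hands on wheel", "eyes on road", "press confirm", "steer"]
observed = ["eyes on road", "hands on wheel", "steer", "check speed"]

# Which predicted activities were actually seen, in predicted order.
hits = [activity for activity in predicted if activity in observed]
hit_rate = len(hits) / len(predicted)
print(f"{len(hits)}/{len(predicted)} predicted activities were observed ({hit_rate:.0%})")
```

A high hit rate is evidence that the diagram anticipates real driver behaviour, which is what the motorway validation described above found.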
What have we learnt? Automation is not, as yet, powerful enough to make drivers redundant in all circumstances. Current design criteria are:
· Only automate what you have to
· Support rather than replace driver
· Background automation, not foreground automation
· Chatty co-pilot not silent autopilot
· Continually design out system failures