The Future of Driving Begins with Trust

Published by Thomas Philips
November 27, 2018
People need to be able to quickly and reliably gauge what an autonomous vehicle is going to do next

For many people, the theme of “digital transformation” conjures up a vague feeling somewhere between fascination and uncertainty. Fascination because digital technologies can drastically simplify complex activities, find amazing solutions and offer unique opportunities. Uncertainty because futuristic visions of complete automation seem to leave little room for human individuality and our ability to act.

“At Mercedes-Benz we are convinced that the digital transformation can only be designed successfully if it is deeply anchored within society. Humans and access to data must be at the heart of a digital transformation,” says Jasmin Eichler, Head of Research Future Technologies at Daimler AG. “That is why we are also working on solutions in the field of digitalisation which place the freedom, decision-making autonomy and individuality of human beings at their centre. We aim to create a balance between humans and technology. The approach we are following here is ‘Human first’.”

Because digital transformation raises such diverse issues, Mercedes-Benz is basing its endeavours on “open innovation”: stakeholders from a wide range of fields – business, research, art and biology – are brought together for shared research purposes. The results are projects that consider the future of mobility from new perspectives and produce exceptional problem-solving approaches. Mercedes-Benz presented some of these collaborative projects at the FutureInsight event in Berlin.

A future with autonomous vehicles

How do we establish trust between humans and machines? Autonomous driving is going to be an integral part of our future, and Mercedes-Benz regards empathy and trust as central to the success and acceptance of this transformation. The concept of “informed trust” takes on great importance here: “People need to be able to quickly and reliably gauge what an autonomous vehicle is going to do next. The vehicle must therefore provide information about its intentions in a way that people can grasp immediately and intuitively,” says Alexander Mankowsky, a futurologist at Daimler. Based on this information, people can decide how they are going to respond to the situation. For this purpose, Mercedes-Benz introduced concepts for a “cooperative vehicle” at FutureInsight, among other innovations. Projects with external partners demonstrate further possibilities for how future autonomous vehicles could communicate and cooperate with their surroundings.

The cooperative vehicle – know intuitively what the car intends to do

The cooperative vehicle, based on an S-Class, features 360-degree light signalling. Turquoise light strips in the windscreen, the radiator grille, the headlamps, the exterior mirrors and the lower area of the windows indicate to pedestrians and surrounding traffic that the vehicle is operating in autonomous mode. Lamps on the roof provide information about the actions the vehicle is about to perform: a steady light shows that the vehicle is in autonomous driving mode, regardless of whether it is moving or at a standstill; slow flashing means that the vehicle is braking; rapid flashing indicates that it is about to move off. The lights on the roof also follow the movements of people at the side of the road and in front of the vehicle to signal that the vehicle is aware of their presence. In doing so, the cooperative vehicle recreates the natural eye contact that would otherwise take place between driver and pedestrians.
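The signalling scheme described above amounts to a simple mapping from vehicle state to light pattern. A minimal sketch of that mapping follows; the state and pattern names are illustrative assumptions for this article, not Daimler’s actual software:

```python
from enum import Enum, auto

class VehicleState(Enum):
    """Hypothetical driving states of the cooperative vehicle."""
    MANUAL = auto()                 # human is driving: no turquoise signalling
    AUTONOMOUS_DRIVING = auto()     # autonomous mode, vehicle in motion
    AUTONOMOUS_STANDSTILL = auto()  # autonomous mode, vehicle stopped
    BRAKING = auto()                # autonomous vehicle is braking
    ABOUT_TO_MOVE_OFF = auto()      # autonomous vehicle is about to pull away

# Roof-light behaviour as described in the article: steady light for
# autonomous mode (moving or not), slow flashing for braking, rapid
# flashing before moving off.
LIGHT_SIGNALS = {
    VehicleState.MANUAL: "off",
    VehicleState.AUTONOMOUS_DRIVING: "steady",
    VehicleState.AUTONOMOUS_STANDSTILL: "steady",
    VehicleState.BRAKING: "slow_flash",
    VehicleState.ABOUT_TO_MOVE_OFF: "rapid_flash",
}

def roof_light_signal(state: VehicleState) -> str:
    """Return the roof-light pattern for a given vehicle state."""
    return LIGHT_SIGNALS[state]
```

The point of such an explicit table is exactly the “informed trust” idea: each externally visible signal corresponds to one unambiguous vehicle intention.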

The cooperative S-Class also informs its surroundings that it is about to start operating while it is still parked at the side of the road. The light strips around the vehicle emit a corresponding signal, the exterior mirrors fold out, and first the rear of the vehicle lifts, followed by the front. These movements resemble a living creature waking up and stretching – communication that people understand intuitively.

Study shows pedestrians wish for 360-degree communication in turquoise

360-degree light signalling is particularly important when it comes to keeping pedestrians informed. This finding emerges from several light studies that Mercedes-Benz has conducted at its test facility in Sindelfingen, as well as at the recently opened site in Immendingen, under the direction of Stefanie Faas from Daimler’s Innowerkstatt (innovation workshop). The research looked at how pedestrians react in various traffic situations to autonomous vehicles with different light signalling. It became clear that light signalling has a strong effect on the acceptance of autonomous vehicles, as well as on how safe pedestrians feel. In particular, people want light signalling in situations where they previously interacted with the driver. For example, people are used to seeking eye contact with a driver when they wish to cross the road. If light signalling communicates that a vehicle is in autonomous driving mode, pedestrians can feel safe even when the vehicle occupants are obviously not paying attention to the traffic around them. The majority of participants in the study preferred turquoise as the signalling colour, and all participants favoured a 360-degree display. Mercedes-Benz is contributing the results of the study to SAE International, an international organisation dedicated to advancing mobility technology, where it recommends turquoise for 360-degree signalling – a colour that has not previously been used in the automotive sector.

Visions of the future: the vehicle body as a means of communication

Going beyond these studies and the light signalling demonstrated in the cooperative vehicle, Mercedes-Benz is already pursuing longer-range visions intended to enable “informed trust” between humans and machines. Informed trust contrasts with blind trust and requires a certain knowledge of its object. In these visions, the entire outer skin of the vehicle becomes a medium for 360-degree communication: the conventional body is transformed into a “digital exterior”.

Mercedes-Benz took a first step in this direction back in 2015 with the F015 research vehicle, which, among other features, has a digital grille that can be used as a communication medium. A year later the Vision Van, an electrically powered van with integrated delivery drones for last-mile parcel transport, picked up on this motif: it is fitted with digital LED grilles at the front and rear, which the vehicle can use to warn following traffic with messages such as “Vehicle stopping”. In 2018 the Vision URBANETIC, a concept for on-demand, efficient and sustainable mobility, took this design further. The concept, comprising an autonomous drive platform with interchangeable modules for transporting cargo and passengers, can communicate with its surroundings by means of “digital shadowing” on the body. For example, the shadow of a pedestrian is displayed when the vehicle’s 360-degree sensors perceive someone nearby. Thanks to this interaction, the pedestrian can feel confident that the vehicle has detected them and can act accordingly. Building on these innovations, Mercedes-Benz is now working on further solutions that provide vehicle occupants and passers-by with the same information about the vehicle’s perceptions and intended actions. In addition, the vehicle occupants should be able to decide what the vehicle communicates outwardly. This creates a cocooning effect so that the vehicle feels like a protected space for its passengers.
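The digital-shadowing idea – render a pedestrian’s shadow on the body panel facing them once the 360-degree sensors detect someone – can be sketched as a minimal detection-to-display mapping. The `Detection` type, its field names and the 10-metre display range below are hypothetical illustrations, not details of the Vision URBANETIC:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A single pedestrian detection from a 360-degree sensor ring
    (hypothetical structure for illustration)."""
    bearing_deg: float  # direction of the pedestrian relative to the vehicle
    distance_m: float   # distance to the pedestrian

def shadow_bearings(detections, max_range_m=10.0):
    """Return the body-panel bearings (0-360 degrees) at which a pedestrian
    'shadow' should be rendered, one per detection within display range."""
    return [d.bearing_deg % 360 for d in detections if d.distance_m <= max_range_m]
```

In this sketch, a pedestrian detected 5 m away at 45 degrees produces a shadow on the corresponding panel, while one 20 m away is ignored – mirroring the idea that the display confirms only nearby people the vehicle has actually perceived.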

Groove – interaction via reactive surfaces

The “Groove” project – a collaboration between Mercedes-Benz and the designers at Studio 7.5 in Berlin – explores the communicative potential of reactive surfaces, with a focus on collaboration between humans and autonomous vehicles. The partners developed a mobile, manoeuvrable membrane that perceives its environment and responds to it in a similar way to a sea anemone. The aim of the project is to use these modes of expression to communicate an autonomous system’s processes and intentions to its surroundings and thereby improve the interaction between humans and machines.

Polygon – different dimensions of informed trust based on animations

In collaboration with Japanese animation studio Polygon Pictures, Mercedes-Benz has designed animations of different scenarios in which autonomous vehicles could build informed trust with humans. A basic principle of anime is that a great deal of emotion can be expressed in a few strokes. The key question the project addresses is therefore: how can this principle be used for intuitive communication between human and machine?

Answers are provided by, among other scenarios, “AICAR”, which shows an autonomous vehicle as an animated character. In addition to light signalling that announces actions such as stopping, moving off or turning, the vehicle has various communication features that can express emotions.

Eye contact is the focus of a second scenario. Studies have shown that people intuitively seek eye contact with autonomous vehicles. To leverage this behaviour for active interaction between human and machine, Polygon Pictures has created a stylised eye design for autonomous vehicles. This makes it possible to present the actions the vehicle is about to perform and allows people to intuitively grasp what is going on.

A third scenario known as “AIMY” approaches the same topic in a somewhat more abstract way. Here the vehicle communicates with its environment via a target pointer. This target pointer consists of optical signals such as crosses or rays, which announce actions such as turning, accelerating or braking.

“See like a pony” – using the sensory system of animals as a model

The “SLAP – See like a pony” project, led by Sabine Engelhardt from the Future Technologies division at Daimler AG, looks at the interaction between human and machine from a very unusual perspective. Engelhardt is investigating how ponies perceive their surroundings and drawing conclusions that could assist communication between people and autonomous cars. The approach originated with Stanford professor Clifford Nass: in one of his lectures, the sociologist compared autonomous cars to domestic animals, whose behaviour is predictable to a certain extent but which also perform actions that even humans cannot predict. Furthermore, communication between humans and animals chiefly involves body language – similar to the way communication between humans and machines could work. At the same time, neither animals nor machines can predict and understand human actions with complete certainty. With the help of cameras, “SLAP” enables the researchers to see the world from a pony’s perspective and thereby learn how the animals behave around humans. A familiar example: horses show attention through the direction of their ears, and knowing where their attention lies helps considerably when interacting with them. These findings can be transferred to the design and technology of self-driving cars, whose sensory attention could likewise be made externally visible and thus comprehensible.

Maya Ganesh – the ethics of autonomous vehicles

Researcher and author Maya Ganesh considers the topic of empathy in the context of future mobility from a meta-perspective: she engages with the ethics of autonomous vehicles. In her lecture “Insight on Ethics; Society & AI” she discusses different aspects of ethics in the interaction between human and machine. Among other issues, she addresses the question of whether an autonomous vehicle can really be regarded as “autonomous”, i.e. whether it can really be understood as a “being” that has intelligence, consciousness, sensory perception and free will. At the same time she asks why people assume that all intelligence must be modelled on human intelligence, which is itself an arbitrary and changeable benchmark, and whether it is meaningful to apply human standards to the appraisal of machines. Based on these questions, she argues for reclassifying the relationships between humans and machines – whether hybrids, cyborgs or robots – and, as a consequence, for a re-evaluation of the ethical standards that apply to their interaction.
