HARMAN
User interface for autonomous vehicles
Duration: 8 months
Role: Design lead (team of six)
Prompt
In collaboration with HARMAN, a Samsung company, and Carnegie Mellon University’s Human-Computer Interaction Institute, my team and I had the opportunity to explore and define the user experience of autonomous vehicles. The domain is new and vast, and there were many avenues we could have taken, but we were motivated to improve the autonomous driving experience for those who would be affected most: the driver. Focused on Level 3-4 autonomy and a five-to-ten-year timeline, the team worked toward a product that aims to create an inclusive autonomous driving future.
Human-Vehicle collaboration
Summary: this is what we made
Key features
Provides the driver with different aspects of control in unique situations during the autonomous driving experience
Keeps drivers in autopilot mode, increasing vehicle safety and road predictability
Decreases driver’s cognitive load and fatigue
Engages the driver through preference input and interaction, increasing driving enjoyability
This shared control lets the driver direct the car without ever worrying about potential risks and consequences, as long as the car remains in autonomous mode. A clear delineation between the area for user action and the area for vehicle communication allowed information and interaction to flow dynamically across the interface.
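As a rough illustration of this idea (a minimal, hypothetical TypeScript sketch, not the production logic; all names and limits are illustrative), the interface forwards a driver request only while autonomous mode is active and the request falls inside the vehicle's safety envelope:

```typescript
// Hypothetical sketch of collaborative control: driver input is treated as a
// preference the vehicle honors only while it can do so safely.

interface VehicleState {
  autonomousModeActive: boolean;
  currentSpeedMph: number;
}

interface SpeedPreference {
  kind: "speed";
  deltaMph: number; // e.g. +5 to nudge the cruising speed up
}

// The safety envelope belongs to the vehicle, not the interface.
// The limits here are placeholders.
function withinSafetyEnvelope(state: VehicleState, pref: SpeedPreference): boolean {
  const target = state.currentSpeedMph + pref.deltaMph;
  return target >= 0 && target <= 80;
}

// Forward a preference only when the AV is driving and the request is safe;
// otherwise ignore it and let the car keep full control.
function handleDriverInput(state: VehicleState, pref: SpeedPreference): SpeedPreference | null {
  if (!state.autonomousModeActive) return null;
  if (!withinSafetyEnvelope(state, pref)) return null;
  return pref;
}
```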
Research
The team conducted research over several weeks, using multiple methods to better understand the autonomous vehicle landscape and its users (drivers). This research period allowed us to identify multiple avenues for further exploration and product opportunities. Through our research, the team found that the best experience in an autonomous vehicle is established when the driver has opportunities to exert control over the vehicle's AI.
Secondary research, literature review
Primary research
“Wizard of Oz” AV simulation
User interviews
Data analysis
Car study: Wizard of Oz AV simulation
To explore issues of the user’s sense of trust, safety, control, and enjoyment, we created a “Wizard of Oz” experiment simulating the experience of being a passenger in a self-driving car. We drove participants along a predetermined route that included questionable driving moments. A real driver, obscured by a foam-core divider, operated the vehicle at all times while the participant sat in the passenger seat. A facilitator in the back seat prompted the participant with questions and took notes, and cameras facing the participant and out the windshield recorded audio and video. Participants were given the impression that the driver was present only for liability reasons and that the vehicle was operating itself.
Interviews
Tesla Owners
We conducted phone interviews with eight Tesla owners to understand their experience with Tesla’s Autopilot, the most advanced semi-autonomous driving system currently available. These owners were our closest proxy for autonomous-car users, who do not yet exist.
Members of our team asked the Tesla owners why they like or dislike Autopilot, which driving situations make them uncomfortable or prompt them to turn it on or off, and whether they wish anything about the experience were different.
Car Enthusiasts
We also conducted phone and in-person interviews with nine car enthusiasts to learn what people love about driving and how we might retain that enjoyment in an autonomous driving system.
They were asked about their experience with cars, what they love about their current vehicles, their thoughts on the future of cars, and what they think about maintaining car culture in an increasingly autonomous future.
Data analysis – Affinity diagramming
After gathering all our interviews from Tesla drivers, car enthusiasts, and our WoZ participants, we set about converting this raw data into insights. We conducted affinity diagramming, turning 1,100+ data points into a series of hierarchical groupings to uncover patterns in user comments. These groupings were then summarized into six themes:
1. How might we facilitate moments that build trust?
2. How might the AV learn the driver’s preferences and incorporate feedback?
3. How might we communicate to the driver that they have agency?
4. How might we deliver information about the AV to the driver?
5. How might the driver personalize their experience in the AV?
6. How might we maintain an enjoyable driving experience in an AV?
Insights to ideas
With our areas of exploration in hand, the team engaged our clients at HARMAN in a creative workshop. Together, we built a creative matrix to generate as many ideas as possible, did some time-boxed sketching, and dot-voted on the most intriguing ideas.
Afterward, the team narrowed its scope to one idea by creating multiple low-fidelity prototypes and weighing time, technical feasibility, and overall interest.
Early experiments helped us home in on the concept of collaborative control, bridging the gap between the autonomous vehicle and the driver. Using this control, drivers would be able to express their preferences to the vehicle and adjust important aspects of driving in real time.
Designing the interface
Based on our earlier interviews, we had identified a number of key interactions that drivers wished to engage in during their drive, including:
speed adjustments
lateral lane position
passing maneuvers
binary decisions such as "go or don't go"
We developed the curved interface to support these unique interactions, giving the driver shared control with the vehicle; a rough model of these request types is sketched below. Before proceeding with digital design and development, the team created physical models and paper prototypes.
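To make the scope of these interactions concrete, here is a small, hypothetical TypeScript model of the request types the interface needed to support (the names are illustrative and not taken from the shipped prototype):

```typescript
// Hypothetical model of the driver-initiated requests the curved interface supports.

type SpeedAdjustment = { kind: "speed"; deltaMph: number };                           // nudge cruising speed
type LanePosition    = { kind: "lanePosition"; offset: "left" | "center" | "right" }; // bias within the lane
type PassManeuver    = { kind: "pass"; direction: "left" | "right" };                 // request an overtake
type BinaryDecision  = { kind: "decision"; accept: boolean };                         // "go / don't go" prompts

type DriverRequest = SpeedAdjustment | LanePosition | PassManeuver | BinaryDecision;

// Example: the driver asks the AV to pass on the left.
const passRequest: DriverRequest = { kind: "pass", direction: "left" };
```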
s-curve
The team began modeling the interface with different combinations of curve degree, size, and shape. Paired with simulated driving interactions, we tested these models with users and asked about their overall comfort, ease of use, and understanding. Using this qualitative data, the team was able to determine the initial shape and dimensions.
paper prototypes
medium-fidelity
Creating a new interface for autonomous vehicles presented unique problems: What do digital car controls look like and how will drivers know how to use them?
With these questions in mind, the design process included a number of iterations exploring different styles, colors, and visual elements.
User testing
The team conducted multiple rounds of user testing at different levels of fidelity to refine the interactions, language, and visuals we would implement. We recruited a diverse group of users with different backgrounds and driving experience.
Final design
The final prototype was created in Framer. We used an iPad along with an interactive projection onto a custom-made plexiglass touchscreen to demonstrate the final design.
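As an approximation of the kind of interaction prototyped (plain TypeScript rather than the actual Framer project, with hypothetical names and thresholds), a horizontal drag along the curve might be translated into a lateral lane-position request like this:

```typescript
// Hypothetical sketch (not the actual Framer code): translating a touch drag on
// the curved surface into a discrete lane-position request for the vehicle.

type LaneOffset = "left" | "center" | "right";

// Map a normalized horizontal drag position (0 = far left, 1 = far right)
// to one of three lane-position preferences. The thresholds are placeholders.
function dragToLaneOffset(normalizedX: number): LaneOffset {
  if (normalizedX < 0.33) return "left";
  if (normalizedX > 0.66) return "right";
  return "center";
}

// Example: a touch two-thirds of the way across the curve reads as "right".
console.log(dragToLaneOffset(0.7)); // "right"
```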