Opinion: What can E-scooters teach us about the future of automated vehicles?

Over 2020 and 2021, you may have noticed that we (at least in the UK) are seeing a huge upsurge in the use of E-scooters on our roads – personal vehicles that can reach speeds in excess of 30mph and are (for the most part) intended for use on public roads. Some of these E-scooters are privately owned, but many are offered as part of a ‘mobility as a service’ package, where the user hires a vehicle on a mile-by-mile basis. These vehicles promise better environmental outcomes, but are the trade-offs worth the risk to our safety?

https://www.flickr.com/photos/kristoffer-trolle/48263543577/ (CC by 2.0)

As an automated vehicle researcher, it’s hard not to notice some parallels between the roll-out of publicly available E-scooters and the challenges awaiting automated vehicles, particularly in relation to inappropriate use and the public’s understanding of the legal boundaries of these vehicles. Needless to say, it does not fill me with confidence about the future of automated vehicle transportation. Here’s a summary of my observations over the past few months.

Having used this service myself, I was concerned by the lack of clarity around where it was appropriate to use my E-scooter. I was not sure whether I should be in the centre of the road (like a motorbike) or whether I could use cycle lanes. Additionally, many users ride on the pavement to feel safe and away from traffic; however, this poses a tremendous risk to pedestrians, as collisions with these vehicles can be fatal. It is also a huge safety risk to the user, as pedestrianised areas are full of obstacles to collide with.

There are ways of guiding behaviour to align with legal requirements beyond simply asking users to read the terms and conditions, and oftentimes it’s better to use a prevention system or an interface that steers behaviour. For example, many services shut the vehicle down when a user attempts to ride it in communal areas such as parks – a clear enforcement of an operational boundary. Additionally, providing the user with clear visualisations of go/no-go zones, and prompting them to stay on the road, is essential to encourage appropriate use.
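To make the idea of an enforced operational boundary concrete, here is a minimal sketch of the kind of geofence check a scooter service might run before keeping the throttle enabled. This is purely illustrative and not any operator’s actual system; the park polygon, coordinates, and function names are all invented for the example.

```python
# Illustrative geofence sketch (hypothetical, not a real operator's code):
# cut power when the rider enters a no-go zone such as a park.

def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of vertices)?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count how many polygon edges a rightward ray from the point crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical no-go zone: a park, described as a polygon of (x, y) points.
PARK_NO_GO_ZONE = [(0, 0), (4, 0), (4, 3), (0, 3)]

def throttle_allowed(position):
    """Allow power only while the scooter is outside every no-go zone."""
    return not point_in_polygon(position, PARK_NO_GO_ZONE)

print(throttle_allowed((2, 1)))  # inside the park -> False
print(throttle_allowed((5, 1)))  # outside the park -> True
```

A real deployment would use GPS latitude/longitude and far more complex zone shapes, but the principle is the same: the boundary is enforced by the system rather than left to the user’s reading of the terms and conditions.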

Other breaches of safe conduct include riding with multiple users (only yesterday I saw three individuals on a single E-scooter) and riding without a helmet (strongly recommended in official guidance). It seems to me that nearly every e-scooter user is currently breaching one or more safety guidelines, or the law, when using these vehicles. There is very little data available on how dangerous these modes of transport are in practice; however, over time we should expect to gain a clearer picture of how this trial scheme has fared.

For the future of automated vehicles, these exact issues will become more prevalent, and the consequences may be far less forgiving given the speed and mass of the vehicles involved. For example, drivers may be inclined to activate automated features where it is not safe to do so, or attempt to leave the driving seat when they may be required to take control of the vehicle at a moment’s notice. For these vehicles, much like E-scooters, clear legal frameworks and operational boundaries will need to be communicated to the user, not merely baked into our legal system.

Let’s learn from E-scooters and pay close attention to how the public interacts with this technology. Safety centres not only on clear policymaking, but also on clear communication to the public and the user. It is important to invest in education, infrastructure, and smart user-centred design to ensure that injuries and deaths are not incurred needlessly, and that these types of services are catered for rather than merely added to the chaotic fray of our modern transport system.

Supporting articles:

BBC. (2021). When and where can I ride an e-scooter legally? https://www.bbc.co.uk/news/uk-48106617

Express. (2021). Electric scooters: Gaps in safety knowledge could leave road users with ‘serious injuries’. https://www.express.co.uk/life-style/cars/1446324/electric-scooters-road-safety-risk-new-driving-law

Leicestershire Live. (2021). Car slams on brakes to avoid colliding with e-scooter being ridden illegally in Hinckley. https://www.leicestermercury.co.uk/news/local-news/car-slams-brakes-avoid-colliding-5492105

Yahoo! News. (2021). E-scooter rider who killed elderly cyclist after collision pleads guilty. https://sg.news.yahoo.com/escooter-rider-killed-elderly-cyclist-collision-pleads-guilty-055512109.html

Can interpersonal (human-to-human) communication inform the future of autonomous vehicles?

Imagine that you wake up and get ready for work. You exit your home and begin your commute in an autonomous vehicle. You are greeted by a virtual assistant, who asks you how you are feeling today and where you’d like to travel. During this journey, you and the autonomous vehicle are expected to act as a team, and each of you may control the vehicle at different stages of the journey. Ultimately, you are both partially responsible for the vehicle’s safe operation – depending on who is in control and who holds liability (a tricky topic, and one best left for another blog post!).

What does this virtual assistant look like? How does it communicate? How emotionally connected are we to this technology? In an emergency, how does the assistant handle the situation to keep you and others safe? Many of these questions are yet to be answered, and the research community is divided over whether we can apply how humans naturally communicate with one another to answer them.

Developing an automation assistant for semi-autonomous vehicles (research article and book coming soon!)

Key works such as ‘The Media Equation’ by Reeves and Nass (1996) suggest that we treat machines as social agents, and that we often exhibit feelings and behaviours analogous to those in our interpersonal relationships, such as empathy, frustration, and politeness. Others argue that how we treat humans is fundamentally different from how we treat machines, for example from a harm-reduction perspective (i.e., we are less concerned about harming a machine than another human; Bartneck et al., 2005). Those on this side of the debate state that communication between humans cannot be readily replicated by technology. In the middle ground sit many influential works that began by investigating interpersonal communication and were later repurposed for human-computer interaction, as the benefits of this work were realised alongside developments in technology (e.g., Clark, 1996; Klein et al., 2004; 2005).

Of course, we are fundamentally limited to current or past technology to ground this debate. But what does it mean for the future of AI and autonomous technology? As virtual assistants become smarter, more efficient, and perhaps more aware, the proposition that interpersonal communication can benefit the human-robot interaction community may become more compelling – if only to understand how we can improve etiquette, communicate information effectively, and promote natural communication.

During my doctoral research, I investigated how humans communicate with one another when handing over safety-critical tasks in areas such as healthcare, aviation, and control rooms (Clark et al., 2019b). I wanted to understand how professionals, such as ambulance staff handing over a patient to an intensive-care unit, used language, what strategies they preferred, and ultimately, what information they thought was critical to operational safety. I replicated a handful of strategies in an autonomous vehicle simulation and found that the lessons I had learnt from human-human communication, specifically in healthcare, were beneficial not only to human-computer interaction, but to an entirely different domain of study (Clark et al., 2019a). The source material of human communication provided me with communication strategies that have now taken the form of an in-vehicle automation assistant.

Replicating human-human communication in an ‘autonomous’ vehicle
(Clark et al., 2019a)

My new book, ‘Human-Automation Interaction Design: Developing a Vehicle Automation Assistant’, is expected to be published later this year. You’ll find the details of my journey from human communication to an in-vehicle interface, and all the steps in between, including literature reviews, user workshops, and experiments.

Keep an eye on my work, and I look forward to bringing you more content soon!


Bartneck, C., Rosalia, C., Menges, R., & Deckers, I. (2005). Robot abuse – a limitation of the media equation. Interact 2005 Workshop on Abuse, Rome, Italy.

Clark, H. H. (1996). Using language. Cambridge: Cambridge University Press.

Clark, J. R., Stanton, N. A., & Revell, K. M. A. (2019a). Conditionally and highly automated vehicle handover: A study exploring vocal communication between two drivers. Transportation Research Part F: Traffic Psychology and Behaviour, 65, 699-715.

Clark, J. R., Stanton, N. A., & Revell, K. M. A. (2019b). Identified handover tools and techniques in high-risk domains: Using distributed situation awareness theory to inform current practices. Safety Science, 118, 915-924.

Klein, G., Feltovich, P. J., Bradshaw, J. M., & Woods, D. D. (2005). Common ground and coordination in joint activity. Organizational Simulation, 53, 139-184.

Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a “team player” in joint human-agent activity. IEEE Intelligent Systems, 19(6), 91-95.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people. Cambridge, UK: Cambridge University Press.