Light it Up: Evaluating Versatile Autonomous Vehicle-Cyclist External Human-Machine Interfaces

The social cues drivers exchange with cyclists to negotiate space-sharing will disappear as autonomous vehicles (AVs) join our roads, leading to safety concerns. External Human-Machine Interfaces (eHMIs) on vehicles can replace driver social signals, but how these should be designed to communicate with cyclists is unknown. We evaluated three eHMIs across multiple traffic scenarios in two stages. First, we compared eHMI versatility, acceptability and usability in a VR cycling simulator. Cyclists preferred colour-coded signals communicating AV intent, easily seen through quick glances. Second, we refined the interfaces based on our findings and compared them outdoors. Participants cycled around a moving car with real eHMIs. They preferred eHMIs using large surfaces on the vehicle and animations reinforcing colour changes. We conclude with novel design guidelines for versatile eHMIs based on first-hand interaction feedback. Our findings establish the factors that enable AVs to operate safely around cyclists across different traffic scenarios.


INTRODUCTION
Cyclists share the road with motorised vehicles, encountering them in various traffic scenarios, such as intersections and roundabouts [1]. This exposes them to potential conflicts, as cyclists and drivers seek to occupy the same space simultaneously [22]. These conflicts can pose significant dangers if not resolved; statistics from the UK indicate that over 60% of vehicle-cyclist collisions between 2015 and 2020 occurred at intersections and roundabouts [7]. Clear communication is crucial for cyclist safety on the road; there were over 4,000 vehicle-cyclist collisions in the UK between 2015 and 2020 because one of the road users failed to interpret the other's intentions correctly [7], prompting cyclists and drivers to rely on social communication cues, such as facial expressions and hand gestures, to navigate through space-sharing conflicts safely [1].
As autonomous vehicles (AVs) become a part of our roads [27], human drivers and the social cues they provide will disappear; cyclists and other road users will no longer be able to rely on current social interactions to negotiate the use of road space safely [22]. This could lead to more ambiguities and dangerous encounters. In response, the automotive industry and the AutomotiveUI (Automotive User Interfaces: auto-ui.org) community have suggested external Human-Machine Interfaces (eHMIs) to replace these missing social cues [8]. eHMIs are "displays of any modality placed on the vehicle's exterior" [3]. Examples of eHMIs include LED light strips on the vehicle's bonnet or a speaker mounted on the roof. These have primarily been developed and evaluated for scenarios involving pedestrians encountering AVs at crossings, where the focus is on the vehicle's front [8]. However, cyclists present distinct requirements and challenges; they can be positioned anywhere around a vehicle and travel at higher speeds. Riders will encounter AVs in many traffic scenarios, including spontaneous ones like lane merging [1,17,18]. eHMIs designed for AV-cyclist interactions must be versatile, meaning they must function consistently across all traffic scenarios and provide clear communication with cyclists throughout their journeys.
AV-cyclist eHMI research has gathered requirements for these interfaces based on how cyclists and human drivers currently communicate [1,2,5], designed early concepts [3], and evaluated them in individual traffic scenarios [18]. To extend this, we conducted a two-stage evaluation of eHMIs in practice. Three versatile eHMIs were compared: a light ring around the vehicle, a rooftop emoji display, and on-road projections. These were tested across five traffic scenarios critical for interactions between human-driven vehicles and cyclists [1]: (1) Controlled Intersections, (2) Roundabouts, (3) Uncontrolled Intersections, (4) Lane Merging and (5) Bottlenecks. We measured the versatility, acceptability and usability of eHMIs through measures of cyclist perception (questionnaires) and behaviour (speed and shoulder checks).
Stage 1 used a virtual reality (VR) cycling simulator with participants encountering AVs using eHMIs while navigating the five scenarios. Results indicated that cyclists preferred colour-coded signals from the AV, with red indicating the AV will not yield and green that the AV will yield and give the cyclist right-of-way. A second iteration of each eHMI was developed based on this feedback and evaluated in Stage 2, a Wizard-of-Oz study conducted outdoors, with cyclists riding around a real moving car using actual eHMIs in the traffic scenarios. Cyclists preferred eHMIs using the entire vehicle body as a signalling platform rather than a single location, such as the roof, as they can infer the AV's message through quick glances. We used our findings to develop guidelines assisting designers in contributing versatile AV-cyclist eHMIs suitable for real-world use. We contribute:
• Two empirical evaluations (one in VR, one with an actual vehicle) investigating the versatility, acceptability and usability of two iterations of novel AV-cyclist eHMIs through cyclist perception and behaviour;
• Insights into the features (e.g., animation, colour and placement) that enhance AV-cyclist eHMI effectiveness;
• Novel design guidelines for versatile AV-cyclist eHMIs;
• The first versatile AV-cyclist eHMI based on the guidelines.

RELATED WORK
Introducing self-driving vehicles without an approach to resolving space-sharing conflicts with cyclists (and other vulnerable road users, VRUs) could have significant safety implications, as they encounter AVs across many different traffic scenarios with varying traffic control levels [17]. This was highlighted in real-world studies by Pelikan [27] and Pokorny et al. [28], who observed autonomous shuttle bus-cyclist interactions and found that the absence of a human driver or an interface to communicate with the cyclist caused many issues in resolving space-sharing conflicts. The very cautious driving style of the buses made their intentions unclear, meaning that cyclists hesitated to pass them, resulting in the buses making hard stops and cyclists being forced into oncoming traffic. These findings offer real-world evidence that there needs to be some facilitator to communicate the AV's intent and maintain cyclist safety on the road. Hagenzieker et al. [16] compared cyclists' perceptions of AVs vs. human-driven vehicles by asking riders to judge photographs of vehicle-cyclist encounters. Participants were more confident that a human-driven vehicle was aware of them due to the availability of social cues; this suggests that there needs to be some form of explicit communication between an AV and a cyclist for these vehicles to replace human-driven ones. These foundational studies demonstrated the need for AV-cyclist interfaces and paved the way for research to explore their design space and requirements.
Both Berge et al. [5] (interviews with cyclists) and Al-Taie et al. [2] (online survey) explored early requirements and potential placements of AV-cyclist interfaces. Cyclists wanted reassurance from AVs yielding to them when they have right-of-way; this is where most interactions happen. They preferred displays placed on the vehicle or environment rather than the bike or cyclist. There was great variability in cyclists' characteristics (e.g. experience or carried devices), and cyclists were already used to interfaces on the environment (e.g. traffic lights) or vehicle (e.g. directional indicators). This narrowed design space also corroborates the emerging consensus that eHMIs are a promising solution to facilitating AV interactions [4,8,20]. Holländer et al. [17] formed a taxonomy of VRUs that differentiated cyclists from others. Cyclists move at higher speeds with physical effort, have less interaction time, and, unlike pedestrians, can be anywhere around a vehicle, not just the front. eHMIs must accommodate these requirements. However, in their literature review, Dey et al. [8] found that most eHMIs were designed and evaluated according to pedestrian needs, who encounter the vehicle's front at crossings. The most mature example is Dey et al.'s [9] lightband (an LED strip on the AV's front), which was evaluated with pedestrians in VR [13] and at outdoor crossings [12]. They also conducted an online survey to identify the colours and animation patterns the lights should use to communicate yielding at crossings [9]. Pedestrians ranked green as the best. The authors still advised using cyan, a neutral colour without a predetermined meaning, making messages unlikely to be misinterpreted as instructions from the vehicle. They acknowledged, however, that cyan signals may cause ambiguity and need to be learned. It is unknown whether this generalises to cyclists, and we addressed this gap by evaluating a variation of lightband catered to cyclists [3].
Berge et al. [4] reviewed cycling support systems, including AV interfaces. A minority were eHMIs, and only two (Tracker [11] and CommDisk [32]) could facilitate interactions anywhere around a vehicle. However, cyclists were not the target road users for these, and unique requirements, such as versatility, were not considered. The authors concluded that more work should go into designing and evaluating eHMIs catered for cyclists. We address this limitation by taking an iterative design approach to evaluate and refine eHMIs based on cyclists' experience of interacting with them. Al-Taie et al. [1] used an in-the-wild approach to gather requirements for AV-cyclist interfaces (including eHMIs) based on current interactions with human drivers. First, they observed driver-cyclist encounters in multiple traffic scenarios. Then, they conducted a naturalistic study with cyclists wearing eye-trackers. Over 50% of encounters resulted in interaction, providing real-world evidence that AV-cyclist interaction must be facilitated. Most interactions happened when cyclists had right-of-way, and drivers should have yielded to them. The road users interacted differently between traffic scenarios, cementing versatility as a key issue separating cyclist interfaces from pedestrian ones and showing that evaluations of cyclist interfaces should cover multiple traffic scenarios, an approach we took in this paper.
Al-Taie et al. [3] then conducted design sessions with cyclists and AutomotiveUI experts to develop eHMIs around a real car. Each session resulted in two designs for a specific traffic scenario, e.g. a controlled intersection. The authors synthesised a taxonomy showing different eHMI features to combine overlapping ones for individual traffic scenarios and contributed the first set of versatile AV-cyclist eHMIs. These were a cyan light bar around the car using animation patterns to communicate AV intent and awareness of cyclists, a rooftop display communicating through emoticons, and an on-road projection showing riders the AV's intentions. See Section 3.2 for a detailed explanation of the eHMIs. We used these as a starting point in Stage 1 of the investigation as they address the most recent AV-cyclist eHMI requirements [1,3,4,17] and are the only available designs which consider versatility [4]. However, they were never evaluated in practice; their usability, versatility and overall effectiveness are unproven. Evaluation is a critical next step in establishing their real-world suitability. Having cyclists use them in practice would reveal areas for refinement and uncover new requirements and guidelines for eHMIs. It would show, for the first time, how results with human drivers [1] generalise to interactions with AVs. It would also validate Al-Taie et al.'s [3] method for designing versatile eHMIs, opening doors to developing new ones. The designs use features, such as a cyan lightbar, that were effective with pedestrians [8]. Our evaluation would show how cyclists react to these and provide a starting point for synthesising eHMIs that function around multiple road user types.

Evaluating AV-Cyclist eHMIs
Cycling interfaces, such as augmented reality (AR) headsets [34], vibrating helmets [33] or eHMIs [8], are commonly evaluated in simulated or controlled outdoor environments. Hou et al. [18] evaluated five AV-cyclist interfaces, two of which were eHMIs, in a VR simulator. Participants cycled in the virtual world and merged lanes with an AV behind them across different interface conditions. They were asked about their confidence in performing the lane-merging manoeuvre and the perceived usefulness of each interface. Shoulder checking and stopping behaviour (i.e. whether the cyclist let the AV pass) were also measured. The authors found that having an AV-cyclist interface improved participant confidence and performance. eHMIs placed on specific car areas (e.g. windows or windscreen) did not perform well compared to ones using large surfaces, such as road projection, as they diverted cyclists' attention from the road. However, this work only explored lane merging, so the versatility of their designs is unknown. Our paper widens the scope and evaluates eHMIs in five scenarios with different characteristics. A VR simulator also showed promise in examining these interfaces in a controlled setting without any practical, safety or environmental concerns, e.g. visibility issues for road projections [36]. It also allowed the authors to use SAE Level 5 AVs (no human driver present in any traffic scenario [31]) and to switch easily between high-fidelity interface implementations. We took this approach in the first stage of our investigation. However, we followed it up with a real-world evaluation of the eHMIs to understand the practical limitations of implementing them and to allow for more natural cycling behaviour.
Matviienko et al. [24] developed an augmented reality (AR) cycling simulator deployed on a HoloLens 2 to evaluate AV-cyclist interfaces that use AR. The authors only considered encounters at uncontrolled intersections, so how the interfaces performed beyond this scenario is unknown. They found that interfaces improved perceived safety and cycling performance, as cyclists proceeded at the intersection with smaller gaps between them and the AV when an interface was used. The AR simulator helped trigger real cycling behaviours, with participants cycling on a moving bicycle in physical space. We considered this approach; however, the field-of-view limitations of current AR headsets and the need to conduct the study in a dark indoor space motivated us to proceed with a two-stage investigation that used VR and outdoor space. A VR simulator allowed us to overcome the field-of-view and immersiveness issues of AR simulators without placing any practical or environmental limitations on the eHMIs, and the outdoor study allowed participants to appreciate riding around real eHMIs mounted on a real car with greater ecological validity [36].
Some previous research has conducted outdoor evaluations of cycling interfaces; however, these are rarely conducted around moving cars. Vo et al. [33] evaluated a vibrating helmet that warned cyclists about nearby obstacles such as cars. Participants cycled on a 20 m outdoor track. An experimenter controlled the helmet's cues remotely via Bluetooth. Participants were asked to state the direction and proximity of an obstacle based on the helmet's haptic cues. However, there were no real obstacles around the cyclist due to safety concerns; we used a real moving car in our investigation's second stage to trigger natural responses from cyclists, also allowing us to measure cycling behaviour such as speed changes and shoulder checks. Matviienko et al. [23] conducted a two-step study evaluating cues to assist child cyclists in navigation. These were first explored in a screen/projector-based cycling simulator, followed by a test-track study outdoors. This motivated us to take a similar direction, but we took an iterative design approach; our interfaces were revised based on participant feedback from the simulator evaluation before moving to the real-world study.
Stage 2 of our investigation required us to convince participants that they were riding around a driverless car to trigger natural interaction behaviour. We took Rothenbücher et al.'s [29] Ghost Driver method to hide the driver in a car seat costume and produce the illusion that the car was autonomous. This was used in Wizard of Oz studies investigating AV-pedestrian interaction. For example, Dey et al. [12] used Ghost Driver to evaluate lightband; a car (with the eHMI) approached a real, closed-off pedestrian crossing, and participants indicated their willingness to cross using a handheld slider. The eHMI helped to resolve ambiguity; participants were more willing to cross when the car used an eHMI. We took a similar approach in the second stage of our investigation. We show, for the first time, how Ghost Driver can be used across multiple traffic scenarios to evaluate and compare multiple eHMIs with cyclists directly interacting with an 'autonomous' vehicle (as opposed to, e.g., using a slider), allowing us to gain novel insights into how cyclists interact with different eHMIs in a real-world setting.

Summary and Research Questions
eHMIs are a promising solution to facilitate the AV-cyclist interactions necessary to navigate future traffic scenarios [1-3, 5]. However, there is no thorough evaluation of these interfaces and how riders interact with them across traffic scenarios with varying levels of traffic control [8]. Existing work has instead focused on requirements gathering [1,3] and on evaluating interfaces in a single scenario [18]. Work with pedestrians has evaluated eHMIs in VR and real-world settings [8,12] and shown both to be effective methods. However, cyclists interact with vehicles in many ways that differ from pedestrians. In this paper, we scale up these approaches to evaluate three eHMI designs across five traffic scenarios. We answer the following research questions:
RQ1 How versatile, acceptable and usable are eHMIs in terms of cyclist perception?
RQ2 How versatile, acceptable and usable are eHMIs in terms of cycling behaviour?

STAGE 1: EHMI EVALUATION IN A VR CYCLING SIMULATOR
Three eHMI designs were evaluated across five traffic scenarios, allowing us, for the first time, to test the versatility of eHMIs and bring them closer to real-world use.

Participants
We recruited 20 participants (4 Female, 16 Male; Mean Age = 29, SD = 6.6) through social media advertising. Ten cycled at least once a week, two at least once a month, five multiple times a year, and three once a year or less. Two participants had cycled in VR before. All had experience of riding in Glasgow (UK), on which our simulator was based. Participants were compensated with a £10 Amazon voucher.

Apparatus
The study used a virtual reality (VR) cycling simulator (see Figure 2) composed of a Giant Escape 3 size-medium hybrid bicycle mounted on a Wahoo Kickr Snap smart trainer. Similar to Hou et al.'s [18] simulator, the wheel-on trainer allowed cyclists to use the bike's back brake in the virtual environment without any alterations to the bike. A Coospo Bluetooth speed sensor attached to the back-wheel hub controlled speed in VR. We used a Meta Quest Pro headset to display the virtual world and measure gaze behaviour (using its eye-tracker) during the study. As in Hou et al.'s [18] setup, the headset's left controller was attached to the handlebar centre to translate turn angles into the virtual world according to the controller rotation. The virtual environment was developed using Unity3D 2021.3.29f1; the EasyRoads3D package was used due to its realistic textures and UK-like road infrastructure assets. A fan was placed 60 cm in front of the participant to combat simulator sickness and increase immersion by simulating headwind [25]. An iPad was used to complete post-condition surveys hosted on Qualtrics.
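The sensor-to-simulator pipeline can be illustrated with a short sketch. This is our own simplification, not the simulator's actual code: the function name and the wheel circumference value are assumptions (a typical 700c hybrid tyre), and the real wheel should be measured.

```python
# Sketch: converting hub revolution counts from a wheel speed sensor
# into rider speed for the virtual world.

WHEEL_CIRCUMFERENCE_M = 2.13  # assumed value for a typical hybrid-bike tyre


def speed_mps(revolutions: int, dt_seconds: float) -> float:
    """Rider speed in metres per second from revolutions counted over dt_seconds."""
    if dt_seconds <= 0:
        return 0.0
    return revolutions * WHEEL_CIRCUMFERENCE_M / dt_seconds
```

For example, three revolutions counted over 1.5 s would correspond to 4.26 m/s, which the simulator would apply to the virtual bicycle on each update.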

Implementing the eHMIs
We evaluated Al-Taie et al.'s [3] proposed eHMI designs (see Figure 3) as a starting point. This section explains how these were implemented in VR and why each concept was useful to evaluate. The interfaces were placed on a 2019 Citroen C3 3D model. The eHMIs used only visual cues to present information, avoiding potential real-world safety issues with cyclists wearing headphones or audio cues from the eHMIs masking other sounds in the environment (e.g. sirens). They work as follows:
• Safe Zone: Uses red (AV not yielding) and green (AV yielding) projections to communicate AV behaviour, plus a bonnet display of Stop/Proceed (white arrow on a blue background) traffic signs synchronised with the projections. Previous work suggested that on-road projections suit cyclists: they cover a large surface area and are easy to spot through quick glances [1]. Projections using red/green were evaluated with cyclists for lane merging and were found usable [18]. However, how they perform beyond lane merging is unknown;
• Emoji-Car: Placed on the roof, a smaller surface still visible from anywhere around the vehicle, so it suits cyclists' requirements [1,4]. This was important to compare with eHMIs using larger surfaces attached to the car;
• LightRing: A cyan light bar around the vehicle's body, using animation patterns to communicate the AV's intent and awareness of cyclists [3]. Light bands were effective with pedestrians [8,9], but it is unknown whether this generalises to cyclists.
We also included a No eHMI baseline: some results with pedestrians suggested that driving behaviour alone, without an eHMI, may be enough to communicate intent at crossings [19]. However, it is unknown how these findings generalise to cyclists.
The eHMIs communicate different messages (e.g. awareness or autonomous mode) using different cues, such as emojis or animation. This allowed us to compare complete designs catered to cyclists' expectations and needs and gave valuable insights into the types of colour schemes, symbols, and animations eHMIs should use. The eHMIs started reacting to the cyclist when they were 20 meters away; this distance was determined through eight pilot tests.
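The activation logic above can be sketched as a small distance-triggered state function. This is an illustration under our assumptions (the function and colour names are hypothetical); only the 20 m threshold and the red/green yielding mapping come from the study description.

```python
import math

ACTIVATION_DISTANCE_M = 20.0  # distance at which the eHMIs started reacting


def distance_m(av_xz, cyclist_xz):
    """Planar distance in metres between the AV and the cyclist."""
    return math.hypot(av_xz[0] - cyclist_xz[0], av_xz[1] - cyclist_xz[1])


def safe_zone_signal(av_xz, cyclist_xz, av_yielding: bool):
    """Colour the projection would show; None while the cyclist is out of range."""
    if distance_m(av_xz, cyclist_xz) > ACTIVATION_DISTANCE_M:
        return None  # eHMI idle
    return "green" if av_yielding else "red"
```

In the simulator, a check like this would run every frame against the cyclist's tracked position, switching the projection on as the rider enters the activation radius.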

Study Design
This within-subjects study had two independent variables: Scenario and eHMI. Participants used the VR simulator to interact with an SAE Level 5 AV using each eHMI across five Scenarios: (1) Controlled Intersection, (2) Roundabout, (3) Uncontrolled Intersection, (4) Lane Merging and (5) Bottleneck (road users moving toward each other in a narrow lane). These often prompted human driver-cyclist communication [1]. Each scenario has different characteristics, e.g. traffic lights or AV position, allowing us to investigate eHMI versatility. Scenarios were grouped into four tracks, one for each eHMI condition. Each track was a straight 1 km two-lane road. Riders cycled in the left lane and had the right of way at intersections and roundabouts. Tracks contained each of the five scenarios where the AV yielded, plus two additional ones where the AV did not yield (see Figure 4). These two were excluded from analysis and used to ensure cyclists paid attention and did not assume the car would always yield. Tracks had a random Scenario order. Participants navigated the seven scenarios, placed 100 m apart, until the track's end. All AVs in one track had the same eHMI. The eHMI sequence was counterbalanced using a Latin square. Scenarios were modelled after video footage of cycling in the city of Glasgow [1]. UK traffic features were used. Lane Merging had obstacles requiring cyclists to enter from the right lane and exit from the left while merging lanes with a moving AV behind them. Bottleneck had parked cars on both sides; participants cycled in a narrow lane between them with the AV approaching, and one road user had to steer away. At intersections and roundabouts, the AV accelerated to 30 mph (standard UK speed in urban areas) when it was 50 meters from the cyclist and stopped 50 cm behind the give-way line if yielding. It accelerated to 25 mph in Lane Merging and decelerated to 10 mph when yielding. The AV drove at 15 mph in Bottleneck, steered to the left (between two parked cars) and stopped when yielding. The vehicle maintained speed when not yielding in all scenarios. Controlled Intersection had red lights for 30 seconds in the non-yielding condition. AVs used directional indicators in Roundabout and Bottleneck. We collected the following data:
• Post-scenario questionnaire. To measure the versatility aspect of RQ1, NASA TLX was used for an interaction's workload, and five-point Likert scale (strongly disagree-strongly agree) questions asked: "The AV was aware of my presence" and "I was confident in the AV's next manoeuvre". These were derived from work showing that AV awareness and intent are key for AV-cyclist interaction [3,22].
• Cycling behaviour. We addressed RQ2 by measuring speed (meters per second) and shoulder checks (Unity camera (head) Y-axis rotation > 45°; determined through eight pilot tests). These were logged every second while navigating each scenario. We also collected gaze data: the number of fixations on areas of interest (AOIs), covering vehicle (e.g. windscreen) and traffic control features (e.g. traffic lights; see Figure 6).
• Post-track questionnaire. We measured the acceptability aspect of RQ1 using the Car Technology Acceptance Model (CTAM) [26] and usability with the User Experience Questionnaire - Short Version (UEQ-S) [30]. These were previously used to evaluate cycling interfaces and pedestrian eHMIs [10,35].
• Qualitative data. Post-study semi-structured interviews were used to contextualise the findings. Participants discussed and ranked each eHMI. They highlighted any points for improvement, discussed the different scenarios and identified ones that they felt needed/did not need eHMIs.
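Offline, a shoulder-check count could be derived from the 1 Hz head-rotation log along these lines. This is a sketch under our assumptions: the 45° threshold comes from the study, but the debouncing rule (a run of consecutive over-threshold samples counts as one check) is illustrative, not necessarily the analysis code used.

```python
def count_shoulder_checks(yaw_samples_deg, threshold_deg=45.0):
    """Count shoulder checks in a log of head yaw angles (degrees from
    straight ahead, sampled once per second). A run of consecutive
    over-threshold samples counts as a single check."""
    checks, turned = 0, False
    for yaw in yaw_samples_deg:
        if abs(yaw) > threshold_deg:
            if not turned:
                checks += 1  # new excursion past the threshold
            turned = True
        else:
            turned = False  # head back within the forward cone
    return checks
```

Taking the absolute value treats left and right shoulder checks alike; a signed variant could separate the two if the scenario makes the AV's side relevant.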

Procedure
Each participant answered a survey on their demographics and cycling experience. The experimenter then briefed them about the study and showed them videos of the eHMIs, familiarising them with the signals before the study. They were then familiarised with the simulator; the experimenter ensured the participant was comfortable with the bike gear and saddle height by having them ride for three minutes with no headset. Each participant practised between 7 and 15 minutes of virtual cycling in a car park environment before starting the experiment. A start menu was shown in VR before each track, and the experimenter informed the participant which track and eHMI to select using the right headset controller based on the Latin square. The experimenter then reminded the participant of the eHMI signals and turned on the fan. The participant started cycling and navigated through the scenarios until reaching the track end. The VR app paused after each scenario, and the experimenter read out questions from the post-scenario questionnaire; the participant answered verbally and unpaused the app using the headset controller. After each track, the participant took off the headset and had a break while answering the post-track questionnaire on a tablet. This was done four times until they encountered all eHMI conditions. A semi-structured interview followed the experiment. The study took approximately 90 minutes. The University ethics committee approved the study.
3.6.2 Cycling Behaviour. Data did not have a normal distribution, so we conducted a two-way ANOVA on Aligned-Rank-Transformed (ART) data exploring the effects of Scenario and eHMI on cycling behaviour, with post hoc comparisons using ART-C.
3.6.3 Gaze Behaviour. Figure 6 shows the effect of eHMI on participant gaze behaviours. We conducted a Chi-square test of independence to investigate the relationship between eHMI and fixation counts. Post hoc tests were performed using a Chi-square test of independence with a Bonferroni correction. We found a significant association between the variables (χ²(36, N = 10970) = 2187.8, p < .001). Post hoc comparisons showed that participants relied more on traffic control with No eHMI, as they fixated on traffic signs/lights and road markings more often than with Safe Zone (p < .0001), Emoji-Car (p < .0001) and LightRing (p < .0001). Results also showed that Safe Zone required less visual attention (fewer fixations on the eHMI display) than Emoji-Car (p < .0001) and LightRing (p < .0001).
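The omnibus test above is a standard Pearson chi-square on a condition-by-AOI table of fixation counts; a minimal sketch is below. The small table in the example is made up for illustration, and in practice a statistics library (e.g. scipy.stats.chi2_contingency) would also return the p-value.

```python
def chi_square(observed):
    """Pearson chi-square statistic and degrees of freedom for a
    contingency table (rows = eHMI condition, columns = AOI)."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    dof = (len(observed) - 1) * (len(observed[0]) - 1)
    return stat, dof


# Hypothetical counts: rows = condition (e.g. No eHMI, Safe Zone), columns = AOIs.
stat, dof = chi_square([[120, 200], [300, 90]])
```

For the post hoc pairwise comparisons, each two-condition sub-table is tested the same way, with the alpha level divided by the number of pairs (Bonferroni correction).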
3.6.5 Qualitative Results. We report themes based on the post-study interviews. We conducted an inductive, data-driven thematic analysis [6] of the interview transcripts (auto-transcribed by otter.ai and corrected by an author). Transcripts were imported into NVivo. One author extracted 42 unique codes from the data. Two authors sorted these into three themes based on code similarity. This was iterative; disagreements were discussed, and codes were remapped until resolved. Themes with two or more overlapping codes were reassessed and combined when necessary. Participant eHMI rankings are visualised in Figure 8; participants ranked Safe Zone as the best and No eHMI as the worst.
Theme 1: eHMI colours. Participants spoke about their experiences with colour-changing eHMIs communicating intent. They were comfortable with Safe Zone using red and green: "I would go with conventional colours [...] They are easy to understand. I felt safer" -P14. They felt that the colours were distinguishable and unambiguous: "Red and green. Super, super intuitive. I understood very quickly what was going on" -P2, and preferred colour changes over animation: "LightRing would be my favourite if it used Safe Zone's colours" -P16.
Theme 2: eHMI animations. LightRing's animations communicating AV intent were hard to distinguish on the move: "I didn't have time to concentrate on [the animation] while cycling" -P20. Some participants preferred animation to complement other distinguishable cues, such as colour changes.
Theme 3: eHMI state distinguishability. Icons could help eHMIs be more detailed and explicit in their signals. However, participants could not easily differentiate between the emojis in Emoji-Car from a distance. For example, P21 said, "I spent more time trying to identify the emoji", and P13 said, "Interpreting emojis from far caused a lot of ambiguity". This could be because they are too detailed and share some features: "There is too much detail in the emojis, so I had to concentrate more. They are very similar. They both use yellow and have a similar shape." -P20.

Discussion and Design Changes
All designs were versatile; there were no interaction effects between Scenario and eHMI conditions in any result. This validates Al-Taie et al.'s [3] method for designing versatile eHMIs. However, we found areas where improvement was necessary; just proposing new designs based on cyclist expectations is insufficient, and first-hand interaction feedback needs to be part of an iterative design process. We used our findings to refine each design (see Figure 9).
Controlled Intersection required a lower workload. Cyclists relied on traffic lights, even when eHMIs were present: "I didn't see the eHMI, I saw a green light and went." -P15. This was similar to findings with human drivers; cyclists fixated more on traffic lights than nearby cars [1]. AV position also impacted our results; participants experienced a higher workload, conducted more shoulder checks and were less confident in AV intent when it was behind them when Lane Merging, but were more comfortable when it was in front of them at Bottleneck, even though there was no traffic control.
Red/green signals were positively evaluated throughout. The colours were easy to recognise and distinguish; distinguishability is a key AV-cyclist eHMI feature due to the many fast-paced scenarios riders navigate. Most eHMIs use one colour and animation to communicate yielding intent [8]. However, we found that this hinders distinguishability and may not communicate enough information quickly. Our findings align with Hou et al.'s [18], where red/green signals performed well for lane-merging scenarios. Red/green are also useful for pedestrians [8,21], providing common ground for eHMIs accommodating multiple road user types.
However, due to the use of red/green in traffic lights, there is a risk of the signals being misinterpreted as instructions from the AV instead of its yielding intent. Nevertheless, participants were most confident in AV intent when the colours were used in all scenarios, and the signals performed well in negotiation-based scenarios, such as Bottleneck. Some examples in traffic show that the same colour can convey different meanings (e.g., amber for pedestrian crossings, traffic lights, directional indicators, hazard lights, and on-car blind-spot warnings). Human drivers also use hand signals similar to instructive ones from traffic control officers (e.g., waving for 'go' [15]) to communicate their intentions to cyclists rather than instruct them [1]; this effect may extend to red and green. The signal's perceived meaning may depend on its source, and our results captured this: "There is no rule telling me to stop. Even if it is red, the car will react to me if I go. A rule tells me to stop at traffic lights, and the lights communicate this rule." -P13. New traffic colours, such as cyan, could be a more effective approach to avoid red/green being misinterpreted [9], but these were not positively evaluated in our investigation. A longitudinal study with cyclists learning to interpret such signals might show different outcomes. However, our results showed they need suitable contrasts to be distinguishable and effective. A compelling area for future work is investigating suitable contrasting colours and comparing their performance with red/green eHMIs in different scenarios.
All refined designs incorporated red/green signals to enhance eHMI signal recognisability and distinguishability. We recognised the challenge for colourblind riders to differentiate between red and green, so we incorporated animations, patterns, or symbols into our designs to enhance accessibility. We drew inspiration from traffic lights using light positions (red-top and green-bottom) and animations (flashing amber) to convey meaning. Safe Zone was the most positively evaluated. The eHMI covered a large surface and used red/green signals to communicate intent. Al-Taie et al. [1] discussed the advantages of using the road as a design space.
Safe Zone led to fewer shoulder checks and reduced the workload. Eye-tracking data showed it was easily visible with quick glances; cyclists spent less time fixating on the eHMI than on the others. Participants did not pay much attention to the bonnet display used in Safe Zone. They were sometimes unaware of its presence; "There was something on the bonnet? I did not know" -P4. Therefore, we relocated the bonnet display to the roof and replaced the traffic signs with colours synchronised with the projected signals. This spread the signals throughout the AV area and emphasised the idea of having displays in cyclists' peripheral vision, making it easier for them to process the colours and information. To accommodate colour-blind cyclists, we incorporated patterns on the roof display, using vertical lines for green and crossed lines for red.

Emoji-Car
Cyclists were slower around Emoji-Car and performed more shoulder checks than with Safe Zone. This could be due to the display being on the roof, a location where current vehicles do not display interaction signals; participants were not used to this. They also paid greater attention to interpreting the icons than the colours in Safe Zone; eye-tracking data supported this. Qualitative feedback indicated that participants had difficulty distinguishing between emojis, requiring a higher workload. They were also confused by the lightning emoji and suggested an icon more aligned with standard traffic symbols: "I can't map lightning to anything meaningful" -P1. Therefore, eHMI signals must be easily distinguishable and understandable from a distance. Some participants incorrectly interpreted the top cyan light as a signal of the AV yielding, leading to potentially unsafe actions; P3 mentioned, "I saw the light on top and thought I could pass." The blinking arrow echoing directional indicators proved redundant and ambiguous, as participants were unsure whether it instructed them to turn or displayed the AV's turn direction.
We simplified Emoji-Car by keeping it focused on communicating the AV's intent and awareness. The revised version used red triangles to communicate non-yielding (found in traffic signs, suggesting caution) and green bicycle symbols for yielding. We removed the cyan light and blinking arrow to avoid confusing riders, with the eHMI only communicating necessary information. To address colour-blind cyclists, we relied on icons to differentiate signals. We deviated from Hou et al.'s [18] findings, where AV-cyclist interfaces placed on specific car areas did not perform well for lane merging, as we wanted to investigate roof-placed interfaces visible from around the vehicle, as recommended by previous research [1,4]. This approach aimed to balance visibility and conformity with existing interface placements, such as taxi signs.
LightRing

LightRing did not perform well; cyclists did not respond positively to a new colour (cyan) in traffic. Animations imposed a higher workload and were harder to distinguish than colours or icons. LightRing had a higher complexity; it incorporated features such as synchronising amber lights on the car's side with directional indicators, navy blue lights to indicate awareness, and animations communicating intent. This proved a hurdle, as cyclists preferred a more straightforward interface closer to Safe Zone. LightRing's lights were changed to pulse slowly in green when the AV detects and yields to the cyclist and flash quickly in red when not yielding. Animations complement colour changes rather than being the primary source of information. This also helps colourblind cyclists distinguish between yielding conditions, as the animations (speed-based rather than directional) are easier to differentiate [9]. Flashing animations are used in traffic, e.g. some pedestrian crossing signs flash before changing state. LightRing still communicates the AV's state using cyan; as the signal changes are more apparent with animations and colours, it will not display multiple signals simultaneously, as Emoji-Car did. Here, cyan is used to communicate a new message not currently communicated by human drivers.
Overall, red/green was a useful colour scheme for eHMIs to communicate easily distinguishable messages about the AV's yielding intent across various scenarios. More complex messages, such as echoing a directional indicator, only added to the workload of using an eHMI. We adjusted all three designs based on cyclist feedback and behaviours observed in the simulator to evaluate a second iteration in a real-world setting.

STAGE 2: WIZARD-OF-OZ EVALUATION WITH AN 'AUTONOMOUS' CAR
Cyclists encountered eHMIs presented on a real car across multiple traffic scenarios to evaluate the refined designs and, for the first time, explore how they may be realised through physical prototypes.

Participants
We recruited 20 participants (7 Female, 12 Male, 1 Non-Binary; Mean Age = 20.4, SD = 5.9) through social media advertising. Eleven cycled at least once a week, three at least once a month, two multiple times a year, and four once a year or less. Thirteen participants used their own bikes during the study. Five participated in the previous VR study. Participants were compensated with a £10 Amazon voucher.

Apparatus
We used a grey 2019 Citroen C3, the same car as in Stage 1. The driver wore sunglasses, black gloves and a car seat cover with holes for eyes and arms (see Figure 13). Participants never saw the driver, creating the illusion that the car was an SAE level 5 AV [29]. LED strips and an LED matrix were used to build the eHMIs (see Figure 10). They were plugged into the car's USB port and controlled by an experimenter via an iPad over Bluetooth. The matrix was placed on the roof using a custom-built panel on a removable rack (HandiWorld roof rack: handiworld.com/handirack/). The rack was present in all conditions; participants were told these were the AV's sensors. Participants only encountered the car's front or left side, so eHMIs were only visible in these directions. We used velcro on the car body to attach/detach eHMIs between conditions, white chalk to draw road markings on the ground, and traffic cones to represent obstacles (see Figure 12). Participants were given a Giant Escape 3 bicycle and a helmet if they did not have their own. The Tobii Pro Glasses 2 captured eye-tracking and head rotation (shoulder-checking) data. An iPhone 12 mini was placed on the handlebars to record speed using Cyclemeter.

Implementing eHMIs
All eHMIs were placed on the Citroen (see Figure 10) and controlled by an experimenter standing outside the vehicle. They were activated when the car reached specific marked locations for each scenario. They worked as follows:
• Safe Zone: We did not use projections because the study was outdoors in daylight, so they were barely visible. This was determined through an early pilot test comparing the visibility of the LED matrix, strip and projection (see Figure 11). We experimented with different projectors, including a Dell 1100MP projector with a high (>11,000) lumens value, but the road projections were still not visible in daylight. We also tried using red/green ambient light through 15,000 lumens LED torches stuck under the car, but they also had minimal visibility. Eventually, we used an LED light strip stuck around the bottom of the front half of the car; this was attached with velcro. This approach brought the lights close to the road surface and still emphasised the concept of Safe Zone being in cyclists' peripheral vision, especially when used with a roof display. The roof display (LED matrix) showed the red pattern seen in Figure 9 synchronised with red lights from the LED strip when the AV detected the cyclist but did not yield, and the green pattern with green LED lights when the AV was yielding.
• Emoji-Car: The LED matrix displayed three green bicycle icons (one on each side) if the cyclist had been detected and the AV would yield, and three red triangles resembling warning signs if not.
• LightRing: LED strips placed on the car's left (2 meters long) and front (1 meter long) using velcro. LEDs were always on in cyan, showing the car was autonomous and not reacting to the cyclist. They changed to green pulsing slowly (once per second) when the AV would yield and red flashing rapidly (twice per second) when not yielding.
• No eHMI: Baseline condition with no eHMI display present.
All displays were removed from the car or switched off.
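The LightRing behaviour above (steady cyan when cruising, a slow 1 Hz green pulse when yielding, a rapid 2 Hz red flash when not yielding) can be sketched as a small state-to-frame mapping. This is a minimal illustration, not the study's control code: the `AVState` names, the RGB values, and the exact waveforms (sinusoidal pulse, square-wave flash) are our assumptions.

```python
import math
from enum import Enum

class AVState(Enum):
    CRUISING = "cruising"          # AV driving autonomously, not reacting to the cyclist
    YIELDING = "yielding"          # cyclist detected, AV will give way
    NOT_YIELDING = "not_yielding"  # cyclist detected, AV keeps right of way

# Colours as (R, G, B); illustrative values, not the study's exact ones.
CYAN, GREEN, RED, OFF = (0, 255, 255), (0, 255, 0), (255, 0, 0), (0, 0, 0)

def lightring_frame(state: AVState, t: float) -> tuple:
    """Return the LED colour LightRing would show at time t (seconds)."""
    if state is AVState.CRUISING:
        return CYAN
    if state is AVState.YIELDING:
        # Slow pulse: 1 Hz sinusoidal fade between dark and full green.
        level = 0.5 * (1 + math.sin(2 * math.pi * 1.0 * t))
        return tuple(int(c * level) for c in GREEN)
    # Not yielding: rapid 2 Hz hard on/off red flash.
    return RED if (t * 2.0) % 1.0 < 0.5 else OFF
```

An experimenter-side controller would call `lightring_frame` on each refresh tick and push the colour to the strip; the pulse-versus-flash distinction keeps the two yielding states distinguishable even without colour perception.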

Study Design
A within-subjects design was used with Scenario and eHMI as independent variables. Participants cycled around a moving vehicle with the four eHMIs in four scenarios: (1) Roundabout, (2) Uncontrolled Intersection, (3) Lane Merging and (4) Bottleneck. We excluded Controlled Intersection as Stage 1 showed it did not require an eHMI.
The study commenced in a coned-off outdoor space (see Figure 12): a 60m straight road intersecting with a 50m road on the left. We drew lane-dividing lines replicating a two-lane road and used cones to mark participant start and endpoints. Participants cycled along the 60m road in all scenarios until they reached the marked endpoint, except in Roundabout, where they made a U-turn. They then cycled back to the start point. Like Stage 1, scenarios were grouped into tracks in random order. AVs used the same eHMI within a track. The eHMI sequence was balanced using a Latin Square.
The AV always yielded to maintain participant safety, but participants were shown both yielding and non-yielding states before each track and told that the AV might not yield. One driver was used for all sessions. They ensured a ≥1m distance from the cyclist, as the UK Highway Code advises. The driver accelerated to 20mph in Roundabout and Uncontrolled Intersection and stopped 50cm (marked using chalk) behind the give-way line. They drove at 15mph in Lane Merging and Bottleneck and decelerated (steered to the left in Bottleneck) according to the cyclist's speed to yield. Directional indicators were used in Roundabout and Bottleneck. Measures were similar to those in Stage 1. Participants answered the same post-scenario and post-track questionnaires. We collected cycling speed (meters per second; logged every second in each scenario), shoulder-checking (Tobii Glasses' gyroscope Y rotation >90°, determined through pilot tests), and eye-tracking data mapped to the AOIs using Tobii Pro Lab's AOI tool. A post-study interview was also conducted.
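The shoulder-check measure described above (head yaw past 90° on the glasses' gyroscope) reduces to a threshold-crossing count over the rotation stream. A minimal sketch follows; collapsing consecutive above-threshold samples into a single check is our assumption, not a detail reported in the paper.

```python
def count_shoulder_checks(y_rotation_deg, threshold=90.0):
    """Count shoulder checks from a stream of head-yaw samples (degrees).

    A check is counted each time |yaw| crosses above the threshold,
    matching the Stage 2 criterion (gyroscope Y rotation > 90 degrees).
    A run of above-threshold samples counts as one check (assumption).
    """
    checks, above = 0, False
    for yaw in y_rotation_deg:
        if abs(yaw) > threshold:
            if not above:       # rising edge: a new shoulder check begins
                checks += 1
            above = True
        else:
            above = False       # head back within range; ready for next check
    return checks
```

For example, a trace that swings past 90° twice yields `count_shoulder_checks([10, 95, 120, 30, 100, 20]) == 2`; taking the absolute value treats left and right checks alike.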

Procedure
Each participant met the experimenter in the outdoor space. They first answered a survey about their demographics and cycling experience. The experimenter briefed them about the study, instructed them about the different scenarios and showed them the start and endpoints. Before each track, the experimenter showed the participant how the eHMI worked (with the lights on the vehicle) so that they were familiar with the signals before interaction. Those who did not use their own bike ensured they were comfortable with the experiment bike. The experimenter checked they had appropriate safety gear, mounted the iPhone to the handlebars, and calibrated the eye-tracking glasses. The experiment started, and the participant moved to the starting point. They started cycling, and the driver started driving once they saw a thumbs-up from the experimenter. The experimenter controlled the eHMI to react to the rider at the appropriate moment. After each scenario, the participant returned to their starting point and answered the post-scenario questionnaire while the experimenter put the next scenario's obstacles on the road. After each track, the experimenter switched the eHMIs as the participant answered the post-track questionnaire. The experiment ended once the participant cycled on all four tracks and experienced all eHMI conditions. This was followed by an interview with the same structure as the one from Stage 1. The University ethics committee approved the study.

Results
We report the results using the same structure as Stage 1. We start by reporting our post-scenario and cycling behaviour results, followed by findings from the post-track questionnaire (acceptability and usability) and qualitative feedback. Non-significant post hoc results are included in the supplementary material for clarity.

Post-Scenario Questionnaire. The data did not have a normal distribution, so we conducted an Aligned-Rank Transform (ART) two-way ANOVA exploring the effects of Scenario and eHMI on our outcomes. Post hoc tests between Scenario and eHMI pairs were conducted using the ART-C method.
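For readers unfamiliar with ART, the core idea is to "align" the responses for one effect at a time (stripping out the other effects), rank the aligned values, and run a standard ANOVA on the ranks. Below is a minimal numpy sketch of the alignment step for one main effect in a two-factor design, following Wobbrock et al.'s procedure; real analyses would use the ARTool/ART-C implementations rather than this illustration.

```python
import numpy as np

def align_for_main_effect(y, a, b):
    """Align responses for factor A's main effect in a two-factor design.

    Each response is reduced to its within-cell residual (removing B and
    the A x B interaction), then A's estimated marginal effect is added
    back. The aligned values are ranked and would then be submitted to a
    standard ANOVA testing only factor A.
    """
    y, a, b = np.asarray(y, float), np.asarray(a), np.asarray(b)
    grand = y.mean()
    aligned = np.empty_like(y)
    for i in range(len(y)):
        cell = y[(a == a[i]) & (b == b[i])].mean()  # mean of cell (a_i, b_i)
        eff_a = y[a == a[i]].mean() - grand         # estimated effect of A at this level
        aligned[i] = (y[i] - cell) + eff_a          # residual + effect of interest
    ranks = aligned.argsort().argsort() + 1         # 1-based ranks (ties not handled)
    return aligned, ranks
```

A quick sanity check: in a balanced design where only factor B drives the response, the values aligned for A come out at zero, confirming B's influence has been stripped before ranking.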
Theme 1: eHMI placement. Placement is a key eHMI feature that enhances interface visibility [4]. Participants praised LightRing's placement on the AV body; "You see better because you can kind of see the edge of the LED strip from wherever" -P8. Emoji-Car, which was placed on the roof, was harder to recognise: "Compared to LightRing, then you have to really look at the roof to see the emoji" -P18.
Theme 2: eHMI redundancy. Stage 1 showed participants wanted simple signals to communicate yielding, but subtle redundancy complementing colour changes was successful. Pulsing animations in LightRing successfully reinforced the AV's yielding intent ("LightRing flashing drew attention to itself and different flashing speeds were easy to spot" -P4). Redundant messages presented on the top and bottom of the AV in Safe Zone were well received. For example, P20 said, "Always redundancy is better. The top and bottom displays accommodated that".

Theme 3: eHMIs and traffic control. eHMIs were helpful overall: "They're necessary. It adds clarity and reassurance" -P17. However, they were most valuable when right-of-way was up for negotiation with minimal traffic control: "It will benefit all these scenarios, but especially lane merging. I think I would say it's crucial" -P19.

Discussion
All refined eHMIs maintained versatility in real-world settings. However, their differences from No eHMI became more apparent than in Stage 1. No eHMI received the lowest ratings across all metrics. Cyclists were slower and performed more shoulder checks. Visual attention was more spread out; they relied on more AOIs to infer AV yielding. This could be because the study included a real vehicle, so cyclists may have felt less secure when encountering obstacles, and because design improvements made the effect of having an eHMI more noticeable. However, the differences between the displays (and their performance) were less prominent. This general trend was also observed with the five participants who experienced the eHMIs in Stage 1. Stage 2 results emphasise that eHMIs communicating clear, easy-to-understand, and distinguishable messages significantly improve AV-cyclist interaction.
LightRing received better feedback in some metrics, with participants expressing a greater sense of safety and ranking it as the most preferred eHMI. One contributing factor was its communication of the AV's state through cyan lights: "I liked the cyan colour telling me everything is fine" -P12. LightRing covered a larger AV surface; Stage 1 showed this was a desirable feature. The animations used in LightRing provided redundancy in conveying AV-yielding intentions, enhancing participant confidence. In comparison, Emoji-Car was not as well received. Its roof placement drew significant attention from riders, as indicated by eye-tracking and speed data; riders were slower than with the other eHMIs. The interface's use of icons rather than just colours added complexity. For example, P16 noted, "Using emojis stops it from using the entire display space, and the colours were less apparent." This finding aligns with Hou et al.'s [18], who noted that placing eHMIs on specific AV areas could divert cyclists' attention from the road. In comparison, participants could quickly infer signals from Safe Zone despite it being more abstract and relying solely on colour changes without icons or animations. The widespread distribution of lights (roof and car bottom) made them easier to locate through quick glances, as supported by eye-tracking data, which suggested that cyclists often looked at the car's centre, not just the eHMI itself, to interpret the signals. According to our results, the design changes to enhance the visibility of Safe Zone succeeded, even when no road projections were used.
Similar to Stage 1, scenarios with more traffic control, e.g. Roundabout, needed lower workloads than spontaneous ones, e.g. Lane Merging. Participants had greater confidence in the AV's intent at Roundabout. This can be attributed to the well-defined right-of-way rules in the UK Highway Code, making interactions more predictable. Give-way lines indicated where the AV would stop, and the AV's gradual slowing down gave cyclists more time to interpret implicit cues from driving behaviour. In comparison, Lane Merging was challenging and less predictable. Cyclists were in front of the moving vehicle and needed to conduct more shoulder checks. They had limited time to process signals while moving. Right-of-way was unclear; it was up to the AV to slow down and let them pass. The differences observed among the scenarios and how cyclists behave in them emphasise the challenges in achieving eHMI versatility. Despite this, we found that all scenarios would benefit from an eHMI; all required a higher workload when there was No eHMI.
Overall, Stage 2 showed a significant improvement in AV-cyclist interaction when eHMIs effectively convey clear, understandable, and easily distinguishable signals about the AV's intentions. Using contrasting colours allowed for clearer communication, demonstrating the fundamental concept of having a simple two-state colour encoding to communicate yielding intent. LightRing demonstrated the benefits of communicating messages through colour changes and animation from all around the AV, while Emoji-Car faced challenges because it was more complex due to its placement and use of icons, which required more attention from cyclists, and Safe Zone effectively balanced abstract signals with visibility enhancements.

LIMITATIONS AND FUTURE WORK
All road infrastructure was based on the UK Highway Code, and participants were UK-based. Our methods should be replicated in different traffic cultures to see if the same solutions are still effective. We used a city car (Citroen C3), so it is unknown how our findings generalise to other vehicles, such as SUVs or buses. Our findings are based on initial interaction: some participants experienced the eHMIs in both stages, but future work should consider a longitudinal study giving cyclists more time and experience with each interface to see if the experience changes the performance of the eHMIs. The evaluation only considered cyclists, as the eHMIs were designed for them. However, these must work with other road users, such as drivers and pedestrians. We identified factors that make eHMIs effective around cyclists and explored features previously used with pedestrians; future work can combine our results with those of other road users to design more inclusive interfaces.
Stage 1 participants were not moving in physical space, with no real obstacles around them. This could impact results such as perceived safety. There were also some rendering limitations with the Meta Quest Pro. Some displays were not clearly rendered from a distance due to the headset resolution. The Quest Pro has a similar resolution to, and higher pixel density than, common VR headsets (e.g., Quest 2), so these limitations apply across similar simulators. We overcame some of Stage 1's limitations in Stage 2. However, we did not conduct the study on real roads due to safety concerns. Future work should evaluate the eHMIs on real roads; this is important, as scenarios may be more complex. We focused on one-to-one encounters, but real scenarios may have multiple cars or cyclists, so eHMIs must be scalable. We chose single interactions to give baseline knowledge that others could extend to more complex ones. We focused on versatility, covering a broader range of scenarios. Both issues must be resolved for AVs to work effectively. Our eHMIs communicate yielding intent through colour changes; we hypothesise that they are already scalable, as they broadcast what the AV will do rather than what other road users should do.
Participants did not see the driver under the car seat costume in Stage 2 and behaved as they would around an AV. This was evident in the results; No eHMI significantly under-performed compared to when an eHMI was present. Qualitative feedback also emphasised this: "It's so hard with no driver in the car!" -P3 and "I couldn't see anything! There were no signals." -P11. Due to safety concerns, the vehicle always yielded in Stage 2 and moved at a maximum speed of 20mph. Participants were still shown the non-yielding conditions and told that the vehicle might not yield to them before each scenario. However, how our results will generalise to faster, non-yielding AVs is unknown.

OVERALL DISCUSSION AND GUIDELINES
Our investigation provided insights into several eHMI features, such as animation, icons, colour and placement. We answered the RQs by measuring cyclist perception of and behaviour towards the eHMIs. The interfaces were versatile but needed adjustment from the initial designs suggested in previous research [1,3], demonstrating the necessity of evaluating design ideas. Stage 2 showed that all the interfaces improved the interaction experience despite scenario differences. They emphasised that eHMIs must communicate simple messages that are easy to understand and differentiate through quick glances to be acceptable and usable. We achieved this by utilising large surfaces, such as the vehicle's body or the road around it, to display simple red/green signals about its intent. We contribute novel design guidelines synthesised from our results and use them as headlines for discussion. We show how our findings compare to those with human drivers and highlight contrasting points from those with other road users.
eHMIs are key AV-cyclist interaction facilitators. Social cues from human drivers help riders safely plan their next manoeuvre [1,3,22]. We found that this also applies to AV interactions. Unlike findings with pedestrians [12], driving behaviour is insufficient to facilitate interaction, and eHMIs are an encouraging replacement for current human cues. Cyclists were more confident in the AV's intent and awareness when eHMIs were present: they conducted fewer shoulder checks and were more comfortable riding at higher speeds. Qualitative findings reinforced this: "You definitely need eHMIs. When there was no intervention, I had no idea and no control. I felt unsafe" -P2 (Stage 2).

eHMI level of detail could depend on traffic control level. eHMIs were most helpful in scenarios such as Lane Merging and Bottleneck; right-of-way was ambiguous with little traffic control to help cyclists. This aligns with previous work observing cyclists around human drivers; interaction was most likely in these settings [1]. Participants still saw the value of eHMIs in more controlled scenarios. Designers could consider eHMI level of detail here. For example, messages may be displayed later in controlled scenarios to allow riders to infer AV intent from driving behaviour first.
Versatile eHMIs can use the same signals between scenarios. This solves a key AV-cyclist interaction problem; cyclists expressed concerns about learning different signals to interact in different scenarios [3,5]. A scenario-independent language communicating the AV's intent using abstract (not-yielding/yielding) signals was enough for cyclists to safely navigate all scenarios independent of traffic control level. This differs from findings with human drivers, who used different signals in different scenarios, e.g., hand gestures communicating intent at uncontrolled intersections and facial expressions communicating awareness at roundabouts [1]. It is also a positive step toward synthesising eHMIs that accommodate multiple road users; communicating intent using binary signals was also effective with pedestrians [8].
eHMIs should explicitly communicate intent and implicitly communicate awareness. Awareness and intent are considered to be two separate messages AVs should communicate [3,8,22]. Abstracting them into one message is effective. For example, Safe Zone and the updated LightRing changed to green when the AV yielded and the cyclist was detected, so awareness was communicated implicitly. In contrast, separating the messages overwhelmed cyclists. For example, LightRing's first iteration communicated intent using animation and awareness through colour changes, and the updated Emoji-Car communicated awareness through bicycle symbols and intent by having the symbols green; riders were slower and fixated more on these signals compared to the abstract ones, as more effort was required to process them.
eHMIs should be placed on large surfaces on the AV's body. Previous work suggested eHMIs could be placed on areas, such as the roof, to be viewable anywhere around the vehicle [1,3,4,32]. We found that this is not enough; cyclists preferred signals quickly viewable at a glance. This requires using large surfaces on the vehicle's body, as seen in LightRing. Utilising large surfaces supports eHMI versatility; we found the AV's position relative to cyclists impacts workload, confidence and cycling behaviour. While road projections were well received by cyclists both in Stage 1 and in previous work [1,18], our experience implementing this in outdoor settings proved difficult and expensive with current technology. LEDs placed on the car are more feasible and easily seen in sunny environments. Road projections may also have different effects across varying road surfaces (e.g. gravel), so placing the displays on the car may be more practical.
Colour changes are critical to message distinguishability. Cyclists must easily distinguish between messages and understand their meaning. Colour is a primary distinguishable feature. It can be supported by animation or icons, but using these alone did not work; they added to the workload, and cyclists fixated on them more to determine yielding intent. Designers should ensure that any colours used contrast with each other and are easily distinguishable from a distance; future work should identify potential colour pairs. AV-pedestrian research also showed colour takes precedence over animation [9]. Designers should consider this overlap when developing eHMIs that work for multiple road users.
Echoing vehicle signals could confuse cyclists. This contradicts previous research suggesting eHMIs should help cyclists interpret vehicle signals (e.g. if directional indicators on the front/back are not visible) [1,3]. The two approaches tested in Stage 1 (LightRing and Emoji-Car) increased workload and confused cyclists ("does the arrow [in Emoji-Car] mean I should turn or is the car turning?" -P3). Designers should develop eHMIs that communicate the AV's yielding intent without overlapping current vehicle signals. Cyclists must distinguish whether the message is from the vehicle signals or the eHMI; this could be achieved using distinct placements, colours or animation patterns for novel eHMIs.

eHMI signals should not significantly depart from the current traffic vocabulary. Cyclists wanted eHMIs to have a minimal learning curve and blend in with the current traffic vocabulary; this could be through colour (using red/green resulted in a lower workload and higher confidence in AV awareness and intent), animations (where flashing, similar to the ones found in crossings, required cyclists to fixate less than when stroking was used) or any icons used, as our participants could not map lightning emojis to non-yielding behaviour. Therefore, designers should ensure that cyclists are familiar with some aspect of the eHMI to avoid a significant learning overhead or misinterpretation of the messages.
Using these guidelines, we formed a new eHMI design (see Figure 18): a two-strip light band. The top strip displays red lights when the AV detects a cyclist but will not yield. The bottom strip shows a green light when the AV is yielding. Separating them into top/bottom makes it easier for colourblind riders to distinguish the signals. This way, designers can use animations (e.g., flashing or a progress bar) to communicate messages with different detail levels between scenarios.
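The two-strip design's state logic is simple enough to express as a lookup. A minimal sketch, assuming both strips are dark when no cyclist is detected (our assumption; the guideline only specifies the detected cases):

```python
def two_strip_signal(cyclist_detected: bool, yielding: bool):
    """Map AV state to the two-strip eHMI: (top, bottom) strip colours.

    Top strip: red when a cyclist is detected but the AV will not yield.
    Bottom strip: green when the AV is yielding.
    No-detection behaviour (both strips off) is an assumption.
    """
    if not cyclist_detected:
        return ("off", "off")
    return ("off", "green") if yielding else ("red", "off")
```

Because only one strip is lit at a time and the strips occupy distinct positions, a rider who cannot tell red from green can still read the state from position alone, mirroring the red-top/green-bottom convention of traffic lights.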

CONCLUSION
We conducted a two-stage investigation comparing three AV-cyclist eHMIs to test their versatility, acceptability and usability.First, we

Figure 2: The VR cycling simulator: (A) Meta Quest Pro headset; (B) the headset's left controller mounted on the handlebars for steering; (C) a fan simulating headwind; (D) a wheel-on indoor bicycle trainer and (E) a Bluetooth speed sensor on the rear hub.

Figure 4: (A) Birdseye view of a track and the traffic Scenarios: (B) Controlled Intersection, (C) Roundabout, (D) Uncontrolled Intersection, (E) Lane Merging and (F) Bottleneck. (G) A cyclist encountering an AV with the Emoji-Car eHMI in a bottleneck.

Figure 5: Mean overall NASA TLX workload per Scenario and eHMI in Stage 1.

Figure 6: Cyclists' gaze fixations as a % of trial time visualised on a heatmap for each eHMI condition.

Figure 8: Percentage of participants that ranked each eHMI from worst (dark red) to best (dark green).

Figure 11: Pilot comparing visibility of an LED matrix on the roof, LED strips on the body and a projector on the front bumper.
Figure 12:

Figure 13: The Stage 2 procedure visualised. (A-B) The driver hidden in a car seat costume, (C) the cyclist performing a lane merging manoeuvre around an AV with LightRing and (D) the cyclist answering the post-scenario questionnaire.

Figure 14: Mean overall NASA TLX workload per Scenario and eHMI in Stage 2.

Figure 15: Cyclists' gaze fixations visualised as a heatmap for each eHMI condition. The dots show the number of fixations. Green represents smaller numbers and red represents larger numbers.

Figure 17: Percentage of participants that ranked each eHMI from worst (dark red) to best (dark green).

Figure 18: The two-strip eHMI. (A) Shows an encounter on the AV's side with the AV yielding, (B) in front of the AV (AV not yielding).