First Aid Training

Making First Aid training engaging and memorable through active learning in Virtual Reality

Unity + C#

TYPE: Virtual Reality

ROLE: XR Prototyper

TIMELINE: 150 hours spread over 2 weeks

COLLABORATORS: Agnieszka Zielke, Suzanne Dazo

OVERVIEW

The goal is to make remote first aid training engaging and memorable by presenting the player with realistic scenarios and active, hands-on learning - learning through doing - instead of the current norm of passive learning followed by quizzes.

The VR application was developed cross-platform for the Meta Quest 2 and Valve Index.

THE CHALLENGE

The initial idea came from a discussion among the team members about our experiences with various remote, video-based First Aid training courses. We found these courses incredibly dry and not particularly memorable. This potentially life-saving set of skills calls for a much more engaging, hands-on pedagogy. The challenge was to deliver that active learning experience through a VR-based application.

Scenario: First Aid in a Car Crash

  • Make sure the area is safe and secure

  • Check on casualties and get information from bystanders

  • Contact emergency services (999/911/000)

Key Features:

  • Cross-platform development (Quest 2 & Valve Index)

  • Autohand SDK + physics interactions

  • Teleportation-based Sequence Manager, including activity checkpoints

  • Guided interactions throughout the experience

  • Interactive dialogue with different branches, created using the Yarn Spinner package (see the sketch after this list)

  • Integrated audio dialogue, using VoxBox text-to-speech

  • Integrated UI following collaborative prototyping in Shapes XR 
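To illustrate how a branching conversation like this can be wired up in Unity, here is a minimal sketch built around Yarn Spinner's DialogueRunner. The component, node name, and interaction hook are assumptions for illustration, not the project's actual code:

```csharp
using UnityEngine;
using Yarn.Unity;

// Hypothetical sketch: starts a branching Yarn Spinner conversation
// when the player interacts with the bystander.
public class BystanderDialogueTrigger : MonoBehaviour
{
    [SerializeField] private DialogueRunner dialogueRunner;        // Yarn Spinner's runner component
    [SerializeField] private string startNode = "BystanderIntro";  // assumed node name in the .yarn script

    // Called by the interaction system (e.g. an Autohand grab/poke event).
    public void OnPlayerInteract()
    {
        if (!dialogueRunner.IsDialogueRunning)
        {
            dialogueRunner.StartDialogue(startNode);
        }
    }
}
```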

Phase 1: Scoping + Design

We decided to test the concept on a car collision scene, where the user has to:

  • Assess the incident scene, take charge of the situation, and remove any potential risks that may put them in danger before tending to the casualty.

  • Carry out a primary survey with a responsive casualty - finding out whether they are in pain or bleeding, and calling the emergency services if necessary.

UI Development

UI design throughout the simulated VR experience is key to guiding users through the process.

The UI comprised several panels across the scene, appearing in front of the player with a smooth transition as each milestone was reached. We used glow particle effects to highlight key physical interactions, audio SFX throughout, and voiceover for character dialogue responses, enriching the variety of feedback the user received.
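As a rough illustration of the milestone-panel transitions described above, here is a minimal Unity sketch that fades a world-space panel in and out through a CanvasGroup; the component name and timing are assumptions, not the project's actual implementation:

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch: fades a UI panel in front of the player
// when a milestone is reached, using a CanvasGroup's alpha.
public class MilestonePanel : MonoBehaviour
{
    [SerializeField] private CanvasGroup canvasGroup;    // the panel's CanvasGroup
    [SerializeField] private float fadeDuration = 0.5f;  // assumed transition time

    public void Show() => StartCoroutine(Fade(0f, 1f));
    public void Hide() => StartCoroutine(Fade(1f, 0f));

    private IEnumerator Fade(float from, float to)
    {
        float elapsed = 0f;
        while (elapsed < fadeDuration)
        {
            elapsed += Time.deltaTime;
            canvasGroup.alpha = Mathf.Lerp(from, to, elapsed / fadeDuration);
            yield return null; // wait one frame
        }
        canvasGroup.alpha = to;
    }
}
```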

THE PROTOTYPE

Prototyping in Shapes XR

We prototyped different UI interactions collectively in the Shapes XR application, importing a simplified model of our environment into it. This collaborative approach allowed us to make decisions quickly and stay aligned, particularly as we were working in a distributed manner across three different time zones.

Phase 2: Development

Steps to be completed in sequence to finish the training:

  1. Start point

  2. Bystander Dialogue

  3. Switch Off Car Ignition

  4. Warning Triangle Placement

  5. Talk to Casualty and Call 999

1. Start Point

The player goes through introductory instructions to decide the next steps.

2. Bystander Dialogue

Talk to the bystander to find out what happened, and ask them to stop smoking.

3. Switch Off Car Ignition

Push the door wide open to access the car ignition and switch it off.

4. Warning Triangle Placement

Place the warning triangle in the correct position to warn incoming traffic.

5. Talk to Casualty and Call 999

Gently tap the casualty’s shoulder and check whether they respond. Call 999.

  • The user must follow a specific sequence and is notified with visual and audio feedback if an incorrect teleport point is selected.

  • The correct teleport point is highlighted with an animated green arrow.
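A simplified sketch of how a teleport sequence manager along these lines might enforce checkpoint order and drive the feedback above. All names are illustrative; this is not the project's actual code:

```csharp
using UnityEngine;

// Hypothetical sketch of a teleportation-based sequence manager:
// checkpoints must be visited in order, with audio feedback for
// correct and incorrect selections and a green arrow on the next point.
public class TeleportSequenceManager : MonoBehaviour
{
    [SerializeField] private Transform[] checkpoints;   // ordered points: start, bystander, ignition, triangle, casualty
    [SerializeField] private GameObject greenArrow;     // animated arrow highlighting the next point
    [SerializeField] private AudioSource audioSource;
    [SerializeField] private AudioClip correctClip;     // assumed feedback clips
    [SerializeField] private AudioClip incorrectClip;

    private int currentIndex;

    private void Start() => HighlightNext();

    // Called by the teleport system when the player selects a point;
    // returns whether the teleport should be allowed.
    public bool TrySelect(Transform point)
    {
        if (point != checkpoints[currentIndex])
        {
            audioSource.PlayOneShot(incorrectClip); // wrong point: feedback, block teleport
            return false;
        }

        audioSource.PlayOneShot(correctClip);
        currentIndex = Mathf.Min(currentIndex + 1, checkpoints.Length - 1);
        HighlightNext();
        return true;
    }

    private void HighlightNext()
    {
        // Hover the arrow above the next checkpoint in the sequence.
        greenArrow.transform.position = checkpoints[currentIndex].position + Vector3.up * 1.5f;
    }
}
```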

Interaction Features:

  • Response options pop up showing how the user can interact with the bystander, or with the emergency services on the phone.

  • If an incorrect response is chosen, the user receives visual and audio feedback.

Key decisions:

  • Visual: a green arrow hovering over the teleport point to guide the user through the correct sequence.

  • Audio: varying feedback for correct and incorrect selections.

  • Menu: minimalistic design, with color changes for correct and incorrect selections.

  • User interaction with the objects and characters in the scene was very important. We wrote the scenario script to be conversational, to give the user a simulation close to real life.

REFLECTIONS

Successful collaboration:

  • Working strictly through pull requests on GitHub to minimize merge conflicts and allow comments and feedback from teammates.

  • Successfully navigating work across different time zones by clearly organizing task tickets and holding regular catch-ups on Discord.

Lessons learnt from the project:

  • Plan out prefab groups in advance. Organizing prefabs by “mini-scene” would have led to fewer scene merge conflicts when we serialized fields in Unity.

  • Expand our teleport sequencer into a full state machine; a rough sketch of what that could look like follows.
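A minimal sketch of what that state machine might look like, with states mirroring the five training steps (all names assumed):

```csharp
using System;
using UnityEngine;

// Hypothetical sketch: the teleport sequencer reworked as an explicit
// state machine, with states declared in training order.
public enum TrainingState
{
    Start,
    BystanderDialogue,
    SwitchOffIgnition,
    PlaceWarningTriangle,
    AssessCasualty,
    Complete
}

public class TrainingStateMachine : MonoBehaviour
{
    public TrainingState Current { get; private set; } = TrainingState.Start;

    // UI, audio, and teleport systems subscribe to react to each step.
    public event Action<TrainingState> OnStateEntered;

    // Advance one step along the fixed training path.
    public void Advance()
    {
        if (Current == TrainingState.Complete) return;
        Current = (TrainingState)((int)Current + 1);
        OnStateEntered?.Invoke(Current);
    }
}
```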

Future Steps:

  • Different environment / incident scenarios.

  • Different types of casualties. How would you talk to a kid in the same scenario? 

  • Multiple casualties in the incident.

  • Unresponsive casualties, more physical interactions.

  • Different weather conditions? How would you make sure that the patient is comfortable till the ambulance arrives?

