Language Learning with Visuals
Understanding a language by recognizing the images that best describe its sentences
TYPE
Virtual Reality

ROLE
XR Prototyper

TEAM
Solo

TIMELINE
1 Week

TOOLS
Unity + C#
OVERVIEW

A VR language learning application in which the user is asked to select the images that best express their understanding of a sentence in the target language. It is conceptualized as one of the initial steps in the process of learning a language.
The VR application is made for Meta Quest 2.
This prototype was created in collaboration with Pearson Education, the creators of MondlyVR. The core brief: how might we create a language learning game in VR that keeps learners engaged through months of daily practice?
THE CHALLENGE
This prototype for language learning is based on personal experience from when I was working in a city in India where I did not understand the regional language. English was spoken, but people often switched to the regional language in meetings and conversations. I made a point of carefully understanding at least the context of what a conversation was about and replying in a mixture of the regional language and English. Could an interactive VR application let you recognize images to learn what a sentence approximately means?
THE PROTOTYPE
INITIAL SKETCHES
On the initial game screen, the user is prompted to select either the 'Context' option (what does the sentence approximately mean?) or the 'Detail' option (who is it addressed to, who is asking, etc.). The user needs to select the correct images for both options to successfully complete the task.
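As a rough illustration of this flow, here is a minimal C# sketch of how such a two-part task could be modeled in Unity. The `SentenceTask` class and its fields are hypothetical, not taken from the actual project code:

```csharp
using UnityEngine;

// Hypothetical data model for one sentence task. A task is complete only
// when the correct image has been picked for both the Context question
// and the Detail question.
[System.Serializable]
public class SentenceTask
{
    public string sentence;              // sentence in the target language
    public Sprite correctContextImage;   // image capturing the approximate meaning
    public Sprite correctDetailImage;    // image capturing who is asking / being addressed

    public bool contextSolved;
    public bool detailSolved;

    public bool IsComplete => contextSolved && detailSolved;

    // Called when the user picks an image in the given mode;
    // returns whether the pick was correct.
    public bool Submit(bool isContextMode, Sprite picked)
    {
        bool correct = picked == (isContextMode ? correctContextImage : correctDetailImage);
        if (correct)
        {
            if (isContextMode) contextSolved = true;
            else detailSolved = true;
        }
        return correct;
    }
}
```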
USER FLOW
KEY FEATURES
Developed in Unity for Meta Quest 2
Extended the XR Interaction Toolkit
Minimalist UI with interactive buttons that give distinct audio and visual feedback for correct and incorrect selections
Audio pronunciation of the sentence using text-to-speech
Hint button to highlight crucial parts of the sentence to be recognized
Using the interactive Audio button, a pronunciation of the sentence is played, generated through text-to-speech.
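Unity has no built-in text-to-speech, so a common approach is to generate the clips ahead of time with an external TTS tool and play them back on demand. A minimal sketch of the button wiring, with hypothetical component and field names:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Plays the pre-generated TTS pronunciation of the current sentence
// when the Audio button is pressed. Assumes one AudioClip per sentence,
// generated offline and assigned in the Inspector.
public class AudioButton : MonoBehaviour
{
    [SerializeField] private Button button;          // the interactive Audio button
    [SerializeField] private AudioSource source;     // plays the pronunciation
    [SerializeField] private AudioClip sentenceClip; // TTS clip for the current sentence

    private void Awake()
    {
        button.onClick.AddListener(() => source.PlayOneShot(sentenceClip));
    }
}
```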
Using the interactive Hint button, the important keywords of the sentence are highlighted; recognizing these keywords through the images then helps the user understand the sentence.
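One plausible way to implement the highlight is through TextMeshPro's rich-text tags; a sketch with hypothetical field names:

```csharp
using TMPro;
using UnityEngine;

// Highlights the crucial keywords of the sentence when the Hint button
// is pressed, using TextMeshPro rich-text color tags.
public class HintButton : MonoBehaviour
{
    [SerializeField] private TMP_Text sentenceText; // displays the sentence
    [SerializeField] private string[] keywords;     // words the hint should emphasize

    private string originalText;

    private void Awake() => originalText = sentenceText.text;

    // Wired to the Hint button's OnClick event in the Inspector.
    public void ShowHint()
    {
        string highlighted = originalText;
        foreach (string word in keywords)
        {
            highlighted = highlighted.Replace(
                word, $"<color=#FFC107><b>{word}</b></color>");
        }
        sentenceText.text = highlighted;
    }
}
```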
Correct and incorrect selections are highlighted in green and red, accompanied by matching audio feedback.
This helps the user eliminate options and move towards the correct choices.
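A minimal sketch of how this per-option feedback could look in Unity C#; the component and field names are assumptions, not the project's actual code:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Tints an option green or red on selection, plays the matching audio cue,
// and disables incorrect options so the user can eliminate them.
public class OptionFeedback : MonoBehaviour
{
    [SerializeField] private Image frame;   // the option's highlight frame
    [SerializeField] private Button button; // the option's interactive button
    [SerializeField] private AudioSource source;
    [SerializeField] private AudioClip correctClip;
    [SerializeField] private AudioClip incorrectClip;

    public void ShowResult(bool correct)
    {
        frame.color = correct ? Color.green : Color.red;
        source.PlayOneShot(correct ? correctClip : incorrectClip);
        if (!correct)
            button.interactable = false; // eliminated: cannot be selected again
    }
}
```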
The winning condition is emphasized by hiding the incorrect choices and showing only the correct ones.
Spotlights over the correct choices and a clapping audio cue provide the visual and audio celebration.
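A sketch of how this win sequence could be triggered; again the names are hypothetical:

```csharp
using UnityEngine;

// On task completion, hides the incorrect choices and celebrates the
// correct ones with spotlights and a clapping audio cue.
public class WinSequence : MonoBehaviour
{
    [SerializeField] private GameObject[] incorrectChoices;
    [SerializeField] private Light[] spotlights; // one aimed at each correct choice
    [SerializeField] private AudioSource source;
    [SerializeField] private AudioClip clappingClip;

    public void Play()
    {
        foreach (GameObject choice in incorrectChoices)
            choice.SetActive(false);

        foreach (Light spot in spotlights)
            spot.enabled = true;

        source.PlayOneShot(clappingClip);
    }
}
```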
REFLECTIONS

The concept was successfully taken from the drawing board to a working VR prototype within the one-week timeline.
Feedback from Pearson Education: "The idea is very original. We have never come across an application that uses images as a way to determine user understanding in language learning. However, the application can be further pushed to make it more three dimensional and make use of the VR interface in a more productive way."
A further idea to address this feedback is to replace the images with 3D objects, which could transform the application into a more active learning experience.