Reality Hack '25: Intentional Locomotion
Project Synopsis
This project explores the use of IMU-based micro-gesture detection to create an intentional locomotion system for VR, allowing users to experience near-room-scale movement while lying down. By using IMUs mounted on the feet, we detect small gestures and translate them into movement data that controls a player rig. In combination with BCI (Brain-Computer Interface) technology, which allows users to interact with virtual objects through focus, this system enables a more intuitive and immersive VR experience without requiring full-body motion.
The Problem: Disconnect Between Real and Virtual Bodies in VR
A common challenge in VR is the disconnect between a player's real-world body and their virtual avatar. Traditional locomotion systems either require excessive real-world movement (leading to fatigue) or rely on artificial controls (which can feel unnatural). Most VR applications simplify the virtual body to mitigate this issue, but we propose an alternative: reducing the need for real-world movement while enhancing control over the virtual body.
The Solution: Intentional Locomotion
Instead of relying on 1:1 physical movement, intentional locomotion focuses on using small, deliberate gestures to drive movement in VR. Our hackathon prototype, built in just two days, demonstrated this principle with two IMUs mounted on the feet. The system detects micro-gestures—small intentional movements—and translates them into walking and navigation controls.
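The sketch below is a minimal illustration of this idea, not the exact hackathon code: it assumes an ImuSample struct fed from one foot-mounted IMU and a Unity CharacterController on the player rig, and the thresholds and field names are placeholders chosen for illustration.

using UnityEngine;

// Hypothetical container for one foot-mounted IMU sample
// (in practice this would arrive from the custom device over a serial/wireless link).
public struct ImuSample
{
    public Vector3 Accel; // linear acceleration, m/s^2
    public Vector3 Gyro;  // angular velocity, deg/s
}

public class IntentionalLocomotion : MonoBehaviour
{
    [SerializeField] private CharacterController playerRig;
    [SerializeField] private float gyroThreshold = 40f; // deg/s needed to count as a micro-gesture
    [SerializeField] private float stepSpeed = 1.2f;    // metres per second while a gesture is active
    [SerializeField] private float cooldown = 0.15f;    // debounce between detected gestures

    private float lastGestureTime;

    // Called with the latest sample from one foot's IMU.
    public void OnImuSample(ImuSample sample)
    {
        // Treat a brief ankle flick (pitch rate above threshold) as a "step" intent.
        bool gestureDetected = Mathf.Abs(sample.Gyro.x) > gyroThreshold
                               && Time.time - lastGestureTime > cooldown;

        if (gestureDetected)
        {
            lastGestureTime = Time.time;
        }

        // While a gesture is active, move the rig forward along its facing direction.
        if (Time.time - lastGestureTime < 0.4f)
        {
            Vector3 motion = playerRig.transform.forward * stepSpeed * Time.deltaTime;
            playerRig.Move(motion);
        }
    }
}

A fuller version would fuse both feet's IMUs and likely filter the raw signals before thresholding, but the core idea of mapping small, deliberate motions onto rig movement is the same.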
With further development, this concept could expand beyond simple foot-based gestures. Imagine a future where EMG (electromyography) sensors detect slight muscle activations across the entire body (a speculative code sketch follows this list), with examples such as:
Walking in VR by flexing leg muscles without actually moving.
Turning by subtly shifting weight or activating core muscles.
Reaching and grabbing by engaging forearm and finger muscles, even without fully extending the arms.
Jumping by engaging calf muscles, while remaining seated or lying down.
Leaning and dodging by detecting micro-movements in the torso.
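The list above is speculative, and no EMG hardware was part of this project, but a hedged sketch of how such a mapping might look is shown below; the channel names (calf_left, quad_right, etc.) and the activation threshold are invented purely for illustration.

using System.Collections.Generic;

// Purely speculative sketch: map normalized EMG activation levels (0..1)
// per muscle channel to high-level movement intents. Channel names and
// the threshold are hypothetical, not part of the actual prototype.
public enum MovementIntent { None, WalkForward, TurnLeft, TurnRight, Jump }

public static class EmgIntentMapper
{
    private const float ActivationThreshold = 0.6f;

    public static MovementIntent Map(IReadOnlyDictionary<string, float> channels)
    {
        // Both calves firing while the body stays still could read as a jump intent.
        if (Level(channels, "calf_left") > ActivationThreshold &&
            Level(channels, "calf_right") > ActivationThreshold)
            return MovementIntent.Jump;

        // Either quad activating reads as a walking intent.
        if (Level(channels, "quad_left") > ActivationThreshold ||
            Level(channels, "quad_right") > ActivationThreshold)
            return MovementIntent.WalkForward;

        // Core/oblique activation reads as turning.
        if (Level(channels, "oblique_left") > ActivationThreshold)
            return MovementIntent.TurnLeft;
        if (Level(channels, "oblique_right") > ActivationThreshold)
            return MovementIntent.TurnRight;

        return MovementIntent.None;
    }

    private static float Level(IReadOnlyDictionary<string, float> channels, string name)
    {
        return channels.TryGetValue(name, out float value) ? value : 0f;
    }
}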
Over time, users could relearn movement in this system, refining control to the point where their virtual body responds instantly and precisely to micro-gestures, creating a fully immersive, fatigue-free VR experience.
Proof of Concept & User Experience
Despite its simplicity, the IMU-based prototype allowed users to explore the demo environment with ease. The BCI functionality originally planned for interaction was pulled at the last minute due to hardware failure. However, the fundamental concept stands: a future where users can move and interact in VR through intentional gestures and cognitive input rather than full-body motion.
Tools & Technologies Used
Custom IMU-Based Devices (2x)
Muse S BCI (not used in final demo due to hardware failure)
Unity 2022
Blender
C# & C++
While we did not win any awards at MIT Reality Hack '25 with this project, we are very proud of what we built. We believe it could be the first step toward the kind of VR future we want to see, and we will keep working toward it: a future where VR locomotion is driven by intent rather than physical strain, opening new possibilities for accessibility, immersion, and control in virtual environments.