Researchers from Nanyang Technological University in Singapore have introduced MaskFi, a new method for tracking human movement in the metaverse using WiFi sensing. Unlike traditional tracking systems, which often rely on device-based sensors or cameras, WiFi sensing has the potential to offer real-time, high-accuracy movement tracking without the limitations of low-light conditions or physical obstructions. The MaskFi system leverages unsupervised learning, allowing artificial intelligence models to be trained more efficiently while achieving accuracy of approximately 97%. This breakthrough paves the way for a new metaverse modality that can provide a 1:1 real-world representation in real time.
The current methods of capturing human activity in the metaverse often use device-based sensors, cameras, or a combination of both. However, these modalities have immediate limitations. Device-based sensing systems, such as hand-held controllers with motion sensors, capture information at only one point on the human body, limiting their ability to model complex activities. On the other hand, camera-based tracking systems face challenges in low-light environments and when there are physical obstructions.
WiFi sensing repurposes the radio signals used to send and receive WiFi data, and scientists have used it for years to track human movement. Working much like radar, the technology can sense objects in space and has been fine-tuned to pick up various human activities, including heartbeats and breathing patterns, and can even detect individuals through walls.
However, integrating WiFi sensing with artificial intelligence models has proven challenging, particularly in training the models due to the need for massive, labeled data sets. The team from Nanyang Technological University addressed this challenge with MaskFi, a novel unsupervised multimodal human activity recognition (HAR) solution.
MaskFi leverages unlabeled video and WiFi activity data for model training, eliminating the need for extensive labeling of data sets, which is typically the most time-consuming part of such experiments. Using unsupervised learning, the AI model is pretrained on a smaller data set and iteratively refined until it can predict output states with a satisfactory level of accuracy.
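The article doesn't describe MaskFi's actual architecture, but the general idea behind this kind of self-supervised pretraining can be illustrated with a toy sketch: hide parts of an unlabeled signal and train a model to reconstruct them from context. Everything below is invented for illustration (the synthetic "CSI-like" traces, the linear reconstructor, all names); the real system works on multimodal video and WiFi data with far more capable models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for WiFi channel readings: smooth sinusoidal traces with noise.
def make_traces(n=200, length=64):
    t = np.linspace(0, 2 * np.pi, length)
    phase = rng.uniform(0, 2 * np.pi, size=(n, 1))
    return np.sin(t + phase) + 0.05 * rng.normal(size=(n, length))

traces = make_traces()

# Self-supervised task: mask one timestep and predict it from its k neighbors
# on each side -- no human-provided activity labels are needed.
k = 2
X, y = [], []
for trace in traces:
    for i in range(k, len(trace) - k):
        context = np.concatenate([trace[i - k:i], trace[i + 1:i + 1 + k]])
        X.append(context)
        y.append(trace[i])
X, y = np.array(X), np.array(y)

# "Pretraining": fit a linear reconstructor of the masked value by least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ w
masked_mse = np.mean((pred - y) ** 2)
baseline_mse = np.mean((y - y.mean()) ** 2)  # naive guess: the global mean
print(masked_mse, baseline_mse)
```

Because the reconstruction target comes from the data itself, the model learns the structure of the signal (here, masked_mse lands far below the naive baseline) without anyone labeling activities, which is exactly why this style of pretraining sidesteps the dataset-labeling bottleneck.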
The researchers reported that the MaskFi system achieved around 97% accuracy across two related benchmarks, showcasing its potential to transform movement tracking in the metaverse. This opens the door to a new metaverse modality capable of providing a 1:1 real-world representation in real time. As the technology advances, MaskFi may play a pivotal role in enhancing immersive experiences within the metaverse.
(TRISTAN GREENE, COINTELEGRAPH, 2024)