A developer behind the multiplayer mixed reality game Lasertag has implemented continuous scene meshing on Meta’s Quest 3 and 3S headsets—eliminating the need for a manual room scan and bypassing several limitations of Meta’s default system.
Meta’s current mixed reality setup requires users to perform a one-time scan of their room, generating a static 3D mesh that apps can use to anchor virtual content to the physical world. But the system comes with notable drawbacks: the scan adds friction at app launch and quickly becomes outdated if the room’s layout changes.
By contrast, Lasertag developer Julian Triveri uses Quest’s Depth API to reconstruct the room dynamically. The API delivers real-time depth data using stereo vision from the headset’s tracking cameras. While commonly used for basic occlusion effects, Triveri has extended its application much further.
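To illustrate what that per-pixel depth data is typically used for, here is a minimal sketch of the occlusion test, written in Python with NumPy standing in for shader code: a virtual pixel is hidden whenever the real-world depth sampled at its screen position is closer than the virtual content. The array names and the bias value are illustrative assumptions, not Meta's or Triveri's actual code.

```python
import numpy as np

def occlusion_mask(virtual_depth, real_depth, bias=0.02):
    """Per-pixel occlusion test, as a shader would perform it.

    virtual_depth: rendered depth of virtual content, in meters (H x W)
    real_depth:    depth of the physical scene, in meters (H x W)
    bias:          tolerance to reduce flicker where the two depths nearly match

    Returns a boolean mask; True where the real world is closer, so the
    virtual pixel should be hidden behind it.
    """
    return real_depth + bias < virtual_depth

# Illustrative usage with random data standing in for real frames.
h, w = 480, 640
virtual = np.random.uniform(0.5, 5.0, (h, w))
real = np.random.uniform(0.5, 5.0, (h, w))
hidden = occlusion_mask(virtual, real)
print(f"{hidden.mean():.0%} of virtual pixels occluded by real geometry")
```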
In the public version of Lasertag, the game uses depth data to resolve laser collisions, so beams stop when they strike physical geometry and register hits when they reach other players. The app skips the default scene mesh entirely to streamline gameplay.
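One way to test a beam against raw depth data, with no scene mesh at all, is to march along the laser ray and compare each sample's distance from the camera with the measured depth at its projected pixel; the first sample that falls behind the real surface is the hit point. The sketch below assumes a simple pinhole camera model and hypothetical parameter names, and is a conceptual illustration rather than Triveri's implementation.

```python
import numpy as np

def raymarch_depth_hit(origin, direction, depth, fx, fy, cx, cy,
                       max_dist=10.0, step=0.05, tolerance=0.03):
    """March along a ray in camera space and test it against one depth frame.

    origin, direction: ray start and unit direction in the depth camera's frame
    depth:             H x W depth image in meters (0 = no data)
    fx, fy, cx, cy:    pinhole intrinsics of the depth camera (assumed values)
    Returns the hit point, or None if the ray leaves the frame unobstructed.
    """
    h, w = depth.shape
    for t in np.arange(step, max_dist, step):
        p = origin + t * direction
        if p[2] <= 0:                        # sample is behind the camera
            continue
        u = int(fx * p[0] / p[2] + cx)       # project into the depth image
        v = int(fy * p[1] / p[2] + cy)
        if not (0 <= u < w and 0 <= v < h):
            return None                      # ray left the camera's view
        measured = depth[v, u]
        if measured > 0 and p[2] > measured + tolerance:
            return p                         # sample is behind the real surface: hit
    return None
```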
The experimental beta version of Lasertag takes this a step further by building a 3D volumetric representation of the player’s surroundings directly on the GPU. This technique allows the game to simulate collisions with parts of the real world even if the player isn’t currently looking at them—provided the headset has previously observed that space. Triveri has also developed an internal tool to convert this volumetric data into a traditional mesh using a Unity implementation of the marching cubes algorithm.
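A rough picture of how such a persistent volume can be accumulated: each depth frame is back-projected into world-space points, and the voxel each point lands in is marked as occupied, so space the player has looked at once stays solid even when it later falls out of view. In practice this kind of fusion runs in a compute shader and often stores signed distances rather than binary occupancy; the sketch below, with assumed grid dimensions and helper names, only illustrates the occupancy-grid idea, and the marching cubes pass that turns the volume into a mesh is omitted.

```python
import numpy as np

VOXEL_SIZE = 0.05                       # 5 cm voxels (assumed)
GRID_DIM = 128                          # 128^3 grid, ~6.4 m per axis (assumed)
GRID_ORIGIN = np.array([-3.2, -3.2, -3.2])

occupancy = np.zeros((GRID_DIM,) * 3, dtype=np.uint8)

def integrate_points(points):
    """Mark voxels containing back-projected depth points as occupied.

    points: N x 3 array of world-space positions from one depth frame.
    The grid persists across frames, which is what lets previously seen
    geometry keep blocking lasers while out of view.
    """
    idx = np.floor((points - GRID_ORIGIN) / VOXEL_SIZE).astype(int)
    in_bounds = np.all((idx >= 0) & (idx < GRID_DIM), axis=1)
    idx = idx[in_bounds]
    occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = 1

# Illustrative usage: a flat "floor" of points one meter below the origin.
floor = np.stack([np.random.uniform(-3, 3, 10_000),
                  np.full(10_000, -1.0),
                  np.random.uniform(-3, 3, 10_000)], axis=1)
integrate_points(floor)
print(f"{occupancy.sum()} voxels occupied")
```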
Earlier builds even explored shared environmental mapping between multiple headsets. In one test, each device exchanged heightmap data with the others in real time, constructing a collective map of the space. While this feature isn’t in the current build, it hints at the future of collaborative mixed reality environments.
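Conceptually, merging per-headset heightmaps into one shared map can be as simple as combining the grids cell by cell, keeping the most recently observed value in each cell. The sketch below uses hypothetical structures and a latest-observation-wins rule purely to illustrate the idea; the actual exchange format in those experimental builds isn't documented here.

```python
import numpy as np

def merge_heightmaps(maps, timestamps):
    """Combine heightmaps from several headsets into one shared map.

    maps:       list of H x W height arrays, NaN where a device has no data
    timestamps: list of H x W arrays giving when each cell was observed
    Each cell takes the value from whichever device observed it most recently.
    """
    merged = np.full(maps[0].shape, np.nan)
    newest = np.full(maps[0].shape, -np.inf)
    for height, ts in zip(maps, timestamps):
        seen = ~np.isnan(height) & (ts > newest)
        merged[seen] = height[seen]
        newest[seen] = ts[seen]
    return merged
```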
So why hasn’t Meta adopted continuous meshing across the board?
Devices like Apple’s Vision Pro and the Pico 4 Ultra already offer persistent, automatic meshing thanks to onboard hardware depth sensors. The Quest 3 lacks such hardware and instead relies on computational depth sensing, which is more taxing on CPU, GPU, and battery life. This explains why many Quest apps don’t even implement dynamic occlusion, let alone real-time meshing.
By choosing continuous depth mapping, Lasertag sacrifices performance and power efficiency in exchange for a more seamless mixed reality experience—an approach Meta’s official system hasn’t yet adopted. In January, Meta hinted at plans for automatically updating scene meshes, but it appears this will still require an initial room scan.
Lasertag is available for free on the Meta Horizon Store. While the public version uses depth data frame-by-frame, the beta release gradually constructs a persistent 3D volume of the environment. Developers can explore Triveri’s approach firsthand—he has open-sourced the technique on GitHub.