LONG BEACH, CALIFORNIA – I’m on the sun deck of the Queen Mary with Steve Lukas on the day his app launched on Meta’s storefront.
Lukas’ 15-year-old son Christian joined him for Augmented World Expo last week, and a few hours earlier pressed the button on stage to launch Jigsaw Night into early access for buyers.
A gentle breeze whips away the heat of the sun from our skin as we breathe in the fresh ocean air and talk about what he’s built over the last seven months. Or eight years, depending on how he looks at it. We’re piecing together a jigsaw puzzle and filling in the last hole in the picture on the box with a satisfying click. Each of us is wearing a Quest 3 headset.
“I don’t know what that looked like to you,” Lukas says, apologizing for the limits of his code at launch.
“Can you do me a favor and hold [the puzzle] and walk over here?” I ask him, gesturing to the side of the ship.
He carries it over and hangs the finished puzzle in the air as we look out over the water to the Aquarium of the Pacific.
“This is nice,” he says.
“This is nice,” I agree.
“Why did this take so long, and why does this feel so good?” I ask him.
“I’ve been working on shared spaces for a little over eight years,” Lukas replies.
“Ever since I integrated ARKit, the Microsoft HoloLens and the Vive, and from that I was able to get 6DOF controllers with an AR transparent headset and anyone could see me using ARKit. And I said, that’s a magical moment. But it takes all this tech to make that happen. And so from there, I’ve been looking at all the different ways to make that happen easier. And the progress of technology has brought these tools to do automatic spatial anchors, automatic detection, and I had encountered that there are folks who said, ‘you can’t do it manually, it needs to be automatic.’
“That’s like saying, ‘we won’t make cars until we get rid of the stick shift.’ And then you have years that you don’t learn. Because once you get past ‘this works’, then you go onto other problems. Thinking about these problems hasn’t been done by that many people. Some have been doing it very well, but the number of people looking at these problems is smaller than I expected it to be.”
After our puzzling session, which goes on longer than intended because we can’t stop talking, he shows me the component list of his Unity project (a rough sketch of how modules like these might fit together follows the list). Among the neatly sorted pieces:
- World Anchoring
- Meta Quest Platform
- Puzzle Generation
- Puzzle Tracking
- Puzzle Database
- Music Box
- Voice Chat
- Mixed Reality Utility Kit
- Wit.Ai
- LivCam
- FacebookSDK
- Leaderboards
- Shop
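If you build in Unity yourself, a minimal sketch of how a modular project like this might bootstrap those systems could look like the following. To be clear, every interface and class name here is my own invention for illustration; Lukas hasn’t shared his actual code.

```csharp
using System.Threading.Tasks;
using UnityEngine;

// Hypothetical contract each subsystem (anchoring, puzzle generation,
// voice chat, and so on) could implement. Not Jigsaw Night's real code.
public interface IGameModule
{
    Task Initialize();
}

public class JigsawBootstrap : MonoBehaviour
{
    // Assigned in the Inspector in dependency order: anchoring before
    // puzzle spawning, networking last so remote players join a stable scene.
    [SerializeField] private MonoBehaviour[] modules;

    private async void Start()
    {
        foreach (var module in modules)
        {
            if (module is IGameModule gameModule)
            {
                await gameModule.Initialize();
            }
        }
    }
}
```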
When I saw him before the AWE show floor opened on Tuesday, he wasn’t certain Jigsaw Night was going to launch. After I completed my single-player demo, and told him I thought it was fantastic work, he committed to the plan he executed with his family on Thursday.
If you have a pair of Quest 2, 3, 3S, or Pro headsets running Horizon OS, Jigsaw Night is currently one of the best experiences you can buy from Meta’s store to share in the same room with someone else.
Jigsaw Night isn’t perfect yet, but it is already better than most apps because that component list delivers things people have previously only experienced in business or research labs with access to thousands of dollars of equipment.
For instance, Lukas uses the terms “co-presence” and “co-location” to mean two different things, while other developers without his understanding of the medium might use them interchangeably. In Jigsaw Night, you can be co-located in the same physical space as several other players also in the room with you. Or you can have a couple of people remote in from another location using co-presence, sending their body movements or voice over the internet, and receiving back yours in return. From this, you all have a shared reality — a playground sharing an architecture for spending time together — and the idea is so important that a stupid number of Silicon Valley marketing leaders have tried to brand this experience in different ways.
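The distinction is concrete enough to model in code. As a purely hypothetical sketch, an app juggling both modes might represent each participant like this; the type names are mine, not Jigsaw Night’s.

```csharp
// Hypothetical data model for Lukas' co-location vs. co-presence
// distinction. Illustrative only, not Jigsaw Night's actual types.
public enum PresenceMode
{
    CoLocated, // physically in the room, sharing one spatial anchor
    CoPresent  // remote, streaming pose and voice over the network
}

public class Participant
{
    public string DisplayName;
    public PresenceMode Mode;

    // In passthrough you see a co-located player's real body, so only
    // co-present (remote) participants need a rendered avatar.
    public bool NeedsAvatar => Mode == PresenceMode.CoPresent;
}
```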
Making a shared space in headsets of any kind that allows for both co-presence and co-located interactions is exceedingly rare. It’s the stuff pursued by platform giants like Microsoft, Apple, Google, and Meta, as well as by startups hoping for acquisition by one of those larger companies. In essence, the race to make a shared reality for people is both competitive and vulnerable to rapidly changing executive priorities. This means many developers have opted to steer clear of these early tools, avoiding platform lock-in and the limited market opportunity associated with it.
Not Lukas and not Jigsaw Night. This isn’t just a puzzle game. It’s a study in both the blockers and opportunities in front of the entire XR industry. Lukas’ work might as well be a sample project demonstrating best practices for mixed reality design on Horizon OS — the sample project Meta should have made for developers and headset buyers so they can understand the potential of this mixed reality platform.
I’ve written about Lukas’ work twice before. First when his son Christian, then a third grader, demonstrated this same essential concept to his classmates in 2017 on HoloLens. I covered Lukas’ work again in 2018, as Magic Leap shipped its first limited-field-of-view, high-cost augmented reality goggles and a group of developers showed off this general idea. It’s the same idea that Lukas is now bringing directly to VR and mixed reality via Quest headsets.
One particularly impactful moment in testing Jigsaw Night came when I activated the LIV virtual camera. The camera is more fully featured than some mobile phone cameras, offering first-person or third-person angles, FOV and stabilization adjustments, and a selfie cam that can hang in midair or be handheld.
The magic is that the LIV camera can only film what appears in your shared virtual space. It cannot capture your physical environment, even while you’re looking at it in passthrough. The feature only works because Jigsaw Night triggers a scan of your physical room, giving the headset and its apps a semantic understanding of the room’s shape as well as the furniture, windows, and doors inside. The capability has been available in Meta’s ecosystem since early 2024, yet very few developers use it.
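From the developer side, that scan is exposed through Meta’s Mixed Reality Utility Kit (MRUK), which appears in Lukas’ component list above. A rough sketch of consuming the scene model follows; MRUK’s method names have shifted across versions, so treat the exact calls as an assumption rather than a recipe.

```csharp
// Sketch of reading the device's room scan via Meta's Mixed Reality
// Utility Kit. Exact API names vary across MRUK versions; illustrative only.
using Meta.XR.MRUtilityKit;
using UnityEngine;

public class RoomUnderstanding : MonoBehaviour
{
    private void Start()
    {
        // MRUK fires this callback once the scene model (walls, furniture,
        // windows, doors) has been loaded for the current room.
        MRUK.Instance.RegisterSceneLoadedCallback(OnSceneLoaded);
    }

    private void OnSceneLoaded()
    {
        MRUKRoom room = MRUK.Instance.GetCurrentRoom();

        // Each anchor is a semantically labeled surface or volume, which is
        // what lets a virtual camera render scene geometry without ever
        // touching the real pixels of your room.
        foreach (MRUKAnchor anchor in room.Anchors)
        {
            Debug.Log($"Scene anchor: {anchor.name}");
        }
    }
}
```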
A four-minute video I captured with the LIV tool shows the effect of this combination the first time I tested it. Inside the headset, I can still see my hotel room aboard the permanently docked ship in Long Beach via passthrough mixed reality. On the LIV selfie cam I see only VR, the same as the viewer does, using Meta’s latest avatar tech, with my room rendered as a sparse reconstruction of what actually surrounds me. I only really understand what’s happening at the end of the video, when I see my own reflection in a physical mirror and end the recording.
[Embedded video: the four-minute LIV capture described above]
Many of the gameplay videos you see on UploadVR are not our first attempts at what’s depicted. It often takes practice: learning, for instance, through trial and error how a developer implements the simple human action of grasping an object.
The four-minute video embedded above is my first time trying the LIV integration in his app, and what you see is that it just worked, demonstrating what amounts to a master class in interaction design. Lukas went to the effort of supporting both pinch and grasp gestures for holding things with hand tracking, and supports the same with both the trigger and grip buttons on controllers. He even supports using one of each — tracked controller for advanced interactions in one hand and just your hand for the other.
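Supporting all of those inputs at once usually means collapsing them into a single “grab” signal per hand. Here is a hedged sketch using Meta’s OVRInput and OVRHand classes, which are real SDK types; the GrabInput wrapper itself is hypothetical, and real grasp detection is more involved than the single pinch check shown.

```csharp
using UnityEngine;

// Hypothetical wrapper that unifies hand tracking and controller input
// into one "grab" signal, in the spirit of Jigsaw Night's design.
public class GrabInput : MonoBehaviour
{
    [SerializeField] private OVRHand hand; // assigned when hand tracking is used
    [SerializeField] private OVRInput.Controller controller = OVRInput.Controller.RTouch;

    // True when this hand, however it is currently tracked, wants to hold something.
    public bool IsGrabbing()
    {
        // Hand tracking path: treat an index pinch as a grab. (A fuller
        // implementation would also recognize a whole-hand grasp pose.)
        if (hand != null && hand.IsTracked)
        {
            return hand.GetFingerIsPinching(OVRHand.HandFinger.Index);
        }

        // Controller path: accept either the trigger or the grip button,
        // as Jigsaw Night does.
        return OVRInput.Get(OVRInput.Axis1D.PrimaryIndexTrigger, controller) > 0.7f
            || OVRInput.Get(OVRInput.Axis1D.PrimaryHandTrigger, controller) > 0.7f;
    }
}
```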
That video is what led me to invite Lukas to the Queen Mary for a puzzling session together on the top deck. I wanted to test co-location away from the Augmented World Expo show floor, where earlier in the week I saw him concerned as he struggled with the intense wireless interference. In fact, our first article on Tuesday was only published that day because Lukas and I exited the event so he could AirDrop video to me illustrating the work.
I left my controller in the room, put my headset in a bag, and brought Lukas up to the top of the ship. A little while later, Scott Stein from CNET joined us. Lukas put on a headset and supplied Stein with one too. I logged onto the terrible hotel Wi-Fi and clicked “Party Up”. That activates an invisible handshake between the three headsets and creates a Meta Shared Spatial Anchor, which essentially locks our digital content to the same anchoring point in the world.
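For the developers reading, a plausible and much-simplified version of what “Party Up” does under the hood with Meta’s Shared Spatial Anchors API might look like this. OVRSpatialAnchor is the real Meta XR SDK component, but the exact save and share calls have changed across SDK versions, so the specifics below are an assumption.

```csharp
// Simplified sketch of creating and sharing a spatial anchor so that
// multiple headsets lock content to the same physical point. Method
// names vary by Meta XR SDK version; treat as illustrative.
using System.Collections.Generic;
using UnityEngine;

public class PartyUp : MonoBehaviour
{
    public async void CreateAndShareAnchor(List<OVRSpaceUser> partyMembers)
    {
        // Create an anchor at this object's pose. Everything parented to
        // it stays locked to the same spot in the room.
        var anchor = gameObject.AddComponent<OVRSpatialAnchor>();

        // Wait for the runtime to finish localizing the anchor.
        while (!anchor.Created)
        {
            await System.Threading.Tasks.Task.Yield();
        }

        // Persist it, then share it with the other headsets in the party
        // so everyone sees the puzzle in the same physical place.
        await anchor.SaveAsync();
        await OVRSpatialAnchor.ShareAsync(
            new List<OVRSpatialAnchor> { anchor }, partyMembers);
    }
}
```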
[Embedded video]
And then we were chatting as we pieced together a double-sided puzzle. Stein and I reached for the same piece from opposite sides, bumping hands and then laughing.
Lukas and I believe that in about 20 years, children will live in a world where they cannot conceive of what it was like to be unable to simply hand purely digital 3D content to a friend or family member. Right now in 2025, doing that is such a rarity that an early access app from a single developer showed up the entire market by integrating features which should be standard, but aren’t yet.
Jigsaw Night gets so much right out of the box that it becomes a more tangible piece of our world, creating memories in places VR headsets don’t usually go — at least not yet.
“I feel like the focus of the immersive technology conversation has recently moved away from co-present collaboration before it’s even really truly begun,” Lukas wrote me over direct message. “This industry is always looking to tomorrow’s technology to solve mainstream adoption, but really it’s all possible right here for us now. I just hope there are enough of us out there that are still focused on it, and am grateful that we now have a framework – and maybe even a new medium – to have that conversation.”