How Magic Leap Works: SDK Edition

Some time ago I wrote about how I thought Magic Leap might work based primarily on patents that were published by the company. While patents do give some indication of what the company is working on, they do not reveal what features a product will have. At the time, that list of features didn’t exist. 

It does now. 

  • Audio: Stereo audio output and voice microphone recording are supported.
  • Camera: Capture still images and videos from the color camera.
  • Dispatch: Allows apps to open URLs using the Magic Leap Browser or other apps.
  • Eye Tracking: Ability to retrieve fixation point position and eye centers. Blinks can also be detected.
  • Graphics: OpenGL ES, Desktop, and Vulkan rendering paths.
  • Hand Gestures & Key Point Tracking: Recognize the user’s hand poses (gestures) and track the position of identifiable points on hands such as the tip of the index fingers.
  • Head Tracking: Headpose is tracked in full six degrees of freedom (DOF).
  • Image Tracking: Track the position and orientation of specified image targets in the user’s environment.
  • Input (Control / MLMA Support): Retrieve either 3 DOF (orientation) or full 6 DOF (position and orientation) from the Magic Leap Control. Detect button and touchpad presses and the analog trigger. Trigger values range from 0 to 1. A range of touchpad gestures are also supported, as are haptic vibration and LED ring feedback. This interface works seamlessly with both physical Magic Leap Controls and the Magic Leap Mobile App.
  • Light Tracking: Provides information (luminance, global color temperature) about the ambient light of the user’s environment.
  • Media Codec: Low-level, hardware-accelerated media encoding and decoding.
  • Media Player: Simple, straightforward media playback interface.
  • Meshing: Converts the world’s depth data into a connected triangle mesh that can be used for occlusion and physics.
  • Music Service: Supports connecting and listening to a streaming music service.
  • Occlusion: An interface for feeding depth data to the Magic Leap platform for hardware occlusion.
  • Planes: Recognize planar surfaces in the user’s environment for placing content. This includes semantic tagging for ceilings, floors, and walls.
  • Raycast: Fire a ray and get the point of intersection with the world’s depth data.
  • Secure Storage: Save data from your app to the device’s encrypted storage.
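
One detail worth noting in the input feature: the trigger reports an analog value from 0 to 1, so apps will have to decide for themselves what counts as a “press.” A common, robust way to do that is hysteresis — register a press above a high threshold and a release below a lower one, so noise near a single threshold can’t cause rapid toggling. The sketch below is a generic Python illustration; the class name and threshold values are my own assumptions, not Magic Leap SDK API.

```python
class TriggerButton:
    """Converts a 0.0-1.0 analog trigger value into a stable pressed/released
    state using hysteresis. Thresholds are illustrative, not from the SDK."""

    def __init__(self, press_at=0.8, release_at=0.2):
        self.press_at = press_at      # value at which a press registers
        self.release_at = release_at  # value below which the press releases
        self.pressed = False

    def update(self, value):
        """Feed the latest analog trigger value; returns True while pressed."""
        if not self.pressed and value >= self.press_at:
            self.pressed = True
        elif self.pressed and value <= self.release_at:
            self.pressed = False
        return self.pressed
```

The gap between the two thresholds is what prevents a jittery reading around, say, 0.5 from firing dozens of press/release events per second.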

From this list, and other information in the developer documentation, we can make some better guesses at what the Magic Leap One will be capable of doing and what it will struggle with.

Eye Tracking

Depending on how well it works, eye tracking is going to be a big part of using Magic Leap. Without a screen to touch or a mouse to move, selecting things in mixed reality has always been challenging. Gestures are not as accurate as you might like, and using headpose means a lot of head movement. Being able to simply look at something and then blink or make a small gesture would be a revolution in this space. For a real-world example, see what Eyefluence (a company recently purchased by Google) has done in this space. We don’t know exactly how their system works, but they don’t even need blinks to confirm actions. If this level of eye tracking is in the Magic Leap One, it will be heavily used for user interaction.
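
To make this concrete, here is a sketch of how gaze-plus-blink selection could work: compare the eye tracker’s fixation direction against the directions of selectable objects, and let a blink confirm whatever the user is currently looking at. Every name and threshold here is hypothetical — this is not Magic Leap API code.

```python
import math

def gaze_target(fixation, objects, max_angle_deg=5.0):
    """Return the object whose direction is angularly closest to the gaze
    fixation direction, or None if nothing is within max_angle_deg.
    `fixation` and object directions are unit vectors from the eye."""
    best, best_angle = None, max_angle_deg
    for name, direction in objects.items():
        dot = sum(a * b for a, b in zip(fixation, direction))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

def on_blink(fixation, objects, select):
    """A blink confirms whatever the user is currently looking at."""
    target = gaze_target(fixation, objects)
    if target is not None:
        select(target)
```

The angular tolerance matters: eye trackers report fixation with some jitter, so selection works on “close enough to the target” rather than exact intersection.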

Hand Gestures

The Magic Leap One will support eight hand gestures, notably more than HoloLens. These gestures, in concert with eye tracking, will form the basis of interaction with Magic Leap, at least for applications that don’t use the controller.


Controller

The Magic Leap controller is fascinating. It has the basics you might expect: a touchpad, home button, trigger, and so on. But it also features 6 DOF tracking. We do not know how accurate this tracking will be, but providing position as well as orientation is quite challenging using inertial sensors alone. As we have written about in the past, there are indications that Magic Leap is using magnetic tracking to accomplish this. The protrusions that stick out of the side of the Lightwear may well house the magnetic coils needed for this tracking. If that is the case, we may get tracking accuracy comparable to the Vive, but without the need for clunky external cameras or lighthouses.

Another thing revealed in the docs, but not outlined in detail, is the existence of a Magic Leap mobile application. Presumably for Android and iOS, this app can be used as a controller for Magic Leap. The documentation implies that it will relay touch inputs from the phone to Magic Leap, likely working like the trackpad on the controller. This might point to the controller being sold separately: you could use your smartphone if you don’t want to spend the extra money. It will be interesting to see what the mobile app looks like.

Occlusion/Depth Mapping

Occlusion is supported directly in the Magic Leap API. Occlusion is a notoriously hard problem that no consumer device has quite gotten right before. It requires accurate depth mapping, which both Project Tango and HoloLens struggled with. That said, there are good examples of occlusion working on those devices in very specific circumstances. I suspect Magic Leap is doing some amount of inference, potentially using machine learning, to make better guesses at hard edges for occlusion.

The documentation points to Magic Leap using IR-based depth sensors similar to those used in Project Tango and HoloLens. This means they will struggle in bright sunlight and on dark materials.

The documentation makes it clear that this will not work for dynamic, moving objects. It states:

The environment is expected to mostly be static and change (such as objects moving or people walking in front of the device) only happens occasionally. For small changes, the reconstruction should slowly update over several seconds to add new objects or remove moved objects. Environments with significant or continuous change may lead to holes or incorrect geometry.

This means occlusion won’t work if someone steps into your field of view: digital objects will appear in front of them. Hopefully this can be improved in future iterations.
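
The related Raycast feature — “fire a ray and get the point of intersection with the world’s depth data” — amounts to intersecting a ray against the triangles of the reconstructed mesh. The standard per-triangle test is Möller–Trumbore, sketched here as a generic illustration rather than SDK code:

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection. Returns the distance t
    along the ray to the hit point, or None if the ray misses the triangle.
    This illustrates what a mesh raycast computes; it is not Magic Leap code."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:            # ray is parallel to the triangle's plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)   # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)          # distance along the ray
    return t if t > eps else None
```

A real implementation would run this (or a hardware-accelerated equivalent) against a spatially indexed subset of the mesh rather than every triangle.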

Image Tracking

Image tracking is built directly into the Magic Leap SDK. This allows Magic Leap to identify a given image and track its position in the real world. This isn’t a feature I was expecting to see at the API level. I can envision an application that takes artwork on your wall and brings it to life. Below is an example of this, though not running on Magic Leap. (Warning: the audio in this video is intense.)

ARCore/ARKit Parity

The SDK also includes features such as plane detection and light tracking. These are sort of table stakes for an AR toolkit at this point. 
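
To give a sense of what plane detection involves, a classic approach is RANSAC: repeatedly fit a plane to three random points from the depth data and keep the plane that the most points lie close to. This is a generic technique and an assumption on my part, not necessarily what Magic Leap does internally.

```python
import random

def fit_plane(p0, p1, p2):
    """Plane through three points as (normal, d) with dot(n, x) + d == 0,
    or None if the points are collinear."""
    u = tuple(a - b for a, b in zip(p1, p0))
    v = tuple(a - b for a, b in zip(p2, p0))
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = sum(c * c for c in n) ** 0.5
    if length == 0:
        return None
    n = tuple(c / length for c in n)
    return n, -sum(a * b for a, b in zip(n, p0))

def ransac_plane(points, iterations=100, tolerance=0.02, seed=0):
    """Find the plane supported by the most points: fit a plane to three
    random samples, count points within `tolerance`, keep the best."""
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(iterations):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(a * b for a, b in zip(n, p)) + d) <= tolerance]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = plane, inliers
    return best_plane, best_inliers
```

The semantic tagging the SDK mentions would sit on top of this kind of output: a horizontal plane near foot level is a floor, a horizontal plane overhead is a ceiling, and large vertical planes are walls.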

Lumin OS

Magic Leap will run a custom operating system called Lumin OS. Lumin is based on Android but heavily modified to support the requirements of spatial computing. This is good news for developers, as the platform will be familiar to anyone with experience working with Android, or with Linux in general. It also means Magic Leap is not trying to reinvent the wheel: it can leverage the years of work that have gone into the Android Open Source Project.

One thing we can see in the feature list above is something called Dispatch. This sounds like a renaming of Android’s intent system, which lets you choose which apps handle different actions, like selecting a default browser. While it remains to be seen whether Magic Leap will allow users to select default applications or force them into its in-house apps, this is a good indication that we might get more choice on this platform than on some other notable ones.
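
If Dispatch really is an intent-like system, its core would be a table mapping URL schemes to registered apps, with an optional user-chosen default. Here is a minimal sketch — purely illustrative, not Lumin OS internals:

```python
from urllib.parse import urlparse

class Dispatcher:
    """Sketch of an intent-style dispatch table: apps register for a URL
    scheme, the user may pin a default, and dispatch() routes a URL to
    the chosen handler (or returns the list of candidates)."""

    def __init__(self):
        self.handlers = {}   # scheme -> list of registered app names
        self.defaults = {}   # scheme -> user's preferred app

    def register(self, scheme, app):
        self.handlers.setdefault(scheme, []).append(app)

    def set_default(self, scheme, app):
        if app not in self.handlers.get(scheme, []):
            raise ValueError(f"{app} is not registered for {scheme}")
        self.defaults[scheme] = app

    def dispatch(self, url):
        """Return the app for `url`: the default if set, the sole handler
        if only one is registered, else the candidate list for the user."""
        scheme = urlparse(url).scheme
        apps = self.handlers.get(scheme, [])
        if not apps:
            return None
        if scheme in self.defaults:
            return self.defaults[scheme]
        return apps[0] if len(apps) == 1 else apps
```

The interesting policy question is the `set_default` step: whether users get to call it at all, or whether the platform pins its own apps.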

How well does it work?

The documentation gives us a very good idea of what to expect from Magic Leap. We pretty much know what this thing is going to be at this point, and on the surface it looks amazing. No device has offered this range of features in such a small form factor before. But the question the documentation does not answer is how well it all works. Is the controller tracking accurate? How is the image fidelity? Is eye tracking precise? What kind of performance can we expect from the CPU and GPU? You can have all the features in the world, but if they perform poorly, it won’t matter. We’ll have to wait for the device to ship later this year to find out.

4 thoughts on “How Magic Leap Works: SDK Edition”

  1. > Hand Gestures & Key Point Tracking: Recognize the user’s hand poses
    > (gestures)
    > and track the position of identifiable points on hands such as the tip of the
    > index fingers.

    When acquiring the position of a point such as the index fingertip, I think it can be retrieved with MagicLeap.MLHand.KeyPoints, but an array of three Vector3 values is returned. Do you know which finger each element corresponds to — in particular the second and third elements?

    Vector3[] UnityEngine.Experimental.XR.MagicLeap.MLHand.KeyPoints
