
QuestCameraKit is a collection of template and reference projects demonstrating how to use Meta Quest’s new Passthrough Camera API (PCA) for advanced AR/VR vision, tracking, and shader effects.

Follow on X · Join our Discord

Table of Contents

PCA Samples

1. 🎨 Color Picker

  • Purpose: Convert a 3D point in space to its corresponding 2D image pixel.
  • Description: This sample shows the mapping between 3D space and 2D image coordinates using the Passthrough Camera API. We use MRUK's EnvironmentRaycastManager to determine a 3D point in our environment and map it to the corresponding location on the WebCamTexture. We then sample the pixel at that point to determine the color of a real-world object (see the sketch below).
How to run this sample
  • Open the ColorPicker scene.
  • Build the scene and run the APK on your headset.
  • Aim the ray onto a surface in your real space and press the A button or pinch your fingers to watch the cube change to the color of that spot in your real environment.
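
The core of the sample fits in one method. Here is a minimal sketch, assuming MRUK's `EnvironmentRaycastManager.Raycast(Ray, out EnvironmentRaycastHit)` signature for the environment hit and a plain pinhole projection; the `cameraPose` transform and the `fx`/`fy`/`cx`/`cy` intrinsics fields are placeholders for the values the Passthrough Camera API reports:

```csharp
using Meta.XR.MRUtilityKit;
using UnityEngine;

public class ColorPickerSketch : MonoBehaviour
{
    [SerializeField] private EnvironmentRaycastManager raycastManager; // from MRUK
    [SerializeField] private WebCamTexture cameraTexture;             // the passthrough camera feed
    [SerializeField] private Renderer targetCube;

    // Placeholders for the camera pose and intrinsics reported by the PCA.
    [SerializeField] private Transform cameraPose;
    [SerializeField] private float fx, fy, cx, cy;

    public void PickColor(Ray ray)
    {
        // 1. Find the 3D point in the environment the user is aiming at.
        if (!raycastManager.Raycast(ray, out EnvironmentRaycastHit hit))
            return;

        // 2. Project that world point into 2D image coordinates (pinhole model).
        Vector3 local = cameraPose.InverseTransformPoint(hit.point);
        if (local.z <= 0f) return; // point is behind the camera
        int px = Mathf.RoundToInt(fx * local.x / local.z + cx);
        int py = Mathf.RoundToInt(fy * local.y / local.z + cy);

        // 3. Sample the pixel at that point and tint the cube with it.
        targetCube.material.color = cameraTexture.GetPixel(px, py);
    }
}
```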

Color Picker

2. 🍎 Object Detection with Unity Sentis

  • Purpose: Convert 2D screen coordinates into their corresponding 3D points in space.
  • Description: Use the Unity Sentis framework to run inference with different ML models to detect and track objects. Learn how to convert detected image coordinates (e.g. bounding boxes) back into 3D points for dynamic interaction within your scenes. This sample also shows how to filter labels, so you can, for example, detect only humans and pets to create a safer play area for your VR game. The sample video below is filtered to monitor, person, and laptop. The sample runs at around 60 fps. A sketch of the image-to-world step follows the list below.
How to run this sample
  • Open the ObjectDetection scene.
  • Install Unity Sentis (tested with [email protected]).
  • Select the labels you want to track. Leaving the list empty tracks all objects.
    Show all available labels
    person, bicycle, car, motorbike, aeroplane, bus, train, truck,
    boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat,
    dog, horse, sheep, cow, elephant, bear, zebra, giraffe,
    backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard,
    sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle,
    wine glass, cup, fork, knife, spoon, bowl, banana, apple,
    sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake,
    chair, sofa, pottedplant, bed, diningtable, toilet, tvmonitor, laptop,
    mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink,
    refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush
  • Build and deploy to Quest. Use the trigger to scan the environment; markers will appear for detections above the configured confidence threshold.
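
Going from a detection back into the scene is the inverse of the projection in the Color Picker sample: turn the bounding-box center into a world-space ray and raycast it against the environment. Below is a minimal sketch, assuming the Sentis pipeline hands you a label, a pixel-space Rect, and a confidence via a hypothetical `OnDetection` hook; `cameraPose` and `fx`/`fy`/`cx`/`cy` are again placeholders for the per-frame PCA pose and intrinsics:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class DetectionToWorldSketch : MonoBehaviour
{
    [SerializeField] private List<string> allowedLabels = new List<string> { "person", "laptop", "tvmonitor" };
    [SerializeField] private float confidenceThreshold = 0.5f;

    // Placeholders for the per-frame pose and intrinsics from the Passthrough Camera API.
    [SerializeField] private Transform cameraPose;
    [SerializeField] private float fx, fy, cx, cy;

    // Hypothetical hook, called once per detection the model returns.
    public void OnDetection(string label, Rect boundingBox, float confidence)
    {
        // An empty list tracks all objects, matching the sample's behavior.
        if (allowedLabels.Count > 0 && !allowedLabels.Contains(label)) return;
        if (confidence < confidenceThreshold) return;

        // Inverse pinhole: pixel -> camera-space direction -> world-space ray.
        Vector2 p = boundingBox.center;
        Vector3 dir = new Vector3((p.x - cx) / fx, (p.y - cy) / fy, 1f);
        Ray ray = new Ray(cameraPose.position, cameraPose.TransformDirection(dir.normalized));

        // Raycast this against the environment (e.g. with MRUK's
        // EnvironmentRaycastManager) and spawn a marker at the hit point.
        Debug.DrawRay(ray.origin, ray.direction, Color.green, 1f);
    }
}
```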

Object Detection

3. 📱 QR Code Tracking with ZXing

  • Purpose: Detect and track QR codes in real time. Open webviews or log in to third-party services with ease.
  • Description: Similar to the object detection sample, this one takes QR code coordinates and projects them into 3D space. Detect QR codes and call their URLs. You can select between a multiple and a single QR code mode. The sample runs at around 70 fps for multiple QR codes and a stable 72 fps for a single code. Users can choose between CenterOnly and PerCorner raycasting modes via an enum in the inspector: PerCorner enables more accurate rotation tracking for use cases that require it, while CenterOnly preserves a faster fallback. A minimal decoding sketch follows the list below.
How to run this sample
  • Open the QRCodeTracking scene.
  • Ensure ZXing DLLs are present (the editor script auto-adds the ZXING_ENABLED define).
  • Choose Single or Multiple detection mode and the raycast mode (CenterOnly vs PerCorner).
  • Build to Quest, point the headset toward QR codes, and interact with the spawned markers.
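
The decoding step itself is only a few lines with ZXing.Net. A minimal sketch, assuming the camera feed is available as a CPU-readable WebCamTexture (the current samples read the PCA render texture instead, which needs a readback first):

```csharp
using UnityEngine;
using ZXing;

public class QrDecodeSketch : MonoBehaviour
{
    [SerializeField] private WebCamTexture cameraTexture; // assumed CPU-readable camera feed

    private readonly IBarcodeReader reader = new BarcodeReader();

    private void Update()
    {
        if (cameraTexture == null || !cameraTexture.didUpdateThisFrame) return;

        // Decode directly from the raw camera pixels.
        var result = reader.Decode(cameraTexture.GetPixels32(),
                                   cameraTexture.width, cameraTexture.height);
        if (result == null) return;

        Debug.Log($"QR payload: {result.Text}");
        // result.ResultPoints holds the 2D corner positions; the sample raycasts
        // through the center (CenterOnly) or all corners (PerCorner) to place and
        // orient a marker in 3D. Application.OpenURL(result.Text) would open the
        // encoded URL.
    }
}
```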

QR Code Tracking

4. 🪟 Shader Samples

  • Purpose: Apply a custom shader effect to virtual surfaces.
  • Description: A shader that takes the camera feed as input to manipulate the content behind it. The project currently contains Pixelate, Refract, Water, Zoom, Blur, GameBoy Green, and VirtualBoy Red effects. Additionally, colorblindness examples for red, green, blue, and total color blindness have been added (Protanopia, Deuteranopia, Tritanopia, Achromatopsia). The Frosted Glass shader is a work in progress! A sketch of wiring the camera texture into an effect material follows the list below.
How to run this sample
  • Open the Shader Samples scene or the WIP Frosted Glass scene.
  • Ensure the shared PCA prefab is present in the scene root.
  • Toggle the different shader GameObjects to see Pixelate, Water, Zoom, Blur, GameBoy, VirtualBoy, and colorblindness filters mapped onto PCA textures.
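
The effects are plain materials whose shaders sample the camera feed; a script only has to hand them the texture plus the UV scale/offset so the image is not mirrored or flipped. A minimal sketch; `_CameraTex` and `_UvScaleOffset` are hypothetical property names, the repo's shaders define their own:

```csharp
using UnityEngine;

public class PassthroughShaderFeed : MonoBehaviour
{
    [SerializeField] private Material effectMaterial; // e.g. the Pixelate or Blur material
    [SerializeField] private Texture cameraTexture;   // the PCA camera texture
    [SerializeField] private Vector2 uvScale = Vector2.one;
    [SerializeField] private Vector2 uvOffset = Vector2.zero;

    // Hypothetical shader property names; the repo's shaders define their own.
    private static readonly int CameraTex = Shader.PropertyToID("_CameraTex");
    private static readonly int UvScaleOffset = Shader.PropertyToID("_UvScaleOffset");

    private void Update()
    {
        // Hand the live camera feed and its UV mapping to the effect shader.
        // The scale/offset keeps the image from mirroring or flipping vertically.
        effectMaterial.SetTexture(CameraTex, cameraTexture);
        effectMaterial.SetVector(UvScaleOffset,
            new Vector4(uvScale.x, uvScale.y, uvOffset.x, uvOffset.y));
    }
}
```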

Shader Samples

5. 🧠 OpenAI vision model

  • Purpose: Ask OpenAI's vision model (or any other multi-modal LLM) about the context of your current scene.
  • Description: We use the OpenAI Speech-to-Text API to create a command. We then send this command together with a screenshot to the vision model. Finally, we get the response back and use the Text-to-Speech API to turn the response text into an audio file in Unity and speak the response. The user can select different speakers, models, and speeds. For the command we can add additional instructions for the model, as well as select an image, image & text, or text-only mode. The whole loop takes anywhere from 2 to 6 seconds, depending on the internet connection. A minimal request sketch follows the list below.
How to run this sample
  • Open the ImageLLM scene.
  • Create an OpenAI API key and enter it on the OpenAI Manager prefab.
  • Select your desired model and optionally give the LLM additional instructions.
  • Ensure your Quest headset is connected to a fast/stable network.
  • Build the scene and run the APK on your headset.
  • Use the voice input (controller or hand gesture) to issue commands; the headset captures a PCA frame and plays back the LLM response via TTS.
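
Under the hood the vision call is a single HTTPS request. Here is a minimal sketch of the image + text step with UnityWebRequest; the model name `gpt-4o` and the bare-bones JSON are assumptions, and the sample's OpenAI Manager handles this (plus the STT and TTS legs) for you:

```csharp
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class VisionRequestSketch : MonoBehaviour
{
    [SerializeField] private string apiKey; // your OpenAI API key

    // Sends one camera frame plus a transcribed voice command to the vision model.
    public IEnumerator AskVisionModel(Texture2D frame, string command)
    {
        // NOTE: command must be JSON-escaped in real code.
        string imageB64 = System.Convert.ToBase64String(frame.EncodeToJPG());
        string json =
            "{\"model\":\"gpt-4o\",\"messages\":[{\"role\":\"user\",\"content\":[" +
            "{\"type\":\"text\",\"text\":\"" + command + "\"}," +
            "{\"type\":\"image_url\",\"image_url\":{\"url\":\"data:image/jpeg;base64," +
            imageB64 + "\"}}]}]}";

        using var request = new UnityWebRequest("https://api.openai.com/v1/chat/completions", "POST");
        request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");
        request.SetRequestHeader("Authorization", "Bearer " + apiKey);

        yield return request.SendWebRequest();

        if (request.result == UnityWebRequest.Result.Success)
            Debug.Log(request.downloadHandler.text); // response text then goes to the TTS step
        else
            Debug.LogError(request.error);
    }
}
```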

[!NOTE] File uploads are currently limited to 25 MB and the following input formats are supported: mp3, mp4, mpeg, mpga, m4a, wav, webm.

You can send commands and receive results in any of these languages:

Show all supported languages
Afrikaans Arabic Armenian Azerbaijani Belarusian Bosnian Bulgarian Catalan Chinese
Croatian Czech Danish Dutch English Estonian Finnish French Galician
German Greek Hebrew Hindi Hungarian Icelandic Indonesian Italian Japanese
Kannada Kazakh Korean Latvian Lithuanian Macedonian Malay Marathi Maori
Nepali Norwegian Persian Polish Portuguese Romanian Russian Serbian Slovak
Slovenian Spanish Swahili Swedish Tagalog Tamil Thai Turkish Ukrainian
Urdu Vietnamese Welsh
OpenAI Vision

6. 🎥 WebRTC video streaming

  • Purpose: Stream the Passthrough Camera feed over WebRTC to another client, using WebSockets for signaling.
  • Description: This sample uses SimpleWebRTC, a Unity-based WebRTC wrapper that facilitates peer-to-peer audio, video, and data communication over WebRTC using Unity's WebRTC package. It leverages NativeWebSocket for signaling and supports both video and audio streaming. You will need to set up your own WebSocket signaling server beforehand, either online or on your LAN. You can find more information about the necessary steps here. A minimal signaling connectivity check follows the list below.
How to run this sample
  • In Package Manager, click the + button → Add package from git URL and install, in order:
    1. https://github.com/endel/NativeWebSocket.git#upm
    2. https://github.com/Unity-Technologies/com.unity.webrtc.git
    3. https://github.com/FireDragonGameStudio/SimpleWebRTC.git?path=/Assets/SimpleWebRTC
  • Open the WebRTC-Quest scene.
  • On [BuildingBlock] Camera Rig/TrackingSpace/CenterEyeAnchor/Client-STUNConnection, set your WebSocket signaling server address (LAN or cloud).
  • Build and deploy WebRTC-Quest to your Quest 3.
  • Open the WebRTC-SingleClient scene in the editor (or deploy to another device) to act as the receiving peer.
  • Launch both apps. Perform the Start gesture with your left hand (or press the menu button) on Quest to begin streaming.
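
Most streaming problems turn out to be signaling problems. Below is a minimal connectivity check with NativeWebSocket that you can run before wiring up SimpleWebRTC; the server address is a placeholder for your own signaling server:

```csharp
using NativeWebSocket;
using UnityEngine;

public class SignalingCheckSketch : MonoBehaviour
{
    [SerializeField] private string serverAddress = "ws://192.168.0.10:8080"; // placeholder address

    private WebSocket socket;

    private async void Start()
    {
        socket = new WebSocket(serverAddress);
        socket.OnOpen += () => Debug.Log("Signaling server reachable.");
        socket.OnError += error => Debug.LogError($"Signaling error: {error}");
        socket.OnMessage += bytes =>
            Debug.Log($"Signaling message: {System.Text.Encoding.UTF8.GetString(bytes)}");
        await socket.Connect();
    }

    private void Update()
    {
        // NativeWebSocket queues incoming messages; dispatch them on the main thread.
        socket?.DispatchMessageQueue();
    }

    private async void OnDestroy()
    {
        if (socket != null) await socket.Close();
    }
}
```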

Troubleshooting

  • If you hit compiler errors, re-add the three git packages listed above.
  • Use Tools ▸ Update WebRTC Define Symbol after importing packages.
  • Ensure your own WebSocket signaling server is running (tutorial here).
  • For LAN streaming, leave the STUN address empty; otherwise keep the default.
  • Enable the Web Socket Connection active toggle so Quest connects on start.
  • WebRTC works with Vulkan and OpenGLES3. If you use GLES3:
    • In Project Settings ▸ XR Plug-in Management ▸ Oculus: disable Low Overhead Mode (GLES).
    • In XR Plug-in Management ▸ OpenXR: disable Meta Quest: Occlusion and Meta XR Subsampled Layout.
  • Ignore Project Setup Tool (PST) warnings about opaque textures / low overhead / camera stack for this sample.

PCA WebRTC

7. 📅 Experimental QR Code detection

  • Purpose: Detect and track QR codes without having to use a third-party library.
  • Description: As this feature is still experimental, make sure to enable Experimental Mode on your Quest 3 when testing. Unity usually asks to enable it before building. You can also activate it from the command line or via the Meta Quest Developer Hub. More information can be found here - MR Utility Kit QRCode Detection and here - Mobile Experimental Features
How to run this sample
  • Install TextMeshPro Essentials if prompted.
  • Enable Experimental Mode on your Quest 3/3S (headset UI or Quest Hub CLI).
  • Open the QRCodeDetection scene.
  • Build and deploy to your device, then use the controllers to interact with the UI (hand tracking is not yet supported).

QR Code Detection

Update Notes

Meta XR SDK v81 refresh - One script to rule them all

The legacy WebCamTexture helpers have been fully retired. Every sample now talks directly to the PassthroughCameraAccess component, which is part of the MRUK package, and consumes the native render texture, intrinsics, and per-frame pose data. It also offers timestamps and both eyes at the same time.

  • Single PCA component – All samples reference the same configured PCA component; no more bespoke permission scripts or manifest edits.
  • Direct GPU textures – Sentis inference, QR tracking, Frosted Glass shaders, WebRTC, and OpenAI capture now pull the PCA render texture directly, keeping latency low and the aspect ratio intact.
  • Pose-aware reprojection – Object/QR samples reconstruct rays with per-frame PCA pose + intrinsics so markers no longer drift when the user nods.
  • Environment-aware markers – Marker placement re-samples environment normals and optionally drops detections below a configurable confidence threshold.
  • Shared marker assets – Marker prefab/pool/controller moved to Assets/Samples/Common/Markers, keeping Sentis + QR samples in sync.
  • Shader updates – Camera-mapped shaders understand PCA UV scale/offsets so effects don’t mirror, split into quadrants, or flip vertically.

Getting Started with PCA

Prerequisites

  • Meta Quest Device: Ensure you are running on a Quest 3 or Quest 3S and your device is updated to HorizonOS v74 or later.
  • Unity: Unity 6 is recommended; the project also runs on Unity 2022.3 LTS.
  • The Passthrough Camera API does not work in the Editor or the XR Simulator.
  • Get more information from the Meta Quest Developer Documentation.

Installation

  1. Clone the Repository:

    git clone https://github.com/xrdevrob/QuestCameraKit.git
    
  2. Open the Project in Unity: Launch Unity and open the cloned project folder.

  3. Configure Dependencies: Follow the instructions in the relevant sample section above to run one of the samples.

Troubleshooting & Known Issues

  • For sample 7 (Experimental QR Code Detection), make sure Experimental Mode is active. You can find more information about the necessary steps here.
  • If you switch between Unity 6 and other versions such as 2023 or 2022, your Android Manifest may get modified and the app won't run anymore. Should this happen to you, go to Meta > Tools > Update AndroidManifest.xml or Meta > Tools > Create store-compatible AndroidManifest.xml. Afterwards, make sure you manually add the horizonos.permission.HEADSET_CAMERA permission back into your manifest file; see the snippet below.
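
The entry to re-add uses standard Android permission syntax (the permission name comes from the step above):

```xml
<!-- Required for Passthrough Camera API access on HorizonOS -->
<uses-permission android:name="horizonos.permission.HEADSET_CAMERA" />
```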

Community Contributions

News

Acknowledgements & Credits

License

This project is licensed under the MIT License. See the LICENSE file for details. Feel free to use the samples for your own projects, though I would appreciate it if you left some credit to this repo in your work ❤️

Contact

For questions, suggestions, or feedback, please open an issue in the repository or contact me on X, LinkedIn, or at [email protected]. Find all my info here or join our growing XR developer community on Discord.

Support on Patreon


Happy coding and enjoy exploring the possibilities with QuestCameraKit!

