By the end of this lesson, you will:

- Set up expo-gl and ARCore in an Expo project
- Detect real-world surfaces with plane detection
- Place virtual objects using hit tests and anchors
- Manage the AR session lifecycle

Augmented Reality (AR) merges digital content with the physical world. Apps like IKEA Place, Pokémon GO, and Snapchat use AR to create experiences that blur the line between virtual and real. Users don't just look at content; they place it in their living room, on their desk, on their face.
AR creates shareable moments. When users see a virtual dinosaur in their backyard or try on digital sneakers, they take screenshots and share. In this module, we'll build AR features that make your app feel magical.
ARCore is Google's platform for building augmented reality experiences on Android. It provides:

- Motion tracking: follows the phone's position and orientation as it moves through space
- Environmental understanding: detects horizontal and vertical surfaces (planes)
- Light estimation: measures ambient lighting so virtual objects can match the scene

Key Concepts:

- Plane: a detected real-world surface such as a floor, table, or wall
- Pose: a position and orientation in 3D world space
- Anchor: a fixed point that pins virtual content to a real-world location
- Hit test: a ray cast from a screen tap to find where it intersects detected geometry
Install required packages:
npx expo install expo-gl expo-three three@0.160.0
npm install expo-gl-cpp  # only needed on older Expo SDKs; recent versions of expo-gl bundle the native code
Configure app.json for AR:
{
"expo": {
"plugins": [
[
"expo-camera",
{
"cameraPermission": "Allow $(PRODUCT_NAME) to access camera for AR"
}
]
],
"android": {
"package": "com.yourcompany.arapp"
}
}
}
💡 Tip: AR requires physical devices. Simulators/emulators don't support ARCore. Test on Android phones with ARCore support.
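Because AR only runs on real hardware, it helps to guard AR screens at runtime. A minimal sketch using expo-device (an extra package you would install with npx expo install expo-device); note this only confirms physical Android hardware, not full ARCore support, which requires a native check:

import * as Device from 'expo-device';
import { Platform } from 'react-native';

// True only on physical Android hardware, where ARCore can run
export const canRunAR = () => Platform.OS === 'android' && Device.isDevice;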
Initialize an AR session with camera and plane detection:
import { useEffect, useRef } from 'react';
import { View, StyleSheet } from 'react-native';
import { GLView } from 'expo-gl';
import { Renderer } from 'expo-three'; // expo-three provides the GL renderer; scenes and cameras come from three
import * as THREE from 'three';
export default function BasicARScreen() {
const glViewRef = useRef(null);
const onContextCreate = async (gl) => {
// Setup Three.js renderer
const renderer = new Renderer({ gl });
renderer.setSize(gl.drawingBufferWidth, gl.drawingBufferHeight);
// Create scene
const scene = new THREE.Scene();
scene.background = null; // Transparent for AR
// Setup camera
const camera = new THREE.PerspectiveCamera(
  75,
  gl.drawingBufferWidth / gl.drawingBufferHeight,
  0.1,
  1000
);
camera.position.set(0, 0, 0);
// Add a simple cube for testing
const geometry = new THREE.BoxGeometry(0.1, 0.1, 0.1);
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
const cube = new THREE.Mesh(geometry, material);
cube.position.set(0, 0, -0.5); // 0.5m in front of camera
scene.add(cube);
// Animation loop
const animate = () => {
requestAnimationFrame(animate);
cube.rotation.x += 0.01;
cube.rotation.y += 0.01;
renderer.render(scene, camera);
gl.endFrameEXP();
};
animate();
};
return (
<View style={styles.container}>
<GLView
ref={glViewRef}
style={styles.glView}
onContextCreate={onContextCreate}
/>
</View>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
},
glView: {
flex: 1,
},
});
AR requires camera access:
import { Camera } from 'expo-camera';
import { useEffect, useState } from 'react';
import { Alert } from 'react-native';
// Written as a custom hook so screens can gate rendering on the result
export default function useARPermissions() {
const [hasPermission, setHasPermission] = useState(false);
useEffect(() => {
requestPermissions();
}, []);
const requestPermissions = async () => {
const { status } = await Camera.requestCameraPermissionsAsync();
if (status !== 'granted') {
Alert.alert(
'Camera Required',
'AR features need camera access to detect surfaces and track movement'
);
return;
}
setHasPermission(true);
};
return hasPermission;
}
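A sketch of using the hook to gate rendering, so the GL view never mounts before camera access is granted (the import paths are illustrative):

import React from 'react';
import { Text, View } from 'react-native';
import useARPermissions from './ARPermissions'; // the hook defined above
import BasicARScreen from './BasicARScreen'; // the AR view from earlier

export default function ARGate() {
  const hasPermission = useARPermissions();
  if (!hasPermission) {
    // Plain fallback until the user grants camera access
    return (
      <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}>
        <Text>Camera permission is required for AR</Text>
      </View>
    );
  }
  return <BasicARScreen />;
}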
Detect horizontal surfaces (floors, tables) and vertical surfaces (walls):
import { useEffect, useState } from 'react';
// Written as a custom hook since it returns data rather than JSX
export default function usePlaneDetection() {
const [detectedPlanes, setDetectedPlanes] = useState([]);
useEffect(() => {
// In a real AR implementation with ARCore
startPlaneDetection();
}, []);
const startPlaneDetection = () => {
// ARCore automatically detects planes
// Planes have: center point, extent (size), orientation
// Simulated plane data structure
const plane = {
id: 'plane_1',
type: 'horizontal', // or 'vertical'
centerPose: {
x: 0,
y: 0,
z: -1, // 1 meter in front
},
extent: {
width: 1.5, // meters
depth: 1.5,
},
polygon: [
// Corner points
{ x: -0.75, y: 0, z: -0.25 },
{ x: 0.75, y: 0, z: -0.25 },
{ x: 0.75, y: 0, z: -1.75 },
{ x: -0.75, y: 0, z: -1.75 },
],
};
setDetectedPlanes((prev) => [...prev, plane]);
};
return detectedPlanes;
}
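Before offering planes as placement targets, you might filter the detected list down to surfaces that are still tracked and large enough to hold an object (the 0.3 m threshold is an arbitrary example):

// Keep horizontal planes that are still tracked and reasonably sized
const getPlacementCandidates = (planes) =>
  planes.filter(
    (plane) =>
      plane.type === 'horizontal' &&
      plane.trackingState !== 'stopped' && // see Plane Properties below
      plane.extent.width >= 0.3 &&
      plane.extent.depth >= 0.3
  );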
Plane Types:

- Horizontal upward-facing: floors, tables, desks
- Horizontal downward-facing: ceilings
- Vertical: walls

Plane Properties:

- centerPose: Center of the plane in world space
- extent: Width and depth of the plane
- polygon: Boundary points
- trackingState: Tracking, paused, or stopped

Render plane boundaries to help users understand detected surfaces:
import * as THREE from 'three';
const createPlaneVisualization = (plane) => {
// Create plane mesh
const geometry = new THREE.PlaneGeometry(plane.extent.width, plane.extent.depth);
const material = new THREE.MeshBasicMaterial({
color: 0x00ff00,
transparent: true,
opacity: 0.3,
side: THREE.DoubleSide,
});
const mesh = new THREE.Mesh(geometry, material);
// Position at plane center
mesh.position.set(
plane.centerPose.x,
plane.centerPose.y,
plane.centerPose.z
);
// Rotate to match plane orientation
if (plane.type === 'horizontal') {
mesh.rotation.x = -Math.PI / 2; // Horizontal
}
return mesh;
};
// Usage in AR scene
const onPlaneDetected = (plane) => {
const planeMesh = createPlaneVisualization(plane);
scene.add(planeMesh);
};
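ARCore grows and refines planes as the device sees more of the room, so visualizations need refreshing. A sketch that swaps in new geometry when a plane's extent changes, disposing of the old geometry to avoid leaking GPU memory:

const updatePlaneVisualization = (mesh, plane) => {
  // Replace the geometry to match the new extent and free the old one
  mesh.geometry.dispose();
  mesh.geometry = new THREE.PlaneGeometry(plane.extent.width, plane.extent.depth);
  mesh.position.set(plane.centerPose.x, plane.centerPose.y, plane.centerPose.z);
};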
Determine where to place virtual objects based on user taps:
import { useState } from 'react';
import { Dimensions, TouchableWithoutFeedback, View } from 'react-native';

// Screen size for normalizing tap coordinates
const { width: screenWidth, height: screenHeight } = Dimensions.get('window');
export default function HitTestExample() {
const [hitTestResults, setHitTestResults] = useState([]);
const handleScreenTap = (event) => {
const { locationX, locationY } = event.nativeEvent;
// Normalize screen coordinates (0-1 range)
const normalizedX = locationX / screenWidth;
const normalizedY = locationY / screenHeight;
// Perform hit test (ARCore provides this)
const results = performHitTest(normalizedX, normalizedY);
if (results.length > 0) {
const hit = results[0]; // First hit result
console.log('Hit position:', hit.pose);
// Place object at hit location
placeObjectAtPose(hit.pose);
}
};
const performHitTest = (x, y) => {
// ARCore hit test returns:
// - pose: 3D position and orientation
// - distance: Distance from camera
// - trackingState: Reliability of hit
return [
{
pose: {
position: { x: 0.1, y: 0, z: -1 },
rotation: { x: 0, y: 0, z: 0, w: 1 },
},
distance: 1.0,
trackingState: 'tracking',
},
];
};
return (
<TouchableWithoutFeedback onPress={handleScreenTap}>
{/* TouchableWithoutFeedback requires exactly one child element */}
<View style={{ flex: 1 }}>{/* AR View goes here */}</View>
</TouchableWithoutFeedback>
);
}
Hit Test Types:

- Plane hit: the ray lands inside a detected plane; the most reliable target for placement
- Feature point hit: the ray lands on a tracked point in the environment; useful when no plane exists yet
- Depth hit: an intersection with the depth map, on devices that support the Depth API
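With multiple result types, a common strategy is to prefer plane hits and fall back to feature points. A sketch, assuming each result carries a type field (the field name is illustrative, not a specific ARCore binding):

// Pick the most reliable hit: tracked plane hits first, then feature points
const chooseBestHit = (results) => {
  const tracked = results.filter((hit) => hit.trackingState === 'tracking');
  return (
    tracked.find((hit) => hit.type === 'plane') ??
    tracked.find((hit) => hit.type === 'featurePoint') ??
    null
  );
};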
Create anchors to pin virtual objects to real-world locations:
// Assumes the `scene` from the AR setup above; placed anchors live in this list
const anchors = [];
const generateId = () => Math.random().toString(36).slice(2);

const placeObjectAtPose = (pose) => {
  // Create anchor at pose
const anchor = {
id: generateId(),
pose: pose,
trackingState: 'tracking',
};
// Create 3D object
const geometry = new THREE.BoxGeometry(0.1, 0.1, 0.1);
const material = new THREE.MeshStandardMaterial({ color: 0xff0000 });
const cube = new THREE.Mesh(geometry, material);
// Position at anchor
cube.position.set(
pose.position.x,
pose.position.y,
pose.position.z
);
// Add to scene
scene.add(cube);
// Store anchor reference
anchors.push({ anchor, object: cube });
};
// Update anchor positions each frame
const updateAnchors = () => {
anchors.forEach(({ anchor, object }) => {
if (anchor.trackingState === 'tracking') {
// ARCore updates anchor pose automatically
object.position.set(
anchor.pose.position.x,
anchor.pose.position.y,
anchor.pose.position.z
);
}
});
};
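Wiring this into the render loop keeps placed objects glued to their real-world spots. A minimal sketch, assuming the renderer, scene, camera, and gl context from the basic AR setup earlier:

const animate = () => {
  requestAnimationFrame(animate);
  updateAnchors(); // re-sync each object with its (possibly corrected) anchor pose
  renderer.render(scene, camera);
  gl.endFrameEXP();
};
animate();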
Match virtual object lighting to the environment:
const setupLightEstimation = (scene) => {
// Ambient light (overall illumination)
const ambientLight = new THREE.AmbientLight(0xffffff, 0.5);
scene.add(ambientLight);
// Directional light (sun/dominant light source)
const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8);
scene.add(directionalLight);
// Update lights based on ARCore light estimation
const updateLighting = (lightEstimate) => {
// ARCore provides:
// - colorCorrection: RGBA for ambient light
// - pixelIntensity: Brightness (0-1)
// - mainLightDirection: Primary light direction
// - mainLightIntensity: Primary light brightness
ambientLight.intensity = lightEstimate.pixelIntensity;
directionalLight.intensity = lightEstimate.mainLightIntensity;
directionalLight.position.set(
lightEstimate.mainLightDirection.x,
lightEstimate.mainLightDirection.y,
lightEstimate.mainLightDirection.z
);
};
return updateLighting;
};
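The returned updater is meant to run once per frame with the latest estimate. A hedged sketch; frame.getLightEstimate() stands in for however your ARCore bridge exposes the data and is not a real expo-gl API:

const updateLighting = setupLightEstimation(scene);

const onFrame = (frame) => {
  // Hypothetical bridge call; the real accessor depends on your ARCore binding
  const estimate = frame.getLightEstimate();
  if (estimate) {
    updateLighting(estimate);
  }
};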
Manage AR session states:
import { useEffect, useRef } from 'react';
// Written as a hook so screens can drive the session lifecycle
export default function useARSessionManager() {
const sessionRef = useRef(null);
useEffect(() => {
startARSession();
return () => {
stopARSession();
};
}, []);
const startARSession = () => {
console.log('Starting AR session');
// Initialize ARCore session
sessionRef.current = {
state: 'running',
planesDetected: [],
anchors: [],
};
};
const pauseARSession = () => {
if (sessionRef.current) {
sessionRef.current.state = 'paused';
console.log('AR session paused');
}
};
const resumeARSession = () => {
if (sessionRef.current) {
sessionRef.current.state = 'running';
console.log('AR session resumed');
}
};
const stopARSession = () => {
if (sessionRef.current) {
sessionRef.current.state = 'stopped';
console.log('AR session stopped');
}
};
return {
pauseARSession,
resumeARSession,
stopARSession,
};
}
Session States:

- running: the camera feed, tracking, and rendering are active
- paused: tracking is suspended (for example, when the app is backgrounded); anchors are retained
- stopped: the session is destroyed and its resources released
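To address the battery-drain pitfall listed at the end of this lesson, tie the session to the app's foreground state with React Native's AppState API. A minimal sketch using the pause/resume functions from the manager above:

import { useEffect } from 'react';
import { AppState } from 'react-native';

// Pause AR when the app leaves the foreground, resume when it returns
export const useARAppState = (pauseARSession, resumeARSession) => {
  useEffect(() => {
    const subscription = AppState.addEventListener('change', (nextState) => {
      if (nextState === 'active') {
        resumeARSession();
      } else {
        pauseARSession();
      }
    });
    return () => subscription.remove();
  }, [pauseARSession, resumeARSession]);
};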
Complete AR plane detection with tap-to-place:
import { useState, useRef } from 'react';
import { View, Text, TouchableWithoutFeedback, StyleSheet } from 'react-native';
import { GLView } from 'expo-gl';
import { Renderer } from 'expo-three';
import * as THREE from 'three';
export default function ARPlacementScreen() {
const [planesDetected, setPlanesDetected] = useState(0);
const [objectsPlaced, setObjectsPlaced] = useState(0);
const sceneRef = useRef(null);
const cameraRef = useRef(null);
const onContextCreate = async (gl) => {
const renderer = new Renderer({ gl });
renderer.setSize(gl.drawingBufferWidth, gl.drawingBufferHeight);
const scene = new THREE.Scene();
scene.background = null;
sceneRef.current = scene;
const camera = new THREE.PerspectiveCamera(
  75,
  gl.drawingBufferWidth / gl.drawingBufferHeight,
  0.1,
  1000
);
camera.position.set(0, 1.6, 0); // Eye level (1.6m)
cameraRef.current = camera;
// Add grid to visualize floor
const gridHelper = new THREE.GridHelper(10, 10);
gridHelper.position.y = -1.6;
scene.add(gridHelper);
// Lighting
const ambientLight = new THREE.AmbientLight(0xffffff, 0.6);
scene.add(ambientLight);
const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8);
directionalLight.position.set(1, 1, 1);
scene.add(directionalLight);
// Animation loop
const animate = () => {
requestAnimationFrame(animate);
renderer.render(scene, camera);
gl.endFrameEXP();
};
animate();
// Simulate plane detection
setTimeout(() => {
setPlanesDetected(3);
}, 2000);
};
const handleScreenTap = (event) => {
const { locationX, locationY } = event.nativeEvent;
// Place object at tapped location (simplified)
if (sceneRef.current) {
const geometry = new THREE.SphereGeometry(0.05, 16, 16);
const material = new THREE.MeshStandardMaterial({
color: Math.random() * 0xffffff,
});
const sphere = new THREE.Mesh(geometry, material);
// Random position for demo (in real AR, use hit test result)
sphere.position.set(
(Math.random() - 0.5) * 2,
-1.4,
(Math.random() - 0.5) * -2
);
sceneRef.current.add(sphere);
setObjectsPlaced((prev) => prev + 1);
}
};
return (
<TouchableWithoutFeedback onPress={handleScreenTap}>
<View style={styles.container}>
<GLView style={styles.glView} onContextCreate={onContextCreate} />
<View style={styles.overlay}>
<Text style={styles.instructions}>
{planesDetected > 0
? 'Tap to place objects'
: 'Move device to detect surfaces'}
</Text>
<Text style={styles.stats}>Planes: {planesDetected}</Text>
<Text style={styles.stats}>Objects: {objectsPlaced}</Text>
</View>
</View>
</TouchableWithoutFeedback>
);
}
const styles = StyleSheet.create({
container: {
flex: 1,
},
glView: {
flex: 1,
},
overlay: {
position: 'absolute',
top: 50,
left: 20,
right: 20,
backgroundColor: 'rgba(0, 0, 0, 0.7)',
padding: 16,
borderRadius: 8,
},
instructions: {
color: '#fff',
fontSize: 16,
marginBottom: 12,
},
stats: {
color: '#fff',
fontSize: 14,
marginTop: 4,
},
});
| Pitfall | Solution |
|---|---|
| App crashes on simulator | AR requires physical devices, test on real Android phones |
| Planes not detected | Ensure good lighting, move device slowly, scan textured surfaces |
| Objects drift over time | Use anchors to fix objects to planes |
| Poor tracking | Avoid low-light, featureless surfaces (blank walls) |
| Battery drain | Pause AR session when app backgrounded |
Key takeaways:

- expo-gl and expo-three for 3D rendering in React Native
- Plane detection, hit testing, and anchors for placing content in the world
- Light estimation to blend virtual objects into the scene
- Session lifecycle management to preserve battery

In the next lesson, we'll explore 3D Object Placement, learning how to load 3D models, manipulate objects with gestures, and create interactive AR experiences.