Air-Canvaser
A real-time gesture-based drawing system powered by computer vision
⸻
Overview
Air-Canvaser is a real-time hand-tracking web application that allows users to draw in the air using only their index finger and a webcam. The system tracks hand landmarks, interprets motion patterns, and renders smooth strokes onto a virtual canvas — creating a natural, touchless drawing experience.
This project explores the intersection of human-computer interaction, computer vision, and real-time rendering directly inside the browser.
⸻
The Problem
Traditional drawing tools require physical contact — mouse, stylus, or touchscreen. I wanted to experiment with a more natural interaction model:
Can hand motion alone become a reliable digital brush?
Most “air canvas” implementations rely on color detection (such as tracking a colored glove). I aimed to build a more robust and modern solution using landmark-based hand tracking.
⸻
The Solution
Air-Canvaser uses MediaPipe Hands to detect and track 21 hand landmarks in real time. The system specifically tracks the index fingertip (landmark 8) and converts its movement into drawing coordinates on a canvas.
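As a minimal sketch of that lookup (the helper name is illustrative; MediaPipe Hands reports each landmark with x and y normalized to [0, 1]):

```typescript
// A hand landmark as reported by MediaPipe Hands: x and y normalized to [0, 1].
interface Landmark {
  x: number;
  y: number;
}

const INDEX_FINGERTIP = 8; // MediaPipe Hands landmark index for the index fingertip

// Pull the index fingertip out of the 21-landmark array, or null if absent.
function getFingertip(landmarks: Landmark[]): Landmark | null {
  return landmarks.length > INDEX_FINGERTIP ? landmarks[INDEX_FINGERTIP] : null;
}
```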
To improve stability and user experience, I implemented:
• Motion smoothing to reduce jitter
• Velocity-based logic to prevent accidental strokes
• Gesture controls for drawing state management
• Dynamic brush effects (glow / neon style)
• Canvas reset and state handling
The result is a smooth, responsive, and visually immersive air-drawing system.
⸻
How It Works (Technical Breakdown)
1. Hand Detection
The webcam stream is processed frame-by-frame using MediaPipe Hands to extract hand landmarks.
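A minimal browser-side setup sketch using the `@mediapipe/hands` package (the CDN path and confidence thresholds are illustrative, not the project's exact configuration):

```typescript
import { Hands, Results } from "@mediapipe/hands";

const hands = new Hands({
  // Load the WASM and model assets from a CDN (path is illustrative).
  locateFile: (file) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${file}`,
});

hands.setOptions({
  maxNumHands: 1,              // a single drawing hand keeps tracking fast
  minDetectionConfidence: 0.7, // example thresholds; tune for lighting conditions
  minTrackingConfidence: 0.7,
});

hands.onResults((results: Results) => {
  const landmarks = results.multiHandLandmarks?.[0];
  if (landmarks) {
    // landmarks[8] is the index fingertip; hand it off to the mapping step.
  }
});

// Feed webcam frames from a <video> element on each animation frame.
async function processFrame(video: HTMLVideoElement) {
  await hands.send({ image: video });
  requestAnimationFrame(() => processFrame(video));
}
```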
2. Landmark Mapping
The index fingertip coordinates are normalized and mapped to the canvas coordinate system.
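A sketch of that mapping, assuming a mirrored selfie-view webcam feed (the mirroring of x is an assumption so strokes follow the on-screen hand):

```typescript
interface Point {
  x: number;
  y: number;
}

// Map a normalized landmark (x, y in [0, 1]) to canvas pixel coordinates,
// mirroring x so strokes follow the selfie-view webcam image.
function toCanvas(norm: Point, width: number, height: number): Point {
  return {
    x: (1 - norm.x) * width,
    y: norm.y * height,
  };
}
```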
3. Motion Filtering
To prevent shaky lines:
• Position smoothing is applied
• Rapid or unpredictable movements temporarily pause drawing
• Velocity thresholds control stroke continuity
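The filtering steps above can be sketched as two small helpers: an exponential moving average for position smoothing and a per-frame speed gate. The constants are illustrative defaults, not the project's tuned values.

```typescript
interface Point {
  x: number;
  y: number;
}

const SMOOTHING = 0.35; // blend factor: 0 = frozen, 1 = raw input (illustrative)
const MAX_SPEED = 120;  // px per frame above which drawing pauses (illustrative)

// Exponential moving average: ease the rendered position toward the new
// fingertip position, damping frame-to-frame jitter.
function smooth(prev: Point, next: Point, alpha = SMOOTHING): Point {
  return {
    x: prev.x + (next.x - prev.x) * alpha,
    y: prev.y + (next.y - prev.y) * alpha,
  };
}

// Velocity gate: a jump faster than maxSpeed is treated as tracker noise or a
// repositioning move, so the stroke is paused for that frame.
function shouldDraw(prev: Point, next: Point, maxSpeed = MAX_SPEED): boolean {
  return Math.hypot(next.x - prev.x, next.y - prev.y) <= maxSpeed;
}
```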
4. Rendering Engine
Using the HTML5 Canvas API:
• Lines are drawn between the previous and current finger positions
• Glow effects are added using shadow blur
• Brush thickness can be dynamically adjusted
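A sketch of the glow brush, assuming a standard 2D canvas context (the color, width, and glow-radius values are illustrative; the narrow `StrokeCtx` interface just lists the context members the brush touches):

```typescript
// The subset of CanvasRenderingContext2D the brush needs; keeping it
// structural makes the function easy to unit-test with a mock context.
interface StrokeCtx {
  lineWidth: number;
  lineCap: string;
  strokeStyle: string;
  shadowBlur: number;
  shadowColor: string;
  beginPath(): void;
  moveTo(x: number, y: number): void;
  lineTo(x: number, y: number): void;
  stroke(): void;
}

// Draw one stroke segment with a neon glow: shadowBlur in the stroke color
// produces the halo around the line.
function drawSegment(
  ctx: StrokeCtx,
  from: { x: number; y: number },
  to: { x: number; y: number },
  color = "#39ff14",
  width = 6,
): void {
  ctx.lineWidth = width;
  ctx.lineCap = "round";      // rounded joins keep fast strokes smooth
  ctx.strokeStyle = color;
  ctx.shadowColor = color;
  ctx.shadowBlur = width * 2; // glow radius scales with brush thickness
  ctx.beginPath();
  ctx.moveTo(from.x, from.y);
  ctx.lineTo(to.x, to.y);
  ctx.stroke();
}
```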
⸻
Tech Stack
• Next.js + React — Frontend architecture
• TypeScript — Type safety and maintainability
• MediaPipe Hands — Real-time hand landmark detection
• WebRTC — Camera access
• HTML5 Canvas API — Rendering engine
• Tailwind CSS — UI styling