EyeTrax is a Python library that provides webcam-based eye tracking. Extract facial features, train a model, and predict gaze through an easy-to-use interface.
- Real‑time gaze estimation
- Multiple calibration workflows
- Optional filtering (Kalman / KDE)
- Model persistence – save / load a trained `GazeEstimator`
- Virtual-camera overlay that integrates with streaming software (e.g., OBS) via the bundled `eyetrax-virtualcam` CLI
From PyPI:

```
pip install eyetrax
```

From source:

```
git clone https://github.com/ck-zhang/eyetrax && cd eyetrax

# editable install (pick one)
python -m pip install -e .
pip install uv && uv sync
```
The EyeTrax package provides two command-line entry points:
| Command | Purpose |
|---|---|
| `eyetrax-demo` | Run an on-screen gaze overlay demo |
| `eyetrax-virtualcam` | Stream the overlay to a virtual webcam |
Options:

| Flag | Values | Default | Description |
|---|---|---|---|
| `--filter` | `kalman`, `kde`, `none` | `none` | Smoothing filter |
| `--camera` | int | `0` | Physical webcam index |
| `--calibration` | `9p`, `5p`, `lissajous` | `9p` | Calibration routine |
| `--background` (demo only) | path | — | Background image |
| `--confidence` (KDE only) | 0–1 | `0.5` | Contour probability |
```
eyetrax-demo --filter kalman
eyetrax-virtualcam --filter kde --calibration 5p
```
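In KDE mode, `--confidence` sets how much probability mass the gaze contour should enclose. To illustrate the idea, the density level for such a contour can be estimated from recent gaze samples — a sketch assuming `scipy.stats.gaussian_kde`; EyeTrax's internals may differ, and `density_threshold` is a hypothetical helper name:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_threshold(samples, confidence=0.5):
    """Return the KDE density level whose super-level set contains
    roughly `confidence` of the probability mass (illustrative only).

    samples: ndarray of shape (n, 2) with recent gaze points.
    """
    kde = gaussian_kde(samples.T)   # gaussian_kde expects shape (dims, n)
    densities = kde(samples.T)      # density at each sample point
    # The smallest density among the top `confidence` fraction of samples
    # (ranked by density) approximates the contour level.
    return np.quantile(densities, 1.0 - confidence)
```

Points whose density exceeds the returned level lie inside the drawn contour; raising `confidence` lowers the level and enlarges the contour.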
Demo video: OBS_demo.mp4
```python
from eyetrax import GazeEstimator, run_9_point_calibration
import cv2

# Create estimator and calibrate
estimator = GazeEstimator()
run_9_point_calibration(estimator)

# Save model
estimator.save_model("gaze_model.pkl")

# Load model
estimator = GazeEstimator()
estimator.load_model("gaze_model.pkl")

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Extract features from frame
    features, blink = estimator.extract_features(frame)

    # Predict screen coordinates
    if features is not None and not blink:
        x, y = estimator.predict([features])[0]
        print(f"Gaze: ({x:.0f}, {y:.0f})")
```
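Because `extract_features` reports blinks, one simple way to avoid gaze dropouts in the loop above is to hold the last valid point while the eyes are closed. A minimal sketch — `GazeHold` is a hypothetical helper, not part of the EyeTrax API:

```python
class GazeHold:
    """Hold the last valid gaze point while the user blinks."""

    def __init__(self):
        self.last = None

    def update(self, point, blink):
        # Only accept the new point when it exists and no blink was detected.
        if not blink and point is not None:
            self.last = point
        return self.last  # may be None until the first valid point arrives
```

In the loop, `point = hold.update((x, y), blink)` then always yields the most recent usable coordinate, so an overlay does not flicker during blinks.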
If you find EyeTrax useful, consider starring the repo or contributing. If you use it in your research, please cite it. The project is available under the MIT license.