abhishek-gola/Lightglue_slam

LightGlue_SLAM

A minimal, modular SLAM pipeline built around LightGlue feature matching with a modern learned feature extractor (SuperPoint by default; optional XFeat).

  • Frontend: SuperPoint (default) or XFeat extractor + LightGlue matcher
  • Motion estimation: 2-view initialization via Essential matrix, pose recovery
  • Mapping: incremental triangulation, 2D-3D tracking via PnP+RANSAC
  • Local optimization: small-window bundle adjustment using SciPy least squares
  • Outputs: ORB-SLAM3-style CameraTrajectory.txt and KeyFrameTrajectory.txt, sparse map PLY

The code is intentionally compact and readable; it takes inspiration from ORB-SLAM3's design without copying its code.

Requirements

  • Python 3.9+
  • CUDA-capable GPU recommended

Install dependencies (CPU-only will work for small demos, but GPU is recommended):

python -m venv .venv && source .venv/bin/activate
pip install -r LightGlue_SLAM/requirements.txt

Notes:

  • SuperPoint and LightGlue are included via the lightglue Python package.
  • XFeat is optional. If installed (pip install xfeat), run with --feature xfeat.
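Based on the upstream lightglue package's documented API, pairwise matching might look like the sketch below. The match_pair helper and file paths are illustrative; imports are deferred into the function so the snippet stays importable when the optional packages are missing.

```python
def match_pair(path0, path1, device="cpu", max_kp=2048):
    """Extract SuperPoint features from two images and match with LightGlue.

    Returns matched keypoint coordinates (kpts0, kpts1) as (M, 2) tensors.
    """
    import torch
    from lightglue import LightGlue, SuperPoint
    from lightglue.utils import load_image, rbd

    extractor = SuperPoint(max_num_keypoints=max_kp).eval().to(device)
    matcher = LightGlue(features="superpoint").eval().to(device)

    feats0 = extractor.extract(load_image(path0).to(device))
    feats1 = extractor.extract(load_image(path1).to(device))
    out = matcher({"image0": feats0, "image1": feats1})
    # rbd strips the batch dimension from each output dict
    feats0, feats1, out = [rbd(x) for x in (feats0, feats1, out)]
    m = out["matches"]  # (M, 2) index pairs into the two keypoint sets
    return feats0["keypoints"][m[:, 0]], feats1["keypoints"][m[:, 1]]
```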

Quick start

Assume you have a folder of undistorted RGB images and camera intrinsics.

python -m LightGlue_SLAM.run_slam \
  --images_dir /path/to/RGB_frames \
  --fx 525 --fy 525 --cx 319.5 --cy 239.5 \
  --feature superpoint --device cuda \
  --out_dir /path/to/output

Artifacts written to --out_dir:

  • CameraTrajectory.txt (all frames)
  • KeyFrameTrajectory.txt
  • map_points.ply (sparse 3D points)

You can visualize or post-process with your own tools.
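For example, assuming the trajectory files follow the TUM convention that ORB-SLAM3 uses (one "timestamp tx ty tz qx qy qz qw" row per frame), a minimal loader could be:

```python
import numpy as np

def load_tum_trajectory(path):
    """Parse a TUM-style trajectory file into arrays.

    Each non-comment row is: timestamp tx ty tz qx qy qz qw.
    Returns (timestamps, (N, 3) positions, (N, 4) quaternions as xyzw).
    """
    data = np.loadtxt(path, comments="#").reshape(-1, 8)
    return data[:, 0], data[:, 1:4], data[:, 4:8]
```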

Running on existing data in this repo

Example with the provided RGB_frames directory:

python -m LightGlue_SLAM.run_slam \
  --images_dir /media/abhishek/hugedrive1/SpatialAI/RGB_frames \
  --fx 525 --fy 525 --cx 319.5 --cy 239.5 \
  --feature superpoint --device cuda \
  --stride 1 --max_kp 4096

If aligned depth is available, you can later densify the map with the existing tools/trajectory_to_ply.py script using your preferred method. The SLAM pipeline itself writes a sparse PLY directly for quick inspection.
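For reference, a sparse ASCII PLY file is just a short header plus XYZ rows. A minimal writer sketch (not the repo's actual implementation):

```python
def write_ply(path, points):
    """Write an (N, 3) iterable of XYZ points as an ASCII PLY point cloud."""
    pts = list(points)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(pts)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in pts:
            f.write(f"{x} {y} {z}\n")
```

The resulting file opens directly in viewers such as MeshLab or CloudCompare.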

Feature extractor options

  • superpoint (default): robust and well-tested with LightGlue
  • xfeat: experimental support (requires pip install xfeat); if the package is unavailable, the code falls back to SuperPoint.
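The fallback logic can be as simple as the sketch below (the select_feature helper is illustrative, not the repo's actual function):

```python
def select_feature(requested: str) -> str:
    """Resolve the requested extractor name, falling back to SuperPoint
    when the optional XFeat package cannot be imported."""
    if requested == "xfeat":
        try:
            import xfeat  # noqa: F401 -- optional dependency
        except ImportError:
            return "superpoint"
    return requested
```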

Design overview

  • features.py: wrappers for SuperPoint and XFeat
  • matcher.py: LightGlue wrapper, with OpenCV fallback
  • geometry.py: calibrated geometry utilities (E-matrix, PnP, triangulation, BA residuals)
  • mapping.py: data structures for keyframes and landmarks
  • tracker.py: initialization, tracking, keyframe decision, local mapping
  • slam.py: high-level orchestrator
  • run_slam.py: CLI
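The BA residuals in geometry.py are not reproduced here, but a typical SciPy least-squares reprojection residual for one refined pose and its landmarks might look like this sketch (the parameterization is illustrative):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, n_pts, K, observations):
    """Reprojection residuals for one camera pose plus n_pts landmarks.

    params = [rotation vector (3), translation (3), flattened points (3*n_pts)]
    observations: (n_pts, 2) observed pixel coordinates.
    """
    rotvec, tvec = params[:3], params[3:6]
    pts3d = params[6:].reshape(n_pts, 3)
    cam = Rotation.from_rotvec(rotvec).apply(pts3d) + tvec  # world -> camera
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]                       # pinhole projection
    return (proj - observations).ravel()
```

A small-window BA then amounts to calling least_squares(reprojection_residuals, x0, args=(n_pts, K, obs)) with residuals stacked over the keyframes in the window.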

Notes

  • Images are assumed to be undistorted and at the resolution the intrinsics were calibrated for.
  • This is a compact reference implementation; for production, consider adding loop closure and global BA.
  • If you encounter dependency issues, ensure the correct CUDA/Torch versions are installed for your GPU.
