LSD-SLAM using Kinect
- https://github.com/tum-vision/lsd_slam (code release of LSD-SLAM)
- https://www.youtube.com/watch?v=LZChzEcLNzI Semi-Dense Visual Odometry for a Monocular Camera (ICCV '13)
- http://vision.in.tum.de/research/vslam/lsdslam?redirect=2 (project page)

LSD-SLAM is a novel, direct monocular SLAM technique: instead of using keypoints, it operates directly on image intensities for both tracking and mapping. The camera is tracked using direct image alignment, while geometry is estimated as semi-dense depth maps obtained by filtering over many pixelwise stereo comparisons. A Sim(3) pose graph of keyframes then allows building scale-drift-corrected, large-scale maps including loop closures. LSD-SLAM runs in real time on a CPU, and even on a modern smartphone.
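The core of the "direct" idea is that tracking is posed as minimizing photometric error between images, with no feature detection at all. Here is a minimal, hypothetical sketch of that idea in NumPy: it recovers a small integer translation by searching for the shift that minimizes the sum of squared intensity differences. Real LSD-SLAM instead runs Gauss-Newton over an SE(3)/Sim(3) warp using the depth map, so treat this only as an illustration of the objective, not of the actual implementation.

```python
import numpy as np

def photometric_error(ref, cur, dx, dy):
    # Sum of squared intensity differences after shifting `cur` by (dx, dy).
    # np.roll wraps around at the borders; acceptable for this toy example.
    shifted = np.roll(np.roll(cur, dy, axis=0), dx, axis=1)
    return float(np.sum((ref - shifted) ** 2))

def align(ref, cur, search=5):
    # Exhaustive search over small integer translations. Direct methods in
    # practice minimize the same photometric objective with Gauss-Newton
    # over a full camera-pose warp instead of a brute-force search.
    best = min((photometric_error(ref, cur, dx, dy), dx, dy)
               for dx in range(-search, search + 1)
               for dy in range(-search, search + 1))
    return best[1], best[2]

# Synthetic test: a textured image and a copy shifted by (-1, 2) in (row, col).
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(-1, 2), axis=(0, 1))
print(align(ref, cur))  # the shift that undoes the motion: (-2, 1)
```

The same photometric objective, evaluated only at high-gradient pixels, is what makes the method "semi-dense": low-texture pixels contribute nothing to the error and are skipped.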
The latest issue of IEEE Trans. on Robotics is all about SLAM. There are some interesting new algorithms, and articles on all the old ones. This stuff actually works now, with nothing more than a camera as input.
Willow Garage is implementing some SLAM algorithms and open-sourcing the code by contributing it back to OpenCV.