Compilation of teaching resources for the Robotics Engineering undergraduate program (for supplementary study, Summer 2018)
Mobile app: Robotics Engineering - Apps on Google Play
This Robotics Engineering App provides the basic know-how on the foundations of robotics: modelling, planning and control. The App takes the user through a step-by-step design process in this rapidly advancing specialty area of robot design. This App provides the professional engineer and student with important and detailed methods and examples ...
----
GitHub + awesome
Searching GitHub for "awesome" plus a keyword turns up a great number of useful resources.
----
Python + Robotics
https://github.com/AtsushiSakai/PythonRobotics
----
编程基础部分:
Matlab:https://github.com/uhub/awesome-matlab
Python:https://github.com/vinta/awesome-python
C++:https://github.com/fffaraz/awesome-cpp
For example, robotics: https://github.com/kiloreux/awesome-robotics
This is a list of various books, courses and other resources for robotics. It's an attempt to gather useful material in one place for everybody who wants to learn more about the field.
Courses
- Artificial Intelligence for Robotics Udacity
- Robotics Nanodegree Udacity
- Autonomous Mobile Robots edX
- Underactuated Robotics edX
- Robot Mechanics and Control, Part I edX
- Robot Mechanics and Control, Part II edX
- Autonomous Navigation for Flying Robots edX
- Robotics Micromasters edX
- Robotics Specialization by GRASP Lab Coursera
- Control of Mobile Robots Coursera
- QUT Robot Academy QUT
- Robotic vision QUT
- Introduction to robotics MIT
- Robotics: Vision Intelligence and Machine Learning edX
- Applied robot design Stanford University
- Introduction to Robotics Stanford University
- Introduction to Mobile Robotics University of Freiburg
- Robotics edX
- Columbia Robotics edX
Books
- Probabilistic Robotics (Intelligent Robotics and Autonomous Agents series)
- Introduction to Autonomous Mobile Robots (Intelligent Robotics and Autonomous Agents series)
- Springer Handbook of Robotics
- Planning Algorithms
- A gentle introduction to ROS
- A Mathematical Introduction to Robotic Manipulation
- Learning Computing With Robots
- Robotics, Vision and Control: Fundamental Algorithms in MATLAB (Springer Tracts in Advanced Robotics)
- INTECH Books
- Introduction to Autonomous Robots
- Principles of Robot Motion: Theory, Algorithms, and Implementations
- Introduction to Modern Robotics: Mechanics, Planning, and Control [pdf]
- Learning ROS for Robotics Programming
- Mastering ROS for Robotics Programming
- Behavior Trees in Robotics and AI: An Introduction [pdf]
- Automated Planning and Acting [pdf]
Software and Libraries
Gazebo Robot Simulator
ROS The Robot Operating System (ROS) is a flexible framework for writing robot software. It is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms.
ROS2 ROS2 is a new version of ROS with radical design changes and improvements over the older ROS version.
RobWork RobWork is a collection of C++ libraries for simulation and control of robot systems. RobWork is used for research and education as well as for practical robot applications.
MRPT Mobile Robot Programming Toolkit provides developers with portable and well-tested applications and libraries covering data structures and algorithms employed in common robotics research areas.
Robotics Library The Robotics Library (RL) is a self-contained C++ library for robot kinematics, motion planning and control. It covers mathematics, kinematics and dynamics, hardware abstraction, motion planning, collision detection, and visualization.
Simbad 2D/3D simulator in Java and Jython.
Morse General purpose indoor/outdoor 3D simulator.
Carmen CARMEN is an open-source collection of software for mobile robot control. CARMEN is modular software designed to provide basic navigation primitives including: base and sensor control, logging, obstacle avoidance, localization, path planning, and mapping.
Peekabot Peekabot is a real-time, networked 3D visualization tool for robotics, written in C++. Its purpose is to simplify the visualization needs faced by a roboticist daily.
YARP Yet Another Robot Platform.
V-REP Robot simulator, 3D, source available, Lua scripting, APIs for C/C++, Python, Java, Matlab, URBI, 2 physics engines, full kinematic solver.
Webots Webots is a development environment used to model, program and simulate mobile robots.
Drake A planning, control and analysis toolbox for nonlinear dynamical systems.
Neurorobotics Platform (NRP) An Internet-accessible simulation system that allows the simulation of robots controlled by spiking neural networks.
The Player Project Free Software tools for robot and sensor applications
Open AI's Roboschool Open-source software for robot simulation, integrated with OpenAI Gym.
ViSP Open-source visual servoing platform library, able to compute control laws that can be applied to robotic systems.
ROS Behavior Trees Open-source library to create robot behaviors in the form of Behavior Trees running in ROS (Robot Operating System).
Papers
Conferences
- ACM/IEEE International Conference on Human Robot Interaction (HRI)
- CISM IFToMM Symposium on Robot Design, Dynamics and Control (RoManSy)
- IEEE Conference on Decision and Controls (CDC)
- IEEE International Conference on Rehabilitation Robotics (ICORR)
- IEEE International Conference on Robotics and Automation (ICRA)
- IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
- IEEE-RAS International Conference on Humanoid Robots (Humanoids)
- International Symposium of Robotic Research (ISRR)
- International Symposium of Experimental Robotics (ISER)
- Robotica
- Robotics: Science and Systems Conference (RSS)
- The International Workshop on the Algorithmic Foundations of Robotics (WAFR)
Journals
- Autonomous Robots
- Bioinspiration & Biomimetics
- Frontiers in Robotics and AI
- IEEE Robotics & Automation Magazine
- IEEE Transactions on Haptics
- IEEE Transactions on Robotics
- IEEE/ASME Transactions on Mechatronics
- International Journal of Social Robotics
- Journal of Field Robotics
- Journal of Intelligent & Robotic Systems
- Mechatronics
- Robotics and Computer-Integrated Manufacturing
- Robotics and Autonomous Systems
- The International Journal of Robotics Research
Competitions
- ICRA Robot Challenges
- RobotChallenge
- DARPA Robotics Challenge
- European Robotics Challenges
- First Robotics Competition
- VEX Robotics Competition
- RoboCup
- Eurobot International Students Robotics Contest
- RoboMasters
- RoboSoft, Grand Challenge
- Intelligent Ground Vehicle Competition
- Robotex The biggest robotics festival in Europe
Companies
- Boston Dynamics robotics R&D company, creator of the state of the art Atlas and Spot robots
- iRobot manufacturer of the famous Roomba robotic vacuum cleaner
- PAL Robotics
- Aldebaran Robotics creator of the NAO robot
- ABB Robotics the largest manufacturer of industrial robots
- KUKA Robotics major manufacturer of industrial robots targeted at factory automation
- FANUC industrial robots manufacturer with the biggest install base
- Rethink Robotics creator of the collaborative robot Baxter
- DJI industry leader in drones for both commercial and industrial needs.
- The construct sim A cloud based tool for building modern, future-proof robot simulations.
- Fetch Robotics A robotics startup in San Jose, CA building the future of e-commerce fulfillment and R&D robots.
- Festo Robotics Festo is known for building robots that move like animals, such as the seagull-like SmartBird and its robotic jellyfish, butterflies and kangaroos.
Misc
- IEEE Spectrum Robotics robotics section of the IEEE Spectrum magazine
- MIT Technology Review Robotics robotics section of the MIT Technology Review magazine
- reddit robotics subreddit
- RosCON conference (video talks included)
- Carnegie Mellon Robotics Academy
- Let's Make Robots
- How do I learn Robotics?
- Free NXT Lego MindStorms NXT-G code tutorials
- StackExchange Robotics community
- 47 Programmable robotic kits
Related awesome lists
- Awesome Artificial Intelligence
- Awesome Computer Vision
- Awesome Machine Learning
- Awesome Deep Learning
- Awesome Deep Vision
- Awesome Reinforcement Learning
- Awesome Robotics
- Awesome Robotics Libraries
Awesome links, software libraries, papers, and other interesting resources that are useful for robots.
Relevant Awesome Lists
- Kiloreaux/awesome-robotics - Learn about Robotics.
- Robotics Libraries - Another list of awesome robotics libraries.
- Computer Vision
- Deep Learning - Neural networks.
- TensorFlow - Library for machine intelligence.
- Papers - The most cited deep learning papers.
- Deep Vision - Deep learning for computer vision
- Data Visualization - See what your robot is doing with any programming language.
Simulators
- V-REP - Create, Simulate, any Robot.
- Microsoft Airsim - Open source simulator based on Unreal Engine for autonomous vehicles from Microsoft AI & Research.
- Bullet Physics SDK - Real-time collision detection and multi-physics simulation for VR, games, visual effects, robotics, machine learning etc. Also see pybullet.
Visualization, Video, Display, and Rendering
- Pangolin - A lightweight portable rapid development library for managing OpenGL display / interaction and abstracting video input.
- PlotJuggler - Quickly plot and re-plot data on the fly! Includes optional ROS integration.
- Data Visualization - A list of awesome data visualization tools.
Machine Learning
TensorFlow related
- Keras - Deep Learning library for Python. Convnets, recurrent neural networks, and more. Runs on TensorFlow or Theano.
- keras-contrib - Keras community contributions.
- TensorFlow - An open-source software library for Machine Intelligence.
- recurrentshop - Framework for building complex recurrent neural networks with Keras.
- tensorpack - Neural Network Toolbox on TensorFlow.
- tensorlayer - Deep Learning and Reinforcement Learning Library for Researchers and Engineers.
- TensorFlow-Examples - TensorFlow Tutorial and Examples for beginners.
- hyperas - Keras + Hyperopt: A very simple wrapper for convenient hyperparameter optimization.
- elephas - Distributed Deep learning with Keras & Spark
- PipelineAI - End-to-End ML and AI Platform for Real-time Spark and Tensorflow Data Pipelines.
- sonnet - Google Deepmind APIs on top of TensorFlow.
- visipedia/tfrecords - Demonstrates the use of TensorFlow's TFRecord data format.
Image Segmentation
- tf-image-segmentation - Image Segmentation framework based on Tensorflow and TF-Slim library.
- Keras-FCN
Logging and Messaging
- spdlog - Super fast C++ logging library.
- lcm - Lightweight Communications and Marshalling, message passing and data marshalling for real-time systems where high-bandwidth and low latency are critical.
Tracking
- simtrack - A simulation-based framework for tracking.
- ar_track_alvar - AR tag tracking library for ROS.
- artoolkit5 - Augmented Reality Toolkit, which has excellent AR tag tracking software.
Robot Operating System (ROS)
- ROS - Main ROS website.
- ros2/design - Design documentation for ROS 2.0 effort.
Kinematics, Dynamics, Constrained Optimization
- jrl-umi3218/Tasks - Tasks is a library for real-time control of robots and kinematic trees using constrained optimization.
- jrl-umi3218/RBDyn - RBDyn provides a set of classes and functions to model the dynamics of rigid body systems.
- ceres-solver - Solve Non-linear Least Squares problems with bounds constraints and general unconstrained optimization problems. Used in production at Google since 2010.
- orocos_kinematics_dynamics - Orocos Kinematics and Dynamics C++ library.
- flexible-collision-library - Performs three types of proximity queries on a pair of geometric models composed of triangles, integrated with ROS.
- robot_calibration - generic robot kinematics calibration for ROS
Calibration
- handeye-calib-camodocal - Generic robot hand-eye calibration.
- robot_calibration - Generic robot kinematics calibration for ROS.
- kalibr - Camera and IMU calibration for ROS.
Reinforcement Learning
- TensorForce - A TensorFlow library for applied reinforcement learning
- gqcnn - Grasp Quality Convolutional Neural Networks (GQ-CNNs) for grasp planning using training datasets from the Dexterity Network (Dex-Net)
- Guided Policy Search - Guided policy search (gps) algorithm and LQG-based trajectory optimization, meant to help others understand, reuse, and build upon existing work.
Drivers for Sensors, Devices and Arms
- libfreenect2 - Open source drivers for the Kinect for Windows v2 and Xbox One devices.
- iai_kinect2 - Tools for using the Kinect One (Kinect v2) in ROS.
- grl - Generic Robotics Library: Cross platform drivers for Kuka iiwa and Atracsys FusionTrack with optional v-rep and ros drivers. Also has cross platform Hand Eye Calibration and Tool Tip Calibration.
Datasets
- pascal voc 2012 - The classic reference image segmentation dataset.
- openimages - Huge imagenet style dataset by Google.
- COCO - Objects with segmentation, keypoints, and links to many other external datasets.
- cocostuff - COCO additional full scene segmentation including backgrounds and annotator.
- Google Brain Robot Data - Robotics datasets including grasping, pushing, and pouring.
- Materials in Context - Materials Dataset with real world images in 23 categories.
- Dex-Net 2.0 - 6.7 million pairs of synthetic point clouds and grasps with robustness labels.
Linear Algebra & Geometry
- Eigen - Eigen is a C++ template library for linear algebra: matrices, vectors, numerical solvers, and related algorithms.
- Boost.QVM - Quaternions, Vectors, Matrices library for Boost.
- Boost.Geometry - Boost.Geometry contains instantiable geometry classes, but library users can also use their own.
- SpaceVecAlg - Implementation of spatial vector algebra for 3D geometry with the Eigen3 linear algebra library.
- Sophus - C++ implementation of Lie Groups which are for 3D Geometry, using Eigen.
Point Clouds
- libpointmatcher - An "Iterative Closest Point" library for robotics and 2-D/3-D mapping.
- Point Cloud Library (pcl) - The Point Cloud Library (PCL) is a standalone, large scale, open project for 2D/3D image and point cloud processing.
Simultaneous Localization and Mapping (SLAM)
- ElasticFusion - Real-time dense visual SLAM system.
- co-fusion - Real-time Segmentation, Tracking and Fusion of Multiple Objects. Extends ElasticFusion.
- Google Cartographer - Cartographer is a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations.
- OctoMap - An Efficient Probabilistic 3D Mapping Framework Based on Octrees. Contains the main OctoMap library, the viewer octovis, and dynamicEDT3D.
- ORB_SLAM2 - Real-Time SLAM for Monocular, Stereo and RGB-D Cameras, with Loop Detection and Relocalization Capabilities.
A list of vision-based SLAM / Visual Odometry open-source projects, libraries, datasets, tools, and studies.
Index
Libraries
Basic vision and transformation libraries
Thread-safe queue libraries
Loop detection
Graph Optimization
Map library
Dataset
Datasets for benchmark/test/experiment/evaluation
Tools
Projects
RGB (Monocular):
- PTAM (Parallel Tracking and Mapping)
[1] Georg Klein and David Murray, "Parallel Tracking and Mapping for Small AR Workspaces", Proc. ISMAR 2007
[2] Georg Klein and David Murray, "Improving the Agility of Keyframe-based SLAM", Proc. ECCV 2008
- DSO. Available on ROS
Direct Sparse Odometry, J. Engel, V. Koltun, D. Cremers, In arXiv:1607.02565, 2016
A Photometrically Calibrated Benchmark For Monocular Visual Odometry, J. Engel, V. Usenko, D. Cremers, In arXiv:1607.02555, 2016
- LSD-SLAM. Available on ROS
LSD-SLAM: Large-Scale Direct Monocular SLAM, J. Engel, T. Schöps, D. Cremers, ECCV '14
Semi-Dense Visual Odometry for a Monocular Camera, J. Engel, J. Sturm, D. Cremers, ICCV '13
- ORB-SLAM. Available on ROS
[1] Raúl Mur-Artal, J. M. M. Montiel and Juan D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015. (2015 IEEE Transactions on Robotics Best Paper Award). PDF.
[2] Dorian Gálvez-López and Juan D. Tardós. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188-1197, 2012. PDF.
- Nister's Five Point Algorithm for Essential Matrix estimation, and FAST features, with a KLT tracker
D. Nister, “An efficient solution to the five-point relative pose problem,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 26, no. 6, pp. 756–770, 2004.
- SVO-SLAM. Available on ROS
Christian Forster, Matia Pizzoli, Davide Scaramuzza, "SVO: Fast Semi-direct Monocular Visual Odometry," IEEE International Conference on Robotics and Automation, 2014.
RGB and Depth (RGB-D):
Real-Time Visual Odometry from Dense RGB-D Images, F. Steinbruecker, J. Sturm, D. Cremers, ICCV, 2011
- Dense Visual SLAM for RGB-D Cameras. Available on ROS
[1] Dense Visual SLAM for RGB-D Cameras (C. Kerl, J. Sturm, D. Cremers), In Proc. of the Int. Conf. on Intelligent Robot Systems (IROS), 2013.
[2] Robust Odometry Estimation for RGB-D Cameras (C. Kerl, J. Sturm, D. Cremers), In Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2013.
[3] Real-Time Visual Odometry from Dense RGB-D Images (F. Steinbruecker, J. Sturm, D. Cremers), In Workshop on Live Dense Reconstruction with Moving Cameras at the Intl. Conf. on Computer Vision (ICCV), 2011.
- RTAB MAP - Real-Time Appearance-Based Mapping. Available on ROS
Online Global Loop Closure Detection for Large-Scale Multi-Session Graph-Based SLAM, 2014
Appearance-Based Loop Closure Detection for Online Large-Scale and Long-Term Operation, 2013
- ORB-SLAM2. Available on ROS
[1] Raúl Mur-Artal, J. M. M. Montiel and Juan D. Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015. (2015 IEEE Transactions on Robotics Best Paper Award).
[2] Dorian Gálvez-López and Juan D. Tardós. Bags of Binary Words for Fast Place Recognition in Image Sequences. IEEE Transactions on Robotics, vol. 28, no. 5, pp. 1188-1197, 2012.
Kahler, O. and Prisacariu, V.~A. and Ren, C.~Y. and Sun, X. and Torr, P.~H.~S and Murray, D.~W. Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices. IEEE Transactions on Visualization and Computer Graphics (Proceedings International Symposium on Mixed and Augmented Reality 2015).
Real-time Large Scale Dense RGB-D SLAM with Volumetric Fusion, T. Whelan, M. Kaess, H. Johannsson, M.F. Fallon, J. J. Leonard and J.B. McDonald, IJRR '14
[1] ElasticFusion: Real-Time Dense SLAM and Light Source Estimation, T. Whelan, R. F. Salas-Moreno, B. Glocker, A. J. Davison and S. Leutenegger, IJRR '16
[2] ElasticFusion: Dense SLAM Without A Pose Graph, T. Whelan, S. Leutenegger, R. F. Salas-Moreno, B. Glocker and A. J. Davison, RSS '15
Martin Rünz and Lourdes Agapito. Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects. 2017 IEEE International Conference on Robotics and Automation (ICRA)
RGBD and LIDAR:
- Google's cartographer. Available on ROS
----
awesome-deep-vision-web-demo
A curated list of awesome deep vision web demos.
Contributing
Please feel free to open pull requests to add new demos.
Vision Demo List
Hand-written Digit Recognition
- https://tensorflow-mnist.herokuapp.com/
- https://erkaman.github.io/regl-cnn/src/demo.html
- https://transcranial.github.io/keras-js/#/mnist-cnn
Image Segmentation
- CRF+RNN (ICCV 2015) http://www.robots.ox.ac.uk/~szheng/crfasrnndemo
Image Classification
- VGG-16 https://deeplearning4j.org/demo-classifier-vgg16
- Illustration2vec http://demo.illustration2vec.net/
- Leiden Univ. http://goliath.liacs.nl/
- Clarifai https://www.clarifai.com/demo
- Google Cloud Vision API http://vision-explorer.reactive.ai/#/?_k=aodf68
- IBM Watson Vision API https://visual-recognition-demo.mybluemix.net/
- Karpathy: MNIST ConvNet http://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html
- Karpathy: CIFAR10 ConvNet http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html
- keras-js: IMAGENET 50-layer Residual Network https://transcranial.github.io/keras-js/#/resnet50
- keras-js: IMAGENET Inception-v3 https://transcranial.github.io/keras-js/#/inception-v3
- keras-js: IMAGENET SqueezeNet v1.1 https://transcranial.github.io/keras-js/#/squeezenet-v1.1
- Teachable Machine: 3 Classes with online video https://teachablemachine.withgoogle.com/
Object Detection
Text Detection
- Single Shot Text Detector with Regional Attention (ICCV 2017) http://128.227.246.42:5555/
Age Estimation
AutoEncoder
- VAE : MNIST Generation http://www.dpkingma.com/sgvb_mnist_demo/demo.html
- VAE : keras-js MNIST https://transcranial.github.io/keras-js/#/mnist-vae
- VAE : Gray Face Generation http://vdumoulin.github.io/morphing_faces/online_demo.html
- Karpathy: Denoising AutoEncoder http://cs.stanford.edu/people/karpathy/convnetjs/demo/autoencoder.html
GAN
- GAN : 1D Gaussian Distribution Fitting http://cs.stanford.edu/people/karpathy/gan/
- DCGAN : Asian Color Face Generation http://carpedm20.github.io/faces/
- DCGAN : Character Generation http://mattya.github.io/chainer-DCGAN/
- ACGAN : keras-js MNIST https://transcranial.github.io/keras-js/#/mnist-acgan
- Girls Character Generation : http://make.girls.moe/#/
- GAN-playground : https://reiinakano.github.io/gan-playground/
Style Transfer
- On/Off-line Style Transfer, Deep Dream https://deepdreamgenerator.com/gallery
- Offline Style Transfer, http://demos.algorithmia.com/deep-style/
Image Translation
- pix2pix https://affinelayer.com/pixsrv/index.html
- pix2pix (human face) http://fotogenerator.npocloud.nl/
Colorization
- sketch to color image https://paintschainer.preferred.tech/
- draw sketch and colorize it https://sketch.pixiv.net/draw
- deepcolor http://color.kvfrans.com/
- gray photo to color photo http://demos.algorithmia.com/colorize-photos/
Image Captioning
Visual Q&A
Google's AI Experiment
- Quick Draw https://quickdraw.withgoogle.com/#
- Auto Draw http://www.autodraw.com/
- Sketch RNN (Draw together with a neural network) https://aiexperiments.withgoogle.com/sketch-rnn-demo
Gaze Manipulation
Super-Resolution
Saliency Map
- SALICON (ICCV 2015) http://salicon.net/demo/#
Font Generation
Image to ASCII Code
Image Completion
OCR (Optical Character Recognition)
Human Pose Estimation
Others
Neural-Net Demo
- TensorFlow Playground http://playground.tensorflow.org/
- karpathy: toy 2d classification http://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html
Text To Speech
Speech Noise Reduction
- SEGAN ('17.03) http://veu.talp.cat/segan/
Singing Generation
- Neural Singing ('17.04) http://www.dtic.upf.edu/~mblaauw/IS2017_NPSS/
Sound Synthesizer
- Neural Synthesizer https://aiexperiments.withgoogle.com/sound-maker/view/
----
PythonRobotics
Python sample codes for robotics algorithms.
Table of Contents
- What is this?
- Requirements
- How to use
- Localization
- Mapping
- SLAM
- Path Planning
- Dynamic Window Approach
- Grid based search
- Model Predictive Trajectory Generator
- State Lattice Planning
- Probabilistic Road-Map (PRM) planning
- Voronoi Road-Map planning
- Rapidly-Exploring Random Trees (RRT)
- Cubic spline planning
- B-Spline planning
- Eta^3 Spline path planning
- Bezier path planning
- Quintic polynomials planning
- Dubins path planning
- Reeds Shepp planning
- LQR based path planning
- Optimal Trajectory in a Frenet Frame
- Path tracking
- License
- Contribution
- Support
- Authors
What is this?
This is a Python code collection of robotics algorithms, especially for autonomous navigation.
Features:
- Widely used and practical algorithms are selected.
- Minimum dependency.
- Easy to read for understanding each algorithm's basic idea.
Requirements
- Python 3.6.x
- numpy
- scipy
- matplotlib
- pandas
How to use
- Install the required libraries.
- Clone this repo.
- Execute the Python script in each directory.
- Add a star to this repo if you like it.
Localization
Extended Kalman Filter localization
This is a sensor fusion localization with an Extended Kalman Filter (EKF).
The blue line is the true trajectory, the black line is the dead reckoning trajectory,
the green points are position observations (e.g. GPS), and the red line is the trajectory estimated with the EKF.
The red ellipse is the covariance ellipse estimated by the EKF.
Ref:
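As a reading aid, here is a minimal numpy sketch of one EKF predict/update cycle for this kind of setup: state [x, y, yaw, v], a velocity/yaw-rate input, and a GPS-like position observation. The noise values and function names are illustrative assumptions, not the repository's code.

```python
import numpy as np

DT = 0.1  # time step [s]

# process and observation noise covariances (illustrative values)
Q = np.diag([0.1, 0.1, np.deg2rad(1.0), 1.0]) ** 2
R = np.diag([1.0, 1.0]) ** 2

# we only observe x and y (GPS-like position fix)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

def motion_model(x, u):
    """Propagate state [x, y, yaw, v] with input [v, yaw_rate]."""
    xn = x.copy()
    xn[0] += u[0] * np.cos(x[2]) * DT
    xn[1] += u[0] * np.sin(x[2]) * DT
    xn[2] += u[1] * DT
    xn[3] = u[0]
    return xn

def jacob_f(x, u):
    """Jacobian of the motion model with respect to the state."""
    yaw = x[2]
    return np.array([
        [1.0, 0.0, -DT * u[0] * np.sin(yaw), DT * np.cos(yaw)],
        [0.0, 1.0,  DT * u[0] * np.cos(yaw), DT * np.sin(yaw)],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]])

def ekf_step(x_est, P_est, z, u):
    # predict
    x_pred = motion_model(x_est, u)
    F = jacob_f(x_est, u)
    P_pred = F @ P_est @ F.T + Q
    # update with the position observation z
    y = z - H @ x_pred                    # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(4), np.eye(4)
x, P = ekf_step(x, P, z=np.array([0.1, -0.05]), u=np.array([1.0, 0.1]))
print(x)
```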
Unscented Kalman Filter localization
This is a sensor fusion localization with an Unscented Kalman Filter (UKF).
The lines and points have the same meaning as in the EKF simulation.
Ref:
Particle filter localization
This is a sensor fusion localization with a Particle Filter (PF).
The blue line is the true trajectory, the black line is the dead reckoning trajectory,
and the red line is the trajectory estimated with the PF.
It is assumed that the robot can measure distances to landmarks (RFID tags).
These measurements are used for PF localization.
Ref:
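A compact sketch of the range-only particle filter idea described above: propagate particles with a noisy motion model, weight them by how well predicted landmark ranges match the measured ones, then resample. The landmark positions, noise levels, and names below are illustrative assumptions, not the repository's values.

```python
import numpy as np

DT = 0.1
NP = 100                                   # number of particles
LANDMARKS = np.array([[10.0, 0.0], [10.0, 10.0], [0.0, 15.0]])  # RFID positions (illustrative)
R_RANGE = 0.5                              # std. dev. of a range measurement

def motion(p, u):
    """Propagate particles [x, y, yaw] with a noisy input [v, yaw_rate]."""
    v = u[0] + np.random.randn(len(p)) * 0.1
    w = u[1] + np.random.randn(len(p)) * 0.05
    p[:, 0] += v * np.cos(p[:, 2]) * DT
    p[:, 1] += v * np.sin(p[:, 2]) * DT
    p[:, 2] += w * DT
    return p

def pf_step(particles, weights, u, ranges):
    particles = motion(particles, u)
    # weight particles by how well predicted ranges match the measured ones
    for lm, z in zip(LANDMARKS, ranges):
        d = np.hypot(particles[:, 0] - lm[0], particles[:, 1] - lm[1])
        weights *= np.exp(-0.5 * ((z - d) / R_RANGE) ** 2)
    weights += 1e-300                      # avoid an all-zero weight vector
    weights /= weights.sum()
    estimate = weights @ particles         # weighted mean state
    # resample particles in proportion to their weights
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles)), estimate

particles = np.zeros((NP, 3))
weights = np.full(NP, 1.0 / NP)
true_pos = np.array([0.1, 0.05])
ranges = np.hypot(LANDMARKS[:, 0] - true_pos[0], LANDMARKS[:, 1] - true_pos[1])
particles, weights, est = pf_step(particles, weights, u=np.array([1.0, 0.1]), ranges=ranges)
print(est)
```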
Histogram filter localization
This is a 2D localization example with a histogram filter.
The red cross is the true position, and the black points are the RFID positions.
The blue grid shows the position probability of the histogram filter.
In this simulation, x and y are unknown, while yaw is known.
The filter integrates the speed input and range observations from the RFID tags for localization.
An initial position estimate is not needed.
Ref:
Mapping
Gaussian grid map
This is a 2D Gaussian grid mapping example.
Ray casting grid map
This is a 2D ray casting grid mapping example.
k-means object clustering
This is a 2D object clustering example with the k-means algorithm.
Object shape recognition using circle fitting
This is an object shape recognition example using circle fitting.
The blue circle is the true object shape.
The red crosses are observations from a ranging sensor.
The red circle is the object shape estimated by circle fitting.
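For reference, a least-squares circle fit can be written in a few lines. The algebraic variant below (fitting x^2 + y^2 + D x + E y + F = 0) is one common choice and is only a sketch of the general idea, not necessarily the routine used in the repository.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit: x^2 + y^2 + D x + E y + F = 0."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0            # circle centre
    r = np.sqrt(cx ** 2 + cy ** 2 - F)     # circle radius
    return cx, cy, r

# noisy range-sensor-like observations of a circle of radius 2 centred at (3, 1)
theta = np.linspace(0, 2 * np.pi, 60)
x = 3.0 + 2.0 * np.cos(theta) + np.random.randn(60) * 0.05
y = 1.0 + 2.0 * np.sin(theta) + np.random.randn(60) * 0.05
print(fit_circle(x, y))
```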
SLAM
Simultaneous Localization and Mapping(SLAM) examples
Iterative Closest Point (ICP) Matching
This is a 2D ICP matching example with singular value decomposition.
It can calculate a rotation matrix and a translation vector between two point sets.
Ref:
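The SVD step at the heart of this matching is short enough to sketch: given two matched 2D point sets, the rotation and translation follow from the cross-covariance of the centered points. A full ICP loop would re-associate nearest neighbours and repeat; the snippet below assumes the correspondences are already known.

```python
import numpy as np

def svd_rigid_transform(prev_points, curr_points):
    """Estimate rotation R and translation t such that R @ prev + t ≈ curr.

    Both inputs are (2, N) arrays of matched points.
    """
    pm = prev_points.mean(axis=1, keepdims=True)
    cm = curr_points.mean(axis=1, keepdims=True)
    # cross-covariance of the centered point sets
    W = (curr_points - cm) @ (prev_points - pm).T
    U, _, Vt = np.linalg.svd(W)
    R = U @ Vt
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        U[:, -1] *= -1
        R = U @ Vt
    t = cm - R @ pm
    return R, t

# verify on points related by a known rotation + translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
prev = np.random.rand(2, 50)
curr = R_true @ prev + np.array([[1.0], [2.0]])
R, t = svd_rigid_transform(prev, curr)
print(np.allclose(R, R_true), t.ravel())
```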
EKF SLAM
This is an Extended Kalman Filter based SLAM example.
The blue line is ground truth, the black line is dead reckoning, and the red line is the trajectory estimated with EKF SLAM.
The green crosses are estimated landmarks.
Ref:
FastSLAM 1.0
This is a feature based SLAM example using FastSLAM 1.0.
The blue line is ground truth, the black line is dead reckoning, and the red line is the trajectory estimated with FastSLAM.
The red points are the particles of FastSLAM.
Black points are landmarks; blue crosses are the landmark positions estimated by FastSLAM.
Ref:
FastSLAM 2.0
This is a feature based SLAM example using FastSLAM 2.0.
The animation has the same meaning as that of FastSLAM 1.0.
Ref:
Graph based SLAM
This is a graph based SLAM example.
The blue line is ground truth.
The black line is dead reckoning.
The red line is the estimated trajectory with Graph based SLAM.
The black stars are landmarks for graph edge generation.
Ref:
Path Planning
Dynamic Window Approach
This is a 2D navigation sample code with the Dynamic Window Approach.
Grid based search
Dijkstra algorithm
This is a 2D grid-based shortest path planning example with Dijkstra's algorithm.
In the animation, cyan points are searched nodes.
A* algorithm
This is a 2D grid-based shortest path planning example with the A* algorithm.
In the animation, cyan points are searched nodes.
Its heuristic is the 2D Euclidean distance.
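A minimal A* sketch over a 4-connected boolean occupancy grid with the Euclidean heuristic, in the spirit of the description above; the grid, start/goal, and function names are illustrative.

```python
import heapq
import math

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (True = obstacle). start/goal are (row, col)."""
    rows, cols = len(grid), len(grid[0])

    def heuristic(n):
        return math.hypot(n[0] - goal[0], n[1] - goal[1])  # Euclidean distance

    open_set = [(heuristic(start), 0.0, start)]   # (f, g, node)
    came_from, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if g > g_cost.get(node, float("inf")):    # stale heap entry, skip it
            continue
        if node == goal:
            path = [node]
            while node in came_from:              # walk parents back to the start
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            new_g = g + 1.0
            if new_g < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(open_set, (new_g + heuristic(nxt), new_g, nxt))
    return None                                   # no path found

grid = [[False, False, False],
        [True,  True,  False],
        [False, False, False]]
print(astar(grid, (0, 0), (2, 0)))
```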
Potential Field algorithm
This is a 2D grid-based path planning example with the Potential Field algorithm.
In the animation, the blue heat map shows potential value on each grid.
Ref:
Model Predictive Trajectory Generator
This is a path optimization sample using the model predictive trajectory generator.
This algorithm is used in the state lattice planner.
Path optimization sample
Lookup table generation sample
Ref:
State Lattice Planning
This script is a path planning code with state lattice planning.
This code uses the model predictive trajectory generator to solve the boundary value problem.
Ref:
Uniform polar sampling
Biased polar sampling
Lane sampling
Probabilistic Road-Map (PRM) planning
This PRM planner uses Dijkstra's method for graph search.
In the animation, the blue points are sampled points,
the cyan crosses are the points searched with Dijkstra's method,
and the red line is the final path of the PRM.
Ref:
Voronoi Road-Map planning
This Voronoi road-map planner uses Dijkstra's method for graph search.
In the animation, the blue points are Voronoi points,
the cyan crosses are the points searched with Dijkstra's method,
and the red line is the final path of the Voronoi road-map.
Ref:
Rapidly-Exploring Random Trees (RRT)
Basic RRT
This is a simple path planning code with Rapidly-Exploring Random Trees (RRT).
Black circles are obstacles, the green lines form the searched tree, and red crosses are the start and goal positions.
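A bare-bones RRT sketch matching the description above, with circular obstacles and a small goal bias; the parameters and helper names are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, obstacles, rand_area=(0.0, 15.0), step=0.5, max_iter=2000):
    """Minimal 2D RRT. obstacles is a list of (x, y, radius) circles."""
    nodes = [start]
    parent = {start: None}

    def collision_free(p):
        return all(math.hypot(p[0] - ox, p[1] - oy) > r for ox, oy, r in obstacles)

    for _ in range(max_iter):
        # sample a random point, with a small bias towards the goal
        rnd = goal if random.random() < 0.1 else (
            random.uniform(*rand_area), random.uniform(*rand_area))
        nearest = min(nodes, key=lambda n: math.hypot(n[0] - rnd[0], n[1] - rnd[1]))
        theta = math.atan2(rnd[1] - nearest[1], rnd[0] - nearest[0])
        new = (nearest[0] + step * math.cos(theta), nearest[1] + step * math.sin(theta))
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[new] = nearest
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < step:
            # reached the goal: walk parents back to the start
            path, n = [goal], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

obstacles = [(5.0, 5.0, 1.0), (7.0, 8.0, 2.0)]
print(rrt((0.0, 0.0), (10.0, 10.0), obstacles))
```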
RRT*
This is a path planning code with RRT*.
Black circles are obstacles, the green lines form the searched tree, and red crosses are the start and goal positions.
Ref:
RRT with Dubins path
Path planning for a car-like robot with RRT and a Dubins path planner.
RRT* with Dubins path
Path planning for a car-like robot with RRT* and a Dubins path planner.
RRT* with Reeds-Shepp path
Path planning for a car-like robot with RRT* and a Reeds-Shepp path planner.
Informed RRT*
This is a path planning code with Informed RRT*.
The cyan ellipse is the heuristic sampling domain of Informed RRT*.
Ref:
Batch Informed RRT*
This is a path planning code with Batch Informed RRT*.
Ref:
Closed Loop RRT*
A vehicle-model-based path planning with closed loop RRT*.
In this code, the pure pursuit algorithm is used for steering control
and PID is used for speed control.
Ref:
- Motion Planning in Complex Environments using Closed-loop Prediction
- Real-time Motion Planning with Applications to Autonomous Urban Driving
- [1601.06326] Sampling-based Algorithms for Optimal Motion Planning Using Closed-loop Prediction
LQR-RRT*
This is a path planning simulation with LQR-RRT*.
A double integrator motion model is used for the LQR local planner.
Ref:
Cubic spline planning
A sample code for cubic path planning.
This code generates a curvature-continuous path from x-y waypoints using cubic splines.
The heading angle at each point can also be calculated analytically.
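A short sketch of the same idea using scipy's CubicSpline: parameterize x and y by cumulative chord length, then read heading and curvature off the spline derivatives. The waypoints below are made up, and this is not the repository's own spline class.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# x-y waypoints (illustrative)
wx = np.array([0.0, 1.0, 2.5, 4.0, 5.0])
wy = np.array([0.0, 1.5, 1.0, 2.5, 3.0])

# parameterize the spline by cumulative chord length s
ds = np.hypot(np.diff(wx), np.diff(wy))
s = np.concatenate(([0.0], np.cumsum(ds)))
sx, sy = CubicSpline(s, wx), CubicSpline(s, wy)

# sample the path and compute heading from the first derivatives
s_fine = np.linspace(0.0, s[-1], 200)
x, y = sx(s_fine), sy(s_fine)
yaw = np.arctan2(sy(s_fine, 1), sx(s_fine, 1))

# curvature from first and second derivatives
dx, dy = sx(s_fine, 1), sy(s_fine, 1)
ddx, ddy = sx(s_fine, 2), sy(s_fine, 2)
curvature = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

print(x[:3], yaw[:3], curvature[:3])
```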
B-Spline planning
This is a path planning example with a B-Spline curve.
If you input waypoints, it generates a smooth path with a B-Spline curve.
The final course passes through the first and last waypoints.
Ref:
Eta^3 Spline path planning
This is a path planning example with an Eta^3 spline.
Ref:
Bezier path planning
A sample code of Bezier path planning.
It is based on a 4-control-point Bezier path.
If you change the offset distance from the start and end points,
you can get different Bezier courses:
Ref:
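A small sketch of evaluating a Bezier path from its control points via the Bernstein basis; the four control points below are illustrative.

```python
import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    """Evaluate a Bezier curve defined by (N, 2) control points."""
    cp = np.asarray(control_points, dtype=float)
    n = len(cp) - 1                       # polynomial degree
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    # Bernstein basis: B_{i,n}(t) = C(n, i) t^i (1 - t)^(n - i)
    curve = sum(comb(n, i) * t ** i * (1 - t) ** (n - i) * cp[i] for i in range(n + 1))
    return curve

# 4 control points: start, two offset points shaping the heading, and the goal
ctrl = [(0.0, 0.0), (2.0, 0.0), (3.0, 4.0), (5.0, 4.0)]
path = bezier(ctrl)
print(path[0], path[-1])  # the curve starts and ends on the end control points
```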
Quintic polynomials planning
Motion planning with quintic polynomials.
It can calculate a 2D path, velocity, and acceleration profile based on quintic polynomials.
Ref:
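A minimal sketch of the quintic boundary-value idea: per axis, solve a 6x6 linear system so that position, velocity, and acceleration match the given start and end conditions; the boundary values below are made up.

```python
import numpy as np

def quintic_coeffs(x0, v0, a0, xT, vT, aT, T):
    """Coefficients of x(t) = c0 + c1 t + ... + c5 t^5 matching position,
    velocity and acceleration at t = 0 and t = T."""
    A = np.array([
        [1, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0],
        [0, 0, 2, 0, 0, 0],
        [1, T, T**2, T**3, T**4, T**5],
        [0, 1, 2*T, 3*T**2, 4*T**3, 5*T**4],
        [0, 0, 2, 6*T, 12*T**2, 20*T**3]], dtype=float)
    b = np.array([x0, v0, a0, xT, vT, aT], dtype=float)
    return np.linalg.solve(A, b)

def evaluate(c, t):
    """Position, velocity, acceleration of the polynomial with coefficients c."""
    t = np.asarray(t, dtype=float)
    pos = sum(ci * t**i for i, ci in enumerate(c))
    vel = sum(i * ci * t**(i - 1) for i, ci in enumerate(c) if i >= 1)
    acc = sum(i * (i - 1) * ci * t**(i - 2) for i, ci in enumerate(c) if i >= 2)
    return pos, vel, acc

T = 5.0
cx = quintic_coeffs(0.0, 1.0, 0.0, 10.0, 0.0, 0.0, T)   # x-axis boundary conditions
cy = quintic_coeffs(0.0, 0.0, 0.0, 4.0, 0.0, 0.0, T)    # y-axis boundary conditions
t = np.linspace(0.0, T, 50)
px, vx, ax = evaluate(cx, t)
py, vy, ay = evaluate(cy, t)
print(px[-1], py[-1])   # should reach (10, 4) at t = T
```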
Dubins path planning
A sample code for Dubins path planning.
Ref:
Reeds Shepp planning
A sample code for Reeds-Shepp path planning.
Ref:
LQR based path planning
A sample code using LQR-based path planning for a double integrator model.
Optimal Trajectory in a Frenet Frame
This is optimal trajectory generation in a Frenet Frame.
The cyan line is the target course and the black crosses are obstacles.
The red line is the predicted path.
Ref:
- Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame
Path tracking
Pure pursuit tracking
Path tracking simulation with pure pursuit steering control and PID speed control.
The red line is the target course, the green cross is the target point for pure pursuit control, and the blue line is the tracked trajectory.
Ref:
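A compact sketch of pure pursuit steering with a proportional speed term, assuming a rear-axle bicycle model; the wheelbase, lookahead distance, and course below are illustrative, not the repository's parameters.

```python
import math

WHEELBASE = 2.9       # vehicle wheelbase [m] (illustrative)
LOOKAHEAD = 4.0       # lookahead distance [m]

def pure_pursuit_steer(state, path):
    """state = (x, y, yaw, v); path = list of (x, y) course points.

    Returns the steering angle that drives the rear axle towards the
    target point roughly one lookahead distance ahead on the course.
    """
    x, y, yaw, v = state
    # pick the first path point farther than the lookahead distance
    target = path[-1]
    for px, py in path:
        if math.hypot(px - x, py - y) >= LOOKAHEAD:
            target = (px, py)
            break
    alpha = math.atan2(target[1] - y, target[0] - x) - yaw   # heading error to the target
    return math.atan2(2.0 * WHEELBASE * math.sin(alpha), LOOKAHEAD)

def pid_speed(target_speed, current_speed, kp=1.0):
    """Proportional speed controller (the 'PID' here is reduced to the P term)."""
    return kp * (target_speed - current_speed)

course = [(i * 0.5, math.sin(i * 0.1)) for i in range(100)]
delta = pure_pursuit_steer((0.0, 0.0, 0.0, 3.0), course)
accel = pid_speed(5.0, 3.0)
print(delta, accel)
```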
Stanley control
Path tracking simulation with Stanley steering control and PID speed control.
Ref:
Rear wheel feedback control
Path tracking simulation with rear wheel feedback steering control and PID speed control.
Ref:
Linear–quadratic regulator (LQR) steering control
Path tracking simulation with LQR steering control and PID speed control.
Ref:
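A generic sketch of the discrete-time LQR gain computation (the Riccati recursion iterated to a fixed point), applied to a toy lateral-error model; the model matrices here are illustrative, not the repository's vehicle model.

```python
import numpy as np

def dlqr(A, B, Q, R, eps=1e-8, max_iter=500):
    """Discrete-time LQR: iterate the Riccati recursion to a fixed point,
    then return the state-feedback gain K for u = -K x."""
    P = Q.copy()
    for _ in range(max_iter):
        P_next = A.T @ P @ A - A.T @ P @ B @ np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A + Q
        if np.max(np.abs(P_next - P)) < eps:
            P = P_next
            break
        P = P_next
    K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)
    return K

# toy lateral-error model: state = [lateral error, error rate], input = steering-like command
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.eye(1)
K = dlqr(A, B, Q, R)

x = np.array([[1.0], [0.0]])      # 1 m of lateral error
u = -K @ x                        # LQR feedback command
print(K, u)
```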
Linear–quadratic regulator (LQR) speed and steering control
Path tracking simulation with LQR speed and steering control.
Ref:
Model predictive speed and steering control
Path tracking simulation with iterative linear model predictive speed and steering control.
Ref:
License
MIT
Contribution
Small PRs such as bug fixes are welcome.
If your PRs are merged multiple times, I will add your account to the author list.
Support
You can support this project financially via Patreon.
You can get e-mail technical support about the code if you become a patron.
PayPal donations are also welcome.
Authors
----
Quantum Machine Learning / Quantum Robotics
The computing field must have a change from classical to quantum.
https://github.com/krishnakumarsekar/awesome-quantum-machine-learning
----
Fall 2017 - Vision Algorithms for Mobile Robotics
UZH-BMINF020 / ETH-151-0632-00L
The course is open to all the students of the University of Zurich and ETH. Students should register through their own institutions.
Goal of the Course
For a robot to be autonomous, it has to perceive and understand the world around it. This course introduces you to the key computer vision algorithms used in mobile robotics, such as feature extraction, multiple view geometry, dense reconstruction, tracking, image retrieval, event-based vision, and visual-inertial odometry (the algorithms behind Google Tango, Apple ARKit, Google ARCore, Microsoft Hololens, Magic Leap and the Mars rovers). Basic knowledge of algebra, geometry, and matrix calculus is required.
Time and location
Lectures: every Thursday from 10:15 to 12:00 in ETH LFW C5, Universitätstrasse 2, 8092 Zurich.
Exercises: Thursdays, roughly every two weeks, from 13:15 to 15:00 in ETH HG E 1.1, Rämistrasse 101, 8092 Zurich.
Please check out the course agenda below for the exact schedule.
Course Program, Slides, and Add-on Material
Official course program (please note that this is a tentative schedule and that the effective content of the lecture can change from week to week).
| Date | Lecture and Exercise Title | Slides and add-on material |
|---|---|---|
| 21.09.2017 | Lecture 01 - Introduction to Computer Vision and Visual Odometry | Slides (last update 21.09.2017); Visual odometry tutorial Part I; Visual odometry tutorial Part II; SLAM survey paper |
| 28.09.2017 | Lecture 02 - Image Formation 1: perspective projection and camera models | Slides (last update 27.09.2017) |
| 05.10.2017 | Lecture 03 - Image Formation 2: camera calibration algorithms; Exercise 01 - Augmented reality wireframe cube | Slides (last update 04.10.2017); Additional reading on P3P and PnP problems; Exercise 01 (last update 04.10.2017); Solutions (last update 12.10.2017); Introduction to Matlab |
| 12.10.2017 | Lecture 04 - Filtering & Edge detection; Exercise 02 - PnP problem | Slides (last update 12.10.2017); Exercise 02 (last update 12.10.2017); Solutions (last update 16.10.2017) |
| 19.10.2017 | Lecture 05 - Point Feature Detectors, Part 1; Exercise 03 - Harris detector + descriptor + matching | Slides (last update 19.10.2017); Exercise 03 (last update 17.10.2017); Solutions (last update 24.10.2017) |
| 26.10.2017 | Lecture 06 - Point Feature Detectors, Part 2 | Slides (last update 26.10.2017); Additional reading on feature detection |
| 02.11.2017 | Lecture 07 - Multiple-view geometry 1; Exercise 04 - Stereo vision: rectification, epipolar matching, disparity, triangulation | Slides (last update 01.11.2017); Additional reading on stereo image rectification; Exercise 04 (last update 31.10.2017); Solutions (last update 31.10.2017) |
| 09.11.2017 | Lecture 08 - Multiple-view geometry 2; Exercise 05 - Two-view Geometry | Slides (last update 9.11.2017); Additional reading on 2-view geometry; Exercise 05 (last update 8.11.2017); Solutions (last update 14.11.2017) |
| 16.11.2017 | Lecture 09 - Multiple-view geometry 3; Exercise 06 - P3P algorithm and RANSAC | Slides (last update 22.11.2017); Additional reading on open-source VO algorithms; Exercise 06 (last update 16.11.2017); Solutions (last update 20.11.2017) |
| 23.11.2017 | Lecture 10 - Dense 3D Reconstruction; Exercise session: Intermediate VO Integration | Slides (last update 29.11.2017); Additional reading on dense 3D reconstruction; Find the VO project downloads below |
| 30.11.2017 | Lecture 11 - Optical Flow and Tracking (Lucas-Kanade); Exercise 07 - Lucas-Kanade tracker | Slides (last update 29.11.2017); Additional reading on Lucas-Kanade; Exercise 07 (last update 30.11.2017); Solutions (last update 06.12.2017) |
| 07.12.2017 | Lecture 12 - Place recognition; Exercise session: Deep Learning Tutorial | Slides (last update 07.12.2017); Additional reading on Bag-of-Words-based place recognition; Optional exercise on place recognition (last update 06.12.2017); Deep Learning Slides (last update 07.12.2017) |
| 14.12.2017 | Lecture 13 - Visual inertial fusion; Exercise 08 - Bundle Adjustment | Slides (last update 14.12.2017); Advanced slides for interested readers; Additional reading on visual-inertial fusion; Exercise 08 (last update 13.12.2017); Solutions (last update 17.12.2017) |
| 21.12.2017 | Lecture 14 - Event-based vision + Scaramuzza's lab visit with live demos; Exercise session: final VO integration | Slides (last update 19.12.2017); Additional reading on event-based vision |
Oral Exam Questions (last update 21.12.2017)
The oral exam will last 30 minutes and will consist of one application question followed by two theoretical questions. This document contains a "non-exhaustive" list of possible application questions and an "exhaustive" list of all the topics that you should learn for the course, which will be the subject of discussion in the theoretical part.
Grading and optional Mini Project (last update 22.11.2017)
The final grade is based on the oral exam (30 minutes, exam date for UZH: Jan. 18; exam date for ETH students will be between January 22 and February 9 2018, dates communicated by ETH). Mini projects are optional and up to the students. Depending on the result of the mini project (see Project Specification in the table below), the student will be rewarded with a grade increase of up to 0.5 on the final grade. However, notice that the mini project can be quite time consuming. Mini project specification and files can be found in the table below. The deadline for the project is Sunday, 07.01.2018, 23:59:59, and it can be submitted via e-mail to the assistants (detailed instructions in specification).
| Description | Link (size) |
|---|---|
| Project Specification | vo_project_statement.pdf (600 kB, last updated 22.11.2017) |
| FAQ | Frequently Asked Questions |
| Parking garage dataset (easy) | parking.zip (208.3 MB) |
| KITTI 00 dataset (hard) | kitti00.zip (2.3 GB) |
| Malaga 07 dataset (hard) | malaga-urban-dataset-extract-07.zip (2.4 GB) |
| Matlab script to load datasets | main.m (2.6 kB) |
Recommended Textbooks
(All available in the NEBIS catalogue)
- Robotics, Vision and Control: Fundamental Algorithms, 2nd Ed., by Peter Corke 2017. The PDF of the book can be freely downloaded (only with ETH v*n) from the author's webpage.
- Computer Vision: Algorithms and Applications, by Richard Szeliski, Springer, 2010. The PDF of the book can be freely downloaded from the author's webpage.
- An Invitation to 3D Vision, by Y. Ma, S. Soatto, J. Kosecka, S.S. Sastry.
- Multiple view Geometry, by R. Hartley and A. Zisserman.
- Chapter 4 of "Autonomous Mobile Robots", by R. Siegwart, I.R. Nourbakhsh, D. Scaramuzza. PDF
Spring 2015 - Autonomous Mobile Robots
The course is currently open to all the students of the University of Zurich and ETH (Bachelor's and Master's). Lectures take place every Monday (from 16.02.2014 to 30.05.2014) from 14:15 to 16:00 in the ETH main building (HG) in room E 1.2. Exercises take place almost every second Tuesday from 10:15 to 12:00 in the ETH main building in room G1.
The course is also given as a MOOC (Massive Open Online Course) on edX.
Course Program
Recommended Textbook
R. Siegwart, I.R. Nourbakhsh, and D. Scaramuzza
Introduction to autonomous mobile robots 2nd Edition (hardback)
A Bradford Book, The MIT Press, ISBN: 978-0-262-01535-6, February, 2011
The book can be bought during the first lecture or on Amazon.
Archived slides, videos, and lecture recordings
Since 2007, Prof. Davide Scaramuzza has been teaching this course at ETH Zurich, and since 2012 the course has also been shared with the University of Zurich. The lectures are based on Prof. Scaramuzza's book Autonomous Mobile Robots, MIT Press. Recordings of previous lectures (until 2012) can be watched or downloaded, only by ETH students, here.
You can download all the lecture slides and videos of past lectures (updated in 2010) from the following links:
- Power Point slides: AMR_lecture_slides.zip
- Videos Part 1: AMR_lecture_videos_1.zip
- Videos Part 2: AMR_lecture_videos_2.zip
- Videos Part 3: AMR_lecture_videos_3.zip
- Videos Part 4: AMR_lecture_videos_4.zip
- Videos Part 5: AMR_lecture_videos_5.zip
----