From f164773dec20bdd82d2e547c320064a71f631668 Mon Sep 17 00:00:00 2001
From: Atsushi Sakai
Date: Sun, 16 Sep 2018 08:54:41 +0900
Subject: [PATCH] change paper dir and put back original README

---
 README.md               | 734 +++++++++++++++++++++++++++++++++++++++-
 doc/PythonRoboticsPaper |   1 -
 {doc => paper}/.gitkeep |   0
 3 files changed, 726 insertions(+), 9 deletions(-)
 delete mode 160000 doc/PythonRoboticsPaper
 rename {doc => paper}/.gitkeep (100%)

diff --git a/README.md b/README.md
index 0642ee73..c942dc1b 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,69 @@ Python codes for robotics algorithm.
 * [What is this?](#what-is-this)
 * [Requirements](#requirements)
 * [How to use](#how-to-use)
-  * [Documentation](#documentation)
+* [Localization](#localization)
+  * [Extended Kalman Filter localization](#extended-kalman-filter-localization)
+  * [Unscented Kalman Filter localization](#unscented-kalman-filter-localization)
+  * [Particle filter localization](#particle-filter-localization)
+  * [Histogram filter localization](#histogram-filter-localization)
+* [Mapping](#mapping)
+  * [Gaussian grid map](#gaussian-grid-map)
+  * [Ray casting grid map](#ray-casting-grid-map)
+  * [k-means object clustering](#k-means-object-clustering)
+  * [Object shape recognition using circle fitting](#object-shape-recognition-using-circle-fitting)
+* [SLAM](#slam)
+  * [Iterative Closest Point (ICP) Matching](#iterative-closest-point-icp-matching)
+  * [EKF SLAM](#ekf-slam)
+  * [FastSLAM 1.0](#fastslam-10)
+  * [FastSLAM 2.0](#fastslam-20)
+  * [Graph based SLAM](#graph-based-slam)
+* [Path Planning](#path-planning)
+  * [Dynamic Window Approach](#dynamic-window-approach)
+  * [Grid based search](#grid-based-search)
+    * [Dijkstra algorithm](#dijkstra-algorithm)
+    * [A* algorithm](#a-algorithm)
+    * [Potential Field algorithm](#potential-field-algorithm)
+  * [Model Predictive Trajectory Generator](#model-predictive-trajectory-generator)
+    * [Path optimization sample](#path-optimization-sample)
+    * [Lookup table generation sample](#lookup-table-generation-sample)
+  * [State Lattice Planning](#state-lattice-planning)
+    * [Uniform polar sampling](#uniform-polar-sampling)
+    * [Biased polar sampling](#biased-polar-sampling)
+    * [Lane sampling](#lane-sampling)
+  * [Probabilistic Road-Map (PRM) planning](#probabilistic-road-map-prm-planning)
+  * [Voronoi Road-Map planning](#voronoi-road-map-planning)
+  * [Rapidly-Exploring Random Trees (RRT)](#rapidly-exploring-random-trees-rrt)
+    * [Basic RRT](#basic-rrt)
+    * [RRT*](#rrt)
+    * [RRT with Dubins path](#rrt-with-dubins-path)
+    * [RRT* with Dubins path](#rrt-with-dubins-path-1)
+    * [RRT* with Reeds-Shepp path](#rrt-with-reeds-shepp-path)
+    * [Informed RRT*](#informed-rrt)
+    * [Batch Informed RRT*](#batch-informed-rrt)
+    * [Closed Loop RRT*](#closed-loop-rrt)
+    * [LQR-RRT*](#lqr-rrt)
+  * [Cubic spline planning](#cubic-spline-planning)
+  * [B-Spline planning](#b-spline-planning)
+  * [Eta^3 Spline path planning](#eta3-spline-path-planning)
+  * [Bezier path planning](#bezier-path-planning)
+  * [Quintic polynomials planning](#quintic-polynomials-planning)
+  * [Dubins path planning](#dubins-path-planning)
+  * [Reeds Shepp planning](#reeds-shepp-planning)
+  * [LQR based path planning](#lqr-based-path-planning)
+  * [Optimal Trajectory in a Frenet Frame](#optimal-trajectory-in-a-frenet-frame)
+* [Path tracking](#path-tracking)
+  * [move to a pose control](#move-to-a-pose-control)
+  * [Pure pursuit tracking](#pure-pursuit-tracking)
+  * [Stanley control](#stanley-control)
+  * [Rear wheel feedback control](#rear-wheel-feedback-control)
+  * [Linear–quadratic regulator (LQR) steering control](#linearquadratic-regulator-lqr-steering-control)
+  * [Linear–quadratic regulator (LQR) speed and steering control](#linearquadratic-regulator-lqr-speed-and-steering-control)
+  * [Model predictive speed and steering control](#model-predictive-speed-and-steering-control)
+* [Arm Navigation](#arm-navigation)
+  * [Two joint arm to point control](#two-joint-arm-to-point-control)
+  * [N joint arm to point control](#n-joint-arm-to-point-control)
+* [Aerial Navigation](#aerial-navigation)
+  * [drone 3d trajectory following](#drone-3d-trajectory-following)
 * [License](#license)
 * [Contribution](#contribution)
 * [Support](#support)
@@ -32,18 +32,20 @@ See this paper for more details:
 
 - [\[1808\.10703\] PythonRobotics: a Python code collection of robotics algorithms](https://arxiv.org/abs/1808.10703) ([BibTeX](https://github.com/AtsushiSakai/PythonRoboticsPaper/blob/master/python_robotics.bib))
 
-# Documentation
-
-Documentation is available online: [https://pythonrobotics.readthedocs.io/](https://pythonrobotics.readthedocs.io/)
 
 # Requirements
 
 - Python 3.6.x
+- numpy
+- scipy
+- matplotlib
+- pandas
-- [cvxpy](http://www.cvxpy.org/en/latest/)
+
+- [cvxpy](http://www.cvxpy.org/en/latest/)
 
 # How to use
@@ -53,10 +117,661 @@ Documentation is available online: [https://pythonrobotics.readthedocs.io/](http
 
 3. Execute python script in each directory.
 
-4. Add star to this repo if you like it :smiley:.
+4. Add star to this repo if you like it :smiley:.
+
+# Localization
+
+## Extended Kalman Filter localization
+
+
+This is a sensor fusion localization with an Extended Kalman Filter (EKF).
+
+The blue line is the true trajectory, the black line is the dead reckoning trajectory,
+
+the green points are the position observations (e.g. GPS), and the red line is the trajectory estimated with the EKF.
+
+The red ellipse is the covariance ellipse estimated with the EKF.
+
+Ref:
+
+- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)
+
+## Unscented Kalman Filter localization
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/Localization/unscented_kalman_filter/animation.gif)
+
+This is a sensor fusion localization with an Unscented Kalman Filter (UKF).
+
+The lines and points have the same meaning as in the EKF simulation.
+
+Ref:
+
+- [Discriminatively Trained Unscented Kalman Filter for Mobile Robot Localization](https://www.researchgate.net/publication/267963417_Discriminatively_Trained_Unscented_Kalman_Filter_for_Mobile_Robot_Localization)
+
+## Particle filter localization
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/Localization/particle_filter/animation.gif)
+
+This is a sensor fusion localization with a Particle Filter (PF).
+
+The blue line is the true trajectory, the black line is the dead reckoning trajectory,
+
+and the red line is the trajectory estimated with the PF.
+
+It is assumed that the robot can measure the distance to landmarks (RFID).
+
+These measurements are used for PF localization.
+
+Ref:
+
+- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)
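+
+As a rough illustration of the particle weight update described above, the sketch below weights each particle by the likelihood of its predicted landmark ranges (assuming numpy, Gaussian range noise, and the array shapes noted in the docstring; the function name and `sigma_r` are made up for this example, and this is not the repository's exact implementation):
+
+```python
+import numpy as np
+
+def update_weights(particles, weights, z_ranges, landmarks, sigma_r=0.2):
+    """particles: (N, 3) [x, y, yaw], weights: (N,), z_ranges: (M,) measured
+    ranges, landmarks: (M, 2) known RFID positions."""
+    for i, (lx, ly) in enumerate(landmarks):
+        d = np.hypot(particles[:, 0] - lx, particles[:, 1] - ly)      # predicted range
+        weights *= np.exp(-0.5 * ((z_ranges[i] - d) / sigma_r) ** 2)  # Gaussian likelihood
+    weights += 1.0e-300               # avoid a degenerate all-zero weight vector
+    return weights / weights.sum()    # normalize
+```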
+
+## Histogram filter localization
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/Localization/histogram_filter/animation.gif)
+
+This is a 2D localization example with a histogram filter.
+
+The red cross is the true position, and the black points are RFID positions.
+
+The blue grid shows the position probability of the histogram filter.
+
+In this simulation, x and y are unknown, and yaw is known.
+
+The filter integrates the speed input and range observations from RFID for localization.
+
+An initial position estimate is not needed.
+
+Ref:
+
+- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)
+
+# Mapping
+
+## Gaussian grid map
+
+This is a 2D Gaussian grid mapping example.
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/Mapping/gaussian_grid_map/animation.gif)
+
+## Ray casting grid map
+
+This is a 2D ray casting grid mapping example.
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/Mapping/raycasting_grid_map/animation.gif)
+
+## k-means object clustering
+
+This is 2D object clustering with the k-means algorithm.
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/Mapping/kmeans_clustering/animation.gif)
+
+## Object shape recognition using circle fitting
+
+This is object shape recognition using circle fitting.
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/Mapping/circle_fitting/animation.gif)
+
+The blue circle is the true object shape.
+
+The red crosses are observations from a ranging sensor.
+
+The red circle is the estimated object shape using circle fitting.
+
+# SLAM
+
+Simultaneous Localization and Mapping (SLAM) examples
+
+## Iterative Closest Point (ICP) Matching
+
+This is a 2D ICP matching example with singular value decomposition.
+
+It calculates a rotation matrix and a translation vector between two point sets.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/SLAM/iterative_closest_point/animation.gif)
+
+Ref:
+
+- [Introduction to Mobile Robotics: Iterative Closest Point Algorithm](https://cs.gmu.edu/~kosecka/cs685/cs685-icp.pdf)
+
+## EKF SLAM
+
+This is an Extended Kalman Filter based SLAM example.
+
+The blue line is the ground truth, the black line is dead reckoning, and the red line is the trajectory estimated with EKF SLAM.
+
+The green crosses are estimated landmarks.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/SLAM/EKFSLAM/animation.gif)
+
+Ref:
+
+- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)
+
+## FastSLAM 1.0
+
+This is a feature based SLAM example using FastSLAM 1.0.
+
+The blue line is the ground truth, the black line is dead reckoning, and the red line is the trajectory estimated with FastSLAM.
+
+The red points are particles of FastSLAM.
+
+The black points are landmarks, and the blue crosses are landmark positions estimated by FastSLAM.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/SLAM/FastSLAM1/animation.gif)
+
+Ref:
+
+- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)
+
+- [SLAM simulations by Tim Bailey](http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm)
+
+## FastSLAM 2.0
+
+This is a feature based SLAM example using FastSLAM 2.0.
+
+The animation has the same meaning as the FastSLAM 1.0 one.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/SLAM/FastSLAM2/animation.gif)
+
+Ref:
+
+- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)
+
+- [SLAM simulations by Tim Bailey](http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm)
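+
+Both FastSLAM examples above resample their particles in proportion to the importance weights. A minimal low-variance (systematic) resampling sketch, assuming numpy; this is a simplified illustration, not the code used in this repository:
+
+```python
+import numpy as np
+
+def low_variance_resample(particles, weights):
+    """Resample particles in proportion to their normalized weights."""
+    n = len(weights)
+    positions = (np.arange(n) + np.random.uniform()) / n   # one random offset, n even strides
+    indexes = np.searchsorted(np.cumsum(weights), positions)
+    new_particles = [particles[i] for i in indexes]
+    return new_particles, np.full(n, 1.0 / n)               # weights reset to uniform
+```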
+
+## Graph based SLAM
+
+This is a graph based SLAM example.
+
+The blue line is the ground truth.
+
+The black line is dead reckoning.
+
+The red line is the trajectory estimated with graph based SLAM.
+
+The black stars are landmarks for graph edge generation.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/SLAM/GraphBasedSLAM/animation.gif)
+
+Ref:
+
+- [A Tutorial on Graph-Based SLAM](http://www2.informatik.uni-freiburg.de/~stachnis/pdf/grisetti10titsmag.pdf)
+
+# Path Planning
+
+## Dynamic Window Approach
+
+This is a 2D navigation sample code with the Dynamic Window Approach.
+
+- [The Dynamic Window Approach to Collision Avoidance](https://www.ri.cmu.edu/pub_files/pub1/fox_dieter_1997_1/fox_dieter_1997_1.pdf)
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/DynamicWindowApproach/animation.gif)
+
+## Grid based search
+
+### Dijkstra algorithm
+
+This is 2D grid based shortest path planning with Dijkstra's algorithm.
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/Dijkstra/animation.gif)
+
+In the animation, cyan points are searched nodes.
+
+### A\* algorithm
+
+This is 2D grid based shortest path planning with the A\* algorithm.
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/AStar/animation.gif)
+
+In the animation, cyan points are searched nodes.
+
+Its heuristic is the 2D Euclidean distance.
+
+### Potential Field algorithm
+
+This is 2D grid based path planning with the potential field algorithm.
+
+![PotentialField](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/PotentialFieldPlanning/animation.gif)
+
+In the animation, the blue heat map shows the potential value on each grid cell.
+
+Ref:
+
+- [Robotic Motion Planning: Potential Functions](https://www.cs.cmu.edu/~motionplanning/lecture/Chap4-Potential-Field_howie.pdf)
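+
+A rough sketch of the attractive/repulsive potential evaluated by this kind of planner is shown below (assuming numpy; the gains `KP`, `ETA` and the repulsive range `rr` are made-up values, not the parameters used in this repository). The planner then descends this potential greedily, stepping from the current grid cell to the neighboring cell with the lowest value:
+
+```python
+import numpy as np
+
+KP, ETA = 5.0, 100.0   # assumed attractive / repulsive gains
+
+def potential(x, y, gx, gy, obstacles, rr=5.0):
+    """Total potential at (x, y) for goal (gx, gy) and a list of (ox, oy) obstacles."""
+    u_att = 0.5 * KP * np.hypot(x - gx, y - gy)                    # attractive term
+    u_rep = 0.0
+    for ox, oy in obstacles:
+        d = np.hypot(x - ox, y - oy)
+        if d <= rr:                                                # repulsion only near obstacles
+            u_rep += 0.5 * ETA * (1.0 / max(d, 0.1) - 1.0 / rr) ** 2
+    return u_att + u_rep
+```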
+
+## Model Predictive Trajectory Generator
+
+This is a path optimization sample using the model predictive trajectory generator.
+
+This algorithm is used in the state lattice planner.
+
+### Path optimization sample
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/ModelPredictiveTrajectoryGenerator/kn05animation.gif)
+
+### Lookup table generation sample
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/ModelPredictiveTrajectoryGenerator/lookuptable.png?raw=True)
+
+Ref:
+
+- [Optimal rough terrain trajectory generation for wheeled mobile robots](http://journals.sagepub.com/doi/pdf/10.1177/0278364906075328)
+
+## State Lattice Planning
+
+This script is a path planning code with state lattice planning.
+
+This code uses the model predictive trajectory generator to solve the boundary value problem.
+
+Ref:
+
+- [Optimal rough terrain trajectory generation for wheeled mobile robots](http://journals.sagepub.com/doi/pdf/10.1177/0278364906075328)
+
+- [State Space Sampling of Feasible Motions for High-Performance Mobile Robot Navigation in Complex Environments](http://www.frc.ri.cmu.edu/~alonzo/pubs/papers/JFR_08_SS_Sampling.pdf)
+
+### Uniform polar sampling
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/StateLatticePlanner/UniformPolarSampling.gif)
+
+### Biased polar sampling
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/StateLatticePlanner/BiasedPolarSampling.gif)
+
+### Lane sampling
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/StateLatticePlanner/LaneSampling.gif)
+
+## Probabilistic Road-Map (PRM) planning
+
+![PRM](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/ProbabilisticRoadMap/animation.gif)
+
+This PRM planner uses Dijkstra's method for graph search.
+
+In the animation, blue points are sampled points,
+
+cyan crosses are points searched with Dijkstra's method,
+
+and the red line is the final path of the PRM.
+
+Ref:
+
+- [Probabilistic roadmap \- Wikipedia](https://en.wikipedia.org/wiki/Probabilistic_roadmap)
+
+## Voronoi Road-Map planning
+
+![VRM](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/VoronoiRoadMap/animation.gif)
+
+This Voronoi road-map planner uses Dijkstra's method for graph search.
+
+In the animation, blue points are Voronoi points,
+
+cyan crosses are points searched with Dijkstra's method,
+
+and the red line is the final path of the Voronoi road-map.
+
+Ref:
+
+- [Robotic Motion Planning](https://www.cs.cmu.edu/~motionplanning/lecture/Chap5-RoadMap-Methods_howie.pdf)
+
+## Rapidly-Exploring Random Trees (RRT)
+
+### Basic RRT
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/RRT/animation.gif)
+
+This is a simple path planning code with Rapidly-Exploring Random Trees (RRT).
+
+Black circles are obstacles, the green line is the searched tree, and red crosses are the start and goal positions.
+
+### RRT\*
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/RRTstar/animation.gif)
+
+This is a path planning code with RRT\*.
+
+Black circles are obstacles, the green line is the searched tree, and red crosses are the start and goal positions.
+
+Ref:
+
+- [Incremental Sampling-based Algorithms for Optimal Motion Planning](https://arxiv.org/abs/1005.0416)
+
+- [Sampling-based Algorithms for Optimal Motion Planning](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.419.5503&rep=rep1&type=pdf)
+
+### RRT with Dubins path
+
+![PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/RRTDubins/animation.gif)
+
+Path planning for a car robot with RRT and a Dubins path planner.
+
+### RRT\* with Dubins path
+
+![AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/RRTStarDubins/animation.gif)
+
+Path planning for a car robot with RRT\* and a Dubins path planner.
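+
+All of the RRT variants above share the same expansion step as the basic RRT: sample a point, find the nearest tree node, and steer a short distance toward the sample. A minimal sketch of that step for a point robot (assuming numpy; `sample_fn` and `collision_free` are caller-supplied placeholders, and this is not the repository's implementation):
+
+```python
+import numpy as np
+
+def rrt_extend(nodes, parents, sample_fn, collision_free, step=1.0):
+    """One RRT expansion step on a tree stored as parallel node/parent lists."""
+    q_rand = sample_fn()                                   # random (or goal-biased) sample
+    dists = [np.hypot(q[0] - q_rand[0], q[1] - q_rand[1]) for q in nodes]
+    i_near = int(np.argmin(dists))                         # nearest existing node
+    q_near = nodes[i_near]
+    theta = np.arctan2(q_rand[1] - q_near[1], q_rand[0] - q_near[0])
+    q_new = (q_near[0] + step * np.cos(theta), q_near[1] + step * np.sin(theta))
+    if collision_free(q_near, q_new):                      # caller-supplied edge check
+        nodes.append(q_new)
+        parents.append(i_near)                             # record the tree edge
+```
+
+RRT\* additionally rewires nearby nodes when a cheaper parent is found, and the Dubins and Reeds-Shepp variants replace the straight-line steering with the corresponding curves.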
+
+### RRT\* with Reeds-Shepp path
+
+![Robotics/animation.gif at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/RRTStarReedsShepp/animation.gif)
+
+Path planning for a car robot with RRT\* and a Reeds-Shepp path planner.
+
+### Informed RRT\*
+
+![irrt](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/InformedRRTStar/animation.gif)
+
+This is a path planning code with Informed RRT\*.
+
+The cyan ellipse is the heuristic sampling domain of Informed RRT\*.
+
+Ref:
+
+- [Informed RRT\*: Optimal Sampling-based Path Planning Focused via Direct Sampling of an Admissible Ellipsoidal Heuristic](https://arxiv.org/pdf/1404.2334.pdf)
+
+### Batch Informed RRT\*
+
+![irrt](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/BatchInformedRRTStar/animation.gif)
+
+This is a path planning code with Batch Informed RRT\*.
+
+Ref:
+
+- [Batch Informed Trees \(BIT\*\): Sampling\-based Optimal Planning via the Heuristically Guided Search of Implicit Random Geometric Graphs](https://arxiv.org/abs/1405.5848)
+
+### Closed Loop RRT\*
+
+Vehicle model based path planning with closed loop RRT\*.
+
+![CLRRT](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/ClosedLoopRRTStar/animation.gif)
+
+In this code, the pure pursuit algorithm is used for steering control,
+
+and PID is used for speed control.
+
+Ref:
+
+- [Motion Planning in Complex Environments using Closed-loop Prediction](http://acl.mit.edu/papers/KuwataGNC08.pdf)
+
+- [Real-time Motion Planning with Applications to Autonomous Urban Driving](http://acl.mit.edu/papers/KuwataTCST09.pdf)
+
+- [\[1601.06326\] Sampling-based Algorithms for Optimal Motion Planning Using Closed-loop Prediction](https://arxiv.org/abs/1601.06326)
+
+### LQR-RRT\*
+
+This is a path planning simulation with LQR-RRT\*.
+
+A double integrator motion model is used for the LQR local planner.
+
+![LQRRRT](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/LQRRRTStar/animation.gif)
+
+Ref:
+
+- [LQR\-RRT\*: Optimal Sampling\-Based Motion Planning with Automatically Derived Extension Heuristics](http://lis.csail.mit.edu/pubs/perez-icra12.pdf)
+
+- [MahanFathi/LQR\-RRTstar: LQR\-RRT\* method is used for random motion planning of a simple pendulum in its phase plot](https://github.com/MahanFathi/LQR-RRTstar)
+
+## Cubic spline planning
+
+A sample code for cubic spline path planning.
+
+This code generates a curvature continuous path based on x-y waypoints with cubic splines.
+
+The heading angle of each point can also be calculated analytically.
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/CubicSpline/Figure_1.png?raw=True)
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/CubicSpline/Figure_2.png?raw=True)
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/CubicSpline/Figure_3.png?raw=True)
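+
+In the same spirit as the cubic spline planner above, the sketch below interpolates x-y waypoints over arc length and recovers the heading from the first derivatives; it uses SciPy's `CubicSpline` for brevity and is not the spline implementation shipped in this repository:
+
+```python
+import numpy as np
+from scipy.interpolate import CubicSpline
+
+def spline_course(wx, wy, ds=0.1):
+    """Interpolate waypoints (wx, wy) with cubic splines parameterized by arc length."""
+    s = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(wx), np.diff(wy)))))
+    sx, sy = CubicSpline(s, wx), CubicSpline(s, wy)
+    ss = np.arange(0.0, s[-1], ds)
+    yaw = np.arctan2(sy(ss, 1), sx(ss, 1))   # heading from dx/ds, dy/ds
+    return sx(ss), sy(ss), yaw
+```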
+
+## B-Spline planning
+
+![B-Spline](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/BSplinePath/Figure_1.png?raw=True)
+
+This is path planning with a B-Spline curve.
+
+If you input waypoints, it generates a smooth path with a B-Spline curve.
+
+The final course passes through the first and last waypoints.
+
+Ref:
+
+- [B\-spline \- Wikipedia](https://en.wikipedia.org/wiki/B-spline)
+
+## Eta^3 Spline path planning
+
+![eta3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/Eta3SplinePath/animation.gif?raw=True)
+
+This is path planning with an Eta^3 spline.
+
+Ref:
+
+- [\eta^3-Splines for the Smooth Path Generation of Wheeled Mobile Robots](https://ieeexplore.ieee.org/document/4339545/)
+
+## Bezier path planning
+
+A sample code of Bezier path planning.
+
+It is based on a Bezier path with four control points.
+
+![Bezier1](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/BezierPath/Figure_1.png?raw=True)
+
+If you change the offset distance from the start and end points,
+
+you can get a different Bezier course:
+
+![Bezier2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/BezierPath/Figure_2.png?raw=True)
+
+Ref:
+
+- [Continuous Curvature Path Generation Based on Bezier Curves for Autonomous Vehicles](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.294.6438&rep=rep1&type=pdf)
+
+## Quintic polynomials planning
+
+Motion planning with quintic polynomials.
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/QuinticPolynomialsPlanner/animation.gif)
+
+It can calculate a 2D path, velocity, and acceleration profile based on quintic polynomials.
+
+Ref:
+
+- [Local Path Planning And Motion Control For Agv In Positioning](http://ieeexplore.ieee.org/document/637936/)
+
+## Dubins path planning
+
+A sample code for Dubins path planning.
+
+![dubins](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/DubinsPath/animation.gif?raw=True)
+
+Ref:
+
+- [Dubins path - Wikipedia](https://en.wikipedia.org/wiki/Dubins_path)
+
+## Reeds Shepp planning
+
+A sample code with Reeds Shepp path planning.
+
+![RSPlanning](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/ReedsSheppPath/animation.gif?raw=true)
+
+Ref:
+
+- [15.3.2 Reeds\-Shepp Curves](http://planning.cs.uiuc.edu/node822.html)
+
+- [optimal paths for a car that goes both forwards and backwards](https://pdfs.semanticscholar.org/932e/c495b1d0018fd59dee12a0bf74434fac7af4.pdf)
+
+- [ghliu/pyReedsShepp: Implementation of Reeds Shepp curve\.](https://github.com/ghliu/pyReedsShepp)
+
+## LQR based path planning
+
+A sample code using LQR based path planning for a double integrator model.
+
+![RSPlanning](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/LQRPlanner/animation.gif?raw=true)
+
+## Optimal Trajectory in a Frenet Frame
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathPlanning/FrenetOptimalTrajectory/animation.gif)
+
+This is optimal trajectory generation in a Frenet Frame.
+
+The cyan line is the target course and black crosses are obstacles.
+
+The red line is the predicted path.
+
+Ref:
+
+- [Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame](https://www.researchgate.net/profile/Moritz_Werling/publication/224156269_Optimal_Trajectory_Generation_for_Dynamic_Street_Scenarios_in_a_Frenet_Frame/links/54f749df0cf210398e9277af.pdf)
+
+- [Optimal trajectory generation for dynamic street scenarios in a Frenet Frame](https://www.youtube.com/watch?v=Cj6tAQe7UCY)
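+
+Both the quintic polynomials planner and the Frenet frame planner above come down to fitting polynomials to boundary conditions on position, velocity, and acceleration. A minimal sketch for one axis of a quintic (assuming numpy; the function name and the example values are made up, and this is not the repository's code):
+
+```python
+import numpy as np
+
+def quintic_coeffs(xs, vs, as_, xe, ve, ae, T):
+    """Coefficients of x(t) = a0 + a1*t + ... + a5*t**5 matching start/end state over [0, T]."""
+    a0, a1, a2 = xs, vs, as_ / 2.0                       # start conditions fix a0..a2
+    A = np.array([[T**3, T**4, T**5],
+                  [3 * T**2, 4 * T**3, 5 * T**4],
+                  [6 * T, 12 * T**2, 20 * T**3]])
+    b = np.array([xe - a0 - a1 * T - a2 * T**2,
+                  ve - a1 - 2 * a2 * T,
+                  ae - 2 * a2])
+    a3, a4, a5 = np.linalg.solve(A, b)                   # end conditions give a3..a5
+    return np.array([a0, a1, a2, a3, a4, a5])
+
+# Example: coeffs = quintic_coeffs(0.0, 0.0, 0.0, 10.0, 2.0, 0.0, T=5.0)
+# x = np.polyval(coeffs[::-1], np.linspace(0.0, 5.0, 50))
+```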
+
+# Path tracking
+
+## move to a pose control
+
+This is a simulation of moving to a pose control.
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathTracking/move_to_pose/animation.gif)
+
+Ref:
+
+- [P. I. Corke, "Robotics, Vision and Control" \| SpringerLink p102](https://link.springer.com/book/10.1007/978-3-642-20144-8)
+
+## Pure pursuit tracking
+
+Path tracking simulation with pure pursuit steering control and PID speed control.
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathTracking/pure_pursuit/animation.gif)
+
+The red line is the target course, the green cross is the target point for pure pursuit control, and the blue line is the tracking trajectory.
+
+Ref:
+
+- [A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles](https://arxiv.org/abs/1604.07446)
+
+## Stanley control
+
+Path tracking simulation with Stanley steering control and PID speed control.
+
+![2](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathTracking/stanley_controller/animation.gif)
+
+Ref:
+
+- [Stanley: The robot that won the DARPA grand challenge](http://robots.stanford.edu/papers/thrun.stanley05.pdf)
+
+- [Automatic Steering Methods for Autonomous Automobile Path Tracking](https://www.ri.cmu.edu/pub_files/2009/2/Automatic_Steering_Methods_for_Autonomous_Automobile_Path_Tracking.pdf)
+
+## Rear wheel feedback control
+
+Path tracking simulation with rear wheel feedback steering control and PID speed control.
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathTracking/rear_wheel_feedback/animation.gif)
+
+Ref:
+
+- [A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles](https://arxiv.org/abs/1604.07446)
+
+## Linear–quadratic regulator (LQR) steering control
+
+Path tracking simulation with LQR steering control and PID speed control.
+
+![PythonRobotics/figure_1.png at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathTracking/lqr_steer_control/animation.gif)
+
+Ref:
+
+- [ApolloAuto/apollo: An open autonomous driving platform](https://github.com/ApolloAuto/apollo)
+
+## Linear–quadratic regulator (LQR) speed and steering control
+
+Path tracking simulation with LQR speed and steering control.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/PathTracking/lqr_speed_steer_control/animation.gif)
+
+Ref:
+
+- [Towards fully autonomous driving: Systems and algorithms \- IEEE Conference Publication](http://ieeexplore.ieee.org/document/5940562/)
+
+## Model predictive speed and steering control
+
+Path tracking simulation with iterative linear model predictive speed and steering control.
+
+
+Ref:
+
+- [notebook](https://github.com/AtsushiSakai/PythonRobotics/blob/master/PathTracking/model_predictive_speed_and_steer_control/notebook.ipynb)
+
+# Arm Navigation
+
+## Two joint arm to point control
+
+Two joint arm to a point control simulation.
+
+This is an interactive simulation.
+
+You can set the goal position of the end effector with a left click on the plotting area.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/ArmNavigation/two_joint_arm_to_point_control/animation.gif)
+
+## N joint arm to point control
+
+N joint arm to a point control simulation.
+
+This is an interactive simulation.
+
+You can set the goal position of the end effector with a left click on the plotting area.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/ArmNavigation/n_joint_arm_to_point_control/animation.gif)
+
+In this simulation N = 10, but you can change it.
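+
+For the two joint arm above, the goal pose can also be reached analytically with a law-of-cosines inverse kinematics solution. A rough sketch (assuming numpy and unit link lengths; this is an illustration, not necessarily how the simulations in this repository compute their joint angles):
+
+```python
+import numpy as np
+
+def two_link_ik(x, y, l1=1.0, l2=1.0):
+    """Joint angles (theta1, theta2) placing a two-link arm's end effector at (x, y)."""
+    if np.hypot(x, y) > l1 + l2:
+        raise ValueError("target out of reach")
+    cos_t2 = (x**2 + y**2 - l1**2 - l2**2) / (2.0 * l1 * l2)   # law of cosines
+    theta2 = np.arccos(np.clip(cos_t2, -1.0, 1.0))             # elbow-down solution
+    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
+                                           l1 + l2 * np.cos(theta2))
+    return theta1, theta2
+```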
+
+# Aerial Navigation
+
+## drone 3d trajectory following
+
+This is a 3D trajectory following simulation for a quadrotor.
+
+![3](https://github.com/AtsushiSakai/PythonRobotics/raw/master/AerialNavigation/drone_3d_trajectory_following/animation.gif)
+
+# License
 
 MIT
@@ -80,7 +795,7 @@ E-mail consultant is also available.
 
-Your comment using [![Say Thanks!](https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg)](https://saythanks.io/to/AtsushiSakai) is also welcome.
+Your comment using [![Say Thanks!](https://img.shields.io/badge/Say%20Thanks-!-1EAEDB.svg)](https://saythanks.io/to/AtsushiSakai) is also welcome.
 
 This is a list: [Users comments](https://github.com/AtsushiSakai/PythonRobotics/blob/master/users_comments.md)
@@ -97,3 +812,6 @@ This is a list: [Users comments](https://github.com/AtsushiSakai/PythonRobotics/
 - [Antonin RAFFIN](https://github.com/araffin)
 
 - [Alexis Paques](https://github.com/AlexisTM)
+
+
+
diff --git a/doc/PythonRoboticsPaper b/doc/PythonRoboticsPaper
deleted file mode 160000
index 9cc193bf..00000000
--- a/doc/PythonRoboticsPaper
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 9cc193bfd0a13e29d898f5185d7d4193dcaa1d30
diff --git a/doc/.gitkeep b/paper/.gitkeep
similarity index 100%
rename from doc/.gitkeep
rename to paper/.gitkeep