Mirror of https://github.com/AtsushiSakai/PythonRobotics.git (synced 2026-01-11 07:18:00 -05:00)

Standardize "Ref:" to "Reference" across files (#1210)

Updated all instances of "Ref:" to "Reference" for consistency in both code and documentation. This change improves clarity and aligns with standard terminology practices.

In each hunk below, the old "Ref:" line is shown immediately followed by its "Reference" replacement, with the surrounding context lines kept as they appear in the affected file.
@@ -5,7 +5,7 @@ A rocket powered landing with successive convexification
author: Sven Niederberger
        Atsushi Sakai
Ref:
Reference:
- Python implementation of 'Successive Convexification for 6-DoF Mars Rocket Powered Landing with Free-Final-Time' paper
by Michael Szmuk and Behcet Acıkmese.

@@ -5,7 +5,7 @@ Left-click the plot to set the goal position of the end effector
Author: Daniel Ingram (daniel-s-ingram)
        Atsushi Sakai (@Atsushi_twi)
Ref: P. I. Corke, "Robotics, Vision & Control", Springer 2017,
Reference: P. I. Corke, "Robotics, Vision & Control", Springer 2017,
ISBN 978-3-319-54413-7 p102
- [Robotics, Vision and Control]
(https://link.springer.com/book/10.1007/978-3-642-20144-8)

@@ -4,7 +4,7 @@ Ensemble Kalman Filter(EnKF) localization sample
author: Ryohei Sasaki(rsasaki0109)
Ref:
Reference:
Ensemble Kalman filtering
(https://rmets.onlinelibrary.wiley.com/doi/10.1256/qj.05.135)

@@ -3,7 +3,7 @@ Distance Map
author: Wang Zheng (@Aglargil)
Ref:
Reference:
- [Distance Map]
(https://cs.brown.edu/people/pfelzens/papers/dt-final.pdf)

@@ -4,7 +4,7 @@ Object shape recognition with L-shape fitting
author: Atsushi Sakai (@Atsushi_twi)
Ref:
Reference:
- Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners -
The Robotics Institute Carnegie Mellon University
https://www.ri.cmu.edu/publications/

@@ -3,7 +3,7 @@ Behavior Tree
author: Wang Zheng (@Aglargil)
Ref:
Reference:
- [Behavior Tree](https://en.wikipedia.org/wiki/Behavior_tree_(artificial_intelligence,_robotics_and_control))
"""

@@ -3,7 +3,7 @@ State Machine
author: Wang Zheng (@Aglargil)
Ref:
Reference:
- [State Machine]
(https://en.wikipedia.org/wiki/Finite-state_machine)

@@ -23,7 +23,7 @@ def deflate_and_encode(plantuml_text):
"""
zlib compress the plantuml text and encode it for the plantuml server.
Ref: https://plantuml.com/en/text-encoding
Reference https://plantuml.com/en/text-encoding
"""
plantuml_alphabet = (
    string.digits + string.ascii_uppercase + string.ascii_lowercase + "-_"
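The hunk above touches the docstring of `deflate_and_encode`, which turns PlantUML source into the URL-safe string the PlantUML server expects. The sketch below illustrates that encoding idea (raw DEFLATE remapped from base64 onto the PlantUML alphabet); it is a minimal example written for this note, not necessarily the repository's exact implementation.

```python
import base64
import string
import zlib


def deflate_and_encode_sketch(plantuml_text: str) -> str:
    """zlib-compress PlantUML text and re-encode it with the PlantUML alphabet.

    Illustrative sketch; see https://plantuml.com/en/text-encoding.
    """
    plantuml_alphabet = (
        string.digits + string.ascii_uppercase + string.ascii_lowercase + "-_"
    )
    base64_alphabet = (
        string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"
    )
    # Strip the 2-byte zlib header and 4-byte checksum to get a raw DEFLATE stream.
    raw_deflate = zlib.compress(plantuml_text.encode("utf-8"))[2:-4]
    # Standard base64 first, then translate into the PlantUML alphabet.
    translation = bytes.maketrans(
        base64_alphabet.encode("ascii"), plantuml_alphabet.encode("ascii")
    )
    encoded = base64.b64encode(raw_deflate).translate(translation)
    return encoded.decode("ascii").rstrip("=")
```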
@@ -3,7 +3,7 @@ Elastic Bands
author: Wang Zheng (@Aglargil)
Ref:
Reference:
- [Elastic Bands: Connecting Path Planning and Control]
(http://www8.cs.umu.se/research/ifor/dl/Control/elastic%20bands.pdf)

@@ -5,7 +5,7 @@ eta^3 polynomials planner
author: Joe Dinius, Ph.D (https://jwdinius.github.io)
        Atsushi Sakai (@Atsushi_twi)
Ref:
Reference:
- [eta^3-Splines for the Smooth Path Generation of Wheeled Mobile Robots]
(https://ieeexplore.ieee.org/document/4339545/)

@@ -4,7 +4,7 @@ A converter between Cartesian and Frenet coordinate systems
author: Wang Zheng (@Aglargil)
Ref:
Reference:
- [Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame]
(https://www.researchgate.net/profile/Moritz_Werling/publication/224156269_Optimal_Trajectory_Generation_for_Dynamic_Street_Scenarios_in_a_Frenet_Frame/links/54f749df0cf210398e9277af.pdf)

@@ -4,7 +4,7 @@ Frenet optimal trajectory generator
author: Atsushi Sakai (@Atsushi_twi)
Ref:
Reference:
- [Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame]
(https://www.researchgate.net/profile/Moritz_Werling/publication/224156269_Optimal_Trajectory_Generation_for_Dynamic_Street_Scenarios_in_a_Frenet_Frame/links/54f749df0cf210398e9277af.pdf)

@@ -4,7 +4,7 @@ Potential Field based path planner
author: Atsushi Sakai (@Atsushi_twi)
Ref:
Reference:
https://www.cs.cmu.edu/~motionplanning/lecture/Chap4-Potential-Field_howie.pdf
"""

@@ -4,7 +4,7 @@ Quintic Polynomials Planner
author: Atsushi Sakai (@Atsushi_twi)
Ref:
Reference:
- [Local Path planning And Motion Control For Agv In Positioning](https://ieeexplore.ieee.org/document/637936/)

@@ -8,7 +8,7 @@ author: Atsushi Sakai (@Atsushi_twi)
https://github.com/AtsushiSakai/PythonRobotics/blob/master/PathPlanning
/ModelPredictiveTrajectoryGenerator/lookup_table_generator.py
Ref:
Reference:
- State Space Sampling of Feasible Motions for High-Performance Mobile Robot
Navigation in Complex Environments

@@ -4,7 +4,7 @@ Nonlinear MPC simulation with CGMRES
author Atsushi Sakai (@Atsushi_twi)
Ref:
Reference:
Shunichi09/nonlinear_control: Implementing the nonlinear model predictive
control, sliding mode control https://github.com/Shunichi09/PythonLinearNonlinearControl

@@ -76,7 +76,7 @@ class PathFinderController:
# [-pi, pi] to prevent unstable behavior e.g. difference going
# from 0 rad to 2*pi rad with slight turn
# Ref: The velocity v always has a constant sign which depends on the initial value of α.
# The velocity v always has a constant sign which depends on the initial value of α.
rho = np.hypot(x_diff, y_diff)
v = self.Kp_rho * rho
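The `PathFinderController` hunk above refers to wrapping angle differences into [-pi, pi] and to the proportional distance term `rho`. The sketch below shows the classic move-to-pose control law (from Corke's "Robotics, Vision & Control") that these lines belong to; the gain values and function signature are illustrative assumptions, not the repository's exact code.

```python
import numpy as np


def angle_mod(angle):
    # Wrap an angle to [-pi, pi] so a small turn never looks like a 2*pi jump.
    return (angle + np.pi) % (2.0 * np.pi) - np.pi


def move_to_pose_step(x, y, theta, x_goal, y_goal, theta_goal,
                      Kp_rho=9.0, Kp_alpha=15.0, Kp_beta=-3.0):
    # Distance and bearing errors expressed in the robot frame.
    x_diff, y_diff = x_goal - x, y_goal - y
    rho = np.hypot(x_diff, y_diff)
    alpha = angle_mod(np.arctan2(y_diff, x_diff) - theta)
    beta = angle_mod(theta_goal - theta - alpha)
    # Proportional control law: v keeps a constant sign determined by the
    # initial value of alpha (drive backwards if the goal starts behind us).
    v = Kp_rho * rho
    w = Kp_alpha * alpha + Kp_beta * beta
    if alpha > np.pi / 2 or alpha < -np.pi / 2:
        v = -v
    return v, w
```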
@@ -4,7 +4,7 @@ Path tracking simulation with Stanley steering control and PID speed control.
author: Atsushi Sakai (@Atsushi_twi)
Ref:
Reference:
- [Stanley: The robot that won the DARPA grand challenge](http://isl.ecst.csuchico.edu/DOCS/darpa2005/DARPA%202005%20Stanley.pdf)
- [Autonomous Automobile Path Tracking](https://www.ri.cmu.edu/pub_files/2009/2/Automatic_Steering_Methods_for_Autonomous_Automobile_Path_Tracking.pdf)
README.md
@@ -168,7 +168,7 @@ All animation gifs are stored here: [AtsushiSakai/PythonRoboticsGifs: Animation
<img src="https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/Localization/extended_kalman_filter/animation.gif" width="640" alt="EKF pic">
Ref:
Reference
- [documentation](https://atsushisakai.github.io/PythonRobotics/modules/localization/extended_kalman_filter_localization_files/extended_kalman_filter_localization.html)

@@ -186,7 +186,7 @@ It is assumed that the robot can measure a distance from landmarks (RFID).
These measurements are used for PF localization.
Ref:
Reference
- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)

@@ -207,7 +207,7 @@ The filter integrates speed input and range observations from RFID for localizat
Initial position is not needed.
Ref:
Reference
- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)

@@ -256,7 +256,7 @@ It can calculate a rotation matrix, and a translation vector between points and
Ref:
Reference
- [Introduction to Mobile Robotics: Iterative Closest Point Algorithm](https://cs.gmu.edu/~kosecka/cs685/cs685-icp.pdf)

@@ -275,7 +275,7 @@ Black points are landmarks, blue crosses are estimated landmark positions by Fas
Ref:
Reference
- [PROBABILISTIC ROBOTICS](http://www.probabilistic-robotics.org/)

@@ -321,7 +321,7 @@ This is a 2D grid based the shortest path planning with D star algorithm.
The animation shows a robot finding its path avoiding an obstacle using the D* search algorithm.
Ref:
Reference
- [D* Algorithm Wikipedia](https://en.wikipedia.org/wiki/D*)

@@ -346,7 +346,7 @@ This is a 2D grid based path planning with Potential Field algorithm.
In the animation, the blue heat map shows potential value on each grid.
Ref:
Reference
- [Robotic Motion Planning:Potential Functions](https://www.cs.cmu.edu/~motionplanning/lecture/Chap4-Potential-Field_howie.pdf)

@@ -362,7 +362,7 @@ This script is a path planning code with state lattice planning.
This code uses the model predictive trajectory generator to solve boundary problem.
Ref:
Reference
- [Optimal rough terrain trajectory generation for wheeled mobile robots](https://journals.sagepub.com/doi/pdf/10.1177/0278364906075328)

@@ -390,7 +390,7 @@ Cyan crosses means searched points with Dijkstra method,
The red line is the final path of PRM.
Ref:
Reference
- [Probabilistic roadmap \- Wikipedia](https://en.wikipedia.org/wiki/Probabilistic_roadmap)

@@ -406,7 +406,7 @@ This is a path planning code with RRT\*
Black circles are obstacles, green line is a searched tree, red crosses are start and goal positions.
Ref:
Reference
- [Incremental Sampling-based Algorithms for Optimal Motion Planning](https://arxiv.org/abs/1005.0416)

@@ -426,7 +426,7 @@ A double integrator motion model is used for LQR local planner.
Ref:
Reference
- [LQR\-RRT\*: Optimal Sampling\-Based Motion Planning with Automatically Derived Extension Heuristics](https://lis.csail.mit.edu/pubs/perez-icra12.pdf)

@@ -441,7 +441,7 @@ Motion planning with quintic polynomials.
It can calculate a 2D path, velocity, and acceleration profile based on quintic polynomials.
Ref:
Reference
- [Local Path Planning And Motion Control For Agv In Positioning](https://ieeexplore.ieee.org/document/637936/)

@@ -451,7 +451,7 @@ A sample code with Reeds Shepp path planning.
Ref:
Reference
- [15.3.2 Reeds\-Shepp Curves](http://planning.cs.uiuc.edu/node822.html)

@@ -477,7 +477,7 @@ The cyan line is the target course and black crosses are obstacles.
The red line is the predicted path.
Ref:
Reference
- [Optimal Trajectory Generation for Dynamic Street Scenarios in a Frenet Frame](https://www.researchgate.net/profile/Moritz_Werling/publication/224156269_Optimal_Trajectory_Generation_for_Dynamic_Street_Scenarios_in_a_Frenet_Frame/links/54f749df0cf210398e9277af.pdf)

@@ -492,7 +492,7 @@ This is a simulation of moving to a pose control
Ref:
Reference
- [P. I. Corke, "Robotics, Vision and Control" \| SpringerLink p102](https://link.springer.com/book/10.1007/978-3-642-20144-8)

@@ -503,7 +503,7 @@ Path tracking simulation with Stanley steering control and PID speed control.
Ref:
Reference
- [Stanley: The robot that won the DARPA grand challenge](http://robots.stanford.edu/papers/thrun.stanley05.pdf)

@@ -517,7 +517,7 @@ Path tracking simulation with rear wheel feedback steering control and PID speed
Ref:
Reference
- [A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles](https://arxiv.org/abs/1604.07446)

@@ -528,7 +528,7 @@ Path tracking simulation with LQR speed and steering control.
Ref:
Reference
- [Towards fully autonomous driving: Systems and algorithms \- IEEE Conference Publication](https://ieeexplore.ieee.org/document/5940562/)

@@ -539,7 +539,7 @@ Path tracking simulation with iterative linear model predictive speed and steeri
<img src="https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/PathTracking/model_predictive_speed_and_steer_control/animation.gif" width="640" alt="MPC pic">
Ref:
Reference
- [documentation](https://atsushisakai.github.io/PythonRobotics/modules/path_tracking/model_predictive_speed_and_steering_control/model_predictive_speed_and_steering_control.html)

@@ -551,7 +551,7 @@ A motion planning and path tracking simulation with NMPC of C-GMRES
Ref:
Reference
- [documentation](https://atsushisakai.github.io/PythonRobotics/modules/path_tracking/cgmres_nmpc/cgmres_nmpc.html)

@@ -591,7 +591,7 @@ This is a 3d trajectory generation simulation for a rocket powered landing.
Ref:
Reference
- [documentation](https://atsushisakai.github.io/PythonRobotics/modules/aerial_navigation/rocket_powered_landing/rocket_powered_landing.html)
@@ -331,7 +331,7 @@ are vectors and matrices, but the concepts are exactly the same:
- Use the process model to predict the next state (the prior)
- Form an estimate part way between the measurement and the prior
References:
Reference
~~~~~~~~~~~
1. Roger Labbe’s

@@ -552,7 +552,7 @@ a given (X,Y) value.
.. image:: Kalmanfilter_basics_files/Kalmanfilter_basics_28_1.png
References:
Reference
~~~~~~~~~~~
1. Roger Labbe’s

@@ -91,7 +91,7 @@ the target minimum velocity :math:`v_{min}` can be calculated as follows:
:math:`v_{min} = \frac{d_{t+1}+d_{t}}{\Delta t} = \frac{d_{t+1}+d_{t}}{(\kappa_{t+1}-\kappa_{t})}\frac{tan\dot{\delta}_{max}}{WB}`
References:
Reference
~~~~~~~~~~~
- `Vehicle Dynamics and Control <https://link.springer.com/book/10.1007/978-1-4614-1433-9>`_

@@ -5,3 +5,8 @@ Ensamble Kalman Filter Localization
This is a sensor fusion localization with Ensamble Kalman Filter(EnKF).
Code Link
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autofunction:: Localization.ensemble_kalman_filter.ensemble_kalman_filter.enkf_localization
@@ -23,28 +23,40 @@ is estimated trajectory with EKF.
The red ellipse is estimated covariance ellipse with EKF.
Code: `PythonRobotics/extended_kalman_filter.py at master ·
AtsushiSakai/PythonRobotics <https://github.com/AtsushiSakai/PythonRobotics/blob/master/Localization/extended_kalman_filter/extended_kalman_filter.py>`__
Kalman Filter with Speed Scale Factor Correction
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. image:: ekf_with_velocity_correction_1_0.png
   :width: 600px
This is a Extended kalman filter (EKF) localization with velocity correction.
This is for correcting the vehicle speed measured with scale factor errors due to factors such as wheel wear.
Code: `PythonRobotics/extended_kalman_ekf_with_velocity_correctionfilter.py
AtsushiSakai/PythonRobotics <https://github.com/AtsushiSakai/PythonRobotics/blob/master/Localization/extended_kalman_filter/extended_kalman_ekf_with_velocity_correctionfilter.py>`__
Filter design
Code Link
~~~~~~~~~~~~~
Position Estimation Kalman Filter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. autofunction:: Localization.extended_kalman_filter.extended_kalman_filter.ekf_estimation
Extended Kalman Filter algorithm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Localization process using Extended Kalman Filter:EKF is
=== Predict ===
:math:`x_{Pred} = Fx_t+Bu_t`
:math:`P_{Pred} = J_f P_t J_f^T + Q`
=== Update ===
:math:`z_{Pred} = Hx_{Pred}`
:math:`y = z - z_{Pred}`
:math:`S = J_g P_{Pred}.J_g^T + R`
:math:`K = P_{Pred}.J_g^T S^{-1}`
:math:`x_{t+1} = x_{Pred} + Ky`
:math:`P_{t+1} = ( I - K J_g) P_{Pred}`
Filter design
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this simulation, the robot has a state vector includes 4 states at
time :math:`t`.
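The Predict/Update equations quoted in the hunk above map almost one-to-one onto NumPy. The following sketch shows that mapping; the function name and the callbacks for the motion model, observation model and their Jacobians are illustrative placeholders, not the exact `ekf_estimation` signature used in the repository.

```python
import numpy as np


def ekf_step(x_est, P_est, z, u, motion_model, jacob_f, observation_model, jacob_g, Q, R):
    # === Predict ===
    x_pred = motion_model(x_est, u)           # x_Pred = F x_t + B u_t
    J_f = jacob_f(x_est, u)
    P_pred = J_f @ P_est @ J_f.T + Q          # P_Pred = J_f P_t J_f^T + Q
    # === Update ===
    z_pred = observation_model(x_pred)        # z_Pred = H x_Pred
    y = z - z_pred                            # innovation
    J_g = jacob_g(x_pred)
    S = J_g @ P_pred @ J_g.T + R
    K = P_pred @ J_g.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_new)) - K @ J_g) @ P_pred
    return x_new, P_new
```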
@@ -82,27 +94,9 @@ In the code, “observation” function generates the input and observation
vector with noise
`code <https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_filter.py#L34-L50>`__
Kalman Filter with Speed Scale Factor Correction
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In this simulation, the robot has a state vector includes 5 states at
time :math:`t`.
.. math:: \textbf{x}_t=[x_t, y_t, \phi_t, v_t, s_t]
x, y are a 2D x-y position, :math:`\phi` is orientation, v is
velocity, and s is a scale factor of velocity.
In the code, “xEst” means the state vector.
`code <https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_ekf_with_velocity_correctionfilter.py#L163>`__
The rest is the same as the Position Estimation Kalman Filter.
Motion Model
~~~~~~~~~~~~
Position Estimation Kalman Filter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
~~~~~~~~~~~~~~~~~
The robot model is
@@ -137,9 +131,61 @@ Its Jacobian matrix is
:math:`\begin{equation*} = \begin{bmatrix} 1& 0 & -v \sin(\phi) \Delta t & \cos(\phi) \Delta t\\ 0 & 1 & v \cos(\phi) \Delta t & \sin(\phi) \Delta t\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \end{equation*}`
Observation Model
~~~~~~~~~~~~~~~~~
The robot can get x-y position information from GPS.
So GPS Observation model is
.. math:: \textbf{z}_{t} = g(\textbf{x}_t) = H \textbf{x}_t
where
:math:`\begin{equation*} H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix} \end{equation*}`
The observation function states that
:math:`\begin{equation*} \begin{bmatrix} x' \\ y' \end{bmatrix} = g(\textbf{x}) = \begin{bmatrix} x \\ y \end{bmatrix} \end{equation*}`
Its Jacobian matrix is
:math:`\begin{equation*} J_g = \begin{bmatrix} \frac{\partial x'}{\partial x} & \frac{\partial x'}{\partial y} & \frac{\partial x'}{\partial \phi} & \frac{\partial x'}{\partial v}\\ \frac{\partial y'}{\partial x}& \frac{\partial y'}{\partial y} & \frac{\partial y'}{\partial \phi} & \frac{\partial y'}{ \partial v}\\ \end{bmatrix} \end{equation*}`
:math:`\begin{equation*} = \begin{bmatrix} 1& 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \end{bmatrix} \end{equation*}`
Kalman Filter with Speed Scale Factor Correction
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is a Extended kalman filter (EKF) localization with velocity correction.
This is for correcting the vehicle speed measured with scale factor errors due to factors such as wheel wear.
In this simulation, the robot has a state vector includes 5 states at
time :math:`t`.
.. math:: \textbf{x}_t=[x_t, y_t, \phi_t, v_t, s_t]
x, y are a 2D x-y position, :math:`\phi` is orientation, v is
velocity, and s is a scale factor of velocity.
In the code, “xEst” means the state vector.
`code <https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_ekf_with_velocity_correctionfilter.py#L163>`__
The rest is the same as the Position Estimation Kalman Filter.
.. image:: ekf_with_velocity_correction_1_0.png
   :width: 600px
Code Link
~~~~~~~~~~~~~
.. autofunction:: Localization.extended_kalman_filter.ekf_with_velocity_correction.ekf_estimation
Motion Model
~~~~~~~~~~~~
The robot model is
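The Jacobian quoted at the top of the hunk above corresponds to a unicycle-style model with state [x, y, φ, v] and input [v, yaw rate], and the same section describes a GPS observation of the x-y position only. A rough sketch of those two models is given below; the function names and the fixed time step are assumptions for illustration, not the repository's exact code.

```python
import numpy as np

DT = 0.1  # time step [s] (assumed for this sketch)


def motion_model(x, u):
    # x = [x, y, phi, v]^T (column vector), u = [v, yaw_rate]^T
    F = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0]])
    B = np.array([[DT * np.cos(x[2, 0]), 0.0],
                  [DT * np.sin(x[2, 0]), 0.0],
                  [0.0, DT],
                  [1.0, 0.0]])
    return F @ x + B @ u


def observation_model(x):
    # GPS measures only the x-y position: z = H x
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
    return H @ x
```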
@@ -178,33 +224,7 @@ Its Jacobian matrix is
Observation Model
~~~~~~~~~~~~~~~~~
Position Estimation Kalman Filter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The robot can get x-y position infomation from GPS.
So GPS Observation model is
.. math:: \textbf{z}_{t} = g(\textbf{x}_t) = H \textbf{x}_t
where
:math:`\begin{equation*} H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix} \end{equation*}`
The observation function states that
:math:`\begin{equation*} \begin{bmatrix} x' \\ y' \end{bmatrix} = g(\textbf{x}) = \begin{bmatrix} x \\ y \end{bmatrix} \end{equation*}`
Its Jacobian matrix is
:math:`\begin{equation*} J_g = \begin{bmatrix} \frac{\partial x'}{\partial x} & \frac{\partial x'}{\partial y} & \frac{\partial x'}{\partial \phi} & \frac{\partial x'}{\partial v}\\ \frac{\partial y'}{\partial x}& \frac{\partial y'}{\partial y} & \frac{\partial y'}{\partial \phi} & \frac{\partial y'}{ \partial v}\\ \end{bmatrix} \end{equation*}`
:math:`\begin{equation*} = \begin{bmatrix} 1& 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \end{bmatrix} \end{equation*}`
Kalman Filter with Speed Scale Factor Correction
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The robot can get x-y position infomation from GPS.
The robot can get x-y position information from GPS.
So GPS Observation model is
@@ -225,32 +245,8 @@ Its Jacobian matrix is
:math:`\begin{equation*} = \begin{bmatrix} 1& 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ \end{bmatrix} \end{equation*}`
Extended Kalman Filter
~~~~~~~~~~~~~~~~~~~~~~
Localization process using Extended Kalman Filter:EKF is
=== Predict ===
:math:`x_{Pred} = Fx_t+Bu_t`
:math:`P_{Pred} = J_f P_t J_f^T + Q`
=== Update ===
:math:`z_{Pred} = Hx_{Pred}`
:math:`y = z - z_{Pred}`
:math:`S = J_g P_{Pred}.J_g^T + R`
:math:`K = P_{Pred}.J_g^T S^{-1}`
:math:`x_{t+1} = x_{Pred} + Ky`
:math:`P_{t+1} = ( I - K J_g) P_{Pred}`
Ref:
~~~~
Reference
^^^^^^^^^^^
- `PROBABILISTIC-ROBOTICS.ORG <http://www.probabilistic-robotics.org/>`__
@@ -16,6 +16,11 @@ The filter uses speed input and range observations from RFID for localization.
Initial position information is not needed.
Code Link
~~~~~~~~~~~~~
.. autofunction:: Localization.histogram_filter.histogram_filter.histogram_filter_localization
Filtering algorithm
~~~~~~~~~~~~~~~~~~~~

@@ -107,7 +112,7 @@ There are two ways to calculate the final positions:
References:
Reference
~~~~~~~~~~~
- `_PROBABILISTIC ROBOTICS: <http://www.probabilistic-robotics.org>`_
@@ -15,6 +15,12 @@ It is assumed that the robot can measure a distance from landmarks
This measurements are used for PF localization.
Code Link
~~~~~~~~~~~~~
.. autofunction:: Localization.particle_filter.particle_filter.pf_localization
How to calculate covariance matrix from particles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -30,7 +36,7 @@ The covariance matrix :math:`\Xi` from particle information is calculated by the
- :math:`\mu_j` is the :math:`j` th mean state of particles.
References:
Reference
~~~~~~~~~~~
- `_PROBABILISTIC ROBOTICS: <http://www.probabilistic-robotics.org>`_
@@ -7,7 +7,13 @@ This is a sensor fusion localization with Unscented Kalman Filter(UKF).
The lines and points are same meaning of the EKF simulation.
References:
Code Link
~~~~~~~~~~~~~
.. autofunction:: Localization.unscented_kalman_filter.unscented_kalman_filter.ukf_estimation
Reference
~~~~~~~~~~~
- `Discriminatively Trained Unscented Kalman Filter for Mobile Robot Localization <https://www.researchgate.net/publication/267963417_Discriminatively_Trained_Unscented_Kalman_Filter_for_Mobile_Robot_Localization>`_
@@ -578,7 +578,7 @@ reckoning and control functions are passed along here as well.
.. image:: ekf_slam_1_0.png
References:
Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- `PROBABILISTIC ROBOTICS <http://www.probabilistic-robotics.org/>`_

@@ -17,7 +17,7 @@ The black stars are landmarks for graph edge generation.
.. include:: graphSLAM_formulation.rst
.. include:: graphSLAM_SE2_example.rst
References:
Reference
~~~~~~~~~~~
- `A Tutorial on Graph-Based SLAM <http://www2.informatik.uni-freiburg.de/~stachnis/pdf/grisetti10titsmag.pdf>`_
@@ -13,7 +13,7 @@ You can get different Beizer course:
.. image:: Figure_2.png
Ref:
Reference
- `Continuous Curvature Path Generation Based on Bezier Curves for
Autonomous

@@ -7,7 +7,7 @@ Eta^3 Spline path planning
This is a path planning with Eta^3 spline.
Ref:
Reference
- `\\eta^3-Splines for the Smooth Path Generation of Wheeled Mobile
Robots <https://ieeexplore.ieee.org/document/4339545/>`__

@@ -35,7 +35,7 @@ Low Speed and Merging and Stopping Scenario
This scenario illustrates the trajectory planning at low speeds with merging and stopping behaviors.
Ref:
Reference
- `Optimal Trajectory Generation for Dynamic Street Scenarios in a
Frenet

@@ -63,7 +63,7 @@ This is a 2D grid based shortest path planning with D star algorithm.
The animation shows a robot finding its path avoiding an obstacle using the D* search algorithm.
Ref:
Reference
- `D* search Wikipedia <https://en.wikipedia.org/wiki/D*>`__

@@ -74,7 +74,7 @@ This is a 2D grid based path planning and replanning with D star lite algorithm.
.. image:: https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/PathPlanning/DStarLite/animation.gif
Ref:
Reference
- `Improved Fast Replanning for Robot Navigation in Unknown Terrain <http://www.cs.cmu.edu/~maxim/files/dlite_icra02.pdf>`_

@@ -88,7 +88,7 @@ This is a 2D grid based path planning with Potential Field algorithm.
In the animation, the blue heat map shows potential value on each grid.
Ref:
Reference
- `Robotic Motion Planning:Potential
Functions <https://www.cs.cmu.edu/~motionplanning/lecture/Chap4-Potential-Field_howie.pdf>`__

@@ -16,7 +16,7 @@ Lookup table generation sample
.. image:: lookup_table.png
Ref:
Reference
- `Optimal rough terrain trajectory generation for wheeled mobile
robots <https://journals.sagepub.com/doi/pdf/10.1177/0278364906075328>`__

@@ -13,7 +13,7 @@ Cyan crosses means searched points with Dijkstra method,
The red line is the final path of PRM.
Ref:
Reference
- `Probabilistic roadmap -
Wikipedia <https://en.wikipedia.org/wiki/Probabilistic_roadmap>`__
@@ -97,7 +97,7 @@ Each velocity and acceleration boundary condition can be calculated with each or
:math:`v_{xe}=v_ecos(\theta_e), v_{ye}=v_esin(\theta_e)`
References:
Reference
~~~~~~~~~~~
- `Local Path Planning And Motion Control For Agv In
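The quintic-polynomial hunk above quotes the boundary conditions, with the end velocity resolved into v_xe and v_ye from the end orientation. Given start and end position, velocity and acceleration constraints on one axis over a horizon T, the six polynomial coefficients follow from a small linear solve, as in this sketch (the function name and fixed horizon are illustrative assumptions):

```python
import numpy as np


def quintic_coefficients(xs, vxs, axs, xe, vxe, axe, T):
    # x(t) = a0 + a1 t + a2 t^2 + a3 t^3 + a4 t^4 + a5 t^5
    # a0, a1, a2 come directly from the start conditions; a3..a5 from the end.
    A = np.array([
        [T**3,     T**4,      T**5],
        [3 * T**2, 4 * T**3,  5 * T**4],
        [6 * T,    12 * T**2, 20 * T**3],
    ])
    b = np.array([
        xe - xs - vxs * T - 0.5 * axs * T**2,
        vxe - vxs - axs * T,
        axe - axs,
    ])
    a3, a4, a5 = np.linalg.solve(A, b)
    return np.array([xs, vxs, 0.5 * axs, a3, a4, a5])
```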
@@ -380,7 +380,7 @@ Hence, we have:
- :math:`v = (t - φ)`
Ref:
Reference
- `15.3.2 Reeds-Shepp
Curves <https://lavalle.pl/planning/node822.html>`__

@@ -53,7 +53,7 @@ This is a path planning code with Informed RRT*.
The cyan ellipse is the heuristic sampling domain of Informed RRT*.
Ref:
Reference
- `Informed RRT\*: Optimal Sampling-based Path Planning Focused via
Direct Sampling of an Admissible Ellipsoidal

@@ -68,7 +68,7 @@ Batch Informed RRT\*
This is a path planning code with Batch Informed RRT*.
Ref:
Reference
- `Batch Informed Trees (BIT*): Sampling-based Optimal Planning via the
Heuristically Guided Search of Implicit Random Geometric

@@ -87,7 +87,7 @@ In this code, pure-pursuit algorithm is used for steering control,
PID is used for speed control.
Ref:
Reference
- `Motion Planning in Complex Environments using Closed-loop
Prediction <https://acl.mit.edu/papers/KuwataGNC08.pdf>`__

@@ -109,7 +109,7 @@ A double integrator motion model is used for LQR local planner.
.. image:: https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/PathPlanning/LQRRRTStar/animation.gif
Ref:
Reference
- `LQR-RRT\*: Optimal Sampling-Based Motion Planning with Automatically
Derived Extension

@@ -22,7 +22,7 @@ Lane sampling
.. image:: https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/PathPlanning/StateLatticePlanner/LaneSampling.gif
Ref:
Reference
- `Optimal rough terrain trajectory generation for wheeled mobile
robots <https://journals.sagepub.com/doi/pdf/10.1177/0278364906075328>`__

@@ -11,7 +11,7 @@ Cyan crosses mean searched points with Dijkstra method,
The red line is the final path of Vornoi Road-Map.
Ref:
Reference
- `Robotic Motion Planning <https://www.cs.cmu.edu/~motionplanning/lecture/Chap5-RoadMap-Methods_howie.pdf>`__

@@ -134,7 +134,7 @@ Simulation results
References:
Reference
~~~~~~~~~~~
- `Towards fully autonomous driving: Systems and algorithms <https://ieeexplore.ieee.org/document/5940562/>`__
@@ -113,7 +113,7 @@ The optimal control input `u` can be calculated as:
where `K` is the feedback gain matrix obtained by solving the Riccati equation.
References:
Reference
~~~~~~~~~~~
- `ApolloAuto/apollo: An open autonomous driving platform <https://github.com/ApolloAuto/apollo>`_
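The LQR hunk above mentions obtaining the feedback gain `K` from the Riccati equation. One common way to do this is to iterate the discrete-time algebraic Riccati equation until convergence; the sketch below shows that idea with illustrative iteration limits and tolerance, not the repository's exact routine.

```python
import numpy as np


def dlqr_gain(A, B, Q, R, max_iter=150, eps=0.01):
    # Fixed-point iteration of the discrete-time algebraic Riccati equation:
    # P = A^T P A - A^T P B (R + B^T P B)^{-1} B^T P A + Q
    P = Q
    for _ in range(max_iter):
        Pn = (A.T @ P @ A
              - A.T @ P @ B @ np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
              + Q)
        if np.abs(Pn - P).max() < eps:
            P = Pn
            break
        P = Pn
    # Feedback gain K such that u = -K x minimizes the LQR cost.
    K = np.linalg.inv(B.T @ P @ B + R) @ (B.T @ P @ A)
    return K
```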
@@ -9,7 +9,7 @@ speed control.
The red line is a target course, the green cross means the target point
for pure pursuit control, the blue line is the tracking.
References:
Reference
~~~~~~~~~~~
- `A Survey of Motion Planning and Control Techniques for Self-driving

@@ -6,7 +6,7 @@ PID speed control.
.. image:: https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/PathTracking/rear_wheel_feedback/animation.gif
References:
Reference
~~~~~~~~~~~
- `A Survey of Motion Planning and Control Techniques for Self-driving
Urban Vehicles <https://arxiv.org/abs/1604.07446>`__

@@ -6,7 +6,7 @@ control.
.. image:: https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/PathTracking/stanley_controller/animation.gif
References:
Reference
~~~~~~~~~~~
- `Stanley: The robot that won the DARPA grand

@@ -10,7 +10,7 @@ This is an electric wheelchair control demo by [Katsushun89](https://github.com/
Ref:
Reference:
- [toioと同じように動くWHILL Model CR (in Japanese)](https://qiita.com/KatsuShun89/items/68fde7544ecfe36096d2)