update docs

This commit is contained in:
Atsushi Sakai
2018-11-16 21:21:08 +09:00
parent 906681c471
commit 953cc3857e
26 changed files with 576 additions and 130 deletions

View File

@@ -4,7 +4,22 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Inverse Kinematics for a Planar Two-Link Robotic Arm\n",
"## Two joint arm to point control\n",
"\n",
"![TwoJointArmToPointControl](https://github.com/AtsushiSakai/PythonRobotics/raw/master/ArmNavigation/two_joint_arm_to_point_control/animation.gif)\n",
"\n",
"\n",
"\n",
"\n",
"This is two joint arm to a point control simulation.\n",
"\n",
"This is a interactive simulation.\n",
"\n",
"You can set the goal position of the end effector with left-click on the ploting area.\n",
"\n",
"\n",
"\n",
"### Inverse Kinematics for a Planar Two-Link Robotic Arm\n",
"\n",
"A classic problem with robotic arms is getting the end-effector, the mechanism at the end of the arm responsible for manipulating the environment, to where you need it to be. Maybe the end-effector is a gripper and maybe you want to pick up an object and maybe you know where that object is relative to the robot - but you cannot tell the end-effector where to go directly. Instead, you have to determine the joint angles that get the end-effector to where you want it to be. This problem is known as inverse kinematics.\n",
"\n",
@@ -403,7 +418,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.2"
"version": "3.6.6"
}
},
"nbformat": 4,

View File

@@ -4,15 +4,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# _KF Basics - Part I_\n",
"----\n"
"## KF Basics - Part I\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### What is the need to describe belief in terms of PDF's?\n",
"### Introduction\n",
"#### What is the need to describe belief in terms of PDF's?\n",
"This is because robot environments are stochastic. A robot environment may have cows with Tesla by side. That is a robot and it's environment cannot be deterministically modelled(e.g as a function of something like time t). In the real world sensors are also error prone, and hence there'll be a set of values with a mean and variance that it can take. Hence, we always have to model around some mean and variances associated."
]
},
@@ -20,7 +20,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"##### What is Expectation of a Random Variables?\n",
"#### What is Expectation of a Random Variables?\n",
" Expectation is nothing but an average of the probabilites\n",
" \n",
"$$\\mathbb E[X] = \\sum_{i=1}^n p_ix_i$$\n",
@@ -56,7 +56,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"##### What is the advantage of representing the belief as a unimodal as opposed to multimodal?\n",
"#### What is the advantage of representing the belief as a unimodal as opposed to multimodal?\n",
"Obviously, it makes sense because we can't multiple probabilities to a car moving for two locations. This would be too confusing and the information will not be useful."
]
},
@@ -71,9 +71,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Variance, Covariance and Correlation\n",
"----\n",
"### Variance\n",
"### Variance, Covariance and Correlation\n",
"\n",
"#### Variance\n",
"Variance is the spread of the data. The mean does'nt tell much **about** the data. Therefore the variance tells us about the **story** about the data meaning the spread of the data.\n",
"\n",
"$$\\mathit{VAR}(X) = \\frac{1}{n}\\sum_{i=1}^n (x_i - \\mu)^2$$"
@@ -104,7 +104,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Covariance\n",
"#### Covariance\n",
"\n",
"This is for a multivariate distribution. For example, a robot in 2-D space can take values in both x and y. To describe them, a normal distribution with mean in both x and y is needed.\n",
"\n",
@@ -224,10 +224,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Gaussians \n",
"----\n",
"\n",
"\n",
"### Gaussians \n",
"\n",
"#### Central Limit Theorem\n",
"\n",
@@ -285,7 +282,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Gaussian Distribution\n",
"#### Gaussian Distribution\n",
"A Gaussian is a *continuous probability distribution* that is completely described with two parameters, the mean ($\\mu$) and the variance ($\\sigma^2$). It is defined as:\n",
"\n",
"$$ \n",
@@ -337,7 +334,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Why do we need Gaussian distributions?\n",
"#### Why do we need Gaussian distributions?\n",
"Since it becomes really difficult in the real world to deal with multimodal distribution as we cannot put the belief in two seperate location of the robots. This becomes really confusing and in practice impossible to comprehend. \n",
"Gaussian probability distribution allows us to drive the robots using only one mode with peak at the mean with some variance."
]
@@ -346,8 +343,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Gaussian Properties\n",
"----\n",
"### Gaussian Properties\n",
"\n",
"**Multiplication**\n",
"\n",
"\n",
@@ -569,7 +566,7 @@
"\n",
"\n",
"\n",
"##### numpy einsum examples"
"** numpy einsum examples **"
]
},
{
@@ -757,7 +754,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.5"
"version": "3.6.6"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@@ -4,7 +4,19 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Extended Kalman Filter Localization\n",
"## Extended Kalman Filter Localization\n",
"\n",
"![EKF](https://github.com/AtsushiSakai/PythonRobotics/raw/master/Localization/extended_kalman_filter/animation.gif)\n",
"\n",
"This is a sensor fusion localization with Extended Kalman Filter(EKF).\n",
"\n",
"The blue line is true trajectory, the black line is dead reckoning\n",
"trajectory,\n",
"\n",
"the green point is positioning observation (ex. GPS), and the red line\n",
"is estimated trajectory with EKF.\n",
"\n",
"The red ellipse is estimated covariance ellipse with EKF.\n",
"\n",
"Code; [PythonRobotics/extended\\_kalman\\_filter\\.py at master · AtsushiSakai/PythonRobotics](https://github.com/AtsushiSakai/PythonRobotics/blob/master/Localization/extended_kalman_filter/extended_kalman_filter.py)"
]
@@ -13,15 +25,24 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Kalman Filter"
"### Kalman Filter"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Ref"
"### Ref\n",
"\n",
"- [PROBABILISTIC\\-ROBOTICS\\.ORG](http://www.probabilistic-robotics.org/)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

View File

@@ -40,6 +40,7 @@ See this paper for more details:
modules/path_tracking
modules/arm_navigation
modules/aerial_navigation
modules/appendix
Indices and tables

View File

@@ -23,19 +23,35 @@ def generate_rst(npath):
    # print(npath)
    # generate dir
    dirpath = "./modules/" + os.path.dirname(npath)[3:]
    dirpath = os.path.dirname(npath)
    # print(dirpath)
    if not os.path.exists(dirpath):
        os.makedirs(dirpath)
    rstpath = os.path.abspath("./modules/" + npath[3:-5] + "rst")
    print(rstpath)
    # print(rstpath)
    basename = os.path.basename(rstpath)
    cmd = "jupyter nbconvert --to rst "
    cmd += npath
    cmd += " --output "
    cmd += rstpath
    print(cmd)
    subprocess.call(cmd, shell=True)
    cmd = "rm -rf "
    cmd += "./modules/"
    cmd += basename[:-4]
    cmd += "*"
    print(cmd)
    subprocess.call(cmd, shell=True)
    cmd = "mv "
    cmd += dirpath
    cmd += "/*.rst ./modules/"
    print(cmd)
    subprocess.call(cmd, shell=True)
    cmd = "mv "
    cmd += dirpath
    cmd += "/*_files ./modules/"
    print(cmd)
    subprocess.call(cmd, shell=True)
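
For context, a minimal sketch of how ``generate_rst`` could be driven over the
notebook tree (the glob pattern here is illustrative, not from this repo):

.. code:: python

    import glob

    # convert every notebook found under the source tree to rst
    for npath in sorted(glob.glob("../**/*.ipynb", recursive=True)):
        generate_rst(npath)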

View File

@@ -1,9 +1,12 @@
# *KF Basics - Part I*
----------------------
KF Basics - Part I
------------------
Introduction
~~~~~~~~~~~~
What is the need to describe belief in terms of PDFs?
''''''''''''''''''''''''''''''''''''''''''''''''''''''
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is because robot environments are stochastic. A robot environment
may even have cows standing next to a Tesla. That is, a robot and its environment
@@ -14,7 +17,7 @@ Hence, we always have to model around some mean and variances
associated.
What is Expectation of a Random Variable?
''''''''''''''''''''''''''''''''''''''''''
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Expectation is the probability-weighted average of the values a random variable can take.
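
A minimal numpy sketch of this discrete expectation (the values and
probabilities are made up for illustration):

.. code:: ipython3

    import numpy as np

    # E[X] = sum_i p_i * x_i: the probability-weighted average of the values
    x = np.array([1.0, 2.0, 3.0])  # values the random variable can take
    p = np.array([0.2, 0.5, 0.3])  # their probabilities (must sum to 1)
    print(np.sum(p * x))           # 0.2*1 + 0.5*2 + 0.3*3 = 2.1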
@@ -40,17 +43,17 @@ In the continous form,
What is the advantage of representing the belief as a unimodal as opposed to multimodal?
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It makes sense because we cannot assign the car's belief to two
locations at once. That would be too confusing and the
information would not be useful.
## Variance, Covariance and Correlation
---------------------------------------
Variance, Covariance and Correlation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Variance
~~~~~~~~
^^^^^^^^
Variance is the spread of the data. The mean alone doesn't tell us much **about**
the data; the variance tells us the **story** of the
@@ -73,7 +76,7 @@ data meaning the spread of the data.
Covariance
~~~~~~~~~~
^^^^^^^^^^
This is for a multivariate distribution. For example, a robot in 2-D
space can take values in both x and y. To describe them, a normal
@@ -170,8 +173,8 @@ Covariance taking the data as **population** with :math:`\frac{1}{N}`
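
As a small illustration of that population covariance (made-up 2-D
positions; ``bias=True`` gives the :math:`\frac{1}{N}` normalization):

.. code:: ipython3

    import numpy as np

    # x and y positions of a robot over several time steps
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 8.1])

    # 2x2 covariance matrix: diagonal holds VAR(x), VAR(y);
    # off-diagonal holds COV(x, y)
    print(np.cov(x, y, bias=True))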
# Gaussians
-----------
Gaussians
~~~~~~~~~
Central Limit Theorem
^^^^^^^^^^^^^^^^^^^^^
@@ -204,11 +207,11 @@ increase the sample size.(Generally, for n>=30)
.. image:: /Users/atsushisakai/Dropbox/Program/python/PythonRobotics/docs/modules/Localization/Kalmanfilter_basics_14_1.png
.. image:: Kalmanfilter_basics_files/Kalmanfilter_basics_14_1.png
Gaussian Distribution
'''''''''''''''''''''
^^^^^^^^^^^^^^^^^^^^^
A Gaussian is a *continuous probability distribution* that is completely
described with two parameters, the mean (:math:`\mu`) and the variance
@@ -241,11 +244,11 @@ charecteristic **bell curve**.
.. image:: /Users/atsushisakai/Dropbox/Program/python/PythonRobotics/docs/modules/Localization/Kalmanfilter_basics_16_0.png
.. image:: Kalmanfilter_basics_files/Kalmanfilter_basics_16_0.png
Why do we need Gaussian distributions?
''''''''''''''''''''''''''''''''''''''
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It becomes really difficult in the real world to deal with
multimodal distributions, as we cannot put the belief in two separate
@@ -254,8 +257,8 @@ impossible to comprehend. Gaussian probability distribution allows us to
drive the robots using only one mode with peak at the mean with some
variance.
## Gaussian Properties
----------------------
Gaussian Properties
~~~~~~~~~~~~~~~~~~~
**Multiplication**
@@ -318,7 +321,7 @@ New mean is
.. image:: /Users/atsushisakai/Dropbox/Program/python/PythonRobotics/docs/modules/Localization/Kalmanfilter_basics_19_1.png
.. image:: Kalmanfilter_basics_files/Kalmanfilter_basics_19_1.png
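
A small numeric sketch of that multiplication step, using the standard
closed form for the product of two univariate Gaussian pdfs:

.. code:: ipython3

    # N(mu1, var1) * N(mu2, var2) is (up to normalization) again Gaussian
    def multiply_gaussians(mu1, var1, mu2, var2):
        mu = (var2 * mu1 + var1 * mu2) / (var1 + var2)
        var = (var1 * var2) / (var1 + var2)
        return mu, var

    # mean is pulled toward the tighter Gaussian, variance shrinks
    print(multiply_gaussians(10.0, 4.0, 12.0, 1.0))  # (11.6, 0.8)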
**Addition**
@@ -369,7 +372,7 @@ of the two.
.. image:: /Users/atsushisakai/Dropbox/Program/python/PythonRobotics/docs/modules/Localization/Kalmanfilter_basics_21_1.png
.. image:: Kalmanfilter_basics_files/Kalmanfilter_basics_21_1.png
.. code:: ipython3
@@ -435,7 +438,7 @@ of the two.
.. image:: /Users/atsushisakai/Dropbox/Program/python/PythonRobotics/docs/modules/Localization/Kalmanfilter_basics_22_0.png
.. image:: Kalmanfilter_basics_files/Kalmanfilter_basics_22_0.png
This is a 3D projection of the gaussians involved with the lower surface
@@ -443,8 +446,7 @@ showing the 2D projection of the 3D projection above. The innermost
ellipse represents the highest peak, that is the maximum probability for
a given (X,Y) value.
numpy einsum examples
'''''''''''''''''''''
**numpy einsum examples**
.. code:: ipython3
@@ -545,7 +547,7 @@ numpy einsum examples
.. image:: /Users/atsushisakai/Dropbox/Program/python/PythonRobotics/docs/modules/Localization/Kalmanfilter_basics_28_1.png
.. image:: Kalmanfilter_basics_files/Kalmanfilter_basics_28_1.png
References:

View File

@@ -1,16 +1,12 @@
# *KF Basics - Part 2*
----------------------
KF Basics - Part 2
------------------
+-----------------------------------------------------------------------+
| ### Probabilistic Generative Laws |
+=======================================================================+
| \*1st Law**: |
+-----------------------------------------------------------------------+
| he belief representing the state :math:`x_{t}`, is conditioned on all |
| past states, measurements and controls. This can be shown |
| mathematically by the conditional probability shown below: |
+-----------------------------------------------------------------------+
### Probabilistic Generative Laws
**1st Law**: The belief representing the state :math:`x_{t}`, is
conditioned on all past states, measurements and controls. This can be
shown mathematically by the conditional probability shown below:
.. math:: p(x_{t} | x_{0:t-1},z_{1:t-1},u_{1:t})
@@ -53,8 +49,8 @@ Probability**
:math:`\bullet\ p(z_{t} | x_{t}) \rightarrow` **Measurement Probability**
### Conditional dependence and independence example:
----------------------------------------------------
Conditional dependence and independence example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:math:`\bullet`\ **Independent but conditionally dependent**
@@ -88,8 +84,8 @@ than Coin 1. This in turn increases the conditional probability that B
occurs. This suggests that A and B are not independent. On the other
hand, given C (Coin 1 is selected), A and B are independent.
### Bayes Rule:
---------------
Bayes Rule:
~~~~~~~~~~~
Posterior = :math:`\frac{Likelihood \times Prior}{Marginal}`
@@ -132,8 +128,8 @@ had cancer (A). This is the chance of a true positive, 80% in our case.
:math:`\bullet`\ Pr(X|not A) = Chance of a positive test (X) given that
you didn't have cancer (~A). This is a false positive, 9.6% in our case.
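
Plugging these numbers into Bayes' rule gives the (often surprising)
posterior. The 1% prior below is an assumption, matching the usual
version of this cancer-test example:

.. code:: ipython3

    # Pr(A|X) = Pr(X|A) * Pr(A) / Pr(X)
    prior = 0.01            # Pr(A): assumed 1% prevalence
    true_positive = 0.8     # Pr(X|A)
    false_positive = 0.096  # Pr(X|not A)

    marginal = true_positive * prior + false_positive * (1 - prior)
    print(true_positive * prior / marginal)  # ~0.078: only ~7.8% chance of cancer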
### Bayes Filter Algorithm
--------------------------
Bayes Filter Algorithm
~~~~~~~~~~~~~~~~~~~~~~
The basic filter algorithm is:
@@ -162,8 +158,21 @@ information here as the gaussians get multiplied here. (Multiplication
of Gaussian values forces the resultant to lie in between these numbers,
and the resultant covariance is smaller.)
### Bayes filter localization example:
--------------------------------------
Bayes filter localization example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: ipython3
    from IPython.display import Image
    Image(filename="bayes_filter.png", width=400)
.. image:: Kalmanfilter_basics_2_files/Kalmanfilter_basics_2_5_0.png
   :width: 400px
Given - A robot with a sensor to detect doorways along a hallway. Also,
the robot knows what the hallway looks like but doesn't know where it is
@@ -199,8 +208,8 @@ in the map.
Do note that the robot knows the map but doesn't know where exactly it
is on the map.
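
A minimal sketch of one predict/update cycle of such a discrete
(histogram) Bayes filter over hallway cells; the door layout and motion
kernel below are illustrative, not taken from the figure:

.. code:: ipython3

    import numpy as np

    def predict(belief, kernel):
        # motion update: convolve belief with the motion uncertainty kernel
        return np.convolve(belief, kernel, mode="same")

    def update(belief, likelihood):
        # measurement update: multiply by the likelihood, then normalize
        posterior = belief * likelihood
        return posterior / posterior.sum()

    belief = np.ones(10) / 10  # uniform prior over 10 hallway cells
    doors = np.array([3., 1, 1, 3, 1, 1, 3, 1, 1, 1])  # "door seen" likelihood
    belief = update(predict(belief, np.array([0.1, 0.8, 0.1])), doors)
    print(belief.round(3))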
### Bayes and Kalman filter structure
-------------------------------------
Bayes and Kalman filter structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The basic structure and the concept remain the same for the Kalman filter as for
the Bayes filter. The only key difference is the mathematical representation of
@@ -229,8 +238,8 @@ or acceleration.
:math:`\ast` denotes *convolution*.
### Kalman Gain
---------------
Kalman Gain
~~~~~~~~~~~
.. math:: x = (\mathcal L \bar x)
@@ -279,8 +288,8 @@ The variance in terms of the Kalman gain:
is a scaling term that chooses a value partway between :math:`\mu_z` and
:math:`\bar\mu`.**
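
A minimal univariate sketch of how that gain blends the prior and the
measurement (variable names are illustrative):

.. code:: ipython3

    def kalman_update(prior_mu, prior_var, z, z_var):
        K = prior_var / (prior_var + z_var)  # Kalman gain, between 0 and 1
        mu = prior_mu + K * (z - prior_mu)   # partway between prior and z
        var = (1 - K) * prior_var            # posterior variance shrinks
        return mu, var

    print(kalman_update(prior_mu=10.0, prior_var=4.0, z=12.0, z_var=1.0))  # (11.6, 0.8)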
### Kalman Filter - Univariate and Multivariate
-----------------------------------------------
Kalman Filter - Univariate and Multivariate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Prediction**
@@ -318,8 +327,8 @@ are vectors and matrices, but the concepts are exactly the same:
- Use the process model to predict the next state (the prior)
- Form an estimate part way between the measurement and the prior
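
A compact multivariate sketch of that predict/estimate cycle, using
generic textbook matrices for a constant-velocity model (not the
PythonRobotics implementation):

.. code:: ipython3

    import numpy as np

    def kf_predict(x, P, F, Q):
        return F @ x, F @ P @ F.T + Q  # prior from the process model

    def kf_update(x, P, z, H, R):
        y = z - H @ x                                 # residual
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P

    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state: [position, velocity]
    H = np.array([[1.0, 0.0]])             # we measure position only
    x, P = np.zeros(2), np.eye(2) * 500.0  # vague initial belief
    x, P = kf_predict(x, P, F, Q=np.eye(2) * 0.01)
    x, P = kf_update(x, P, z=np.array([1.2]), H=H, R=np.array([[5.0]]))
    print(x)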
### References:
---------------
References:
~~~~~~~~~~~
1. Roger Labbe's
`repo <https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python>`__


View File

@@ -0,0 +1,352 @@
Two joint arm to point control
------------------------------
.. figure:: https://github.com/AtsushiSakai/PythonRobotics/raw/master/ArmNavigation/two_joint_arm_to_point_control/animation.gif
   :alt: TwoJointArmToPointControl

   TwoJointArmToPointControl
This is a two joint arm to point control simulation.
This is an interactive simulation.
You can set the goal position of the end effector by left-clicking on the
plotting area.
Inverse Kinematics for a Planar Two-Link Robotic Arm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A classic problem with robotic arms is getting the end-effector, the
mechanism at the end of the arm responsible for manipulating the
environment, to where you need it to be. Perhaps the end-effector is a
gripper, you want to pick up an object, and you know where that object
is relative to the robot - but you cannot tell the end-effector where to
go directly. Instead, you have to determine the joint angles that get
the end-effector to where you want it to be. This problem is known as
inverse kinematics.
Credit for this solution goes to:
https://robotacademy.net.au/lesson/inverse-kinematics-for-a-2-joint-robot-arm-using-geometry/
First, let's define a class to make plotting our arm easier.
.. code:: ipython3
    %matplotlib inline
    from math import cos, sin
    import numpy as np
    import matplotlib.pyplot as plt

    class TwoLinkArm:
        def __init__(self, joint_angles=[0, 0]):
            self.shoulder = np.array([0, 0])
            self.link_lengths = [1, 1]
            self.update_joints(joint_angles)

        def update_joints(self, joint_angles):
            self.joint_angles = joint_angles
            self.forward_kinematics()

        def forward_kinematics(self):
            theta0 = self.joint_angles[0]
            theta1 = self.joint_angles[1]
            l0 = self.link_lengths[0]
            l1 = self.link_lengths[1]
            self.elbow = self.shoulder + np.array([l0*cos(theta0), l0*sin(theta0)])
            self.wrist = self.elbow + np.array([l1*cos(theta0 + theta1), l1*sin(theta0 + theta1)])

        def plot(self):
            plt.plot([self.shoulder[0], self.elbow[0]],
                     [self.shoulder[1], self.elbow[1]],
                     'r-')
            plt.plot([self.elbow[0], self.wrist[0]],
                     [self.elbow[1], self.wrist[1]],
                     'r-')
            plt.plot(self.shoulder[0], self.shoulder[1], 'ko')
            plt.plot(self.elbow[0], self.elbow[1], 'ko')
            plt.plot(self.wrist[0], self.wrist[1], 'ko')
Let's also define a function to make it easier to draw an angle on our
diagram.
.. code:: ipython3
    from math import sqrt

    def transform_points(points, theta, origin):
        T = np.array([[cos(theta), -sin(theta), origin[0]],
                      [sin(theta), cos(theta), origin[1]],
                      [0, 0, 1]])
        return np.matmul(T, np.array(points))

    def draw_angle(angle, offset=0, origin=[0, 0], r=0.5, n_points=100):
        x_start = r*cos(angle)
        x_end = r
        dx = (x_end - x_start)/(n_points-1)
        coords = [[0 for _ in range(n_points)] for _ in range(3)]
        x = x_start
        for i in range(n_points-1):
            y = sqrt(r**2 - x**2)
            coords[0][i] = x
            coords[1][i] = y
            coords[2][i] = 1
            x += dx
        coords[0][-1] = r
        coords[2][-1] = 1
        coords = transform_points(coords, offset, origin)
        plt.plot(coords[0], coords[1], 'k-')
Okay, we now have a TwoLinkArm class to help us draw the arm, which
we'll do several times during our derivation. Notice there is a method
called forward_kinematics - forward kinematics specifies the
end-effector position given the joint angles and link lengths. Forward
kinematics is easier than inverse kinematics.
.. code:: ipython3
    arm = TwoLinkArm()

    theta0 = 0.5
    theta1 = 1
    arm.update_joints([theta0, theta1])
    arm.plot()

    def label_diagram():
        plt.plot([0, 0.5], [0, 0], 'k--')
        plt.plot([arm.elbow[0], arm.elbow[0]+0.5*cos(theta0)],
                 [arm.elbow[1], arm.elbow[1]+0.5*sin(theta0)],
                 'k--')
        draw_angle(theta0, r=0.25)
        draw_angle(theta1, offset=theta0, origin=[arm.elbow[0], arm.elbow[1]], r=0.25)
        plt.annotate("$l_0$", xy=(0.5, 0.4), size=15, color="r")
        plt.annotate("$l_1$", xy=(0.8, 1), size=15, color="r")
        plt.annotate(r"$\theta_0$", xy=(0.35, 0.05), size=15)
        plt.annotate(r"$\theta_1$", xy=(1, 0.8), size=15)

    label_diagram()

    plt.annotate("Shoulder", xy=(arm.shoulder[0], arm.shoulder[1]), xytext=(0.15, 0.5),
                 arrowprops=dict(facecolor='black', shrink=0.05))
    plt.annotate("Elbow", xy=(arm.elbow[0], arm.elbow[1]), xytext=(1.25, 0.25),
                 arrowprops=dict(facecolor='black', shrink=0.05))
    plt.annotate("Wrist", xy=(arm.wrist[0], arm.wrist[1]), xytext=(1, 1.75),
                 arrowprops=dict(facecolor='black', shrink=0.05))
    plt.axis("equal")
    plt.show()
.. image:: Planar_Two_Link_IK_files/Planar_Two_Link_IK_5_0.png
It's common to name arm joints anatomically, hence the names shoulder,
elbow, and wrist. In this example, the wrist is not itself a joint, but
we can consider it to be our end-effector. If we constrain the shoulder
to the origin, we can write the forward kinematics for the elbow and the
wrist.
| :math:`elbow_x = l_0\cos(\theta_0)`
| :math:`elbow_y = l_0\sin(\theta_0)`
| :math:`wrist_x = elbow_x + l_1\cos(\theta_0+\theta_1) = l_0\cos(\theta_0) + l_1\cos(\theta_0+\theta_1)`
| :math:`wrist_y = elbow_y + l_1\sin(\theta_0+\theta_1) = l_0\sin(\theta_0) + l_1\sin(\theta_0+\theta_1)`
Since the wrist is our end-effector, let's just call its coordinates
:math:`x` and :math:`y`. The forward kinematics for our
end-effector is then
| :math:`x = l_0\cos(\theta_0) + l_1\cos(\theta_0+\theta_1)`
| :math:`y = l_0\sin(\theta_0) + l_1\sin(\theta_0+\theta_1)`
A first attempt to find the joint angles :math:`\theta_0` and
:math:`\theta_1` that would get our end-effector to the desired
coordinates :math:`x` and :math:`y` might be to try solving the forward
kinematics for :math:`\theta_0` and :math:`\theta_1`, but that would be
the wrong move. An easier path involves going back to the geometry of
the arm.
.. code:: ipython3
    from math import pi

    arm.plot()
    label_diagram()

    plt.plot([0, arm.wrist[0]],
             [0, arm.wrist[1]],
             'k--')
    plt.plot([arm.wrist[0], arm.wrist[0]],
             [0, arm.wrist[1]],
             'b--')
    plt.plot([0, arm.wrist[0]],
             [0, 0],
             'b--')
    plt.annotate("$x$", xy=(0.6, 0.05), size=15, color="b")
    plt.annotate("$y$", xy=(1, 0.2), size=15, color="b")
    plt.annotate("$r$", xy=(0.45, 0.9), size=15)
    plt.annotate(r"$\alpha$", xy=(0.75, 0.6), size=15)

    alpha = pi-theta1
    draw_angle(alpha, offset=theta0+theta1, origin=[arm.elbow[0], arm.elbow[1]], r=0.1)

    plt.axis("equal")
    plt.show()
.. image:: Planar_Two_Link_IK_files/Planar_Two_Link_IK_7_0.png
The distance from the end-effector to the robot base (shoulder joint) is
:math:`r` and can be written in terms of the end-effector position using
the Pythagorean Theorem.
:math:`r^2` = :math:`x^2 + y^2`
Then, by the law of cosines, :math:`r^2` can also be written as:
:math:`r^2` = :math:`l_0^2 + l_1^2 - 2l_0l_1\cos(\alpha)`
Because :math:`\alpha` can be written as :math:`\pi - \theta_1`, we can
relate the desired end-effector position to one of our joint angles,
:math:`\theta_1`.
:math:`x^2 + y^2` = :math:`l_0^2 + l_1^2 - 2l_0l_1\cos(\alpha)`
:math:`x^2 + y^2` = :math:`l_0^2 + l_1^2 - 2l_0l_1\cos(\pi-\theta_1)`
:math:`2l_0l_1\cos(\pi-\theta_1) = l_0^2 + l_1^2 - x^2 - y^2`
:math:`\cos(\pi-\theta_1) = \frac{l_0^2 + l_1^2 - x^2 - y^2}{2l_0l_1}`

:math:`\cos(\pi-\theta_1) = -\cos(\theta_1)` is a trigonometric
identity, so we can also write
:math:`\cos(\theta_1) = \frac{x^2 + y^2 - l_0^2 - l_1^2}{2l_0l_1}`
which leads us to an equation for :math:`\theta_1` in terms of the link
lengths and the desired end-effector position!
:math:`\theta_1 = \cos^{-1}(\frac{x^2 + y^2 - l_0^2 - l_1^2}{2l_0l_1})`
This is actually one of two possible solutions for :math:`\theta_1`, but
we'll ignore the other possibility for now. This solution will lead us
to the “arm-down” configuration of the arm, which is what's shown in the
diagram. Now we'll derive an equation for :math:`\theta_0` that depends
on this value of :math:`\theta_1`.
.. code:: ipython3
    from math import atan2

    arm.plot()
    plt.plot([0, arm.wrist[0]],
             [0, arm.wrist[1]],
             'k--')

    p = 1 + cos(theta1)
    plt.plot([arm.elbow[0], p*cos(theta0)],
             [arm.elbow[1], p*sin(theta0)],
             'b--', linewidth=5)
    plt.plot([arm.wrist[0], p*cos(theta0)],
             [arm.wrist[1], p*sin(theta0)],
             'b--', linewidth=5)

    beta = atan2(arm.wrist[1], arm.wrist[0])-theta0
    draw_angle(beta, offset=theta0, r=0.45)

    plt.annotate(r"$\beta$", xy=(0.35, 0.35), size=15)
    plt.annotate("$r$", xy=(0.45, 0.9), size=15)
    plt.annotate(r"$l_1sin(\theta_1)$", xy=(1.25, 1.1), size=15, color="b")
    plt.annotate(r"$l_1cos(\theta_1)$", xy=(1.1, 0.4), size=15, color="b")

    label_diagram()
    plt.axis("equal")
    plt.show()
.. image:: Planar_Two_Link_IK_files/Planar_Two_Link_IK_9_0.png
Consider the angle between the displacement vector :math:`r` and the
first link :math:`l_0`; let's call it :math:`\beta`. If we extend the
first link to include the component of the second link in the same
direction as the first, we form a right triangle with legs
:math:`l_0+l_1\cos(\theta_1)` and :math:`l_1\sin(\theta_1)`, allowing us
to express :math:`\beta` as
:math:`\beta = \tan^{-1}(\frac{l_1\sin(\theta_1)}{l_0+l_1\cos(\theta_1)})`
We now have an expression for this angle :math:`\beta` in terms of one
of our arm's joint angles. Now, can we relate :math:`\beta` to
:math:`\theta_0`? Yes!
.. code:: ipython3
    arm.plot()
    label_diagram()

    plt.plot([0, arm.wrist[0]],
             [0, arm.wrist[1]],
             'k--')
    plt.plot([arm.wrist[0], arm.wrist[0]],
             [0, arm.wrist[1]],
             'b--')
    plt.plot([0, arm.wrist[0]],
             [0, 0],
             'b--')

    gamma = atan2(arm.wrist[1], arm.wrist[0])
    draw_angle(beta, offset=theta0, r=0.2)
    draw_angle(gamma, r=0.6)

    plt.annotate("$x$", xy=(0.7, 0.05), size=15, color="b")
    plt.annotate("$y$", xy=(1, 0.2), size=15, color="b")
    plt.annotate(r"$\beta$", xy=(0.2, 0.2), size=15)
    plt.annotate(r"$\gamma$", xy=(0.6, 0.2), size=15)

    plt.axis("equal")
    plt.show()
.. image:: Planar_Two_Link_IK_files/Planar_Two_Link_IK_12_0.png
Our first joint angle :math:`\theta_0` added to :math:`\beta` gives us
the angle between the positive :math:`x`-axis and the displacement
vector :math:`r`; let's call this angle :math:`\gamma`.
:math:`\gamma = \theta_0+\beta`
:math:`\theta_0`, our remaining joint angle, can then be expressed as
:math:`\theta_0 = \gamma-\beta`
We already know :math:`\beta`. :math:`\gamma` is simply the inverse
tangent of :math:`\frac{y}{x}`, so we have an equation for
:math:`\theta_0`!
:math:`\theta_0 = \tan^{-1}(\frac{y}{x})-\tan^{-1}(\frac{l_1\sin(\theta_1)}{l_0+l_1\cos(\theta_1)})`
We now have the inverse kinematics for a planar two-link robotic arm. If
you're planning on implementing this in a programming language, it's
best to use the atan2 function, which is included in most math libraries
and correctly accounts for the signs of :math:`y` and :math:`x`. Notice
that :math:`\theta_1` must be calculated before :math:`\theta_0`.
| :math:`\theta_1 = \cos^{-1}(\frac{x^2 + y^2 - l_0^2 - l_1^2}{2l_0l_1})`
| :math:`\theta_0 = atan2(y, x)-atan2(l_1\sin(\theta_1), l_0+l_1\cos(\theta_1))`
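
As a closing sketch, here are the two equations collected into a
hypothetical ``inverse_kinematics`` helper (arm-down branch only),
checked against the forward kinematics from earlier:

.. code:: ipython3

    from math import acos, atan2, cos, sin

    def inverse_kinematics(x, y, l0=1, l1=1):
        # theta1 must be computed first
        theta1 = acos((x**2 + y**2 - l0**2 - l1**2) / (2*l0*l1))
        theta0 = atan2(y, x) - atan2(l1*sin(theta1), l0 + l1*cos(theta1))
        return theta0, theta1

    # forward kinematics of the recovered angles reproduces the target point
    theta0, theta1 = inverse_kinematics(1.0, 1.2)
    print(cos(theta0) + cos(theta0 + theta1),
          sin(theta0) + sin(theta0 + theta1))  # ~1.0 ~1.2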


View File

@@ -0,0 +1,9 @@
.. _appendix:
Appendix
==============
.. include:: Kalmanfilter_basics.rst
.. include:: Kalmanfilter_basics_2.rst

View File

@@ -3,19 +3,8 @@
Arm Navigation
==============
Two joint arm to point control
------------------------------
.. include:: Planar_Two_Link_IK.rst
Two joint arm to a point control simulation.
This is an interactive simulation.
You can set the goal position of the end effector by left-clicking on the
plotting area.
Check this doc for mathematical background: `Jupyter notebook documentation <https://github.com/AtsushiSakai/PythonRobotics/blob/master/ArmNavigation/two_joint_arm_to_point_control/Planar_Two_Link_IK.ipynb>`_
|arm_to_point|
N joint arm to point control
----------------------------
@@ -38,6 +27,5 @@ Arm navigation with obstacle avoidance simulation.
|obstacle|
.. |arm_to_point| image:: https://github.com/AtsushiSakai/PythonRobotics/raw/master/ArmNavigation/two_joint_arm_to_point_control/animation.gif
.. |n_joints_arm| image:: https://github.com/AtsushiSakai/PythonRobotics/raw/master/ArmNavigation/n_joint_arm_to_point_control/animation.gif
.. |obstacle| image:: https://github.com/AtsushiSakai/PythonRobotics/raw/master/ArmNavigation/arm_obstacle_navigation/animation.gif

View File

@@ -0,0 +1,29 @@
Extended Kalman Filter Localization
-----------------------------------
.. figure:: https://github.com/AtsushiSakai/PythonRobotics/raw/master/Localization/extended_kalman_filter/animation.gif
   :alt: EKF

   EKF
This is a sensor fusion localization with an Extended Kalman Filter (EKF).
The blue line is the true trajectory, the black line is the dead-reckoning
trajectory,
the green points are the position observations (e.g., GPS), and the red line
is the trajectory estimated by the EKF.
The red ellipse is the covariance ellipse estimated by the EKF.
Code: `PythonRobotics/extended_kalman_filter.py at master ·
AtsushiSakai/PythonRobotics <https://github.com/AtsushiSakai/PythonRobotics/blob/master/Localization/extended_kalman_filter/extended_kalman_filter.py>`__
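
For orientation, a minimal sketch of one EKF predict/update cycle with a
state :math:`[x, y, yaw, v]` and GPS-like position observations; the
matrices and names here are illustrative, not copied from the linked code:

.. code:: ipython3

    import numpy as np

    DT = 0.1  # time step [s] (assumed)

    def motion_model(x, u):
        # state [x, y, yaw, v], input [v, yaw_rate]
        px, py, yaw, _ = x
        v, w = u
        return np.array([px + v*np.cos(yaw)*DT, py + v*np.sin(yaw)*DT,
                         yaw + w*DT, v])

    def jacob_f(x, u):
        # Jacobian of the motion model, linearizing around the current state
        yaw, v = x[2], u[0]
        return np.array([[1, 0, -v*np.sin(yaw)*DT, np.cos(yaw)*DT],
                         [0, 1,  v*np.cos(yaw)*DT, np.sin(yaw)*DT],
                         [0, 0, 1, 0],
                         [0, 0, 0, 1]])

    H = np.array([[1., 0., 0., 0.], [0., 1., 0., 0.]])  # observe x, y only

    def ekf_step(x, P, u, z, Q, R):
        x_pred = motion_model(x, u)                     # predict
        F = jacob_f(x, u)
        P_pred = F @ P @ F.T + Q
        y = z - H @ x_pred                              # innovation
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        return x_pred + K @ y, (np.eye(4) - K @ H) @ P_pred

    x, P = np.zeros(4), np.eye(4)
    x, P = ekf_step(x, P, u=np.array([1.0, 0.1]), z=np.array([0.05, 0.02]),
                    Q=np.eye(4)*0.01, R=np.eye(2)*0.1)
    print(x)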
Kalman Filter
~~~~~~~~~~~~~
Ref
~~~
- `PROBABILISTIC-ROBOTICS.ORG <http://www.probabilistic-robotics.org/>`__

View File

@@ -3,26 +3,8 @@
Localization
============
Extended Kalman Filter localization
-----------------------------------
.. include:: extended_kalman_filter_localization.rst
.. raw:: html
<img src="https://github.com/AtsushiSakai/PythonRobotics/raw/master/Localization/extended_kalman_filter/animation.gif" width="640">
This is a sensor fusion localization with an Extended Kalman Filter (EKF).
The blue line is the true trajectory, the black line is the dead-reckoning
trajectory,
the green points are the position observations (e.g., GPS), and the red line
is the trajectory estimated by the EKF.
The red ellipse is the covariance ellipse estimated by the EKF.
Ref:
- `PROBABILISTIC ROBOTICS`_
Unscented Kalman Filter localization
------------------------------------

View File

@@ -90,7 +90,7 @@ Ref:
Conference
Publication <http://ieeexplore.ieee.org/document/5940562/>`__
.. include:: ./PathTracking/model_predictive_speed_and_steer_control/Model_predictive_speed_and_steering_control.rst
.. include:: Model_predictive_speed_and_steering_control.rst
Ref: