Merge pull request #130 from AtsushiSakai/ekfnotebook

EKF documentation
This commit is contained in:
Atsushi Sakai
2018-11-19 21:26:21 +09:00
committed by GitHub
5 changed files with 313 additions and 42 deletions

View File

@@ -11,12 +11,12 @@ import math
import matplotlib.pyplot as plt
# Estimation parameter of EKF
Q = np.diag([1.0, 1.0])**2 # Observation x,y position covariance
R = np.diag([0.1, 0.1, np.deg2rad(1.0), 1.0])**2 # predict state covariance
Q = np.diag([0.1, 0.1, np.deg2rad(1.0), 1.0])**2 # predict state covariance
R = np.diag([1.0, 1.0])**2 # Observation x,y position covariance
# Simulation parameter
Qsim = np.diag([0.5, 0.5])**2
Rsim = np.diag([1.0, np.deg2rad(30.0)])**2
Qsim = np.diag([1.0, np.deg2rad(30.0)])**2
Rsim = np.diag([0.5, 0.5])**2
DT = 0.1 # time tick [s]
SIM_TIME = 50.0 # simulation time [s]
@@ -36,13 +36,13 @@ def observation(xTrue, xd, u):
xTrue = motion_model(xTrue, u)
# add noise to gps x-y
zx = xTrue[0, 0] + np.random.randn() * Qsim[0, 0]
zy = xTrue[1, 0] + np.random.randn() * Qsim[1, 1]
z = np.array([[zx, zy]])
zx = xTrue[0, 0] + np.random.randn() * Rsim[0, 0]
zy = xTrue[1, 0] + np.random.randn() * Rsim[1, 1]
z = np.array([[zx, zy]]).T
# add noise to input
ud1 = u[0, 0] + np.random.randn() * Rsim[0, 0]
ud2 = u[1, 0] + np.random.randn() * Rsim[1, 1]
ud1 = u[0, 0] + np.random.randn() * Qsim[0, 0]
ud2 = u[1, 0] + np.random.randn() * Qsim[1, 1]
ud = np.array([[ud1, ud2]]).T
xd = motion_model(xd, ud)
@@ -62,7 +62,7 @@ def motion_model(x, u):
[0.0, DT],
[1.0, 0.0]])
x = F.dot(x) + B.dot(u)
x = F@x + B@u
return x
@@ -74,7 +74,7 @@ def observation_model(x):
[0, 1, 0, 0]
])
z = H.dot(x)
z = H@x
return z
@@ -120,16 +120,16 @@ def ekf_estimation(xEst, PEst, z, u):
# Predict
xPred = motion_model(xEst, u)
jF = jacobF(xPred, u)
PPred = jF.dot(PEst).dot(jF.T) + R
PPred = jF@PEst@jF.T + Q
# Update
jH = jacobH(xPred)
zPred = observation_model(xPred)
y = z.T - zPred
S = jH.dot(PPred).dot(jH.T) + Q
K = PPred.dot(jH.T).dot(np.linalg.inv(S))
xEst = xPred + K.dot(y)
PEst = (np.eye(len(xEst)) - K.dot(jH)).dot(PPred)
y = z - zPred
S = jH@PPred@jH.T + R
K = PPred@jH.T@np.linalg.inv(S)
xEst = xPred + K@y
PEst = (np.eye(len(xEst)) - K@jH)@PPred
return xEst, PEst
@@ -153,7 +153,7 @@ def plot_covariance_ellipse(xEst, PEst):
angle = math.atan2(eigvec[bigind, 1], eigvec[bigind, 0])
R = np.array([[math.cos(angle), math.sin(angle)],
[-math.sin(angle), math.cos(angle)]])
fx = R.dot(np.array([[x, y]]))
fx = R@(np.array([x, y]))
px = np.array(fx[0, :] + xEst[0, 0]).flatten()
py = np.array(fx[1, :] + xEst[1, 0]).flatten()
plt.plot(px, py, "--r")
@@ -175,7 +175,7 @@ def main():
hxEst = xEst
hxTrue = xTrue
hxDR = xTrue
hz = np.zeros((1, 2))
hz = np.zeros((2, 1))
while SIM_TIME >= time:
time += DT
@@ -189,7 +189,7 @@ def main():
hxEst = np.hstack((hxEst, xEst))
hxDR = np.hstack((hxDR, xDR))
hxTrue = np.hstack((hxTrue, xTrue))
hz = np.vstack((hz, z))
hz = np.hstack((hz, z))
if show_animation:
plt.cla()
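
The net effect of this file's changes is a naming swap plus the `@` matrix-multiplication operator: `Q` now holds the process (prediction) noise covariance and `R` the GPS observation noise covariance, while `Qsim` perturbs the simulated input and `Rsim` the simulated GPS reading. A minimal sketch of the new convention (the helper name `noisy_observation` is hypothetical; the file's real `observation` function also propagates the true and dead-reckoning states):

import numpy as np

# EKF parameters (new naming convention from this diff)
Q = np.diag([0.1, 0.1, np.deg2rad(1.0), 1.0]) ** 2  # process noise covariance
R = np.diag([1.0, 1.0]) ** 2                        # GPS observation noise covariance

# Simulation parameters
Qsim = np.diag([1.0, np.deg2rad(30.0)]) ** 2        # input noise on [v, omega]
Rsim = np.diag([0.5, 0.5]) ** 2                     # GPS noise on [x, y]

def noisy_observation(xTrue, u):
    # GPS reading of the true x-y position, as a 2x1 column vector
    zx = xTrue[0, 0] + np.random.randn() * Rsim[0, 0]
    zy = xTrue[1, 0] + np.random.randn() * Rsim[1, 1]
    z = np.array([[zx, zy]]).T
    # Noisy control input [v, omega]
    ud1 = u[0, 0] + np.random.randn() * Qsim[0, 0]
    ud2 = u[1, 0] + np.random.randn() * Qsim[1, 1]
    ud = np.array([[ud1, ud2]]).T
    return z, ud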

View File

@@ -25,24 +25,182 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Kalman Filter"
"### Filter design\n",
"\n",
"In this simulation, the robot has a state vector includes 4 states at time $t$.\n",
"\n",
"$$\\textbf{x}_t=[x_t, y_t, \\theta_t, v_t]$$\n",
"\n",
"x, y are a 2D x-y position, $\\theta$ is orientation, and v is velocity.\n",
"\n",
"In the code, \"xEst\" means the state vector. [code](https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_filter.py#L168)\n",
"\n",
"And, $P_t$ is covariace matrix of the state,\n",
"\n",
"$Q$ is covariance matrix of process noise, \n",
"\n",
"$R$ is covariance matrix of observation noise at time $t$ \n",
"\n",
" \n",
"\n",
"The robot has a speed sensor and a gyro sensor.\n",
"\n",
"So, the input vecor can be used as each time step\n",
"\n",
"$$\\textbf{u}_t=[v_t, \\omega_t]$$\n",
"\n",
"Also, the robot has a GNSS sensor, it means that the robot can observe x-y position at each time.\n",
"\n",
"$$\\textbf{z}_t=[x_t,y_t]$$\n",
"\n",
"The input and observation vector includes sensor noise.\n",
"\n",
"In the code, \"observation\" function generates the input and observation vector with noise [code](https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_filter.py#L34-L50)\n",
"\n"
]
},
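
As a rough, illustrative sketch (not part of the notebook), the state vector and its covariance can be set up as follows; the zero initial state and identity covariance mirror the linked code's main(), while the sample input and observation values are made up purely for illustration:

import numpy as np

xEst = np.zeros((4, 1))   # state vector [x, y, yaw, v]^T
PEst = np.eye(4)          # state covariance P_t

u = np.array([[1.0], [0.1]])    # input [v, omega]^T: 1.0 m/s, 0.1 rad/s (illustrative)
z = np.array([[0.5], [-0.2]])   # one noisy GPS observation [x, y]^T (illustrative)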
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Ref\n",
"### Motion Model\n",
"\n",
"- [PROBABILISTIC\\-ROBOTICS\\.ORG](http://www.probabilistic-robotics.org/)"
"The robot model is \n",
"\n",
"$$ \\dot{x} = vcos(\\phi)$$\n",
"\n",
"$$ \\dot{y} = vsin((\\phi)$$\n",
"\n",
"$$ \\dot{\\phi} = \\omega$$\n",
"\n",
"\n",
"So, the motion model is\n",
"\n",
"$$\\textbf{x}_{t+1} = F\\textbf{x}_t+B\\textbf{u}_t$$\n",
"\n",
"where\n",
"\n",
"$\\begin{equation*}\n",
"F=\n",
"\\begin{bmatrix}\n",
"1 & 0 & 0 & 0\\\\\n",
"0 & 1 & 0 & 0\\\\\n",
"0 & 0 & 1 & 0 \\\\\n",
"0 & 0 & 0 & 0 \\\\\n",
"\\end{bmatrix}\n",
"\\end{equation*}$\n",
"\n",
"$\\begin{equation*}\n",
"B=\n",
"\\begin{bmatrix}\n",
"sin(\\phi)dt & 0\\\\\n",
"cos(\\phi)dt & 0\\\\\n",
"0 & dt\\\\\n",
"1 & 0\\\\\n",
"\\end{bmatrix}\n",
"\\end{equation*}$\n",
"\n",
"$dt$ is a time interval.\n",
"\n",
"This is implemented at [code](https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_filter.py#L53-L67)\n",
"\n",
"Its Javaobian matrix is\n",
"\n",
"$\\begin{equation*}\n",
"J_F=\n",
"\\begin{bmatrix}\n",
"\\frac{dx}{dx}& \\frac{dx}{dy} & \\frac{dx}{d\\phi} & \\frac{dx}{dv}\\\\\n",
"\\frac{dy}{dx}& \\frac{dy}{dy} & \\frac{dy}{d\\phi} & \\frac{dy}{dv}\\\\\n",
"\\frac{d\\phi}{dx}& \\frac{d\\phi}{dy} & \\frac{d\\phi}{d\\phi} & \\frac{d\\phi}{dv}\\\\\n",
"\\frac{dv}{dx}& \\frac{dv}{dy} & \\frac{dv}{d\\phi} & \\frac{dv}{dv}\\\\\n",
"\\end{bmatrix}\n",
"=\n",
"\\begin{bmatrix}\n",
"1& 0 & -v sin(\\phi)dt & cos(\\phi)dt\\\\\n",
"0 & 1 & v cos(\\phi)dt & sin(\\phi) dt\\\\\n",
"0 & 0 & 1 & 0\\\\\n",
"0 & 0 & 0 & 1\\\\\n",
"\\end{bmatrix}\n",
"\\end{equation*}$\n"
]
},
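
A compact Python sketch of this motion model and its Jacobian, matching the matrices above; `jacob_f` is a renamed stand-in for the file's `jacobF`:

import math
import numpy as np

DT = 0.1  # time tick [s]

def motion_model(x, u):
    # x = [x, y, yaw, v]^T, u = [v, omega]^T
    F = np.array([[1.0, 0, 0, 0],
                  [0, 1.0, 0, 0],
                  [0, 0, 1.0, 0],
                  [0, 0, 0, 0]])
    B = np.array([[DT * math.cos(x[2, 0]), 0],
                  [DT * math.sin(x[2, 0]), 0],
                  [0.0, DT],
                  [1.0, 0.0]])
    return F @ x + B @ u

def jacob_f(x, u):
    # Jacobian of the motion model, evaluated at the current yaw and commanded speed
    yaw = x[2, 0]
    v = u[0, 0]
    return np.array([
        [1.0, 0.0, -DT * v * math.sin(yaw), DT * math.cos(yaw)],
        [0.0, 1.0,  DT * v * math.cos(yaw), DT * math.sin(yaw)],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]])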
{
"cell_type": "code",
"execution_count": null,
"cell_type": "markdown",
"metadata": {},
"outputs": [],
"source": []
"source": [
"# Observation Model\n",
"\n",
"The robot can get x-y position infomation from GPS.\n",
"\n",
"So GPS Observation model is\n",
"\n",
"$$\\textbf{z}_{t} = H\\textbf{x}_t$$\n",
"\n",
"where\n",
"\n",
"$\\begin{equation*}\n",
"B=\n",
"\\begin{bmatrix}\n",
"1 & 0 & 0& 0\\\\\n",
"0 & 1 & 0& 0\\\\\n",
"\\end{bmatrix}\n",
"\\end{equation*}$\n",
"\n",
"Its Jacobian matrix is\n",
"\n",
"$\\begin{equation*}\n",
"J_H=\n",
"\\begin{bmatrix}\n",
"\\frac{dx}{dx}& \\frac{dx}{dy} & \\frac{dx}{d\\phi} & \\frac{dx}{dv}\\\\\n",
"\\frac{dy}{dx}& \\frac{dy}{dy} & \\frac{dy}{d\\phi} & \\frac{dy}{dv}\\\\\n",
"\\end{bmatrix}\n",
"=\n",
"\\begin{bmatrix}\n",
"1& 0 & 0 & 0\\\\\n",
"0 & 1 & 0 & 0\\\\\n",
"\\end{bmatrix}\n",
"\\end{equation*}$\n",
"\n"
]
},
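
And a matching sketch for the observation model and its (constant) Jacobian; `jacob_h` stands in for the file's `jacobH`:

import numpy as np

def observation_model(x):
    # Pick the x-y position out of the state [x, y, yaw, v]^T
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]])
    return H @ x

def jacob_h(x):
    # The observation model is linear, so its Jacobian is simply H
    return np.array([[1.0, 0, 0, 0],
                     [0, 1.0, 0, 0]])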
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Extented Kalman Filter\n",
"\n",
"Localization process using Extendted Kalman Filter:EKF is\n",
"\n",
"=== Predict ===\n",
"\n",
"$x_{Pred} = Fx_t+Bu_t$\n",
"\n",
"$P_{Pred} = J_FP_t J_F^T + Q$\n",
"\n",
"=== Update ===\n",
"\n",
"$z_{Pred} = Hx_{Pred}$ \n",
"\n",
"$y = z - z_{Pred}$\n",
"\n",
"$S = J_H P_{Pred}.J_H^T + R$\n",
"\n",
"$K = P_{Pred}.J_H^T S^{-1}$\n",
"\n",
"$x_{t+1} = x_{Pred} + Ky$\n",
"\n",
"$P_{t+1} = ( I - K J_H) P_{Pred}$\n",
"\n"
]
},
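
These equations map one to one onto the updated `ekf_estimation` function in the Python diff above; for reference, a condensed copy that assumes the `Q`, `R`, `motion_model`, `jacob_f`, `jacob_h`, and `observation_model` definitions sketched earlier on this page:

import numpy as np

def ekf_estimation(xEst, PEst, z, u):
    # Predict
    xPred = motion_model(xEst, u)
    jF = jacob_f(xPred, u)
    PPred = jF @ PEst @ jF.T + Q

    # Update
    jH = jacob_h(xPred)
    zPred = observation_model(xPred)
    y = z - zPred                        # innovation
    S = jH @ PPred @ jH.T + R            # innovation covariance
    K = PPred @ jH.T @ np.linalg.inv(S)  # Kalman gain
    xEst = xPred + K @ y
    PEst = (np.eye(len(xEst)) - K @ jH) @ PPred
    return xEst, PEst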
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Ref:\n",
"\n",
"- [PROBABILISTIC\\-ROBOTICS\\.ORG](http://www.probabilistic-robotics.org/)"
]
}
],
"metadata": {

Binary file not shown (animation GIF updated: 2.6 MiB before, 2.3 MiB after).

View File

@@ -10,6 +10,11 @@ import math
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append("../../")
from matplotrecorder import matplotrecorder
show_animation = True
@@ -55,14 +60,11 @@ def calc_dynamic_window(x, config):
x[3] + config.max_accel * config.dt,
x[4] - config.max_dyawrate * config.dt,
x[4] + config.max_dyawrate * config.dt]
# print(Vs, Vd)
# [vmin,vmax, yawrate min, yawrate max]
dw = [max(Vs[0], Vd[0]), min(Vs[1], Vd[1]),
max(Vs[2], Vd[2]), min(Vs[3], Vd[3])]
#print(dw)
return dw
@@ -76,7 +78,6 @@ def calc_trajectory(xinit, v, y, config):
traj = np.vstack((traj, x))
time += config.dt
# print(len(traj))
return traj
@@ -98,7 +99,7 @@ def calc_final_input(x, u, dw, config, goal, ob):
speed_cost = config.speed_cost_gain * \
(config.max_speed - traj[-1, 3])
ob_cost = calc_obstacle_cost(traj, ob, config)
#print(ob_cost)
# print(ob_cost)
final_cost = to_goal_cost + speed_cost + ob_cost
@@ -110,9 +111,6 @@ def calc_final_input(x, u, dw, config, goal, ob):
min_u = [v, y]
best_traj = traj
# print(min_u)
# input()
return min_u, best_traj
@@ -144,8 +142,8 @@ def calc_to_goal_cost(traj, goal, config):
goal_magnitude = math.sqrt(goal[0]**2 + goal[1]**2)
traj_magnitude = math.sqrt(traj[-1, 0]**2 + traj[-1, 1]**2)
dot_product = (goal[0]*traj[-1, 0]) + (goal[1]*traj[-1, 1])
error = dot_product / (goal_magnitude*traj_magnitude)
dot_product = (goal[0] * traj[-1, 0]) + (goal[1] * traj[-1, 1])
error = dot_product / (goal_magnitude * traj_magnitude)
error_angle = math.acos(error)
cost = config.to_goal_cost_gain * error_angle
@@ -197,7 +195,7 @@ def main():
x = motion(x, u, config.dt)
traj = np.vstack((traj, x)) # store state history
#print(traj)
# print(traj)
if show_animation:
plt.cla()
@@ -209,6 +207,7 @@ def main():
plt.axis("equal")
plt.grid(True)
plt.pause(0.0001)
matplotrecorder.save_frame()
# check goal
if math.sqrt((x[0] - goal[0])**2 + (x[1] - goal[1])**2) <= config.robot_radius:
@@ -218,7 +217,13 @@ def main():
print("Done")
if show_animation:
plt.plot(traj[:, 0], traj[:, 1], "-r")
plt.show()
plt.pause(0.0001)
for i in range(10):
matplotrecorder.save_frame()
matplotrecorder.save_movie("animation.gif", 0.1)
plt.show()
if __name__ == '__main__':
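
For context, the pattern this diff introduces is: call `matplotrecorder.save_frame()` once per plotted step, then write the frames out with `save_movie()`. A hedged usage sketch (the module path and the meaning of `save_movie`'s second argument as a per-frame duration are assumptions inferred from the calls above):

import sys
import matplotlib.pyplot as plt

sys.path.append("../../")  # matplotrecorder is assumed to live two directories up, as in the diff
from matplotrecorder import matplotrecorder

for step in range(50):
    plt.cla()
    plt.plot([0, step], [0, step], "-r")  # placeholder drawing for illustration
    plt.pause(0.0001)
    matplotrecorder.save_frame()          # capture the current figure as one frame

for _ in range(10):
    matplotrecorder.save_frame()          # hold the final frame for a moment
matplotrecorder.save_movie("animation.gif", 0.1)  # assumed: 0.1 s per frame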

View File

@@ -20,10 +20,118 @@ The red ellipse is estimated covariance ellipse with EKF.
Code; `PythonRobotics/extended_kalman_filter.py at master ·
AtsushiSakai/PythonRobotics <https://github.com/AtsushiSakai/PythonRobotics/blob/master/Localization/extended_kalman_filter/extended_kalman_filter.py>`__
Kalman Filter
Filter design
~~~~~~~~~~~~~
Ref
~~~
In this simulation, the robot has a state vector that includes 4 states at
time :math:`t`.
.. math:: \textbf{x}_t=[x_t, y_t, \theta_t, v_t]
where x, y are the 2D x-y position, :math:`\theta` is the orientation, and v
is the velocity.
In the code, “xEst” means the state vector.
`code <https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_filter.py#L168>`__
:math:`P_t` is the covariance matrix of the state,
:math:`Q` is the covariance matrix of the process noise,
and :math:`R` is the covariance matrix of the observation noise at time :math:`t`.
 
The robot has a speed sensor and a gyro sensor.
So, the input vector at each time step is
.. math:: \textbf{u}_t=[v_t, \omega_t]
The robot also has a GNSS sensor, which means it can observe its x-y position
at each time step.
.. math:: \textbf{z}_t=[x_t,y_t]
Both the input and the observation vectors include sensor noise.
In the code, the “observation” function generates the input and observation
vectors with noise:
`code <https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_filter.py#L34-L50>`__
Motion Model
~~~~~~~~~~~~
The robot model is
.. math:: \dot{x} = v \cos(\phi)
.. math:: \dot{y} = v \sin(\phi)
.. math:: \dot{\phi} = \omega
So, the motion model is
.. math:: \textbf{x}_{t+1} = F\textbf{x}_t+B\textbf{u}_t
where
:math:`\begin{equation*} F= \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ \end{bmatrix} \end{equation*}`
:math:`\begin{equation*} B= \begin{bmatrix} \cos(\phi) dt & 0\\ \sin(\phi) dt & 0\\ 0 & dt\\ 1 & 0\\ \end{bmatrix} \end{equation*}`
:math:`dt` is the time interval.
This is implemented in the
`code <https://github.com/AtsushiSakai/PythonRobotics/blob/916b4382de090de29f54538b356cef1c811aacce/Localization/extended_kalman_filter/extended_kalman_filter.py#L53-L67>`__
Its Jacobian matrix is
:math:`\begin{equation*} J_F= \begin{bmatrix} \frac{dx}{dx}& \frac{dx}{dy} & \frac{dx}{d\phi} & \frac{dx}{dv}\\ \frac{dy}{dx}& \frac{dy}{dy} & \frac{dy}{d\phi} & \frac{dy}{dv}\\ \frac{d\phi}{dx}& \frac{d\phi}{dy} & \frac{d\phi}{d\phi} & \frac{d\phi}{dv}\\ \frac{dv}{dx}& \frac{dv}{dy} & \frac{dv}{d\phi} & \frac{dv}{dv}\\ \end{bmatrix} = \begin{bmatrix} 1 & 0 & -v \sin(\phi) dt & \cos(\phi) dt\\ 0 & 1 & v \cos(\phi) dt & \sin(\phi) dt\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ \end{bmatrix} \end{equation*}`
Observation Model
~~~~~~~~~~~~~~~~~
The robot can get x-y position information from GPS.
So the GPS observation model is
.. math:: \textbf{z}_{t} = H\textbf{x}_t
where
:math:`\begin{equation*} H= \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \end{bmatrix} \end{equation*}`
Its Jacobian matrix is
:math:`\begin{equation*} J_H= \begin{bmatrix} \frac{dx}{dx}& \frac{dx}{dy} & \frac{dx}{d\phi} & \frac{dx}{dv}\\ \frac{dy}{dx}& \frac{dy}{dy} & \frac{dy}{d\phi} & \frac{dy}{dv}\\ \end{bmatrix} = \begin{bmatrix} 1& 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \end{bmatrix} \end{equation*}`
Extended Kalman Filter
~~~~~~~~~~~~~~~~~~~~~~
The localization process using the Extended Kalman Filter (EKF) is:
=== Predict ===
:math:`x_{Pred} = Fx_t+Bu_t`
:math:`P_{Pred} = J_FP_t J_F^T + Q`
=== Update ===
:math:`z_{Pred} = Hx_{Pred}`
:math:`y = z - z_{Pred}`
:math:`S = J_H P_{Pred} J_H^T + R`
:math:`K = P_{Pred} J_H^T S^{-1}`
:math:`x_{t+1} = x_{Pred} + Ky`
:math:`P_{t+1} = ( I - K J_H) P_{Pred}`
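To tie the predict and update steps together, here is an illustrative simulation loop in the spirit of the file's main() function, reusing the hypothetical helpers sketched earlier on this page (`noisy_observation`, `motion_model`, `ekf_estimation`); the history arrays are stacked column-wise, matching the `np.hstack` change in the Python diff:

import numpy as np

DT = 0.1         # time tick [s]
SIM_TIME = 50.0  # simulation time [s]

xEst = np.zeros((4, 1))   # EKF estimate
xTrue = np.zeros((4, 1))  # ground truth
xDR = np.zeros((4, 1))    # dead reckoning (noisy input only)
PEst = np.eye(4)

hxEst, hxTrue, hxDR = xEst, xTrue, xDR
hz = np.zeros((2, 1))     # observation history, one 2x1 column per step

time = 0.0
while SIM_TIME >= time:
    time += DT
    u = np.array([[1.0], [0.1]])            # illustrative input [v, omega]^T
    xTrue = motion_model(xTrue, u)          # advance ground truth
    z, ud = noisy_observation(xTrue, u)     # noisy GPS + noisy input
    xDR = motion_model(xDR, ud)             # dead reckoning
    xEst, PEst = ekf_estimation(xEst, PEst, z, ud)

    # stack states and observations column-wise for later plotting
    hxEst = np.hstack((hxEst, xEst))
    hxDR = np.hstack((hxDR, xDR))
    hxTrue = np.hstack((hxTrue, xTrue))
    hz = np.hstack((hz, z))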
Ref:
~~~~
- `PROBABILISTIC-ROBOTICS.ORG <http://www.probabilistic-robotics.org/>`__