Enhance lqr steering control docs (#1015)

* enhance lqr steering control docs

* improve lqr steering control docs
This commit is contained in:
Atsushi Sakai
2024-05-25 23:31:54 +09:00
committed by GitHub
parent cfb736316a
commit 5751829bab
4 changed files with 111 additions and 7 deletions


@@ -55,7 +55,7 @@ def update(state, a, delta):
     return state
-def PIDControl(target, current):
+def pid_control(target, current):
     a = Kp * (target - current)
     return a
@@ -70,10 +70,11 @@ def solve_DARE(A, B, Q, R):
     solve a discrete time_Algebraic Riccati equation (DARE)
     """
     X = Q
-    maxiter = 150
+    Xn = Q
+    max_iter = 150
     eps = 0.01
-    for i in range(maxiter):
+    for i in range(max_iter):
         Xn = A.T @ X @ A - A.T @ X @ B @ \
             la.inv(R + B.T @ X @ B) @ B.T @ X @ A + Q
         if (abs(Xn - X)).max() < eps:
@@ -178,7 +179,7 @@ def closed_loop_prediction(cx, cy, cyaw, ck, speed_profile, goal):
         dl, target_ind, e, e_th = lqr_steering_control(
             state, cx, cy, cyaw, ck, e, e_th)
-        ai = PIDControl(speed_profile[target_ind], state.v)
+        ai = pid_control(speed_profile[target_ind], state.v)
         state = update(state, ai, dl)
         if abs(state.v) <= stop_speed:
@@ -202,8 +203,9 @@ def closed_loop_prediction(cx, cy, cyaw, ck, speed_profile, goal):
         if target_ind % 1 == 0 and show_animation:
             plt.cla()
             # for stopping simulation with the esc key.
-            plt.gcf().canvas.mpl_connect('key_release_event',
-                    lambda event: [exit(0) if event.key == 'escape' else None])
+            plt.gcf().canvas.mpl_connect(
+                'key_release_event',
+                lambda event: [exit(0) if event.key == 'escape' else None])
             plt.plot(cx, cy, "-r", label="course")
             plt.plot(x, y, "ob", label="trajectory")
             plt.plot(cx[target_ind], cy[target_ind], "xg", label="target")


@@ -8,7 +8,7 @@ This folder contains documentation for the Python Robotics project.
#### Install Sphinx and Theme
```
-pip install sphinx sphinx-autobuild sphinx-rtd-theme
+pip install sphinx sphinx-autobuild sphinx-rtd-theme sphinx_rtd_dark_mode sphinx_copybutton
```
#### Building the Docs


@@ -8,7 +8,109 @@ control.
.. image:: https://github.com/AtsushiSakai/PythonRoboticsGifs/raw/master/PathTracking/lqr_steer_control/animation.gif

Overview
~~~~~~~~

The LQR (Linear Quadratic Regulator) steering controller implemented in lqr_steer_control.py simulates an autonomous vehicle tracking a desired trajectory by adjusting the steering angle based on feedback from the current state and the reference path. It combines a PID speed controller with an LQR steering controller to achieve smooth, accurate trajectory tracking.
Vehicle Motion Model
~~~~~~~~~~~~~~~~~~~~

The figure below shows the geometric model of the vehicle used in this simulation:

.. image:: lqr_steering_control_model.png
   :width: 600px
:math:`e` and :math:`\theta` are the lateral error and the orientation error, respectively, with respect to the desired trajectory, and :math:`\dot{e}` and :math:`\dot{\theta}` are their rates of change.

:math:`e_t` and :math:`\theta_t` are the values of :math:`e` and :math:`\theta` at time :math:`t`, updated with the following kinematic equations:

.. math:: e_t = e_{t-1} + \dot{e}_{t-1} dt

.. math:: \theta_t = \theta_{t-1} + \dot{\theta}_{t-1} dt

where :math:`dt` is the time step.
The rate of change of :math:`e` is:

.. math:: \dot{e}_t = V \sin(\theta_{t-1})

where :math:`V` is the vehicle speed.

For a small :math:`\theta`, :math:`\sin(\theta)` can be approximated by :math:`\theta`:

.. math:: \sin(\theta) \approx \theta

so the rate of change of :math:`e` becomes:

.. math:: \dot{e}_t = V \theta_{t-1}
The rate of change of :math:`\theta` is:

.. math:: \dot{\theta}_t = \frac{V}{L} \tan(\delta)

where :math:`L` is the wheelbase of the vehicle and :math:`\delta` is the steering angle.

For a small :math:`\delta`, :math:`\tan(\delta)` can be approximated by :math:`\delta`:

.. math:: \tan(\delta) \approx \delta

so the rate of change of :math:`\theta` becomes:

.. math:: \dot{\theta}_t = \frac{V}{L} \delta

These equations are used to update the vehicle state at each time step.
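As an illustration, this update can be sketched as a single Euler integration step over the nonlinear error kinematics (a minimal sketch; the function name and the parameter values below are hypothetical, not taken from lqr_steer_control.py):

```python
import math

def update_errors(e, theta, v, delta, L, dt):
    """One Euler step of the lateral/heading error kinematics."""
    e_dot = v * math.sin(theta)          # lateral error rate
    theta_dot = v / L * math.tan(delta)  # heading error rate
    # Euler integration over one time step
    return e + e_dot * dt, theta + theta_dot * dt

# With zero heading error and zero steering, both errors stay unchanged
print(update_errors(0.5, 0.0, 10.0, 0.0, 2.9, 0.1))  # -> (0.5, 0.0)
```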
To formulate a linear state-space representation of the vehicle dynamics, the state vector :math:`x` and the control input :math:`u` are defined as follows:

.. math:: x_t = [e_t, \dot{e}_t, \theta_t, \dot{\theta}_t]^T

.. math:: u_t = \delta_t

The state transition equation can be represented as:

.. math:: x_{t+1} = A x_t + B u_t

where:

.. math:: A = \begin{bmatrix} 1 & dt & 0 & 0\\ 0 & 0 & V & 0\\ 0 & 0 & 1 & dt\\ 0 & 0 & 0 & 0 \end{bmatrix}

.. math:: B = \begin{bmatrix} 0\\ 0\\ 0\\ \frac{V}{L} \end{bmatrix}
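For reference, these matrices can be constructed with NumPy as follows (a sketch; `make_model` is an illustrative name, not a function in lqr_steer_control.py):

```python
import numpy as np

def make_model(v, L, dt):
    """Discrete linear error model x_{t+1} = A x_t + B u_t."""
    A = np.array([[1.0, dt,  0.0, 0.0],
                  [0.0, 0.0, v,   0.0],
                  [0.0, 0.0, 1.0, dt],
                  [0.0, 0.0, 0.0, 0.0]])
    B = np.array([[0.0],
                  [0.0],
                  [0.0],
                  [v / L]])
    return A, B
```

Note that :math:`A` depends on the current speed :math:`V`, so the model is re-linearized at every time step.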
LQR controller
~~~~~~~~~~~~~~

The Linear Quadratic Regulator (LQR) computes the optimal control input :math:`u` that minimizes the quadratic cost function

.. math:: J = \sum_{t=0}^{N} (x_t^T Q x_t + u_t^T R u_t)

for the linear model

.. math:: x_{t+1} = A x_t + B u_t

where :math:`Q` and :math:`R` are the weighting matrices for the state and the control input, respectively.

The optimal control input is the state feedback

.. math:: u_t = -K x_t

where :math:`K` is the feedback gain matrix obtained by solving the discrete-time algebraic Riccati equation.
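The gain computation can be sketched as follows, mirroring the iterative DARE solver used in lqr_steer_control.py (the model parameters and weight matrices below are illustrative values, not the script's defaults):

```python
import numpy as np

def solve_dare(A, B, Q, R, max_iter=150, eps=0.01):
    """Iteratively solve the discrete-time algebraic Riccati equation."""
    X = Xn = Q
    for _ in range(max_iter):
        Xn = A.T @ X @ A - A.T @ X @ B @ \
            np.linalg.inv(R + B.T @ X @ B) @ B.T @ X @ A + Q
        if np.abs(Xn - X).max() < eps:
            break
        X = Xn
    return Xn

def lqr_gain(A, B, Q, R):
    """Feedback gain K for the state feedback u = -K x."""
    X = solve_dare(A, B, Q, R)
    return np.linalg.inv(B.T @ X @ B + R) @ (B.T @ X @ A)

# Illustrative model: V = 5.0 m/s, L = 2.9 m, dt = 0.1 s
dt, v, L = 0.1, 5.0, 2.9
A = np.array([[1.0, dt, 0.0, 0.0],
              [0.0, 0.0, v, 0.0],
              [0.0, 0.0, 1.0, dt],
              [0.0, 0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [0.0], [v / L]])
K = lqr_gain(A, B, np.eye(4), np.eye(1))  # K has shape (1, 4)
```

A production implementation could instead use `scipy.linalg.solve_discrete_are`, which solves the same equation directly.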
References
~~~~~~~~~~

- `ApolloAuto/apollo: An open autonomous driving platform <https://github.com/ApolloAuto/apollo>`_
- `Linear Quadratic Regulator (LQR) <https://en.wikipedia.org/wiki/Linear%E2%80%93quadratic_regulator>`_

Binary file not shown.