Apollo's MPC algorithm previously used the qpOASES solver and has since switched to OSQP. The OSQP developers ran their own benchmark comparing solver performance, and the results show that OSQP runs more efficiently than qpOASES: OSQP Benchmark. Our own tests confirm this: for the same real-time MPC optimization problem, OSQP does solve considerably faster than qpOASES. Note that qpOASES formulates the quadratic program differently from OSQP; for how qpOASES constructs its QP, see: qpOASES QP Formulation
For Apollo's road-frame linearization of the vehicle dynamics and its discretization, see: Linearized Lateral Error Dynamics. Assume that at sampling instant $k$ we have obtained the following linear state equation for the vehicle error dynamics:

$$x_{k+1} = A(k)x_{k} + B(k)u_{k} + C(k)$$

The lower and upper bounds on the states and inputs, $x_{min}$, $x_{max}$, $u_{min}$, $u_{max}$, are given. $Q(k)$ and $R(k)$ are the state and input weighting matrices at time $k$, and $A_{k}$, $B_{k}$, $C_{k}$ are respectively the system, input, and disturbance matrices of the linearized error dynamics at time $k$.
At time $k$, these quantities are passed in to initialize the OSQP solver:
```cpp
MpcOsqp::MpcOsqp(
    const Eigen::MatrixXd &matrix_a, const Eigen::MatrixXd &matrix_b,
    const Eigen::MatrixXd &matrix_q, const Eigen::MatrixXd &matrix_r,
    const Eigen::MatrixXd &matrix_initial_x,
    const Eigen::MatrixXd &matrix_u_lower, const Eigen::MatrixXd &matrix_u_upper,
    const Eigen::MatrixXd &matrix_x_lower, const Eigen::MatrixXd &matrix_x_upper,
    const Eigen::MatrixXd &matrix_x_ref, const int max_iter, const int horizon,
    const double eps_abs)
    : matrix_a_(matrix_a),
      matrix_b_(matrix_b),
      matrix_q_(matrix_q),
      matrix_r_(matrix_r),
      matrix_initial_x_(matrix_initial_x),
      matrix_u_lower_(matrix_u_lower),
      matrix_u_upper_(matrix_u_upper),
      matrix_x_lower_(matrix_x_lower),
      matrix_x_upper_(matrix_x_upper),
      matrix_x_ref_(matrix_x_ref),
      max_iteration_(max_iter),
      horizon_(horizon),
      eps_abs_(eps_abs) {
  state_dim_ = matrix_b.rows();
  control_dim_ = matrix_b.cols();
  ADEBUG << "state_dim" << state_dim_;
  ADEBUG << "control_dim_" << control_dim_;
  num_param_ = state_dim_ * (horizon_ + 1) + control_dim_ * horizon_;
}
```

The parameters map to the math symbols as follows:
| Parameter | Code variable |
|---|---|
| $A_{k}$ | matrix_a |
| $B_{k}$ | matrix_b |
| $Q_{k}$ | matrix_q |
| $R_{k}$ | matrix_r |
| $x_{k}$ | matrix_initial_x |
| $x_{max}$ | matrix_x_upper |
| $x_{min}$ | matrix_x_lower |
| $u_{max}$ | matrix_u_upper |
| $u_{min}$ | matrix_u_lower |

Apollo MPC minimizes the following objective:

$$
\begin{aligned}
u_{0}^{*} = \min_{x_{k}, u_{k}} \quad & \sum_{k=0}^{N} (x_k - x_r)^T Q (x_k - x_r) + \sum_{k=0}^{N-1} u_k^T R u_k \\
\text{s.t.} \quad & x_{k+1} = A x_k + B u_k \\
& x_{min} \le x_k \le x_{max} \\
& u_{min} \le u_k \le u_{max} \\
& x_{0} = \bar{x}
\end{aligned}
$$

where $N$ is the prediction horizon.
The standard form of an OSQP optimization problem is:

$$\min_{x} \quad \frac{1}{2}x^{T}Px + q^{T}x \qquad \text{subject to} \quad l \leq A_{c}x \leq u$$

This form contains only inequality constraints, no equality constraints. An equality constraint $A_{eq}x = b_{eq}$ is therefore encoded as a two-sided inequality with identical bounds:

$$b_{eq} \leq A_{eq}x \leq b_{eq}$$

For Apollo's model predictive control problem, the equality constraints come from the system's state equation $x_{k+1} = A(k)x_{k} + B(k)u_{k} + C(k)$. Note that the currently open-sourced code drops the disturbance term $C_{k}$ from the equality constraints and only uses $x_{k+1} = A_{k}x_{k} + B_{k}u_{k}$. To fit this solver interface, the MPC problem must be recast: we need to derive the Hessian matrix $P$, gradient vector $q$, constraint matrix $A_{c}$, and constraint vectors $l$, $u$ corresponding to the optimization problem stated above. OSQP stores matrix data in CSC (compressed sparse column) format; two reference posts: CSC Format1, CSC Format2. For the matrix $P$ below, five quantities are needed to describe it:
- numRows (int): 3 — number of rows
- numCols (int): 3 — number of columns
- colPtrs (Array{int}): (0, 2, 3, 6) — the index at which each new column starts; this array has length numCols + 1, its first element is always 0, and each subsequent element is the cumulative count of non-zeros up to and including that column
- rowIndices (Array{int}): (0, 2, 1, 0, 1, 2) — the row index of each stored entry
- values (Array{double}): (1.0, 2.0, 3.0, 4.0, 5.0, 6.0) — the non-zero entries in column-major order

$$P = \begin{bmatrix} 1 & 0 & 4 \\ 0 & 3 & 5 \\ 2 & 0 & 6 \end{bmatrix}$$

To avoid mixing up variables, let us restate what each symbol means. The $x(k)$ in the vehicle's linearized error dynamics above contains six state variables covering both lateral and longitudinal errors, where $k$ is the current sampling instant:

$$x(k) = \begin{bmatrix} e_{lat}(k) \\ \dot{e}_{lat}(k) \\ e_{heading}(k) \\ \dot{e}_{heading}(k) \\ e_{station}(k) \\ e_{speed}(k) \end{bmatrix}$$

The input $u(k)$ contains the steering angle $\delta$ and the longitudinal acceleration $a$:

$$u(k) = \begin{bmatrix} \delta(k) \\ a(k) \end{bmatrix}$$

In the OSQP solver, $x$ is the decision variable of the QP, stacking the current state, the current input, and the predicted states and inputs:

$$x = \begin{bmatrix} x(k) \\ x(k+1) \\ \vdots \\ x(k+N) \\ u(k) \\ u(k+1) \\ \vdots \\ u(k+N-1) \end{bmatrix}$$

where $N$ is the prediction horizon. The Hessian matrix $P$ is block diagonal:

$$P = \text{diag}(Q, Q, \ldots, Q, R, \ldots, R)$$
```cpp
void MpcOsqp::CalculateKernel(std::vector<c_float> *P_data,
                              std::vector<c_int> *P_indices,
                              std::vector<c_int> *P_indptr) {
  // col1:(row,val),...; col2:(row,val),...; ...
  std::vector<std::vector<std::pair<c_int, c_float>>> columns;
  columns.resize(num_param_);
  int value_index = 0;
  // state and terminal state
  for (size_t i = 0; i <= horizon_; ++i) {
    for (size_t j = 0; j < state_dim_; ++j) {
      // (row, val)
      columns[i * state_dim_ + j].emplace_back(i * state_dim_ + j,
                                               matrix_q_(j, j));
      ++value_index;
    }
  }
  // control
  const size_t state_total_dim = state_dim_ * (horizon_ + 1);
  for (size_t i = 0; i < horizon_; ++i) {
    for (size_t j = 0; j < control_dim_; ++j) {
      // (row, val)
      columns[i * control_dim_ + j + state_total_dim].emplace_back(
          state_total_dim + i * control_dim_ + j, matrix_r_(j, j));
      ++value_index;
    }
  }
  CHECK_EQ(value_index, num_param_);

  int ind_p = 0;
  for (size_t i = 0; i < num_param_; ++i) {
    // TODO(SHU) Check this
    P_indptr->emplace_back(ind_p);
    for (const auto &row_data_pair : columns[i]) {
      P_data->emplace_back(row_data_pair.second);    // val
      P_indices->emplace_back(row_data_pair.first);  // row
      ++ind_p;
    }
  }
  P_indptr->emplace_back(ind_p);
}
```

The gradient vector is constructed as follows:

$$q = \begin{bmatrix} -Q x_r \\ -Q x_r \\ \vdots \\ -Q x_r \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$
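It is worth spelling out why this $P$ and $q$ reproduce the tracking objective even though no factor of 2 appears anywhere. Expanding one stage cost in the OSQP form (with symmetric $Q$):

$$\frac{1}{2}x_k^T Q x_k + (-Q x_r)^T x_k = \frac{1}{2}(x_k - x_r)^T Q (x_k - x_r) - \frac{1}{2}x_r^T Q x_r$$

The last term is constant in $x_k$, so minimizing the OSQP objective with $P = \text{diag}(Q, \ldots, Q, R, \ldots, R)$ and $q$ as above minimizes the original tracking cost scaled by $\frac{1}{2}$, which does not change the minimizer. The same argument applies to the input blocks if an input reference is introduced later.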
```cpp
// reference is always zero
void MpcOsqp::CalculateGradient() {
  // populate the gradient vector
  gradient_ = Eigen::VectorXd::Zero(
      state_dim_ * (horizon_ + 1) + control_dim_ * horizon_, 1);
  for (size_t i = 0; i < horizon_ + 1; i++) {
    gradient_.block(i * state_dim_, 0, state_dim_, 1) =
        -1.0 * matrix_q_ * matrix_x_ref_;
  }
  ADEBUG << "Gradient_mat";
  ADEBUG << gradient_;
}
```

The constraint matrix $A_c$ is built from two stacked parts: the top rows encode the equality (dynamics) constraints and the bottom rows the inequality (bound) constraints. Each part in turn splits left/right: the left columns act on the state portion $x(k)$ of the decision variable, the right columns on the input portion $u(k)$.

$$A_c = \left[ \begin{array}{ccccc|cccc} -I & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0\\ A & -I & 0 & \cdots & 0 & B & 0 & \cdots & 0 \\ 0 & A & -I & \cdots & 0 & 0 & B & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & -I & 0 & 0 & \cdots & B\\ \hline I & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0\\ 0 & I & 0 & \cdots & 0 & 0 & 0 & \cdots & 0\\ 0 & 0 & I & \cdots & 0 & 0 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I & 0 & 0 & \cdots & 0\\ 0 & 0 & 0 & \cdots & 0 & I & 0 & \cdots & 0\\ 0 & 0 & 0 & \cdots & 0 & 0 & I & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 & 0 & \cdots & I \end{array} \right]$$
```cpp
// equality constraints x(k+1) = A * x(k) + B * u(k)
void MpcOsqp::CalculateEqualityConstraint(std::vector<c_float> *A_data,
                                          std::vector<c_int> *A_indices,
                                          std::vector<c_int> *A_indptr) {
  constexpr double kEpsilon = 1e-6;
  // block matrix: equality rows + state bound rows + input bound rows
  Eigen::MatrixXd matrix_constraint = Eigen::MatrixXd::Zero(
      state_dim_ * (horizon_ + 1) + state_dim_ * (horizon_ + 1) +
          control_dim_ * horizon_,
      state_dim_ * (horizon_ + 1) + control_dim_ * horizon_);
  // -I on the diagonal of the equality block
  Eigen::MatrixXd state_identity_mat = Eigen::MatrixXd::Identity(
      state_dim_ * (horizon_ + 1), state_dim_ * (horizon_ + 1));
  matrix_constraint.block(0, 0, state_dim_ * (horizon_ + 1),
                          state_dim_ * (horizon_ + 1)) =
      -1 * state_identity_mat;
  // A blocks on the sub-diagonal
  for (size_t i = 0; i < horizon_; i++) {
    matrix_constraint.block((i + 1) * state_dim_, i * state_dim_, state_dim_,
                            state_dim_) = matrix_a_;
  }
  // B blocks in the input columns
  for (size_t i = 0; i < horizon_; i++) {
    matrix_constraint.block((i + 1) * state_dim_,
                            i * control_dim_ + (horizon_ + 1) * state_dim_,
                            state_dim_, control_dim_) = matrix_b_;
  }
  // identity for the bound (inequality) rows
  Eigen::MatrixXd all_identity_mat =
      Eigen::MatrixXd::Identity(num_param_, num_param_);
  matrix_constraint.block(state_dim_ * (horizon_ + 1), 0, num_param_,
                          num_param_) = all_identity_mat;

  // convert the dense constraint matrix to CSC column lists
  std::vector<std::vector<std::pair<c_int, c_float>>> columns;
  columns.resize(num_param_ + 1);
  int value_index = 0;
  for (size_t i = 0; i < num_param_; ++i) {                                // col
    for (size_t j = 0; j < num_param_ + state_dim_ * (horizon_ + 1); ++j) {  // row
      if (std::fabs(matrix_constraint(j, i)) > kEpsilon) {
        // (row, val)
        columns[i].emplace_back(j, matrix_constraint(j, i));
        ++value_index;
      }
    }
  }
  ADEBUG << "value_index" << value_index;

  int ind_A = 0;
  for (size_t i = 0; i < num_param_; ++i) {
    A_indptr->emplace_back(ind_A);
    for (const auto &row_data_pair : columns[i]) {
      A_data->emplace_back(row_data_pair.second);    // value
      A_indices->emplace_back(row_data_pair.first);  // row
      ++ind_A;
    }
  }
  A_indptr->emplace_back(ind_A);
}
```

The constraint vectors take the following form:

$$l = \begin{bmatrix} -x_0 \\ 0 \\ \vdots \\ 0 \\ x_{min} \\ \vdots \\ x_{min} \\ u_{min} \\ \vdots \\ u_{min} \end{bmatrix} \quad u = \begin{bmatrix} -x_0 \\ 0 \\ \vdots \\ 0 \\ x_{max} \\ \vdots \\ x_{max} \\ u_{max} \\ \vdots \\ u_{max} \end{bmatrix}$$
```cpp
void MpcOsqp::CalculateConstraintVectors() {
  // evaluate the lower and the upper inequality vectors
  Eigen::VectorXd lowerInequality = Eigen::MatrixXd::Zero(
      state_dim_ * (horizon_ + 1) + control_dim_ * horizon_, 1);
  Eigen::VectorXd upperInequality = Eigen::MatrixXd::Zero(
      state_dim_ * (horizon_ + 1) + control_dim_ * horizon_, 1);
  for (size_t i = 0; i < horizon_; i++) {
    lowerInequality.block(control_dim_ * i + state_dim_ * (horizon_ + 1), 0,
                          control_dim_, 1) = matrix_u_lower_;
    upperInequality.block(control_dim_ * i + state_dim_ * (horizon_ + 1), 0,
                          control_dim_, 1) = matrix_u_upper_;
  }
  for (size_t i = 0; i < horizon_ + 1; i++) {
    lowerInequality.block(state_dim_ * i, 0, state_dim_, 1) = matrix_x_lower_;
    upperInequality.block(state_dim_ * i, 0, state_dim_, 1) = matrix_x_upper_;
  }

  // evaluate the lower and the upper equality vectors; only the first block
  // is -x0, the remaining dynamics rows are zero, and l equals u here
  Eigen::VectorXd lowerEquality =
      Eigen::MatrixXd::Zero(state_dim_ * (horizon_ + 1), 1);
  lowerEquality.block(0, 0, state_dim_, 1) = -1 * matrix_initial_x_;
  Eigen::VectorXd upperEquality = lowerEquality;

  // merge equality and inequality vectors
  lowerBound_ = Eigen::MatrixXd::Zero(
      2 * state_dim_ * (horizon_ + 1) + control_dim_ * horizon_, 1);
  lowerBound_ << lowerEquality, lowerInequality;
  upperBound_ = Eigen::MatrixXd::Zero(
      2 * state_dim_ * (horizon_ + 1) + control_dim_ * horizon_, 1);
  upperBound_ << upperEquality, upperInequality;
}
```

The solver's main entry point is as follows:
```cpp
bool MpcOsqp::Solve(std::vector<double> *control_cmd) {
  CalculateGradient();
  CalculateConstraintVectors();
  OSQPData *data = Data();
  OSQPSettings *settings = Settings();
  OSQPWorkspace *osqp_workspace = osqp_setup(data, settings);
  osqp_solve(osqp_workspace);
  auto status = osqp_workspace->info->status_val;
  ADEBUG << "status:" << status;
  // check status: accept only solved (1) and solved-inaccurate (2)
  if (status < 0 || (status != 1 && status != 2)) {
    AERROR << "failed optimization status:\t" << osqp_workspace->info->status;
    osqp_cleanup(osqp_workspace);
    FreeData(data);
    c_free(settings);
    return false;
  } else if (osqp_workspace->solution == nullptr) {
    AERROR << "The solution from OSQP is nullptr";
    osqp_cleanup(osqp_workspace);
    FreeData(data);
    c_free(settings);
    return false;
  }
  // the first control input sits right after the (horizon_ + 1) state blocks
  // in the stacked decision variable
  size_t first_control = state_dim_ * (horizon_ + 1);
  for (size_t i = 0; i < control_dim_; ++i) {
    control_cmd->at(i) = osqp_workspace->solution->x[i + first_control];
    ADEBUG << "control_cmd:" << i << ":" << control_cmd->at(i);
  }
  // Cleanup
  osqp_cleanup(osqp_workspace);
  FreeData(data);
  c_free(settings);
  return true;
}
```

Given the QP construction above, I think the following improvements could be made:
If we change the objective function to also track an input reference $u_r$:

$$
\begin{aligned}
u_{0}^{*} = \min_{x_{k}, u_{k}} \quad & \sum_{k=0}^{N} (x_k - x_r)^T Q (x_k - x_r) + \sum_{k=0}^{N-1} (u_k - u_r)^T R (u_k - u_r) \\
\text{s.t.} \quad & x_{k+1} = A x_k + B u_k \\
& x_{min} \le x_k \le x_{max} \\
& u_{min} \le u_k \le u_{max} \\
& x_{0} = \bar{x}
\end{aligned}
$$

then the gradient vector must change to

$$q = \begin{bmatrix} -Q x_r \\ -Q x_r \\ \vdots \\ -Q x_r \\ -R u_r \\ \vdots \\ -R u_r \end{bmatrix}$$

The disturbance matrix $C$ should also be accounted for, which requires modifying the constraint vectors:

$$l = \begin{bmatrix} -x_0 \\ -C \\ \vdots \\ -C \\ x_{min} \\ \vdots \\ x_{min} \\ u_{min} \\ \vdots \\ u_{min} \end{bmatrix} \quad u = \begin{bmatrix} -x_0 \\ -C \\ \vdots \\ -C \\ x_{max} \\ \vdots \\ x_{max} \\ u_{max} \\ \vdots \\ u_{max} \end{bmatrix}$$