
Advanced Robotics: Stochastic Model Predictive Control (SMPC) - A Hands-On Approach, Part 1

Updated: Sep 25

It’s been a while since my last post on Polynomial Chaos-based Model Predictive Control. For readers who are new to the topic, I recommend starting with some hands-on coding to build intuition before diving into the more intricate theory of SMPC. If you haven’t checked out that post yet, I encourage you to give it a read! Part 2 is taking longer than expected due to other commitments, but it’s in progress and will be out soon.


“Boston Dynamics’ Atlas is a prime example of where Stochastic Model Predictive Control (SMPC) could play a crucial role. A humanoid robot like Atlas faces constant uncertainty: sensor noise, actuator delays, ground contact variability, and unexpected external disturbances.”

Before starting, it is important that you are clear on the ideas of Optimal Control Theory and Mathematical Optimisation. My go-to lecture playlist is Optimal Control by Radhakant Padhi (IISc), which covers the prerequisites for this tutorial.


To get an understanding of the applications of optimisation in robotics, you can also follow Zachary Manchester's (CMU professor) optimal control playlist on YouTube.



Follow the link for the playlist on Optimal Control by Radhakant Padhi

The Big Picture: What Are We Solving?


Let's start with the classic Model Predictive Control (MPC) problem. In MPC, we compute the optimal control by minimising a certain quantity (maybe speed, fuel, or energy) while ensuring certain constraints are not violated (for example, acceleration should not exceed a certain limit) over a range of future time steps, and then applying only the first value of the resulting sequence as the control input.


MPC Philosophy - Let's suppose you've already made some control decisions in the past. To determine the next control action, you simulate the system's behavior over the next N time steps (the prediction horizon), predicting how it will evolve. During this process, you optimise a sequence of future control actions (the control horizon) while satisfying the constraints imposed on the system (e.g., a maximum acceleration input). From this optimised sequence, you apply only the first control input as the decision for the current time step. Then, at the next time step, you repeat the entire process. Schwenzer, M., Ay, M., Bergs, T. et al. Review on model predictive control: an engineering perspective. Int J Adv Manuf Technol 117, 1327–1349 (2021).

Imagine you're a self-driving car engineer. Your job is to make the car follow a perfect trajectory while:

  • Minimising energy use (don't burn unnecessary fuel)

  • Never crossing the lane lines (safety first!)


Without uncertainty (deterministic MPC):

Goal: Follow a path while minimising energy use
Constraint: "NEVER cross the lane line" 
Problem: Too strict! Noise makes this impossible

But here's the problem: real life has noise! Wind, sensor errors, road imperfections... the perfectly modelled car gets pushed around. If you enforce "NEVER cross the lane line," the optimisation often becomes infeasible: no solution exists that guarantees perfection in a noisy world.


You must bring uncertainty into the picture somehow. Typically, we model the system with noise and introduce probabilistic constraints. In particular, we use a type of probabilistic constraint called a chance constraint, which requires that a constraint be satisfied with at least a certain probability (confidence level). We can restate the goal above as follows.


Note that we will deal with chance constraints involving linear relations only.

With uncertainty (chance-constrained SMPC):

Goal: Follow a path while minimising energy use
Constraint: "Stay in lane 95% of the time"
Solution: Practical! Allows small risk for feasibility

In math terms: instead of a hard inequality constraint Cx ≤ b (always true), we want

P(Cx ≤ b) ≥ 0.95 (true with 95% probability).


It simply means....

I don't need to be perfect every single time. I just need to be really good most of the time. A 5% chance of a small lane violation is acceptable.

Sounds philosophical as well lol!


To begin with, we will implement the most basic SMPC code in C++ using Eigen and Matplot++ in this blog. In later parts of the series, we will discuss more advanced approaches, like tube-based or scenario-based SMPC. Since we are going with the most basic approach, we will heavily simplify the problem by making a few assumptions.



Chance Constrained SMPC: Assumptions



Chance Constrained SMPC: Main Code


Before we begin coding, we must create the skeleton for the project. We will use the standard CMake build approach to create our project structure.


Linux is recommended for aerospace and robotics applications. I have implemented this on Linux, but I assume it will also work on Windows. Let me know if you face any problems.


Make sure you create this directory tree. It will be helpful later to add more approaches for SMPC.

Working Directory Tree
|--/SMPC
|  |--/include
|  |  |--/ChanceConstrainedSMPC
|  |  |  |--ChanceConstrainedSMPC.hpp
|  |--/src
|  |  |--/ChanceConstrainedSMPC
|  |  |  |--ChanceConstrainedSMPC.cpp
|  |  |--main.cpp
|  |--CMakeLists.txt

Note: names with a file extension (.hpp, .cpp, .txt) are files; everything else is a directory.


Let us create the CMakeLists.txt


This setup ensures that your project is compiled by locating the necessary .cpp and header files throughout the directory tree, along with the Eigen library. It also automatically downloads the third-party library Matplot++ during the build process.


We won’t go into the details of how CMakeLists.txt works, as it’s beyond the scope of this discussion. For now, you can think of it as a configuration file that defines a "recipe" (i.e., a Makefile) for building your project. Once this recipe is generated, it's used to compile and link your code into an executable.
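The post's CMakeLists.txt appears as an image; a minimal sketch that matches the described behaviour (system-wide Eigen, Matplot++ fetched at configure time, binary placed in bin/) could look like the following. The exact version numbers and options are assumptions:

```cmake
cmake_minimum_required(VERSION 3.14)
project(SMPC CXX)

set(CMAKE_CXX_STANDARD 17)
# Put the executable in SMPC/bin, matching the run instructions later on
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_SOURCE_DIR}/bin)

# Eigen is expected to be installed system-wide
find_package(Eigen3 REQUIRED)

# Download Matplot++ automatically during configuration
include(FetchContent)
FetchContent_Declare(matplotplusplus
    GIT_REPOSITORY https://github.com/alandefreitas/matplotplusplus.git
    GIT_TAG origin/master)
FetchContent_MakeAvailable(matplotplusplus)

add_executable(cc_smpc
    src/main.cpp
    src/ChanceConstrainedSMPC/ChanceConstrainedSMPC.cpp)
target_include_directories(cc_smpc PRIVATE include)
target_link_libraries(cc_smpc PRIVATE Eigen3::Eigen matplot)
```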

SMPC Header File


Now let us come to the header file definition of Chance-Constrained SMPC. Here we create ChanceConstrainedSMPC.hpp.

Here we define a structure and a class.


The structure ChanceConstraint initialises the probabilistic constraint given by P(Cx ≤ b) ≥ alpha. Inside the structure, we hold three values: C, the constraint matrix; b, the upper bound; and alpha, the confidence level (the constraint must be satisfied with probability at least alpha).


SMPC Functions Definition


Then we come to the ChanceConstrainedSMPC class. Let us go through the functions one by one.


The constructor of the ChanceConstrainedSMPC class sets up the SMPC system by taking in the system dynamics (A, B), the cost matrices (Q, R), the prediction horizon, the process noise variance, and a list of chance constraints.


Here, A and B denote the matrices of our assumed discrete-time linear time-invariant system.
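Concretely, the model is the standard discrete-time LTI system with additive Gaussian process noise (an isotropic noise covariance is assumed here, matching the single noise-variance parameter of the constructor):

```latex
x_{k+1} = A\,x_k + B\,u_k + w_k, \qquad w_k \sim \mathcal{N}\!\left(0,\ \sigma_w^2 I\right)
```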



The following is the code for sampling the process noise

For the cost function, we use the form given below, which minimises control effort and deviation from the reference over N time steps. The definitions of Q and R are given as well.
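The post's cost function is shown as an image; the standard quadratic tracking cost it refers to has this form (any terminal cost term is omitted here):

```latex
J = \sum_{k=0}^{N-1} \left[ \left(x_k - x_k^{\mathrm{ref}}\right)^{\!\top} Q \left(x_k - x_k^{\mathrm{ref}}\right) + u_k^{\top} R\, u_k \right]
```

Here Q ⪰ 0 weights deviation from the reference state and R ≻ 0 penalises control effort; the specific weight values used in the post's code are not reproduced here.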



Here is the class constructor definition

The function extracts the number of states and control inputs from the dimensions of A and B, then checks that all matrix dimensions are valid and consistent. It also verifies that each chance constraint has a proper confidence level (alpha in (0,1)) and matching dimensions between C and b. A Gaussian noise generator is initialised based on the given noise variance. Finally, the constructor prints key details like state and input dimensions, prediction horizon, noise variance, and the number of chance constraints.


Defining Prediction and Control matrix



The code for generating the two matrices is given below. We call phi the prediction matrix and gamma the control matrix.



It is very important to generate these matrices so that we can write our cost function as a single quadratic expression and directly apply a QP solver to it at a later stage.
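For reference, stacking the predicted states X = [x₁; …; x_N] and inputs U = [u₀; …; u_{N−1}] gives the standard batch prediction form (the noise is dropped here, since the nominal prediction propagates only the mean):

```latex
X = \Phi\, x_0 + \Gamma\, U, \qquad
\Phi = \begin{bmatrix} A \\ A^2 \\ \vdots \\ A^N \end{bmatrix}, \qquad
\Gamma = \begin{bmatrix}
B & 0 & \cdots & 0 \\
AB & B & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
A^{N-1}B & A^{N-2}B & \cdots & B
\end{bmatrix}
```

Substituting X into the quadratic cost then yields a single expression of the form ½ Uᵀ H U + fᵀ U + const, which is exactly what a QP solver expects.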



Constraint Satisfaction Check

checkConstraintSatisfaction() takes the current state of the system and tests it against every safety rule (the chance constraints) to see if anything is out of bounds. As it checks each rule, it notes the largest amount by which the state crosses a limit and reports two things: a simple yes or no on whether all the rules are met, and the worst overshoot if they are not. This gives the controller an instant verdict on whether the system is still operating safely and, if not, how serious the breach is. If you look at the code, it simply calculates Cx − b to determine whether Cx ≤ b is violated. To get the maximum violation, we compute max(Cx − b, 0), which measures how far past the boundary Cx = b we are whenever Cx > b.

simulateStep()  advances the discrete-time linear system by generating one realisation of the process noise via sampleProcessNoise() , then computing the next state. This is a single-sample Monte Carlo rollout (not an expectation), so the returned nextState is a stochastic draw from the conditional distribution. Notice that we also call checkConstraintSatisfaction() and store violation history for final analysis.


Reformulating Chance Constraints in SMPC
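For a Gaussian state x ~ N(x̄, Σ) and a single linear constraint row, the standard reformulation turns the chance constraint into a deterministic, tightened constraint on the mean (Φ⁻¹ here denotes the standard normal quantile function, not the prediction matrix from earlier):

```latex
\Pr\!\left(c^{\top} x \le b\right) \ge \alpha
\;\Longleftrightarrow\;
c^{\top}\bar{x} \;\le\; b - \Phi^{-1}(\alpha)\,\sqrt{c^{\top} \Sigma\, c}
```

For example, with alpha = 0.95 we have Φ⁻¹(0.95) ≈ 1.645, so the bound b is shrunk by 1.645 standard deviations of cᵀx. Note this is the textbook result for individual chance constraints under Gaussian uncertainty, which is exactly the setting our assumptions allow.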


Solver: Projected-Gradient QP


Here is the code for the algorithm given above.


Computing the MPC Control with Chance Constraints

main.cpp code

The Problem: Guiding a Robot on a Circular Path


Our mission is to make a robot follow a circular path with a radius of 0.8 units, centered at the origin, completing a full circle every 4 seconds. The robot’s motion is governed by a 2D kinematic model, with a state vector capturing its position and velocity in the x and y directions, controlled by accelerations.


However, random disturbances (modelled as Gaussian noise) can push the robot off course, so we need to ensure it stays within a safe polygonal region (distance from origin ≤ 0.9 units) with 95% probability.


The challenge is:

  • Tracking: The robot should hug the circular path as closely as possible, minimising deviations in position and velocity.

  • Safety: It must stay within a safe circular boundary of radius 0.9 around the origin, approximated by a polygon, with high probability (95% confidence) to account for noise.



2D Kinematic Model & Discretisation
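For the state x = [p_x, p_y, v_x, v_y]ᵀ driven by accelerations u = [a_x, a_y]ᵀ, the exact zero-order-hold discretisation of the 2D double integrator with sample time Δt is the standard:

```latex
A = \begin{bmatrix}
1 & 0 & \Delta t & 0 \\
0 & 1 & 0 & \Delta t \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}, \qquad
B = \begin{bmatrix}
\tfrac{\Delta t^2}{2} & 0 \\
0 & \tfrac{\Delta t^2}{2} \\
\Delta t & 0 \\
0 & \Delta t
\end{bmatrix}
```

The specific Δt used in the post's code is not reproduced here; any small sample time yields the same structure.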

Hence the final main.cpp code is


The rest of the code is dedicated to visualization using Matplot++. Thanks to ChatGPT, I was able to generate clean plotting routines that bring the simulation results to life. I’ve intentionally skipped the details of Matplot++ here, since plotting is not the focus of this tutorial. Instead, the emphasis has been on building intuition for Chance-Constrained SMPC.

Build Process

From the SMPC/ directory:

mkdir build && cd build
cmake ..
make

The executable will appear at SMPC/bin/cc_smpc.

Run the simulation:

cd ..
./bin/cc_smpc

This will generate plots and print statistics to the console.



The robot successfully tracks the circular reference path while staying inside the probabilistic safety boundary. Small deviations occur due to process noise, but constraints enforce that the system stays inside the safe region most of the time.


Individually, it is able to track both x and y position

Distance from origin over time

Applied Control Signal over time

Constraint Violation Statistics
  • Violation Rate: Only ~1.75% of steps breached constraints, within tolerance for a 95% confidence requirement.

  • Average Violation: Small (≈ 0.01), meaning even when violated, the robot barely crossed the boundary.

  • Final Distance: 0.7083, reasonably close to the target radius of 0.8.


This confirms that the chance-constrained SMPC maintained both tracking performance and probabilistic safety, even with noise in the dynamics.


Limitations of the approach

Since this is the first and most basic tutorial, we have heavily simplified the problem for better understanding. That being said, there are a lot of limitations associated with this approach.


  • Gaussian Assumption: Requires normally distributed noise

  • Linear Dynamics: No nonlinear system support

  • Individual Constraints: No joint probability corrections

  • Basic Solver: Production systems need OSQP/qpOASES

  • No Model Mismatch: Assumes perfect knowledge of the dynamics

This implementation provides analytical tractability for linear-Gaussian systems but requires extensions for nonlinear dynamics, non-Gaussian noise, or joint chance constraints.


Conclusion


We started from the basics of discretising a simple kinematic model, built up the cost function in quadratic form, and integrated probabilistic safety constraints into the controller. While the implementation here is intentionally simplified, it already demonstrates how uncertainty can be handled systematically in optimal control. In future posts, we’ll extend this foundation toward more advanced formulations like tube-based SMPC, scenario approaches, and industrial-grade solvers. Until then, I encourage you to experiment with the code, tweak the constraints, and explore how these ideas behave in practice.











 
 
 


