[MUSIC] Welcome back to the AR track. We're now on week 4, and we're ready to start running algorithms on the robot and building up a level of autonomy. From teleoperating the robot, you should have noticed that the robot does not exactly follow the velocity commands that you provide, regardless of how well you calibrate the robot. For example, if you tell the robot to move forwards by ten centimeters, you might notice that it only moves eight centimeters, or that it turns slightly to the right or slightly to the left. This kind of control is called open loop control, because there's no feedback to regulate how fast the robot is actually going. This week, we will incorporate the pose information that we get from the AprilTags to allow the robot to correct its trajectory as it moves. You can think of it as being similar to trying to walk forwards with your eyes closed, versus with your eyes open. Now first, we need to define our robot kinematic model. There's a lot going on in this picture, so let's slowly go through each term one by one. For simplicity, let's define our AprilTag position as the goal, which is the thick dot that you see in the image. So we want the robot to move to where the AprilTag is. And also, let's say that we want to align the robot's X-axis with the X-axis of the tag. This might seem overly specific right now, but we'll see later that we can generalize this problem to any desired position and orientation that we want the robot to move to. Now let's define our AprilTag axes as XG and YG, and our robot axes as XR and YR. Then delta X and delta Y are the distances between the robot and the tag along the X and Y axes of the tag, and the angle theta is the angle between the AprilTag X-axis and the robot X-axis. This is very similar to the information that you get when you call get measurements in the API. 
Now let's set rho to be the straight-line distance between the robot and the tag, alpha to be the angle between the robot X-axis and this line, and beta to be the angle between this line and the tag X-axis. Finally, we have our control inputs v and omega, which are the linear and angular velocities of the robot. By defining these terms, we can write a mathematical model for how the robot's movements will affect all of these values, allowing us to minimize rho, alpha, and beta, which in turn moves the robot to the tag. From each AprilTag measurement, we can directly observe delta X, delta Y, and theta. From these values we can then calculate rho, alpha, and beta. You can think of rho as being how far we are from the tag, alpha as being the angular error between the heading direction of the robot and the direct line to the tag, and beta as being the error between the heading direction of the robot and the desired heading direction at the end. In order for the robot to have reached the tag and achieved the desired heading direction, all three of these state variables must be zero at the end. Using some simple geometry, we can calculate the rate of change of each of these quantities that we want to minimize in terms of our control inputs v and omega. As you can see in this equation, we have control over all three state variables from the two control inputs. Now, given the previous state equations, let us define a control law as the first equation that we see here. The k subscript variables are simply gains that you, as the user, will set to tune the system. So for example, if you want the robot to minimize rho a little bit faster, you would just set k rho a little bit higher. It's simple to throw out a control law like this, but we must show that this law will actually drive our system to a state where each of the state variables rho, alpha, and beta is minimized. To do this, we must show that under this control law, our system is stable. 
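To make these relationships concrete, here is a minimal Python sketch of how a relative pose measurement (delta X, delta Y, theta) could be converted into (rho, alpha, beta) and then into the control inputs (v, omega). The exact sign conventions depend on how your frames are defined, and the gain values and function names here are illustrative, not taken from the course code.

```python
import math

def wrap(angle):
    """Wrap an angle to the interval (-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def pose_controller(dx, dy, theta, k_rho=3.0, k_alpha=8.0, k_beta=-1.5):
    """Map a relative pose measurement to (v, omega).

    dx, dy : distance to the tag along the tag's X- and Y-axes
    theta  : angle between the tag's X-axis and the robot's X-axis
    Gains chosen so k_rho > 0, k_beta < 0, k_alpha > k_rho (illustrative values).
    """
    # rho: straight-line distance from the robot to the goal.
    rho = math.hypot(dx, dy)
    # alpha: angle between the robot's heading and the line to the goal.
    alpha = wrap(-theta + math.atan2(dy, dx))
    # beta: angle between that line and the desired final heading.
    beta = wrap(-theta - alpha)
    # Control law: drive all three errors toward zero.
    v = k_rho * rho
    omega = k_alpha * alpha + k_beta * beta
    return v, omega
```

Note that in the degenerate case where the robot is directly facing the goal (dy = 0, theta = 0), alpha and beta vanish and the controller simply drives forward at a speed proportional to the remaining distance.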
To simplify things, let's assume that our robot is almost heading towards the tag. What that means is that alpha is close to 0. Then we can make the approximation that sine of alpha is approximately alpha, and cosine of alpha is approximately 1. Now you'll find that this is actually the same as linearizing the previous set of equations around alpha equals 0. When we do this, we get the first set of equations that you see here. This is much simpler than the previous set of equations, and we can work with it to prove that the system is stable. Now, in order for the system to be stable, all of the eigenvalues of the matrix in the equation above must have negative real parts. To calculate these eigenvalues, we can look at the characteristic polynomial of the matrix, which we can solve to find the eigenvalues. In the end, we'll find that the system is stable if k rho is positive, k beta is negative, and k alpha is larger than k rho. You will also see this when you implement these gains in the simulation and on the robot. So by implementing the control law that we mentioned earlier, you should be able to get your robot to move towards the tag as shown in the video here. Notice how it always maintains a smooth path as it tries to minimize rho, alpha, and beta at the same time. By tuning the gain on each variable, you can also increase the speed of convergence of that variable. For example, if you don't care about the final orientation of the robot, you can set k beta to zero, and then the robot will only try to reach the final position goal. In addition, if you want to generalize this algorithm further and allow the robot to move to any desired goal position and orientation, it's very simple. Basically, all you need to do is, when you get a measurement of the robot's current position and orientation, subtract the goal position and orientation from that value. The difference between those is the error, which you can then feed into the control law that we had before. 
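Here's a small numerical check of that stability condition. It is a sketch assuming the standard linearization of this controller, in which rho-dot = -k_rho rho, alpha-dot = -(k_alpha - k_rho) alpha - k_beta beta, and beta-dot = -k_rho alpha; the specific gain values used in the check are illustrative.

```python
import numpy as np

def linearized_matrix(k_rho, k_alpha, k_beta):
    """System matrix A of d/dt [rho, alpha, beta] = A @ [rho, alpha, beta],
    obtained by setting sin(alpha) ~ alpha and cos(alpha) ~ 1."""
    return np.array([
        [-k_rho,  0.0,            0.0],
        [ 0.0,    k_rho - k_alpha, -k_beta],
        [ 0.0,   -k_rho,           0.0],
    ])

def is_stable(k_rho, k_alpha, k_beta):
    """Stable when every eigenvalue of A has a negative real part."""
    eigvals = np.linalg.eigvals(linearized_matrix(k_rho, k_alpha, k_beta))
    return bool(np.all(eigvals.real < 0))
```

You can use this to verify the conditions from the lecture: gains with k_rho positive, k_beta negative, and k_alpha larger than k_rho come out stable, while violating any one of those conditions produces an eigenvalue in the right half-plane.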
Implementing the algorithm on the robot should be very similar to what you do in simulation. However, it's very important to make sure things are working in simulation first, as it's much harder and takes much longer to debug issues with the algorithm on the actual robot. Also, you'll notice that setting a nonzero k beta will lead to the robot taking large, curvy paths that may cause it to lose the tag from its field of view. In that case, your robot won't actually get any measurements from the tag, and so at best it'll just stop. So if you're implementing this on the robot and you only have a single tag, it's best to set k beta to zero so that the robot only tries to reach the position of the tag, and so keeps the tag in view at all times. Now if you want to take this idea further, you can replace the position information from the AprilTag with another visual perception system. A good place to start would be the ball tracking that you did in the Estimation and Learning course, although it would require a little more work to estimate the 3D position of the ball. However, this is very doable if you know the radius of the ball in real life as well as the radius of the ball in the image. In addition, you can also track other visual features in the scene that give you an estimate of the robot's pose, which you can then feed into the system to move the robot to a desired goal position.
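As a rough sketch of that ball-depth idea: under a pinhole camera model, the distance to the ball scales with the ratio of its known physical radius to its measured image radius, and the pixel coordinates can then be back-projected to a full 3D position. The camera intrinsics here (focal lengths, principal point) are placeholder values; you would substitute your own calibration.

```python
def ball_position(u, v, radius_px, radius_m, fx, fy, cx, cy):
    """Estimate the 3D position of a ball in the camera frame (pinhole model).

    u, v      : pixel coordinates of the detected ball center
    radius_px : ball radius measured in the image, in pixels
    radius_m  : known physical radius of the ball, in meters
    fx, fy    : focal lengths in pixels; cx, cy : principal point in pixels
    """
    # Similar triangles: radius_px / fx = radius_m / depth.
    z = fx * radius_m / radius_px
    # Back-project the pixel through the pinhole to get x, y at that depth.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z
```

For example, a 10 cm ball that appears 50 pixels wide in a camera with a 500-pixel focal length is estimated to be one meter away.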