3.2. Examples for the Perception in the EKF
Description
In this video we discuss the second two equations of the Kalman filter.
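The second two equations of the Kalman filter form the correction (perception) step, in which a predicted estimate is fused with a measurement. A minimal scalar sketch of that step; the function name and all numbers are illustrative, not taken from the course:

```python
# Scalar Kalman filter correction step (a sketch; all numbers are illustrative).
# Given a predicted state x with variance P and a measurement z with noise
# variance R, the two correction equations are:
#   K = P / (P + R)          (Kalman gain)
#   x <- x + K * (z - x)     (state update)
#   P <- (1 - K) * P         (variance update)

def kalman_correct(x, P, z, R):
    """Fuse prediction (x, P) with measurement (z, R); return (x_new, P_new)."""
    K = P / (P + R)            # gain: how much to trust the measurement
    x_new = x + K * (z - x)    # corrected estimate, pulled toward z
    P_new = (1.0 - K) * P      # corrected variance, never larger than P
    return x_new, P_new

# Example: prediction x=2.0 with variance 4.0, measurement z=3.0 with variance 1.0.
x, P = kalman_correct(2.0, 4.0, 3.0, 1.0)
print(round(x, 3), round(P, 3))  # estimate moves toward z, variance shrinks
```

Note that the corrected variance is always smaller than the predicted one: incorporating a measurement can only reduce uncertainty in this model.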
With the same speakers
-
3.1. Examples for the Action in the EKF (Agostino Martinelli)
In part 2, we have seen the equations of the Bayes filter, which are the general equations which allow us to update the probability distribution, as the data from both proprioceptive sensors and
-
2.7. Grid Localization: an example in 1D (Agostino Martinelli)
Now that we have the equations of the Bayes filter, we need a method to implement these equations in real cases. So, in the following, I want to discuss two methods, which are commonly
-
3.7. Observability Rank Criterion (Agostino Martinelli)
In this video, we discuss an automatic, analytical method that allows us to answer the question of whether a state is observable or not: this method is the Observability Rank Criterion, which has
-
3.6. Observability in robotics (Agostino Martinelli)
In this video we discuss a fundamental issue which arises when we deal with an estimation problem: understanding if the system contains enough information to perform the estimation of the state.
-
3.5. Simultaneous Localization and Mapping (SLAM) (Agostino Martinelli)
In this video, we are discussing the SLAM problem: simultaneous localization and mapping.
-
3.8. Applications of the Observability Rank Criterion (Agostino Martinelli)
In this video we want to apply the observability rank criterion to understand the observability properties of the system that we saw in the previous videos.
-
3.3. The EKF is a weighted mean (Agostino Martinelli)
In this video I want to discuss the second two equations of the Kalman filter. And in particular I want to show that these actually perform a kind of weighted mean.
-
3.4. The use of the EKF in robotics (Agostino Martinelli)
In this video I want to explain the steps that we have to follow in order to implement an extended Kalman filter in robotics.
-
2.5. Reminders on probability (Agostino Martinelli)
In this sequence I want to remind you of a few concepts in the theory of probability, and then in the next one we finally derive the equations of the Bayes filter. So the concept that I want to
-
2.8. The Extended Kalman Filter (EKF) (Agostino Martinelli)
We have seen the grid localization, and the advantage of this approach is that we can deal with any kind of probability distribution; in particular we don't need to make a Gaussian assumption. The
-
2.6. The Bayes Filter (Agostino Martinelli)
The equations of the Bayes filter are the equations that allow us to update the probability distribution for the robot to be in a given configuration by integrating the information that are in the
-
2.3. Wheel encoders for a differential drive vehicle (Agostino Martinelli)
In this video, we want to discuss the case of wheel encoders in 2D, and in particular the case of a robot equipped with a differential drive, which is very popular in mobile robotics.
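Entries 2.6 and 2.7 above describe the Bayes filter and its grid-based implementation. A minimal 1D grid-localization sketch of those two update steps; the motion model, the sensor likelihood, and all probabilities are invented for the example:

```python
# 1D grid localization sketch: a discrete Bayes filter over N cells.
# All models and numbers below are illustrative, not taken from the course.
N = 10
belief = [1.0 / N] * N          # uniform prior over the N cells

def predict(belief, shift, p_correct=0.8):
    """Action update: the robot moves `shift` cells, succeeding with
    probability p_correct and staying put otherwise (a crude motion model)."""
    new = [0.0] * len(belief)
    for i, b in enumerate(belief):
        new[(i + shift) % len(belief)] += p_correct * b   # intended move
        new[i] += (1.0 - p_correct) * b                   # wheel slip: no move
    return new

def correct(belief, likelihood):
    """Perception update: multiply by the measurement likelihood, normalize."""
    new = [b * l for b, l in zip(belief, likelihood)]
    s = sum(new)
    return [v / s for v in new]

# A sensor reading that strongly suggests the robot is near cell 3:
likelihood = [0.05] * N
likelihood[3] = 0.6

belief = correct(predict(belief, shift=1), likelihood)
print(max(range(N), key=lambda i: belief[i]))  # most likely cell: 3
```

The two functions mirror the two Bayes-filter equations: `predict` convolves the belief with the motion model, and `correct` reweights it by the measurement likelihood.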
On the same topic
-
What if artificial intelligence swept over the oceans?
The evolution of observation and modelling technologies has played a central role in expanding knowledge of how the oceans work and in the development of activities
-
Underwater video in the service of fisheries research
With many applications in both biology and fishing technology, underwater video is increasingly used in the field of fisheries research. Advances
-
5.9. Other approaches: Planning-based approaches (Alejandro Dizan Vasquez Govea)
In this video we are going to study a second, and probably the most promising alternative for motion prediction: planning-based algorithms.
-
5.7. Typical Trajectories: drawbacks (Alejandro Dizan Vasquez Govea)
In previous videos we have discussed how to implement the typical trajectories and motion patterns approach. In this video we are going to discuss the drawbacks of such an approach,
-
5.8. Other approaches: Social Forces (Alejandro Dizan Vasquez Govea)
In this video we will review one of the alternatives we are proposing to the use of Hidden Markov models and typical trajectories: the Social Force model.
-
5.6. Predicting Human Motion (Alejandro Dizan Vasquez Govea)
In video 5.5 we have defined an HMM in Python. In this video we are going to learn how to use it to estimate and predict motion.
-
5.5. From trajectories to discrete time-state models (Alejandro Dizan Vasquez Govea)
In this video we are going to apply the concepts we reviewed in video 5.4 to real trajectories.
-
5.4. Bayesian filter inference (Alejandro Dizan Vasquez Govea)
In this video we will review the Bayes filter and we will study a particular instance of the Bayesian filter called the Hidden Markov model, which is a discrete version of a Bayesian filter.
-
5.3b. Learning typical trajectories 2/2 (Alejandro Dizan Vasquez Govea)
In this video we are aiming to improve on the results we obtained in video 5.3a, in particular with respect to the greyed-out trajectories that are badly represented.
-
5.3a. Learning typical trajectories 1/2 (Alejandro Dizan Vasquez Govea)
In video 5.2 we showed how to apply the expectation maximization clustering algorithm to two-dimensional data. In this video we will learn how to apply it to trajectory data. And then we will be
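Video 5.4 above presents the Hidden Markov model as a discrete version of the Bayesian filter. A minimal sketch of one forward (filtering) step for a two-state HMM; the transition matrix `A` and observation model `B` are invented for the example:

```python
# One forward step of a Hidden Markov model, i.e. one discrete Bayes-filter
# update. States, A, and B below are illustrative, not from the course.

A = [[0.7, 0.3],        # A[i][j] = P(next state j | current state i)
     [0.4, 0.6]]
B = [[0.9, 0.1],        # B[i][o] = P(observation o | state i)
     [0.2, 0.8]]

def forward_step(alpha, obs):
    """Propagate belief `alpha` through A, weight by observation `obs`,
    and renormalize: exactly the predict + correct steps in discrete form."""
    n = len(alpha)
    pred = [sum(alpha[i] * A[i][j] for i in range(n)) for j in range(n)]
    upd = [pred[j] * B[j][obs] for j in range(n)]
    s = sum(upd)
    return [v / s for v in upd]

belief = [0.5, 0.5]               # uniform prior over the two states
belief = forward_step(belief, 0)  # after seeing observation 0
print(belief)                     # belief shifts toward state 0
```

Iterating `forward_step` over a sequence of observations yields the filtered state distribution used for motion estimation and prediction in the videos above.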