WEBVTT
00:00:00.830 --> 00:00:03.460
So we are discussing the
SLAM problem, the simultaneous
00:00:03.460 --> 00:00:08.080
localization and mapping,
and what I want to do now is to
00:00:08.080 --> 00:00:12.930
see, in the framework of the
extended Kalman filter, some
00:00:13.230 --> 00:00:16.930
properties regarding
its computational complexity.
00:00:17.580 --> 00:00:21.690
So we know that the state
that we are estimating includes
00:00:21.690 --> 00:00:25.060
both the position and
the orientation, so the
00:00:25.060 --> 00:00:28.230
configuration of the robot and
the configuration of the features.
00:00:30.200 --> 00:00:33.570
We saw this, for instance,
in the 2D case, where s_R has
00:00:34.060 --> 00:00:35.460
three components.
00:00:35.460 --> 00:00:38.890
But what I want to do now is
more general than the 2D case.
00:00:40.850 --> 00:00:44.980
We have seen that the Jacobian
F_x has the following structure.
00:00:46.830 --> 00:00:49.140
So this first block regards
only the robot; here we have
00:00:49.140 --> 00:00:52.990
the correlation
between the robot and the
00:00:53.870 --> 00:00:56.940
map, which is zero; zero
here as well; and here we have an
00:00:56.940 --> 00:01:00.480
identity matrix whose
dimension, as we mentioned,
00:01:00.480 --> 00:01:02.210
scales with the
number of features.
00:01:02.210 --> 00:01:06.630
F_u consists of a block which regards
00:01:06.930 --> 00:01:11.050
only the robot and another
block which is always zero,
00:01:11.920 --> 00:01:16.140
and then, in the hypothesis
that the robot only observes
00:01:16.150 --> 00:01:19.690
local features, we have
the matrix H, the Jacobian of the
00:01:19.690 --> 00:01:22.460
observation function,
which has a sparse structure.
00:01:23.920 --> 00:01:27.420
So this is, for instance, the
structure when the robot is
00:01:27.420 --> 00:01:28.770
observing feature j.
00:01:31.410 --> 00:01:34.570
So starting from the
structure of this matrix, we want to
00:01:34.730 --> 00:01:38.560
understand the
computational complexity, in particular,
00:01:38.560 --> 00:01:41.370
how this complexity scales
with the number of features N.
00:01:41.370 --> 00:01:46.240
So the first two equations
of the Kalman filter are the
00:01:46.240 --> 00:01:51.070
following. So regarding the
mean value, we immediately
00:01:51.070 --> 00:01:55.100
observe that we just have to
update, according to this equation,
00:01:55.700 --> 00:01:59.240
only the first component,
which regards the robot
00:01:59.240 --> 00:02:03.500
configuration, and so for
the implementation of
00:02:03.500 --> 00:02:05.580
this equation, the
cost is independent of N.
00:02:06.910 --> 00:02:10.760
Regarding the second
equation, we have to compute this
00:02:12.020 --> 00:02:14.680
quantity starting
from this structure.
00:02:15.100 --> 00:02:19.990
We also partition the matrix P in this
00:02:19.990 --> 00:02:24.930
way: P_R, which is a block that
00:02:24.960 --> 00:02:29.880
only regards the error on
the robot; P_RM, which is the
00:02:29.880 --> 00:02:32.800
correlation between the
robot and the map (M is for the
00:02:32.800 --> 00:02:37.260
map), whose dimension
is the dimension of the
00:02:37.260 --> 00:02:40.890
robot configuration times
the dimension of the map, so,
00:02:40.890 --> 00:02:44.370
for instance, in the 2D
case, this will be 3 by 2N;
00:02:44.370 --> 00:02:47.780
the transpose of this block here;
00:02:50.230 --> 00:02:53.850
and P_M, which regards only the map.
00:02:55.340 --> 00:02:59.640
So by substituting this matrix
here, and the matrices given
00:02:59.640 --> 00:03:02.440
here and here,
we obtain this quantity.
00:03:08.300 --> 00:03:10.260
So first of all, observe
that for this block...
00:03:10.260 --> 00:03:15.170
to compute this,
we have to do operations
00:03:15.900 --> 00:03:19.890
which do not scale with N,
because these blocks all have
00:03:20.260 --> 00:03:24.870
dimensions which depend
only on the dimension of s_R here.
00:03:26.100 --> 00:03:30.440
This scales linearly with N, and
00:03:31.040 --> 00:03:34.690
also this is independent of
N, so the total cost of this
00:03:34.690 --> 00:03:37.240
operation scales linearly with N.
00:03:37.450 --> 00:03:40.780
So we can conclude that the first
two equations of the Kalman
00:03:40.780 --> 00:03:45.450
filter have a cost
which scales linearly with N,
00:03:45.990 --> 00:03:49.520
because for the second
equation here, the cost depends
00:03:53.140 --> 00:03:59.810
linearly on N. So the total cost
of these two equations is O(N).
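The prediction step just described can be sketched in code. This is a minimal NumPy illustration, not the lecture's own implementation: the function name, the block layout, and the robot dimension nr=3 (the 2D case) are assumptions. It exploits the sparse structure of F_x (identity on the map block) so that only the robot mean, P_R, and P_RM are touched, giving O(N) cost.

```python
import numpy as np

def ekf_slam_predict(mu, P, f, F_R, F_u, Q, nr=3):
    """EKF-SLAM prediction exploiting the sparsity of F_x.

    mu  : full state mean (robot configuration + N features)
    P   : full covariance, partitioned as [[P_R, P_RM], [P_RM.T, P_M]]
    f   : motion model acting on the robot configuration only
    F_R : Jacobian of f w.r.t. the robot configuration (the nonzero
          block of F_x; the map block of F_x is the identity)
    F_u : Jacobian of f w.r.t. the input (the nonzero block of F_u)
    Q   : input noise covariance
    nr  : dimension of the robot configuration (3 in the 2D case)
    """
    mu, P = mu.copy(), P.copy()
    # Mean: only the robot component changes -> cost independent of N.
    mu[:nr] = f(mu[:nr])
    # Covariance: F_x = [[F_R, 0], [0, I]], so only two blocks change.
    P[:nr, :nr] = F_R @ P[:nr, :nr] @ F_R.T + F_u @ Q @ F_u.T  # O(1) in N
    P[:nr, nr:] = F_R @ P[:nr, nr:]   # P_RM block: scales linearly with N
    P[nr:, :nr] = P[:nr, nr:].T       # its transpose
    # P_M is left untouched -> total cost O(N) instead of O(N^2).
    return mu, P
```

Multiplying the full (nr+2N)-dimensional F_x against P would instead cost O(N²); the block form above is what makes the prediction linear in N.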
00:04:00.030 --> 00:04:03.620
Let us pass to the second two
00:04:03.620 --> 00:04:06.380
equations, so the perception step.
00:04:08.820 --> 00:04:13.730
So the equations are these
and the structure of H is the
00:04:13.730 --> 00:04:18.500
following as we
derived before, so it's
00:04:18.740 --> 00:04:21.550
a sparse structure where it
is different from zero only
00:04:21.550 --> 00:04:23.920
for the first component
which regards the robot
00:04:23.920 --> 00:04:27.290
configuration and the component
which refers to feature j.
00:04:28.980 --> 00:04:32.520
So let us start by
computing this quantity here, PH
00:04:32.520 --> 00:04:36.320
transpose, which appears
here, here, and of course also
00:04:36.320 --> 00:04:39.610
here, here and here
as its transpose.
00:04:39.610 --> 00:04:44.070
So we have
00:04:45.290 --> 00:04:49.370
this matrix product here
and we observe that we have to
00:04:49.370 --> 00:04:54.790
multiply all these rows
00:04:54.790 --> 00:04:58.390
here by this column vector here.
00:04:58.890 --> 00:05:03.540
And we only have results
different from zero here and here.
00:05:03.970 --> 00:05:08.560
So since here all the elements
of the map are involved in
00:05:08.560 --> 00:05:12.250
the computation, we have a
number of products which
00:05:12.250 --> 00:05:16.580
scales linearly with N, so the
complexity of this product is O(N).
00:05:21.430 --> 00:05:26.100
If we multiply this result
by H on this side, that is
00:05:26.100 --> 00:05:30.260
this product here, we still
have O(N) computations to do.
00:05:31.330 --> 00:05:35.020
So the total computational
cost of this quantity here is
00:05:35.380 --> 00:05:39.900
O(N), and the complexity
for the first equation is
00:05:39.930 --> 00:05:44.700
O(N). Regarding the second
equation, we observe that we
00:05:44.700 --> 00:05:49.040
have to do the update of
all the elements of the matrix,
00:05:49.280 --> 00:05:53.280
which are N squared, so
the complexity of this equation
00:05:53.540 --> 00:05:56.670
is O(N²).
00:05:57.070 --> 00:05:59.430
So we conclude that the
implementation of the extended
00:05:59.430 --> 00:06:03.690
Kalman filter, when the
observation is sparse, so when the
00:06:04.460 --> 00:06:07.640
robot only observes local
features, scales quadratically
00:06:07.640 --> 00:06:08.650
with the number of features.
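The perception step can be sketched in the same spirit. Again this is an illustrative NumPy sketch, not the lecture's implementation: the function name, the block layout of the state, and the feature dimension nf=2 are assumptions. Since H is nonzero only in the robot block H_R and in the block H_j of the observed feature j, the product PHᵀ costs O(N), while the final covariance update touches all N² entries and dominates with O(N²).

```python
import numpy as np

def ekf_slam_update(mu, P, z, h, H_R, H_j, R, j, nr=3, nf=2):
    """EKF-SLAM update for a single observed feature j.

    H is sparse: nonzero only in the robot block (H_R) and in the
    block of feature j (H_j), so P @ H.T is computed in O(N).
    The final covariance update is the O(N^2) bottleneck.
    """
    lo = nr + j * nf                      # start index of feature j
    # P @ H.T using only the two nonzero blocks of H -> O(N)
    PHt = P[:, :nr] @ H_R.T + P[:, lo:lo + nf] @ H_j.T
    # Innovation covariance S = H @ P @ H.T + R, reusing PHt -> O(1) in N
    S = H_R @ PHt[:nr] + H_j @ PHt[lo:lo + nf] + R
    K = PHt @ np.linalg.inv(S)            # Kalman gain, O(N)
    mu = mu + K @ (z - h(mu))             # mean update, O(N)
    P = P - K @ PHt.T                     # updates all N^2 entries -> O(N^2)
    return mu, P
```

The last line is why the whole filter scales quadratically with the number of features: every entry of P is correlated with the robot, so every entry must be corrected.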