WEBVTT
00:00:00.180 --> 00:00:04.940
In this session, we will
present the Stern algorithm for
00:00:04.940 --> 00:00:09.520
decoding. In fact, the idea is to
00:00:09.520 --> 00:00:14.320
combine two algorithms
that we have seen before, the
00:00:14.320 --> 00:00:17.350
Lee and Brickell algorithm
and the Birthday Decoding.
00:00:17.860 --> 00:00:22.100
So, instead of a full
Gaussian elimination, we will
00:00:22.100 --> 00:00:26.860
simply have a partial Gaussian
elimination as presented here.
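To make the partial elimination concrete, here is a toy GF(2) sketch that pivots only on the last n-k-l columns, bringing (H | s) to the block shape [[H' 0 | s'], [H'' I | s'']] with the block names H', H'', s', s'' that appear later in this session. The list-of-lists representation and the exact column layout are my assumptions, not the slide's:

```python
def partial_gauss_elim(H, s, l):
    """Partial Gaussian elimination over GF(2): pivot only on the
    last n-k-l columns, bringing (H | s) to the block shape
    [[H' 0 | s'], [H'' I | s'']].  H is a list of 0/1 rows; the
    exact layout is an assumption based on the slide."""
    r, n = len(H), len(H[0])
    t = r - l                                 # size of the identity block
    M = [H[i][:] + [s[i]] for i in range(r)]  # augmented matrix (H | s)
    for i in range(t):
        row, col = l + i, n - t + i
        # any row not already holding a pivot may supply this pivot
        piv = next((j for j in list(range(l)) + list(range(row, r))
                    if M[j][col]), None)
        if piv is None:
            # singular block: the real algorithm resamples P and retries
            raise ValueError("selected columns are not full rank")
        M[row], M[piv] = M[piv], M[row]
        for j in range(r):                    # clear the column elsewhere
            if j != row and M[j][col]:
                M[j] = [a ^ b for a, b in zip(M[j], M[row])]
    Hp  = [M[i][:n - t] for i in range(l)]    # H'  : l x (k+l)
    sp  = [M[i][n] for i in range(l)]         # s'
    Hpp = [M[i][:n - t] for i in range(l, r)] # H'' : (n-k-l) x (k+l)
    spp = [M[i][n] for i in range(l, r)]      # s''
    return Hp, sp, Hpp, spp
```

When the selected columns are singular, the real algorithm simply picks another permutation and tries again.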
00:00:27.510 --> 00:00:32.410
And if we look at the lower
part, what is called step 1,
00:00:32.410 --> 00:00:36.340
in red here in this
slide, it is, in fact,
00:00:38.020 --> 00:00:42.200
a smaller CSD problem with
a smaller matrix H', with a
00:00:45.280 --> 00:00:49.030
smaller target syndrome s'
00:00:50.980 --> 00:00:52.530
and with weight p.
00:00:53.490 --> 00:00:58.030
So, now we are going to
solve this smaller problem in the
00:00:58.030 --> 00:01:02.790
first step. In fact, we
are going to find all or
00:01:02.790 --> 00:01:06.470
most solutions e' to that problem.
00:01:07.130 --> 00:01:11.890
And for every e' found
in step 1, we will check
00:01:12.070 --> 00:01:17.030
in step 2 whether
this particular solution
00:01:17.170 --> 00:01:21.580
e' can be extended to a
solution of the full CSD problem.
00:01:23.350 --> 00:01:28.350
If we solve the first
step using enumeration, as
00:01:28.350 --> 00:01:32.540
I presented in the very
first algorithm, then we obtain
00:01:32.540 --> 00:01:37.270
an algorithm which is very similar
to the Lee and Brickell algorithm.
00:01:37.270 --> 00:01:41.970
But, if we use for step 1
a Birthday Decoding, then
00:01:42.010 --> 00:01:46.900
we obtain a significant
speedup for decoding and it is
00:01:46.900 --> 00:01:48.270
in fact the Dumer algorithm.
00:01:48.270 --> 00:01:50.930
Now, I will describe this algorithm.
00:01:50.930 --> 00:01:55.650
So, we are trying to solve CSD of H,
00:01:55.680 --> 00:01:59.830
s, w and this time we
have two additional integer
00:01:59.830 --> 00:02:01.740
parameters p and l.
00:02:02.640 --> 00:02:06.000
And we repeat the following:
we pick a permutation matrix
00:02:06.000 --> 00:02:08.600
P, we make
00:02:10.480 --> 00:02:15.080
a partial Gaussian
elimination as you can see on the
00:02:15.080 --> 00:02:19.940
right of the slide,
obtaining U, H', H'',
00:02:19.940 --> 00:02:24.740
s', s''. We solve the
00:02:24.740 --> 00:02:29.520
small CSD problem with H', s'
00:02:29.550 --> 00:02:34.020
and p (we solve this one
with Birthday Decoding), and
00:02:34.020 --> 00:02:37.930
for every solution of the
problem we will construct the
00:02:37.930 --> 00:02:41.400
second part of the error e''.
00:02:41.750 --> 00:02:46.750
And if we are lucky,
this second part of the
00:02:46.750 --> 00:02:51.260
error pattern e'' has a
sufficiently small weight, and
00:02:51.260 --> 00:02:53.720
then it provides the
solution to our problem.
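Steps 1 and 2 just described can be sketched as follows, working directly on an instance already in reduced form. Splitting the weight p evenly across the two halves of the columns is the usual birthday trade-off; solutions whose support does not split evenly are missed, which is why step 1 finds all or most, but not always all, solutions e'. Names and the representation are illustrative assumptions:

```python
import itertools
from collections import defaultdict

def birthday_step(Hp, sp, p):
    """Step 1: enumerate weight-p solutions e' of H'e' = s' by a
    meet-in-the-middle (birthday) split of the k+l columns.  Only
    patterns with p/2 bits in each half are found, hence 'all or
    most' solutions."""
    m, ncols = len(Hp), len(Hp[0])
    half = ncols // 2

    def col_xor(cols):
        # XOR of the selected columns of H'
        v = [0] * m
        for j in cols:
            for i in range(m):
                v[i] ^= Hp[i][j]
        return tuple(v)

    # index every weight-p/2 pattern on the left half by its syndrome
    table = defaultdict(list)
    for left in itertools.combinations(range(half), p // 2):
        table[col_xor(left)].append(left)

    # a right pattern matches a left one when their XOR equals s'
    target = tuple(sp)
    for right in itertools.combinations(range(half, ncols), p - p // 2):
        need = tuple(a ^ b for a, b in zip(col_xor(right), target))
        for left in table[need]:
            e1 = [0] * ncols
            for j in left + right:
                e1[j] = 1
            yield e1

def extend(Hpp, spp, e1, w, p):
    """Step 2: e'' = s'' + H''e'; e' extends to a full solution
    exactly when wt(e'') <= w - p."""
    e2 = list(spp)
    for j, bit in enumerate(e1):
        if bit:
            for i in range(len(Hpp)):
                e2[i] ^= Hpp[i][j]
    return e2 if sum(e2) <= w - p else None
```

A planted error whose p bits land one per half is recovered by the table lookup; the extension then just recomputes e'' and checks its weight.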
00:02:55.540 --> 00:02:58.050
So, what I presented
is the Dumer algorithm.
00:02:58.270 --> 00:03:01.660
In fact, this algorithm
was published in 1991.
00:03:02.480 --> 00:03:05.930
But Stern
algorithm, which is slightly
00:03:07.250 --> 00:03:10.860
more expensive, was
presented two years before.
00:03:11.350 --> 00:03:14.460
However, the difference
between the two algorithms is
00:03:14.460 --> 00:03:19.430
very small and Stern was the
first person to use Birthday
00:03:19.430 --> 00:03:20.810
Decoding for decoding.
00:03:20.900 --> 00:03:24.560
For this reason, I will now
refer to this algorithm as
00:03:24.560 --> 00:03:26.120
the Stern/Dumer algorithm.
00:03:26.640 --> 00:03:30.960
So, how do we analyze the
complexity of this algorithm?
00:03:30.960 --> 00:03:35.590
The iteration cost is
a bit more complex now.
00:03:35.990 --> 00:03:39.650
We have a first term
which is k times (n-k-l)
00:03:41.670 --> 00:03:45.420
which corresponds to the
Gaussian elimination; a second
00:03:45.440 --> 00:03:50.240
term here which corresponds
to the cost of the Birthday
00:03:50.240 --> 00:03:54.730
Decoding for step 1,
for the sub-CSD problem
00:03:55.130 --> 00:03:59.730
and the last term which
corresponds to the final check of
00:03:59.770 --> 00:04:04.770
the error e''.
Altogether, we can write in that
00:04:04.800 --> 00:04:08.020
case, and more generally in
most of the analyses we are
00:04:08.020 --> 00:04:11.830
going to make in the
sequel of this course,
00:04:13.060 --> 00:04:17.990
a formula with
coefficients here K0, K1, K2
00:04:18.710 --> 00:04:23.260
which are small numbers
like one or two, or one
00:04:23.290 --> 00:04:28.000
half. We will simplify, in
practice, all those formulas
00:04:28.000 --> 00:04:32.090
by setting all those
coefficients to one
00:04:32.640 --> 00:04:37.630
and we will write
simplified formulas, like this
00:04:37.630 --> 00:04:42.580
one, which are going to
be true but only up to a
00:04:42.580 --> 00:04:47.370
constant factor. So, K above is the
00:04:47.370 --> 00:04:52.130
cost up to a constant factor
of the Stern/Dumer algorithm's
00:04:52.130 --> 00:04:57.010
iteration. The
probability of success is very
00:04:57.010 --> 00:05:00.290
similar to the probability
of success we have for the Lee
00:05:00.290 --> 00:05:02.000
and Brickell
algorithm: P∞ with this formula.
00:05:02.240 --> 00:05:06.620
We deduce N∞, the
expected number of iterations,
00:05:06.990 --> 00:05:08.960
and we have this
formula for the workfactor.
00:05:10.050 --> 00:05:14.240
It depends on two
additional parameters, p and l, which
00:05:15.200 --> 00:05:16.830
have to be optimized.
00:05:17.550 --> 00:05:19.180
Now, going back to
the workfactor formula.
00:05:19.180 --> 00:05:24.060
The optimization parameters p and
00:05:24.200 --> 00:05:29.130
l will grow together with
the problem parameters n, k
00:05:29.130 --> 00:05:33.480
and w. So, in
practice, for cryptographic
00:05:33.480 --> 00:05:38.070
sizes, the formula
can be much simpler.
00:05:38.290 --> 00:05:41.670
In particular, the part
corresponding to the Gaussian
00:05:41.670 --> 00:05:45.730
elimination never dominates
and we will simply remove it
00:05:45.730 --> 00:05:50.210
for the analysis. This
provides this simpler formula.
00:05:51.420 --> 00:05:55.930
Now, if we look at this
formula, it is the sum of two terms.
00:05:56.550 --> 00:06:01.330
And in most practical
situations this sum is
00:06:01.330 --> 00:06:04.530
minimal when the
two addends are equal.
00:06:05.000 --> 00:06:08.790
And this provides the
formula that is here in green,
00:06:09.290 --> 00:06:13.460
where the minimum is taken over
only one of the two parameters, p.
00:06:14.380 --> 00:06:18.710
And the second parameter l is
given with the formula on the right.
00:06:19.420 --> 00:06:24.070
And of course, this formula is
true but up to a constant factor.
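As a numerical illustration, here is a hedged sketch of the simplified workfactor, with all constant coefficients set to one and the Gaussian-elimination term dropped, as discussed above. The concrete cost expressions (list size L, step-1 cost L + L^2/2^l) are my assumptions in the spirit of the simplified formulas, not necessarily the slide's exact ones, and the toy parameters are arbitrary:

```python
from math import comb, inf

def stern_dumer_wf(n, k, w, p, l):
    """Simplified Stern/Dumer workfactor, true only up to a constant
    factor: every small coefficient is set to 1 and the Gaussian
    elimination term is dropped (it never dominates for
    cryptographic sizes).  The cost model is an assumption."""
    if p > w or w - p > n - k - l or p > k + l:
        return inf
    # success probability of one permutation: exactly p error bits on
    # the k + l positions and w - p on the remaining n - k - l
    P_succ = comb(k + l, p) * comb(n - k - l, w - p) / comb(n, w)
    L = comb((k + l) // 2, p // 2)   # birthday list size for step 1
    K = L + L * L / 2 ** l           # step-1 cost plus surviving pairs
    return K / P_succ                # expected iterations times cost

# optimize the two additional parameters p and l on a small grid
# (toy McEliece-like parameters, chosen arbitrarily for illustration)
n, k, w = 1024, 524, 50
best_wf, best_p, best_l = min(
    (stern_dumer_wf(n, k, w, p, l), p, l)
    for p in range(0, 12, 2) for l in range(1, 40))
```

On such a grid the optimum should land near the point where the two addends of the cost balance, that is 2^l close to L, consistent with the remark above that the sum is minimal when the two addends are equal.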
00:06:24.560 --> 00:06:29.030
In the next session, we will
study an even more involved
00:06:29.060 --> 00:06:32.910
algorithm, a new variant
of information set decoding.