WEBVTT
00:00:00.090 --> 00:00:03.140
In this session, I will
detail two combinatorial
00:00:03.140 --> 00:00:05.050
solutions to the decoding problem.
00:00:05.230 --> 00:00:07.940
The first one is
the Exhaustive Search.
00:00:08.890 --> 00:00:13.820
To find our w columns,
we will simply enumerate
00:00:14.020 --> 00:00:18.940
all the tuples j1 to
jw and check whether
00:00:19.860 --> 00:00:23.890
the sum of the corresponding
columns plus the syndrome is equal to
00:00:23.890 --> 00:00:27.840
zero modulo 2. In detail,
here is how we will do it.
00:00:28.840 --> 00:00:32.630
We have w loops enumerating
00:00:34.170 --> 00:00:39.090
the indices from j1 to jw, and in
00:00:39.090 --> 00:00:43.830
the innermost loop,
we add the w columns
00:00:43.860 --> 00:00:47.670
plus the syndrome and
either we test the value of the
00:00:47.670 --> 00:00:51.470
sum or we store it,
depending on our needs.
00:00:51.960 --> 00:00:56.020
The cost of this
algorithm is at most w · (n choose w)
00:00:57.920 --> 00:01:00.680
column additions plus a
certain number of tests.
00:01:01.120 --> 00:01:05.780
In practice, the cost
for the test is negligible
00:01:05.940 --> 00:01:10.660
and we will just ignore
that term, and we have this
00:01:10.660 --> 00:01:12.450
cost for the algorithm.
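The nested-loop search just described can be sketched in Python. This is a minimal illustration, not the lecture's exact pseudocode: the representation of H as a list of bit-columns and the function name are assumptions.

```python
from itertools import combinations

def exhaustive_search(H, s, w):
    """Find all weight-w error patterns whose columns sum to s (mod 2).

    H is assumed to be a list of n columns, each a list of bits;
    s is the syndrome. Illustrative sketch only.
    """
    n = len(H)
    solutions = []
    # combinations() plays the role of the w nested loops over j1 < ... < jw.
    for cols in combinations(range(n), w):
        total = list(s)
        for j in cols:                      # add the w chosen columns to the syndrome
            total = [(a + b) % 2 for a, b in zip(total, H[j])]
        if not any(total):                  # test: columns + syndrome == 0 mod 2
            solutions.append(cols)
    return solutions
```

Each tuple costs w column additions here, which is where the w · (n choose w) bound comes from.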
00:01:13.520 --> 00:01:14.620
But we can do better.
00:01:15.190 --> 00:01:18.110
Instead, we may keep
track of partial sums.
00:01:18.610 --> 00:01:23.080
In the first loop, line 1,
we add the syndrome to the
00:01:23.080 --> 00:01:28.010
first column. In the
second loop, we add the first
00:01:28.010 --> 00:01:30.580
partial sum to the second column.
00:01:30.770 --> 00:01:35.140
We obtain a second partial
sum and so on until, in the
00:01:35.420 --> 00:01:40.130
inner loop, we add the
previous partial sum to the last
00:01:40.130 --> 00:01:44.540
column. If we do
this, since each line i
00:01:44.570 --> 00:01:48.040
is executed (n choose i)
times, we have a total
00:01:50.300 --> 00:01:52.300
of column additions
which is given here.
00:01:52.770 --> 00:01:57.690
And this cost is
dominated by (n choose w), when w is
00:01:57.690 --> 00:02:00.630
not too large. So, the
total cost for the Exhaustive
00:02:00.630 --> 00:02:03.780
Search is (n choose w) column operations.
00:02:04.430 --> 00:02:07.710
And for that cost, we obtain
all the solutions to our problem.
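The partial-sum variant can be sketched as follows, again as a hypothetical Python illustration with the same assumed column-list layout for H. Each loop level performs a single column addition and passes the running sum down, so level i executes about (n choose i) additions in total.

```python
def exhaustive_partial_sums(H, s, w):
    """Enumerate weight-w patterns while reusing partial sums.

    At level i the running sum is s + H[j1] + ... + H[j_{i-1}] (mod 2),
    so each iteration costs one column addition instead of w,
    and the total cost is dominated by (n choose w).
    """
    n = len(H)
    solutions = []

    def level(i, start, partial, chosen):
        for j in range(start, n):
            # one column addition per iteration, reusing the previous partial sum
            new = [(a + b) % 2 for a, b in zip(partial, H[j])]
            if i == w:
                if not any(new):            # the w columns sum to s mod 2
                    solutions.append(chosen + (j,))
            else:
                level(i + 1, j + 1, new, chosen + (j,))

    level(1, 0, list(s), ())
    return solutions
```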
00:02:08.090 --> 00:02:10.460
We can do better with
the Birthday Decoding.
00:02:11.670 --> 00:02:15.020
In this algorithm, we
split the matrix into two equal
00:02:15.020 --> 00:02:19.180
parts and we will
enumerate all solutions by
00:02:19.180 --> 00:02:24.170
building those two
lists using error patterns of
00:02:24.170 --> 00:02:28.950
weight w/2. If the
intersection of those two lists
00:02:29.250 --> 00:02:32.350
is not empty, then we have
a solution to our problem.
00:02:32.870 --> 00:02:34.090
Here is the algorithm.
00:02:35.270 --> 00:02:40.140
We first enumerate all
errors of weight w/2, compute the
00:02:42.130 --> 00:02:46.690
syndrome and store each of the
00:02:46.690 --> 00:02:49.640
computed values. This
will cost (n/2 choose w/2),
00:02:51.610 --> 00:02:53.700
that is the size of the first list.
00:02:53.700 --> 00:02:58.080
At this point, we may check the
00:02:58.080 --> 00:03:01.130
intersection of the
two lists, L1 and L2.
00:03:01.980 --> 00:03:05.030
This last part of the
algorithm will cost |L1| · |L2| / 2^s,
00:03:08.820 --> 00:03:10.360
where s is the size of the syndrome.
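The two-list procedure might look as follows in Python. This is a sketch under stated assumptions: w is even, the solution splits evenly across the two halves, and a dictionary keyed by syndrome values plays the role of the stored list L1.

```python
from itertools import combinations

def birthday_decode(H, s, w):
    """Birthday decoding sketch: split the n columns of H into two halves,
    enumerate weight-w/2 patterns on each half, and match syndromes.
    A real implementation repeats with different splits when the
    solution does not split evenly (probability P < 1 in general)."""
    r, n = len(s), len(H)
    half = n // 2

    def col_sum(base, cols):
        total = list(base)
        for j in cols:
            total = [(a + b) % 2 for a, b in zip(total, H[j])]
        return total

    # List L1: syndromes H1·e1 of the weight-w/2 patterns on the left half.
    L1 = {}
    for cols in combinations(range(half), w // 2):
        key = tuple(col_sum([0] * r, cols))
        L1.setdefault(key, []).append(cols)

    # List L2: s + H2·e2 over the right half; a match means H·e = s mod 2.
    solutions = []
    for cols in combinations(range(half, n), w // 2):
        key = tuple(col_sum(s, cols))
        for left in L1.get(key, []):
            solutions.append(left + cols)
    return solutions
```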
00:03:10.710 --> 00:03:14.930
Coming back to the
Birthday Decoding slide, the
00:03:14.930 --> 00:03:19.730
cost of the above
enumeration will be given by the
00:03:19.730 --> 00:03:23.510
formula here with binomial
coefficients, which can also be
00:03:23.510 --> 00:03:26.660
written as 2L + (L²/2^(n-k)) where L
00:03:29.030 --> 00:03:31.220
is the size common to the two lists.
00:03:31.590 --> 00:03:34.870
To measure the exact
complexity of this algorithm, we
00:03:34.870 --> 00:03:38.880
have to take into account
that an error pattern of weight
00:03:38.880 --> 00:03:43.450
w splits evenly between the
two halves of the matrix with
00:03:43.450 --> 00:03:46.680
a probability P which
may be smaller than one.
00:03:46.860 --> 00:03:51.270
So, we may have to repeat
the process several times with
00:03:51.270 --> 00:03:53.240
different splits of the matrix.
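The splitting probability P can be made concrete: for a uniformly random weight-w pattern over n positions, P = (n/2 choose w/2)² / (n choose w). This exact formula is not stated in the transcript, so treat it as an assumption of the sketch.

```python
from math import comb

def split_probability(n, w):
    # Probability that a uniformly random weight-w pattern over n positions
    # puts exactly w/2 errors in each half (n and w assumed even).
    return comb(n // 2, w // 2) ** 2 / comb(n, w)
```

The expected number of repetitions with fresh splits is then 1/P.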
00:03:53.660 --> 00:03:57.130
And to obtain all
solutions, we may have to repeat
00:03:58.920 --> 00:04:03.240
the process of computing the two
lists and their intersection 1/P times.
00:04:04.790 --> 00:04:09.650
This will multiply the
complexity by 1/P and the final
00:04:09.650 --> 00:04:14.360
cost for obtaining all
solutions is the following, with a
00:04:14.360 --> 00:04:18.080
first term corresponding to
the construction of the list
00:04:18.080 --> 00:04:20.890
and the second term
corresponding to the number of solutions.
00:04:21.220 --> 00:04:25.900
We may relax this algorithm
a little bit by enlarging the
00:04:25.900 --> 00:04:30.680
matrices H1 and H2,
allowing a small overlap.
00:04:31.160 --> 00:04:35.840
If we do that, the
probability of success will be close
00:04:35.840 --> 00:04:40.640
to one and after a single
repetition or a constant number
00:04:40.640 --> 00:04:45.100
of repetitions of the
list computation plus
00:04:45.100 --> 00:04:47.990
intersection, we obtain the solution.
00:04:48.420 --> 00:04:52.890
The total cost of that will be 2
00:04:52.890 --> 00:04:55.810
· √(n choose w)
00:04:57.760 --> 00:05:00.600
plus a second term which
is equal to the number of
00:05:00.600 --> 00:05:03.560
solutions to our problem.
00:05:03.780 --> 00:05:06.680
And this is true up to
a small constant factor.
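The gain over exhaustive search can be seen numerically with the standard library; the parameters below are illustrative, not taken from the lecture.

```python
from math import comb, isqrt

n, w = 100, 10
exhaustive = comb(n, w)               # exhaustive search: (n choose w) column operations
birthday = 2 * isqrt(comb(n, w))      # relaxed birthday decoding: about 2·√(n choose w)
# The birthday cost is roughly the square root of the exhaustive cost,
# up to the small constant factor mentioned above.
print(exhaustive, birthday)
```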
00:05:06.980 --> 00:05:10.460
The next session will be
devoted to the use of linear
00:05:10.460 --> 00:05:13.380
algebra to solve the
syndrome decoding problem.