Canal-U


Search results

Number of programs found: 831
Label UNT Vidéocours

(6m38s)

1.7. Reed-Solomon Codes

Reed-Solomon codes were introduced by Reed and Solomon in the 1960s. These codes are still used in storage devices, from compact-disc players to deep-space applications, and they are widely used mainly because of two features: first, they are MDS codes, that is, they attain the maximum error-detection and correction capability; second, they have efficient decoding algorithms. Reed-Solomon codes are particularly useful for burst-error correction, that is, they are effective for channels that have memory. So, suppose that we consider n and k ...
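The MDS property mentioned above comes from viewing encoding as polynomial evaluation: a message of k symbols is read as a polynomial of degree less than k and evaluated at n distinct field elements. A minimal sketch over the small prime field GF(13); the parameters and evaluation points are illustrative choices, not taken from the lecture:

```python
# Toy [12, 6] Reed-Solomon code over GF(13); note the length n is
# bounded by the field size, as the next session points out.
p = 13
n, k = 12, 6

def rs_encode(message, points, p):
    """Evaluate the message polynomial m(x) at n distinct field points."""
    return [sum(m * pow(x, i, p) for i, m in enumerate(message)) % p
            for x in points]

points = list(range(1, n + 1))   # n distinct nonzero elements of GF(13)
codeword = rs_encode([1, 2, 3, 4, 5, 6], points, p)
print(codeword)
```

Two distinct polynomials of degree less than k agree on at most k - 1 points, so two codewords differ in at least n - k + 1 positions: the Singleton bound is met with equality, which is exactly the MDS property.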

(5m40s)

1.8. Goppa Codes

In this session, we will talk about another family of codes that have an efficient decoding algorithm: the Goppa codes. One limitation of the generalized Reed-Solomon codes is the fact that their length is bounded by the size of the field over which they are defined. This implies that these codes are useful only when we use a large field size. In this sequence, we will present a method to obtain new codes over small alphabets by exploiting the properties of the generalized Reed-Solomon codes. So, the idea is to construct a generalized ...

(5m35s)

1.9. McEliece Cryptosystem

This is the last session of the first week of this MOOC. We already have all the ingredients to talk about code-based cryptography. Recall that in 1976 Diffie and Hellman published their famous paper "New Directions in Cryptography", where they introduced public-key cryptography, providing a solution to the problem of key exchange. Mathematically speaking, public-key cryptography relies on the notion of a one-way trapdoor function: a function that is easy to compute in one direction and hard to invert unless you have some special information called the trapdoor. The security of the most ...

(5m35s)

2.1. Formal Definition

Welcome to the second week of this MOOC, entitled Code-Based Cryptography. This week, we will talk in detail about the McEliece cryptosystem. First, in this session, we will formally describe the McEliece and the Niederreiter systems, which are the principal public-key schemes based on error-correcting codes. Let K be a security parameter. An encryption scheme is defined by the following spaces: the space of all possible messages, the space of all ciphertexts, the space of the public keys, and the space of the secret keys. Then, we need to define the ...
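These spaces, together with key generation, encryption, and decryption, can be made concrete in a toy sketch. Textbook RSA with tiny, insecure parameters stands in here for the trapdoor function purely to illustrate the interface; in the McEliece scheme the trapdoor is a hidden Goppa code instead, and the parameters below are illustrative, not from the lecture:

```python
# Sketch: the three algorithms of a public-key encryption scheme.
# Toy textbook-RSA instantiation; real schemes use far larger keys.

def keygen():
    """Return (public_key, secret_key)."""
    p, q = 61, 53                # toy secret primes
    n = p * q                    # modulus, part of both keys
    e, d = 17, 2753              # e*d = 1 mod lcm(p-1, q-1)
    return (n, e), (n, d)

def encrypt(pk, m):
    n, e = pk
    return pow(m, e, n)          # easy direction of the one-way function

def decrypt(sk, c):
    n, d = sk
    return pow(c, d, n)          # easy only with the trapdoor d

pk, sk = keygen()
c = encrypt(pk, 65)
print(c, decrypt(sk, c))
```

The point of the formal definition is exactly this split into four spaces and three maps; the security requirements on them are the subject of the later sessions.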

(4m43s)

2.2. Security-Reduction Proof

Welcome to the second session. We will talk about the security-reduction proof. The security of a given cryptographic algorithm is reduced to the security of a known hard problem: to prove that a cryptosystem is secure, we select a problem which we know is hard to solve and reduce that problem to the security of the cryptosystem. Since the problem is hard to solve, the cryptosystem is hard to break. A security reduction is a proof that an adversary able to attack the scheme can also solve some presumably hard ...

(3m15s)

2.3. McEliece Assumptions

In this session, we will talk about the McEliece assumptions. The security of the McEliece scheme is based on two assumptions, as we have already seen: the hardness of decoding a random linear code, and the difficulty of distinguishing a code with a prescribed structure from a random one. In this sequence, we will study these two assumptions in detail. The first assumption claims that decoding a random linear code is difficult. First, notice that the general decoding problem is basically a rewriting of the Syndrome Decoding problem, and both are equivalent ...
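The Syndrome Decoding problem mentioned above can be stated concretely: given a binary parity-check matrix H, a syndrome s, and a weight bound w, find an error vector e of weight at most w with H·eᵀ = s. A brute-force sketch on a toy instance (the [7,4] Hamming code, an illustrative choice); the search is exponential in w, which is the source of the conjectured hardness for random codes:

```python
# Syndrome Decoding by exhaustive search over low-weight error vectors.
from itertools import combinations

H = [
    [1, 0, 1, 0, 1, 0, 1],   # parity-check matrix of the [7,4] Hamming code
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(H, e):
    """Compute H * e^T over GF(2)."""
    return tuple(sum(h * x for h, x in zip(row, e)) % 2 for row in H)

def brute_force_decode(H, s, w):
    """Try every support of size <= w; exponential cost in general."""
    n = len(H[0])
    for weight in range(w + 1):
        for support in combinations(range(n), weight):
            e = [1 if i in support else 0 for i in range(n)]
            if syndrome(H, e) == s:
                return e
    return None

e = brute_force_decode(H, (1, 0, 1), 1)
print(e)
```

For this structured code the decoder could read the error position straight off the syndrome; for a random linear code no such shortcut is known, which is precisely the first McEliece assumption.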

(5m32s)

2.4. Notions of Security

In this session, we will study the notions of security of public-key schemes. A public-key scheme is one-way if the probability of success of any adversary running in polynomial time is negligible. That is, without the private key, it is computationally infeasible to recover the plaintext. For McEliece, if we assume that the general decoding problem of a linear code is on average a difficult problem, and that there exists no efficient distinguisher for Goppa codes, then the McEliece scheme has the one-wayness property. However, McEliece is vulnerable to many ...

(5m5s)

2.5. Critical Attacks - Semantic Secure Conversions

In this session, we will study critical attacks against the public-key cryptosystem. Partial knowledge of the plaintext drastically reduces the computational cost of an attack on the McEliece cryptosystem. For example, suppose that the adversary knows r bits of the plaintext. Then the difficulty of recovering the remaining k - r bits in the complete McEliece scheme with parameters [n, k] is equivalent to that of recovering the full plaintext in the McEliece scheme with parameters [n, k - r]. This is given by this formula. You just ...

(3m45s)

2.6. Reducing the Key Size

In the next three sessions, I will explain how to reduce the key size of code-based cryptosystems. Circulant matrices are the central point in many attempts to reduce the key size of code-based cryptosystems, since they provide an efficient representation. A circulant matrix is a square matrix whose rows are obtained by cyclically shifting the first row. An alternative representation of an n-tuple of elements uses a polynomial; thus, this matrix can be described by a polynomial, and the i-th row of a circulant matrix can be expressed by this formula. ...
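The key-size saving is easy to see in code: an n × n circulant matrix is determined by its n-entry first row, and row i is that row cyclically shifted i positions to the right (equivalently, the coefficient vector of xⁱ·a(x) mod (xⁿ − 1), where a(x) has the first row as coefficients). A minimal sketch; the example row is illustrative:

```python
# Build an n x n circulant matrix from its first row: n stored entries
# describe n*n matrix entries, which is the key-size saving.

def circulant(first_row):
    n = len(first_row)
    # row i, column j holds first_row[(j - i) mod n]: a right shift by i
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

M = circulant([1, 0, 1, 1])
for row in M:
    print(row)
```

Multiplying by a circulant matrix corresponds to polynomial multiplication modulo xⁿ − 1, which is why the polynomial view in the session is more than a notational convenience.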

(4m41s)

2.7. Reducing the Key Size - LDPC codes

LDPC codes have an interesting feature: they are free of algebraic structure. In this session, we will study this proposal for the McEliece cryptosystem in detail. LDPC codes were originally introduced by Gallager, in his doctoral thesis, in 1963. One of the characteristics of LDPC codes is the existence of several iterative decoding algorithms which achieve excellent performance. Later, in 1981, Tanner introduced a graphical representation of these codes as bipartite graphs. However, they remained almost forgotten by the coding-theory community until 1996, when MacKay and Neal rediscovered ...
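Tanner's bipartite-graph representation mentioned above is straightforward to build from a parity-check matrix: one variable node per column, one check node per row, and an edge wherever H has a 1. A minimal sketch on an illustrative toy matrix (too small to be genuinely low-density):

```python
# Tanner graph of a parity-check matrix as an explicit edge list.
H = [
    [1, 1, 0, 1, 0, 0],   # check node c0
    [0, 1, 1, 0, 1, 0],   # check node c1
    [1, 0, 0, 0, 1, 1],   # check node c2
]

# edge (check i, variable j) for every nonzero entry H[i][j]
edges = [(f"c{i}", f"v{j}")
         for i, row in enumerate(H)
         for j, bit in enumerate(row) if bit]
print(edges)
```

The iterative decoders mentioned in the session (bit-flipping, belief propagation) work by passing messages along exactly these edges, which is why sparsity of H matters: few edges mean cheap iterations.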

 