E-book, English, 400 pages
Series: Collection IRIS
Berrou: Codes and Turbo Codes
1st edition, 2011
ISBN: 978-2-8178-0039-4
Publisher: Springer Paris
Format: PDF
Copy protection: 1 - PDF Watermark
This book is devoted to one of the essential functions of modern telecommunications systems: channel coding, also known as error-correction coding. Its main topics are iteratively decoded algebraic codes, convolutional codes, and concatenated codes.
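To give a flavour of the subject, here is a minimal sketch (not taken from the book) of encoding and single-error correction with the classical (7,4) Hamming code, one of the block codes treated in Chapter 4:

```python
# Illustrative sketch: the (7,4) Hamming code in systematic form.
# G = [I4 | P] generates codewords; H = [P^T | I3] checks them.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def encode(msg):
    """Map 4 information bits to a 7-bit codeword (arithmetic modulo 2)."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def correct(received):
    """Correct at most one flipped bit using the syndrome."""
    r = list(received)
    syndrome = [sum(h * x for h, x in zip(row, r)) % 2 for row in H]
    if any(syndrome):
        # A single error yields a syndrome equal to one column of H;
        # flipping the corresponding bit repairs the codeword.
        for i in range(7):
            if [row[i] for row in H] == syndrome:
                r[i] ^= 1
                break
    return r

codeword = encode([1, 0, 1, 1])        # -> [1, 0, 1, 1, 0, 1, 0]
corrupted = list(codeword)
corrupted[2] ^= 1                      # channel flips one bit
assert correct(corrupted) == codeword  # decoder repairs it
```

This toy decoder handles a single bit error per 7-bit block; the book's subject is how far beyond such simple codes modern iterative (turbo, LDPC) schemes can go.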
Further info & material
1;Title Page;3
2;Copyright Page;4
3;Codes and Turbo Codes;5
4;Foreword;7
5;Table of Contents;10
6;Chapter 1 Introduction;15
6.1;1.1 Digital messages;17
6.2;1.2 A first code;18
6.3;1.3 Hard input decoding and soft input decoding;21
6.4;1.4 Hard output decoding and soft output decoding;25
6.5;1.5 The performance measure;25
6.6;1.6 What is a good code?;29
6.7;1.7 Families of codes;31
7;Chapter 2 Digital communications;33
7.1;2.1 Digital modulations;33
7.1.1;2.1.1 Introduction;33
7.1.2;2.1.2 Linear Memoryless Modulations;36
7.1.2.1;Amplitude-shift keying with M states: M-ASK;36
7.1.2.2;Phase Shift Keying with M states (M-PSK);39
7.1.2.3;Quadrature Amplitude Modulation using two quadrature carriers (M-QAM);40
7.1.3;2.1.3 Memoryless modulation with M states (M-FSK);43
7.1.4;2.1.4 Modulations with memory by continuous phase frequency shift keying (CPFSK);45
7.1.4.1;Continuous phase-frequency-shift keying with modulation index h = 1/2: Minimum Shift Keying (MSK);46
7.1.4.2;L-ary Raised Cosine modulation (L-RC);48
7.1.4.3;Gaussian minimum shift keying modulation (GMSK);49
7.2;2.2 Structure and performance of the optimal receiver on a Gaussian channel;51
7.2.1;2.2.1 Structure of the coherent receiver;51
7.2.2;2.2.2 Performance of the coherent receiver;56
7.2.2.1;Amplitude shift keying with M states;56
7.2.2.2;Phase shift keying with M states;60
7.2.2.3;Amplitude modulation on two quadrature carriers – M-QAM;63
7.2.2.4;Frequency shift keying – M-FSK;66
7.2.2.5;Continuous phase frequency shift keying – CPFSK;69
7.3;2.3 Transmission on a band-limited channel;73
7.3.1;2.3.1 Introduction;73
7.3.2;2.3.2 Intersymbol interference;74
7.3.3;2.3.3 Condition of absence of ISI: Nyquist criterion;77
7.3.3.1;Optimal distribution of filtering between transmission and reception;80
7.3.4;2.3.4 Expression of the error probability in presence of Nyquist filtering;82
7.4;2.4 Transmission on fading channels;83
7.4.1;2.4.1 Characterization of a fading channel;83
7.4.1.1;Coherence bandwidth;84
7.4.1.2;Coherence time;87
7.4.2;2.4.2 Transmission on non-frequency-selective slow-fading channels;87
7.4.2.1;Performance on a Rayleigh channel;87
7.4.2.2;Performance on a Rayleigh channel with diversity;89
7.4.2.3;Transmission on a slow-fading frequency-selective channel;92
7.4.2.4;Transmission with equalization at reception;95
8;Chapter 3 Theoretical limits;96
8.1;3.1 Information theory;96
8.1.1;3.1.1 Transmission channel;96
8.1.2;3.1.2 An example: the binary symmetric channel;97
8.1.2.1;Configurations of errors on the binary symmetric channel;97
8.1.2.2;Mutual information and capacity of the binary symmetric channel;98
8.1.3;3.1.3 Overview of the fundamental coding theorem;99
8.1.4;3.1.4 Geometrical interpretation;100
8.1.5;3.1.5 Random coding;101
8.1.5.1;Codes imitating random coding;102
8.2;3.2 Theoretical limits to performance;104
8.2.1;3.2.1 Binary input and real output channel;104
8.2.2;3.2.2 Capacity of a transmission channel;105
8.2.2.1;Shannon limit of a band-limited continuous input and output Gaussian channel;106
8.2.2.2;Capacity of a discrete input Gaussian channel;106
8.2.2.3;Capacity of the Rayleigh channel;108
8.3;3.3 Practical limits to performance;109
8.3.1;3.3.1 Gaussian binary input channel;109
8.3.2;3.3.2 Gaussian continuous input channel;110
8.3.3;3.3.3 Some examples of limits;112
8.4;3.4 Minimum distances required;113
8.4.1;3.4.1 MHD required with 4-PSK modulation;113
8.4.2;3.4.2 MHD required with 8-PSK modulation;115
8.4.3;3.4.3 MHD required with 16-QAM modulation;117
8.5;Bibliography;120
9;Chapter 4 Block codes;121
9.1;4.1 Block codes with binary symbols;122
9.1.1;4.1.1 Generator matrix of a binary block code;122
9.1.2;4.1.2 Dual code and parity check matrix;124
9.1.3;4.1.3 Minimum distance;125
9.1.4;4.1.4 Extended codes and shortened codes;126
9.1.5;4.1.5 Product codes;127
9.1.6;4.1.6 Examples of binary block codes;127
9.1.6.1;Parity check code;127
9.1.6.2;Repetition code;128
9.1.6.3;Hamming code;129
9.1.6.4;Maximum length code;130
9.1.6.5;Hadamard code;130
9.1.6.6;Reed-Muller codes;131
9.1.7;4.1.7 Cyclic codes;132
9.1.7.1;Definition and polynomial representation;132
9.1.7.2;Cyclic code in systematic form;134
9.1.7.3;Implementation of an encoder;136
9.1.7.4;BCH codes;137
9.2;4.2 Block codes with non-binary symbols;142
9.2.1;4.2.1 Reed-Solomon codes;142
9.2.2;4.2.2 Implementing the encoder;144
9.3;4.3 Decoding and performance of codes with binary symbols;144
9.3.1;4.3.1 Error detection;144
9.3.1.1;Detection capability;145
9.3.1.2;Probability of non-detection of errors;145
9.3.2;4.3.2 Error correction;146
9.3.2.1;Hard decoding;146
9.3.2.2;Soft decoding;149
9.4;4.4 Decoding and performance of codes with non-binary symbols;155
9.4.1;4.4.1 Hard input decoding of Reed-Solomon codes;155
9.4.2;4.4.2 Peterson’s direct method;156
9.4.2.1;Description of the algorithm for codes with non-binary symbols;156
9.4.2.2;Simplification of Peterson’s algorithm for binary codes;160
9.4.2.3;Chien algorithm;162
9.4.3;4.4.3 Iterative method;163
9.4.3.1;Berlekamp-Massey algorithm for codes with non-binary symbols;164
9.4.3.2;Euclid’s algorithm;166
9.4.3.3;Calculating coefficients ej by a transform;167
9.4.3.4;Berlekamp-Massey algorithm for binary cyclic codes;169
9.4.3.5;Euclid’s algorithm for binary codes;170
9.4.4;4.4.4 Hard input decoding performance of Reed-Solomon codes;171
9.5;Bibliography;172
9.6;Appendix: Notions about Galois fields and minimal polynomials;173
9.6.1;Definition;173
9.6.2;Primitive element of a Galois field;174
9.6.3;Minimal polynomial with coefficients in F2 associated with an element of a Galois field Fq;174
9.6.4;Minimal polynomial with coefficients in Fq associated with an element in a Galois field Fq;176
9.6.5;Primitive polynomials;176
10;Chapter 5 Convolutional codes and their decoding;179
10.1;5.1 History;179
10.2;5.2 Representations of convolutional codes;181
10.2.1;5.2.1 Generic representation of a convolutional encoder;181
10.2.2;5.2.2 Polynomial representation;184
10.2.3;5.2.3 Tree of a code;185
10.2.4;5.2.4 Trellis of a code;185
10.2.5;5.2.5 State machine of a code;188
10.3;5.3 Code distances and performance;190
10.3.1;5.3.1 Choosing a good code;190
10.3.2;5.3.2 RTZ sequences;190
10.3.3;5.3.3 Transfer function and distance spectrum;192
10.3.4;5.3.4 Performance;195
10.4;5.4 Decoding convolutional codes;198
10.4.1;5.4.1 Model of the transmission chain and notations;199
10.4.2;5.4.2 The Viterbi algorithm;199
10.4.2.1;Example of applying the Viterbi algorithm;202
10.4.3;5.4.3 The Maximum A Posteriori algorithm or MAP algorithm;204
10.5;5.5 Convolutional block codes;204
10.5.1;5.5.1 Trellis termination;205
10.5.1.1;Classical trellis termination;205
10.5.1.2;Tail-biting;206
10.5.2;5.5.2 Puncturing;208
10.6;Bibliography;210
11;Chapter 6 Concatenated codes;212
11.1;6.1 Parallel concatenation and serial concatenation;214
11.2;6.2 Parallel concatenation and LDPC codes;217
11.3;6.3 Permutations;219
11.4;6.4 Turbo crossword;219
11.5;Bibliography;222
12;Chapter 7 Convolutional turbo codes;223
12.1;7.1 The history of turbo codes;223
12.2;7.2 Multiple concatenation of RSC codes;225
12.3;7.3 Turbo codes;227
12.3.1;7.3.1 Termination of constituent codes;231
12.3.2;7.3.2 The permutation function;232
12.3.2.1;Regular permutation;233
12.3.2.2;Necessity for disorder;235
12.3.2.3;Intra-symbol disorder;237
12.3.2.4;Irregular permutations;239
12.3.2.5;Quadratic Permutation;241
12.4;7.4 Decoding turbo codes;245
12.4.1;7.4.1 Turbo decoding;245
12.4.2;7.4.2 SISO decoding and extrinsic information;248
12.4.2.1;Notations;249
12.4.2.2;Decoding following the Maximum A Posteriori (MAP) criterion;250
12.4.2.3;The simplified Max-Log-MAP or SubMAP algorithm;252
12.4.3;7.4.3 Practical considerations;255
12.5;7.5 m-binary turbo codes;259
12.5.1;7.5.1 m-binary RSC encoders;259
12.5.2;7.5.2 m-binary turbo codes;261
12.5.2.1;8-state double-binary turbo code;263
12.5.2.2;16-state double-binary turbo code;264
12.6;7.6 Analysis tools;266
12.6.1;7.6.1 Theoretical performance;266
12.6.2;7.6.2 Asymptotic behaviour;266
12.6.2.1;Error impulse method;267
12.6.2.2;Modified error impulse method;269
12.6.2.3;Double error impulse method;269
12.6.3;7.6.3 Convergence;269
12.6.3.1;Transfer function for a SISO decoder of extrinsic information;270
12.6.3.1.1;a. Definition of the mutual information (MI);270
12.6.3.1.2;b. Definition of the a priori mutual information;271
12.6.3.1.3;c. Calculation of the output mutual information;272
12.6.3.1.4;d. Practical method to obtain the transfer function of the extrinsic information;273
12.6.3.1.5;e. An example;274
12.6.3.2;EXIT chart;274
12.7;Bibliography;276
13;Chapter 8 Turbo product codes;281
13.1;8.1 History;281
13.2;8.2 Product codes;281
13.2.1;Definition;282
13.2.2;Example 8.1;282
13.3;8.3 Hard input decoding of product codes;283
13.3.1;8.3.1 Row-column decoding;283
13.3.1.1;Property;283
13.3.1.2;Example 8.2;284
13.3.2;8.3.2 The Reddy-Robinson algorithm;284
13.3.2.1;Example 8.3;286
13.4;8.4 Soft input decoding of product codes;287
13.4.1;8.4.1 The Chase algorithm with weighted input;287
13.4.1.1;Example 8.4;289
13.4.2;8.4.2 Performance of the Chase-Pyndiah algorithm;290
13.4.3;8.4.3 The Fang-Battail algorithm;290
13.4.3.1;Example 8.5;292
13.4.4;8.4.4 The Hartmann-Nazarov algorithm;295
13.4.4.1;Fast Hadamard transform;297
13.4.4.2;Example 8.6;298
13.4.5;8.4.5 Other soft input decoding algorithms;299
13.5;8.5 Implementation of the Chase-Pyndiah algorithm;301
13.6;Bibliography;303
14;Chapter 9 LDPC codes;306
14.1;9.1 Principle of LDPC codes;306
14.1.1;9.1.1 Parity check code;307
14.1.1.1;Definition;307
14.1.1.2;Parity code with three bits;307
14.1.1.3;Parity check code with n bits;309
14.1.2;9.1.2 Definition of an LDPC code;310
14.1.2.1;Linear block codes;310
14.1.2.2;Low density parity check codes;311
14.1.2.3;Coding rate;312
14.1.3;9.1.3 Encoding;313
14.1.3.1;Generic encoding;313
14.1.3.2;Specific constructions;315
14.1.3.3;Summary;316
14.1.3.4;Analogy between an LDPC code and a turbo code;316
14.1.4;9.1.4 Decoding LDPC codes;317
14.1.4.1;Hard input algorithm;318
14.1.4.2;Belief propagation algorithm;318
14.1.5;9.1.5 Random construction of LDPC codes;321
14.1.5.1;Optimization of irregularity profiles;321
14.1.5.2;Optimization of cycle size;323
14.1.5.3;Selecting the code by the impulse method;323
14.1.5.4;Selecting the code by simulation;323
14.1.6;9.1.6 Some geometrical constructions of LDPC codes;324
14.1.6.1;Cayley / Ramanujan constructions;324
14.1.6.2;Kou, Lin and Fossorier’s Euclidean / Projective Geometry LDPC;325
14.1.6.3;Constructions based on permutation matrices;325
14.1.6.4;Matrices based on Pseudo random generators;326
14.1.6.5;Array-based LDPC;326
14.1.6.6;BIBDs, Latin rectangles;326
14.2;9.2 Architecture for decoding LDPC codes for the Gaussian channel;327
14.2.1;9.2.1 Analysis of the complexity;327
14.2.2;9.2.2 Architecture of a generic node processor (GNP);328
14.2.2.1;Choice of a generic operator;331
14.2.3;9.2.3 Generic architecture for message propagation;331
14.2.3.1;Presentation of the model;331
14.2.3.2;Example of an implementation;333
14.2.4;9.2.4 Combining parameters of the architecture;334
14.2.5;9.2.5 Example of synthesis of an LDPC decoder architecture;337
14.2.5.1;Flooding schedule (according to parities);337
14.2.5.2;Horizontal interleaving schedule;338
14.2.6;9.2.6 Sub-optimal decoding algorithm;339
14.2.6.1;Single message decoding algorithm (. = 1);339
14.2.6.2;Sub-optimal PNP algorithms (. > 1);341
14.2.7;9.2.7 Influence of quantization;342
14.2.8;9.2.8 State of the art of published LDPC decoder architectures;344
14.3;Bibliography;346
15;Chapter 10 Turbo codes and large spectral efficiency transmissions;352
15.1;10.1 Turbo trellis coded modulation (TTCM);352
15.2;10.2 Pragmatic turbo coded modulation;356
15.3;Bibliography;366
16;Chapter 11 The turbo principle applied to equalization and detection;367
16.1;11.1 Turbo equalization;368
16.1.1;11.1.1 Multipath channels and intersymbol interference;368
16.1.2;11.1.2 The equalization function;370
16.1.3;11.1.3 Combining equalization and decoding;374
16.1.4;11.1.4 Principle of turbo equalization;377
16.1.5;11.1.5 MAP turbo equalization;380
16.1.5.1;Implementation of the BCJR-MAP equalizer;380
16.1.5.2;Example of performance;384
16.1.5.3;Complexity of the MAP turbo equalizer and alternative solutions;386
16.1.5.4;Architectures and applications;387
16.1.6;11.1.6 MMSE turbo equalization;389
16.1.6.1;Principle of soft-input soft-output linear MMSE equalization;390
16.1.6.2;Adaptive implementation of the equalizer;398
16.1.6.3;Examples of performance;400
16.1.6.4;Example of implementation and applications;403
16.2;11.2 Multi-user turbo detection and its application to CDMA systems;404
16.2.1;11.2.1 Introduction and some notations;404
16.2.2;11.2.2 Multi-user detection;405
16.2.2.1;Standard receiver;405
16.2.2.2;Optimal joint detection;406
16.2.2.3;Decorrelator detector;407
16.2.2.4;Linear MMSE detector;407
16.2.2.5;Iterative detector;408
16.2.3;11.2.3 Turbo CDMA;411
16.2.3.1;Turbo SIC detector;411
16.2.3.2;Some simulations;412
16.2.3.3;Turbo SIC/RAKE detector;412
16.3;11.3 Conclusions;413
16.4;Bibliography;415
17;Index;421