VLSI Architectures for Forward Error-Control Decoders: Maximum Likelihood Decoding and the Viterbi Algorithm


Maximum likelihood decoding involves estimating the transmitted symbol sequence u by determining the path in the trellis that maximizes the probability p(y|u). Hence, this approach is also called ML sequence estimation (MLSE). The ML principle is quite general, though, and can be applied equally well to the problem of estimating individual symbols, i.e., for symbol-by-symbol detection. For example, consider the case of ML symbol-by-symbol detection when the transmitted symbols c are ±1 and the received symbols are y = c + h, where h is zero-mean additive white Gaussian noise (AWGN) with a probability density function (PDF) given by

p(h) = (1/√(2πσ²)) exp(−h²/(2σ²))

Hence, to determine whether p(y|c = 1) ≥ p(y|c = −1), all that needs to be done is to check whether (y − 1)² ≤ (y + 1)². In other words, since y can take any real value, one needs to compute the Euclidean distance between the observation y and all possible transmitted symbol values (±1 in this case) and choose as the ML result the transmitted symbol value that yields the smallest Euclidean distance.
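As a concrete illustration, the threshold test above can be written in a few lines of Python (a minimal sketch; the function name is ours):

```python
def ml_detect(y):
    """ML symbol-by-symbol detection for antipodal symbols c = +/-1 in AWGN:
    choose the symbol value closest to the observation y in Euclidean distance."""
    # Comparing (y - 1)^2 <= (y + 1)^2 reduces to y >= 0, i.e., a sign decision.
    return 1 if (y - 1) ** 2 <= (y + 1) ** 2 else -1
```

For example, `ml_detect(0.3)` returns `1` and `ml_detect(-1.7)` returns `-1`, matching the simple sign decision the distance comparison reduces to.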

Note that the appearance of the Euclidean distance metric in the example above is a direct consequence of the AWGN assumption with signal-independent noise. In some applications, such as optical communications employing non-return-to-zero (NRZ) modulation, the noise variances for a “1” and a “0” are significantly different. In such cases, the distance metric needs to be modified accordingly. Furthermore, in situations where the noise is not Gaussian, a distance metric may not exist at all; instead, the PDFs themselves need to be estimated and employed in the decoding process.
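Such a modified metric can be sketched by comparing the full Gaussian log-likelihoods with symbol-dependent means and variances; the numeric levels and variances below are hypothetical, chosen only for illustration:

```python
import math

def ml_detect_asymmetric(y, mean0=0.0, var0=0.1, mean1=1.0, var1=0.4):
    """ML detection when the Gaussian noise variance depends on the transmitted
    symbol (e.g., NRZ optical links). The default levels/variances are
    hypothetical. The simple Euclidean metric no longer applies, so the full
    log-likelihood of each hypothesis is compared, including the log-variance term."""
    ll0 = -0.5 * math.log(2 * math.pi * var0) - (y - mean0) ** 2 / (2 * var0)
    ll1 = -0.5 * math.log(2 * math.pi * var1) - (y - mean1) ** 2 / (2 * var1)
    return 1 if ll1 >= ll0 else 0
```

Because the two variances differ, the decision threshold is no longer at the midpoint between the two levels, which is exactly why the plain Euclidean distance must be abandoned here.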

The above description can be extended to the observed sequence y and the code sequence c, where the latter is known to be the output of the encoder FSM that takes u as its input. The decoder knows the number of states in the FSM and the state-transition matrix, but it does not know which state sequence has been traversed at any given time index. The job of the ML decoder is to determine the most likely state sequence traversed based on the observation y, from which it can then determine the most likely code sequence c and, more importantly, the most likely input sequence u. The ML decoder is said to execute an MLSE procedure.

As mentioned earlier, the Viterbi algorithm [34] was first proposed as an efficient procedure to compute the ML solution to the problem of decoding convolutional codes.
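To make the trellis search concrete, here is a minimal hard-decision Viterbi decoder in Python for the standard rate-1/2, constraint-length-3 convolutional code with generator polynomials (7, 5) in octal (a common textbook code, not necessarily the one treated in this chapter), along with the matching encoder:

```python
def conv_encode(msg):
    """Rate-1/2 convolutional encoder with generators g0 = 1+D+D^2, g1 = 1+D^2."""
    state, out = (0, 0), []
    for u in msg:
        s1, s0 = state
        out += [u ^ s1 ^ s0, u ^ s0]
        state = (u, s1)
    return out

def viterbi_decode(bits, n_in):
    """Hard-decision Viterbi decoding: find the input sequence whose trellis
    path minimizes the Hamming distance to the received bits."""
    def step(state, u):
        s1, s0 = state
        return (u, s1), (u ^ s1 ^ s0, u ^ s0)  # (next state, output pair)

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    # Path metric per state; only the all-zero state is valid at t = 0.
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for t in range(n_in):
        r = bits[2 * t: 2 * t + 2]
        new_metric, new_paths = {}, {}
        for s in states:
            best = None
            for prev in states:          # add-compare-select over predecessors
                for u in (0, 1):
                    nxt, out = step(prev, u)
                    if nxt != s:
                        continue
                    m = metric[prev] + (out[0] != r[0]) + (out[1] != r[1])
                    if best is None or m < best[0]:
                        best = (m, paths[prev] + [u])
            new_metric[s], new_paths[s] = best
        metric, paths = new_metric, new_paths
    # The survivor with the smallest final metric is the ML input sequence.
    return paths[min(states, key=lambda s: metric[s])]
```

With this code's free distance of 5, a single flipped code bit early in the block is corrected: decoding `conv_encode(msg)` with one early bit inverted still recovers `msg`.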
