VLSI Architectures for Forward Error-Control Decoders: Viterbi Decoder Architectures
Viterbi Decoder Architectures
Convolutional codes find use in a wide range of applications, such as magnetic disk-drive read channels, satellite links, and wireless communications. These codes are typically low-rate codes, i.e., R = 1/2 or R = 1/3 for the most commonly employed codes. The Viterbi algorithm [34] is an efficient algorithm for decoding convolutional codes. The complexity of a Viterbi decoder is inversely proportional to the code rate, unless the code rate is increased via puncturing.
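Puncturing raises the rate of a low-rate mother code by periodically deleting coded bits before transmission. As a minimal sketch (the puncturing pattern [1, 1, 1, 0] is an illustrative assumption, not taken from the text), a rate-1/2 output stream can be punctured to rate 2/3 as follows:

```python
# Hypothetical sketch of puncturing: deleting every fourth coded bit of a
# rate-1/2 stream according to the (assumed) repeating pattern [1, 1, 1, 0].

def puncture(coded_bits, pattern=(1, 1, 1, 0)):
    """Keep only the coded bits where the repeating pattern has a 1."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

# Four information bits produce eight rate-1/2 coded bits; six survive
# puncturing, so the effective rate is 4/6 = 2/3.
print(puncture([1, 1, 1, 0, 0, 0, 0, 1]))  # -> [1, 1, 1, 0, 0, 0]
```

The decoder can still run the Viterbi algorithm on the mother code by inserting neutral ("erased") metrics at the punctured positions, which is why puncturing raises the rate without raising decoder complexity proportionally.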
Convolutional codes can be viewed as linear block codes with memory, i.e., the output of the linear encoder depends not only on the present information symbols but also on previous information symbols. Thus, a convolutional encoder is a finite-state machine (FSM) and the Viterbi algorithm is a particularly efficient method of estimating the state sequence of an FSM from a sequence of noisy observations of its output.
Convolutional Encoder
Consider an (no, ko, K) convolutional code whose encoder is an FSM with K - 1 memory elements that encodes a ko-bit data symbol into an no-bit code symbol, as shown in Figure 82.7(a). The parameter K is called the constraint length of the code. At time index k, the N = 2^(K-1) possible encoder states are denoted as si(k), where i = 0, …, N - 1, and the branch connecting state si(k) with state sj(k + 1) is denoted as bi,j(k). Note that si(k) and bi,j(k) are labels and not variables.
Given the current state si(k) and an input symbol u(k), the encoder transitions to a unique next state sj(k + 1) and generates a unique output symbol c(k). Hence, for an input sequence of L data symbols u = (u(0), u(1), …, u(L - 1)), the encoder traces a unique path through the trellis, starting from the initial state s0(0), while generating L no-bit code symbols c = (c(0), c(1), …, c(L - 1)). An efficient method of describing all possible state transitions in an encoder is a trellis diagram. A section of the trellis of the encoder of Figure 82.7(a) is depicted in Figure 82.7(b), where the solid edges correspond to u(k) = 1 and the dashed edges correspond to u(k) = 0. Note that the number of branches emerging from or ending in any state is 2^ko.
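To make the encoder FSM concrete, the following is a minimal sketch of a (no = 2, ko = 1, K = 3) encoder. The generator polynomials g0 = 111 and g1 = 101 (7 and 5 in octal) are an illustrative assumption, not taken from the text; they are the classic textbook choice for this constraint length.

```python
# Hypothetical (2, 1, 3) convolutional encoder sketch.
# Assumed generators: g0 = 0b111, g1 = 0b101 (7, 5 octal).

def conv_encode(bits):
    """Encode a bit sequence; the state holds the K - 1 = 2 previous inputs."""
    state = 0                      # 2-bit shift register, initially all zero
    out = []
    for u in bits:
        reg = (u << 2) | state     # current input followed by the memory bits
        c0 = bin(reg & 0b111).count("1") % 2   # parity against g0 = 111
        c1 = bin(reg & 0b101).count("1") % 2   # parity against g1 = 101
        out.extend([c0, c1])       # no = 2 coded bits per input bit
        state = (reg >> 1) & 0b11  # shift: new state = (u, previous input)
    return out

print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Each iteration corresponds to one trellis section: the pair (state, u) selects a branch bi,j(k), and the two parity bits are that branch's output label.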
The received sequence y = (y(0), y(1), …, y(L - 1)) at the receiver is simply a noisy version of the transmitted code sequence c. Thus, the convolutional decoding problem can be defined as estimating the input sequence u given the noisy sequence y. Two major approaches for estimating the transmitted bits u(k) from y are: (1) maximum-likelihood (ML) decoding and (2) maximum a posteriori probability (MAP) decoding. MAP decoding is known to be much more complex than ML decoding. Furthermore, the results of MAP and ML decoding are identical when the probability distribution of the input u(k) is uniform. In 1967, Andrew Viterbi [34] proposed a computationally efficient procedure to implement ML decoding of convolutional codes. As a result, the Viterbi algorithm is almost universally applied whenever there is a convolutional code in a communication link. Hence, we will focus on the Viterbi algorithm and the ML decoding approach in this section.
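As a preview of the add-compare-select recursion at the heart of the algorithm, the following is a minimal hard-decision Viterbi decoder sketch for a (2, 1, 3) code, using the Hamming distance as the branch metric. The generators (7, 5 octal) and the assumption that the encoder starts in the all-zero state are illustrative choices, not taken from the text.

```python
# Minimal hard-decision Viterbi decoder sketch for an assumed (2, 1, 3) code
# with generators g0 = 0b111, g1 = 0b101; Hamming distance as branch metric.

def viterbi_decode(received, L):
    """Decode 2L received bits into L information bits (ML over the trellis)."""
    INF = float("inf")
    g = (0b111, 0b101)

    def step(state, u):
        # Next state and output label of branch b_{state,next}(k).
        reg = (u << 2) | state
        out = tuple(bin(reg & gi).count("1") % 2 for gi in g)
        return (reg >> 1) & 0b11, out

    metric = [0] + [INF] * 3           # assume the encoder starts in state 0
    paths = [[] for _ in range(4)]     # survivor path per state
    for k in range(L):
        y = tuple(received[2 * k : 2 * k + 2])
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for u in (0, 1):           # extend both branches out of state s
                ns, out = step(s, u)
                m = metric[s] + sum(a != b for a, b in zip(out, y))
                if m < new_metric[ns]:      # compare-select: keep the survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]  # survivor of the best end state
```

For example, the noiseless coded stream [1, 1, 1, 0, 0, 0, 0, 1] decodes back to [1, 0, 1, 1], and the same result survives a single flipped bit, since this code's free distance of 5 allows correction of up to two channel errors.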
Next, we provide a brief description of the ML decoding principle and then describe the Viterbi algorithm.