VLSI Architectures for Forward Error-Control Decoders: Turbo Decoder Architectures

Turbo Decoder Architectures

Turbo decoders are composed of two or more constituent soft-input soft-output (SISO) decoders, which correspond to the component codes employed in the transmitter, and an interconnection of these constituent decoders through an interleaver/deinterleaver [45,46]. The decoding algorithm employed in the constituent decoders is either the MAP algorithm or SOVA, but it is well known that MAP-based turbo decoders outperform SOVA-based turbo decoders. The MAP algorithm provides the LLR of the transmitted code symbols u(k). The LLRs are iteratively updated by the constituent decoders. However, the use of iterative processing results in large computational and storage complexity, and hence high power dissipation in the receiver. Therefore, low-power and high-throughput architectures for turbo decoders have recently been investigated [36,47–64] for wireless and broadband applications. In this section, we first describe the algorithm and VLSI architecture of SISO MAP decoding and then present high-throughput and low-power turbo decoder architectures.

MAP Decoder

In this section, the algorithm for the SISO MAP decoder is described, followed by a description of a baseline VLSI architecture for a SISO MAP decoder.

(1) The MAP Algorithm. Unlike the Viterbi algorithm, the MAP algorithm determines each of the transmitted information symbols u(k) independently by maximizing the a posteriori probability P(u(k)|y), where y denotes the received encoded bits used to determine u(k). The BCJR algorithm [48] solves the MAP decoding problem efficiently, and its log-domain version, i.e., the log-MAP algorithm, has the advantage that it can be formulated in terms of additions instead of multiplications.

The log-MAP algorithm estimates the LLR of the data symbol u(k), denoted L(u(k)) and defined below:

$$
L(u(k)) = \log\frac{P(u(k)=1 \mid y)}{P(u(k)=0 \mid y)}
        = \max_{u(k)=1}\bigl[\alpha(s_i(k-1)) + \lambda(s_i(k-1), s_j(k)) + \beta(s_j(k))\bigr]
        - \max_{u(k)=0}\bigl[\alpha(s_i(k-1)) + \lambda(s_i(k-1), s_j(k)) + \beta(s_j(k))\bigr]
\qquad (82.26)
$$

The first term in Eq. (82.26) is the forward metric α, which is computed recursively as

$$
\alpha(s_j(k)) = \max_{s_i(k-1)}\bigl[\alpha(s_i(k-1)) + \lambda(s_i(k-1), s_j(k))\bigr]
\qquad (82.27)
$$

where the max operation is performed over all states s_i(k − 1) that are connected to state s_j(k). The α(s_j(k)) update is performed recursively starting from s_i(0) and is referred to as the forward iteration. The second term in Eq. (82.26), λ(s_i(k − 1), s_j(k)), is referred to as the branch metric and is related to the probability that a transition from s_i(k − 1) to s_j(k) occurs. The branch metric λ(s_i(k − 1), s_j(k)) is computed from the channel output, the noise statistics, and the error-free output of the branch connecting s_i(k − 1) and s_j(k) at time k. Note that each trellis section has 2^K possible transitions. The third term, β(s_j(k)), is referred to as the backward metric and is equal to the probability that the trellis reaches state s_j(k) given the future observations y(k + 1 : L − 1). The backward metric β(s_j(k)) is computed recursively as

$$
\beta(s_i(k)) = \max_{s_j(k+1)}\bigl[\beta(s_j(k+1)) + \lambda(s_i(k), s_j(k+1))\bigr]
\qquad (82.28)
$$

where the max operation is performed over all states s_j(k + 1) that are connected to state s_i(k). The β(s_i(k)) update is performed recursively starting from s_j(L − 1) and is referred to as the backward iteration.

Hence, the decoding process consists of three steps. First, the branch metrics in each trellis section are computed. Second, the forward and backward metrics α(s_j(k)) and β(s_i(k)) are computed recursively via Eq. (82.27) and Eq. (82.28). Third, the LLR L(u(k)) is computed as

$$
L(u(k)) = \max_{u(k)=1}\bigl[\alpha(s_i(k-1)) + \lambda(s_i(k-1), s_j(k)) + \beta(s_j(k))\bigr]
        - \max_{u(k)=0}\bigl[\alpha(s_i(k-1)) + \lambda(s_i(k-1), s_j(k)) + \beta(s_j(k))\bigr]
\qquad (82.29)
$$

where the two max operations are taken over all branches (s_i(k − 1), s_j(k)) labeled with u(k) = 1 and u(k) = 0, respectively. For better accuracy, the max* operation

$$
\max{}^{*}(a, b) = \max(a, b) + \log\bigl(1 + e^{-|a-b|}\bigr)
\qquad (82.30)
$$

can be employed instead of the max, in which case the performance of the log-domain MAP algorithm approaches that of the BCJR algorithm [49] to within 0.05 dB. The second term in Eq. (82.30) is referred to as the correction factor.
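To make the three decoding steps concrete, the following C sketch (not taken from the original text) runs the forward recursion of Eq. (82.27), the backward recursion of Eq. (82.28), and the LLR computation of Eq. (82.29), using the max* operation of Eq. (82.30). The 4-state trellis table NEXT_STATE, the block length, the pre-computed branch metrics lambda[k][s][u], and the assumption that the trellis starts and ends in state 0 are all illustrative.

```c
#include <math.h>

#define NUM_STATES 4      /* hypothetical 4-state trellis               */
#define BLOCK_LEN  8      /* L: number of trellis sections (toy value)  */
#define NEG_INF   -1e30

/* max* operation of Eq. (82.30): max plus the correction factor */
static double max_star(double a, double b)
{
    double m = (a > b) ? a : b;
    return m + log(1.0 + exp(-fabs(a - b)));
}

/* Hypothetical trellis: NEXT_STATE[s][u] is the state reached from s on input u */
static const int NEXT_STATE[NUM_STATES][2] = {
    {0, 2}, {0, 2}, {1, 3}, {1, 3}
};

/*
 * lambda[k][s][u] : branch metric for leaving state s at time k with input u
 *                   (assumed pre-computed from the channel outputs, step 1).
 * llr[k]          : output LLR L(u(k)) of Eq. (82.29).
 */
void log_map_decode(const double lambda[BLOCK_LEN][NUM_STATES][2],
                    double llr[BLOCK_LEN])
{
    double alpha[BLOCK_LEN + 1][NUM_STATES];
    double beta [BLOCK_LEN + 1][NUM_STATES];

    /* Initialization: trellis assumed to start and terminate in state 0 */
    for (int s = 0; s < NUM_STATES; s++) {
        alpha[0][s]        = (s == 0) ? 0.0 : NEG_INF;
        beta[BLOCK_LEN][s] = (s == 0) ? 0.0 : NEG_INF;
    }

    /* Step 2a: forward recursion, Eq. (82.27) */
    for (int k = 0; k < BLOCK_LEN; k++) {
        for (int s = 0; s < NUM_STATES; s++)
            alpha[k + 1][s] = NEG_INF;
        for (int s = 0; s < NUM_STATES; s++)
            for (int u = 0; u < 2; u++) {
                int sj = NEXT_STATE[s][u];
                alpha[k + 1][sj] =
                    max_star(alpha[k + 1][sj], alpha[k][s] + lambda[k][s][u]);
            }
    }

    /* Step 2b: backward recursion, Eq. (82.28) */
    for (int k = BLOCK_LEN - 1; k >= 0; k--)
        for (int s = 0; s < NUM_STATES; s++) {
            beta[k][s] = NEG_INF;
            for (int u = 0; u < 2; u++) {
                int sj = NEXT_STATE[s][u];
                beta[k][s] =
                    max_star(beta[k][s], beta[k + 1][sj] + lambda[k][s][u]);
            }
        }

    /* Step 3: LLR computation, Eq. (82.29) */
    for (int k = 0; k < BLOCK_LEN; k++) {
        double m1 = NEG_INF, m0 = NEG_INF;
        for (int s = 0; s < NUM_STATES; s++) {
            int s1 = NEXT_STATE[s][1], s0 = NEXT_STATE[s][0];
            m1 = max_star(m1, alpha[k][s] + lambda[k][s][1] + beta[k + 1][s1]);
            m0 = max_star(m0, alpha[k][s] + lambda[k][s][0] + beta[k + 1][s0]);
        }
        llr[k] = m1 - m0;
    }
}
```

Note that a straightforward software model such as this keeps the complete α and β arrays in memory for the whole block; it is exactly this storage that the sliding window schedule discussed next avoids.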

(2) MAP Decoder Architectures. Numerous architectures can be employed to implement the log-MAP algorithm [50]. However, a trellis sweep over all L observed symbols requires a large memory to hold the forward and backward metrics until they are used in the LLR computation in Eq. (82.29). Hence, the sliding window log-MAP algorithm has become popular, as it minimizes the metric storage requirements [49].

The sliding window log-MAP decoding algorithm is derived from the property that the forward and backward metrics α and β converge after a few constraint lengths have been traversed in the trellis, independent of the initial conditions [49]. We refer to this property as the warm-up property, and the warm-up period is assumed to have a duration of L symbols. Owing to the warm-up property, the state metric (α and β) computations can be partitioned into windows of size L. Further, the computations in each window can be done in parallel. Figure 82.13 shows an example of a decoding flow where the warm-up property is employed only for computing the backward metrics. The warm-up or initialization period is depicted by dashed lines and the computation period by solid lines. This warm-up property will be exploited later in deriving parallel and block-interleaved pipelined (BIP) architectures.
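The window schedule of Figure 82.13 can be summarized by the following C-style sketch. It is a simplified, sequential version of the flow, and the helper functions warmup_backward(), run_backward(), run_forward(), and compute_llr() are hypothetical stand-ins for the β-, α-, and L-processing units rather than functions from the original text.

```c
/* Hypothetical processing units; empty stubs so the schedule sketch compiles. */
static void warmup_backward(int from, int to) { (void)from; (void)to; }
static void run_backward(int from, int to)    { (void)from; (void)to; }
static void run_forward(int from, int to)     { (void)from; (void)to; }
static void compute_llr(int from, int to)     { (void)from; (void)to; }

/* Sliding-window schedule (simplified, one window at a time).
 * win is the window length; the warm-up is assumed to span win symbols
 * as well.  Boundary handling at the end of the block is omitted.      */
void sliding_window_map(int num_symbols, int win)
{
    for (int w0 = 0; w0 < num_symbols; w0 += win) {
        int w1 = w0 + win;                 /* end of the current window */

        /* Warm-up (dashed lines in Figure 82.13): a dummy backward
         * recursion over the NEXT window, so that beta has converged
         * to reliable values by the time it reaches w1.                */
        warmup_backward(/*from*/ w1 + win, /*to*/ w1);

        /* Actual backward recursion over the current window; the
         * resulting beta values are stored in a buffer of size win.    */
        run_backward(/*from*/ w1, /*to*/ w0);

        /* Forward recursion over the current window, combined with the
         * stored beta values to produce the LLRs of Eq. (82.29).       */
        run_forward(/*from*/ w0, /*to*/ w1);
        compute_llr(w0, w1);
    }
}
```

Because only one window of β values is buffered at a time, the metric memory grows with the window length rather than with the full block length.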

[Figure 82.13: Decoding flow of the sliding window log-MAP algorithm; warm-up periods are shown as dashed lines and computation periods as solid lines.]

[Figure 82.14: VLSI architecture of the sliding window log-MAP decoder.]

Figure 82.14 shows the VLSI architecture of a decoder whose data-flow graph is shown in Figure 82.13. The architecture has a unit for the computation of branch metrics (λ-unit), one forward recursion unit (α-unit), two backward recursion units together with a buffer to store the backward metrics β1 and β2 (β-unit), and the LLR metric processing unit (L-MPU). The computations in the λ-unit and the L-MPU can be implemented in a feed-forward manner and thus do not limit the throughput. However, the forward and backward recursions are computed via an array of ACS kernels in a state-parallel manner. The ACS kernel for a MAP decoder, shown in Figure 82.15, implements the correction factor of Eq. (82.30) via a look-up table (LUT) and employs state metric rescaling to avoid overflow [47]. As in a Viterbi decoder, it is the critical path delay of the ACS unit in Figure 82.15 that limits the throughput.
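As a rough illustration of such an ACS kernel and the rescaling mentioned above, the following fixed-point C sketch adds the incoming state and branch metrics, compares and selects the larger path, applies the max* correction of Eq. (82.30) through a small LUT indexed by the metric difference, and rescales the state metrics by a common offset when they grow too large. The bit widths, LUT contents, and rescaling threshold are illustrative assumptions, not values from the original text.

```c
#include <stdint.h>

/* Correction-factor LUT approximating log(1 + e^{-|d|}) in the same
 * Q3 fixed-point format (3 fractional bits) assumed for the metrics.
 * Contents and size are illustrative.                                  */
static const int16_t CORR_LUT[4] = {6, 3, 1, 0};

#define RESCALE_THRESHOLD  (1 << 12)   /* assumed overflow guard        */
#define RESCALE_OFFSET     (1 << 11)   /* subtracted from all metrics   */

/* One ACS kernel: merges two incoming branches into one state metric. */
int16_t acs_kernel(int16_t sm0, int16_t bm0,   /* path 0: state + branch metric */
                   int16_t sm1, int16_t bm1)   /* path 1: state + branch metric */
{
    int32_t p0   = sm0 + bm0;                 /* add      */
    int32_t p1   = sm1 + bm1;
    int32_t diff = p0 - p1;                   /* compare  */
    int32_t sel  = (diff >= 0) ? p0 : p1;     /* select   */

    /* max* correction: LUT indexed by the integer part of |p0 - p1| */
    int32_t mag = (diff >= 0 ? diff : -diff) >> 3;
    if (mag > 3) mag = 3;
    sel += CORR_LUT[mag];

    return (int16_t)sel;
}

/* State-metric rescaling, applied once per trellis section: when any
 * metric exceeds the threshold, subtract a common offset from all of
 * them.  Only metric differences matter, so the LLRs are unaffected.  */
void rescale(int16_t sm[], int num_states)
{
    int overflow_risk = 0;
    for (int s = 0; s < num_states; s++)
        if (sm[s] > RESCALE_THRESHOLD) overflow_risk = 1;
    if (overflow_risk)
        for (int s = 0; s < num_states; s++)
            sm[s] -= RESCALE_OFFSET;
}
```

Subtracting a common offset from every state metric preserves all metric differences, which is why rescaling bounds the metric word length without affecting the computed LLRs.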
