Posts

Showing posts from September, 2015

Content-Addressable Memory: CAM Architecture.

Introduction In ordinary memory designs, such as DRAMs and SRAMs, the memory device uses a write cycle to store data and a read cycle to retrieve it, in both cases by addressing one specific memory location through its address. In other words, each read or write cycle can access only the single location indicated by that address, so data access in ordinary memory devices is inherently sequential. In high-speed data search applications, for instance Internet routers [1–5], image processing [6–8], and pattern recognition [9–11], the time required to find data stored in the memory array must be as short as possible to achieve high search performance. Because of this sequential access, the search performance of ordinary memory devices depends on raw memory bandwidth and does not apply well to search-intensive applications. If a memory device can provide a useful function, called a search function, that compa…
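To make the contrast concrete, here is a minimal C sketch of the idea (not the chapter's circuit): an ordinary RAM read returns the word stored at one address, while a CAM-style search compares a key against every stored entry and returns a match vector. The table contents, depth, and word width are made-up illustrative values.

```c
#include <stdint.h>
#include <stdio.h>

#define CAM_DEPTH 8   /* number of stored words (illustrative) */

/* Software model of a CAM search: the key is compared against every
 * stored entry; the hardware does this in parallel in a single cycle,
 * here a loop produces the same match vector. */
uint32_t cam_search(const uint32_t table[CAM_DEPTH], uint32_t key)
{
    uint32_t match = 0;
    for (int i = 0; i < CAM_DEPTH; i++) {
        if (table[i] == key)
            match |= 1u << i;          /* set match line i */
    }
    return match;                      /* one bit per entry */
}

int main(void)
{
    uint32_t table[CAM_DEPTH] = {0x10, 0x3A, 0x3A, 0x7F, 0x00, 0x10, 0x55, 0x3A};

    /* Ordinary RAM access: one address in, one word out. */
    printf("RAM read  addr 3 -> 0x%02X\n", table[3]);

    /* CAM access: data in, match lines (addresses) out. */
    printf("CAM search 0x3A -> match vector 0x%02X\n", cam_search(table, 0x3A));
    return 0;
}
```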

Dynamic Random Access Memory: Concept of 2-Bit DRAM Cell.

Concept of 2-Bit DRAM Cell The 2-bit DRAM cell is an important architecture in multi-level DRAM. Let us discuss an example of a multi-level technique used for a 4-Gb DRAM by NEC [17]. Table 55.1 compares the 2-bit/4-level storage concept with the conventional 1-bit/2-level storage concept. In the conventional 1-bit/2-level DRAM cell, the storage voltage levels are Vcc or GND, corresponding to logic values “1” or “0”, and the signal charge is one-half of the maximum storage charge. In the 2-bit/4-level DRAM cell, the storage voltage levels are Vcc, two-thirds Vcc, one-third Vcc, and GND, corresponding to logic values “11”, “10”, “01”, and “00”, respectively. Three reference voltage levels are used to detect these four storage levels; each reference level is positioned midway between two adjacent storage levels. Thus, the signal charge between a storage level and its reference level is one-sixth of the maximum storage charge. Sense and Timing Scheme The circuit diagram of the 2-bit/4-leve…
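The following C sketch mirrors the encoding just described, assuming an illustrative Vcc of 3.3 V (the excerpt does not specify a supply): the four storage levels are Vcc, 2/3 Vcc, 1/3 Vcc, and GND, the three reference levels sit midway between adjacent storage levels, and the margin to the nearest reference works out to Vcc/6.

```c
#include <stdio.h>

#define VCC 3.3   /* assumed supply voltage, for illustration only */

static double encode(int bits2)            /* bits2 = 0..3, i.e. "00".."11" */
{
    return VCC * bits2 / 3.0;              /* GND, 1/3 Vcc, 2/3 Vcc, Vcc */
}

static int decode(double v)                /* compare against 3 reference levels */
{
    const double ref[3] = {VCC / 6.0, VCC / 2.0, 5.0 * VCC / 6.0};
    int bits2 = 0;
    for (int i = 0; i < 3; i++)
        if (v > ref[i]) bits2 = i + 1;
    return bits2;
}

int main(void)
{
    for (int b = 3; b >= 0; b--) {
        double v = encode(b);
        printf("bits %d%d  store %.2f V  read back %d\n",
               (b >> 1) & 1, b & 1, v, decode(v));
    }
    /* Margin between a storage level and its nearest reference is Vcc/6,
     * i.e. the signal charge is one-sixth of the maximum storage charge. */
    printf("signal margin = %.3f V (Vcc/6)\n", VCC / 6.0);
    return 0;
}
```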

Dynamic Random Access Memory: Multi-Level DRAM.

Multi-Level DRAM In modern application-specific IC (ASIC) memory designs, several important factors need to be considered: memory capacity, fabrication yield, and access speed. The memory capacity required for ASIC applications has been increasing very rapidly, and bit-cost reduction is one of the most important issues for file-application DRAMs. To achieve high yield, it is important to reduce the defect-sensitive area on a chip. The multi-level storage DRAM technique is one of the circuit technologies that can reduce the effective cell size: it stores multiple voltage levels in a single DRAM cell. For example, in a four-level system, each DRAM cell holds one of the 2-bit values “11”, “10”, “01”, or “00”. Thus, the multi-level storage technique can improve chip density and reduce the defect-sensitive area on a DRAM chip, making it one of the solutions to the “density and yield” problem.

Dynamic Random Access Memory: Gb SDRAM Bank Architecture

Gb SDRAM Bank Architecture For a Gb SDRAM realization, the chip layout and the bank/data-bus architecture are important for data access. Figure 55.14 shows the conventional bank/data-bus architecture of a 1-Gb SDRAM [16]. It contains 64 DQ pins, 32 × 32-Mb SDRAM blocks, and four banks, all with a 4-bit prefetch. During a read cycle, the eight 32-Mb DRAM blocks of one bank are accessed simultaneously: 256 bits of data are routed to the 64 DQ pins, with 4 bits prefetched per pin. In an activated 32-Mb array block, 32 bits of data are accessed and associated with eight specific DQ pins. Therefore, a data I/O bus switching circuit is required between the 32-Mb SDRAM bank and the eight DQ pins, which makes the data I/O bus more complex and slows the access time. In order to simplify the bus structure, the distributed-bank (D-bank) architecture is proposed, as shown in Figure 55.15. The 1-Gb SDRAM is implemented as 32 × 32-Mb distributed banks. A 32-Mb distributed bank contains two 16-…
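As a quick sanity check of the figures quoted above, the small C snippet below multiplies out the conventional architecture's numbers: 64 DQ pins with a 4-bit prefetch imply 256 bits per internal access, which matches eight 32-Mb blocks each supplying 32 bits.

```c
#include <stdio.h>

/* Consistency check of the conventional 1-Gb SDRAM bank figures quoted
 * above (64 DQ pins, 4-bit prefetch, eight 32-Mb blocks per bank,
 * 32 bits accessed per activated block). */
int main(void)
{
    int dq_pins         = 64;
    int prefetch_bits   = 4;
    int blocks_per_bank = 8;
    int bits_per_block  = 32;

    printf("internal data per access: %d bits (DQ x prefetch)\n",
           dq_pins * prefetch_bits);            /* 256 */
    printf("bank supplies:            %d bits (blocks x bits/block)\n",
           blocks_per_bank * bits_per_block);   /* 256 */
    printf("DQ pins served per block: %d\n",
           bits_per_block / prefetch_bits);     /* 8   */
    return 0;
}
```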

Dynamic Random Access Memory: Prefetch and Pipelined Architecture in SDRAMs.

Prefetch and Pipelined Architecture in SDRAMs The SDRAM architecture is driven by the system clock. In order to speed up the average access time, the system clock can be used to latch the next address in the input latch, or to clock the data for each address access sequentially out of the output buffer, as shown in Figure 55.13 [15]. During a read cycle of a prefetch SDRAM, more than one data word is fetched from the memory array and sent to the output buffer. Using the system clock to control the prefetch register and buffer, multiple words of data can be sequentially clocked out for each address access. As shown in Figure 55.13, the SDRAM has a 6-clock-cycle RAS latency to prefetch 4 bits of data.
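A toy C model of the prefetch idea is sketched below, not the Figure 55.13 circuit: one (slow) array access fills a 4-deep prefetch register, and the (fast) system clock then shifts one word per cycle out of the output buffer. The array contents and starting column are invented for illustration.

```c
#include <stdio.h>

#define PREFETCH 4   /* the 4-bit prefetch described above, modeled as 4 data words */

int main(void)
{
    int array_row[16] = {7, 1, 4, 9, 2, 8, 3, 6};  /* pretend memory array row */
    int prefetch_reg[PREFETCH];
    int col = 4;                                   /* starting column address  */

    /* A single array access fills the prefetch register. */
    for (int i = 0; i < PREFETCH; i++)
        prefetch_reg[i] = array_row[col + i];

    /* The output buffer then clocks one word out per system clock cycle. */
    for (int clk = 0; clk < PREFETCH; clk++)
        printf("clock %d: DQ <- %d\n", clk, prefetch_reg[clk]);
    return 0;
}
```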

Dynamic Random Access Memory: Synchronous (Clocked) DRAMs

Synchronous (Clocked) DRAMs Multimedia is a major application area today, and multimedia systems require high speed and large memory capacity to improve the quality of data processing. Under this trend, high density, high bandwidth, and fast access time are the key requirements for future DRAMs. The synchronous DRAM (SDRAM) offers fast access speed and is widely used as memory in multimedia systems. The first SDRAM appeared in the 16-Mb generation, and the current state-of-the-art product is a Gb SDRAM with GB/s bandwidth [10–14]. Conventionally, the internal signals in asynchronous (non-clocked) DRAMs are generated by “address transition detection” (ATD) techniques. The ATD clock can be used to activate the address decoder and driver, the sense amplifier, and the peripheral circuits of the DRAM. Therefore, asynchronous DRAMs require no external system clock and have a simple interface. However, during the asynchronous DRAM access…

Dynamic Random Access Memory: Read/Write Circuit.

Read/Write Circuit As shown in the previous section, the readout process is destructive because the resulting voltage on the cell capacitor Cs will no longer be (VDD − Vt) or 0 V. Thus, the same data must be amplified and written back to the cell in every readout process. Next to the storage cells, the sense amplifier with a positive-feedback structure, shown in Figure 55.7, is the most important component in a memory chip; it amplifies the small readout signal during the readout process. The input and output nodes of the differential positive-feedback sense amplifier are connected to the bit-lines BL and /BL. The small readout signal appearing between BL and /BL is detected by the differential sense amplifier and amplified to a full voltage swing on BL and /BL. For example, if the DRAM memory cell on BL stores a “1”, a small positive voltage ΔV(1) is generated and added to the bit-line BL voltage after the readout process. The voltage on bit-line BL will be ΔV(1) + VDD/…
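The following C sketch models the sense-and-restore step in software terms, assuming a half-VDD bit-line precharge and an illustrative supply and readout signal (neither value comes from the excerpt): whichever bit-line ends up higher is driven to the full supply and the other to ground, and this full swing is what rewrites the cell.

```c
#include <stdio.h>

#define VDD 3.3   /* assumed supply voltage, for illustration only */

int main(void)
{
    double v_bl  = VDD / 2.0;   /* bit-line BL, precharged (assumed VDD/2) */
    double v_blb = VDD / 2.0;   /* complementary bit-line /BL              */
    double dV    = 0.05;        /* small readout signal for a stored "1"   */

    v_bl += dV;                 /* charge sharing disturbs only BL here    */

    /* Positive feedback: the higher bit-line is pulled to VDD and the
     * other to GND, restoring full levels (and hence the cell data). */
    if (v_bl > v_blb) { v_bl = VDD; v_blb = 0.0; }
    else              { v_bl = 0.0; v_blb = VDD; }

    printf("sensed data = %d  (BL = %.1f V, /BL = %.1f V)\n",
           v_bl > v_blb, v_bl, v_blb);
    return 0;
}
```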

Dynamic Random Access Memory: DRAM Memory Cell

DRAM Memory Cell In early CMOS DRAM storage-cell design, three-transistor and four-transistor cells were used in the 1-Kb and 4-Kb generations. Later, a particular one-transistor cell, shown in Figure 55.4(a), became the industry standard [5,6]. The one-transistor (1T) cell achieves a smaller cell size and lower cost. The cell consists of an n-channel MOSFET and a storage capacitor Cs. The charge is stored on the capacitor Cs, and the n-channel MOSFET functions as the access transistor. The gate of the n-channel MOSFET is connected to the word-line WL, and its source/drain is connected to the bit-line. The bit-line has a capacitance CBL, which includes the parasitic load of the connected circuits. The DRAM cell stores one bit of information as the charge on the cell storage capacitor Cs; typical values for Cs are 30 to 50 fF. When the cell stores a “1”, the capacitor is charged to VDD − Vt. When the cell stores a “0”, the capacitor is discharged to 0 V. During the READ…
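As a back-of-the-envelope illustration of why a sense amplifier is needed, the C snippet below computes the charge-sharing readout signal dV = (Vcell − Vpre) · Cs / (Cs + CBL), taking Cs = 40 fF from the 30 to 50 fF range quoted above; the bit-line capacitance, supply, threshold voltage, and half-VDD precharge are assumed illustrative values, not figures from the excerpt.

```c
#include <stdio.h>

/* Charge sharing between the cell capacitor Cs and the bit-line
 * capacitance CBL gives dV = (Vcell - Vpre) * Cs / (Cs + CBL). */
int main(void)
{
    double Cs  = 40e-15;          /* cell storage capacitance [F], within 30-50 fF */
    double Cbl = 400e-15;         /* bit-line capacitance     [F] (assumed)        */
    double Vdd = 3.3, Vt = 0.7;   /* supply and threshold     [V] (assumed)        */

    double v1  = Vdd - Vt;        /* stored "1" level            */
    double v0  = 0.0;             /* stored "0" level            */
    double pre = Vdd / 2.0;       /* bit-line precharge (assumed) */

    printf("dV(1) = %+.1f mV\n", (v1 - pre) * Cs / (Cs + Cbl) * 1e3);
    printf("dV(0) = %+.1f mV\n", (v0 - pre) * Cs / (Cs + Cbl) * 1e3);
    return 0;
}
```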

Flash Memories: Flash Memory System.

Flash Memory System Applications and Configurations Flash memory is a single-transistor memory that stores charge on a floating gate. Since 1985, mass-produced Flash memory has taken a share of the non-volatile memory market. The advantages of high density and electrically erasable operation make Flash memory indispensable in programmable systems such as network hubs, modems, PC BIOS, and microprocessor-based systems. Recently, digital cameras and voice recorders have adopted Flash memory as their storage medium. These applications run from batteries and cannot afford large power consumption. Flash memory, a true non-volatile memory, is very suitable for such portable applications because standby power is not required. For portable systems, the specification requirements of Flash memory include some special features that other memories (e.g., DRAM, SRAM) do not have: for example, multiple internal voltages with a single external p…

Dynamic Random Access Memory: Basic DRAM Architecture.

Introduction The first dynamic RAM (DRAM) was proposed in 1970 with a capacity of 1 Kb. Since then, DRAMs have been the major driving force behind VLSI technology development, and their density and performance have increased at a very fast pace; in fact, DRAM densities have quadrupled about every three years. The first experimental Gb DRAM was presented in 1995 [1,2], with commercial availability expected around 2000. In addition, multi-level storage DRAM techniques are used to improve chip density and to reduce the defect-sensitive area on a DRAM chip [3,4]. Developments in VLSI technology have produced DRAMs with a lower cost per bit than other types of memory. Basic DRAM Architecture The basic block diagram of a standard DRAM architecture is shown in Figure 55.1. Unlike SRAM, the address of a standard DRAM is multiplexed into two groups (row and column) to reduce the number of address input pins and to improve the cost-effectiveness of packaging. Although the…
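The sketch below illustrates the address multiplexing just mentioned, using an assumed 20-bit cell address split into a 10-bit row half (latched with /RAS) and a 10-bit column half (latched with /CAS); the widths are an example, not figures from the text.

```c
#include <stdio.h>
#include <stdint.h>

#define ADDR_PINS 10   /* assumed number of shared address pins */

int main(void)
{
    uint32_t cell_addr = 0x9A4C7 & 0xFFFFF;          /* example 20-bit address */

    uint32_t row = (cell_addr >> ADDR_PINS) & 0x3FF; /* latched on /RAS */
    uint32_t col =  cell_addr & 0x3FF;               /* latched on /CAS */

    printf("pins needed without multiplexing: 20\n");
    printf("pins needed with multiplexing   : %d\n", ADDR_PINS);
    printf("row = 0x%03X, col = 0x%03X\n", row, col);
    return 0;
}
```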

Flash Memories: Evolution of Flash Memory Technology.

Evolution of Flash Memory Technology In this section, the development of device structures, process technology, and array architectures for Flash memory is listed chronologically in Table 54.3. The rapid pace of development in Flash memory devices points to a promising future.

Flash Memories: Flash Memory Array Structures.

Flash Memory Array Structures NOR-Type Array In general, most Flash memory arrays are of the NOR type, as shown in Figure 54.25(a) [49–61]. In this array structure, two neighboring memory cells share a bit-line contact and a common source line. Therefore, each unit memory cell contains only half a drain contact and half a source-line width. Since each memory cell is connected directly to the bit-line, the NOR-type array features random access and low series resistance. The NOR-type array can operate with a larger read current and thus a faster read speed. However, its drawback is the large area per unit cell. In order to retain the advantages of the NOR-type array while reducing the cell size, several efforts were made to improve the array architecture. The major improvement to the NOR-type array is the elimination of bit-line contacts through a buried bit-line configuration [52]. This concept evolv…
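The short C snippet below just works through the sharing arithmetic for an assumed array size: with two neighboring cells sharing one drain contact, the array needs roughly one bit-line contact per two cells, i.e. half a contact per unit cell.

```c
#include <stdio.h>

/* Rough contact count for a NOR-type array: two neighboring cells on a
 * bit-line share one drain contact (the source line is shared as well),
 * so each unit cell carries only half a contact. Array dimensions are
 * made-up examples. */
int main(void)
{
    long rows = 1024, cols = 1024;
    long cells = rows * cols;

    long bitline_contacts = (rows / 2) * cols;   /* one contact per two cells */
    printf("cells             : %ld\n", cells);
    printf("bit-line contacts : %ld\n", bitline_contacts);
    printf("contacts per cell : %.1f\n", (double)bitline_contacts / cells);
    return 0;
}
```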

Flash Memories: Variations of Device Structure.

Variations of Device Structure CHEI Enhancement As mentioned above, alternative operation modes have been proposed to serve a wide range of purposes and features, based on either CHEI or FN tunneling injection. It has been reported that over 90% of the Flash memory products ever shipped are CHEI-based devices [79]. Driven by competition among the major manufacturers, many innovations and efforts have been dedicated to improving the performance and reliability of CHEI schemes [50,53,56,57,61,80–83]. As described in Eq. 54.11, an increase in the electric field enhances the probability of electrons gaining enough energy; therefore, the major approach to improving channel hot-electron injection efficiency is to enhance the electric field near the drain side. One structural modification utilizes a large-angle-implanted p-pocket (LAP) around the drain to improve the programming speed [56,57,60,83]. The LAP has also been used to enhance the punch-thr…

Flash Memories: Basic Flash Memory Device Structures.

Basic Flash Memory Device Structures n-Channel Flash Cell Based on the concept proposed by researchers at Toshiba Corp., developments in Flash memory have burgeoned since the end of the 1980s. There are three categories of device structures based on the n-channel MOS structure. Besides the triple-polysilicon Flash cell, the most popular Flash cell structures are the ETOX cell and the split-gate cell. In 1985, Mukherjee et al. [7,9] proposed a source-erase Flash cell called the ETOX (EPROM with Tunnel OXide). This cell structure is the same as that of the UV-EPROM, as shown in Figure 54.6, but with a thin tunnel-oxide layer. The cell is programmed by CHEI and erased by applying a high voltage at the source terminal. A split-gate memory cell was proposed by Samachisa et al. in 1987 [8]. This split-gate Flash cell, of the drain-erase type, has two polysilicon layers, as shown in Figure 54.7. The cell can be regarded as two transistors in series: one is a floating-gate memory, w…

Embedded Memory: Design Examples

Design Examples Three examples of embedded memory designs are described. The first is a flexible embedded DRAM design from Siemens Corp. [5]. The second is the embedded memories in an MPEG environment from Toshiba Corp. [14]. The last is the embedded memory design for a 64-bit superscalar RISC microprocessor from Toshiba Corp. and Silicon Graphics, Inc. [15]. A Flexible Embedded DRAM Design [5] There is an increasing gap between processor and DRAM speed: processor performance increases by about 60% per year, in contrast to only a 10% improvement in the DRAM core. Deep cache structures are used to alleviate this problem, albeit at the cost of increased latency, which limits the performance of many applications. Merging a microprocessor with DRAM can reduce the latency by a factor of 5 to 10, increase the bandwidth by a factor of 50 to 100, and improve the energy efficiency by a factor of 2 to 4 [16]. Developing memory is a time-consuming task and cannot be compared with a hi…