Memory Or Random Access Memory Ram Computer Science Essay Example
  • Pages: 15 (4036 words)
  • Published: August 7, 2018
  • Type: Analysis

Modern digital systems depend on the efficient storage and retrieval of significant amounts of data. Whether in the form of circuits or systems, memories store substantial digital information. Semiconductor memory arrays are vital for storing extensive data in all digital systems. The required memory capacity differs based on the particular application. These semiconductor memories are also known as VLSI memories.

According to surveys, around 30% of the global semiconductor industry is attributed to memory chips. Throughout the years, advancements in technology have been propelled by increasingly dense memory designs. The available data storage capacity on a single integrated circuit is exponentially growing, with a doubling rate of approximately every two years.

Semiconductor memory is typically categorized based on the method of storing and accessing data. Although each type has distinct cell designs, the overall structure, or organization, and the access mechanisms remain similar.

The memory array allows for both writing and reading of data bits, commonly referred to as Random Access Memory (RAM). There are two categories of read/write memory: Dynamic RAM (DRAM) and Static RAM (SRAM), based on the operation type of each cell.

Read-only memory, or ROM, only allows retrieval of previously stored data and does not allow the stored contents to be modified during normal operation. ROMs are classified into different types based on how the data is written (the data writing operation).

Mask ROM involves writing data during chip fabrication using a photo mask, while programmable ROM allows the data to be written electrically after the chip fabrication process. PROMs are classified into different types based on how the data is erased, including fuse ROMs, EPROMs, and EEPROMs.

Flash memory and Ferroelectric RAM (FRAM) are two additional types of memories. Flash memory shares similarities with EEPROM in terms of its data erasing operation.

Electronic devices heavily rely on semiconductor memories as essential components.

Read/Write or Random Access Memory
  - Static RAM (SRAM)
  - Dynamic RAM (DRAM)

Read-Only Memory
  - Mask-programmed ROM
  - Programmable ROM (PROM)
      - Erasable PROM (EPROM)
      - Electrically erasable PROM (EEPROM)

Flash Memory

Ferroelectric RAM (FRAM)

Mask-programmed ROM is a form of ROM in which the data is permanently encoded during manufacturing. This means the data on the ROM chip cannot be altered or modified after programming.

Nowadays, most memories are built from MOS transistors, although this is not the case for all applications. High-density and high-speed applications use a mixture of bipolar and MOS technologies. Alongside MOS and bipolar memories, other memory technologies are also under development.

The range of electronic memory capacity in digital systems varies, starting with less than 100 bits for a basic function and going up to standalone chips with 256 Mb or more (1 Mb equals 2^20 bits). Circuit designers typically refer to memory capacities in terms of bits because each bit is stored using a separate flip-flop or similar circuit. Conversely, system designers usually express memory capacities in bytes (8 bits), with each byte representing a single alphanumeric character.

Memory capacity in scientific computing systems is often expressed using words, which typically vary from 32 to 128 bits. Each byte or word is stored at a specific location identified by a unique numeric address. Memory storage capacity is commonly measured in kilobytes (K bytes) or megabytes (M bytes). Capacities that are powers of 2 are the most prevalent due to the binary nature of memory addressing. Consequently, it is conventionally accepted that 1K byte equals 1,024 bytes and 64K bytes equals 65,536 bytes. In most memory systems, each memory operation cycle can only handle storing or retrieving one byte or word.
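These power-of-two conventions can be checked with a short calculation. The sketch below simply evaluates the sizes quoted above and derives the number of address bits a memory of that capacity needs (the helper name is illustrative, not from the essay):

```python
# Binary memory-size conventions: 1K = 2**10, 1M = 2**20.
KILO = 2 ** 10          # 1,024
MEGA = 2 ** 20          # 1,048,576

def address_bits(locations):
    """Number of address bits needed to select one of `locations`."""
    return (locations - 1).bit_length()

print(1 * KILO)                  # 1K bytes  -> 1024 bytes
print(64 * KILO)                 # 64K bytes -> 65536 bytes
print(address_bits(64 * KILO))   # a 64K-byte memory needs a 16-bit address
```

Because addresses are binary, capacities that are exact powers of two waste no address codes, which is why they dominate in practice.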

The overall storage capacity, memory speed, and power consumption are determined by the following key design criteria:

The amount of data bits stored per unit area, or the memory's area efficiency, is what determines the cost per bit for memory.

The speed of memory, known as memory access time, refers to the time it takes to store or retrieve a specific data bit in the memory array.

The memory array's power consumption, both static and dynamic, is considered.

The memory cells in a semiconductor memory are accountable for the storage of individual bits. The structures of these memory cells, illustrated in Figure 1.1 and Figure 1.2, are distinct. These circuits mainly consist of MOSFETs and capacitors.

The DRAM cell depicted in Figure 1.1(a) consists of a capacitor and a switch transistor. The data is stored in the capacitor as the presence or absence of charge: a charged capacitor represents the data "1", whereas a discharged capacitor represents the data "0". The charge gradually leaks away due to leakage current, so a periodic refresh operation becomes necessary. Because of this refresh operation, it is referred to as dynamic memory. This single-transistor structure allows for a higher density.

Figure 1.1 Equivalent circuits of memory cells. (a) DRAM, (b) SRAM

The SRAM cell displayed in Figure 1.1(b) utilizes a six-transistor latch structure to retain the state of each cell node. In a typical SRAM, six MOSFETs are used to store each memory bit. Depending on the application, there are other variants of SRAM memory cells that use 8, 10, or more transistors. Unlike other types of memory, SRAMs do not require a refresh operation because the cell data can be maintained in one of two possible states as long as a power supply is present.

Figure 1.2(a) illustrates the Mask ROM cell, where data is programmed through a mask pattern by blowing out the fuse at each cell. The programming operation can only be performed once. In contrast, EPROM and EEPROM allow the cell data to be rewritten, using ultraviolet light or tunnel current respectively, and multiple blocks of memory can be erased simultaneously. EPROMs are easily identified by the transparent fused-quartz window on top of the package, which exposes the silicon chip and lets ultraviolet light in during erasing. EPROMs are becoming popular as a mass storage medium due to their large storage capacity.

Figure 1.2 shows the equivalent circuits of memory cells, including the Mask ROM, EPROM (EEPROM), and FRAM.

The FRAM or FeRAM cell is structured similarly to DRAM, with the exception of the ferroelectric capacitor. The polarization of the ferroelectric material is changed to modify the cell data. Using Perovskite crystal material in memory cells of this RAM type allows for polarization in either direction to store the desired value. Even without power supply, the polarization remains, creating a nonvolatile memory.

The preferred memory array organization is shown in Figure 1.0. This organization is a random-access architecture, meaning that memory locations can be accessed in any order at a fixed rate, regardless of physical location, for reading or writing. The data storage structure consists of individual memory cells arranged in a grid of rows and columns. Each cell can store one bit of binary information. Cells within the same row share connections, as do cells within the same column. This structure has 2^N rows (word lines) and 2^M columns (bit lines). Bit selection is achieved using a multiplexer circuit to direct cell outputs to data registers. Thus, the total number of memory cells in this array is 2^N x 2^M.

Figure 1.0 shows the organization of a conceptual random-access memory array.

In order to access a specific data bit in this array, a particular memory cell must have its corresponding word line and bit line activated. The addresses needed for this selection process are provided by the memory controller or processor. The row and column selection tasks are carried out by separate row and column decoders. The row decoder chooses one word line from a pool of 2^N lines based on an N-bit row address, while the column decoder selects one bit line from 2^M options using an M-bit column address.
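As a sketch of that selection process, the fragment below splits an (N+M)-bit address into its row and column fields, the way the two decoders would (the function name and parameters are illustrative assumptions, not part of the essay):

```python
def decode_address(addr, n_row_bits, m_col_bits):
    """Split a memory address into word-line and bit-line indices.

    The row decoder drives 1 of 2**n_row_bits word lines and the
    column decoder selects 1 of 2**m_col_bits bit lines.
    """
    col = addr & ((1 << m_col_bits) - 1)   # low M bits -> bit line
    row = addr >> m_col_bits               # high N bits -> word line
    return row, col

# A 2**4 x 2**4 = 256-cell array: address 0b10100110
row, col = decode_address(0b10100110, n_row_bits=4, m_col_bits=4)
print(row, col)   # 10 6
```

The same split underlies the multiplexed RAS/CAS addressing discussed later for DRAMs.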

Both read/write memory arrays and read-only memory arrays can be utilized by this organization.

Static Read/Write (or Random Access) memory (SRAM) can read and write data to its memory cells and keep the memory contents as long as it has power supply voltage. SRAM is currently made using CMOS technology, which provides low static power dissipation, high noise margin, and fast switching speed.

The CMOS SRAM cells are constructed using a basic latch circuit, as depicted in Figure 1.0.

Figure 1.0 shows a CMOS SRAM cell consisting of six transistors [1].

The storage unit is comprised of 4 nMOS and 2 pMOS transistors, forming two inverters, plus 2 nMOS transistors functioning as access switches. To store data, the SRAM employs four transistors per bit, creating two cross-coupled inverters with two stable states representing 0 and 1. An additional pair of access transistors manages the read and write operations that reach the storage unit. Typically, traditional SRAMs use six MOSFETs per memory bit. However, there are alternative SRAM designs that use more than six transistors per bit, such as 8T or even 10T configurations.

The word line (WL in the figure) allows access to the cell. It controls the two access transistors M5 and M6, which determine whether the cell is connected to the bit lines BL and BL̄. These bit lines transfer data during read and write operations. Having two bit lines, one for the signal and one for its inverse, is not strictly required, but is generally provided to enhance noise margins.

SRAMs can be made of both BJTs and MOSFETs as transistor types. BJTs offer high speed but consume a large amount of power, while MOSFETs offer lower power consumption and are widely used in modern SRAMs.

SRAMs are classified into asynchronous and synchronous types based on their function.

When examining the functionality of the static read/write memory, we must consider the following:

  - the relatively large parasitic column capacitances, CC and CC̄, and
  - the column pull-up pMOS transistors.

Figure 1.0. A CMOS static memory cell featuring column pull-up transistors and parasitic column capacitances. [1]

When all S signals are '0' and none of the word lines is selected, the pass transistors n3 and n4 are not active. This causes the data to be preserved in all memory cells. The column capacitances are then charged by the drain currents of the pull-up pMOS transistors p3 and p4.

When performing read or write operations, we choose the cell by asserting the word line signal S='1'.

When performing a write operation, we apply a low voltage to one of the bit lines while keeping the other one high.

In order to write a '0' into the cell, the column voltage VC is pulled low (VC = 0). This low voltage is transmitted through the pass transistor n3 to the gates of the corresponding inverter (n2, p2), driving that inverter's output high. As a result, the node at the other inverter settles at Q = 0.

To write Q = 1 into the cell, the complementary column voltage V̄C is forced low in the same manner.

During the read '1' operation, when Q = 1, transistors n3, p1, n4, and n2 are activated. This keeps the column voltage VC at a steady high level (3.5 V), while the complementary column voltage V̄C is lowered as the column capacitance CC discharges through transistors n4 and n2, so that VC > V̄C. Similarly, during the read '0' operation, VC < V̄C. The small difference (about 0.5 V) between the column voltages must be detected by the sense amplifiers in the data-read circuitry.

Efforts in design are focused on reducing the cell area and power consumption to accommodate a large number of cells on a chip. Subthreshold leakage currents play a significant role in the cell's steady-state power consumption, which is why a higher threshold voltage is used in memory circuits. The layout of the cell is meticulously optimized to eliminate any unnecessary area, further reducing its overall size.

The read operation of the six-transistor SRAM cell is depicted in Figure 1.0. The left side stores a "0" while the right side stores a "1", meaning that M1 is on and M2 is off. Initially, both bit lines b and b̄ are charged to a high voltage near VDD by column pull-up transistors (not shown in the figure). In the standby state the row selection line is held low; it is raised to VDD during the read operation, turning on the access transistors M3 and M4. As shown in Figure 1.0, current then flows through M3 and M1 to ground, gradually discharging the bitline capacitance Cbit. On the other side of the cell, the voltage on b̄ remains high because there is no path to ground through M2. The difference between b and b̄ is sent to a sense amplifier, which generates a valid low output that is then stored in a data buffer.

Figure 1.0. Six-transistor SRAM cell during the 'read' operation. [1]

The operation of writing 0 or 1 is achieved by lowering one bit line, either b or b̄, while keeping the other at approximately VDD. In Figure 2.0, to write 1, b̄ is lowered, and to write 0, b is lowered. The cell must be designed so that the conductance of M4 is several times greater than that of M6, so that the drain of M2 is pulled below the switching threshold VS. This triggers a regenerative effect between the two inverters. Eventually, M1 turns off and its drain voltage rises to VDD through the pull-up action of M5 and M3. At the same time, M2 turns on and aids M4 in lowering the output to its intended low value. Once the cell has switched to the new state, the row line can be returned to its low standby level.

Figure 2.0 shows the Six-transistor SRAM cell used for the 'write' operation. [1]

The SRAM cell's design for a successful write operation includes the transistor pair M6-M4. According to Figure 2.0, when the cell is initialized for writing, these transistors create a pseudo-NMOS inverter. As a result, current passes through both devices and reduces the voltage at the node from its initial VDD value.

Note that the bitline b- is pulled low before the wordline goes up in order to decrease the overall delay. This is because the bitline has a high capacitance and will take time to discharge.
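The read and write behaviour described above can be mimicked at the logic level. The toy model below treats the cross-coupled inverters as a single Boolean state and the bit-line pair as the only way in or out of the cell; it is a behavioural sketch under that simplification, not a circuit simulation, and all names are illustrative:

```python
class SRAMCell6T:
    """Logic-level model of a six-transistor SRAM cell."""

    def __init__(self):
        self.q = 0          # one inverter output; the other node holds its inverse

    def write(self, word_line, bl):
        # With the word line asserted, the driven bit lines overpower
        # the cross-coupled inverters and latch the new state.
        if word_line:
            self.q = bl

    def read(self, word_line):
        # Reading returns the differential pair (b, b-bar); the stored
        # state is preserved -- SRAM reads are non-destructive.
        if word_line:
            return self.q, 1 - self.q
        return None            # word line low: cell isolated from bit lines

cell = SRAMCell6T()
cell.write(word_line=True, bl=1)
print(cell.read(word_line=True))   # (1, 0)
```

The non-destructive read is the key contrast with the DRAM cell discussed later, which must rewrite its contents after every read.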

Originally, the earliest semiconductor memories were constructed using bipolar technology. Presently, bipolar memories primarily find their utility in high-speed applications. Listed below are the bipolar technologies:

DCTL (Direct-Coupled Transistor Logic) Technology

Emitter-Coupled Logic (ECL) Technology

BiCMOS Technology

SOI (Silicon-on-Insulator) Technology, in which the silicon devices are built on an insulating substrate

AS-SRAMs, short for Application-specific SRAMs, are memory chips created specifically for particular applications. These chips are tailored to meet the specific requirements and performance criteria of their intended application.

Application-specific SRAMs are manufactured with extra logic circuitry to guarantee compatibility with a particular task. These SRAMs are usually produced using high-density, optimized processes that include customized features such as buried contacts and straps in order to reduce the size of the memory cell. Here are some examples:

Serially Accessed Memory:

The FIFO is a shift-register-like memory architecture that transfers data serially in and out. It is commonly constructed using SRAM cells so that data is preserved within the FIFO.
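A FIFO of the kind described, built from a small RAM with independent read and write pointers, might be sketched as follows (the class and method names are illustrative, not from the essay):

```python
class FIFO:
    """First-in first-out buffer over a fixed-size circular RAM."""

    def __init__(self, depth):
        self.ram = [None] * depth   # the underlying SRAM-like storage
        self.depth = depth
        self.wr = self.rd = self.count = 0

    def push(self, word):
        if self.count == self.depth:
            raise OverflowError("FIFO full")
        self.ram[self.wr] = word
        self.wr = (self.wr + 1) % self.depth   # write pointer wraps around
        self.count += 1

    def pop(self):
        if self.count == 0:
            raise IndexError("FIFO empty")
        word = self.ram[self.rd]
        self.rd = (self.rd + 1) % self.depth   # read pointer wraps around
        self.count -= 1
        return word

f = FIFO(depth=4)
for w in (0xA, 0xB, 0xC):
    f.push(w)
print(f.pop(), f.pop())   # 10 11  (data leaves in arrival order)
```

In hardware the two pointers belong to independent clock domains, which is what lets a FIFO bridge a fast producer and a slow consumer.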

Dual-Port RAMs:

The dual-port RAMs enable two separate devices to concurrently read from and write to a shared memory. This communication between the devices takes place through a common memory. A series of multiport SRAMs with a built-in self-test (BIST) interface has been designed using a synchronous self-timed architecture.

Content-Addressable Memories, also known as CAMs, are a type of computer memory that allows data to be accessed based on its content rather than its location. CAMs are useful in applications where fast and efficient searching is required, such as in database management systems.

The CAM, or content-addressable memory, is used in various ways. It is incorporated as embedded modules on larger VLSI chips and can also be used as a standalone memory for specific system applications. Unlike typical memories that link data with an address, the CAM links an address with data. CAMs have diverse applications such as database management, disk caching, pattern and image recognition, and artificial intelligence.
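A CAM search can be sketched in software as the inverse of a normal read: every stored word is compared against the search key, and the matching address (or addresses) comes back. A real CAM performs all the comparisons in parallel hardware in a single cycle; the function name here is a hypothetical stand-in:

```python
def cam_search(cam_array, key):
    """Return the addresses whose stored content matches `key`.

    A normal RAM maps address -> data; a CAM maps data -> address.
    """
    return [addr for addr, word in enumerate(cam_array) if word == key]

cam = [0x1F, 0x2A, 0x07, 0x2A]
print(cam_search(cam, 0x2A))   # [1, 3] -- both locations hold 0x2A
print(cam_search(cam, 0x99))   # []    -- no match stored
```

This address-from-content lookup is exactly what makes CAMs attractive for cache tag matching and pattern recognition.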

DRAM, which stands for Dynamic Random Access Memory, is the primary memory used in desktop and larger computers. It consists of individual cells, each comprising a single MOS transistor and a storage capacitor (Figure 1-1), with each cell storing one bit of information. The charge held by the capacitor dissipates due to the sub-threshold leakage current of the cell transistor, so the charge must be refreshed many times per second. A typical storage capacitance is in the range of 20 to 50 fF.

Figure 1-1. Single transistor DRAM cell [2]

The memory cell is programmed by applying a positive or negative charge to the capacitor. This programming process occurs during a write cycle by activating the cell transistor (connecting it to the power supply or VCC) and applying either VCC or 0V (ground) to the capacitor. The transistor's gate (referred to as the word line) is then held at ground to separate the capacitor's charge from other memory cells. This capacitor can be accessed for subsequent writes, reads, or refreshes.

Figure 1-2 depicts a simplified DRAM diagram, where the memory cell gates are connected to the rows. The process of reading or writing data in the DRAM consists of two main stages, as shown in Figure 1-3. The row (X) and column (Y) addresses are inputted on the same pads and multiplexed. The first stage validates the row addresses, while the subsequent stage validates the column addresses.

Figure 1-2. Diagram showing a simplified description of Dynamic Random Access Memory (DRAM) [1]

The timing of accessing DRAM is shown in Figure 1-3 [1].

Usually, the initial step before performing any operation is to precharge each column capacitance to a high level.

To conduct a read/write operation on the cell, the word line is set to a high state (S = 1). This action links the storage capacitance to the bit line.

The write operation is performed by applying high or low voltage to the bit line. This causes the storage capacitance to charge (write '1') or discharge (write '0') through the access transistor.

During the read operation, charge is shared between the storage capacitance C1 and the column capacitance CC. This causes a small change in the column voltage, indicating a '1' or '0', which the sense amplifier then amplifies.

It should be pointed out that when the read operation is performed, the charge stored on the storage capacitance C1 is depleted (a so-called "destructive readout"). Consequently, the data must be rewritten (refreshed) each time a read operation occurs. [2] [3]
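The destructive readout can be illustrated with a crude charge-sharing model: the small cell capacitance C1 shares its charge with the much larger bitline capacitance CC, leaving only a small voltage swing for the sense amplifier, after which the cell must be rewritten. The capacitance and voltage values below are illustrative assumptions, not figures from the essay:

```python
def dram_read(v_cell, c_cell=30e-15, c_bitline=300e-15, v_precharge=1.65):
    """Charge-sharing read of a one-transistor DRAM cell.

    Returns the resulting bitline voltage. The cell's own voltage is
    destroyed by the read and must be refreshed (rewritten) afterwards.
    """
    q_total = v_cell * c_cell + v_precharge * c_bitline
    v_bitline = q_total / (c_cell + c_bitline)   # charge conservation
    return v_bitline

swing_1 = dram_read(v_cell=3.3) - 1.65   # stored '1' nudges the bitline up
swing_0 = dram_read(v_cell=0.0) - 1.65   # stored '0' nudges it down
print(round(swing_1, 3), round(swing_0, 3))   # roughly +0.15 V and -0.15 V
```

The tiny swing relative to the precharge level is why the sense amplifier, not the cell itself, does the real work of a DRAM read.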

Row addresses are validated internally by the RAS clock and are present on the address pads. A signal with a bar on top indicates that it is active at a low level. The X addresses select a row through the row decoder, while non-selected rows remain at 0 V. Each cell of the selected row is connected to a sense amplifier, which detects the charge in the cell's capacitor and translates it to a 1 or 0. There is one sense amplifier for each cell in a row, and each is connected to a column (Y address). This step reads all the cells in the row through the sense amplifiers; it is time-consuming and critical because of the high time constant of the row, which is composed of memory cell gates, and because the sense amplifier must detect the weak charge stored on a capacitance of approximately 30 femtofarads (30 fF).

After the initial step, the address pads carry the column addresses, which are validated internally by the Column Address Strobe (CAS) clock. Data from each selected memory cell is validated in a sense amplifier. The transfer of the data from the sense amplifier to the Dout pin occurs quickly through the column decoder and the output buffer. On memory data sheets, the access time from RAS is referred to as tRAC, while the access time from CAS is denoted tCAC. In a standard DRAM with an access time of 60 ns, tRAC equals 60 ns and tCAC equals 15 ns.

To maintain data integrity, each DRAM memory cell must be refreshed. One row of cells is refreshed per refresh cycle. If the product specification states, "Refresh cycle = 512 cycles per 8ms," then there are 512 rows and each individual row must be refreshed every eight milliseconds. During the row access step, all the cells of the same row are read by the sense amplifiers. The sense amplifier has two roles: transmitting data to the output buffer if selected by the column address, and re-writing the information into the memory cell to refresh it. When one row is selected, all the cells of that row are read by the sense amplifiers and refreshed individually. Burst or distributed refresh methods can be used. Burst refresh performs a series of back-to-back refresh cycles until all rows have been accessed, which in this example happens every 8 ms; no other commands are allowed during the refresh. With the distributed method and the given example, a refresh is performed every 15.6 µs (8 ms divided by 512). Figure 1-1 illustrates these two modes.

Figure 1-1 shows examples of burst and distributed refresh. [1]
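The distributed-refresh arithmetic is just the retention period divided by the row count; a quick check of the figures quoted above:

```python
retention_ms = 8.0    # every row must be refreshed within 8 ms
rows = 512            # "Refresh cycle = 512 cycles per 8 ms"

# Distributed refresh spreads the 512 row refreshes evenly over 8 ms.
interval_us = retention_ms * 1000 / rows
print(interval_us)    # 15.625 -> one row refreshed roughly every 15.6 microseconds
```

Burst refresh does the same 512 cycles back to back once per 8 ms window instead of spacing them out.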

There are three methods for refreshing standard DRAMs: RAS-only refresh, CAS-before-RAS refresh, and hidden refresh. In a RAS-only refresh, the address lines receive a row address and then RAS is lowered. In a CAS-before-RAS refresh, CAS is first lowered and then a refresh cycle occurs each time RAS is lowered. In a hidden refresh, the user performs a read or write cycle and then raises and lowers RAS.

DRAMs have a slower on-chip circuitry for reading data from each cell compared to other memory ICs, resulting in slower speeds. To tackle this problem, DRAMs are divided into different sub-categories with varying system interface circuitry to enhance performance. Furthermore, each design is tailored to meet the specific requirements of diverse applications.

The different types of DRAMs are shown in Figure 1-2. [1]

Fast Page Mode memory operates at a faster speed than regular DRAM by reducing the access time to the memory cells. The DRAM addresses are multiplexed on the same package pins. When new data lies in the same row as the previous access, it can be reached by changing only the column address, and fast page mode exploits exactly this.

Cache DRAM, which was created by Mitsubishi, combines a specific amount of main memory and a specific amount of SRAM cache memory on a single chip. The transfer between DRAM and SRAM occurs in a single clock cycle.

Technically, the EDRAM functions as a cache DRAM (CDRAM) by utilizing the internal structure of a fast page mode DRAM. Rather than incorporating a separate SRAM cache, the EDRAM utilizes the sense amplifiers present in the fast page mode DRAM to serve as a SRAM cache during data reading and access.

SDRAMs, or synchronous DRAMs, are a type of DRAM that synchronizes the read and write cycles with the processor clock, which allows the SDRAM to optimize read and write requests. Unlike other types of DRAM, whose speed is quoted in nanoseconds, SDRAM speed is rated in MHz. The design of SDRAM incorporates two separate banks, which enables a row to be active in each bank simultaneously; concurrent access, refresh, and precharge operations therefore become possible. To achieve this functionality, an internal clock-driven finite state machine processes the incoming instructions. Compared to asynchronous DRAMs, which lack this synchronization [2], SDRAM supports a more complex operation pattern thanks to its clock-driven interface.

Figure 1-1 [1] illustrates the block diagram of a 4Mbit SDRAM.

The mode register of the SDRAM is set through a cycle known as the mode register set. Its size depends on the number of address pins on the device, and it must be reprogrammed whenever any of its programmable features require modification.

DDR DRAMs, also known as Double Data Rate DRAMs, function by fetching data from an SDRAM at a frequency twice that of the clock. This allows the device to deliver data during both rising and falling edges of the clock signal. Consequently, the effective bandwidth for a particular frequency is doubled.
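The bandwidth doubling can be checked with simple arithmetic. For a hypothetical 64-bit-wide module clocked at 100 MHz (both figures are illustrative, not from the essay), transferring on both clock edges gives:

```python
bus_width_bits = 64        # hypothetical module data-bus width
clock_mhz = 100            # hypothetical I/O clock frequency

sdr_mb_per_s = bus_width_bits / 8 * clock_mhz   # one transfer per clock cycle
ddr_mb_per_s = sdr_mb_per_s * 2                 # transfers on both clock edges
print(sdr_mb_per_s, ddr_mb_per_s)   # 800.0 1600.0
```

The clock itself is unchanged; only the number of transfers per cycle doubles, which is why DDR parts are marketed by their effective (doubled) data rate.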
