Random access memory
Random access memory (usually known by its acronym, RAM) is a type of computer data storage. Today it takes the form of integrated circuits that allow the stored data to be accessed in any order, i.e. at random. The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.[1]
This contrasts with storage mechanisms such as tapes, magnetic discs and optical discs, which rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies depending on the physical location of the next item.
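As a rough illustration of this difference, the following C sketch (a hypothetical example with an arbitrary buffer size, not drawn from any particular hardware) contrasts direct indexed access into an in-memory array, which takes essentially the same single step for any index, with a tape-like read that must pass over every preceding element first:

```c
#include <stdio.h>
#include <stdlib.h>

#define N 1000000  /* arbitrary number of elements for the illustration */

/* Random access: any element can be reached directly by its index. */
static int read_ram(const int *buf, size_t index) {
    return buf[index];                  /* one step, regardless of index */
}

/* Sequential access: a tape-like medium must pass over every preceding
 * element before it can deliver the requested one. */
static int read_tape(const int *buf, size_t index) {
    volatile int value = 0;             /* volatile keeps the scan honest */
    for (size_t i = 0; i <= index; i++)
        value = buf[i];                 /* cost grows with the index */
    return value;
}

int main(void) {
    int *buf = malloc(N * sizeof *buf);
    if (buf == NULL)
        return 1;
    for (size_t i = 0; i < N; i++)
        buf[i] = (int)i;

    printf("RAM-style read:  %d\n", read_ram(buf, N - 1));
    printf("tape-style read: %d\n", read_tape(buf, N - 1));

    free(buf);
    return 0;
}
```

Both calls return the same value; the difference is that the first takes effectively constant time, while the cost of the second grows with how far away the data lies, which is the behaviour of the mechanical media described above.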
The word RAM is mostly associated with volatile types of memory, where the information is lost when the power is switched off. However, many other types of memory provide random access as well, including most types of ROM and a kind of flash memory called NOR flash.
History
The first type of random access memory was magnetic core memory, developed in 1951 and used in all computers up until the development of static and dynamic RAM integrated circuits in the late 1960s and early 1970s. Prior to the development of magnetic core memory, computers used relays or vacuum tubes to perform memory functions.
Overview
Types of RAM
Modern types of writable RAM generally store a bit of data either in the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and flash. Some types have circuitry to detect and/or correct random faults, called memory errors, in the stored data, using parity bits or error-correcting codes. RAM of the read-only type, ROM, instead uses a metal mask to permanently enable or disable selected transistors, rather than storing a charge in them.
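As a minimal sketch of the parity scheme mentioned above (illustrative only; real memory modules implement this in hardware and often use stronger error-correcting codes rather than a single parity bit), the following C code computes an even-parity bit for one byte, which allows a single flipped bit to be detected, though not corrected:

```c
#include <stdio.h>
#include <stdint.h>

/* Return the even-parity bit for one byte: 1 if the byte has an odd
 * number of 1-bits, so that data plus parity always has an even count. */
static uint8_t even_parity(uint8_t byte) {
    uint8_t parity = 0;
    while (byte) {
        parity ^= byte & 1u;     /* toggle for every 1-bit encountered */
        byte >>= 1;
    }
    return parity;
}

int main(void) {
    uint8_t stored = 0x5A;                 /* 0101 1010: four 1-bits      */
    uint8_t p      = even_parity(stored);  /* parity bit stored alongside */

    uint8_t corrupted = stored ^ 0x08;     /* a single bit flips in memory */
    if (even_parity(corrupted) != p)
        printf("single-bit memory error detected\n");
    return 0;
}
```

A parity bit can only flag that something went wrong; error-correcting codes, such as those used in ECC memory, add enough redundancy to also identify and repair the flipped bit.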
As both SRAM and DRAM are volatile, other forms of computer storage, such as disks and magnetic tapes, have been used as "permanent" storage in traditional computers. Many newer products, such as PDAs and small music players (with capacities up to 160 GB as of January 2008), do not have hard disks but rely on flash memory to maintain data between sessions of use; the same is true of products such as mobile phones, advanced calculators and synthesizers. Even certain categories of personal computers, such as the OLPC XO-1, have begun replacing magnetic disks with so-called flash drives. There are two basic types of flash memory: the NOR type, which is capable of true random access, and the NAND type, which is not; the former is therefore often used in place of ROM, while the latter is used in most memory cards and solid-state drives, due to its lower price.
Memory hierarchy
Many computer systems have a memory hierarchy consisting of CPU registers, on-die SRAM caches, external caches, DRAM, paging systems, and virtual memory or swap space on a hard drive. This entire pool of memory may be referred to as "RAM" by many developers, even though the various subsystems can have very different access times, violating the original concept behind the random access term in RAM. Even within a hierarchy level such as DRAM, the specific row, column, bank, rank, channel, or interleave organization of the components makes the access time variable, although not to the extent that the access time of rotating storage media or tape is variable. (Generally, the memory hierarchy follows the access times, with the fast CPU registers at the top and the slow hard drive at the bottom.)
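These differing access times can be made visible with a crude pointer-chasing microbenchmark such as the C sketch below (the working-set sizes and iteration count are arbitrary assumptions, and the measured numbers depend heavily on the particular CPU, cache sizes and compiler): working sets that fit in the on-die caches are traversed far faster than ones that spill out into DRAM.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Chase a randomly ordered cycle of indices through a buffer of n
 * elements, so that every load depends on the result of the previous one. */
static double chase_ns(size_t n, size_t steps) {
    size_t *next = malloc(n * sizeof *next);
    if (next == NULL)
        return -1.0;

    /* Sattolo's algorithm: build a single random cycle over 0..n-1. */
    for (size_t i = 0; i < n; i++)
        next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;       /* j < i guarantees one cycle */
        size_t tmp = next[i];
        next[i] = next[j];
        next[j] = tmp;
    }

    volatile size_t pos = 0;
    clock_t start = clock();
    for (size_t s = 0; s < steps; s++)
        pos = next[pos];                     /* dependent, cache-hostile loads */
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

    free(next);
    return elapsed * 1e9 / (double)steps;    /* average nanoseconds per access */
}

int main(void) {
    /* Working sets intended to land roughly in L1 cache, L2/L3 cache and DRAM. */
    size_t sizes[] = { 4u * 1024, 512u * 1024, 64u * 1024 * 1024 };
    for (int i = 0; i < 3; i++) {
        size_t n = sizes[i] / sizeof(size_t);
        printf("%8zu KiB working set: ~%.1f ns per access\n",
               sizes[i] / 1024, chase_ns(n, 10u * 1000 * 1000));
    }
    return 0;
}
```

Because each load depends on the previous one, the hardware prefetcher cannot hide the latency, so the reported time per access reflects whichever level of the hierarchy the working set happens to fit in.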
In most modern personal computers, the RAM comes in an easily upgraded form of modules called memory modules or DRAM modules, about the size of a few sticks of chewing gum. These can quickly be replaced should they become damaged or prove too small for current needs. As suggested above, smaller amounts of RAM (mostly SRAM) are also integrated in the CPU and other ICs on the motherboard, as well as in hard drives, CD-ROM drives, and several other parts of the computer system.
Swapping
If a computer becomes low on RAM during intensive application cycles, the computer can resort to swapping. In this case, the computer temporarily uses hard drive space as additional memory. Constantly relying on this type of backup memory is called thrashing, which is generally undesirable because it lowers overall system performance. In order to reduce the dependency on swapping, more RAM can be installed.
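On a Linux system with glibc (an assumption; the mechanism is operating-system specific), a program can query the installed and currently available physical memory through sysconf(); comparing its own working set with the available figure gives a rough indication of whether the system is likely to start swapping:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long page  = sysconf(_SC_PAGESIZE);      /* bytes per page            */
    long total = sysconf(_SC_PHYS_PAGES);    /* pages of physical RAM     */
    long avail = sysconf(_SC_AVPHYS_PAGES);  /* pages currently available */

    if (page < 0 || total < 0 || avail < 0) {
        perror("sysconf");
        return 1;
    }

    double mib = 1024.0 * 1024.0;
    printf("physical RAM:  %.0f MiB\n", (double)total * page / mib);
    printf("available RAM: %.0f MiB\n", (double)avail * page / mib);
    /* A working set much larger than the available figure will have to be
     * paged out to disk, i.e. the system will start swapping. */
    return 0;
}
```

_SC_PHYS_PAGES and _SC_AVPHYS_PAGES are glibc extensions rather than part of POSIX, so portable code would need a different query on other systems.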
Other uses of the term
Other physical devices with read/write capability can have "RAM" in their names: for example, DVD-RAM. "Random access" is also the name of an indexing method: hence, disk storage is often called "random access" because the reading head can move relatively quickly from one piece of data to another, and does not have to read all the data in between. However, the final "M" is crucial: "RAM" (provided there is no additional qualifier, as in "DVD-RAM") always refers to a solid-state device.
"RAM disks"
Software can "partition" a portion of a computer's RAM, allowing it to act as a much faster hard drive, called a RAM disk. Unless the memory used is non-volatile, a RAM disk loses the stored data when the computer is shut down. However, volatile memory can retain its data when the computer is shut down if it has a separate power source, usually a battery.
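On Linux, for example, a memory-backed tmpfs file system is one common way to create such a RAM disk. The sketch below (the mount point and size are hypothetical, and it must run with root privileges on a system where /mnt/ramdisk already exists) does this through the mount(2) system call:

```c
#include <stdio.h>
#include <sys/mount.h>

int main(void) {
    /* Mount a 64 MiB memory-backed tmpfs file system at /mnt/ramdisk.
     * The directory must already exist and the caller needs CAP_SYS_ADMIN. */
    if (mount("tmpfs", "/mnt/ramdisk", "tmpfs", 0, "size=64m") != 0) {
        perror("mount");
        return 1;
    }
    printf("RAM disk mounted at /mnt/ramdisk\n");
    /* Files written under the mount point live in RAM and disappear when
     * the file system is unmounted or the machine loses power. */
    return 0;
}
```

The same effect is usually achieved from the shell with the mount command and the tmpfs file-system type; either way, everything stored on the RAM disk is gone after a power cycle unless it is copied back to non-volatile storage.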
Recent developments
Several new types of non-volatile RAM, which preserve data while powered down, are under development. The technologies used include carbon nanotubes and the magnetic tunnel effect. In the summer of 2003, a 128 KB magnetic RAM (MRAM) chip manufactured with 0.18 µm technology was introduced; the core technology of MRAM is based on the magnetic tunnel effect. In June 2004, Infineon Technologies unveiled a 16 MB prototype, again based on 0.18 µm technology. Nantero built a functioning carbon nanotube memory prototype with a 10 GB array in 2004. Whether any of these technologies will eventually be able to take significant market share from DRAM, SRAM, or flash-memory technology remains to be seen.
In 2006, solid-state drives (based on flash memory) with capacities exceeding 150 gigabytes and speeds far exceeding traditional disks became available. This development has started to blur the distinction between traditional random access memory and "disks", dramatically reducing the difference in performance.
Memory wall
The "memory wall" is the growing disparity in speed between the CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth across chip boundaries. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed improved at only 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance.[2]
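A small back-of-the-envelope calculation (illustrative only, using the 55% and 10% annual figures quoted above) shows how quickly such a gap compounds:

```c
#include <stdio.h>
#include <math.h>   /* link with -lm on most systems */

int main(void) {
    /* Annual improvement rates quoted for the 1986-2000 period. */
    double cpu_rate = 1.55, mem_rate = 1.10;

    for (int years = 5; years <= 15; years += 5) {
        double cpu = pow(cpu_rate, years);   /* cumulative CPU speed-up    */
        double mem = pow(mem_rate, years);   /* cumulative memory speed-up */
        printf("after %2d years: CPU x%7.1f, memory x%5.1f, gap x%6.1f\n",
               years, cpu, mem, cpu / mem);
    }
    return 0;
}
```

After about a decade of such growth the processor has sped up roughly thirty times more than memory has, which is the sense in which memory latency was expected to become the dominant bottleneck.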
Currently, CPU speed improvements have slowed significantly partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in their Platform 2015 documentation (PDF):
“First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat (more on power consumption below). Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.”
The RC delays in signal transmission were also noted in Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures, which projects a maximum average annual CPU performance improvement of 12.5% between 2000 and 2014. The data on Intel processors clearly show a slowdown in performance improvements in recent processors. However, Intel's Core 2 Duo processors (codenamed Conroe) show a significant improvement over the previous Pentium 4 processors; due to a more efficient architecture, performance increased while the clock rate actually decreased.
See also
- CAS latency (CL)
- DIMM
- DVD-RAM
- Dual-channel architecture
- Error-correcting code (ECC)
- Registered/Buffered memory
- Compact Flash
- PC card
- Memory shaving
- Static RAM (SRAM)
- STT RAM (Spin Torque Transfer RAM)
- Non-Volatile RAM (NVRAM)
- Dynamic RAM (DRAM)
- Fast Page Mode DRAM
- EDO RAM or Extended Data Out DRAM
- XDR DRAM
- SDRAM or Synchronous DRAM
- DDR SDRAM or Double Data Rate Synchronous DRAM; now being replaced by DDR2 SDRAM
- RDRAM or Rambus DRAM
Terminology
Notes and references
- ^ Strictly speaking, modern types of DRAM are therefore not truly (or technically) random access, as data are read in bursts; the name DRAM has stuck, however.
- ^ The term was coined in Hitting the Memory Wall: Implications of the Obvious (PDF).
External links
- How RAM Works – Article by Jeff Tyson and Dave Coustan