Thursday 20 November 2014

MEMORY ORGANIZATION : CACHE MEMORY By Hamizah Binti Abd Wahab

What Do We Want in Memory?


We would like to have a memory system that:

  • Performs like 2-10 GB of fast SRAM (typical SRAM sizes are in MB, not GB)
  • Costs like 1-4 GB of DRAM (but typical DRAM is an order of magnitude slower than SRAM)
  • TIPS: Use a hierarchy of memory technologies:


TIPS: Exploit the “Principle of Locality”
  • Keep data that is used often in a small, fast SRAM, called the “CACHE”, often on the same chip as the CPU
  • Keep all data in a bigger but slower DRAM, called “main memory”, usually on a separate chip
  • Access main memory only rarely, for the remaining data
  • The reason this strategy works: LOCALITY

Memory Hierarchy Levels


  • Block (aka line): the unit of copying
~May be multiple words
  • If the accessed data is present in the upper level
~Hit: the access is satisfied by the upper level
~Hit ratio: hits/accesses
  • If the accessed data is absent
~Miss: the block is copied from the lower level
~Time taken: miss penalty
~Miss ratio: misses/accesses = 1 - hit ratio (a tiny worked example follows below)
~The accessed data is then supplied from the upper level
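
As a tiny illustration of the two ratios (my own sketch; the counts below are made up, not from the notes):

hits     = 90                      # assumed counts, for illustration only
misses   = 10
accesses = hits + misses

hit_ratio  = hits / accesses       # hits/accesses
miss_ratio = misses / accesses     # misses/accesses, equal to 1 - hit_ratio

print(hit_ratio, miss_ratio)       # prints: 0.9 0.1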


Direct Mapped Cache

  • Location determined by address
  • Direct mapped: only one choice
~(Word address) modulo (#Lines in cache)
~E.g.: 10110 modulo 8 = 110
  • #Lines is a power of 2
  • Use the low-order address bits (a quick sketch follows below)
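
The mapping is easy to check in a few lines of Python (a minimal sketch of my own; the 8-line cache and the 5-bit example address are taken from the bullets above):

NUM_LINES = 8                          # number of cache lines; must be a power of 2

def cache_index(word_address):
    # (Word address) modulo (#Lines in cache)
    return word_address % NUM_LINES

addr = 0b10110                         # the example address, 22 in decimal
index = cache_index(addr)
print("{:05b} modulo {} = {:03b}".format(addr, NUM_LINES, index))   # 10110 modulo 8 = 110

# Because NUM_LINES is a power of 2, the modulo is just the low-order address bits.
assert index == (addr & (NUM_LINES - 1))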



Tags and Valid Bits
  • How do we know which particular block is stored in a cache location?
~Store the block address as well as the data
~Actually, only the high-order bits are needed
~Called the tag (see the sketch below)
  • What if there is no data in a location?
~Valid bit: 1 = present, 0 = not present 
~Initially 0 
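
As a small sketch of my own (not from the notes), here is how an address splits into a tag and an index for an 8-line direct-mapped cache; the 3-bit index width follows from the 8 lines, and the rest is assumed for illustration:

INDEX_BITS = 3                                       # 8 lines -> 3 index bits

def split_address(word_address):
    index = word_address & ((1 << INDEX_BITS) - 1)   # low-order bits select the line
    tag   = word_address >> INDEX_BITS               # high-order bits are stored as the tag
    return tag, index

valid = [0] * (1 << INDEX_BITS)                      # one valid bit per line, all 0 at start

tag, index = split_address(0b10110)
print(tag, index)                                    # prints: 2 6  (tag = 10, index = 110 in binary)
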
Cache Example
  • 8 blocks, 1 word/block, direct mapped
  • Initial state (all valid bits are 0, so every entry is marked N; a short simulation follows after the table):

Index | V | Tag | Data
 000  | N |     |
 001  | N |     |
 010  | N |     |
 011  | N |     |
 100  | N |     |
 101  | N |     |
 110  | N |     |
 111  | N |     |
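
To make the example concrete, here is a rough Python simulation of this 8-block, 1-word-per-block, direct-mapped cache (my own sketch; the access sequence is an assumption chosen to show misses, hits, and a replacement):

NUM_BLOCKS = 8
INDEX_BITS = 3

valid = [False] * NUM_BLOCKS              # all valid bits 0 in the initial state
tags  = [0] * NUM_BLOCKS
data  = [None] * NUM_BLOCKS

def access(word_address):
    index = word_address % NUM_BLOCKS     # which block to look in
    tag   = word_address >> INDEX_BITS    # high-order bits identify the block
    if valid[index] and tags[index] == tag:
        return "hit"
    # Miss: copy the block in from main memory (modelled as just recording the address)
    valid[index] = True
    tags[index]  = tag
    data[index]  = "Mem[{}]".format(word_address)
    return "miss"

for addr in (22, 26, 22, 26, 16, 3, 16, 18):   # assumed access sequence
    print("address {:05b} ({}): {}".format(addr, addr, access(addr)))
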
 Cache Analogy

Students doing research offer another commonplace cache example. Suppose you are writing a paper on quantum computing. Would you go to the library, check out one book, return home, get the necessary information from that book, go back to the library, check out another book, return home, and so on? No, you would go to the library and check out all the books you might need and bring them all home. The library is analogous to main memory, and your home is, again, similar to cache.

Measuring Cache Performance

  • Components of CPU time
~Program execution cycles (includes cache hit time)
~Memory stall cycles (mainly from cache misses)
  • With simplifying assumptions (illustrated below):

Memory stall cycles = (Memory accesses / Program) x Miss rate x Miss penalty
                    = (Instructions / Program) x (Misses / Instruction) x Miss penalty
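
A quick numerical illustration of the stall-cycle formula (a sketch; every value below is an assumption, not a number from the notes):

instructions     = 1_000_000       # instruction count (assumed)
base_cpi         = 1.0             # execution cycles per instruction, including cache hit time
misses_per_instr = 0.02            # misses per instruction (assumed)
miss_penalty     = 20              # cycles per miss (assumed)
clock_ns         = 1.0             # clock period in ns (assumed)

execution_cycles = instructions * base_cpi
stall_cycles     = instructions * misses_per_instr * miss_penalty
cpu_time_ns      = (execution_cycles + stall_cycles) * clock_ns

print("memory stall cycles:", int(stall_cycles))          # 400000
print("CPU time: {:.2f} ms".format(cpu_time_ns / 1e6))    # 1.40 ms
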
Average Access Time

  • Average Memory Access Time (AMAT)
AMAT = Hit time + Miss rate x Miss penalty
  • Example (worked out in the snippet below):
CPU with a 1 ns clock, hit time = 1 cycle, miss penalty = 20 cycles, I-cache miss rate = 5%
AMAT = 1 + 0.05 x 20 = 2 ns
~i.e., 2 cycles per instruction on average
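
The same example as a quick calculation (the values are exactly the ones given above):

hit_time     = 1        # cycles
miss_rate    = 0.05     # 5% I-cache miss rate
miss_penalty = 20       # cycles
clock_ns     = 1.0      # 1 ns clock

amat_cycles = hit_time + miss_rate * miss_penalty
print("AMAT = {} cycles = {} ns".format(amat_cycles, amat_cycles * clock_ns))   # 2.0 cycles = 2.0 ns
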
Source:

  1. Lecture Notes, Computer Organization and Architecture, Chapter 9: Memory Organization - Cache Memory
  2. http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/Memory/introCache.html
  3. http://www.articlesbase.com/hardware-articles/cache-memory-675304.html
  4. http://www.cs.fsu.edu/~hawkes/cda3101lects/chap7/F7.5.html



~HAMIZAH BINTI ABD WAHAB~
~B031410226~

