Department of Electrical and Computer Engineering
Dalhousie University

ECED 3403: Computer Architecture

Assignment 3

Last updated: 22 July 2018


  • The design section of this assignment is due on 24 July. Demonstrations and final submission must take place no later than 2 August 2018. Assignments submitted after this date will receive a grade of zero.

  • For a copy of assignment 3, click here.

  • For background information on cache memory, click here.

  • Some thoughts on cache-initialization:

    In WT (write-through), cache-consistency is assumed (that is, a cached location always holds the same value as the corresponding primary memory location), so a READ from a primary memory location can overwrite a cache line without any problems. A WRITE simply overwrites both the cache and primary memory. However, it is still necessary to ensure that all cache addresses are invalid at start-up so that the cache doesn't return garbage contents on a READ.

    In WB (write-back), a cache line whose dirty-bit (DB) is clear is never written back to primary memory when it is replaced, so the DB should be clear on initialization; otherwise the initial garbage contents of a line could be written back to memory. A READ will cause problems if the requested address happens to match a line's initial address, so an invalid initial address is needed.

    In both cases, the initial address stored in each cache line should not be one that is likely to be read, such as 0xFFFE or 0xFFFF (the startup vector address found in vector 15). A safer alternative is 0x0000, since it is unlikely that the addresses associated with the initialization sequence will refer to location 0x0000 (the address of the first device's data and control-status registers). A possible initialization is sketched below.
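    The following is a minimal sketch of such an initialization, assuming the cache is represented as an array of structures; the structure, its field names, and the CACHE_SIZE value are illustrative assumptions, not requirements of the assignment.

        #include <stdint.h>

        #define CACHE_SIZE  32       /* assumed number of cache lines */
        #define INIT_ADDR   0x0000   /* initial address unlikely to be referenced */

        struct cache_line {
            uint16_t addr;    /* primary memory address mapped into this line */
            uint16_t data;    /* cached word */
            uint8_t  dirty;   /* WB only: 1 = modified since loaded */
            uint8_t  lru;     /* age count used by the LRU algorithm */
        };

        struct cache_line cache[CACHE_SIZE];

        /* Invalidate every cache line before the first memory access */
        void cache_init(void)
        {
            for (int i = 0; i < CACHE_SIZE; i++) {
                cache[i].addr  = INIT_ADDR;
                cache[i].data  = 0;
                cache[i].dirty = 0;   /* clear DB so garbage is never written back (WB) */
                cache[i].lru   = 0;   /* all lines start equally old */
            }
        }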

  • A problem with a cache-miss is deciding which cache line to replace. One technique is to use an LRU (least-recently used) algorithm. In this case, the cache-line that is least-recently used (sometimes referred to, somewhat incorrectly, as the "oldest") is identified for replacement. (Note that whether replacement involves writing the contents back to primary memory depends on the write strategy and whether other techniques, such as the dirty-bit, are used.)

    The following is a possible algorithm that can be used to implement LRU and PLRU (pseudo-LRU). Each cache line is associated with an LRU field which contains an "age" count.

    Rather than decrementing the LRU-fields in all cache-lines (the youngest cache-line has the largest LRU value, while the oldest has a value of zero), only decrement the LRU in those cache-lines that are younger than the cache-line being accessed. For example, consider the following cache (Line refers to the cache line, including the Contents [not shown], Address is the primary memory address that maps into this line, and LRU is the binary representation of the LRU count: 00, 01, 10, and 11):

    Line   Address   LRU
    a      2000      00
    b      3000      01
    c      4000      10
    d      5000      11

    If cache-line 'b' is accessed (for example, a read or a write to primary memory address 3000), its current LRU is 01 (which means that no LRU less than 01 should be changed; i.e., 00). The LRU of cache-line 'b' becomes 11 (youngest) since it is the most recently used. The LRU of 'a' is not touched (as it is less than 01), while 'c' and 'd' are decremented since their LRUs are greater than 01. The cache now reads:

    Line   Address   LRU
    a      2000      00
    b      3000      11
    c      4000      01
    d      5000      10

    Now, if cache-line 'a' is accessed (for example, a read or a write to primary memory address 2000), the LRU of each of the other cache-lines is decremented (all are greater than 00) and 'a' becomes 11. The cache state is:

    Line   Address   LRU
    a      2000      11
    b      3000      10
    c      4000      00
    d      5000      01

    This algorithm preserves the relative age ordering of the other cache-lines: every cache-line that was older than the accessed line remains older than every cache-line that was younger than it, while the accessed line becomes the youngest.
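    The following is a minimal sketch of this update for one set of four cache-lines, assuming the cache_line structure from the earlier sketch; the function names and the way the set is passed are illustrative only.

        /* Update the two-bit LRU fields of a four-line set after line 'hit' is accessed.
         * Lines younger than the accessed line (larger LRU) are aged by one,
         * older lines are untouched, and the accessed line becomes the youngest (11).
         */
        void lru_update(struct cache_line set[4], int hit)
        {
            uint8_t old = set[hit].lru;

            for (int i = 0; i < 4; i++)
                if (i != hit && set[i].lru > old)
                    set[i].lru--;      /* age the lines that were younger */

            set[hit].lru = 3;          /* 11: most recently used */
        }

        /* On a miss, the victim is the line whose LRU is zero (the oldest) */
        int lru_victim(struct cache_line set[4])
        {
            for (int i = 0; i < 4; i++)
                if (set[i].lru == 0)
                    return i;
            return 0;                  /* not reached if the LRU fields are consistent */
        }

    Running lru_update() on the example above reproduces the tables: accessing 'b' (LRU 01) ages only 'c' and 'd', and then accessing 'a' (LRU 00) ages all three of the other lines.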

  • The hybrid cache (direct-mapping into eight pages of four cache-lines each) needs a three-bit page number to specify which page is being referred to. The page number (0 through 7) should be extracted from bits 2, 3, and 4 of the MAR.

    Each cache line consists of four fields: physical address, contents (word), dirty-bit, and the two-bit LRU.
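    A sketch of one possible declaration of the hybrid cache and of extracting the page number from the MAR, assuming the cache_line structure sketched earlier; the two-dimensional layout and macro names are assumptions, not requirements.

        #define PAGES           8
        #define LINES_PER_PAGE  4

        /* Bits 4..2 of the MAR select one of the eight direct-mapped pages */
        #define PAGE_NUM(mar)   (((mar) >> 2) & 0x0007)

        struct cache_line hybrid[PAGES][LINES_PER_PAGE];

        /* On an access, only the four lines of hybrid[PAGE_NUM(mar)] are searched
         * associatively for an address match; the two-bit LRU applies within the page. */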

  • Since cache consistency is important, device registers and their addresses should not be included in the cache. The reason for this is straightforward: if a device status change occurs (for example, DBA or OV changing when a byte arrives), the change will not be reflected in the cache (why?).
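    One possible way of excluding the device registers, assuming they occupy the low end of the address space; the DEVICE_REG_LIMIT bound is a placeholder for whatever range your emulator actually uses.

        /* Placeholder: one past the last device register address in the emulator */
        #define DEVICE_REG_LIMIT  0x0010

        /* Device status bits must always be read from the device itself,
         * never from a possibly stale cache copy, so bypass the cache entirely. */
        int cacheable(uint16_t addr)
        {
            return addr >= DEVICE_REG_LIMIT;
        }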

  • All four combinations of cache organization (associative and direct mapping) and write policy (write-back and write-through) should be tested to understand how each approach operates under different scenarios. One set of software can be created and the different combinations generated using #ifdefs, as sketched below.
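    A sketch of one way of selecting the write policy at compile time; the macro name CACHE_WRITEBACK and the helper primary_mem_write() are illustrative placeholders for whatever your emulator provides (the organization can be selected the same way).

        extern void primary_mem_write(uint16_t addr, uint16_t word);  /* placeholder */

        /* Compile with -DCACHE_WRITEBACK for write-back; omit it for write-through */
        void cache_write(struct cache_line *line, uint16_t mar, uint16_t mbr)
        {
            line->addr = mar;
            line->data = mbr;
        #ifdef CACHE_WRITEBACK
            line->dirty = 1;              /* primary memory updated only when the line is evicted */
        #else
            primary_mem_write(mar, mbr);  /* write-through: primary memory updated on every write */
        #endif
        }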

  • Storing words in the cache rather than bytes will simplify your design, since it avoids the unlikely but awkward problem of half an instruction or data item being in the cache.
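    If byte accesses are still allowed at the CPU level, one way of forcing word granularity is to clear bit 0 of the address before the cache lookup, so that a byte access maps to the word containing it; for example:

        uint16_t word_addr = mar & 0xFFFE;   /* word containing the requested byte */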