What is Cache Memory Mapping?
Definition – The technique that decides where data transferred from primary memory is placed in the cache is known as “Cache Memory Mapping”.
Mapping in Computer Architecture
There are several cache mapping techniques used in computer architecture, such as:
Direct Mapping
Direct mapping is the simplest mapping technique because each block of primary memory can be mapped to only one possible cache line.
In direct mapping, every memory block is assigned to a particular line in the cache. If that line is already occupied when a fresh block must be loaded, the previously stored block is overwritten. The memory address is divided into two fields, an index field and a tag field, and the tag field is saved in the cache alongside the data.
A = B mod C
where
A = cache line number
B = main memory block number
C = number of lines in the cache
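The formula above can be sketched in a few lines of Python (the cache size here is a made-up example, not from the text):

```python
# Direct mapping: each main-memory block maps to exactly one cache line.
NUM_CACHE_LINES = 8  # C: number of lines in the cache (illustrative size)

def cache_line(block_number: int) -> int:
    """A = B mod C: the cache line for a given main-memory block."""
    return block_number % NUM_CACHE_LINES
```

Note that blocks 3, 11, and 19 all compete for the same line, since 3 mod 8, 11 mod 8, and 19 mod 8 are all 3; this conflict is the main weakness of direct mapping.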
Direct Mapped Cache
The working of a direct-mapped cache can be split into the following steps.
When the CPU issues a memory request:
- The line number field of the address is used to access the corresponding line of the cache.
- The tag field of the CPU address is compared with the tag stored in that line.
- If the two tags match, a cache hit occurs and the required word is read from the cache.
- If the tags do not match, a cache miss occurs.
- On a cache miss, the required word is fetched from primary memory.
- Finally, the word is saved in the cache line, and its tag replaces the previous tag.
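The steps above can be sketched as a small simulation (the class name and sizes are illustrative, not from the text):

```python
class DirectMappedCache:
    """Minimal sketch of a direct-mapped cache; sizes are illustrative."""

    def __init__(self, num_lines: int):
        self.num_lines = num_lines
        self.tags = [None] * num_lines   # tag field saved per cache line
        self.data = [None] * num_lines

    def access(self, block_number: int, memory: dict):
        line = block_number % self.num_lines   # index field selects the line
        tag = block_number // self.num_lines   # tag field identifies the block
        if self.tags[line] == tag:
            return "hit", self.data[line]      # tags match: cache hit
        # Tags differ: cache miss. Fetch the block from primary memory
        # and replace the previous tag and data in this line.
        self.tags[line] = tag
        self.data[line] = memory[block_number]
        return "miss", self.data[line]
```

Accessing block 3, then block 11, then block 3 again on an 8-line cache produces miss, miss, miss: both blocks map to line 3, so each access evicts the other. This is the thrashing behavior that set-associative mapping later addresses.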
Associative Mapping
Associative mapping is a very flexible technique because both the contents and the addresses of memory words are saved in associative memory. Any block can be placed in any line of the cache. The required word within a block is identified by the word-ID bits, which makes it possible to place any block anywhere in the cache. For this reason, associative mapping is considered the fastest and most flexible mapping technique.
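A minimal sketch of the associative lookup, assuming the stored tags are held in a simple list (in real associative memory all tag comparisons happen simultaneously in hardware; here they are scanned one by one):

```python
def associative_lookup(tags, target_tag):
    """Search every cache line's tag for a match.

    Returns the matching line index, or None on a miss. In hardware
    this search is performed in parallel across all lines.
    """
    for line, tag in enumerate(tags):
        if tag == target_tag:
            return line
    return None
```

Because any tag can sit in any line, a block never conflicts with another block for a fixed position; the cost is the hardware needed to compare every tag at once.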
Set-Associative Mapping
Set-Associative mapping is the combination of direct and associative cache mapping techniques.
Set-associative mapping addresses the main issue of the direct mapping technique: the possible thrashing when several frequently used blocks map to the same line. The cache lines are grouped so that several lines work together as a SET, and each memory block maps to a particular set. Set-associative mapping therefore allows multiple memory words that share the same index address to be present in the cache at the same time.
Any block can be mapped to any line within its SET.
For example – with 6 tag bits there are 2^6 = 64 possible tags.
Memory address – the blocks in the cache memory are divided into 64 sets, with two blocks in each set (a 2-way arrangement).
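The set selection in this example can be sketched as follows (64 sets, two lines per set, matching the figures above):

```python
NUM_SETS = 64  # 64 sets, as in the example above
WAYS = 2       # two lines (blocks) per set: 2-way set-associative

def set_index(block_number: int) -> int:
    """A memory block maps to a set, not to a single line."""
    return block_number % NUM_SETS
```

Blocks 10 and 74 both map to set 10, but because the set holds two lines, they can coexist in the cache instead of evicting each other as they would under direct mapping.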
Fully Associative Mapping and Cache
Fully associative mapping has the following characteristics:
- A block of primary memory can be mapped to any freely available cache line.
- It is more flexible compared to direct mapping.
- A replacement algorithm is required when the cache is full.
Fully Associative Cache
In a fully associative cache, whether a block of primary memory is present in the cache is determined by comparing the tag bits of the memory address with the tags of all cache lines, and these comparisons are performed in parallel.
2-Way Set Associative Cache
Here the cache memory consists of 4,096 sets containing 2 lines each (8,192 lines / 2).
4-way set associative cache
Here the cache memory will have 2,048 sets containing four lines each (8,192 lines / 4).
16-way set associative cache
In a 16-way set-associative cache, the cache memory will have 512 sets containing 16 lines each (8,192 lines / 16).
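The arithmetic behind these three configurations can be checked with a few lines of Python (8,192 total lines, as stated above):

```python
TOTAL_LINES = 8192  # total cache lines in the examples above

def num_sets(ways: int) -> int:
    """Number of sets = total lines divided by lines per set."""
    return TOTAL_LINES // ways
```

So a 2-way arrangement gives 4,096 sets, 4-way gives 2,048 sets, and 16-way gives 512 sets: increasing associativity trades fewer sets for more places each block can go.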
N-Way Set Associative Cache
An N-way set-associative cache helps to reduce conflicts by providing N lines in each set in which a block that maps to that set may be placed.
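A compact sketch of an N-way set-associative cache tying the pieces together; the LRU replacement policy used here is an assumption for illustration, since the text does not specify one:

```python
class SetAssociativeCache:
    """Sketch of an N-way set-associative cache.

    Each set holds up to `ways` tags. LRU replacement is assumed:
    within a set, the least recently used line is evicted first.
    """

    def __init__(self, num_sets: int, ways: int):
        self.num_sets = num_sets
        self.ways = ways
        # Each set's tags, ordered least to most recently used.
        self.sets = [[] for _ in range(num_sets)]

    def access(self, block_number: int) -> str:
        index = block_number % self.num_sets   # which set
        tag = block_number // self.num_sets    # which block within the set
        tags = self.sets[index]
        if tag in tags:
            tags.remove(tag)
            tags.append(tag)      # move to most-recently-used position
            return "hit"
        if len(tags) == self.ways:
            tags.pop(0)           # set full: evict least recently used line
        tags.append(tag)
        return "miss"
```

With 64 sets and 2 ways, blocks 10 and 74 (which share set 10) both fit and subsequently hit; a third conflicting block evicts only the least recently used of the two, rather than thrashing on every access as in a direct-mapped cache.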