Direct Mapped Cache is the simplest type of cache mapping technique. In Direct Mapped Cache, each main memory block can be cached in only one specific location within the cache.
In this technique, the cache is divided into several blocks (also known as cache lines), commonly 32 or 64 bytes each in modern processors. Main memory is likewise divided into blocks (also known as memory blocks) of the same size as a cache line. Each main memory address falls within exactly one memory block, and the bits of the address determine the single cache line where that block may be placed.
In Direct Mapped Cache, the index value for a particular main memory address is simply a few bits taken from its binary representation. For instance, suppose we have a 64KB cache with 32-byte blocks and a 1MB main memory, also divided into 32-byte blocks. A 1MB memory needs 20 address bits, and we split them into three parts: the lowest 5 bits are the offset within a block (2^5 = 32 bytes), the next 11 bits are the index (the cache holds 64KB / 32B = 2048 lines, and 2^11 = 2048), and the remaining 4 bits form the tag, which distinguishes the different memory blocks that share the same line. Thus, in this case, the index value ranges from 0 to 2047.
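The field split above can be sketched in a few lines of Python. This is a minimal sketch assuming the example parameters from the text (20-bit addresses, 32-byte blocks, 2048 cache lines); the function name `split_address` is just for illustration.

```python
# Assumed example parameters: 1 MB memory (20-bit address), 32-byte
# blocks, 64 KB cache => 64 KB / 32 B = 2048 lines.
OFFSET_BITS = 5    # log2(32-byte block)
INDEX_BITS = 11    # log2(2048 cache lines)

def split_address(addr: int):
    """Return the (tag, index, offset) fields of a 20-bit address."""
    offset = addr & ((1 << OFFSET_BITS) - 1)          # lowest 5 bits
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)  # next 11 bits
    tag = addr >> (OFFSET_BITS + INDEX_BITS)          # remaining 4 bits
    return tag, index, offset
```

For example, `split_address(0x12345)` yields a tag of 1, an index of 282, and an offset of 5, so that address can live only in cache line 282.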
When the CPU requests data from a specific memory address, the cache first checks whether the cache line that address maps to already holds the corresponding memory block, by comparing the tag stored in that line against the tag bits of the address. If the tags match, it is a hit; otherwise, the cache must read the relevant block from main memory and write it into that line. Because each memory block maps to exactly one line, no replacement policy is needed: the incoming block simply overwrites whatever block currently occupies that line.
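The lookup procedure can be simulated directly. The following is a minimal sketch, not a hardware-accurate model: it assumes a hypothetical 8-line cache with 32-byte blocks, and each line stores only a valid bit and a tag.

```python
# Hypothetical tiny direct-mapped cache: 8 lines, 32-byte blocks.
NUM_LINES = 8
OFFSET_BITS = 5

lines = [{"valid": False, "tag": None} for _ in range(NUM_LINES)]

def access(addr: int) -> bool:
    """Return True on a hit, False on a miss (filling the line)."""
    block = addr >> OFFSET_BITS        # which memory block this address is in
    index = block % NUM_LINES          # the one line this block may occupy
    tag = block // NUM_LINES           # identifies the block within that line
    line = lines[index]
    if line["valid"] and line["tag"] == tag:
        return True                    # hit: requested block already cached
    line["valid"], line["tag"] = True, tag   # miss: overwrite unconditionally
    return False
```

Accessing address 0 misses (cold cache), address 4 then hits (same 32-byte block), and address 0x100 misses and evicts block 0, so a subsequent access to address 0 misses again.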
One advantage of Direct Mapped Cache is its simplicity: the hardware is easy to implement and the lookup is fast, since only a single tag comparison is needed per access. However, because each memory block maps to only one particular cache line, two frequently used blocks that happen to share a line will repeatedly evict each other. These conflict misses can negatively affect system performance even when the rest of the cache sits idle.
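The conflict-miss pathology is easy to demonstrate. Below is a self-contained sketch using an assumed 8-line cache with 32-byte blocks; addresses 0x000 and 0x100 fall in memory blocks 0 and 8, which both map to line 0, so alternating between them misses on every access even though seven other lines are empty.

```python
# Hypothetical 8-line direct-mapped cache, 32-byte blocks.
NUM_LINES, OFFSET_BITS = 8, 5
tags = [None] * NUM_LINES              # stored tag per line (None = invalid)

def probe(addr: int) -> bool:
    """Return True on a hit; on a miss, overwrite the line's tag."""
    block = addr >> OFFSET_BITS
    index, tag = block % NUM_LINES, block // NUM_LINES
    hit = tags[index] == tag
    tags[index] = tag                  # direct mapping: no choice of victim
    return hit

# 0x000 and 0x100 share line 0, so 8 alternating accesses are 8 misses.
misses = sum(not probe(a) for a in [0x000, 0x100] * 4)
```

A set-associative cache would avoid this particular pattern by letting both blocks reside in the same set simultaneously.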




