Page Cache / Buffer Cache
In computing, a page cache, sometimes also called disk cache, is a transparent cache for pages originating from a secondary storage device such as a hard disk drive (HDD) or a solid-state drive (SSD). The operating system keeps a page cache in otherwise unused portions of main memory (RAM), resulting in quicker access to the contents of cached pages and overall performance improvements. A page cache is implemented in kernels with paging memory management, and is mostly transparent to applications.
Usually, all physical memory not directly allocated to applications is used by the operating system for the page cache. Since the memory would otherwise be idle and is easily reclaimed when applications request it, there is generally no associated performance penalty and the operating system might even report such memory as “free” or “available”.
When compared to main memory, hard disk drive reads and writes are slow, and random accesses require expensive disk seeks; as a result, larger amounts of main memory bring performance improvements, as more data can be cached in memory. Separate disk caching is provided on the hardware side, by dedicated RAM or NVRAM chips located either in the disk controller (in which case the cache is integrated into the hard disk drive and usually called a disk buffer) or in a disk array controller. Such memory should not be confused with the page cache.
Under Linux, the amount of main memory currently used for the page cache is reported in the buff/cache column of the free command's output:
$ free -h
               total        used        free      shared  buff/cache   available
Mem:           3.8Gi       2.2Gi       142Mi       1.0Mi       1.5Gi       1.4Gi
Swap:          2.0Gi        19Mi       2.0Gi
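The same figure can be obtained programmatically by parsing /proc/meminfo, whose Buffers and Cached fields make up most of free's buff/cache column. A minimal C sketch that prints the Cached line:

#include <stdio.h>
#include <string.h>

/* Print the "Cached:" line from /proc/meminfo, i.e. the current
 * page cache size in kibibytes. */
int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("fopen /proc/meminfo");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "Cached:", 7) == 0) {
            fputs(line, stdout);   /* e.g. "Cached:  1534816 kB" */
            break;
        }
    }
    fclose(f);
    return 0;
}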
Memory conservation
Pages in the page cache modified after being brought in are called dirty pages.
Since non-dirty pages in the page cache have identical copies in secondary storage (e.g. a hard disk drive or solid-state drive), discarding and reusing their space is much quicker than paging out application memory, and is often preferred over flushing the dirty pages into secondary storage and reusing their space. Executable binaries, such as applications and libraries, are also typically accessed through the page cache and mapped into individual process address spaces using virtual memory (this is done through the mmap system call on Unix-like operating systems). This not only means that the binary files are shared between separate processes, but also that unused parts of binaries will eventually be evicted from main memory, leading to memory conservation.
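As a minimal sketch of such a file-backed mapping (assuming a Unix-like system and a hypothetical input file named data.bin), the following C program maps a file read-only with mmap; the pages it touches are served from the page cache, and any other process mapping the same file shares the same physical pages.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);   /* hypothetical example file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the file read-only; MAP_SHARED pages are backed directly by
     * the page cache, so two processes mapping the same file share the
     * same physical pages. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching the mapping faults pages in from the page cache
     * (or from disk on a cache miss). */
    long sum = 0;
    for (off_t i = 0; i < st.st_size; i += 4096)
        sum += p[i];
    printf("checksum of first byte of each page: %ld\n", sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}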
Writing
If data is written, it is first written to the page cache and managed as one of its dirty pages. Dirty means that the data is stored in the page cache but still has to be written to the underlying storage device. The contents of these dirty pages are periodically transferred to the underlying storage device, as well as on demand via the sync or fsync system calls. The underlying storage device may, in this last step, be a RAID controller or the hard disk directly.
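A program that cannot wait for the periodic writeback can force the transfer itself. The following minimal C sketch (writing to a hypothetical file named out.dat) copies data into the page cache with write and then calls fsync to block until the dirty pages have reached the storage device.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char msg[] = "hello, page cache\n";

    /* write() only copies the data into the page cache and marks the
     * affected pages dirty; it normally returns before anything
     * reaches the disk. */
    if (write(fd, msg, sizeof msg - 1) < 0) { perror("write"); return 1; }

    /* fsync() blocks until the dirty pages (and the file's metadata)
     * have been transferred to the underlying storage device. */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}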
Reading
File blocks are written to the page cache not just during writing, but also when reading files. For example, when you read a 100-megabyte file twice, one after the other, the second access will be quicker, because the file blocks come directly from the page cache in memory and do not have to be read from the hard disk again. The following example shows that the size of the page cache has increased after a roughly 200-megabyte video has been played:
user@adminpc:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          3884       1812       2071          0         60       1328
-/+ buffers/cache:        424       3459
Swap:         1956          0       1956
user@adminpc:~$ vlc video.avi
[...]
user@adminpc:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          3884       2056       1827          0         60       1566
-/+ buffers/cache:        429       3454
Swap:         1956          0       1956
user@adminpc:~$
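The same effect can be reproduced programmatically. The following C sketch (reusing the video.avi file from the session above) times two sequential passes over the same file; posix_fadvise with POSIX_FADV_DONTNEED first asks the kernel to drop the file's cached pages (it is only a hint, so results may vary), after which the first pass has to go to the storage device while the second is served from the page cache.

#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Read the whole file and return the elapsed wall-clock seconds. */
static double read_all(int fd)
{
    char buf[1 << 16];
    struct timespec t0, t1;

    lseek(fd, 0, SEEK_SET);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (read(fd, buf, sizeof buf) > 0)
        ;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    int fd = open("video.avi", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Hint to the kernel to evict this file's pages from the page
     * cache so the first pass really reads from the storage device. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

    printf("first read (cold cache):  %.2f s\n", read_all(fd));
    printf("second read (page cache): %.2f s\n", read_all(fd));

    close(fd);
    return 0;
}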
References
- https://en.wikipedia.org/wiki/Page_cache
- https://www.thomas-krenn.com/en/wiki/Linux_Page_Cache_Basics