The difference between buffer and cache in the Linux free command

  

# free -m
             total       used       free     shared    buffers     cached
Mem:           503        368        134          0         56        249
-/+ buffers/cache:         62        440
Swap:         1023          0       1023

Mem: physical memory statistics. -/+ buffers/cache: physical memory statistics adjusted for buffers and cache. Swap: usage of the swap partition on the hard disk, which we will not discuss here.

Note that the memory actually available to the system is not the 134 MB shown under free in the Mem line; that figure only counts unallocated memory.

The Mem line:
total: the total amount of physical memory.
used: the total amount of allocated memory, including buffers and cached; some of it may not actually be in use.
free: unallocated memory.
shared: shared memory; generally not used by the system, and not discussed here.
buffers: memory the system has allocated to buffers but which is not currently in use.
cached: memory the system has allocated to the cache but which is not currently in use. The difference between buffer and cache is explained later.
total = used + free

The -/+ buffers/cache line:
used: the used value from the Mem line minus buffers and cached, i.e. the total amount of memory actually in use.
free: the sum of unused buffers, unused cache, and unallocated memory; this is the memory currently available to the system.
free2 = buffers1 + cached1 + free1 (where subscript 2 denotes the -/+ buffers/cache line and subscript 1 denotes the Mem line)

The difference between buffer and cache, covered in detail later: a buffer is something that has yet to be "written" to disk, while a cache is something that has been "read" from the disk and stored for later use.

Why do the Mem line and the -/+ buffers/cache line report different used/free values? The Mem line takes the operating system's point of view: to the OS, buffers and cached count as used, so its available memory is 134 MB and its used memory is 368 MB, which covers the kernel (OS), applications (X, oracle, etc.), buffers, and cached.
The -/+ buffers/cache line takes the application's point of view: buffers and cached count as available to applications, because they exist only to speed up file access and are quickly reclaimed whenever an application needs the memory. So from the application's point of view, available memory = free + buffers + cached.
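This arithmetic can be checked directly against the example output. A minimal sketch, with the MB values hard-coded from the sample free -m run above (free -m rounds each column to whole megabytes independently, so sums can differ from the printed line by about 1 MB):

```shell
# Values in MB, copied from the sample `free -m` output above.
total=503; used=368; free=134; buffers=56; cached=249

# -/+ buffers/cache "used": memory actually in use = used - buffers - cached
echo "used (application view): $(( used - buffers - cached )) MB"   # 63 MB

# -/+ buffers/cache "free": memory available = free + buffers + cached
echo "free (application view): $(( free + buffers + cached )) MB"   # 439 MB
```

The printed line shows 62 and 440 rather than 63 and 439 precisely because of that per-column rounding.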

2. The difference between buffer and cache

A buffer is something that has yet to be "written" to disk. A cache is something that has been "read" from the disk and stored for later use.

2.1 Cache: a small-capacity but very fast memory that sits between the CPU and main memory. Since the CPU is much faster than main memory, it would have to wait for some time whenever it fetched data directly from memory. The cache stores data the CPU has just used or frequently reuses; when the CPU needs that data again, it can be fetched from the cache directly, which reduces the CPU's waiting time and improves system efficiency. Caches are further divided into a Level 1 cache (L1 Cache) and a Level 2 cache (L2 Cache). The L1 cache is integrated inside the CPU. The L2 cache was originally soldered onto the motherboard but is now also integrated into the CPU; common sizes are 256 KB or 512 KB.
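On Linux you can inspect the cache hierarchy of your own CPU. A minimal sketch, assuming a Linux system: lscpu comes from util-linux, and the getconf cache variables are glibc extensions, so availability varies:

```shell
# Report the CPU cache hierarchy (L1/L2/L3), if lscpu is available.
lscpu | grep -i 'cache' || true

# Alternative: query individual cache sizes in bytes via getconf.
getconf LEVEL1_DCACHE_SIZE || true
getconf LEVEL2_CACHE_SIZE || true
```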

The cache in the Linux free command is not the CPU cache described above. Here, cached is memory that holds data already read from disk: if a later read hits the cache (finds the data it needs there), the disk does not have to be read again; on a miss, the data is read from disk. The cached data is organized by access frequency, with the most frequently read content kept in the most easily found position, while content that is no longer read is gradually demoted until it is evicted entirely.
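This behaviour is easy to observe through /proc/meminfo. A sketch assuming a Linux /proc filesystem and a writable /tmp; the exact Cached value will vary from run to run:

```shell
# Snapshot the page cache before reading a file.
grep -E '^(Buffers|Cached):' /proc/meminfo

# Create and read a 32 MB file; its contents land in the page cache.
dd if=/dev/zero of=/tmp/pagecache-demo bs=1M count=32 2>/dev/null
cat /tmp/pagecache-demo > /dev/null

# Cached has typically grown by roughly the file's size.
grep -E '^(Buffers|Cached):' /proc/meminfo
rm -f /tmp/pagecache-demo
```

A second cat of the same file completes almost instantly, because every read hits the cache instead of the disk.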

2.2 Buffer: an area used to store data passing between devices that are not synchronized or that have different priorities. A buffer reduces the time processes spend waiting on each other, so that a fast device's operations are not interrupted while data is read from a slow device. Buffers are designed around disk reads and writes: scattered write operations are collected together, reducing disk fragmentation and repeated seeks and thereby improving system performance. Both buffers and cache are data held in RAM. Simply put, the buffer is about to be written to disk, and the cache has been read from the disk.

Both the buffer and cache values displayed by the free command are occupied memory. buffer: memory used as the buffer cache, the read/write buffer of block devices; it sits closer to the storage device and can be thought of as the disk's buffer area. cache: memory used as the page cache, the file system's cache. A large cached value means many files are held in the cache; if frequently accessed files can be served from the cache, disk read I/O will be very low.




The difference between buffer and cache: the two are used for different purposes.

Buffer is designed to increase the speed of data exchange between memory and hard disk (or other I/O devices).

Cache is designed to improve the speed of data exchange between the CPU and memory; this is the familiar L1, L2, and L3 cache.

The instructions the CPU executes and the data it reads are all fetched from memory. Because memory reads and writes are slow, a cache is placed between the CPU and memory to speed up their data exchange; it is faster than memory but more expensive, and since the CPU already integrates a great many circuits, the cache is generally small. To raise speed further, Intel and other manufacturers added a second-level and even a third-level cache. The design rests on the principle of locality: the instructions a CPU executes and the data it accesses tend to cluster in a certain region, so that region is placed in the cache and the CPU usually does not need to access memory, which increases access speed. Of course, if the cache does not contain what the CPU needs, memory still has to be accessed.

Buffers are designed around disk reads and writes: scattered write operations are collected together, reducing disk fragmentation and repeated seeks and thereby improving system performance. Linux has a daemon that periodically flushes buffered content (that is, writes it to disk); you can also flush the buffers manually with the sync command. For example, if you cp a 3 MB MP3 file onto an ext2 USB drive, the drive's activity light does not flash at first; after a while (or after manually running sync) the light flashes as the data is written out. Buffers are also flushed when a device is unmounted, which is why unmounting sometimes takes a few seconds.
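This write-buffering can be watched through the kernel's Dirty counter, which tracks data written by applications but not yet flushed to disk. A sketch assuming a Linux /proc filesystem and a writable /tmp:

```shell
# Write 16 MB without forcing it to disk; the data sits in dirty pages.
dd if=/dev/zero of=/tmp/dirty-demo bs=1M count=16 2>/dev/null
grep '^Dirty:' /proc/meminfo    # usually non-zero right after the write

# sync flushes all buffered writes out to the device.
sync
grep '^Dirty:' /proc/meminfo    # usually much smaller now
rm -f /tmp/dirty-demo
```

Note that the kernel's flush daemon may already have written some pages in the meantime, so the first Dirty value is not guaranteed to reflect the full 16 MB.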

To adjust the swap usage policy, modify the number to the right of vm.swappiness in /etc/sysctl.conf; the change takes effect on the next boot. The value ranges from 0 to 100: the larger the number, the more readily swap is used. The default is 60, which you can experiment with. Both buffers and cache are data in RAM.
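To inspect and change the setting, a minimal sketch (the value 10 below is just an illustrative choice, and changing it requires root, so those commands are shown commented out):

```shell
# Current swappiness (0-100; default is 60 on many distributions).
cat /proc/sys/vm/swappiness

# Temporary change, lost at reboot (requires root):
#   sysctl vm.swappiness=10
# Persistent change: set "vm.swappiness = 10" in /etc/sysctl.conf,
# then reload without rebooting:
#   sysctl -p
```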

In a nutshell, the buffer is about to be written to disk, and the cache is read from disk.

Buffers are allocated by various processes and are used in areas such as input queues. A simple example: a process needs to read in several fields; before all of them have arrived, the process keeps the fields read so far in a buffer.

Cache is often used for disk I/O requests: if multiple processes need to access the same file, the file is kept in cache so the next access is faster, which improves system performance.

In a nutshell, a buffer is a channel, while a cache is a container.
