Memory-management methods normally require the entire process to be in memory before the process can execute. However, examination of real programs shows that the entire program need not be in physical memory.
It is better not to load the whole process into memory for execution:
The goal, therefore, is to load a process only partially. A partially loaded process:
The purpose of this project is to test several page replacement algorithms and compare their effectiveness.
C programs were written to simulate the following page replacement algorithms:
The programs calculate the following:
Virtual memory is a technique that allows the execution of processes that are not completely in memory. It abstracts main memory into an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory.
Virtual memory also allows processes to share files easily and to implement shared memory through page sharing. In addition, it provides an efficient mechanism for process creation. Virtual memory is not easy to implement, however, and may substantially decrease performance if it is used carelessly.
This memory holds those pages that are not present in main memory. The secondary memory is usually a high-speed disk. It is known as the swap device, and the section of disk used for this purpose is known as swap space.
This table can mark an entry invalid through a valid-invalid bit or a special value of protection bits.
When a program is being executed, the operating system has two options:
1. load the entire program in physical memory, or
2. load sections of the program in physical memory only as they are needed.
The latter technique is known as demand paging and is commonly used in virtual memory systems. With demand-paged virtual memory, pages are only loaded when they are demanded during program execution; pages that are never accessed are thus never loaded into physical memory.
Memory protection in a paged environment is accomplished by protection bits associated with each frame. Normally, these bits are kept in the page table. One particular bit generally attached to each entry in the page table is the valid-invalid bit.
Illegal addresses are trapped by use of the valid-invalid bit. The operating system sets this bit for each page to allow or disallow access to the page. Thus, when a process tries to access a page that is marked invalid, the operating system causes a trap: a page fault.
effective access time = (1 − p) × ma + p × page-fault time
where,
p | the probability of a page fault (0 ≤ p ≤ 1)
ma | the memory-access time; for most computer systems it ranges from 10 to 200 ns
page-fault time | the time to service a page fault. A typical hard disk has an average latency of 3 ms, a seek time of 5 ms, and a transfer time of 0.05 ms; including hardware and software time, the total page-fault time is about 8 ms.
As we can see, the effective access time is directly proportional to the page fault rate (p). To keep process slowdown due to paging at a reasonable level, it is important to keep the page-fault rate low in a demand-paging system. Otherwise, the effective access time increases, slowing process execution dramatically.
How should the OS distribute the frames among the various processes?
The easiest way to split m frames among n processes is to give everyone an equal share, m/n frames. For instance, if there are 93 frames and 5 processes, each process will get 18 frames. The three leftover frames can be used as a free-frame buffer pool.
Different processes need differing amounts of memory. Consider a system with a 1 KB frame size. If a small student process of 10 KB and an interactive database of 127 KB are the only two processes running in a system with 62 free frames, it does not make much sense to give each process 31 frames. The student process does not need more than 10 frames, so the other 21 are, strictly speaking, wasted.
Proportional allocation allocates available memory to each process according to its size. Let the size of the virtual memory for process pi be si, and define
S = Σsi
Then, if the total number of available frames is m, we allocate ai frames to process pi, where ai is approximately
ai = (si / S) × m
In our above example, with proportional allocation, we would split 62 frames between two processes, one of 10 pages and one of 127 pages, by allocating 4 frames and 57 frames, respectively.
10 / 137 × 62 = 4.5255 ≈ 4, and
127 / 137 × 62 = 57.4745 ≈ 57.
If no frame is free, find one frame that is not currently being used and free it.
There are many different page-replacement algorithms, and every operating system has its own replacement scheme. In selecting a particular replacement algorithm, we want the one with the lowest page-fault rate.
We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults. The string of memory references is called a reference string. We can generate reference strings artificially (by using a random-number generator), or we can trace a given system and record the address of each memory reference.
The latter choice produces a large amount of data. Hence, the former choice is used in this project.
To determine the number of page faults for a particular reference string and page-replacement algorithm, we need to know the number of page frames available. Obviously, as the number of frames available increases, the number of page faults decreases.
For simplicity, the following short reference string is used to demonstrate the effectiveness of the page-replacement algorithms. In the project itself, however, the reference string length is 1 × 10^6.
This is the simplest and lowest-overhead page-replacement algorithm. In FIFO replacement, the operating system maintains a queue of all pages currently in memory, with the page at the head of the queue the oldest and the page at the tail the most recent arrival. On a page fault, the page at the head is removed and the new page is added to the tail of the queue.
Total Page Faults: 15
Page Fault Rate: 15 / 20 = 0.75
The least-recently-used (LRU) page-replacement algorithm uses the recent past as an approximation of the near future: it replaces the page that has not been used for the longest period of time.
LRU replacement associates with each page the time of that page’s last use. When a page must be replaced, LRU chooses the page that has not been used for the longest period of time. We can think of this strategy as the optimal page-replacement algorithm looking backward in time, rather than forward.
Total Page Faults: 12
Page Fault Rate: 12 / 20 = 0.60
A reference bit is associated with each entry in the page table. The bit for a page is set by the hardware whenever that page is referenced (by a read or a write). Initially, all bits are cleared (to 0) by the operating system. As a user process executes, the bit associated with each page referenced is set (to 1) by the hardware. After some time, we can determine which pages have been used and which have not by examining the reference bits, although we do not learn the order of use.
The additional-reference-bits technique maintains several reference bits per page. On each page access, the highest-order reference bit is set to 1. At regular intervals, the system shifts all of the reference bits one position to the right, inserting a 0 as the new highest-order bit. When a page must be evicted from memory, the system scans the pages to find the one with the lowest value in its reference bits.
The Least Frequently Used (LFU) page-replacement algorithm is a counting-based page replacement technique that requires that the page with the smallest count be replaced.
Total Page Faults: 13
Page Fault Rate: 13 / 20 = 0.65
The Most Frequently Used (MFU) page-replacement algorithm is another counting-based technique; it selects the page with the highest count for replacement, on the argument that the page with the smallest count was probably just brought in and has yet to be used.
Total Page Faults: 13
Page Fault Rate: 13 / 20 = 0.65
FIFO is the simplest algorithm to implement, but it is not the most effective; its performance is sometimes good and sometimes poor. The algorithm is implemented with a first-in first-out queue: if a requested page is already in the queue, no replacement is needed; otherwise, the page that has been in the queue longest is replaced. Because the order in which pages are demanded is unpredictable, the algorithm's performance varies, and it is considered less effective than LRU.
The family of Least Recently Used (LRU) algorithms is considered the most effective of the page-replacement algorithms tested. Experiments show no significant difference among its implementations (via counters, reference bits, or additional reference bits). The algorithm has the smallest page-fault rate and a relatively fast average memory-access time.
MFU and LFU page replacement are uncommon. These algorithms are expensive to implement, do not approximate optimal page replacement well, and are considered the least practical of the algorithms discussed. Both use counters to decide which pages to replace: LFU selects the page with the smallest count, while MFU selects the page with the highest count. Both have a higher page-fault rate and a slower average memory-access time than LRU, and they are likely to perform even worse than FIFO.