1. Explain the concept of Reentrancy.
It is a useful, memory-saving technique for multiprogrammed timesharing systems. A reentrant procedure is one in which multiple users can share a single copy of a program during the same period. Reentrancy has two key aspects: the program code cannot modify itself, and the local data for each user process must be stored separately. Thus, the permanent part is the code, and the temporary part is the pointer back to the calling program and the local variables used by that program. Each execution instance is called an activation. It executes the code in the permanent part but has its own copy of local variables and parameters. The temporary part associated with each activation is the activation record. Generally, the activation record is kept on the stack.
Note: A reentrant procedure can be interrupted and called by an interrupting program, and still execute correctly on returning to the procedure.
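A small illustrative sketch (assuming Python; the functions are hypothetical, not from the text): the non-reentrant version keeps its working data in shared module-level state, so concurrent activations can interfere, while the reentrant version keeps everything in local variables, one copy per activation.

```python
# Hypothetical sketch: reentrant vs. non-reentrant routines.
# The non-reentrant version mutates shared (module-level) state,
# so two concurrent activations can corrupt each other's work.

_shared_buffer = []          # shared, mutable state -> non-reentrant

def tally_non_reentrant(values):
    _shared_buffer.clear()   # another activation may clear it mid-run
    for v in values:
        _shared_buffer.append(v * v)
    return sum(_shared_buffer)

def tally_reentrant(values):
    local_buffer = []        # every activation gets its own private copy
    for v in values:
        local_buffer.append(v * v)
    return sum(local_buffer)

if __name__ == "__main__":
    print(tally_reentrant([1, 2, 3]))   # 14, safe under concurrent calls
```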
2. Explain Belady's Anomaly.
Also called the FIFO anomaly. Usually, increasing the number of frames allocated to a process's virtual memory speeds up execution, because fewer page faults occur. Sometimes the reverse happens: the execution time increases even when more frames are allocated to the process. This is Belady's Anomaly, and it occurs for certain page reference patterns.
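The classic illustration uses the reference string 1 2 3 4 1 2 5 1 2 3 4 5 under FIFO replacement: 3 frames produce 9 page faults, while 4 frames produce 10. A small simulation sketch (assuming Python):

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement."""
    frames = deque()          # oldest page sits at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more frames, more faults
```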
3. What is a binary semaphore? What is its use?
A binary semaphore is one that takes only the values 0 and 1. It is used to implement mutual exclusion and to synchronize concurrent processes.
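A minimal mutual-exclusion sketch using a binary semaphore (assuming Python's threading module; the counter workload is illustrative):

```python
import threading

mutex = threading.Semaphore(1)   # binary semaphore: value is only ever 0 or 1
counter = 0

def worker():
    global counter
    for _ in range(100_000):
        mutex.acquire()          # wait / P operation
        counter += 1             # critical section
        mutex.release()          # signal / V operation

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                   # 400000 -- no lost updates
```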
4. What is thrashing?
It is a phenomenon in virtual memory schemes in which the processor spends most of its time swapping pages rather than executing instructions. It is caused by an inordinate number of page faults.
5. List the Coffman's conditions that lead to a deadlock.
Mutual Exclusion: Only one process may use a critical resource at a time.
Hold & Wait: A process may be allocated some resources while waiting for others.
No Pre-emption: No resource can be forcibly removed from a process holding it.
Circular Wait: A closed chain of processes exists such that each process holds at least one resource needed by another process in the chain.
6. What are short-, long- and medium-term scheduling?
Long term scheduler determines which programs are admitted to the system for processing. It controls the degree of multiprogramming. Once admitted, a job becomes a process.
Medium term scheduling is part of the swapping function. This relates to processes that are in a blocked or suspended state. They are swapped out of real-memory until they are ready to execute. The swapping-in decision is based on memory-management criteria.
The short-term scheduler, also known as the dispatcher, executes most frequently and makes the finest-grained decision of which process should execute next. This scheduler is invoked whenever an event occurs and may lead to the interruption of one process by preemption.
7. What are turnaround time and response time?
Turnaround time is the interval between the submission of a job and its completion. Response time is the interval between submission of a request, and the first response to that request.
8. What are the typical elements of a process image?
User data: The modifiable part of user space. May include program data, a user stack area, and programs that may be modified.
User program: The instructions to be executed.
System stack: Each process has one or more LIFO stacks associated with it, used to store parameters and calling addresses for procedure and system calls.
Process Control Block (PCB): Information needed by the OS to control the process.
9. What is the Translation Lookaside Buffer (TLB)?
In a cached system, the base addresses of the last few referenced pages are maintained in registers called the TLB, which aids in faster lookup. The TLB contains the page-table entries that have been most recently used. Normally, each virtual memory reference causes two physical memory accesses: one to fetch the appropriate page-table entry and one to fetch the desired data. With a TLB in between, this is reduced to a single physical memory access on a TLB hit.
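A back-of-the-envelope effective-access-time calculation (the 100 ns memory access, 10 ns TLB lookup, and 98% hit ratio are assumed, illustrative numbers):

```python
def effective_access_time(mem_ns, tlb_ns, hit_ratio):
    """TLB hit: one memory access; TLB miss: page-table fetch plus data fetch."""
    hit_cost = tlb_ns + mem_ns
    miss_cost = tlb_ns + 2 * mem_ns
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

# Illustrative numbers: 100 ns memory, 10 ns TLB, 98% hit ratio
print(effective_access_time(100, 10, 0.98))   # 112.0 ns vs. 200 ns with no TLB
```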
10. What is the resident set and working set of a process?
Resident set is that portion of the process image that is actually in real-memory at a particular instant. Working set is that subset of resident set that is actually needed for execution. (Relate this to the variable-window size method for swapping techniques.)
11. When is a system in safe state?
The set of dispatchable processes is in a safe state if there exists at least one temporal order in which all processes can be run to completion without resulting in a deadlock.
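One standard way to test for a safe state is the safety check from the Banker's algorithm; a minimal sketch with made-up resource vectors (assuming Python):

```python
def is_safe(available, allocation, need):
    """Return True if some order lets every process run to completion."""
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, (alloc, req) in enumerate(zip(allocation, need)):
            if not finished[i] and all(r <= w for r, w in zip(req, work)):
                # Process i can finish and release everything it holds.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progressed = True
    return all(finished)

# Illustrative 3-process, 2-resource example
print(is_safe(available=[3, 3],
              allocation=[[1, 0], [2, 1], [1, 1]],
              need=[[2, 2], [1, 1], [3, 1]]))   # True -> safe state
```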
12. What is cycle stealing?
We encounter cycle stealing in the context of Direct Memory Access (DMA). Either the DMA controller can use the data bus when the CPU does not need it, or it may force the CPU to temporarily suspend operation. The latter technique is called cycle stealing. Note that cycle stealing can be done only at specific break points in an instruction cycle.
13. What is meant by arm-stickiness?
If one or a few processes have a high access rate to data on one track of a storage disk, then they may monopolize the device by repeated requests to that track. This generally happens with most common device scheduling algorithms (LIFO, SSTF, C-SCAN, etc). High-density multisurface disks are more likely to be affected by this than low density ones.
14. What are the stipulations of C2 level security?
C2 level security provides for:
Discretionary Access Control
Identification and Authentication
Auditing
Resource reuse
15. What is busy waiting?
The repeated execution of a loop of code while waiting for an event to occur is called busy-waiting. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion.
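A small sketch of busy-waiting in Python (illustrative; the spinning thread burns CPU cycles re-testing a flag, where a blocking primitive such as threading.Event would let the scheduler run other work):

```python
import threading, time

flag = {"ready": False}

def busy_wait():
    # Busy waiting: the loop body does no useful work; it just re-tests the
    # condition, keeping the CPU occupied until another thread sets the flag.
    while not flag["ready"]:
        pass
    print("event observed")

t = threading.Thread(target=busy_wait)
t.start()
time.sleep(0.1)
flag["ready"] = True    # a blocking primitive (e.g. threading.Event) avoids the spin
t.join()
```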
16. Explain the popular multiprocessor thread-scheduling strategies.
Load Sharing: Processes are not assigned to a particular processor. A global queue of threads is maintained, and each processor, when idle, selects a thread from this queue. Note that load balancing refers to a scheme where work is allocated to processors on a more permanent basis.
Gang Scheduling: A set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis. Closely related threads/processes may be scheduled this way to reduce synchronization blocking and minimize process switching. Group scheduling predated this strategy.
Dedicated processor assignment: Provides implicit scheduling defined by the assignment of threads to processors. For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program. Processors are chosen from the available pool.
Dynamic scheduling: The number of threads in a program can be altered during the course of execution.
17. When does the condition 'rendezvous' arise?
In message passing, it is the condition in which both the sender and the receiver are blocked until the message is delivered.
18. What is a trap and trapdoor?
A trapdoor is a secret, undocumented entry point into a program, used to grant access without the normal methods of access authentication. A trap is a software interrupt, usually the result of an error condition.
19. What are local and global page replacements?
Local replacement means that an incoming page is brought in only to the relevant process's address space. A global replacement policy allows any page frame from any process to be replaced. The latter is applicable only to the variable partitions model.
20. Define latency, transfer and seek time with respect to disk I/O.
Seek time is the time required to move the disk arm to the required track. Rotational delay, or latency, is the time it takes for the beginning of the required sector to reach the head. The sum of seek time (if any) and latency is the access time. The time taken to actually transfer a span of data is the transfer time.
22. What is time-stamping?
It is a technique proposed by Lamport, used to order events in a distributed system without the use of clocks. The scheme is intended to order events consisting of the transmission of messages. Each system i in the network maintains a counter Ci. Every time a system transmits a message, it increments its counter by 1 and attaches the time-stamp Ti to the message. When a message is received, the receiving system j sets its counter Cj to 1 more than the maximum of its current value and the incoming time-stamp Ti. At each site, the ordering of messages is determined by the following rule: for messages x from site i and y from site j, x precedes y if (a) Ti < Tj, or (b) Ti = Tj and i < j.
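A minimal Lamport-clock sketch under these rules (assuming Python; class and variable names are illustrative):

```python
class LamportClock:
    def __init__(self, site_id):
        self.site_id = site_id
        self.counter = 0

    def send(self):
        """Increment before transmitting; attach (counter, site_id) to the message."""
        self.counter += 1
        return (self.counter, self.site_id)

    def receive(self, timestamp):
        """On receipt, set the counter to 1 more than max(current, incoming)."""
        incoming, _ = timestamp
        self.counter = max(self.counter, incoming) + 1

# Total order: (Ti, i) precedes (Tj, j) if Ti < Tj, or Ti == Tj and i < j.
a, b = LamportClock(1), LamportClock(2)
msg = a.send()                   # (1, 1)
b.receive(msg)                   # b.counter becomes 2
print(sorted([msg, b.send()]))   # tuple comparison applies the ordering rule
```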
23. How are the wait/signal operations for monitor different from those for semaphores?
If a process in a monitor signals and no task is waiting on the condition variable, the signal is lost; this allows easier program design. With semaphores, every operation affects the value of the semaphore, so the wait and signal operations must be perfectly balanced in the program.
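The difference can be illustrated with Python's threading primitives (a sketch, not classical monitor syntax): a notify on a condition variable with no waiter is lost, while a semaphore release is remembered in its count.

```python
import threading

# Condition variable (monitor-style): a notify with no waiter is simply lost.
cond = threading.Condition()
with cond:
    cond.notify()          # no thread is waiting -> this signal has no effect

# Semaphore: the release is remembered in the counter.
sem = threading.Semaphore(0)
sem.release()              # counter becomes 1
print(sem.acquire(blocking=False))   # True -- the earlier release was not lost
```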
24. In the context of memory management, what are placement and replacement algorithms?
Placement algorithms determine where in available real memory to load a program; common methods are first-fit, next-fit, and best-fit. Replacement algorithms are used when memory is full and one process (or part of a process) needs to be swapped out to accommodate a new program. The replacement algorithm determines which partitions are to be swapped out.
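A minimal sketch of first-fit and best-fit placement over a list of free partitions (assuming Python; the hole sizes are illustrative):

```python
def first_fit(free_blocks, request):
    """Return the index of the first free block large enough, or None."""
    for i, size in enumerate(free_blocks):
        if size >= request:
            return i
    return None

def best_fit(free_blocks, request):
    """Return the index of the smallest free block that still fits, or None."""
    candidates = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
    return min(candidates)[1] if candidates else None

holes = [120, 50, 300, 90]          # free partition sizes in KB (illustrative)
print(first_fit(holes, 80))         # 0 -- the 120 KB hole
print(best_fit(holes, 80))          # 3 -- the 90 KB hole wastes the least space
```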
25. In loading programs into memory, what is the difference between load-time dynamic linking and run-time dynamic linking?
With load-time dynamic linking: the load module to be loaded is read into memory, and any reference to a target external module causes that module to be loaded, with the references updated to a relative address from the start base address of the application module.
With run-time dynamic linking: some of the linking is postponed until a module is actually referenced during execution; the correct module is then loaded and linked.
26. What are demand- and pre-paging?
With demand paging, a page is brought into memory only when a location on that page is actually referenced during execution. With pre-paging, pages other than the one demanded by a page fault are brought in. The selection of such pages is done based on common access patterns, especially for secondary memory devices.
27. Paging is a memory-management function and multiprogramming is a processor-management function; are the two interdependent?
Yes.
28. What is page cannibalizing?
Page swapping or page replacements are called page cannibalizing.
29. What has triggered the need for multitasking in PCs?
Increased speed and memory capacity of microprocessors, together with support for virtual memory, and
Growth of client-server computing.
30. What are the four layers that Windows NT has in order to achieve independence?
Hardware abstraction layer
Kernel
Subsystems
System Services.
31. What is SMP?
To achieve maximum efficiency and reliability, a mode of operation known as symmetric multiprocessing (SMP) is used. In essence, with SMP any process or thread can be assigned to any processor.
32. What are the key object oriented concepts used by Windows NT?
Encapsulation
Object class and instance
33. Is Windows NT a full blown object oriented operating system? Give reasons.
No. Windows NT is not a fully object-oriented operating system because it is not implemented in an object-oriented language, its data structures reside within one executive component and are not represented as objects, and it does not support object-oriented capabilities.
34. What is a drawback of MVT?
It does not have features such as:
ability to support multiple processors
virtual storage
source level debugging
35. What is process spawning?
When the OS creates a process at the explicit request of another process, this action is called process spawning.
36. How many jobs can be run concurrently on MVT?
15 jobs
37. List out some reasons for process termination.
Normal completion
Time limit exceeded
Memory unavailable
Bounds violation
Protection error
Arithmetic error
Time overrun
I/O failure
Invalid instruction
Privileged instruction
Data misuse
Operator or OS intervention
Parent termination.
38. What are the reasons for process suspension?
1. swapping
2. interactive user request
3. timing
4. parent process request
39. What is process migration?
It is the transfer of a sufficient amount of the state of a process from one machine to the target machine.
40. What is mutant?
In Windows NT, a mutant provides kernel-mode or user-mode mutual exclusion with the notion of ownership.
41. What is an idle thread?
The special thread a dispatcher will execute when no ready thread is found.
42. What is FtDisk?
It is a fault tolerance disk driver for Windows NT.
43. What are the possible states a thread can have?
Ready
Standby
Running
Waiting
Transition
Terminated.
44. What are rings in Windows NT?
Windows NT uses a protection mechanism called rings, provided by the processor, to implement separation between user mode and kernel mode.
45. What is Executive in Windows NT?
In Windows NT, executive refers to the operating system code that runs in kernel mode.
46. What are the sub-components of I/O manager in Windows NT?
Network redirector/Server
Cache manager
File systems
Network driver
Device driver
47. What are DDKs? Name an operating system that includes this feature.
DDKs are device driver kits, which are equivalent to SDKs for writing device drivers. Windows NT includes DDKs.
48. What level of security does Windows NT meet?
C2 level security.
1. What are the basic functions of an operating system? -
The operating system controls and coordinates the use of the hardware among the various application programs. It acts as a resource allocator and manager: since there are many, possibly conflicting, requests for resources, the operating system must decide which requests are allocated resources so that the computer system operates efficiently and fairly. The operating system is also a control program that controls user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.
2. Why paging is used? -
Paging is a solution to the external fragmentation problem: it permits the logical address space of a process to be noncontiguous, allowing a process to be allocated physical memory wherever it is available.
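A minimal sketch of the paging mechanics (assuming Python; the 4 KB page size and the toy page table are illustrative):

```python
PAGE_SIZE = 4096                      # 4 KB pages (illustrative)
page_table = {0: 7, 1: 3, 2: 11}      # page number -> frame number (toy mapping)

def translate(logical_address):
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]          # a real MMU raises a page fault on a miss
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))         # page 1, offset 0x234 -> frame 3 -> 0x3234
```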
3. While running DOS on a PC, which command would be used to duplicate the entire diskette? diskcopy
4. What resources are used when a thread created? How do they differ from those when a process is created? -
When a thread is created, it does not require new resources to execute; the thread shares the resources, such as memory, of the process to which it belongs. The benefit of this sharing is that it allows an application to have several different threads of activity within the same address space. Process creation, by contrast, is heavyweight because it always requires a new address space to be created, and even if processes share memory, inter-process communication is expensive compared to communication between threads.
5. What is virtual memory? -
Virtual memory is a technique by which the system appears to have more memory than it actually does. This is done by time-sharing the physical memory and storing parts of memory on disk when they are not actively being used.
6. What is Throughput, Turnaround time, waiting time and Response time? -
Throughput: the number of processes that complete their execution per time unit. Turnaround time: the amount of time to execute a particular process. Waiting time: the amount of time a process has been waiting in the ready queue. Response time: the amount of time from when a request is submitted until the first response is produced, not the complete output (for a time-sharing environment).
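A worked example of these metrics under first-come-first-served scheduling (illustrative burst times, all jobs assumed to arrive at time 0):

```python
# FCFS with all jobs arriving at time 0; burst times are illustrative.
bursts = [6, 3, 8]                       # CPU bursts of P1, P2, P3
completion, t = [], 0
for b in bursts:
    t += b
    completion.append(t)                 # [6, 9, 17]

turnaround = completion                  # completion - arrival (arrival = 0)
waiting = [c - b for c, b in zip(completion, bursts)]   # [0, 6, 9]
throughput = len(bursts) / completion[-1]               # 3 jobs / 17 time units

print(turnaround, waiting, round(throughput, 3))
```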
7. What is the state of the processor, when a process is waiting for some event to occur? - Waiting state
8. What is the important aspect of a real-time system or Mission Critical Systems? -
A real-time operating system has well-defined, fixed time constraints. Processing must be done within the defined constraints or the system will fail. An example is the operating system for the flight-control computer of an advanced jet airplane. Such systems are often used as control devices in dedicated applications such as controlling scientific experiments, medical imaging systems, industrial control systems, and some display systems. Real-time systems may be either hard or soft real-time. Hard real-time: secondary storage is limited or absent, and data is stored in short-term memory or read-only memory (ROM); this conflicts with time-sharing and is not supported by general-purpose operating systems. Soft real-time: of limited utility in industrial control or robotics, but useful in applications (multimedia, virtual reality) requiring advanced operating-system features.
9. What is the difference between Hard and Soft real-time systems? -
A hard real-time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. In a soft real-time system, a critical real-time task gets priority over other tasks and retains that priority until it completes. As in hard real-time systems, kernel delays need to be bounded.
10. What is the cause of thrashing? How does the system detect thrashing? Once it detects thrashing, what can the system do to eliminate this problem? -
Thrashing is caused by under-allocating the minimum number of pages required by a process, forcing it to page-fault continuously. The system can detect thrashing by evaluating the level of CPU utilization compared to the level of multiprogramming. It can be eliminated by reducing the level of multiprogramming.
11. What is multi tasking, multi programming, multi threading? - Multi programming:
Multiprogramming is the technique of running several programs at a time using timesharing. It allows a computer to do several things at the same time and creates logical parallelism. The operating system keeps several jobs in memory simultaneously, selects a job from the job pool, and starts executing it; when that job needs to wait for an I/O operation, the CPU is switched to another job, so the main idea is that the CPU is never idle. Multi tasking:
Multitasking is the logical extension of multiprogramming. The concept is quite similar, but the difference is that switching between jobs occurs so frequently that users can interact with each program while it is running. This concept is also known as time-sharing. A time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared system. Multi threading:
An application is typically implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have many clients accessing it concurrently. If the web server ran as a traditional single-threaded process, it could service only one client at a time, and the time a client might have to wait for its request to be serviced could be enormous. So it is efficient to have one process that contains multiple threads serving the same purpose. This approach multithreads the web-server process: the server creates a thread that listens for client requests, and when a request arrives, rather than creating another process it creates another thread to service the request. Multithreading is used to gain responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.
12. What is hard disk and what is its purpose? -
A hard disk is a secondary storage device that holds data in bulk on the magnetic medium of the disk. Hard disks have rigid platters that hold the magnetic medium; the magnetic medium can be easily erased and rewritten, and a typical desktop machine will have a hard disk with a capacity of between 10 and 40 gigabytes. Data is stored on the disk in the form of files.
13. What is fragmentation? Different types of fragmentation? - Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request. External Fragmentation:
External Fragmentation happens when a dynamic memory allocation
algorithm allocates some memory and a small piece is left over that
cannot be effectively used. If too much external fragmentation occurs,
the amount of usable memory is drastically reduced. Total memory space
exists to satisfy a request, but it is not contiguous. Internal Fragmentation:
Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
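A quick arithmetic illustration of internal fragmentation with an assumed 4 KB allocation granularity:

```python
import math

BLOCK = 4096                         # allocator hands out memory in 4 KB blocks (illustrative)
request = 4100                       # bytes actually requested

allocated = math.ceil(request / BLOCK) * BLOCK   # 8192 bytes handed out
print(allocated - request)                       # 4092 bytes of internal fragmentation
```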
14. What is DRAM? In which form does it store data? -
DRAM is not the best, but it’s cheap, does the job, and is available
almost everywhere you look. DRAM data resides in a cell made of a
capacitor and a transistor. The capacitor tends to lose data unless it’s
recharged every couple of milliseconds, and this recharging tends to
slow down the performance of DRAM compared to speedier RAM types.
15. What is Dispatcher? -
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.
16. What is CPU Scheduler? -
The CPU scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. CPU scheduling decisions may take place when a process: 1. switches from running to waiting state; 2. switches from running to ready state; 3. switches from waiting to ready; 4. terminates. Scheduling under 1 and 4 is non-preemptive; all other scheduling is preemptive.
17. What is Context Switch? -
Switching the CPU to another process requires saving the state of the
old process and loading the saved state for the new process. This task
is known as a context switch. Context-switch time is pure overhead,
because the system does no useful work while switching. Its speed varies
from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).
18. What is cache memory? -
Cache memory is random access memory (RAM) that a computer
microprocessor can access more quickly than it can access regular RAM.
As the microprocessor processes data, it looks first in the cache memory
and if it finds the data there (from a previous reading of data), it
does not have to do the more time-consuming reading of data from larger
memory.
19. What is a Safe State and what is its use in deadlock avoidance? -
When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state. The system is in a safe state if there exists a safe sequence of all processes. Deadlock avoidance: ensure that a system will never enter an unsafe state.
20. What is a Real-Time System? -
A real-time process is a process that must respond to events within a certain time period. A real-time operating system is an operating system that can run real-time processes successfully.