Term
What is a race condition? |
Definition
when different computational results occur depending on the particular timing and resulting order of execution of statements across separate threads or processes |
|
|
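The race condition defined above can be demonstrated with a small sketch (a hypothetical example, not from the source): two threads each read a shared counter, pause, then write back, so one update is lost.

```python
import threading
import time

counter = 0  # shared variable, updated without synchronization

def increment():
    global counter
    value = counter      # read the shared counter
    time.sleep(0.1)      # pause so both threads read before either writes
    counter = value + 1  # write back: the later write clobbers the earlier one

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both threads read 0, so the final value is 1 instead of the expected 2.
print(counter)
```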
Term
What is one way to fix a race condition |
|
Definition
Ensure that only one process or thread at a time is allowed to change a variable or I/O device until that process has completed its required sequence of operations |
|
|
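A minimal sketch of the fix described above, using a lock so only one thread at a time performs the read-pause-write sequence (hypothetical example):

```python
import threading
import time

counter = 0
lock = threading.Lock()  # guards the read-modify-write sequence

def increment():
    global counter
    with lock:               # only one thread may run this block at a time
        value = counter
        time.sleep(0.1)
        counter = value + 1  # the update is no longer lost

threads = [threading.Thread(target=increment) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 2: both updates survive
```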
Term
What occurred with the Therac-25 and when did it happen? |
|
Definition
Between June 1985 and July 1987, three patients were killed due to a race condition associated with the command screen, because the software was improperly synchronized.
|
|
|
Term
What is mutual exclusion? |
|
Definition
If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
|
|
|
Term
What is progress (in the critical section problem)? |
Definition
If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely
|
|
|
Term
What is bounded waiting? |
Definition
There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
|
|
|
Term
What are three ways to solve the critical section problem? |
|
Definition
1. Use a shared (global) turn variable: initialize it before any thread enters the critical section, check its value before entering, and set it again after the thread has exited
2. Use per-thread flags to indicate intent to enter
3. Use a combination of both (Peterson's algorithm)
|
|
|
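The classic combination of a turn variable and flags is Peterson's algorithm for two threads. A sketch is below (this demo relies on CPython's GIL for memory visibility; on real hardware it needs memory barriers):

```python
import threading

flag = [False, False]  # flag[i]: thread i wants to enter
turn = 0               # which thread yields when both want in
counter = 0
N = 1000

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(N):
        flag[i] = True    # announce intent to enter
        turn = other      # give priority to the other thread
        while flag[other] and turn == other:
            pass          # busy-wait (spin) until it is safe to enter
        counter += 1      # critical section
        flag[i] = False   # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 2000: no increment is lost
```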
Term
What are some issues with the critical section problem? |
|
Definition
1. The code can be complex and hard to figure out
2. Busy Waiting |
|
|
Term
What is busy waiting? |
Definition
When a thread actively takes up CPU cycles when waiting for other processes to exit from the critical section. |
|
|
Term
What is a semaphore? |
Definition
A protected variable or abstract data type that constitutes a classic method of controlling access by several processes to a common resource in a parallel programming environment |
|
|
Term
What is a spinlock? |
Definition
When a semaphore busy-waits because the process spins (continually uses CPU) while waiting for the semaphore (the "lock") |
|
|
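A sketch of a counting semaphore controlling access to a shared resource (note that Python's `threading.Semaphore` blocks waiters rather than spinning; the thread counts here are illustrative assumptions):

```python
import threading
import time

sem = threading.Semaphore(2)  # at most 2 threads may use the resource at once
active = 0
max_active = 0
lock = threading.Lock()       # protects the bookkeeping counters

def use_resource():
    global active, max_active
    with sem:                 # wait (P) on entry, signal (V) on exit
        with lock:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.05)      # simulate using the shared resource
        with lock:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_active)  # never exceeds 2
```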
Term
What is an alternative to busy waiting? |
|
Definition
Blocking: the waiting process is placed on a wait queue and uses no CPU while it waits |
|
Term
What does it mean for a process to be blocked? |
Definition
When a process enters a wait queue and uses no CPU resources while in the wait queue. |
|
|
Term
When a process is blocked, can it be busy waiting? |
|
Definition
A process that is blocked is on a waiting queue, and not using the CPU. Thus, it cannot be busy-waiting, because it’s not using the CPU.
|
|
|
Term
Why is turning off interrupts not a general solution for solving the critical section problem?
|
|
Definition
Turning off interrupts leads to a decrease in responsiveness to external events. Devices often use interrupts, and if interrupts are turned off these devices (external events) cannot be serviced. Therefore, the technique should only be used for critical sections that are very short, and only by system code (i.e., the operating system).
|
|
|
Term
Is a blocked process necessarily deadlocked? |
|
Definition
A process is deadlocked if and only if there is no way for it to become unblocked. A blocked process is not deadlocked if there is some sequence of operations that will unblock it (not including the termination of a process by some external event). Note that this does not imply the unblocking sequence will actually happen; another sequence of events could lead to the process becoming deadlocked.
|
|
|
Term
Can a process's RAM memory be preempted? Yes or No (Circle one). Explain. Based on your answer to this question, can RAM memory be involved as a resource in a deadlock? Yes or No. (Circle one). Explain.
|
|
Definition
Yes, since the contents of RAM can be written to disk to allow memory swapping, preemption is possible and the preempted process can always regain the information from the hard drive when it is swapped
back into RAM. No, RAM memory cannot be involved as a resource in a deadlock because it can be preempted.
|
|
|
Term
How can two (heavy-weight) processes share memory in a paged memory environment?
|
|
Definition
Two HWP’s (Heavy weight processes) can share memory in a paged
memory environment by referencing the same physical frame from
their page tables.
|
|
|
Term
Why might it be useful for two heavy-weight processes to share memory? |
|
Definition
This can be useful to enable HWP's to synchronize and communicate. |
|
|
Term
Contrast sharing memory between two HWP's and sharing memory between two light-weight processes within the same HWP |
|
Definition
By default, HWP’s don’t share writable RAM. By
default, light-weight processes (threads) within a HWP share writable RAM.
Specific system calls are needed to enable HWP’s to share frames of RAM for
writing.
|
|
|
Term
Which of the following instructions require an MMU (memory management unit) logical to physical address translation?
a. LOAD R1, R2 ; load machine register R1 from machine register R2
b. LOAD R1, Variable ; load machine register R1 from Variable
c. JMP Label1 ; unconditional branch to Label1
|
|
Definition
Each time a (logical) address is generated, and before the memory access is made, an MMU logical to physical address translation is required. That is, once for instruction (a), twice for (b), and once for (c).
|
|
|
Term
What is a bounded buffer? |
|
Definition
A fixed-size buffer shared by two processes: a producer that inserts items into the buffer and a consumer that removes them |
|
|
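A bounded buffer can be sketched with Python's `queue.Queue`, whose `maxsize` gives the bound: the producer blocks when the buffer is full and the consumer blocks when it is empty (the sizes and sentinel are illustrative assumptions):

```python
import threading
import queue

buffer = queue.Queue(maxsize=4)  # bounded buffer: at most 4 items at a time
consumed = []

def producer():
    for i in range(10):
        buffer.put(i)            # blocks if the buffer already holds 4 items
    buffer.put(None)             # sentinel: no more items

def consumer():
    while True:
        item = buffer.get()      # blocks if the buffer is empty
        if item is None:
            break
        consumed.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(consumed)  # [0, 1, 2, ..., 9] in order
```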
Term
What is synchronization abstraction within monitors? |
|
Definition
The monitor automatically ensures that only one thread can be active within it at a time |
|
|
Term
When talking about monitors what is a wait? |
|
Definition
When the invoking thread is suspended until another thread calls signal on the same condition variable and gives up the lock on the monitor |
|
|
Term
When talking about monitors what is a signal? |
|
Definition
It resumes exactly one suspended process associated with the condition variable. It has no effect if no process is suspended |
|
|
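Monitor-style wait and signal can be sketched with Python's `threading.Condition`, which bundles the monitor lock and a condition variable (the `ready` flag and thread names are illustrative assumptions):

```python
import threading

cond = threading.Condition()  # monitor lock + condition variable
ready = False
result = []

def waiter():
    with cond:               # enter the monitor (acquire its lock)
        while not ready:     # re-check the condition after every wakeup
            cond.wait()      # give up the lock and suspend this thread
        result.append("woke up")

def signaler():
    global ready
    with cond:
        ready = True
        cond.notify()        # resume exactly one suspended thread;
                             # no effect if none is suspended

t1 = threading.Thread(target=waiter)
t2 = threading.Thread(target=signaler)
t1.start()
t2.start()
t1.join()
t2.join()

print(result)  # ['woke up']
```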
Term
What is a deadlock? |
Definition
When every process in a set of processes is waiting for an event that can be caused only by another process in the set |
|
|
Term
What are some examples of resources? |
|
Definition
– a lock on a java Object through a synchronized method
– a lock on a Semaphore
– mutually exclusive access to a digital video camera to control position
– mutual exclusion to draw a certain object onto a computer display
– mutual exclusion to print a file to a printer
– use of a server thread instance from a pool of server threads (may be a fixed number of server threads)
|
|
|
Term
What are the necessary conditions for a deadlock? |
|
Definition
• 1) Mutual exclusion
– There must be at least one resource that cannot be shared amongst processes
– “Cannot be shared”
• Request/use/release style of resource usage
• 2) Hold & wait
– There must be at least one process holding a resource and waiting on another
• 3) No preemption
– There can be no preemption of resources
– E.g., generally don’t get a deadlock with CPU resource because O/S typically uses preemptive scheduling
• 4) Circular wait
– There must be a blocked set of processes: {P0, …, PN-1}
– Where P(i+1 mod n) is waiting on a resource held by Pi
|
|
|
Term
If there is a deadlock is there a circular wait or if there is a circular wait is there a deadlock? |
|
Definition
If there is a deadlock, there is a circular wait, and if there is no circular wait, there is no deadlock; however, there are cases where there is a circular wait but no deadlock |
|
|
Term
What are the ways to deal with deadlocks? |
|
Definition
prevention, avoidance (prediction), detection/recovery, user detection and recovery |
|
|
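One prevention technique is resource ordering, which breaks the circular-wait condition: every thread acquires locks in the same fixed order, so a cycle of waiting threads can never form. A sketch (lock and thread names are illustrative assumptions):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

# Prevention by ordering: every thread acquires lock_a before lock_b,
# so no thread can hold lock_b while waiting on lock_a, and a
# circular wait can never form.
def worker(name):
    with lock_a:
        with lock_b:
            done.append(name)  # critical work using both resources

threads = [threading.Thread(target=worker, args=(n,)) for n in ("t1", "t2")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(done))  # ['t1', 't2']: both threads finish, no deadlock
```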
Term
What is a resource allocation state? |
|
Definition
• A “snapshot” (at a particular time) of the current allocation of resources to processes
• Assumes a fixed set of processes, and a fixed set of resources
• Does not include any change in allocation of resources to processes because that is the resource allocation state over time (I.e., several resource allocation states)
|
|
|
Term
What is the deadlock avoidance algorithm? |
|
Definition
1. Each process indicates the maximum number of units of each resource type it can request, before it starts requesting any resources
2. Process makes a request for resources
3. System determines, if after granting that request, whether the resource allocation state of the system will be safe
States that are safe are not deadlocked, so we are ensuring that the next resource allocation state of the system will not be deadlocked
4. If the next state will be safe, grant the request
5. Otherwise, block the requesting process until the request can be safely granted
6. Follow this procedure for every resource request
|
|
|
Term
When talking about allocation tables, when is it in a safe state? |
|
Definition
• A state is safe if the system can allocate resources to each process (up to its maximum) in some order and still avoid a deadlock
• A system is in a safe state only if there exists a safe sequence
|
|
|
Term
What is the safety algorithm? |
|
Definition
1. Let Work and Finish be vectors of length m and n, respectively (n= number of processes; m = number of resources). Initialize: Work = Available
Finish [i] = false for i = 1,2,3, …, n. // dealt with process i?
2. Find i such that both:
(a) Finish [i] == false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi // simulate process i terminating
Finish[i] = true
go to step 2.
4. If Finish [i] == true for all i, then the system is in a safe state. Otherwise, in unsafe state.
|
|
|
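The steps of the safety algorithm above can be sketched directly (the example state is the classic five-process, three-resource-type Banker's example; it is illustrative, not from the source):

```python
def is_safe(available, allocation, need):
    """Safety algorithm: True if some order lets every process finish."""
    n = len(allocation)        # number of processes
    work = list(available)     # Step 1: Work = Available
    finish = [False] * n       # Finish[i] = False for all i
    while True:
        # Step 2: find i with Finish[i] == False and Need_i <= Work
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j]
                                     for j in range(len(work))):
                # Step 3: simulate process i terminating, releasing resources
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finish[i] = True
                break
        else:
            break              # no such i exists: go to step 4
    return all(finish)         # Step 4: safe iff every process can finish

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: a safe sequence exists
```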
Term
What are the types of RAM memory addresses? |
|
Definition
• Symbolic addresses
• Relocatable addresses
• Absolute (physical) addresses
|
|
|
Term
What is address binding? |
Definition
Converting the (relative or symbolic) address used in a program to an actual physical (absolute) RAM address
|
|
|
Term
When does address binding time occur? |
|
Definition
- During compilation (or assembly)
- During load time
- During execution
|
|
|
Term
What does a Memory Management Unit (MMU) do? |
|
Definition
• Translates logical addresses to physical addresses
– I.e., performs address binding
• Also enables memory protection
|
|
|
Term
What are some hardware implementation possibilities for an MMU? |
|
Definition
• Relocation register
• Base & limit register
• Page table support
• Segmentation support
|
|
|
Term
What is contiguous memory allocation? |
|
Definition
– The entire program is loaded into a single contiguous region of RAM
– A relatively simple strategy; memory protection can be provided by an MMU with base & limit registers
|
|
|
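Base & limit protection for contiguous allocation can be sketched as a check-then-relocate step (the `relocate` helper and the base/limit values are hypothetical, for illustration only):

```python
def relocate(logical_addr, base, limit):
    """MMU with base & limit registers: check the address, then relocate it."""
    if logical_addr < 0 or logical_addr >= limit:
        # address falls outside the process's region: protection fault (trap)
        raise MemoryError("protection fault: address outside process region")
    return base + logical_addr  # relocation: physical = base + logical

# Process region starts at physical 4000 and is 1000 bytes long.
print(relocate(100, base=4000, limit=1000))  # 4100
```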
Term
What are some ways to prevent running out of memory? |
|
Definition
1. Swapping out one process for another when needed.
2. Load a program in parts |
|
|
Term
What is non-contiguous memory allocation? |
|
Definition
Breaking the program up into fixed-size, relatively small units that can be placed in separate regions of RAM |
|
|
Term
Describe a Paged Memory Allocation |
|
Definition
• Logical address space of process is contiguous
• However, in physical memory, the frames holding those pages can be non-contiguous
• Each logical page of a process can be stored in any (different) physical frame of RAM
|
|
|
Term
What does a page table do? |
|
Definition
Page table maps from logical page numbers of a process to frame numbers of physical RAM
|
|
|
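The page-table mapping above can be sketched as an address translation (the page size and page-table contents are assumed for illustration):

```python
PAGE_SIZE = 4096                 # assumed page/frame size in bytes

page_table = {0: 5, 1: 2, 2: 7}  # logical page number -> physical frame number

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE    # page number part of the address
    offset = logical_addr % PAGE_SIZE   # offset within the page
    frame = page_table[page]            # page table lookup (the MMU's job)
    return frame * PAGE_SIZE + offset   # physical address

# Logical address 4100 = page 1, offset 4; page 1 maps to frame 2.
print(translate(4100))  # 2 * 4096 + 4 = 8196
```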
Term
What does the MMU with paging look like? |
|
Definition
The logical address is split into a page number and an offset; the page number indexes the page table to obtain a frame number, which is combined with the offset to form the physical address |
|
Term
How is memory protection in a paged environment accomplished? |
|
Definition
A) Only frames that map through the page table can be accessed
B) Page table entries can be extended to include protection information (page-table length register can accomplish part of this goal)
|
|
|
Term
What is the solution when the full page table cannot be stored in fast memory in the MMU? |
|
Definition
Keep some page table entries in special, fast cache memory (the TLB, or translation look-aside buffer)
|
|
|
Term
What is the formula for Effective Access Time (EAT)?
|
|
Definition
EAT = P(hit) * hit-time + P(miss) * miss-time |
|
|
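A worked example of the EAT formula (the hit rate and access times below are assumed numbers, not from the source):

```python
def eat(p_hit, hit_time, miss_time):
    """Effective Access Time: weighted average of hit and miss costs."""
    return p_hit * hit_time + (1 - p_hit) * miss_time

# Assumed: 98% TLB hit rate, 120 ns on a hit (TLB lookup + memory access),
# 220 ns on a miss (TLB lookup + page table access + memory access).
print(eat(0.98, 120, 220))  # 0.98*120 + 0.02*220 = 122.0 ns
```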
Term
Why would we expect a relatively large percentage of TLB hits?
|
|
Definition
• Locality of reference!
– When program code and/or data that process is using comprise a relatively small collection of pages
|
|
|
Term
When a process with a different address space is
started, we may not be able to use the same TLB
entries. Why?
|
|
Definition
Because the new address space has a different page table, which provides different mappings from logical page numbers to frame numbers.
|
|
|
Term
Why would we have to keep page tables contiguously allocated? |
|
Definition
Because the page table is accessed using an index (the page number), it must be contiguous, and it wouldn't make sense to have a page table for the page table itself. However, because much of the other RAM is allocated non-contiguously in pages, we may not be able to obtain such large contiguous regions of memory.
|
|
|
Term
Can we do memory protection with an MMU implemented with a |
|
Definition
No; memory outside the intended region could still be accessed |
|
|