
Top 10 List of Week 07

  1. Process Synchronization
    Let’s start with the basics since most of what we learned this week revolves around this concept. Process synchronization is what the computer does when two or more processes cooperate with each other. Formally, a cooperative process is a process that can affect or be affected by the execution of other processes. In this case, the order of execution of the different processes must be preserved, as both may refer to information generated by the previously executed process. Computers utilize process synchronization in order to preserve the order of these cooperative processes. There are various synchronization mechanisms and concepts related to this, which I’ll discuss below.

  2. Race Condition
    A race condition occurs when more than one process executes code or shares resources in such a way that the result depends on the relative timing or interleaving of the threads or processes involved. Since several different outcomes are possible, the behaviour is classified as nondeterministic, and it often shows up as hard-to-reproduce bugs. Thread-safe code, on the other hand, is code that remains free of race conditions when accessed by multiple threads.

  3. Critical Sections, and How To Deal With them
    A critical section is a segment of code that has to be executed as a single atomic action: when multiple cooperating processes want to execute it, only one may do so at any given time. A solution to this critical section problem must satisfy three conditions: Mutual Exclusion (only one process may be inside the critical section at once), Progress (if no process is in the critical section, one of the processes requesting access must be allowed in without indefinite postponement), and Bounded Waiting (there is a limit on how many times other processes can enter the critical section before a waiting process's request is granted).

  4. Peterson’s Solution (to the Critical Section Problem)
    There isn’t a Wikipedia page for the inventor of this solution, Gary L. Peterson, so I’m unable to properly give the gist of what this guy is all about, which is kind of sad given how important this topic seems to be in the field of computer science. With that being said, Peterson’s Algorithm is a software-based solution to the critical section problem. While my knowledge hasn’t reached the point where I can fully wrap my head around this algorithm, from what I can gather it involves two processes that take turns executing the critical section based on a boolean flag array and a turn variable checked inside a while loop. This satisfies the conditions in point number 3, though it has the disadvantages of being limited to only two processes and of involving busy waiting.

  5. Semaphores
    This week’s assignment involved the use of semaphores to signal which process should be executed when. In my case, I had to arrange the execution of some processes in a certain order, so I used semaphores to determine which process signals which at the end of its execution. Semaphores in operating systems (specifically the ones in C’s POSIX semaphore library) are much like their real-life counterparts with the flags and all. To put it simply, they can send signals and force certain bits of code to wait until a particular signal is received.

  6. Deadlocks
    Deadlocks (not to be confused with the hairstyle) are situations in which a process enters a waiting state because another waiting process is holding the resource it demands. This is actually a fairly common problem in multiprocessing, where several processes share a mutually exclusive resource, typically guarded by a software lock.

  7. Deadlock Requirements
    As I’ve said earlier, deadlocks are a pretty nasty occurrence. Therefore, it is in the best interest of system designers and programmers to prevent such a thing from happening in the first place. There are four conditions that must all hold for a deadlock to occur. The first is no preemption, meaning a resource can be released only voluntarily by the process holding it, after that process has completed its task. The second is mutual exclusion, meaning at least one resource is non-shareable, so access to it must be controlled (for example by a mutex, a special type of binary semaphore that guards a shared resource). The third is hold and wait, where a process holds one or more resources while simultaneously waiting for one or more others. Finally, there is circular wait, where a chain of processes exists in which each process waits for a resource held by the next one in the chain.

  8. Deadlock Prevention
    To prevent deadlocks from occurring, we must eliminate at least one of the four conditions listed above. We can’t eliminate mutual exclusion, as some resources are inherently non-shareable. We can eliminate hold and wait by allocating all resources to a process before the start of its execution, and by having processes release their current set of resources before requesting new ones. We can eliminate no-preemption by preempting resources from a process when they are required by other, higher-priority processes. We can also eliminate circular wait by assigning each resource type a number, then having processes request resources only in increasing numerical order.

  9. Deadlock Avoidance
    There is a way to avoid deadlocks entirely called the banker’s algorithm, supposedly named because it resembles how a bank checks whether it can safely grant loans to its customers. While I’m not sure if this is true or not (I’m not a banker), it’s helpful to know this algorithm nonetheless. In short, this algorithm was conceived by Edsger Dijkstra (yup, the one who came up with Dijkstra’s Algorithm, among other things). It simulates granting each process the predetermined maximum amount of every resource it could still request, checks whether some order exists in which all the pending processes could finish, and only lets the program continue if the resulting state is safe. This is a gross oversimplification, but you can read all about it in the link I’ve attached, as I could spend days trying to form a coherent explanation of this topic.

  10. What else did I learn this week?
    Not too long ago I discovered the existence of different shells in Linux. This came as a bit of a surprise to me, as I thought that bash was the mainstream standard, which is true to some extent, as most distributions come with bash preinstalled. However, other than bash, there are actually quite a few other shells out there such as tcsh, ksh, zsh, and fish. For some reason, programmers seem to come up with the funniest names for their work (take WINE, PHP, and GNU, which are all recursive acronyms).