Top 10 List of Week 06

  1. Concurrency, in a nutshell
    Concurrency, put simply, is having multiple sequences of instructions in progress at the same time. In an operating system, this happens when several threads execute in parallel (or are interleaved on a single CPU). This ties back to Intro to Computer Organization, where we first saw a basic form of concurrency in the MIPS pipeline.

  2. Threads
    In the previous point, I mentioned the term thread. A thread is an independent path of execution within a program: it has its own program counter (instruction pointer), register values, and stack, while sharing the rest of the process’s code and memory. Threads are the software units that control flow acts on, driven by the instruction pointer. They get their name from the way the sequence of values the instruction pointer takes on forms a path of execution weaving through the program code, hence “thread”.
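
    To make this concrete, here is a minimal sketch using POSIX threads (the worker function and the thread count are just illustrative): each thread gets its own instruction pointer, registers, and stack, but both weave through the same program code.

    ```c
    #include <stdio.h>
    #include <pthread.h>

    /* Each thread runs worker() with its own program counter, registers,
       and stack, while sharing the process's code and global memory. */
    static void *worker(void *arg) {
        int id = *(int *)arg;
        printf("thread %d weaving its own path through the code\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int a = 1, b = 2;
        pthread_create(&t1, NULL, worker, &a);   /* spawn two threads */
        pthread_create(&t2, NULL, worker, &b);
        pthread_join(t1, NULL);                  /* wait for both to finish */
        pthread_join(t2, NULL);
        return 0;
    }
    ```

    Compile with `gcc -pthread`; depending on the scheduler, the two lines can print in either order, which is the concurrency from point 1 in action.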

  3. Context Switching
    Context switching is the act of storing the context (i.e. the state of a process or thread) so that it can be reloaded and resumed when required. This essentially acts as a sort of ‘save point’ in video game terms. This feature is what enables a single CPU to be shared by multiple processes.
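
    You can actually play with the ‘save point’ idea in user space through the POSIX ucontext.h interface. This is only a toy sketch (a real kernel context switch also saves things like memory mappings and privileged state), but it shows the save-and-restore pattern:

    ```c
    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, helper_ctx;
    static char helper_stack[16384];

    /* Runs on its own stack; when it returns, control goes back to main_ctx
       because uc_link points there. */
    static void helper(void) {
        puts("helper: running after the context switch");
    }

    int main(void) {
        getcontext(&helper_ctx);                  /* capture a starting context */
        helper_ctx.uc_stack.ss_sp   = helper_stack;
        helper_ctx.uc_stack.ss_size = sizeof helper_stack;
        helper_ctx.uc_link          = &main_ctx;  /* where to resume afterwards */
        makecontext(&helper_ctx, helper, 0);

        puts("main: saving my state (the 'save point')");
        swapcontext(&main_ctx, &helper_ctx);      /* save main, load helper */
        puts("main: restored and running again");
        return 0;
    }
    ```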

  4. What is a Process, anyway?
    This week, we’ve thrown around the term ‘process’ a lot, but what exactly is it? Basically, a process is a program that is actively being executed on the computer. It includes not only the program code itself but also the program counter, the process stack, register contents, and so on. We might think that a program by itself is a process running in the computer, but that’s a bit of a fallacy. The program is only a passive entity (the contents of an executable file), while a process is the active entity that owns the program counter, registers, and other resources.
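
    On a Unix-like system, the simplest way to watch a new process come to life is fork(): the calling process is duplicated, and the OS gives the child its own control block, program counter, and address space. A minimal sketch:

    ```c
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();              /* duplicate the calling process */
        if (pid == 0) {
            /* child: a brand-new process with its own PCB and pid */
            printf("child:  pid=%d\n", getpid());
        } else {
            printf("parent: pid=%d, created child %d\n", getpid(), pid);
            wait(NULL);                  /* reap the child when it exits */
        }
        return 0;
    }
    ```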

  5. Context Switching vs Interrupt Handling
    If you’re like me and, upon reading about context switching, you thought “huh, this kinda sounds like how interrupts work”, then you’re probably wondering what the difference is between the two. Context switching means storing the state of a thread so it can be reloaded when needed and pick up where it left off. During interrupt handling, on the other hand, the processor receives an asynchronous signal that requires it to pause the current process and immediately run the interrupt handler, and that pause requires a context switch. In other words, interrupt handling is one of the many use cases of context switching. Feels nice when a bunch of topics you already know meet at a nice little crossroads like this, doesn’t it?

  6. Scheduling
    Process scheduling is the act of deciding which of the processes in the ready state gets moved into the running state next. The goal is to keep the CPU busy and deliver minimal latency to the programs being executed. There are two broad approaches: non-preemptive scheduling, where the currently executing process steps away from the CPU on its own, and preemptive scheduling, where the OS forcibly takes the CPU back (for example when a time slice expires) instead of waiting for the current process to give up its resources.
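
    The non-preemptive (cooperative) side is easy to demonstrate: on Linux a task can voluntarily step away from the CPU with sched_yield(), letting the scheduler pick another ready task. Preemption, by contrast, needs no call at all; the kernel simply takes the CPU back when the timer interrupt fires. A small sketch of the cooperative case:

    ```c
    #include <stdio.h>
    #include <sched.h>

    int main(void) {
        for (int i = 0; i < 3; i++) {
            printf("doing a slice of work (%d)\n", i);
            /* voluntarily give up the CPU so another ready task can run */
            sched_yield();
        }
        return 0;
    }
    ```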

  7. System Calls
    System calls are the interface that bridges a process and the operating system. Under the hood they are invoked through special processor instructions that differ from architecture to architecture, though programs usually reach them through a library wrapper. A process makes a system call when it needs access to a resource; the OS interprets the request and carries it out inside the kernel. Among the many types of system calls are those for process control, file management, device management, information maintenance, and interprocess communication.
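
    For example, on Linux the write() function in libc is just a thin wrapper around the write system call, and you can also issue the call directly with syscall(). (SYS_write and the raw syscall() interface shown here are Linux-specific.)

    ```c
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        /* the usual way: the libc wrapper traps into the kernel for us */
        write(1, "via write()\n", 12);

        /* the same request made as a raw system call (Linux-specific) */
        syscall(SYS_write, 1, "via syscall()\n", 14);
        return 0;
    }
    ```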

  8. Compatibility Layers
    If you’ve ever wanted to run a Windows .exe program on a Linux system, you may have tried WINE (Wine Is Not an Emulator). As the name suggests, programs like this aren’t emulators that simulate an entire CPU environment. Instead, what they do is (to me) something much more interesting: they take the Windows API and system calls the program makes and translate them into calls that are native to the running system. Among many other uses, this lets us run Windows-native programs on a Linux system, with varying levels of stability.
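
    To get a feel for the idea, here’s a toy shim in the same spirit (nothing like WINE’s actual code): it implements the Win32 Sleep() call on top of the POSIX nanosleep() that Linux provides natively, which is exactly the kind of translation a compatibility layer does thousands of times over.

    ```c
    #include <time.h>

    typedef unsigned long DWORD;   /* Win32-style type, just for the example */

    /* Toy version of the Win32 Sleep(milliseconds) call, translated into the
       POSIX nanosleep() that the host system actually understands. */
    void Sleep(DWORD milliseconds) {
        struct timespec ts = {
            .tv_sec  = milliseconds / 1000,
            .tv_nsec = (milliseconds % 1000) * 1000000L
        };
        nanosleep(&ts, NULL);
    }
    ```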

  9. How Do Computers Manage Processes?
    Processes are classified based on their state and the information in their process control block; from there, the scheduler takes over. The process states are New (the process has just been created), Ready (it is waiting to be assigned to the processor), Running (its instructions are being executed by the processor), Waiting (it is waiting for some event to occur or complete), and Terminated (it has finished its execution).
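
    A very rough sketch of how those states might be recorded in a process control block (the field names here are made up for illustration, not taken from any real kernel):

    ```c
    /* Hypothetical PCB sketch: each process carries its current state,
       and the scheduler only dispatches processes that are READY. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;        /* process identifier */
        enum proc_state state;      /* one of the five states above */
        /* ... saved program counter, registers, open files, etc. ... */
    };
    ```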

  10. What Else Did I Learn This Week?
    Post-midterm exams, I wanted to do a little bit of refreshing, so I thought I might play some video games on my computer. There was an issue, though: the games I wanted to play were made for Windows. There was a silver lining, however. Over the past few years there’s been a pretty huge leap forward in the development of Proton, a compatibility layer built on top of WINE. As stated before, compatibility layers translate the calls a program makes into native Linux-compatible calls, which allows games written for Windows to run on Linux with pretty negligible performance differences. As of now, a huge 73% of the top 1000 most popular games on Steam can run with Proton, which is honestly pretty mindblowing considering the state of foreign programs on Linux mere years ago. Who says you can’t learn a thing or two from video games? ;)