Posts

Showing posts from July, 2025

CST334 - Week 6 Reflection

Weekly Learning Reflection

The core focus this week was firmly rooted in safely managing access to shared resources using synchronization primitives like mutex locks, semaphores, and condition variables. I learned that mutexes are essential for protecting critical sections, ensuring that only one thread can access shared data at a time. We extended this idea by using condition variables (pthread_cond_t) to block threads until specific conditions are met, a powerful tool for coordinating complex thread interactions. I applied this in the big readers-writers assignment this week, where multiple threads may read from a shared database concurrently, but writes must happen exclusively. Implementing this pattern required carefully checking conditions under the lock, signaling waiting threads, and capping the number of concurrent readers to avoid race conditions and deadlocks. I loved how well the lecture and reading matched up with the programming assignment...
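The readers-writers pattern described above can be sketched with a single pthread mutex and condition variable. This is a minimal illustration under my own assumptions, not the assignment's actual code; the `rwdb_t` struct and the `MAX_READERS` cap are hypothetical names I chose for the sketch:

```c
#include <pthread.h>

#define MAX_READERS 4  /* hypothetical cap on concurrent readers */

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int readers;   /* count of active readers */
    int writing;   /* 1 while a writer holds the database */
} rwdb_t;

void rw_init(rwdb_t *db) {
    pthread_mutex_init(&db->lock, NULL);
    pthread_cond_init(&db->cond, NULL);
    db->readers = 0;
    db->writing = 0;
}

void reader_enter(rwdb_t *db) {
    pthread_mutex_lock(&db->lock);
    /* wait while a writer is active or the reader cap is reached */
    while (db->writing || db->readers == MAX_READERS)
        pthread_cond_wait(&db->cond, &db->lock);
    db->readers++;
    pthread_mutex_unlock(&db->lock);
}

void reader_exit(rwdb_t *db) {
    pthread_mutex_lock(&db->lock);
    db->readers--;
    /* wake waiters: a writer may proceed once readers reaches zero */
    pthread_cond_broadcast(&db->cond);
    pthread_mutex_unlock(&db->lock);
}

void writer_enter(rwdb_t *db) {
    pthread_mutex_lock(&db->lock);
    /* writes are exclusive: wait out both writers and readers */
    while (db->writing || db->readers > 0)
        pthread_cond_wait(&db->cond, &db->lock);
    db->writing = 1;
    pthread_mutex_unlock(&db->lock);
}

void writer_exit(rwdb_t *db) {
    pthread_mutex_lock(&db->lock);
    db->writing = 0;
    pthread_cond_broadcast(&db->cond);
    pthread_mutex_unlock(&db->lock);
}
```

Note the `while` loops around `pthread_cond_wait`: re-checking the condition after waking is what guards against spurious wakeups and lost races between signaled threads.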

CST334 - Week 5 Reflection

Weekly Learning Reflection

This week marked a deep dive into concurrency, parallelism, thread scheduling, critical sections, race conditions, how threads interact with shared memory, and why synchronization is vital in multithreaded programs. One concept that immediately stuck out to me was the distinction between concurrency and parallelism. Concurrency on a single CPU handling different processes already made sense to me, and now I see the power and complexity of parallelism: while concurrency relies on time-sharing and context switching, parallelism executes threads simultaneously across multiple CPU cores. While threads share the same address space, program code, global variables, and heap, each thread has its own stack to maintain a separate execution state. This allows threads to run independently without interfering with each other's function calls and local variables. Through utilizing distinct stacks, threads can be created and switched between each ot...

CST334 - Week 4 Reflection

Week 4 Reflection

The sheer irony of how much there is to remember in a class deeply rooted in memory management is not lost on me. While last week's deep dive into the inner workings of memory already felt granular, this week zoomed in on another granule: the mechanics of how virtual addresses become physical through paging. Paging, translation lookaside buffers (TLBs), and multi-level page tables all play a role in the choreography of address translation, contextualizing the overarching workflow of the OS more specifically than our earlier, more general discussions of memory and segmentation.

The lectures laid out how the memory management unit (MMU) handles virtual memory through page tables and page directories. The fact that each process gets the illusion of contiguous memory, while the OS translates those virtual addresses behind the scenes, illustrates the magic of computer scien...
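The core of that translation choreography is just bit arithmetic: split the virtual address into a virtual page number (VPN) and an offset, look the VPN up in the page table, and splice the resulting physical frame number (PFN) back onto the offset. A sketch assuming 4 KiB pages and a made-up one-level page table:

```c
#include <stdint.h>

#define PAGE_SIZE   4096u   /* assume 4 KiB pages */
#define OFFSET_BITS 12      /* log2(PAGE_SIZE) */

/* Tiny illustrative page table mapping VPN -> PFN (contents invented) */
static const uint32_t page_table[8] = { 3, 7, 5, 2, 0, 1, 6, 4 };

uint32_t translate(uint32_t vaddr) {
    uint32_t vpn    = vaddr >> OFFSET_BITS;     /* which virtual page */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* position within the page */
    uint32_t pfn    = page_table[vpn];          /* one-level table lookup */
    return (pfn << OFFSET_BITS) | offset;       /* physical address */
}
```

A TLB is essentially a hardware cache in front of that `page_table[vpn]` lookup, and multi-level paging replaces the single array with a tree of smaller tables so sparse address spaces don't need one giant array.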

CST334 - Week 3 Reflection

Week 3 Learning Reflection

CST334 is taking the trophy as the densest and most knowledge-packed CSUMB CS Online course in the journey so far. That isn't to say I haven't felt inundated with new knowledge in every programming class already completed, but this one involves far more moving pieces than previous courses. This week covered a broad range of systems concepts, programming approaches, binary-hex-decimal-bit math, and further gazing into the eye of computer architecture's magical madness, all of which I shall attempt to briefly recall and recount here.

Starting with the concepts covered, this week was big on memory management: base-and-bounds address translation, garbage collection, common mistakes when allocating and subsequently freeing memory, fragmentation, segmentation of code/heap/stack to conserve memory, different algorithms for freeing and allocating memory such as best fit, worst fit, and first fit, and how the MMU/hardware factors into ...
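Base-and-bounds translation, the simplest scheme on that list, fits in a few lines: the hardware adds the base register to every virtual address and faults if the address exceeds the bounds register. A minimal sketch with invented register values:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t base;    /* start of the process's region in physical memory */
    uint32_t bounds;  /* size of the region in bytes */
} mmu_regs_t;

/* Returns true and writes *paddr on success; false models a hardware
 * protection fault for an out-of-bounds access. */
bool base_bounds_translate(mmu_regs_t regs, uint32_t vaddr, uint32_t *paddr) {
    if (vaddr >= regs.bounds)
        return false;             /* out of bounds: trap to the OS */
    *paddr = regs.base + vaddr;   /* relocation is just an addition */
    return true;
}
```

The appeal and the flaw are both visible here: translation is one add and one compare, but the process's memory must be one contiguous chunk, which is exactly the fragmentation problem segmentation and paging go on to address.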

CST334 - Week 2 Reflection

Week 2 Learning Reflection

This week marked another action-packed marathon of new knowledge as we explored how processes are created and how they are subsequently handled by the CPU. We dove deep into the mechanics of process scheduling, the life cycle of processes, and the fork() and exec() system calls. Understanding fork() revealed how a parent process creates a nearly identical child process, with each distinguished by fork()'s return value: zero in the child, and the child's PID in the parent. Similarly, exec() was introduced as a powerful system call that replaces the current process image with a new program while keeping the same PID. We also visualized and discussed the three main process states (ready, running, and blocked) and examined valid state transitions. This helped build a clearer picture of how the operating system orchestrates multitasking at the process level. The importance of the scheduler became increasingly apparent as we examined how processes are chosen to r...
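The fork()/exec() pairing above can be sketched in one small function: fork a child, have the child exec a new program, and have the parent wait for it. This is an illustrative sketch of the standard POSIX pattern, not code from the course; it assumes a Unix-like system with /bin/sh available:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child, exec a new program in it, and return the child's
 * exit status as observed by the parent. */
int demo_fork_exec(void) {
    pid_t rc = fork();
    if (rc < 0) {
        perror("fork");            /* fork failed; no child was created */
        return -1;
    } else if (rc == 0) {
        /* child: fork() returned 0; replace this image with a new program */
        execl("/bin/sh", "sh", "-c", "exit 42", (char *)NULL);
        perror("execl");           /* reached only if exec itself fails */
        _exit(127);
    } else {
        /* parent: fork() returned the child's PID */
        int status = 0;
        waitpid(rc, &status, 0);
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }
}
```

The branch on fork()'s return value is exactly the "zero in the child, child's PID in the parent" distinction, and the line after execl() never runs unless the exec fails, since a successful exec never returns.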