Concurrency Concepts Simulator

Threads Simulation

Threads are independent sequences of execution within the same process. They share the process's memory space (code and data segments), but each thread has its own stack and registers.

This simulation shows multiple threads running "concurrently" (interleaved execution on a single core) and accessing a conceptual shared area (no locking here).

Why use threads? Threads are used for parallelism (doing multiple tasks truly at once on multi-core systems) and concurrency (managing multiple tasks that progress over time, improving responsiveness, e.g., UI thread and background worker). They are lighter weight than processes.
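The idea above can be sketched in a few lines of Python. This is a minimal illustration, not the simulator's own code: the worker names and step counts are arbitrary, and `shared_log` stands in for the conceptual shared area.

```python
import threading

shared_log = []  # conceptual shared area: every thread sees the same list

def worker(name, steps):
    for i in range(steps):
        # Each thread has its own stack (the locals name and i),
        # but shared_log lives in memory common to all threads.
        shared_log.append(f"{name} step {i}")

threads = [threading.Thread(target=worker, args=(f"T{n}", 3)) for n in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared_log))  # 6 entries total; the interleaving order may vary
```

Note that the *number* of entries is deterministic, but their order is not: that unpredictability of interleaving is exactly what the mutex section below addresses.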

Thread Pool

Execution Area

Finished

Simulation Log will appear here...

Mutex Simulation

A Mutex (Mutual Exclusion) is a locking mechanism used to protect shared resources from being accessed by multiple threads simultaneously. Only one thread can hold the lock (acquire the mutex) at a time.

This simulation shows threads competing to access a critical section (shared resource). A thread must acquire the mutex before entering and release it upon exiting. Other threads wait in a queue while the mutex is locked.

Why use mutexes? Mutexes prevent race conditions, where the outcome of computations depends on the unpredictable timing of thread execution when accessing shared data. They ensure data integrity.
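A classic race condition is a lost update on a shared counter: `counter += 1` is a read-modify-write, so two unsynchronized threads can overwrite each other's result. The sketch below (illustrative, not the simulator's code) protects the critical section with `threading.Lock`, Python's mutex:

```python
import threading

counter = 0
lock = threading.Lock()  # the mutex guarding the shared counter

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # acquire: only one thread may enter at a time
            counter += 1  # critical section: read-modify-write
        # the lock is released automatically when the 'with' block exits

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 40000 with the lock held around each increment
```

With the lock, the final value is deterministic (4 × 10,000 = 40,000); remove it, and updates can be lost depending on how the scheduler interleaves the threads.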

Thread Pool

Waiting Queue

Waiting for Mutex

Mutex & Resource

Mutex Unlocked
Critical Section

Finished

Simulation Log will appear here...

Semaphore Simulation

A Semaphore is a synchronization primitive that controls access to a shared resource pool with a limited number of units. It maintains a counter.

  • wait() (or P()/acquire()): Decrements the counter. If no unit is available (the counter is already zero, or the decrement takes it negative in some implementations), the thread blocks until another thread signals.
  • signal() (or V()/release()): Increments the counter. If threads are waiting, it wakes one up.

This simulation shows threads accessing a resource pool limited by the semaphore's count.

Why use semaphores? Semaphores generalize mutexes (a binary semaphore acts like a mutex). They are useful for controlling access to a pool of identical resources (e.g., database connections, buffer slots).
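The pool-of-resources pattern looks like this in Python, where `threading.Semaphore` provides acquire() and release(). The pool size, thread count, and the `peak` bookkeeping are illustrative assumptions added to make the limit observable:

```python
import threading

MAX_UNITS = 2  # size of the resource pool (illustrative)
sem = threading.Semaphore(MAX_UNITS)

in_use = 0
peak = 0
guard = threading.Lock()  # a mutex protecting the bookkeeping counters

def use_resource():
    global in_use, peak
    sem.acquire()             # wait(): blocks if all units are taken
    try:
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        # ... use the resource (e.g., a database connection) here ...
        with guard:
            in_use -= 1
    finally:
        sem.release()         # signal(): return the unit, wake a waiter

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds MAX_UNITS
```

Six threads compete, but the semaphore guarantees that at most MAX_UNITS of them are inside the resource-using section at once. Setting MAX_UNITS to 1 turns this into a binary semaphore, behaving like the mutex above.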

Thread Pool

Waiting Queue

Waiting for Semaphore

Semaphore & Resources

Semaphore Count: N/A
Resource Pool

Finished

Simulation Log will appear here...

Forking (Process Creation) Simulation

Forking is a system call used primarily in Unix-like operating systems to create a new process, called the child process, which is an almost exact copy of the calling process (the parent process).

  • The child process gets its own separate memory space (a copy of the parent's at the time of fork). Changes in one process's memory do not affect the other.
  • Both parent and child continue execution from the point right after the fork() call.
  • fork() returns the child's Process ID (PID) to the parent, 0 to the child, and -1 to the parent on failure (in which case no child is created).

This simulation shows a parent process forking child processes. Note how they run independently and conceptually have separate "data".

Why use forking? Forking is the primary way new processes are created in Unix/Linux. It's often followed by `exec()` in the child to replace its memory space with a new program. It provides strong isolation between processes.
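The return-value convention and the memory isolation can both be seen in a short Unix-only Python sketch using `os.fork` (the variable name `value` is just an illustration):

```python
import os

value = 42  # after fork, parent and child each have their own copy

pid = os.fork()          # Unix-only: creates a near-identical child process
if pid == 0:
    # Child: fork() returned 0. Modifying 'value' here changes only
    # the child's copy; the parent's memory is unaffected.
    value = 99
    os._exit(0)          # child exits immediately, skipping parent code
else:
    # Parent: fork() returned the child's PID.
    os.waitpid(pid, 0)   # wait for (reap) the child
    print(value)         # prints 42: the child's write did not leak back
```

Both processes resume right after the `os.fork()` call; only the return value tells them apart. In a real program the child would typically call one of the `exec` family next to run a different program.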

Process Tree

Simulation Log will appear here...