Top 40 Operating System Interview Questions & Answers (2026)
Last Updated: March 2026 | Level: Beginner to Advanced | Format: Q&A with Tables & Diagrams
Operating Systems is one of the most tested subjects in technical interviews at top product companies (Google, Microsoft, Amazon, Flipkart) and for roles involving backend engineering, embedded systems, and cloud infrastructure. This guide covers the 40 most important OS interview questions with clear explanations, comparison tables, and conceptual diagrams described in text.
Table of Contents
- Processes & Threads (Q1–Q10)
- Memory Management (Q11–Q20)
- File Systems (Q21–Q28)
- Deadlocks (Q29–Q34)
- CPU Scheduling (Q35–Q40)
Processes & Threads
Q1. What is a process? How does it differ from a program? Easy
A program is a passive, static set of instructions stored on disk (an executable file). A process is an active instance of a program in execution — it has its own memory space, resources, and state.
Diagram (Process in Memory):
Higher Address
┌──────────────┐
│ Stack │ ← Function calls, local variables (grows downward)
│ ↓ │
│ (gap) │
│ ↑ │
│ Heap │ ← Dynamic memory (malloc/new, grows upward)
├──────────────┤
│ BSS │ ← Uninitialized global/static variables
├──────────────┤
│ Data │ ← Initialized global/static variables
├──────────────┤
│ Text │ ← Program code (read-only)
└──────────────┘
Lower Address
| Attribute | Program | Process |
|---|---|---|
| Nature | Passive (file on disk) | Active (executing) |
| Memory | No runtime memory | Has stack, heap, code, data |
| Instance | One program can create many processes | One execution instance |
| Lifetime | Permanent until deleted | Exists only while running |
A process has:
- PID (Process ID)
- PCB (Process Control Block) — OS data structure tracking all process info
- State (New, Ready, Running, Waiting, Terminated)
Q2. What is a thread? How does it differ from a process? Easy
A thread is the smallest unit of CPU execution within a process. Threads of the same process share its code, data, and heap, but each thread has its own:
- Program Counter
- Stack
- Registers
Process vs. Thread — Visual:
Process A Process B
┌────────────────────┐ ┌────────────────────┐
│ Code (shared) │ │ Code (shared) │
│ Data (shared) │ │ Data (shared) │
│ Heap (shared) │ │ Heap (shared) │
│ ┌──────┐ ┌──────┐ │ │ ┌──────┐ ┌──────┐ │
│ │ T1 │ │ T2 │ │ │ │ T1 │ │ T2 │ │
│ │Stack │ │Stack │ │ │ │Stack │ │Stack │ │
│ └──────┘ └──────┘ │ │ └──────┘ └──────┘ │
└────────────────────┘ └────────────────────┘
| Aspect | Process | Thread |
|---|---|---|
| Memory | Own address space | Shared with other threads |
| Creation overhead | High | Low |
| Communication | IPC (pipes, sockets, shared memory) | Direct (shared memory) |
| Isolation | Strong (crash in P1 doesn't affect P2) | Weak (crash in T1 can kill all threads) |
| Context switch | Expensive | Cheaper |
| Example | Chrome tabs (each is a process) | Browser's JS engine vs. rendering thread |
Q3. What are the states of a process? Explain the state diagram. Easy
Process State Transition Diagram:
        admit           dispatch               exit
New ─────────→ Ready ─────────→ Running ──────────→ Terminated
                 ↑  ←─ timeout ──  │
                 │   (preempted)   │ I/O wait / event wait
                 │                 ↓
                 └────────────── Waiting
                   I/O complete / event occurs
| State | Description |
|---|---|
| New | Process is being created |
| Ready | Waiting in queue to be assigned CPU |
| Running | Instructions being executed by CPU |
| Waiting | Waiting for I/O or an event (not the CPU) |
| Terminated | Finished execution; PCB being cleaned up |
Additional states:
- Suspended Ready: Ready but swapped to disk (paging)
- Suspended Wait: Waiting and swapped to disk
Q4. What is a zombie process and an orphan process? Medium
Zombie Process:
- A process that has finished execution but still has an entry in the process table because its parent hasn't called wait() to collect its exit status.
- The zombie holds minimal resources (just its PID and exit status).
- Too many zombies = PID exhaustion.
Orphan Process:
- A child process whose parent has terminated before the child.
- The OS reassigns orphans to the init process (PID 1 on Linux), which periodically calls wait() to clean them up.
Parent dies early → Child becomes Orphan → Adopted by init (PID 1)
Child dies, parent doesn't call wait() → Child becomes Zombie
Q5. What is inter-process communication (IPC)? What are the methods? Medium
| IPC Method | Description | Speed | Direction |
|---|---|---|---|
| Pipe | Unidirectional byte stream | Fast | One-way |
| Named Pipe (FIFO) | Like pipe but has a name in filesystem | Fast | One-way |
| Message Queue | Messages stored in kernel queue | Medium | Both ways |
| Shared Memory | Processes share a memory segment | Fastest | Both ways |
| Semaphore | Synchronization, not data transfer | N/A | Signaling |
| Socket | Communication over network/local | Variable | Both ways |
| Signal | Asynchronous notification | Fast | One-way |
Shared Memory Diagram:
Process A Kernel Process B
┌─────────┐ ┌──────────────┐ ┌─────────┐
│ │←──→│ Shared Memory│←──→│ │
│ Code │ │ Segment │ │ Code │
└─────────┘ └──────────────┘ └─────────┘
(fastest IPC — no kernel involvement for R/W)
Q6. What is context switching? What is its cost? Medium
Context switching is the procedure by which the kernel saves the state of the currently running process (into its PCB) and restores the state of the next process to run, allowing many processes to share one CPU.
What gets saved (PCB contents):
- Program Counter (next instruction)
- CPU registers
- Memory management info (page tables, limits)
- I/O status, open file list
Cost factors:
- Direct cost: Time to save and restore PCB (~microseconds)
- Indirect cost: CPU cache is invalidated — new process loads cold (TLB flushes)
- Frequency matters: too many context switches mean the CPU spends more time switching than doing useful work
Thread context switch is cheaper than process context switch because threads share the same address space (no TLB flush).
Q7. What is the difference between user-level threads and kernel-level threads? Hard
| Aspect | User-Level Threads | Kernel-Level Threads |
|---|---|---|
| Managed by | User-space library | OS kernel |
| Kernel awareness | Kernel sees one process | Kernel sees individual threads |
| Context switch | Very fast (no system call) | Slower (system call needed) |
| Parallelism on multi-core | ❌ All threads on one core | ✅ True parallelism |
| Blocking I/O | Blocks ALL threads in process | Only blocks that thread |
| Examples | Early Java green threads, coroutines | POSIX threads (pthreads), modern Java platform threads |
Threading Models:
- Many-to-One (M:1): Multiple user threads → 1 kernel thread. No parallelism.
- One-to-One (1:1): Each user thread → 1 kernel thread. True parallelism. (Linux pthreads)
- Many-to-Many (M:N): Multiple user threads → multiple kernel threads. Best of both. (Go goroutines, Erlang)
Q8. What is a race condition? How do you prevent it? Medium
A race condition occurs when two or more threads access shared data concurrently and the final result depends on the order in which they run. Classic lost update (x starts at 5; two increments should yield 7):
Thread 1: Read x (x=5) → Compute x+1 → Write x=6
Thread 2: Read x (x=5) → Compute x+1 → Write x=6 ← Should be 7!
Prevention mechanisms:
- Mutex (Mutual Exclusion Lock): Only one thread enters critical section at a time
- Semaphore: Counting synchronization primitive
- Monitor: Language-level synchronization (Java synchronized)
- Atomic operations: CPU-level guarantees (compare-and-swap)
- Immutability: If data is never written, no race conditions
// POSIX Mutex example
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; // must be initialized
pthread_mutex_lock(&lock);
// Critical section — only one thread here
counter++;
pthread_mutex_unlock(&lock);
Q9. What is the critical section problem? State its requirements. Medium
A critical section is the part of a program that accesses shared resources. The critical section problem is to design an entry/exit protocol so that no two processes execute their critical sections at the same time, while satisfying:
Three Requirements (Peterson/Dijkstra):
- Mutual Exclusion: Only one process can be in its critical section at a time
- Progress: If no process is in the critical section and processes want to enter, selection can't be postponed indefinitely
- Bounded Waiting: There must be a limit on how many times other processes can enter their critical section before a waiting process gets its turn (no starvation)
Process Structure:
┌─────────────────────┐
│ Entry Section │ ← Request permission to enter
├─────────────────────┤
│ Critical Section │ ← Access shared resource
├─────────────────────┤
│ Exit Section │ ← Release permission
├─────────────────────┤
│ Remainder Section │ ← Non-critical code
└─────────────────────┘
Q10. What is a semaphore? Explain binary and counting semaphores. Medium
A semaphore is an integer variable S, accessed only through two atomic operations:
- wait(S) / P(S): Decrement S; if S < 0, block the process
- signal(S) / V(S): Increment S; if a process is blocked, wake it
Binary Semaphore (Mutex): Counting Semaphore:
S ∈ {0, 1} S ∈ {0, 1, 2, ... N}
0 = locked Value = number of available resources
1 = unlocked N = resource pool size (e.g., 5 DB connections)
Producer-Consumer with semaphores:
semaphore full = 0; // items in buffer
semaphore empty = N; // empty slots in buffer
semaphore mutex = 1; // mutual exclusion
// Producer
wait(empty);
wait(mutex);
// add item to buffer
signal(mutex);
signal(full);
// Consumer
wait(full);
wait(mutex);
// remove item from buffer
signal(mutex);
signal(empty);
Memory Management
Q11. What is virtual memory and why is it important? Easy
Virtual memory gives each process the illusion of its own large, contiguous address space; the OS and MMU translate these virtual addresses to physical RAM (or disk) behind the scenes.
Key benefits:
- Process isolation: Each process thinks it owns all memory; processes can't access each other's memory
- Run programs larger than RAM: Only active pages need to be in RAM (demand paging)
- Simplifies linking/loading: Programs compiled to start at address 0
- Memory overcommitment: Sum of all virtual spaces can exceed physical RAM
Virtual to Physical Address Translation:
Virtual Address            Page Table                Physical Memory
┌────────────────┐       ┌────────────────┐       ┌────────────────┐
│ Page# | Offset │ ───→  │ Page# → Frame# │ ───→  │ Frame + Offset │
└────────────────┘       └────────────────┘       └────────────────┘
 (CPU generates)          (OS maintains)            (actual RAM)
Q12. What is paging? How does the page table work? Medium
Paging divides memory into fixed-size blocks, eliminating external fragmentation:
- Virtual memory blocks → Pages
- Physical memory blocks → Frames
- Pages and frames are the same size (commonly 4KB)
Address Translation:
- Virtual Address = [Page Number | Page Offset]
- Page Table maps Page Number → Frame Number
- Physical Address = [Frame Number | Page Offset]
Virtual Address (32-bit, 4KB pages):
Bits 31-12: Page Number (20 bits → 2^20 = 1M pages)
Bits 11-0: Page Offset (12 bits → 4KB per page)
Page Table Entry stores:
- Frame number
- Valid/Invalid bit
- Dirty bit (page modified?)
- Reference bit (page accessed recently?)
- Protection bits (R/W/X)
Problem: Page tables can be huge (2^20 entries × 4 bytes = 4MB per process!).
Solutions: Multi-level page tables, Inverted page tables, TLB (cache for page table lookups).
Q13. What is a TLB (Translation Lookaside Buffer)? Medium
The TLB is a small, fast hardware cache inside the MMU that stores recent virtual-to-physical page translations, avoiding a page-table walk on every memory access.
TLB Operation:
CPU generates VA
↓
TLB Lookup
↙ ↘
TLB Hit TLB Miss
(fast!) → Walk page table (slow, ~100+ cycles)
↓ ↓
PA Load translation into TLB, retry
↓
Access Physical Memory
TLB Metrics:
- TLB has 64–1024 entries typically
- Hit rate: 98–99% for most programs (locality of reference)
- Effective Access Time (EAT) = hit rate × (TLB time + memory time) + miss rate × (TLB time + page-table walk time + memory time)
TLB flush happens on context switch (unless tagged with ASID — Address Space Identifier).
Q14. What is segmentation in memory management? Medium
Segmentation divides a process's address space into variable-sized logical units (code, data, stack), each with its own base and limit.
| Aspect | Paging | Segmentation |
|---|---|---|
| Unit size | Fixed (4KB) | Variable (logical size) |
| Programmer visibility | Transparent | Visible (segment:offset) |
| Internal fragmentation | Yes (last page partially used) | No |
| External fragmentation | No | Yes (holes between segments) |
| Protection | Per-page | Per-segment (code R/X, data R/W) |
| Sharing | Harder | Natural (share code segment) |
x86 hardware supports combining segmentation with paging, but modern 64-bit systems use a mostly flat segment model and rely on paging for translation and protection.
Q15. What is swapping? How does it relate to virtual memory? Medium
Swapping moves memory contents between RAM and disk. Classic swapping moved entire processes; modern virtual memory systems swap at page granularity:
Page-level swapping (demand paging):
- When a page not in RAM is accessed → page fault
- OS retrieves the page from disk (swap space)
- If RAM is full: OS uses a page replacement algorithm to evict a page
Swap performance:
RAM access: ~50–100 nanoseconds
SSD swap: ~50–100 microseconds (1000× slower)
HDD swap: ~5–10 milliseconds (100,000× slower!)
Thrashing: When a system spends more time swapping than executing. Caused by too many processes competing for too little RAM. Solution: reduce degree of multiprogramming.
Q16. Explain page replacement algorithms. Hard
| Algorithm | Strategy | Optimal? | Implementable? |
|---|---|---|---|
| FIFO | Evict oldest page | No | Yes (simple) |
| Optimal (OPT) | Evict page not used longest in future | Best | No (requires future knowledge) |
| LRU | Evict least recently used | Near-optimal | Yes (expensive) |
| LRU-Clock (Second Chance) | FIFO with reference bit | Near-LRU | Yes (efficient) |
| LFU | Evict least frequently used | No | Yes |
FIFO Anomaly (Bélády's Anomaly): With FIFO, adding more frames can sometimes increase page faults. FIFO is the only common algorithm that suffers from this.
Example — Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 | 3 frames
FIFO: 9 page faults
LRU: 8 page faults
OPT: 6 page faults (theoretical minimum)
Q17. What is memory fragmentation? Internal vs External? Easy
Internal Fragmentation: Wasted space inside an allocated block.
Process needs 10KB → Allocated 12KB (fixed block size) → 2KB wasted internally
External Fragmentation: Wasted space outside allocated blocks — free memory exists but is not contiguous.
Total free: 20KB, but split as: [5KB][8KB][7KB] → Can't allocate a 15KB process
| | Internal Fragmentation | External Fragmentation |
|---|---|---|
| Caused by | Fixed-size allocation | Variable-size allocation |
| Location of waste | Inside allocated block | Between allocated blocks |
| Present in | Paging | Segmentation, early allocation schemes |
| Solution | Smaller blocks | Compaction, best-fit allocation |
Q18. What is a page fault? Describe the page fault handling sequence. Medium
A page fault is a hardware trap raised when a process accesses a virtual page that is not currently mapped into physical memory.
Page Fault Handling Sequence:
1. CPU accesses virtual address
2. TLB miss → check page table
3. Page table entry: INVALID → trap to OS (page fault interrupt)
4. OS determines: is this a valid access?
├── No (illegal address) → Segfault (SIGSEGV), kill process
└── Yes (valid but page on disk)
5. OS finds a free frame (or evicts a page using replacement algorithm)
6. OS schedules disk read to load page into frame
7. Process blocked; another process runs (CPU not wasted)
8. Disk I/O completes → interrupt
9. OS updates page table: marks page valid, sets frame number
10. Process resumes; re-executes the faulting instruction
Q19. What is buddy system memory allocation? Hard
The buddy system allocates memory in power-of-two block sizes: a block is split recursively into two equal "buddies" until the smallest sufficient size is reached, and freed blocks are merged (coalesced) with their free buddies.
Available: 64KB block. Process requests 12KB → round up to 16KB
64KB → Split → [32KB | 32KB]   (one 32KB buddy stays free)
32KB → Split → [16KB | 16KB]   (one 16KB buddy stays free)
Allocate 16KB ✓
When freed: the 16KB block merges with its free 16KB buddy → 32KB → merges again → 64KB
Advantages: Fast allocation/deallocation, simple coalescing.
Disadvantage: Internal fragmentation (12KB request wastes 4KB).
The Linux kernel uses the buddy system for physical page allocation.
Q20. What is slab allocation? Hard
Slab allocation is a kernel memory allocator specialized for objects of the same type (inodes, task structs, network buffers) that are created and destroyed frequently.
Structure:
Cache (for "process" objects)
├── Slab 1: [process|process|process|free] ← Full slab
├── Slab 2: [process|free|free|free] ← Partial slab
└── Slab 3: [free|free|free|free] ← Empty slab
Key idea: Pre-allocate slabs of memory divided into object-sized chunks. When a kernel object is needed, grab from a partial slab — no initialization overhead (objects are kept in initialized state after free).
Benefits:
- Eliminates internal fragmentation within slabs
- No repeated initialization/destruction overhead
- Cache-friendly (related objects near each other in memory)
The Linux kernel uses SLUB allocator (improved slab) for all kernel object allocation.
File Systems
Q21. What is a file system? What are its key functions? Easy
A file system is the OS component that organizes persistent storage into named files and directories and manages how their contents map to disk blocks.
Key functions:
- Naming: Provide human-readable names for data (files and directories)
- Organization: Directory structure (tree hierarchy)
- Storage allocation: Track which disk blocks belong to which file
- Access control: Permissions (who can read/write/execute)
- Reliability: Journaling, checksums, redundancy
- Performance: Buffering, caching, prefetching
Common file systems:
| File System | OS | Max File Size | Max Volume Size | Features |
|---|---|---|---|---|
| ext4 | Linux | 16 TB | 1 EB | Journaling, extents |
| NTFS | Windows | 16 TB | 256 TB | ACLs, compression |
| APFS | macOS/iOS | 8 EB | 8 EB | Snapshots, encryption |
| FAT32 | Cross-platform | 4 GB | 2 TB | Wide compatibility |
| ZFS | Linux/BSD | 16 EB | 256 ZB | CoW, RAID-Z, snapshots |
| Btrfs | Linux | 16 EB | 16 EB | CoW, subvolumes |
Q22. What is an inode in Linux file systems? Medium
An inode (index node) is the on-disk structure that stores all metadata about a file except its name, plus pointers to its data blocks.
Inode contains:
┌──────────────────────────────┐
│ File type (regular, dir, sym)│
│ Permissions (rwxrwxrwx) │
│ Owner UID / Group GID │
│ File size (bytes) │
│ Timestamps: atime/mtime/ctime│
│ Link count │
│ Block pointers: │
│ 12 direct block pointers │
│ 1 single indirect pointer │
│ 1 double indirect pointer │
│ 1 triple indirect pointer │
└──────────────────────────────┘
Key insight: A file's name is stored in the directory entry, NOT in the inode. This enables hard links — multiple directory entries pointing to the same inode.
stat filename # View inode contents
ls -i filename # Show inode number
df -i # Show inode usage on filesystem
Q23. What is the difference between hard links and symbolic links? Medium
Hard Link: Another directory entry pointing to the same inode. Both names refer to the same data.
Symbolic Link (Symlink): A special file that contains the path to another file (like a Windows shortcut).
Hard Link: Symbolic Link:
file.txt ──→ inode 1234 file.txt ──→ inode 1234
link.txt ──→ inode 1234 link.txt ──→ inode 5678
↓ ↓
[data blocks] "file.txt" (path string)
↓
inode 1234
↓
[data blocks]
| Aspect | Hard Link | Symbolic Link |
|---|---|---|
| Points to | Same inode | Path string |
| Cross-filesystem | ❌ No | ✅ Yes |
| Points to directories | ❌ No (usually) | ✅ Yes |
| Survives original deletion | ✅ Yes | ❌ No (dangling symlink) |
| File size shown | Same as original | Size of path string |
| Inode | Same as original | Different inode |
Q24. What is journaling in file systems? Medium
Without journaling:
Power fails mid-write → File system inconsistent → fsck (slow check on boot)
With journaling:
1. Write change to journal (commit record)
2. Apply change to file system
3. Mark journal entry as complete
On crash recovery: replay journal from last commit → consistency guaranteed
Journaling modes (ext4):
| Mode | What's journaled | Speed | Safety |
|---|---|---|---|
| Journal | Data + metadata | Slowest | Safest |
| Ordered (default) | Metadata only; data written before metadata | Medium | Good |
| Writeback | Metadata only; data written anytime | Fastest | Less safe |
Q25. What is RAID and what are the common levels? Hard
| RAID Level | Technique | Min Disks | Redundancy | Performance | Capacity |
|---|---|---|---|---|---|
| RAID 0 | Striping only | 2 | ❌ None | ↑↑ Read/Write | 100% |
| RAID 1 | Mirroring | 2 | ✅ Full (1 disk can fail) | ↑ Read | 50% |
| RAID 5 | Striping + parity | 3 | ✅ 1 disk failure | ↑ Read | (N-1)/N |
| RAID 6 | Striping + 2 parity | 4 | ✅ 2 disk failures | ↑ Read | (N-2)/N |
| RAID 10 | Mirror + Stripe | 4 | ✅ 1 per mirror pair | ↑↑ Read/Write | 50% |
RAID 5 Parity:
Disk 1: A1 A2 A3
Disk 2: B1 B2 B3
Disk 3: P1 P2 P3 (P = XOR of corresponding blocks)
If Disk 1 fails: A1 = XOR(B1, P1) — reconstructed!
Q26. What is the difference between absolute and relative paths? Easy
Absolute path: Full path from the root of the file system.
Linux: /home/aditya/projects/app.py
Windows: C:\Users\Aditya\Projects\app.py
Relative path: Path relative to the current working directory.
If CWD = /home/aditya:
Relative: projects/app.py or ./projects/app.py
Parent: ../other_user/file.txt
Special path entries:
- . — current directory
- .. — parent directory
- ~ — home directory (shell expansion)
- - — previous directory (in bash, with cd -)
Q27. What is file system mounting? Medium
Mounting attaches the file system on a device to a directory (the mount point) in the existing directory tree, making its contents accessible there.
Before mount: After mounting /dev/sdb1 at /mnt/usb:
/ /
├── home/ ├── home/
├── etc/ ├── etc/
└── mnt/ └── mnt/
└── usb/ (empty) └── usb/ ← Access point
├── photos/ ← USB contents
└── docs/
mount /dev/sdb1 /mnt/usb # Mount USB drive
mount -t ntfs /dev/sdb1 /mnt/usb # Specify file system type
umount /mnt/usb # Unmount
cat /etc/fstab # Permanent mount configuration
Q28. What is a buffer cache and page cache in the OS? Hard
Buffer Cache: Caches disk blocks (raw block I/O). Reduces trips to physical disk.
Page Cache: Caches pages of files accessed via the file system. In modern Linux, these have merged into a unified page cache.
User Process reads file:
→ Check page cache → Hit: return cached data (no I/O)
→ Miss: read from disk, cache it, return
Write-back caching:
→ Write goes to cache first (fast return to user)
→ Cache written to disk periodically (async, called "dirty page writeback")
→ Risk: data loss on power failure (mitigated by fsync() or journaling)
Key commands:
free -h # See buffer/cache usage
echo 3 > /proc/sys/vm/drop_caches # Drop page cache (testing)
sync # Force dirty buffers to disk
Deadlocks
Q29. What is a deadlock? What are the four necessary conditions? Easy
A deadlock is a state in which a set of processes are permanently blocked, each waiting for a resource held by another process in the same set.
The Four Coffman Conditions (ALL four must hold simultaneously for deadlock):
1. Mutual Exclusion: At least one resource must be held in a non-shareable mode (only one process can use it at a time)
2. Hold and Wait: A process holds at least one resource AND is waiting to acquire additional resources held by other processes
3. No Preemption: Resources cannot be forcibly taken from a process; they must be released voluntarily
4. Circular Wait: There exists a circular chain of processes, each waiting for a resource held by the next in the chain
Circular Wait Example:
P1 holds R1, wants R2
P2 holds R2, wants R3
P3 holds R3, wants R1
→ Circular wait: P1→R2→P2→R3→P3→R1→P1 (deadlock!)
Q30. What are the methods for handling deadlocks? Medium
| Strategy | Approach | Overhead | Use case |
|---|---|---|---|
| Prevention | Ensure ≥1 Coffman condition never holds | High design cost | Safety-critical systems |
| Avoidance | Dynamically refuse resource allocations that might lead to deadlock (Banker's algorithm) | Runtime overhead | Systems with known resource needs |
| Detection + Recovery | Allow deadlocks; detect and recover | Detection overhead | Most OS (Linux, Windows) |
| Ignore (Ostrich) | Pretend deadlocks don't happen | None | Most desktop OS! |
Most real-world OS (Linux, Windows) use the ostrich approach for non-critical resources — deadlocks are rare, and prevention is too costly. For system resources (locks, interrupts), prevention techniques are used.
Q31. Explain the Banker's Algorithm. Hard
The Banker's Algorithm is a deadlock-avoidance scheme: the OS grants a resource request only if doing so leaves the system in a safe state.
A Safe State: There exists a sequence in which every process can eventually get all the resources it needs and finish.
Data structures:
n = number of processes, m = number of resource types
Available[m] = currently free instances of each resource
Max[n][m] = maximum demand of each process
Allocation[n][m] = currently allocated to each process
Need[n][m] = Max - Allocation (remaining need)
Safety Algorithm:
1. Work = Available; Finish[i] = false for all i
2. Find i such that: Finish[i] == false AND Need[i] ≤ Work
3. If found: Work = Work + Allocation[i]; Finish[i] = true; goto 2
4. If all Finish[i] == true → Safe state ✓
Else → Unsafe state (deny the request)
Example (classic 5-process case):
Process | Max (A B C) | Alloc (A B C) | Need (A B C)
P0      | 7 5 3       | 0 1 0         | 7 4 3
P1      | 3 2 2       | 2 0 0         | 1 2 2
P2      | 9 0 2       | 3 0 2         | 6 0 0
P3      | 2 2 2       | 2 1 1         | 0 1 1
P4      | 4 3 3       | 0 0 2         | 4 3 1
Available: A=3, B=3, C=2
Safe sequence: P1 → P3 → P4 → P2 → P0
Q32. What is deadlock detection? Medium
Deadlock detection allows deadlocks to occur, then finds them by searching for cycles in the resource allocation graph (or wait-for graph).
Resource Allocation Graph:
Circle = Process, Square = Resource, dot inside = instance
P1 → R1 (P1 requesting R1)
R1 → P2 (R1 allocated to P2)
P2 → R2 (P2 requesting R2)
R2 → P1 (R2 allocated to P1)
→ Cycle exists → Deadlock!
For single-instance resources: cycle = deadlock.
For multi-instance resources: use a matrix-based detection algorithm (similar to Banker's but detects, not avoids).
Detection frequency:
- Run after every resource allocation (expensive)
- Run periodically (less overhead, delayed detection)
- Run when CPU utilization drops significantly
Q33. What are the deadlock recovery methods? Medium
1. Process Termination:
- Abort all deadlocked processes: Simple but loses work
- Abort one at a time: Least cost first (priority, execution time, resources held), repeat detection after each termination
Selection criteria (which process to kill):
- Priority of the process
- How long it has computed and how long until completion
- Resources the process has used and resources it needs
- How many processes will need to be terminated
2. Resource Preemption:
- Forcibly take resources from processes and give to others
- Challenges:
- Victim selection: Minimize cost
- Rollback: Process losing resource must be rolled back to safe state (requires checkpointing)
- Starvation: Ensure same process not always the victim (add to cost factor)
Q34. What is livelock and starvation? How do they differ from deadlock? Hard
| Condition | Description | Progress? | Blocked? |
|---|---|---|---|
| Deadlock | Processes permanently blocked, waiting for each other | ❌ None | ✅ Blocked |
| Livelock | Processes keep changing state responding to each other but no progress | ❌ None | ❌ Not blocked |
| Starvation | A process is perpetually denied resources (others always get priority) | Others progress | One process blocked |
Livelock analogy:
Two people in a narrow hallway:
Person A steps left → Person B steps left → A steps right → B steps right → repeat forever
Both are "active" (not blocked) but make no progress
Starvation solution: Aging — gradually increase the priority of processes that have waited long.
Livelock solution: Random backoff (like in Ethernet CSMA/CD — if collision, wait a random time before retrying).
CPU Scheduling
Q35. What are the goals of CPU scheduling? Easy
| Goal | Metric | Who cares |
|---|---|---|
| CPU Utilization | Keep CPU busy | All systems |
| Throughput | Processes completed per unit time | Batch systems |
| Turnaround Time | Submission to completion | Batch systems |
| Waiting Time | Time in ready queue | Interactive |
| Response Time | First response after request | Interactive/real-time |
| Fairness | No starvation | All systems |
Key formulas:
Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Response Time = First Run Time - Arrival Time
Q36. Explain FCFS scheduling with an example. Easy
Example:
Process | Arrival | Burst
P1 | 0 | 8
P2 | 1 | 4
P3 | 2 | 2
Gantt Chart:
| P1 (0-8) | P2 (8-12) | P3 (12-14) |
0 8 12 14
Waiting Times: P1=0, P2=7, P3=10
Average Waiting Time = (0+7+10)/3 = 5.67 ms
Convoy Effect: Short processes stuck behind one long process — P3 waits 10ms for a 2ms job!
FCFS is simple and fair in order, but has the highest average waiting time when burst times vary widely.
Q37. Explain SJF and SRTF scheduling. Medium
Shortest Job First (SJF): Non-preemptive. When CPU is free, pick the process with the shortest burst time.
Shortest Remaining Time First (SRTF): Preemptive version of SJF. When a new process arrives, if its burst time < remaining time of current process, preempt.
Process | Arrival | Burst
P1 | 0 | 8
P2 | 1 | 4
P3 | 2 | 2
P4 | 3 | 1
SRTF Gantt:
|P1(0-1)|P2(1-2)|P3(2-4)|P4(4-5)|P2(5-8)|P1(8-15)|
Wait times: P1=7, P2=3, P3=0, P4=1
Average = 2.75 ms
SJF is provably optimal for minimizing average waiting time (non-preemptive) and average turnaround time (preemptive SRTF).
Practical problem: Burst time is not known in advance. Estimated using exponential averaging:
τ(n+1) = α × t(n) + (1-α) × τ(n)
where t(n) = actual length of the most recent burst, τ(n) = previous prediction, α ∈ [0,1]
Q38. Explain Round Robin scheduling. What is the impact of time quantum? Medium
Example (quantum = 4ms):
Process | Arrival | Burst
P1 | 0 | 8
P2 | 0 | 4
P3 | 0 | 2
Gantt: |P1(0-4)|P2(4-8)|P3(8-10)|P1(10-14)|
Wait: P1=6, P2=4, P3=8; Average = 6 ms
Time Quantum impact:
Q → ∞: Round Robin becomes FCFS
Q → 0: Processor sharing (infinite context switching overhead)
Q = optimal: Usually 10-100ms; 80% of CPU bursts should be < Q
Rule of thumb: Quantum should be larger than most CPU bursts but not so large that short jobs wait too long.
Q39. What is Priority Scheduling? How does it handle starvation? Medium
Priority types:
- Internal: Burst time, memory needs, I/O activity (computed by OS)
- External: Assigned by user/administrator
Preemptive vs Non-preemptive:
Non-preemptive: Once running, process runs to completion (or block)
Preemptive: New high-priority process can interrupt running process
Starvation problem: Low-priority processes may never run if high-priority processes keep arriving.
Solution — Aging: Gradually increase the priority of processes that have waited long.
Every 15 minutes a process waits → priority increased by 1
Process with initial priority 127 → after 32 hours → priority 0 (highest)
Q40. What is the Multilevel Queue and Multilevel Feedback Queue scheduling? Hard
Multilevel Queue: Partitions the ready queue into multiple queues based on process type, each with its own scheduling algorithm.
Priority 0 (Highest): System processes → FCFS
Priority 1: Interactive processes → Round Robin (Q=8ms)
Priority 2: Interactive editing → Round Robin (Q=16ms)
Priority 3: Batch processes → FCFS
Priority 4 (Lowest): Background/student → FCFS
Processes are permanently assigned to one queue — no movement between queues.
Multilevel Feedback Queue (MLFQ): Like MLQ but processes can move between queues based on behavior. This is used in most modern OS schedulers.
Queue 0 (Q=8ms): New processes start here (high priority)
Queue 1 (Q=16ms): If doesn't finish in Q0 → moved here
Queue 2 (FCFS): If doesn't finish in Q1 → moved here (lowest priority)
If process does I/O (short burst) → move UP (CPU-bound processes sink down)
If process waits too long → moved UP (aging, prevents starvation)
Why MLFQ is powerful:
- CPU-bound processes naturally sink to lower queues
- I/O-bound (interactive) processes stay in high queues → good response time
- No prior knowledge of burst time needed
- Approximates SRTF behavior
MLFQ-style scheduling underlies the Windows scheduler and the classic Linux O(1) scheduler; modern Linux uses CFS (Completely Fair Scheduler), which pursues the same fairness goals via virtual runtime rather than fixed priority queues.
Quick Reference: OS Interview Cheat Sheet
| Topic | Key Point |
|---|---|
| Process vs Thread | Process = own address space; Thread = shared address space |
| Context Switch | Saves PCB; threads cheaper (no TLB flush) |
| Zombie | Process done, parent hasn't called wait() |
| Semaphore | wait() decrements; signal() increments; binary = mutex |
| Virtual Memory | Illusion of large address space; demand paging |
| TLB | Cache for page table entries; flush on context switch |
| Page Fault | Page not in RAM; OS fetches from disk |
| Deadlock Conditions | ME + Hold&Wait + No Preemption + Circular Wait |
| Banker's Algorithm | Deadlock avoidance; grant only if safe state remains |
| FCFS | Simple; convoy effect |
| SJF/SRTF | Optimal avg wait; needs burst time prediction |
| Round Robin | Fair; quantum size critical |
| MLFQ | Adaptive; basis for real OS schedulers |
| ext4 inode | All metadata except filename; 12 direct + indirect blocks |
| Hard vs Sym link | Hard: same inode; Sym: path string, can cross FS |
| RAID 5 | Striping + single parity; 1 disk failure tolerated |
Common OS Interview Topics by Company
| Company | Frequently Asked |
|---|---|
| Google | Concurrency, virtual memory, Linux internals, system calls |
| Microsoft | Windows architecture, NTFS, thread models, memory management |
| Amazon | Linux processes, IPC, performance tuning, networking + OS |
| Startups | Process/thread basics, deadlocks, scheduling, file systems |
| Embedded roles | Real-time OS, interrupt handling, memory constraints |
Prepared for placements 2026 | PapersAdda.com