PapersAdda
Top 40 Operating System Interview Questions & Answers (2026)

Last Updated: March 2026 | Level: Beginner to Advanced | Format: Q&A with Tables & Diagrams

Operating Systems is one of the most tested subjects in technical interviews at top product companies (Google, Microsoft, Amazon, Flipkart) and for roles involving backend engineering, embedded systems, and cloud infrastructure. This guide covers the 40 most important OS interview questions with clear explanations, comparison tables, and conceptual diagrams described in text.


Table of Contents

  1. Processes & Threads (Q1–Q10)
  2. Memory Management (Q11–Q20)
  3. File Systems (Q21–Q28)
  4. Deadlocks (Q29–Q34)
  5. CPU Scheduling (Q35–Q40)

Processes & Threads

Q1. What is a process? How does it differ from a program? Easy

A program is a passive, static set of instructions stored on disk (an executable file). A process is an active instance of a program in execution — it has its own memory space, resources, and state.

Diagram (Process in Memory):

Higher Address
┌──────────────┐
│    Stack     │  ← Function calls, local variables (grows downward)
│      ↓       │
│    (gap)     │
│      ↑       │
│    Heap      │  ← Dynamic memory (malloc/new, grows upward)
├──────────────┤
│     BSS      │  ← Uninitialized global/static variables
├──────────────┤
│     Data     │  ← Initialized global/static variables
├──────────────┤
│     Text     │  ← Program code (read-only)
└──────────────┘
Lower Address
| Attribute | Program | Process |
|-----------|---------|---------|
| Nature | Passive (file on disk) | Active (executing) |
| Memory | No runtime memory | Has stack, heap, code, data |
| Instance | One program can create many processes | One execution instance |
| Lifetime | Permanent until deleted | Exists only while running |

A process has:

  • PID (Process ID)
  • PCB (Process Control Block) — OS data structure tracking all process info
  • State (New, Ready, Running, Waiting, Terminated)

Q2. What is a thread? How does it differ from a process? Easy

A thread is the smallest unit of CPU scheduling. A process can contain many threads that share its code, data, and heap, but each thread has its own:

  • Program Counter
  • Stack
  • Registers

Process vs. Thread — Visual:

Process A                    Process B
┌────────────────────┐      ┌────────────────────┐
│  Code (shared)     │      │  Code (shared)     │
│  Data (shared)     │      │  Data (shared)     │
│  Heap (shared)     │      │  Heap (shared)     │
│ ┌──────┐ ┌──────┐  │      │ ┌──────┐ ┌──────┐  │
│ │ T1   │ │ T2   │  │      │ │ T1   │ │ T2   │  │
│ │Stack │ │Stack │  │      │ │Stack │ │Stack │  │
│ └──────┘ └──────┘  │      │ └──────┘ └──────┘  │
└────────────────────┘      └────────────────────┘
| Aspect | Process | Thread |
|--------|---------|--------|
| Memory | Own address space | Shared with other threads |
| Creation overhead | High | Low |
| Communication | IPC (pipes, sockets, shared memory) | Direct (shared memory) |
| Isolation | Strong (crash in P1 doesn't affect P2) | Weak (crash in T1 can kill all threads) |
| Context switch | Expensive | Cheaper |
| Example | Chrome tabs (each is a process) | Browser's JS engine vs. rendering thread |

Q3. What are the states of a process? Explain the state diagram. Easy

Process State Transition Diagram:

         admit            dispatch               exit
  New ─────────→ Ready ─────────────→ Running ─────────→ Terminated
                   ↑ ←── preempt ────────┤
                   │                     │ I/O wait / event wait
    I/O complete / │                     ↓
    event occurs   └─────────────────  Waiting
| State | Description |
|-------|-------------|
| New | Process is being created |
| Ready | Waiting in queue to be assigned the CPU |
| Running | Instructions being executed by the CPU |
| Waiting | Waiting for I/O or an event (not the CPU) |
| Terminated | Finished execution; PCB being cleaned up |

Additional states:

  • Suspended Ready: Ready but swapped to disk (paging)
  • Suspended Wait: Waiting and swapped to disk

Q4. What is a zombie process and an orphan process? Medium

Zombie Process:

  • A process that has finished execution but still has an entry in the process table because its parent hasn't called wait() to collect its exit status.
  • The zombie holds minimal resources (just its PID and exit status).
  • Too many zombies = PID exhaustion.

Orphan Process:

  • A child process whose parent has terminated before the child.
  • The OS reassigns orphans to the init process (PID 1 on Linux), which periodically calls wait() to clean them up.
Parent dies early → Child becomes Orphan → Adopted by init (PID 1)

Child dies, parent doesn't call wait() → Child becomes Zombie

Q5. What is inter-process communication (IPC)? What are the methods? Medium

| IPC Method | Description | Speed | Direction |
|---|---|---|---|
| Pipe | Unidirectional byte stream | Fast | One-way |
| Named Pipe (FIFO) | Like a pipe but has a name in the filesystem | Fast | One-way |
| Message Queue | Messages stored in a kernel queue | Medium | Both ways |
| Shared Memory | Processes share a memory segment | Fastest | Both ways |
| Semaphore | Synchronization, not data transfer | N/A | Signaling |
| Socket | Communication over network/local | Variable | Both ways |
| Signal | Asynchronous notification | Fast | One-way |

Shared Memory Diagram:

Process A          Kernel           Process B
┌─────────┐    ┌──────────────┐    ┌─────────┐
│         │←──→│ Shared Memory│←──→│         │
│  Code   │    │   Segment    │    │  Code   │
└─────────┘    └──────────────┘    └─────────┘
             (fastest IPC — no kernel involvement for R/W)

Q6. What is context switching? What is its cost? Medium

Context switching is the act of saving the state of the currently running process into its PCB and restoring the state of the next scheduled process, so that many processes can share one CPU.

What gets saved (PCB contents):

  • Program Counter (next instruction)
  • CPU registers
  • Memory management info (page tables, limits)
  • I/O status, open file list

Cost factors:

  1. Direct cost: Time to save and restore PCB (~microseconds)
  2. Indirect cost: CPU cache is invalidated — new process loads cold (TLB flushes)
  3. Frequency matters: Too many context switches = more overhead than useful work ("thrashing")

Thread context switch is cheaper than process context switch because threads share the same address space (no TLB flush).


Q7. What is the difference between user-level threads and kernel-level threads? Hard

| Aspect | User-Level Threads | Kernel-Level Threads |
|---|---|---|
| Managed by | User-space library | OS kernel |
| Kernel awareness | Kernel sees one process | Kernel sees individual threads |
| Context switch | Very fast (no system call) | Slower (system call needed) |
| Parallelism on multi-core | ❌ All threads on one core | ✅ True parallelism |
| Blocking I/O | Blocks ALL threads in the process | Only blocks that thread |
| Examples | Early Java green threads, coroutines | POSIX threads (pthreads), modern Java threads |

Threading Models:

  • Many-to-One (M:1): Multiple user threads → 1 kernel thread. No parallelism.
  • One-to-One (1:1): Each user thread → 1 kernel thread. True parallelism. (Linux pthreads)
  • Many-to-Many (M:N): Multiple user threads → multiple kernel threads. Best of both. (Go goroutines, Erlang)

Q8. What is a race condition? How do you prevent it? Medium

A race condition occurs when multiple threads access shared data concurrently and the outcome depends on the interleaving of their operations. The classic lost update:

Thread 1: Read x (x=5) → Compute x+1 → Write x=6
Thread 2: Read x (x=5) → Compute x+1 → Write x=6  ← Should be 7!

Prevention mechanisms:

  1. Mutex (Mutual Exclusion Lock): Only one thread enters critical section at a time
  2. Semaphore: Counting synchronization primitive
  3. Monitor: Language-level synchronization (Java synchronized)
  4. Atomic operations: CPU-level guarantees (compare-and-swap)
  5. Immutability: If data is never written, no race conditions
// POSIX Mutex example
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  // must be initialized before use

pthread_mutex_lock(&lock);
// Critical section — only one thread here
counter++;
pthread_mutex_unlock(&lock);

Q9. What is the critical section problem? State its requirements. Medium

Three Requirements (Peterson/Dijkstra):

  1. Mutual Exclusion: Only one process can be in its critical section at a time
  2. Progress: If no process is in the critical section and processes want to enter, selection can't be postponed indefinitely
  3. Bounded Waiting: There must be a limit on how many times other processes can enter their critical section before a waiting process gets its turn (no starvation)
Process Structure:
┌─────────────────────┐
│   Entry Section     │  ← Request permission to enter
├─────────────────────┤
│  Critical Section   │  ← Access shared resource
├─────────────────────┤
│    Exit Section     │  ← Release permission
├─────────────────────┤
│  Remainder Section  │  ← Non-critical code
└─────────────────────┘

Q10. What is a semaphore? Explain binary and counting semaphores. Medium

A semaphore is an integer variable accessed only through two atomic operations:

  • wait(S) / P(S): Decrement S; if S < 0, block the process
  • signal(S) / V(S): Increment S; if a process is blocked, wake it
Binary Semaphore (Mutex):    Counting Semaphore:
S ∈ {0, 1}                  S ∈ {0, 1, 2, ... N}

0 = locked                  Value = number of available resources
1 = unlocked                N = resource pool size (e.g., 5 DB connections)

Producer-Consumer with semaphores:

semaphore full = 0;   // items in buffer
semaphore empty = N;  // empty slots in buffer  
semaphore mutex = 1;  // mutual exclusion

// Producer
wait(empty);
wait(mutex);
  // add item to buffer
signal(mutex);
signal(full);

// Consumer
wait(full);
wait(mutex);
  // remove item from buffer
signal(mutex);
signal(empty);

Memory Management

Q11. What is virtual memory and why is it important? Easy

Virtual memory gives each process the illusion of a large, private, contiguous address space; the OS and the MMU translate virtual addresses to physical RAM (or disk) behind the scenes.

Key benefits:

  1. Process isolation: Each process thinks it owns all memory; processes can't access each other's memory
  2. Run programs larger than RAM: Only active pages need to be in RAM (demand paging)
  3. Simplifies linking/loading: Programs compiled to start at address 0
  4. Memory overcommitment: Sum of all virtual spaces can exceed physical RAM

Virtual to Physical Address Translation:

Virtual Address        Page Table           Physical Memory
┌─────────────┐       ┌───────────┐         ┌──────────────┐
│  Page# | Offset│──→│Page# → Frame#│──→    │ Frame + Offset│
└─────────────┘       └───────────┘         └──────────────┘
     (CPU generates)   (OS maintains)         (actual RAM)

Q12. What is paging? How does the page table work? Medium

Paging divides memory into fixed-size blocks so that a process's address space need not be physically contiguous:

  • Virtual memory blocks → Pages
  • Physical memory blocks → Frames
  • Pages and frames are the same size (commonly 4KB)

Address Translation:

  • Virtual Address = [Page Number | Page Offset]
  • Page Table maps Page Number → Frame Number
  • Physical Address = [Frame Number | Page Offset]
Virtual Address (32-bit, 4KB pages):
Bits 31-12: Page Number (20 bits → 2^20 = 1M pages)
Bits 11-0:  Page Offset (12 bits → 4KB per page)

Page Table Entry stores:
- Frame number
- Valid/Invalid bit
- Dirty bit (page modified?)
- Reference bit (page accessed recently?)
- Protection bits (R/W/X)

Problem: Page tables can be huge (2^20 entries × 4 bytes = 4MB per process!).
Solutions: Multi-level page tables, Inverted page tables, TLB (cache for page table lookups).


Q13. What is a TLB (Translation Lookaside Buffer)? Medium

The TLB is a small, fast cache in the MMU that stores recent virtual-to-physical page translations, so most memory accesses avoid a full page-table walk.

TLB Operation:

CPU generates VA
       ↓
  TLB Lookup
  ↙         ↘
TLB Hit    TLB Miss
(fast!)    → Walk page table (slow, ~100+ cycles)
  ↓              ↓
 PA        Load translation into TLB, retry
  ↓
Access Physical Memory

TLB Metrics:

  • TLB has 64–1024 entries typically
  • Hit rate: 98–99% for most programs (locality of reference)
  • Effective Access Time (EAT) = hit rate × (TLB time + memory access time) + miss rate × (TLB time + page-table walk time + memory access time)

TLB flush happens on context switch (unless tagged with ASID — Address Space Identifier).


Q14. What is segmentation in memory management? Medium

Segmentation divides a process's address space into variable-sized logical units (code, data, stack, heap), each described by a base address and a limit.

| Aspect | Paging | Segmentation |
|---|---|---|
| Unit size | Fixed (4KB) | Variable (logical size) |
| Programmer visibility | Transparent | Visible (segment:offset) |
| Internal fragmentation | Yes (last page partially used) | No |
| External fragmentation | No | Yes (holes between segments) |
| Protection | Per-page | Per-segment (code R/X, data R/W) |
| Sharing | Harder | Natural (share code segment) |

Historically, x86 combined segmentation with paging; on x86-64, segmentation is mostly vestigial (flat segments) and paging performs the actual translation.


Q15. What is swapping? How does it relate to virtual memory? Medium

Swapping moves pages (or, in older systems, whole processes) between RAM and disk swap space, letting total memory demand exceed physical RAM.

Page-level swapping (demand paging):

  • When a page not in RAM is accessed → page fault
  • OS retrieves the page from disk (swap space)
  • If RAM is full: OS uses a page replacement algorithm to evict a page

Swap performance:

RAM access:  ~50–100 nanoseconds
SSD swap:    ~50–100 microseconds (1000× slower)
HDD swap:    ~5–10 milliseconds (100,000× slower!)

Thrashing: When a system spends more time swapping than executing. Caused by too many processes competing for too little RAM. Solution: reduce degree of multiprogramming.


Q16. Explain page replacement algorithms. Hard

| Algorithm | Strategy | Optimal? | Implementable? |
|---|---|---|---|
| FIFO | Evict oldest page | No | Yes (simple) |
| Optimal (OPT) | Evict page not used for the longest time in the future | Best | No (requires future knowledge) |
| LRU | Evict least recently used | Near-optimal | Yes (expensive) |
| LRU-Clock (Second Chance) | FIFO with reference bit | Near-LRU | Yes (efficient) |
| LFU | Evict least frequently used | No | Yes |

FIFO Anomaly (Bélády's Anomaly): With FIFO, adding more frames can sometimes increase page faults. Stack algorithms such as LRU and OPT are immune to this anomaly.

Example — Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 | 3 frames

FIFO: 9 page faults
LRU:  10 page faults (this string is adversarial; LRU usually beats FIFO)
OPT:  7 page faults (theoretical minimum)

Q17. What is memory fragmentation? Internal vs External? Easy

Internal Fragmentation: Wasted space inside an allocated block.

Process needs 10KB → Allocated 12KB (fixed block size) → 2KB wasted internally

External Fragmentation: Wasted space outside allocated blocks — free memory exists but is not contiguous.

Total free: 20KB, but split as: [5KB][8KB][7KB] → Can't allocate a 15KB process
| | Internal Fragmentation | External Fragmentation |
|---|---|---|
| Caused by | Fixed-size allocation | Variable-size allocation |
| Location of waste | Inside allocated block | Between allocated blocks |
| Present in | Paging | Segmentation, early allocation schemes |
| Solution | Smaller blocks | Compaction, paging |

Q18. What is a page fault? Describe the page fault handling sequence. Medium

Page Fault Handling Sequence:

1. CPU accesses virtual address
2. TLB miss → check page table
3. Page table entry: INVALID → trap to OS (page fault interrupt)
4. OS determines: is this a valid access?
   ├── No (illegal address) → Segfault (SIGSEGV), kill process
   └── Yes (valid but page on disk)
5. OS finds a free frame (or evicts a page using replacement algorithm)
6. OS schedules disk read to load page into frame
7. Process blocked; another process runs (CPU not wasted)
8. Disk I/O completes → interrupt
9. OS updates page table: marks page valid, sets frame number
10. Process resumes; re-executes the faulting instruction

Q19. What is buddy system memory allocation? Hard

The buddy system allocates memory in power-of-two block sizes: a block can split into two equal "buddies", and two free buddies coalesce back into their parent block.

Available: 64KB block. Process requests 12KB → Round up to 16KB

64KB → Split → [32KB | 32KB]
32KB → Split → [16KB | 16KB] (32KB buddy freed)
Allocate 16KB ✓

When freed: 16KB + its 16KB buddy → merge to 32KB → merge to 64KB

Advantages: Fast allocation/deallocation, simple coalescing.
Disadvantage: Internal fragmentation (12KB request wastes 4KB).

The Linux kernel uses the buddy system for physical page allocation.


Q20. What is slab allocation? Hard

Slab allocation serves frequently used, fixed-size kernel objects (inodes, task structs, semaphores) from pre-carved per-type caches instead of the general-purpose allocator.

Structure:

Cache (for "process" objects)
├── Slab 1: [process|process|process|process]  ← Full slab
├── Slab 2: [process|free|free|free]           ← Partial slab
└── Slab 3: [free|free|free|free]              ← Empty slab

Key idea: Pre-allocate slabs of memory divided into object-sized chunks. When a kernel object is needed, grab from a partial slab — no initialization overhead (objects are kept in initialized state after free).

Benefits:

  • Eliminates internal fragmentation within slabs
  • No repeated initialization/destruction overhead
  • Cache-friendly (related objects near each other in memory)

The Linux kernel uses the SLUB allocator (a streamlined slab design) as its default for kernel object allocation.


File Systems

Q21. What is a file system? What are its key functions? Easy

Key functions:

  1. Naming: Provide human-readable names for data (files and directories)
  2. Organization: Directory structure (tree hierarchy)
  3. Storage allocation: Track which disk blocks belong to which file
  4. Access control: Permissions (who can read/write/execute)
  5. Reliability: Journaling, checksums, redundancy
  6. Performance: Buffering, caching, prefetching

Common file systems:

| File System | OS | Max File Size | Max Volume Size | Features |
|---|---|---|---|---|
| ext4 | Linux | 16 TB | 1 EB | Journaling, extents |
| NTFS | Windows | 16 TB | 256 TB | ACLs, compression |
| APFS | macOS/iOS | 8 EB | 8 EB | Snapshots, encryption |
| FAT32 | Cross-platform | 4 GB | 2 TB | Wide compatibility |
| ZFS | Linux/BSD | 16 EB | 256 ZB | CoW, RAID-Z, snapshots |
| Btrfs | Linux | 16 EB | 16 EB | CoW, subvolumes |

Q22. What is an inode in Linux file systems? Medium

An inode (index node) is the on-disk structure that stores all of a file's metadata except its name.

Inode contains:

┌──────────────────────────────┐
│ File type (regular, dir, sym)│
│ Permissions (rwxrwxrwx)      │
│ Owner UID / Group GID        │
│ File size (bytes)            │
│ Timestamps: atime/mtime/ctime│
│ Link count                   │
│ Block pointers:              │
│   12 direct block pointers   │
│   1 single indirect pointer  │
│   1 double indirect pointer  │
│   1 triple indirect pointer  │
└──────────────────────────────┘

Key insight: A file's name is stored in the directory entry, NOT in the inode. This enables hard links — multiple directory entries pointing to the same inode.

stat filename    # View inode contents
ls -i filename   # Show inode number
df -i            # Show inode usage on filesystem

Q23. What is the difference between a hard link and a symbolic link? Medium

Hard Link: Another directory entry pointing to the same inode. Both names refer to the same data.

Symbolic Link (Symlink): A special file that contains the path to another file (like a Windows shortcut).

Hard Link:                      Symbolic Link:
                                
file.txt ──→ inode 1234        file.txt ──→ inode 1234
link.txt ──→ inode 1234        link.txt ──→ inode 5678
               ↓                               ↓
           [data blocks]              "file.txt" (path string)
                                               ↓
                                          inode 1234
                                               ↓
                                          [data blocks]
| Aspect | Hard Link | Symbolic Link |
|---|---|---|
| Points to | Same inode | Path string |
| Cross-filesystem | ❌ No | ✅ Yes |
| Points to directories | ❌ No (usually) | ✅ Yes |
| Survives original deletion | ✅ Yes | ❌ No (dangling symlink) |
| File size shown | Same as original | Size of path string |
| Inode | Same as original | Different inode |

Q24. What is journaling in file systems? Medium

Without journaling:

Power fails mid-write → File system inconsistent → fsck (slow check on boot)

With journaling:

1. Write change to journal (commit record)
2. Apply change to file system
3. Mark journal entry as complete

On crash recovery: replay journal from last commit → consistency guaranteed

Journaling modes (ext4):

| Mode | What's journaled | Speed | Safety |
|---|---|---|---|
| Journal | Data + metadata | Slowest | Safest |
| Ordered (default) | Metadata only; data written before metadata | Medium | Good |
| Writeback | Metadata only; data written anytime | Fastest | Less safe |

Q25. What is RAID and what are the common levels? Hard

| RAID Level | Technique | Min Disks | Redundancy | Performance | Capacity |
|---|---|---|---|---|---|
| RAID 0 | Striping only | 2 | ❌ None | ↑↑ Read/Write | 100% |
| RAID 1 | Mirroring | 2 | ✅ Full (1 disk can fail) | ↑ Read | 50% |
| RAID 5 | Striping + parity | 3 | ✅ 1 disk failure | ↑ Read | (N-1)/N |
| RAID 6 | Striping + 2 parity | 4 | ✅ 2 disk failures | ↑ Read | (N-2)/N |
| RAID 10 | Mirror + stripe | 4 | ✅ 1 per mirror pair | ↑↑ Read/Write | 50% |

RAID 5 Parity:

Disk 1: A1  A2  A3
Disk 2: B1  B2  B3
Disk 3: P1  P2  P3   (P = XOR of corresponding blocks)

If Disk 1 fails: A1 = XOR(B1, P1) — reconstructed!

Q26. What is the difference between absolute and relative paths? Easy

Absolute path: Full path from the root of the file system.

Linux:   /home/aditya/projects/app.py
Windows: C:\Users\Aditya\Projects\app.py

Relative path: Path relative to the current working directory.

If CWD = /home/aditya:
Relative: projects/app.py  or  ./projects/app.py
Parent:   ../other_user/file.txt

Special path entries:

  • . — current directory
  • .. — parent directory
  • ~ — home directory (shell expansion)
  • - — previous directory (in bash)

Q27. What is file system mounting? Medium

Before mount:                    After mounting /dev/sdb1 at /mnt/usb:
/                                /
├── home/                        ├── home/
├── etc/                         ├── etc/
└── mnt/                         └── mnt/
    └── usb/  (empty)                └── usb/        ← Access point
                                         ├── photos/  ← USB contents
                                         └── docs/
mount /dev/sdb1 /mnt/usb         # Mount USB drive
mount -t ntfs /dev/sdb1 /mnt/usb # Specify file system type
umount /mnt/usb                   # Unmount
cat /etc/fstab                    # Permanent mount configuration

Q28. What is a buffer cache and page cache in the OS? Hard

Buffer Cache: Caches disk blocks (raw block I/O). Reduces trips to physical disk.

Page Cache: Caches pages of files accessed via the file system. In modern Linux, these have merged into a unified page cache.

User Process reads file:
  → Check page cache → Hit: return cached data (no I/O)
                     → Miss: read from disk, cache it, return

Write-back caching:
  → Write goes to cache first (fast return to user)
  → Cache written to disk periodically (async, called "dirty page writeback")
  → Risk: data loss on power failure (mitigated by fsync() or journaling)

Key commands:

free -h              # See buffer/cache usage
echo 3 > /proc/sys/vm/drop_caches  # Drop page cache (testing)
sync                 # Force dirty buffers to disk

Deadlocks

Q29. What is a deadlock? What are the four necessary conditions? Easy

The Four Coffman Conditions (ALL four must hold simultaneously for deadlock):

  1. Mutual Exclusion: At least one resource must be held in a non-shareable mode (only one process can use it at a time)

  2. Hold and Wait: A process holds at least one resource AND is waiting to acquire additional resources held by other processes

  3. No Preemption: Resources cannot be forcibly taken from a process; they must be released voluntarily

  4. Circular Wait: There exists a circular chain of processes, each waiting for a resource held by the next in the chain

Circular Wait Example:
P1 holds R1, wants R2
P2 holds R2, wants R3
P3 holds R3, wants R1
→ Circular wait: P1→R2→P2→R3→P3→R1→P1 (deadlock!)

Q30. What are the methods for handling deadlocks? Medium

StrategyApproachOverheadUse case
PreventionEnsure ≥1 Coffman condition never holdsHigh design costSafety-critical systems
AvoidanceDynamically refuse resource allocations that might lead to deadlock (Banker's algorithm)Runtime overheadSystems with known resource needs
Detection + RecoveryAllow deadlocks; detect and recoverDetection overheadMost OS (Linux, Windows)
Ignore (Ostrich)Pretend deadlocks don't happenNoneMost desktop OS!

Most real-world OS (Linux, Windows) use the ostrich approach for non-critical resources — deadlocks are rare, and prevention is too costly. For system resources (locks, interrupts), prevention techniques are used.


Q31. Explain the Banker's Algorithm. Hard

A Safe State: There exists a sequence in which every process can eventually get all the resources it needs and finish.

Data structures:

n = number of processes, m = number of resource types

Available[m]        = currently free instances of each resource
Max[n][m]           = maximum demand of each process
Allocation[n][m]    = currently allocated to each process
Need[n][m]          = Max - Allocation (remaining need)

Safety Algorithm:

1. Work = Available; Finish[i] = false for all i
2. Find i such that: Finish[i] == false AND Need[i] ≤ Work
3. If found: Work = Work + Allocation[i]; Finish[i] = true; goto 2
4. If all Finish[i] == true → Safe state ✓
   Else → Unsafe state (deny the request)

Example:

Process | Max (A B C) | Alloc (A B C) | Need (A B C)
  P0    |   7  5  3   |   0  1  0     |   7  4  3
  P1    |   3  2  2   |   2  0  0     |   1  2  2
  P2    |   9  0  2   |   3  0  2     |   6  0  0
  P3    |   2  2  2   |   2  1  1     |   0  1  1
  P4    |   4  3  3   |   0  0  2     |   4  3  1

Available: A=3, B=3, C=2
Safe sequence: P1 → P3 → P4 → P2 → P0 (one of several valid sequences)

Q32. What is deadlock detection? Medium

Resource Allocation Graph:

Circle = Process, Square = Resource, dot inside = instance

P1 ──→ R1   (P1 requests R1)
R1 ──→ P2   (R1 is allocated to P2)
P2 ──→ R2   (P2 requests R2)
R2 ──→ P1   (R2 is allocated to P1)
→ Cycle exists → Deadlock!

For single-instance resources: cycle = deadlock.
For multi-instance resources: use a matrix-based detection algorithm (similar to Banker's but detects, not avoids).

Detection frequency:

  • Run after every resource allocation (expensive)
  • Run periodically (less overhead, delayed detection)
  • Run when CPU utilization drops significantly

Q33. What are the deadlock recovery methods? Medium

1. Process Termination:

  • Abort all deadlocked processes: Simple but loses work
  • Abort one at a time: Least cost first (priority, execution time, resources held), repeat detection after each termination

Selection criteria (which process to kill):

  • Priority of the process
  • How long it has computed and how long until completion
  • Resources the process has used and resources it needs
  • How many processes will need to be terminated

2. Resource Preemption:

  • Forcibly take resources from processes and give to others
  • Challenges:
    • Victim selection: Minimize cost
    • Rollback: Process losing resource must be rolled back to safe state (requires checkpointing)
    • Starvation: Ensure same process not always the victim (add to cost factor)

Q34. What is livelock and starvation? How do they differ from deadlock? Hard

| Condition | Description | Progress? | Blocked? |
|---|---|---|---|
| Deadlock | Processes permanently blocked, waiting for each other | ❌ None | ✅ Blocked |
| Livelock | Processes keep changing state in response to each other but make no progress | ❌ None | ❌ Not blocked |
| Starvation | A process is perpetually denied resources (others always get priority) | Others progress | One process stuck |

Livelock analogy:

Two people in a narrow hallway:
Person A steps left → Person B steps left → A steps right → B steps right → repeat forever
Both are "active" (not blocked) but make no progress

Starvation solution: Aging — gradually increase the priority of processes that have waited long.

Livelock solution: Random backoff (like in Ethernet CSMA/CD — if collision, wait a random time before retrying).


CPU Scheduling

Q35. What are the goals of CPU scheduling? Easy

| Goal | Metric | Who cares |
|---|---|---|
| CPU Utilization | Keep the CPU busy | All systems |
| Throughput | Processes completed per unit time | Batch systems |
| Turnaround Time | Submission to completion | Batch systems |
| Waiting Time | Time in ready queue | Interactive |
| Response Time | Time to first response after a request | Interactive/real-time |
| Fairness | No starvation | All systems |

Key formulas:

Turnaround Time = Completion Time - Arrival Time
Waiting Time = Turnaround Time - Burst Time
Response Time = First Run Time - Arrival Time

Q36. Explain FCFS scheduling with an example. Easy

Example:

Process | Arrival | Burst
P1      |   0     |   8
P2      |   1     |   4
P3      |   2     |   2

Gantt Chart:
| P1 (0-8) | P2 (8-12) | P3 (12-14) |
0          8          12          14

Waiting Times: P1=0, P2=7, P3=10
Average Waiting Time = (0+7+10)/3 = 5.67 ms

Convoy Effect: Short processes stuck behind one long process — P3 waits 10ms for a 2ms job!

FCFS is simple and fair in order, but has the highest average waiting time when burst times vary widely.


Q37. Explain SJF and SRTF scheduling. Medium

Shortest Job First (SJF): Non-preemptive. When CPU is free, pick the process with the shortest burst time.

Shortest Remaining Time First (SRTF): Preemptive version of SJF. When a new process arrives, if its burst time < remaining time of current process, preempt.

Process | Arrival | Burst
P1      |   0     |   8
P2      |   1     |   4
P3      |   2     |   2
P4      |   3     |   1

SRTF Gantt:
|P1(0-1)|P2(1-2)|P3(2-4)|P4(4-5)|P2(5-8)|P1(8-16)|
Wait times: P1=8, P2=3, P3=0, P4=1
Average = 12/4 = 3 ms (the minimum possible for this workload)

SJF is provably optimal for minimizing average waiting time (non-preemptive) and average turnaround time (preemptive SRTF).

Practical problem: Burst time is not known in advance. Estimated using exponential averaging:

τ(n+1) = α × t(n) + (1-α) × τ(n)
where t(n) = length of the most recent burst, τ(n) = previous prediction, α ∈ [0,1]

Q38. Explain Round Robin scheduling. What is the impact of time quantum? Medium

Example (quantum = 4ms):

Process | Arrival | Burst
P1      |   0     |   8
P2      |   0     |   4
P3      |   0     |   2

Gantt: |P1(0-4)|P2(4-8)|P3(8-10)|P1(10-14)|
Wait: P1=6, P2=4, P3=8; Average = 18/3 = 6.00 ms

Time Quantum impact:

Q → ∞:    Round Robin becomes FCFS
Q → 0:    Processor sharing (infinite context switching overhead)
Q = optimal: Usually 10-100ms; 80% of CPU bursts should be < Q

Rule of thumb: Quantum should be larger than most CPU bursts but not so large that short jobs wait too long.


Q39. What is Priority Scheduling? How does it handle starvation? Medium

Priority types:

  • Internal: Burst time, memory needs, I/O activity (computed by OS)
  • External: Assigned by user/administrator

Preemptive vs Non-preemptive:

Non-preemptive: Once running, process runs to completion (or block)
Preemptive:     New high-priority process can interrupt running process

Starvation problem: Low-priority processes may never run if high-priority processes keep arriving.

Solution — Aging: Gradually increase the priority of processes that have waited long.

Every 15 minutes a process waits → priority increased by 1
Process with initial priority 127 → after 32 hours → priority 0 (highest)

Q40. What is the Multilevel Queue and Multilevel Feedback Queue scheduling? Hard

Multilevel Queue: Partitions the ready queue into multiple queues based on process type, each with its own scheduling algorithm.

Priority 0 (Highest): System processes      → FCFS
Priority 1:           Interactive processes → Round Robin (Q=8ms)
Priority 2:           Interactive editing   → Round Robin (Q=16ms)
Priority 3:           Batch processes       → FCFS
Priority 4 (Lowest):  Background/student    → FCFS

Processes are permanently assigned to one queue — no movement between queues.

Multilevel Feedback Queue (MLFQ): Like MLQ but processes can move between queues based on behavior. This is used in most modern OS schedulers.

Queue 0 (Q=8ms):   New processes start here (high priority)
Queue 1 (Q=16ms):  If doesn't finish in Q0 → moved here
Queue 2 (FCFS):    If doesn't finish in Q1 → moved here (lowest priority)

If process does I/O (short burst) → move UP (CPU-bound processes sink down)
If process waits too long → moved UP (aging, prevents starvation)

Why MLFQ is powerful:

  • CPU-bound processes naturally sink to lower queues
  • I/O-bound (interactive) processes stay in high queues → good response time
  • No prior knowledge of burst time needed
  • Approximates SRTF behavior

MLFQ-style scheduling underpins the classic Unix and Windows schedulers; Linux's CFS (Completely Fair Scheduler) instead uses a fair-share design based on virtual runtime.


Quick Reference: OS Interview Cheat Sheet

| Topic | Key Point |
|---|---|
| Process vs Thread | Process = own address space; thread = shared address space |
| Context Switch | Saves PCB; threads cheaper (no TLB flush) |
| Zombie | Process done, parent hasn't called wait() |
| Semaphore | wait() decrements; signal() increments; binary = mutex |
| Virtual Memory | Illusion of large address space; demand paging |
| TLB | Cache for page table entries; flushed on context switch |
| Page Fault | Page not in RAM; OS fetches from disk |
| Deadlock Conditions | ME + Hold&Wait + No Preemption + Circular Wait |
| Banker's Algorithm | Deadlock avoidance; grant only if a safe state remains |
| FCFS | Simple; convoy effect |
| SJF/SRTF | Optimal avg wait; needs burst time prediction |
| Round Robin | Fair; quantum size critical |
| MLFQ | Adaptive; used in real OS schedulers |
| ext4 inode | All metadata except filename; 12 direct + indirect blocks |
| Hard vs Sym link | Hard: same inode; Sym: path string, can cross FS |
| RAID 5 | Striping + single parity; 1 disk failure tolerated |

Common OS Interview Topics by Company

| Company | Frequently Asked |
|---|---|
| Google | Concurrency, virtual memory, Linux internals, system calls |
| Microsoft | Windows architecture, NTFS, thread models, memory management |
| Amazon | Linux processes, IPC, performance tuning, networking + OS |
| Startups | Process/thread basics, deadlocks, scheduling, file systems |
| Embedded roles | Real-time OS, interrupt handling, memory constraints |

Prepared for placements 2026 | PapersAdda.com
