PapersAdda

Computer Science Core (Theory) Interview Questions for Placement 2026

Last Updated: March 2026


Core computer science interviews focus on theoretical foundations including algorithms, data structures, operating systems, databases, and computer networks. This guide covers 25 essential theory questions that product-based companies frequently ask.


Top 25 Computer Science Core Interview Questions

1. Explain the time and space complexity of QuickSort and when it performs worst.

Algorithm Steps:

  1. Pick pivot element
  2. Partition: elements < pivot go left, > pivot go right
  3. Recursively sort subarrays
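
The steps above can be sketched as follows (a minimal version using the Lomuto partition scheme with the last element as pivot; function and variable names are our own):

```cpp
#include <vector>
#include <algorithm>

// Lomuto partition: last element is the pivot; returns its final index.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; ++j) {
        if (a[j] < pivot) std::swap(a[i++], a[j]);  // smaller elements go left
    }
    std::swap(a[i], a[hi]);                         // place pivot
    return i;
}

void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;            // zero or one element: already sorted
    int p = partition(a, lo, hi);
    quicksort(a, lo, p - 1);         // recursively sort left of pivot
    quicksort(a, p + 1, hi);         // recursively sort right of pivot
}
```

On an already-sorted input this version degenerates to the O(n²) worst case described below, since the last element is always the largest.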

Time Complexity:

Best Case: O(n log n)

  • Balanced partitions (pivot is median)
  • Recurrence: T(n) = 2T(n/2) + O(n)

Average Case: O(n log n)

  • Random pivot selection
  • Expected balanced partitions

Worst Case: O(n²)

  • Unbalanced partitions
  • Occurs when:
    • Array already sorted and first/last element chosen as pivot
    • All elements are identical
    • Recurrence: T(n) = T(n-1) + O(n)

Space Complexity:

  • In-place: O(log n) average (recursion stack)
  • Worst case: O(n) recursion depth

Optimization - Randomized QuickSort:

  • Random pivot selection
  • Expected O(n log n) regardless of input
  • Worst case O(n²) still possible, but vanishingly unlikely on any input

Comparison with MergeSort:

  • QuickSort: In-place, cache-friendly, O(n²) worst
  • MergeSort: Stable, O(n log n) guaranteed, O(n) space

2. What is the difference between Process and Thread?

Feature        | Process                              | Thread
---------------|--------------------------------------|---------------------------------
Definition     | Independent executing program        | Lightweight unit within a process
Memory         | Separate address space               | Shares process memory
Communication  | IPC (pipes, sockets, shared memory)  | Direct memory sharing
Creation       | Expensive (fork)                     | Cheap
Context Switch | Heavy (MMU involvement)              | Light
Isolation      | Crash doesn't affect others          | Crash kills entire process
Resources      | Own file descriptors, heap           | Shares file descriptors, code, heap
Scheduling     | OS schedules processes               | Kernel/user schedules threads

Process Components:

  • Code segment (text)
  • Data segment (global variables)
  • Heap (dynamic memory)
  • Stack (local variables, function calls)
  • PCB (Process Control Block)

Thread Components:

  • Thread ID
  • Program Counter
  • Register set
  • Stack
  • Shares: code, data, heap, open files

When to use Threads:

  • Parallel computation
  • Responsive I/O (non-blocking)
  • Shared data processing
  • Lightweight concurrency

When to use Processes:

  • Isolation required (security)
  • Independent applications
  • Fault tolerance (crash isolation)
  • Different privilege levels

3. Explain Deadlock - Conditions, Prevention, and Banker's Algorithm.

Four Necessary Conditions (Coffman):

  1. Mutual Exclusion: Resources non-shareable
  2. Hold and Wait: Process holds resources while waiting
  3. No Preemption: Resources cannot be forcibly taken
  4. Circular Wait: Circular chain of waiting processes

Deadlock Handling:

1. Prevention: Remove one Coffman condition

  • Mutual Exclusion: Spool everything (not always possible)
  • Hold and Wait: Request all resources at once
  • No Preemption: Allow resource preemption
  • Circular Wait: Order resources, request in order

2. Avoidance:

  • Analyze resource allocation state
  • Only grant if safe state maintained
  • Banker's Algorithm

3. Detection & Recovery:

  • Allow deadlocks, detect periodically
  • Recovery: Process termination or resource preemption

4. Ignore:

  • Ostrich algorithm (used in most OS)
  • Reboot if deadlock occurs

Banker's Algorithm:

Data Structures:

  • Available[m]: Available instances of each resource type
  • Max[n][m]: Maximum demand of each process
  • Allocation[n][m]: Currently allocated
  • Need[n][m] = Max - Allocation

Safety Algorithm:

  1. Initialize Work = Available, Finish[i] = false
  2. Find process where Need ≤ Work and Finish = false
  3. Work = Work + Allocation, Finish = true
  4. Repeat until no such process exists
  5. If all Finish = true, system is safe
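
The safety algorithm above can be sketched as follows (function and variable names are our own; a full implementation would also track Max and the request algorithm):

```cpp
#include <vector>

// Banker's safety check: true if some order exists in which every process
// can obtain its remaining need and finish. n processes, m resource types.
bool isSafe(std::vector<int> work,                            // copy of Available
            const std::vector<std::vector<int>>& need,
            const std::vector<std::vector<int>>& alloc) {
    int n = (int)need.size(), m = (int)work.size();
    std::vector<bool> finish(n, false);
    bool progress = true;
    while (progress) {
        progress = false;
        for (int i = 0; i < n; ++i) {
            if (finish[i]) continue;
            bool ok = true;                        // is Need[i] <= Work?
            for (int j = 0; j < m; ++j)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                              // pretend process i finishes
                for (int j = 0; j < m; ++j) work[j] += alloc[i][j];
                finish[i] = true;
                progress = true;
            }
        }
    }
    for (bool f : finish) if (!f) return false;    // someone can never finish
    return true;
}
```

Running it on the textbook five-process example (Available = {3,3,2}) reports a safe state, because the sequence P1, P3, P4, P0, P2 lets every process complete.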

Resource Request Algorithm:

  1. Check if Request ≤ Need, else error
  2. Check if Request ≤ Available, else wait
  3. Pretend to allocate, check safety
  4. If safe, allocate; else, process waits

4. What are ACID properties in DBMS?

A - Atomicity:

  • All operations complete or none
  • Transaction is indivisible unit
  • Implementation: Write-ahead logging, shadow paging
  • Rollback on failure undoes all changes

C - Consistency:

  • Database remains in valid state
  • All integrity constraints maintained
  • Ensured by: Database constraints, triggers, application logic
  • Example: in a funds transfer, the total balance across both accounts stays constant

I - Isolation:

  • Concurrent transactions don't interfere
  • Result as if transactions executed sequentially
  • Isolation Levels:
    • READ UNCOMMITTED: Lowest, dirty reads possible
    • READ COMMITTED: No dirty reads
    • REPEATABLE READ: No non-repeatable reads
    • SERIALIZABLE: Highest, no anomalies

D - Durability:

  • Committed transactions survive failures
  • Changes permanent
  • Implementation:
    • Write-ahead logging (WAL)
    • Redo logs
    • Database backups
    • Battery-backed write cache

Concurrency Control:

  • Locking: Shared (S), Exclusive (X) locks
  • Two-Phase Locking (2PL): Growing + shrinking phases
  • Timestamp Ordering: Older transactions priority
  • Optimistic: Validate at commit

5. Explain TCP 3-Way Handshake and 4-Way Termination.

Purpose: Synchronize sequence numbers, exchange parameters, prevent stale connections.

Steps:

Client                    Server
   |    SYN, seq=x        |
   |--------------------->|
   |                      |
   |  SYN-ACK, seq=y,     |
   |  ack=x+1             |
   |<---------------------|
   |                      |
   |    ACK, seq=x+1,     |
   |    ack=y+1           |
   |--------------------->|
   |                      |
   |    Connection        |
   |    Established       |

1. SYN:

  • Client sends SYN=1, initial sequence number (ISN=x)
  • Enters SYN_SENT state

2. SYN-ACK:

  • Server sends SYN=1, ACK=1
  • Server ISN=y, ack=x+1
  • Enters SYN_RECEIVED state

3. ACK:

  • Client sends ACK=1, seq=x+1, ack=y+1
  • Both in ESTABLISHED state

Why 3-way?

  • Prevents stale/delayed connection requests
  • Confirms both directions work
  • Synchronizes sequence numbers

TCP 4-Way Termination (Connection Close):

Steps:

Client                    Server
   |    FIN, seq=u        |
   |--------------------->|
   |                      |
   |    ACK, seq=v,       |
   |    ack=u+1           |
   |<---------------------|
   |  (Half-close:        |
   |   Client→Server done)|
   |                      |
   |    FIN, seq=w,       |
   |    ack=u+1           |
   |<---------------------|
   |                      |
   |    ACK, seq=u+1,     |
   |    ack=w+1           |
   |--------------------->|
   |                      |
   |    Connection        |
   |    Closed            |

1. FIN: Active closer sends FIN
2. ACK: Passive closer acknowledges
3. FIN: Passive closer sends FIN (when ready)
4. ACK: Active closer acknowledges

TIME_WAIT State:

  • Active closer waits 2×MSL (Maximum Segment Lifetime)
  • Ensures last ACK received
  • Prevents stale segments

6. What is the difference between IPv4 and IPv6?

Feature       | IPv4                          | IPv6
--------------|-------------------------------|----------------------------------
Address Size  | 32 bits                       | 128 bits
Address Space | ~4.3 billion                  | 3.4 × 10³⁸
Format        | Dotted decimal (192.168.1.1)  | Hexadecimal (2001:0db8::1)
Header Size   | 20-60 bytes                   | Fixed 40 bytes
Checksum      | Yes                           | No (handled by upper layers)
Fragmentation | Routers and hosts             | Only hosts
NAT           | Widely used (address shortage)| Not needed
Configuration | Manual or DHCP                | Auto-configuration (SLAAC), DHCPv6
Security      | IPsec optional                | IPsec support mandated (use optional)
QoS           | Limited                       | Flow label field
Multicast     | Optional                      | Required
Broadcast     | Yes                           | No (replaced by multicast)

IPv6 Address Types:

  • Unicast: One-to-one
  • Multicast: One-to-many
  • Anycast: One-to-nearest

IPv6 Address Representation:

  • 8 groups of 4 hex digits
  • Leading zeros can be omitted
  • One group of consecutive zeros can be replaced with ::
  • Example: 2001:0db8:0000:0000:0000:ff00:0042:8329 → 2001:db8::ff00:42:8329

Transition Mechanisms:

  • Dual Stack: Both protocols simultaneously
  • Tunneling: IPv6 over IPv4 network
  • Translation: NAT64, protocol translation

7. Explain Virtual Memory, Paging, and Page Faults.

Virtual Memory:

  • Abstraction giving each process the illusion of a large, contiguous address space
  • Allows execution of programs larger than physical RAM
  • Protection between processes

Benefits:

  1. Larger address space than physical memory
  2. Memory isolation between processes
  3. Efficient memory sharing (shared libraries)
  4. Simplified memory management

Paging:

  • Fixed-size blocks: Pages (logical) / Frames (physical)
  • Typical size: 4KB
  • Eliminates external fragmentation

Page Table:

  • Maps virtual page numbers to physical frame numbers
  • Stored in memory
  • MMU (Memory Management Unit) performs translation

Page Fault:

  • CPU references page not in physical memory
  • OS handles fault:
    1. Find free frame (or victim)
    2. Load page from disk
    3. Update page table
    4. Restart instruction

Page Replacement Algorithms:

  • FIFO: First-in-first-out (suffers Belady's anomaly)
  • LRU: Least Recently Used (good, expensive)
  • Optimal: Replace page used farthest in future (theoretical)
  • Clock: Second chance algorithm (approximates LRU)
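
A minimal FIFO simulation (names are our own) makes Belady's anomaly easy to observe: the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 faults 9 times with 3 frames but 10 times with 4.

```cpp
#include <vector>
#include <deque>
#include <algorithm>

// Count page faults under FIFO replacement with a fixed number of frames.
int fifoFaults(const std::vector<int>& refs, int frames) {
    std::deque<int> mem;                 // front = oldest loaded page
    int faults = 0;
    for (int page : refs) {
        if (std::find(mem.begin(), mem.end(), page) != mem.end())
            continue;                    // hit: page already resident
        ++faults;                        // fault: must load page
        if ((int)mem.size() == frames)
            mem.pop_front();             // evict the oldest page
        mem.push_back(page);
    }
    return faults;
}
```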

TLB (Translation Lookaside Buffer):

  • Cache for page table entries
  • Speeds up address translation
  • Typical hit rate: 99%

Thrashing:

  • Excessive page faults due to insufficient frames
  • CPU utilization drops despite high multiprogramming
  • Solution: Working set model, page fault frequency

8. What is Normalization? Explain 1NF, 2NF, 3NF, BCNF.

1NF (First Normal Form):

  • Atomic values (no repeating groups)
  • Each cell contains single value
  • Each row is unique

Violation Example:

Student: ID, Name, Subjects      ← Multiple subjects in one cell

1NF Solution:

Student_Subject: ID, Name, Subject (separate rows)

2NF (Second Normal Form):

  • Must be in 1NF
  • No partial dependency (non-prime attributes depend on full candidate key)
  • Relevant for composite keys

Violation Example:

Enrollment: (StudentID, CourseID), StudentName, CourseName
StudentName depends only on StudentID (partial dependency)

2NF Solution:

Student: StudentID, StudentName
Course: CourseID, CourseName
Enrollment: StudentID, CourseID

3NF (Third Normal Form):

  • Must be in 2NF
  • No transitive dependency (non-prime → non-prime)

Violation Example:

Employee: EmpID, EmpName, DeptID, DeptName
DeptName depends on DeptID, not directly on EmpID (transitive)

3NF Solution:

Employee: EmpID, EmpName, DeptID
Department: DeptID, DeptName

BCNF (Boyce-Codd Normal Form):

  • Stricter version of 3NF
  • For every FD X → Y, X must be a superkey

BCNF Violation:

Student: (Student, Course), Instructor
Instructor → Course (Instructor determines Course)
Instructor is not a superkey

BCNF Solution:

Student_Instructor: Student, Instructor
Instructor_Course: Instructor, Course

Trade-offs:

  • Higher normalization: Less redundancy, more joins
  • Denormalization: Used for performance in read-heavy systems

9. Explain the Sliding Window Protocol in TCP.

Concept:

  • Sender can transmit multiple packets without acknowledgment
  • Window size determines unacknowledged data allowed
  • Dynamic adjustment based on network conditions

Sender Window:

[Sent & ACKed] [Sent, not ACKed] [Can send] [Cannot send yet]
              ↑                 ↑         ↑
            lastACK           sent    lastACK + window

Receiver Window:

  • Advertised in TCP header (Window Size field)
  • Indicates available buffer space

Types:

1. Go-Back-N:

  • Cumulative acknowledgments
  • Timeout: Retransmit all from lost packet
  • Window size up to 2ⁿ-1 (n = sequence number bits)

2. Selective Repeat:

  • Individual acknowledgments
  • Timeout: Retransmit only lost packet
  • Receiver buffers out-of-order packets
  • Window size up to 2ⁿ⁻¹

TCP Window Scaling:

  • Original window size: 16 bits (64KB max)
  • Window scale option for high-bandwidth networks
  • Effective window up to 1GB

Congestion Control:

  • Slow Start: Exponential window growth
  • Congestion Avoidance: Linear growth
  • Fast Retransmit: Retransmit on 3 duplicate ACKs
  • Fast Recovery: Avoid slow start after fast retransmit

Throughput (upper bound) = Window Size / RTT
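
As a worked example of this bound (the helper function is our own, and it deliberately ignores congestion control and loss): a 64 KB window over a 50 ms RTT caps throughput at 65536 / 0.05 ≈ 1.31 MB/s, regardless of link bandwidth.

```cpp
// Upper bound on TCP throughput imposed by the window size, in bytes/second.
// Illustrative only: real throughput is also limited by congestion control.
double maxThroughput(double windowBytes, double rttSeconds) {
    return windowBytes / rttSeconds;
}
```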


10. What is the difference between Compile-time and Run-time Polymorphism?

Aspect      | Compile-time (Static)                      | Run-time (Dynamic)
------------|--------------------------------------------|----------------------------------
Also called | Early binding, static binding              | Late binding, dynamic binding
Mechanism   | Function overloading, operator overloading | Virtual functions, overriding
Resolution  | Compiler decides                           | Runtime decides (e.g., vtable lookup)
Speed       | Faster (decided at compile time)           | Slower (lookup at runtime)
Flexibility | Less flexible                              | More flexible
Example     | Method overloading                         | Method overriding

Compile-time Polymorphism:

Function Overloading:

class Calculator {
public:
    int add(int a, int b) { return a + b; }
    double add(double a, double b) { return a + b; }
    int add(int a, int b, int c) { return a + b + c; }
};
  • Same name, different parameters
  • Return type alone doesn't differentiate

Operator Overloading:

Complex operator+(const Complex& c) const {   // member of class Complex
    return Complex(real + c.real, imag + c.imag);
}

Run-time Polymorphism:

Method Overriding:

class Shape {
public:
    virtual double area() const = 0;  // Pure virtual
    virtual ~Shape() = default;       // Virtual destructor for safe delete
};

class Circle : public Shape {
    double r;
public:
    explicit Circle(double radius) : r(radius) {}
    double area() const override { return 3.14 * r * r; }
};

Shape* s = new Circle(2.0);
s->area();  // Calls Circle::area() at runtime via the vtable
delete s;

Virtual Function Table (vtable):

  • Compiler creates table of function pointers
  • Each object has pointer to vtable (vptr)
  • Runtime: vptr → vtable → function

When to use:

  • Static: Performance critical, known at compile
  • Dynamic: Extensibility, plugin architecture

11. Explain Binary Search Tree (BST) and its operations complexity.

Properties:

  • Left child < Parent
  • Right child > Parent
  • No duplicates (or handled by convention)
  • In-order traversal gives sorted sequence

Operations:

Search:

1. Start at root
2. If target == node, found
3. If target < node, go left
4. If target > node, go right
5. Repeat until found or null

Insert:

  • Search for position (will end at null)
  • Insert as leaf

Delete:

  • Leaf node: Simply remove
  • One child: Replace with child
  • Two children: Replace with in-order successor/predecessor

Complexity:

Operation | Average  | Worst
----------|----------|------
Search    | O(log n) | O(n)
Insert    | O(log n) | O(n)
Delete    | O(log n) | O(n)

Worst case: Skewed tree (inserted in sorted order)

Balanced BST Variants:

  • AVL Tree: Height balanced (|left-right| ≤ 1)
  • Red-Black Tree: Color balanced, less strict than AVL
  • B-Tree: Multi-way tree for disk storage
  • Splay Tree: Self-adjusting

AVL Tree Rotations:

  • Single rotation (LL, RR cases)
  • Double rotation (LR, RL cases)
  • Guarantees O(log n) always

Applications:

  • Indexing (databases)
  • Symbol tables
  • Expression parsers
  • 3D game engines (BSP trees)

12. What is Semaphore and how does it solve the Producer-Consumer Problem?

Types:

1. Binary Semaphore (Mutex):

  • Values: 0 or 1
  • Used for mutual exclusion
  • Like a lock

2. Counting Semaphore:

  • Values: 0 to N
  • Controls access to N resources

Operations:

  • wait(P): Decrement, block if negative
  • signal(V): Increment, wake waiting process

Atomicity: Both operations must be atomic (hardware support needed).

Producer-Consumer Problem:

Scenario:

  • Producer generates data, puts in buffer
  • Consumer removes and processes data
  • Buffer has finite size N

Requirements:

  1. Producer waits if buffer full
  2. Consumer waits if buffer empty
  3. Mutual exclusion on buffer access

Solution with Semaphores:

semaphore mutex = 1;      // Mutual exclusion
semaphore empty = N;      // Empty slots
semaphore full = 0;       // Filled slots

Producer:
    while(true) {
        item = produce();
        wait(empty);      // Wait for empty slot
        wait(mutex);      // Enter critical section
        buffer[in] = item;
        in = (in + 1) % N;
        signal(mutex);    // Exit critical section
        signal(full);     // Notify consumer
    }

Consumer:
    while(true) {
        wait(full);       // Wait for filled slot
        wait(mutex);      // Enter critical section
        item = buffer[out];
        out = (out + 1) % N;
        signal(mutex);    // Exit critical section
        signal(empty);    // Notify producer
        consume(item);
    }

Key Points:

  • wait(mutex) must come after wait(empty)/wait(full)
  • Order prevents deadlock
  • signal() operations can be in any order
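
For comparison, here is a sketch of the same bounded buffer in C++ using a mutex and two condition variables, which play the roles of the empty/full semaphores in the pseudocode above (class and member names are our own):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class BoundedBuffer {
    std::queue<int> buf;
    const size_t cap;
    std::mutex m;
    std::condition_variable notFull, notEmpty;
public:
    explicit BoundedBuffer(size_t n) : cap(n) {}

    void put(int item) {
        std::unique_lock<std::mutex> lk(m);                 // wait(mutex)
        notFull.wait(lk, [&]{ return buf.size() < cap; });  // wait(empty)
        buf.push(item);
        notEmpty.notify_one();                              // signal(full)
    }                                                       // lock released here

    int get() {
        std::unique_lock<std::mutex> lk(m);                 // wait(mutex)
        notEmpty.wait(lk, [&]{ return !buf.empty(); });     // wait(full)
        int item = buf.front(); buf.pop();
        notFull.notify_one();                               // signal(empty)
        return item;
    }
};
```

Unlike raw semaphores, `wait` with a predicate re-checks the condition after every wakeup, so the lock-ordering deadlock discussed above cannot occur here.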

13. Explain Dijkstra's Algorithm and when it fails.

Algorithm:

1. Initialize: dist[source] = 0, dist[others] = ∞
2. Create priority queue (min-heap) of vertices
3. While queue not empty:
   a. u = vertex with minimum dist
   b. For each neighbor v of u:
      if dist[u] + weight(u,v) < dist[v]:
         dist[v] = dist[u] + weight(u,v)
         prev[v] = u
4. Return dist[]

Time Complexity:

  • Adjacency matrix: O(V²)
  • Binary heap: O(E log V)
  • Fibonacci heap: O(E + V log V)

Space Complexity: O(V)
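
The steps above can be sketched with a binary heap as follows (C++17; names are our own, and stale queue entries are skipped on pop rather than decreased in place):

```cpp
#include <queue>
#include <utility>
#include <vector>
#include <limits>

// Dijkstra with a min-heap; adj[u] holds (neighbor, weight) pairs.
// Assumes all edge weights are non-negative.
std::vector<long long> dijkstra(
        const std::vector<std::vector<std::pair<int,int>>>& adj, int src) {
    const long long INF = std::numeric_limits<long long>::max();
    std::vector<long long> dist(adj.size(), INF);
    using State = std::pair<long long,int>;                  // (dist, vertex)
    std::priority_queue<State, std::vector<State>, std::greater<State>> pq;
    dist[src] = 0;
    pq.push({0, src});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d > dist[u]) continue;             // stale entry: already improved
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {             // relax edge (u, v)
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}
```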

When Dijkstra Fails:

Negative Edge Weights:

  • Dijkstra assumes once vertex processed, shortest path found
  • Negative edges may provide shorter path later
  • Example:
    A --5--> B --(-10)--> C
    |                    ↑
    --------2-------------
    
    Dijkstra gives A→C = 2, but A→B→C = -5

Negative Cycles:

  • Even worse - shortest path undefined (keep decreasing)

Alternatives:

  • Bellman-Ford: Handles negative weights, O(VE)
  • SPFA: Queue-based optimization of Bellman-Ford
  • Floyd-Warshall: All-pairs shortest path

When to use:

  • Non-negative weights only
  • Single source shortest path
  • Large graphs (use with heap)

14. What are the different types of Database Indexes?

1. B-Tree Index:

  • Most common type
  • Self-balancing tree
  • O(log n) search, insert, delete
  • Good for range queries
  • Default in most databases

2. B+ Tree Index:

  • Variant of B-Tree
  • All data in leaf nodes (linked)
  • Better for range scans
  • Used in MySQL InnoDB

3. Hash Index:

  • Hash table structure
  • O(1) lookup for equality
  • No range query support
  • Used for exact match
  • Example: PostgreSQL hash index

4. Bitmap Index:

  • One bitmap per distinct value
  • Efficient for low cardinality columns
  • Good for data warehousing
  • Example: Gender, status columns

5. Clustered Index:

  • Determines physical order of data
  • Only one per table
  • Leaf nodes contain actual data
  • Example: MySQL PK is clustered

6. Non-Clustered Index:

  • Separate from data
  • Contains index key + pointer to data
  • Multiple allowed per table
  • Extra lookup required

7. Composite Index:

  • Multiple columns
  • Order matters (leftmost prefix rule)
  • Example: INDEX(last_name, first_name)

8. Covering Index:

  • Contains all columns needed for query
  • No table access required
  • Enables an "index-only scan" (also called a covering index scan)

Index Selection Considerations:

  • Query patterns
  • Cardinality
  • Write frequency
  • Storage cost
  • Maintenance overhead

15. Explain the differences between HTTP/1.1, HTTP/2, and HTTP/3.

Feature            | HTTP/1.1                   | HTTP/2           | HTTP/3
-------------------|----------------------------|------------------|--------------------
Year               | 1997                       | 2015             | 2022
Transport          | TCP                        | TCP              | QUIC (UDP)
Multiplexing       | No (head-of-line blocking) | Yes              | Yes
Header Compression | No                         | HPACK            | QPACK
Server Push        | No                         | Yes              | Yes
Binary Protocol    | No                         | Yes              | Yes
Connection Setup   | Multiple TCP               | Single TCP       | 0-RTT / 1-RTT
Security           | Optional TLS               | Usually TLS 1.2+ | TLS 1.3 mandatory

HTTP/1.1 Limitations:

  • One request per TCP connection (without pipelining)
  • Head-of-line blocking
  • Multiple TCP connections needed
  • Verbose headers repeated

HTTP/2 Improvements:

1. Multiplexing:

  • Multiple streams over single TCP connection
  • No head-of-line blocking at HTTP layer

2. Binary Framing:

  • More efficient parsing
  • Smaller messages

3. Header Compression (HPACK):

  • Static and dynamic tables
  • Reduces header size

4. Server Push:

  • Server can send resources proactively
  • Example: Push CSS/JS with HTML

5. Stream Prioritization:

  • Client can specify importance

HTTP/3 Improvements:

QUIC Protocol:

  • Built on UDP
  • Connection migration (IP change doesn't drop)
  • 0-RTT connection establishment
  • Built-in TLS 1.3

Benefits:

  • Eliminates TCP head-of-line blocking
  • Faster connection establishment
  • Better performance on mobile (switching networks)

Adoption:

  • HTTP/2: Widely adopted (60%+ websites)
  • HTTP/3: Growing adoption (Google, Facebook, Cloudflare)

16. What is the difference between SQL and NoSQL databases?

Feature    | SQL (Relational)              | NoSQL (Non-relational)
-----------|-------------------------------|--------------------------------------------------------------
Schema     | Fixed, predefined             | Flexible, dynamic
Scaling    | Vertical (scale up)           | Horizontal (scale out)
Data Model | Tables (rows/columns)         | Document, key-value, column, graph
ACID       | Full ACID support             | BASE (Basically Available, Soft state, Eventual consistency)
Joins      | Supported                     | Generally not supported
Best for   | Complex queries, transactions | High volume, unstructured data
Examples   | MySQL, PostgreSQL, Oracle     | MongoDB, Cassandra, Redis, Neo4j

SQL Database Types:

  • MySQL: Open source, web applications
  • PostgreSQL: Advanced features, extensibility
  • Oracle: Enterprise, mission-critical
  • SQL Server: Microsoft ecosystem

NoSQL Types:

1. Document Store:

  • JSON/BSON documents
  • Flexible schema
  • Example: MongoDB, CouchDB

2. Key-Value Store:

  • Simple hash table
  • Ultra-fast lookups
  • Example: Redis, DynamoDB, Riak

3. Wide-Column Store:

  • Column families
  • Massive scale
  • Example: Cassandra, HBase

4. Graph Database:

  • Nodes and edges
  • Relationship-focused
  • Example: Neo4j, Amazon Neptune

When to use SQL:

  • ACID transactions required
  • Complex joins and queries
  • Structured data with relationships
  • Medium scale

When to use NoSQL:

  • Massive scale (Big Data)
  • Rapid prototyping (schema evolves)
  • Unstructured/semi-structured data
  • High write throughput

Polyglot Persistence:

  • Using multiple database types
  • Each service uses best-fit database

17. Explain RAID levels and their characteristics.

RAID 0 (Striping):

  • Data split across disks
  • No redundancy
  • High performance
  • Capacity: Sum of all disks
  • Failure: Any disk fails = data lost
  • Use: Non-critical data, speed needed

RAID 1 (Mirroring):

  • Exact copy on two disks
  • High redundancy
  • Read performance improved
  • Capacity: 50% of total
  • Use: Critical data, OS drives

RAID 5 (Striping with Parity):

  • Data striped, parity distributed
  • Can survive one disk failure
  • Good balance of speed and redundancy
  • Capacity: (N-1)/N of total
  • Write penalty (parity calculation)
  • Use: File servers, general purpose

RAID 6 (Striping with Double Parity):

  • Two parity blocks
  • Can survive two disk failures
  • More write overhead than RAID 5
  • Capacity: (N-2)/N of total
  • Use: Large arrays, critical data

RAID 10 (1+0, Mirrored Stripes):

  • Striped array of mirrors
  • Can survive multiple failures (if not same mirror)
  • Excellent performance and redundancy
  • Capacity: 50% of total
  • Expensive
  • Use: Databases, high-performance applications
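
The capacity rules above reduce to a small helper (illustrative; assumes equal-sized disks and treats RAID 1 as a two-disk mirror):

```cpp
// Usable capacity for common RAID levels, given equal-sized disks.
// Returns -1 for unsupported levels.
double usableCapacity(int level, int disks, double diskSize) {
    switch (level) {
        case 0:  return disks * diskSize;        // striping: no redundancy
        case 1:  return diskSize;                // two-disk mirror: half usable
        case 5:  return (disks - 1) * diskSize;  // one disk's worth of parity
        case 6:  return (disks - 2) * diskSize;  // two disks' worth of parity
        case 10: return disks * diskSize / 2.0;  // mirrored stripes: half usable
        default: return -1;
    }
}
```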

Comparison:

Level | Redundancy | Capacity | Read | Write
------|------------|----------|------|-------
0     | None       | 100%     | Fast | Fast
1     | High       | 50%      | Fast | Medium
5     | Medium     | (N-1)/N  | Fast | Slow
6     | High       | (N-2)/N  | Fast | Slow
10    | High       | 50%      | Fast | Fast

18. What is Cache Memory and explain its mapping techniques.

Principle of Locality:

  • Temporal: Recently accessed likely accessed again
  • Spatial: Nearby addresses likely accessed

Cache Levels:

  • L1: On-chip, 32-64KB, 1-4 cycles
  • L2: On-chip, 256KB-1MB, 10 cycles
  • L3: Shared, 4-64MB, 20-50 cycles

Performance Metrics:

  • Hit Rate: Fraction of accesses found in cache
  • Miss Rate: 1 - Hit Rate
  • Miss Penalty: Time to fetch from lower level

Mapping Techniques:

1. Direct Mapping:

  • Each memory block maps to exactly one cache line
  • Index = Block address mod Number of cache lines
  • Simple, fast, but rigid
  • May cause conflicts (thrashing)
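
A small helper shows how a direct-mapped cache decomposes an address (sizes assumed to be powers of two; struct and function names are our own):

```cpp
#include <cstdint>

// Fields of a byte address as seen by a direct-mapped cache.
struct CacheAddr { uint32_t tag, index, offset; };

CacheAddr splitAddress(uint32_t addr, uint32_t blockSize, uint32_t numLines) {
    uint32_t block = addr / blockSize;   // which memory block the byte is in
    return { block / numLines,           // tag: disambiguates conflicting blocks
             block % numLines,           // index: which cache line it maps to
             addr % blockSize };         // offset of the byte within the block
}
```

Two blocks whose addresses differ only in the tag collide on the same line, which is exactly the conflict (thrashing) case noted above.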

2. Fully Associative:

  • Block can go anywhere in cache
  • All entries searched in parallel
  • Flexible, no conflicts
  • Expensive (comparators for each entry)

3. Set-Associative (N-way):

  • Compromise approach
  • Cache divided into sets
  • Block maps to set, can be any line within set
  • N comparators needed
  • Common: 2-way, 4-way, 8-way

Write Policies:

  • Write-through: Update cache and memory immediately
  • Write-back: Update cache only, write to memory on eviction

Cache Coherency (Multi-core):

  • MESI protocol: Modified, Exclusive, Shared, Invalid
  • Ensures all cores see consistent data

19. Explain the purpose and working of DNS.

Why DNS:

  • Humans remember names, computers use IPs
  • Decentralized management
  • Scalable

DNS Hierarchy:

Root (.)
├── TLD (.com, .org, .net, .in)
│   ├── Second-level (google.com)
│   │   ├── Subdomain (mail.google.com)
│   │   └── Subdomain (drive.google.com)

DNS Query Types:

1. Recursive Query:

  • DNS resolver does all work
  • Returns final answer or error
  • Clients typically use recursive

2. Iterative Query:

  • Server returns best answer it has
  • Resolver may need to query more servers
  • Used between DNS servers

DNS Resolution Process:

1. Browser checks cache
2. OS checks hosts file, then cache
3. Query to configured DNS resolver
4. Resolver checks cache
5. If not cached:
   a. Query Root Server → TLD server address
   b. Query TLD Server → Authoritative server address
   c. Query Authoritative Server → IP address
6. Return IP to client
7. Cache at each level

Record Types:

Record | Purpose
-------|------------------------
A      | IPv4 address
AAAA   | IPv6 address
CNAME  | Canonical name (alias)
MX     | Mail server
NS     | Name server
TXT    | Text (SPF, DKIM)
SOA    | Start of authority
PTR    | Reverse DNS

DNS Caching:

  • TTL (Time To Live) controls cache duration
  • Reduces query load and latency

20. What is the difference between Monolithic and Microservices Architecture?

Aspect         | Monolithic                 | Microservices
---------------|----------------------------|----------------------------------
Structure      | Single unified application | Multiple independent services
Deployment     | One unit                   | Independent per service
Scaling        | Whole application          | Individual services
Technology     | Single stack               | Polyglot (different per service)
Complexity     | Simpler initially          | Higher initially
Failure        | Whole app down             | Graceful degradation
Database       | Shared                     | Per-service
Communication  | In-process                 | Network (HTTP/gRPC/messaging)
Team Structure | Single team                | Multiple small teams

Monolithic Architecture:

Advantages:

  • Simple to develop and test
  • Easy deployment
  • Better performance (no network calls)
  • Simple debugging

Disadvantages:

  • Hard to scale specific features
  • Technology lock-in
  • Large codebase maintenance
  • Risky deployments

Microservices Architecture:

Advantages:

  • Independent scaling
  • Technology flexibility
  • Fault isolation
  • Small, autonomous teams
  • Easier to understand services

Disadvantages:

  • Network latency
  • Data consistency challenges
  • Complex deployment
  • Distributed system complexity
  • Need for DevOps culture

When to use:

  • Monolithic: Small team, simple application, rapid prototyping
  • Microservices: Large scale, multiple teams, need for scalability

Hybrid Approaches:

  • Modular monolith
  • Service-oriented architecture (SOA)
  • Serverless functions

21. Explain the concept of Load Balancing.

Benefits:

  1. High availability (failover)
  2. Scalability (add/remove servers)
  3. Performance (reduce individual load)
  4. Maintenance flexibility (drain connections)

Types of Load Balancers:

1. Layer 4 (Transport Layer):

  • Based on IP address and port
  • Faster, less overhead
  • No content inspection
  • Example: TCP/UDP load balancing

2. Layer 7 (Application Layer):

  • Based on HTTP headers, cookies, URL
  • Content-aware routing
  • SSL termination
  • Example: HTTP routing by path

Load Balancing Algorithms:

1. Round Robin:

  • Sequential distribution
  • Equal weight to all servers
  • Simple, fair
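
The round-robin policy is simple enough to sketch directly (illustrative class; names are our own):

```cpp
#include <string>
#include <vector>

// Cycles through a fixed server list, one request at a time.
class RoundRobin {
    std::vector<std::string> servers;
    size_t next = 0;
public:
    explicit RoundRobin(std::vector<std::string> s) : servers(std::move(s)) {}
    const std::string& pick() {
        const std::string& chosen = servers[next];
        next = (next + 1) % servers.size();   // wrap back to the first server
        return chosen;
    }
};
```

A production balancer would additionally skip servers that fail health checks, which is the mechanism described below.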

2. Weighted Round Robin:

  • Accounts for server capacity
  • More powerful servers get more requests

3. Least Connections:

  • Sends to server with fewest active connections
  • Good for long-lived connections

4. Least Response Time:

  • Considers current response time
  • Routes to fastest server

5. IP Hash:

  • Client IP determines server
  • Session persistence
  • Same client always to same server

6. Random:

  • Simple, distributes evenly over time

Health Checks:

  • Periodic checks to detect failures
  • Failed servers removed from rotation
  • Types: HTTP check, TCP check, custom

Session Persistence (Sticky Sessions):

  • Same user to same server
  • Cookie-based or IP-based
  • Trade-off with load distribution

22. What is Docker and how does it differ from Virtual Machines?

Container: Lightweight, standalone executable package with everything needed to run application.

Docker Architecture:

  • Docker Engine: Runtime and daemon
  • Docker Images: Read-only templates
  • Docker Containers: Running instances
  • Docker Registry: Image storage (Docker Hub)

Container vs VM:

Aspect      | Container             | Virtual Machine
------------|-----------------------|--------------------
OS          | Shares host OS kernel | Complete OS per VM
Size        | MBs                   | GBs
Boot time   | Seconds               | Minutes
Performance | Near native           | Some overhead
Isolation   | Process-level         | Hardware-level
Density     | Hundreds per host     | Tens per host
Portability | Highly portable       | Less portable

Docker Benefits:

  • Consistency: Same environment everywhere
  • Isolation: Applications don't interfere
  • Efficiency: Less overhead than VMs
  • Portability: Runs on any Docker host
  • Scalability: Easy to replicate
  • Version Control: Image versioning

Dockerfile:

  • Instructions to build image
  • Layered architecture (caching)
  • Example:
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Orchestration:

  • Docker Compose: Multi-container apps
  • Kubernetes: Container orchestration at scale

23. Explain the CAP Theorem.

C - Consistency:

  • All nodes see same data at same time
  • Read returns most recent write
  • Linearizability

A - Availability:

  • Every request receives response
  • No guarantee data is most recent
  • System remains operational

P - Partition Tolerance:

  • System continues despite network partitions
  • Messages lost between nodes
  • Required for distributed systems

The Trade-off:

CP Systems (Consistency + Partition Tolerance):

  • Sacrifice availability during partition
  • Example: HBase, MongoDB (configured), Redis Cluster
  • Use case: Banking, inventory

AP Systems (Availability + Partition Tolerance):

  • Sacrifice consistency during partition
  • Example: Cassandra, DynamoDB, CouchDB
  • Use case: Social media, analytics

CA Systems (Consistency + Availability):

  • Sacrifice partition tolerance
  • Single-node systems
  • Not truly distributed
  • Example: Traditional RDBMS (single node)

PACELC Theorem (Extended CAP):

  • If partitioned (P), choose A or C
  • Else (E), choose latency (L) or C

Practical Considerations:

  • Partitions are rare but must be handled
  • Many systems offer tunable consistency
  • Eventual consistency common in AP systems

24. What is Blockchain and how does it work?

Key Properties:

  1. Decentralized: No single point of control
  2. Immutable: Cannot alter past records
  3. Transparent: All participants see transactions
  4. Secure: Cryptographic protection

Structure:

Block Components:

  • Data: Transactions
  • Hash: Unique fingerprint of block
  • Previous Hash: Links to previous block
  • Nonce: Number used for mining
  • Timestamp

Chain:

  • Blocks linked by hashes
  • Changing one block changes its hash
  • Breaks chain integrity
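
A toy hash chain makes the integrity argument concrete (names are our own; `std::hash` stands in for a real cryptographic hash such as SHA-256, which a real blockchain would require):

```cpp
#include <string>
#include <vector>
#include <functional>

// Each block's hash covers its own data plus the previous block's hash.
struct Block {
    std::string data;
    size_t prevHash;
    size_t hash;
};

size_t hashBlock(const std::string& data, size_t prevHash) {
    return std::hash<std::string>{}(data + std::to_string(prevHash));
}

std::vector<Block> buildChain(const std::vector<std::string>& items) {
    std::vector<Block> chain;
    size_t prev = 0;                       // genesis block's predecessor
    for (const auto& d : items) {
        size_t h = hashBlock(d, prev);
        chain.push_back({d, prev, h});
        prev = h;                          // next block links to this hash
    }
    return chain;
}

// Recompute every hash; a tampered block breaks every link after it.
bool verifyChain(const std::vector<Block>& chain) {
    size_t prev = 0;
    for (const auto& b : chain) {
        if (b.prevHash != prev || hashBlock(b.data, b.prevHash) != b.hash)
            return false;
        prev = b.hash;
    }
    return true;
}
```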

Consensus Mechanisms:

1. Proof of Work (PoW):

  • Miners solve computational puzzle
  • First to solve adds block
  • Energy intensive (Bitcoin)

2. Proof of Stake (PoS):

  • Validators stake cryptocurrency
  • Randomly selected to add block
  • Energy efficient (Ethereum 2.0)

3. Delegated PoS:

  • Token holders vote for delegates
  • Delegates validate transactions
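The proof-of-work "puzzle" can be demonstrated with a toy miner that searches for a nonce whose hash starts with a given number of zero hex digits (real Bitcoin compares against a 256-bit target, not a digit prefix; the data string here is invented):

```python
import hashlib

# Toy proof-of-work: brute-force a nonce until the block hash meets
# the difficulty condition. Higher difficulty -> exponentially more work.

def mine(data: str, difficulty: int = 4) -> tuple[int, str]:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block-42", difficulty=4)
print(nonce, digest)  # digest begins with "0000"
```

Note the asymmetry that makes PoW useful: finding the nonce is expensive, but any node can verify it with a single hash.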

Smart Contracts:

  • Self-executing code on blockchain
  • Automatically enforce agreements
  • Ethereum popularized concept

Use Cases:

  • Cryptocurrency (Bitcoin, Ethereum)
  • Supply chain tracking
  • Digital identity
  • Voting systems
  • Smart contracts
  • Cross-border payments

Challenges:

  • Scalability (limited TPS)
  • Energy consumption (PoW)
  • Regulatory uncertainty
  • Complexity

25. Explain the difference between Authentication and Authorization.

Authentication verifies who you are (identity); authorization decides what you are allowed to do (permissions). Authentication always happens first, and a successfully authenticated user may still be denied by authorization.

Authentication Methods:

1. Knowledge:

  • Something you know
  • Password, PIN, security questions
  • Weakest method alone

2. Possession:

  • Something you have
  • OTP, hardware token, smart card

3. Inherence:

  • Something you are
  • Fingerprint, face, iris, voice

4. Location:

  • Where you are
  • GPS, IP address

Multi-Factor Authentication (MFA):

  • Combine two+ methods
  • Common: Password + OTP
  • Significantly improves security

Authentication Protocols:

  • Basic Auth: Base64-encoded username:password in header
  • Session-based: Server-side session storage
  • Token-based: JWT (JSON Web Tokens)
  • OAuth 2.0: Third-party authorization
  • OpenID Connect: Authentication layer on OAuth
  • SAML: Enterprise SSO

Authorization Models:

1. Role-Based Access Control (RBAC):

  • Permissions assigned to roles
  • Users assigned to roles
  • Simple, scalable

2. Attribute-Based Access Control (ABAC):

  • Policies based on attributes
  • User, resource, environment attributes
  • More flexible, complex

3. Access Control Lists (ACL):

  • List of permissions per resource
  • Fine-grained but hard to manage
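The RBAC model above reduces to two mappings, permissions-per-role and roles-per-user. A minimal sketch (role, user, and permission names are invented for illustration):

```python
# Minimal RBAC: users never hold permissions directly; they inherit
# them through their roles, which keeps administration scalable.

ROLE_PERMS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}
USER_ROLES = {"alice": {"admin"}, "bob": {"viewer"}}

def can(user: str, perm: str) -> bool:
    """Authorization check: does any of the user's roles grant perm?"""
    return any(perm in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(can("alice", "delete"))  # True
print(can("bob", "write"))     # False
```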

JWT Structure:

  • Header: Algorithm, token type
  • Payload: Claims (user data, expiry)
  • Signature: Verification

Flow:

  1. Authenticate → Receive token
  2. Send token with each request
  3. Server validates token
  4. Check authorization
  5. Allow/deny access
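The token flow above can be sketched with a hand-rolled HS256 JWT using only the standard library (for illustration only; in production use a vetted library such as PyJWT, and the secret and claims below are made up):

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"  # header.payload.signature

def verify_jwt(token: str, secret: bytes) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # Constant-time comparison prevents timing attacks on the signature
    return hmac.compare_digest(b64url(expected), sig)

token = sign_jwt({"sub": "user42", "exp": 1900000000}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))   # True
print(verify_jwt(token, b"wrong-secret"))  # False
```

A real validator would additionally check the `exp` claim and reject unexpected `alg` values.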

Company-wise Question Mapping

| Company | Favorite CS Topics | Difficulty Level |
|---------|--------------------|------------------|
| Google | Algorithms, System Design | Very High |
| Amazon | Data Structures, OOD | High |
| Microsoft | Problem solving, Coding | High |
| Apple | Algorithms, OS, Low-level | Very High |
| Meta | System Design, Algorithms | Very High |
| Netflix | Distributed Systems | Very High |
| Uber | System Design, Algorithms | High |
| Adobe | Algorithms, Product sense | High |
| Oracle | DBMS, Java, SQL | Medium-High |
| IBM | Enterprise systems | Medium-High |

Tips for CSE Students

Technical Preparation

  1. Data Structures: Arrays, Trees, Graphs, Heaps - master all
  2. Algorithms: Sorting, searching, dynamic programming, greedy
  3. System Design: Scalable architecture for experienced hires
  4. OS Concepts: Memory, processes, synchronization
  5. DBMS: SQL mastery, normalization, indexing

Coding Practice

  • LeetCode: 200+ problems (Easy/Medium focus)
  • Company-tagged questions
  • Mock interviews
  • Time yourself

System Design (For experienced)

  • Design patterns
  • Scalability concepts
  • Database design
  • Caching strategies
  • Message queues

Frequently Asked Questions (FAQs)

Q1: Do I need to know competitive programming for product company placements?

A: Helpful but not mandatory. Strong problem-solving is essential, which CP develops. Focus on data structures and algorithms. LeetCode Medium-level proficiency is generally sufficient. Some companies (Google, Meta) have higher bars.

Q2: How important is system design for freshers?

A: Usually not asked for freshers. Focus on DSA, OS, DBMS, networks. System design becomes important after 2+ years of experience. However, a basic understanding of scalability and design patterns is beneficial.

Q3: Should I specialize in frontend, backend, or full-stack?

A: For campus placements, being versatile helps. Backend roles are most common. Full-stack gives flexibility. Specialize based on interest after joining. Core CS fundamentals matter more than specific tech stack.

Q4: Which programming language should I use for interviews?

A: Use what you're most comfortable with. Python is concise and popular. Java/C++ are also widely accepted. Know your chosen language deeply - standard libraries, time complexity of operations.

Q5: How to prepare for HR/behavioral rounds at tech companies?

A: Use STAR method (Situation, Task, Action, Result). Prepare stories about: teamwork, conflict resolution, failure and learning, leadership. Research company values. Have questions ready for interviewers. Practice with mock interviews.


Best of luck with your Computer Science placements 2026!
