Operating Systems 2023

2 sections, 9 questions

Sem: V
Marks: 70
Duration: 03 Hours

MCQ Section

Q1: Multiple Choice Questions (Any Seven) [2 × 7 = 14]

Choose the correct option for each of the following (attempt any seven):

Q1(a): OS Types

A time-sharing operating system uses:

  • (i) Long-term scheduling only
  • (ii) CPU scheduling with time slices
  • (iii) No scheduling
  • (iv) Batch processing only

Answer: (ii) CPU scheduling with time slices

Solution: Time-sharing OS allocates CPU time slices to multiple users/processes for interactive use, giving the illusion of simultaneous execution.


Q1(b): Process States

Which is NOT a process state?

  • (i) Running
  • (ii) Waiting
  • (iii) Deleted
  • (iv) Ready

Answer: (iii) Deleted

Solution: Standard process states are: New, Ready, Running, Waiting (Blocked), and Terminated. "Deleted" is not a standard process state.


Q1(c): Deadlock

The number of conditions required for a deadlock is:

  • (i) 2
  • (ii) 3
  • (iii) 4
  • (iv) 5

Answer: (iii) 4

Solution: Four conditions for deadlock: Mutual Exclusion, Hold and Wait, No Preemption, and Circular Wait. All must hold simultaneously.


Q1(d): Page Replacement

FIFO page replacement may suffer from:

  • (i) Thrashing
  • (ii) Belady's anomaly
  • (iii) Starvation
  • (iv) Fragmentation

Answer: (ii) Belady's anomaly

Solution: Belady's anomaly is the counterintuitive phenomenon where increasing the number of page frames can increase page faults in FIFO replacement.


Q1(e): CPU Scheduling

In SJF scheduling, which job is selected first?

  • (i) Longest job
  • (ii) Shortest job
  • (iii) Most recent job
  • (iv) Random job

Answer: (ii) Shortest job

Solution: Shortest Job First (SJF) selects the process with the smallest next CPU burst time, minimizing average waiting time.


Q1(f): File System

Contiguous allocation of files suffers from:

  • (i) External fragmentation
  • (ii) Internal fragmentation only
  • (iii) No fragmentation
  • (iv) Page fault

Answer: (i) External fragmentation

Solution: Contiguous allocation requires files to occupy consecutive disk blocks. When files are deleted, gaps form that may be too small for new files — external fragmentation.


Q1(g): Semaphore

A binary semaphore can take values:

  • (i) Any integer
  • (ii) 0 and 1 only
  • (iii) 0 to n
  • (iv) -1 and 1

Answer: (ii) 0 and 1 only

Solution: A binary semaphore (mutex) can only have values 0 or 1, used for mutual exclusion. A counting semaphore can take any non-negative integer value.


Q1(h): Memory Management

Internal fragmentation occurs in:

  • (i) Segmentation
  • (ii) Paging
  • (iii) Dynamic partitioning
  • (iv) None of the above

Answer: (ii) Paging

Solution: In paging, the last page of a process may not completely fill its frame, wasting space within the allocated frame — internal fragmentation.


Q1(i): Disk Scheduling

The SCAN disk scheduling algorithm is also known as:

  • (i) FIFO algorithm
  • (ii) Elevator algorithm
  • (iii) Round Robin
  • (iv) Priority algorithm

Answer: (ii) Elevator algorithm

Solution: SCAN moves the disk arm in one direction servicing requests, then reverses — similar to an elevator moving up and down a building.


Q1(j): System Call

The fork() system call creates:

  • (i) A new file
  • (ii) A new process
  • (iii) A new thread
  • (iv) A new directory

Answer: (ii) A new process

Solution: fork() creates a new child process that is a copy of the parent process. It returns 0 in the child and the child's PID in the parent.
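
For illustration (not part of the original answer), a minimal C sketch of this behavior on a POSIX system:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();          /* create a child process */
    if (pid < 0) {
        perror("fork");          /* fork failed */
        return 1;
    } else if (pid == 0) {
        printf("Child: my PID is %d\n", getpid());
    } else {
        printf("Parent: child's PID is %d\n", pid);
        wait(NULL);              /* reap the child */
    }
    return 0;
}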


Long Answer Questions

Q2: Process Management [7 + 7 = 14]

(a) Explain the concept of process. Describe the process state diagram with all transitions.

Answer:

Process: A program in execution. It includes the program code, current activity (program counter), stack, data section, and heap.

Process Control Block (PCB): Contains: Process ID, process state, program counter, CPU registers, memory management info, I/O status, scheduling info.

Process States:

  1. New: Process is being created
  2. Ready: Process is waiting to be assigned to CPU
  3. Running: Instructions are being executed
  4. Waiting (Blocked): Process is waiting for an event (I/O completion)
  5. Terminated: Process has finished execution

State Transitions:

    New →(admitted)→ Ready ←→ Running →(exit)→ Terminated
                        ↑         |
                        |    (I/O or event wait)
                        |         ↓
                    (I/O done) Waiting
  1. New → Ready: Process admitted to ready queue (long-term scheduler)
  2. Ready → Running: Process selected by CPU scheduler (dispatch)
  3. Running → Ready: Time quantum expires (preemption) or higher priority process arrives
  4. Running → Waiting: Process requests I/O or waits for event
  5. Waiting → Ready: I/O completed or event occurred
  6. Running → Terminated: Process completes execution or is killed

Context Switch: Saving the state of one process and loading another's state when switching CPU between processes. Overhead depends on hardware support.


(b) Explain Process Scheduling. Compare Preemptive and Non-preemptive scheduling.

Answer:

Process Scheduling: The activity of selecting which process should run on the CPU. The scheduler aims to maximize CPU utilization and throughput while minimizing waiting time and turnaround time.

Scheduling Queues:

  1. Job queue: All processes in the system
  2. Ready queue: Processes in memory, ready to run
  3. Device queue: Processes waiting for I/O

Types of Schedulers:

  1. Long-term (Job): Controls degree of multiprogramming
  2. Short-term (CPU): Selects next process to run
  3. Medium-term: Swapping (temporarily removes processes from memory)

Preemptive vs Non-Preemptive:

| Aspect | Preemptive | Non-Preemptive |
|---|---|---|
| Definition | Running process can be interrupted | Process runs until it completes or voluntarily yields |
| CPU allocation | Can be taken away | Held until process finishes or blocks |
| Context switches | More frequent | Less frequent |
| Overhead | Higher | Lower |
| Response time | Better | Worse |
| Starvation | Less likely | Possible |
| Throughput | Higher | Lower |
| Complexity | More complex | Simpler |
| Examples | Round Robin, SRTF, Priority (preemptive) | FCFS, SJF, Priority (non-preemptive) |
| Suitable for | Time-sharing, real-time systems | Batch systems |

Scheduling Criteria:

  1. CPU utilization (maximize)
  2. Throughput (maximize)
  3. Turnaround time (minimize)
  4. Waiting time (minimize)
  5. Response time (minimize)

Q3: CPU Scheduling Algorithms [7 + 7 = 14]

(a) Explain FCFS and SJF scheduling with example. Calculate average waiting time.

Answer:

Given processes:

| Process | Arrival Time | Burst Time |
|---|---|---|
| P1 | 0 | 6 |
| P2 | 1 | 8 |
| P3 | 2 | 7 |
| P4 | 3 | 3 |

FCFS (First Come First Served): Order: P1 → P2 → P3 → P4

| Process | Start | End | Waiting Time | Turnaround Time |
|---|---|---|---|---|
| P1 | 0 | 6 | 0 | 6 |
| P2 | 6 | 14 | 5 | 13 |
| P3 | 14 | 21 | 12 | 19 |
| P4 | 21 | 24 | 18 | 21 |

Average Waiting Time = (0+5+12+18)/4 = 35/4 = 8.75
Average Turnaround Time = (6+13+19+21)/4 = 59/4 = 14.75

SJF (Shortest Job First, non-preemptive):

  • At t=0: only P1 has arrived → run P1 (burst = 6)
  • At t=6: P2 (8), P3 (7), P4 (3) are waiting → run P4 (shortest)
  • At t=9: P2 (8), P3 (7) remain → run P3
  • At t=16: only P2 remains → run P2

| Process | Start | End | Waiting Time | Turnaround Time |
|---|---|---|---|---|
| P1 | 0 | 6 | 0 | 6 |
| P4 | 6 | 9 | 3 | 6 |
| P3 | 9 | 16 | 7 | 14 |
| P2 | 16 | 24 | 15 | 23 |

Average Waiting Time = (0+3+7+15)/4 = 25/4 = 6.25
Average Turnaround Time = (6+6+14+23)/4 = 49/4 = 12.25

SJF gives better average waiting time than FCFS.
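
As a sketch (not part of the original answer), the FCFS averages above can be checked with a few lines of C; the arrival/burst arrays are taken from the table, and processes are assumed to be indexed in arrival order:

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3};
    int burst[]   = {6, 8, 7, 3};
    int n = 4, time = 0;
    double total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {           /* FCFS: serve in arrival order */
        if (time < arrival[i]) time = arrival[i];
        int wait = time - arrival[i];       /* waiting = start - arrival */
        int tat  = wait + burst[i];         /* turnaround = waiting + burst */
        total_wait += wait;
        total_tat  += tat;
        time += burst[i];
    }
    printf("Avg waiting = %.2f, Avg turnaround = %.2f\n",
           total_wait / n, total_tat / n);  /* prints 8.75 and 14.75 */
    return 0;
}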


(b) Explain Round Robin scheduling with time quantum = 4 using the same processes.

Answer:

Round Robin (Time Quantum = 4):

Gantt Chart:

| P1 | P2 | P3 | P4 | P1 | P2 | P3 |
0    4    8   12   15   17   21   24

Ready queue progression (TQ = 4):

  • t=0: P1 arrives; run P1 [0-4], remaining = 2 (P2 arrives at t=1, P3 at t=2, P4 at t=3)
  • t=4: Queue: P2, P3, P4, P1(rem=2). Run P2 [4-8], rem = 4
  • t=8: Queue: P3, P4, P1, P2(rem=4). Run P3 [8-12], rem = 3
  • t=12: Queue: P4, P1, P2, P3(rem=3). Run P4 [12-15], done
  • t=15: Queue: P1, P2, P3. Run P1 [15-17], done
  • t=17: Queue: P2, P3. Run P2 [17-21], done
  • t=21: Queue: P3. Run P3 [21-24], done

| Process | Completion | Turnaround | Waiting |
|---|---|---|---|
| P1 | 17 | 17 - 0 = 17 | 17 - 6 = 11 |
| P2 | 21 | 21 - 1 = 20 | 20 - 8 = 12 |
| P3 | 24 | 24 - 2 = 22 | 22 - 7 = 15 |
| P4 | 15 | 15 - 3 = 12 | 12 - 3 = 9 |

Average Waiting Time = (11+12+15+9)/4 = 47/4 = 11.75
Average Turnaround Time = (17+20+22+12)/4 = 71/4 = 17.75

Characteristics of Round Robin:

  • Fair allocation — each process gets equal CPU time
  • Good response time for interactive systems
  • Performance depends on time quantum selection
  • Large quantum → becomes FCFS
  • Small quantum → high context switch overhead
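
For reference, a small C simulation (a sketch, assuming new arrivals are enqueued before a preempted process re-enters the queue, as in the timeline above) reproduces these numbers:

#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3}, burst[] = {6, 8, 7, 3};
    int rem[] = {6, 8, 7, 3}, completion[4];
    int n = 4, tq = 4, time = 0, done = 0;
    int queue[64], head = 0, tail = 0, next;

    queue[tail++] = 0;                         /* P1 arrives at t=0 */
    next = 1;
    while (done < n) {
        int i = queue[head++];
        int run = rem[i] < tq ? rem[i] : tq;   /* run for quantum or remainder */
        time += run;
        rem[i] -= run;
        while (next < n && arrival[next] <= time)
            queue[tail++] = next++;            /* enqueue new arrivals first */
        if (rem[i] > 0) queue[tail++] = i;     /* preempted: back of queue */
        else { completion[i] = time; done++; }
    }
    for (int i = 0; i < n; i++)
        printf("P%d: completion=%d turnaround=%d waiting=%d\n", i + 1,
               completion[i], completion[i] - arrival[i],
               completion[i] - arrival[i] - burst[i]);
    return 0;
}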

Q4: Deadlocks [7 + 7 = 14]

(a) Explain the four necessary conditions for deadlock. Describe the Resource Allocation Graph.

Answer:

Four Necessary Conditions for Deadlock: All four must hold simultaneously:

1. Mutual Exclusion:

  • At least one resource must be non-sharable
  • Only one process can use the resource at a time
  • Example: Printer, tape drive

2. Hold and Wait:

  • A process holds at least one resource while waiting for additional resources
  • Process doesn't release held resources while waiting

3. No Preemption:

  • Resources cannot be forcibly taken from a process
  • A process can only release resources voluntarily
  • Must finish using the resource first

4. Circular Wait:

  • A circular chain of processes exists
  • Each process waits for a resource held by the next process
  • P₁ → P₂ → P₃ → ... → Pₙ → P₁

Resource Allocation Graph (RAG): A directed graph to model resource allocation:

  • Process node: Circle (○)
  • Resource node: Rectangle (□) with dots for instances
  • Request edge: Process → Resource (process is waiting)
  • Assignment edge: Resource → Process (resource allocated)

Deadlock Detection using RAG:

  • If graph has NO cycle → No deadlock
  • If graph has cycle:
    • Single instance per resource type → Deadlock
    • Multiple instances → May or may not be deadlock

Example:

P1 → R1 → P2 → R2 → P1  (Cycle = Deadlock!)

(b) Explain Banker's Algorithm for deadlock avoidance with an example.

Answer:

Banker's Algorithm: Determines if granting a resource request will leave the system in a safe state. If yes, grant; otherwise, deny.

Data Structures:

  • Available[j]: Number of available instances of resource j
  • Max[i,j]: Maximum demand of process i for resource j
  • Allocation[i,j]: Currently allocated instances
  • Need[i,j]: Max[i,j] - Allocation[i,j]

Safety Algorithm:

  1. Let Work = Available and Finish[i] = false for all i
  2. Find a process i with Finish[i] = false and Need[i] ≤ Work; if none exists, go to step 4
  3. Work = Work + Allocation[i], Finish[i] = true; go to step 2
  4. If Finish[i] = true for all i, the system is in a safe state

Example: 5 processes (P0-P4), 3 resources (A=10, B=5, C=7)

| Process | Allocation (A B C) | Max (A B C) | Need (A B C) |
|---|---|---|---|
| P0 | 0 1 0 | 7 5 3 | 7 4 3 |
| P1 | 2 0 0 | 3 2 2 | 1 2 2 |
| P2 | 3 0 2 | 9 0 2 | 6 0 0 |
| P3 | 2 1 1 | 2 2 2 | 0 1 1 |
| P4 | 0 0 2 | 4 3 3 | 4 3 1 |

Available = (10-7, 5-2, 7-5) = (3, 3, 2)

Safety check:

  1. P1: Need(1,2,2) ≤ Available(3,3,2) ✓ → Work = (5,3,2)
  2. P3: Need(0,1,1) ≤ (5,3,2) ✓ → Work = (7,4,3)
  3. P4: Need(4,3,1) ≤ (7,4,3) ✓ → Work = (7,4,5)
  4. P0: Need(7,4,3) ≤ (7,4,5) ✓ → Work = (7,5,5)
  5. P2: Need(6,0,0) ≤ (7,5,5) ✓ → Work = (10,5,7)

Safe sequence: <P1, P3, P4, P0, P2>
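
A C sketch of the safety check above (Allocation, Need, and Available are hard-coded from the example):

#include <stdio.h>
#include <stdbool.h>

#define N 5   /* processes */
#define M 3   /* resource types */

int main(void) {
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M]  = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
    int work[M] = {3, 3, 2};                 /* Available */
    bool finish[N] = {false};
    int seq[N], count = 0;

    while (count < N) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                        /* Need[i] <= Work: run, then release */
                for (int j = 0; j < M; j++) work[j] += alloc[i][j];
                finish[i] = true;
                seq[count++] = i;
                progress = true;
            }
        }
        if (!progress) { printf("Unsafe state\n"); return 1; }
    }
    printf("Safe sequence:");
    for (int i = 0; i < N; i++) printf(" P%d", seq[i]);
    printf("\n");                            /* prints P1 P3 P4 P0 P2 */
    return 0;
}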


Q5: Memory Management [7 + 7 = 14]

(a) Explain paging. How does address translation work in paging?

Answer:

Paging: A memory management scheme that permits the physical address space to be non-contiguous. It divides:

  • Logical memory: Into fixed-size pages
  • Physical memory: Into fixed-size frames (same size as pages)
  • Typical page size: 4 KB

Address Translation:

A logical address is divided into:

  • Page number (p): Index into page table
  • Page offset (d): Offset within the page

For logical address space 2^m and page size 2^n:

  • Page number = m-n bits
  • Page offset = n bits

Translation process:

  1. Extract page number (p) from logical address
  2. Look up frame number (f) in page table using p as index
  3. Physical address = f × page_size + d

Example:

  • Logical address: 11 (binary: 1011)
  • Page size: 4 bytes (2 bits for offset)
  • Page number = 10 (2), Offset = 11 (3)
  • If page table maps page 2 → frame 5
  • Physical address = 5 × 4 + 3 = 23
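
A tiny C sketch of this translation (the page table below is hypothetical apart from the page 2 → frame 5 mapping taken from the example):

#include <stdio.h>

int main(void) {
    int page_table[] = {1, 4, 5, 7};         /* only page 2 -> frame 5 is from the example */
    unsigned logical = 11;                   /* binary 1011 */
    unsigned page_size = 4;                  /* 2 offset bits */

    unsigned p = logical / page_size;        /* page number = 2 */
    unsigned d = logical % page_size;        /* offset      = 3 */
    unsigned physical = page_table[p] * page_size + d;
    printf("page=%u offset=%u physical=%u\n", p, d, physical);  /* 2 3 23 */
    return 0;
}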

Page Table:

  • One entry per page
  • Contains frame number
  • Stored in main memory
  • Page Table Base Register (PTBR) points to page table

TLB (Translation Lookaside Buffer):

  • Cache for page table entries
  • Speeds up address translation
  • TLB hit: 1 memory access
  • TLB miss: 2 memory accesses (page table + actual data)

(b) Explain segmentation. Compare paging and segmentation.

Answer:

Segmentation: A memory management scheme that supports user's view of memory. A program is divided into variable-sized segments based on logical units (main program, functions, stack, data).

Segment Table:

  • Base: Starting physical address
  • Limit: Length of segment
  • One entry per segment

Address Translation: Logical address = (segment number, offset)

  1. Look up segment in segment table
  2. Check: offset < limit (if not, trap)
  3. Physical address = base + offset
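
A minimal C sketch of this lookup and limit check (the base/limit values are illustrative, not from the question):

#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base, limit; };

unsigned translate(const struct segment *table, unsigned s, unsigned offset) {
    if (offset >= table[s].limit) {          /* protection check: offset < limit */
        fprintf(stderr, "trap: offset out of range for segment %u\n", s);
        exit(1);
    }
    return table[s].base + offset;           /* physical = base + offset */
}

int main(void) {
    struct segment table[] = {{1400, 1000}, {6300, 400}};  /* illustrative */
    printf("%u\n", translate(table, 1, 53));               /* 6300 + 53 = 6353 */
    return 0;
}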

Comparison:

| Feature | Paging | Segmentation |
|---|---|---|
| Size | Fixed (pages) | Variable (segments) |
| View | Physical (OS view) | Logical (user view) |
| Address | 1D (linear address, split into page + offset) | 2D (segment, offset) |
| Fragmentation | Internal | External |
| Table | Page table | Segment table |
| Sharing | Difficult | Easy (share segments) |
| Protection | Per page | Per segment (more natural) |
| User visible | No | Yes |
| Replacement | Simple | Complex |

Paged Segmentation: Combines both approaches:

  • Program divided into segments (logical view)
  • Each segment divided into pages (physical management)
  • Eliminates external fragmentation while maintaining logical structure
  • Used in Intel x86 architecture

Q6: Virtual Memory [7 + 7 = 14]

(a) Explain demand paging. Describe page fault handling mechanism.

Answer:

Demand Paging: A technique where pages are loaded into memory only when needed (on demand), not at process start. A page is brought into memory only when it is referenced.

Key Features:

  • Lazy loading — pages loaded only on access
  • Valid-invalid bit in page table (valid = in memory, invalid = on disk)
  • Requires backing store (swap space)
  • Enables running programs larger than physical memory

Page Fault Handling:

  1. Reference: Process references a memory location
  2. Trap: If page not in memory (invalid bit), hardware generates page fault trap
  3. Validate: OS checks if reference is valid or invalid
    • Invalid reference → Terminate process
    • Valid but page not in memory → Continue
  4. Find frame: Locate a free frame in physical memory
    • If no free frame → page replacement algorithm
  5. Disk I/O: Schedule disk operation to read page into frame
  6. Update page table: Set frame number and valid bit
  7. Restart instruction: Resume process from the instruction that caused the fault

Performance:

  • Effective Access Time (EAT) = (1-p) × ma + p × page_fault_time
  • Where p = page fault rate, ma = memory access time
  • page_fault_time includes: service interrupt + read page + restart process
  • Typical: ma = 200ns, page fault time = 8ms
  • Even p = 0.001: EAT = 8.2 μs (40× slowdown!)
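
The arithmetic can be checked with a short C snippet (values from the bullets above):

#include <stdio.h>

int main(void) {
    double ma  = 200e-9;         /* 200 ns memory access time   */
    double pft = 8e-3;           /* 8 ms page fault service time */
    double p   = 0.001;          /* page fault rate              */
    double eat = (1 - p) * ma + p * pft;
    printf("EAT = %.1f us (%.0fx slowdown)\n", eat * 1e6, eat / ma);
    /* prints 8.2 us, about a 40x (precisely 41x) slowdown */
    return 0;
}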

(b) Explain page replacement algorithms: FIFO, Optimal, and LRU with example.

Answer:

Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
Number of frames = 3

1. FIFO (First In First Out): Replace the oldest page.

Ref:    7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
F1:     7  7  7  2  2  2  2  4  4  4  0  0  0  0  0  0  0  7  7  7
F2:        0  0  0  0  3  3  3  2  2  2  2  2  1  1  1  1  1  0  0
F3:           1  1  1  1  0  0  0  3  3  3  3  3  2  2  2  2  2  1
Fault:  *  *  *  *     *  *  *  *  *  *        *  *        *  *  *

Page faults = 15

2. Optimal (OPT): Replace the page that won't be used for the longest time.

Ref:    7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
F1:     7  7  7  2  2  2  2  2  2  2  2  2  2  2  2  2  2  7  7  7
F2:        0  0  0  0  0  0  4  4  4  0  0  0  0  0  0  0  0  0  0
F3:           1  1  1  3  3  3  3  3  3  3  3  1  1  1  1  1  1  1
Fault:  *  *  *  *     *     *        *        *           *

Page faults = 9 (the theoretical minimum)

3. LRU (Least Recently Used): Replace the page that hasn't been used for the longest time.

Ref:    7  0  1  2  0  3  0  4  2  3  0  3  2  1  2  0  1  7  0  1
F1:     7  7  7  2  2  2  2  4  4  4  0  0  0  1  1  1  1  1  1  1
F2:        0  0  0  0  0  0  0  0  3  3  3  3  3  3  0  0  0  0  0
F3:           1  1  1  3  3  3  2  2  2  2  2  2  2  2  2  7  7  7
Fault:  *  *  *  *     *     *  *  *  *        *     *     *

Page faults = 12

Summary: OPT(9) < LRU(12) < FIFO(15)
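
The FIFO fault count can be verified with a short C sketch (a round-robin victim pointer is equivalent to a FIFO queue here, since frames change only on faults):

#include <stdio.h>

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof ref / sizeof ref[0];
    int frames[3] = {-1, -1, -1};
    int victim = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == ref[i]) { hit = 1; break; }
        if (!hit) {
            frames[victim] = ref[i];     /* replace the oldest page */
            victim = (victim + 1) % 3;   /* FIFO: victims cycle in load order */
            faults++;
        }
    }
    printf("FIFO page faults = %d\n", faults);   /* prints 15 */
    return 0;
}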


Q7: File System [7 + 7 = 14]

(a) Explain file allocation methods: contiguous, linked, and indexed.

Answer:

1. Contiguous Allocation: Each file occupies a contiguous set of blocks on disk.

Directory entry: (file name, start block, length)

Advantages:

  • Simple implementation
  • Fast sequential and random access
  • Minimal disk head movement

Disadvantages:

  • External fragmentation
  • Difficult to grow files
  • Need to know file size at creation

2. Linked Allocation: Each file is a linked list of disk blocks scattered anywhere.

Directory entry: (file name, start pointer, end pointer). Each block contains a pointer to the next block.

Advantages:

  • No external fragmentation
  • Files can grow easily
  • No need to know size at creation

Disadvantages:

  • No random access (must traverse from beginning)
  • Pointer space overhead
  • Reliability issues (if pointer is lost/damaged)

FAT (File Allocation Table): Variation that keeps all pointers in a separate table for faster access.


3. Indexed Allocation: Each file has an index block containing pointers to all data blocks.

Directory entry: (file name, index block number). The index block contains an array of disk block addresses.

Advantages:

  • Supports random access
  • No external fragmentation
  • Files can grow easily

Disadvantages:

  • Index block overhead (even small files need full index block)
  • Index block size limits file size
  • Solution: Linked scheme, multilevel index, combined scheme (Unix inode)

(b) Explain directory structure and its types.

Answer:

Directory: A data structure containing information about files (name, type, size, location, timestamps, permissions).

Operations: Search, create, delete, list, rename, traverse

Types of Directory Structures:

1. Single-Level Directory:

  • All files in one directory
  • Simple but naming conflicts
  • No grouping capability
  • Not practical for multiple users

2. Two-Level Directory:

  • Separate directory for each user
  • User File Directory (UFD) under Master File Directory (MFD)
  • Solves naming conflict
  • Limited grouping

3. Tree-Structured Directory:

  • Hierarchy of directories
  • Efficient searching and grouping
  • Absolute and relative paths
  • Current directory concept
  • Most common structure

4. Acyclic-Graph Directory:

  • Shared files and subdirectories
  • Links (hard links, symbolic links)
  • Multiple paths to same file
  • Complexity in deletion (reference counting)

5. General Graph Directory:

  • Allows cycles
  • More flexible but complex
  • Garbage collection needed
  • Risk of infinite loops during traversal

Path Names:

  • Absolute path: /home/user/documents/file.txt (from root)
  • Relative path: documents/file.txt (from current directory)

Q8: Synchronization [7 + 7 = 14]

(a) Explain the Critical Section Problem. Describe Peterson's solution.

Answer:

Critical Section Problem: When multiple processes access shared data concurrently, the outcome depends on the order of execution — called a race condition.

Critical Section: Code segment where shared resources are accessed.

Requirements for solution:

  1. Mutual Exclusion: Only one process may be in its CS at a time
  2. Progress: If no process is in its CS and some processes wish to enter, the selection of the next entrant cannot be postponed indefinitely
  3. Bounded Waiting: There is a bound on the number of times other processes may enter the CS after a process has requested entry and before that request is granted

Peterson's Solution (2 processes):

// Shared variables
int turn;            // Whose turn to enter CS
bool flag[2];        // Intention to enter CS

// Process Pi (i = 0 or 1, j = 1-i)
do {
    flag[i] = true;       // I want to enter
    turn = j;             // Give chance to other
    while (flag[j] && turn == j)
        ;                 // Wait (busy waiting)
    
    // CRITICAL SECTION
    
    flag[i] = false;      // Done, leaving CS
    
    // REMAINDER SECTION
} while (true);

Proof of correctness:

  1. Mutual Exclusion: Both P0 and P1 can't be in CS simultaneously. If both flag[0] and flag[1] are true, turn can only be 0 or 1, so one must wait. ✓

  2. Progress: If Pj is not trying to enter (flag[j]=false), Pi enters immediately. ✓

  3. Bounded Waiting: After Pi sets turn=j, if Pj enters CS, it sets flag[j]=false upon exit, allowing Pi to enter. At most one wait. ✓


(b) Explain Semaphores. Solve the Producer-Consumer problem using semaphores.

Answer:

Semaphore: An integer variable accessed through two atomic operations:

  • wait(S): (P operation) Decrement S; if S < 0, block
  • signal(S): (V operation) Increment S; if S ≤ 0, wake up blocked process

Types:

  • Binary semaphore: Values 0 or 1 (mutex)
  • Counting semaphore: Any non-negative integer

Producer-Consumer Problem (Bounded Buffer):

// Shared variables
int buffer[N];          // Buffer of size N
semaphore mutex = 1;    // Mutual exclusion
semaphore empty = N;    // Empty slots (initially N)
semaphore full = 0;     // Full slots (initially 0)

// Producer
do {
    produce item;
    
    wait(empty);        // Wait for empty slot
    wait(mutex);        // Enter critical section
    
    add item to buffer;
    
    signal(mutex);      // Leave critical section
    signal(full);       // Signal a full slot
} while (true);

// Consumer
do {
    wait(full);         // Wait for full slot
    wait(mutex);        // Enter critical section
    
    remove item from buffer;
    
    signal(mutex);      // Leave critical section
    signal(empty);      // Signal an empty slot
    
    consume item;
} while (true);

How it works:

  • empty tracks available buffer slots (producer waits when buffer full)
  • full tracks items in buffer (consumer waits when buffer empty)
  • mutex ensures mutual exclusion on buffer access
  • The order of the wait operations matters: if the producer called wait(mutex) before wait(empty), it could block holding the mutex when the buffer is full → deadlock
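
For reference, a runnable POSIX-threads version of the same pattern (a sketch assuming Linux, where unnamed POSIX semaphores are available; compile with -pthread):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5
int buffer[N];
int in = 0, out = 0;
sem_t empty_slots, full_slots, mutex;

void *producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty_slots);              /* wait for an empty slot */
        sem_wait(&mutex);                    /* lock the buffer */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);               /* announce a full slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full_slots);               /* wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);              /* announce an empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);            /* N empty slots initially */
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}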

Q9: Short Notes (Any Two) [7 × 2 = 14]

(a) Thrashing

Answer:

Thrashing occurs when a process spends more time paging (swapping pages in and out) than executing. The system becomes extremely slow.

Cause:

  • When degree of multiprogramming is too high
  • Processes don't have enough frames
  • Page fault → replace page → that page needed soon → another page fault → cycle

Working Set Model:

  • Working set = set of pages referenced in a recent time window Δ
  • If allocated frames ≥ working set size → few page faults
  • If allocated frames < working set size → thrashing

Solutions:

  1. Working set model for frame allocation
  2. Page fault frequency (PFF) monitoring
  3. Reduce degree of multiprogramming
  4. Suspend/swap out processes
  5. Increase physical memory

(b) Disk Scheduling Algorithms

Answer:

Goal: Minimize head movement (seek time) for disk I/O requests.

Algorithms:

1. FCFS: Service requests in arrival order. Simple but high seek time.

2. SSTF (Shortest Seek Time First): Select request closest to current head position. Better but may cause starvation.

3. SCAN (Elevator): Move head in one direction, servicing requests, then reverse. No starvation, uniform wait time.

4. C-SCAN (Circular SCAN): Like SCAN but returns to beginning without servicing on return. More uniform wait time.

5. LOOK: Like SCAN but reverses at last request, not disk end.

Example: Head at 50, requests: 82, 170, 43, 140, 24, 16, 190

  • FCFS: 50→82→170→43→140→24→16→190 (total = 642)
  • SSTF: 50→43→24→16→82→140→170→190 (total = 208)
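
The FCFS total above can be checked with a few lines of C (the SSTF order can be summed the same way by reordering the array):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int head = 50, total = 0;
    for (int i = 0; i < 7; i++) {
        total += abs(requests[i] - head);    /* seek distance to next request */
        head = requests[i];
    }
    printf("FCFS total seek = %d\n", total); /* prints 642 */
    return 0;
}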

(c) Inter-Process Communication (IPC)

Answer:

IPC mechanisms allow processes to communicate and synchronize actions.

Types:

1. Shared Memory:

  • Processes share a region of memory
  • Fast communication (no kernel involvement after setup)
  • Producer-consumer model
  • Need synchronization (semaphores, mutexes)

2. Message Passing:

  • Processes exchange messages via send/receive
  • No shared variables needed
  • Direct communication: send(P, message), receive(Q, message)
  • Indirect communication: via mailboxes

3. Pipes:

  • Unidirectional communication channel
  • Named pipes: persist in filesystem
  • Anonymous pipes: parent-child communication

4. Signals:

  • Asynchronous notification mechanism
  • Limited information (signal number only)
  • Example: SIGKILL, SIGTERM, SIGINT

5. Sockets:

  • Communication endpoint
  • Can be used for network communication
  • Client-server model
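
For illustration, a minimal C sketch combining fork() and an anonymous pipe on a POSIX system (parent writes, child reads):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                         /* fd[0] = read end, fd[1] = write end */
    char buf[32];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                 /* child: reads */
        close(fd[1]);                  /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n < 0) return 1;
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }
    close(fd[0]);                      /* parent: writes */
    write(fd[1], "hello", strlen("hello"));
    close(fd[1]);
    wait(NULL);
    return 0;
}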

(d) RAID Levels

Answer:

RAID (Redundant Array of Independent Disks): Combines multiple disks for performance, capacity, or reliability.

| Level | Description | Min Disks | Redundancy |
|---|---|---|---|
| RAID 0 | Striping | 2 | None |
| RAID 1 | Mirroring | 2 | Full copy |
| RAID 5 | Striping + Distributed Parity | 3 | Tolerates 1 disk failure |
| RAID 6 | Striping + Double Parity | 4 | Tolerates 2 disk failures |
| RAID 10 | Mirroring + Striping | 4 | Mirror copies |
  • RAID 0: High performance, no fault tolerance
  • RAID 1: High reliability, 50% capacity overhead
  • RAID 5: Good balance of performance and reliability; the most common
  • RAID 10: Best performance with redundancy, but expensive
