Arrays, Linked Lists, Stacks, and Queues

The algorithms of the preceding chapters operate on arrays — contiguous blocks of memory indexed by integers. Arrays are powerful but they are only one of many ways to organize data. In this chapter we study the fundamental data structures that underpin nearly all of computer science: dynamic arrays, linked lists, stacks, queues, and deques. Each offers a different set of trade-offs between time complexity, memory usage, and flexibility. Understanding these structures deeply is essential, because every higher-level data structure — from hash tables to balanced trees to graphs — is built on top of them.

Arrays

An array is the simplest data structure: a contiguous block of memory divided into equal-sized slots, each identified by an integer index. Accessing any element by its index takes O(1) time, because the memory address can be computed directly: if the array starts at address b and each element occupies s bytes, then element i lives at address b + i · s.
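The address arithmetic can be sketched directly. The base address and element size below are made-up values for illustration:

```typescript
// Hypothetical base address and element size, chosen for illustration.
const base = 0x1000; // array starts at address 0x1000
const elemSize = 8;  // each element occupies 8 bytes

// Address of element i: base + i * elemSize
const addressOf = (i: number): number => base + i * elemSize;

addressOf(0); // 0x1000
addressOf(3); // 0x1018 (0x1000 + 3 * 8)
```

This is why the lookup cost does not depend on the array's length: it is a single multiplication and addition, no matter which index is requested.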

This direct addressing makes arrays extremely efficient for random access. However, arrays have a fundamental limitation: their size is fixed at creation time. If we need to store more elements than the array can hold, we must allocate a new, larger array and copy all existing elements — an O(n) operation.

Static arrays in TypeScript

TypeScript (and JavaScript) arrays are actually dynamic — they resize automatically behind the scenes. But to understand the foundations, imagine a fixed-size array:

const fixed = new Array<number>(10); // 10 slots, all undefined
fixed[0] = 42;
fixed[9] = 99;
// fixed[10] would be out of bounds in a true static array

In languages like C or Java, going beyond the allocated size is either a compile-time error or a runtime crash. JavaScript's built-in arrays hide this complexity, but the cost of resizing is still there — it is just managed for us. Let us see how.

Dynamic arrays

A dynamic array maintains an internal buffer that is larger than the number of elements currently stored. When the buffer fills up, the array allocates a new buffer of double the size and copies all elements over. This doubling strategy gives us O(1) amortized appends while keeping worst-case access at O(1).

The doubling strategy

Suppose our dynamic array has capacity c and currently holds n elements. When we append a new element:

  • If n < c: store the element in slot n. Cost: O(1).
  • If n = c: allocate a new buffer of size 2c, copy all n elements, then store the new element. Cost: O(n).

The key insight is that expensive copies happen rarely. After a copy doubles the capacity to 2c, we can perform another c cheap appends before the next copy. This is the essence of amortized analysis.

Amortized analysis of append

We use the aggregate method. Starting from an empty array with initial capacity 1, suppose we perform n appends. Copies happen when the size reaches 1, 2, 4, 8, ..., up to the largest power of 2 below n. The total number of element copies across all resizes is:

1 + 2 + 4 + ... + 2^⌊lg n⌋ < 2n

So the total cost of n appends is at most n (for the stores) plus 2n (for all the copies), giving O(n) total. The amortized cost per append is therefore O(n) / n = O(1).
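We can check this bound empirically. The counting harness below is our own sketch, not part of the chapter's DynamicArray; it simulates n appends under the doubling strategy and tallies every element copy:

```typescript
// Count total element copies performed over n appends with doubling,
// starting from capacity 1.
function totalCopies(n: number): number {
  let capacity = 1;
  let size = 0;
  let copies = 0;
  for (let i = 0; i < n; i++) {
    if (size === capacity) {
      copies += size; // resizing copies every existing element
      capacity *= 2;
    }
    size++;
  }
  return copies;
}

totalCopies(8);    // 7  (copies at sizes 1, 2, 4)
totalCopies(1000); // 1023, comfortably below 2 * 1000
```

For any n, the result is the geometric sum 1 + 2 + 4 + ... below n, which never reaches 2n.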

Implementation

Our DynamicArray<T> uses a plain JavaScript array as the backing buffer, with explicit capacity management. The initial capacity defaults to 4.

export class DynamicArray<T> implements Iterable<T> {
  private data: (T | undefined)[];
  private length: number;

  constructor(initialCapacity = 4) {
    this.data = new Array<T | undefined>(initialCapacity);
    this.length = 0;
  }

  get size(): number {
    return this.length;
  }

  get capacity(): number {
    return this.data.length;
  }

  get(index: number): T {
    this.checkBounds(index);
    return this.data[index] as T;
  }

  set(index: number, value: T): void {
    this.checkBounds(index);
    this.data[index] = value;
  }

  append(value: T): void {
    if (this.length === this.data.length) {
      this.resize(this.data.length * 2);
    }
    this.data[this.length] = value;
    this.length++;
  }

  insert(index: number, value: T): void {
    if (index < 0 || index > this.length) {
      throw new RangeError(
        `Index ${index} out of bounds for size ${this.length}`
      );
    }
    if (this.length === this.data.length) {
      this.resize(this.data.length * 2);
    }
    for (let i = this.length; i > index; i--) {
      this.data[i] = this.data[i - 1];
    }
    this.data[index] = value;
    this.length++;
  }

  remove(index: number): T {
    this.checkBounds(index);
    const value = this.data[index] as T;
    for (let i = index; i < this.length - 1; i++) {
      this.data[i] = this.data[i + 1];
    }
    this.data[this.length - 1] = undefined;
    this.length--;
    if (
      this.length > 0 &&
      this.length <= this.data.length / 4 &&
      this.data.length > 4
    ) {
      this.resize(Math.max(4, Math.floor(this.data.length / 2)));
    }
    return value;
  }

  private resize(newCapacity: number): void {
    const newData = new Array<T | undefined>(newCapacity);
    for (let i = 0; i < this.length; i++) {
      newData[i] = this.data[i];
    }
    this.data = newData;
  }

  private checkBounds(index: number): void {
    if (index < 0 || index >= this.length) {
      throw new RangeError(
        `Index ${index} out of bounds for size ${this.length}`
      );
    }
  }
  // ... iterator, toArray, etc.
}

Notice that remove also implements shrinking: when occupancy falls below 25%, the buffer is halved (but never below 4). This prevents a long sequence of removals from wasting memory, and the halving threshold (1/4 rather than 1/2) avoids thrashing — a pathological pattern where alternating appends and removes near the boundary trigger repeated resizes.
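The interplay of the two thresholds can be simulated without the full class. The sketch below is ours for illustration: it tracks only size and capacity under the same grow-at-full, shrink-at-quarter policy:

```typescript
// Simulate capacity under grow-at-full, shrink-at-quarter (floor of 4).
function simulate(ops: ("add" | "del")[]): number {
  let size = 0;
  let capacity = 4;
  for (const op of ops) {
    if (op === "add") {
      if (size === capacity) capacity *= 2;
      size++;
    } else if (size > 0) {
      size--;
      if (size > 0 && size <= capacity / 4 && capacity > 4) {
        capacity = Math.max(4, Math.floor(capacity / 2));
      }
    }
  }
  return capacity;
}

const nineAdds: ("add" | "del")[] = new Array(9).fill("add");
simulate(nineAdds); // capacity grows 4 -> 8 -> 16
```

Deleting back down from 9 elements triggers no shrink until the size hits 4 (one quarter of 16), at which point the capacity halves to 8. Because growth fires at 100% occupancy and shrinking only at 25%, no single append/remove pair can trigger two resizes in a row.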

Complexity summary

Operation            Time                          Notes
get(i) / set(i, v)   O(1)                          Direct index access
append(v)            O(1) amortized, O(n) worst    Worst case occurs during a resize
insert(i, v)         O(n)                          Must shift elements right
remove(i)            O(n)                          Must shift elements left
indexOf(v)           O(n)                          Linear scan

Linked lists

A linked list stores elements in nodes that are scattered throughout memory, with each node containing a value and a pointer (reference) to the next node. Unlike arrays, linked lists do not require contiguous memory, and inserting or removing an element at a known position takes O(1) time — no shifting required.

The trade-off is that random access is lost: to reach the i-th element, we must follow i pointers from the head, taking O(n) time in the worst case.

Singly linked lists

In a singly linked list, each node points to the next node. The list maintains a pointer to the head (first node) and, for efficiency, a pointer to the tail (last node).

head → [10 | •] → [20 | •] → [30 | null]
                                ↑
                              tail

Implementation

class SinglyNode<T> {
  constructor(
    public value: T,
    public next: SinglyNode<T> | null = null,
  ) {}
}

export class SinglyLinkedList<T> implements Iterable<T> {
  private head: SinglyNode<T> | null = null;
  private tail: SinglyNode<T> | null = null;
  private length: number = 0;

  get size(): number {
    return this.length;
  }

  prepend(value: T): void {
    const node = new SinglyNode(value, this.head);
    this.head = node;
    if (this.tail === null) {
      this.tail = node;
    }
    this.length++;
  }

  append(value: T): void {
    const node = new SinglyNode(value);
    if (this.tail !== null) {
      this.tail.next = node;
    } else {
      this.head = node;
    }
    this.tail = node;
    this.length++;
  }

  removeFirst(): T | undefined {
    if (this.head === null) return undefined;
    const value = this.head.value;
    this.head = this.head.next;
    if (this.head === null) {
      this.tail = null;
    }
    this.length--;
    return value;
  }

  delete(value: T): boolean {
    if (this.head === null) return false;
    if (this.head.value === value) {
      this.head = this.head.next;
      if (this.head === null) this.tail = null;
      this.length--;
      return true;
    }
    let current = this.head;
    while (current.next !== null) {
      if (current.next.value === value) {
        if (current.next === this.tail) this.tail = current;
        current.next = current.next.next;
        this.length--;
        return true;
      }
      current = current.next;
    }
    return false;
  }

  find(value: T): boolean {
    let current = this.head;
    while (current !== null) {
      if (current.value === value) return true;
      current = current.next;
    }
    return false;
  }
  // ... iterator, toArray, etc.
}

Tracing through an example

Starting with an empty singly linked list, let us perform a sequence of operations:

Operation            List state            size
append(10)           [10]                  1
append(20)           [10] → [20]           2
prepend(5)           [5] → [10] → [20]     3
removeFirst() → 5    [10] → [20]           2
delete(20) → true    [10]                  1
append(30)           [10] → [30]           2

Notice that prepend and removeFirst are both O(1) because they only touch the head pointer. Appending is also O(1) because we maintain a tail pointer. However, delete(value) requires an O(n) linear scan.

A limitation of singly linked lists

Removing the last element is O(n) in a singly linked list, because we must traverse the entire list to find the node that precedes the tail. The doubly linked list solves this problem.
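The traversal can be sketched with plain object nodes. The helper below is our own, for illustration only; it walks to the node before the tail and unlinks the tail:

```typescript
interface SNode<T> {
  value: T;
  next: SNode<T> | null;
}

// Removing the last node of a singly linked list: we must walk from
// the head to find the node *before* the tail, an O(n) traversal.
function removeLast<T>(head: SNode<T>): SNode<T> | null {
  if (head.next === null) return null; // single node: list becomes empty
  let current = head;
  while (current.next!.next !== null) {
    current = current.next!;
  }
  current.next = null; // unlink the old tail
  return head;
}
```

On the list [1] → [2] → [3], removeLast must visit node 1 and node 2 before it can detach node 3, so the cost grows linearly with the list length.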

Doubly linked lists

In a doubly linked list, each node has pointers to both the next and previous nodes. This enables O(1) removal from both ends.

null ← [10 | •] ⇄ [20 | •] ⇄ [30 | •] → null
        ↑                        ↑
       head                    tail

Implementation

class DoublyNode<T> {
  constructor(
    public value: T,
    public prev: DoublyNode<T> | null = null,
    public next: DoublyNode<T> | null = null,
  ) {}
}

export class DoublyLinkedList<T> implements Iterable<T> {
  private head: DoublyNode<T> | null = null;
  private tail: DoublyNode<T> | null = null;
  private length: number = 0;

  get size(): number {
    return this.length;
  }

  prepend(value: T): void {
    const node = new DoublyNode(value, null, this.head);
    if (this.head !== null) {
      this.head.prev = node;
    } else {
      this.tail = node;
    }
    this.head = node;
    this.length++;
  }

  append(value: T): void {
    const node = new DoublyNode(value, this.tail, null);
    if (this.tail !== null) {
      this.tail.next = node;
    } else {
      this.head = node;
    }
    this.tail = node;
    this.length++;
  }

  removeFirst(): T | undefined {
    if (this.head === null) return undefined;
    const value = this.head.value;
    this.head = this.head.next;
    if (this.head !== null) {
      this.head.prev = null;
    } else {
      this.tail = null;
    }
    this.length--;
    return value;
  }

  removeLast(): T | undefined {
    if (this.tail === null) return undefined;
    const value = this.tail.value;
    this.tail = this.tail.prev;
    if (this.tail !== null) {
      this.tail.next = null;
    } else {
      this.head = null;
    }
    this.length--;
    return value;
  }

  private removeNode(node: DoublyNode<T>): void {
    if (node.prev !== null) {
      node.prev.next = node.next;
    } else {
      this.head = node.next;
    }
    if (node.next !== null) {
      node.next.prev = node.prev;
    } else {
      this.tail = node.prev;
    }
    this.length--;
  }
  // ... delete, find, iterators, etc.
}

The critical advantage is removeLast: by following the tail's prev pointer, we can unlink the last node in O(1) time without traversing the list. The removeNode helper detaches any node from the list in O(1) once we have a reference to it.
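That constant-time splice is just two pointer assignments. A minimal standalone sketch with plain object nodes (the node names here are ours):

```typescript
interface DNode<T> {
  value: T;
  prev: DNode<T> | null;
  next: DNode<T> | null;
}

// Build a ⇄ b ⇄ c.
const a: DNode<string> = { value: "a", prev: null, next: null };
const b: DNode<string> = { value: "b", prev: a, next: null };
const c: DNode<string> = { value: "c", prev: b, next: null };
a.next = b;
b.next = c;

// Unlink the middle node in O(1): no traversal is needed because
// b already holds references to both of its neighbors.
b.prev!.next = b.next;
b.next!.prev = b.prev;
// Now a.next === c and c.prev === a.
```

This O(1) unlink-by-reference is the reason doubly linked lists back structures such as LRU caches, where an arbitrary entry must be moved or evicted quickly.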

The cost of this flexibility is extra memory: each node stores two pointers instead of one. For large collections of small values, this overhead can be significant.

Comparing arrays and linked lists

Operation                 Dynamic array      Singly linked list   Doubly linked list
Access by index           O(1)               O(n)                 O(n)
Prepend                   O(n)               O(1)                 O(1)
Append                    O(1)*              O(1)                 O(1)
Remove first              O(n)               O(1)                 O(1)
Remove last               O(1)*              O(n)                 O(1)
Insert at known position  O(n)               O(1)                 O(1)
Search                    O(n)               O(n)                 O(n)
Memory per element        Low (contiguous)   +1 pointer           +2 pointers
Cache performance         Excellent          Poor                 Poor

* Amortized

When to use which:

  • Dynamic array when you need fast random access or are iterating sequentially (cache-friendly).
  • Singly linked list when insertions and deletions at the front dominate.
  • Doubly linked list when you need efficient removal from both ends or deletion of arbitrary nodes (given a reference).

In practice, arrays and dynamic arrays dominate due to cache locality — modern CPUs are optimized for accessing contiguous memory. Linked lists shine in scenarios where elements are frequently inserted or removed at the endpoints, or when the data is too large to copy during a resize.

Abstract data types: stacks, queues, and deques

The data structures above — arrays and linked lists — are concrete implementations. Now we turn to abstract data types (ADTs): specifications of behavior that can be implemented in multiple ways. A stack, for instance, defines what operations are available (push, pop, peek) and what they do, without prescribing how to store the elements.

Stacks

A stack is a Last-In, First-Out (LIFO) collection. The most recently added element is the first one to be removed, like a stack of plates.

Interface

interface IStack<T> {
  push(value: T): void;     // Add to top
  pop(): T | undefined;      // Remove and return top
  peek(): T | undefined;     // Return top without removing
  readonly size: number;
  readonly isEmpty: boolean;
}

Implementation

A stack is naturally implemented as a linked list where both push and pop operate on the head:

interface StackNode<T> {
  value: T;
  next: StackNode<T> | null;
}

export class Stack<T> implements IStack<T>, Iterable<T> {
  private head: StackNode<T> | null = null;
  private length: number = 0;

  get size(): number { return this.length; }
  get isEmpty(): boolean { return this.length === 0; }

  push(value: T): void {
    this.head = { value, next: this.head };
    this.length++;
  }

  pop(): T | undefined {
    if (this.head === null) return undefined;
    const value = this.head.value;
    this.head = this.head.next;
    this.length--;
    return value;
  }

  peek(): T | undefined {
    return this.head?.value;
  }
  // ... iterator, toArray, etc.
}

All three operations — push, pop, peek — are O(1).

We could equally implement a stack with a dynamic array (push = append, pop = remove last). The array-based version has better cache locality, while the linked-list version avoids occasional resize costs. For most purposes in TypeScript, the built-in array with push/pop is the pragmatic choice; our implementation here serves pedagogical purposes.
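The pragmatic built-in version is a one-liner per operation:

```typescript
// A JavaScript array used as a stack: push/pop operate on the end,
// so both run in O(1) amortized time.
const stack: number[] = [];
stack.push(10);
stack.push(20);
stack.push(30);

stack[stack.length - 1]; // peek: 30
stack.pop();             // 30
stack.pop();             // 20
// stack is now [10]
```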

Applications

Stacks appear throughout computer science:

  • Function call stack. When a function is called, its local variables and return address are pushed onto the call stack. When it returns, they are popped. This is why recursive algorithms can overflow the stack with too many nested calls.
  • Parenthesis matching. To check whether brackets are balanced in an expression like ((a + b) * c), push each opening bracket and pop when a matching closing bracket is found.
  • Undo/redo. Text editors push each action onto an undo stack. Undoing pops the most recent action.
  • Depth-first search. DFS uses a stack (often the call stack via recursion) to track which vertices to visit next.

Tracing through an example

Operation    Stack (top → bottom)    Returned
push(10)     10
push(20)     20, 10
push(30)     30, 20, 10
peek()       30, 20, 10              30
pop()        20, 10                  30
pop()        10                      20
push(40)     40, 10
pop()        10                      40
pop()        (empty)                 10

Queues

A queue is a First-In, First-Out (FIFO) collection. Elements are added at the back and removed from the front, like a line of people waiting.

Interface

interface IQueue<T> {
  enqueue(value: T): void;    // Add to back
  dequeue(): T | undefined;   // Remove and return front
  peek(): T | undefined;      // Return front without removing
  readonly size: number;
  readonly isEmpty: boolean;
}

Implementation

A queue maps naturally onto a singly linked list with head and tail pointers: enqueue appends at the tail, dequeue removes from the head.

interface QueueNode<T> {
  value: T;
  next: QueueNode<T> | null;
}

export class Queue<T> implements IQueue<T>, Iterable<T> {
  private head: QueueNode<T> | null = null;
  private tail: QueueNode<T> | null = null;
  private length: number = 0;

  get size(): number { return this.length; }
  get isEmpty(): boolean { return this.length === 0; }

  enqueue(value: T): void {
    const node: QueueNode<T> = { value, next: null };
    if (this.tail !== null) {
      this.tail.next = node;
    } else {
      this.head = node;
    }
    this.tail = node;
    this.length++;
  }

  dequeue(): T | undefined {
    if (this.head === null) return undefined;
    const value = this.head.value;
    this.head = this.head.next;
    if (this.head === null) this.tail = null;
    this.length--;
    return value;
  }

  peek(): T | undefined {
    return this.head?.value;
  }
}

All operations are O(1).

An array-based queue is trickier: naively dequeuing from the front of an array is O(n) because every element must shift. A circular buffer solves this by wrapping indices around modulo the capacity, giving O(1) amortized enqueue and dequeue. Our linked-list implementation avoids this complexity altogether.
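The index wrapping at the heart of a circular buffer is plain modular arithmetic. The capacity below is an arbitrary illustrative value:

```typescript
// In a capacity-8 ring, an index past the last slot wraps back to 0.
const capacity = 8;
const advance = (index: number): number => (index + 1) % capacity;

advance(5); // 6
advance(6); // 7
advance(7); // 0  (wraps around to the start of the buffer)
```

Because both the front and back indices advance this way, dequeued slots at the front are reused for later enqueues without ever shifting elements.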

Applications

  • Breadth-first search. BFS uses a queue to explore vertices level by level.
  • Task scheduling. Operating systems use queues to schedule processes for CPU time.
  • Buffering. Data streams (network packets, keyboard input) are buffered in queues.
  • Level-order tree traversal. Visiting tree nodes level by level requires a queue.
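For instance, BFS is a direct application of FIFO order. The adjacency-list graph below is a made-up example, and a plain array stands in for the Queue class to keep the sketch self-contained:

```typescript
// Breadth-first search: visit vertices in order of distance from start.
function bfsOrder(adj: number[][], start: number): number[] {
  const visited = new Set<number>([start]);
  const queue: number[] = [start]; // array as a simple FIFO queue
  const order: number[] = [];
  while (queue.length > 0) {
    const v = queue.shift()!; // dequeue from the front
    order.push(v);
    for (const w of adj[v]) {
      if (!visited.has(w)) {
        visited.add(w);
        queue.push(w); // enqueue at the back
      }
    }
  }
  return order;
}

// 0 → {1, 2}, 1 → {3}, 2 → {3}
bfsOrder([[1, 2], [3], [3], []], 0); // [0, 1, 2, 3]
```

Note that Array#shift is O(n) on most engines; the linked-list Queue above (or a circular buffer) avoids that cost in production code.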

Tracing through an example

Operation      Queue (front → back)    Returned
enqueue(10)    10
enqueue(20)    10, 20
enqueue(30)    10, 20, 30
peek()         10, 20, 30              10
dequeue()      20, 30                  10
dequeue()      30                      20
enqueue(40)    30, 40
dequeue()      40                      30

Deques

A deque (double-ended queue, pronounced "deck") supports insertion and removal at both ends in O(1) time. It generalizes both stacks and queues.

Implementation

A deque maps directly onto a doubly linked list:

interface DequeNode<T> {
  value: T;
  prev: DequeNode<T> | null;
  next: DequeNode<T> | null;
}

export class Deque<T> implements Iterable<T> {
  private head: DequeNode<T> | null = null;
  private tail: DequeNode<T> | null = null;
  private length: number = 0;

  get size(): number { return this.length; }
  get isEmpty(): boolean { return this.length === 0; }

  pushFront(value: T): void {
    const node: DequeNode<T> = { value, prev: null, next: this.head };
    if (this.head !== null) {
      this.head.prev = node;
    } else {
      this.tail = node;
    }
    this.head = node;
    this.length++;
  }

  pushBack(value: T): void {
    const node: DequeNode<T> = { value, prev: this.tail, next: null };
    if (this.tail !== null) {
      this.tail.next = node;
    } else {
      this.head = node;
    }
    this.tail = node;
    this.length++;
  }

  popFront(): T | undefined {
    if (this.head === null) return undefined;
    const value = this.head.value;
    this.head = this.head.next;
    if (this.head !== null) this.head.prev = null;
    else this.tail = null;
    this.length--;
    return value;
  }

  popBack(): T | undefined {
    if (this.tail === null) return undefined;
    const value = this.tail.value;
    this.tail = this.tail.prev;
    if (this.tail !== null) this.tail.next = null;
    else this.head = null;
    this.length--;
    return value;
  }

  peekFront(): T | undefined { return this.head?.value; }
  peekBack(): T | undefined { return this.tail?.value; }
}

All six operations — pushFront, pushBack, popFront, popBack, peekFront, peekBack — are O(1).

Using a deque as a stack or queue

A deque subsumes both stacks and queues:

  • As a stack: use pushFront / popFront (or pushBack / popBack).
  • As a queue: use pushBack / popFront.

This flexibility makes the deque a useful building block when the access pattern is uncertain, or when both ends are needed.
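The mapping can be seen with the built-in array methods, which provide all four end operations. Front operations on plain arrays are O(n), so this is for illustration rather than performance:

```typescript
const deque: number[] = [];

// As a stack (LIFO): pushBack / popBack.
deque.push(1);
deque.push(2);
deque.pop(); // 2 (the most recent element)

// As a queue (FIFO): pushBack / popFront.
deque.push(3);
deque.push(4);
deque.shift(); // 1 (the oldest remaining element)
```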

Applications

  • Sliding window maximum. In the classic interview problem "maximum in every window of size k," a deque holds indices of potential maximums. Elements are added at the back and removed from the front (when they fall out of the window) or from the back (when a larger element supersedes them).
  • Work-stealing schedulers. Each thread has a deque of tasks. It pops from its own front, while idle threads steal from other deques' backs.
  • Palindrome checking. Push characters from both ends; pop from both ends and compare.
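As a sketch of the palindrome idea, array shift and pop can stand in for popFront and popBack:

```typescript
// Compare characters from both ends until they meet in the middle.
function isPalindrome(text: string): boolean {
  const chars = [...text]; // spread splits the string into characters
  while (chars.length > 1) {
    if (chars.shift() !== chars.pop()) return false;
  }
  return true;
}

isPalindrome("racecar"); // true
isPalindrome("deque");   // false
```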

Complexity comparison

Operation          DynamicArray   SinglyLinkedList   DoublyLinkedList   Stack   Queue   Deque
Add front          O(n)           O(1)               O(1)               O(1)    n/a     O(1)
Add back           O(1)*          O(1)               O(1)               n/a     O(1)    O(1)
Remove front       O(n)           O(1)               O(1)               O(1)    O(1)    O(1)
Remove back        O(1)*          O(n)               O(1)               n/a     n/a     O(1)
Access by index    O(1)           O(n)               O(n)               n/a     n/a     n/a
Search             O(n)           O(n)               O(n)               n/a     n/a     n/a

* Amortized

Exercises

Exercise 7.1. Implement a function isBalanced(expression: string): boolean that uses a Stack to determine whether the parentheses (), brackets [], and braces {} in an expression are properly balanced. For example, isBalanced("((a+b)*[c-d)]") should return false (the bracket and parenthesis close in the wrong order), while isBalanced("{a*(b+c)}") should return true.

Exercise 7.2. Implement a circular buffer–based queue. Use a fixed-size array and two indices (front and back) that wrap around using modular arithmetic. Compare its performance characteristics with our linked-list–based Queue.

Exercise 7.3. Implement a MinStack<T> that supports push, pop, peek, and an additional min() operation that returns the minimum element in the stack — all in time. Hint: maintain a second stack that tracks minimums.

Exercise 7.4. Using only two Stacks, implement a Queue. Analyze the amortized time complexity of enqueue and dequeue. Hint: use one stack for enqueuing and another for dequeuing; transfer elements between them lazily.

Exercise 7.5. Implement a function slidingWindowMax(arr: number[], k: number): number[] that returns the maximum value in each window of size k as the window slides from left to right across the array. Use a Deque to achieve O(n) time complexity.

Summary

This chapter introduced the foundational data structures upon which nearly everything else is built:

  • Dynamic arrays provide O(1) random access and O(1) amortized append via the doubling strategy. Insert and remove at arbitrary positions cost O(n) due to shifting.
  • Singly linked lists offer O(1) insertion and removal at the head, and O(1) append with a tail pointer, but sacrifice random access and efficient removal from the tail.
  • Doubly linked lists add back-pointers for O(1) removal at both ends, at the cost of extra memory per node.
  • Stacks (LIFO) are the workhorse of recursion, expression evaluation, and depth-first search.
  • Queues (FIFO) power breadth-first search, task scheduling, and buffering.
  • Deques generalize stacks and queues, supporting O(1) operations at both ends.

The choice between arrays and linked lists comes down to access patterns. If you need random access or sequential iteration (where cache locality matters), use an array. If insertions and deletions at the endpoints dominate, use a linked list. When in doubt, the dynamic array is usually the right default — it is what most languages provide as their standard collection.

In the next chapter, we will use these building blocks to construct hash tables, which achieve expected O(1) lookup by combining arrays with a hash function.