Arrays, Linked Lists, Stacks, and Queues
The algorithms of the preceding chapters operate on arrays: contiguous blocks of memory indexed by integers. Arrays are powerful but they are only one of many ways to organize data. In this chapter we study the fundamental data structures that underpin nearly all of Computer Science: dynamic arrays, linked lists, stacks, queues, and deques. Each offers a different set of trade-offs between time complexity, memory usage, and flexibility. Understanding these structures deeply is essential, because every higher-level data structure, from hash tables to balanced trees to graphs, is built on top of them.
Arrays
An array is the simplest data structure: a contiguous block of memory divided into equal-sized slots, each identified by an integer index. Accessing any element by its index takes O(1) time, because the memory address can be computed directly: if the array starts at address b and each element occupies s bytes, then element i lives at address b + i · s.
This direct addressing makes arrays extremely efficient for random access. However, arrays have a fundamental limitation: their size is fixed at creation time. If we need to store more elements than the array can hold, we must allocate a new, larger array and copy all existing elements, which is an O(n) operation.
Static arrays in TypeScript
TypeScript (and JavaScript) arrays are actually dynamic; they resize automatically behind the scenes. But to understand the foundations, imagine a fixed-size array:
const fixed = new Array<number>(10); // 10 slots, all undefined
fixed[0] = 42;
fixed[9] = 99;
// fixed[10] would be out of bounds in a true static array
In languages like C or Java, going beyond the allocated size is either a compile-time error or a runtime crash. JavaScript's built-in arrays hide this complexity, but the cost of resizing is still there; it is just managed for us in the background. Let us see how by providing an implementation of dynamic arrays.
A note on faithfulness. The closest analogue to a true fixed-size, contiguous-memory array in TypeScript is a TypedArray such as Int32Array or Float64Array, which is backed by a raw ArrayBuffer of bytes. TypedArrays have a fixed length set at construction, cannot grow, and store elements of a single numeric type in a predictable, cache-friendly layout. The downside is that they only support numeric primitives, so they cannot hold a generic T. Because we want a reusable DynamicArray<T>, the implementation below uses a regular JavaScript array as the backing buffer and treats its slots as fixed by tracking capacity manually. Conceptually, every this.data[i] access should be read as "load the i-th word from a contiguous memory block of size capacity." If you want to see the same idea with a genuinely fixed-size store, swap the backing buffer for a Float64Array and drop the T | undefined typing; the resize logic stays identical.
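As a concrete illustration of this fixed-size behavior, here is a short sketch (the variable name is ours) showing that a Float64Array really cannot grow:

```typescript
// A genuinely fixed-size numeric store: length is set at construction and never changes.
const buf = new Float64Array(4); // 4 slots, zero-initialized
buf[0] = 3.14;
buf[3] = 2.71;

// Out-of-bounds writes are silently ignored rather than growing the array,
// and out-of-bounds reads yield undefined:
buf[4] = 1.0;
console.log(buf.length); // 4
console.log(buf[4]);     // undefined
```

Contrast this with a plain JavaScript array, where `arr[4] = 1.0` would simply extend the array to length 5.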
Dynamic arrays
A dynamic array maintains an internal buffer that is larger than the number of elements currently stored. When the buffer fills up, the array allocates a new buffer of double the size and copies all elements over. This doubling strategy gives us amortized O(1) appends while keeping worst-case access at O(1). We will justify both bounds in the complexity and space analysis section below, once we have seen how the implementation grows the buffer.
The doubling strategy
Suppose our dynamic array has capacity c and currently holds n elements. When we append element x:
- If n < c: store the element in slot n. Cost: O(1).
- If n = c: allocate a new buffer of size 2c, copy all n elements, then store the new element. Cost: O(n).
The key insight is that expensive copies happen rarely. After a copy doubles the capacity to 2c, we can perform roughly c more cheap appends before the next copy. This is essential for amortized analysis, which we will make precise after the implementation.
Implementation
Our DynamicArray<T> uses a plain JavaScript array as the backing buffer, with explicit capacity management. The initial capacity defaults to 4.
export class DynamicArray<T> implements Iterable<T> {
private data: (T | undefined)[];
private length: number;
constructor(initialCapacity = 4) {
this.data = new Array<T | undefined>(initialCapacity);
this.length = 0;
}
get size(): number {
return this.length;
}
get capacity(): number {
return this.data.length;
}
get(index: number): T {
this.checkBounds(index);
return this.data[index] as T;
}
set(index: number, value: T): void {
this.checkBounds(index);
this.data[index] = value;
}
append(value: T): void {
this.growIfFull();
this.data[this.length] = value;
this.length++;
}
insert(index: number, value: T): void {
this.checkInsertBounds(index);
this.growIfFull();
for (let i = this.length; i > index; i--) {
this.data[i] = this.data[i - 1];
}
this.data[index] = value;
this.length++;
}
remove(index: number): T {
this.checkBounds(index);
const value = this.data[index] as T;
for (let i = index; i < this.length - 1; i++) {
this.data[i] = this.data[i + 1];
}
this.data[this.length - 1] = undefined;
this.length--;
this.shrinkIfSparse();
return value;
}
private growIfFull(): void {
if (this.length === this.data.length) {
this.resize(this.data.length * 2);
}
}
private shrinkIfSparse(): void {
if (
this.length > 0 &&
this.length <= this.data.length / 4 &&
this.data.length > 4
) {
this.resize(Math.max(4, Math.floor(this.data.length / 2)));
}
}
private resize(newCapacity: number): void {
const newData = new Array<T | undefined>(newCapacity);
for (let i = 0; i < this.length; i++) {
newData[i] = this.data[i];
}
this.data = newData;
}
private checkBounds(index: number): void {
if (index < 0 || index >= this.length) {
throw new RangeError(
`Index ${index} out of bounds for size ${this.length}`
);
}
}
private checkInsertBounds(index: number): void {
if (index < 0 || index > this.length) {
throw new RangeError(
`Index ${index} out of bounds for size ${this.length}`
);
}
}
// ... iterator, toArray, etc.
}
Notice that remove also implements shrinking: when occupancy falls to 25% of capacity or below, the buffer is halved (but never below 4). This prevents a long sequence of removals from wasting memory, and the halving threshold (1/4 rather than 1/2) avoids thrashing, a pathological pattern in which alternating appends and removes near the boundary would trigger repeated resizes.
Complexity and space analysis
Now that we have seen the implementation, we can return to the claim made at the start of this section: a dynamic array supports append in amortized O(1) time while keeping get and set at O(1) in the worst case.
Indexed read and write stay O(1)
The backing store is a single contiguous JavaScript array, so the slot for element i lives at a fixed offset within it. Computing that offset is constant work, independent of i. Once we have the offset, both reading the slot (get(i)) and writing to it (set(i, v)) take constant time.
Note that neither get nor set ever triggers a resize. They do not change the number of elements stored, so the buffer never needs to grow or shrink as a result of them; only append, insert, and remove change the size and therefore interact with the resize logic. get and set simply address an existing slot.
Resizing does not break this invariant: the new buffer is also contiguous, so after the copy the slot for element i is again at a fixed offset. Therefore get(i) and set(i, v) are both O(1) in the worst case, not just on average.
Amortized analysis of append
A single append can cost O(n) when it triggers a resize, but resizes are rare enough that the average cost per append, taken over a long sequence of operations, is O(1). This kind of averaging is what amortized analysis captures.
We use the aggregate method. Starting from an empty array with initial capacity 1, suppose we perform n appends. Resizes happen when the size reaches 1, 2, 4, 8, ..., up to the largest power of 2 not exceeding n. The total number of element copies across all resizes is therefore

1 + 2 + 4 + ... + 2^⌊log₂ n⌋ = Σ_{k=0}^{⌊log₂ n⌋} 2^k.

To see why this bound holds, recall the multiply-by-r-and-subtract derivation we used in Chapter 6 (in the box explaining the geometric series). Applied to a finite sum Σ_{k=0}^{m} r^k with r = 2, the same algebra gives (r − 1) · Σ_{k=0}^{m} r^k = r^(m+1) − 1, hence

Σ_{k=0}^{m} r^k = (r^(m+1) − 1) / (r − 1).

Specializing to r = 2 and m = ⌊log₂ n⌋, the sum of copies is exactly 2^(⌊log₂ n⌋ + 1) − 1. By definition of the floor, 2^⌊log₂ n⌋ ≤ n, so

2^(⌊log₂ n⌋ + 1) − 1 = 2 · 2^⌊log₂ n⌋ − 1 ≤ 2n − 1 < 2n.

The intuition that "the last term dominates the sum" is precisely the statement that, reading the series backward and factoring out 2^⌊log₂ n⌋, the remaining factor 1 + 1/2 + 1/4 + ... approaches its limit of 2 but never reaches it, so the whole sum is always strictly less than 2 · 2^⌊log₂ n⌋ ≤ 2n.

So the total work for n appends is at most n writes plus 2n copies, giving O(n) in total. The amortized cost per append is O(n) / n = O(1).
Geometric growth is essential to this argument. If we instead grew the buffer by a constant amount c at every resize, every c-th append would trigger an O(n) copy, summing to O(n²) work for n appends and an amortized cost of O(n) per append. Doubling (or any growth factor strictly greater than 1) keeps the geometric sum bounded and the amortized cost constant.
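The two growth strategies can be compared empirically. The helper functions below are our own illustration (not part of the DynamicArray class); they count the element copies caused by resizes over n appends:

```typescript
// Doubling: total copies across all resizes stays below 2n.
function copiesWithDoubling(n: number): number {
  let capacity = 1, size = 0, copies = 0;
  for (let i = 0; i < n; i++) {
    if (size === capacity) { copies += size; capacity *= 2; } // full: copy everything
    size++;
  }
  return copies;
}

// Constant growth by `step` slots: total copies grow quadratically in n.
function copiesWithConstantGrowth(n: number, step: number): number {
  let capacity = step, size = 0, copies = 0;
  for (let i = 0; i < n; i++) {
    if (size === capacity) { copies += size; capacity += step; }
    size++;
  }
  return copies;
}

console.log(copiesWithDoubling(1000));          // 1023, safely under 2n = 2000
console.log(copiesWithConstantGrowth(1000, 4)); // 124500, on the order of n²/8
```

For 1000 appends, doubling performs 1023 copies in total (1 + 2 + 4 + ... + 512), while growing by 4 slots at a time performs over a hundred thousand.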
Space overhead
There are two distinct sources of space overhead to account for, both absent from a fixed-size static array.
Steady-state slack. Between resizes, some of the allocated buffer is unused: capacity is at most twice the live size, and the shrink rule keeps occupancy above 25% (except for the small initial buffer). So at any moment outside of a resize, the unused capacity is at most a constant multiple of the live size n, and the dynamic array's footprint is bounded by a small constant times the footprint of its static-array counterpart.
Transient resize overhead. Resizing is not free of memory cost either. When the buffer becomes full at size n, the implementation allocates a new buffer of capacity 2n and copies every live element into it before the old buffer can be freed. During the copy, both buffers coexist, so the peak memory usage of a single resize is n + 2n = 3n slots, three times the live size, and strictly more than the steady-state bound. After the copy finishes and the old buffer is released, the footprint drops back to 2n. A static array, in contrast, is allocated once at its final size and never has this transient spike.
For long-lived programs this transient is rarely a problem: it is local to the resize and reclaimed immediately. But it is worth being aware of in two situations. First, when memory is tight (embedded systems, large in-memory datasets close to physical limits), a doubling resize on a buffer of size n requires 2n slots free at the moment of the resize, even though the data structure "uses" only n. Second, the resize allocates a fresh contiguous block, which interacts with allocator fragmentation and (in garbage-collected runtimes like JavaScript) creates collectable garbage proportional to the old buffer's size.
Asymptotically, both sources of overhead are still O(n). The dynamic array uses O(n) space, with a small constant-factor overhead compared to a perfectly sized static array, but the constants differ between the steady state (at most 2n slots) and the brief peak during a resize (3n slots).
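To make the transient concrete, here is a small simulation (our own sketch, mirroring the doubling logic above) that tracks the peak number of allocated slots across n appends:

```typescript
// During each doubling resize, the old buffer (capacity c) and the new buffer (2c)
// coexist briefly, so the instantaneous footprint spikes to 3c slots.
function peakSlots(n: number): number {
  let capacity = 1, size = 0, peak = 1;
  for (let i = 0; i < n; i++) {
    if (size === capacity) {
      peak = Math.max(peak, 3 * capacity); // old (c) + new (2c) alive during the copy
      capacity *= 2;
    }
    size++;
  }
  return peak;
}

console.log(peakSlots(1000)); // 1536 = 3 · 512: the last resize fires at size 512
```

The peak is always three times the size at which the most recent resize fired, matching the 3n bound derived above.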
Summary
| Operation | Time | Notes |
|---|---|---|
| get(i) / set(i, v) | O(1) | Direct index access |
| append(v) | O(1) amortized | O(n) worst case during resize |
| insert(i, v) | O(n) | Must shift elements right |
| remove(i) | O(n) | Must shift elements left |
| indexOf(v) | O(n) | Linear scan |
Linked lists
A linked list stores elements in nodes that are scattered throughout memory, with each node containing a value and a pointer (reference) to the next node. Unlike arrays, linked lists do not require contiguous memory, and inserting or removing an element at a known position takes O(1) time, with no shifting required.
The trade-off is that random access is lost: to reach the i-th element, we must follow i pointers from the head, taking O(i) time (O(n) in the worst case).
Singly linked lists
In a singly linked list, each node points to the next node. The list maintains a pointer to the head (first node) and, for efficiency, a pointer to the tail (last node).
head → [10 | •] → [20 | •] → [30 | null]
↑
tail
Implementation
class SinglyNode<T> {
constructor(
public value: T,
public next: SinglyNode<T> | null = null,
) {}
}
export class SinglyLinkedList<T> implements Iterable<T> {
private head: SinglyNode<T> | null = null;
private tail: SinglyNode<T> | null = null;
private length: number = 0;
get size(): number {
return this.length;
}
prepend(value: T): void {
const node = new SinglyNode(value, this.head);
this.head = node;
if (this.tail === null) {
this.tail = node;
}
this.length++;
}
append(value: T): void {
const node = new SinglyNode(value);
if (this.tail !== null) {
this.tail.next = node;
} else {
this.head = node;
}
this.tail = node;
this.length++;
}
removeFirst(): T | undefined {
if (this.head === null) return undefined;
const value = this.head.value;
this.head = this.head.next;
if (this.head === null) {
this.tail = null;
}
this.length--;
return value;
}
delete(value: T): boolean {
if (this.head === null) return false;
if (this.head.value === value) {
this.head = this.head.next;
if (this.head === null) this.tail = null;
this.length--;
return true;
}
let current = this.head;
while (current.next !== null) {
if (current.next.value === value) {
if (current.next === this.tail) this.tail = current;
current.next = current.next.next;
this.length--;
return true;
}
current = current.next;
}
return false;
}
find(value: T): boolean {
let current = this.head;
while (current !== null) {
if (current.value === value) return true;
current = current.next;
}
return false;
}
// ... iterator, toArray, etc.
}
Tracing through an example
Starting with an empty singly linked list, let us perform a sequence of operations:
| Operation | List state | size |
|---|---|---|
append(10) | [10] | 1 |
append(20) | [10] → [20] | 2 |
prepend(5) | [5] → [10] → [20] | 3 |
removeFirst() → 5 | [10] → [20] | 2 |
delete(20) → true | [10] | 1 |
append(30) | [10] → [30] | 2 |
Notice that prepend and removeFirst are both O(1) because they only touch the head pointer. Appending is also O(1) because we maintain a tail pointer. However, delete(value) requires a linear scan, costing O(n).
A limitation of singly linked lists
Removing the last element is O(n) in a singly linked list, because we must traverse the entire list to find the node that precedes the tail. The doubly linked list solves this problem.
Doubly linked lists
In a doubly linked list, each node has pointers to both the next and previous nodes. This enables O(1) removal from both ends.
null ← [10 | •] ⇄ [20 | •] ⇄ [30 | •] → null
↑ ↑
head tail
Implementation
class DoublyNode<T> {
constructor(
public value: T,
public prev: DoublyNode<T> | null = null,
public next: DoublyNode<T> | null = null,
) {}
}
export class DoublyLinkedList<T> implements Iterable<T> {
private head: DoublyNode<T> | null = null;
private tail: DoublyNode<T> | null = null;
private length: number = 0;
get size(): number {
return this.length;
}
prepend(value: T): void {
const node = new DoublyNode(value, null, this.head);
if (this.head !== null) {
this.head.prev = node;
} else {
this.tail = node;
}
this.head = node;
this.length++;
}
append(value: T): void {
const node = new DoublyNode(value, this.tail, null);
if (this.tail !== null) {
this.tail.next = node;
} else {
this.head = node;
}
this.tail = node;
this.length++;
}
removeFirst(): T | undefined {
if (this.head === null) return undefined;
const value = this.head.value;
this.head = this.head.next;
if (this.head !== null) {
this.head.prev = null;
} else {
this.tail = null;
}
this.length--;
return value;
}
removeLast(): T | undefined {
if (this.tail === null) return undefined;
const value = this.tail.value;
this.tail = this.tail.prev;
if (this.tail !== null) {
this.tail.next = null;
} else {
this.head = null;
}
this.length--;
return value;
}
private removeNode(node: DoublyNode<T>): void {
if (node.prev !== null) {
node.prev.next = node.next;
} else {
this.head = node.next;
}
if (node.next !== null) {
node.next.prev = node.prev;
} else {
this.tail = node.prev;
}
this.length--;
}
// ... delete, find, iterators, etc.
}
The critical advantage is removeLast: by following the tail's prev pointer, we can unlink the last node in O(1) time without traversing the list. The removeNode helper detaches any node from the list in O(1) once we have a reference to it.
The cost of this flexibility is extra memory: each node stores two pointers instead of one. For large collections of small values, this overhead can be significant.
Comparing arrays and linked lists
| Operation | Dynamic array | Singly linked list | Doubly linked list |
|---|---|---|---|
| Access by index | O(1) | O(n) | O(n) |
| Prepend | O(n) | O(1) | O(1) |
| Append | O(1)* | O(1) | O(1) |
| Remove first | O(n) | O(1) | O(1) |
| Remove last | O(1)* | O(n) | O(1) |
| Insert at known position | O(n) | O(1) | O(1) |
| Search | O(n) | O(n) | O(n) |
| Memory per element | Low (contiguous) | +1 pointer | +2 pointers |
| Cache performance | Excellent | Poor | Poor |
* Amortized
When to use which:
- Dynamic array when you need fast random access or are iterating sequentially (cache-friendly).
- Singly linked list when insertions and deletions at the front dominate.
- Doubly linked list when you need efficient removal from both ends or deletion of arbitrary nodes (given a reference).
In practice, arrays and dynamic arrays dominate due to cache locality; modern CPUs are optimized for accessing contiguous memory. Linked lists shine in scenarios where elements are frequently inserted or removed at the endpoints, or when the data is too large to copy during a resize.
Abstract data types: stacks, queues, and deques
The data structures above (arrays and linked lists) are concrete implementations. Now we turn to abstract data types (ADTs): specifications of behavior that can be implemented in multiple ways. A stack, for instance, defines what operations are available (push, pop, peek) and what they do, without prescribing how to store the elements.
Stacks
A stack is a Last-In, First-Out (LIFO) collection. The most recently added element is the first one to be removed, like a stack of plates.
Interface
interface IStack<T> {
push(value: T): void; // Add to top
pop(): T | undefined; // Remove and return top
peek(): T | undefined; // Return top without removing
readonly size: number;
readonly isEmpty: boolean;
}
Implementation
A stack is naturally implemented as a linked list where both push and pop operate on the head:
interface StackNode<T> {
value: T;
next: StackNode<T> | null;
}
export class Stack<T> implements IStack<T>, Iterable<T> {
private head: StackNode<T> | null = null;
private length: number = 0;
get size(): number { return this.length; }
get isEmpty(): boolean { return this.length === 0; }
push(value: T): void {
this.head = { value, next: this.head };
this.length++;
}
pop(): T | undefined {
if (this.head === null) return undefined;
const value = this.head.value;
this.head = this.head.next;
this.length--;
return value;
}
peek(): T | undefined {
return this.head?.value;
}
}
All three operations — push, pop, peek — are O(1).
We could equally implement a stack with a dynamic array (push = append, pop = remove last). The array-based version has better cache locality, while the linked-list version avoids occasional resize costs. For most purposes in TypeScript, the built-in array with push/pop is the pragmatic choice; our implementation here serves pedagogical purposes.
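The pragmatic built-in approach mentioned above looks like this in practice:

```typescript
// A plain JavaScript array used as a stack: push and pop operate on the
// end of the array, both in O(1) amortized time.
const stack: number[] = [];
stack.push(10);
stack.push(20);
stack.push(30);

console.log(stack.pop());             // 30: the most recently pushed element leaves first
console.log(stack[stack.length - 1]); // 20: peek at the top without removing it
console.log(stack.length);            // 2
```

The trade-off noted above applies: this version enjoys cache locality but pays occasional resize costs hidden inside the engine.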
Applications
Stacks appear throughout Computer Science:
- Function call stack. When a function is called, its local variables and return address are pushed onto the call stack. When it returns, they are popped. This is why recursive algorithms can overflow the stack with too many nested calls.
- Parenthesis matching. To check whether brackets are balanced in an expression like ((a + b) * c), push each opening bracket and pop when a matching closing bracket is found.
- Undo/redo. Text editors push each action onto an undo stack. Undoing pops the most recent action.
- Depth-first search. DFS uses a stack (often the call stack via recursion) to track which vertices to visit next.
Tracing through an example
| Operation | Stack (top → bottom) | Returned |
|---|---|---|
push(10) | 10 | — |
push(20) | 20, 10 | — |
push(30) | 30, 20, 10 | — |
peek() | 30, 20, 10 | 30 |
pop() | 20, 10 | 30 |
pop() | 10 | 20 |
push(40) | 40, 10 | — |
pop() | 10 | 40 |
pop() | (empty) | 10 |
Queues
A queue is a First-In, First-Out (FIFO) collection. Elements are added at the back and removed from the front, like a line of people waiting.
Interface
interface IQueue<T> {
enqueue(value: T): void; // Add to back
dequeue(): T | undefined; // Remove and return front
peek(): T | undefined; // Return front without removing
readonly size: number;
readonly isEmpty: boolean;
}
Implementation
A queue maps naturally onto a singly linked list with head and tail pointers: enqueue appends at the tail, dequeue removes from the head.
interface QueueNode<T> {
value: T;
next: QueueNode<T> | null;
}
export class Queue<T> implements IQueue<T>, Iterable<T> {
private head: QueueNode<T> | null = null;
private tail: QueueNode<T> | null = null;
private length: number = 0;
get size(): number { return this.length; }
get isEmpty(): boolean { return this.length === 0; }
enqueue(value: T): void {
const node: QueueNode<T> = { value, next: null };
if (this.tail !== null) {
this.tail.next = node;
} else {
this.head = node;
}
this.tail = node;
this.length++;
}
dequeue(): T | undefined {
if (this.head === null) return undefined;
const value = this.head.value;
this.head = this.head.next;
if (this.head === null) this.tail = null;
this.length--;
return value;
}
peek(): T | undefined {
return this.head?.value;
}
}
All operations are O(1).
An array-based queue is trickier: naively dequeuing from the front of an array is O(n) because every element must shift. A circular buffer solves this by wrapping indices around modulo the capacity, giving amortized O(1) enqueue and dequeue. Our linked-list implementation avoids this complexity altogether.
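The wrap-around trick is just modular index arithmetic. A minimal illustration (not a full queue; building one is Exercise 7.2):

```typescript
// Advancing an index in a circular buffer of capacity 4: after slot 3 comes
// slot 0 again, so the front of the queue can move forward indefinitely
// without ever shifting elements.
const capacity = 4;
let front = 0;
const visited: number[] = [];
for (let step = 0; step < 6; step++) {
  visited.push(front);
  front = (front + 1) % capacity;
}
console.log(visited); // [0, 1, 2, 3, 0, 1]
```

Dequeuing is then just "advance front"; enqueuing is "advance back"; both are single index updates.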
Applications
- Breadth-first search. BFS uses a queue to explore vertices level by level.
- Task scheduling. Operating systems use queues to schedule processes for CPU time.
- Buffering. Data streams (network packets, keyboard input) are buffered in queues.
- Level-order tree traversal. Visiting tree nodes level by level requires a queue.
Tracing through an example
| Operation | Queue (front → back) | Returned |
|---|---|---|
enqueue(10) | 10 | — |
enqueue(20) | 10, 20 | — |
enqueue(30) | 10, 20, 30 | — |
peek() | 10, 20, 30 | 10 |
dequeue() | 20, 30 | 10 |
dequeue() | 30 | 20 |
enqueue(40) | 30, 40 | — |
dequeue() | 40 | 30 |
Deques
A deque (double-ended queue, pronounced "deck") supports insertion and removal at both ends in O(1) time. It generalizes both stacks and queues.
Implementation
A deque maps directly onto a doubly linked list:
interface DequeNode<T> {
value: T;
prev: DequeNode<T> | null;
next: DequeNode<T> | null;
}
export class Deque<T> implements Iterable<T> {
private head: DequeNode<T> | null = null;
private tail: DequeNode<T> | null = null;
private length: number = 0;
get size(): number { return this.length; }
get isEmpty(): boolean { return this.length === 0; }
pushFront(value: T): void {
const node: DequeNode<T> = { value, prev: null, next: this.head };
if (this.head !== null) {
this.head.prev = node;
} else {
this.tail = node;
}
this.head = node;
this.length++;
}
pushBack(value: T): void {
const node: DequeNode<T> = { value, prev: this.tail, next: null };
if (this.tail !== null) {
this.tail.next = node;
} else {
this.head = node;
}
this.tail = node;
this.length++;
}
popFront(): T | undefined {
if (this.head === null) return undefined;
const value = this.head.value;
this.head = this.head.next;
if (this.head !== null) this.head.prev = null;
else this.tail = null;
this.length--;
return value;
}
popBack(): T | undefined {
if (this.tail === null) return undefined;
const value = this.tail.value;
this.tail = this.tail.prev;
if (this.tail !== null) this.tail.next = null;
else this.head = null;
this.length--;
return value;
}
peekFront(): T | undefined { return this.head?.value; }
peekBack(): T | undefined { return this.tail?.value; }
}
All six operations — pushFront, pushBack, popFront, popBack, peekFront, peekBack — are O(1).
Using a deque as a stack or queue
A deque subsumes both stacks and queues:
- As a stack: use pushFront/popFront (or pushBack/popBack).
- As a queue: use pushBack/popFront.
This flexibility makes the deque a useful building block when the access pattern is uncertain, or when both ends are needed.
Applications
- Sliding window maximum. In the classic interview problem "maximum in every window of size k," a deque holds indices of potential maximums. Elements are added at the back and removed from the front (when they fall out of the window) or from the back (when a larger element supersedes them).
- Work-stealing schedulers. Each thread has a deque of tasks. It pops from its own front, while idle threads steal from other deques' backs.
- Palindrome checking. Push characters from both ends; pop from both ends and compare.
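The palindrome check can be sketched with JavaScript's built-in array standing in for a deque (shift removes at the front, pop at the back). This illustrates the access pattern only; shift on a plain array is O(n), so a true Deque would be needed for linear-time behavior:

```typescript
// Compare characters from both ends, working inward, deque-style.
function isPalindrome(s: string): boolean {
  const chars = s.split("");
  while (chars.length > 1) {
    if (chars.shift() !== chars.pop()) return false; // front and back must match
  }
  return true;
}

console.log(isPalindrome("racecar")); // true
console.log(isPalindrome("deque"));   // false
```

A leftover middle character (odd-length input) is simply ignored, since it matches itself.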
Complexity comparison
| | DynamicArray | SinglyLinkedList | DoublyLinkedList | Stack | Queue | Deque |
|---|---|---|---|---|---|---|
| Add front | O(n) | O(1) | O(1) | O(1) | — | O(1) |
| Add back | O(1)* | O(1) | O(1) | — | O(1) | O(1) |
| Remove front | O(n) | O(1) | O(1) | O(1) | O(1) | O(1) |
| Remove back | O(1)* | O(n) | O(1) | — | — | O(1) |
| Access by index | O(1) | O(n) | O(n) | — | — | — |
| Search | O(n) | O(n) | O(n) | — | — | — |
* Amortized
Summary
This chapter introduced the foundational data structures upon which nearly everything else is built:
- Dynamic arrays provide O(1) random access and amortized O(1) append via the doubling strategy. Insert and remove at arbitrary positions cost O(n) due to shifting.
- Singly linked lists offer O(1) insertion and removal at the head, and O(1) append with a tail pointer, but sacrifice random access and efficient removal from the tail.
- Doubly linked lists add back-pointers for O(1) removal at both ends, at the cost of extra memory per node.
- Stacks (LIFO) are the workhorse of recursion, expression evaluation, and depth-first search.
- Queues (FIFO) power breadth-first search, task scheduling, and buffering.
- Deques generalize stacks and queues, supporting O(1) operations at both ends.
The choice between arrays and linked lists comes down to access patterns. If you need random access or sequential iteration (where cache locality matters), use an array. If insertions and deletions at the endpoints dominate, use a linked list. When in doubt, the dynamic array is usually the right default — it is what most languages provide as their standard collection.
In the next chapter, we will use these building blocks to construct hash tables, which achieve expected O(1) lookup by combining arrays with a hash function.
Exercises
Exercise 7.1. Implement a function isBalanced(expression: string): boolean that uses a Stack to determine whether the parentheses (), brackets [], and braces {} in an expression are properly balanced. For example, isBalanced("{a*(b+c)}") should return true, while isBalanced("((a+b)*[c-d)]") should return false (the bracket and parenthesis close in the wrong order).
Exercise 7.2. Implement a circular buffer–based queue. Use a fixed-size array and two indices (front and back) that wrap around using modular arithmetic. Compare its performance characteristics with our linked-list–based Queue.
Exercise 7.3. Implement a MinStack<T> that supports push, pop, peek, and an additional min() operation that returns the minimum element in the stack — all in O(1) time. Hint: maintain a second stack that tracks minimums.
Exercise 7.4. Using only two Stacks, implement a Queue. Analyze the amortized time complexity of enqueue and dequeue. Hint: use one stack for enqueuing and another for dequeuing; transfer elements between them lazily.
Exercise 7.5. Implement a function slidingWindowMax(arr: number[], k: number): number[] that returns the maximum value in each window of size k as the window slides from left to right across the array. Use a Deque to achieve O(n) time complexity.