Elementary Sorting
Sorting is one of the most fundamental problems in Computer Science. In this chapter we define the sorting problem precisely, introduce the concepts of stability and in-place sorting, and study three elementary sorting algorithms — bubble sort, selection sort, and insertion sort. All three run in $O(n^2)$ time in the worst case, but they differ in important ways: their behavior on nearly sorted input, their stability properties, and their practical performance. We close the chapter by proving that any comparison-based sorting algorithm must make $\Omega(n \log n)$ comparisons in the worst case — a lower bound that the elementary algorithms do not achieve, motivating the efficient algorithms of Chapter 5.
The sorting problem
Sorting is the problem of rearranging a collection of elements into a specified order. It arises constantly in practice — in database queries, in preparing data for binary search, in eliminating duplicates, in scheduling, and in countless other contexts. Knuth devoted an entire volume of The Art of Computer Programming to sorting and searching, calling sorting "perhaps the most deeply studied problem in Computer Science."
Definition 4.1 - The sorting problem
Input: A sequence of $n$ elements $\langle a_1, a_2, \ldots, a_n \rangle$ and a total ordering $\leq$ on the elements.

Output: A permutation $\langle a'_1, a'_2, \ldots, a'_n \rangle$ of the input such that $a'_1 \leq a'_2 \leq \cdots \leq a'_n$.

The definition requires a total ordering on the elements. A total ordering is a relation $\leq$ that satisfies four properties for all elements $a$, $b$, and $c$:

- Reflexivity: $a \leq a$.
- Transitivity: if $a \leq b$ and $b \leq c$, then $a \leq c$.
- Antisymmetry: if $a \leq b$ and $b \leq a$, then $a = b$.
- Totality: for any two elements $a$ and $b$, either $a \leq b$ or $b \leq a$ (or both).

The crucial property is totality: every pair of elements is comparable. Numbers compared with $\leq$ are the most familiar example — given any two numbers, one is less than or equal to the other — but numbers are not the only things we can sort: for example, strings can be sorted lexicographically (dictionary order), dates can be sorted chronologically, and objects can be sorted by any key that admits a total ordering. Whatever the nature of the elements, any sequence of them can be sorted as long as we can define a total ordering on them.
So far we have seen only total orderings. A natural question is whether other kinds of orderings exist. It turns out that not all orderings are total: a partial ordering satisfies the first three properties but not totality: some pairs of elements may be incomparable. For example, consider sets ordered by the subset relation $\subseteq$. We have $\{1\} \subseteq \{1, 2\}$, but $\{1\}$ and $\{2\}$ are incomparable — neither is a subset of the other. Another example is a task dependency relation: task A must precede task B, and task C must precede task D, but A and C have no ordering relation between them.

Because a partial ordering does not let us compare every pair of elements, we cannot sort the elements into a single linear sequence (though we can topologically sort them, which is a different problem discussed in Chapter 12). This is why the requirement of a total ordering on the elements being sorted cannot be omitted: sorting requires a total ordering because the output must be a linear sequence in which every adjacent pair of elements satisfies $a'_i \leq a'_{i+1}$. If some elements were incomparable, there would be no way to decide which should come first.
In TypeScript, we express the ordering through a comparator function:
```typescript
export type Comparator<T> = (a: T, b: T) => number;
```
The comparator returns a negative number if $a < b$, zero if $a = b$ (they compare as equal), and a positive number if $a > b$. For numbers in ascending order, the comparator is simply:
```typescript
export const numberComparator: Comparator<number> = (a, b) => a - b;
```
All three sorting algorithms in this chapter accept an optional comparator, defaulting to numberComparator. This makes them generic: they can sort arrays of any type, provided an appropriate comparator is supplied.
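For instance, other orderings need only a different comparator. The two below are illustrative sketches (the names are ours, not part of any library used in this book): one orders numbers in descending order, the other orders strings by length.

```typescript
// Illustrative comparators, using the Comparator<T> type defined above.

// Descending numeric order: larger numbers come first.
export const descendingNumberComparator: Comparator<number> = (a, b) => b - a;

// Strings ordered by length, shortest first.
export const byLengthComparator: Comparator<string> = (a, b) => a.length - b.length;
```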
Stability
When a sequence contains elements that compare as equal, there is a choice: should the algorithm preserve the original relative order of equal elements, or is it acceptable to rearrange them?
Definition 4.2 - Stable sort
A sorting algorithm is stable if, whenever two elements $a_i$ and $a_j$ compare as equal and $a_i$ appears before $a_j$ in the input, then $a_i$ also appears before $a_j$ in the output.
Stability matters when elements carry additional data beyond the sort key. For example, suppose we sort a list of students by grade, and two students — Alice and Bob — both have a grade of 90. If Alice appeared before Bob in the original list, a stable sort guarantees she still appears before Bob in the sorted output. An unstable sort might swap them.
Stability also enables multi-key sorting by composition: to sort by last name and then by first name, we first sort by first name (using a stable sort). When we then sort by last name (using a stable sort), stability ensures that records sharing the same last name retain the first-name ordering from the first sort — giving us the desired two-level sort.
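As a sketch of this technique (assuming the stable insertionSort implemented later in this chapter is in scope; the record shape and names are illustrative):

```typescript
interface Person {
  firstName: string;
  lastName: string;
}

// One comparator per key (illustrative names).
const byFirstName: Comparator<Person> = (a, b) => a.firstName.localeCompare(b.firstName);
const byLastName: Comparator<Person> = (a, b) => a.lastName.localeCompare(b.lastName);

const people: Person[] = [
  { firstName: "Dana", lastName: "Smith" },
  { firstName: "Alice", lastName: "Jones" },
  { firstName: "Carol", lastName: "Smith" },
];

insertionSort(people, byFirstName); // pass 1: secondary key
insertionSort(people, byLastName);  // pass 2: primary key, stable
// Result: Alice Jones, Carol Smith, Dana Smith
// (the two Smiths keep the first-name order established by pass 1)
```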
Of the three algorithms in this chapter, bubble sort and insertion sort are stable, while selection sort is not.
In-place sorting
Definition 4.3 - In-place sort
A sorting algorithm is in-place if it uses $O(1)$ auxiliary space — that is, a constant amount of memory beyond the input array.
All three algorithms in this chapter are in-place: they sort by swapping and shifting elements within the array, using only a constant number of temporary variables. Our TypeScript implementations sort the input array directly — the caller's array is modified.
Bubble sort
Bubble sort's main virtue is pedagogical simplicity: it is the easiest sorting algorithm to understand and implement, which makes it an ideal first example when studying sorting. In practice, however, it is outperformed by insertion sort on nearly every input and is rarely used outside the classroom. The algorithm works by repeatedly scanning the array from left to right, swapping adjacent elements that are out of order. After each complete pass, the largest unsorted element has "bubbled" to its correct position at the end. After $n - 1$ passes, every element is in place.
The algorithm
- For $k = 1, 2, \ldots, n - 1$ (pass number):
  - For $i = 1, 2, \ldots, n - k$:
    - If $A[i-1] > A[i]$, swap $A[i-1]$ and $A[i]$.

After pass $k$, the last $k$ elements are already in their final positions, so the inner loop only needs to scan positions $0$ through $n - k$.
Implementation
```typescript
export function bubbleSort<T>(
  elements: T[],
  comparator: Comparator<T> = numberComparator as Comparator<T>,
): T[] {
  // Each pass pushes the largest element of the unsorted prefix to its final position.
  for (let k = 0; k < elements.length - 1; k++) {
    // Scan the unsorted prefix, swapping adjacent out-of-order pairs.
    for (let i = 1; i < elements.length - k; i++) {
      if (comparator(elements[i - 1]!, elements[i]!) > 0) {
        const temp = elements[i - 1]!;
        elements[i - 1] = elements[i]!;
        elements[i] = temp;
      }
    }
  }
  return elements;
}
```
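A brief usage sketch (the import path is a placeholder for wherever the function lives in your project):

```typescript
import { bubbleSort } from "./bubbleSort"; // hypothetical module path

const numbers = [5, 3, 8, 4, 2];
bubbleSort(numbers);  // default numberComparator
console.log(numbers); // [2, 3, 4, 5, 8] (the caller's array is sorted in place)

const words = ["pear", "fig", "apple"];
bubbleSort(words, (a, b) => a.localeCompare(b)); // explicit comparator for strings
console.log(words);   // ["apple", "fig", "pear"]
```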
Tracing through an example
Let us sort $[5, 3, 8, 4, 2]$.

Pass 1 (inner loop scans $i = 1$ through $4$):

| $i$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 1 | [5, 3, 8, 4, 2] | $5 > 3$? Yes | Swap | [3, 5, 8, 4, 2] |
| 2 | [3, 5, 8, 4, 2] | $5 > 8$? No | — | [3, 5, 8, 4, 2] |
| 3 | [3, 5, 8, 4, 2] | $8 > 4$? Yes | Swap | [3, 5, 4, 8, 2] |
| 4 | [3, 5, 4, 8, 2] | $8 > 2$? Yes | Swap | [3, 5, 4, 2, 8] |
After pass 1, the largest element (8) is in its final position.
Pass 2 (inner loop scans $i = 1$ through $3$):

| $i$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 1 | [3, 5, 4, 2, 8] | $3 > 5$? No | — | [3, 5, 4, 2, 8] |
| 2 | [3, 5, 4, 2, 8] | $5 > 4$? Yes | Swap | [3, 4, 5, 2, 8] |
| 3 | [3, 4, 5, 2, 8] | $5 > 2$? Yes | Swap | [3, 4, 2, 5, 8] |
After pass 2, the two largest elements (5, 8) are in place.
Pass 3 (inner loop scans $i = 1$ through $2$):

| $i$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 1 | [3, 4, 2, 5, 8] | $3 > 4$? No | — | [3, 4, 2, 5, 8] |
| 2 | [3, 4, 2, 5, 8] | $4 > 2$? Yes | Swap | [3, 2, 4, 5, 8] |
Pass 4 (inner loop scans $i = 1$):

| $i$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 1 | [3, 2, 4, 5, 8] | $3 > 2$? Yes | Swap | [2, 3, 4, 5, 8] |

Result: $[2, 3, 4, 5, 8]$.
Correctness
We prove correctness using the following loop invariant for the outer loop.
Invariant: After $k$ complete passes, the $k$ largest elements are in their correct final positions at the end of the array, and the algorithm has not changed the relative order of equal elements.

Initialization: Before any passes ($k = 0$), the invariant holds trivially — zero elements are known to be in their final positions.

Maintenance: Consider pass $k$. The inner loop scans positions $0$ through $n - k$ from left to right, swapping adjacent out-of-order pairs. The largest element in the unsorted prefix "bubbles" rightward through every comparison, because it is larger than (or equal to) every element it encounters. By the end of the pass, this element has reached position $n - k$, which is its correct final position. The swap condition uses strict inequality ($> 0$ in the comparator), so equal elements are never swapped — preserving stability.

Termination: The outer loop runs exactly $n - 1$ times and then terminates. By the invariant, after $n - 1$ passes the $n - 1$ largest elements are in their correct positions. The remaining element — the smallest — is necessarily in position $0$, so the entire array is sorted.
Complexity analysis
Worst case. The outer loop performs $n - 1$ passes. Pass $k$ performs $n - k$ comparisons. The total is:

$$\sum_{k=1}^{n-1} (n - k) = (n - 1) + (n - 2) + \cdots + 1 = \frac{n(n-1)}{2} = O(n^2).$$

Best case. Even when the array is already sorted, the algorithm performs all $n - 1$ passes and the same $\frac{n(n-1)}{2}$ comparisons — only the swaps are avoided — so the best case is still $O(n^2)$.

Average case. $\frac{n(n-1)}{2} = O(n^2)$ comparisons in all cases, since the comparison count does not depend on the input.

Space complexity. $O(1)$ auxiliary space.
Early termination optimization
The basic algorithm always performs $n - 1$ passes, even if the array becomes sorted early. We can improve the best case by tracking whether any swap occurred during a pass. If a complete pass makes no swaps, the array is already sorted and we can stop:
```typescript
export function bubbleSortOptimized<T>(
  elements: T[],
  comparator: Comparator<T> = numberComparator as Comparator<T>,
): T[] {
  let n = elements.length;
  let wasSwapped = true;
  while (wasSwapped) {
    wasSwapped = false;
    for (let i = 1; i < n; i++) {
      if (comparator(elements[i - 1]!, elements[i]!) > 0) {
        const temp = elements[i - 1]!;
        elements[i - 1] = elements[i]!;
        elements[i] = temp;
        wasSwapped = true;
      }
    }
    // The largest element of this pass is now at position n - 1,
    // so the next pass can stop one position earlier.
    n--;
  }
  return elements;
}
```
This optimization does not change the worst-case or average-case complexity, but on already-sorted input only one pass is needed ($n - 1$ comparisons, zero swaps), giving an $O(n)$ best case. The correctness proof from above still applies: by the loop invariant, each pass places at least one more element in its final position, so after at most $n - 1$ passes the array is sorted and wasSwapped remains false, guaranteeing termination.
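One way to observe the improvement is to wrap the comparator in a counter and run the optimized sort on already-sorted input; it should stop after a single pass. This is a sketch, and the import paths are placeholders:

```typescript
import { bubbleSortOptimized } from "./bubbleSort";                // hypothetical module path
import { numberComparator, type Comparator } from "./comparator";  // hypothetical module path

// Wrap a comparator so we can count how many comparisons a sort performs.
function countingComparator<T>(base: Comparator<T>): {
  comparator: Comparator<T>;
  count: () => number;
} {
  let comparisons = 0;
  return {
    comparator: (a, b) => {
      comparisons++;
      return base(a, b);
    },
    count: () => comparisons,
  };
}

const alreadySorted = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const { comparator, count } = countingComparator(numberComparator);
bubbleSortOptimized(alreadySorted, comparator);
console.log(count()); // 9: a single pass over n = 10 elements, then early exit
```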
Properties
| Property | Bubble sort | Bubble sort (optimized) |
|---|---|---|
| Worst-case time | $O(n^2)$ | $O(n^2)$ |
| Best-case time | $O(n^2)$ | $O(n)$ |
| Average-case time | $O(n^2)$ | $O(n^2)$ |
| Space | $O(1)$ in-place | $O(1)$ in-place |
| Stable | Yes | Yes |
Selection sort
Selection sort takes a different approach: instead of bubbling elements rightward, it repeatedly finds the minimum element from the unsorted portion and places it at the beginning.
The algorithm
- For $i = 0, 1, \ldots, n - 2$:
  - Find the index $m$ of the minimum element in $A[i \ldots n-1]$.
  - Swap $A[i]$ and $A[m]$.

After the iteration with index $i$, the first $i + 1$ positions contain the $i + 1$ smallest elements in sorted order.
Implementation
```typescript
export function selectionSort<T>(
  elements: T[],
  comparator: Comparator<T> = numberComparator as Comparator<T>,
): T[] {
  for (let i = 0; i < elements.length - 1; i++) {
    // Find the minimum of the unsorted suffix elements[i..n-1].
    let remainingMinimum = elements[i]!;
    let indexToSwap = -1;
    for (let j = i + 1; j < elements.length; j++) {
      if (comparator(elements[j]!, remainingMinimum) < 0) {
        remainingMinimum = elements[j]!;
        indexToSwap = j;
      }
    }
    // Swap the minimum into position i (skip if it is already there).
    if (indexToSwap >= 0) {
      elements[indexToSwap] = elements[i]!;
      elements[i] = remainingMinimum;
    }
  }
  return elements;
}
```
Tracing through an example
Let us sort $[29, 10, 14, 37, 13]$.

Iteration 1 ($i = 0$): find the minimum in $A[0 \ldots 4]$ and place it at position 0.

| $j$ | Array | Comparison | Update minimum? | Current minimum |
|---|---|---|---|---|
| — | [29, 10, 14, 37, 13] | — | Initialize | 29 (index 0) |
| 1 | [29, 10, 14, 37, 13] | $10 < 29$? Yes | Yes | 10 (index 1) |
| 2 | [29, 10, 14, 37, 13] | $14 < 10$? No | No | 10 (index 1) |
| 3 | [29, 10, 14, 37, 13] | $37 < 10$? No | No | 10 (index 1) |
| 4 | [29, 10, 14, 37, 13] | $13 < 10$? No | No | 10 (index 1) |

Minimum is 10 at index 1. Swap $A[0] = 29$ and $A[1] = 10$.

Array after iteration 1: $[10, 29, 14, 37, 13]$.
Iteration 2 ($i = 1$): find the minimum in $A[1 \ldots 4]$ and place it at position 1.

| $j$ | Array | Comparison | Update minimum? | Current minimum |
|---|---|---|---|---|
| — | [10, 29, 14, 37, 13] | — | Initialize | 29 (index 1) |
| 2 | [10, 29, 14, 37, 13] | $14 < 29$? Yes | Yes | 14 (index 2) |
| 3 | [10, 29, 14, 37, 13] | $37 < 14$? No | No | 14 (index 2) |
| 4 | [10, 29, 14, 37, 13] | $13 < 14$? Yes | Yes | 13 (index 4) |

Minimum is 13 at index 4. Swap $A[1] = 29$ and $A[4] = 13$.

Array after iteration 2: $[10, 13, 14, 37, 29]$.
Iteration 3 ($i = 2$): find the minimum in $A[2 \ldots 4]$ and place it at position 2.

| $j$ | Array | Comparison | Update minimum? | Current minimum |
|---|---|---|---|---|
| — | [10, 13, 14, 37, 29] | — | Initialize | 14 (index 2) |
| 3 | [10, 13, 14, 37, 29] | $37 < 14$? No | No | 14 (index 2) |
| 4 | [10, 13, 14, 37, 29] | $29 < 14$? No | No | 14 (index 2) |

Minimum is 14 at index 2. No swap needed — the minimum is already at position 2.

Array after iteration 3: $[10, 13, 14, 37, 29]$.
Iteration 4 ($i = 3$): find the minimum in $A[3 \ldots 4]$ and place it at position 3.

| $j$ | Array | Comparison | Update minimum? | Current minimum |
|---|---|---|---|---|
| — | [10, 13, 14, 37, 29] | — | Initialize | 37 (index 3) |
| 4 | [10, 13, 14, 37, 29] | $29 < 37$? Yes | Yes | 29 (index 4) |

Minimum is 29 at index 4. Swap $A[3] = 37$ and $A[4] = 29$.

Array after iteration 4: $[10, 13, 14, 29, 37]$.

Result: $[10, 13, 14, 29, 37]$.
Correctness
Invariant: After the iteration with outer index $i$, the subarray $A[0 \ldots i]$ contains the $i + 1$ smallest elements of the original array, in sorted order, and the remaining elements in $A[i+1 \ldots n-1]$ are all greater than or equal to $A[i]$.

Initialization: Before the first iteration, the sorted prefix is empty. The invariant holds trivially.

Maintenance: In the iteration with index $i$, the inner loop scans $A[i \ldots n-1]$ and finds its minimum element. This element is the smallest among all elements not yet in the sorted prefix (since, by the invariant, all smaller elements are already in $A[0 \ldots i-1]$). Swapping it into position $i$ extends the sorted prefix by one element, maintaining the invariant.

Termination: After $n - 1$ iterations, positions $0$ through $n - 2$ contain the $n - 1$ smallest elements in order. The remaining element at position $n - 1$ is necessarily the largest, so the entire array is sorted.
Why selection sort is not stable
Consider the array $[2_a, 2_b, 1]$, where $2_a$ and $2_b$ are equal values distinguished by subscripts to track their original positions. In the first iteration, selection sort finds the minimum (1, at index 2) and swaps it with $2_a$ at index 0:

$$[2_a,\ 2_b,\ 1] \;\rightarrow\; [1,\ 2_b,\ 2_a]$$

Now $2_b$ appears before $2_a$, but in the original array $2_a$ appeared first. The relative order of equal elements has been reversed. This happens because the swap moves $2_a$ past $2_b$ in a single step, without regard for their original order.
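We can watch this happen by sorting records whose keys collide (a sketch, assuming the selectionSort above is importable; the path and record shape are illustrative):

```typescript
import { selectionSort } from "./selectionSort"; // hypothetical module path

interface Item {
  key: number;
  label: string;
}

const items: Item[] = [
  { key: 2, label: "a" }, // plays the role of 2_a
  { key: 2, label: "b" }, // plays the role of 2_b
  { key: 1, label: "c" },
];

selectionSort(items, (x, y) => x.key - y.key);
console.log(items.map((item) => item.label)); // ["c", "b", "a"]: the equal keys swapped order
```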
Complexity analysis
The inner loop in the iteration with outer index $i$ performs $n - 1 - i$ comparisons. The total number of comparisons is:

$$\sum_{i=0}^{n-2} (n - 1 - i) = (n - 1) + (n - 2) + \cdots + 1 = \frac{n(n-1)}{2} = O(n^2).$$

This count is the same regardless of the input — selection sort always performs exactly $\frac{n(n-1)}{2}$ comparisons, whether the array is sorted, reverse-sorted, or random.

Swaps. Selection sort performs at most $n - 1$ swaps (one per outer-loop iteration). This is a notable advantage in languages where swaps are expensive — for example, in C or C++, an array of structs stores the structs inline, so swapping two elements copies the entire struct byte by byte, and larger structs mean slower swaps. In TypeScript, however, arrays of objects store references (pointers) rather than the objects themselves, so swapping two elements only exchanges two small references — a constant-time operation regardless of how large the underlying objects are. The low swap count of selection sort therefore matters most in languages with value-type semantics; in TypeScript the benefit is negligible.

Space complexity. $O(1)$ auxiliary space.
Properties
| Property | Selection sort |
|---|---|
| Worst-case time | $O(n^2)$ |
| Best-case time | $O(n^2)$ |
| Average-case time | $O(n^2)$ |
| Space | $O(1)$ in-place |
| Stable | No |
Insertion sort
Insertion sort is the algorithm most people use intuitively when sorting a hand of playing cards. We hold the sorted cards in our left hand and pick up one card at a time from the table with our right hand, inserting it into the correct position among the already-sorted cards.
The algorithm
- For $i = 1, 2, \ldots, n - 1$:
  - Let toInsert $= A[i]$.
  - Insert toInsert into the sorted subarray $A[0 \ldots i-1]$ by shifting larger elements one position to the right.
Implementation
```typescript
export function insertionSort<T>(
  elements: T[],
  comparator: Comparator<T> = numberComparator as Comparator<T>,
): T[] {
  for (let i = 1; i < elements.length; i++) {
    const toInsert = elements[i]!;
    // Shift elements of the sorted prefix that are larger than toInsert one slot to the right.
    let insertIndex = i - 1;
    while (insertIndex >= 0 && comparator(toInsert, elements[insertIndex]!) < 0) {
      elements[insertIndex + 1] = elements[insertIndex]!;
      insertIndex--;
    }
    // insertIndex now points just before the slot where toInsert belongs.
    insertIndex++;
    elements[insertIndex] = toInsert;
  }
  return elements;
}
```
The inner while loop shifts elements rightward until it finds the correct position for toInsert. The use of strict less-than (< 0) in the comparator check means that equal elements are not shifted past each other, which makes the algorithm stable.
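Repeating the experiment from the selection sort section shows the difference: with the same colliding keys, insertion sort keeps the original order of equal elements (again a sketch with hypothetical names and import path):

```typescript
import { insertionSort } from "./insertionSort"; // hypothetical module path

interface Item {
  key: number;
  label: string;
}

const items: Item[] = [
  { key: 2, label: "a" },
  { key: 2, label: "b" },
  { key: 1, label: "c" },
];

insertionSort(items, (x, y) => x.key - y.key);
console.log(items.map((item) => item.label)); // ["c", "a", "b"]: equal keys keep their original order
```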
Tracing through an example
Let us sort $[5, 2, 4, 6, 1, 3]$. In the tables below, $j$ denotes insertIndex from the code.

Iteration 1 ($i = 1$, toInsert $= 2$): insert 2 into the sorted prefix $[5]$.

| $j$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 0 | [5, 2, 4, 6, 1, 3] | $2 < 5$? Yes | Shift 5 right | [5, 5, 4, 6, 1, 3] |
| $-1$ | [5, 5, 4, 6, 1, 3] | — | Place 2 at position 0 | [2, 5, 4, 6, 1, 3] |

After iteration 1: $[2, 5, 4, 6, 1, 3]$.
Iteration 2 ($i = 2$, toInsert $= 4$): insert 4 into the sorted prefix $[2, 5]$.

| $j$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 1 | [2, 5, 4, 6, 1, 3] | $4 < 5$? Yes | Shift 5 right | [2, 5, 5, 6, 1, 3] |
| 0 | [2, 5, 5, 6, 1, 3] | $4 < 2$? No | Place 4 at position 1 | [2, 4, 5, 6, 1, 3] |

After iteration 2: $[2, 4, 5, 6, 1, 3]$.
Iteration 3 ($i = 3$, toInsert $= 6$): insert 6 into the sorted prefix $[2, 4, 5]$.

| $j$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 2 | [2, 4, 5, 6, 1, 3] | $6 < 5$? No | Place 6 at position 3 | [2, 4, 5, 6, 1, 3] |

After iteration 3: $[2, 4, 5, 6, 1, 3]$. No shifting was needed — 6 is already in the right place.
Iteration 4 ($i = 4$, toInsert $= 1$): insert 1 into the sorted prefix $[2, 4, 5, 6]$.

| $j$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 3 | [2, 4, 5, 6, 1, 3] | $1 < 6$? Yes | Shift 6 right | [2, 4, 5, 6, 6, 3] |
| 2 | [2, 4, 5, 6, 6, 3] | $1 < 5$? Yes | Shift 5 right | [2, 4, 5, 5, 6, 3] |
| 1 | [2, 4, 5, 5, 6, 3] | $1 < 4$? Yes | Shift 4 right | [2, 4, 4, 5, 6, 3] |
| 0 | [2, 4, 4, 5, 6, 3] | $1 < 2$? Yes | Shift 2 right | [2, 2, 4, 5, 6, 3] |
| $-1$ | [2, 2, 4, 5, 6, 3] | — | Place 1 at position 0 | [1, 2, 4, 5, 6, 3] |

After iteration 4: $[1, 2, 4, 5, 6, 3]$.
Iteration 5 ($i = 5$, toInsert $= 3$): insert 3 into the sorted prefix $[1, 2, 4, 5, 6]$.

| $j$ | Array before | Comparison | Action | Array after |
|---|---|---|---|---|
| 4 | [1, 2, 4, 5, 6, 3] | $3 < 6$? Yes | Shift 6 right | [1, 2, 4, 5, 6, 6] |
| 3 | [1, 2, 4, 5, 6, 6] | $3 < 5$? Yes | Shift 5 right | [1, 2, 4, 5, 5, 6] |
| 2 | [1, 2, 4, 5, 5, 6] | $3 < 4$? Yes | Shift 4 right | [1, 2, 4, 4, 5, 6] |
| 1 | [1, 2, 4, 4, 5, 6] | $3 < 2$? No | Place 3 at position 2 | [1, 2, 3, 4, 5, 6] |

After iteration 5: $[1, 2, 3, 4, 5, 6]$.

Result: $[1, 2, 3, 4, 5, 6]$.
Notice how each element is inserted into its correct position within the growing sorted prefix on the left. When the element is already in the right place (like 6 in iteration 3), no shifting is needed and the inner loop exits immediately.
Correctness
Invariant: At the start of the iteration with outer index $i$, the subarray $A[0 \ldots i-1]$ is a sorted permutation of the elements originally in positions $0$ through $i - 1$.

Initialization: Before the first iteration ($i = 1$), the subarray $A[0 \ldots 0]$ contains a single element. A single element is trivially sorted.

Maintenance: During the iteration with index $i$, the element $A[i]$ is removed from its position and inserted into the sorted subarray $A[0 \ldots i-1]$. The inner loop finds the correct insertion point by scanning leftward from position $i - 1$, shifting each element larger than the one being inserted one position to the right. After the insertion, $A[0 \ldots i]$ is a sorted permutation of the elements originally in positions $0$ through $i$.

Termination: When the outer loop exits with $i = n$, the invariant tells us that $A[0 \ldots n-1]$ — the entire array — is sorted.
Complexity analysis
The number of comparisons depends on the input.
Worst case. The worst case is a reverse-sorted array. In the iteration with outer index $i$, the element must be shifted past all $i$ elements in the sorted prefix, requiring $i$ comparisons. The total is:

$$\sum_{i=1}^{n-1} i = \frac{n(n-1)}{2} = O(n^2).$$

Best case. The best case is an already-sorted array. In each iteration, the inner loop performs one comparison (finding that toInsert is already in place) and zero shifts:

$$\underbrace{1 + 1 + \cdots + 1}_{n-1} = n - 1 = O(n).$$

This is remarkable: insertion sort runs in linear time on sorted input, matching the theoretical minimum for any algorithm that must verify sortedness.

Average case. On a random permutation, each element is, on average, shifted past half the elements in the sorted prefix:

$$\sum_{i=1}^{n-1} \frac{i}{2} = \frac{n(n-1)}{4} = O(n^2).$$

Nearly sorted input. If each element is at most $k$ positions from its sorted position, the inner loop performs at most $k$ comparisons per element, giving $O(nk)$. When $k$ is a small constant, insertion sort runs in linear time. This makes it an excellent choice for "nearly sorted" data and for finishing off the work of a more sophisticated algorithm (for example, some quicksort implementations switch to insertion sort for small subarrays, as sketched below).
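As a sketch of that idea (a hypothetical hybrid, not an algorithm defined in this chapter): the building block is an insertion sort restricted to a range, which a recursive sort can call once a subarray falls below some cutoff. The cutoff value and all names below are illustrative assumptions; the Comparator type is the one defined earlier in this chapter.

```typescript
const SMALL_SUBARRAY_CUTOFF = 16; // typical order of magnitude; tuned empirically in practice

// Insertion sort limited to elements[lo..hi] (both inclusive).
function insertionSortRange<T>(
  elements: T[],
  lo: number,
  hi: number,
  comparator: Comparator<T>,
): void {
  for (let i = lo + 1; i <= hi; i++) {
    const toInsert = elements[i]!;
    let j = i - 1;
    while (j >= lo && comparator(toInsert, elements[j]!) < 0) {
      elements[j + 1] = elements[j]!;
      j--;
    }
    elements[j + 1] = toInsert;
  }
}

// Inside a hypothetical recursive sort:
//   if (hi - lo + 1 <= SMALL_SUBARRAY_CUTOFF) {
//     insertionSortRange(elements, lo, hi, comparator);
//     return;
//   }
//   ...otherwise partition or split and recurse as usual...
```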
Space complexity. $O(1)$ auxiliary space.
Inversions
The performance of insertion sort is closely tied to the concept of inversions.
Definition 4.4 - Inversion
An inversion in a sequence $\langle a_1, a_2, \ldots, a_n \rangle$ is a pair of indices $(i, j)$ with $i < j$ and $a_i > a_j$.

Each swap (or shift) in insertion sort eliminates exactly one inversion. Therefore, the number of comparisons insertion sort makes is $\Theta(n + I)$, where $I$ is the number of inversions in the input. A sorted array has $0$ inversions; a reverse-sorted array has $\frac{n(n-1)}{2}$, the maximum possible. On average, a random permutation has $\frac{n(n-1)}{4}$ inversions.
This connection makes insertion sort the natural choice when we know the input has few inversions — it is adaptive to the presortedness of the input.
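A brute-force inversion counter makes the connection easy to check on small inputs. It runs in $O(n^2)$ and is meant only for illustration; the function name is ours.

```typescript
// Count inversions by examining every pair (i, j) with i < j. O(n^2); for illustration only.
export function countInversions<T>(
  elements: readonly T[],
  comparator: Comparator<T>,
): number {
  let inversions = 0;
  for (let i = 0; i < elements.length; i++) {
    for (let j = i + 1; j < elements.length; j++) {
      if (comparator(elements[i]!, elements[j]!) > 0) {
        inversions++;
      }
    }
  }
  return inversions;
}

// countInversions([1, 2, 3, 4], numberComparator)       === 0  (sorted)
// countInversions([4, 3, 2, 1], numberComparator)       === 6  (reverse-sorted: n(n-1)/2)
// countInversions([5, 2, 4, 6, 1, 3], numberComparator) === 9  (matches the 9 shifts in the trace above)
```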
Properties
| Property | Insertion sort |
|---|---|
| Worst-case time | $O(n^2)$ |
| Best-case time | $O(n)$ |
| Average-case time | $O(n^2)$ |
| Space | $O(1)$ in-place |
| Stable | Yes |
| Adaptive | Yes (time depends on inversions) |
Comparison of elementary sorts
Now that we have studied all three algorithms, let us compare them side by side.
| Property | Bubble sort | Selection sort | Insertion sort |
|---|---|---|---|
| Worst-case time | $O(n^2)$ | $O(n^2)$ | $O(n^2)$ |
| Best-case time | $O(n^2)$ | $O(n^2)$ | $O(n)$ |
| Average-case time | $O(n^2)$ | $O(n^2)$ | $O(n^2)$ |
| Stable | Yes | No | Yes |
| Adaptive | No | No | Yes |
| Comparisons (worst) | $\frac{n(n-1)}{2}$ | $\frac{n(n-1)}{2}$ | $\frac{n(n-1)}{2}$ |
| Swaps (worst) | $O(n^2)$ | $O(n)$ | $O(n^2)$ shifts |
The optimized bubble sort variant (with early termination) achieves $O(n)$ best-case time and becomes adaptive, but the comparison above reflects the basic algorithm.
Several observations stand out:
- Selection sort always does the same amount of work regardless of the input — it is not adaptive. However, it minimizes the number of swaps (at most $n - 1$), which matters when moving elements is expensive.
- Insertion sort is the best general-purpose choice among the three. It is stable, adaptive, and efficient on small or nearly sorted inputs. In practice, it outperforms both bubble sort and selection sort.
- Bubble sort in its basic form always performs $\frac{n(n-1)}{2}$ comparisons regardless of input. Even with the early termination optimization, it is slower in practice than insertion sort because elements move only one position per swap, while insertion sort shifts an entire block. Bubble sort's main virtue is pedagogical simplicity.
The comparison-based sorting lower bound
All three elementary sorting algorithms are comparison-based: they access the input elements only through pairwise comparisons. This shared trait raises a natural question — is $O(n^2)$ the best we can do under this model, or can a comparison-based algorithm sort faster? It turns out the answer is yes: merge sort, heapsort, and quicksort all achieve $O(n \log n)$ time, as we will see in Chapter 5. This immediately raises a deeper question: can we do better still — is there a comparison-based algorithm that beats $O(n \log n)$? It turns out that the answer is no, and we prove it below.
Theorem 4.1 - Comparison-based sorting lower bound
Any comparison-based sorting algorithm must make $\Omega(n \log n)$ comparisons in the worst case to sort $n$ elements.
The decision tree argument
To prove this theorem, we model any comparison-based sorting algorithm as a decision tree. Each internal node represents a comparison between two elements (e.g., "is $a_i \leq a_j$?"), with two children corresponding to the outcomes "yes" and "no." Each leaf represents a specific output permutation.

For the algorithm to be correct, it must be able to produce every permutation of the $n$ elements as output — there must be at least $n!$ leaves. The number of comparisons in the worst case equals the height of the decision tree (the longest root-to-leaf path).

A binary tree of height $h$ has at most $2^h$ leaves. We have just established that a correct decision tree must have at least $n!$ leaves. Since the actual number of leaves is simultaneously at most $2^h$ (the binary-tree bound) and at least $n!$ (the correctness requirement), the upper bound must be large enough to accommodate the lower bound:

$$2^h \geq n!.$$

Taking logarithms:

$$h \geq \log_2(n!).$$

It remains to show that $\log_2(n!) = \Omega(n \log n)$. One can appeal to Stirling's approximation ($n! \approx \sqrt{2\pi n}\,(n/e)^n$), but a direct argument is just as short and more illuminating.
Claim — $\log_2(n!) = \Omega(n \log n)$.

Proof. Write $\log_2(n!)$ as a sum and keep only the upper half of its terms:

$$\log_2(n!) = \sum_{i=1}^{n} \log_2 i \;\geq\; \sum_{i=\lceil n/2 \rceil}^{n} \log_2 i.$$

Every term in the remaining sum satisfies $i \geq n/2$, so $\log_2 i \geq \log_2(n/2)$. There are at least $n/2$ such terms, giving:

$$\log_2(n!) \;\geq\; \frac{n}{2} \log_2 \frac{n}{2}.$$

For $n \geq 4$ we have $n/2 \geq \sqrt{n}$, so $\log_2(n/2) \geq \frac{1}{2} \log_2 n$, and therefore:

$$\log_2(n!) \;\geq\; \frac{n}{2} \cdot \frac{1}{2} \log_2 n \;=\; \frac{n}{4} \log_2 n \;=\; \Omega(n \log n).$$

Combining $h \geq \log_2(n!)$ with the claim, any comparison-based sorting algorithm requires $\Omega(n \log n)$ comparisons in the worst case.
Implications
This lower bound tells us that algorithms like merge sort and heapsort are asymptotically optimal among comparison-based sorts — they cannot be improved in the worst case.
Moreover, once we prove that a comparison-based sorting algorithm's worst-case running time is $O(n \log n)$ (an upper bound), the lower bound that we just established applies to it automatically — because it is a comparison-based sort. The two bounds together give us $\Theta(n \log n)$: such an algorithm is not merely "at most $O(n \log n)$" but exactly $\Theta(n \log n)$ in the worst case. There is no comparison-based sorting algorithm whose worst case grows more slowly, and merge sort and heapsort already match this floor.

It also tells us that our elementary algorithms are a factor of $\Theta(n / \log n)$ away from optimal. For $n = 10^6$, that factor is roughly 50,000 — the same dramatic gap we noted in the growth-rate table of Chapter 2.

However, as is evident from the proof, the lower bound applies only to comparison-based sorting. Algorithms that exploit additional structure in the input (such as knowing that elements are integers in a bounded range) can sort in $O(n)$ time, as we will see in Chapter 6.
Summary
In this chapter we studied the sorting problem and three elementary algorithms for solving it:
- Bubble sort repeatedly swaps adjacent out-of-order elements. It is simple and stable, with $O(n^2)$ time in all cases. An early termination optimization can improve the best case to $O(n)$.
- Selection sort repeatedly selects the minimum from the unsorted portion. It always takes $O(n^2)$ time but minimizes swaps to $O(n)$. It is not stable.
- Insertion sort inserts each element into its correct position in a growing sorted prefix. It is stable, adaptive to the number of inversions, and has $O(n)$ best-case time. It is the practical choice among elementary sorts.
- The comparison-based lower bound of $\Omega(n \log n)$ shows that these quadratic algorithms are not optimal.

In Chapter 5, we study three efficient sorting algorithms — merge sort, quicksort, and heapsort — that achieve the $O(n \log n)$ bound. These algorithms use the divide-and-conquer strategy from Chapter 3 to overcome the quadratic barrier.
Exercises
Exercise 4.1. The chapter shows that selection sort is not stable. Describe how selection sort could be modified to become stable (hint: use insertion into a separate output instead of swapping). What is the cost of this modification?
Exercise 4.2. A sentinel version of insertion sort places a minimum element at position 0 before sorting, eliminating the insertIndex >= 0 bound check in the inner loop. Explain why this is correct and analyze its effect on performance. What are the drawbacks?
Exercise 4.3. Write a Comparator<string> for lexicographic ordering and use it to sort ["banana", "apple", "cherry", "date", "apricot"] with insertion sort. Why can't a string comparator use the subtraction pattern (a, b) => a - b that numberComparator uses?
Exercise 4.4. Given an array of student records [{name: "Charlie", grade: 90}, {name: "Alice", grade: 85}, {name: "Bob", grade: 90}, {name: "Alice", grade: 90}], use the multi-key sorting technique described in the stability section to sort by grade (ascending) as the primary key and by name (alphabetically) as the secondary key. Then write a single composite comparator that achieves the same result in one pass — when would you prefer each approach?
Exercise 4.5. Consider sorting an array of {x: number, y: number} points by their Euclidean distance from the origin. Write the comparator. Can you avoid computing square roots? Does the choice between insertion sort and selection sort matter if multiple points may share the same distance?