MCS 275 Spring 2023
Emily Dumas
Last time we wrote a mergesort function that acts as a transformation: A list is given as input, a new sorted list is returned.
Another approach we could consider is sorting as a mutation: A list is provided, the function reorders its items and returns nothing.
A sorting transformation always uses an amount of extra memory proportional to the size of the list. (It needs a second list to store the output.)
A sort that operates as a mutation has the possibility of using only a fixed amount of memory to do its work.
Such a sort is called an in-place sorting method.
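Python's built-in sorting tools already illustrate the two interfaces (this says nothing about how much memory they use internally; it is just a sketch of the transformation/mutation distinction):

L = [3, 1, 2]
M = sorted(L)    # transformation: returns a new sorted list, L is unchanged
print(L, M)      # prints [3, 1, 2] [1, 2, 3]
L.sort()         # mutation: reorders L itself and returns None
print(L)         # prints [1, 2, 3]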
Quicksort is a recursive in-place sorting method that, like mergesort, is reasonably efficient and widely used.
Let's first study something weaker than sorting.
Given a list L, let p be the last element of L.
We want to rearrange L so that it looks like:
[ items < p, p, items ≥ p ]
We say L has been partitioned at p, and we call p the pivot.
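For example, a transformation-style version of this rearrangement could look like the sketch below. This is not the in-place method we will actually use, and the helper name partitioned_copy is chosen here just for illustration:

def partitioned_copy(L):
    """Return a new list equal to L partitioned at its last element."""
    p = L[-1]
    rest = L[:-1]
    return [x for x in rest if x < p] + [p] + [x for x in rest if x >= p]

print(partitioned_copy([5, 9, 2, 7, 1, 8, 4]))   # prints [2, 1, 4, 5, 9, 7, 8]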
Scan through the list, moving things smaller than the pivot to the beginning.
The two chunks of the list on either side of the pivot may not be sorted.
But we could bring each of them closer to being sorted by partitioning them...
Starting with an unsorted list, repeatedly partitioning the pieces on either side of each pivot eventually produces a sorted list: this is quicksort.
It's divide and conquer, but with no merge step. The hard work is instead in partitioning.
Let's implement quicksort in Python.
quicksort:
Input: list L and indices start and end.
Goal: reorder the elements of L so that L[start:end] is sorted.
If (end-start) is less than or equal to 1, return immediately.
Otherwise, call partition(L, start, end) to partition that part of the list, letting m be the final location of the pivot.
Then call quicksort(L, start, m) and quicksort(L, m+1, end) to sort the parts of the list on either side of the pivot.
(A Python sketch of both functions appears after the description of partition below.)
partition:
Input: list L and indices start and end.
Goal: Take L[end-1] as a pivot, and reorder elements of L to partition L[start:end] accordingly.
Set pivot = L[end-1].
Set dst = start.
For each src from start to end-1:
If L[src] < pivot, swap L[src] and L[dst] and increment dst.
Swap L[end-1] and L[dst] to put the pivot in its proper place.
Return dst.
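Here is one possible Python implementation following this outline (a sketch; the default arguments for start and end are a convenience not in the outline above):

def partition(L, start, end):
    """Partition L[start:end] in place using L[end-1] as the pivot
    (the Lomuto scheme).  Return the final index of the pivot."""
    pivot = L[end - 1]
    dst = start
    for src in range(start, end - 1):
        if L[src] < pivot:
            # Move a small item into the front part of the range.
            L[src], L[dst] = L[dst], L[src]
            dst += 1
    # Put the pivot into its proper place.
    L[end - 1], L[dst] = L[dst], L[end - 1]
    return dst

def quicksort(L, start=0, end=None):
    """Sort L[start:end] in place."""
    if end is None:
        end = len(L)
    if end - start <= 1:
        return
    m = partition(L, start, end)
    quicksort(L, start, m)
    quicksort(L, m + 1, end)

For example:

L = [9, 2, 7, 5, 1, 8, 4]
quicksort(L)
print(L)   # prints [1, 2, 4, 5, 7, 8, 9]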
Python lists have a built-in .sort() method. Why talk about sorting?
Last time we discussed and implemented mergesort, developed by von Neumann (1945) and Goldstine (1947).
Today we discussed quicksort, first described by Hoare (1959), along with the simpler partitioning scheme introduced by Lomuto.
But are these actually good ways to sort a list?
Theorem: If you measure the time cost of mergesort by counting basic operations (e.g. each comparison, or each assignment such as L[i] = x, counts as 1), then the cost to sort a list of length $n$ is less than $C n \log(n)$, for some constant $C$ that only depends on which expense measure you chose.
$C n \log(n)$ is pretty efficient for an operation that needs to look at all $n$ elements. It's not linear in $n$, but it only grows a little faster than linear functions.
Furthermore, $C n \log(n)$ is the best possible asymptotic time for a comparison sort of $n$ elements (though different methods may achieve a better constant $C$).
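To get a feel for how $n \log(n)$ compares to linear and quadratic growth, we can tabulate a few values (a quick sketch using the natural logarithm; any fixed base only changes the constant $C$):

import math

# Compare n, n*log(n), and n**2 for a few list sizes to see
# how slowly n*log(n) grows compared to n**2.
for n in [10**3, 10**4, 10**5, 10**6]:
    print(f"n = {n:>9,}   n log n = {n * math.log(n):>14,.0f}   n^2 = {n**2:>16,}")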
Is quicksort similarly efficient?