Smoothsort
| Class | Sorting algorithm |
|---|---|
| Data structure | Array |
| Worst-case performance | O(n log n) |
| Best-case performance | O(n) |
| Average performance | O(n log n) |
| Worst-case space complexity | O(n) total, O(1) auxiliary |
| Optimal | When the data is already sorted |
In computer science, smoothsort is a comparison-based sorting algorithm. A variant of heapsort, it was invented and published by Edsger Dijkstra in 1981.[1] Like heapsort, smoothsort is an in-place algorithm with an upper bound of O(n log n),[2] but it is not a stable sort.[3][self-published source?] The advantage of smoothsort is that it comes closer to O(n) time if the input is already sorted to some degree, whereas heapsort averages O(n log n) regardless of the initial sorted state.
Overview
Like heapsort, smoothsort builds up an implicit heap data structure in the array to be sorted, then sorts the array by repeatedly extracting the maximum element from that heap. Unlike heapsort, which places the root of the heap and largest element at the beginning (left) of the array before swapping it to the end (right), smoothsort places the root of the heap at the right, already in its final location. This, however, considerably complicates the algorithm for removing the rightmost element.
Dijkstra's formulation of smoothsort does not use a binary heap, but rather a custom heap based on the Leonardo numbers. As he pointed out in his original paper,[1] it is also possible to use perfect binary trees (of size 2^k − 1) with the same asymptotic efficiency,[2] but there is a constant factor lost in efficiency due to the greater number of trees required by the larger spacing of permissible sizes.
The Leonardo numbers are similar to the Fibonacci numbers, and defined as:
- L(0) = L(1) = 1
- L(k+2) = L(k+1) + L(k) + 1
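For concreteness, the sequence this recurrence generates is 1, 1, 3, 5, 9, 15, 25, 41, 67, ... A minimal sketch in Python (illustrative only, not part of Dijkstra's presentation):

```python
def leonardo_numbers(limit):
    """Yield the Leonardo numbers 1, 1, 3, 5, 9, 15, ... up to `limit`."""
    a, b = 1, 1                # L(0), L(1)
    while a <= limit:
        yield a
        a, b = b, a + b + 1    # L(k+2) = L(k+1) + L(k) + 1

print(list(leonardo_numbers(100)))   # [1, 1, 3, 5, 9, 15, 25, 41, 67]
```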
A Leonardo tree of order k ≥ 2 is a binary tree with a root element, and two children which are themselves Leonardo trees, of orders k−1 and k−2. A Leonardo heap resembles a Fibonacci heap in that it is made up of a collection of heap-ordered Leonardo trees.
Each tree is an implicit binary tree of size L(k), and the heap consists of a list of trees of decreasing size and increasing root elements. Ordered left-to-right, the rightmost tree is the smallest and its root element is the global maximum.
The advantage of this custom heap over binary heaps is that if the input is already sorted, it takes only O(n) time to construct and deconstruct the heap, hence the better runtime.
Breaking the input up into a list of heaps is simple – the leftmost nodes of the array are made into the largest heap possible, and the remainder is likewise divided up. The list is constructed to maintain the following size properties (it can be proven[4] that this is always possible):
- No two trees have the same order. Except for L(1) = L(0), this means they will be different sizes. The list will therefore be a series of trees strictly decreasing in order.
- No two trees will have sizes that are consecutive Leonardo numbers, except for possibly the final two.
(In the perfect binary tree variant of smoothsort, the equivalent invariant is that no two trees have the same size, except possibly the last two.)
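The size properties can be seen concretely by decomposing n greedily into Leonardo numbers: repeatedly take the largest Leonardo number that still fits. A brief sketch (Python, illustrative only; the algorithm itself never computes this partition up front but builds such a partition one element at a time):

```python
def leonardo_decomposition(n):
    """Split n into tree sizes satisfying the size properties above.

    Returns (order, size) pairs, left to right, in strictly decreasing order;
    only the final two may have consecutive orders.
    """
    L = [1, 1]                       # Leonardo numbers L(0), L(1), ...
    while L[-1] < n:
        L.append(L[-1] + L[-2] + 1)

    trees = []
    k = len(L) - 1
    while n > 0:
        if L[k] <= n:                # greedily take the largest tree that fits
            trees.append((k, L[k]))
            n -= L[k]
        k -= 1
    return trees

print(leonardo_decomposition(11))    # [(4, 9), (1, 1), (0, 1)]
```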
An implicit Leonardo tree of order k and size L(k) is either a single node (for orders 0 and 1), or a left subtree of size L(k − 1), followed by a right subtree of size L(k − 2), and finally a root node. Each tree maintains the max-heap property that each node is always at least as large as either of its children (and thus the root of the tree is the largest element of all). The list of heaps as a whole maintains the order property that the root node of each tree is at least as large as the root node of its predecessor (to the left).
The consequence of this is that the root of the last (rightmost) tree in the list will always be the largest of the nodes, and, importantly, an array that is already sorted needs no rearrangement to be made into a valid Leonardo heap. This is the source of the adaptive qualities of the algorithm.
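A small sketch of the index arithmetic this layout implies: a tree of order k ≥ 2 whose root sits at array index r stores its right subtree (order k − 2) in the L(k − 2) positions ending at r − 1, and its left subtree (order k − 1) just before that. The helper name is illustrative, not from Dijkstra's paper; `L` is assumed to be a list of Leonardo numbers indexed by order.

```python
def child_roots(r, k, L):
    """Root indices of the two subtrees of an order-k tree rooted at index r."""
    right_root = r - 1               # the order k-2 subtree ends just before the root
    left_root = r - 1 - L[k - 2]     # the order k-1 subtree ends before the right one
    return left_root, right_root
```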
Like heapsort, the algorithm consists of two phases. In the first, the heap is grown by repeatedly adding the next unsorted element, and performing rearrangements to restore the heap properties.
In phase two, the root of the last tree in the list will be the largest element in the heap, and will therefore be in its correct, final position. We then reduce the series of heaps back down by removing the rightmost node (which stays in place) and performing re-arrangements to restore the heap invariants. When we are back down to a single heap of one element, the array is sorted.
Operations
Ignoring (for the moment) Dijkstra's optimisations, two operations are necessary: enlarge the heap by adding one element to the right, and shrink the heap by removing the rightmost element (the root of the last heap), while preserving three types of invariant properties:
- The heap property within each tree,
- The order property, keeping the root elements of the trees in order, and
- The size properties relating the sizes of the various trees.
Grow the heap by adding an element to the right
Enlarging the heap while maintaining the size properties can be done without any data motion at all by just rearranging the boundaries of the component trees:
- If the last two trees are of size L(k+1) and L(k) (i.e., have consecutive orders), combine them with the new element as the root of a larger tree of size L(k+2). Because only the last two trees may have consecutive orders, the third-last tree (if it exists) must have order at least k + 3, so the size property is maintained. This new tree may not yet have the heap property.
- If the sizes of the last two trees in the list are not consecutive Leonardo numbers, then we may simply add a new tree of size 1 to the list. This tree is taken to be of order 1, unless the rightmost tree already has order 1, in which case the new one-element tree is taken to be of order 0. Both cases are sketched below.
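A sketch of this bookkeeping, tracking only the list of tree orders (leftmost first); the function name is illustrative and the heap/order properties still have to be restored afterwards:

```python
def push_order(orders):
    """Update the list of tree orders when one element is appended to the array."""
    if len(orders) >= 2 and orders[-2] == orders[-1] + 1:
        # Last two trees have consecutive orders: fuse them, with the new
        # element as the root of a tree one order larger.
        orders[-2:] = [orders[-2] + 1]
    elif orders and orders[-1] == 1:
        orders.append(0)     # a second singleton is taken to be of order 0
    else:
        orders.append(1)     # otherwise the new singleton is of order 1
    return orders
```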
After this, the newly added element must be moved to the correct location to maintain the heap and order properties. Because there are O(log n) trees, each of depth O(log n), it is simple to do this in O(log n) time, as follows:
- First, restore the order property by moving the new element left until it is at the root of the correct tree. Begin with the rightmost tree (the one that has just been created) as the "current" tree.
- While there is a tree to the left of the current tree whose root is greater than the new element (the current root) and greater than both of the new element's children (if any),
- swap the new element with the root of the tree to the left. This preserves the heap property of the current tree. The tree on the left then becomes the current tree, and this step is repeated.
- Second, restore the heap property to the current tree by "sifting down" the new element to its correct position. (This is a standard heap operation, as used in heapsort.)
- While the current tree has a size greater than 1 and the root of either child is greater than the current root,
- Swap the greater child root with the current root. That child tree becomes the current tree and sift-down continues.
The sift-down operation is slightly simpler than in binary heaps, because each node has either two children or zero. One does not need to handle the condition of one of the child heaps not being present.
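The two loops above can be sketched as follows. This is illustrative Python, not Dijkstra's code: `roots` is an assumed list of (root_index, order) pairs for the trees from left to right, `L` maps an order to the corresponding Leonardo number, and none of the optimisations described below are applied.

```python
def sift_down(a, root, order, L):
    """Standard sift-down within one implicit Leonardo tree."""
    while order >= 2:                     # trees of order 0 and 1 have no children
        right = root - 1                  # right subtree (order k-2) ends at root-1
        left = right - L[order - 2]       # left subtree (order k-1) ends before it
        # Every non-leaf has exactly two children, so there is no missing-child case.
        if a[left] >= a[right]:
            child, child_order = left, order - 1
        else:
            child, child_order = right, order - 2
        if a[root] >= a[child]:
            break                         # heap property holds
        a[root], a[child] = a[child], a[root]
        root, order = child, child_order


def restore_after_grow(a, roots, L):
    """Restore the order and heap properties after a new element has become
    the root of the rightmost tree."""
    i = len(roots) - 1
    while i > 0:
        root, order = roots[i]
        left_root = roots[i - 1][0]
        # The root to the left must beat the new element *and* both of the
        # new element's children before the new element moves left.
        if order >= 2:
            biggest = max(a[root], a[root - 1], a[root - 1 - L[order - 2]])
        else:
            biggest = a[root]
        if a[left_root] <= biggest:
            break
        a[left_root], a[root] = a[root], a[left_root]
        i -= 1
    # Finally, sift the element down inside the tree where it came to rest.
    sift_down(a, roots[i][0], roots[i][1], L)
```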
Optimisation
There are two opportunities to omit some operations during heap construction.
- If the new tree is going to become part of a larger tree before we are done, then don't bother establishing the order property: it only needs to be done when a tree has reached its final size.
- To do this, look at how many unprocessed elements remain to the right of the new tree of order k (size L(k)). If there are L(k − 1) + 1 or more, then this new tree is going to be merged.
- Do not maintain the heap property of the rightmost tree. If that tree ends up as one of the final trees of the heap, restoring the order property will also restore its heap property. Whenever a new tree is added to the list, the previously rightmost tree is no longer rightmost and its heap property must then be restored, but over the whole construction this takes O(n) time, whereas maintaining it at every step would take O(n log n) time.
Shrink the heap by removing the rightmost element
This is the reverse of the grow process:
- If the rightmost tree has a size of 1 (i.e., L(1) or L(0)), then this is trivial; simply remove that rightmost tree.
- If the rightmost tree has size greater than 1, then remove the root, exposing the two sub-trees as members of the list. This preserves the size property. Restore the order property using the same algorithm as in construction; first on the left subtree, and then on the right one.
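A sketch of the corresponding bookkeeping, under the same assumed (root_index, order) representation used in the sketches above:

```python
def pop_rightmost(roots, L):
    """Remove the root of the rightmost tree (it already sits in its final
    array position).  If that tree had children, its two subtrees join the
    list as separate trees; the order property must then be restored for
    each of them, the left one first, with the same leftward walk used
    during construction."""
    root, order = roots.pop()
    if order >= 2:
        roots.append((root - 1 - L[order - 2], order - 1))   # exposed left subtree
        roots.append((root - 1, order - 2))                  # exposed right subtree
    return root      # index of the element that has just been finalised
```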
Optimisation
- Unlike when constructing the heap, when restoring the order property after removing a tree's root, we know that the newly exposed trees satisfy the heap property. Therefore, it is not necessary to compare the preceding tree's root to the children of the newly exposed root. Just compare it to the root. (This only applies to the first step. After an element has been swapped left once, it is necessary to compare to both children.)
Analysis
Smoothsort takes O(n) time to process a presorted array and O(n log n) in the worst case. However, it does not handle all nearly-sorted sequences optimally. Using the count of inversions as a measure of un-sortedness (the number of pairs of indices i and j with i < j and A[i] > A[j]; for randomly ordered input this is approximately n²/4), there are possible input sequences with O(n log n) inversions which cause it to take Ω(n log n) time, whereas other adaptive sorting algorithms can solve these cases in O(n log log n) time.[2]
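For reference, the inversion count used as the measure here can be written as the following quadratic sketch (for illustration only; efficient counting would use a divide-and-conquer method):

```python
def inversions(a):
    """Count pairs i < j with a[i] > a[j]."""
    return sum(a[i] > a[j] for i in range(len(a)) for j in range(i + 1, len(a)))

print(inversions([3, 1, 2]))   # 2, from the pairs (3, 1) and (3, 2)
```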
The smoothsort algorithm needs to be able to hold in memory the sizes of all of the trees in the Leonardo heap. Since they are sorted by order and all orders are distinct, this is usually done using a bit vector indicating which orders are present. Moreover, since the largest order is at most O(log n), these bits can be encoded in O(1) machine words, assuming a transdichotomous machine model.
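A sketch of this encoding in Python, where a single integer stands in for the machine words:

```python
# Order k is present in the heap  <=>  bit k of `mask` is set.
mask = 0
mask |= 1 << 4                                    # a tree of order 4 is present
mask |= 1 << 1                                    # and one of order 1
has_order_1 = bool(mask & (1 << 1))               # True
smallest_order = (mask & -mask).bit_length() - 1  # 1: the order of the rightmost tree
```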
References
- ^ Dijkstra, Edsger W. (1981). Smoothsort – an alternative to sorting in situ (EWD-796a) (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin. (transcription) "One can also raise the question why I have not chosen as available stretch lengths: ... 63 31 15 7 3 1 which seems attractive since each stretch can then be viewed as the postorder traversal of a balanced binary tree. In addition, the recurrence relation would be simpler. But I know why I chose the Leonardo numbers:"
- ^ Hertel, Stefan (13 May 1983). "Smoothsort's behavior on presorted sequences" (PDF). Information Processing Letters. 16 (4): 165–170. doi:10.1016/0020-0190(83)90116-3.
- ^ http://www.codeproject.com/Articles/26048/Fastest-In-Place-Stable-Sort
- ^ Smoothsort Demystified. Keithschwarz.com. Retrieved on 2010-11-20.