The Data Compression course covers a variety of compression techniques that must be learned. Some are simple and some are complicated, but none are as hard as learning how computers actually work.
Possibly the simplest, this scheme is purely of research interest and isn't really used anywhere. We will start with the following properties:
Now we start the steps:
Using the probabilities in \(P\), create the cumulative probability list \(P_C\), starting at \(0\). Once converted to binary, these values will become the representations of our words. This requires knowing how many bits of the binary form to include; however, that is step 2.
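Step 1 can be sketched in a few lines of Python. The alphabet here is an example of my own (the notes' actual probabilities are in the display above), chosen dyadic so the later numbers come out clean:

```python
from itertools import accumulate

# Example probabilities (assumed, not the notes' actual alphabet).
P = [0.5, 0.25, 0.125, 0.125]

# Cumulative probabilities, starting at 0: each entry is the sum of
# the probabilities of all words before it.
P_C = [0.0] + list(accumulate(P))[:-1]

print(P_C)  # [0.0, 0.5, 0.75, 0.875]
```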
To find the number of bits each word requires, we need the self-information of the word. This is computed with the self-information function: This gives: These are the lengths of the expansions for the probabilities found in step 1.
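As a sketch of step 2, using the same assumed example probabilities: the self-information of a word with probability \(p\) is \(-\log_2 p\), and the codeword length is that value rounded up to a whole number of bits.

```python
import math

# Example probabilities (assumed, not the notes' actual alphabet).
P = [0.5, 0.25, 0.125, 0.125]

# Self-information -log2(p), rounded up to whole bits.
lengths = [math.ceil(-math.log2(p)) for p in P]

print(lengths)  # [1, 2, 3, 3]
```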
Now we can convert \(P_C\) to binary, up to the lengths given by \(I(A)\); this can be done any way you want. I use the multiplying expansion technique (not shown). This gives:
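One way to do the expansion is by repeated doubling: double the fraction, take the integer part as the next digit, and keep the remainder. This is a sketch under my earlier assumed example values, not the notes' actual alphabet:

```python
def binary_expansion(x, bits):
    """Expand a fraction 0 <= x < 1 to `bits` binary digits by
    repeated doubling."""
    digits = []
    for _ in range(bits):
        x *= 2
        digit = int(x)      # 1 if the doubled value crossed 1.0
        digits.append(str(digit))
        x -= digit          # keep only the fractional part
    return "".join(digits)

# Cumulative probabilities and lengths from the earlier example steps.
P_C = [0.0, 0.5, 0.75, 0.875]
lengths = [1, 2, 3, 3]

codewords = [binary_expansion(c, l) for c, l in zip(P_C, lengths)]
print(codewords)  # ['0', '10', '110', '111']
```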
These are the expansions for the respective codewords: This gives a uniquely decodable prefix code, but it may not be optimal; the average compression length is:
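The average compression length is the expected number of bits per word, \(\sum_i p_i l_i\). Continuing the assumed example:

```python
P = [0.5, 0.25, 0.125, 0.125]  # example probabilities (assumed)
lengths = [1, 2, 3, 3]         # Shannon code lengths for them

# Expected bits per word: sum of p_i * l_i.
avg = sum(p * l for p, l in zip(P, lengths))
print(avg)  # 1.75
```

With dyadic probabilities like these the lengths equal the self-information exactly, so the code happens to be optimal; for general probabilities the rounding-up can cost up to one bit per word.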
Shannon-Fano coding is very hard to describe mathematically. It leverages the property that a binary tree creates a prefix code if its leaves represent the words. We will start with a longer code alphabet than before:
The first step is to sort the alphabet by the probabilities:
Now that the probabilities are sorted, make this list the “root” of the tree, then split it into two groups whose total probabilities are as close to equal as possible. Append 0 to the left group's members and 1 to the right group's.
Repeat step 2 for each of the groups until you get groups of size 1; this is essentially constructing a tree: or, as a tree:
This gives a uniquely decodable prefix code, but it may not be optimal; the average compression length is:
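Despite being hard to write down mathematically, the recursive splitting is short in code. This is a sketch over a made-up five-symbol alphabet (the notes' actual alphabet is in the displays above), already sorted by probability:

```python
def shannon_fano(symbols):
    """Recursively split a sorted list of (symbol, probability) pairs,
    prepending '0' for the left group and '1' for the right."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    # Find the split point that balances probability mass best.
    acc, split, best_diff = 0.0, 1, float("inf")
    for i in range(1, len(symbols)):
        acc += symbols[i - 1][1]
        diff = abs(acc - (total - acc))
        if diff < best_diff:
            best_diff, split = diff, i
    codes = {}
    for sym, code in shannon_fano(symbols[:split]).items():
        codes[sym] = "0" + code
    for sym, code in shannon_fano(symbols[split:]).items():
        codes[sym] = "1" + code
    return codes

# Example alphabet (assumed), sorted by probability:
alphabet = [("a", 0.4), ("b", 0.3), ("c", 0.15), ("d", 0.1), ("e", 0.05)]
print(shannon_fano(alphabet))
# {'a': '0', 'b': '10', 'c': '110', 'd': '1110', 'e': '1111'}
```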
Huffman coding will always give an optimal tree, but again it is hard to describe mathematically. It uses forests of nodes that are joined in a certain order to create a tree, just as in Shannon-Fano coding. We will use the same code alphabet as in the Shannon-Fano example:
A node has two properties: the contained elements and the cumulative probability of those elements. For each element in the alphabet, create a node:
Create a single new node; this is the parent of the two nodes in your forest with the lowest cumulative probabilities:
Repeat step 2 until only a single node, the root of the tree, remains:
Add numbers to the tree branches; the codeword for each word is the sequence of numbers met on the path from the root down to its leaf.
It is easier to see as a gif: once the final tree is built, numbers can be applied to the branches and the codewords read off. Because the 0-1 choice at each branch is arbitrary, this will not produce unique codes. We can produce the following coded alphabet (it can be seen, by swapping 1s and 0s in the above, that this is not unique). This will also produce an optimal code:
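The forest-merging steps above can be sketched with a min-heap standing in for the forest. The alphabet is again my assumed five-symbol example, not the notes' actual one, and the heap's tie-breaking fixes one of the many equally optimal 0-1 labellings:

```python
import heapq

def huffman(probabilities):
    """Build Huffman codes from a {symbol: probability} dict,
    using a min-heap as the forest of nodes."""
    # Heap entries are (cumulative probability, tiebreak, node); a node
    # is either a bare symbol (leaf) or a (left, right) pair.
    heap = [(p, i, sym) for i, (sym, p) in enumerate(probabilities.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)    # the two nodes with the
        p2, _, right = heapq.heappop(heap)   # lowest cumulative probability
        heapq.heappush(heap, (p1 + p2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):          # internal node: recurse
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:                                # leaf: record the codeword
            codes[node] = code
    walk(heap[0][2], "")
    return codes

# Example alphabet (assumed):
codes = huffman({"a": 0.4, "b": 0.3, "c": 0.15, "d": 0.1, "e": 0.05})
print(codes)
```

Swapping which popped node becomes the left child flips 0s and 1s in the output, which is exactly the non-uniqueness described above; every such choice yields the same optimal codeword lengths.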