Given the sequence of integers in the range $[0, 2^n)$, its bit-reversal permutation is the sequence obtained by reversing the $n$-bit binary representation of each value.
For example, for $n = 3$:
```
Input:  0, 1, 2, 3, 4, 5, 6, 7 == 000, 001, 010, 011, 100, 101, 110, 111
Output: 0, 4, 2, 6, 1, 5, 3, 7 == 000, 100, 010, 110, 001, 101, 011, 111
```
The bit-reversal permutation is commonly used in Fast Fourier Transform algorithms to reorder the input sequence into an order that makes computing the recursively smaller DFTs convenient (the elements needed at each stage end up adjacent to each other).
A naive way of computing the bit-reversal permutation is to iterate through each value in the input sequence and reverse its bits one at a time:
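A sketch of the naive approach in Python (the post doesn't commit to a language, so the names here are my own):

```python
def bit_reverse_naive(values, n):
    """Naively bit-reverse each n-bit value by walking its bits one at a time."""
    out = []
    for k in values:
        rev = 0
        for bit in range(n):
            # Peel bits off k starting from the least significant,
            # pushing them onto rev from the most significant end.
            rev = (rev << 1) | ((k >> bit) & 1)
        out.append(rev)
    return out

print(bit_reverse_naive(range(8), 3))  # [0, 4, 2, 6, 1, 5, 3, 7]
```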
The naive algorithm loops through every value in the sequence, and then through every bit of every value, so its performance is $O(N \log N)$ (each of the $N$ values requires $n = \log_2 N$ bit operations). The Fast Bit Reversal Algorithms paper by Elster [1] presents a method for computing the bit-reversal permutation in $O(N)$ time sequentially, by mapping values in the input sequence directly to values in the output sequence.
The key observation made in this paper is that, as you iterate through the values of the input sequence, each value in the output sequence can be factored into a power of two multiplied by an odd constant, where the exponent of the power of two is dependent on the most significant bit of the input value:
Running through with $n = 3$, and defining $q$ as the position of the most significant set bit in the input number (with $q = 0$ for an input of zero):
| Input Number ($k$) | MSB ($q$) | Output Number ($X_k$) | Factorisation |
|---|---|---|---|
| 0 (000) | 0 | 0 (000) | $0$ |
| 1 (001) | 1 | 4 (100) | $2^2 \times 1$ |
| 2 (010) | 2 | 2 (010) | $2^1 \times 1$ |
| 3 (011) | 2 | 6 (110) | $2^1 \times 3$ |
| 4 (100) | 3 | 1 (001) | $2^0 \times 1$ |
| 5 (101) | 3 | 5 (101) | $2^0 \times 5$ |
| 6 (110) | 3 | 3 (011) | $2^0 \times 3$ |
| 7 (111) | 3 | 7 (111) | $2^0 \times 7$ |
By factoring out a power of two, our focus is now shifted to finding the sequence of odd constants $c_k$ in the factorisation $X_k = 2^{n-q} c_k$ (for example, $c_k = 1, 5, 3, 7$ for $k = 4, \ldots, 7$, when $q = 3$). The paper shows that this sequence can be computed sequentially, as the following relations hold:

$$c_{2k} = c_k$$

$$c_{2k+1} = c_{2k} + 2^q$$
For example, continuing to use $n = 3$, suppose we already know the value of $c_2 = 1$ (where $q = 2$). Then we can trivially compute the values of $c_4$ and $c_5$:

$$c_4 = c_2 = 1$$

$$c_5 = c_4 + 2^2 = 5$$
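As a quick sanity check, a small Python snippet (names my own) can confirm these relations against brute-force bit reversal for $n = 3$:

```python
def reverse_bits(k, n):
    """Brute-force reference: reverse the n-bit binary representation of k."""
    return int(format(k, f'0{n}b')[::-1], 2)

n, N = 3, 8
for k in range(1, N // 2):
    q = k.bit_length()                            # most significant bit number of k
    c_k = reverse_bits(k, n) >> (n - q)           # odd constant in X_k = 2^(n-q) * c_k
    c_2k = reverse_bits(2 * k, n) >> (n - q - 1)  # MSB of 2k is q + 1
    c_2k1 = reverse_bits(2 * k + 1, n) >> (n - q - 1)
    assert c_2k == c_k                            # c_2k = c_k
    assert c_2k1 == c_2k + 2 ** q                 # c_2k+1 = c_2k + 2^q
```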
The paper also shows that the factor $2^{n-q}$ can be generated directly. Suppose you have a value $k$, and its most significant bit number $q$. To find the value of $\mathrm{MSB}(2k)$, we realise that multiplying by two is equivalent to left shifting the value of $k$ by 1, and so $\mathrm{MSB}(2k) = q + 1$.
Multiplying a value by two will always result in an even number (or to put it another way, left shifting by 1 will always result in a least significant bit of zero). It therefore follows that $\mathrm{MSB}(2k+1) = \mathrm{MSB}(2k) = q + 1$ (i.e. adding 1 to the value of $2k$ has no effect on its most significant bit).
With this, we now have all the information we need to directly compute a bit-reversed value from values computed earlier, and after some simplification, the following relations can be found:

$$X_{2k} = X_k / 2$$

$$X_{2k+1} = X_{2k} + N / 2$$
We can therefore write an algorithm to directly compute future values of the permutation given the values we have already computed:
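Sketching this in Python (the function name is my own), the two relations above fill the output table in a single linear pass:

```python
def bit_reversal_permutation(n):
    """Compute the bit-reversal permutation of 0..2^n - 1 in O(N) time."""
    N = 1 << n
    X = [0] * N
    for k in range(N // 2):
        # Both relations read only entries already computed (k <= 2k).
        X[2 * k] = X[k] >> 1              # X_2k   = X_k / 2
        X[2 * k + 1] = X[2 * k] + N // 2  # X_2k+1 = X_2k + N/2
    return X

print(bit_reversal_permutation(3))  # [0, 4, 2, 6, 1, 5, 3, 7]
```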
Despite running in linear time, the algorithm does require $O(N)$ storage. It should be noted, though, that if you are performing many DFTs of the same size, you can generate the permutation table once and reuse it across all of those DFTs, amortising the cost of the bit-reversal step. Since the paper was written (in 1989!) the gap between compute and memory performance has grown significantly larger, and the cost of repeated loads and stores can be detrimental on modern architectures. Other, cache-friendly bit-reversal algorithms do exist; I will save those for a future post!
[1] Elster, A.C., 1989. Fast bit-reversal algorithms. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP-89), pp. 1099–1102. IEEE.