Sqrt decomposition
Motivation
- A range operation on an array costs O(n); we want to improve it to O(sqrt(n)) with O(sqrt(n)) additional memory
- There may be write operations alongside reads, and the write/read ratio could be high
Why does each block have size sqrt(n)?
The reasoning is similar to the master theorem: we balance the number of subproblems against the cost of combining results from subproblems. The worst case is minimized when the two parts of the total cost are equal.
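A quick worked check (a sketch, assuming block size b over an array of length n): a range query scans at most 2b elements in the two partial blocks at the ends plus at most n/b whole-block aggregates, so

```latex
T(b) = O\!\left(b + \frac{n}{b}\right),
\qquad
b + \frac{n}{b} \ge 2\sqrt{n} \text{ (AM-GM)},
\quad \text{with equality at } b = \sqrt{n}.
```

Setting b = sqrt(n) makes the two terms equal, which is exactly the balancing trade-off above.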
Examples
- No write operations, answering arbitrary range queries: each query would cost O(n) without this technique
- A mixture of read and write operations, one costing O(1) and the other O(n), so the overall per-operation cost is O(n). After applying the technique, the cost drops to O(sqrt(n)) (see the sketch after this list)
- Works best when the write/read ratio may be high
- Often an alternative to a segment tree
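A minimal sketch of the read-write case above: range-sum queries with point updates (the struct name SqrtDecomp and the SUM aggregate are my choices for illustration). A write adjusts one block aggregate in O(1); a read scans at most two partial blocks element by element and takes whole blocks from their precomputed sums, for O(sqrt(n)) total.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Sqrt decomposition: range-sum queries with point updates.
// Block size b ~ sqrt(n); blockSum[k] caches the SUM aggregate of block k.
struct SqrtDecomp {
    int n, b;                        // array length, block size
    std::vector<long long> a;        // the underlying array
    std::vector<long long> blockSum; // one aggregate per block

    explicit SqrtDecomp(const std::vector<long long>& init)
        : n((int)init.size()),
          b(std::max(1, (int)std::sqrt((double)init.size()))),
          a(init),
          blockSum((n + b - 1) / b, 0) {
        for (int i = 0; i < n; ++i) blockSum[i / b] += a[i];
    }

    // O(1) write: fix the element and its block's aggregate.
    void update(int i, long long v) {
        blockSum[i / b] += v - a[i];
        a[i] = v;
    }

    // O(sqrt(n)) read over [l, r]: partial blocks element by element,
    // full blocks via their cached aggregate.
    long long query(int l, int r) const {
        long long s = 0;
        while (l <= r && l % b != 0) s += a[l++];                // left partial block
        while (l + b - 1 <= r) { s += blockSum[l / b]; l += b; } // full blocks
        while (l <= r) s += a[l++];                              // right partial block
        return s;
    }
};

int main() {
    SqrtDecomp sd({3, 1, 4, 1, 5, 9, 2, 6});
    std::printf("%lld\n", sd.query(1, 5)); // 1+4+1+5+9 = 20
    sd.update(3, 10);                      // a[3]: 1 -> 10
    std::printf("%lld\n", sd.query(1, 5)); // 1+4+10+5+9 = 29
    return 0;
}
```

Note the O(1) update is specific to invertible aggregates like SUM: for MAX or MIN, a write that lowers a block's current maximum forces an O(sqrt(n)) rescan of that block.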
Compare with meet in the middle
- After computing each sqrt bucket, we usually keep only an aggregate result per bucket, e.g., SUM, MAX, MIN; in MitM, we keep the full result set of a brute-force search over each half
- After sqrt decomposition, a cross-bucket answer either does not exist or can be computed quickly from the per-bucket aggregates. In MitM, we have to combine the search results from both halves to find the solution (see the sketch after this list)
- Because MitM splits the problem into only two halves, an update invalidates an entire half's result set, so MitM is less likely to handle updates well compared with sqrt decomposition
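For contrast, a minimal MitM sketch, assuming the classic subset-sum setting (my choice of example, not fixed by these notes): each half enumerates its full set of subset sums, and the answer only appears after combining the two sets; there is no small per-half aggregate to cache, which is also why updates are awkward.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// All subset sums of a small slice: the "full result set" of one half.
std::vector<long long> allSums(const std::vector<long long>& v) {
    std::vector<long long> sums = {0};
    for (long long x : v) {
        size_t sz = sums.size();
        for (size_t i = 0; i < sz; ++i) sums.push_back(sums[i] + x);
    }
    return sums;
}

// Meet in the middle: does some subset of v sum to target?
// Enumerate each half fully (2^(n/2) sums each), then combine via a
// sorted lookup -- O(2^(n/2) * n) instead of O(2^n).
bool subsetSum(const std::vector<long long>& v, long long target) {
    std::vector<long long> left(v.begin(), v.begin() + v.size() / 2);
    std::vector<long long> right(v.begin() + v.size() / 2, v.end());
    std::vector<long long> ls = allSums(left), rs = allSums(right);
    std::sort(rs.begin(), rs.end());
    for (long long s : ls)  // combine: one lookup per left-half sum
        if (std::binary_search(rs.begin(), rs.end(), target - s)) return true;
    return false;
}

int main() {
    std::vector<long long> v = {7, 3, 12, 5, 8};
    std::printf("%d\n", (int)subsetSum(v, 20)); // 1: e.g. 12 + 8 = 20
    std::printf("%d\n", (int)subsetSum(v, 2));  // 0: no subset sums to 2
    return 0;
}
```

Changing one element of v here invalidates an entire half's result set (all 2^(n/2) sums must be rebuilt), whereas in sqrt decomposition a point write touches only one of the sqrt(n) buckets.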