In school, we all learn how to multiply, add and divide numbers. These are simple algorithms, and it is easy to imagine how computers perform the same kind of calculations. And this is mostly what happens: instead of decimal, a binary representation is used, and instead of humans, transistors perform the operations, but the rest is quite similar.
But how are more complicated operations, such as square roots, calculated?
Iterative methods
There are many ways to calculate the square root, but they all have a common property: they start out with an initial guess and then refine the guess over many steps. These are called iterative methods.
The idea is simple: checking whether a number $x$ is the square root of $S$ is easy, we only have to compare $x \cdot x$ to $S$. So we can start from a guess, check it, and then keep adjusting it until $x \cdot x$ is close enough to $S$.
The question is, how should we change our guess: in which direction, and by how much? One technique is Newton's method. It finds a root of a function $f$, that is, a point where $f(x) = 0$, by repeatedly jumping from the current guess to the point where the tangent line of $f$ crosses zero:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
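To make that update rule concrete, here is a minimal Python sketch of the general method; the function name, the fixed step count and passing the derivative in explicitly are my own illustrative choices:

```python
def newton(f, f_prime, x0, steps=20):
    """Newton's method: repeatedly move the guess to where the
    tangent line of f at the current guess crosses zero."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / f_prime(x)
    return x
```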
Note that this solves an equation, but we need to take a square root. Well, the trick is simple. Just define

$$f(x) = x^2 - S$$

The root of this function is exactly $\sqrt{S}$, the number we are looking for.
We can just substitute this function into Newton's method:

$$x_{n+1} = x_n - \frac{x_n^2 - S}{2x_n} = \frac{1}{2}\left(x_n + \frac{S}{x_n}\right)$$

Starting from any positive guess, this sequence converges quickly to $\sqrt{S}$.
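As a sanity check, here is a small Python sketch of this specific iteration; the helper name and the stopping tolerance are my own, not Algeo's:

```python
def newton_sqrt(s, tolerance=1e-12):
    """Square root via x <- (x + s/x) / 2, i.e. Newton's method
    applied to f(x) = x^2 - s. Assumes s >= 0."""
    if s < 0.0:
        raise ValueError("negative input")
    if s == 0.0:
        return 0.0
    x = s if s > 1.0 else 1.0        # any positive starting guess works
    while abs(x * x - s) > tolerance * s:
        x = 0.5 * (x + s / x)
    return x

print(newton_sqrt(2.0))              # 1.414213562...
```

Note the division by $x_n$ in every step; that is exactly what the next section gets rid of.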
Newton’s method improved
Modern processors can calculate additions and multiplications very fast, at almost the same speed. Divisions, however, are difficult and much slower. Consequently, if our square root algorithm uses division, it will be slow too.
We can apply a clever trick to remove all divisions from our formula: calculate $\frac{1}{\sqrt{S}}$ instead of $\sqrt{S}$. Applying Newton's method to $f(x) = \frac{1}{x^2} - S$ gives

$$x_{n+1} = \frac{x_n}{2}\left(3 - S x_n^2\right)$$

which needs only multiplications and additions (halving is just a multiplication by $0.5$). At the end, a single extra multiplication recovers the result: $\sqrt{S} = S \cdot \frac{1}{\sqrt{S}}$.
This is the method that Algeo uses and also many other calculators.
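For illustration, here is a rough Python sketch of this division-free scheme. The seed value, the range reduction by powers of four and the iteration count are my own simplifications; real implementations typically get their starting guess from a small lookup table:

```python
def sqrt_no_division(s, iterations=7):
    """Compute sqrt(s) by iterating x <- x * (1.5 - 0.5 * s * x * x),
    which converges to 1/sqrt(s), then multiplying by s at the end.
    The loop contains no division. Assumes s >= 0."""
    if s < 0.0:
        raise ValueError("negative input")
    if s == 0.0:
        return 0.0
    # Pull out powers of four so s lands in [1, 4); this only changes
    # the floating-point exponent and needs no real division.
    scale = 1.0
    while s >= 4.0:
        s *= 0.25
        scale *= 2.0
    while s < 1.0:
        s *= 4.0
        scale *= 0.5
    x = 0.5                          # decent guess of 1/sqrt(s) on [1, 4)
    for _ in range(iterations):
        x = x * (1.5 - 0.5 * s * x * x)
    return scale * s * x             # sqrt(s) = s * (1/sqrt(s))

print(sqrt_no_division(2.0))         # ~1.4142135623730951
```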
Other methods
There are other ways to calculate the square root. In the last section, I said divisions are expensive, so it's best to avoid them. However, many decades ago even multiplications were expensive; some processors didn't even have a built-in multiply instruction! For those cases, there is CORDIC, which uses only additions, subtractions and bit shifts.
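CORDIC itself is a bigger topic, but the flavour of multiplication-free arithmetic is easy to show. Below is a classic digit-by-digit (shift-and-subtract) integer square root, a related technique rather than CORDIC proper, that uses only additions, subtractions, comparisons and shifts:

```python
def isqrt_shift(n):
    """Integer square root (floor of sqrt(n)) using only shifts,
    additions, subtractions and comparisons; no multiplication."""
    if n < 0:
        raise ValueError("negative input")
    bit = 1
    while bit <= n:                  # find the highest power of four <= n
        bit <<= 2
    bit >>= 2
    result = 0
    while bit:
        if n >= result + bit:
            n -= result + bit
            result = (result >> 1) + bit
        else:
            result >>= 1
        bit >>= 2
    return result

print(isqrt_shift(1000000))          # 1000
```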
Some processors use Goldschmidt's algorithm, which calculates $\sqrt{S}$ and $\frac{1}{\sqrt{S}}$ at the same time, again using only multiplications and additions in its inner loop.
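As a rough Python sketch of the idea (the seeding strategy and iteration count here are simplified guesses of mine, not what any particular processor does):

```python
def goldschmidt_sqrt(s, iterations=7):
    """Goldschmidt-style iteration: x converges to sqrt(s) while
    2*h converges to 1/sqrt(s); the loop uses only multiplications
    and additions. Assumes s >= 0."""
    if s < 0.0:
        raise ValueError("negative input")
    if s == 0.0:
        return 0.0
    # Scale s into [1, 4) by powers of four so a fixed seed is good
    # enough; hardware would read the seed from a small table instead.
    scale = 1.0
    while s >= 4.0:
        s *= 0.25
        scale *= 2.0
    while s < 1.0:
        s *= 4.0
        scale *= 0.5
    y = 0.5                          # rough estimate of 1/sqrt(s) on [1, 4)
    x = s * y                        # converges to sqrt(s)
    h = y * 0.5                      # converges to 1 / (2 * sqrt(s))
    for _ in range(iterations):
        r = 0.5 - x * h              # how far x*h is from its fixed point 0.5
        x = x + x * r
        h = h + h * r
    return scale * x

print(goldschmidt_sqrt(2.0))         # ~1.4142135623730951
```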