Computers can divide by zero
We learned, many times, that we can't divide by zero.
First, in algebra, we learn that division by zero is impossible. Then, in calculus, we learn that it was a lie, or at least a simplification: expressions like 1/x grow without bound in magnitude as x approaches zero, so on the extended real line the result can meaningfully be treated as infinite.
Then, in programming, we learn that division by zero is impossible and causes exceptions. It turns out that this was a lie as well: IEEE 754 defines the behaviour of division by zero for floating-point numbers (see page 10 of the standard).
Recall that in most implementations the floating-point zero is signed: it can be either -0.0 or 0.0.
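To see the signed zero in action, here is a minimal Java sketch (the class name is just illustrative): the two zeros compare equal, yet they behave differently as divisors.

// SignedZero.java -- illustrative name, not from the original article
public class SignedZero {
    public static void main(String[] args) {
        double pz = 0.0;   // positive zero
        double nz = -0.0;  // negative zero

        System.out.println(nz);        // -0.0  (the sign is stored and printed)
        System.out.println(pz == nz);  // true  (the two zeros compare equal)
        System.out.println(1.0 / pz);  // Infinity
        System.out.println(1.0 / nz);  // -Infinity (the divisor's sign matters)
    }
}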
With that in mind, let's look at interesting cases defined by IEEE 754:
// these three make sense from a calculus perspective if we think of 0.0 as a value very close to zero, but not zero itself
1.0/0.0 = Infinity
(±0.0)^(-2) = Infinity
(-0.0)^(-3) = -Infinity
// why does (-0.0)^(-3) get a negative sign while the others are positive?
// because IEEE 754 treats negative odd-integer exponents specially: pow(±0, y) = ±∞ when y is a negative odd integer (the zero's sign carries through), and +∞ for any other negative y
(-0.0)^(-2.99) = Infinity
(-0.0)^(-3) = -Infinity
(-0.0)^(-3.01) = Infinity
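If you want to reproduce these cases yourself, here is a minimal Java sketch (the class name is just illustrative); Java's double arithmetic and Math.pow follow IEEE 754, so the printed values match the list above.

// PowCases.java -- illustrative name
public class PowCases {
    public static void main(String[] args) {
        double nz = -0.0;  // negative zero

        System.out.println(1.0 / 0.0);            // Infinity
        System.out.println(Math.pow(0.0, -2.0));  // Infinity
        System.out.println(Math.pow(nz, -2.0));   // Infinity (same for either zero)
        System.out.println(Math.pow(nz, -3.0));   // -Infinity (negative odd-integer exponent keeps the sign)
        System.out.println(Math.pow(nz, -2.99));  // Infinity
        System.out.println(Math.pow(nz, -3.01));  // Infinity
    }
}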
Why should I care?
I would not recommend relying on this behaviour in your programs, as it is likely to confuse future code readers and maintainers, but it is useful to keep in mind when thinking about edge cases and when debugging.
Potential problems arise when code tries to handle division by zero by catching exceptions. We now know that division by 0.0 raises no exception, so nothing is caught and the resulting infinity quietly propagates, introducing issues further down the line, as the sketch below shows.
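As a sketch of that failure mode (class and variable names are just illustrative), the following Java program catches integer division by zero but never enters the catch block for floating-point division; checking the result with Double.isInfinite or Double.isNaN is what actually detects the problem.

// DivisionHandling.java -- illustrative name
public class DivisionHandling {
    public static void main(String[] args) {
        int a = 1, b = 0;
        try {
            System.out.println(a / b);            // integer division: throws
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage());  // caught: / by zero
        }

        double x = 1.0, y = 0.0;
        try {
            System.out.println(x / y);            // prints Infinity, no exception
        } catch (ArithmeticException e) {
            System.out.println("never reached");  // this handler never fires
        }

        double q = x / y;
        if (Double.isInfinite(q) || Double.isNaN(q)) {
            System.out.println("division produced a non-finite result");  // this check is what notices it
        }
    }
}

The same point holds in languages without exceptions: IEEE 754's default handling is non-stop, so the division itself does not trap, and only an explicit check on the result (or on the floating-point status flags) will notice it.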