Some time ago, I was annoyed by self log: 2, which was implemented as (self ln / 2 ln).

The problem is that it is inexact for some trivial cases. Whatever the libm used by the VM, we'll always find an integer n such that (1.0 timesTwoPower: n) ln / 2 ln ~= n.
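The same claim can be checked outside Smalltalk; here is a sketch in Python, where math.ldexp plays the role of timesTwoPower: (which exponents fail depends on the platform's libm, so the code searches rather than assuming a particular n):

```python
import math

# For some integer n, computing log2 as ln(x) / ln(2) is inexact even
# though x = 2**n is an exact power of two and n is exactly representable.
failures = [n for n in range(1, 1024)
            if math.log(math.ldexp(1.0, n)) / math.log(2.0) != n]
print("failing exponents:", len(failures), "first few:", failures[:5])
```

On a typical libm the list is far from empty: two rounding errors (one in ln, one in the division) are enough to push the quotient off the integer.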

Rather than hacking the VM and adding a primitive, I have implemented the log2 function in Squeak like this, which is much simpler:

^self significand ln / Ln2 + self exponent

In Squeak, every Float (but nan as usual) verifies:

self = (self significand timesTwoPower: self exponent).

Also, every finite Float (but 0.0 and -0.0) - even the denormals - verifies the following (which means that the exponent is more involved than just extracting the exponent bits):

1 <= self significand and: [self significand < 2].
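These two invariants have a close Python analogue (a sketch for illustration, not Squeak's actual code): math.frexp returns m in [0.5, 1) with x = m * 2**e, so the Squeak-style significand is 2*m and the Squeak-style exponent is e - 1:

```python
import math

def significand_exponent(x):
    """Squeak-style (significand, exponent) built from math.frexp."""
    m, e = math.frexp(x)          # x == m * 2**e, with 0.5 <= m < 1
    return 2.0 * m, e - 1         # significand in [1, 2), matching exponent

# Both invariants hold, including for a denormal like 1e-310:
for x in (3.5, 0.1, 1e-310):
    s, e = significand_exponent(x)
    assert x == math.ldexp(s, e)  # self = significand timesTwoPower: exponent
    assert 1.0 <= s < 2.0         # 1 <= significand < 2
```

Note that frexp normalizes denormals too, which is exactly the "more involved than extracting the exponent bits" behavior described above.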

Beware when porting: this may differ in other dialects. For example, in VisualWorks:

self class emin - 1 <= self exponent and: [self exponent <= (self class emax + 1)].

Where emin - 1 is for denormals, and emax + 1 is for special values (infinities and NaNs).

Back to our formulation, the nice thing is that self significand = 1.0 if and only if the Float is a power of two. It's so simple that it's the new definition of Float>>isPowerOfTwo in Squeak from now on.

And another nice thing is that 1.0 ln = 0.0 whatever the libm implementation. The result is that we have an exact log2 for every exact power of two. That's goodness, goal achieved!
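The whole formulation fits in a few lines of Python as well (a sketch, using the frexp-based significand/exponent split from above), and the exactness on powers of two can be checked over the full exponent range:

```python
import math

def naive_log2(x):
    """log2 via significand/exponent split: log2(x) = ln(s)/ln(2) + e,
    where x = s * 2**e with s in [1, 2)."""
    m, e = math.frexp(x)                        # m in [0.5, 1)
    return math.log(2.0 * m) / math.log(2.0) + (e - 1)

# Exact on every power of two, because log(1.0) is exactly 0.0
# and adding the integer exponent to 0.0 is exact:
assert all(naive_log2(math.ldexp(1.0, n)) == n
           for n in range(-1074, 1024))
```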

From the accuracy side, we have an inexact function with ln, an inexact representation of Ln2 (2 ln), an inexact division, and a final summation which can be inexact too. Even with the best libm library (a correctly rounded one), that's up to 2 ulp of error. That's not too bad for such a simple 1 liner, and just half an ulp more than the previous implementation!

Err! except in case of catastrophic cancellation!

That's exactly what happens in the neighbourhood of 1.0.

Let's take one example:

x := 1.0 predecessor.

x significand = 2.0 predecessor.

x exponent = -1.

2.0 predecessor ln / 2 ln = 1.0 predecessor.

1.0 predecessor - 1 = 0.5 ulp negated.

x significand ln / 2 ln + x exponent = 0.5 ulp negated.

-> -1.1102230246251565e-16

Compared to old implementation:

1.0 predecessor ln / 2 ln.

-> -1.6017132519074588e-16

Which we can verify with doubled precision ArbitraryPrecisionFloat:

(1.0 predecessor asArbitraryPrecisionFloatNumBits: Float precision * 2) log2 asFloat

-> -1.6017132519074588e-16

That's about 50% relative error for the new implementation, more than 10^14 ulp, abysmally unacceptable for a math library!
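The cancellation reproduces readily in Python (a sketch; math.nextafter(1.0, 0.0) is 1.0 predecessor, and math.log1p serves as the accurate reference the post obtains from ArbitraryPrecisionFloat):

```python
import math

def naive_log2(x):
    """log2(x) = ln(s)/ln(2) + e, with x = s * 2**e and s in [1, 2)."""
    m, e = math.frexp(x)                        # m in [0.5, 1)
    return math.log(2.0 * m) / math.log(2.0) + (e - 1)

x = math.nextafter(1.0, 0.0)                    # 1.0 predecessor
ref = math.log1p(-2.0 ** -53) / math.log(2.0)   # accurate reference
naive = naive_log2(x)                           # cancellation victim
old = math.log(x) / math.log(2.0)               # old ln/ln formulation

# The significand-based result is off by roughly 30% of the true value,
# while the plain ln(x)/ln(2) stays near the reference:
assert abs(naive - ref) > 0.2 * abs(ref)
assert abs(old - ref) <= 4 * math.ulp(abs(ref))
```

On this author's platform the values match the post: naive gives -1.1102230246251565e-16 against a true value of -1.6017132519074588e-16.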

Where is the problem?

The catastrophic cancellation occurs when we evaluate the logarithm of a significand near 2 (a result near 1), then subtract 1 by adding the exponent -1.

In other words, ln(1 - x) is easy to evaluate accurately for small x, but ln(2 - 2x) - ln(2) is not.

Bummer, that's precisely the operation that significand is bringing.

We must scale down (divide by 2) if we're getting a significand near 2, but must not spoil the accuracy of 1.0 successor log2... That's easy:

| s |

s := self significand.

^s > 1.3333333333333333

ifTrue: [(0.5 * s) ln / Ln2 + (1 + self exponent)]

ifFalse: [s ln / Ln2 + self exponent]

With this formulation, the argument of ln will remain in the interval [2/3, 4/3]. It stays close to 1 for a receiver close to 1. Our pathological case is now cured:
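Here is the same fix sketched in Python, with both properties checked: accuracy near 1.0 is restored, and powers of two stay exact:

```python
import math

def log2_fixed(x):
    """Scale the significand down by 2 when it is near 2, so the
    argument of log stays in [2/3, 4/3] and no cancellation occurs."""
    m, e = math.frexp(x)              # x = m * 2**e, m in [0.5, 1)
    s, e = 2.0 * m, e - 1             # significand in [1, 2)
    if s > 1.3333333333333333:        # s > 4/3: halve s, bump exponent
        return math.log(0.5 * s) / math.log(2.0) + (e + 1)
    return math.log(s) / math.log(2.0) + e

x = math.nextafter(1.0, 0.0)          # 1.0 predecessor
ref = math.log1p(-2.0 ** -53) / math.log(2.0)
assert abs(log2_fixed(x) - ref) <= 4 * math.ulp(abs(ref))
# Still exact on powers of two:
assert log2_fixed(math.ldexp(1.0, 41)) == 41.0
assert log2_fixed(math.ldexp(1.0, -5)) == -5.0
```

For x = 1.0 predecessor the true branch fires, 0.5 * s reconstructs x exactly, and the added exponent term is (−1 + 1) = 0, so nothing is cancelled.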

1.0 predecessor log2.

-> -1.6017132519074588e-16

Floats can be tricky; they are not always 1-liner friendly...

The alternative is to profit from the already existing wisdom and implementations of log2 in the external world; there must be some good one, somewhere...

That means adding more and more primitives or FFI calls. But dealing with all the library paths and other complexity brought by the external world is neither fun nor a 1 liner! I've been there before. Until someone finds another flaw, I prefer my 5-liner; that's managed entropy.