The more I think about yesterday's post, the more I feel like it needs some tidying up.
First, this really isn't suitable for production code, and I realize this. What happens when, for example, a double isn't 64 bits wide anymore? (As one example, we might end up addressing too much memory.) An alternative construction might use a union type, which is at least as large as its largest member.
This doesn't guarantee an exact mapping, of course; it's easy to lose bits by putting mismatched data types in a union:

    union u { short s; double d; };

In this case, sizeof(u) == sizeof(double), but u.s cannot hold the same number of bits as the double, so reading the short back gives us only a fraction of the double's bits. This is likely to cause more collisions in our algorithm.
This implies that we're going to have to do some extra work to make sure we generate structures that are of the appropriate width, and this is hard to do in a cross-platform way.
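As a rough sketch of what that extra work might look like (the names here are mine, not from yesterday's post), we could pair the double with a fixed-width integer and let the compiler refuse to build on platforms where the widths don't line up:

    #include <cstdint>

    // Overlay a double with an unsigned integer of the same width so that
    // every bit of the coordinate survives the conversion to an integer key.
    union DoubleBits {
        double d;
        std::uint64_t bits;
    };

    // Refuse to compile if double is not exactly 64 bits wide on this
    // platform, or if the union picked up padding.
    static_assert(sizeof(double) == sizeof(std::uint64_t),
                  "double and uint64_t must have the same width");
    static_assert(sizeof(DoubleBits) == sizeof(double),
                  "union must not be padded");

    std::uint64_t key_from(double coordinate) {
        DoubleBits u;
        u.d = coordinate;
        return u.bits;  // reading the inactive member is type-punning;
                        // std::memcpy is the strictly conforming alternative
    }

Reading the member we didn't write is the classic union type-pun; it works on the usual compilers, but copying the bytes with std::memcpy is the strictly conforming way to get the same effect.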
Second, there are a lot of ways to skin this cat: associative arrays on each of the coordinates will serve, although we'd need one array for each of the x, y, and z values. There are also pretty significant problems with using real-valued numbers as keys in an associative array.
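To make that floating-point problem concrete, here is a small sketch (again my own, not from the post) of a per-axis map keyed directly on the coordinate; two values that are "equal" on paper end up as distinct keys:

    #include <iostream>
    #include <map>

    int main() {
        // One associative array per axis, keyed on the raw coordinate value.
        std::map<double, int> x_index;

        double a = 0.1 + 0.2;  // 0.30000000000000004...
        double b = 0.3;        // 0.29999999999999998...

        x_index[a] = 1;
        x_index[b] = 2;

        // Prints 2: exact comparison of doubles turns near-misses into
        // separate keys, so lookups silently miss points we already stored.
        std::cout << x_index.size() << '\n';
        return 0;
    }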
Third, we probably are optimizing a bit prematurely. This is more interesting to me as a thought experiment than as a good solution to a real problem. Keep in mind that if we want to optimize the worst-case performance, we have to do enough work that we'll probably negate whatever time savings we might enjoy from this solution. It also has a higher memory complexity and makes more demands of a Point class.
What I like about it, though, is that this is a good thought experiment: it gives us some ideas about how to embed n-dimensional data in a one-dimensional space, suggests some of the problems associated with floating point precision, and offers some areas for discussion.
Maybe a good programming lab?
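If it does become a lab exercise, here is one generic starting point (my own construction, not necessarily the scheme from yesterday's post) for the embedding idea: pull out each coordinate's bit pattern and mix the three patterns into a single 64-bit key.

    #include <cstdint>
    #include <cstring>

    // Assumes a 64-bit double, as discussed above.
    static_assert(sizeof(double) == sizeof(std::uint64_t),
                  "double must be 64 bits wide");

    // Copy a double's bit pattern into a 64-bit integer. memcpy sidesteps
    // type-punning rules and compiles to a single move on common platforms.
    std::uint64_t bits_of(double d) {
        std::uint64_t b;
        std::memcpy(&b, &d, sizeof b);
        return b;
    }

    // Fold x, y, and z into one 64-bit key. The multiplier is an arbitrary
    // odd constant; any reasonable mixing step would do.
    std::uint64_t point_key(double x, double y, double z) {
        std::uint64_t key = bits_of(x);
        key = key * 0x9E3779B97F4A7C15ULL + bits_of(y);
        key = key * 0x9E3779B97F4A7C15ULL + bits_of(z);
        return key;
    }

Note that this treats the result purely as a hash key: nearby points get unrelated keys, which is fine for exact lookup but useless for range queries.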