Perspectives 1: Floating pennies

Currency has been bothering me for a long time. Or, more precisely, how currency gets represented in computer programs and databases has. The reason it bothers me is that a lot of software is written so that it can’t correctly represent a penny. I’m not saying that computer hardware can’t deal accurately with currencies, nor even that software can’t be written to deal accurately with currencies. The problem is that the software is written in such a way that it has to approximate the value of a penny, and doesn’t get it quite right.

So, if computers and software can represent money accurately, why would programmers choose to use approximate representations? To answer this question, we need to consider how currencies, number systems, and data types relate to one another.

Let’s start with number systems. By the time they enter a university, most people are familiar with the natural numbers \mathbb{N} (including 0), the integers \mathbb{Z}, the rational numbers \mathbb{Q}, and the real numbers \mathbb{R}; some people are even familiar with the complex numbers \mathbb{C}. For the most part, we don’t worry too much about the formal construction of these number systems, and we learn that there’s a strict containment relationship among them:

\mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C}.

In many common contexts where we see numbers written down, we’re not told which of these number systems the number is taken from. Rather, we have to infer the number system from the form in which the number is written and the context in which we find it. I suspect that most people, when they see a number written with a decimal point (for example, 2.32), will automatically think of it as a real number. It’s obviously not an integer, and it doesn’t look like the fraction notation we’re taught when we first learn about rational numbers.

Let’s consider how currency works relative to the number systems. I’ll work with the US dollar for examples, but most of the same considerations apply to every currency that I’m familiar with. Each currency system has a smallest unit: the penny, in the case of US dollars. We may often see rates quoted in smaller units (gas prices come to mind), but when we make the purchase, there are no partial cents in the final transaction. The currency doesn’t permit subdivision of a penny. We cannot use the US currency system to pay an amount that is between $5.27 and $5.28. Of the usual number systems we learn as we’re growing up, only \mathbb{N} and \mathbb{Z} have this property.

Moreover, currency doesn’t have negative units. If we buy something that costs $5.65, we cannot pay this with a $5 bill, a $1 bill, and coins totaling -$0.35. If we pay with the five- and one-dollar bills, the cashier has to give us change. Since everything from \mathbb{Z} on up allows negative values (technically, additive inverses), we see currency behaving more like \mathbb{N} than any of the other number systems.

The challenge here is the notation we use for currency. Since we express currency values in terms of dollars, we tend to think of counting in dollars and parts of dollars, which sounds like something we’d do using rational or real numbers. In terms of the characteristics of the currency system, though, what we’re really counting in is pennies; that’s why the US currency system behaves like the natural numbers.

So, now that we know how currency and number systems are related, let’s take a quick look at data types. All data on a computer is stored as a finite sequence of bits. We can think of the data type of some piece of data as a way to tell us what those bits mean. When it comes to numbers, most modern programming languages provide two primary data types: integers and floating-point numbers. (PHP, for example, provides integer and float types.) An integer data type can accurately represent any integer in some interval; provided you don’t give it too large a value, it will accurately represent any count (say, of pennies).
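To make that concrete, here’s a minimal sketch in PHP (since that’s the language mentioned above; the variable names and amounts are my own, not from any particular system) of keeping a money amount as an integer count of pennies:

    <?php
    // Keep the amount as an integer number of pennies, so the arithmetic is exact.
    $priceInPennies = 527;  // $5.27
    $taxInPennies   = 38;   // $0.38
    $totalInPennies = $priceInPennies + $taxInPennies;  // exactly 565

    // Convert to dollars-and-cents notation only when displaying the value.
    printf("Total: $%d.%02d\n", intdiv($totalInPennies, 100), $totalInPennies % 100);
    // Total: $5.65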

Floating-point numbers, on the other hand, are how we approximate real numbers on a computer, but they can’t represent every number in the range they cover. Every floating-point number actually has the form \frac{a}{b}, where b is a power of two. So, if you can’t write the number you want to represent as a fraction of this form, the computer will approximate it. In particular, if we enter something like 2.11 and store it as a floating-point number, it will get approximated.
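You can see this approximation directly. The snippet below is my own illustration (assuming PHP’s float is an IEEE 754 double, as it is on typical platforms); printing 2.11 with enough digits exposes the value that actually gets stored:

    <?php
    // 2.11 has no exact representation as a binary fraction, so the stored
    // double is the nearest representable value, which is slightly below 2.11.
    printf("%.20f\n", 2.11);      // prints roughly 2.10999999999999987566

    // The classic symptom of the same approximation:
    var_dump(0.1 + 0.2 == 0.3);   // bool(false)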

So why do programmers choose to use an approximate representation rather than an accurate one? I’d suggest that they think about currency very intuitively as “a number with a decimal point”. If you don’t think about the properties of currencies and data types, then the approximate representation as a floating-point number is the “obvious” choice. When you take those properties into account, however, storing a count of pennies as an integer certainly gives a better correspondence between the currency and how it gets represented.
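As a closing illustration (again a sketch of my own, not something from the original argument), compare adding up ten ten-cent charges both ways:

    <?php
    // Sum ten $0.10 charges as a float (in dollars) and as an integer (in pennies).
    $dollarsAsFloat = 0.0;
    $penniesAsInt   = 0;
    for ($i = 0; $i < 10; $i++) {
        $dollarsAsFloat += 0.10;  // each addition is approximated
        $penniesAsInt   += 10;    // each addition is exact
    }

    var_dump($dollarsAsFloat == 1.0);  // bool(false): the float total has drifted
    var_dump($penniesAsInt == 100);    // bool(true): exactly one dollar

The drift here is tiny, but it is precisely the kind of error that downstream code then has to round away; the integer count of pennies never needs that correction.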

Copyright © 2008 Michael L. McCliment.


3 Responses to Perspectives 1: Floating pennies

  1. I agree with you that currency should be represented in integers, or long integers as some languages have them.

    Benjamin Franklin invented our decimal currency system. He thought of it as consisting of dollars, dimes, cents and mills, and the currency still shows it that way. A dime says “one dime” but a nickel says “five cents”. There is a half dollar (called that) and there used to be a half dime. Usage eventually settled on the decimal point after the dollar amount, and I suppose that confused people into thinking of money as a measurement rather than an integral quantity.

  2. Michael McCliment says:

    Charles,

    I haven’t ever had a reason to research the history of the US currency system, and didn’t know that ‘mills’ had been proposed. The only currency I’ve heard of actually having a 1/1000 unit is the Japanese yen; the Wikipedia article (http://en.wikipedia.org/wiki/Japanese_yen) mentions that they had sen (= 1/100 yen) and rin (= 1/1000 yen) from 1872 until 1953.

  3. […] Utility, unfortunately, is particularly tricky to evaluate. It almost always depends on the context in which it is applied. For example, floating point computations on computers generally have a high degree of utility. They are our best approximation to the real numbers; they are used in nearly all numerical approximation algorithms. However, their utility drops significantly when they’re used to represent money. […]
