The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

which data type?

Hi,
We are developing a web application which will involve money transactions. What factors will we have to take into consideration when choosing a data type for the amount: float or decimal?
curious
Saturday, July 28, 2007
 
 
Seeing as decimal was designed chiefly for currency, I would go that way.
onanon
Monday, July 30, 2007
 
 
Rounding is the obvious answer.

For more information, just watch Office Space. Everything is explained there.
Entries of Confusion
Monday, July 30, 2007
 
 
If your application will actually be manipulating money, i.e. moving it from one place to another, I would go with integers, since precision is your most important criterion. Otherwise float will probably be OK.

You should also consider whether you ever want to deal with the concept of a fraction of a penny.
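
For example, a minimal sketch of the integer approach in Python (the helper names here are made up purely for illustration, and the sketch ignores negative amounts and validation):

    # Amounts held as whole cents (plain ints); convert only for display.
    def to_cents(amount_str):
        dollars, _, cents = amount_str.partition(".")
        return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

    def to_display(cents):
        return f"${cents // 100}.{cents % 100:02d}"

    balance = to_cents("19.99") + to_cents("0.01")   # exactly 2000 cents
    print(to_display(balance))                       # $20.00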
DJ Clayworth
Monday, July 30, 2007
 
 
Never use floating point numbers for money. Floating-point numbers are not precise and can lead to all kinds of strange rounding and precision errors.

Use fixed point decimals or integers (implied decimals).
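
A quick illustration in Python (its standard decimal module) of the kind of drift binary floats introduce:

    from decimal import Decimal

    # Ten 0.1s summed as binary floats do not add up to exactly 1.0...
    print(sum([0.1] * 10))                             # 0.9999999999999999
    print(sum([0.1] * 10) == 1.0)                      # False

    # ...but the same sum with exact decimals comes out exact.
    print(sum([Decimal("0.1")] * 10) == Decimal("1"))  # True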
dood mcdoogle
Monday, July 30, 2007
 
 
Floating point is an approximation.  It is not intended for use with money.

Sincerely,

Gene Wirchenko
Gene Wirchenko
Monday, July 30, 2007
 
 
I agree, use a decimal. If you are using a system that lets you choose the precision of your decimals (e.g., SQL Server), then the question becomes whether two decimal places are sufficient or whether you need more. I've found that if there are any sorts of calculations (amortization, interest, pro-rating, conversions, etc.) done at the database level, then fractional cents become useful. Also, make sure you are consistent in your precision or you can run into truncation problems.

Another interesting approach that I've heard of, but haven't used myself, is to store everything as cents in an integer field.
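
As a rough sketch of what keeping a consistent scale looks like (Python's decimal module here; the two-place scale and half-up rounding are just example choices):

    from decimal import Decimal, ROUND_HALF_UP

    TWO_PLACES = Decimal("0.01")

    # Pro-rating $100.00 over three periods produces fractional cents;
    # quantize rounds back to a consistent two-decimal scale before storing.
    share = Decimal("100.00") / 3                      # 33.333333333333...
    stored = share.quantize(TWO_PLACES, rounding=ROUND_HALF_UP)
    print(stored)                                      # 33.33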
Eric Marthinsen
Monday, July 30, 2007
 
 
Int64 with 4 implied decimal places. You can thank me later.
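
Roughly, that looks like this (a Python sketch, with plain ints standing in for Int64; the price and the 8.25% tax rate are made-up examples):

    SCALE = 10_000             # four implied decimal places

    price = 199900             # 19.9900, stored as a scaled integer
    rate = 825                 # 0.0825, also scaled by 10,000

    # Multiply, then divide one scale factor back out, rounding half up.
    tax = (price * rate + SCALE // 2) // SCALE         # 16492 -> 1.6492
    print(f"{tax // SCALE}.{tax % SCALE:04d}")         # 1.6492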
Anony
Monday, July 30, 2007
 
 
Not sure what language you're programming in...but if you're using .NET, definitely go with Decimal.  See here:

http://msdn2.microsoft.com/en-us/library/system.decimal(vs.80).aspx

"The binary representation of a Decimal value consists of a 1-bit sign, a 96-bit integer number, and a scaling factor used to divide the 96-bit integer and specify what portion of it is a decimal fraction."

In other words, as I understand it, Decimal _is_ an integer internally, giving you all the benefits of integer calculation without having to continuously remember to add or remove decimal places.
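
Python's decimal module is built around the same idea (a sign, an integer coefficient, and an exponent), which its tuple form makes visible; this is only an analogue, not the .NET type's actual layout:

    from decimal import Decimal

    # A decimal value is a scaled integer: sign + digits + exponent.
    print(Decimal("12.34").as_tuple())
    # DecimalTuple(sign=0, digits=(1, 2, 3, 4), exponent=-2)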
Kyralessa
Monday, July 30, 2007
 
 
Varchar(50). No thanks needed.
Just kidding...
Monday, July 30, 2007
 
 
Uhhh, don't use a character type.  Bad news.  Use an integral type.
xampl
Monday, July 30, 2007
 
 
"Uhhh, don't use a character type.  "

He was "Just kidding...".
dood mcdoogle
Tuesday, July 31, 2007
 
 
I wasn't kidding about watching Office Space ;-)
Entries of Confusion
Tuesday, July 31, 2007
 
 
Decimal.
OneMist8k
Tuesday, July 31, 2007
 
 
DateTime
BenjiSmith
Tuesday, July 31, 2007
 
 
You cannot represent 0.1 exactly as a floating point number.
Peter
Tuesday, July 31, 2007
 
 
"You cannot represent 0.1 exactly as a floating point number."

Sure you can.  It cannot be represented exactly with a *binary* floating point value, but it can be represented exactly with a *decimal* floating point value.
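
You can see both halves of that in Python: building a Decimal from the binary float 0.1 shows the value the float actually stores, while a decimal 0.1 is exact:

    from decimal import Decimal

    # The nearest binary double to 0.1, shown exactly:
    print(Decimal(0.1))
    # 0.1000000000000000055511151231257827021181583404541015625

    # A decimal floating point 0.1 is exact:
    print(Decimal("0.1"))
    # 0.1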

Sincerely,

Gene Wirchenko
Gene Wirchenko
Thursday, August 02, 2007
 
 

This topic is archived. No further replies will be accepted.
