One of my lecturers mentioned a way they would get around this was to store all values as ints and then insert a "." character before the final two digits.
Yeah, this works especially well for currencies (effectively doing all calculations in cents/pennies), as you do need perfect precision throughout the calculations, but the final result gets rounded to two-digit precision anyways.
Quite a horrible hack. Most modern languages have a decimal type that handles rounding. And if not, you should just use rounding functions to round to two digits when working with currency.
Not sure what financial applications you develop, but what you suggest wouldn't pass code review in any finance-related project I've seen.
Using integers for currency calculations and formatting the output is no dirty hack, it's industry standard: floating-point arithmetic on contemporary hardware cannot represent most decimal fractions exactly (see https://en.wikipedia.org/wiki/IEEE_754 ), whereas integer arithmetic (or integers used to represent fixed-point values) has the same precision across the entire range it can represent. You typically don't want to round the numbers you work with, you need to round the result ;-)
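To make that concrete, here's a minimal Python sketch (my own illustration, not from the thread): the float running total drifts, while the integer-cents total stays exact and only gets formatted at the very end.

```python
# Summing ten cents three times: the float drifts, integer cents stay exact.
subtotal_float = 0.0
subtotal_cents = 0
for _ in range(3):
    subtotal_float += 0.10   # ten cents as a binary float
    subtotal_cents += 10     # ten cents as an integer

print(subtotal_float)        # 0.30000000000000004
print(f"${subtotal_cents // 100}.{subtotal_cents % 100:02d}")  # $0.30
```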
Phew. Sometimes I read things and think I'm going crazy. I work in ERP/accounting software and was sure the monetary data type I've been using was backed by integers, but the post you're replying to had me second-guessing myself...
Had to think about it, but yeah, I guess you can't do exact division or non-integer multiplication with integer cents, as integer division discards the remainder, which forces you to round after every step.
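For example (a hypothetical Python snippet, numbers made up): splitting 100 cents three ways with integer division silently loses a cent unless you distribute the remainder yourself.

```python
# Integer division discards the remainder, so a naive three-way split
# of 100 cents loses a cent.
total_cents = 100
share = total_cents // 3                 # 33, remainder discarded
print(share * 3)                         # 99, one cent missing

# Common fix: hand out the remainder one cent at a time.
shares = [share + (1 if i < total_cents % 3 else 0) for i in range(3)]
print(shares, sum(shares))               # [34, 33, 33] 100
```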
You could convert to a float for the division/multiplication, and you do get more efficient addition/subtraction as well as simpler de-/serialization, but in most situations it's probably less trouble to use decimals.
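In Python, for instance, the standard-library decimal module covers this; a minimal sketch of dividing $10.00 three ways and rounding the result to cents with an explicit rounding mode:

```python
from decimal import Decimal, ROUND_HALF_EVEN

price = Decimal("10.00")
# Exact base-10 division at high precision, then one explicit
# rounding step down to two decimal places (banker's rounding).
share = (price / 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(share)  # 3.33
```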
You do not want to use floats for any part of calculating money. The larger the value, the larger the error, which is not a trait you want when dealing with money. Fixed-point numbers/decimals/big ints are much better for this. If you want greater-than-cent precision, treat the values as fractions of a cent (i.e., move the decimal point over one more place, or however many places your application needs). The maths is the same no matter where you place the decimal point.
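A rough sketch of that idea in Python (the $19.99 price and 7.25% tax are made-up example values), using millicents (1/1000 of a cent) as the fixed-point unit:

```python
MILLICENTS_PER_CENT = 1_000

price = 19_99 * MILLICENTS_PER_CENT   # $19.99 as integer millicents
tax = price * 725 // 10_000           # 7.25% tax, still in millicents
total = price + tax                   # exact to the millicent

# Round half-up back to whole cents exactly once, at the end.
total_cents = (total + MILLICENTS_PER_CENT // 2) // MILLICENTS_PER_CENT
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $21.44
```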
Fixed-point notation. Before floats were invented, that was the standard way of doing it. You just needed to keep your values within certain bounds.