3 Must-Knows for Working with Money and Calculations in Code

tanut aran
3 min read · Apr 18, 2024

1. No System Is Perfect: Base 2 and Base 10

10/3 in base 10 cannot be perfectly represented; any finite decimal is only an approximation

10/3
3.33
3.3333
3.33333333
3.333333333333
// Which one is correct?

In the same way, you have problems with floating point:

0.1 + 0.2
// 0.30000000000000004

2.03-0.42
// 1.6099999999999999

1.07 - 0.08
// 0.9900000000000001

But most programming languages round the value to some precision before showing it back on the screen, so the error is usually hidden.
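A minimal Java sketch of this display rounding (the class name is mine, just for illustration): the raw double carries the error, and formatted output hides it.

```java
public class DisplayRounding {
    public static void main(String[] args) {
        double sum = 0.1 + 0.2;
        // Printing the raw double shows the stored approximation
        System.out.println(sum);          // 0.30000000000000004
        // Formatting to 2 decimal places rounds it away for display
        System.out.printf("%.2f%n", sum); // 0.30
    }
}
```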

If you force it to show the original value (without rounding), you will find that, among numbers with a single decimal digit, 0.5 is unfortunately the only one between 0 and 1 that can be represented perfectly in base 2.

You can derive this result easily by thinking in terms of division: 0.5 = 1/2, and any number built from 1/2, 1/2/2, 1/2/2/2, … or a sum of these can be written perfectly in base 2.

Everything else is an approximation built from combinations of 1, 2, 4, 8, 16, … and 1/2, 1/4, 1/8, 1/16, 1/32, …
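You can see this directly in Java: the `new BigDecimal(double)` constructor exposes the exact binary value a double actually stores (the class name below is mine, for illustration).

```java
import java.math.BigDecimal;

public class ExactInBase2 {
    public static void main(String[] args) {
        // 0.5 = 1/2 is exactly representable in base 2
        System.out.println(new BigDecimal(0.5)); // 0.5
        // 0.1 is not: the double stores the nearest sum of powers of 1/2
        System.out.println(new BigDecimal(0.1)); // 0.1000000000000000055511...
    }
}
```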

2. Then Why Do We Use Base 10 for Money?

The one and only reason is

Because it aligns with how we count money.

Everybody on the street knows that 10 / 3 = 3.33 with rounding.

The ability to ‘calculate correctly’ is the ability to ‘calculate as we calculate’ and ‘count as we count’, i.e., to make the computer do as we do.

Implementation: Programming

In Java, the BigDecimal data type is used.
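A minimal sketch of how this looks (the class name is mine): constructing BigDecimal from a String, rather than a double, means the stored value is exactly the decimal you wrote, so base-10 arithmetic behaves as we count.

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // Construct from String, not double, to avoid inheriting float error
        BigDecimal a = new BigDecimal("0.10");
        BigDecimal b = new BigDecimal("0.20");
        System.out.println(a.add(b)); // 0.30, exact (unlike 0.1 + 0.2 in double)
    }
}
```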

Implementation: Database

In databases, DECIMAL and NUMERIC are equivalent types, and you declare their precision. For example, the popular DECIMAL(20,6):

  • allows 6 digits to the right of the decimal point (the scale)
  • allows 20 − 6 = 14 digits to the left of the decimal point

(Remark: strictly speaking, 20 is the precision, the total number of significant digits, and 6 is the scale, but I think this reading is easy to remember.)
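The same precision/scale terminology shows up on Java's BigDecimal, which makes it easy to check whether a value fits a DECIMAL(20,6) column (the class name is mine, for illustration):

```java
import java.math.BigDecimal;

public class DecimalColumns {
    public static void main(String[] args) {
        // A value at the limit of DECIMAL(20,6): 14 digits left, 6 right
        BigDecimal v = new BigDecimal("12345678901234.123456");
        System.out.println(v.precision()); // 20 (total significant digits)
        System.out.println(v.scale());     // 6  (digits right of the point)
    }
}
```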

3. There Will Be Fewer Features with BigDecimal

It supports addition, subtraction, and multiplication without surprises, but division can be problematic, e.g., 10/3.

Java's JShell will throw an ArithmeticException, so you have to specify a precision and a rounding mode: UP, DOWN, HALF_UP (≥ 0.5 rounds up), HALF_EVEN (ties round to even), etc.
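A short sketch of the 10/3 case (the class name is mine): calling `divide` alone throws, and supplying a scale and RoundingMode resolves it.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DivideDemo {
    public static void main(String[] args) {
        BigDecimal ten = new BigDecimal("10");
        BigDecimal three = new BigDecimal("3");
        // ten.divide(three) alone throws ArithmeticException:
        // non-terminating decimal expansion, no exact decimal result
        BigDecimal q = ten.divide(three, 2, RoundingMode.HALF_UP);
        System.out.println(q); // 3.33
    }
}
```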

This is why scientific or complex computation uses floating point, and why floating point is the default in most programming languages.

“As long as I round it to a specific precision it will be okay?” No, and here is why.

Humans round or truncate intermediate results to 2, 4, or 6 decimal digits, which leads to inconsistency with the computer's calculation.

Correctness is relative to the human base-10 system.

See an exaggerated case below.

Real cases are subtler, but they turn into a giant mess over time.

// A human calculates 1/3, stores the rounded result somewhere,
// then adds VAT of 7% (multiply by 1.07)
1/3
= 0.33          // rounded to 2 digits and stored
// Step 2
0.33 * 1.07
= 0.3531
= 0.35          // the human thinks this is right

// The computer, with no intermediate rounding
1/3
= 0.333333...   // float type
0.333333... * 1.07
= 0.356666...   // float type
= 0.36          // not 0.35
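The mismatch above can be reproduced in Java (the class name is mine; the 7% VAT figure follows the article's example): the human path rounds 1/3 to two digits before applying VAT, while the computer path keeps full precision and rounds only at the end.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class VatMismatch {
    public static void main(String[] args) {
        BigDecimal vat = new BigDecimal("1.07");

        // Human path: round 1/3 to 2 digits first, then apply VAT
        BigDecimal human = BigDecimal.ONE
                .divide(new BigDecimal("3"), 2, RoundingMode.HALF_UP) // 0.33
                .multiply(vat)                                        // 0.3531
                .setScale(2, RoundingMode.HALF_UP);                   // 0.35

        // Computer path: full double precision, round only at the end
        double computer = Math.round((1.0 / 3.0) * 1.07 * 100) / 100.0;

        System.out.println(human);    // 0.35
        System.out.println(computer); // 0.36
    }
}
```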

Hope this helps!
