Why Doesn’t 0.1 + 0.2 Equal 0.3?

As developers, we often assume that basic arithmetic will always work as expected. For example, adding 0.1 and 0.2 should give us 0.3, right? But when you run this calculation in Ruby (or many other programming languages), you might be surprised by the result:
puts 0.1 + 0.2
# Output: 0.30000000000000004
Why doesn’t 0.1 + 0.2 equal exactly 0.3? Let’s break it down in simple terms.
The Problem: Computers Use Binary
Computers represent numbers in binary (base 2), but some decimal numbers can’t be perfectly represented in binary. For example:
- The decimal number 0.1 is like 1/3 in base 10: it becomes a repeating fraction in binary.
- Similarly, 0.2 also has a repeating binary representation.
When you add these imprecise binary representations together, tiny rounding errors occur. That’s why the result of 0.1 + 0.2 ends up being something like 0.30000000000000004 instead of exactly 0.3.
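You can see this mismatch directly in Ruby: a quick illustrative check using only core methods, where Float#to_r reveals the exact rational value the binary double actually stores.

```ruby
# Float#to_r returns the exact rational number the double stores,
# showing that the literal 0.1 is not really one tenth.
puts 0.1.to_r          # => 3602879701896397/36028797018963968
puts Rational(1, 10)   # => 1/10

# Printing with extra digits exposes the rounding error in the sum.
printf("%.17f\n", 0.1 + 0.2)   # => 0.30000000000000004
```

The stored fraction has a power of two as its denominator, which is why it can only approximate one tenth.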
Why Does This Matter?
While the difference between 0.3 and 0.30000000000000004 might seem small, it can cause problems in situations where precision is important. For example:
if 0.1 + 0.2 == 0.3
  puts "Equal"
else
  puts "Not Equal"
end
# Output: Not Equal
This happens because the tiny error makes the numbers not exactly equal.
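One common workaround for comparisons is to test whether two floats are close rather than identical. This is a minimal sketch: the method name and the tolerance value are illustrative choices, not universal constants.

```ruby
# Compare floats within a small tolerance instead of with ==.
# 1e-9 is an arbitrary illustrative threshold; pick one for your domain.
def roughly_equal?(a, b, tolerance = 1e-9)
  (a - b).abs < tolerance
end

puts roughly_equal?(0.1 + 0.2, 0.3)   # => true
```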
How to Fix It in Ruby
If you need precise results, here are two simple solutions:
1. Use BigDecimal for Exact Arithmetic
Ruby provides the BigDecimal class, which allows you to perform precise decimal calculations:
require 'bigdecimal'
a = BigDecimal("0.1")
b = BigDecimal("0.2")
result = a + b
puts result.to_s # Output: "0.3"
By using strings to initialize BigDecimal, you avoid the inaccuracies of floating-point numbers.
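As a quick check, the equality comparison from earlier behaves as you’d expect once both sides are BigDecimal values:

```ruby
require 'bigdecimal'

a = BigDecimal("0.1")
b = BigDecimal("0.2")

# BigDecimal stores decimal digits exactly, so the comparison succeeds.
if a + b == BigDecimal("0.3")
  puts "Equal"
else
  puts "Not Equal"
end
# Output: Equal
```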
2. Round the Result
If you don’t need extreme precision, you can round the result to a reasonable number of decimal places:
result = 0.1 + 0.2
rounded_result = result.round(1)
puts rounded_result # Output: 0.3
This approach works well for most everyday cases.
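Keep in mind that rounding only hides the error for display; it doesn’t remove it from intermediate values. A small sketch of how repeated addition accumulates error:

```ruby
# Adding 0.1 ten times step by step accumulates rounding error,
# so round once at the end rather than trusting the raw sum.
total = Array.new(10, 0.1).inject(:+)
puts total            # => 0.9999999999999999
puts total.round(2)   # => 1.0
```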
Conclusion
The reason 0.1 + 0.2 doesn’t equal 0.3 is how computers represent decimal numbers in binary. While this can lead to unexpected results, tools like BigDecimal or rounding can help you get the answers you expect.
So the next time you see 0.30000000000000004, remember: it’s just a quirk of how computers handle numbers!
Happy coding!