Have computers started storing 0.1 correctly?





While learning about floating point arithmetic, I came across the claim that "a float/double can't store 0.1 precisely".



There is a question on SO pointing out the same thing, and the accepted answer is also very convincing. However, I thought of trying it out on my own computer, so I wrote the following program:


double a = 0.1;

if (a == 0.1)
{
    Console.WriteLine("True");
}
else
{
    Console.WriteLine("False");
}

Console.Read();



and the console printed True. This was shocking, as I had already been convinced of the opposite. Can anyone tell me what's going on with floating point arithmetic here? Or did I just get a computer that stores numeric values in base 10?







Given that it's stored inaccurately the same way both times you wrote it, why wouldn't they be equal?
– jonrsharpe
Jul 1 at 7:55





en.wikipedia.org/wiki/Floating-point_arithmetic
– TheGeneral
Jul 1 at 7:56





@jonrsharpe: I completely agree that when you think about it in the right way, it becomes obvious. But I think it's a reasonable question if you're not used to thinking about exactly what's going on.
– Daisy Shipton
Jul 1 at 8:04





@jonrsharpe the question I mentioned was doing the same thing; please correct me if I am wrong.
– Imad
Jul 1 at 8:10





I get the correct results on my computer. Are you using Debug or Release? Does it give the same results for both Debug and Release? There are some microprocessors that have internal bugs that give wrong results. Debug uses a simulator to perform the math, while Release uses the floating point arithmetic unit inside the micro. I've seen both fail. Some PCs have patches to fix the bug in the FPU, and the patches may be installed wrong depending on which micro is installed in the PC. The answer should always be TRUE, except for the bugs.
– jdweng
Jul 1 at 9:34




1 Answer



Your program is only checking whether the compiler is approximating 0.1 in the same way twice, which it does.



The value of a isn't 0.1, and you're not checking whether it is 0.1. You're checking whether "the closest representable value to 0.1" is equal to "the closest representable value to 0.1".





Your code is effectively compiled to this:


double a = 0.1000000000000000055511151231257827021181583404541015625;

if (a == 0.1000000000000000055511151231257827021181583404541015625)
{
    Console.WriteLine("True");
}
else
{
    Console.WriteLine("False");
}



... because 0.1000000000000000055511151231257827021181583404541015625 is the double value that's closest to 0.1.
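If you want to see that stored approximation for yourself, one option (just a quick sketch; "G17" is the format specifier that produces enough significant digits to round-trip a double) is to print the value:

using System;

class ShowExactValue
{
    static void Main()
    {
        double a = 0.1;

        // "G17" prints enough significant digits to round-trip a double,
        // which exposes the fact that the stored value isn't exactly 0.1.
        Console.WriteLine(a.ToString("G17"));   // prints 0.10000000000000001
    }
}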





There are still times you can see some very odd effects. While double is defined to be a 64-bit IEEE-754 number, the C# specification allows intermediate representations to use higher precision. That means sometimes the simple act of assigning a value to a field can change results - or even casting a value which is already double to double.
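As a purely illustrative sketch of the kind of code where this can matter (the observable behaviour depends on the runtime: on most modern JITs, which use 64-bit SSE registers throughout, both comparisons print True, but a JIT that keeps intermediate results at extended precision is allowed to make the first one print False):

using System;

class ExtendedPrecisionDemo
{
    // A static field is a genuine 64-bit double; the JIT can't keep it
    // in an extended-precision register.
    static double stored;

    static void Main()
    {
        double d = 0.1;
        stored = d * d;

        // The right-hand side may legally be evaluated at higher precision,
        // so this comparison isn't guaranteed to print True on every runtime.
        Console.WriteLine(stored == d * d);

        // An explicit cast to double forces truncation to 64 bits,
        // so this comparison should print True everywhere.
        Console.WriteLine(stored == (double)(d * d));
    }
}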





In the question you refer to, we don't really know how the original value is obtained. The question states:



I've a double variable called x. In the code, x gets assigned a value of 0.1





We don't know exactly how it's assigned a value of 0.1, and that detail is important. We know the value won't be exactly 0.1, so what kind of approximation has been involved? For example, consider this code:


using System;

class Program
{
    static void Main()
    {
        SubtractAndCompare(0.3, 0.2);
    }

    static void SubtractAndCompare(double a, double b)
    {
        double x = a - b;
        Console.WriteLine(x == 0.1);
    }
}



The value of x will be roughly 0.1, but it's not the exact same approximation as "the closest double value to 0.1". In this case it happens to be slightly less than 0.1 - the value is exactly 0.09999999999999997779553950749686919152736663818359375, which isn't equal to 0.1000000000000000055511151231257827021181583404541015625... so the comparison prints False.
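To make the two different approximations visible side by side (again just a sketch using the "G17" round-trip format), you could print both values:

using System;

class ShowBothApproximations
{
    static void Main()
    {
        // The subtraction result and the literal are two different doubles,
        // which is why the comparison above prints False.
        Console.WriteLine((0.3 - 0.2).ToString("G17"));  // 0.099999999999999978
        Console.WriteLine(0.1.ToString("G17"));          // 0.10000000000000001
    }
}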







Thanks, that's right. Wasn't the question I mentioned doing the same thing?
– Imad
Jul 1 at 8:12





@Imad: In the question you linked to, the OP says "In the code, x gets assigned a value of 0.1" but never shows how that happens. If it's the result of arithmetic (e.g. subtracting 0.8 from 0.9) then it may well have a value which is near to 0.1 but not the same approximation. We can't really tell from the question.
– Daisy Shipton
Jul 1 at 8:14





@Imad: I've provided an example to help clarify.
– Daisy Shipton
Jul 1 at 8:25





