Please explain why the following pieces of code behave differently.

#include <stdio.h>

int main(void) {
    float a = 0.1;
    if (a < 0.1)
        printf("less");
    else
        printf("greater than equal");
    getchar();
    return 0;
}

Output: greater than equal

#include <stdio.h>

int main(void) {
    float a = 0.7;
    if (a < 0.7)
        printf("less");
    else
        printf("greater than equal");
    getchar();
    return 0;
}

Output: less, contrary to what I expected.

PS: This is NOT homework.


1 Answer

There are two different types involved here: float and double. You're assigning to a float, but then comparing against a double: an unsuffixed floating-point literal such as 0.1 or 0.7 has type double, so in a < 0.1 the float a is promoted to double before the comparison.
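You can see the two representations directly by printing them with extra digits. Here's a minimal sketch, assuming the usual IEEE 754 single- and double-precision formats:

#include <stdio.h>

int main(void) {
    float a = 0.1f;                         /* nearest float to 0.1 */

    /* The float is converted to double when passed to printf,
       which preserves its value exactly. */
    printf("float  0.1 -> %.17f\n", a);     /* 0.10000000149011612 */
    printf("double 0.1 -> %.17f\n", 0.1);   /* 0.10000000000000001 */
    printf("float  0.7 -> %.17f\n", 0.7f);  /* 0.69999998807907104 */
    printf("double 0.7 -> %.17f\n", 0.7);   /* 0.69999999999999996 */
    return 0;
}

The nearest float to 0.1 happens to be slightly above the nearest double to 0.1, so a < 0.1 is false; the nearest float to 0.7 is slightly below the nearest double to 0.7, so a < 0.7 is true. That's exactly the pattern in your two programs.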

Imagine that float and double were actually 2- and 4-digit decimal floating-point types instead. Now imagine that you had:

float a = 0.567123; // Actually assigns 0.57
if (a < 0.567123) // Actually compares 0.5700 with 0.5671
{
    // We won't get in here
}

float a = 0.123412; // Actually assigns 0.12
if (a < 0.123412) // Actually compares 0.1200 with 0.1234
{
    // We will get in here
}

Obviously this is an approximation of what's going on, but it explains the two different kinds of results.

It's hard to say what you should be doing without more information. It may well be that you shouldn't be using float and double at all, that you should compare using some level of tolerance, that you should use double everywhere, or that you should accept some level of "inaccuracy" as just part of how the system works.
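If a tolerance is appropriate, the comparison might look like the sketch below. The nearly_equal helper and the 1e-6 tolerance are illustrative only, not part of any standard API; a suitable tolerance depends entirely on the scale of your data:

#include <math.h>
#include <stdio.h>

/* Hypothetical helper: treat two values as equal when they differ
   by less than a caller-chosen tolerance. */
static int nearly_equal(double x, double y, double tol) {
    return fabs(x - y) < tol;
}

int main(void) {
    float a = 0.7f;
    if (nearly_equal(a, 0.7, 1e-6))
        printf("equal within tolerance");  /* this branch is taken */
    else if (a < 0.7)
        printf("less");
    else
        printf("greater than equal");
    return 0;
}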

