Occasionally here on the forums there are posts that touch on matters dealing with inherent floating-point precision variance.
Like this recent post:
https://www.sidefx.com/forum/topic/102733/
I've been using the following snippet of VEX for a while now with no issues (it works as intended in all cases so far):
int Integer_Value_A = 0; // Replace 0 with Target Integer Attribute Min Value
int Integer_Value_B = 1; // Replace 1 with Target Integer Attribute Max Value

float Mid_Float_Value;
float Fractional_Value;

Mid_Float_Value = (Integer_Value_A + Integer_Value_B) / 2.0;
Fractional_Value = frac(Mid_Float_Value);

f@AA = Fractional_Value;

// Note: Fractional_Value is already a frac() result, so the extra frac()
// call below is redundant (the fractional part of a fractional part is
// itself), but it is harmless.
if (frac(Fractional_Value) != 0.0)
    warning("Your mid target value is not an integer value");
// if (frac(Fractional_Value) != 0) warning("Your mid target value is not an integer value"); // This works too
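Just to illustrate what I mean by "works as intended": with A = 0 and B = 1 the midpoint is 0.5 and the warning fires; with A = 0 and B = 2 it is 1.0 and nothing is reported. A minimal test one could run in an Attribute Wrangle (the values here are just examples):

float mid_even = (0 + 2) / 2.0; // 1.0
float mid_odd  = (0 + 1) / 2.0; // 0.5
printf("frac(%g) = %g\n", mid_even, frac(mid_even)); // prints frac(1) = 0
printf("frac(%g) = %g\n", mid_odd,  frac(mid_odd));  // prints frac(0.5) = 0.5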
I would have thought, though, that because of possible floating-point precision variances my code would fail to function as intended, given how I am doing the comparisons.
But this is not the case; so far, across many runs, I am always getting the desired functionality.
My question, out of curiosity, is: how does the frac() function work?
Why am I NOT running into floating-point precision issues?
Is the function itself doing some intentional 'truncating' of its return value?
Or does it inherently operate on 64-bit values and return 32-bit values by default (which, I assume, gives an automatic truncation)?
I would tend to think the 64-to-32 idea is not valid, because I tried my code with the attribute cast to 64-bit first, and it still seems to work.
So I think the frac() function does an intentional truncation (rounding), regardless of whether it is in 32- or 64-bit mode?
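The simplest mental model I can come up with (purely my assumption, not the actual VEX implementation) would be a plain fractional-part function with no rounding step at all:

// Hypothetical stand-in for frac(); an assumption on my part,
// not what VEX actually does under the hood.
float my_frac(float x)
{
    return x - floor(x);
}

Whether VEX adds some truncation or rounding on top of that is exactly the part I am unsure about.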
Or am I missing something completely different?
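For contrast, here is the kind of case where I would expect a frac() comparison to misbehave (7.3 is just an arbitrary value with no exact binary representation, not something from my setup):

// 7.3 cannot be stored exactly as a binary float, so its fractional
// part does not compare equal to the literal 0.3.
float x = 7.3;
if (frac(x) != 0.3)
    warning("frac(7.3) does not compare equal to 0.3");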
