discuss@lists.openscad.org

OpenSCAD general discussion Mailing-list


Testing Array Equality

JB
Jordan Brown
Mon, Feb 12, 2024 6:21 AM

On 2/11/2024 6:54 AM, Michael Möller via Discuss wrote:

Ohhh, I am curious, too, but refrained from asking. I wondered, was
the question badly phrased, as in: what is the result for undef,
unequal sizes, unequal nesting ? Maybe the test was done, but the
answer was "wrong".

Vector comparison is done by comparing successive entries in the two
vectors.

  • If the result of the comparison of the two elements is undef, the
    result is undef.
  • If the result of the comparison of the two elements is false, the
    result is false.
  • If, after reaching the end of either vector, we have not yet reached
    the end of the other, the result is false.
  • If we reach the end of both vectors, the result is true.

https://github.com/openscad/openscad/blob/dd2da9e2908e881af6cb1fe90306fe6004081cb6/src/core/Value.cc#L718

And of course that per-entry comparison may itself be a vector
comparison, or any other type.
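
For illustration, a few cases as they come out in OpenSCAD, per the
rules above:

echo([1, 2, 3] == [1, 2, 3]);  // ECHO: true  - all entries compare equal
echo([1, 2] == [1, 2, 3]);     // ECHO: false - unequal lengths
echo([[1], 2] == [[1], 2]);    // ECHO: true  - nesting compares recursively
echo([1, 2] == [1, 3]);        // ECHO: false - an entry differs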

JB
Jordan Brown
Mon, Feb 12, 2024 7:22 AM

On 2/10/2024 4:05 PM, Sanjeev Prabhakar via Discuss wrote:

Maybe round a number to 4 or 5 decimal places and then comparison
should work.

E.g.
round(2.0000001,4)==2 , should give  "True" as result.

No, it won't.

If you're looking to compare to four decimal places, you would want

1.12344999999

and

1.12345000001

to compare equal, because they only differ by 0.00000000002, but the
first will round to 1.1234 and the second will round to 1.1235.
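
To see it in OpenSCAD - whose round() takes no precision argument, so
scaling by 10^4 stands in for rounding to four places; r4 is a name I
made up:

function r4(x) = round(x * 10000) / 10000;

echo(r4(1.12344999999));                       // ECHO: 1.1234
echo(r4(1.12345000001));                       // ECHO: 1.1235
echo(r4(1.12344999999) == r4(1.12345000001));  // ECHO: false, despite the 2e-11 difference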

The "compare the absolute value of the difference" answer seems correct,
except that to be general, if you're looking at floating point, you want
to compare to some number of significant figures, not some number of
decimal places.  If you want to compare to six significant figures, and
you're comparing numbers that are around 1, then you need to confirm
that their difference is less than 0.000001.  If you're comparing
numbers that are around a million, then you need to confirm that their
difference is less than 1.  You can do that by comparing the absolute
value of the difference to a fraction of one of the two numbers.
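
A minimal sketch of that relative comparison (approx_eq and its default
tolerance are my own invention, not a built-in):

function approx_eq(a, b, rel = 1e-6) =
    abs(a - b) <= rel * max(abs(a), abs(b));

echo(approx_eq(1000000, 1000000.5));  // ECHO: true  - equal to ~6 significant figures
echo(approx_eq(1, 1.5));              // ECHO: false - far apart at this magnitude
echo(approx_eq(0, 1e-12));            // ECHO: false - fails near zero, as noted next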

But that breaks down when the value you are comparing against is zero,
because there *are* no significant figures.

It is also problematic - not wrong, but you could easily make errors -
if the values that you are comparing are themselves the results of
subtracting large nearly-equal numbers.  Discussing single-precision
floating point for easy numbers, if you subtract two numbers that are
around a million and get a result of around 1, that result only has
about two significant figures.  Each of your two millions is plus or
minus about 0.1, and (simplistically) those error bars add, so your
result of around 1 is plus or minus 0.2; you should consider everything
from 0.8 to 1.2 to be the same as 1.  (I might be off by an order of
magnitude there, but hopefully the idea comes across.)

JB
Jordan Brown
Mon, Feb 12, 2024 9:02 AM

On 2/11/2024 4:25 AM, Raymond West via Discuss wrote:

you have to include a value for the fuzziness, so not difficult to add
that, as you've shown. It's OK for numbers, but how about for shapes?
Personally, I have not looked at in detail, but I think for practical
purposes, we could not bother with floating point. 64 bit integer can
cover a range big enough for most, imnsho.

Why do you think that using integers would help?

Say you used units of micrometers, so 1cm is 10,000 units.

What's one third of 10,000 units, in integer math?  3,333 units.  What's
that times three?  9,999 units.  Not 10,000 units.

What's 20,000 units, divided by three?  6,667 units.  Then times three? 
20,001, not 20,000.
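
The same arithmetic in OpenSCAD, which has no integer type, but round()
makes the integer behavior explicit:

echo(round(10000 / 3) * 3);  // ECHO: 9999, not 10000
echo(round(20000 / 3) * 3);  // ECHO: 20001, not 20000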

That kind of error is the root of floating point error: finite-length
representations cannot represent infinite-length values.
Different representations will differ in which values are
infinite-length, but they will pop up.

Floating point arithmetic neither magically solves problems nor
magically creates them.

There are three things that confuse people.

First, you learned base 10 arithmetic in school.  In base 10 arithmetic,
negative powers of 10 - really, negative powers of 2 and 5 - are simple
numbers.  One tenth is 0.1, a nice simple number.  You learned that one
third and one seventh and similar numbers are ugly and will misbehave if
you don't remember that they have an infinite number of digits.  If
you're limited to any finite number of digits, you'll get the "wrong"
answer.

Most computer floating point uses base 2 arithmetic.  In base 2
arithmetic, negative powers of 2 are simple numbers.  One half is 0.1;
one quarter is 0.01.  Negative powers of 5 (and thus negative powers of
10) are, like one third in base 10, infinite repeating fractions.  One
tenth is 0.0001100110011... forever.  Just like one third in base 10, if
you use any finite number of digits, you'll get the "wrong" answer.
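
The classic demonstration, which behaves the same in OpenSCAD as in any
language that uses binary floating point:

echo(0.1 + 0.2 == 0.3);  // ECHO: false - each literal is a rounded binary fraction
echo(0.1 + 0.2 - 0.3);   // ECHO: a tiny nonzero residue, about 5.5e-17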

The second thing is that it's *floating* point.  It works in terms of a
certain number of significant figures - about 7 for single precision and
about 16 for double precision.  That's kind of alien; it means that you
can represent very small numbers (like, say, one one-millionth) and very
large numbers (like, say, a million), but in single precision you can't
represent one million plus one one-millionth, because there aren't
enough digits.  You can't represent a hundred million plus one in single
precision, because there aren't enough significant figures available.
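
OpenSCAD calculates in double precision, so the same effect shows up at
about 16 significant figures:

echo(1e15 + 1 == 1e15);  // ECHO: false - still within double precision
echo(1e16 + 1 == 1e16);  // ECHO: true  - the +1 is lost below the 16th digit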

The final thing is that mostly it works.  Rounding often cancels out
those errors, so you don't see the problems.  That's kind of obvious
when you round for display, but it also addresses some of the
arithmetically tough cases like one-third times three.  Because it
mostly works, it's surprising when it doesn't.
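
One-third times three is exactly such a case; in double precision the
representation error cancels in the final rounding:

echo(1 / 3 * 3 == 1);  // ECHO: true - the multiply rounds back to exactly 1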

So what does that all tell us?

Any representation of numbers is limited.  The exact limitations will
vary from one representation to another.  (You might think that you
could represent numbers as fractions, but remember that the definition
of a rational number is that it can be represented as a fraction, and
the definition of an irrational number is that it can't be represented
as a fraction.  As soon as you start to do trig - like, to rotate
something - you're working with pi, an irrational number.  For that
matter, the diagonal of a unit cube is sqrt(2), also irrational.)

No matter what the representation, there will be cases that will yield
unexpected results.  You must take care not to assume that two different
calculations that mathematically yield the same number will yield the
same number when you actually do the arithmetic.  The biggest and most
obvious of those is that when you add up fractions, the total may not be
*precisely* what you expected.

If you say "that's all too hard, I want to just use integers"... sure. 
Go for it.  Double precision floating point is good for 53 bits; you can
represent all of the integers from about -10^15 to +10^15 absolutely
precisely.  You can add, subtract, and multiply them all day, and as
long as you don't exceed those limits you'll get perfect answers. 
Division, well, division is hard; if you divide and it doesn't come out
evenly, you'll end up with a fraction.  If you don't like the fraction,
you can floor() or ceil() or round() the result to get back to an
integer, but of course you'll be doing something "wrong" with the
remainder.  Even single-precision floating point (which is what STL
uses) will get you about 7 digits, so if you measure in micrometers you
can get up to about ten meters precisely.  You won't really avoid the
problems, but maybe they will be more obvious to you.
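
The 53-bit limit is easy to see directly; pow(2, 53) is about 9*10^15:

echo(pow(2, 52) + 1 == pow(2, 52));  // ECHO: false - adjacent integers still distinct
echo(pow(2, 53) + 1 == pow(2, 53));  // ECHO: true  - past 53 bits they collide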

NH
nop head
Mon, Feb 12, 2024 9:11 AM

My Casio calculator from the 1970s seems to round numbers perfectly: you
can divide 1 by 3 and multiply by 3 to get 1.  I presume it just
calculates an extra digit and rounds.  I wish OpenSCAD did the same.


DP
Dan Perry
Mon, Feb 12, 2024 9:44 AM

Integer math is a feature, not a bug.  The easiest way to check odd vs.
even is: num % 2 == 0.
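
As an OpenSCAD sketch (is_even is my name for it):

function is_even(n) = n % 2 == 0;

echo(is_even(4));  // ECHO: true
echo(is_even(7));  // ECHO: false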


MM
Michael Möller
Mon, Feb 12, 2024 11:24 AM

Hi Jordan,

(This is in reply to your mail of 12 Feb 2024, 07:21 CET.)  No, I am
perfectly happy with the array comparison; my curiosity was about why the
question was asked when the test case provided constrained its own
answer.


DM
Douglas Miller
Mon, Feb 12, 2024 12:56 PM

Agreed that it should be made explicit, but in my opinion, rather than
specifying tolerance as a number, epsilon should specify tolerance as a
/proportion/ of the values being compared: epsilon = 0.001 should mean
that the two values are being tested for equality within 0.1% rather than
literally within 0.001 -- e.g. fuzeq(0.001, 0.00199) should return false.

Or perhaps better still, an optional fourth parameter specifying whether
epsilon should be understood as a number or as a proportion, defaulting
to proportion.
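
A sketch of that proposal, with a hypothetical "relative" flag as the
extra parameter (the names and defaults here are mine):

function fuzeq(a, b, epsilon = 0.001, relative = true) =
    relative ? abs(a - b) < epsilon * max(abs(a), abs(b))
             : abs(a - b) < epsilon;

echo(fuzeq(0.001, 0.00199));  // ECHO: false - differs by ~50% of the larger value
echo(fuzeq(1000, 1000.5));    // ECHO: true  - within 0.1%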

On 2/10/2024 2:50 PM, Father Horton wrote:

Altering the definition of == could be breaking, though I would be a
bit surprised. But I'd rather see the fuzzy comparison made explicit.

function fuzeq(a, b, epsilon = 0.001) = abs(a - b) < epsilon;

echo(fuzeq(1, 1.1));
echo(fuzeq(1, 1.00001));
echo(fuzeq(1, 1.00001, epsilon = 0.0000001));

ECHO: false
ECHO: true
ECHO: false

SP
Sanjeev Prabhakar
Mon, Feb 12, 2024 4:27 PM

You have a point here; I did not think about this in enough depth.

But maybe you can achieve the same result by writing better rounding
logic: e.g., iteratively rounding a number, one decimal place at a time,
until you reach the decimal place you want.

In the example, 1.12344999999 rounded first to 10 decimal places becomes
1.12345, and a later step makes it 1.1235; similarly, 1.12345000001
rounded to 10 decimal places becomes 1.12345, and so on.  Either way the
two values end up equal.

Writing such logic in OpenSCAD may need some skill, but I think this is
the right logic.
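
A hypothetical sketch of that iterative scheme (round_to and iter_round
are made-up names; OpenSCAD's round() has no precision argument, so
scaling is used):

function round_to(x, d) = round(x * pow(10, d)) / pow(10, d);

// Round at 10 decimal places, then 9, and so on down to the target.
function iter_round(x, target, from = 10) =
    from <= target ? round_to(x, target)
                   : iter_round(round_to(x, from), target, from - 1);

echo(iter_round(1.12344999999, 4));  // ECHO: 1.1235 - the trailing 9s carry up
echo(iter_round(1.12345000001, 4));  // ECHO: 1.1235 - both now compare equal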


FH
Father Horton
Mon, Feb 12, 2024 4:32 PM

Why would this be better than the more usual (and faster)
subtract-and-compare method?


SP
Sanjeev Prabhakar
Mon, Feb 12, 2024 4:54 PM

If you could round the arrays to a given precision all at once, comparing
them would be much neater and easier for many people.

Subtract-and-compare is probably better for speed right now, until such a
rounding solution is available inside OpenSCAD.
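
A sketch of rounding a whole (possibly nested) array at once (round_vec
is a made-up name, not a built-in):

function round_vec(v, d = 4) =
    [for (x = v) is_list(x) ? round_vec(x, d)
                            : round(x * pow(10, d)) / pow(10, d)];

echo(round_vec([1.00001, [2.99999]]) == round_vec([1.00002, [3.00001]]));  // ECHO: true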

