discuss@lists.openscad.org

OpenSCAD general discussion Mailing-list


feature request: plug-ins

TP
Torsten Paul
Tue, Nov 24, 2015 6:43 PM

On 11/24/2015 11:22 AM, wolf wrote:

Z-fighting? Do we want that? In the light of what was recently discussed
here
http://forum.openscad.org/Simple-addition-of-numbers-introduces-error-td14408.html
, z-fighting is bound to be a floating point issue.  MKoch
http://forum.openscad.org/make-an-object-hollow-with-constant-wall-thickness-td14255.html
provided some interesting code producing shapes that look quite different
when previewed and rendered:
http://forum.openscad.org/file/n14735/MKoch-1.mkoch-1

This is not just Z-fighting; adding a convexity parameter (value 3 seems
to work for me) to the minkowski() fixes the white see-through parts in
the preview display.

Only the bottom plane actually has the Z-fighting issue.

ciao,
Torsten.

JL
Jean-Paul Louis
Wed, Nov 25, 2015 2:47 AM

Why not use the nanometer as the internal unit, and always use 64-bit integers?
That would be easy to handle. If my memory is not abusing me, KiCad uses the nanometer as its internal unit.

My $0.02,
Jean-Paul
AC9GH

On Nov 24, 2015, at 12:41 PM, doug moen doug@moens.org wrote:

Hi Wolf.

A point of terminology: you use the word "integer", when I think you are actually referring to the fixed point format that Alan has proposed: 64 bit fixed point with 32 bits before and after the binary point. I'll call this "fixed point", since if we actually switched to using integers, then we wouldn't be able to describe distances smaller than 1 mm.

64 bit floating point is better than the proposed 64 bit fixed point because it is so much more accurate, for the kinds of computations we do in OpenSCAD.

First, let's consider vertex positions in the model. Let's suppose that no part of the model is more than 256mm (25.6cm, ~10in) away from the origin, which will be true in the large majority of cases. If we are using fixed point numbers, then we'll be using at most 8 bits before the binary point. The high order 24 bits will be zero. So vertex coordinates have at most 40 bits of precision, compared to 53 bits of precision with floating point. With fixed point, the smaller the number, the less precision, while floating point numbers have a constant precision of 53 bits.

The precision of vertex coordinates in the final computed mesh is not the most important issue.
What's important is the accuracy of computations.

Many of the numbers in OpenSCAD computations are not vertex coordinates. Consider trigonometry and rotation. To rotate an object, you need to compute the sine and cosine of the angle, which will be in the range [1,-1]. With fixed point numbers, sines and cosines have 33 bits of precision, compared to 53 bits of precision with floating point.

The big advantage of floating point is when we perform multi-stage and iterative computations in OpenSCAD, with many intermediate results. At each stage, some accuracy is lost, and the errors accumulate at each stage. To minimize the impact of these errors on the model, you want to have as much precision as possible.

If you have a detailed understanding of how computer arithmetic works, then you can design your programs to minimize these errors. Frankly, this is an advanced topic, taught in 3rd year computer science at my university. Most professional computer programmers know little about this subject. Most OpenSCAD users know little about this subject. Floating point is better than fixed point for OpenSCAD because the vastly greater precision of floating point numbers makes numeric computation more accurate, and makes OpenSCAD more beginner friendly.

Doug Moen.

On 24 November 2015 at 05:22, wolf wv99999@gmail.com wrote:
The argument whether to use integer or floating point arithmetic can be
decided quite simply. If you care to look up equinumerosity
http://en.wikipedia.org/wiki/Equinumerosity on Wikipedia, you'll see
that a 64 bit integer has a one-to-one correspondence (a bijection) with a
64 bit floating point number, i.e. any combination of bits may be
interpreted either as an integer or a float. Floats give a greater range, at
the cost of reduced accuracy.

10^-7 mm (about the radius of an atom) is the smallest size an OpenSCAD model
can meaningfully be given; below that, the rules of physics change
(uncertainty principle, etc.), and modelling that in OpenSCAD is just plain
silly. If we take that as the bottom, then a signed integer can represent a
distance of at most 922337 km, or about twice the distance earth-moon, with an
accuracy of 10^-7 mm.
A 64 bit float uses only 53 bits for its mantissa, 10 bits less than the
integer, and thus its accuracy is only 1/1024 that of an integer
representation. It can represent at most about 900 km with an accuracy of
10^-7 mm. Is 900 km enough? Possibly. 922337 km for integer arithmetic is
certainly enough, and no more range is needed.

Computing speed? Have a look here
http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/ and here
http://nicolas.limare.net/pro/notes/2014/12/16_math_speed/ .

Z-fighting? Do we want that? In the light of what was recently discussed
here
http://forum.openscad.org/Simple-addition-of-numbers-introduces-error-td14408.html
, z-fighting is bound to be a floating point issue.  MKoch
http://forum.openscad.org/make-an-object-hollow-with-constant-wall-thickness-td14255.html
provided some interesting code producing shapes that look quite different
when previewed and rendered:
http://forum.openscad.org/file/n14735/MKoch-1.mkoch-1
http://forum.openscad.org/file/n14735/MKoch-2.mkoch-2
http://forum.openscad.org/file/n14735/MKoch-3.mkoch-3
http://forum.openscad.org/file/n14735/MKoch-4.mkoch-4

I have no problem deciding what I prefer: 64 bit integer arithmetic.
Accuracy is better, range is fully sufficient, computing speed is equal or
superior, and the likelihood of imaging problems that need to be overcome by
user-space tricks is lower than with floating point numbers - forget about
floats asap, please.
Wolf

--
View this message in context: http://forum.openscad.org/feature-request-plug-ins-tp14663p14735.html
Sent from the OpenSCAD mailing list archive at Nabble.com.


OpenSCAD mailing list
Discuss@lists.openscad.org
http://lists.openscad.org/mailman/listinfo/discuss_lists.openscad.org



W
wolf
Wed, Nov 25, 2015 10:44 AM

Thank you, Torsten, for the reference to convexity. Whoever is looking after
the manual, may I request that the parameters applicable to minkowski() be
added to the manual?

For the remainder of the comments received, I'll focus on what Doug Moen has
written, not because I want to pick on him, but because his comment contains
enough information on his educational background that I can compare it
against my own - and fill in what appears to be a gap in his mathematics
education. This gap is quite deep: what Doug references as an advanced topic
in computer science - how computer arithmetic works - I learned when I was
about 12 or 14 years old, when the most advanced computer available to me
could not even do divisions! It would take another 20 or 25 years before IBM
created the first PC. What I say in the following is based on my high school
maths - five hours of classes a week, and six-hour-long tests, but
definitely not university-level math. Thus, Doug would have been taught my
stuff only if he had attended university for a total of at least five or six
years - and he sounds as if he finished after three.

The first concept we were trained in thoroughly is to understand mapping: if
a==b then it does not matter whether I refer to a or b; they are just
different names for the same thing. Among mathematicians, this is called a
bijection or a one-to-one correspondence.

The second concept we had to understand is that a measurement (a distance,
an area, a weight, etc.) is always the product of a value multiplied by a
unit. A computer does not understand units; programs only manipulate values
(commonly, but somewhat inaccurately, called numbers). Units need to be
added to any computer output by a human, to connect those values/numbers to
the real world and give them meaning. Humans combine values and units
all the time, unconsciously. To become conscious of this requires quite
some intellectual effort, and many people never get that far.

The third concept we learned is that items may be collected in sets, and
that a number may be assigned bijectively to these items. The topic, set
theory, has become the foundation of much, if not all, of mathematics. Read
the CGAL documentation, and you cannot escape set theory. The only property
of sets that I need here is that many of them are countable, meaning that a
natural number (a number generated from n=1+1+1+...) can be assigned
bijectively to their elements. Integers have this property, as do ratios and
even roots. Because they are countable, they are at heart not different from
each other, and I am justified in calling them all integers (see mapping
above). That holds true also for all numbers representable on a computer, be
they integers, floats or strings, because of their finite length. But real
numbers are not countable, and thus need to be treated differently.

Real numbers need to be represented by infinite Taylor series, and when
these series are forced into what a computer can handle, errors, called
rounding errors, are inevitable. Sines and cosines, indispensable for
rotations, belong in this category. But because any real number can be
represented by a suitable infinite series, this has an effect on accuracy
only if an improper library is chosen.

It is getting rather late now, and I do not have the time to discuss number
formats and representations. From my short excursion into number theory it
is clear that deep inside any computer only integers are at work, and that
any other formats, including floats, are there for the user's convenience. I
do not believe in Alan Cox's speed advantage for integer arithmetic - my
impression is that this claim arises from counting clock cycles,
disregarding time lost e.g. when long queues collapse - but . . .

For me the true decision is made by the presence or absence of imaging
artifacts - z-fighting is bound to have its origin in the use of floats, and
from all I have read in this forum, manifold issues also appear to be due to
the use of floats.
For an "internal unit" I prefer 1E-7mm, the atomic scale, to match the most
detailed printer that has ever been built. Then I don't need any "decimal
point", as I am not limited by the choice of "millimeter". Fixed point
numbers are just floating point numbers in disguise, and do not have any
additional utility.

The real challenge is to raise Doug Moen's mathematical competence, so that
his prejudice against integer arithmetic can be overcome, and his
programming competence may shine.

Wolf

--
View this message in context: http://forum.openscad.org/feature-request-plug-ins-tp14663p14748.html
Sent from the OpenSCAD mailing list archive at Nabble.com.

AC
Alan Cox
Wed, Nov 25, 2015 2:47 PM

On Tue, 24 Nov 2015 12:41:16 -0500
doug moen doug@moens.org wrote:

Hi Wolf.

A point of terminology: you use the word "integer", when I think you are
actually referring to the fixed point format that Alan has proposed: 64 bit
fixed point with 32 bits before and after the binary point. I'll call this
"fixed point", since if we actually switched to using integers, then we
wouldn't be able to describe distances smaller than 1 mm.

They are the same thing; where you put the decimal point is a matter of
the units. You obviously want to pick a divide that reflects the normal
use of the program. Decimal points are part of the unit, not part of the
number.

64 bit floating point is better than the proposed 64 bit fixed point
because it is so much more accurate, for the kinds of computations we do in
OpenSCAD.

How is representing a partial atom close to the origin useful? 8)

Consider trigonometry and rotation. To rotate an object, you need to
compute the sine and cosine of the angle, which will be in the range
[1,-1]. With fixed point numbers, sines and cosines have 33 bits of
precision, compared to 53 bits of precision with floating point.

It depends how you divide up the bits. 32:32 may well not be the ideal
choice - working on notional nanometres as has been suggested is probably
a better split. Nanometres should suit everyone except crazed physicists
and microprocessor designers.

If you have a detailed understanding of how computer arithmetic works, then
you can design your programs to minimize these errors. Frankly, this is an

You don't need to minimise the errors, you need a mathematical model
which produces valid objects rapidly and within the tolerance of
printing. It's the difference between engineering and mathematicians.
OpenSCAD lives in the real world.

Mathematicians minimise errors, engineers verify that the cumulative
error won't change anything they care about.

advanced topic, taught in 3rd year computer science at my university. Most
professional computer programmers know little about this subject. Most
OpenSCAD users know little about this subject. Floating point is better
than fixed point for OpenSCAD because the vastly greater precision of
floating point numbers makes numeric computation more accurate, and makes
OpenSCAD more beginner friendly.

Disagree entirely. Again OpenSCAD is a 3D printing tool not a
mathematical modeller. Let me explain the end user view of OpenSCAD

  • Type in stuff
  • Press render
  • Export to STL
  • Print

Nobody cares what format is used internally provided when you give it
real world problems it gives you valid objects back within the tolerances
of the printer and in acceptable time.

And if you really want mathematical purity and perfection then turn the
objects into a set of implicit functions and do your final render by
solving them. Your internal representation will then be perfectly accurate
and your "conversion" algorithm just has to decide on the accuracy it
desires. That lets you do all sorts of funky stuff such as getting more
accurate the longer you leave it, or most accurate render we can do
within n seconds.

Alan

RW
Rogier Wolff
Wed, Nov 25, 2015 3:06 PM

On Wed, Nov 25, 2015 at 02:47:43PM +0000, Alan Cox wrote:

Consider trigonometry and rotation. To rotate an object, you need to
compute the sine and cosine of the angle, which will be in the range
[1,-1]. With fixed point numbers, sines and cosines have 33 bits of
precision, compared to 53 bits of precision with floating point.

It depends how you divide up the bits. 32:32 may well not be the ideal
choice - working on notional nanometres as has been suggested is probably
a better split. Nanometres should suit everyone except crazed physicists
and microprocessor designers.

If you switch to using integers/fixedpoint for coordinates, do you
have to do the same for the transformation matrices where the
rotations with cosines and sines end up?

If you switch to using 64 bit integers/fixed point for coordinates,
using a 32:32 fixed point format for the matrices would require
extracting something like the middle 64 bits from a 128 bit 64x64 bit
multiplication result, right? Wouldn't an FPU be more efficient at
doing this?

Roger. 

--
** R.E.Wolff@BitWizard.nl ** http://www.BitWizard.nl/ ** +31-15-2600998 **
**    Delftechpark 26 2628 XH  Delft, The Netherlands. KVK: 27239233    **
-- BitWizard writes Linux device drivers for any device you may have! --
The plan was simple, like my brother-in-law Phil. But unlike
Phil, this plan just might work.

AC
Alan Cox
Wed, Nov 25, 2015 3:20 PM

If you switch to using integers/fixedpoint for coordinates, do you
have to do the same for the transformation matrices where the
rotations with cosines and sines end up?

Yes. You do need enough bits for accuracy there too - which I think is
the key point Doug was making. On the other hand, we don't need to be
accurate enough to hit a dinner plate on the moon.

If you switch to using 64 bit integers/fixed point for coordinates,
using a 32:32 fixed point format for the matrices would require
extracting something like the middle 64 bits from a 128 bit 64x64 bit
multiplication result, right? Wouldn't an FPU be more efficient at
doing this?

On some Intel (but then you'd write all your operations in SSE3 not FPU)
and if all you cared about was Intel you'd be doing packed multiplies
on 64bit floats.

On ARM tablets and the like the FPU is generally very much weaker.

I'm not too fussed about either approach - and my work hat is Intel, so with
my work hat on I positively encourage FPU 8)

Alan
