discuss@lists.openscad.org

OpenSCAD general discussion Mailing-list

feature request: plug-ins

DM
doug moen
Mon, Nov 23, 2015 5:37 PM

Alan said: "Yes, but not half as difficult as debugging crashes caused by
scribbles from non parallel safe C++ code. The number of people who can
write good parallel C++ code is rather smaller than you'd want, not helped
by the fact that computer science as often taught, if anything, damages rather
than enhances those skills."

I agree. I worked on a project about 20 years ago where we were writing
highly parallel C code, and failing badly. We were using the shared memory
+ locks style of programming; the code was almost impossible to understand
and debug. This (pthreads) model of programming is broken, and should be
avoided. My most recent large C++ project used synchronous message passing,
and this worked out much better.

I've recently been learning Rust. This language is interesting because it's
a low level, high performance language like C and C++, but it provides a
compile time guarantee of no memory bugs (no dangling pointers, writing
past the end of an array, etc), and this guarantee extends to parallel
code: no memory corruption caused by race conditions and competing threads.
I'm not saying we can or should use Rust, only that it is worth checking
out.

Python has some problems: you can't run your code on multiple cores
simultaneously. Python supports threads, but the interpreter contains a
global lock (the GIL) that prevents more than one thread from executing
Python bytecode at a time, so you can only use a single core.
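The effect is easy to observe: a CPU-bound function gains nothing from being split across two threads under CPython. A minimal sketch (absolute timings will vary by machine):

```python
import threading
import time

def count(n):
    # Pure-Python CPU-bound loop; holds the GIL while it runs.
    while n:
        n -= 1

N = 5_000_000

t0 = time.perf_counter()
count(N)
count(N)
seq = time.perf_counter() - t0

t0 = time.perf_counter()
a = threading.Thread(target=count, args=(N,))
b = threading.Thread(target=count, args=(N,))
a.start(); b.start()
a.join(); b.join()
par = time.perf_counter() - t0

# On CPython the threaded run is no faster (often slower), because only
# one thread can execute Python bytecode at any instant.
print(f"sequential: {seq:.2f}s  two threads: {par:.2f}s")
```

The usual workaround is the `multiprocessing` module, which sidesteps the GIL by using separate interpreter processes, at the cost of serializing all data passed between them.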

On 23 November 2015 at 11:39, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:

Let's think about the performance issues more carefully.

Consider that, in the current architecture, this plugin API would need to
be invoked both during script evaluation, and also during CGAL rendering,
and I guess also preview. The generalized non-affine transformation
operator would have to be invoked during rendering, because only then is
the mesh available.

Now go measure how much of the CPU wall clock time is spent in the depths
of intersections unions and friends.

If you implement a complex operator in python it'll suck, yes. Ditto with
Blender, btw. But if you need a fundamental mathematical operation, the
chances are it should be in the core anyway.

The Blender examples are interesting in that to put it bluntly Blender
shows it works for real world problems.

If we use Python, then we are invoking Python code in the middle of CGAL
rendering. We have to convert CGAL numeric objects (which are dynamically
allocated rational numbers) into Python numeric objects, run the plugin
code, then convert the Python numbers back into CGAL numbers. This isn't
cheap, as I suspect that operations on CGAL numbers are the bulk of the
cost of rendering.

And if you want OpenSCAD to ever run at useful speed for larger objects
you'll have to either remove CGAL or replace the number implementation
you select with a fixed point integer one, which also fixes that.

We haven't implemented any of our ideas yet for speeding up rendering by
using multiple cores, or by using the GPU, although that has been discussed
a lot. If part of the rendering code is written in Python, this becomes
much more difficult.

Yes, but not half as difficult as debugging crashes caused by scribbles
from non parallel safe C++ code. The number of people who can write good
parallel C++ code is rather smaller than you'd want, not helped by the
fact that computer science as often taught, if anything, damages rather
than enhances those skills.

Alan

DM
doug moen
Mon, Nov 23, 2015 5:58 PM

Alan said: "And if you want OpenSCAD to ever run at useful speed for
larger objects you'll have to either remove CGAL or replace the numbers
implementation you select with a fixed point integer one which also fixes
that."

I agree that we need to stop using CGAL with rational numbers in order to
fix our performance problems.
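The cost gap between exact rationals and hardware floats is easy to demonstrate in miniature with Python's `fractions.Fraction`, which, like CGAL's rationals, uses dynamically allocated arbitrary-precision numerators and denominators (a stand-in for illustration, not CGAL itself):

```python
import time
from fractions import Fraction

def time_sum(xs):
    # Sum a list of numbers and report how long it took.
    t0 = time.perf_counter()
    total = sum(xs)
    return total, time.perf_counter() - t0

# The same 2000 values, as exact rationals and as 64-bit floats.
rationals = [Fraction(1, i) for i in range(1, 2001)]
floats = [1.0 / i for i in range(1, 2001)]

r_total, r_time = time_sum(rationals)
f_total, f_time = time_sum(floats)

# Exact arithmetic forces gcd reductions on ever-growing integers, so
# the rational sum is orders of magnitude slower than the float sum.
print(f"rational: {r_time:.4f}s  float: {f_time:.6f}s")
print(f"float result drifts by: {abs(float(r_total) - f_total):.2e}")
```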

I've read your posts about the benefits of fixed point numbers. I don't
agree with your conclusion, though. I think we are better off using 64
bit floating point numbers everywhere, in both the scripting language and
in the geometry engine. This eliminates the problem of repeatedly
converting between float and fixed point at various stages in geometry
processing. The conversion is potentially lossy in both directions, but
most of the time, we'll be throwing away accuracy in float->fixed
conversions. For example, a mesh that is manifold when represented in 64
bit floats can become non-manifold when converted to 64 bit fixed point.
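A toy illustration of that failure mode, using a hypothetical fixed-point step of 10^-4 mm: two vertices that are distinct in floating point snap to the same fixed-point cell, collapsing an edge (and potentially degenerating the triangles that share it):

```python
STEP = 1e-4  # hypothetical fixed-point resolution, in mm

def quantize(vertex, step=STEP):
    # Convert a float vertex to fixed point by snapping to the grid.
    return tuple(round(c / step) for c in vertex)

v1 = (0.00001, 0.0, 0.0)
v2 = (0.00004, 0.0, 0.0)

# Distinct as 64-bit floats...
print(v1 != v2)                      # True
# ...but identical after the float->fixed conversion.
print(quantize(v1) == quantize(v2))  # True
```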

On 23 November 2015 at 11:39, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:

Let's think about the performance issues more carefully.

Consider that, in the current architecture, this plugin API would need to
be invoked both during script evaluation, and also during CGAL rendering,
and I guess also preview. The generalized non-affine transformation
operator would have to be invoked during rendering, because only then is
the mesh available.

Now go measure how much of the CPU wall clock time is spent in the depths
of intersections unions and friends.

If you implement a complex operator in python it'll suck, yes. Ditto with
Blender, btw. But if you need a fundamental mathematical operation, the
chances are it should be in the core anyway.

The Blender examples are interesting in that to put it bluntly Blender
shows it works for real world problems.

If we use Python, then we are invoking Python code in the middle of CGAL
rendering. We have to convert CGAL numeric objects (which are dynamically
allocated rational numbers) into Python numeric objects, run the plugin
code, then convert the Python numbers back into CGAL numbers. This isn't
cheap, as I suspect that operations on CGAL numbers are the bulk of the
cost of rendering.

And if you want OpenSCAD to ever run at useful speed for larger objects
you'll have to either remove CGAL or replace the number implementation
you select with a fixed point integer one, which also fixes that.

We haven't implemented any of our ideas yet for speeding up rendering by
using multiple cores, or by using the GPU, although that has been discussed
a lot. If part of the rendering code is written in Python, this becomes
much more difficult.

Yes, but not half as difficult as debugging crashes caused by scribbles
from non parallel safe C++ code. The number of people who can write good
parallel C++ code is rather smaller than you'd want, not helped by the
fact that computer science as often taught, if anything, damages rather
than enhances those skills.

Alan

DM
doug moen
Mon, Nov 23, 2015 6:00 PM

Alan said: "Now go measure how much of the CPU wall clock time is spent in
the depths of intersections unions and friends."

I'd like to learn how to do that.

I'm strongly in favour of getting rid of "implicit union"; I've posted in
detail about this in the past.

On 23 November 2015 at 11:39, Alan Cox <alan@lxorguk.ukuu.org.uk> wrote:

Let's think about the performance issues more carefully.

Consider that, in the current architecture, this plugin API would need to
be invoked both during script evaluation, and also during CGAL rendering,
and I guess also preview. The generalized non-affine transformation
operator would have to be invoked during rendering, because only then is
the mesh available.

Now go measure how much of the CPU wall clock time is spent in the depths
of intersections unions and friends.

If you implement a complex operator in python it'll suck, yes. Ditto with
Blender, btw. But if you need a fundamental mathematical operation, the
chances are it should be in the core anyway.

The Blender examples are interesting in that to put it bluntly Blender
shows it works for real world problems.

If we use Python, then we are invoking Python code in the middle of CGAL
rendering. We have to convert CGAL numeric objects (which are dynamically
allocated rational numbers) into Python numeric objects, run the plugin
code, then convert the Python numbers back into CGAL numbers. This isn't
cheap, as I suspect that operations on CGAL numbers are the bulk of the
cost of rendering.

And if you want OpenSCAD to ever run at useful speed for larger objects
you'll have to either remove CGAL or replace the number implementation
you select with a fixed point integer one, which also fixes that.

We haven't implemented any of our ideas yet for speeding up rendering by
using multiple cores, or by using the GPU, although that has been discussed
a lot. If part of the rendering code is written in Python, this becomes
much more difficult.

Yes, but not half as difficult as debugging crashes caused by scribbles
from non parallel safe C++ code. The number of people who can write good
parallel C++ code is rather smaller than you'd want, not helped by the
fact that computer science as often taught, if anything, damages rather
than enhances those skills.

Alan

AC
Alan Cox
Mon, Nov 23, 2015 6:26 PM

On Mon, 23 Nov 2015 13:00:35 -0500
doug moen <doug@moens.org> wrote:

Alan said: "Now go measure how much of the CPU wall clock time is spent in
the depths of intersections unions and friends."

I'd like to learn how to do that.

Linux: gprof

There are equivalent Windows tools, but I'm not familiar enough with
things like Windows Xperf to really comment on them.

There are some more sophisticated techniques we use for things like the
Linux kernel but they aren't really needed for basic analysis.

I'm strongly in favour of getting rid of "implicit union"; I've posted in
detail about this in the past.

ImplicitCAD does that, and from a usability perspective it's not that
annoying - but it does make it incompatible. You can also do a lot of
deferring and bounding box optimisations to speed up common "hard" unions,
like trays full of objects for printing.
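The bounding-box trick can be sketched in a few lines: if two operands' boxes are disjoint, their union is just the two meshes side by side, and the expensive boolean can be skipped entirely (hypothetical mesh representation, not OpenSCAD's actual one):

```python
def boxes_overlap(a, b):
    # a, b: ((min_x, min_y, min_z), (max_x, max_y, max_z))
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def fast_union(mesh_a, box_a, mesh_b, box_b, full_csg_union):
    if not boxes_overlap(box_a, box_b):
        # Disjoint solids: no faces can intersect, so the "union" is
        # just both face lists together -- no CSG work needed.
        return mesh_a + mesh_b
    return full_csg_union(mesh_a, mesh_b)

# A tray of well-separated parts mostly hits the cheap path:
box1 = ((0, 0, 0), (10, 10, 10))
box2 = ((20, 0, 0), (30, 10, 10))
print(boxes_overlap(box1, box2))   # False
```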

Alan

AC
Alan Cox
Mon, Nov 23, 2015 6:29 PM

On Mon, 23 Nov 2015 12:58:02 -0500
doug moen <doug@moens.org> wrote:

Alan said: "And if you want OpenSCAD to ever run at useful speed for
larger objects you'll have to either remove CGAL or replace the numbers
implementation you select with a fixed point integer one which also fixes
that."

I agree that we need to stop using CGAL with rational numbers in order to
fix our performance problems.

I've read your posts about the benefits of fixed point numbers. I don't
agree with your conclusion, though. I think we are better off using 64
bit floating point numbers everywhere, in both the scripting language and
in the geometry engine. This eliminates the problem of repeatedly
converting between float and fixed point at various stages in geometry
processing. The conversion is potentially lossy in both directions, but
most of the time, we'll be throwing away accuracy in float->fixed
conversions. For example, a mesh that is manifold when represented in 64
bit floats can become non-manifold when converted to 64 bit fixed point.

And vice versa, plus float has the nasty property that the accuracy of
your model changes according to distance from the axes.

You don't do conversions. You never want to do conversions because
conversions muck stuff up. You do the lot in fixed point, or you do the
lot in float. Fixed point is a bit faster (way faster on a lot of Android
tablet devices). That's really a detail except on ARM - if OpenSCAD ran
in 64bit float I'd not even bother arguing about whether fixed point was
better 8)
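Python's `math.ulp` shows that drift directly: the spacing between adjacent doubles grows with magnitude, so a point far from the origin is represented more coarsely than one near it, while fixed point has the same step everywhere on the axis:

```python
import math

# Spacing between adjacent 64-bit floats at different magnitudes.
# The further from zero, the coarser the representable positions.
for x in (1.0, 1e3, 1e6):
    print(f"ulp({x:>9}) = {math.ulp(x):.3e}")
```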

Alan

W
wolf
Tue, Nov 24, 2015 10:22 AM

The argument whether to use integer or floating point arithmetic can be
decided quite simply. If you care to look up equinumerosity
<http://en.wikipedia.org/wiki/Equinumerosity> on Wikipedia, you'll see
that the 64 bit integers are in one-to-one correspondence (a bijection) with
the 64 bit floating point bit patterns, i.e. any combination of bits may be
interpreted either as an integer or as a float. Floats give a greater range,
at the cost of reduced accuracy.

10^-7 mm (about the radius of an atom) is the smallest size an OpenSCAD model
can meaningfully be given; below that, the rules of physics change
(uncertainty principle, etc.), and modelling that in OpenSCAD is just plain
silly. If we take that as the unit, then a signed 64 bit integer can
represent at most a distance of 922337 km, or twice the distance earth-moon,
with an accuracy of 10^-7 mm.
A 64 bit float uses only 53 bits for its mantissa, 10 bits less than the
integer, so its accuracy is only 1/1024 that of an integer representation.
It can represent at most 900 km with an accuracy of 10^-7 mm. Is 900 km
enough? Possibly. 922337 km for integer arithmetic is certainly enough, and
no more range is needed.
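The 53-bit mantissa limit is easy to check: consecutive 64 bit integers always stay distinct, while doubles above 2^53 can no longer represent every integer:

```python
big = 2 ** 53

# As integers, consecutive values are distinct.
print(big == big + 1)                  # False

# As 64-bit floats, 2**53 + 1 rounds back down to 2**53.
print(float(big) == float(big + 1))    # True
```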

Computing speed? Have a look here
<http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/> and here
<http://nicolas.limare.net/pro/notes/2014/12/16_math_speed/>.

Z-fighting? Do we want that? In the light of what was recently discussed here
<http://forum.openscad.org/Simple-addition-of-numbers-introduces-error-td14408.html>,
z-fighting is bound to be a floating point issue. MKoch
http://forum.openscad.org/make-an-object-hollow-with-constant-wall-thickness-td14255.html
provided some interesting code producing shapes that look quite different
when previewed and rendered:
http://forum.openscad.org/file/n14735/MKoch-1.mkoch-1
http://forum.openscad.org/file/n14735/MKoch-2.mkoch-2
http://forum.openscad.org/file/n14735/MKoch-3.mkoch-3
http://forum.openscad.org/file/n14735/MKoch-4.mkoch-4
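The addition-error thread referenced above boils down to binary floats not representing most decimal fractions exactly, so even trivial sums pick up an epsilon that can surface as coincident-surface (z-fighting) artifacts:

```python
total = 0.1 + 0.2

# Neither 0.1 nor 0.2 is exactly representable in binary, so the sum
# lands a hair away from 0.3.
print(total == 0.3)   # False
print(total)          # 0.30000000000000004

# Two surfaces meant to be coplanar can end up separated by such an
# epsilon -- one way preview and render come to disagree.
```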

I have no problem deciding what I prefer: 64 bit integer arithmetic.
Accuracy is better, range is fully sufficient, computing speed is equal or
superior, and the likelihood of imaging problems that need to be overcome
by user-space tricks goes with floating point numbers - forget about
floats asap, please.
Wolf

--
View this message in context: http://forum.openscad.org/feature-request-plug-ins-tp14663p14735.html
Sent from the OpenSCAD mailing list archive at Nabble.com.

J
jon
Tue, Nov 24, 2015 10:59 AM

You make the decision seem so simple!

On 11/24/2015 5:22 AM, wolf wrote:

The argument whether to use integer or floating point arithmetic can be
decided quite simply. If you care to look up  equinumerosity
http://en.wikipedia.org/wiki/Equinumerosity  in the wikipedia, you'll see
that a 64 bit integer has a one-on-one correspondence (a bijection) with a
64 bit floating point number, i.e. any combination of bits may be
interpreted either as an integer or a float. Floats give a greater range, at
the cost of reduced accuracy.

10E-7mm (about the radius of an atom) is the smallest size an OpenSCAD model
can meaningfully be given, below that, the rules of physics change
(uncertainty principle, etc), and modelling that in OpenSCAD is just plain
silly. If we take that as the bottom, then a signed integer can tops
represent a distance of 922337km, or twice the distance earth-moon, with an
accuracy of 10E-7mm.
A 64 bit float uses only 53 bits for its mantissa, 10 bits less than the
integer, and thus its accuracy is only 1/1024 that of an integer
representation. It can represent tops 900km with an accuracy of 10E-7mm. Is
900km enough? Possibly. 922337km for integer arithmetic is certainly enough,
and no more range is needed.

Computing speed? have a look  here
http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/  and  here
http://nicolas.limare.net/pro/notes/2014/12/16_math_speed/  .

Z-fighting? Do we want that? In the light of what was recently discussed
here
http://forum.openscad.org/Simple-addition-of-numbers-introduces-error-td14408.html
, z-fighting is bound to be a floating point issue.  MKoch
http://forum.openscad.org/make-an-object-hollow-with-constant-wall-thickness-td14255.html
provided some interesting code producing shapes that look quite different
when previewed and rendered:
http://forum.openscad.org/file/n14735/MKoch-1.mkoch-1
http://forum.openscad.org/file/n14735/MKoch-2.mkoch-2
http://forum.openscad.org/file/n14735/MKoch-3.mkoch-3
http://forum.openscad.org/file/n14735/MKoch-4.mkoch-4

I have no problem deciding what I prefer: 64 bit integer arithmetic.
Accuracy is better, range is fully sufficient, computing speed is equal or
superior, and the likelihood of imaging problems that need to be overcome by
user-space tricks associated with floating point numbers - forget about
floats asap, please.
Wolf

NH
nop head
Tue, Nov 24, 2015 11:48 AM

I think you are very naive regarding integers. When you start representing
fractions with them you are using fixed point notation, not integers. That
will have similar issues to floating point when you start adding fractions.
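The rounding issue described here can be shown in a few lines (a minimal sketch, assuming a Q32.32 format; the helper names are illustrative, not anything from OpenSCAD):

```python
# Minimal sketch of binary fixed point: Q32.32, i.e. a 64-bit integer with an
# implied binary point 32 bits up. Helper names are illustrative.
SCALE = 1 << 32

def to_fixed(x: float) -> int:
    """Round a real value to the nearest Q32.32 grid point."""
    return round(x * SCALE)

def to_float(f: int) -> float:
    """Recover the real value a Q32.32 integer represents."""
    return f / SCALE

tenth = to_fixed(0.1)    # 0.1 has no exact binary representation,
total = 10 * tenth       # so it is rounded before we even start adding
print(to_float(total))   # close to 1.0, but not exactly to_fixed(1.0)
print(total == to_fixed(1.0))
```

The stored 0.1 is already snapped to the 2^-32 grid, so ten of them do not sum to an exact 1.0; the same class of surprise people attribute to floats.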
On Nov 24, 2015 11:00 AM, "jon" jon@jonbondy.com wrote:

You make the decision seem so simple!

On 11/24/2015 5:22 AM, wolf wrote:

The argument whether to use integer or floating point arithmetic can be
decided quite simply. If you care to look up equinumerosity
<http://en.wikipedia.org/wiki/Equinumerosity> in Wikipedia, you'll see
that a 64 bit integer has a one-to-one correspondence (a bijection) with a
64 bit floating point number, i.e. any combination of bits may be
interpreted either as an integer or a float. Floats give a greater range,
at the cost of reduced accuracy.

1E-7mm (about the radius of an atom) is the smallest size an OpenSCAD
model can meaningfully be given; below that, the rules of physics change
(uncertainty principle, etc.), and modelling that in OpenSCAD is just
plain silly. If we take that as the bottom, then a signed 64 bit integer
can represent at most a distance of 922337km, or twice the distance
earth-moon, with an accuracy of 1E-7mm.
A 64 bit float uses only 53 bits for its mantissa, 10 bits fewer than the
integer, and thus its accuracy is only 1/1024 that of an integer
representation. It can represent at most 900km with an accuracy of 1E-7mm.
Is 900km enough? Possibly. 922337km for integer arithmetic is certainly
enough, and no more range is needed.

Computing speed? Have a look here
<http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/> and here
<http://nicolas.limare.net/pro/notes/2014/12/16_math_speed/>.

Z-fighting? Do we want that? In the light of what was recently discussed
here
<http://forum.openscad.org/Simple-addition-of-numbers-introduces-error-td14408.html>,
z-fighting is bound to be a floating point issue. MKoch
<http://forum.openscad.org/make-an-object-hollow-with-constant-wall-thickness-td14255.html>
provided some interesting code producing shapes that look quite different
when previewed and rendered:
<http://forum.openscad.org/file/n14735/MKoch-1.mkoch-1>
<http://forum.openscad.org/file/n14735/MKoch-2.mkoch-2>
<http://forum.openscad.org/file/n14735/MKoch-3.mkoch-3>
<http://forum.openscad.org/file/n14735/MKoch-4.mkoch-4>

I have no problem deciding what I prefer: 64 bit integer arithmetic.
Accuracy is better, range is fully sufficient, computing speed is equal
or superior, and the likelihood of imaging problems that need to be
overcome by user-space tricks associated with floating point numbers is
lower; forget about floats ASAP, please.
Wolf

AC
Alan Cox
Tue, Nov 24, 2015 12:25 PM

On Tue, 24 Nov 2015 11:48:43 +0000
nop head nop.head@gmail.com wrote:

I think you are very naive regarding integers. When you start representing
fractions with them you are using fixed point notation, not integer. That
will have similar issues as floating point when you start adding fractions.

There is no difference between "integer" and "fixed point". The "point"
is part of your units, not part of the value.

Agreed, you still have the same underlying issues of approximation. The
big value of integer is speed, and parallelism.

Alan
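The units point above can be sketched as follows (a minimal example, assuming integer nanometres as the base unit; all names are illustrative):

```python
# A "fixed point" length is just an integer count of small units;
# here, integer nanometres (an assumed base unit, 1e-6 mm).
NM_PER_MM = 1_000_000

def mm(x_nm: int) -> float:
    """Convert a nanometre count to millimetres, for display only."""
    return x_nm / NM_PER_MM

a = 12_345_678   # 12.345678 mm, stored exactly as an integer
b = 1            # 1 nm
s = a + b        # integer addition is exact: no rounding, no order dependence
print(mm(s))

# Multiplication is where the scale re-enters: (a nm) * (a nm) is in nm^2,
# so we must divide by the scale to get back to nm, and that division rounds.
area_nm = (a * a) // NM_PER_MM
print((a * a) % NM_PER_MM)   # nonzero remainder: low bits were discarded
```

Addition and subtraction stay exact; it is multiplication, division, and transcendental functions that reintroduce rounding, exactly as with floats.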

DM
doug moen
Tue, Nov 24, 2015 5:41 PM

Hi Wolf.

A point of terminology: you use the word "integer", when I think you are
actually referring to the fixed point format that Alan has proposed: 64 bit
fixed point with 32 bits before and after the binary point. I'll call this
"fixed point", since if we actually switched to using integers, then we
wouldn't be able to describe distances smaller than 1 mm.

64 bit floating point is better than the proposed 64 bit fixed point
because it is so much more accurate, for the kinds of computations we do in
OpenSCAD.

First, let's consider vertex positions in the model. Let's suppose that no
part of the model is more than 256mm (25.6cm, ~10in) away from the origin,
which will be true in the large majority of cases. If we are using fixed
point numbers, then we'll be using at most 8 bits before the binary point.
The high order 24 bits will be zero. So vertex coordinates have at most 40
bits of precision, compared to 53 bits of precision with floating point.
With fixed point, the smaller the number, the less precision, while
floating point numbers have a constant precision of 53 bits.
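The bit-counting in this paragraph can be checked with a small sketch (FRAC_BITS and the helper are illustrative assumptions, not OpenSCAD code):

```python
import math

FRAC_BITS = 32    # fractional bits in the proposed Q32.32 format

def fixed_significant_bits(x_mm: float) -> int:
    """Bits of precision a Q32.32 value of this magnitude actually carries:
    from its leading set bit down to the 2^-32 grid."""
    if x_mm == 0:
        return 0
    return max(0, math.floor(math.log2(abs(x_mm))) + 1 + FRAC_BITS)

DOUBLE_BITS = 53  # a 64-bit float always carries 53 significand bits

print(fixed_significant_bits(255.0))   # 40: 8 bits above the point + 32 below
print(fixed_significant_bits(0.001))   # small coordinates keep far fewer bits
```

A coordinate near 255 mm keeps 40 significant bits, but a 1-micron feature keeps only 23, while a double keeps 53 either way.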

The precision of vertex coordinates in the final computed mesh is not the
most important issue.
What's important is the accuracy of computations.

Many of the numbers in OpenSCAD computations are not vertex coordinates.
Consider trigonometry and rotation. To rotate an object, you need to
compute the sine and cosine of the angle, which will be in the range
[-1, 1]. With fixed point numbers, sines and cosines have 33 bits of
precision, compared to 53 bits of precision with floating point.
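A rough way to see the trigonometry point (a sketch assuming values are snapped to the 2^-32 grid that a Q32.32 format allows):

```python
import math

STEP = 2.0 ** -32   # the finest spacing a Q32.32 value can represent

def fixed_sin(deg: float) -> float:
    """sin(), then rounded to the nearest representable Q32.32 value."""
    return round(math.sin(math.radians(deg)) * 2**32) / 2**32

exact = math.sin(math.radians(1.0))
err = abs(fixed_sin(1.0) - exact)
# quantisation error is up to 2^-33 (~1.2e-10), versus ~1e-16 ulp for a double
print(err <= STEP / 2)
```

Every sine and cosine carries an error up to about 1.2e-10 before it is even multiplied into a rotation matrix.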

The big advantage of floating point is when we perform multi-stage and
iterative computations in OpenSCAD, with many intermediate results. At each
stage, some accuracy is lost, and the errors accumulate at each stage. To
minimize the impact of these errors on the model, you want to have as much
precision as possible.
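The accumulation effect can be sketched by rotating a point 360 times by one degree, once in plain doubles and once with every intermediate result snapped to a hypothetical Q32.32 grid:

```python
import math

def q(x: float) -> float:
    """Quantise to the nearest multiple of 2^-32, mimicking Q32.32 storage."""
    return round(x * 2**32) / 2**32

def rotate(x, y, deg, quantise=False):
    """Rotate (x, y) by deg degrees, optionally snapping the result."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    nx, ny = x * c - y * s, x * s + y * c
    return (q(nx), q(ny)) if quantise else (nx, ny)

p_float = p_fixed = (100.0, 0.0)
for _ in range(360):
    p_float = rotate(*p_float, 1.0)
    p_fixed = rotate(*p_fixed, 1.0, quantise=True)

# both should return to (100, 0); compare the accumulated drift
drift_float = math.hypot(p_float[0] - 100.0, p_float[1])
drift_fixed = math.hypot(p_fixed[0] - 100.0, p_fixed[1])
print(drift_float, drift_fixed)
```

With 720 quantisations the fixed-point drift is several orders of magnitude larger than the pure double-precision drift, which stays near the 1e-13 level.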

If you have a detailed understanding of how computer arithmetic works, then
you can design your programs to minimize these errors. Frankly, this is an
advanced topic, taught in 3rd year computer science at my university. Most
professional computer programmers know little about this subject. Most
OpenSCAD users know little about this subject. Floating point is better
than fixed point for OpenSCAD because the vastly greater precision of
floating point numbers makes numeric computation more accurate, and makes
OpenSCAD more beginner friendly.

Doug Moen.

On 24 November 2015 at 05:22, wolf wv99999@gmail.com wrote:

The argument whether to use integer or floating point arithmetic can be
decided quite simply. If you care to look up equinumerosity
<http://en.wikipedia.org/wiki/Equinumerosity> in Wikipedia, you'll see
that a 64 bit integer has a one-to-one correspondence (a bijection) with a
64 bit floating point number, i.e. any combination of bits may be
interpreted either as an integer or a float. Floats give a greater range,
at the cost of reduced accuracy.

1E-7mm (about the radius of an atom) is the smallest size an OpenSCAD
model can meaningfully be given; below that, the rules of physics change
(uncertainty principle, etc.), and modelling that in OpenSCAD is just
plain silly. If we take that as the bottom, then a signed 64 bit integer
can represent at most a distance of 922337km, or twice the distance
earth-moon, with an accuracy of 1E-7mm.
A 64 bit float uses only 53 bits for its mantissa, 10 bits fewer than the
integer, and thus its accuracy is only 1/1024 that of an integer
representation. It can represent at most 900km with an accuracy of 1E-7mm.
Is 900km enough? Possibly. 922337km for integer arithmetic is certainly
enough, and no more range is needed.

Computing speed? Have a look here
<http://nicolas.limare.net/pro/notes/2014/12/12_arit_speed/> and here
<http://nicolas.limare.net/pro/notes/2014/12/16_math_speed/>.

Z-fighting? Do we want that? In the light of what was recently discussed
here
<http://forum.openscad.org/Simple-addition-of-numbers-introduces-error-td14408.html>,
z-fighting is bound to be a floating point issue. MKoch
<http://forum.openscad.org/make-an-object-hollow-with-constant-wall-thickness-td14255.html>
provided some interesting code producing shapes that look quite different
when previewed and rendered:
<http://forum.openscad.org/file/n14735/MKoch-1.mkoch-1>
<http://forum.openscad.org/file/n14735/MKoch-2.mkoch-2>
<http://forum.openscad.org/file/n14735/MKoch-3.mkoch-3>
<http://forum.openscad.org/file/n14735/MKoch-4.mkoch-4>

I have no problem deciding what I prefer: 64 bit integer arithmetic.
Accuracy is better, range is fully sufficient, computing speed is equal
or superior, and the likelihood of imaging problems that need to be
overcome by user-space tricks associated with floating point numbers is
lower; forget about floats ASAP, please.
Wolf

--
View this message in context:
http://forum.openscad.org/feature-request-plug-ins-tp14663p14735.html
Sent from the OpenSCAD mailing list archive at Nabble.com.


OpenSCAD mailing list
Discuss@lists.openscad.org
http://lists.openscad.org/mailman/listinfo/discuss_lists.openscad.org
