NH
nop head
Sun, Mar 26, 2017 12:47 PM
Sorry OpenCG should be OpenGL.
On 26 March 2017 at 13:46, nop head nop.head@gmail.com wrote:
No I don't think CGAL does any actual rendering of pixels. It just
calculates the geometry using the CPU. After that OpenSCAD can render the
results to screen very quickly using OpenCG and your graphics card. When
you pan around with the mouse after F6 it is faster than after F5.
On 26 March 2017 at 13:03, noil noil@gmx.net wrote:
I don't think that is the main problem.
Even for the number of faces CGAL does render, it often takes far longer
than it should.
Is it possible that it renders on the CPU?
nophead wrote
Sorry typed the opposite of what I meant.
I don't think it would be too hard to speed up CGAL by many orders of
magnitude, but that may be naive. It obviously doesn't make any attempt to
skip the facets that need no modification in a union, which is often all of
them.
DM
doug moen
Sun, Mar 26, 2017 2:37 PM
Hi Ronaldo.
"F-rep ... the absolute necessity of a feature detection algorithm"
If you use marching cubes to convert F-Rep to STL, then the results look
terrible if you have corners and edges. But this is a solved problem, there
are modern high quality conversion algorithms. For example, the following,
and also Dual Contouring.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.3235&rep=rep1&type=pdf
The ImpliSolid open source project claims to implement the above algorithm.
I haven't tried their code yet.
https://github.com/sohale/implisolid/
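To make the conversion step concrete, here is a minimal 2D analogue of marching cubes (marching squares) in Python; it is a sketch of the general sampling-and-interpolation idea, not the algorithm from the paper above, and all names are my own. It samples an implicit circle function on a grid and extracts boundary segments by linearly interpolating the zero crossing on each cell edge.

```python
import math

def circle_sdf(x, y, r=1.0):
    # Signed distance to a circle of radius r centered at the origin.
    return math.hypot(x, y) - r

def marching_squares(f, lo=-1.5, hi=1.5, n=64):
    """Extract line segments approximating the zero contour of f."""
    h = (hi - lo) / n

    def zero_on_edge(p, q, fp, fq):
        t = fp / (fp - fq)  # linear interpolation of the zero crossing
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

    segments = []
    for i in range(n):
        for j in range(n):
            x0, y0 = lo + i * h, lo + j * h
            corners = [(x0, y0), (x0 + h, y0), (x0 + h, y0 + h), (x0, y0 + h)]
            vals = [f(x, y) for x, y in corners]
            crossings = []
            for k in range(4):
                a, b = corners[k], corners[(k + 1) % 4]
                fa, fb = vals[k], vals[(k + 1) % 4]
                if (fa < 0) != (fb < 0):
                    crossings.append(zero_on_edge(a, b, fa, fb))
            if len(crossings) == 2:  # skip ambiguous saddle cells
                segments.append(tuple(crossings))
    return segments

segs = marching_squares(circle_sdf)
total = sum(math.dist(a, b) for a, b in segs)
print(f"{len(segs)} segments, total length {total:.3f}")  # close to 2*pi
```

For a smooth circle this works well; the corner problem Ronaldo describes shows up because this per-edge linear interpolation has no way to place a vertex exactly on a sharp feature inside a cell, which is what Dual Contouring and the paper above address.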
"Another issue is the apparent lack of robust convex hull and Minkowski
algorithms for F-rep."
Have you found any non-robust convex hull algorithms? If so, please share.
My only idea so far is to convert the shapes to polyhedra first, as part of
a hybrid system.
I haven't tackled Minkowski sum yet. The only paper I found so far is this:
http://www.hyperfun.org/Mink.pdf
Do you know of any other algorithms, even if they aren't robust?
"And there is another issue I found trying to model the intersection of a
sphere and a paraboloid: the primitive function scale. If a primitive
function f() is multiplied by a constant, the corresponding solid is not
changed, as it is the set of points p such that f(p)>=0. But when the
primitive is intersected with another one (the min of the two functions),
the relative scale of the two functions matters and the intersection
polygonalization may show very noticeable artifacts. I observed this issue
in my system and was able to reproduce it in Hyperfun."
I haven't run into this specific problem, as I haven't implemented
polygonalization yet.
Multiplying a distance function by a constant, without doing anything else,
will definitely screw up the distance field without changing the shape, as
you have noted, and that causes a problem for some shape operators. But
that's not how you scale a shape. To scale a distance function f by a
scalar constant c, you use
fscaled(p) = c * f(p/c)
which doesn't mess up the distance field of the result. So maybe I don't
understand exactly what you are doing.
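A quick numerical illustration of the point (a sketch in Python; the helper names are mine, not from any of the systems discussed here): for an exact sphere distance function, multiplying by a constant moves the distance values without moving the boundary, while c * f(p/c) scales the shape and keeps the field an exact distance.

```python
import math

def sphere_sdf(p, r=1.0):
    # Exact signed distance to a sphere of radius r centered at the origin.
    return math.hypot(*p) - r

c = 3.0
p = (6.0, 0.0, 0.0)

# Proper scaling: fscaled(p) = c * f(p / c) gives a sphere of radius c*r,
# and the value is still the true Euclidean distance.
fscaled = c * sphere_sdf(tuple(x / c for x in p))
print(fscaled)  # 3.0: p really is 3 units outside the radius-3 sphere

# Naive scaling: c * f(p) keeps the same boundary (radius 1), but the
# field no longer measures distance (its gradient magnitude is c, not 1).
naive = c * sphere_sdf(p)
print(naive)  # 15.0, though the true distance from p to the unit sphere is 5.0
```

The naive version is exactly the situation Ronaldo describes: the zero set is unchanged, but any operator that compares field values across shapes (like min/max booleans) sees the mismatched magnitudes.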
The more general problem is this. For any given shape, there are an
infinite number of distance functions that represent it (all functions
whose zeros lie at the shape's boundary). The "best" distance function is
usually the one that gives the exact Euclidean distance to the closest
boundary, but often we deal with others that only approximate this. Some
shape operations are sensitive to the structure of the distance function,
and need it to satisfy certain properties (e.g., it should be Euclidean for
points outside the boundary). My system will provide a
protocol whereby shape operations can specify requirements on the distance
field structure to their shape arguments. That will allow me to implement a
larger set of shape operations that are more robust or have a higher range
of applicability. (I don't know of any other F-Rep systems that work this
way.)
On 25 March 2017 at 18:55, Ronaldo Persiano rcmpersiano@gmail.com wrote:
I agree that F-rep is a good way to go. I implemented an F-rep
evaluation system a few months ago, written in the OpenSCAD language, and
found how easy it is to define not only the boolean operators but also other
geometric operators like twist and even solid sweep. But I have faced its
drawbacks too. The main one is the absolute necessity of a feature
detection algorithm (I have not implemented one), which is not easy. Even
Hyperfun, which has a worldwide development team, has no feature detection
yet. Without feature detection, all solid edges seem roughly blended even
if you refine the evaluation grid a lot.
Another issue is the apparent lack of robust convex hull and Minkowski
algorithms for F-rep. And there is another issue I found trying to model
the intersection of a sphere and a paraboloid: the primitive function
scale. If a primitive function f() is multiplied by a constant, the
corresponding solid is not changed, as it is the set of points p such that
f(p)>=0. But when the primitive is intersected with another one (the min of
the two functions), the relative scale of the two functions matters and the
intersection polygonalization may show very noticeable artifacts. I
observed this issue in my system and was able to reproduce it in Hyperfun.
Anyway, despite those issues I am a big enthusiast of F-rep.
2017-03-25 18:09 GMT-03:00 doug moen doug@moens.org:
Hi David.
B-Rep means boundary representation: a shape is represented by its
boundary. OpenSCAD's mesh/polyhedron representation is a kind of B-Rep.
NURBS are another kind of boundary representation. CSG means that you
represent a shape as a tree of primitive shapes and operations on those
shapes (like boolean operations and affine transformations). OpenSCAD
evaluates a program to construct a CSG, which is then converted to a mesh
(B-Rep) using CGAL et al.
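The CSG tree doug describes can be sketched as nothing more than nested nodes, primitives at the leaves and operations above them (a toy illustration in Python, not OpenSCAD's actual data structures):

```python
from dataclasses import dataclass

# Toy CSG nodes: primitives at the leaves, operations at interior nodes.
@dataclass
class Sphere:
    r: float

@dataclass
class Cube:
    size: float

@dataclass
class Union:
    children: list

@dataclass
class Difference:
    children: list

# The OpenSCAD program  difference() { cube(2); sphere(1.2); }  as a tree:
model = Difference([Cube(2), Sphere(1.2)])

def count_primitives(node):
    # Walk the tree; a real evaluator would instead emit geometry per node.
    if isinstance(node, (Union, Difference)):
        return sum(count_primitives(c) for c in node.children)
    return 1

print(count_primitives(model))  # 2
```

Evaluating such a tree to a mesh is the expensive F6 step that CGAL performs in OpenSCAD; the tree itself is cheap to build and transform.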
I've looked at BRL-CAD, but I encountered the same learning curve
problem, and haven't tried to use it. It seems to use CSG and two kinds of
B-Rep (NURBS and meshes) to represent shapes. I don't see any support for
F-Rep.
F-Rep is functional representation. There is no explicit representation
of a shape's boundary. Instead, there is a distance function that maps
every point [x,y,z] in 3-space onto a signed distance value, which
indicates the distance of the point from the boundary of the object, and
the sign indicates whether the point is inside or outside of the object.
Antimony and ImplicitCAD use F-Rep. Although they can export STL files,
they can't directly manipulate B-Rep representations the same way OpenSCAD
can.
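In a signed-distance convention where negative means inside and positive means outside, the booleans become one-line min/max combinations of the distance functions; a minimal sketch, with my own function names (note that under Ronaldo's opposite convention, f(p)>=0 inside, the roles of min and max swap, which is why he describes intersection as a min):

```python
import math

def sphere(center, r):
    # Signed distance: negative inside, zero on the boundary, positive outside.
    return lambda p: math.dist(p, center) - r

def union(f, g):
    return lambda p: min(f(p), g(p))   # closer of the two boundaries

def intersection(f, g):
    return lambda p: max(f(p), g(p))   # must be inside both

a = sphere((0, 0, 0), 1.0)
b = sphere((1, 0, 0), 1.0)
lens = intersection(a, b)  # lens-shaped overlap of the two spheres

print(lens((0.5, 0, 0)))  # -0.5 (inside the overlap)
print(lens((2, 0, 0)) > 0)  # True (outside)
```

This is why boolean operations are so cheap in F-Rep: no boundary geometry is ever computed, only function composition.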
I've read the source code for ImplicitCAD and Antimony, and I'm currently
trying to build an F-Rep geometry kernel that runs on a GPU, to see if it
will have the performance characteristics I want. F-Rep has very cheap
boolean operations, and avoids the cost of performing boolean operations on
polyhedral approximations of curved surfaces. Also, F-Rep is easy to
parallelize on a GPU. So right now it looks like a good way to achieve my
goals. Eventually, though, I'd like to extend this to an F-Rep/B-Rep hybrid
system, to make it more general purpose.
It would be difficult to integrate my work back into OpenSCAD. One
problem is that my approach represents shapes as functions, and requires a
functional language with first-class function values (i.e., closures); it's
very difficult to retrofit this feature into OpenSCAD without breaking
backward compatibility. Another problem is that I don't convert curved
surfaces into polygons before applying boolean operations and other
transformations: they remain mathematically exact curved surfaces. In
OpenSCAD, users rely on the fact that circle, sphere and cylinder return
polygons and polyhedra, not curved shapes. E.g., circle(r,$fn=6) is a
regular hexagon. My system has circle and regular_polygon as separate
primitives, so that would be another backward compatibility problem.
RP
Ronaldo Persiano
Sun, Mar 26, 2017 3:22 PM
doug,
I am afraid to go into detail on this matter within this thread. We may
either open a new one or discuss it privately. So I will address only a few
points here.
No, I don't know of any algorithm, even a non-robust one, to compute the
convex hull of implicitly defined models. I have been contacted privately
by a forum member who told me he is working on that.
About Minkowski: I have not fully read the Pasko paper; however, they say
in the conclusions that: "Implementation of Minkowski sums for 3D objects
will require parallel or distributed processing; we are planning to use
networked workstations with PVM system. Because the numerical procedure is
quite time consuming, the procedural definition is not directly applicable
in a modern practical modeling system." That was enough for me.
The issue I have observed relates to the choice of the "distance function".
It is not a model scale. We may discuss it elsewhere.
2017-03-26 11:37 GMT-03:00 doug moen doug@moens.org:
Hi Ronaldo.
"F-rep ... the absolute necessity of a feature detection algorithm"
If you use marching cubes to convert F-Rep to STL, then the results look
terrible if you have corners and edges. But this is a solved problem, there
are modern high quality conversion algorithms. For example, the following,
and also Dual Contouring.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.
3235&rep=rep1&type=pdf
The ImpliSolid open source project claims to implement the above
algorithm. I haven't tried their code yet.
https://github.com/sohale/implisolid/
"Another issue is the apparent lack of a robust convex hull and minkowsky
algorithms for F-rep."
Have you found any non-robust convex hull algorithms? If so, please share.
My only idea so far is to convert the shapes to polyhedra first, as part of
a hybrid system.
I haven't tackled Minkowski sum yet. The only paper I found so far is this:
http://www.hyperfun.org/Mink.pdf
Do you know of any other algorithms, even if they aren't robust?
"And there is another issue I found trying to model the intersection of a
sphere and a paraboloid: the primitive function scale. If a primitive
function f() is multiplied by a constant the corresponding solid is not
changed as its is the set of points p such f(p)>=0. But when the primitive
is intercepted with another one (the min of two functions) the relative
scale of the two functions matters and the intersection polygonalization
may show very noticeable artifacts. I observed this issue in my system and
were able to reproduce it in Hyperfun."
I haven't run into this specific problem, as I haven't implemented
polygonalization yet.
Multiplying a distance function by a constant, without doing anything
else, will definitely screw up the distance field without changing the
shape, as you have noted, and that causes a problem for some shape
operators. But that's not how you scale a shape. To scale a distance
function f by a scalar constant c, you use
fscaled(p) = c * f(p/c)
which doesn't mess up the distance field of the result. So maybe I don't
understand exactly what you are doing.
The more general problem is this. For any given shape, there are an
infinite number of distance functions that represent it (all functions
whose zeros lie at the shape's boundary). The "best" distance function is
usually the one that gives the exact Euclidean distance to the closest
boundary, but often we deal with others that only approximate this. Some
shape operations are sensitive to the structure of the distance function,
and need it to satisfy certain properties (eg, it should be Euclidean for
points outside the boundary, for example). My system will provide a
protocol whereby shape operations can specify requirements on the distance
field structure to their shape arguments. That will allow me to implement a
larger set of shape operations that are more robust or have a higher range
of applicability. (I don't know of any other F-Rep systems that work this
way.)
On 25 March 2017 at 18:55, Ronaldo Persiano rcmpersiano@gmail.com wrote:
I agree that F-rep is a good way to go. I have implemented a F-rep
evaluation system a few months ago written in OpenSCAD language and have
found how easy is to define not only the boolean operators as other
geometric operators like twist and even solid sweep. But I have faced its
drawbacks too. The main one is the absolute necessity of a feature
detection algorithm (I have not implemented none) which is not easy. Even
Hyperfun which has a worldwide development team has no feature detection
yet. Without feature detection all solid edges seems roughly blended even
if you refine the evaluation grid a lot.
Another issue is the apparent lack of a robust convex hull and minkowsky
algorithms for F-rep. And there is another issue I found trying to model
the intersection of a sphere and a paraboloid: the primitive function
scale. If a primitive function f() is multiplied by a constant the
corresponding solid is not changed as its is the set of points p such
f(p)>=0. But when the primitive is intercepted with another one (the min of
two functions) the relative scale of the two functions matters and the
intersection polygonalization may show very noticeable artifacts. I
observed this issue in my system and were able to reproduce it in Hyperfun.
Anyway, despite those issues I am a big enthusiast of F-rep.
2017-03-25 18:09 GMT-03:00 doug moen doug@moens.org:
Hi David.
B-Rep means boundary representation: a shape is represented by its
boundary. OpenSCAD's mesh/polyhedron representation is a kind of B-Rep.
NURBS are another kind of boundary representation. CSG means that you
represent a shape as a tree of primitive shapes and operations on those
shapes (like boolean operations and affine transformations). OpenSCAD
evaluates a program to construct a CSG, which is then converted to a mesh
(B-Rep) using CGAL et al.
I've looked at BRL-CAD, but I encountered the same learning curve
problem, and haven't tried to use it. It seems to use CSG and two kinds of
B-Rep (NURBS and meshes) to represent shapes. I don't see any support for
F-Rep.
F-Rep is functional representation. There is no explicit representation
of a shape's boundary. Instead, there is a distance function that maps
every point [x,y,z] in 3-space onto a signed distance value, which
indicates the distance of the point from the boundary of the object, and
the sign indicates whether the point is inside or outside of the object.
Antimony and ImplicitCAD use F-Rep. Although they can export STL files,
they can't directly manipulate B-Rep representations the same way OpenSCAD
can.
I've read the source code for ImplicitCAD and Antimony, and I'm
currently trying to build an F-Rep geometry kernel that runs on a GPU, to
see if it will have the performance characteristics I want. F-Rep has very
cheap boolean operations, and avoids the cost of performing boolean
operations on polyhedral approximations of curved surfaces. Also, F-Rep is
easy to parallelize on a GPU. So right now it looks like a good way to
achieve my goals. Eventually, though, I'd like to extend this to an
F-Rep/B-Rep hybrid system, to make it more general purpose.
It would be difficult to integrate my work back into OpenSCAD. One
problem is that my approach represents shapes as functions, and requires a
functional language with first class function values (ie, closures), and
it's very difficult to retrofit this feature into OpenSCAD without breaking
backward compatibility. Another problem is due to the fact that I don't
convert curved surfaces into polygons before applying boolean operations
and other transformations. They remain as mathematically exact curved
surfaces. In OpenSCAD, users rely on the fact that circle, sphere and
cylinder return polygons and polyhedra, not curved shapes. Eg,
circle(r,$fn=6) is a regular hexagon. My system has circle and
regular_polygon as separate primitives. So that would be another backward
compatibility problem.
dough,
I am afraid to go into details on this matter within this thread. We may
either open a new one or discuss it privately. So I will address some few
points here.
No, I don't know any even non robust algorithm to compute convex hull of
implicitly-defined models. I have been contacted privately by a forum
member who told me he is working on that.
About minkowski: I have not fully read the Pasko paper; however, they say
in the conclusions that: "Implementation of Minkowski sums for 3D objects
will require parallel or distributed processing; we are planning to use
networked workstations with PVM system. Because the numerical procedure is
quite time consuming, the procedural definition is not directly applicable
in a modern practical modeling system." That was enough for me.
The issue I have observed relates to the choice of the "distance function".
It is not a model scale. We may discuss it elsewhere.
2017-03-26 11:37 GMT-03:00 doug moen <doug@moens.org>:
> Hi Ronaldo.
>
> "F-rep ... the absolute necessity of a feature detection algorithm"
>
> If you use marching cubes to convert F-Rep to STL, then the results look
> terrible if you have corners and edges. But this is a solved problem, there
> are modern high quality conversion algorithms. For example, the following,
> and also Dual Contouring.
> http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.73.
> 3235&rep=rep1&type=pdf
>
> The ImpliSolid open source project claims to implement the above
> algorithm. I haven't tried their code yet.
> https://github.com/sohale/implisolid/
>
> "Another issue is the apparent lack of a robust convex hull and minkowsky
> algorithms for F-rep."
>
> Have you found any non-robust convex hull algorithms? If so, please share.
> My only idea so far is to convert the shapes to polyhedra first, as part of
> a hybrid system.
>
> I haven't tackled Minkowski sum yet. The only paper I found so far is this:
> http://www.hyperfun.org/Mink.pdf
> Do you know of any other algorithms, even if they aren't robust?
>
> "And there is another issue I found trying to model the intersection of a
> sphere and a paraboloid: the primitive function scale. If a primitive
> function f() is multiplied by a constant the corresponding solid is not
> changed as its is the set of points p such f(p)>=0. But when the primitive
> is intercepted with another one (the min of two functions) the relative
> scale of the two functions matters and the intersection polygonalization
> may show very noticeable artifacts. I observed this issue in my system and
> were able to reproduce it in Hyperfun."
>
> I haven't run into this specific problem, as I haven't implemented
> polygonalization yet.
>
> Multiplying a distance function by a constant, without doing anything
> else, will definitely screw up the distance field without changing the
> shape, as you have noted, and that causes a problem for some shape
> operators. But that's not how you scale a shape. To scale a distance
> function f by a scalar constant c, you use
> fscaled(p) = c * f(p/c)
> which doesn't mess up the distance field of the result. So maybe I don't
> understand exactly what you are doing.
>
> The more general problem is this. For any given shape, there are an
> infinite number of distance functions that represent it (all functions
> whose zeros lie at the shape's boundary). The "best" distance function is
> usually the one that gives the exact Euclidean distance to the closest
> boundary, but often we deal with others that only approximate this. Some
> shape operations are sensitive to the structure of the distance function,
> and need it to satisfy certain properties (eg, it should be Euclidean for
> points outside the boundary, for example). My system will provide a
> protocol whereby shape operations can specify requirements on the distance
> field structure to their shape arguments. That will allow me to implement a
> larger set of shape operations that are more robust or have a higher range
> of applicability. (I don't know of any other F-Rep systems that work this
> way.)
>
>
> On 25 March 2017 at 18:55, Ronaldo Persiano <rcmpersiano@gmail.com> wrote:
>
>> I agree that F-rep is a good way to go. I have implemented a F-rep
>> evaluation system a few months ago written in OpenSCAD language and have
>> found how easy is to define not only the boolean operators as other
>> geometric operators like twist and even solid sweep. But I have faced its
>> drawbacks too. The main one is the absolute necessity of a feature
>> detection algorithm (I have not implemented none) which is not easy. Even
>> Hyperfun which has a worldwide development team has no feature detection
>> yet. Without feature detection all solid edges seems roughly blended even
>> if you refine the evaluation grid a lot.
>>
>> Another issue is the apparent lack of a robust convex hull and minkowsky
>> algorithms for F-rep. And there is another issue I found trying to model
>> the intersection of a sphere and a paraboloid: the primitive function
>> scale. If a primitive function f() is multiplied by a constant the
>> corresponding solid is not changed as its is the set of points p such
>> f(p)>=0. But when the primitive is intercepted with another one (the min of
>> two functions) the relative scale of the two functions matters and the
>> intersection polygonalization may show very noticeable artifacts. I
>> observed this issue in my system and were able to reproduce it in Hyperfun.
>>
>> Anyway, despite those issues I am a big enthusiast of F-rep.
>>
>>
>> 2017-03-25 18:09 GMT-03:00 doug moen <doug@moens.org>:
>>
>>> Hi David.
>>>
>>> B-Rep means boundary representation: a shape is represented by its
>>> boundary. OpenSCAD's mesh/polyhedron representation is a kind of B-Rep.
>>> NURBS are another kind of boundary representation. CSG means that you
>>> represent a shape as a tree of primitive shapes and operations on those
>>> shapes (like boolean operations and affine transformations). OpenSCAD
>>> evaluates a program to construct a CSG, which is then converted to a mesh
>>> (B-Rep) using CGAL et al.
>>>
>>> I've looked at BRL-CAD, but I encountered the same learning curve
>>> problem, and haven't tried to use it. It seems to use CSG and two kinds of
>>> B-Rep (NURBS and meshes) to represent shapes. I don't see any support for
>>> F-Rep.
>>>
>>> F-Rep is functional representation. There is no explicit representation
>>> of a shape's boundary. Instead, there is a distance function that maps
>>> every point [x,y,z] in 3-space onto a signed distance value, which
>>> indicates the distance of the point from the boundary of the object, and
>>> the sign indicates whether the point is inside or outside of the object.
>>> Antimony and ImplicitCAD use F-Rep. Although they can export STL files,
>>> they can't directly manipulate B-Rep representations the same way OpenSCAD
>>> can.
>>>
>>> I've read the source code for ImplicitCAD and Antimony, and I'm
>>> currently trying to build an F-Rep geometry kernel that runs on a GPU, to
>>> see if it will have the performance characteristics I want. F-Rep has very
>>> cheap boolean operations, and avoids the cost of performing boolean
>>> operations on polyhedral approximations of curved surfaces. Also, F-Rep is
>>> easy to parallelize on a GPU. So right now it looks like a good way to
>>> achieve my goals. Eventually, though, I'd like to extend this to an
>>> F-Rep/B-Rep hybrid system, to make it more general purpose.
>>>
>>> It would be difficult to integrate my work back into OpenSCAD. One
>>> problem is that my approach represents shapes as functions, and requires a
>>> functional language with first class function values (ie, closures), and
>>> it's very difficult to retrofit this feature into OpenSCAD without breaking
>>> backward compatibility. Another problem is due to the fact that I don't
>>> convert curved surfaces into polygons before applying boolean operations
>>> and other transformations. They remain as mathematically exact curved
>>> surfaces. In OpenSCAD, users rely on the fact that circle, sphere and
>>> cylinder return polygons and polyhedra, not curved shapes. E.g.,
>>> circle(r,$fn=6) is a regular hexagon. My system has circle and
>>> regular_polygon as separate primitives. So that would be another backward
>>> compatibility problem.
>>>
>>>
>>>
>>
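Doug's description above can be condensed into code: an F-Rep shape is just a function from a point to a signed distance, and the CSG booleans become pointwise min/max. A minimal Python sketch (all names here are my own illustration, not ImplicitCAD or Antimony code):

```python
import math

# Minimal F-Rep sketch: a shape is a function mapping a point [x,y,z]
# to a signed distance. Negative means inside, positive outside, as in
# the description above. All names are illustrative only.
def sphere(r):
    # Exact signed distance to a sphere of radius r at the origin.
    return lambda x, y, z: math.sqrt(x*x + y*y + z*z) - r

def translate(dx, dy, dz, shape):
    # Move a shape by evaluating it at shifted coordinates.
    return lambda x, y, z: shape(x - dx, y - dy, z - dz)

def union(a, b):
    # Boolean union is a pointwise min -- this is why F-Rep booleans
    # are so cheap compared with mesh booleans.
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersection(a, b):
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def difference(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

s = union(sphere(1), translate(1.5, 0, 0, sphere(1)))
print(s(0, 0, 0))   # negative: inside
print(s(5, 0, 0))   # positive: outside
```

Note how everything is built from closures, which is exactly the language feature described as hard to retrofit into OpenSCAD.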
>> _______________________________________________
>> OpenSCAD mailing list
>> Discuss@lists.openscad.org
>> http://lists.openscad.org/mailman/listinfo/discuss_lists.openscad.org
>>
N
noil
Wed, Mar 29, 2017 3:44 PM
I did not think about pixels, but calculating the geometry is exactly
the sort of thing that should take place on the GPU.
nophead wrote
Sorry OpenCG should be OpenGL.
On 26 March 2017 at 13:46, nop head wrote:
No I don't think CGAL does any actual rendering of pixels. It just
calculates the geometry using the CPU. After that OpenSCAD can render the
results to screen very quickly using OpenCG and your graphics card. When
you pan around with the mouse after F6 it is faster than after F5.
--
View this message in context: http://forum.openscad.org/Lattice-structure-tp21001p21055.html
Sent from the OpenSCAD mailing list archive at Nabble.com.
NH
nop head
Wed, Mar 29, 2017 3:57 PM
Really, I thought they were dedicated to transforming and rendering geometry,
not doing CSG operations like union, intersection and difference.
On 29 March 2017 at 16:44, noil noil@gmx.net wrote:
I did not think about pixels, but calculating the geometry is exactly
the sort of thing that should take place on the GPU.
nophead wrote
Sorry OpenCG should be OpenGL.
On 26 March 2017 at 13:46, nop head wrote:
No I don't think CGAL does any actual rendering of pixels. It just
calculates the geometry using the CPU. After that OpenSCAD can render the
results to screen very quickly using OpenCG and your graphics card. When
you pan around with the mouse after F6 it is faster than after F5.
DM
doug moen
Wed, Mar 29, 2017 7:45 PM
GPUs can be used for general purpose parallel computing in cases where the
same code is simultaneously executed on many different pieces of data.
There are some restrictions: no recursive functions, no function pointers,
no dynamic memory allocation. Often there is no double precision floating
point arithmetic, only 32 bit.
You can definitely use a GPU to directly render an F-Rep CSG tree onto a
display, without first rendering to polygons. I've got that much working in
my new geometry engine. I think it should be possible to use a GPU to
convert F-Rep into STL, but I'm not at that stage of my project yet.
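The "render directly from the distance function" step can be illustrated on the CPU: sample the function once per pixel and classify inside versus outside; a GPU fragment shader would evaluate the same function for all pixels in parallel. A toy 2D sketch (not the engine described above, purely illustrative):

```python
import math

# Toy "direct F-Rep rendering": evaluate the distance function at every
# pixel and classify inside vs outside. A GPU would run this function
# once per pixel in parallel; here we simply loop on the CPU.
def circle(r):
    return lambda x, y: math.hypot(x, y) - r

shape = circle(0.8)
rows = []
for j in range(11):
    y = 1.0 - j * 0.2        # map pixel row to y, from 1 down to -1
    row = ""
    for i in range(21):
        x = -1.0 + i * 0.1   # map pixel column to x in [-1, 1]
        row += "#" if shape(x, y) <= 0 else "."
    rows.append(row)
print("\n".join(rows))       # an ASCII disc, no polygons anywhere
```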
An interesting question for OpenSCAD is whether you can implement boolean
operations on meshes using a GPU, i.e. a drop-in replacement for CGAL or
Carve that runs on the GPU.
I dunno if this has been done. It seems like a challenging problem. A quick
google search doesn't find exactly this, but there is some related tech.
Approximate boolean operations on meshes. Page 13 compares performance to
CGAL:
https://pdfs.semanticscholar.org/6bcd/0369f5fe36dc7c807891a241c1c33f9a0e94.pdf
Fast, robust, accurate, multithreaded booleans. No mention of GPU, though.
http://dl.acm.org/citation.cfm?id=2442403
These projects sound more like better alternatives to OpenCSG:
http://www.cc.gatech.edu/~jarek/papers/Blister.pdf
https://people.eecs.berkeley.edu/~sequin/CS285/PAPERS/Zhao_Boolean-on-Solids.pdf
This uses a GPU to compute intersections of NURBS surfaces.
http://www.nvidia.com/content/gtc-2010/pdfs/2171_gtc2010.pdf
On 29 March 2017 at 11:57, nop head nop.head@gmail.com wrote:
Really, I thought they were dedicated to transforming and rendering
geometry, not doing CSG operations like union, intersection and difference.
On 29 March 2017 at 16:44, noil noil@gmx.net wrote:
I did not think about pixels, but calculating the geometry is exactly
the sort of thing that should take place on the GPU.
nophead wrote
Sorry OpenCG should be OpenGL.
On 26 March 2017 at 13:46, nop head wrote:
No I don't think CGAL does any actual rendering of pixels. It just
calculates the geometry using the CPU. After that OpenSCAD can render the
results to screen very quickly using OpenCG and your graphics card. When
you pan around with the mouse after F6 it is faster than after F5.
NH
nop head
Wed, Mar 29, 2017 8:21 PM
What we need is a hardware infinite-precision rationals unit. That is, if
rational representation is the only way to get exact geometry.
When you have circles, cylinders and spheres, or any rotation not by a
multiple of 90 degrees, most of the vertices become irrational numbers, so it
is no surprise CGAL uses an enormous amount of time and memory trying to
represent them as rationals. Is it possible to do CSG with floats and get
correct results?
It would be interesting to try the sphere and cylinder lattice with
vertices snapped to a fairly coarse grid to see if it vastly increases
speed and reduces CGAL memory usage, i.e. so that the irrational numbers
become three-digit fractions instead of a full floating-point mantissa
worth of digits as a numerator.
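The cost described here is easy to see with Python's fractions module standing in for an exact rational type (an illustration only, not CGAL's kernel): an irrational vertex coordinate, once stored as a float and converted to an exact rational, drags a full-mantissa-sized numerator along, while the same coordinate snapped to a 0.001 grid is a three-digit fraction:

```python
import math
from fractions import Fraction

# A vertex of a $fn=16 circle: cos(22.5 degrees) is irrational. The float
# approximating it, converted to an exact rational (as an exact kernel
# must), carries a mantissa-sized denominator; snapped to a 0.001 grid
# it is a three-digit fraction. Fraction stands in for CGAL's rationals.
x = math.cos(math.radians(22.5))
exact = Fraction(x)                        # full 53-bit mantissa
snapped = Fraction(round(x * 1000), 1000)  # snapped to 0.001 grid

print(exact)     # huge numerator and denominator
print(snapped)   # reduces to 231/250
```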
On 29 March 2017 at 20:45, doug moen doug@moens.org wrote:
GPUs can be used for general purpose parallel computing in cases where the
same code is simultaneously executed on many different pieces of data.
There are some restrictions: no recursive functions, no function pointers,
no dynamic memory allocation. Often there is no double precision floating
point arithmetic, only 32 bit.
You can definitely use a GPU to directly render an F-Rep CSG tree onto a
display, without first rendering to polygons. I've got that much working in
my new geometry engine. I think it should be possible to use a GPU to
convert F-Rep into STL, but I'm not at that stage of my project yet.
An interesting question for OpenSCAD is whether you can implement boolean
operations on meshes using a GPU. Ie, a drop in replacement for CGAL or
Carve that runs on the GPU.
I dunno if this has been done. It seems like a challenging problem. A
quick google search doesn't find exactly this, but there is some related
tech.
Approximate boolean operations on meshes. Page 13 compares performance
to CGAL:
https://pdfs.semanticscholar.org/6bcd/0369f5fe36dc7c807891a241c1c33f9a0e94.pdf
Fast, robust, accurate, multithreaded booleans. No mention of GPU, though.
http://dl.acm.org/citation.cfm?id=2442403
These projects sound more like better alternatives to OpenCSG:
http://www.cc.gatech.edu/~jarek/papers/Blister.pdf
https://people.eecs.berkeley.edu/~sequin/CS285/PAPERS/Zhao_Boolean-on-Solids.pdf
This uses a GPU to compute intersections of NURBS surfaces.
http://www.nvidia.com/content/gtc-2010/pdfs/2171_gtc2010.pdf
On 29 March 2017 at 11:57, nop head nop.head@gmail.com wrote:
Really, I thought they were dedicated to transforming and rendering
geometry, not doing CSG operations like union, intersection and difference.
On 29 March 2017 at 16:44, noil noil@gmx.net wrote:
I did not think about pixels, but calculating the geometry is exactly
the sort of thing that should take place on the GPU.
nophead wrote
Sorry OpenCG should be OpenGL.
On 26 March 2017 at 13:46, nop head wrote:
No I don't think CGAL does any actual rendering of pixels. It just
calculates the geometry using the CPU. After that OpenSCAD can render the
results to screen very quickly using OpenCG and your graphics card. When
you pan around with the mouse after F6 it is faster than after F5.
W
wolf
Thu, Mar 30, 2017 10:35 PM
What we need is a hardware infinite precision rationals unit.
unquote
If it were that simple . . . How to build even one component of such a
machine, memory capable of storing a single number, at infinite precision,
in a finitely sized space containing a finite amount of matter, as small as
Special Relativity demands our universe is, with lower spatial resolution
limited by the Heisenberg uncertainty principle, I shall leave to Nophead to
explain to all of us.
As far as I know, lies are most effectively told by not mentioning certain
facts - as Nophead did by not mentioning that our universe contains only a
finite amount of matter from which memory may be constructed. Had he
considered it, I don't think he would have lied to himself, as he so clearly
did.
If you want to do CSG operations (unions, . . .) on a computer, you have to
program a routine that handles the intersection of parallel planes, planes
that in Euclidean Geometry do not intersect at all. That isn't so difficult,
but what about planes that are just shy of being parallel? Mathematically,
the procedures needed to calculate the intersection involve a division, by
zero for truly parallel planes, by near zero, for nearly parallel planes. To
solve that, you need to understand Cardinal Numbers, and in particular how
to expand the concept of Cardinal numbers to sets of finite size. As far as
I know, CGAL uses just that understanding to sail around the problems. And
that means it uses rational numbers, or, if you allow me some inaccuracy, it
does not do divisions at all. (In a binary number system, divisions by any
number other than powers of two leads to infinite series of digits, and thus
imprecise results.) And these mathematical tricks, otherwise known as
arbitrary-precision arithmetic, take lots of computing time: according to
one source, 1000 times as long as without them.
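The point about nearly parallel intersections can be demonstrated in a few lines, using Python's fractions module as a stand-in for exact rational arithmetic (an illustration of the principle, not CGAL code). Intersecting two almost-parallel lines by Cramer's rule, the tiny determinant wrecks the float answer but not the rational one:

```python
from fractions import Fraction

# Intersect two nearly parallel lines a*x + b*y = c by Cramer's rule.
# The determinant is nearly zero, so the float result loses many digits;
# the rational result is exact. Fraction stands in for an exact kernel.
def intersect(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1            # near zero: nearly parallel
    return ((c1 * b2 - c2 * b1) / det,
            (a1 * c2 - a2 * c1) / det)

eps = Fraction(1, 10**12)
exact = intersect(Fraction(1), Fraction(1), Fraction(1),
                  Fraction(1), Fraction(1) + eps, Fraction(2))
approx = intersect(1.0, 1.0, 1.0, 1.0, 1.0 + 1e-12, 2.0)

print(exact)    # exactly (1 - 10**12, 10**12)
print(approx)   # float answer, visibly off
```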
If this number is correct, then parallel computing is not an answer to slow
rendering times either: you may be able to increase speed, say, 10-fold, but
not 1000-fold unless you invest a lot of money in a large GPU farm
containing hundreds of GPUs. Who would be able to afford that?
So here you have another lie: parallel processing will speed up rendering.
Indeed it does, but by how much its proponents never say, or it would not so
ardently be promoted.
. . .That is if rational representation is the only way to get exact
geometry.
. . .
Is it possible to do CSG with floats and get correct results?
. . .
unquote
The answer here is a flat no. It is not possible to get "exact geometry" if
you restrict yourself to floats. And again I must invoke the hated word
"lie" to explain what can be done.
First of all, the so-called "exact answer" is an approximation of reality,
as it implies infinite precision. And infinite precision is not attainable
by any instrumentation in the light of the Heisenberg uncertainty principle.
What Peter Hachenberger, one of the authors of CGAL, has shown in his
doctoral thesis is that CGAL is more robust towards misuse than the
competition. What I cannot answer here is whether this statement holds for
all possible competitors, or only for those he, and I, have considered. The
lie here is to consider "exact answer" as a bijective mapping of the
computer model onto something that can be produced by a "perfect" printer.
My own work in identifying the causes of "degenerate triangles" has given me
hope that something like an "exact approximation" is possible, as it has led
me to understand the design errors that underpin CGAL, or, better said, the
design flaws arising out of joining CGAL into OpenSCAD. If my current
thinking pans out, I expect to obtain a ten thousandfold speed increase in
rendering objects. The price for this effort is steep, however: It demands
that current OpenSCAD developers are moved from coders (people who can
string together libraries) into programmers (people who consider and
implement the constraints and assumptions underlying those libraries).
I will conclude my remarks with a short quote from issue1258.stl, an
OpenSCAD generated .stl file:
facet normal -1 -4.4983e-08 -4.4983e-08
outer loop
vertex -0.249999 -29.6066 22.3934
vertex -0.25 -20 32
vertex -0.249999 -20 12.7868
endloop
This short excerpt says it quite clearly: whoever coded the .stl generator
had no clue about data accuracy, about what is realistic to report, and what
denotes sheer stupidity and ignorance. Otherwise, it would have read
facet normal -1 0 0
outer loop
vertex -0.25 -29.6066 22.3934
vertex -0.25 -20 32
vertex -0.25 -20 12.7868
endloop
Why? The coder decided to report 6 decimal places. This is quite
appropriate, if not excessive, for an item that is produced with final
dimensions to maybe two and, at best, within four decimal places. But then
he/she should have realized that, firstly, the largest dimension ends at
four places behind the decimal point. Thus, reporting other dimensions to
more places behind the decimal point is a form of bullshitting, of lulling
the user into false trust. There is simply no real difference between
-0.249999 and -0.25. But coding for the first output is less work, in
particular less mental work, than coding for the second output.
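What is being asked for amounts to rounding coordinates to a sensible number of places before writing them out. A hypothetical sketch of such a vertex writer (this is not OpenSCAD's actual exporter):

```python
# Hypothetical STL vertex writer that rounds to 4 decimal places before
# formatting, so -0.249999 and -0.25 collapse to the same coordinate.
# This illustrates the point above; it is not OpenSCAD's actual exporter.
def fmt(v):
    return "%g" % round(v, 4)

def vertex_line(x, y, z):
    return "vertex %s %s %s" % (fmt(x), fmt(y), fmt(z))

print(vertex_line(-0.249999, -29.6066, 22.3934))
print(vertex_line(-0.25, -20, 32))   # both x values come out as -0.25
```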
As far as lattices go, they can be constructed in OpenSCAD quickly and
rendered within seconds, if you do not involve CGAL at all in generating the
output .stl file, i.e. if you let polyhedron() do all the work. Both points
and faces can be generated with for loops without a problem, but doing so is
not a task for a newbie.
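The closing suggestion, generating points and faces with loops and handing them straight to polyhedron(), can be sketched by emitting OpenSCAD source from a script. This builds a single box as an 8-point, 6-face polyhedron with consistent face winding; the same loops extend to a full lattice. Illustrative only:

```python
# Emit an OpenSCAD polyhedron() call from a script: points and faces for
# an axis-aligned box, built with loops as suggested above. A lattice is
# the same idea repeated; nothing here needs CGAL at render time.
def box(sx, sy, sz):
    # Vertex index = x + 2*y + 4*z, each coordinate 0 or 1.
    pts = [(x * sx, y * sy, z * sz)
           for z in (0, 1) for y in (0, 1) for x in (0, 1)]
    # Faces wound consistently (clockwise seen from outside).
    faces = [[0, 1, 3, 2], [4, 6, 7, 5], [0, 4, 5, 1],
             [2, 3, 7, 6], [0, 2, 6, 4], [1, 5, 7, 3]]
    return pts, faces

pts, faces = box(10, 20, 5)
print("polyhedron(points=%s, faces=%s);" % ([list(p) for p in pts], faces))
```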
wolf
--
View this message in context: http://forum.openscad.org/Lattice-structure-tp21001p21060.html
Sent from the OpenSCAD mailing list archive at Nabble.com.
M
MichaelAtOz
Thu, Mar 30, 2017 10:52 PM
wolf wrote
If it were that simple . . . How to build even one component of such a
machine, memory capable of storing a single number, at infinite precision,
in a finitely sized space containing a finite amount of matter, as small
as Special Relativity demands our universe is
Why limit oneself to just this universe?
Admin - PM me if you need anything, or if I've done something stupid...
Unless specifically shown otherwise above, my contribution is in the Public Domain; to the extent possible under law, I have waived all copyright and related or neighbouring rights to this work. Obviously inclusion of works of previous authors is not included in the above.
The TPP is no simple “trade agreement.” Fight it! http://www.ourfairdeal.org/ time is running out!
View this message in context: http://forum.openscad.org/Lattice-structure-tp21001p21061.html
Sent from the OpenSCAD mailing list archive at Nabble.com.