Friday, November 19, 2010

From Voxels to Polygons

Voxels are just a way to store 3D data. Somehow they need to be visualized. At this point you have two avenues: use volumetric rendering, or extract a surface out of the voxel data and display it using a traditional triangle mesh.

Assuming there is enough processing power on the client, probably the best approach is to use ray tracing and volumetric rendering from a Sparse Voxel Octree. In the long run this may become the predominant method. With generic GPU computing frameworks like OpenCL and CUDA, a paradigm shift at the hardware level is no longer necessary. The migration will happen as OpenCL/CUDA cards become the norm.

Content creation tools will catch up quickly. There are already some impressive tools that work directly on voxels. Check out 3D Coat if you haven't. Then it will be bye-bye to polygonal meshes, UV-Mapping, texture unwrapping and other oddities of the raster world.

But it is too soon to ditch polygons. What if you want your engine to run on mobile devices, tablets, WebGL, or even just plain old PCs? If that is your case, it makes more sense right now to keep voxels on the content-generation side and use traditional triangle meshes on the clients. That is what I intended to do.

As I described in an earlier post, I tried extracting the surface out of the voxels by relaxing the voxel grid until it became smooth. It produced fairly good results, but not good enough for every case. I needed to look at other methods for extracting surfaces. At this point I had given up on simplistic binary voxels where each cell would be entirely solid or entirely empty. I knew I had to keep different voxel densities.

There was Marching Cubes. This is the grand-daddy of all surface extraction algorithms. This method is so old that computers were really slow when it was invented. It is very simple to implement, and since it is all based on table lookups, it lends itself very well to GPU implementation.

The general idea is that for any voxel there is a fixed number of ways a surface may cross it. Based on the density values at the voxel corners, it is possible to know whether the surface crosses the edge between two corners. There is a finite number of combinations, as the image below shows:


I borrowed this image from a nice article in GPU Gems 3. If you are interested in Marching Cubes and GPU terrain generation, this is one you should read.
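
To make the table lookup idea concrete, here is a minimal sketch of how a single cell is classified. The names are mine, not from the GPU Gems chapter: each corner density contributes one bit, and the resulting index (0 to 255) picks one of the configurations above from a precomputed triangle table.

    // Sketch: build the Marching Cubes case index for one cell.
    // density[] holds the field value at the 8 corners of the cell;
    // here, positive means inside the volume.
    int computeCaseIndex(const float density[8])
    {
        int caseIndex = 0;
        for (int corner = 0; corner < 8; ++corner)
            if (density[corner] > 0.0f)
                caseIndex |= 1 << corner;   // one bit per corner
        return caseIndex;   // 0..255, indexes the triangle table
    }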

Marching Cubes' biggest problem is that it cannot produce sharp features. Everything looks blobby coming out of Marching Cubes. Since I wanted sharp corners, especially for architecture, Marching Cubes was not a good option.

Why so keen on sharp corners? It is not only that they look good. If you remember my goals, I needed to stream geometry to clients. Having sharp features from the outset would allow the best simplification of the meshes sent to the client. Marching Cubes creates too much noise around sharp features; it would never translate into clean meshes.

The lack of sharp features in the resulting surface is caused by all the guessing Marching Cubes does. While it may compute the right intersection points along the edges, it knows nothing about the behavior of the surface inside the cube. The coarser the voxel grid, the larger this error becomes. You can see this in the following image:
I borrowed this image from another nice article, this one about isocontouring. It has everything you need to know, even source code.

What if we could know more about the surface's behavior inside the voxel, predict where it goes? If, in addition to the intersection point, we take into account the direction of the surface at that point, it is possible to have a much better idea:

The first section (a) shows the output of Marching Cubes. If there had been a sharp corner in any of the voxels, it would have been smoothed out. The second section (b) shows that if you know the normal vector to the surface at the intersection point, you may still be able to detect a sharp corner inside the voxel.
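
For a density field like the ones discussed here, a common way to get that normal is to take the normalized gradient of the field, estimated with central differences. This is a minimal sketch, with names and the step size h of my choosing:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Sketch: surface normal at (x, y, z), estimated as the
    // normalized gradient of the field f via central differences.
    // Note: if f is positive inside the volume, this gradient points
    // inward; negate it if you need the outward normal.
    Vec3 fieldNormal(float (*f)(float, float, float),
                     float x, float y, float z, float h)
    {
        Vec3 n;
        n.x = f(x + h, y, z) - f(x - h, y, z);
        n.y = f(x, y + h, z) - f(x, y - h, z);
        n.z = f(x, y, z + h) - f(x, y, z - h);
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }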

Marching Cubes is not a good framework for adding this, since it can only consider points that lie along the voxel edges. In this case we clearly need to position points anywhere inside the voxel's space. There is another method that does exactly this: Dual Contouring.

Dual Contouring looks at one edge of the voxel grid at a time. If the corners at the ends of the edge have different signs, meaning one is inside the volume and the other is outside, the surface crosses the edge.

Now, each edge has four neighbor voxels. If the surface crosses the edge, it necessarily crosses the four neighbor voxels as well. You can say there is at least one point inside each of these voxels that lies on the surface. If you then output a quadrilateral using the points from the four neighboring voxels, you will have a tiny square patch of the surface. Repeating this for every edge in the voxel data produces the entire surface.
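
In code the idea is quite small. Here is a sketch for a regular grid, covering only the x-aligned edges; all helper names are placeholders of mine, and vert() is assumed to return the point already placed inside a cell, which is the topic of the next paragraph:

    // Placeholder hooks for the sketch:
    float field(int i, int j, int k);        // density at a grid corner
    int   vert(int i, int j, int k);         // vertex placed in a cell
    void  quad(int a, int b, int c, int d);  // output one quadrilateral

    // Sketch: emit one quad per sign-changing x-aligned edge.
    void contourXEdges(int nx, int ny, int nz)
    {
        for (int i = 0; i < nx; ++i)
            for (int j = 1; j < ny; ++j)
                for (int k = 1; k < nz; ++k)
                {
                    bool a = field(i, j, k) > 0.0f;
                    bool b = field(i + 1, j, k) > 0.0f;
                    if (a == b)
                        continue;   // no sign change, no crossing
                    // The four cells sharing this edge contribute one
                    // point each, forming a tiny patch of the surface.
                    quad(vert(i, j - 1, k - 1), vert(i, j, k - 1),
                         vert(i, j, k), vert(i, j - 1, k));
                    // Flip the winding when 'a' is inside so the quad
                    // faces out of the volume. The same loop is then
                    // repeated for the y- and z-aligned edges.
                }
    }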

You still need to find the position of this point for each voxel. The best way is to look at all the edges of the voxel and see which ones are intersected by the surface. Then, using the surface normal at each intersection point, you can predict where all these intersections converge inside the voxel. For that it is best to use Quadratic Error Functions (QEF). At this point you may have multiple planes crossing different edges of the voxel, and minimizing the QEF produces the point that is closest to all of these planes.
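
For the curious, here is a compact sketch of that step. Each intersection point p[i] with normal n[i] defines a plane, and minimizing the sum of squared point-to-plane distances reduces to a 3x3 linear system. This sketch, with names of my choosing, solves it directly and falls back to the average of the intersection points when the planes are degenerate; robust implementations use an SVD with clamped singular values instead, and also clamp the result to the cell bounds.

    #include <cmath>

    struct V3 { double x, y, z; };

    // Sketch: place the cell vertex by minimizing the QEF built from
    // the edge intersection points p[i] and their normals n[i].
    // Solves the normal equations (A^T A) x = A^T b by Cramer's rule.
    V3 solveQEF(const V3 p[], const V3 n[], int count)
    {
        double a00 = 0, a01 = 0, a02 = 0, a11 = 0, a12 = 0, a22 = 0;
        double b0 = 0, b1 = 0, b2 = 0;
        V3 mean = { 0, 0, 0 };   // average of the intersection points
        for (int i = 0; i < count; ++i)
        {
            double d = n[i].x * p[i].x + n[i].y * p[i].y + n[i].z * p[i].z;
            a00 += n[i].x * n[i].x;  a01 += n[i].x * n[i].y;
            a02 += n[i].x * n[i].z;  a11 += n[i].y * n[i].y;
            a12 += n[i].y * n[i].z;  a22 += n[i].z * n[i].z;
            b0 += d * n[i].x;  b1 += d * n[i].y;  b2 += d * n[i].z;
            mean.x += p[i].x;  mean.y += p[i].y;  mean.z += p[i].z;
        }
        mean.x /= count;  mean.y /= count;  mean.z /= count;

        double det = a00 * (a11 * a22 - a12 * a12)
                   - a01 * (a01 * a22 - a12 * a02)
                   + a02 * (a01 * a12 - a11 * a02);
        if (std::fabs(det) < 1e-8)
            return mean;   // degenerate planes: fall back to the average

        V3 x;   // Cramer's rule on the symmetric 3x3 system
        x.x = (b0  * (a11 * a22 - a12 * a12)
             - a01 * (b1  * a22 - a12 * b2)
             + a02 * (b1  * a12 - a11 * b2)) / det;
        x.y = (a00 * (b1  * a22 - a12 * b2)
             - b0  * (a01 * a22 - a12 * a02)
             + a02 * (a01 * b2  - b1  * a02)) / det;
        x.z = (a00 * (a11 * b2  - b1  * a12)
             - a01 * (a01 * b2  - b1  * a02)
             + b0  * (a01 * a12 - a11 * a02)) / det;
        return x;
    }

With nearly flat surfaces the direct solve can place the point outside the cell, which is exactly why the papers go through the trouble of an SVD.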

If you want more information on this method, you can check this article.

So I implemented my own Dual Contouring with QEF. The results were encouraging. The method was fast, and still produced very nice sharp features. All the videos I have posted on YouTube were obtained using this method.

In future posts, I will describe how I represented everything in the world as a 3D field so I could get surfaces out using Dual Contouring. I will also describe how I moved the Dual Contouring implementation to the GPU using OpenCL. Stick around for that one, you may find it interesting.

49 comments:

  1. Where did you get the source code for this? Was it from GPU Gems 3?

    Do you know if the source for this chapter is online?

    http://http.developer.nvidia.com/GPUGems3/gpugems3_part01.html

    http://news.coreyfleming.com/

    The whole book's text is available for free at NVIDIA's site, but not the DVD contents (unless I overlooked them).

  2. I did not look for the code. It was a Marching Cubes implementation. The guy who wrote Chapter 1 has a site, I think he has the code there: http://www.geisswerks.com/about_terrain.html

  3. Oh no way! Spectacular! I always forget to check the authors' personal sites. I had a similar situation with a DSP book on audio filters. It kept referring to examples in the source code but there was no source with it. So I tracked down a website from the book's foreword, got an email from there, and the author got a hold of his colleague who worked on the book as well, and then I got my source :)

    Thanks for the link, I'll go through it.

    edit: Ok, after going through the code at the site, it is the same code I have, which I can't run because I don't have DX10 installed on this Win XP computer. And it doesn't give the code to build the "demo.exe" program. What I'd like to get my hands on is that, or any demo program that generates voxel terrain in C++ or XNA C# and gives a Marching Cubes example. Until then, I'll just have to put things together as best I can from what I understand of all these articles I'm finding.

    One thing I'm trying to figure out is how to build the voxel terrain generator in the first place. 3D Perlin noise seems to be pretty well recommended.

    You said: "I did not look for the code" Does that mean you wrote all your code from voxel generator/marching cube theory and literature, with absolutely no example code files? If so congrats, 3D rendering math always trips me up so I usually can't understand it from literature alone and try to find some source to help give me a educational jump.

  4. I found it easier to write the code from scratch instead of collecting pieces from the Net. Also, since it was mostly OpenCL, there was very little around.

    There are some exceptions, like computing the Quadratic Error Function solutions. I got that one from a mesh simplification library. I also got the Perlin Noise for OpenCL from an example by Apple. But I'm not using Perlin anymore, it was slow.

    In general the math required for this is not complex. Reading a lot of papers helped mostly to know whether I was doing things efficiently enough. I don't like looking at code. The trees usually won't let you see the forest; seeing the big picture will save you time in the end.

    Replies
    1. Do you happen to have a link to the library where you got your QEF solver?

    2. I would also like a link, if possible

  5. Could you describe the Dual Contouring in a little more depth? I read through the article, but am finding it really hard to follow. So, any chance of more depth?

  6. @Anonymous: You can get the source code for the method at http://www.oocities.org/tzukkers/isosurf/isosurfaces.html (already linked in my post). If you want depth, source code for a working implementation beats any explanation.

    If you have any specific question, I will be glad to help if I can.

  7. How exactly are the voxels laid out? Are they all laid out next to each other, with faces touching? Or are the two faces slightly pushed into one another?

  8. @Anonymous: Voxel faces touch each other, no pushing.

  9. Would it be possible to post an image of what your cubes look like before they are dual contoured? So their layout? You could just stick it on imageshack or something.

    Anyway, have you seen this?
    http://kotaku.com/5818451/a-man-did-not-make-this-beautiful-world-a-computer-did/gallery/1

  10. Hmmm... those screenshots on Kotaku seem familiar.

    I cannot provide an image of a cube stage (a la Minecraft). I don't really work with cubes, except for the adaptive octree I'm using, which you could say breaks down the space into cubic sub-spaces. But this is just a way to process the data.

  11. Kotaku is how I found you, though Shamus Young linked to you a couple of weeks ago too.

  12. What does it mean when it says sign change in the paper?

  13. @Silence: Isocontouring methods like Dual Contouring output a surface out of a 3D field. For instance, if you think of a sphere of radius 1, the field function would be:

    f = 1 - sqrt(x^2 + y^2 + z^2)

    The center of this sphere is at (x=0, y=0, z=0), which makes f = 1

    Now a point far away from the center (x=10, y=0, z=0) would make f = -9

    If you compare 1 to -9 you see there is a sign change, 1 is positive and -9 is negative. It means the surface of the sphere crosses somewhere between these two points. (The exact location of the crossing is where f evaluates to zero.)
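
    In code, the same test is just a couple of lines (a minimal sketch):

      #include <cmath>

      // Field function of the sphere above: positive inside,
      // negative outside, zero exactly on the surface.
      float sphere(float x, float y, float z)
      {
          return 1.0f - std::sqrt(x * x + y * y + z * z);
      }

      // A sign change between two samples means the surface
      // crosses somewhere between the two sample points.
      bool signChange(float a, float b)
      {
          return (a > 0.0f) != (b > 0.0f);
      }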

  14. Thanks for the explanation! But what two points are we getting?

  15. @Silence: In the example the two points are (0, 0, 0) and (10, 0, 0). In reality when you use a regular grid to contour, each point in the grid is considered.

  16. In the paper, it says 'for each cube that exhibits a sign change'. So are we considering the 12 verts of a cube, or what?

  17. @Finale: Correct, but note 3D cubes have only 8 verts.

  18. @Finale: I just realized you probably meant 12 edges

  19. I meant the 8 verts, just wasn't thinking. Thanks for the reply, and also for the pretty inspiring material!

  20. What is the field function of a cube, then?

  21. Hi Hi!

    Off the top of my head, a simple field function for a cube would be similar to the sphere's, but using Manhattan distance instead of Euclidean distance. You would get a cube that is rotated 45 degrees on each axis.

    Another way to do it is by using six planes, one for each side of the cube. Output the distance to the closest plane and make it negative if the point is outside.
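
    For an axis-aligned cube centered at the origin, the plane version collapses to a one-liner. A sketch, with h as the cube's half-size:

      #include <algorithm>
      #include <cmath>

      // Positive inside the cube, negative outside. The value is the
      // signed distance to the nearest face plane along an axis.
      float cube(float x, float y, float z, float h)
      {
          return std::min(h - std::fabs(x),
                          std::min(h - std::fabs(y), h - std::fabs(z)));
      }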

  22. Out of interest, which technique did you use?

  23. @Silence: I used dual contouring over an octree.

  24. Are your YouTube videos rendered from your voxel farm or on a single machine?

    I still have problems understanding how view-dependent adaptive octrees work and how they should be implemented, since there are many different approaches out there suiting specific applications (AO, etc.).

    Could you please point me toward a 101-level, view-based adaptive octree paper that at least roughly matches yours?

    I am using a specific graphics engine with a quite efficient thread pool scheme, and while slowly coming to understand its scene graph, I would like to write similar functionality for it.

    I would like to render a planet surface with caves, where density is sampled from 3D simplex noise at run-time. Do you think all of this could fit, at least asynchronously? I am not allowed to use the GPU, and even if I could, it is only SM3+ (no GS), but I need to test collision over part of the resulting topology, or at least a low-LOD version of it.

    Much thanks and very nice job!

    Replies
    1. I don't know of any papers doing this. My technique is simple: the octree is increasingly divided according to the camera position (there is a sketch of this test at the end of this reply). Then a second pass makes sure neighbor octree cells match the subdivision level. You end up with rings of cells that become larger the further they are from the view.

      You should have no problem running a simplex density function in real-time, even if you don't use the GPU. Of course this is a function of the grid resolution you use and how many octaves of noise you have.

      I recommend you use adaptive sampling as well; that is, you can test bigger cells of the octree first, and if you see there is no surface crossing them, you can skip all the children cells. However, this is tricky since some details may get overlooked.
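
      Here is a sketch of that first-pass split test; the constant is just an illustrative knob, not a value from my implementation:

        #include <cmath>

        struct V3 { float x, y, z; };

        // Split a cell while it is still large relative to its
        // distance from the camera. Far cells stay coarse, which
        // produces the rings of detail described above.
        bool shouldSplit(V3 center, float cellSize, V3 cam, float minSize)
        {
            float dx = center.x - cam.x;
            float dy = center.y - cam.y;
            float dz = center.z - cam.z;
            float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            const float kDetail = 2.0f;   // illustrative tuning knob
            return cellSize > minSize && cellSize * kDetail > dist;
        }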

    2. Hi MCeperoG,

      Much thanks for the light-speed assistance!

      OK, gotcha, the octree should be simple, I am going to try that.

      Yes, even lowering octaves/grid res to a poor level, I want to first have something encouraging and then move on to tightening the whole thing. I did quite extensive tests with various noise implementations, exposing the trade-offs of sampling simplex noise in such a scenario, and you are right: we can at least gather as many samples as possible and see what we get depending on the hardware.

      OK, gotcha on this too: adaptive sampling relative to octree traversal. This definitely looks yum!

      In all cases, much thanks again for your precious inputs.

  25. How do you predict whether new candidate triangles will be occluded, either by triangles generated later or by ones that already exist? Are you casting rays from the camera to far voxel vertices and checking if they intersect previously generated triangles?

    I guess this could also be extracted from the density values sampled at the octree's vertices, but it is not yet so obvious to me.

    Thanks!

    Replies
    1. Not sure what you mean. I do not perform any tests on triangles. I extract the isosurface as a triangle mesh using Dual Contouring. Very often there are pockets of surface that are hidden, or floating bits of terrain, which is the same. I'm adding a phase to detect and remove these later from the meshes.

    2. Hi MCeperoG,

      Sorry for my poor clarity due to my lack of training on such approaches.

      It is very kind of you to answer; I really appreciate it.

      To put it in other words, I have a hard time understanding how the octree could stop subdividing when underlying caves (when sampling 3D noise for a planet surface, in my case) are already occluded by upper levels of the surface. So I guess I should force the core of the octree to be skipped below a certain planet diameter.

      In other words, how do I prevent generating this piece of cheese with closed underground pockets that are not visible, even when they are close to the camera?

      But you said you are applying extra pass to detect them.

      I am slowly getting it, much thanks again for your inputs.

      Kindest regards

      PS: I checked Tao Ju's intersection free DC on sf.net... Wow...I need to study it a bit more.

  26. Hiya! I am curious if you have a lot of traffic on your weblog?

  27. Hi, you've got pretty cool stuff here :)
    How does it work?

    First you calculate exact intersection points and normals for voxels?
    Then you apply QEF?
    And then you use Dual Contouring? (Because for dual contouring you need normals)

  28. MCepero will help you much better.

    Minimizing QEFs is required for grabbing the 3D points needed for building your DC quads.
    I am not sure, but I think you may be able to bake your minimizers offline. But I guess this only matters if you really need to stick to your exact implicit function's shape.

    If you are not that interested in shaping the exact noise you were sampling, maybe you can save some streams; but if you want to store/pack the noise's real values, then I think there is no lighter alternative so far.

    Good luck

  29. Hi, everyone!
    I still have problems getting correct gradients. I noticed that everywhere they are calculated using a function (sphere, cube), or they already come with the model used (Hermite data).

    What if i want to generate terrain AND building?
    I MUST have the freedom to build (and destroy) any kind (and any part) of three-dimensional object by simply setting lattice points arbitrarily to "internal" or "external", but if I do so, correct gradients are impossible to get with my method.
    Actually I "tag" lattice points as internal or external, then I calculate their density using the number of adjacent "internal" points. Then I calculate gradients using the central-differences formula
    G(x,y,z) = [D(x-1,y,z) - D(x+1,y,z), D(x,y-1,z) - D(x,y+1,z), D(x,y,z-1) - D(x,y,z+1)] / 2
    Gradients are good on a sphere but aren't so on sharp edges.
    Any idea?

  30. Hi, thanks for your work. It is awesome. In this article you mentioned that you would later describe how the world is represented as a 3D field. Can you point me to your blog post about this?

  31. I have a similar (or perhaps identical) question to the one Anonymous asked above - dual contouring as described in the paper requires explicitly tagging each intersection point with the surface normal. For a procedurally generated surface, how do you calculate the normal at an arbitrary point in the grid? Or do you use some sort of heuristic for determining the normal based on the relative positions of the surface around the point? The only tangentially related article I could find was http://dl.acm.org/citation.cfm?id=777792.777840, but that describes an iterative approach that doesn't seem like it would cleanly extend to 3D space (nor be computationally feasible).

    Replies
    1. If you have a 3D field function (procedural or not), you can compute normals as the gradient of the function. It does require you to evaluate the function at least another three times. I think I mentioned this in one of the OpenCL contouring articles.

    2. Ah, you are right. It was mentioned at the bottom of http://procworld.blogspot.com/2010/11/opencl-first-iteration.html, which I had read some time earlier and forgotten about. Thanks!

      All of your posts have been very good reads. Thanks for making voxel rendering and the problems inherent with it so interesting!

  32. I have been trying to implement DC as well; however, I've hit a bit of a snag.

    How are you modifying the octree with the model shaping and whatnot? Are you using some sort of branching CSG tree?

  33. Do you handle the non-manifold cases for dual contouring somehow? Alternatively, is there some trick to still get smooth normals over triangles connected to singular features? Forcing a sub-division to resolve ambiguities seems to work, but is there another way?

  34. Ok, I understand the output from the algorithms described here is traditional 3D triangle geometry, but what are your inputs? Are voxels the input or simply the resolution at which you sample your procedural functions? If the inputs are functions, then what about when someone starts making explicit modifications to the procedural world? Is that when you start storing per-voxel data?

    Replies
    1. I think I understand now. From the Dual Contour paper you link (http://www.frankpetterson.com/publications/dualcontour/dualcontour.pdf), I found a good two-sentence answer to what was confusing me:

      "The resulting representation is an octree whose leaf cubes have signs at their corners with exact intersections and normals tagging edges that exhibit sign changes… Interior nodes in the octree contain QEFs used during simplification."

  35. Hi Miguel Cepero,
    I'm a very big fan of your work and excited to see a game created with your engine. I already thought of such an engine myself and during my research I found out about your blog.
    Now I would like to ask you how Dual Contouring works. I did not understand the part about the QEFs, and the paper does not help me. Can you explain how exactly the QEFs work and how they are minimized?
    So, if you have the twelve normals of the isosurface at the edges of the cube, how do you place the point in that cube?
    Greetings from Germany

  36. Is there a simple way to take destruction and creation of objects into a game so it works much the same way in real time outside your editor? Preferably with a zero sum of mass, meaning if I attempt in-game to create, say, a sphere, it only cuts out the pre-existing part but doesn't destroy it (or does, but recreates only the destroyed part at the same time). Think "I point at a wall in a game, it cuts out a chunk of it but it doesn't disappear". Gravity-gun kind of thing, but on the terrain itself. Otherwise I want it to work pretty much exactly like you show in your YouTube videos. I simply want to add an avatar and some gameplay elements to your editor, in a way.

    Is this possible at all? If it is, how much would someone who needs Unity to model a game need to learn to make it possible?

  37. Can someone explain what exactly QEF is?

    Replies
    1. Hermite data stores points and surface normals on the edges of a voxel, so that later on there is enough info to build a plane.

      QEF is the sum of the squares of the distances between a given point inside a voxel and each of the planes of this voxel.

      For Dual Contouring, you want to find the point that optimally minimizes it, i.e., the one with the lowest QEF value. With this point and the normal of the surface on the edge, you get the plane equation at this point's location.

  38. I was wondering: do you continually reconstruct the octree as the camera moves, or do you re-use part of the old octree? Also, is your entire world stored as one octree or do you use chunks? (I would prefer not to use chunks if possible)
