Models in Simulations

CS 482 Lecture, Dr. Lawlor

You can make simple shapes from random points, mathematical functions, or coordinates entered by hand.  But to model real-world objects, you need better tools.
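For example, the mathematical-function approach might look like this in JavaScript (a minimal sketch; the "vertex" callback is just a hypothetical stand-in for however you collect points):

// Sketch: generate a circle of points from a mathematical function.
// "vertex" is a hypothetical callback that records one XYZ point.
function makeCircle(n, vertex) {
	for (let i = 0; i < n; i++) {
		let angle = 2.0*Math.PI*i/n; // parameter along the circle
		vertex(Math.cos(angle), Math.sin(angle), 0.0);
	}
}
makeCircle(32, function (x,y,z) { console.log(x,y,z); });
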
Blender is a pretty capable free 3D modeling program.  It's available for all platforms.  The only downside is the bizarre, utterly unique user interface.  This is typical for 3D modeling programs, even pro versions--they're just each their own weird thing.  Start with the official installer (use the .zip option on the Chapman lab machines, since the .exe installer needs admin access).

Check out the Blender Tutorials and Blender Doc Wiki.  Here's my super-compressed cheat sheet:
There's a "mode" popup menu directly in the middle of the screen.  
Blender starts you out with a cube.  

To model anything with this, we need more polygons.  Press the bottom-right icon to switch from Timeline to Properties, select the wrench icon to get Object Modifiers, and hit Add Modifier -> Generate column -> Multiresolution.  Scroll down and hit "Subdivide" six times; each subdivision splits every face into 4, so this generates 4^6 = 4096 smaller polygons on each face.  Zoom into the now-smoothed high-poly sphere with the scroll wheel or control-middle-click.  Switch to "Sculpt Mode".  The "Brush" tab on the left shows your sculpting options.  Scroll down to Symmetry, and turn on symmetry about the X axis.  Use the "Add", "Grab", and "Smooth" tools to sculpt the object into something meaningful, like a potato.

Note you can switch back to "Edit" mode, and deform the original (non-subdivided, non-sculpted) cube, and everything works nicely.

Save the original as a .blend file, and export as a .obj file.  To save a low-poly triangle version in a nice ASCII format, go back to Object Mode and find Object Modifiers again.  Add the modifier "Decimate", and set the decimation ratio to 0.1 or so.  Hit Apply, and File -> Export as a RAW or Wavefront .obj file.

Exporting from 3D modelers to "Real Code"

So 3D modeling programs make it pretty easy to generate cool geometry.  The trick is that you then have to somehow move that geometry into your application (game, visualization app, etc.).

The easiest way to do this is to skip it entirely--just do all your modeling and rendering inside the 3D modeling program!  But the real-time rendering performance of these programs usually isn't that good, and you often need to add scripting or shading features that would be tricky to build inside the 3D program.

The standard way to exchange data between programs is of course files.  We've looked at several very simple file formats, like the OBJ file format, but modeling programs usually support more than the very simplest "just the polygons" formats, because the modeling programs support way more than just polygons--they have colors, textures, "instanced" sub-pieces (like function calls), and transforms.

Every program supports vertices, the XYZ positions of geometric points.  Some care about edges, pairs of points.  Others want faces: triangles (triplets of points) or quads (four points, planar or not).  Most programs also need additional data: texture coordinates (often per vertex), normals for rendering (per face for flat shading, per vertex for smooth shading), and boundary condition information for simulation (per face, per vertex, or both).  The situation is hence something of a mess: many conversions are workable with some loss of data, but few lossless model format conversions are possible.
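For concreteness, here's roughly what an indexed mesh with that extra per-vertex data looks like in JavaScript (the layout is just illustrative, not any particular file format):

// One unit quad as two triangles sharing 4 vertices (illustrative layout):
var mesh = {
	positions: [ 0,0,0,  1,0,0,  1,1,0,  0,1,0 ], // XYZ per vertex
	normals:   [ 0,0,1,  0,0,1,  0,0,1,  0,0,1 ], // per vertex, for smooth shading
	uvs:       [ 0,0,    1,0,    1,1,    0,1 ],   // texture coordinates per vertex
	faces:     [ 0,1,2,  0,2,3 ]                  // vertex indices, 3 per triangle
};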

Blender, at least, supports a bunch of decent import and export formats.
Typically, when reading a new 3D object file format, I will (see the sketch after this list):
  1. Research and use a hex dump tool to figure out how the file is organized.
  2. Try to read and dump some XYZ coordinates to the screen.  This usually takes a few tries, and tells me what the scale factors are (e.g., all coordinates between -0.00001 and +0.00003, or -1000 and +30000).  
  3. Splat some GL_POINTS at the XYZ coordinates of the vertices.  This usually takes some tweaking to get the scale factor correct (meters, inches, or millimeters?), and here's where I need to fight the Y/Z up axis question.
  4. Draw GL_TRIANGLES at the face indices.  There are often issues with silly things like 0-based versus 1-based numbering (which makes a spiky-looking glob instead of a smooth object).
  5. Try to recover the existing normals, or compute my own normals.  I usually need to compute my own.
  6. Try to figure out texture coordinates.  By this point, I'll likely just bodge something together!
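As a concrete example of steps 2 through 5, here's a minimal sketch of a Wavefront .obj reader in JavaScript: it grabs "v" and "f" lines, fixes the 1-based indices, and computes per-face normals.  (A real loader would also handle quads, texture coordinates, and the "f 1/2/3" syntax more carefully.)

// Parse "v x y z" vertex lines and "f a b c" face lines from .obj text.
function parseOBJ(text) {
	var verts = [], faces = [];
	for (var line of text.split("\n")) {
		var f = line.trim().split(/\s+/);
		if (f[0] == "v") verts.push([+f[1], +f[2], +f[3]]);
		else if (f[0] == "f") // .obj is 1-based, GL is 0-based
			faces.push([f[1], f[2], f[3]].map(
				function (s) { return parseInt(s) - 1; })); // parseInt skips any "/vt/vn" part
	}
	return { verts: verts, faces: faces };
}
// Per-face normal: normalized cross product of two edge vectors.
function faceNormal(verts, face) {
	var a = verts[face[0]], b = verts[face[1]], c = verts[face[2]];
	var u = [b[0]-a[0], b[1]-a[1], b[2]-a[2]];
	var v = [c[0]-a[0], c[1]-a[1], c[2]-a[2]];
	var n = [u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0]];
	var len = Math.hypot(n[0], n[1], n[2]);
	return [n[0]/len, n[1]/len, n[2]/len];
}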

Loading Models in Babylon

There are Babylon plugins to read .obj or .stl format files.

Aside from figuring out the return value, it's really pretty straightforward.  One annoying part is the data gets loaded using an XMLHttpRequest, which sadly is subject to the "same-origin" rule: it can only fetch data from the same server, not a local file (this is to protect your computer from random JavaScript reading your files).
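A minimal sketch of the plugin approach, assuming the OBJ loader script (babylon.objFileLoader.js) is included on the page and a Babylon "scene" already exists; "models/potato.obj" is a made-up path, which must live on the same server per the same-origin rule:

// ImportMesh is asynchronous: the callback fires once the file arrives.
BABYLON.SceneLoader.ImportMesh("", "models/", "potato.obj", scene,
	function (meshes) {
		meshes[0].position.y = 1.0; // the geometry is only usable in here
	});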

Solid Models via Tetrahedra

To simulate a solid, you need a solid mesh, not just triangles.  In 2D, you can use the Delaunay triangulation to make a good triangle mesh from scattered points.  TetGen can build a tetrahedral mesh using the 3D version of this, the Delaunay tetrahedralization.  

To use TetGen:
  1. Install the program:
    1. sudo apt-get install tetgen
    2. Or download the source code, and compile the C++ code predicates.cxx and tetgen.cxx into a "tetgen" executable.
  2. Remesh or decimate, triangulate, and export your surface to ASCII .stl or Stanford .ply format, which TetGen can read.
  3. Run with command line arguments, e.g., something like "tetgen -pe MODEL.stl" (-p tetrahedralizes the volume enclosed by the input surface; -e also writes the edge file used below).
(Note this will fail with "terminate called after throwing an instance of 'int'" if the outside surface is not well formed, e.g., self-intersecting or not closed.  Rerun tetgen with -d to check for self-intersections, which you can manually clean up in Blender.)

This will dump the XYZ vertex/node locations (including interior nodes) to MODEL.1.node, the node numbers for each tetrahedron to MODEL.1.ele, and the renumbered surface mesh to MODEL.1.face.  I crudely converted these to JavaScript function calls with these UNIX awk commands, plus some hand editing of the first and last lines; it'd be cleaner to read them directly from JavaScript (via XMLHttpRequest) or convert them in C++.

m=MyModel
awk 'NF==4 && NR>1 {printf("vertex(%.3f,%.3f,%.3f);\n", $2,$3,$4); }' < $m.1.node > $m.js
awk 'NF==3 && NR>1 {printf("edge(%d,%d);\n", $2-1,$3-1); }' < $m.1.edge >> $m.js
awk 'NF==5 && NR>1 {printf("tet(%d,%d,%d,%d);\n", $2-1,$3-1,$4-1,$5-1); }' < $m.1.ele >> $m.js
awk 'NF==4 && NR>1 {printf("face(%d,%d,%d);\n", $2-1,$3-1,$4-1); }' < $m.1.face >> $m.js
You can paste small models directly into a PixAnvil tab, and fetch the data as a string with PixAnvil.loadTab("ModelData").
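The direct-read approach mentioned above might look like this (a sketch: fetch the .node file with an XMLHttpRequest, subject to the same-origin rule again, and parse it like the awk command):

// Read a tetgen .node file: skip the header, then "index x y z" lines.
function loadNodes(url, vertex) {
	var xhr = new XMLHttpRequest();
	xhr.open("GET", url);
	xhr.onload = function () {
		var lines = xhr.responseText.split("\n");
		for (var i = 1; i < lines.length; i++) { // line 0 is the header
			var f = lines[i].trim().split(/\s+/);
			if (f.length == 4) vertex(+f[1], +f[2], +f[3]);
		}
	};
	xhr.send();
}
loadNodes("MyModel.1.node", function (x,y,z) { console.log(x,y,z); });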

Be aware that many surface modeling programs, including Blender, can produce fairly spiky, narrow triangles, which turn into spiky, narrow tetrahedra.  These are fine for rendering, but don't always work well for simulation, where the numerics behave better with larger angles.  You can get better-shaped triangles by remeshing the surface (in Blender, Add Modifier -> Geometry -> Remesh), which internally uses a "marching cubes" style volumetric re-subdivision of the mesh, at a selectable resolution.  After applying, enter Edit Mode, select all, and triangulate with Mesh -> Faces -> Quads to Tris.


Given a tet mesh, you can construct springs along each edge (within a tetrahedron, every pair of its four vertices is connected by an edge, so this is easy).  In fact, typically the same indexed vertex scheme used for faces is extended to tets.  You can even use the same single vertex list, indexing into it from an array of tets for simulation, and from an array of faces for rendering.  There might be a few interior vertices used only for simulation, but the graphics card is fine with this.
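Here's a sketch of that shared-vertex-list idea, plus building one spring per unique tet edge (all names illustrative):

var verts = []; // [x,y,z] per vertex: used by both simulation and rendering
var tets  = []; // 4 vertex indices per tetrahedron: simulation only
var faces = []; // 3 vertex indices per surface triangle: rendering only

// One spring per unique edge: each tet contributes 6 edges, but
// neighboring tets share edges, so deduplicate with a string-keyed set.
function makeSprings(tets) {
	var seen = new Set(), springs = [];
	for (var t of tets)
		for (var i = 0; i < 4; i++)
			for (var j = i+1; j < 4; j++) {
				var a = Math.min(t[i], t[j]), b = Math.max(t[i], t[j]);
				var key = a + "," + b;
				if (!seen.has(key)) { seen.add(key); springs.push([a, b]); }
			}
	return springs; // pairs of vertex indices
}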