Models in Simulations

CS 482 Lecture, Dr. Lawlor

You can make simple shapes from random points, mathematical functions, or coordinates entered by hand.  But to model real-world objects, you need better tools.
Blender is a pretty capable free 3D modeling program.  It's available for all platforms.  The only downside is the bizarre, utterly unique user interface.  This is typical for 3D modeling programs, even pro versions--they're just each their own weird thing.  Start with the official installer (use the .zip option on the Chapman lab machines, since the .exe installer needs admin access).

Check out the Blender Tutorials and Blender Doc Wiki.  Here's my super-compressed cheat sheet:
There's a "mode" popup menu directly in the middle of the screen.  
Blender starts you out with a cube.  

To model anything with this, we need more polygons.  Press the bottom-right icon to switch from Timeline to Properties, select the wrench icon to get Object Modifiers, and hit Add Modifier -> Generate column -> Multiresolution.  Scroll down and hit "Subdivide" six times, splitting each edge 2^6 = 64 ways (so each original face becomes 4^6 smaller polygons).  Zoom into the now-smoothed high-poly sphere with scroll wheel or control-middle-click.   Switch to "Sculpt Mode".  The "Brush" tab on the left shows your sculpting options.  Scroll down to Symmetry, and turn on symmetry about the X axis.  Use the "Add", "Grab", and "Smooth" tools to sculpt the object into something meaningful, like a potato.  

Note you can switch back to "Edit" mode, and deform the original (non-subdivided, non-sculpted) cube, and everything works nicely.

Save the original as a .blend file, so you can keep editing it.  To export a low-poly triangle version in a nice ASCII format, go back to Object Mode and find Object Modifiers again.  Add the "Decimate" modifier, set the decimation ratio to 0.1 or so, hit Apply, and File->Export as a RAW or Wavefront .obj file.

Exporting from 3D modelers to "Real Code"

So 3D modeling programs make it pretty easy to generate cool geometry.  The catch is that you then have to somehow move that geometry into your application (game, visualization app, etc).

The easiest way to do this is to skip it entirely--just do all your modeling and rendering inside the 3D modeling program!  But the real-time rendering performance of these programs usually isn't that good, and you often need to add scripting or shading features that would be tricky to bolt onto the 3D program.

The standard way to exchange data between programs is, of course, files.  We've looked at several very simple file formats, like the OBJ format, but modeling programs usually support much more than the simplest "just the polygons" formats, because they handle far more than polygons: colors, textures, "instanced" sub-pieces (like function calls), and transforms.

Every program supports vertices, the XYZ positions of geometric points.   Some care about edges, pairs of points.  Others want faces like triangles, triplets of points, or quads, with four points (planar or non-planar).  Most programs need additional data too: texture coordinates (often per vertex), rendering needs normals (per face for flat shading, per vertex for smooth shading), and simulations need boundary condition information (per face, per vertex, or both).  The situation is hence something of a mess: many conversions are workable with some loss of data, but few lossless model format conversions exist.
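To make the vertex/face split concrete, here's a minimal sketch of reading just the simplest OBJ subset ("v" and triangular "f" lines only; the function name is my own invention).  Note that OBJ face indices are 1-based:

```javascript
// Hypothetical minimal OBJ reader: handles only "v x y z" and
// triangular "f a b c" lines, ignoring normals, textures, quads, etc.
function parseSimpleObj(text) {
    var vertices = []; // array of [x, y, z]
    var faces = [];    // array of [i, j, k], shifted to 0-based
    text.split('\n').forEach(function (line) {
        var parts = line.trim().split(/\s+/);
        if (parts[0] === 'v') {
            vertices.push([+parts[1], +parts[2], +parts[3]]);
        } else if (parts[0] === 'f') {
            // OBJ indices are 1-based (and "1/2/3" slash syntax may
            // tack on texture/normal indices), so strip and shift:
            faces.push(parts.slice(1, 4).map(function (p) {
                return parseInt(p.split('/')[0], 10) - 1;
            }));
        }
    });
    return { vertices: vertices, faces: faces };
}

// A single right triangle:
var mesh = parseSimpleObj('v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n');
// mesh.faces[0] is [0, 1, 2] -- forget the "- 1" shift and you get
// exactly the off-by-one spiky-glob bug.
```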

Blender, at least, supports a bunch of decent import and export formats.
Typically, when reading a new 3D object file format, I will:
  1. Research and use a hex dump tool to figure out how the file is organized.
  2. Try to read and dump some XYZ coordinates to the screen.  This usually takes a few tries, and tells me what the scale factors are (e.g., all coordinates between -0.00001 and +0.00003, or -1000 and +30000).  
  3. Splat some GL_POINTS at the XYZ coordinates of the vertices.  This usually takes some tweaking to get the scale factor correct (meters, inches, or millimeters?), and here's where I need to fight the Y/Z up axis question.
  4. Draw GL_TRIANGLES at the face indices.  There are often issues with silly things like 0-based versus 1-based numbering (which makes a spiky-looking glob instead of a smooth object).
  5. Try to recover the existing normals, or compute my own normals.  I usually need to compute my own.
  6. Try to figure out texture coordinates.  By this point, I'll likely just bodge something together!
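For step 5, a per-face normal is just the normalized cross product of two edge vectors.  A sketch (the helper name and array representation are mine):

```javascript
// Hypothetical per-face normal: cross product of two triangle edges,
// normalized to unit length.  a, b, c are [x, y, z] arrays.
function faceNormal(a, b, c) {
    var u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]]; // edge a->b
    var v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]]; // edge a->c
    var n = [u[1] * v[2] - u[2] * v[1],              // cross product u x v
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]];
    var len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    return [n[0] / len, n[1] / len, n[2] / len];
}

// Counterclockwise triangle in the XY plane points along +Z:
var n = faceNormal([0, 0, 0], [1, 0, 0], [0, 1, 0]);
// n is [0, 0, 1]
```

Note the direction flips if the vertices wind clockwise, which is why consistent winding order matters when you compute your own normals.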

Loading Models in THREE.js

In THREE.js, there's a whole set of file format loaders in threejs/examples/js/loaders/.  Generally, these are objects with a "load" function that fetches the mesh data from a web server, then hands it to a "parse" function that converts it to a THREE.Geometry object, like this:
parse: function ( data ) {
    var geometry = new THREE.Geometry();

    while ( -- data has more faces -- ) {
        var normal = undefined;
        if ( -- face has a normal -- ) {
            normal = new THREE.Vector3( -- XYZ normal -- );
        }
        while ( -- face has more vertices -- ) {
            geometry.vertices.push( new THREE.Vector3( -- XYZ vertex -- ) );
        }
        var len = geometry.vertices.length;
        geometry.faces.push( new THREE.Face3( len - 3, len - 2, len - 1, normal ) );
    }

    return geometry;
}
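Here's a concrete, standalone version of that skeleton for Blender's RAW export (one triangle per line, as nine whitespace-separated floats); I've dropped the THREE.js objects for plain arrays so the parsing logic is visible, and the function name is my own:

```javascript
// Hypothetical standalone parser for Blender's ASCII .raw export:
// each line is one triangle, given as nine floats (three XYZ vertices).
function parseRaw(text) {
    var vertices = [];  // flat [x,y,z, x,y,z, ...]
    var faces = [];     // triples of 0-based vertex indices
    var lines = text.split('\n');
    for (var i = 0; i < lines.length; i++) {
        var nums = lines[i].trim().split(/\s+/).map(Number);
        if (nums.length !== 9 || nums.some(isNaN)) continue; // skip blanks/junk
        for (var j = 0; j < 9; j++) vertices.push(nums[j]);
        var len = vertices.length / 3;   // vertex count so far
        faces.push([len - 3, len - 2, len - 1]); // the 3 we just added
    }
    return { vertices: vertices, faces: faces };
}

// One-triangle file:
var rawMesh = parseRaw('0 0 0  1 0 0  0 1 0\n');
// rawMesh.faces[0] is [0, 1, 2]
```

Since RAW repeats each shared vertex per triangle, a real loader would typically also merge duplicate vertices afterward.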
The STLLoader currently returns a THREE.Geometry object, and you create your own THREE.Mesh.
The OBJLoader returns a THREE.Object3D ready to add to your scene.
The ColladaLoader returns a custom object with a ".scene" field.  

Aside from figuring out the return value, it's really pretty straightforward.  One annoying part is that the data gets loaded using an XMLHttpRequest, which sadly is subject to the "same-origin" rule: it can only fetch data from the same server that served the page, not from a local file (this is to protect your computer from random JavaScript reading your files).