3D Model Guide
Revision as of 18:41, 12 January 2012
This is a work-in-progress and open to any edits. You may add any information that you feel would be useful, and modify any existing information that may be inaccurate or could be improved. No permission is required. Of course, as this is an open page, "If you don't want your writing to be edited mercilessly, then don't submit it here."
Preface
This page will briefly describe how to reverse a 3D model format. This will not be a tutorial on how to read hex, but instead on various concepts used in 3D computing and rendering that will help make sense of what the data represents.
For a more detailed list of 3D concepts, check out the 3D model glossary.
Introduction
Please read the Definitive Guide to Exploring File Formats before starting this guide, which is aimed specifically at 3D file formats. It is assumed that you know how to read files in a hex viewer.
No prior knowledge of 3D is needed, but any experience in 3D modelling or 3D programming will help. No advanced math is used in 3D formats themselves; it is used in 3D programming, but those details are usually hidden inside the 3D libraries you use, such as your 3D engine or your 3D modeller's scripting language. You should, however, understand what vectors and matrices are.
We should first understand the basics of how 3d data is represented.
Representation of 3D objects
A GPU's main job is to process vertices, and it is very good at it. It is possible to store and render 3D graphics using methods other than the ones described here, such as voxels, but modern GPUs are only good at rendering from the kind of data described in this document, and most games need the GPU because the CPU is too slow to render 3D graphics in realtime, whether polygonal or voxel-based. In practice this means 99.9% of the games you encounter will use the methods described here.
Vertex
The building block of 3D data is the vertex. A vertex is a point in 3D space, so it needs three values: an x, y and z position. Position values are usually stored as 4-byte floats, though not always.
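For example, in a typical binary format the three position floats sit right next to each other, so Python's struct module can pull them out. The little-endian 4-byte layout below is an assumption for illustration; real formats vary in endianness and size.

```python
import struct

# Fake file data: one vertex at (1.5, -2.0, 0.25), stored as three
# little-endian 4-byte floats. Real files define their own layout.
data = struct.pack("<3f", 1.5, -2.0, 0.25)

# Reading the vertex back, as an importer would:
x, y, z = struct.unpack_from("<3f", data, 0)
```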
Face (Triangle, Tristrip, etc.)
Two vertices (“vertices” is the plural of “vertex”) can connect to form a line.
Three vertices can connect to form a triangle. More vertices can connect to form more complex polygons like quads (4 vertices), but triangles are the most common. In fact, the GPU needs to break complex polygons down into triangles before processing them, so 3D formats almost always use triangles. But how are triangles stored in files? Three vertices (3*3 position values) are enough to create a triangle, but that is not the only way triangles are generated from the vertex data in 3D files. Triangles can also be represented as what is called a “tristrip”, or triangle strip. In a triangle strip, the first three vertices form a triangle, then every new vertex creates a new triangle by connecting with the previous two. Tristrips are a performance optimization for some GPUs.
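As a sketch, unpacking a tristrip into individual triangles looks like this (the alternating winding-order flip is the usual convention, but confirm it against your target format):

```python
def tristrip_to_triangles(strip):
    """Convert a triangle strip (a list of vertex indices) into individual
    triangles. Every vertex after the first two forms a triangle with the
    previous two vertices."""
    triangles = []
    for i in range(len(strip) - 2):
        a, b, c = strip[i], strip[i + 1], strip[i + 2]
        # Flip every other triangle so all faces keep the same winding order.
        triangles.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return triangles
```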
There are other ways to connect vertices, like linestrips or trifans (short for “triangle fan”: each new vertex after the first creates a new triangle by connecting with the previous vertex and the very first one). However, these are less common.
Some 3D formats will use what is called an “index buffer” (index list) together with the “vertex buffer” (vertex list). An index buffer is basically a list of (usually integer) numbers specifying which vertices connect with each other, by their index in the vertex buffer.
Vertex and Index Buffer
A vertex buffer is just a list of vertex data. It can be interleaved or non-interleaved. Interleaved means each entry in the buffer contains all of one vertex's data (position, normal, color, UV map, etc.) before the next vertex begins. Non-interleaved means there is a buffer of all positions, then a buffer of all vertex normals, and so on.
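A minimal sketch of the two layouts, assuming each vertex is a float3 position plus a float3 normal (an illustrative layout, not from any particular format):

```python
import struct

# Interleaved: position0, normal0, position1, normal1, ...
interleaved = struct.pack("<12f",
                          0, 0, 0,  0, 0, 1,   # vertex 0: position, normal
                          1, 0, 0,  0, 0, 1)   # vertex 1: position, normal
stride = 24  # bytes per vertex: 6 floats * 4 bytes
pos1 = struct.unpack_from("<3f", interleaved, 1 * stride)

# Non-interleaved: all positions first, then all normals.
positions = struct.pack("<6f", 0, 0, 0,  1, 0, 0)
normals   = struct.pack("<6f", 0, 0, 1,  0, 0, 1)
pos1_ni = struct.unpack_from("<3f", positions, 1 * 12)  # 12 bytes per position
```

Either way you read back the same vertex; only the offsets and strides differ.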
As explained above, an index buffer is basically a list of (usually integer) numbers specifying which vertices connect with each other, by their index in the vertex buffer. Index buffers are easy to spot because they usually look like a list of growing integers. Simple formats might not use an index buffer at all.
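For example, resolving an index buffer against a vertex buffer (plain triangle list, three indices per triangle):

```python
# Four corner vertices of a square, as (x, y, z) positions.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]

# Two triangles sharing an edge; indices 1 and 2 are reused,
# which is the whole point of an index buffer.
indices = [0, 1, 2,  2, 1, 3]

# Look up every index in the vertex buffer, three at a time.
triangles = [tuple(vertices[i] for i in indices[t:t + 3])
             for t in range(0, len(indices), 3)]
```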
Mesh
A collection of triangles or tristrips is called a “Mesh” or “Surface”. Each file you explore will likely have more than one of these.
Vertex color
Vertex colors are color values assigned to vertices. It's not hard to imagine a colored dot in 3d space, but what happens when vertices connect and form lines or triangles? It would be practically useless to have colored dots on your mesh. Luckily, that's not how they are rendered by OpenGL or DirectX when connected. The colors of the neighboring vertices are interpolated over the surface, so you get a nice gradient. This makes vertex colors very useful for things like skydomes because they take up far less memory than gradient textures. Vertex colors are also used on map models to simulate shadows. Some old games use vertex colors on characters instead of textures (usually only the face of the character will use a texture image).
Vertex colors are usually four floats or integers, in RGBA order (Red, Green, Blue, Alpha) or BGRA. Alpha might not be used, depending on the game. It's easy to find out whether the game uses BGR(A) or RGB(A) order: if the model appears too blue or too red after importing, you probably read it in the wrong order.
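Fixing a wrong channel order is a one-line swizzle; a sketch:

```python
def bgra_to_rgba(color):
    """Reorder a (b, g, r, a) tuple into (r, g, b, a). If an imported
    model looks too blue or too red, try swapping the order like this."""
    b, g, r, a = color
    return (r, g, b, a)
```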
Vertex normal
Vertex normals are vectors used to calculate lighting during rendering. If you don't read them, your 3D tool might display your meshes as completely black or white. However, most 3D modellers will silently regenerate normals during import and editing, or at least give you an option to regenerate them. Normals are usually stored as three floats (x, y, z).
Vertex UV map
UV maps are (x, y) position values which specify how a 2D texture image is “wrapped” over your 3D mesh. The game or 3D modeller treats your texture image as a grid where the bottom-left corner is position (0.0, 0.0) and the top-right corner is position (1.0, 1.0). Your vertices, and therefore the faces they form, are positioned on that grid, so the GPU can easily tell which pixel to assign where on your mesh. In other words, each vertex has two positions: one in 3D space and one in the 2D space of your texture grid. UV maps are usually stored as two floats (x, y). Note that although the grid spans (0.0, 1.0), vertex UV positions may go beyond that range. Depending on the format, or on a flag somewhere in the file, a vertex falling outside the range will either get no pixel assigned (black or transparent), get the closest pixel in the grid, or the grid will be treated as infinitely repeating (in other words, position 2.0 will be the same as 1.0, and so on).
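The out-of-range behaviours can be sketched like this (the mode names here are illustrative; real formats encode them in their own flags):

```python
def wrap_uv(u, v, mode="repeat"):
    """Map a UV coordinate that falls outside the (0.0, 1.0) grid back
    into it, using one of the common addressing modes."""
    if mode == "repeat":
        # Infinitely repeating grid: 2.25 behaves like 0.25.
        return (u % 1.0, v % 1.0)
    if mode == "clamp":
        # Snap to the nearest edge of the grid (closest pixel).
        return (min(max(u, 0.0), 1.0), min(max(v, 0.0), 1.0))
    return (u, v)
```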
Some games use what is called “multitexturing”, which means multiple textures are applied on top of each other like layers in Photoshop or GIMP. In that case each vertex can have a different UV map for every texture image it is assigned to. Multitexturing is often used on terrain models for a texture splatting effect. It is also present when the game uses normal maps or specular maps, though the vertex will not always use different UV maps for those.
Material
Materials specify how a mesh is shaded, or how it appears when lit with OpenGL or DirectX lights. Materials are applied at the mesh level; they are not a per-vertex value (although you can technically have a mesh with a single vertex, this is not done in practice). Some games might not use materials at all because they don't use realtime lighting, or they might have materials for characters but none for levels, where lighting can be simulated by vertex colors or shadow textures.
Materials are fairly standard: there are usually RGBA/RGB or BGRA/BGR values for each material component. The material components are ambient color, diffuse color, specular color and emissive color. There is also a single integer or float for “shininess” (or “specular power”), and usually a texture id or name telling which texture to use with this material.
These will almost always appear one after another, so they won't be hard to spot. A material can also contain a vertex/face index and count telling you which vertices or faces are assigned this material (and, if the material has a texture name or index, which texture to assign to those vertices/faces). Alternatively, the material may have a mesh name or id, in which case each mesh is assigned a single material and there won't be any list of vertices/faces in the file.
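As an illustration, a hypothetical material record with the layout described above could be read like this (the exact order, sizes and presence of fields are assumptions; check your target format):

```python
import struct

# Fake material record: ambient, diffuse, specular, emissive as RGBA
# floats, then a float shininess and a 4-byte texture id. This layout
# is hypothetical, for illustration only.
record = struct.pack("<17fI",
                     *([0.25] * 4 + [0.75] * 4 + [1.0] * 4 + [0.0] * 4),
                     32.0, 7)

ambient    = struct.unpack_from("<4f", record, 0)    # bytes 0..15
diffuse    = struct.unpack_from("<4f", record, 16)   # bytes 16..31
shininess  = struct.unpack_from("<f",  record, 64)[0]
texture_id = struct.unpack_from("<I",  record, 68)[0]
```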
Texture
A texture is a 2D image applied to a 3D model using the vertex UV maps, or UVs generated in realtime. There is not much to say about textures. They can be embedded in the 3D file or stored in separate files. If a texture is a separate file, the 3D file will contain its location, or the game engine will assume the mesh and texture share the same name. If the files are separate, you might be lucky: they might be in a common, well-documented format like JPG, PNG, BMP, TGA or DDS, in which case you can simply open them in an image viewer. If they are inside the 3D file, start by searching for common image headers, though these headers may have been stripped.
Bone, shapekey and animation
When we think of 2D animation, we think of an image's movement, rotation, scaling or other transformation over time. In the 3D realm there is much more than that. When you move your character in the game, the game engine repositions and rotates the player mesh based on your input. Those animations are not stored in 3D files; they are generated by the game engine every time.
Bone
But when you move your player, you can see him moving his arms and legs around like in real life. How does that work? How did the developers specify that the vertices of the arms should move one way, the vertices of the legs another, and that some vertices should move more than others? And is all of that stored in the 3D file? Yes and no. The movement of vertex “groups” (arm, leg) is indeed stored in the file, but not the movement of each vertex. Each “group” is given a name or index in the file, and it has a default position, rotation and scale, usually stored as a 4x4 “transform matrix”. (Matrices are a large topic; a full tutorial would not fit here.) The animation file then stores a new position, rotation and scale for that group for every frame, and when an animation plays, the group transforms accordingly, transforming the vertices assigned to it in turn. This “group” is called a “bone” or “joint” in 3D graphics, and a collection of bones is called a “skeleton” or “armature”. Each character model will usually have only one skeleton. Very old games had only a single bone index or bone name per vertex, and vertices were assigned to bones this way. The transform matrices of bones (and a bone is essentially just its transform matrix) can be relative to the skeleton (or "root"), or relative to the parent bone.
Please don't confuse bones stored in 3D files with bones in your 3D modeller. Blender, for example, gives each bone a base and a tip, which is useless for games, so that info is discarded during export.
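To see what a bone's 4x4 transform matrix actually does to a vertex, here is a minimal sketch (row-major matrices with the translation in the last column; conventions differ per format):

```python
def transform_point(matrix, point):
    """Apply a 4x4 row-major transform matrix to an (x, y, z) point,
    treating it as the homogeneous point (x, y, z, 1). This is what a
    bone's transform does to every vertex assigned to it."""
    x, y, z = point
    out = [row[0] * x + row[1] * y + row[2] * z + row[3] for row in matrix]
    return tuple(out[:3])

# A bone whose transform translates its vertices by (0, 2, 0):
bone = [[1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 2.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0]]
```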
Bone weight
Developers soon realized that having a vertex follow only a single bone looked very unrealistic; it couldn't mimic the elastic nature of skin. So developers began assigning a vertex to multiple bones. But how would that work? What if both bones moved at the same time? The answer was that the new position of the vertex would be calculated from both, and so “soft-skinning” was invented. Later, developers were also able to specify how much each bone influences the vertex, and “vertex weights” were invented: along with each bone name or index, the vertex also stores a weight for that bone. Vertex weights are used in almost all games these days. A vertex weight is usually a float, and the sum of all weights for one vertex is always 1.0. Some formats therefore do not store the weight for the final bone, and you need to calculate it by subtracting the sum of the vertex's other weights from 1.0.
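Recovering the omitted final weight is simple; a sketch:

```python
def final_bone_weight(stored_weights):
    """Some formats omit the last bone's weight because all weights for
    a vertex must sum to 1.0; recover it by subtracting the stored ones."""
    return 1.0 - sum(stored_weights)
```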
Animation
Animations are usually stored in their own files and contain a new transform matrix, or just position, rotation and scale values, for every bone for every frame. The transform matrix (the "animation matrix") is multiplied with the bone's transform matrix to get the final result.
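The per-frame combination is a plain 4x4 matrix multiply; a sketch with row-major matrices (whether the animation matrix goes on the left or the right depends on the format's conventions):

```python
def mat4_mul(a, b):
    """Multiply two 4x4 row-major matrices: result = a * b. An animation
    frame's matrix is combined with a bone's rest-pose matrix this way."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# The identity matrix leaves any transform unchanged.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
```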
Some games have what are called "jiggle bones". These bones don't have any animations stored in files and the bone animation is calculated during gameplay with a physics engine and physics constraints. Such bones are most often used for hair, clothes and body parts.
Shapekey
Shapekeys (also called morphs or morph targets) are used less often. A morph stores an alternative position for each affected vertex. A morph animation then blends each vertex between its base position and its morph position by a factor, usually in the range 0.0 to 1.0. Morphs are most often used for facial animation.
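As a sketch, applying a morph at 50% strength just interpolates every vertex halfway between its two stored positions (names here are illustrative):

```python
def apply_morph(base, target, factor):
    """Blend each vertex position between its base position and its
    morph target position. A factor of 0.0 gives the base mesh, 1.0 the
    full morph; facial animation just animates this factor over time."""
    return [tuple(b + (t - b) * factor for b, t in zip(bv, tv))
            for bv, tv in zip(base, target)]
```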
Tools
You will need two things:
- A hex viewer
- A 3D model viewer/ 3D modeller
This is the bare minimum required to get started on your model reversing journey, but your choice of tools may make your experience easier or more challenging. Of course if you are working with a text format you won't need a hex viewer, but more often than not you will be working with binary formats.
Some common tools are listed below. It is by no means an exhaustive list. Try different tools to get a feel for what you like. Links are available at the end of the page.
Hex Viewers
- 010 Editor
- 30-day trial, US$50 license
- Comes with many useful features.
- HxD
- Free
- Simple hex editor.
- Hex Workshop
- 30-day trial, US$90 license
3D Model Viewers/ 3D Modellers
There are many packages for model editing, and dozens of custom model viewers. The more common ones are as follows:
- 3D Studio Max
- 30-day trial, US$3500 license
- Comes with its own scripting language for importing models.
- Blender
- Free
- Uses Python scripts to import models.
- Noesis
- Free
- Provides C++ and Python APIs for importing models.
- Metasequoia
- Free*, US$45 license
- Simple 3D editor. It does not support many features, but it gets the geometry out. The MQO format is very simple as well.
- Note that the free version comes with limited features (it doesn't allow importing/exporting to other formats).
A simple model format
Here is a simple model format: http://forum.xentax.com/viewtopic.php?f=29&t=3739 (temporarily just linking to an existing tutorial)
Links
- Blender - http://blender.org
- Python - http://python.org/
- Noesis - http://oasis.xentax.com/index.php?content=home
- Metasequoia - http://metaseq.net/english/