Before you start, make certain you are familiar with the following concepts and terminology.
OpenGL is the Open Graphics Library, a platform-independent specification for rendering 2D/3D computer graphics. For Java, there are two implementations of OpenGL-based renderers: LWJGL and JOGL.
OpenAL is the Open Audio Library, a platform-independent 3D audio API.
The jME Context makes settings, the renderer, timer, input and event listeners, and the display system accessible to a jME game.
Coordinates represent a location in a coordinate system, relative to the origin (0,0,0). In 3D space, you need to specify three coordinate values to locate a point: x, y, z.
As opposed to a vector (which looks similar), a coordinate does not have a "direction".
The origin is the central point in the 3D world. It's at the coordinates (0,0,0).
Code sample: Vector3f origin = new Vector3f( Vector3f.ZERO );
A vector has a length and a direction. It is used like an arrow pointing at a point in 3D space. A vector starts at the origin (0,0,0), and ends at the target coordinate (x,y,z). Backwards directions are expressed with negative values.
Code sample: Vector3f v = new Vector3f( 17 , -4 , 0 );
A unit vector is a basic vector with a length of 1 world unit. Since its length is fixed (and it thus can only point at one location anyway), the only interesting thing about this vector is its direction.
Vector3f.UNIT_X = ( 1, 0, 0) = right
Vector3f.UNIT_Y = ( 0, 1, 0) = up
Vector3f.UNIT_Z = ( 0, 0, 1) = forwards
Vector3f.UNIT_XYZ = 1 wu diagonal right-up-forwards
A normalized vector is a custom unit vector. A normalized vector is not the same as a (surface) normal vector. When you normalize a vector, it still has the same direction, but you lose the information how long the vector originally was. For instance, you normalize vectors before calculating angles.
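For example, here is a minimal sketch of normalizing two vectors before measuring the angle between them (Vector3f.angleBetween() expects unit vectors; the coordinates are example values):

Vector3f a = new Vector3f( 3, 4, 0 ).normalize(); // length is now 1 wu
Vector3f b = new Vector3f( 0, 2, 0 ).normalize();
float angle = a.angleBetween( b );                // angle in radians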
A surface normal is a vector that is perpendicular to a plane. You calculate the Surface Normal using the cross product.
The cross product is a calculation that you use to find a perpendicular vector (an orthogonal, a "right angle" at 90°). In 3D space, speaking of an orthogonal only makes sense with respect to a plane. You need two vectors to uniquely define a plane. The cross product of the two vectors, v1 × v2, is a new vector that is perpendicular to this plane. A vector perpendicular to a plane is called a Surface Normal.
Example: The x unit vector and the y unit vector together define the x/y plane. The vector perpendicular to them is the z axis. JME can calculate that this equation is true:
( Vector3f.UNIT_X.cross( Vector3f.UNIT_Y ) ).equals( Vector3f.UNIT_Z ) == true
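As a sketch, you can compute the Surface Normal of a triangle yourself: subtract the corner points to get two edge vectors (which define the triangle's plane), and take their cross product. The corner coordinates here are example values:

Vector3f p0 = new Vector3f( 0, 0, 0 );
Vector3f p1 = new Vector3f( 1, 0, 0 );
Vector3f p2 = new Vector3f( 0, 1, 0 );
Vector3f edge1 = p1.subtract( p0 );
Vector3f edge2 = p2.subtract( p0 );
Vector3f normal = edge1.cross( edge2 ).normalizeLocal(); // here: ( 0, 0, 1 )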
Most visible objects in a 3D scene are made up of polygon meshes – characters, terrains, buildings, etc. A mesh is a grid-like structure that represents a complex shape. The advantage of a mesh is that it is mathematically simple enough to render in real time, and detailed enough to be recognizable.
Every shape is reduced to a number of connected polygons, usually triangles; even round surfaces such as spheres are reduced to a grid of triangles. The polygons' corner points are called vertices. Every vertex is positioned at a coordinate, all vertices together describe the outline of the shape.
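As a minimal sketch, this is what a mesh looks like in code: a single triangle, built from three vertices and one index triple (using jME3's Mesh, VertexBuffer, and BufferUtils classes):

Mesh mesh = new Mesh();
Vector3f[] vertices = { new Vector3f( 0, 0, 0 ),
                        new Vector3f( 3, 0, 0 ),
                        new Vector3f( 0, 3, 0 ) };
int[] indices = { 0, 1, 2 }; // one triangle, counter-clockwise
mesh.setBuffer( VertexBuffer.Type.Position, 3, BufferUtils.createFloatBuffer( vertices ) );
mesh.setBuffer( VertexBuffer.Type.Index, 3, BufferUtils.createIntBuffer( indices ) );
mesh.updateBound();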
You create 3D meshes in tools called mesh editors, e.g. in Blender. See the jME3 math documentation to learn more about 3D maths.
What we call "color" is merely part of an object's light reflection. The onlooker's brain uses shading and reflective properties to infer an object's shape and material. Factors like these make all the difference between chalk vs. milk, skin vs. paper, water vs. plastic, etc!
Textures are part of Materials. In the simplest case, an object could have just one texture, the Color Map, loaded from one image file. When you think back to old computer games, you'll remember that this looks quite plain. The more information you (the game designer) provide in addition to the Color Map, the higher the degree of detail and realism. Whether you want photo-realistic rendering or "toon" rendering (Cel Shading), everything depends on the quality of your materials and texture maps. Modern 3D graphics use several layers of information to describe one material; each layer is a Texture Map.
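In jME3, loading a simple Color Map looks like this (a sketch, assuming 'assetManager' and a Geometry 'geom' inside a SimpleApplication; the Monkey.jpg logo ships with the engine):

Material mat = new Material( assetManager, "Common/MatDefs/Misc/Unshaded.j3md" );
mat.setTexture( "ColorMap", assetManager.loadTexture( "Interface/Logo/Monkey.jpg" ) );
geom.setMaterial( mat );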
Bump maps are used to describe detailed shapes that would be too hard or simply too inefficient to sculpt in a mesh editor. You use Normal Maps to model cracks in walls, rust, skin texture, or a canvas weave. You use Height Maps to model whole terrains with valleys and mountains.
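Here is a sketch of adding a Normal Map on top of a Color (Diffuse) Map, using jME3's lit material (the texture paths are example assets from the jME3 test data):

Material mat = new Material( assetManager, "Common/MatDefs/Light/Lighting.j3md" );
mat.setTexture( "DiffuseMap", assetManager.loadTexture( "Textures/Terrain/Pond/Pond.jpg" ) );
mat.setTexture( "NormalMap", assetManager.loadTexture( "Textures/Terrain/Pond/Pond_normal.png" ) );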
Tiled textures are a very simple, commonly used type of texture. When texturing a wide area (e.g. walls, floors), you don't create one huge texture; instead, you tile a small texture repeatedly to fill the area.
A seamless texture is an image file that has been designed so that it can be used as tiles: The right edge matches the left edge, and the top edge matches the bottom edge. The onlooker cannot easily tell where one starts and the next one ends, thus creating an illusion of a huge texture. The downside is that the tiling becomes painfully obvious when the area is viewed from a distance. Also you cannot use it on more complex models such as characters.
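A sketch of tiling in jME3, assuming 'assetManager', 'mat', and 'geom' as above (the texture path is an example asset): set the texture's wrap mode to Repeat, and scale the mesh's texture coordinates so the tile repeats:

Texture tile = assetManager.loadTexture( "Textures/Terrain/splat/dirt.jpg" );
tile.setWrap( Texture.WrapMode.Repeat );  // repeat beyond the 0..1 range
mat.setTexture( "ColorMap", tile );
geom.getMesh().scaleTextureCoordinates( new Vector2f( 4, 4 ) ); // 4x4 tiles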
Creating a texture for a cube is easy – but what about a character with a face and extremities? For more complex objects, you design the texture in the same way as a sewing pattern: One image file contains the outline of the front, back, and side of the object, next to one another. Specific areas of the flat texture (UV coordinates) map onto certain areas of your 3D model (XYZ coordinates), hence the name UV map.
Getting the seams and mappings right is crucial: You will have to use a graphic tool like Blender to create UV Maps. It's worth your while to learn this: UV-mapped models look a lot more professional.
Environment Maps are used to create the impression of reflections and refractions. You create a Cube Map to represent your environment, similar to a skybox. (Sphere Maps are possible, but often look too distorted.) You give the Cube Map a set of images showing a "360° view" of the completed scene. The renderer uses this as base to texture the reflective surface.
Of course these reflections are static and not "real" (e.g. the player will not see his avatar's face reflected), but they create the desired "glass/mirror/water" effect and are fast enough to render in real time. For most use cases, that is good enough.
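Here is a sketch of loading a Cube Map as a sky box in jME3 (assuming 'assetManager' and 'rootNode'; the .dds cube map is an example asset, and the EnvMapType variant of createSky() is available in jME 3.1 and later):

rootNode.attachChild( SkyFactory.createSky( assetManager,
        "Textures/Sky/Bright/BrightSky.dds", SkyFactory.EnvMapType.CubeMap ) );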
A MIP map texture stores the same texture in two or three resolutions in one file (MIP = "multum in parvo" = "many in one"). Depending on how close (or far) the camera is, the engine automatically renders a more (or less) detailed texture for the object. Thus objects look smooth from close up, but don't waste resources on imperceptible details when far away. Good for everything, but requires more time to create and more space to store textures.
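A sketch, assuming 'assetManager' and an example texture path: selecting a trilinear minification filter, which blends between the two nearest MIP levels when the texture is sampled:

Texture tex = assetManager.loadTexture( "Textures/Terrain/splat/grass.jpg" );
tex.setMinFilter( Texture.MinFilter.Trilinear );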
A procedural texture is generated from repeating one small image, plus some pseudo-random, gradient variations (Perlin noise). Procedural textures look more natural than static rectangular textures; for instance, they look less distorted on spheres. On big meshes, their repetitiveness is much less noticeable than tiled seamless textures. Procedural textures are ideal for irregular large-area textures like grass, soil, rock, rust, and walls.
See also: jMonkeyPlatform NeoTexture plugin
See also: Creating Materials in Blender, Blender: Every Material Known to Man
In 3D games, Skeletal Animation is used for animated characters, but in principle the skeleton approach can be extended to any 3D mesh (for example, an opening crate's hinge can be considered a joint). Unless you are animating a 3D cartoon, the realism of animated characters is generally a problem: Movement can look mechanical and alien, or broken; the character can appear hollow, or as if floating. Professional game designers invest a lot of effort to make characters animate in a natural way (including motion capture).
An animated character has an armature: An internal skeleton (Bones) and an external surface (Skin). The Skin is the visible outside of the character and it includes clothing. The Bones are not visible and are used to interpolate (calculate) the morphing steps of the skin.
JME3, the game engine, only loads and plays your recorded animations. You must use a tool (such as Blender) to set up (rig, skin, and animate) a character.
In the JME3 application, you register animated models with the Animation Controller. The controller object gives you access to the available animation sequences. The controller has several channels; each channel can run one animation sequence at a time. To run several sequences, you create several channels and run them in parallel.
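Code sample (a sketch, assuming an animated model 'player' with an animation sequence named "Walk", as in the HelloAnimation tutorial):

AnimControl control = player.getControl( AnimControl.class );
AnimChannel channel = control.createChannel(); // runs one sequence at a time
channel.setAnim( "Walk" );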
Non-player (computer-controlled) characters (NPCs) are only fun in a game if they do not stupidly bump into walls, or blindly run into the line of fire. You want to make NPCs "aware" of their surroundings and let them make decisions based on the game state; otherwise the player can just ignore them. The most common use case is that you want enemies to interact in a way that offers a more interesting challenge for the player. "Smart" game elements are called artificially intelligent agents (AI agents). An AI agent can be an NPC, but also an "automatic alarm system" that locks doors after an intruder alert, or a trained pet. The domain of artificial intelligence deals, among other things, with knowledge representation, decision making, and pathfinding.
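As a minimal sketch of such game-state awareness, a custom jME3 Control can inspect the distance to the player every frame and react to it. The class name, the 'player' Spatial, and the flee distance and speed are hypothetical example values:

public class FleeControl extends AbstractControl {
    private static final float FLEE_DISTANCE = 5f; // example value
    private final Spatial player;

    public FleeControl( Spatial player ) {
        this.player = player;
    }

    @Override
    protected void controlUpdate( float tpf ) {
        // vector pointing from the player towards this NPC
        Vector3f away = spatial.getWorldTranslation().subtract( player.getWorldTranslation() );
        if( away.length() < FLEE_DISTANCE ) {
            spatial.move( away.normalizeLocal().mult( 2f * tpf ) ); // run away
        }
    }

    @Override
    protected void controlRender( RenderManager rm, ViewPort vp ) {}
}

You would attach it to an NPC with npc.addControl( new FleeControl( playerNode ) );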
There are lots of resources explaining interesting AI algorithms.