Tag Archives: orientation

World Space Coordinates of Screen Fragments 2: Manchester Prep

A very nice person commented on my previous post about getting the world-space coordinates of the near plane of the view frustum, which has inspired me to revisit the topic; it’s been bugging me that my previous technique required a matrix multiplication, and that feels like it might be more expensive than strictly necessary. As I discussed before, you might want to know exactly where in your world the screen is so that, if it intersects with something, you can treat that part of the screen differently. For example, if your camera is partially submerged in water, you might want to apply a fogging and distortion effect to the fragments below the surface, but not to those above it.

The first thing to understand is how your camera is oriented. For historical reasons, and because of the way my world axes are set up (+x is east, +y is north, +z is down; the camera’s neutral direction is looking north along the y axis), the camera orients itself in world space by rotating around the z axis to look left-right, and around the x axis to look up-down. Just to make things more confusing, because the world moves around the camera in OpenGL, remember that in your shaders the camera’s coordinates are negated (i.e. your shaders think your camera is at (-cameraX, -cameraY, -cameraZ)). You can cut through a lot of confusion by using a system like gluLookAt() to orient your camera, which confers a huge bonus in that it works directly in terms of the direction the camera is facing and the camera’s “up” direction, both of which will be very handy.

The first step is to work out where the camera is and which direction it’s looking. In my case, I keep track of the camera’s position as (cameraX, cameraY, cameraZ), and of its rotation around Z and around X in radians (i.e. pi radians is 180 degrees). My camera matrix rotates the camera around the Z axis, then around its own X axis, and then translates it to its location in world space. Using this system, the camera’s forward unit vector is worked out like this:
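
(The post continues with the exact code; purely as a rough sketch, and with sign conventions that are my assumption and depend on which way your rotations run, it boils down to simple trigonometry:)

```c
#include <math.h>

/* Sketch only: camera forward vector from yaw (rotZ) and pitch (rotX),
 * assuming +x = east, +y = north, +z = down and a neutral view along +y.
 * The signs here are assumptions: check them against your own
 * rotation directions. */
void cameraForward(float rotZ, float rotX, float out[3])
{
    out[0] = -sinf(rotZ) * cosf(rotX); /* east  */
    out[1] =  cosf(rotZ) * cosf(rotX); /* north */
    out[2] =  sinf(rotX);              /* down  */
}
```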

Continue reading World Space Coordinates of Screen Fragments 2: Manchester Prep

Orientation Matrices for Cube Mapping

I may well be totally wrong, but I don’t think I’ve ever successfully Googled a useful set of orientation matrices for cube mapping. As you already know, a cube map is a set of six images which, when projected on to a cube, provide a decent simulation of a spherical texture map. Two of the most common uses are to provide real-time reflections on the outside of an object (by repeatedly making a map of the environment surrounding that object and then projecting that map back on to the object, as in the reflections you see on, say, the cars in racing games), and to provide a skybox.

Skyboxes are usually either pre-rendered (pretty but boring), or generated by rendering atmospheric scattering for your scene and then projecting on some celestial bodies like the moon, stars, etc. (pretty but computationally intensive). An additional bonus of drawing your own skyboxes is that you can then use them for environment/ambient lighting of objects in your scene, either by working out the spherical harmonics (neat, but I’m far too dumb to have ever wrapped my head around it) or by techniques which involve downsampling the cube map. This gives you ambient light which changes colour depending on the angle of the sun, basically for free.

Therefore cube maps have multiple advantages for rendering your skybox and lighting:
1) render once and reuse (you can remake the map once per frame, or less often depending on how dynamic the sun is).
2) you can do atmospheric scattering at a surprisingly low resolution and still get a decent-looking result. You basically have to do your atmospheric scattering in the fragment shader if you want to use a “spotlight” effect to render the sun, which gets very expensive in terms of fragment power. I actually do both the sun and the moon, which is even more expensive, so lowering the resolution is a major speed-up here.
3) basically free specular and ambient environment mapping of the sky on to everything in your scene. You can either go the very expensive route for downsampling, or just mipmap the thing (see the sketch after this list) and get 90% of the quality for 10% of the effort, with hardware acceleration.
4) if you’re blending your scene into the sky for a distance-fogging effect – well, you just got the source for that as well!
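
As a quick sketch of the cheap route in point 3 (skyCubeTex is a hypothetical texture id; this assumes a GL 3.0+ context, or ARB_framebuffer_object, and a loader such as GLEW for glGenerateMipmap):

```c
/* Sketch: after rendering the six faces into skyCubeTex, let the
 * hardware build the mipmap chain. Sampling the higher mip levels
 * then gives a cheap blurred average of the sky for ambient lighting. */
glBindTexture(GL_TEXTURE_CUBE_MAP, skyCubeTex);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);
```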

This is where you usually run into a brick wall, because figuring out the correct orientation matrices for rendering the cube map is a pain in the backside. What you’re going to be doing in the end is rendering a box around your camera and texture mapping the cube map on to the inside of the box, which will then act as the skybox. You can simplify this by not applying any rotation to the skybox, so that it’s aligned with the x, y and z axes. Therefore what you need to do is figure out how to make the camera look in six directions: +x, -x, +y, -y, +z and -z. You could do this with gluLookAt, but that’s a whole heck of a lot of lines of code just to look in the direction of an axis. Better to just know what matrices to use: see below. (I’m weird and use +x = east, +y = north, -z = up, i.e. inverted right-handed axes.)
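
Before the matrices themselves, here is one way to generate them: the conventional OpenGL cube-map face directions and up vectors, written as gluLookAt() calls from the origin. This is a sketch of the standard convention rather than my exact matrices, and with unusual world axes (like my -z = up) you may need to flip some signs:

```c
#include <GL/glu.h>

/* Conventional look directions and up vectors for the six cube-map
 * faces, in the order +x, -x, +y, -y, +z, -z. */
static const GLdouble faceDir[6][3] = {
    { 1, 0, 0}, {-1, 0, 0},
    { 0, 1, 0}, { 0,-1, 0},
    { 0, 0, 1}, { 0, 0,-1}
};
static const GLdouble faceUp[6][3] = {
    { 0,-1, 0}, { 0,-1, 0},
    { 0, 0, 1}, { 0, 0,-1},
    { 0,-1, 0}, { 0,-1, 0}
};

void lookDownFace(int face) /* face is 0..5 in the order above */
{
    gluLookAt(0.0, 0.0, 0.0,
              faceDir[face][0], faceDir[face][1], faceDir[face][2],
              faceUp[face][0],  faceUp[face][1],  faceUp[face][2]);
}
```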

Continue reading Orientation Matrices for Cube Mapping

Getting world-space coordinates of screen fragments in glsl

So, you’re probably asking yourself why on earth you would even want to do that. Well, it’s useful information if you’ve got a camera which can intersect with things in your scene. The most obvious example here is water: say you wanted a distortion effect and a reduced fogging distance to make the underwater part of your environment visually distinct from the part above water:

Camera Submerging

You may have noticed that in, for example, the Elder Scrolls games you can trick the camera into behaving as if it’s not underwater when you’re near the air-water interface. This is presumably because they’ve just set a camera height which denotes “underwater”; but what if the player has positioned the camera so that half of the screen is underwater and half is above? Hence you need an approach which works per-fragment.

How do you work out which part of your screen is actually underwater? You generally can’t do it while rendering the water’s surface, because you won’t be shading any pixels which aren’t directly at the air-water interface; if you’re under the surface and looking down, that approach fails immediately. What you want instead is a way of masking off the bits of your screen that are underwater, and then using that mask to apply your “underwater” effects.
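
The post works through the details, but as a rough sketch of the underlying test (the camera basis vectors, fovY, aspect, nearD and the screen coordinates sx/sy are all assumed inputs; in practice you would evaluate or interpolate this in the fragment shader rather than in C):

```c
#include <math.h>

/* Sketch: is the near-plane point at screen position (sx, sy), with
 * sx and sy running from -1 to +1, below the water surface?
 * Assumes an orthonormal camera basis (fwd, up, right), a vertical
 * field of view fovY in radians, an aspect ratio, and a near-plane
 * distance nearD. Only the z (down) component is needed for the test. */
int nearPlaneUnderwater(const float cam[3], const float fwd[3],
                        const float up[3], const float right[3],
                        float fovY, float aspect, float nearD,
                        float sx, float sy, float waterZ)
{
    float halfH = nearD * tanf(0.5f * fovY);
    float halfW = halfH * aspect;
    float z = cam[2] + nearD * fwd[2]
            + sx * halfW * right[2]
            + sy * halfH * up[2];
    return z > waterZ; /* +z is down, so greater z means deeper */
}
```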

Continue reading Getting world-space coordinates of screen fragments in glsl

Simple billboard orientation in world space

I had a great deal of difficulty working out how to do billboards properly, simply because I don’t have the mental agility to handle the descriptions usually given in 3D programming tutorials. The method I’ve managed to figure out is related to gluLookAt, and uses a similar approach to work out the rotation matrix needed to map one vector on to another.
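
For reference, and not necessarily as the exact code the post arrives at, the general vector-to-vector trick looks like this: take the cross product of the two unit vectors as a rotation axis and their dot product as the cosine of the angle, then expand the axis-angle (Rodrigues) formula into a matrix:

```c
#include <math.h>

/* Sketch: 3x3 rotation matrix that rotates unit vector a onto unit
 * vector b, via Rodrigues' formula R = I + [v]x + [v]x^2 / (1 + c),
 * where v = a x b and c = a . b. This degenerates when a and b are
 * anti-parallel (c near -1); that case needs special handling,
 * omitted here. Transpose if your convention multiplies row vectors. */
void rotationBetween(const float a[3], const float b[3], float m[3][3])
{
    float vx = a[1]*b[2] - a[2]*b[1];   /* v = a x b */
    float vy = a[2]*b[0] - a[0]*b[2];
    float vz = a[0]*b[1] - a[1]*b[0];
    float c  = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; /* cos(angle) */
    float k  = 1.0f / (1.0f + c);

    m[0][0] = c + k*vx*vx;  m[0][1] = k*vx*vy - vz; m[0][2] = k*vx*vz + vy;
    m[1][0] = k*vy*vx + vz; m[1][1] = c + k*vy*vy;  m[1][2] = k*vy*vz - vx;
    m[2][0] = k*vz*vx - vy; m[2][1] = k*vz*vy + vx; m[2][2] = c + k*vz*vz;
}
```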

Here’s how the process works:
Continue reading Simple billboard orientation in world space