Wednesday, December 12, 2012

Normals and the World Inverse Transpose Matrix

On the last assignment and in the exam review, there was a question about the world inverse transpose matrix.  I perused a bunch of online material on this issue, and thought this discussion had a good explanation:
The purpose of transforming a normal by the world matrix is to rotate it so that it's a direction in world space.  This is typically done using only the 3x3 portion of the world matrix, since this includes all of the rotation data and none of the translation (you don't want to translate a normal). 

The 3x3 portion of the matrix can also contain scaling data.  When it does, the normal is scaled by the scaling factor.  As long as the scale is uniform (which means that xScale == yScale == zScale), the scaling is okay provided you normalize the resulting normal vector before using it.  However, if the scale is non-uniform, this won't work.  The solution in the case of non-uniform scaling is to calculate the inverse transpose outside the shader, pass it in, and then transform normals (and also tangents/bitangents, if necessary) by that matrix.
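In case you're wondering why the inverse transpose is the right fix, here is the standard argument (a quick sketch, writing $M$ for the 3x3 portion of the world matrix).  A tangent vector $\mathbf{t}$ lying in the surface transforms as $\mathbf{t}' = M\mathbf{t}$, and we want a matrix $G$ such that the transformed normal $\mathbf{n}' = G\mathbf{n}$ stays perpendicular to $\mathbf{t}'$:

$$\mathbf{n}'^{\top}\mathbf{t}' = \big((M^{-1})^{\top}\mathbf{n}\big)^{\top}(M\mathbf{t}) = \mathbf{n}^{\top}M^{-1}M\,\mathbf{t} = \mathbf{n}^{\top}\mathbf{t} = 0.$$

So $G = (M^{-1})^{\top}$ works.  For a pure rotation $R$, $(R^{-1})^{\top} = R$, and for uniform scaling $M = sR$, $(M^{-1})^{\top} = \tfrac{1}{s}R$ (the same matrix up to length), which is why the plain world matrix is fine in those two cases as long as you re-normalize.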

So in summary: if you're using no scaling or uniform scaling in your world matrix, you can use the world matrix; if you're using non-uniform scaling, then you need to use the inverse transpose.
So, in part 4 of the last assignment, changing the scaling of the sphere to be non-uniform caused the normals to be incorrect until you used the inverse transpose world matrix instead of the world matrix.
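If you want to compute the matrix on the application side with D3DX, a minimal sketch looks something like this (the variable and handle names here are just placeholders, not taken from the assignment code):

    D3DXMATRIX worldInvTrans;
    D3DXMatrixInverse(&worldInvTrans, NULL, &world);      // invert the world matrix
    D3DXMatrixTranspose(&worldInvTrans, &worldInvTrans);  // then transpose the result
    pEffect->SetMatrix(hWorldInvTrans, &worldInvTrans);   // pass it to the effect for transforming normals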

Monday, December 10, 2012

Tutorial: Tuesday at 1pm

Our informal tutorial is going to be Tuesday at 1pm.  I will edit this post with a room as soon as I have one.  Be sure to submit questions ahead of time so I can prepare the best answers possible. :)

Update: We will meet at my lab, 5317 HP, at 1pm.

Monday, December 3, 2012

Exam Review Tutorial

If you are interested in attending a review session, please fill in the poll linked to below.  Only choose times you know you can attend. I need at least five people to make it worthwhile.

Tutorial Scheduling Poll

Also, please submit questions or topics you would like to be covered in the tutorial so I can prepare good answers for you ahead of time.  You can submit as many times as you like, but be specific, and only ask the most important things (otherwise I'll run out of time to help you with them!).

Tutorial Questions Form

Monday, November 26, 2012

Modifying the Shader Example for Assignment 5

Here are the steps I took to modify the crater shader example available on the course web page to make a wave effect.  This is part of what you had to do for Assignment 5.

  • Removed warnings. ;)
     
  • In the mesh_wave.fx file, added the following:
    uniform extern texture gWorldTex;
    uniform extern float3 direction;
    uniform extern int amplitude;
    uniform extern float wavelength;

     
  • Added the following sampler info structure for use with textures:
    sampler2D mySampler = sampler_state {
        Texture = (gWorldTex);
        MinFilter = Linear;
        MagFilter = Linear;
        AddressU = Wrap;
        AddressV = Wrap;
    };

      
  • Added or modified the following structs in the effect file:

    struct inputVS {
        float3 pos : POSITION0;
        float4 col : COLOR0;
        float2 tex0: TEXCOORD0;
    };


    struct OutputVS {
        float4 pos : POSITION0;
        float4 col : COLOR0;
        float2 tex0: TEXCOORD0;
    };

     
  • Added a new vertex shader in the effect file called directionalWaveVS:
     
    • used the following formula to determine the distance from the plane with the normal given by the direction passed in:
      d = direction.x * vin.pos.x + direction.z * vin.pos.z;
       
    • used the following formula to compute the new y position of the current vertex being processed, using the distance computed above and an omega value of 0.07:
      vin.pos.y += amplitude*cos(wavelength*d-omega*time);
       
    • transformed the current vertex using the world, view, and projection matrices
       
    • saved the input colour and texture value to be passed along to the pixel shader
       
  • Modified the pixel shader function to use the texture's colour instead of the vertex's colour. Used the following function with the input texture coordinate to accomplish this:
    tex2D(mySampler, v.tex0)
     
  • Added a wavesTechnique that uses the new vertex and pixel shaders in a single pass.
     
  • Added "waveTechnique" to the techniceName array (ignoring the misspelling...)
     
  • Added D3DXVECTOR2 tex0 to meshVertex in meshSurface.h
     
  • Added int amplitude, float waveDir and float waveLength to the meshSurface class, and initialized these values in the constructor (good defaults are amplitude=50, wavelength=0.01f, and waveDir=0.0f)
     
  • Added D3DXHANDLE hWaveTechnique, D3DXHANDLE hDirection, D3DXHANDLE hAmplitude, and D3DXHANDLE hWavelength to the meshSurface class
     
  • Added the following to the D3DVERTEXELEMENT9 decl in meshSurface.cpp:
    {0,0,D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0}
     
    • And after that, assuming it was placed third in decl:
      decl[2].Offset = (char *) &v.tex0 - (char *) &v;
       
  • Set up texture usage (mTexture is already declared in the class):
     
    • released mTexture in the class destructor
    • updated the texture coordinates in the vertex structure in createSurface:
      vtx[k].tex0.x = texU;
      vtx[k].tex0.y = texV;
    • obtained the texture parameter handle and created the texture in initEffect:
      hTexture = pMeshEffect->GetParameterByName(NULL, "gWorldTex");
      D3DXCreateTextureFromFile(md3dDev, "earth.bmp", &mTexture);
  • Associated the direction, amplitude, and wavelength variables with their counterparts in the effects file in initEffect:
    hDirection = pMeshEffect->GetParameterByName(NULL, "direction");
    hAmplitude = pMeshEffect->GetParameterByName(NULL, "amplitude");
    hWavelength = pMeshEffect->GetParameterByName(NULL, "wavelength");

     
  • Associated the new wave technique in the effects file in initEffect:
    hWaveTechnique = pMeshEffect->GetTechniqueByName(techniceName[0]);
     
  • Updated ShaderRender to compute the application-side values needed for the effect file:
     
    • computed the wave's direction (making sure to initialize the wave direction at some point - can default to zero, and if desired, can be controlled with keyboard input):
      float directionVec[3] = {sin(waveDir), 0.0f, cos(waveDir)};
       
    • copied the texture, direction, amplitude, and wavelength values into their counterparts in pMeshEffect (a rough sketch of this appears at the end of this post)
       
    • set the technique to the waves technique:
      pMeshEffect->SetTechnique(hWaveTechnique);

In the effects file, I also added a second wave to add onto the first one and played with the parameters to see what kinds of effects it gave me.  You can uncomment that line to see for yourself.
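For reference, here is a rough sketch of what the ShaderRender additions described above might look like, using the handle and member names from the list (treat it as an outline rather than the exact assignment code):

    // compute the wave direction from the application-side waveDir angle
    float directionVec[3] = {sin(waveDir), 0.0f, cos(waveDir)};

    // copy the application-side values into their effect-file counterparts
    pMeshEffect->SetTexture(hTexture, mTexture);
    pMeshEffect->SetValue(hDirection, directionVec, sizeof(directionVec));
    pMeshEffect->SetInt(hAmplitude, amplitude);
    pMeshEffect->SetFloat(hWavelength, waveLength);

    // render with the wave technique
    pMeshEffect->SetTechnique(hWaveTechnique);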

Thursday, November 22, 2012

Game Design Document Comments

After submitting your grades (as assigned by the prof) and adding my own comments on WebCT, I thought I'd share some general comments and advice here.

I realize that some of my suggestions might be out of date since you've likely made progress since that document, but I'll share just in case you haven't, and to give you something to think about for the next time.

First, something I suggested to most if not all of you is to sketch out your levels.  It was clear in the design documents that most of you did not have a concrete idea of how the levels would look yet, so this will be a crucial step.  You will end up with a much better design if you work on it on paper before diving into implementation, and it will give you something useful for the final report.

Second, I tried to suggest ways to come up with a minimally viable product that you could submit.  For instance, spending lots of time on artwork when the very basics of the game are not yet working is not a good idea.  You need to know what the must-haves are, and then prioritize the rest after that.  I'd be happy to discuss ideas related to this if you want to.

Finally, something I didn't write much about in the comments is to make good use of your resources - especially me and the prof! Here is how you can approach getting my help to get the most out of it:
  • Send me an email with lots of context.  What are you working on? What are you trying to accomplish? What have you tried so far? Are you stuck halfway through, just in need of a bit of help debugging, or not even sure where to start?
     
  • You may want to include code.  Sometimes pasting a snippet will be enough, but sometimes you'll have to send the whole project for context.  If you do, make sure to remove build files and minimize the zip file size, and even more importantly, tell me where to look in the code.
     
  • If you want to meet in person, there are some office hours left, but not many.  It helps to send an email to say you are coming so I can let you know if there are any other students around and when you should come.  After office hours are over, I may be able to help out on an appointment basis, but you will need to give lots of notice for that to happen.
I think that's about it - good luck everyone! I am looking forward to seeing your games! :)

Monday, November 12, 2012

An Applet For Diffuse, Ambient, and Specular Lighting

Since you either have covered this recently or will very soon, I will repost this information about lighting from last year's blog in case it helps you out.

I have another applet I wrote a while back that might be useful to you. It demonstrates the various kinds of lighting and can really help you understand what's happening behind the equations.  

In class you learned about diffuse, ambient, and specular lighting.  Diffuse lighting comes from a direct light source like the sun, while ambient lighting is just what's hanging around in the environment.  Highlights on a surface from the direct light source appear as specular highlights and depend on where your eye is positioned relative to the surface.

In my applet, you can adjust the intensities of the red, green, and blue light coming from the ambient and direct light sources.  You can also adjust the surface reflectance, which determines how much of a certain colour of light will be reflected back from the surface (and eventually reach your eye, making the material look a certain colour).  If the surface does not reflect any blue, then changing the blue intensity on the lights won't affect anything (the only exception being the specular highlight, since that reflection happens at the surface before the light would be absorbed/reflected by it anyway).

You can move the sun (direct light source) and the eye (viewpoint/camera centre), and you can tilt the surface to see how the angle between its normal and the other directions changes the outcome.

I recommend trying to isolate each type of light first to see how they look.  Turn the specular highlight off at first as well.  Then try to compare what you see with the equations involved in defining the light sources.
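If it helps to have the equations in front of you while you experiment, a standard form of the Phong lighting model (per colour channel) is

$$I = k_a i_a + k_d i_d \max(0,\ \mathbf{n}\cdot\mathbf{l}) + k_s i_s \max(0,\ \mathbf{r}\cdot\mathbf{v})^{\alpha},$$

where $k_a$, $k_d$, $k_s$ are the surface's ambient, diffuse, and specular reflectances, $i_a$, $i_d$, $i_s$ are the corresponding light intensities, $\mathbf{n}$ is the surface normal, $\mathbf{l}$ points toward the light, $\mathbf{r}$ is the reflection of $\mathbf{l}$ about $\mathbf{n}$, $\mathbf{v}$ points toward the eye, and $\alpha$ controls how tight the specular highlight is.  Your course notes may use slightly different symbols, but the structure should match what you see in the applet.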

As always, if you have any questions, comment here or email me!

Go to the applet now...

Wednesday, November 7, 2012

Assignment 4 Marking Scheme

Here is the marking scheme I used to grade Assignment 4, along with details of the desired solution for Task 1:

 

Task 1 (30%)

  • If you said something correct but fairly obvious and simple (such as 'rotating the camera around the vector'), you will get only 10%
  • If you did a little better than the minimal description, but wrote something that didn't require you to understand more deeply what the code was doing and why, you will get 15%
  • I am looking for a much more detailed description of all the steps involved; I want to know that you understood it beyond what each line of code does (for instance, I want to know you understand what the up and look-at vectors are actually for, and how/why we want to change them in the context of the overall game).  Also, explaining why the function is needed goes beyond code organization; I wanted to know why the work done inside the function is needed.
  • Here is a sample response (something closer to this would get you full marks):
    • This function adjusts the orientation (i.e. rotation) of the camera in space using the input of a vector representing the axis of rotation and the number of degrees to rotate around it. The camera’s defining members (look-at and view-up vectors) are updated to reflect their new orientation relative to the world. This allows us to simulate pitch, roll, and yaw by rotating around the appropriate axis vector.
    • A rotation matrix representing rotation around the input rotVector by angleRad (radians) is created with D3DXMatrixRotationAxis. D3DXVec3TransformCoord applies the rotation matrix to the two defining camera vectors: the view-up vector and the look-at vector. This is enough to say that the camera’s orientation has now been updated since these two vectors completely define that orientation.
    • The last set of cross products and normalizations is done to update the up vector. It probably seems odd that we would do this given that we just rotated it (hence it is “updated”), but it is done because of floating-point inaccuracies in the rotations. There’s a good chance the up and look-at vectors will lose their orthogonality over time, so this update ensures they are always orthogonal. (A rough C++ sketch of the whole function appears below.)
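To make that sample response concrete, here is a rough C++ sketch of what such a rotation function might look like (the function and member names mLookAt, mUp, and mRight are placeholders I chose for illustration, not necessarily what the assignment code uses):

    void Camera::rotateAroundVector(D3DXVECTOR3 *rotVector, float angleRad)
    {
        // build a rotation matrix about the given axis
        D3DXMATRIX rot;
        D3DXMatrixRotationAxis(&rot, rotVector, angleRad);

        // rotate the two vectors that define the camera's orientation
        D3DXVec3TransformCoord(&mLookAt, &mLookAt, &rot);
        D3DXVec3TransformCoord(&mUp, &mUp, &rot);

        // re-orthogonalize to correct accumulated floating point error
        D3DXVec3Normalize(&mLookAt, &mLookAt);
        D3DXVec3Cross(&mRight, &mUp, &mLookAt);
        D3DXVec3Normalize(&mRight, &mRight);
        D3DXVec3Cross(&mUp, &mLookAt, &mRight);
        D3DXVec3Normalize(&mUp, &mUp);
    }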

 

Task 2 (35%)

  • 5% for implementing each of the functionalities correctly

 

Task 3 (35%)

  • 5% each for correctly implementing pitch, yaw, changePositionDelta(D3DXVECTOR3 *dv), changeAbsPoition x 2, moveForward
  • an extra 5% for generally having no build errors (3%) or warnings (2%); although this wasn't specified in the assignment, I did ask for it on previous assignments and on the course blog, and this task didn't otherwise divide evenly into 35 marks

 

Bonus 1 (20%)

  • 3% for most items, 5% for the airplane roll angle