Monday, August 19, 2013

Let there be collision!

I had a lot of fun last week, and not just by getting drunk on Friday.

It took me a bit more time than I would have liked, but I managed to get my physics and collision library (Box2D) to draw its debug data to the screen.

To do so I hacked up a small class called Polygon. It has a couple of parameters to control whether the polygon is solid or not (i.e. filled or just an outline), its color and a few other things, plus functions to set up its shape. For now I implemented functions to make it a square, ellipse, circle (which is a special case of an ellipse), line or triangle, plus an extra function that accepts your own shape data (so you can make any shape you want).
This is my first class that is able to draw itself on the screen, and I used it to draw all of the shapes that Box2D is able to draw.
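For the curious, the public side of the class looks roughly like this (a sketch from memory; the exact names and the Vertex/Color types here are approximations, not the real code):

```cpp
#include <vector>

struct Vertex { float x, y; };
struct Color  { float r, g, b, a; };

class Polygon {
public:
    // Predefined shapes...
    void MakeSquare(float size);
    void MakeEllipse(float radiusX, float radiusY);
    void MakeCircle(float radius) { MakeEllipse(radius, radius); } // a circle is just a special ellipse
    void MakeLine(const Vertex& from, const Vertex& to);
    void MakeTriangle(const Vertex& a, const Vertex& b, const Vertex& c);
    // ...or pass in your own data to make any shape you want.
    void SetVertices(const std::vector<Vertex>& verts);

    void SetSolid(bool filled);    // filled, or just the outline
    void SetColor(const Color& c);

    void Draw();                   // the class draws itself to the screen
private:
    std::vector<Vertex> vertices;
    Color color;
    bool  solid;
};
```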

Box2D is (as its name implies) a 2D library for physics and collision. There are a bunch of other libraries out there that handle just that, but this one seemed easy enough to learn, and I didn't want to spend too much time researching and comparing different physics libraries.

Box2D works by first making an instance of a b2World object (it's, well, the world that Box2D bodies live in) and asking it to create bodies. When a body is created, you tell it to create a fixture, which is what gives the body a shape and physical properties like mass, velocity, etc. And if you want to kill a body, you have to give it back to the world, so it knows which of the many possible bodies to kill. More on this a bit later.

Then, on each game update, you tell the world to step the simulation, which solves movement and collisions.
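Put together, the whole thing is only a handful of calls. Here's a minimal sketch of the setup described above using the Box2D API (the numbers are placeholders, not what I actually use):

```cpp
#include <Box2D/Box2D.h>

int main()
{
    b2World world(b2Vec2(0.0f, -9.8f));          // gravity pulls bodies down

    // A static ground box.
    b2BodyDef groundDef;
    groundDef.position.Set(0.0f, -10.0f);
    b2Body* ground = world.CreateBody(&groundDef);
    b2PolygonShape groundShape;
    groundShape.SetAsBox(50.0f, 10.0f);          // half-width, half-height
    ground->CreateFixture(&groundShape, 0.0f);   // density 0 == static

    // A dynamic box that falls onto the ground.
    b2BodyDef boxDef;
    boxDef.type = b2_dynamicBody;
    boxDef.position.Set(0.0f, 4.0f);
    b2Body* box = world.CreateBody(&boxDef);
    b2PolygonShape boxShape;
    boxShape.SetAsBox(1.0f, 1.0f);
    b2FixtureDef fixture;
    fixture.shape    = &boxShape;
    fixture.density  = 1.0f;                     // gives the body its mass
    fixture.friction = 0.3f;
    box->CreateFixture(&fixture);

    // Each game update: the world solves movement and collisions.
    for (int i = 0; i < 60; ++i)
        world.Step(1.0f / 60.0f, 8, 3);          // time step, velocity/position iterations

    // To kill a body, give it back to the world.
    world.DestroyBody(box);
    return 0;
}
```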

The result is something like this:



This is super simple for now: it's just two boxes, one of them a static body, the other one dynamic (can you guess which is which?). What this opens up is the ability to at least get something more interesting done and have it drawn on the screen.

My next step is to finish off my Entity structure, so that when entities die, they are cleaned up properly. This is a small problem for me right now because of the way I set up my entities.

My Entity class is more or less just a container of States, where a state is any set of variables grouped together in a logical manner. For instance, there might be a state which holds the amount of gold, silver and copper a person has, or one that is a simple flag for whether the person can do a double jump. The problem lies in the way states are destroyed. The entity knows absolutely nothing about the states it contains, because they are (for the sake of simplicity) just unique IDs.
Since an entity has no idea what its states actually are, destroying them becomes a problem. I wanted to keep states simple, so I made them hold just the data they should have and removed any piece of logic from them.
So how do you delete a state which holds a Box2D body? If you don't return the body to the b2World, it just stays in there forever, because nobody knows it exists anymore (except the world, but it doesn't decide when to destroy bodies unless it's dying as well).
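To make the problem concrete, this is roughly what the setup looks like (hypothetical names, heavily simplified):

```cpp
#include <cstdint>
#include <vector>

class b2Body;   // from Box2D

using StateID = std::uint32_t;   // states are referenced only by a unique ID

struct WealthState  { int gold = 0, silver = 0, copper = 0; };  // pure data, no logic
struct JumpState    { bool canDoubleJump = false; };
struct PhysicsState { b2Body* body = nullptr; };                // this one owns a Box2D body

class Entity {
public:
    void AddState(StateID id) { states.push_back(id); }
    // The entity only ever sees IDs, so it can't know that one of its
    // states holds a b2Body that must be returned to the b2World.
private:
    std::vector<StateID> states;
};
```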

The first solution I'm going to try out will be callbacks on entity destruction, using lambdas.
A lambda is a concept from functional languages; it's basically an anonymous function which does some work, and its result can be passed to other lambdas, etc. By using a lambda for the callbacks, I can register a function to run when an entity is deleted, and it will auto-magically return the body to the b2World (if a body exists on the entity being destroyed).
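Something along these lines is what I have in mind (a sketch only; OnDestroy and the rest are hypothetical names, not existing code):

```cpp
#include <Box2D/Box2D.h>
#include <functional>
#include <vector>

class Entity {
public:
    // Register a piece of work to run when the entity dies.
    void OnDestroy(std::function<void()> callback) { destroyCallbacks.push_back(callback); }
    ~Entity() { for (auto& cb : destroyCallbacks) cb(); }   // fire every registered callback
private:
    std::vector<std::function<void()>> destroyCallbacks;
};

// When a state that owns a b2Body is attached, register the cleanup right away.
void AttachPhysicsState(Entity& entity, b2World& world, b2Body* body)
{
    entity.OnDestroy([&world, body]() {
        if (body)
            world.DestroyBody(body);   // hand the body back to the world
    });
}
```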

Monday, August 12, 2013

My cubicle...

Last week was a good week.

After some (well, a lot of) reading, I decided I'd move from the D3D9 API to the D3D11 API.
The decision is based on three things:
1. D3D11 APIs are cleaner and feel better designed.
2. D3D11 has ways to run on all hardware types (even those that don't support D3D11 features) by using feature levels.
3. The number of XP users that play games should be next to non-existent by the time I make a game that I'd want to sell.

So with that in mind, I decided to try my luck at D3D11 programming. I found a couple of websites that give good explanations of the code in their tutorials, and I have been following them to understand why something is used the way it is. So far so good.

In particular, the things i want to learn are:
- using vertex and index buffers ......... DONE
- using and writing shaders ......... half way there
- texture mapping ......... need to understand it a bit more
- alpha blending and transparency ......... TODO
- orthographic projection ......... know the principle, but have yet to use it

As you can see, some things are easier than others. =D

Using vertex and index buffers differs slightly from D3D9 in the way the buffers are created and in the name of the method that locks their GPU memory so you can copy your data into it, but the principles behind them are still valid. The other difference between D3D9 and D3D11 is that in D3D11, all types of buffers are created with the exact same method; the only thing you change is the parameters of the struct that is passed to it.
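For example, creating a vertex buffer and copying data into it looks roughly like this (a sketch; the Vertex struct is just an example, and the device/context pointers are the usual ID3D11Device and ID3D11DeviceContext):

```cpp
#include <d3d11.h>
#include <cstring>

struct Vertex { float x, y, z; };   // whatever vertex layout the rest of the code uses

ID3D11Buffer* CreateVertexBuffer(ID3D11Device* device, ID3D11DeviceContext* context,
                                 const Vertex* vertices, UINT vertexCount)
{
    D3D11_BUFFER_DESC desc = {};
    desc.Usage          = D3D11_USAGE_DYNAMIC;           // CPU-writable
    desc.ByteWidth      = sizeof(Vertex) * vertexCount;
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;      // D3D11_BIND_INDEX_BUFFER for an index buffer
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);       // the same call for every buffer type

    // D3D9's Lock/Unlock becomes Map/Unmap in D3D11.
    D3D11_MAPPED_SUBRESOURCE mapped;
    context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    std::memcpy(mapped.pData, vertices, sizeof(Vertex) * vertexCount);
    context->Unmap(buffer, 0);

    return buffer;
}
```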

One change that I did not expect in the transfer was the need to write my own shaders, which is not optional. There are ten stages in the D3D11 graphics pipeline, as can be seen on this page of the tutorial I'm reading (scroll down a small bit); half of them are programmable by the user, and the other half are configurable by setting certain flags or values. The part that surprised me was the fact that if you don't write two of those five programmable shaders (namely, the vertex and pixel shaders), you don't get any output on the screen. Coming from D3D9, where you'd just shove vertex data at the GPU and it would render it on its own, this came as a bit of a shock at first, but later I figured out how much more flexible this system actually is. You get to control how things get done, like outputting the entire scene in greyscale instead of color =D
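On the C++ side, getting those two mandatory shaders going is just a matter of creating them from compiled bytecode and binding them before drawing. Roughly like this (assuming vsBlob and psBlob already hold the compiled HLSL bytecode, and device/context are the usual D3D11 objects):

```cpp
ID3D11VertexShader* vertexShader = nullptr;
ID3D11PixelShader*  pixelShader  = nullptr;

// Create the shader objects from the compiled bytecode blobs.
device->CreateVertexShader(vsBlob->GetBufferPointer(), vsBlob->GetBufferSize(),
                           nullptr, &vertexShader);
device->CreatePixelShader(psBlob->GetBufferPointer(), psBlob->GetBufferSize(),
                          nullptr, &pixelShader);

// Bind them to the pipeline before drawing; without both, nothing shows up on screen.
context->VSSetShader(vertexShader, nullptr, 0);
context->PSSetShader(pixelShader,  nullptr, 0);
```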

Going forward, texture mapping seems a bit more complicated than I thought it would be. It requires much more setup than I expected, but (hopefully) most of it is related to just loading the texture. I still have some reading to do on how it actually works, and I need to try cutting parts of the code out to see what the bare essentials are to get it done. I did, however, manage to get it working by setting a texture on my long-lived test subjects, the spinning cubes.
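The part that already works boils down to binding the loaded texture and a sampler to the pixel shader. A rough sketch (assuming textureSRV is the ID3D11ShaderResourceView* that came out of loading the image file):

```cpp
D3D11_SAMPLER_DESC sampDesc = {};
sampDesc.Filter         = D3D11_FILTER_MIN_MAG_MIP_LINEAR;   // smooth filtering
sampDesc.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MaxLOD         = D3D11_FLOAT32_MAX;

ID3D11SamplerState* sampler = nullptr;
device->CreateSamplerState(&sampDesc, &sampler);

// The pixel shader samples the texture through these two slots.
context->PSSetShaderResources(0, 1, &textureSRV);
context->PSSetSamplers(0, 1, &sampler);
```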

Orthographic projection (i.e. viewing the scene in 2D instead of perspective 3D) is just a matter of calling the right function to set the projection matrix for the camera, so it shouldn't be much work (seeing as I already have the data I need to set up the perspective projection).
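With DirectXMath it really is a one-function swap; this is the kind of thing I mean (width, height and the clip planes are whatever the camera already uses):

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Perspective projection, the one the cubes currently use.
XMMATRIX perspective = XMMatrixPerspectiveFovLH(XM_PIDIV4, width / height, nearZ, farZ);

// Orthographic projection: same data, no perspective, everything looks flat/2D.
XMMATRIX ortho = XMMatrixOrthographicLH(width, height, nearZ, farZ);
```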

Alpha blending and transparency is my next step, so by next week I should have it well under my belt. :)

Finally, to show that I am making some progress with it, here's a video which should explain the title of the post. The center cube is enlarged by a factor of 1.5 on all sides and made to rotate on the Y and Z axes, while the outer cube is set to rotate around the Y axis in one direction, then translated away from the center (where the first cube is), then set to rotate in the other direction around the Y axis. Anyway, the video speaks better than words or pictures, so enjoy.



Monday, August 5, 2013

Attack of the DLLs, and The Eclipse wars

Last week was an interesting week, filled with despair and joy at differing intervals.

For whatever reason, I decided to change my current DLL renderer setup. I knew I wanted code and dependency isolation between my main project and the rendering code, but I also didn't see a reason to do it using inheritance, since I wasn't using polymorphism at all (and didn't need to).
My original rendering code was set up with an ABC (Abstract Base Class) pretending to be an interface class to the rest of my code, hiding all of the grisly details of how the rendering works behind it. But by doing so, I was using inheritance for something I had no need for. In my opinion, you should use inheritance only if you need the runtime polymorphism, and for my renderer I didn't, as I was planning to statically link the different renderers into my code by using the same class names. And while that is possible, it's practically impossible to do while keeping the code and its dependencies separated in a DLL. I mean, it can be done, but not without sacrificing time and a whole lot of hair.

So after a small discussion on gamedev.net, I decided that using an interface wasn't so bad if I get to keep all of my rendering dependencies tucked away nicely in a DLL file. It does mean, however, that my renderer is going to have to manage every class that in any way uses or references a concrete rendering API data type. That isn't that big of a deal, because it only requires that:
1. Each of those classes needs to have an ABC interface
2. Each of those classes can only be referenced by a pointer or reference
3. The memory and lifetime of those classes needs to be managed by the DLL
4. The creation of these class instances is done through the renderer interface
5. (optional) Getting the reference/pointer to those classes requires an ID or name
The first four points just mean that I will need two managers for these objects: one inside the DLL that handles their memory and lifetime, and one outside the DLL that controls the lifetime. And here I make a distinction between handling and controlling the lifetime, because while I can only delete an object from inside the DLL (thus ending its life), I get to control WHEN that happens from the outside.
The last point is a feature that I'd really like to have, because it allows me to reload and replace any resource (texture, model, etc.) without needing to restart the executable. Fun!
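In code, the skeleton of the whole arrangement looks something like this (a sketch only; ITexture, IRenderer and CreateRenderer are made-up names standing in for whatever I end up calling things):

```cpp
// Interfaces visible to the game; the concrete D3D11 classes live inside the DLL.
class ITexture {
public:
    virtual ~ITexture() {}
    // ...whatever a texture needs to expose, with no D3D11 types anywhere.
};

class IRenderer {
public:
    virtual ~IRenderer() {}
    // Creation goes through the interface, and the DLL owns the object's memory (points 1-4)...
    virtual ITexture* CreateTexture(const char* name) = 0;
    // ...while the game decides WHEN it dies.
    virtual void DestroyTexture(const char* name) = 0;
    // Point 5: look objects up by name, which is what makes hot-reloading resources possible.
    virtual ITexture* GetTexture(const char* name) = 0;
};

// The single entry point the DLL exports; everything else is reached through IRenderer.
extern "C" __declspec(dllexport) IRenderer* CreateRenderer();
```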