LineWars VR blog posts

Mar 25th, 2018 - Creating a Space Station

Modeling

Immediately after I finished the previous blog post, I began working on the new space station model for LineWars VR. If you remember the original LineWars II, you may recall that its space station was basically just a cube, with a "mail slot" for the fighter ships to fly through. That model only had 20 vertices (or points), so it was very cheap to process on the old DOS PC machines. However, for LineWars VR I wanted to have a better and more complex space station. I thought the space station design in the movie "2001: A Space Odyssey" looked good and made sense scientifically, and as it has been copied by many games and movies since, I thought I'd do something similar for LineWars VR as well.

Modeling the new space station took only a day or so, as I could just use a collection of primitive shapes (cylinders, toruses, capsules, and some cubes) in Cinema 4D. After I got these looking like a simple space station, I made the object editable and split it into a single quarter of the station, using two symmetry deformers to then generate the full station. That way I only needed to hand-model one quarter of the station. I also wanted to keep the object as low-poly as possible, as the recommended maximum number of vertices per scene for a Gear VR game is a hundred thousand. As I will have many other objects in addition to this space station in a scene, the station should only use a fraction of that vertex amount. On the other hand, there is never more than one space station in a scene, so it can have more polygons and vertices than the other game objects.

My resulting space station model has 2465 vertices in Cinema 4D. Since sharp edges and discontinuous texture UV coordinates both generate extra vertices, the vertex count went up to 6310 when the object was imported into Unity. That is pretty high, but still acceptable if I can keep the fighter ships very low-poly.

Texturing

After I got the modeling done, I began texturing the station. The outer rim should obviously have windows, as those are the living quarters. Since I did not have enough polygons to model the windows, I knew I needed this object to use normal mapping for all the small details. In addition to normal mapping, I knew I also needed luminance, as many of the windows should have light in them even when that side of the station is in shadow. Also, the windows should have specular reflections (same as the solar panels), so that when the sunlight hits them at the correct angle, they look bright even when there is no light coming from behind the window.

I added all those four 2048x2048 textures (diffuse color, normal mapping, luminance and specular strength) into Cinema 4D, with the plan of using just a 1024x1024 corner area of the texture maps for my station. I plan to have these same textures as a texture atlas for all ships in my game, as there are only four types of ships: Cobra, Pirate, Cruiser and the space station. There will also be alien ships, so if I can fit those into the same texture atlas that would be good, but if needed I can use a different texture for those.

I wanted to be able to shoot at the various parts of the station and have them take damage, so I tried to re-use the same texture panels everywhere I could, to leave room for various damaged panels in the texture atlas. This also meant trying to keep all the panels rectangular, and also not using continuous UV coordinates, so that I can then just change the UV coordinates of a single panel when it gets hit. The solar panels and fuel/water tanks would probably take damage differently. The tanks could simply explode, leaving nothing much behind, and the solar panels could just get torn away when they get hit.

I also planned to have the "mail slot" side of the station always facing the sun, so that the tanks stay permanently in shadow. This meant that I had to have some other way to make the fuel tanks visible, so I decided to add some spot lights shining on them. I modeled these lights in Cinema 4D, baked the lighting into a texture, and then copied the relevant parts of that texture into my texture atlas. I had to make some adjustments to the generated texture coordinates to make the texture fit nicely within my texture atlas. I did similar work for the landing pads that are inside the space station.

Finally, as I did not want to load all four different texture maps in the shader, I tried to figure out a way to pack the textures into fewer actual texture maps. With the asteroid I had used the RGB texture planes as the normal directions, and the alpha channel as the grayscale color. This would not work all that well with my station, as I needed to have full RGB color available.

It then occurred to me that perhaps I could get by with just two texture maps. The luminance and specularity were practically on/off toggles, or at most grayscale values, so together with the full RGB color and the XYZ normal that made eight different channels, which fit nicely into two RGBA textures. With ETC2 texture compression the RGB colors of a pixel are compressed into 4 bits and the Alpha channel into another 4 bits, which means that the alpha channel has far fewer compression artifacts than the RGB channels. Thus, I decided to use the alpha channels of both textures for the normal vector (as compression artifacts are most noticeable in the normal map). My resulting texture packing uses the first texture as the RGB diffuse color plus the X coordinate of the normal vector, and the second texture as the specular strength in the Red channel, the normal vector Z coordinate in the Green channel, the luminance toggle in the Blue channel, and the normal vector Y coordinate in the Alpha channel.
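
To make the packing concrete, here is a rough C# sketch of how the four source maps could be combined into the two RGBA textures at build time. This is not my actual tool, just an illustration of the channel layout described above; it assumes all four source maps are uncompressed Texture2D objects of the same size:

    using UnityEngine;

    public static class StationTexturePacker
    {
        // Channel layout (matching the station shader):
        //   texture 1: RGB = diffuse color, A = tangent space normal X
        //   texture 2: R = specular strength, G = normal Z, B = luminance toggle, A = normal Y
        public static void Pack(Texture2D diffuse, Texture2D normalMap, Texture2D luminance,
                                Texture2D specular, Texture2D colorTex, Texture2D normalTex)
        {
            for (int y = 0; y < diffuse.height; y++)
            {
                for (int x = 0; x < diffuse.width; x++)
                {
                    Color d = diffuse.GetPixel(x, y);
                    Color n = normalMap.GetPixel(x, y);      // normal in 0..1 range (R = X, G = Y, B = Z)
                    float lum = luminance.GetPixel(x, y).r;  // on/off toggle
                    float spec = specular.GetPixel(x, y).r;  // grayscale strength
                    colorTex.SetPixel(x, y, new Color(d.r, d.g, d.b, n.r));
                    normalTex.SetPixel(x, y, new Color(spec, n.b, lum, n.g));
                }
            }
            colorTex.Apply();
            normalTex.Apply();
        }
    }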

Shadows

The space station would look pretty bad if it didn't have shadows on the solar panels when the sun is shining from the front of the station. My plan is to avoid using proper shadow maps in my game, as those would require rendering the scene separately from the viewpoint of the sun, and then using this shadow map to determine which pixels are in shadow, all of which would have to be done every frame. I don't think the mobile devices running Gear VR have the performance to handle this with sufficient quality (meaning large enough shadow maps). So, what are the alternatives?

One thing I could have done would have been to point the station directly towards the sun, and then just bake the shadow information into the texture maps. However, as I wanted to have the solar panels stay stationary while the rest of the station rotates, this would not work. Next, I tried using a static shadow map texture, which would rotate as the main part of the station rotates. Since I use the Dynamic Soft Shadows Based on Local Cubemap method for the cockpit shadows, and that basically just calculates the correct shadow map position from the fragment position in 3D, I thought I could use something similar with just a simple texture, since I know the sun always shines from the same direction. I got this working fine, but the problem was the uneven shadow edge around the circular main body of the station. Straight lines looked pretty good, but the circular section had very obvious jagged edges.

I then got the idea of using code instead of a texture map to calculate whether a pixel is in shadow. Since my station only consists of simple shapes from the shadow perspective (a ring, a central circle, and four poles), I thought that the required formula should not be overly complex. And I was right: just a couple of if clauses were enough to check the shadow areas. This resulted in a very clean and sharp shadow edge, which was just what I was after.
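
The actual test lives in my station shader (the CheckShadow routine mentioned later in this post), but the basic idea is easy to sketch on the CPU side as well. Below is a simplified C# version, assuming the shadow casters all lie on a single plane perpendicular to the station axis; the radii and pole extents are made-up placeholder values, not the ones in my shader:

    using UnityEngine;

    public static class StationShadowSketch
    {
        // Returns true when the given object space point is shadowed by the station,
        // assuming the casters (ring, hub and poles) lie on the plane z = casterZ.
        public static bool InShadow(Vector3 pos, Vector3 lightDir, float casterZ)
        {
            if (Mathf.Abs(lightDir.z) < 1e-5f)
                return false;                   // light travels parallel to the caster plane
            float t = (casterZ - pos.z) / lightDir.z;
            if (t < 0f)
                return false;                   // the caster plane is behind this point
            // Project the point along the light direction onto the caster plane.
            Vector2 p = new Vector2(pos.x + t * lightDir.x, pos.y + t * lightDir.y);
            float r = p.magnitude;
            if (r < 20f)
                return true;                    // central hub disc
            if (r > 90f && r < 110f)
                return true;                    // outer ring
            if (Mathf.Abs(p.x) < 3f && r < 110f)
                return true;                    // vertical poles
            if (Mathf.Abs(p.y) < 3f && r < 110f)
                return true;                    // horizontal poles
            return false;
        }
    }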

The Shader

I created a new custom shader to handle all the aforementioned ideas. I used the asteroid shader as the basis, as it already handled the normal mapping. I had, however, found a slightly more efficient method of handling the tangent space lighting calculations for the normal mapping since my original asteroid blog post. Instead of converting the tangent space normal into world space in the fragment shader, it is more efficient to convert the light vector into tangent space in the vertex shader. Unity provides a TANGENT_SPACE_ROTATION macro for this purpose, so the vertex shader calculations can be done simply with the following code, with no need to calculate the binormal vector:

	TANGENT_SPACE_ROTATION;
	o.lightDirection = mul(rotation, mul(unity_WorldToObject, _WorldSpaceLightPos0).xyz);
Then in the fragment shader, this can be handled simply by taking the dot product of the normal vector (taken from the texture) and this light vector:
	fixed4 tex = tex2D(_MainTex, i.uv);
	fixed3 tangentSpaceNormal = tex.rgb * 2 - 1; // Convert the normal vector values from 0..1 to -1..1 range
	fixed4 col = tex.a * DotClamped(normalize(i.lightDirection), tangentSpaceNormal) * _LightColor0;

The space station vertex shader has four different sections to handle the special requirements of the station model and textures:

  1. The non-rotating solar panels are handled by using the unity_WorldToObject matrix to get those vertices into object space, while the rotating vertices already have their coordinates in object space. The same handling also needs to be applied to the normals and tangents of those vertices, which adds so many GPU cycles that I am thinking of eventually abandoning this idea of using a single mesh for the whole station.
  2. Next, the blinking polygons (or more accurately their vertices) are handled by checking the vertex color Green value (which I use in the C# script to mark the blinking polygons; a sketch of that marking follows this list). If it is set and the _SinTime.w variable is > 0.99, I move the vertex UV coordinates to a blinking area of the texture map. This generates a short flash roughly once every two seconds.
  3. The next step is to prepare the shadow calculation values. The shadow calculation in the fragment shader needs to know which areas of the space station cast a shadow on the polygon; for example, polygons in front of the ring poles are not shadowed by the ring poles. Here again I use the vertex colors (this time the Red channel) to select the correct shadow area. This step also prepares the object space vertex coordinate and the object space light direction (which is not the same as the tangent space light direction) for the fragment shader.

    Since the tangent space surface normal can point towards the sun even when the polygon itself is in shadow, this can create unrealistically lit pixels on otherwise shadowed polygons. To avoid this, I also calculate a shadow multiplier at this stage, like this:

    	saturate(50 * dot(_WorldSpaceLightPos0.xyz, worldNormal))
    

  4. Finally, I calculate the specular color, based on the world coordinates of the camera, vertex and the light. For better quality specular reflection (especially for curved surfaces) this should be calculated per pixel in the fragment shader, but since my specular surfaces are flat, I thought I could use this optimization.
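
Here is the kind of C# marking that step 2 above relies on. The way I actually pick the blinking triangles is specific to my station mesh, so this sketch just takes a hypothetical list of triangle indices and sets the Green vertex color for their vertices:

    using System.Collections.Generic;
    using UnityEngine;

    public class BlinkMarker : MonoBehaviour
    {
        // Hypothetical list of triangle indices that should blink.
        public List<int> blinkingTriangles = new List<int>();

        void Start()
        {
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            int[] tris = mesh.triangles;
            Color[] colors = new Color[mesh.vertexCount];   // defaults to (0, 0, 0, 0)
            foreach (int t in blinkingTriangles)
            {
                // Set the Green channel for all three vertices of the triangle,
                // so the vertex shader knows to switch their UVs to the "blink" area.
                colors[tris[t * 3]].g = 1f;
                colors[tris[t * 3 + 1]].g = 1f;
                colors[tris[t * 3 + 2]].g = 1f;
            }
            mesh.colors = colors;
        }
    }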

Then in the fragment shader I first read the two texture maps, and get the tangent space surface normal for this fragment (pixel). This looks rather similar to the asteroid fragment shader above, except I have two textures that get combined:

	fixed4 col = tex2D(_ColorTex, i.uv);
	fixed4 texn = tex2D(_NormalTex, i.uv);
	fixed3 tangentSpaceNormal = fixed3(col.a, texn.a, texn.g) * 2 - 1;

The shadow is then calculated by projecting the fragment position onto the plane that generates the shadow (which the vertex shader has given us), and then checking whether the projected point falls inside the radius of a circular section, or whether its X and Y coordinates fall within one of the rectangular sections (the poles, for example). These coordinate areas are currently hard-coded into the shader, but as I would like to use the same shader also for the other ships, I may need to figure out a better system for this. In the fragment shader I call my subroutine CheckShadow to handle the shadow calculation, which returns a value between 0 (in shadow) and 1 (not in shadow), with the not-in-shadow value taken from the shadow multiplier calculated in the vertex shader.

	// Handle shadow
	fixed sh = CheckShadow(i.objectPosition, i.objectLightDir, i.shadowData);

Then it is just a matter of checking for luminance (which is not affected by the shadow) and specularity (which is affected by the shadow) to get the final color of the pixel. The luminance uses the second texture Blue channel, and the specularity the second texture Red channel multiplied by the specularity value pre-calculated in the vertex shader.

	// Handle luminance
	if (texn.b > 0.5)
	    return i.specular * texn.r * sh + col;
	// Handle specular
	col = sh * (i.specular * texn.r + 
	// Handle bumpiness
	col * DotClamped(normalize(i.lightDirection), tangentSpaceNormal) * _LightColor0);
	return col;

The resulting fragment shader takes only 8.5 GPU cycles worst case, and only 2.5 GPU cycles best case, according to the Mali Offline Compiler. These are pretty good values in my opinion, considering all the stuff the shader needs to handle. The vertex shader, however, takes 30 GPU cycles, most of which is caused by the rotating/non-rotating part handling, which I could get rid of completely if I had the station in two parts. However, if I split it into two parts, I would have to come up with some different way of handling the rotated shadows on the stationary solar panels, and as even the solar panel part of the station has more than 300 vertices, it could not be batched into the same draw call as the rest of the station. I would get rid of one problem but generate two new ones, so I am not yet sure if that change would be worth it. This is the Mali Offline Compiler result for the fragment shader:

  3 work registers used, 1 uniform registers used, spilling not used.

                          A       L/S     T       Bound
  Instructions Emitted:   24      5       2       A
  Shortest Path Cycles:   2.5     2       2       A
  Longest Path Cycles:    8.5     5       2       A

The Result

Here below is a small video of me running the game in the Unity editor (using my Oculus Rift), and recording the editor window. It shows me flying around the space station, so you can see the shadows, luminance and specular handling in action. The specular reflections are visible on the solar panels and on the various windows, while the luminance shows on the windows of the shadow side of the station, and also on the fuel/water tanks.

The next step is to start working on the damaged textures, to handle the effects of ships shooting at the station. This will probably keep me busy for the next couple of weeks, and after that I can hopefully move on to creating the cruiser. I keep learning new tricks every step of the way, so after I have done the cruiser and the fighter ships, I will probably have learned a lot of new tricks I can use to improve my space ship cockpit textures and shader. As always, thank you for your interest in my project!

Mar 8th, 2018 - Splitting Asteroids

Game code from LineWars II to LineWars VR

Most of my time during the last month and a half has been spent working on code that handles laser rays hitting an asteroid, but before I started working on that, I ported some of the game logic code from my old LineWars II over to LineWars VR. I started with the Demo scene, where a group of Cobra fighters attack a pirate starbase defended by some Pirate fighters. It took about a week to get the code working. The main issue was changing the original Euler angles for the ship orientations over to the Quaternions that Unity uses, which took quite a bit of trial and error as I tried to understand how quaternions actually work.
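
As a trivial example, where the old code stored a ship's attitude as heading, pitch and bank angles, the equivalent Unity rotation can be built with something like the line below; figuring out the correct axis order and signs for my old conventions was where the trial and error went:

    // Heading/pitch/bank angles (degrees) converted into a Unity rotation (Y = yaw, X = pitch, Z = roll).
    transform.rotation = Quaternion.Euler(pitch, heading, bank);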

After I got the game code mostly working, I also added a HUD display, which shows the type, distance and speed of a ship that is directly in front of the player's ship. This HUD information is displayed around the crosshairs, just like it was in the original LineWars II.

Asteroid hit work

Then I began working on the main feature of this blog post, the laser hitting an asteroid. I had an idea of doing this in three phases:

  1. Determine where (on which polygon) the laser hits the asteroid, and play a hit animation at that position.
  2. If the asteroid is sufficiently large, generate a crater around this hit position.
  3. Explode the asteroid into fragments, when a big asteroid has been hit several times, or straight after the first hit if the asteroid is very small.

Determining the hit position

Unity does have a physics system that could handle most of this stuff pretty much automatically, but I decided not to use that, as I am not sure about the performance of the system on mobile devices, and the system is pretty much a black box. I like to know exactly what the game code is doing, so I decided to port the collision code from my original LineWars II over, and then start enhancing that with more features.

The first step was to determine whether the laser ray actually hits the asteroid, and if so, where. For a rough collision test I use a simple bounding sphere (as my asteroids are somewhat round in shape). If it looks like the laser ray is close enough to the asteroid center point, I then use a more exact collision detection. I found a good algorithm for a ray-triangle intersection test on the Unity Answers pages. I could use this algorithm pretty much as-is; I just added a test that rejects triangles facing the same way as the laser (as those are on the back side of the asteroid, which the laser cannot hit). This test removes about half of the triangles and thus saves some CPU time. I used the System.Diagnostics.Stopwatch to check the number of ticks these tests take (when run in the editor): the full intersection test for all 528 triangles of the asteroid takes between 1218 and 1291 ticks, while the test that leaves out the triangles facing the wrong way takes between 765 and 915 ticks.
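
This is not the exact code I ported, but the kind of ray-triangle test described above looks roughly like the following Möller-Trumbore style C# sketch, with the extra check that rejects triangles facing away from the laser:

    using UnityEngine;

    public static class LaserHitTest
    {
        // Ray/triangle intersection with back-facing triangles rejected.
        // A sketch of the same kind of test I use, not my exact code.
        public static bool RayTriangle(Vector3 origin, Vector3 dir, Vector3 v0, Vector3 v1, Vector3 v2,
                                       out Vector3 hitPoint)
        {
            hitPoint = Vector3.zero;
            Vector3 e1 = v1 - v0;
            Vector3 e2 = v2 - v0;
            Vector3 normal = Vector3.Cross(e1, e2);     // direction depends on the winding order
            if (Vector3.Dot(normal, dir) >= 0f)
                return false;                           // triangle faces away from the laser, skip it
            Vector3 p = Vector3.Cross(dir, e2);
            float det = Vector3.Dot(e1, p);
            if (Mathf.Abs(det) < 1e-6f)
                return false;                           // ray is parallel to the triangle plane
            float invDet = 1f / det;
            Vector3 d0 = origin - v0;
            float u = Vector3.Dot(d0, p) * invDet;
            if (u < 0f || u > 1f)
                return false;
            Vector3 q = Vector3.Cross(d0, e1);
            float v = Vector3.Dot(dir, q) * invDet;
            if (v < 0f || u + v > 1f)
                return false;
            float t = Vector3.Dot(e2, q) * invDet;
            if (t < 0f)
                return false;                           // intersection is behind the laser origin
            hitPoint = origin + t * dir;
            return true;
        }
    }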

Using this ray-triangle intersection test I was able to determine which triangle of the asteroid got hit, and I can even get the exact hit position in world coordinates quite easily. I then used a scene from Star Wars Episode 2 to check the timing of the hit flash and the speed of the explosion fragments, and tried to generate something similar in Cinema 4D using the Explosion FX deformer on my asteroid mesh, together with some flash footage. Below is an animated gif of the hit animation I came up with. This will be played on a quad facing the camera whenever a laser ray hits the asteroid. (Note that the speed of this animated gif is not necessarily the same as what the speed of the animation is inside the game. The animation should last one second, but your browser may run it faster or slower.)

I even added code to my shader rendering the animation, so that the color of the fragments varies depending on how much sunlight falls on the surface of the asteroid that got hit. So, if the laser ray hits a shadow side of the asteroid, you see a flash, but the ejected fragments are almost black. However, hitting a brightly lit side of the asteroid shows bright fragments ejecting from the hit position.
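
In my case this brightness adjustment happens inside the animation shader, but the gist is just a clamped dot product between the normal of the hit triangle and the direction towards the sun. Fed in from C#, with a made-up _HitBrightness property name and assuming the hit normal and sun direction are available from the intersection test and scene setup, it would look like this:

    // Scale the flash fragment colors by how much sunlight the hit surface receives.
    float brightness = Mathf.Clamp01(Vector3.Dot(hitTriangleNormal, directionTowardsSun));
    flashMaterial.SetFloat("_HitBrightness", brightness);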

Creating craters

Next, I started working on the code that would dynamically generate craters into the asteroid mesh, around this hit position. I decided to aim for a crater with a radius of 5 meters (or Unity units), which meant that I had to have a way of finding the vertices, triangles and edges that fall within this radius from the hit position.

Since I only knew the one triangle that got hit, and Unity meshes do not have a way of easily finding adjacent triangles, I added a list called V2t (for Vertex-To-Triangles) into my asteroid GameObjects, which I fill when creating the asteroids. It contains, for each vertex in the mesh, a list of the triangles that vertex is a part of. This way I could easily find the adjacent triangles of my hit triangle. However, I soon realized that this was not enough, as my asteroids consist of several texture UV sections, which means that Unity has duplicated some of the vertices. Thus, I needed to add still another list, keeping track of all the duplicates of each vertex, to be able to locate an adjacent triangle even if it does not share vertex indices with the current triangle. These two additional lists began to make things rather more complex than I would have liked.
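
A simplified sketch of how these two lookup lists can be built when an asteroid is created (the real code lives in my asteroid creation script, but the idea is the same):

    using System.Collections.Generic;
    using UnityEngine;

    public class AsteroidAdjacency
    {
        public List<int>[] V2t;         // for each vertex index, the triangles it belongs to
        public List<int>[] Duplicates;  // for each vertex index, other vertices at the same position

        public void Build(Mesh mesh)
        {
            Vector3[] verts = mesh.vertices;
            int[] tris = mesh.triangles;

            // Vertex-to-triangles list.
            V2t = new List<int>[verts.Length];
            for (int i = 0; i < verts.Length; i++)
                V2t[i] = new List<int>();
            for (int t = 0; t < tris.Length; t += 3)
            {
                V2t[tris[t]].Add(t / 3);
                V2t[tris[t + 1]].Add(t / 3);
                V2t[tris[t + 2]].Add(t / 3);
            }

            // Duplicate list: vertices Unity has split because of UV seams share a position.
            // A naive O(n^2) scan, which is fine for a mesh of a few hundred vertices.
            Duplicates = new List<int>[verts.Length];
            for (int i = 0; i < verts.Length; i++)
                Duplicates[i] = new List<int>();
            for (int i = 0; i < verts.Length; i++)
                for (int j = i + 1; j < verts.Length; j++)
                    if (verts[i] == verts[j])   // Unity's Vector3 == is an approximate comparison
                    {
                        Duplicates[i].Add(j);
                        Duplicates[j].Add(i);
                    }
        }
    }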

Now that I could find the adjacent triangles, the next step was to find the edges of the triangles that get intersected by the crater rim, so that I could then split the triangles along the crater rim. I obviously wanted to have separate triangles for the inside and the outside of the crater rim. For this intersection test I found a good ray-sphere intersection test algorithm, which I could modify to test for intersections along the edges. My algorithm basically consists of checking whether each corner vertex (p1, p2 and p3) of a triangle is inside or outside of the crater (with its midpoint at p0), like this:

    // Check how many vertices are inside the crater.
    int tst = ((p1 - p0).sqrMagnitude < mhd.craterSqrRadius ? 1 : 0) +
              ((p2 - p0).sqrMagnitude < mhd.craterSqrRadius ? 2 : 0) +
              ((p3 - p0).sqrMagnitude < mhd.craterSqrRadius ? 4 : 0);

This gave me a number between 0 (no vertices are inside the crater) and 7 (all vertices are inside the crater), with the bits of the tst value telling which of the three vertices are inside the crater, and thus which edges must cross the crater rim. This I could then use in a switch statement to handle each of the separate cases. Here below is an image from my quad grid notebook where I had doodled some examples of these different intersections, in an attempt to figure out how to handle them, and to help me keep track of which vertex is which when implementing the code.

As you can see from the above image, even if no vertices of the triangle are inside the crater, it is still possible that the crater rim intersects one or more of the triangle edges. Thus, I added the code below, using the Ray-Sphere intersection test, to calculate another variable tst2, which keeps track of how many intersections there are on each of the triangle edges.

    // Check for edge intersections.
    // When tst == 0, usual tst2 values are 9 (100 100), 18 (010 010), 27 (110 110), 36 (001 001), 45 (101 101), 54 (011 011) and 63 (111 111).
    t12a = RaySphereIntersect(p1, (p2 - p1), p0, mhd.craterSqrRadius, out t12b);
    t13a = RaySphereIntersect(p1, (p3 - p1), p0, mhd.craterSqrRadius, out t13b);
    t23a = RaySphereIntersect(p2, (p3 - p2), p0, mhd.craterSqrRadius, out t23b);
    int tst2 = (t12a > 0.0f && t12a < 1.0f ? 1 : 0) +
               (t13a > 0.0f && t13a < 1.0f ? 2 : 0) +
               (t23a > 0.0f && t23a < 1.0f ? 4 : 0) +
               (t12b > t12a && t12b < 1.0f ? 8 : 0) +
               (t13b > t13a && t13b < 1.0f ? 16 : 0) +
               (t23b > t23a && t23b < 1.0f ? 32 : 0);
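
The RaySphereIntersect routine above returns the parametric positions (0..1 along the edge) where the edge enters and exits the crater sphere. This is not my exact routine, but a compatible version could look like this:

    // Intersect the ray p = start + t * dir with a sphere at center (squared radius sqrRadius).
    // Returns the smaller parametric value and outputs the larger one, or -1 for both
    // when there is no intersection. A sketch matching the call sites above, not my exact code.
    static float RaySphereIntersect(Vector3 start, Vector3 dir, Vector3 center, float sqrRadius, out float tb)
    {
        tb = -1f;
        Vector3 m = start - center;
        float a = Vector3.Dot(dir, dir);
        float b = 2f * Vector3.Dot(m, dir);
        float c = Vector3.Dot(m, m) - sqrRadius;
        float disc = b * b - 4f * a * c;
        if (a < 1e-12f || disc < 0f)
            return -1f;                     // degenerate edge, or the edge misses the crater sphere
        float sqrtDisc = Mathf.Sqrt(disc);
        tb = (-b + sqrtDisc) / (2f * a);
        return (-b - sqrtDisc) / (2f * a);
    }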

So, now things began to get quite complex, as I had to handle all these different cases, and not just for the triangle that got hit, but for all the adjacent triangles, as long as there are triangles with edge intersections around the original triangle. I spent a couple of weeks working on this code, and got it to work reasonably well on the original asteroid mesh, but when trying to generate a new crater that overlaps an existing crater, I ran into such severe problems (stack overflows and other hard-to-trace occasional bugs in my code) that I eventually decided to abandon this code for now. That was pretty frustrating, as I would really have liked to have craters appear on the asteroids when shooting them.

Exploding the asteroid

Instead of fighting with the crater creation for weeks and weeks, I decided to start working on code that would eventually split and explode the asteroid. I wanted to have a sort of crumbling effect, so that the asteroid does not simply blast into small polygons, but instead crumbles in a way that is convincing for a large asteroid. This meant that I had to spread the changes over several frames instead of doing everything at once. I decided to do this in three parts as well:

  1. Since my asteroid has six separate texture UV sections, I decided to split the asteroid initially into six fragments along the UV sections, as those section rims already had duplicated vertices.
  2. During the next step, I build proper asteroid fragments from these six sections. This basically means connecting all the rim vertices to a new fragment-specific vertex at the center of the asteroid.
  3. For every frame after that, I move the six sections away from each other, and start splitting triangles away from the rims.

The first part was pretty easy, as I could just check each vertex and determine the section it belongs to using its UV coordinates. I created six separate lists for the vertices of each section, and since the sections were aligned along the local axes of the asteroid, it was easy to determine the direction in which each section should move.

During the second frame after the explosion has started, I then generate the new center vertex, and generate new triangles to join all the rim vertices to this new center vertex, for all the six parts. For determining the rim vertices I could use my vertex duplicate lists, since if a vertex has a duplicate, it must be a rim vertex. My algorithm first looks for any duplicated vertex, and then starts traversing the rim (that is, looking for an adjacent duplicated vertex) until we get back to the original vertex. Here I had to handle one special case: in a corner triangle all three vertices are on the rim, so I had to make sure I follow the correct edge (and do not accidentally cut the corner). I then add new vertex duplicates for each of these rim vertices (to get a sharp angle with different normal directions), and create the new triangles. The normal and tangent directions of the center vertex were a bit problematic, until I decided to just point the normal away from the sun, which has the effect of making the center of the asteroid look black from all directions, which in my opinion looks fine.

During all the following frames (until I determine the explosion has run sufficiently long) I randomly select a rim triangle of the section, and remove it from the main fragment body, generate new vertices for it, and start moving it away from the main fragment body. I also make all these separated small fragments smaller every frame, so that they eventually vanish. All this work is done using the single mesh, so even though it looks like many separate parts, it actually still is just a single GameObject in Unity.

Since the asteroid originally has 528 triangles, and eventually all of these triangles may get separated into a four-triangle fragment, the triangle count can increase up to 528*4 = 2112. Similarly, the original vertex count of 342 can get up to 5280 vertices (as every original triangle becomes a fragment with 10 vertices). Both of these numbers are still within sensible limits though, especially considering that only a few asteroids should be both visible and in the explosion phase at any given time in the game.

Here below is a YouTube video illustration of my asteroid explosion routine in action:

Jan 26th, 2018 - Cobra cockpit work

Cockpit model from my Snow Fall project

For the past month or so I have been mainly working on creating the Cobra cockpit mesh and texturing it. I started with the main components of my Snow Fall project ship cockpit (which in turn is loosely based on the real Space Shuttle glass cockpit). I think this sort of a retro ship cockpit, without any fancy holographic instruments, suits the feel of my game the best. The first problem I had was with the correct scale of the cockpit. After many tests and trials I ended up with an instrument panel that is about 3 metres wide (as the ship is a two-seater) in Cinema 4D, but as that felt a bit too big in Gear VR, I scaled it by 0.9 when importing the mesh into Unity. That size seems to be at least close to correct.

I redid almost all parts of the model, trying to get by with as few vertices as possible. I also decided to use flat shading for the cockpit, based on the excellent Flat and Wireframe Shading tutorial from Catlike Coding (Jasper Flick). That way I don't get duplicated vertices for sharp edges in the mesh when Unity imports it, rather I can disable normals in the mesh completely, and then calculate them as needed in the fragment shader.

Dynamic Soft Shadows Based on Local Cubemap

I had found this interesting blog post on the Arm Mali community about Dynamic Soft Shadows Based on Local Cubemap. This is a trick for getting proper dynamic shadows that emulate light shining into a room through some windows, using a baked cube map instead of any real-time shadow calculations. I thought it might fit my use case pretty well, as I wanted to have the sunlight coming through the cockpit windows hit the instruments and walls of my cockpit. The problem I have is that my cockpit is not exactly rectangular, and the original algorithm expects a rectangular room, for which it calculates the correct shadow position using a bounding box of the room size. I do have some ideas about how to solve this issue, but I haven't yet had time to fully implement them. The basic system is already working, though, and it looks pretty neat in my opinion!
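
For reference, the core of the technique (as I understand it from the blog post) is a "local correction": instead of sampling the shadow cubemap with the raw direction towards the light, the fragment position is first intersected with the room's bounding box along the light direction, and the cubemap is sampled with the vector from the cubemap capture position to that intersection point. In the real implementation this happens in the shader, but the math can be sketched in C# roughly like this (assuming an axis-aligned bounding box, which is exactly the assumption my non-rectangular cockpit breaks):

    using UnityEngine;

    public static class LocalCubemapShadowSketch
    {
        // For a fragment position inside the room, find where the ray towards the light
        // exits the room's bounding box, and return the direction used to sample the
        // baked shadow cubemap. Simple slab method for a point known to be inside the box.
        public static Vector3 CorrectedSampleDir(Vector3 fragPos, Vector3 dirToLight, Bounds room, Vector3 cubemapPos)
        {
            Vector3 invDir = new Vector3(1f / dirToLight.x, 1f / dirToLight.y, 1f / dirToLight.z);
            Vector3 t1 = Vector3.Scale(room.min - fragPos, invDir);
            Vector3 t2 = Vector3.Scale(room.max - fragPos, invDir);
            Vector3 tMax = Vector3.Max(t1, t2);
            float exit = Mathf.Min(tMax.x, Mathf.Min(tMax.y, tMax.z));
            Vector3 hit = fragPos + exit * dirToLight;
            return hit - cubemapPos;
        }
    }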

The blog post (and the corresponding Unity sample project) also gives code for calculating dynamic shadows for moving objects, which I think I might need for getting proper shadows from all the switches, the joystick, the pilot's body parts, and such. To be ready for this, I decided to split my cockpit into two meshes, one containing the base cockpit structure, using the flat shading, and another containing all the separate switches and other (possibly even moving) objects which should generate shadows on the various cockpit panels. I decided to use a different shader for this object, with normals, as most of these objects should not be flat shaded. This of course adds one Draw Call, but I don't think having an extra Draw Call for the cockpit is that much of an issue, considering the cockpit is the closest object to your eyes, and thus should be the most detailed.

I have already tested these dynamic shadows as well, but the code has a lot of issues (for nicer results I should raise the shadow texture resolution to 2048x2048 pixels, but that would cause rather significant extra work for the GPU, and even then the shadows are sometimes not at exactly the correct position), so I am not yet sure if I will actually implement this part of the code at all. With these issues and the slowdown, the trouble is perhaps not worth the effort. Besides, even John Carmack has said "Don't try to do accurate dynamic shadows on GearVR. Dynamic shadows are rarely aliasing free and high quality even on AAA PC titles, cutting the resolution by a factor of 16 and using a single sample so it runs reasonably performant on GearVR makes it hopeless."

By the way, there was one issue with the dynamic shadows that I fought with before I managed to solve it: the shadow texture was upside down on my Oculus Rift (which I use for quick tests)! I spent a bit too long googling for this, considering it is a known issue: texture coordinates have the V coordinate reversed in Direct3D compared to OpenGL, for which the original algorithm and shaders were coded. I managed to fix this issue by replacing this code (in the original RoomShadows.shader):

	// ------------ Runtime shadows texture ----------------
	// ApplyMVP transformation from shadow camera to the vertex
	float4 vertexShadows = mul(_ShadowsViewProjMat, output.vertexInWorld);

	output.shadowsVertexInScreenCoords = ComputeScreenPos(vertexShadows);

	return output;
}
with this code (I needed to base my change on ComputeNonStereoScreenPos() instead of the original ComputeScreenPos(), which uses separate coordinates for each eye when in VR, and thus displayed the shadows in one eye only!):
	// ------------ Runtime shadows texture ----------------
	// ApplyMVP transformation from shadow camera to the vertex
	float4 vertexShadows = mul(_ShadowsViewProjMat, o.vertexInWorld);

	o.shadowsVertexInScreenCoords = ComputeNonStereoScreenPosNew(vertexShadows);
	
	return o;
}

inline float4 ComputeNonStereoScreenPosNew (float4 pos) {
	float4 o = pos * 0.5f;
	#if defined(UNITY_HALF_TEXEL_OFFSET)
		o.xy = float2(o.x, o.y /** _ProjectionParams.x*/) + o.w * _ScreenParams.zw;
	#else
		o.xy = float2(o.x, o.y /** _ProjectionParams.x*/) + o.w;
	#endif
	o.zw = pos.zw;
	return o;
}
That is, I commented out the _ProjectionParams.x multiply, so the shadow texture is always read the correct way up.

Cockpit texture packing

Same as with my Snow Fall project, I want to have the cockpit of my LineWars VR game as detailed as I can make it (without sacrificing performance, obviously). Even as a kid I built all sorts of plane cockpits (using cardboard boxes) with detailed instruments, so my interest in detailed cockpits must stem from some sort of childhood dream of sitting in the cockpit of an aeroplane. :-) Anyways, for my Snow Fall project I had purchased a book called The Space Shuttle Operators Manual, which has detailed schematics for all the instrument panels of the Space Shuttle. I had scanned these pages and converted them to emissive textures for my Snow Fall project, but in LineWars VR I needed them to have some other details as well, so I decided to re-scan the schematics. (By the way, it seems that the same schematics can be found in this PDF from NASA, which even has the new glass cockpit instrumentation that my original book did not have: Space Shuttle Crew Operations Manual.)

After scanning all the schematics of the panels I wanted to have in my Cobra cockpit, I tried to fit them into a single rectangular texture (in Snow Fall all the textures were separate, with various sizes, most over 2048 pixels per side, and there were dozens of these textures!). I noticed I could just about fit them with still readable texts and symbols if I used a 4096x4096 texture. However, a texture of this size would take 48 megabytes uncompressed, and as all the recommendations for Gear VR state that textures should be kept at 2048x2048 or below, I began looking into ways to make the texture atlas smaller.

I decided to go with "gray packing", as most of the information in my textures has to do with the instrument panel switches illumination, and the panels themselves are pretty much just gray. Thus, I created a C# script for Unity, which reads my 4096x4096 uncompressed BMP texture and generates a 2048x2048 32-bit image from it, with each 2048x2048 quadrant of the original image packed into one of the Red, Green, Blue and Alpha channels. Using ETC2 compression, I was able to get practically all the information from the original 48MB BMP file into a 4MB texture! The actual packing routine is pretty simple: it takes the input byte array, the offset into the BMP file where the actual pixel data starts, and the width and height of the original file, and packs the data into the four pixel planes of the outbytes array (with room for the 54-byte BMP header):

    private void Convert(byte[] outbytes, byte[] inbytes, int inoffs, int w, int h)
    {
        // BMP pixel format = Blue, Green, Red, Alpha
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                outbytes[54 +                               // skip the 54-byte BMP header
                    (4 * (w >> 1) * (y & ((h >> 1) - 1))) + // output row: y folded into one half
                    (4 * (x & ((w >> 1) - 1))) +            // output column: x folded into one half
                    (y * 2 < h ? 2 : 0) +                   // vertical half picks the Red vs Blue byte...
                    (x * 2 < w ? 0 : 1)                     // ...and the horizontal half shifts it to Alpha/Green
                    ] = inbytes[inoffs + 3 * (y*w + x)];    // source gray value (Blue byte of the 24bpp pixel)
            }
        }
    }
That code is obviously not the most efficient way to do this, but since I only run it in the Unity editor whenever the texture BMP changes (which does happen often now that I am working on the textures), it does not matter whether it takes 100ms or 500ms to run.

Of course this packing of the texture also needed some changes to the vertex and fragment shaders, to look up the correct texture coordinates and select the correct color plane, and also to convert the grayscale texture value to the yellowish instrument panel illumination color. In my CobraCockpitShader.shader code I use a vertex-to-fragment structure that looks something like this:

	struct v2f
	{
		float4 vertex : SV_POSITION;
		float2 uv : TEXCOORD0;
		fixed4 channel: TEXCOORD1;
	};
The other items are pretty much standard, but the channel element is the one that handles the color plane masking. It is set up in the vertex shader like this:
	o.uv = 2 * TRANSFORM_TEX(v.uv, _MainTex);
	o.channel = max(fixed4(1 - floor(o.uv.x) - floor(o.uv.y), floor(o.uv.x) * floor(o.uv.y), floor(o.uv.y) - floor(o.uv.x), floor(o.uv.x) - floor(o.uv.y)), 0);
That is, all the texture coordinates are multiplied by two (so they get the range of 0..2 instead of 0..1, to map from 0..4096 to 0..2048 texels). Since the texture parameters use wrapping, the coordinates that are over 1 simply get mapped back to the range 0..1, but I can use these 0..2 coordinate ranges to determine the correct "quadrant" of the texture. The floor function converts the coordinate to integer, so it can only get a value of 0 or 1, and thus the UV coordinates map to one of the four "quadrants" (with the V coordinate reversed for OpenGL texture orientation): (0,1) = Red, (1,1) = Green, (0,0) = Blue, and (1,0) = Alpha. The channel setting uses some arithmetic to get only one of the four color components set, based on which of the UV coordinates were over 1, without using any conditional operations.

Then, in the fragment shader, I take only the wanted color channel from the texture, and switch to yellowish color if the resulting color is above a threshold, like this:

	// sample the texture
	fixed4 col = tex2D(_MainTex, i.uv) * i.channel;
	// Only one of the channels has data, so sum them all up to avoid conditionals
	fixed tmp = col.r + col.g + col.b + col.a;
	// Clamp the colors so we get yellow illumination with gray base color.
	col = min(tmp, fixed4(1, 0.7f, 0.4f, 1));

Cinema 4D C.O.F.F.E.E. UV Plugin

I find modeling pretty easy, but texturing in Cinema 4D is something I constantly struggle with. Perhaps my workflow especially with this project is not very well suited to the way the UV tools in Cinema 4D work. I have a BMP image containing a texture atlas, and I want to map certain polygons in my model to certain exact UV coordinates in this already existing texture atlas. At first I simply used the Structure view of Cinema 4D to input the coordinates by hand, but that got boring and error-prone pretty quickly. I then decided to look into creating my own plugin to make this job easier.

I managed to create a plugin that finds a point (vertex) that is currently selected in the mesh, and then looks for all the selected polygons sharing this point, and gets the UV coordinates from the UVW tag for this point in the polygon. These coordinates (which are floating point numbers between 0 and 1) are then converted to 0..4096, to match with my texture image, and displayed in a popup window.

Then when I input new coordinates, it sets all the selected polygons' UVW coordinates for this point to the given value (converted back from the 0..4096 to the 0..1 range). Using this makes it easier for me to map coordinates in the texture atlas to UV coordinates, and since I can select the polygons that should be affected, I can avoid (or create when necessary) discontinuities in the UV coordinates, which would make Unity duplicate the vertex when importing the mesh. Even though the plugin is a bit buggy and quite rudimentary, it has been a big help in my peculiar texturing workflow.

RenderScale, AntiAliasing and MipMapping

I mostly use my Oculus Rift when working on my project, and only occasionally actually build and run the project on my Gear VR device. I began wondering why my cockpit does not look nearly as nice on Gear VR as it does on Oculus Rift. The textures were flickering and did not look as detailed as on the Rift, even though I used the same textures, and the display resolution should be about the same. Even the skybox showing the background planet had clear aliasing problems, while it was very clean-looking on the Rift.

I first tried to increase the antialiasing (MSAA) level, but that did not seem to have much of an effect. After searching the net for answers, I finally found the RenderScale setting, and noticed that with the default 1.0 RenderScale the eye buffer size was only 1024x1024 on the Gear VR, while on Oculus Rift it was 1536x1776 per eye! This obviously caused a big difference in the apparent quality. I experimented with increasing the RenderScale to 1.5, which made the eye texture 1536x1536 on the Gear VR (and something huge, like 2304x2664, on the Rift). That got rid of the aliasing problem with the skybox and made the textures look much more detailed, but there were still some texture crawl and star field flickering issues on both Gear VR and Oculus Rift. On my Galaxy S6, RenderScale 1.5 also caused an occasional FPS drop, so it would not be a real solution for the texture problems.
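
Changing the setting itself is a one-liner in C#; depending on the Unity version the property is called renderScale or eyeTextureResolutionScale:

    using UnityEngine.XR;

    // Render the eye buffers at 1.5 times the default resolution.
    XRSettings.eyeTextureResolutionScale = 1.5f;    // older Unity versions call this XRSettings.renderScale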

I then ran across the article by John Carmack where he states that Mip Maps should always be enabled on Gear VR. Well, I did not have them enabled, as I thought the cockpit is so close to the eyes that there is no need to blur any of the cockpit textures. Just to test this, I enabled Mip Mapping, and contrary to my expectations, the textures got a lot calmer and the flickering was almost completely gone. The bad thing was that the texture compression artifacts (caused by my gray packing) became quite visible. At first I thought about doing some clever reordering of the texture atlas that could lessen the artifacts, but in the end I decided to go with an uncompressed texture for the cockpit instrument panels. Sadly, with Mip Mapping, this bloated the original 4MB texture to a whopping 21.3MB! However, I think I can have all my other textures compressed, so perhaps I can get away with one such huge texture in my game.

Cockpit instruments, clock and radar

Occasionally I get bored with working on the textures and get sidetracked by some other feature that my game needs. One of the things I think every Virtual Reality game should have is a visible real-time clock when you are in VR. I don't know if it is just me, but usually when I am playing a VR game, I only have a certain amount of time I can play it before I need to do something else. It is pretty annoying trying to check what time it is by peeking out of the VR glasses. Thus, I began experimenting with ways to get a clock display into my cockpit. I had already implemented a simple UI panel in the center MFD (multi-function display), which I certainly could use for a clock, but I wanted to check if there was a way to add instruments without adding any Draw Calls to my project.

The center panel (based on the Space Shuttle center panel) happened to have a slot for a timer, which I had some trouble deciding on how to model or texture. I decided to change this into a digital clock, so I could kill two birds with one stone, so to speak: Have a clock visible, and have the timer area actually do something useful in the center panel. I had an idea of adding the number (and letter) glyphs into my cockpit texture atlas, and then just switching the UV coordinates in my cockpit mesh whenever the clock changes (once per minute). This would neatly avoid any extra draw calls, and I thought that updating the UV coordinates of my base cockpit mesh (which at the moment has 737 vertices and 1036 triangles inside Unity, and 566 points/688 polygons in Cinema 4D) once a minute should not cause much of a slowdown. However, to be able to update just the UV coordinates of certain polygons in the cockpit mesh, I needed a way to find those polygons!

I couldn't use anything like the index of a point or polygon from Cinema 4D to find the clock face polygons, as Unity will rearrange the vertices and triangles when it imports the mesh. I needed to find the correct UV coordinate array indices within Unity, but to do that I needed to have something set up in Cinema 4D to flag the polygons I needed to find. I decided to simply flag the left edge of the clock face polygons with a UV coordinate U value of 0.5, as nothing else in my mesh uses that value. Of course I could also have used for example 0 or 1, but as Cinema 4D gives those coordinates to newly created polygons, I did not want that to cause problems. This is how the polygons were organized in the object inside Cinema 4D (don't mind the Min/Sec headers, the clock will show Hour/Min; I was just too lazy to change the texture, as that text is so small it will not be readable in the game):

So, now I only needed to find the corresponding triangles in Unity, find their UV indices (in the correct order), store these, and then use them to display a number glyph whenever the current time changes. Sounds simple, but it took a bit of trial and error to find the simplest algorithm to handle this. In my day job I have used C# and LINQ quite extensively, so I reached for my LINQ toolbox for these algorithms, as performance was not critical during this setup phase. Here is the routine I came up with, hopefully sufficiently commented so that you can see what it does:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using UnityEngine;

    int[,] m_clockUVIndices = new int[4, 4];

    void PrepareClock()
    {
        // Find the clock vertices
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector2[] uvs = mesh.uv;
        Vector3[] verts = mesh.vertices;
        // Find all the vertices flagged with "uv.x == 0.5f"
        List<int> vidxs = new List<int>();
        for (int i = 0; i < verts.Length; i++)
            if (uvs[i].x == 0.5f)
                vidxs.Add(i);
        // Find the polygons that use these vertices, these are the digital clock face polygons.
        List<int> tidxs = new List<int>();
        int[] tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i++)
            if (vidxs.Contains(tris[i]))
                tidxs.Add(i / 3);
        // Now tidxs contains all the triangles (including duplicates) that belong to the digital instrument faces.
        // We need to find the correct order of the triangles, based on the sum of the X and Y
        // coordinates of their vertices.
        tidxs = tidxs.Distinct()
                     .OrderBy(a => verts[tris[a * 3]].x + verts[tris[a * 3 + 1]].x + verts[tris[a * 3 + 2]].x)
                     .ThenBy(a => verts[tris[a * 3]].y + verts[tris[a * 3 + 1]].y + verts[tris[a * 3 + 2]].y).ToList();
        // Next, reorder the vertices of each pair of triangles for our final UV coordinate array.
        for (int i = 0; i < 4; i++)
        {
            List<int> tmp = new List<int>
            {
                tris[tidxs[i*2] * 3],
                tris[tidxs[i*2] * 3 + 1],
                tris[tidxs[i*2] * 3 + 2],
                tris[tidxs[i*2+1] * 3],
                tris[tidxs[i*2+1] * 3 + 1],
                tris[tidxs[i*2+1] * 3 + 2],
            };
            tmp = tmp.Distinct().OrderBy(a => verts[a].x).ThenByDescending(a => verts[a].y).ToList();

            m_clockUVIndices[i, 0] = tmp[0];
            m_clockUVIndices[i, 1] = tmp[1];
            m_clockUVIndices[i, 2] = tmp[2];
            m_clockUVIndices[i, 3] = tmp[3];
        }
    }

Now that I had the UV indices that need changing stored, it was a simple matter to change these whenever the current minute changes. Here below is the code that does that, by checking the current minute against the last updated minute. Don't get confused by the const values having X and Y in their names; they refer to the texture U and V coordinates, I just prefer the X and Y terminology over U and V:

    const float X_START = 2048f / 4096f;	// Start of the number glyph U coordinate
    const float Y_START = 1f - (2418f / 4096f);    // Start of the letter 0 in the texture atlas
    const float X_END = 2060f / 4096f;	// End texture U coordinate of the number glyph
    const float Y_SIZE = -((2435f - 2418f) / 4096f); // Height of the number glyph we want to display
    const float Y_STRIDE = -23f / 4096f;	// How much to travel to find the next number glyph

    int m_currentMinute = -1;

    void UpdateClock()
    {
        DateTime curTime = DateTime.Now;
        if (curTime.Minute != m_currentMinute)
        {
            // Update the clock when the current minute changes.
            m_currentMinute = curTime.Minute;
            Mesh mesh = GetComponent<MeshFilter>().mesh;
            Vector2[] uvs = mesh.uv;
            // Set the lower digit of the minute
            float y = Y_START + Y_STRIDE * (m_currentMinute % 10);
            uvs[m_clockUVIndices[3, 0]] = new Vector2(X_START, y);
            uvs[m_clockUVIndices[3, 1]] = new Vector2(X_START, y + Y_SIZE);
            uvs[m_clockUVIndices[3, 2]] = new Vector2(X_END, y);
            uvs[m_clockUVIndices[3, 3]] = new Vector2(X_END, y + Y_SIZE);
            // Set the higher digit of the minute
            y = Y_START + Y_STRIDE * (m_currentMinute / 10);
            uvs[m_clockUVIndices[2, 0]] = new Vector2(X_START, y);
            uvs[m_clockUVIndices[2, 1]] = new Vector2(X_START, y + Y_SIZE);
            uvs[m_clockUVIndices[2, 2]] = new Vector2(X_END, y);
            uvs[m_clockUVIndices[2, 3]] = new Vector2(X_END, y + Y_SIZE);
            // Set the lower digit of the hour
            y = Y_START + Y_STRIDE * (curTime.Hour % 10);
            uvs[m_clockUVIndices[1, 0]] = new Vector2(X_START, y);
            uvs[m_clockUVIndices[1, 1]] = new Vector2(X_START, y + Y_SIZE);
            uvs[m_clockUVIndices[1, 2]] = new Vector2(X_END, y);
            uvs[m_clockUVIndices[1, 3]] = new Vector2(X_END, y + Y_SIZE);
            // Set the higher digit of the hour (24-hour clock)
            y = Y_START + Y_STRIDE * (curTime.Hour / 10);
            uvs[m_clockUVIndices[0, 0]] = new Vector2(X_START, y);
            uvs[m_clockUVIndices[0, 1]] = new Vector2(X_START, y + Y_SIZE);
            uvs[m_clockUVIndices[0, 2]] = new Vector2(X_END, y);
            uvs[m_clockUVIndices[0, 3]] = new Vector2(X_END, y + Y_SIZE);
            mesh.uv = uvs;
        }
    }

The end result, with the game running on Oculus Rift, and this image captured from the editor, looks like the following (with the clock showing 12:42):

As you can see from the image above (well, besides the badly unfinished texturing of the center console), I also worked on some radar code. The radar display uses much of the same techniques: the legend showing the number of friendly, enemy, and other objects uses the exact same texture UV coordinate switching, and the radar blips change the triangle mesh coordinates similarly. That creates a 3D radar view (not a hologram, mind you, just a simple 3D display, which even current-day technology is capable of, like the Nintendo 3DS) that shows the direction and distance of the other ships and asteroids. I decided to use a compressed distance scale in the radar, relative to the furthest object: the distances are first square-rooted and then scaled by the square-rooted distance of that furthest object. This way even objects that are relatively close clearly show their direction relative to your own ship.
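
The blip placement itself is only a few lines. A sketch of the scaling (not my exact code) could look like this, where maxDistance is the distance of the furthest tracked object and radarRadius the size of the radar display:

    // Convert a position relative to the player's ship into a radar blip position.
    // Distances are square-rooted and normalized by the furthest object, so that
    // nearby objects still show a clear direction.
    Vector3 RadarBlipPosition(Vector3 relativePos, float maxDistance, float radarRadius)
    {
        float dist = relativePos.magnitude;
        if (dist < 0.001f || maxDistance < 0.001f)
            return Vector3.zero;
        float scaled = Mathf.Sqrt(dist) / Mathf.Sqrt(maxDistance);  // 0..1, compressed scale
        return relativePos / dist * (scaled * radarRadius);
    }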

Next steps

Well, I will continue working on texturing the cockpit, and occasionally testing some other interesting algorithms, to not get bored with the texturing work (which I really do not much enjoy). I just got an Android-compatible gamepad, and I have already done some basic work on reading the Gear VR Controller, so adding proper input would be something I need to do pretty soon. I have also imported the original Pirate and Station meshes from LineWars II into my project, as placeholders, so I could perhaps start working on the actual game mechanics in the near future.

Lots of work remaining, but at least the project does progress slowly but surely!

Dec 23rd, 2017 - Modeling and Texturing Asteroids

Asteroid references

A week or so ago I began looking into creating asteroids for LineWars VR. In the original LineWars II I had an asteroid mesh with 12 vertices and 20 polygons. I then scaled this randomly and differently in all three dimensions, to create asteroids of various sizes and shapes. However, for LineWars VR I want to have something a bit more natural looking, so I spent a couple of days looking for ideas and tutorials about asteroid generation. I thought that modeling the asteroid mesh by hand would not create suitable variation, so I mainly looked into procedural asteroid generation. I even found a Unity forum thread about that exact subject, so I was certainly not the first one trying to do this. The Unity forum thread did not seem to have exactly what I was after, though. I also found a tutorial about creating asteroids in Cinema 4D, but those asteroids did not look quite like what I had in mind for LineWars VR.

Procedural textures

Finally I found a thread about procedural asteroid material in Blender, which seemed to have results much like what I was after. So, I decided to first look into creating a suitable texture for my asteroid, and only after that look into the actual shape of the asteroid. The example used a procedural texture with Cells Voronoi noise together with some color gradient. At first I tried to emulate that in Cinema 4D, but did not quite succeed. Finally I realized that the Cinema 4D Voronoi 1 noise actually generated crater-like textures when applied to the Bump channel, with no need for a separate color gradient or other type of post-processing! Thus, I mixed several different scales of Voronoi 1 (for different sized craters), and added some Buya noise for small angular-shaped rocks/boulders. For the diffusion channel (the asteroid surface color) I just used some Blistered Turbulence (for some darker patches on the surface) together with Buya noise (again for some rocks/boulders on the surface).

Procedural asteroid mesh

Okay, that took care of the textures, but my asteroid was still just a round sphere. How do I make it look more interesting? For my texturing tests I used the default Cinema 4D sphere object with 24 segments, which results in a sphere with 266 vertices. For my first tests to non-spherify this object, I simply randomized all these vertex coordinates in Unity when generating the mesh to display. This sort of worked, but it generated a lot of sharp angles, and the asteroid was not very natural-looking. Many of the online tutorials used an FFD (Free Form Deformation) tool in the modeling software to generate such deformed objects. I could certainly also use the FFD tool in Cinema 4D for this, but I preferred something that I could use within Unity, so that I could generate asteroids that are different during every run of the game, just like they were in the original LineWars II.

I decided to check if Unity has an FFD tool, and found a reasonably simple-looking FreeFormDeformation.cs C# script for Unity by Jerdak (J. Carson). I modified that code so that, instead of creating the control points as GameObjects for the Unity editor, I create the control points in code with some random adjustments, and then use these control points to deform the original sphere mesh while instantiating a new asteroid in Unity. After some trial and error with suitable random value ranges I was able to generate quite convincing asteroid shapes, at least in my opinion. This is my current random adjustment, which still keeps the asteroids mostly convex, so I don't need to worry about self-shadowing (as I want to have dynamic lighting, but plan to avoid real-time shadows for performance reasons):

    Vector3 CreateControlPoint(Vector3 p0, int i, int j, int k)
    {
        Vector3 p = p0 + (i / (float)L * S) + (j / (float)M * T) + (k / (float)N * U);
        return new Vector3(p.x * (0.5f + 4 * Random.value), p.y * (0.5f + 4 * Random.value), p.z * (0.5f + 4 * Random.value));
    }

Exporting procedural textures from Cinema 4D to Unity

Now I had a nicely textured sphere in Cinema 4D and a nice-looking asteroid mesh in Unity, but I still needed to somehow apply the procedural texture generated in Cinema 4D to the mesh deformed in Unity. I first looked into some YouTube tutorial videos, and then began experimenting. Using the Bake Object command in Cinema 4D I was able to convert the sphere object to a six-sided polygon object with proper UV coordinates, together with baked textures.

To generate a normal texture for Unity from the bump channel in Cinema 4D I had to use the Bake Texture command, which gives me full control over which material channels to export, how the normals should be exported (using the Tangent method, as in the screen shots below), and so on.

When I imported this mesh into Unity, applied my Free Form Deformation to it (which meant I had to call the Unity RecalculateNormals() method afterwards), and applied the texture to the mesh, there were visible seams where the six separate regions met. After some googling I found a blog post that explained the problem, together with code for a better method to recalculate normals in Unity. I implemented this algorithm, and got a seamless asteroid! Below is an animated GIF captured from the Unity game viewport (and sped up somewhat).
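
The core idea, roughly, is to accumulate the face normals as usual, but then average them across all vertices that share the same position, so that the duplicated seam vertices end up with identical normals and the lighting seam disappears. A condensed sketch of that idea (not the blog post's actual code, which is more general) could look like this:

    using System.Collections.Generic;
    using UnityEngine;

    public static class SeamlessNormals
    {
        public static void Recalculate(Mesh mesh)
        {
            Vector3[] vertices = mesh.vertices;
            int[] triangles = mesh.triangles;
            Vector3[] normals = new Vector3[vertices.Length];

            // Accumulate the (area-weighted) face normal into each of the triangle's vertices.
            for (int t = 0; t < triangles.Length; t += 3)
            {
                int i0 = triangles[t], i1 = triangles[t + 1], i2 = triangles[t + 2];
                Vector3 faceNormal = Vector3.Cross(vertices[i1] - vertices[i0], vertices[i2] - vertices[i0]);
                normals[i0] += faceNormal;
                normals[i1] += faceNormal;
                normals[i2] += faceNormal;
            }

            // Group vertices that share the exact same position (the duplicates created at UV seams).
            var groups = new Dictionary<Vector3, List<int>>();
            for (int i = 0; i < vertices.Length; i++)
            {
                List<int> group;
                if (!groups.TryGetValue(vertices[i], out group))
                {
                    group = new List<int>();
                    groups.Add(vertices[i], group);
                }
                group.Add(i);
            }

            // Give every vertex in a group the same averaged, normalized normal.
            foreach (List<int> group in groups.Values)
            {
                Vector3 sum = Vector3.zero;
                foreach (int i in group)
                    sum += normals[i];
                sum.Normalize();
                foreach (int i in group)
                    normals[i] = sum;
            }
            mesh.normals = normals;
        }
    }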

Asteroid shader

After I got the asteroid working with the Standard Shader of Unity, I wanted to experiment with coding my own shader for it. I had several reasons for creating a custom shader for my asteroid object:

  1. I wanted to learn shader programming, and this seemed like a good first object for experimenting with that.
  2. I had an idea of combining both the diffuse texture and the normal texture into a single texture image, as my diffuse color is just shades of gray. I can pack the 24bpp normal map together with the 8bpp color map into a single 32bpp RGBA texture, which should save some memory (see the sketch after this list).
  3. I wanted to follow the "GPU Processing Budget Approach to Game Development" blog post in the ARM Community. I needed to have easy access to the shader source code, and be able to make changes to the shader, for this to be possible.
  4. I am not sure how efficient the Standard Shader is, as it seems to have a lot of options. Since I know the exact use case of my shader, I might be able to optimize it better, using the performance results from the Mali Offline Compiler (MOC), for example.
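
As an illustration of the texture packing idea in point 2, an editor-time utility could combine the two maps roughly like this (a sketch only; the method name is made up, and both textures are assumed to be the same size and imported as plain, readable RGBA textures rather than as Unity "Normal map" assets, which would swizzle the channels):

    // Pack the normal map XYZ into RGB and the grayscale diffuse value into A.
    static Texture2D PackNormalAndGray(Texture2D normalMap, Texture2D grayDiffuse)
    {
        Color[] pixels = normalMap.GetPixels();
        Color[] diffuse = grayDiffuse.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
            pixels[i].a = diffuse[i].r;   // any channel works for a grayscale map
        Texture2D packed = new Texture2D(normalMap.width, normalMap.height, TextureFormat.RGBA32, true);
        packed.SetPixels(pixels);
        packed.Apply();
        return packed;
    }
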
I followed the excellent tutorials by Jasper Flick of Catlike Coding, especially the First Light and Bumpiness tutorials, when coding my own shader. I got the shader to work without too much trouble, and was able to check the performance values from the MOC:

C:\Projects\LineWarsVR\Help>malisc -c Mali-T760 Asteroid.vert
  4 work registers used, 15 uniform registers used, spilling not used.

                          A       L/S     T       Bound
  Instructions Emitted:   19      10      0       A
  Shortest Path Cycles:   9.5     10      0       L/S
  Longest Path Cycles:    9.5     10      0       L/S

C:\Projects\LineWarsVR\Help>malisc -c Mali-T760 Asteroid.frag
  2 work registers used, 1 uniform registers used, spilling not used.

                          A       L/S     T       Bound
  Instructions Emitted:   9       4       1       A
  Shortest Path Cycles:   4       4       1       A, L/S
  Longest Path Cycles:    4       4       1       A, L/S

So, the vertex shader (which needs to calculate a BiNormal vector for the vertex, based on the existing Normal and Tangent vectors) takes 10 GPU cycles per vertex to execute. For the 266 vertices of the asteroid, that is at most 2660 GPU cycles per asteroid, probably less if the back-facing polygons have been culled in an earlier step of the rendering pipeline. The fragment shader (which needs to calculate the tangent space normal vector from the normal map and the Normal, Tangent and BiNormal vectors provided by the vertex shader) takes only 4 GPU cycles per fragment (pixel). As my Galaxy S6 (which is close to the low end of the Gear VR-compatible devices) has a GPU processing budget of 28 GPU cycles per pixel, my asteroid is well within this budget.

Nov 28th, 2017 - The Beginning

Okay, so I decided to start working on a new game project, after quite a long while. Many times since I coded and released my LineWars II game, I have been thinking about getting back to coding a new space game. However, I hadn't dared to start working on such a project, as it seems that all games nowadays are built by large teams of developers, artists, and other professionals. I thought that a single person making a game in their free time would probably not have a chance of succeeding in competition against such big game projects. However, I recently ran across End Space for Gear VR, which is idea-wise a rather similar space shooter to what LineWars II was. Reading the developer's blog, I found out that it was actually created by a single person. As there are not all that many cockpit-type space shooter games for Gear VR, and End Space seems to be rather popular, I thought that perhaps there would also be interest in a Virtual Reality port of my old LineWars II game!

As the Gear VR runs on mobile devices, the graphics and other features need to be quite optimized and rather minimalistic to keep the frame rate at the required 60 fps. This nicely limits the complexity of the game, and also provides some challenges, so it should be a good fit for my talents. I am no graphics designer, but I do like to optimize code, so hopefully I can get some reasonably good-looking graphics running fast. No need for a team of artists when you cannot take advantage of graphical complexity. :-)

Music

I heard about End Space at the end of November 2017, and after making the decision to at least look into porting LineWars II to Gear VR, I started looking at what sort of assets I already had or could easily create for this project. Pretty much the first thing I looked into was music. LineWars II used four pieces of music originally composed for the Amiga 500 by u4ia (Jim Young). He gave me permission to use those pieces in LineWars II, and I converted the original MOD files to a sort of hybrid MIDI/MOD format, in order to play the same music on a Creative SoundBlaster, Gravis UltraSound or Roland MT-32, which were the main audio devices at that time. By far the best music quality could be achieved by playing the music through a Roland MT-32 sound module. However, the only way to play that hybrid MIDI/MOD song format was within LineWars II itself, and I was not sure if I could somehow record the music from the game, now 24 years later!

After some experiments and a lot of googling, I managed to run the original LineWars II in DOSBox, together with the Munt Roland MT-32 software emulator and Audacity, and record the music into WAV files with a fully digital audio path, so the music actually sounded better than it had ever sounded on my real Roland LAPC-1 (an internal PC audio card version of the MT-32 sound module)! So, the music was sorted. What else might I already have that I could use in this project?

Missä Force Luuraa

That is the title of a Finnish Star Wars fan film from 2002. The film never got finished or released, but I created some 3D animation clips for it, as I was just learning to use Cinema 4D at that time (it was the only 3D modeling package I could understand after experimenting with the demo versions of many such packages). Now, as I was going through my old backup discs of various old projects, I found the scene files for all these animation clips. Looking at them brought back memories, but they also contained some interesting scenes, for example this tropical planet. I would think this would fit nicely into LineWars VR, as pretty much all the missions happen near a planet.

Snow Fall

Back in 2002 I started working on a 3D animation fan film myself: "Snow Fall", set during the events of a chapter of the same name in the late Iain M. Banks' novel "Against a Dark Background". This project has also been on hold for quite a while now, but I do seem to get back to working on it every now and then. The last time I worked on it was in 2014, when I added a few scenes to the movie. It is still very far from finished, and as 3D animation technology seems to progress much faster than I can keep up with my movie, it does not look like it will ever get finished.

In any case, I spent a lot of time working on a detailed ship cockpit for this animation project. I believe I can use at least some of the objects and/or textures of the cockpit in my LineWars VR project. This cockpit mesh has around a million polygons, and uses a lot of different materials, most of which use reflections (and everything uses shadows), so I will need to optimize it quite a bit to make it suitable for real-time game engine rendering. Here below is a test render of the cockpit from June 28th, 2003.

Learning Unity

As I haven't worked with Unity before, there are a lot of things to learn before I can become properly productive with it. I am familiar with C#, though, so at least I should have no trouble with the programming language. I have been reading chapters from the Unity manual every evening (as bedtime reading :), and have thus slowly been familiarizing myself with the software.

Explosion animation

One of the first things I experimented with in Unity was a texture animation shader, which I plan to use for the explosions. I found an example implementation by Mattatz on GitHub, and used that for my tests. I also found free explosion footage on Videezy, which I used as the test material. This explosion did not have an alpha mask, and it seems that none of the explosion animations that do have one are free, so I think I will just add an alpha mask to this free animation myself.
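
The Mattatz shader does the frame selection on the GPU. Purely to illustrate the flipbook idea it is based on (this is not that shader, just a minimal C# sketch with made-up names, assuming the explosion frames are laid out in a columns x rows grid inside a single texture), stepping through the frames could be done like this:

    using UnityEngine;

    public class FlipbookAnimation : MonoBehaviour
    {
        public int columns = 8;
        public int rows = 8;
        public float framesPerSecond = 30f;
        private Material material;

        void Start()
        {
            material = GetComponent<Renderer>().material;
            // Show exactly one frame of the grid at a time.
            material.mainTextureScale = new Vector2(1f / columns, 1f / rows);
        }

        void Update()
        {
            // Pick the current frame and offset the texture to it, left to right, top to bottom.
            int frame = (int)(Time.time * framesPerSecond) % (columns * rows);
            float x = (frame % columns) / (float)columns;
            float y = 1f - (frame / columns + 1) / (float)rows;
            material.mainTextureOffset = new Vector2(x, y);
        }
    }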