Making of – Underworld Ep.1

Overview

Goals

To follow each role of the game development pipeline and investigate the importance of programmatic animation in modern animation. 

To establish the technical restrictions of artificial intelligence and explore the future of animation.

Produced as part of my major design project to conclude my degree and final semester. See my Project Pitch.

Project

To develop a game episode that focuses on the use of short cinematics to narrate the game and set the game’s atmosphere.

Cinematics are to run in real-time and update in-game features to allow the player to proceed to the next stage.

To use crowd generation to produce an immersive and thrilling experience.

The game must ship in a playable state and run smoothly across different hardware and platforms.

Contents

Planning

Setting

I have chosen to set my scene in the gruesome depths of an orcish stronghold. I believe that a fantasy setting gives me the scope to take full control over the environment. I plan to follow typical lore where orcs are at war with humans. Orcs are nomadic and can survive harsh environments, moving when local resources are depleted. More permanent settlements are built deep underground.

An underground scene will bring lighting challenges as a result of limited natural light: skylight? lava? torches? luminescent plants/insects? A large amount of scene lighting will bring performance issues, so I must weigh the use of baked against dynamic light sources.

I envision a natural gorge inside of a mountain with lava flowing through it. The inhabitants have mined a labyrinth of tunnels and using local resources, have built bridges, fences, huts and machinery. The surrounding landscape will be barren and rocky.

Surprisingly, there is very little out there when it comes to orc or dwarven caverns. But this is great, as it allows me to be creative and develop something unique.

Requirements

Problems
  • Limited time (12 weeks alongside other classes)
  • Little experience (Unity, 3ds Max)
  • One person (me)


Choosing the Right Development Pipeline. P1.

Let's break down the animation process.

  • For both film and games, Maya is often the tool of choice for modelling.
  • Fine details and textures are done through digital sculpting tools such as ZBrush and, increasingly, through Substance Designer.
  • Animation is often done through in-house software at big film companies, but commonly through Maya for games.
  • For film and TV, rendering is done through software such as Arnold or RenderMan, whereas games render entirely through their own engines.

Animation is tedious and traditionally done through keyframes and motion capture. More recently, minute animations such as facial movements are audio-driven, a technique used predominantly in games but not quite perfected, as BioWare found out when developing Mass Effect: Andromeda. Although it is still too early to use algorithmic animation on a large scale, it is definitely an amazing piece of technology and the future of animation.

Choosing the Right Development Pipeline. P2.

Animation software often provides the ability to insert animation expressions to aid development, and these are often used in the creation of motion paths. There are also many ready-made plugins that are freely available. However, algorithmic animation is much better suited to game engines, as they provide much more support in terms of code libraries.

Algorithmic animation can best be seen through A.I., as characters can be pre-computed to interact with each other or the environment. Film and games are becoming more and more intertwined, as can be seen in Pixar's Finding Dory, which made use of Unreal Engine's VR support.

Rendering animated videos through specially designed software such as Mental Ray can take hours, days or even weeks depending on how much data needs to be processed, whereas game engines render in real time! Yes, you heard me, real time!

Game engines achieve this by pre-calculating data and using optimised meshes and lighting. Game engines are simply not designed to render extremely detailed scenes and are incapable of rendering the most complex effects.

In recent years, as a result of advancing technology and better consumer computers, game engines have made a huge leap forward in terms of the quality they can produce. UE4's Sequencer, which replaced the legacy Matinee toolkit, allows for video sequences that can be rendered in real time or frame by frame like a traditional renderer (at much faster rendering times). In film, Disney used Unreal Engine to render a droid for some scenes of Rogue One.

Due to time restraints and my strong bias for game engines, I will be using a game engine to develop my animation.

Software Used

Thanks to DigitalTutors for giving me an introduction to UE4 and World Machine, as well as Epic Games for providing access to their UE4 demos.

Environment

Terrain Creation

As I had no experience with environments before, I was not sure where to start, but I decided upon creating the exterior first to allow for better visualisation of the interior structure.

Main Game Engine Terrain Creation Methods
  • Heightmaps
  • Voxel
  • Polygon

Modern game engines come with terrain editing tools that allow for the creation of massive terrain-based worlds. UE4 allows heightmaps to be created from scratch using the in-engine landscape tools; however, this is not recommended for photorealistic landscapes as the detail must be painted by hand. Instead, heightfield data exported from external software can be imported. Such software can generate landscapes based on nodes and other inputs, adding fine details such as erosion over time.

Heightmaps work by storing the height component for each vertex in a quad and are the fastest method for geometry lookups, e.g. collision detection. They are relatively low on memory usage, and their ability to allow for dynamic LOD makes them the most efficient method of rendering terrain. However, it is impossible to create holes, and there can be no overlapping geometry; therefore, caves and overhanging cliffs must be created within the engine through meshes and transparent materials. Moreover, heightmaps are prone to artifacts and ultimately provide less control whilst texturing. Heightmaps can be rendered as a mesh, but chunking is required so that LOD and other culling techniques can lower the detail or prevent the rendering of hidden chunks. Some off-the-shelf game engines such as UE4 come with automatic frustum and occlusion culling set by object bounds.
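
To make the lookup speed concrete, here is a minimal sketch in plain C++ of how a heightmap answers a height query in constant time through bilinear interpolation. This is my own illustration with hypothetical names, not engine code:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A heightmap: one height per vertex, laid out row-major over a regular grid.
struct Heightmap {
    int width = 0, height = 0;   // grid resolution in vertices
    float cellSize = 1.0f;       // world units between neighbouring vertices
    std::vector<float> data;     // width * height samples

    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return data[y * width + x];
    }

    // Height at an arbitrary world position, interpolated from the four
    // surrounding vertices: an O(1) lookup with no mesh traversal, which
    // is what makes collision checks against heightmaps so cheap.
    float sample(float worldX, float worldY) const {
        float gx = worldX / cellSize, gy = worldY / cellSize;
        int x0 = (int)std::floor(gx), y0 = (int)std::floor(gy);
        float fx = gx - x0, fy = gy - y0;
        float top = at(x0, y0) + (at(x0 + 1, y0) - at(x0, y0)) * fx;
        float bottom = at(x0, y0 + 1) + (at(x0 + 1, y0 + 1) - at(x0, y0 + 1)) * fx;
        return top + (bottom - top) * fy;
    }
};
```

Note there is exactly one height per (x, y) cell, which is also why holes and overhangs are impossible.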

Voxel terrains store data for each point inside a 3D grid; they are particularly common in older games but have made a huge comeback in recent years. Voxels allow for continuous storage of hidden geometry, popularising infinite open worlds like Minecraft and No Man's Sky. Voxels are easy to modify and their structure can be changed on the fly, as can be seen in games such as Astroneer. The main advantage of using voxels is that they can produce caves and overhanging geometry; however, they are much slower than their competitors to render. Furthermore, artifacts are common and voxels are incredibly taxing on VRAM. Here's a good read on procedural world generation using voxels.
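
To show the trade-off in the other direction, here is a hedged plain C++ sketch (hypothetical names) of a dense voxel chunk: every cell is stored, including hidden interior ones, which is what makes digging trivial and memory usage enormous. A single 256³ chunk of one-byte materials is already 16 MiB:

```cpp
#include <cstdint>
#include <vector>

struct VoxelChunk {
    static constexpr int N = 256;                     // cells per side
    std::vector<uint8_t> cells =
        std::vector<uint8_t>(N * N * N, 0);           // 0 = air; 16 MiB total

    uint8_t& at(int x, int y, int z) { return cells[(z * N + y) * N + x]; }

    // Carving a cave is a constant-time edit per cell; the chunk's render
    // mesh is then regenerated afterwards (e.g. via marching cubes).
    void carveSphere(int cx, int cy, int cz, int r) {
        for (int z = cz - r; z <= cz + r; ++z)
            for (int y = cy - r; y <= cy + r; ++y)
                for (int x = cx - r; x <= cx + r; ++x) {
                    if (x < 0 || y < 0 || z < 0 || x >= N || y >= N || z >= N)
                        continue;
                    int dx = x - cx, dy = y - cy, dz = z - cz;
                    if (dx * dx + dy * dy + dz * dz <= r * r)
                        at(x, y, z) = 0;              // open the cell up to air
                }
    }
};
```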

Terrains can also be created through meshes, which are very flexible and give full control over advanced terrain features and textures. Meshes do not require a geometry shader and as a result render extremely fast. As all coordinates are stored individually for each vertex, they are precise and have a low memory impact. They are commonly used to create interiors, caves and overhanging features (close-quarter areas). However, it is difficult to place meshes through code, and they have poor dynamic LOD and collision detection.


So, what tools are the industry using?
Landscapes

These are common tools used in film and TV to produce photorealistic landscapes, atmosphere and vegetation, and they are beginning to become popular in game development as home computers become more powerful.

Vegetation

Vegetation generation is built into some software like VUE (E-on) and Terragen, but standalone software such as PlantFactory (also by E-on) and SpeedTree is frequently used. SpeedTree is a huge name in game development and has been around since 2002, recently releasing toolkits for Unity and UE4. SpeedTree is able to produce a huge variety of dynamic, fully textured plants through nodes and provides seamless LOD transitions.

What does the future hold?

The past few years have proven that there is still a lot of unlocked potential in voxel terrains. Despite their heavy memory usage, voxels can dynamically produce extremely complex features. Voxels make for some interesting game mechanics but can also be used by environmental artists to visually deform a landscape (perhaps through V.R.). Below are some tools in development that demonstrate the full power of voxel terrains.

  • Landscape architect for U.E.4.
    • Procedural landscape generator
    • Design and build landscapes within the editor
    • Automatically takes care of setting up the materials, foliage, world composition etc.
  • World Creator for Unity
    • GPU terrain generation

World Machine – Heightmaps

I took to Google to find real-life volcanoes, caverns and gorges. Check out my research!

Using Nasa’s satellite imagery, I was able to export a height map into World Machine and manipulate the terrain through a variety of nodes. I merged multiple locations together and created a to scale, 3D representation of them. I exported the new height map from WM along with a colour map and a splat map and imported them into UE4.

I textured the landscape by assigning textures to the RGB channels of the splat map and overlaying the colour map. As the landscape was huge, the textures I had assigned tiled very noticeably, and I had to look into producing procedural textures. I have detailed my process later.
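
For illustration, the splat map boils down to a weighted blend per pixel: each channel weights one terrain texture, and the colour map is overlaid on the result. In UE4 this lives inside the landscape material; the plain C++ below, with hypothetical layer names and a simple multiply overlay, just shows the arithmetic:

```cpp
struct Colour { float r, g, b; };

// splat: RGB weights sampled from the splat map.
// rock/dirt/ash: the textures assigned to the R, G and B channels.
// overlay: the colour map sample for this pixel.
Colour BlendSplat(Colour splat, Colour rock, Colour dirt, Colour ash,
                  Colour overlay)
{
    float sum = splat.r + splat.g + splat.b + 1e-6f;  // normalise the weights
    Colour c{
        (rock.r * splat.r + dirt.r * splat.g + ash.r * splat.b) / sum,
        (rock.g * splat.r + dirt.g * splat.g + ash.g * splat.b) / sum,
        (rock.b * splat.r + dirt.b * splat.g + ash.b * splat.b) / sum,
    };
    // One simple choice of overlay: multiply by the colour map.
    return { c.r * overlay.r, c.g * overlay.g, c.b * overlay.b };
}
```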

Procedural Textures

I have previously only worked with basic textures, and as a result of time restrictions, I searched the marketplace to see if I could find something I could use. I really needed to get on with the project since I am focusing on A.I.

I found Planet Venus Landscape, an atmosphere and procedural texture for a Venus-like setting. Perfect!

I learn from experience, and I really learned a lot about how they created this procedural texture by stripping apart their code. The landscape that came with this plug-in was very detailed and took a lot of processing power to render. I switched it out for the one I made earlier and modified the code to work with visibility layers, allowing me to cut holes in the landscape. Furthermore, the material did not work on meshes, so I had to modify the code to use the mesh's coordinates rather than the landscape's when applied to a mesh. I also created a variant without lava and made the atmosphere more like Earth's.

Using the Venus textures as a base, I created a separate material to apply to props and small structures. The new material matched the composition of the landscape and ensured that props did not look out of place.

Cave Building – Meshes

I built a cavern out of rock meshes from Epic's particle demo. However, the meshes had a lot of polys and no LODs, and my editor ran extremely slowly. I found I was able to cut the poly count by more than half without any visual impact, and I created five LODs for seamless transitions. This greatly improved my FPS. I rotated each rock slightly so that they would not look tiled, and grouped others into a single mesh to remove hidden vertices and further increase my FPS.

The Venus material uses tessellation to distort landscapes and create the illusion of pebbles and rocks. As a result of the high tessellation, my rock meshes looked extremely distorted and I had to drastically reduce it. I also modified the colours to look like basalt, a rock commonly found in volcanic regions.

Scene Building

Bridges

I rewatched The Hobbit to get ideas for bridges and drew up some sketches. In The Hobbit, rope is textured onto the planks, which works but doesn't look real. I decided upon using a low poly 3D rope mesh, as seen in games such as Skyrim.

To do this, I tried using Smart Spline Generator (SSG) but found that the random path finding is unreliable over large distances. I was able to wrap the rope around each plank individually, but that would be time-consuming to do. I decided to come back to this later if I have time and carry on creating the structures. I used the rope mesh for torches.

I created a variety of different bridges and LODs in 3ds Max before finding that I could create LODs inside UE4. It was good to know anyway. Using the soft selection tool in 3ds Max, I made some planks look like they were warped upwards from the heat or bent from continuous trampling. I made sure to keep the bridges as low poly as possible, as they would be duplicated on a large scale.

I added different bridges along splines, setting the curve and the gravity for each. SSG does not yet support blueprints, so I had to modify the code to have meshes spawn at either side of the bridge. This was useful when I created the walkway and needed supports at either side.

SSG generates when the scene loads, and as a result each bridge had to be exported as a single mesh. This is a problem for the LOD of long bridges, as they must be rendered in their entirety.

I made a blueprint which I used to spawn individual meshes. The blueprint chose a random mesh, or particular meshes based on different inputs.
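
As a sketch of the same idea in UE4 C++ (my version was a Blueprint; the class and property names here are hypothetical), an actor can re-pick a random mesh from a list whenever it is constructed in the editor:

```cpp
#include "GameFramework/Actor.h"
#include "Components/StaticMeshComponent.h"
#include "RandomMeshActor.generated.h"

UCLASS()
class ARandomMeshActor : public AActor
{
    GENERATED_BODY()
public:
    // Candidate meshes, assigned per instance in the editor.
    UPROPERTY(EditAnywhere, Category = "Meshes")
    TArray<UStaticMesh*> MeshVariants;

    UPROPERTY(VisibleAnywhere)
    UStaticMeshComponent* MeshComponent;

    ARandomMeshActor()
    {
        MeshComponent = CreateDefaultSubobject<UStaticMeshComponent>(TEXT("Mesh"));
        RootComponent = MeshComponent;
    }

    // Runs in the editor whenever the actor is placed or moved.
    virtual void OnConstruction(const FTransform& Transform) override
    {
        Super::OnConstruction(Transform);
        if (MeshVariants.Num() > 0)
        {
            const int32 Index = FMath::RandRange(0, MeshVariants.Num() - 1);
            MeshComponent->SetStaticMesh(MeshVariants[Index]);
        }
    }
};
```

A per-instance random seed property would stop the choice re-rolling every time the actor is nudged, which is one refinement over this sketch.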

Just like the rope, if I had time at the end, I would touch up the edges of the wood in Mudbox. UE4 allows meshes to be re-imported, and all instances are updated automatically.

I converted some bridges and their supports to destructible meshes to make for a nice scene of a boulder smashing through them. However, UE4 currently does not support LODs on destructibles made within the editor. An alternative would be to create the destructible chunks in other software and import them with LODs. Destructible objects are considered essential by UE4.

Cave Stuff

I created some of my own props, such as torches and wooden posts, which I textured in Mudbox. However, others were taken from different sources online: Luos's Free Modular Cave Assets & Sangloo's Modular Assets.

As mentioned earlier, I re-textured most 'natural' assets with my Venus material modification and produced my own collisions and LODs. I also created a blueprint for each asset to switch between different variants. The blueprints rotate the assets within sensible limits, ensuring they are uniquely placed and do not look duplicated.

I wrote a script to snap objects to either the floor or the roof depending on an input. I did this by sending a raycast every time the mesh was moved and snapping the mesh to where the ray hit. This made it much easier to position objects.
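
A hedged sketch of that snapping logic in UE4 C++ (mine was a Blueprint construction script; the function name and trace length are hypothetical):

```cpp
#include "GameFramework/Actor.h"
#include "Engine/World.h"

// Trace straight up or down from the actor and teleport it to the hit point.
void SnapActorToSurface(AActor* Actor, bool bSnapToRoof)
{
    const FVector Start = Actor->GetActorLocation();
    const FVector Direction = bSnapToRoof ? FVector::UpVector : -FVector::UpVector;
    const FVector End = Start + Direction * 100000.0f;  // far enough for any cave

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(Actor);                      // don't hit ourselves

    FHitResult Hit;
    if (Actor->GetWorld()->LineTraceSingleByChannel(Hit, Start, End,
                                                    ECC_WorldStatic, Params))
    {
        Actor->SetActorLocation(Hit.ImpactPoint);       // jump to the surface
    }
}
```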

Collision & Blocking

Collision

Collision in UE4 is quite complicated, but I will break it down and summarise it.

Think of collision as a layer surrounding a mesh that prevents objects from passing through it.

Collision layers can take the form of scalable spheres, boxes or capsules. In UE4, boxes can be converted to convex collision layers, allowing the collision to take the form of the mesh itself, taking accuracy and other parameters into account.

Convex collisions are great for hollow objects or those with cut outs. For example, if I had a hollow room with a hole in the wall in place of a door and I added a box collision around it, I would not be able to enter it. Therefore, if I converted the box to a convex collision then it would take the form of the mesh, allowing me to enter through the hole and into the room inside.

However, you must remember that the more vertices a collision has, the more processing power is required to process it. As a result, convex collisions should only be used where characters or dynamic objects need to be in close proximity to, or inside of, a hollow mesh.

In UE4 it is possible for collisions to block all actors or ignore certain ones. Events can be created for actors that overlap or come into contact with collisions. Moreover, physics bodies can be added so that contact with other dynamic actors can be processed. Collisions can also block or allow ray tracing.
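
For reference, the same options are exposed through UE4 C++; a minimal sketch (my project configured these in the editor's collision panel rather than in code):

```cpp
#include "Components/StaticMeshComponent.h"

void ConfigureCollision(UStaticMeshComponent* Mesh)
{
    Mesh->SetCollisionEnabled(ECollisionEnabled::QueryAndPhysics);   // overlaps + physics
    Mesh->SetCollisionResponseToAllChannels(ECR_Block);              // block everything...
    Mesh->SetCollisionResponseToChannel(ECC_Pawn, ECR_Overlap);      // ...but overlap pawns
    Mesh->SetCollisionResponseToChannel(ECC_Visibility, ECR_Ignore); // ignore ray traces
    Mesh->SetSimulatePhysics(true);                                  // act as a physics body
}
```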

Blocking

Collision is great and ultimately doesn't require too much processing, but large blocking volumes can be added in place of multiple small collision boxes to further improve performance.

My scene’s environment consists of many rock meshes stacked on top of each other to create a cave. Rocks have many small extrusions that a large character would not be able to pass through and a box collision could be used for individual rocks. However, I have stacked rocks in rows to created walls and the roof of the cave and many box collisions would be unnessary. In my scene characters and dynamic actors do not get anywhere close to the roof and rendering collision for the roof would also be unnessary.

Therefore, multiple large blocking volumes can be added to make up the walls and floor of the cave, and smaller volumes can be added to more extreme shapes.

However, in my case, the most performance-intensive collision meshes are the wooden supports on the bridges. I did not use blocking volumes there because I make use of convex collisions to allow my A.I. to pass through or walk on the supports.

I have used small blocking volumes to cover areas the A.I. is not supposed to go, or areas where I have noticed the A.I. getting stuck.

A.I.

Creatures

Orcs

As a result of time restrictions and my focus on A.I. instead of character modelling, I did not want to produce my own model. I searched the internet for something of value but came out empty-handed.

As my machinima is purely for educational purposes, I ported a 'falmer' out of The Elder Scrolls V: Skyrim and into UE4. I sent Bethesda an email asking if they had a problem with this, but I did not receive a reply…

The falmer looks quite like an orc or a goblin would, and so worked extremely well in my scene. It required a few tweaks and some new animations (detailed later).

Trolls

I also borrowed a troll from Skyrim; it works great as a boss, and the different animation set really helps to break up my scene.

For the Falmer and the Troll, I had to convert and edit all the textures and animations as part of the porting process.

Navigation data must be built on the persistent map for A.I. to roam. I have found it does not work if built on a sub-level when using level streaming.

Navigation Mesh

Unreal automatically generates a navigation mesh inside NavMeshBoundsVolumes, to the specifications set up in the level's navigation settings. It will build as close to objects as the bounds of the smallest A.I. character allow. NavMeshBoundsVolumes are box-shaped and can build a nav mesh over unwanted items outside the intended scope; this can be combated by adding a NavMeshModifier with a Null class and cutting away the unwanted areas.

I had a lot of trouble setting up my navigation mesh, as it doesn't read the navigation data attached to meshes in blueprints. For example, I created a blueprint that generated supports for paths. However, only paths should affect navigation data, not the supports. Supports that had a hidden path still affected the navigation mesh, so I replaced the main problematic instances of the blueprint with static meshes.

Navigation Link

NavLinkProxies can be added to specify where Pawns can jump or drop off ledges, allowing them to temporarily leave the NavMesh in order to cross gaps in it. I have used this substantially, ensuring A.I. can drop down anywhere of reasonable height but not jump back up. However, NavLinkProxies are fairly limited and A.I. will only jump at the specified point, so many may need to be added along a path.

Path Finding

Set-Up

To create A.I. in UE4 you need the following (a sketch of the controller step is shown after the list):

  • Character
    • Mesh
    • Animation Class
      • State Machine (main animations e.g. Idle/Run/Jump)
      • Event Graph (in-air check, get terrain footstep sounds)
    • Event Graph (additional functions with access to character variables)
    • Functions For Classes Run By Behavior Tree
  • Blackboard
    • Public variables
  • Behavior Tree
    • Blackboard Conditions
    • Run Blueprint Classes
  • Controller Blueprint Class
    • Fetch Blackboard
    • Get Actor Location
    • Run Behaviour Tree
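
As a sketch of the controller step in UE4 C++ (mine was a Blueprint; the class name and asset wiring are hypothetical), the controller fetches its Behavior Tree asset and runs it when it possesses a pawn:

```cpp
#include "AIController.h"
#include "BehaviorTree/BehaviorTree.h"
#include "OrcAIController.generated.h"

UCLASS()
class AOrcAIController : public AAIController
{
    GENERATED_BODY()
public:
    // The behaviour tree asset, assigned in the editor.
    UPROPERTY(EditDefaultsOnly, Category = "AI")
    UBehaviorTree* BehaviorTreeAsset;

protected:
    virtual void OnPossess(APawn* InPawn) override
    {
        Super::OnPossess(InPawn);
        if (BehaviorTreeAsset)
        {
            // Initialises the tree's Blackboard and starts execution.
            RunBehaviorTree(BehaviorTreeAsset);
        }
    }
};
```
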
My Classes

I have scripted my classes in such a way that allows me to reuse them for each character type.

Unreal has four main behaviour tree node types: composites, decorators, services and tasks.

Composite nodes define the root of a branch and the base rules for how that branch is executed. They can have Decorators applied to them to modify entry into their branch or even cancel it mid-execution. They can also have Services attached that are only active while the children of the Composite are being executed. Composite nodes come in the form of selectors or sequences.

Decorators (conditionals) are attached to either a Composite or a Task node and define whether or not a branch in the tree, or even a single node, can be executed.

Services attach to Composite nodes and execute at their defined frequency as long as their branch is being executed. These are often used to make checks and to update the Blackboard. They take the place of traditional Parallel nodes in other Behavior Tree systems.

Tasks are nodes that “do” things, like move an AI, or adjust Blackboard values. They can have Decorators attached to them.

Classes I have written:

  • Decorator_IsInAttackRange
  • Decorator_CheckArrivedAtTarget
  • Service_FindTarget
  • Service_FaceTarget
  • Service_SetValues
  • Task_GetRandomDestination (sketched in code below this list)
  • Task_PerformEmote
  • Task_ResetPatrolValues
  • Task_MoveToAttack
  • Task_AttackPlayer
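
To show what one of these looks like in code, here is a hedged UE4 C++ sketch of Task_GetRandomDestination (my original was a Blueprint; the blackboard key name and default radius are hypothetical):

```cpp
#include "BehaviorTree/BTTaskNode.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "AIController.h"
#include "NavigationSystem.h"
#include "BTTask_GetRandomDestination.generated.h"

UCLASS()
class UBTTask_GetRandomDestination : public UBTTaskNode
{
    GENERATED_BODY()
public:
    UPROPERTY(EditAnywhere, Category = "Patrol")
    float PatrolRadius = 1000.0f;

    virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp,
                                            uint8* NodeMemory) override
    {
        AAIController* Controller = OwnerComp.GetAIOwner();
        APawn* Pawn = Controller ? Controller->GetPawn() : nullptr;
        if (!Pawn) return EBTNodeResult::Failed;

        UNavigationSystemV1* NavSys =
            FNavigationSystem::GetCurrent<UNavigationSystemV1>(Pawn->GetWorld());

        FNavLocation Point;
        // Pick a reachable nav-mesh point near the pawn so it wanders locally.
        if (NavSys && NavSys->GetRandomReachablePointInRadius(
                Pawn->GetActorLocation(), PatrolRadius, Point))
        {
            // "Destination" is the blackboard vector key a MoveTo node reads.
            OwnerComp.GetBlackboardComponent()->SetValueAsVector(
                TEXT("Destination"), Point.Location);
            return EBTNodeResult::Succeeded;
        }
        return EBTNodeResult::Failed;
    }
};
```
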
Go To Location and Patrols

There are a couple of ways of going about generating crowds, but I only require the A.I. to move in a set direction in particular shots of the cinematic. A previously used character may be asked to move to another location later in the cinematic.

To do this, I have placed collisions around the map and attached a collision actor to the characters that I want to move.

Each character has a patrol and a wait flag. When patrolling the characters walk around their local radius and if they stray too far from their initial location, they will go back to it.

When the wait flag is removed, they will run to a location near the assigned collision before patrolling the new location.

I understand this may not be the best way of doing this, but it works. I do experience some issues whereby it is incredibly hard to predict where the A.I. will be in the next shot. In a film, each shot would be rendered separately and the characters told exactly where to go, whereas this cinematic is rendered in one take.

Avoidance

The character movement component in Unreal comes with a prebuilt avoidance system that steers characters around one another. However, my navigation paths are very narrow, not much wider than an individual orc, which results in the A.I. blocking each other's paths.
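
For context, the system in question is the RVO avoidance built into the character movement component; enabling it takes a couple of lines in UE4 C++ (the radius value below is illustrative):

```cpp
#include "GameFramework/Character.h"
#include "GameFramework/CharacterMovementComponent.h"

void EnableAvoidance(ACharacter* Character)
{
    UCharacterMovementComponent* Move = Character->GetCharacterMovement();
    Move->bUseRVOAvoidance = true;                 // steer around nearby agents
    Move->AvoidanceConsiderationRadius = 200.0f;   // how far to look for neighbours
}
```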

I am currently working on a solution as detailed in the bugs section.

Ladders

I previously mentioned that I used nav link proxies to allow the A.I. to jump between gaps in the navigation mesh. Nav link proxies do the trick, but I don't want my A.I. jumping up to higher platforms because that would look unnatural. Instead, you must either play a climbing animation or have them walk in a forward motion whilst they move upwards.

One problem I had was that all my ladders are positioned at different angles, which would require multiple animations that I would not have time to create. Therefore, I went with the latter option.

Ladders must be able to move multiple A.I. at the same time in the one direction and prevent other A.I. from attempting to go the opposite way whilst in use. To do this, I created variables on each character indicating whether it is able to climb ladders. However, this required duplicating my code for each character.

I put a trigger at the top and bottom of each ladder; when activated, it moves the character to the opposite end. I set the character's movement mode to flying to move them upwards and escape gravity, and a timer automatically forces the character back to walking should they not arrive at the second trigger.
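
A hedged UE4 C++ sketch of that ladder trick (mine was a Blueprint; the fail-safe timing is illustrative): flying mode disables gravity for the climb, and a timer restores walking if the far trigger is never reached:

```cpp
#include "GameFramework/Character.h"
#include "GameFramework/CharacterMovementComponent.h"
#include "TimerManager.h"

void StartClimb(ACharacter* Climber, float FailSafeSeconds)
{
    // Escape gravity so the character can be driven straight up the ladder.
    Climber->GetCharacterMovement()->SetMovementMode(MOVE_Flying);

    // Fail-safe: force walking again if the top trigger never fires.
    FTimerHandle Handle;
    Climber->GetWorldTimerManager().SetTimer(
        Handle,
        FTimerDelegate::CreateLambda([Climber]()
        {
            Climber->GetCharacterMovement()->SetMovementMode(MOVE_Walking);
        }),
        FailSafeSeconds, /*bLoop=*/false);
}

// Called from the trigger at the other end of the ladder.
void FinishClimb(ACharacter* Climber)
{
    Climber->GetCharacterMovement()->SetMovementMode(MOVE_Walking);
}
```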

I wrote a script to add multiple ladders to the same blueprint; their positions can be changed, updating the top and bottom triggers.

Animation

Machinery

The scene still looked a bit lifeless, so I filled it out with some animated mechanical props. For example, I created rotating wheels by adding a y-axis rotation to a mesh every game tick. Sounds and particles helped the machinery look more believable. Others, such as a crane, rotate on their z-axis at a variable velocity.
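
A minimal UE4 C++ sketch of such a prop (mine were Blueprints; the names are hypothetical). Note that scaling the rotation by DeltaTime keeps the spin speed frame-rate independent, something the tick-based bugs section later comes back to:

```cpp
#include "GameFramework/Actor.h"
#include "RotatingWheel.generated.h"

UCLASS()
class ARotatingWheel : public AActor
{
    GENERATED_BODY()
public:
    UPROPERTY(EditAnywhere, Category = "Motion")
    float DegreesPerSecond = 45.0f;

    ARotatingWheel() { PrimaryActorTick.bCanEverTick = true; }

    virtual void Tick(float DeltaTime) override
    {
        Super::Tick(DeltaTime);
        // Spin around the local Y axis (pitch); a crane would use Z (yaw).
        AddActorLocalRotation(FRotator(DegreesPerSecond * DeltaTime, 0.0f, 0.0f));
    }
};
```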

Humanoids

As previously mentioned, I borrowed the falmer and troll assets from Skyrim. I also borrowed their animation sets but had to set them up in Unreal, which is a good bit of work.

I converted each animation separately to FBX before importing them into Unreal. I then created an animation controller consisting of a state machine.

State machines break up a skeletal mesh's animations into a series of states governed by Transition Rules that control how states blend into each other. State machines are largely used for the character's main processes, such as idle, run, jump and attack.

Blendspaces

Blendspaces are used to seamlessly blend animations into each other based on a variable. In my case, I used a blendspace to blend the Idle/Walk/Run animations according to the character's speed.

A blueprint is used to combine all the character's assets. Epic allows you to use their Character Class as a base, which really speeds up development. Through it, you can pick your AI Controller Class and set up variables regarding the character's movement. That's half the work, so thanks Epic! In the blueprint you need to add a mesh and pick the animation controller, as well as a collision component. Capsules work best for humanoids.

For the navigation mesh to update correctly, the character must be added as an 'Agent' to the navigation system in Project Settings. The agent radius and height refer to the radius and height of the collision component added to the character. Unreal works by matching each character to the agent with the closest values.


Sequences

Overview

Level sequences have replaced the legacy Matinee feature and are Unreal's solution for cinematics and complex scripted events. They bring together cameras, audio, particles, events and all meshes in the scene on a timeline.

Unlike other animation software, Unreal’s timeline can access and edit game variables which can be scripted to fire off other events and sequences.

The main cinematic was created from a master sequence containing sub-sequences, each made up of different camera cuts. However, Unreal is incapable of any fancy transitions and only comes with a basic fade in and out of black. Therefore, it is quite difficult to prevent hard cuts without post editing.

Events

Game objects such as audio or meshes can be added directly onto a sequence and have their values changed. This works great for playing audio at a certain time but, these changes will not impact the actual game as they revert back to their default values once the sequence finishes.

On some occasions I require objects to retain their new values. This can be done through events. Events can be added to the timeline which then run a custom event on the level blueprint. For example, when the troll bursts through the doors in the final cut, the doors need to stay open in the actual game.

I have programmed a skip feature to allow the player to skip cutscenes. This has led to problems where the player skips before events are fired. I have temporarily solved this by making the sequences that impact gameplay unskippable.

Renders

Level sequences can be rendered as a series of images or an AVI. This allows me to beef up the draw distance, shadows and textures and have the engine render each frame in its own time. This quality would be impossible to render in real time on consumer desktops.

As a result, audio cannot be recorded with the render, and a separate real-time render must be produced in order to capture the audio. I did not take much notice of this until I began production, when I remembered I am using A.I. and no two takes will ever be the same. Dang.

As detailed later, my A.I. make sounds, and those recorded from a real-time render will not necessarily line up with the location of the A.I. in the fully rendered sequence. So I have a pretty big problem.

A solution to this would be to always know where the A.I. should be, but this requires rendering each and every camera shot separately and manually scripting each character. That defeats the purpose of my original goal to create a machinima using crowd generation to populate an immersive environment. Therefore, I will have to work at optimising the game to run as smoothly as possible in real time at a reasonable quality.

Particles

I had previously only worked with particles in 3ds Max and had to learn an entirely new process to create particles inside UE4. I followed a tutorial by DigitalTutors (DT) to give me a head start before making some myself. Through DT, I learned how to work with different particle types.

  • Sprites
  • Meshes
  • Ribbon Trails
  • Beam Emitters
  • Anim-trails

Below I have included links to see the particles I have created.

Particle Types

Sprites are much easier to process and are used predominantly when many particles must be emitted at the same time or in quick succession. As a result, they are commonly used for smoke, sparks and flames. Sprites are 2D and therefore rotate to face the camera, preventing the particle from disappearing. Engines like UE4 are able to process sprites on the GPU, which allows the output of significantly more particles. However, GPU sprites are generally restricted in terms of functionality, such as light emission.

View Sprite GPU/CPU
View Sprite LOD

Meshes are 3D and therefore heavier to process. They are usually used for larger particles such as falling rocks or leaves. They work hand-in-hand with sprites and often benefit from ribbon trails.

Ribbon trails are usually created from sprites and are very useful when creating projectile effects.

View Mesh and Ribbon Trail

Beam emitters create an effect between two points and are usually used to simulate electricity effects or bullet trails.

View Beam Emitter

Anim-trails are particles that can follow the bones of an animation mesh and can be used to produce sword slashes or magic casts.

View Anim Trail

Triggering Particles

I created a blueprint to simulate a gust of wind stoking fires. I created my own flames and embers using sprites from Epic's elemental demo and attached them to broken pot meshes. I added two trigger volumes, one inside the other. When a movable actor enters the outer trigger, small embers are emitted once and fly out of the fire in a circular motion. The embers continue to be produced whilst the actor is within the inner trigger and dissipate when the actor leaves it.
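
Sketched in UE4 C++ (the original was a Blueprint; the component names are hypothetical), the two-trigger setup looks roughly like this:

```cpp
#include "GameFramework/Actor.h"
#include "Components/BoxComponent.h"
#include "Particles/ParticleSystemComponent.h"
#include "EmberFire.generated.h"

UCLASS()
class AEmberFire : public AActor
{
    GENERATED_BODY()
public:
    UPROPERTY(VisibleAnywhere) UBoxComponent* OuterTrigger;
    UPROPERTY(VisibleAnywhere) UBoxComponent* InnerTrigger;
    UPROPERTY(VisibleAnywhere) UParticleSystemComponent* Embers;

    AEmberFire()
    {
        OuterTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Outer"));
        InnerTrigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Inner"));
        Embers = CreateDefaultSubobject<UParticleSystemComponent>(TEXT("Embers"));
        RootComponent = OuterTrigger;
        InnerTrigger->SetupAttachment(RootComponent);
        Embers->SetupAttachment(RootComponent);
        Embers->bAutoActivate = false;              // idle until something walks in
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        OuterTrigger->OnComponentBeginOverlap.AddDynamic(this, &AEmberFire::OnOuterEnter);
        InnerTrigger->OnComponentEndOverlap.AddDynamic(this, &AEmberFire::OnInnerLeave);
    }

    UFUNCTION()
    void OnOuterEnter(UPrimitiveComponent* OverlappedComp, AActor* Other,
                      UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                      bool bFromSweep, const FHitResult& SweepResult)
    {
        Embers->Activate(/*bReset=*/true);          // one burst as the actor approaches
    }

    UFUNCTION()
    void OnInnerLeave(UPrimitiveComponent* OverlappedComp, AActor* Other,
                      UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
    {
        Embers->Deactivate();                       // embers dissipate on leaving
    }
};
```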

Socket Emission

Whilst researching particles, I decided to go back and revamp one of my earlier pieces, which would benefit from my newly learned skills. I attached a fireball particle to a socket and had it follow the hands of the character.
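
The attachment itself is essentially one call in UE4 C++ (a sketch; the "hand_r" socket name is hypothetical and would be created on the skeleton in the editor):

```cpp
#include "Components/SkeletalMeshComponent.h"
#include "Particles/ParticleSystemComponent.h"

void AttachFireballToHand(UParticleSystemComponent* Fireball,
                          USkeletalMeshComponent* CharacterMesh)
{
    // The particle component now follows the hand socket through every frame
    // of animation, so the fireball tracks the cast gesture automatically.
    Fireball->AttachToComponent(CharacterMesh,
        FAttachmentTransformRules::SnapToTargetNotIncludingScale,
        TEXT("hand_r"));
}
```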

Lighting

Light Types

UE4 has four light types: Directional, Point, Spot, and Sky. I have taken the following descriptions from the UE4 knowledge base, as I can't word them any better.

Directional lights are primarily used as your primary outdoor light or any light that needs to appear as if it is casting light from extreme or near infinite distances.

Point lights are your classic “light bulb” like light, emitting light in all directions from a single point.

Spot lights emit light from a single point, but have their light limited by a set of cones.

Sky lights capture the background of your scene and apply it as lighting to your level’s meshes.

Light Mobility

Lights in UE4 come with three mobility settings: Static, Stationary, and Movable. Each dramatically changes the way the light works and its impact on performance.

Static lights have no overhead during a running game; their colour, brightness and shadows are baked into the meshes. They have extremely little impact on performance as their data is pre-calculated.

Movable lights are the opposite and are fully dynamic. They are capable of changing all of their properties at runtime and make the perfect light source for items carried by the player or for moving objects. However, they require the most processing power to render and should only be used if necessary.

Stationary lights, however, are a happy medium between static and movable. They can change their colour and brightness at runtime but cannot move, rotate or change their influence size. They are able to cast dynamic shadows and are perfect for mounted light sources such as torches and candles.

Scene Lighting

I have scattered torches and pot fires across the map to illuminate dark corridors and bring the scene to life. These meshes are blueprinted and come with individual stationary point lights. The values of stationary lights can be edited through the blueprint, but I found that an emissive, slowly flashing light function material attached to the light source better mimics torch and fire flicker. I used stationary lights for their dynamic shadows, which help produce a realistic and more dramatic scene.

I have also scattered static point lights with a large attenuation radius across the cavern floor to light up the floor just that little bit more.

Post Processing

Torches have a very limited range, and the cavern was still very dark. I was able to combat this through post processing. Using multiple layers of post processing, I was able to give different areas a different hue and brightness. For example, the water room has a natural blue/green hue, whereas the main chamber is red and brown.

Post Processing allowed me to alter the contrast and add bloom and lens flares that helped to brighten the whole scene as well as make it look a little more believable. I added a dirt mask to the post processing which gives a really nice effect when illuminated through lens flares.

Cameras

Animation Curves

Over a timeline, I have cameras pan across my environment, with some targeting and following certain actors. I have used auto-key when animating cameras. Auto-key draws a path between two locations over the timeline, which the camera follows. The path can be edited through the curve editor. Unreal's auto-curve works well but often adds acceleration, which is not necessarily what you want when switching between cameras. As a result, I have modified the curvature of most paths.

As mentioned previously, it is difficult to make soft camera cuts without transitions, and therefore I had to think outside the box. Some possible methods of reducing hard cuts are to keep a constant camera acceleration between shots, or to have the player's/viewer's focus on an actor that is present in both scenes.

I have used linear curves at the beginning and end of some shots to make for a smooth transition. I have also centred actors in some shots, as well as overpopulated the scene at the end and beginning of others, to distract the viewer from the camera cuts.

Post Processing

In Unreal, cameras can also have post processing. I have used this only to apply the world's post processing and, for the machinima, to brighten particular scenes.

However, as cameras pan through meshes, I experienced issues with depth of field and auto exposure where the scene darkens and then slowly brightens. Some scenes required the camera to pass through multiple meshes, and these features proved to destroy immersion rather than add to it. I combatted this by temporarily disabling depth of field and auto exposure whilst the camera overlaps other actors.

Audio

I am in no way a sound designer, so I had to outsource all the audio. Unreal provided some really nice material with their demo content, such as torch sound effects, but others such as breaths, footsteps and scores had to be sourced online. A list of sounds that I used can be found on my freesound.org account.

Footsteps

Audio is extremely important in creating immersion and minor effects really do go a long way. For example, I have added surface dependent sounds and particle effects to walking animations. Walking on wood should not sound like walking on stone.

I did this by adding Notify events to the animations at the points where the character's feet hit the ground. In the character blueprint, every time the notify is activated, it sends a line trace to hit the surface and return the surface type. Surface types are set up in Project Settings -> Physics -> Physical Surface. A physical material needs to be attached to the landscape/mesh that the character is walking on; the physical material contains the surface type. When the line trace returns, we switch on the surface type and play the selected sound/particle at the character's location.
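
Expressed as a hedged UE4 C++ sketch (mine was a Blueprint; the sound parameters and surface slots are hypothetical), the notify handler looks roughly like this:

```cpp
#include "GameFramework/Character.h"
#include "Engine/World.h"
#include "Kismet/GameplayStatics.h"

void PlayFootstep(ACharacter* Character, USoundBase* StoneSound, USoundBase* WoodSound)
{
    const FVector Start = Character->GetActorLocation();
    const FVector End = Start - FVector(0.0f, 0.0f, 200.0f); // trace down to the floor

    FCollisionQueryParams Params;
    Params.AddIgnoredActor(Character);
    Params.bReturnPhysicalMaterial = true;          // we need the surface type back

    FHitResult Hit;
    if (!Character->GetWorld()->LineTraceSingleByChannel(Hit, Start, End,
                                                         ECC_Visibility, Params))
        return;

    // The physical material on the floor carries the surface type set up in
    // Project Settings -> Physics -> Physical Surface.
    switch (UGameplayStatics::GetSurfaceType(Hit))
    {
    case SurfaceType1: // e.g. stone
        UGameplayStatics::PlaySoundAtLocation(Character, StoneSound, Hit.ImpactPoint);
        break;
    case SurfaceType2: // e.g. wood
        UGameplayStatics::PlaySoundAtLocation(Character, WoodSound, Hit.ImpactPoint);
        break;
    default:
        break;
    }
}
```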

Breath/Grunt

I have added breath sound effects to the player and grunts to the monsters. When developing the player, I wanted breaths to be subtle, so they are programmed to only play after the player has been running continuously for three seconds, whereas the grunts are set up much like footsteps and activated from the characters' animations.

I have used a wide variety of breaths and grunts, splitting each individual sound into its own WAV file and using a Sound Cue to combine similar sounds. Each cue is set to select a sound at random. This makes the sounds much more believable, as loops cannot be recognised.

The different monster types use the same grunt cue but with a different pitch. The troll has a much deeper grunt. This helps to reduce the project size as I am re-using as much as possible.

Scores

I have purchased the rights to use music scores from Devesh Sodha. His music is great and really helps to bring my machinima to life.

I made use of ‘Wishes Come True’, ‘The Avengers’ and ‘Descendent Of God’.

Polishing

User Interface

User interfaces can make or break games. MMOs largely require complicated interfaces, as the user is often given full control of their playthrough. Linear shooters are generally simpler, displaying only the important and relevant information, i.e. health and ammo.

I am struggling to find a genre for my game. I want to say walking simulator, yet I have some interactive elements. Anyhow, a user interface would take away from the beauty of the environment but is necessary for storytelling.

The main menu is simple and is scripted to work with a controller as well as mouse and keyboard. The background has been programmed to change depending on the player's progress. So far there is only one level, so I have two cameras that swap every 20 seconds to show this.

I plan to have a scene selection and a cinematic selection if I have time.

In-game, the user is presented with a crosshair that disappears during cinematics and can be toggled on and off.

Some cutscenes and game events trigger text that provides a background to the player’s story and guides the player through the level. The text is programmed to show letter by letter at the speed of a narrator to give the sense that they are being read a story. Altogether, I believe the HUD is not intrusive and allows the player to play and explore at their leisure.
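
A hedged UE4 C++ sketch of the letter-by-letter reveal (my HUD used UMG/Blueprints; this class and its names are hypothetical): a looping timer appends one character per interval until the line is complete:

```cpp
#include "GameFramework/Actor.h"
#include "TimerManager.h"
#include "StoryNarrator.generated.h"

UCLASS()
class AStoryNarrator : public AActor
{
    GENERATED_BODY()
public:
    // The partially revealed line; a HUD widget binds to this.
    UPROPERTY(BlueprintReadOnly)
    FString DisplayedText;

    void BeginNarration(const FString& Line, float SecondsPerLetter)
    {
        FullLine = Line;
        ShownLetters = 0;
        GetWorldTimerManager().SetTimer(TypeTimer, this,
            &AStoryNarrator::RevealNextLetter, SecondsPerLetter, /*bLoop=*/true);
    }

private:
    FString FullLine;
    int32 ShownLetters = 0;
    FTimerHandle TypeTimer;

    void RevealNextLetter()
    {
        ++ShownLetters;
        DisplayedText = FullLine.Left(ShownLetters);
        if (ShownLetters >= FullLine.Len())
            GetWorldTimerManager().ClearTimer(TypeTimer); // line finished
    }
};
```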

Level Streaming

Wait, I thought you said polishing?

When nearing the end of my game, I had an epiphany, but first I had to learn level streaming. Level streaming is a method of layering assets so that they can be quickly swapped in and out.

My epiphany was to base the story around level streaming. The player would wander through a lush cave and have flashes of a time when orcs ruled the underworld. So not only would level streaming add to the gameplay, it would also serve as a massive performance boost.

In Unreal, a persistent level is used to group all sub-levels. Sub-levels can either be always loaded or loaded in and out through blueprints, and their lighting data can either be baked across all levels or be level-dependent.

I moved all rocks and props to an always-loaded level, moved all foliage and natural lights to a blueprint-loaded level, and moved all the A.I., fire effects and torches to another blueprint-loaded level. All gameplay elements, such as triggers, were moved to an always-loaded level.

Now, as the player progresses through the game, Unreal automatically and seamlessly loads levels in and out depending on whether they are in view. It's really cool. I also manually toggle levels during sequences, and this has had a noticeable improvement on the game's FPS.
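
Toggling a sub-level from code is a single latent call in UE4 C++ (a sketch; the level name "OrcEra" is hypothetical, and my toggles were done through the equivalent Blueprint nodes):

```cpp
#include "Kismet/GameplayStatics.h"

void ShowOrcEra(UObject* WorldContext)
{
    FLatentActionInfo Latent;
    Latent.UUID = 1; // unique id so repeated latent calls don't collide

    UGameplayStatics::LoadStreamLevel(WorldContext, TEXT("OrcEra"),
        /*bMakeVisibleAfterLoad=*/true, /*bShouldBlockOnLoad=*/false, Latent);
    // UGameplayStatics::UnloadStreamLevel does the reverse when the flash ends.
}
```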

However, every time the A.I. are loaded, their behaviour trees must be re-computed, causing the A.I. to temporarily break. Until I find a solution, I have had to move them to the persistent level.

Should I have time to spare, I would love to add foliage to the main cavern to show the full power of level streaming.

Bugs

As I add new features, I am always trying to maintain stability. In the game’s current state, there are a few issues regarding A.I., ladders and frames per second.

A.I.

My additional avoidance script is not perfect, and in some cases the characters struggle to get past each other. The A.I. is also quite unresponsive and does not find a new path immediately when pushed off the nav mesh.

Ladders

I have measures in place to reduce issues by only allowing characters heading in the same direction to use the ladder at the same time. However, some characters block the exit of the ladder and prevent current climbers from finishing their animation.

Destruction

Collision remains on destructible supports after a bridge collapses.

Frames Per Second

When I started programming, I wrote a lot of features around the game tick. Scenes with less content render faster and so have a higher FPS and compute game loops more often.

As a result, events that are based on ticks happen more often in areas with a higher FPS and less often in areas with a lower FPS. This is a huge problem for me, as character movement is tick-based and so characters move slowly in the main cavern. Events are also less responsive.
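
The standard fix, shown as a minimal UE4 C++ sketch (the actor class is hypothetical), is to scale per-tick changes by DeltaTime so behaviour is expressed per second rather than per frame:

```cpp
void AMyCrane::Tick(float DeltaTime)
{
    Super::Tick(DeltaTime);

    // Frame-rate dependent (what I originally wrote): one unit per tick,
    // so movement crawls in the heavy main cavern and races in small rooms.
    // AddActorWorldOffset(FVector(1.0f, 0.0f, 0.0f));

    // Frame-rate independent: 50 units per second at any FPS.
    AddActorWorldOffset(FVector(50.0f * DeltaTime, 0.0f, 0.0f));
}
```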

Pre-rendering

Pre-rendered videos in Unreal do not come with sound. My A.I. is random, so audio captured in real time does not necessarily line up with the pre-rendered video, and some sounds are missing entirely.

Level Streaming – Game Breaking

I split different assets across different levels and merged them together for faster rendering. But events that are stored on these sub-levels sometimes fail to fire from a sequence. As a result, the packaged game and the pre-render are full of bugs, while playing in the editor works fine. I believe this to be a bug within the latest build of UE4.

Reflection

Goals – Development Pipeline

I have taken on many roles of the game development pipeline throughout the creation of this short game, and this has helped to reshape the direction of work that I would like to enter. I am a fully capable artist and animator but enjoy taking on the challenges faced by developers. I believe that I would be the perfect technical artist.

I began the project with little experience in each role and worked extensively with Unreal each day to achieve my high targets. I learned the pipeline process for each role by tinkering with Unreal tech demos and free community projects.

The modern animation pipeline is changing as technology advances. The presence of artificial intelligence is becoming increasingly common in film and games. A.I. and programmatic animation have been able to severely reduce development time, as they can automatically compute time-demanding processes such as keyframing.

Goals – Technical Restrictions

Technological advancement has led to improving home computers, allowing developers to push better graphics and more resource-intensive scripts. However, FPS is still a major issue in game and film development, and throughout development I had to ensure that assets and scripts were refined and optimised.

In recent years, facial patterns and speech databases have been used to construct procedurally generated conversations and animations in games. In film, the past few years have seen CG actors commonly replacing distant actors and stunt men. I believe that as technology improves, games and film will become more intertwined as animation becomes more dependent on artificial intelligence.

Project

I have created a short game that is playable with a variety of input devices and across multiple resolutions. Game settings can be changed to modify the graphical quality. I have created multiple cinematics that are triggered within the game or can be played from the scene selector.

I have made use of crowd generation to spawn orcs on a large scale and developed artificial intelligence to allow them to interact with their environment.

I have put a lot of effort into small gameplay features that you may not notice on first play through but together help to create a robust and believable game.

Closing

I encountered many problems, mostly brought about through my lack of experience with Unreal. I had previously written in C#, but Blueprints was an entirely new language and I had to learn and recognise new terminology.

The huge learning curve meant that I had to take time to learn new features and processes. As a result, I now have excellent experience with Unreal and 3D software.

I pushed all the skills that I learned at university and on industrial placement, and picked up new skills and techniques. I have noticed a tremendous improvement in my work, and I would love to be part of a team where I can continue learning and pushing my skills further.

When taking on the role of an environmental artist, I invested a HUGE amount of time learning World Machine. I made some cool environments but did not end up using my new skills in the final project.

Tackling FPS issues has been a lot of fun, as I have been able to use my knowledge of 3D and programming to understand where and why performance hits occurred and what could be done to combat them.

I believe I have met all the initial requirements, however, if I was able to take on this project again, I would be able to better streamline my game development pipeline with my newly learned skills.