Underworld Ep.1

My first steps as a technical artist are detailed in my Presentation Documentation.

Interested in playing within the underworld? Drop me an email.

I ran into a few problems rendering the animation. They are detailed in the blog.


Some of my favourite angles from the level.

[gview file="https://aarontrotter.com/wp-content/uploads/2017/05/Development-Booklet.pdf"]


I have put together a development booklet.

For full research and development process, you can read my blog.

Presentation Documentation

The aim of my project was to follow each role of the game development pipeline and investigate the importance of programmatic expressions in modern animation. In doing so, I would establish the technical restrictions of artificial intelligence in animation and explore the future of animation.

I set my scene in the depths of an orcish stronghold as I believed that a fantasy setting would give me the scope to take full control over the environment. This would allow me to be creative in tackling challenges such as light placement. It would also encourage me to learn the software and the different processes involved in creating a game.

To choose the right software, I looked at what the industry uses and its processes for creating an animation. I found that animation is traditionally done through keyframes and motion capture, but the process is tedious and time-consuming. More recently, as games expand and become increasingly non-linear, minute animations such as facial features are becoming audio-driven; however, the technology is too immature to use on a large scale. Animation software often provides the ability to insert expressions to aid development, and these are commonly used in the creation of motion paths. However, algorithmic animation is much better suited to game engines, as they provide far more support in terms of code libraries.

Rendering animated videos through specially designed software such as Mental Ray can take hours, days, or even weeks depending on how much data needs to be processed, whereas game engines render in realtime! Game engines achieve this by pre-calculating data and requiring well-optimised meshes and lighting. They are simply not designed to render extremely detailed scenes and are incapable of rendering complex effects, although the technology is improving.

To achieve my goals, I set out to develop a game episode that focuses on the use of short cinematics to narrate the game and set its atmosphere. Cinematics run in realtime and update in-game features to allow the player to proceed to the next stage. I would use crowd generation to produce an immersive and thrilling experience. The game must ship in a playable state and run smoothly across different hardware and platforms.

As I had no experience with environments, I was not sure where to start. I looked at how environments are created in games and film and saw that there are three methods: heightmaps, voxels, and meshes. Unreal has no out-of-the-box support for voxels, so I decided to first create the exterior using a heightmap to better visualise the interior structure, which I would build with meshes.

I built a cavern out of rock meshes from Epic's particle demo, but the engine began to run slowly. I read up on different methods of improving the game's FPS and found LOD to be highly effective. These rock meshes had no LODs and a lot of polys. I noticed that I could cut the poly count by more than half without any visual impact, and created five LODs for a seamless transition. This greatly improved my FPS. I then rotated each rock slightly so that they would not look tiled, and grouped others into a single mesh to remove hidden vertices and further increase my FPS.
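The idea behind LODs can be sketched in a few lines: as an object recedes from the camera, the engine swaps in progressively lower-poly versions of the mesh. Here is a minimal Python sketch of that selection logic; the distance thresholds are illustrative, not the values Unreal or the project actually used.

```python
# Minimal sketch: picking a level-of-detail (LOD) index from camera
# distance. Thresholds are illustrative placeholders.

def select_lod(distance, thresholds=(10.0, 25.0, 50.0, 100.0, 200.0)):
    """Return the LOD index (0 = full detail) for a given distance."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)  # beyond the last threshold: lowest detail

print(select_lod(5.0))    # 0: full-detail mesh up close
print(select_lod(120.0))  # 4: heavily reduced mesh far away
```

In practice Unreal selects LODs by screen size rather than raw distance, but the principle of stepped detail reduction is the same.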

I think there was a lot of confusion over what exactly I did myself. I made bridge meshes, wooden posts, and a torch and rope mesh. I had to construct the bridges and rope so that they could be used with a spline generator that I purchased. The spline generator works by spawning my meshes between two points. It saved a lot of time, as I was able to reuse the same meshes to construct bridges of different lengths and falloff. I also built my own particles and had to learn the different methods of particle creation and the purpose each one serves.
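The core of what a spline generator does, spawning mesh instances between two points, can be sketched as evenly spaced placement positions along a line segment. This is a simplified Python illustration (real spline tools also follow curvature and orient each instance along the tangent); the function name and spacing value are my own, not from the purchased tool.

```python
# Minimal sketch of spline-based placement: position mesh instances at
# even intervals between two endpoints.

def placement_points(start, end, spacing):
    """Return (x, y, z) positions spaced `spacing` apart from start to end."""
    sx, sy, sz = start
    ex, ey, ez = end
    length = ((ex - sx) ** 2 + (ey - sy) ** 2 + (ez - sz) ** 2) ** 0.5
    count = int(length // spacing) + 1  # one instance per interval, plus the start
    points = []
    for i in range(count):
        t = (i * spacing) / length  # normalised position along the segment
        points.append((sx + (ex - sx) * t,
                       sy + (ey - sy) * t,
                       sz + (ez - sz) * t))
    return points

# A 10-unit bridge with planks every 2 units needs 6 placement points.
print(len(placement_points((0, 0, 0), (10, 0, 0), 2.0)))  # 6
```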

I purchased a procedural lava landscape material, which I modified to work on meshes. I used it as a foundation to build my own procedural wood texture, which I applied to every wooden object in my scene. Procedural textures are not perfect, but they heavily reduced development time and allowed me to work on other items. I also ported some characters out of a game and into Unreal; I had to modify their bones and edit their textures to fit my scene.

Scene building is slow, so I wrote scripts that could swap between related assets and randomise their rotation and scale. This allowed me to quickly place a lot of assets and ensured that they did not look repeated.
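The logic of such a placement helper is simple: pick a random variant from a set of related assets and give it a random yaw and a slight scale variation. This is a hedged Python sketch of the idea (the actual scripts were Blueprints); the asset names and ranges are illustrative.

```python
import random

# Minimal sketch of a placement helper: choose a random asset variant
# and randomise its rotation and scale so placements don't look tiled.

def randomise_placement(variants, rng=random):
    asset = rng.choice(variants)
    yaw = rng.uniform(0.0, 360.0)   # full rotation about the up axis
    scale = rng.uniform(0.9, 1.1)   # subtle size variation
    return {"asset": asset, "yaw": yaw, "scale": scale}

rng = random.Random(42)  # seeded so repeated runs place identically
placement = randomise_placement(["rock_a", "rock_b", "rock_c"], rng)
```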

Now that the scene was built, I focused on meeting my original aim of investigating the importance of programmatic animation. In a publication, Robin Hegg wrote that animating a character, giving it movement, presents a new set of challenges: viewers' eyes are very sensitive to jerky or unnatural movements. When animating props in my scene, I made sure that when they changed direction they first slowed down, or chugged along as they rotated. This helped to bring the scene to life and make it look as if it was really inhabited.
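Slowing into and out of a turn is the classic ease-in/ease-out pattern. A minimal sketch of the idea, using the standard smoothstep curve (the specific curve is my choice for illustration, not necessarily the one used in the project):

```python
# Minimal sketch of easing: instead of snapping a prop to a new
# direction, interpolate its angle along an ease-in/ease-out curve so
# it slows down at the start and end of the turn.

def smoothstep(t):
    """Ease-in/ease-out curve: 0 -> 0, 1 -> 1, flat at both ends."""
    return t * t * (3.0 - 2.0 * t)

def eased_angle(start, end, t):
    """Prop angle at normalised time t in [0, 1]."""
    return start + (end - start) * smoothstep(t)

print(eased_angle(0.0, 90.0, 0.5))  # 45.0: halfway through the turn
```

Because smoothstep's slope is zero at both ends, the prop's angular velocity ramps up and back down rather than jumping, which is exactly what avoids the jerky motion viewers are sensitive to.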

In recent years it has become increasingly common to see A.I. in the film industry, particularly in war films and animated movies. This has severely reduced the need to keyframe every movement of every character. So I had a go at it. I ported some animations out of Skyrim for the characters and made my own, such as the forward walk. I created a blendspace for the character, allowing Unreal to handle how animations blend into each other.
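A 1D blendspace boils down to computing how much each neighbouring animation contributes for a given input parameter, such as movement speed. Here is a hedged Python sketch of that weighting for two samples, walk and run; the sample speeds are illustrative values, not the project's.

```python
# Minimal sketch of a 1D blendspace: given a speed parameter, return
# how strongly the walk and run animations each contribute. Unreal
# does this across a grid of samples; two samples suffice to show it.

def blend_weights(speed, walk_speed=150.0, run_speed=400.0):
    """Return (walk_weight, run_weight); weights sum to 1 between samples."""
    if speed <= walk_speed:
        return 1.0, 0.0
    if speed >= run_speed:
        return 0.0, 1.0
    t = (speed - walk_speed) / (run_speed - walk_speed)
    return 1.0 - t, t

print(blend_weights(275.0))  # (0.5, 0.5): an even walk/run mix
```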

The film industry has used crowd generation for background assets for years. Crowds can make immersive virtual worlds more realistic, interesting, and compelling, and crowd generation has recently seen use in VR games to give the player greater spatial awareness. To populate my scene, I used crowd generation to spawn a lot of monsters and order groups of them to move to selected destinations.
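The spawn-and-order step can be sketched as: scatter agents at random points within a region, then assign each to a group heading for a shared destination. This Python sketch is illustrative only; the bounds, counts, and round-robin group assignment are my assumptions, not the project's actual Blueprint logic.

```python
import random

# Minimal sketch of crowd generation: spawn agents at random positions
# in a rectangular region and order groups of them to shared destinations.

def spawn_crowd(count, bounds, destinations, rng=random):
    (min_x, min_y), (max_x, max_y) = bounds
    crowd = []
    for i in range(count):
        crowd.append({
            "id": i,
            "pos": (rng.uniform(min_x, max_x), rng.uniform(min_y, max_y)),
            # round-robin: each agent joins a group with a shared destination
            "destination": destinations[i % len(destinations)],
        })
    return crowd

orcs = spawn_crowd(20, ((0, 0), (100, 100)), [(10, 90), (90, 10)],
                   random.Random(7))
```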

For agents awaiting an order, I constructed a behaviour tree that told the A.I. to patrol their local area. I included some gameplay elements, such as following and attacking the player. I noticed that the A.I. often blocked each other's paths, so I wrote a script that assigned each orc a weight; those of lesser weight move out of the way of those with greater weight. The orcs will do their best to avoid the troll.
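The yielding rule described above is a simple priority comparison. A minimal Python sketch, with illustrative weights (e.g. a troll far outweighing an orc):

```python
# Minimal sketch of the avoidance rule: when two agents block each
# other, the one with the lesser weight steps aside.

def resolve_block(agent_a, agent_b):
    """Return the agent that must move out of the way (ties: the first)."""
    return agent_a if agent_a["weight"] <= agent_b["weight"] else agent_b

orc = {"name": "orc", "weight": 80}
troll = {"name": "troll", "weight": 400}
print(resolve_block(orc, troll)["name"])  # orc: lighter agents yield
```

Breaking ties deterministically (here, the first agent yields) matters in practice: if both agents decide to yield, or neither does, the deadlock simply reappears.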

Technological advancement has led to ever-improving home computers, allowing developers to push better graphics and more resource-intensive scripts. However, in games, FPS (frames per second) has always been a main cause for concern, and a low frame rate can negatively impact the player's perception of a well-written game. Therefore, I had to ensure that assets and scripts were refined and optimised. Unfortunately, one major drawback of using realtime crowds is that game engines run a new process for every actor in the scene, which is extremely performance-heavy.

To increase FPS, I had to use extreme LOD on the scaffolding, which is noticeable, particularly in bright areas. I also used level streaming to quickly swap assets in and out of sub-levels as they come into view of the camera. This greatly improved my FPS, but events stored on these sub-levels sometimes fail to fire from a sequence. As a result, the shipped game and the prerender are full of bugs, although playing in the editor works fine. I believe this to be a bug within the latest build of UE4.
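Distance-based level streaming can be sketched as a per-frame check: each sub-level is loaded when the camera is within its range and unloaded otherwise. The sub-level names and radii below are illustrative placeholders, not the project's actual levels.

```python
# Minimal sketch of distance-based level streaming: decide, per
# sub-level, whether it should be loaded given the camera position.

def streaming_state(camera_pos, sublevels):
    """Map each sub-level name to True (load) or False (unload)."""
    cx, cy = camera_pos
    state = {}
    for name, (x, y, radius) in sublevels.items():
        dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        state[name] = dist <= radius
    return state

levels = {"bridge_area": (0, 0, 50), "forge_area": (200, 0, 50)}
print(streaming_state((10, 0), levels))
# {'bridge_area': True, 'forge_area': False}
```

Unreal's own streaming volumes and Blueprint load/unload nodes work on the same principle, with the added complication, noted above, that actors and events living in an unloaded sub-level are not available to sequences that reference them.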

I have taken on many roles of the game development pipeline throughout the creation of this short game, and doing so has helped to reshape the direction of work that I would like to enter. I am a fully capable artist and animator but enjoy taking on the challenges faced by developers. I believe that I would make an excellent technical artist: as the modern animation pipeline changes with technology, the presence of artificial intelligence is becoming increasingly common in film and games, and so the need for technical artists is becoming ever more important.

I have created a short game that is playable with a variety of input devices and across multiple resolutions. Game settings can be changed to modify the graphical quality. I have created multiple cinematics that are triggered within the game or can be played from the scene selector. I have put a lot of effort into small gameplay features that you may not notice on a first playthrough but together help to create a robust and believable game.

I encountered many problems, mostly brought about by my lack of experience with Unreal. I had previously written in C#, but Blueprints was an entirely new language and I had to learn and recognise new terminology. The steep learning curve meant that I had to take time to learn new features and processes. I pushed all the skills that I learned at university and on industrial placement, and picked up new skills and techniques along the way.


I believe I have met all the initial requirements, however, if I was able to take on this project again, I would be able to better streamline my game development pipeline with my newly learned skills. I believe that as technology improves, games and film will become more intertwined as animation becomes more dependent on artificial intelligence.

Final Viva

As part of the final submission I gave a presentation where I discussed my aims and the challenges I faced as a technical artist throughout development.

References available.