Week 08

Week 8: Prototyping Abilities in Unity by Valzorra

Having crafted a basic system for the camera and movement of this prototype, I thought it was now time to introduce some of the fun abilities that would help players maneuver through the environment. At this stage, I had a rough idea of what I wanted those abilities to be, based on some initial idea generation in Project Proposal 4, As It Lies. I felt strongest about Teleportation, as it would give players mobility and flexibility, and about Electrocution, a damage-dealing ability that would help players confront any threats. I would like to note that these have not yet been set in stone and could be amended or removed as the design and development process progresses. I would also like to reiterate that the work described below has been a collaborative effort between James and myself, and that this prototype would not be possible without his help and guidance.

Range Indication and Teleportation

The very first ability I wanted to try my hand at was the Teleportation Ability. The exact methodology of how this power would work has been described in detail in one of the storyboards in Project Proposal 4. To briefly summarize, once the player has selected the ability, they are presented with an indicator of their range in the form of a circle around them. The player can then click anywhere within the Range Circle to select a target location, and once they have made their selection, they are instantly teleported there. There were a few problems to solve within this description, the first of which was figuring out how best to implement the Range Circle Indicator, which would be used for a series of other abilities as well. I knew what I was going for: ideally a large circle around the player, giving them a clear indication of how far their abilities reach. Additionally, the Range Indicator needed to appear only as players are about to perform an Ability, and had to disappear and reset as soon as the ability has been executed. That way, players would not have circles all over their screens without necessity. A terrific example of the type of indicator I wanted is used constantly throughout League of Legends, as shown below.

League of Legends Range Indicator

For my own prototype, the Range Indicator was handled through the use of a very basic cylinder, scaled to X: 1, Y: 0.0001, and Z: 1, making it as close to a two-dimensional circle as possible. That Cylinder was placed straight on top of the Player, keeping them in the center, and was then parented to the Player, which meant that as the Player moved, so would the Range Indicator. Additionally, in order to see the scene clearly, a Semi-Transparent Material was added to the Range Indicator with its transparency set to 50 out of 255. At this stage, I had a very simple environment and a Player character with a large semi-transparent circle on top of their head, meaning I was ready to actually make the Range Indicator work. The system for it functions similarly to the way Player Movement is handled, as described in Week 8: Prototyping the Camera and Movement in Unity, though in a slightly more intricate manner.
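
For anyone curious how that Editor setup might look in code, below is a minimal sketch of the same idea. This is not the prototype's actual setup (which was done entirely in the Editor), so the player field, the height above the Player, and the material handling are assumptions; the material would also need a shader with a Transparent rendering mode for the alpha value to take effect.

//Minimal sketch, assuming this runs inside a MonoBehaviour with a player field
GameObject rangeIndicator = GameObject.CreatePrimitive(PrimitiveType.Cylinder);
//Remove the primitive's collider so it does not block mouse raycasts
Destroy(rangeIndicator.GetComponent<Collider>());
//Parent the circle to the Player so it follows them around
rangeIndicator.transform.SetParent(player.transform, false);
rangeIndicator.transform.localPosition = Vector3.up * 2f; //assumed height above the Player
rangeIndicator.transform.localScale = new Vector3(1f, 0.0001f, 1f); //flatten into a circle
//Match the 50/255 transparency described above
Color c = rangeIndicator.GetComponent<Renderer>().material.color;
c.a = 50f / 255f;
rangeIndicator.GetComponent<Renderer>().material.color = c;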

This is how the Range Indicator appears in the prototype for Teleportation.

The first step in the process was to indicate what the maximum range of Teleportation would actually be, which was determined by a simple float. For the purposes of this prototype, the value has been arbitrarily set to 15. The next step was to ensure that the Range Indicator would only be displayed when Teleportation was actually used. As the Range Indicator would only really be needed once an ability has been activated, its Mesh Renderer was disabled in Unity for all other general purposes, making it invisible to the user. The only occasion on which the Mesh Renderer is enabled through code is when the player presses down any keys associated with the activation of powers. For the purposes of this prototype, I have set it up so that pressing T activates Teleportation. The dice-rolling mechanic will be implemented at a later point. Once the Mesh Renderer is enabled, the other important aspect of displaying the Range Indicator is ensuring that the Circle resizes itself according to the range of each ability, as indicated by the associated variables. To do that, a line of code has been implemented which states that if the Current Power is Teleportation, then the Range Indicator will be scaled in accordance with the maxTeleportRange variable. This is how the Range Indicator is handled within this prototype, not only for Teleportation but for all of the abilities that need it. Therefore, the entire system for displaying the Range Indicator has been compacted into a switch statement, as showcased below.

void ShowRangeOfPower() {
        switch(currentPower) {
            case "TELEPORT":
                //Enabling the Mesh Renderer
                rangeIndicator.GetComponent<MeshRenderer>().enabled = true;
                //Scaling the Range Indicator according to predetermined variables
                rangeIndicator.transform.localScale =
                new Vector3(maxTeleportRange, 0.01f, maxTeleportRange);
                break;
            case "ELECTROCUTE":
                rangeIndicator.GetComponent<MeshRenderer>().enabled = true;
                rangeIndicator.transform.localScale =
                new Vector3(maxElectrocuteRange, 0.01f, maxElectrocuteRange);
                break;
            //If no powers are selected, the Mesh Renderer is disabled
            default:
                rangeIndicator.GetComponent<MeshRenderer>().enabled = false;
                rangeIndicator.transform.localScale =
                new Vector3(1f, 0.01f, 1f);
                break;
        }
    }

After the algorithm for the Range Indication had been sorted, it was time to code the Teleportation itself, which was a relatively straightforward process. The function works by casting a Ray from the Main Camera directly onto a location in the environment selected by the player. Much like with Walking, the Player selects a target location with a mouse click, in this case the Left Mouse Button. The function then checks whether the target location is within the predetermined Teleportation Range and whether it is on a Walkable surface, as indicated by our NavMesh. All floors within this testing environment have been marked as Walkable, and if the player were to click anywhere else to move, such as on a building or outside of the environment, the Player Character will be taken to the closest point to their selection. If the target location meets both of those conditions, then the Player Game Object will be Warped to that point. The last bit of the function simply states that once the Teleportation is completed, the currentPower variable goes back to "NULL", thus resetting the whole process. The Teleportation function used in the prototype is displayed below.

 void Teleport() {
        RaycastHit h;
        if(Physics.Raycast(Camera.main.ScreenPointToRay
                          (Input.mousePosition), out h, 100f)) {
            //If the Player selects a location in Range and on a Walkable area
            if(Mathf.Abs((h.point - transform.position).magnitude)
              <= maxTeleportRange && h.collider.gameObject.layer == 9) {
                //Warp the NavMesh Agent directly to the selected point
                nAgent.Warp(h.point);
            }
        }
        //Reset the current power once the Teleportation is complete
        currentPower = "NULL";
    }

That’s the basic premise of how both the Range Indicator and the Teleportation Ability work. The player can move freely within the environment, and once they press T, a large circle will appear around them. If the player then uses the Left Mouse Button to click on a surface that is within the Range Indicator, they will be instantly teleported to that location. Currently there is no cool-down of any kind on the Teleportation, which does make it rather fun to play around with. Below I have attached a few screenshots to display how the Teleportation Mechanic works in the prototype, and a video will be available in future blog posts.
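
Since the post only shows the individual functions, here is a hedged sketch of how they might be wired together in Update(). The prototype's actual glue code is not shown above, so this dispatch logic is an assumption pieced together from the snippets in this post.

void Update() {
    //Pressing T arms the Teleport power
    if (Input.GetKeyDown(KeyCode.T)) {
        currentPower = "TELEPORT";
    }
    //Show, scale, or hide the Range Indicator every frame
    ShowRangeOfPower();
    //The Left Mouse Button (button 0) executes the armed power
    if (currentPower == "TELEPORT" && Input.GetMouseButtonDown(0)) {
        Teleport();
    }
}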

Initial Phase of Teleportation: The Range Indicator appears in front of the player, allowing them to select a location.

Final Phase of Teleportation: Once a selection has been made, the player is transported there instantly.

Electrocution

Now that the Teleportation mechanic had been created and was fully functional, James and I thought we would move on to Electrocution, as it is the only damage-dealing ability in the game, making it rather significant. The Electrocution mechanic works by allowing the player to select an enemy within a certain range. Once an enemy has been selected, they will take a certain amount of damage. If that initial enemy has any number of other enemies surrounding it within a certain range, then those secondary enemies will also take damage. In a sense, it is a strategic chaining mechanic meant to represent electricity arcing through enemies. The first step in this process was to create a list that would contain all of the Enemies affected by the Electrocution. The algorithm then finds the first affected enemy by checking whether the player has selected a target marked as an Enemy and whether that target is in range. If the selected enemy meets both of those conditions, it is added to the newly created list of Enemies Affected. The player selects the first target by clicking on it with the Left Mouse Button, and a Ray from the camera locates the selection, much like with Teleportation.

After the first enemy has been located and added to the List, the script searches for other enemies in range that would also be affected by the Electrocution. To begin with, a check is performed to see whether the List of Enemies Hit is empty or not. If the List is not empty, then an Overlap Sphere is placed over the existing enemy within that List. The sphere’s radius is equal to half of the predetermined maximum range for the Electrocution. The next check performed is to see whether or not there are any other enemies within the Overlap Sphere. If there are, those enemies are also added to the List of Enemies Hit. Through this method, we can gather all of the enemies that would be affected by the Electrocution. After the list has been filled, Damage must be applied to the Affected Targets. For the purposes of this prototype, a very simple check is executed, which simply states that if an enemy is within the Enemies Hit List, then that enemy will be destroyed. That’s the overall logic behind how the Electrocution Script works, and the code for it can be reviewed below. Images of the visual process within Unity have also been included.

 void Electrocute() {
        //Stop the Player from moving while the ability executes
        GetComponent<PlayerMove>().enabled = false;
        nAgent.SetDestination(transform.position);
        RaycastHit h;
        List<GameObject> enemiesHit = new List<GameObject>();
        //Get Initial Victim
        if (Physics.Raycast(Camera.main.ScreenPointToRay
                           (Input.mousePosition), out h, 100f)) {
            if (Mathf.Abs((h.point - transform.position).magnitude)
            <= maxElectrocuteRange && h.collider.gameObject.layer == 10) {
                enemiesHit.Add(h.collider.gameObject);
            }
        }
        //Find next victims
        if(enemiesHit.Count > 0) {
            Collider[] nextVictims =
            Physics.OverlapSphere(enemiesHit[0].transform.position,
                                 maxElectrocuteRange/2f);
            foreach (Collider c in nextVictims) {
                //Skip the initial victim, which is already in the List
                if(c.gameObject.layer == 10 && c.gameObject != enemiesHit[0]) {
                    enemiesHit.Add(c.gameObject);
                }
            }
        }
        //Visual Effect
        for(int i = enemiesHit.Count - 1; i >= 0; i--) {
            //Add a Line Renderer to the enemy if it does not already have one
            LineRenderer line = enemiesHit[i].GetComponent<LineRenderer>();
            if (line == null) {
                line = enemiesHit[i].AddComponent<LineRenderer>();
            }
            line.positionCount = 2;
            line.material = Resources.Load<Material>("Materials/lightning");
            line.startWidth = 0.2f;
            line.endWidth = 0.2f;
            line.startColor = Color.cyan;
            line.endColor = Color.cyan;
            if (i == 0) {
                //The initial victim is connected back to the Player
                line.SetPosition(0, enemiesHit[0].transform.position);
                line.SetPosition(1, transform.position);
            } else {
                //Secondary victims are connected to the initial victim
                line.SetPosition(0, enemiesHit[i].transform.position);
                line.SetPosition(1, enemiesHit[0].transform.position);
            }
        }
        //Apply damage
        foreach (GameObject enemy in enemiesHit) {
            enemy.GetComponent<Renderer>().material.color = Color.blue;
            Destroy(enemy, 1f);
        }
        currentPower = "NULL";
        GetComponent<PlayerMove>().enabled = true;
    }

Initial state of Electrocution: The Player has activated the ability and can now choose a target.

Middle state of Electrocution: The Player has selected the only target in range and the damage has been distributed to the enemies surrounding the initial target.

Final state of Electrocution: The damage has now been dealt and all enemies are destroyed.

Reflection and Feedback

Overall, I really enjoyed working on these mechanics with James, because of how fun they seemed to me even at the prototyping stage. Because there were no cool-downs on these abilities at this stage, they could be used in combination with each other. For example, I found it quite entertaining to use Teleportation to get in range of the enemies and to then use Electrocution to defeat them, all in a matter of seconds. I think that with the addition of new abilities and their refinement, even more fun combinations will be possible. In terms of challenges, I thought that the Electrocution mechanic was rather tricky, especially because it was quite difficult to actually visualize the effect. James and I tried to apply some sort of texture as opposed to a single-colored Line Renderer; however, we did not have much success. This indicates that I will have to do some more research on how to achieve controllable lightning effects. And although this is just a simple prototype, I would like to refine the Range Indicator and make it a bit more intuitive. The main problem with it for now is that it has to stay on top of the Player’s head, because any other position results in it colliding with the environment, which can look quite broken. However, these problems may end up finding their solutions in Semester 2, because there is still quite a bit of design work to do for this Semester. Nonetheless, I am really excited about what we have so far, and I look forward to developing it further.

IMG_2269.JPG

After a long week of prototyping, we had our Formative Feedback Session with Adam on the Friday of Week 8. At that point, I was relatively up to date on what I had been doing so far, so thankfully he was able to see most of my work. We primarily talked about Project Proposal 4, As It Lies, and I briefly went over the main mechanics, the abilities, the dice rolling, and some elements of the world. Overall, Adam’s feedback was positive, and he had some really good recommendations on where to take the project. For example, he recommended that I translate the game into a paper prototype, specifically for the dice-rolling mechanic, to ensure that it would actually be fun. Additionally, after seeing a 3D Model I was working on, he suggested that I create somewhat of a style guide to help narrow down the specific style of the models and the game overall. Adam also suggested that I carefully consider what abilities to include in the game, as they shouldn’t seem random or boring. That’s why none of them have been set in stone yet, and I will be reworking some of the powers later on, possibly through testing with the paper prototype. Quite reasonably, Adam also expressed concerns about the size of the project, as I am working alone. That’s why we agreed that I would create a timetable for the second semester in order to ensure that the game will be fully completed by the time it needs to be presented. I intend to start working on Adam’s recommendations as soon as next week, so hopefully I will have even more clarity and information by next Friday.

Week 8: Prototyping the Camera and Movement in Unity by Valzorra

In addition to some experimentation and practice with 3D Modelling in Blender, Week 8 was also dedicated to developing a basic prototype in Unity around Project Proposal 4. I should mention that for now the working title for Project Proposal 4 is As It Lies, so I will be referring to the game by that title in all future documentation until it is changed. The first set of important decisions that needed to be made in regard to the digital prototype was how the camera and movement within this game would work. To answer both of these questions, I needed to ask myself what the actual goal of this game is, what I am hoping to achieve, and what essential experience I want players to go through. For now, I knew I wanted this to be a problem-solving experience where players would have to navigate through an environment by using a series of special abilities. As previously described, the abilities would be determined by the roll of a set of dice, and players would be able to control and change the roll to some extent. With that in mind, this game is very much a strategic experience, and I would not like it to be possible for users to punch their way through the level.

Camera Angle

Having narrowed down the overall experience I would like to create, I began analyzing different camera perspectives to try and figure out which one would be the most appropriate. The first version I analysed was the standard over-the-shoulder or third-person camera angle. This type of camera is usually used when the character and how they act is rather significant to the overall experience, shifting the focus away from the environment. We explored a lovely version of this camera angle in one of our Tech Workshop Sessions, so I had a pretty good idea of what it would look like in the game. However, I decided against it, because a large part of the screen in a third-person camera view is focused on the player’s character rather than on anything else. This would be very problematic, because precision and accuracy would be rather important in this strategic problem-solving game. Additionally, the solutions to levels would be incorporated within the level design, which means that the environment would be a major focus of the game. Therefore, I knew straight away that third person was not the way to go for this project.

Example of a Third-Person Camera Angle in Skyrim. Nearly 1/3 of the screen is occupied by the character, which is not ideal for As It Lies, which needs to give you an accurate view of the surroundings.

Since I had established that the layout of the level would be the focus of this game, I started thinking about the potential of a first-person camera angle. First-Person Camera Angles display the game through the eyes of the protagonist, dedicating the entirety of the screen to the environment and the events going on in it. In theory, this sounds perfect for As It Lies, as it would enable players to look at their surroundings carefully, to analyse them, and to figure out how to maneuver their way through. However, even though this is closer to what I would ideally want for the game, it does not focus on the strategic and planning elements as much as it could. What is most crucial about this game is the level layout and the careful positioning of separate abilities. As nice as a first-person camera would be, it does not allow for that careful strategic planning in the use of your powers. What’s more, upon further investigation I discovered that if As It Lies were in first person, it would echo a series of other games in the genre more than I would like, namely Dishonored. What I want to avoid more than anything else is re-skinning a preexisting game, as that would make the exciting dice-rolling mechanic fade into the background. Therefore, I had to take a look at other options.

As fond as I am of the First-Person Camera Angle, it is not appropriate for As It Lies, because it neglects the strategic elements I want to convey and it makes the game too similar to what’s on the market.

After taking a look at those two options and realizing they would not work, I began analyzing a Top-Down or Isometric Camera Angle. Top-Down and Isometric Camera Angles place the player above the main action and scenes, often giving them a bird’s-eye view of what’s going on. This wide oversight of an area is the most precise and accurate way to make strategic decisions about the environment. It gives the player complete and total control over what is going on in the scene, allowing them to make the best possible judgement on how to solve the situation they have been placed in. The only problem with a Top-Down/Isometric Perspective is that it detaches the player from the events, thus making it more difficult to establish an emotional connection. However, the main goal of As It Lies is to provide an exciting experience with environmental puzzles, all wrapped up in the context of an exciting world and story. Therefore, as long as players are having fun, emotional attachment to the game is not a priority.

I would like to take this opportunity to note that the Camera Angle of As It Lies is closest to Top-Down/Isometric, but would likely be a mix between that and a General Third-Person View. This combination would still provide enough flexibility for strategy, but would help the player connect with the protagonist and the events that are going on more so than a pure top-down view. Additionally, this would give me more flexibility, as I could choose where to lock the camera based on the level and would not be confined to a fixed view that always points straight down at the ground. The closest example I can think of is the League of Legends Camera System, which is locked into the same position, oftentimes onto the player, and follows them as they move. The League of Legends camera system allows for careful strategic decisions and enables players to work together and see everything that’s going on in the scene. For more on Camera Angles, have a look at this fantastic blog post I found on Gamasutra.

A camera angle similar to the one in League of Legends would be ideal for As It Lies.

Creating the Prototype and Movement

After I had decided which Camera Angle would work best for As It Lies, the way the player would move in the environment became very straightforward. Traditionally, Top-Down and Semi-Top-Down Camera Angles are accompanied by a mouse-click system for movement. Therefore, at this stage I knew that I wanted the game to have a Top-Down Camera View, similar to League of Legends in terms of angle, and that the player would move through the environment by using their mouse to click on a target location. Now that I had come to those conclusions, it was time to actually make a prototype in Unity that would showcase how it all works. I would like to note that, as I am not a very experienced programmer, I had quite a bit of help from James in the creation of this prototype, both during our Tech Workshop Sessions and beyond. Therefore, anything you see from here on out in Unity has been a collaborative effort between the two of us.

To my surprise, crafting a locked Top-Down Camera that follows the player is actually quite easy to do. The first step of this process was to position the camera adequately, so that it fully shows everything that’s going on in our testing environment. For the purposes of this prototype, the Camera was locked in a relatively arbitrary position above the testing environment and player, in such a way that it would display them efficiently. After that, a basic Camera Move script was created. The first thing that script does is locate the Game Object in the Scene named “Player”, which in this case is a basic cylinder. After the Player has been located, the script records the offset between the player’s position and the camera’s. Any changes in the player’s position are then applied to the Camera in the same manner, causing the two objects to move simultaneously. That way, the camera is constantly looking at the player from a predetermined angle and moves along with the player. Below I have attached a few screenshots of the scene in Unity and the code for the Camera Movement, as an image and in text format.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class CameraMove : MonoBehaviour {
    private GameObject player;
    private Vector3 offset;
    void Start () {
        //Locate the Player and store the initial offset between it and the Camera
        player = GameObject.Find("Player");
        offset = player.transform.position - transform.position;
    }
    void LateUpdate () {
        MoveCamera();
    }
    void MoveCamera() {
        //Maintain the same offset every frame, keeping the Player centered in view
        transform.position = player.transform.position - offset;
        transform.LookAt(player.transform);
    }
}

An example of how the Camera View Point is directly looking at the player. If they were to move, the Camera would follow, always keeping them in the center of the image.

Programming the Player Movement Script was slightly more challenging, as it involved moving the character by mouse click rather than by more conventional means such as a controller or keyboard buttons. First of all, I needed to do some work in Unity to determine the areas that could be walked over in this environment. This meant baking a Navigation Mesh to indicate that the three green planes on the screen are Walkable Objects. Once that was created, I also needed to indicate which Game Object was the Player. Therefore, I tagged my basic Cylinder as “Player” and proceeded to attach a Box Collider and a NavMesh Agent to that Cylinder. These Components, in combination with the baked NavMesh, are what make the Cylinder move over this very basic environment. Both the Box Collider and the NavMesh Agent have been left at their default settings. Now, with all of those Unity bits and pieces sorted, I could get down to the actual code. The Player Move Script works by saying that if the player presses the Right Mouse Button, then a Ray will be cast from the Camera to the location the Mouse Cursor is directly over, and the player will be automatically moved toward that location at a predetermined speed. I have attached the code for the Player Move script below.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;
public class PlayerMove : MonoBehaviour {
    private NavMeshAgent nAgent;
    private float currentSpeed;
    void Start() {
        nAgent = GetComponent<NavMeshAgent>();
        currentSpeed = nAgent.speed;
    }
    void Update() {
        GetPlayerInput();
    }
    void GetPlayerInput() {
        //The Right Mouse Button (button 1) issues a move order
        if(Input.GetMouseButtonDown(1)) {
            MoveToClick();
        }
    }
    void MoveToClick() {
        //Pause the Agent until a valid destination has been found
        nAgent.speed = 0f;
        RaycastHit h;
        if (Physics.Raycast (Camera.main.ScreenPointToRay
                            (Input.mousePosition), out h, 100f)) {
            //Send the Agent to the point underneath the Mouse Cursor
            nAgent.SetDestination(h.point);
            nAgent.speed = currentSpeed;
        }
    }
}

Thoughts and Reflection

These were the first of many scripts to come. James and I designed and crafted the Camera Movement System and the Player Movement System, which will be the building blocks of the entire project. Overall, the process for these two scripts was rather smooth, and James and I did not have too much trouble making decisions in regard to them. I did wonder for the longest time whether a first-person or a top-down camera perspective would be ideal for this game. However, after seeing the prototype with the top-down point of view, I know we made the right decision, as this perspective accentuates the strategic elements of As It Lies. So far, I think this has been a terrific start to the prototyping process, and I look forward to writing more code and designing the algorithms for the abilities, dice rolling, and so on.

Week 8: Intro to Modelling with Blender by Valzorra

After reading week was over, it was quite the relief to get back into my normal working schedule and to begin creating assets and prototypes during Week 8. At this stage, I had completed my idea generation and had come up with four fleshed-out project proposals, one of which I was thinking of taking forward. Aside from mathematics, the common element between all four project proposals is that they would work best in a 3D environment. There is a potential problem with that, as I have never really interacted with any 3D Modelling Software, aside from brief endeavours into texturing during Year 2. However, I am really passionate and enthusiastic about these projects, and I want to bring them to life and make them happen, which is why I have chosen to take on this challenge and learn how to create 3D Models. Week 8 was the perfect time to do this, because if I couldn’t handle the process at this stage, I would have enough time to reconsider my options and potentially join a team. Nonetheless, I was determined to try my best and to create an asset by the end of the week. My program of choice was Blender, as it is a professional-grade 3D Modelling Software, free of charge, and it was the program I used for texturing last year, which meant I was familiar with its interface.

I started my journey into 3D Modelling with Blender by attempting to create a low-poly, proportional female figure. Zeta, the protagonist of Project Proposal 4, will most likely be a healthy and athletic female, which is why I thought that trying to create a body for her as practice might be useful. Additionally, I have chosen to focus on a low-poly style, as I am only beginning to learn 3D Modelling, and focusing on stylised and low-poly models would be a more feasible choice for my FMP. Overall, I would like to reiterate that my main goal here is to become familiar with Blender, to learn some of the basics of 3D Modelling, and to determine whether or not this is something I could handle if I were to work by myself. The finished asset itself may not necessarily be used in the game, but if it does make its way into it, then all the better. Now, without further ado, let’s dive in.

Make Human

Before I started working in Blender, I thought I would explore other software for crafting Human 3D Models, in order to create something to use as a reference for my own model. James S. recommended that I explore Make Human, as it’s an open-source program designed specifically for making anatomically correct human models. Upon downloading and opening the program, I was surprised at how easy and user-friendly it was. Make Human generates a human 3D Model and allows users to change its gender, shape, form, and more through the use of sliders. One can make humans taller, shorter, more muscular, or chubbier, and all facial features may also be amended. Make Human even has a few built-in hairstyles and clothes for the model, all textured. Below is an example of the default human model and the way the sliders work in this software.

The whole process of crafting a model was really easy and straightforward: all I did to create my character was work with the default model and adjust the sliders according to the features I wanted to change. I made the woman a bit slimmer, more muscular, and taller than average, as she is meant to be quite healthy and relatively athletic. I made her face slightly sharper and her nose a bit longer than the default model’s. After I was happy with my character in terms of facial features and overall body shape, I took advantage of Make Human’s built-in poses and facial expressions to bring more life to the model. The image below shows the finished character with Make Human’s textures. I went for a braided hairstyle, as it seemed to work best with her face, and a pose to make the model seem more lifelike. However, I would like to reiterate that these decisions are not particularly significant just yet, as I am merely using this model as a reference by which to create my own 3D Human Figure. Once I was happy with the model and it looked somewhat like what I envisioned for Zeta, I exported it as an FBX (as Unity tends to like that format) and was ready to begin working with it in Blender.

Modelling in Blender

Once I had my FBX from Make Human, I imported it into Blender to use as a guide for my own model. I chose to work with a model in the A-Pose, as my character will be rather unlikely to lift her hands up past that point, allowing me to model the shoulders with more detail. This is in contrast to the T-Pose, where detail on the shoulders can get lost or distorted during animation. The key bit to remember here is that I am making a low-poly yet proportional human, which is why the highly detailed Make Human (MH) model serves as nothing more than a guide. I started off with a simple cube at the bottom of the left foot and began shaping and forming it along the outline of the MH model in order to get an abstracted version of the foot. One of the main tools I used was the Extrusion Tool, which works by selecting the face one would like to extrude and then pressing E. What’s lovely about Blender is that it automatically determines what type of extrusion would best fit the selected face, making the process a little easier and more user-friendly. Another tool I used very commonly was the Loop Subdivide Tool (Ctrl+R), which creates a loop of edges and vertices around the selected object. This was hugely beneficial, as it’s how I was able to divide the cube I started with into a shape with more than eight vertices. Once you have those vertices, they can be dragged along each axis and shaped as desired, which is how I formed the foot.

The original imported A-Pose Model from Make Human.

As I was working with my cube, continuously using the Extrusion Tool and the Loop Subdivide Tool, the model slowly started to look like a foot with an ankle. The MH Model was incredibly useful, as it gave me insight into where to break the lines and form the appropriate curves of the human body. In that same fashion of using these tools, editing my vertices, shifting my edges, and occasionally grabbing entire faces of the model, I went up along the leg and made sure to include the key curves and breaks in it. I really enjoyed this process of abstraction, because I had to continuously ask myself how much I could take away from the shape of the foot before it ceased to resemble a foot. I was aiming for a relatively abstracted version that was still clearly distinguishable as a human foot and leg.

Going along the leg and progressing towards the waist with the described techniques.

Up until this point, I had been working on only one side of the leg, slowly creating half of the body. After a handy tip from Richard, I turned on the Mirror Modifier in Blender in order to simultaneously create both the left and the right side of the human. The model was mirrored along the X-Axis, with Clipping and Merge enabled, as I wanted both sides of the model to be connected and to result in one coherent shape. After enabling that modifier, I continued to go up along the body and to extrude it all the way up, working along its curvature. The most difficult and time-consuming areas thus far were by far her bottom and her breasts, as those were the most ovular, which meant they required significantly more subdivisions of the plane. Nonetheless, I was greatly enjoying the process and was excited to complete the finished piece.

Once I had crafted the body up to the neck, I branched off to the sides to complete the arms. Using the same methods of Extrusion and Subdivision, I proceeded to form the elbows, the muscles, and the armpits, which were rather challenging. I had the most difficulty viewing the area underneath the armpits, as positioning the camera in a convenient way to edit the vertices often caused me to see the inside of the model, which was not very helpful. Nonetheless, I managed to work my way around that issue by positioning my view towards the side of the model rather than directly beneath it. Another very challenging part of this area for me was the hand. I knew that I wasn’t going to create all five fingers, as that seemed unnecessary for a low-poly model, so I focused on the thumb and combined all four fingers to essentially create a mitten. Nonetheless, I found it really difficult to account for the curvature of the palm, and I required a series of subdivisions to get that done correctly. After it was all complete, I was rather pleased with the result, as it did resemble an abstracted human hand. However, I am eternally grateful that I do not need to repeat the process on the other side.

Extrusion of the area around the armpit, which will eventually form the arm.

The hand and arm near completion, by far one of the trickiest areas to get right.

After I had created the entire body, including the arms, it was time to proceed to the head, the section I found the most challenging. In the same fashion as before, I began by extruding a small section from the neck and slowly matching all of my vertices to points on the face where they touched the MH Model. The difficulty with the head came from the fact that it was the most ovular part of the entire body and thus required more vertices to get done accurately. Additionally, as this is a low-poly model, I did not model the face based on the MH Model, but rather made my own version, which only had very basic indents to indicate the eyes and the mouth, and a simple bump for the nose. I did go over and edit the face a couple of times, because I kept making mistakes with stray vertices and geometry, but nonetheless, I got there in the end.

Blender25.png

Below is an example of how I would fix any mistakes I found within the face and the entire body. Whenever I noticed that something was off with the model in Solid View, I became suspicious of what had transpired. To investigate the issue, I would enter the Wireframe View of the model, which only displays the mesh. Once a face is selected, as shown below, any stray vertices or edges become really easy to spot. The way I fixed problems of this sort was a bit clumsy but nonetheless effective: I would merely delete or dissolve the problematic vertex and then fill in any holes the process may have caused in the model. I was rather lucky, because I did not have an abundance of such instances, so there was no need for excessive deletion and rebuilding. Once the head was done and I had fixed all of my mistakes along the way, I went on to clean up the model by removing any stray or unnecessary edges and vertices, which significantly decreased my Vert Count and Tri Count, making the model much more optimal.

Blender28.png

Once I was happy with the model overall, I took a look at it from afar and tried to experiment with some of the Modifiers in Blender. Specifically, I wanted to make the model look even more low-poly than it already was. That is why I explored the Decimate Modifier, which essentially decreases the number of Polygons in the model. I found the process rather exciting, because by editing the Collapse Ratio, which is a slider, I could change the model from its current form all the way down to a single triangle. The question I was asking myself while using this Modifier was how much information I could take away before the object became virtually indistinguishable. I found that for now the ideal ratio was somewhere around 0.3050; however, I could decimate it even further if need be. Another Modifier I explored was the Triangulate Modifier, which essentially divides each face of the model from Quads into Tris. I personally much prefer Tris, because mathematically any three points fall into a common plane, which is quite nice to model and work with. Quads, on the other hand, do not always have all of their vertices in the same plane, which can lead to difficult-to-spot and undesirable geometry. Once all of the Modifiers were applied, the model was complete.

Blender39.png

Thoughts and Reflection

To my surprise, I’ve really enjoyed creating this model. The process was not overly complex, and it was an excellent exercise in familiarising myself with Blender, some of its shortcuts, and its interface. There were plenty of challenges along the way, most notably with the more ovular shapes of the human body, but they were all manageable and I worked through them without too much trouble. One very frustrating setback was that while I was crafting the head, my computer completely crashed, causing me to lose some of my progress. However, I took it as a learning moment and acknowledged that I should be saving rather frequently, as there is always a risk of things like that happening. Through this model, I have now familiarised myself with Modelling via Extrusion and with working with vertices, edges, and faces; I have explored a few modifiers; and I have clearly pushed the technical limitations of my personal computer. I’ve explored how to fix certain problems and how to optimise my model by dissolving unnecessary geometry. To top it all off, I am rather happy with the final result, which is attached below. This is my very first 3D Model, and it seems to look rather decent, better than I expected in any case. The model seems fairly well optimised at 934 Verts, 1,920 Faces, and 1,920 Tris, although there does not seem to be a solid number one should aim for with these stats. Creating this figure took me about three days, during which I was also working on other prototype pieces. This should hopefully mean that if I stick with a relatively low-poly style, I will be able to feasibly create a 3D game by the end of the year, which is incredibly exciting. This exploration has been incredibly motivating, and I am looking forward to getting back into it.

HumanBody.png