Week 04

Week 4: On Speculative Design by Valzorra

Speculative Design Workshop

The Friday of Week 4 was dedicated to a talk on Speculative Design by Jussi. I really appreciated this introduction to the topic as I was unfamiliar with the field up until the workshop. What I found interesting about the idea of Speculative Design is that to me it seemed to overlap quite a bit with various fields of Art, such as Sculpture or Installation Art. Design is typically all about solving a certain problem, improving the current state of affairs through good choices, and communicating clearly with the user. Speculative Design turns many of those notions on their heads in order to start an engaging conversation about the future and where we might end up based on the present. It's about conveying ideas, making people question what appears to be natural behaviour, and it often takes its content to the extreme. The inherent desire of Speculative Design to ask questions rather than to provide answers brings it incredibly close to Fine Art. I thought this was quite interesting, as the two fields are usually considered fundamentally different, yet somehow there is overlap.

After the talk had concluded, we were tasked with coming up with an exciting concept relating to Speculative Design by taking inspiration from the Cooper Hewitt Collection. Richard, Fred, and I formed a team and had a look through the massive collection, exploring a variety of artwork, models, and textiles. We ended up finding this elaborate Architectural Model for a Church or Baptistry. What struck me about that model was that although it was very well crafted, it did not feature any windows at all. Thinking on Speculative Design, I thought it would be quite interesting to take that concept to the other extreme and to imagine a society which lives in buildings made of nothing but glass. This concept raised some interesting questions: how would people behave at home, knowing that they could be watched at any given moment? How would any sort of privacy be managed? Would privacy even be an option at that point? What would be the difference between a society that was raised in those circumstances and one that was introduced to them coming from the life we know?

The image we generated the idea from: Church Architectural Model, 1782

Additionally, the idea prompts us to reconsider present-day notions of transparency and the expectation that everyone must be constantly visible and open, for example through social media. After we discussed the idea with Jussi, he gave us a few interesting connections to this concept. Specifically, he mentioned that in glass architecture, as a general rule, light and shadows are exceptionally important when it comes to visibility within the building itself. Additionally, we connected this hypothetical society with the ideas of surveillance, self-surveillance, the Panopticon, and the Crystal Palace. After that discussion, the team and I started thinking of how we could turn this into an interesting game. At that point, Richard suggested that this would make a fantastic stealth-based game. The main objective would be to use light and shadow to hide from unwanted eyes and to do a certain activity in private, for example taking a shower. Below I have attached the images the team and I presented to show what a house in this society might look like, and how the artistic style in-game could function.

GlassHouse.png
Screen Shot 2018-10-30 at 11.22.27.png

After the presentations, we got some pretty good feedback on the idea and most seemed to enjoy the concept and the game we had come up with. Adam suggested that we also have a look at The Circle, which ties into the themes of this concept brilliantly. It was also quite inspiring to hear what other people came up with, because all of the ideas seemed rather good and held potential. Overall, I really enjoyed the whole concept of imagining futures and societies based on the present, and taking certain ideas from modern day to the extreme. It’s a great thought experiment that holds the potential for fantastic idea generation, and crafting entire worlds based on a problem in the present.

Week 4: Dimension by Valzorra

Representation of the Third Dimension in the Second Dimension

The representation of three-dimensional space on a two-dimensional surface has been explored over hundreds of years in a variety of artistic attempts. Mathematically, what makes the representation of these dimensions exciting is the idea of perspective and vanishing points. The basic idea of perspective is that objects appear smaller the further away they are from the viewer, and there is an accurate mathematical way to represent this. The most basic way to do so is to use a single vanishing point, which was traditionally positioned in the centre of the canvas. With this technique, lines that recede into the depth of the scene (those perpendicular to the canvas) all converge towards that single central point, while lines parallel to the canvas remain horizontal or vertical. To make the perspective more exciting, artists later included multiple vanishing points, sometimes positioning them outside of the canvas. With more than one vanishing point, the receding edges of an object are no longer perpendicular to the canvas; instead, each set of parallel edges converges towards its own point within the world of the painting or image. When there is more than one vanishing point, we refer to the line connecting those points as the vanishing line.
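To make the idea that objects shrink with distance concrete, here is the standard one-point projection formula (my own addition rather than something from the session). For a viewer at the origin looking along the z-axis at a canvas placed a distance d away, a point in space lands on the canvas at

\[ (x,\, y,\, z) \;\mapsto\; \left( \frac{d\,x}{z},\; \frac{d\,y}{z} \right) \]

so doubling an object's distance from the viewer halves its apparent size on the canvas.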

There are other ways to give the illusion of perspective beyond traditional vanishing points. Enter Desargues' Theorem, which describes how triangles can be in perspective with each other without the use of vanishing points. In order to explain the two main notions of the theorem, let's introduce two triangles ABC and abc. Desargues' Theorem states that if the lines Aa, Bb, and Cc all pass through the same point, then the two triangles are in perspective from a point. Now, to explain the other notion of the theorem, let's call the meeting point of AB and ab = D, the meeting point of BC and bc = E, and the meeting point of AC and ac = F. If D, E, and F all fall on the same line, then the triangles are considered to be in perspective from a line. Desargues' Theorem is a keystone notion in the field of projective geometry, which deals with the representation and transformation of geometric objects. If you would like to explore the proof of Desargues' Theorem, please refer to the video below, as it explains the concept better than I could ever hope to.
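Written compactly with the same labels as above (my own summary; note that the equivalence is usually stated in the projective plane, where even parallel lines are taken to meet):

\[ Aa,\ Bb,\ Cc \ \text{concurrent} \iff D = AB \cap ab,\ \ E = BC \cap bc,\ \ F = AC \cap ac \ \text{are collinear} \]

In other words, being in perspective from a point and being in perspective from a line turn out to be the same condition.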

One of the most notable artists to have tackled the idea of representing the shift between the second and third dimension is M.C. Escher, whose work is absolutely fascinating mathematically. Specifically, his experiments with Tessellations (which are simply infinitely repeating mathematical patterns, usually closely fitted together) continuously merge and play with the idea of dimensions and the constant shift between them. In the example below, Escher represents the cyclic nature of life by depicting the perpetual existence of the crocodile, which manages to escape its position and reach unknown heights, only to swiftly return and crawl back into position, repeating the whole process once more. The reptile manages to become a higher form of existence by entering another dimension; however, that is incredibly short-lived. Additionally, Escher has placed another object in this graphic which shares a similar transitional nature, and that is the dodecahedron, one of the regular polyhedra. As discussed in a previous post, this is one of the few objects that retains its mathematical properties while shifting between dimensions, thus further reiterating the main idea behind the reptiles.

Reptiles, 1943, M.C. Escher

Representation of Higher Dimensions

Representing the fourth dimension can be exceptionally challenging; however, there are a few notable ways of visualising it, specifically through the Hypercube. One of the most famous depictions of the Hypercube is Salvador Dali's Corpus Hypercubus, where he used a net of eight cubic cells glued to each other. Another possible method, commonly known as projection, features one cube located in the centre of another, with their corners joined together by edges. However, all of these are mere representations of the Hypercube through one format or another, while true vision of the fourth dimension has yet to be achieved, if it is possible at all. What's interesting to me about this specific area of geometry is how these different shapes interact with each other, what their core principles are, and how they help shape our understanding of dimensions and the main pillars of their construction. The transfer of information between dimensions and the existence of extraordinary shapes we cannot even imagine fascinates me and motivates me to look into them even further.

Crucifixion, 1954, Salvador Dali

Mathematically speaking, the fourth dimension (and any higher dimension) can be represented through the use of matrices filled with Cartesian coordinates and data points. The four vertices of a square can be represented by (0, 0) (0, 1) (1, 0) (1, 1). Adding one dimension, we can represent the eight vertices of the cube as (0, 0, 0) (0, 0, 1) (0, 1, 0) (1, 0, 0) (0, 1, 1) (1, 0, 1) (1, 1, 0) and (1, 1, 1). Adding a dimension once more, we get the vertices of the four-dimensional hypercube, given by (0, 0, 0, 0) (0, 0, 0, 1) (0, 0, 1, 0) (0, 1, 0, 0) (1, 0, 0, 0) (0, 0, 1, 1) (0, 1, 0, 1) (0, 1, 1, 0) (1, 0, 0, 1) (1, 0, 1, 0) (1, 1, 0, 0) (0, 1, 1, 1) (1, 0, 1, 1) (1, 1, 0, 1) (1, 1, 1, 0) and (1, 1, 1, 1), a total of 16 vertices. In order to represent even higher dimensions, we would simply need to add an additional coordinate per dimension. What's even more exciting is that through the properties of matrices, one could then upscale or downscale the n-dimensional object, transforming it into any desired size.
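To make this coordinate bookkeeping concrete, here is a small sketch (my own, not something from the session; the class and method names are just mine) that generates the 2^n vertices of the n-dimensional unit cube and applies a per-axis scaling factor, which is exactly what multiplying each vertex by a diagonal scaling matrix does:

using System;

public class Hypercube{

    //Generate the 2^n vertices of the n-dimensional unit cube.
    //Vertex i has coordinates given by the binary digits of i.
    public static double[][] UnitCubeVertices(int n){
        int count = 1 << n;
        var vertices = new double[count][];
        for (int i = 0; i < count; i++){
            vertices[i] = new double[n];
            for (int d = 0; d < n; d++){
                vertices[i][d] = (i >> d) & 1;
            }
        }
        return vertices;
    }

    //Scale a vertex axis by axis, the effect of a diagonal scaling matrix.
    public static double[] Scale(double[] vertex, double[] scale){
        var result = new double[vertex.Length];
        for (int d = 0; d < vertex.Length; d++){
            result[d] = scale[d] * vertex[d];
        }
        return result;
    }

    public static void Main(){
        //4D hypercube: 16 vertices, doubled in size along every axis.
        double[] scale = { 2, 2, 2, 2 };
        foreach (var v in UnitCubeVertices(4)){
            Console.WriteLine(string.Join(", ", Scale(v, scale)));
        }
    }
}

Running it with n = 4 prints all 16 vertices of the hypercube, scaled to twice the size, and adding another dimension is just a matter of changing the two 4s.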

Moving on from one dimension to another results in the loss and gain of certain information about those objects.

However, working purely in numbers is not very visual and does not provide a very intuitive idea of how to think about or work with four- or higher-dimensional objects. I have previously explored the geometric properties of certain Polychora in Week 3: Research on Geometry, so feel free to explore that section of the blog for further visualisation. Looking into ways to make higher dimensions more intuitive, I came across a fantastic video that combines analytical and geometric methods of thinking about the fourth dimension, specifically a 4D Sphere. The basic method detailed in the video is to use a series of sliders to represent the points in four-dimensional space, rather than using strictly coordinates or strictly geometric shapes. Furthermore, what one discovers through this method is that in higher dimensions the geometric shapes become more counter-intuitive, which forces mathematicians and enthusiasts to be very creative when working with them and explaining their properties. More detail on the subject can be found in the video itself, but for my research purposes, the methodology of visualisation and representation is what matters most.
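For the 4D sphere specifically, one way to read the slider picture (my own paraphrase of the idea rather than the video's exact wording) is that fixing one coordinate gives an ordinary sphere as a cross-section:

\[ x^2 + y^2 + z^2 + w^2 = r^2 \quad\Longrightarrow\quad x^2 + y^2 + z^2 = r^2 - c^2 \ \ \text{when } w = c,\ |c| \le r \]

so as the slider for w moves away from zero, the visible 3D sphere shrinks, and it vanishes entirely once |c| exceeds r.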

Thoughts and Reflection

What I find really interesting about the idea of dimensions and moving between them is this idea of information. Moving from one dimension to another results in the loss and gain of certain information about those objects, which is absolutely fascinating to me. For example, a perspective drawing of a 3D cube cannot display all of the edges at the same size; if it did, the object would no longer read as a cube under the rules of perspective. Additionally, the idea of different perspectives showing different sides of the same object, revealing new data about that object along the way, could be perfectly translated into a game mechanic. What's more, this is also the primary way in which one can create art with optical illusions and potentially use those in an environment. Overall, I am rather excited to see if this will go any further, but for now, onward with the research.

Week 4: Probability Manipulation in SEO by Valzorra

Overview

During our Building the World Session on Tuesday, James gave a fantastic example of how Markov Chains are used in Marketing by illustrating the flow of customers between two brands, based on data that Brand A had supposedly gathered about its customers and marketing strategies. This got me thinking about other potential uses of Markov Chains within the fields of Marketing and Software Engineering. A very exciting meeting point between Software Engineering and Marketing is the study of Search Engine Optimisation. Search Engine Optimisation can be incredibly useful to us as students going into an extremely competitive environment after our studies, so learning the fundamentals of SEO also has the practical application of potentially increasing our own website's popularity. However, that's not the most exciting part of this bit of research. What I want to explore is what factors feature into SEO, how that data can be manipulated through the use of Markov Chains, and how the data can best be visualised. But in order to get to the more fun parts, first I need to gain a greater understanding of SEO.

No individual factor could ensure the success of a page by itself; they must all work in relation to each other for optimal results.

On-Page Search Engine Optimisation (SEO) refers to the practice of editing individual web pages to help them gain relevancy and rank higher amongst search engine results. On-Page SEO relates to the type of content published on a page, how user-friendly that page is, how well designed it is, whether HTML has been used to its full potential, and more. Additionally, On-Page SEO is almost entirely under the control of the page's publisher. Although there are dozens of different factors that go into On-Page SEO, the most significant ones are listed and examined below. It's important to note that no single factor will ensure search engine success; rather, all these components must be utilised and well-developed together.

2017-SEO_Periodic_Table_1920x1080-800x450.png

Content

The content of a website is the single most important factor that determines its success within search engines. No matter how many Search Engine Optimisation techniques are applied, poorly researched and low-quality content is highly unlikely to climb to the top of the search results. Judging what constitutes good content can be both subjective and difficult; however, the three most noteworthy aspects of it are detailed below.

Quality

Quality indicates how valuable the contents of a page are to its visitors. High quality content goes beyond what other similar sites offer, satisfies users, and incentivises them to stay on the page for long periods of time. What does the page provide that users would not be able to find elsewhere? Is the information on the site distinct or useful? Those are some of the questions one may ask when determining the quality of a site's content.

Panda attempts to mimic a human point of view.

Search engines use a variety of techniques to determine whether a page contains high quality content. User engagement metrics are key to making that judgement. After a user searches for a query, the search engine lists a series of results to them. If they click on the first result, then immediately click back and move on to the second, then the first result must have been unsatisfactory to that user. By gathering millions of data points about the time visitors spend on a page, search engines can estimate how valuable that content is. High quality content engages visitors and keeps them on the page for more than a few seconds. In addition to user engagement metrics, search engines also use machine learning to determine the quality of a page's content. Google's 2011 Panda update significantly changed their ranking algorithm. The company used human evaluators to rate the quality of thousands of websites and then incorporated machine learning to mirror the evaluation of those humans. Once it was able to evaluate websites in the same manner as humans, Panda was released across the web, assessing every web page. The key bit to remember is that Panda attempts to mimic a human point of view. Therefore, the content of a website must be designed to be valuable and useful to humans, rather than to attempt to artificially rank higher. More on this will follow shortly.

Keywords and Word Choice

Keyword research is an essential and high-return factor for search engine success if done correctly. A website must strive to rank for the correct keywords based on that website's market. Researching the market's demand for specific keywords can not only provide a target for search engine optimisation, but also reveal information about what users want and need and how that changes, thus enabling websites to adapt. Additionally, appropriate keywords are more likely to direct interested visitors to the site, rather than just general users who are more likely to click away. Therefore, appropriate use of keywords can feed into content quality, as interested users will stay on the page longer. In addition to finding the adequate keywords for an individual site, it's also important that they are used throughout the pages in a natural manner. Flooding a page with keywords in an effort to artificially rank higher in the search results is highly inefficient. Keywords should rather flow naturally, avoiding unnecessary repetition. This will make the page easier to read for humans, making the site user-friendly and improving the quality of its content.

Vertical Search

A search engine performs a vertical search when it looks only for specific types of results to display. For example, Google Images is a specialised search engine that only provides images to its users. A web page is likely to rank higher if it incorporates a variety of relevant media that can be efficiently picked up by vertical searches as well. These can include images, video, news, maps, and other forms of media. However, as with the use of keywords, it’s important that all of these elements follow the natural and logical flow of the page and should not be included if they are irrelevant.

Design and Architecture

The structure of a web page refers to how easily that page can be read and understood by both search engines and humans. Even if a website is filled with high quality content, inadequate structure and architecture can negatively impact its success in search engine ranking.

Crawlability

Crawlability refers to how easy or difficult it is for a search engine to go through a web page and store a copy of it in its index. When a user searches for something, the search engine goes through that index to provide the most relevant results. Therefore, if the engine has had difficulty crawling through a page, it may not provide that result to the user. The easiest type of information for a search engine to index is HTML text, so the most common type of data on a web page should be in that format. JavaScript, Flash, and even images are often ignored or devalued by crawlers. However, there are ways to have a variety of visual content and still maintain great crawlability. Using alt-text for images, plugins for Flash and JavaScript, and providing transcripts for videos are all ways that such information can be indexed more easily.

User Experience and Interface

While Crawlability refers to how the search engine interprets the data on a web page, User Experience and Interface refers to how easy it is for humans to read and understand its content. The page needs to be intuitive to use and navigate, while also providing direct and relevant information for the query. Additionally, a professionally designed website with a well-structured layout is likely to fare better in the search engine rankings. Users typically consume content that is not only useful and innovative, but also aesthetically pleasing and clear, which is why the overall design of a web page must account for that.

Mobile Version

As of 2015, it has been recorded that more Google searches take place on mobile devices than on desktop. Therefore, websites that are mobile-friendly tend to be ranked higher than those without mobile support due to the large number of searches on mobile. Not only that, but websites that are optimised for such devices also look and feel better for the users themselves, which feeds into the content section of On-Page SEO Factors. For those cases where a website also has an app, both Google and Bing offer app indexing and linking, which means that users can be directed from the search results straight onto an app.

HTML

HTML is the underlying code of all websites and web pages. It's important to understand HTML, because that is how a page's publisher can communicate efficiently with search engines and thus boost their position in the results page. Dozens of HTML tags send specific signals to search engines about the importance and hierarchy of the content. Below is a summary of some of the most important tags and ways to approach HTML when optimising a site for search engines.

Title Tag

The Title Tag is arguably the most important tag when it comes to Search Engine Optimisation. It clearly states what each individual page of a website is about and what sort of content users are likely to find if they view that page. For optimal results, titles should be very clear and descriptive, and should ideally include specifics about what users are likely to find on the page. Additionally, titles should include keywords based on the keyword research mentioned above in order to take full advantage of the title tag and its visibility on the search results page.

Overall Structure

This section is dedicated to other HTML tags that are less significant to SEO success, but are still worth noting and managing correctly. The meta description of a page serves as a short blurb of that page's content. This text appears directly underneath the title on the search engine results page. To take full advantage of the meta description, one needs to use the same keywords in that text as in the title. This continuity helps search engines understand what the page is about, which in turn helps them rank that page more efficiently. Additionally, header tags are a good way of naturally including keywords in the content, while also providing search engines with more information on what the page is about. Not only that, but header tags also tend to break down large bulks of text, thus making the page easier to consume for humans as well. However, as with keywords, it is important that headers are used naturally within a page rather than artificially structured and overused. Good UX and UI take priority over efficient header and meta description use.

Relationships between On-Page SEO Factors

Now that the most important On-Page Search Engine Optimisation success factors have been detailed and explained, it's important to examine how they relate to each other and how significant each one of those factors is to the overall ranking of a page. The key thing to remember is that no individual factor can ensure the success of a page by itself; they must all work in relation to each other for optimal results. Nonetheless, some factors carry more weight than others, which can give publishers an idea of what they should focus on. The relationships and weights of each of the discussed factors are summarised in the following charts.

SEO Factors do not always work individually, and usually efforts to make improvements in one factor also positively impact another. For example, excellent market research on the most appropriate Keywords for a web page can also improve that page's Vertical Search, Content Quality, Title, and Overall HTML Structure.

The relative impact of each Search Engine Optimization On-Page Factor is rated on a scale of 1-5. Content Quality and Keywords are the most influential factors in terms of ranking higher in search results, while Vertical Search and HTML Structure are not as crucial to search engine success.

Manipulating Probabilities through Markov Chains

Markov Chains are a method for controlling chance and manipulating probability based on the results we want.

Having established the key variables that feature into On-Page SEO, how they affect each other, and what their relative weights are, it becomes apparent that all of this complex data can be fed into Markov Chains. The Start State Matrix would hold the publisher's estimated values for all of the On-Page SEO variables, while the Transition Matrix would show how much those variables would improve by. What's more, by changing certain values in the Transition Matrix, one could estimate which strategies to implement in order to help a website rank higher. Let's look at an example with a website we will call Bubble Unicorn Donuts. Bubble Unicorn Donuts currently ranks 10th on the 2nd page of Google when one searches "donuts". As that does not provide a lot of traffic, the publisher of Bubble Unicorn Donuts would like to boost their rank for the keyword "donuts". At that point, the publisher would have a look at all of the On-Page SEO variables mentioned above and input values for them in their Start State Matrix. Then the publisher would have to establish which variables they would like to improve. It turns out that Bubble Unicorn Donuts does not support a mobile version of their website. Therefore, by increasing the value for Mobile Version in the Transition Matrix, the publisher of Bubble Unicorn Donuts would be able to estimate the probability of traffic increasing if they were to add mobile support. After they've run all of the numbers, it turns out that a mobile version of their site is most likely going to boost their rank by a couple of positions. The publisher of Bubble Unicorn Donuts could then take a look at the other variables, change them further, and theoretically estimate how to get into the top five results for "donuts".
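As a rough sketch of what this could look like in code (the states, monthly time scale, probabilities, and class name below are entirely invented for Bubble Unicorn Donuts; real SEO data would be needed for anything meaningful), the whole prediction boils down to multiplying a state vector by a transition matrix over and over:

using System;

public class SeoMarkovSketch{

    //One Markov step: nextState[j] = sum over i of state[i] * transition[i, j].
    public static double[] Step(double[] state, double[,] transition){
        int n = state.Length;
        var next = new double[n];
        for (int j = 0; j < n; j++){
            for (int i = 0; i < n; i++){
                next[j] += state[i] * transition[i, j];
            }
        }
        return next;
    }

    public static void Main(){
        //States: [page 1, page 2, page 3 or lower] for the keyword "donuts".
        //Bubble Unicorn Donuts currently sits on page 2.
        double[] state = { 0.0, 1.0, 0.0 };

        //Hypothetical monthly transition probabilities after adding a mobile
        //version of the site. Rows are the current state, columns the next state.
        double[,] transition = {
            { 0.90, 0.08, 0.02 },   //page 1 mostly stays on page 1
            { 0.35, 0.55, 0.10 },   //page 2 has a decent chance of reaching page 1
            { 0.05, 0.30, 0.65 }    //page 3+ climbs slowly
        };

        //Estimate where the site is likely to rank over the next six months.
        for (int month = 1; month <= 6; month++){
            state = Step(state, transition);
            Console.WriteLine("Month {0}: page 1 = {1:P0}, page 2 = {2:P0}, page 3+ = {3:P0}",
                month, state[0], state[1], state[2]);
        }
    }
}

Changing the second row of the matrix is the "what if we add a mobile version" experiment: re-run the loop with a different value and compare where the probability mass ends up.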

This manipulation of probability and the incredible predictive power of Markov Chains can allow users to completely shift the tides when it comes to the performance of a site or marketing strategy. On the other hand, Markov Chains can aid in making gameplay varied, exciting, and difficult to predict by dictating the behaviour of NPCs based on their individual stats and personalities. They are a method by which we can control chance and manipulate probability based on the results we would like and factors we may wish to take into account.

Week 4: Markov Chains by Valzorra

Building the World

After our introduction to matrices from last week, which can be found in Week 3: The Matrix, we moved on to the rather fascinating Markov Chains in our Building the World Session. In short, Markov Chains express transitions from one state to another based on certain probabilities. They answer the question, after a certain event has occurred, what is the next most likely event to take place? Markov Chains are all about predictions based on what has already happened and on a set of actions and events we can choose from. This ties in quite nicely with our previous work on Chance and Probability with James, as well as my own research into that field of mathematics, which can be found in Week 3: Research on Mathematics. This transitioning data is represented in the form of matrices, which is why it was crucial to have an understanding of what a matrix is before moving on to the Markov Chains. Traditionally, Markov Chains have been utilised in the field of Marketing as they provide an excellent way of structuring data and making estimations about the success of certain marketing strategies, which I will be taking a closer look at shortly.
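In symbols (this is just the standard notation, not something we wrote down in class): if π_t is a row vector of probabilities over the possible states at step t, and P is the transition matrix, then every prediction boils down to repeated matrix multiplication:

\[ \pi_{t+1} = \pi_t P, \qquad P_{ij} = \Pr(\text{next state is } j \mid \text{current state is } i) \]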

maxresdefault.jpg

For games designers, Markov Chains can be rather fascinating, as they are key components in making NPCs and AI with individual personalities and changing behaviours based on those personalities. For example, if an enemy NPC has been wounded by the player, then a tank NPC might choose to engage, while a squishier one may opt to gain range and attack from afar. The same event occurs, namely that an NPC has been wounded, yet responses vary and behaviours change based on the data within our State and Transition Matrices. Through these values and probabilities we have the opportunity to create an exciting and unique gameplay experience, one that would excite players because it would be difficult to predict and take advantage of. In the session itself, we calculated how the Safety Index of certain sniper positions would change if an NPC were to shoot from one. This is another example of how gameplay and AI within video games could gain variety and become more fun, making better and more unpredictable decisions about how to react to players. Below I have attached my work on Markov Chains from the class, their basic definitions, and examples of how they operate.

Markov1.JPG
Markov2.JPG
Exploration of Markov Chains through analysis of the Pokemon Battle System.

In addition to dissecting the example related to sniper positioning, we also took a look at the popular Pokemon Battle System and came up with our own Pokemon, exploring how they would react based on certain probabilities we inputted. After crunching the numbers, we realised that if our Pokemon starts off with its strongest attack, then based on the probabilities we chose together, it is most likely to attack with the same ability again. This opens up a variety of possibilities in terms of design, because we have the option of estimating all of our probabilities on paper and completing the entire design of a sophisticated AI without writing a line of code. This is absolutely amazing, because not only do Markov Chains allow us to make accurate predictions of probability based on the data we input, but they also enable us to change and manipulate that data based on the results we desire. This is an exceptionally interesting idea to me, because it gives users the power to learn from their mistakes and to change the data based on the results they would like, without any excessively complicated or time-consuming processes.
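As a tiny sketch of that "same ability again" observation (the moves and probabilities below are invented for illustration; the values we actually chose in class are in the attached pages above), the most likely next move is simply the largest entry in the current move's row of the transition matrix:

using System;

public class PokemonMarkovSketch{

    public static void Main(){
        //Invented probabilities purely for illustration.
        string[] moves = { "Strong Attack", "Weak Attack", "Defend" };
        double[,] transition = {
            { 0.6, 0.3, 0.1 },   //after a Strong Attack
            { 0.4, 0.4, 0.2 },   //after a Weak Attack
            { 0.5, 0.3, 0.2 }    //after Defending
        };

        //The Pokemon opened with its strongest attack.
        int current = 0;

        //The most likely next move is the largest entry in the current row.
        int best = 0;
        for (int j = 1; j < moves.Length; j++){
            if (transition[current, j] > transition[current, best]){
                best = j;
            }
        }
        Console.WriteLine("After a {0}, the most likely next move is {1} ({2:P0}).",
            moves[current], moves[best], transition[current, best]);
    }
}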

Board Games with Dice can be entirely represented by Matrices and Markov Chains. What matters in those games is the current state of the board, which is then changed to the next state by the dice. The next state is entirely dependent on the current one.

After our detailed examination of Markov Chains and why they are freaking awesome, we proceeded to explore some more principles in Calculus. We had a brief overview of all of the principles and rules we had covered up until that point, and then James proceeded to explain the wondrous and mystical powers of the number e, or Euler's Number. This is an irrational number, approximately equal to 2.7182818284590452353, that spans multiple areas of mathematics. The reason e is such an amazing number is that it makes seemingly intractable problems workable: it is the natural base of exponential growth, and the function e^x is its own derivative, which makes it invaluable in calculus. It offers a solution where none seems apparent, which makes it incredibly useful. Below I have attached my work in relation to this number and how we essentially derived its value.

Euler's Number.JPG
Euler's Number 2.JPG
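For reference, the standard definition (rather than the exact derivation from class, which is in the pages above) pins e down either as a limit or as an infinite series:

\[ e = \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^{n} = \sum_{k=0}^{\infty} \frac{1}{k!} \approx 2.718281828459045\ldots \]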

Tech Workshop

After our Markov and Calculus extravaganza in the morning, we went over to Studio 16 for individual one-to-one sessions with James about mechanics we may be interested in developing. As I am very much focused on Mathematics, Chance, and Probability, I was interested in how one could go about manipulating certain probabilities. I wondered, if I had three standard six-sided dice, A, B, and C, and threw them one after the other, how I could go about changing the order in which the values appeared. For example, if I rolled a 1, another 1, and a 6, how could I rearrange my result into the order 1, 6, 1? The reason I was interested in this concept was primarily the idea of data and chance manipulation based on certain knowledge.

dice-gambling-1024x768-wallpaper.jpg

After James gave me a nice and clean method for achieving the task, I went to one of the computers to try and code it. However, as I was setting up, I realised that it would be a great exercise to try and craft my own random number generator to give me the values for my three dice. Additionally, as this is Probability at its purest, I thought it would be an excellent application of my research on the topic and of the things we have covered in previous Tech Workshop sessions. For this piece of code, I referred to my handout on Linear Congruential Generation and applied the same method for obtaining a pseudo-random number within a limit. The code, which I am happy to confirm does in fact work, is attached below. I should note that as the computer I was on did not have Unity installed, I coded this in an online compiler in C#. But anyway, who cares, the thing works!

using System;           
public class Program{

    //Getting the current time which will be used as our seed value
    public static String GetTimestamp(DateTime value){
        return value.ToString("ssffff");
    }
    public static void Main(){
        
        //Initialising variables
        string inputM;
        int x, m, s;
        
        //Inputting m
        Console.WriteLine("Enter Value for m: ");
        inputM = Console.ReadLine();
        m = Convert.ToInt32(inputM);
        
        //Ensuring m is greater than 0 (m is the modulus, so it cannot be 0)
        if (m<=0){
            do{
                Console.WriteLine("The value of m must be greater than 0. Please enter a different value for m: ");
                inputM = Console.ReadLine();
                m = Convert.ToInt32(inputM);
            }while(m<=0);
        };
        Console.WriteLine("Your value for m is {0} ", m);
        
        //Getting the seed value based on time
        String timeStamp = GetTimestamp(DateTime.Now);
        s = Convert.ToInt32(timeStamp);
        
        //Ensuring that s<m based on seconds;
        if (m<s){ 
            do{
            s = s/10;
            }while(m<s);
        };
        
        //Outputting the accurate seed value
        Console.WriteLine("Seconds right now: {0}", timeStamp);
        Console.WriteLine("The value of s is: {0}", s);
        
        //Arbitrarily choosing the multiplier t and increment u
        //(the built-in Random is only used to pick these two constants)
        Random rnd = new Random();
        int t = rnd.Next(0, m);
        int u = rnd.Next(0, m);
        
        //Outputting t and u
        Console.WriteLine("The value of t: {0}", t);
        Console.WriteLine("The value of u: {0}", u);
        
        //Calculating x with the linear congruential formula: x = (t*s + u) mod m
        x = ((t*s) + u)%m;
        Console.WriteLine("Your random number is: {0}", x);
        
    } 
}

Unfortunately, although I had successfully coded a random number generator based on the Linear Congruential Generation method, by the time I had finished this task the session had concluded, so I wasn't able to fully incorporate the swapping mechanic. However, I did have time to consider how to approach the process: what I intend to do is simply add three random numbers from my generator into an array, and then swap the elements of the array into the desired positions based on the algorithm James showed me at the start of the session. A rough sketch of that swap is below.
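This is a minimal sketch of the follow-up step as I currently picture it (just the array-swap idea described above, not necessarily the exact method James showed me, and with the values hard-coded rather than taken from the generator):

using System;

public class DiceSwap{

    public static void Main(){
        //Sketch only: the three values would come from the generator above;
        //they are hard-coded here to match the example from earlier in the post.
        int[] dice = { 1, 1, 6 };   //rolled order: A, B, C

        //Swap the second and third elements so (1, 1, 6) becomes (1, 6, 1).
        int temp = dice[1];
        dice[1] = dice[2];
        dice[2] = temp;

        Console.WriteLine(string.Join(", ", dice));   //prints: 1, 6, 1
    }
}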

Thoughts and Reflection

I found the material we covered this Tuesday incredibly useful for my personal research into Mathematics, and specifically Probability and Chance. I am fascinated by the idea of being able to make predictions about certain probabilities and then to change the data in order to obtain the results we desire. This is an excellent method for manipulating chance and probability, and it can be extremely applicable not only in games design and AI, but also in fields such as Marketing and Training. Markov Chains have immense power, as they can essentially map out all possible future outcomes based on a set of collected data, which is absolutely fascinating to me. I'm also incredibly interested in figuring out how to visualise this data in a digestible format which anyone could understand, even without knowledge of matrices. More on Markov Chains and their applications will follow soon as a continuation of my research into this field of Mathematics.