Friday, 23 September 2016

Technical Blog Entry 1


For these technical blog posts I shall be doing a review of methods for:

- Procedural Generation -



What is Procedural Generation?

Instead of explicitly recording information, an algorithm or procedure is used to create it when required. Whilst the phrase is now most commonly seen in reference to content generation in computer games, it has been used by people long before the development of computers to produce interesting results.

The most basic form of procedural generation is to use randomness to determine something's state. Imagine shuffling a pack of cards to play hearts or solitaire. The order that the cards end up in has been "procedurally" created in the shuffling and will have a large impact upon the game and whether it will be possible to complete.
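
For the programmers among you, the shuffle itself is a tiny procedural algorithm. Here is a minimal sketch in JavaScript of the standard Fisher-Yates method (my own illustration, nothing to do with the applets further down):

```javascript
// Fisher-Yates shuffle: after the loop, every possible ordering of
// the deck is equally likely.
function shuffle(deck) {
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // pick from 0..i
    [deck[i], deck[j]] = [deck[j], deck[i]];       // swap cards i and j
  }
  return deck;
}

// Build a 52-card deck and shuffle it.
const deck = [];
for (const suit of ["S", "H", "D", "C"]) {
  for (const rank of ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]) {
    deck.push(rank + suit);
  }
}
console.log(shuffle(deck).slice(0, 5)); // e.g. [ '7D', 'KS', '2H', '10C', 'QD' ]
```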

Complete randomness is very noisy and not very interesting. The human mind is a powerful pattern finder and can identify and ignore data with no pattern at all. To avoid this, the output can be structured and made to follow rules of generation; the created patterns and shapes will be much more appealing to the eye. Some examples of procedural generation are shown to the right: map generation algorithms that create landmarks and geographic features.

John Carmack famously disparaged Procedural Generation as simply another form of compression algorithm. (link) In some sense he was correct: procedural generation is fantastic for creating textures for organic surfaces like wood, stone or the swirling flux of a fire from only a tiny amount of stored data.
A well-known example of procedurally generated 3D models and textures is the demoscene project kkrieger. Imagine a whole Doom-like shooter in just 96 KB!

In this first blog post I will be starting with some basics of procedural generation in the realm of text generation. To that end I have created several web applets which you can play around with to get procedurally generated output :D

I do hope they work OK; so long as you are using a modern browser (i.e. NOT Internet Explorer) they should function. It stands to reason that they won't work without JavaScript enabled!

Why Randomness is not enough

Random Stream of Letters

The most naive way to generate text which is guaranteed to be novel (that is, non-repeating) would be to string letters together with no rhyme or reason to their order. Give the web applet to the side a try and see what it produces...

Doesn't look great, does it? It's only a slight improvement over opening up a binary file as text and getting a garbled mess. The main issue is that the words don't have normal lengths (around 5 letters on average in English); here a "word" might span multiple lines!
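
For the curious, a generator like this needs only a few lines. The sketch below is an illustrative reconstruction in JavaScript, not the applet's actual source:

```javascript
// Purely random letter stream. Every character, including spaces and
// punctuation, is equally likely, which is exactly why the output
// looks so unnatural.
const alphabet = "abcdefghijklmnopqrstuvwxyz .,!?";

function randomStream(length) {
  let out = "";
  for (let i = 0; i < length; i++) {
    out += alphabet[Math.floor(Math.random() * alphabet.length)];
  }
  return out;
}

console.log(randomStream(200));
```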

Another aspect of this output that instantly marks it out as artificial is that the punctuation characters are incorrectly placed. In real text they should only appear at sentence endings, every 5-10 words. To get more useful output you need the generation algorithm to observe some aspects of the structure of real sentences.

Simple Example: Weighted Text Generation


Sentence Generator

A marked improvement over purely random text can be observed if you use a pre-existing piece of text as your source. By reading through it and noting when a token (a word or letter) occurs and what follows it, you can create a table of frequencies. By looking at these pairs of tokens (also known as bi-grams), you can create an algorithm that randomly picks the next token, weighted by how frequently the pair occurs.

The web applet above does so with three different source texts, using words as the tokens it tracks. Ignore the punctuation marks, as they don't really work (ideally punctuation would be tracked as tokens too).
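
To make the idea concrete, here is a minimal JavaScript sketch of word-level bi-gram generation (again, an illustration rather than the applet's real source). Storing each follower once per occurrence means a plain uniform pick over the list is automatically weighted by frequency:

```javascript
// Record, for every word in the source text, the word that followed
// it. Duplicates in the follower lists encode the weighting.
function buildBigrams(text) {
  const words = text.toLowerCase().split(/\s+/);
  const table = {};
  for (let i = 0; i < words.length - 1; i++) {
    const cur = words[i];
    if (!table[cur]) table[cur] = [];
    table[cur].push(words[i + 1]);
  }
  return table;
}

// Walk the table with uniform picks over the follower lists, which is
// equivalent to a frequency-weighted random choice.
function generate(table, start, count) {
  let word = start;
  const out = [word];
  for (let i = 0; i < count; i++) {
    const followers = table[word];
    if (!followers) break; // dead end: word only appeared last
    word = followers[Math.floor(Math.random() * followers.length)];
    out.push(word);
  }
  return out.join(" ");
}

const corpus = "the cat sat on the mat and the dog sat on the rug";
console.log(generate(buildBigrams(corpus), "the", 10));
```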

A neat example of procedural text generation is to successively tap the predictive text options on a smartphone. You might learn a lot about your most frequently used words and sentences!

[Image: what my phone's predictive text produces. I, uh, talk about tech a lot...]


Like many algorithms, the output of this bi-gram analysis depends on the "correctness" of the input data: put nonsense sentences in and you'll get nonsense out. Another property to consider is how tightly fitted the model is to the input training data. It can often be advantageous to add an element of noise or randomness to the algorithm to avoid repeating the sequences in the input data *exactly*.

Whilst bi-grams can produce impressive results in terms of looking real-ish, sentences composed of them tend to be meaningless: all structure and no content. Whilst it is still an open problem how many rules go into producing real human language texts, there is always the option of throwing more computing power at deeper frequency analysis.



While bi-grams are concerned with the immediately subsequent token, there is nothing stopping an enterprising computational linguist from measuring tri-gram (next two tokens) frequencies, or any n-gram frequencies. There is also fascinating work being done on word vectors at the moment. See here for a tutorial: (www.tensorflow.org)
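
The generalisation is a small change to the bi-gram sketch above: the table key becomes the joined (n-1)-token context rather than a single word. Another hedged, illustrative sketch:

```javascript
// Generalise from bi-grams to n-grams: each key is the (n-1)-token
// context, and the follower lists again encode the weighting.
function buildNgrams(text, n) {
  const words = text.toLowerCase().split(/\s+/);
  const table = {};
  for (let i = 0; i + n - 1 < words.length; i++) {
    const key = words.slice(i, i + n - 1).join(" ");
    if (!table[key]) table[key] = [];
    table[key].push(words[i + n - 1]);
  }
  return table;
}

const tri = buildNgrams("the cat sat on the mat and the cat ran", 3);
console.log(tri["the cat"]); // [ 'sat', 'ran' ]
```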

New Word and Name Generation


New Word Generator

In the other direction, instead of trying to replicate real languages, procedural generation can produce fantastic results when creating completely novel words and languages.

Finer-grained procedural generation of novel words can be done similarly to the weighted sentence generation above: instead of tracking the frequency of word bi-grams, you track pairs of letters. The final web applet gives an example of this, along with consonant-vowel-consonant random generation for comparison. The source texts (also referred to as a corpus) for this applet are a bi-gram analysis of Google Books' English texts: (norvig.com/mayzner.html) and an equivalent for German texts from here: (www.staff.uni-mainz.de/pommeren/Cryptology/Classic/8_Transpos/BigramsD1.html)
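
The letter-level version is structurally the same as the word-level one. The sketch below uses a tiny hand-rolled frequency table purely for illustration, rather than the published English or German bi-gram data the applet uses:

```javascript
// Toy letter bi-gram table: each letter maps to possible followers
// with explicit weights. A real table would cover all 26 letters.
const followers = {
  t: { h: 5, e: 3, o: 2 },
  h: { e: 6, a: 2, i: 2 },
  e: { r: 3, n: 2, " ": 5 },
};

// Pick a key from a weights object, proportionally to its weight.
function weightedPick(weights) {
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (const [letter, w] of Object.entries(weights)) {
    if ((r -= w) < 0) return letter;
  }
}

// Grow a word letter by letter until we hit a space, an unknown
// letter, or the length cap.
function makeWord(start, maxLen) {
  let word = start;
  while (word.length < maxLen) {
    const next = weightedPick(followers[word[word.length - 1]] || {});
    if (!next || next === " ") break;
    word += next;
  }
  return word;
}

console.log(makeWord("t", 8)); // e.g. "then" or "ther"
```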

A further step in creating new words would be to create the bi-gram frequencies procedurally. What would a language with 'z' as common as 'e' look like? Alternatively you could improve the 'englishness' of a text by looking at longer combinations of letters, triplets, quadruplets and so on (tri-grams, quad-grams or n-grams).


[Image: how many pieces has the Triforce been split into??]

Summary

I've enjoyed creating this first blog post on procedural generation, especially the interactive web applets. I hope my enthusiasm has been well received by you, the reader. Thank you for having a read-through, and perhaps you've learnt something too!

I could happily continue on procedural text generation for another three weeks; alas, there are many more topics to discuss. Next fortnight's technical blog post will be on the generation of shapes. I'm looking forward to it. Perhaps I'll do fractals, Menger sponges or tree generation (what is an L-System, exactly?) :D
