Innovative and fascinating, this series of typographic experiments expands our understanding of the role technology can play in type design (and construction).

When we hear the term neural pathways we typically think of the brain. The human brain is a biological neural network (an interconnected web of neurons transmitting elaborate patterns of electrical signals). But the same kind of complex network can also be modelled in mechanical and machine-based computing. Artificial neural networks (ANNs) are used in machine learning and cognitive science. (Technology is not really my thing, so forgive the clumsy explanation, but I most readily relate this to how Apple Music makes 'For You' suggestions based on your playlists and listening patterns. The network collects data about my habits and tastes, and over time it learns to make playlists and recommendations to suit me.)

Wikipedia describes artificial neural networks as "based on the central nervous system of animals (in particular the brain)" and used "to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected 'neurons' which exchange messages between each other. The connections have numeric weights that can be tuned based on experience, making neural nets adaptive to inputs and capable of learning. Like other systems that learn from data, neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming."

Together, the artificial neurons are capable of making predictions based on what they "know."
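
To make that idea a touch more concrete, here is a minimal sketch (in Python, and nothing to do with Bernhardsson's actual code) of a single artificial 'neuron': a weighted sum of its inputs squashed through a non-linearity, where the weights are the part that gets tuned during learning.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed through a sigmoid non-linearity.
    # "Learning" means nudging the weights until outputs match training examples.
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

# Toy illustration: three inputs and three tunable weights.
x = np.array([0.2, 0.7, 0.1])
w = np.array([1.5, -0.8, 0.3])  # the "numeric weights" the network tunes
print(neuron(x, w, bias=0.1))
```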

Late one evening Erik Bernhardsson (a former Spotify engineer who now works at Better) decided he wanted "to get a bunch of fonts". An hour later he had a bunch of scripts pulling down fonts from the net, and after a few days he had accumulated more than 50k fonts.

After collecting the fonts, he decided to use this massive dataset for a series of visual (and program-based) experiments, training a neural network to learn, interpret and create new characters and fonts.

In one experiment he asked the model to complete a font with one missing character. Using only its knowledge of the other letters, and its training, it produced a character that it thought would best fit the font in question. In some cases, the results are dead-on.

"The model has seen other characters of the same font during training, so what it does is to infer from those training examples to the unseen test examples," Bernhardsson explains.

In the image above the actual letter is on the left, while the neural network's best guess is on the right.
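
Bernhardsson's model does this with a trained neural network, but the underlying idea of inferring an unseen glyph from the seen ones can be illustrated with a crude, non-neural stand-in: find the training font whose known glyphs best match the incomplete font, and borrow its version of the missing character. A hypothetical nearest-neighbour sketch, assuming every glyph is a greyscale bitmap of the same size:

```python
import numpy as np

def infer_missing_glyph(known, library, missing_char):
    """Nearest-neighbour stand-in for the neural model.

    known:        dict mapping character -> 2D bitmap (the incomplete font)
    library:      list of such dicts (the training fonts)
    missing_char: the character we never saw for this font
    """
    def distance(font):
        # Total pixel difference over the characters both fonts share.
        return sum(np.abs(font[c].astype(float) - known[c].astype(float)).sum()
                   for c in known if c in font)

    candidates = [f for f in library if missing_char in f]
    closest = min(candidates, key=distance)   # the most similar training font
    return closest[missing_char]              # borrow its missing character
```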

Erik also asked the network to create an ultimate average of every typeface it had trained on, resulting in the softly focused, archetypal font shown below, which is reminiscent of the beautiful average-font experiment by Moritz Resl.

The overall average was then distilled into the mean of each character, creating a more legible and solid interpretation of letter structure (shown below).
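
One plausible way to read this step (a sketch, not Bernhardsson's code, and assuming rasterized greyscale glyph bitmaps of a common size) is a simple pixel-wise mean of the same character across every font:

```python
import numpy as np

def average_glyph(bitmaps):
    """Pixel-wise mean of one character rendered by many fonts.

    bitmaps: array of shape (num_fonts, height, width), greyscale glyphs.
    Averaging thousands of fonts produces the soft, blurry archetype.
    """
    return np.mean(bitmaps, axis=0)

# e.g. stacking the lowercase 'a' from every font:
# a_stack = np.stack([bitmap_of(font, 'a') for font in fonts])  # bitmap_of() is hypothetical
# archetype_a = average_glyph(a_stack)
```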

Erik then used interpolation to blend typographic forms and genres. "We can also interpolate between different fonts in continuous space. Since every font is a vector (mathematical description), we can create arbitrary font vectors and generate output from it. Let’s sample four fonts and put them in the corners of a square, then interpolate between them!" (shown below).
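
The blending itself can be sketched as a bilinear interpolation between the four corner vectors (a hypothetical sketch; decoding each interpolated vector back into glyphs is the job of the trained model):

```python
import numpy as np

def bilinear_grid(corner_vectors, steps=10):
    """Interpolate between four font vectors placed at the corners of a square.

    corner_vectors: [top_left, top_right, bottom_left, bottom_right], each 1D.
    Returns a steps x steps grid of intermediate vectors, one per cell.
    """
    tl, tr, bl, br = (np.asarray(v, dtype=float) for v in corner_vectors)
    grid = []
    for y in np.linspace(0.0, 1.0, steps):
        row = []
        for x in np.linspace(0.0, 1.0, steps):
            top = (1 - x) * tl + x * tr          # blend along the top edge
            bottom = (1 - x) * bl + x * br       # blend along the bottom edge
            row.append((1 - y) * top + y * bottom)  # blend top and bottom
        grid.append(row)
    return np.array(grid)   # shape: (steps, steps, vector_dim)
```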

In another test, he asked the model to create an entirely new font based on a random vector from the training set. What resulted was a series of new fonts based on the network’s past knowledge. You can watch these new forms evolve as animated GIFs on Erik's site. Bernhardsson explains: "A lot of the variation is on a continuum in terms of spacing, boldness, etc., so a lot of the 'new' fonts generated that way are basically just new ways to recombine those variables."
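
One hedged way to picture sampling a "new" font vector (a sketch only, with random data standing in for the real embeddings and the decoder left hypothetical):

```python
import numpy as np

# Stand-in data: one row per training font's embedding vector.
font_vectors = np.random.rand(1000, 40)

# Draw a random point from roughly the same spread as the training fonts.
mean, std = font_vectors.mean(axis=0), font_vectors.std(axis=0)
new_font_vector = mean + std * np.random.randn(40)

# Decoding new_font_vector back into a full character set is the trained
# generator's job; moving along individual coordinates recombines traits
# such as spacing or boldness along a continuum.
```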

Bernhardsson writes, "Another cool thing we can do since we have all fonts in a continuous space is to run t-SNE on them and embed all fonts into the 2D plane." Here he maps the visual language of each typeface into a flat two-dimensional space, with similar forms clustered together in the plane (shown below).
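
t-SNE is a standard dimensionality-reduction technique, and the mapping step can be sketched with scikit-learn (random vectors stand in for the model's real font embeddings here):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in data: one row per font. In the real experiment each row would be
# the model's learned embedding vector for one of the ~50k fonts.
font_vectors = np.random.rand(1000, 40)

# t-SNE squashes the high-dimensional font space down to two dimensions while
# trying to keep similar fonts close together, which is what produces the
# clusters of related designs in the map.
coords = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(font_vectors)
print(coords.shape)  # (1000, 2): an (x, y) position for every font
```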

Bernhardsson encourages anyone who’s interested to download the data and play around with it!

Sources: Reddit user AmazingThew on r/Typography, Bernhardsson's blog, and FastCo.