BRERIN: A PhilosoBot (at random temperatures for 2 hours)

BRERIN: A Philosobot: Trained on the collected book-length works of Erin Manning and Brian Massumi: Producing single sentences for 2 hours and 2 minutes at random temperatures: Temperature is a hyperparameter of neural nets that influences randomness: Think of it as complexity fluctuation.
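For the technically curious, a minimal sketch of what that hyperparameter does during sampling (an illustration in PyTorch, not the project's exact generation code; the helper name sample_word is mine): the model's word scores are divided by the temperature before being turned into a probability distribution, so low temperatures sharpen the choice toward the most probable word and high temperatures flatten it toward randomness.

import torch

def sample_word(logits: torch.Tensor, temperature: float) -> int:
    """Pick one vocabulary index from a vector of word scores.

    temperature 0.25 -> sharp distribution, cautious near-greedy phrasing
    temperature 1.25 -> flat distribution, wilder and noisier phrasing
    """
    weights = torch.exp(logits / temperature)   # rescale scores before sampling
    return torch.multinomial(weights, num_samples=1).item()

# One random temperature per generated sentence, drawn from the range
# listed in the tech details below (0.25 to 1.25).
temperature = float(torch.empty(1).uniform_(0.25, 1.25))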

~ + ~

BRERIN is a homage to a sustained, diligent, fertile intellectual oeuvre.

Erin Manning and Brian Massumi are thinkers; they operate across diverse terrains (radical empiricism, speculative pragmatism, process philosophy); they use language to explore cultural thought as a process; they co-direct the SenseLab. I am grateful to them for their generosity in inviting me to explore their work with machine learning and in donating their writings to this process. As they write:

“The SenseLab does not exist as such. It is not an organization. It is not an institution. It is not a collective identity. It is an event-generating machine, a processual field of research-creation whose mission is to inside itself out. Its job is to generate outside prolongations of its activity that ripple into distant pools of potential.” — Thought in the Act

~ + ~

BRERIN generates text that reflects the vocabulary and cadence of its origin. It operates as a container for modes of idiomatic discourse. Yet it is also an artefact of contemporary deep learning, utterly lacking in subtle contextuality or genuine cognition.

+~+

TECH DETAILS

Library: PyTorch

Model: GRU
Embedding size: 2500
Hidden size: 2500
Layers: 2
Batch size: 20

Epoch: 69
Loss: 0.71
Perplexity: 2.03

Temperature range: 0.25 to 1.25
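For reference, a hedged reconstruction of what those numbers correspond to in PyTorch (the actual model class lives in the repository linked below; the class name here is mine and the dropout value is an assumption): a two-layer GRU over 2500-dimensional word embeddings with 2500 hidden units per layer, decoded back to vocabulary scores.

import torch.nn as nn

class PhilosobotRNN(nn.Module):
    """Illustrative GRU language model matching the hyperparameters above."""

    def __init__(self, vocab_size, emsize=2500, nhid=2500, nlayers=2, dropout=0.5):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.encoder = nn.Embedding(vocab_size, emsize)       # word index -> 2500-d vector
        self.rnn = nn.GRU(emsize, nhid, nlayers, dropout=dropout)
        self.decoder = nn.Linear(nhid, vocab_size)            # hidden state -> word scores

    def forward(self, input, hidden=None):
        emb = self.drop(self.encoder(input))
        output, hidden = self.rnn(emb, hidden)
        return self.decoder(self.drop(output)), hidden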

~ + ~

TEXT: BRERIN_2h02m_07092017_longSentence

~ + ~

CODE: https://github.com/jhave/pytorch-poetry-generation/tree/master/word_language_model

BRERIN (Sense Lab Philosobot – Ep69)

BRERIN

A Philosobot:
Trained on the collected book-length works
of Erin Manning and Brian Massumi

Neural nets learn how to write by reading.
Each reading of the corpus is called an epoch.

This neural net read all the collected book-length works
of Erin Manning and Brian Massumi
69 times (in approx 8 hours
using a TitanX GPU).
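In code, 'reading the corpus 69 times' is simply an outer loop over epochs. A minimal sketch of that loop, assuming the model sketched earlier and standard cross-entropy training rather than the repository's exact script; note that perplexity is just e raised to the loss, which is how a loss of 0.71 yields a perplexity of 2.03.

import math
import torch.nn as nn
from torch import optim

def train(model, corpus_batches, epochs=69, lr=20.0, clip=0.25):
    """One 'reading' of the corpus per epoch; 69 readings in total."""
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=lr)
    for epoch in range(1, epochs + 1):
        total_loss, n_batches = 0.0, 0
        for inputs, targets in corpus_batches:                   # every batch of every book
            optimizer.zero_grad()
            output, _ = model(inputs)
            loss = criterion(output.view(-1, output.size(-1)), targets.view(-1))
            loss.backward()
            nn.utils.clip_grad_norm_(model.parameters(), clip)   # keep gradients stable
            optimizer.step()
            total_loss += loss.item()
            n_batches += 1
        avg = total_loss / max(n_batches, 1)
        print(f"epoch {epoch:2d} | loss {avg:.2f} | perplexity {math.exp(avg):.2f}")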

+ ~ +

Now it writes 70-word segments (that end in a sentence),
matching as best it can the vocabulary and cadence of the corpus.

It cannot match the thought, but reflects a simulacrum of thought:
the thought inherent within language, within reading, within writing.
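A hedged sketch of that stopping rule (the helper name generate_segment and the corpus.dictionary vocabulary object are assumptions modelled on the standard PyTorch word-language-model example that this code forks; the 0.95 default temperature echoes the text linked below): keep sampling words until at least 70 have accumulated and the most recent word closes a sentence.

import torch

SENTENCE_ENDINGS = (".", "!", "?")

def generate_segment(model, corpus, temperature=0.95, min_words=70):
    """Sample words until >= min_words and the last word ends a sentence."""
    model.eval()
    hidden = None
    # seed with a random word index from the vocabulary
    input = torch.randint(len(corpus.dictionary), (1, 1), dtype=torch.long)
    words = []
    with torch.no_grad():
        while len(words) < min_words or not words[-1].endswith(SENTENCE_ENDINGS):
            output, hidden = model(input, hidden)
            weights = output.squeeze().div(temperature).exp()
            word_idx = torch.multinomial(weights, num_samples=1).item()
            input.fill_(word_idx)
            words.append(corpus.dictionary.idx2word[word_idx])
    return " ".join(words)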

+~+

Library: PyTorch

+~+

Model: GRU
Embedding size: 2500
Hidden size: 2500
Layers: 2

Batch size: 20
Epoch: 69

Loss: 0.71
Perplexity: 2.03

~ + ~

Text: 25-08-2017_Epoch69_Temperature0p95_1h39m

~ + ~

BRERIN (Epoch 39)

Epoch 39 is a roughly fermented gated recurrent unit (GRU) network that exemplifies the rough parabolic deflection contours of SenseLab discourse.

jhav:~ jhave$ cd /Users/jhave/Desktop/github/pytorch-poetry-generation/word_language_model

jhav:word_language_model jhave$ source activate ~/py36 
(/Users/jhave/py36) jhav:word_language_model jhave$ python generate_2017-SL-BE_LaptopOPTIMIZED.py --checkpoint=/Users/jhave/Desktop/github/pytorch-poetry-generation/word_language_model/models/2017-08-22T12-35-49/model-GRU-emsize-2500-nhid_2500-nlayers_2-batch_size_20-epoch_39-loss_1.59-ppl_4.90.pt

The system will generate 88-word bursts, perpetually, until stopped.
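That perpetual behaviour is nothing more exotic than an endless loop around a segment generator; a sketch, reusing the hypothetical generate_segment from earlier:

# Perpetual generation: print 88-word bursts until interrupted with Ctrl-C.
try:
    while True:
        print(generate_segment(model, corpus, min_words=88))
        print("\n+~+\n")
except KeyboardInterrupt:
    print("stopped.")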

BRERIN 

A Philosobot:
Trained on the collected book-length works of Erin Manning and Brian Massumi

+~+

Library: PyTorch

+~+

Model: GRU
Embedding size: 2500
Hidden size: 2500
Layers: 2
Batch size: 20
Epoch: 39
Loss: 1.59
Perplexity: 4.90

Initializing.
Please be patient.

Text : Screencast_SL_BE_Epoch39_24-08-2017_16h12_1h04m_model-GRU-emsize-2500-nhid_2500-nlayers_2-batch_size_20-epoch_39-loss_1.59-ppl_4.90


For the tech-minded, let it be noted: this is an overfit model. While overfitting is taboo in science, it is a creator of blossoms in natural language generation: actual units of source text are sutured into a collagen of authenticity.

Specifically: I used all of the text sources as training data, and basically did not care about the relevance or size of the test or validation sets. The embedding size is made as large as the GPU will tolerate. Dropout is high so the network gets confused.

For a deep learning expert, then, the loss and perplexity values are invalid, to put it crudely: bullshit. Yet the texture of the generated language is superior.
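Concretely, that overfit-on-purpose recipe amounts to a data split like the following (a sketch under assumptions: the directory paths are invented, and the train/valid/test filenames follow the standard PyTorch word-language-model layout rather than anything confirmed in this repository):

from pathlib import Path

# Deliberately overfit: every source text becomes training data, and the
# validation/test files are token gestures that overlap the training text,
# which is exactly why the reported loss and perplexity mean so little.
sources = Path("data/manning_massumi/sources")       # invented path
corpus_dir = Path("data/manning_massumi")             # invented path
full_text = "\n".join(p.read_text(encoding="utf-8")
                      for p in sorted(sources.glob("*.txt")))

(corpus_dir / "train.txt").write_text(full_text, encoding="utf-8")
(corpus_dir / "valid.txt").write_text(full_text[:10_000], encoding="utf-8")
(corpus_dir / "test.txt").write_text(full_text[:10_000], encoding="utf-8")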

Consider the analogy of training a child to read and write: does the wise teacher keep back part of the corpus of knowledge, or does the teacher give all to the student?

BRERIN may have many moments of spasmodic incoherence, yet at the level of idiomatic cadence and vocabulary the texts recreate the dexterity and delirium intensities of the source fields. In essence, they reflect the vast variational presence of both Erin and Brian. This bot is a homage to their massive, resilient oeuvre.