“and farther heart, stuff’d the Arts supernova” | 173,901 lines made in 1h35m | Screencast 2018-03-08 21:10:55


Txt excerpt:

 I can even hear your violent debates. Of prayer,
 weep, Whatever you say, я recognise me. paulist. That's
 right no.
 laughing?

Complete txt output:

generated-2018-03-08T19-35-32_Screencast 2018-03-08 21:10:55


Corpus retuned to reflect tech and other torque

100selected-archiveorg.txt
Jacket2_ALL.txt
99 Terms You Need To Know When You’re New To Tech.txt
jhavelikes_CLEANED-again.txt
ABZ.txt
jorie_graham.txt
Aeon_autonomousrobots_science-and-metaphysics_enemies_biodiversity.txt
literotica.txt
capa.txt
ndsu.edu-creativeWriting-323.txt
Cathay by LiBai pg50155.txt
neurotransmitter.txt
celan.txt
nyt_china-technology-censorship-borders-expansion.txt
citizen-an-american-lyric-claudia-rankine.txt
patchen.txt
citylights.txt
Patti Smith - all songs lyrics.txt
deepmind.txt
poetryFoundation_CLEANED_lower.txt
evergreenreview_all_poem.html.txt
rodneyJones.txt
glossary-networking-terms.txt
swear.txt
glossary-neurological-terms.txt
Teen Love Meets the Internet.txt
glossary-posthuman-terms.txt
tumblr_jhavelikes_de2107-feb2018.txt
Grief.txt
tumblr_jhavelikes_EDITED.txt
harari_sapiens-summary_guardian-review.txt
twoRiver.txt
I Feel It Is My Duty to Speak Out_SallyOReilly.txt

Process left to run for 36 hours on Nvidia Titan X:

(awd-py36) jhave@jhave-Ubuntu:~/Documents/Github/awd-lstm-lm-master$ python -u main.py --epochs 500 --data data/March-2018_16mb --clip 0.25 --dropouti 0.4 --dropouth 0.2 --nhid 1500 --nlayers 4 --seed 4002 --model QRNN --wdrop 0.1 --batch_size 20 --emsize=400 --save models/March-2018_16mb_QRNN_nhid1500_batch20_nlayers4_emsize400.pt

$ python -u finetune.py --epochs 500 --data data/March-2018_16mb --clip 0.25 --dropouti 0.4 --dropouth 0.2 --nhid 1500 --nlayers 4 --seed 4002 --model QRNN --wdrop 0.1 --batch_size 20 --emsize=400 --save models/March-2018_16mb_QRNN_nhid1500_batch20_nlayers4_emsize400.pt

$ python pointer.py --lambdasm 0.1279 --theta 0.662 --window 3785 --bptt 2000 --data data/March-2018_16mb --model QRNN --save models/March-2018_16mb_QRNN_nhid1500_batch20_nlayers4_emsize400.pt

$ python generate_March-2018_nocntrl.py --cuda --words=444 --checkpoint="models/March-2018_16mb_QRNN_nhid1500_batch20_nlayers4_emsize400.pt" --model=QRNN --data='data/March-2018_16mb' --mint=0.75 --maxt=1.25
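
The --mint and --maxt flags belong to the custom generate script above; presumably they bound the sampling temperature. A minimal sketch of temperature-scaled sampling (function name hypothetical, not the script's actual code):

import torch

def sample_with_temperature(word_weights, temperature=1.0):
    # Divide the model's output weights by the temperature before softmax:
    # low temperature = safer, more probable picks; high = wilder picks.
    probs = torch.softmax(word_weights / temperature, dim=0)
    return torch.multinomial(probs, num_samples=1).item()

# e.g. drift the temperature between 0.75 (--mint) and 1.25 (--maxt)
# from line to line so the poem oscillates between caution and risk.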

Training QRNN using awd-lstm exited manually after 252 epochs:

-----------------------------------------------------------------------------------------
| end of epoch 252 | time: 515.44s | valid loss 4.79 | valid ppl 120.25
-----------------------------------------------------------------------------------------
Saving Averaged!
| epoch 253 | 200/ 1568 batches | lr 30.00 | ms/batch 307.76 | loss 4.29 | ppl 73.12
^C-----------------------------------------------------------------------------------------
Exiting from training early
=========================================================================================
| End of training | test loss 3.99 | test ppl 54.11

Complete training terminal output: TRAINING__Mid March Corpus | Screencast 2018-03-08 21:10:55

 

Paragraph style: 16 lines of approx 42 chars each : Averaged Stochastic Gradient Descent : Screencast 2018-02-04 18:52:12

5038 poems of 16 lines of approx 42 chars each generated in 2h42m

I am now utterly convinced of the impossibility of neural nets ever producing coherent, contextually-sensitive verse; yet as language play, and as a window into the inherent biases within humans, it is impeccably intriguing and occasionally nourishing.

Text: Screencast 2018-02-04 18:52:12_PARAGRAPHS.txt.tar 

(awd-py36) jhave@jhave-Ubuntu:~/Documents/Github/awd-lstm-lm-master$ python generate_Feb4-2018_PARAGRAPHS.py --cuda --words=1111 --checkpoint="models/SUBEST4+JACKET2+LYRICS_QRNN-PBT_Dec11_FineTune+Pointer.pt" --model=QRNN --data='data/dec_rerites_SUBEST4+JACKET2+LYRICS' --mint=0.75 --maxt=1

Averaged Stochastic Gradient Descent
with Weight Dropped QRNN
Poetry Generation

Trained on 197,923 lines of poetry & pop lyrics.

Library: PyTorch
Mode: QRNN

Embedding size: 400
Hidden Layers: 1550
Batch size: 20
Epoch: 478
Loss: 3.62
Perplexity: 37.16

Temperature range: 0.75 to 1.0

 You disagree and have you to dream. We
 Are the bravest asleep open in the new undead

Text: Screencast 2018-02-03 19:25:34

2017-12-11 11:52:45 [SUBEST4+JACKET2+LYRICS]

Using a mildly revised (cleaner leaner) corpus

AND … dropout=0.65 (to prevent overfitting)
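
For context, a minimal sketch of where a dropout of 0.65 sits in a word-level PyTorch language model (layer sizes here are placeholders, not the ones used in this run):

import torch.nn as nn

class WordLM(nn.Module):
    # Toy word-level LSTM language model with heavy dropout (0.65)
    # applied to the embeddings and to the LSTM output, to curb overfitting.
    def __init__(self, ntoken, emsize=400, nhid=400, nlayers=2, dropout=0.65):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.encoder = nn.Embedding(ntoken, emsize)
        self.rnn = nn.LSTM(emsize, nhid, nlayers, dropout=dropout)
        self.decoder = nn.Linear(nhid, ntoken)

    def forward(self, x, hidden):
        emb = self.drop(self.encoder(x))
        out, hidden = self.rnn(emb, hidden)
        return self.decoder(self.drop(out)), hidden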

jhave@jhave-Ubuntu:~/Documents/Github/pytorch-poetry-generation/word_language_model$ python generate_2017-INFINITE-1M_October.py --checkpoint=models/2017-12-11T06-42-23_dec_rerites_SUBEST4+JACKET2+LYRICS/model-LSTM-emsize-2400-nhid_2400-nlayers_2-batch_size_20-epoch_21-loss_3.58-ppl_36.04.pt --cuda --words=600 --data=data/dec_rerites_SUBEST4+JACKET2+LYRICS

Generated: Screencast 2017-12-11 11:52:45_SUBEST4+JACKET2+LYRICS

Pytorch, 1860 hidden layers, 31 epochs (July 2017 RERITES source)

 

The second video here became the source-text for July 2017 RERITES  http://glia.ca/rerites/

+~+

PyTorch Poetry Language Model.
Trained on approx 600,000 lines of poetry
http://bdp.glia.ca

+~+

jhave@jhave-Ubuntu:~/Documents/Github/pytorch-poetry-generation/word_language_model$ python generate_2017-INFINITE-1M.py --cuda --checkpoint='/home/jhave/Documents/Github/pytorch-poetry-generation/word_language_model/models/2017-06-17T09-22-17/model-LSTM-emsize-1860-nhid_1860-nlayers_2-batch_size_20-epoch_30-loss_6.00-ppl_405.43.pt'

Mode: LSTM
Embedding size: 1860
Hidden Layers: 1860
Batch size: 20
Epoch: 30
Loss: 6.00
Perplexity: 405.43

+~+

jhave@jhave-Ubuntu:~/Documents/Github/pytorch-poetry-generation/word_language_model$ python generate_2017-INFINITE-1M.py --cuda --checkpoint='/home/jhave/Documents/Github/pytorch-poetry-generation/word_language_model/models/2017-06-17T09-22-17/model-LSTM-emsize-1860-nhid_1860-nlayers_2-batch_size_20-epoch_31-loss_6.00-ppl_405.39.pt'

Mode: LSTM
Embedding size: 1860
Hidden Layers: 1860
Batch size: 20
Epoch: 31
Loss: 6.00
Perplexity: 405.39

+~+

Ridges— Ourselves?

4
K-Town: ideality;
The train lost, “Aye man!
O old beggar, O perfect friend;
The bath-tub before the Bo’s’n resentments; pissing
rimed metaphors in the white pincers
scratching and whiten each of a clarity
in the sky the sacred hoof of eastward,
arc of the pestle through sobered the cliffs to the own world.

+~+

TXT Version:

jhave@jhave-Ubuntu_Screencast 2017-06-26 09_45_14_2017-06-17T09-22-17_model-LSTM-emsize-1860-nhid_1860-nlayers_2-batch_size_20-epoch_31-loss_6.00-ppl_405.39

jhave@jhave-Ubuntu-pytorch-poet_Screencast 2017-06-07 20:16:49_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_7-loss_6.02-ppl_412.27

+~+

O what the heck. Why not one more last deranged excessive epic deep learning poetry binge, courtesy of pytorch-for-poetry-generation.

Personally I like the coherence of Pytorch, its capacity to hold the disembodied recalcitrant veil of absurdity over a somewhat stoic normative syntactical model.

Text:

jhave@jhave-Ubuntu-pytorch-poet_Screencast 2017-06-07 20:16:49_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_7-loss_6.02-ppl_412.27

Code:

https://github.com/jhave/pytorch-poetry-generation

Excerpt:

Jumble, Rub Up The Him-Whose-Penis-Stretches-Down-To-His-Knees. 

 
 The slow-wheeling white Thing withheld in the light of the Whitman? 
 The roof kisses the wounds of blues and yellow species. 
 Far and cold in the soft cornfields bending to the gravel, 
 Or showing diapers in disclosure, Atlantic, Raymond 
 Protract the serried ofercomon, — the throats "I've used to make been sustene, 
 Fanny, the inner man clutched to the keep; 
 Who meant me to sing one step at graves. 

WVNT regamed (blind, still bland, but intriguing)

The opposite of scarcity is not abundance: saturation inevitably damages, or perhaps it just corrodes or blunts the impassioned pavlov puppy, delivering a dent of tiny deliberate delirious wonder.

Yet technical momentum now propels and impels continuance: how can the incoherence be tamed? Set some new hyperparameters and let the wvnt train for 26 hours.

Over this weekend, I’ve churned out about 100,000 lines. Generating reckless amounts of incoherent poetry threatens more than perspective or contemplation; it totters sanity on the whim of a machine. Teeming bacteria, every epiphany a whiff of redundancy.

$ python train_2017_py3p5_150k_low6_sample4096_SAVE-STARTS_100k.py 
--wavenet_params=wavenet_params_ORIG_dilations2048_skipChannels8192_qc2048_dc32.json 
--data_dir=data/2017

Using default logdir: ./logdir/train/2017-06-01T08-38-45 

_______________________________________________________________________________________________

dilations: 2048 filter_width: 2 residual_channels: 32
dilation_channels: 32 skip_channels: 8192 quantization_channels: 2048
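
Rough sense of scale: if the printed "dilations: 2048" means one doubling stack of dilation factors from 1 up to 2048 (an assumption; the params JSON lists the actual values), the receptive field of the causal convolution stack works out to a few thousand characters of left context:

filter_width = 2
dilations = [2 ** i for i in range(12)]              # 1, 2, 4, ... 2048 (assumed)
receptive_field = (filter_width - 1) * sum(dilations) + 1
print(receptive_field)                               # 4096 characters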

(tf0p1-py3.5-wvnt) jhave@jhave-UbuntuScreencast 2017-06-02 11:08:14_2017-06-01T08-38-45

and faithful wound 
To fruit white, the dread 
One by one another, Image saved-- 
Ay of the visit. What pursued my heart to brink. 


Such the curse of hopes fraught memory;

tf0p1-py3.5-wvnt_jhave-Ubuntu_Screencast 2017-06-02 11:08:14_2017-06-01T08-38-45

Caught a new light bulb,   
All the heart is grown.

TXTs generated in SLOW MODE

There’s a way of calculating the matrices that taxes the strength of even a magnificent GPU, making production crawl and the computer difficult to use. Each of the following txt files (4444 letters in each) took about 40-60 minutes to generate on an Nvidia Maxwell Titan X using CUDA 8 on Ubuntu 16.04.
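
Back-of-the-envelope, that works out to only a couple of characters per second (assuming roughly 50 minutes per file):

chars_per_file = 4444
minutes_per_file = 50                               # somewhere in the 40-60 minute range
print(chars_per_file / (minutes_per_file * 60))     # ~1.5 characters per second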

Txts generated slow seem somehow thicker, as if issued from a more calibrated mentation, yet at the same time it’s math scat, glitch flow. Glisses from disintegrating encyclopedias.

Here are some samples:

I found myself within us wonder.
   You purchase as ease with water over events,
   because the straightforward that I miximally, she
   Don't sports commentation with its ruffled story

tf0p1-py3.5-wvnt_jhave-Ubuntu_SLOW_4444charsx4_2017-06-02-16T08_2017-06-01T08-38-45_model.ckpt-117396

Wvnt Tamed

So the abrasive brash gutter voice of the neural net seemed maybe due to a lack of long-term exposure, wider horizons, deeper reading into the glands of embodied organisms. So I set the hyperparameters higher, waited 26 hours, and watched the Ubuntu HD fill up with models to the point where the OS crashed on reboot and I found myself entering a mysterious cmd-line universe called grub… which is to say, apprenticing a digital poet is not without perils.


Text: tf0p1-py3.5-wvnt_jhave-Ubuntu_Screencast 2017-05-31 15:19:36_Wavenet_2017-05-30T10-36-56


Text: tf0p1-py3.5-wvnt_jhave-Ubuntu_Screencast 2017-05-31 11:49:04_Wavenet_2017-05-30T10-36-56

THAT ETERNAL HEAVY COLOR OF SOFT

The title of this post is the title of the first poem generated by a re-installed Wavenet for Poetry Generation (still using TensorFlow 0.1, but now on Python 3.5), working on an expanded corpus of approx 600,000 lines of poetry. The first model was Model: 7365 | Loss: 0.641 | Trained on: 2017-05-27T14-19-11 (full txt here).

Wavenet is a rowdy poet, jolting neologisms, careening rhythms, petulant insolence, even the occasional glaring politically-incorrect genetic smut. Tangents codified into contingent unstable incoherence. 

Compared to Pytorch, which aspires to a refined smooth reservoir of cadenced candy, Wavenet is a drunken street brawler: rude, slurring blursh meru crosm nanotagonisms across rumpled insovite starpets.

Pytorch is Wallace Stevens; Wavenet is Bukowski (if he’d been born a mathematician neologist).

Here’s a random poem:

____________________________________________
Model: 118740  |  Loss: 0.641  |  2017-05-28T00-35-33

Eyes calm, nor or something cases.

from a wall coat hardware as it times, a fuckermarket
in my meat by the heart, earth signs: a pupil, breaths &

stops children, pretended. But they were.

Case study: Folder 2017-05-28T12-16-50 contains 171 models (each saved because their loss was under the 0.7 threshold). But what does loss really mean? In principle it is a measurement of the gap between the generated text and the validation text (how close is it?); yet however many different schemas proliferate, loss (like pain) cannot be measured by instrumentality.
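
One thing loss does pin down: the perplexities quoted throughout these logs are just the exponential of the average cross-entropy loss (in nats), which is why the paired numbers line up, give or take rounding of the printed loss:

import math

# Perplexity = exp(average cross-entropy loss, in nats).
for loss in (3.99, 3.62, 4.79, 6.00):
    print(loss, math.exp(loss))
# 3.99 -> ~54.1   (test loss/ppl 54.11 of the March QRNN run)
# 3.62 -> ~37.3   (the ASGD run above reports ppl 37.16; the printed loss is rounded)
# 4.79 -> ~120.3  (valid loss/ppl at epoch 252)
# 6.00 -> ~403    (the 2017 LSTM runs report ppl ~405; again, rounding)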

Here’s another random poem:

____________________________________________
Model: 93286  |  Loss: 0.355  |   2017-05-28T12-16-50

would destroying through the horizon. The poor
Sequel creation rose sky.

So we do not how you bastard, grew,
there is no populously, despite bet.
Trees me that he went back
on tune parts.

I will set
a girl of sunsets in the glass,

and no one even on the floral came

Training

I’m slowly learning the hard way to wrap each install in a virtual environment. Without that, as upgrades happen, code splinters and breaks, leaking a fine luminous goo of errors.

The current virtual environment was created using

$ sudo apt-get install python3-pip python3-dev python-virtualenv
$ virtualenv -p python3.5 ~/tf0p1-py3.5-wvnt

After that, I followed the install instructions:

$ TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0-py3-none-any.whl
$ pip3 install --upgrade $TF_BINARY_URL

 

Then I got snarled into a long, terrible struggle with TensorFlow log messages cluttering the output, resolved by inserting:

import os
# Set before importing tensorflow; level 2 suppresses INFO and WARNING logs.
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
# inserted into generate_Poems_2017-wsuppressed.py

And to generate on Ubuntu, using the lovely Nvidia Titan X GPU so generously donated by Nvidia under their academic grant program:

$ cd Documents/Github/Wavenet-for-Poem-Generation/
$ source ~/tf0p1-py3.5-wvnt/bin/activate
(tf0p1-py3.5-wvnt)$ python train_2017_py3p5.py --data_dir=data/2017 --wavenet_params=wavenet_params_ORIG_dilations1024_skipChannels4096_qc1024_dc32.json

Text Files

tf0p1-py3.5-wvnt_jhave-Ubuntu_Screencast 2017-05-28 11:31:40_TrainedOn_2017-05-28T00-35-33
tf0p1-py3.5-wvnt_jhave-Ubuntu_2017-05-28 09:18:00_TRAINED_2017-05-28T00-35-33
tf0p1-py3.5-wvnt_jhave-Ubuntu_Screencast-2017-05-27 23:50:14_basedon_2017-05-27T14-19-11
tf0p1-py3.5-wvnt_jhave-Ubuntu_Screencast 2017-05-28 22:36:35_TrainedOn_2017-05-28T12-16-50.txt

Fermentation & Neural Nets

Mead recipe: dilute honey with water, stir twice a day, wait.  

Fermentation begins after 24-48 hours. After a week, the fermented honey-wine (mead) can be enjoyed green, low in alcohol yet lively with essence. Or you can let it continue.

Generative-poetry recipe: text-corpus analysed by neural net, wait.

After each reading of the corpus (a.k.a. ‘training epoch’), a neural net can produce/save a model. Think of the model as an ecosystem produced by fermentation, an idea, a bubbling contingency, a knot of flavours, a succulent reservoir, a tangy substrate that represents the contours, topology or intricacies of the source corpus. Early models (a model is usually produced after the algorithm reads the entire corpus; remember, each reading of the corpus is called an epoch; after 4-7 epochs the models are still young) may be obscure but are often buoyant with energy (like kids mimicking modern dance).
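
In code terms, each ‘vat’ is just a checkpoint written after a full pass over the corpus; a minimal sketch of that loop (names hypothetical, not the actual training script):

import torch

def train_and_bottle(model, batches, optimizer, criterion, epochs=30):
    # One 'fermentation' per epoch: read the whole corpus once,
    # then bottle (save) the resulting model.
    for epoch in range(1, epochs + 1):
        for inputs, targets in batches:          # one full reading = one epoch
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()
        torch.save(model.state_dict(), 'models/epoch_{}.pt'.format(epoch))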

Here’s an example from a model that is only 2 epochs old:

Daft and bagel, of sycamore.
“Now thou so sex–sheer if I see Conquerours.

So he is beneath the lake, where they pass together,
Amid the twister that squeeze-box fire:

Here’s an example from a model that is 6 epochs old:

Yellow cornbread and yellow gems
and all spring to eat.

Owls of the sun: the oldest worm of light
robed in the advances of a spark.

Later models (after 20 epochs), held in vats (epoch after epoch), exhibit more refined microbial (i.e. algorithm) calibrations:

Stale nose of empty lair

The bright lush walls are fresh and lively and weeping,
Whirling into the night as I stand

unicorn raging on the serenade
Of floors of sunset after evil-spirits,
adagio.

Eventually fermentation processes halt. In the case of mead, this may occur after a month; with neural nets, learning-rate annealing (a cousin of simulated annealing) intentionally decreases the learning rate as training proceeds, so the system begins by exploring large features and then focuses on details. Eventually the learning rate diminishes to zero. Learning (fermentation) stops.
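
The shape of that decay, sketched with a simple exponential schedule (the actual scripts use their own annealing rules; this is just the idea):

initial_lr = 1.0
decay = 0.95                      # shrink the learning rate a little each epoch

for epoch in range(1, 61):
    lr = initial_lr * (decay ** epoch)
    if epoch % 20 == 0:
        print(epoch, round(lr, 4))
# 20 -> 0.3585    40 -> 0.1285    60 -> 0.0461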


TEXT FILES

2017-04-30 09:49:10_EPOCH-6
2017-04-30 13:27:15_EPOCH-16
2017-04-30 23:03:42_EPOCH-4
2017-05-03 09:41:35_2017-05-02T06-19-17_EPOCH22
2017-05-03 12:54:57_2017-05-02T06-19-17_EPOCH-6
2017-05-12 12:24:54_2017-05-02T06-19-17_EPOCH22

2017-05-13 21:24:06_2017-05-02T06-19-17_-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_7-loss_6.02-ppl_412.27
2017-05-14 23:51:14_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_8-loss_6.03-ppl_417.49
2017-05-15 11:00:03_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_3-loss_6.09-ppl_439.57
2017-05-15 11:28:24_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_4-loss_6.04-ppl_421.71
2017-05-15 11:53:40_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_5-loss_6.02-ppl_413.64
2017-05-15 21:08:56_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_9-loss_6.02-ppl_409.87
2017-05-15 21:48:55_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_10-loss_6.03-ppl_416.36
2017-05-15 22:21:11_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_11-loss_6.02-ppl_413.61
2017-05-15 22:48:53_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_12-loss_6.03-ppl_414.69
2017-05-15 23:24:25_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_13-loss_6.03-ppl_417.08
2017-05-15 23:53:21_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_14-loss_6.03-ppl_416.68
pytorch-poet_2017-05-02T06-19-17_models-6-7-8-9-10-11-12-13-14
2017-05-16 09:26:18_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_15-loss_6.03-ppl_416.60
2017-05-16 10:08:15_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_16-loss_6.03-ppl_414.91
2017-05-16 10:45:58_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_17-loss_6.03-ppl_415.35
2017-05-18 23:48:36_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_18-loss_6.03-ppl_414.10
2017-05-20 10:35:17_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_19-loss_6.03-ppl_413.68
2017-05-18 22:10:36_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_17-loss_6.03-ppl_415.35
2017-05-20 22:01:22_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_20-loss_6.02-ppl_413.08
2017-05-21 12:58:03_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_21-loss_6.02-ppl_413.07
2017-05-21 14:38:29_model-LSTM-emsize-1500-nhid_1500-nlayers_2-batch_size_20-epoch_22-loss_6.02-ppl_412.68