Errors on the Path

What follows documents futility. Effort expended for nothing.

It began when I (greedy for a more diverse vocabulary) expanded the corpus somewhat erratically.

Added my Tumblr blog jhavelikes.tumblr.com (for contemporary tech words), then Krasznahorkai's War & War, Maurice Blanchot's The Space of Literature, some Kenneth Patchen, and a set of contemporary poetry syllabus packages for added breadth.

Corpus swelled to 65.3 MB.

Tried it on WaveNet. After 24 hours, got gibberish, no idea why: convergence not occurring, spastic spikes of loss disrupting the system.

So shifted back to PyTorch.
And with the embedding and hidden sizes set to 1500, it began crashing:

RuntimeError: cuda runtime error (2) : out of memory

Eventually reduced the sizes to 500. It runs.

jhave@jhave-Ubuntu:~/Documents/Github/pytorch-poetry-generation/word_language_model$ python main_June2017.py --cuda --data=data/dec_rerites --emsize=500 --nhid=500 --dropout=0.65 --epochs=80 --tied
INITIALIZING Directory: models/2017-12-03T18-34-4
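Roughly what those flags control, sketched loosely after the stock PyTorch word_language_model example (the actual main_June2017.py differs in its details; the vocabulary size below is hypothetical):

import torch.nn as nn

class RNNModel(nn.Module):
    # --emsize sets the embedding width, --nhid the LSTM hidden width,
    # --dropout the dropout rate, --tied the weight sharing between
    # the input embedding and the output softmax layer.
    def __init__(self, ntoken, emsize=500, nhid=500, nlayers=2, dropout=0.65, tied=True):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.encoder = nn.Embedding(ntoken, emsize)   # ntoken grows with the corpus
        self.rnn = nn.LSTM(emsize, nhid, nlayers, dropout=dropout)
        self.decoder = nn.Linear(nhid, ntoken)
        if tied:
            # weight tying needs nhid == emsize; parameter count scales with
            # ntoken * emsize, which is part of why 1500 on a 65.3 MB corpus
            # ran out of GPU memory where 500 does not
            assert nhid == emsize
            self.decoder.weight = self.encoder.weight

    def forward(self, tokens, hidden):
        emb = self.drop(self.encoder(tokens))
        output, hidden = self.rnn(emb, hidden)
        return self.decoder(self.drop(output)), hidden

model = RNNModel(ntoken=50000)   # 50,000 is a made-up vocabulary size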

Even with CUDA enabled on a Titan X GPU, training is achingly slow: 17,811 batches at about 500 ms per batch, plus validation time, means a single epoch takes more than 3 hours to complete. It needs maybe 40? 60? 100? epochs to arrive anywhere interesting?
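The back-of-envelope math, assuming roughly 3 hours per epoch once validation is counted (an estimate, not a log):

# rough arithmetic from the figures above
batches = 17811
sec_per_batch = 0.5                            # ~500 ms per batch
train_hours = batches * sec_per_batch / 3600   # about 2.5 hours of pure training
epoch_hours = 3.0                              # call it 3+ hours with validation
for n in (40, 60, 100):
    print("%d epochs ~ %.1f days" % (n, n * epoch_hours / 24))
# 40 epochs ~ 5 days, 60 ~ 7.5 days, 100 ~ 12.5 days -- hence the waiting below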

Wait a couple days. Sigh. Still not finished.

Wait another 2 days. Sigh. Still not finished.

Stop it anyway.

Run it.

First thing I notice: it's very slow. The big corpus slows down generation as well as training. Second thing: it's not appreciably better. Third thing: the contemporary lexicon (scientific, net-based) that I had hoped to introduce into the poetry, massaging it forward from the 15th century toward the 21st, is imperceptible.

Result: throw it all away. Start again. Reduce the corpus to 2.8 MB and run with a hidden size of 2,000. Wait some more… Am waiting now…

WVNT regamed (blind, still bland, but intriguing)

The opposite of scarcity is not abundance: saturation inevitably damages, or perhaps it just corrodes or blunts the impassioned Pavlov puppy, delivering a dent of tiny deliberate delirious wonder.

Yet technical momentum now propels and impels continuance: how can the incoherence be tamed? Set some new hyperparameters and let the wvnt train for 26 hours.

Over this weekend, I’ve churned out about 100,000 lines. Generating reckless amounts of incoherent poetry threatens more than perspective or contemplation; it totters sanity on the whim of a machine. Teeming bacteria, every epiphany a whiff of redundancy.

$ python train_2017_py3p5_150k_low6_sample4096_SAVE-STARTS_100k.py \
    --wavenet_params=wavenet_params_ORIG_dilations2048_skipChannels8192_qc2048_dc32.json \
    --data_dir=data/2017

Using default logdir: ./logdir/train/2017-06-01T08-38-45 

_______________________________________________________________________________________________

dilations: 2048 filter_width: 2 residual_channels: 32
dilation_channels: 32 skip_channels: 8192 quantization_channels: 2048
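A guess at the shape of that params file, assuming it follows the standard tensorflow-wavenet JSON format and taking 2048 as the largest dilation in the stack (both assumptions; audio-specific fields like sample_rate are omitted):

import json

# hypothetical reconstruction of the params file named in the command above
wavenet_params = {
    "filter_width": 2,
    "dilations": [2 ** i for i in range(12)],   # 1, 2, 4, ... 2048 (assumed reading)
    "residual_channels": 32,
    "dilation_channels": 32,
    "skip_channels": 8192,
    "quantization_channels": 2048,
}
with open("wavenet_params_sketch.json", "w") as f:
    json.dump(wavenet_params, f, indent=2)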

(tf0p1-py3.5-wvnt) jhave@jhave-UbuntuScreencast 2017-06-02 11:08:14_2017-06-01T08-38-45

and faithful wound 
To fruit white, the dread 
One by one another, Image saved-- 
Ay of the visit. What pursued my heart to brink. 


Such the curse of hopes fraught memory;

tf0p1-py3.5-wvnt_jhave-Ubuntu_Screencast 2017-06-02 11:08:14_2017-06-01T08-38-45

Caught a new light bulb,   
All the heart is grown.

TXTs generated in SLOW MODE

There’s a way of calculating the matrices that taxes the strength of even a magnificent GPU, making production crawl and the computer difficult to use. Each of the following txt files (4444 letters in each) took about 40-60 minutes to generate on an Nvidia Maxwell Titan X, using CUDA 8 on Ubuntu 16.04.
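In plain numbers, that works out to something like (an estimate from the figures just quoted):

# rough generation throughput in slow mode
chars_per_file = 4444
minutes_per_file = 50                  # midpoint of the quoted 40-60 minutes
chars_per_second = chars_per_file / (minutes_per_file * 60.0)
print("~%.1f characters per second" % chars_per_second)   # about 1.5 chars/sec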

Txts generated slowly seem somehow thicker, as if issued from a more calibrated mentation, yet at the same time it’s math scat, glitch flow. Glisses from disintegrating encyclopedias.

Here are some samples:

I found myself within us wonder.
   You purchase as ease with water over events,
   because the straightforward that I miximally, she
   Don't sports commentation with its ruffled story

tf0p1-py3.5-wvnt_jhave-Ubuntu_SLOW_4444charsx4_2017-06-02-16T08_2017-06-01T08-38-45_model.ckpt-117396