Conversations with Rocks


So I fine-tuned GPT-2 on my journals to make an artificial me.

Over the past 15 years I have written more than 2 million words, as I discovered when prepping my journals as input. That's enough words to fill 25 books. 25 books of random, rambly personal journaling, mostly. What happens when I feed it all to a neural net?

After training nanoGPT for a couple of hours on Colab, I have a model that effectively sounds like me. It shares my preoccupations, favorite phrases, paragraph formatting, punctuation, and verbal tics. At first I felt called out, almost like it was making fun of me. But then I realized that it was simply reflecting me in its computery way. There is no judgment. It is not me, but it is in a real way a reflection of my writing in both style and content.
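If you want to try this with your own writing, nanoGPT makes it pretty painless: you tokenize your text into train/val files, then point a small config at the pretrained GPT-2 weights. A config along these lines is enough (the file name, dataset name, and hyperparameter values here are illustrative guesses, not my exact settings):

```python
# Hypothetical nanoGPT fine-tuning config, e.g. config/finetune_journal.py.
# Run with: python train.py config/finetune_journal.py
# Values are illustrative, not the settings I actually used.

out_dir = 'out-journal'
init_from = 'gpt2'          # resume from the pretrained 124M-parameter GPT-2 checkpoint
dataset = 'journal'         # expects data/journal/train.bin and val.bin (tokenized text)

always_save_checkpoint = False
eval_interval = 250
eval_iters = 200

# small batches with gradient accumulation, since Colab GPUs have limited memory
batch_size = 8
gradient_accumulation_steps = 4

# fine-tuning wants far fewer steps and a much lower learning rate than pretraining
max_iters = 2000
learning_rate = 3e-5
decay_lr = False
```

After training, nanoGPT's sample.py can generate text from the checkpoint in out-journal, which is where excerpts like the ones below come from.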

Sometimes it says things that I could easily say:

you know what i really feel though? it's like, i'm really not sure what i want to accomplish. and i know i want to get back to a more balanced and creative life-- i suppose if i really want to, i could actually do that.

Sometimes it is oddly poignant:

So I guess I'm at a turning point in my life. I wrote down the word love in my notebook for some reason, and I kind of understand that.

Or absurd:

and then there is the sort of repetitive feeling of the machine before breakfast.

Since it is trained on dream journals it can create dreams:

here i am at the end of my rope. i open the door and it turns out it is filled with thick black tar. i look to the window and i see a bunch of tiny birds cuddling, fluffet sounds start playing.

Sometimes it writes poetry:

i told you about my book therapy

you told me i was being a jerk

i told you about my stupid heart feeling

i told you i wanted you to touch me

you wanted to touch me

i was trying to reach across so that i could touch you

Sometimes it asks good questions:

what does it take to get out of an ugly system that makes no sense or is not universally necessary?

Here's the whole paragraph so you can see the general drift of what it's doing:

i get so hung up on this story, which i am still entangled in. i believe it was a lie. it was a lie, a cold lie, and a thing of stubbornness that was never fair. not fair enough. what does it take to get out of an ugly system that makes no sense or is not universally necessary? what does it take to get oneself on an acid trip where psychedelics, brain surgery, and body possessions are on the edge of it. As a single divorced woman, I have no idea how to deal with this, how to look at it or feel about it. Even the notion of "all bodies are human" seems contradictory to the idea of the human body as a whole. Human beings are eternally and separately less special. It's the Tao hiding in there somewhere, trying not to freak out or something.

It is not coherent; the sense drifts over a paragraph in a way that makes it hard to read. This is because I used the GPT-2 model, which is far less powerful than the current models. A very refreshing thing about GPT-2 is that there is nothing corporate about it, no safety features to stop it talking about verboten things. It is just a mirror of a certain slice of culture, of whatever OpenAI scraped up to train it with 4 years ago. The fine-tuned model is a combination of that culture and the micro-culture of my journals.

The relative simplicity and even the defects of GPT-2 help me understand the magic better. I can see where it tries and fails to keep up coherence. As I read, I can tell that sometimes it's going for something that feels right in some way, even though it might not cohere logically. What for me is a feeling must, in neural-net terms, be the latent space: how the process of writing is modeled by 124 million digital synapses. I get a very distinct sense that it really has created a latent space that somehow shares something of my concerns about the world, which is a strange feeling.

#AI #life #poetry #writing