How to Train Your Own Neural Network

These days, artificial intelligence (AI) seems poised to run most of the world: it detects skin cancer, hunts for hate speech on Facebook, and even flags possible lies in Spain's police reports. But not all AIs are run by megacorporations and governments; you can download some of the algorithms and play with them yourself, often with hilarious results.

There’s a fake Coachella poster full of fake band names, made by feeding a bunch of real band names into a neural network and asking it to come up with more of its own. There are recipes created the same way, in which “BBQ Beef” calls for “1 Beer – Dice”. And then there are my favorite paint colors, generated by Janelle Shane’s neural network (tag yourself, I’m Dorkwood).

They were all created using neural networks, a type of AI modeled on the networked nature of our own brains. You train a neural network by giving it input: recipes, for example. As it learns, the network strengthens some connections between its neurons (analogues of brain cells) more than others. The idea is that it figures out the rules for how the input works: for example, which letters tend to follow which others. Once the network is trained, you can ask it to generate its own output, or give it partial input and ask it to fill in the rest.

But the computer doesn’t really understand the rules for, say, making recipes. It knows that beer can be an ingredient and that ingredients can be diced, but nobody ever told it that beer isn’t one of the diceable ones. Results that look nearly right but break some fundamental rule are often the funniest.

I was happy to just watch these antics from afar until Shane tweeted that a high school programming class had generated ice cream flavor names better than her own. And I figured that if kids can do it, so can I.

How to train your first neural network

I started with the same set of tools that Shane used for ice cream flavors: the Python module textgenrnn, created by Max Woolf of BuzzFeed. You need basic command line knowledge to work with it, but it runs on any system (Mac, Linux, Windows) where you have installed the Python programming language/interpreter.
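Assuming you already have Python and its package manager pip set up, installing the module is the standard one-line pip command (on some systems you may need pip3 instead):

 pip install textgenrnn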

Before you can train your own neural network, you need some source data to start from. The high school class, for example, started with a list of thousands of existing ice cream flavors. Whatever you choose, you’ll want at least a few hundred examples; a thousand is better. You might download all of your tweets and ask the network to generate new tweets for you, or browse Wikipedia’s lists of lists for ideas.

Whatever you choose, put it in a text file, one item per line. That might take some creative copying and pasting or spreadsheet work, or, if you’re a seasoned programmer, some ugly Perl scripts to whip the data into shape. I’m an ugly-Perl-scripts kind of girl myself, but when I needed Lifehacker headlines for one of my datasets, I just asked our analytics team for a big list of headlines and they emailed me exactly what I needed. Asking for what you need is an underrated programming skill.
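If your data starts out in a spreadsheet, a few lines of Python will flatten it into that one-item-per-line format. This is just a sketch: the file name headlines.csv and the column name title are made up for illustration, so swap in whatever your export actually uses.

 import csv

 # Hypothetical example: pull one column out of a CSV export
 # and write it as a plain text file, one item per line.
 with open('headlines.csv', newline='', encoding='utf-8') as src, \
      open('input.txt', 'w', encoding='utf-8') as dst:
     for row in csv.DictReader(src):
         title = row['title'].strip()
         if title:  # skip blank rows
             dst.write(title + '\n')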

(If you want to feed Lifehacker headlines to your own neural network, here’s a list. There are about 10,000 of them.)

Create a folder for your new project and write two scripts. The first is called train.py:

 from textgenrnn import textgenrnn

 t = textgenrnn()
 t.train_from_file('input.txt', num_epochs=5)

This script will make the neural network read your input and think about what its rules should be. There are a couple of things in the script that you can change:

  • t = textgenrnn() stays as written the first time you run the script, but if you want to come back to a training run later, pass in the name of the .hdf5 file that magically appeared in the folder when you ran it. In that case the line should look like this: t = textgenrnn('textgenrnn_weights.hdf5') (see the sketch after this list).
  • 'input.txt' is the name of your file with one headline/recipe/tweet/etc. on each line.
  • num_epochs is how many times you want it to work through the file. The longer you let it study, the better the neural network gets, so start with 2 or 5 epochs to see how long that takes, then go up from there.
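Here’s a minimal sketch of what a follow-up training run might look like, assuming the weights file textgenrnn_weights.hdf5 from your first run is sitting in the same folder:

 from textgenrnn import textgenrnn

 # Load the weights saved by the previous run instead of starting fresh
 t = textgenrnn('textgenrnn_weights.hdf5')

 # Keep training on the same input file for a few more epochs
 t.train_from_file('input.txt', num_epochs=10)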

Training the network takes time. If you are running your scripts on a laptop, one epoch may take 10-15 minutes (larger datasets take longer). If you have access to a powerful desktop, perhaps your own or a friend’s gaming computer, things will go faster. If you have a large dataset, you can run it for dozens or even hundreds of epochs and let it chug away overnight.

Then write another script called spit_out_stuff.py (you can give your scripts better names than I do):

 from textgenrnn import textgenrnn

 t = textgenrnn('textgenrnn_weights.hdf5')
 t.generate(20, temperature=0.5)

This is the fun part! The above script will give you 20 interesting things to look at. The important parts of this last line are:

  • The number of generated items: here 20.
  • The temperature, which is like a creativity dial. At 0.1, you get very conservative output, likely even more boring than the data you fed in. At 1.0, the output gets so creative that it often isn’t even made of real words. You can go above 1.0 if you dare. (A sketch that tries several temperatures follows this list.)
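To get a feel for the dial, you can loop over a few settings in one script; this is just a sketch using the same generate() call as above, with an arbitrary handful of temperatures:

 from textgenrnn import textgenrnn

 t = textgenrnn('textgenrnn_weights.hdf5')

 # Try a few spots on the creativity dial and compare the output
 for temperature in (0.2, 0.5, 0.8, 1.0):
     print('--- temperature', temperature, '---')
     t.generate(5, temperature=temperature)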

When you run the training script, you’ll notice that it shows you sample output at different temperatures as it goes, so you can use that to decide how many epochs to run and what temperature to use for your final output.

Not every idea your neural network comes up with is going to be comedy gold. You have to pick out the best ones yourself. Here are some of the best Lifehacker headlines my AI came up with:

The best way to make a kid’s laptop

How To Survive The Backspace Drinking Game

The best way to buy an interview

How to get the best bonfire of your life with this handy drawing

How to create your own podcast bar

How to get a new iPhone X if you’re an Arduino

How to tidy up your own measurements in a museum

How to start telling stories and getting anxious

The best way to DIY winter ink

How to maintain a relationship with an imaginary concept

The best way to make the perfect cup of wine from your Raspberry Pi

The best way to eat toilet strawberries

How to find the best vacation job

The best way to eat a tough can

I got these by playing around with the temperature and the number of training epochs, and every time I saw something I liked, I copied it into a text file of favorites. I also experimented with the word-level version of the algorithm; the scripts above use the default character-by-character model. My final list of headlines includes results from both.
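If you want to try the word-level model, my understanding is that textgenrnn lets you pass word_level=True when building a new model; treat this as a sketch and double-check the flag against your version of the module:

 from textgenrnn import textgenrnn

 t = textgenrnn()

 # new_model=True builds a fresh network; word_level=True (if your
 # textgenrnn version supports it) makes it predict whole words
 # instead of individual characters.
 t.train_from_file('input.txt', new_model=True, word_level=True,
                   num_epochs=10)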

If you’re curious about the extremes, here’s what I got at a temperature of 0.1:

The best way to stay streaming so there is no more alternative to make your phone

The best way to transfer the best power when you don’t need to know about the world around you

The best way to get started and start using the familiar ways to stop someone

How to get the best way to see the most popular posts

The best way to start making your phone

And if I crank it up to 1.5 (dangerously creative):

The Remains of the Day: How to Point the Finger at the Non-Subject

Upgrade Qakeuage to Travel History, Ovenchime or Contreiting Passfled

Risk-Idelecady not double copy, Zoomitas focus

Ifo Went Vape Texts Battery or Supremee Buldsweoapotties Lending

DIY Grilling Now Can Edt My Hises Uniti To Tell About You

Clearly, some human help is needed.

Be your AI’s friend

While neural networks can learn from datasets, they don’t really understand what’s going on, which is why human-machine partnerships produce some of the best results. “I know this is a tool I’m using,” says Janelle Shane, “but it’s hard not to think of it as ‘go, little neural network, you can handle it!’ and ‘oh, that was clever!’ or ‘you’re confused, poor thing.’”

To get the most out of your relationship, you’ll need to guide your AI conversation partner. It can sometimes get so good at guessing the rules of your dataset that it simply recreates the things you fed it: an AI rip-off, essentially. You’ll need to check that its funniest output is truly original.

Botnik Studios pairs humans with machines through trained predictive keyboards. Imagine grabbing your friend’s phone and composing a message using only the predictive text suggestions on their keyboard. You’d end up writing your own message, but in a style that reads like your friend’s. In the same way, you can train a Botnik keyboard on any data source you like and then write with the words it suggests. That’s where this amazing advice-column duel came from: two Botnik keyboards trained on Savage Love and Dear Abby.

If you’d rather work against your algorithmic buddy than with it, look at how Janelle Shane teased a neural network that at first seemed good at recognizing sheep grazing in a meadow. She photoshopped the sheep out and realized the AI was really just looking for white blobs in grass. If she dyed a sheep orange, the AI decided it was a flower. So she asked her Twitter followers for photos of sheep in unusual places and found that the AI thought a sheep in a car must be a dog, goats in a tree must be birds, and a sheep in a kitchen must be a cat.

Serious AIs can have similar problems, and playing with algorithms for fun can help us understand why they’re so error-prone. For example, an early skin-cancer-detecting AI accidentally learned the wrong rules for telling benign and malignant skin lesions apart. When a doctor finds a large lesion, they’ll often photograph it next to a ruler to show its size. The AI accidentally learned that cancers are easy to spot: just look for rulers.

Another lesson is that an algorithm’s output is only as good as the data you put in. ProPublica found that one algorithm used in sentencing was harsher on Black defendants than white ones. It didn’t consider race as a factor, but its inputs led it to mistakenly treat the crimes and backgrounds common among Black defendants as stronger predictors of reoffending than the crimes and backgrounds common among white defendants. The computer had no concept of race, but if your input reflects a bias, the computer may end up perpetuating that bias. It’s better to understand this limitation of algorithms than to assume that, because they’re inhuman, they must be impartial. (Good luck with that hate-speech-hunting AI, Facebook!)

Mix your datasets

There’s no need to stick to one dataset; you can mix two and see what happens. (For example, I’ve combined product listings from Goop and Infowars. Slightly NSFW.)

You can also train a classification algorithm. Shane says she already had a list of metal bands and a list of My Little Pony names, so she trained a classifier to tell the difference. (Pinky Doom: 99 percent metal.) Once the classifier is trained, you can feed it anything and get a reading. Benedict Cumberbatch: 96 percent metal.
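Shane hasn’t said exactly which tool she used, but a minimal version of this kind of classifier is easy to sketch with scikit-learn. The two input files (metal_bands.txt and pony_names.txt) are placeholders for whatever lists you have, one name per line, just like the textgenrnn data:

 from sklearn.feature_extraction.text import CountVectorizer
 from sklearn.naive_bayes import MultinomialNB
 from sklearn.pipeline import make_pipeline

 # Hypothetical input files: one name per line
 metal = open('metal_bands.txt').read().splitlines()
 ponies = open('pony_names.txt').read().splitlines()

 names = metal + ponies
 labels = ['metal'] * len(metal) + ['pony'] * len(ponies)

 # Character n-grams work well for short names full of made-up words
 model = make_pipeline(
     CountVectorizer(analyzer='char', ngram_range=(2, 4)),
     MultinomialNB(),
 )
 model.fit(names, labels)

 # Ask how "metal" a brand-new name is
 probs = model.predict_proba(['Benedict Cumberbatch'])[0]
 print(dict(zip(model.classes_, probs)))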

You can also feed a trained textgenrnn network whatever you want. When you specify how many items you want and what temperature (creativity) to use, you can also give it a prefix, and it will try to work out which words should follow (there’s a sketch of that call after the list below). After training on the Lifehacker headlines, I asked the AI to give me headlines that start with the words “3 Ingredient Happy Hour.” It responded with delightfully fictional cocktail ideas (again, these are my picks from a longer list):

Hour of Happiness with 3 Ingredients: Stress-Inducing Herbal Renewal

Happy hour with 3 ingredients: cake straw

3 Hours of Happy Ingredients: Darkened Pot

Happy hour with three ingredients: pizza and drinks for them are the wings of a trader

3 Happy Hours with Ingredients: Ferrent Pot

An hour of happiness with three ingredients: a sip refreshes

Happy Hour with 3 Ingredients: Best Bar Order

3 happy hours with ingredients: the rest of the party

Happy Hour with 3 Ingredients: Summer Rum Slicing

3 happy hours with ingredients: the best coconuts

The 3 components of a Happy Hour: beautiful shicline

Hour of Sale with Three Ingredients: Cheekey Candy
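The prefix call mentioned above is just an extra argument to generate(). This sketch assumes the same weights file as before, and I believe the argument is named prefix in textgenrnn, though it’s worth double-checking against your version of the module:

 from textgenrnn import textgenrnn

 t = textgenrnn('textgenrnn_weights.hdf5')

 # Ask for 20 headlines that all start with the same few words
 t.generate(20, temperature=0.7, prefix='3 Ingredient Happy Hour')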

Don’t be surprised if you see these happy hour ideas in a future Lifehacker post; Claire Lower, our food and drink editor, says she wants to try some of them.

But instead of waiting for her expert recipes, I decided to hand the job over to a neural network. I collected a few hundred cocktail recipes from Chris Loder’s cocktail guide and the Wikibooks cocktail glossary, and arranged them so that each cocktail took up one line of a text file, with the cocktail’s name as the first few words. That way I could pick a cocktail name and ask my trained cocktail network for the recipe that should follow. Here are some results:

The best cocoons are oz. Benedictine e. 1 trait Aromatic b. <1 oz. Cranberries d. 0.5 oz. Lemon c. 0.75 oz. Italone d. 2 trait Juponged Slipes i. Stir / Strain / Coupe / No garnish

Cheekey Candy i. 1 oz. Blendy Sherry b. 1.5 oz. Fresh pineapple d. Lonstine Brandy Bowl De there in the great Jamaican c. 2 dashes of pineapple d. 1 drop of aromatic bitterness e. 1 drop of aromatic gin ii. 1 oz. Vodka ii. 0.5 oz. Aged rum c. 2 drops of angosture bitterness i. Stir / Strain / Glass Nick & Nora / Ice / 1

Ferrent Pot – 1.25 oz. Green Chartreuse 1.5 oz. London Dry Jean b. 0.75 oz. Pouring whiskey b. Half whiskey with orange

You can, of course, ask it for anything:

Beth Skoreki – 1 oz. Mixed whiskey (juice) with water b. 1 oz. Egg white in a large glass from childhood 1934 or Babbino

Life hacker c. 14 Vodka Martini i. 0.75 oz. Campari i. Shake / Thin stretch / Coupe / Lemon twist

The input was only a few hundred cocktail recipes, so I had to turn the temperature way up to get anything interesting. And at high temperatures (1.0 in this case), you sometimes get words that aren’t actually words. Good luck finding any Lonstine brandy or Blendy Sherry at the store, but if you do, my pet AI will be very pleased.
