How to make high-tech pies sound really old

jumpingjacktrash:

lewisandquark:

A while ago I made a bunch of new pies. Well, I didn’t *make* them; they were neural-network-invented titles, and although the network tried to imitate the list of pies I gave it, its imitations are imperfect.

Strawberry Ham Pie, Impossible Maple Spinach Apple Pie, Caribou Meringue Pie

The neural network, after all, is a computer program with about as many neurons as an earthworm. It doesn’t understand what the ingredients are, or why some combinations don’t work. Some of its titles were intriguing, though. They sounded mysterious. Potentially delicious and/or magical?

Flangerson’s Blusty Tart, Mur’s Tartless Tart, Cromberry Yas

Or maybe it just helps that they’re vague. I decided I wanted more like these. To help it along, I spiced up the pie dataset with the names of cookies and apple varieties from the 1905 edition of *Apples of New York*. I filtered the names for those that had possessives: Mcaffee’s Nonesuch, Cornell’s Savewell, Wile Ox’s Winter (all apples), combined with Goldy’s Dungeon Bars, Esther’s Bracelets, and Fido’s Rewards (all cookies). Then, to give it added old-school flavor, I added all the Dungeons and Dragons spells that had possessives as well (for example, Ivy’s Irresistible Scent, Freedom’s Toast, and Leomund’s Tiny Hut).
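If you want to do something similar, the possessive-filtering step is just pattern matching. Here’s a minimal sketch in Python; the filename and the apostrophe handling are illustrative assumptions, not the exact script I ran:

```python
import re

# Keep only names containing a possessive, e.g. "Mcaffee's Nonesuch".
# Matches both curly and straight apostrophes, since old texts vary.
possessive = re.compile(r"\w+[’']s\b")

# Hypothetical input file: one name per line.
with open("apples_of_new_york.txt") as f:
    names = [line.strip() for line in f if line.strip()]

possessive_names = [name for name in names if possessive.search(name)]
print(possessive_names[:5])
```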

I arranged the training data so the pies would be last (so they would be freshest in the neural net’s virtual mind). Then I gave it one single look at the data.

It turns out that I didn’t manage to prevent the neural net from coming up with bad ideas. Perhaps what I should have done instead was remove all the meat pies from the training data.

Chicken Pineapple Cream Pie, Lemon Chicken Meringue Pie, Mothy Mincemeat Cheese

But some of the pies were exactly what I’d hoped for.

Cherry Pie With Cheese Fashions, Hodnum’s Favorite, Grandmoss pie
Sheeper’s Short, Lord’s Crunch, Stradd’s Snack

And some even went a little past “ancient” and into “legendary.”

Light’s Strike-Tart, Bigubbunkupkilecic Pie, Pumpkin Chiffon Summon Pie
Snake’s Swift Shortbread, Bigby’s Gluring Strazbert, Mordenkainen’s Potato Pie

This week’s bonus material: a few more pies, including some that were inexplicably PG-13.

MORDENKAINEN’S POTATO PIE oh my word i am going to INVENT THAT and SERVE IT ON GAMING NIGHT

@lewisandquark you may find this thread of interest http://littlepinkbeast.tumblr.com/post/180359832296/jumpingjacktrash-littlepinkbeast

Paint colors designed by neural network, Part 2

starrynight35:

lewisandquark:

[image]

So it turns out you can train a neural network to generate paint colors if you give it a list of 7,700 Sherwin-Williams paint colors as input. A neural network basically works by looking at a set of data – in this case, a long list of Sherwin-Williams paint color names and the RGB (red, green, blue) numbers that represent each color – and trying to form its own rules about how to generate more data like it.
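Since this is a character-level network, the training data is just plain text. One plausible layout (a guess at the format, not necessarily the exact file I used) puts one color per line:

```python
# Build a char-rnn-style training file: one "name R G B" record per line.
# These two entries are made-up stand-ins for the real Sherwin-Williams list.
colors = [
    ("Snowbound", (237, 234, 229)),
    ("Naval", (47, 61, 80)),
]

with open("paint_colors_input.txt", "w") as f:
    for name, (r, g, b) in colors:
        f.write(f"{name} {r} {g} {b}\n")
```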

Last time I reported results that were, well… mixed. The neural network produced colors, all right, but it hadn’t gotten the hang of producing appealing names to go with them – instead producing names like Rose Hork, Stanky Bean, and Turdly. It also had trouble matching names to colors, and would often produce an “Ice Gray” that was a mustard yellow, for example, or a “Ferry Purple” that was decidedly brown.  

These were not great names.

[image]

There are lots of things that affect how well the algorithm does, however.

One simple change turns out to be the “temperature” (think: creativity) variable, which adjusts whether the neural network always picks the most likely next character as it’s generating text, or whether it will go with something farther down the list. I originally had the temperature set pretty high, but it turns out that when I turn it down ever so slightly, the algorithm does a lot better. Not only do the names better match the colors, but it begins to reproduce color gradients that must have been in the original dataset all along. Colors tend to be grouped together in these gradients, so it shifts gradually from greens to browns to blues to yellows, etc., and eventually covers the rainbow, not just beige.
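Under the hood, temperature just rescales the network’s output scores before sampling. Roughly like this sketch, where the scores are made-up stand-ins for real network output:

```python
import numpy as np

def sample_char(logits, temperature=1.0):
    """Pick the next character index from raw network scores.

    Low temperature sharpens the distribution (safer, more repetitive);
    high temperature flattens it (more creative, more gibberish).
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Example: scores for a tiny four-character alphabet (made-up numbers).
print(sample_char([2.0, 1.0, 0.5, 0.1], temperature=0.75))
```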

Apparently it was trying to give me better results, but I kept screwing it up.

Raw output from RGB neural net, now less annoyed by my temperature setting

[image]

People also sent in suggestions on how to improve the algorithm. One of the most frequent was to try a different way of representing color – it turns out that RGB (with a single color represented by the amounts of red, green, and blue in it) isn’t very well matched to the way human eyes perceive color.

These are some results from a different color representation, known as HSV. In HSV representation, a single color is represented by three numbers like in RGB, but this time they stand for Hue, Saturation, and Value. You can think of the Hue number as representing the color, Saturation as representing how intense (vs gray) the color is, and Value as representing the brightness. Other than the way of representing the color, everything else about the dataset and the neural network is the same. (char-rnn, 512 neurons and 2 layers, dropout 0.8, 50 epochs)
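Converting the dataset from RGB to HSV is a one-liner per color with Python’s built-in colorsys module. A quick sketch, using a made-up example color:

```python
import colorsys

# colorsys works in 0-1 floats, so scale the 0-255 RGB values first.
r, g, b = 112, 164, 204  # a made-up example color
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
print(f"H={h:.3f} S={s:.3f} V={v:.3f}")
```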

Raw output from HSV neural net:

[image]

And here are some results from a third color representation, known as LAB. In this color space, the first number stands for lightness, the second number stands for the amount of green vs red, and the third number stands for the amount of blue vs yellow.
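The standard library doesn’t do LAB, but scikit-image is one library that can make the conversion. A minimal sketch, again on a made-up color:

```python
import numpy as np
from skimage import color

# rgb2lab expects floats in 0-1 and returns L (roughly 0-100) plus
# signed a (green vs red) and b (blue vs yellow) channels.
rgb = np.array([[[112 / 255, 164 / 255, 204 / 255]]])  # a made-up color
lab = color.rgb2lab(rgb)
print(lab[0, 0])  # [L, a, b]
```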

Raw output from LAB neural net:

[image]

It turns out that the color representation doesn’t make a very big difference in how good the results are (at least as far as I can tell with my very simple experiment). Surprisingly, RGB seems to be the best at reproducing the gradients from the original dataset – maybe it’s more resistant to disruption when the temperature setting introduces randomness.

And the color names are pretty bad, no matter how the colors themselves are represented.

However, a blog reader compiled this dataset, which has paint colors from other companies such as Behr and Benjamin Moore, as well as a bunch of user-submitted colors from a big XKCD survey. He also changed all the names to lowercase, so the neural network wouldn’t have to learn two versions of each letter.
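Merging datasets and lowercasing is easy to replicate. Here’s a sketch of how the combined file might be built; the filenames are hypothetical stand-ins for the compiled dataset:

```python
# Merge several "name R G B" files into one training set, lowercasing
# the names so the network only has to learn one version of each letter.
sources = ["sherwin_williams.txt", "behr.txt", "benjamin_moore.txt", "xkcd.txt"]

with open("big_rgb_dataset.txt", "w") as out:
    for path in sources:
        with open(path) as f:
            for line in f:
                if not line.strip():
                    continue
                name, r, g, b = line.rsplit(maxsplit=3)
                out.write(f"{name.lower()} {r} {g} {b}\n")
```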

And the results were… surprisingly good. Pretty much every name was a plausible match to its color (even if it wasn’t a plausible color you’d find in the paint store). The answer seems to be, as it often is for neural networks: more data.

Raw output using The Big RGB Dataset:

[image]

I leave you with the Hall of Fame:

RGB: [image]

HSV: [image]

LAB: [image]

Big RGB dataset: [image]

Turdly 🤣🤣🤣🤣🤣🤣🤣🤣