When you look for shapes in the clouds, you’ll often find things you see every day: dogs, people, cars. It turns out that artificial “brains” do the same thing. Google calls this phenomenon “Inceptionism,” and it’s a striking look into just how advanced artificial neural networks really are.
Google’s artificial neural networks research team explains how it builds the kind of advanced computer vision systems that can tell whether you’re looking at a picture of, say, an orange or a banana.
First, it helps to know a little bit about the structure of neural networks. Google succinctly explains that they’re made up of stacked layers of artificial neurons—sometimes as many as 30. When you run a photo through the network, the first layer detects low-level information, like the edges in the picture. The next layer might fill in some information about the shapes themselves, getting closer to figuring out what’s depicted. “The final few layers assemble those into complete interpretations—these neurons activate in response to very complex things such as entire buildings or trees,” Google’s engineers explain.
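To get a feel for what “the first layer detects edges” means, here’s a toy sketch in plain Python: a tiny grayscale image is convolved with a simple horizontal-edge filter, and the filter responds strongest where dark meets bright. This is a hand-rolled illustration, not Google’s actual network—real first-layer filters are learned from data, not written by hand.

```python
# Toy illustration of an edge-detecting "first layer":
# slide a 3x3 filter over a tiny grayscale image (nested lists of
# brightness values) and record how strongly it responds at each spot.

def convolve2d(image, kernel):
    """Valid-mode 2D correlation on nested lists (no padding)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# An image that is dark (0) on top and bright (1) on the bottom.
image = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
]

# A simple horizontal-edge kernel (Prewitt-style): negative weights on
# top, positive on the bottom, so it fires on dark-to-bright transitions.
edge_kernel = [
    [-1, -1, -1],
    [ 0,  0,  0],
    [ 1,  1,  1],
]

response = convolve2d(image, edge_kernel)
print(response)  # every 3x3 window straddles the edge, so all respond
```

Deeper layers then do the same kind of pattern-matching, but on the outputs of earlier layers rather than raw pixels—which is how “edges” become “shapes” become “buildings.”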
Google “trains” each network by feeding it tons of images—sometimes focusing on a specific type of image, like trees or animals. The Google team found that these networks can even generate images of certain objects if they’re asked to:
So what would happen if you asked a single layer of the network to “enhance” the things it detects about a certain image? For example, if you asked the layer in charge of detecting edges in images to take that information and build on it? Some weird stuff starts to happen:
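The trick behind that “enhance” step is gradient ascent on the input image: repeatedly nudge the pixels in whatever direction makes a chosen layer’s activations stronger. Here’s a deliberately tiny, framework-free sketch of that loop. The “layer” is a single hand-written contrast detector and the “image” is two pixels—pure illustration, nothing like Google’s actual code, but the feedback loop is the same idea.

```python
# Toy "enhance what you detect" loop: gradient ascent on the INPUT
# (not the weights) so a detector's response grows. Deep Dream does
# this with a deep convnet's feature maps instead of one linear unit.

def layer_activation(x, w):
    """Squared response of one detector unit: (w . x)^2."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return dot * dot

def enhance(x, w, steps=20, lr=0.05):
    """Nudge the input x uphill on the detector's activation."""
    x = list(x)
    for _ in range(steps):
        dot = sum(wi * xi for wi, xi in zip(w, x))
        # Gradient of (w.x)^2 with respect to x_i is 2 * (w.x) * w_i.
        grad = [2 * dot * wi for wi in w]
        x = [xi + lr * gi for xi, gi in zip(x, grad)]
    return x

w = [1.0, -1.0]    # a detector that likes contrast between the two pixels
x0 = [0.5, 0.4]    # nearly flat input: the detector barely fires
x1 = enhance(x0, w)
print(layer_activation(x0, w), "->", layer_activation(x1, w))
```

Run long enough, the input stops looking like the original photo and starts looking like whatever the layer is tuned to see—which is exactly where the weird stuff comes from.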
Then things get really interesting. Google asked the higher-level neuron layers—the ones that identify specific elements of the image, not just shapes and corners—to build on what they detect in an image.
And if you ask a network that’s an “expert” on architectural arches to create an image of those arches—and then ask it to generate more based on that image—you get images that look like the fever dream of M.C. Escher:
By now, the entire internet has realized that Google’s artificial neural network Deep Dream is capable of generating some pretty trippy images. But what happens when you run a movie about acid trips through the acid trip generator? Fear and Loathing in your worst nightmares, that’s what.
Some sinister GitHub user recently published a set of instructions that enables anyone to pump video through Deep Dream. As proof of concept, he gave a scene from Fear and Loathing in Las Vegas the hallucination treatment.
Just as you saw in the still images, the neural network generates bizarre swirls and shapes of unidentifiable creatures. But when everything is moving, the scene becomes more psychedelic than your worst LSD flashback—no drugs required!
You can do more than just watch a Hunter S. Thompson epic with this utility. The Deep Dream video process will work with any video. You might want to avoid doing actual drugs while watching neural network-generated content, though. Sounds dangerous.