Monday, June 22, 2015

Inceptionism: Going Deeper into Neural Networks
Alexander Mordvintsev, Christopher Olah, and Mike Tyka

We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer. 
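To make the layered picture concrete, here is a minimal sketch in PyTorch of a stacked network: an image enters the input layer, each layer feeds the next, and the final "output" layer produces the class scores. The layer sizes and the 10-class output are illustrative assumptions, not the GoogLeNet-style architecture the post's images were actually made with.

```python
# A minimal stacked network, sketched for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # input layer: RGB image in
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # each layer "talks" to the next
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                            # "output" layer: one score per class
)

image = torch.randn(1, 3, 224, 224)  # a dummy 224x224 RGB image
scores = model(image)                # the network's "answer" comes from this final layer
print(scores.shape)                  # torch.Size([1, 10])
```

Training would then repeatedly compare these scores against labeled examples and nudge the parameters by gradient descent, which is the "gradual adjusting" described above.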

If we apply the enhancement algorithm iteratively to its own outputs, with a slight zoom after each iteration, we get an endless stream of new impressions that explores the set of things the network knows about. We can even start this process from a random-noise image, so that the result is purely the invention of the neural network.
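A hedged sketch of that iterate-and-zoom loop, in Python with NumPy and SciPy. The dream_step helper below is hypothetical: it stands in for the feature-amplification pass (gradient ascent on a layer's activations) that the full post describes, and only the zoom-then-repeat structure is shown here.

```python
# Iterate: enhance the image, zoom in slightly, and feed the result back in.
import numpy as np
from scipy.ndimage import zoom as ndzoom

def dream_step(img: np.ndarray) -> np.ndarray:
    """Placeholder (assumed): one gradient-ascent pass that amplifies
    whatever features the network detects in img."""
    return img

def zoom_in(img: np.ndarray, factor: float = 1.05) -> np.ndarray:
    """Zoom into the center of an HxWx3 image by `factor`, keeping its size."""
    h, w, _ = img.shape
    zoomed = ndzoom(img, (factor, factor, 1), order=1)
    zh, zw, _ = zoomed.shape
    top, left = (zh - h) // 2, (zw - w) // 2
    return zoomed[top:top + h, left:left + w]

# Start from random noise, so every structure that appears comes from the network.
frame = np.random.uniform(size=(224, 224, 3)).astype(np.float32)
for _ in range(100):
    frame = dream_step(frame)  # amplify what the network "sees"
    frame = zoom_in(frame)     # zoom a little, then repeat on the output
```

Because each iteration works on the previous output, the frames form an endless zooming sequence rather than a single still image.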

Images and text by Alexander Mordvintsev, Christopher Olah, and Mike Tyka

Read more here: Inceptionism: Going Deeper into Neural Networks
