A neural network is a machine learning algorithm that loosely mimics the way biological neurons learn. In a nutshell, neurons (or nodes) are connected to each other, and the interaction between them can be thought of as a simple operation whose result fires the next set of neurons in the next layer. The result of each of these small operations is then passed to the connected neurons. The output at each node is called its activation value.
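The operation at a single node can be sketched in a few lines. This is a minimal illustration, not code from any real framework; the weights and inputs are made-up values:

```python
import numpy as np

def neuron_activation(inputs, weights, bias):
    """One node: a weighted sum of its inputs, squashed by a sigmoid.

    The result is the node's activation value, which is then
    passed on to the connected neurons in the next layer.
    """
    z = np.dot(inputs, weights) + bias   # the node's simple operation
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid activation

# Illustrative values only (not from any trained model):
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
a = neuron_activation(x, w, bias=0.05)   # a value strictly between 0 and 1
```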
During the training phase, random weights are initially assigned to the connections between neurons, and the optimal set of weights is learned over the course of training.
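The idea of "start random, then learn the optimal weights" can be sketched with gradient descent on a single toy neuron. This is an assumed, simplified setup (one weight, a made-up target function), just to show the mechanic:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal()                 # weight starts out random
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                      # toy target: the optimal weight is 2

lr = 0.01
for _ in range(200):
    pred = w * x
    grad = np.mean(2 * (pred - y) * x)   # gradient of mean squared error
    w -= lr * grad                        # nudge the weight toward the optimum

# After training, w has converged very close to the optimal value 2.0
```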
Now for some examples: what current research problems are big tech companies working on, and what role are neural networks playing in them?
The first example is Google's Kepler project. Google Brain's team discovered two exoplanets by training neural networks to analyze data from NASA's Kepler space telescope; roughly 700 stars were analyzed. The data itself was collected by Kepler. As Google's blog describes the signal: "When a planet passes in front of the star, it temporarily blocks some of the light, which causes the measured brightness to decrease and then increase again shortly thereafter, causing a “U-shaped” dip in the light curve."
They fixed a signal-to-noise threshold and took around 30,000 samples, which after filtering left about 15,000 samples forming the training subsets.
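The filtering step described above amounts to keeping only candidates whose signal-to-noise ratio clears a cutoff. A minimal sketch with stand-in data (the SNR values and the threshold here are assumptions, not the figures Google used):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-in data: one SNR value per candidate signal.
snr = rng.uniform(0.0, 20.0, size=30_000)

SNR_THRESHOLD = 10.0   # assumed cutoff, not the value from the real pipeline
kept = snr[snr >= SNR_THRESHOLD]   # only candidates above the threshold survive
```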
The inputs to the network were two separate views of the same light curve: one wide view that allows the model to examine signals elsewhere on the light curve, and one zoomed-in view that lets the model closely examine the shape of the detected signal.
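Constructing those two views from a single light curve can be sketched like this. This is a simplified illustration on a synthetic curve; the real pipeline also phase-folds and bins the data, which is omitted here:

```python
import numpy as np

def two_views(light_curve, dip_center, local_half_width=50):
    """Return the two inputs described above: a wide 'global' view of
    the whole curve, and a zoomed-in 'local' view around the dip."""
    global_view = light_curve                          # the whole curve
    lo = max(0, dip_center - local_half_width)
    hi = min(len(light_curve), dip_center + local_half_width)
    local_view = light_curve[lo:hi]                    # zoomed-in window
    return global_view, local_view

# Toy curve: flat brightness with a U-shaped dip in the middle.
curve = np.ones(1000)
curve[480:520] -= 0.01 * np.hanning(40)   # the transit dip
g, l = two_views(curve, dip_center=500)
```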
After testing, 670 stars were selected for a planet search. These stars were chosen because they were already known to have multiple orbiting planets, and the team believed that some of them might host additional planets that had not yet been detected.
Now go ahead and find your own planet. :)
Another example where Google used neural networks is the video segmentation problem. Segmentation enables movie directors and video content creators to separate the foreground of a scene from the background and treat them as two different visual layers. A solution to this problem allows creators to replace and modify the background, effortlessly increasing a video's production value without specialized equipment. They used a convolutional neural network (CNN) to solve this problem.
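Once a network has predicted a foreground mask, "treating foreground and background as two layers" is just per-pixel compositing. A minimal sketch with a made-up frame and mask (the CNN that would produce the mask is not shown):

```python
import numpy as np

def replace_background(frame, mask, new_background):
    """Keep foreground pixels where mask == 1, take the new background
    elsewhere. The mask is what the segmentation CNN would predict
    per frame; here it is simply given."""
    mask = mask[..., np.newaxis]                       # broadcast over RGB
    return mask * frame + (1 - mask) * new_background

# Toy 4x4 RGB frame: foreground occupies the left half.
frame = np.full((4, 4, 3), 0.8)
mask = np.zeros((4, 4)); mask[:, :2] = 1.0
green = np.zeros((4, 4, 3)); green[..., 1] = 1.0       # replacement background
out = replace_background(frame, mask, green)
```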
More about this can be read here.
Let us understand how this works. A single neuron does not understand the world by itself; it is the collective effort of many neurons that makes a neural network a great classifier.
For neural networks, we understand the math behind them, but we have much less knowledge of why they do what they do, i.e., what actually goes on in each layer.
In most cases, the first layer looks for edges and corners, intermediate layers respond to parts such as a dog's face or a leaf, and further layers build more and more abstraction until, in the case of images, the pieces are finally assembled into whole objects.
One way to visualize a network is to turn it upside down: instead of asking it to classify an image, we ask it to enhance an input image so that a chosen unit's response is amplified.
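The "enhance the image" idea is gradient ascent on the input: start from noise and repeatedly change the input so a chosen unit fires harder. A toy sketch with a single linear unit standing in for a real network layer (all values here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # the unit's fixed, already-learned weights
x = rng.normal(size=16) * 0.01   # start from near-random noise

for _ in range(100):
    # For a linear unit, activation = w . x, so the gradient of the
    # activation with respect to the input x is simply w.
    x += 0.1 * w                  # ascend: make the unit fire harder

# x now points in the direction the unit responds to most strongly.
```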
The next task is to see how feature visualization can be combined with other interpretability techniques to understand aspects of how networks make decisions. These techniques let us "stand in the middle" of the network, see the decisions being made along the way, and observe how they influence the output.
(Photo taken from Google's blog.)
For this, Google recently released Lucid, a neural network visualization library built on top of its earlier DeepDream work. It lets users produce clear feature visualizations of networks that are otherwise difficult to understand.
Alongside it, Google also introduced Colab notebooks, which provide an easy way to use Lucid.
I hope you liked the content in this article. Please share your feedback at mohit17028@iiitd.ac.in.