Essay:
Are we right to require explanations from neural networks?
One day, many centuries ago, a man looked up at the sky and thought to himself -- what if the stars, the sun, and the moon didn't control his life and fate, as horoscopes claim... what if the stars and planets were bodies with their own laws and life cycles, independent of humankind? This person, ladies and gentlemen, brought about a revolution in thinking -- he wanted to "know" the world he was living in. He wanted to "understand" it, and not merely be a part of it.
It is not surprising that a famous scientist, when asked, "Imagine the whole of human civilization collapses and you could transmit only ONE message into the future -- what would this message be?", answered, "The Universe is knowable," meaning that universal laws can be found. That the Universe does not run on magic.
Human curiosity and the hunger for knowledge are of immense importance in the saga of humankind. Scientific "explanations" have been a driving force behind engineering and the sciences.
But does our pet topic -- machine learning -- betray explanation? At first, it seems so. It seems as though something "magical" is happening inside these "black boxes" we call neural networks (NNs). But imagine this: if there were an algorithm that chose a "best-fit function" between inputs and outputs from an arbitrary list of candidate functions, would we ask this algorithm to explain its choice? No. We know how it works. Surprisingly, the same is true of NNs: training is a mechanical search for a best-fit function, just over a far richer family of candidates.
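To make the analogy concrete, here is a minimal sketch of such a "best-fit chooser" (in Python; the data and the candidate list are hypothetical, chosen only for illustration). It fits each candidate family to the data by least squares and picks the one with the lowest squared error -- a procedure we would never ask to "explain" itself.

```python
import numpy as np

# Hypothetical data: noisy samples from some unknown input-output relation.
rng = np.random.default_rng(0)
x = np.linspace(0, 2, 50)
y = 1.5 * x**2 + rng.normal(scale=0.1, size=x.shape)

# An arbitrary list of candidate function families, each described by a
# simple feature map that least squares can fit.
candidates = {
    "linear":    lambda x: np.column_stack([x, np.ones_like(x)]),
    "quadratic": lambda x: np.column_stack([x**2, x, np.ones_like(x)]),
    "sinusoid":  lambda x: np.column_stack([np.sin(x), np.ones_like(x)]),
}

def fit_and_score(features):
    """Least-squares fit of y on the given features; return squared error."""
    A = features(x)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coeffs
    return float(residual @ residual)

# Choose the best-fit function: the lowest squared error wins.
scores = {name: fit_and_score(f) for name, f in candidates.items()}
best = min(scores, key=scores.get)
print(f"best-fit family: {best}, errors: {scores}")
```

A trained NN does the same kind of thing: its weights are simply the coefficients that minimize the fitting error over its (vastly larger) function family.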
However, the problem, it seems, is that we identify very personally with NNs. We think of NNs as models of our brains rather than as mathematical entities. Therefore, we demand "explanations" from them as we would from our fellow human beings. In fact, NNs emerged from brain research, and the earliest NN models were heavily inspired by brain structure. Hence, it is not strange that we relate to them so personally.
Furthermore, it is not even clear why such explanations of the operation of NNs -- which are built precisely to solve problems too complex for our current thinking tools -- should even be possible.
I feel our energies are better spent understanding NNs than requiring them to behave like us. Perhaps by asking for explanations we fall into the same trap as our ancestors did: attaching special meanings to the stars and constellations rather than seeking to understand them.
Perhaps we do justice to the human endeavour by trying to understand these mathematical entities on their own terms rather than trying to make NNs behave like human beings.