Vipul Vaibhaw

Real numbers and Neural Networks

Let me show you a magic trick. Think of a number between 1 and 100 but don't tell me! Now multiply that number by 2. Then divide it by 2 and add 1 to it. Done?

I don't know what the number is! How can I? :P But I can bet that most readers thought of a positive integer. That's how our brain works. We are so accustomed to living in a world where we see 1 sun, 1 moon, 1 self, 1 god (or maybe 330 million gods, who knows), 10 reps in 3 sets of squats, and so on. We are so habituated to counting nature with the counting numbers - the natural numbers - that it takes extra effort from our side to think about another commonly occurring number that is not one of them: the irrational number pi.


Even though we are so used to natural numbers, neural networks deal with real numbers. In this blog we will explore this idea a bit more.

Let us start by understanding what exactly real numbers are. I am going to assume that the readers have a basic understanding of neural networks.


It is also important for us to understand why real numbers came into the picture. All the mathematics we have learned so far can be applied to rational numbers, right? (Think about it.)


Note - Rational numbers are the numbers which can be represented in the form p/q, where p and q are integers and q is not zero.


We will try to zoom in on a number line of rational numbers to see if we can represent Euler's number, e, on it. Here is the number line, magnified so that our "attention" is on the stretch between 2 and 3.


Did you notice? There is a gap on this line between 2 and 3, right where e ≈ 2.71828... should sit. Similarly, we can find many such gaps on this line, because irrational numbers like e and pi cannot be represented on the rational number line.
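To make this concrete, here is a small Python sketch (standard library only, just an illustration) that builds rational approximations of e from the series e = 1/0! + 1/1! + 1/2! + ... Every partial sum is a rational number, yet the sums only ever approach e, which lives in one of those gaps -

```python
from fractions import Fraction
import math

# Partial sums of e = sum over n of 1/n!.
# Each term 1/n! is rational, so every partial sum is rational.
s = Fraction(0)
for n in range(12):
    s += Fraction(1, math.factorial(n))
    print(f"n={n:2d}  partial sum = {float(s):.10f}")

# The limit e is irrational: no partial sum (or any other rational) equals it.
print(f"       math.e      = {math.e:.10f}")
```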

You might wonder why this matters. Well, if we cannot represent Euler's number in a space, then we cannot use natural logarithms there!


Let's define a metric space here - a set together with a distance function (a metric) defined on it. For the precise definition, please refer to page 30 of "Principles of Mathematical Analysis" by Walter Rudin.
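Spelled out (this is the standard definition, not Rudin's exact phrasing), a metric on a set $X$ is a function $d : X \times X \to \mathbb{R}$ satisfying, for all $x, y, z \in X$:

$$
\begin{aligned}
&d(x, y) \ge 0, \quad d(x, y) = 0 \iff x = y &&\text{(positivity)}\\
&d(x, y) = d(y, x) &&\text{(symmetry)}\\
&d(x, z) \le d(x, y) + d(y, z) &&\text{(triangle inequality)}
\end{aligned}
$$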


Now we know why we need real numbers: they can express all the numbers (except the imaginary ones) needed to do the necessary mathematics. The real number line is also continuous - it has no gaps. Recall the "Universal Approximation Theorem" here: a neural network with a single hidden layer can approximate any "continuous" function (on a compact domain) to arbitrary accuracy.


Let's take the example of sin(x). Can a neural network approximate it? You can try writing a single-hidden-layer fully-connected neural network for this task. This is how you can generate the data and train such a network -
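A minimal sketch, assuming PyTorch - the network width, activation, and training schedule below are illustrative choices, not the only way to do it:

```python
import math
import torch
import torch.nn as nn

# Generate the data: x in [-2*pi, 2*pi], target y = sin(x).
x = torch.linspace(-2 * math.pi, 2 * math.pi, 1000).unsqueeze(1)
y = torch.sin(x)

# A single-hidden-layer fully-connected network. By the Universal
# Approximation Theorem, enough hidden units suffice to approximate
# sin(x) on this compact interval.
model = nn.Sequential(
    nn.Linear(1, 64),
    nn.Tanh(),
    nn.Linear(64, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss.item():.6f}")
```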

Of course, a lot of improvement can be done in the code above; it is only meant to give you an idea.


sin(x), or its inverse sin⁻¹(x), can only be approximated properly in real number space. In rational number space we do not even know what sin⁻¹(1) is: its value is π/2, which is irrational.


Also, let us look at the equation of a commonly occurring and very important statistical distribution, the normal distribution -
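$$
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x - \mu)^2}{2\sigma^2}}
$$

where μ is the mean and σ is the standard deviation.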

Because of the presence of pi and Euler's number e, this density can only be represented properly in real number space.


Note - The set of real numbers is a superset of the set of rational numbers.

Another question which came to my mind was: why are the parameters in neural networks, be it hyperparameters or learnable parameters, real numbers?


One quick answer I can give is that the set of real numbers is ordered: we know that 2 < 3, and more generally we know how to compare any two elements of the set. This is what allows us to do gradient descent, for example. We need to know whether the loss is decreasing or not, which means comparing two numbers, and that can only be done if the set is ordered.
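As a toy illustration (plain Python, a sketch of the idea rather than anyone's production code), here is gradient descent on f(w) = (w - 3)², where each step checks that the loss went down - a comparison that relies on the total order of the reals:

```python
# Gradient descent on f(w) = (w - 3)^2, minimized at w = 3.
w = 0.0
lr = 0.1
prev_loss = float("inf")

for step in range(25):
    loss = (w - 3) ** 2
    assert loss <= prev_loss, "loss increased"  # '<=' needs an ordered set
    prev_loss = loss
    grad = 2 * (w - 3)   # derivative of (w - 3)^2
    w -= lr * grad

print(f"w = {w:.4f}, loss = {prev_loss:.6f}")  # w approaches 3
```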


This property is not present in the set of complex numbers: there is no ordering of them compatible with their arithmetic. The mathematics of real numbers is also easier than the mathematics of complex numbers. There is something known as rotation in complex numbers: multiplying by a complex number of unit magnitude rotates a point in the plane. The architectures which we have right now don't need the property of rotation, but I certainly think that this can be an area of exploration. Feel free to reach out if you are working in this domain or are interested in taking this topic up.
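To see the rotation property concretely, here is a tiny Python snippet (standard library only, just an illustration):

```python
import cmath
import math

# Multiplying by e^{i*theta} rotates a complex number by theta
# radians about the origin, without changing its magnitude.
z = complex(1, 0)                    # the point (1, 0) in the plane
rot90 = cmath.exp(1j * math.pi / 2)  # a quarter-turn rotation
w = z * rot90

print(w)        # ~ (0 + 1j): the point (1, 0) rotated to (0, 1)
print(abs(w))   # 1.0: the magnitude is preserved
```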


Introducing complex numbers into neural networks would require fundamental changes to many established methods, such as activation functions. However, I feel that if we want to approximate the following equation, the most beautiful equation in mathematics, we would need to revisit fundamental topics in deep learning -
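$$
e^{i\pi} + 1 = 0
$$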




If we are to use neural networks to explore fractal dimensions, we would need an amalgamation of chaos theory and complex numbers in neural networks.

I will end this blog here. There are a lot of ideas which I have thrown around in this blog. The worth of those ideas is questionable and open to critique. However, if you liked this blog then please subscribe down below and share this article; it would be helpful to us. Also check out the courses in the events section. Thanks for your time. :)

