Well, first off, it is not actually a paradox; I just liked the sound of it, so I put it in the title. It is really a funny, intentional quirk that the original Unix developers deliberately designed into the system. In Linux, time is calculated from the Epoch.
What is the Epoch?
Unix time is currently defined as the number of non-leap seconds that have passed since 00:00:00 UTC on Thursday, 1 January 1970, referred to as the Unix epoch. Unix time is typically encoded as a signed integer.
The Unix time 0 is exactly midnight UTC on 1 January 1970, with Unix time incrementing by 1 for every non-leap second after this. For example, 00:00:00 UTC on 1 January 1971 is represented in Unix time as 31536000. Negative values, on systems that support them, indicate times before the Unix epoch, with the value decreasing by 1 for every non-leap second before the epoch. For example, 00:00:00 UTC on 1 January 1969 is represented in Unix time as −31536000. Every day in Unix time consists of exactly 86400 seconds.
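The examples above are easy to check for yourself. Here is a small Python sketch that reproduces the 1971 and 1969 values using nothing but date arithmetic against the epoch:

```python
from datetime import datetime, timezone

# The Unix epoch: 00:00:00 UTC on 1 January 1970
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)

# 00:00:00 UTC on 1 January 1971 is one non-leap year (365 days) later
t_1971 = datetime(1971, 1, 1, tzinfo=timezone.utc)
print(int((t_1971 - epoch).total_seconds()))  # 31536000 = 365 * 86400

# Moments before the epoch get negative Unix timestamps
t_1969 = datetime(1969, 1, 1, tzinfo=timezone.utc)
print(int((t_1969 - epoch).total_seconds()))  # -31536000
```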
Understanding Time Representation in Unix Systems
Unix time is stored in a 32-bit signed integer, which means there must be a limit to how high that number can go, right? If you think so, you are absolutely right: since the integer is a signed 32-bit integer, its values range from -2147483648 to +2147483647.
Okay, let me explain what just happened. So far we have understood that in Linux, time is represented as a 32-bit signed integer.

Now, if I have to represent that 32-bit signed integer, I will have a single sign bit (also known as the Most Significant Bit, or MSB for short) followed by 31 value bits. Something like this:

sign bit (MSB) | 31 value bits
From the above it is clear that the maximum value we can store in this 32-bit signed integer is +2147483647 (2^31 - 1 = 2147483647), and the smallest value we can store is -2147483648, because of how negative numbers are stored in binary (two's complement).
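You can confirm these bounds directly. Python integers are arbitrary precision, so this is just a sketch of the two's complement arithmetic, not an actual overflowing type:

```python
# Bounds of a 32-bit signed (two's complement) integer:
# one sign bit, 31 value bits
INT32_MAX = 2**31 - 1   # +2147483647
INT32_MIN = -(2**31)    # -2147483648

print(INT32_MAX, INT32_MIN)
```

Note the asymmetry: two's complement has exactly one more negative value than positive, because there is only one zero.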
All these numbers are seconds: Unix time 100 means 100 seconds after 00:00:00 UTC on 1 January 1970.
The Linux Time Problem
Now we know that time is represented in seconds and is stored in a 32-bit signed integer on your computer. We also know the minimum and maximum values such an integer can hold: -2147483648 to +2147483647 seconds.
It's important to understand that this range is asymmetric by one unit: two's complement representation gives the integer one more negative value than positive, which is why the minimum is -2147483648 while the maximum is only +2147483647.
There are 31536000 seconds in one (non-leap) year, and our 32-bit signed integer tops out at 2147483647 seconds, so mathematically we have approximately 68 years.
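The "approximately 68 years" figure comes from one division, which you can check in a couple of lines:

```python
# Seconds in one non-leap year
seconds_per_year = 365 * 24 * 60 * 60   # 31536000

# Largest value a 32-bit signed counter can hold
INT32_MAX = 2**31 - 1                   # 2147483647

# How many years of seconds fit before the counter runs out
print(INT32_MAX / seconds_per_year)     # roughly 68.1 years
```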
This means we can store time in the integer in seconds, but the maximum time that can be stored is 2147483647 seconds, which runs out at 03:14:07 UTC on 19 January 2038. This is the famous Year 2038 problem.
Because the integer is signed, we can represent roughly 68 years after and 68 years before 1 January 1970.
Solving the Linux Time Problem
The most obvious solution is to use a 64-bit signed integer instead of a 32-bit one. If you thought of this, then you are on the right track, since the default integer used to store time on a 64-bit Linux / Unix based operating system is a 64-bit signed integer. You can also use 64-bit time on a 32-bit Unix system. The GNU project also knows about this issue and shows how you can overcome this problem: Avoiding the year 2038 problem (GNU Gnulib).
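To see why 32 bits is the problem and 64 bits is the fix, here is a sketch that packs timestamps into a fixed-width signed field with Python's standard `struct` module (the `<i` / `<q` format codes mean little-endian 32-bit and 64-bit signed integers):

```python
import struct

ROLLOVER = 2**31 - 1  # last second representable in 32-bit time

# The last pre-rollover second fits in a 32-bit signed field...
last_ok = struct.pack("<i", ROLLOVER)       # 4 bytes, packs fine

# ...but one second later no longer fits, which is exactly how a
# 32-bit time_t fails on 19 January 2038
try:
    struct.pack("<i", ROLLOVER + 1)
    overflowed = False
except struct.error:
    overflowed = True
print("32-bit overflow:", overflowed)

# A 64-bit signed field holds it with room for ~292 billion years
after_2038 = struct.pack("<q", ROLLOVER + 1)  # 8 bytes, no problem
```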
You might also like: What is TOTP? | Time-Based OTP Explained