Toolverse

Unix Timestamps Explained: Epoch, Y2K38, and Timezone Pitfalls


A Unix timestamp is a single integer representing a moment in time. No timezone ambiguity, no date format debates, no locale differences — just a number. This simplicity is why every major API, database, and operating system uses Unix timestamps as their internal time representation. Understanding what a Unix timestamp is, how it works, and where it breaks down is essential knowledge for anyone working with time-sensitive data.

The Epoch: January 1, 1970

A Unix timestamp counts the number of seconds elapsed since 00:00:00 UTC on January 1, 1970 — a reference point called the Unix epoch. This date was chosen when Ken Thompson and Dennis Ritchie designed the Unix operating system at Bell Labs. The original implementation used a 32-bit unsigned integer counting 1/60th-of-a-second ticks from an epoch of January 1, 1971, a counter that overflows in about 829 days. The epoch was later moved to 1970 and the resolution changed to whole seconds, as documented in the POSIX specification (IEEE Std 1003.1).

Negative timestamps represent dates before the epoch. The timestamp -86400 corresponds to December 31, 1969 (one day before the epoch, since there are 86,400 seconds in a day). The Apollo 11 Moon landing, at 20:17:40 UTC on July 20, 1969, has the timestamp -14182940.
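Python's datetime module handles pre-epoch values directly, as a quick sketch shows (the timezone-aware form is used because it converts with pure arithmetic rather than platform C functions, which may reject negative values):

```python
from datetime import datetime, timezone

# Negative timestamps resolve to moments before the 1970 epoch.
day_before = datetime.fromtimestamp(-86400, tz=timezone.utc)
moon_landing = datetime.fromtimestamp(-14182940, tz=timezone.utc)

print(day_before)    # 1969-12-31 00:00:00+00:00
print(moon_landing)  # 1969-07-20 20:17:40+00:00
```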

The choice of UTC (Coordinated Universal Time) as the reference timezone is deliberate. UTC does not observe daylight saving time and provides a universal baseline. All timezone conversions are performed by adding or subtracting the appropriate offset from the UTC timestamp.
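A short Python sketch makes this concrete: the integer itself carries no timezone, and the offset is applied only when the moment is formatted.

```python
from datetime import datetime, timedelta, timezone

ts = 1711929600  # 2024-04-01 00:00:00 UTC

# Render the same instant in UTC and in a fixed UTC+9 offset (JST).
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
jst = datetime.fromtimestamp(ts, tz=timezone(timedelta(hours=9)))

print(utc.isoformat())  # 2024-04-01T00:00:00+00:00
print(jst.isoformat())  # 2024-04-01T09:00:00+09:00
assert utc == jst  # same absolute moment, different rendering
```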

The Year 2038 Problem (Y2K38)

Signed 32-bit integers can represent values up to 2,147,483,647. The Unix timestamp reaches this maximum at 03:14:07 UTC on January 19, 2038. One second later, the integer overflows — wrapping around to -2,147,483,648, which corresponds to December 13, 1901.

This is not theoretical. Embedded systems, IoT devices, and legacy software running 32-bit time functions will misinterpret dates after 2038. The GNU C Library (glibc) transitioned to 64-bit time_t on 32-bit Linux systems starting with version 2.34 (2021). A signed 64-bit timestamp supports dates until approximately 292 billion years in the future — well beyond the expected lifespan of the solar system.
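The wraparound can be simulated in Python by forcing the arithmetic through a signed 32-bit representation (a sketch of the failure mode; real-world breakage happens in C code using a 32-bit time_t):

```python
import struct
from datetime import datetime, timezone

MAX_INT32 = 2**31 - 1  # 2147483647, reached at 2038-01-19 03:14:07 UTC

# Add one second, then reinterpret the bits as a signed 32-bit integer.
raw = (MAX_INT32 + 1) & 0xFFFFFFFF
wrapped = struct.unpack("<i", struct.pack("<I", raw))[0]

print(wrapped)  # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))
# 1901-12-13 20:45:52+00:00
```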

Systems still at risk include:

  • 32-bit embedded Linux devices with firmware that will not be updated
  • Legacy databases storing timestamps as 32-bit integers (MySQL TIMESTAMP columns have a maximum of 2038-01-19)
  • File formats with fixed 32-bit timestamp fields (ext3 filesystem inodes, ZIP archives)
  • Older versions of PHP, Perl, and other languages on 32-bit systems

Seconds vs Milliseconds: The JavaScript Trap

Unix timestamps traditionally count seconds. However, JavaScript's Date.now() returns milliseconds since the epoch. This mismatch is one of the most common sources of timestamp bugs.

A quick way to distinguish them: second-based timestamps (as of 2025) are 10 digits long and start with 17. Millisecond timestamps are 13 digits, the same leading digits followed by three more. For example:

  • 1711929600 — seconds (April 1, 2024 00:00:00 UTC)
  • 1711929600000 — milliseconds (same moment)
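The digit-count heuristic can be turned into a small normalization helper. This is a sketch (to_seconds is a hypothetical name, and the magnitude check assumes present-day timestamps, not dates in the distant past or future):

```python
def to_seconds(ts):
    """Normalize a Unix timestamp to seconds, guessing the unit by magnitude.

    Values of 1e12 or more (13+ digits) are assumed to be milliseconds;
    anything smaller is assumed to already be in seconds.
    """
    if abs(ts) >= 1_000_000_000_000:  # 13+ digits: milliseconds
        return ts / 1000.0
    return float(ts)

print(to_seconds(1711929600))     # 1711929600.0 (seconds, unchanged)
print(to_seconds(1711929600000))  # 1711929600.0 (milliseconds, scaled)
```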

Other platforms and their conventions:

  • Python — time.time() returns seconds as a float (e.g., 1711929600.123)
  • Java — System.currentTimeMillis() returns milliseconds; Instant.now().getEpochSecond() returns seconds
  • Go — time.Now().Unix() returns seconds; time.Now().UnixMilli() returns milliseconds
  • PostgreSQL — EXTRACT(EPOCH FROM NOW()) returns seconds as a decimal
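In Python, both precisions are available from the standard library, which makes the mismatch easy to demonstrate side by side:

```python
import time

secs = time.time()                    # float seconds, e.g. 1711929600.123
millis = time.time_ns() // 1_000_000  # integer milliseconds, analogous to
                                      # Java's System.currentTimeMillis()

# The two differ by a factor of ~1000: the classic off-by-1000x bug
# if one side of an API expects the other unit.
print(int(secs), millis)
```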

Unix Timestamps vs ISO 8601

ISO 8601 defines human-readable date-time strings like 2024-04-01T00:00:00Z. Both formats represent the same information, but they serve different purposes:

  • Unix timestamps — Compact (10-13 characters vs 20-30+), trivial to compare and sort arithmetically, unambiguous, language-agnostic. Preferred for storage, computation, and machine-to-machine communication.
  • ISO 8601 — Human-readable, self-documenting (includes timezone offset), lexicographically sortable when using the Z (UTC) suffix. Preferred for logging, APIs consumed by humans, and configuration files.

Many APIs use both. The JWT specification (RFC 7519) uses Unix timestamps in seconds for exp (expiration), iat (issued at), and nbf (not before) claims. HTTP headers like Date, Last-Modified, and Expires use RFC 7231 format (Sun, 06 Nov 1994 08:49:37 GMT), which is neither Unix timestamp nor ISO 8601.
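Converting between the two formats is a one-liner in most languages. In Python, for instance (using the +00:00 offset form, since fromisoformat only accepts a bare Z suffix from Python 3.11 onward):

```python
from datetime import datetime, timezone

# Unix timestamp -> ISO 8601
iso = datetime.fromtimestamp(1711929600, tz=timezone.utc).isoformat()
print(iso)  # 2024-04-01T00:00:00+00:00

# ISO 8601 -> Unix timestamp
parsed = datetime.fromisoformat("2024-04-01T00:00:00+00:00")
print(int(parsed.timestamp()))  # 1711929600
```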

Timezone Pitfalls

Unix timestamps are inherently UTC. The timestamp 1711929600 means the same absolute moment regardless of whether the system is in New York, Tokyo, or London. Timezone errors creep in during conversion:

  • Local time without offset — Storing 2024-04-01 00:00:00 without a timezone is ambiguous. Was it midnight UTC, midnight EST (-5), or midnight JST (+9)? Always store or transmit timestamps in UTC and convert to local time only at the display layer.
  • Daylight Saving Time transitions — When clocks spring forward, times like 02:30 do not exist. When clocks fall back, times like 01:30 occur twice. Unix timestamps avoid this entirely because UTC has no DST. The IANA timezone database (used by most operating systems) tracks these rules for every timezone.
  • Leap seconds — UTC occasionally adds a leap second (27 have been added since 1972). POSIX explicitly ignores leap seconds — a POSIX day is always exactly 86,400 seconds. This means Unix timestamps are not strictly monotonic across leap second insertions. In practice, most systems use leap second smearing (spreading the extra second across several hours) to avoid discontinuities.
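The DST pitfall is easy to observe with Python's zoneinfo module, which wraps the IANA database (assuming tzdata for America/New_York is installed): at the 2024 US spring-forward transition, one elapsed second jumps the local clock from 01:59:59 to 03:00:00.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # IANA timezone database (Python 3.9+)

ny = ZoneInfo("America/New_York")

# The 2024 US spring-forward transition occurs at 07:00:00 UTC on March 10.
before = datetime.fromtimestamp(1710053999, tz=ny)
after = datetime.fromtimestamp(1710054000, tz=ny)

print(before)  # 2024-03-10 01:59:59-05:00 (EST)
print(after)   # 2024-03-10 03:00:00-04:00 (EDT); 02:30 local never existed
```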

How APIs and Databases Use Timestamps

Timestamps appear throughout the software stack:

  • JWT tokens — The exp claim is a Unix timestamp in seconds. A JWT with "exp": 1711933200 expires at 2024-04-01T01:00:00Z. Servers compare exp against Math.floor(Date.now() / 1000) and reject expired tokens. Clock skew between servers should be absorbed with a small tolerance window; JWT libraries commonly allow a configurable leeway of a few seconds to a minute.
  • Database columns — PostgreSQL TIMESTAMPTZ stores microsecond-precision UTC timestamps internally and converts to the session timezone on output. SQLite has no native datetime type and stores timestamps as either Unix integers, ISO 8601 text, or Julian day numbers depending on the application.
  • HTTP caching — The Cache-Control: max-age=3600 header uses seconds (not a timestamp). The Expires header uses an absolute date in RFC 7231 format. Conditional requests use If-Modified-Since with the same format.
  • Event sourcing and logs — Distributed systems use timestamps for event ordering. However, clock synchronization across servers is imperfect (typical NTP accuracy is 1-10ms on LAN, 10-100ms over the internet). For strict ordering, logical clocks (Lamport timestamps, vector clocks) or hybrid logical clocks are necessary.
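As one concrete case, an expiration check with a skew allowance might look like this in Python (a sketch: is_expired and the 30-second LEEWAY are illustrative choices, not part of RFC 7519):

```python
import time

LEEWAY = 30  # seconds of tolerance for inter-server clock skew (illustrative)

def is_expired(exp, now=None):
    """Return True if a JWT 'exp' claim (Unix seconds) is past, with leeway."""
    now = time.time() if now is None else now
    return now > exp + LEEWAY

exp = 1711933200  # 2024-04-01T01:00:00Z
print(is_expired(exp, now=exp + 5))   # False: still inside the leeway window
print(is_expired(exp, now=exp + 60))  # True: expired beyond the leeway
```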

Need to convert between Unix timestamps, ISO 8601, and human-readable dates? The Timestamp Converter handles seconds and milliseconds, displays results in UTC and local time, and runs entirely in the browser.


Frequently Asked Questions

What happens at the Unix timestamp 2147483647?
On January 19, 2038 at 03:14:07 UTC, 32-bit signed integers overflow and wrap to a negative number, representing December 13, 1901. This is the Y2K38 problem. Modern 64-bit systems are unaffected, but embedded devices and legacy software may fail.

Should I store timestamps in seconds or milliseconds?
It depends on your platform. JavaScript and Java use milliseconds natively. Python, Go, and PostgreSQL use seconds. Always document which precision your API returns to avoid off-by-1000x bugs.

Do Unix timestamps account for leap seconds?
No. POSIX defines each day as exactly 86,400 seconds. Leap seconds are typically handled by "smearing" — distributing the extra second across surrounding hours. Google and AWS both use leap smearing.