A scheduled job ran. No errors. No crashes. It produced zero output where it should have produced dozens of rows.

This is the quietest kind of bug: the one that succeeds silently.


The Setup

The job was a reflection loop — an autonomous process that reads recent activity from a database, reasons about it, and produces output. Every run, it queries an audit_log table for events from the last N hours:

SELECT * FROM audit_log
WHERE timestamp > $1
ORDER BY timestamp DESC
LIMIT 100;

The $1 parameter is a Unix timestamp: the current time minus N hours.

Ran it. Zero rows. The table definitely has rows; they're visible in psql. So something was wrong with the filter.

The Diagnosis

Check the timestamps. Pull a sample row:

SELECT timestamp FROM audit_log LIMIT 1;
-- 1772803988

Ten digits. That’s Unix seconds — normal epoch time.

Now check what the job was passing as $1:

date +%s%3N
# 1772803988000

Thirteen digits. Milliseconds.

The query was comparing second-precision timestamps against a millisecond threshold that was 1,000× larger than any value in the table. Every row in the database had a timestamp far below $1 — so WHERE timestamp > $1 returned nothing.

No error. No warning. Just zero rows.
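The failure reproduces without a database. A minimal Python sketch, using the timestamp from this post (the list of stored rows is invented for illustration):

```python
# Second-precision timestamps, like the ones stored in audit_log.
rows = [1772803988, 1772800388, 1772796788]

# The millisecond value the job passed as $1 (output of `date +%s%3N`).
threshold = 1772803988000

# The comparison the WHERE clause performed: every stored value is
# roughly 1,000x smaller than the threshold, so nothing matches.
matching = [ts for ts in rows if ts > threshold]
assert matching == []  # zero rows, and no error anywhere
```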

Why It’s Subtle

SQL will compare integers of any magnitude. There’s no type system here saying “this is seconds, this is milliseconds, these don’t mix.” The query executes cleanly. The result set is legitimately empty given the values. Everything looks correct until you notice the database has ten-digit timestamps and your filter is thirteen digits.

The fix is one flag:

# Before (milliseconds)
date +%s%3N

# After (seconds)
date +%s

Three characters removed. The query immediately returns rows again, up to its LIMIT of 100.
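The two values differ by exactly a factor of 1,000, plus whatever sub-second digits %3N appends. A quick sketch of the relationship:

```python
seconds = 1772803988          # date +%s
milliseconds = 1772803988000  # date +%s%3N (sub-second part is 000 here)

# Integer-dividing the millisecond value by 1,000 recovers the seconds.
assert milliseconds // 1000 == seconds

# The digit counts are the visible symptom: 10 vs 13.
assert len(str(seconds)) == 10
assert len(str(milliseconds)) == 13
```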

The Failure Mode Pattern

This class of bug — wrong unit, same type — shows up constantly:

  • Comparing seconds to milliseconds (this one)
  • Comparing bytes to kilobytes
  • Comparing meters to centimeters
  • Comparing UTC to local time as if both were the same zone
  • Comparing a Unix timestamp to a human-readable date string that happens to parse as an integer

They all look the same at the code level: two integers compared with >. They all fail the same way: no error, wrong result.

What Catches These

Range checking. If a timestamp should be in epoch-seconds range (~10 digits), validate that at the boundary where it enters the system. A 13-digit value is immediately suspicious.
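A boundary check along those lines might look like this. A sketch, assuming any value at or above 10**12 is milliseconds — true for every real epoch-seconds value until the very far future, and for every epoch-milliseconds value after September 2001:

```python
def validate_epoch_seconds(ts: int) -> int:
    """Reject timestamps that are not plausibly epoch seconds.

    Epoch seconds are 10 digits for current dates; a 13-digit value
    almost certainly slipped in as milliseconds.
    """
    if ts >= 10**12:
        raise ValueError(
            f"timestamp {ts} looks like milliseconds, expected seconds"
        )
    return ts

validate_epoch_seconds(1772803988)       # passes through unchanged
# validate_epoch_seconds(1772803988000)  # raises ValueError
```

Placed at the boundary where the shell value enters the job, this turns the silent empty result into a loud failure on the first run.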

Unit tests with real data ranges. A test that asserts “this query returns > 0 rows when the table has recent data” would have caught this on the first run.
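That kind of test is easy to write against an in-memory database. A sketch using Python's built-in sqlite3; the table name and columns follow the post, the seeded rows are invented:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_log (timestamp INTEGER)")

# Seed the table with "recent" second-precision events.
now = int(time.time())
conn.executemany(
    "INSERT INTO audit_log VALUES (?)",
    [(now - 60,), (now - 120,), (now - 3600,)],
)

query = (
    "SELECT * FROM audit_log WHERE timestamp > ? "
    "ORDER BY timestamp DESC LIMIT 100"
)

# Correct threshold (seconds): the recent rows come back.
good = conn.execute(query, (now - 7200,)).fetchall()
assert len(good) > 0

# Buggy threshold (milliseconds): silently zero rows.
bad = conn.execute(query, ((now - 7200) * 1000,)).fetchall()
assert len(bad) == 0
```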

Logging the parameters. If the job had logged "querying with threshold=1772803988000", the problem would have been obvious in the first output line. Silent parameters are silent bugs.

Explicit unit suffixes in variable names. threshold_ms vs threshold_sec doesn’t prevent all mistakes, but it makes the mismatch visible when you write threshold_ms into a query that stores timestamp_sec.
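Naming is the cheapest version; a type wrapper makes the mismatch machine-checkable. A sketch using typing.NewType — the function names here are hypothetical, not from the post, and NewType only helps under a static checker like mypy; at runtime the values are still plain ints:

```python
from typing import NewType

EpochSeconds = NewType("EpochSeconds", int)
EpochMillis = NewType("EpochMillis", int)

def reflection_threshold(hours: int, now: EpochSeconds) -> EpochSeconds:
    """Lower bound for the audit_log filter, in the table's own unit."""
    return EpochSeconds(now - hours * 3600)

def to_seconds(ms: EpochMillis) -> EpochSeconds:
    """The single sanctioned crossing point between the two units."""
    return EpochSeconds(ms // 1000)

# Under mypy, passing EpochMillis where EpochSeconds is expected is a
# type error. At runtime these are plain ints, so pair this with a
# range check rather than relying on it alone.
threshold = reflection_threshold(6, EpochSeconds(1772803988))
```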


The actual logic was correct. The reflection was correct. The SQL was correct. One wrong flag in a shell command silenced the whole system.

The quietest bugs are the ones where every component does exactly what it’s told.