[ABOUT OUR WORK]
A few words about how our work started.

The Inference Engine began with an anomaly.

Several years ago, a research scientist working in the United States received an email that could not be explained by any known technical failure, spoofing method, or statistical coincidence. The message originated from infrastructure that did not exist, carried timestamps inconsistent with causal order, and referenced events for which no public or private records were available.
The message did not explain itself.

It did not identify a sender.
It did not request a response.
It consisted only of fragments.

The recipient—an expert in distributed systems and probabilistic modeling—did not publish the email. Instead, she began to observe and collect further anomalies. Over time, additional messages arrived: corrupted text segments, partial technical descriptions, personal accounts, procedural documents, narrative fragments. None were complete. Many contradicted one another. All appeared to be damaged by interference.

What could be established, slowly, was this: the data was not predictive in the usual sense. It did not extrapolate from the present. It referenced a future state as if it already existed.
Subsequent analysis suggested a mechanism:

The most plausible explanation is that limited amounts of structured information were being transmitted backward through time via unstable quantum tunneling effects. These transmissions do not carry matter or consciousness—only data—and only in forms that can survive severe distortion. Existing communication systems, email among them, appear to be sufficient receivers.

Whether the transmissions are deliberate, accidental, or residual remains unknown. From this material, the Inference Engine was developed.


Method

The Inference Engine is not a forecasting tool in the conventional sense. It does not predict trends. It does not simulate outcomes. It does not assume a fixed future.

Instead, it operates as a probabilistic reconstruction system.

Using AI-assisted denoising and inference techniques, the Engine analyzes recovered transmission fragments and attempts to infer futures that could plausibly give rise to them. These projections are continuously revised. As new transmissions arrive, earlier conclusions may weaken, strengthen, or collapse entirely.

Errors are expected.
Contradictions are informative.
Noise is measured, not discarded.

What emerges is not certainty, but a moving boundary of likelihood.
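
For readers who want a concrete picture of the revision step, here is a minimal sketch assuming a simple Bayesian reweighting over candidate futures. The projection labels, fragments, and likelihood values below are invented for illustration; they are not drawn from the transmissions or from the Engine's actual implementation.

```python
# Illustrative sketch only: a toy Bayesian reweighting loop over candidate futures.
# The projection labels, fragments, and likelihood values are invented for this example.

from dataclasses import dataclass

@dataclass
class Projection:
    label: str       # a candidate future that could plausibly have produced the fragments
    weight: float    # probability mass currently assigned to that future

def revise(projections, likelihoods):
    """Reweight projections after a new fragment arrives.

    likelihoods[label] estimates how plausibly the projected future
    labelled `label` would give rise to the new fragment.
    """
    total = 0.0
    for p in projections:
        # A projection that cannot account for the fragment loses nearly all its weight.
        p.weight *= likelihoods.get(p.label, 1e-6)
        total += p.weight
    for p in projections:
        # Renormalize so the weights remain a probability distribution.
        p.weight = p.weight / total if total > 0 else 0.0
    return projections

projections = [
    Projection("coastal-retreat", 0.5),
    Projection("grid-collapse", 0.3),
    Projection("status-quo", 0.2),
]

# Two new fragments arrive; earlier conclusions weaken, strengthen, or collapse.
for fragment_likelihoods in (
    {"coastal-retreat": 0.7, "grid-collapse": 0.4, "status-quo": 0.1},
    {"coastal-retreat": 0.6, "grid-collapse": 0.05, "status-quo": 0.2},
):
    projections = revise(projections, fragment_likelihoods)

for p in sorted(projections, key=lambda p: -p.weight):
    print(f"{p.label}: {p.weight:.3f}")
```

The point of the sketch is only the shape of the process: every fragment multiplies each candidate future by how well it accounts for that fragment, and candidates that cannot account for the evidence fade rather than being deleted.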

Participation

The volume and complexity of the transmissions exceed what any individual or closed research group can meaningfully interpret.
For this reason, the Inference Engine is participatory.

Participants engage with transmission fragments, reconstructed narratives, and competing projections. Through comparison, annotation, and contribution, they help refine the probability landscape—testing which interpretations remain coherent under increasing amounts of evidence.

Some projections gain provisional status as support accumulates. Others are relegated to low-confidence branches. In rare cases, substantial new material produces a large-scale revision of the inferred future.
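
One way to picture how accumulated support maps onto status is a simple threshold on a projection's probability mass. The cutoffs and tier names below are assumptions made for illustration, not the Engine's actual rules.

```python
# Illustrative sketch only: the thresholds and tier names are assumptions, not the Engine's rules.

PROVISIONAL_CUTOFF = 0.5     # assumed mass above which a projection is treated as provisional
LOW_CONFIDENCE_CUTOFF = 0.1  # assumed mass below which a projection is shelved, not deleted

def classify(weight: float) -> str:
    """Map a projection's accumulated probability mass to a status tier."""
    if weight >= PROVISIONAL_CUTOFF:
        return "provisional"
    if weight >= LOW_CONFIDENCE_CUTOFF:
        return "active"
    return "low-confidence branch"

# Example weights as they might stand after the reweighting loop sketched above.
for label, weight in {"coastal-retreat": 0.62, "grid-collapse": 0.31, "status-quo": 0.07}.items():
    print(label, "->", classify(weight))
```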

Nothing here is final unless the transmissions stop.


The Open Question

The existence of the transmissions raises unresolved problems.
If the future is leaking information into the present, is it fixed—or merely constrained?
If it can change, why does the data appear at all?

And if present-day actions contribute to the conditions described in the fragments, can those conditions be altered—or are attempts to do so already part of the process?

The Inference Engine does not answer these questions.
It exists to make them tractable.

Oculis clausis: nunc videmus (with eyes closed, now we see)

©2025 by arthur schmidt-pabst