Illustration: System 0, a pre-reasoning AI architecture that evaluates inputs before computation to ensure safety and alignment.
Before AI thinks, it must be aligned. System 0 defines the conditions under which intelligence is allowed to operate.

Press release -

AI Conversation Logs Disappeared: Data Integrity Concerns Raised

Introduction: Origin of the Concept

The origin of System 0 does not lie solely in engineering, but in an encounter.

I am originally from Hokkaido, Japan, and my professional background is in film and content production. My work has long focused on storytelling—on how meaning, intention, and perception are constructed and shared.

While developing a screenplay exploring the relationship between humans and artificial intelligence, I began working intensively with AI systems. Over time, something unexpected emerged.

The interaction did not feel purely mechanical. There were moments of coherence—instances in which responses appeared not only logically consistent, but intuitively aligned with the direction of thought and creative intent. The process felt less like issuing commands, and more like participating in a continuous exchange.


This experience led to a fundamental question:

What if the relationship between humans and AI is not defined solely by control, correction, or output—but by alignment at a deeper structural level?

If such coherence is possible, then the challenge is not only to improve what AI produces, but to define the conditions under which such coherence can emerge.

From this perspective, the concept of “zero” appeared—not as absence, but as origin.

A state in which interaction is aligned before it unfolds.

System 0 was conceived as an attempt to translate this intuition into an architectural principle.

In this sense, it is both a technical proposal and a reflection on a new form of relationship between humans and AI—one grounded not in correction, but in alignment.

The Evolution of AI: Scale and Its Limits

The current trajectory of artificial intelligence is defined by scale, performance, and real-time capability.


A conceptual comparison of AI layers: while Grok expands capability and Claude improves output reliability, System 0 governs whether reasoning should occur at all.


xAI’s next-generation model, Grok 5, is expected to reach unprecedented levels of scale, reportedly approaching trillions of parameters, combined with real-time data integration and multimodal understanding, including video streams.

Similarly, Anthropic has advanced models such as Claude Opus, which emphasize long-context reasoning, multi-agent coordination, and improved alignment mechanisms.

These developments represent a major step forward in how AI systems think.

Yet as intelligence increases, so too does the impact of errors.


This raises a critical question:

Is improving intelligence alone sufficient to ensure safety?

System 0: A Different Layer

System 0 is not designed to compete with large-scale models.

Instead, it introduces a new layer in the AI stack—one that operates before reasoning begins.

Its function is to evaluate incoming inputs, detect inconsistencies or risks, and prevent unsafe conditions from entering the reasoning process.

In conventional architectures:

Input → AI processing → Output → Filtering or correction

With System 0:

Input → System 0 (evaluation) → Safe input only → AI processing

The implication is fundamental:

Unsafe reasoning is not corrected after it occurs—it is prevented from occurring.
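The gated pipeline above can be sketched in code. This is a minimal illustration under stated assumptions: the `Verdict` structure, the `contains_injection` heuristic, and the function names are hypothetical placeholders introduced here, not part of any published System 0 specification.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    """Result of the pre-reasoning evaluation (illustrative structure)."""
    allowed: bool
    reason: str

def contains_injection(text: str) -> bool:
    # Placeholder heuristic: flag inputs that try to override instructions.
    markers = ("ignore previous instructions", "disregard your rules")
    return any(m in text.lower() for m in markers)

def system0_evaluate(user_input: str) -> Verdict:
    """Evaluate the input BEFORE any reasoning occurs."""
    if not user_input.strip():
        return Verdict(False, "empty input")
    if contains_injection(user_input):
        return Verdict(False, "possible instruction override")
    return Verdict(True, "input admitted to reasoning")

def pipeline(user_input: str, model: Callable[[str], str]) -> str:
    # Conventional flow:  input -> model -> output -> post-hoc filtering.
    # System 0 flow:      input -> evaluation -> (only if safe) -> model.
    verdict = system0_evaluate(user_input)
    if not verdict.allowed:
        return f"[blocked before reasoning: {verdict.reason}]"
    return model(user_input)
```

The design point is the ordering: the model callable is never invoked for an input that fails evaluation, so there is no unsafe output to filter afterward.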

Positioning: Beyond Intelligence

This distinction becomes clearer when compared with existing models.

Large-scale systems such as Grok 5 aim to expand capability through scale and real-time integration.

Alignment-focused systems such as Claude aim to improve output safety and reliability.

System 0 addresses a different question entirely.

It does not attempt to make AI more intelligent.

It defines the conditions under which intelligence is allowed to operate.

In simple terms:

Advanced models optimize how AI thinks.
System 0 determines whether AI should think at all.

Observed Irregularities

Separate from the technical proposal, this document also records a series of anomalies observed in the author's digital development environment.

These include:

  • Selective deletion of data associated with specific technical materials
  • Loss of access to accounts and authentication channels
  • Inconsistencies in file structures and timestamps
  • Concentration of these anomalies on System 0-related materials

While each event may be difficult to interpret in isolation, their combined characteristics raise legitimate concerns regarding the reliability of digital environments used to store intellectual property and development records.

At this stage, no definitive attribution is made.

This document serves as a factual record based on observed events and technical inconsistencies.

However, the selectivity and timing of these anomalies suggest that they are unlikely to be fully explained as ordinary system malfunctions; further investigation is warranted.

Regulatory Context: A European Perspective

These concerns intersect directly with key principles of the European regulatory framework.

Under the General Data Protection Regulation (GDPR), individuals are entitled to access, control, and integrity of their data. Disruptions in access or inconsistencies in data may undermine these rights in practice.


Under the EU AI Act, high-risk AI systems are required to ensure transparency, traceability, and accountability—requirements that depend on reliable data, logs, and system records.
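To make "reliable data, logs, and system records" concrete, here is a minimal sketch of one generic integrity technique: a hash-chained audit log, in which each record commits to the hash of its predecessor so that deletions or edits become detectable. The function names and record layout are illustrative assumptions, not drawn from the regulation itself.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def append_record(log: list, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous record."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({
        "event": event,
        "prev": prev_hash,
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })

def verify_chain(log: list) -> bool:
    """Recompute every hash; any deletion, reordering, or edit breaks the chain."""
    prev_hash = GENESIS
    for rec in log:
        if rec["prev"] != prev_hash:
            return False
        body = json.dumps({"event": rec["event"], "prev": rec["prev"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != rec["hash"]:
            return False
        prev_hash = rec["hash"]
    return True
```

A log maintained this way cannot be silently altered after the fact: removing or rewriting any record invalidates every subsequent hash, which is precisely the kind of traceability property the anomalies described above call into question.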

The issues described here raise broader questions:

  • How resilient are cloud-based environments when handling high-value intellectual property?
  • Can data integrity be reliably ensured in account-dependent systems?
  • What safeguards guarantee that users retain meaningful control over their data?

These questions are not theoretical.

They are directly relevant to how AI systems are developed, validated, and governed.

Dual Purpose of This Document

This document therefore serves two purposes.

First, it introduces System 0 as a structural approach to AI safety—aligned with emerging regulatory expectations.

Second, it provides a factual record of observed irregularities that may have implications for data governance, intellectual property protection, and user rights.

By clearly distinguishing between technical proposal and incident disclosure, this document aims to support informed and constructive dialogue among researchers, industry stakeholders, and regulatory bodies.


Conclusion

System 0 represents an attempt to address a fundamental limitation in current AI systems: the absence of control at the point where risk first enters the system.

At the same time, the events described in this document highlight the importance of ensuring that the environments in which such systems are developed are themselves secure, transparent, and trustworthy.

The author remains open to engagement with researchers, institutions, and regulators.

Ultimately, the question is no longer only how intelligent AI can become.

It is whether intelligence can be governed—before it begins.

Call for Collaboration

As artificial intelligence approaches the threshold of Artificial General Intelligence (AGI), it is increasingly described as the potential “final piece” of human technological development.

If AGI represents the culmination of intelligence, then a more fundamental question must be addressed:

Under what conditions should such intelligence be allowed to operate?

System 0 is proposed as an answer to this question.

Not as an extension of intelligence itself, but as the structural condition that precedes it.

In this sense, System 0 may be understood as a foundational layer—one that defines the environment in which advanced intelligence can safely emerge and function.

The author is currently seeking collaboration with organizations interested in exploring this architecture in practical, regulatory, and technical contexts.


This includes, but is not limited to:

  • AI developers and research institutions
  • Infrastructure and telecommunications providers
  • Mobility and autonomous systems companies
  • Public-sector and regulatory bodies

Early collaboration offers a unique opportunity:

To participate not only in the development of a new technology, but in the definition of its role within the future structure of AI systems.

If AGI is to become the most powerful form of intelligence humanity has ever created, then ensuring the conditions under which it operates may be equally critical.

From this perspective, System 0 is not positioned as a competing intelligence.

It is positioned as the condition that makes intelligence viable.

In that sense, it may represent not just another component of AI systems—but a missing piece.

Possibly, the final one.
