Model Autophagy
This domain serves as a reference point for examining how artificial intelligence systems increasingly consume, recycle, and train on their own generated outputs rather than on independently sourced human data.
Model autophagy does not occur through a single failure or error.
It emerges gradually through synthetic data loops, automated content generation, large-scale retraining processes, and the replacement of original information with machine-produced material.
In such environments, models may begin to degrade, distort knowledge, reinforce errors, and lose alignment with external reality.
This site does not advocate a technical solution or research agenda.
It does not provide tools, diagnostics, or predictions.
Its purpose is to mark a systemic phenomenon that is already unfolding, often without a unified conceptual frame, across AI development pipelines, data ecosystems, and digital infrastructures.
This page is intentionally minimal.
It exists to give the term Model Autophagy a stable place to stand.