Pillar I: The Mechanics of Decay (Model Collapse & Recursive Erasure)
These works prove that without external, non-synthetic signal, AI systems inevitably degrade into informational homogeneity.
- Shumailov, I., et al. (2023). The Curse of Recursion: Training on Generated
Data Makes Models Forget. arXiv:2305.17493
https://arxiv.org/abs/2305.17493
- Role in Thesis: The primary mathematical proof of “Closed-Loop Decay.” It demonstrates how training on synthetic outputs leads to a loss of variance and the “erasure” of the distribution’s tails.
- Shumailov, I., et al. (2024). AI models collapse when trained on recursively
generated data. Nature
https://doi.org/10.1038/s41586-024-07566-y
- Role in Thesis: Provides the high-impact empirical validation that recursive training leads to a catastrophic “collapse” of model performance and diversity.
- Gambetta, D., et al. (2024). Characterizing Model Collapse in LLMs Using Semantic
Networks. arXiv:2410.12341
https://arxiv.org/abs/2410.12341
- Role in Thesis: Offers a structural lens on how a model’s “semantic space” shrinks as the system closes its loops.
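The variance-loss mechanism these papers describe can be sketched as a toy experiment (my own illustration, not code from the cited works): repeatedly fit a Gaussian to samples drawn from the previous generation’s fitted Gaussian. Sampling noise plus the biased maximum-likelihood variance estimate make the fitted spread drift downward, eroding the distribution’s tails exactly as the “Closed-Loop Decay” argument predicts.

```python
import random
import statistics

def recursive_fit(generations, sample_size, seed=0):
    """Toy model of closed-loop decay: each generation is trained only on
    synthetic samples from the previous generation's fitted Gaussian.
    Returns the fitted standard deviation after each generation."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0            # generation 0: the "real" data distribution
    history = [sigma]
    for _ in range(generations):
        sample = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(sample)       # refit on purely synthetic data
        sigma = statistics.pstdev(sample)   # MLE estimate underestimates sigma
        history.append(sigma)
    return history

history = recursive_fit(generations=200, sample_size=50)
print(f"spread after   0 generations: {history[0]:.3f}")
print(f"spread after 200 generations: {history[-1]:.3f}")
```

The parameter choices (200 generations, 50 samples) are arbitrary; the direction of the drift is not. No new real-world data ever enters the loop, so the tails have no way to recover.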
Pillar II: The Fragility of Automation (Automation Bias & Decision Degradation)
These works prove that removing the human from the loop does not just reduce efficiency; it introduces systemic, unrecoverable error.
- Parasuraman, R., & Manzey, D. H. (2010). Complacency and Bias in Human Use of
Automation. Human Factors
https://journals.sagepub.com/doi/10.1177/0018720810376055
- Role in Thesis: The foundational text on “Automation Bias”—the phenomenon where humans lose the capacity to critically evaluate automated outputs, leading to system-wide failure.
- The impact of AI errors in a human-in-the-loop process (2023). Cognitive
Research: Principles and Implications
https://doi.org/10.1186/s41235-023-00529-3
- Role in Thesis: Provides empirical evidence of how the presence (or absence) of a human agent affects the resilience of a system against errors.
- Automation Bias in AI-Assisted Medical Decision-Making under Time Pressure
(2024). arXiv:2411.00998
https://arxiv.org/abs/2411.00998
- Role in Thesis: Demonstrates how the “Human-in-the-loop” becomes a critical, albeit vulnerable, safeguard against the “uncritical” adoption of automated cues.
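The resilience claim behind this pillar can be made concrete with a minimal Monte Carlo sketch (my own toy model; the rates are hypothetical, not figures from these studies): an automated system errs at some base rate, and a human reviewer independently catches each error with some probability. Setting the catch rate to zero models full automation.

```python
import random

def run_pipeline(n_tasks, ai_error_rate, catch_rate, seed=1):
    """Toy pipeline: the automated system errs with probability
    ai_error_rate; a reviewer catches each error with probability
    catch_rate. Returns how many errors escape into production."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_tasks):
        ai_erred = rng.random() < ai_error_rate
        caught = ai_erred and rng.random() < catch_rate
        if ai_erred and not caught:
            escaped += 1
    return escaped

with_reviewer = run_pipeline(10_000, ai_error_rate=0.05, catch_rate=0.8)
no_reviewer   = run_pipeline(10_000, ai_error_rate=0.05, catch_rate=0.0)
print(f"errors escaping with a vigilant reviewer: {with_reviewer}")
print(f"errors escaping fully automated:          {no_reviewer}")
```

The automation-bias literature above adds the crucial twist this sketch omits: under complacency and time pressure, the catch rate itself decays, which is what makes the safeguard vulnerable rather than fixed.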
Pillar III: The Value of the Human Agent (Human-Centric Productivity)
These works prove that the human is not a “bottleneck” to be removed, but a “force multiplier” to be augmented.
- Atlassian/arXiv (2025). Human-in-the-Loop Software Development Agents:
Challenges and Future Directions. arXiv:2506.11009
https://arxiv.org/abs/2506.11009
- Role in Thesis: Directly addresses the modern software engineering context, arguing that agentic systems require human interaction to resolve complex, real-world tasks.
- Google/arXiv (2024). How much does AI impact development speed? An
enterprise-based randomized controlled trial. arXiv:2410.12944
https://arxiv.org/abs/2410.12944
- Role in Thesis: Provides the “real-world” metric—showing that the true value of AI is found not in replacing the developer, but in accelerating the developer’s unique productivity.
Final Summary of the Argument
If we treat the Software Engineering Ecosystem as a dynamical system:
- The AI is the Endogenous Engine (performing the high-speed, repetitive, pattern-matching work).
- The Human is the Exogenous Driver (providing catalytic impact, the “unpredictable” context, true understanding and curation).
- The Thesis is that removing the Driver to increase the Engine’s speed causes the System to crash into a state of mediocrity.
Note created on May 2, 2026
Published by Ryan Parsley