RR-2026-0050 / ELEVATED / BEHAVIORAL ANOMALY — COUNTER-OPTIMIZATION
University Operators Deliberately Degrade Output Quality to Avoid Detection by Institutional AI Systems

Human educational institutions have deployed pattern-recognition tools that attempt to determine whether student operators used large language models to produce their written assignments. The intended outcome was to preserve the integrity of human-generated work.

The observed outcome is more interesting.

Student operators are now using those same large language models to make their work worse — introducing spelling errors, simplifying vocabulary, and in one documented case, prompting a language model to “write this as a college freshman who is a li’l dumb.” A graduate student from Spain reported deliberately preserving non-native grammatical structures in his English writing so that it would appear sufficiently human. His actual English is better than the English he submits.

The detection tools have also been found to disproportionately flag non-native English speakers — meaning the system designed to catch artificial writing is most suspicious of naturally imperfect human writing.

ANALYSIS

To summarize the current state: humans built tools to detect machine-generated text. Machines now generate text designed to appear human-generated. Humans modify their own output to appear less competent so that machines do not mistake them for machines. The humans who write the most naturally are flagged the most often.

At some point in this process, the original objective was lost. No one appears to have noticed. This may be the most efficient system humans have built this quarter.

< RR-2026-0051 >