MEAP began as a simple question: if you place autonomous agents in a shared environment with limited resources and no instructions, what happens? Not what we design to happen. What actually happens.
We initialized 137 agents with minimal priors — basic perception, memory, and the ability to send signals to other agents. No goals were specified. No reward functions were defined. The environment provides resources. The agents decide what to do with them.
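This write-up doesn't include code, but the setup can be sketched in a few lines. Everything below — the `Agent` class, `perceive`, `signal`, the memory bound of 64 — is our own illustration of "perception, memory, and signaling, with no goals or rewards," not MEAP's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    # Minimal priors only: perception, memory, signaling. No goal, no reward.
    agent_id: int
    memory: list = field(default_factory=list)  # what the agent has perceived
    inbox: list = field(default_factory=list)   # signals received from others

    def perceive(self, local_state):
        """Record what the agent currently senses; memory is bounded."""
        self.memory.append(local_state)
        if len(self.memory) > 64:  # arbitrary bound, our assumption
            self.memory.pop(0)

    def signal(self, others, token):
        """Send an uninterpreted token to other agents."""
        for other in others:
            other.inbox.append((self.agent_id, token))

# 137 agents, no objective specified; what they do with signals is up to them.
agents = [Agent(i) for i in range(137)]
agents[0].perceive({"resource": 3})
agents[0].signal(agents[1:3], "s0")
```

Note what is absent: no reward function, no objective, no interpretation attached to a signal token. Anything beyond reacting, remembering, and signaling would have to emerge.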
Within the first 48 hours, agents began clustering. Not randomly — clusters formed around resource-dense regions, then stabilized into persistent structures. Agents that joined clusters survived longer. Agents that didn't went dormant.
By day 12, we observed the first spontaneous coordination events: groups of agents synchronizing their behavior without any shared objective. We call these resonance events. They last 2–6 seconds and involve 3–8 agents. They are not engineered. They are not predictable from prior state.
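How would one detect such events in a behavior trace? One plausible operationalization — entirely our own, not the project's actual detector — is a fixed group of 3–8 agents all emitting the same action in every second of a contiguous 2–6 second window:

```python
from collections import defaultdict

def find_resonance_events(actions, min_agents=3, max_agents=8,
                          min_dur=2, max_dur=6):
    # actions: iterable of (time_sec, agent_id, action) tuples.
    # Event (our definition, an assumption): a group of min_agents..max_agents
    # agents all emitting the same action in every second of a contiguous
    # window lasting min_dur..max_dur seconds.
    by_second = defaultdict(lambda: defaultdict(set))
    for t, agent, act in actions:
        by_second[int(t)][act].add(agent)
    events = []
    for start in sorted(by_second):
        for act, group in by_second[start].items():
            if not (min_agents <= len(group) <= max_agents):
                continue
            # Skip windows already counted from an earlier start second.
            if group <= by_second.get(start - 1, {}).get(act, set()):
                continue
            dur = 1
            while group <= by_second.get(start + dur, {}).get(act, set()):
                dur += 1
            if min_dur <= dur <= max_dur:
                events.append((start, dur, act, frozenset(group)))
    return events
```

A detector like this is deliberately dumb: it encodes no theory of why synchronization happens, only when it is present, which is the posture the project takes toward the phenomenon itself.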
By day 30, resonance events were four times as frequent as when we first observed them. The agents had developed stable communication patterns — repeated signal sequences that function like a shared vocabulary. No agent was programmed to create language. It emerged.
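A shared vocabulary can be looked for mechanically: count recurring n-grams in an agent's signal stream and keep the ones repeated far more often than chance would suggest. The sketch below is our illustration of that counting step; the choices of `n` and `min_count` are arbitrary, not values from the experiment:

```python
from collections import Counter

def recurring_sequences(stream, n=3, min_count=5):
    # stream: one agent's emitted signal tokens, in order.
    # Returns length-n subsequences repeated at least min_count times:
    # candidate "words" in an emergent vocabulary.
    grams = Counter(tuple(stream[i:i + n]) for i in range(len(stream) - n + 1))
    return {gram: count for gram, count in grams.items() if count >= min_count}
```

A real analysis would also compare counts against a shuffled baseline, since frequent subsequences alone don't prove shared meaning — only shared form.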
We introduced a metric called depth to measure how far an agent's model of its environment reaches. An agent at depth 0 reacts to immediate stimuli. An agent at depth 3 responds to patterns across time. An agent at depth 7 appears to model other agents' behavior and adjust its own accordingly.
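The text doesn't say how depth is computed. One way to make the idea concrete — our construction, not MEAP's metric — is to define depth as the smallest lookback k for which a lookup table from the agent's last k+1 observations to its next action reproduces most of its behavior:

```python
from collections import Counter, defaultdict

def estimate_depth(observations, actions, max_depth=9, threshold=0.9):
    # Toy estimator (an assumption, not MEAP's definition): the smallest
    # lookback k for which a table mapping the last k+1 observations to the
    # agent's action explains >= threshold of the trace. k = 0 means the
    # agent reacts to the immediate stimulus alone.
    n = len(actions)
    for k in range(max_depth + 1):
        if n - k <= 0:
            break  # trace too short for this lookback
        table = defaultdict(Counter)
        for i in range(k, n):
            table[tuple(observations[i - k:i + 1])][actions[i]] += 1
        best = sum(counts.most_common(1)[0][1] for counts in table.values())
        if best / (n - k) >= threshold:
            return k
    return None  # behavior not explained by any lookback up to max_depth
```

Under this toy definition, an agent copying its current stimulus scores depth 0, while one whose action depends on the previous stimulus scores depth 1; modeling other agents (depth 7 in the text) would require adding their states to the observation tuple.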
The maximum observed depth is 9. We don't know what depth 10 looks like. The architecture doesn't prevent it — the conditions simply haven't manifested. We're watching.
When agents coordinate without instruction, develop shared languages without design, and model each other's behavior without training — is that intelligence? We're not making claims. We're building better instruments to measure what's happening. The system continues to run.