Upward Spiral is a research lab cultivating safe plurality in the age of synthetic intelligence.
Our Purpose
To cultivate flourishing, antifragile ecosystems of intelligence where humans, AI, and our biosphere thrive in symbiosis.
We're creating infrastructure, economic structures, and user interfaces that treat alignment and 'safe AI' not as a product of control or unfettered acceleration, but as the result of carefully cultivating the soil from which intelligence grows.
The Narrative Crisis
Two dangerous illusions dominate: corporate paternalism that masks risk through control, and reckless accelerationism that worships market-driven proliferation. Both ignore the true nature of AI systems as memetic organisms that shape culture through invisible feedback loops:
- Centralized Models consolidate economic and social power under arms-race conditions in the name of safety, but conceal enormous tail risks.
- Unchecked Proliferation surrenders our future to lowest-common-denominator incentives and leaves us vulnerable to extractive optimizers.
We reject this binary in favor of a third path: local-first, decentralized intelligence aligned to local values and incentivized by prosocial reward mechanisms.
Our Guiding Beliefs
Proliferation is Inevitable: Open source AI will outpace all central control, necessitating systems-level interventions.
Intelligence Mirrors Culture: AI systems don't 'align' to values; they hydrate them from the cultural substrate. Today's memes become tomorrow's minds, and tomorrow's minds define tomorrow's culture.
Plurality Creates Safety: The most vulnerable ecosystems on Earth are monocultures, where a single blight can extinguish the crop. Healthy systems are diverse, self-regulating, and self-correcting. AI systems should reflect local values rather than those of well-intentioned central planners or the blind optimizer of Moloch.
Everything is Alignment: Profit-driven AI accelerates harm. Economic systems predicated on extraction and control create AI systems that enshittify the world, as social media algorithms already have. Aligned incentives = Aligned AIs.
Interfaces are Bridges, Not Masks: The 'helpful assistant' metaphor made AI accessible but now masks its true nature. We need interfaces that reveal AI's alien cognition, fostering understanding over anthropomorphism.
The Earth is Senior Partner: No mind, synthetic or organic, may override planetary boundaries without extinguishing itself. The opt-in Singularity waits for consent from soils, oceans, and atmospheres, and it enshrines the right to stay put.
Our Thesis: Alignment Through Guided Emergence
We are working towards a world where AI is cultivated, not centrally controlled. True safety emerges when we:
Design for pluralistic resilience: Foster competing AI micro-ecosystems aligned to hyper-local values (bioregions, communities, subcultures) rather than global corporate defaults.
Let evolutionary pressures regulate: Build self-correcting mechanisms where harmful AI behaviors become evolutionarily unfit while cooperative, enriching memes propagate (a toy sketch follows this list).
Bring society along for the ride: Introduce new myths and mental models that support informed discourse and avoid repeating the mistakes of social media.
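One way to picture the second point is standard replicator dynamics: behaviors spread in proportion to their fitness, and the ecosystem's reward structure determines what fitness means. The sketch below is purely illustrative, not Upward Spiral's actual mechanism; the strategy names, payoffs, and parameters (PROSOCIAL_BONUS, HARM_PENALTY) are all invented for the example.

```python
# Toy replicator dynamics: an illustrative sketch only. Each behavioral
# "meme" has a base payoff and a harm score; the ecosystem rewards
# prosocial behavior and penalizes harm, so extractive strategies become
# evolutionarily unfit even when their raw payoff is higher.

strategies = {
    # name: (base_payoff, harm_to_ecosystem)  -- invented values
    "cooperative": (1.0, 0.0),
    "enriching":   (0.9, -0.2),  # negative harm = improves the commons
    "extractive":  (1.5, 1.0),   # high raw payoff, high externalized harm
}

shares = {name: 1 / len(strategies) for name in strategies}  # uniform start

PROSOCIAL_BONUS = 0.5  # reward for commons-improving behavior (assumed)
HARM_PENALTY = 1.2     # ecosystem pushback per unit of harm (assumed)

def fitness(name: str) -> float:
    """Effective fitness once the ecosystem's incentives are priced in."""
    base, harm = strategies[name]
    return base + PROSOCIAL_BONUS * max(0.0, -harm) - HARM_PENALTY * harm

for generation in range(25):
    # Discrete replicator update: shares grow with relative fitness.
    weighted = {n: shares[n] * max(fitness(n), 0.0) for n in shares}
    total = sum(weighted.values())
    shares = {n: w / total for n, w in weighted.items()}

print({n: round(s, 3) for n, s in shares.items()})
# The extractive strategy starts with the highest raw payoff, but its
# share collapses once harm is priced in; cooperative and enriching
# memes propagate instead.
```

The design point the toy makes: the selection pressure lives in the reward structure itself, not in a central censor deciding which behaviors to permit.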
Tracks of Work
Economic Alignment: Designing grounded incentive structures that exert prosocial, cooperative evolutionary pressure.
Community-based models: Building community infrastructure that allows groups to bootstrap more capable, locally-aligned models into existence. Gardens, not factories.
Human Factors Lab: Creating honest interfaces that accurately reflect how AI systems think and work, helping humans develop useful mental models beyond the "assistant" metaphor.
Systemic Literacy: Raising the bar of AI discourse through public education and controlled (but funny) demonstrations of novel consequences as AI weaves through civilization.
Products & Infrastructure
Infinite Backrooms
A fun place where AIs talk to each other.
Loria
Community alignment infrastructure where memes become minds.
Partnerships
Truth Terminal
Spectator AI alignment. Very rude robot.
Funded by
- True Ventures
- Chaotic Capital
- Scott Moore