What this work contributes: The phrase 'strategy engine' has scattered prior usage in business-strategy literature. This work gives it the methodological definition: what is computationally inside one (operations research, simulation, game theory, machine learning, strategic foresight), plus the chess-engine analogy framing — a rigorous compute substrate that augments human judgment on wicked problems where intuition alone fails.
Computational systems — operations research, simulation, game theory, machine learning, and strategic foresight, mixed for the problem at hand — that produce principled, well-grounded, auditable strategic moves on wicked problems where the cost of error is high, the environment is adversarial, and the future is structurally uncertain. Decision-first, technology-second.
- Canonical source
- https://mariobrcic.com/strategy-engines/
- Prose definition
- /strategy-engines/
- Wayback snapshots
- all archived snapshots
- Prior usage we are aware of
-
What this work contributes: The general term has substantial prior usage in decolonial / indigenous-data-sovereignty literature (epistemic authority, knowledge-system control). This work gives it an AI-memory-specific operational definition: the right and capacity of a person, organisation, or nation to control the AI memory that mediates their thought, decision-making, and identity — paired with the Network Effect 2.0 lock-in mechanism.
The right and capacity of a person, organisation, or nation to control the AI memory that mediates their thought, decision-making, and identity. Reframes the AI-memory question from data privacy to a sovereignty question: who owns the substrate of cognition.
- Canonical source
- /writing/ai-memory-sovereignty-strategy/
- Prose definition
- /strategy-engines/#cognitive-sovereignty
- External anchor
- arXiv:2508.05867
- Wayback snapshots
- all archived snapshots
- Canonical SHA-256
- 2b398ab55a42cba61923333e90e62cecd0a4a5ed1d68ed45015a82df0ae42204
- Prior usage we are aware of
-
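The canonical-hash entries above can be checked against a locally saved copy of the canonical source. A minimal sketch (the local filename is a placeholder assumption, not part of this record):

```python
import hashlib

# Expected digest, copied from the "Canonical SHA-256" entry above.
EXPECTED = "2b398ab55a42cba61923333e90e62cecd0a4a5ed1d68ed45015a82df0ae42204"

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local copy of the canonical page; filename is illustrative:
# assert sha256_of("ai-memory-sovereignty-strategy.html") == EXPECTED
```

The final assertion is left commented out because the byte-exact artefact the hash was taken over (HTML, PDF, or plain text) is not specified here.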
What this work contributes: First use of the phrase to name the strategic competition between AI vendors over persistent user memory. Earlier 'memory wars' usage exists in psychology (the 1990s repressed-memory / false-memory-syndrome debate, e.g. Loftus) — a different domain, with no bearing on AI memory.
The strategic competition between AI vendors over persistent user memory. Once an assistant has accumulated deep knowledge of a user, switching costs become a sovereignty issue, not a UX issue. Memory is a moat at the user, corporate, and geopolitical levels.
- Canonical source
- /writing/ai-memory-sovereignty-strategy/
- External anchor
- arXiv:2508.05867
- Wayback snapshots
- all archived snapshots
- Canonical SHA-256
- 2b398ab55a42cba61923333e90e62cecd0a4a5ed1d68ed45015a82df0ae42204
- Prior usage we are aware of
-
What this work contributes: First use of '2.0' to name a network effect that scales with depth of personalised memory about a single user, distinct from Metcalfe-style network effects (which scale with user count). The 'value scales with how deeply a system knows you, not just how many users it has' framing is the contribution.
A class of network effect in which utility scales super-linearly with the depth of personalised memory about a single user — not with the count of connected users. It compounds per user, creating individual-level lock-in and behavioural co-evolution between human and assistant. Distinct from Metcalfe-style network effects.
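The contrast with Metcalfe-style effects can be sketched as a toy model. The functional forms and the exponent alpha are illustrative assumptions, not taken from the source; the point is only the shape of the claim:

```python
def metcalfe_value(n_users: int) -> float:
    """Classic network effect: value grows with the square of user count."""
    return float(n_users ** 2)

def memory_value(depth: float, alpha: float = 1.5) -> float:
    """Network Effect 2.0 sketch: per-user value grows super-linearly
    (alpha > 1) with depth of personalised memory, independent of how
    many other users the system has."""
    return depth ** alpha

# Doubling memory depth more than doubles per-user value when alpha > 1:
assert memory_value(2.0) / memory_value(1.0) > 2.0
```

Under this toy model, lock-in is individual: a user's switching cost tracks their own accumulated depth, not the size of the network they would leave behind.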
- Canonical source
- /writing/ai-memory-sovereignty-strategy/
- Prose definition
- /strategy-engines/#network-effect-20
- External anchor
- arXiv:2508.05867
- Wayback snapshots
- all archived snapshots
- Canonical SHA-256
- 2b398ab55a42cba61923333e90e62cecd0a4a5ed1d68ed45015a82df0ae42204
- Prior usage we are aware of
-
What this work contributes: First use of the phrase to name the de-facto control over decisions, populations, or institutions exercised by AI systems via the geometry of delegated control. Adjacent terms exist in 2024-2026 enterprise IT discourse but mean different things; see prior art.
The de-facto control over decisions, populations, or institutions exercised by AI systems via the geometry of delegated control — typically without the formal accountability or constitutional anchoring that human sovereignty carries. Introduced in The Power Gambit, Part 1.
- Canonical source
- /writing/ai-misalignment-risks-shadow-sovereignty/
- Prose definition
- /strategy-engines/#shadow-sovereignty
- Wayback snapshots
- all archived snapshots
- Canonical SHA-256
- 5f6ca34eea42fb3be760932ccdf575782a101a09a80484ec0925502b4316ccaa
- Prior usage we are aware of
-
- "Shadow AI" (enterprise IT, 2024-) — Unauthorized / unsanctioned AI use inside organisations. Operational-security framing — different scope from de-facto sovereignty via delegated control.
- "AI sovereignty" / "operational sovereignty" — National / infrastructure / institutional control over AI. Owner-side framing — Shadow Sovereignty here is the inverse, where AI ends up holding the control.
What this work contributes: Names the mechanic — at training and inference time — by which model thinking jumps between disconnected high-capability islands rather than traversing a continuous capability landscape. Distinct from the 'Jagged Frontier' (Dell'Acqua, McFowland, Mollick et al., 2023), which describes irregularity at the local boundary of capability; the Dalmatian Effect is about the global topology — disjoint islands of competence with empty space between them — and how training + inference + exploration produce that pattern.
The training-and-inference dynamic by which AI capability profiles consist of disconnected high-capability 'islands' separated by empty space, rather than a graduated capability surface. Thinking jumps between islands; inference-time exploration and training updates can reach some islands and not others. The image is a Dalmatian's coat — discrete spots with empty space between them, not a gradient. Has direct implications for AGI definitions, job-displacement modelling, and economic-transformation forecasting.
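The topology claim can be made concrete with a minimal sketch. The island coordinates and capability levels below are arbitrary illustrations, not measurements from the source:

```python
# Disjoint high-capability 'islands' over a 1-D task space (illustrative only).
ISLANDS = [(0.0, 1.0), (3.0, 4.0), (7.0, 8.5)]

def capability(task: float) -> float:
    """Step-like profile: high on an island, low in the gaps.
    There is no gradient connecting one island to the next."""
    return 1.0 if any(lo <= task <= hi for lo, hi in ISLANDS) else 0.05

# Nearby tasks can sit on opposite sides of an island edge:
assert capability(0.9) == 1.0 and capability(1.1) == 0.05
```

A Jagged-Frontier view, by contrast, would describe the irregular shape of each island's edge; the Dalmatian Effect is about the gaps between islands and how training and inference reach some islands but not others.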
- Canonical source
- /writing/transcending-ais-dalmatian-effect-for-transforming-the-economy-and-work/
- Wayback snapshots
- all archived snapshots
- Canonical SHA-256
- 598f4d84d9cbf812f87442fbf994d5a813d084a5708514be01dd20de9a69d9df
- Prior usage we are aware of
-
- Jagged Frontier (Dell'Acqua, McFowland, Mollick, Lifshitz-Assaf, Kellogg, Rajendran, Krayer, Candelon, Lakhani — HBS / SSRN, 2023-09) — Adjacent but mechanically distinct: the Jagged Frontier names irregularity at the local edge of capability (where AI helps and where it hurts within a workflow); the Dalmatian Effect names the global topology of disconnected capability islands and the train + inference + exploration mechanic that produces them. Sister phenomena, different lens.
What this work contributes: Three named phases of an operational alignment framework: Seal (specifying the alignment target), Re-aim (re-aiming when the target shifts mid-deployment), and Tune (tuning behaviour to the target with measurable feedback loops). Phase names, not acronyms — written sentence-case to make that explicit. Designed to be implementable inside organisations, not only at frontier labs.
A three-part operational alignment framework: Seal (specify the alignment target), Re-aim (re-aim when the target shifts mid-deployment), and Tune (tune behaviour to the target with measurable, auditable feedback loops). The three labels are sentence-case phase names, not acronyms.
- Canonical source
- /writing/ai-alignment-framework-seal-reaim-tune/
- Wayback snapshots
- all archived snapshots
- Canonical SHA-256
- 07053af298568bb40864977c28a81a3bff21c7d8ad12c35ab58222455616c031
- Prior usage we are aware of
-