What if your AI agent doesn’t actually work for you? A field report from a Harvard summit on the agentic future of news: when personal AI agents become the primary gatekeepers, whoever shapes the agent’s “model of your intentions” becomes the most powerful editorial force in history. It’s scarier than the attention economy. (Lars Adrian Giske)
- A Harvard summit on AI and the information economy argues we are moving from an "attention economy" to an "intention economy," where personal AI agents act as persistent gatekeepers between people and information, negotiating on their behalf across a mesh of institutional agents via a new "diplomacy layer."
- The deepest risk identified is not that agents give you bad information, but that they subtly reshape what you want over time — with no visible manipulation, and no gap between a corrupted preference signal and the consequential action taken on it.
- The report raises but does not resolve the democratic implications: if agents can aggregate citizen intent in real time, the technical justification for representative democracy starts to look shaky, and whoever builds the agent infrastructure fills that vacancy by default.