Jacob Lauritzen

CTO of Legora, a vertical AI collaborative workspace for law firms; advocate for high-bandwidth human-agent collaboration interfaces over chat.

Last updated: 2026-04-23

Overview

Jacob Lauritzen is the CTO of Legora, a collaborative AI workspace for law firms with 1,000+ customers across 50+ markets. He comes from the vertical AI space — building AI products that must handle genuinely complex, high-stakes work end-to-end — and his main contribution to the agent design conversation is the argument that chat is the wrong interface for complex agent work.

His practical context: the current state of long-running agent UX is broken. An agent works for 30 minutes, produces a contract, the human spots a problem in clause three and asks for a fix, and the agent hits context compaction: it either changes everything or forgets everything. There is no good way to do a surgical, high-trust review when the agent’s work is locked behind a chat interface.

Key Ideas

  • Trust vs. Control as two axes: Trust = how reliably the agent does the right thing (can I let it run unsupervised?). Control = how effectively I can steer the agent’s approach at any point (can I impose my judgment mid-task?).
  • Work as a DAG: Complex agent work forms a DAG of subtasks (in practice, usually close to a tree). Control is fundamentally about where in that graph humans can impose judgment. Chat collapses the structure into a single linear thread: low control.
  • Skills as the best control mechanism: Skills encode human judgment at specific nodes of the work tree. Unlike planning (which requires specifying everything upfront) and elicitation (which interrupts the agent mid-run), skills handle contingencies through progressive discovery.
  • Decision log pattern: Rather than blocking on every uncertain decision, agents should decide, unblock themselves, and write their decisions to a log. Humans review the log afterwards and reverse decisions that were wrong.
  • High-bandwidth artifacts over chat: Humans and agents should collaborate in persistent, domain-specific artifacts — documents with inline comments, tabular review interfaces — not chat threads. Language as input is fine; language as the primary collaboration surface is not.
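The tree-of-subtasks view can be made concrete. Below is a minimal sketch (names like `TaskNode` and `attached_skills` are illustrative, not Legora’s API): skills attach to specific nodes, so control means choosing where in the graph human judgment is encoded, whereas a chat thread offers only one linear attachment point.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskNode:
    """One subtask in the agent's work graph (simplified here to a tree)."""
    name: str
    skill: Optional[str] = None  # human judgment encoded at this node, if any
    children: list["TaskNode"] = field(default_factory=list)

def attached_skills(node: TaskNode, path: str = "") -> list[tuple[str, str]]:
    """Walk the tree and report where human judgment has been imposed."""
    here = f"{path}/{node.name}" if path else node.name
    found = [(here, node.skill)] if node.skill else []
    for child in node.children:
        found += attached_skills(child, here)
    return found

# A hypothetical contract-drafting job: control is per-node, not per-thread.
job = TaskNode("draft-contract", children=[
    TaskNode("research-precedents"),
    TaskNode("write-clauses", skill="firm-style-indemnity-language",
             children=[TaskNode("clause-3")]),
    TaskNode("final-review", skill="partner-signoff-checklist"),
])
```

Steering the agent then means editing a skill at one node, without touching the rest of the tree.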
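The decision-log pattern can be sketched in a few lines. This is a hypothetical illustration, not Legora’s implementation: the agent records each judgment call and keeps working; the human reviews the log afterwards and reverses the entries that were wrong.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    node: str        # which subtask the decision belongs to
    choice: str      # what the agent decided
    rationale: str   # why, so the reviewer can audit it
    reversed: bool = False

@dataclass
class DecisionLog:
    entries: list[Decision] = field(default_factory=list)

    def record(self, node: str, choice: str, rationale: str) -> None:
        """Agent side: decide, log it, and unblock itself -- no waiting on a human."""
        self.entries.append(Decision(node, choice, rationale))

    def review(self, wrong: set[int]) -> list[Decision]:
        """Human side: mark bad decisions for rework; return what must be redone."""
        for i in wrong:
            self.entries[i].reversed = True
        return [d for d in self.entries if d.reversed]

log = DecisionLog()
log.record("clause-3", "kept the indemnity cap at the template default",
           "no client-specific cap was given in the instructions")
log.record("definitions", "reused defined terms from the master agreement",
           "terms matched; avoids drift between documents")
redo = log.review(wrong={0})  # reviewer reverses only decision 0
```

The key property: review happens after the fact over a persistent record, so reversing one decision does not disturb the rest of the work.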
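The high-bandwidth-artifact idea can likewise be sketched as a data structure. Assuming a simplified `ContractArtifact` (illustrative only), the point is that feedback anchors to a specific clause in a persistent document rather than to a position in a chat transcript:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str  # "human" or "agent"
    text: str

@dataclass
class Clause:
    heading: str
    body: str
    comments: list[Comment] = field(default_factory=list)

@dataclass
class ContractArtifact:
    """Persistent, structured surface both parties work in -- not a chat thread."""
    clauses: list[Clause]

    def comment(self, idx: int, author: str, text: str) -> None:
        """Attach feedback to the clause it concerns, enabling surgical review."""
        self.clauses[idx].comments.append(Comment(author, text))

doc = ContractArtifact(clauses=[
    Clause("Definitions", "..."),
    Clause("Term", "..."),
    Clause("Indemnity", "..."),
])
doc.comment(2, "human", "Cap should be $2M per the client's instructions.")
```

A fix request lands on clause three itself, so the agent can revise that node without rewriting or forgetting everything else.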

Connections

  • agent-human-collaboration — his main contribution; trust/control framework, high-bandwidth artifacts thesis
  • claude-code-skills — skills as the control mechanism he rates highest; encode judgment at DAG nodes
  • agent-first-software — high-bandwidth artifacts as the evolution of what agent-first interfaces look like
  • ai-agents — vertical AI use case and failure modes for complex, long-running work

Sources