Design Taste, Craft, and Point of View
Dylan Field’s three-part framework distinguishing the types of judgment that remain irreducibly human as AI lowers the cost of execution.
Last updated: 2026-04-24
Overview
As AI makes execution cheap and visual output increasingly competent, the question of what human designers and builders uniquely contribute becomes urgent. Dylan Field offers a three-part framework that separates what’s often conflated into a single word — “taste.”
These three things are related but distinct. You can have one without the others.
The Three Concepts
Taste
Taste is about navigating the possibility space of what exists and having preferences that are clear and articulable. Critically, taste includes being able to help others understand what you’re going for — and what you’re explicitly not going for.
Taste is calibrated by cultural influences of a given moment, by exploration at the frontier, and by rapid real-time updating as you observe new things. It’s not static. The most interesting points of inflection are when you’re pushing further than others have pushed and updating rapidly.
Craft
Craft is about pushing past where others might stop and thinking about things at all levels of abstraction simultaneously — from the macro (overall structure, system design) down to the micro (spacing, UI copy, interaction feel) — and making sure all levels fit together.
Craft compounds along a learning curve. An experienced craftsperson works faster precisely because they’ve internalized the levels of abstraction. Craft applied to AI output: take the clay of a first-pass AI result and push it to a place that’s actually refined, not merely “good enough.”
Point of View
Point of view is about expressing something unique through a product or design — bringing an insight to life, moving a conversation forward, saying something that hasn’t been said in quite this way before.
The diagnostic: if everyone agrees with your point of view, you probably don’t have much of one.
Field’s framing of user feedback and POV: user feedback finds local maxima. Point of view can find global maxima or the next local maximum entirely. “You can have some interplay there — it’s good to listen to users to get to local maximum — but point of view is what gets you to the next one.”
Why This Matters for AI-Era Building
AI generates competent outputs but has no natural point of view and no stake in the outcome. It can produce things that technically pass aesthetic inspection (“purple slop that works”) while leaving enormous creative territory unexplored.
Field’s observation: when he pushed Gemini 3.0 hard with complex prompts and references, the output that had the most explicit point of view was the output he had the most opinions about — not the averaged output. Having direction activates a dynamic loop where you want to keep pushing.
This suggests: AI is most useful as a generator of divergent starting points. Taste decides which possibilities are worth exploring. Craft refines the chosen direction. Point of view ensures the final result says something rather than averaging everything.
The Canvas as a Taste-First Environment
Figma’s design canvas is architecturally aligned with this framework:
- Infinite canvas → maximizes divergence; see all possibilities simultaneously
- Direct manipulation → faster feedback loop than prompting or code-editing for visual properties
- Visual-first → operates at the level of taste and craft before worrying about code
For adjusting visual properties (spacing, color, layout), direct manipulation on the canvas beats prompting, which in turn beats code editing. The design-to-code round trip via Figma MCP exists to let you stay in the right abstraction for each type of work.
Taste Codification: Practitioner Evidence
Felix Lee (ADPList CEO) spent 2+ weeks, 4 hours/day trying to encode his design taste into a Claude Code skill file. Verdict: “It’s just been really hard. I can see where people are coming from that it’s not possible. I don’t think it’s impossible but let’s see.”
The root problem: the model treats everything it produces as high quality. You can’t write a rule like “avoid slop” because the model can’t reliably detect slop, especially visually. Text slop is largely solved by current models; visual slop still has meaningful room to improve. The human designer’s filtering function is not yet replaceable by an instruction set.
This suggests that taste, as defined here, is the last mile of design that resists automation — not because it’s mystical, but because it requires a judgment loop that current models can’t close reliably on visual output.
AI’s Taste Gap: Engineering Practitioner View
Tuomas Artman (CTO, Linear) offers a precise account of why AI lacks taste, grounded in engineering terms:
AI has no concept of time. When interacting with a UI, it takes screenshots or reads the DOM — a timeless view. It knows in the abstract that “1 second is better than 2 seconds,” but it cannot feel whether 2 seconds is frustratingly slow for a specific interaction. It cannot form the visceral experience of waiting.
AI cannot feel interaction quality. A button highlight should appear instantaneously on hover; the fade-out should take 150ms — not because someone told an AI this, but because a person felt the difference and knew which felt right. This micro-level feel — what Linear has been systematically refining for years — cannot be generated from a prompt.
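The asymmetry Artman describes can be sketched as a transition config. The numbers are the ones from his example (instant in, 150ms out); the type, helper, and names are illustrative, not Linear’s code.

```typescript
// Illustrative sketch, not Linear's code: the hover asymmetry described above,
// expressed as a transition config. Enter is instant; exit fades over 150ms.
interface HoverTransition {
  enterMs: number;    // time for the highlight to appear on hover
  exitMs: number;     // time for the highlight to fade back out
  exitEasing: string; // CSS timing function for the fade
}

const buttonHighlight: HoverTransition = {
  enterMs: 0,         // appears instantaneously: any delay here feels laggy
  exitMs: 150,        // a value a person felt was right, not a derived one
  exitEasing: "ease-out",
};

// Render the exit side as a CSS transition declaration.
function toCssTransition(t: HoverTransition): string {
  return `background-color ${t.exitMs}ms ${t.exitEasing}`;
}
```

The point of the sketch is that the constants carry the craft: nothing in the type system says 150 feels right and 400 feels sluggish; only having felt both does.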
The animation experiment. Emil (Linear design engineer) ran a test: had AI agents build a series of animations (pop-ups, button highlights, transitions). The agents produced technically correct results — they added ease-ins, adjusted timing — but every result felt slightly off. Too slow, too fast, or with unnatural easing curves. Emil then manually refined each one. The before/after difference was immediately perceptible. The agents did the right things; they couldn’t feel what was right.
The last bastion: Artman sees purposeful AI-generated UI as the frontier. AI that can perceive a product as a user perceives it — feeling latency, noticing awkward transitions, knowing that a specific interaction doesn’t feel right — would close the taste gap. Not there yet.
Quality as Competitive Moat (and Why It Doesn’t Show in A/B Tests)
The harder-to-see property of craft: quality doesn’t appear in any A/B test. You cannot run an experiment that tells you whether to invest in quality. The effect is gradual and diffuse.
Artman’s Uber story: two competing products at identical price points, identical features. Users stick to their default. Then, once or twice a year, they open the other app. One feels slightly better — car arrives a bit faster, UI feels more polished. No single moment converts them. Over months, they quietly shift. By the time the analytics show it, you’re already behind. No experiment captures this because the signal requires multi-month longitudinal observation and tiny effect sizes across many interactions.
The implication: quality investment must be a cultural decision, not a data-driven one. If you wait for metrics to tell you to care about quality, you’re already losing.
Quality Wednesdays: Institutionalizing Craft
Linear’s operational response to the unmeasurability problem: Quality Wednesdays.
Every Wednesday, every engineer must find and fix one quality problem they discovered themselves. Not bugs assigned to them — their own discovery. Rules:
- You must find it yourself (no hand-offs)
- It can be as small as one pixel or as large as a backend efficiency improvement
- Bugs are separate and don’t count
Origin: Artman noticed that engineers missed small interaction details (missing highlight fade-outs, inconsistent spacing) not because they didn’t care but because they weren’t looking. He gathered the team, pointed at one small UI view, and said “find everything wrong here.” They found 35 problems in a tiny menu. That became the practice.
Results: ~2,500–3,000 quality fixes to date. More importantly, the practice permanently altered how engineers build: because they are always on the lookout for quality issues, they introduce fewer regressions in unrelated work. Attention to craft becomes ambient, not episodic.
Scalability: Artman argues startups and large companies should both try this — especially now that AI can help find and fix many of these issues quickly.
Zero Bug Policy: Craft as System Design
Linear’s companion practice to Quality Wednesdays: zero bug policy.
The key insight: fixing a bug takes roughly the same engineering time whether you do it immediately or three months from now, and bugs arrive at a roughly constant rate. The total fix work is fixed either way; the only variable is the size of the backlog you carry.
The math: if bugs arrive at a constant rate and you batch them into a backlog, you end up fixing them months later at the same rate, just delayed, while carrying the UX debt in the meantime. The “cost” of immediate vs. delayed fixing is near-identical over time.
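The constant-rate argument can be made concrete with a toy simulation. The numbers are mine, not Linear’s: bugs arrive at a fixed weekly rate, and the only thing batching changes is the standing backlog.

```typescript
// Toy model of the backlog math (illustrative numbers, not Linear's data).
// Bugs arrive at a constant weekly rate; total fix work is identical whether
// you fix immediately or batch, so the only variable is the open backlog.
function backlogOverTime(
  arrivalsPerWeek: number,
  weeks: number,
  batchEveryWeeks: number, // 1 = fix-as-you-go, 12 = quarterly bug bash
): number[] {
  const backlog: number[] = [];
  let open = 0;
  for (let week = 1; week <= weeks; week++) {
    open += arrivalsPerWeek;                    // new bugs land
    if (week % batchEveryWeeks === 0) open = 0; // fix everything outstanding
    backlog.push(open);
  }
  return backlog;
}

const immediate = backlogOverTime(5, 24, 1);  // backlog returns to 0 every week
const quarterly = backlogOverTime(5, 24, 12); // climbs to 55 before each purge
```

Both regimes perform the same 120 fixes over the 24 weeks; the quarterly one just carries up to eleven weeks of open bugs (and their UX debt) while waiting.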
Practice:
- Every incoming bug is automatically assigned to the relevant engineer (who built or last touched the area) — now with AI agent assistance
- It becomes their highest priority immediately
- Most bugs are fixed within 2–3 hours
- Users receive a notification: “We fixed it — refresh your browser”
- Engineers can choose not to fix a bug (affecting 1/100,000 users, too gnarly) — but every bug is evaluated immediately
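The routing rule in the first bullet can be sketched minimally. The area names, owners, and fallback queue are invented for illustration; Linear’s actual implementation isn’t described beyond “built or last touched the area.”

```typescript
// Hypothetical sketch of the auto-assignment rule: route each incoming bug
// to whoever last touched the affected area. Names are invented.
interface Bug {
  id: string;
  area: string; // product surface the bug was reported against
}

// Assumed mapping from product area to the engineer who last touched it.
const lastTouched: Record<string, string> = {
  "issue-board": "alice",
  "notifications": "bob",
};

function assign(bug: Bug, fallback = "triage"): string {
  // The assignee treats the bug as their highest priority immediately;
  // unknown areas fall back to a shared triage queue.
  return lastTouched[bug.area] ?? fallback;
}
```

The design choice worth noting: ownership follows recency of contact with the code, not a rota, so the person with the most context gets the fix.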
The trust compound: users who report a bug and get a fix notification within hours develop qualitatively different loyalty. This is rare enough that it becomes a differentiator.
AI acceleration: 10% of Linear’s incoming bugs are now automatically resolved by AI agents — single-shot, no engineer involvement. Artman expects this to approach 100% over a few years.
Product Engineer Convergence
Artman’s prediction for the engineering role in 4 years: everyone becomes a product engineer.
The pipeline role — moving data from one place to another, plumbing services together — gets absorbed by AI. What remains is the judgment layer: understanding what a customer wants, knowing what good UX looks like for that specific context, shaping the feature to the actual problem.
Linear hires almost exclusively product engineers now. The path there: talk to customers directly (Linear has Slack channels open to all engineers; all customer meetings recorded and tagged); think like a mini-PM; be able to drive a feature from customer problem to shipped product end-to-end.
Resources for getting there without a product engineering job: Apple’s Human Interface Guidelines (“the best book if you want to do good UX”) and building personal projects with real users.
Connections
- dylan-field — source of the three-part taste/craft/POV framework; CEO of Figma
- felix-lee — practitioner evidence for taste codification difficulty; 2 weeks of failed attempts to encode taste into a skill file
- tuomas-artman — CTO of Linear; AI taste gap (time/feel/animation), Quality Wednesdays, zero bug policy, product engineer convergence
- prototype-and-prune — parallel framing from product side: both emphasize broad divergence followed by ruthless pruning; AI over-shipping is the failure mode of divergence without taste
- julie-zhuo — also identifies taste as the AI-era bottleneck from a product/design management perspective
- figma — Figma MCP as the design-to-code bridge; canvas as taste-first environment
- agent-evaluation — quality decay is unmeasurable in A/B tests; gradual competitive erosion
Sources
- Figma CEO on the New Design Playbook in the AI Era — Dylan Field — video transcript, added 2026-04-13
- How to Design and Code with Claude Code and Figma MCP in 50 Min — Felix Lee — video transcript, added 2026-04-13
- Taste & Craft: A Conversation with Tuomas Artman, CTO Linear & Gergely Orosz — added 2026-04-24