<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Roger Wong - Design, UX, AI, Technology</title><description>The design blog that connects the dots others miss. Thoughts on design systems, user experience, artificial intelligence, and technology.</description><link>https://rogerwong.me/</link><language>en-us</language><managingEditor>hello@rogerwong.me (Roger Wong)</managingEditor><webMaster>hello@rogerwong.me (Roger Wong)</webMaster><copyright>Copyright 2026 Roger Wong</copyright><category>Design</category><category>Technology</category><category>User Experience</category><category>Artificial Intelligence</category><ttl>60</ttl><image><url>https://rogerwong.b-cdn.net/assets/og-image.png</url><title>Roger Wong</title><link>https://rogerwong.me</link><description>Roger Wong - Design, UX, AI, Technology</description></image><item><title>A Visual Unicode Explorer</title><link>https://rogerwong.me/2026/04/charcuterie-visual-unicode-explorer?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/charcuterie-visual-unicode-explorer</guid><description>Here&amp;#39;s a quickie. Interaction developer David Aerne created a fun, Tempest-inspired Unicode character explorer called Charcuterie. Click a character to see visually-similar ones. You can even draw a symbol in the box in the upper left corner. Super fun....</description><pubDate>Fri, 17 Apr 2026 19:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-wehangtt.png&quot; alt=&quot;Charcuterie app interface showing a grid of Unicode glyphs in blue and white, with a selected Hangul character and descriptive sidebar text.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;Here&amp;#39;s a quickie. Interaction developer &lt;a href=&quot;https://charcuterie.elastiq.ch/&quot;&gt;David Aerne&lt;/a&gt; created a fun, Tempest-inspired Unicode character explorer called Charcuterie. Click a character to see visually similar ones. You can even draw a symbol in the box in the upper left corner. Super fun.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://charcuterie.elastiq.ch/?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-wehangtt.png" length="0" type="image/png"/></item><item><title>AI feels like a drug</title><link>https://rogerwong.me/2026/04/creativity-osteoporosis-protect-automate?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/creativity-osteoporosis-protect-automate</guid><description>My current side project is a website for a preschool in San Francisco. I&amp;#39;m using AI to accelerate wherever it fits, but I&amp;#39;ve reserved the primary visual treatments to be made by hand. Partly because that&amp;#39;s the right call for a preschool brand. And partly because of a phrase Pablo Stanley...</description><pubDate>Fri, 17 Apr 2026 17:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-4huf7a8n.jpg&quot; alt=&quot;A grid of colorful pixel art robot and creature characters in various designs, colors, and accessories, displayed against a white background.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;My current side project is a website for a preschool in San Francisco. I&amp;#39;m using AI to accelerate wherever it fits, but I&amp;#39;ve reserved the primary visual treatments to be made by hand. Partly because that&amp;#39;s the right call for a preschool brand. And partly because of a phrase &lt;a href=&quot;https://pablostanley.substack.com/p/ai-feels-like-a-drug&quot;&gt;Pablo Stanley&lt;/a&gt; coined for this: creativity osteoporosis.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I wrote about creativity osteoporosis a while back. The idea that your creative skills get weaker when AI does all the reps, like bones thinning when they&amp;#39;re not stressed. You don&amp;#39;t notice it happening. Everything seems fine. Then one day you reach for a skill and it&amp;#39;s... not there like it used to be&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Stanley wrote this after a weekend of making pixel art by hand—a project called Pixabots, little 32x32 robot characters—as a deliberate detox. He describes what set off the detox:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The whole time I was drawing, there was this pull. Physical, almost. Like my body was telling me to open a tab and start prompting. Not because the work was bad. Not because I was stuck. Just because my brain has been trained, over the last two years, to route every creative problem through an LLM.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;He still used AI for the parts that weren&amp;#39;t the art:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I used AI to build the Pixabots website. The stuff I&amp;#39;m not that good at... setting up Next.js, canvas rendering, exporting without antialiasing. And I tried to keep to myself the stuff that felt more &amp;quot;artistic&amp;quot; like the animation, the look and feel.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And then the operating principle:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The parts that feed my soul, I protected (even though everything in my body wanted to pull me away from them). The parts that would&amp;#39;ve killed the project with friction, I automated.&lt;/p&gt;
&lt;p&gt;Maybe that&amp;#39;s the whole game now... knowing which parts to protect...&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Knowing which parts to protect is becoming a judgment call I have to make on every project. The preschool site makes the decision easy: the visual language stays in my hands, AI handles the plumbing. The harder calls sit in the middle: projects where craft matters but speed counts too, and every protect-or-automate decision costs you something. If you don&amp;#39;t draw that line on purpose, it draws itself for you.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://pablostanley.substack.com/p/ai-feels-like-a-drug?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-4huf7a8n.jpg" length="0" type="image/jpeg"/></item><item><title>Good Taste the Only Real Moat Left</title><link>https://rogerwong.me/2026/04/taste-without-authorship-fragile?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/taste-without-authorship-fragile</guid><description>When generation gets cheap, craft becomes judgment. Raj Nandan Sharma, writing on his blog, puts it bluntly: Before AI, mediocre work often reflected a lack of time, resources, or execution skill. Today mediocre work often means something else: the person stopped at the first acceptable draft. That ...</description><pubDate>Fri, 17 Apr 2026 15:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-qdj5sb1u.jpg&quot; alt=&quot;Abstract swirling ink or fluid art in dark and pink tones with white text reading 'Good Taste: The Only Real Moat Left.'&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;When generation gets cheap, &lt;a href=&quot;/2026/04/craft-untouchable-butler&quot;&gt;craft becomes judgment&lt;/a&gt;. &lt;a href=&quot;https://rajnandan.com/posts/taste-in-the-age-of-ai-and-llms/&quot;&gt;Raj Nandan Sharma&lt;/a&gt;, writing on his blog, puts it bluntly:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Before AI, mediocre work often reflected a lack of time, resources, or execution skill. Today mediocre work often means something else: the person stopped at the first acceptable draft. That is the economic shift AI introduces. It compresses the cost of first drafts, which means the value moves downstream... In other words, the scarce skill is not generation. It is refusal.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Refusal—knowing what to throw out and why—is what&amp;#39;s scarce in a world where anyone can generate ten competent drafts before lunch.&lt;/p&gt;
&lt;p&gt;But Sharma doesn&amp;#39;t stop there. He warns that elevating taste alone can quietly corner humans into an end-of-pipeline selector role:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;There is a strong version of the &amp;quot;taste matters&amp;quot; argument that quietly pushes humans into a narrow role. In that version, AI generates many outputs and the human stands at the end of the pipeline selecting the best one. That is a useful role, but it is also too small... The warning is not that taste has no value. It does. The warning is that taste without authorship, stake, or construction can become a narrow and eventually fragile role.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The warning Sharma adds is the part the &amp;quot;taste is the moat&amp;quot; conversation tends to skip. Refusal without authorship is still selector work, and selector work has a ceiling. The durable position pairs refined taste with authorship—owning what ships and the stake for getting it wrong.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://rajnandan.com/posts/taste-in-the-age-of-ai-and-llms/?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-qdj5sb1u.jpg" length="0" type="image/jpeg"/></item><item><title>Designers finally have a say in the product they design</title><link>https://rogerwong.me/2026/04/designers-authorship-over-product?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/designers-authorship-over-product</guid><description>The designer&amp;#39;s role is widening at both ends of the product stack. Earlier, I linked to a post by Chad Johnson arguing designers gain influence by moving upstream: becoming orientation devices for the team, shaping the problem before it gets named. Daniel Mitev, writing for UX Collective, argues...</description><pubDate>Thu, 16 Apr 2026 19:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-dsustp36.png&quot; alt=&quot;A hand pressing an Enter key above a terminal showing a git commit command, with text reading 'Designers finally have a say in the product they design.'&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;The designer&amp;#39;s role is widening at both ends of the product stack. Earlier, I linked to a post by &lt;a href=&quot;https://chadsnewsletter.substack.com/p/why-most-designers-will-never-influence&quot;&gt;Chad Johnson&lt;/a&gt; arguing designers gain influence by moving upstream: becoming orientation devices for the team, shaping the problem before it gets named. &lt;a href=&quot;https://uxdesign.cc/designers-finally-have-a-say-in-the-product-they-design-396c999e1227&quot;&gt;Daniel Mitev&lt;/a&gt;, writing for &lt;em&gt;UX Collective&lt;/em&gt;, argues designers gain authorship by moving downstream, into the code:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The industry has been asking whether designers should code for over a decade. It was always the wrong question, or at least the wrong framing. It implied the barrier was technical: that designers lacked something fundamental, something that required years of study to acquire. Learn TypeScript. Understand the DOM. Earn your way across the divide. That wasn&amp;#39;t the barrier.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Mitev&amp;#39;s argument comes down to access. AI tooling compresses the translation layer and returns authorship to the designer:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;What AI tooling gives back is authorship over the surface layer — the part users actually touch. A designer can now open the codebase, adjust how an element behaves, change how a transition feels, and verify the output against their own intent in real time. The easing curve gets set by the person who decided what it should feel like. The hover state gets defined by the person who thought through why it matters. That work no longer requires an interpreter.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;He points at Alan&amp;#39;s &amp;quot;Everyone Can Build&amp;quot; initiative—283 pull requests shipped by non-engineers over two quarters, each merged after engineering review—as evidence it&amp;#39;s already happening.&lt;/p&gt;
&lt;p&gt;Johnson and Mitev aren&amp;#39;t in conflict. They&amp;#39;re describing the same shift from opposite ends. The interpreters at the top of the product stack—PMs who owned problem framing and prioritization—are compressing. The interpreters at the bottom—frontend engineers translating intent into code—are compressing too. Both jobs return to the designer who understood the intent first.&lt;/p&gt;
&lt;p&gt;The role widens. Some designers will gravitate to one end or the other. The designers who stretch the full range—orientation work and authorship—are working the widest version of the job.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://uxdesign.cc/designers-finally-have-a-say-in-the-product-they-design-396c999e1227?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-dsustp36.png" length="0" type="image/png"/></item><item><title>Why Most Designers Will Never Influence Product Roadmaps</title><link>https://rogerwong.me/2026/04/designers-influence-product-roadmaps?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/designers-influence-product-roadmaps</guid><description>(Second link to Chad Johnson this week, but I just discovered his Substack, so ¯\_(ツ)_/¯.) Chad Johnson, writing in his newsletter, argues that designer influence in product decisions comes from something other than craft output. He lays out the underlying dynamic: Roadmaps are shaped less by who ha...</description><pubDate>Thu, 16 Apr 2026 17:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-4v5t0kh1.jpg&quot; alt=&quot;Large-scale flowchart on a white wall with quirky decision questions including 'Have you ever missed an airplane flight?' and 'Are you good with names?'&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;(Second link to Chad Johnson this week, but I just discovered his Substack, so &lt;code&gt;¯\_(ツ)_/¯&lt;/code&gt;.)&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://chadsnewsletter.substack.com/p/why-most-designers-will-never-influence&quot;&gt;Chad Johnson&lt;/a&gt;, writing in his newsletter, argues that designer influence in product decisions comes from something other than craft output. He lays out the underlying dynamic:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Roadmaps are shaped less by who has the best ideas and more by who controls the framing of tradeoffs. Every roadmap decision is a bet: build this instead of that, now instead of later, for these users instead of those. Whoever makes the risk feel smaller tends to win.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So where does the designer fit? Johnson:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The most influential designers at startups do not position themselves as makers of screens. They act as orientation devices for the team. Orientation is the ability to help a group understand where they are, what matters, and what tradeoffs are real. It precedes prioritization, and it makes decision-making possible.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A designer whose output stops at screens is working on the wrong layer of the problem. Johnson lists the skills that back the orientation role:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Designers who shape direction invest in strategic framing, business literacy, and narrative construction. They learn to say no with evidence and to disagree without drama.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Johnson&amp;#39;s list is right as far as it goes. He understates one skill: legibility. A lot of design influence breaks down at translation. The thinking is strategic; the communication stays in design vocabulary. A sharp problem statement understandable only to other designers stays in the design review. Designers who change the conversation make their analysis readable in product and business terms without flattening it. That&amp;#39;s the same move Johnson gestures at when he describes &amp;quot;decision-ready artifacts&amp;quot; as &amp;quot;tools for comparison... designed to provoke judgment, not admiration.&amp;quot;&lt;/p&gt;
&lt;p&gt;Johnson&amp;#39;s closer calls the future of design leadership &amp;quot;quieter, more rigorous, and deeply strategic.&amp;quot; That&amp;#39;s right. It&amp;#39;s also a role that depends on being read by the people making the call.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://chadsnewsletter.substack.com/p/why-most-designers-will-never-influence?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-4v5t0kh1.jpg" length="0" type="image/jpeg"/></item><item><title>Ian Silber - What it&apos;s like designing at OpenAI</title><link>https://rogerwong.me/2026/04/designing-at-openai-ian-silber?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/designing-at-openai-ian-silber</guid><description>Two podcast conversations with frontier-lab design leaders on what designing at an AI lab looks like day-to-day. I previously linked to Lenny Rachitsky&amp;#39;s interview with Jenny Wen, head of design for Claude, where she described a redistribution of designer hours: less mocking, more pairing with e...</description><pubDate>Thu, 16 Apr 2026 15:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-byianfow.jpg&quot; alt=&quot;Dive Club episode thumbnail: 'How design works at OpenAI' featuring Ian Silber, Head of Product Design at OpenAI, smiling against a dark background.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;Two podcast conversations with frontier-lab design leaders on what designing at an AI lab looks like day-to-day. I previously linked to Lenny Rachitsky&amp;#39;s interview with &lt;a href=&quot;/2026/03/the-design-process-is-dead-jenny-wen-head-of-design-at-claude&quot;&gt;Jenny Wen&lt;/a&gt;, head of design for Claude, where she described a redistribution of designer hours: less mocking, more pairing with engineers, a sliver of direct implementation. The activities themselves still look like design.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=oM1d9Tau27w&quot;&gt;Ian Silber&lt;/a&gt;, head of product design at OpenAI, on Michael Riddering&amp;#39;s &lt;em&gt;Dive Club&lt;/em&gt;, describes work that doesn&amp;#39;t fit the same list:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Designers working on this are hopefully spending a lot less time in Figma or whatever tool you use to draw pixels, and more time really thinking about how you interact with this thing, and the fact that the model really is the core product.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Silber&amp;#39;s concrete example is onboarding. Instead of building a first-run tutorial, his team shapes what the model already knows about the person:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We have this super intelligent model that could probably do a much better job trying to understand what this person&amp;#39;s goals are [...] We&amp;#39;re really stripping back a lot of what you might traditionally do and trying to say, &amp;quot;Well, actually [...] let&amp;#39;s think about like how we should give this context to the model that this person is brand new and they might need some handholding.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The traditional response adds UI around the problem. Silber&amp;#39;s team takes it out and gives the model enough context to meet the user where they are.&lt;/p&gt;
&lt;p&gt;That kind of work needs its own scaffolding, and OpenAI is building it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We have a whole system called the Dynamic User Interface Library, which allows us to design things that the model can then interpret.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Primitives the model composes at runtime, shaped by system prompts and context rather than drawn flow by flow. Wen is describing a redistribution of designer hours inside activities that still look recognizable. Silber is describing activities that don&amp;#39;t quite have names yet. And &lt;strong&gt;yes&lt;/strong&gt;, that is still design.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=oM1d9Tau27w&amp;ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-byianfow.jpg" length="0" type="image/jpeg"/></item><item><title>The Last 20% and Who&apos;s Asking Why?</title><link>https://rogerwong.me/2026/04/last-20-percent-thinking-gap?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/last-20-percent-thinking-gap</guid><description>The gap between an AI-produced prototype and a shippable product has a shape. Most of us assume it&amp;#39;s the visual 20%: the polish AI output drifts on. Chad Johnson&amp;#39;s case is that the 20% is the trivial part, and the real gap sits upstream of everything visible. Chad Johnson, writing in his new...</description><pubDate>Wed, 15 Apr 2026 17:30:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-f25hnnad.jpg&quot; alt=&quot;Abstract digital art featuring curved, layered surfaces with fine parallel lines in warm orange, red, and deep blue gradients.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;The gap between an AI-produced prototype and a shippable product has a shape. Most of us assume it&amp;#39;s the visual 20%: the polish AI output never quite nails. Chad Johnson&amp;#39;s case is that the 20% is the trivial part, and the real gap sits upstream of everything visible.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://chadsnewsletter.substack.com/p/the-last-20-and-whos-asking-why&quot;&gt;Chad Johnson&lt;/a&gt;, writing in his newsletter:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The deeper issue was that nobody had asked whether a prototype was even the right artifact to produce at that stage. The PM had made three assumptions about user intent that we hadn&amp;#39;t validated. They&amp;#39;d skipped past a critical question about whether this flow needed to exist at all, or whether the real problem was upstream in the information architecture. They&amp;#39;d built a beautiful answer to a question nobody had confirmed was worth asking. That&amp;#39;s the part that stuck with me. Not the visual gaps. The thinking gaps.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That lines up with what I&amp;#39;ve been calling &lt;a href=&quot;/2026/04/acceleration-is-not-automation&quot;&gt;C+ out of the box&lt;/a&gt;: artifacts that read well and seem credible until you apply critical thinking. Johnson gets specific about what&amp;#39;s actually missing, and none of it is visual: the assumption nobody validated, the upstream question nobody asked. The interface was fine. The thinking was absent from the (probably) AI-generated PRD.&lt;/p&gt;
&lt;p&gt;Johnson again:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;…design production got democratized, but design judgment didn&amp;#39;t. Anyone can make something now. Almost nobody new learned how to &lt;em&gt;think&lt;/em&gt; well about what should be made, why, and for whom. And that gap, between what&amp;#39;s possible to produce and what&amp;#39;s actually been thought through, is now the entire playing field for our profession. Designers aren&amp;#39;t becoming obsolete. They&amp;#39;re becoming stewards.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Judgment still takes years to build, and no tool compresses that.&lt;/p&gt;
&lt;p&gt;The last 20% is rarely the gap that matters. The first question—should we build this?—almost always is. Very few teams have the muscle to ask it.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://chadsnewsletter.substack.com/p/the-last-20-and-whos-asking-why?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-f25hnnad.jpg" length="0" type="image/jpeg"/></item><item><title>The Design-Build Loop</title><link>https://rogerwong.me/2026/04/design-build-loop-system-graph?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/design-build-loop-system-graph</guid><description>Tara Tan surveyed more than a dozen AI design tools for The Review. Her field audit sits alongside the design-process compression argument: In working with these tools, one insight emerged for me: the tools that understand your design system produce better output than the ones that don&amp;#39;t. [...] ...</description><pubDate>Wed, 15 Apr 2026 15:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-p0w0ohk4.jpg&quot; alt=&quot;'The Design Agent Landscape' diagram categorizing AI design tools into three groups: Agent-first canvas (Pencil, Paper, OpenPencil), Design system-first (Figma MCP, Console MCP, Google Stitch), and Code-native (Subframe, MagicPath, Tempo, Polymet, Magic Patterns, Lovable, Bolt, v0, Replit).&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;&lt;a href=&quot;https://thereview.strangevc.com/p/the-design-build-loop&quot;&gt;Tara Tan&lt;/a&gt; surveyed more than a dozen AI design tools for &lt;em&gt;The Review&lt;/em&gt;. Her field audit sits alongside the &lt;a href=&quot;/2026/04/acceleration-is-not-automation&quot;&gt;design-process compression&lt;/a&gt; argument:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In working with these tools, one insight emerged for me: the tools that understand your design system produce better output than the ones that don&amp;#39;t. [...] The competitive moat in this market is not generative quality, which is commoditizing fast. The moat is the design system graph: the tokens, components, spacing scales, typography rules, and conventions that make your product look like your product and not a generic template. Whoever makes that system machine-readable for agents will win the enterprise.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That&amp;#39;s the operational reason my proposal for an agent design team hinges on a rock-solid design system. What distinguishes output across the tools Tan surveyed is whether the generator respects your existing design system or treats every request as a fresh mood board.&lt;/p&gt;
&lt;p&gt;Tan&amp;#39;s other finding is the role-shift:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The same shift is happening in design. At Uber, Ian Guisard didn&amp;#39;t stop being a design systems lead when uSpec automated his spec-writing. His job shifted from producing documentation to encoding expertise, writing agent skills, defining validation rules, deciding what &amp;quot;correct&amp;quot; means for each component across seven platforms. The human became the system designer, not the system operator. [...] The canary is singing. And the song is about the work shifting from execution to judgment, from operating the system to designing the system itself.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Same title, different job. Ian Guisard&amp;#39;s taste still matters; it lives in the skills and validation rules now, not the deliverables. That&amp;#39;s &amp;quot;&lt;a href=&quot;https://newsletter.rogerwong.me/p/defend-the-role-or-follow-the-skill&quot;&gt;follow the skill, not the role&lt;/a&gt;&amp;quot; made concrete. Guisard used to write specs; now he writes the rules the system follows to validate them.&lt;/p&gt;
&lt;p&gt;The infrastructure is catching up to the process. Tan&amp;#39;s implicit prescription is straightforward: make the design system machine-readable, win the enterprise. Some of that tooling is already out in the open. Southleft&amp;#39;s &lt;a href=&quot;/2026/03/design-systems-ai-infrastructure&quot;&gt;Figma Console MCP&lt;/a&gt; (which Uber&amp;#39;s uSpec is built on) lets agents operate on tokens and components without a custom platform.&lt;/p&gt;
&lt;p&gt;But tooling alone isn&amp;#39;t enough. Most of us aren&amp;#39;t Uber. The path for teams without a dedicated design systems lead still needs someone to do the work Guisard did: encoding the expertise and defining what &amp;quot;correct&amp;quot; looks like across platforms. That&amp;#39;s where the next round of tooling needs to land.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://thereview.strangevc.com/p/the-design-build-loop?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-p0w0ohk4.jpg" length="0" type="image/jpeg"/></item><item><title>Acceleration Is Not Automation</title><link>https://rogerwong.me/2026/04/acceleration-is-not-automation?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/acceleration-is-not-automation</guid><description>I&amp;#39;ve been wandering the wilderness to understand where the software design profession is going. Via this blog and my newsletter, I&amp;#39;ve been exploring the possibilities by reading, commenting, and writing. Many other designers are in the same boat, with Erika Flowers&amp;#39;s Zero Vector design m...</description><pubDate>Tue, 14 Apr 2026 16:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/acceleration-is-not-automation-featured-gqi2cic4.jpg&quot; alt=&quot;A sleek high-speed bullet train with glowing headlights crossing a bridge through dense fog over a misty landscape.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;I&amp;#39;ve been wandering the wilderness to understand &lt;a href=&quot;/2025/12/the-year-ai-changed-design&quot;&gt;where the software design profession is going&lt;/a&gt;. Via this blog and my &lt;a href=&quot;https://newsletter.rogerwong.me/subscribe&quot;&gt;newsletter&lt;/a&gt;, I&amp;#39;ve been exploring the possibilities by reading, commenting, and writing. Many other designers are in the same boat, with Erika Flowers&amp;#39;s &lt;a href=&quot;https://zerovector.design&quot;&gt;Zero Vector design&lt;/a&gt; methodology being the most defined. Kudos to her for being one of the first—if not the first—to plant the flag.&lt;/p&gt;
&lt;p&gt;Directionally, Flowers is right. But for me, working in a team and on B2B software, it feels too simplistic and ignores the realities of working with customers and counterparts in product management and engineering. (That&amp;#39;s her whole point: one person to do it all, no handoff.)&lt;/p&gt;
&lt;p&gt;The destination is within view. But it&amp;#39;s hazy and distant. The path to get there is unclear, like driving through soupy fog when your headlights reflecting off the mist are all you can see.&lt;/p&gt;
&lt;p&gt;At the core, I don&amp;#39;t believe the process has changed because the UX design process mirrors the scientific method:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Observe&lt;/li&gt;
&lt;li&gt;Question&lt;/li&gt;
&lt;li&gt;Hypothesize&lt;/li&gt;
&lt;li&gt;Experiment&lt;/li&gt;
&lt;li&gt;Test&lt;/li&gt;
&lt;li&gt;Analyze&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=CtxdhnieTCQ&quot;&gt;https://www.youtube.com/watch?v=CtxdhnieTCQ&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&amp;quot;Never Over&amp;quot; TV commercial for Eli Lilly by Wieden+Kennedy, 2026&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Compare with the &lt;a href=&quot;https://www.ideou.com/blogs/inspiration/what-is-design-thinking&quot;&gt;design thinking framework&lt;/a&gt; popularized by IDEO and Stanford&amp;#39;s &lt;a href=&quot;https://web.stanford.edu/~mshanks/MichaelShanks/files/509554.pdf&quot;&gt;d.school&lt;/a&gt; in the late 1990s and 2000s:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Observe → Empathize&lt;/li&gt;
&lt;li&gt;Question → Define&lt;/li&gt;
&lt;li&gt;Hypothesize → Ideate&lt;/li&gt;
&lt;li&gt;Experiment → Prototype&lt;/li&gt;
&lt;li&gt;Test → Test&lt;/li&gt;
&lt;li&gt;Analyze → (Analyze)&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/designthinkingprocesschart-g4j30x5t.webp&quot; alt=&quot;Design thinking process diagram showing five hexagonal stages: Empathize, Define, Ideate, Prototype, and Test, each with bullet-point activities listed alongside.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The Design Thinking framework from &lt;a href=&quot;https://dschool.stanford.edu/&quot;&gt;Stanford&amp;#39;s d.school&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Even if you don&amp;#39;t consciously follow the official design thinking process, as a designer you do it anyway. Research → ideate → test → iterate. It&amp;#39;s the same thing at a high level.&lt;/p&gt;
&lt;p&gt;The &lt;a href=&quot;https://www.designcouncil.org.uk/our-resources/the-double-diamond/&quot;&gt;Double Diamond&lt;/a&gt; expands on this a bit, capturing more of what we designers actually do:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Discover, or research and observe what&amp;#39;s happening in the problem space&lt;/li&gt;
&lt;li&gt;Define, or analyze your research and define the problem&lt;/li&gt;
&lt;li&gt;Develop, or diverge on solutions to that problem&lt;/li&gt;
&lt;li&gt;Deliver, or home in on solutions via testing&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/double-diamond-uk8mwq31.png&quot; alt=&quot;Double Diamond design process diagram showing four phases: Discover, Define, Develop, and Deliver, represented as two red diamond shapes with directional arrows.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The Double Diamond design process from the &lt;a href=&quot;https://www.designcouncil.org.uk/&quot;&gt;Design Council&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;So the question about where design is going is less about the overall process—because it stays the same, &lt;a href=&quot;/2026/03/design-process-compressed-nng-response&quot;&gt;just compressed&lt;/a&gt;—and more about who is doing what with what. In other words, on a daily basis, what are designers doing and what tools are they using?&lt;/p&gt;
&lt;h2&gt;The Coding World Changed in Three Months&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=wc8FBhQtdsA&quot;&gt;https://www.youtube.com/watch?v=wc8FBhQtdsA&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;The industry has moved incredibly fast. First, coding was upended by agentic engineering. Developer and AI researcher &lt;a href=&quot;https://www.youtube.com/watch?v=wc8FBhQtdsA&quot;&gt;Simon Willison said recently&lt;/a&gt; on Lenny&amp;#39;s Podcast:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;…All of the software engineers who took time off over the over the holidays and started tinkering with this stuff got this moment of realization where it&amp;#39;s like, &amp;quot;Oh wow this stuff actually works now. I could tell it to build code and if I describe that code well enough, it&amp;#39;ll follow the instructions and it&amp;#39;ll build the thing that I asked it to build.&amp;quot;&lt;/p&gt;
&lt;p&gt;I think the reverberations to that are still shaking us [through] software engineering. A lot of people woke up in January and February and started realizing, &amp;quot;Oh wow, this technology which I&amp;#39;d been kind of paying attention to, suddenly it&amp;#39;s got really really good.&amp;quot; And what does that mean? Like what does the fact [that] I can churn out 10,000 lines of code in a day and most of it works. Is that good? Like how do we get from &amp;quot;most of it works&amp;quot; to &amp;quot;all of it works&amp;quot;?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;What started as a slow simmer with &lt;a href=&quot;/2024/11/replatforming-with-a-lot-of-help-from-ai&quot;&gt;Cursor&amp;#39;s autocomplete&lt;/a&gt; and step-by-step prompting quickly turned into a rapid boil with Claude Code and Opus 4.5 in November 2025. By January 2026, developers like Geoffrey Huntley had discovered the &lt;a href=&quot;https://ghuntley.com/ralph/&quot;&gt;Ralph Wiggum loop&lt;/a&gt;, which runs Claude Code in a loop, forcing it to continue until its task is solved without bugs; and Steve Yegge released the token-burning automated software factory &lt;a href=&quot;https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16dd04&quot;&gt;Gas Town&lt;/a&gt;. Over the last three months, the innovations kept coming: skills (Markdown files that serve as how-tos for agents), teams of multiple agents, and a plethora of agent &amp;quot;harnesses,&amp;quot; or apparatuses that orchestrate multiple agent teams. Altogether, these new tools have effectively automated programming, with developers now commanding multiple teams of AI agents. As Willison put it, &amp;quot;I can fire up four agents in parallel and have them work on four different problems. And by 11 AM, I am wiped out for the day.&amp;quot;&lt;/p&gt;
&lt;p&gt;With AI transforming engineering well underway, the question becomes, &amp;quot;What else can be accelerated in software development?&amp;quot; The other legs of the three-legged stool, of course. &lt;/p&gt;
&lt;h2&gt;What Else Can Be Accelerated?&lt;/h2&gt;
&lt;h3&gt;Writing PRDs&lt;/h3&gt;
&lt;p&gt;Many product management activities can be accelerated with AI. Given the right inputs (CSVs of analytics data, Markdown transcripts of customer calls, deep research on market conditions), &lt;a href=&quot;/2026/03/claude-cowork-guide-56-tips&quot;&gt;Claude Cowork&lt;/a&gt; can produce decent, if not great, analyses. Talk through the findings in a team meeting, feed that transcript back through Claude, and you can get a tight PRD. The core PM deliverable can be automated.&lt;/p&gt;
&lt;p&gt;The quality of the deliverable is, at best, a C+ out of the box. It might read well and seem credible, but with a little critical thinking, you&amp;#39;ll realize the PRD is full of holes and gross assumptions. You need to give it a battle-tested template and build a skill that considers the right context to write the PRD. You&amp;#39;ll need to iterate, employing reinforcement learning to compound the AI&amp;#39;s experience. Keep improving the skill until the PRDs get better. &lt;/p&gt;
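&lt;p&gt;As a sketch of what that might look like: a skill is just a Markdown file the agent reads before it starts working. The file name, frontmatter fields, and steps below are all invented for illustration, not a real skill:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;---
name: write-prd
description: Draft a PRD from research inputs using our battle-tested template
---

1. Read the analytics CSVs and customer-call transcripts provided as context.
2. Summarize the top three customer problems, each backed by a quote or a metric.
3. Fill in prd-template.md section by section; never leave a section blank.
4. Flag every assumption not supported by a source so a human can vet it.
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each iteration on a file like this is where the compounding happens: every hole you find in a PRD becomes another instruction.&lt;/p&gt;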
&lt;p&gt;Of course, PRDs are just one deliverable out of many a PM might be responsible for. But for building &lt;em&gt;something&lt;/em&gt;, I&amp;#39;d argue it&amp;#39;s the most important, because it feeds everything else. Because it&amp;#39;s the beginning of the &lt;a href=&quot;/2026/03/spec-driven-development&quot;&gt;&lt;em&gt;spec&lt;/em&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;With a PRD, how would an AI come up with a solution? What could be automated on the design front?&lt;/p&gt;
&lt;h3&gt;Designing Flows and Prototypes&lt;/h3&gt;
&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=iaAT6-dY1QI&quot;&gt;https://www.youtube.com/watch?v=iaAT6-dY1QI&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&amp;quot;Tea. Earl Grey. Hot.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;The magic of LLMs is that you can ask for anything and they&amp;#39;ll make it. It&amp;#39;s like the Star Trek replicator, but for digital artifacts. And like replicator food, the generated simulacra aren&amp;#39;t necessarily good. I&amp;#39;ve tried a few times to generate flows from Claude. Feeding it a PRD, I asked for a user flow and got a Mermaid diagram, which could be rendered as a flowchart in FigJam or Figma via a plugin. There&amp;#39;s a thing we humans do when we think about systems: we simplify and calibrate the level of granularity so a flow is easy to understand. What I&amp;#39;ve found with the few flows I&amp;#39;ve generated with Claude is that it tends to flex the altitude within the same chart. Sometimes it&amp;#39;s super low and detailed, and other times it&amp;#39;s high and hand-wavy. With work and iteration on a skill, I&amp;#39;m sure flows can get better.&lt;/p&gt;
&lt;p&gt;This is true of any of our deliverables, even wireframes, mocks, and prototypes. Iterate on a skill to make the deliverables better and improve them over time.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/prompt-generate-deploy-ai-capabilities-chart.png&quot; alt=&quot;A graph comparing AI Foundational Model Capabilities (orange line) versus AI Design Tools Capabilities (blue line) from 2026 to 2028. The orange line shows exponential growth through stages including Superhuman Coder, Superhuman AI Researcher, Superhuman Remote Worker, Superintelligent AI Researcher, and Artificial Superintelligence. The blue line shows more gradual growth through AI Designer using design systems, AI Design Agent, and Integration &amp; Deployment Agents.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: &lt;a href=&quot;https://ai-2027.com/&quot;&gt;AI 2027 scenario&lt;/a&gt; &amp;amp; &lt;a href=&quot;/2025/04/prompt-generate-deploy&quot;&gt;Roger Wong&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A year ago in &lt;a href=&quot;/2025/04/prompt-generate-deploy&quot;&gt;&amp;quot;Prompt. Generate. Deploy.&amp;quot;&lt;/a&gt;, I forecasted this workflow arriving via tools like Tempo, which bundled PRD → flow → wireframes → code into a single pipeline. That bundled version didn&amp;#39;t quite materialize. But the pieces did, just unbundled across Claude, Figma plugins, and v0.&lt;/p&gt;
&lt;p&gt;But if we&amp;#39;re &lt;em&gt;prompting&lt;/em&gt; flows and prototypes into existence, what should we do with the newfound gains in productivity? Not play solitaire while Claude churns, of course. Instead, we can test more.&lt;/p&gt;
&lt;p&gt;In Jake Knapp&amp;#39;s &lt;a href=&quot;https://www.character.vc/sprint&quot;&gt;Design Sprint&lt;/a&gt;, paper or low-fidelity prototypes were used to validate hypotheses. And, if you&amp;#39;re lucky, you could get through maybe two iterations. But the core idea is to get validation signal from real customers and users about your solution via a prototype. Now, with AI, you can iterate on a working prototype quickly enough that talking to more customers and sharing more variations is possible. This multiplies the confidence in your solution, lowering the risk of spending resources to launch it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/accelerate-vs-automation-sprint-process-d79pomds.png&quot; alt=&quot;Hand-drawn diagram of a 5-day design sprint: Monday (Map), Tuesday (Sketch), Wednesday (Decide), Thursday (Prototype), Friday (Test), each with key activities listed.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The 5-day design sprint from &amp;quot;Sprint&amp;quot; by Jake Knapp.&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;Design-to-Code Handoff&lt;/h3&gt;
&lt;p&gt;Once you have a validated prototype, you can go about designing the real thing. Maybe you&amp;#39;ll want to continue to do it in Figma like always, or maybe you&amp;#39;ll use newer &lt;a href=&quot;/2025/04/beyond-the-prompt&quot;&gt;AI-powered tools&lt;/a&gt; like v0, Lovable, or &lt;a href=&quot;/2026/01/how-boris-cherny-uses-claude-code&quot;&gt;Claude Code&lt;/a&gt;. Or maybe you&amp;#39;ll use your prototype as the base and make it production-ready. In any case, you have the opportunity to shape the material directly, to actually make the thing instead of a picture of the thing.&lt;/p&gt;
&lt;p&gt;For designers, I believe it&amp;#39;s possible to build the front-end, at the very least. Some platforms have really complex application logic and backends, so there I trust software engineers more than myself. But if I can hand off a fully-formed frontend to engineering to hook up the backend, then design QA is cut down by 90% because I made it.&lt;/p&gt;
&lt;h2&gt;Acceleration vs Automation&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/acceleration-is-not-automation-jetson-q8ynzwf5.jpg&quot; alt=&quot;George Jetson sits at a futuristic control panel looking stressed while his angry boss Mr. Spacely shouts at him from a video screen, in this 1985 Hanna-Barbera cartoon illustration.&quot;&gt;&lt;/p&gt;
&lt;p&gt;Consider the use cases I described above: faster PRD writing, faster flows and prototypes. Are they really automation? Or is the artifact generation simply being accelerated?&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;/2026/02/agentic-engineering&quot;&gt;Agentic engineering&lt;/a&gt; is truly automation. Engineers who are orchestrating teams of agents feed them prompts and then sit back and wait for the results. It&amp;#39;s closer to George Jetson sitting at a console pushing a button as a job. Yes, I know there&amp;#39;s more to it. Getting the spec right is the secret. And it&amp;#39;s the PRDs and designs that make up the spec.&lt;/p&gt;
&lt;p&gt;We haven&amp;#39;t really gotten to automated PRDs from short prompts, much less automated flow design, high-fidelity mockups, and interactive prototypes. Each of those deliverables still takes extensive back-and-forth with the LLMs to get right, to get to something above mediocre.&lt;/p&gt;
&lt;p&gt;As mentioned earlier, skills are one way to improve the output. But to truly automate, we have to break the process down further into what specialists might do.&lt;/p&gt;
&lt;h2&gt;Enter the Agent Team&lt;/h2&gt;
&lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/accelerate-vs-automation-intent-yecfsvw4.jpg&quot; alt=&quot;Intent coding agent UI showing multiple AI agents collaborating on JWT authentication middleware, with a spec document, task checklist, and file changes panel visible.&quot;&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Augment Code&amp;#39;s Intent agent orchestration GUI.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;In agentic engineering, developers have figured out that they need to give agents certain roles or personas with specific instructions. For example, in &lt;a href=&quot;https://www.augmentcode.com/product/intent&quot;&gt;Intent by Augment Code&lt;/a&gt;, they utilize a coordinator agent whose system prompt begins with:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You plan, delegate, and verify. You do NOT implement code yourself. You NEVER edit files directly. &lt;strong&gt;You have no file editing tools available. Delegation to implementor agents is the ONLY way code gets written.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And then a developer agent takes assigned tasks:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You plan and implement. You write specs first, then implement the work yourself after approval. No delegation, no sub-agents.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;And finally, a verifier agent ensures the completed task is done right:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You verify the implementation against the spec’s &lt;strong&gt;Acceptance Criteria&lt;/strong&gt;.
You are evidence-driven: if you can’t point to concrete evidence, it’s not verified.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This team of agents works together to get something done. The planning, the breaking down of a spec into bite-sized chunks, the coordination, and the testing all happen within this team until they think the work is done. Sometimes this will go on for dozens of minutes.&lt;/p&gt;
&lt;h3&gt;A Team of PMs&lt;/h3&gt;
&lt;p&gt;We can take this same idea and apply it to product management work. I can imagine the following agents:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Customer researcher.&lt;/strong&gt; Gather and summarize heaps of customer call transcripts, support tickets, product usage metrics, etc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Market researcher.&lt;/strong&gt; Research competitors and write detailed market analyses on the competitive landscape.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Business analyst.&lt;/strong&gt; Extract the requirements from the research, compare against present state, then report findings.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Product strategist.&lt;/strong&gt; Based on all of the above, create SWOT analysis, and then roadmap.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Product manager.&lt;/strong&gt; Write a PRD for the first wave of the roadmap.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To do all of the above in a single skill would not yield great results. But having these specialized agents work together could produce meaningful artifacts. The human judgment PMs bring to the table includes institutional knowledge about the business, its product, and its customer base. PMs should shape the AI&amp;#39;s output through iteration.&lt;/p&gt;
&lt;p&gt;(Obviously, product managers need to conduct the customer calls IRL. We&amp;#39;re not talking about AI-automated user interviews.)&lt;/p&gt;
&lt;h3&gt;A Team of Design Specialists&lt;/h3&gt;
&lt;p&gt;MC Dean &lt;a href=&quot;/2026/04/design-team-ai-agents&quot;&gt;created an agent team&lt;/a&gt; of 10 design specialists. It&amp;#39;s packaged together in something called &lt;a href=&quot;https://marieclairedean.substack.com/p/i-built-a-design-team-out-of-ai-agents?ref=rogerwong.me&quot;&gt;Designpowers&lt;/a&gt;, inspired by Jesse Vincent&amp;#39;s &lt;a href=&quot;https://github.com/obra/superpowers&quot;&gt;Superpowers&lt;/a&gt;. Her list of specialists includes:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;design-strategist&lt;/strong&gt; builds your flows, information architecture, personas, and design principles.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;design-scout&lt;/strong&gt; does competitive research and pattern analysis.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;design-lead&lt;/strong&gt; handles visual design — layout, colour, typography, components.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;motion-designer&lt;/strong&gt; takes care of animation, transitions, and micro-interactions.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;content-writer&lt;/strong&gt; writes interface copy at Grade 6 reading level.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;design-builder&lt;/strong&gt; converts specs into production code.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;accessibility-reviewer&lt;/strong&gt; runs WCAG and COGA evaluations on everything the team produces.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;design-critic&lt;/strong&gt; reviews the work against your brief and principles, finding the gaps nobody else caught.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;inspiration-scout&lt;/strong&gt; handles aesthetic references, cross-domain inspiration, mood boards.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;heuristic evaluator&lt;/strong&gt; evaluates a design against established usability heuristics (Nielsen&amp;#39;s 10) and conducts cognitive walkthroughs of key tasks.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;I love this, and I think it&amp;#39;s the step towards automation I&amp;#39;ve been exploring in this essay. Looking at this list, I can see some agents that would be applicable in my day-to-day and some that aren&amp;#39;t. I would also add some others. For what my team and I do at BuildOps, an operational platform for commercial contractors, I&amp;#39;d have the following design specialist agents:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;UX researcher.&lt;/strong&gt; Gather and summarize user interview transcripts, moderated and unmoderated studies, support tickets, product usage metrics, etc. Is there anything else that the Customer Research agent hasn&amp;#39;t already discovered?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Design strategist.&lt;/strong&gt; From the available research—above and from the Product agent team—brainstorm solutions that align with the product strategy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Design architect.&lt;/strong&gt; Given a high-level solution, map out all the flows including edge cases.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UX designer.&lt;/strong&gt; From the flows, spec out all the necessary individual pages. What appears on each screen and what is the user expected to do?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;UX copywriter.&lt;/strong&gt; Writes any UX copy like component labels and user instructions according to our copywriting style guide.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Prototyper.&lt;/strong&gt; Using the spec&amp;#39;d pages, make an interactive prototype. This can be used for testing and validation with customers and users.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Design builder.&lt;/strong&gt; Turn the prototype into production-ready code. Incorporate the design system properly and account for error states and edge cases.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Accessibility reviewer.&lt;/strong&gt; Ensures compliance with a11y guidelines and industry best practices.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Jakob Nielsen.&lt;/strong&gt; Evaluate the final design against Nielsen&amp;#39;s 10 UX heuristics.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Verifier.&lt;/strong&gt; Double-check that the final design satisfies all the requirements from the PRD and finds any gaps that may have been missed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All of the above assumes we have a rock-solid design system with a full assortment of components and documented patterns and rules.&lt;/p&gt;
&lt;p&gt;We have the team, and now what? Is it really as simple as describing what you want to build? In Dean&amp;#39;s Designpowers, the user acts as the creative director, intercepting &amp;quot;any handoff to correct, add, redirect, or skip.&amp;quot; If we&amp;#39;re to imagine a more automated workflow, the user here—you—would simply feed in the PRD as context, let the AI churn for a while, then inspect the resulting prototype and iterate from there. That is what agentic design would look like.&lt;/p&gt;
&lt;p&gt;But then the bottleneck becomes &lt;em&gt;you&lt;/em&gt;. Your judgment, built on years of experience, is what can &lt;a href=&quot;/2026/03/strongdm-software-factory-shape-thing&quot;&gt;shape and direct&lt;/a&gt; the agent team&amp;#39;s output. As I argued in &lt;a href=&quot;https://newsletter.rogerwong.me/p/who-teaches-the-product-builder&quot;&gt;this week&amp;#39;s newsletter&lt;/a&gt;, specialist experience is what builds the judgment an AI can&amp;#39;t hold. &amp;quot;…Judgment compounds from pattern recognition that only comes from doing grunt work in one lane long enough to know what good looks like.&amp;quot;&lt;/p&gt;
&lt;h2&gt;Walking Out of the Wilderness&lt;/h2&gt;
&lt;p&gt;If you&amp;#39;re still with me, let&amp;#39;s address this directly: Do we &lt;em&gt;want&lt;/em&gt; to automate design? No. But it will happen, and it is &lt;a href=&quot;/2026/03/the-design-process-is-dead-jenny-wen-head-of-design-at-claude&quot;&gt;already happening&lt;/a&gt; in tech-forward companies like Silicon Valley startups. The answer is not to resist, but to adapt. To &lt;a href=&quot;https://newsletter.rogerwong.me/p/defend-the-role-or-follow-the-skill&quot;&gt;follow the skill&lt;/a&gt;, not cling to the role.&lt;/p&gt;
&lt;p&gt;Adapting will look different depending on what you&amp;#39;re building and what happens if it breaks. For consumer apps and early-stage products, a solo operator commanding an agent team may be fine. For vertical SaaS, it isn&amp;#39;t.&lt;/p&gt;
&lt;p&gt;My team of designers does a &lt;em&gt;lot&lt;/em&gt; of discovery, understanding the problem from multiple vantage points and creating bullet-proof solutions by thinking through edge cases, application performance, and integration points. We have to because we work on mission-critical operational software. If BuildOps doesn&amp;#39;t work as expected, our customers&amp;#39; businesses will come to a grinding halt. That can&amp;#39;t happen.&lt;/p&gt;
&lt;p&gt;Which is why I don&amp;#39;t believe a team of one can ship a robust feature end-to-end in vertical SaaS. There&amp;#39;s too much complexity, but more importantly, there&amp;#39;s too much at stake. As a designer, I &lt;em&gt;don&amp;#39;t&lt;/em&gt; understand the ins and outs of integrating with an enterprise accounting system. I&amp;#39;m not trained enough in engineering to be able to spot something amiss in AI-generated code that will result in catastrophe. But I &lt;em&gt;do&lt;/em&gt; have the experience necessary in design to pick out and correct a poor user experience the AI may have built.&lt;/p&gt;
&lt;p&gt;Here&amp;#39;s what I see through the fog: agentic design is the future. But the process to actually run it isn&amp;#39;t fully formed yet. I can see its outlines now. That&amp;#39;s where I&amp;#39;m headed next.&lt;/p&gt;

          </content:encoded><category>essays</category><enclosure url="https://rogerwong.b-cdn.net/media/acceleration-is-not-automation-featured-gqi2cic4.jpg" length="0" type="image/jpeg"/></item><item><title>Elizabeth Goodspeed on why design writing needs designers writing</title><link>https://rogerwong.me/2026/04/goodspeed-designers-who-write?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/goodspeed-designers-who-write</guid><description>I&amp;#39;ve written that AI-era design work reduces to taste and judgment. Elizabeth Goodspeed&amp;#39;s case for designer-writers gets there from a different direction. Elizabeth Goodspeed, writing for It&amp;#39;s Nice That: You can get away with a lot in design: conceptual ideas are able to sit inside a vis...</description><pubDate>Mon, 13 Apr 2026 17:00:00 GMT</pubDate><content:encoded>
&lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-r64k8abd.jpg&quot; alt=&quot;Bold black text reading &amp;#39;Placeholder Text&amp;#39; and &amp;#39;Elizabeth Goodspeed&amp;#39; on a pink background, flanked by columns of lorem ipsum-style body copy.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;I&amp;#39;ve written that AI-era design work reduces to &lt;a href=&quot;/2026/03/craft-is-now-judgment-not-polish&quot;&gt;taste and judgment&lt;/a&gt;. Elizabeth Goodspeed&amp;#39;s case for designer-writers gets there from a different direction.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.itsnicethat.com/articles/elizabeth-goodspeed-designers-who-write-creative-industry-020426&quot;&gt;Elizabeth Goodspeed&lt;/a&gt;, writing for &lt;em&gt;It&amp;#39;s Nice That&lt;/em&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;You can get away with a lot in design: conceptual ideas are able to sit inside a visual piece of work without ever being fully spelled out. They&amp;#39;re gestured at rather than articulated. Writing forces you to figure out exactly what your idea is; if it isn&amp;#39;t working, you&amp;#39;ll know immediately. Where design is like a ballet – implicit ideas carried through form – then writing is closer to a theatre – your thinking has to be explicitly spoken.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Goodspeed&amp;#39;s point is that design lets you gesture at an idea without ever articulating it, and writing forces you to name it. A designer who can&amp;#39;t explain why a choice works has taste they can&amp;#39;t grow or pass on.&lt;/p&gt;
&lt;p&gt;Goodspeed&amp;#39;s second point goes further:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Writing is to graphic design what clay is to pottery. It&amp;#39;s the material designer[s] shape and massage into form. To work with text well, you have to really be able to read and understand what you&amp;#39;re setting – not just how it looks and basics like not hyphenating a word in a bad spot, but what it means on a deeper level. Just as reading makes you a better writer, writing makes you a better reader.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Product designers don&amp;#39;t usually think of themselves as writers. But user stories are writing, and articulating what a user should be able to do through an experience and why is essential. &lt;/p&gt;
&lt;p&gt;Worth reading in full. She makes writing feel like a design discipline.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://www.itsnicethat.com/articles/elizabeth-goodspeed-designers-who-write-creative-industry-020426?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-r64k8abd.jpg" length="0" type="image/jpeg"/></item><item><title>Craft is Untouchable</title><link>https://rogerwong.me/2026/04/craft-untouchable-butler?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/craft-untouchable-butler</guid><description>Every few weeks, another essay or YouTube video announces that AI has killed craft. One of my favorite designers writing about design, Christopher Butler, goes the other way: No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf templ...</description><pubDate>Mon, 13 Apr 2026 15:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-g6f9rwaj.jpg&quot; alt=&quot;Close-up of a vibrant fingerprint with swirling ridge patterns in orange, red, blue, and yellow iridescent colors with glittery highlights.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;Every few weeks, another essay or YouTube video announces that AI has killed craft. One of my favorite designers writing about design, &lt;a href=&quot;https://www.chrbutler.com/craft-is-untouchable&quot;&gt;Christopher Butler&lt;/a&gt;, goes the other way:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;No knowledge I possess about design—the incorporeal understanding that makes what I create better than an off-the-shelf template or something done by someone without my experience—is made irrelevant by AI. Nor is it contradicted by my use of AI tools. Structure still communicates before content. Visual hierarchy still guides attention. Negative space still creates rhythm. These principles don&amp;#39;t vanish because I&amp;#39;m working through AI rather than directly manipulating pixels. The craft migrates to a different level of abstraction. But it remains &lt;em&gt;craft&lt;/em&gt;.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Butler&amp;#39;s claim is that the principles don&amp;#39;t vanish; they operate at a higher altitude. The unfinished part is naming where that altitude actually is. For product designers, it&amp;#39;s concept and hierarchy: the decisions that require knowing the user and the stake someone is willing to carry. The generated layout and the choice of components are still outputs. What&amp;#39;s left of design is the judgment that picks between them.&lt;/p&gt;
&lt;p&gt;Butler&amp;#39;s sharper line is the binary between consumption and practice:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Someone who generates an interface with AI and calls it done isn&amp;#39;t practicing craft. They&amp;#39;re consuming convenience. Someone who generates an interface, inspects it, questions what it&amp;#39;s actually communicating, refines the structure, generates again, compares variations, understands why one serves the user better than another—they&amp;#39;re practicing craft. They&amp;#39;re building knowledge through iteration. The tool doesn&amp;#39;t determine whether you&amp;#39;re working with craft. Your approach does.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That&amp;#39;s Jiro Ono&amp;#39;s &lt;a href=&quot;/2025/08/why-im-keeping-my-design-title&quot;&gt;&lt;em&gt;shokunin&lt;/em&gt;&lt;/a&gt; applied to interfaces: craft as lifelong practice, not manual labor. A camera doesn&amp;#39;t take a picture, and a model doesn&amp;#39;t make a design; the person deciding what&amp;#39;s worth making does. That decision is the craft.&lt;/p&gt;
&lt;p&gt;Butler&amp;#39;s argument reassures me. What worries me is how optional that decision is becoming. The output already looks finished. The designers who keep asking why one version serves the user better than another will still be designers in five years. The rest may still have jobs, as operators of a tool doing the work their taste used to do.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://www.chrbutler.com/craft-is-untouchable?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-g6f9rwaj.jpg" length="0" type="image/jpeg"/></item><item><title>The Full Stack Builder: The End of the Design Process as We Know It</title><link>https://rogerwong.me/2026/04/full-stack-builder-design-process-end?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/full-stack-builder-design-process-end</guid><description>Tommaso Nervegna writes about LinkedIn killing its Associate Product Manager program and replacing it with a new role called the &amp;quot;Full Stack Builder.&amp;quot; The structural bet is interesting, but the finding from their rollout is what matters: The expectation was that AI would be a great equaliz...</description><pubDate>Fri, 10 Apr 2026 16:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-fygzdm0i.jpg&quot; alt=&quot;A Renaissance-era man studies blueprint sketches on a glowing drafting table while a giant mechanical lobster draws on the plans with an ornate pen.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;&lt;a href=&quot;https://nervegna.substack.com/p/the-full-stack-builder-the-end-of&quot;&gt;Tommaso Nervegna&lt;/a&gt; writes about LinkedIn killing its Associate Product Manager program and replacing it with a new role called the &amp;quot;Full Stack Builder.&amp;quot; The structural bet is interesting, but the finding from their rollout is what matters:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The expectation was that AI would be a great equalizer: juniors would benefit most because AI would close their skill gaps, while seniors would resist the change. The reality was the opposite. Top performers adopted AI fastest and derived the most value from it. Why? Because they had the judgment and experience to know what to ask for, how to evaluate the output, and where to apply it for maximum leverage.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That tracks with everything I&amp;#39;ve predicted, experienced, and seen. The skill that makes AI useful is knowing what good looks like before &lt;em&gt;and&lt;/em&gt; after the model generates something. That ability comes from reps.&lt;/p&gt;
&lt;p&gt;Nervegna distills LinkedIn CPO Tomer Cohen&amp;#39;s thesis to five skills AI cannot automate:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The five skills that AI cannot automate, according to Cohen, are Vision, Empathy, Communication, Creativity, and Judgment. As he puts it: &amp;quot;I&amp;#39;m working hard to automate everything else.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The operational version:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The critical insight: the builder orchestrates the agents. The agents execute. Judgment stays human. This is not about replacing people with AI. It&amp;#39;s about compressing the team needed to ship something meaningful from fifteen people to three - or even one.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I&amp;#39;ve been calling this the orchestrator gap: the distance between a designer who uses AI and one who directs it. LinkedIn just gave it a job title. I think we&amp;#39;ll see more companies go this way. Whether it&amp;#39;s a good idea remains to be seen.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://nervegna.substack.com/p/the-full-stack-builder-the-end-of?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-fygzdm0i.jpg" length="0" type="image/jpeg"/></item><item><title>I Built a Design Team Out of AI Agents</title><link>https://rogerwong.me/2026/04/design-team-ai-agents?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/design-team-ai-agents</guid><description>Specialization is the whole game. Give an agent a specific role and clear constraints, and the quality of the output changes completely. I&amp;#39;ve been learning this firsthand with Claude Code skills. Marie Claire Dean took that principle and scaled it into an open-source system called Designpowers. ...</description><pubDate>Thu, 09 Apr 2026 16:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-8ek4z6b0.jpg&quot; alt=&quot;3D illustration of abstract biological structures resembling a protein or molecule, with colorful folded shapes, helices, and spheres floating against a dark blue background.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;Specialization is the whole game. Give an agent a specific role and clear constraints, and the quality of the output changes completely. I&amp;#39;ve been learning this firsthand with Claude Code skills.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://marieclairedean.substack.com/p/i-built-a-design-team-out-of-ai-agents&quot;&gt;Marie Claire Dean&lt;/a&gt; took that principle and scaled it into an open-source system called Designpowers. Her reasoning:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Most AI tools give you one assistant. You ask it something, it answers, and you figure out what to do next. That&amp;#39;s not how design teams work.&lt;/p&gt;
&lt;p&gt;Design teams work because a strategist thinks differently from a visual designer, who thinks differently from a content writer, who thinks differently from someone doing accessibility review. The handoffs between those perspectives are where the work gets better. The friction is productive.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Her team of ten covers the full pipeline from discovery through shipping, with dedicated specialists for strategy, visual design, content, motion, accessibility, and critique. All ten share one design state document, with the human directing.&lt;/p&gt;
&lt;p&gt;On what she learned building it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The act of encoding a design process forces you to decide what the handoffs actually are. When does strategy end and visual design begin? What does the content writer need from the strategist before they can start? What happens when the accessibility reviewer and the design critic disagree?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That&amp;#39;s the same clarity I&amp;#39;ve found writing Claude Code skills: what does this agent need to know, and where does its scope end? On where the human stays essential:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The idea is simple: agents can verify that a design is correct, aligned to the brief, accessible, consistent. They can&amp;#39;t tell you whether it&amp;#39;s &lt;em&gt;beautiful&lt;/em&gt;. That&amp;#39;s your job.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/Owl-Listener/designpowers&quot;&gt;full system is on GitHub&lt;/a&gt;.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://marieclairedean.substack.com/p/i-built-a-design-team-out-of-ai-agents?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-8ek4z6b0.jpg" length="0" type="image/jpeg"/></item><item><title>A Practical Guide To Design Principles — Smashing Magazine</title><link>https://rogerwong.me/2026/04/practical-guide-design-principles?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/practical-guide-design-principles</guid><description>I&amp;#39;ve watched design team values die in a Confluence page. The offsite happens, the Post-Its get transcribed, the principles get written up with care, and then everyone goes back to their desks and ships exactly the way they did before. I&amp;#39;ve seen it with product principles and brand values to...</description><pubDate>Wed, 08 Apr 2026 16:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-5kjrzzce.jpg&quot; alt=&quot;Smashing Magazine article title card: &amp;quot;A Practical Guide To Design Principles&amp;quot; by Vitaly Friedman, tagged Design, UX, UI.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;I&amp;#39;ve watched design team values die in a Confluence page. The offsite happens, the Post-Its get transcribed, the principles get written up with care, and then everyone goes back to their desks and ships exactly the way they did before. I&amp;#39;ve seen it with product principles and brand values too. The deck gets built, implementation starts, and the deck gets forgotten.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://smashingmagazine.com/2026/04/practical-guide-design-principles/&quot;&gt;Vitaly Friedman&lt;/a&gt;, writing for &lt;em&gt;Smashing Magazine&lt;/em&gt;, on why this matters more than ever:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We often see design principles as rigid guidelines that dictate design decisions. But actually, they are an incredible tool to rally the team around a shared purpose and document the values and beliefs that an organization embodies. They align teams and inform decision-making. They also keep us afloat amidst all the hype, big assumptions, desire for faster delivery, and AI workslop.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Friedman again:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;In times when we can generate any passable design and code within minutes, we need to decide better what&amp;#39;s worth designing and building — and what values we want our products to embody. It&amp;#39;s similar to voice and tone. You might not design it intentionally, but then end users will define it for you. And so, without principles, many company initiatives are random, sporadic, ad-hoc — and feel vague, inconsistent, or simply dull to the outside world.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;You might not write principles intentionally, but your product will have them anyway. The question is whether you chose them or inherited them by default.&lt;/p&gt;
&lt;p&gt;Friedman closes with the part most teams skip:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Creating principles is only a small portion of the work; most work is about effectively sharing and embedding them. It&amp;#39;s difficult to get anywhere without finding ways to make design principles a default — by revisiting settings, templates, naming conventions, and output. Principles help avoid endless discussions that often stem from personal preferences or taste. But design should not be a matter of taste; it must be guided by our goals and values.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Creating principles feels productive. But alignment without embedding is a Confluence page nobody opens twice. Principles have to show up in the Figma component library, the ticket template, the review rubric. They have to be repeated until they&amp;#39;re ingrained. They have to become the path of least resistance.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://smashingmagazine.com/2026/04/practical-guide-design-principles/?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-5kjrzzce.jpg" length="0" type="image/jpeg"/></item><item><title>The Existential Designer: Facilitating Meaning Through Interaction</title><link>https://rogerwong.me/2026/04/existential-designer-meaning-interaction?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/existential-designer-meaning-interaction</guid><description>Dan Saffer applies mid-century existentialism to the question of what &amp;quot;meaning&amp;quot; actually requires of the people building digital products, and the result is unusually rigorous. His sharpest move is applying Sartre&amp;#39;s concept of &amp;quot;projects&amp;quot; to AI tools: When someone uses ChatGPT...</description><pubDate>Tue, 07 Apr 2026 16:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-nun3g5g5.jpg&quot; alt=&quot;Collage of five black-and-white portrait photos of mid-20th century philosophers, including one woman and four men, one holding a pipe.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;&lt;a href=&quot;https://odannyboy.medium.com/the-existential-designer-facilitating-meaning-through-interaction-8952c358ec52&quot;&gt;Dan Saffer&lt;/a&gt; applies mid-century existentialism to the question of what &amp;quot;meaning&amp;quot; actually requires of the people building digital products, and the result is unusually rigorous. His sharpest move is applying Sartre&amp;#39;s concept of &amp;quot;projects&amp;quot; to AI tools:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;When someone uses ChatGPT to write an essay, the Sartrean question is: whose project is this really? If the user is exploring ideas and using the tool as a thinking partner, they&amp;#39;re taking it up into their own meaning-making project. But if they&amp;#39;re pasting in a prompt and submitting the output unchanged, the system has effectively become the meaning-maker, and the user has become a delivery mechanism. The same tool can function either way. The design question is which relationship the system encourages.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Saffer connects this to Camus and the problem of frictionless design:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;When every friction is removed in the name of efficiency, the activity can be hollowed out. There is nothing left to push against, and meaning drains away. This is something that AI systems have become exceedingly good at. Push the sparkle button, the task is done for you, and you have learned nothing and enjoyed nothing.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The HCI/UX field spent decades optimizing for friction removal. Saffer&amp;#39;s argument is that some friction is where the meaning lives. Design the struggle away and you don&amp;#39;t help the user. You empty the experience. Not every friction should be removed.&lt;/p&gt;
&lt;p&gt;Saffer&amp;#39;s closing:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;This sensibility insists that users are not information processors, not customers, not eyeballs, not tapping fingers, and not data sources. They are meaning-making beings whose freedom and dignity are at stake in every interaction. It asks designers to take seriously the existential weight of what they build. The systems we design become part of the conditions of human existence, shaping what people can choose, what they can see, who they can become.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Saffer covers Sartre, Camus, Kierkegaard, Heidegger, and de Beauvoir in the &lt;a href=&quot;https://odannyboy.medium.com/the-existential-designer-facilitating-meaning-through-interaction-8952c358ec52&quot;&gt;full piece&lt;/a&gt;, each applied to contemporary design problems. It&amp;#39;s a lot, and it&amp;#39;s all good.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://odannyboy.medium.com/the-existential-designer-facilitating-meaning-through-interaction-8952c358ec52?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-nun3g5g5.jpg" length="0" type="image/jpeg"/></item><item><title>Why are designers, engineers, and product managers in a ‘three-way standoff’?</title><link>https://rogerwong.me/2026/04/designers-engineers-pms-three-way-standoff?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/designers-engineers-pms-three-way-standoff</guid><description>Yours truly got quoted in Fast Company. Grace Snelling, surveying the industry reaction to Lenny Rachitsky&amp;#39;s TrueUp hiring data, pulled a comment I left under Rachitsky&amp;#39;s original Twitter post: Designers have designed themselves out of the equation because of design systems. But, IMHO, the s...</description><pubDate>Mon, 06 Apr 2026 17:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-5vru5kt0.webp&quot; alt=&quot;Three hands pointing toward a central point on a red background, surrounded by colorful lightning bolt shapes in green, blue, and pink.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;Yours truly got quoted in &lt;em&gt;Fast Company&lt;/em&gt;. &lt;a href=&quot;https://www.fastcompany.com/91519219/why-are-designers-engineers-and-product-managers-in-a-three-way-stand-off&quot;&gt;Grace Snelling&lt;/a&gt;, surveying the industry reaction to &lt;a href=&quot;https://open.substack.com/pub/lenny/p/state-of-the-product-job-market-in-ee9?utm_campaign=post-expanded-share&amp;utm_medium=web&quot;&gt;Lenny Rachitsky&amp;#39;s TrueUp hiring data&lt;/a&gt;, pulled a &lt;a href=&quot;https://x.com/wong_digital/status/2036832856941490247&quot;&gt;comment I left&lt;/a&gt; under Rachitsky&amp;#39;s original Twitter post:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Designers have designed themselves out of the equation because of design systems. But, IMHO, the secret sauce has never been the UI. It was the workflows and looking across the experience holistically.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let me expand on that. The UI has always been the easiest part of product design. Design systems made that even more true. What separates a great product from a mediocre one is understanding our users deeply enough to create experiences that actually delight them. That understanding is the work AI can&amp;#39;t do, and it&amp;#39;s the work &lt;a href=&quot;/2026/02/product-design-is-changing&quot;&gt;too many teams were already skipping&lt;/a&gt; before any standoff started.&lt;/p&gt;
&lt;p&gt;The data behind the standoff: Rachitsky&amp;#39;s analysis of TrueUp&amp;#39;s job market tracker shows design roles have been flat since early 2023 while PM and engineering roles surged. (Quick side note: this data is for tech startups, not the general tech industry or design industry at large.) His theory:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I don&amp;#39;t know exactly what&amp;#39;s going on here, but it does feel AI-related. [...] Unlike PM and eng, which started growing in 2024 (two years post-ChatGPT), design didn&amp;#39;t. If I had to venture a theory, I&amp;#39;d say that because AI is allowing engineers to move so quickly, there&amp;#39;s less opportunity—and less desire—to involve the traditional design process.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Claire Vo, founder of ChatPRD, puts the harder version of why:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Often design teams &amp;amp; designers are the most resistant to change org in the EPD triad, with highly vocal AI opponents, and little skill or interest in the art of campaigning for influence or resources. [...] If a PM or engineer can get 85% there with tailwind and a dream, you better come to the table with more than &amp;#39;I represent the user.&amp;#39;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&amp;quot;I represent the user&amp;quot; was never enough on its own. It just went unchallenged when designers were the only ones who could ship polished interfaces.&lt;/p&gt;
&lt;p&gt;Anthropic&amp;#39;s chief design officer Joel Lewenstein on where the EPD triad actually lands:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;I think there&amp;#39;s a lot of role collapse at the very beginning, but there are still pretty clear swim lanes as things get into the later stages of product development. [...] It&amp;#39;s like a Venn diagram that&amp;#39;s coming closer together.&lt;/p&gt;
&lt;/blockquote&gt;

            &lt;p&gt;&lt;a href=&quot;https://www.fastcompany.com/91519219/why-are-designers-engineers-and-product-managers-in-a-three-way-stand-off?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-5vru5kt0.webp" length="0" type="image/webp"/></item><item><title>The ground is shaking: Why designers must flip the script on AI</title><link>https://rogerwong.me/2026/04/designers-flip-script-ai?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/designers-flip-script-ai</guid><description>Silicon Valley&amp;#39;s pitch to designers is that AI is the more knowledgeable partner now, so they should get good at prompting it. Write better instructions, get better output. Peter Zakrzewski, writing for UX Collective, pushes back: The current Silicon Valley pitch to designers is essentially this...</description><pubDate>Mon, 06 Apr 2026 15:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-jq33muv9.jpg&quot; alt=&quot;A mechanical robotic hand reaching upward against a stormy sky, overlaid with a bold red banner reading &amp;quot;Form follows nothing.&amp;quot;&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;Silicon Valley&amp;#39;s pitch to designers is that AI is the more knowledgeable partner now, so they should get good at prompting it. Write better instructions, get better output.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://uxdesign.cc/the-ground-is-shaking-why-designers-must-flip-the-script-on-ai-9211053bbadd&quot;&gt;Peter Zakrzewski&lt;/a&gt;, writing for &lt;em&gt;UX Collective&lt;/em&gt;, pushes back:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The current Silicon Valley pitch to designers is essentially this: AI is your MKO now. It knows more patterns than you do. It executes faster than you do. It can code. Your job is to learn how to give it good instructions — to become a fluent prompter of a more capable system. I want to challenge that framing directly.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;His challenge starts with a concrete test. He asked three leading AI systems to render a dining table with a concrete slab top resting on dry spaghetti legs, then show the scene five seconds after the legs gave way. All three rendered the impossibility with total confidence. None could feel that the physics don&amp;#39;t work.&lt;/p&gt;
&lt;p&gt;That test illustrates what Zakrzewski calls the Inversion Error:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We have built a Symbolic Giant resting on an Enactive Void. These systems can write about gravity with technical or even poetic fluency but cannot feel it. They can describe a structure but cannot tell you whether it will stand or fall. The ground is shaking because the floor is missing.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&amp;quot;Symbolic Giant resting on an Enactive Void&amp;quot; is a mouthful, but the floor metaphor does the work: AI&amp;#39;s language fluency masks a total absence of spatial, embodied reasoning. The kind designers rely on every day without naming it. Zakrzewski on what that means for the prompting pitch:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Designers do not think primarily in sentences. Our human cognition is deeply embodied. We think in diagrams, in spatial relationships, in load paths and sight lines and in the non-discursive logic of things that must connect to other things in three-dimensional space. [...] We are being asked to compress years of embodied cognition and our three-dimensional spatial judgment into a text prompt and then accept whatever the machine generates as an adequate rendering of our intent. We are, in other words, being asked to abandon the very capability that the AI lacks and that our projects require.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;When someone tells designers to compress spatial judgment into a text prompt, they&amp;#39;re asking designers to throw away the one capability AI genuinely lacks and the one we&amp;#39;re genuinely great at.&lt;/p&gt;
&lt;p&gt;There was a theme to some of the posts on this blog last week—about how words should come before the pixels. I made &lt;a href=&quot;https://newsletter.rogerwong.me/p/words-before-pixels&quot;&gt;a similar argument in the newsletter&lt;/a&gt;: the work is getting more verbal and conceptual, but the eye stays. Zakrzewski makes the case for what words alone can&amp;#39;t carry: the spatial, embodied judgment that tells you whether the thing will actually stand.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://uxdesign.cc/the-ground-is-shaking-why-designers-must-flip-the-script-on-ai-9211053bbadd?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-jq33muv9.jpg" length="0" type="image/jpeg"/></item><item><title>Beyond the Logo: How LA28 Turns Branding into a Platform for Culture</title><link>https://rogerwong.me/2026/04/la28-branding-platform-culture?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/la28-branding-platform-culture</guid><description>Jessica Deseo, writing for PRINT Magazine, reports on a talk by Ric Edwards, VP of Brand Design at LA28. His challenge: branding an Olympics for a city that resists a single identity. Edwards on LA: &amp;quot;There&amp;#39;s no one version of it. You would do a disservice if you limited it to one story.&amp;quo...</description><pubDate>Fri, 03 Apr 2026 19:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-osy84m9i.jpg&quot; alt=&quot;LA28 Olympics logo with three colorful tiles against a blurred bird of paradise flower background.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;&lt;a href=&quot;https://www.printmag.com/branding-identity-design/beyond-the-logo-how-la28-turns-branding-into-a-platform-for-culture/&quot;&gt;Jessica Deseo&lt;/a&gt;, writing for &lt;em&gt;PRINT Magazine&lt;/em&gt;, reports on a talk by Ric Edwards, VP of Brand Design at LA28. His challenge: branding an Olympics for a city that resists a single identity. Edwards on LA:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;There&amp;#39;s no one version of it. You would do a disservice if you limited it to one story.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I spent a few years in Los Angeles and visit regularly. It&amp;#39;s sprawling and each area is distinct. Edwards is right. So instead of a fixed logo, LA28 built a system. The &amp;quot;A&amp;quot; in the emblem is a canvas, reinterpreted by athletes, artists, and communities. The L, 2, and 8 are set in different typefaces. The brand holds many narratives rather than collapsing into one.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;We&amp;#39;re trying to be a stage for all of those stories.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That word, &amp;quot;stage,&amp;quot; is the whole strategy in one sentence. A stage doesn&amp;#39;t perform. It creates the conditions for others to perform on it. That&amp;#39;s a fundamentally different job than traditional branding, which is usually about control: one mark, one voice, one set of guidelines. LA28 is designing for distributed authorship at global scale, and Edwards is honest about what that costs:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;Operationally, it&amp;#39;s a nightmare.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Every variation of the emblem has to work across stadiums, broadcast, merchandise, and digital. And then each creative contribution has to pass through legal, production, and brand governance. The ambition is real, and so is the complexity behind it. The Olympics is…well…the Olympics of branding.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://www.printmag.com/branding-identity-design/beyond-the-logo-how-la28-turns-branding-into-a-platform-for-culture/?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-osy84m9i.jpg" length="0" type="image/jpeg"/></item><item><title>AI Design Field Guide</title><link>https://rogerwong.me/2026/04/ai-design-field-guide-nate-parrott?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/ai-design-field-guide-nate-parrott</guid><description>Nate Parrott, a product designer at Anthropic, in an interview with Ryan Mather for AI Design Field Guide: More Google Docs than you&amp;#39;d think. More Slack posts than you&amp;#39;d think. I meant what I said earlier: I think that this is the era of designers who design with words more so than designing...</description><pubDate>Fri, 03 Apr 2026 17:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-h8un5nag.png&quot; alt=&quot;Vibrant abstract illustration of stylized flowers with glowing, blurred edges in bold red, yellow, orange, pink, and blue tones against a soft gradient background.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;&lt;a href=&quot;https://www.aidesignfieldguide.com/articles/nate-parrott&quot;&gt;Nate Parrott&lt;/a&gt;, a product designer at Anthropic, in an interview with Ryan Mather for &lt;em&gt;AI Design Field Guide&lt;/em&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;More Google Docs than you&amp;#39;d think. More Slack posts than you&amp;#39;d think. I meant what I said earlier: I think that this is the era of designers who design with words more so than designing with pixels.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Parrott describes a content design team whose job is making alien concepts legible:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We have several people at the company on the design team whose job is content design. Their job is basically to look at concepts which are very alien, and figure out how to make them legible to human beings. They don&amp;#39;t draw any pixels, but their work is really important because they are literally thinking about the words we use to describe and the mental models we expect people to put on that will make this stuff work.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The Figma work, Parrott says, is &amp;quot;the easy part.&amp;quot; He uses Anthropic&amp;#39;s design system, drops in components, and moves on. The hard work is upstream: expressing the ideas, figuring out the right language, talking to users. The production of screens has become the smallest slice of the job.&lt;/p&gt;
&lt;p&gt;Jenny Wen &lt;a href=&quot;/2026/03/the-new-era-of-ux-designers&quot;&gt;described&lt;/a&gt; designers at Anthropic shipping code, prototyping against the live model, stretching into PM territory. Parrott is describing the same shift from a different angle. The deliverable used to be the mockup. Now the deliverable is the thinking that precedes it.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://www.aidesignfieldguide.com/articles/nate-parrott?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-h8un5nag.png" length="0" type="image/png"/></item><item><title>Claude Code Unpacked</title><link>https://rogerwong.me/2026/04/claude-code-source-leak?utm_source=rss&amp;utm_medium=feed&amp;utm_campaign=main-feed</link><guid isPermaLink="true">https://rogerwong.me/2026/04/claude-code-source-leak</guid><description>Anthropic accidentally included a debug file in a recent update to Claude Code. That file let people reconstruct the entire internal codebase: roughly 500,000 lines of code across nearly 2,000 files. It wasn&amp;#39;t a hack or breach—it was a packaging mistake. Anthropic cited &amp;quot;human error.&amp;quot; ...</description><pubDate>Fri, 03 Apr 2026 15:00:00 GMT</pubDate><content:encoded>
            &lt;p&gt;&lt;img src=&quot;https://rogerwong.b-cdn.net/media/preview-yeps52oj.png&quot; alt=&quot;Claude Code Unpacked title card showing stats: 1,900+ files, 519K+ lines of code, 53+ tools, 95+ commands, featured on Hacker News.&quot; /&gt;&lt;/p&gt;
            &lt;p&gt;Anthropic accidentally included a debug file in a recent update to Claude Code. That file let people reconstruct the entire internal codebase: roughly 500,000 lines of code across nearly 2,000 files. It wasn&amp;#39;t a hack or breach—it was a packaging mistake. Anthropic cited &amp;quot;human error.&amp;quot; No customer data or AI model secrets were exposed. What leaked was the scaffolding around the AI, the layer that decides how Claude Code thinks, acts, and talks to you.&lt;/p&gt;
&lt;p&gt;The reconstructed code hit GitHub and &lt;a href=&quot;https://www.theverge.com/ai-artificial-intelligence/904776/anthropic-claude-source-code-leak&quot;&gt;became one of the fastest-starred repos in the platform&amp;#39;s history&lt;/a&gt; before Anthropic started issuing takedowns. People found an always-on background agent mode codenamed &amp;quot;KAIROS,&amp;quot; a &amp;quot;dream&amp;quot; mode for continuous ideation, and Tamagotchi-style pet behavior baked into the tool. (See for yourself! Type &lt;code&gt;/buddy&lt;/code&gt; and see what happens.) &lt;a href=&quot;https://arstechnica.com/ai/2026/04/heres-what-that-claude-code-source-leak-reveals-about-anthropics-plans/&quot;&gt;Ars Technica has a good breakdown&lt;/a&gt; of what the code reveals about where Anthropic is headed.&lt;/p&gt;
&lt;p&gt;A developer in France named &lt;a href=&quot;https://ccunpacked.dev/&quot;&gt;Zack&lt;/a&gt; mapped the entire codebase and created this microsite to illustrate what happens when you send a message to Claude Code. Fascinating.&lt;/p&gt;

            &lt;p&gt;&lt;a href=&quot;https://ccunpacked.dev/?ref=rogerwong.me&quot; target=&quot;_blank&quot; rel=&quot;noopener&quot;&gt;Read the full article →&lt;/a&gt;&lt;/p&gt;
          </content:encoded><category>linked</category><enclosure url="https://rogerwong.b-cdn.net/media/preview-yeps52oj.png" length="0" type="image/png"/></item></channel></rss>