
155 posts tagged with “product design”

It’s always interesting to hear how others think about the design process from the outside. Eli Woolery and Aaron Walter interview creativity researcher and author Keith Sawyer to learn about what he’s found to be true after interviewing hundreds of art and design professors and students over a decade for his new book:

The creativity doesn’t come at the beginning. You don’t start by having a brilliant insight. You just dive into the process. And then as you’re engaging in the process, the ideas emerge.

Sawyer emphasizes that art and design schools are not just teaching students how to create, but how to “see.” He found that many professors believe students already possess creativity, but the role of art and design school is to help them realize and develop that potential by teaching them to observe, critique, and reflect more deeply on their own work.

When I interviewed these artists and designers, I would say, how are you teaching students how to create? And everyone was quite uncomfortable with that question. A lot of them would say, we’re not teaching students how to create. Or they’ll say something like, the students are already creative. We’re teaching them how to realize the potential they have as creatives.

Sawyer notes that the hardest thing for students to learn is how to see their own work—that is, to understand what they have actually made rather than sticking rigidly to their original idea.

When we talk about learning to see, you’re talking about learning to see yourself. The hardest thing to teach a student is how to see their own work, to see something that they’ve just generated. Because these studio classes, students have opportunities to share their work in interim stages along the way. You don’t go off and work for two weeks or four weeks and then bring back in the finished product. You bring in your interim and you get a lot of feedback and comments on it.

And what the professors tell me is these 18-, 19-, and 20-year-olds, they don’t realize what they put on the canvas. Or if they’re a graphic designer, they don’t realize what it is that they’ve generated. A lot of times, they’ll think they’ve done a certain thing. They have this kind of linear model of the creative process: I’m going to have an idea and I’m going to execute it. So they’ll start with their idea and they’ll execute it. They’ll think that what they put on the canvas is their original idea, but in a lot of cases, it’s not. They can’t see what they’ve done themselves. So that’s kind of powerful: how do you teach someone that what you put on the canvas isn’t what you say you’re doing?

You can’t just tell them, “Hey, you’re wrong. Let me tell you what you’ve done.” You have to lead someone through that. You have to walk them through it.

One way you do it is you put students in the classroom together and then have them comment on other students’ work so they will be on the other side. And they’ll see another student talking about what they’ve done and not really describing what’s really on the canvas.

So I think that’s the hardest thing about learning to see is learning to see yourself, learning to see your own work.

I think that’s the power of art and design school, this studio learning environment. I’m biased, of course, because that’s how I learned. Those who are self-taught or have gone through bootcamps miss out on a lot of this experience. The other thing the design school environment teaches is how to give and take critiques. It’s about the work, not you.

Keith Sawyer: Become more creative by learning to see


Episode 149 of the Design Better Podcast. Creativity comes from learning to observe and connect ideas, not from lone flashes of genius. Keith Sawyer shows that artists and designers discover vision through iterative work and embracing ambiguity.

designbetterpodcast.com

Our profession is changing rapidly. I’ve been covering that here for nearly a year now. Lots of posts come across my desk that say similar things. Tom Scott repeats a lot of what’s been said, but I’ll pull out a couple nuggets that caught my eye.

He declares that “Hands-on is the new default.” Quoting Vitor Amaral, a designer at Intercom:

Being craft-focused means staying hands-on, regardless of specialty or seniority. This won’t be a niche role, it will be an expectation for everyone, from individual contributors to VPs. The value lies in deeply understanding how things actually work, and that comes from direct involvement in the work.

As AI speeds up execution, the craft itself will become easier, but what will matter most is the critical judgment to craft the right thing, move fast, and push the boundaries of quality.

For those looking for work, Scott says, “You NEED to change how you find a job.” Quoting Felix Haas, investor and designer at Lovable:

Start building a real product and get a feeling for what it means to push something out into the market

Learn to use AI to prototype interactively → even at a basic level

Get comfortable with AI tools early → they’ll be your co-designer / sparring partner

Focus on solving real problems, not just making things look good (which was a problem for a very long time in the design space)

Scott also says that “Design roles are merging,” and Ridd from Dive Club illustrates the point:

We are seeing a collapse of design’s monopoly on ideation where designers no longer “own” the early idea stage. PMs, engineers, and others are now prototyping directly with new tools.

If designers move too slow, others will fill the gap. The line between PM, engineer, and designer is thinner than ever. Anyone tool-savvy can spin up prototypes — which raises the bar for designers.

Impact comes from working prototypes, not just facilitation. Leading brainstorms or “owning process” isn’t enough. Real influence comes from putting tangible prototypes in front of the team and aligning everyone around them.

Design is still best positioned — but not guaranteed

Designers could lead this shift, but only if they step up. Ownership of ideation is earned, not assumed.

The future of product design


The future belongs to AI-native designers

verifiedinsider.substack.com

I love this framing by Patrizia Bertini:

Let me offer a different provocation: AI is not coming for your job. It is coming for your tasks. And if you cannot distinguish between the two, then yes — you should be worried.

Going further, she distinguishes between output and outcome:

Output is what a process produces. Code. Copy. Designs. Legal briefs. Medical recommendations. Outputs are the tangible results of a system executing its programmed or prescribed function — the direct product of following steps, rules, or algorithms. The term emerged in the industrial era, literally describing the quantity of coal or iron a mine could extract in a given period. Output depends entirely on the efficiency and capability of the process that generates it.

Outcome is what happens when that output meets reality. An outcome requires context, interpretation, application, and crucially — intentionality. Outcomes demand understanding not just what was produced, but why it matters, who it affects, and what consequences ripple from it. Where outputs measure productivity, outcomes measure impact. They are the ultimate change or consequence that results from applying an output with purpose and judgment.

She argues that, “AI can generate outputs. It cannot, however, create outcomes.”

This reminds me of a recent thread by engineer Marc Love:

It’s insane just how much how I work has changed in the last 18 months.

I almost never hand write code anymore except when giving examples during planning conversations with LLMs.

I build multiple full features per day, each of which would’ve taken me a week or more to hand write. Building full drafts and discarding them is basically free.

Well over half of my day is spent ideating, doing systems design, and deciding what and what not to build.

It’s still conceptually the same job, but if i list out the specific things i do in a day versus 18 months ago, it’s almost completely different.

Care about the outcome, not the output.


When machines make outputs, humans must own outcomes

The future of work in the age of AI and deepware.

uxdesign.cc

When I read this, I thought to myself, “Geez, this is what a designer does.” I think there is a lot of overlap between what we do as product designers and what product managers do. One critical one—in my opinion, and why we’re calling ourselves product designers—is product sense. Product sense is the skill of finding real user needs and creating solutions that have impact.

So I think people can read this with two lenses:

  • If you’re a designer who executes the assignments you’re given, jumping into Figma right away, read this to be more well-rounded and understand the why of what you’re making.
  • If you’re a designer who spends 80% of your time questioning everything and defining the problem, and only 20% of your time in Figma, read this to see how much overlap you actually have with a PM.

BTW, if you’re in the first bucket, I highly encourage you to gain the skills necessary to migrate to the second bucket.

While designers often stay on top of visual design trends or the latest best practices from NNG, Jules Walter suggests an even wider aperture. Writing in Lenny’s Newsletter:

Another practice for developing creativity is to spend time learning about emerging trends in technology, society, and regulations. Changes in the industry create opportunities for launching new products that can address user needs in new ways. As a PM, you want to understand what’s possible in your domain in order to come up with creative solutions.


How to develop product sense

Jules Walter shares a ton of actionable and practical advice to develop your product sense, explains what product sense is, how to know if you’re getting better,

lennysnewsletter.com

The headline rings true to me because that’s what I look for in designers and how I run my team. The software that we build is too complex and too mission-critical for designers to vibe-code—at least given today’s tooling. But each one of the designers on my team can fill in for a PM when they’re on vacation.

Kai Wong, writing in UX Collective:

One thing I’ve learned, talking with 15 design leaders (and one CEO), is that a ‘designer who codes’ may look appealing, but a ‘designer who understands business’ is far more valuable and more challenging to replace.

You already possess the core skill that makes this transition possible: the ability to understand users with systematic observation and thoughtful questioning.

The only difference, now, is learning to apply that same methodology to understand your business.

Strategic thinking doesn’t require fancy degrees (although it may sometimes help).

Ask strategic questions about business goals. Understand how to balance user and business needs. Frame your design decisions in terms of measurable business impact.


Why many employers want Designers to think like PMs, not Devs

How asking questions, which used to annoy teams, is now critical to UX’s future

uxdesign.cc

As much as I defended the preview, and as much as Apple wants to make Liquid Glass a thing, the new UI is continuing to draw criticism. Dan Moren for Six Colors:

“Glass” is the overall look of these updates, and it’s everywhere. Transparent, frosted, distorting. In some places it looks quite cool, such as in the edge distortion when you’re swiping up on the lock screen. But elsewhere, it seems to me that glass may not be quite the right material for the job. The Glass House might be architecturally impressive, but it’s not particularly practical.

It’s also a definite philosophical choice, and one that’s going to engender some criticism—much of it well-deserved. Apple has argued that it’s about getting controls out of the way, but is that really what’s happening here? It’s hard to argue that having a transparent button sitting right on top of your email is helping that email be more prominent. To take this argument to its logical conclusion, why is the keyboard not fully transparent glass over our content?

I’ve yet to upgrade myself. I will say that everyone dislikes change. Let’s not forget that the now-ubiquitous flat design introduced by iOS 7 was also criticized.


iOS 26 Review: Through a glass, liquidly

iOS 26! It feels like just last year we were here discussing iOS 18. How time flies. After a year that saw the debut of Apple Intelligence and the subsequent controversy over the features that it d…

sixcolors.com

Blood in the Feed: Social Media’s Deadly Design

The assassination of Charlie Kirk on September 10, 2025, marked a horrifying inflection point in the growing debate over how digital platforms amplify rage and destabilize politics. As someone who had already stepped back from social media after Trump’s re-election, watching these events unfold from a distance only confirmed my decision. My feeds had become pits of despair, grievances, and overall negativity that did my mental health no good. While I understand the need to shine a light on the atrocities of Trump and his government, the constant barrage was too much. So I mostly opted out, save for the occasional promotion of my writing.

Kirk’s death feels like the inevitable conclusion of systems we’ve built—systems that reward outrage, amplify division, and transform human beings into content machines optimized for engagement at any cost.

Jason Spielman put up a case study on his site for his work on Google’s NotebookLM:

The mental model of NotebookLM was built around the creation journey: starting with inputs, moving through conversation, and ending with outputs. Users bring in their sources (documents, notes, references), then interact with them through chat by asking questions, clarifying, and synthesizing before transforming those insights into structured outputs like notes, study guides, and Audio Overviews.

And yes, he includes a sketch he did on the back of a napkin.

I’ve always wondered about the UX of NotebookLM. It’s not typical and, if I’m being honest, not exactly super intuitive. But after a while, it does make sense. Maybe I’m the outlier though, because Spielman’s grandmother found it easy. In an interview last year on Sequoia Capital’s Training Data, he recalls:

I actually do think part of the explosion of audio overviews was the fact it was a simple one click experience. I was on the phone with my grandma trying to explain her how to use it and it actually didn’t take any explanation. I’m like, “Drop in a source.” And she’s like, “Oh! I see. I click this button to generate it.” And I think that the ease of creation is really actually what catalyzed so much explosion. So I think when we think about adding these knobs [for customization] I think we want to do it in a way that’s very intentional.


Designing NotebookLM

Designer, builder, and visual storyteller. Now building Huxe. Previously led design on NotebookLM and contributed to Google AI projects like Gemini and Search. Also shoot photo/video for brands like Coachella, GoPro, and Rivian.

jasonspielman.com

Chatboxes have become the uber-box for all things AI. The criticism of this blank box has been the cold-start issue: new users don’t know what to type. Designers shipping these products mostly got around this problem by offering suggested prompts to teach users about the possibilities.

The issue on the other end is that expert users end up creating their own library of prompts to copy and paste into the chatbox for repetitive tasks.

Sharang Sharma writing in UX Collective illustrates how these UIs can be smarter by being predictive of intent:

Contrary, Predictive UX points to an alternate approach. Instead of waiting for users to articulate every step, systems can anticipate intent based on behavior or common patterns as the user types. Apple Reminders suggests likely tasks as you type. Grammarly predicts errors and offers corrections inline. Gmail’s Smart Compose even predicts full phrases, reducing the friction of drafting entirely.

Sharma says that the goal of predictive UX is to “reduce time-to-value and reframe AI as an adaptive partner that anticipates user’s intent as you type.”

Imagine a little widget that appears within the chatbox as you type. Kind of a cool idea.
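To make the idea concrete, here’s a minimal sketch of that kind of widget logic; the intent templates and the `suggest` function are my own illustrative assumptions, not anything from Sharma’s article:

```python
# Sketch of predictive intent suggestions: as the user types in the
# chatbox, match the partial input against common task templates and
# surface likely completions. The templates below are hypothetical.

COMMON_INTENTS = [
    "summarize this document in three bullet points",
    "summarize this email thread",
    "translate this text to Spanish",
    "draft a reply declining the meeting",
]

def suggest(partial: str, intents=COMMON_INTENTS, limit=2):
    """Return up to `limit` intents that extend what the user has typed."""
    p = partial.lower().strip()
    if not p:
        return []  # nothing typed yet: fall back to default prompt chips
    return [i for i in intents if i.startswith(p)][:limit]

print(suggest("summ"))   # both "summarize ..." templates
print(suggest("trans"))  # the translation template
```

A real implementation would rank candidates by the user’s history and context rather than simple prefix matching, but the interaction shape is the same: suggestions appear inside the chatbox as the intent becomes legible.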


How can AI UI capture intent?

Exploring contextual prompt patterns that capture user intent as it is typed

uxdesign.cc

Thinking about this morning’s link about web forms, if you abstract why it’s so powerful, you get to the point of human-computer interaction: the computer should do what the user intends, not the buttons they push.

Matt Webb reminds us about the DWIM, or Do What I Mean philosophy in computing that was coined by Warren Teitelman in 1966. Webb quotes computer scientist Larry Masinter:

DWIM is an embodiment of the idea that the user is interacting with an agent who attempts to interpret the user’s request from contextual information. Since we want the user to feel that he is conversing with the system, he should not be stopped and forced to correct himself or give additional information in situations where the correction or information is obvious.

Webb goes on to say:

Squint and you can see ChatGPT as a DWIM UI: it never, never, never says “syntax error.”

Now, arguably it should come back and ask for clarifications more often, and in particular DWIM (and AI) interfaces are more successful the more they have access to the user’s context (current situation, history, environment, etc).

But it’s a starting point. The algo is: design for capturing intent and then DWIM; iterate until that works. AI unlocks that.


The destination for AI interfaces is Do What I Mean

Posted on Friday 29 Aug 2025. 840 words, 10 links. By Matt Webb.

interconnected.org

Filling out forms is one of the fundamental things we make users do in software. Whether it’s a login screen, a billing address form, or a mortgage application, forms are the main method for getting data from users into computer-accessible databases. The human decides which piece of information goes into which column in the database. With AI, form filling should be much simpler.

Luke Wroblewski makes the argument:

With Web forms, the burden is on people to adapt to databases. Today’s AI models, however, can flip this requirement. That is, they allow people to provide information in whatever form they like and use AI to do the work necessary to put that information into the right structure for a database.

How can it work?

With AgentDB connected to an AI model (via an MCP server), a person can simply say “add this” and provide an image, PDF, audio, video, you name it. The model will use AgentDB’s template to decide what information to extract from this unstructured input and how to format it for the database. In the case where something is missing or incomplete, the model can ask for clarification or use tools (like search) to find possible answers.
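As a rough illustration of the flow (this is not AgentDB’s actual API; the schema, prompt builder, and validation helper are hypothetical), the app asks the model to map free-form input onto the database’s structure, then validates what comes back:

```python
# Sketch of the "unstructured input" flow: the model, not the user,
# adapts the input to the database schema. The model call itself is
# stubbed out here; only its JSON reply is shown.
import json

CONTACT_SCHEMA = {"name": "string", "email": "string", "phone": "string or null"}

def build_extraction_prompt(schema: dict, user_input: str) -> str:
    """Ask the model to map free-form input onto the database schema."""
    return (
        "Extract a JSON object matching this schema, using null for "
        f"missing fields:\n{json.dumps(schema)}\n\nInput:\n{user_input}"
    )

def validate(record: dict, schema: dict) -> bool:
    """The app, not the user, enforces the database's structure."""
    return set(record) == set(schema)

# A reply like this would come back from the model:
model_reply = '{"name": "Ada Lovelace", "email": "ada@example.com", "phone": null}'
record = json.loads(model_reply)
print(validate(record, CONTACT_SCHEMA))  # True
```

When validation fails (a missing field, say), the model can ask the user a follow-up question, which is the clarification loop Wroblewski describes.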


Unstructured Input in AI Apps Instead of Web Forms

Web forms exist to put information from people into databases. The input fields and formatting rules in online forms are there to make sure the information fits...

lukew.com
Still from a video shown at Apple Keynote 2025. Split screen of AirPods Pro connection indicator on left, close-up of earbuds in charging case on right.

Notes About the September 2025 Apple Event

Today’s Apple keynote opened with a classic quote from Steve Jobs.

Steve Jobs quote at Apple Keynote 2025 – Black keynote slide with white text: “Design is not just what it looks like and feels like. Design is how it works.” – Steve Jobs.

Then a video played, focused on the fundamental geometric shapes that can be found in Apple’s products: circles in the HomePod, iPhone shutter button, iPhone camera, MagSafe charging ring, Digital Crown on Apple Watch; rounded squares in the charging block, Home scene button, Mac mini, keycaps, Finder icon, FaceID; to the lozenges found in the AirPods case, MagSafe port, Liquid Glass carousel control, and the Action button on Apple Watch Ultra.

Josh Miller, CEO, and Hursh Agrawal, CTO, of The Browser Company:

Today, The Browser Company of New York is entering into an agreement to be acquired by Atlassian in an all-cash transaction. We will operate independently, with Dia as our focus. Our objective is to bring Dia to the masses.

Super interesting acquisition here. There is zero overlap as far as I can tell; Atlassian’s move is out of left field. Dia’s early users were college students. The Browser Company more recently opened it up to former Arc users. Is this bet, for Atlassian—the company that makes tech-company-focused products like Jira and Confluence—about the future of work and collaboration? Is this their first move against Salesforce? 🤔


Your Tuesday in 2030

Or why The Browser Company is being acquired to bring Dia to the masses.

open.substack.com

DOC is a publication from Fabricio Teixeira and Caio Braga that I’ve linked to before. Their latest reflection is on interfaces.

A good user interface is a good conversation.

Interfaces thrive on clarity, responsiveness, and mutual understanding. In a productive dialogue, each party clearly articulates their intentions and receives timely, understandable responses. Just as a good conversationalist anticipates the next question or need, a good interface guides you smoothly through your task. At their core, interfaces translate intent into action. They’re a bridge between what’s in your head and what the product can do.

Reflection is the best word I’ve found to describe these pieces. They’re hype-free, urging us to take a step back, and—at least for me—a reminder about our why.

In the end, interfaces are also a space for self-expression.

The ideal of “no interface” promises ultimate efficiency and direct access—but what do we lose in that pursuit? Perhaps the interface is not just a barrier to be minimized, but a space for human expression. It’s a canvas; a place to imbue a product with personality, visual expression, and a unique form of art.

When we strip that away, or make everything look the same, we lose something important. We trade the unique and the delightful for the purely functional. We sacrifice a vital part of what makes technology human: the thoughtful, and sometimes imperfect, ways we present ourselves to the world.


DOC • Interface

On connection, multi-modality, and self-expression.

doc.cc

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

Hard to believe that the Domino’s Pizza tracker debuted in 2008. The moment was ripe for them—about a year after the debut of the iPhone. Mobile e-commerce was in its early days.

Alex Mayyasi for The Hustle:

…the tracker’s creation was spurred by the insight that online orders were more profitable – and made customers more satisfied – than phone or in-person orders. The company’s push to increase digital sales from 20% to 50% of its business led to new ways to order (via a tweet, for example) and then a new way for customers to track their order.

Mayyasi weaves together a tale of business transparency, UI, and content design, tracing—or tracking?—the tracker’s impact on business since then. “The pizza tracker is essentially a progress bar.” But progress bars do so much for the user experience, most of which is setting proper expectations.
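Under the hood, a tracker like this is just a fixed, ordered list of states mapped to a completion fraction. A minimal sketch, with stage names that are illustrative rather than Domino’s actual ones:

```python
# The pizza tracker as a progress bar: a fixed sequence of order states,
# each rendered as a fraction complete. Stage names are hypothetical.

STAGES = ["Order Placed", "Prep", "Bake", "Quality Check", "Out for Delivery"]

def progress(current_stage: str) -> float:
    """Fraction of the order complete, for rendering the bar."""
    i = STAGES.index(current_stage)  # raises ValueError on unknown stages
    return (i + 1) / len(STAGES)

print(progress("Bake"))  # 0.6
```

The UX value comes less from the math than from the labels: naming each stage sets the expectations Mayyasi describes.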


How the Domino’s pizza tracker conquered the business world

One cheesy progress update at a time.

thehustle.co

I have always wanted to read 6,200 words about color! Sorry, that’s a lie. But I did skim it and really admired the very pretty illustrations. Dan Hollick is a saint for writing and illustrating this chapter in his living book called Making Software, a reference manual for designers and programmers who make digital products. From his newsletter:

I started writing this chapter just trying to explain what a color space is. But it turns out, you can’t really do that without explaining a lot of other stuff at the same time.

Part of the issue is color is really complicated and full of confusing terms that need a maths degree to understand. Gamuts, color models, perceptual uniformity, gamma etc. I don’t have a maths degree but I do have something better: I’m really stubborn.

And here are the opening sentences of the chapter on color:

Color is an unreasonably complex topic. Just when you think you’ve got it figured out, it reveals a whole new layer of complexity that you didn’t know existed.

This is partly because it doesn’t really exist. Sure, there are different wavelengths of light that our eyes perceive as color, but that doesn’t mean that color is actually a property of that light - it’s a phenomenon of our perception.

Digital color is about trying to map this complex interplay of light and perception into a format that computers can understand and screens can display. And it’s a miracle that any of it works at all.

I’m just waiting for him to put up a Stripe link so I can throw money at him.
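One concrete taste of that complexity: sRGB channel values aren’t linear light, so any physically meaningful math (blending, computing luminance) has to decode them first. The function below follows the standard sRGB transfer function (IEC 61966-2-1):

```python
# Decode one sRGB channel (0..1) to linear light. sRGB uses a small
# linear segment near black and a 2.4-exponent curve elsewhere.

def srgb_to_linear(c: float) -> float:
    """sRGB electro-optical transfer function (IEC 61966-2-1)."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Mid-gray in sRGB (0.5) is only about 21% linear light:
print(round(srgb_to_linear(0.5), 3))  # 0.214
```

That 0.5-encodes-as-21% result is exactly the perception-versus-physics gap Hollick is writing about: the encoding is tuned to how our eyes respond, not to how much light the screen emits.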


Making Software: What is a color space?

In which we answer every question you've ever had about digital color, and some you haven't.

makingsoftware.com

Interesting piece from Vaughn Tan about a critical thinking framework that is disguised as a piece about building better AI UIs for critical thinking. Sorry, that sentence is kind of a tongue-twister. Tan calls out—correctly—that LLMs don’t think, or in his words, can’t make meaning:

Meaningmaking is making inherently subjective decisions about what’s valuable: what’s desirable or undesirable, what’s right or wrong. The machines behind the prompt box are remarkable tools, but they’re not meaningmaking entities.

Therefore, when users ask LLMs for their opinions (as in the therapy use case), the AI won’t come back with actual thinking. IMHO, it’s semantics, but that’s another post.

Anyhow, Tan shares a pen and paper prototype he’s been testing, which breaks down a major decision into guided steps, or put another way, a framework.

This user experience was designed to simulate a multi-stage process of structured elicitation of various aspects of strongly reasoned arguments. This design explicitly addresses both requirements for good tool use. The structured prompts helped students think critically about what they were actually trying to accomplish with their custom major proposals — the meaningmaking work of determining value, worth, and personal fit. Simultaneously, the framework made clear what kinds of thinking work the students needed to do themselves versus what kinds of information gathering and analysis could potentially be supported by tools like LLMs.

This guided, framework-driven approach was something I attempted with Griffin AI. Via a series of AI-guided prompts to the user—or a glorified form, honestly—my tool helped users build brand strategies. To be sure, a lot of the “thinking” was done by the model, but the idea that an AI can guide you to critically think about your business or your client’s business was there.
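The staged-elicitation idea can be sketched as a script in which each stage is either owned by the human (the meaningmaking questions) or merely assistable by an LLM. The stage names below are my illustrative guesses, not Tan’s actual worksheet:

```python
# Sketch of staged elicitation: the system walks the user through
# questions in order, and each stage declares who must do the thinking.
# Stage wording and ownership labels are hypothetical.

STAGES = [
    ("Why does this matter to you?", "human"),       # meaningmaking
    ("What would success look like?", "human"),      # meaningmaking
    ("What evidence supports this?", "assistable"),  # research an LLM can help with
]

def next_stage(answers: dict):
    """Return the first unanswered (question, owner) pair, or None when done."""
    for question, owner in STAGES:
        if question not in answers:
            return question, owner
    return None

print(next_stage({"Why does this matter to you?": "..."}))
```

The point of the `owner` tag is the separation Tan argues for: the tool can structure the sequence, but it must not answer the meaningmaking questions on the user’s behalf.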


Designing AI tools that support critical thinking

Current AI interfaces lull us into thinking we’re talking to something that can make meaningful judgments about what’s valuable. We’re not — we’re using tools that are tremendously powerful but nonetheless can’t do “meaningmaking” work (the work of deciding what matters, what’s worth pursuing).

vaughntan.org

Designer Tey Bannerman writes that when he hears “human in the loop,” he’s reminded of a story about Lieutenant Colonel Stanislav Petrov, a Soviet Union duty watch officer who monitored for incoming missile strikes from the US.

12:15 AM… the unthinkable. Every alarm in the facility started screaming. The screens showed five US ballistic missiles, 28 minutes from impact. Confidence level: 100%. Petrov had minutes to decide whether to trigger a chain reaction that would start nuclear war and could very well end civilisation as we knew it.

He was the “human in the loop” in the most literal, terrifying sense.

Everything told him to follow protocol. His training. His commanders. The computers.

But something felt wrong. His intuition, built from years of intelligence work, whispered that this didn’t match what he knew about US strategic thinking.

Against every protocol, against the screaming certainty of technology, he pressed the button marked “false alarm”.

Twenty-three minutes of gripping fear passed before ground radar confirmed: no missiles. The system had mistaken a rare alignment of sunlight on high-altitude clouds for incoming warheads.

His decision to break the loop prevented nuclear war.

Then Bannerman shares an awesome framework he developed that gives humans in the loop of AI systems “genuine authority, time to think, and understanding the bigger picture well enough to question” the system’s decision. Click through to get the PDF from his site.

Framework diagram by Tey Bannerman titled Beyond ‘human in the loop’. It shows a 4×4 matrix mapping AI oversight approaches based on what is being optimized (speed/volume, quality/accuracy, compliance, innovation) and what’s at stake (irreversible consequences, high-impact failures, recoverable setbacks, low-stakes outcomes). Colored blocks represent four modes: active control, human augmentation, guided automation, and AI autonomy. Right panel gives real-world examples in e-commerce email marketing and recruitment applicant screening.

Redefining ‘human in the loop’

"Human in the loop" is overused and vague. The Petrov story shows humans must have real authority, time, and context to safely override AI. Bannerman offers a framework that asks what you optimize for and what is at stake, then maps 16 practical approaches.

teybannerman.com

Why I’m Keeping My Design Title

In the 2011 documentary Jiro Dreams of Sushi, then-85-year-old sushi master Jiro Ono says this about craft:

Once you decide on your occupation… you must immerse yourself in your work. You have to fall in love with your work. Never complain about your job. You must dedicate your life to mastering your skill. That’s the secret of success and is the key to being regarded honorably.

Craft is typically thought of as the formal aspects of any field such as design, woodworking, writing, or cooking. In design, we think about composition, spacing, and typography—being pixel-perfect. But one’s craft is much more than that. Ono’s sushi craft is not solely about slicing fish and pressing it against a bit of rice. It is also about picking the right fish, toasting the nori just so, cooking the rice perfectly, and running a restaurant. It’s the whole thing.

Cap Watkins, Head of Product Design at Lattice, was catching up with a former top-performing designer who was afraid other designers were mad at her for getting all the “cool” projects.

What made those projects glamorous and desirable was her and how she approached the work. There’s that old nugget about making your own luck and that is something she excelled at. She had a unique ability to take really hard or nebulous problems (both design and team-related) and morph them into something amazing that got people excited. Instead of getting discouraged, she’d respond to friction with more energy, more enthusiasm. In so many ways, she was a transformative presence on any team and project.

In other words, this designer cared and made the best of all her assignments.

Make things happen

Top designers aren’t handed “cool” projects—they transform hard, unglamorous work into exciting wins. Stop waiting. Make your work shine. Make things happen.

capwatkins.com

I enjoyed this interview with Notion’s CEO, Ivan Zhao, over at the Decoder podcast, with substitute host Casey Newton. What I didn’t quite get when I first used Notion was the “LEGO” aspect of it. Their vision is to build business software that is highly malleable and configurable to do all sorts of things. Here’s Zhao:

Well, because it didn’t quite exist with software. If you think about the last 15 years of [software-as-a-service], it’s largely people building vertical point solutions. For each buyer, for each point, that solution sort of makes sense. The way we describe it is that it’s like a hard plastic solution for your problem, but once you have 20 different hard plastic solutions, they sort of don’t fit well together. You cannot tinker with them. As an end user, you have to jump between half a dozen of them each day.

That’s not quite right, and we’re also inspired by the early computing pioneers who in the ‘60s and ‘70s thought that computing should be more LEGO-like rather than like hard plastic. That’s what got me started working on Notion a long time ago, when I was reading a computer science paper back in college.

From a user experience POV, Notion is both simple and exceedingly complicated. Taking notes is easy. Building the system for a workflow, not so much.

In the second half, Newton (gently) presses Zhao on the impact of AI on the workforce and how productivity software like Notion could replace headcount.

Newton: Do you think that AI and Notion will get to a point where executives will hire fewer people, because Notion will do it for them? Or are you more focused on just helping people do their existing jobs?

Zhao: We’re actually putting out a campaign about this, in the coming weeks or months. We want to push out a more amplifying, positive message about what Notion can do for you. So, imagine the billboard we’re putting out. It’s you in the center. Then, with a tool like Notion or other AI tools, you can have AI teammates. Imagine that you and I start a company. We’re two co-founders, we sign up for Notion, and all of a sudden, we’re supplemented by other AI teammates, some taking notes for us, some triaging, some doing research while we’re sleeping.

Zhao dodges the “hire fewer people” part of the question and instead answers with “amplifying” people, or making them more productive.


Notion CEO Ivan Zhao wants you to demand better from your tools

Notion’s Ivan Zhao on AI agents, productivity, and how software will change in the future.

theverge.com

Ben Davies-Romano argues that the AI chat box is our new design interface:

Every interaction with a large language model starts the same way: a blinking cursor in a blank text field. That unassuming box is more than an input — it’s the interface between our human intent and the model’s vast, probabilistic brain.

This is where the translation happens. We pour in the nuance, constraints, and context of our ideas; the model converts them into an output. Whether it’s generating words, an image, a video sequence, or an interactive prototype, every request passes through this narrow bridge.

It’s the highest-stakes, lowest-fidelity design surface I’ve ever worked with: a single field that stands between human creativity and an engine capable of reshaping it into almost any form, albeit with all the necessary guidance and expertise applied.

In other words, don’t just say “Make it better,” but guide the AI instead.

That’s why a vague, lazy prompt, like “make it better”, is the design equivalent of telling a junior designer “make it intuitive” and walking away. You’ll get something generic, safe, and soulless, not because the AI “missed the brief,” but because there was no brief.

Without clear stakes, a defined brand voice, and rich context, the system will fill in the blanks with its default, most average response. And “average” is rarely what design is aiming for.

And he makes a point that designers should be leading the charge on showing others what generative AI can do:

In the age of AI, it shouldn’t be everyone designing, per se. It should be designers using AI as an extension of our craft. Bringing our empathy, our user focus, our discipline of iteration, and our instinct for when to stop generating and start refining. AI is not a replacement for that process; it’s a multiplier when guided by skilled hands.

So, let’s lead. Let’s show that the real power of AI isn’t in what it can generate, but in how we guide it — making it safer, sharper, and more human. Let’s replace the fear and the gimmicks with clarity, empathy, and intentionality.

The blank prompt is our new canvas. And friends, we need to be all over it.


Prompting is designing. And designers need to lead.

Forget “prompt hacks.” Designers have the skills to turn AI from a gimmick into a powerful, human-centred tool if we take the lead.

medium.com

Christopher K. Wong argues that desirability is a key part of design that helps decide which features users really want:

To give a basic definition, desirability is a strategic part of UX that revolves around a single user question: Have you defined (and solved) the right problem for users?

In other words, before drawing a single box or arrow, have you done your research and discovery to know you’re solving a pain point?

The post doesn’t offer a succinct definition, so here’s my take: desirability is about ensuring a product or feature is truly wanted, needed, and chosen by users—not just visually appealing—making it a core pillar for impactful design decisions and prioritization. And designers should own this.


Want to have a strategic design voice at work? Talk about desirability

Desirability isn’t just about visual appeal: it’s one of the most important user factors

dataanddesign.substack.com
Illustration of diverse designers collaborating around a table with laptops and design materials, rendered in a vibrant style with coral, yellow, and teal colors

Five Practical Strategies for Entry-Level Designers in the AI Era

In Part I of this series on the design talent crisis, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market. In Part II, I discussed how industry and education need to change in order to ensure the survival of the profession.

Part III: Adaptation Through Action

Like most Gen X kids, I grew up with a lot of freedom to roam. By fifth grade, I was regularly out of the house. My friends and I would go to an arcade in San Francisco’s Fisherman’s Wharf called The Doghouse, where, naturally, they served hot dogs alongside their Joust and TRON cabinets. But we would invariably go to the Taco Bell across the street for cheap pre-dinner eats. In seventh grade—this was 1986—I walked by a ComputerLand on Van Ness Avenue and noticed a little beige computer with a built-in black-and-white CRT. The Macintosh screen was actually pale blue and black, but more importantly, it showed MacPaint. It was my first exposure to creating graphics on a computer, which would eventually become my career.