
162 posts tagged with “product design”

Alrighty, here’s one more “lens” thing to throw at you today.

In UX Collective, Daleen Rabe says that a “designer’s true value lies not in the polish of their pixels, but in the clarity of their lens.” She means our point-of-view, how we process the world:

  1. The method for creating truth
  2. The discipline of asking questions
  3. The mindset for enacting change
  4. The compass for navigating our ethics

The spec, as she calls it, is the designer’s method for creating truth. Others might call it a mockup or wireframe. Either way, it’s a visual representation of what we intend to build:

The spec is a democratic tool, while a text-based document can be ambiguous. It relies on a shared interpretation of language that often doesn’t exist. A visual, however, is a common language. It allows people with vastly different perspectives to align on something we can all agree exists in this reality. It’s a two-dimensional representation that is close enough to the truth to allow us to debate realistic scenarios and identify issues before they become code.

As designers, our role is to find the balance between the theoretical concept of what the business needs and what is tangibly feasible. The design spec is the tool we use to achieve this.

3D hexagonal prism sketched in black outline on a white background

The product designer’s Lens

Four tools that product designers use that have nothing to do with Figma

uxdesign.cc

T-shaped, M-shaped, and now Σ-shaped designers?! Feels like a personality quiz or something. Or maybe designers are overanalyzing as usual.

Here’s Darren Yeo telling us what it means:

The Σ-shape defines the new standard for AI expertise: not deep skills, but deep synthesis. This integrator manages the sum of complex systems (Σ) by orchestrating the continuous, iterative feedback loops (σ), ensuring system outputs align with product outcomes and ethical constraints.

Whether you subscribe to the Three Lenses framework proposed by Oliver West, or this sigma-shaped one proposed by Darren Yeo, just be yourself and don’t bring it up in interviews.

Large purple sigma-shaped graphic on a grid-paper background with the text "Sigma shaped designer".

The AI era needs Sigma (Σ) shaped designers (Not T or π)

For years, design and tech teams have relied on shape metaphors to describe expertise. We had T-shaped people (one deep skill, broad…

uxdesign.cc

Oliver West argues in UX Magazine that UX designers aren’t monolithic—meaning we’re not all the same and don’t all see the world the same way.

West:

UX is often described as a mix of art and science, but that definition is too simple. The truth is, UX is a spectrum made up of three distinct but interlinked lenses:

  • Creativity: Bringing clarity, emotion, and imagination to how we solve problems.
  • Science: Applying evidence, psychology, and rigor to understand behavior.
  • Business: Focusing on relevance, outcomes, and measurable value.

Every UX professional looks through these lenses differently. And that’s exactly how it should be.

He then outlines how those who focus on certain parts of the spectrum may be better suited to specialized roles. For example, if you lean toward creativity, you might be more of a UI designer:

UI Designers lead with the creative lens. Their strength lies in turning complex ideas into interfaces that feel intuitive, elegant, and emotionally engaging. But the best UI Designers also understand the science of usability and the business context behind what they’re designing.

I think product designers working in the startup world actually need all three lenses, as it were, but with a bias toward Science and Business.

Glass triangular prism with red and blue reflections on a blue surface; overlay text about UX being more than one skill and using three lenses.

The Three Lenses of UX: Because Not All UX Is the Same

Great designers don’t do everything; they see the world through different lenses: creative, scientific, and strategic. This article explains why those differences aren’t flaws, but rather the core reason UX works, and how identifying your own lens can transform careers, hiring, and collaboration. If you’ve ever wondered why “unicorn” designers don’t exist, this perspective explains why.

uxmag.com

Hey designer, how are you? What is distracting you? Who are you having trouble working with?

Those are a few of the questions designer Nikita Samutin and UX researcher Elizaveta Demchenko asked 340 product designers in a survey and in 10 interviews. They published their findings in a report called “State of Product Design: An Honest Conversation About the Profession.”

When I look at the calendars of the designers on my team, I see loads of meetings scheduled. So it’s no surprise to me that 64% of respondents said that switching between tasks distracted them. “Multitasking and unpredictable communication are among the main causes of distraction and stress for product designers,” the researchers wrote.

Most interesting to me are the results in the section “How Designers See Their Role.” Sixty percent of respondents want to develop leadership skills, and 47% want to improve at presenting ideas.

For many, “leadership” doesn’t mean managing people—it means scaling influence: shaping strategy, persuading stakeholders, and leading high-impact projects. In other words, having a stronger voice in what gets built and why.

It’s telling because I don’t see pixel-pushing in the responses. And that’s a good thing in the age of AI.

Speaking of which, 77% of designers aren’t afraid that AI may replace them. “Nearly half of respondents (49%) say AI has already influenced their work, and many are actively integrating new tools into their processes. This reflects the state of things in early 2025.”

I’m sure that number would be bigger if the survey were conducted today.

State of Product Design: An Honest Conversation About the Profession — ’25; author avatars and summary noting a survey of 340 designers and 10 interviews.

State of Product Design 2025

2025 Product Design report: workflows, burnout, AI impact, career growth, and job market insights across regions and company types.

sopd.design

There’s a lot of chatter in the news these days about the AI bubble. Most of it is because of the circular nature of the deals among the foundational model providers like OpenAI and Anthropic, and cloud providers (Microsoft, Amazon) and NVIDIA.

Diagram of market-value circles with OpenAI ($500B) and Nvidia ($4.5T) connected by colored arrows for hardware, investment, services and VC.

OpenAI recently published a report called “The state of enterprise AI” where they said:

The picture that emerges is clear: enterprise AI adoption is accelerating not just in breadth, but in depth. It is reshaping how people work, how teams collaborate, and how organizations build and deliver products.

AI use in enterprises is both scaling and maturing: activity is up eight-fold in weekly messages, with workers sending 30% more, and structured workflows rising 19x. More advanced reasoning is being integrated—with token usage up 320x—signaling a shift from quick questions to deeper, repeatable work across both breadth and depth.

Investors at Menlo Ventures are also seeing positive signs in their data, especially when it comes to the tech space outside the frontier labs:

The concerns aren’t unfounded given the magnitude of the numbers being thrown around. But the demand side tells a different story: Our latest market data shows broad adoption, real revenue, and productivity gains at scale, signaling a boom versus a bubble. 

AI has been hyped in the enterprise for the last three years, from deploying quickly built chatbots, to outfitting those bots with RAG search, and more recently, to shifting toward agentic AI. What Menlo Ventures’ report “The State of Generative AI in the Enterprise” says is that companies are moving away from rolling their own AI solutions internally to buying them.

In 2024, [confidence that teams could handle everything in-house] still showed in the data: 47% of AI solutions were built internally, 53% purchased. Today, 76% of AI use cases are purchased rather than built internally. Despite continued strong investments in internal builds, ready-made AI solutions are reaching production more quickly and demonstrating immediate value while enterprise tech stacks continue to mature.

Two donut charts: AI adoption methods 2024 vs 2025 — purchased 53% (2024) to 76% (2025); built internally 47% to 24%.

Also startups offering AI solutions are winning the wallet share:

At the AI application layer, startups have pulled decisively ahead. This year, according to our data, they captured nearly $2 in revenue for every $1 earned by incumbents—63% of the market, up from 36% last year when enterprises still held the lead.

On paper, this shouldn’t be happening. Incumbents have entrenched distribution, data moats, deep enterprise relationships, scaled sales teams, and massive balance sheets. Yet, in practice, AI-native startups are out-executing much larger competitors across some of the fastest-growing app categories.

How? They cite three reasons:

  • Product and engineering: Startups win the coding category because they ship faster and stay model‑agnostic, which let Cursor beat Copilot on repo context, multi‑file edits, diff approvals, and natural language commands—and that momentum pulled it into the enterprise.
  • Sales: Teams choose Clay and Actively because they own the off‑CRM work—research, personalization, and enrichment—and become the interface reps actually use, with a clear path to replacing the system of record.
  • Finance and operations: Accuracy requirements stall incumbents, creating space for Rillet, Campfire, and Numeric to build AI‑first ERPs with real‑time automation and win downmarket where speed matters.

There’s a lot more in the report, so it’s worth a full read.

Line chart: enterprise AI revenue rising from $0B (2022) to $1.7B (2023), $11.5B (2024) and $37.0B (2025) with +6.8x and +3.2x YoY.

2025: The State of Generative AI in the Enterprise

For all the fears of over-investment, AI is spreading across enterprises at a pace with no precedent in modern software history.

menlovc.com

This episode of Design of AI with Dr. Maya Ackerman is wonderful. She echoed a lot of what I’ve been thinking about recently—how AI can augment what we as designers and creatives can do. There’s a ton of content out there that hypes up AI that can replace jobs—“Type this prompt and instantly get a marketing plan!” or “Type this prompt and get an entire website!”

Ackerman, as interviewed by Arpy Dragffy-Guerrero:

I have a model I developed called humble creative machines, which is the idea that we are inherently much smarter than the AI. We have not reached even 10% of our capacity as creative human beings. And the role of AI in this ecosystem is not to become better than us but to help elevate us. That applies to people who design AI, of course, because a lot of the ways that AI is designed these days, you can tell you’re cut out of the loop. But on the other hand, some of the most creative people, those who are using AI in the most beneficial way, take this attitude themselves. They fight to stay in charge. They find ways to have the AI serve their purposes instead of treating it like an all-knowing oracle. So really, it’s sort of the audacity, the guts to believe that you are smarter than this so-called oracle, right? It’s this confidence to lead, to demand that things go your way when you’re using AI.

Her stance is that those who use AI best are those who wield it and shape its output to match their sensibilities. And so, as we’ve been hearing ad nauseam, our taste and judgement as designers really matter right now.

I’ve been playing a lot with ComfyUI recently—I’m working on a personal project that I’ll share if/when I finish it. But it made me realize that prompting a visual to get it to match what I have in my mind’s eye is not easy. This recent Instagram reel from famed designer Jessica Walsh captures my thoughts well:

I would say most AI output is shitty. People just assumed, “Oh, you rendered that an AI.” “That must have been super easy.” But what they don’t realize is that it took an entire day of some of our most creative people working and pushing the different prompts and trying different tools out and experimenting and refining. And you need a good eye to understand how to curate and pick what the best outputs are. Without that right now, AI is still pretty worthless.

Getting AI output to look great takes a ton of time and goes beyond prompting: inpainting, control nets, and even Photoshopping. Most non-professionals take the first output from an LLM or image generator and present it as great. But it’s really not.

So I like what Dr. Ackerman mentioned in her episode: we should be in control of the humble machines, not the other way around.

Headshot of a blonde woman in a patterned blazer with overlay text "Future of Human - AI Creativity" and "Design of AI".

The Future of Human-AI Creativity [Dr. Maya Ackerman]

AI is threatening creativity, but that's because we're giving too much control to the machine to think on our behalf. In this episode, Dr. Maya Ackerman…

designof.ai

I spend a lot of time not talking about design or hanging out with other designers. I suppose I do a lot of reading about design to write this blog, and I do talk with the designers on my team, but I see Design as the output of a lot of input that comes from the rest of life.

Hardik Pandya agrees and puts it much more elegantly:

Design is synthesizing the world of your users into your solutions. Solutions need to work within the user’s context. But most designers rarely take time to expose themselves to the realities of that context.

You are creative when you see things others don’t. Not necessarily new visuals, but new correlations. Connections between concepts. Problems that aren’t obvious until someone points them out. And you can’t see what you’re not exposed to.

Improving as a designer is really about increasing your exposure. Getting different experiences and widening your input of information from different sources. That exposure can take many forms. Conversations with fellow builders like PMs, engineers, customer support, sales. Or doing your own digging through research reports, industry blogs, GPTs, checking out other products, YouTube.

Male avatar and text "EXPOSURE AS A DESIGNER" with hvpandya.com/notes on left; stippled doorway and rock illustration on right.

Exposure

For equal amount of design skills, your exposure to the world determines how effective of a designer you can be.

hvpandya.com

When Figma acquired Weavy last month, I wrote a little bit about node-based UIs and ComfyUI. Looks like Adobe has been exploring this user interface paradigm as well.

Daniel John writes in Creative Bloq:

Project Graph is capable of turning complex workflows into user-friendly UIs (or ‘capsules’), and can access tools from across the Creative Cloud suite, including Photoshop, Illustrator and Premiere Pro – making it a potentially game-changing tool for creative pros.

But it isn’t just Adobe’s own tools that Project Graph is able to tap into. It also has access to the multitude of third party AI models Adobe recently announced partnerships with, including those made by Google, OpenAI and many more.

These tools can be used to build a node-based workflow, which can then be packaged into a streamlined tool with a deceptively simple interface.

And from Adobe’s blog post about Project Graph:

Project Graph is a new creative system that gives artists and designers real control and customization over their workflows at scale. It blends the best AI models with the capabilities of Adobe’s creative tools, such as Photoshop, inside a visual, node-based editor so you can design, explore, and refine ideas in a way that feels tactile and expressive, while still supporting the precision and reliability creative pros expect.

I’ve been playing around with ComfyUI a lot recently (more about this in a future post), so I’m very excited to see how this kind of UI can fit into Adobe’s products.

Stylized dark grid with blue-purple modular devices linked by cables, central "Ps" Photoshop

Adobe just made its most important announcement in years

Here’s why Project Graph matters for creatives.

creativebloq.com

On Corporate Maneuvers Punditry

Mark Gurman, writing for Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

I don’t regularly cover personnel moves here, but Alan Dye jumping over to Meta has been a big deal in the Apple news ecosystem. John Gruber, in a piece titled “Bad Dye Job” on his Daring Fireball blog, wrote a scathing takedown of Dye, excoriating his tenure at Apple and flogging him for going over to Meta, which is arguably Apple’s arch nemesis.

Putting Alan Dye in charge of user interface design was the one big mistake Jony Ive made as Apple’s Chief Design Officer. Dye had no background in user interface design — he came from a brand and print advertising background. Before joining Apple, he was design director for the fashion brand Kate Spade, and before that worked on branding for the ad agency Ogilvy. His promotion to lead Apple’s software interface design team under Ive happened in 2015, when Apple was launching Apple Watch, their closest foray into the world of fashion. It might have made some sense to bring someone from the fashion/brand world to lead software design for Apple Watch, but it sure didn’t seem to make sense for the rest of Apple’s platforms. And the decade of Dye’s HI leadership has proven it.

I usually appreciate Gruber’s writing and his take on things. He’s unafraid to tell it like it is and to be incredibly direct, which makes people both love and fear him. But in paragraph after paragraph, Gruber lays into Dye.

It’s rather extraordinary in today’s hyper-partisan world that there’s nearly universal agreement amongst actual practitioners of user-interface design that Alan Dye is a fraud who led the company deeply astray. It was a big problem inside the company too. I’m aware of dozens of designers who’ve left Apple, out of frustration over the company’s direction, to work at places like LoveFrom, OpenAI, and their secretive joint venture io. I’m not sure there are any interaction designers at io who aren’t ex-Apple, and if there are, it’s only a handful. From the stories I’m aware of, the theme is identical: these are designers driven to do great work, and under Alan Dye, “doing great work” was no longer the guiding principle at Apple. If reaching the most users is your goal, go work on design at Google, or Microsoft, or Meta. (Design, of course, isn’t even a thing at Amazon.) Designers choose to work at Apple to do the best work in the industry. That has stopped being true under Alan Dye. The most talented designers I know are the harshest critics of Dye’s body of work, and the direction in which it’s been heading.

Designers can be great at more than one thing and they can evolve. Being in design leadership does not mean that you need to be the best practitioner of all the disciplines, but you do need to have the taste, sensibilities, and judgement of a good designer, no matter how you started. I’m a case in point. I studied traditional graphic design in art school. But I’ve been in digital design for most of my career now, and product design for the last 10 years.

Has UI at Apple gotten worse over the last 10 years? Maybe. I would need to analyze things a lot more carefully. But I vividly remember having debates with my fellow designers about Mac OS X UI choices like the pinstriping, brushed metal, and many, many inconsistencies when I was working in the Graphic Design Group in 2004. UI design has never been perfect in Cupertino.

Alan Dye isn’t a CEO and wasn’t even at the same exposure level as Jony Ive when he was still at Apple. I don’t know Dye, though we’re certainly in the same design circles—we have 20 shared connections on LinkedIn. But as far as I’m concerned, he’s a civilian because he kept a low profile, like all Apple employees.

The parasocial relationships we have with tech executives are weird. I guess it’s one thing if they have a large online presence like Instagram’s Adam Mosseri or 37signals’ David Heinemeier Hansson (aka DHH), but Alan Dye made only a couple of appearances in Apple keynotes and talked about Liquid Glass. In other words, why is Gruber writing 2,500 words in this particular post? And it’s just one of five posts covering this story!

Anyway, I’m not a big fan of Meta, but maybe Dye can bring some ethics to the design team over there. Who knows. Regardless, I am wishing him well rather than taking him down.

Critiques are the lifeblood of design. Anyone who went to design school has participated in and has been the focus of a crit. It’s “the intentional application of adversarial thought to something that isn’t finished yet,” as Fabricio Teixeira and Caio Braga, the editors of DOC put it.

A lot of solo designers—whether they’re a design team of one or a freelancer—don’t have the luxury of critiques. In my view, they’re at a disadvantage. There are workarounds, of course, such as critiques with cross-functional peers, but it’s not the same. One designer on my team—who used to be a design team of one at her previous company—told me she’s learned more here in a month than in a year at her former job.

Further down, Teixeira and Braga say:

In the age of AI, the human critique session becomes even more important. LLMs can generate ideas in 5 seconds, but stress-testing them with contextual knowledge, taste, and vision, is something that you should be better at. As AI accelerates the production of “technically correct” and “aesthetically optimized” work, relying on just AI creates the risks of mediocrity. AI is trained to be predictable; crits are all about friction: political, organizational, or strategic.


Critique

On elevating craft through critical thinking.

doc.cc
Close-up of a Frankenstein-like monster face with stitched scars and neck bolts, overlaid by horizontal digital glitch bars

Architects and Monsters

According to recently unsealed court documents, Meta discontinued its internal studies on Facebook’s impact after discovering direct evidence that its platforms were detrimental to users’ mental health.

Jeff Horwitz reporting for Reuters:

In a 2020 research project code-named “Project Mercury,” Meta scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

As more and more evidence comes to light about Mark Zuckerberg and Meta’s failings and possibly criminal behavior, we as tech workers, and specifically as designers making technology that billions of people use, have to do better. While my previous essay, written after the assassination of Charlie Kirk, was an indictment of the algorithm, I’ve come across a couple of pieces recently that bring the responsibility closer to UX’s doorstep.

David Kelley is an icon in design. A restless tinkerer turned educator, he co-founded the renowned industrial design firm IDEO, helped shape human-centered design at Stanford’s d.school, and collaborated with Apple on seminal projects like the early mouse.

Here’s his take on creativity in a brief segment for PBS News Hour:

And as I started teaching, I realized that my purpose in life was figuring out how to help people gain confidence in their creative ability. Many people assume they’re not creative. Time and time again, they say, a teacher told me I wasn’t creative or that’s not a very good drawing of a horse or whatever it is. We don’t have to teach creativity. Once we remove the blocks, they can then feel themselves as being a creative person. Witnessing somebody realizing they’re creative for the first time is just a complete joy. You can just see them come out of the shop and beaming that I can weld. Like, what’s next?

Older man with glasses and a mustache seated at a workshop workbench, shelves of blue parts bins and tools behind him.

David Kelley's Brief But Spectacular take on creativity and design

For decades, David Kelley has helped people unlock their creativity. A pioneer of design, he founded the Stanford d.school as a place for creative, cross-disciplinary problem solving. He reflects on the journey that shaped his belief that everyone has the capacity to be creative and his Brief But Spectacular take on creativity and design.

pbs.org

I’ve been playing with my systems in the past month—switching browsers, notetaking apps, and RSS feed readers. If I’m being honest, it’s causing me anxiety because I feel unmoored. My systems aren’t familiar enough to let me be efficient.

One thing that has stayed relatively stable is my LLM app—well, two of them. ChatGPT for everyday and Claude for coding and writing.

Christina Wodtke, writing on her blog:

The most useful model might not win.

What wins is the model that people don’t want to leave. The one that feels like home. The one where switching would mean losing something—not just access to features, but fluency, comfort, all those intangible things that make a tool feel like yours.

Amazon figured this out with Prime. Apple figured it out with the ecosystem. Salesforce figured it out by making itself so embedded in enterprise workflows that ripping it out would require an act of God.

AI companies are still acting like this is a pure technology competition. It’s not. It’s a competition to become essential—and staying power comes from experience, not raw capability.

Your moat isn’t your model. Your moat is whether users feel at home.

Solid black square filling the frame

UX Is Your Moat (And You’re Ignoring It)

Last week, Google released Nano Banana Pro, their latest image generator. The demos looked impressive. I opened Gemini to try it. Then I had a question I needed to ask. Something unrelated to image…

eleganthack.com

Like it or not, as a designer, you have to be able to present your work proficiently. I always hated presenting. I was nervous and would get tongue-tied. Eventually, the more I did it, the more I got used to it. But that’s public speaking, just half of what a presentation is. The other half is how to structure and tell your story. What story? The story of your design.

There’s a lot to be learned from master storytellers like Pixar. Laia Tremosa, writing for the Interaction Design Foundation, walks us through some storytelling techniques we can pick up from the studio.

Most professionals stay unseen not because their work lacks value, but because their message lacks resonance. They talk in facts when their audience needs meaning.

Storytelling is how you change that. It turns explanation into connection, and connection into influence. When you frame your ideas through story, people don’t just understand your work, they believe in it.

And out of the five that she mentions, the second one is my favorite, “Know Where You’re Going: Start with the End.”

Pixar designs for the final feeling. You should too. As a presenter, your version of that is a clear takeaway, a shift in perspective, or a call to action. You’re guiding your audience to a moment of clarity.

Maybe it’s a relief that a problem can be solved. Maybe it’s excitement about a new idea. Maybe it’s conviction that your proposal matters. Whatever the feeling, it’s your north star.

Don’t just prepare what to say; decide where you want to land. Start with the end, and build every word, visual, and story toward that moment of understanding and meaning.

Headline: "The 5 Pixar Storytelling Principles That Will Redefine How You Present and Fast-..." with "Article" tag and Interaction Design Foundation (IxDF) logo.

The 5 Pixar Storytelling Principles That Will Redefine How You Present and Fast-Track Your Career

Master Pixar storytelling techniques to elevate your presentations and boost career impact. Learn how to influence with storytelling.

interaction-design.org
Escher-like stone labyrinth of intersecting walkways and staircases populated by small figures and floating rectangular screens.

Generative UI and the Ephemeral Interface

This week, Google debuted their Gemini 3 AI model to great fanfare and glowing reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI.

I will admit that I’ve been skeptical of the notion of generative user interfaces. I was imagining an app for work, like a design app, that would rearrange itself depending on the task at hand. In other words, it’s dynamic and contextual. Adobe has tried a proto-version of this with the contextual task bar. Theoretically, it surfaces the most pertinent three or four actions based on your current task. But I find that it just gets in the way.

When Interfaces Keep Moving

Others have been less skeptical. More than 18 months ago, NN/g published an article speculating about genUI and how it might manifest in the future. They define it as:

A generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context. So it’s a custom UI for that user at that point in time. Similar to how LLMs answer your question: tailored for you and specific to when you asked the original question.

There are dark patterns in UX, and there are also dark patterns specific to games. Dark Pattern Games is a website that catalogs such patterns and the offending mobile games.

The site’s definition of a dark pattern is:

A gaming dark pattern is something that is deliberately added to a game to cause an unwanted negative experience for the player with a positive outcome for the game developer.

The “Social Pyramid Scheme” is one of my most loathed:

Some games will give you a bonus when you invite your friends to play and link them to your account. This bonus may be a one-time benefit, or it may be an ongoing benefit that improves the gaming experience for each friend that you add. This gives players a strong incentive to convince their friends to play. Those friends then have to sign up more friends and so on, leading to a pyramid scheme and viral growth for the game.

Starry background with red pixelated text "Dark Pattern Games", a D-pad icon with red arrows, and URL www.darkpattern.games

DarkPattern.games » Healthy Gaming « Avoid Addictive Dark Patterns

Game reviews to help you find good games that don’t trick you into addictive gaming patterns.

darkpattern.games icondarkpattern.games

Geoffrey Litt is a design engineer at Notion. He is one of the authors at Ink & Switch of “Malleable software,” which I linked to back in July. I think it’s pretty fitting that he popped up at Notion, with the CEO Ivan Zhao likening the app to LEGO bricks.

In a recent interview with Rid on Dive Club, Litt explains the concept further:

So, when I say malleable software, I do not mean only disposable software. The main thing I think about with malleable software is actually much closer to … designing my interior space in my house. Let’s say when I come home I don’t want everything to be rearranged, right? I want it to be the way it was. And if I want to move the furniture or put things on the wall, I want to have the right to do that. And so I think of it much more as kind of crafting an environment over time that’s actually more stable and predictable, not only for myself, but also for my team. Having shared environments that we all work in together that are predictable is also really important, right? Ironically, actually, in some ways, I think sometimes malleable software results in more stable software because I have more control.

For building with AI, Litt advocates “coding like a surgeon”: stay in the loop and use agents for prep and grunt work.

How do we think of AI as a way to leverage our time better? [So we can] stay connected to the work and [do] it ourselves by having prep work done for us. Having tools in the moment helping us do it so that we can really focus on the stuff we love to do, and do less of everything else. And that’s how I’m trying to use coding agents for my core work that I care about today. Which is when I show up, sit down at my desk in the morning and work on a feature, I want to be prepped with a brief on all the code I’m going to be touching today, how it works, what the traps are. Maybe I’ll see a draft that the AI did for me overnight, sketching out how the coding could go. Maybe some ideas for me.

In other words, like an assistant who works overnight. And yeah, this could apply to design as well.

Geoffrey Litt - The Future of Malleable Software

AI is fundamentally shifting the way we think about digital products and the core deliverables that we’re bringing to the table as designers. So I asked Geoff…

youtube.com iconyoutube.com

Design Thinking has gotten a bad rap in recent years. It was supposed to change everything in the corporate world but ended up changing very little. While Design Thinking may not be the darling anymore, designers still need time to think, which is, for the sake of argument, time away from Figma and pushing pixels.

Chris Becker argues in UX Collective:

However, the canary in the coalmine is that Designers are not being used for their “thinking” but rather their “repetition”. Much of the consternation we feel in the UX industry is catapulted on us from this point of friction.

He says that agile software development and time for designers to think aren’t incompatible:

But allowing Designers to implement their thinking into the process is about trust. When good software teams collaborate effectively, there are high levels of trust and autonomy (a key requirement of agile teams). Designers must earn that trust, of course, and when we demonstrate that we have “done the thinking,” it builds confidence and garners more thinking time. Thinking begets thinking. So, Designers, let’s continue to work to maximise our “thinking” faculties.

Hand-drawn diagram titled THINKING: sensory icons and eyeballs feed a brain, plus a phone labeled "Illusory Truth Effect," leading to outputs labeled "Habits."

Let designers think

How “Thinking” + “Designing” need to be practiced outside AI.

uxdesign.cc iconuxdesign.cc

Game design is fascinating to me. As designers, “gamification” was all the rage a few years back, inspired by apps like Duolingo that made it fun to progress in a product. Raph Koster outlines a twelve-step, systems-first framework for game design, complete with illustrations. Notice how he’ll use UX terms like “affordance” because ultimately, game design is UX.

In step five, “Feedback,” Koster provides an example:

[The player] can’t learn and get better unless [they] get a whole host of information.

  • You need to know what actions – we usually call them verbs — are even available to you. There’s a gas pedal.
  • You need to be able to tell you used a verb. You hear the engine growl as you press the pedal.
  • You need to see that the use of the verb affected the state of the problem, and how it changed. The speedometer moved!
  • You need to be told if the state of the problem is better for your goal, or worse. Did you mean to go this fast?

Sound familiar? It’s Jakob Nielsen’s “Visibility of System Status.”
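Koster’s four feedback signals map cleanly onto code. Here’s a minimal sketch of his checklist as a single update step in a game loop, using his gas-pedal example; the function name, state shape, and goal value are all invented for illustration.

```python
# Sketch of Koster's feedback checklist as one step of a game loop.
# All names here (feedback, goal_speed, etc.) are illustrative.

def feedback(state, verb, goal_speed=60):
    # 1. Know what verbs are available: there's a gas pedal
    assert verb in state["verbs"], f"unknown verb: {verb}"
    # 2. Confirm the verb was used: the engine growls
    events = [f"{verb} pressed"]
    # 3. Show that the state changed, and how: the speedometer moves
    state["speed"] += 10
    events.append(f"speed is now {state['speed']}")
    # 4. Say whether the new state is better or worse for the goal
    events.append("on pace" if state["speed"] <= goal_speed else "too fast!")
    return events

state = {"verbs": {"accelerate"}, "speed": 55}
print(feedback(state, "accelerate"))  # speed rises to 65, past the 60 goal
```

Strip out the game framing and steps 2 through 4 are exactly what Nielsen asks of any interface: acknowledge the action, show the resulting state, and help the user judge it.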

White-bordered hex grid with red, blue, yellow and black hex tiles marked by dot patterns, clustered on a dark tabletop

Game design is simple, actually

So, let’s just walk through the whole thing, end to end. Here’s a twelve-step program for understanding game design. One: Fun There are a lot of things people call “fun.” But most of them are not u…

raphkoster.com iconraphkoster.com

I think the headline is a hard stance, but I appreciate the sentiment. All the best designers and creatives—including developers—I’ve ever worked with do things on the side. Or in Rohit Prakash’s words, they tinker. They’re always making something, learning along the way.

Prakash, writing in his blog:

Acquiring good taste comes through using various things, discarding the ones you don’t like and keeping the ones you do. If you never try various things, you will not acquire good taste.

It’s important for designers to see other designs and use other products—if you’re a software designer. It’s equally important to look up from Dribbble, Behance, Instagram, and even this blog and go experience something unrelated to design. Art, concerts, cooking. All of it gets synthesized through your POV and becomes your taste.

Large white text "@seatedro on x dot com" centered on a black background.

If you don’t tinker, you don’t have taste

programmer by day, programmer by night.

seated.ro iconseated.ro

Apologies for sharing back-to-back articles from NN/g, but this is a good comprehensive index of all the AI-related guides the firm has published. Start here if you’re just getting into it.

Highlights from my POV:

  • Your AI UX Intern: Meet Ari. AI tools in UX act like junior interns whose output serves as a starting draft needing review, specific instructions, and added context. Their work should be checked and not used for final products or decisions without supervision.
  • The Future-Proof Designer. AI speeds up product development and automates design tasks, but creates risks like design marginalization and information overload. Designers must focus on strategic thinking, outcomes, and critical judgment to ensure decisions benefit users and business value.
  • Design Taste vs. Technical Skills in the Era of AI. Generative AI has equalized access to design output, but quality depends on creative discernment and taste, which remain essential for impactful results.

Using AI for UX Work: Study Guide — profile head with magnifying glass, robot face, papers, speech bubble and vector-cursor icons; NN/G logo

Using AI for UX Work: Study Guide

Unsure where to start? Use this collection of links to our articles and videos to learn about the best ways to use artificial intelligence for UX work.

nngroup.com iconnngroup.com

I’ve been a big fan of node-based UIs since I first experimented with Shake in the early 2000s. It’s kind of weird to wrap your head around, especially if you’re used to layers in Photoshop or Figma. The easiest way to think about nodes is to rotate the layer stack 90 degrees. Each node takes inputs on the left, performs a distinct process on them, and outputs the result on the right. You connect multiple nodes together to process assets into your final composition. Popular apps with node-based workflows today include Unreal Engine (Blueprints), DaVinci Resolve (Fusion and Color), and n8n.
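That inputs-process-output structure is simple enough to sketch in a few lines. This is a toy model, not how any of these apps are actually implemented; the class names and the tiny “blend then grade” pipeline are invented for illustration.

```python
# Minimal sketch of a node graph: each node has upstream inputs,
# applies one operation, and feeds its output downstream.
# Class names and the example pipeline are purely illustrative.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # the processing this node performs
        self.inputs = inputs  # upstream nodes wired into it

    def evaluate(self):
        # Pull values from upstream nodes, then apply this node's op
        values = [node.evaluate() for node in self.inputs]
        return self.op(*values)

class Source(Node):
    """A leaf node that just emits a constant value."""
    def __init__(self, value):
        super().__init__(lambda: value)

# Wire a tiny compositing-style pipeline: blend two sources, then grade
blend = Node(lambda a, b: (a + b) / 2, Source(0.2), Source(0.8))
grade = Node(lambda x: min(1.0, x * 1.5), blend)
print(grade.evaluate())  # 0.75
```

Evaluating the last node pulls values through the whole graph, which is why rearranging or branching nodes is so cheap: the wiring is the composition.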

ComfyUI is another open source tool that uses the same node graph architecture. Made in 2023 to add some UI to the visual generative AI models like Stable Diffusion appearing around that time, it’s become popular among artists to wield the plethora of image and video gen AI models.

Fast-forward to last week, when Figma announced they had acquired Weavy, a much friendlier and cloud-based version of ComfyUI.

Weavy brings the world’s leading AI models together with professional editing tools on a single, browser-based canvas. With Weavy, you can choose the model you want for a task (e.g. Seedance, Sora, and Veo for cinematic video; Flux and Ideogram for realism; and Nano-Banana or Seedream for precision) and compose powerful primitives using generative AI outputs and hands-on edits (e.g. adjusting lighting, masking an object, color grading a shot). The end result is an inspiring environment for creative exploration and a flexible media pipeline where every output feeds the next.

This node-based approach brings a new level of craft and control to AI generation. Outputs can be branched, remixed, and refined, combining creative exploration with precision and craft. The Weavy team has inspired us with the balance they’ve struck between simplicity, approachability, and power. They’ve also created a tool that’s just a joy to use.

I must admit I had not heard about Weavy before the announcement. I had high hopes for Visual Electric, but it never quite lived up to its ambitions. I proceeded to watch all the official tutorial videos on YouTube, and I love it. Seems so much easier to use than ComfyUI. Let’s see what Figma does with the product.

Node-based image editor with connected panels showing a man in a rowboat on water then composited floating over a deep canyon.

Introducing Figma Weave: the next generation of AI-native creation at Figma

Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas.

figma.com iconfigma.com

In thinking about the three current AI-native web browsers, Fanny on Medium examines what lessons product designers can take from their different approaches.

On Perplexity Comet:

Design Insight: Comet succeeds by making AI feel like a natural extension of browsing, not an interruption. The sidecar model is brilliant because it respects the user’s primary task (reading, researching, shopping) while offering help exactly when context is fresh. But there’s a trade-off — Comet’s background assistant, which can handle multiple tasks simultaneously while you work, requires extensive permissions and introduces real security concerns.

On ChatGPT Atlas:

Design Insight: Atlas is making a larger philosophical statement — that the future of computing isn’t about better search, it’s about conversation as an interface. The key product decision here is making ChatGPT’s memory and context awareness central. Atlas remembers what sites you’ve visited, what you were working on, and uses that history to personalize responses. Ask “What was that doc I had my presentation plan in?” and it finds it.

On The Browser Company Dia:

Design Insight: Dia is asking the most interesting question — what happens when AI isn’t a sidebar or a search replacement, but a fundamental rethinking of input methods? The insertion cursor, the mouse, the address bar — these are the primitives of computing. Dia is making them intelligent.

She concludes that they “can’t all be right. But they’re probably all pointing at pieces of what comes next.”

I do think it’s a combo and Atlas is likely headed in the right direction. For AI to be truly assistive, it has to have relevant context. Since a lot of our lives are increasingly on the internet via web apps—and nearly everything is a web app these days—ChatGPT’s profile of you will have the most context, including your chats with the chatbot.

I began using Perplexity because I appreciated its accuracy compared with ChatGPT; this was pre-web search. But even with web search built into ChatGPT 5, I still find Perplexity’s (and therefore Comet’s) approach to be more trustworthy.

My conclusion stands though: I’m still waiting on the Arc-Dia-Comet browser smoothie.

Three app icons on dock: blue flower with paper plane, rounded square with sunrise gradient, and dark circle with white arches.

The AI Browser Wars: What Comet, Atlas, and Dia Reveal About Designing for AI-First Experiences

Last week, I watched OpenAI’s Sam Altman announce Atlas with the kind of confidence usually reserved for iPhone launches. “Tabs were…

uxplanet.org iconuxplanet.org

Speaking of trusting AI, in a recent episode of Design Observer’s Design As, Lee Moreau speaks with four industry leaders about trust and doubt in the age of AI.

We’ve linked to a story about Waymo before, so here’s Ryan Powell, head of UX at Waymo:

Safety is at the heart of everything that we do. We’ve been at this for a long time, over a decade, and we’ve taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, how do we design for trust? What does it actually mean?

Ellie Kemery, principal research lead, advancing responsible AI at SAP, on maintaining critical thinking and transparency in AI-driven products:

We need to think about ethics as a part of this because the unintended consequences, especially at the scale that we operate, are just too big, right?

So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output,…making sure that people are in control of what’s happening at all times, because at the end of the day, they need to be the ones making the call.

Everybody’s aware that without trust, there is no adoption. But there is something that people aren’t talking about as much, which is that people should also not blindly trust a system, right? And there’s a huge risk there because, humans we tend to, you know, we’ll try something a couple of times and if it works it works. And then we lose that critical thinking. We stop checking those things and we simply aren’t in a space where we can do that yet. And so making sure that we’re focusing on the calibration of trust, like what is the right amount of trust that people should have to be able to benefit from the technology while at the same time making sure that they’re aware of the limitations.

Bold white letters in a 3x3 grid reading D E S / I G N / A S on a black background, with a right hand giving a thumbs-up over the right column.

Design as Trust | Design as Doubt

Explore how designers build trust, confront doubt, and center equity and empathy in the age of AI with leaders from Adobe, Waymo, RUSH, and SAP

designobserver.com icondesignobserver.com

In this era of AI, we’ve been taught that LLMs are probabilistic, not deterministic, and that they will sometimes hallucinate. There’s a saying in AI circles that humans are right about 80% of the time, and so are AIs. Except when less than 100% accuracy is unacceptable. Accountants need to be 100% accurate, lest they lose track of money for their clients or businesses.

And that’s the problem Intuit had to solve to roll out their AI agent. Sean Michael Kerner, writing in VentureBeat:

Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, they still received complaints about errors.

“The use cases that we’re trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls,” Joe Preston, Intuit’s VP of product and design, told VentureBeat.

So they built an agent that queries data from a multitude of sources and returns those exact results. But do users trust those results? It comes down to a design decision on being transparent:

Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.

When Intuit’s accounting agent categorizes a transaction, it doesn’t just display the result; it shows the reasoning. This isn’t marketing copy about explainable AI, it’s actual UI displaying data points and logic.

“It’s about closing that trust loop and making sure customers understand the why,” Alastair Simpson, Intuit’s VP of design, told VentureBeat.
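The design decision Intuit describes, returning the reasoning alongside the result so the UI can display it, can be sketched simply. This is not Intuit’s implementation; the field names, rules, and threshold below are all hypothetical, purely to show the shape of a result that carries its own “why.”

```python
# Hedged sketch: a categorization that returns its reasoning with the
# label, so the UI can show the "why" instead of a bare answer.
# All field names, rules, and values here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Categorization:
    category: str
    reasons: list = field(default_factory=list)  # surfaced in the UI

def categorize(txn):
    reasons = []
    if "uber" in txn["memo"].lower():
        reasons.append('memo contains "Uber"')
    if txn["amount"] < 100:
        reasons.append("amount is typical of ride fares")
    # No matching evidence means no confident label - and the UI says so
    category = "Travel" if reasons else "Uncategorized"
    return Categorization(category, reasons)

result = categorize({"memo": "UBER TRIP 4821", "amount": 23.40})
print(result.category, result.reasons)
```

The key is the return type: because the reasons travel with the label, the interface can render the data points and logic behind the decision, which is exactly the trust loop Simpson describes.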

Rusty metal bucket tipped over pouring a glowing stream of blue binary digits (ones and zeros) onto a dark surface.

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

The QuickBooks maker's approach to embedding AI agents reveals a critical lesson for enterprise AI adoption: in high-stakes domains like finance and tax, one mistake can erase months of user confidence.

venturebeat.com iconventurebeat.com