
155 posts tagged with “product design”

Coincidentally, I was considering adding a service designer to my headcount plan when this article came across my feeds. Perfect timing. It’s hard to imagine that service design as a discipline is so young—only since 2012 according to the author.

Joe Foley, writing in Creative Bloq:

As a discipline, service design is still relatively new. A course at the Royal College of Art in London (RCA) only began in 2012 and many people haven’t even heard of the term. But that’s starting to change.

He interviews designer Clive Grinyer, whose new book on service design has just come out. He was co-founder of the design consultancy Tangerine, Director of Design and Innovation for the UK Design Council, and Head of Service Design at the Royal College of Art.

Grinyer:

Great service design is often invisible as it solves problems and removes barriers, which isn’t necessarily noticed as much as a shiny new product. The example of GDS (Government Digital Service) redesigning every government department from a service design perspective and removing many frustrating and laborious aspects of public life from taxing a car to getting a passport, is one of the best.

The key difference between service design and UX is that its end product is not something on a screen:

But service design is not just the experience we have through the glass of a screen or a device: it’s designed from the starting point of the broader objective and may include many other channels and touchpoints. I think it was Colin Burns who said a product is just a portal to a service.

In other words, if you open the aperture of what user experience means, and take on the challenge of designing real-world processes, flows, and interaction—that is service design.


Service design isn't just a hot buzzword, it affects everything in your life

Brands need to catch up fast.

creativebloq.com
Human chain of designers supporting each other to reach laptops and design tools floating above them, illustrating collaborative mentorship and knowledge transfer in the design industry.

Why Young Designers Are the Antidote to AI Automation

In Part I of this series, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market.

Part II: Building New Ladders

When I met Benedict Allen, he had finished Portfolio Review just a week earlier. That’s the big show all the design students in the Graphic Design program at San Diego City College work toward, an event that brings out the local design community, where seasoned professionals review the portfolios of the graduating students.

Allen was all smiles and relief. “I want to dabble in different aspects of design because the principles are generally the same.” He goes on to mention how he wants to start a fashion brand someday, DJ, try 3D. “I just want to test and try things and just have fun! Of course, I’ll have my graphic design job, but I don’t want that to be the end. Like when the workday ends, that’s not the end of my creativity.” He was bursting with enthusiasm.

Luke Wroblewski, writing in his blog:

Across several of our companies, software development teams are now “out ahead” of design. To be more specific, collaborating with AI agents (like Augment Code) allows software developers to move from concept to working code 10x faster. This means new features become code at a fast and furious pace.

When software is coded this way, however, it (currently at least) lacks UX refinement and thoughtful integration into the structure and purpose of a product. This is the work that designers used to do upfront but now need to “clean up” afterward. It’s like the development process got flipped around. Designers used to draw up features with mockups and prototypes, then engineers would have to clean them up to ship them. Now engineers can code features so fast that designers are the ones going back and cleaning them up.

This is what I’ve been secretly afraid of. That we would go back to the times when designers were called in to do cleanup. Wroblewski says:

Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn’t go away.

Yeah, or the feature will get shipped this way and be stuck this way because startups move fast and move on.

My take is that as designers, we need to meet the moment and figure out how to build design systems and best practices into the agentic workflows our developer counterparts are using.


AI Has Flipped Software Development

For years, it's been faster to create mockups and prototypes of software than to ship it to production. As a result, software design teams could stay "ahead" of...

lukew.com

Sonos announced yesterday that interim CEO Tom Conrad was made permanent. From their press release:

Sonos has achieved notable progress under Mr. Conrad’s leadership as Interim CEO. This includes setting a new standard for the quality of Sonos’ software and product experience, clearing the path for a robust new product pipeline, and launching innovative new software enhancements to flagship products Sonos Ace and Arc Ultra.

Conrad surely navigated this minefield well after the disastrous app redesign that wiped almost $500 million from the company’s market value and cost CEO Patrick Spence his job. My sincere hope is that Conrad continues to rebuild Sonos’s reputation by improving their products.

Sonos Appoints Tom Conrad as Chief Executive Officer

Sonos Website

sonos.com
Retro-style robot standing at a large control panel filled with buttons, switches, and monitors displaying futuristic data.

The Era of the AI Browser Is Here

For nearly three years, Arc from The Browser Company has been my daily driver. To be sure, there was a little bit of a learning curve. Tabs disappeared after a day unless you pinned them. Then they became almost like bookmarks. Tabs were on the left side of the window, not at the top. Spaces let me organize my tabs based on use cases like personal, work, or finances. I could switch between tabs using control-Tab and saw little thumbnails of the pages, similar to the app switcher on my Mac. Shift-command-C copied the current page’s URL. 

All these little interface ideas added up to a productivity machine for web jockeys like myself. And so, I was saddened to hear in May that The Browser Company stopped actively developing Arc in favor of a new AI-powered browser called Dia. (They are keeping Arc updated with maintenance releases.)

They had started beta-testing Dia with college students first and just recently opened it up to Arc members. I finally got access to Dia a few weeks ago. 

It’s no secret that I am a big fan of Severance, the Apple TV+ show that has 21 Emmy nominations this year. I made a fan project earlier in the year that generates Outie facts for your Innie.

After launching a teaser campaign back in April, Atomic Keyboard is finally taking pre-orders for their Severance-inspired keyboard just for Macrodata Refinement department users. The show based the MDR terminals on the Data General Dasher D2 terminal from 1977. So this new keyboard includes three layouts:

  1. “Innie,” which is show-accurate, meaning no Escape, Option, or Control keys, and includes the trackball
  2. “Outie,” a 60% layout that includes modern modifier keys and the trackball
  3. “Dasher,” which replicates the DG terminal layout

It’s not cheap. The final retail price will be $899, but they’re offering a pre-Kickstarter price of $599.


MDR Dasher Keyboard | For Work That's Mysterious & Important

Standard equipment for Macrodata Refinement: CNC-milled body, integrated trackball, modular design. Please enjoy each keystroke equally.

mdrkeyboard.com

Stephanie Tyler, in a great essay about remembering what we do as designers:

In an age where AI can generate anything, the question is no longer ‘can it be made?’ but ‘is it worth making?’ The frontier isn’t volume—it’s discernment. And in that shift, taste has become a survival skill.

And this is my favorite passage, because this is how I think about this blog and my newsletter.

There will always be creators. But the ones who stand out in this era are also curators. People who filter their worldview so cleanly that you want to see through their eyes. People who make you feel sharper just by paying attention to what they pay attention to.

Curation is care. It says: I thought about this. I chose it. I didn’t just repost it. I didn’t just regurgitate the trending take. I took the time to decide what was worth passing on.

That’s rare now. And because it’s rare, it’s valuable.

We think of curation as a luxury. But it’s actually maintenance. It’s how you care for your mind. Your attention. Your boundaries.

This blog represents my current worldview, what I’m interested in and exploring. What I’m thinking about now.


Taste Is the New Intelligence

Why curation, discernment, and restraint matter more than ever

wildbarethoughts.com

This is a really well-written piece that pulls the AI + design concepts neatly together. Sharang Sharma, writing in UX Collective:

As AI reshapes how we work, I’ve been asking myself, it’s not just how to stay relevant, but how to keep growing and finding joy in my craft.

In my learning, the new shift requires leveraging three areas

  1. AI tools: Assembling an evolving AI design stack to ship fast
  2. AI fluency: Learning how to design for probabilistic systems
  3. Human-advantage: Strengthening moats like craft, agency and judgment to stay ahead of automation

Together with strategic thinking and human-centric skills, these pillars shape our path toward becoming an AI-native designer.

Sharma connects all the crumbs I’ve been dropping this week:


AI tools + AI fluency + human advantage = AI-native designer

From tools to agency, is this what it would take to thrive as a product designer in the AI era?

uxdesign.cc

From UX Magazine:

Copilots helped enterprises dip their toes into AI. But orchestration platforms and tools are where the real transformation begins — systems that can understand intent, break it down, distribute it, and deliver results with minimal hand-holding.

Think of orchestration as “meta-agents” conducting other agents.

The first iteration of AI in SaaS was copilots. They were like helpful interns eagerly awaiting your next command. Orchestration platforms are more like project managers. They break down big goals into smaller tasks, assign them to the right AI agents, and keep everything coordinated. This shift is changing how companies design software and user experiences, making things more seamless and less reliant on constant human input.

For designers and product teams, it means thinking about workflows that cross multiple tools, making sure users can trust and control what the AI is doing, and starting small with automation before scaling up.
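The project-manager metaphor can be made concrete. This is a minimal sketch, not any real orchestration platform’s API; the agent names and the rule-based planner are hypothetical stand-ins (a real system would use an LLM to decompose the goal):

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    agent: str  # name of the agent assigned to this task (hypothetical)

def plan(goal: str) -> list[Task]:
    # A real meta-agent would call a model to break the goal down;
    # this fixed routing table just illustrates the shape of the output.
    routing = {
        "research": "research_agent",
        "draft": "writing_agent",
        "review": "review_agent",
    }
    return [Task(f"{step} for: {goal}", agent) for step, agent in routing.items()]

def orchestrate(goal: str) -> list[str]:
    # Break the big goal into tasks, dispatch each to its agent,
    # and collect the results (stubbed here as formatted strings).
    return [f"[{t.agent}] completed: {t.description}" for t in plan(goal)]
```

The point of the shape is that the human states one goal and the orchestrator handles decomposition, assignment, and coordination, which is exactly the reduction in hand-holding the article describes.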

Beyond Copilots: The Rise of the AI Agent Orchestration Platform

AI agent orchestration platforms are replacing simple copilots, enabling enterprises to coordinate autonomous agents for smarter, more scalable workflows.

uxmag.com

Let’s stay on the train of designing AI interfaces for a bit. Here’s a piece by Rob Chappell in UX Collective where he breaks down how to give users control—something I’ve been advocating—when working with AI.

AI systems are transforming the structure of digital interaction. Where traditional software waited for user input, modern AI tools infer, suggest, and act. This creates a fundamental shift in how control moves through an experience or product — and challenges many of the assumptions embedded in contemporary UX methods.

The question is no longer: “What is the user trying to do?”

The more relevant question is: “Who is in control at this moment, and how does that shift?”

Designers need better ways to track how control is initiated, shared, and handed back — focusing not just on what users see or do, but on how agency is negotiated between human and system in real time.

Most design frameworks still assume the user is in the driver’s seat. But AI is changing the rules. The challenge isn’t just mapping user flows or intent—it’s mapping who holds the reins, and how that shifts, moment by moment. Designers need new tools to visualize and shape these handoffs, or risk building systems that feel unpredictable or untrustworthy. The future of UX is about negotiating agency, not just guiding tasks.
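One way to make “who holds the reins” designable is to model control as an explicit state with permitted handoffs, rather than leaving it implicit in flows. This is a minimal sketch of that idea, assuming three invented states and an invented transition table — not a framework from the article:

```python
from enum import Enum, auto

class Control(Enum):
    USER = auto()    # user is driving; the system waits for input
    SHARED = auto()  # AI suggests; the user confirms or edits
    SYSTEM = auto()  # AI acts autonomously on the user's behalf

# Hypothetical design decision: which handoffs this product permits.
ALLOWED = {
    (Control.USER, Control.SHARED),    # user invokes an AI suggestion
    (Control.SHARED, Control.USER),    # user dismisses or takes over
    (Control.SHARED, Control.SYSTEM),  # user approves autonomous action
    (Control.SYSTEM, Control.USER),    # AI hands control back
}

def hand_off(current: Control, requested: Control) -> Control:
    """Grant a control transition only if the design explicitly allows it."""
    return requested if (current, requested) in ALLOWED else current
```

Note that `(USER, SYSTEM)` is deliberately absent: the system can never seize full control without passing through a confirmation state, which is one concrete way to keep the negotiation of agency predictable.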


Beyond journey maps: designing for control in AI UX

When systems act on their own, experience design is about balancing agency — not just user flow

uxdesign.cc

Vitaly Friedman writes a good primer on the design possibilities for users to interact with AI features. As AI capabilities become more and more embedded in the products designers make, we have to become fluent in manipulating AI as a material.

Many products are obsessed with being AI-first. But you might be way better off by being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.


Design Patterns For AI Interfaces

Designing a new AI feature? Where do you even begin? From first steps to design flows and interactions, here’s a simple, systematic approach to building AI experiences that stick.

smashingmagazine.com

Since its debut at Config back in May, Figma has steadily added practical features to Figma Make for product teams. Supabase integration now allows for authentication, data storage, and file uploads. Designers can import design system libraries, which helps maintain visual consistency. Real-time collaboration has improved, giving teams the ability to edit code and prototypes together. The tool now supports backend connections for managing state and storing secrets. Prototypes can be published to custom domains. These changes move Figma Make closer to bridging the gap between design concepts and advanced prototypes.

In my opinion, there’s a stronger relationship between Sites and Make than there is between Make and Design. The Make-generated code may be slightly better than when Sites debuted, but it is still not semantic.

Anyhow, I think Make is great for prototyping and it’s convenient to have it built right into Figma. Julius Patto, writing in UX Collective:

Prompting well in Figma Make isn’t about being clever, it’s about being clear, intentional, and iterative. Think of it as a new literacy in the design toolkit: the better you get at it, the more you unlock AI’s potential without losing your creative control.


How to prompt Figma Make’s AI better for product design

Learn how to use AI in Figma Make with UX intention, from smarter prompts to inclusive flows that reflect real user needs.

uxdesign.cc

Ted Goas, writing in UX Collective:

I predict the early parts of projects, getting from nothing to something, will become shared across roles. For designers looking to branch out, code is a natural next step. I see a future where we’re fixing small bugs ourselves instead of begging an engineer, implementing that animation that didn’t make the sprint but you know would absolutely slap, and even building simple features when engineering resources are tight.

Our new reality is that anyone can make a rough draft.

But that doesn’t mean those drafts are good. That’s where our training and taste come in.

I think Goas is right and it echoes the AI natives post by Elena Verna. I wrote a little more extensively in my newsletter over the weekend.


Designers: We’ll all be design engineers in a year

And that’s a good thing.

uxdesign.cc

Miqdad Jaffer, a product leader at OpenAI, shares his 4D method for building AI products that users actually want. In summary, it’s…

  • Discover: Find and prioritize real user pain points and friction in daily workflows.
  • Design: Make AI features invisible and trustworthy, fitting naturally into users’ existing habits.
  • Develop: Build AI systematically, with robust evaluation and clear plans for failures or edge cases.
  • Deploy: Treat each first use like a product launch, ensuring instant value and building user trust quickly.

OpenAI Product Leader: The 4D Method to Build AI Products That Users Actually Want

An OpenAI product leader's complete playbook to discover real user friction, design invisible AI, plan for failure cases, and go from "cool demo" to "daily habit"

creatoreconomy.so

Geoffrey Litt, Josh Horowitz, Peter van Hardenberg, and Todd Matthews, writing for the research lab Ink & Switch, offer a great, well-thought-out piece on what they call “malleable software.”

We envision a new kind of computing ecosystem that gives users agency as co-creators. … a software ecosystem where anyone can adapt their tools to their needs with minimal friction. … When we say ‘adapting tools’ we include a whole range of customizations, from making small tweaks to existing software, to deep renovations, to creating new tools that work well in coordination with existing ones. Adaptation doesn’t imply starting over from scratch.

In their paper, they use analogies like kitchen tools and tool arrangement in a workshop to explore their idea. With regard to the current crop of AI prompt-to-code tools:

We think these developments hold exciting potential, and represent a good reason to pursue malleable software at this moment. But at the same time, AI code generation alone does not address all the barriers to malleability. Even if we presume that every computer user could perfectly write and edit code, that still leaves open some big questions.

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Kind of a different take than the “personal software” we’ve seen written about before.


Malleable software: Restoring user agency in a world of locked-down apps

The original promise of personal computing was a new kind of clay. Instead, we got appliances: built far away, sealed, unchangeable. In this essay, we envision malleable software: tools that users can reshape with minimal friction to suit their unique needs.

inkandswitch.com

John Calhoun joined Apple 30 years ago as a programmer to work on the Color Picker.

Having never written anything in assembly, you can imagine how overjoyed I was. It’s not actually a very accurate analogy, but imagine someone handing you a book in Chinese and asking you to translate it into English (I’m assuming here that you don’t know Chinese of course). Okay, it wasn’t that hard, but maybe you get a sense that this was quite a hurdle that I would have to overcome.

Calhoun was given an old piece of code and tasked with updating it. Instead, he translated it into a programming language he knew—C—and then decided to add to the feature. He explains:

I disliked HSL as a color space, I preferred HSV (Hue, Saturation, Value) because when I did artwork I was more comfortable thinking about color in those terms. So writing an HSV color picker was on my short list.

When I had my own color picker working I think I found that it was kind of fun. Perhaps for that reason, I struck out again and wrote another color picker. The World Wide Web (www) was a rather new thing that seemed to be catching on, so I naturally thought that an HTML color picker made sense. So I tackled that one as well. It was more or less the RGB color picker but the values were in hexadecimal and a combined RGB string value like “#FFCC33” was made easy to copy for the web designer.
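The “combined RGB string value” Calhoun describes is just the RGB triplet printed in base 16. As a minimal sketch of both pickers’ math (using Python’s `colorsys` as a stand-in — this is obviously not Apple’s Color Picker code):

```python
import colorsys

def hsv_to_hex(h: float, s: float, v: float) -> str:
    """Convert HSV (each component 0..1) to a web hex string like '#FFCC33'."""
    r, g, b = colorsys.hsv_to_rgb(h, s, v)  # the HSV picker's conversion step
    # The HTML picker's output: each channel scaled to 0..255 in hexadecimal.
    return "#{:02X}{:02X}{:02X}".format(round(r * 255), round(g * 255), round(b * 255))
```

For example, full-saturation red (`h=0, s=1, v=1`) comes out as `#FF0000`, the kind of value a web designer of that era would paste straight into an HTML attribute.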

So an engineer decided, all on his own, that he’d add a couple extra features. Including the fun crayon picker:

On a roll, I decided to also knock out a “crayon picker”. At this point, to be clear, the color picker was working and I felt I understood it well enough. As I say, I was kind of just having some fun now.

Screenshot of a classic Mac OS color picker showing the “Crayon Picker” tab. A green color named “Watercress” is selected, replacing the original orange color. Options include CMYK, HLS, and HSV pickers on the left.

And Calhoun makes this point:

It was frankly a thing I liked about working for Apple in those days. The engineers were the ones driving the ship. As I said, I wrote an HSV picker because it was, I thought, a more intuitive color space for artists. I wrote the HTML color picker because of the advent of the web. And I wrote the crayon picker because it seemed to me to be the kind of thing Apple was all about: HSL, RGB — these were kind of nerdy color spaces — a box of crayons is how the rest of us picked colors.

Making software—especially web software—has matured since then, with product managers and designers now collaborating closely with engineers. But with AI coding assistants, the idea of an individual contributor making solo decisions and shipping code might become de rigueur again.

Man sitting outside 2 Infinite Loop, Apple’s former headquarters in Cupertino, holding a book with an ID badge clipped to his jeans.

Almost Fired

I was hired on at Apple in October of 1995. This was what I refer to as Apple’s circling the drain period. Maybe you remember all the doomsaying — speculation that Apple was going to be shuttering soon. It’s a little odd perhaps then that they were hiring at all but apparently Apple reasoned that they nonetheless needed another “graphics engineer” to work on the technology known as QuickdrawGX. I was then a thirty-one year old programmer who lived in Kansas and wrote games for the Macintosh — surely, Apple thought, I would be a good fit for the position.

engineersneedart.com

Read past some of the hyperbole in this piece by Andy Budd. I do think the message is sound.

If you’re working at a fast-growth tech startup, you’re probably already feeling the pressure. Execs want more output with fewer people. Product and engineering are experimenting with AI tooling. And you’re being asked to move faster than ever — with less clarity on what the team should even own.

I will admit that I personally feel this pressure too, albeit not from my employer but from the chatter in our industry. I’m observing younger companies experimenting with process, collapsing roles, and expanding responsibilities.

As AI eats into the production layer, the traditional boundaries between design and engineering are starting to dissolve. Many of the tasks once owned by design will soon be handled by others — or by machines.

Time will tell when this becomes widespread. I think designers will be asked to ship more code. And PMs and engineers may ship small design tweaks.

The reality is, we’ll likely need fewer designers overall. But the ones we do need will be more specialised, more senior, and more strategically valuable than ever before.

You’ll want AI-literate, full-stack designers — people who are comfortable working across the entire product surface, from UX to code, and from interface to infrastructure. Designers who can navigate ambiguity, embrace new tooling, and confidently operate in the blurred space between design and engineering.

I don’t know if I agree with the fewer number of designers. At least not in the near-term. The more AI is embedded into app experiences, the trend—I predict—will go in the opposite direction. The term “AI as material” has been floating around for a few months, but I think its meaning will morph. AI will be the new UI, and thus we need designers to help define those experiences.


Design Leadership in the Age of AI: Seize the Narrative Before It’s Too Late

Design is changing. Fast. AI is transforming the way we work — automating production, collapsing handoffs, and enabling non-designers to ship work that once required a full design team. Like it or not, we’re heading into a world where many design tasks will no longer need a designer. If that fills you with unease, you’re not alone. But here’s the key difference between teams that will thrive and those that won’t: Some design leaders are taking control of the narrative. Others are waiting to be told what’s next.

andybudd.com

Tom Scott, giving advice to startups about how to hire designers:

The worst thing for a designer is to join a company under the premise that they are going to invest in craft, and then they never get serious about it. This results in the designer getting stuck in an average company, making it harder for them to move into a top-tier design-led company afterwards.

The TL;DR is if you’re serious about hiring great talent, put your money where your mouth is, create the right environment and get serious about design like you do with product, eng, marketing etc.

While the post is aimed at startup employers, it’s good for designers to understand the advice they’re being given.


FAQ - Product Design in 2025

How to hire designers, Super ICs, how to integrate AI into your workflow and more.

verifiedinsider.substack.com

Here we go. Figma has just dropped their S-1, or their registration for an initial public offering (IPO).

A financial metrics slide showing Figma's key performance indicators on a dark green background. The metrics displayed are: $821M LTM revenue, 46% YoY revenue growth, 18% non-GAAP operating margin, 91% gross margin, 132% net dollar retention, 78% of Forbes 2000 companies use Figma, and 76% of customers use 2 or more products.

Rollup of stats from Figma’s S-1.

While a lot of the risk factors are boilerplate—legalese to cover their bases—the one about AI is particularly interesting, “Competitive developments in AI and our inability to effectively respond to such developments could adversely affect our business, operating results, and financial condition.”

Developments in AI are already impacting the software industry significantly, and we expect this impact to be even greater in the future. AI has become more prevalent in the markets in which we operate and may result in significant changes in the demand for our platform, including, but not limited to, reducing the difficulty and cost for competitors to build and launch competitive products, altering how consumers and businesses interact with websites and apps and consume content in ways that may result in a reduction in the overall value of interface design, or by otherwise making aspects of our platform obsolete or decreasing the number of designers, developers, and other collaborators that utilize our platform. Any of these changes could, in turn, lead to a loss of revenue and adversely impact our business, operating results, and financial condition.

There’s a lot of uncertainty they’re highlighting:

  • Could competitors use AI to build competing products?
  • Could AI reduce the need for websites and apps which decreases the need for interfaces?
  • Could companies reduce workforces, thus reducing the number of seats they buy?

These are all questions the greater tech industry is asking.


Figma Files Registration Statement for Proposed IPO | Figma Blog

An update on Figma's path to becoming a publicly traded company: our S-1 is now public.

figma.com

In a dual profile, Ben Blumenrose spotlights Phil Vander Broek—whose startup Dopt was acquired last year by Airtable—and Filip Skrzesinski—who is currently working on Subframe—in the Designer Founders newsletter.

One of the lessons Vander Broek learned was to not interview customers just to validate an idea. Interview them to get the idea first. In other words, discover the pain points:

They ran 60+ interviews in three waves. The first 20 conversations with product and growth leaders surfaced a shared pain point: driving user adoption was painfully hard, and existing tools felt bolted on. The next 20 calls helped shape a potential solution through mockups and prototypes—one engineer was so interested he volunteered for weekly co-design sessions. A final batch of 20 calls confirmed their ideal customer was engineers, not PMs.

As for Skrzesinski, he’s learning that being a startup founder isn’t about building the product—it’s about building a business:

But here’s Filip’s counterintuitive advice: “Don’t start a company because you love designing products. Do it in spite of that.”

“You won’t be designing in the traditional sense—you’ll be designing the company’s DNA,” he explains. “It’s the invisible work: how you organize, how you think, how you make decisions. How it feels to work there, to use what you’re making, to believe in it.”


Designer founders on pain-hunting, seeking competitive markets, and why now is the time to build

Phil Vander Broek of Dopt and Filip Skrzesinski of Subframe share hard-earned lessons on getting honest about customer signals, moving faster, and the shift from designing products to companies.

designerfounders.substack.com

Darragh Burke and Alex Kern, software engineers at Figma, writing on the Figma blog:

Building code layers in Figma required us to reconcile two different models of thinking about software: design and code. Today, Figma’s visual canvas is an open-ended, flexible environment that enables users to rapidly iterate on designs. Code unlocks further capabilities, but it’s more structured—it requires hierarchical organization and precise syntax. To reconcile these two models, we needed to create a hybrid approach that honored the rapid, exploratory nature of design while unlocking the full capabilities of code.

The solution turned out to be code layers, actual canvas primitives that can be manipulated just like a rectangle, and respects auto layout properties, opacity, border radius, etc.

The solution we arrived at was to implement code layers as a new canvas primitive. Code layers behave like any other layer, with complete spatial flexibility (including moving, resizing, and reparenting) and seamless layout integration (like placement in autolayout stacks). Most crucially, they can be duplicated and iterated on easily, mimicking the freeform and experimental nature of the visual canvas. This enables the creation and comparison of different versions of code side by side. Typically, making two copies of code for comparison requires creating separate git branches, but with code layers, it’s as easy as pressing ⌥ and dragging. This automatically creates a fork of the source code for rapid riffing.

In my experience, it works as advertised, though the code layer element takes a second to re-render when its spatial properties are edited. That makes sense, since it’s rendering code.
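The primitive Figma describes can be pictured as an ordinary layer that also owns its source text, where duplicating the layer forks the code. This is a hypothetical sketch of the data model, not Figma’s implementation; all names here are invented:

```python
import copy
from dataclasses import dataclass

@dataclass
class CodeLayer:
    """A code layer behaves like any other canvas layer: it has spatial
    properties and visual properties, plus its own copy of source code."""
    x: float
    y: float
    width: float
    height: float
    opacity: float = 1.0
    corner_radius: float = 0.0
    source: str = ""

    def duplicate(self) -> "CodeLayer":
        # The option-drag behavior: the copy gets an independent fork of
        # the source, so edits to one version never touch the other —
        # no git branch required.
        return copy.deepcopy(self)
```

The design choice worth noting is that forking is just layer duplication: side-by-side comparison of two code versions falls out of an interaction designers already know.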


Canvas, Meet Code: Building Figma’s Code Layers

What if you could design and build on the same canvas? Here's how we created code layers to bring design and code together.

figma.com iconfigma.com

If you want an introduction on how to use Cursor as a designer, here’s a must-watch video. It’s just over half an hour long, and Elizabeth Lin walks through several demos in Cursor.

Cursor is much more advanced than the AI prompt-to-code tools I’ve covered here before. But with it, you get much more control because you’re building with actual code. (Of course, sigh, you won’t have sliders and inputs for controlling design.)

preview-1750139600534.png

A designer's guide to Cursor: How to build interactive prototypes with sound, explore visual styles, and transform data visualizations | Elizabeth Lin

How to use Cursor for rapid prototyping: interactive sound elements, data visualization, and aesthetic exploration without coding expertise

open.substack.com iconopen.substack.com

Vincent Nguyen writing for Yanko Design, interviewing Alan Dye, VP of Human Interface Design at Apple:

This technical challenge reveals the core problem Apple set out to solve: creating a digital material that maintains form-changing capabilities while preserving transparency. Traditional UI elements either block content or disappear entirely, but Apple developed a material that can exist in multiple states without compromising visibility of underlying content. Dye’s emphasis on “celebrating user content” exposes Apple’s hierarchy philosophy, where the interface serves content instead of competing with it. When you tap to magnify text, the interface doesn’t resize but stretches and flows like liquid responding to pressure, ensuring your photos, videos, and web content remain the focus while navigation elements adapt around them.

Since the Jony Ive days, Apple’s hardware has always been about celebrating the content. Bezels got smaller. Screens got bigger and brighter. Even the flat design brought on by iOS 7 and eventually adopted by the whole ecosystem was a way to strip away the noise and focus on the content.

Dye’s explanation of the “glass layer versus application layer” architecture provides insight into how Apple technically implements this philosophy. The company has created a distinct separation between functional controls (the glass layer) and user content (the application layer), allowing each to behave according to different rules while maintaining visual cohesion. This architectural decision enables the morphing behavior Dye described, where controls can adapt and change while content remains stable and prominent.

The Apple platform UI today sort of does that, but Liquid Glass seems to take it even further.

Nguyen, writing about his experience using the Music app on the Mac:

The difference from current iOS becomes apparent in specific scenarios. In the current Music app, scrolling through your library feels like moving through flat, static layers. With Liquid Glass, scrolling creates a sense of depth. You can see your album artwork subtly shifting beneath the translucent controls, creating spatial awareness of where interface elements sit in relation to your content. The tab bar doesn’t just scroll with you; it creates gentle optical distortions that make the underlying content feel physically present beneath the glass surface.

preview-1749793045679.jpg

Apple’s Liquid Glass Hands-On: Why Every Interface Element Now Behaves Like Physical Material

Liquid Glass represents more than an aesthetic update or surface-level polish. It functions as a complex behavioral system, precisely engineered to dictate how interface layers react to user input. In practical terms, this means Apple devices now interact with interface surfaces not as static, interchangeable panes, but as dynamic, adaptive materials that fluidly flex and

yankodesign.com iconyankodesign.com

Sara Paul writing for NN/g:

The core principles of UX and product design remain unchanged, and AI amplifies their importance in many ways. To stay indispensable, designers must evolve: adapt to new workflows, deepen their judgment, and double down on the uniquely human skills that AI can’t replace.

They spoke with seven UX practitioners to get their take on AI and the design profession.

I think this is great advice and echoes what I’ve written about previously (here and here):

There is a growing misconception that AI tools can take over design, engineering, and strategy. However, designers offer more than interaction and visual-design skills. They offer judgment, built on expertise that AI cannot replicate.

Our panelists return to a consistent message: across every tech hype cycle, from responsive design to AI, the value of design hasn’t changed. Good design goes deeper than visuals; it requires critical thinking, empathy, and a deep understanding of user needs.

preview-1749705164986.png

The Future-Proof Designer

Top product experts share four strategies for remaining indispensable as AI changes UI design, accelerates feature production, and reshapes data analysis.

nngroup.com iconnngroup.com