Ben Davies-Romano argues that the AI chat box is our new design interface:

Every interaction with a large language model starts the same way: a blinking cursor in a blank text field. That unassuming box is more than an input — it’s the interface between our human intent and the model’s vast, probabilistic brain.

This is where the translation happens. We pour in the nuance, constraints, and context of our ideas; the model converts them into an output. Whether it’s generating words, an image, a video sequence, or an interactive prototype, every request passes through this narrow bridge.

It’s the highest-stakes, lowest-fidelity design surface I’ve ever worked with: a single field that stands between human creativity and an engine capable of reshaping it into almost any form, albeit with all the necessary guidance and expertise applied.

In other words, don’t just say “Make it better,” but guide the AI instead.

That’s why a vague, lazy prompt like “make it better” is the design equivalent of telling a junior designer “make it intuitive” and walking away. You’ll get something generic, safe, and soulless, not because the AI “missed the brief,” but because there was no brief.

Without clear stakes, a defined brand voice, and rich context, the system will fill in the blanks with its default, most average response. And “average” is rarely what design is aiming for.
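To make the contrast concrete, here’s a minimal sketch in Python. The brand, voice, and constraints below are my own invented examples, not from Davies-Romano’s article:

```python
# Hypothetical prompts illustrating "no brief" vs. "a real brief."
# Every detail here (brand, voice, constraints) is invented for illustration.

vague_prompt = "Make it better."

brief_prompt = """\
Role: You are a senior product copywriter for Acme, a B2B fintech brand.
Voice: Plain-spoken and confident; never cute.
Task: Rewrite the onboarding headline below to reduce drop-off.
Constraints: Max 8 words; no jargon; keep the word "account".
Context: Users abandon this screen because the value is unclear.

Headline: "Finish setting up your account to continue"
"""

# The vague prompt leaves stakes, voice, and context blank, so the model
# fills them with its most average defaults; the brief does not.
```

The particular fields don’t matter; supplying the stakes, voice, and context does.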

And he makes a point that designers should be leading the charge on showing others what generative AI can do:

In the age of AI, it shouldn’t be everyone designing, per se. It should be designers using AI as an extension of our craft. Bringing our empathy, our user focus, our discipline of iteration, and our instinct for when to stop generating and start refining. AI is not a replacement for that process; it’s a multiplier when guided by skilled hands.

So, let’s lead. Let’s show that the real power of AI isn’t in what it can generate, but in how we guide it — making it safer, sharper, and more human. Let’s replace the fear and the gimmicks with clarity, empathy, and intentionality.

The blank prompt is our new canvas. And friends, we need to be all over it.

Prompting is designing. And designers need to lead.

Forget “prompt hacks.” Designers have the skills to turn AI from a gimmick into a powerful, human-centred tool if we take the lead.

medium.com

There are over 1,800 font families in Google Fonts. While I’m sure we designers are grateful for the trove of free fonts, the good typefaces in the library are hard to spot.

Brand identity darlings Smith & Diction dropped a catalog of “Usable Google Fonts.” In a LinkedIn post, they wrote, “Screw it, here’s all of the google fonts that are actually good categorized by ‘vibe’.”

Huzzah! It’s in the form of a public Figma file. Enjoy.

Usable Google Fonts

Catalog of "usable" Google fonts as curated by Smith & Diction

figma.com

Christopher K. Wong argues that desirability is a key part of design that helps decide which features users really want:

To give a basic definition, desirability is a strategic part of UX that revolves around a single user question: Have you defined (and solved) the right problem for users?

In other words, before drawing a single box or arrow, have you done your research and discovery to know you’re solving a pain point?

The way the post is written makes it hard to get at a succinct definition, but here’s my take. Desirability is about ensuring a product or feature is truly wanted, needed, and chosen by users—not just visual appeal—making it a core pillar for impactful design decisions and prioritization. And designers should own this.

Want to have a strategic design voice at work? Talk about desirability

Desirability isn’t just about visual appeal: it’s one of the most important user factors

dataanddesign.substack.com

Yesterday, OpenAI launched GPT-5, their latest and greatest model, which replaces the confusing assortment of GPT-4o, o3, o4-mini, etc. with just two options: GPT-5 and GPT-5 Pro. Reasoning is built in, and the new model is smart enough to know when to think harder and when a quick answer suffices.

Simon Willison deep dives into GPT-5, exploring its mix of speed and deep reasoning, massive context limits, and competitive pricing. He sees it as a steady, reliable default for everyday work rather than a radical leap forward:

I’ve mainly explored full GPT-5. My verdict: it’s just good at stuff. It doesn’t feel like a dramatic leap ahead from other LLMs but it exudes competence—it rarely messes up, and frequently impresses me. I’ve found it to be a very sensible default for everything that I want to do. At no point have I found myself wanting to re-run a prompt against a different model to try and get a better result.

It’s a long technical read but interesting nonetheless.

GPT-5: Key characteristics, pricing and model card

I’ve had preview access to the new GPT-5 model family for the past two weeks (see related video) and have been using GPT-5 as my daily-driver. It’s my new favorite …

simonwillison.net

Jay Hoffman, writing in his excellent The History of the Web website, reflects on Kevin Kelly’s 2005 Wired piece that celebrated the explosive growth of blogging—50 million blogs, one created every two seconds—and predicted a future powered by open participation and user-created content. Kelly was right about the power of audiences becoming creators, but he missed the crucial detail: 2005 would mark the peak of that open web participation before everyone moved into centralized platforms.

There are still a lot of blogs, 600 million by some accounts. But they have been supplanted over the years by social media networks. Commerce on the web has consolidated among fewer and fewer sites. Open source continues to be a major backbone to web technologies, but it is underfunded and powered almost entirely by the generosity of its contributors. Open API’s barely exist. Forums and comment sections are finding it harder and harder to beat back the spam. Users still participate in the web each and every day, but it increasingly feels like they do so in spite of the largest web platforms and sites, not because of them.

My blog—this website—is a direct response to the consolidation. This site and its content are owned and operated by me and not stuck behind a login or paywall to be monetized by Meta, Medium, Substack, or Elon Musk. That is the open web.

Hoffman goes on to say, “The web was created for participation, by its nature and by its design. It can’t be bottled up long.” He concludes with:

Independent journalists who create unique and authentic connections with their readers are now possible. Open social protocols that experts truly struggle to understand, is being powered by a community that talks to each other.

The web is just people. Lots of people, connected across global networks. In 2005, it was the audience that made the web. In 2025, it will be the audience again.

We Are Still the Web

Twenty years ago, Kevin Kelly wrote an absolutely seminal piece for Wired. This week is a great opportunity to look back at it.

thehistoryoftheweb.com

Figma is adding to its keyboard shortcuts to improve navigation and selection for power users and for keyboard-only users. It’s a win-win that improves accessibility and efficiency. Sarah Kelley, product marketer at Figma, writes:

For millions, navigating digital tools with a keyboard isn’t just about preference for speed and ergonomics—it’s a fundamental need. …

We’re introducing a series of new features that remove barriers for keyboard-only designers across most Figma products. Users can now pan the canvas, insert objects, and make precise selections quickly and easily. And, with improved screen reader support, these actions are read aloud as users work, making it easier to stay oriented.

Nice work!

Who Says Design Needs a Mouse?

Figma's new accessibility features bring better keyboard and screen reader support to all creators.

figma.com

My former colleague from Organic, Christian Haas—now ECD at YouTube—has been experimenting with AI video generation recently. He’s made a trilogy of short films called AI Jobs.

You can watch part one above 👆, but don’t sleep on parts two and three.

Haas put together a “behind the scenes” article explaining his process. It’s fascinating, and I want to play with video generation myself at some point.

I started with a rough script, but that was just the beginning of a conversation. As I started generating images, I was casting my characters and scouting locations in real time. What the model produced would inspire new ideas, and I would rewrite the script on the fly. This iterative loop continued through every stage. Decisions weren’t locked in; they were fluid. A discovery made during the edit could send me right back to “production” to scout a new location, cast a new character and generate a new shot. This flexibility is one of the most powerful aspects of creating with Gen AI.

It’s a wonderful observation Haas has made—the workflow enabled by gen AI allows for more creative freedom. In any creative endeavor where the production of the final thing is really involved and utilizes a significant amount of labor and materials, be it a film, commercial photography, or software, planning is a huge part. We work hard to spec out everything before a crew of a hundred shows up on set or a team of highly-paid engineers start coding. With gen AI, as shown here with Google’s Veo 3, you have more room for exploration and expression.

UPDATE: I came across this post from Rory Flynn after I published this. He uses diagrams to direct Veo 3.

Behind the Prompts — The Making of "AI Jobs"

Christian Haas created the first film with the simple goal of learning to use the tools. He didn’t know if it would yield anything worth watching but that was not the point.

linkedin.com

For the past year, CPG behemoth Unilever has been “working with marketing services group Brandtech to build up its Beauty AI Studio: a bespoke, in-house system inside its beauty and wellbeing business. Now in place across 18 different markets (the U.S. and U.K. among them), the studio is being used to make assets for paid social, programmatic display inventory and e-commerce usage across brands including Dove Intensive Repair, TRESemme Lamellar Shine and Vaseline Gluta Hya.”

Sam Bradley, writing in Digiday:

The system relies on Pencil Pro, a generative AI application developed by Brandtech Group. The tool draws on several large language models (LLMs), as well as API access to Meta and TikTok for effectiveness measurement. It’s already used by hearing-care brand Amplifon to rapidly produce text and image assets for digital ad channels.

In Unilever’s process, marketers use prompts and their own insights about target audiences to generate images and video based on 3D renders of each product, a practice sometimes referred to as “digital twinning.” Each brand in a given market is assigned a “BrandDNAi” — an AI tool that can retrieve information about brand guidelines and relevant regulations and that provides further limitations to the generative process.

So far, they haven’t used this system to generate AI humans. Yet.

Inside Unilever’s AI beauty marketing assembly line — and its implications for agencies

The CPG giant has created an AI-augmented in-house production system. Could it be a template for others looking to bring AI in house?

digiday.com

Coincidentally, I was considering adding a service designer to my headcount plan when this article came across my feeds. Perfect timing. It’s hard to imagine that service design as a discipline is so young—only since 2012 according to the author.

Joe Foley, writing in Creative Bloq:

As a discipline, service design is still relatively new. A course at the Royal College of Art in London (RCA) only began in 2012 and many people haven’t even heard of the term. But that’s starting to change.

He interviews designer Clive Grinyer, whose new book on service design has just come out. He was co-founder of the design consultancy Tangerine, Director of Design and Innovation for the UK Design Council, and Head of Service Design at the Royal College of Art.

Grinyer:

Great service design is often invisible as it solves problems and removes barriers, which isn’t necessarily noticed as much as a shiny new product. The example of GDS (Government Digital Service) redesigning every government department from a service design perspective and removing many frustrating and laborious aspects of public life from taxing a car to getting a passport, is one of the best.

The key difference between service design and UX is that its end product is not something on a screen:

But service design is not just the experience we have through the glass of a screen or a device: it’s designed from the starting point of the broader objective and may include many other channels and touchpoints. I think it was Colin Burns who said a product is just a portal to a service.

In other words, if you open the aperture of what user experience means, and take on the challenge of designing real-world processes, flows, and interaction—that is service design.

Service design isn't just a hot buzzword, it affects everything in your life

Brands need to catch up fast.

creativebloq.com

Luke Wroblewski, writing in his blog:

Across several of our companies, software development teams are now “out ahead” of design. To be more specific, collaborating with AI agents (like Augment Code) allows software developers to move from concept to working code 10x faster. This means new features become code at a fast and furious pace.

When software is coded this way, however, it (currently at least) lacks UX refinement and thoughtful integration into the structure and purpose of a product. This is the work that designers used to do upfront but now need to “clean up” afterward. It’s like the development process got flipped around. Designers used to draw up features with mockups and prototypes, then engineers would have to clean them up to ship them. Now engineers can code features so fast that designers are ones going back and cleaning them up.

This is what I’ve been secretly afraid of. That we would go back to the times when designers were called in to do cleanup. Wroblewski says:

Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn’t go away.

Yeah, or the feature will get shipped this way and be stuck this way because startups move fast and move on.

My take is that as designers, we need to meet the moment and figure out how to build design systems and best practices into the agentic workflows our developer counterparts are using.

AI Has Flipped Software Development

For years, it's been faster to create mockups and prototypes of software than to ship it to production. As a result, software design teams could stay "ahead" of...

lukew.com

Kendra Albert, writing in her blog post about Heavyweight, a new tool she built to create “extremely law-firm-looking” letters:

Sometimes, you don’t need a lawyer, you just need to look like you have one.

That’s the idea behind Heavyweight, a project that democratizes the aesthetics of (in lieu of access to) legal representation. Heavyweight is a free, online, and open-source tool that lets you give any complaint you have extremely law-firm-looking formatting and letterhead. Importantly, it does so without ever using any language that would actually claim that the letter was written by a lawyer.

Heavyweight: Letters Taken Seriously - Free & Open Legal Letterhead Generator

Generate professional-looking demand letters with style and snootiness

heavyweight.cc

In many ways, this excellent article by Kaustubh Saini for Final Round AI’s blog is a cousin to my essay on the design talent crisis. But it’s about what happens when people “become” developers and only know vibe coding.

The appeal is obvious, especially for newcomers facing a brutal job market. Why spend years learning complex programming languages when you can just describe what you want in plain English? The promise sounds amazing: no technical knowledge required, just explain your vision and watch the AI build it.

In other words, these folks don’t understand the code and, well, bad things can happen.

The most documented failure involves an indie developer who built a SaaS product entirely through vibe coding. Initially celebrating on social media that his “saas was built with Cursor, zero hand written code,” the story quickly turned dark.

Within weeks, disaster struck. The developer reported that “random things are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db.” Being non-technical, he couldn’t debug the security breaches or understand what was going wrong. The application was eventually shut down permanently after he admitted “Cursor keeps breaking other parts of the code.”

This failure illustrates the core problem with vibe coding: it produces developers who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

I don’t foresee something this disastrous with design. I mean, a newbie designer wielding an AI-enabled Canva or Figma can’t tank a business alone because the client will have eyes on it and won’t let through something that doesn’t work. It could be a design atrocity, but it’ll likely be fine.

This *can* happen to a designer using vibe coding tools, however. Full disclosure: I’m one of them. This site is partially vibe-coded. My Severance fan project is entirely vibe-coded.

But back to the idea of a talent crisis. In the developer world, it’s already happening:

The fundamental problem is that vibe coding creates what experts call “pseudo-developers.” These are people who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

In other words, they don’t have the skills necessary to be developers because they can’t do the basics. They can’t debug, don’t understand architecture, have no code review skills, and basically have no fundamental knowledge of what it means to be a programmer. “They miss the foundation that allows developers to adapt to new technologies, understand trade-offs, and make architectural decisions.”

Again, assuming our junior designers have the requisite fundamental design skills, not having spent time developing their craft and strategic skills through experience will be detrimental to them and any org that hires them.

How AI Vibe Coding Is Destroying Junior Developers' Careers

New research shows developers think AI makes them 20% faster but are actually 19% slower. Vibe coding is creating unemployable pseudo-developers who can't debug or maintain code.

finalroundai.com

This is gorgeous work from Collins in their rebrand for Muse Group, developers of music apps like Ultimate Guitar, MuseScore, Audacity, and MuseClass. Paul Moore, writing in It’s Nice That:

One of the issues, [chief creative officer] Nick [Ace] argues, in the design industry is a fixation on branding tech as “software from the future”, relying on literal representations from the 1980s that have created dull and homogeneous visuals that shy away from the timelessness of creativity. “Instead of showcasing technical specs or outlandish interfaces, we centered the brand around the raw experience of musical creation, itself,” says Nick. “Rather than depicting the tools, we visualized the outcomes—the resonance, the harmony, the creative breakthrough that happens when technical barriers disappear.”

Collins rebrand for Muse Group channels the invisible phenomena of experiencing music

Geometric abstraction, dynamic compositions and a distillation of musical feeling sets Collins new project apart from other software brands.

itsnicethat.com

Sonos announced yesterday that interim CEO Tom Conrad was made permanent. From their press release:

Sonos has achieved notable progress under Mr. Conrad’s leadership as Interim CEO. This includes setting a new standard for the quality of Sonos’ software and product experience, clearing the path for a robust new product pipeline, and launching innovative new software enhancements to flagship products Sonos Ace and Arc Ultra.

Conrad surely navigated this minefield well after the disastrous app redesign that wiped almost $500 million from the company’s market value and cost CEO Patrick Spence his job. My sincere hope is that Conrad continues to rebuild Sonos’s reputation by improving their products.

Sonos Appoints Tom Conrad as Chief Executive Officer

Sonos Website

sonos.com

Elizabeth Goodspeed contextualizes today’s growing design influencers against designers-cum-artists like April Greiman and Stefan Sagmeister. Along with Tibor Kalman, Jessica Walsh, and Wade and Leta, all of these designers put themselves into their work.

Other designers ran with similar instincts. 40 Days of Dating, a joint project by Jessica Walsh and Timothy Goodman created in 2013, was presented as a kind of art-directed relationship experiment: two friends, both single, agreed to date each other for 40 days (40 days being the purported time needed to build a habit). The project was presented through highly polished daily updates with lush photography, motion graphics, custom lettering, and a parade of commissioned work from other artists – all accompanied by alarming candid journal entries from both parties about the dates they were going on. It wasn’t exactly a design project in the traditional sense, but it was unmistakably design-led; the relationship itself was the content, but it was design that made it viral.

These self-directed, clientless projects remind me of MFA design theses where design is the medium for self-expression. Bringing it back to 2025, Goodspeed writes:

Designers film themselves in their bedrooms and running errands, narrating design decisions and venting about clients along the way. Just as remote work expects us to perform constant busyness, design influencing demands a continuous performance of creative output. …Brands have jumped in on the trend, too. Where once a designer might have been hired to create packaging or campaigns behind the scenes, many are now brought forward as faces of collaborations – they’re photographed in their studios and interviewed about their process as part of launch. The designer’s body, personality, and public profile become a commercial asset.

And of course, like with all content creators, it becomes a job that just might require more work than it seems.

Influencing can seem like a good, low-lift side-hustle at first. Most designers already have tons of unused work and in-progress sketches to share. Why not just post it and see what happens? But anyone who’s ever had to write captions or cut reels knows that making content is, in fact, harder than it looks. The more energy that goes into showcasing work, the less time there is to actually make work, even if you want to. “Influencing” can quickly become a time suck.

Elizabeth Goodspeed on the rise of the designer as influencer

As social platforms reward visibility, creatives are increasingly expected to make their practice public. Designers are no longer just making work; they are the work. But what started as promotion now risks swallowing design itself.

itsnicethat.com

It’s no secret that I am a big fan of Severance, the Apple TV+ show that has 21 Emmy nominations this year. I made a fan project earlier in the year that generates Outie facts for your Innie.

After launching a teaser campaign back in April, Atomic Keyboard is finally taking pre-orders for their Severance-inspired keyboard just for Macrodata Refinement department users. The show based the MDR terminals on the Data General Dasher D2 terminal from 1977. So this new keyboard includes three layouts:

  1. “Innie” which is show-accurate, meaning no Escape, no Option, and no Control keys, and includes the trackball
  2. “Outie,” a 60% layout that includes modern modifier keys and the trackball
  3. “Dasher” which replicates the DG terminal layout

It’s not cheap. The final retail price will be $899, but they’re offering a pre-Kickstarter price of $599.

MDR Dasher Keyboard | For Work That's Mysterious & Important

Standard equipment for Macrodata Refinement: CNC-milled body, integrated trackball, modular design. Please enjoy each keystroke equally.

mdrkeyboard.com

Stephanie Tyler, in a great essay about remembering what we do as designers:

In an age where AI can generate anything, the question is no longer ‘can it be made?’ but ‘is it worth making?’ The frontier isn’t volume—it’s discernment. And in that shift, taste has become a survival skill.

And this is my favorite passage, because this is how I think about this blog and my newsletter.

There will always be creators. But the ones who stand out in this era are also curators. People who filter their worldview so cleanly that you want to see through their eyes. People who make you feel sharper just by paying attention to what they pay attention to.

Curation is care. It says: I thought about this. I chose it. I didn’t just repost it. I didn’t just regurgitate the trending take. I took the time to decide what was worth passing on.

That’s rare now. And because it’s rare, it’s valuable.

We think of curation as a luxury. But it’s actually maintenance. It’s how you care for your mind. Your attention. Your boundaries.

This blog represents my current worldview, what I’m interested in and exploring. What I’m thinking about now.

Taste Is the New Intelligence

Why curation, discernment, and restraint matter more than ever

wildbarethoughts.com

This is a really well-written piece that pulls the AI + design concepts neatly together. Sharang Sharma, writing in UX Collective:

As AI reshapes how we work, I’ve been asking myself, it’s not just how to stay relevant, but how to keep growing and finding joy in my craft.

In my learning, the new shift requires leveraging three areas

  1. AI tools: Assembling an evolving AI design stack to ship fast
  2. AI fluency: Learning how to design for probabilistic systems
  3. Human-advantage: Strengthening moats like craft, agency and judgment to stay ahead of automation

Together with strategic thinking and human-centric skills, these pillars shape our path toward becoming an AI-native designer.

Sharma connects all the crumbs I’ve been dropping this week.

AI tools + AI fluency + human advantage = AI-native designer

From tools to agency, is this what it would take to thrive as a product designer in the AI era?

uxdesign.cc

From UX Magazine:

Copilots helped enterprises dip their toes into AI. But orchestration platforms and tools are where the real transformation begins — systems that can understand intent, break it down, distribute it, and deliver results with minimal hand-holding.

Think of orchestration as “meta-agents” conducting other agents.

The first iteration of AI in SaaS was copilots. They were like helpful interns eagerly awaiting your next command. Orchestration platforms are more like project managers. They break down big goals into smaller tasks, assign them to the right AI agents, and keep everything coordinated. This shift is changing how companies design software and user experiences, making things more seamless and less reliant on constant human input.

For designers and product teams, it means thinking about workflows that cross multiple tools, making sure users can trust and control what the AI is doing, and starting small with automation before scaling up.
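
The intern-vs-project-manager shift can be sketched in a few lines of Python. This is a toy under my own assumptions: stub functions stand in for real LLM-backed agents, and real platforms add planning models, retries, and human oversight.

```python
from typing import Callable

# Stub "worker" agents; in a real platform each would wrap an LLM or tool.
AGENTS: dict[str, Callable[[str], str]] = {
    "research": lambda task: f"notes for: {task}",
    "draft":    lambda task: f"draft of: {task}",
    "review":   lambda task: f"approved: {task}",
}

def orchestrate(goal: str) -> list[str]:
    """The 'project manager': break a goal into subtasks, route each
    to the right agent, and collect the coordinated results."""
    plan = [("research", goal), ("draft", goal), ("review", goal)]
    return [AGENTS[agent](task) for agent, task in plan]

results = orchestrate("onboarding email")
# In practice each step's output would feed the next agent's input.
```

The fixed plan here is the simplification; the orchestration platforms described above generate the plan itself from the user’s intent.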

Beyond Copilots: The Rise of the AI Agent Orchestration Platform

AI agent orchestration platforms are replacing simple copilots, enabling enterprises to coordinate autonomous agents for smarter, more scalable workflows.

uxmag.com

Let’s stay on the train of designing AI interfaces for a bit. Here’s a piece by Rob Chappell in UX Collective where he breaks down how to give users control—something I’ve been advocating—when working with AI.

AI systems are transforming the structure of digital interaction. Where traditional software waited for user input, modern AI tools infer, suggest, and act. This creates a fundamental shift in how control moves through an experience or product — and challenges many of the assumptions embedded in contemporary UX methods.

The question is no longer: “What is the user trying to do?”

The more relevant question is: “Who is in control at this moment, and how does that shift?”

Designers need better ways to track how control is initiated, shared, and handed back — focusing not just on what users see or do, but on how agency is negotiated between human and system in real time.

Most design frameworks still assume the user is in the driver’s seat. But AI is changing the rules. The challenge isn’t just mapping user flows or intent—it’s mapping who holds the reins, and how that shifts, moment by moment. Designers need new tools to visualize and shape these handoffs, or risk building systems that feel unpredictable or untrustworthy. The future of UX is about negotiating agency, not just guiding tasks.

Beyond journey maps: designing for control in AI UX

When systems act on their own, experience design is about balancing agency — not just user flow

uxdesign.cc

Vitaly Friedman writes a good primer on the design possibilities for users to interact with AI features. As AI capabilities become more and more embedded in the products designers make, we have to become fluent in manipulating AI as material.

Many products are obsessed with being AI-first. But you might be way better off by being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.

Design Patterns For AI Interfaces

Designing a new AI feature? Where do you even begin? From first steps to design flows and interactions, here’s a simple, systematic approach to building AI experiences that stick.

smashingmagazine.com

Speaking of prompt engineering, apparently, there’s a new kind in town called context engineering.

Developer Philipp Schmid writes:

What is context engineering? While “prompt engineering” focuses on crafting the perfect set of instructions in a single text string, context engineering is far broader. Let’s put it simply: “Context Engineering is the discipline of designing and building dynamic systems that provides the right information and tools, in the right format, at the right time, to give a LLM everything it needs to accomplish a task.”
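
To make Schmid’s distinction concrete, here’s a minimal sketch in Python; the data sources and field names are all hypothetical:

```python
# A toy "dynamic system" that assembles the model's input at request time
# from several sources, instead of one hand-crafted prompt string.

def build_context(user_query: str, *, history: list[str],
                  docs: list[str], tools: list[str]) -> str:
    """Gather the right information, in the right format, for this request."""
    recent = history[-3:]    # only the most recent conversation turns
    retrieved = docs[:2]     # e.g. top hits from a retrieval step
    return "\n".join([
        "## Tools available: " + ", ".join(tools),
        "## Relevant documents:", *retrieved,
        "## Recent conversation:", *recent,
        "## User question: " + user_query,
    ])

prompt = build_context(
    "When does my subscription renew?",
    history=["user: hi", "bot: hello", "user: I'm on the Pro plan"],
    docs=["Pro plans renew monthly.", "Billing FAQ..."],
    tools=["get_account", "search_docs"],
)
```

The assembly happens per request, which is the “dynamic systems” part of the definition; prompt engineering would stop at wordsmithing the final string.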

The New Skill in AI is Not Prompting, It's Context Engineering

Context Engineering is the new skill in AI. It is about providing the right information and tools, in the right format, at the right time.

philschmid.de

Since its debut at Config back in May, Figma has steadily added practical features to Figma Make for product teams. Supabase integration now allows for authentication, data storage, and file uploads. Designers can import design system libraries, which helps maintain visual consistency. Real-time collaboration has improved, giving teams the ability to edit code and prototypes together. The tool now supports backend connections for managing state and storing secrets. Prototypes can be published to custom domains. These changes move Figma Make closer to bridging the gap between design concepts and advanced prototypes.

In my opinion, there’s a stronger relationship between Sites and Make than between Make and Design. The Make-generated code may be slightly better than when Sites debuted, but it is still not semantic.

Anyhow, I think Make is great for prototyping and it’s convenient to have it built right into Figma. Julius Patto, writing in UX Collective:

Prompting well in Figma Make isn’t about being clever, it’s about being clear, intentional, and iterative. Think of it as a new literacy in the design toolkit: the better you get at it, the more you unlock AI’s potential without losing your creative control.

How to prompt Figma Make’s AI better for product design

Learn how to use AI in Figma Make with UX intention, from smarter prompts to inclusive flows that reflect real user needs.

uxdesign.cc

In case you missed it, there’s been a major shift in the AI tool landscape.

On Friday, OpenAI’s $3 billion offer to acquire AI coding tool Windsurf expired. Windsurf is the Pepsi to Cursor’s Coke. They’re both IDEs, the desktop applications that software developers use to write code. Think of them as supercharged text editors with AI built in.

On Friday evening, Google announced that it had hired Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and several key researchers for $2.4 billion.

On Monday, Cognition, the company behind Devin, the self-described “AI engineer” announced that it had acquired Windsurf for an undisclosed sum, but noting that its remaining 250 employees will “participate financially in this deal.”

Why does this matter to designers?

The AI tools market is changing very rapidly. With AI helping to write these applications, their numbers and features are always increasing—or in this case, maybe consolidating. Choose wisely before investing too deeply into one particular tool. The one piece of advice I would give here is to avoid lock-in. Don’t get tied to a vendor. Ensure that your tool of choice can export your work—the code.

Jason Lemkin has more on the business side of things and how it affects VC-backed startups.

Did Windsurf Sell Too Cheap? The Wild 72-Hour Saga and AI Coding Valuations

The last 72 hours in AI coding have been nothing short of extraordinary. What started as a potential $3 billion OpenAI acquisition of Windsurf ended with Google poaching Windsurf’s CEO and co…

saastr.com