74 posts tagged with “ai”

Illustration of diverse designers collaborating around a table with laptops and design materials, rendered in a vibrant style with coral, yellow, and teal colors

Five Practical Strategies for Entry-Level Designers in the AI Era

In Part I of this series on the design talent crisis, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market. In Part II, I discussed how industry and education need to change in order to ensure the survival of the profession.

Part III: Adaptation Through Action 

Like most Gen X kids, I grew up with a lot of freedom to roam. By fifth grade, I was regularly out of the house. My friends and I would go to an arcade in San Francisco’s Fisherman’s Wharf called The Doghouse, where naturally, they served hot dogs alongside their Joust and TRON cabinets. But we would invariably go to the Taco Bell across the street for cheap pre-dinner eats. In seventh grade—this is 1986—I walked by a ComputerLand on Van Ness Avenue and noticed a little beige computer with a built-in black and white CRT. The Macintosh screen was actually pale blue and black, but more importantly, showed MacPaint. It was my first exposure to creating graphics on a computer, which would eventually become my career.

Desktop publishing had officially begun a year earlier with the introduction of Aldus PageMaker and the Apple LaserWriter printer for the Mac, which enabled WYSIWYG (What You See Is What You Get) page layouts and high-quality printed output. A generation of designers who had created layouts using paste-up techniques with tools and materials like X-Acto knives, Rapidograph pens, rubyliths, photostats, and rubber cement had to start learning new skills. Typesetters would eventually be phased out in favor of QuarkXPress. A decade of transition would revolutionize the industry, only to be upended again by the web.

My former colleague from Organic, Christian Haas—now ECD at YouTube—has been experimenting with AI video generation recently. He’s made a trilogy of short films called AI Jobs.

You can watch part one above 👆, but don’t sleep on parts two and three.

Haas put together a “behind the scenes” article explaining his process. It’s fascinating, and I want to play with video generation myself at some point.

I started with a rough script, but that was just the beginning of a conversation. As I started generating images, I was casting my characters and scouting locations in real time. What the model produced would inspire new ideas, and I would rewrite the script on the fly. This iterative loop continued through every stage. Decisions weren't locked in; they were fluid. A discovery made during the edit could send me right back to "production" to scout a new location, cast a new character and generate a new shot. This flexibility is one of the most powerful aspects of creating with Gen AI.

It’s a wonderful observation Haas has made—the workflow enabled by gen AI allows for more creative freedom. In any creative endeavor where producing the final thing is complex and requires significant labor and materials, be it a film, commercial photography, or software, planning is a huge part of the work. We work hard to spec out everything before a crew of a hundred shows up on set or a team of highly paid engineers starts coding. With gen AI, as shown here with Google’s Veo 3, you have more room for exploration and expression.

UPDATE: I came across this post from Rory Flynn after I published this. He uses diagrams to direct Veo 3.


Behind the Prompts — The Making of "AI Jobs"

Christian Haas created the first film with the simple goal of learning to use the tools. He didn’t know if it would yield anything worth watching but that was not the point.

linkedin.com
Portraits of five recent design graduates. Top row, left to right: Ashton Landis, wearing a black sleeveless top with long blonde hair against a dark background; Erika Kim, outdoors in front of a mountain at sunset, smiling in a fleece-collared jacket; Emma Haines, smiling and looking over her shoulder in a light blazer, outdoors. Bottom row, left to right: Leah Ray, in a black-and-white portrait wearing a black turtleneck, looking ahead; Benedict Allen, smiling in a black jacket with layered necklaces against a light background.

Meet the 5 Recent Design Grads and 5 Design Educators

For my series on the Design Talent Crisis (see Part I, Part II, and Part III) I interviewed five recent graduates from California College of the Arts (CCA) and San Diego City College. I’m an alum of CCA and I used to teach at SDCC. There’s a mix of folks from both the graphic design and interaction design disciplines.

Meet the Grads

If these enthusiastic and immensely talented designers are available and you’re in a position to hire, please reach out to them!

Benedict Allen

For the past year, CPG behemoth Unilever has been “working with marketing services group Brandtech to build up its Beauty AI Studio: a bespoke, in-house system inside its beauty and wellbeing business. Now in place across 18 different markets (the U.S. and U.K. among them), the studio is being used to make assets for paid social, programmatic display inventory and e-commerce usage across brands including Dove Intensive Repair, TRESemme Lamellar Shine and Vaseline Gluta Hya.”

Sam Bradley, writing in Digiday:

The system relies on Pencil Pro, a generative AI application developed by Brandtech Group. The tool draws on several large language models (LLMs), as well as API access to Meta and TikTok for effectiveness measurement. It’s already used by hearing-care brand Amplifon to rapidly produce text and image assets for digital ad channels.

In Unilever’s process, marketers use prompts and their own insights about target audiences to generate images and video based on 3D renders of each product, a practice sometimes referred to as “digital twinning.” Each brand in a given market is assigned a “BrandDNAi” — an AI tool that can retrieve information about brand guidelines and relevant regulations and that provides further limitations to the generative process.

So far, they haven’t used this system to generate AI humans. Yet.

Inside Unilever’s AI beauty marketing assembly line — and its implications for agencies

The CPG giant has created an AI-augmented in-house production system. Could it be a template for others looking to bring AI in house?

digiday.com
Human chain of designers supporting each other to reach laptops and design tools floating above them, illustrating collaborative mentorship and knowledge transfer in the design industry.

Why Young Designers Are the Antidote to AI Automation

In Part I of this series, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market.

Part II: Building New Ladders 

When I met Benedict Allen, he had just finished Portfolio Review a week earlier. That’s the big show all the design students in the Graphic Design program at San Diego City College work toward. It’s a nice event that brings out the local design community, where seasoned professionals review the portfolios of the graduating students.

Allen was all smiles and relief. “I want to dabble in different aspects of design because the principles are generally the same.” He went on to mention how he wants to start a fashion brand someday, DJ, and try 3D. “I just want to test and try things and just have fun! Of course, I’ll have my graphic design job, but I don’t want that to be the end. Like when the workday ends, that’s not the end of my creativity.” He was bursting with enthusiasm.

Luke Wroblewski, writing in his blog:

Across several of our companies, software development teams are now "out ahead" of design. To be more specific, collaborating with AI agents (like Augment Code) allows software developers to move from concept to working code 10x faster. This means new features become code at a fast and furious pace.

When software is coded this way, however, it (currently at least) lacks UX refinement and thoughtful integration into the structure and purpose of a product. This is the work that designers used to do upfront but now need to "clean up" afterward. It's like the development process got flipped around. Designers used to draw up features with mockups and prototypes, then engineers would have to clean them up to ship them. Now engineers can code features so fast that designers are the ones going back and cleaning them up.

This is what I’ve been secretly afraid of. That we would go back to the times when designers were called in to do cleanup. Wroblewski says:

Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn't go away.

Yeah, or the feature will get shipped this way and be stuck this way because startups move fast and move on.

My take is that as designers, we need to meet the moment and figure out how to build design systems and best practices into the agentic workflows our developer counterparts are using.
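
One concrete way to do that, sketched below with entirely hypothetical names, is to publish the design system as a typed token module plus a small guardrail check that a coding agent (or CI) can run over generated code. This is an illustration of the idea, not any team’s actual setup.

```typescript
// Hypothetical sketch: design tokens as the single source of truth that
// both humans and coding agents are told to import, plus a tiny check
// that flags hard-coded colors bypassing the token set.

export const tokens = {
  color: {
    primary: "#0f62fe",
    surface: "#ffffff",
    textPrimary: "#161616",
  },
  space: { xs: 4, sm: 8, md: 16, lg: 24 },
  radius: { sm: 4, md: 8 },
} as const;

// Returns any six-digit hex colors in generated source that aren't tokens.
export function findRawHexValues(source: string): string[] {
  const allowed = new Set(
    Object.values(tokens.color).map((c) => c.toLowerCase()),
  );
  const hexes = source.match(/#[0-9a-fA-F]{6}\b/g) ?? [];
  return hexes.filter((hex) => !allowed.has(hex.toLowerCase()));
}
```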


AI Has Flipped Software Development

For years, it's been faster to create mockups and prototypes of software than to ship it to production. As a result, software design teams could stay "ahead" of...

lukew.com

In many ways, this excellent article by Kaustubh Saini for Final Round AI’s blog is a cousin to my essay on the design talent crisis. But it’s about what happens when people “become” developers and only know vibe coding.

The appeal is obvious, especially for newcomers facing a brutal job market. Why spend years learning complex programming languages when you can just describe what you want in plain English? The promise sounds amazing: no technical knowledge required, just explain your vision and watch the AI build it.

In other words, these folks don’t understand the code and, well, bad things can happen.

The most documented failure involves an indie developer who built a SaaS product entirely through vibe coding. Initially celebrating on social media that his "saas was built with Cursor, zero hand written code," the story quickly turned dark.

Within weeks, disaster struck. The developer reported that "random things are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db." Being non-technical, he couldn't debug the security breaches or understand what was going wrong. The application was eventually shut down permanently after he admitted "Cursor keeps breaking other parts of the code."

This failure illustrates the core problem with vibe coding: it produces developers who can generate code but can't understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

I don’t foresee something this disastrous with design. I mean, a newbie designer wielding an AI-enabled Canva or Figma can’t tank a business alone because the client will have eyes on it and won’t let through something that doesn’t work. It could be a design atrocity, but it’ll likely be fine.

This can happen to a designer using vibe coding tools, however. Full disclosure: I’m one of them. This site is partially vibe-coded. My Severance fan project is entirely vibe-coded.

But back to the idea of a talent crisis. In the developer world, it’s already happening:

The fundamental problem is that vibe coding creates what experts call "pseudo-developers." These are people who can generate code but can't understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

In other words, they don’t have the skills necessary to be developers because they can’t do the basics. They can’t debug, don’t understand architecture, have no code review skills, and basically have no fundamental knowledge of what it means to be a programmer. “They miss the foundation that allows developers to adapt to new technologies, understand trade-offs, and make architectural decisions.”

Again, even assuming our junior designers have the requisite fundamental design skills, skipping the years of experience that develop craft and strategic judgment will be detrimental to them and to any org that hires them.


How AI Vibe Coding Is Destroying Junior Developers' Careers

New research shows developers think AI makes them 20% faster but are actually 19% slower. Vibe coding is creating unemployable pseudo-developers who can't debug or maintain code.

finalroundai.com
Illustration of people working on laptops atop tall ladders and multi-level platforms, symbolizing hierarchy and competition, set against a bold, abstract sunset background.

The Design Industry Created Its Own Talent Crisis. AI Just Made It Worse.

This is the first part in a three-part series about the design talent crisis. Read Part II and Part III.

Part I: The Vanishing Bottom Rung 

Erika Kim’s path to UX design represents a familiar pandemic-era pivot story, yet one that reveals deeper currents about creative work and economic necessity. Armed with a 2020 film and photography degree from UC Riverside, she found herself working gig photography—graduations, band events—when the creative industries collapsed. The work satisfied her artistic impulses but left her craving what she calls “structure and stability,” leading her to UX design. The field struck her as an ideal synthesis: “I’m creating solutions for companies. I’m working with them to figure out what they want, and then taking that creative input and trying to make something that works best for them.”

Since graduating from the interaction design program at San Diego City College a year ago, she’s had three internships and works retail part-time to pay the bills. “I’ve been in survival mode,” she admits. On paper, she’s a great candidate for any junior position. Speaking with her reveals a very thoughtful and resourceful young designer. Why hasn’t she been able to land a full-time job? What’s going on in the design job market? 

Retro-style robot standing at a large control panel filled with buttons, switches, and monitors displaying futuristic data.

The Era of the AI Browser Is Here

For nearly three years, Arc from The Browser Company has been my daily driver. To be sure, there was a little bit of a learning curve. Tabs disappeared after a day unless you pinned them. Then they became almost like bookmarks. Tabs were on the left side of the window, not at the top. Spaces let me organize my tabs based on use cases like personal, work, or finances. I could switch between tabs using control-Tab and saw little thumbnails of the pages, similar to the app switcher on my Mac. Shift-command-C copied the current page’s URL. 

All these little interface ideas added up to a productivity machine for web jockeys like myself. And so, I was saddened to hear in May that The Browser Company stopped actively developing Arc in favor of a new AI-powered browser called Dia. (They are keeping Arc updated with maintenance releases.)

They had started beta-testing Dia with college students first and just recently opened it up to Arc members. I finally got access to Dia a few weeks ago. 

But before diving into Dia, I should mention I also got access to another AI browser, Perplexity’s Comet, about a week ago. I’m on their Pro plan but somehow got an invite in my email. I had thought it was limited to those on their much more expensive Max plan. Shhh.

This is a really well-written piece that pulls the AI + design concepts neatly together. Sharang Sharma, writing in UX Collective:

As AI reshapes how we work, I’ve been asking myself, it’s not just how to stay relevant, but how to keep growing and finding joy in my craft.

In my learning, the new shift requires leveraging three areas
1. AI tools: Assembling an evolving AI design stack to ship fast
2. AI fluency: Learning how to design for probabilistic systems
3. Human-advantage: Strengthening moats like craft, agency and judgment to stay ahead of automation

Together with strategic thinking and human-centric skills, these pillars shape our path toward becoming an AI-native designer.

Sharma connects all the crumbs I’ve been dropping this week:


AI tools + AI fluency + human advantage = AI-native designer

From tools to agency, is this what it would take to thrive as a product designer in the AI era?

uxdesign.cc
Copilots helped enterprises dip their toes into AI. But orchestration platforms and tools are where the real transformation begins — systems that can understand intent, break it down, distribute it, and deliver results with minimal hand-holding.

Think of orchestration as “meta-agents” conducting other agents.

The first iteration of AI in SaaS was copilots. They were like helpful interns eagerly awaiting your next command. Orchestration platforms are more like project managers. They break down big goals into smaller tasks, assign them to the right AI agents, and keep everything coordinated. This shift is changing how companies design software and user experiences, making things more seamless and less reliant on constant human input.

For designers and product teams, it means thinking about workflows that cross multiple tools, making sure users can trust and control what the AI is doing, and starting small with automation before scaling up.
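
To make the project-manager metaphor concrete, here is a minimal sketch of an orchestration loop, with invented names and no particular platform’s API in mind: a meta-agent decomposes a goal into tasks and delegates each one to a worker agent.

```typescript
// Illustrative sketch only: a "meta-agent" plans tasks and dispatches
// them to worker agents, keeping a visible trail of the delegation.

type Role = "research" | "copy" | "design";

interface Task {
  id: string;
  description: string;
  assignTo: Role;
}

interface Agent {
  name: string;
  run(task: Task): Promise<string>;
}

async function orchestrate(
  goal: string,
  plan: (goal: string) => Task[], // e.g. an LLM call that decomposes the goal
  agents: Record<Role, Agent>,
): Promise<Record<string, string>> {
  const results: Record<string, string> = {};
  for (const task of plan(goal)) {
    const agent = agents[task.assignTo];
    console.log(`Delegating "${task.description}" to ${agent.name}`);
    results[task.id] = await agent.run(task);
  }
  return results;
}
```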

Beyond Copilots: The Rise of the AI Agent Orchestration Platform

AI agent orchestration platforms are replacing simple copilots, enabling enterprises to coordinate autonomous agents for smarter, more scalable workflows.

uxmag.com

Let’s stay on the train of designing AI interfaces for a bit. Here’s a piece by Rob Chappell in UX Collective where he breaks down how to give users control—something I’ve been advocating—when working with AI.

AI systems are transforming the structure of digital interaction. Where traditional software waited for user input, modern AI tools infer, suggest, and act. This creates a fundamental shift in how control moves through an experience or product — and challenges many of the assumptions embedded in contemporary UX methods.

The question is no longer:
“What is the user trying to do?”

The more relevant question is:
“Who is in control at this moment, and how does that shift?”

Designers need better ways to track how control is initiated, shared, and handed back — focusing not just on what users see or do, but on how agency is negotiated between human and system in real time.

Most design frameworks still assume the user is in the driver’s seat. But AI is changing the rules. The challenge isn’t just mapping user flows or intent—it’s mapping who holds the reins, and how that shifts, moment by moment. Designers need new tools to visualize and shape these handoffs, or risk building systems that feel unpredictable or untrustworthy. The future of UX is about negotiating agency, not just guiding tasks.
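
One way to make that negotiation designable is to model control as explicit state the interface can render and log. The sketch below uses invented names and is only meant to illustrate the idea of tracking who holds control and how it gets handed off.

```typescript
// Illustrative sketch: control as explicit state, updated by handoff events.

type Controller = "user" | "system";

interface ControlState {
  holder: Controller;
  reason: string;        // why control sits where it does right now
  canHandBack: boolean;  // whether a handoff affordance is available
}

type HandoffEvent =
  | { type: "system_suggests"; summary: string } // AI proposes; user stays in control
  | { type: "user_delegates"; scope: string }    // user explicitly hands a task to the AI
  | { type: "system_returns"; result: string };  // AI finishes and returns control

function reduceControl(state: ControlState, event: HandoffEvent): ControlState {
  switch (event.type) {
    case "system_suggests":
      return { ...state, holder: "user", reason: `reviewing suggestion: ${event.summary}`, canHandBack: true };
    case "user_delegates":
      return { ...state, holder: "system", reason: `acting on: ${event.scope}`, canHandBack: true };
    case "system_returns":
      return { ...state, holder: "user", reason: `reviewing result: ${event.result}`, canHandBack: false };
  }
}
```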


Beyond journey maps: designing for control in AI UX

When systems act on their own, experience design is about balancing agency — not just user flow

uxdesign.cc

Vitaly Friedman writes a good primer on the design possibilities for users to interact with AI features. As AI capabilities become more and more embedded in the products designers make, we have to become facile in manipulating AI as material.

Many products are obsessed with being AI-first. But you might be way better off by being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.

Design Patterns For AI Interfaces

Designing a new AI feature? Where do you even begin? From first steps to design flows and interactions, here’s a simple, systematic approach to building AI experiences that stick.

smashingmagazine.com

Speaking of prompt engineering, apparently, there’s a new kind in town called context engineering.

Developer Philipp Schmid writes:

What is context engineering? While "prompt engineering" focuses on crafting the perfect set of instructions in a single text string, context engineering is far broader. Let's put it simply:
“Context Engineering is the discipline of designing and building dynamic systems that provide the right information and tools, in the right format, at the right time, to give an LLM everything it needs to accomplish a task.”
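
A rough sketch of what that looks like in practice, with illustrative names only: the prompt the model sees is assembled at request time from instructions, retrieved documents, tool descriptions, and conversation history, rather than hand-written once.

```typescript
// Illustrative sketch: context assembled dynamically, not a fixed prompt.

interface ContextParts {
  systemInstructions: string;
  retrievedDocs: string[];     // e.g. results from search or a vector store
  toolDescriptions: string[];  // tools the model is allowed to call
  conversationHistory: string[];
  userMessage: string;
}

function buildContext(parts: ContextParts): string {
  return [
    parts.systemInstructions,
    "## Relevant documents",
    ...parts.retrievedDocs,
    "## Available tools",
    ...parts.toolDescriptions,
    "## Conversation so far",
    ...parts.conversationHistory,
    "## User",
    parts.userMessage,
  ].join("\n\n");
}
```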

The New Skill in AI is Not Prompting, It's Context Engineering

Context Engineering is the new skill in AI. It is about providing the right information and tools, in the right format, at the right time.

philschmid.de

In case you missed it, there’s been a major shift in the AI tool landscape.

On Friday, OpenAI’s $3 billion offer to acquire AI coding tool Windsurf expired. Windsurf is the Pepsi to Cursor’s Coke. They’re both IDEs: the desktop applications that software developers use to write code. Think of them as supercharged text editors with AI built in.

On Friday evening, Google announced that it had hired Windsurf's CEO Varun Mohan, co-founder Douglas Chen, and several key researchers for $2.4 billion.

On Monday, Cognition, the company behind Devin, the self-described “AI engineer,” announced that it had acquired Windsurf for an undisclosed sum, noting that its remaining 250 employees will “participate financially in this deal.”

Why does this matter to designers?

The AI tools market is changing very rapidly. With AI helping to write these applications, their numbers and features are always increasing—or in this case, maybe consolidating. Choose wisely before investing too deeply into one particular tool. The one piece of advice I would give here is to avoid lock-in. Don’t get tied to a vendor. Ensure that your tool of choice can export your work—the code.

Jason Lemkin has more on the business side of things and how it affects VC-backed startups.


Did Windsurf Sell Too Cheap? The Wild 72-Hour Saga and AI Coding Valuations

The last 72 hours in AI coding have been nothing short of extraordinary. What started as a potential $3 billion OpenAI acquisition of Windsurf ended with Google poaching Windsurf’s CEO and co…

saastr.com

Ted Goas, writing in UX Collective:

I predict the early parts of projects, getting from nothing to something, will become shared across roles. For designers looking to branch out, code is a natural next step. I see a future where we’re fixing small bugs ourselves instead of begging an engineer, implementing that animation that didn’t make the sprint but you know would absolutely slap, and even building simple features when engineering resources are tight.

Our new reality is that anyone can make a rough draft.

But that doesn’t mean those drafts are good. That’s where our training and taste come in.

I think Goas is right, and it echoes Elena Verna’s post on AI-native employees. I wrote a little more extensively about this in my newsletter over the weekend.


Designers: We’ll all be design engineers in a year

And that’s a good thing.

uxdesign.cc

Miqdad Jaffer, a product leader at OpenAI, shares his 4D method for building AI products that users want. In summary, it’s…

  • Discover: Find and prioritize real user pain points and friction in daily workflows.
  • Design: Make AI features invisible and trustworthy, fitting naturally into users’ existing habits.
  • Develop: Build AI systematically, with robust evaluation and clear plans for failures or edge cases.
  • Deploy: Treat each first use like a product launch, ensuring instant value and building user trust quickly.

OpenAI Product Leader: The 4D Method to Build AI Products That Users Actually Want

An OpenAI product leader's complete playbook to discover real user friction, design invisible AI, plan for failure cases, and go from "cool demo" to "daily habit"

creatoreconomy.so

Geoffrey Litt, Josh Horowitz, Peter van Hardenberg, and Todd Matthews, writing a paper for the research lab Ink & Switch, offer a great, well-thought-out piece on what they call “malleable software.”

We envision a new kind of computing ecosystem that gives users agency as co-creators. … a software ecosystem where anyone can adapt their tools to their needs with minimal friction. … When we say ‘adapting tools’ we include a whole range of customizations, from making small tweaks to existing software, to deep renovations, to creating new tools that work well in coordination with existing ones. Adaptation doesn’t imply starting over from scratch.

In their paper, they use analogies like kitchen tools and tool arrangement in a workshop to explore their idea. With regard to the current crop of AI prompt-to-code tools:

We think these developments hold exciting potential, and represent a good reason to pursue malleable software at this moment. But at the same time, AI code generation alone does not address all the barriers to malleability. Even if we presume that every computer user could perfectly write and edit code, that still leaves open some big questions.

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Kind of a different take than the “personal software” idea we’ve seen written about before.


Malleable software: Restoring user agency in a world of locked-down apps

The original promise of personal computing was a new kind of clay. Instead, we got appliances: built far away, sealed, unchangeable. In this essay, we envision malleable software: tools that users can reshape with minimal friction to suit their unique needs.

inkandswitch.com

This post has been swimming in my head since I read it. Elena Verna, who joined Lovable just over a month ago to lead marketing and growth, writing in her newsletter, observes that everyone at the company is an AI-native employee. “An AI-native employee isn’t someone who ‘uses AI.’ It’s someone who defaults to AI,” she says.

On how they ship product:

Here, when someone wants to build something (anything) - from internal tools, to marketing pages, to writing production code - they turn to AI and... build it. That’s it.

No headcount asks. No project briefs. No handoffs. Just action.

At Lovable, we’re mostly building with… Lovable. Our Shipped site is built on Lovable. I’m wrapping hackathon sponsorship intake form in Lovable as we speak. Internal tools like credit giveaways and influencer management? Also Lovable (soon to be shared in our community projects so ya’ll can remix them too). On top of that, engineering is using AI extensively to ship code fast (we don’t even really have Product Managers, so our engineers act as them).

I’ve been hearing about more and more companies operating this way. Crazy time to be alive.

More on this topic in a future long-form post.


The rise of the AI-native employee

Managers without vertical expertise, this is your extinction call

elenaverna.com

Read past some of the hyperbole in this piece by Andy Budd. I do think the message is sound.

If you’re working at a fast-growth tech startup, you’re probably already feeling the pressure. Execs want more output with fewer people. Product and engineering are experimenting with AI tooling. And you’re being asked to move faster than ever — with less clarity on what the team should even own.

I will admit that I personally feel this pressure too, albeit not from my employer but from the chatter in our industry. I’m watching younger companies experiment with process, collapse roles, and expand responsibilities.

As AI eats into the production layer, the traditional boundaries between design and engineering are starting to dissolve. Many of the tasks once owned by design will soon be handled by others — or by machines.

Time will tell when this becomes widespread. I think designers will be asked to ship more code. And PMs and engineers may ship small design tweaks.

The reality is, we’ll likely need fewer designers overall. But the ones we do need will be more specialised, more senior, and more strategically valuable than ever before.

You’ll want AI-literate, full-stack designers — people who are comfortable working across the entire product surface, from UX to code, and from interface to infrastructure. Designers who can navigate ambiguity, embrace new tooling, and confidently operate in the blurred space between design and engineering.

I don't know if I agree that we'll need fewer designers, at least not in the near term. The more AI is embedded into app experiences, the more the trend, I predict, will go in the opposite direction. The term "AI as material" has been floating around for a few months, but I think its meaning will morph. AI will be the new UI, and thus we'll need designers to help define those experiences.


Design Leadership in the Age of AI: Seize the Narrative Before It’s Too Late

Design is changing. Fast. AI is transforming the way we work — automating production, collapsing handoffs, and enabling non-designers to ship work that once required a full design team. Like it or not, we’re heading into a world where many design tasks will no longer need a designer. If that fills you with unease, you’re not alone. But here’s the key difference between teams that will thrive and those that won’t: Some design leaders are taking control of the narrative. Others are waiting to be told what’s next.

andybudd.com
Stylized artwork showing three figures in profile - two humans and a metallic robot skull - connected by a red laser line against a purple cosmic background with Earth below.

Beyond Provocative: How One AI Company’s Ad Campaign Betrays Humanity

I was in London last week with my family and spotted this ad in a Tube car. With the headline “Humans Were the Beta Test,” this is for Artisan, a San Francisco-based startup peddling AI-powered “digital workers.” Specifically an AI agent that will perform sales outreach to prospects, etc.

London Underground tube car advertisement showing "Humans Were the Beta Test" with subtitle "The Era of AI Employees Is Here" and Artisan company branding on a purple space-themed background.

Artisan ad as seen in London, June 2025

I’ve long left the Bay Area, but I know that the 101 highway is littered with cryptic billboards from tech companies, where the copy only makes sense to people in the tech industry, which, to be fair, is a large part of the Bay Area economy. Artisan is infamous for its “Stop Hiring Humans” campaign, which went up late last year. Being based in San Diego, much further south in California, I had no idea. Artisan wasn’t even on my radar.

This piece from Mike Schindler is a good reminder that a lot of the content we see on LinkedIn is written for engagement. It's a double-edged sword, isn't it? We want our posts to be read, commented upon, and shared. We see the patterns that get a lot of reactions and we mimic them.

We’re losing ourselves to our worst instincts. Not because we’re doomed, but because we’re treating this moment like a game of hot takes and hustle. But right now is actually a rare and real opportunity for a smarter, more generous conversation — one that helps our design community navigate uncertainty with clarity, creativity, and a sense of shared agency.

But the point that Schindler is making is this: AI is a fundamental shift in the technology landscape that demands nuanced and thoughtful discourse. There's a lot of hype. But as technologists, designers, and makers of products, we really need to lead rather than scare.

I've tried to do that in my writing (though I may not always be successful). I hope you do too.

He has this handy table too…

Chart titled “AI & UX Discourse Detox” compares unhealthy discourse (e.g., FOMO, gaslighting, clickbait, hot takes, flexing, elitism) with healthy alternatives (e.g., curiosity-driven learning, critical perspective, nuanced storytelling, thoughtful dialogue, shared discovery, community stewardship). Created by Mike Schindler.

Designed by Mike Schindler (mschindler.com)


The broken rhetoric of AI

A detox guide for designers navigating today’s AI discourse

uxdesign.cc

Darragh Burke and Alex Kern, software engineers at Figma, writing on the Figma blog:

Building code layers in Figma required us to reconcile two different models of thinking about software: design and code. Today, Figma's visual canvas is an open-ended, flexible environment that enables users to rapidly iterate on designs. Code unlocks further capabilities, but it’s more structured—it requires hierarchical organization and precise syntax. To reconcile these two models, we needed to create a hybrid approach that honored the rapid, exploratory nature of design while unlocking the full capabilities of code.

The solution turned out to be code layers: actual canvas primitives that can be manipulated just like a rectangle and that respect auto layout properties, opacity, border radius, and so on.

The solution we arrived at was to implement code layers as a new canvas primitive. Code layers behave like any other layer, with complete spatial flexibility (including moving, resizing, and reparenting) and seamless layout integration (like placement in autolayout stacks). Most crucially, they can be duplicated and iterated on easily, mimicking the freeform and experimental nature of the visual canvas. This enables the creation and comparison of different versions of code side by side. Typically, making two copies of code for comparison requires creating separate git branches, but with code layers, it’s as easy as pressing ⌥ and dragging. This automatically creates a fork of the source code for rapid riffing.

In my experience, it works as advertised, though the code layer element takes a second to render when its spatial properties are edited. Makes sense, though, since it’s rendering code.
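
As a mental model (and only that; this is not Figma’s actual implementation or plugin API), you can think of a code layer as an ordinary canvas node that carries a source-code payload, which is why duplicating one forks the code.

```typescript
// Illustrative data model, not Figma's real internals or API.

interface BaseLayer {
  id: string;
  x: number;
  y: number;
  width: number;
  height: number;
  opacity: number;
  cornerRadius: number;
}

interface CodeLayer extends BaseLayer {
  kind: "code";
  source: string; // the code rendered inside the layer
}

// Duplicating a code layer copies its source, so two versions can be
// edited and compared side by side without creating git branches.
function duplicateCodeLayer(layer: CodeLayer, offsetX = 40): CodeLayer {
  return { ...layer, id: crypto.randomUUID(), x: layer.x + offsetX };
}
```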


Canvas, Meet Code: Building Figma’s Code Layers

What if you could design and build on the same canvas? Here's how we created code layers to bring design and code together.

figma.com

Christoph Niemann, in a visual essay about generative AI and art:

…the biggest challenge is that writing an A.I. prompt requires the artist to know what he wants. If only it were that simple.

Creating art is a nonlinear process. I start with a rough goal. But then I head into dead ends and get lost or stuck.

The secret to my process is to be on high alert in this deep jungle for unexpected twists and turns, because this is where a new idea is born.

It’s a fun meditation on the meaning of AI-assisted and AI-generated artwork.


Sketched Out: An Illustrator Confronts His Fears About A.I. Art (Gift Article)

The advent of A.I. has shocked me into questioning my relationship with art. Will humans still be able to draw for a living?

nytimes.com

If you want an introduction to using Cursor as a designer, here’s a must-watch video. It’s just over half an hour long, and Elizabeth Lin goes through several demos in Cursor.

Cursor is much more advanced than the AI prompt-to-code tools I've covered here before. But with it, you'll get much more control because you're building with actual code. (Of course, sigh, you won't have sliders and inputs for controlling design.)


A designer's guide to Cursor: How to build interactive prototypes with sound, explore visual styles, and transform data visualizations | Elizabeth Lin

How to use Cursor for rapid prototyping: interactive sound elements, data visualization, and aesthetic exploration without coding expertise

open.substack.com