
96 posts tagged with “ai”

For the past year, CPG behemoth Unilever has been “working with marketing services group Brandtech to build up its Beauty AI Studio: a bespoke, in-house system inside its beauty and wellbeing business. Now in place across 18 different markets (the U.S. and U.K. among them), the studio is being used to make assets for paid social, programmatic display inventory and e-commerce usage across brands including Dove Intensive Repair, TRESemme Lamellar Shine and Vaseline Gluta Hya.”

Sam Bradley, writing in Digiday:

The system relies on Pencil Pro, a generative AI application developed by Brandtech Group. The tool draws on several large language models (LLMs), as well as API access to Meta and TikTok for effectiveness measurement. It’s already used by hearing-care brand Amplifon to rapidly produce text and image assets for digital ad channels.

In Unilever’s process, marketers use prompts and their own insights about target audiences to generate images and video based on 3D renders of each product, a practice sometimes referred to as “digital twinning.” Each brand in a given market is assigned a “BrandDNAi” — an AI tool that can retrieve information about brand guidelines and relevant regulations and that provides further limitations to the generative process.

So far, they haven’t used this system to generate AI humans. Yet.
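Reading between the lines, the quoted description maps to a familiar retrieve-then-constrain pattern: look up a brand’s guidelines and the market’s regulations, then fold them into the generation prompt alongside the product’s 3D render. Here’s a minimal sketch of that general pattern; every name in it is hypothetical, and it’s not meant to represent Pencil Pro’s or Unilever’s actual system.

```typescript
// A minimal sketch of the retrieve-then-constrain pattern the quote describes.
// All names here are hypothetical; this is not Pencil Pro's or Unilever's API.

interface BrandConstraints {
  guidelines: string[];  // e.g. approved tone of voice, product render rules
  regulations: string[]; // e.g. claims allowed for a beauty product in a market
}

// Stand-in for the "BrandDNAi" lookup: fetch constraints for a brand in a market.
async function fetchBrandConstraints(brand: string, market: string): Promise<BrandConstraints> {
  // A real system would query a brand-knowledge store; here it's stubbed.
  return {
    guidelines: [`${brand}: use approved product renders only`],
    regulations: [`${market}: no unsubstantiated efficacy claims`],
  };
}

// Compose the marketer's prompt with the retrieved constraints before generation.
async function buildConstrainedPrompt(
  brand: string,
  market: string,
  marketerPrompt: string,
  productRenderId: string
): Promise<string> {
  const constraints = await fetchBrandConstraints(brand, market);
  return [
    `Base asset: 3D render ${productRenderId} (the product's "digital twin")`,
    `Brand guidelines:\n- ${constraints.guidelines.join("\n- ")}`,
    `Regulations:\n- ${constraints.regulations.join("\n- ")}`,
    `Brief: ${marketerPrompt}`,
  ].join("\n\n");
}
```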

Inside Unilever’s AI beauty marketing assembly line — and its implications for agencies

The CPG giant has created an AI-augmented in-house production system. Could it be a template for others looking to bring AI in house?

digiday.com
Illustration: a human chain of designers reaching for laptops and design tools floating above them, symbolizing mentorship and knowledge transfer in the design industry.

Why Young Designers Are the Antidote to AI Automation

In Part I of this series, I wrote about the struggles recent grads have had finding entry-level design jobs and what might be causing the stranglehold on the design job market.

**Part II: Building New Ladders**

When I met Benedict Allen, he had just finished Portfolio Review a week earlier. That’s the big show all the design students in the Graphic Design program at San Diego City College work toward. It’s a nice event that brings out the local design community, where seasoned professionals review the portfolios of the graduating students.

Allen was all smiles and relief. “I want to dabble in different aspects of design because the principles are generally the same.” He goes on to mention how he wants to start a fashion brand someday, DJ, try 3D. “I just want to test and try things and just have fun! Of course, I’ll have my graphic design job, but I don’t want that to be the end. Like when the workday ends, that’s not the end of my creativity.” He was bursting with enthusiasm.

And confidence. When asked how prepared he felt about his job prospects, he shares, “I say this humbly, I really do feel confident because I’m very proud of my portfolio and the things I’ve made, my design decisions, and my thought processes.” Oh, to be in my early twenties again and have his zeal!

But here’s the thing: I believe him. I believe he’ll go on to do great things because of this young person’s sheer will. He idolizes Virgil Abloh, the died-too-young multi-hyphenate creative who studied architecture, founded the fashion label Off-White, became artistic director of menswear at Louis Vuitton, and designed furniture for IKEA and shoes for Nike. Abloh is Allen’s North Star.

Artificial intelligence, despite its sycophantic tendencies, does not have that infectious passion. Young people are the lifeblood of companies. They can reinvigorate an organization and bring perspectives to a jaded workforce. Every single time I’ve ever had the privilege of working with interns, I have felt this. My teams have felt this. And they make the whole organization better.

What Companies Must Do

I love this quote by Robert F. Kennedy in his 1966 speech at the University of Cape Town:

This world demands the qualities of youth: not a time of life but a state of mind, a temper of the will, a quality of the imagination, a predominance of courage over timidity, of the appetite for adventure over the love of ease.

As mentioned in Part I of this series, the design industry is experiencing an unprecedented talent crisis, with traditional entry-level career pathways rapidly eroding as the capabilities of AI expand and companies anticipate using AI to automate junior-level tasks. Youth is the key ingredient that sustains companies and industries.

The Business Case for Juniors

Five recent design graduates. From top left to right: Ashton Landis, Erika Kim, Emma Haines. Bottom row, left to right: Leah Ray, Benedict Allen.

Just as important as the energy and excitement Benedict Allen brings is his natural ability to wield AI. He’s an AI native.

In our conversation, he told me he’s tried all the major chatbots and has figured out what works best for what. “I’ve used Gemini as I find its voice feature amazing. Like, I use it all the time. …I use Claude sometimes for writing, but I find that the writing was not as good as ChatGPT. ChatGPT felt less like AI-speak. …I love Perplexity. That’s one of my favorites as well.”

He’s not alone. Leah Ray, who recently graduated from California College of the Arts with an MFA in Graphic Design, says she can’t remember how her design process existed without AI: “It’s become such an integral part of how I think and work.”

She parries with ChatGPT, using it as a creative partner:

I usually start by having a deep or sometimes extended conversation with ChatGPT. And it’s not about getting the direct answer, but more about using the dialogue to clarify my thoughts and challenging my assumptions and even arrive at a clear design direction.

She also uses the chatbot to help with project planning and timelines, copywriting, code generation, and basic image generation. Ray has even considered training her own AI model on her past design work, using tools like ComfyUI and fine-tuning techniques like LoRA. She says, “So it could assist me in generating proposals that match my visual styles.” Pretty advanced stuff.

Similar to Ray, Emma Haines, who is finishing up her MDes in Interaction Design at CCA, says that AI “comes into the picture very early on.” She’ll use ChatGPT for brainstorming and project planning, and less so in the later stages.

Unlike many established designers, these young ones don’t see AI as threatening or as a crutch. They treat AI as any other tool. Ashton Landis, who recently graduated from CCA with a BFA in Graphic Design, says, “I think right now it’s primarily a tool instead of a replacement.”

Elena Pacenti, Director of MDes Interaction Design at CCA, observes that students have embraced AI immediately and across the board. She says generative AI has been “adopted immediately by everyone, faculty and students” and it’s being used to create text, images, and all sorts of visual content—not just single images, but storyboards, videos, and more. It’s become just another tool in their toolkit.

Pacenti notices that her students are gravitating toward using AI for efficiency rather than exploration. She sees them “embracing all the tools that help make the process faster, more efficient, quicker” to get to their objective, rather than using AI “to explore things they haven’t thought about or to make things.” They’re using it as a shortcut rather than a creative partner. 

Restructure Entry-Level Roles

I don’t think it’s quite there yet, but AI will eventually take over the traditional tasks we give to junior designers. Anthropic recently released an integration with Canva, but the results are predictable—barely a good first draft. For companies that choose to live on the bleeding edge, that takeover will likely happen within 12 months. I think in two years, we’ll cede more and more of these junior-level design tasks like extending designs, resizing assets, and searching for stock to AI.

But I believe there is still a place for entry-level designers in any organization. 

Firstly, the tasks can simply be done faster. When we talk about AI and automation, oftentimes the human who’s initiating the task and then judging its output isn’t part of the conversation. Babysitting AI takes time and, more importantly, breaks flow. I can imagine teaching a junior designer how to perform these tasks using AI and stacking up more of them in a day or week. They’ll still be able to practice their taste and curation skills with supervision from more senior peers.

Second, younger people are inherently better with newer technologies. Asking a much more senior designer to figure out advanced prototyping with Lovable or Cursor will be a non-starter. But junior designers should be able to pick this up quickly and become indispensable pairs of hands in the overall process.

Third, we can simply level up the complexity of the tasks we give to juniors. Aneesh Raman, chief economic opportunity officer at LinkedIn, wrote in The New York Times:

Unless employers want to find themselves without enough people to fill leadership posts down the road, they need to continue to hire young workers. But they need to redesign entry-level jobs that give workers higher-level tasks that add value beyond what can be produced by A.I. At the accounting and consulting firm KPMG, recent graduates are now handling tax assignments that used to be reserved for employees with three or more years of experience, thanks to A.I. tools. And at Macfarlanes, early-career lawyers are now tasked with interpreting complex contracts that once fell to their more seasoned colleagues. Research from the M.I.T. Sloan School of Management backs up this switch, indicating that new and low-skilled workers see the biggest productivity gains and benefits from working alongside A.I. tools.

In other words, let’s assume AI will tackle the campaign resizing or building out secondary and tertiary pages for a website. Have junior designers work on smaller projects as the primary designer so they can set strategy, and have them shadow more senior designers and develop their skills in concept, strategy, and decision-making, not just execution.

Invest in the Leadership Pipeline

The 2023 Writers Guild of America strike offers a sobering preview of what could happen to the design profession if we’re not careful about how AI reshapes entry-level opportunities. Driven not by AI but by simple budget-cutting, Hollywood studios began releasing writers immediately after scripts were completed, cutting them out of the production process where they would traditionally learn the hands-on skills needed to become showrunners and producers. As Oscar-winning writer Sian Heder (CODA) observed, “A writer friend has been in four different writers rooms and never once set foot on set. How are we training the next generation of producers and showrunners?” The result was a generation of writers missing the apprenticeship pathway that transforms scriptwriters into skilled creative leaders—exactly the kind of institutional knowledge loss that weakens an entire industry.

The WGA’s successful push for guaranteed on-set presence reveals what the design industry must do to avoid a similar talent catastrophe. Companies are avoiding junior hires entirely, anticipating that AI will handle execution tasks—but this eliminates the apprenticeship pathway where designers actually learn strategic thinking. Instead, they need to restructure entry-level roles to guarantee meaningful learning opportunities—pairing junior designers with real projects where they develop taste through guided decision-making. As one WGA member put it, “There’s just no way to learn to do this without learning on the job.” The design industry’s version of that job isn’t Figma execution—it’s the messy, collaborative process of translating business needs into human experiences. 

Today’s junior designers will become tomorrow’s creative directors, design directors, and heads of design. Senior folks like myself will eventually age out, so companies that don’t invest in junior talent now won’t have any experienced designers in five to ten years. 

And if this is an industry-wide trend, young designers who can’t get into the workforce today will pivot to other careers and we won’t have senior designers, period.

How Education is Responding

Our five design educators. From top left to right: Bradford Prairie, Elena Pacenti, Sean Bacon. Bottom row, left to right: Josh Silverman, Eric Heiman.

The Irreplaceable Human Element

When I spoke to the recent grads, all five of them mentioned how AI-created output just has an air of AI. Emma Haines:

People can tell what AI design looks like versus what human design looks like. I think that’s because we naturally just add soul into things when we design. We add our own experiences into our designs. And just being artists, we add that human element into it. I think people gravitate towards that naturally, just as humans.

It speaks to how educators are teaching—and have always been teaching—design. Bradford Prairie, a professor at San Diego City College:

We always tell students, “Try to expose yourself to a lot of great work. Try to look at a lot of inspiration. Try to just get outside more.” Because I think a lot of our students are introverts. They want to sit in their room and I tell them, “No, y’all have to get out in the world! …and go touch grass and touch other things out in the world. That’s how you learn what works and what doesn’t, and what culture looks like.”

Leah Ray, explaining how our humanity imbues quality into our designs:

You can often recognize an AI look. Images and designs start to feel like templates and over-predictable in that sense. And everything becomes fast like fast food and sometimes even quicker than eating instant food.

And even though there is a scary trend towards synthetic user research, Elena Pacenti discourages it. She’ll teach her students to start with provisional user archetypes using AI, but then they’ll need to perform primary research to validate them. “We’re going to do primary to validate. Please do not fake data through the AI.”

Redefining Entry-Level Value

I only talked to educators from two institutions for this series, since those are the two I have connections to. For both programs, there’s less emphasis on hard skills like how to use Figma and more on critical thinking and strategy. I suspect that bootcamps are different.

Sean Bacon, chair of the Graphic Design program at San Diego City College:

Our program is really about concepting, creative thinking, and strategy. Bradford and I are cautiously optimistic that maybe, luckily, the chips we put down, are in the right part of the board. But who knows?

I think he’s spot on. Josh Silverman, who teaches courses in CCA’s MDes Interaction Design program and is also a design recruiter, observes:

So what I’m seeing from my perspective is a lot of organizations that are hiring the kind of students that we graduate from the program, what I like to call a “dangerous generalist.” It’s someone who can do the research, strategy, prototyping, visual design, presentation, storytelling, and be a leader and make a measurable impact. And if a company is restructuring or just starting and only has the means to hire one person, they’re going to want someone who can do all those things. So we are poised to help a lot of students get meaningful employment because they can do all those things.

AI as Design Material, Not Just Tool

Much of the AI conversation has been about how to incorporate it into our design workflows. For UX designers, it’s just as important to discuss how we design AI experiences for users.

Elena Pacenti champions this shift in the conversation. “My take on the whole thing has been to move beyond the tools and to understand AI as a material we design with.” Similar to the early days of virtual reality, AI is an interaction paradigm with very few UI conventions and therefore ripe for designers to invent. Right now.

This profession specifically designs the interaction for complex systems, products, services, a combination—whatever it is out there—and ecosystems of technologies. What’s the next generation of these things that we’re going to design for? …There’s a very challenging task of imagining interactions that are not going just through a chatbot, but they don’t have shape yet. They look tremendously immaterial, more than the past. It’s not going to be necessarily through a screen.

Her program at CCA has implemented this through a specific elective called “Prototyping with AI,” which Pacenti describes as teaching students to “get your hands dirty and understand what’s behind the LLMs and how you can use this base of data and intelligence to do things that you want, not that they want.” The goal is to help students craft their own tools rather than just using prepackaged consumer AI tools—which she calls “a shift towards using it as a material.”

The Path Forward Requires Collective Action

Benedict Allen’s infectious enthusiasm after Portfolio Review represents everything the design industry risks losing if we don’t act deliberately. His confidence, creativity, and natural fluency with AI tools? That’s the potential young designers bring—but only if companies and educational institutions create pathways for that talent to flourish.

The solution isn’t choosing between human creativity and artificial intelligence. It’s recognizing that the combination is more powerful than either alone. Elena Pacenti’s insight about understanding “AI as a material we design with” points toward this synthesis, while companies like KPMG and Macfarlanes demonstrate how entry-level roles can evolve rather than disappear.

This transformation demands intentional investment from both sides. Design schools are adapting quickly—reimagining curriculum, teaching AI fluency alongside fundamental design thinking, emphasizing the irreplaceable human elements that no algorithm can replicate. Companies must match this effort. Restructure entry-level roles. Create new apprenticeship models. Recognize that today’s junior designers will become tomorrow’s creative leaders.

The young designers I profiled here prove that talent and enthusiasm haven’t disappeared. They’re evolving. Allen’s ambitious vision to start a fashion brand. Leah Ray’s ease with AI tools. The question isn’t whether these designers can adapt to an AI-enabled future.

It’s whether the industry will create space for them to shape it.


In the final part of this series, I’ll explore specific strategies for recent graduates navigating this current job market—from building AI-integrated portfolios to creating alternative pathways into the profession.

Luke Wroblewski, writing in his blog:

Across several of our companies, software development teams are now “out ahead” of design. To be more specific, collaborating with AI agents (like Augment Code) allows software developers to move from concept to working code 10x faster. This means new features become code at a fast and furious pace.

When software is coded this way, however, it (currently at least) lacks UX refinement and thoughtful integration into the structure and purpose of a product. This is the work that designers used to do upfront but now need to “clean up” afterward. It’s like the development process got flipped around. Designers used to draw up features with mockups and prototypes, then engineers would have to clean them up to ship them. Now engineers can code features so fast that designers are the ones going back and cleaning them up.

This is what I’ve been secretly afraid of. That we would go back to the times when designers were called in to do cleanup. Wroblewski says:

Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn’t go away.

Yeah, or the feature will get shipped this way and be stuck this way because startups move fast and move on.

My take is that as designers, we need to meet the moment and figure out how to build design systems and best practices into the agentic workflows our developer counterparts are using.
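What might that look like in practice? One hypothetical approach: publish the design system’s tokens and rules in a machine-readable form that the agent receives as context and that a CI step can check. A rough sketch, with made-up token names:

```typescript
// A rough sketch of exposing design-system rules in a form an agentic workflow
// can consume and a CI step can verify. Token names are illustrative only.

export const tokens = {
  color: { primary: "#0f62fe", surface: "#ffffff", textBody: "#161616" },
  spacing: { sm: "8px", md: "16px", lg: "24px" },
} as const;

// A plain-language rule block that gets handed to the coding agent as context.
export const designRules = `
- Use only colors defined in tokens.color; never hard-code hex values.
- Use spacing tokens for padding and margins.
- Buttons use color.primary on color.surface backgrounds.
`;

// A simple check a CI step (or the agent itself) could run on generated code.
export function findHardcodedHexColors(source: string): string[] {
  const allowed = new Set(Object.values(tokens.color).map((c) => c.toLowerCase()));
  const hexes = source.match(/#[0-9a-fA-F]{6}\b/g) ?? [];
  return hexes.filter((h) => !allowed.has(h.toLowerCase()));
}
```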


AI Has Flipped Software Development

For years, it's been faster to create mockups and prototypes of software than to ship it to production. As a result, software design teams could stay "ahead" of...

lukew.com

In many ways, this excellent article by Kaustubh Saini for Final Round AI’s blog is a cousin to my essay on the design talent crisis. But it’s about what happens when people “become” developers and only know vibe coding.

The appeal is obvious, especially for newcomers facing a brutal job market. Why spend years learning complex programming languages when you can just describe what you want in plain English? The promise sounds amazing: no technical knowledge required, just explain your vision and watch the AI build it.

In other words, these folks don’t understand the code and, well, bad things can happen.

The most documented failure involves an indie developer who built a SaaS product entirely through vibe coding. Initially celebrating on social media that his “saas was built with Cursor, zero hand written code,” the story quickly turned dark.

Within weeks, disaster struck. The developer reported that “random things are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db.” Being non-technical, he couldn’t debug the security breaches or understand what was going wrong. The application was eventually shut down permanently after he admitted “Cursor keeps breaking other parts of the code.”

This failure illustrates the core problem with vibe coding: it produces developers who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

I don’t foresee something this disastrous with design. I mean, a newbie designer wielding an AI-enabled Canva or Figma can’t tank a business alone because the client will have eyes on it and won’t let through something that doesn’t work. It could be a design atrocity, but it’ll likely be fine.

This *can* happen to a designer using vibe coding tools, however. Full disclosure: I’m one of them. This site is partially vibe-coded. My Severance fan project is entirely vibe-coded.

But back to the idea of a talent crisis. In the developer world, it’s already happening:

The fundamental problem is that vibe coding creates what experts call “pseudo-developers.” These are people who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

In other words, they don’t have the skills necessary to be developers because they can’t do the basics. They can’t debug, don’t understand architecture, have no code review skills, and basically have no fundamental knowledge of what it means to be a programmer. “They miss the foundation that allows developers to adapt to new technologies, understand trade-offs, and make architectural decisions.”

Again, even if our junior designers have the requisite fundamental design skills, not having spent time developing their craft and strategic skills through experience will be detrimental to them and to any org that hires them.


How AI Vibe Coding Is Destroying Junior Developers' Careers

New research shows developers think AI makes them 20% faster but are actually 19% slower. Vibe coding is creating unemployable pseudo-developers who can't debug or maintain code.

finalroundai.com
Illustration: people working on laptops atop tall ladders and multi-level platforms, symbolizing hierarchy and competition.

The Design Industry Created Its Own Talent Crisis. AI Just Made It Worse.

This is the first part in a three-part series about the design talent crisis. Read Part II and Part III.

**Part I: The Vanishing Bottom Rung**

Erika Kim’s path to UX design represents a familiar pandemic-era pivot story, yet one that reveals deeper currents about creative work and economic necessity. Armed with a 2020 film and photography degree from UC Riverside, she found herself working gig photography—graduations, band events—when the creative industries collapsed. The work satisfied her artistic impulses but left her craving what she calls “structure and stability,” leading her to UX design. The field struck her as an ideal synthesis, “I’m creating solutions for companies. I’m working with them to figure out what they want, and then taking that creative input and trying to make something that works best for them.”

Since graduating from the interaction design program at San Diego City College a year ago, she’s had three internships and works retail part-time to pay the bills. “I’ve been in survival mode,” she admits. On paper, she’s a great candidate for any junior position. Speaking with her reveals a very thoughtful and resourceful young designer. Why hasn’t she been able to land a full-time job? What’s going on in the design job market? 

Back in January, Jared Spool offered an explanation. The UX job market crisis stems from a fundamental shift that occurred around late 2022—what he calls a “market inversion.” The market flipped from having far more open UX positions than qualified candidates to having far more unemployed UX professionals than available jobs. The reasons are many, but they include expiring tax incentives, rising interest rates, an abundance of bootcamp graduates, automated hiring processes, and globalization.

But that’s only part of the equation. I believe there’s something much larger at play, one that affects not just UX or product design but all design disciplines. Software developers have already felt the tip of this spear in their own job market. AI.

Closing Doors for New Graduates

In the first half of this year, 147 tech companies have laid off over 63,000 workers, with a significant portion of them engineers. Entry-level hiring has collapsed, revealing a new permanent reality. At Big Tech companies, new graduates now represent just 7% of all hires—a precipitous 25% decline from 2023 levels and a staggering 50% drop from pre-pandemic baselines in 2019.

The startup ecosystem tells an even more troubling story, where recent graduates comprise less than 6% of new hires, down 11% year-over-year and more than 30% since 2019. This isn’t merely a temporary adjustment; it represents a fundamental restructuring of how companies approach talent acquisition. Even the most credentialed computer science graduates from top-tier programs are finding themselves shut out, suggesting that the erosion of junior positions cuts across disciplines and skill levels.  

LinkedIn executive Aneesh Raman wrote in an op-ed for The New York Times that in a “recent survey of over 3,000 executives on LinkedIn at the vice president level or higher, 63 percent agreed that A.I. will eventually take on some of the mundane tasks currently allocated to their entry-level employees.”

There is already a harsh reality for entry-level tech workers. Companies have essentially frozen junior engineer and data analyst hiring because AI can now handle the routine coding and data querying tasks that were once the realm of new graduates. Hiring managers expect AI’s coding capabilities to expand rapidly, potentially eliminating entry-level roles within a year, while simultaneously increasing demand for senior engineers who can review and improve AI-generated code. It’s a brutal catch-22: junior staff lose their traditional stepping stones into the industry just as employers become less willing to invest in onboarding them.

For design students and recent graduates, this data illuminates a broader industry transformation where companies are increasingly prioritizing proven experience over potential—a shift that challenges the very foundations of how creative careers traditionally begin.

While AI tools haven’t exactly been able to replace designers yet—even junior ones—the tech will get there sooner than we think. And CEOs and those holding the purse strings are anticipating this, holding back on hiring juniors.

Five recent design graduates. From top left to right: Ashton Landis, Erika Kim, Emma Haines. Bottom row, left to right: Leah Ray, Benedict Allen.

The Learning-by-Doing Crisis

Ashton Landis recently graduated with a BFA in Graphic Design from California College of the Arts (full disclosure: my alma mater). She says:

I found that if you look on LinkedIn for “graphic designer” and you just say the whole San Francisco Bay area, so all of those cities, and you filter for internships and entry level as the job type, there are 36 [job postings] total. And when you go through it, 16 of them are for one or more years of experience. And five of those are for one to two years of experience. And then everything else is two plus years of experience, which doesn’t actually sound like entry level to me. …So we’re pretty slim pickings right now.

When I graduated from CCA in 1995 (or CCAC as it was known back then), we were just climbing out of the labor effects of the early 1990s recession. For my early design jobs in San Francisco, I did a lot of production and worked very closely with more senior designers and creative directors to hone my craft. While school is great for academic learning, nothing beats real-world experience.

Eric Heiman, creative director and co-owner of Volume Inc., a small design studio based in San Francisco, has been teaching at CCA for 26 years. He observes:

We internalize so much by doing things slower, right? The repetition of the process, learning through tinkering with our process, and making mistakes, and things like that. We have internalized those skills.

Sean Bacon, chair of the Graphic Design program at San Diego City College, wonders:

What is an entry level position in design then? Where do those exist? How often have I had these companies hire my students even though they clearly don’t have those requirements. So I don’t know. I don’t know what happens, but it is scary to think we’re losing out on what I thought was really valuable training in terms of how I learned to operate, at least in a studio.

Back at the beginning of my career, I remember digitizing logos when I interned with Mark Fox, a talented logo designer based in Marin County. A brilliant draftsman, he had inked—and still inks—all of his logos by hand. The act of redrawing marks in Illustrator helped me develop my sense of proportions, curves, and optical alignment. At digital agencies, I started my journey redesigning layouts of banners in different sizes. I would eventually have juniors do that for me as I rose through the ranks. These experiences—though a little painful at the time—were pivotal in perfecting our collective craft. To echo Bacon, it was “really valuable training.”

Apprenticeships at Agencies

Working in agencies and design studios was pretty much an apprenticeship model. Junior designers shadowed more senior designers and took their lead when executing a campaign or designing more pages for a website.

For a typical website project, as a senior designer or art director, I would design the homepage and a few other critical screens, setting up the look and feel. Once those were approved by the client, junior designers would take over and execute the rest. This was efficient and allowed the younger staff to participate and put their reps in.

Searching for stock photos was another classic assignment for interns and junior designers. These were oftentimes multi-day assignments, but they helped teach juniors how to see.

But today, generative AI apps like Midjourney and Visual Electric are replacing stock photography. 

From Craft to Curation

As the industry marches towards incorporating AI into our workflows, strategy, judgment, and, most importantly, taste are critical skills.

But, paradoxically, how do designers develop taste, craft, and strategic thinking without doing the grunt work?

And they’re missing out on that mundane work not only because of the dearth of entry-level opportunities, but also because generative AI can deliver results so quickly.

Eric Heiman again:

I just give the AI a few words and poof, it’s there. How do you learn how to see things? I just feel like learning how to see is a lot about slowing down. And in the case of designers, doing things yourself over and over again, and they slowly reveal themselves through that process.

All the recent graduates I interviewed for this piece are smart, enthusiastic, and talented. Yet, Ashton Landis and Erika Kim are struggling to find full-time jobs. 

Landis doesn’t think her negative experience in the job market is “entirely because of AI,” attributing it more to “general unemployment rates are pretty high right now” and a job market that is “clearly not great.”

Questioning Career Choices

Leah Ray, a recent graphic design MFA graduate from CCA, was able to secure a position as International Visual Designer at Kuaishou, a popular Chinese short-form video and live-streaming app similar to TikTok. But it wasn’t easy. Her job search began months before graduation, extending through her thesis work and creating the kind of sustained anxiety that prompted her final school project—a speculative design exploring AI’s potential to predict alternative career futures.

I was so anxious about my next step after graduation because I didn’t have a job lined up and I didn’t know what to do. …I’m a person who follows the social clock. My parents and the people around me expect me to do the right thing at the right age. Getting a nice job was my next step, but I couldn’t finish that, which led to me feeling anxious and not knowing what to do.

But through her tenacity and some luck, she was able to land the job that she starts this month. 

No, it was not easy to find. But finding this was very lucky. I do remember I saw a lot of job descriptions for junior designers. They expect designers to have AI skills. And I think there are even some roles specifically created for people with AI-related design skills, like AI motion designer and AI model designer, sort of something like that. Like AI image training designers.

Ray’s observation reveals a fundamental shift in entry-level design expectations, where AI proficiency has moved from optional to essential, with entirely new roles emerging around AI-specific design skills.

Our five design educators. From top left to right: Bradford Prairie, Elena Pacenti, Sean Bacon. Bottom row, left to right: Josh Silverman, Eric Heiman.

Preparing Our Students

Emma Haines, a designer completing her master’s degree in Interaction Design at CCA, began her job search in May. (Her program concludes in August.) Despite not having secured a job yet, she’s bullish because of the prestige and practicality of the Master of Design program.

I think this program has actually helped me a good amount from where I was starting out before. I worked for a year between undergrad and this program, and between where I was before and now, there’s a huge difference. That being said, since the industry is changing so rapidly, it feels a little hard to catch up with. That’s the part that makes me a little nervous going into it. I could be confident right now, but maybe in six months something changes and I’m not as confident going into the job market.

CCA’s one-year program represents a strategic bet on adaptability over specialization. Elena Pacenti, the program’s director, describes an intensive structure that “goes from a foundational semester with foundation of interaction design, form, communication, and research to the system part of it. So we do systems thinking, prototyping, also tangible computing.” The program’s Social Lab component is “two semester-long projects with community partners in partnership with stakeholders that are local or international from UNICEF down to the food bank in Oakland.” It positions design as a tool for social impact rather than purely commercial purposes. This compressed timeline creates what Pacenti calls curricular agility: “We’re lucky that we are very agile. We are a one-year program so we can implement changes pretty quickly without affecting years of classes and changes in the curriculum.”

Josh Silverman, who chaired the program for nearly five years, reports impressive historical outcomes: “I think historically for the first nine years of the program—this is cohort 10—I think we’ve had something like 85% job placement within six months of graduation.”

Yet both educators acknowledge current market realities. Pacenti observes that “that fat and hungry market of UX designers is no longer there; it’s on a diet,” while maintaining optimism about design’s future relevance: “I do not believe that designers will be less in demand. I think there will be a tremendous need for designers.” Emma Haines’s nervousness about rapid industry change reflects this broader tension—the gap between educational preparation and market evolution that defines professional training during transformative periods.

Bradford Prairie, who has taught in San Diego City College’s Graphic Design program for nine years, embodies this experimental approach to AI in design education. “We get an easy out when it comes to AI tools,” he explains, “because we’re a program that’s meant to train people for the field. And if the field is embracing these tools, we have an obligation to make students aware of them and give some training on how to use the tools.”

Prairie’s classroom experiments reveal both the promise and pitfalls of AI-assisted design. He describes a student struggling with a logo for a DJ app who turned to ChatGPT for inspiration: “It generates a lot of expected things like turntables, headphones, and waveforms… they’re all too complicated. They all don’t really look like logos. They look more like illustrations.” But the process sparked some other ideas, so he told the student, “This is kind of interesting how the waveform is part of the turntable and… we can take this general idea and redraw it and make it simplified.”

This tension between AI output and human refinement has become central to his teaching philosophy: “If there’s one thing that AI can’t replace, it’s your sense of discernment for what is good and what is not good.” The challenge, he acknowledges, lies in developing that discernment in students who may be tempted to rely too heavily on AI from the start.

The Turning Point

These challenges are real, and they’re reshaping the design profession in fundamental ways. Traditional apprenticeships are vanishing, entry-level opportunities are scarce, and new graduates face an increasingly competitive landscape. But within this disruption lies opportunity. The same forces that have eliminated routine design tasks have also elevated the importance of uniquely human skills—strategic thinking, cultural understanding, and creative problem-solving. The path forward requires both acknowledging what’s been lost and embracing what’s possible.

Despite her struggles to land a full-time job in design, Erika Kim remains optimistic because she’s so enthused about her career choice and the opportunity ahead. Remarking on the parallels between today and the beginning of the Covid-19 pandemic, she says, “It’s kind of interesting that I’m also on completely different grounds in terms of uncertainty. But you just have to get through it, you know. Why not?”


In the next part of this series, I’ll focus on the opportunities ahead: how we as a design industry can do better and what we should be teaching our design students. In the final part, I’ll touch on what recent grads can do to find a job in this current market.

Illustration: a retro-style robot at a large control panel filled with buttons, switches, and monitors.

The Era of the AI Browser Is Here

For nearly three years, Arc from The Browser Company has been my daily driver. To be sure, there was a little bit of a learning curve. Tabs disappeared after a day unless you pinned them. Then they became almost like bookmarks. Tabs were on the left side of the window, not at the top. Spaces let me organize my tabs based on use cases like personal, work, or finances. I could switch between tabs using control-Tab and saw little thumbnails of the pages, similar to the app switcher on my Mac. Shift-command-C copied the current page’s URL. 

All these little interface ideas added up to a productivity machine for web jockeys like myself. And so, I was saddened to hear in May that The Browser Company stopped actively developing Arc in favor of a new AI-powered browser called Dia. (They are keeping Arc updated with maintenance releases.)

They had started beta-testing Dia with college students first and just recently opened it up to Arc members. I finally got access to Dia a few weeks ago. 

But before diving into Dia, I should mention I also got access to another AI browser, Perplexity’s Comet, about a week ago. I’m on their Pro plan but somehow got an invite in my email. I had thought it was limited to those on their much more expensive Max plan. Shhh.

So this post is about both and how the future of web browsing is obviously AI-assisted, because it feels so natural.

Chat With Your Tabs

Dia’s landing page, with the tagline “Write with your tabs” and a button to download for early access.

To be honest, I used Dia in fits and starts. It was easy to import my profiles from Arc and have all my bookmarks transferred over. But I was missing all the pro-level UI niceties that Arc had. Tabs were back at the top and acted like tabs (though they just brought back sidebar tabs in the last week). There were no spaces. I felt like it was 2021 all over again. I tried to stick with it for a week.

What Dia offers that Arc does not is, of course, a way to “chat” with your tabs. It’s a chat sidebar to the right of the web page that has the context of that page you’re on. You can also add additional tabs to the chat context by simply @mentioning them.

In a recent article about Dia in The New York Times, reporter Brian X. Chen describes using it to summarize a 22-minute YouTube video about car jump starters, instantly surfacing the top products without watching the whole thing. This is a vivid illustration of the “chat with your tabs” value prop. Saving time.

I’ve been doing the same thing. Asking the chat to summarize a page for me or explain some technical documentation to me in plain English. Or I use it as a fuzzy search to find a quote from the page that mentions something specific. For example, if I’m reading an interview with the CEO of Perplexity and I want to know if he’s tried the Dia browser yet, I can ask, “Has he used Dia yet?” Instead of reading through the whole thing. 

Screenshot of the Dia browser displaying a Verge article about Perplexity’s CEO, with an AI-generated sidebar summary clarifying that Aravind Srinivas has not used Dia.

Another use case is to open a few tabs and ask for advice. For example, I can open up a few shirts from an e-commerce store and ask for a recommendation.

Using Dia to compare shirts on the Bonobos website and get a smart casual recommendation from the AI.

Dia also has customizable “skills” which are essentially pre-saved prompts. I made one to craft summary bios from LinkedIn profiles.

Using Dia’s skills feature to generate a summarized biography from Josh Miller’s LinkedIn profile.

It’s cool. But I found that it’s a little limited because the chat is usually just with the tabs that you feed Dia. It helps you digest and process information. In other words, it’s an incremental step up from ChatGPT.

Enter Comet.

Browsing Done for You

Comet’s landing page, with the tagline “Browse at the speed of thought” and a prominent “Get Comet” download button.

Comet by Perplexity also allows you to chat with your tabs. Asking about that Verge interview, I received a very similar answer. (No, Aravind Srinivas has not used Dia yet.) And because Perplexity search is integrated into Comet, I find that it is much better at context-setting and answering questions than Dia. But that’s not Comet’s killer feature.

Viewing the same article in Comet, with its AI assistant answering questions about the content.

Instead, it’s doing stuff with your tabs. Comet’s onboarding experience shows a few use cases like replying to emails and setting meetings, or filling an Instacart cart with the ingredients for butter chicken.

Just like Dia, when I first launched Comet, I was able to import my profiles from Arc, which included bookmarks and cookies. I was essentially still logged into all the apps and sites I was already logged into. So I tried an assistant experiment. 

One thing I often do is look up restaurants that have availability on OpenTable and then check them on Yelp. I tend to agree more with Yelpers, who are usually harsher critics than OpenTable diners. So I asked Comet to “Find me the highest rated sushi restaurants in San Diego that have availability for 2 at 7pm next Friday night on OpenTable. Pick the top 10 and then rank them by Yelp rating.” And it worked! And if I really wanted to, I could say “Book Takaramono sushi” and it would have done so. (Actually, I did and then quickly canceled.)

The Comet assistant helped me find a sushi restaurant reservation. Video is sped up 4x.

I tried a different experiment, something I heard Aravind Srinivas describe in his interview with The Verge. I navigated to Gmail and checked three emails I wanted to unsubscribe from. I asked the assistant, “unsubscribe from the checked emails.” The agent then essentially took over my Gmail screen, opened the first checked email, and clicked on the unsubscribe link. It repeated this process for the other two emails, though it ran into a couple of snags. First, Gmail doesn’t keep the state of the checked emails when you click into an email, but the Comet assistant was smart enough to remember the subject lines of all three. Second, for the second email, it had some issues filling out the right email address in the unsubscribe form, so that one didn’t work. Of the three unsubscribes, it succeeded on two.

The whole process also took about two minutes. It was wild though to see my Gmail being navigated by the machine. So that you know it’s in control, Comet puts a teal glow around the edges of the page, not dissimilar to the purple glow of the new Siri. And I could have stopped Comet at any time by clicking a stop button. Obviously, sitting there for two minutes and watching my computer unsubscribe from three emails is a lot longer than the 20 seconds it would have taken me to do this manually, but as with many agents, the idea is to delegate a process and come back later to check on it.

I Want My AI Browser

A couple hours after Perplexity launched Comet, Reuters published a leak with the headline “Exclusive: OpenAI to release web browser in challenge to Google Chrome.” Perplexity’s CEO seems to suggest that it was on purpose, to take a bit of wind out of their sails. The Justice Department is still trying to strong-arm Google into divesting itself of Chrome. If that happens, we’re talking about breaking the most profitable feedback loop in tech history. Chrome funnels search queries directly to Google, which powers their ad empire, which funds Chrome development. Break that cycle, and suddenly you’ve got an independent Chrome that could default to any search engine, giving AI-first challengers like The Browser Company, Perplexity, and OpenAI a real shot at users.

Regardless of Chrome’s fate, I strongly believe that AI-enabled browsers are the future. Once I started chatting with my tabs, asking for summaries, seeking clarification, asking for too-technical content to be dumbed down to my level, I just can’t go back. The agentic stuff that Perplexity’s Comet is at the forefront of is just the beginning. It’s not perfect yet, but I think its utility will get there as the models get better. To quote Srinivas again:

I’m betting on the fact that in the right environment of a browser with access to all these tabs and tools, a sufficiently good reasoning model — like slightly better, maybe GPT-5, maybe like Claude 4.5, I don’t know — could get us over the edge where all these things are suddenly possible and then a recruiter’s work worth one week is just one prompt: sourcing and reach outs. And then you’ve got to do state tracking… That’s the extent to which we have an ambition to make the browser into something that feels more like an OS where these are processes that are running all the time.

It must be said that both Opera and Microsoft’s Edge also have AI built in. However, the way those features are integrated feels more like an afterthought, the same way that Arc’s own AI features felt like tiny improvements.

The AI-powered ideas in both Dia and Comet are a step change. But the basics also have to be there, and in my opinion, should be better than what Chrome offers. The interface innovations that made Arc special shouldn’t be sacrificed for AI features. Arc is/was the perfect foundation. Integrate an AI assistant that can be personalized to care about the same things you do so its summaries are relevant. The assistant can be agentic and perform tasks for you in the background while you focus on more important things. In other words, put Arc, Dia, and Comet in a blender and that could be the perfect browser of the future.

This is a really well-written piece that pulls the AI + design concepts neatly together. Sharang Sharma, writing in UX Collective:

As AI reshapes how we work, I’ve been asking myself, it’s not just how to stay relevant, but how to keep growing and finding joy in my craft.

In my learning, the new shift requires leveraging three areas

  1. AI tools: Assembling an evolving AI design stack to ship fast
  2. AI fluency: Learning how to design for probabilistic systems
  3. Human-advantage: Strengthening moats like craft, agency and judgment to stay ahead of automation

Together with strategic thinking and human-centric skills, these pillars shape our path toward becoming an AI-native designer.

Sharma connects all the crumbs I’ve been dropping this week:


AI tools + AI fluency + human advantage = AI-native designer

From tools to agency, is this what it would take to thrive as a product designer in the AI era?

uxdesign.cc

From UX Magazine:

Copilots helped enterprises dip their toes into AI. But orchestration platforms and tools are where the real transformation begins — systems that can understand intent, break it down, distribute it, and deliver results with minimal hand-holding.

Think of orchestration as how “meta-agents” are conducting other agents.

The first iteration of AI in SaaS was copilots. They were like helpful interns eagerly awaiting your next command. Orchestration platforms are more like project managers. They break down big goals into smaller tasks, assign them to the right AI agents, and keep everything coordinated. This shift is changing how companies design software and user experiences, making things more seamless and less reliant on constant human input.

For designers and product teams, it means thinking about workflows that cross multiple tools, making sure users can trust and control what the AI is doing, and starting small with automation before scaling up.
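To make that project-manager metaphor concrete, here’s a rough sketch of the orchestration pattern: a meta-agent decomposes a goal into tasks, routes each one to a specialist agent, and collects the results. The interfaces are hypothetical, not any particular platform’s API.

```typescript
// A rough sketch of the orchestration pattern: a "meta-agent" breaks a goal into
// tasks and dispatches each to a specialist agent. Hypothetical interfaces only.

interface Agent {
  name: string;
  canHandle: (task: string) => boolean;
  run: (task: string) => Promise<string>;
}

// In a real platform, decomposition would itself be an LLM call; stubbed here.
async function decomposeGoal(goal: string): Promise<string[]> {
  return [
    `research: gather background for "${goal}"`,
    `draft: produce a first version of "${goal}"`,
    `review: check the draft against requirements for "${goal}"`,
  ];
}

async function orchestrate(goal: string, agents: Agent[]): Promise<string[]> {
  const tasks = await decomposeGoal(goal);
  const results: string[] = [];
  for (const task of tasks) {
    const agent = agents.find((a) => a.canHandle(task));
    if (!agent) {
      results.push(`UNASSIGNED: ${task}`); // surface gaps instead of failing silently
      continue;
    }
    results.push(await agent.run(task)); // sequential here; real platforms parallelize
  }
  return results;
}
```

The design questions live in the details of a loop like this: where the user sees progress, where they can intervene, and how an unassigned or failed task gets surfaced.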

Beyond Copilots: The Rise of the AI Agent Orchestration Platform

AI agent orchestration platforms are replacing simple copilots, enabling enterprises to coordinate autonomous agents for smarter, more scalable workflows.

uxmag.com

Let’s stay on the train of designing AI interfaces for a bit. Here’s a piece by Rob Chappell in UX Collective where he breaks down how to give users control—something I’ve been advocating—when working with AI.

AI systems are transforming the structure of digital interaction. Where traditional software waited for user input, modern AI tools infer, suggest, and act. This creates a fundamental shift in how control moves through an experience or product — and challenges many of the assumptions embedded in contemporary UX methods.

The question is no longer: “What is the user trying to do?”

The more relevant question is: “Who is in control at this moment, and how does that shift?”

Designers need better ways to track how control is initiated, shared, and handed back — focusing not just on what users see or do, but on how agency is negotiated between human and system in real time.

Most design frameworks still assume the user is in the driver’s seat. But AI is changing the rules. The challenge isn’t just mapping user flows or intent—it’s mapping who holds the reins, and how that shifts, moment by moment. Designers need new tools to visualize and shape these handoffs, or risk building systems that feel unpredictable or untrustworthy. The future of UX is about negotiating agency, not just guiding tasks.
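One way to make “who holds the reins” something you can actually design and audit is to model control as an explicit state, with every handoff logged. Here’s a rough sketch of that idea, with hypothetical names; it’s an illustration, not a framework from Chappell’s article.

```typescript
// A rough sketch: represent control as explicit state so that handoffs between
// user and system are deliberate, inspectable, and loggable. Names are illustrative.

type Holder = "user" | "system" | "shared";

interface ControlState {
  holder: Holder;
  reason: string;            // why control sits here, e.g. "user accepted suggestion"
  userCanInterrupt: boolean; // is there always a way back for the user?
}

interface Handoff {
  from: Holder;
  to: Holder;
  trigger: string;           // what caused the shift: suggestion accepted, timeout, etc.
  at: Date;
}

class AgencyTracker {
  private log: Handoff[] = [];

  constructor(private state: ControlState) {}

  handoff(to: Holder, trigger: string, reason: string): void {
    this.log.push({ from: this.state.holder, to, trigger, at: new Date() });
    // Design choice: whenever the system takes over, the user keeps an escape hatch.
    this.state = { holder: to, reason, userCanInterrupt: to !== "user" };
  }

  current(): ControlState {
    return this.state;
  }

  history(): Handoff[] {
    return [...this.log];
  }
}
```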


Beyond journey maps: designing for control in AI UX

When systems act on their own, experience design is about balancing agency — not just user flow

uxdesign.cc

Vitaly Friedman writes a good primer on the design possibilities for users to interact with AI features. As AI capabilities become more and more embedded in the products designers make, we have to become facile in manipulating AI as a material.

Many products are obsessed with being AI-first. But you might be way better off by being AI-second instead. The difference is that we focus on user needs and sprinkle a bit of AI across customer journeys where it actually adds value.

preview-1752639762962.jpg

Design Patterns For AI Interfaces

Designing a new AI feature? Where do you even begin? From first steps to design flows and interactions, here’s a simple, systematic approach to building AI experiences that stick.

smashingmagazine.com iconsmashingmagazine.com

Speaking of prompt engineering, apparently, there’s a new kind in town called context engineering.

Developer Philipp Schmid writes:

What is context engineering? While “prompt engineering” focuses on crafting the perfect set of instructions in a single text string, context engineering is far broader. Let’s put it simply: “Context Engineering is the discipline of designing and building dynamic systems that provides the right information and tools, in the right format, at the right time, to give a LLM everything it needs to accomplish a task.”
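
In practice, that “dynamic system” is often just a function that assembles the model’s input at request time from several sources: retrieved documents, available tools, user memory, and the task itself. Here’s a minimal sketch; the three helpers are stand-ins for whatever retrieval, tool registry, and memory you actually have:

```typescript
// Sketch of context engineering: the prompt is assembled at request time, not hand-written.
// The three helpers below are stand-ins for real retrieval, tool-registry, and memory systems.

const fetchRelevantDocs = async (task: string): Promise<string[]> => [
  `(doc snippet retrieved for: ${task})`, // stand-in for vector or keyword search
];
const listAvailableTools = (): string[] => ["searchDesignSystem", "createTicket"];
const getUserPreferences = () => ({ tone: "concise", locale: "en-US" });

async function buildContext(task: string) {
  const docs = await fetchRelevantDocs(task);
  const tools = listAvailableTools();
  const prefs = getUserPreferences();

  return {
    system: [
      "You are an assistant embedded in a design tool.",
      `User preferences: ${JSON.stringify(prefs)}`,
      `Tools you may call: ${tools.join(", ")}`,
    ].join("\n"),
    messages: [
      ...docs.map((d) => ({ role: "tool" as const, content: d })),
      { role: "user" as const, content: task },
    ],
  };
}
```

The skill is in deciding what goes into that function, and when, rather than wordsmithing a single string.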

preview-1752639352021.jpg

The New Skill in AI is Not Prompting, It's Context Engineering

Context Engineering is the new skill in AI. It is about providing the right information and tools, in the right format, at the right time.

philschmid.de iconphilschmid.de

In case you missed it, there’s been a major shift in the AI tool landscape.

On Friday, OpenAI’s $3 billion offer to acquire AI coding tool Windsurf expired. Windsurf is the Pepsi to Cursor’s Coke. They’re both IDEs (integrated development environments), the desktop applications that software developers use to write code. Think of them as supercharged text editors with AI built in.

On Friday evening, Google announced that it had hired Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and several key researchers for $2.4 billion.

On Monday, Cognition, the company behind Devin, the self-described “AI engineer,” announced that it had acquired Windsurf for an undisclosed sum, noting that Windsurf’s remaining 250 employees will “participate financially in this deal.”

Why does this matter to designers?

The AI tools market is changing very rapidly. With AI helping to write these applications, their numbers and features are always increasing—or in this case, maybe consolidating. Choose wisely before investing too deeply in any one tool. The one piece of advice I would give here is to avoid lock-in. Don’t get tied to a vendor. Ensure that your tool of choice can export your work—the code.

Jason Lemkin has more on the business side of things and how it affects VC-backed startups.

preview-1752536770924.png

Did Windsurf Sell Too Cheap? The Wild 72-Hour Saga and AI Coding Valuations

The last 72 hours in AI coding have been nothing short of extraordinary. What started as a potential $3 billion OpenAI acquisition of Windsurf ended with Google poaching Windsurf’s CEO and co…

saastr.com iconsaastr.com

Ted Goas, writing in UX Collective:

I predict the early parts of projects, getting from nothing to something, will become shared across roles. For designers looking to branch out, code is a natural next step. I see a future where we’re fixing small bugs ourselves instead of begging an engineer, implementing that animation that didn’t make the sprint but you know would absolutely slap, and even building simple features when engineering resources are tight.

Our new reality is that anyone can make a rough draft.

But that doesn’t mean those drafts are good. That’s where our training and taste come in.

I think Goas is right, and it echoes Elena Verna’s post on AI-native employees. I wrote about this a little more extensively in my newsletter over the weekend.

preview-1752467928143.jpg

Designers: We’ll all be design engineers in a year

And that’s a good thing.

uxdesign.cc iconuxdesign.cc

Miqdad Jaffer, a product leader at OpenAI, shares his 4D method for building AI products that users actually want. In summary, it’s…

  • Discover: Find and prioritize real user pain points and friction in daily workflows.
  • Design: Make AI features invisible and trustworthy, fitting naturally into users’ existing habits.
  • Develop: Build AI systematically, with robust evaluation and clear plans for failures or edge cases.
  • Deploy: Treat each first use like a product launch, ensuring instant value and building user trust quickly.

preview-1752209855759.png

OpenAI Product Leader: The 4D Method to Build AI Products That Users Actually Want

An OpenAI product leader's complete playbook to discover real user friction, design invisible AI, plan for failure cases, and go from "cool demo" to "daily habit"

creatoreconomy.so iconcreatoreconomy.so

Geoffrey Litt, Josh Horowitz, Peter van Hardenberg, and Todd Matthews, writing a paper for the research lab Ink & Switch, offer a great, well-thought-out piece on what they call “malleable software.”

We envision a new kind of computing ecosystem that gives users agency as co-creators. … a software ecosystem where anyone can adapt their tools to their needs with minimal friction. … When we say ‘adapting tools’ we include a whole range of customizations, from making small tweaks to existing software, to deep renovations, to creating new tools that work well in coordination with existing ones. Adaptation doesn’t imply starting over from scratch.

In their paper, they use analogies like kitchen tools and tool arrangement in a workshop to explore the idea. With regard to the current crop of AI prompt-to-code tools, they write:

We think these developments hold exciting potential, and represent a good reason to pursue malleable software at this moment. But at the same time, AI code generation alone does not address all the barriers to malleability. Even if we presume that every computer user could perfectly write and edit code, that still leaves open some big questions.

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Kind of a different take than the “personal software” we’ve seen written about before.

preview-1752208778544.jpg

Malleable software: Restoring user agency in a world of locked-down apps

The original promise of personal computing was a new kind of clay. Instead, we got appliances: built far away, sealed, unchangeable. In this essay, we envision malleable software: tools that users can reshape with minimal friction to suit their unique needs.

inkandswitch.com iconinkandswitch.com

This post has been swimming in my head since I read it. Elena Verna, who joined Lovable just over a month ago to lead marketing and growth, writing in her newsletter, observes that everyone at the company is an AI-native employee. “An AI-native employee isn’t someone who ‘uses AI.’ It’s someone who defaults to AI,” she says.

On how they ship product:

Here, when someone wants to build something (anything) - from internal tools, to marketing pages, to writing production code - they turn to AI and… build it. That’s it.

No headcount asks. No project briefs. No handoffs. Just action.

At Lovable, we’re mostly building with… Lovable. Our Shipped site is built on Lovable. I’m wrapping hackathon sponsorship intake form in Lovable as we speak. Internal tools like credit giveaways and influencer management? Also Lovable (soon to be shared in our community projects so ya’ll can remix them too). On top of that, engineering is using AI extensively to ship code fast (we don’t even really have Product Managers, so our engineers act as them).

I’ve been hearing about more and more companies operating this way. Crazy time to be alive.

More on this topic in a future long-form post.

preview-1752160625907.png

The rise of the AI-native employee

Managers without vertical expertise, this is your extinction call

elenaverna.com iconelenaverna.com

Read past some of the hyperbole in this piece by Andy Budd. I do think the message is sound.

If you’re working at a fast-growth tech startup, you’re probably already feeling the pressure. Execs want more output with fewer people. Product and engineering are experimenting with AI tooling. And you’re being asked to move faster than ever — with less clarity on what the team should even own.

I will admit that I personally feel this pressure too, albeit not from my employer but from the chatter in our industry. I’m observing younger companies experimenting with process, collapsing roles, and expanding responsibilities.

As AI eats into the production layer, the traditional boundaries between design and engineering are starting to dissolve. Many of the tasks once owned by design will soon be handled by others — or by machines.

Time will tell when this becomes widespread. I think designers will be asked to ship more code. And PMs and engineers may ship small design tweaks.

The reality is, we’ll likely need fewer designers overall. But the ones we do need will be more specialised, more senior, and more strategically valuable than ever before.

You’ll want AI-literate, full-stack designers — people who are comfortable working across the entire product surface, from UX to code, and from interface to infrastructure. Designers who can navigate ambiguity, embrace new tooling, and confidently operate in the blurred space between design and engineering.

I don’t know if I agree that we’ll need fewer designers, at least not in the near term. As AI becomes more embedded in app experiences, I predict the trend will go in the opposite direction. The term “AI as material” has been floating around for a few months, but I think its meaning will morph. AI will be the new UI, and thus we’ll need designers to help define those experiences.

preview-1751840519842.png

Design Leadership in the Age of AI: Seize the Narrative Before It’s Too Late

Design is changing. Fast. AI is transforming the way we work — automating production, collapsing handoffs, and enabling non-designers to ship work that once required a full design team. Like it or not, we’re heading into a world where many design tasks will no longer need a designer. If that fills you with unease, you’re not alone. But here’s the key difference between teams that will thrive and those that won’t: Some design leaders are taking control of the narrative. Others are waiting to be told what’s next.

andybudd.com iconandybudd.com
Stylized artwork showing three figures in profile - two humans and a metallic robot skull - connected by a red laser line against a purple cosmic background with Earth below.

Beyond Provocative: How One AI Company’s Ad Campaign Betrays Humanity

I was in London last week with my family and spotted this ad in a Tube car. Headlined “Humans Were the Beta Test,” it’s for Artisan, a San Francisco-based startup peddling AI-powered “digital workers,” specifically an AI agent that performs sales outreach to prospects.

London Underground tube car advertisement showing "Humans Were the Beta Test" with subtitle "The Era of AI Employees Is Here" and Artisan company branding on a purple space-themed background.

Artisan ad as seen in London, June 2025

I’ve long since left the Bay Area, but I know that Highway 101 is littered with cryptic billboards from tech companies, where the copy only makes sense to people in the tech industry, which, to be fair, is a large part of the Bay Area economy. Artisan is infamous for its “Stop Hiring Humans” campaign, which went up late last year. Being based in San Diego, much further south in California, I had no idea. Artisan wasn’t even on my radar.

Highway billboard reading "Stop Hiring Humans, Hire Ava, the AI BDR" with Artisan branding and an AI avatar image on the right side.

Artisan billboard off Highway 101, between San Francisco and SFO Airport

There’s something to be said about shockvertising. It’s meant to be shocking or offensive to grab attention. And the company certainly increased its brand awareness, claiming a +197% increase in brand search growth. Artisan CEO Jaspar Carmichael-Jack wrote a post-mortem about the campaign on the company blog:

The impact exceeded our wildest expectations. When I meet new people in San Francisco, 70% of the time they know about Artisan and what we do. Before, that number was around 5%. aHrefs ranked us #2 fastest growing AI companies by brand search. We’ve seen 1000s of sales meetings getting booked.

According to him, “October and November became our biggest months ever, bringing in over $2M in new ARR.”

I don’t know how I feel about this. My initial reaction to seeing “Humans Were the Beta Test” in London was disgust. As my readers know, I’m very much pro-AI, but I’m also very pro-human. Calling humanity a beta test is simply tone-deaf and nihilistic. It is belittling our worth and betting on the end of our species. Yes, yes, I know it’s just advertising and some ads are simply offensive to various people for a variety of reasons. But as technology people, Artisan should know better.

Despite ChatGPT’s soaring popularity, there is still ample fear about AI, especially around job displacement and safety. The discourse around AI is already too hyped up.

I even think “Stop Hiring Humans” is slightly less offensive. As to why the company chose to create a rage-bait campaign, Carmichael-Jack says:

We knew that if we made the billboards as vanilla as everybody else’s, nobody would care. We’d spend $100s of thousands and get nothing in return.

We spent days brainstorming the campaign messaging. We wanted to draw eyes and spark interest, we wanted to cause intrigue with our target market while driving a bit of rage with the wider public. The messaging we came up with was simple but provocative: “Stop Hiring Humans.”

Bus stop advertisement displaying "Stop Hiring Humans" with "The Era of AI Employees Is Here" and three human faces, branded by Artisan, on a city street with a passing bus.

When the full campaign, which included 50 bus shelter posters, went up, death threats started pouring in. He was in Miami on business and thought going home to San Francisco might be risky. “I was like, I’m not going back to SF,” Carmichael-Jack told The San Francisco Standard. “I will get murdered if I go back.”

(For the record, I’m morally opposed to death threats. They’re cowardly and incredibly scary for the recipient, regardless of who that person is.)

I’ve done plenty of B2B advertising campaigns in my day. Shock is not a tactic I would have used, nor one I would ever recommend to a brand trying to raise positive awareness. I wish Artisan had used the services of a good B2B ad agency. There are plenty out there, and I used to work at one.

Think about the brands that have used shockvertising tactics in the past, like Benetton and Calvin Klein. I’ve liked Oliviero Toscani’s controversial photographs that have been the central part of Benetton’s campaigns because they instigate a positive *liberal* conversation. The Pope kissing Egypt’s Islamic leader invites dialogue about religious differences and coexistence and provocatively expresses the campaign concept of “Unhate.”

But Calvin Klein’s sexualized high schoolers? No. There’s no good message in that.

And for me, there’s no good message in promoting the death of the human race. After all, who will pay for the service after we’re all end-of-lifed?

This piece from Mike Schindler is a good reminder that a lot of the content we see on LinkedIn is written for engagement. It’s a double-edged sword, isn’t it? We want our posts to be read, commented upon, and shared. We see the patterns that get a lot of reactions and we mimic them.

We’re losing ourselves to our worst instincts. Not because we’re doomed, but because we’re treating this moment like a game of hot takes and hustle. But right now is actually a rare and real opportunity for a smarter, more generous conversation — one that helps our design community navigate uncertainty with clarity, creativity, and a sense of shared agency.

But the point that Schindler is making is this: AI is a fundamental shift in the technology landscape that demands nuanced and thoughtful discourse. There’s a lot of hype. But as technologists, designers, and makers of products, we really need to lead rather than scare.

I’ve tried to do that in my writing (though I may not always be successful). I hope you do too.

He has this handy table too…

Chart titled “AI & UX Discourse Detox” compares unhealthy discourse (e.g., FOMO, gaslighting, clickbait, hot takes, flexing, elitism) with healthy alternatives (e.g., curiosity-driven learning, critical perspective, nuanced storytelling, thoughtful dialogue, shared discovery, community stewardship). Created by Mike Schindler.

Designed by Mike Schindler (mschindler.com)

preview-1751429244220.png

The broken rhetoric of AI

A detox guide for designers navigating today’s AI discourse

uxdesign.cc iconuxdesign.cc

Darragh Burke and Alex Kern, software engineers at Figma, writing on the Figma blog:

Building code layers in Figma required us to reconcile two different models of thinking about software: design and code. Today, Figma’s visual canvas is an open-ended, flexible environment that enables users to rapidly iterate on designs. Code unlocks further capabilities, but it’s more structured—it requires hierarchical organization and precise syntax. To reconcile these two models, we needed to create a hybrid approach that honored the rapid, exploratory nature of design while unlocking the full capabilities of code.

The solution turned out to be code layers: actual canvas primitives that can be manipulated just like a rectangle and that respect auto layout properties, opacity, border radius, and so on.

The solution we arrived at was to implement code layers as a new canvas primitive. Code layers behave like any other layer, with complete spatial flexibility (including moving, resizing, and reparenting) and seamless layout integration (like placement in autolayout stacks). Most crucially, they can be duplicated and iterated on easily, mimicking the freeform and experimental nature of the visual canvas. This enables the creation and comparison of different versions of code side by side. Typically, making two copies of code for comparison requires creating separate git branches, but with code layers, it’s as easy as pressing ⌥ and dragging. This automatically creates a fork of the source code for rapid riffing.

In my experience, it works as advertised, though a code layer takes a second to re-render when its spatial properties are edited. That makes sense, since it’s rendering code.

preview-1751332174370.png

Canvas, Meet Code: Building Figma’s Code Layers

What if you could design and build on the same canvas? Here's how we created code layers to bring design and code together.

figma.com iconfigma.com

Christoph Niemann, in a visual essay about generative AI and art:

…the biggest challenge is that writing an A.I. prompt requires the artist to know what he wants. If only it were that simple.

Creating art is a nonlinear process. I start with a rough goal. But then I head into dead ends and get lost or stuck.

The secret to my process is to be on high alert in this deep jungle for unexpected twists and turns, because this is where a new idea is born.

It’s a fun meditation on the meaning of AI-assisted and AI-generated artwork.

preview-1751331004352.jpg

Sketched Out: An Illustrator Confronts His Fears About A.I. Art (Gift Article)

The advent of A.I. has shocked me into questioning my relationship with art. Will humans still be able to draw for a living?

nytimes.com iconnytimes.com

If you want an introduction to using Cursor as a designer, here’s a must-watch video. It’s just over half an hour long, and Elizabeth Lin goes through several demos in Cursor.

Cursor is much more advanced than the AI prompt-to-code tools I’ve covered here before. But with it, you’ll get much more control because you’re building with actual code. (Of course, sigh, you won’t have sliders and inputs for controlling design.)

preview-1750139600534.png

A designer's guide to Cursor: How to build interactive prototypes with sound, explore visual styles, and transform data visualizations | Elizabeth Lin

How to use Cursor for rapid prototyping: interactive sound elements, data visualization, and aesthetic exploration without coding expertise

open.substack.com iconopen.substack.com

David Singleton, writing in his blog:

Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without. Not in a hand-wavy, hype-cycle way, but in a very concrete “this is changing how I ship software” way.

I have to agree. My recent tinkering projects with Cursor using Claude 4 Sonnet (and set to Cursor’s MAX mode) have been much smoother and much more autonomous.

And Singleton has found that Claude Code and OpenAI Codex are good for different things:

For personal tools, I’ve completely shifted my approach. I don’t even look at the code anymore - I describe what I want to Claude Code, test the result, make some minor tweaks with the AI and if it’s not good enough, I start over with a slightly different initial prompt. The iteration cycle is so fast that it’s often quicker to start over than trying to debug or modify the generated code myself. This has unlocked a level of creative freedom where I can build small utilities and experiments without the usual friction of implementation details.

And the larger point Singleton makes is that if you direct the right context to the reasoning model, it can help you solve your problem more effectively:

This points to something bigger: there’s an emerging art to getting the right state into the context window. It’s sometimes not enough to just dump code at these models and ask “what’s wrong?” (though that works surprisingly often). When stuck, you need to help them build the same mental framework you’d give to a human colleague. The sequence diagram was essentially me teaching Claude how to think about our OAuth flow. In another recent session, I was trying to fix a frontend problem (some content wouldn’t scroll) and couldn’t figure out where I was missing the correct CSS incantation. Cursor’s Agent mode couldn’t spot it either. I used Chrome dev tools to copy the entire rendered HTML DOM out of the browser, put that in the chat with Claude, and it immediately pinpointed exactly where I was missing an overflow: scroll.

For my designer audience out there—likely 99% of you—I think this post is instructive about how to work with reasoning models like Claude 4 or o4. It can totally apply to prompt-to-code tools like Lovable and v0, and these ideas can likely apply to Figma Make and Subframe as well.
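
Singleton’s DOM-dump trick is easy to replicate: snapshot the rendered markup and hand it to the model alongside a pointed question. Here’s a rough sketch, where callModel stands in for whatever chat API or agent tool you actually use:

```typescript
// Sketch of feeding a rendered DOM snapshot to a model as debugging context.
// callModel is a placeholder for whatever chat API or agent you actually use.
declare function callModel(prompt: string): Promise<string>;

async function debugLayout(question: string): Promise<string> {
  // Equivalent to DevTools: right-click the element, Copy > Copy outerHTML.
  const domSnapshot = document.documentElement.outerHTML;

  const prompt = [
    "Here is the fully rendered DOM of the page I'm debugging:",
    domSnapshot,
    `Question: ${question}`,
    "Point to the specific element and CSS property that is causing it.",
  ].join("\n\n");

  return callModel(prompt);
}
```

The point, as Singleton says, is getting the right state into the context window, the same way you’d brief a colleague.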

preview-1750138847348.jpg

Coding agents have crossed a chasm

Coding agents have crossed a chasm Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without.

blog.singleton.io iconblog.singleton.io

Brian Balfour, writing for the Reforge blog:

Speed isn’t just about shipping faster, it’s about accelerating your entire learning metabolism. The critical metric isn’t feature velocity but rather your speed through the complete Insight → Act → Learn loop. This distinction separates products that compound advantages from those that compound technical debt.

The point is that, with AI, product teams are shipping faster. And those who aren’t might get lapped (to use an F1 phrase).

When Speed Becomes Table Stakes: 5 Improvements to Accelerate Insight to Action

In a world where traditional moats can evaporate in weeks rather than years, speed has transformed from competitive advantage to baseline requirement—yet here lies the paradox: while building and shipping have never been faster, the insights to fuel that building remain trapped in months-long archaeological expeditions through disconnected tools.

reforge.com iconreforge.com