
It’s like Footer, the footer gallery, but for navigation bars! Inspired by the former and incubated on Design Twitter, Christina Liubynska makes a curated space to celebrate navbars.

Large white headline "Home to the best navbars on the internet" on a dark background with colorful floating navbar icons.

Navbar Gallery – Navigation Design Inspiration

Navbar Gallery is a collection of the best website navbar inspiration designs on the web. Find the ideal navigation example for your design from our collection.

navbar.gallery

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com

Speaking of trusting AI, in a recent episode of Design Observer’s Design As, Lee Moreau speaks with four industry leaders about trust and doubt in the age of AI.

We’ve linked to a story about Waymo before, so here’s Ryan Powell, head of UX at Waymo:

Safety is at the heart of everything that we do. We’ve been at this for a long time, over a decade, and we’ve taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, how do we design for trust? What does it actually mean?

Ellie Kemery, principal research lead, advancing responsible AI at SAP, on maintaining critical thinking and transparency in AI-driven products:

We need to think about ethics as a part of this because the unintended consequences, especially at the scale that we operate, are just too big, right?

So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output,…making sure that people are in control of what’s happening at all times, because at the end of the day, they need to be the ones making the call.

Everybody’s aware that without trust, there is no adoption. But there is something that people aren’t talking about as much, which is that people should also not blindly trust a system, right? And there’s a huge risk there because, humans we tend to, you know, we’ll try something a couple of times and if it works it works. And then we lose that critical thinking. We stop checking those things and we simply aren’t in a space where we can do that yet. And so making sure that we’re focusing on the calibration of trust, like what is the right amount of trust that people should have to be able to benefit from the technology while at the same time making sure that they’re aware of the limitations.

Bold white letters in a 3x3 grid reading D E S / I G N / A S on a black background, with a right hand giving a thumbs-up over the right column.

Design as Trust | Design as Doubt

Explore how designers build trust, confront doubt, and center equity and empathy in the age of AI with leaders from Adobe, Waymo, RUSH, and SAP

designobserver.com

Ethan Mollick, a professor of entrepreneurship at the Wharton School, says that AI has gotten so good that our relationship with it is changing. “We’re moving from partners to audience, from collaboration to conjuring,” he says.

He fed NotebookLM his book and 140 Substack posts and asked for a video overview. AI famously hallucinates. But Mollick found no factual errors in the six-minute video.

We’re shifting from being collaborators who shape the process to being supplicants who receive the output. It is a transition from working with a co-intelligence to working with a wizard. Magic gets done, but we don’t always know what to do with the results. This pattern — impressive output, opaque process — becomes even more pronounced with research tasks.

Mollick believes that the most wizard-like model today is GPT-5 Pro. He uploaded an academic paper of his that took a year to write, survived peer review, and was published in a major journal…

Nine minutes and forty seconds later, I had a very detailed critique. This wasn’t just editorial criticism, GPT-5 Pro apparently ran its own experiments using code to verify my results, including doing Monte Carlo analysis and re-interpreting the fixed effects in my statistical models. It had many suggestions as a result (though it fortunately concluded that “the headline claim [of my paper] survives scrutiny”), but one stood out. It found a small error, previously unnoticed. The error involved two different sets of numbers in two tables that were linked in ways I did not explicitly spell out in my paper. The AI found the minor error, no one ever had before.
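
(For the curious: a Monte Carlo check of the kind described here re-derives a result by simulating the data-generating process many times and seeing where the published estimate lands. Here is a toy sketch in Python of the general idea; it is my illustration, not Mollick’s paper or the model’s actual code.)

```python
# Toy Monte Carlo check (illustrative only): simulate many datasets under
# the model's assumptions and see whether a headline estimate is consistent
# with them.
import random

def simulate_effect(true_effect=0.5, n=200, noise=1.0):
    # One simulated "study": estimate the mean of noisy observations.
    samples = [true_effect + random.gauss(0, noise) for _ in range(n)]
    return sum(samples) / n

random.seed(42)
estimates = sorted(simulate_effect() for _ in range(10_000))
lo, hi = estimates[250], estimates[-251]  # middle 95% of simulated estimates
print(f"95% of simulated estimates fall in [{lo:.3f}, {hi:.3f}]")
# If the paper's headline estimate lies inside this interval, it
# "survives scrutiny" in the sense quoted above.
```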

Later in his post, Mollick says that there’s a problem with this wizardry—it’s too opaque. So what can we do?

First, learn when to summon the wizard versus when to work with AI as a co-intelligence or to not use AI at all. AI is far from perfect, and in areas where it still falls short, humans often succeed. But for the increasing number of tasks where AI is useful, co-intelligence, and the back-and-forth it requires, is often superior to a machine alone. Yet, there are, increasingly, times when summoning a wizard is best, and just trusting what it conjures.

Second, we need to become connoisseurs of output rather than process. We need to curate and select among the outputs the AI provides, but more than that, we need to work with AI enough to develop instincts for when it succeeds and when it fails.

And lastly, trust it. Trust the technology, he suggests. “The question isn’t ‘Is this completely correct?’ but ‘Is this useful enough for this purpose?’”

I think we’re in that transition period. AI is indeed astoundingly good at some things and constantly getting better at the tasks it’s not. But we all know where this is headed.

Witch hat hovering over a desktop monitor with circuit-like lines flowing into the screen, small coffee mug on the desk.

On Working with Wizards

Verifying magic on the jagged frontier

oneusefulthing.org

In this era of AI, we’ve been taught that LLMs are probabilistic, not deterministic, and that they will sometimes hallucinate. There’s a saying in AI circles that humans are right about 80% of the time, and so are AIs. That’s fine, except in domains where anything less than 100% accuracy is unacceptable. Accountants need to be 100% accurate, lest they lose track of money for their clients or businesses.

And that’s the problem Intuit had to solve to roll out their AI agent. Sean Michael Kerner, writing in VentureBeat:

Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, they still received complaints about errors.

“The use cases that we’re trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls,” Joe Preston, Intuit’s VP of product and design, told VentureBeat.

So they built an agent that queries data from a multitude of sources and returns those exact results. But do users trust those results? It comes down to a design decision about transparency:

Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.

When Intuit’s accounting agent categorizes a transaction, it doesn’t just display the result; it shows the reasoning. This isn’t marketing copy about explainable AI, it’s actual UI displaying data points and logic.

“It’s about closing that trust loop and making sure customers understand the why,” Alastair Simpson, Intuit’s VP of design, told VentureBeat.
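
To make that concrete, here is a minimal sketch of what a response shape that carries its own reasoning could look like. The names are illustrative assumptions; the article doesn’t describe Intuit’s actual API.

```python
# Sketch of the "show the reasoning" pattern: the agent returns the category
# *plus* the data points and logic behind it, so the UI can render the "why"
# next to the result. All names are hypothetical, not Intuit's API.
from dataclasses import dataclass, field

@dataclass
class CategorizationResult:
    category: str
    confidence: float  # model's own confidence estimate, 0-1
    data_points: list[str] = field(default_factory=list)  # evidence for the UI
    reasoning: str = ""  # plain-language explanation shown to the user

def categorize_transaction(vendor: str, amount: float) -> CategorizationResult:
    # Toy rule standing in for the real model's decision.
    if vendor.lower() == "uber":
        return CategorizationResult(
            category="Travel",
            confidence=0.93,
            data_points=[f"Vendor: {vendor}", f"Amount: ${amount:.2f}",
                         "12 prior Uber transactions categorized as Travel"],
            reasoning="Vendor and amount match your past business travel expenses.",
        )
    return CategorizationResult(category="Uncategorized", confidence=0.0,
                                reasoning="Not enough evidence; asking the user.")

result = categorize_transaction("Uber", 24.50)
print(result.category, "because:", result.reasoning)
```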

Rusty metal bucket tipped over pouring a glowing stream of blue binary digits (ones and zeros) onto a dark surface.

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

The QuickBooks maker's approach to embedding AI agents reveals a critical lesson for enterprise AI adoption: in high-stakes domains like finance and tax, one mistake can erase months of user confidence.

venturebeat.com

We’ve been hearing a lot about AI agents and now enough time has passed that we’re starting to see some learnings in industry. Writing in Harvard Business Review, Linda Mantia, Surojit Chatterjee and Vivian S. Lee showcase three case studies of enterprises that have deployed AI agents.

They write about Hitachi Digital and how they deployed an AI agent as the first responder to the 90,000 questions employees send to their HR team annually.

Every year, employees put over 90,000 questions about everything from travel policies and remote work to training and IT support to the company’s HR team of 120 human responders. Answering these queries can be difficult, in part because of Hitachi’s complex infrastructure of over 20 systems of record, including multiple disparate HR systems, various payroll providers, and different IT environments.

Their system, called “Skye,” is actually a system of agents, coordinating with one another and firing off queries depending on the intent and task.

For example, the intent classifier agent sends a simple policy question like “What are allowed expenses for traveling overseas?” or “Does this holiday count in paid time off?” to a file search and respond agent, which provides immediate answers by examining the right knowledge base given the employee’s position and organization. A document generation agent can create employee verification letters (which verify individuals’ employment status) in seconds, with an option for human approval. When an employee files a request for vacation, the leave management agent uses the appropriate HR management system based on its understanding of the user’s identity, completes the necessary forms, waits for the approval of the employee’s manager, and reports back to the employee.
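
As a rough sketch of that routing pattern, here is how an intent classifier plus dispatcher might look. All function names and intent labels are my assumptions for illustration; the article doesn’t publish Skye’s implementation.

```python
# Hedged sketch of the multi-agent routing described above: a classifier
# assigns an intent to each question, and a dispatcher hands it to the
# matching specialized agent.

def classify_intent(question: str) -> str:
    """Stand-in for an LLM-based intent classifier."""
    q = question.lower()
    if "letter" in q or "verification" in q:
        return "document_generation"
    if "vacation" in q or "leave" in q or "time off" in q:
        return "leave_management"
    return "policy_search"  # default: look it up in the knowledge base

def policy_search_agent(question: str, employee: dict) -> str:
    # Would search the knowledge base scoped to the employee's org and role.
    return f"[policy answer for {employee['org']}] re: {question}"

def document_agent(question: str, employee: dict) -> str:
    # Would generate the letter, with an option for human approval.
    return f"Employment verification letter drafted for {employee['name']}."

def leave_agent(question: str, employee: dict) -> str:
    # Would file the request in the right HR system and await manager approval.
    return f"Leave request filed for {employee['name']}; awaiting approval."

AGENTS = {
    "policy_search": policy_search_agent,
    "document_generation": document_agent,
    "leave_management": leave_agent,
}

def route(question: str, employee: dict) -> str:
    return AGENTS[classify_intent(question)](question, employee)

print(route("What are allowed expenses for traveling overseas?",
            {"name": "Kei", "org": "Hitachi Digital"}))
```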

The authors see three essential imperatives when designing and deploying AI agents into companies.

  1. Design around outcomes and appoint accountable mission owners. Companies need to stop organizing around internal functions and start building teams around actual customer outcomes—which means putting someone in charge of the whole journey, not just pieces of it.
  2. Unlock data silos and clarify the business logic. Your data doesn’t need to be perfect or centralized, but you do need to map out how work actually gets done so AI agents know where to find things and what decisions to make.
  3. Develop the leaders and guardrails that intelligent systems require. You can’t just drop AI agents into your org and hope for the best—leaders need to understand how these systems work, build trust with their teams, and put real governance in place to keep things on track.
Top-down view of two people at a white desk with monitor, keyboard and mouse, overlaid by a multicolored translucent grid.

Designing a Successful Agentic AI System

Agentic AI systems can execute workflows, make decisions, and coordinate across departments. To realize its promise, companies must design workflows around outcomes and appoint mission owners who define the mission, steer both humans and AI agents, and own the outcome; unlock the data silos it needs to access and clarify the business logic underpinning it; and develop the leaders and guardrails that these intelligent systems require.

hbr.org

It’s interesting to me that Figma had to have a separate conference and set of announcements focused on design systems. In some sense it’s an indicator of how big and mature this part of design has become.

A few highlights from my point of view…

Slots seems to solve one of those small UX paper cuts—those niggly inconveniences that we just lived with. But this is a big deal. You’ll be able to add layers within component instances without breaking the connection to your design system. No more pre-building hidden list items or forcing designers to detach components. Pretty advanced stuff.

On the code front, they’re making Code Connect actually approachable with a new UI that connects directly to GitHub and uses AI to map components. The Figma MCP server is out of beta and now supports design system guidelines—meaning your agentic coding tools can actually respect your design standards. Can’t wait to try these.

For teams like mine that are using Make, you’ll be able to pull in design systems through two routes: Make kits (generate React and CSS from Figma libraries) or npm package imports (bring in your existing code components). This is the part where AI-assisted design doesn’t have to mean throwing pixelcraft out the window.

Design systems have always been about maintaining quality at scale. These updates are very welcome.

Bright cobalt background with "schema" in a maroon bar and light-blue "by Figma" text, stepped columns of orange semicircles on pale-cyan blocks along right and bottom.

Schema 2025: Design Systems For A New Era

As AI accelerates product development, design systems keep the bar for craft and quality high. Here’s everything we announced at Schema to help teams design for the AI era.

figma.com

I will say that A-ha’s 1985 hit “Take On Me” and its accompanying video were incredibly influential on me as a kid. Hearing about the struggles the band endured and the constant retooling of the song is very inspiring. In an episode of Song Exploder, Hrishikesh Hirway interviews Paul Waaktaar-Savoy, who originally wrote the bones of the song as a teenager, about the creative journey the band took to realize the version we know and love.

Hirway:

Okay, so you have spent the whole budget and then this version of the song comes out in 1984, and it flops. How were you able to convince anybody to give you another chance? Or maybe even more so, I’m curious, for your own sake: How were you able to feel like that wasn’t the end of the road for the song? Like, it had its chance, it didn’t happen, and that was that.

Waaktaar-Savoy:

Yeah, that’s the good thing about being young. You don’t feel, (chuckles) you know, you just sort of, brush it off your shoulders, you know. We were a hundred percent confident. We were like, there’s not a doubt in our minds.

…it took some time, you know, it was very touch and go. ‘Cause the, you know, they’ve spent this much money on the half-finished album. Are they gonna pour more money into it and risk losing more money? So, from Norway? Hey, no one comes from Norway and makes it. And so it was a risk for people.

Having gone to England from their native Norway, A-ha released two versions of the song in the UK before it became a hit in the US. With the help of the music video, of course.

A new record exec at the US arm of Warner Bros. took a liking to the band and the album, as Waaktaar-Savoy recalls:

And there was a new guy on the company, Jeff Ayeroff. He fell in love with the, the album and the song. And he had been keeping this one particular idea sort of in the back of his head. There was this art film called Commuter, with animation. So, he was the one who put together that with Steve Barron, who was the director.

And they made the video. And the song slowly climbed the charts to become a number one hit.

Episode 301: A-ha

Explore the making of “Take On Me” by A-ha on Song Exploder. Listen as band member Paul Waaktaar-Savoy shares the origins, evolution, and creative process behind their iconic hit. This episode delves into the band’s journey, the song’s chart-topping success, and the inspiration behind the legendary music video. Find full episode audio, streaming links, a transcript, and behind-the-scenes stories from A-ha, the most successful Norwegian pop group of all time. Discover music history and artist insights only on Song Exploder’s in-depth podcast series.

songexploder.net

Circling back to Monday’s item on how caring is good design, Felix Haas has a subtly different take: build kindness into your products.

Kindness in design isn’t about adding smiley faces or writing cheerful copy. It’s deeper than tone. It’s about intent embedded in every interaction.

Kindness shows up in the patience of an empty state that doesn’t rush you. In the warmth of micro-interactions that acknowledge your actions without demanding attention. In error messages that guide rather than scold. In defaults that assume good intent rather than user incompetence.

These moments seem subtle, even trivial, in isolation. But they accumulate. They shape how we feel about a product over weeks and months. They turn interfaces into relationships. They build trust.

Kind Products Win

Why do so many products feel soulless?

designplusai.com

As a follow-up to our previous item on Claude Code, here’s an article by Nick Babich, who gives us three ways product designers can use Claude to code.

Remember that Anthropic’s Claude has been the leading LLM for coding for a while now.

Claude For Code: How to use Claude to Streamline Product Design Process

Anthropic Claude is a primary competitor of OpenAI’s ChatGPT. Just like ChatGPT, it is a versatile tool that can be used in many…

uxplanet.org

With Cursor and Lovable as the darlings of AI coding tools, don’t sleep on Claude Code. Personally, I’ve been splitting my time between Claude Code and Cursor. While Claude Code’s primary persona is coders and tinkerers, it can be used for so much more.

Lenny Rachitsky calls it “the most underrated AI tool for non-technical people.”

The key is to forget that it’s called Claude Code and instead think of it as Claude Local or Claude Agent. It’s essentially a super-intelligent AI running locally, able to do stuff directly on your computer—from organizing your files and folders to enhancing image quality, brainstorming domain names, summarizing customer calls, creating Linear tickets, and, as you’ll see below, so much more.

Since it’s running locally, it can handle huge files, run much longer than the cloud-based Claude/ChatGPT/Gemini chatbots, and it’s fast and versatile. Claude Code is basically Claude with even more powers.

Rachitsky shares 50 of his “favorite and most creative ways non-technical people are using Claude Code in their work and life.”

Everyone should be using Claude Code more

How to get started, and 50 ways non-technical people are using Claude Code in their work and life

lennysnewsletter.com

Remote work really exploded when the Covid-19 pandemic hit. Everyone had to adjust to working from home, relying on Zoom and Slack and other collaborative tools much more. But beyond tooling, there’s also process. Matt Mullenweg, CEO of Automattic, has famously been a proponent of distributed work for a while.

Paolo Belcastro peels back the curtain to share how the 1,500 or so global employees of Automattic stay connected via two core principles:

There are two ideas that define our communication culture:

  • Radical Transparency: we default to openness, with every conversation accessible to everyone in the company.
  • Asynchronous by Design: we don’t expect everyone to be “on” at the same time.

Everything is written down:

Our internal platform, P2, started life as a WordPress theme (it was called Prologue, later updated to version 2 and eventually shortened to P2) that lets people post directly on the front end of a site—fast, simple, and visible to everyone. Over time it evolved into a network of thousands of P2s for teams, projects, and watercooler chats (couch surfing, classified ads, house renovations, babies, pets, music, or games, we kind of have it all).

Every post, every comment, every decision ever made in the history of Automattic is preserved there.

As you can imagine, it soon becomes a volume problem. There’s too much stuff.

No one can read everything.

That’s why onboarding is designed to help people adapt:

  • Each newcomer is paired with a mentor from a different team, to give them a cross-company perspective.
  • They receive a curated list of “milestone posts” that map the history of Automattic, along with role-specific threads relevant to their work.
  • The Field Guide offers principles, templates, and advice about how to handle communication.

Somehow, they make it work.

Using chaos to communicate order

How we communicate at Automattic

ttl.blog

Building on Matthew Ström-Awn’s argument that true quality emerges from decentralized, ground-level ownership, Sean Goedecke writes an essay exploring how software companies navigate the tension between formalized control and the informal, often invisible work that actually drives product excellence.

But first, what does legibility even mean?

What does legibility mean to a tech company, in practice? It means:

  • The head of a department knows, to the engineer, all the projects the department is currently working on
  • That head also knows (or can request) a comprehensive list of all the projects the department has shipped in the last quarter
  • That head has the ability to plan work at least one quarter ahead (ideally longer)
  • That head can, in an emergency, direct the entire resources of the department at immediate work

Note that “shipping high quality software” or “making customers happy” or even “making money” is not on this list. Those are all things tech companies want to do, but they’re not legibility.

Goedecke argues that while leaders prize formal processes and legibility to facilitate predictability and coordination, these systems often overlook the messier, less measurable activities that drive true product quality and user satisfaction.

All organizations - tech companies, social clubs, governments - have both a legible and an illegible side. The legible side is important, past a certain size. It lets the organization do things that would otherwise be impossible: long-term planning, coordination with other very large organizations, and so on. But the illegible side is just as important. It allows for high-efficiency work, offers a release valve for processes that don’t fit the current circumstances, and fills the natural human desire for gossip and soft consensus.

Seeing like a software company

The big idea of James C. Scott’s Seeing Like A State can be expressed in three points: Modern organizations exert control by maximising “legibility”: by…

seangoedecke.com

Matt Ström-Awn makes the argument that companies can achieve sustainable excellence by empowering everyone at each level to take ownership of quality, rather than relying solely on top-down mandates or standardized procedures.

But more and more I’ve come to believe that quality isn’t a slogan, a program, or a scorecard. It’s a promise kept at the edge by the people doing the work. And, ideally, quality is fundamental to the product itself, where users can judge it without our permission. That’s the shift we need: away from heroics at the center, toward systems that make quality inevitable.

The stakes are high. Centralized quality — slogans, KPIs, executive decrees — can produce positive results, but it’s brittle. Decentralized quality — continuous feedback, distributed ownership, emergent standards — builds resilience. In this essay, I’d like to make the case that the future belongs to those who can decentralize their mindset and approach to quality.

Ström-Awn offers multiple case studies, contrasting centralized systems with decentralized ones, using Ford, Amazon, Apple, Toyota, Netflix, 3M, Morning Star, W.L. Gore, Valve, Barnes & Noble, and Microsoft under Satya Nadella as examples.

These stories share a common thread: organizations that trusted their frontline workers to identify and solve quality problems. But decentralized quality has its own vulnerabilities. Valve’s radical structure has been criticized for creating informal power hierarchies and making it difficult to coordinate large projects. Some ex-employees describe a “high school clique” atmosphere where popular workers accumulate influence while others struggle. Without traditional management oversight, initiatives can moulder, or veer in directions that don’t serve broader company goals.

Still, these examples show a different path for achieving quality, where excellence is defined in the course of building a product. Unlike centralized approaches relying on visionary (but fallible) leaders, decentralized systems are resilient to individual failures, adaptable to change, and empowering to builders. The andon cord, the rolling desk, and the local bookstore manager each represent a small bet on human judgment over institutional control. Those bets look like they’re paying off.

Decentralizing quality

Why moving judgment to the edges wins in the long run

matthewstrom.com

Slow and steady wins the race, so they say. And in Waymo’s case, that’s true. Unlike the stereotypical Silicon Valley ethos of “move fast and break things,” Waymo has been very deliberate and intentional in developing its self-driving tech. In other words, they’re really trying to account for the unintended consequences.

Writing for The Atlantic, Saahil Desai:

Compared with its robotaxi competitors, “Waymo has moved the slowest and the most deliberately,” [Bryant Walker Smith] said—which may be a lesson for the world’s AI developers. The company was founded in 2009 as a secretive project inside of Google; a year later, it had logged 1,000 miles of autonomous rides in a tricked-out Prius. Close to a decade later, in 2018, Waymo officially launched its robotaxi service. Even now, when Waymos are inching their way into the mainstream, the company has been hypercautious. The company is limited to specific zones within the five cities it operates in (San Francisco, Phoenix, Los Angeles, Austin, and Atlanta). And only Waymo employees and “a growing number of guests” can ride them on the highway, Chris Bonelli, a Waymo spokesperson, told me. Although the company successfully completed rides on the highway years ago, higher speeds bring more risk for people and self-driving cars alike. What might look like a few grainy pixels to Waymo’s cameras one moment could be roadkill to swerve around the very next.

Move Fast and Break Nothing

Waymo’s robotaxis are probably safer than ChatGPT.

theatlantic.com

As UX designers, we try to anticipate the edge cases—what might a user do and how can we ensure they don’t hit any blockers. But beyond the confines of the products we build, we must also remember to anticipate the unintended consequences. How might this product or feature affect the user emotionally? Are we creating bad habits? Are we fomenting rage in pursuit of engagement?

Martin Tomitsch and Steve Baty write in DOC, suggesting some frameworks to anticipate the unpredictable:

Chaos theory describes the observation that even tiny perturbations like the flutter of a butterfly can lead to dramatic, non-linear effects elsewhere over time. Seemingly small changes or decisions that we make as designers can have significant and often unforeseen consequences.

As designers, we can’t directly control the chain of reactions that will follow an action. Reactions are difficult to predict, as they occur depending on factors beyond our direct control.

But by using tools like systems maps, the impact ripple canvas, and iceberg visuals, we can take potential reactions out of the unpredictable pile and shift them into the foreseeable pile.

The UX butterfly effect

Understanding unintended consequences in design and how to plan for them.

doc.cc

OK, so there’s workslop, but there’s also general AI slop. With OpenAI’s recent launch of the Sora app, there’s going to be more and more AI-generated image and video content making the rounds. I do believe that there’s a place for using AI to generate imagery. It can be done well (see Christian Haas’s “AI Jobs”). Or not.

Casey Newton, writing in his Platformer newsletter:

In Sora we find the entire debate over AI-generated media in miniature. On one hand, the content now widely derided as “slop” continually receives brickbats on social media, in blog posts and in YouTube comments. And on the other, some AI-generated material is generating millions of views — presumably not all from people who are hate-watching it.

As content on the internet becomes increasingly AI-generated, platforms will need to balance how much of it they let in, lest the overall quality drop.

As Sarah Perez noted at TechCrunch, Pinterest has come under fire from its user base all year for a perceived decline in quality of the service as the percentage of slop there increases. Many people use the service to find real objects they can buy and use; the more that those objects are replaced with AI fantasies, the worse Pinterest becomes for them.

Like most platforms, Pinterest sees little value in banning slop altogether. After all, some people enjoy looking at fantastical AI creations. At the same time, its success depends in some part on creators believing that there is value in populating the site with authentic photos and videos. The more that Pinterest’s various surfaces are dominated by slop, the less motivated traditional creators may be to post there.

How platforms are handling the slop backlash

AI-generated media is generating millions of views. But some companies are beginning to rein it in

platformer.news

Sticking with the theme of workslop and outsourcing our main work to AI, Douglas Rushkoff writes in Fast Company:

By using the AI to do the big stuff—by outsourcing our primary competencies to the machines instead of giving them the boring busywork—we deskill ourselves and deprive everyone of the opportunity for AI-enhanced outputs. Too many of us are using AI as the primary architect for a project, rather than the general contractor who supports the architect’s human vision.

People forget that it’s the process of doing something that helps us synthesize and form the connections necessary for innovation.

As the researcher behind MIT’s study “This is Your Brain on ChatGPT” explained at a recent ANDUS event, when people turn to an AI for a solution before working on a problem themselves, the number of connections formed in their brains decreases. But when they turn to the AI after working on the problem for a while, they end up with more neural connections than if they worked entirely alone.

That’s because the value of the AI is not its ability to create product for us, but to engage with us in our process. Working and iterating with an AI—doing what we could call generative thinking—is actually a break from Industrial Age thinking. We focus less on outputs than on cycles. Less on the volume of short-term results (assembly line), and more on the quality and complexity of thought and innovation.

The value of the AI is not its ability to create product for us, but to engage with us in our process

AI doesn’t have to replace our competencies or even our employees.

fastcompany.com

Speaking of workslop, here’s an article from NN/g on how to avoid falling into over-reliance on AI in our design field. They call it the “7 Deadly AI Sins for UX Professionals.”

  1. Outsourced Thinking
  2. Wasted Time
  3. Lost Details
  4. Isolated Ideation
  5. Naïve Trust
  6. Bland Taste
  7. Defensive Outlook

As Tanner Kohler writes:

It’s not about avoiding AI. It’s about maintaining your own growth and the quality of your work as you use AI. AI will constantly be changing. Never let yourself slip into repeatedly committing the sins that weaken you and your UX skills.

7 Deadly AI Sins for UX Professionals

Succumbing to AI temptations weakens your UX skills. Strive for the AI virtues to keep yourself strong as you use AI in your work.

nngroup.com

Definitely use AI at work if you can. You’d be guilty of professional negligence if you don’t. But you must not blindly take output from ChatGPT, Claude, or Gemini and use it as-is. You have to check it, verify that it’s free from hallucinations, and confirm it’s applicable to the task at hand. Otherwise, you’ll generate “workslop.”

Kate Niederhoffer, Gabriella Rosen Kellerman, et al., in Harvard Business Review, report on a study by Stanford Social Media Lab and BetterUp Labs. They write, “Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers.”

Here’s how this happens. As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.

Don’t be like this. Use it to do better work, not to turn in mediocre work.

Workslop may feel effortless to create but exacts a toll on the organization. What a sender perceives as a loophole becomes a hole the recipient needs to dig out of. Leaders will do best to model thoughtful AI use that has purpose and intention. Set clear guardrails for your teams around norms and acceptable use. Frame AI as a collaborative tool, not a shortcut. Embody a pilot mindset, with high agency and optimism, using AI to accelerate specific outcomes with specific usage. And uphold the same standards of excellence for work done by bionic human-AI duos as by humans alone.

AI-Generated “Workslop” Is Destroying Productivity

Despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is because AI tools are being used to produce “workslop”—content that appears polished but lacks real substance, offloading cognitive labor onto coworkers. Research from BetterUp Labs and Stanford found that 41% of workers have encountered such AI-generated output, costing nearly two hours of rework per instance and creating downstream productivity, trust, and collaboration issues. Leaders need to consider how they may be encouraging indiscriminate organizational mandates and offering too little guidance on quality standards. To counteract workslop, leaders should model purposeful AI use, establish clear norms, and encourage a “pilot mindset” that combines high agency with optimism—promoting AI as a collaborative tool, not a shortcut.

hbr.org

The web is a magical place. It started out as a way to link documents like research papers across the internet, but it has evolved into the face of the internet itself, the place where we get information and get things done. Writer Will Leitch on Medium:

It is difficult to describe, to a younger person or, really, anyone who wasn’t there, what the emergence of the Internet — this thing that had not been there your entire life, that you had no idea existed, that was suddenly just everywhere — meant to someone who wanted to write. When I graduated college in 1997, the expectation for me, and most wanna-be writers, was that we had two options: Start on the bottom rung of a print publication and toil away for years, hoping that enough people with jobs above you would retire or die in time for you to get a real byline by the time you were 40, or write a brilliant novel or memoir that turned you into Dave Eggers or Elizabeth Wurtzel. That was pretty much it! Then, suddenly, from the sky, there was this place where you could:

  • Write whatever you wanted.
  • Write as long as you wanted.
  • Have your work available to read by anyone, anywhere on the entire freaking planet.

This was — and still is — magical.

The core argument of Leitch’s piece is that while the business and traffic models that fueled web publishing are collapsing—due to the changing priorities of platforms like Google and the dominance of video on social media (i.e., TikTok and Reels)—the essential, original magic of publishing on the web isn’t dead.

But that does not mean that Web publishing — that writing on the Internet, the pure pleasure of putting something out in the world and having it be yours, of discovering other people who are doing the same thing — itself is somehow dead, or any less magical than it was in the first place. Because it is magical. It still is. It always was.

It’s the (Theoretical) End of Web Publishing (and I Feel Fine)

Let’s remember why we started publishing on the Web in the first place.

williamfleitch.medium.com

Noah Davis, writing in Web Designer Depot, says aloud what I’d thought but never written down: before AI came along, templates had already started to kill creativity in web design.

If you’re wondering why the web feels dead, lifeless, or like you’re stuck in a scrolling Groundhog Day of “hero image, tagline, three icons, CTA,” it’s not because AI hallucinated its way into the design department.

It’s because we templatified creativity into submission!

We used to design websites like we were crafting digital homes—custom woodwork, strange hallways, surprise color choices, even weird sound effects if you dared. Each one had quirks. A personality. A soul.

When I was coming up as a designer in the late 1990s and early 2000s, one of my favorite projects was designing Pixar.com. The animation studio’s soul—and by extension the soul I’d imbue into the website—was story. The way this manifested was a linear approach to the site, similar to a slideshow, to tell the story of each of their films.

And as the web design industry grew, and everyone needed and wanted a website, from Fortune 500s to the local barber shop, access to well-designed websites was made possible via templates.

Let’s be real: clients aren’t asking for design anymore. They’re asking for “a site like this.” You know the one. It looks clean. It has animations. It scrolls smoothly. It’s “modern.” Which, in 2025, is just a euphemism for “I want what everyone else has so I don’t have to think.”

Templates didn’t just streamline web development. They rewired what people expect a website to be.

Why hire a designer when you can drop your brand colors into a no-code template, plug in some Lottie files, and call it a day? The end result isn’t bad. It’s worse than bad. It’s forgettable.

Davis ends his rant with a call to action: “If you want design to live, stop feeding the template machine. Build weird stuff. Ugly stuff. Confusing stuff. Human stuff.”

AI Didn’t Kill Web Design —Templates Did It First

The web isn’t dying because of AI—it’s drowning in a sea of templates. Platforms like Squarespace, Wix, and Shopify have made building a site easier than ever—but at the cost of creativity, originality, and soul. If every website looks the same, does design even matter anymore?

webdesignerdepot.com

Designer Ben Holliday writes a wonderful deep dive into how caring is good design. In it, he references the conversation that Jony Ive had with Patrick Collison a few months ago. (It’s worth watching in its entirety if you haven’t already.)

Watching the interview back, I was struck by how he spoke about applying care to design, describing how:

“…everyone has the ability to sense the care in designed things because we can all recognise carelessness.”

Talking about the history of industrial design at Apple, Ive speaks about the care that went into the design of every product. That included the care that went into packaging – specifically things that might seem as inconsequential as how a cable was wrapped and then unpackaged. In reality, the type of small interactions that millions of people experienced when unboxing the latest iPhone. These are details that people wouldn’t see as such, but Ive and team believed that they would sense care when they had been carefully considered and designed.

This approach has always been a part of Jony Ive’s design philosophy, or the principles applied by his creative teams at Apple. I looked back and found an earlier 2015 interview and notes I’d made where he says how he believes that the majority of our manufactured environment is characterised by carelessness. But then, how, at Apple, they wanted people to sense care in their products.

The attention to detail and the focus and attention we can all bring to design is care. It’s important.

Holliday’s career has been focused on government, public sector, and non-profit environments. In other words, he thinks a lot about how design can impact people’s lives at massive scale.

In the past few months, I’ve been drawn to the word ‘careless’ when thinking about the challenges faced by our public services and society. This is especially the case with the framing around the impact of technology in our lives, and increasingly the big bets being made around AI to drive efficiency and productivity.

The word careless can be defined as the failure to give sufficient attention to avoiding harm or errors. Put simply, carelessness can be described as ‘negligence’.

Later, he cites Facebook/Meta’s carelessness when they “used data to target young people when at their most vulnerable,” specifically around body confidence.

Design is care (and sensing carelessness)

Why design is care, and how the experiences we shape and deliver will be defined by how people sense that care in the future.

benholliday.com

Designer Davide Mascioli created a book and online archive of over 450 space exploration-related logos from around the world.

It’s a wonderful archive—pretty exhaustive—and includes a smattering of logos from science fiction (though less exhaustive there, since there are so many sci-fi properties).

Here are some of my favorites (graphically)…

NASA 1975 “Worm” logo page with bold typographic mark on light blue background and logotype samples.

South African National Space Agency 2010 page with swirling logo on mint background and mission control display.

Australian Space Agency 2018 page with abstract black circles on pink background and a rocket launch photo.

Firefly Aerospace 2017 page with stylized firefly logo on yellow background and rocket assembly image.

Zero 2 Infinity 2009 page with circular “011∞” logo on yellow background and high-altitude balloon pod photo.

Space Exploration Logo Archive

S.E.L.A. is an archive of logos related to the world of Space Exploration. The collection spans more than 80 years of works and includes the most iconic and noteworthy logos distributed in seven chapters, starting with the best known up to the raw & rare ones.

spaceexplorationlogoarchive.webflow.io

I will admit that I’d not heard of this website until I came across this article. Playing around with Perfectly Imperfect myself, I find it to be a strange, web-brutalist manifestation of MySpace for Gen Z.

Sudi Jama, writing for It’s Nice That:

Talking about the design for Perfectly Imperfect’s social site pi.fyi, on the other hand, Tyler says: “The design calls back to an era where algorithms didn’t dominate your day-to-day experience on the internet.” Tyler rejects the homogenisation of web design and decided to swerve Perfectly Imperfect into a lane of its own, inspired by the early internet aesthetics of “solid but saturated colours, lack of texture, MS Paint-style airbrushing, and a singular broadcast-style aesthetic”, Brent David Freaney tells us. Brent’s studio Special Offer collaborated with Tyler to bring the best parts of early internet’s visuality, whilst still creating something that belongs in 2025. Some fun facts: Pi.fyi’s colour system was modelled from 1990s McDonald’s brand and style guidelines, and the spray paint logo was inspired by an old Teenage Fanclub band t-shirt Tyler got on eBay.

The platform thrives in the chaos, all born from its visible human touch. “A lot of the core pages that users spend time on (the home page, profiles, etc) are designed to look more like a magazine than a social site.” The visuals are deliberately flat, featuring few animations, in order to let the design cut through. The mixture of a home page presented as acting front page, with editorial content, user posts, profiles adorned in large image paired with bold bordered text, and written content pouring from the right side of the screen. Tyler says: “It’s this approach that’s led us to calling Perfectly Imperfect a ‘social magazine’.” Tyler is inspired by the likes of Index Mag, MySpace, and i-D, among others – all boundary-pushing platforms which hold a cultural authority.

Perfectly Imperfect is the ‘social magazine’ (and nerd’s paradise) remodelling the online sphere

Split between a platform to profile figures from Charli XCX to Francis Ford Coppola, and a social network that refuses to serve the algorithm overlords, this magazine is breaking necks.

itsnicethat.com