Did you know that Apple made Office before Microsoft made Office? It was called AppleWorks and launched in 1984 for the Apple II. They’d make it for the Mac in 1991 and call it ClarisWorks, because Apple spun off a software subsidiary for who knows what reason.

Howard Oakley recently wrote a brief history of AppleWorks and shared some nice visuals. Though I wish he’d included an image of the original text-based AppleWorks for the Apple II as well.

AppleWorks screenshot: Certificate of Achievement for Marcia Marks, ornate black border, yellow seal, color palette panel

A brief history of AppleWorks

It took 7 years for it to become available for the Mac, changed names and hands twice, but somehow survived until 2007.

eclecticlight.co

The good folks at Linear have proven that a design-led company can carve out a space against an entrenched company like Atlassian. They do this by having very strong opinions about how things should work and then pixelfucking the hell out of their products. I truly admire their dedication to craft.

When Apple introduced Liquid Glass, Linear decided to write their own version of it for more control. Robb Böhnke, writing on Linear’s blog:

Liquid Glass is a beautiful successor to Aqua. Its primary purpose is to feel fluid and friendly for a broad consumer audience. Apple has to design for every kind of app—education, entertainment, banking, fitness—and build systems that adapt to all of them.

We have a different set of constraints. Our users come to Linear to do a specific kind of work in a focused environment. That gives us freedom to push the design in specific ways Apple can’t.

In that sense, we saw an opportunity to take Liquid Glass’s aesthetic qualities—its translucency, depth, and physicality—and apply them with a ProKit philosophy: purpose-built, disciplined, and designed for sustained focus.

ProKit—as Böhnke explained—was Apple’s “pro” theme, the slightly flatter, less flashy big brother to the lickable Aqua. It was “built for professional tools like Final Cut or Logic with complex, information-dense workflows where clarity and control are more important than visual flourish.”

Dark schematic: two circular 3×3 shaded-grid nodes linked by three rounded horizontal tracks with downward arrows.

A Linear spin on Liquid Glass

Earlier this year, we were ready to redesign our mobile app. The original version had served us well, but it was built with a narrow use case foremost in mind: individual contributors engaging with issues.

linear.app

To close us out on Halloween, here’s one more archive full of spooky UX called the Dark Patterns Hall of Shame. It’s managed by a team of designers and researchers who have dedicated themselves to identifying and exposing dark patterns and unethical design examples on the internet. More than anything, I just love the names some of these dark patterns have, like Confirmshaming, Privacy Zuckering, and Roach Motel.

Small gold trophy above bold dark text "Hall of shame. design" on a pale beige background.

Collection of Dark Patterns and Unethical Design

Discover a variety of dark pattern examples, sorted by category, to better understand deceptive design practices.

hallofshame.design

’Tis the season for online archives. From GQ comes this archive of the work of Virgil Abloh, the multi-hyphenate creative powerhouse who started as an intern at Fendi and rose to found his own streetwear label Off-White, before becoming artistic director of Louis Vuitton’s menswear collection. He had collabs with Nike, IKEA, and artist Jenny Holzer.

I do think my favorite from this archive is his collection of LV bags. I’m used to seeing them in monochromatic colors, not these bright ones.

Inside the Top Secret Virgil Abloh Archive

In the years since the premature death of the former Off-White and Louis Vuitton creative director, a team of archivists has tirelessly catalogued one of the most remarkable private fashion collections ever assembled. We’re revealing it here for the first time.

gq.com

In a world where case studies dominate portfolios, explaining the problem and sharing the outcomes, a visuals-only gallery feels old-fashioned. But Pentagram has earned the right to compile their own online monograph. It is one of the very few agencies in the world that could pull together an archive like this, featuring over 2,000 projects spanning their 53-year existence.

Try searches like: album covers, New York City, SNL, and Paula Scher.

*The folks at Pentagram aren’t complete heretics. They have a more traditional case studies section here.

Dark gallery grid of small thumbnails with a centered translucent search box saying "Show me album covers".

Archive — Pentagram

A place where we’ve condensed over 50 years of our design prowess into an immersive exploration. Delve into 2,000+ projects, spanning from 1972 to the present and beyond, all empowered by Machine Learning.

pentagram.com

Celine Nguyen wrote a piece that connects directly to what Ethan Mollick calls “working with wizards” and what SAP’s Ellie Kemery describes as the “calibration of trust” problem. It’s about how the interfaces we design shape the relationships we have with technology.

The through-line is metaphor. For LLMs, that metaphor is conversation. And it’s working—maybe too well:

Our intense longing to be understood can make even a rudimentary program seem human. This desire predates today’s technologies—and it’s also what makes conversational AI so promising and problematic.

When the metaphor is this good, we forget it’s a metaphor at all:

When we interact with an LLM, we instinctively apply the same expectations that we have for humans: If an LLM offers us incorrect information, or makes something up because the correct information is unavailable, it is lying to us. …The problem, of course, is that it’s a little incoherent to accuse an LLM of lying. It’s not a person.

We’re so trapped inside the conversational metaphor that we accuse statistical models of having intent, of choosing to deceive. The interface has completely obscured the underlying technology.

Nguyen points to research showing frequent chatbot users “showed consistently worse outcomes” around loneliness and emotional dependence:

Participants who are more likely to feel hurt when accommodating others…showed more problematic AI use, suggesting a potential pathway where individuals turn to AI interactions to avoid the emotional labor required in human relationships.

However, replacing human interaction with AI may only exacerbate their anxiety and vulnerability when facing people.

This isn’t just about individual users making bad choices. It’s about an interface design that encourages those choices by making AI feel like a relationship rather than a tool.

The kicker is that we’ve been here before. In 1964, Joseph Weizenbaum created ELIZA, a simple chatbot that parodied a therapist:

I was startled to see how quickly and how very deeply people conversing with [ELIZA] became emotionally involved with the computer and how unequivocally they anthropomorphized it…What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Sixty years later, we’ve built vastly more sophisticated systems. But the fundamental problem remains unchanged.

The reality is we’re designing interfaces that make powerful tools feel like people. Susan Kare’s icons for the Macintosh helped millions understand computers. But they didn’t trick people into thinking their computers cared about them.

That’s the difference. And it matters.

Old instant-message window showing "MeowwwitsMadix3: heyyy" and "are you mad at me?" with typed reply "no i think im just kinda embarassed" and buttons Warn, Block, Expressions, Games, Send.

how to speak to a computer

against chat interfaces ✦ a brief history of artificial intelligence ✦ and the (worthwhile) problem of other minds

personalcanon.com

Speaking of trusting AI, in a recent episode of Design Observer’s Design As, Lee Moreau speaks with four industry leaders about trust and doubt in the age of AI.

We’ve linked to a story about Waymo before, so here’s Ryan Powell, head of UX at Waymo:

Safety is at the heart of everything that we do. We’ve been at this for a long time, over a decade, and we’ve taken a very cautious approach to how we scale up our technology. As designers, what we have really focused on is that idea that more people will use us as a serious transportation option if they trust us. We peel that back a little bit. Okay, well, how do we design for trust? What does it actually mean?

Ellie Kemery, principal research lead, advancing responsible AI at SAP, on maintaining critical thinking and transparency in AI-driven products:

We need to think about ethics as a part of this because the unintended consequences, especially at the scale that we operate, are just too big, right?

So we focus a lot of our energy on value, delivering the right value, but we also focus a lot of our energy on making sure that people are aware of how the technology came to that output,…making sure that people are in control of what’s happening at all times, because at the end of the day, they need to be the ones making the call.

Everybody’s aware that without trust, there is no adoption. But there is something that people aren’t talking about as much, which is that people should also not blindly trust a system, right? And there’s a huge risk there because, humans we tend to, you know, we’ll try something a couple of times and if it works it works. And then we lose that critical thinking. We stop checking those things and we simply aren’t in a space where we can do that yet. And so making sure that we’re focusing on the calibration of trust, like what is the right amount of trust that people should have to be able to benefit from the technology while at the same time making sure that they’re aware of the limitations.

Bold white letters in a 3x3 grid reading D E S / I G N / A S on a black background, with a right hand giving a thumbs-up over the right column.

Design as Trust | Design as Doubt

Explore how designers build trust, confront doubt, and center equity and empathy in the age of AI with leaders from Adobe, Waymo, RUSH, and SAP

designobserver.com

Ethan Mollick, a professor of entrepreneurship at the Wharton School, says that AI has gotten so good that our relationship with it is changing. “We’re moving from partners to audience, from collaboration to conjuring,” he says.

He fed NotebookLM his book and 140 Substack posts and asked for a video overview. AI famously hallucinates. But Mollick found no factual errors in the six-minute video.

We’re shifting from being collaborators who shape the process to being supplicants who receive the output. It is a transition from working with a co-intelligence to working with a wizard. Magic gets done, but we don’t always know what to do with the results. This pattern — impressive output, opaque process — becomes even more pronounced with research tasks.

Mollick believes that the most wizard-like model today is GPT-5 Pro. He uploaded an academic paper that took him a year to write, which was peer-reviewed, and was then published in a major journal…

Nine minutes and forty seconds later, I had a very detailed critique. This wasn’t just editorial criticism, GPT-5 Pro apparently ran its own experiments using code to verify my results, including doing Monte Carlo analysis and re-interpreting the fixed effects in my statistical models. It had many suggestions as a result (though it fortunately concluded that “the headline claim [of my paper] survives scrutiny”), but one stood out. It found a small error, previously unnoticed. The error involved two different sets of numbers in two tables that were linked in ways I did not explicitly spell out in my paper. The AI found the minor error, no one ever had before.

Later in his post, Mollick says that there’s a problem with this wizardry—it’s too opaque. So what can we do?

First, learn when to summon the wizard versus when to work with AI as a co-intelligence or to not use AI at all. AI is far from perfect, and in areas where it still falls short, humans often succeed. But for the increasing number of tasks where AI is useful, co-intelligence, and the back-and-forth it requires, is often superior to a machine alone. Yet, there are, increasingly, times when summoning a wizard is best, and just trusting what it conjures.

Second, we need to become connoisseurs of output rather than process. We need to curate and select among the outputs the AI provides, but more than that, we need to work with AI enough to develop instincts for when it succeeds and when it fails.

And lastly, trust it. Trust the technology, he suggests. “The question isn’t ‘Is this completely correct?’ but ‘Is this useful enough for this purpose?’”

I think we’re in that transition period. AI is indeed dastardly great at some things and constantly getting better at the tasks it’s not. But we all know where this is headed.

Witch hat hovering over a desktop monitor with circuit-like lines flowing into the screen, small coffee mug on the desk.

On Working with Wizards

Verifying magic on the jagged frontier

oneusefulthing.org

In this era of AI, we’ve been taught that LLMs are probabilistic, not deterministic, and that they will sometimes hallucinate. There’s a saying in AI circles that humans are right about 80% of the time, and so are AIs. Except in domains where anything less than 100% accuracy is unacceptable. Accountants need to be 100% accurate, lest they lose track of money for their clients or businesses.

And that’s the problem Intuit had to solve to roll out their AI agent. Sean Michael Kerner, writing in VentureBeat:

Even when its accounting agent improved transaction categorization accuracy by 20 percentage points on average, they still received complaints about errors.

“The use cases that we’re trying to solve for customers include tax and finance; if you make a mistake in this world, you lose trust with customers in buckets and we only get it back in spoonfuls,” Joe Preston, Intuit’s VP of product and design, told VentureBeat.

So they built an agent that queries data from a multitude of sources and returns those exact results. But do users trust those results? It comes down to a design decision to be transparent:

Intuit has made explainability a core user experience across its AI agents. This goes beyond simply providing correct answers: It means showing users the reasoning behind automated decisions.

When Intuit’s accounting agent categorizes a transaction, it doesn’t just display the result; it shows the reasoning. This isn’t marketing copy about explainable AI, it’s actual UI displaying data points and logic.

“It’s about closing that trust loop and making sure customers understand the why,” Alastair Simpson, Intuit’s VP of design, told VentureBeat.
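To make the pattern concrete, here’s a rough sketch of what a categorization result that carries its own reasoning could look like, so the UI can render the “why” next to the answer. The field names, values, and rendering below are my own assumptions for illustration, not Intuit’s actual schema or interface.

```typescript
// Illustrative sketch only: a categorization result that carries its reasoning
// and evidence, so the UI can show the "why" alongside the answer.
// Field names are assumptions, not Intuit's actual schema.

interface CategorizationResult {
  transactionId: string;
  category: string;
  confidence: number; // 0..1, surfaced so users can calibrate their trust
  reasoning: string; // plain-language explanation of the decision
  evidence: { source: string; value: string }[]; // data points the agent used
}

function renderExplanation(result: CategorizationResult): string {
  const facts = result.evidence
    .map((e) => `- ${e.source}: ${e.value}`)
    .join("\n");
  return [
    `Categorized as "${result.category}" (${Math.round(result.confidence * 100)}% confident)`,
    `Why: ${result.reasoning}`,
    "Based on:",
    facts,
  ].join("\n");
}

// Example: the UI prints the reasoning and evidence next to the result,
// never just the bare label.
console.log(
  renderExplanation({
    transactionId: "txn_123",
    category: "Software subscriptions",
    confidence: 0.93,
    reasoning: "Recurring monthly charge from a known SaaS vendor on the business card.",
    evidence: [
      { source: "Vendor match", value: "Figma Inc." },
      { source: "Cadence", value: "Monthly since March" },
    ],
  })
);
```

The design point is that the explanation travels with the answer, so transparency isn’t an afterthought bolted onto the UI.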

Rusty metal bucket tipped over pouring a glowing stream of blue binary digits (ones and zeros) onto a dark surface.

Intuit learned to build AI agents for finance the hard way: Trust lost in buckets, earned back in spoonfuls

The QuickBooks maker's approach to embedding AI agents reveals a critical lesson for enterprise AI adoption: in high-stakes domains like finance and tax, one mistake can erase months of user confidence.

venturebeat.com

We’ve been hearing a lot about AI agents, and now enough time has passed that we’re starting to see some learnings in industry. Writing in Harvard Business Review, Linda Mantia, Surojit Chatterjee, and Vivian S. Lee showcase three case studies of enterprises that have deployed AI agents.

They write about Hitachi Digital and how they deployed an AI agent as the first responder to the 90,000 questions employees send to their HR team annually.

Every year, employees put over 90,000 questions about everything from travel policies and remote work to training and IT support to the company’s HR team of 120 human responders. Answering these queries can be difficult, in part because of Hitachi’s complex infrastructure of over 20 systems of record, including multiple disparate HR systems, various payroll providers, and different IT environments.

Their system, called “Skye,” is actually a system of agents, coordinating with one another and firing off queries depending on the intent and task.

For example, the intent classifier agent sends a simple policy question like “What are allowed expenses for traveling overseas?” or “Does this holiday count in paid time off?” to a file search and respond agent, which provides immediate answers by examining the right knowledge base given the employee’s position and organization. A document generation agent can create employee verification letters (which verify individuals’ employment status) in seconds, with an option for human approval. When an employee files a request for vacation, the leave management agent uses the appropriate HR management system based on its understanding of the user’s identity, completes the necessary forms, waits for the approval of the employee’s manager, and reports back to the employee.
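To make the routing idea more tangible, here’s a minimal, hypothetical sketch of an intent classifier dispatching to specialist agents. The agent names, the keyword-based classifier, and the interfaces are illustrative assumptions on my part; the article doesn’t describe Hitachi’s actual implementation.

```typescript
// Hypothetical sketch of an intent-routing layer like the one described above.
// Agent names and the keyword classifier are illustrative, not Hitachi's system.

type Intent = "policy_question" | "verification_letter" | "leave_request";

interface Agent {
  handle(message: string, employeeId: string): Promise<string>;
}

class FileSearchAgent implements Agent {
  async handle(message: string, employeeId: string): Promise<string> {
    // Search the knowledge base scoped to the employee's org and role.
    return `Answer to "${message}" from the policy knowledge base for ${employeeId}`;
  }
}

class DocumentGenerationAgent implements Agent {
  async handle(_message: string, employeeId: string): Promise<string> {
    // Draft an employment verification letter and queue it for human approval.
    return `Verification letter drafted for ${employeeId}, pending approval`;
  }
}

class LeaveManagementAgent implements Agent {
  async handle(_message: string, employeeId: string): Promise<string> {
    // File the request in the right HR system and wait on the manager's approval.
    return `Leave request filed for ${employeeId}, awaiting manager approval`;
  }
}

// The intent classifier decides which specialist agent takes the task.
function classify(message: string): Intent {
  if (/vacation|paid time off|leave/i.test(message)) return "leave_request";
  if (/verification letter|employment status/i.test(message)) return "verification_letter";
  return "policy_question";
}

const registry: Record<Intent, Agent> = {
  policy_question: new FileSearchAgent(),
  verification_letter: new DocumentGenerationAgent(),
  leave_request: new LeaveManagementAgent(),
};

export async function respond(message: string, employeeId: string): Promise<string> {
  return registry[classify(message)].handle(message, employeeId);
}
```

In the real system each of these would be backed by an LLM and Hitachi’s twenty-plus systems of record; the point is simply that a classifier sits in front and hands the task to the right specialist.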

The authors see three essential imperatives when designing and deploying AI agents into companies.

  1. Design around outcomes and appoint accountable mission owners. Companies need to stop organizing around internal functions and start building teams around actual customer outcomes—which means putting someone in charge of the whole journey, not just pieces of it.
  2. Unlock data silos and clarify the business logic. Your data doesn’t need to be perfect or centralized, but you do need to map out how work actually gets done so AI agents know where to find things and what decisions to make.
  3. Develop the leaders and guardrails that intelligent systems require. You can’t just drop AI agents into your org and hope for the best—leaders need to understand how these systems work, build trust with their teams, and put real governance in place to keep things on track.

Top-down view of two people at a white desk with monitor, keyboard and mouse, overlaid by a multicolored translucent grid.

Designing a Successful Agentic AI System

Agentic AI systems can execute workflows, make decisions, and coordinate across departments. To realize its promise, companies must design workflows around outcomes and appoint mission owners who define the mission, steer both humans and AI agents, and own the outcome; unlock the data silos it needs to access and clarify the business logic underpinning it; and develop the leaders and guardrails that these intelligent systems require.

hbr.org

It’s interesting to me that Figma had to have a separate conference and set of announcements focused on design systems. In some sense it’s an indicator of how big and mature this part of design has become.

A few highlights from my point-of-view…

Slots seems to solve one of those small UX paper cuts—those niggly inconveniences that we just lived with. But this is a big deal. You’ll be able to add layers within component instances without breaking the connection to your design system. No more pre-building hidden list items or forcing designers to detach components. Pretty advanced stuff.

On the code front, they’re making Code Connect actually approachable with a new UI that connects directly to GitHub and uses AI to map components. The Figma MCP server is out of beta and now supports design system guidelines—meaning your agentic coding tools can actually respect your design standards. Can’t wait to try these.
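If you haven’t used Code Connect before, a mapping is essentially a small TypeScript file that ties a code component to its Figma counterpart so the right snippet shows up in Dev Mode. Here’s a rough sketch of what one can look like for React; the component, property names, and URL are placeholders, and the helper API shown is my best approximation rather than a definitive reference.

```typescript
// Hypothetical Code Connect mapping file (e.g. Button.figma.tsx).
// The Figma URL, component, and property names below are placeholders.
import React from "react";
import figma from "@figma/code-connect";
import { Button } from "./Button";

figma.connect(Button, "https://www.figma.com/design/FILE_KEY/Library?node-id=1-23", {
  props: {
    // Map Figma component properties to code props.
    label: figma.string("Label"),
    disabled: figma.boolean("Disabled"),
    variant: figma.enum("Variant", {
      Primary: "primary",
      Secondary: "secondary",
    }),
  },
  // The snippet developers see in Dev Mode for this component.
  example: ({ label, disabled, variant }) => (
    <Button variant={variant} disabled={disabled}>
      {label}
    </Button>
  ),
});
```

Presumably the new UI and AI-assisted mapping generate files like this for you instead of asking someone to hand-write them.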

For teams like mine that are using Make, you’ll be able to pull in design systems through two routes: Make kits (generate React and CSS from Figma libraries) or npm package imports (bring in your existing code components). This is the part where AI-assisted design doesn’t have to mean throwing pixelcraft out the window.

Design systems have always been about maintaining quality at scale. These updates are very welcome.

Bright cobalt background with "schema" in a maroon bar and light-blue "by Figma" text, stepped columns of orange semicircles on pale-cyan blocks along right and bottom.

Schema 2025: Design Systems For A New Era

As AI accelerates product development, design systems keep the bar for craft and quality high. Here’s everything we announced at Schema to help teams design for the AI era.

figma.com

Worn white robots with glowing pink eyes, one central robot displaying a pink-tinted icon for ChatGPT Atlas, in a dark alley with pink neon circle

OpenAI’s ChatGPT Atlas Browser Needs Work

Like many people, I tried OpenAI’s ChatGPT Atlas browser last week. I immediately made it my daily driver, seeing if I could make the best of it. Tl;dr: it’s still early days and I don’t believe it’s quite ready for primetime. But let’s back up a bit.

The Era of the AI Browser Is Here

Back in July, I reviewed both Comet from Perplexity and Dia from The Browser Company. It was a glimpse of the future that I wanted. I concluded:

The AI-powered ideas in both Dia and Comet are a step change. But the basics also have to be there, and in my opinion, should be better than what Chrome offers. The interface innovations that made Arc special shouldn’t be sacrificed for AI features. Arc is/was the perfect foundation. Integrate an AI assistant that can be personalized to care about the same things you do so its summaries are relevant. The assistant can be agentic and perform tasks for you in the background while you focus on more important things. In other words, put Arc, Dia, and Comet in a blender and that could be the perfect browser of the future.

There were also open rumors that OpenAI was working on a browser of their own, so the launch of Atlas was inevitable.

I will say that A-ha’s 1985 hit “Take On Me” and its accompanying video were incredibly influential on me as a kid. Hearing about the struggles the band endured and the constant retooling of the song is very inspiring. In an episode of Song Exploder, Hrishikesh Hirway interviews Paul Waaktaar-Savoy, who originally wrote the bones of the song as a teenager, about the creative journey the band took to realize the version we know and love.

Hirway:

Okay, so you have spent the whole budget and then this version of the song comes out in 1984, and it flops. How were you able to convince anybody to give you another chance? Or maybe even more so, I’m curious, for your own sake: How were you able to feel like that wasn’t the end of the road for the song? Like, it had its chance, it didn’t happen, and that was that.

Waaktaar-Savoy:

Yeah, that’s the good thing about being young. You don’t feel, (chuckles) you know, you just sort of, brush it off your shoulders, you know. We were a hundred percent confident. We were like, there’s not a doubt in our minds.

…it took some time, you know, it was very touch and go. ‘Cause the, you know, they’ve spent this much money on the half-finished album. Are they gonna pour more money into it and risk losing more money? So, from Norway? Hey, no one comes from Norway and makes it. And so it was a risk for people.

Having gone to England from their native Norway, A-ha released two versions of the song in the UK before it became a hit in the US. With the help of the music video, of course.

A new record exec at the US arm of Warner Bros. took a liking to the band and the album, as Waaktaar-Savoy recalls:

And there was a new guy on the company, Jeff Ayeroff. He fell in love with the, the album and the song. And he had been keeping this one particular idea sort of in the back of his head. There was this art film called Commuter, with animation. So, he was the one who put together that with Steve Barron, who was the director.

And they made the video. And the song slowly climbed the charts to become a number one hit.

Episode 301: A-ha

Explore the making of “Take On Me” by A-ha on Song Exploder. Listen as band member Paul Waaktaar-Savoy shares the origins, evolution, and creative process behind their iconic hit. This episode delves into the band’s journey, the song’s chart-topping success, and the inspiration behind the legendary music video. Find full episode audio, streaming links, a transcript, and behind-the-scenes stories from A-ha, the most successful Norwegian pop group of all time. Discover music history and artist insights only on Song Exploder’s in-depth podcast series.

songexploder.net

Circling back to Monday’s item on how caring is good design, Felix Haas has a subtly different take: build kindness into your products.

Kindness in design isn’t about adding smiley faces or writing cheerful copy. It’s deeper than tone. It’s about intent embedded in every interaction.

Kindness shows up in the patience of an empty state that doesn’t rush you. In the warmth of micro-interactions that acknowledge your actions without demanding attention. In error messages that guide rather than scold. In defaults that assume good intent rather than user incompetence.

These moments seem subtle, even trivial, in isolation. But they accumulate. They shape how we feel about a product over weeks and months. They turn interfaces into relationships. They build trust.

Kind Products Win

Why do so many products feel soulless?

designplusai.com

As a follow-up to our previous item on Claude Code, here’s an article by Nick Babich who gives us three ways product designers can use Claude to code.

Remember that Anthropic’s Claude has been the leading LLM for coding for a while now.

Claude For Code: How to use Claude to Streamline Product Design Process

Anthropic Claude is a primary competitor of OpenAI’s ChatGPT. Just like ChatGPT, this is a versatile tool that can be used in many…

uxplanet.org

With Cursor and Lovable as the darlings of AI coding tools, don’t sleep on Claude Code. Personally, I’ve been splitting my time between Claude Code and Cursor. While Claude Code’s primary persona is coders and tinkerers, it can be used for so much more.

Lenny Rachitsky calls it “the most underrated AI tool for non-technical people.”

The key is to forget that it’s called Claude Code and instead think of it as Claude Local or Claude Agent. It’s essentially a super-intelligent AI running locally, able to do stuff directly on your computer—from organizing your files and folders to enhancing image quality, brainstorming domain names, summarizing customer calls, creating Linear tickets, and, as you’ll see below, so much more.

Since it’s running locally, it can handle huge files, run much longer than the cloud-based Claude/ChatGPT/Gemini chatbots, and it’s fast and versatile. Claude Code is basically Claude with even more powers.

Rachitsky shares 50 of his “favorite and most creative ways non-technical people are using Claude Code in their work and life.”

Everyone should be using Claude Code more

How to get started, and 50 ways non-technical people are using Claude Code in their work and life

lennysnewsletter.com

Remote work really exploded when the Covid-19 pandemic hit. Everyone had to adjust to working from home, relying on Zoom and Slack and other collaborative tools much more. But beyond tooling, there’s also process. Matt Mullenweg, CEO of Automattic, has famously been a proponent of distributed work for a while.

Paolo Belcastro peels back the curtain to share how the 1,500 or so global employees of Automattic stay connected via two core principles:

There are two ideas that define our communication culture:

Radical Transparency: we default to openness, with every conversation accessible to everyone in the company.

Asynchronous by Design: we don’t expect everyone to be “on” at the same time.

Everything is written down:

Our internal platform, P2, started life as a WordPress theme (it was called Prologue, later updated to version 2 and eventually shortened to P2) that lets people post directly on the front end of a site—fast, simple, and visible to everyone. Over time it evolved into a network of thousands of P2s for teams, projects, and watercooler chats (couch surfing, classified ads, house renovations, babies, pets, music, or games, we kind of have it all).

Every post, every comment, every decision ever made in the history of Automattic is preserved there.

As you can imagine, it soon becomes a volume problem. There’s too much stuff.

No one can read everything.

That’s why onboarding is designed to help people adapt:

  • Each newcomer is paired with a mentor from a different team, to give them a cross-company perspective.
  • They receive a curated list of “milestone posts” that map the history of Automattic, along with role-specific threads relevant to their work.
  • The Field Guide offers principles, templates, and advice about how to handle communication.

Somehow, they make it work.

Using chaos to communicate order

How we communicate at Automattic

ttl.blog

Building on Matthew Ström-Awn’s argument that true quality emerges from decentralized, ground-level ownership, Sean Goedecke writes an essay exploring how software companies navigate the tension between formalized control and the informal, often invisible work that actually drives product excellence.

But first, what does legibility even mean?

What does legibility mean to a tech company, in practice? It means:

  • The head of a department knows, to the engineer, all the projects the department is currently working on
  • That head also knows (or can request) a comprehensive list of all the projects the department has shipped in the last quarter
  • That head has the ability to plan work at least one quarter ahead (ideally longer)
  • That head can, in an emergency, direct the entire resources of the department at immediate work

Note that “shipping high quality software” or “making customers happy” or even “making money” is not on this list. Those are all things tech companies want to do, but they’re not legibility.

Goedecke argues that while leaders prize formal processes and legibility to facilitate predictability and coordination, these systems often overlook the messier, less measurable activities that drive true product quality and user satisfaction.

All organizations - tech companies, social clubs, governments - have both a legible and an illegible side. The legible side is important, past a certain size. It lets the organization do things that would otherwise be impossible: long-term planning, coordination with other very large organizations, and so on. But the illegible side is just as important. It allows for high-efficiency work, offers a release valve for processes that don’t fit the current circumstances, and fills the natural human desire for gossip and soft consensus.

Seeing like a software company

The big idea of James C. Scott’s Seeing Like A State can be expressed in three points: Modern organizations exert control by maximising “legibility”: by…

seangoedecke.com

Matt Ström-Awn makes the argument that companies can achieve sustainable excellence by empowering everyone at each level to take ownership of quality, rather than relying solely on top-down mandates or standardized procedures.

But more and more I’ve come to believe that quality isn’t a slogan, a program, or a scorecard. It’s a promise kept at the edge by the people doing the work. And, ideally, quality is fundamental to the product itself, where users can judge it without our permission. That’s the shift we need: away from heroics at the center, toward systems that make quality inevitable.

The stakes are high. Centralized quality — slogans, KPIs, executive decrees — can produce positive results, but it’s brittle. Decentralized quality — continuous feedback, distributed ownership, emergent standards — builds resilience. In this essay, I’d like to make the case that the future belongs to those who can decentralize their mindset and approach to quality.

Ström-Awn offers multiple case studies, contrasting centralized systems with decentralized ones, using Ford, Amazon, Apple, Toyota, Netflix, 3M, Morning Star, W.L. Gore, Valve, Barnes & Noble, and Microsoft under Satya Nadella as examples.

These stories share a common thread: organizations that trusted their frontline workers to identify and solve quality problems. But decentralized quality has its own vulnerabilities. Valve’s radical structure has been criticized for creating informal power hierarchies and making it difficult to coordinate large projects. Some ex-employees describe a “high school clique” atmosphere where popular workers accumulate influence while others struggle. Without traditional management oversight, initiatives can moulder, or veer in directions that don’t serve broader company goals.

Still, these examples show a different path for achieving quality, where excellence is defined in the course of building a product. Unlike centralized approaches relying on visionary (but fallible) leaders, decentralized systems are resilient to individual failures, adaptable to change, and empowering to builders. The andon cord, the rolling desk, and the local bookstore manager each represent a small bet on human judgment over institutional control. Those bets look like they’re paying off.

Decentralizing quality

Why moving judgment to the edges wins in the long run

matthewstrom.com

Slow and steady wins the race, so they say. And in Waymo’s case, that’s true. Unlike the stereotypical Silicon Valley mantra of “Move fast and break things,” Waymo has been very deliberate and intentional in developing its self-driving tech. In other words, they’re really trying to account for the unintended consequences.

Writing for The Atlantic, Saahil Desai:

Compared with its robotaxi competitors, “Waymo has moved the slowest and the most deliberately,” [Bryant Walker Smith] said—which may be a lesson for the world’s AI developers. The company was founded in 2009 as a secretive project inside of Google; a year later, it had logged 1,000 miles of autonomous rides in a tricked-out Prius. Close to a decade later, in 2018, Waymo officially launched its robotaxi service. Even now, when Waymos are inching their way into the mainstream, the company has been hypercautious. The company is limited to specific zones within the five cities it operates in (San Francisco, Phoenix, Los Angeles, Austin, and Atlanta). And only Waymo employees and “a growing number of guests” can ride them on the highway, Chris Bonelli, a Waymo spokesperson, told me. Although the company successfully completed rides on the highway years ago, higher speeds bring more risk for people and self-driving cars alike. What might look like a few grainy pixels to Waymo’s cameras one moment could be roadkill to swerve around the very next.

Move Fast and Break Nothing

Waymo’s robotaxis are probably safer than ChatGPT.

theatlantic.com

As UX designers, we try to anticipate the edge cases—what might a user do and how can we ensure they don’t hit any blockers. But beyond the confines of the products we build, we must also remember to anticipate the unintended consequences. How might this product or feature affect the user emotionally? Are we creating bad habits? Are we fomenting rage in pursuit of engagement?

Martin Tomitsch and Steve Baty write in DOC, suggesting some frameworks to anticipate the unpredictable:

Chaos theory describes the observation that even tiny perturbations like the flutter of a butterfly can lead to dramatic, non-linear effects elsewhere over time. Seemingly small changes or decisions that we make as designers can have significant and often unforeseen consequences.

As designers, we can’t directly control the chain of reactions that will follow an action. Reactions are difficult to predict, as they occur depending on factors beyond our direct control.

But by using tools like systems maps, the impact ripple canvas, and iceberg visuals, we can take potential reactions out of the unpredictable pile and shift them into the foreseeable pile.

The UX butterfly effect

Understanding unintended consequences in design and how to plan for them.

doc.cc

OK, so there’s workslop, but there’s also general AI slop. With OpenAI’s recent launch of the Sora app, there’s going to be more and more AI-generated image and video content making the rounds. I do believe that there’s a place for using AI to generate imagery. It can be done well (see Christian Haas’s “AI Jobs”). Or not.

Casey Newton, writing in his Platformer newsletter:

In Sora we find the entire debate over AI-generated media in miniature. On one hand, the content now widely derided as “slop” continually receives brickbats on social media, in blog posts and in YouTube comments. And on the other, some AI-generated material is generating millions of views — presumably not all from people who are hate-watching it.

As the content on the internet is increasingly AI-generated, platforms will need to balance how much of it they let in, lest the overall quality drop.

As Sarah Perez noted at TechCrunch, Pinterest has come under fire from its user base all year for a perceived decline in quality of the service as the percentage of slop there increases. Many people use the service to find real objects they can buy and use; the more that those objects are replaced with AI fantasies, the worse Pinterest becomes for them.

Like most platforms, Pinterest sees little value in banning slop altogether. After all, some people enjoy looking at fantastical AI creations. At the same time, its success depends in some part on creators believing that there is value in populating the site with authentic photos and videos. The more that Pinterest’s various surfaces are dominated by slop, the less motivated traditional creators may be to post there.

How platforms are handling the slop backlash

AI-generated media is generating millions of views. But some companies are beginning to rein it in

platformer.news

Sticking with the theme of workslop, or outsourcing our main work to AI, Douglas Rushkoff writes in Fast Company:

By using the AI to do the big stuff—by outsourcing our primary competencies to the machines instead of giving them the boring busywork—we deskill ourselves and deprive everyone of the opportunity for AI-enhanced outputs. Too many of us are using AI as the primary architect for a project, rather than the general contractor who supports the architect’s human vision.

People forget that it’s the process of doing something that helps us synthesize and form the connections necessary for innovation.

As the researcher behind MIT’s study “This is Your Brain on ChatGPT” explained at a recent ANDUS event, when people turn to an AI for a solution before working on a problem themselves, the number of connections formed in their brains decreases. But when they turn to the AI after working on the problem for a while, they end up with more neural connections than if they worked entirely alone.

That’s because the value of the AI is not its ability to create product for us, but to engage with us in our process. Working and iterating with an AI—doing what we could call generative thinking—is actually a break from Industrial Age thinking. We focus less on outputs than on cycles. Less on the volume of short-term results (assembly line), and more on the quality and complexity of thought and innovation.

The value of the AI is not its ability to create product for us, but to engage with us in our process

AI doesn’t have to replace our competencies or even our employees.

fastcompany.com

Speaking of workslop, here’s an article from NN/g on how to avoid falling into over-reliance on AI in our design field. They call it the “7 Deadly AI Sins for UX Professionals.”

  1. Outsourced Thinking
  2. Wasted Time
  3. Lost Details
  4. Isolated Ideation
  5. Naïve Trust
  6. Bland Taste
  7. Defensive Outlook

As Tanner Kohler writes:

It’s not about avoiding AI. It’s about maintaining your own growth and the quality of your work as you use AI. AI will constantly be changing. Never let yourself slip into repeatedly committing the sins that weaken you and your UX skills.

7 Deadly AI Sins for UX Professionals

Succumbing to AI temptations weakens your UX skills. Strive for the AI virtues to keep yourself strong as you use AI in your work.

nngroup.com