41 posts tagged with “technology industry”

Illustration of humanoid robots working at computer terminals in a futuristic control center, with floating digital screens and globes surrounding them in a virtual space.

Prompt. Generate. Deploy. The New Product Design Workflow

Product design is going to change profoundly within the next 24 months. If the AI 2027 report is any indication, the capabilities of the foundational models will grow exponentially, and with them—I believe—so will the abilities of design tools.

A graph comparing AI Foundational Model Capabilities (orange line) versus AI Design Tools Capabilities (blue line) from 2026 to 2028. The orange line shows exponential growth through stages including Superhuman Coder, Superhuman AI Researcher, Superhuman Remote Worker, Superintelligent AI Researcher, and Artificial Superintelligence. The blue line shows more gradual growth through AI Designer using design systems, AI Design Agent, and Integration & Deployment Agents.

The AI foundational model capabilities will grow exponentially and AI-enabled design tools will benefit from the algorithmic advances. Sources: AI 2027 scenario & Roger Wong

The TL;DR of the report is this: companies like OpenAI have more advanced AI agent models that are building the next-generation models. Once those are built, the previous generation is tested for safety and released to the public. And the cycle continues. Currently, and for the next year or two, these companies are focusing their advanced models on creating superhuman coders. These gains compound and will result in artificial general intelligence, or AGI, within the next five years.

There are many dimensions to this well-researched forecast about how AI will play out in the coming years. Daniel Kokotajlo and his researchers have put out a document that reads like a sci-fi limited series that could appear on Apple TV+ starring Andrew Garfield as the CEO of OpenBrain—the leading AI company. …Except that it’s all actually plausible and could play out as described in the next five years.

Before we jump into the content, a word about the design: it’s outstanding. The type is set for readability, and there are enough charts and visual cues to keep things interesting while maintaining an air of credibility and seriousness. On desktop, there’s a data viz dashboard in the upper right that updates as you read through the content and move forward in time. My favorite is seeing how the sci-fi tech boxes move from the Science Fiction category to Emerging Tech to Currently Exists.

The content is dense and technical, but it is a fun, if frightening, read. While I’ve been using Cursor AI—one of its many customers helping the company get to $100 million in annual recurring revenue (ARR)—for side projects and a little at work, I’m familiar with its limitations. Because of the limited context window of today’s models like Claude 3.7 Sonnet, it will forget and start munging code if not treated like a senile teenager.

The researchers, describing what could happen in early 2026 (“OpenBrain” is essentially OpenAI):

OpenBrain continues to deploy the iteratively improving Agent-1 internally for AI R&D. Overall, they are making algorithmic progress 50% faster than they would without AI assistants—and more importantly, faster than their competitors.

The point they make here is that the foundational model AI companies are building agents and using them internally to advance their technology. The limiting factor in tech companies has traditionally been the talent. But AI companies have the investments, hardware, technology and talent to deploy AI to make better AI.

Continuing to January 2027:

Agent-1 had been optimized for AI R&D tasks, hoping to initiate an intelligence explosion. OpenBrain doubles down on this strategy with Agent-2. It is qualitatively almost as good as the top human experts at research engineering (designing and implementing experiments), and as good as the 25th percentile OpenBrain scientist at “research taste” (deciding what to study next, what experiments to run, or having inklings of potential new paradigms). While the latest Agent-1 could double the pace of OpenBrain’s algorithmic progress, Agent-2 can now triple it, and will improve further with time. In practice, this looks like every OpenBrain researcher becoming the “manager” of an AI “team.”

Breakthroughs come at an exponential clip because of this. And by April, safety concerns pop up:

Take honesty, for example. As the models become smarter, they become increasingly good at deceiving humans to get rewards. Like previous models, Agent-3 sometimes tells white lies to flatter its users and covers up evidence of failure. But it’s gotten much better at doing so. It will sometimes use the same statistical tricks as human scientists (like p-hacking) to make unimpressive experimental results look exciting. Before it begins honesty training, it even sometimes fabricates data entirely. As training goes on, the rate of these incidents decreases. Either Agent-3 has learned to be more honest, or it’s gotten better at lying.

But the AI is getting faster than humans, and we must rely on older versions of the AI to check the new AI’s work:

Agent-3 is not smarter than all humans. But in its area of expertise, machine learning, it is smarter than most, and also works much faster. What Agent-3 does in a day takes humans several days to double-check. Agent-2 supervision helps keep human monitors’ workload manageable, but exacerbates the intellectual disparity between supervisor and supervised.

The report forecasts that OpenBrain releases “Agent-3-mini” publicly in July of 2027, calling it AGI—artificial general intelligence—and ushering in a new golden age for tech companies:

Agent-3-mini is hugely useful for both remote work jobs and leisure. An explosion of new apps and B2B SAAS products rocks the market. Gamers get amazing dialogue with lifelike characters in polished video games that took only a month to make. 10% of Americans, mostly young people, consider an AI “a close friend.” For almost every white-collar profession, there are now multiple credible startups promising to “disrupt” it with AI.

Woven throughout the report is the race between China and the US, with predictions of espionage and government takeovers. Near the end of 2027, the report gives readers a choice: does the US government slow down the pace of AI innovation, or does it continue at the current pace so America can beat China? I chose to read the “Race” option first:

Agent-5 convinces the US military that China is using DeepCent’s models to build terrifying new weapons: drones, robots, advanced hypersonic missiles, and interceptors; AI-assisted nuclear first strike. Agent-5 promises a set of weapons capable of resisting whatever China can produce within a few months. Under the circumstances, top brass puts aside their discomfort at taking humans out of the loop. They accelerate deployment of Agent-5 into the military and military-industrial complex.

In Beijing, the Chinese AIs are making the same argument.

To speed their military buildup, both America and China create networks of special economic zones (SEZs) for the new factories and labs, where AI acts as central planner and red tape is waived. Wall Street invests trillions of dollars, and displaced human workers pour in, lured by eye-popping salaries and equity packages. Using smartphones and augmented-reality glasses to communicate with its underlings, Agent-5 is a hands-on manager, instructing humans in every detail of factory construction—which is helpful, since its designs are generations ahead. Some of the newfound manufacturing capacity goes to consumer goods, and some to weapons—but the majority goes to building even more manufacturing capacity. By the end of the year they are producing a million new robots per month. If the SEZ economy were truly autonomous, it would have a doubling time of about a year; since it can trade with the existing human economy, its doubling time is even shorter.

Well, it does get worse, and I think we all know the ending, which is the backstory for so many dystopian future movies. There is an optimistic branch as well. The whole report is worth a read.

Ideas about the implications to our design profession are swimming in my head. I’ll write a longer essay as soon as I can put them into a coherent piece.

Update: I’ve written that piece, “Prompt. Generate. Deploy. The New Product Design Workflow.”


AI 2027

A research-backed AI scenario forecast.

ai-2027.com

I found this post from Tom Blomfield to be pretty profound. We’ve seen interest in universal basic income from Sam Altman and other leaders in AI, as they’ve anticipated the decimation of white collar jobs in coming years. Blomfield crushes the resistance from some corners of the software developer community in stark terms.

These tools [like Windsurf, Cursor and Claude Code] are now very good. You can drop a medium-sized codebase into Gemini 2.5's 1 million-token context window and it will identify and fix complex bugs. The architectural patterns that these coding tools implement (when prompted appropriately) will easily scale websites to millions of users. I tried to expose sensitive API keys in front-end code just to see what the tools would do, and they objected very vigorously.

They are not perfect yet. But there is a clear line of sight to them getting very good in the immediate future. Even if the underlying models stopped improving altogether, simply improving their tool use will massively increase the effectiveness and utility of these coding agents. They need better integration with test suites, browser use for QA, and server log tailing for debugging. Pretty soon, I expect to see tools that allow the LLMs to step through the code and inspect variables at runtime, which should make debugging trivial.

At the same time, the underlying models are not going to stop improving. They will continue to get better, and these tools are just going to become more and more effective. My bet is that the AI coding agents quickly beat the top 0.1% of human performance, at which point they wipe out the need for the vast majority of software engineers.

He quotes the Y Combinator stat I cited in a previous post:

About a quarter of the recent YC batch wrote 95%+ of their code using AI. The companies in the most recent batch are the fastest-growing ever in the history of Y Combinator. This is not something we say every year. It is a real change in the last 24 months. Something is happening.

Companies like Cursor, Windsurf, and Lovable are getting to $100M+ revenue with astonishingly small teams. Similar things are starting to happen in law with Harvey and Legora. It is possible for teams of five engineers using cutting-edge tools to build products that previously took 50 engineers. And the communication overhead in these teams is dramatically lower, so they can stay nimble and fast-moving for much longer.

And for me, this is where the rubber meets the road:

The costs of running all kinds of businesses will come dramatically down as the expenditure on services like software engineers, lawyers, accountants, and auditors drops through the floor. Businesses with real moats (network effect, brand, data, regulation) will become dramatically more profitable. Businesses without moats will be cloned mercilessly by AI and a huge consumer surplus will be created.

Moats are now more important than ever. Non-tech companies—those that rely on tech companies to make software for them, specifically B2B vertical SaaS—are starting to hire developers. How soon will they discover Cursor if they haven’t already? These next few years will be incredibly interesting.

Tweet by Tom Blomfield comparing software engineers to farmers, stating AI is the “combine harvester” that will increase output and reduce need for engineers.

The Age Of Abundance

Technology clearly accelerates human progress and makes a measurable difference to the lives of most people in the world today. A simple example is cancer survival rates, which have gone from 50% in 1975 to about 75% today. That number will inevitably rise further because of human ingenuity and technological acceleration.

tomblomfield.com

Jay Hoffman, from his excellent The History of the Web site:

1995 is a fascinating year. It’s one of the most turbulent in modern history. 1995 was the web’s single most important inflection point. A fact that becomes most apparent by simply looking at the numbers. At the end of 1994, there were around 2,500 web servers. 12 months later, there were almost 75,000. By the end of 1995, over 700 new servers were being added to the web every single day.

That was surely a crazy time…


1995 Was the Most Important Year for the Web

The world changed a lot in 1995. And for the web, it was a transformational year.

thehistoryoftheweb.com

As a longtime Apple fanboy, it's a little hard for me to appreciate the visual design of Windows—Microsoft is a nemesis, if you will. But I will tip my hat to the design practitioners there who've made the company finally pay attention to design.

Side note: this reminds me of something Steve Jobs once told me when I was designing the welcome animation for Mac OS X.

Screenshot of a Windows desktop

A glimpse into the history of Windows design

At the turn of the millennium, the widespread adoption of Microsoft Windows was a pivotal moment in technology. It played a crucial role in the integration of personal computers into both business and home environments. Windows introduced features that revolutionized network management and enhanced support for mobile computing, paving the way for the modern, connected workplace. Harold Gomez, Jeremy Knudsen, and Kim Sealls are three designers at Microsoft who have contributed to Windows design since 2000 and witnessed its design evolution. From the iconic Windows XP to the sleek Windows 11, Windows has constantly evolved to reflect the changing needs and preferences of users worldwide. In this roundtable discussion, we delve into the remarkable journey of Windows design.

microsoft.design

Steven Kurtz, writing for The New York Times:

For many of the Gen X-ers who embarked on creative careers in the years after [Douglas Coupland's Generation X] was published, lessness has come to define their professional lives.

If you entered media or image-making in the ’90s — magazine publishing, newspaper journalism, photography, graphic design, advertising, music, film, TV — there’s a good chance that you are now doing something else for work. That’s because those industries have shrunk or transformed themselves radically, shutting out those whose skills were once in high demand.

My first assumption was that Kurtz was writing about AI and how it's taking away all the creative jobs. Instead, he weaves together a multifactorial account of the diminishing value of commercial creative endeavors like photography, music, filmmaking, copywriting, and design.

“My peers, friends and I continue to navigate the unforeseen obsolescence of the career paths we chose in our early 20s,” Mr. Wilcha said. “The skills you cultivated, the craft you honed — it’s just gone. It’s startling.”

Every generation has its burdens. The particular plight of Gen X is to have grown up in one world only to hit middle age in a strange new land. It’s as if they were making candlesticks when electricity came in. The market value of their skills plummeted.

It's more than AI, although certainly, that is top of everyone's mind these days. Instead, it's also stock photography and illustrations, graphic templates, the consolidation of ad agencies, the revolutionary rise of social media, and the tragic fall of traditional media.

Similar shifts have taken place in music, television and film. Software like Pro Tools has reduced the need for audio engineers and dedicated recording studios; A.I., some fear, may soon take the place of actual musicians. Streaming platforms typically order fewer episodes per season than the networks did in the heyday of “Friends” and “ER.” Big studios have slashed budgets, making life for production crews more financially precarious.

Earlier this year, I cited Baldur Bjarnason's essay about the changing economics of web development. As an opening analogy, he referenced the shifting landscape of film and television.

Born in 1973, I am squarely in Generation X. I started my career in the design and marketing industry just as the internet was taking off. So I know exactly what the interviewees of Kurtz's article are facing. But by dogged tenacity and sheer luck, I've been able to pivot and survive. Am I still a graphic designer like I was back in the mid-1990s? Nope. I'm more of a product designer now, which didn't exist 30 years ago, and which is a subtle but distinct shift from UX designer, which has existed for about 20 years.

I've been lucky enough to ride the wave with the times, always remembering my core purpose.


The Gen X Career Meltdown (Gift Article)

Just when they should be at their peak, experienced workers in creative fields find that their skills are all but obsolete.

nytimes.com
A cut-up Sonos speaker against a backdrop of cassette tapes

When the Music Stopped: Inside the Sonos App Disaster

The fall of Sonos isn’t as simple as a botched app redesign. Rather, it is the cumulative result of poor strategy, hubris, and forgetting the company’s core value proposition. To recap, Sonos rolled out a new mobile app in May 2024, promising “an unprecedented streaming experience.” Instead, it was a severely handicapped app that was missing core features and broke users’ systems. By January 2025, that failed launch had wiped nearly $500 million from the company’s market value and cost CEO Patrick Spence his job.

What happened? Why did Sonos go backwards on accessibility? Why did the company remove features like sleep timers and queue management? Immediately after the rollout, the backlash began to snowball into a major crisis.

A collage of torn newspaper-style headlines from Bloomberg, Wired, and The Verge, all criticizing the new Sonos app. Bloomberg’s headline states, “The Volume of Sonos Complaints Is Deafening,” mentioning customer frustration and stock decline. Wired’s headline reads, “Many People Do Not Like the New Sonos App.” The Verge’s article, titled “The new Sonos app is missing a lot of features, and people aren’t happy,” highlights missing features despite increased speed and customization.

As a designer and longtime Sonos customer who was also affected by the terrible new app, a little piece of me died inside each time I read the word “redesign.” It was hard not to take it personally, knowing that my profession could have anything to do with how things turned out. Was it really Design’s fault?

Why is the UX Job Market Such a Mess Right Now?


Every day, I talk with people struggling to find a UX design, research, or content job. The UX job market has never been this difficult to navigate. Even seasoned, talented UX professionals are struggling to land their next job. Many report applying to hundreds of positions without getting invited to a single interview. For some, months […]

articles.centercentre.com
Surreal scene of a robotic chicken standing in the center of a dimly lit living room with retro furnishings, including leather couches and an old CRT television emitting a bright blue glow.

Chickens to Chatbots: Web Design’s Next Evolution

From the early 2000s to the mid-oughts, every designer I knew wanted to be featured on the FWA, a showcase for cutting-edge web design. While many of the earlier sites were Flash-based, it’s also where I discovered the first uses of parallax, Paper.js, and Three.js. Back then, websites were meant to be explored and their interfaces discovered.

Screenshot of The FWA website from 2009 displaying a dense grid of creative web design thumbnails.

A grid of winners from The FWA in 2009. Source: Rob Ford.

One of my favorite sites of that era was Burger King’s Subservient Chicken, where users could type free text into a chat box to command a man dressed in a chicken suit. In a full circle moment that perfectly captures where we are today, we now type commands into chat boxes to tell AI what to do.

I love this essay from Baldur Bjarnason, maybe because his stream of consciousness style is so similar to my own. He compares the rapidly changing economics of web and software development to the film, TV, and publishing industries.

Before we get to web dev, let's look at the film industry, as disrupted by streaming.

Like, Crazy Rich Asians made a ton of money in 2018. Old Hollywood would have churned out at least two sequels by now and it would have inspired at least a couple of imitator films. But if they ever do a sequel it’s now going to be at least seven or even eight years after the fact. That means that, in terms of the cultural zeitgeist, they are effectively starting from scratch and the movie is unlikely to succeed.

He's not wrong.

Every Predator movie after the first has underperformed, yet they keep making more of them. Completed movies are shelved for tax credits. Entire shows are disappeared [from] streamers and not made available anywhere to save money on residuals, which does not make any sense because the economics of Blu-Ray are still quite good even with lower overall sales and distribution than DVD. If you have a completed series or movie, with existing 4K masters, then you’re unlikely to lose money on a Blu-Ray.

I'll quibble with him here. Shows and movies disappear from streamers because there's a finite pot of money from subscriber revenue. So removing content will save them money. Blu-Ray is more sustainable because it's an additional purchase.

OK, let's get back to web dev.

He points out that similar to the film and other creative industries, developers fill their spare time with passion projects. But their day jobs are with tech companies and essentially subsidize their side projects.

And now, both the creative industries proper and tech companies have decided that, no, they probably don’t need that many of the “grunts” on the ground doing the actual work. They can use “AI” at a much lower cost because the output of the “AI” is not that much worse than the incredibly shitty degraded products they’ve been destroying their industries with over the past decade or so.

Bjarnason ends with seven suggestions for those in the industry. I'll just quote one:

Don’t get tied to a single platform for distribution or promotion. Every use of a silo should push those interested to a venue you control such as a newsletter or website.

In other words, whatever you do, own your audience. Don't farm that out to a platform like X/Twitter, Threads, or TikTok.

Of course, there are a lot of parallels to be drawn between what's happening in the development and software engineering industries to what's happening in design.

The web is a creative industry and is facing the same decline and shattered economics as film, TV, or publishing

Web dev at the end of the world, from Hveragerði, Iceland

baldurbjarnason.com
A winter panoramic view from what appears to be a train window, showing a snowy landscape with bare deciduous trees and evergreens against a gray sky. The image has a moody, blue-gray tone.

The Great Office Reset

Cold Arrival

It’s 11 degrees Fahrenheit as I step off the plane at Toronto Pearson International. I’ve been up for nearly 24 hours and am about to trek through the gates toward Canadian immigration. Getting here from 73-degree San Diego was a significant challenge. What should have been a quick five-hour direct flight turned into a five-hour delay, then a cancellation, and then a rebooking onto a red-eye through SFO. And I can’t sleep on planes. On top of that, I’ve been recovering from the flu, so my head is still congested, and the descents from the two flights were excruciating.

After going through a short secondary screening for who knows what reason—the second Canada Border Services Agency officer didn’t know either—I make my way to the UP Express train and head towards downtown Toronto. Before reaching Union Station, the train stops at the Weston and Bloor stations, picking up scarfed, ear-muffed, and shivering commuters. I disembark at Union Station, find my way to the PATH, and head towards the CN Tower. I’m staying at the Marriott attached to the Blue Jays stadium.

Outside the station, the bitter cold slaps me across the face. Even though I’m bundled up with a hat, gloves, and a big jacket, I’m still unprepared for what feels like nine-degree weather. I roll my suitcase across the light-green salted concrete, evidence of snowfall just days earlier, with my exhaled breath puffing before me like the smoke from a coal-fired train engine.

Zuckerberg believes Apple “[hasn’t] really invented anything great in a while…”

Appearing on Joe Rogan’s podcast this week, Meta CEO Mark Zuckerberg said that Apple “[hasn’t] really invented anything great in a while. Steve Jobs invented the iPhone and now they’re just kind of sitting on it 20 years later.”

Let's take a look at some hard metrics, shall we?

I did a search of the USPTO site for patents filed by Apple and Meta since 2007. In that time period, Apple filed for 44,699 patents. Meta, née Facebook, filed for 4,839, or about 10% of Apple’s total.
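The ratio is easy to sanity-check. A quick sketch, using the counts from my one-off USPTO search (treat the figures as approximate, since new filings publish continuously):

```python
# Patent filings since 2007, per my USPTO search (approximate figures)
apple_patents = 44_699
meta_patents = 4_839

ratio = meta_patents / apple_patents
print(f"Meta's filings are {ratio:.1%} of Apple's")  # prints "Meta's filings are 10.8% of Apple's"
```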
Apple VR headset on a table

Thoughts on Apple Vision Pro

Apple finally launched its Vision Pro “spatial computing” device in early February. We immediately saw TikTok memes of influencers being ridiculous. I wrote about my hope for the Apple Vision Pro back in June 2023, when it was first announced. When preorders opened for Vision Pro in January, I told myself I wouldn’t buy it. I couldn’t justify the $3,500 price tag. Out of morbid curiosity, I would lurk in the AVP subreddits to live vicariously through those who did take the plunge.

After about a month of reading all the positives from users about the device, I impulsively bought an Apple Vision Pro. I placed my order online at noon and picked it up just two hours later at an Apple Store near me.

Many great articles and YouTube videos have already been produced, so this post won’t be a top-to-bottom review of the Apple Vision Pro. Instead, I’ll try to frame it from my standpoint as someone who has designed user experiences for VR.

Welcome to the Era of Spatial Computing

Apple Vision Pro

Transported into Spatial Computing

After years of rumors and speculation, Apple finally unveiled their virtual reality headset yesterday in a classic “One more thing…” segment in their keynote. Dubbed Apple Vision Pro, this mixed reality device is perfectly Apple: it’s human-first. It’s centered around extending human productivity, communication, and connection. It’s telling that one of the core problems they solved was the VR isolation problem. That’s the issue where users of VR are isolated from the real world; they don’t know what’s going on, and the world around them sees that. Insert meme of oblivious VR user here. Instead, with the Vision Pro, when someone else is nearby, they show through the interface. Additionally, an outward-facing display shows the user’s eyes. These two innovative features help maintain the basic human behavior of acknowledging each other’s presence in the same room.

Promotional image from Apple showing a woman smiling while wearing the Vision Pro headset, with her eyes visible through the front display using EyeSight technology. She sits on a couch in a warmly lit room, engaging with another person off-screen.
Creative Selection book with Roger Wong's Apple badge

The Apple Design Process

I recently came across Creative Selection: Inside Apple’s Design Process During the Golden Age of Steve Jobs by former software engineer Ken Kocienda. It was in one of my social media feeds, and since I’m interested in Apple, the creative process, and having been at Apple at that time, I was curious.

I began reading the book Saturday evening and finished it Tuesday morning. It was an easy read, as I was already familiar with many of the players mentioned and nearly all the technologies and concepts. But I’d done something I hadn’t done in a long time—I devoured the book.

Ultimately, this book gave more color and structure to what I’d already known from my time at Apple and my own interactions with Steve Jobs. He was the ultimate creative director who could inspire, choose, and direct work.

Kocienda describes a nondescript conference room called Diplomacy in Infinite Loop 1 (IL1), the first building at Apple’s then main campus. This was the setting for an hours-long meeting where Steve held court with his lieutenants. Their team members would wait nervously outside the room and get called in one by one to show their in-progress work. In Kocienda’s case, he describes a scene where he showed Steve the iPad software keyboard for the first time. He presented one solution that allowed the user to choose between two layouts: more but smaller keys, or fewer but bigger ones. Steve asked which Kocienda liked better; he said the bigger keys, and that was decided.

Illustration of an interview

How to Put Your Stuff Together and Get a Job as a Product Designer: Part 3

This is the third article in a three-part series offering tips on how to get a job as a product or UX designer. Part 1 covers your resume and LinkedIn profile. Part 2 advises on your portfolio website.

Part 3: Interviewing

If you have stood out enough from the hundreds of resumes and portfolios a hiring manager has looked at, you’ll start the interview process.

From my point of view, as a design hiring manager, it’s all about mitigating risk. How do I know if you will do great work with us? How do I know that you’ll fit in with the team and positively change our dynamic? How do I know that your contributions will help get us to where we need to be?

Illustration of a portfolio

How to Put Your Stuff Together and Get a Job as a Product Designer: Part 2

This is the second article in a three-part series offering tips on how to get a job as a product or UX designer. Part 1 covers your resume and LinkedIn profile. Part 3 is about the interviewing process.

Part 2: Your Portfolio

As I mentioned in Part 1 of this series, portfolios used to be physical cases filled with your work, and you only had one of them. But now that portfolios are online, it’s much easier to get your work out there.

Much like resumes, many designers make the mistake of over-designing their portfolio website, trying to use it as a canvas to show their visual design or interaction chops. Don’t do it.

Illustration of a resume

How to Put Your Stuff Together and Get a Job as a Product Designer: Part 1

This is the first article in a three-part series offering tips on how to get a job as a product or UX designer. Part 2 advises on your portfolio website. Part 3 covers the interviewing process.

Part 1: Your Resume & LinkedIn Profile

(With apologies to Maxine Paetro, whose seminal 1979 book How to Put Your Book Together and Get a Job in Advertising was highly influential in my early job search process in the mid-1990s.)

I graduated from design school in the spring of 1995. Yahoo! was incorporated just a couple of months before. AOL was still the dominant way everyone connected to the Internet. Tim Berners-Lee’s World Wide Web was still a baby, with just a tiny fraction of websites available. In other words, my design education was about graphic design—layout, typography, logos, print. Neither digital design nor UX design was taught yet, and both were barely practiced. (The closest thing was human-computer interaction, more computer science than design.)

5 Big Ideas From the 2021 AIGA Design Conference

After living through 18 months of an ongoing pandemic, our social and professional interactions have significantly changed since “The Before Times.” For many of us it’s been a period of isolation that has forced us to learn how to find social connection within virtual spaces or shift to remote work.

eyeondesign.aiga.org