
19 posts tagged with “developer tools”

10 min read
A computer circuit board traveling at warp speed through space with motion-blurred light streaks radiating outward, symbolizing high-performance computing and speed.

The Need for Speed: Why I Rebuilt My Blog with Astro

Two weekends ago, I quietly relaunched my blog. It was a heart transplant, really: the same design I’d launched in late March, but with all-new internals.

The First Iteration

Back in early November of last year, I re-platformed from WordPress to a home-grown, Cursor-made static site generator. I’d write in Markdown, push to my GitHub repository, and the post would be published via Vercel’s continuous deployment. The design was simple, and it was a great learning project for me.

Screenshot of Roger Wong's first blog design from November 2024, featuring a dark navy background with white text. The homepage shows a large hero section with Roger's bio and headshot, followed by a "Latest Posts" section displaying the essay "From Craft to Curation: Design Leadership in the Age of AI" with a stylized illustration of a person wearing glasses with orange and blue gradient reflections. A "Latest Links" section appears on the right side.

My first blog redesign from November 2024, built with Cursor as a static site generator. Simple, clean, and good enough to get me writing again.

As soon as I launched it, I got the bug to write more because the platform was shiny and new. And as soon as I started to write more essays, I also really wanted to write short-form comments on links in the vein of Jason Kottke and John Gruber. So in January of this year, I started to design a new version of the site.

Designing for a Feed

My idea was to create a feed-like experience, since the majority of the posts were likely going to be short and link off to external sites. I was heavily inspired by the design of Bluesky and by the aforementioned blogs. I don’t pretend to be Kottke or Gruber, but that’s the style of blog I wanted to have.

I put down my idea quickly in Figma, in bed, half-watching Top Chef with my wife.

Screenshot of a Figma design mockup showing a feed-style blog layout with a light gray background and minimal sidebar navigation on the left (Home, Posts, Linked, Search, About). The main content area displays a vertical feed of posts with colored preview cards - one coral/pink card about a Clamshell keyboard case, and one mint green card for an essay titled "Design's Purpose Remains Constant." The right sidebar shows author info and navigation links.

The initial Figma sketch done in bed while half-watching Top Chef. A feed-like layout inspired by Bluesky, optimized for short-form link posts with commentary.

The main content column is supposed to look like a social media app’s feed, a long list with link previews and commentary if I had any. I optimized the structure for mobile—though only 38% of my traffic from the last six months is mobile. I noodled on the design details for a few more nights before jumping into the tech solution.

Why I Chose a CMS

Markdown is great for writing, especially if there’s a good editor. For example, I use Ulysses for Mac (and sometimes for iPad). I can easily export MD files from Ulysses.

But because I came from WordPress, it seemed conceptually silly to me to rebuild the whole site every time I published a post. Granted, that’s how Movable Type used to do it in the old days (and I guess they still do!). So I looked around and found Payload CMS, which was built by designers and developers coming from the WordPress ecosystem. And it made sense to me: render a template and fill in the content slots with data from the database. (I’m sure the developers out there have lots of arguments for static files. I know! As you’ll see, I learned my lesson.)

I tapped Cursor again to help me build the new site on Next.js, with Payload as the CMS. I spent three months on it, building custom functionality and perfecting all the details, and launched quietly with my first official post on March 27, linking to a lovely visual essay from Amelia Wattenberger.

And I loved the site. The workflow was easy, which encouraged me to post regularly. It worked great. Until it didn’t.

When Things Started Breaking

The database I used to power the site was MongoDB, a modern cloud-based database recommended by Payload. It worked great initially. I did a lot of performance tuning to make the site feel snappy, and it mostly did. Or maybe I just got used to the lag.

But as the post count grew, three things started going wrong:

  1. List pages sometimes wouldn’t load and would return an error.
  2. Search results would sometimes take forever, like 10 seconds, to return anything.
  3. In the admin UI, clicking a post lookup menu to link to a related post would often error out.

Despite additional optimizations to minimize database connections and usage, I couldn’t solve it. The only solution was to upgrade from the lowest plan, which cost me about $10 per month, to the next tier at $60 per month. A sixfold increase for a hobby blog. I didn’t think that was a prudent financial decision.

Enter Astro.

The Migration to Astro

I looked around for a more performant content framework. Astro had come up in my initial search, and after learning more about it, it became clear this was the way to go. So I spent about a week (nights only) migrating my Next.js/Payload site to Astro. Since many of the components in the original site were written in TypeScript, it was actually not that hard to tell Claude Code and Cursor to “look at the reference” to get the styling nailed. I wanted the exact same design and only needed to change the backend. The trickiest part of the whole migration was extracting the posts from MongoDB and transforming them into Markdown files, more specifically, MDX files, which allow for JavaScript within the content in case I ever need that flexibility.
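To give a flavor of that extraction step, here’s a simplified TypeScript sketch of the kind of script involved. The collection and field names (posts, title, publishedAt, tags, slug, body) are placeholders, and it glosses over converting Payload’s rich-text format into Markdown:

// migrate-posts.ts: a simplified sketch; collection and field names are placeholders
import { MongoClient } from "mongodb";
import { mkdirSync, writeFileSync } from "node:fs";

async function migrate() {
  const client = new MongoClient(process.env.MONGODB_URI ?? "mongodb://localhost:27017");
  await client.connect();

  // Pull every post out of the database
  const posts = await client.db("blog").collection("posts").find().toArray();

  mkdirSync("src/content/posts", { recursive: true });
  for (const post of posts) {
    // Write YAML frontmatter followed by the post body as an .mdx file
    const frontmatter = [
      "---",
      `title: ${JSON.stringify(post.title)}`,
      `date: ${new Date(post.publishedAt).toISOString()}`,
      `tags: ${JSON.stringify(post.tags ?? [])}`,
      "---",
    ].join("\n");
    writeFileSync(`src/content/posts/${post.slug}.mdx`, `${frontmatter}\n\n${post.body}\n`);
  }

  await client.close();
}

migrate().catch(console.error);

From there, Astro’s content collections can pick up everything in src/content/posts.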

Astro also doesn’t have built-in search, so I chose to integrate Algolia.
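The integration boils down to pushing post records into an Algolia index at build time and querying that index from the client. A rough sketch of the indexing half, assuming the v4 algoliasearch client (the record shape is illustrative):

// index-search.ts: a rough sketch using the v4 algoliasearch client
import algoliasearch from "algoliasearch";

const client = algoliasearch(
  process.env.ALGOLIA_APP_ID!,
  process.env.ALGOLIA_ADMIN_KEY!
);
const index = client.initIndex("posts");

// Each record needs a stable objectID so re-indexing updates records in place
const records = [
  {
    objectID: "why-we-still-need-a-hypercard-for-the-ai-era",
    title: "Why We Still Need a HyperCard for the AI Era",
    tags: ["developer tools"],
  },
];

index
  .saveObjects(records)
  .then(() => console.log(`Indexed ${records.length} posts`));

On the front end, the matching query is a single index.search() call, which is where the near-instant results come from.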

The results are fantastic. The site is even faster. Search is lightning fast. Here are two comparisons I’ve done: the /posts page and a single post (specifically, “Why We Still Need a HyperCard for the AI Era”). The difference is pretty stark:

Bar chart comparing web performance metrics for a posts page between Astro and Next.js/Payload. Astro shows 33 requests, 2.8 MB transferred, 3.2 MB resources, 909ms finish time, 178ms DOMContentLoaded, and 263ms Load time. Next.js/Payload shows 87 requests, 4.5 MB transferred, 6.9 MB resources, 8.45 second finish time, 390ms DOMContentLoaded, and 479ms Load time. Astro delivers substantially faster performance across all measurements.

Performance comparison loading the posts index page: Astro (purple) vs Next.js/Payload (blue). Astro completes in 909ms with 33 requests, while Next.js takes 8.45 seconds with 87 requests.

Bar chart comparing web performance metrics for a single post page between Astro and Next.js/Payload. Astro shows 27 requests, 1.6 MB transferred, 1.7 MB resources, 746ms finish time, 84ms DOMContentLoaded, and 127ms Load time. Next.js/Payload shows 72 requests, 2.1 MB transferred, 3.6 MB resources, 21.85 second finish time, 175ms DOMContentLoaded, and 272ms Load time. Astro significantly outperforms Next.js across all metrics.

Performance comparison loading a single blog post: Astro (purple) vs Next.js/Payload (blue). Astro finishes in 746ms with 27 requests, while Next.js takes 21.85 seconds with 72 requests.

The Numbers Don’t Lie

The performance difference is staggering. On the posts page, Astro loads in under a second (909 ms) while Next.js takes over 8 seconds. For a single post page, it’s even more dramatic—Astro finishes downloading and rendering all resources in 746 ms while Next.js takes a brutal 21.85 seconds. That’s nearly thirty times slower for the exact same content. The numbers tell the story: Astro makes two to three times fewer server requests and transfers significantly less data. But the real difference is in how it feels—with Astro, the content appears almost instantly (84 ms DOMContentLoaded on the single post), while Next.js took twice as long.

The kicker? Search performance. On the old Next.js/MongoDB setup, searching for “paul rand” took 3.63 seconds. With Algolia on Astro, that same search completes in 29.55 milliseconds. That’s over a hundred times faster. Not “a bit snappier.” Not “noticeably improved.” It’s the difference between a search that makes you wait and one that feels instantaneous—the kind of speed that fundamentally changes how you interact with content.

Building a Simple Admin

The advantage that Payload CMS has, of course, is its fully featured admin experience. That doesn’t come with Astro and this setup. So I started building a simple admin UI for myself to help me fill in the “frontmatter”—the metadata at the top of the MDX file, like tags, related posts, and the publish date.
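For context, a post’s frontmatter looks something like this (a simplified example; the exact fields and values here are illustrative):

---
title: "Why We Still Need a HyperCard for the AI Era"
date: 2025-07-22
category: essays
tags:
  - developer tools
  - AI
relatedPosts:
  - figma-make-great-ideas-nowhere-to-go
---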

Screenshot of a custom blog post editor interface showing two panels: the left panel contains post metadata fields including Featured Image, SEO Meta Title, Category, Tags, and Related Posts; the right panel displays the post content in Markdown format with sections on performance comparisons and building a simple admin, plus an "Upload Image" section at the bottom with fields for Bunny URL and Image Alt Text.

The simple admin UI I’m building for myself.

The basics are working so far, but there is more I’d like to do with it, including adding an AI feature to help autofill tags and write alt text for images.

What I Learned

Sometimes the right tool isn’t the most feature-rich one—it’s the one that gets out of the way. I spent months building on Next.js and Payload because it felt like the “proper” way to build a modern CMS-driven site. Database, API routes, server-side rendering—all the things you’re supposed to want. (I learned a lot along the way, so I don’t see any of it as time wasted.)

But here’s what I actually needed: fast page loads and a simple way to write. That’s it.

Astro gives me both. The static site generation approach I initially dismissed turned out to be exactly right for a content site like this. No database queries slowing things down. No server costs scaling with traffic. Just clean, fast HTML with the minimum JavaScript needed to make things work.

The trade-off? I lost the polished admin interface. But I gained something more valuable: a site that loads instantly and costs almost nothing to run. Between ditching the $10/month MongoDB plan (which wanted to become $60/month) and Astro’s efficient static generation, hosting costs dropped to basically just the $20/month pro plan on Vercel. For a personal blog, that’s the right exchange.

It turns out the old ways—static files, Markdown, simple deployments—weren’t outdated. They were just waiting for better tools. Astro is that better tool. And honestly? Writing in MDX files feels pretty good. Clean. Direct. Just me and the content.

The site looks exactly the same as it did before. But now it actually works the way it should have from the start.

Is the AI bubble about to burst? Apparently, AI prompt-to-code tools like Lovable and v0 have peaked and are on their way down.

Alistair Barr writing for Business Insider:

The drop-off raises tough questions for startups that flaunted exponential annual recurring revenue growth just months ago. Analysts wrote that much of that revenue comes from month-to-month subscribers who may churn as quickly as they signed up, putting the durability of those flashy numbers in doubt.

Barr interviewed Eric Simons, CEO of Bolt who said:

“This is the problem across all these companies right now. The churn rate for everyone is really high,” Simons said. “You have to build a retentive business.”

AI vibe coding tools were supposed to change everything. Now traffic is crashing.


Vibe coding tools have seen traffic drop, with Vercel’s v0 and Lovable seeing significant declines, raising sustainability questions, Barclays warns.

businessinsider.com
Conceptual 3D illustration of stacked digital notebooks with a pen on top, overlaid on colorful computer code patterns.

Why We Still Need a HyperCard for the AI Era

I rewatched the 1982 film TRON for the umpteenth time the other night with my wife. I have always credited this movie as the spark that got me interested in computers. Mind you, I was nine years old when this film came out. I was so excited after watching the movie that I got my father to buy us a home computer—the mighty Atari 400 (note sarcasm). I remember an educational game that came on cassette called “States & Capitals” that taught me, well, the states and their capitals. It also introduced me to BASIC, and after watching TRON, I wanted to write programs!

Vintage advertisement for the Atari 400 home computer, featuring the system with its membrane keyboard and bold headline “Introducing Atari 400.”

The Atari 400’s membrane keyboard was easy to wipe down, but terrible for typing. It also reminded me of fast food restaurant registers of the time.

Back in the early days of computing—the 1960s and ’70s—there was no distinction between users and programmers. Computer users wrote programs to do stuff for them. Hence the close relationship between the two that’s depicted in TRON. The programs in the digital world resembled their creators because they were extensions of them. Tron, the security program that Bruce Boxleitner’s character Alan Bradley wrote, looks like its creator. Clu looks like Kevin Flynn, played by Jeff Bridges. Early in the film, a compound interest program that was captured by the MCP’s goons says to a cellmate, “If I don’t have a User, then who wrote me?”

Scene from the 1982 movie TRON showing programs in glowing blue suits standing in a digital arena.

The programs in TRON looked like their users. Unless the user was the program, which was the case with Kevin Flynn (Jeff Bridges), third from left.

I was listening to a recent interview with Ivan Zhao, CEO and cofounder of Notion, in which he said he and his cofounder were “inspired by the early computing pioneers who in the ’60s and ’70s thought that computing should be more LEGO-like rather than like hard plastic.” Meaning computing should be malleable and configurable. He goes on to say, “That generation of thinkers and pioneers thought about computing kind of like reading and writing.” As in accessible and fundamental so all users can be programmers too.

The 1980s ushered in the personal computer era with the Apple IIe, Commodore 64, TRS-80 (maybe even the Atari 400 and 800), and then the Macintosh. Programs were beginning to be mass-produced and consumed by users, not programmed by them. To be sure, this move made computers much more approachable. But it also meant that users lost a bit of control. They had to wait for Microsoft to add the feature they wanted to Word.

Of course, we’ve now come full circle. In 2025, with AI-enabled vibecoding, users can spin up little custom apps that do pretty much anything they want. It’s easy, but not trivial. The only interface is the chatbox, so your control is only as good as your prompts and the model’s understanding. And things can go awry pretty quickly if you’re not careful.

What we’re missing is something accessible, but controllable. Something with enough power to allow users to build a lot, but not so much that it requires high technical proficiency to produce something good. In 1987, Apple released HyperCard and shipped it for free with every new Mac. HyperCard, as fans declared at the time, was “programming for the rest of us.”

HyperCard—Programming for the Rest of Us

Black-and-white screenshot of HyperCard’s welcome screen on a classic Macintosh, showing icons for Tour, Help, Practice, New Features, Art Bits, Addresses, Phone Dialer, Graph Maker, QuickTime Tools, and AppleScript utilities.

HyperCard’s welcome screen showed some useful stacks to help the user get started.

Bill Atkinson was the programmer responsible for MacPaint. After the Mac launched, and apparently on an acid trip, Atkinson conceived of HyperCard. As he wrote on the Apple history site Folklore:

Inspired by a mind-expanding LSD journey in 1985, I designed the HyperCard authoring system that enabled non-programmers to make their own interactive media. HyperCard used a metaphor of stacks of cards containing graphics, text, buttons, and links that could take you to another card. The HyperTalk scripting language implemented by Dan Winkler was a gentle introduction to event-based programming.

There were five main concepts in HyperCard: cards, stacks, objects, HyperTalk, and hyperlinks. 

  • Cards were screens or pages. Remember that the Mac’s nine-inch monochrome screen was just 512 pixels by 342 pixels.
  • Stacks were collections of cards, essentially apps.
  • Objects were the UI and layout elements that included buttons, fields, and backgrounds.
  • HyperTalk was the scripting language that read like plain English.
  • Hyperlinks were links from one interactive element like a button to another card or stack.

When I say that HyperTalk read like plain English, I mean it really did. AppleScript and JavaScript are descendants. Here’s a sample logic script:

if the text of field "Password" is "open sesame" then
  go to card "Secret"
else
  answer "Wrong password."
end if

Armed with this kit of parts, users could take this programming “erector set” and build all sorts of banal or wonderful apps. From tracking vinyl records to issuing invoices to transporting gamers into massive immersive worlds, HyperCard could do it all. The first version of the classic puzzle adventure game Myst was created with HyperCard. It comprised six stacks and 1,355 cards. From Wikipedia:

The original HyperCard Macintosh version of Myst had each Age as a unique HyperCard stack. Navigation was handled by the internal button system and HyperTalk scripts, with image and QuickTime movie display passed off to various plugins; essentially, Myst functions as a series of separate multimedia slides linked together by commands.

Screenshot from the game Myst, showing a 3D-rendered island scene with a ship in a fountain and classical stone columns.

The hit game Myst was built in HyperCard.

For a while, HyperCard was everywhere. Teachers made lesson plans. Hobbyists made games. Artists made interactive stories. In the Eighties and early Nineties, there was a vibrant shareware community: small independent developers who created and shared simple programs for a postcard, a beer, or five dollars. Thousands of HyperCard stacks were distributed on aggregated floppies and CD-ROMs. Steve Sande, writing in Rocket Yard:

At one point, there was a thriving cottage industry of commercial stack authors, and I was one of them. Heizer Software ran what was called the “Stack Exchange”, a place for stack authors to sell their wares. Like Apple with the current app stores, Heizer took a cut of each sale to run the store, but authors could make a pretty good living from the sale of popular stacks. The company sent out printed catalogs with descriptions and screenshots of each stack; you’d order through snail mail, then receive floppies (CDs at a later date) with the stack(s) on them.

Black-and-white screenshot of Heizer Software’s “Stack Exchange” HyperCard catalog, advertising a marketplace for stacks.

Heizer Software’s “Stack Exchange,” a marketplace for HyperCard authors.

From Stacks to Shrink-Wrap

But even as shareware programs and stacks thrived, the ground beneath this cottage industry was beginning to shift. The computer industry, in its push to go from niche hobby to a machine in every household, professionalized and commoditized software development, distribution, and sales. By the 1990s, the dominant model was packaged software merchandised on store shelves in slick shrink-wrapped boxes. The packaging was always oversized for the floppy or CD it contained to maximize visual space on the shelf.

Unlike the users/programmers from the ’60s and ’70s, you didn’t make your own word processor anymore; you bought Microsoft Word. You didn’t build your own paint and retouching program—you purchased Adobe Photoshop. These applications were powerful, polished, and designed for thousands and eventually millions of users. But that meant if you wanted a new feature, you had to wait for the next upgrade cycle—typically a couple of years. If you had an idea, you were constrained by what the developers at Microsoft or Adobe decided was on the roadmap.

The ethos of tinkering gave way to the economics of scale. Software became something you consumed rather than created.

From Shrink-Wrap to SaaS

The 2000s took that shift even further. Instead of floppy disks or CD-ROMs, software moved into the cloud. Gmail replaced the personal mail client. Google Docs replaced the need for a copy of Word on every hard drive. Salesforce, Slack, and Figma turned business software into subscription services you didn’t own, but rented month-to-month.

SaaS has been a massive leap for collaboration and accessibility. Suddenly your documents, projects, and conversations lived everywhere. No more worrying about hard drive crashes or lost phones! But it pulled users even farther away from HyperCard’s spirit. The stack you made was yours; the SaaS you use belongs to someone else’s servers. You can customize workflows, but you don’t own the software.

Why Modern Tools Fall Short

For what started out as a note-taking app, Notion has come a long way. With its kit of parts—pages, databases, tags, etc.—it’s highly configurable for tracking information. But you can’t make games with it. Nor can you really tell interactive stories (sure, you can link pages together). You also can’t distribute what you’ve created and share with the rest of the world. (Yes, you can create and sell Notion templates.)

No productivity software programs are malleable in the HyperCard sense. 


Of course, there are specialized tools for creativity. Unreal Engine and Unity are great for making games. Director and Flash continued the tradition started by HyperCard—at least in the interactive media space—before they were supplanted by the more complex combination of HTML5, CSS, and JavaScript. Objectively, all of these authoring environments are more complex than HyperCard ever was.

The Web’s HyperCard DNA

In a fun remembrance, Constantine Frantzeskos writes:

HyperCard’s core idea was linking cards and information graphically. This was true hypertext before HTML. It’s no surprise that the first web pioneers drew direct inspiration from HyperCard – in fact, HyperCard influenced the creation of HTTP and the Web itself. The idea of clicking a link to jump to another document? HyperCard had that in 1987 (albeit linking cards, not networked documents). The pointing finger cursor you see when hovering over a web link today? That was borrowed from HyperCard’s navigation cursor.

Ted Nelson coined the terms “hypertext” and “hyperlink” in the mid-1960s, envisioning a world where digital documents could be linked together in nonlinear “trails”—making information interwoven and easily navigable. Bill Atkinson’s HyperCard was the first mass-market program that popularized this idea, even influencing Tim Berners-Lee, the father of the World Wide Web. Berners-Lee’s invention was about linking documents together on a server and linking to other documents on other servers. A web of documents.

Early ViolaWWW hypermedia browser from 1993, displaying a window with navigation buttons, URL bar, and hypertext description.

Early web browser from 1993, ViolaWWW, directly inspired by the concepts in HyperCard.

Pei-Yuan Wei, developer of one of the first web browsers called ViolaWWW, also drew direct inspiration from HyperCard. Matthew Lasar writing for Ars Technica:

“HyperCard was very compelling back then, you know graphically, this hyperlink thing,” Wei later recalled. “I got a HyperCard manual and looked at it and just basically took the concepts and implemented them in X-windows,” which is a visual component of UNIX. The resulting browser, Viola, included HyperCard-like components: bookmarks, a history feature, tables, graphics. And, like HyperCard, it could run programs.

And of course, with the built-in source code viewer, browsers brought on a new generation of tinkerers who’d look at HTML and make stuff by copying, tweaking, and experimenting.

The Missing Ingredient: Personal Software

Today, we have low-code and no-code tools like Bubble for making web apps, Framer for building websites, and Zapier for automations. These tools are still aimed at professionals, though. Maybe with the exception of Zapier and IFTTT, they’ve expanded the number of people who can make software (including websites), but they’re not general purpose. These are all adjacent to what HyperCard was.

(Re)enter personal software.

In an essay titled “Personal software,” Lee Robinson wrote, “You wouldn’t search ‘best chrome extensions for note taking’. You would work with AI. In five minutes, you’d have something that works exactly how you want.”

Exploring the idea of “malleable software,” researchers at Ink & Switch wrote:

How can users tweak the existing tools they’ve installed, rather than just making new siloed applications? How can AI-generated tools compose with one another to build up larger workflows over shared data? And how can we let users take more direct, precise control over tweaking their software, without needing to resort to AI coding for even the tiniest change? None of these questions are addressed by products that generate a cloud-hosted application from a prompt.

Of course, AI prompt-to-code tools have been emerging this year, allowing anyone who can type to build web applications. However, if you study these tools more closely—Replit, Lovable, Base44, etc.—you’ll find that the audience is still technical people. Developers, product managers, and designers can understand what’s going on. But not everyday people.

These tools are still missing the ingredients HyperCard had, the ones that kept it in the general zeitgeist for a while and enabled users to be programmers again.

They are:

  • Direct manipulation
  • Technical abstraction
  • Local apps

What Today’s Tools Still Miss

Direct Manipulation

As I concluded in my exhaustive AI prompt-to-code tools roundup from April, “We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.” The roundtrip (prompting the model, waiting for it to think and generate code, then rebuilding the app) takes much too long. If you don’t know how to code, every change takes minutes, so building something becomes tedious, not fun.

Tools need to be canvas-first, not chatbox-first. Imagine a kit of UI elements on the left that you can drag onto the canvas and then configure and style—not unlike WordPress page builders.

AI is there to do the work for you if you want, but you don’t need to use it.

Hand-drawn sketch of a modern HyperCard-like interface, with a canvas in the center, object palette on the left, and chat panel on the right.

My sketch of the layout of what a modern HyperCard successor could look like. A directly manipulatable canvas is in the center, object palette on the left, and AI chat panel on the right.

Technical Abstraction

For gen pop, I believe that these tools should hide away all the JavaScript, TypeScript, etc. The thing that the user is building should just work.

Additionally, there’s an argument to be made to bring back HyperTalk or something similar. Here is the same password logic I showed earlier, but in modern-day JavaScript:

const password = document.getElementById("Password").value;

if (password === "open sesame") {
  window.location.href = "secret.html";
} else {
  alert("Wrong password.");
} 

No one is going to understand that, much less write something like it.

One could argue that the user doesn’t need to understand that code since the AI will write it. Sure, but code is also documentation. If a user is working on an immersive puzzle game, they need to know the algorithm for the solution. 

As a side note, I think flow charts or node-based workflows are great. Unreal Engine’s Blueprints visual scripting is fantastic. Again, AI should be there to assist.

Unreal Engine Blueprints visual scripting interface, with node blocks connected by wires representing game logic.

Unreal Engine has a visual scripting interface called Blueprints, with node blocks connected by wires representing game logic.

Local Apps

HyperCard’s file format was “stacks,” and stacks could be compiled into applications that could be distributed without HyperCard. Today’s cloud-based AI coding tools can all publish a project to a unique URL for sharing. That’s great for prototyping and for personal use, but if you wanted to distribute your creation as shareware or donation-ware, you’d have to map it to a custom domain name. And buying a domain from a registrar and dealing with DNS records isn’t straightforward for most people.

What if these web apps could be turned into a single exchangeable file format like “.stack” or some such? Furthermore, what if they could be wrapped into executable apps via Electron?

Rip, Mix, Burn

Lovable, v0, and others already have sharing and remixing built in. This ethos is great and builds on the philosophies of the hippie computer scientists. In addition to fostering a remix culture, I imagine a centralized store for these apps. Of course, those that are published as runtime apps can go through the official Apple and Google stores if they wish. Finally, nothing stops third-party stores, similar to the collections of stacks that used to be distributed on CD-ROMs.

AI as Collaborator, Not Interface

As mentioned, AI should not be the main UI for this. Instead, it’s a collaborator. It’s there if you want it. I imagine that it can help with scaffolding a project just by describing what you want to make. And as it’s shaping your app, it’s also explaining what it’s doing and why so that the user is learning and slowly becoming a programmer too.

Democratizing Programming

When my daughter was in middle school, she used a site called Quizlet to make flash cards to help her study for history tests. There were often user-generated sets of cards for certain subjects, but there were never sets specifically for her class, her teacher, that test. With this HyperCard of the future, she would be able to build something custom in minutes.

Likewise, a small business owner who runs an Etsy shop selling T-shirts can spin up something a little more complicated to analyze sales and compare against overall trends in the marketplace.

And that same Etsy shop owner could sell the little app they made to others wanting the same tool for their stores.

The Future Is Close

Scene from TRON showing a program with raised arms, looking upward at a floating disc in a beam of light.

Tron talks to his user, Alan Bradley, via a communication beam.

In an interview with Garry Tan of Y Combinator in June, Michael Truell, the CEO of Anysphere, which is the company behind Cursor, said his company’s mission is to “replace coding with something that’s much better.” He acknowledged that coding today is really complicated:

Coding requires editing millions of lines of esoteric formal programming languages. It requires doing lots and lots of labor to actually make things show up on the screen that are kind of simple to describe.

Truell believes that in five to ten years, making software will boil down to “defining how you want the software to work and how you want the software to look.”

In my opinion, his timeline is a bit conservative, but maybe he means for professionals. I wonder if something simpler will come along sooner that will capture the imagination of the public, like ChatGPT has. Something that will encourage playing and tinkering like HyperCard did.

There’s a new TRON sequel coming out soon—TRON: Ares. In a panel discussion in the 5,000-seat Hall H at San Diego Comic-Con earlier this summer, Steven Lisberger, the creator of the franchise, offered this warning about AI: “Let’s kick the technology around artistically before it kicks us around.” While he said it as a warning, I think it’s an opportunity as well.

AI opens up computer “programming” to a much larger swath of people—hell, everyone. As an industry, we should encourage tinkering by building such capabilities into our products. Not UIs on the fly, but mods as necessary. We should build platforms that increase the pool of users from technical people to everyday users like students, high school teachers, and grandmothers. We should imagine a world where software is as personalizable as a notebook—something you can write in, rearrange, and make your own. And maybe users can be programmers once again.

Darragh Burke and Alex Kern, software engineers at Figma, writing on the Figma blog:

Building code layers in Figma required us to reconcile two different models of thinking about software: design and code. Today, Figma’s visual canvas is an open-ended, flexible environment that enables users to rapidly iterate on designs. Code unlocks further capabilities, but it’s more structured—it requires hierarchical organization and precise syntax. To reconcile these two models, we needed to create a hybrid approach that honored the rapid, exploratory nature of design while unlocking the full capabilities of code.

The solution turned out to be code layers: actual canvas primitives that can be manipulated just like a rectangle and that respect auto layout properties, opacity, border radius, etc.

The solution we arrived at was to implement code layers as a new canvas primitive. Code layers behave like any other layer, with complete spatial flexibility (including moving, resizing, and reparenting) and seamless layout integration (like placement in autolayout stacks). Most crucially, they can be duplicated and iterated on easily, mimicking the freeform and experimental nature of the visual canvas. This enables the creation and comparison of different versions of code side by side. Typically, making two copies of code for comparison requires creating separate git branches, but with code layers, it’s as easy as pressing ⌥ and dragging. This automatically creates a fork of the source code for rapid riffing.

In my experience, it works as advertised, though a code layer takes a second to re-render whenever its spatial properties are edited. That makes sense, since it’s rendering code.


Canvas, Meet Code: Building Figma’s Code Layers

What if you could design and build on the same canvas? Here's how we created code layers to bring design and code together.

figma.com

If you want an introduction to using Cursor as a designer, here’s a must-watch video. It’s just over half an hour long, and Elizabeth Lin goes through several demos in Cursor.

Cursor is much more advanced than the AI prompt-to-code tools I’ve covered here before. But with it, you’ll get much more control because you’re building with actual code. (Of course, sigh, you won’t have sliders and inputs for controlling design.)


A designer's guide to Cursor: How to build interactive prototypes with sound, explore visual styles, and transform data visualizations | Elizabeth Lin

How to use Cursor for rapid prototyping: interactive sound elements, data visualization, and aesthetic exploration without coding expertise

open.substack.com

David Singleton, writing in his blog:

Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without. Not in a hand-wavy, hype-cycle way, but in a very concrete “this is changing how I ship software” way.

I have to agree. My recent tinkering projects with Cursor using Claude 4 Sonnet (and set to Cursor’s MAX mode) have been much smoother and much more autonomous.

And Singleton has found that Claude Code and OpenAI Codex are good for different things:

For personal tools, I’ve completely shifted my approach. I don’t even look at the code anymore - I describe what I want to Claude Code, test the result, make some minor tweaks with the AI and if it’s not good enough, I start over with a slightly different initial prompt. The iteration cycle is so fast that it’s often quicker to start over than trying to debug or modify the generated code myself. This has unlocked a level of creative freedom where I can build small utilities and experiments without the usual friction of implementation details.

And the larger point Singleton makes is that if you direct the right context to the reasoning model, it can help you solve your problem more effectively:

This points to something bigger: there’s an emerging art to getting the right state into the context window. It’s sometimes not enough to just dump code at these models and ask “what’s wrong?” (though that works surprisingly often). When stuck, you need to help them build the same mental framework you’d give to a human colleague. The sequence diagram was essentially me teaching Claude how to think about our OAuth flow. In another recent session, I was trying to fix a frontend problem (some content wouldn’t scroll) and couldn’t figure out where I was missing the correct CSS incantation. Cursor’s Agent mode couldn’t spot it either. I used Chrome dev tools to copy the entire rendered HTML DOM out of the browser, put that in the chat with Claude, and it immediately pinpointed exactly where I was missing an overflow: scroll.

For my designer audience out there—likely 99% of you—I think this post is informative as to how to work with reasoning models like Claude 4 or o4. This can totally apply to prompt-to-code tools like Lovable and v0. And these ideas can likely apply to Figma Make and Subframe.


Coding agents have crossed a chasm

Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without.

blog.singleton.io

Brad Feld is sharing the Cursor prompts his friend Michael Natkin put together. It’s more or less the same as what I’ve gleaned from the Cursor forums, but it’s nice to have it all consolidated in one place. If you’re curious to tackle a weekend coding project, follow these steps.


Vibecoding Prompts

A long time ago, in a galaxy far, far away, I was a CTO of a large, fast-growing public company. Well, I was a Quasi CTO in the same way […]

feld.com
Surreal, digitally manipulated forest scene with strong color overlays in red, blue, and purple hues. A dark, blocky abstract logo is superimposed in the foreground.

Thoughts on the 2024 Design Tools Survey

Tommy Geoco and team are finally out with the results of their 2024 UX Design Tools Survey.

First, two quick observations before I move on to longer ones:

  • The respondent population of 2,200+ designers is well balanced across company size, team structure, client vs. product focus, and leadership responsibility
  • Predictably, Figma dominates the tool stacks of most segments

Surprise #1: Design Leaders Use AI More Than ICs

Bar chart comparing AI adoption rates among design leaders and ICs across different work environments. Agency leaders show the highest adoption at 88.7%, followed by startups, growth-stage, and corporate environments.

From the summary of the AI section:

Three clear patterns emerge from our data:

  1. Leadership-IC Divide. Leaders adopt AI at a higher rate (29.0%) than ICs (19.9%)
  2. Text-first adoption. 75.2% of AI usage focuses on writing, documentation, and content—not visuals
  3. Client Influence. Client-facing designers show markedly higher AI adoption than internal-facing peers

That nine-point difference is interesting. The report doesn’t speculate on why, but here are some possible reasons:

  • Design leaders are experimenting with AI tools looking for efficiency gains
  • Leaders write more than they design, so they’re using AI more for emails, memos, reports, and presentations
  • ICs are set in their processes and don’t have time to experiment

Bar chart showing that most AI usage is for text-based tasks like copywriting, documentation, and content generation. Visual design tasks such as wireframes, assets, and components are much less common.

I believe that any company operating with resource constraints—which is all startups—needs to embrace AI. AI enables us to do more. I don’t believe—at least not yet—mid- to senior-level jobs are on the line. Engineers can use Cursor to write code, sure, but it’s probably better for them to give Cursor junior-level tasks like bug fixes. Designers should use AI to generate prototypes so that they can test and iterate on ideas more quickly. 

Bar chart showing 17.7% of advanced prototypers use code-based tools like SwiftUI, HTML/CSS/JS, React, and Flutter. Ratings indicate high satisfaction with these approaches, signaling a shift toward development-integrated prototyping.

The data here is stale, unfortunately. The survey was conducted between November 2024 and January 2025, just as the AI prompt-to-code tools were coming to market. I suspect we will see a huge jump in next year’s results.

Surprise #2: There’s Excitement for Framer

Alt Text: “Future of Design Award” banner featuring the Framer logo. Below, text explains the award celebrates innovations shaping design’s future, followed by “Winner: Framer.” Three key stats appear: 10.0% of respondents ranked Framer as a 2025 “tool to try,” 12.1% share in portfolio-building (largest in its category), and a 4.57 / 5 average satisfaction rating (tied for highest).

I’m surprised about Framer winning the “Future of Design” award. Maybe it’s the name of the award; does Framer really represent the “future of design”? Ten percent of respondents say they want to try this. 

I’ve not gone back to Framer since its early days, when it supported code export. I will give them kudos: they’ve pivoted and built a solid business and platform. But I’m personally wary of creating websites for clients on a closed platform; I’d rather it be portable, like a Node.js app or even WordPress. But to each their own.

Not Surprised at All

In the report’s conclusion, its first two points are unsurprising:

  1. AI enters the workflow. 8.5% of designers cited AI tools as their top interest for 2025. With substantial AI tooling innovation in early 2025, we expect widespread adoption to accelerate next year.

Like I mentioned earlier, I think this will shift big time. 

  2. Design-code gap narrows. Addressing the challenge faced by 46.3% of teams reporting inconsistencies between design system specifications and code implementations.

As I said in a previous essay on the future of product design, the design-to-code gap is begging to be solved, “For any designer who has ever handed off a Figma file to a developer, they have felt the stinging disappointment days or weeks later when it’s finally coded.…The developer handoff experience has been a well-trodden path full of now-defunct or dying companies like InVision, Abstract, and Zeplin.”

Reminder: The Tools Don’t Make You a Better Designer

Inevitably, someone in the comments section will point this out: Don’t focus on the tool. To quote photographer and camera reviewer Ken Rockwell, “Cameras don’t take pictures, photographers do. Cameras are just another artist’s tool.” Tools are commodities, but our skills as craftspeople, thinkers, curators, and tastemakers are not.

Colorful illustration featuring the Figma logo on the left and a whimsical character operating complex, abstract machinery with gears, dials, and mechanical elements in vibrant colors against a yellow background.

Figma Make: Great Ideas, Nowhere to Go

Nearly three weeks after it was introduced at Figma Config 2025, I finally got access to Figma Make. It’s in beta, and Figma made sure we all know it. So I’ll say upfront that it’s a bit unfair to do an official review. However, many of the tools in my AI prompt-to-code shootout article are also in beta.

Since this review is fairly visual, I also made a video that summarizes the points in this article.


The Promise: One-to-One With Your Design

Figma's Peter Ng presenting on stage with large text reading "0→1 but 1:1 with your designs" against a dark background with purple accent lighting.

Figma’s Peter Ng presenting on stage Make’s promise: “0→1 but 1:1 with your designs.”

“What if you could take an idea not only from zero to one, but also make it one-to-one with your designs?” said Peter Ng, product designer at Figma. Just like all the other AI prompt-to-code tools, Figma Make is supposed to enable users to prompt their way to a working application. 

The Figma spin is that there’s more control over the output. Click an element and have the prompt apply only to that element. Or click on something in the canvas and change details like the font family, size, or color.

The other Figma advantage is the ability to paste in Figma designs for a more accurate translation to code. That’s the “one-to-one” Ng refers to.

The Reality: Falls Short

I evaluated Figma Make with my standard checkout-flow prompt (covering the zero-to-one use case), another prompt, and a pasted design (one-to-one).

Let’s get the standard evaluation out of the way before moving on to a deeper dive.

Figma Make Scorecard

Figma Make scorecard showing a total score of 58 out of 100, with breakdown: User experience 18/25, Visual design 13/15, Prototype 8/10, Ease of use 9/15, Design Control 6/15, Design system integration 0/15, Speed 9/10, and Editor's Discretion -5/10.

I ran the same prompt through it as the other AI tools:

Create a complete shopping cart checkout experience for an online clothing retailer

Figma Make’s score totaled 58, which puts it squarely in the middle of the pack. This was for a variety of reasons.

The quality of the generated output was pretty good. The UI was nice and clean, if a bit unstyled. This is because Make uses Shadcn UI components. Overall, the UX was exactly what I would expect. Perhaps a progress bar would have been a nice touch.

The generation was fast, clocking in at three minutes, which puts it near the top in terms of speed.

And the fine-grained editing sort of worked as promised. However, my manual changes were sometimes overridden if I used the chat.

Where It Actually Shines

Figma Make interface showing a Revenue Forecast Calculator with a $200,000 total revenue input, "Normal" distribution type selected, monthly breakdown table showing values from January ($7,407) to December ($7,407), and an orange bar chart displaying the normal distribution curve across 12 months with peak values in summer months.

The advantage of these prompt-to-code tools is that complex interactions are really easy to prototype (maybe even production-ready?).

To test this, I used a new prompt:

Build a revenue forecast calculator. It should take the input of a total budget from the user and automatically distribute the budget to a full calendar year showing the distribution by month. The user should be able to change the distribution curve from “Even” to “Normal” where “Normal” is a normal distribution curve.

Along with the prompt, I also included a wireframe as a still image. This gave the AI some idea of the structure I was looking for, at least.

The resulting generation was great and the functionality worked as expected. I iterated the design to include a custom input method and that worked too.
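For reference, the core math Make had to generate is small. Here’s my own TypeScript sketch of the distribution logic (an illustration, not Make’s actual output; the mean and standard deviation are arbitrary choices):

// distribute.ts: my own illustration of the calculator's core math, not Make's output
type Curve = "even" | "normal";

function distributeBudget(total: number, curve: Curve): number[] {
  if (curve === "even") {
    return Array(12).fill(total / 12);
  }

  // Weight each month with a Gaussian centered on mid-year, then
  // normalize so the twelve months sum back to the total budget
  const mean = 5.5; // midpoint between June (index 5) and July (index 6)
  const stdDev = 2.5; // arbitrary spread
  const weights = Array.from({ length: 12 }, (_, month) =>
    Math.exp(-((month - mean) ** 2) / (2 * stdDev ** 2))
  );
  const sum = weights.reduce((a, b) => a + b, 0);
  return weights.map((w) => (total * w) / sum);
}

// A $200,000 budget peaks in the summer months, as in the screenshot
console.log(distributeBudget(200_000, "normal").map((v) => Math.round(v)));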

The One-to-One Promise Breaks Down

I wanted to see how well Figma Make would work with a well-structured Figma Design file. So I created a homepage for a fictional fitness instructor using auto layout frames, structuring the file as I would divs in HTML.

Figma Design interface showing the original "Body by Reese" fitness instructor homepage design with layers panel on left, main canvas displaying the Pilates hero section and content layout, and properties panel on right. This is the reference design that was pasted into Figma Make for testing.

This is the reference design that was pasted into Figma Make for testing. Notice the well-structured layers!

Then I pasted the design into the chatbox and included a simple prompt. The result was…disappointing. The layout was correct, but the typeface and type sizes were all wrong. I fed that feedback into the chat, and the right font finally appeared.

Then I manually updated the font sizes and got the design looking pretty close to my original. There was one problem: an image was the wrong size and not proportionally scaled. So I asked the AI to fix it.

Figma Make interface showing a fitness instructor homepage with "Body by Reese" branding, featuring a hero image of someone doing Pilates with "Sculpt. Strengthen. Shine." text overlay, navigation menu, and content section with instructor photo and "Book a Class" call-to-action button.

Figma Make’s attempt at translating my Figma design to code.

The AI did not fix it and reverted some of my manual overrides for the fonts. In many ways this is significantly worse than not giving designers fine-grained control in the first place. Overwriting my overrides made me lose trust in the product because I lost work—however minimal it was. It brought me back to the many occasions that Illustrator or Photoshop crashed while saving, thus corrupting the file. Yes, it wasn’t as bad, but it still felt that way.

Dead End by Design

The question of what to do with the results of a Figma Make chat remains. A Figma Make file is its own filetype. You can’t bring it back into Figma Design or even Figma Sites to make tweaks. You can publish it, and it’s hosted on Figma’s infrastructure, just like Sites. You can download the code, but it’s kind of useless.

Code Export Is Capped at the Knees

You can download the React code as a zip file. But the download does not contain the package.json needed to make it installable on your local machine or on a Node.js server. The package file tells the npm installer which dependencies need to be installed for the project to run.

I tried using Cursor to figure out where to move the files (they have to be in a src directory) and to help me write a package.json, but reverse engineering it would have taken too much time.
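For reference, here’s roughly the minimal package.json a Vite-style React export would need before npm install could do its thing. This is a guess at the shape; the dependency list and versions are illustrative:

{
  "name": "figma-make-export",
  "private": true,
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview"
  },
  "dependencies": {
    "react": "^18.3.1",
    "react-dom": "^18.3.1"
  },
  "devDependencies": {
    "@vitejs/plugin-react": "^4.3.0",
    "typescript": "^5.4.0",
    "vite": "^5.2.0"
  }
}

With something like that in place, npm install and npm run dev should at least attempt to boot the project.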

Nowhere to Go

Maybe using Figma Make inside Figma Sites will be a better use case. It’s not yet enabled for me, but that feature is the so-called Code Layers mentioned in the Make and Sites deep-dive presentation at Config. In practice, it sounds very much like Code Components in Framer.

The Bottom Line

Figma had to debut Make in order to stay competitive. There’s just too much out there nipping at their heels. While a design tool like Figma is necessary to unlock the freeform exploration designers need, making designs real from within the tool is the natural next step. The likes of Lovable, v0, and Subframe allow you to start with a design from Figma and turn it into working code. The thesis for many of those tools is that they’re taking care of everything after the designer-to-developer handoff: get a design, give the AI some context, and we’ll make it real. Figma has owned everything before that handoff for a while, and it’s finally taking the next step.

However, in its current state, Figma Make is a dead end (see the previous section). But it’s beta software, and it should get better before official release. As a preview, I think it’s cool, despite its flaws and bugs. But I wouldn’t use it for any actual work.

During this beta period, Figma needs to…

  • Add complete code export so the resulting code is portable, rather than keeping it within its closed system
  • Fix the fiendish bugs around the AI overwriting manual overrides
  • Figure out tighter integration between Make and the other products, especially Design
A futuristic scene with a glowing, tech-inspired background showing a UI design tool interface for AI, displaying a flight booking project with options for editing and previewing details. The screen promotes the tool with a “Start for free” button.

Beyond the Prompt: Finding the AI Design Tool That Actually Works for Designers

There has been an explosion of AI-powered prompt-to-code tools within the last year. The space began with full-on integrated development environments (IDEs) like Cursor and Windsurf, which let developers leverage AI assistants right inside their coding apps. Then came tools like v0, Lovable, and Replit, where users could prompt screens into existence at first, and before long, entire applications.

A couple weeks ago, I decided to test out as many of these tools as I could. My aim was to find the app that would combine AI assistance, design capabilities, and the ability to use an organization’s coded design system.

While my previous essay was about the future of product design, this article will dive deep into a head-to-head between all eight apps that I tried. I recorded the screen as I did my testing, so I’ve put together a video as well, in case you didn’t want to read this.


It is a long video, but there’s a lot to go through. It’s also my first video on YouTube, so this is an experiment.

The Bottom Line: What the Testing Revealed

I won’t bury the lede here. AI tools can be frustrating because they are probabilistic. One hour they can solve an issue quickly and efficiently; the next they can spin on a problem and make you want to pull your hair out. Part of this is the LLM—and they all use some combo of the major LLMs. The other part is the tool itself, which often doesn’t handle what happens when its LLM fails.

For example, this morning I re-evaluated Lovable and Bolt because they’ve released new features within the last week, and I thought it would only be fair to assess the latest version. But both performed worse than in my initial testing two weeks ago. In fact, I tried Bolt twice this morning with the same prompt because the first attempt netted a blank preview. Unfortunately, the second attempt also resulted in a blank screen and then I ran out of credits. 🤷‍♂️

Scorecard for Subframe, with a total of 79 points across different categories: User experience (22), Visual design (13), Prototype (6), Ease of use (13), Design control (15), Design system integration (5), Speed (5), Editor’s discretion (0).

For designers who want actual design tools to work on UI, Subframe is the clear winner. The other tools go directly from prompt to code, never giving designers control via a visual editor. We’re not developers, so manipulating the design in code is not for us. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector.

For me, the runner-up is v0, if you want to use it only for prototyping and for getting ideas. It’s quick—the UI is mostly unstyled, so it doesn’t get in the way of communicating the UX.

The Players: Code-Only vs. Design-Forward Tools

There are two main categories of contenders: code-only tools, and code plus design tools.

Code-Only

  • Bolt
  • Lovable
  • Polymet
  • Replit
  • v0

Code + Design

  • Onlook
  • Subframe
  • Tempo

My Testing Approach: Same Prompt, Different Results

As mentioned at the top, I tested these tools between April 16–27, 2025. As with most SaaS products, I’m sure things change daily, so this report captures a moment in time.

For my evaluation, since all these tools allow for generating a design from a prompt, that’s where I started. Here’s my prompt:

Create a complete shopping cart checkout experience for an online clothing retailer

I would expect the following pages to be generated:

  • Shopping cart
  • Checkout page (or pages) to capture payment and shipping information
  • Confirmation

I scored each app based on the following rubric:

  • Sample generation quality:
      • User experience (25)
      • Visual design (15)
      • Prototype (10)
  • Ease of use (15)
  • Control (15)
  • Design system integration (10)
  • Speed (10)
  • Editor’s discretion (±10)

The Scoreboard: How Each Tool Stacked Up

AI design tools for designers, with scores: Subframe 79, Onlook 71, v0 61, Tempo 59, Polymet 58, Lovable 49, Bolt 43, Replit 31. Evaluations conducted between 4/16–4/27/25.

Final summary scores for AI design tools for designers. Evaluations conducted between 4/16–4/27/25.

Here are the summary scores for all eight tools. For the detailed breakdown, view the scorecards in this Google Sheet.

The Blow-by-Blow: The Good, the Bad, and the Ugly

Bolt

Bolt screenshot: A checkout interface with a shopping cart summary, items listed, and a “Proceed to Checkout” button, displaying prices and order summary.

First up, Bolt. Classic prompt-to-code pattern here—text box, type your prompt, watch it work. 

Bolt shows you the code generation in real-time, which is fascinating if you’re a developer but mostly noise if you’re not. The resulting design was decent but plain, with typical UX patterns. It missed delivering the confirmation page I would expect. And when I tried to re-evaluate it this morning with their new features? Complete failure—blank preview screens until I ran out of credits. No rhyme or reason. And there it is—a perfect example of the maddening inconsistency these tools deliver. Working beautifully in one session, completely broken in another. Same inputs, wildly different outputs.

Score: 43

Lovable

Lovable screenshot: A shipping information form on a checkout page, including fields for personal details and a “Continue to Payment” button.

Moving on to Lovable, which I captured this morning right after they launched their 2.0 version. The experience was a mixed bag. While it generated clean (if plain) UI with some nice touches like toast notifications and a sidebar shopping cart, it got stuck at a critical juncture—the actual checkout. I had to coax it along, asking specifically for the shopping cart that was missing from the initial generation.

The tool encountered an error but at least provided a handy “Try to fix” button. Unlike Bolt, Lovable tries to hide the code, focusing instead on the browser preview—which as a designer, I appreciate. When it finally worked, I got a very vanilla but clean checkout flow and even the confirmation page I was looking for. Not groundbreaking, but functional. The approach of hiding code complexity might appeal to designers who don’t want to wade through development details.

Score: 49

Polymet

Polymet screenshot: A checkout page design for a fashion store showing payment method options (Credit Card, PayPal, Apple Pay), credit card fields, order summary with subtotal, shipping, tax, and total.

Next up is Polymet. This one has a very interesting interface and I kind of like it. You have your chat on the left and a canvas on the right. But instead of just showing the screen it’s working on, it’s actually creating individual components that later get combined into pages. It’s almost like building Figma components and then combining them at the end, except these are all coded components.

The design is pretty good—plain but very clean. I feel like it’s got a little more character than some of the others. What’s nice is you can go into focus mode and actually play with the prototype. I was able to navigate from the shopping cart through checkout (including Apple Pay) to confirmation. To export the code, you need to be on a paid plan, but the free trial gives you at least a taste of what it can do.

Score: 58

Replit

Replit screenshot: A developer interface showing progress on an online clothing store checkout project with error messages regarding the use of the useCart hook.

Replit was a test of patience—no exaggeration, it was the slowest tool of the bunch at 20 minutes to generate anything substantial. Why so slow? It kept encountering errors and falling into those weird loops that LLMs often do when they get stuck. At one point, I had to explicitly ask it to “make it work” just to progress beyond showing product pages, which wasn’t even what I’d asked for in the first place.

When it finally did generate a checkout experience, the design was nothing to write home about. Lines in the stepper weren’t aligning properly, there were random broken elements, and ultimately—it just didn’t work. I couldn’t even complete the checkout flow, which was the whole point of the exercise. I stopped recording at that point because, frankly, I just didn’t want to keep fighting with a tool that’s both slow and ineffective. 

Score: 31

v0

v0 screenshot: An online shopping cart with a multi-step checkout process, including a shipping form and order summary with prices and a “Continue to Payment” button.

Taking v0 for a spin next, which comes from Vercel. I think it was one of the earlier prompt-to-code generators I heard about—originally just for components, not full pages (though I could be wrong). The interface is similar to Bolt with a chat panel on the left and code on the right. As it works, it shows you the generated code in real-time, which I appreciate. It’s pretty mature and works really well.

The result almost looks like a wireframe, but the visual design has a bit more personality than Bolt’s version, even though it’s using the unstyled shadcn components. It includes form validation (which I checked), and handles the payment flow smoothly before showing a decent confirmation page. Speed-wise, v0 is impressively quick compared to some others I tested—definitely a plus when you’re iterating on designs and trying to quickly get ideas.

Score: 61

Onlook

Onlook screenshot: A design tool interface showing a cart with empty items and a “Continue Shopping” button on a fashion store checkout page.

Onlook stands out as a self-contained desktop app rather than a web tool like the others. The experience starts the same way—prompt in, wait, then boom—but instead of showing you immediate results, it drops you into a canvas view with multiple windows displaying localhost:3000, which is your computer running a web server locally. The design it generated was fairly typical and straightforward, properly capturing the shopping cart, shipping, payment, and confirmation screens I would expect. You can zoom out to see a canvas-style overview and manipulate layers, with a styles tab that lets you inspect and edit elements.

The dealbreaker? Everything gets generated as a single page application, making it frustratingly difficult to locate and edit specific states like shipping or payment. I couldn’t find these states visually or directly in the pages panel—they might’ve been buried somewhere in the layers, but I couldn’t make heads or tails of it. When I tried using it again today to capture the styles functionality for the video, I hit the same wall that plagued several other tools I tested—blank previews and errors. Despite going back and forth with the AI, I couldn’t get it running again.

Score: 71

Subframe

Subframe screenshot: A design tool interface with a checkout page showing a cart with items, a shipping summary, and the option to continue to payment.

My time with Subframe revealed a tool that takes a different approach to the same checkout prompt. Unlike most competitors, Subframe can’t create an entire flow at once (though I hear they’re working on multi-page capabilities). But honestly, I kind of like this limitation—it forces you as a designer to actually think through the process.

What sets Subframe apart is its Midjourney-like approach, offering four different design options that gradually come into focus. These aren’t just static mockups but fully coded, interactive pages you can preview in miniature. After selecting a shopping cart design, I simply asked it to create the next page, and it intelligently moved to shipping/billing info.

The real magic is having actual design tools—layers panel, property inspector, direct manipulation—alongside the ability to see the working React code. For designers who want control beyond just accepting whatever the AI spits out, Subframe delivers the best combination of AI generation and familiar design tooling.

Score: 79

Tempo

Tempo screenshot: A developer tool interface generating a clothing store checkout flow, showing wireframe components and code previews.

Lastly, Tempo. This one takes a different approach than most other tools. It starts by generating a PRD from your prompt, then creates a user flow diagram before coding the actual screens—mimicking the steps real product teams would take. Within minutes, it had generated all the different pages for my shopping cart checkout experience. That’s impressive speed, but from a design standpoint, it’s just fine. The visual design ends up being fairly plain, and the prototype had some UX issues—the payment card change was hard to notice, and the “Place order” action didn’t properly lead to a confirmation screen even though it existed in the flow.

The biggest disappointment was with Tempo’s supposed differentiator. Their DOM inspector theoretically allows you to manipulate components directly on canvas like you would in Figma—exactly what designers need. But I couldn’t get it to work no matter how hard I tried. I even came back days later to try again with a different project and reached out to their support team, but after a brief exchange—crickets. Without this feature functioning, Tempo becomes just another prompt-to-code tool rather than something truly designed for visual designers who want to manipulate components directly. Not great.

Score: 59

The Verdict: Control Beats Code Every Time

Subframe screenshot: A design tool interface displaying a checkout page for a fashion store with a cart summary and a “Proceed to Checkout” button.

Subframe offers actual design tools—layers panel, property inspector, direct manipulation—along with AI chat.

I’ve spent the last couple weeks testing these prompt-to-code tools, and if there’s one thing that’s crystal clear, it’s this: for designers who want actual design control rather than just code manipulation, Subframe is the standout winner.

I will caveat that I didn’t do a deep dive into every single tool. I played with them at a cursory level, giving each a fair shot with the same prompt. What I found was a mix of promising starts and frustrating dead ends.

The reality of AI tools is their probabilistic nature. Sometimes they’ll solve problems easily, and then at other times they’ll spectacularly fail. I experienced this firsthand when retesting both Lovable and Bolt with their latest features—both performed worse than in my initial testing just two weeks ago. Blank screens. Error messages. No rhyme or reason.

For designers like me, the dealbreaker with most of these tools is being forced to manipulate designs through code rather than through familiar design interfaces. We need to be able to directly manipulate components by clicking and modifying shapes on the canvas or changing values in an inspector. That’s where Subframe delivers and the others fall short, though designers may simply not be their target audience.

For us designers, I believe Subframe could be the answer. But I’m also curious whether Figma will have an answer of its own. Will the company get into the AI > design > code game? Or will it be left behind?

The future belongs to applications that balance AI assistance with familiar design tooling—not just code generators with pretty previews.

While Josh W. Comeau writes for his developer audience, a lot of what he says can be applied to design. Referring to a recent Forbes article:

AI may be generating 25% of the code that gets committed at Google, but it’s not acting independently. A skilled human developer is in the driver’s seat, using their knowledge and experience to guide the AI, editing and shaping its output, and mixing it in with the code they’ve written. As far as I know, 100% of code at Google is still being created by developers. AI is just one of many tools they use to do their job.

In other words, developers are editing and curating the output of AI, which is where I believe the design discipline will end up soon.

On incorporating Cursor into his workflow:

And that’s kind of a problem for the “no more developers” theory. If I didn’t know how to code, I wouldn’t notice the subtle-yet-critical issues with the model’s output. I wouldn’t know how to course-correct, or even realize that course-correction was required!

I’ve heard from no-coders who have built projects using LLMs, and their experience is similar. They start off strong, but eventually reach a point where they just can’t progress anymore, no matter how much they coax the AI. The code is a bewildering mess of non sequiturs, and beyond a certain point, no amount of duct tape can keep it together. It collapses under its own weight.

I’ve noticed that too. For a non-coder like me, rebuilding this website yet again—I need to write a post about it—has been a challenge, but I knew and learned enough to get something out there that works. And yes, relying solely on AI for any professional work right now is precarious. It still requires guidance.

On the current job market for developers and the pace of AI:

It seems to me like we’ve reached the point in the technology curve where progress starts becoming more incremental; it’s been a while since anything truly game-changing has come out. Each new model is a little bit better, but it’s more about improving the things it already does well rather than conquering all-new problems.

This is where I’ll disagree with him. I think the AI labs are holding back the super-capable models they use internally. Tools like Claude Code and the newly released OpenAI Codex are clues that the foundational model companies have more powerful agents behind the scenes. And those agents are building the next generation of models.


The Post-Developer Era

When OpenAI released GPT-4 back in March 2023, they kickstarted the AI revolution. The consensus online was that front-end development jobs would be totally eliminated within a year or two. Well, it’s been more than two years since then, and I thought it was worth revisiting some of those early predictions, and seeing if we can glean any insights about where things are headed.

joshwcomeau.com
Griffin AI logo

How I Built and Launched an AI-Powered App

I’ve always been a maker at heart—someone who loves to bring ideas to life. When AI exploded, I saw a chance to create something new and meaningful for solo designers. But making Griffin AI was only half the battle…

Birth of an Idea

About a year ago, a few months after GPT-4 was released and took the world by storm, I worked on several AI features at Convex. One was a straightforward email drafting feature, but with a twist: we incorporated details we knew about the sender, such as their role and offering, and about the recipient, including their role and their company’s industry. To accomplish this, I combined some prompt engineering with data from our data providers, shaping the responses we got from GPT-4.

Playing with this new technology was incredibly fun and eye-opening. And that gave me an idea. Foundational large language models (LLMs) aren’t great yet at factual data retrieval and analysis, but they’re pretty decent at creativity. No, GPT, Claude, or Gemini couldn’t write an Oscar-winning screenplay or win the Pulitzer Prize for poetry, but they’re not bad for starter ideas that are good enough for specific use cases. Hold that thought.

I belong to a Facebook group for WordPress developers and designers. From the posts in the group, I could see most members were solopreneurs, with very few having worked at a large agency. From my time at Razorfish, Organic, Rosetta, and others, branding projects always included brand strategy, usually weeks- or months-long endeavors led by brilliant brand or digital strategists. These brand insights and positioning always led to better work and transformed our relationship with the client into a partnership.

So, I saw an opportunity. Harness the power of gen AI to create brand strategies for this target audience. In my mind, this could allow these solo developers and designers to charge a little more money, give their customers more value, and, most of all, act like true partners.

Validating the Problem Space

The prevailing wisdom is to leverage Facebook groups and Reddit forums to perform cheap—free—market research. However, the reality is that good online communities ban this sort of activity. So, even though I had a captive audience, I couldn’t outright ask. The next best thing for me was paid research. I found Pollfish, an online survey platform that could assemble a panel of 100 web developers who own their own businesses. According to the data, there was overwhelming interest in a tool like this.*

Screenshot of two survey questions showing 79% of respondents would "Definitely buy" and "probably buy" Griffin AI, and 58% saying they need the app a lot.

Notice the asterisk. We’ll come back to that later on.

I also asked some of my designer and strategist friends who work in branding. They all agreed that there was likely a market for this.

Testing the Theory

I had a vague sense of what the application would be. The cool thing about ChatGPT is that you can bounce ideas back and forth with it as almost a co-creation partner. But you had to know what to ask, which is why prompt engineering skills were developed.

I first tested GPT-3.5’s general knowledge. Did it know about brand strategy? Yes. What about specific books on brand strategy, like Designing Brand Identity by Alina Wheeler? Yes. OK, so the knowledge is in there. I just needed the right prompts to coax out good answers.

I developed a method whereby the prompt reminded GPT of how to come up with the answer and, of course, contained the input from the user about the specific brand.
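To make that concrete, here’s a rough sketch of the kind of templated prompt I mean, written with the current OpenAI Node SDK. The function name, model, and wording are illustrative only, not Griffin AI’s actual code:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical template: remind the model how a strategist derives the
// answer, then inject the user's input about their specific brand.
async function draftPositioning(brand: string, audience: string, offering: string) {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // stand-in; Griffin AI used GPT-3.5/4-era models
    messages: [
      {
        role: "system",
        content:
          "You are a senior brand strategist. Derive a positioning statement " +
          "the way a strategist would: audience first, then differentiator, then proof.",
      },
      {
        role: "user",
        content: `Brand: ${brand}\nAudience: ${audience}\nOffering: ${offering}\nWrite a one-paragraph positioning statement.`,
      },
    ],
  });
  return response.choices[0].message.content;
}
```

The “reminder” lives in the system message; the user’s answers get interpolated into the user message.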

Screenshot of prompt

Through trial and error and burning through a lot of OpenAI credits, I figured out a series of questions and prompts to produce a decent brand strategy document.

I tested this flow with a variety of brands, including real ones I knew and fake ones I’d have GPT imagine.

Designing the MVP

The Core Product

Now that I had the conceptual flow, I had to develop a UI to solicit the answers from the user and have those answers inform subsequent prompts. Everything builds on itself.

I first tried an open chat, just like ChatGPT, but with specific questions. The only issue was that I couldn’t limit what the user wrote in the text box.

Early mockup of the chat UI for Griffin AI

Early mockup of the chat UI for Griffin AI

AI Prompts as Design

Because the prompts were central to the product design, I decided to add them into my Figma file as part of the flow. In each prompt, I indicated where the user inputs would be injected. Also, most of the answers from the LLM needed to be stored for reuse in later parts of the flow.
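Conceptually, the flow worked something like this sketch: each answer is stored and injected into later prompts. The step names and the `ask` helper are hypothetical stand-ins, not the app’s real code:

```typescript
// Hypothetical chained flow: each LLM answer becomes context for the next prompt.
// `ask` stands in for whatever function sends a prompt to the LLM.
async function runStrategyFlow(
  brandInput: string,
  ask: (prompt: string) => Promise<string>
) {
  const audience = await ask(
    `Given this brand: ${brandInput}\nDescribe the target audience.`
  );
  const positioning = await ask(
    `Brand: ${brandInput}\nAudience: ${audience}\nWrite a positioning statement.`
  );
  const values = await ask(
    `Positioning: ${positioning}\nList five brand values that support it.`
  );
  return { audience, positioning, values }; // stored for reuse later in the flow
}
```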

Screenshot of app flow in Figma

AI prompts are indicated directly in the Figma file

Living With Imperfect Design

Knowing that I wanted a freelance developer to help me bring my idea to life, I didn’t want to fuss too much about the app design. So, I settled on using an off-the-shelf design system called Flowbite. I just tweaked the colors and typography and lived with the components as-is.

Building the MVP

Building the app would be out of my depth. When GPT-3.5 first came out, I test-drove it for writing simple Python scripts. But it failed, and I couldn’t figure out a good workflow to get working code. So I gave up. (Of course, fast-forward to now, and gen AI for coding is much better!)

I posted a job on Upwork and interviewed four developers. I chose Geeks of Kolachi, a development agency out of Pakistan. I picked them because they were an agency—meaning they would be a team rather than an individual. Their process included oversight and QA, which I was familiar with working at a tech company.

Working Proof-of-Concept in Six Weeks

In just six weeks, I had a working prototype that I could start testing with real users. My first beta testers were friends who graciously gave me feedback on the chat UI.

Through this early user testing, I found that I needed to change the UI. Users wanted more real estate for the generated content, and the free response feedback text field was simply too open, as users didn’t know what to do next.

So I spent another few weekends redesigning the main chat UI, and then the development team needed another three or four weeks to refactor the interface.

Mockup of the revised chat UI

The revised UI gives more room for the main content and allows the user to make their own adjustments.

AI Slop?

As a creative practitioner, I was very sensitive to not developing a tool that would eliminate jobs. The fact is that the brand strategies GPT generated were OK; they were good enough. However, to create a real strategy, a lot more research is required. This would include interviewing prospects, customers, and internal stakeholders, studying the competition, and analyzing market trends.

Griffin AI was a shortcut to producing a brand strategy good enough for a small local or regional business. It was something the WordPress developer could use to inform their website design. However, these businesses would never be able to afford the services of a skilled agency strategist in addition to the logo or website work.

But the solo designer could charge a little extra for this branding exercise or provide more value on top of their normal offering.

I spent a lot of time tweaking the prompts and the flow to produce more than decent brand strategies for the likes of Feline Friends Coffee House (cat cafe), WoofWagon Grooming (mobile pet wash), and Dice & Duels (board game store).

Beyond the Core Product

While the core product was good enough for an MVP, I wanted to figure out a valuable feature to justify monthly recurring revenue, aka a subscription. LLMs are pretty good at mimicking voice and tone if you give them enough direction. So I decided to include copywriting as a feature: writing based on a brand voice created after the brand strategy has been developed. ChatGPT isn’t primed to write in a consistent voice, but it can with the right prompting and context.
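As a sketch of the idea (the shape and field names are my own illustration, not the app’s actual schema), the brand voice distilled from the strategy step becomes standing instructions for every copywriting request:

```typescript
// Illustrative only: a stored brand voice, derived from the finished strategy,
// is turned into a reusable instruction block for the LLM.
interface BrandVoice {
  personality: string;  // e.g. "warm, playful, community-minded"
  tone: string;         // e.g. "casual but never sloppy"
  favors: string[];     // words the brand leans on
  avoids: string[];     // words the brand never uses
}

function buildVoicePrompt(voice: BrandVoice): string {
  return [
    `Write in this brand voice: ${voice.personality}.`,
    `Tone: ${voice.tone}.`,
    `Prefer words like: ${voice.favors.join(", ")}.`,
    `Never use: ${voice.avoids.join(", ")}.`,
  ].join("\n");
}
```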

Screenshots of the Griffin AI marketing site

Screenshots of the Griffin AI marketing site

Beyond those two features, I also had to build ancillary app services like billing, administration, onboarding, tutorials, and help docs. I had to extend the branding and come up with a marketing website. All this ate up weeks more time.

Failure to Launch

They say the last 20% takes 80% of the time, or something like that. And it’s true. The stuff beyond the core features just took a long time to get right. While the dev team was building and fixing bugs, I was on Reddit, trying to gather leads to check out the app in its beta state.

Griffin AI finally launched in mid-June. I made announcements on my social media accounts. Some friends congratulated me and even checked out the app a little. But my agency and tech company friends weren’t the target audience. No, my ideal customer was in that WordPress developers Facebook group where I couldn’t do any self-promotion.

Screenshot of the announcement on LinkedIn

I continued to talk about it on Reddit and everywhere I could. But the app never gained traction. I wasn’t savvy enough to build momentum and launch on Product Hunt. The Summer Olympics in Paris happened. Football season started. The Dodgers won the World Series. And I got all of one sale.

When I told this customer that I was going to shut down the app, he replied, “I enjoyed using the app, and it helped me brief my client on a project I’m working on.” Yup, that was the idea! But not enough people knew about it or thought it was worthwhile to keep it going.

Lessons Learned

I’m shutting Griffin AI down, but I’m not too broken up about it. I learned a lot, and that’s all that matters. Call it paying tuition to the school of life.

When I perform a post-mortem on why it didn’t take off, I can point to a few things.

I’m a maker, not a seller.

I absolutely love making and building. And I think I’m not too bad at it. But I hate the actual process of marketing and selling. I believe that had I poured more time and money into getting the word out, I could have attracted more customers. Maybe.

Don’t rely on survey data.

Remember the asterisk? The Pollfish data that showed interest in a product like this? Well, I wonder whether it was a good panel at all. In the verbatims, some comments didn’t sound like the respondents were US-based, business owners, or taking the survey seriously. Comments like “i extremely love griffin al for many more research” and “this is a much-needed assistant for my work.” Next time, instead of survey data from a suspect panel, I’ll do more first-hand research before jumping in.

AI moves really fast.

AI has been a rocket ship this past year-and-a-half. Keeping up with the changes and new capabilities is brutal as a side hustle and as a non-engineer. While I thought there might be a market for a specialized AI tool like Griffin, I think people are satisfied enough with a horizontal app like ChatGPT. To break through, you’d have to do something very different. I think Cursor and Replit might be onto something.


I still like making things, and I’ll always be a tinkerer. But maybe next time, I’ll be a little more aware of my limitations and either push past them or find collaborators who can augment my skills.

Closeup of MU/TH/UR 9000 computer screen from the movie Alien: Romulus

Re-Platforming with a Lot of Help From AI

I decided to re-platform my personal website, moving it from WordPress to React. It was spurred by a curiosity to learn a more modern tech stack like React, and by the drama that erupted in the WordPress community last month. While I doubt WordPress is going away anytime soon, I do think this rift opens the door for designers, developers, and clients to consider alternatives.

First off, I’m not a developer by any means. I’m a designer and understand technical things well, but I can’t code. When I was young, I wrote programs in BASIC and HyperCard. In the early days of content management systems, I built a version of my personal site using ExpressionEngine. I was always able to tweak CSS to style themes in WordPress. When Elementor came on the scene, I could finally build WP sites from scratch. Eventually, I graduated to other page builders like Oxygen and Bricks.

So, rebuilding my site in React wouldn’t be easy. I went through the React foundations tutorial by Next.js and their beginner full-stack course. But honestly, I just followed the steps and copied the code, barely understanding what was being done and not remembering any syntax. Then I stumbled upon Cursor, and a whole new world opened up.

Screenshot of the Cursor website, promoting it as “The AI Code Editor” designed to boost productivity. It features a “Download for Free” button, a 1-minute demo video, and a coding interface with AI-generated suggestions and chat assistance.

Cursor is an AI-powered code editor (IDE) like VS Code. In fact, it’s a fork of VS Code with AI chat bolted onto the side panel. You can ask it to generate and debug code for you. And it works! I was delighted when I asked it to create a light/dark mode toggle for my website. In seconds, it outputted code in the chat for three files. I would have to go into each code example and apply it to the correct file, but even that’s mostly automatic. I simply have to accept or reject the changes as the diff showed up in the editor. And I had dark mode on my site in less than a minute. I was giddy!
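For flavor, a toggle like that can be as small as this sketch, assuming Tailwind’s `class` dark-mode strategy. It’s not the exact code Cursor generated for me:

```tsx
"use client";

import { useEffect, useState } from "react";

// Minimal toggle assuming `darkMode: "class"` in tailwind.config:
// flipping the `dark` class on <html> activates all dark: variants.
export function ThemeToggle() {
  const [dark, setDark] = useState(false);

  useEffect(() => {
    document.documentElement.classList.toggle("dark", dark);
  }, [dark]);

  return (
    <button onClick={() => setDark(!dark)}>
      {dark ? "Switch to light" : "Switch to dark"}
    </button>
  );
}
```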

To be clear, it still took about two weekends of work and a lot of trial and error to finish the project. But a non-coder like me, who still can’t understand JavaScript, would not have been able to re-platform their site to a modern stack without the help of AI.

Here are some tips I learned along the way.

Plan the Project and Write a PRD

While watching some React and Next.js tutorials on YouTube, this video about 10xing your Cursor workflow by Jason Zhou came up. I didn’t watch the whole thing, but his first suggestion was to write a product requirements document, or PRD, which made a lot of sense. So that’s what I did. I wrote a document that spelled out the background (why), what I wanted the user experience to be, what the functionality should be, and which technologies to use. Not only did this help Cursor understand what it was building, but it also helped me define the functionality I wanted to achieve.

Screenshot of a project requirements document titled “Personal Website Rebuild,” outlining a plan to migrate the site rogerwong.me from WordPress to a modern stack using React, Next.js, and Tailwind CSS. It includes background context, required pages, and navigation elements for the new site.

A screenshot of my PRD

My personal website is a straightforward product when compared to the Reddit sentiment analysis tool Jason was building, but having this document that I could refer back to as I was making the website was helpful and kept things organized.

Create the UI First

I’ve been designing websites since the 1990s, so I’m pretty old school. I knew I wanted to keep the same design as my WordPress site, but I still needed to design it in Figma. I put together a quick mockup of the homepage, which was good enough to jump into the code editor.

I know enough CSS to style elements however I want, but I don’t know any best practices. Thankfully, Tailwind CSS exists. I had heard about it from my engineering coworkers but never used it. I watched a quick tutorial from Lukas, who made it very easy to understand, and I was able to code the design pretty quickly.
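To show what that looks like in practice, here’s a hypothetical hero section styled entirely with Tailwind utility classes; the markup and copy are placeholders, not my actual homepage:

```tsx
// A hypothetical hero: layout, type, spacing, and dark-mode colors are all
// utility classes, so there's no separate stylesheet to maintain.
export function Hero() {
  return (
    <section className="mx-auto max-w-2xl px-6 py-16">
      <h1 className="text-4xl font-bold text-slate-900 dark:text-slate-100">
        A designer who writes.
      </h1>
      <p className="mt-4 text-lg text-slate-600 dark:text-slate-400">
        Essays and links on design, AI, and the web.
      </p>
    </section>
  );
}
```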

Prime the AI

Once the design was in HTML and Tailwind, I felt ready to get Cursor started. In the editor, there’s a chat interface on the right side. You can include the current file, additional files, or the entire codebase for context for each chat. I fed it the PRD and told it to wait for further instructions. This gave Cursor an idea of what we were building.

Make It Dynamic

Then, I included the homepage file and told Cursor to make it dynamic according to the PRD. It generated the necessary code and, more importantly, explained its thought process and laid out instructions for implementing the code, such as which files to create and which Next.js and React modules to add.

Screenshot of the AI coding assistant in the Cursor editor helping customize Tailwind CSS Typography plugin settings. The user reports issues with link and heading colors, especially in dark mode. The assistant suggests editing tailwind.config.ts and provides code snippets to fix styling.

A closeup of the Cursor chat showing code generation

The UI is well-considered. For each code generation box, Cursor shows the file it should be applied to and an Apply button. Clicking the Apply button will insert the code in the right place in the file, showing the new code in green and the code to be deleted in red. You can either reject or accept the new code.

Be Specific in Your Prompts

The more specific you can be, the better Cursor will work. As I built the functionality piece by piece, I found that the generated code would work better—less error-prone—when I was specific in what I wanted.

When errors did occur, I would simply copy the error and paste it into the chat. Cursor would do its best to troubleshoot. Sometimes, it solved the problem on its first try. Other times, it would take several attempts. I would say Cursor generated perfect code the first time 80% of the time. The remainder took at least another attempt to catch the errors.

Know Best Practices

Screenshot of the Cursor AI code editor with a TypeScript file (page.tsx) open, showing a blog post index function. An AI chat panel on the right helps troubleshoot Tailwind CSS Typography plugin issues, providing a tailwind.config.ts code snippet to fix link and heading colors in dark mode.

Large language models today can’t quite plan. So, it’s essential to understand the big picture and keep that plan in mind. I had to specify the type of static site generator I wanted to build. In my case, just simple Markdown files for blog posts. However, additional best practices include SEO and accessibility. I had to have Cursor modify the working code to incorporate best practices for both, as they weren’t included automatically.
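For the curious, “simple Markdown files for blog posts” boils down to something like this sketch, which reads a posts folder at build time using the gray-matter package. The folder name and front-matter fields are my assumptions:

```typescript
import fs from "node:fs";
import path from "node:path";
import matter from "gray-matter"; // parses the front matter block in each file

const POSTS_DIR = path.join(process.cwd(), "content/posts"); // assumed location

export function getAllPosts() {
  return fs
    .readdirSync(POSTS_DIR)
    .filter((file) => file.endsWith(".md"))
    .map((file) => {
      const raw = fs.readFileSync(path.join(POSTS_DIR, file), "utf8");
      const { data, content } = matter(raw); // front matter + Markdown body
      return { slug: file.replace(/\.md$/, ""), ...data, content };
    });
}
```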

Build Utility Scripts

Since I was migrating my posts and links from WordPress, a fair bit of conversion had to be done to get them into the new format, Markdown. I thought I would have to write my own WordPress plugin or something, but when I asked Cursor how to transfer my posts, it proposed the existing WordPress-to-Markdown script. That was 90% of the work!

I ended up using Cursor to write additional small scripts to add alt text to all the images and to check for broken images. These utility scripts came in handy to process 42 posts and 45 links in the linklog.
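To give a flavor of those utility scripts, here’s a hypothetical version of the broken-image check: scan each Markdown file for image references and flag any that don’t exist on disk. The paths and conventions are assumptions:

```typescript
import fs from "node:fs";
import path from "node:path";

const POSTS_DIR = "content/posts"; // assumed location of the Markdown posts
const PUBLIC_DIR = "public";       // where Next.js serves static assets from

// Match Markdown image syntax: ![alt text](/images/example.jpg)
const IMAGE_RE = /!\[[^\]]*\]\(([^)]+)\)/g;

for (const file of fs.readdirSync(POSTS_DIR)) {
  if (!file.endsWith(".md")) continue;
  const text = fs.readFileSync(path.join(POSTS_DIR, file), "utf8");
  for (const match of text.matchAll(IMAGE_RE)) {
    const src = match[1];
    if (src.startsWith("http")) continue; // skip external images
    if (!fs.existsSync(path.join(PUBLIC_DIR, src))) {
      console.warn(`${file}: missing image ${src}`);
    }
  }
}
```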

The Takeaway: Developers’ Jobs Are Still Safe

I don’t believe AI-powered coding tools like Cursor, GitHub Copilot, and Replit will replace developers in the near future. However, I do think these tools have a place in three prominent use cases: learning, hobbying, and acceleration.

For students and those learning how to code, Cursor’s plain language summary explaining its code generation is illuminating. For hobbyists who need a little utilitarian script every once in a while, it’s also great. It’s similar to 3D printing, where you can print out a part to fix the occasional broken something.

Two-panel graphic promoting GitHub Copilot. The left panel states, “Proven to increase developer productivity and accelerate the pace of software development,” with a link to “Read the research.” The right panel highlights “55% Faster coding” with a lightning bolt icon on a dark gradient background.

For professional engineers, I believe this technology can help them do more faster. In fact, that’s how GitHub positions Copilot: “code 55% faster” by using their product. Imagine planning out an app, having the AI draft code for you, and then you can fine-tune it. Or have it debug for you. This reduces a lot of the busy work.

I’m not sure how great the resulting code is. All I know is that it’s working and creating the functionality I want. It might be similar to early versions of Macromedia (now Adobe) Dreamweaver, where the webpage looked good, but when you examined the HTML more closely, it was bloated and inefficient. Eventually, Dreamweaver’s code got better. Similarly, WordPress page builders like Elementor and Bricks Builder generated cleaner code in the end.

Tools like Cursor, Midjourney, and ChatGPT are enablers of ideas. When wielded well, they can help you do some pretty cool things. As a fun add-on to my site, I designed some dingbats—mainly because of my love for 1960s op art and ’70s corporate logos—at the bottom of every blog post. See what happens if you click them. Enjoy.

Introducing DesignScene App for iPad

I’m really proud to announce that DesignScene for iPad has shipped today. From idea to release, it’s been about a year in the making. Here’s a little trailer I made in case you missed it:


I’ll be frank and say that this app was really made for me. Like many designers, I spend a lot of my time going from website to website, looking at stuff and reading up on trends. I eventually started using RSS feeds, but even my feeds got unwieldy. I dreaded opening Google Reader and seeing “1000+” unread items.

When Apple announced the iPad 12 months ago, it struck me that this device was the perfect thing for visually browsing all of my design-related feeds. It didn’t take me too long to sketch and comp up something.

Early mockup of the DesignScene app interface, showing a grid of vibrant visual content on the left—including illustrations, photos, and videos—and a right-hand column with repeated tech news headlines about Ferrari-red robots at Santander Bank from TechCrunch. A refresh timestamp is shown at the bottom.

Of course, I’m just a designer with zero Objective-C skills whatsoever. I can do simple HTML, CSS, and even PHP, but real programming languages elude me. I knew I had to find a development partner. The problem is that there are tons of people like me with an idea, while developers are in high demand. I asked my network of friends and contacts, posted on Craigslist and BuildItWithMe, but didn’t really find anyone. I had a couple of meetings with friends of friends who were iPhone developers, but they had their own objectives. Finally, I got in touch with an old friend who was working on his first iPhone app.

I presented my idea to David and he liked it. We decided to go to iPad Dev Camp which took place a week after the iPad shipped and just a couple of weeks after David and I initially talked. We built the prototype for DesignScene at the camp (and received an Honorable Mention). We were off to a great start.

The reality of day jobs and personal lives slowed progress through the spring and summer of 2010. But in the fall, as chatter about curated content emerged, we kicked into high gear. David worked on functionality (there’s a lot of backend processing that happens so the app is as fast as it can be), and I worked on reaching out to sources to get official permission.

Fast-forward to today, and DesignScene is now available for purchase on the App Store. We’ve worked incredibly hard on this, sweated all the details (there’s actually a maintenance upgrade that we released hours after 1.0.0 went on sale), and are really proud of what we’ve accomplished. Of course we could not have done this without the immense and loving support from our families. A million thanks to our wives and kids for putting up with our late night hackathons.

We’re going to keep working to improve DesignScene (we have some neat features we’ve been thinking about), but we’re also going to think about other apps. Stay tuned and wish us luck!

iTunes Link to DesignScene app for iPad

David’s side of the story