
11 posts tagged with “coding”


Auto-Tagging the Post Archive

Since I finished migrating my site from Next.js/Payload CMS to Astro, I’ve been wanting to redo the tag taxonomy for my posts. They’d gotten out of hand over time, and the tag tumbleweed grew to more than 80 tags. What the hell was I thinking when I had both “product design” and “product designer”?

Anyway, I tried a few programmatic ways to determine the best taxonomy, but ultimately culled it down manually to 29 tags. Then I really didn’t want to go back and re-tag more than 350 posts by hand, so I turned to AI. It took two attempts. The first approach, which Cursor planned for me, used classic ML to discern the tags, but it failed spectacularly because it relied on word frequency rather than semantic meaning.

So I tried an LLM approach instead, and that worked. I spec’d it out and had Claude Code write it for me. Then, after another hour or so of experimenting to see whether the resulting tags held up, I let it run concurrently in four terminal windows to process all the posts from the past 20 years. Et voilà!
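
The script itself is simple. Here’s a minimal sketch of the approach, assuming a Node/TypeScript script that sends each post to Anthropic’s Messages API along with the fixed tag list; the actual prompt, model ID, and file layout I used differ.

```typescript
// tag-posts.ts: minimal sketch of the LLM tagging pass.
// Assumptions: MDX posts live in ./src/content/posts, ANTHROPIC_API_KEY is in the
// environment, and the model ID is whatever is current. Not my exact script.
import { readdir, readFile } from "node:fs/promises";
import path from "node:path";

// Illustrative subset of the 29-tag taxonomy
const TAGS = ["ai", "coding", "product design", "typography"];

async function suggestTags(postBody: string): Promise<string[]> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-sonnet-4-20250514",
      max_tokens: 200,
      messages: [
        {
          role: "user",
          content:
            `Pick 1-3 tags for this post, using ONLY tags from this list: ${TAGS.join(", ")}. ` +
            `Reply with a JSON array of strings and nothing else.\n\n${postBody}`,
        },
      ],
    }),
  });
  const data = await res.json();
  // The Messages API returns an array of content blocks; the first is the text reply
  return JSON.parse(data.content[0].text);
}

// Print suggested tags for review before writing anything back to the files
const dir = "./src/content/posts";
for (const file of await readdir(dir)) {
  if (!file.endsWith(".mdx")) continue;
  const body = await readFile(path.join(dir, file), "utf8");
  console.log(file, await suggestTags(body));
}
```

Sharding the file list four ways is roughly how the four-terminal-windows trick would work: each instance of the script gets a different slice of the posts.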

I spot-checked at least half of all the posts manually and made some adjustments. But I’m pretty happy with the results.

See the new tags on the Search page or just click around and explore.

A computer circuit board traveling at warp speed through space with motion-blurred light streaks radiating outward, symbolizing high-performance computing and speed.

The Need for Speed: Why I Rebuilt My Blog with Astro

Two weekends ago, I quietly relaunched my blog. It was a heart transplant really, of the same design I’d launched in late March.

The First Iteration

Back in early November of last year, I re-platformed from WordPress to a home-grown, Cursor-made static site generator. I’d write in Markdown, push to my GitHub repository, and the post would be published via Vercel’s continuous deployment. The design was simple, and it was a great learning project for me.

Screenshot of Roger Wong's first blog design from November 2024, featuring a dark navy background with white text. The homepage shows a large hero section with Roger's bio and headshot, followed by a "Latest Posts" section displaying the essay "From Craft to Curation: Design Leadership in the Age of AI" with a stylized illustration of a person wearing glasses with orange and blue gradient reflections. A "Latest Links" section appears on the right side.

My first blog redesign from November 2024, built with Cursor as a static site generator. Simple, clean, and good enough to get me writing again.

As soon as I launched it, I got the bug to write more because the platform was shiny and new. And as soon as I started to write more essays, I also really wanted to write short-form comments on links in the vein of Jason Kottke and John Gruber. So in January of this year, I started to design a new version of the site.

Designing for a Feed

My idea was to create a feed-like experience, since the majority of the posts were likely going to be short and link off to external sites. I was heavily inspired by the design of Bluesky and by the aforementioned blogs. I don’t pretend to be Kottke or Gruber, but that’s the style of blog I wanted to have.

I put down my idea quickly in Figma, in bed, half-watching Top Chef with my wife.

Screenshot of a Figma design mockup showing a feed-style blog layout with a light gray background and minimal sidebar navigation on the left (Home, Posts, Linked, Search, About). The main content area displays a vertical feed of posts with colored preview cards - one coral/pink card about a Clamshell keyboard case, and one mint green card for an essay titled "Design's Purpose Remains Constant." The right sidebar shows author info and navigation links.

The initial Figma sketch done in bed while half-watching Top Chef. A feed-like layout inspired by Bluesky, optimized for short-form link posts with commentary.

The main content column is supposed to look like a social media app’s feed: a long list with link previews and commentary when I have any. I optimized the structure for mobile—though only 38% of my traffic over the last six months is mobile. I noodled on the design details for a few more nights before jumping into the tech solution.

Why I Chose a CMS

Markdown is great for writing, especially if there’s a good editor. For example, I use Ulysses for Mac (and sometimes for iPad). I can easily export MD files from Ulysses.

But because I came from WordPress, it seemed conceptually silly to me to rebuild the whole site every time I published a post. Granted, that’s how Movable Type used to do it in the old days (and I guess it still does!). So I looked around and found Payload CMS, which was built by designers and developers coming from the WordPress ecosystem. And it made sense to me: render a template and fill in the content slots with data from the database. (I’m sure the developers out there have lots of arguments for static files. I know! As you’ll see, I learned my lesson.)

I tapped Cursor again to help me build the site on Next.js with Payload as the CMS. I spent three months on it, building custom functionality and perfecting all the details, and launched quietly with my first official post on March 27, linking to a lovely visual essay by Amelia Wattenberger.

And I loved the site. The workflow was easy, which encouraged me to post regularly. It worked great. Until it didn’t.

When Things Started Breaking

The database powering the site was MongoDB, a modern cloud-based database recommended by Payload. It worked great initially. I did a lot of performance tuning to make the site feel snappy, and it mostly did. Or maybe I just got used to the lag.

But as the post count grew, three things started going wrong:

  1. List pages sometimes wouldn’t load and would return an error.
  2. Search would sometimes take forever, 10 seconds or more, to return anything.
  3. In the admin UI, clicking a post lookup menu to link to a related post often errored out.

Despite additional optimizations to minimize database connections and usage, I couldn’t solve it. The only fix was to upgrade from the lowest plan, which cost me about $10 per month, to the next tier at $60 per month. A sixfold increase for a hobby blog. I didn’t think that was a prudent financial decision.

Enter Astro.

The Migration to Astro

I looked around for a more performant content framework. Astro had come up in my initial search, and after learning more about it, it became clear this was the way to go. So I spent about a week (nights only) migrating my Next.js/Payload site to Astro. Since many of the components in the original site were written in TypeScript, it was actually not that hard to tell Claude Code and Cursor to “look at the reference” to get the styling nailed. I wanted the exact same design and only needed to change the backend. The trickiest part of the whole migration was extracting the posts from MongoDB and transforming them into Markdown files (more specifically, MDX files, which allow JavaScript within the content in case I ever need that flexibility).
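
That extraction step boiled down to a one-off script. Here’s a rough sketch of its shape, assuming the official mongodb Node driver and a simplified posts collection; Payload’s real schema (and my real field names) are messier than this.

```typescript
// export-posts.ts: one-off sketch to pull posts out of MongoDB and write MDX files.
// Assumptions: MONGODB_URI is in the environment (with the database name in the
// connection string) and posts have title/slug/publishedAt/tags/content fields.
import { MongoClient } from "mongodb";
import { mkdir, writeFile } from "node:fs/promises";

const client = new MongoClient(process.env.MONGODB_URI!);
await client.connect();
const posts = await client.db().collection("posts").find().toArray();

await mkdir("./src/content/posts", { recursive: true });

for (const post of posts) {
  // Build the MDX frontmatter by hand; a library like gray-matter would also work
  const frontmatter = [
    "---",
    `title: ${JSON.stringify(post.title)}`,
    `date: ${new Date(post.publishedAt).toISOString()}`,
    `tags: ${JSON.stringify(post.tags ?? [])}`,
    "---",
  ].join("\n");

  // In reality Payload stores rich text as structured JSON, so converting that
  // tree into Markdown/MDX is where most of the actual work went.
  await writeFile(
    `./src/content/posts/${post.slug}.mdx`,
    `${frontmatter}\n\n${post.content}\n`
  );
}

await client.close();
```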

Astro also doesn’t have built-in search, so I chose to integrate Algolia.
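
One way to wire Algolia up is to push each post’s metadata into an index at build time. A minimal sketch, assuming the v4-style algoliasearch client and gray-matter for reading frontmatter; the record shape is illustrative, not my exact schema.

```typescript
// index-posts.ts: sketch of pushing post records into an Algolia index after a build.
// Assumptions: algoliasearch v4 client, ALGOLIA_APP_ID / ALGOLIA_ADMIN_KEY in the
// environment, MDX posts in ./src/content/posts. Record fields are illustrative.
import algoliasearch from "algoliasearch";
import matter from "gray-matter";
import { readdir, readFile } from "node:fs/promises";
import path from "node:path";

const client = algoliasearch(process.env.ALGOLIA_APP_ID!, process.env.ALGOLIA_ADMIN_KEY!);
const index = client.initIndex("posts");

type PostRecord = { objectID: string; title: string; tags: string[]; excerpt: string };

const dir = "./src/content/posts";
const records: PostRecord[] = [];

for (const file of await readdir(dir)) {
  if (!file.endsWith(".mdx")) continue;
  const raw = await readFile(path.join(dir, file), "utf8");
  const { data, content } = matter(raw); // split frontmatter from the body
  records.push({
    objectID: file.replace(/\.mdx$/, ""), // stable ID so re-indexing overwrites cleanly
    title: data.title,
    tags: data.tags ?? [],
    excerpt: content.slice(0, 500), // keep records small; Algolia limits record size
  });
}

await index.saveObjects(records);
console.log(`Indexed ${records.length} posts`);
```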

The results are fantastic. The site is even faster. Search is lightning fast. Here are two comparisons I’ve done: the /posts page and a single post (specifically, “Why We Still Need a HyperCard for the AI Era”). The difference is pretty stark:

Bar chart comparing web performance metrics for a posts page between Astro and Next.js/Payload. Astro shows 33 requests, 2.8 MB transferred, 3.2 MB resources, 909ms finish time, 178ms DOMContentLoaded, and 263ms Load time. Next.js/Payload shows 87 requests, 4.5 MB transferred, 6.9 MB resources, 8.45 second finish time, 390ms DOMContentLoaded, and 479ms Load time. Astro delivers substantially faster performance across all measurements.

Performance comparison loading the posts index page: Astro (purple) vs Next.js/Payload (blue). Astro completes in 909ms with 33 requests, while Next.js takes 8.45 seconds with 87 requests.

Bar chart comparing web performance metrics for a single post page between Astro and Next.js/Payload. Astro shows 27 requests, 1.6 MB transferred, 1.7 MB resources, 746ms finish time, 84ms DOMContentLoaded, and 127ms Load time. Next.js/Payload shows 72 requests, 2.1 MB transferred, 3.6 MB resources, 21.85 second finish time, 175ms DOMContentLoaded, and 272ms Load time. Astro significantly outperforms Next.js across all metrics.

Performance comparison loading a single blog post: Astro (purple) vs Next.js/Payload (blue). Astro finishes in 746ms with 27 requests, while Next.js takes 21.85 seconds with 72 requests.

The Numbers Don’t Lie

The performance difference is staggering. On the posts page, Astro loads in under a second (909 ms) while Next.js takes over 8 seconds. For a single post page, it’s even more dramatic—Astro finishes downloading and rendering all resources in 746 ms while Next.js takes a brutal 21.85 seconds. That’s nearly thirty times slower for the exact same content. The numbers tell the story: Astro makes two to three times fewer server requests and transfers significantly less data. But the real difference is in how it feels—with Astro, the content appears almost instantly (84 ms DOMContentLoaded on the single post), while Next.js takes about twice as long.

The kicker? Search performance. On the old Next.js/MongoDB setup, searching for “paul rand” took 3.63 seconds. With Algolia on Astro, that same search completes in 29.55 milliseconds. That’s over a hundred times faster. Not “a bit snappier.” Not “noticeably improved.” It’s the difference between a search that makes you wait and one that feels instantaneous—the kind of speed that fundamentally changes how you interact with content.

Building a Simple Admin

The advantage that Payload CMS has, of course, is its fully featured admin experience. That doesn’t come with Astro and this setup. So I started working on a simple admin UI for myself to help fill in the “frontmatter”—the metadata at the top of the MDX file, like tags, related posts, publish date, and so on.

Screenshot of a custom blog post editor interface showing two panels: the left panel contains post metadata fields including Featured Image, SEO Meta Title, Category, Tags, and Related Posts; the right panel displays the post content in Markdown format with sections on performance comparisons and building a simple admin, plus an "Upload Image" section at the bottom with fields for Bunny URL and Image Alt Text.

The simple admin UI I’m building for myself.

The basics are working so far, but there is more I’d like to do with it, including adding an AI feature to help autofill tags and write alt text for images.
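
For anyone curious what that frontmatter amounts to: in Astro it’s typically validated by a content collection schema. The sketch below mirrors the fields visible in the screenshot above (category, tags, related posts, featured image, SEO meta title); it’s illustrative, not my exact config.

```typescript
// src/content/config.ts: sketch of a posts collection schema using Astro content
// collections and zod. Field names mirror the admin UI screenshot; illustrative only.
import { defineCollection, z } from "astro:content";

const posts = defineCollection({
  type: "content",
  schema: z.object({
    title: z.string(),
    metaTitle: z.string().optional(),              // SEO meta title
    date: z.coerce.date(),                         // publish date
    category: z.string().optional(),
    tags: z.array(z.string()).default([]),
    relatedPosts: z.array(z.string()).default([]), // slugs of related posts
    featuredImage: z.string().optional(),
    imageAlt: z.string().optional(),
  }),
});

export const collections = { posts };
```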

What I Learned

Sometimes the right tool isn’t the most feature-rich one—it’s the one that gets out of the way. I spent months building on Next.js and Payload because it felt like the “proper” way to build a modern CMS-driven site. Database, API routes, server-side rendering—all the things you’re supposed to want. (I learned a lot along the way, so I don’t see any of it as time wasted.)

But here’s what I actually needed: fast page loads and a simple way to write. That’s it.

Astro gives me both. The static site generation approach I initially dismissed turned out to be exactly right for a content site like this. No database queries slowing things down. No server costs scaling with traffic. Just clean, fast HTML with the minimum JavaScript needed to make things work.

The trade-off? I lost the polished admin interface. But I gained something more valuable: a site that loads instantly and costs almost nothing to run. Between ditching the $10/month MongoDB plan (which wanted to become $60/month) and Astro’s efficient static generation, hosting costs dropped to basically just the $20/month pro plan on Vercel. For a personal blog, that’s the right exchange.

It turns out the old ways—static files, Markdown, simple deployments—weren’t outdated. They were just waiting for better tools. Astro is that better tool. And honestly? Writing in MDX files feels pretty good. Clean. Direct. Just me and the content.

The site looks exactly the same as it did before. But now it actually works the way it should have from the start.

Is the AI bubble about to burst? Apparently, AI prompt-to-code tools like Lovable and v0 have peaked and are on their way down.

Alistair Barr writing for Business Insider:

The drop-off raises tough questions for startups that flaunted exponential annual recurring revenue growth just months ago. Analysts wrote that much of that revenue comes from month-to-month subscribers who may churn as quickly as they signed up, putting the durability of those flashy numbers in doubt.

Barr interviewed Eric Simons, CEO of Bolt, who said:

“This is the problem across all these companies right now. The churn rate for everyone is really high,” Simons said. “You have to build a retentive business.”

AI vibe coding tools were supposed to change everything. Now traffic is crashing.

Vibe coding tools have seen traffic drop, with Vercel’s v0 and Lovable seeing significant declines, raising sustainability questions, Barclays warns.

businessinsider.com

In many ways, this excellent article by Kaustubh Saini for Final Round AI’s blog is a cousin to my essay on the design talent crisis. But it’s about what happens when people “become” developers and only know vibe coding.

The appeal is obvious, especially for newcomers facing a brutal job market. Why spend years learning complex programming languages when you can just describe what you want in plain English? The promise sounds amazing: no technical knowledge required, just explain your vision and watch the AI build it.

In other words, these folks don’t understand the code and, well, bad things can happen.

The most documented failure involves an indie developer who built a SaaS product entirely through vibe coding. Initially celebrating on social media that his “saas was built with Cursor, zero hand written code,” the story quickly turned dark.

Within weeks, disaster struck. The developer reported that “random things are happening, maxed out usage on api keys, people bypassing the subscription, creating random shit on db.” Being non-technical, he couldn’t debug the security breaches or understand what was going wrong. The application was eventually shut down permanently after he admitted “Cursor keeps breaking other parts of the code.”

This failure illustrates the core problem with vibe coding: it produces developers who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

I don’t foresee something this disastrous with design. I mean, a newbie designer wielding an AI-enabled Canva or Figma can’t tank a business alone because the client will have eyes on it and won’t let through something that doesn’t work. It could be a design atrocity, but it’ll likely be fine.

This *can* happen to a designer using vibe coding tools, however. Full disclosure: I’m one of them. This site is partially vibe-coded. My Severance fan project is entirely vibe-coded.

But back to the idea of a talent crisis. In the developer world, it’s already happening:

The fundamental problem is that vibe coding creates what experts call “pseudo-developers.” These are people who can generate code but can’t understand, debug, or maintain it. When AI-generated code breaks, these developers are helpless.

In other words, they don’t have the skills necessary to be developers because they can’t do the basics. They can’t debug, don’t understand architecture, have no code review skills, and basically have no fundamental knowledge of what it means to be a programmer. “They miss the foundation that allows developers to adapt to new technologies, understand trade-offs, and make architectural decisions.”

Again, even if our junior designers have the requisite fundamental design skills, not having spent time developing their craft and strategic skills through experience will be detrimental to them and to any org that hires them.


How AI Vibe Coding Is Destroying Junior Developers' Careers

New research shows developers think AI makes them 20% faster but are actually 19% slower. Vibe coding is creating unemployable pseudo-developers who can't debug or maintain code.

finalroundai.com

In case you missed it, there’s been a major shift in the AI tool landscape.

On Friday, OpenAI’s $3 billion offer to acquire AI coding tool Windsurf expired. Windsurf is the Pepsi to Cursor’s Coke. They’re both IDEs, the programming desktop applications that software developers use to code. Think of them as supercharged text editors with AI built in.

On Friday evening, Google announced that it had hired Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and several key researchers for $2.4 billion.

On Monday, Cognition, the company behind Devin, the self-described “AI engineer,” announced that it had acquired Windsurf for an undisclosed sum, noting that its remaining 250 employees will “participate financially in this deal.”

Why does this matter to designers?

The AI tools market is changing very rapidly. With AI helping to write these applications, their numbers and features are always increasing—or in this case, maybe consolidating. Choose wisely before investing too deeply into one particular tool. The one piece of advice I would give here is to avoid lock-in. Don’t get tied to a vendor. Ensure that your tool of choice can export your work—the code.

Jason Lemkin has more on the business side of things and how it affects VC-backed startups.


Did Windsurf Sell Too Cheap? The Wild 72-Hour Saga and AI Coding Valuations

The last 72 hours in AI coding have been nothing short of extraordinary. What started as a potential $3 billion OpenAI acquisition of Windsurf ended with Google poaching Windsurf’s CEO and co…

saastr.com

Ted Goas, writing in UX Collective:

I predict the early parts of projects, getting from nothing to something, will become shared across roles. For designers looking to branch out, code is a natural next step. I see a future where we’re fixing small bugs ourselves instead of begging an engineer, implementing that animation that didn’t make the sprint but you know would absolutely slap, and even building simple features when engineering resources are tight.

Our new reality is that anyone can make a rough draft.

But that doesn’t mean those drafts are good. That’s where our training and taste come in.

I think Goas is right and it echoes the AI natives post by Elena Verna. I wrote a little more extensively in my newsletter over the weekend.


Designers: We’ll all be design engineers in a year

And that’s a good thing.

uxdesign.cc

Darragh Burke and Alex Kern, software engineers at Figma, writing on the Figma blog:

Building code layers in Figma required us to reconcile two different models of thinking about software: design and code. Today, Figma’s visual canvas is an open-ended, flexible environment that enables users to rapidly iterate on designs. Code unlocks further capabilities, but it’s more structured—it requires hierarchical organization and precise syntax. To reconcile these two models, we needed to create a hybrid approach that honored the rapid, exploratory nature of design while unlocking the full capabilities of code.

The solution turned out to be code layers: actual canvas primitives that can be manipulated just like a rectangle and that respect auto layout properties, opacity, border radius, and so on.

The solution we arrived at was to implement code layers as a new canvas primitive. Code layers behave like any other layer, with complete spatial flexibility (including moving, resizing, and reparenting) and seamless layout integration (like placement in autolayout stacks). Most crucially, they can be duplicated and iterated on easily, mimicking the freeform and experimental nature of the visual canvas. This enables the creation and comparison of different versions of code side by side. Typically, making two copies of code for comparison requires creating separate git branches, but with code layers, it’s as easy as pressing ⌥ and dragging. This automatically creates a fork of the source code for rapid riffing.

In my experience, it works as advertised, though the code layer element takes a second to re-render when its spatial properties are edited. Makes sense, since it’s rendering code.


Canvas, Meet Code: Building Figma’s Code Layers

What if you could design and build on the same canvas? Here's how we created code layers to bring design and code together.

figma.com

David Singleton, writing in his blog:

Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without. Not in a hand-wavy, hype-cycle way, but in a very concrete “this is changing how I ship software” way.

I have to agree. My recent tinkering projects with Cursor using Claude 4 Sonnet (and set to Cursor’s MAX mode) have been much smoother and much more autonomous.

And Singleton has found that Claude Code and OpenAI Codex are good for different things:

For personal tools, I’ve completely shifted my approach. I don’t even look at the code anymore - I describe what I want to Claude Code, test the result, make some minor tweaks with the AI and if it’s not good enough, I start over with a slightly different initial prompt. The iteration cycle is so fast that it’s often quicker to start over than trying to debug or modify the generated code myself. This has unlocked a level of creative freedom where I can build small utilities and experiments without the usual friction of implementation details.

And the larger point Singleton makes is that if you direct the right context to the reasoning model, it can help you solve your problem more effectively:

This points to something bigger: there’s an emerging art to getting the right state into the context window. It’s sometimes not enough to just dump code at these models and ask “what’s wrong?” (though that works surprisingly often). When stuck, you need to help them build the same mental framework you’d give to a human colleague. The sequence diagram was essentially me teaching Claude how to think about our OAuth flow. In another recent session, I was trying to fix a frontend problem (some content wouldn’t scroll) and couldn’t figure out where I was missing the correct CSS incantation. Cursor’s Agent mode couldn’t spot it either. I used Chrome dev tools to copy the entire rendered HTML DOM out of the browser, put that in the chat with Claude, and it immediately pinpointed exactly where I was missing an overflow: scroll.

For my designer audience out there—likely 99% of you—I think this post is informative about how to work with reasoning models like Claude 4 or o4. This totally applies to prompt-to-code tools like Lovable and v0, and these ideas likely apply to Figma Make and Subframe as well.


Coding agents have crossed a chasm

Somewhere in the last few months, something fundamental shifted for me with autonomous AI coding agents. They’ve gone from a “hey this is pretty neat” curiosity to something I genuinely can’t imagine working without.

blog.singleton.io

Brad Feld is sharing the Cursor prompts his friend Michael Natkin put together. It’s more or less the same as what I’ve gleaned from the Cursor forums, but it’s nice to have it all consolidated in one place. If you’re curious to tackle a weekend coding project, follow these steps.


Vibecoding Prompts

A long time ago, in a galaxy far, far away, I was a CTO of a large, fast-growing public company. Well, I was a Quasi CTO in the same way […]

feld.com

While Josh W. Comeau writes for his developer audience, a lot of what he says can be applied to design. Referring to a recent Forbes article:

AI may be generating 25% of the code that gets committed at Google, but it’s not acting independently. A skilled human developer is in the driver’s seat, using their knowledge and experience to guide the AI, editing and shaping its output, and mixing it in with the code they’ve written. As far as I know, 100% of code at Google is still being created by developers. AI is just one of many tools they use to do their job.

In other words, developers are editing and curating the output of AI, just like where I believe the design discipline will end up soon.

On incorporating Cursor into his workflow:

And that’s kind of a problem for the “no more developers” theory. If I didn’t know how to code, I wouldn’t notice the subtle-yet-critical issues with the model’s output. I wouldn’t know how to course-correct, or even realize that course-correction was required!

I’ve heard from no-coders who have built projects using LLMs, and their experience is similar. They start off strong, but eventually reach a point where they just can’t progress anymore, no matter how much they coax the AI. The code is a bewildering mess of non sequiturs, and beyond a certain point, no amount of duct tape can keep it together. It collapses under its own weight.

I’ve noticed that too. For a non-coder like me, rebuilding this website yet again—I need to write a post about it—has been a challenge. But I knew, and learned, enough to get something out there that works. Still, relying solely on AI for any professional work right now is precarious. It requires guidance.

On the current job market for developers and the pace of AI:

It seems to me like we’ve reached the point in the technology curve where progress starts becoming more incremental; it’s been a while since anything truly game-changing has come out. Each new model is a little bit better, but it’s more about improving the things it already does well rather than conquering all-new problems.

This is where I disagree with him. I think the AI labs are holding back the super-capable models they’re using internally. Tools like Claude Code and the newly released OpenAI Codex are clues that the foundation model companies have more powerful agents behind the scenes. And those agents are building the next generation of models.


The Post-Developer Era

When OpenAI released GPT-4 back in March 2023, they kickstarted the AI revolution. The consensus online was that front-end development jobs would be totally eliminated within a year or two. Well, it’s been more than two years since then, and I thought it was worth revisiting some of those early predictions, and seeing if we can glean any insights about where things are headed.

joshwcomeau.com
Closeup of MU/TH/UR 9000 computer screen from the movie Alien: Romulus

Re-Platforming with a Lot of Help From AI

I decided to re-platform my personal website, moving it from WordPress to React. It was spurred by a curiosity to learn a more modern tech stack like React, and by the drama that erupted in the WordPress community last month. While I doubt WordPress is going away anytime soon, I do think this rift opens the door for designers, developers, and clients to consider alternatives.

First off, I’m not a developer by any means. I’m a designer and understand technical things well, but I can’t code. When I was young, I wrote programs in BASIC and HyperCard. In the early days of content management systems, I built a version of my personal site using ExpressionEngine. I was always able to tweak CSS to style themes in WordPress. When Elementor came on the scene, I could finally build WP sites from scratch. Eventually, I graduated to other page builders like Oxygen and Bricks.

So, rebuilding my site in React wouldn’t be easy. I went through the React foundations tutorial by Next.js and their beginner full-stack course. But honestly, I just followed the steps and copied the code, barely understanding what was being done and not remembering any syntax. Then I stumbled upon Cursor, and a whole new world opened up.

Screenshot of the Cursor website, promoting it as “The AI Code Editor” designed to boost productivity. It features a “Download for Free” button, a 1-minute demo video, and a coding interface with AI-generated suggestions and chat assistance.

Cursor is an AI-powered code editor (IDE) like VS Code. In fact, it’s a fork of VS Code with AI chat bolted onto the side panel. You can ask it to generate and debug code for you. And it works! I was delighted when I asked it to create a light/dark mode toggle for my website. In seconds, it output code in the chat for three files. I had to go into each code example and apply it to the correct file, but even that’s mostly automatic. I simply had to accept or reject the changes as the diff showed up in the editor. And I had dark mode on my site in less than a minute. I was giddy!
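
For a sense of scale, a light/dark toggle like that boils down to something like the sketch below. This is my hedged reconstruction, not Cursor’s actual output, and it assumes Tailwind’s class-based dark mode in a Next.js/React component.

```tsx
// ThemeToggle.tsx: sketch of a light/dark mode toggle. Assumes Tailwind is
// configured with darkMode: "class". A reconstruction, not Cursor's actual output.
"use client";
import { useEffect, useState } from "react";

export default function ThemeToggle() {
  const [dark, setDark] = useState(false);

  // Read the saved preference once on mount
  useEffect(() => {
    setDark(localStorage.getItem("theme") === "dark");
  }, []);

  // Toggle the `dark` class on <html> and persist the choice
  useEffect(() => {
    document.documentElement.classList.toggle("dark", dark);
    localStorage.setItem("theme", dark ? "dark" : "light");
  }, [dark]);

  return (
    <button onClick={() => setDark(!dark)} aria-label="Toggle dark mode">
      {dark ? "Dark" : "Light"}
    </button>
  );
}
```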

To be clear, it still took about two weekends of work and a lot of trial and error to finish the project. But a non-coder like me, who still can’t understand JavaScript, would not have been able to re-platform my site to a modern stack without the help of AI.

Here are some tips I learned along the way.

Plan the Project and Write a PRD

While watching some React and Next.js tutorials on YouTube, this video about 10xing your Cursor workflow by Jason Zhou came up. I didn’t watch the whole thing, but his first suggestion was to write a product requirements document, or PRD, which made a lot of sense. So that’s what I did. I wrote a document that spelled out the background (why), what I wanted the user experience to be, what the functionality should be, and which technologies to use. Not only did this help Cursor understand what it was building, but it also helped me define the functionality I wanted to achieve.

Screenshot of a project requirements document titled “Personal Website Rebuild,” outlining a plan to migrate the site rogerwong.me from WordPress to a modern stack using React, Next.js, and Tailwind CSS. It includes background context, required pages, and navigation elements for the new site.

A screenshot of my PRD

My personal website is a straightforward product when compared to the Reddit sentiment analysis tool Jason was building, but having this document that I could refer back to as I was making the website was helpful and kept things organized.

Create the UI First

I’ve been designing websites since the 1990s, so I’m pretty old school. I knew I wanted to keep the same design as my WordPress site, but I still needed to design it in Figma. I put together a quick mockup of the homepage, which was good enough to jump into the code editor.

I know enough CSS to style elements however I want, but I don’t know any best practices. Thankfully, Tailwind CSS exists. I had heard about it from my engineering coworkers but never used it. I watched a quick tutorial from Lukas, who made it very easy to understand, and I was able to code the design pretty quickly.
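
If you’ve never seen Tailwind, the gist is that you style elements with small utility classes right in the markup instead of writing separate CSS. Something in this vein (illustrative, not my actual homepage code):

```tsx
// PostCard.tsx: illustrative Tailwind styling, not my actual homepage markup
export function PostCard({ title, excerpt }: { title: string; excerpt: string }) {
  return (
    <article className="rounded-lg bg-white p-6 shadow-sm dark:bg-slate-900">
      <h2 className="text-xl font-semibold text-slate-900 dark:text-white">{title}</h2>
      <p className="mt-2 text-sm text-slate-600 dark:text-slate-300">{excerpt}</p>
    </article>
  );
}
```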

Prime the AI

Once the design was in HTML and Tailwind, I felt ready to get Cursor started. In the editor, there’s a chat interface on the right side. You can include the current file, additional files, or the entire codebase for context for each chat. I fed it the PRD and told it to wait for further instructions. This gave Cursor an idea of what we were building.

Make It Dynamic

Then, I included the homepage file and told Cursor to make it dynamic according to the PRD. It generated the necessary code and, more importantly, its thought process and instructions on implementing the code, such as which files to create and which Next.js and React modules to add.

Screenshot of the AI coding assistant in the Cursor editor helping customize Tailwind CSS Typography plugin settings. The user reports issues with link and heading colors, especially in dark mode. The assistant suggests editing tailwind.config.ts and provides code snippets to fix styling.

A closeup of the Cursor chat showing code generation

The UI is well-considered. For each code generation box, Cursor shows the file it should be applied to and an Apply button. Clicking the Apply button will insert the code in the right place in the file, showing the new code in green and the code to be deleted in red. You can either reject or accept the new code.

Be Specific in Your Prompts

The more specific you can be, the better Cursor will work. As I built the functionality piece by piece, I found that the generated code would work better—less error-prone—when I was specific in what I wanted.

When errors did occur, I would simply copy the error and paste it into the chat. Cursor would do its best to troubleshoot. Sometimes, it solved the problem on its first try. Other times, it would take several attempts. I would say Cursor generated perfect code the first time 80% of the time. The remainder took at least another attempt to catch the errors.

Know Best Practices

Screenshot of the Cursor AI code editor with a TypeScript file (page.tsx) open, showing a blog post index function. An AI chat panel on the right helps troubleshoot Tailwind CSS Typography plugin issues, providing a tailwind.config.ts code snippet to fix link and heading colors in dark mode.

Large language models today can’t quite plan. So, it’s essential to understand the big picture and keep that plan in mind. I had to specify the type of static site generator I wanted to build. In my case, just simple Markdown files for blog posts. However, additional best practices include SEO and accessibility. I had to have Cursor modify the working code to incorporate best practices for both, as they weren’t included automatically.
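
As an example of what “not included automatically” means: in Next.js, per-page SEO metadata is something you have to add yourself. A hedged sketch, assuming the App Router’s metadata API; `getPost` is a hypothetical helper, not a real function in my codebase. The accessibility fixes were similar in spirit: things like alt text and semantic markup had to be asked for explicitly.

```tsx
// app/posts/[slug]/page.tsx (excerpt): sketch of per-post SEO metadata via the
// Next.js App Router. `getPost` is a hypothetical helper that loads a post by slug.
import type { Metadata } from "next";
import { getPost } from "@/lib/posts"; // hypothetical helper module

export async function generateMetadata(
  { params }: { params: { slug: string } }
): Promise<Metadata> {
  const post = await getPost(params.slug);
  return {
    title: post.title,
    description: post.excerpt,
    openGraph: { title: post.title, description: post.excerpt },
  };
}
```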

Build Utility Scripts

Since I was migrating my posts and links from WordPress, a fair bit of conversion had to be done to get it into the new format, Markdown. I thought I would have to write my own WordPress plugin or something, but when I asked Cursor how to transfer my posts, it proposed the existing WordPress-to-Markdown script. That was 90% of the work!

I ended up using Cursor to write additional small scripts to add alt text to all the images and to check for broken images. These utility scripts came in handy for processing 42 posts and 45 links in the linklog.
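
Those scripts were tiny. Here’s roughly what a broken-image check amounts to, a sketch using only Node built-ins; the directory paths are hypothetical:

```typescript
// check-images.ts: sketch that scans Markdown posts for image references pointing
// at files that don't exist. Paths are hypothetical; uses only Node built-ins.
import { access, readdir, readFile } from "node:fs/promises";
import path from "node:path";

const POSTS_DIR = "./content/posts";
const IMAGE_RE = /!\[[^\]]*\]\(([^)]+)\)/g; // Markdown image syntax: ![alt](src)

for (const file of await readdir(POSTS_DIR)) {
  if (!file.endsWith(".md")) continue;
  const body = await readFile(path.join(POSTS_DIR, file), "utf8");

  for (const match of body.matchAll(IMAGE_RE)) {
    const src = match[1];
    if (src.startsWith("http")) continue; // skip remote images
    try {
      await access(path.join("./public", src)); // local images served from /public
    } catch {
      console.warn(`${file}: missing image ${src}`);
    }
  }
}
```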

The Takeaway: Developers’ Jobs Are Still Safe

I don’t believe AI-powered coding tools like Cursor, GitHub Copilot, and Replit will replace developers in the near future. However, I do think these tools have a place in three prominent use cases: learning, hobbying, and acceleration.

For students and those learning how to code, Cursor’s plain language summary explaining its code generation is illuminating. For hobbyists who need a little utilitarian script every once in a while, it’s also great. It’s similar to 3D printing, where you can print out a part to fix the occasional broken something.

Two-panel graphic promoting GitHub Copilot. The left panel states, “Proven to increase developer productivity and accelerate the pace of software development,” with a link to “Read the research.” The right panel highlights “55% Faster coding” with a lightning bolt icon on a dark gradient background.

For professional engineers, I believe this technology can help them do more faster. In fact, that’s how GitHub positions Copilot: “code 55% faster” by using their product. Imagine planning out an app, having the AI draft code for you, and then you can fine-tune it. Or have it debug for you. This reduces a lot of the busy work.

I’m not sure how great the resulting code is. All I know is that it’s working and creating the functionality I want. It might be similar to early versions of Macromedia (now Adobe) Dreamweaver, where the webpage looked good, but when you examined the HTML more closely, it was bloated and inefficient. Eventually, Dreamweaver’s code got better. Similarly, WordPress page builders like Elementor and Bricks Builder generated cleaner code in the end.

Tools like Cursor, Midjourney, and ChatGPT are enablers of ideas. When wielded well, they can help you do some pretty cool things. As a fun add-on to my site, I designed some dingbats—mainly because of my love for 1960s op art and ’70s corporate logos—at the bottom of every blog post. See what happens if you click them. Enjoy.