Talking to Peter Yang, Ravi Mehta—former CPO of Tinder, now teaching AI prototyping at Reforge—walks through a live demo of building the same Spotify-style genre page three different ways. The first attempt uses a short functional prompt and produces something that, in Mehta’s words, kind of feels like AI slop. The third uses what he calls a full-stack context bundle: a functional spec, a 20-minute Figma wireframe, and a JSON file of real album data pulled together in Claude with an MCP server. The output is night and day.

His definition of the shift:

Context engineering is designing and building systems that provide an AI model with the right information and tools to accomplish the task. And I think a lot of the common mistake I see with prototyping is people don’t think about context within that 360 degree way. And as a result, people just, you know, write a quick prompt or a quick little mini spec and expect the prototype tool to be able to create something as high fidelity as what they used to create before when they had all of these different artifacts that are a critical part of the product lifecycle.

That definition will sound familiar to anyone who saw Philipp Schmid’s framing of context engineering when it first circulated. Same emphasis on “right information and tools.” It’s the working definition the field has settled on. What Mehta adds is the concrete answer to “okay, what are the three things you actually have to assemble?” Functional context (a spec), visual context (a wireframe), and data context (real structured JSON, not lorem ipsum). Skip any of them and the prototype either looks generic, behaves wrong at edge cases, or breaks suspension of disbelief the moment a real customer touches it.
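To make the three-part bundle concrete, here is a minimal sketch of what assembling functional, visual, and data context into one prompt payload might look like. This is purely illustrative: the function name, filenames, fields, and album records are all hypothetical, not from Mehta's demo (his actual workflow pulled real data through an MCP server in Claude).

```python
import json

# Hypothetical sketch of a "full-stack context bundle". Every name and
# field here is an assumption for illustration, not Mehta's actual setup.

def build_context_bundle(spec_md: str, wireframe_ref: str, albums: list) -> str:
    """Combine functional, visual, and data context into one prompt payload."""
    return "\n\n".join([
        "## Functional context (spec)",
        spec_md,
        "## Visual context (wireframe)",
        f"Match the layout in the attached file: {wireframe_ref}",
        "## Data context (real structured records, not lorem ipsum)",
        json.dumps(albums, indent=2),
    ])

# Real-looking records instead of placeholder text are what keep the
# prototype from breaking suspension of disbelief.
albums = [
    {"title": "Blue Train", "artist": "John Coltrane", "year": 1958},
    {"title": "Kind of Blue", "artist": "Miles Davis", "year": 1959},
]

bundle = build_context_bundle(
    spec_md="Genre page: header, hero carousel, album grid sorted by year.",
    wireframe_ref="genre-page.fig",
    albums=albums,
)
```

The point of the sketch is the shape, not the code: each of the three sections answers a different failure mode (generic looks, wrong edge-case behavior, fake-feeling data), so dropping any one of them degrades the prototype in a predictable way.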

The piece I want to underline is his defense of visual thinking, because the “designers are obsolete” takes haven’t stopped, and Mehta gives them a clean rebuttal:

So if you start to think differently about the different types of context that are available, you can actually get much more specific and have a lot more control over what gets built and build something that’s a lot more robust. This is functional context. The next level that is really important is visual context. […] And so here, I very quickly in Figma, just taking 20 minutes, done a wireframe, and sort of outlined what I want this interface to look like. […] The prototype needs to have a level of fidelity that’s hard to get with sort of traditional prompting techniques.

Twenty minutes in Figma, then a short prompt that says "use the attached wireframe." A wireframe does what a 17-page PRD and three rounds of trying to describe a layout in English to the model can't: it pins down the interface unambiguously. The wireframe is now part of the input to the deliverable, not just documentation of it.

The corollary cuts the other way too. If the wireframe is now an AI briefing document, the people who can produce a decent one in twenty minutes have a real edge over the people who can’t. That’s still designers, still us. It’s just that the wireframe now feeds the model directly, not only the engineer reading the spec next sprint.
