How 5 Builders Actually Vibe Code With AI
Five real workflows from coders, creators, and everyone in between
Vibe coding is blowing up. Non-coders are building apps, engineers are shipping faster, and AI tools are multiplying by the week. But here’s the thing: the best vibe coders don’t follow the same method.
After months of experimenting, the five of us, all AI builders and enthusiasts, realized that the most effective AI workflows don’t come from copying others. They come from aligning with how you think.
Some of us think in data flows. Others in user journeys. Some prototype rapidly, others architect carefully.
In this post, each of us shares how we actually build with AI. These are real workflows we developed through trial, error, and late-night debugging sessions.
Before we dive in, let’s meet the builders:
Claudia - AI Weekender: Data Scientist and AI builder who discovered that thinking in data flows completely changes how AI builds system architecture.
Joel - Leadership in Change: Business leader, humanitarian, and investor who created the Leadership in Change platform to help mission-driven leaders maximize their impact through AI.
Zain - StrAItegy Hub: Fortune 100 strategy manager who discovered that systematic thinking frameworks from knowledge work are the key to getting the most out of AI.
Jenny - Build to Launch: Scientist turned AI builder who creates practical guides for the complete journey from idea to shipped products, showing anyone can thrive with AI.
Wyndo - AI Maker: AI Operator/Maker who shares AI workflows and thinking frameworks to build smarter, work faster, and live better with AI.
1. Claudia / A Data-First Perspective
Six years in FinTech taught me that mistakes in data flow could cost millions in bad loans. So when I started building with AI, I naturally described systems the way I always had: data in, transformations, data out. Here's what works for me:
Start with data flow, not features. I describe how information moves through the system before mentioning UI. Here’s an example: “When someone submits a form, the data gets validated against these rules, stored in this table structure, then sent to this API endpoint.”
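To make that concrete, here is a minimal sketch of the kind of pipeline that description produces. The field names, rules, and loan-to-income check are all hypothetical stand-ins for “validated against these rules, stored in this structure, then sent onward,” not code from any real project:

```python
from dataclasses import dataclass

# Hypothetical rules for a loan-application form; names and thresholds are illustrative only.
REQUIRED_FIELDS = {"applicant_name", "income", "loan_amount"}
MAX_LOAN_TO_INCOME = 5.0

@dataclass
class Submission:
    applicant_name: str
    income: float
    loan_amount: float

def validate(raw: dict) -> Submission:
    """Data in: check the form payload against the rules before anything else."""
    missing = REQUIRED_FIELDS - raw.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if raw["loan_amount"] > raw["income"] * MAX_LOAN_TO_INCOME:
        raise ValueError("loan amount exceeds allowed loan-to-income ratio")
    return Submission(raw["applicant_name"], float(raw["income"]), float(raw["loan_amount"]))

def to_record(sub: Submission) -> dict:
    """Transformation: shape the validated data into the row that gets stored and sent on."""
    return {
        "name": sub.applicant_name,
        "income": sub.income,
        "amount": sub.loan_amount,
        "loan_to_income": round(sub.loan_amount / sub.income, 2),
    }

# Data out: the record that would be written to the table and posted to the API endpoint.
print(to_record(validate({"applicant_name": "Ada", "income": 90_000, "loan_amount": 250_000})))
```

Describing the system in this order, inputs, rules, transformations, outputs, is exactly what I put in the prompt before any mention of screens.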
Include examples. AI-generated code gets 80% closer to what I want when I can point to examples. "Build something like [this repo] but for [my use case]" saves hours of back-and-forth.
Backend first. Define what the system does before worrying about how it looks. Frontend becomes skin around working bones.
Test with specifics. I bring concrete test cases, then ask Claude for edge cases. This is a good way to verify the AI actually understood requirements.
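Here’s what “bring concrete test cases” might look like in practice, reusing the hypothetical validate function from the sketch above (assumed, for illustration, to live in a pipeline.py module). I write the cases I already know matter, then ask Claude which edge cases the suite misses:

```python
import pytest
from pipeline import validate  # hypothetical module holding the earlier validate() sketch

def test_accepts_a_typical_application():
    sub = validate({"applicant_name": "Ada", "income": 90_000, "loan_amount": 250_000})
    assert sub.loan_amount == 250_000

def test_rejects_missing_fields():
    with pytest.raises(ValueError):
        validate({"applicant_name": "Ada"})

def test_rejects_excessive_loan_to_income():
    with pytest.raises(ValueError):
        validate({"applicant_name": "Ada", "income": 40_000, "loan_amount": 400_000})

# Then the ask to Claude: "Which edge cases do these tests miss?"
# (zero income, negative amounts, non-numeric strings, and so on.)
```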
Front-end is pure vibes. I don’t have a background in design, so I delegate this entirely to Claude and give aesthetic feedback: "Make this feel more modern."
Context management scales with tools. For simple projects, I copy code into separate Claude chats and read every function before proceeding. For complex builds, I use Claude Code's /resume feature to maintain context across sessions without losing the data transformation logic I've mapped out.
Vibe coding isn't about trusting AI completely. It's about finding the rhythm that keeps you in control while leveraging AI's speed. I think in data transformations, so that's how I talk to Claude. The best AI conversations happen when you stop trying to speak its language and start teaching it to understand yours.
2. Joel / The Rapid Prototyper
Most people I talk to get stuck planning the perfect system, either waiting for a developer or never leaving the idea phase. Meanwhile, my 7-year-old built an app in 30 minutes. This helped me realize… most of us think we need perfect plans. I think we need a few perfect failures to get started.
Let me walk you through exactly how I build, using a Leadership Decision Tracker as the example:
Step 1: Describe Problems, Not Solutions.
I opened Claude and described my problem like I'm explaining it over coffee: "I need something that helps leadership teams track their decisions and see what actually worked six months later. Too many times, we make big calls in board meetings, but six months later, nobody remembers the reasoning. Ask me a few questions, then generate a ready-to-use prompt for bolt.new"
No technical specs, just the human problem. Here’s a sample of how that went…
Step 2: See Your Idea in 10 Minutes.
I took Claude's concept and immediately prototyped it in Bolt.new (a visual web builder that lets you see working apps instantly). Here's what I got - a basic form for logging decisions with follow-up dates. It looked terrible, but it worked.
The prototype revealed I was missing decision inputs entirely.
Step 3: Let Confusion Guide Your Build.
I would normally share this with three people who will give me honest, unvarnished feedback on what works, what doesn’t, and what was just dumb to add 🙂
Many software projects fail because teams spend 6 months building features nobody wants.
Step 4: Turn User Frustration Into Better Prompts.
Take that user feedback from step 3, paste it into Claude…
Then, copy the code from Bolt, paste it into the same Claude chat window, and ask for solutions. Finally, send the solutions back to Bolt.new.
MVP (Minimum Viable Product):
This can all be done in an afternoon, depending on how quickly you get user feedback. While YES, you need more testing and time, this will get you an actual working app to build on.
Here is my result for this example, a working Strategic Decision Tracker:
For more on the intersection of Leadership & AI to maximize your impact, subscribe! And check out the tools available to supercharge you in my Premium Member Hub.
3. Zain / The Context Architect
When Andrej Karpathy declared that "The hottest new programming language is English," he fundamentally changed how we think about building software. In his recent Software 3.0 talk, he explained how "your prompts are now programs that program the LLM" and that "the better you get at prompting, the more powerful your programs become."
But here's what I discovered building OpportunityOS without any coding background: the quality of your "English programs" depends entirely on how well you architect the context that guides AI understanding.
While Karpathy proved that "everyone is now a programmer because everyone speaks a natural language like English," different builders have different strengths. Some excel through conversational flow, others through systematic structure. For me, success comes from treating each AI interaction as a context architecture challenge - systematically constructing the contextual foundation that determines whether you build something mediocre or transformational.
Context Architecture: The Four-Layer System
When I built OpportunityOS - the personalized AI prompt platform I launched in this post - I didn't start by describing features. I started by architecting context across four critical layers:
Personal Context Layer: Who is using this, what's their AI fluency level, their goals, constraints, and workflow patterns?
Problem Context Layer: What specific friction points exist in their current AI interactions? Where do they waste time or get stuck?
Business Context Layer: How does this create value? What's the usage pattern that makes it indispensable rather than just convenient?
Technical Context Layer: What does the AI need to understand about implementation to make this actually work in practice?
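To make the layering concrete, here is a minimal sketch of how the four layers can be stacked into a single structured prompt before any build request is made. The Python and the example content are purely illustrative, not the actual OpportunityOS implementation; the point is simply that the task comes last, after the context is in place:

```python
# A minimal sketch of assembling the four context layers into one "English program".
# All layer content below is invented for illustration.

LAYERS = {
    "Personal context": (
        "The user is a strategy manager with intermediate AI fluency who works "
        "in 30-minute blocks between meetings."
    ),
    "Problem context": (
        "They rewrite similar prompts from scratch every week and lose the "
        "framing that made earlier prompts work."
    ),
    "Business context": (
        "The tool is indispensable only if it saves time on the very first use; "
        "otherwise it is just another template library."
    ),
    "Technical context": (
        "Output must be a single structured prompt the user can paste into any "
        "chat interface, with no accounts or integrations required."
    ),
}

TASK = "Design a personalized prompt generator that fits the context above."

def build_prompt(layers: dict[str, str], task: str) -> str:
    """Stack the context layers first, then state the task last."""
    sections = [f"## {name}\n{content}" for name, content in layers.items()]
    return "\n\n".join(sections + [f"## Task\n{task}"])

print(build_prompt(LAYERS, TASK))
```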
The beauty of Karpathy's insight is that it opens multiple pathways to effective AI collaboration. My approach works for people who think systematically: I spend 80% of my time building the contextual architecture that makes the build inevitable. As Karpathy noted, "it's kind of a very interesting programming language, and the better you get at prompting, the more powerful your programs become."
The OpportunityOS Context Method In Action
Here's exactly how context architecture shaped my build process:
Instead of telling AI "build me a prompt generator," I first created a comprehensive manual prompt that captured the essence of what I wanted the software to accomplish. Then I used that structured prompt as the foundation to work with AI in building increasingly sophisticated context layers.
The result wasn't just a prompt generator - it was a strategic framework built on the insight from my AI Fluency Gap research that people need contextually optimized, personalized starting points rather than generic prompts. Each conversation with AI built context upon context, creating well-structured prompts that guided the AI toward my vision.
This is what Karpathy meant when he said "vibe coding is a gateway drug to software development." But for systematic thinkers, the gateway isn't just natural language - it's methodical context construction.
Context as Strategic Advantage
There are many effective approaches to AI collaboration - conversational clarity, rapid prototyping, technical precision. Context architecture works for those who naturally think in systems and frameworks: systematic context construction determines whether you get generic output or breakthrough results that align with your strategic vision.
This becomes especially valuable as we move deeper into Software 3.0. When everyone can "program in English," competitive advantage comes from understanding that context isn't just helpful information - it's the entire foundation that determines whether your AI interactions create incremental improvements or transformational outcomes.
The Strategic Vision: Building the Human-AI Collaboration Platform
OpportunityOS is just the beginning of what I'm building at StrAItegy Hub - a complete human MCP (Model Context Protocol) platform that puts strategic context mastery at the center of everything. Because in Karpathy's world where "remarkably, we're now programming computers in English," there's room for many different approaches to excellence.
The real breakthrough isn't complex prompting techniques. It's understanding that when you architect context systematically, you can build exactly what you envision - because you've taught the AI to think through the problem the way you do. You're not just building software. You're building systems that think the way you think.
4. Jenny / The Integration Specialist
I used to do everything in Cursor. I loved the feeling of writing code, staying in flow, and pushing from start to finish without leaving that window. It felt efficient, and I assured myself that “this is how real builders build.”
That flow broke early when I was building my first real app: Image Finder.
I built almost the whole thing with early versions of Cursor, convinced I was moving fast. But reality hit hard: two weeks just to finish the core, two more to experiment, tweak, package, and launch. It was the most draining project I’ve done. Sure, the LLMs were weaker back then, but really, I just pushed one tool too far.
With my second and third apps — Quick Viral Notes and Substack Explorer — I wised up. I started using different tools, each for what it did best: ChatGPT for ideation, Claude for structure, Cursor for execution. The time to launch was cut in half. More importantly, everything just flowed better. I wasn’t stuck forcing one tool to do everything.
Platforms like Lovable and Bolt made a huge difference in those early stages. They let me prototype visually without worrying about code structure. I’d literally ask ChatGPT: “Here are the requirements, write a prompt that lets Lovable generate the full skeleton in one shot.” Ten minutes later: a working prototype with a surprisingly polished UI.
But sandboxes aren’t reality.
The moment you try to actually use those apps, things start breaking. That’s where chaining tools became a game changer.
So now, I build to launch using this integration workflow:
Ideate: Use ChatGPT or Claude to clarify your app concept and create one strong, end-to-end generation prompt.
Prototype: Drop it into Lovable or Bolt. See if the UI reflects your intent.
Bridge to Code: Push it to GitHub, then pull it locally and inspect the structure.
Build: Use Cursor to implement missing features and tighten the architecture.
Launch: Connect GitHub to Vercel, then add auth, payments, and other real-world needs.
It wasn’t about moving faster. It was about moving better.
Each tool had a job, and each handoff mattered.
The magic isn’t in any single tool. It’s in the transitions: from idea to prototype, from prototype to code, from code to a product someone can actually use.
5. Wyndo / The Minimalist
I've watched dozens of vibe coding projects crash because people couldn't explain what they were building in plain English. If you can't describe your app to a friend over coffee, you're not ready to code it.
My approach is different: I use conversation to think through the problem before writing a single line of code.
Here's exactly how I build:
Start with Direction, Not Code
When I started building my current SaaS (can't reveal details yet, but it began as a simple conversation), I didn't come with a PRD or technical specs. I started talking through the problem like I would with a human colleague:
"I have this idea for helping people analyze their brand voice. Instead of coding anything, can you help me understand what this would actually need to do and what problems we might run into?"
This conversation revealed assumptions I hadn't considered and edge cases I would have discovered only after coding myself into them.
My Core Conversation Techniques
Through hundreds of these collaborative sessions, I've developed three specific techniques that consistently prevent the common vibe coding pitfalls:
Direction-First Building: "Don't code anything until you have 95% confidence about the context. Please create a plan before coding anything, explain what you're going to do, what the impact to users will be, and what you will NOT be doing based on my request."
This forces both of us to think through the full scope before execution.
Hypothesis-Driven Debugging: When things break (and they always do), I don't ask for quick fixes. Instead: "Can you find the root cause of this error, create hypotheses about what's happening, and execute solutions based on confidence level?"
This approach teaches me what actually went wrong instead of just patching symptoms.
One Chat, One Problem: I keep each conversation focused on a single feature or problem. A few days ago, I spent an entire session just on building a brand voice analysis feature. I gave zero chance for scope creep to happen. This maintains context and prevents the conversation from becoming a mess of half-finished ideas.
Why Conversation Beats Complex Prompting
Notice that none of this has anything to do with prompt engineering. The truth is that conversational back-and-forth forces clarity. When AI asks clarifying questions, I discover assumptions I didn't know I was making. It feels like speaking to a human rather than programming a machine.
Other builders optimize for getting AI to understand what they want. I optimize for using AI dialogue to understand what I actually want to build. By the time we start coding, we both understand not just the "what" but the "why" and "what could go wrong."
Most complex systems start as simple conversations. The trick is having the right conversation before you start building.
Conclusion: Your Personal AI Operating System
What we discovered building together is that effective vibe coding isn't about finding the "right" method. It's about recognizing how your brain naturally approaches problems, then designing AI workflows that amplify that instead of fighting it.
If you think in systems and data flows → Start with Claudia's data-first approach. Map information movement before features.
If you learn by building quickly → Use Joel's rapid prototyping cycle. Get something working in 10 minutes, then let user confusion guide your next iteration.
If you need structure and frameworks → Apply Zain's context architecture. Spend 80% of your time building the contextual foundation that makes the build inevitable.
If you work across multiple tools → Follow Jenny's integration workflow. Use each tool for what it does best, and master the handoffs between them.
If you think out loud → Embrace Wyndo's conversation-driven approach. Use dialogue to understand what you actually want to build before writing code.
The breakthrough isn't learning to prompt better. It's recognizing that AI works best when it adapts to how you already think, not when you try to think like AI expects you to.
Try this: Pick the approach that resonates most with how you naturally solve problems. Use it for your next AI project. Don't try to combine methods until you've mastered one that fits your thinking style.
The future belongs to builders who can think clearly about problems and communicate that thinking to AI. The specific prompts and tools will change. Your natural problem-solving patterns won't.