“Building an AI-native firm in 30 days” is a great LinkedIn AI slop bait title.
We are not AI-native. We are roughly 10x more AI-enabled than we were on April 1st. Those are different sentences. The gap between them, between AI-enabled and AI-native, is most of what I want to share, because it is the thing every VC, CTO, and partnership-led firm I now talk to is getting wrong.
For 30 days in April, I ran a sprint that pulled our entire firm (partners, investors, finance, legal, EAs, ops, all 30 of us) out of business-as-usual. The one thing we did not drop was answering our founders. We are a venture firm, and that line is sacred. Everything else moved aside, so that between 50 and 80% of every person’s time, every day, went into building with AI. By Demo Day 5 on April 30th, we had shipped 52 internal apps, deployed multiple production AI agents, redesigned core processes, and turned a partnership of investors and operators into a partnership of builders.
It worked. It also left a trail of friction, false starts, and lessons I would happily redo. Both things are true at once, and this post is my effort to share both for the founders, VCs, and CTOs who have messaged me asking how to kick this off in their own organizations.
Why now?
Three reasons. The first is macro. The second is uncomfortable. The third is personal.
The macro reason. In late February, Citrini Research published “The 2028 Global Intelligence Crisis,” a fictional macro memo from June 2028 describing an AI-driven displacement spiral, a 38% drawdown in the S&P, and a deflationary loop they called “Ghost GDP.” The piece moved markets. Citadel Securities responded a week later, arguing that technological diffusion follows S-curves, organizational integration is expensive, and labor markets are more elastic than the doom case admits. Around the same time, Matt Shumer’s “Something Big Is Happening” essay racked up 80 million views, with a more visceral message: this time is different, and the timeline is shorter than anyone is willing to say publicly.
My honest read: Citadel is directionally right that adoption will be slower than the doomers think across the economy. Shumer is directionally right that the gap inside knowledge-work firms is widening fast. OpenAI’s own usage data shows that frontier workers (the 95th percentile) send six times as many messages as the median worker in the same company. The bottleneck is no longer the model. It is the organization. Put plainly: if nothing changes inside our firm, we have a real problem. Not in five years. In quarters.
The uncomfortable reason. Most VCs invest heavily in AI as investors, not as users. We write the checks, we read the memos, we sit on the panels. We do not actually live in the products. We do not feel where they break. We do not feel what they unlock. You cannot accurately judge what you do not use, and an industry whose product is judgment cannot afford to be a tourist in the most consequential technology shift of our careers.
So part of the explicit goal of the sprint was to become better investors by becoming people who actually understand the AI landscape from the inside. Six weeks in, my pattern recognition on AI-native company-building is unrecognizable from where it was in February. I now know, in my hands and not in slides, what is hard, what is easy, what is a feature, what is a defensible product, and where the seams are. That alone justified the month.
The personal reason. I have spent the last several years working with and in AI in one form or another. None of this is new to me (and yes, I have written about data-driven and AI-led VC on my blog more than once already). And yet, over the Christmas break and into early January, I sat down with OpenClaw and had a wow moment that honestly surprised me at this stage of the cycle. Stand up an agent. Give it tools. Give it memory. Watch it do real work for you on its own initiative. I came back from those holidays convinced of two things. This is qualitatively different from anything we have worked with before. And I wanted to take the rest of my firm on that journey so they would have their own wow moment, not one I described in a slide.
That last bit matters. You cannot transfer a wow moment by telling someone about yours. You have to engineer the conditions for theirs.
The setup most people skip
In early March, I wrote a reset memo to the partnership. It was not a strategy document. It was unflinching: where the firm was on AI, what it would cost us to keep treating it as a side project, and what genuine transformation would actually require, including the parts that would be uncomfortable for partners specifically.
We then locked ourselves in a room together for three days, March 10 to 12. No staff. No agenda creep. The three days were not about tooling. They were about alignment. Did we agree that the firm would change? Did we agree that the change would be uncomfortable? Did we agree that the partners would model it before we asked anyone else to?
What got us aligned faster than I expected was Claire Vo’s recent “adapt or fire” framing: the blunt argument that, in a builder economy, the leaders who do not pick up new tools become the bottleneck and have to be routed around. There is no neutral position. You either adapt or you become the thing the rest of the org has to work around. That language landed in the room in a way that more polite framings had not, and it cut through the residual “this is for the engineers” instinct that lurks in any partnership. Once partners had internalized that frame, getting the rest of the firm to commit was straightforward, because the example was being set visibly from the top.
If you are a managing partner reading this and you are not yet aligned at the top, do not run a sprint. Lock yourselves in a room first. The reset memo is the artifact, but the alignment is the work.
Building the skill floor: the workshop before the sprint
A subtle move that paid off disproportionately: in late March, before sprint kickoff, I ran a two-day OpenClaw workshop for the firm. The goal was not to teach AI concepts (the internet has plenty of those). The goal was to give every non-builder a concrete, hands-on, visceral AI wow moment within 48 hours.
By the end of day two, every participant had stood up and hardened a VPS, installed an agent on it, gotten it talking to them on Telegram, and connected it to their own Google Calendar. That last bit is the trick. Once an agent is reading your real calendar and replying on the channel where you actually live, AI stops being a chatbot in a tab and starts being a thing in your life.
People walked out of those two days feeling 10x more ready to build, not because they had learned 10x more, but because they had done something on their own infrastructure that they had previously believed was beyond them. The confidence shift was the precondition for everything that came after. When April 1st arrived, the firm was not starting from “what is an API key?” It started with “I have already deployed something.”
This is the move most “AI training programs” get wrong. They teach concepts. We forced shipping on real infrastructure, with real personal data, and a reward at the end. If you are planning a sprint, build the skill floor before kickoff. Do not try to teach during the sprint itself. The sprint should be where the skill gets used, not where it is first acquired.
The structure of the sprint
One month. Mandatory. Time-boxed. The instruction was simple: drop everything except answering founders, and put 50-80% of your time into building.
Every function, every level. Partners. Investors. Finance. Legal. EAs. Ops. Every single person was personally shipping code, not “supporting the builders.” We were the builders. There was no sideline.
That phrase “everyone is a builder” has gone viral in the past year, and I want to be precise about what we mean by it. We do not mean everyone became an engineer. We mean the line between the person who has the idea and the person who ships the prototype collapsed. An EA who watched an investor describe a recurring data pull could have a working app the next morning. A finance lead who lived a painful month-end close could rebuild the workflow in week two. A legal associate who knew exactly how an NDA review should work could prototype the agent that does it.
Tooling was settled by usage, not by mandate. People used Claude Code, Codex, and other agentic coding tools, often switching between them inside the same project depending on the task. We did not standardize on one. The right answer in 2026 is not religious about tool choice. It is religious about shipping.
What worked: five decisions that did most of the work
If I were advising another VC, professional services firm, or CTO starting from scratch, these are the five things I would build before the sprint, not during it.
1. An internal deployment platform.
The default failure mode of a firm-wide vibe-coding sprint is predictable and ugly: 50 people pushing 50 apps to 50 personal Vercel accounts, leaking environment variables, embedding API keys in client-side bundles, and standing up shadow infrastructure your security team cannot see. April 2026 made this risk impossible to ignore. Several high-profile public breaches in vibe-coding-adjacent platforms hit the news the same month we were running our sprint. Researchers scanning thousands of vibe-coded apps in production are now finding around 65% with security issues and a meaningful share with exposed secrets and PII.
We were not willing to take that on. So we built our own internal hosting layer on Cloudflare. Workers for Platforms gives you sandboxed, isolated execution per app. Cloudflare Access sits in front of every deployment with SSO. Secrets are managed centrally, not in a developer’s .env file. Every app deploys through one path. One control plane. No sprawl. No 50-account chaos.
This was the single highest-leverage decision we made. If you do nothing else from this list, do this.
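To make the “one path” idea concrete, here is a minimal sketch of the kind of preflight gate a single deploy path can run before handing off to the platform CLI. Everything here is illustrative: the function name, the secret patterns, and the layout are assumptions, and the real platform work (per-app isolation, SSO in front of every deployment, central secret management) sits behind this rather than in a script.

```python
# Illustrative preflight gate for a single internal deploy path.
# Refuses to hand an app to the platform if anything secret-shaped is
# committed in the bundle. Patterns and names are examples, not a product.
import re
from pathlib import Path

# Patterns that commonly indicate a leaked credential (illustrative, not exhaustive).
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),               # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key IDs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]+"),  # inline key assignments
]

def scan_for_secrets(app_dir: str) -> list[str]:
    """Return a list of 'path: reason' violations found in the app bundle."""
    violations = []
    for path in Path(app_dir).rglob("*"):
        if not path.is_file():
            continue
        # .env files never belong in the bundle: secrets come from the
        # central store at deploy time, not from the repo.
        if path.name == ".env":
            violations.append(f"{path}: .env file must not be committed")
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pat in SECRET_PATTERNS:
            if pat.search(text):
                violations.append(f"{path}: matches {pat.pattern}")
    return violations
```

In a pipeline like ours, a clean scan would be followed by a single platform deploy command into the shared, sandboxed namespace; a dirty scan stops the deploy before anything leaves the builder’s machine.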
2. A GitHub PR-check flow that catches novice mistakes (and gives reluctant builders permission to start).
When you have 50 first-time builders shipping code, the bottleneck is review, and you cannot put senior engineers in front of every PR. The fix is automated checks: secret scanning, IAM linting, route auth verification, dependency lockfile enforcement, combined with a shared AGENTS.md at every repo root that encodes the firm’s conventions.
The technical case for this was obvious from day one. The cultural case was the surprise. This flow changed reluctant participants into active ones. Before April, several people in the firm had been quietly nervous about AI: Can I give this access? Can I share this data? What if I leak something? The PR checks gave them an answer. They could try things, and the system would catch the mistakes that mattered before they became real. Within two weeks, the most cautious people in the firm were among the most prolific shippers. Guardrails as confidence-builder turned out to matter more than guardrails as bug-catcher.
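For flavor, here is a hedged sketch of the conventions half of those checks: AGENTS.md presence and lockfile enforcement. The helper name and the manifest-to-lockfile map are assumptions for illustration, not our exact CI code.

```python
# Illustrative PR check: enforce firm conventions on a repo before merge.
# The mapping and names below are assumptions, not a real CI configuration.
from pathlib import Path

def check_repo_conventions(repo_root: str) -> list[str]:
    """Return a list of convention violations; empty means the check passes."""
    root = Path(repo_root)
    problems = []
    # Every repo must carry the firm's conventions file for coding agents.
    if not (root / "AGENTS.md").exists():
        problems.append("missing AGENTS.md at repo root")
    # Dependency manifests must ship with a lockfile so builds are pinned.
    lockfile_for = {
        "package.json": ["package-lock.json", "pnpm-lock.yaml"],
        "pyproject.toml": ["poetry.lock", "uv.lock"],
    }
    for manifest, locks in lockfile_for.items():
        if (root / manifest).exists() and not any((root / l).exists() for l in locks):
            problems.append(f"{manifest} present but no lockfile ({' or '.join(locks)})")
    return problems
```

The point is not the specific rules; it is that every rule a senior engineer would catch in review gets encoded once and then runs on every PR, for free, forever.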
3. No external consultants. No thought leaders. No AI wizards.
This was an explicit call I made early, and I would make it again ten times out of ten. Every dollar I have seen spent on outside “AI transformation experts” in the last twelve months has been a worse investment than spending the same dollar on giving your own people a month and a license to build. Consultants leave. Slide decks get filed. Frameworks expire the day a new model drops. Wow moments and built capability stick.
We made an explicit call at the start: we would learn this ourselves, using our own data, our own workflows, and our own hands. No external advisor was going to set up our deployment platform. No thought leader was going to tell our legal team what an NDA agent should look like, because no thought leader has ever read our NDAs. No AI wizard was going to redesign our investment process, because no AI wizard has ever sat in our investment committee.
The capability we wanted is the firm’s ability to keep doing this after the sprint ends, and you do not build that capability by renting it. The only outside ingredient was the model providers. Everything else (the platform, the data layer, the agents, the apps, the conventions, the workshops) was us. The byproduct: we now have institutional knowledge that really belongs to the firm and travels with our people, rather than living in someone else’s deck.
If you take one prescription from this piece, take this one, especially if you are tempted to do the opposite.
4. Weekly mandatory demos. Three minutes per person. No extensions.
This is the discipline mechanism, and it is the most underrated lever in the whole playbook: every Friday, every builder, three minutes, hard stop. The constraint forces focus on delivery, sharing, and pushing code, rather than vibing on something privately for two weeks and producing nothing the firm can see.
People are free to collaborate, but we found genuine pair-coding hard for non-engineers. The cognitive overhead of two non-builders trying to write code together exceeded the benefit. So the unit was the individual builder, and the demo was the social forum. Weekly cadence built the social capital. Three-minute slots made it ruthless. Mandatory made it real. Optional demos become a parade for the already-confident. Mandatory demos surface the people who are quietly stuck and need help.
5. The firm’s full-time builders, at every level.
Investors built apps for their own deal flow. Finance rebuilt the month-end close. Legal vibe-coded NDA review. EAs were writing SQL around week three (and I am still slightly amazed by that one). The combination of domain expertise and building energy in the same person beats any external consultant or central platform team.
What didn’t work: the messy middle
I would be lying if I said the month was clean. Some of it was a mess, and I want to be specific about it.
Ambition was bimodal. A meaningful chunk of apps were over-ambitious (multiplayer, stateful, deeply agentic) and collapsed under their own weight by week three. An equally meaningful chunk were under-ambitious: thin wrappers around a single prompt that did not need to be apps at all. Calibrating ambition is a coaching problem, and we underestimated it. In the next sprint, we will set ambition tiers up front and assign each builder a target.
Building ≠ adoption. The hardest thing about Demo Day is not the demo. It is whether the app survives the week after. Several beautiful demos quietly died because nobody owned ongoing usage, durability, or improvement. We are still working through which of the 52 apps will be promoted into the firm’s permanent operating system, and which were learning artifacts that served their purpose at the time they were built. That is a genuinely hard call, and we do not yet have a clear rule for it.
Multiplayer and collaboration friction. Apps that needed to be used by more than one person at a time hit a wall. Auth, state, role permissions, audit, all the boring infrastructure that real software needs, kept catching us. The lesson: most useful internal apps are single-player. For multi-user, factor in another week of platform work.
Onboarding friction. Setting up local environments for non-developers took more hours than I want to admit, especially on Windows and with Git. Even with the OpenClaw workshop providing a baseline, the first hour of a new project still meant fighting paths and PATHs. In the next sprint, we will pre-provision cloud-based dev environments so the first hour is productive rather than spent on environment setup.
Workflow vs. agent confusion. Some processes are workflow problems best solved with a deterministic app. Others are agent problems (fuzzy, contextual, judgment-heavy) best solved with something more autonomous. We got the call wrong as often as right. The pattern that has emerged: if the input space is bounded and the output is structured, build a workflow. If the input is open-ended and the value is in the synthesis, build an agent, and invest in agent memory architecture (vector search on object storage, à la Turbopuffer, has been the cleanest pattern), MCP servers for tool access, and clear AGENTS.md scoping.
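One way to make that rule of thumb concrete is a toy triage helper. The dataclass, attribute names, and thresholds here are purely illustrative, a way of writing the heuristic down, not a framework we shipped:

```python
# Toy encoding of the workflow-vs-agent heuristic. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    bounded_input: bool      # inputs fit a fixed schema (form fields, one file type)
    structured_output: bool  # output is a table, a filled-in document, a status change
    needs_judgment: bool     # value comes from synthesis or fuzzy context

def triage(p: Process) -> str:
    """Bounded input + structured output -> workflow; otherwise -> agent."""
    if p.bounded_input and p.structured_output and not p.needs_judgment:
        return "workflow"
    return "agent"
```

Run against the sprint's own examples, a month-end close (bounded, structured, mechanical) triages to a deterministic workflow, while deal-memo synthesis (open-ended input, value in the judgment) triages to an agent.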
Some processes resist appification entirely. Investment judgment. Founder relationships. Board work. The temptation in a sprint is to build something for everything. Resist it. The right primitive is sometimes an agent embedded in an existing workflow, not an app in a new tab.
So where are we now?
52 apps built. Multiple genuine AI employees in production: agents that handle real work end-to-end, not demos. A partnership that can ship. An EA team writing SQL on day 25, who could not on day 1. A legal function that vibe-codes. Investment processes redesigned, not purely automated. A culture that has internalized “everyone a builder” as a fact, not a slogan.
And, most importantly to me, a firm of investors who now understand the AI landscape as users, not simply as people writing checks into it. The improvement in the quality of our investment conversations, even six weeks in, is the outcome I would point to before the app count.
And: still not AI-native. Just dramatically more capable than 30 days ago.
I want to be precise about that distinction one more time, because it is the entire point. AI-native is a destination, and frankly, I am not sure any 30-person firm with 20 years of legacy process gets there in a single month. AI-enabled is a direction. The win is not the destination. The win is that we now have the muscle to keep going.
The blueprint, if you are going to try this in your firm
When other VCs and CTOs ask me how to start, this is roughly the script I use on a 30-minute call.
- Align the partnership first: three days, locked in a room, no staff. If you cannot get there, do not run the sprint. Use Claire Vo’s “adapt or fire” frame if you need to puncture the tactful version. The reset memo is the artifact. The alignment is the work.
- Build the skill floor before kickoff. A two-day workshop in which every non-builder ships something real on their own infrastructure that talks to their own life. Wow moment in 48 hours, or you do not have one.
- Build the platform layer before the sprint, not during. One internal deployment path. SSO. Secret management. PR checks. A shared data layer (do not make my week-three mistake). Shared AGENTS.md conventions.
- Do not hire consultants. Do not hire AI wizards. Buy your own people the time and the license to build. The capability you rent leaves with the invoice.
- Make participation mandatory and time-boxed. Drop everything except customers and founders. 50-80% of the time for one month. Optional sprints become side projects for the already-curious. Mandatory sprints become culture.
- Everyone builds. No sideline. Partners, investors, finance, legal, EAs, ops. Personally shipping code. Tool choice is settled by usage, not by mandate.
- Weekly mandatory demos. Three minutes. No extensions. This is the discipline mechanism. Do not soften it.
- Plan for after Demo Day before Demo Day. Decide which apps get promoted, who owns ongoing maintenance, and how durability gets funded. Without this, you produce a great month and an ignored next month.
- Set the next sprint before the current one ends. The bar moves. So should you.
The work is not to become AI-native in 30 days. The work is to build the muscle that lets you never stop becoming more AI-enabled.
Three weeks of consolidation. Then we go again.