Portal V3: The Convex Migration — When the Backend Finally Clicked
Switching from Drizzle/Postgres to Convex collapsed the frontend-backend gap, doubled development speed, and let me cancel $230/month in third-party services. Here is how the migration worked and what it unlocked.
This is Part 3 of a four-part series tracing the Portal platform from WordPress to Kubernetes. Part 2 covered the Next.js rebuild. This article covers the move to Convex — the migration that changed how I think about building software.
The two-day prototype that changed everything
At the end of V2, I was spending more time coordinating layers than writing features. Schema change in Drizzle, migration file, run migration, update tRPC router, update React components, deploy, verify. Six steps for a single field addition. The type safety was real, but the deployment pipeline had too many joints.
A colleague mentioned Convex. I was skeptical — another "revolutionary" backend platform. I had seen plenty of those come and go. But the pitch was specific to my exact pain: schema, API, and real-time subscriptions as a single system, deployed atomically.
I rebuilt the course module in a weekend prototype. Define the schema, write queries and mutations, use them directly in React components with real-time subscriptions. No migration files. No separate API layer. No deployment choreography. When I changed a field in the schema, the TypeScript compiler immediately flagged every query and component that needed updating. One system, one deployment, one source of truth.
The prototype handled in two days what had taken two weeks in V2. I started planning the migration the following Monday.
The migration strategy
I could not migrate everything at once. Clients were using the platform daily. The approach was incremental:
- Set up a Convex backend alongside the existing Postgres database
- Migrate one domain at a time, starting with the least critical (internal tools, admin dashboards)
- Run dual-write during transition — new data goes to Convex, reads check both sources
- Once a domain was fully migrated and verified, decommission its Postgres tables
- Repeat until Postgres was empty
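The dual-write step can be sketched as a thin repository layer. This is an illustrative TypeScript sketch, with in-memory maps standing in for Postgres and Convex; the names `DualWriteRepo`, `legacy`, and `next` are my own, not from the actual migration code:

```typescript
// Illustrative dual-write transition layer: new data goes only to the
// new store, reads prefer it and fall back to the legacy store, and
// records can be backfilled one at a time until the legacy store is empty.
type Doc = { id: string; data: string };

class DualWriteRepo {
  constructor(
    private legacy: Map<string, Doc>, // stands in for Postgres
    private next: Map<string, Doc>,   // stands in for Convex
  ) {}

  // New data goes to the new backend only.
  write(doc: Doc): void {
    this.next.set(doc.id, doc);
  }

  // Reads check the new backend first, then fall back to legacy.
  read(id: string): Doc | undefined {
    return this.next.get(id) ?? this.legacy.get(id);
  }

  // Backfill one legacy record; its Postgres row can then be dropped.
  migrate(id: string): boolean {
    const doc = this.legacy.get(id);
    if (!doc) return false;
    this.next.set(id, doc);
    this.legacy.delete(id);
    return true;
  }
}
```

Once `read` never hits the fallback for a domain, its Postgres tables are safe to decommission.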
The LMS module — courses, lessons, student progress — migrated first because it had the most to gain from real-time updates. When a student completed a lesson, every connected client saw the progress update instantly. No polling. No refresh button. The data just appeared.
```typescript
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

// Lessons are ordered within a module; the compound index makes
// "lessons for this module, in order" a single indexed read.
const lessons = defineTable({
  courseId: v.id("courses"),
  moduleId: v.id("modules"),
  title: v.string(),
  content: v.string(),
  order: v.number(),
  workspaceId: v.id("workspaces"),
}).index("by_module", ["moduleId", "order"]);

// One row per completed lesson, indexed for per-student,
// per-course progress reads.
const progress = defineTable({
  studentId: v.id("users"),
  lessonId: v.id("lessons"),
  courseId: v.id("courses"),
  completedAt: v.number(),
  workspaceId: v.id("workspaces"),
}).index("by_student_course", ["studentId", "courseId"]);

export default defineSchema({ lessons, progress });
```
No migration files. No ALTER TABLE statements. The schema was the schema — Convex handled the rest. Adding a field meant adding a line to the schema definition and updating the queries that used it. That was it.
Development speed
The velocity change was not subtle. It was a step function.
In V2, building a new feature like "support chat" would have required: a new Postgres table, a Drizzle schema update, a migration, tRPC procedures for sending and receiving messages, a WebSocket layer for real-time delivery, client-side state management for optimistic updates, and a deployment pipeline that coordinated all of these. Estimated time: two to three weeks.
In V3, support chat was a Convex schema definition, a few mutations and queries, and React components that subscribed to the message list. Real-time delivery came free — every Convex query is a live subscription by default. Optimistic updates were built into the mutation layer. Three days from concept to production.
That pattern repeated across every feature. Development speed did not just improve — it roughly doubled or tripled depending on the feature's real-time requirements.
Canceling third-party services
With a backend that was fast to develop against, I started replacing the external services that V2 still depended on. Each cancellation was a small victory — one less monthly bill, one less integration point, one less system I did not control.
| Service | What it did | Replaced with | Monthly savings |
|---|---|---|---|
| Pandadoc | Disclaimer signing | Custom disclaimer flow in Convex | $35 |
| Zoho CRM | Contact management | Built-in CRM plugin | $25 |
| GoHighLevel | CRM + pipelines | Built-in CRM plugin | $97 |
| Make.com | Automation glue | Convex scheduled functions | $29 |
| Zapier | Automation glue | Convex HTTP actions + crons | $49 |
| Mailchimp | Email campaigns | Custom email plugin | $20 |
| Total savings | | | $255/mo |
The CRM replacement was the most satisfying. GoHighLevel alone cost $97/month and provided a fraction of what I could build directly in the platform. A contacts table, a pipelines table, deal tracking, activity logging — all backed by Convex with real-time updates. Clients could see their pipeline update the moment a deal moved stages. GoHighLevel could not do that.
The disclaimer system was the most impactful for clients. Instead of being redirected to Pandadoc's interface to sign a form, clients signed directly inside their portal dashboard. The signature, timestamp, and document version were stored in Convex. No external redirect, no Pandadoc branding, no Zapier webhook to sync the result back. One system, one flow, one source of truth.
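The shape of such a signing record might look like the sketch below. These field names are my illustration, not the actual Portal schema:

```typescript
// Hypothetical disclaimer-signature record, stored directly in the
// platform's own database instead of synced back from Pandadoc.
interface DisclaimerSignature {
  workspaceId: string;
  userId: string;
  documentVersion: string;  // which revision of the disclaimer was signed
  signatureDataUrl: string; // e.g. a canvas-drawn signature image
  signedAt: number;         // Unix ms timestamp
}

function recordSignature(
  workspaceId: string,
  userId: string,
  documentVersion: string,
  signatureDataUrl: string,
  now: () => number = Date.now, // injectable clock, for testability
): DisclaimerSignature {
  return { workspaceId, userId, documentVersion, signatureDataUrl, signedAt: now() };
}
```

Storing the document version alongside the signature is what makes "which disclaimer did they actually agree to?" answerable later without an external audit trail.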
The plugin system takes shape
V1 had the concept of modular features — clients should only see what they pay for. V2 started to implement it with conditional routing. V3 made it real with Convex components.
Each plugin became a self-contained Convex component with its own schema, queries, and mutations:
```typescript
import { defineTable } from "convex/server";
import { v } from "convex/values";

// CRM plugin — its own isolated schema
const contacts = defineTable({
  workspaceId: v.id("workspaces"),
  name: v.string(),
  email: v.optional(v.string()),
  phone: v.optional(v.string()),
  source: v.string(),
  createdAt: v.number(),
}).index("by_workspace", ["workspaceId"]);

// LMS plugin — completely separate
const enrollments = defineTable({
  workspaceId: v.id("workspaces"),
  studentId: v.id("users"),
  courseId: v.id("courses"),
  enrolledAt: v.number(),
  status: v.union(v.literal("active"), v.literal("completed"), v.literal("paused")),
}).index("by_student", ["studentId", "workspaceId"]);
```
Plugins communicated through events, not direct imports. The CRM plugin did not know about the LMS plugin. But when a student enrolled in a course, an event fired, and the CRM plugin could update the contact's activity log. Loose coupling with real-time reactivity.
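The decoupling principle can be sketched as a plain TypeScript event bus. This is an illustration, not the actual implementation; in the real system the events flow through Convex, but the shape is the same: the LMS emits, the CRM listens, neither imports the other:

```typescript
// Illustrative in-process event bus for plugin-to-plugin communication.
type EnrollmentEvent = { contactEmail: string; courseId: string };
type Handler = (e: EnrollmentEvent) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  // A plugin registers interest in a topic by name, not by importing
  // the emitting plugin's code.
  on(topic: string, handler: Handler): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  // Emitting to a topic with no listeners is a harmless no-op.
  emit(topic: string, event: EnrollmentEvent): void {
    for (const h of this.handlers.get(topic) ?? []) h(event);
  }
}
```

Here the CRM plugin would register a handler for a topic like `lms.enrolled` and append to its own activity log; the LMS plugin knows only the topic name, so removing either plugin leaves the other untouched.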
This architecture is covered in depth in the plugin architecture article. The key point here is that Convex components made plugin isolation practical. In V2, every plugin's data lived in the same Postgres schema file. In V3, each plugin owned its own schema, deployed as a component. Adding or removing a plugin did not touch the core schema at all.
Everything in one place
The most visible change for clients was consolidation. In V2, a typical client workflow looked like this:
- Open GoHighLevel to check the CRM pipeline
- Switch to Pandadoc to send a disclaimer
- Open the Portal to see course progress
- Check Mailchimp for email campaign stats
- Monitor Zapier for failed automations
In V3, every step happened inside the Portal dashboard. CRM pipeline, disclaimer signing, course management, email tools, automation status — all in one interface, all updating in real time. Clients stopped asking "where do I find X?" because the answer was always "in the portal."
One client told me they used to spend 20 minutes at the start of each day logging into their various platforms and checking dashboards. After V3, that dropped to opening one tab.
PWA and push notifications
With the platform consolidated, making it mobile-friendly became the obvious next step. I turned the Portal into a Progressive Web App. Clients could add it to their phone's home screen and get a near-native experience without an app store submission.
The real power was push notifications. When a new support message came in, the client's phone buzzed with a branded notification — their business name, their icon, the message preview. When a customer placed an order, another notification. No email required. No SMS service fees.
The notification system was straightforward with Convex:
```typescript
"use node";
import webpush from "web-push";
import { v } from "convex/values";
import { internalAction } from "./_generated/server";
import { internal } from "./_generated/api";

export const sendPushNotification = internalAction({
  args: {
    workspaceId: v.id("workspaces"),
    userId: v.id("users"),
    title: v.string(),
    body: v.string(),
  },
  returns: v.null(),
  handler: async (ctx, args) => {
    const subscriptions = await ctx.runQuery(
      internal.notifications.getSubscriptions,
      { userId: args.userId },
    );
    for (const sub of subscriptions) {
      // web-push expects the full subscription (endpoint plus the keys
      // captured at subscribe time) and a string payload, so the
      // notification body is serialized here.
      await webpush.sendNotification(
        { endpoint: sub.endpoint, keys: sub.keys },
        JSON.stringify({
          title: args.title,
          body: args.body,
          icon: sub.workspaceIcon,
        }),
      );
    }
    return null;
  },
});
```
Each workspace had its own push notification branding. Client A's students saw Client A's logo. Client B's customers saw Client B's colors. One platform, completely white-labeled, delivering personalized mobile experiences without a single line of Swift or Kotlin.
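The per-workspace branding reduces to a small payload builder. A sketch under stated assumptions: the `WorkspaceBranding` shape is my illustration, and the payload fields mirror the Web Notifications options (`title`, `body`, `icon`) that a service worker would pass to `showNotification`:

```typescript
// Hypothetical helper: build a branded push payload for one workspace.
interface WorkspaceBranding {
  businessName: string; // shown as the notification title
  iconUrl: string;      // the workspace's own icon, not the platform's
}

function buildPushPayload(branding: WorkspaceBranding, body: string): string {
  // web-push takes a string payload, so the object is serialized here;
  // the client's service worker parses it back before displaying.
  return JSON.stringify({
    title: branding.businessName,
    body,
    icon: branding.iconUrl,
  });
}
```

Because the branding rides in the payload itself, one shared service worker can display Client A's logo to Client A's students and Client B's to Client B's customers.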
The page builder experiment
Around this time I wanted to give clients the ability to build custom landing pages within their portal — sales pages, course landing pages, event registration pages. A visual drag-and-drop page builder.
Building a full page builder inside the monolithic Next.js app felt wrong. It was a large, complex feature with its own UI framework, its own preview rendering, and its own deployment considerations. A bug in the page builder should not crash the CRM.
Vercel had recently introduced microfrontend support. I built the page builder as a separate Next.js app that communicated with the same Convex backend. Same database, same auth, different deployment. The page builder could be developed and deployed independently of the main portal.
This worked well. It was the first concrete proof that splitting the frontend into smaller, independently deployable services made the platform more resilient. When the page builder had a rendering bug, the main portal was unaffected. When the main portal deployed, the page builder kept running.
But then I looked at scaling this pattern. The portal had 36 plugin features. If each became a microfrontend on Vercel, the cost would be:
36 microfrontends x $20/month minimum per project = $720/month — just for hosting frontend code.
And that was the base plan. Real usage with builds and bandwidth would push it higher. For a bootstrapped platform, that was a non-starter.
The cost picture
V3 was dramatically cheaper than any previous version:
| Category | V1 (WordPress) | V2 (Next.js) | V3 (Convex) |
|---|---|---|---|
| Hosting / compute | $100 | $20 | $20 |
| Database | included | $0 | $25 (Convex) |
| Third-party services | $270 | $230 | ~$0 |
| Total | $370/mo | $250/mo | ~$45/mo |
From $370/month to $45/month. An 88% reduction in platform costs. And the platform was faster, more reliable, and had more features than it ever had on WordPress or the early Next.js version.
The next ceiling
V3 was the best version of Portal yet. Development was fast. Clients were happy. Costs were low. But two new requirements were forming on the horizon.
True multi-tenant isolation. All clients still shared one Convex backend instance. Their data was isolated through workspaceId scoping, but it was still one database. For clients in regulated industries — finance, healthcare — that was starting to matter. They wanted assurance that their data was not just logically separated but physically separated. Different database instances, different backup schedules, different geographic regions.
Microfrontend economics. The page builder experiment proved that splitting the frontend worked architecturally. But Vercel's pricing made it impractical at scale. I needed a way to run many small frontend services cheaply — ideally on infrastructure I controlled.
Both requirements pointed in the same direction: self-hosted infrastructure. A place where I could spin up per-client containers and per-client backend instances without per-service platform fees.
I had recently purchased a bare-metal server in a data center for another project. It had 128GB of RAM and was sitting mostly idle. The infrastructure I needed was already in the rack.
Next in the series: Portal V4: Bare Metal, Kubernetes, and True Multi-Tenancy — where each client gets their own database, their own frontend containers, and I learn that owning infrastructure is both liberating and terrifying.
Want to see how this was built? See the Portal project.