launchthat
The Engineering Patterns Behind a 34-Plugin Platform
Every system in the LaunchThat ecosystem uses classical software engineering patterns — Strategy, Observer, Adapter, State Machine — and formal data structures. Here is how GoF patterns, SOLID principles, and CS fundamentals show up in production code.
Engineers use design patterns every day without naming them. The patterns are there — in how plugins register, how integrations route, how automation pipelines retry — but they live as code, not as vocabulary. This post maps the formal software engineering concepts behind the LaunchThat ecosystem to the production systems where they run.
This is not a textbook exercise. Every pattern, diagram, and data structure here comes from a shipped system with real users.
Design patterns in production
Strategy + Factory: the plugin registry
Portal's plugin system is a textbook Strategy pattern. The core defines an interface — what a "project management provider" must do — and plugins provide interchangeable implementations:
export interface ProjectManagementProvider {
  createProject(name: string): Promise<Project>;
  createTask(projectId: string, name: string): Promise<Task>;
  moveTask(taskId: string, column: string): Promise<void>;
  getTasks(projectId: string): Promise<Task[]>;
}
The Monday.com plugin implements this interface by wrapping their API. A Notion plugin could implement it differently. The core never knows which one is active — it calls the interface, and the active strategy handles the rest.
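To make the substitution concrete, here is a hedged sketch of a provider implementing the interface. The real Monday.com plugin wraps their API; this stand-in uses in-memory storage purely for illustration, and the `InMemoryProvider` name and `Project`/`Task` shapes are assumptions, not the production types.

```typescript
interface Project { id: string; name: string }
interface Task { id: string; projectId: string; name: string; column: string }

interface ProjectManagementProvider {
  createProject(name: string): Promise<Project>;
  createTask(projectId: string, name: string): Promise<Task>;
  moveTask(taskId: string, column: string): Promise<void>;
  getTasks(projectId: string): Promise<Task[]>;
}

// In-memory stand-in for a real vendor SDK, for illustration only.
class InMemoryProvider implements ProjectManagementProvider {
  private projects = new Map<string, Project>();
  private tasks = new Map<string, Task>();
  private nextId = 1;

  async createProject(name: string): Promise<Project> {
    const project = { id: String(this.nextId++), name };
    this.projects.set(project.id, project);
    return project;
  }
  async createTask(projectId: string, name: string): Promise<Task> {
    const task = { id: String(this.nextId++), projectId, name, column: "todo" };
    this.tasks.set(task.id, task);
    return task;
  }
  async moveTask(taskId: string, column: string): Promise<void> {
    const task = this.tasks.get(taskId);
    if (task) task.column = column;
  }
  async getTasks(projectId: string): Promise<Task[]> {
    return [...this.tasks.values()].filter((t) => t.projectId === projectId);
  }
}
```

Any caller typed against `ProjectManagementProvider` works identically whether this stand-in, the Monday.com wrapper, or a hypothetical Notion wrapper is behind it.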
The plugin registry acts as a Factory, resolving the active implementation at runtime:
export const pluginRegistry = {
  crm: {
    name: "CRM",
    component: () => import("launchthat-plugin-crm"),
    dependencies: ["contacts"],
    permissions: ["contacts", "companies", "deals"],
  },
  stripe: {
    name: "Stripe",
    component: () => import("launchthat-plugin-ecommerce-stripe"),
    dependencies: ["ecommerce"],
    permissions: ["billing", "subscriptions"],
  },
};
GoF classification: Strategy (interchangeable algorithms behind a common interface) combined with Abstract Factory (the registry resolves the concrete implementation based on configuration).
Observer: event-driven plugin communication
Plugins in Portal do not call each other directly. They communicate through a WordPress-inspired hook system — addAction and addFilter — which is the Observer pattern:
// Plugin A: Stripe
addAction("checkout.completed", async (event) => {
  await activateSubscription(event.sessionId);
});

// Plugin B: CRM
addAction("checkout.completed", async (event) => {
  await updateContactStatus(event.customerId, "paid");
});
The checkout event is the subject. Stripe and CRM are observers. Neither knows the other exists. The hook system dispatches events to all registered callbacks. Adding a new plugin that reacts to checkout — say, a loyalty points plugin — requires zero changes to existing code.
GoF classification: Observer (one-to-many dependency where subjects notify observers of state changes without coupling).
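The hook bus itself can be sketched in a few lines. This is a minimal illustration of the `addAction` dispatch mechanism described above, not Portal's actual implementation; the `doAction` name and the `Map`-of-arrays storage are assumptions.

```typescript
type Handler = (event: unknown) => void | Promise<void>;

// Hook name → list of registered observer callbacks.
const actions = new Map<string, Handler[]>();

function addAction(hook: string, handler: Handler): void {
  const list = actions.get(hook) ?? [];
  list.push(handler);
  actions.set(hook, list);
}

async function doAction(hook: string, event: unknown): Promise<void> {
  // Dispatch to every registered observer; none of them know the
  // others exist, and an unknown hook is simply a no-op.
  for (const handler of actions.get(hook) ?? []) {
    await handler(event);
  }
}
```

Adding the hypothetical loyalty points plugin is one more `addAction("checkout.completed", ...)` call; the dispatcher and existing observers are untouched.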
Adapter: the strangler migration
Phoenix Migration replaced a Flask monolith with FastAPI services using the Strangler Fig pattern. The critical piece was a compatibility adapter layer that preserved legacy response contracts during the transition:
// Adapter: FastAPI response shape → Flask-compatible response shape
class LegacyResponseAdapter {
  adapt(fastApiResponse: FastApiResponse): FlaskCompatResponse {
    return {
      data: fastApiResponse.result,
      status: fastApiResponse.statusCode === 200 ? "ok" : "error",
      timestamp: fastApiResponse.processedAt.toISOString(),
    };
  }
}
Legacy consumers expected { data, status, timestamp }. The new FastAPI services returned { result, statusCode, processedAt }. The adapter translated between them, allowing incremental migration without breaking existing clients.
Feature flags controlled traffic routing — some requests went to Flask, others to FastAPI through the adapter. Shadow runs compared outputs from both paths before cutting over.
GoF classification: Adapter (converts the interface of one class into the interface another expects).
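The shadow-run step mentioned above can be sketched as follows. This is an illustrative reconstruction, not the Phoenix Migration code: the `shadowCompare` name is an assumption, and a real comparison would log divergences to observability tooling rather than `console.warn`.

```typescript
interface FlaskCompatResponse { data: unknown; status: string; timestamp: string }

// Call both paths, serve the legacy result, and record any divergence.
// The new path can fail or disagree without affecting callers.
async function shadowCompare(
  legacyCall: () => Promise<FlaskCompatResponse>,
  adaptedCall: () => Promise<FlaskCompatResponse>,
): Promise<FlaskCompatResponse> {
  const legacy = await legacyCall();
  try {
    const adapted = await adaptedCall();
    if (JSON.stringify(adapted) !== JSON.stringify(legacy)) {
      console.warn("shadow mismatch", { legacy, adapted });
    }
  } catch (err) {
    console.warn("shadow path failed", err); // never surfaces to the caller
  }
  return legacy; // the legacy path stays authoritative until cutover
}
```

Once shadow runs stop reporting mismatches for a route, the feature flag can safely send that route's live traffic to the new service.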
State Machine: infrastructure provisioning
LaunchThatBot provisions servers through a state machine with defined transitions, guard conditions, and failure recovery:
create_server → wait_server_ready → verify_heartbeat → ready
                     ↓ timeout           ↓ no_heartbeat
                   error ←──────── retry (max 3)
Each state has explicit entry conditions, side effects, and allowed transitions. The provisioning workflow cannot skip states — a server must pass heartbeat verification before being marked ready, even if the cloud API reports it as running.
The state machine prevents a class of bugs that conditional logic invites. There is no way for a server to be "ready" without a verified heartbeat. There is no way to skip verification. The states are the source of truth.
GoF classification: State (an object alters its behavior when its internal state changes, appearing to change its class).
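One way to make "the states are the source of truth" literal is an explicit transition table, where illegal moves simply have no entry. This is an illustrative sketch using the state names from the diagram above, not LaunchThatBot's actual code, and it omits the retry counter for brevity.

```typescript
type State = "create_server" | "wait_server_ready" | "verify_heartbeat" | "ready" | "error";
type Event = "created" | "running" | "heartbeat_ok" | "timeout" | "no_heartbeat";

// Allowed transitions only; everything absent is illegal by construction.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  create_server: { created: "wait_server_ready" },
  wait_server_ready: { running: "verify_heartbeat", timeout: "error" },
  verify_heartbeat: { heartbeat_ok: "ready", no_heartbeat: "error" },
  ready: {},
  error: {},
};

function transition(state: State, event: Event): State {
  const next = transitions[state][event];
  // There is no path to "ready" that skips heartbeat verification:
  // the table contains no such edge.
  if (!next) throw new Error(`illegal transition: ${state} + ${event}`);
  return next;
}
```

The guard is structural rather than conditional: instead of scattering `if` checks, the table makes the impossible transitions unrepresentable.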
Template Method: the automation pipeline
BrowserLaunch's automation pipeline follows a Template Method pattern — every job runs through the same acquire/process/release lifecycle, but each stage's implementation varies:
async function processUrl(pool: BrowserPool, url: string) {
  const browser = await pool.acquire(); // Step 1: Acquire resource
  const context = await browser.createBrowserContext();
  const page = await context.newPage();
  try {
    await page.goto(url, { timeout: 30000 }); // Step 2: Execute task
    const data = await extractData(page); // Step 3: Extract result
    return { url, data, status: "success" };
  } catch (error) {
    return { url, error: String(error), status: "failed" };
  } finally {
    await context.close(); // Step 4: Release resource
    await pool.release(browser);
  }
}
The template is: acquire a browser, create an isolated context, execute the task, release the resource. What changes between jobs is the extractData step — ADA compliance scanning uses different selectors than job application automation. The lifecycle invariants (isolation, cleanup, error handling) are enforced by the template.
GoF classification: Template Method (defines the skeleton of an algorithm, deferring specific steps to subclasses or callables).
OOD principles: SOLID in practice
The plugin architecture is where SOLID principles become concrete decisions rather than textbook definitions.
Single Responsibility: each plugin owns exactly one domain. The CRM plugin manages contacts and deals. The Stripe plugin manages billing. The LMS plugin manages courses. None of them handle authentication, routing, or tenant isolation — that is the core's job.
Open/Closed: the plugin registry is open for extension (add a new plugin by importing its package and registering hooks) and closed for modification (adding a plugin never requires changing core code). This was a deliberate architectural constraint, not an accident.
Liskov Substitution: any implementation of ProjectManagementProvider is substitutable. If you swap the Monday.com plugin for a Notion plugin, every feature that depends on project management continues working. The core calls the interface, not the implementation.
Interface Segregation: each domain gets its own interface. CRM operations, billing operations, and project management operations are separate contracts. A plugin that only handles billing does not need to implement CRM methods. Compare this to a single PlatformPlugin interface with 50 methods — most of which any given plugin would stub out.
Dependency Inversion: the core depends on the ProjectManagementProvider abstraction, not on the Monday.com SDK. Swapping integrations means swapping the concrete implementation behind the interface, not rewiring the core. The direction of dependency points inward, toward the abstractions.
Diagrams from real systems
The following diagrams are not theoretical examples. Each one represents an actual system in production.
Sequence diagram: RelayFlow webhook pipeline
When an external service sends a webhook to RelayFlow, the event passes through signature verification, normalization, routing, and execution — with failure recovery at each stage:
The key design decision: the endpoint always returns 200 after signature verification, even if the handler fails. Failed events go to the DLQ for later replay. Returning errors to the external service would trigger their retry logic, creating duplicate events that complicate idempotency.
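The always-acknowledge decision can be sketched as a small handler wrapper. This is an illustration of the pattern described above, not RelayFlow's code: the `handleWebhook` name and in-memory DLQ array are stand-ins (the real dead-letter queue would be durable storage).

```typescript
interface WebhookEvent { id: string; payload: unknown }

// Stand-in for a durable dead-letter queue.
const deadLetterQueue: WebhookEvent[] = [];

async function handleWebhook(
  event: WebhookEvent,
  handler: (e: WebhookEvent) => Promise<void>,
): Promise<number> {
  try {
    await handler(event);
  } catch {
    // Park the failed event for later replay instead of returning an
    // error that would trigger the provider's retry logic.
    deadLetterQueue.push(event);
  }
  return 200; // always acknowledge once the signature has been verified
}
```

The provider sees a successful delivery either way; failure recovery becomes an internal replay concern rather than a duplicate-delivery problem.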
Entity-Relationship diagram: Portal multi-tenant schema
Portal V2's relational schema (before the Convex migration) normalized tenant, user, role, and plugin relationships:
The critical relationship: PLUGIN_CONFIG joins WORKSPACE and PLUGIN, scoping plugin enablement per tenant. ROLE_PERMISSION joins ROLE and PLUGIN, scoping permissions per plugin. This normalized schema prevented the data anomalies that plague denormalized multi-tenant designs — no orphaned permissions, no plugin configs pointing to deleted workspaces.
Activity diagram: BrowserLaunch automation pipeline
The browser automation pipeline processes URLs through discrete, isolated stages with retry logic and validation gates:
The activity diagram reveals a design decision that is not obvious from reading the code linearly: the browser pool recycles instances after 50 pages. This is the bounded memory leak strategy — instead of preventing leaks (which is impossible with browser engines), we bound them by destroying the browser before they accumulate.
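The recycle-after-50-pages rule can be sketched as a counter on the pool slot. This is illustrative only: the real pool wraps browser instances, which are modelled here by a generic parameter and a `relaunch` callback, and the `RecyclingSlot` name is an assumption.

```typescript
const MAX_PAGES_PER_BROWSER = 50; // bound the leak instead of preventing it

class RecyclingSlot<B> {
  private pagesServed = 0;
  constructor(private browser: B, private relaunch: () => B) {}

  // Called once per processed page; swaps in a fresh browser before
  // accumulated leaks in the old one become a problem.
  checkout(): B {
    if (this.pagesServed >= MAX_PAGES_PER_BROWSER) {
      this.browser = this.relaunch();
      this.pagesServed = 0;
    }
    this.pagesServed++;
    return this.browser;
  }
}
```

Memory use per browser is now bounded by whatever 50 pages can leak, regardless of how many URLs the pipeline processes overall.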
Data flow diagram: TraderLaunchpad candle pipeline
Market data flows through a pipeline with a clear boundary between the real-time operational layer (Convex) and the analytical layer (ClickHouse):
The boundary between Convex and ClickHouse maps to the boundary between mutable and immutable data. In-progress candles live in Convex where they can be upserted on every tick with reactive push to subscribers. Finalized candles live in ClickHouse where columnar storage makes range scans over millions of bars fast.
Data structures in production
Production systems do not use data structures because a textbook says to. They use them because the problem demands a specific access pattern.
Hash Map: request deduplication
The Monday.com caching layer uses a Map<string, Promise<unknown>> for in-flight request deduplication:
class RequestDeduplicator {
  private inflight = new Map<string, Promise<unknown>>();

  async fetch<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
    const existing = this.inflight.get(key);
    if (existing) return existing as Promise<T>;
    const promise = fetcher().finally(() => this.inflight.delete(key));
    this.inflight.set(key, promise);
    return promise;
  }
}
The Map provides O(1) lookup by request key. If three components request the same board metadata simultaneously, only one API call fires. The other two await the same promise. This cut Monday.com API calls from 47 to 28 per dashboard load.
The TTL cache uses the same structure — Map<string, CacheEntry<unknown>> — with an additional eviction check on read. Entries older than 2x their TTL are deleted on access, providing lazy garbage collection without a background sweep.
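The lazy-eviction read described above can be sketched like this. It is a minimal illustration of the 2x-TTL rule, not the production cache; the `TtlCache` name and the injectable `now` parameter (used here to make the behavior testable) are assumptions.

```typescript
interface CacheEntry<T> { value: T; storedAt: number; ttlMs: number }

class TtlCache {
  private entries = new Map<string, CacheEntry<unknown>>();

  set<T>(key: string, value: T, ttlMs: number, now = Date.now()): void {
    this.entries.set(key, { value, storedAt: now, ttlMs });
  }

  // Entries older than 2x their TTL are deleted on access, so stale
  // data is garbage-collected lazily, without a background sweep.
  get<T>(key: string, now = Date.now()): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (now - entry.storedAt > 2 * entry.ttlMs) {
      this.entries.delete(key);
      return undefined;
    }
    return entry.value as T;
  }
}
```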
Queue: distributed task processing
BrowserLaunch uses Convex's scheduled function system as a distributed task queue. Tasks are enqueued with enqueueTask, indexed by (queue, status, createdAt) for priority-ordered FIFO dequeue, and processed by workers that claim tasks with lease tracking:
enqueueTask → [pending] → worker claims → [running] → [completed/failed]
The index (queue, status, createdAt) enables efficient queries like "give me the oldest pending task in the ADA scanning queue" — an O(log n) index scan rather than a full table scan with filter.
B-tree indexes vs. columnar storage
Convex and ClickHouse use fundamentally different data structures for different access patterns:
Convex (B-tree indexes): optimized for point lookups and small range scans. The priceLiveCandles index on (sourceKey, tradableInstrumentId, resolution) finds the current candle for a specific instrument in O(log n). Perfect for "give me one document by its composite key."
ClickHouse (columnar MergeTree): optimized for scanning large ranges across few columns. The candles_1m table partitioned by toYYYYMM(ts) and ordered by (sourceKey, tradableInstrumentId, ts) can scan 10,000 bars in microseconds because it only reads the OHLCV columns, skipping everything else. Perfect for "give me the last 1,500 bars sorted by timestamp."
The choice is not about which is "better." It is about matching the data structure to the access pattern.
Set: connection tracking
Durable Objects use Set<WebSocket> for managing active connections:
class ChatRoom {
  connections = new Set<WebSocket>();

  onConnect(ws: WebSocket) {
    this.connections.add(ws);
  }

  broadcast(message: string) {
    for (const ws of this.connections) {
      ws.send(message);
    }
  }
}
A Set provides O(1) add, O(1) delete, and O(n) iteration — exactly the operations a connection pool needs. Using an array would make deletion O(n) and risk duplicates. Using a Map would add unnecessary key management.
Algorithms
Exponential backoff
The browser automation retry strategy uses exponential backoff with a cap:
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries) throw error;
      const delay = Math.min(1000 * Math.pow(2, attempt), 30000);
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw new Error("Unreachable");
}
The delay sequence is 1s, 2s, 4s — capped at 30s. The cap prevents absurd wait times on higher retry counts. The exponential curve avoids thundering herd problems when many tasks retry simultaneously after a transient failure.
Cache invalidation: stale-while-revalidate
The Monday.com TTL cache implements a two-phase invalidation algorithm:
- Serve stale: if the entry is older than its TTL but younger than 2x TTL, return it immediately and mark it stale
- Revalidate async: trigger a background refresh that updates the cache entry
- Evict dead: if the entry is older than 2x TTL, delete it and treat as a cache miss
This is a latency optimization. The user sees data immediately (even if 30 seconds old), and gets fresh data on the next render cycle. The alternative — blocking on a cache miss — adds 200-800ms of perceived latency for data freshness that users cannot distinguish.
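The three-phase read above can be sketched as a pure decision function. This is an illustration of the described algorithm, not the Monday.com cache itself; the `readWithSwr` name, the `Entry` shape, and the `revalidate` callback are assumptions.

```typescript
interface Entry<T> { value: T; storedAt: number; ttlMs: number }

function readWithSwr<T>(
  entry: Entry<T> | undefined,
  now: number,
  revalidate: () => void,
): { value?: T; state: "fresh" | "stale" | "miss" } {
  if (!entry || now - entry.storedAt > 2 * entry.ttlMs) {
    return { state: "miss" }; // dead or absent: treat as a cache miss
  }
  if (now - entry.storedAt > entry.ttlMs) {
    revalidate(); // kick off a background refresh, but serve immediately
    return { value: entry.value, state: "stale" };
  }
  return { value: entry.value, state: "fresh" };
}
```

Only the stale phase triggers a refresh; fresh hits skip it and dead entries fall through to the normal miss path.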
Dependency resolution: topological ordering
The plugin registry resolves dependencies using topological ordering. Enabling the CRM plugin automatically enables Contacts (its dependency). Disabling Contacts warns that CRM depends on it:
CRM → depends on → Contacts
E-Commerce → depends on → Stripe
LMS → depends on → Content, Calendar
The resolution algorithm walks the dependency graph, enables prerequisites first, and detects cycles (A depends on B depends on A) at registration time rather than at runtime.
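A depth-first topological sort with cycle detection is one standard way to implement this walk. The sketch below is illustrative, not the registry's actual resolver; the `resolveOrder` name and the plain `Record` dependency map are assumptions.

```typescript
// Returns plugins in enable order: every dependency precedes the
// plugin that needs it. Throws if the graph contains a cycle.
function resolveOrder(deps: Record<string, string[]>, target: string): string[] {
  const order: string[] = [];
  const state = new Map<string, "visiting" | "done">();

  function visit(plugin: string): void {
    if (state.get(plugin) === "done") return;
    if (state.get(plugin) === "visiting") {
      // Back edge in the DFS: A depends on B depends on A.
      throw new Error(`dependency cycle involving ${plugin}`);
    }
    state.set(plugin, "visiting");
    for (const dep of deps[plugin] ?? []) visit(dep); // prerequisites first
    state.set(plugin, "done");
    order.push(plugin);
  }

  visit(target);
  return order;
}
```

Running the check at registration time means a cyclic plugin graph fails loudly at startup instead of deadlocking an enable request at runtime.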
Idempotency: webhook deduplication
RelayFlow ensures that duplicate webhook deliveries do not create duplicate side effects. Each event gets an idempotency key combining provider, event type, and external ID:
const idempotencyKey = `${provider}:${eventType}:${externalId}`;
Before processing, the handler checks if this key has been seen. If it has, the event is acknowledged without re-executing. This is critical for webhook-based integrations where providers retry on timeout — a single Stripe checkout.session.completed event might arrive three times if the first acknowledgment was slow.
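The check-then-execute step can be sketched with a seen-set keyed by the idempotency key shown above. This is illustrative: the `handleOnce` name is an assumption, and in production the seen-set would live in durable storage with the same transactional scope as the side effect, not in process memory.

```typescript
// Stand-in for durable idempotency-key storage.
const seen = new Set<string>();

async function handleOnce(
  provider: string,
  eventType: string,
  externalId: string,
  effect: () => Promise<void>,
): Promise<boolean> {
  const idempotencyKey = `${provider}:${eventType}:${externalId}`;
  if (seen.has(idempotencyKey)) return false; // duplicate: ack without re-executing
  seen.add(idempotencyKey);
  await effect();
  return true; // first delivery: side effect executed
}
```

A retried delivery of the same event resolves to `false` and performs no work, so the provider's retries are harmless.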
The connection
None of these patterns exist in isolation. The Observer pattern enables the Strategy pattern — plugins register hooks (Observer) that provide domain-specific implementations (Strategy). The State Machine uses the Template Method for each state's entry/exit actions. The data structures underpin the algorithms — exponential backoff needs a counter, deduplication needs a Map, dependency resolution needs a graph.
The value of naming these patterns is not academic. It is communicative. When a new contributor joins the project and I say "the plugin system uses Strategy with an Observer-based event bus," they know the architecture before reading a single line of code. When I say "we use stale-while-revalidate with webhook-driven invalidation," they know the caching behavior without tracing through the implementation.
Formal software engineering vocabulary exists so that engineers can communicate precisely about complex systems. Every pattern in this post was in production before I named it. Naming it made it easier to discuss, document, and extend.
Want to see how this was built?
See the plugin architecture deep dive
Browse all posts