PulseAPI
Backend Engineer & Performance Architect
PulseAPI is a backend service optimized for read-heavy search and analytics workloads. It combines indexed SQL query paths, cursor-based pagination, and Redis-backed response caching with async aggregate jobs to keep latency predictable under load.
- Consistent traversal performance on large collections
- Baseline and iterative tuning loop with measurable deltas
- Expensive aggregate responses cached with explicit invalidation
- Latency and error thresholds monitored per endpoint class
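The cache-backed read path described above follows a cache-aside pattern: serve from cache when possible, fall back to the database, and invalidate explicitly when data changes. A minimal sketch in Python for illustration (the service itself is ASP.NET Core with Redis; `FakeCache`, `get_report`, and `load_from_db` are hypothetical stand-ins, not the project's actual API):

```python
import json
import time

class FakeCache:
    """In-memory stand-in for Redis; maps keys to (value, expiry timestamp)."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily drop expired entries
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def delete(self, key):
        self._store.pop(key, None)

cache = FakeCache()

def get_report(report_id, load_from_db):
    """Cache-aside read: serve from cache, fall back to the database on a miss."""
    key = f"report:{report_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    fresh = load_from_db(report_id)
    cache.set(key, json.dumps(fresh), ttl_seconds=300)
    return fresh

def invalidate_report(report_id):
    """Explicit invalidation when the underlying data changes."""
    cache.delete(f"report:{report_id}")
```

The TTL is a fallback; the explicit `invalidate_report` call on writes is what keeps reads from going stale between expirations.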
The Problem
Backend projects frequently claim scalability without demonstrating how they behave under realistic traffic, especially at the p95/p99 latency percentiles.
The Solution
Built an ASP.NET Core API with benchmark-driven tuning, query/index optimization, and cache-aware response strategies. Added load-test baselines, profiling loops, and operational metrics to validate improvements rather than relying on assumptions.
Technical Decisions
Key architecture decisions and their outcomes
Hybrid data access (Dapper + EF Core)
Needed expressive domain modeling and fine-grained performance control for hot queries.
Used EF Core for model consistency and Dapper for performance-critical query paths.
Maintained developer productivity while tuning the highest-impact endpoints.
Async precomputation for heavy aggregates
On-demand aggregate queries introduced latency spikes under concurrent load.
Moved expensive aggregation work into background jobs with cache hydration.
Read paths remained low-latency while preserving report fidelity.
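The decision above can be sketched as a background job that recomputes the expensive aggregate and hydrates a cache, while the hot read path only ever serves the precomputed snapshot. A minimal Python illustration (the real service uses async jobs and Redis; the function names and in-memory cache here are hypothetical):

```python
from datetime import datetime, timezone

# In-memory stand-in for the Redis cache the background job hydrates.
aggregate_cache = {}

def refresh_daily_totals(fetch_rows):
    """Background job: recompute the aggregate and hydrate the cache.

    fetch_rows stands in for the real database query,
    e.g. SELECT tenant_id, amount FROM orders WHERE ...
    """
    totals = {}
    for tenant_id, amount in fetch_rows():
        totals[tenant_id] = totals.get(tenant_id, 0) + amount
    aggregate_cache["daily_totals"] = {
        "computed_at": datetime.now(timezone.utc).isoformat(),
        "totals": totals,
    }

def read_daily_totals():
    """Hot read path: never computes, only serves the precomputed snapshot."""
    snapshot = aggregate_cache.get("daily_totals")
    return snapshot["totals"] if snapshot else None
```

Because readers never trigger the aggregation, concurrent traffic cannot stampede the database; staleness is bounded by the job's schedule rather than by cache expiry.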
Engineering Details
- Performance profile captured with baseline k6 suites before optimization work
- Cache keys include query fingerprint + tenant scope to avoid stale cross-context reads
- Rate limiting protects shared compute from abusive query patterns
- Tracing spans annotate DB calls, cache hits/misses, and serialization overhead
- Benchmark notes are versioned with code for transparent performance claims
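The query-fingerprint-plus-tenant-scope keying noted above can be sketched as: normalize the query parameters, hash them, and prefix the result with the tenant and endpoint. A Python illustration with hypothetical names (`cache_key` is not the service's actual API):

```python
import hashlib
import json

def cache_key(tenant_id, endpoint, params):
    """Build a cache key from a tenant scope plus a query fingerprint.

    Parameters are normalized (sorted keys, compact separators) before
    hashing so that logically identical queries share one fingerprint.
    """
    normalized = json.dumps(params, sort_keys=True, separators=(",", ":"))
    fingerprint = hashlib.sha256(normalized.encode("utf-8")).hexdigest()[:16]
    return f"{tenant_id}:{endpoint}:{fingerprint}"
```

Scoping the key by tenant is what prevents the stale cross-context reads the note warns about: two tenants issuing the same query can never collide on one cache entry.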
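Rate limiting of abusive query patterns is commonly implemented as a token bucket: each client's bucket refills at a steady rate up to a burst capacity, and a request is allowed only if a token is available. A minimal sketch, assuming a per-client bucket (the project summary does not specify the actual middleware or limits):

```python
import time

class TokenBucket:
    """Token-bucket limiter: refill at `rate` tokens/sec up to `capacity`."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)  # start full to allow an initial burst
        self.clock = clock  # injectable clock for testing
        self.last = clock()

    def allow(self):
        """Spend one token if available; refill based on elapsed time first."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Injecting the clock keeps the limiter deterministic under test, the same property that makes load-test baselines reproducible.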
Key Highlights
- Cursor-based pagination for large dataset traversal
- Index-aware query paths for dominant read scenarios
- Redis cache layer with targeted invalidation strategy
- Async aggregation jobs to keep hot paths responsive
- k6 load test baseline and optimization comparison reporting
- OpenTelemetry instrumentation for trace and latency visibility
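The cursor-based pagination highlighted above is keyset pagination: encode the last seen sort key into an opaque cursor and filter `WHERE id > :last_id LIMIT :limit` on the next request, so page cost stays constant regardless of offset. A small Python sketch over an in-memory list standing in for the SQL query (the helper names are illustrative):

```python
import base64
import json

def encode_cursor(last_id):
    """Wrap the last seen id in an opaque, URL-safe cursor."""
    return base64.urlsafe_b64encode(json.dumps({"id": last_id}).encode()).decode()

def decode_cursor(cursor):
    return json.loads(base64.urlsafe_b64decode(cursor))["id"]

def paginate(rows, cursor=None, limit=2):
    """Keyset pagination over rows sorted by id.

    Stands in for: SELECT ... WHERE id > :last_id ORDER BY id LIMIT :limit
    """
    last_id = decode_cursor(cursor) if cursor else 0
    page = [r for r in rows if r["id"] > last_id][:limit]
    # A short page means we reached the end; otherwise hand back a cursor.
    next_cursor = encode_cursor(page[-1]["id"]) if len(page) == limit else None
    return page, next_cursor
```

Unlike `OFFSET` pagination, the predicate `id > last_id` lets the database seek directly into the index, which is what keeps traversal performance consistent on large collections.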
Tech Stack
Skills & Technologies
Related Articles
Monday.com API Caching: How We Cut Response Times by 90%
Our Monday.com plugin made 47 API calls per dashboard load. Users waited 8 seconds for data they had seen 5 minutes ago. Here is how we built a caching layer that made the integration feel instant.
Real-Time Everything: Why We Stopped Polling and Never Went Back
Our trading dashboard polled every 5 seconds and users complained about stale data. We rebuilt on Convex with real-time subscriptions and the difference was not incremental — it was a different product.
Observability Across Five Production Systems
Monitoring, tracing, and instrumentation look different in every system we run. Here is what we actually measure, how we measure it, and what those measurements have caught — across Kubernetes infrastructure, integration pipelines, high-performance APIs, browser automation, and AI agents.
.NET vs Convex: When to Use a Traditional Backend vs a Reactive Platform
Both .NET and Convex can power production backends. After building multi-tenant SaaS platforms, high-performance APIs, and real-time dashboards across both, here is how I decide which one fits — and when the answer is both.