When NOT to migrate to microservices
Before discussing how, let's discuss when you shouldn't:
- Your team has fewer than 10 engineers: the operational overhead of microservices (networking, observability, independent deploys) outweighs the benefits.
- Your monolith works fine: "microservices" is not a synonym for "modern." A well-structured monolith is easier to maintain.
- You don't have differentiated scaling problems: if every part of the system needs the same capacity, microservices add complexity without benefit.
The right reason to migrate: you need to scale, deploy, or evolve parts of the system independently, and your monolith has become an organizational bottleneck.
Signals that it's time to migrate
- Deploys are risky: a change to the payments module forces a redeploy of everything, including the product catalog.
- Teams block each other: the checkout team waits for the inventory team to finish a feature because they share the same codebase.
- Disproportionate scale: your search API handles 100× more traffic than your reporting module, but they run on the same servers.
- Long release cycles: releases happen monthly because coordinating every change in one deploy is complex.
Strategy: Strangler Fig, not Big Bang
Never rewrite everything from scratch. The Strangler Fig strategy works like this:
- Identify the most independent domain with the largest separation benefit
- Extract it as a service behind the same API
- Route traffic gradually (10% → 50% → 100%)
- Repeat with the next domain
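The gradual traffic shift can be sketched as a deterministic percentage rollout: hash a stable key (here a user ID; the function name and thresholds are illustrative, not a prescribed router) so each user consistently lands on the old or new implementation, and raising the percentage never moves anyone back to the old path.

```python
import hashlib

def route_to_new_service(user_id: str, rollout_percent: int) -> bool:
    """Deterministically bucket users into 0-99 by hashing a stable key.

    The same user always lands in the same bucket, so they see a
    consistent implementation, and because buckets below 10 are also
    below 50, growing the rollout never flips a user backwards.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

# At 10%, roughly one in ten users hits the extracted service.
users = [f"user-{i}" for i in range(1000)]
on_new_path = [u for u in users if route_to_new_service(u, 10)]
```

The same idea is what weighted routing in a gateway or service mesh does for you; the point of hashing a stable key (rather than picking randomly per request) is that a user never bounces between implementations mid-session.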
Step 1: Bounded Contexts
Before touching code, map the bounded contexts with the team:
Current monolith:
├── Users & Auth → Context: Identity
├── Product Catalog → Context: Catalog
├── Shopping Cart → Context: Cart
├── Checkout & Payments → Context: Payments
├── Order Fulfillment → Context: Fulfillment
├── Notifications → Context: Notifications
└── Reporting → Context: Analytics
Each context has its own data, its own rules, and an owning team. If two contexts share tables in the database, you must resolve that dependency before separating them.
Step 2: The first extraction
Pick the service that:
- Has the lowest coupling with the rest of the system
- Has the highest benefit from independent scaling
- Has an owning team that is motivated and available
Most of the time, Notifications or Analytics are good initial candidates: they have few inbound dependencies, they sit off the critical transaction path, and a failure there won't take checkout down with it.
Step 3: The communication pattern
For inter-service communication:
| Pattern | Use it when | Avoid when |
|---|---|---|
| Synchronous HTTP/REST | Simple queries, low latency required | Operations that may fail and need retry |
| Asynchronous events (Kafka/NATS) | Notifications, analytics, eventual consistency | When you need an immediate response |
| gRPC | High-volume internal communication | Public APIs, browser compatibility |
Rule of thumb: if service A needs to wait for B's response to continue, use synchronous. If A only needs B to eventually know, use events.
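The rule of thumb above can be made concrete with a toy in-process event bus standing in for a real broker like Kafka or NATS (all names here are illustrative, not a real client library): the stock check blocks because checkout needs the answer now, while the order event is fire-and-forget.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process stand-in for a message broker."""
    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Fire-and-forget: the publisher does not wait on any handler's result.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
received = []
bus.subscribe("order.placed", lambda e: received.append(e))

# Checkout needs the stock answer *now*: synchronous call (stubbed here).
def check_stock(sku: str) -> bool:
    return sku != "OUT-OF-STOCK"

if check_stock("SKU-123"):
    # Notifications only needs to *eventually* know: publish an event.
    bus.publish("order.placed", {"order_id": "ord-1", "sku": "SKU-123"})
```

With a real broker the handler runs in another process and the publisher keeps going even if the subscriber is down, which is exactly why events fit notifications and analytics but not the stock check.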
Anti-patterns we've seen
The distributed monolith
Service A → Service B → Service C → Service D → Shared database
If all your microservices share a database and call each other in a synchronous chain, you don't have microservices — you have a distributed monolith with extra latency and more failure points.
The nano-service
One service per database entity. UserService, AddressService, PhoneService… If a single domain change requires modifying 5 services, the granularity is excessive.
CRUD-as-a-service
If your "microservice" is a wrapper around a table with GET/POST/PUT/DELETE endpoints and zero business logic, it's not a service — it's an unnecessary data-access layer.
What you need from day 1
Don't extract the first service without these in place:
Observability
- Distributed tracing (Jaeger/Tempo): every request must be traceable across services
- Centralized logs (ELK/Loki): one place to look for errors
- Metrics (Prometheus/Grafana): p50/p95/p99 latency per service, error rate, saturation
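Real tracers (Jaeger or Tempo, typically fed via OpenTelemetry) handle this for you, but the core mechanism is just propagating one correlation ID on every hop. A hand-rolled sketch, not the OpenTelemetry API, with a hypothetical header name:

```python
import uuid

TRACE_HEADER = "X-Trace-Id"  # illustrative header name

def incoming_trace_id(headers: dict) -> str:
    """Reuse the caller's trace ID, or mint one if this is the edge hop."""
    return headers.get(TRACE_HEADER) or str(uuid.uuid4())

def outbound_headers(incoming: dict) -> dict:
    """Forward the same trace ID on every downstream call, so logs and
    metrics from all services can be joined on one request."""
    return {TRACE_HEADER: incoming_trace_id(incoming)}

# Hop 1 (the gateway) mints the ID; hops 2 and 3 just pass it along.
hop1 = outbound_headers({})
hop2 = outbound_headers(hop1)
hop3 = outbound_headers(hop2)
```

If every service logs this ID and forwards it, "one place to look for errors" becomes a single query: filter the centralized logs by trace ID and you see the whole request path.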
Per-service CI/CD
Each service has its own pipeline:
- Independent build
- Independent tests
- Independent deploy
- Independent rollback
If a Payments deploy fails, it should never affect Catalog.
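As a sketch of what "its own pipeline" can look like in GitHub Actions, path filters scope each workflow to one service's directory, so a Payments change never even triggers the Catalog pipeline (the repository layout, file names, and build commands below are assumptions, not a prescribed setup):

```yaml
# .github/workflows/payments.yml  (hypothetical: one workflow per service)
name: payments
on:
  push:
    paths:
      - "services/payments/**"   # only changes to this service run this pipeline
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make -C services/payments test
      - run: make -C services/payments deploy   # independent deploy and rollback
```

In a polyrepo, the isolation is automatic; in a monorepo, path filters like these are the usual way to get it.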
Service mesh or API gateway
For routing, rate limiting, circuit breaking and mTLS between services. Options:
- Istio/Linkerd for Kubernetes
- Kong/Traefik as API gateway
- NGINX with manual config (viable for < 10 services)
Real case: from 6-week releases to daily deploys
A marketplace we worked with had:
- 4-year-old Python/Django monolith
- 15 engineers across 3 teams
- 6-week releases with a 4-hour deploy window
- 3 major incidents in the previous 6 months from failed deploys
What we did
- Month 1: mapped bounded contexts, identified 6 domains, prioritized Notifications and Analytics as first extractions
- Months 2-3: extracted Notifications as a Go service with NATS for events. Stood up observability (Grafana + Jaeger + Loki)
- Months 3-4: extracted Payments as a Go service with its own PostgreSQL. Implemented the saga pattern for coordination with the monolith
- Months 4-5: independent CI/CD per service in GitHub Actions. Feature flags with LaunchDarkly
- Months 5-6: extracted Catalog. By now the team had mastered the pattern
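The saga coordination from months 3-4 can be sketched as a list of (action, compensation) pairs: on failure, the steps that already succeeded are undone in reverse order, instead of holding a distributed transaction across the monolith and the new service (step names here are illustrative, not the client's actual code):

```python
class SagaError(Exception):
    pass

def run_saga(steps):
    """Run (action, compensation) pairs in order. If a step fails,
    execute the compensations of the completed steps in reverse."""
    compensations = []
    try:
        for action, compensate in steps:
            action()
            compensations.append(compensate)
    except SagaError:
        for compensate in reversed(compensations):
            compensate()
        return False
    return True

def fail(reason):
    raise SagaError(reason)

log = []
ok = run_saga([
    (lambda: log.append("charge card"), lambda: log.append("refund card")),
    (lambda: fail("stock reservation failed"),
     lambda: log.append("release stock")),
])
# The failed second step triggers the compensation of the first.
```

The trade-off versus a two-phase commit is that the system is briefly inconsistent (the card is charged before the refund lands), which is acceptable exactly when the business can tolerate eventual consistency.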
Results
| Metric | Before | After |
|---|---|---|
| Deploy frequency | Every 6 weeks | Daily |
| Deploy window | 4 hours | < 5 minutes |
| Incidents per deploy | ~0.5 | < 0.05 |
| Onboarding time | 3 weeks | 1 week (per service) |
| Team scale | 15 → blocked | 15 → 3 autonomous squads |
Business & commercial impact
How Numoru sells a migration
The productized offer is an Architecture Fitness Assessment ($9,500, 3 weeks) that produces a yes/no on whether microservices make sense and, if yes, the prioritized seam list. From there, a phased retainer drives the strangler program — typically 9-18 months — with clear exit criteria instead of an open-ended rebuild.
[Chart: Migration engagement ticket by buyer (Numoru, USD)]

Sources: Martin Fowler — Strangler Fig / MonolithFirst; DORA / State of DevOps.
30-engineer SaaS strangler migration (12 months)

| Line item | Year-1 impact (USD) |
|---|---|
| Migration (one-time) | −$140,000 |
| MSP (12 mo × $6k) | −$72,000 |
| Infra overhead | −$40,000 |
| Revenue velocity captured | +$780,000 |
| Hiring runway extended (slower ramp needed) | +$240,000 |
| Net year-1 contribution | +$768,000 |
Architecture Fitness Assessment:
- DDD boundary review
- Deploy-graph mapping
- Seam prioritization
- Exit criteria + scorecard

Strangler retainer:
- Seam-by-seam extraction
- Observability-first setup
- Platform team enablement
- Quarterly exec review
- Rollback plans per phase

Managed services (MSP):
- CI/CD + infra hardening
- Cost review + right-sizing
- Quarterly architecture audit
- On-call for incidents
Conclusion
Migrating to microservices is not a technical project — it's an organizational project that uses technical techniques. If your team structure doesn't change, your services will end up replicating the monolith's dependencies (Conway's Law).
Start with the most independent domain, invest in observability before extracting anything, and remember: the goal is not to have microservices — it's to have teams that can deliver value independently.