In Part 1, we explored why Stanza switched from vertical to horizontal scaling under explosive growth. In this post, we'll go over the architecture before the switch to better understand the transition.
From the beginning, our backend (a MEAN stack: MongoDB, Express, Angular, Node.js) was built around a microservice architecture. Well-defined, self-contained services allowed us to scale each part of our backend independently, even on a monolithic server. Endpoints rendering different frontend products were kept separate from one another and from API endpoints. Beyond the scaling benefits, this separation also made code management easier.
Because different services ran on the same physical machine, we used NGINX as a reverse proxy to route traffic based on the request path.
Each service was scaled by starting additional processes running the same code, each listening on a different port. NGINX load balanced traffic for a particular service across its processes via upstreams, in round-robin fashion.
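Concretely, this routing and load balancing setup might look something like the nginx.conf fragment below. The service names, paths, and ports here are illustrative, not our actual config:

```nginx
http {
    # Multiple processes of the same service, each on its own port.
    # Round-robin is NGINX's default balancing strategy for an upstream.
    upstream reader_service {
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
        server 127.0.0.1:8084;
    }

    upstream api_service {
        server 127.0.0.1:9000;
        server 127.0.0.1:9001;
    }

    server {
        listen 80;

        # Route by request path to the appropriate service's upstream.
        location /reader/ {
            proxy_pass http://reader_service;
        }

        location /api/ {
            proxy_pass http://api_service;
        }
    }
}
```

Adding capacity to a service was then just a matter of starting another process on a fresh port and adding one more `server` line to its upstream.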
Deployment was hand-spun and based on sending requests to a custom deploy service running on the server. That service would then run scripts to:
- fetch the latest code from the repository of the service being deployed
- check out the intended branch
- install dependencies
- restart the cloned processes so they run the new code (we used forever)
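The steps above could be sketched roughly as the shell script below. The service name, directory layout, and exact forever invocation are assumptions for illustration, not our actual deploy code; `DRY_RUN=1` (the default here) prints each command instead of executing it.

```shell
#!/bin/sh
# Sketch of the deploy steps; paths and names are hypothetical.
set -e

SERVICE="${1:-reader}"        # hypothetical service name
BRANCH="${2:-master}"         # branch to deploy
APP_DIR="/srv/$SERVICE"       # hypothetical checkout location
DRY_RUN="${DRY_RUN:-1}"       # default: only print the commands

# In dry-run mode, print each command instead of executing it.
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run cd "$APP_DIR"              # enter the service's checkout
run git fetch origin           # fetch the latest code
run git checkout "$BRANCH"     # check out the intended branch
run git pull origin "$BRANCH"  # fast-forward to the latest commit
run npm install                # install dependencies
run forever restart "$SERVICE" # restart processes to run the new code
```

Running `DRY_RUN=0 ./deploy.sh reader master` would execute the sequence for real against the hypothetical layout.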
Logs for each service were written to files, one per process (eg
service_1_8082.log). In our case, we piggybacked on forever's
built-in log-file handling to produce them.
Since logs were on the same physical machine, simple UNIX commands were sufficient for search.
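For example (the log contents and service name below are made up), grepping across every process's file covers the whole service at once:

```shell
# Create a couple of sample per-process log files (made-up contents)
# following the service_<instance>_<port>.log naming from above.
mkdir -p /tmp/logs-demo && cd /tmp/logs-demo
printf 'GET /api/books 200\nGET /api/books 500\n' > service_1_8082.log
printf 'GET /api/books 200\n' > service_1_8083.log

# Search every process of the service at once; grep prefixes each
# match with the file it came from.
grep ' 500' service_1_*.log
# -> service_1_8082.log:GET /api/books 500

# Follow a single process's log in real time (commented out here):
# tail -f service_1_8082.log
```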
SSL terminated at the NGINX layer before traffic was routed to individual processes.
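In NGINX terms, that termination might look like the following sketch (the domain and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;  # placeholder domain

    # Certificates live only at the proxy layer.
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        # Traffic is decrypted here and forwarded as plain HTTP
        # to a local backend process.
        proxy_pass http://127.0.0.1:8082;
    }
}
```

Backend processes only ever spoke plain HTTP on localhost, which kept certificate management in one place.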
Though we were scaling vertically at the server level, under the hood each service was horizontally scaled across multiple processes. Luckily for us, this model was a good head start toward horizontal scaling, as the principles were transferable. In Part 3, we'll look at the specific architecture and tools we use for horizontal scaling.