My First Microservice

Last year I built a small app for a blogger friend of mine. It saved him the trouble of downloading a file just to rename it before uploading it to the blog. He supplied a new name and a link to the file, pushed a button, and it was done automagically.

It saved him money because he didn’t have to consume his data plan downloading the media files he needed to host for the blog.

It was also much faster than doing it manually. I tested it with a 500MB file and it was transferred in 19 seconds. No way he could have downloaded, renamed and uploaded that fast.

(There’s a video of it in action in this old post)

Not long after it was built, I started toying with the idea of setting it up as an app others could use. It took a few months to settle on the core features and an architecture for public use, but I’m finally happy with the details.

The original was architecturally a monolith. The http server, ftp client and socket.io server all lived in the same file. While that was fine for a single user or a team, it would have started showing signs of stress as soon as it was opened to the public. It needed to be decoupled.

In my defense, they weren’t that tightly coupled. I used an event system to keep each part independent even while everything lived in one file, so it was relatively easy to see a path from my monolith to a microservice.
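Here’s roughly what that pattern looks like. This is a minimal sketch, not the actual code; the event names and job shape are my own stand-ins:

```ts
import { EventEmitter } from "node:events";

// One shared bus: each component knows event names, not the other components.
const bus = new EventEmitter();

// http handler side: hand the job off and return immediately
function onPushRequest(job: { fileUrl: string; newName: string }) {
  bus.emit("push:request", job);
}

// ftp side: pick jobs up and report progress back on the same bus
bus.on("push:request", async (job) => {
  // ...stream the file from job.fileUrl to the FTP host here...
  bus.emit("push:progress", { job, percent: 100 });
});

// socket.io side: forward progress updates to the browser
bus.on("push:progress", (update) => {
  // io.emit("progress", update);
});
```

Swap the bus for a message broker and each `bus.on` handler can become its own process, which is exactly the path I followed.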

I started by pulling the ftp component out onto its own server. It was an obvious first choice: it’s the most process-intensive part, so extracting it gives me the most architectural breathing room. I can pull the rest apart as they outgrow the monolith.

The first challenge was keeping all the separate components in sync.

In the monolith, the user’s push request initiated a socket.io connection that communicated the progress of the push job.

Now that the ftp push is on its own server, it needs a way to communicate with all the components reliably. It needs to collect jobs from the http server and send progress updates to the socket.io server.

I moved the communication between components from an event system over to RabbitMQ and Redis.

FTP push requests are put on a RabbitMQ queue for the ftp server to pick up. That buys a lot of assurances: if the server fails to push a job, the unacknowledged message goes back on the queue, and because the queue is durable (and the messages are marked persistent), requests survive even a RabbitMQ crash.
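A sketch of that setup with amqplib; the queue name, connection URL, and job shape are placeholders I’ve picked for illustration:

```ts
import amqp from "amqplib";

const QUEUE = "ftp-push"; // illustrative name

// http server side: enqueue a push job
export async function enqueue(job: { fileUrl: string; newName: string }) {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true }); // queue survives a broker restart
  ch.sendToQueue(QUEUE, Buffer.from(JSON.stringify(job)), {
    persistent: true, // message is written to disk as well
  });
}

// ftp server side: consume jobs, requeue on failure
export async function consume() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.prefetch(1); // one push at a time
  await ch.consume(QUEUE, async (msg) => {
    if (!msg) return;
    try {
      const job = JSON.parse(msg.content.toString());
      // ...perform the FTP push here...
      ch.ack(msg); // success: remove the job from the queue
    } catch {
      ch.nack(msg, false, true); // failure: put the job back on the queue
    }
  });
}
```

Note that the queue has to be declared durable and each message marked persistent; either one alone isn’t enough for jobs to survive a broker restart.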

It would be overkill to send progress updates over RabbitMQ; if one update is dropped, the next one supersedes it anyway. So I use Redis pub/sub to communicate updates to the socket.io server, which relays them to the client.
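Something like this, with the node-redis v4 client; the channel name and payload shape are again my own stand-ins:

```ts
import { createClient } from "redis";
import { Server } from "socket.io";

const CHANNEL = "push:progress"; // illustrative name

// ftp server side: fire-and-forget progress updates
const pub = createClient();
await pub.connect();
export async function publishProgress(jobId: string, percent: number) {
  await pub.publish(CHANNEL, JSON.stringify({ jobId, percent }));
}

// socket.io server side: a second connection in subscriber mode
const io = new Server(3000);
const sub = createClient();
await sub.connect();
await sub.subscribe(CHANNEL, (message) => {
  const { jobId, percent } = JSON.parse(message);
  io.to(jobId).emit("progress", percent); // assumes each client joins a room per job
});
```

If a subscriber misses an update, nothing is lost that the next update doesn’t replace, which is the trade-off that makes pub/sub the right fit here.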

Of course, the monolith itself needed to connect to Redis and RabbitMQ too. That was trivial, though: publishing to a queue or a channel isn’t dramatically different from emitting an event.

I’d learned my lesson about testing, so I modularized the ftp server to make it testable.

I created an API that abstracts the ftp server’s two responsibilities, receiving jobs and sending messages, and exposes them to the test runner.
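In spirit it’s plain dependency injection. A hypothetical sketch (none of these names come from the actual app):

```ts
// The worker takes its job source and its update sink as parameters
// instead of reaching for RabbitMQ/Redis directly, so a test can drive
// it with plain functions.
type Job = { id: string; fileUrl: string; newName: string };

interface FtpWorkerDeps {
  onJob: (handler: (job: Job) => Promise<void>) => void; // where jobs come from
  sendProgress: (jobId: string, percent: number) => void; // where updates go
  push: (job: Job) => Promise<void>; // the actual FTP transfer
}

export function createFtpWorker(deps: FtpWorkerDeps) {
  deps.onJob(async (job) => {
    await deps.push(job);
    deps.sendProgress(job.id, 100);
  });
}

// In production, onJob wraps the RabbitMQ consumer and sendProgress wraps
// a Redis publish. In a test, both are stubs:
const sent: Array<[string, number]> = [];
createFtpWorker({
  onJob: (handler) => {
    handler({ id: "job-1", fileUrl: "https://example.com/file", newName: "renamed" });
  },
  sendProgress: (id, pct) => sent.push([id, pct]),
  push: async () => {}, // no real FTP in tests
});
```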

At this point, everything is feature complete compared to the original monolith, with the added benefit that the most intensive process is now isolated, independently scalable, and tested!

I’m still a little ways away from releasing the app, but I’m really loving this microservice architecture, so I’ll continue building new components in this pattern.