Continued from Part 1.
Moving our backend services to Lambda brought our first real challenges with serverless. Up to this point we had been using the Serverless framework to build and deploy the Lambda functions and the corresponding API Gateway. As a developer tool, the Serverless framework is great; it made getting up and running with Lambda very easy. The problems we faced when moving the backend, however, were twofold.
First, some of our packages needed to be built for the Lambda execution environment, namely the Python 3.6 cryptography and TensorFlow packages. At the time, this was not possible with the Serverless framework. After digging around in the AWS docs, we found the suggested approach was to build on an AWS image that essentially matched the Lambda environment.
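One common way to do this at the time was to compile the native dependencies inside a container that mirrors the Lambda runtime, such as the community lambci/lambda build images. A rough sketch, assuming Docker is available and using an illustrative package list (not our exact build):

```shell
# Install Python packages with native extensions (e.g. cryptography)
# inside a Lambda-like Amazon Linux container, so the compiled .so
# files match what the Lambda runtime expects.
docker run --rm -v "$PWD":/var/task lambci/lambda:build-python3.6 \
  pip install cryptography -t build/

# Zip the built dependencies together with the handler code to form
# the Lambda deployment package.
cd build && zip -r ../lambda-package.zip . && cd -
zip -g lambda-package.zip handler.py
```

The key point is that the wheels are built against the same OS and libc as the Lambda environment, so imports don't fail at runtime.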
Our Serverless Architecture
The second issue was following the traditional delivery flow we were all accustomed to: write code, test locally, commit to a repo, build, deploy, test more, roll to stage, and so on. With Lambda, local testing wasn't really possible, and the Serverless framework cut out the commit-build-deploy steps a bit. After reading some AWS blogs and docs, we decided to give AWS CodeStar a try. We had high expectations: we had set out to practice DevOps from day one, and it looked like CodeStar would help us get there.
AWS CodeStar essentially orchestrates a pipeline composed of AWS CodeCommit, CodeBuild, CodePipeline, CloudFormation, and the relevant AWS services. Like the Serverless framework, it was relatively easy to get up and running, requiring only minor changes to our deployment template. The benefit was that the deployed code was always the latest in the repo, so we never had two of us working with different versions. We were also able to work through our build issues by leveraging CodeBuild, and our environment-specific modules were now building and loading properly on Lambda. Progress was good for a while, but we soon ran into snags.
AWS CodeStar is slow, very slow. To be fair, it might not be CodeStar itself; it could be one of the underlying services like CodePipeline or CloudFormation. For some of our services, every change took 12 minutes to deploy. Being able to test more locally would have been a huge help, but there wasn't a good way to do so that didn't involve writing a bunch of extra code. The other problem was working in parallel. With CodeStar, one pipeline is attached to one repo branch. That is good from a continuous integration and delivery standpoint, but it kept us from a multistage DevOps model with one stable stage and a development stage in front of it. More investigation led us to realize that we would need either separate pipelines for each stage or a way to chain them together. We couldn't figure out how to chain them together well, and the speed was bad enough that we decided to move in a different direction.
We started liking the Lambda and API Gateway concepts of stages and aliases. Aliases gave us the ability to promote a working Lambda from one stage to another very quickly. We set up our API Gateway stages for Dev, Stage, and Prod, and used a stage variable for the Lambda alias so the proper version of the function is called based on the API entry point. Unfortunately, we couldn't do this with a CloudFormation template, as it didn't support Lambda versioning, so we resorted to the CLI to get Lambda deployment, aliasing, and promotion done. The CLI had one great benefit over CloudFormation: it was much faster.
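The publish-and-promote flow can be sketched with the AWS CLI. The function and alias names below are hypothetical, and the sketch assumes the API Gateway integration invokes the function as `my-service:${stageVariables.lambdaAlias}` so each stage calls its own alias:

```shell
# Publish whatever is currently at $LATEST as an immutable, numbered version.
VERSION=$(aws lambda publish-version \
  --function-name my-service \
  --query Version --output text)

# Point the dev alias at the freshly published version.
aws lambda update-alias \
  --function-name my-service \
  --name dev \
  --function-version "$VERSION"

# Promotion to the next stage is just repointing an alias --
# no rebuild and no redeploy of the code itself.
aws lambda update-alias \
  --function-name my-service \
  --name stage \
  --function-version "$VERSION"
```

Because an alias update is a single API call, promoting a tested version from Dev to Stage to Prod takes seconds rather than a full pipeline run.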
The final step we took to drive towards a faster and simpler serverless development process was to reprioritize our plans for our own service. RigD happens to be a platform that brings development process activity into a collaborative channel such as Slack, so we shifted our priorities to start supporting the activities we needed for our serverless development process. Now we had everything in a single place, where the whole team has visibility. Our pipeline process for adding new components looks like this:
- Create a repo and commit code (this includes the codified build spec)
- Create and run the build (this outputs to S3)
- Deploy any Lambdas, DynamoDB tables, and API Gateways
- Create environment stages
- Test and promote versions to new stages
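The steps above map roughly onto AWS CLI calls. This is a sketch under stated assumptions: the repo, project, function, role ARN, bucket, and API ID are all placeholders, and the build spec lives in the repo as `buildspec.yml`:

```shell
# 1. Create a repo to hold the code and the codified build spec.
aws codecommit create-repository --repository-name my-service

# 2. Create and run a build whose artifacts land in S3.
aws codebuild create-project --cli-input-json file://build-project.json
aws codebuild start-build --project-name my-service-build

# 3. Deploy the Lambda from the built artifact in S3.
aws lambda create-function \
  --function-name my-service \
  --runtime python3.6 \
  --handler handler.main \
  --role arn:aws:iam::123456789012:role/lambda-exec \
  --code S3Bucket=my-build-artifacts,S3Key=my-service.zip

# 4. Create an API Gateway stage whose stage variable selects the alias.
aws apigateway create-deployment \
  --rest-api-id abc123 \
  --stage-name dev \
  --variables lambdaAlias=dev
```

Step 5, testing and promotion, is then the alias-repointing flow described above rather than a redeploy.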
At this point in our journey we believe we have an efficient process in place for developing and deploying a serverless application. We can spend more time getting new content and features in place than we did using containers. The bonus prize was actually in the economics of going serverless.
Here is a breakdown of our architecture costs once we finished our move to Serverless:
- Lambdas all run on 128 MB of memory; running one activity through our system consumes about 5,000 ms of Lambda time, or roughly $0.0000104 per activity
- DynamoDB: we operate with only 15 WCUs and RCUs across our 7 tables and indexes, for a fixed total of $8.50 per month
- We have three API Gateways, which have no fixed cost but currently run us about $11 total per month, including network traffic
- Our DynamoDB storage costs are about $45 at present, but would be about $230 per month if we had the same 1TB we reserved in our old architecture
We basically have fixed costs of about $54 per month, and even if we used the system heavily, say 1,000,000 activities a month, the total cost would still only be about $260.
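The per-activity figure falls out of Lambda's per-100 ms billing. Assuming the 2017-era rate of roughly $0.000000208 per 100 ms at 128 MB (check current pricing), the arithmetic works out as:

```shell
# 5,000 ms per activity, billed in 100 ms increments at the 128 MB rate.
awk 'BEGIN { printf "per activity: $%.7f\n", (5000 / 100) * 0.000000208 }'

# At 1,000,000 activities a month, that is about $10.40 of compute
# on top of the roughly $54/month of fixed DynamoDB and storage costs.
awk 'BEGIN { printf "per month:    $%.2f\n", 1000000 * (5000 / 100) * 0.000000208 }'
```

The rest of the monthly total at that volume comes from per-request charges on Lambda and the API Gateways, which scale with traffic rather than being fixed.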
The following cost comparison was dramatic:
Moving to a fully serverless architecture was a journey of trial and error, and it probably set us back 3 months. However, the economics thus far are fantastic. The major limiting factor was the lack of tools truly geared towards serverless development. Thankfully, we were able to solve that by refocusing what RigD supported. In addition to being super fast and easier to use than the console or CLI, we also gained all the benefits that come from good dogfooding!
Check out RigD.io for more on our serverless architecture journey.