No, setting up a VM and deploying your on-premise application on it is not the modern way to build applications in the cloud!
Cloud-native applications should leverage the managed services of the cloud provider as much as possible.
These services are building blocks that keep developers from reinventing the wheel: code is written only for business logic, and the rest is just configuration of cloud-native services.
Why?
No need to manage servers: no provisioning, patching, or maintenance
Less code to manage: only configuration of pre-built features
Pay-as-you-go pricing: you only pay for the resources you use
Instead of having one monolithic server, your code (and configuration) is split across different services, such as (in the AWS context):
Lambda for short-running tasks
Fargate for long-running tasks
SQS for queuing
Step Functions for orchestration
API Gateway for APIs
DynamoDB for serverless databases
S3 for file storage
and more...
Note: While this article focuses on AWS, Azure and GCP offer comparable services.
However, decentralizing an application makes some tasks more difficult:
Monitoring: Alerts need to be centralized
Observability: Logs are scattered across services, making it difficult to understand what is happening
Testing: Managed services live only in the cloud, making local testing difficult
In this article, I want to focus on testing cloud-native applications: while unit tests remain the same between monolithic and serverless applications, integration tests have to be approached differently.
Let's take the example of a simple API writing to a table.
In AWS, API Gateway would usually be used to expose a route in combination with a Lambda function writing to a DynamoDB table.
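As a rough sketch, the Lambda handler for such a route might look like this (the table name, environment variable, and event shape are illustrative assumptions, not the article's actual code):

import json
import os

import boto3

# Hypothetical table name, injected through the function's environment.
TABLE_NAME = os.environ.get('TABLE_NAME', 'items')
table = boto3.resource('dynamodb').Table(TABLE_NAME)

def handler(event, context):
    # With an API Gateway proxy integration, the request body arrives as a string.
    item = json.loads(event['body'])
    table.put_item(Item=item)
    return {'statusCode': 200, 'body': json.dumps(item)}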
Apart from the business logic contained in the Lambda (tested by unit tests), this workflow can fail in various places: the API Gateway integration, the Lambda's IAM permissions, the DynamoDB call itself, etc.
This raises the following question: how to properly test such a pattern before deploying it to production?
Option 1: Mock
With pytest, a Lambda function handler can be run locally, using mocks to simulate the other AWS services and check that the integrations are correct.
Some Python packages, such as moto, are specifically designed to mock AWS services locally.
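Here is a minimal sketch of such a test, reusing the handler sketched earlier (assuming moto 5, where the mock_aws decorator replaced the per-service decorators like mock_dynamodb):

import json
import os

import boto3
from moto import mock_aws

# moto needs a region; Lambda would normally provide one.
os.environ.setdefault('AWS_DEFAULT_REGION', 'us-east-1')

@mock_aws
def test_handler_writes_item():
    # Recreate the table the handler expects, inside the mocked context.
    dynamodb = boto3.resource('dynamodb')
    dynamodb.create_table(
        TableName='items',
        KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
        AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
        BillingMode='PAY_PER_REQUEST',
    )

    # Import inside the mocked context so the handler's boto3 resource
    # targets the mock rather than real AWS.
    from handler import handler
    handler({'body': json.dumps({'id': '42'})}, None)

    item = dynamodb.Table('items').get_item(Key={'id': '42'})['Item']
    assert item['id'] == '42'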
However, this approach has some limitations:
All app resources need to be created in the test context (DynamoDB tables, buckets, queues, etc.), which can be quite time-consuming to implement for end-to-end testing
Moto does not offer 100% coverage of AWS functionalities
Option 2: LocalStack
LocalStack is a tool that emulates a complete AWS platform locally.
This can be useful for testing serverless applications without having to deploy them to AWS.
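In practice, tests point the AWS SDK at LocalStack's local endpoint instead of real AWS (a sketch, assuming LocalStack's default edge port 4566; LocalStack accepts dummy credentials):

import boto3

# LocalStack serves all services behind a single local edge endpoint.
dynamodb = boto3.resource(
    'dynamodb',
    endpoint_url='http://localhost:4566',
    region_name='us-east-1',
    aws_access_key_id='test',
    aws_secret_access_key='test',
)
# From here, the same create_table / put_item calls as against real AWS apply.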
However, there are some limitations to using LocalStack:
Difficult to maintain: the LocalStack configuration is complex, and problems are hard to troubleshoot.
Some AWS features are not implemented in LocalStack, and others may not work exactly the same way.
For these reasons, LocalStack can be a good option for simple applications, but it is not suitable for complex applications.
Option 3: Temporary Environment
Modern Infrastructure as Code (IaC) frameworks like Serverless Framework, SST, or AWS SAM have a stage (or environment) option that can be specified when deploying an application:
SLS:
serverless deploy --stage staging
SST:
sst deploy --stage staging
SAM:
sam deploy --config-env staging
The resources are then deployed in a completely isolated CloudFormation stack and prefixed with the stage name.
This allows several versions of the app to live in the same AWS account concurrently.
Feature-specific temporary environments can be created (linked to a branch) and then deleted when the development is finished.
SLS:
serverless remove --stage staging
SST:
sst remove --stage staging
SAM:
sam delete --stack-name staging
A temporary environment does not necessarily mean a temporary AWS account.
Usually, teams split their development environment into different accounts: DEV, TEST, PROD
When testing a new feature, developers can test their code in a stage linked to their branch within the DEV account.
With this system, each developer can run integration tests and add test data without impacting others.
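For example, an integration test can target the feature stage's real API endpoint (a sketch: the API_URL variable and the /items route are illustrative, with the URL typically copied from the deploy output):

import os

import requests

# Hypothetical: the feature stage's API Gateway URL, exported after deployment.
API_URL = os.environ['API_URL']

def test_create_item():
    # This exercises the real, isolated feature-stage stack, not a mock.
    response = requests.post(f'{API_URL}/items', json={'id': '42'}, timeout=10)
    assert response.status_code == 200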
However, you may want to share resources between stages, such as SST parameters, API keys saved in Secrets Manager, etc.
These parameters are set in the main stage of the account but should be reusable in the feature stages.
This can be done easily by passing an additional custom variable to your stack that defines the “main” stage to fall back on.
SLS deploy to feature stage:
serverless deploy --stage feature1 --mainstage dev
SLS deploy to dev stage:
serverless deploy --stage dev
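A minimal sketch of how this could be wired into the function's environment in serverless.yml (assuming a Serverless Framework version that resolves arbitrary CLI options through ${opt:...}; the mainstage option name is ours):

provider:
  name: aws
  environment:
    STAGE: ${opt:stage, 'dev'}
    # Empty when --mainstage is not passed, e.g. on the dev stage itself.
    MAINSTAGE: ${opt:mainstage, ''}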
In your Lambda code, you can then simply fall back to the MAINSTAGE environment variable in a feature stage. With this trick, the dev stage and the feature stage will reference the same secret in your account.
import os

import boto3

client = boto3.client('secretsmanager')

# Default to the current stage...
STAGE = os.environ.get('STAGE')
# ...but fall back to the main stage when one is configured (feature stages).
MAIN_STAGE = os.environ.get('MAINSTAGE')
if MAIN_STAGE:
    STAGE = MAIN_STAGE

def handler(event, context):
    # The dev stage and its feature stages resolve to the same secret.
    response = client.get_secret_value(
        SecretId=f'app-name/{STAGE}/API_TOKEN',
    )
    return response
In conclusion, the stage feature of modern IaC frameworks is probably the easiest way to fully test your app.
I can only recommend these frameworks: they usually offer a better development experience, with constructs and debugging options that speed up the development process.
Thanks for reading,
-Ju
I would be grateful if you could help me improve this newsletter. Don’t hesitate to share what you liked/disliked and the topics you would like to see tackled.
P.S. you can reply to this email; it will get to me.