Serverless Framework with AWS & Node.js
- Technologies Used :
Serverless Framework | AWS SAM | AWS Lambda | Step Functions |
API Gateway | RDS | DynamoDB | ElephantSQL |
CloudFormation | CI/CD | CloudWatch | CodeCommit |
CodeBuild | CodePipeline | S3 Notifications | SNS |
SQS | Cognito | Lambda Crons | CORS |
Apache VTL | Swagger | KMS | VPCs |
DLQs | CloudFront | OIDC | Kinesis |
MQTT | IoT | Elastic Beanstalk | ElastiCache |
+ And lots more..
Aditya Hajare (Linkedin).
WIP (Work In Progress)!
- Theory
- Architecture Patterns - Multi Tier
- Architecture Patterns - Microservices
- Architecture Patterns - Multi Provider Serverless
- AWS Lambda Limits
- DynamoDB
- AWS Step Functions
- AWS SAM
- CICD 101
- Serverless Security Best Practices 101
- Best Practices 101 - AWS Lambda
- Best Practices 101 - AWS API Gateway
- Best Practices 101 - DynamoDB
- Best Practices 101 - Step Functions
- Setup And Workflow 101
- New Project Setup In Pre Configured Environment 101
- Installing Serverless
- Configuring AWS Credentials For Serverless
- Create NodeJS Serverless Service
- Invoke Lambda Function Locally
- Event - Passing Data To Lambda Function
- Serverless Offline
- NPM Run Serverless Project Locally
- Deploy Serverless Service
- Setup Serverless DynamoDB Local
- Securing APIs
- AWS CLI Handy Commands
- Common Issues
Open-sourced software licensed under the MIT license.
- Every AWS account comes with a default `VPC (Virtual Private Cloud)`.
- At the moment, an `AWS Lambda Function` can run for a maximum of `15 Minutes`.
- Returning `HTTP` responses from `AWS Lambda` allows us to integrate them with `Lambda Proxy Integration` for `API Gateway`.
- A `Step Function` can run for a maximum period of `1 Year`.
- `Step Functions` allow us to combine different `Lambda Functions` to build `Serverless Applications` and `Microservices`.
- There could be different reasons why you may want to restrict your `Lambda Function` to run within a given `VPC`. For e.g. you may have an `Amazon RDS` instance running on `EC2` inside your `VPC` and you want to connect to that instance through `Lambda` without exposing it to the outside world. In that case, your `Lambda Function` must run inside that `VPC`.
- When a `Lambda Function` is attached to any `VPC`, it automatically loses access to the internet, unless of course you open a `Port` on your `VPC Security Group` to allow `Outbound Connections`.
- While attaching a `Lambda Function` to a `VPC`, we must select at least 2 `Subnets`, although we can choose more `Subnets` if we like.
- When we are using the `Serverless Framework`, all of this, including assigning the necessary permissions, is taken care of automatically by the `Serverless Framework`.
- `Tags` are useful for organising and tracking our billing.
- `Serverless Computing` is a cloud computing execution model in which the cloud provider dynamically manages the allocation of infrastructure resources. So we don't have to worry about managing the servers or any of the infrastructure.
- `AWS Lambda` is an `Event Driven` serverless computing platform or a `Compute Service` provided by AWS.
- The code that we run on `AWS Lambda` is called a `Lambda Function`.
- A `Lambda Function` executes whenever it is triggered by a pre-configured `Event Source`. `Lambda Functions` can be triggered by numerous event sources like:
    - API Gateway.
    - S3 File Uploads.
    - Changes to `DynamoDB` table data.
    - `CloudWatch` events.
    - `SNS` Notifications.
    - Third Party APIs.
    - `IoT Devices`.
    - And so on..
- `Lambda Functions` run in `Containerized Environments`.
- We are charged only for the time our `Lambda Functions` are executing.
- No charge for `Idle Time`.
- Billing is done in increments of `100 ms` of the `Compute Time`.
- `AWS Lambda` uses a decoupled `Permissions Model`.
- `AWS Lambda` supports 2 `Invocation Types`:
    - Synchronous.
    - Asynchronous.
- The `Invocation Type` of AWS Lambda depends on the `Event Source`. For e.g. an `API Gateway` or `Cognito` event is `Synchronous`, while an `S3 Event` is always `Asynchronous`.
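- A minimal Node.js sketch (aws-sdk v2) showing the two invocation types when calling Lambda from code; `myFunction` is a hypothetical function name:
    ```js
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });
    const lambda = new AWS.Lambda();

    // Synchronous invocation: the caller waits for the function's response.
    lambda.invoke({
        FunctionName: 'myFunction',
        InvocationType: 'RequestResponse',
        Payload: JSON.stringify({ name: 'Aditya' })
    }, (err, data) => {
        if (err) console.log(err);
        else console.log(data.Payload); // Function's response payload.
    });

    // Asynchronous invocation: Lambda queues the event and returns immediately.
    lambda.invoke({
        FunctionName: 'myFunction',
        InvocationType: 'Event',
        Payload: JSON.stringify({ name: 'Aditya' })
    }, (err, data) => {
        if (err) console.log(err);
        else console.log(data.StatusCode); // 202 when the event is accepted.
    });
    ```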
- `pathParameters` and `queryStringParameters` are the pre-defined attributes of the `API Gateway AWS Proxy Event`.
- `AWS API Gateway` expects the Lambda function to return a `well formed http response` instead of just the data or just the response body. At the bare minimum, our response must have a `statusCode` and a `body` (the body must be a string). For e.g.
    ```json
    { "statusCode": 200, "body": "{\"message\": \"Hello Aditya\"}" }
    ```
- Typical code to build the above response:
    ```js
    return {
        statusCode: 200,
        body: JSON.stringify({ message: "Hello Aditya" })
    };
    ```
- Lambda Versioning:
    - When we don't explicitly publish a version, Lambda will use the `$LATEST` version.
    - The latest version is always denoted by `$LATEST`.
    - The last edited version is always marked as the `$LATEST` one.
- (Without using Lambda Aliases) How to use a different version of a Lambda Function in API Gateway? Bad way!
    - Under the AWS Console, go to `API Gateway`.
    - Click on the `Request (GET/POST/PUT/PATCH/DELETE)` under the `Resource`.
    - Click on `Integration Request`.
    - Configure the `Lambda Function` setting with a value of the version separated by a colon.
    - Re-deploy the API.
    - For e.g.
        ```js
        // Lambda Function name: adiTest
        // Available Lambda Function versions: v1, v2, v3 ..etc.
        // To use v2 for the API Gateway GET Request, set the Lambda Function value as below:
        { "Lambda Function": "adiTest:2" }
        ```
- Need for Lambda Aliases:
    - Without `Lambda Aliases`, whenever we publish a new `Lambda Version`, we have to manually edit API Gateway to use the new `Lambda Version` and then republish the API (refer to the above steps).
    - Ideally, every time we publish a new `Lambda Version`, `API Gateway` should automatically pick up the change without us having to re-deploy the API. `Lambda Aliases` help us achieve this.
- Lambda Aliases:
    - It's a good practice to create 1 `Lambda Alias` per `Environment`. For e.g. we could have aliases for dev, production, stage etc. environments.
    - While configuring a `Lambda Alias`, we can use the `Additional Version` setting for `Split Testing`. `Split Testing` allows us to split user traffic between multiple `Lambda Versions`.
    - To use a `Lambda Alias` in `API Gateway`, we simply have to replace the `Version Number` (separated by a colon) under the `Lambda Function` setting (in `API Gateway` settings) with an `Alias`.
    - Re-deploy the API.
    - For e.g.
        ```js
        // Lambda Function name: adiTest
        // Available Lambda Function versions: v1, v2, v3 ..etc.
        // Available Lambda Function aliases: dev, stage, prod ..etc.
        // Aliases are pointing to the following Lambda versions:
        { "dev": "v1", "stage": "v2", "prod": "$LATEST" }
        // To use v2 for the API Gateway GET Request, set the Lambda Function value as below:
        { "Lambda Function": "adiTest:stage" }
        ```
- Stage Variables in API Gateway:
    - Every time we make changes to `API Gateway`, we don't want to update the `Alias Name` in every `Lambda Function` before deploying the corresponding `Stage`. To address this challenge, we can make use of what are called `Stage Variables in API Gateway`.
    - `Stage Variables` can be used for various purposes like:
        - Choosing backend database tables based on environment.
        - Dynamically choosing the `Lambda Alias` corresponding to the current `Stage`.
        - Or any other configuration.
    - `Stage Variables` are available inside the `context` object of the `Lambda Function`.
    - Since `Stage Variables` are available inside the `context` object, we can also use them in `Body Mapping Templates`.
    - `Stage Variables` can be used as follows:
        - Inside the `API Gateway Resource Configuration`, to choose the `Lambda Function Alias` corresponding to the current stage:
            ```js
            // Use ${stageVariables.variableName}
            { "Lambda Function": "myFunction:${stageVariables.variableName}" }
            ```
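- With `Lambda Proxy Integration`, the stage variables also arrive on the incoming event as `event.stageVariables`. A minimal sketch; the variable name `lambdaAlias` is hypothetical:
    ```js
    // Reading an API Gateway stage variable inside a Lambda handler (proxy integration).
    exports.handler = async (event) => {
        const alias = event.stageVariables && event.stageVariables.lambdaAlias;
        return {
            statusCode: 200,
            body: JSON.stringify({ message: `Running behind stage alias: ${alias}` })
        };
    };
    ```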
- Canary Deployment:
    - Related to `API Gateways`.
    - Used for traffic splitting between different versions in `API Gateways`.
    - Use the `Promote Canary` option to direct all traffic to the latest version once our testing using traffic splitting is done.
    - After directing all traffic to the latest version using the `Promote Canary` option, we can choose to `Delete Canary` once we are sure.
- Encryption For Environment Variables In Lambda:
    - By default, `Lambda` uses a default `KMS` key to encrypt `Environment Variables`.
    - `AWS Lambda` has built-in encryption at rest and it's enabled by default.
    - When our `Lambda` uses `Environment Variables`, they are automatically encrypted by the `Default KMS Key`.
    - When the `Lambda` function is invoked, the `Environment Variables` are automatically `decrypted` and made available in the `Lambda Function's code`.
    - However, this only takes care of `Encryption at rest`. During `Transit`, for e.g. when we are deploying the `Lambda Function`, these `Environment Variables` are still transferred in `Plain Text`.
    - So, if `Environment Variables` contain sensitive information, we can enable `Encryption in transit`.
    - If we enable `Encryption in transit` then the `Environment Variable Values` will be masked using a `KMS Key` and we must decrypt their contents inside our `Lambda Functions` to get the actual values stored in the variables.
    - While creating `KMS Keys`, be sure to choose the same `Region` as our `Lambda Function's Region`.
    - Make sure to give our `Lambda Function's Role` permission to use the `KMS Key` inside the `KMS Key's` policy.
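- A minimal sketch of decrypting such a value inside the handler, assuming the ciphertext is stored base64-encoded in a hypothetical `DB_PASSWORD` variable (depending on how the value was encrypted, KMS may additionally require an `EncryptionContext`):
    ```js
    const AWS = require('aws-sdk');
    const kms = new AWS.KMS();

    let decryptedDbPassword; // Cached across invocations when the container is reused.

    exports.handler = async (event) => {
        if (!decryptedDbPassword) {
            // DB_PASSWORD holds the KMS-encrypted, base64-encoded value.
            const { Plaintext } = await kms.decrypt({
                CiphertextBlob: Buffer.from(process.env.DB_PASSWORD, 'base64')
            }).promise();
            decryptedDbPassword = Plaintext.toString('ascii');
        }
        // Use decryptedDbPassword here; never log it.
        return { statusCode: 200, body: JSON.stringify({ message: 'OK' }) };
    };
    ```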
- Retry Behavior in AWS Lambda:
    - `Lambda Functions` have a built-in `Retry Behavior`. i.e. when a `Lambda Function` fails, `AWS Lambda` automatically attempts to retry the execution up to `2 Times` if it was invoked `Asynchronously (Push Events)`.
    - A Lambda function could fail for different reasons such as:
        - A logical or syntactical error in the Lambda Function's code.
        - A network outage.
        - The Lambda function could hit the timeout.
        - The Lambda function ran out of memory.
        - And so on..
    - When any of the above things happen, the Lambda function will throw an `Exception`. How this `Exception` is handled depends upon how the `Lambda Function` was invoked, i.e. `Synchronously or Asynchronously (Push Events)`.
    - If the `Lambda Function` was invoked `Asynchronously (Push Events)` then `AWS Lambda` will automatically retry up to `2 Times (with some time delays in between)` on execution failure.
    - If we configure a `DLQ (Dead Letter Queue)`, it will collect the `Payload` after the subsequent retry failures, i.e. after `2 Attempts`.
    - If a function was invoked `Synchronously`, then the calling application will receive an `HTTP 429` error when the function execution fails.
    - If a `DLQ (Dead Letter Queue)` is not configured for the `Lambda Function`, it will discard the event after 2 retry attempts.
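- A minimal `serverless.yml` sketch of wiring a DLQ through the Serverless Framework's `onError` setting; the SNS topic ARN is a placeholder (as far as I recall, `onError` accepts SNS topic ARNs):
    ```yaml
    functions:
      processOrder:
        handler: src/orders.process
        # Failed asynchronous invocations (after the built-in retries) are sent here.
        onError: arn:aws:sns:ap-south-1:123456789012:lambda-dlq-topic
    ```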
- Container Reuse:
    - `Lambda Functions` execute in `Containerized Environments`.
    - Whenever we create or update a `Lambda Function`, i.e. either the function code or configuration, `AWS Lambda` creates a new `Container`.
    - Whenever a `Lambda Function` is executed for the first time after we create or update it, `AWS Lambda` creates a new `Container`.
    - Once the `Lambda Function` execution is finished, `AWS Lambda` will shut down the `Container` after a while.
    - Code written outside the `Lambda Handler` is executed once per `Container`. For e.g.
        ```js
        // Below code will be executed once per container.
        const AWS = require('aws-sdk');
        AWS.config.update({ region: 'ap-south-1' });
        const s3 = new AWS.S3();

        // Below code (code inside the Lambda handler) will be executed every time the Lambda Function is invoked.
        exports.handler = async (event, context) => {
            return "Hello Aditya";
        };
        ```
    - It's a good practice to write all initialisation code outside the `Lambda Handler`.
    - If you have written any file to `/tmp` and a `Container` is reused for a `Lambda Function Execution`, that file will be available in subsequent invocations.
    - Container reuse results in faster executions.
    - We do not have any control over when `AWS Lambda` will reuse a `Container` and when it won't.
    - If we are spawning any background processes in `Lambda Functions`, they will be executed only until the `Lambda Handler` returns a response. The rest of the time they will stay `Frozen`.
- Running a `Lambda Function` inside a `VPC` will result in `Cold Starts`. The `VPC` also introduces some delay before a function can execute, which can result in a `Cold Start`.
- `Resource Policies` get applied at the `API Gateway` level, whereas `IAM Policies` get applied at the `User/Client` level.
- The most common architecture pattern that we find almost everywhere, irrespective of whether we are using servers or going serverless.
- The most common form of `Multi-Tier Architecture` is the `3-Tier Architecture`. Even the `Serverless` form will have the same `3 Tiers` as below:
    - `Frontend/Presentation Tier`.
    - `Application/Logic Tier`.
    - `Database/Data Tier`.
- In a `Serverless 3-Tier Architecture`:
    - `Database/Data Tier`:
        - The `Database/Data Tier` contains the databases (`Data Stores`) like `DynamoDB`.
        - `Data Stores` fall into 2 categories:
            - `IAM Enabled` data stores (over `AWS APIs`). These data stores allow applications to connect to them through `AWS APIs`. For e.g. `DynamoDB`, `Amazon S3`, `Amazon Elasticsearch Service` etc.
            - `VPC Hosted` data stores (using database credentials). These data stores run in hosted instances within a `VPC`. For e.g. `Amazon RDS`, `Amazon Redshift`, `Amazon ElastiCache`. And of course we can install any database of our choice on `EC2` and use it here. For e.g. we can run a `MongoDB` instance on `EC2` and connect to it through `Serverless Lambda Functions`.
    - `Application/Logic Tier`:
        - This is where the core business logic of our `Serverless Application` runs.
        - This is where core `AWS Services` like `AWS Lambda`, `API Gateway`, `Amazon Cognito` etc. come into play.
    - `Frontend/Presentation Tier`:
        - This tier interacts with the backend through the `Application/Logic Tier`.
        - For e.g. the frontend could use an `API Gateway Endpoint` to call `Lambda Functions`, which in turn interact with the data stores available in the `Database/Data Tier`.
        - `API Gateway Endpoints` can be consumed by a variety of applications such as `Web Apps` like static websites hosted on `S3`, `Mobile Application Frontends`, `Voice Enabled Devices Like Alexa` or different `IoT Devices`.
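- A minimal `serverless.yml` sketch of this pattern, with hypothetical names: an HTTP endpoint (presentation), a Lambda handler (logic) and a DynamoDB table (data):
    ```yaml
    service: notes-3-tier # Hypothetical service name.

    provider:
      name: aws
      runtime: nodejs12.x
      region: ap-south-1

    functions:
      getNote: # Application/Logic Tier.
        handler: src/notes.get
        events:
          - http: # Frontend/Presentation Tier consumes this endpoint.
              path: notes/{id}
              method: GET

    resources:
      Resources:
        NotesTable: # Database/Data Tier.
          Type: AWS::DynamoDB::Table
          Properties:
            TableName: notes
            AttributeDefinitions:
              - AttributeName: id
                AttributeType: S
            KeySchema:
              - AttributeName: id
                KeyType: HASH
            BillingMode: PAY_PER_REQUEST
    ```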
- A typical use case of `Serverless Architecture` is the `Microservices Architecture Pattern`.
- The `Microservices Architecture Pattern` is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.
- These services are built around business capabilities and are independently deployable by fully automated deployment machinery.
- There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
- The core idea of a `Microservices Architecture` is to take a complex system and break it down into independent decoupled services that are easy to manage and extend. These services communicate over well defined APIs and are often owned by small self-contained teams.
- A `Microservices Architecture` makes applications easier to scale and faster to develop, enabling innovation and accelerating time to market for new features.
- Each `Service` performs a single specific function. And because they are running independently, each `Service` can be updated, deployed and scaled to meet the demands of the application.
- A newer and slowly emerging pattern.
- This is all about reducing dependence on 1 specific cloud provider and making our application even more resilient.
- There are several big companies today that offer `Serverless Compute Services` like `AWS Lambda`. Some of the competing offerings are `Google Cloud Functions`, `Microsoft Azure Functions`, `IBM Cloud Functions` etc.
- When we choose a `Cloud Provider`, we kind of get locked in to continue using services offered by that particular `Cloud Provider`.
- For building `Cloud Provider Agnostic Serverless Applications`, or in other words, for building `Multi Provider Serverless Applications`, we can make use of the `Serverless Framework`.
- For building `Multi Provider Serverless Applications`, the team behind the `Serverless Framework` offers a solution called `Event Gateway` (https://github.com/serverless/event-gateway).
- `Event Gateway` is an open source tool and it is part of their offering called the `Serverless Platform`.
- The `Event Gateway` allows us to react to any event with `Serverless Functions` hosted on different `Cloud Providers`.
- `Event Gateway` also allows us to send events from different `Cloud Providers` and we can react to these events using `Serverless Functions` from any `Cloud Provider`.
- The `Event Gateway` tool is still under heavy development and not production ready yet (as of 11 March 2020).
Resource | Default Limit |
---|---|
Concurrent executions | 1,000 |
Function and layer storage | 75 GB |
Elastic network interfaces per VPC | 250 |
Function memory allocation | 128 MB to 3,008 MB, in 64 MB increments |
Function timeout | 900 seconds (15 minutes) |
Function environment variables | 4 KB |
Function resource-based policy | 20 KB |
Function layers | 5 layers |
Function burst concurrency | 500 - 3000 (varies per region) |
Invocation frequency (requests per second) | 10 x concurrent executions limit (synchronous – all sources); 10 x concurrent executions limit (asynchronous – non-AWS sources); Unlimited (asynchronous – AWS service sources) |
Invocation payload (request and response) | 6 MB (synchronous); 256 KB (asynchronous) |
Deployment package size | 50 MB (zipped, for direct upload); 250 MB (unzipped, including layers); 3 MB (console editor) |
Test events (console editor) | 10 |
`/tmp` directory storage | 512 MB |
File descriptors | 1,024 |
Execution processes/threads | 1,024 |
- Datatypes:
    - Scalar: Represents exactly one value.
        - For e.g. String, Number, Binary, Boolean, null.
        - `Keys` or `Index` attributes only support the String, Number and Binary scalar types.
    - Set: Represents multiple scalar values.
        - For e.g. String Set, Number Set and Binary Set.
    - Document: Represents a complex structure with nested attributes.
        - For e.g. List and Map.
- The `String` datatype can store only `non-empty` values.
- The maximum size of any item in DynamoDB is limited to `400kb`. Note: An item represents the entire row of data (like in an RDBMS).
- `Sets` are unordered collections of either Strings, Numbers or Binary values.
    - All values must be of the same scalar type.
    - Do not allow duplicate values.
    - No empty sets allowed.
- `Lists` are ordered collections of values.
    - Can have multiple data types.
- `Maps` are unordered collections of `Key-Value` pairs.
    - Ideal for storing JSON documents in DynamoDB.
    - Can have multiple data types.
- DynamoDB supports 2 types of `Read Operations (Read Consistency)`:
    - `Strong Consistency`:
        - The most up-to-date data.
        - Must be requested explicitly.
    - `Eventual Consistency`:
        - May or may not reflect the latest copy of data.
        - This is the default consistency for all operations.
        - 50% cheaper than a `Strongly Consistent Read` operation.
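- A minimal sketch requesting a strongly consistent read with the `DocumentClient` (reusing the hypothetical `adi_notes_app` table from below):
    ```js
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });
    const docClient = new AWS.DynamoDB.DocumentClient();

    docClient.get({
        TableName: 'adi_notes_app',
        Key: { user_id: 'test123', timestamp: 1 },
        ConsistentRead: true // Omit (defaults to false) for an eventually consistent read.
    }, (err, data) => {
        if (err) console.log(err);
        else console.log(JSON.stringify(data.Item, null, 2));
    });
    ```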
- Internally, DynamoDB stores data in `Partitions`.
- `Partitions` are nothing but `Blocks of memory`.
- A table can have `1 or more partitions` depending on its size and throughput.
- Each `Partition` in DynamoDB can hold a maximum of `10GB` of data.
- Partitioning e.g.:
    - For `500 RCU and 500 WCU` ---> `1 Partition`.
    - For `1000 RCU and 1000 WCU` ---> `2 Partitions`.
- For `Table` level operations, we need to instantiate and use the `DynamoDB` class from the `aws-sdk`:
    ```js
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });

    const dynamoDB = new AWS.DynamoDB(); // Instantiating the DynamoDB class for table level operations.

    dynamoDB.listTables({}, (err, data) => {
        if (err) {
            console.log(err);
        } else {
            console.log(JSON.stringify(data, null, 2));
        }
    });
    ```
- For `Item` level operations, we need to instantiate and use the `DocumentClient` from the `DynamoDB` class from the `aws-sdk`:
    ```js
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });

    const docClient = new AWS.DynamoDB.DocumentClient(); // Instantiate and use the DocumentClient class for Item level operations.

    docClient.put({
        TableName: 'adi_notes_app',
        Item: {
            user_id: 'test123',
            timestamp: 1,
            title: 'Test Note',
            content: 'Test Note Content..'
        }
    }, (err, data) => {
        if (err) {
            console.log(err);
        } else {
            console.log(JSON.stringify(data, null, 2));
        }
    });
    ```
- The `batchWrite()` method allows us to perform multiple write operations (Put, Delete) in one go.
- Conditional writes in DynamoDB are `idempotent`. i.e. if we make the same conditional write request multiple times, only the first request will be considered.
- `docClient.query()` allows us to fetch items from a specific `partition`.
- `docClient.scan()` allows us to fetch items from all `partitions`.
- Pagination:
    - At a time, any `query()` or `scan()` operation can return a maximum of `1mb` of data in a single request.
    - If our `query/scan` operation has more records to return (after exceeding the 1mb limit), we will receive a `LastEvaluatedKey` key in the response.
    - `LastEvaluatedKey` is simply an object containing the `Index Attribute` of the item up to which the response was returned.
    - In order to retrieve further records, we must pass the `LastEvaluatedKey` value under the `ExclusiveStartKey` attribute in our subsequent query.
    - If there is no `LastEvaluatedKey` attribute present in a DynamoDB query/scan response, it means we have reached the last page of data.
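- A minimal pagination sketch that keeps querying until `LastEvaluatedKey` stops being returned (table and key names reuse the hypothetical `adi_notes_app` example):
    ```js
    const AWS = require('aws-sdk');
    AWS.config.update({ region: 'ap-south-1' });
    const docClient = new AWS.DynamoDB.DocumentClient();

    async function fetchAllNotes(userId) {
        const items = [];
        let ExclusiveStartKey; // undefined on the first request.

        do {
            const page = await docClient.query({
                TableName: 'adi_notes_app',
                KeyConditionExpression: 'user_id = :uid',
                ExpressionAttributeValues: { ':uid': userId },
                ExclusiveStartKey
            }).promise();

            items.push(...page.Items);
            ExclusiveStartKey = page.LastEvaluatedKey; // undefined when we reach the last page.
        } while (ExclusiveStartKey);

        return items;
    }
    ```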
- DynamoDB Streams:
    - In simple words, it's a `24 Hours Time-ordered Log`.
    - `DynamoDB Streams` maintain a `Time-Ordered Log` of all changes in a given `DynamoDB Table`.
    - This log stores all the `Write Activity` that took place in the last `24 hrs`.
    - Whenever any changes are made to a `DynamoDB Table` and a `DynamoDB Stream` is enabled for that table, these changes are written to the `Stream`.
    - There are several ways to consume and process data from `DynamoDB Streams`:
        - We can use the `Kinesis Adapter` along with the `Kinesis Client Library`. `Kinesis` is a platform for processing `High Volume` streaming data on `AWS`.
        - We can also make use of the `DynamoDB Streams SDK` to work with `DynamoDB Streams`.
        - `AWS Lambda Triggers` also allow us to work with `DynamoDB Streams`. This approach is much easier and more intuitive. `DynamoDB Streams` will invoke `Lambda Functions` based on the changes received by them.
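- A minimal sketch of the Lambda-trigger approach, assuming the stream is already enabled on the table and wired to the function (for e.g. via a `stream` event in `serverless.yml`):
    ```js
    // Invoked by DynamoDB Streams with a batch of change records.
    exports.handler = async (event) => {
        for (const record of event.Records) {
            // eventName is INSERT, MODIFY or REMOVE.
            console.log(record.eventName);

            // Changed data arrives in DynamoDB JSON format under dynamodb.NewImage / OldImage,
            // depending on the StreamViewType configured on the table.
            console.log(JSON.stringify(record.dynamodb, null, 2));
        }
        return `Processed ${event.Records.length} records.`;
    };
    ```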
- `AWS Step Functions` are the logical progression of `AWS Lambda Functions`.
- With `Step Functions` we can create visual workflows to coordinate or orchestrate different `Lambda Functions` to work together.
- A `Step Function` can run for a maximum period of `1 Year`.
- We can use `Step Functions` to automate routine jobs like deployments, upgrades, migrations, patches and so on.
- `Step Functions` allow us to combine different `Lambda Functions` to build `Serverless Applications` and `Microservices`.
- Just like `Lambda Functions`, there is no need to provision any resources or infrastructure for `Step Functions`.
- We simply use `ASL (Amazon States Language)` to define the workflows. It's a JSON based structured language. We use this language to define the various steps as well as the different connections and interactions between these steps in `Step Functions`.
- The resulting workflow is called the `State Machine`.
- The `State Machine` is displayed in a graphical form just like a flowchart.
- `State Machines` also have built-in error handling mechanisms. We can retry operations based on different errors or conditions.
- Billing is on a `Pay as you go` basis. We only pay for the transitions between `Steps`.
- A `task` step allows us to invoke a `Lambda Function` from our `State Machine`.
- An `activity` step allows us to run any code on `EC2 Instances`. It is similar to a `task` step, except that an `activity` step is not a `Serverless` kind of step.
- Whenever any `Step` in a `State Machine` fails, the entire `State Machine` fails. Here `Steps` means `Lambda Functions`, or any errors or exceptions received by a `Step`.
- We can also use `CloudWatch Rules` to execute a `State Machine`.
- We can use a `Lambda Function` to trigger a `State Machine Execution`. The advantage of this approach is that `Lambda Functions` support many triggers for their invocation. So we have numerous options to trigger the `Lambda Function`, and our `Lambda Function` will trigger the `State Machine Execution` using the `AWS SDK`.
- While building a `State Machine`, if it has any `Lambda Functions (task states)`, always specify the `TimeoutSeconds` option to make sure our `State Machine` doesn't get stuck or hang.
- In a `State Machine`, the `catch` field is used to specify the `Error Handling Catch Mechanism`.
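- A minimal `ASL` sketch of a single `task` state with a timeout and a catcher; the function ARN and state names are placeholders:
    ```json
    {
      "Comment": "Minimal state machine: one task with a timeout and a catcher.",
      "StartAt": "ProcessOrder",
      "States": {
        "ProcessOrder": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:ap-south-1:123456789012:function:processOrder",
          "TimeoutSeconds": 30,
          "Catch": [
            { "ErrorEquals": ["States.ALL"], "Next": "HandleFailure" }
          ],
          "End": true
        },
        "HandleFailure": {
          "Type": "Fail",
          "Error": "ProcessOrderFailed",
          "Cause": "The ProcessOrder task failed or timed out."
        }
      }
    }
    ```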
- `AWS SAM` stands for `Serverless Application Model`.
- `AWS SAM` is just a simplified version of `CloudFormation Templates`.
- It seamlessly integrates with `AWS Deployment Tools` like `CodeBuild`, `CodeDeploy`, `CodePipeline` etc.
- It provides a `CLI` to build, test and deploy `Serverless Applications`.
- Every `SAM Template` begins with:
    ```yaml
    AWSTemplateFormatVersion: "2010-09-09"
    Transform: AWS::Serverless-2016-10-31
    ```
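- A fuller minimal `SAM Template` sketch with a single function behind an API endpoint; names and paths are placeholders:
    ```yaml
    AWSTemplateFormatVersion: "2010-09-09"
    Transform: AWS::Serverless-2016-10-31
    Description: Minimal SAM template - one Lambda function behind an API endpoint.

    Resources:
      HelloWorldFunction:
        Type: AWS::Serverless::Function
        Properties:
          Handler: src/handler.hello   # Placeholder handler path.
          Runtime: nodejs12.x
          MemorySize: 128
          Timeout: 10
          Events:
            HelloApi:
              Type: Api
              Properties:
                Path: /hello
                Method: get
    ```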
- To deploy a `SAM` application using `CloudFormation Commands` (instead of using the `SAM CLI`):
    - It involves `2 Steps`:
        - Package the application and push it to an `S3 Bucket`. This step requires the `S3 Bucket` to be created prior to running the `CloudFormation Package` command.
        - Deploy the packaged application.
    - Step 1: We need an `S3 Bucket` created before we deploy. If we don't have one, then create it using the following command:
        ```bash
        aws s3 mb s3://aditya-sam-app
        ```
    - Step 2: Package the application:
        ```bash
        aws cloudformation package --template-file template.yaml --output-template-file output-sam-template.yaml --s3-bucket aditya-sam-app
        ```
    - Step 3: Deploy the application (here, we will be using the generated output SAM template file):
        ```bash
        aws cloudformation deploy --template-file output-sam-template.yaml --stack-name aditya-sam-app-stack --capabilities CAPABILITY_IAM
        ```
- To generate SAM project boilerplate from a sample app:
    ```bash
    sam init --runtime nodejs12.x
    ```
- To execute a `Lambda Function` locally with the `SAM CLI`:
    ```bash
    # -e to pass event data to the Lambda Function. This file must be present in the current location.
    sam local invoke HelloWorldFunction -e events/event.json

    # Alternatively, we can pass event data inline by piping it as below.
    # Here we are sending empty event data to the Lambda Function.
    echo '{}' | sam local invoke HelloWorldFunction
    ```
- The `SAM CLI` also allows us to invoke `Lambda Functions` locally from within our application code. To do so, we have to start the `Lambda Service` locally using the `SAM CLI`:
    ```bash
    sam local start-lambda
    ```
- To run the `API Gateway` service locally:
    - Navigate to the folder where our `SAM Template` is located (e.g. `template.yaml`).
    - Execute the following command to run the `API Gateway Service` locally:
        ```bash
        sam local start-api
        ```
- To validate a `SAM Template` locally:
    - Navigate to the folder where our `SAM Template` is located (e.g. `template.yaml`).
    - Execute the following command to validate the `SAM Template` locally:
        ```bash
        sam validate
        ```
- To deploy the application using the `SAM CLI`:
    - It involves `2 Steps`:
        - Package the application and push it to an `S3 Bucket`. This step requires the `S3 Bucket` to be created prior to running the `SAM Package` command.
        - Deploy the packaged application.
    - Step 1: We need an `S3 Bucket` created before we deploy. If we don't have one, then create it using the following command:
        ```bash
        aws s3 mb s3://aditya-sam-app
        ```
    - Step 2: Package the application:
        ```bash
        sam package --template-file template.yaml --output-template-file output-sam-template.yaml --s3-bucket aditya-sam-app
        ```
    - Step 3: Deploy the application (here, we will be using the generated output SAM template file):
        ```bash
        sam deploy --template-file output-sam-template.yaml --stack-name aditya-sam-app-stack --capabilities CAPABILITY_IAM
        ```
- To view `Lambda Function` logs using the `SAM CLI`:
    ```bash
    sam logs -n LAMBDA_FUNCTION_NAME --stack-name STACK_NAME --tail

    # For e.g.:
    sam logs -n GetUser --stack-name aditya-sam-app-stack --tail
    ```
- `AWS CodeCommit`:
    - It is a source control service which allows us to host our `Git Based` repositories.
- `AWS CodeBuild`:
    - It is a `Continuous Integration` service. We can use it to `Package` and optionally `Deploy` our applications.
- `AWS CodePipeline`:
    - It is a `Continuous Delivery` service. It allows us to automate the entire `Deployment` and `Release Cycles`.
- Setup 101.
    - Initialize a `Git Repository` on the local machine.
    - Step #1: Create a `CodeCommit Repository`:
        - Go to `CodeCommit` in the `AWS Console` and create a new repository.
        - Go to `IAM` in the `AWS Console` and create a new user. Provide:
            - Only `Programmatic Access`. No need to provide access to the `AWS Console`.
            - Attach an `Existing Policy`. Look for `CodeCommit` in the policies.
            - It will show us the `AWS Credentials` for the user. Ignore them.
            - Under `Users`, open that user and go to `Security Credentials`. Scroll down to see `HTTPS Git credentials for AWS CodeCommit`. Click on the `Generate` button there.
            - It will show us the `Username` and `Password` for this user. Download that.
        - Go to the `CodeCommit` console and click on the `Connect` button.
        - Copy the `Repository URL` from the popup.
        - On our local machine, we need to add the `CodeCommit Repository` as a `Remote Repository` using the following command:
            ```bash
            git remote add origin CODECOMMIT_REPOSITORY_URL
            ```
        - On our local machine, add the upstream origin using the following command (repeat this for all local branches):
            ```bash
            # 'origin' refers to the remote repository, i.e. CODECOMMIT_REPOSITORY_URL
            git push --set-upstream origin LOCAL_BRANCH
            ```
        - It will ask for credentials only once. Specify the credentials we downloaded from the `IAM Console` for our created user.
    - Step #2: Setup `CodeBuild`:
        - Go to `CodeBuild` in the `AWS Console`.
        - Before we create a `CodeBuild Project`, we will need an `IAM Role` that `CodeBuild` can assume on our behalf.
            - For e.g. when we create and deploy our `Serverless Project`, it creates different resources like `Lambda Functions, APIs, DynamoDB Tables, IAM Roles` in the background using `CloudFormation`. When we deploy from our computer, the `AWS Credentials` stored in our computer's environment variables are used. Now the same deployment has to run from a `Containerized Environment` created by `CodeBuild`. So we must provide the same permissions to `CodeBuild` as we provided to the user which connects to AWS while deploying using the `Serverless Framework`.
        - Go to `IAM` in the `AWS Console` and create a new `Role`.
            - Under `Choose the service that will use this role`, select `CodeBuild` and click on `Continue`.
            - Select the access (we can choose `Administrator Access`), click on `Review` and create the `Role`.
            - Now we can go ahead and create the `CodeBuild` project.
        - Go to the `CodeBuild` console and create a project.
            - Under `Source Provider`, select the `AWS CodeCommit` option. Select the `CodeCommit Repository`.
            - Under `Environment: How to build`:
                - Select the option `Use an image managed by AWS CodeBuild`.
                - Select `Operating System` as `Ubuntu`.
                - Under `Runtime`, select `Node.js`.
                - Select the `Runtime Version`.
                - Under `Build Specifications`, we will use the `buildspec.yml` file.
            - Under `Service Role`, select the `Role` we created.
            - Under `Advanced Settings`, create an `Environment Variable` as `ENV_NAME = dev`. This way we can build a similar project for different environments like `prod, stage` etc..
            - Continue and review the configuration and click on the `Save` button. Do not click on the `Save and Build` button.
    - Step #3: Create a `buildspec.yml` file at the root of our project.
        - The `buildspec.yml` file tells `CodeBuild` what to do with the source code it downloads from the `CodeCommit Repository`.
        - For e.g.
            ```yaml
            # buildspec.yml
            version: 0.2 # Note: Each version can use a different syntax.

            # There are 4 different types of phases we can define here: 'install', 'pre_build', 'build', 'post_build'.
            # Under each phase, we specify commands for CodeBuild to execute on our behalf. If there are any runtime
            # errors while executing commands in a particular phase, CodeBuild will not execute the next phase.
            # i.e. if the execution reaches the 'post_build' phase, we can be sure that the build was successful.
            phases:
              install:
                commands:
                  - echo Installing Serverless.. # This is only for our reference.
                  - npm i -g serverless # Install serverless globally in the container.
              pre_build:
                commands:
                  - echo Installing NPM dependencies..
                  - npm i # This will install all the dependencies from package.json.
              build:
                commands:
                  - echo Deployment started on `date`.. # This will print the current date.
                  - echo Deploying with the serverless framework..
                  - sls deploy -v -s $ENV_NAME # '$ENV_NAME' is coming from the environment variable we set up above.
              post_build:
                commands:
                  - echo Deployment completed on `date`..
            ```
        - Commit the `buildspec.yml` file and push it to the `CodeCommit Repository`.
    - Step #4 (Optional): If we manually want to build our project:
        - Go to the `CodeBuild Console`, select our project and click on `Start Build`.
            - Select the `CodeCommit Branch` that `CodeBuild` should read from.
            - Click on the `Start Build` button.
            - It will pull the code from the selected branch in the `CodeCommit Repository`, and then run the commands we have specified in the `buildspec.yml` file.
    - Step #5: Setup `CodePipeline`:
        - Go to `CodePipeline` in the `AWS Console` and create a new `Pipeline`.
        - `Source location`:
            - Under `Source Provider`, select `AWS CodeCommit`.
            - Select the `Repository` and `Branch Name (generally the master branch)`.
            - We will use `CloudWatch Events` to detect changes. This is the default option. We can change this to make `CodePipeline` periodically check for changes.
                - By using `CloudWatch Events (i.e. the default option)` under the `Change detection options` setting, as soon as we push a change or an update to the `master branch` on `CodeCommit`, this `Pipeline` will get triggered automatically.
            - Click next.
        - Under `Build`:
            - Under the `Build Provider` option, select `AWS CodeBuild`.
            - Under the `Configure your project` options, select `Select existing build project` and under `Project name`, select our existing `CodeBuild` project.
            - Click next.
        - Under `Deploy`:
            - Under `Deployment provider`, since our code deployment will be done through the `Serverless Framework` in the `CodeBuild` step and we have defined our `buildspec.yml` file that way, we need to select the `No Deployment` option.
            - Click next.
        - Under `AWS Service Role`:
            - We need to create a necessary `Role` for the `Pipeline`. Click on the `Create role` button.
            - `AWS` will automatically generate a `Policy` with the necessary `Permissions` for us. So simply click the `Allow` button.
            - Click `Next step` to review the configuration of the `Pipeline`.
        - Click on the `Create Pipeline` button to create and run this `Pipeline`.
    - Now whenever we push changes to the `master branch`, our code will get automatically deployed using `CICD`.
    - Step #6: Production Workflow Setup - Adding manual approval before production deployment with `CodePipeline`.
        - Once our code gets deployed to the `Dev Stage`, it will be ready for testing. And it will trigger a `Manual Approval` request. The approver will approve or reject the change based on the outcome of testing. If the change gets rejected, the `Pipeline` should stop there. Otherwise, if the change is approved, the same code should be pushed to the `Production Stage`. Following are the steps to implement this workflow:
        - Go to `CodePipeline` in the `AWS Console` and click on the `Edit` button for our created `Pipeline`.
        - After the `Build Stage` using `CodeBuild`, click on the `+ Stage` button to add a new stage.
        - Give this new stage a name, e.g. `ApproveForProduction`.
        - Click on `+ Action` to add a new `Action`.
            - Under the `Action category` option, select `Approval`.
            - Under the `Approval Actions` options:
                - Give an `Action Name`. For e.g. `Approve`.
                - Set `Approval Type` to the `Manual Approval` option.
            - Under the `Manual approval configuration` options:
                - We need to create an `SNS Topic`:
                    - Go to the `SNS Console` under the `AWS Console` and click on `Create Topic`.
                    - Specify a `Topic Name` and `Display Name`. For e.g. `Topic Name: cicd-production-approval` and `Display Name: CICD Production Approval`.
                    - Click on the `Create Topic` button.
                    - Now that the topic has been created, we must `Subscribe` to the topic. Whenever `CodePipeline` triggers the `Manual Approval`, a `Notification` will be sent to this topic. All the subscribers will be notified by email for the approval. To set this up:
                    - Click on the `Create Subscription` button.
                    - Under `Protocol`, select the `Email` option.
                    - Under `Endpoint`, add the email address and click the `Create Subscription` button.
                    - This will trigger the confirmation. Only after we confirm our email address will `SNS` start sending notifications.
                    - The `SNS` setup is done at this point. We can head back to the `Manual approval configuration` options.
                - Under `SNS Topic ARN`, select the `SNS Topic` we just created above.
                - Under `URL For Review`, we can specify the `API URL or Project URL`.
                - Under `Comments`, specify comments if any. For e.g. `Kindly review and approve`.
                - Click on the `Add Action` button.
        - After the `Manual Approval` stage, click on `+ Action` to add a new `Action` for the `Production Build`.
            - Under the `Action category` option, select `Build`.
            - Under the `Build Actions` options:
                - Give an `Action Name`. For e.g. `CodeBuildProd`.
                - Set `Build Provider` to the `AWS CodeBuild` option.
            - Under the `Configure your project` options:
                - Select the `Create a new build project` option. It will be exactly the same as the last one, the only difference is it will use a different value in the `Environment Variables`, viz. `Production`.
                - Specify a `Project Name`. For e.g. `cicd-production`.
            - Under `Environment: How to build`:
                - Select the option `Use an image managed by AWS CodeBuild`.
                - Select `Operating System` as `Ubuntu`.
                - Under `Runtime`, select `Node.js`.
                - Select the `Runtime Version`.
                - Under `Build Specifications`, we will use the `buildspec.yml` file, i.e. select the `Use the buildspec.yml in the source code root directory` option.
            - Under the `AWS CodeBuild service role` options:
                - Select the `Choose an existing service role from your account` option.
                - Under `Role name`, select the existing role we created while setting up `CodeBuild` above.
            - Under `Advanced Settings`, create an `Environment Variable` as `ENV_NAME = prod`. This way we can build a similar project for different environments like `prod, stage` etc..
            - Click on the `Save build project` button.
            - We must provide `Input Artifacts` for this stage. So under the `Input Artifacts` options:
                - Set `Input artifacts #1` to `MyApp`.
            - Click on the `Add action` button.
        - Click on the `Save Pipeline Changes` button. It will popup the confirmation. Click on the `Save and continue` button. And we are all set.
- `AWS Lambda` uses a decoupled permissions model. It uses 2 types of permissions:
    - `Invoke Permissions`: Require the caller to only have permission to invoke the `Lambda Function`; no more access is needed.
    - `Execution Permissions`: Used by the `Lambda Function` itself to execute the function code.
- Give each `Lambda Function` its own `Execution Role`. Avoid using the same `Role` across multiple `Lambda Functions`. This is because the needs of our `Lambda Functions` may change over time and in that case we may have to alter permissions for the `Role` assigned to our functions.
- Avoid setting `Wildcard Permissions` on `Lambda Function Roles`.
- Avoid giving `Full Access` to `Lambda Function Roles`.
- Always provide only the necessary permissions, keeping the `Role Policies` as restrictive as possible.
- Choose only the required actions in the `IAM Policy`, keeping the policy as restrictive as possible.
- Sometimes `AWS` might add a new `Action` on a `Resource`, and if our `Policy` is using a `Wildcard` on the `Actions`, it will automatically receive this additional access to the new `Action` even though it may not require it. Hence it's a good and recommended idea to explicitly specify individual `Actions` in the policies and not use `Wildcards`.
- Always make use of `Environment Variables` in `Lambda Functions` to store sensitive data.
- Make use of the `KMS (Key Management Service)` encryption service to encrypt sensitive data stored in `Environment Variables`.
- Make use of the `KMS` encryption service to encrypt sensitive data `At Rest` and `In Transit`.
- Remember that `Environment Variables` are tied to `Lambda Function Versions`. So it's a good idea to encrypt them before we generate the function version.
- Never log the decrypted values or any sensitive data to the console or any persistent storage. Remember that output from `Lambda Functions` is persisted in `CloudWatch Logs`.
- For `Lambda Functions` running inside a `VPC`:
    - Use least privilege security groups.
    - Use `Lambda Function` specific `Subnets` and `Network Configurations` that allow only the `Lambda Functions` to access `VPC Resources`.
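- A minimal sketch of a restrictive execution-role policy, scoped to a single hypothetical DynamoDB table and the function's own log group (account id, region and names are placeholders):
    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
          "Resource": "arn:aws:dynamodb:ap-south-1:123456789012:table/adi_notes_app"
        },
        {
          "Effect": "Allow",
          "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
          "Resource": "arn:aws:logs:ap-south-1:123456789012:log-group:/aws/lambda/my-function:*"
        }
      ]
    }
    ```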
- The following mechanisms are available for controlling `API Gateway` access:
    - `API Keys` and `Usage Plans`.
    - `Client Certificates`.
    - `CORS Headers`.
    - `API Gateway Resource Policies`.
    - `IAM Policies`.
    - `Lambda Authorizers`.
    - `Cognito User Pool Authorizers`.
    - `Federated Identity Access` using `Cognito`.
- When using `CI/CD Pipelines` for automated deployments, make sure appropriate `Access Control` is in place. For e.g. if pushing code to the `master branch` triggers our `Deployment Pipeline`, then we must ensure that only authorized team members have the ability to update the `master branch`.
- Keep declarations/instantiations outside the `Lambda Handlers`. This allows `Lambda Handlers` to reuse the objects when `Containers` get reused.
- Keep the `Lambda Handlers` lean, i.e. move the core logic of the `Lambda Function` outside of the `Handler Functions`.
- Avoid hardcoding, use `Environment Variables`.
- One function, one task. This is the `Microservices Architecture`.
- Watch the deployment package size and remove unused dependencies. Check `package.json`. Certain libraries are available by default on `Lambda Functions`; we can remove those libraries from `package.json`.
- Always keep an eye on `Lambda Logs`. Monitor the `Execution Duration` and `Memory Consumption`.
- Grant only the necessary `IAM Permissions` to `Lambda Functions`, although the serverless team recommends using an `Admin` user while developing `Serverless Framework Apps`.
- In production, give an `API Key` with at most `PowerUserAccess` to the `Serverless Framework User`. Avoid giving `AdministratorAccess`.
- Use the `-c` flag with `Serverless Framework Deployments`. This will ensure that the command only generates the `CloudFormation File` and does not actually execute it. We can then execute this `CloudFormation File` from within the `CloudFormation Console` or as part of our `CI/CD` process.
- If we are creating any temporary files in `/tmp`, make sure to unlink them before we exit out of our handler functions.
- There are restrictions on how many `Lambda Functions` we can create in one AWS account. So make sure to delete unused `Lambda Functions`.
- Always make use of error handling mechanisms and `DLQs`. Put our code in `Try..Catch` blocks, throw errors wherever needed and handle exceptions. Make use of `Dead Letter Queues (DLQ)` wherever appropriate.
- Use a `VPC` only if necessary. For e.g. if our `Lambda Function` needs access to `RDS` (which is in a `VPC`) or any other `VPC` based resources, only then put our `Lambda Function` in the `VPC`. Otherwise there is no need to put the `Lambda Function` in a `VPC`. `VPCs` are likely to add additional latency to our functions.
- Be mindful when using `Reserved Concurrency`. If we are planning to use `Reserved Concurrency`, make sure that the other `Lambda Functions` in our account have enough `concurrency` to work with. This is because every `AWS Account` gets `1000 Concurrent Lambda Executions Across Functions`. So if we reserve concurrency for any function, the concurrency limit available to other functions reduces by that amount.
- Keep containers warm so they can be reused. This will reduce the latency introduced by `Cold Starts`. We can easily schedule dummy invocations with `CloudWatch Events` to keep the functions warm (see the sketch after this list).
- Make use of frameworks like `AWS SAM` or the `Serverless Framework`.
- Use `CI/CD` tools.
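- A minimal `serverless.yml` sketch of the warm-up idea: a scheduled CloudWatch Events rule invokes the function periodically (names are placeholders; a real warmer would usually short-circuit in the handler when the event comes from the schedule):
    ```yaml
    functions:
      sayHello:
        handler: src/hello.handler
        events:
          - http:
              path: hello
              method: GET
          # Dummy scheduled invocation every 5 minutes keeps a container warm.
          - schedule: rate(5 minutes)
    ```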
- Keep API definitions as lean as possible, i.e. move all the logic to the backend `Lambda Functions`. So, unless absolutely necessary, we can simply use `Lambda Proxy Integration`, where `API Gateway` merely acts as a `Proxy` between the `Caller` and a `Lambda Function`. All the data manipulation then happens in one place, i.e. inside the `Lambda Handler Function`.
- Return useful responses back to the caller instead of returning generic server side errors.
- Enable logging options in `API Gateways` so it is easier to track down failures to their causes. Enable `CloudWatch Logs` for APIs.
- When using `API Gateways` in `Production`, it's recommended to use `Custom Domains` instead of `API Gateway URLs`.
- Deploy APIs closer to our customers' regions.
- Add `Caching` to get additional performance gains.
- The most important thing is `Table Design`.
    - `DynamoDB Tables` provide the best performance when designed for `Uniform Data Access`.
    - `DynamoDB` divides the `Provisioned Throughput` equally between all the `Table Partitions`, hence in order to achieve maximum utilization of the `Capacity Units`, we must design our `Table Keys` in such a way that the `Read and Write Loads` are uniform across `Partitions or Partition Keys`.
    - When `DynamoDB Tables` experience `Non-uniform Access Patterns`, the result is what is called a `Hot Partition`, i.e. some partitions are accessed heavily while others remain idle. When this happens, the `Idle Provisioned Capacity` is wasted while we still have to keep paying for it.
    - `DAX (DynamoDB Accelerator)` doesn't come cheap.
- When changing the provisioned throughput for any `DynamoDB Table`, i.e. `Scaling Up` or `Scaling Down`, we must avoid substantial temporary capacity scale-ups. Note: Substantial increases in `Provisioned Capacity` almost always result in `DynamoDB` allocating additional `Partitions`. And when we subsequently scale the capacity down, `DynamoDB` will not de-allocate the previously allocated `Partitions`.
- Keep `Item Attribute Names` short. This helps reduce the item size and thereby costs as well.
- If we are going to store large values in our items then we should consider compressing the `Non-Key Attributes`. We can use a technique like `GZip` for example. Alternatively, we can store large items in `S3` and only store pointers to those items in `DynamoDB`.
- `Scan` operations scan the entire table and hence are less efficient than `Query` operations. That's why we should avoid `Scan` operations. Note: `Filters` always get applied after the `Query` and `Scan` operations are completed.
- The applicable `RCUs` are calculated before applying the `Filters`.
- While performing read operations, go for `Strongly Consistent Reads` only if our application requires them. Otherwise always opt for `Eventually Consistent Reads`; that saves half the money. Note: Any read operations on `Global Secondary Indexes` are `Eventually Consistent`.
- Use `Local Secondary Indexes (LSIs)` sparingly. LSIs share the same partitions, i.e. the same physical space that is used by the `DynamoDB Table`. So adding more LSIs will use more partition space. This doesn't mean we shouldn't use them, but use them as per our application's needs.
- When choosing the projections, we can project up to a maximum of `20 Attributes per index`. So choose them carefully, i.e. project as few attributes onto secondary indexes as possible. If we just need `Keys` then use only `Keys`; it will produce the smallest `Index`.
- Design `Global Secondary Indexes (GSIs)` for uniform data access.
- Use `Global Secondary Indexes (GSIs)` to create `Eventually Consistent Read Replicas`.
- Always use `Timeouts` in `Task States`.
- Always handle errors with `Retriers` and `Catchers` (see the sketch below).
- Use `S3` to store large payloads and pass only the `Payload ARN` between states.
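- A minimal `ASL` fragment showing a retrier and a catcher on a task state; ARNs and state names are placeholders and the referenced states are assumed to exist elsewhere in the state machine:
    ```json
    {
      "ProcessOrder": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:ap-south-1:123456789012:function:processOrder",
        "TimeoutSeconds": 30,
        "Retry": [
          {
            "ErrorEquals": ["States.Timeout", "Lambda.ServiceException"],
            "IntervalSeconds": 2,
            "MaxAttempts": 3,
            "BackoffRate": 2.0
          }
        ],
        "Catch": [
          { "ErrorEquals": ["States.ALL"], "Next": "HandleFailure" }
        ],
        "Next": "NotifySuccess"
      }
    }
    ```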
- Setup:
    ```bash
    # Install serverless globally.
    sudo npm i -g serverless

    # (Optional) For automatic updates.
    sudo chown -R $USER:$(id -gn $USER) /Users/adiinviter/.config

    # Configure user credentials for the aws service provider.
    sls config credentials --provider aws --key [ACCESS_KEY] --secret [SECRET_KEY] -o

    # Create the aws nodejs serverless template.
    sls create -t aws-nodejs

    # Init npm.
    npm init -y

    # Install serverless-offline and serverless-offline-scheduler as dev dependencies.
    npm i serverless-offline serverless-offline-scheduler --save-dev
    ```
- After running the above commands, update the `service` property in `serverless.yml` with your service name.
    - NOTE: The `service` property in the `serverless.yml` file is mostly your project name. It is not the name of a specific lambda function.
- Add the following scripts under `package.json`:
    ```json
    {
        "scripts": {
            "dev": "sls offline start --port 3000",
            "dynamodb:start": "sls dynamodb start --port 8082"
        }
    }
    ```
- Update the `serverless.yml` file with the following config:
    ```yaml
    service: my-project-name

    plugins:
      - serverless-offline # Add this plugin if you are using it.
      - serverless-offline-scheduler # Add this plugin if you are using it.

    provider:
      name: aws
      runtime: nodejs12.x
      stage: dev # Stage can be changed while executing the deploy command.
      region: ap-south-1 # Set region.
    ```
- To add a new lambda function with an api endpoint, add the following in `serverless.yml`:
    ```yaml
    functions:
      hello:
        handler: src/controllers/users.find
        events:
          - http:
              path: users/{id}
              method: GET
              request:
                parameters:
                  paths:
                    id: true
    ```
- To run the project locally:
    ```bash
    # Using npm
    npm run dev

    # Directly using serverless
    sls offline start --port 3000
    ```
- To invoke a lambda function locally:
    ```bash
    sls invoke local -f [FUNCTION_NAME]
    ```
- To run lambda crons locally:
    ```bash
    sudo sls schedule
    ```
- To deploy:
    ```bash
    # To deploy all lambda functions.
    sls deploy -v

    # To deploy a specific function.
    sls deploy -v -f [FUNCTION_NAME]

    # To deploy the project on a different stage (e.g. production).
    sls deploy -v -s production
    ```
- To view logs for a specific function in a specific stage (e.g. dev, prod):
    ```bash
    # Syntax:
    sls logs -f [FUNCTION_NAME] -s [STAGE_NAME] --startTime 10m

    # Use -t to view logs in real time. Good for monitoring cron jobs.
    sls logs -f [FUNCTION_NAME] -s [STAGE_NAME] -t

    # Example #1:
    sls logs -f sayHello -s production --startTime 10m

    # Example #2:
    sls logs -f sayHello -s dev --startTime 15m
    ```
- To remove the project/function (this will delete the deployed `CloudFormation Stack` with all its resources):
    ```bash
    # To remove everything.
    sls remove -v -s [STAGE_NAME]

    # To remove a specific function from a specific stage.
    sls remove -v -f sayHello -s dev
    ```
- To create a simple cron job lambda function, add this to `serverless.yml`:
    ```yaml
    # Below code will execute 'cron.handler' every 1 minute.
    cron:
      handler: src/cron.handler
      events:
        - schedule: rate(1 minute)
    ```
- To configure a `Lambda Function` to run under a `VPC`:
    - We need `Security Group Ids` and `Subnet Ids`. To get them:
        - Under the `AWS Console`, go to `VPC`.
        - Go to `Security Groups` and copy the `Group ID`. We can copy the `default` one. Just one `Security Group Id` is enough though. Specify it under `securityGroupIds`.
        - Go to `Subnets`. Each `AWS Region` has a number of `Subnets`. Copy the `Subnet ID`s and specify them under the `subnetIds` option. Although `Serverless` requires `at least 2` subnets, we can copy all the subnets and specify them under the `subnetIds` option.
    - Under the `serverless.yml` file, set:
        ```yaml
        functions:
          hello: # This function is configured to run under a VPC.
            handler: handler.hello
            vpc:
              securityGroupIds: # We can specify 1 or more security group ids here.
                - sg-703jd2847
              subnetIds: # We must provide at least 2 subnet ids.
                - subnet-qndk392nc2
                - subnet-dodh28dg2b
                - subnet-ondn29dnb2
        ```
- We need
- Browse and open terminal into empty project directory.
- Execute :
# Create aws nodejs serverless template sls create -t aws-nodejs # Init npm. npm init -y # Install serverless-offline and serverless-offline-scheduler as dev dependancies. npm i serverless-offline serverless-offline-scheduler --save-dev
- Add following scripts under
package.json
:{ "scripts": { "dev": "sls offline start --port 3000", "dynamodb:start": "sls dynamodb start --port 8082", } }
- Open
serverless.yml
and editservice
name as well as setupprovider
:service: s3-notifications provider: name: aws runtime: nodejs12.x region: ap-south-1 plugins: - serverless-offline # Add this plugin if you are using it. - serverless-offline-scheduler # Add this plugin if you are using it.
- To install `Serverless` globally:
    ```bash
    sudo npm i -g serverless
    ```
- For automatic updates, after the above command, run:
    ```bash
    sudo chown -R $USER:$(id -gn $USER) /Users/adiinviter/.config
    ```
- To configure aws user credentials, run:
    ```bash
    # -o: To overwrite existing credentials if there are any set already.
    sls config credentials --provider aws --key [ACCESS_KEY] --secret [SECRET_KEY] -o
    ```
- After running the above command, the credentials will get set under the following path: `~/.aws/credentials`
- Each service is a combination of multiple `Lambda Functions`.
- To create a `NodeJS Serverless Service`:
    ```bash
    sls create -t aws-nodejs
    ```
- To invoke a `Lambda Function` locally:
    ```bash
    # Syntax
    sls invoke local -f [FUNCTION_NAME]

    # Example
    sls invoke local -f myfunct
    ```
- To pass data to a lambda function:
    ```bash
    # Syntax
    sls invoke local -f [FUNCTION_NAME] -d [DATA]

    # Example #1: to pass a single string value into the lambda function.
    sls invoke local -f sayHello -d 'Aditya'

    # Example #2: to pass an object into the lambda function.
    sls invoke local -f sayHello -d '{"name": "Aditya", "age": 33}'
    ```
- The `event` object holds any data passed into the lambda function. To access it:
    - Accessing data passed directly as a string, as shown in `Example #1` above:
        ```js
        // Example #1: to pass a single string value into the lambda function.
        // sls invoke local -f sayHello -d 'Aditya'
        module.exports.hello = async event => {
            const userName = event; // Data is available on 'event'.
            return {
                statusCode: 200,
                body: JSON.stringify({ message: `Hello ${userName}` })
            };
        };
        ```
    - Accessing object data passed as shown in `Example #2` above:
        ```js
        // Example #2: to pass an object into the lambda function.
        // sls invoke local -f sayHello -d '{"name": "Aditya", "age": 33}'
        module.exports.hello = async event => {
            const { name, age } = event;
            return {
                statusCode: 200,
                body: JSON.stringify({ message: `Hello ${name}, Age: ${age}` })
            };
        };
        ```
- For local development only, use the `Serverless Offline` plugin.
- Plugin:
    - https://www.npmjs.com/package/serverless-offline
    - https://github.com/dherault/serverless-offline
- To install:
    ```bash
    npm i serverless-offline --save-dev
    ```
- To run the serverless project locally:
    - Install the Serverless Offline plugin.
    - Under `serverless.yml`, add:
        ```yaml
        plugins:
          - serverless-offline
        ```
    - Under `package.json`, add a new run script:
        ```json
        "dev": "sls offline start --port 3000"
        ```
    - Run:
        ```bash
        npm run dev
        ```
- To deploy the serverless service, run:
    ```bash
    # -v: For verbose output.
    sls deploy -v
    ```
- Use the following plugin to set up DynamoDB locally (for offline use):
    - https://www.npmjs.com/package/serverless-dynamodb-local
    - https://github.com/99xt/serverless-dynamodb-local#readme
- To set it up:
    ```bash
    npm i serverless-dynamodb-local
    ```
- Register `serverless-dynamodb-local` in the serverless yaml:
    ```yaml
    plugins:
      - serverless-dynamodb-local
    ```
- Install DynamoDB into the serverless project:
    ```bash
    sls dynamodb install
    ```
- APIs can be secured using `API Keys`.
- To generate and use `API Keys` we need to modify the `serverless.yml` file:
    - Add an `apiKeys` section under `provider`:
        ```yaml
        provider:
          name: aws
          runtime: nodejs12.x
          ########################################################
          apiKeys: # For securing APIs using API Keys.
            - todoAPI # Provide a name for the API Key.
          ########################################################
          stage: dev # Stage can be changed while executing the deploy command.
          region: ap-south-1 # Set region.
          timeout: 300
        ```
    - Route by route, specify whether you want it to be `private` or not. For e.g.
        ```yaml
        functions:
          getTodo: # Secured route.
            handler: features/read.getTodo
            events:
              - http:
                  path: todo/{id}
                  method: GET
                  ########################################################
                  private: true # Route secured.
                  ########################################################
          listTodos: # Non-secured route.
            handler: features/read.listTodos
            events:
              - http:
                  path: todos
                  method: GET
        ```
    - After deploying we will receive the `api keys`. Copy the key to pass it under the headers.
        ```
        λ serverless offline start
        Serverless: Starting Offline: dev/ap-south-1.
        Serverless: Key with token: d41d8cd98f00b204e9800998ecf8427e # Here is our API Key token.
        Serverless: Remember to use x-api-key on the request headers
        ```
    - Pass the `api key` under the `x-api-key` header while hitting a secured route.
        ```
        x-api-key: d41d8cd98f00b204e9800998ecf8427e
        ```
    - If a wrong/no value is passed under the `x-api-key` header, then we will receive a `403 Forbidden` error.
- Useful commands for project `05-S3-Notifications`:
    - Set up an aws profile for the `Serverless S3 Local` plugin:
        ```bash
        aws configure --profile s3local

        # Use the following credentials:
        # aws_access_key_id = S3RVER
        # aws_secret_access_key = S3RVER
        ```
    - Trigger an S3 event - put a file into the local S3 bucket:
        ```bash
        aws --endpoint http://localhost:8000 s3api put-object --bucket "aditya-s3-notifications-serverless-project" --key "ssh-config.txt" --body "D:\Work\serverless\05-S3-Notifications\tmp\ssh-config.txt" --profile s3local
        ```
    - Trigger an S3 event - delete a file from the local S3 bucket:
        ```bash
        aws --endpoint http://localhost:8000 s3api delete-object --bucket "aditya-s3-notifications-serverless-project" --key "ssh-config.txt" --profile s3local
        ```
- After running `sls deploy -v`, error: `The specified bucket does not exist`:
    - Cause: This issue occurs when we manually delete the S3 bucket from the AWS console.
    - Fix: Login to the AWS console and delete the stack from `CloudFormation`.
    - Dirty fix (avoid): Delete the `.serverless` directory from the project (Serverless Service).
    - Full error (sample):
        ```
        Serverless: Packaging service...
        Serverless: Excluding development dependencies...
        Serverless: Uploading CloudFormation file to S3...

          Serverless Error ---------------------------------------

          The specified bucket does not exist

          Get Support --------------------------------------------
             Docs:          docs.serverless.com
             Bugs:          github.com/serverless/serverless/issues
             Issues:        forum.serverless.com

          Your Environment Information ---------------------------
             Operating System:          darwin
             Node Version:              13.7.0
             Framework Version:         1.62.0
             Plugin Version:            3.3.0
             SDK Version:               2.3.0
             Components Core Version:   1.1.2
             Components CLI Version:    1.4.0
        ```