AWS Elastic Beanstalk

What is AWS Elastic Beanstalk?

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

How to deploy a web application using AWS Elastic Beanstalk?

  1. Open the Elastic Beanstalk console using this preconfigured link: https://console.aws.amazon.com/elasticbeanstalk
  2. Choose Create New Application.
  3. In the Application Name box, type a name for your application, and then choose Create.
  4. Choose Create one now to create an environment in the AWS Management Console.
  5. Choose the platform that matches your application code.
  6. Choose the platform version that matches your application code.
  7. Choose the sample application to deploy.
  8. Choose Create environment.
  9. Choose the URL to open your running sample application.
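The console steps above can also be scripted with the EB CLI (a sketch, assuming the EB CLI is installed and AWS credentials are configured; the application and environment names are illustrative):

```shell
# Initialize an Elastic Beanstalk application in the current project directory.
eb init my-app --platform node.js --region ap-south-1

# Create an environment and deploy the code in the current directory to it.
eb create my-app-env

# On later changes, package and deploy the working directory again.
eb deploy my-app-env

# Open the environment's URL in a browser.
eb open
```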

Difference between application and environment in AWS Elastic Beanstalk

An application is a logical container: it holds your application versions (the uploaded deployment packages), saved configurations, and environments. An environment is the set of running AWS resources (EC2 instances, an Auto Scaling group, optionally a load balancer) that runs one specific application version. One application can have several environments, for example a dev environment and a production environment, each running its own version.

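One way to see the distinction is with the AWS CLI, where applications and environments are separate resources (a sketch, assuming configured credentials; backend-dev is an illustrative application name):

```shell
# List applications (the logical containers).
aws elasticbeanstalk describe-applications

# List the environments (running resource stacks) under one application.
aws elasticbeanstalk describe-environments --application-name backend-dev

# List the uploaded application versions that an environment can run.
aws elasticbeanstalk describe-application-versions --application-name backend-dev
```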
Instance traffic and scaling

  1. Capacity Settings:
  2. Instance Types and AMI:
  3. Scaling Settings:
  4. Load Balancer Settings:
  5. Other Settings:
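These settings can be configured in the console, or pinned in code with an .ebextensions config file inside the application source bundle (a sketch; the values are illustrative, and the namespaces are standard Elastic Beanstalk option namespaces):

```yaml
# .ebextensions/scaling.config
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: t3.micro        # instance type for environment instances
  aws:autoscaling:asg:
    MinSize: 1                    # capacity / scaling bounds
    MaxSize: 4
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced # vs. SingleInstance
    LoadBalancerType: application
```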

Updates, monitoring, and logging

Monitoring: Elastic Beanstalk reports environment health (basic or enhanced health monitoring) and publishes metrics to Amazon CloudWatch, where you can set alarms.

Updates: Elastic Beanstalk supports managed platform updates and several deployment policies for rolling out new application versions, such as all at once, rolling, rolling with an additional batch, and immutable.
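From the EB CLI, an environment's health and logs can be inspected directly (a sketch, assuming the EB CLI is already set up for the project):

```shell
# Overall environment status and the currently deployed version label.
eb status

# Per-instance health (detailed when enhanced health monitoring is enabled).
eb health

# Fetch recent logs from the environment's instances.
eb logs
```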

Let's see how to deploy a web application to AWS Elastic Beanstalk using GitHub Actions.

```yaml
name: Deploy Development Server to Beanstalk

on:
  push:
    branches: [dev]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout source code
        uses: actions/checkout@v4

      - name: Set current date as env variable
        id: current-date
        run: echo "NOW=$(date +'%Y-%m-%dT%H:%M:%S')" >> $GITHUB_OUTPUT

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 18

      - name: Install dependencies
        run: npm i

      - name: Build
        run: npm run build

      - name: Generate deployment package
        run: zip -r ${{ steps.current-date.outputs.NOW }}.zip . -x '*.git*' -x '*node_modules*'

      - name: Deploy to AWS Elasticbeanstalk
        uses: einaregilsson/beanstalk-deploy@v21
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          application_name: backend-dev
          environment_name: backend-dev-env
          version_label: ${{ github.sha }}
          region: ap-south-1
          deployment_package: ${{ steps.current-date.outputs.NOW }}.zip
          existing_bucket_name: backend-dev-deployment
```
```mermaid
graph TD;
subgraph Local Machine
P[Local Machine]
X[Codebase Containing <br> GitHub Workflow file]
Y[After any change <br>on the configured branch, <br>the action below runs]
end
subgraph GitHub
Q[GitHub]
Z[Code Changed Locally<br>and Pushed To GitHub]
W[Workflow/GitHub Actions Started Working]
end
subgraph R[Virtual Machine Made by<br> GitHub For Action Running]
S[Virtual Machine]
T[Checkout source code]
U[Set current date as env variable]
V[Setup Node.js]
V1[Install dependencies]
V2[Build]
V3[Generate deployment package]
end
X-->P
Y-->P
P-->Z
P-->W
Z-->Q
W-->Q
Q-->T
Q-->U
Q-->V
Q-->V1
Q-->V2
Q-->V3
T-->S
U-->S
V-->S
V1-->S
V2-->S
V3-->S
subgraph R1[Deploying to Elastic Beanstalk]
A[AWS Elastic Beanstalk]
end
S-->A
```

More explanation of some of the jobs in the above workflow

Checkout source code: uses actions/checkout@v4 to clone the repository into the runner's workspace so the later build steps can access the code.

Generate deployment package: zips the working directory (including the build output) into a timestamped archive, excluding the .git directory and node_modules; the Node.js platform installs dependencies on the instances during deployment.

Deploy to AWS Elastic Beanstalk: uses the einaregilsson/beanstalk-deploy action to upload the archive as a new application version (labelled with the commit SHA) and deploy it to the backend-dev-env environment in ap-south-1.

1) Does Elastic Beanstalk store that zip file in S3 first?

Yes, that's correct. When you deploy an application to AWS Elastic Beanstalk using the AWS Management Console, AWS CLI, or API, Elastic Beanstalk stores the deployment package (in this case, the zip file) in an Amazon S3 bucket.

Here's how the deployment process typically works:

  1. You generate a deployment package (such as a zip file) containing your application code and any other necessary files.
  2. When you initiate a deployment to Elastic Beanstalk, whether through the AWS Management Console, CLI, or API, Elastic Beanstalk uploads this deployment package to an Amazon S3 bucket.
  3. Elastic Beanstalk then retrieves the deployment package from the S3 bucket and deploys it to the instances in your Elastic Beanstalk environment.
  4. Once the deployment is complete, the deployment package remains in the S3 bucket, where it can be accessed in case of future deployments or for reference.
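You can see this bucket yourself: Elastic Beanstalk creates one storage bucket per region, named elasticbeanstalk-&lt;region&gt;-&lt;account-id&gt; (a sketch, assuming configured credentials; substitute your own region and account ID for the placeholders):

```shell
# Elastic Beanstalk's per-region storage bucket follows this naming pattern
# (123456789012 is a placeholder account ID).
aws s3 ls s3://elasticbeanstalk-ap-south-1-123456789012/

# The workflow above instead uploads to an explicitly named bucket.
aws s3 ls s3://backend-dev-deployment/
```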

This mechanism allows Elastic Beanstalk to efficiently manage the deployment process by separating the storage of the deployment package from the deployment itself. It also enables Elastic Beanstalk to scale deployments across multiple instances in your environment, as instances can simultaneously retrieve the deployment package from the S3 bucket during deployment.

2) Because of this, my S3 bucket is filling up day by day. What can we do?

If the Amazon S3 bucket used for Elastic Beanstalk deployments keeps growing, you can manage its storage effectively with the following strategies:

  1. Retention Policy : Implement a retention policy to automatically delete older deployment packages from the S3 bucket. Elastic Beanstalk has a built-in application version lifecycle policy that can cap versions by count or age and delete their source bundles from S3, and you can also configure S3 lifecycle rules to transition or expire objects based on their age.
  2. Versioning : If versioning is enabled for your S3 bucket, consider enabling lifecycle policies to manage object versions. You can specify rules to delete older versions of objects after a certain period while retaining the latest version.
  3. Review and Cleanup : Regularly review the contents of the S3 bucket and identify any unnecessary or obsolete deployment packages. You can manually delete these files or automate the cleanup process using scripts or AWS SDKs.
  4. Optimize Deployment Packages : Review your deployment process and optimize the size of deployment packages. Minimize unnecessary files and dependencies included in the package to reduce storage consumption.
  5. Archive to Glacier : For long-term storage of historical deployment packages that are rarely accessed, consider archiving them to the Amazon S3 Glacier storage classes. Glacier offers lower storage costs compared to standard S3 storage but with longer retrieval times.
  6. Cost Monitoring : Monitor your AWS billing and usage reports to track storage costs associated with the S3 bucket. This can help identify trends and anomalies in storage usage and take appropriate actions to optimize costs.
  7. Use Multiple Buckets : If your deployment packages include large files or datasets that are rarely accessed, consider using separate S3 buckets with different lifecycle policies based on access frequency and retention requirements.

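As a concrete example of strategies 1 and 2, an S3 lifecycle rule can expire old deployment packages automatically (a sketch, assuming configured credentials; the bucket name and the 90-day window are illustrative):

```shell
# Expire deployment packages 90 days after upload.
aws s3api put-bucket-lifecycle-configuration \
  --bucket backend-dev-deployment \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-deployment-packages",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 90}
    }]
  }'
```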
By implementing these strategies, you can effectively manage storage usage in your S3 bucket used for Elastic Beanstalk deployments and optimize costs associated with AWS storage services.