How to deploy a website on AWS with Docker, Flask, & React from scratch
There are many ways to build and deploy a simple, scalable website. This tutorial walks through one method and will hopefully save you time by steering you around the pitfalls I ran into.
The overall architecture is:
- Frontend: JavaScript React app (Create React App) deployed on S3 and CloudFront
- Backend: Dockerized Python Flask app deployed on Elastic Beanstalk
- Domain: Parked at GoDaddy.com
For this example, we’ll be creating a website called https://logicshare.io . You can view the final code here: https://github.com/adamraudonis/logicshare
First let’s create a GitHub repo:
Now we’ll want to develop locally, so copy the repo link from above and run:
# Go to or make the folder you want
cd /Users/adamraudonis/Desktop/Projects/
mkdir LogicShare
cd LogicShare
# Replace with your repo link
git clone git@github.com:adamraudonis/logicshare.git
cd logicshare
Frontend
Ensure you have Node >= 8.10 and npm >= 5.6 installed. Let’s call our app frontend, but you can name it something different if you want. NOTE: create-react-app will take several minutes to run. The REACT_APP_API_URL env var is needed to talk to the backend later.
npx create-react-app frontend
cd frontend
export REACT_APP_API_URL=http://localhost:8080/api
npm start
You should now see the default react app here:
Let’s change it a bit to make it our own and add a form so we can test interaction with our backend. For editing the code, I’m using Microsoft’s Visual Studio Code (VSCode), which has great support for React and Python. You can make some test changes, save the file, and the localhost website will automatically update.
Next I added the code to make the requests to the backend. See here: https://github.com/adamraudonis/logicshare/blob/master/frontend/src/App.js You may need to install additional packages like so:
npm install --save axios
Now our frontend is showing errors when it tries to load from the backend, because the backend doesn’t exist yet (and if you opened a new terminal, the REACT_APP_API_URL env var is no longer set).
Backend
Here is the code for our very simple backend; put it in backend/app.py:
import json
from flask import Flask, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

# NOTE: This route is needed for the default EB health check route
@app.route('/')
def home():
    return "ok"

@app.route('/api/get_topics')
def get_topics():
    return {"topics": ["topic1", "other stuff", "next topic"]}

@app.route('/api/submit_question', methods=["POST"])
def submit_question():
    question = json.loads(request.data)["question"]
    return {"answer": f"Your Q was {len(question)} chars long"}

if __name__ == '__main__':
    app.run(port=8080)
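To sanity-check the request-parsing logic in submit_question without starting the server, here is a small stdlib-only sketch. The sample question string is made up for illustration; the parsing mirrors what the handler does with request.data:

```python
import json

# Simulate the raw JSON body the frontend POSTs to /api/submit_question
raw_body = b'{"question": "What is Flask?"}'

# Same parsing the Flask handler performs with request.data
question = json.loads(raw_body)["question"]
answer = {"answer": f"Your Q was {len(question)} chars long"}
print(answer)  # {'answer': 'Your Q was 14 chars long'}
```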
Set up a virtual environment in a new terminal window. If you don’t have Python 3.6 or later, install it first. I like to set this up outside of the git repo.
python3.7 -m venv venv # Create the virtual environment
source venv/bin/activate # Go inside the virtual environment
cd logicshare/backend
pip install flask
pip install flask-cors
pip freeze > requirements.txt

# Test running locally:
python app.py
Now we can see that our simple website is working locally:
For deployment, we’ll want to use the production web server Gunicorn instead of Flask’s built-in development server.
pip install gunicorn
pip freeze > requirements.txt
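After these installs, requirements.txt will look roughly like the following. The pinned versions below are illustrative only; yours will differ depending on when you run pip freeze, and there will be additional transitive dependencies:

```
click==7.1.2
Flask==1.1.2
Flask-Cors==3.0.8
gunicorn==20.0.4
```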
Next for gunicorn we need to make a file called wsgi.py
from app import app

if __name__ == "__main__":
    app.run()
You can test this works locally by running:
gunicorn wsgi:app -w 2 -b 0.0.0.0:8080 -t 30
Docker
Next let’s set up Docker so we can get ready for deployment. You might be wondering why we use Docker at all. For this ultra-simple app, and because Elastic Beanstalk has native Flask support, it is not technically necessary. However, starting with Docker gives you great flexibility if you ever want to switch to Kubernetes or a different service, or if you have additional complexity such as a nested directory structure or dependencies that require a specific environment. Next create the Dockerfile in backend/Dockerfile:
FROM python:3.7
WORKDIR /backend
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 8080
COPY . .
CMD ["gunicorn", "wsgi:app", "-w", "2", "-b", "0.0.0.0:8080", "-t", "30"]
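Optionally (this file is not in the original repo, just a common addition), a .dockerignore next to the Dockerfile keeps the virtual environment and git metadata out of the image’s build context, which speeds up builds and shrinks the image:

```
venv/
__pycache__/
*.pyc
.git/
aws_deploy/
```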
Now let’s build our Docker image, tagged with whatever name you want:
docker build -t logicshare-backend .
We can test this works by running the following. The -p flag maps port 8080 of the Docker container to port 8080 on your computer.
docker run -p 8080:8080 logicshare-backend
Press Control-C to stop the process when ready.
AWS Setup
Create or login to https://aws.amazon.com/ . I selected my region to be Oregon, but you can select whatever works for you.
You’ll need to install the AWS command line interface (CLI). On Mac, I did this by downloading this package: https://awscli.amazonaws.com/AWSCLIV2.pkg , but you can view the full install guide here: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html . You should now see:
Next before configuring the CLI you’ll want to create a new AWS user.
STOP AT THIS PAGE
Now, go back to your terminal, type aws configure, and follow the instructions, copying your access key ID and secret access key from that page.
Deploy Frontend
First we need to create a bucket in S3 to store our frontend files.
For now, make everything in your bucket public; we can change this later once our domain is set up.
Now we need to actually build the frontend and push to our bucket:
cd frontend
export REACT_APP_API_URL=/api
npm run build
aws s3 sync build/ s3://logicshare-frontend --acl public-read
NOTE: You need the --acl public-read flag because running s3 sync otherwise resets the permissions to non-public by default. You’ll now see your files in the bucket:
Make sure for now all the files are public:
Next we want to create a CloudFront CDN distribution, which will improve our frontend loading times, let us easily use HTTPS, and route requests to the backend.
Create a Distribution in CloudFront and click Get Started under Web
Link to your S3 bucket and set the origin path to /index.html . For now, ensure Restrict Bucket Access = No until we set up our domain.
After your distribution is done processing (which can take a couple minutes) you can now confirm everything works by clicking on the domain name. Notice it redirects to your bucket and then adds index.html . Once we add our custom domain name later we can restrict permissions and it will not redirect.
To automate frontend deployment, create a script called deploy_frontend.sh with the following content, then run chmod +x deploy_frontend.sh to make it executable. Now run ./deploy_frontend.sh :
echo "Deploying Frontend..."
cd frontend
export REACT_APP_API_URL=/api
npm run build
aws s3 sync build/ s3://logicshare-frontend
(NOTE: This script assumes the bucket is no longer public, which requires the domain setup described below.)
Deploy Backend
First let’s setup a repo in the Elastic Container Registry on AWS so we can push our docker container there.
Now we can run the push commands. Note: the account ID and region in the registry URL will be different for you.
cd backend
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 184952363101.dkr.ecr.us-west-2.amazonaws.com
docker build -t logicshare-backend .
docker tag logicshare-backend:latest 184952363101.dkr.ecr.us-west-2.amazonaws.com/logicshare-backend:latest
docker push 184952363101.dkr.ecr.us-west-2.amazonaws.com/logicshare-backend:latest
Next let’s set up the Elastic Beanstalk instance:
Here you can configure Elastic Beanstalk to use only spot instances. If you don’t need 100% guaranteed availability, you can save up to 90% by selecting these options.
NOTE: If your Docker image is large, you may need to select a larger instance type like t3.medium. You could also just see if your deployment fails first.
Next we need to make sure that EB can pull from ECR! Otherwise you’ll get this error: “Instance deployment: The ECR service failed to authenticate your private repository.” Go back to IAM, select the aws-elasticbeanstalk-ec2-role under Roles, and attach the AmazonEC2ContainerRegistryReadOnly policy.
Create a folder in backend called aws_deploy, and inside it a file called Dockerrun.aws.json. This file tells AWS which Docker image to use.
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "184952363101.dkr.ecr.us-west-2.amazonaws.com/logicshare-backend",
"Update": "true"
},
"Ports": [
{
"ContainerPort": 8080,
"HostPort": 8080
}
]
}
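A malformed Dockerrun.aws.json (a trailing comma, say) fails only at deploy time, so it can be worth validating the JSON first. A quick stdlib check, with the contents from above embedded as a string:

```python
import json

# The Dockerrun.aws.json contents from above, embedded for a quick validity check
dockerrun = """
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "184952363101.dkr.ecr.us-west-2.amazonaws.com/logicshare-backend",
    "Update": "true"
  },
  "Ports": [{"ContainerPort": 8080, "HostPort": 8080}]
}
"""

# json.loads raises a ValueError on any syntax error
config = json.loads(dockerrun)
print(config["Ports"][0]["ContainerPort"])  # 8080
```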
Next you’ll need the AWS Elastic Beanstalk (EB) CLI to deploy your latest Docker image:
pip install awsebcli
cd backend/aws_deploy
eb init
eb deploy
Looks like everything works!
If you made a mistake, for example a typo in the Docker image name, you can click on Logs, which should help you understand what happened.
If you want to be able to deploy just by typing ./deploy_backend.sh , create a file called deploy_backend.sh with the following content, then run chmod +x deploy_backend.sh to make it executable.
echo "Deploying Backend..."
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 184952363101.dkr.ecr.us-west-2.amazonaws.com
docker build -t logicshare-backend .
docker tag logicshare-backend:latest 184952363101.dkr.ecr.us-west-2.amazonaws.com/logicshare-backend:latest
docker push 184952363101.dkr.ecr.us-west-2.amazonaws.com/logicshare-backend:latest
cd aws_deploy
eb deploy
Routing Frontend to Backend
Back in CloudFront, we now want to route any frontend request to /api/* to our new EB backend. First, create a new origin:
When you create this origin, ensure you select the Elastic Load Balancer (NOT S3). Also do NOT select HTTPS; that will come later when we add a domain.
Next we need to route requests from /api/* to the backend. Do this by creating a behavior that points to the new origin we just added.
Make sure to select the managed caching policy CachingDisabled, otherwise responses from any stateful API will be cached and served incorrectly. Check “Redirect HTTP to HTTPS”. You’ll also want to ensure you allow the POST method, otherwise your POST requests won’t work.
EDIT: Ensure that you also set the Origin Request Policy to Managed-AllViewer. Otherwise, requests with query parameters in your URL, like ?id=2, will not work!
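To see what’s at stake: the backend reads parameters from the query string, and if CloudFront’s origin request policy doesn’t forward it, the origin receives only the bare path. A quick stdlib illustration (the example URL and ?id=2 parameter are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

# What the browser requests through CloudFront
url = "https://logicshare.io/api/get_topics?id=2"

# With Managed-AllViewer, the origin receives the query string intact
params = parse_qs(urlparse(url).query)
print(params)  # {'id': ['2']}

# Without it, only the path is forwarded, so the backend sees no parameters
stripped = parse_qs(urlparse("https://logicshare.io/api/get_topics").query)
print(stripped)  # {}
```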
Now our website is fully functional!
Adding the second origin and behavior allows us to now restrict access to our frontend S3 bucket.
Now you’ll notice the CloudFront URL works but accessing S3 directly does not.
Link domain from Godaddy to AWS
First create a hosted zone in AWS Route 53:
Next click on Hosted Zone Details to view the name servers
In a new tab, log into your GoDaddy account and open the DNS page. Then copy over YOUR name servers from AWS above.
NOTE: If you go to Route 53 now, click Create Record, choose Simple Record, and try to connect to CloudFront, you’ll notice your distribution doesn’t show up. We need to configure CloudFront first.
Go to CloudFront and add your domain name, www, and any other subdomains you want under Alternate Domain Names, then click “Request or Import a Certificate with ACM”, as the certificate field will initially be blank. Follow the instructions and wait for your certificate to be validated. I chose CNAME record validation, as AWS makes it really easy with Route 53.
When you create the certificate, remember to cover both *.yourdomain.com and yourdomain.com.
Now you should be able to see the certificate as a dropdown in the edit distribution page.
Now let’s go back to Route 53 and create a new record:
Next create a CNAME record for www.logicshare.io
Now https://logicshare.io and https://www.logicshare.io both work!!!
Reminder that all the code for this little website is here: https://github.com/adamraudonis/logicshare