- Remove babel and use the `.mjs` file extension instead of `.js`
- Add classes support (@babel/plugin-transform-classes)
- Add versioning support for APIs
- Add port 80 (HTTP) and port 443 (HTTPS) support using nginx
- REVIEW: do we need wait-for.sh in production?
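For the nginx todo above, a minimal reverse-proxy sketch for the HTTP side; the `server_name` and the upstream port 3000 are assumptions (use whatever port the node app actually listens on), and the HTTPS (443) server block additionally needs certificates (e.g. via certbot):

```nginx
# /etc/nginx/conf.d/app.conf (sketch; domain and port are placeholders)
server {
    listen 80;
    server_name example.com;

    location / {
        # assumes the node app listens on port 3000
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```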
- Objection.js ORM
- Knex.js Query Builder
- Node MySQL 2
- How to Create MySQL Users Accounts and Grant Privileges - (without docker)
- How to Create MySQL User and Grant Privileges: A Beginner’s Guide - (without docker)
- Babel Setup
- Docker Compose
- Knex Objection ORM Tutorial
- Knex Setup Guide
- knex wait for connection
- pool afterCreate
- acquireConnectionTimeout
- Setting up Docker with Knex.js and PostgreSQL
- Docker wait for postgresql to be running
- Waiting for MySQL to come up before talking to it
- bonita example
- wait-for-it Usage with Docker #57
- Containerizing a Node.js Application for Development With Docker Compose
- Troubleshooting Knex Connection
- ECMAScript modules (ESM) Interoperability
- Deleting data from associated tables using knex.js
- Better logs for ExpressJS using Winston and Morgan with Typescript
- Express middleware: A complete guide
- Express Use gzip compression
- AWS EC2 setup (YouTube)
- How to fix docker: Got permission denied while trying to connect to the Docker daemon socket
- Amazon Linux 2 - install docker & docker-compose
- Make the `wait-for.sh` script executable

  ```sh
  chmod +x wait-for.sh
  ```

- Modify the Docker Compose `command` in the `node-app` service

  ```yml
  # ./wait-for.sh <wait-for-service-name>:<port-of-the-service> -- <commands-to-execute-after>
  command: ./wait-for.sh mysql:3306 -- npm run dev
  ```
- Init

  ```sh
  knex init --cwd ./src/db
  ```

- Migrations

  ```sh
  knex --esm migrate:make --cwd ./src/db <migrations_name>
  ```

- Seeds

  ```sh
  knex --esm seed:make --cwd ./src/db <seed_name>
  ```

- IMPORTANT: log in to the docker container and run `migrate` and `seed`
- Image

  - List images

    ```sh
    docker image ls
    ```

  - Remove one or more images

    ```sh
    docker image rm <image_name>
    ```
- Container

  - List running containers

    ```sh
    docker ps
    ```

  - List all containers

    ```sh
    docker ps -a
    ```

  - Remove one or more containers

    ```sh
    docker container rm <container_name>

    # force
    docker container rm <container_name> -f

    # volumes
    docker container rm <container_name> -v

    # force and volumes
    docker container rm <container_name> -fv
    ```

    NOTE:

    - `-f` or `--force`: Force the removal of a running container (uses SIGKILL)
    - `-v` or `--volumes`: Remove anonymous volumes associated with the container
- Volumes

  - List volumes

    ```sh
    docker volume ls
    ```

  - Remove all unused local volumes

    ```sh
    docker volume prune
    ```
- Access File System

  - Use `sh` or `ash`, since `bash` is unavailable in alpine images

    ```sh
    docker exec -it <container_name> ash

    # as root user
    docker exec -it --user root <container_name> ash
    ```

    NOTE: `docker exec` runs a command in a running container

    - `-i` or `--interactive`: Keep STDIN open even if not attached
    - `-t` or `--tty`: Allocate a pseudo-TTY

  - Check the environment variables set inside the docker container

    ```sh
    printenv
    ```
- Compose

  - DEVELOPMENT

    - up

      ```sh
      docker-compose up -d

      # use this if there are any changes in the Dockerfile, to build images before starting containers
      docker-compose up -d --build

      # re-build images without downing the containers and re-create anonymous volumes
      docker-compose up -d --build -V

      # scale the number of instances
      docker-compose up -d --scale node-app=2
      ```

      NOTE: `-d` or `--detach`: Detached mode: run containers in the background

    - down

      ```sh
      docker-compose down

      # remove containers and their volumes (don't use this if you want the db to persist)
      docker-compose down -v

      # remove all images used by any service
      docker-compose down --rmi all

      # remove only images that don't have a custom tag set by the `image` field
      docker-compose down --rmi local
      ```

      NOTE: `-v` or `--volumes`: Remove named volumes declared in the `volumes` section of the Compose file and anonymous volumes attached to containers

  - PRODUCTION

    - up

      ```sh
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

      # rebuild images
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build
      ```

    - down

      ```sh
      # remove volumes as well
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml down -v

      # don't remove volumes
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml down
      ```

  - NOTE: you can also use `docker compose` instead of `docker-compose`
- Database

  - MySQL

    - Open MySQL (recommended)

      ```sh
      docker exec -it <db_container_name> bash
      mysql -u <user_name> -p
      # enter your password
      use <db_name>
      ```

    - Directly log in to mysql

      ```sh
      # open mysql
      docker exec -it <db_container_name> mysql -u <user_name> --password=<password>

      # directly open the database
      docker exec -it <db_container_name> mysql -u <user_name> --password=<password> <db_name>
      ```

  - redis

    - Open redis

      ```sh
      docker exec -it <redis_container_name> redis-cli
      ```

    - View session keys inside redis-cli

      ```sh
      KEYS *
      ```

    - Get session details using the session id obtained from `KEYS *`

      ```sh
      GET <session_key>
      ```
- Cleaning

  If you want a fresh start for everything, run `docker system prune -a` and `docker volume prune`. The first command removes any stopped containers and unused images; the second removes any unused volumes. I recommend doing this fairly often, since Docker likes to stash everything away, causing the gigabytes to add up.
- Launch a server on the cloud (use DigitalOcean or AWS). I am using AWS.

  - Add an Ubuntu AWS EC2 instance (I chose `t2.small`).
  - Select Free Tier
  - Add a security group for HTTP (80), HTTPS (443), and SSH (22)
  - Click `Review and Launch`
  - Add tags if you want: `Key=Name` and `Value=App`
  - Create a key file and store it in a secure location for ssh access
  - Launch Instance
  - Wait for the instance status to be running and copy the `Public IP address`.
  - Go to the location of the downloaded key file and open the terminal.
  - Type in the command to get access to the cloud instance of the ubuntu server (the `ubuntu`/`ec2-user` user is created by default)

    ```sh
    ssh -i <key-file-name>.<extension> ubuntu@<public_ip>

    # if using an AMI instance
    ssh -i <key-file-name>.<extension> ec2-user@<public_ip>
    ```

    NOTE: based on the file extension (.pem or .cer), we may need to give the key file special permissions using `chmod 600 <key-file-name>.<extension>`. Run the ssh command again to get access to the ubuntu instance.

  - Update Ubuntu (Optional)

    ```sh
    # check available updates
    sudo apt list --upgradable

    # update the repository index and install updates for the kernel and installed applications
    sudo apt update && sudo apt upgrade -y

    # run this once the update is finished
    sudo reboot
    ```

    NOTE: After rebooting, wait for some time and reconnect to the ubuntu instance using ssh
- Add Deploy Keys to get repository access inside the server (works even for a private repository)

  - Generate an SSH key inside the server

    ```sh
    cd .ssh/
    ssh-keygen -t ed25519 -C "your_email@example.com"
    ```

  - Copy the public key from `id_*.pub` and paste it into the deploy keys section of the github repo.
- Install Docker in the Ubuntu Instance

  - Get docker engine (community) using the convenience script

    ```sh
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh
    ```

  - Install docker and git (when using an `AMI instance`)

    ```sh
    sudo yum install -y docker git
    sudo service docker start
    sudo usermod -a -G docker ec2-user

    # make docker auto-start
    sudo chkconfig docker on

    # reboot to verify it all loads fine on its own
    sudo reboot
    ```

  - Get docker-compose from the official documentation for linux

    ```sh
    # check the docs for the version before using this command
    sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

    # get the latest version
    sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose

    sudo chmod +x /usr/local/bin/docker-compose
    ```

  - Manage Docker as a non-root user, or run the Docker daemon as a non-root user (Rootless mode)
- Create a `.env` file inside the server

  - Open a .env file using vim

    ```sh
    vim .env
    ```

  - Add environment variables

    ```sh
    NODE_ENV=production
    MYSQL_ROOT_PASSWORD=
    MYSQL_DATABASE=
    MYSQL_USER=
    MYSQL_PASSWORD=
    SESSION_SECRET=
    ```

    NOTE: `NODE_ENV=production` is not strictly needed since it is set in the dockerfile, but it is added here anyway

  - Modify `.profile` to load `.env`

    ```sh
    vim .profile
    ```

    ```sh
    # add this at the bottom
    set -o allexport; source $HOME/.env; set +o allexport
    ```

    NOTE: use `$HOME` (or) `$(pwd)` (or) `$PWD` (or) an absolute path

  - Check the existing environment variables

    ```sh
    printenv
    ```

  - Exit and log in again for the changes to take effect
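To see what the `allexport` lines in `.profile` actually do, here is a self-contained demo; `/tmp/demo.env` and the variable names are throwaway examples:

```shell
# demonstrates the allexport trick used in .profile:
# every variable assigned while allexport is on is exported automatically
printf 'GREETING=hello\nTARGET=world\n' > /tmp/demo.env

set -o allexport
. /tmp/demo.env
set +o allexport

# both variables are now exported, so they are visible to child processes
sh -c 'echo "$GREETING $TARGET"'   # prints "hello world"
```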
- Create a folder for the code and clone it (ssh)

  ```sh
  mkdir app
  cd app
  git clone git@github.com:Mugilan-Codes/objection-knex-demo.git .
  ```
- Run the docker production command

  ```sh
  docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
  ```
- Run migrations inside the node-app container

  ```sh
  docker exec -it app_node-app_1 ash
  npm run migrate:prod
  ```
- Check in the mysql container whether the migrations were successful

  ```sh
  docker exec -it app_mysql_1 mysql -u <MYSQL_USER> --password=<MYSQL_PASSWORD>
  ```

  ```sql
  select database();
  show databases;
  use <MYSQL_DATABASE>;
  select database();
  show tables;
  desc <table_name>;
  ```
- Make calls to the API from anywhere in the world

  ```
  http://<PUBLIC_IPV4_ADDRESS/PUBLIC_IPV4_DNS>/api/v1
  ```
- Workflow

  - Make changes to src and push them to github
  - `cd app` in the production server and git pull the new changes
  - Build the new image in the production server

    ```sh
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build

    # we know that there will be changes only in the node app, so we can do this instead
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build node-app

    # do the above but without rebuilding the dependencies (depends_on)
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build --no-deps node-app

    # force-recreate containers even when there is no change, without dependencies
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --force-recreate --no-deps node-app
    ```

  - Use a cloud repo to store the built images (DockerHub, Amazon ECR, or something else). Create a repository there.
  - Tag the image with respect to the name of the remote image repo that was created (`<username>/<repo_name>`)

    ```sh
    docker image tag <local_image_name>:<version> <username>/<repo_name>
    docker image tag objection-knex_node-app mugilancodes/objection-knex-node-app
    ```

    NOTE: if `version` is not provided, it defaults to `latest`

  - Push the tagged image to the remote repo

    ```sh
    docker push <username>/<repo_name>
    docker push mugilancodes/objection-knex-node-app
    ```

  - Update the docker-compose.yml file to use this `image` and push the change with `git push`

    NOTE: Do these steps in the local development machine

  - Pull in the changes using `git pull` and run the containers again in the `production server` to tag the images

    ```sh
    docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
    ```
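The docker-compose.yml change described above (pointing the `node-app` service at the pushed image) might look like the sketch below; keeping `build` alongside `image` also lets `docker-compose build`/`push` use the remote repo name as the tag. The build context path is an assumption:

```yml
# docker-compose.yml (excerpt, illustrative)
services:
  node-app:
    build: .
    # name the built image after the remote repo so pushes/pulls line up
    image: mugilancodes/objection-knex-node-app
```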
- How to make changes reflect in the production server?

  - In the Development Machine

    - Build the custom images in the local development machine

      ```sh
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml build

      # only a specific service
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml build node-app
      ```

    - Push the built images to the cloud image repo

      ```sh
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml push

      # only a specific service
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml push node-app
      ```

  - In the Production Server

    - Pull the changes from the cloud repo into the production server

      ```sh
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull

      # only a specific image
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml pull node-app
      ```

    - Update the changes

      ```sh
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

      # specific rebuild
      docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d --no-deps node-app
      ```

      NOTE: use watchtower to automate these steps in the production server
- Orchestrator (kubernetes or docker swarm)

  - Check if docker swarm is active in the production server (`Swarm: active`)

    ```sh
    docker info
    ```

  - Activate Swarm

    - Get the public ip (`eth0 --> inet`)

      ```sh
      ip add
      ```

    - Initialize swarm using the public ip

      ```sh
      docker swarm init --advertise-addr <public_ip>
      ```

  - Add Nodes to the Swarm

    - Manager

      ```sh
      docker swarm join-token manager
      ```

    - Worker

      ```sh
      docker swarm join --token <token_provided> <ip>:<port>

      # retrieve the join command for the worker
      docker swarm join-token worker
      ```

  - Update the compose file for swarm deployment and push it to github
  - Pull the changes made to the production docker compose file into the production server. Tear down the running containers to prepare for `docker stack deploy`.
  - Deploy (you can choose any name for the stack instead of `myapp`)

    ```sh
    docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml myapp
    ```

  - Check how many nodes are running

    ```sh
    docker node ls
    ```

  - Check how many stacks there are

    ```sh
    docker stack ls
    ```

  - List the services in the stack

    ```sh
    docker stack services myapp
    ```

  - List all the services across all stacks

    ```sh
    docker service ls
    ```

  - List the tasks in the stack

    ```sh
    docker stack ps myapp
    ```
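The "update the compose file for swarm deployment" step above usually means adding a `deploy` key, which plain `docker-compose up` ignores but `docker stack deploy` honors. A sketch with illustrative values (replica count, delays, and the image name are assumptions):

```yml
# docker-compose.prod.yml (excerpt, illustrative)
version: "3.8"
services:
  node-app:
    image: mugilancodes/objection-knex-node-app
    deploy:
      replicas: 2                # run two instances across the swarm
      restart_policy:
        condition: any           # restart regardless of exit status
      update_config:
        parallelism: 1           # update one replica at a time
        delay: 10s               # wait between replica updates
```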