For the purpose of this tutorial let's create a PostgreSQL database called `example`. Our user here is `postgres`, the password is `password`, and the host is `localhost`.
```
psql -h localhost -U postgres -w -c "create database example;"
```
When using the Migrate CLI we need to pass the database URL. Let's export it to a variable for convenience:
```
export POSTGRESQL_URL='postgres://postgres:password@localhost:5432/example?sslmode=disable'
```
`sslmode=disable` means that the connection with our database will not be encrypted. Enabling it is left as an exercise.
You can find a further description of database URLs here.
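For instance, a TLS-enabled connection might use a URL along the following lines; the `verify-full` mode and the certificate path shown here are illustrative assumptions, so adjust them to your server's TLS setup:
```
export POSTGRESQL_URL='postgres://postgres:password@localhost:5432/example?sslmode=verify-full&sslrootcert=/path/to/root.crt'
```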
Let's create a table called `users`:
```
migrate create -ext sql -dir db/migrations -seq create_users_table
```
If there were no errors, we should have two files available under the `db/migrations` folder:
- 000001_create_users_table.down.sql
- 000001_create_users_table.up.sql
Note the `sql` extension that we provided.
In the `.up.sql` file let's create the table:
```
CREATE TABLE IF NOT EXISTS users(
   user_id serial PRIMARY KEY,
   username VARCHAR (50) UNIQUE NOT NULL,
   password VARCHAR (50) NOT NULL,
   email VARCHAR (300) UNIQUE NOT NULL
);
```
And in the `.down.sql` let's delete it:
```
DROP TABLE IF EXISTS users;
```
By adding `IF EXISTS/IF NOT EXISTS` we are making migrations idempotent - you can read more about idempotency in getting started.

Run migrations:
```
migrate -database ${POSTGRESQL_URL} -path db/migrations up
```
Let's check if the table was created properly by running `psql example -c "\d users"`.
The output you are supposed to see:
Table "public.users"
Column | Type | Modifiers
----------+------------------------+---------------------------------------------------------
user_id | integer | not null default nextval('users_user_id_seq'::regclass)
username | character varying(50) | not null
password | character varying(50) | not null
email | character varying(300) | not null
Indexes:
"users_pkey" PRIMARY KEY, btree (user_id)
"users_email_key" UNIQUE CONSTRAINT, btree (email)
"users_username_key" UNIQUE CONSTRAINT, btree (username)
Great! Now let's check if running reverse migration also works:
```
migrate -database ${POSTGRESQL_URL} -path db/migrations down
```
Make sure to check if your database changed as expected in this case as well.
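If you repeat the `\d users` check after migrating down, psql should tell you that the relation is gone; the exact message varies between psql versions, but it will be something like:
```
psql example -c "\d users"
Did not find any relation named "users".
```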
To demonstrate the use of database transactions, let's create another set of migrations by running:
```
migrate create -ext sql -dir db/migrations -seq add_mood_to_users
```
Again, it should create two migration files for us:
- 000002_add_mood_to_users.down.sql
- 000002_add_mood_to_users.up.sql
In Postgres, when we want our queries to be run in a transaction, we need to wrap them with `BEGIN` and `COMMIT` commands.
In our example, we are going to add a column to our database that can only accept enumerable values or NULL.
Migration up:
```
BEGIN;

CREATE TYPE enum_mood AS ENUM (
	'happy',
	'sad',
	'neutral'
);
ALTER TABLE users ADD COLUMN mood enum_mood;

COMMIT;
```
Migration down:
```
BEGIN;

ALTER TABLE users DROP COLUMN mood;
DROP TYPE enum_mood;

COMMIT;
```
Now we can run our new migration and check the database:
```
migrate -database ${POSTGRESQL_URL} -path db/migrations up
psql example -c "\d users"
```
Expected output:
Table "public.users"
Column | Type | Modifiers
----------+------------------------+---------------------------------------------------------
user_id | integer | not null default nextval('users_user_id_seq'::regclass)
username | character varying(50) | not null
password | character varying(50) | not null
email | character varying(300) | not null
mood | enum_mood |
Indexes:
"users_pkey" PRIMARY KEY, btree (user_id)
"users_email_key" UNIQUE CONSTRAINT, btree (email)
"users_username_key" UNIQUE CONSTRAINT, btree (username)
Here is a very simple app running migrations for the above configuration:
```
package main

import (
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// Point migrate at the migrations folder and the tutorial database.
	m, err := migrate.New(
		"file://db/migrations",
		"postgres://postgres:password@localhost:5432/example?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	// Apply all pending up migrations.
	if err := m.Up(); err != nil {
		log.Fatal(err)
	}
}
```
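As a variation on the example above (not part of the original snippet), you may want to treat an already up-to-date database as a non-error and roll back a single step programmatically; a minimal sketch using `migrate.ErrNoChange` and `Steps`, assuming the same `db/migrations` folder and database URL:
```
package main

import (
	"errors"
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	m, err := migrate.New(
		"file://db/migrations",
		"postgres://postgres:password@localhost:5432/example?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}

	// Apply everything that is pending; a database that is already
	// up to date is not treated as an error here.
	if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
		log.Fatal(err)
	}

	// Roll back only the most recently applied migration.
	if err := m.Steps(-1); err != nil {
		log.Fatal(err)
	}
}
```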
You can find details here.
When the schema and role names are the same, you might run into issues if you create this schema using migrations.
This is caused by the fact that the default `search_path` is `"$user", public`.
In the first run (with an empty database) the migrate table is created in `public`.
When the migrations create the `$user` schema, the next run will store a (new) migrate table in this schema (due to the order of schemas in `search_path`) and will try to apply all migrations again (most likely failing).
To solve this you need to change the default `search_path` by removing the `$user` component, so the migrate table is always stored in the (available) `public` schema.
This can be done using the `search_path` query parameter in the URL.
For example, to force the migrations table into the `public` schema you can use:
```
export POSTGRESQL_URL='postgres://postgres:password@localhost:5432/example?sslmode=disable&search_path=public'
```
Note that you need to explicitly add the schema name to the table names in your migrations when you want to modify tables in the non-public schema.
Alternatively, you can add the non-public schema manually (before applying the migrations), if that is possible in your case, and let the tool store the migrations table in this schema as well.
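For illustration, a schema-qualified variant of the first up migration could look like the sketch below; the schema name `app` is a made-up placeholder for your own role/schema name, not something defined elsewhere in this tutorial:
```
BEGIN;

-- 'app' is a hypothetical schema name; replace it with your role/schema name.
CREATE SCHEMA IF NOT EXISTS app;

CREATE TABLE IF NOT EXISTS app.users(
   user_id serial PRIMARY KEY,
   username VARCHAR (50) UNIQUE NOT NULL,
   password VARCHAR (50) NOT NULL,
   email VARCHAR (300) UNIQUE NOT NULL
);

COMMIT;
```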