Home
First of all, you should install .NET Core 2.1 or 3.1, or .NET 5.0, 6.0, or 7.0. You can choose either the runtime or the SDK and select the appropriate version for your operating system.
From a console, run the following command:
dotnet tool install --global dbup-cli
The tool is now available anywhere on your system. You can check this by typing:
dbup --version
If the tool is already installed, you can update it to the latest version:
dotnet tool update -g dbup-cli
To uninstall the tool, type the following command:
dotnet tool uninstall -g dbup-cli
As an alternative, you can download dbup-cli.exe from the Releases page. Since version 1.2.0, the tool is also available as a standalone utility with no dependencies, built against .NET Framework 4.6.2.
Go to an empty folder and put one or more SQL scripts there. Each script should have the ".sql" extension. They can contain any SQL instructions supported by your DBMS. The scripts are executed one by one in alphabetical order, so you should choose a script naming convention first. For example, let's add two files. You can leave them empty for learning purposes.
001.sql
002.sql
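As a purely hypothetical illustration of what such a script could hold (the walkthrough works just as well with empty files, and the syntax below assumes SQL Server, as in the rest of this guide), 001.sql might contain:

-- 001.sql: hypothetical content; any SQL supported by your DBMS is fine
CREATE TABLE Customers (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
);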
The next step is to create a configuration file. Open a console, go to the scripts folder, and run the init command:
dbup init
This command creates a default configuration file for migrations named "dbup.yml". Open the file to take a look. It contains something like this:
dbUp:
  version: 1
  provider: sqlserver
  connectionString: $CONNSTR$
  # ... other options
You can use one of the supported DB providers or continue with sqlserver as in the example.
If you are using MS SQL Server as a database server, you can simply set an environment variable with a connection string to the database to upgrade, without touching the config itself. For example:
SET CONNSTR=Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=dbup;Integrated Security=True
Alternatively, you can put the connection string into the configuration file directly if you prefer.
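For example, a config with the connection string inlined might look like this sketch (the server and database are just the LocalDB instance used throughout this walkthrough; substitute your own):

dbUp:
  version: 1
  provider: sqlserver
  connectionString: Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=dbup;Integrated Security=True
  # ... other options

Let's run a migration and see what happens: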
> dbup upgrade
Cannot open database "dbup" requested by the login. The login failed.
Login failed for user 'yourname.'
You can see that we used the upgrade command and it reported that the database could not be opened. That is because the database has not been created yet. To create it, simply run:
> dbup upgrade --ensure
Master ConnectionString => Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=master;Integrated Security=True;Password=
Created database dbup
Beginning transaction
Beginning database upgrade
Checking whether journal table exists..
Journal table does not exist
Executing Database Server script '001.sql'
Checking whether journal table exists..
Creating the [SchemaVersions] table
The [SchemaVersions] table has been created
Executing Database Server script '002.sql'
Upgrade successful
Voila! The engine created the database for us and then executed our scripts. The --ensure option allows the engine to create a new database if it doesn't exist. You can find more information here. Try to run the same command one more time:
> dbup upgrade
Beginning transaction
Beginning database upgrade
Checking whether journal table exists..
Fetching list of already executed scripts.
No new scripts need to be executed - completing.
As you can see, no scripts have been executed. At any time, you can use the status command to check your database's status:
> dbup status
Database is up-to-date. Upgrade is not required.
Add one more script file, 003.sql, and run the status command again to see what happens:
> dbup status
Database upgrade is required.
You have 1 more scripts to execute.
If you want detailed information, you can use the additional options -x and -n:
> dbup status -x -n
Database upgrade is required.
You have 1 more script(-s) to execute.
These scripts will be executed:
003.sql
Already executed scripts:
001.sql
002.sql
It may happen that you have already run 003.sql on the database and don't want to run it again as part of your migration process. You can mark scripts as executed without actually executing them with the mark-as-executed command. Just replace upgrade with mark-as-executed:
> dbup mark-as-executed
Beginning transaction
Checking whether journal table exists..
Fetching list of already executed scripts.
Checking whether journal table exists..
Marking script 003.sql as executed
Script marking successful
> dbup status
Database is up-to-date. Upgrade is not required.
The last thing left to see: suppose you want to re-create your database. You can use the drop command followed by upgrade with the --ensure option. Let's do it:
> dbup drop
Master ConnectionString => Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=master;Integrated Security=True;Password=
Dropped database dbup
> dbup upgrade --ensure
Master ConnectionString => Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=master;Integrated Security=True;Password=
Created database dbup
Beginning transaction
Beginning database upgrade
Checking whether journal table exists..
Journal table does not exist
Executing Database Server script '001.sql'
Checking whether journal table exists..
Creating the [SchemaVersions] table
The [SchemaVersions] table has been created
Executing Database Server script '002.sql'
Executing Database Server script '003.sql'
Upgrade successful
That's all, the getting-started part is finished. Let's look at the more elaborate options.
You can use one of the following supported providers:

- sqlserver - MS SQL Server
- azuresql - AzureSQL
- postgresql - PostgreSQL
- mysql - MySQL
- cockroachdb - CockroachDB

You specify the provider in the provider configuration option - see the next section.
Let's take a closer look at the configuration file's content and what else we can do with it. Just after the init command completes, it looks like this:
dbUp:
  version: 1                  # should be 1
  provider: sqlserver         # DB provider: sqlserver, azuresql, postgresql, mysql, cockroachdb
  connectionString: $CONNSTR$ # Connection string to the DB. For example, "Data Source=(localdb)\MSSQLLocalDB;Initial Catalog=dbup;Integrated Security=True" for sqlserver
  connectionTimeoutSec: 30    # Connection timeout in seconds
  disableVars: no             # yes / no (default). If yes, variable substitution is disabled
  transaction: None           # Single / PerScript / None (default)
  scripts:
    - folder:                 # absolute or relative (to this file) path to a folder with *.sql files
      subFolders: no          # yes / no (default)
      order: 100              # script group order, default 100
      runAlways: no           # yes / no (default)
      encoding: utf-8         # scripts' encoding, default utf-8
      filter:                 # Wildcard or regex filter. A regex should be surrounded by forward slashes, e.g. /\d2\.sql/. By default, all scripts are included
      matchFullPath: no       # yes / no (default). If yes, the filter is applied to the full file path
  naming:
    useOnlyFileName: no       # Use only the file name as the script name. No by default
    includeBaseFolderName: no # Start the script name with the base folder name. No by default
    prefix:                   # Add a prefix to the script name. Empty string by default
  vars:                       # Variable substitution. You can use these variables in your scripts as $variable_name$
    # Var1: Value1
    # Var2: Value2
  # journalTo:                # Use 'journalTo: null' if you want to execute scripts every time an upgrade is run,
  #   schema: "schemaName"    # or specify a custom schema name
  #   table: "tableName"      # and a custom table name to store DB schema versions. By default, the table name is "SchemaVersions"
As mentioned earlier, you can create this file with the init command:
dbup init
In this case, the file will be created with the name 'dbup.yml' in the current directory, or you can specify the file name:
dbup init path/to/your/file.yml
The path can be absolute or relative to the current directory.
Let's look at the top-level options in the file. The only required ones are:

- dbUp - the root of the configuration;
- version - should be 1 for now;
- provider - one of the supported providers;
- connectionString - a provider-specific connection string to the database. You can use environment variables here, either for the whole string as in the example or for a part of it (e.g. 'Data Source=myserver;Initial Catalog=mydb;Persist Security Info=True;User ID=user;Password=$PWD$').
All other parameters, including the scripts section, are optional and have default values.
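For illustration, a minimal configuration containing only the required options might look like this sketch (SQL Server with the $CONNSTR$ environment variable from the earlier example):

dbUp:
  version: 1
  provider: sqlserver
  connectionString: $CONNSTR$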
There is only one option here:

- connectionTimeoutSec - connection timeout in seconds.

The default value is 30, which means that if any command or query runs longer than 30 seconds, it is interrupted.
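For example, if you expect long-running migration scripts, you could raise the timeout; the value below is only an illustration:

dbUp:
  # ... other options
  connectionTimeoutSec: 300 # allow up to 5 minutes per command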
You can choose one of three modes by setting the value of the transaction option (see the example after the list):

- Single - All scripts are executed in one transaction scope. If one of the scripts fails, all changes are rolled back. Be careful with this mode because not all instructions can be rolled back; it depends on the database provider.
- PerScript - A new transaction is created for each script.
- None - Used by default. No transaction is used.
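For instance, to wrap every script in its own transaction you would set the option like this (a sketch; the rest of the file stays unchanged):

dbUp:
  # ... other options
  transaction: PerScript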
By default, the tool finds only the *.sql files in the current directory, but you can use more complicated scenarios. You can have more than one group of scripts; each group has its own section under scripts. You can add as many groups as you want. To verify that the scripts will actually be executed with your settings, use the status command, as we did earlier.
Let's continue our example. Add a new script named '004.sql', then create a new folder named views and create a script named '001_views.sql' inside it. Add a new script group to your config file. It looks like this now:
# other lines are omitted
scripts:
  - folder:           # absolute or relative (to this file) path to a folder with *.sql files
    subFolders: no    # yes / no (default)
    order: 100        # script group order, default 100
    runAlways: no     # yes / no (default)
    encoding: utf-8   # scripts' encoding, default utf-8
    filter:           # Wildcard or regex filter. A regex should be surrounded by forward slashes, e.g. /\d2\.sql/. By default, all scripts are included
    matchFullPath: no # yes / no (default). If yes, the filter is applied to the full file path
  - folder: views     # <<<<< Add this line. Notice the leading dash
Let's see what we've got:
> dbup status -n
You have 2 more scripts to execute.
These scripts will be executed:
001_views.sql
004.sql
Ok, it works as expected. If you forget to add the folder option to your configuration file, you won't see 001_views.sql among the scripts to execute. You can play around with that. However, now we have another problem: we see the 001_views.sql script before 004.sql. What if we want all scripts from the views folder to always be executed after all other scripts from our root folder?
Here is where group order comes into play. Each group has an order option. By default, it equals 100. Let's solve the problem by adding order: 200 to your configuration file:
# other lines are omitted
scripts:
  # other lines are omitted
  - folder: views # <<<<< Add this line. Notice the leading dash
    order: 200
That is the result:
> dbup status -n
Database upgrade is required.
You have 2 more scripts to execute.
These scripts will be executed:
004.sql
001_views.sql
Now, our scripts will be executed in the proper order. That is because the DbUp engine gathers all scripts into a flat list regardless of their directories. Each script has an order (given by its group) and a file name without a path. The engine sorts all the scripts by the order first and then by the file name.
The last thing you should know about using folders is that you can use an absolute or relative path to a folder. A relative path is counted from the configuration file's folder. Notice that sub-folders are not considered by default; add the subFolders: yes option if you want them to be included.
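As a sketch, a group that also picks up scripts from its sub-folders could look like this (the folder name and order are illustrative):

scripts:
  # other lines are omitted
  - folder: views
    subFolders: yes
    order: 200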
So far so good, but what if we want to select only some of the scripts based on their names instead of a folder? We can do this with the filter option inside each of the groups.
To see how it works, add a new folder named procs and put a couple of scripts into it: 001_procs.sql and 002_triggers.sql. Then add a new group of scripts to your config file with this new folder, but this time include the filter option:
scripts:
  # other lines are omitted
  - folder: views
    order: 200
  - folder: procs # <<<<< Add this line
    order: 300
    filter: "*_procs.sql"
Don't forget to add the order option, and note the double quotes, which are needed because of the asterisk at the beginning of our filter. Run status to see what's going on:
> dbup status -n
Database upgrade is required.
You have 3 more scripts to execute.
These scripts will be executed:
004.sql
001_views.sql
001_procs.sql
We don't see the 002_triggers.sql script, and that is exactly what we wanted to achieve. Of course, you can place these scripts in any folder, including the root one.
Let's take a closer look at the value of the filter option. You can use either a wildcard filter or a regular expression. To use a regular expression, surround the filter with forward slashes. A file name is matched as a whole string without the file path, but with the extension, e.g. /\d{3}_.+/. If you want to match the full path, add the matchFullPath: yes option to the group.
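As a sketch, a group that selects scripts whose names match a regular expression could look like this (the pattern is illustrative):

scripts:
  # other lines are omitted
  - folder: procs
    order: 300
    filter: /\d{3}_procs\.sql/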
Please note that you can't use scripts with an extension other than *.sql, regardless of the filter you use. That is because the DbUp engine selects only *.sql files first, and only then applies your filter. However, your filter should still match the extension. For example, the /\d{3}_.+/ and /\d{3}_.sql/ filters are correct, but the filter /\d{3}_.pl/ will never match any script even if it exists in the folder. Be aware of that.
Usually, it is a good idea to have a distinct folder with idempotent scripts that create or update views, stored procedures, and so on, and to run them every time the dbup upgrade command is executed. In this case, you can add the runAlways: yes option to the group.
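For instance, the views group from the earlier example could be made to run on every upgrade like this (a sketch; adjust the folder and order to your layout):

scripts:
  # other lines are omitted
  - folder: views
    order: 200
    runAlways: yes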
You can use any supported encoding for your script files, but I recommend using 'utf-8' (the default) or 'utf-16'. In any case, you can add the encoding: <your-encoding> option to the group, e.g. encoding: utf-16. You can see a list of supported encodings here.
Requires version 1.4.0+.
DbUp saves the scripts that have already been executed into a database table. Each script gets a name that is a combination of the script file name and its subfolder name. For instance, consider the following folder structure:
RootFolder
|--SubFolder1
| |--001.sql
|--SubFolder2
| |--011.sql
|--dbup.yml
After executing these scripts, the script versions table contains two scripts with the names:
- SubFolder1.001.sql
- SubFolder2.011.sql
This is the default behavior, and it is good enough for most cases. However, sometimes you need more control over script names. The configuration file contains a naming section where you can adjust this behavior:
naming:
  useOnlyFileName: no       # Use only the file name as the script name. No by default
  includeBaseFolderName: no # Start the script name with the base folder name. No by default
  prefix:                   # Add a prefix to the script name. Empty string by default
Let's see how these options influence the script names in the example above:
| Naming options | Result script names |
|---|---|
| useOnlyFileName: no, includeBaseFolderName: no, prefix: | SubFolder1.001.sql, SubFolder2.011.sql |
| useOnlyFileName: yes, includeBaseFolderName: no, prefix: | 001.sql, 011.sql |
| useOnlyFileName: no, includeBaseFolderName: yes, prefix: | RootFolder.SubFolder1.001.sql, RootFolder.SubFolder2.011.sql |
| useOnlyFileName: yes, includeBaseFolderName: yes, prefix: | RootFolder.001.sql, RootFolder.011.sql |
| useOnlyFileName: no, includeBaseFolderName: yes, prefix: prefix_ | prefix_RootFolder.SubFolder1.001.sql, prefix_RootFolder.SubFolder2.011.sql |
As you can see, all these options work together.
You can use variables in your scripts, which are substituted with their real values. Suppose we have a script with the following content:
-- 005.sql
print '$message$'
You should enclose the variable name between '$' signs when using it in a script. Each variable you use should be declared in your configuration file:
vars:
  message: Here you can write what you want to
Run dbup upgrade -v detailed to see the result:
> dbup upgrade -v detailed
Beginning transaction
Beginning database upgrade
Checking whether journal table exists..
Fetching list of already executed scripts.
Executing Database Server script '005.sql'
Checking whether journal table exists..
Here you can write what you want to
Upgrade successful
Note! To see the script output in the console, don't forget to add the -v detailed command-line option.
There are cases when you don't want variables to be substituted. For example, you may use PostgreSQL scripts with $body$ and $function$ keywords, or it may be necessary for security reasons. You can suppress variable substitution by setting the disableVars option to yes.
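In that case, the relevant part of the configuration is simply (a sketch):

dbUp:
  # ... other options
  disableVars: yes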
What if you want to use environment variables in your scripts? You can't do this directly, but you can use them in your configuration file. Let's look at an example:
vars:
  message: $env_var$
You can use environment variables as values for the variables declared in the configuration file. Create a new script with the same content as in the example below:
-- 005.sql
print '$message$'
Set the environment variable 'env_var' and run the migration with detailed logging to see the result:
> SET env_var=Hello from the environment!
> dbup upgrade -v detailed
Beginning transaction
Beginning database upgrade
Checking whether journal table exists..
Fetching list of already executed scripts.
Executing Database Server script '005.sql'
Checking whether journal table exists..
Hello from the environment!
Upgrade successful
Note. If you are wondering why environment variables can't be used directly in the scripts, this is for security reasons: only the variables declared in the configuration can be used, so no variable can be pulled from the environment inadvertently or intentionally.
You can use a '.env' file (or several files) to store environment variables' values. The format is simple: each line contains a variable name and value separated by the '=' sign, e.g.:
VAR1=VALUE1
VAR2=VALUE2
DbUp-Cli reads environment variables from all of the following sources, in this order:
- OS (process) environment variables;
- '.env' in the current folder;
- '.env.local' in the current folder;
- '.env' next to the configuration file, whether the default 'dbup.yml' or a file specified via the command line;
- '.env.local' next to the configuration file;
- additional files specified on the command line. In this case, use the '-e' or '--env' option to specify one or more files; each of them can have any name:
dbup upgrade -e "file.1.env" "relative-path/file.2.txt" "absolute-path\file.3.txt"
A variable from the OS environment is overridden by a variable with the same name declared in the '.env' file in the current directory, which in turn is overridden by the '.env' file placed next to the configuration file, and finally by the files 'file.1.env', 'file.2.txt', and so on; the latter are applied in the order they appear on the command line.
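As a hypothetical illustration of that order, suppose CONNSTR is set to 'from-os' in the OS environment, the '.env' file in the current folder contains CONNSTR=from-dotenv, and a file extra.env passed on the command line contains CONNSTR=from-extra:

> dbup upgrade -e "extra.env"

In this run, $CONNSTR$ resolves to 'from-extra' because files passed with -e are applied last; without the -e option it would resolve to 'from-dotenv'.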
Typically, you don't need to specify a journal table name. By default, it is named "SchemaVersions" and is created automatically in the default schema. The default schema differs between DB providers:
- SQL Server and AzureSQL: dbo
- PostgreSQL: public
- MySQL: the same as the DB name
Be careful when specifying the schema name: it is not created automatically, so you should use an existing schema only.
You can change the defaults by uncommenting the journalTo option in your configuration file, for example:
journalTo:                 # Use 'journalTo: null' if you want to execute scripts every time an upgrade is run,
  schema: "dbo"            # or specify a custom schema name
  table: "MyCustomJournal" # and a custom table name to store DB schema versions. By default, the table name is "SchemaVersions"