DevOps for the Database
Most talks about DevOps focus on the application’s source code and dismiss the database. In this article, we’ll see how to add the database to the same CI/CD pipelines used for the app.

Why add the database to the CI/CD pipeline?
The objective of DevOps is to create a process for safe, secure and more frequent delivery of the app. By “the app”, we mean the entire app, including the database, because a new app update might also change the database schema. For example, suppose we need to add a Color property to our products. This requires two changes: updating the entity in our app’s code and updating the SQL script for the Product table.
The point is that changes to the app and to the database must be deployed together, during the same deployment process. If we update only the app’s entity, the app will look for the corresponding column in the database, fail to find it, and crash.
A bad practice still common today is that DBAs work in a separate team from developers. They use their own tools and manage the database as if it were their own property. When developers ask them for an update, it takes a long time. It’s not that they are lazy; they have their own process (in most cases not an agile one), and they need to make sure their SQL scripts are well optimised and thoroughly tested.
In addition, DBAs deploy updates to the database manually, bypassing the CI/CD process even when there is one. That is because of the lack of communication with developers.
DBAs and developers (and the Ops) work on the same project, so they need to work as one team: in the same open space, attending the daily stand-up meeting together, committing to the same repository, planning updates and their impacts together and, most of all, delivering together. This is where DevOps processes and practices come into play. We’ll have processes that unite not only developers and operations, but also DBAs (shouldn’t we call it DevDbaOps?).
On one hand, integrating the DBAs into the dev &amp; ops team requires a structural change to the organisation. Agile/DevOps experts can handle that, as they have already faced a similar challenge when integrating dev and ops.
On the other hand come the technical details: the CI/CD process. That is what we’ll explain next, by integrating the DBAs’ work (SQL scripts) into the CI/CD pipelines.
The technical solution for applying DevOps to the database is to add or generate SQL scripts in the CI pipeline and deploy them during the CD.
We have two main options: one using ORM migrations and another using a Database project. We’ll build the sample using SQL Server, ASP.NET Core, EF Core and Azure DevOps, but the same concepts can be applied to other similar technologies.
1. ORMs can enable database migrations transparently
ORMs like Entity Framework (EF), Doctrine and Hibernate (with Liquibase or Flyway) have built-in features for code-first automatic migrations. They can detect changes to the entity objects, generate the corresponding SQL script and deploy it to the database.
Automatic migrations can be enabled in EF with the CLI command ‘dotnet ef migrations add AddedColor’ and by adding the following code at startup: Context.Database.Migrate();
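As a minimal sketch, here is what calling Migrate() at startup might look like in an ASP.NET Core Program.cs (AppDbContext is an illustrative name for your registered EF Core DbContext):

```csharp
// Program.cs — apply any pending migrations when the app starts.
var builder = WebApplication.CreateBuilder(args);
// ... services registered here, including the DbContext ...
var app = builder.Build();

using (var scope = app.Services.CreateScope())
{
    // AppDbContext is an assumed name; use your own DbContext type.
    var context = scope.ServiceProvider.GetRequiredService<AppDbContext>();
    context.Database.Migrate(); // runs pending migrations against the database
}

app.Run();
```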
EF will create a Migrations folder inside our app’s source code, then create migration objects describing the new change through an Up method written using the ORM’s objects. Note in the following picture the AddColumn method, which EF will translate into an ALTER TABLE SQL operation during deployment.

Note also the Down method, which reverts the changes made by the Up method. That is useful for rollback scenarios.
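For the Color example above, the generated migration class would look roughly like this (table and column names depend on your model):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

public partial class AddedColor : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Translated by EF at deployment time into ALTER TABLE Products ADD Color ...
        migrationBuilder.AddColumn<string>(
            name: "Color",
            table: "Products",
            nullable: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // Reverts the Up method — used in rollback scenarios.
        migrationBuilder.DropColumn(name: "Color", table: "Products");
    }
}
```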
With that in place, you don’t need to change anything in your CI/CD pipelines: the ORM will generate and deploy the migration SQL scripts transparently. Our Build/CI pipeline in Azure DevOps will look like the following: all the tasks restore, build, unit test and publish the app’s source code; nothing is specific to the database. Note the last task, which publishes the ARM template created by the Ops to provision the infrastructure needed to run the app on Azure.
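In YAML form, such a Build/CI pipeline might be sketched as follows (the task names are the standard Azure DevOps ones; project and template paths are illustrative):

```yaml
# Build/CI pipeline sketch — restore, build, test and publish the app,
# then publish the Ops-created ARM template as a build artifact.
steps:
  - task: DotNetCoreCLI@2
    inputs: { command: 'restore', projects: '**/*.csproj' }
  - task: DotNetCoreCLI@2
    inputs: { command: 'build', projects: '**/*.csproj' }
  - task: DotNetCoreCLI@2
    inputs: { command: 'test', projects: '**/*Tests.csproj' }
  - task: DotNetCoreCLI@2
    inputs: { command: 'publish', publishWebProjects: true }
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: 'infra/azuredeploy.json'   # illustrative ARM template path
      ArtifactName: 'arm'
```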

And our Release/CD pipeline will look like the following, where we just publish the generated package to Azure App Service. The migrations will run when the app restarts. Notice also the first task, which deploys the ARM template to Azure.

However, this approach is well suited only for small projects where there is no DBA, the database schema is fairly simple and, most of all, we give full control over the database to the ORM and trust it to manage it!
2. Running the migration script generated by the ORM
Well, we can get back some control: we’ll still let the ORM generate the SQL scripts, but then we’ll read, edit, validate and deploy them ourselves.
For that, we run a command line to intercept the generated SQL script: ‘dotnet ef migrations script -o outputfile.sql’

We then upload the generated script to the drop/artifact folder so we can access it during the CD pipeline. That is done using the Publish Artifact task. When the build is done, we can view and edit the generated scripts.
Now, from the CD pipeline, we deploy the script directly to the database hosted on Azure. That is the role of the Azure SQL Publish task, which uses the sqlcmd CLI tool under the hood.
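As a sketch, this CD step could look like the following in YAML (the SqlAzureDacpacDeployment task supports publishing a plain SQL script; the connection details are placeholders):

```yaml
# Release/CD step sketch — run the generated migration script against Azure SQL.
steps:
  - task: SqlAzureDacpacDeployment@1
    inputs:
      azureSubscription: '<service-connection>'      # placeholder
      ServerName: '<server>.database.windows.net'    # placeholder
      DatabaseName: '<database>'                     # placeholder
      SqlUsername: '$(dbUser)'
      SqlPassword: '$(dbPassword)'
      deployType: 'SqlTask'   # deploy a .sql script instead of a DacPac
      SqlFile: '$(Pipeline.Workspace)/drop/outputfile.sql'
```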

3. Using the Database project
Still, for larger projects, DBAs create a separate project type for managing the database. These projects contain, generate and validate SQL queries. We are talking here about the Database project, which can be opened from Visual Studio, but also from SQL Server Management Studio (SSMS), the favourite tool of DBAs. With this approach, the DBA has full control over generating the SQL scripts.

Now the SQL scripts are written manually by DBAs and no longer managed by the ORM. We are fine with that, as we believe DBAs are smarter than ORMs, especially at preventing and resolving conflicts.
We just need to build the project to generate a DacPac file containing the new schema built from the provided scripts. Note in the following picture that we can build the database project in a separate agent/phase, as it doesn’t depend on the app’s projects. That is useful if we want to run them in parallel.
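Building the database project in the pipeline can be sketched as follows (VSBuild is the usual task for a .sqlproj; the paths are illustrative):

```yaml
# Build the Database project to produce a DacPac, then publish it as an artifact.
steps:
  - task: VSBuild@1
    inputs:
      solution: 'Database/Database.sqlproj'   # illustrative project path
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: 'Database/bin/Release/Database.dacpac'
      ArtifactName: 'dacpac'
```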

For the CD pipeline, we’ll deploy the DacPac to the database. We’ll use the same task as before, Azure SQL Database Deployment, but instead of publishing a SQL script, we’ll tell it to publish the DacPac.
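This is the same deployment task as in the previous option, pointed at the DacPac instead of a script (connection details remain placeholders):

```yaml
# Release/CD step sketch — publish the DacPac to Azure SQL.
steps:
  - task: SqlAzureDacpacDeployment@1
    inputs:
      azureSubscription: '<service-connection>'      # placeholder
      ServerName: '<server>.database.windows.net'    # placeholder
      DatabaseName: '<database>'                     # placeholder
      SqlUsername: '$(dbUser)'
      SqlPassword: '$(dbPassword)'
      deployType: 'DacpacTask'   # publish the DacPac rather than a .sql file
      DacpacFile: '$(Pipeline.Workspace)/dacpac/Database.dacpac'
```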

Problems with these approaches
All the above approaches will work in most cases. But in some edge cases, they can cause serious problems. When we have a complex schema update, the script might stop running in the middle, meaning it deployed some changes but not all of them. Imagine the case where we add two columns to two different tables that depend on each other, for instance through foreign keys. In such cases, we need to deploy all the changes or none of them.
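One mitigation is to wrap the dependent changes in a single transaction, so either all of them apply or none. A T-SQL sketch with illustrative table and column names:

```sql
-- Apply both dependent changes atomically: either both columns
-- and the foreign key are created, or nothing is.
BEGIN TRY
    BEGIN TRANSACTION;

    ALTER TABLE Products   ADD CategoryId INT NULL;
    ALTER TABLE Categories ADD DefaultColor NVARCHAR(50) NULL;

    -- The foreign key that makes the two changes interdependent.
    ALTER TABLE Products ADD CONSTRAINT FK_Products_Categories
        FOREIGN KEY (CategoryId) REFERENCES Categories(Id);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW; -- surface the error so the pipeline fails visibly
END CATCH;
```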
It’s a good practice to have a Rollback strategy for both the app and database updates.
But for this specific case, the already-created revert scripts might not work: they expect to roll back all the changes, not just some of them. So a DBA will end up connecting directly to the database to fix the issue manually.
On the application side, rollback strategies are easier, quicker and more efficient. One solution is to delete the v2 package and redeploy v1. Another is to deploy v2 on another application server and route traffic to it only once the deployment succeeds. These solutions are not applicable to the database.
It is to avoid such problems that one of the DevOps practices is to release small deployments more frequently rather than big ones less frequently. We already apply this principle in Agile, so why not apply it to the database too?
This principle reduces these problems but doesn’t avoid them all. The complementary solution is to test the migrations on a PreProd environment with a database schema and data similar to Prod’s. This way, we can detect earlier any migration failures that might happen in Prod.
Unit tests and integration tests for the database
It’s a common practice to have unit and integration tests for the application, as these tests raise the quality of the app and give more confidence to deploy more frequently. The same types of tests can also be applied to the database.
Yes, we can develop unit tests and integration tests for the database.
Database unit tests typically test the stored procedures. And they have a dedicated framework too, one that uses Assert: it is tSQLt.
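A tSQLt unit test is a regular stored procedure created inside a test class, with assertions (the schema, table and test names below are illustrative):

```sql
-- Create a test class (a schema) and a test inside it.
EXEC tSQLt.NewTestClass 'ProductTests';
GO
CREATE PROCEDURE ProductTests.[test product count matches inserted rows]
AS
BEGIN
    -- Isolate the table: tSQLt replaces it with an empty fake copy,
    -- so the real data is never touched.
    EXEC tSQLt.FakeTable 'dbo.Products';
    INSERT INTO dbo.Products (Name) VALUES ('Bike'), ('Car');

    DECLARE @actual INT = (SELECT COUNT(*) FROM dbo.Products);

    EXEC tSQLt.AssertEquals @Expected = 2, @Actual = @actual;
END;
GO
-- Run all tests in the class.
EXEC tSQLt.Run 'ProductTests';
```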
For the app’s integration tests, we use mocks or an in-memory database so as not to change the real data. It would be really valuable to run the integration tests directly against a real database: it makes the tests more realistic and helps detect more failures, especially those related to the data itself rather than the schema. Think of cases like foreign keys with null values.
The problem here is that every test might change or even delete data required by the next test. Cave Tester addresses this issue: it creates a database snapshot before every test. We can then change the data as we want, even delete all of it; after each test, it restores the snapshot, so we end up with the exact same data we started with.
Cave Tester ensures we can change the data during the tests and still end up with the exact same data we started with. No data loss or change.
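The underlying mechanism can be sketched in plain T-SQL using SQL Server database snapshots (this is the general technique, not Cave Tester’s actual API; the database name and file path are placeholders):

```sql
-- Before the test: create a snapshot of the test database.
CREATE DATABASE ShopDb_Snapshot
ON (NAME = ShopDb, FILENAME = 'C:\Snapshots\ShopDb.ss')  -- placeholder path
AS SNAPSHOT OF ShopDb;

-- ... the test runs and freely modifies or deletes data ...

-- After the test: revert the database to the snapshot, then drop it.
USE master;
RESTORE DATABASE ShopDb FROM DATABASE_SNAPSHOT = 'ShopDb_Snapshot';
DROP DATABASE ShopDb_Snapshot;
```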
Conclusion
Applying DevOps to the database is really valuable: it helps you innovate by deploying faster and more securely. It still has its challenges, but small steps make a difference. We recommend starting by encouraging the DBAs to add their scripts to the app’s repository. This way they become dependent on the developers, and thus start talking and collaborating with them.
Before writing this article, we created the following video series that walks through more details on building and releasing the database.