DevOps and Database Design and Change Management

May 27, 2020

Today's blog post continues my series of posts relating to DevOps and database administration…

The DBA is the information technician responsible for ensuring the ongoing operational functionality and efficiency of an organization’s databases and the applications that access those databases. This means that, traditionally, the DBA designs the database and acts as the custodian of database changes.

That said, the DBA rarely requests changes; instead, changes are requested by programmers and developers as they build systems and respond to business requests for application changes. There are times, though, when the DBA will request changes, such as to address performance issues or to utilize new features or technologies at a system level. At any rate, regardless of who requests the change, the DBA must be involved in the change process to ensure that each change is performed successfully and with no impact on the rest of the database and applications that access it.

In an organization that has embraced DevOps, a shift occurs that places more of the responsibility for changes of all kinds, including database changes, on the developer. However, the DBA still must be involved to oversee, analyze, and approve any database changes… and even some application changes, such as significant new or changed SQL statements that can impact application and system performance.

As with all things in the world of DevOps, it is desirable to automate as much of the change management process as possible to remove manual, error-prone tasks and increase the speed of delivery. But without a tool that automates complex database changes and integrates into the DevOps toolchain, incorporating database changes into application delivery and deployment will remain a slow, mostly manual process.

To effectively make database changes, the DBA needs to consider multiple issues, the most important of which are the appropriateness of the change in terms of the database design and the impact of the change on all other database objects and applications. A developer may request a change that makes sense for their particular piece of an application, but may not have a wider view of the impact of their request. Providing this view is one of the major responsibilities of the DBA in database design and change management.

Additionally, the DBA must analyze each change and determine whether it conforms to standards (for your shop and the industry), how best to make the change, and how to time the change to limit its impact on database availability while it is being made.

The ideal arrangement is for database schema changes to be incorporated into the DevOps toolchain using a tool that allows developers to request changes. Those changes should be analyzed and compared against standards and rules for conformance. Non-compliant changes should automatically be referred back to the developer for modification and resubmission. Compliant changes should be accepted and cause a script to be generated using the most appropriate mechanisms to implement the change. This is a non-trivial activity which, if done properly, can eliminate a lot of manual effort and downtime. The generated script should be presented to the DBA for review and, upon acceptance, implemented.
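To make this concrete, here is a minimal sketch of the kind of automated conformance check such a tool might perform before generating a change script. It is written in Python purely for illustration; the ChangeRequest structure and the two example rules are hypothetical and do not represent any particular vendor's API.

import re
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    requester: str
    ddl: str  # the DDL the developer is asking to have applied

def check_standards(change: ChangeRequest) -> list:
    """Return a list of rule violations; an empty list means the change is compliant."""
    violations = []
    # Example rule: developers may not submit destructive DDL directly; the tool
    # should generate any required drop/recreate logic itself.
    if re.search(r"\bDROP\s+(TABLE|INDEX)\b", change.ddl, re.IGNORECASE):
        violations.append("Destructive DDL must be generated by the tool, not submitted directly.")
    # Example rule: a simple (shop-specific) naming standard for new tables.
    for name in re.findall(r"\bCREATE\s+TABLE\s+(\w+)", change.ddl, re.IGNORECASE):
        if not name.upper().startswith("APP_"):
            violations.append("Table " + name + " does not follow the APP_ naming standard.")
    return violations

# A non-compliant request is referred back to the developer; a compliant one
# proceeds to script generation and DBA review.
request = ChangeRequest("dev1", "CREATE TABLE orders (order_id INT NOT NULL PRIMARY KEY)")
print(check_standards(request))  # ['Table orders does not follow the APP_ naming standard.']

A real tool would apply far richer shop- and DBMS-specific rules, but the flow is the same: reject and return, or accept and generate.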

It is worth mentioning here that today’s major DBMS products do not support fast and efficient database structure changes for all types of change. Each DBMS provides differing levels of support for making changes to its databases, but none easily supports every type of change that might be required. One quick example: try to add a column to the middle of an existing table. To accomplish such a task, the DBA must drop the table and recreate it with the new column in the middle. But what about the data? When the table is dropped, the data is deleted unless the DBA was wise enough to first unload the data. But what about the indexes on the table? Well, they too are dropped when the table is dropped, so unless the DBA knows this and recreates the indexes too, performance will suffer. The same is true for database security: when the table is dropped, all security for the table is also dropped. And this is but one example of how what seems, at first glance, like a simple change can become difficult to implement and manage.
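For illustration, here is what the script behind that “simple” change can look like when spelled out step by step. The table, column, and index names are hypothetical and the DDL is generic; the exact syntax, unload/reload utilities, and authorization statements vary by DBMS.

# The ordered steps a DBA (or a change management tool) must generate to add
# CUST_REGION "in the middle" of a hypothetical CUSTOMER table.
change_script = [
    # 1. Preserve the data before the table is dropped (DBMS unload utility or export).
    "-- unload CUSTOMER data to a work file",
    # 2. Drop the existing table; this also drops its indexes and authorizations.
    "DROP TABLE CUSTOMER;",
    # 3. Recreate the table with the new column in the desired position.
    """CREATE TABLE CUSTOMER (
         CUST_ID     INTEGER      NOT NULL,
         CUST_NAME   VARCHAR(100) NOT NULL,
         CUST_REGION CHAR(3),    -- the new column, added 'in the middle'
         CUST_SINCE  DATE
       );""",
    # 4. Reload the preserved data, supplying a default value for CUST_REGION.
    "-- reload the unloaded data into the new structure",
    # 5. Recreate every index that was dropped along with the table.
    "CREATE UNIQUE INDEX XCUST01 ON CUSTOMER (CUST_ID);",
    # 6. Re-grant the security that was lost when the table was dropped.
    "GRANT SELECT, INSERT, UPDATE ON CUSTOMER TO APPL_ROLE;",
]

Miss step 1 and the data is gone; miss steps 5 or 6 and performance and security suffer, which is exactly why generating these scripts is best left to tooling rather than memory.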

Adding to the difficulty of making schema changes is the fact that most organizations have at least two, and sometimes more, copies of each database. There may be copies of the database at different locations or for different divisions of the company. And at the very least, a test and a production version will exist. But there may be multiple testing environments—for example, to support simultaneous development, quality assurance, unit testing, and integration testing. Each database change will need to be made to each of these copies, as well as, eventually, to the production copy. So, you can see how database change can quickly monopolize a DBA’s time.
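As a simple sketch of why that multiplies the work, the same generated change script has to be applied, and tracked, in every copy of the database. The environment names and the apply_change function below are hypothetical placeholders for whatever connection and tracking mechanism a real tool would use.

# Every copy of the database must receive the same change, and someone (or
# something) must track which copies are in sync.
ENVIRONMENTS = ["dev", "unit_test", "qa", "integration", "prod"]

change_script = ["-- the generated change script for this request"]

def apply_change(env, script):
    # Placeholder: a real tool connects to the target copy, runs the script,
    # and records success or failure so the DBA can see which copies are current.
    print("[" + env + "] applying " + str(len(script)) + " statement(s)")

for env in ENVIRONMENTS:
    apply_change(env, change_script)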

There is No Shortcut

Oftentimes, a simple approach is attempted to incorporate database changes into the DevOps pipeline. But as with most complex problems, the simple solutions are almost always wrong.

One approach that many attempt is simply to add the DDL for the database change into GitHub or whatever source code management solution is being used. But as most DBAs will tell you, that is not really a viable solution. Well, it might work for stored procedures, which are database objects but are actually much more like code than traditional database objects (e.g., tables, indexes). But let’s think about those traditional database objects (DBOs).

DBOs define the structure for the data in the database. Any time you change that structure, the type and storage of the data can change, or the change may even cause data to be deleted. Of course, this is to be avoided, and DBAs know how to avoid it. Doing so can require complex scripts that work one time but will not work the next. Sure, storing the scripts in GitHub is not a bad thing, but it is not going to work the way storing program code there does. Having the script available in a secure, central place (like GitHub) makes it easier for DBAs to review the change history, but a script that makes invasive changes to a database environment is not likely to be used in exactly the same way more than once. And once it is created and certified to work, chances are that the script won’t be updated again, certainly not the way application code is.

Generally speaking, a change management tool is required – and the tool will have all of the storage, search, build, execution, and tracking capabilities built into it. So, the best approach is to integrate an effective database change management solution into your DevOps pipeline, instead of trying to force-fit DDL (or scripts) into the existing process. Most of these tools will provide a repository of the database change scripts that have been built and executed for future reference and examination.

And, of course, we have not discussed all of the nuances of database change management. For example, consider how test data is managed and refreshed in conjunction with changing database structures… there is quite a lot for DBAs to manage.

The Bottom Line

To correctly incorporate database change management into your DevOps toolchain, it is crucial that you embed a robust, time-tested process that is designed to effect database changes… and that requires a tool that both meets the needs of the DBA and works with your DevOps toolchain.

 

© 2020, Mullins Consulting, Inc.