Using Ad-Hoc Scripts in Your Automated Database Deployment Pipeline

[Illustration: an Octopus worker deploying an ad-hoc SQL script]

Automating database deployments is a quantum leap for continuous delivery. I can't believe the number of problems solved by automating database deployments, whether it's adding a new table, modifying a stored procedure, or creating an index. Whatever the change, I no longer have to determine the delta between environments.

Despite all the advantages, one common scenario keeps coming up: running ad-hoc queries on the database server. The most common use case I've seen is fixing data. Typically, the data gets into a strange state when a user does something unexpected. In some cases, the root cause won't be fixed (it doesn't happen often enough), or it won't be fixed for another week or so, but the data needs fixing right now.

When I've seen this scenario in the past, the process was:

  1. A developer creates the script to fix the data.
  2. They send that script to a DBA or somebody who has the necessary rights to change data.
  3. The person who has the rights runs the script.
  4. The developer is notified the script was run.

This process has a lot of flaws. At some point in my career, I've been either the developer or the person running the script, and it's not an enjoyable process:

  1. The person running the script is not an expert in the system. The vast majority of the time, they only give the script a brief glance before running it.
  2. The people who have the necessary rights could've gone home for the day, gone out to lunch, or be in a meeting. The script might not be run for several hours. In some cases, the data must be fixed right away.
  3. Notifying the developer is a manual process, meaning the script might have been run, but the notification hasn't been sent yet.
  4. Most companies don't give junior developers rights to make changes to production. The people who can run the script have other, and frankly, more important responsibilities. They might be deeply focused on something, and being interrupted breaks their flow.
  5. If the requests are done via email or Slack, nothing is audited, and email is where documentation goes to die.

Octopus Deploy can do so much more than deploy software. A lot of new functionality has been added to make Octopus Deploy a more complete DevOps tool. In this post, I walk you through a process for automating ad-hoc queries.

My reason for using Octopus Deploy (aside from the fact I work here) is that it provides the following for this process:

  • Auditing: Octopus Deploy can tell you who made the request, who approved the request, and when all of this happened.
  • Artifacts: Using the artifact functionality built into Octopus Deploy, it's possible to capture and store the exact script that was run; however, if someone changes the script after the fact on the file share, there is no way to know.
  • Approvals: In some cases, it's important to have another set of eyes look at the script. Octopus Deploy can be set up to conditionally approve scripts based on a set of criteria.
  • Automation: No more manually sending emails. No more manually sending confirmations. No more opening up SSMS and running the script.
  • Repeatable: The same process is used across all environments to run the scripts.

Use cases

For the purposes of this blog post, here are the use cases:

  • As a developer, I need to run an ad-hoc query to add an index to see if that resolves a performance issue. If it does, then add that index into the database definition and push it through all environments.
  • As a DBA, I need to run an ad-hoc query to create a SQL Login.
  • As a support engineer, I need to run an ad-hoc query to grant select rights to a developer.
  • As a business analyst, I need to clean up a data issue for a user.

Requirements

With the use cases in mind, here are the requirements for the process:

  • Octopus Deploy.
  • No source control. A lot of DBAs, support engineers, and business analysts are not familiar with source control tooling.
  • Automatic. When the scripts are ready, they should be run within five minutes without having to fill out a form or notify anyone.
  • Analysis of the script. If the script contains certain keywords, a human should review it prior to running it.
  • Works for any environment. We want to encourage people to run this in any environment, even dev.

Setup

Tentacles

Our database deployment documentation recommends installing Tentacles on a jump box that sits between Octopus Deploy and the database server. When using integrated security, those Tentacles run under service accounts that have permission to handle deployments. These Tentacles handle normal deployments.

You have a few options for setting up an ad-hoc process and its permissions:

  1. Continue to use the deployment Tentacles, but give them elevated rights to perform additional tasks.
  2. Create a new set of service accounts with elevated permissions, and create new Tentacles for those new service accounts.
  3. A combination of option 1 and option 2: create two pipelines, one for data fixes and the other for everything else. The data fixes run through the regular deployment targets, while the other changes run through a new set of deployment targets with new service accounts.

Lifecycle

This process allows people to run scripts directly in production. A default lifecycle of dev to test to pre-production to production doesn't make much sense here. Create a new lifecycle that allows deployments to any environment. I called mine Script Lifecycle:

You achieve this by creating a single phase and adding all the environments to that phase:

Projects and process

For this process, I created a number of step templates. I don't want to submit them to the community library because they aren't generic enough, but you can find them in our GitHub samples repository.

Ingesting scripts

I will write a database script for this use case:

As a business analyst, I need to clean up a data issue for a user.

A couple of questions come to mind:

  1. Q: What environment? A: Production.
  2. Q: What SQL Server? A: 127.0.0.1.
  3. Q: What database on the SQL Server? A: RandomQuotes_Dev.
  4. Q: Who is submitting the script? A: Bob Walker.

Okay, we know the answers, but how do we get them from our brains into Octopus Deploy? For this, I will use a YAML file called MetaData that contains all that information:

    ---
    DatabaseName: RandomQuotes_Dev
    Server: 127.0.0.1
    Environment: Dev
    SubmittedBy: Bob.Walker@octopus.com
    ...

The next question is how that YAML file and the SQL scripts will be sent to Octopus Deploy to run. To make this as easy as possible for the people submitting a script, I make use of a hot folder. I wrote a PowerShell script (sketched after the list) which will:

  1. Look for any new directories in the hot folder.
  2. Use Octo.exe to package the folder.
  3. Push the package to Octopus Deploy.
  4. Create a new release.
  5. Use the MetaData.yaml file to determine which environment to deploy to.
  6. Move the folder to a processed location so the scripts aren't run again.
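
Here is a minimal sketch of what that watcher script might look like. The folder paths, package ID, project name, and server URL are assumptions for illustration; octo.exe's pack, push, and create-release commands are the real CLI, but verify the arguments against your version:

    # Sketch of the hot folder watcher. Paths and names are illustrative assumptions.
    $hotFolder = "C:\AdHoc\Hot"
    $processedFolder = "C:\AdHoc\Processed"
    $octopusUrl = "https://your-octopus-server"    # assumption: your server URL
    $apiKey = $env:OCTOPUS_API_KEY

    Get-ChildItem -Path $hotFolder -Directory | ForEach-Object {
        $folder = $_.FullName

        # Pull the target environment out of the metadata file
        $metadata = Get-Content (Join-Path $folder "MetaData.yaml") -Raw
        $environment = [regex]::Match($metadata, "Environment:\s*(\S+)").Groups[1].Value

        # Package the folder and push it to the Octopus built-in feed
        $version = (Get-Date).ToString("yyyy.MM.dd.HHmmss")
        & octo pack --id "AdHocScripts" --version $version --basePath $folder
        & octo push --package "AdHocScripts.$version.nupkg" --server $octopusUrl --apiKey $apiKey

        # Create a release and deploy it to the environment from the metadata
        & octo create-release --project "AdHoc Queries" --version $version `
            --deployTo $environment --server $octopusUrl --apiKey $apiKey

        # Move the folder so it isn't processed again
        Move-Item $folder (Join-Path $processedFolder "$($_.Name)_$version")
    }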

I could set up a scheduled task to run on the server, but there is no real visibility into that task. If it starts failing, I won't know until I RDP onto the server.

Rather than go through that nightmare, I set up a new project in Octopus Deploy called AdHoc Queries Build Database Package. It has a single step in its process: run the PowerShell script that builds the database package. Make note of the lifecycle; it only runs on a dummy environment that I called SpinUp:

It has a trigger that creates a new release every five minutes and runs this process:

In the event I want to extend this process to support other types of scripts, I made it a step template:

The eagle-eyed reader will notice the parameter Octopus Project. That is the project which runs the scripts.

Running the scripts

In order to meet the requirements above, I wanted the process to do the following:

  1. Download the package onto the jump box.
  2. Grab all the files in the package and add them as artifacts (in the event they need to be reviewed).
  3. Perform some basic analysis on the scripts. If any script is not using a transaction, or uses the keywords Drop or Delete, trigger a manual intervention.
  4. Notify someone when manual intervention is needed. My preferred tool is Slack.
  5. Run the scripts.
  6. If the scripts fail, send a failure notification.
  7. If the scripts succeed, send a success notification.

The download package step is very straightforward: download the package to the server. Don't run any configuration transforms. Don't replace any variables. Just deploy the package:

The Get Scripts From Package to Review step is a step template that does the following (a condensed sketch appears after the list):

  1. Read the YAML file and set output parameters.
  2. Add all the files in the package as artifacts.
  3. Perform some basic analysis on the SQL files.
  4. Set an output variable, ManualInterventionRequired, in the event the analysis fails.
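
Here is a condensed sketch of what that might look like inside an Octopus PowerShell script step. New-OctopusArtifact and Set-OctopusVariable are the built-in cmdlets available to script steps; the parameter name and the keyword checks are assumptions for illustration:

    # Assumption: the extraction path from the download step, passed in as a step parameter
    $packagePath = $OctopusParameters["AdHoc.Package.Path"]

    $manualInterventionRequired = $false
    Get-ChildItem -Path $packagePath -Filter "*.sql" | ForEach-Object {
        # Attach each script as an artifact so reviewers can see exactly what will run
        New-OctopusArtifact -Path $_.FullName -Name $_.Name

        $content = Get-Content $_.FullName -Raw
        # Flag scripts that skip transactions or contain risky keywords
        if ($content -notmatch "BEGIN TRAN" -or $content -match "\b(DROP|DELETE)\b") {
            $manualInterventionRequired = $true
        }
    }

    # Downstream steps key their run conditions off this output variable
    Set-OctopusVariable -name "ManualInterventionRequired" -value $manualInterventionRequired.ToString()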

This is all done in a step template. The only parameter required is the step that downloaded the package:

The format for output parameters in Octopus Deploy can be tricky to remember. I know I would mistype something, so rather than risk that, I use variables. This way, if I do change something, I only have to change it in one place:
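
For reference, the full path for an output variable set by a step named Get Scripts From Package to Review looks like the line below (the step name matches this post's example). Wrapping it in a project variable, say Project.ManualInterventionRequired (my own naming convention here), means only one place needs updating if the step is ever renamed:

    #{Octopus.Action[Get Scripts From Package to Review].Output.ManualInterventionRequired}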

When I notify someone, I can include that information very easily. Also, note that this step runs based on the ManualInterventionRequired output variable:

The same is true for the manual intervention. Its run condition is based on the ManualInterventionRequired output variable:
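
For reference, a variable run condition along these lines (using the hypothetical project variable from above) makes the step run only when the analysis flagged something:

    #{if Project.ManualInterventionRequired == "True"}True#{/if}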

The Run SQL Scripts step goes through all the SQL files and runs them. Again, to make it easier, I used a step template. This process uses Invoke-Sqlcmd, which captures the output and adds it to the task history:
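
A minimal sketch of that loop, assuming the server, database, and package path arrive as step parameters populated from the metadata (the parameter names are assumptions; Invoke-Sqlcmd ships with the SqlServer PowerShell module):

    # Assumptions: these parameter names mirror the output variables from the metadata step
    $server = $OctopusParameters["AdHoc.Database.Server"]
    $database = $OctopusParameters["AdHoc.Database.Name"]
    $packagePath = $OctopusParameters["AdHoc.Package.Path"]

    Import-Module SqlServer

    Get-ChildItem -Path $packagePath -Filter "*.sql" | Sort-Object Name | ForEach-Object {
        Write-Host "Running $($_.Name) against $database on $server"
        # -Verbose surfaces PRINT output so it shows up in the Octopus task log
        Invoke-Sqlcmd -ServerInstance $server -Database $database -InputFile $_.FullName -Verbose
    }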

Assuming everything went well, the success notification can go out:

Otherwise, the failure notification goes out:

Process demo

I have a folder containing a script set ready to be run:

The MetaData.yaml file has the script set to run on Dev:

The script itself is nothing special. I'm not going to use a transaction, to show that the process will pick that up and force a manual intervention:

I've copied that folder into the hot folder:

Octopus picks up that folder:

I can now see the demo folder has been moved to the processed folder. I put a timestamp on it so I know exactly when it was processed:

Looking at the project which runs the scripts, I can see a new release has been created, and a manual intervention is waiting for me:

I can check the Slack channel and see the approval message has been sent:

Going into the release, I can see the artifacts have been generated. If I wanted to, I could download them and view the exact script which is about to be run. When I view the approval details, I can see the message is the same as the Slack notification:

After approving the deployment, the script runs, and the output is captured:

And because the script ran successfully, the success notification was sent to Slack:

FAQ

How can I prevent someone from submitting a script to the dev environment that points at a production SQL Server?

When using integrated security, have a Tentacle per environment; that Tentacle only has access to the SQL Servers in its environment. When using SQL Authentication, have separate users and passwords per environment. Either way, the script will fail because the account used to log in to SQL Server won't have access.

What if I want every script reviewed when it's sent to pre-production or production?

Change the manual intervention step to always run, and change its environments to pre-production and production. The conditional approval was put in place to only require approval when certain conditions are met. In fact, starting out, I recommend all scripts sent to pre-production and production be manually approved. Once trust has been built in the process, it's time to introduce conditional approvals.

This seems like overkill. Couldn't you use prompted variables in Octopus Deploy?

Absolutely! I have another project set up to do just that. The question is, who will submit these scripts? Should they have rights to create a release and have it go to production? Should everybody have access to Octopus Deploy? For my use cases, my answer was no to all of those. My primary goal for this process was to eliminate as many manual steps as possible, and manually creating a release using prompted variables added too many manual steps.

Conclusion

I'm the first to admit this process is far from perfect. It will not work for every company. The goal of this post is to provide an example of a process you can modify for use in your company.

Until next time, happy deployments!


Posts in the database deployment automation series:

  • Why consider automated database deployment
  • Database deployment automation approaches
  • Database deployment automation using state-based Redgate SQL Change Automation
  • Using ad-hoc scripts in your automated database deployment pipeline
  • Database deployment automation using Octopus and Redgate Deployment Suite for Oracle
  • Database deployment automation using Jenkins and Octopus Deploy post deployment scripts for Oracle
  • Using DbUp and Octopus workers for database deployment automation
  • Automatic approvals for your database deployment automation


Source: https://octopus.com/blog/database-deployment-automation-adhoc-scripts
