One of the developers at a client recently pulled me aside to ask me a fairly simple question:

Hey Cliff, how do we grant QA read-only query access to our Azure SQL instance for testing?

My natural response was, of course, a tilt of the head and a single question: why?


Creating cloud-first software is an excellent time to revisit your team’s approach to software development. Challenging the ingrained, old, crusty practices and anti-patterns that have no place in cloud development can help your team adopt modern best practices more rapidly. The cloud affords us the ability to decide whether we want to spend our time managing server updates, whether we need to test all of the new third-party dependencies the cloud has enabled, and whether we should allow access to all of the resources that used to hang around in dusty servers sitting in corners.

Take the opportunity with a new project to ask yourself and your team why you’re doing the things you’re doing. What are you actually trying to accomplish, and are those processes necessary?

Two rows of racks of servers in the cloud

The more servers you have to deal with the less time you have to get work done.

Do we really need a server for that?

Your team (or another team in your company) may be tempted to follow the same development and operations practices you’ve been using for years. Simply putting a cloud spin on things will help, but it certainly won’t achieve the kind of agility, scalability, or reliability that cloud-first development stories boast. Starting with IIS running on a Windows Server 2012 VM in Azure certainly gets you the benefit of letting Microsoft pick up the tab for the physical hardware, but it’s only a start.

Making the leap directly into Azure App Service can save you the costs and time associated with maintaining a full server of your own. This lets your team focus on what you’re actually trying to accomplish: shipping a product. As much fun as installing security patches is, I imagine your app release timeline would be better off without that kind of ongoing maintenance on the schedule. Maybe put that time into unit tests instead?

This guideline extends to more services than one might think. Azure of course covers the bases for most of the underlying systems your product needs to function, but there are a host of good tools out there that still require installation and maintenance. The last decade of cloud-first thinking has spawned PaaS-oriented versions of many services that otherwise had to be installed on your own hardware. A quick Google search for a subscription-based option, instead of installing yet another product on a server somewhere, could save your team the cost of a machine as well as the licensing costs.

Keeping an eye on the horizon is also a good bet. Azure is constantly adding new and interesting services that ease the burden of adopting more cloud-first practices.

Hot air balloon with caption Dev and Test with Microsoft Azure

The hot air balloon can only fit so much before it can’t get off the ground.

Do we really need to test that?

A common refrain I’ve recently adopted is “Don’t test someone else’s system unless you have a good reason to.” This covers third-party libraries, .NET Framework libraries, operating system security, and a host of other things that one generally shouldn’t be actively concerned with when building a cloud-first application.

With modern development practices you will inevitably use third-party libraries. It’s unavoidable in a world where I can install an entire web API framework with a handful of NuGet packages. The best third-party libraries are the ones with tests included in their repo and a guarantee that all of those tests pass before every release. Keeping an eye out for VSTS or AppVeyor badges on public README files is a good indicator that the team is at least trying. If the team is showing you that their code works, trust that it does until you’re proven wrong.

Your unit tests should focus on code that your team controls. Chasing 100% code coverage is a waste of time, especially once you start testing methods that simply call into underlying framework code. Test that your own logic works, not that a for loop runs 5 times when you give it 5 inputs.
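As a hedged sketch of that guideline, here’s what it looks like in practice. The function and its discount rule are hypothetical, not from any real codebase; the point is that the assertions exercise the business rule your team owns, not the language’s loops or `sum()`:

```python
# Hypothetical domain logic worth testing: the threshold and rate are OUR rules.
def apply_bulk_discount(prices):
    """Apply a 10% discount when the order total exceeds 100."""
    total = sum(prices)
    return round(total * 0.9, 2) if total > 100 else total

# The tests target the business rule (threshold, rate), not framework behavior.
# Nobody needs a test proving that Python's sum() adds numbers correctly.
assert apply_bulk_discount([30, 30, 30]) == 90      # under threshold: no discount
assert apply_bulk_discount([60, 60]) == 108.0       # over threshold: 10% off
```

Two assertions here buy more confidence than fifty tests re-proving the standard library.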

That said, we’ve all encountered a third-party bug before. If a third-party library’s updates aren’t reliable, then by all means write acceptance tests around it. Trust until your trust is broken, then spend the time to write the appropriate tests to make sure you won’t get fooled again.

Security is included in this. When you use Azure PaaS offerings for the first time, your security team might pipe up with questions about how often Microsoft updates the underlying VMs, or whether they should perform penetration testing on those systems. Microsoft has created an entire section of its website for exactly those kinds of questions: the Trust Center details the certifications, audits, and other compliance documentation that Microsoft handles on their end so you don’t have to on yours.

Make sure that you’re still testing your own systems to ensure they’re safe to run in the cloud. Just because an app is deployed to App Service doesn’t mean it can pass credit card numbers around in plaintext. Running vulnerability scans against your API endpoints in the cloud is still a reasonable practice.

An open padlock on a laptop keyboard

Access is a dependency. Keep your dependencies to the minimum.

Do we really need access for that?

I really do like humans; they’re good at creating incredible things. Unfortunately, they don’t always do what they intend to do. Sometimes they run the wrong command. Sometimes they delete the wrong thing. Mistakes happen; that’s a fact of life. But if humans can’t touch a system at all, then humans can’t make mistakes in that system.

Challenge why anyone should be able to stick their fingers into a production system. Even read-only access can be problematic: SELECT statements aren’t free, and they consume finite resources. A long-running query can end up blocking writes or causing performance issues. Ask people requesting access whether they really need to see the database directly.

The rule of thumb we recommend to our clients is that an environment prior to staging (such as UAT for large teams, or just DEV for smaller ones) is the last environment people get write access to. Staging is the last environment anyone gets read access to. Nobody gets access to production. The cloud-first approach preaches infrastructure as code: any change to a system happens by first checking that change into source control. Combine those practices with a robust build and deploy pipeline and changes roll out automatically when they’re checked in. If it’s that easy to control your changes, why does anyone need to make changes directly in the environments at all?

Going back to the request that started this discussion, what would the path forward have been if I had simply shown that dev the handful of buttons necessary to grant read-only query access to the Azure SQL instance? In their case, the thing they were building was just that: an API. The database it wrapped was not exposed to any other system. No other application will ever know that the database exists, let alone what its schema or structure looks like. If QA had that access and started writing database queries to confirm their test cases, their tests would now depend on the database schema. That in turn reduces the schema’s flexibility to adapt to new challenges later on. By instead strictly adhering to the API boundary, QA gets their tests while the team maintains separation of concerns.

If you want to test an API, then test the API. The API doesn’t expose a database, so those tests don’t get to see the database. QA therefore doesn’t need read access, so QA doesn’t get read access.
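Here’s a minimal sketch of what “test the API, not the database” looks like, using only Python’s standard library to stand in for the real service. The orders endpoint, its response fields, and the in-memory store are all hypothetical; the in-memory dict plays the role of the database QA never gets to see:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical backing store. QA's tests never reference this directly --
# it could be Azure SQL, Cosmos DB, or a flat file, and the tests wouldn't change.
_ORDERS = {"42": {"id": "42", "status": "shipped"}}

class OrdersHandler(BaseHTTPRequestHandler):
    """Stand-in for the team's API: GET /orders/{id} returns JSON."""

    def do_GET(self):
        order_id = self.path.rsplit("/", 1)[-1]
        order = _ORDERS.get(order_id)
        body = json.dumps(order or {"error": "not found"}).encode()
        self.send_response(200 if order else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Spin up the API locally so the "QA test" below can hit it over HTTP.
server = HTTPServer(("127.0.0.1", 0), OrdersHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# QA's test asserts only on the API contract (the fields in the response),
# never on table names, columns, or schema -- no database access required.
with urlopen(f"http://127.0.0.1:{port}/orders/42") as resp:
    payload = json.loads(resp.read())
assert payload["status"] == "shipped"
server.shutdown()
```

If the team later reshapes the schema behind the API, these tests keep passing untouched, which is exactly the flexibility the API boundary buys you.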

The same goes for developers. If developers want to troubleshoot a production issue, they can work from the Snapshot Debugger or a database clone. Nobody needs direct access to production when indirect access is more than sufficient (and far less error-prone!).

Access begets more access. If someone has read access, they can easily press for write access at some later date. If you start with the blanket statement of “nobody but the build server gets access,” it’s a lot easier to hold that line. Remember that access is a dependency you’ll have to consider and maintain in the future.

Nebbia is here to help!

Need someone to help you ask ‘Why’? Nebbia is standing by to provide all of the devil’s advocate questions you could possibly want while we help your company explore and expand into the cloud.