If you were to ask me what one of the most overlooked but essential elements of database security and stability is, it would be keeping all of the software involved in the database environment up to date. That might seem trivial, but experience shows that in many environments this responsibility sits in a gray area, especially when a customer has many third-party vendors bringing their own databases. In a time when exploit kits enable even the non-technically savvy to launch sophisticated attacks, patch management takes on a new importance, especially when you consider that gaining access to any database is considered hitting the jackpot by a hacker.
Security is not the only reason why patching is crucial. SQL Server is a very complex product, and it runs on a cyclic, bi-monthly cumulative update model that delivers crucial bug, performance and security fixes.
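The first step in keeping up with that model is simply knowing where each instance stands. A quick way to find out is to ask the instance itself, using the documented SERVERPROPERTY function:

    -- Report the installed build, servicing level and edition of this instance.
    -- ProductUpdateLevel reports the cumulative update (e.g. 'CU18') on recent
    -- versions of SQL Server; older builds return NULL for it.
    SELECT
        SERVERPROPERTY('ProductVersion')     AS BuildNumber,       -- e.g. 15.0.4261.1
        SERVERPROPERTY('ProductLevel')       AS ServicePackLevel,  -- e.g. 'RTM' or 'SP3'
        SERVERPROPERTY('ProductUpdateLevel') AS CumulativeUpdate,  -- e.g. 'CU18'
        SERVERPROPERTY('Edition')            AS Edition;

Comparing the reported build number against Microsoft's published build lists tells you exactly how far behind an instance is.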
Both reasons are why installing the applicable patches as soon as possible after their release is more important than ever. "As soon as possible" is the key phrase; it doesn't necessarily mean "immediately after release." While patching has evolved to the point where automatic update processes will do the work for you, and while that might be best practice for consumers, it isn't prudent to just "set it and forget it" in a database context: downtime or corruption caused by a botched patch can have a serious impact on the company's SLAs.
Unfortunately, patch management isn't just more important than ever, it's also more complex than ever, because uniformity is the exception rather than the rule. Most corporate networks run a mix of SQL Server versions and editions, and making sure all of them are properly patched is a major headache for IT.
The need to patch as quickly as possible conflicts with another important tenet of updating: stability, and the need to carefully test patches in a controlled environment before rolling them out to your production network. This second need has also grown along with the complexity of the product: the more complex the product, the greater the chance that undiscovered conflicts will cause problems when a patch is applied to a specific configuration.
Due to the nature and complexity of SQL Server, it's impossible for Microsoft to cover every possible configuration in testing. That's why Microsoft's own best practices on patch management have long included in-house testing on systems designed to emulate your production systems. The trick is to protect your systems as much as possible from vulnerabilities, without putting them at risk by applying untested patches. The first step is to have someone evaluate each patch individually and, in effect, perform a risk assessment for each vulnerability. That means looking at factors such as the severity of the vulnerabilities the patch fixes, how exposed the affected systems are, how critical those systems are to the business, and the stability risk of applying the patch itself.
Based on these considerations, patching priorities will differ between organizations, and even between machines within the same organization.
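As an illustration only, such an assessment could be tracked in a simple table, with the factors above as scored columns. The schema and the scoring below are hypothetical, not a standard; they merely show how a per-patch priority can come out of an explicit assessment:

    -- Hypothetical patch-assessment table; column names and the 1-5 scoring
    -- are illustrative, not an established schema.
    CREATE TABLE dbo.PatchAssessment
    (
        PatchId          varchar(20) NOT NULL,  -- e.g. the KB number of the update
        TargetBuild      varchar(20) NOT NULL,  -- build the patch brings you to
        SeverityScore    tinyint     NOT NULL,  -- 1-5: impact of the issues it fixes
        ExposureScore    tinyint     NOT NULL,  -- 1-5: how reachable the instance is
        CriticalityScore tinyint     NOT NULL,  -- 1-5: business importance of the instance
        TestedInStaging  bit         NOT NULL DEFAULT 0,
        CONSTRAINT PK_PatchAssessment PRIMARY KEY (PatchId)
    );

    -- Rank patches by combined risk, so the riskiest gaps get tested
    -- and rolled out first.
    SELECT PatchId, TargetBuild,
           SeverityScore * ExposureScore * CriticalityScore AS RiskScore,
           TestedInStaging
    FROM dbo.PatchAssessment
    ORDER BY RiskScore DESC;

Multiplying the scores is just one possible weighting; the point is that the rollout order follows from an explicit assessment rather than gut feeling.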
We often hear that while the system admins know to keep the OS patched, SQL Server is left running as is, because it "just runs." The problem is that some SQL Server builds are known as "dangerous builds," with risk levels ranging from "unstable" through "insecure" to "corrupts data in specific situations."
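One pragmatic safeguard is to compare the build an instance reports against a maintained list of builds with known issues. A minimal sketch, with invented placeholder builds standing in for a real known-issues list:

    -- Compare the running build against a (hypothetical) list of problem builds.
    -- The build numbers below are placeholders; a real list should come from a
    -- maintained source of known-issue builds.
    DECLARE @CurrentBuild varchar(20) =
        CONVERT(varchar(20), SERVERPROPERTY('ProductVersion'));

    DECLARE @DangerousBuilds TABLE (Build varchar(20), KnownIssue varchar(200));
    INSERT INTO @DangerousBuilds (Build, KnownIssue) VALUES
        ('99.0.1000.0', 'placeholder: corrupts data in specific situations'),
        ('99.0.2000.0', 'placeholder: instability under load');

    -- Any row returned means this instance should be patched first.
    SELECT d.Build, d.KnownIssue
    FROM @DangerousBuilds AS d
    WHERE d.Build = @CurrentBuild;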
At Kohera we offer periodic health checks that include an evaluation of the patch level of your servers running SQL Server. Because we run these checks for several clients, our consultants have solid knowledge of the "dangerous builds" and of the stability risks that implementing, or not implementing, a patch can pose to your environment. This enables us to advise and assist you in maintaining solid patch management.