
The SQL Server Misconfiguration Chronicles part 3: Database Files


In the previous part of the SQL Server Misconfiguration Chronicles we covered the default instance memory configuration.

In this third part of the chronicles we will look at database files. When you create a database, SQL Server looks at your model database and creates the file structure specified there. By default this means you get one data file with a size of 4 MB and one log file with a size of 1 MB. These files can grow automatically through auto growth, where the data file grows in steps of 1 MB and the log file grows by 10% on each growth event.
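If you want to see which defaults your instance will hand out to new databases, you can query the file metadata of the model database. A minimal sketch:

-- File sizes are stored in 8 KB pages; growth is in pages unless it is a percentage
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM model.sys.database_files;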

 

Why is this a bad thing? In this episode of the misconfiguration chronicles, that is actually a multi-part question.

1. Why is the data file configuration not optimal?
It all starts with another question: how big is your database going to be? If your database will only ever contain 7 MB of data, this data file setup will work for you, but as soon as you have to scale, it will not. Let's say you have a database into which 4 GB of data will initially be loaded, and which might grow to 8 GB within the next year. In this situation the default data file setup is not optimal. If you leave the default settings on, your data file will auto-grow roughly 4,092 times just for the initial load (4096 MB minus the initial 4 MB, at 1 MB per growth), causing file fragmentation! This makes your IO calls a lot slower because your data has to be read from a fragmented file. (For more information on fragmented files and how to solve them, you can read this blog, which gives a perfect explanation of the problem and how to fix it.)

This fragmentation will definitely impact your performance, and can make your SQL Server slower or even throw errors due to file system limitations. You can prevent this issue by sizing your database correctly from the start: for example, give the data file an initial size of 4096 MB with an auto-growth setting of 1024 MB. This will minimize the fragmentation of your file.
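As a sketch of what that looks like in T-SQL (the database and file names below are just placeholders for your own), you can set the initial size and growth increment like this:

-- Size the data file up front and use a fixed growth increment instead of 1 MB steps
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 4096MB, FILEGROWTH = 1024MB);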

Also important to remember is that SQL Server can use Instant File Initialization for its data files. This has to be set at the OS level: you have to grant the SQL Server service account the right to Perform Volume Maintenance Tasks. Why is Instant File Initialization a good thing? IFI claims the new file space of your auto-growth without first filling it with zeros, so the file remains available during an auto growth (without IFI your transactions become suspended while the file is growing).
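On SQL Server 2016 SP1 and later you can check from T-SQL whether the service account has this right; a minimal sketch:

-- Shows whether Instant File Initialization is enabled for the database engine service
SELECT servicename,
       instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';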

Another thing you can do is use multiple data files, but this is mainly to enhance performance and to give you flexibility in scaling your database. For more information on this I suggest you read Paul Randal's post on the performance impact of multiple data files, as sketched below.
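A minimal sketch of adding a second data file (names, sizes and paths are placeholders, not a recommendation):

-- Add an extra data file, sized and grown the same way as the first one
ALTER DATABASE SalesDB
ADD FILE (
    NAME = SalesDB_Data2,
    FILENAME = 'D:\Data\SalesDB_Data2.ndf',
    SIZE = 4096MB,
    FILEGROWTH = 1024MB
);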

2. Why is the Log file configuration not optimal?
For physical file-level fragmentation, the explanation from the data file section also applies here; only the parts about multiple files and Instant File Initialization do not. As for multiple log files: SQL Server will only use one transaction log file. If you add another transaction log file, it will not be used unless the first file has a max size and/or is full. This is because SQL Server writes to the transaction log sequentially, transaction after transaction; it cannot spread that workload over different files.

The first difference with the data files is that the default auto-growth here is 10%. This makes the file grow in very small portions at first, but later on 10% can become a very big growth step. 10% of 1 MB is not much and will cause heavy fragmentation if you have a large transaction running on your SQL Server. It is better to set the auto-growth in MB according to the workload your SQL Server has to handle. I usually set the auto-growth on LOG or DATA files to 64 MB, 128 MB, 256 MB, 512 MB or 1024 MB, depending on the size of the file and the workload.
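A sketch of switching the log file from percentage growth to a fixed increment (again, the database and logical file names are placeholders):

-- Replace the 10% default with a fixed growth step sized for the workload
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Log, FILEGROWTH = 256MB);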

The second and most important difference with data files is that log files use something called Virtual Log Files, aka VLFs. When your log file grows, SQL Server adds additional Virtual Log Files. (For more information on how many virtual log files are created and on VLFs in general, you can read this great blog on VLFs.)

Having too many VLFs will impact your transaction log backup and restore process; these will take longer and longer depending on the number of VLFs in your transaction log file. In extreme cases this can even impact your insert/update/delete performance (as explained on Brent Ozar's website).
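To see how many VLFs a log file currently has, you can use DBCC LOGINFO, or on SQL Server 2016 SP2 and later the sys.dm_db_log_info function. A minimal sketch for the current database:

-- sys.dm_db_log_info returns one row per VLF, so the row count is the VLF count
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());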

 

How do you solve this?

There is no straight answer for this one, because it is a multi-part question and it all depends. What we can say is that the default settings will have to be altered 99% of the time. The configuration you should choose always depends on how big your database is and will become, and on your workload, so unfortunately I cannot give you a single answer that fits every environment.

Thank you all for reading and happy reconfiguring!
