SQL Server 2017 and Azure SQL Database introduced a whole new set of query performance improvements. These modifications are all part of the so-called adaptive query processing feature family, which includes three major changes: batch mode adaptive joins, batch mode memory grant feedback, and interleaved execution for multi-statement table-valued functions. In this article, I will focus on the one that is, in my humble opinion, the most fascinating: batch mode adaptive joins.
Every DBA who has ever been tasked with tuning a database or monitoring its performance knows the phenomenon: your ‘oh so blazingly fast query’ suddenly runs slow… One of the most common causes for this issue is parameter sniffing. When you execute a parameterized query for the first time, SQL Server compiles a set of candidate execution plans based on the provided parameter values and picks the plan with the lowest estimated cost. SQL Server caches the chosen plan and reuses it for future executions. The problem arises when the values in the table aren’t evenly distributed. For ad-hoc queries, SQL Server is able to generate different execution plans based on the provided values. When using parameterized queries, on the other hand, a one-size-fits-all plan is created based on the values provided at the first execution.
To show you the impact, we’ll execute the same query twice using different parameters. In all the following examples we’ll use the WideWorldImporters database, which is available on GitHub. To make the data unevenly distributed, I added some records to the [Sales].[InvoiceLines] table. It now has 67,000 records for StockItemID 1 and 20 records for StockItemID 227.
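For reference, the skew can be produced with an INSERT along these lines. This is a sketch: the column list follows the shipped WideWorldImporters schema and assumes InvoiceLineID is filled in by its default sequence. Each run doubles the rows for StockItemID 1, and GO n (an SSMS/sqlcmd feature) repeats the batch; adjust the repeat count until the table holds roughly 67,000 such rows.
INSERT INTO [Sales].[InvoiceLines]
       (InvoiceID, StockItemID, Description, PackageTypeID, Quantity,
        UnitPrice, TaxRate, TaxAmount, LineProfit, ExtendedPrice,
        LastEditedBy, LastEditedWhen)
SELECT  InvoiceID, StockItemID, Description, PackageTypeID, Quantity,
        UnitPrice, TaxRate, TaxAmount, LineProfit, ExtendedPrice,
        LastEditedBy, LastEditedWhen
FROM    [Sales].[InvoiceLines]
WHERE   StockItemID = 1;
GO 7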
Let’s kick off with a short demonstration on how parameter sniffing can influence your performance. We’ll first create a very basic stored procedure:
CREATE PROCEDURE [dbo].[GetCustomersByStockItemID]
    @StockItemID int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT DISTINCT i.CustomerID
    FROM [Sales].[InvoiceLines] il
    INNER JOIN [Sales].[Invoices] i ON il.InvoiceID = i.InvoiceID
    WHERE il.StockItemID = @StockItemID;
END
GO
Now we’ll execute this stored procedure for StockItemID 227.
SET STATISTICS IO, XML, TIME ON
GO
EXEC dbo.GetCustomersByStockItemID @StockItemID = 227
GO
As you can see, SQL Server performs two nonclustered index seeks and needs 60 logical reads on the [Sales].[Invoices] table to execute the procedure.
Next, let’s execute the same stored procedure but now for StockItemID 1 instead of 227.
SET STATISTICS IO, XML, TIME ON
GO
EXEC dbo.GetCustomersByStockItemID @StockItemID = 1
GO
SQL Server cached the previously generated execution plan and, as you can see, chooses to reuse it. Because there are many more records in [Sales].[InvoiceLines] holding StockItemID 1, we now have over 200k logical reads on [Sales].[Invoices]!
To check whether SQL Server would create a different execution plan for StockItemID 1, we must first clear the procedure cache and then execute the stored procedure again.
DBCC FREEPROCCACHE()
GO
SET STATISTICS IO, XML, TIME ON
GO
EXEC dbo.GetCustomersByStockItemID @StockItemID = 1
GO
In this case SQL Server chooses to scan the nonclustered index on [Sales].[Invoices] and the number of logical reads drops to 166.
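A side note: DBCC FREEPROCCACHE throws away every plan on the instance, which is fine on a demo box but rather blunt elsewhere. A more targeted alternative is to invalidate only the plans that reference our procedure, using the built-in sp_recompile:
EXEC sp_recompile N'dbo.GetCustomersByStockItemID';
GO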
Prior to SQL Server 2017, one possible solution would be to add OPTION (RECOMPILE) at the end of the query in your stored procedure (see the sketch below). This way SQL Server compiles a single-use plan for each execution, based on the provided values. Although this is a viable solution, it has its downsides, especially for large and complicated queries with high compilation times. Starting from SQL Server 2017 we have a new tool in the shed: batch mode adaptive joins. This new type of join enables SQL Server to postpone the choice between a hash join and a nested loop join until the first input has been processed, in our case the seek on [Sales].[InvoiceLines]. The join operator defines a row count threshold: if the input exceeds this threshold, SQL Server continues with a hash join; if the input falls below it, SQL Server switches to a nested loop join.
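As a sketch, the OPTION (RECOMPILE) workaround applied to our demo procedure would look like this:
ALTER PROCEDURE [dbo].[GetCustomersByStockItemID]
    @StockItemID int
AS
BEGIN
    SET NOCOUNT ON;

    SELECT DISTINCT i.CustomerID
    FROM [Sales].[InvoiceLines] il
    INNER JOIN [Sales].[Invoices] i ON il.InvoiceID = i.InvoiceID
    WHERE il.StockItemID = @StockItemID
    OPTION (RECOMPILE); -- compile a fresh, single-use plan on every execution
END
GO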
To be able to use adaptive joins, the database must be at compatibility level 140 and SQL Server must be able to use batch mode processing. In SQL Server 2017, batch mode requires a columnstore index, so to enable it I used a little trick Itzik Ben-Gan wrote about: you create a filtered nonclustered columnstore index that can never contain any rows.
CREATE NONCLUSTERED COLUMNSTORE INDEX [NCCI_Sales_Invoices_InvoiceID] ON [Sales].[Invoices]
(
    [InvoiceID]
)
WHERE InvoiceID = 1 AND InvoiceID = -1 -- contradictory filter, so the index stays empty
WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0)
GO
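If the database is still at an older compatibility level, raise it first (assuming the default database name WideWorldImporters):
ALTER DATABASE [WideWorldImporters] SET COMPATIBILITY_LEVEL = 140;
GO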
If we execute the stored procedure again, SQL Server should be able to switch between joins. First execute the proc using StockItemID 227:
SET STATISTICS IO, XML, TIME ON
GO
EXEC dbo.GetCustomersByStockItemID @StockItemID = 227
GO
If you hover over the adaptive join operator, you can see that the number of rows falls below the threshold computed for this plan, 436 rows. Therefore, SQL Server switches to a nested loop join and seeks the nonclustered index on [Sales].[Invoices], using 60 logical reads.
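The threshold is exposed in the plan XML as the AdaptiveThresholdRows attribute of the adaptive join operator. As a sketch, you can hunt for cached plans that contain an adaptive join like this (it scans the plan cache, so be careful on busy systems):
SELECT TOP (10)
    st.text       AS query_text,
    qp.query_plan AS plan_xml -- look for AdaptiveThresholdRows in the XML
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE CAST(qp.query_plan AS nvarchar(MAX)) LIKE N'%AdaptiveThresholdRows%';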
Now let’s execute the same procedure again for StockItemID 1.
SET STATISTICS IO, XML, TIME ON
GO
EXEC dbo.GetCustomersByStockItemID @StockItemID = 1
GO
SQL Server reuses the execution plan and, as you can see, the number of rows exceeds the threshold. Consequently, SQL Server continues with the hash join and scans the nonclustered index on [Sales].[Invoices], using only 166 logical reads!
An important disadvantage of adaptive joins is that SQL Server grants more memory than it would for a regular nested loop join, because the optimizer requests a memory grant as if it were executing a hash join. If you are on a system where memory is already scarce, some caution is warranted, especially for queries that are executed frequently. As a second downside, Microsoft states there is an overhead for the build phase because it is a stop-and-go operation. Although I couldn't observe any significant difference, this overhead might be more noticeable in large environments.
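To keep an eye on this, you can check the memory grants of currently executing queries with the sys.dm_exec_query_memory_grants DMV, for example:
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       max_used_memory_kb
FROM sys.dm_exec_query_memory_grants;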
If any of the previously described disadvantages is a blocking factor for you, you can always choose to disable adaptive joins, either at the database scope or the statement scope. To disable the feature for all queries executed against a specific database, use:
ALTER DATABASE SCOPED CONFIGURATION SET DISABLE_BATCH_MODE_ADAPTIVE_JOINS = ON;
If you wish to disable the feature for one specific query, you can add the following hint at the end of the query:
OPTION(USE HINT('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));
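Applied to our demo query, that looks like this:
SELECT DISTINCT i.CustomerID
FROM [Sales].[InvoiceLines] il
INNER JOIN [Sales].[Invoices] i ON il.InvoiceID = i.InvoiceID
WHERE il.StockItemID = 1
OPTION (USE HINT('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));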
As a last option, you can use trace flag 9398 as well; it can be set at the global, session, and query level.
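For example:
DBCC TRACEON (9398, -1); -- globally, for the whole instance
DBCC TRACEON (9398);     -- for the current session only
-- or per query: append OPTION (QUERYTRACEON 9398) to the statement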
With the introduction of adaptive joins, we have a nifty new tool to improve and maintain query performance on our beloved database platform. Even though it comes at an additional cost, you get improved flexibility in return, especially for unevenly distributed data.