
Using CTAS to efficiently load large tables

An Azure data warehouse is a real powerhouse. Personally, I consider it to be one of the most performant components that can be built into the Lambda architecture. Ludicrous power comes with a drawback, of course, and that drawback is potential inefficiency.

As with any (database) architecture, good foundations are key for long-term usability. Luckily, I had the opportunity to test the SQL Server PDW that ADWH is built upon. This knowledge is key for an optimal design.

Once the data gets big and there is a need for large loads, even these systems can start to feel slow or sluggish. As such, you should design your tables so that maintenance and data loading become a breeze. For “small” tables this problem is less noticeable, because the system is powerful enough to CTAS the full load into a new table, drop the old one and rename the new one without running into resource or time constraints.
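For such a small table, the full-reload pattern boils down to one CTAS plus a couple of metadata-only renames. A minimal sketch (the staging table and its name are hypothetical):

```sql
-- Rebuild the table from the full load in one fully parallel CTAS.
CREATE TABLE dbo.TableA_New
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(UniqueKey)
)
AS SELECT * FROM Staging.TableA_FullLoad;

-- Swap the tables with metadata-only renames, then drop the old copy.
RENAME OBJECT dbo.TableA TO TableA_Old;
RENAME OBJECT dbo.TableA_New TO TableA;
DROP TABLE dbo.TableA_Old;
```

Because CTAS runs fully parallel and the renames touch only metadata, the old table stays queryable right up until the swap.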

Breaking the workload of large tables

Typically, when an EDW solution gets infused with big data, these systems will have tables that are of respectable size, to say the least. Logic tells us that when working with large tables, delta loading is the way to go, but on an MPP system this can become a problem: merging the data will often trigger unnecessary data movement and high resource consumption.

To achieve a delta load in a more efficient way, we need to break the process up into manageable chunks and use our partitions and CTAS more efficiently, maximizing the raw power of the MPP system without running into its drawbacks.

So when working with these large data sets, we must try to eliminate hidden data movement as much as possible. By doing so we can keep the resource needs and DWUs low. This can be done with a technique called “walking the partitions”, which enables you to manage and maintain large data sets with minimal resource consumption (DWUs).

In short, the technique looks like this:

  • CTAS the delta data into a structure ready for partition switching
  • Switch the matching partition out of the original table and archive or drop it
  • Switch the newly loaded partition back into the table

This pattern is so efficient because you eliminate the need to rebuild large partitions, indexes or tables, minimizing the risk of running out of memory! The key is to use CTAS as much as possible, rather than relying only on a table split operation or on data manipulations that fragment your data structure or cannot run fully parallel, causing additional data movement between the compute nodes.

Remark: As always when using partitioning, it’s key to keep your first and last partitions empty, to prevent you from having to recreate entire tables.
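When those edge partitions are empty, adding a new boundary is a metadata-only operation. A minimal sketch, assuming TableA is partitioned on LoadDate with 99999999 as the empty upper boundary:

```sql
-- Split the empty last partition to add a boundary for 2030.
-- No rows live above 20290101, so no data has to move.
ALTER TABLE TableA SPLIT RANGE (20300101);
```

Had the last partition contained data, the same statement would have had to physically redistribute those rows, which is exactly the expensive rebuild the remark warns about.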

For larger tables things to keep in mind are:

  • Table splitting happens in sequence (read: only partially parallel)
  • CTAS, SWITCH out, ALTER TABLE SPLIT, SWITCH in: is roughly 8x faster


Typical example

Let’s load the delta store into our table TableA.

TableA is hash-distributed on its UniqueKey and partitioned by LoadDate.

The delta store is exported daily, and stored on the data lake.

So first we create an external table over the delta file on the data lake.



CREATE EXTERNAL TABLE External.Delta_Load_TableA (
    [UniqueKey] uniqueidentifier,
    [LoadDate] int,
    [SomeData] varchar(50)
)
WITH (
    LOCATION = '/DataLake/TableA/DailyDelta/Delta_Load_TableA.csv.gz',
    DATA_SOURCE = myDataLake,
    FILE_FORMAT = external_file_format_name
);
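The data source and file format referenced above are assumed to exist already. If not, they could be created along these lines (the storage location, credential and format name are hypothetical; the compression codec matches the .csv.gz delta export):

```sql
CREATE EXTERNAL DATA SOURCE myDataLake
WITH (
    TYPE = HADOOP,
    LOCATION = 'abfss://datalake@mystorageaccount.dfs.core.windows.net',
    CREDENTIAL = myStorageCredential
);

-- Comma-separated, gzip-compressed text files.
CREATE EXTERNAL FILE FORMAT GzipDelimitedText
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', USE_TYPE_DEFAULT = TRUE),
    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
);
```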

Then we CTAS this data into the delta schema object for TableA.


CREATE TABLE Delta.Delta_Load_TableA WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(UniqueKey),
    PARTITION
    (
        [LoadDate] RANGE RIGHT FOR VALUES
        (00000000, 20010101, 20020101, 20030101, 20040101, 20050101,
         20060101, 20070101, 20080101, 20090101, 20100101, 20110101,
         20120101, 20130101, 20140101, 20150101, 20160101, 20170101,
         20180101, 20190101, 20200101, 20210101, 20220101, 20230101,
         20240101, 20250101, 20260101, 20270101, 20280101, 20290101,
         99999999)
    )
)
AS SELECT * FROM External.Delta_Load_TableA;

Now we switch the matching partition out of TableA into the Archive schema, and switch the freshly loaded partition back in.


ALTER TABLE TableA SWITCH PARTITION 18 TO Archive.TableA;

ALTER TABLE Delta.Delta_Load_TableA SWITCH PARTITION 18 TO TableA PARTITION 18;
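Partition 18 is hardcoded here for clarity; in practice, the partition number for a given LoadDate boundary can be looked up in the catalog views. A sketch (with RANGE RIGHT, a partition’s lower boundary has boundary_id one less than its partition number):

```sql
SELECT p.partition_number, prv.value AS lower_boundary
FROM sys.tables t
JOIN sys.indexes i
    ON i.object_id = t.object_id AND i.index_id <= 1
JOIN sys.partitions p
    ON p.object_id = t.object_id AND p.index_id = i.index_id
JOIN sys.partition_schemes ps
    ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions pf
    ON pf.function_id = ps.function_id
LEFT JOIN sys.partition_range_values prv
    ON prv.function_id = pf.function_id
   AND prv.boundary_id = p.partition_number - 1
WHERE t.name = 'TableA'
ORDER BY p.partition_number;
```

Driving the switch statements from this query instead of a literal keeps the load procedure correct even after new boundaries are split in.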
