
Using CTAS to efficiently load large tables

An Azure SQL Data Warehouse is a real powerhouse. Personally, I consider it one of the most performant components you can build into a Lambda architecture. Ludicrous power comes with a drawback, of course, and that drawback is potential inefficiency.

As with any (database) architecture, good foundations are key for long-term usability. Luckily, I had the opportunity to work with the SQL Server PDW, the appliance on which the ADWH is built, and that knowledge is key for an optimal design.

Once the data gets big and large loads become the norm, even these systems can start to feel slow or sluggish. You should therefore design your tables so that maintenance and data loading become a breeze. For “small” tables this problem is barely noticeable, because the system is powerful enough to CTAS the full load into a new table, drop the old one and rename the new one without running into resource or time constraints.

Breaking up the workload of large tables

Typically, when an EDW solution gets infused with big data, it will contain tables of respectable size, to say the least. Logic tells us that delta loading is the way to go for large tables, but on an MPP system this can become a problem: merging the delta into the target often triggers unnecessary data movement and high resource consumption.

To achieve a more efficient delta load, we need to break the process up into manageable chunks and use our partitions and CTAS more effectively, maximizing the raw power of the MPP system without running into its drawbacks.

So when working with these large data sets, we must try to eliminate hidden data movement as much as possible, keeping the resource needs low. This can be done with a technique called “walking the partitions”, which lets you manage and maintain large data sets with minimal resource consumption (DWUs).

In short, the technique looks like this:

  • CTAS the delta data into a structure that is ready for partition switching
  • Switch the matching partition out of the original table and archive or drop it
  • Switch the newly loaded partition back into the table

This pattern is so efficient because it eliminates the need to rebuild large partitions, indexes or tables, minimizing the risk of running out of memory! The key is to use CTAS as much as possible, rather than relying solely on a table split operation or on data manipulations that fragment your data structure or cannot run fully in parallel, causing additional data movement between the compute nodes.

Remark: As always when using partitioning, it’s key to keep your first and last partition empty: splitting an empty partition is a cheap metadata-only operation, and it prevents you from ever having to recreate the entire table.

For larger tables, things to keep in mind are:

  • Table splitting happens in sequence (read: only partially parallel)
  • CTAS, SWITCH out, ALTER TABLE split, SWITCH in is roughly 8x faster (see the sketch below)
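To make that difference concrete, below is a hedged sketch of the faster sequence, adding a new boundary to a populated last partition. All object names, boundary values and partition numbers are hypothetical, and the staging tables are assumed to already exist with a layout identical to the target where required.

-- 1. CTAS the rows of the partition we want to split into a staging table
--    whose boundaries already reflect the layout after the split.
CREATE TABLE Stage.TableA_Split WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(UniqueKey),
    PARTITION ([LoadDate] RANGE RIGHT FOR VALUES (20290101, 20300101, 99999999))
)
AS SELECT * FROM TableA WHERE [LoadDate] >= 20290101;

-- 2. Switch the populated partition out, leaving an empty slot in TableA.
--    Stage.TableA_Old is a hypothetical empty table with an identical layout.
ALTER TABLE TableA SWITCH PARTITION 31 TO Stage.TableA_Old PARTITION 31;

-- 3. Split the now-empty partition: a pure metadata operation.
ALTER TABLE TableA SPLIT RANGE (20300101);

-- 4. Switch the staged rows back in (here all rows fall below the new
--    boundary, so a single switch suffices).
ALTER TABLE Stage.TableA_Split SWITCH PARTITION 2 TO TableA PARTITION 31;

Because every data-carrying step is either a fully parallel CTAS or a metadata-only switch, nothing in this sequence forces a rebuild of the columnstore.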

 

Typical example

Let’s load the delta store into our table TableA.

TableA is hashed on its Unique Key, and is partitioned by LoadDate.

The delta store is exported daily, and stored on the data lake.

So first we create the external table that points to the file on the data lake.



CREATE EXTERNAL TABLE External.Delta_Load_TableA (
    [UniqueKey] uniqueidentifier,
    [LoadDate] int,
    [SomeData] varchar(50)
)
WITH (
    LOCATION = '/DataLake/TableA/DailyDelta/Delta_Load_TableA.csv.gz',
    DATA_SOURCE = myDataLake,
    FILE_FORMAT = external_file_format_name
);
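The WITH clause references a data source and a file format that must already exist in the database. Their definitions depend on your storage setup, but as a hedged sketch, assuming an Azure Data Lake Store reached through PolyBase, with hypothetical names and a pre-created database scoped credential, they could look like this:

CREATE EXTERNAL DATA SOURCE myDataLake
WITH (
    TYPE = HADOOP,
    LOCATION = 'adl://mydatalake.azuredatalakestore.net',  -- hypothetical account
    CREDENTIAL = ADLSCredential  -- database scoped credential, created beforehand
);

-- A gzipped CSV format matching the Delta_Load_TableA.csv.gz export; its name
-- is what the FILE_FORMAT placeholder above would point to.
CREATE EXTERNAL FILE FORMAT GzipCsvFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ','),
    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
);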

Then we CTAS this data into the delta schema object for TableA.


CREATE TABLE Delta.Delta_Load_TableA WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(UniqueKey),
    PARTITION
    (
        [LoadDate] RANGE RIGHT FOR VALUES
        (00000000, 20010101, 20020101, 20030101, 20040101, 20050101, 20060101, 20070101, 20080101, 20090101, 20100101, 20110101, 20120101, 20130101, 20140101, 20150101, 20160101, 20170101, 20180101, 20190101, 20200101, 20210101, 20220101, 20230101, 20240101, 20250101, 20260101, 20270101, 20280101, 20290101, 99999999)
    )
)
AS SELECT * FROM External.Delta_Load_TableA;
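Before switching anything, we need to know which partition number holds the day we just loaded (partition 18 in the statements below). A hedged lookup sketch against the catalog views, with an illustrative table name filter:

-- List TableA's partition numbers with their boundary values. With RANGE RIGHT,
-- the boundary shown is the exclusive upper bound of that partition.
SELECT p.partition_number, prv.value AS upper_boundary
FROM sys.tables t
JOIN sys.indexes i ON t.object_id = i.object_id AND i.index_id <= 1
JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.partition_schemes ps ON i.data_space_id = ps.data_space_id
JOIN sys.partition_functions pf ON ps.function_id = pf.function_id
LEFT JOIN sys.partition_range_values prv ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
WHERE t.name = 'TableA'
ORDER BY p.partition_number;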

Now we switch the previously loaded partition out of TableA into the Archive schema, and switch the freshly loaded delta partition in.


-- Switch the stale partition out of TableA into the archive table
ALTER TABLE TableA SWITCH PARTITION 18 TO Archive.TableA PARTITION 18;

-- Switch the freshly loaded delta partition into TableA
ALTER TABLE Delta.Delta_Load_TableA SWITCH PARTITION 18 TO TableA PARTITION 18;
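One assumption hides in the switch-out above: Archive.TableA must already exist, share TableA's structure, distribution and partition layout exactly, and be empty at the target partition. A hedged one-time setup sketch, using an empty CTAS (the WHERE 1 = 2 predicate clones the structure without copying any rows):

-- Hypothetical one-time setup: an empty archive table with a layout identical
-- to TableA, so that SWITCH remains a pure metadata operation.
CREATE TABLE Archive.TableA WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH(UniqueKey),
    PARTITION
    (
        [LoadDate] RANGE RIGHT FOR VALUES  -- same boundary list as TableA
        (00000000, 20010101, 20020101, 20030101, 20040101, 20050101, 20060101, 20070101, 20080101, 20090101, 20100101, 20110101, 20120101, 20130101, 20140101, 20150101, 20160101, 20170101, 20180101, 20190101, 20200101, 20210101, 20220101, 20230101, 20240101, 20250101, 20260101, 20270101, 20280101, 20290101, 99999999)
    )
)
AS SELECT * FROM TableA WHERE 1 = 2;

Once both switches complete, Delta.Delta_Load_TableA can simply be dropped, ready to be recreated by the next daily run.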