
Azure Synapse Analytics: Where Azure DWH, Spark & ADF meet

When I made the jump from DBA to Data Architect, I thought that these roles would diverge further and further until they became two separate roles and functions, each with its own skillset and its own purpose in the (big) Data Analytics world. The latest evolution Microsoft launched proved me wrong though. Enter Azure Synapse Analytics.

Azure Synapse Analytics

Azure Synapse is Azure SQL Data Warehouse evolved—blending Spark, big data, data warehousing, and data integration into a single service on top of Azure Data Lake Storage for end-to-end analytics at cloud scale.

Thanks to the power of this platform, it blends naturally with all the existing connected services such as Azure Data Catalog, Azure Databricks, Azure HDInsight, Azure Machine Learning and, of course, Power BI.

The improvements of the Azure Gen 2 components provide benefits that require no configuration and come out of the box for every data warehouse. These improvements include non-volatile memory solid-state drives that increase the I/O bandwidth available to queries, and Azure FPGA-accelerated networking enhancements that let the environment move data at rates of up to 1 GB/s per node to speed up queries. This strengthens the Azure Data Warehouse's inherent ability to leverage the multi-core parallelism of the underlying SQL Servers, allowing it to move data efficiently between compute nodes, while ongoing investments in distributed query optimization will also greatly improve the system's performance over time.

ELT1 -> ADF Data Flow [Code Free]

Out of the box, Azure Synapse Analytics has more than 90 connectors and can sustain ingestion rates of up to 4 GB/s, and, more importantly, it handles the big data formats such as CSV, AVRO, ORC, Parquet and JSON files. This is the complete stack of Azure Data Factory.
To provide a code-free graphical interface, the platform uses ADF Data Flow for its data integration, giving it seamless integration into an existing Azure Data Platform.

Combined with wrangling data flows, this enables the platform to create complete ETL & ELT flows from a graphical interface, taking away the complexity of Spark code where it isn't needed.

This makes the tool extremely useful for generic loads that don't need specialized logic or complex data flows.

ELT2 -> Spark Notebooks [Code First]

Here is where (for me) the real synergy starts: this is where you can use the power of Spark where it can really shine, without having to suffer from its drawbacks. Spark is extremely powerful for large, complex, dynamic data flows that would be a lot harder to create with ADF, but it carries a large overhead in scenarios where its power cannot be put to full use.
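
To make this concrete, here is a minimal sketch of the kind of transformation a Synapse Spark notebook handles comfortably: reading raw Parquet files that were ingested into the data lake, applying some dynamic logic, and writing a curated dataset back for the warehouse to consume. The storage account, container names, paths and column names are hypothetical placeholders, not something prescribed by the platform.

# Minimal PySpark sketch for a Synapse-style notebook.
# Storage account, containers, paths and columns below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-sales").getOrCreate()

# Read the raw Parquet files that ADF ingested into the data lake
raw = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/sales/")

# Dynamic, rule-driven transformations are where Spark shines:
# derive columns, filter bad rows and aggregate in a single pass
curated = (
    raw.withColumn("order_date", F.to_date("order_timestamp"))
       .filter(F.col("amount") > 0)
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("daily_amount"))
)

# Write the curated result back to the lake for the DWH or Power BI to pick up
curated.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/sales_daily/"
)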

Analytics

Spark can also provide extra power in the analytics department, where Spark's superior analytics features can greatly enhance the Azure DWH's analytical abilities. While the Azure DWH's SQL is a really powerful language, it sometimes has trouble with specific operations, like removing duplicates or window functions, that lean heavily on the master node. This is exactly the kind of workload that can easily be done by Spark. Spark is also better suited than core SQL for more advanced computational workloads, like switching between data models and/or business rules, while SQL is much more mature at recomposing the data into workable datasets for reporting.
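
As an illustration, the sketch below shows how a deduplication and window-function workload could be pushed to a Spark notebook instead of straining the SQL master node. Again, the paths and column names are made up for the example.

# Hypothetical example: deduplicate and apply a window function in Spark
# instead of running the heavy operation on the SQL pool's master node.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedup-window").getOrCreate()

orders = spark.read.parquet("abfss://curated@mydatalake.dfs.core.windows.net/orders/")

# Remove exact duplicate rows
deduped = orders.dropDuplicates()

# Window function: keep only the most recent record per customer
latest_per_customer = Window.partitionBy("customer_id").orderBy(F.col("order_timestamp").desc())

latest_orders = (
    deduped.withColumn("rn", F.row_number().over(latest_per_customer))
           .filter(F.col("rn") == 1)
           .drop("rn")
)

# Persist the result so the DWH side only has to recompose it for reporting
latest_orders.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/orders_latest/"
)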

For my part, I am really looking forward to the new possibilities of this platform!
