The Lambda architecture splits data flows into two components: data is received centrally, with as little processing as possible, before the stream is copied and split into two streams, namely the real-time and batch layers. Within Azure we have, for example, IoT Hub, which is capable of duplicating these streams at the entry point.
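The fan-out at the entry point can be sketched as follows. This is a minimal, purely conceptual Python example: the two in-memory sinks and the `ingest` function are hypothetical stand-ins for what IoT Hub routing would do in a real deployment.

```python
import json

# Hypothetical in-memory sinks standing in for the two layers;
# in Azure, IoT Hub routing would play this role.
batch_sink = []      # raw events, kept for later batch processing
realtime_sink = []   # the same events, consumed immediately by the streaming job

def ingest(event: dict) -> None:
    """Central entry point: do as little as possible, then duplicate."""
    batch_sink.append(json.dumps(event))  # keep the raw form for the batch layer
    realtime_sink.append(event)           # hand the same event to the real-time layer

ingest({"device": "sensor-1", "temp": 21.5})
ingest({"device": "sensor-2", "temp": 19.8})
```

The key point is that the entry point does no analysis at all; it only duplicates, so both layers see identical data.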
The batch layer collects the data in its raw form and then regularly (hence the name batch) processes this data and writes the results to a data store. Because having only a data store would limit the usability of the data somewhat, most of the time we build data marts on top of this store, greatly enhancing end-user usability.
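A single batch run can be sketched as follows. This is a hedged, self-contained example: the raw events, the `run_batch` function, and the average-per-device aggregation are all illustrative assumptions, standing in for whatever loading and enrichment logic a real batch pipeline would use.

```python
import json
from collections import defaultdict

# Raw events as the batch layer would find them in the data store (assumed format)
raw_events = [
    '{"device": "sensor-1", "temp": 21.5}',
    '{"device": "sensor-1", "temp": 22.1}',
    '{"device": "sensor-2", "temp": 19.8}',
]

def run_batch(raw):
    """One batch run: parse everything collected so far, aggregate,
    and emit mart-ready results."""
    readings = defaultdict(list)
    for line in raw:
        event = json.loads(line)
        readings[event["device"]].append(event["temp"])
    # The "data mart": average temperature per device
    return {device: sum(t) / len(t) for device, t in readings.items()}

mart = run_batch(raw_events)  # maps each device to its average temperature
```

Events that arrive while `run_batch` is executing are simply not in `raw_events` yet; they are only picked up on the next run, which is exactly the latency discussed below.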
The Data Store
Because this data store will be used for what we would call the classic BI tasks, it should support many reads and probably needs some kind of index, but ideally it would not have to care about trivial things like error handling, constraints, locking, consistency issues or, even worse, transactions. This simplifies the store significantly. An example of such a system is Azure Data Lake.
The Data Mart
Within Azure we have great solutions for data marts, like Azure SQL Database, Azure Analysis Services or, for the really big sets, Azure SQL Data Warehouse. Because these solutions look and feel like their on-premises cousins, the end-user experience is very familiar and adoption is swift.
Pros & Cons of the batch layer
The inherent issue with batch processing is that it takes time, and adding steps to feed the data marts increases the time needed even further, so depending on the loading algorithms, the above process may take hours or days. All new data that arrives after processing has started will only be included when the batch process runs again. The good thing about the batch layer is that it is very proficient at enriching and/or preparing data into a usable form, making this layer extremely well suited for analysis, machine learning and/or BI questions.
To cope with the drawbacks of the batch layer (data always being too late, or already old, at the time of processing), the Lambda architecture also has the real-time layer, where data is treated in a sliding-window scenario and events or alerts are raised as data arrives.
Azure Stream Analytics integrates seamlessly with Azure IoT Hub and the Azure IoT Suite to enable powerful real-time analytics on data from your IoT devices and applications. Traditionally, analytics solutions have been based on capabilities such as ETL (extract, transform, load) and data warehousing, where data is stored prior to analysis. Stream Analytics instead starts with a source of streaming data and examines the stream through an analytics job that specifies where the data comes from, can include a transformation, and can even look for data, patterns or relationships.
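The shape of such a job, transform each event as it arrives and look for a pattern, can be sketched in plain Python. Note that a real Stream Analytics job is written in a SQL-like query language; this sketch only mirrors the structure, and the threshold, field names and Fahrenheit conversion are illustrative assumptions.

```python
THRESHOLD = 25.0  # hypothetical alert threshold

def streaming_job(events):
    """Process events as they arrive instead of storing first and analyzing later:
    apply a transformation, then check for a pattern and yield alerts."""
    for event in events:
        fahrenheit = event["temp"] * 9 / 5 + 32   # the transformation step
        if event["temp"] > THRESHOLD:             # the pattern to look for
            yield {"device": event["device"], "alert": "spike", "temp_f": fahrenheit}

stream = [
    {"device": "sensor-1", "temp": 21.5},
    {"device": "sensor-2", "temp": 27.3},
]
alerts = list(streaming_job(stream))  # only the spike on sensor-2 raises an alert
```

Because `streaming_job` is a generator, each alert is available the moment its event is consumed; nothing waits for a batch run.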
Pros & Cons of the streaming layer
The data that has to be stored in the real-time layer is very manageable, because it only has to store and serve a sliding window of data, which needs to be roughly as long as the batch process takes. It does not even have to be fully accurate, because in many cases its estimated results will be replaced, or complemented, by the highly precise results coming from the batch layer within a (reasonably) short period.
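The sliding-window retention itself can be sketched in a few lines. The window length of one hour is a hypothetical stand-in for "roughly one batch cycle", and `add_event` is an illustrative helper, not part of any Azure API.

```python
from collections import deque

WINDOW_SECONDS = 3600  # hypothetical: roughly the duration of one batch run

window = deque()  # (timestamp, event) pairs, oldest first

def add_event(now: float, event: dict) -> None:
    """Store the new event and evict anything that has fallen out of the window."""
    window.append((now, event))
    while window and window[0][0] < now - WINDOW_SECONDS:
        window.popleft()

add_event(0.0,    {"device": "sensor-1", "temp": 21.5})
add_event(1800.0, {"device": "sensor-2", "temp": 19.8})
add_event(4000.0, {"device": "sensor-1", "temp": 22.0})  # evicts the event at t=0
```

Because eviction happens on every insert, the layer's storage stays bounded by the window size, no matter how long the system runs.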