In this blog post, you'll see how to manage files in OneLake programmatically using the Python Azure SDK. Very little coding experience is required, and yet you can build some very useful automations this way.
I started testing this approach while building a proof of concept. I was working on a virtual machine where database tables were exported to Parquet files on a daily basis. These Parquet files needed to be moved to OneLake. However, at the time of writing, the on-premises data gateway for pipelines in Fabric was not yet fully supported.
Instead of using a pipeline, I started thinking about uploading the files directly to OneLake from the machine and then continuing the processing in Fabric.
The process for doing this is described below.
Step 1: Create an app registration (service principal) in Azure and create a client secret.
Write down the tenant_id, client_id and client_secret. A detailed guide on how to create the service principal can be found here: https://learn.microsoft.com/en-us/entra/identity-platform/quickstart-register-app
Step 2: Add the service principal as contributor to your Fabric workspace.
Step 1: First, create a JSON file with the Service Principal credentials. I put this file in the config folder of my project.
{
    "tenant_id": "<tenant_id>",
    "client_id": "<client_id>",
    "client_secret": "<client_secret>"
}
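Before moving on, an optional sanity check (not part of the original setup) is to verify that the file parses and contains the three expected keys, so that a typo does not surface later as a cryptic authentication error:

import json

# Optional check: the file must contain tenant_id, client_id and client_secret.
with open("config/service_principal.json") as f:
    config = json.load(f)

missing = [k for k in ("tenant_id", "client_id", "client_secret") if k not in config]
if missing:
    raise ValueError(f"service_principal.json is missing keys: {missing}")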
Step 2: Next, import the dependencies. Make sure they're installed in the Python environment you'll be using (they come from the azure-storage-file-datalake and azure-identity packages). From the Azure SDK we import the package to manage the data lake (OneLake), as well as the package for authenticating the Service Principal with a client secret. Lastly, the json package is imported to read the config file created in step 1.
from azure.storage.filedatalake import DataLakeServiceClient
from azure.identity import ClientSecretCredential
import json
Step 3: Make a Credential object for the Service Principal.
config = json.load(open("config/service_principal.json"))

credential = ClientSecretCredential(
    tenant_id=config.get('tenant_id'),
    client_id=config.get('client_id'),
    client_secret=config.get('client_secret')
)
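As a side note: if you would rather not keep the secret in a JSON file on disk, the same credential can be built from environment variables instead. This is just an optional variation on the step above; the variable names below are a common convention, not something the SDK requires:

import os
from azure.identity import ClientSecretCredential

# Assumption: AZURE_TENANT_ID, AZURE_CLIENT_ID and AZURE_CLIENT_SECRET are set
# in the environment of the process running this script.
credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_TENANT_ID"],
    client_id=os.environ["AZURE_CLIENT_ID"],
    client_secret=os.environ["AZURE_CLIENT_SECRET"],
)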
Step 4: Put the names of the workspace, the lakehouse and the target folder in variables so they can easily be reused.
workspace = '<Name of the Fabric workspace>'
lakehouse = '<Name of the lakehouse in the Fabric workspace>'
files_directory = '<Name of the folder under Files in the Fabric lakehouse>'
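As a small optional convenience (not part of the original snippets), you could also build the path to that folder once and reuse it in the examples that follow:

# Hypothetical helper variable: all examples below address paths of the form
# '<lakehouse>.Lakehouse/Files/<files_directory>/...'
base_path = f'{lakehouse}.Lakehouse/Files/{files_directory}'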
Step 5: Create a DataLakeServiceClient object. This object works at the OneLake level. Next, use it to create a FileSystemClient object, which works at the workspace level. Once we have this, the preparation is done and we can start working with the files.
service_client = DataLakeServiceClient(account_url="https://onelake.dfs.fabric.microsoft.com/", credential=credential)
file_system_client = service_client.get_file_system_client(file_system=workspace)
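To check that the connection and the permissions are in order, one quick optional test is to list what the Service Principal can see at the root of the workspace; assuming the contributor role from the preparation is in place, this should include the lakehouse:

# Optional sanity check: list the items at the root of the workspace.
for item in file_system_client.get_paths(recursive=False):
    print(item.name)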
Below are some examples of what we can do with our FileSystemClient object.
Example 1: List all files and folders under a specific path in OneLake
paths = file_system_client.get_paths(path=f'{lakehouse}.Lakehouse/Files/{files_directory}')

for path in paths:
    print(path.name)
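get_paths returns both files and folders. If you only want the files (for example, to build a list of Parquet files to process), a small variation is to check the is_directory flag on each returned path:

# Variation: print only the files, skipping folders.
paths = file_system_client.get_paths(path=f'{lakehouse}.Lakehouse/Files/{files_directory}')
for path in paths:
    if not path.is_directory:
        print(path.name)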
Example 2: Create a new (sub)folder on OneLake
new_subdirectory_name = 'test'
directory_client = file_system_client.create_directory(f'{lakehouse}.Lakehouse/Files/{files_directory}/{new_subdirectory_name}')
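If you want to avoid touching folders that are already there, a small variation is to check for the folder first:

# Variation: only create the folder when it does not exist yet.
directory_path = f'{lakehouse}.Lakehouse/Files/{files_directory}/{new_subdirectory_name}'
directory_client = file_system_client.get_directory_client(directory_path)
if not directory_client.exists():
    directory_client.create_directory()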
Example 3: Upload a file to OneLake
vm_file_path = r'C:\test\onelake\vm_test.csv'
onelake_filename = 'onelake_test.csv'

directory_client = file_system_client.get_directory_client(f'{lakehouse}.Lakehouse/Files/{files_directory}/test')
file_client = directory_client.get_file_client(onelake_filename)

with open(file=vm_file_path, mode="rb") as data:
    file_client.upload_data(data, overwrite=True)
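In my proof of concept I did not upload a single file but a whole folder of exports. A minimal sketch of that loop could look like this; the local folder path and the *.parquet pattern are assumptions for illustration:

from pathlib import Path

local_folder = Path(r'C:\test\onelake')   # hypothetical local export folder
directory_client = file_system_client.get_directory_client(f'{lakehouse}.Lakehouse/Files/{files_directory}/test')

# Upload every Parquet file in the local folder, keeping its file name.
for local_file in local_folder.glob('*.parquet'):
    file_client = directory_client.get_file_client(local_file.name)
    with open(local_file, mode="rb") as data:
        file_client.upload_data(data, overwrite=True)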
Example 4: Download a file from OneLake
onelake_filename = 'onelake_test.csv'
vm_file_path = r'C:\test\onelake\download_onelake_test.csv'

directory_client = file_system_client.get_directory_client(f'{lakehouse}.Lakehouse/Files/{files_directory}/test')
file_client = directory_client.get_file_client(onelake_filename)

with open(file=vm_file_path, mode="wb") as local_file:
    download = file_client.download_file()
    local_file.write(download.readall())
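readall() pulls the whole file into memory before writing it to disk. For large files, a hedged variation is to stream the download in chunks instead; this reuses the same clients as the example above:

# Variation: write the file to disk chunk by chunk instead of all at once.
with open(file=vm_file_path, mode="wb") as local_file:
    download = file_client.download_file()
    for chunk in download.chunks():
        local_file.write(chunk)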
Example 5: Append to a CSV file on OneLake
onelake_filename = 'onelake_test.csv'
text_to_be_appended_to_file = b'append this text!'

directory_client = file_system_client.get_directory_client(f'{lakehouse}.Lakehouse/Files/{files_directory}/test')
file_client = directory_client.get_file_client(onelake_filename)

file_size = file_client.get_file_properties().size
file_client.append_data(text_to_be_appended_to_file, offset=file_size, length=len(text_to_be_appended_to_file))
file_client.flush_data(file_size + len(text_to_be_appended_to_file))
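Appending is a two-step operation: append_data stages the bytes at the right offset, and flush_data commits them. Because the example assumes the file already exists, a small hypothetical helper could also create it first (the function name is mine, not part of the SDK):

def append_to_onelake_file(file_client, data: bytes):
    # Create the file first if it is not there yet, then append and commit.
    if not file_client.exists():
        file_client.create_file()
    file_size = file_client.get_file_properties().size
    file_client.append_data(data, offset=file_size, length=len(data))
    file_client.flush_data(file_size + len(data))

append_to_onelake_file(file_client, b'another line of text!')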
Example 6: Delete a file from OneLake
onelake_filename = 'onelake_test.csv'

directory_client = file_system_client.get_directory_client(f'{lakehouse}.Lakehouse/Files/{files_directory}/test')
file_client = directory_client.get_file_client(onelake_filename)
file_client.delete_file()
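delete_file raises an error when the file is not there, so if the file may or may not exist (for example when cleaning up after a previous run), you could guard the call:

# Variation: only delete the file when it actually exists.
if file_client.exists():
    file_client.delete_file()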
Example 7: Delete a directory from OneLake
directory_client = file_system_client.get_directory_client(f'{lakehouse}.Lakehouse/Files/{files_directory}/test')
directory_client.delete_directory()
Let's go back to the setup of my proof of concept. From my virtual machine, I uploaded the exported tables to a folder on OneLake named 'Exports'. Each table had its own subfolder. It looked like this:
Files/
    Exports/
        <one subfolder per exported table>/
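Putting the building blocks from the examples together, the daily upload from the virtual machine could look roughly like the sketch below. The local folder layout (one subfolder per table under C:\exports, each containing Parquet files) is an assumption for illustration; the OneLake side follows the Exports structure shown above:

from pathlib import Path

local_exports = Path(r'C:\exports')   # hypothetical local export root

for table_folder in (p for p in local_exports.iterdir() if p.is_dir()):
    # One OneLake subfolder per table, created on the fly when missing.
    onelake_dir = file_system_client.get_directory_client(
        f'{lakehouse}.Lakehouse/Files/Exports/{table_folder.name}')
    if not onelake_dir.exists():
        onelake_dir.create_directory()

    # Upload every Parquet file exported for this table.
    for parquet_file in table_folder.glob('*.parquet'):
        file_client = onelake_dir.get_file_client(parquet_file.name)
        with open(parquet_file, mode="rb") as data:
            file_client.upload_data(data, overwrite=True)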
To expose the data in Lakehouse tables, I made a very simple pipeline. It starts by listing all the exported tables by looking at the Exports folder. Next, a ForEach loop with a Copy activity copies the data into a Lakehouse table. In my setup, the tables are overwritten on every load.
The structure of the pipeline is shown below:
As you can see, it's very easy to manage files on OneLake with Python, especially if you have used the Python Azure SDK for data lakes before. OneLake can be approached in so many ways that there is a lot of room for creativity when designing your architecture. I would definitely recommend playing around with it. Have fun!
Johan Hostens, Data Wizard at Kohera.