For a customer, I recently migrated one of their databases from 2012 to 2017, whilst putting compression on all the indexes. The process of recreating the Indexes on 8 new filegroups took 3 hours for a 400GB database.
Many of you are, hopefully, starting up projects to upgrade your current SQL Server to the shiny new SQL Server 2017. One of the first things to do is run the Database Migration Assistant (DMA). I'll show you how to export the report as a .json file.
In this blog I will be talking about creating and managing audits in SQL Server. We will first look at what SQL Server Audit is and why you should use it. Then I'll discuss the configuration and activation of SQL Server Audit, and more...
"Basically", Stephanie started, "we have two different options under the umbrella of what we call Always On. On one side we have Always On Availability Groups, and on the other side we have Always On Failover Cluster Instances. Hence the confusion."
Is SSRS on Azure happening? Well, as always in Azure, it depends… If you mean that ‘SQL Server Reporting Services’ is going to be provided on Azure, the answer is NO, as SSRS is a product name as well as a part of SQL Server. But...
When helping a fellow DBA on dba.stackexchange.com, I came across an interesting subject: what about synonyms in Azure SQL DB? We know that synonyms that stay within the current database scope work, but how do we get cross-database synonyms to work?
Recently I needed to have a SQL Server 2008 R2 instance to do some testing. I had to install it on my laptop, but I wasn't a big fan of installing such an old version locally. But I found a solution!
An Azure data warehouse is a real powerhouse. Personally, I consider it to be one of the most performant components that can be used and built into the Lambda architecture. Ludicrous power comes with a drawback, of course, and that drawback is potential inefficiency.
SQL Server 2017 and Azure SQL Database introduced a whole new set of query performance improvements. These modifications are all part of the so-called “adaptive query processing” feature family, which includes three major changes. I'll focus on batch mode adaptive joins.
One of the most challenging tasks in data mining or machine learning is how to get seasonality into the mix. Therefore I created a construct that can insert the seasonality for the Flemish part of Belgium in SQL. With some minor alterations, this can easily be adapted for the Brussels or Walloon regions.
As I started to learn more about SQL, I came across a subject that was new to me: spatial data. I was curious about how it worked, so I thought it might be a good idea to calculate a real-life example. In this case: how many kilometers must the Belgian football team travel to play their matches?
SQL Server has many features that support creating secure database applications. In this blog post I’ll guide you through the latest security features in SQL Server 2017: Common Language Runtime (CLR) security, Dynamic Data Masking, Row-Level Security and Always Encrypted.
SQL Server 2017 came with a couple of enhancements regarding Always On Availability Groups. The most radical change came with the introduction of “Read-scale availability groups” or the support for Availability Groups without an underlying cluster.
A while ago, Microsoft released a preview version of SQL Operations Studio. We've tested the preview version to get acquainted with it.
In October 2017, at the dataMinds conference (formerly SQL User Days), I gave a presentation about Data Lake Analytics. One of my slides contained the following topic: Fight of the Big Data Stores. Quite a nice topic to write something about.
With Analysis Services 2016 and Analysis Management Objects (AMO) we now have some powerful tooling to automate our cubes with, for example, PowerShell. In fact, it's now even (relatively) easy to retrieve all the measures from a cube in just a few seconds.
Securing data is the challenge of the century. There are countless moments and ways in which someone with bad intentions can get hold of your data. Your infrastructure is attacked directly, or users are persuaded to install malicious software.
One of the new features in SQL 2017 is a way to store graph data. The first time I heard the term “graph” I was stunned… Graph data? The first thought that came into my mind was: “Why would you save charts (graphs)?” Nevertheless, that is not what graph data is. When we speak about graph data, we are not speaking about tables anymore...
Summer holidays are coming up: perfect timing to take that moment of peace and quiet you have been waiting for and catch up on some of the most recent SQL updates with our monthly SQL Spot. Once again, our thanks go to the SQL blogging community for its never-ending support and input!
As you all may know, many vendors tend to force you to set the MAXDOP of your SQL Server to 1 (SAP, SharePoint, AX). Today I would like to demonstrate the actual impact on your server of setting MAXDOP to 1.
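For reference, a minimal sketch of the two levels at which MAXDOP comes into play; the table name in the query is an assumed AdventureWorks sample table:

```sql
-- Server-wide: limit every query to a single scheduler (the vendor requirement)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Per query: a hint overrides the server-wide setting for this statement only
SELECT SUM(od.LineTotal)
FROM Sales.SalesOrderDetail AS od
OPTION (MAXDOP 4);
```

The per-query hint is often the escape hatch when a vendor mandates MAXDOP 1 instance-wide but your own reporting queries would benefit from parallelism.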
In SQL Server 2016, Microsoft has provided us with many new features. Besides the numerous nice changes in the SQL Server engine, SSAS and SSRS, attention has also been given to SSIS 2016. These improvements are somewhat less in the spotlight, but that doesn't make them any less worthwhile.
Losing sight of the most relevant updates on SQL Server 2016? Well, there is no need to spend countless hours browsing the internet: we made another SQL Spot summary of the most interesting updates and changes for all SQL lovers.
Securing sensitive data is a critical concern for organizations of all types and sizes. In this blog post you will get an overview of the possibilities, availability, pros and cons, and evolution of Transparent Data Encryption (TDE).
When auditing SQL Server databases, one of the more common problems I see is that people add too many indexes to their tables. This has a significant performance impact on your SQL Server.
It is all about the data these days, and that is exciting. The growth of data technologies is at an all-time high. With our SQL Spot we try to gather some intel for all SQL Server enthusiasts, to feed your professional needs or simply your hungry personal interest.
What is Master Data Management? Let me start by telling you what it is not. MDM is not a pure technological solution, nor a quick, pre-developed GUI to maintain data. To ensure sanitized master data, you need to make the fundamental changes that are required in your business processes.
A colleague of mine recently had an issue when trying to format a variable of the time data type. He was trying to return the time in the HH:mm format, but got a NULL value.
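A minimal repro of the behaviour described (the variable name is illustrative): FORMAT() treats the time data type as a .NET TimeSpan, so date-style format strings such as 'HH:mm' return NULL, while an escaped TimeSpan format, or a cast to datetime, works:

```sql
DECLARE @t time = '14:30:00';

SELECT FORMAT(@t, 'HH:mm');                    -- NULL: 'HH' is not a valid TimeSpan specifier
SELECT FORMAT(@t, N'hh\:mm');                  -- '14:30': TimeSpan custom format, escaped colon
SELECT FORMAT(CAST(@t AS datetime), 'HH:mm');  -- '14:30': datetime accepts the familiar format
```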
SQL Server 2016 has now been available for a couple of months. But did you know about the following new syntax in SQL 2016 and vNext?
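A few examples of the kind of syntax additions meant here (my own selection; DROP ... IF EXISTS requires SQL Server 2016, CREATE OR ALTER requires 2016 SP1, and the object names are made up):

```sql
-- DROP ... IF EXISTS: no more IF OBJECT_ID(...) IS NOT NULL boilerplate
DROP INDEX IF EXISTS IX_Staging_Key ON dbo.Staging;
DROP TABLE IF EXISTS dbo.Staging;

-- CREATE OR ALTER: one statement for both first deployment and later changes
CREATE OR ALTER PROCEDURE dbo.usp_GetRows
AS
BEGIN
    SELECT 1 AS RowCnt;
END;
```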
A SQL Server is only as fast as you tune your workload. One of the things I try to do is tune the workload, i.e. the queries executing on my SQL Server, in order to get a better-performing server.
When you are responsible for databases on a project (as an architect, administrator or developer), you sometimes have to store documents (files) which are linked to structured data. This data often has to be accessible outside SQL Server (Windows APIs). Storing BLOBs in the database, especially those over 1 MB, can consume large amounts of file space and expensive server resources. Within SQL Server and Azure, there are different solutions for this kind of issue.
The growth spurt of SQL Server 2016 and all its related dimensions is hard to follow. Yet with our SQL spot we try to gather some intel for all SQL Server enthusiasts. As always, we try to share some interesting articles, which crossed our paths recently, either professionally or out of personal interest. The gathered community information addresses a healthy BI perspective, ranging from business to technical insights on various topics: SQL Server news, Power BI, SQL Server BI stack, features, data science topics, accidental DBA stuff and interesting learning or study materials.
Big data has become big business. Almost every company has transformed into a digital company these days. But how do you turn large volumes of information to your advantage?
Hi there. Back again with more splendid community articles. Once again I have tried to gather some notable information for you on common SQL Server topics regarding business, BI, data science, Azure, maintenance, tuning, learning and more… As always, all credit goes to the community for putting in all the work.
In a previous blog post you could read what R is, when and for what you can use R, and how to get started with it. Today we'll explain how to use this most widely used open-source programming language to create a statistical analysis and a graphical presentation of your data. Plus, why that is useful.
One of the most prominent questions when working with PaaS databases is how to secure them. In the previous blog post of this defence series, I talked about limiting access to objects through firewalls and encryption. Today I'll discuss protecting and auditing your data. I'll also add some extra resources for a complete overview of the available security features of an Azure SQL Database.
One of the most prominent questions when working with PaaS databases is how to secure them. In this defence series, I will talk about limiting access to objects, protecting your data, and auditing. OK, let's get started with the first layer. Enjoy!
It has been going fast in 2016 for Microsoft BI. If we continue to move on like this, we’re in for a treat in 2017. I want to share some interesting articles, which crossed my path in the past month.
Azure SQL Database provides custom recommendations through the SQL Database Advisor, helping you achieve maximum performance. The SQL Database Advisor makes recommendations for creating and dropping indexes, parameterizing queries, and fixing schema issues.
It all began with a client who wanted to distribute reports with minimal effort and make them usable by non-technical users.
Have you ever needed to use a lookup in SSIS that was not the standard equi-join, with no possibility of doing the lookup directly in the source?
The most commonly created packages are data copy packages from a source DB to a staging target DB. Instead of creating multiple packages (with or without BIML), a metadata-free solution can be built using the .NET SqlBulkCopy class.
Using the .NET SqlBulkCopy class, you can easily create a parameterized package to copy a bunch of tables at once. Instead of creating multiple packages, we can also create a generic package that covers the complete set of tables we want to copy.
ETL processes have a lot of common parts between them. One part that is frequently repeated is a data copy sub-process to copy table data from one server or database to another: sometimes full table copies, in other cases specific data selections. For every table or selection, you create the process in one or several packages, control flows and/or data tasks. There is not much special to say about small tables, but large tables are another matter.
Recently I had a problem on a project where we needed to store documents securely. It was the first time the customer had done such a thing. Telling them this was not so hard, I convinced them to use SQL Server FileTables to store these documents securely in a SQL Server database.
The Chamber of SQL - Part 2. A couple of months after the breakdown of one of their production databases, Stephanie, Marc and Richard worked out a very good plan. Stephanie created a restore script that would automatically be fired on a weekly basis.
What is the buzz about ‘R’? What is R, how can I use it, where do I use it, how do I start? I had the same questions the first time I heard of it. That’s why I'll explain what the possibilities of R are and what you can use it for.
Let me begin by introducing the main characters of this blog. We have Marc, a senior system administrator with a deep technical knowledge about Windows PowerShell, networks and everything related to Active Directory and DNS. There’s also Richard, CTO of the company, a great visionary and in the old days one of the best system administrators the company had. Finally, there is Stephanie, who has been working with SQL Server on various projects for almost ten years, working as an external consultant.
A client asked whether it was possible to launch an SSIS package flow from Excel. The reason was that the data load in question was only used once a year, over a period of just one or two weeks. The actual code needed to do this was no longer than four lines of VBA in Excel.
After publishing my previous blog about making a lookup case insensitive, I received a question from my most loyal reader, my wife. Just like me, she is a BI developer, so that makes her the perfect proofreader. After reading my post, she wondered what would happen if you used a partial cache mode, and the input data didn’t match the casing of the lookup data. So I started investigating…
Scenario: your customer has determined that the SSIS source for his new project is a CSV file with a header row followed by the data rows. Everything was developed and tested successfully. After a few days of testing, however, the customer wants the business user to be able to change the position of the data columns as needed.
Many websites and blog posts will tell you that the SSIS Lookup Component is case sensitive (CS). And while they are correct that there isn’t a direct property to change the case sensitivity of the component, it can nevertheless be used in a case-insensitive way.
Controlling database deployment, managing (measured) deployment to production of database changes and keeping an overview of these changes can be challenging. As an answer to this problem, I’ve been searching for a well-working approach to implement Database Lifecycle Management (DLM), of which the target would be to use this approach on all projects.
Today I was creating a dummy database and wanted to fill it with a certain amount of data. Since I was going to use this database for demo purposes, I created the tables with foreign keys, primary keys and some indexes. I found that the way I was loading data was quite slow, so I decided to load the data using natively compiled stored procedures and in-memory tables, since this should be the fastest way. The result really baffled me!
One of the SQL Server settings with critical impact that is often misconfigured is the maximum memory setting. SQL Server will attempt to take as much memory as possible and is not always eager to release it. This results in the operating system being forced to start actively using the page file. While SQL Server might seem perfectly happy and show good page life and buffer hit ratios, the operating system might in fact be suffering.
Normal day to day maintenance of large data volumes can be difficult. In a previous post I talked about how to handle DBCC CheckDB on a multi-terabyte database. Today I will tackle another issue I came across while dealing with multi-terabyte databases: index maintenance.
Recently I was working for a big international customer who, according to policy, installs only named instances of SQL Server. During a huge project deployment, an obstacle occurred: this customer had hardcoded the SQL connection string, which pointed to the local default instance (even worse: '.').
One of our customers had problems maintaining some monster databases (VLDB). These have about 2 to 4 TB of data and more than 90,000 tables. Building a good maintenance solution for such databases isn't easy. For most of my clients I use the Ola Hallengren solution, which has proved to be excellent before but it just wasn’t suitable here.
SQL Server 2016 has finally been available for a couple of days, and I can't wait to start working with this exciting new version. However, I'm not going to talk about SQL Server 2016 in this blog. I’m going to discuss the management tool we all use for accessing SQL Server databases, because without it, we would not be able to do all of our hocus pocus. This blog is in honour of our little friend called SQL Server Management Studio.
Azure and DevOps have become quite the rage in the information technology world. With the increasing desire for companies to build and deploy code faster, and the need for scalability, the DevOps philosophy gets more focus as a path to accomplishing these goals. However, applying DevOps in a cloud environment means a shift in the way IT pros have been working over the years.
Not everybody tends to display information in data warehouses in the same manner. For this reason, we have created four so-called special dimension members to avoid confusion and establish some uniform ground rules.
Today I’ll be handling an issue I had when migrating a SQL Server 2000 database to SQL Server 2012. We migrated the database from the old, grumpy, and more importantly, unsupported Windows Server 2003 and SQL Server 2005 instance. All was well, and we had no instant issues when users started working with the application. The migration was seamless, we did not expect any future problems. Even the upgrade advisor did not give us any errors!
If you would ask me what one of the most forgotten but essential elements of database security and stability is, it would be keeping all of the software involved in the database environment up to date. That might seem trivial, but experience shows that in a lot of environments this is often positioned in “the gray area”, especially when a customer has a lot of third-party vendors bringing their own databases. In this time of exploit kits enabling even the non-technically savvy to launch sophisticated attacks, patch management takes on a new importance, especially when you consider that getting access to any database is considered hitting the jackpot by a hacker.
Just like Windows, a freshly installed SQL Server is more of a generalist than a specialist. Trace flags are the road to follow when you as a DBA want to tune your SQL Server into either an F1 car for OLTP or an Optimus Prime monster-truck-like Australian road train for DWH queries. Both are extremely powerful in their own field, but the key is to use that power correctly.
As BI consultants we regularly get the question to load one or more Excel files. Files tend to have different layouts, almost always consist of multiple worksheets and columns often contain different data types, making it more complex to load an Excel file than it would initially appear. After lots of #$@&%*! I grew tired of being frustrated, so I started looking for a solution that would simplify loading intricate Excel files. Here's what I came up with.
This blog post is an addendum to my previous blog, “SQL 2016 - 5 (Real) reasons to upgrade, part 2”, where I explain StretchDB. We know that StretchDB is an awesome feature, but what will it cost and how does it compare with your local SAN storage?
In the run-up to the new release of SQL 2016, I would like to share some research I did on the new features that will be available. Last time I talked about Dynamic Data Masking. This time I will discuss another very interesting new feature, namely Row-Level Security. We know Row-Level Security at the Analysis Services level, but Microsoft has now also implemented it at the Database Engine level.
My personal favorite reason to upgrade to SQL 2016: StretchDB. Stretch Database provides cost-effective availability for cold data: stretch warm and cold transactional data dynamically from SQL Server to Microsoft Azure. Unlike typical cold data storage, your data is always online and available to query.
In my role as a consultant, many customers ask me for real reasons to upgrade their existing SQL platforms to the next iteration of SQL Server. As migrating their existing environments always contains a risk, they are often looking for good reasons to upgrade or to stay at their current build level.
Whenever I go to a burger restaurant and want to order myself a nice, tasty burger, the first thing I do is look at the menu and ask myself: “Where is the bacon?”. I did the same thing while looking at the menu of the new up-and-coming SQL Server 2016, and found the nice tasty bacon in the form of a feature called Query Store!
1. Above all, install Microsoft SQL Server with much fun and devotion.
2. Preconfigure TempDB, not leaving the defaults, for better throughput and performance.
3. One shall protect their data and back it up as if it were the Chosen One.
4. Test, test and test the restoring of the data to anticipate future disasters; it might be your salvation.
Unfortunately, SQL Server Management Studio doesn’t provide a simple way to script your resource pools and workload groups. If you want to do this, you’ll first have to script all the groups, and then the pools, one by one, in the correct order. To recreate them you have to do the same, but in reverse order.
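To sketch what has to be scripted by hand: pools must exist before the groups that reference them, and a reconfigure applies the result (all names and percentages below are illustrative):

```sql
-- 1. Pools first: workload groups reference them
CREATE RESOURCE POOL ReportPool
WITH (MAX_CPU_PERCENT = 50, MAX_MEMORY_PERCENT = 40);

-- 2. Then the workload groups bound to those pools
CREATE WORKLOAD GROUP ReportGroup
WITH (IMPORTANCE = LOW)
USING ReportPool;

-- 3. Apply the new Resource Governor configuration
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

Dropping follows the reverse order: groups first, then the pools they use.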
In the run-up to the new release of SQL 2016, I would like to share some research I did on the new features that will be available. Today I want to talk about Dynamic Data Masking. It is a nifty, small but quite interesting new feature.
In the third part of the misconfiguration chronicles we will handle the database files. If you create a database, SQL Server will look at your model database and create the file structure as specified there. By default, this means you will get 1 data file with a size of 4 MB and 1 log file with a size of 1 MB. These files can grow automatically through autogrowth, where the data file grows in steps of 1 MB and your log file increases by 10% on each growth.
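Those defaults can be fixed once in the model database, so every new database inherits sensible sizes; the sizes below are illustrative, not a recommendation for every workload:

```sql
-- model's logical file names are fixed: modeldev (data) and modellog (log)
ALTER DATABASE model
MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);

ALTER DATABASE model
MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);
```

Fixed-size growth increments avoid the ever-larger jumps (and long log zeroing pauses) that percentage-based growth causes as files get big.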
Als je naar het ziekenhuis gaat voor een behandeling, wil je vooral weten hoe lang je er zal moeten blijven en hoeveel het zal kosten. Ook het ziekenhuis wil dit weten om de capaciteitsplanning, het dienstrooster en de financiële planning te verbeteren. Het Sint-Rembertziekenhuis in Torhout werkt daarom aan een nieuwe reeks rapporten om een beter overzicht te krijgen van de eigen activiteiten, op basis van de bestaande gegevens.
In the first part of the SQL Server Misconfiguration Chronicles we handled the database-level issue of AutoClose & AutoShrink. The second part handles the default memory settings of a freshly installed SQL Server instance. By default, SQL Server configures the instance's maximum memory setting to 2,147,483,647 MB, which is about 2 petabytes.
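Capping the setting is a one-liner; the right value depends on what the OS and other services on the box need, so the 12 GB below (for, say, a 16 GB server) is only an example:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Leave headroom for the operating system: e.g. 12 GB of a 16 GB server
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```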
In the SQL Server consulting world you see a lot of different environments, but mostly you see the same basic misconfigurations of SQL Server. The following blog posts will show you the most common configuration issues: what the issue is, why it is a bad configuration, and how you can solve it.
One of the challenges of using the free Express editions of SQL Server is the lack of the SQL Agent. This means that creating a functioning maintenance plan is a bit more of a challenge than on a regular SQL Server instance. Luckily for us, Ola Hallengren created his maintenance solution in such a way that it can easily be altered to make it usable for SQL Server Express editions.
In the Microsoft BI world, SSRS is a well-integrated product, and we can find SSRS reports in many companies. A question we often encounter as BI consultants is how to render some of these SSRS reports on a scheduled basis and save them to a shared location or email them as a PDF file. Subscriptions can do this for you. However, with subscriptions we miss some flexibility, like dynamic report names or sending the rendered report only when certain conditions are met.
SSIS has so many properties that some are overlooked or ignored. In this article we will have a closer look at one of them: the “cache mode” setting of the Lookup Component. As the name indicates, the “Cache mode” lets you choose which form of caching the Lookup component will use. The available modes are Full, Partial and None. By default “Full Cache” mode is selected. This is a good default value, but there are of course situations in which you might want to change it, otherwise there would be no need for the other modes.
Close to Halloween we noticed that some SQL Servers had started acting spooky, as if they wanted to add to the sentiment of Halloween. Last week I had one SQL Server doing exactly that. The server had been running perfectly for months, but that day its TempDB was starting to fill up. Of course this required some investigation, and I found that the version store was the cause of my unexpected TempDB growth. The strange thing was that one database was taking up almost all of my TempDB space in the version store, but it did not have any open transactions. Spooky stuff indeed!
Many years ago, Microsoft released SQL Server 2005, an important release after SQL Server 2000. Today, a decade later, Microsoft is about to end extended support for SQL Server 2005 on April 12th, 2016. SQL Server 2005 will be end of life; no more updates will be released. When running SQL Server 2005, you will be exposed to potential bugs and security threats.
Working with an electronic patient record makes life much easier for everyone at the Ziekenhuis Oost-Limburg. It does have one important implication: whatever happens, the digital data must not be lost. The law also requires patient data to be kept for at least 30 years. That is why the hospital went looking for a solution that would give them full long-term guarantees at the lowest possible cost.
When deploying to the cloud, one of the hardest challenges is to get tempdb to scale in the same way as your virtual server. Why is this? Well, as cloud machines are scalable and can thus change configuration during their lifecycle, the number of tempdb files should scale accordingly, so it becomes a dynamic value. Luckily for us, this parameter only changes after a shutdown and restart of your virtual machine. So if we had a script that could adjust tempdb every time the configuration changes, we could deploy it and be more at ease.
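The core of such a script boils down to comparing the CPU count with the current number of tempdb data files and adding files until they match. A simplified sketch; the drive path, file sizes and the one-file-per-core-capped-at-8 rule of thumb are assumptions you should adapt:

```sql
DECLARE @cpu   int = (SELECT cpu_count FROM sys.dm_os_sys_info);
DECLARE @files int = (SELECT COUNT(*) FROM tempdb.sys.database_files
                      WHERE type_desc = 'ROWS');

-- Common guidance: one data file per core, capped at 8
WHILE @files < CASE WHEN @cpu > 8 THEN 8 ELSE @cpu END
BEGIN
    DECLARE @sql nvarchar(max) =
        N'ALTER DATABASE tempdb ADD FILE (NAME = tempdev'
        + CAST(@files + 1 AS nvarchar(10))
        + N', FILENAME = ''T:\TempDB\tempdb'
        + CAST(@files + 1 AS nvarchar(10))
        + N'.ndf'', SIZE = 1024MB, FILEGROWTH = 256MB)';
    EXEC (@sql);
    SET @files += 1;
END;
```

Run at machine startup, this keeps the file count in step with whatever size the VM was scaled to.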
Master Data Services is one of the hidden gems in the Microsoft BI Tool stack. As the name suggests the idea is to use the tool to technically support Master Data Management. But what we see in real life is that Master Data Services, although being a good tool, isn’t up to standards compared to the master data management tools offered by other big vendors like IBM and SAP. This doesn’t mean that you shouldn’t use Master Data Services (MDS). MDS has a big added value in lots of Business Intelligence and Data Warehousing projects.
More and more, we as consultants get the question whether it is useful to virtualize SQL Server. Apart from the licensing story, where virtualization alone has an enormous impact, there are also technical reasons to virtualize or not. This is especially true when the existing environment has many small physical servers, or a few large machines with multiple instances. In this post we list the pros and cons of virtualizing SQL Server. Monday 28 September 2015 SQL Server
This is a question I have often been asked lately by customers seeking advice on which direction to take with their Business Intelligence. Since the acquisition of Datazen and the big Power BI update last summer, all the puzzle pieces are falling into place, and we can speak of a complete BI platform that is very competitive with the other players in the market.
There comes a time in every SQL Server DBA’s life when they will have to migrate an MS Access database to SQL Server. It's inevitable, and destiny decided that it was my turn today. A heavily used Access database which had almost grown beyond its capacity sealed my fate: it had about 1.2 GB of data and had become extremely slow at executing queries, something we could no longer ignore. This was the first Access database I was going to migrate, so I started by doing some research on the good old internet. Many people suggest that making SSIS packages is the easiest way to do this, while others recommend using OPENROWSET or OPENDATASOURCE. I chose the OPENROWSET technique.
There is a large variety of possible data sources from which we can extract data, and that number is not likely to decrease. The ETL path that has to be taken for each source can vary heavily depending on all kinds of factors, including application architecture, company security, departmental policies, etcetera. Middle-sized to large companies tend to use multiple applications from which data can be extracted to feed the greedy monster that is BI.
Imagine your client gives you an Excel file as the SSIS source for a new project. When you load the data you find that the data type is not aligning as expected and some of the rows are not filled with the data provided. You then try to change the input data type in the Excel source component, but you can’t seem to get the output you want. So what's the problem? Well, let’s take a look at the input.
While writing my SQL Server template installer, I ran into a big problem – I had to determine if my server was a core edition or not, based on the Windows edition. The biggest challenge was that it had to be as OS-independent as possible. Our customers run several OSs, so I needed something that could run on Windows 2008 R2, Windows 2012 R2 and be future-proof for Windows 2016.
The uniqueidentifier data type is a 16-byte binary value used as a globally unique identifier (GUID). Developers use it in database models to assign identifiers that must be unique across a network of many computers at many sites. However, developers also use the uniqueidentifier data type inappropriately, assuming it is always the best way to enforce uniqueness in their data model.
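One concrete reason the "always use GUIDs" assumption hurts: NEWID() values are random, so a clustered key on them causes page splits and fragmentation, whereas NEWSEQUENTIALID() generates ever-increasing values per machine. A small illustration with made-up table names:

```sql
-- Random GUIDs: every insert lands on a random page -> page splits, fragmentation
CREATE TABLE dbo.Orders_Random (
    Id uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
    Payload varchar(100) NULL
);

-- Sequential GUIDs: inserts are (mostly) appended at the end of the index
-- (NEWSEQUENTIALID() is only allowed in a DEFAULT constraint)
CREATE TABLE dbo.Orders_Sequential (
    Id uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY CLUSTERED,
    Payload varchar(100) NULL
);
```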
Today I came across an issue with my Central Management Server. I have a database on my CMS which has a few tables with details about the server state (memory, disk space, etc.). I wrote a few stored procs to gather this data using openrowset in combination with the server list obtained from the msdb on the CMS.
Wondering where to start with your dinosaur database servers that are still floating around somewhere in your highly state-of-the-art environment? Oh yes, everyone else may have forgotten about them, but these machines still exist. While everybody is happily doing their job, you still need to maintain the old servers. After all, you're the DBA and your job is to keep these machines alive.
Cloud computing is fully mature, so we believe that many of your SQL Server systems could be running more efficiently in the cloud. We are convinced that a cloud model, whether hybrid or full cloud, is in fact the best solution for most of our customers. The only questions that remain are: when are you going to migrate and with which model?
With this post I want to show you how to make a Central Management Server (CMS) selective for different user accounts, so that you can use public and management roles and only show the servers needed to do your job. This enables you to maintain just one CMS, yet show different servers (and even groups) depending on your role (operator, junior DBA, sys DBA, …). These roles are limited only by your needs and imagination. First, I looked into row- and cell-level security.
The goal of this blogpost is to transfer a dataset from the Adventureworks DB and query it in the same way that you would on your own DWH, but running on a DocumentDB NoSQL database. This blogpost is not intended to compare any RDBMS based data warehousing to Document-based data warehousing. It’s purely to demonstrate that you can actually migrate a dataset from a pure RDBMS data structure and transfer it into a document-based NoSQL Engine and run data warehouse queries.
Today I want to show you another powerful way to use your Central Management Server. The CMS can also be used to execute OPENROWSET queries over your complete SQL Server Farm. Now I can hear you all say: “Why would you want to do that?” and “OPENROWSET, you know that you shouldn’t implement that for queries that have to run frequently!”. And to be honest, in most of the cases you’d be right. But it can come in extremely useful in some situations.
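For reference, an ad hoc OPENROWSET query of the kind meant here; the remote server name is made up, and note that the 'Ad Hoc Distributed Queries' option must be enabled first:

```sql
-- Allow ad hoc distributed queries (disabled by default)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;

-- Query a remote instance ad hoc, without creating a permanent linked server
SELECT r.*
FROM OPENROWSET('SQLNCLI',
                'Server=REMOTESRV01;Trusted_Connection=yes;',
                'SELECT @@SERVERNAME AS srv, DB_NAME() AS db') AS r;
```

Executed from a CMS server group, such a statement runs once per registered server, which is what makes it useful for farm-wide checks.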
While working on various SQL migrations, these are my findings on how to migrate a database between servers with no downtime on the SQL Server side.
In this blog post I want to explain something weird that happened in a previous SQL Server Integration Services (SSIS) project. A lot of different SSIS packages had to be developed. Often very similar code was used within the same package.
It all started about a month ago, when I got my first big assignment as a junior. The assignment was quite simple: design and implement a data warehouse from scratch, based on the production system of an enterprise. Although the description of my assignment was quite simple, putting it into practice proved to be a lot of work.
When I was at one of our customers a few weeks ago, they had some questions about the rebuild index process. On some servers it caused big TLog growths, and on other servers, it didn’t. So I did a quick investigation on the differences in versions and options.
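The difference largely comes down to the recovery model and rebuild options in play: under SIMPLE or BULK_LOGGED, a rebuild is minimally logged, while under FULL it is fully logged and can blow up the transaction log. A sketch with illustrative object names:

```sql
-- Minimally logged rebuilds require SIMPLE or BULK_LOGGED recovery
ALTER DATABASE DemoDb SET RECOVERY BULK_LOGGED;

-- SORT_IN_TEMPDB moves the sort work out of the user database
ALTER INDEX ALL ON dbo.BigTable
REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = OFF);
```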
A data warehouse is a relational database just like any other relational database. However, the database model is designed in a different way. It should facilitate business users querying large amounts of data, but also let them explore the data from different points-of-view by means of BI applications. Six steps will guide you through the design process and address some key fundamentals in data warehouse modeling. Finally, five additional tips will help you save time and money with future enhancements of your data warehouse solution.
What is better: using GUIDs or integer values? This has been an age-old (religious) debate, with advocates in both camps stressing the disadvantages of the other. Both implementations have their advantages and disadvantages. In this blog post, we'll see that the ultimate SQL Server answer is also valid for this debate: it all depends!
This post might sound a bit like a game of Cluedo, but it happened on a production SQL Server… The issue showed itself on a hosted production VM with 64 GB of RAM, where SQL could allocate memory from 32 GB to 55 GB. The agreements with the hosting partner clearly stated that while the CPU could be overcommitted, memory couldn’t. All my alerts suddenly started to show that the machine was using all the available memory and had become completely unresponsive (I couldn’t even open the Event Viewer to read the logs). A remote connection with Management Studio gave me this worrisome error: