
How to import a JSON file into a SQL Server table?

Many of you are, hopefully, starting up projects to upgrade your current SQL Server to the shiny SQL Server 2017. One of the first things to do is run the Data Migration Assistant (DMA) (https://www.microsoft.com/en-us/download/details.aspx?id=53595).

What does it do? It’s nicely put on the Microsoft website:

Data Migration Assistant (DMA) enables you to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality on your new version of SQL Server. It recommends performance and reliability improvements for your target environment. It allows you to not only move your schema and data, but also uncontained objects from your source server to your target server.

Basically, it gives you a list of issues to tackle before you can go ahead with your migration. Not all of these issues are blocking; most are just warnings and won’t prevent you from upgrading to SQL Server 2017. Still, if you’re going to start with an innocent new (SQL) server, why not clean up that dirty stuff along the way?

This article isn’t about how to use the tool, there’s plenty of information on that to be found online, but about how you can make the results more readable.

After DMA has finished the assessment, you’ll get an overview of all the issues found per compatibility level. You also have the option to export this report, which is good to know, because DMA doesn’t save the results for later itself. Once you close DMA, the results are gone.

The report can be exported in two formats: as a .JSON file or as a .CSV file.

I dare you to export it as a CSV file. I tried this, and it resulted in a 6 MB file with 58,000 lines of ‘data’. #Unreadable.

The other option provided is to export it as a .JSON file.

When you open the file in Notepad, you get something like this:


{
  "Name": "DMATest",
  "Databases": [
    {
      "ServerName": "JANVDP-SQL\\TEST",
      "Name": "DMA_Test",
      "CompatibilityLevel": "CompatLevel90",
      "SizeMB": 242420.0,
      "Status": "Completed",
      "ServerVersion": "10.50.6529.0",
      "AssessmentRecommendations": [
        {
          "CompatibilityLevel": "CompatLevel140",
          "Category": "Compatibility",
          "Severity": "Error",
          "ChangeCategory": "BreakingChange",

Still, for the same assessment mentioned earlier, this generated a 2 MB file with almost 19,000 lines of the above syntax. Not very readable either.

I decided to import the JSON file into a SQL Server table.

First I created a database, DMA, and the following table:


USE [DMA]
GO

DROP TABLE IF EXISTS DMA.dbo.ReportData
GO

SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[ReportData](
	[ImportDate] [datetime] NULL,
	[ServerName] [nvarchar](255) NULL,
	[DatabaseName] [nvarchar](255) NULL,
	[CurrentCompatibilityLevel] [nvarchar](30) NULL,
	[ServerVersion] [nvarchar](255) NULL,
	[TargetCompatibilityLevel] [nvarchar](30) NULL,
	[Severity] [nvarchar](15) NULL,
	[ChangeCategory] [nvarchar](50) NULL,
	[Title] [nvarchar](max) NULL,
	[Impact] [nvarchar](max) NULL,
	[Recommendation] [nvarchar](max) NULL,
	[MoreInfo] [nvarchar](max) NULL,
	[ImpactedObjectName] [nvarchar](255) NULL,
	[ImpactedObjectType] [nvarchar](50) NULL,
	[ImpactDetail] [nvarchar](max) NULL
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

To import your JSON file into the table, put the file in a location that is accessible to your SQL Server instance.

Use the following script to import the data; don’t forget to adjust the location and name of your file:


-- Read the entire JSON file into a variable
DECLARE @OpenJSON varchar(max)

SELECT @OpenJSON=BulkColumn
FROM OPENROWSET (BULK 'C:\Temp\DMA\DMATest.json', SINGLE_CLOB) as j

INSERT INTO DMA.dbo.ReportData
SELECT GETDATE() as ImportDate, ServerName, DatabaseName, CurrentCompatibilityLevel, ServerVersion, TargetCompatibilityLevel, Severity, ChangeCategory, Title, Impact, Recommendation, MoreInfo, ImpactedObjectName, ImpactedObjectType, ImpactDetail
-- First level: one row per assessed database
FROM OPENJSON(@OpenJSON, '$.Databases')
WITH(
		ServerName nvarchar(255) '$.ServerName',
		DatabaseName nvarchar(255) '$.Name',
		CurrentCompatibilityLevel nvarchar(30) '$.CompatibilityLevel',
		SizeMB nvarchar(255) '$.SizeMB',
		Status nvarchar(255) '$.Status',
		ServerVersion nvarchar(255) '$.ServerVersion',
		AssessmentRecommendations nvarchar(max) '$.AssessmentRecommendations' AS JSON
	)
-- Second level: one row per assessment recommendation
CROSS APPLY OPENJSON (AssessmentRecommendations)
WITH(
		TargetCompatibilityLevel nvarchar(30) '$.CompatibilityLevel',
		Category nvarchar(255) '$.Category',
		Severity nvarchar(15) '$.Severity',
		ChangeCategory nvarchar(50) '$.ChangeCategory',
		RuleId nvarchar(255) '$.RuleId',
		Title nvarchar(max) '$.Title',
		Impact nvarchar(max) '$.Impact',
		Recommendation nvarchar(max) '$.Recommendation',
		MoreInfo nvarchar(max) '$.MoreInfo',
		ImpactedObjects nvarchar(max) '$.ImpactedObjects' AS JSON
	)
-- Third level: one row per impacted object
CROSS APPLY OPENJSON (ImpactedObjects)
WITH(
		ImpactedObjectName nvarchar(255) '$.Name',
		ImpactedObjectType nvarchar(50) '$.ObjectType',
		ImpactDetail nvarchar(max) '$.ImpactDetail'
	)
GO
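
If you want a quick sanity check after the import, a small summary query on the ReportData table shows how many issues were found per database and severity. This is just a convenience query I added here, not something DMA gives you:


SELECT [ServerName]
	,[DatabaseName]
	,[Severity]
	,COUNT(*) AS IssueCount -- number of imported recommendation rows
FROM DMA.dbo.ReportData
GROUP BY [ServerName], [DatabaseName], [Severity]
ORDER BY [ServerName], [DatabaseName], [Severity];
GO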

There you go, simple as that.

The only “problem” now is that DMA generates duplicates for a lot of issues, based on the target compatibility level. For example, if the current compatibility level of your database is 90 (SQL Server 2005), it will report the same issue five times: once for each of the target compatibility levels 100, 110, 120, 130 and 140.
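
You can check those duplicates for yourself with a simple count per issue. The grouping on Title and ImpactedObjectName below is just one way of defining ‘the same issue’, so adjust it to your own situation:


SELECT [DatabaseName]
	,[Title]
	,[ImpactedObjectName]
	,COUNT(DISTINCT [TargetCompatibilityLevel]) AS TargetLevels -- how many target compatibility levels repeat this issue
FROM DMA.dbo.ReportData
GROUP BY [DatabaseName], [Title], [ImpactedObjectName]
HAVING COUNT(DISTINCT [TargetCompatibilityLevel]) > 1
ORDER BY TargetLevels DESC;
GO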

To keep only the row with the highest target compatibility level for each issue, you can use this statement:


;WITH cte AS 
(
  SELECT [ImportDate]
	,[ServerName]
	,[DatabaseName]
	,[CurrentCompatibilityLevel]
	,[ServerVersion]
	,[TargetCompatibilityLevel]
	,[Severity]
	,[ChangeCategory]
	,[Title]
	,[Impact]
	,[Recommendation]
	,[MoreInfo]
	,[ImpactedObjectName]
	,[ImpactedObjectType]
	,[ImpactDetail]
	, rn = ROW_NUMBER() OVER (PARTITION BY [ServerName], [DatabaseName], [ImpactedObjectName] 
      ORDER BY [ServerName], [DatabaseName], [ImpactedObjectName], [TargetCompatibilityLevel] DESC)
  FROM [DMA].[dbo].[ReportData]
)
SELECT * 
FROM cte
WHERE rn = 1
ORDER BY [ServerName], [DatabaseName], [ImpactedObjectName], [TargetCompatibilityLevel]; 
GO

You can use the same CTE to remove the duplicate versions, if that’s what you want. Or you can keep the duplicates to show your manager how much work there is to do and get a few more days for your project.
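
If you do decide to remove them, the same CTE can be turned into a DELETE. Take this as a sketch that reuses the ROW_NUMBER partitioning from above; test it on a copy of the table first:


;WITH cte AS 
(
  SELECT [ServerName]
	,[DatabaseName]
	,[ImpactedObjectName]
	,[TargetCompatibilityLevel]
	, rn = ROW_NUMBER() OVER (PARTITION BY [ServerName], [DatabaseName], [ImpactedObjectName] 
      ORDER BY [TargetCompatibilityLevel] DESC)
  FROM [DMA].[dbo].[ReportData]
)
DELETE FROM cte -- removes every version except the highest target compatibility level
WHERE rn > 1;
GO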

The result is 547 records, down from 58,000 lines in the CSV export and 19,000 lines in the JSON file.

Happy new SQL Server upgrade!
