How to fire up Azure Analysis Services

This is my twelfth (12th) post in a series of entry-level posts, as suggested by Tim Ford (b|l|t) in this challenge.
I have not set any scope for the topics of my (at least) twelve posts in this blog category, but I can assure you they’ll be focused on SQL Server stuff. This time it’s going to be about how to create an Azure Analysis Services [AAS] instance via the Azure Portal and connect to it from Power BI.

Azure Analysis Services

Previous Posts in the Entry Level Series:

Preface

One of Microsoft’s biggest announcements at this year’s PASS Summit was Azure Analysis Services. Period. CNTK comes in second, IMO. So, why should you care?
According to Microsoft, this is why:

Based on the proven analytics engine in SQL Server Analysis Services, Azure AS is an enterprise grade OLAP engine and BI modeling platform, offered as a fully managed platform-as-a-service (PaaS). Azure AS enables developers and BI professionals to create BI Semantic Models that can power highly interactive and rich analytical experiences in BI tools such as Power BI and Excel. Azure AS can consume data from a variety of sources residing in the cloud or on-premises (SQL Server, Azure SQL DB, Azure SQL DW, Oracle, Teradata to name a few) and surface the data in any major BI tool. You can provision an Azure AS instance in seconds, pause it, scale it up or down (planned during preview), based on your business needs. Azure AS enables business intelligence at the speed of thought! For more details, see the Azure blog.

ref

If you’re still thinking “Nah, it’s no biggie”, you should read it again. This is huge.

Getting it airborne

In this example, I will only show how to create the AAS instance from the portal, so we are heading there right away. The easiest way is to search in the services search field:

Portal Search

Click the Analysis Services link (as you can see, still in preview) and you will be directed to the, hopefully familiar, service tab/screen in Azure.

Service Tab

Click Add and you will see the configuration tab to the left.

Service Configuration

On naming, there are currently restrictions in place, so you are limited to these basic naming rules:

Naming Convention

Locations supported are currently only West Europe and South Central US.

Locations

Probably the most interesting part (even if you don’t care, someone at a higher pay grade will) is the Pricing Tier. As of now, there are four (4) pricing tiers: D1, S1, S2 and S4 (I see what you did there, Azure Team; the well-known Windows skip-a-version?). Approximate cost units are 1, 15, 30 and 60 respectively; so if D1 costs $50 a month, S4 costs $3,000 – as you can see, forgetting to shut down an S4 instance can quickly build up a bill, so beware.

Once you click ‘Create‘ after setting all the configurations, you will be guided back to the service tab, where you’ll see your instance.

AAS Created
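If you prefer scripting over clicking, the same can be done from PowerShell. Below is a minimal sketch, assuming the preview AzureRM.AnalysisServices module and its New-AzureRmAnalysisServicesServer cmdlet; the subscription, resource group, server name and administrator account are made-up examples, so swap in your own:

# Log in and pick the subscription to work with
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "MySubscription"

# Create the Azure Analysis Services instance (names, location and SKU are examples)
New-AzureRmAnalysisServicesServer -ResourceGroupName "MyResourceGroup" `
    -Name "myaasinstance" `
    -Location "West Europe" `
    -Sku "D1" `
    -Administrator "admin@mytenant.onmicrosoft.com"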

Connecting from Power BI

This requires the latest November update from the Power BI Team.
Before you connect, you need to get the server name. In Azure portal > server > Overview > Server name, copy the entire server name.

Copy Server Name

  1. In Power BI Desktop, click Get Data > Databases > Azure Analysis Services.
  2. In Server, paste the server name from the clipboard.
    Get Data
  3. In Database, if you know the name of the tabular model database or perspective you want to connect to, paste it here. Otherwise, you can leave this field blank. You can select a database or perspective on the next screen.
    Connection
  4. Leave the default Connect live option selected, then press Connect. If you’re prompted to enter an account, enter your organizational account.
  5. In Navigator, expand the server, then select the model or perspective you want to connect to, then click Connect. A single click on a model or perspective shows all the objects for that view.
    Navigator

Voilà – Happy Data Discovery
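If you want to sanity-check the instance outside of Power BI, a small PowerShell sketch using the ADOMD.NET client could look like the following – the server name and model are placeholders, and depending on your client libraries you may be prompted for (or need to supply) Azure AD credentials:

# Load the ADOMD.NET client library
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.AdomdClient") > $NULL

# Connect using the server name copied from the portal (placeholder values)
$ConnStr = "Data Source=asazure://westeurope.asazure.windows.net/myaasinstance;Catalog=MyTabularModel"
$Conn = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection($ConnStr)
$Conn.Open()

# List the cubes/perspectives exposed by the model, then tidy up
$Conn.Cubes | Select-Object Name, LastProcessed
$Conn.Close()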

 


Which Database Servers are hit by my Analysis Services Instance(s)

This is my eleventh (11th) post in a series of entry-level posts, as suggested by Tim Ford (b|l|t) in this challenge.
I have not set any scope for the topics of my (at least) twelve posts in this blog category, but I can assure you they’ll be focused on SQL Server stuff. This time it’s going to be about how to quickly get an overview of an existing environment, in particular Analysis Services (SSAS) to Relational Database (RDBMS) connections.

Previous Posts in the Entry Level Series:

Introduction

Whenever you are introduced to a new environment, either because you visit a new client or take over a new position from someone else, it’s always crucial to get on top of what’s going on. More often than not, any documentation (if you are lucky enough to even get your hands on it) is out of date or not properly maintained. So going through it may end up making you even more confused – or, in the worst case, misinformed.

In a previous engagement of mine, a request came from the Data Architecture team. I was asked to produce a list of all servers and cubes running in a specific environment. They provided the list of servers and wanted to know which ones were hit by running solutions. Along with this information, the team also needed all sorts of details on the connection strings from the Data Source Views, as well as which credentials were used, if possible.

Easy, Peasy, PowerShell – Period.

Solution

I wound up writing a very small PowerShell function leveraging the SSAS AMO API. Needless to say, it was a somewhat baffled Data Architect who had to double-check when I had the results ready within the hour.

Here’s the script

# Load Analysis Management Objects (AMO)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") >$NULL

function Get-CubeMetaData( [string]$Server ) {

	#Create Server Instance
	$Srv = New-Object Microsoft.AnalysisServices.Server
	
	# Create Collection to contain the information
	$Collection = @()
	
	# Regular Expressions to extract Provider, Data Source and User ID from the ConnectionString property
	$DataSourceRegEx = New-Object System.Text.RegularExpressions.Regex ('(?s)(?<=Data\sSource=).*?(?=\;)', [System.Text.RegularExpressions.RegexOptions]::MultiLine)
	$UserNameRegEx = New-Object System.Text.RegularExpressions.Regex ('(?s)(?<=User\sID=).*?(?=\;)', [System.Text.RegularExpressions.RegexOptions]::MultiLine)
	$ProviderRegEx = New-Object System.Text.RegularExpressions.Regex ('(?s)(?<=Provider=).*?(?=\;)', [System.Text.RegularExpressions.RegexOptions]::MultiLine)
	$InitialCatalogRegEx = New-Object System.Text.RegularExpressions.Regex ('(?s)(?<=Initial\sCatalog=).*?(?=\;)', [System.Text.RegularExpressions.RegexOptions]::MultiLine)
	
	# Connect to server passed as parameter
	$Srv.Connect( $Server );
	
	# Loop each Database on Server
	foreach( $Database in $Srv.Databases ) {
		# Loop each Data Source on each Database
		foreach( $DataSource in $Database.DataSources ) {
			
			#Create object to contain attributes
			$obj = New-Object PSObject 
			$obj | Add-Member -MemberType NoteProperty -Name ConnectionStringSecurity -Value $DataSource.ConnectionStringSecurity
			$obj | Add-Member -MemberType NoteProperty -Name CompatibilityLevel -Value $Database.CompatibilityLevel
			$obj | Add-Member -MemberType NoteProperty -Name Database -Value $Database.Name								
			$obj | Add-Member -MemberType NoteProperty -Name Provider -Value $ProviderRegEx.Matches( $DataSource.ConnectionString, 0 )[0].Value
			$obj | Add-Member -MemberType NoteProperty -Name DataSource -Value $DataSourceRegEx.Matches( $DataSource.ConnectionString, 0 )[0].Value
			$obj | Add-Member -MemberType NoteProperty -Name InitialCatalog -Value $InitialCatalogRegEx.Matches( $DataSource.ConnectionString, 0 )[0].Value
			$obj | Add-Member -MemberType NoteProperty -Name UserID -Value $UserNameRegEx.Matches( $DataSource.ConnectionString, 0 )[0].Value
			
			# Add to collection
			$Collection += $obj
		}
	}
	
	# Report Collection as Table
	$Collection | Format-Table
	
	# Note the parameter embedded into the path via $($Server)
	# So do mind named instances, as the backslash will most likely make the file name burp on you
	$Collection | Export-Csv -Path "C:\SSAS\Output\$($Server)CubeMetaData.csv" -NoTypeInformation -Delimiter ';' -Append
	
	# Disconnect from the server again
	$Srv.Disconnect()
}

# Execute the function
Get-CubeMetaData -Server .\SQL2012

You can extend this with whatever attributes suit your purpose, but do mind the export to file when dealing with named instances: the backslash in the instance name does some funky stuff in the file name, so it’ll produce an oddly named file unless you sanitize the server name first.
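If you are handed a list of servers, as in the scenario above, a simple loop covers all of them in one go – servers.txt is a hypothetical file with one instance name per line:

# Run the meta data collection against every instance listed in servers.txt
Get-Content -Path "C:\SSAS\servers.txt" | ForEach-Object {
	Get-CubeMetaData -Server $_
}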


Backing up your VMs in Azure on a Schedule

This is my tenth post in a series of entry-level posts, as suggested by Tim Ford (b|l|t) in this challenge.
I have not set any scope for the topics of my (at least) twelve posts in this blog category, but I can assure you they’ll be focused on SQL Server stuff. This time it’s going to be about how to back up your VMs in Azure with Recovery Services Vaults.

Previous Posts in the Entry Level Series:

Introduction

Finding yourself up that creek w/out a paddle is not something you should strive to achieve. Hence, do your backups, so that when that brownish thingy hits the rotating cooling thingy, you are able to dodge the bullet in style.

Via the Azure Portal this is actually quite an easy setup. We begin by defining a Recovery Services Vault. If the offering is not immediately available, search for it like this:

search

You can toggle it as a favorite; you may want to check back regularly to see if your backup job is running as expected.

Clicking Recovery Services Vaults presents you with the next dialog. Here you can view all the services you may have running. You can also add new ones, which we are going to do by clicking the Add button.

add

The next dialog lets us set a couple of basics about the service, such as name and resource group – and do keep in mind which subscription you are working on, if you have more than one available. Click Create when all attributes are in place.

basics

Once the deployment is complete, you’ll see the notifier alerting you that the deployment has been successful.

deployment

Back to the Recovery Services Vaults, where we can Refresh and observe the newly created Recovery Services Vault.

Test Recovery Service
Now it’s time to add some servers to be backed up. By clicking the newly created Recovery Services Vault, Test-Recovery-Service in my example, we get to the overview of the service. Here we select the option/pane Backup:

Backup Pane

In this wizard we are presented with three (3) areas of configuration. First, we need to decide whether the workload is in Azure or On-Premises. By selecting Azure, we are left with Virtual Machine as the only option for the backup. On-Premises has more options: SQL Server, SharePoint and Hyper-V VMs, among others. This example is about Azure VMs, hence we select accordingly.

Step 2 is about the backup policy – in other words, frequency and retention. I am going with the default settings here, but the options are great, as you can configure retention ranges for weekly, monthly and yearly backups in parallel.

Backup Policy

When a new backup policy is created, it automatically becomes an option in other Recovery Services Vaults.

In Step 3 we select which VMs are included in the Vault.

Items to Backup

 

Finally, we check that everything is as we want it to be and click OK.

Final Wizard
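For reference, the same setup can be scripted. The sketch below assumes the AzureRM.RecoveryServices and AzureRM.RecoveryServices.Backup modules and uses made-up resource names; the portal route above is what I actually used:

# Create the Recovery Services Vault (names and location are examples)
New-AzureRmRecoveryServicesVault -Name "Test-Recovery-Service" `
    -ResourceGroupName "MyResourceGroup" `
    -Location "West Europe"

# Point the backup cmdlets at the new vault
Get-AzureRmRecoveryServicesVault -Name "Test-Recovery-Service" | Set-AzureRmRecoveryServicesVaultContext

# Enable protection for an Azure VM using the default policy
$Policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $Policy `
    -Name "MyAzureVM" `
    -ResourceGroupName "MyVMResourceGroup"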

Going back to the overview, we can now observe that the job has been scheduled according to our backup policy definition. Also note that the backup usage is right there on the overview, for you to track and monitor.

Overview

And by clicking the “Azure Virtual Mach…” we get the following overview/status.

Status

But when the job has completed, at the specified time, you will see the time stamps of when the backups were executed.

Status Final
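Once the vault context is set, you can also check on the jobs from PowerShell – again a sketch, assuming the AzureRM.RecoveryServices.Backup module:

# List recent backup jobs in the current vault context, newest first
Get-AzureRmRecoveryServicesBackupJob |
	Sort-Object StartTime -Descending |
	Select-Object WorkloadName, Operation, Status, StartTime, EndTime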

In my humble opinion, this is so dead simple and easy it’s almost scary – I fear I am missing something, but I seriously doubt it. You might want to keep an eye on spending though, especially if you set up a high-frequency backup schedule with a long retention. It can become pricey, but it is very reasonable for small installations like this.


Testing Your #SSAS Cubes w/ Visual Studio

This is my ninth post in a series of entry-level posts, as suggested by Tim Ford (b|l|t) in this challenge.
I have not set any scope for the topics of my (at least) twelve posts in this blog category, but I can assure you they’ll be focused on SQL Server stuff. This time it’s going to be about how to create a consistent test setup of your Analysis Services (SSAS MD) databases, across servers, using built-in features in Visual Studio (VS).

Previous Posts in the Entry Level Series:

Preface

This technique requires a tiny bit of C# programming (or VB if you prefer that) – but do not let this scare you off. I guarantee the effort pays off well in the end. And if you want to jump right into testing, I have shared some code in this blog post that will get you going even faster.

Limitations

One of the main drawbacks of this approach is that test methods in VS do not support parameters. This means we have to create a method for each MDX statement we want to execute against a cube.

We can, however, execute the same MDX on multiple cubes from the same method.

Depending on whether you test for consistency or performance, having the test methods run synchronously, one after the other, will either work for you or against you. We do, unfortunately, not have the liberty to choose the execution mode as we see fit. This blog post sums up how to enable parallel testing, but in our case we do not meet the requirements, which are (I have highlighted the ones we fail to meet):

  1. Ensure you have a multi-core/CPU machine
  2. Ensure you are running only unit tests
  3. Ensure your tests are thread-safe
  4. Ensure you do not have any data adapters on
  5. Ensure you are running locally
  6. Modify your test settings file.

The Code

We begin by creating a new Unit Test Project in Visual Studio.

New Unit Test Project

Once the project has been created, we can add classes to our project. Each of these classes can contain one (1) or more tests. In this example, I will expose two (2) tests to the Visual Studio test framework.

But before we can do any testing, some plumbing is in order. I have attached code to this blog post, in which there is a class that wraps the ADOMD class library as well as implements some ease-of-use functionality. This class is called Helper. In this implementation it contains a method to clear the cache of a cube, a method to execute XMLA or MDX against a cube and, finally, a method to assert whether two (2) CellSets are identical. The Helper class relies on settings found in the App.config file. You are free to make the code your own, but you can also use it as is.

Here is an example of one test you might have for a cube: testing whether the grand total of a specific measure [Internet Sales] stays consistent across deployments.

Test Class
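The actual test class is C# and included in the attached project. As a language-agnostic illustration of the same idea, here is a rough PowerShell/ADOMD sketch comparing a grand total across two deployments of a cube – the server, database names, measure and MDX are placeholders, not the code from the project:

# Load the ADOMD.NET client library
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.AdomdClient") > $NULL

function Get-GrandTotal( [string]$Server, [string]$Database, [string]$Mdx ) {
	# Execute the MDX against the given database and return the single cell of the result
	$Conn = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection("Data Source=$Server;Catalog=$Database")
	$Conn.Open()
	$Cmd = $Conn.CreateCommand()
	$Cmd.CommandText = $Mdx
	$Value = $Cmd.ExecuteCellSet().Cells[0].Value
	$Conn.Close()
	return $Value
}

# The same MDX executed against the current and the new deployment
$Mdx = "SELECT [Measures].[Internet Sales] ON COLUMNS FROM [Adventure Works]"
$Old = Get-GrandTotal -Server ".\SQL2012" -Database "AdventureWorksMD"     -Mdx $Mdx
$New = Get-GrandTotal -Server ".\SQL2012" -Database "AdventureWorksMD_New" -Mdx $Mdx

# The "assert": fail loudly if the grand totals differ
if ($Old -ne $New) { Write-Error "Grand total mismatch: $Old vs $New" } else { "Grand totals match: $Old" }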

When we build our Visual Studio project, the Test Explorer will automatically pick up on any newly added test methods in any of the classes in the project. So you can add as many classes with as many (or as few) methods as you’d like.

Build
Build (right click) or [Ctrl]+[Shift]+[b]

Execution

In this quick introduction I will only show how to execute the test via the Test Explorer. Obviously this should be automated, as I will blog about in another post.
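As a teaser for that post: the compiled tests can also be kicked off from a command line or build server with vstest.console.exe – the DLL path below is just an example, and the sketch assumes vstest.console.exe is on the path (e.g. a Visual Studio developer command prompt):

# Run all tests in the compiled test assembly and log the results as .trx
& "vstest.console.exe" "C:\Builds\SSAS.UnitTestProject.dll" /Logger:trx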

Knowing right away that you introduced a bug into the code base can save you hours of backtracking.

In the Test Explorer you can select any single test or multiple tests, then right-click and select Run Selected Tests.

Run Selected Tests

You will end up with a result in the same pane; either Success, Failure or Undecided. The last option is not included in this example, but is a viable option if you extend your unit/integration testing further.

My result looks like this:

Test Outcome

With this test, we established that the grand total of the measure Customer Count was consistent across both versions of the cube. If this fails, you may be in trouble with either your data model, your cube script or something completely different. The test won’t tell you what is wrong, only that you have an inconsistency.

Fail fast, fail often!

Wrap-Up

Unit/Integration testing in Visual Studio is actually not that hard and can save you a lot of pain down the road. The testing framework in Visual Studio offers extensive ways of executing batches of tests. You can group tests by Class, Duration, Outcome, Trait or Project.

When you right-click a test, you get the option to select how you want the tests in the Test Explorer to be grouped.

Test Group By

Traits can be added as properties to the method declaration in the class. The attribute is called TestCategory and you can have multiple declarations per method. As you can probably tell, you can weave an intricate web of categories of test methods. In other words, one test method can be part of one or more schemes by which you test your code. You should test development deployments differently from test deployments, which again should be tested differently from production deployments.

Test Category

Or you can store tests in playlists by right clicking and adding them separately to one (1) or more playlists.

In the top of the Test Explorer, you select which playlist you want active.

Playlists

Happy testing 🙂

Clarification

After publishing this post, Fabrice Michellonet (b|l|t) clarified the difference between Unit Testing and what I am showing in this post: Integration Testing. Now, admittedly, I hashtagged the tweet #UnitTesting, which was incorrect. However, it seems easy to fall into that trap, as Microsoft’s Test Template for this kind of project is Unit Test Project – even though you are not going to do any Unit Testing.

Microsoft Test Templates

 

Code available here: ssas-unittestproject


Creating a Striped Disk on Your Azure VM

This is my eighth post in a series of entry-level posts, as suggested by Tim Ford (b|l|t) in this challenge.
I have not set any scope for the topics of my (at least) twelve posts in this blog category, but I can assure you they’ll be focused on SQL Server stuff. This time, it’s a little trick on how to make life easier for SQL Server on an Azure Virtual Machine. This tip applies to standard on-premises setups as well, although it’s not always as easy as running a PowerShell script to magically conjure a bunch of disks on your own servers.

Prerequisites

In order to leverage this tip, you will need an Azure VM (or an on-premises server, where you can add disks) and some administrative permissions.

Things to do in Azure

First of all, you need to add the Data Disks to your Virtual Machine. This can be done in two ways. (1) You can do it through the Portal: select the VM you want to work with, then click All Settings -> Disks -> Attach New.

VM Disk Configuration

This will bring you to the final dialog, where you punch in the specifics for your new disk.

Disk Specifics

(2) Or you can do it using PowerShell, with Add-AzureDataDisk.
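A rough sketch of that classic (ASM) PowerShell route could look like the following – the cloud service, VM name, disk size and LUNs are examples; chain one Add-AzureDataDisk per disk you want in the stripe:

# Attach two new, empty data disks to the VM (classic deployment model)
Get-AzureVM -ServiceName "MyCloudService" -Name "MySqlVM" |
	Add-AzureDataDisk -CreateNew -DiskSizeInGB 128 -DiskLabel "DataDisk0" -LUN 0 |
	Add-AzureDataDisk -CreateNew -DiskSizeInGB 128 -DiskLabel "DataDisk1" -LUN 1 |
	Update-AzureVM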

Windows Server

Once the disks have been attached to the VM, it’s time to do some magic in Windows – mind you, a long series of screenshots. First, you’ll have to create a Storage Pool, next configure a Virtual Disk and finally a Volume (Disk).

It all begins in the Server Manager, where this illustration should take you through the steps to create a Storage Pool:

Storage Pool Wizard

Storage Pool Wizard

Once the Storage Pool is created, you head on to create the Virtual Disk:

Virtual Disk Wizard

Finally you need to create a new Volume:

New Volume Wizard
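If you would rather skip the wizard screenshots, the same three steps can be scripted with the Windows storage cmdlets. A sketch, assuming a single storage subsystem and made-up pool/disk/volume names:

# 1) Create a Storage Pool from all disks that are available for pooling
$Disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SqlPool" `
	-StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
	-PhysicalDisks $Disks

# 2) Create a striped (Simple) Virtual Disk across all disks in the pool
New-VirtualDisk -StoragePoolFriendlyName "SqlPool" `
	-FriendlyName "SqlStripe" `
	-ResiliencySettingName Simple `
	-NumberOfColumns $Disks.Count `
	-UseMaximumSize

# 3) Initialize, partition and format the new disk as a Volume
Get-VirtualDisk -FriendlyName "SqlStripe" | Get-Disk |
	Initialize-Disk -PartitionStyle GPT -PassThru |
	New-Partition -DriveLetter F -UseMaximumSize |
	Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "SqlData"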

Benefits

The obvious benefit is throughput. The more disks you add to your stripe, the more theoretical throughput you’ll get. As an example, I ran a very quick test using CrystalDiskMark.

Beware: Do not use the installer, as it contains adware. Get the portable edition (.zip).

Disk Stripe Result

As displayed in the above screenshots, a single Azure Standard Storage VHD gives you (as promised) about 500 IOPS. Striping eight (8) of those will roughly give you eight (8) times the IOPS, though apparently not the same gain in [MB/s]. Still, the setup is better off after than before!

Do mind that there are three levels of storage performance: P10, P20 and P30. For more information, read this.
