Continuous Delivery w/ Analysis Services and Visual Studio Online

Recently I unlocked a new and long-awaited achievement, and I was so happy about it that I had to write a blog post on it.

Card House
This is how I picture the end result

I’ve known a couple of developers through my career, myself included, who were dreading the moment when they had to release/deploy a new version of the cube into production. In my experience, that all had its root in the fact that they (me included) had no idea whether we would be able to run the process flawlessly and as intended. Typos would sometimes sneak in and crash the whole thing, or an incorrect data connection would load data from the wrong source. Did we in fact deploy to the right server? Are we introducing breaking changes?
Sure, it’s easy enough when you only have a small server setup, but that’s exactly where you should plant the seeds to grow big. At that point it’ll be a lot easier to implement semi- or even fully automated deployment frameworks.

Below I will briefly describe some of the most obvious ways of deploying new cubes to an environment, as supported by the product out of the box. Then I will sketch the latest introduction to our development pipeline: continuous delivery. And just to be clear on concepts, it’s not continuous deployment, as that would require all builds to be published to production automatically. Not sure we will do that any time soon though.

Continuous deployment means that every change is automatically deployed to production. Continuous delivery means that the team ensures every change can be deployed to production but may choose not to do it, usually due to business reasons.
ref.: Wikipedia

Types of Deployment

XMLA

Late last year, I wrote a blog on how to Extract XMLA from SSAS Databases using PowerShell. But that is not the only way to get the code behind.

When browsing a server in SQL Server Management Studio (SSMS), you can right-click the database and select Script Database As. If you have a database in place already, in the new location, you would select Alter to, otherwise you select Create to. Obviously, when you want to delete a database, you select Delete to.

Script SSAS Database

If you try to update an existing database with a Create to statement, you will be served an error message telling you that a database with that key already exists. You can manually change the Create statement to an Alter statement, no fuss here.
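If you would rather execute the resulting XMLA script from the command line than from SSMS, the Invoke-ASCmd cmdlet can do it. A minimal sketch, assuming the SqlServer (or older SQLASCMDLETS) module is installed and the script has been saved to disk; the server name and path are placeholders:

# Execute a saved Create/Alter XMLA script against the target instance
Import-Module SqlServer
Invoke-ASCmd -Server "TARGETSERVER\SSAS" -InputFile "C:\Scripts\AlterMyDatabase.xmla"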

Pros: You get to mess with XMLA?

Cons: The database needs to be processed once the script has been executed. Huge impact on business users.

Synchronization

In a previous post of mine, I have written about how to synchronize a database from one server to another. I will not go into detail here, just refer to the post and highlight pros and cons.

Synchronize Database

Pros: Seamless to the end user (except cache clean-out), easy to set up and manage.

Cons: Too slow, when databases become large.

Backup/Restore

The backup/restore approach is actually the last resort, as I see it. Once the restore process is kicked off, the business users feel the impact of that heavy process. You should avoid this. Depending on your disk layout, the pain can be avoided to some extent.

So, what needs to change?

None of the above-mentioned scenarios ties into Team Foundation Server (TFS), and in order to get into the no-sweat zone at release time, we need to build our deployments around TFS, the obvious choice when working with Microsoft.

Natively, Visual Studio, or more precisely MSBuild, does not support the dwproj files used for Analysis Services (SSAS) projects. So obviously this has to involve some kind of magic. As it turns out, it’s not all that magical. There is, however, not much documentation on this particular scenario out there, but I managed to find one good resource, which is this. It gave me just enough assistance to complete the task.

Depending on how high you aim, you can actually get quite far with just a few simple tricks and some home-grown code. I will be doing this with Team Foundation Server (TFS), Visual Studio Online (VSO – see benefits here), along with some PowerShell, to enable us to automatically deploy new SSAS solutions to specific environments.
In the early days of my career, right after I left Basic programming on my IBM XT (and then spent a good few years in the education system), TFS became a partner in crime for me, both to secure my code and to manage releases. Back in those days it was called Visual SourceSafe, which then turned into TFS, which is now also available as a hosted service known as Visual Studio Team Services (VSTS). Even nowadays I come across BI developers, even veteran ones, who don’t know what TFS is, let alone a code repository. I’d like for that to change.

PowerShell is free and so is VSTS, which means you can kick this off at almost no cost. How cool is that!?

Once you have your VSTS (get started here) in line and PowerShell available (get started here), you are ready to begin the automation process.

Three Steps to Success

  • Install and Configure a Hosted Build Agent
  • Create a Build Definition Locally
  • Setup a Visual Studio Online Release

 

Prerequisites:

  • TFS License (hosted build agents are ~ $15 – source)
  • Visual Studio Online Subscription (free)
  • Visual Studio Data Tools (free)

Installing a Hosted Build Agent

  1. Log on to the computer where you want the agent to run
  2. Open TFS Web Access
  3. Click on Settings in the top right corner
  4. Click on project collection in the breadcrumbs
  5. Click on the Download agent link to download a .zip file with the build/release agent in it. Right-click the downloaded .zip file, click Properties, and Unblock the file if it is blocked.
  6. Make sure that you unzip the file into a folder that is not under your user profile (for example, Downloads or Documents) or the Program Files folder, because UAC will otherwise get in the way when a build job tries to call the build/release agent.

Configuring the Hosted Build Agent

  1. Open PowerShell (preferably as administrator), and browse to the folder where you unzipped the build/release agent
  2. Run ConfigureAgent.ps1 and follow the wizard
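For reference, the unblock-and-configure part can be done entirely from the PowerShell prompt. A small sketch, assuming the agent was unzipped to C:\Agent (the folder name is an assumption):

# Unblock all files in the unzipped agent folder, then run the configuration wizard
Get-ChildItem -Path "C:\Agent" -Recurse | Unblock-File
Set-Location "C:\Agent"
.\ConfigureAgent.ps1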

Creating a Build Definition

In Visual Studio, we open the Team Explorer Window ([Ctrl]+[½], [Ctrl]+[m]) and select the Builds sub menu.

Team Explorer

Click New Build Definition

New Build Definition

This will open up a wizard that will let you specify certain details on how the build should be executed.

First up a name and description (and whether the build definition is active or not)

Build Definition General

Then we get to specify which triggering mechanism should fire off the build.

Build Definition Trigger

In this example we select Continuous Integration, so that every check-in generates a build. You can set this up as you see fit; there are a lot of options and I bet there is one fitting your requirements.

In the Source Settings tab you need to map which source control folders you would like the build to output. This is actually one of the steps the card house at the beginning refers to, as I am not quite sure how selective you can be nor how many options you have. If in doubt, like me, just map the whole bloody thing!

Build Definition Source Settings

The next step requires that you have set up the hosted build agent as described in the previous step – otherwise it won’t show up as an option. This is when you select your hosted build agent.

Build Definition Build Defaults

Remember to tick the ‘Copy build output to the server’ option.

For Process, we select the simplest of the available process templates: Default Template. But this is also the step where we customize our build definition to “support” our SSAS type project. In fact, we will be building a regular .Net project. You can name this project file anything you’d like, it just has to have the following construction:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
	<PropertyGroup>
		<Env>DEV</Env>
		<DeployFile></DeployFile>
		<SolutionPath></SolutionPath>
		<!-- Both properties below can be overridden from the MSBuild command line / build definition -->
		<ProjectFolder Condition="'$(ProjectFolder)'==''"><INSERT PROJECT FOLDER HERE></ProjectFolder>
		<SsasVersion Condition="'$(SsasVersion)'==''">2016</SsasVersion>
	</PropertyGroup>
	<Target Name="BuildCube">
		<ItemGroup>
			<!-- Collect every .dwproj file below the solution path -->
			<CubeProjectFiles Include="$(SolutionPath)\**\*.dwproj"></CubeProjectFiles>
		</ItemGroup>
		<PropertyGroup>
			<!-- Path to devenv.exe, which can build .dwproj files (MSBuild cannot build them directly) -->
			<DevEnvTool Condition="'$(DevEnvTool)'=='' AND '$(SsasVersion)'=='2016'">C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\devenv.exe</DevEnvTool>
		</PropertyGroup>
	</Target>
</Project>

Place this project file next to your SSAS solution file, and point to that file in the Items To Build dialog in the Process tab.
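To sanity-check the project file before handing it over to the build definition, you can invoke it manually and pass the properties on the command line. A sketch, assuming Visual Studio 2015’s MSBuild and a project file named BuildCube.proj (both names are assumptions):

# Run the custom build project manually, overriding the conditional properties
& "C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" .\BuildCube.proj /t:BuildCube /p:SolutionPath="C:\Source\MySsasSolution" /p:SsasVersion=2016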

Build Definition Process

Finally there is the Retention Policy, which is not something we change in this example. Basically it’s about how long the server keeps the different outcomes of the build.

Build Definition Retention Policy

Once this is saved, you should be able to check in any code change and observe that the build is running. You can also set off the build manually, by right-clicking and selecting Queue New Build.

Queue Build

And observe that the build is showing up as a job in Builds.

Build Queue Number

And finally, if we double click the job, we get the result of the build.

Build Output

At this point, our code base is present on our Build Agent. This means, that if we want to do something with our code, we need to activate some scripts. Here is where VSO comes in handy.

Setting up a Visual Studio Online Release

If you do not already have a VSTS subscription, you can sign up for one, for free here.

Provided you have set up your account as per the wizard above, you should be able to log into your account and select Release in the top menu.

VSO Release Tab

In the Release dialog, expand the + and select Create Release Definition.

VSO Create Release Definition

A new dialog will appear, and you need to select the Empty definition for the purpose of this example.

VSO Create Release Definition Wizard 1

This will allow you to define the basics of the build, such as what type of build to use (Jenkins, MSBuild, or you can choose later on). Important here is that you get to choose the Project from your TFS Collection that you would like to have built – marked 1. You also need to select which build agent should be used with this particular build; select the name of your hosted build agent here – marked 2.

VSO Create Release Definition Wizard 2

You can then go and rename both the environment and the definition to something more meaningful than New Empty Definition <Date> and Environment 1. Before we add tasks to the release definition, we need to link an artifact to the definition. We do so by clicking the Artifacts tab in the menu bar and selecting Link an artifact source. In the next dialog you need to select Team Foundation Version Control as Type, followed by which Project, Source and Source Alias.

Remember to Save your work.

Once this has been completed, there are only a few steps left, before you have the loop closed. Now it’s time to put together some of the building blocks I have written about earlier. The Microsoft.AnalysisServices.Deployment.exe tool along with some PowerShell.

VSO Add Task

The first task we are going to add is to move the output of the build to another location on the server. This may not be completely necessary, but I like to separate duties. Basically we just copy the files to another folder.

VSO Copy Files
Supply the Source Folder where the Build Agent drops the output of the build and specify the Target Folder and you are set to go.

By running the Build Definition (not the Release Definition) you get to see where the Build Agent drops the files on your hosted agent. In the Source Folder of the Copy File(s) task you insert the folder containing the dwproj of your solution. Contents should be left as ** and Target Folder should be set to some folder on the agent machine which you intend to use for deployment.
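If you would rather do this in a PowerShell task than with the built-in Copy File(s) task, the equivalent is a simple recursive copy (paths here are placeholders):

# Copy everything from the build drop folder to the deployment staging folder
Copy-Item -Path "C:\BuildDrop\MySsasSolution\*" -Destination "D:\Deploy\MySsasSolution" -Recurse -Force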

The last step is to use the SSASHelper and Microsoft.AnalysisServices.Deployment.exe tools to deploy the code just built onto an instance of Analysis Services. For this to run super smoothly, I recommend running the deployment tool manually the first time with the switch /a. This will run the utility in answer mode: all responses made during the wizard part of the utility are written back to the input files, but no changes are actually made to the deployment targets. You can choose whether or not to have the database processed as part of the deployment procedure; that depends on your requirements.

Note: This is where you specify which server the SSAS Project gets deployed to.

This means you get the parameters for your cube stored in files on disk, so you can deploy with those values every time, without having to do anything manually. Provided the tool has been run in answer mode, we can now add a task that will run the tool in silent mode with the switch /s.

Steps in script:

  1. Create .asdatabase from project using SSASHelper
  2. Deploy .asdatabase to instance using Microsoft.AnalysisServices.Deployment /s
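Strung together in PowerShell, the two steps could look roughly like the sketch below. The paths are placeholders, step 1 is only hinted at in a comment because SSASHelper’s exact arguments depend on the version you use, and the location of the deployment utility varies with your SQL Server/SSMS version:

# Step 1: produce MyCube.asdatabase from the .dwproj (e.g. with the SSASHelper tool mentioned above)
# Step 2: deploy silently, re-using the answer files generated earlier with the /a switch
$asDatabase = "D:\Deploy\MySsasSolution\MyCube.asdatabase"
$deployTool = "C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe"
& $deployTool $asDatabase /s:"D:\Deploy\MySsasSolution\deploy.log"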

VSO PS Deploy

Save your work now 🙂

Verify that the Release Definition will actually fire, when the Build Definition kicks off. Do so by checking the Triggers tab.

 

VSO Triggers Tab

Testing the Setup / Check-In

The next time you check changes into TFS, your local Build Definition will kick off, as will the VSO Release Definition, which in time will deploy the cube (processed or unprocessed) to your designated server.

The next step could be to synchronize the database from the staging area to test or even production. To do this you can set up a new environment in your TFS Release Definition and, in this environment, synchronize the database to the new location. You can even specify approvers for your environments, so that no steps are taken until one or more people have signed off.

Final Thoughts

This may well be the longest blog post I have ever written. It is also the one task that has taken me the longest time to actually implement. The fact that I can now check in changes to TFS and, behind the scenes, our test environment is updated and prepared for testing is a huge daily relief.

I will shortly try to add automated tests to the release process, and I will of course blog about it.

For me, this post sums up the steps of automating your daily/weekly update of your test environment; Something I haven’t been able to find anywhere else.


Using PowerShell to get current Data/Log/Temp and Backup Directories from SSAS

In a blog post from October 2015 I ranted about not being able to get the Data, Log, Temp and Backup directories from the AMO API. Back then, I was tasked with mapping all of the possible locations for SSAS related files on a number of servers.
Obviously, a single attribute per location wouldn’t cut it, as locations may have been changed over time. At least that is what I observed on this setup. So, back then, I needed something more sophisticated.
Since I thought this was a shortcoming of the AMO API, I filed a Connect item in order for Microsoft to rectify it.

Just recently a reply was made to the Connect item, highlighting the fact that the current values of the Data/Log/Temp and Backup directories – meaning the currently configured values – are exposed through the Server.ServerProperties collection. According to the answer, only public property values are exposed.

Using PowerShell, we can now retrieve the desired information from any given instance of Analysis Services. Doing so would look something like this:

# Load the AMO assembly
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$serverName = "<insert server name here>";
$Server = New-Object Microsoft.AnalysisServices.Server;
$Server.Connect($serverName);

# The currently configured directories are exposed through the ServerProperties collection
$Server.ServerProperties["DataDir"].Value
$Server.ServerProperties["TempDir"].Value
$Server.ServerProperties["BackupDir"].Value
$Server.ServerProperties["LogDir"].Value

$Server.Disconnect();

 


Gemino!

This is my third post in a series of entry-level posts, as suggested by Tim Ford (b|l|t) in this challenge.
I have not set any scope for the topics of my, at least, twelve (12) posts in this blog category of mine, but I can assure you it’ll be focused on SQL Server stuff. This time it’s going to be about how to synchronize Analysis Services (SSAS) databases across servers, using a built-in feature. I was contemplating not having this post as an entry-level item, but decided to, simply because I would have liked to know about the feature long before I found out it existed.

Preface

Before we get into the meat of this technique, it might be helpful to know a little bit about when to leverage the benefits.

In my experience, the most frequent way to use this feature is to copy a database from a processing instance to one or more query server instances. The main benefit of this approach is that the end users do not feel the impact of the cube/database processing, nor do they feel the relatively heavy impact of a database restore, which would be another way of copying a cube.
Another use of this feature is to merge databases from multiple processing instances into a single query instance. This is not something I have come across, other than in writing, but it should be applicable (at least according to Microsoft).

Depending on your environment, you may be facing different obstacles, and as usual, there are no silver bullets. This also goes for database synchronization, which means copying the entire database, every single time. It’s easy to identify the first show stopper, that being the size. When a database/cube becomes too big, you may want to look for alternatives. So let’s look at the alternatives.

Backup/Restore: Target Database cannot be queried during restore process.

The suggested workaround for this is to have two identical databases on the target server, the only difference being the name. Restore to the alternate name, and once the restore is complete, rename the current and the newly restored databases.

Detach/Attach: You have to detach the target database, before you can attach the new version.

As with backup/restore, it would be possible to have a second replica, attach this and do the renaming as described above.

Both of these approaches will require you to allocate at least two times the disk amount, since you must have two copies side-by-side.

Robocopy: SSAS needs stop/restart to release lock on files.

If your SSAS instance is located on an Azure VM, you should be able to work around this issue by having two servers, one being a cold standby. You would then fire up the cold standby to replicate the changes, and after a successful update, redirect queries to the newly updated server using a load balancer. Once there are no active queries on the “old” one, you can shut that down.

Synchronize: You need to copy the full database, every time

The draw back of this approach is obviously the fact that you need to move all of the data across the lines, every single time. There is no circumventing this.
There are no guidelines from Microsoft on a size limit, other than: when it takes too long, the database is too big – probably 🙂

Prerequisites

First of all, there is a set of prerequisites, in order for the synchronization to work.

  • User must be a member of the Analysis Services server administrator role on the Target server.
  • User must have Full Control permissions on the Source database.
  • TCP port 2383 must be open on both servers to allow remote connections between default instances.
  • Both the Source and Target server must be the same version. See Synchronization Error here:
    Synchronization Error
  • Edition needs to be either Enterprise, Developer, or Business Intelligence.
  • Server mode must be identical on the two, either Tabular/Tabular or Multidimensional/Multidimensional.
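Several of these prerequisites can be checked up front with a few lines of AMO-based PowerShell. A sketch, using the instance names from the example later in this post as placeholders:

# Quick pre-flight check of the synchronization prerequisites
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices") | Out-Null

$source = New-Object Microsoft.AnalysisServices.Server
$target = New-Object Microsoft.AnalysisServices.Server
$source.Connect("CMS01\SQL2016")
$target.Connect("CMS01\SQL2016Prod")

# Version, Edition and ServerMode should match on both instances
"Source: {0} | {1} | {2}" -f $source.Version, $source.Edition, $source.ServerMode
"Target: {0} | {1} | {2}" -f $target.Version, $target.Edition, $target.ServerMode

$source.Disconnect()
$target.Disconnect()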

Manual Synchronization (Wizard)

Using SQL Server Management Studio (SSMS) it’s possible to do a manual synchronization of any given database. Please note the prerequisites above before commencing. In SSMS you begin by connecting to the target server, the server where you need the database synchronized to. Once connected, right-click the database node and you will be presented with a context menu with a Synchronize item. See figure below:

SSMS Synchronize

Note that in the screen dump there are no databases on the target server CMS01\SQL2016Prod, but there is one on the CMS01\SQL2016 instance called Test Database. This is the database we are going to synchronize.

Once you click the Synchronize item, the synchronization wizard will kick off with a welcome message. You can skip it at will 😉

Synchronization Wizard Welcome

The next screen lets you select the source of the synchronization. The target is preselected as the server on which you invoked the wizard. In this case, I select the CMS01\SQL2016 instance and the Test Database to be the source.

Synchronization Source Selection

When clicking next, you will be presented with a screen that allows you to select the location you want to synchronize to. Notice that the folders available are the folders set in the SSAS configuration attribute AllowedBrowsingFolders on the target server.

Allowed Browsing Folders

Also note, that the property is categorized as Advanced, so you need to check Show Advanced (All) Properties.

Back to the Wizard. If you need to specify a different location on the target server, this is where you can.

Specify Location

In my case, both folders display: (default) – which means I haven’t set up anything special.

Next you are going to be prompted to choose which security settings to synchronize. You can Copy All, Skip Membership or Ignore All. And then for some odd reason, the choice on whether to compress during Synchronization or not is put into this screen as well…

If you Copy All, you will replace whatever security settings you have on the target server. This includes Roles, members and Permissions.
Skip Membership copies all of the Roles including Permissions, but not the Members. This is a tricky one, as you will no doubt have a gap (outage) from when the new Roles are in place until members have been assigned (for new roles). Existing roles will keep the members already in place.
Finally you can Ignore All, which leaves the existing Roles and Members unaffected.

The only reason I see to uncheck the Use compression… option would be if you are sourcing from a database that is hosting users. In that case, you may want to skip compression in order to leave as small an imprint on the source server as possible. You will be consuming clock cycles for the compression, which will likely impact the end user experience.

Synchronization Options

In this case I have opted to just Copy All.

This leads to the last screen where you are prompted to select something – Do you want to synchronize now or Save it to a script?

Synchronization Method

We go with the Synchronize Now option and are presented with a Summary of our choices, just to make sure.

Synchronization Summary

When we click Finish, we are presented with the process and end result in a new window.

Synchronization Result

Until it’s finished, you can actually Stop the process.

Let’s see the new Database in our “production” (CMS01\SQL2016Prod) environment:

Synchronization Complete

Voila, the database has been synchronized!
Next up automation.

SQL Agent

All of the above steps can be automated using the SQL Agent and an XMLA script. The script can, as earlier stated, be generated during the Wizard process. One such Script would look like this:

<Synchronize xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <ConnectionString>Provider=MSOLAP.7;Data Source=.\SQL2016;Integrated Security=SSPI;Initial Catalog=Test Database</ConnectionString>
    <Object>
      <DatabaseID>Test Database</DatabaseID>
    </Object>
  </Source>
  <SynchronizeSecurity>CopyAll</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>
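Before wiring the script into a SQL Agent job, you can test it ad hoc from PowerShell against the target server. A sketch, assuming the script above has been saved as a .xmla file and the SqlServer module is available:

# Run the Synchronize script against the target instance (the one the database is copied TO)
Import-Module SqlServer
Invoke-ASCmd -Server "CMS01\SQL2016Prod" -InputFile "C:\Scripts\Synchronize_TestDatabase.xmla"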

This script is easily integrated with the SQL Agent, and setting up the SQL Agent is, in my experience, the hardest part.
In order for the SQL Agent job to run independently of the user account used for the SQL Agent service, we need to set up both Credentials and a Proxy. The Proxy uses the Credentials to execute the job, or more specifically, the individual steps in the job. This means we can set up different Credentials/Proxies for different steps in our job, in case we need to synchronize to different environments.

Setting up the Credentials is pretty straightforward. In SSMS, connect to the server where you want your SQL Agent to run the job. Expand Server->Security, and then right-click Credentials. This will offer you the option to create a new Credential.

New Credentials

A credential is a record that contains the authentication information (credentials) required to connect to a resource outside SQL Server – More here.

You then give the Credentials a meaningful name, assign the desired account and password and click OK.
Encryption can be added as well, if you have such a provider.

Credentials Wizard

I have named my Credentials the SSAS Synchronization Service. Later I will be able to pick this particular Credential in my Proxy.

When the dialog is closed, we see the Credentials appear in the Credentials collection in SSMS.

Credentials Result

Having the Credentials ready, we can move on to create the Proxy that the SQL Agent Job uses.

Expand the SQL Agent, and right-click Proxies. This allows you to create a new one.

New Proxy

Setting up the Proxy is also very easy. Assign a name, assign the proper Credentials and check the proper sub-system, in this case SQL Server Analysis Services Command.

Proxy Attributes

Maybe a bit misleading, but I have named both the Credentials and the Proxy the same. Once the Proxy is finished, it’ll appear in the Proxy collection in SSMS.

By now, we are ready to create our actual SQL Agent Job. We do so by right-clicking the Jobs folder under the SQL Agent node in SSMS.

New SQL Agent Job

We are then prompted to assign a name for the Job, and please do pick something meaningful and scalable. I’ve seen tons of servers flooded with jobs named almost the same, only differing in the postfix number. Bad Developer, No Twinkie!

SQL Agent Job Name

Obviously you would also fill in the description, as the above title should be accompanied by some more detail.

The actual tasks in a SQL Agent Job are hidden in the steps assigned. For this demo, we only need a single step to produce the desired outcome. Select Steps in the left-hand menu, and click the New button.

SQL Agent Job Add New Step

This will bring you to the actual meat and bone of the Job. In here, you fill in details such as the Step name, the Type of the step, and under which Proxy the step should be run (if none is defined, the SQL Agent service account will be used).

SQL Agent Job Step Details

By selecting a SQL Server Analysis Services Command Type, the Server and Command textboxes appear, and we can fill in our details for those and our Step is all set to go.

SQL Agent Job Step Details

We can now add a Schedule, Alerts and Notifications to the Job, but I will let you play with that on your own. For now I will just hit OK and see that my Job gets created.

SQL Agent Job Run

When right-clicking the newly created Job, I can choose Start Job at Step – if more steps were present, I would be able to select a specific one. Since I only have one, the Job just kicks off.

SQL Agent Job Result

If the green Success check mark fails to show, I bet you it has to do with the Proxy/Credentials. Please do re-check against the requirements for synchronizing a cube, as stated initially in this blog post. Most often I have found it comes down to lack of privileges.

Round-Up

I hope this Synchronization walk-through has given you insight into how to move your data around, without having to port code (XMLA) from one server to another, rename the cube key in the script in order to run it – or whatever odd way you found to have a copy in production (or dev). I certainly advocate using this feature over many other home-grown solutions, as it’s maintainable, to some degree controllable, and it’s proven technology.

Gemino reference, for those who didn’t catch on…


A Hello World Custom Assembly in SSAS

This is my second post in a series of entry-level posts, as suggested by Tim Ford (b|l|t) in this challenge.
I have not set any scope for the topics of my, at least, twelve (12) posts in this new blog category of mine, but I can assure you it’ll be focused on SQL Server stuff. This one is going to be about how to extend Analysis Services (SSAS) with your own custom code.

The example in this post will be the well known Hello World example in the context of SSAS, and I trust this will illustrate the possibilities with this technique well enough, for you to apply your own solution, to your challenges.

Creating the Custom Code

First of all, we have to create the custom code. This can be done with the free tool Visual Studio (download). To get going, we create a new project  [Ctrl]+[Shift]+[N] or via the dialog below:

New Project Dialog

In the next dialog, we then select both which programming language to use for the project and also what kind of output the compiler will provide. I select a Class Library output, programmed in Visual C#, in this example. This will provide us with an assembly, which we can import directly into the SSAS Server.

Class Library Dialog

First, a couple of housekeeping routines. Even though this is just a tiny example, there’s no need not to do the right thing. Naming conventions should be agreed on before diving into the code.
We begin by renaming the entire solution:

Rename Solution

I name this one ‘CustomAssembly’

Next we rename the project:

Rename Project

Finally we need to rename the Class File:

Rename Class File

I have named mine HelloWorld.cs. With this in order, we are now ready to implement our custom code.
Of course the code in this example is simple, but I trust you will be able to spot the potential of the technique.

The Class implementation looks as follows:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace CustomAssembly
{
    public static class HelloWorld
    {
        public static string GetValue()
        {
            return "Hello World";
        }
    }
}

As I said, very simple. This function only returns the same text, for every call.
In order for us to leverage this via SSAS, we need an assembly to register. We now just need to successfully build the project in Visual Studio and we are off.

Right click the Solution name in the top of the Solution Explorer ([Ctrl]+[Alt]+[L]) or press [Ctrl]+[Shift]+[B] to build.

Build Solution

This will produce the desired output and put it in a predefined location. Right-click the Project and select ‘Open Folder in File Explorer’.

Open Folder in File Explorer

In the bin folder, depending on your build configuration (Debug/Release), you will find your assembly. Remember the location, or move it to a known location; issues can occur if you register the assembly from the same location you build to.
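If you want to keep the registration source separate from the build output, a one-liner does the trick; the paths here are assumptions:

# Copy the freshly built assembly to a dedicated folder used only for registration
Copy-Item -Path "C:\Source\CustomAssembly\CustomAssembly\bin\Release\CustomAssembly.dll" -Destination "C:\SsasAssemblies\" -Force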

Registering the Assembly with SSAS

We begin by opening SQL Server Management Studio (SSMS) and connect to the server we want to test our custom code on. When connected, we can right click the Assembly collection and select to add a new Assembly.

Add New Assembly

This opens a new dialog, in which we can specify different settings, most importantly the settings on security and impersonation. For details on this, please see this MSDN description. The different levels of permission settings:

  • Safe: Provides internal computation permission. This permission bucket does not assign permissions to access any of the protected resources in the .NET Framework. This is the default permission bucket for an assembly if none is specified with the PermissionSet property.
  • ExternalAccess: Provides the same access as the Safe setting, with the additional ability to access external system resources. This permission bucket does not offer security guarantees (although it is possible to secure this scenario), but it does give reliability guarantees.
  • Unsafe: Provides no restrictions. No security or reliability guarantees can be made for managed code running under this permission set. Any permission, even a custom permission included by the administrator, is granted to code running at this level of trust.

In my example, I will not have a dedicated account, which I would highly recommend, if you were to go into production with this.

I have selected the path where Visual Studio is configured to put the output, and I have selected Permission Setting Safe and I am impersonating the Current User.

Register Assembly Dialog

When we click OK, the dialog will close, and the collection of assemblies on the instance will be updated to contain our newly created custom assembly.

Assembly Registered

Run the Query

The above steps enable us to query the function created, directly from MDX, as shown in the screenshot below:

Result Set

Note! The Assembly Name applied in the registration is also the name I use in the MDX query. Had I named the Assembly MyFirstCode, the MDX would look like this:

My First Code

Think of this as an alias for the Class; the method name does not change.
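Since the screenshot does not reproduce well here, a hedged sketch of what such a query could look like when executed from PowerShell follows; the cube name is a placeholder, and the call is simply the registered assembly name followed by the method name:

# Call the registered assembly from MDX; the cube name [YourCube] is a placeholder
Import-Module SqlServer
$mdx = "WITH MEMBER [Measures].[Hello] AS CustomAssembly.GetValue() SELECT {[Measures].[Hello]} ON COLUMNS FROM [YourCube]"
Invoke-ASCmd -Server "localhost" -Query $mdx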

Next steps

As described above, we can extend Analysis Services with all kinds of thinkable custom code. We can even pass in parameters, in order to have some sort of context for the algorithm in question. We can of course also have multiple algorithms per assembly, allowing us to select specific ones for each specific MDX query.

An assembly can, as shown above, be registered at the server level. But we can also register our assembly in the private assembly collection of any number of databases on the server. This allows us to differentiate the code base, based on the solution at hand.

Now, I am not saying custom assemblies are the solution to all of our challenges on the platform, but in the course of my career I have implemented a couple of solutions where I have used this technique. The end result was far more scalable, robust and better performing than the equivalent MDX voodoo it was set to replace.

 


Permission Scope in Analysis Services

This is my first post in a series of entry-level posts, as suggested by Tim Ford (b|l|t) in this challenge.
I have not set any scope for the topics of my, at least, twelve (12) posts in this new blog category of mine, but I can assure you it’ll be focused on SQL Server stuff. This one is going to be about permission scope in Analysis Services (SSAS).

What this post will not be about: how to set up basic dimension security in SSAS, nor how to manage security.

In this post, I will highlight the difference between standard NTFS permission scope and the way SSAS handles Allowed and Denied sets when dealing with multiple roles. So if you define multiple roles on your solution, you should be on the lookout, because SSAS has some surprises.

NTFS

In NTFS, permissions are defined so that deny generally takes precedence over allow. As illustrated in the diagram below, we can define an Allowed set as a subset of the total set. The blue rectangle is to be seen as the full set.

NTFS Allowed Set

And as we apply a Denied set, this will restrict the previous Allowed subset to the intersection of the two sets, as illustrated below (the color indicates the finally allowed set):

NTFS Denied Set

SSAS

In SSAS however, the Allowed set takes precedence over Denied. So if you apply your usual NTFS logic to your dimensional security in SSAS, you may well be in for a surprise.

When no security is defined in a cube, everything is accessible to everyone. This color indicates the allowed set. (the complete rectangle)

Entire Cube Space

As soon as you introduce the first role, the cube is locked down to everyone not a member of said role.

Role w/ Allowed Set Introduced

If you then introduce restrictions in another role, you will get a different result than with NTFS-based security. The members of the role will still be able to see the full Allowed set. Even though we just Denied part of it!

Role w/ Denied Set Introduced

Example

By creating two test roles, we can easily bring out the, to some extent, unexpected behavior of SSAS.

Allowed Set Defined

As seen in Role Test 1, I have defined the Allowed set to contain Weeks 41 and 42 of the Year 2015. By browsing the Cube through SQL Server Management Studio, we can verify that the Allowed set is working:

Allowed OK

As I then introduce a new Role Test 2, and in that deny Week 42 of 2015, I would expect the browsing result to display only Week 41, but…

Denied Set

The result we get when browsing the cube using both roles shows all dates with data: (WTF! – Yes, you said it!) This happens because SSAS combines the roles by union: Test 2 has no Allowed set defined, so it implicitly allows the entire cube space minus Week 42, and the union of that with Test 1’s Allowed set covers everything.

Denied Set actually allowing everything!?

Fix

Clearly that was not the intention of denying Week 42. So, how do we fix the above violation of the Allowed set?

By adding an empty Allowed set ‘{}’ to the role containing the Denied set, in this case Test 2, as depicted below:

With this empty set ‘{}’ in place, we can browse the Allowed set again, but the Denied set still does not restrict the result set.

Allow-Denied-Meh

Final Thoughts

While the way SSAS interprets the Allowed and Denied sets may be straightforward for simple security tasks, I think the above examples show just how much testing one needs to do in order to be totally sure the desired permissions are reflected in the end user experience.

As a side note, adding an empty set to the Denied Set of the first Role (Test1) does not alter the outcome.

 
