MsBIP Meeting – Inside xVelocity

On Thursday 26th I will be attending the 5th Microsoft BI Professionals Denmark meeting at Microsoft in Hellerup. The meeting will feature prominent guests, as Marco Russo (blog|twitter) and Alberto Ferrari (blog|twitter) have offered to stop by and share some knowledge on these two subjects:

Inside xVelocity (VertiPaq)
PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize memory usage, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind: xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones!

Choosing between Tabular and Multidimensional
You have a new project and you have to make an important decision upfront. Should you use Tabular or Multidimensional? It is not easy to answer: sometimes there is a clear choice, but most of the time both options might be correct, at least at the beginning. In this session we’ll help you make an informed decision, correctly evaluating the pros and cons of each according to common scenarios, considering both short-term and long-term consequences of your choice.

This session should be loads of fun 🙂

SQL Server 2012 Beta Exams

I’ve signed up for three (3) of the five (5) available beta exams in the new SQL Server 2012 Business Intelligence certification program, which Kendra Little (blog|twitter) has written a great post about here.

Since seats on these beta exams are limited, I went straight for the BI-specific tests. So on Friday the 23rd of March I took [70-467] Designing Business Intelligence Solutions with Microsoft SQL Server 2012, followed by a short break at SQLBits, then on to [70-466] Implementing Data Models and Reports with Microsoft SQL Server 2012 on the 3rd of April.

Tomorrow I’m up for [70-461] Querying Microsoft SQL Server 2012.

First of all, I’m taking the tests to identify my weak spots in regards to the new version of the stack. Secondly, it’s a great, and not to forget free, opportunity to achieve certifications that look nice on the résumé (and ahead of my peers at work).

My first impression of the tests so far is that Microsoft Learning (blog|twitter) has done a really good job at redesigning the tests. IMHO the tests go deeper and further than their predecessors. And maybe it’s just me, but it seems as if there is a more real-world feel to the questions than I’ve previously experienced. Good job, MSLearning!


SQLBits X PreCon SSAS Deep Dive Summary – Part 1

As an exercise for myself I will be writing fairly short posts on the sessions I attended at SQLBits X. I will begin with the training day, where I went for the Deep Dive into SSAS.

The only bad thing to say about the SSAS Deep Dive precon by Akshai Mirchandani was that he had so much material he had to skip some of it. Besides that, it was a fantastic whole-day session.

The material was based on the SSAS Maestro program and on several of the white papers and guideline documents out there. See a complete list by Vidas Matelis (blog|twitter) here.

General view of SSAS

For starters, Akshai took the crowd through the major parts contributing to the execution of an incoming query. The main parts are illustrated in the following diagram:

(Diagram: MDX query execution architecture)

What this diagram doesn’t show is the change made in the 2012 edition to better serve processing and querying by splitting the work across separate thread pools. The 2012 edition has two (2) such pools: one for process jobs and one for I/O jobs. Read more about this new feature here.
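For reference, the split also shows up in the server’s msmdsrv.ini, where the new IOProcess pool is configured separately from the Process pool. A sketch of the relevant fragment (values shown are placeholders; check your own instance before changing anything):

```xml
<ThreadPool>
  <!-- Pool for processing jobs (reading, aggregating, indexing partitions) -->
  <Process>
    <MinThreads>0</MinThreads>
    <MaxThreads>0</MaxThreads> <!-- 0 = server picks a default based on cores -->
  </Process>
  <!-- New in SQL Server 2012: dedicated pool for storage-engine I/O jobs -->
  <IOProcess>
    <MinThreads>0</MinThreads>
    <MaxThreads>0</MaxThreads>
  </IOProcess>
</ThreadPool>
```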

Dimension Design 1on1

This was followed by a best-practice walk-through on designing dimensions in a cube.
Some of the highlights were:

  • Simple Cubes => Faster
  • String AttributeKey => BAD
  • Dummy attributes
  • Natural Hierarchies
  • String Store (4 GB limit)
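To illustrate the “String AttributeKey => BAD” point: attribute keys should be small integers, keeping the string only as the attribute’s NameColumn. A hypothetical dimension-table sketch (table and column names are my own, not from the session):

```sql
-- Hypothetical example: use an integer surrogate as the attribute KeyColumn.
-- The string stays as the NameColumn; integer keys compare faster and keep
-- the dimension stores smaller than string keys would.
CREATE TABLE dbo.DimCustomer (
    CustomerKey   int IDENTITY(1,1) NOT NULL PRIMARY KEY, -- use as KeyColumn
    CustomerCode  nvarchar(20)  NOT NULL,                 -- business key; avoid as KeyColumn
    CustomerName  nvarchar(100) NOT NULL                  -- use as NameColumn
);
```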

The “dummy attribute” construction was new to me, and between note-taking and listening I lost the general concept. But I believe it had to do with dimensions where no immediate relationship could be made. Anyone reading this, please feel free to fill in the big blanks.

Akshai then continued on to give a brush-up on Parent-Child (PC) Hierarchies in Analysis Services. Here he wanted to debunk some of the myths out there in regards to size and usage. In general, the guideline he gave was that one or two PC Hierarchies are acceptable in a cube. In SQL Server 2005 there was a guideline saying that whenever a dimension with a PC Hierarchy had more than 250K members, it was advised to convert this hierarchy into a natural hierarchy. This can be done with the Parent-Child Dimension Naturalizer found on CodePlex. Akshai stated that the 250K-member limit is no longer advised from SQL Server 2008 onward, but when performance issues arise, converting a PC Hierarchy can still be a solution.
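As a reminder of why big PC hierarchies can hurt: aggregations are only built for the key attribute and the top attribute of a PC hierarchy, so totals for all intermediate members are computed at query time. A hedged MDX sketch against an assumed Employee PC hierarchy (cube and hierarchy names are illustrative, in the Adventure Works style):

```mdx
-- Assumed names for illustration only.
-- Every non-leaf employee's total below is aggregated at query time,
-- because no aggregations exist for intermediate PC levels.
SELECT
    [Measures].[Sales Amount] ON COLUMNS,
    DESCENDANTS(
        [Employee].[Employees].[All Employees],
        ,
        SELF_AND_AFTER
    ) ON ROWS
FROM [Adventure Works]
```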

In regards to Many-to-Many (M2M) dimension relationships, Akshai had an interesting notion on compression. Luckily, BIDS Helper has a built-in feature that generates the rather complex SQL needed for this operation. The technique is described in this white paper.
In general, Akshai recommended BIDS Helper and highlighted several of the tool’s features.
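The compression idea, as I understand it from the white paper, is to shrink the intermediate (bridge) measure group: when many fact keys share the exact same set of related members, they can all point to one shared “matrix” key instead of repeating the set. A rough T-SQL sketch of the idea (table and column names are hypothetical; BIDS Helper generates the real, more robust version):

```sql
-- Hypothetical bridge table: dbo.BridgeAccountCustomer(AccountKey, CustomerKey).
-- Step 1: fingerprint each account's set of customers
-- (FOR XML PATH keeps this 2008-friendly).
WITH AccountSets AS (
    SELECT AccountKey,
           (SELECT CAST(CustomerKey AS varchar(20)) + ','
            FROM dbo.BridgeAccountCustomer AS b2
            WHERE b2.AccountKey = b.AccountKey
            ORDER BY CustomerKey
            FOR XML PATH('')) AS SetFingerprint
    FROM dbo.BridgeAccountCustomer AS b
    GROUP BY AccountKey
)
-- Step 2: accounts with identical sets share one MatrixKey,
-- so the compressed bridge stores each distinct set only once.
SELECT AccountKey,
       DENSE_RANK() OVER (ORDER BY SetFingerprint) AS MatrixKey
FROM AccountSets;
```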

ΑΩ – Biggest take-away

Maybe the biggest take-away from the part on dimensions was to always check whether you really need AttributeHierarchyEnabled set to true on an attribute. There is much to gain by going through your attributes and disabling those you only need as detail, not in hierarchies. See this TechNet article on the subject.
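In the dimension’s DDL this is just one property per attribute. A sketch of the relevant fragment (the attribute name is assumed for illustration):

```xml
<!-- Inside the dimension definition: keep 'Address' as detail/member-property
     data only, so no attribute hierarchy (and no hierarchy store) is built. -->
<Attribute>
  <ID>Address</ID>
  <Name>Address</Name>
  <AttributeHierarchyEnabled>false</AttributeHierarchyEnabled>
</Attribute>
```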