I am super excited to have been selected to speak at the Intelligent Cloud Conference in Copenhagen, May 29th-30th this year.
The full schedule for the event can be viewed here.
My talk will be on Power BI, or to be more precise, a talk on tips and tricks to get you started ingesting data with the Power BI Desktop application. So if you are a veteran SSIS developer, tag along to get your mind blown by the simplicity of Power Query. Or maybe you are a prospective Data Scientist who needs something like Excel on steroids. Either way, this will be a session for you.
I will start off by demonstrating how easy it is to import data straight into Power BI from csv or xls(x) files that have a simple structure, then move on to scraping data from a web page. Gradually the tasks become more and more complex, as we deal with header/footer issues, or a variable number of columns across multiple files belonging to the same data set. Once we have established the elegance of Power Query in Power BI, I will demonstrate how easy it is to Enterpricylize (yep, made up that word for this situation) your data discoveries by exposing the newly created model in Azure Analysis Services, with a few clicks in a wizard.
However, if you are not into Power BI or Business Intelligence, the conference has a lot more to offer. It’s a combined Data Platform and Azure conference, meaning there will be an equal number of tracks divided among the two topics, filled with super interesting sessions such as “Failure prediction for manufacturing industry with SQL Server & R”, “Maturing IoT solutions with Microsoft Azure”, “How to make hundreds of servers work for you” and the list could go on and on.
I am really looking forward, not only to speaking at this conference, but also to listening in on some of the other great sessions being presented.
We will be showing how to optimize your BI solution through the use of the new SQL Server 2014 In-Memory OLTP technology, code-named Hekaton. The first of two scenarios will demonstrate how to performance-optimize your ETL by using Hekaton as a reference database. The second demo will build a ROLAP cube on top of Hekaton, to see what benefits can be achieved there.
See you there?
I will be speaking about how to create a custom assembly, in order to use the power of the relational engine knitted into MDX. I will take the information described in this blog post here, and combine it with the bloated version of AdventureWorks, courtesy of Bob Duffy (blog|twitter), in order to get some critical mass to display the advantages.
At a client we had to set up a server ourselves, in order to have a development environment, since they couldn’t spare the server capacity. Odd situation, but that’s not the point. The point is that somewhere during the installation process of this server, after installing SQL Server, we noticed that the server wasn’t a domain controller. Now, SQL Server and in particular Reporting Services (SSRS) had been installed with credentials granted to the built-in Network Service account. This account got a new SID once we installed the domain controller, and after that SSRS started acting up on us.
The new Network Service account needs to be granted Modify permissions on the following folders:
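As an illustration of how such a grant can be applied (the exact folder list depends on your SSRS version and instance name, so the path below is only an example for a default SQL Server 2014 instance):

```bat
:: Illustration only - adjust the path to match your SSRS version/instance.
:: Grants the built-in Network Service account Modify (M) rights,
:: inherited by subfolders (CI) and files (OI).
icacls "C:\Program Files\Microsoft SQL Server\MSRS12.MSSQLSERVER\Reporting Services\LogFiles" /grant "NT AUTHORITY\NETWORK SERVICE":(OI)(CI)M
```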
Often I meet the business requirement of listing data for the last X months. In this blog post, I will give my shot at how this can be done elegantly and with maximum flexibility.
We start off backwards and enhance the Data Source View in the cube with a custom Named Calculation called UtilityKey (or whatever you like). The query is probably the simplest Named Calculation you’ll ever write, since it should look an awful lot like the next figure:
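In case the figure does not render, the Named Calculation is nothing more than a constant expression on the fact table, along the lines of:

```sql
-- Named Calculation "UtilityKey" on the fact table in the Data Source View.
-- The expression is just the constant 0, so every fact row gets the same key.
0
```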
When this Named Calculation is ready to go, we make use of the fact that a dimension joined on the constant key 0 gets hold of every row in that fact table! Enter the shell dimension.
We create a table in SQL Server that holds the dimension we want to be able to slice by, using the utility key; in this case a Periodicity dimension, holding the members by which we want to slice the data. The table in this example looks as follows:
We then populate it with data that matches our business requirements; in this case we need the members All, 1 Month, 3 Months, 6 Months and 12 Months:
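A minimal T-SQL sketch of the table and its members (the table and column names are my assumptions, modeled on the description above; the original figures show the exact definitions):

```sql
-- Hypothetical names; adjust to your own conventions.
CREATE TABLE dbo.DimPeriodicity (
    PeriodicityKey  int          NOT NULL PRIMARY KEY,
    PeriodicityName nvarchar(50) NOT NULL
);

-- Key 0 doubles as the member that matches the UtilityKey (= 0)
-- on every fact row, which is what makes the shell dimension work.
INSERT INTO dbo.DimPeriodicity (PeriodicityKey, PeriodicityName)
VALUES (0, N'All'),
       (1, N'1 Month'),
       (2, N'3 Months'),
       (3, N'6 Months'),
       (4, N'12 Months');
```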
We then use this dimension table to create a dimension in the cube, based on the key and name columns.
We end up with the following result, after trimming the names, setting IsAggregatable = False, etc.:
The clever part of this shell dimension is that we then define a SCOPE in the cube that allows us to slice the data as we see fit. We use the regular time dimension present in the cube to navigate through the five different members of our Periodicity dimension, like this:
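A sketch of one such SCOPE assignment in the cube’s MDX script (the dimension, hierarchy and member names here are assumptions; the pattern is repeated for the 3, 6 and 12 month members with the corresponding number of periods):

```mdx
-- Hypothetical names; the point is the SCOPE pattern, not the exact identifiers.
SCOPE ([Periodicity].[Periodicity].&[1]);  -- the "1 Month" member
    THIS = Aggregate(
        -- take the last month of the regular Calendar dimension,
        -- evaluated at the "All" member (key 0) of Periodicity
        ( LastPeriods(1, Tail([Date].[Calendar].[Month].Members, 1).Item(0)),
          [Periodicity].[Periodicity].&[0] )
    );
END SCOPE;
```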
Slicing the cube by Periodicity and not by the regular Calendar dimension yields the desired result to the business requirement.