Presentation at SQL Saturday in Kansas City

I just finished presenting on Automating Partition Management for Analysis Services at SQL Saturday in Kansas City. Within the next few days I will be uploading my presentation materials and project files for those interested, so please check back for the updated information.

Update

Here is the link to the Partition Management project. The SSIS tasks and table scripts were written using Adventure Works DW2008R2 but can be modified to work with any database:

http://cid-e91a5dff9098de4a.office.live.com/self.aspx/Blog%20shares/Partition%20Management.zip

Do we really need to customize our DSVs?

 

Recently I was reading an article by Boyan Penev on DSVs called “Using the DSV to its full potential” and it got me thinking about how I use DSVs…

Named Queries

When DSVs were introduced in SSAS 2005, I found myself using named queries quite a bit. But the more I read about cube design methodologies, the more I felt it was better to use views in the database to control what data is pulled into the cube rather than named queries. That way I have better control over the logic and don’t have to make changes to the cube all the time. In fact, the only case where I can see myself using named queries is if I were going to build two fact tables off of the same view or table; then I might use named queries to filter the data into the two fact tables. Other than that, I cannot see why I would ever use a named query.
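As a rough sketch of what I mean (using Adventure Works DW; the view name and column list are purely illustrative), the fact table in the DSV gets bound to a view like this, so the logic can change without redeploying the cube:

CREATE VIEW dbo.vFactInternetSales
AS
SELECT
    ProductKey,
    OrderDateKey,
    CustomerKey,
    OrderQuantity,
    SalesAmount
FROM dbo.FactInternetSales;
-- Any filtering or derived-column logic lives here instead of in the DSV.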

Named Calculations

This feature of the DSV I do use, specifically for the “hanger” dimensions that drive time calculations.
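In case you haven’t seen the pattern, here is a minimal sketch with purely hypothetical dimension and member names: the hanger dimension’s members (say, [Current Period] and [Year To Date]) come from a named-calculation column in the DSV, and the actual time logic is then assigned in the MDX script.

-- [Date Tool].[Calculation] is the hanger dimension built off the
-- named calculation; [Current Period] is its default member.
SCOPE ([Date Tool].[Calculation].[Year To Date]);
    THIS = AGGREGATE(
        { [Date Tool].[Calculation].[Current Period] } *
        PERIODSTODATE(
            [Date].[Calendar].[Calendar Year],
            [Date].[Calendar].CurrentMember
        )
    );
END SCOPE;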

 

I think Boyan wrote a great article and I suggest reading it. I just can’t see myself doing most of the things he outlined in his article, simply because I can do them all in a database-level view.

Do you use the DSV in the way that Boyan describes?

Is MDX going the way of COBOL?

I was reading an article on Chris Webb’s site the other day, and he starts the post off like this: “Even if blogging about MDX feels, these days, a bit like blogging about COBOL (I’ll be returning to DAX soon, I promise)…”
 
Really Chris? Is that how you feel? Has DAX taken over already? Is MDX now old hat?
 
Office 2010, which is needed to even use PowerPivot and DAX, isn’t even out of beta yet, and here we have one of our best and brightest MDX gurus saying he feels that blogging about MDX feels like blogging about COBOL. This really bothers me. I have been using MDX for the past 4-5 years and there are still a lot of things I don’t know about it. In reality, there are very few people I would classify as MDX experts, and to have one of them compare MDX to COBOL seems crazy to me.
 
First, Analysis Services isn’t going away anytime soon. Second, PowerPivot is not going to take over the world. Just to implement it you need Office 2010 and SQL Server 2008 R2, and to push it out on an intranet you need SharePoint 2010. I know there will be early adopters of all of these technologies, but do people really think it is time to shift their attention completely away from MDX and SSAS to Office 2010 and DAX? From all of the demos I have seen, I think PowerPivot has its place, but I don’t feel it replaces Analysis Services.
 
I am excited to play with PowerPivot and learn all of the new technologies and languages coming in the next year from MS, but I am definitely not forgetting the one that got us to this point.
 
And Chris… please don’t stop blogging about MDX and the inventive solutions you come up with to solve the real-world business issues that we all face on a day-to-day basis.
 

Moving KPIs from one cube to another

Recently I have been doing some work with SSAS KPIs and I found myself in a pickle. Most of my work on the KPIs had been in a local development database, and I was at a stage where the KPIs needed to be tested against a larger dataset. Since I had about 25 KPIs, I thought I would just script them out and add them to the larger cube. To my amazement and chagrin, I found there isn’t any built-in way to script out KPIs.
 
So after some thought I decided to attack this from the XMLA side. I scripted out my development cube and then started searching. I found what I was looking for at the bottom of the XMLA script: tucked between <Kpis> and </Kpis> was all of the XML for the 25 KPIs I had defined.
 
I copied the entire section between the <Kpis> tags and pasted it into Notepad. Next I went to my new cube and scripted it out as an ALTER (very important). I then scrolled to the bottom and pasted my KPI definitions between the <Kpis> tags. I syntax-checked the file and then clicked Execute.
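For reference, here is a heavily abbreviated sketch of where that collection sits in the ALTER script (the database, cube, and KPI IDs are placeholders, and nearly everything else is omitted):

<Alter AllowCreate="true" ObjectExpansion="ExpandFull"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyDatabase</DatabaseID>
    <CubeID>MyCube</CubeID>
  </Object>
  <ObjectDefinition>
    <Cube>
      <ID>MyCube</ID>
      <Name>MyCube</Name>
      <!-- ...dimensions, measure groups, etc.... -->
      <Kpis>
        <!-- paste the copied <Kpi> elements here -->
        <Kpi>
          <ID>My KPI</ID>
          <Name>My KPI</Name>
          <!-- Value, Goal, Status, and Trend expressions -->
        </Kpi>
      </Kpis>
    </Cube>
  </ObjectDefinition>
</Alter>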
 
Next I opened the new cube in BIDS, clicked on the KPI tab, and voilà! All of my KPIs were there. I did notice some formatting issues with things like the status definitions, etc., but nothing that would prevent the KPIs from running.
 
So, it wasn’t pretty, but I was able to move the 25 KPIs to the new cube without having to redefine them one at a time.

Reporting on SSAS Metadata

Dan Meyers over at Capstone Consulting has put together a package of Reporting Services reports that display SSAS metadata based on the Analysis Services DMVs. You can find them here. Dan did an awesome job on these reports, and as he notes in his blog, the design is based on the reports from BI Documenter.
 
One note: you will have to change the linked server name in some of the sub-report queries in order for all of the report actions to work. It seems Dan was using two different linked server names.
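If you haven’t queried the SSAS DMVs through a linked server before, the report queries follow this general shape (the linked server name here is a placeholder, and MDSCHEMA_MEASURES is just one example DMV):

-- Runs the DMV query on the Analysis Services instance behind the linked server.
SELECT *
FROM OPENQUERY(SSAS_LINKED, 'SELECT * FROM $SYSTEM.MDSCHEMA_MEASURES');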
 
 

Default Measures

Today I started seeing some weird results when I was using the NonEmpty function. If I just used NonEmpty without a filtering measure, like

   NonEmpty([Time].[Time].[Time])

then I wasn’t getting any rows returned. But if I put a filtering measure in there, like

   NonEmpty([Time].[Time].[Time], [Measures].[MyMeasure])

then I was getting data back. It occurred to me that I had not even thought about what my default measure was, so I ran the following query:
 
SELECT Measures.DefaultMember ON 0
FROM MyCube
 
To my surprise, it was a measure that would not really be applicable to most of my calculations. I was a little perplexed as to why SSAS chose that measure as my default, so after some "Binging" I found this excerpt on one of Mosha’s old SSAS site pages:
 
"For the default measure, the first non-hidden measure is chosen. If all measures are hidden, then the first measure is chosen. “First” here refers to the first measure in the measures collection in AMO and DDL (but not in MDSCHEMA_MEASURES, which is always alphabetically sorted). "
 
So I promptly changed the default measure to something that would have more meaning to the calculations I was writing.
 
To change the default measure, simply put the following statement at the bottom of your MDX script:

ALTER CUBE CURRENTCUBE UPDATE DIMENSION Measures, DEFAULT_MEMBER = <whatever measure you want to use>;
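For example, with a hypothetical [Sales Amount] measure:

ALTER CUBE CURRENTCUBE
UPDATE DIMENSION Measures,
DEFAULT_MEMBER = [Measures].[Sales Amount];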