I have a table based around requisitions, and each requisition has a number of positions. That number can change over time through updates to pertinent rows rather than through transaction-like records that record an entire history, and I'm only able to get a monthly snapshot of the table. What I decided to do is still use one table for OLAP (fact_requisitions) but add a column called period_key that refers to the month the data comes from. So if I have two months of data then the table has each requisition twice, possibly with differing position counts, and new requisitions from the second month are only present once. Then I tried to filter the MDX query like so:
SELECT {
([Dim TimeRequestClosed].[Year - MonthNumber].[Year_Text].&[2008].&[1],[Dim Requisitions].[Period].[Period Key].&[200801])
}
ON COLUMNS,
NON EMPTY
{
([Dim Location].[Region Name].MEMBERS, [Dim Location].[Period Key].&[200801])
}
ON ROWS
FROM
[Requisitions]
WHERE
[Measures].[Request Closed Date Count]
This query doesn't work even though the data is there; it just returns nulls. Am I going about this all wrong? If not, what might I be doing wrong, and how would I get the query to return more than one period (e.g., tell Dim Requisitions to match up with Dim Location on the period key)?
From what I can see, the 'varbinary(max)' data type is not supported, and the 'image' data type is supposed to go away. Is there some other way to store large chunks (10 MB to 100 MB) of data in an SSEv DB?
If I have to use the 'image' data type to do this, does anyone have a code sample that would let me push an array() of numbers into an 'image' field, and unload an 'image' field back into an array()?
I have been trying to get the ValueR column of the following query through MDX, but I am getting ValueW as the MDX output instead.
select exp(Log(sum(MTMROR) + 1)) - 1 as ValueW,
       exp(sum(Log(MTMROR + 1))) - 1 as ValueR
from Temp_Performance
where Rundate in ('2015-03-01', '2015-03-02')
The MDX written for the above query is:
With
  Member [Measures].[LogValuePre] as ([Measures].[MTMROR] + 1)
  Member [Measures].[LogValuePre1] as VBA
[Code] ...
The [MTMROR] measure has the aggregate function Sum. What I gather from this behavior is that MDX aggregates first, and the default aggregation function is Sum. When I look at the value with more granular data by putting the date dimension on rows (un-commenting the date dimension), I get the correct log and exp-log values. It shows the correct value because the date dimension is the most granular level in the fact table. When I try to get the data at a less granular level (fund level), the sum function is applied automatically.
If I set AggregateFunction to None in the cube structure, I get null as the output.
How can I apply the log function before the sum function in the [MTMROR] measure?
I have created one multidimensional cube. While browsing it in Excel (Analyze in Excel), I am unable to create a Timeline slicer. It gives me the following error: "We can't create a Timeline for this report because it doesn't have a field formatted as Date".
I have a Dim_Date table with a Date column at day-level granularity. In PowerPivot we would mark the Dim_Date table as a 'Date Table'. Do we have to set something similar here so that Excel recognizes the Date format for the Timeline slicer?
I have created a simple cube in BI Studio against an Oracle relational data warehouse. In this cube I have created a perspective in which I have selected (included) only a small subset of measures and dimensions. When I view this perspective on the Browse tab of BI Studio, only the entities included in the perspective are available for report construction, as expected.
Next, I generated a model for this cube in Report Manager. Now here is the problem: when I opened the model in Report Builder and selected the same perspective as above, all of the entities in the cube were displayed, including the fields I explicitly did not include in the perspective. I then looked at the .smdl file describing the model, and it appears that the perspective description section includes all of the cube's entities, even the ones that should not have been included. It seems as if the problem is occurring during model generation. I also tried generating the model in Management Studio, and it seems to do the same thing.
Any ideas on how to fix this? Could I be doing something wrong (probably)? I have to give a presentation soon, and this is a big deal for the project stakeholders.
By the way, I am using the 180-day evaluation of SQL Server 2005 with the SP2 CTP installed.
I am looking for a script which exports data (by DTS?) into a flat file and stores the files (according to the date stamp in the transactions) with names like 05_2002.txt, 06_2002.txt, etc. The data in the Transactions table will be deleted after some time to prevent this particular table from growing too fast.
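A minimal sketch of one way to script this in T-SQL, assuming xp_cmdshell is enabled and that the date-stamp column is called TranDate (the database name, output path, and column name are all assumptions):

DECLARE @y int, @m int, @cmd varchar(1000)

DECLARE month_cur CURSOR FOR
    SELECT DISTINCT YEAR(TranDate), MONTH(TranDate) FROM dbo.Transactions

OPEN month_cur
FETCH NEXT FROM month_cur INTO @y, @m
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build a bcp command that exports one month to a file like 05_2002.txt.
    SET @cmd = 'bcp "SELECT * FROM MyDb.dbo.Transactions WHERE YEAR(TranDate) = '
             + CAST(@y AS varchar(4)) + ' AND MONTH(TranDate) = ' + CAST(@m AS varchar(2))
             + '" queryout C:\export\' + RIGHT('0' + CAST(@m AS varchar(2)), 2)
             + '_' + CAST(@y AS varchar(4)) + '.txt -c -T'
    EXEC master..xp_cmdshell @cmd
    FETCH NEXT FROM month_cur INTO @y, @m
END
CLOSE month_cur
DEALLOCATE month_cur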
I have 10 Oracle output tables. I have to transfer data on a monthly or ad hoc basis. Each table will have millions of records. How do I transfer the data from Oracle to SQL Server 2005, and which is the best way to do it?
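For an ad hoc pull, a linked server is one option; here is a minimal sketch, assuming a linked server named ORA_SRC has already been configured (the linked server, schema, and table names are hypothetical). For millions of rows on a monthly schedule, an SSIS package with an OLE DB source against Oracle is usually the better fit.

-- Pull one Oracle table across the linked server into a staging table.
SELECT *
INTO dbo.Stage_Table1
FROM OPENQUERY(ORA_SRC, 'SELECT * FROM OWNER1.TABLE1')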
I'm trying to develop a query that provides sales data by Customer.GroupCode in monthly columns as depicted below:
GrpCd      JAN    FEB     MAR  APR  MAY  JUN  JUL  AUG  SEP  OCT  NOV  DEC  TOT
Film       5,000  15,000                                                     20,000
Aero Elct  3,000  950                                                        3,950
Desg Edu   150                                                               150
Here's a simplified version of the DDL:

CREATE TABLE invchead (
    invoicenum   int NULL,
    invoicedate  datetime NULL,
    invoiceamt   decimal(16, 2) NULL,
    custnum      int NULL
)
The query below gets me close, but it gives me one row for each customer. So if I have 5 customers with the same group code, I get 5 rows for that group code. I need to modify it, or come up with a different approach, so that I get only one row for each GroupCode.
SELECT distinct c.Name, c.GroupCode,
    (SELECT SUM(InvoiceAmt) FROM InvcHead
     WHERE InvcHead.custnum = i.custnum
       AND DATEPART(year, InvcHead.invoicedate) = DATEPART(year, i.invoicedate)
       AND DATEPART(month, InvcHead.invoicedate) = 1) JAN,
    (SELECT SUM(InvoiceAmt) FROM InvcHead
     WHERE InvcHead.custnum = i.custnum
       AND DATEPART(year, InvcHead.invoicedate) = DATEPART(year, i.invoicedate)
       AND DATEPART(month, InvcHead.invoicedate) = 2) FEB,
    ......
    (SELECT SUM(InvoiceAmt) FROM InvcHead
     WHERE InvcHead.custnum = i.custnum
       AND DATEPART(year, InvcHead.invoicedate) = DATEPART(year, i.invoicedate)) TOT
FROM InvcHead i
INNER JOIN Customer c ON (i.custnum = c.custnum)
WHERE i.invoicedate >= '1-1-2007' AND i.invoicedate < '1-1-2008'
Grateful for any advice that will get me closer to accomplishing this.
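One way to get a single row per group code is conditional aggregation instead of per-customer correlated subqueries; a minimal sketch against the posted DDL, assuming GroupCode lives on the Customer table as in the original query:

SELECT c.GroupCode,
       SUM(CASE WHEN MONTH(i.invoicedate) = 1 THEN i.invoiceamt END) AS JAN,
       SUM(CASE WHEN MONTH(i.invoicedate) = 2 THEN i.invoiceamt END) AS FEB,
       -- ...repeat the CASE expression for MAR through DEC...
       SUM(i.invoiceamt) AS TOT
FROM invchead i
INNER JOIN Customer c ON c.custnum = i.custnum
WHERE i.invoicedate >= '20070101' AND i.invoicedate < '20080101'
GROUP BY c.GroupCode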
I have a big table (120 million records) and I want to insert all of it into another table. Since this bulk insert operation can cause all kinds of performance problems, I would like to do the bulk insert in small chunks. The table does not have any identity column.
Can someone give me an example, using ROWCOUNT or a loop, that runs an INSERT INTO ... SELECT statement repeatedly and inserts, for example, 5000 rows at a time?
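A minimal sketch of such a loop, under the assumption that the table has some unique key (called key_col here; all table and column names are hypothetical) that can be used to track progress, since there is no identity column:

DECLARE @rows int
SET @rows = 1

WHILE @rows > 0
BEGIN
    -- Copy the next 5000 rows that have not been copied yet.
    INSERT INTO dbo.TargetTable (key_col, col1, col2)
    SELECT TOP (5000) s.key_col, s.col1, s.col2
    FROM dbo.SourceTable s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.key_col = s.key_col)
    ORDER BY s.key_col

    -- Stop when no rows were left to copy.
    SET @rows = @@ROWCOUNT
END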
I have a table that's 25,000,000 records... about 10 fields. I need to export this data to a flat file in no more than 500,000 record chunks. I've tried the following algorithm, adding a flag field called "exported" with default value 0.
do:
- mark a random 500,000 records, setting exported = -1
- export everything in the table where exported = -1
- set exported = 1 where exported = -1
loop
This was pretty slow, taking about 10 hours last night to run.
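An alternative sketch that avoids updating flags at all: number the rows once and export bucket by bucket. It assumes some column (id here, a hypothetical name) gives a stable ordering:

;WITH numbered AS (
    -- Each chunk value covers 500,000 consecutive rows.
    SELECT *, (ROW_NUMBER() OVER (ORDER BY id) - 1) / 500000 AS chunk
    FROM dbo.BigTable
)
SELECT * FROM numbered WHERE chunk = 0  -- feed chunk N to file N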
I find myself wanting a sort of split-dataset task in SSIS: being able to split a chunk of records out of a dataset and handle them separately. Anyone have ideas for me?
I need to export records to a flat file using a dataflow task, but want no more than 50,000 records in each file. What's the best way to automate this?
I've got a report that is using a cube as a data source and I can't get the report to show all the data. Only data at the lowest level of the cube is displayed. The problem is that most of the data I'm concerned with is at higher levels. There's no problem with the MDX. I get the correct results when I run the query.
I'm using a table to show the results. I've also tried a matrix, but I get the same results. I'm using SSRS 2005 and SSAS 2000.
Anyone have experience with this? Am I missing something simple?
Here's a little SP to break up those long-running, massively-locking, bring-app-to-a-halt queries. By default it does 500 rows at a time and allows for a maximum SQL query size of 4000 characters; it should be trivial to adjust those.
Cheers -b
CREATE PROCEDURE p_BatchExecute (@vcSQL varchar(4000))
AS
set nocount on
DECLARE @iRows int
select @iRows = 1
SET ROWCOUNT 500
WHILE @iRows > 0
BEGIN
    print 'Executing batch of 500...'
    exec (@vcSQL)
    set @iRows = @@ROWCOUNT
END
GO
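A hypothetical usage example (table and column names are made up): because SET ROWCOUNT caps each execution and the loop stops once @@ROWCOUNT reaches zero, a huge delete runs as many short transactions instead of one long lock-holding one.

EXEC p_BatchExecute 'DELETE FROM dbo.OrderHistory WHERE OrderDate < ''20000101'''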
We have data that consists of an employee number, a start time and a finish time, similar to the example below
EMP    STARTTIME             ENDTIME
00001  10-Feb-2012 06:00:00  10-Feb-2012 10:00:00
00002  10-Feb-2012 07:15:00  10-Feb-2012 10:00:00
00003  10-Feb-2012 08:00:00  10-Feb-2012 10:00:00
I am trying to come up with a procedure in SQL that will give me each 15 minute block throughout the day and a count of how many employees are expected to be at work at the start of that 15 minute block. So, given the example above I would like to return
10-Feb-2012 00:00:00    0
10-Feb-2012 00:15:00    0
10-Feb-2012 06:00:00    1
10-Feb-2012 06:15:00    1
[code]....
I'm not too worried if the date part is not included in the result, as this could be determined elsewhere, but how can I do this grouping/counting?
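A minimal sketch of one approach, assuming the sample data lives in a table called Shifts (a hypothetical name): generate the 96 quarter-hour slot starts for the day, then count the shifts that cover each slot start.

DECLARE @day datetime
SET @day = '20120210'

;WITH slots AS (
    -- 96 fifteen-minute slot starts for the chosen day
    SELECT DATEADD(minute, 15 * (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1), @day) AS slot_start
    FROM (SELECT TOP (96) object_id FROM sys.all_objects) x
)
SELECT s.slot_start, COUNT(sh.EMP) AS employees_at_work
FROM slots s
LEFT JOIN Shifts sh
  ON sh.STARTTIME <= s.slot_start
 AND sh.ENDTIME > s.slot_start
GROUP BY s.slot_start
ORDER BY s.slot_start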
I am just starting out using CUBEMEMBER/CUBEVALUE formulas in Excel linked to a SQL OLAP database, using this method for some custom reports where pivot tables are not suitable. The time dimension values include months, quarters and years, and CUBEMEMBER formulas like
=CUBEMEMBER("OLAPCUBE","[Time].[Time].[Year].&[2015].&[1].&[1]") work fine - 1st quarter, 1st month, etc.
Is there a straightforward notation to aggregate months, or do I need to use a plus sign to add a number of CUBEMEMBER formulas together? In other words, is there an easier way to get, say, Jan to July 2015 totals than adding the seven monthly CUBEMEMBER formulas together?
I'm using the code below to send files that are in a blob file in my database to the browser client. The code sends the file in chunks in order to increase performance. The file I'm using to test with is 7MB. It works great on Windows XP with any browser. It takes virtually the same amount of time compared to downloading the file directly from the webserver. However, Windows 2000 and Mac OS X both take about 4x the amount of time it takes to download the file on XP machines. Why the performance difference? Is there anything I can do to fix this? I tried downloading the file directly from the webserver instead of getting it out of the database and it takes the same amount of time on all 3 OS. I had the same problem on Windows XP when I wasn't sending the file in chunks, but after using the code below, it started working for XP only.
Dim bufferSize As Integer = 24000
Dim outbyte(bufferSize - 1) As Byte
Dim retval As Long
Dim startIndex As Long = 0

Dim sql As String = "SELECT ..."
Dim cmd As New SqlCommand(sql, conn)
conn.Open()
Dim dr As SqlDataReader = cmd.ExecuteReader(CommandBehavior.SequentialAccess)

If dr.Read() Then
    ' Reset the starting byte for a new BLOB.
    startIndex = 0

    ' Read bytes into outbyte() and retain the number of bytes returned.
    retval = dr.GetBytes(DocCol, startIndex, outbyte, 0, bufferSize)

    Current.Response.Clear()
    Current.Response.Buffer = True
    Current.Response.ContentType = "application/octet-stream"
    Current.Response.AddHeader("Content-Disposition", "attachment; filename=" & myfile & "." & myextension)

    Do While retval = bufferSize
        Current.Response.BinaryWrite(outbyte)
        Current.Response.Flush()

        ' Reposition the start index to the end of the last buffer and fill the buffer.
        startIndex += bufferSize
        retval = dr.GetBytes(DocCol, startIndex, outbyte, 0, bufferSize)
    Loop

    ' Write the remainder of the last chunk (retval bytes, not a full buffer).
    Dim remaining(retval - 1) As Byte
    Array.Copy(outbyte, 0, remaining, 0, retval)
    Current.Response.BinaryWrite(remaining)
    Current.Response.Flush()
    Current.Response.Close()
End If

dr.Close()
conn.Close()
Using the SqlClient provider, I'm trying to write big data chunks of maybe 20 MB each to SQL Server, storing them in BLOBs using blobColumn.Write(...) with a .NET 2.0 DbCommand object calling a stored procedure:
CREATE PROCEDURE [dbo].[putBlobByPK]
(
@id dKey
, @value VARBINARY(MAX)
, @offset bigint
, @length bigint
, @ModDttm dModDttm OUT
, @ModUser dModUser OUT
, @ModClient dModClient OUT
, @ModAppl dModAppl OUT
)
....
When doing this, it works exactly 3 times; then the application hangs (forever).
When looking in the SQL Server log, I find the following two errors:
Error: 4014, Severity: 20, Status: 2.
A fatal error occurred while reading the input stream from the network. The session will be terminated.
I don't get this error on the client! OK, the session died.
What may be the problem?
I write big chunks like this to avoid many small writes, because the data will later be replicated using peer-to-peer replication, and the more writes it takes to write the whole BLOB, the larger the transaction log of the subscriber database becomes.
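For reference, a minimal sketch of what the elided body of such a procedure might look like (the table and key names are assumptions): the .WRITE clause of UPDATE patches @length bytes at @offset into a VARBINARY(MAX) column without rewriting the whole value, which keeps each statement's log usage proportional to the chunk size.

-- Hypothetical body: patch one chunk into the stored BLOB.
-- Assumes the row already exists and the column has been initialized (non-NULL).
UPDATE dbo.BlobStore
SET value.WRITE(@value, @offset, @length)
WHERE id = @id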
I have made sure that I am a member of the OLAP Administrators group on the server, and everything seems to work all the way through storage design and cube processing, but when I browse the cube I do not get any data.
Can someone help me out by telling me what a data cube is? If you have any examples, I would like to look at them.
We are in the process of using data cubes through SQL. What we are looking for is to have a summary of the data, but also, if we click on a grouping, to be able to go to the details of the data.
I just wanted to know if there is any information or websites I can go and look at.
When I make a call to GetSchemaDataSet with a restriction of a cube name that has a space in it, the call fails. Following is a sample of the code:

adoRestriction = new AdomdRestriction("CATALOG_NAME", "Contoso Telecom_Contoso");
adoRestrictions.Add(adoRestriction);
dataSet = conn.GetSchemaDataSet("MDSCHEMA_CUBES", adoRestrictions);

I am running SQL Server 2005 Analysis Services SP2. Is there some way to qualify the cube name in the restriction, or is this just a bug? Thanks.
Hi friends, I am new to the MSAS world and need help with this. I want to pull data from an MSAS cube programmatically. The only way I know is through ODBO, but that won't help me in this case, because I might have to drill down to all possible intersections stored in MSAS (at least all of the stored members). Doing this through MDX could be a humongous task, at least manipulating the data taken out using ODBO. I might be missing something here. Can anybody help? It would also help if somebody could tell me whether any other approach is possible.
I've seen posts about merging two databases, but everyone is talking about two SQL-based databases. I have a data cube, based on an MDX query, giving me all the sales data. I have an Excel sheet with the budgets in it; thanks to the connector I can browse through my Excel file using SQL statements.
Now the problem I have: I want to merge these two sets of data into one report, providing the sales figures from the cube with the budgets from the Excel file next to them.
So basically, is it possible to merge an MDX-based database with a SQL database? Because all the solutions I have found so far involve joining databases, which is always done at the SQL level.
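One relational-side option: if the budget workbook is readable from the server, OPENROWSET can expose it to T-SQL so it can be joined to sales figures staged out of the cube. A rough sketch only; the file path, sheet name, and column names are all assumptions, and ad hoc distributed queries must be enabled:

SELECT b.GroupCode, b.Budget
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Reports\Budgets.xls',
                'SELECT * FROM [Budget$]') AS b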
I need to create a package that updates the dimensions and cube data from a data warehouse on a daily basis. I was going to create a Data Flow that takes the data from the DW source and feeds it to a Dimension Processing destination to update the dimensions, and use a Partition Processing destination in the same manner to update the cube, but then I came across the Analysis Services Processing Task, which seems to do the job as well. I am kind of confused about which way to go. Any recommendations?
I want to make a package in SSIS that automatically processes my data cube and writes some log information (two INSERT statements to my log table with the current date and the result of the operation, successful/unsuccessful). I tried to set the data source to Analysis Services and I found my cube, but I don't know where I can add my cube to the project or how I can design this. Can anybody tell me how? Thanks.
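The logging half can be done with two Execute SQL Tasks wrapped around the processing step; a minimal sketch with a hypothetical log table, where the second statement is wired to the success (or failure) precedence constraint of the Analysis Services Processing Task:

-- Before processing (first Execute SQL Task):
INSERT INTO dbo.CubeProcessLog (LogDate, Result) VALUES (GETDATE(), 'started')

-- After processing (second Execute SQL Task):
INSERT INTO dbo.CubeProcessLog (LogDate, Result) VALUES (GETDATE(), 'successful')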
I have a problem with date conversion when the date is coming from a cube.
I mean, the function "=cdate(Fields!Signature_Date.Value)" works fine when the date is always filled in, but when my record is equal to nothing, I get the value "#Error" in my report...
What is the best way to avoid displaying this value in my report?
I have used the code "=iif(Fields!Signature_Date.Value = nothing, nothing, cdate(Fields!Signature_Date.Value))" but it doesn't work either...
Please, if the only way to resolve this is to convert the field in the MDX statement, could you give me a code example?
I need help, if anyone can... My need is this: I am developing my cube using SSAS. I want to access my cube in multiple languages, e.g. English, Japanese, or others. At the moment I have descriptions in two languages, English and Japanese, in the SQL database.
In SSAS Translations I can translate the names of measure groups, dimensions, perspectives, KPIs, actions, named sets and calculated members. But there is also a need to map columns according to the selected language, so that I can display the data in the proper language.
Suppose I select Japanese: the description should come from the column where the Japanese description is stored.
Or is there any other method by which I can see my reports/data in any language?