Data Warehousing :: How To Decide Whether To Develop Operational Or Analytical Reports
Jul 15, 2015
I have received some reports and I have been asked to decide whether they should be developed as operational reports or analytical reports.
Basically, I want to understand what points need to be considered when deciding between analytical reporting (cubes) and operational reporting.
We use timed subscriptions to do almost all of our reporting. Reports are delivered (primarily via e-mail and printer) once they are completed and users don't have to "watch the pot boil" so to speak.
Apparently SSRS has some load balancing capability whereby it lets only a limited number of threads/reports run concurrently. We often reach this max and lock ourselves up on some very long-running reports, causing other important reports to wait a long time.
We've added some operational reports (i.e. document prints) to the mix. These reports run off of OLTP data. They are very fast and very high priority. Waiting on them is not an option. Is there some way we can get SSRS to work on these operational reports in preference to other types of reports (e.g. "just for kicks" reports)? I think we'd almost like to add another SSRS server and dedicate it to the operational reports. Ideally the new SSRS server would use the same Report Server database but would only work on subscriptions for certain documents.
Has anybody else tried to solve this problem? This MS document doesn't really address subscriptions or load balancing by report: http://www.microsoft.com/technet/prodtechnol/sql/2005/pspsqlrs.mspx
I am a Crystal Reports developer and I am new to the SSIS environment. I have started to read the Professional SQL Server 2005 IS book. I am really confused by the many tasks to choose from.
I need to develop reports from a data warehouse. But first I have to send the data from the operational database (SQL Server 2000) to the warehouse (SQL Server 2005) monthly - I have a script for retrieving the data. For my package, I chose a Data Flow Task, an Execute SQL Task, and an OLE DB Destination, and it does not work.
Can anyone point me to similar packages I could look at? Thank you!!
Hello, I read several newsgroup articles about bulk deletes, and I found one way is to:
-create a temporary table with all constraints of the original table
-insert the rows to be retained into that temp table
-drop constraints on the original table
-drop the original table
-rename the temporary table
My purge is a daily job, and my question is how this works on a heavily loaded operational database. I mean thousands of records are written into my tables (the same table that I want to purge some rows from) every second. While I am doing the copy to the temp table and dropping the original table, what happens to that operational data?
I also realized another way of doing the bulk delete is using BCP:
1) BCP out rows to be deleted to an archive file
2) BCP out rows to be retained
3) Drop indexes and truncate table
4) BCP in rows to be retained
5) Create indexes
Again the same question: while I'm doing the BCP, is there any blocking of inserts into my original table? What happens to the rows that are due to be inserted in the meantime? Does BCP acquire an exclusive lock on the table which prevents any other insertion? Does anyone have experience with a BCP command querying out 2 million records, and how long it takes? I appreciate your help.
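For what it's worth, here is a minimal T-SQL sketch of the swap-based purge described above, assuming a hypothetical dbo.EventLog table with an EventId key, an EventDate column and a 30-day retention rule; all constraints, indexes and permissions would still have to be re-created on the new table, and rows inserted between the copy and the rename are exactly the gap the question is about:

-- 1) Copy the rows to keep into a new table; the read takes only shared locks,
--    so concurrent inserts into dbo.EventLog can continue.
SELECT *
INTO   dbo.EventLog_new
FROM   dbo.EventLog
WHERE  EventDate >= DATEADD(DAY, -30, GETDATE());   -- assumed retention rule

-- 2) Re-create constraints/indexes on the new table before the swap.
ALTER TABLE dbo.EventLog_new ADD CONSTRAINT PK_EventLog_new PRIMARY KEY (EventId);

-- 3) Swap the names; sp_rename needs a schema-modification lock, so
--    concurrent inserts are blocked only for this short step.
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.EventLog', 'EventLog_old';
EXEC sp_rename 'dbo.EventLog_new', 'EventLog';
COMMIT TRANSACTION;

DROP TABLE dbo.EventLog_old;   -- or BCP/archive it first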
Development in SQL Server CLR is obviously different from development in normal programs. There are many limitations in CLR development, and it takes a lot of work to reuse DLLs that already exist.
I do understand that some limitations are reasonable, BUT many more limitations should be removed.
For example, in order to use an enum that exists in an old DLL in SQL Server, we have to create a new project and write many similar classes to wrap the old enums; it's very troublesome and slow!
So please make the development model as consistent as possible.
My requirement for the parameter is a multivalue parameter with a text box. For example, when the user enters aa15 it needs to include product aa15. When the user enters aa15, aa16, zz15 then it needs to include all three products. The last case is when the user enters AA**, then I need to include all the products starting with AA. When I use the default multivalue parameter with an Analysis Services data source I get a drop-down box. I don't want that. I need a text box where the user can enter the value.
1. In SQL we have the LIKE keyword, for example: select * from product where product like "AA%". What is the equivalent MDX query to get such results?
2. How do I implement multivalue parameters without using the drop-down box?
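On the first question, one common pattern (a sketch only, with made-up cube, dimension and measure names) is to use FILTER with the VBA string functions that Analysis Services exposes, since MDX has no LIKE operator:

SELECT
  { [Measures].[Sales Amount] } ON COLUMNS,
  FILTER(
    [Product].[Product].[Product].MEMBERS,
    // "starts with AA"; use InStr(...) > 0 instead for a "contains" match
    LEFT([Product].[Product].CURRENTMEMBER.NAME, 2) = "AA"
  ) ON ROWS
FROM [Sales]

For the second question, a plain text parameter whose value is split inside the dataset or filter expression (rather than a parameter bound to a value list) is the usual way to avoid the drop-down, but the details depend on how the report query is built.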
We have a SQL2K5 SP2 (x64) instance in an active/passive cluster running an ERP application database. I have users in a remote office who are requesting the ability to run the "standard reports" from SSMS such as Disk Usage, Disk Usage by Table and so on. The user in question has an AD account in the instance with the db_datareader role.
What other database or server defined role is required to allow this user to launch and view the standard reports from SSMS?
Hi, I need to implement/set up a data warehouse/data mart in one of the departments of my company using SQL Server 2005. Does anybody know the steps I need to follow?
It would be much appreciated if anybody could share some links that will help me with the implementation/development.
I have the basic idea; however, I may face some difficulties when I start, such as: does SQL Server Reporting Services allow the end user to customize a report based on their needs? So if anyone has experience in this field, please reply.
I am working on creating a data warehouse. I have made a database which will be the data warehouse and will consist of dimension and fact tables. I know that, other than dimension and fact tables, a data warehouse should also contain metadata. Now my question is: what should the structure of that metadata be, and what information should it hold?
We are starting to design a data warehouse for my company. I have done some reading on the concepts and steps involved, but what I am seriously lacking is examples. I'd like to read through some real examples of data warehouses that worked, including the full design diagrams. Can anyone direct me to some good sites for this?
I note that Microsoft indicates that SQL Server 2005 Express Edition supports "SQL Analytical Functions" (refer: http://www.microsoft.com/sql/prodinfo/features/compare-features.mspx).
What is this referring to? I assume this is not Analysis Services, which I understand is not supported in the Express edition. I can't find any reference to "SQL Analytical Functions" in any Microsoft documentation.
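My guess (an assumption, not confirmed by that comparison page) is that the term covers the T-SQL ranking/window functions introduced in SQL Server 2005, which do work in Express, for example:

-- ROW_NUMBER / RANK / DENSE_RANK / NTILE with the OVER clause
SELECT ProductID,
       ListPrice,
       ROW_NUMBER() OVER (ORDER BY ListPrice DESC) AS PriceRank
FROM   SalesLT.Product;   -- table name is just an example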
How do you run a stored procedure on PDW via SSIS? I've tried Execute SQL Task and Execute T-SQL Task but in both cases the task will run and complete almost immediately. Task shows success, no errors, but nothing happens in PDW. PDW admin console does not even register the query. Procedures run fine manually from SQL Server Object Explorer connection.
Is it possible to write a stored procedure to automatically generate the STATISTICS for any database and then use the output to create those statistics on that database?
I ran the tuning advisor and it suggested indexes along with a lot of STATISTICS on the dev environment. This dev environment is replicated in several other environments, with the data size varying in each. I would like to know if I can create a stored procedure which generates the STATISTICS information pertaining to a specific database environment for the query in question for tuning.
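A rough sketch of the kind of script-out query that could sit inside such a stored procedure (it reads sys.stats and sys.stats_columns in the current database and emits CREATE STATISTICS statements; filtered statistics, sample sizes and other edge cases are ignored):

SELECT 'CREATE STATISTICS ' + QUOTENAME(s.name)
     + ' ON ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
     + ' (' + STUFF((SELECT ',' + QUOTENAME(c.name)
                     FROM sys.stats_columns sc
                     JOIN sys.columns c
                       ON c.object_id = sc.object_id AND c.column_id = sc.column_id
                     WHERE sc.object_id = s.object_id AND sc.stats_id = s.stats_id
                     ORDER BY sc.stats_column_id
                     FOR XML PATH('')), 1, 1, '') + ');'
FROM sys.stats  s
JOIN sys.tables t ON t.object_id = s.object_id
WHERE s.user_created = 1;   -- only statistics created explicitly (e.g. by DTA scripts)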
I have a large fact table spread across tens of partitions (approx. 1 TB each). I found that the business does not need many of the columns in the table. So, as an optimization, I decided to get rid of these unneeded columns. What is the efficient way to achieve this? Can I simply drop these columns from the table, or should I use a new table with the reduced structure?
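In case it helps, a hedged sketch with made-up table, column and index names: dropping the columns is a metadata-only change, but the space only comes back once the (partitioned) clustered index is rebuilt, which can be done one partition at a time:

ALTER TABLE dbo.FactSales DROP COLUMN LegacyCode1, LegacyCode2;

-- Reclaim the space one partition at a time to limit log growth and downtime.
ALTER INDEX CIX_FactSales ON dbo.FactSales
    REBUILD PARTITION = 1;   -- repeat (or loop) for each partition number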
I have a fact table with an ID column as the primary key, on which a clustered index is created. I also have 4 dimension FKs of data type INTEGER. And finally, I have one aggregation measure in the fact table.
Now, my question is: how can I improve the speed of querying the fact table by creating any of the below indexes?
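As a generic starting point (a sketch with assumed table and column names only), one nonclustered index per dimension foreign key, covering the measure, is common for a fact table queried by dimension; on SQL Server 2012 and later a columnstore index over the FK and measure columns usually serves aggregation queries better:

CREATE NONCLUSTERED INDEX IX_Fact_DateKey
    ON dbo.FactSales (DateKey) INCLUDE (SalesAmount);
CREATE NONCLUSTERED INDEX IX_Fact_ProductKey
    ON dbo.FactSales (ProductKey) INCLUDE (SalesAmount);
-- SQL Server 2012+ alternative:
-- CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
--     ON dbo.FactSales (DateKey, ProductKey, CustomerKey, StoreKey, SalesAmount);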
I have a table that is growing quite a lot each day. By now, I have on average 300 million records over 2.5 years. Before we received our new interface, the data we received was aggregated and thus not that big. The problem is that the table is so huge that I cannot use the Slowly Changing Dimension component. I was thinking about making a temp table where I load the incremental data before I load it into the final data mart table. Based on this temporary table I use a script to compare the temp data with the already existing data in the data mart. However, this requires a compare of every record (300 million records).
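One set-based alternative to the per-row compare (a sketch with assumed table and column names; it requires SQL Server 2008+ for MERGE) is to let the engine compare the staging table and the mart table in a single statement:

MERGE dbo.FactMeasurement AS tgt
USING staging.Measurement AS src
      ON tgt.BusinessKey = src.BusinessKey
WHEN MATCHED AND (tgt.Value <> src.Value OR tgt.Status <> src.Status)
    THEN UPDATE SET tgt.Value     = src.Value,
                    tgt.Status    = src.Status,
                    tgt.UpdatedAt = GETDATE()
WHEN NOT MATCHED BY TARGET
    THEN INSERT (BusinessKey, Value, Status, UpdatedAt)
         VALUES (src.BusinessKey, src.Value, src.Status, GETDATE());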
Question: is it feasible to use a star schema dimensional model for an OLTP system that incurs few (750 per day) Sales Order transactions?
Background: My customer wants to replace an existing OLTP system database because it runs on Oracle and their in-house expertise is in SQL Server. The original database developers that designed the Oracle DB have apparently retired. The Oracle database has been over-normalized, to say the least. The number of sales orders being entered daily is small: about 500-750 per day. These entries are done at the five clerks' convenience, from a paper form, and are very unlikely to ever be entered in quick succession. Nothing else gets regularly entered into this database except for the occasional change to a customer, but new customers are very few and far between.
I've designed a star schema for the replacement database with the Sales Order Header and Sales Order detail table combined into a single 'fact' table, and I've introduced some duplication into dimension tables (like customer) in order to eliminate some of the joins (and confusion) that were built into the original database.
I've never tried this before. Is there any reason this would not or should not work?
ETL packages are sometimes failing (Package Execution Error). Even when we execute the ETL package again from the start, we get the same error. But after restarting the SQL service on the BI server, it works fine. Is the issue on the developer code side or on the server side?
I'm having issues with bulk updates in SQL Server. I'm using SAP BODS as the ETL tool and have some 20,000 updates. The target table has approx. 0.5 million records and has a clustered index on the id column. I have selected the upsert option in BODS. The same setup is also done for Sybase IQ; IQ has a bulk update option which gives very good performance.
In IQ the same update load finishes in some 9 minutes, where SQL Server takes more than 2 hours for the same, and this doesn't seem right. When I look into it, the update is causing the whole package to go slow. Sybase generates a query that updates where the ID is present, otherwise inserts. Is there any way to make the bulk update work faster in the SQL Server environment?
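One thing worth trying (a sketch only, with placeholder table and column names): have BODS bulk-insert the 20,000 changed rows into a staging table, then apply a single set-based update/insert joined on the clustered key, instead of sending 20,000 individual upsert statements:

-- Apply all updates in one statement, joining on the clustered index key.
UPDATE t
SET    t.Col1 = s.Col1,
       t.Col2 = s.Col2
FROM   dbo.TargetTable       AS t
JOIN   dbo.TargetTable_stage AS s ON s.id = t.id;

-- Then insert the rows that did not exist yet.
INSERT INTO dbo.TargetTable (id, Col1, Col2)
SELECT s.id, s.Col1, s.Col2
FROM   dbo.TargetTable_stage AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.id = s.id);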
care session quarter: Q1-15, Q2-15, Q3-14, Q3-15, Q4-14
I am using this [care session quarter] column in the GROUP BY clause to achieve this, but with no success. If I use the date column in the SELECT clause and the GROUP BY clause then the result comes out correctly, but it groups by all dates, which is not required.
Ideally I want to show only quarter aggregates. The [Date Dimension] table has the column [care session quarter], which stores the quarter of the year for every date, i.e. I have all the columns in the [Date Dimension] table as shown below.
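For reference, a minimal sketch of the quarter-level grouping being described, assuming a hypothetical fact table dbo.FactCareSession joined to [Date Dimension] on a DateKey column; only the quarter column appears in SELECT and GROUP BY, so the result has one row per quarter rather than one per date:

SELECT   d.[care session quarter],
         SUM(f.SessionCount) AS TotalSessions      -- assumed measure
FROM     dbo.FactCareSession AS f
JOIN     dbo.[Date Dimension] AS d ON d.DateKey = f.DateKey
GROUP BY d.[care session quarter]
ORDER BY d.[care session quarter];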
I am putting together an invoice for my company. I have a text box describing each section of the invoice, followed by a table to list out the charges. I am using multiple tables based on what type of charge the client is receiving.
I would like to hide each section if there are no items purchased of that type. I can do this with the table using the expression "=CountRows() < 1", but I do not know how to refer to that table (call it Tablix1 for the sake of discussion) for the text box. I've tried using a ReportItems function as my basis, without success.
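For what it's worth, the ReportItems collection only exposes text boxes, which is probably why that approach failed. One workaround (assuming the charges for that section come from a dataset called, say, ChargesDataset) is to drive the text box's Hidden property from the same dataset row count the table uses:

=CountRows("ChargesDataset") < 1

Using the same expression for both the text box and its table keeps the whole section appearing and disappearing together.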