Greetings, I'm interested in learning how to create queries against a cube's metadata. Specifically, I'm interested in writing a query, using VB, to return a collection of bottom-level members relative to a given member name.
Using the MDX Sample application I can execute the query:
select
Descendants([Customer].[All Customer].[Canada],,leaves) on columns
from Sales
to get the members; however, the query appears to bring back cell data as well, so it is probably an expensive approach.
It would be great to understand how to implement a metadata query without having to resort to recursion.
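For reference, on SSAS 2008 and later the schema rowsets are exposed as DMVs that accept a SQL-like syntax (run against the Analysis Services instance, not the relational engine); on earlier versions the same rowsets are reachable from VB through OLE DB/ADOMD Discover calls. A minimal sketch that lists the members of one level without touching cell data; the level number 4 is hypothetical, and restricting to descendants of [Canada] specifically needs the TREE_OP restriction, which is only available through the Discover/ADOMD APIs, not the DMV WHERE clause:

SELECT [MEMBER_UNIQUE_NAME], [MEMBER_CAPTION]
FROM $SYSTEM.MDSCHEMA_MEMBERS
WHERE [CUBE_NAME] = 'Sales'
  AND [HIERARCHY_UNIQUE_NAME] = '[Customer]'
  AND [LEVEL_NUMBER] = 4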
I have created and deployed my first report. It renders fine for me and the other database admin. When others attempt to view it, they get the error:
Query execution failed for data set 'periods'. (rsErrorExecutingCommand) For more information about this error navigate to the report server on the local server machine, or enable remote errors.
Initially, we created a local group on the machine that hosts both the database and the web server and added the individuals to that group. Then, within SRS Report Manager, we added that group to the Browser role of the report. The error message was slightly different, in that it couldn't even open the data source.
We then added an individual to the database as db_datareader, and got the above message. The report apparently starts to render, and when it encounters the first query (dataset "periods", which populates a drop-down list for a parameter), it chokes. BTW, the periods dataset executes a stored procedure, dbo.Period_List, that has no parameters and returns a list of reporting periods.
I could not figure out how to "enable remote errors" or find an error log on the server. The log files in C:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\LogFiles did not appear to record any errors.
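For the "enable remote errors" part, a sketch of the catalog-table approach documented for SSRS 2005 (the supported alternative is setting the EnableRemoteErrors system property through the SetSystemProperties SOAP API, e.g. via an rs.exe script); restart the Report Server service afterwards:

USE ReportServer;  -- default catalog database name; yours may differ
UPDATE dbo.ConfigurationInfo
SET Value = 'True'
WHERE Name = 'EnableRemoteErrors';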
I am working on creating a data warehouse. I have made a database which will be the data warehouse and will consist of dimension and fact tables. I know that besides dimension and fact tables, a data warehouse should also contain metadata. My question is: what should the structure of the metadata be, and what information should it contain?
Hi friends, I am trying to write a SQL statement to get information about cube metadata. When we create a table in SQL Server, we can trace its information through system tables by querying sysobjects, syscolumns, etc. Likewise, I have created a cube using MS Analysis Services and I can see metadata (like creation date, processed on, etc.) on the 'Metadata' tab of the cube. But I want to access its contents by writing a SQL statement, and I could not locate any system-level table where this information gets stored. Could anyone help me write this system-table-level query for accessing the metadata info of a cube?
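If the server is SSAS 2008 or later, the cube-level details shown on the Metadata tab (creation date, last processed, etc.) come from the MDSCHEMA_CUBES schema rowset, which can be queried DMV-style against the Analysis Services instance; there is no table in the relational master database that holds this. A sketch:

SELECT [CATALOG_NAME], [CUBE_NAME], [CREATED_ON], [LAST_SCHEMA_UPDATE], [LAST_DATA_UPDATE]
FROM $SYSTEM.MDSCHEMA_CUBES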
I'm curious whether there is a way to gather any metadata about a query that has just run, or about a dynamic query passed in as a parameter, such as being able to "dynamically" return a list of columns in the query, along the lines of:
SELECT * FROM [myTable];
SELECT @@COLUMNS -- expecting this to return a recordset of columns for the previously run query
I'm not talking about querying metadata about the tables themselves; I'm really looking for what "metadata" might be available about the query itself.
This would be useful for dynamically generating some code for ad-hoc queries with a stored procedure call, or for simply returning a list of column names without the data.
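Nothing like @@COLUMNS exists, but for reference, SQL Server 2012 and later ship sp_describe_first_result_set and sys.dm_exec_describe_first_result_set, which return exactly this kind of column metadata for an arbitrary batch without materializing the rows. A sketch, reusing [myTable] from above:

SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(N'SELECT * FROM [myTable]', NULL, 0);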
I am looking for anyone who can help me find an existing tool that will allow my programming team to build reports and data extracts simply and easily from a form of query builder. Unfortunately, my database is structured as a metadata store using name/value pairs. The actual structure is something like: DateTime; Customer (integer); fieldname (string); fieldvalue (string).
In addition, I have another table that is structured similarly to store numeric data: DateTime; Customer (integer); fieldname (string); fieldvalue (integer).
So, for the query tool to even start, it needs to scan both tables for distinct values in the fieldname column for a specific date/time range for a specific customer. Once that is done, it can display the field names and begin to search for the data I am looking for.
In addition, if I want the names of all people in the system who have bought a pie from my customer 1, the query tool will have to search for the fieldvalue "pie", return the key cells, and then search for the fieldname "Name" where the keys match (that, or do it in a compound query).
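To make the compound-query variant concrete, here is a sketch of the "pie" lookup as a self-join, assuming the string table is named StringData and that the rows of one logical record share the same DateTime and Customer values (both the names and the grouping rule are assumptions; the post doesn't say how rows are grouped):

SELECT n.fieldvalue AS buyer_name
FROM StringData AS p
JOIN StringData AS n
  ON  n.Customer   = p.Customer
  AND n.[DateTime] = p.[DateTime]  -- assumed record-grouping key
WHERE p.Customer   = 1
  AND p.fieldname  = 'Product'     -- hypothetical field name
  AND p.fieldvalue = 'pie'
  AND n.fieldname  = 'Name';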
Is there any tool anywhere out there that might have such capabilities?
I have multiple reports, in multiple folders, with fields that are set to navigate to the same report. I'd like to store a single copy of the report being navigated to in one hidden folder.
Example: Report "Inventory Value" in folder Accounting has the Item Number field set to navigate to the report "Item Details" stored in folder Linked Reports.
Paths:
//ReportServer/Accounting/Inventory_Value
//ReportServer/Linked_Reports/Item_Details
I created a slowly changing dimension object and used an OLE DB Source object with a SQL Command to feed it. After all the SCD objects are created, I get a warning about a truncation which might happen.
I adjust the source query with the right converts, so the query uses data types and lengths which match the target table. I update the OLE DB Source with the new query. Column names are unchanged.
Now how do I get this data flow to reflect the new column lengths? Do I have to throw away all the objects and recreate the SCD? The refresh button in the SCD object doesn't do it. This is also a problem when adding columns to your dimension table. Because I modify the objects the SCD wizard generates, it's VERY tedious to always have to throw it all away and start over again.
I have a data flow task with a single source and a single destination. The source table comes from a variable expression, and the destination table is likewise set from a variable expression. I run this under 3 scenarios, each with a different source and destination table. The tables differ in name but are close in structure, with the exception of one column. The metadata for the source flow path seems to be "sticky", in that it does not adjust the source table structure in the flow to account for the differing column. I'm not sure how to fix this; any ideas? I've modified several properties on the task and the data flow, but nothing seems to make this adjustment at run time.
We are currently using merge replication with a push subscriber to replicate the database from DB1 to DB2 every hour. The replication process succeeded for the first 20 hours, but after that it cannot complete, failing with the following error codes:
Action Code   Last Action Msg
4             The process could not query row metadata at the 'Subscriber'.
363           The process could not deliver insert(s) at the 'Subscriber'.
SSIS seems to set the metadata type automatically, and for "typed" sources like database and XML connections it takes whatever the source column datatype is. If you use a cast or convert in your source SQL query, it will not change the datatype of the metadata. This becomes an issue when doing things like merge joins on data from different sources where the join columns are different types (e.g. a ZipCode is a varchar in one system and an int in another). I've been working around the issue by editing the XML code and changing the datatype there. Is there any way to do this through the GUI?
I am tasked with truncating and reloading tables from one server to another. Company policy prevents cross-server queries but allows SSIS packages with cross-server connections. I am doing this for about 25 tables. I have the table names in a single table and have created a Foreach Loop (FEL) to execute tasks against each table one by one. It works fine to truncate all the tables. I run into issues, though, with the Data Flow Task. I'm able to tell it which server and table to dynamically connect from and to, but it doesn't know how to map the metadata. They're the exact same columns and field names in both source and destination.
I have a series of drill-through reports from a parent report. In BIDS I get a blue arrow that allows me to get back to the parent report from the drill-through report. I do not see this feature on the RS web interface. How are users expected to navigate back to the parent report?
I have a SSIS package with a Data Flow task. This task transfers the data from SQL Server 2000 to a table in SQL Server 2005.
I deployed and tested this package on the Test Server. Then put this package in a job and executed it - Works fine.
On the production server, if I execute the package through DTEXECUI, it works fine. But when I try executing it through a job, the job fails and gives me the following error:
Description: The external metadata column collection is out of synchronization with the data source columns. The "external metadata column "T_FieldName" (82)" needs to be removed from the external metadata column collection....
What I don't understand is why no errors are displayed when I execute the package through DTEXECUI.
Get a FILESTREAM download link with read-only access and no folder navigation
I need a link with the path to the FILESTREAM blob; that path could be used to download a document using any Windows app (Windows Explorer, etc.). The requirement is that the path must not allow the customer to navigate the FILESTREAM share folders or see other files; the customer can only read the file at that path.
Checking:
[file_stream].GetFileNamespacePath(2)
allows you to navigate folders.
NON_TRANSACTED_ACCESS = READ_ONLY resolves the requirement to disable saving into the FileTable, but it still allows navigating and seeing other files.
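For reference, a sketch of building the UNC path for a single document, assuming a FileTable named dbo.Documents (hypothetical, as is the file name); note that it is the share-level and NTFS permissions, not the query, that ultimately decide whether the customer can browse sibling files:

SELECT FileTableRootPath() + [file_stream].GetFileNamespacePath() AS item_unc_path
FROM dbo.Documents
WHERE [name] = N'contract.pdf';  -- hypothetical document name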
I have a scenario where I need to create SQL Server tables dynamically.
I have multiple XML data files in a particular location and want to load the XML data into SQL Server tables, but the metadata of each XML data file is not the same.
Hence the approach is:
1. Pick the first file from that location.
2. Create a table according to that XML data file's metadata (a T-SQL sketch of this step follows the list).
3. Load the data into the newly created table.
4. Pick up the next XML data file.
5. Loop until no XML data files remain in that location.
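A rough T-SQL sketch of step 2, under heavy assumptions: the file is small enough to load whole, each file looks like <root><row><Col1>...</Col1>...</row>...</root>, and every column can start life as nvarchar(255). The file path, table name, and XML shape are all hypothetical:

DECLARE @xml xml, @cols nvarchar(max), @sql nvarchar(max);

-- Step 1: load one file wholesale.
SELECT @xml = CAST(b.BulkColumn AS xml)
FROM OPENROWSET(BULK N'C:\feeds\file1.xml', SINGLE_BLOB) AS b;

-- Derive a column list from the element names of the first row.
SELECT @cols = STUFF((
    SELECT ', ' + QUOTENAME(c.value('local-name(.)', 'nvarchar(128)')) + ' nvarchar(255)'
    FROM @xml.nodes('/*/row[1]/*') AS t(c)
    FOR XML PATH('')), 1, 2, '');

-- Step 2: create the table; step 3 can then shred @xml.nodes('/*/row') into it.
SET @sql = N'CREATE TABLE dbo.Staging_File1 (' + @cols + N');';
EXEC sys.sp_executesql @sql;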
After staging, the staging_temp data gets inserted into the main table. My problem is handling a file where the number of columns is greater than in the actual table.
If you look at the sample rows, there are 4 columns separated by "¯", but I have only 3 columns in my main table. So how can I get only the first 3 columns from the staging_temp table?
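A sketch of taking just the first three fields, assuming staging_temp holds each record in a single wide column (called line here, a hypothetical name) and that every row contains at least three "¯" delimiters:

SELECT
    LEFT(line, p1 - 1)                   AS col1,
    SUBSTRING(line, p1 + 1, p2 - p1 - 1) AS col2,
    SUBSTRING(line, p2 + 1, p3 - p2 - 1) AS col3
FROM staging_temp
CROSS APPLY (SELECT CHARINDEX('¯', line) AS p1) AS a
CROSS APPLY (SELECT CHARINDEX('¯', line, p1 + 1) AS p2) AS b
CROSS APPLY (SELECT CHARINDEX('¯', line, p2 + 1) AS p3) AS c;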
I am stuck trying to transpose source data from a system, via a metadata look-up table, into a destination table. I need a method to transpose/pivot the source data into columns (of various data types). The data type for each column is listed in a metadata table.
Source Data Table:
Table Name: Source
SrcID  AGE  City    Date
01     32   London  01-01-2013
02     35   Lagos   02-01-2013
03     36   NY      03-01-2013
Metadata Table:
Table Name:Metadata
MetaID  Column_Name  Column_type
11      AGE          col_integer
22      City         col_character
33      Date         col_date
Destination table:
The source data is to be loaded into the destination table, e.g. along the lines sketched below:
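Assuming one destination row per source cell, with each value landing in the column named by Column_type (dbo.Destination(SrcID, MetaID, col_integer, col_character, col_date) is an assumed layout), here is a sketch of the transpose. TRY_CONVERT needs SQL Server 2012+, and style 105 matches the dd-mm-yyyy dates above:

INSERT INTO dbo.Destination (SrcID, MetaID, col_integer, col_character, col_date)
SELECT s.SrcID,
       m.MetaID,
       CASE WHEN m.Column_type = 'col_integer'   THEN TRY_CONVERT(int,  u.val)      END,
       CASE WHEN m.Column_type = 'col_character' THEN u.val                         END,
       CASE WHEN m.Column_type = 'col_date'      THEN TRY_CONVERT(date, u.val, 105) END
FROM Source AS s
CROSS APPLY (VALUES
    ('AGE',  CONVERT(varchar(30), s.AGE)),
    ('City', CONVERT(varchar(30), s.City)),
    ('Date', CONVERT(varchar(30), s.[Date], 105))
) AS u(col, val)
JOIN Metadata AS m
  ON m.Column_Name = u.col;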
I am stuck on this and it's happening no matter what I do. I have a huge project that holds tons of reports; most are sub-reports.
I have 5 sections; I will just use one as an example:
communications (totals)
jump to report - summary data
jump to report - detail data
When I get to the detail data, I always get the error:
The path of the item '(null)' is not valid. The full path must be less than 260 characters long; other restrictions apply. If the report server is in native mode, the path must start with slash. (rsInvalidItemPath)
I have tried all sorts of things and no matter what I do I can't get it to work.
This is a web application using the ReportViewer for the web, SQL Server 2005 SP2
I have searched high and low for an answer to this and nothing works, not even using GET instead of POST.
I recently added a column with a getdate() default to an existing table. When querying from that server, everything works fine. When a query is run from a remote server, I get an SQLOLEDB error message saying 'inconsistent metadata'. I've tried dropping the remote server and reconnecting, but that didn't resolve the problem. Can anyone tell me how to resolve this error? I believe the error number is 7353.
Hi,
Is it possible to get metadata (i.e. descriptions of tables etc.) in SQL Server? In Oracle you can retrieve this information from views like all_objects, user_tables, user_views etc. For example, this query selects the owner of the table 'ret_ods_test' (in Oracle!):
select owner
from all_objects
where object_name = 'ret_ods_test'
What's the equivalent in SQL Server?
Thanks a lot.
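The rough equivalent is the catalog views (sys.objects and sys.schemas, or the portable INFORMATION_SCHEMA views); on SQL Server 2000 the old sysobjects/sysusers tables play the same role. A sketch mirroring the Oracle query, treating the schema as the "owner":

SELECT s.name AS owner_schema,
       o.name AS object_name,
       o.type_desc
FROM sys.objects AS o
JOIN sys.schemas AS s
  ON s.schema_id = o.schema_id
WHERE o.name = 'ret_ods_test';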
Is there a way to find out which user-defined procs, child packages, etc. are being called in SSIS packages, using some metadata? The idea is to have a document which lists the packages called, and what sprocs and child packages are executed by those packages.
I have checked the SSIS metadata whitepaper but that is too generic.
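One low-tech possibility, assuming the packages are stored in msdb on SSIS 2005: a package is just XML, so you can pull each one out of msdb.dbo.sysdtspackages90 and scan the XML for Execute Package tasks or stored procedure names. A sketch (manual inspection, not a polished lineage solution):

SELECT p.[name],
       CAST(CAST(p.packagedata AS varbinary(max)) AS xml) AS package_xml
FROM msdb.dbo.sysdtspackages90 AS p;
-- Inspect package_xml for 'ExecutePackageTask' or the names of your sprocs.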
[OLE DB Source [1]] Warning: The external metadata column collection is out of synchronization with the data source columns. The column "objectName1" needs to be updated in the external metadata column collection.
A corollary question: what does right-clicking a package in Solution Explorer and clicking "Reload with Upgrade" do?
- We are using SSIS packages for various kinds of data load from Excel sources.
- If there is any change in the data type or format of the Excel file, the package cries about a metadata mismatch.
- At design time, if you accept the metadata changes, everything works fine.
But in our case we have deployed the packages to the production server, and now the Excel file format/data has changed. The packages expect different metadata, so they are not working at all.
Do you have any suggestions for the above problem? Thanks, Vijay.
I am using the IColumnsRowset interface to get some metadata about the columns in a rowset, but I never get the unique and primary-key columns in the columns rowset; they always return NULL. When using SqlCeDataAdapter I simply add the AddWithKey option to the adapter to determine this, but I don't know how to do it using the OLE DB interfaces. I have tried to set the DBCOLUMN_KEYCOLUMN flag to true on CCommand<CDynamicAccessor, CRowset>, but it seems to reject it, generating an unknown error; the error object says almost nothing except the text 'errors occurred[,,,,,]'.
Can someone tell me how I can retrieve the columns rowset with the unique and primary-key sections filled?
Hello, I would like to know if it's possible to automatically generate a Word or Excel document containing all the metadata definitions, for example the source column names, their data types, and the destination columns with their data types, so that it would be easy to create a data dictionary.
For quite some time now, one particular behaviour of SSIS has been really frustrating me, and I would like to know if I'm the only one experiencing this problem or if other people have it too. The issue I'm talking about is SSIS's dependency on what is written in the XML files describing the flows, particularly the data types of columns. Let me explain: imagine you are developing a flow containing several numeric(18,0) columns, and during the flow you have to perform a lookup on an integer field. Of course this operation is not allowed, as a numeric is not mappable to an integer. (This is, in my opinion, nonsense, as an implicit conversion should be possible.)

As a result of this behaviour, I decide to change the data type (numeric) in my source query to an integer and use it in the lookup, which of course succeeds. But now I have a second problem: each lookup in my flow has an error-handling branch which I join back using a Union transform, and there we have the second irritation: the Union transform doesn't replicate the data type changes that occurred upstream in the flow. Worse: it has no interface to let you modify the data types, like the advanced editor of some transforms or data sources. (I've just lost a complete data flow while trying to modify it manually in the XML file directly :-( For those who are considering modifying the XML directly, don't!! You are asking for trouble and a lot of frustration when you switch back to the designer to see the effects.)

My question is now: am I misusing SSIS? Is there an option somewhere to activate in order to get this behaviour fixed? Has anyone else experienced this problem? How are you solving it? Are there any plans in the future to lose this dependency on the data types, or at least add some implicit conversions?
Thanks in advance for your replies, suggestions, questions and other thoughts about this subject :-)
I have a question regarding my metadata information. I finally set up my fixed-width file, which took some time. Is there a way I can back up my metadata so I won't have to recreate these settings again? I'm thinking the format of the file is stored in the metadata, so if a user runs the SSIS package from the Business Intelligence Studio they won't reset all of my columns. Is there a file I can back up or restore if this should happen?