Getting Destination Columns

Feb 22, 2007

How do I get all the destination columns of a table in a SQL Server destination component?

View 1 Replies



Destination Columns Question

Jul 23, 2005

When I create a DTS package to export a text file from a table, if I click on the Define Columns button and then Populate from Source, then execute, it changes one of the types to not quotable and the size to 19, which in turn changes the file width. The field it is changing has a numeric data type and the remainder of the fields are text (all of varying sizes). Is it doing this because the field is numeric, or is there some other explanation? And is there any way to stop it from happening?

View 1 Replies View Related

Copy Columns From Source To Destination

Apr 10, 2007

Hi all,

I want to copy 2 columns from one database to another database.
I managed to do this using an OLE DB source and an OLE DB destination.

Now I want to merge the 2 columns into 1:
Source database: column A: first name, column B: last name.
Destination database: column 1: first and last name of the customer.
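
One straightforward way (a sketch; the table and column names here are assumed) is to do the concatenation in the OLE DB source query, so only the merged column flows to the destination. A Derived Column transformation with the expression FirstName + " " + LastName would work equally well.

-- Merge the two name columns in the source query:
SELECT FirstName + ' ' + LastName AS FullName
FROM dbo.Customer;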

Thanks for your help!

Kind regards,

Marcel Hijnen
eXDe Solutions B.V.

View 4 Replies View Related

How To Map New Columns To An Oledb Destination Using Script?

Jun 29, 2006

Overview of my scenario: I have a SQL table with 10 columns. I read data off a CSV flat file source and insert/update it into the SQL table. No problems there.

Now, after the 10th column, the columns vary from site to site. That is, for any customer the first 10 columns are the same, but the following 'n' columns can be different.

I'm already able to detect which columns are the 'custom' columns in the source db and can programmatically add/modify the columns in the destination SQL tables when SSIS runs.

My problem now is, I have the flat file source with 'fixed' and 'custom' columns together, and my SQL table is ready to receive them, but I don't have the proper column mappings in my SSIS package. I only have mappings for the 'fixed' columns.

How can I programmatically add these mappings to my data source and subsequently my OLE DB destination?

What strategy should I use? Open to anyone's suggestions or ideas.

Thanks.

View 4 Replies View Related

Ole DB Destination (available Input Columns Blank)

Oct 17, 2007

Can anyone please help

I am trying to get a table that exists in production created in development and populated with the production data. When I create a package and set up the data flow, I do not see any columns under Available Input Columns on the Mappings tab. The source has been defined as the production table.

Thanks a bunch

View 3 Replies View Related

Mapping Source And Destination Columns

Feb 21, 2007

Can somebody show an example of how to map source and destination columns when uploading a file to SQL Server?

Also, please show the mapping for when I want to map the source to different destination columns.

View 1 Replies View Related

Maximum Number Of Destination Columns In OLE DB Command??

Apr 29, 2008

I'm using 2 OLE DB Commands: one to perform an insert, the other an update. I have found that on the Column Mappings tab I only have 10 parameters available to map to. The issue is I need 40 parameters. Am I doing something wrong? Is there a setting I am missing? Is there another way to do this? Am I out of luck?
Here is the SQL query I am using, so you can see that I have the correct number of parameters listed:

Insert into Reqs_Data (G_COLLECTDATE, PROGRAM, PRODUCT_ID, TOTAL, ADDED, CHANGED, DELETED, NOT_ALLOCATED, NEWREQS, MODIFIED, LEGACY, UNCATEGORIZED, NOT_TRACED_DOWN, NOT_TRACED_UP, PASSED, PARTIALLY_PASSED, WAIVED, FAILED, NOT_VERIFIED, PLANNED_ADDED, RQTS_TOTAL_TBD_COUNT, RQTS_TOTAL_MODS_IN_MONTH, BUILD_ID, RTM_PRODUCT_ID, TRACED_UP, TRACED_DOWN, ALLOCATED, LASTMONTHTOTAL, CALC_ADD, CALC_DELETE, IS_TRACED_DOWN, IS_TRACED_UP, LEVEL3, LEVEL2, LEVEL1, LEVEL0, IPT1, IPT2, IPT3, IPT4 ) Values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)


Update Reqs_Data SET TOTAL = ?, ADDED = ?, CHANGED = ?, DELETED = ?, NOT_ALLOCATED = ?, NEWREQS = ?, MODIFIED = ?, LEGACY = ?, UNCATEGORIZED = ?, NOT_TRACED_DOWN = ?, NOT_TRACED_UP = ?, PASSED = ?, PARTIALLY_PASSED = ?, WAIVED = ?, FAILED = ?, NOT_VERIFIED = ?, PLANNED_ADDED = ?, RQTS_TOTAL_TBD_COUNT = ?, RQTS_TOTAL_MODS_IN_MONTH = ?, BUILD_ID = ?, RTM_PRODUCT_ID = ?, TRACED_UP = ?, TRACED_DOWN = ?, ALLOCATED = ?, LASTMONTHTOTAL = ?, CALC_ADD = ?, CALC_DELETE = ?, IS_TRACED_DOWN = ?, IS_TRACED_UP = ?, LEVEL3 = ?, LEVEL2 = ?, LEVEL1 = ?, LEVEL0 = ?, IPT1 = ?, IPT2 = ?, IPT3 = ?, IPT4 = ?
WHERE G_Collectdate = ? AND Program = ? AND PRODUCT_ID = ?

View 5 Replies View Related

How To Map Columns Between Flat File And Oledb Destination

Mar 11, 2008

I am trying to load almost 15 CSV files into my OLE DB destination. Can I use a Foreach Loop container to map the source columns dynamically to the destination table during the data flow task?


FLAT FILE SOURCE ------------------------------------> OLEDB DESTINATION

1. Product.csv -> Product table (different mappings between columns)
2. Depot.csv
...

The number of columns also differs between the CSV files.

P.S.: Please don't ask me to build an individual data flow task for each CSV file.

View 4 Replies View Related

Dynamic Columns For Flat File Destination?

Jul 10, 2006

I have a database app, and we're implementing various data export features using SSIS.

Basically, it's a couple of straight extracts of various recordsets/views, etc. to CSV (flat files) from our SQL Server 2005 database, so I'm creating an SSIS package for each of these datasets.

So far, so good, but my problem comes here: My requirements call for users to select from a list of available columns the fields that they want to include in their exported file. Then, the package should run, but only output the columns specified by the user.

Does anyone have any idea as to the best way to accomplish this? To recap, at design time, I know which columns the users will have to choose from, but at run time, they will specify the columns to export to the flat file.
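
One possible approach, sketched under assumptions rather than tested: store each user's chosen columns in a table, build the SELECT list dynamically, and hand the finished statement to the package (for instance through a variable used as the source query). The UserColumnChoice table and ExportView below are hypothetical. Note this only varies the source side; a Flat File destination's column metadata is still fixed at design time, so the file layout has to tolerate a changing column set.

DECLARE @UserId int = 42;   -- hypothetical caller
DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- Assemble a comma-separated, quoted column list from the user's picks.
SELECT @cols = STUFF((
        SELECT ',' + QUOTENAME(ColumnName)
        FROM dbo.UserColumnChoice
        WHERE UserId = @UserId
        FOR XML PATH('')), 1, 1, '');

SET @sql = N'SELECT ' + @cols + N' FROM dbo.ExportView;';
EXEC sp_executesql @sql;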

Any help or guidance here is greatly appreciated

View 7 Replies View Related

How Do I Order Columns In My Custom Destination Component?

Nov 17, 2006

Here is the situation. I have created a package that takes 50 columns from a comma-delimited flat file. I then validate and clean the data. Next I add two columns that were not in the original source file. These two columns need to be in the 5th and 9th column positions when the file is re-written to a text file. How do I get those two columns to write out in the desired order? Any ideas?



K. Lyles

View 1 Replies View Related

Not Able To Map View Columns To A Flat File Destination

Oct 16, 2007

Hi,

I have a dataflow task with two components: OLE DB source --> Flat File destination

In the OLE DB source, I am accessing a view, not a table.

However, when I attempt to map the view's input columns to the flat file destination columns, the screen is completely blank. There are none of the usual drop-downs that let you select the column name, nor can I drag and drop an input column from the upper pane to the lower pane.

Is there a trick to doing this?

Thanks

View 6 Replies View Related

Complication - When Mapping Columns To OLEDB Destination In C#

Aug 28, 2007



Hi ,

I have a package consisting of the following data flow, in pipeline order (the Conditional Split feeds two parallel branches):

1) OLE DB Source (which I set the SQL command for in code)

2) Rowcount Transform (counts the source records)

3) Derived Column (adds an AuditJobID and ExecutionStartDate)

4) Conditional Split (splits new and updated records - my implementation of SCD)

5) Rowcount Transforms (one counts the new records, the other counts the updated records)

6) OLEDB Destination (for new records) and OLEDB Update Command (for updates)

I am trying to map the columns in the OLEDB Destination from C#, with the following:

IDTSInput90 input = oledbDestination.InputCollection[0];

IDTSVirtualInput90 vInput = input.GetVirtualInput();

As soon as I call input.GetVirtualInput(), I get a COM exception. It seems I am missing a VirtualInputColumnCollection on the component, but I can't figure out why.

When I drop all the other components and keep only the OLE DB Source and OLEDB Destination with a flow between them, the call to input.GetVirtualInput() does not fail with a COM exception and I can map normally.

I really need some guidance on the above.

Regards
Cedric

View 4 Replies View Related

Some Columns Not Populated In Data Flow Destination

Mar 13, 2007

I am populating a table using a SQL command. Very simple.

SELECT RISKID
      ,RISKIDREN
      ,RISKIDEND
      ,PREMIUMSUBTOTAL
      ,PREMIUMTOTAL
      ,SURCHARGE1
      ,SURCHARGE2
      ,SURCHARGE3
      ,SURCHARGE4
      ,SURCHARGE5
      ,COMMPREMIUM1
  FROM PREMIUM

However, the first three columns are not being populated in the destination table. The other columns come over fine.

The SQL statement returns data as expected when run against the source database.

I deleted the source and destination and recreated the flow to prevent metadata mapping issues. In the source editor preview I see all of the columns and data. In the destination editor preview, the first three columns of data are null.

It appears that the columns are not mapping properly even though they are in the source and destination of the mapping editor.

I have made sure that the destination mapping contains all the columns in the UI.

The source and destination have the columns represented in the advanced editor metadata. I also checked the XML to verify that the columns are in the destination.

There is a Row Count transform between the source and destination, which should have no effect.

This is part of a larger DW load where I have 10 other tables populated within the data flow. I also do not get any validation or error messages, so I have eliminated truncation errors and the like.

I am really puzzled. Has anyone run across anything like this?

 

View 5 Replies View Related

Order Of Columns In Flat File Destination

Oct 23, 2007

I need to create a number of flat files, all with the same layout and sourced from the same table, but with different criteria.

The first set of three flat files is created by a simple Conditional Split transformation: if the Source Table row number is > 40,000, route to file 3; if the row number is > 20,000, route to file 2; otherwise route to file 1. This gives me 20,000 rows in files 1 and 2 and the remainder in file 3.

I also want to create a fourth flat file by joining the Source Table with a sample table and selecting only those rows where the Customer numbers match. I'm currently doing this in two stages: An Execute SQL Task performs the join and inserts the selected rows into a Destination table (identical layout to source table), and then a simple data flow moves the rows from the Destination table into the fourth flat file.

My problem is that the order of the columns in the first three flat files is different from the fourth file. I've tried creating the fourth flat file with a single data flow using a Merge Join transformation which didn't work because the tables aren't sorted in the correct sequence, and I couldn't get an OLE DB Command transformation to work either.

I'm not sure why the column order of the 4th file should be different seeing as how its contents are sourced from the same Source table, but is there a cunning way of setting this up so that the columns end up in the same order?

View 8 Replies View Related

Formatting Columns For A Flat File Destination Control

Mar 28, 2007

So I've created a flat file destination control and have mapped the columns. At what point can you control which column shows up first?

View 4 Replies View Related

Flat File To SQL Destination - Unused Columns Warning

Jan 15, 2007

I have a flat file data source and a SQL Server destination data flow. Only a subset of columns from the source are mapped to the destination. During execution SSIS returns DTS pipeline warnings for every unmapped source column. Is some kind of transformation the only way to get rid of these warnings?

Also this data flow subsequently returns an error: [SQL Server Destination [1293]] Error: An OLE DB error has occurred. Error code: 0x80040E14. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Could not bulk load because SSIS file mapping object 'GlobalDTSQLIMPORT ' could not be opened. Operating system error code 2(The system cannot find the file specified.). Make sure you are accessing a local server via Windows security."

I'm researching this error, but if anyone is familiar with it your advice would be appreciated. Thanks.

View 10 Replies View Related

DB Design :: Copy Specific Columns For Source To Destination

Nov 4, 2015

I am trying to import data into SQL Server 2008 using Management Studio. The source data is an Access database. I am trying to do this with queries, as the tables do not match, but I need to copy specific columns from the source to the destination. I'm after a brief example that selects a column from the source table and just enters a dummy value for the other columns (the other column of data in the destination table does not exist in the source table).

For the example:

Source Access database (T2dbase, Employee table), just two columns and no primary key:

First Name | Description
Tom        | manager

Destination SQL Server database (T1dbase, Employee table), note 4 columns:
Primary key NameID, FirstName nvarchar(20), LastName nvarchar(20), Description nvarchar(20)
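
A minimal sketch of the INSERT being asked about, assuming the Access database is reachable from SQL Server (for example through a linked server named T2DBASE) and that NameID is an IDENTITY column, so it is omitted from the column list:

INSERT INTO T1dbase.dbo.Employee (FirstName, LastName, Description)
SELECT [First Name],
       'n/a',             -- dummy value: LastName does not exist in the source
       Description
FROM   T2DBASE...Employee;   -- linked-server path to the Access table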

View 5 Replies View Related

Transactional Replication With Additional Columns In Destination Table

Jul 15, 2007

In transactional replication, where the Publisher, Distributor and Subscriber are on the same instance, can I have additional columns in the destination table and set "Action if name is in use" for the related article to "Keep existing object unchanged"?

I do this, but then not even the snapshot is transferred to the destination table!

View 5 Replies View Related

SSIS Script Source Destination Insert Null Columns

Dec 1, 2006

Hi, I have an SSIS package that retrieves table data from a database using an OLE DB source component.

This then passes the rows of data to a script destination component.

Within this script my code takes the row data and inserts it into the database (amongst other things).

However, I have noticed that if any of the Row columns contain NULLs, the component falls over with errors like the following when run:

[Back Up Prize Banks [2875]] Error: System.Data.SqlClient.SqlException: Parameterized Query '(@PASId int,@ScriptName nvarchar(17),@LastModified datetime,@Pri' expects parameter @RegenerateXML, which was not supplied. at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.HandleUserException(Exception e) at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.ProcessInput(Int32 inputID, PipelineBuffer buffer) at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostProcessInput(IDTSManagedComponentWrapper90 wrapper, Int32 inputID, IDTSBuffer90 pDTSBuffer, IntPtr bufferWirePacket)

I have solved the problem by checking the column contents before inserting it as follows:

Before:
Command.Parameters.AddWithValue("@RegenerateXML", Row.RegenerateXML)

After:
If (Row.RegenerateXML_IsNull()) Then
    Command.Parameters.AddWithValue("@RegenerateXML", System.DBNull.Value)
Else
    Command.Parameters.AddWithValue("@RegenerateXML", Row.RegenerateXML)
End If

This works, but it turns 1 line into 5 lines. I have 20 parameters, which take up 20 lines of code that will now become 20 x 5 = 100 lines of code.

My question is :

Is there a better way to write this code without having to take up so many lines?
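
One option, sketched rather than tested against your package: wrap the check in a small helper so each parameter is back down to a single line. This assumes that, as with @RegenerateXML above, reading a NULL column yields Nothing rather than throwing; a column whose accessor throws on NULL would still need the explicit If.

Private Sub AddParam(ByVal cmd As SqlCommand, ByVal name As String, _
                     ByVal value As Object, ByVal isNull As Boolean)
    ' Send DBNull rather than Nothing so SqlCommand actually supplies the parameter.
    If isNull Then
        cmd.Parameters.AddWithValue(name, System.DBNull.Value)
    Else
        cmd.Parameters.AddWithValue(name, value)
    End If
End Sub

' Each parameter is one line again:
AddParam(Command, "@RegenerateXML", Row.RegenerateXML, Row.RegenerateXML_IsNull())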

Thanks

Matt.





View 2 Replies View Related

SSIS Programming - How To Map Flat File Source Columns To SQL Server Destination?

Dec 7, 2007

Hi,

I am new to SSIS programming and am trying to export data from a flat file source to a SQL Server destination table dynamically. I need to get the table schema info (column length, data type, etc.) from the SQL Server table and then map the source columns from the flat file to the destination table columns.

I am referring to one of the programming samples from Microsoft and another excellent article by Moim Hossain. Can someone help me understand how to map the source columns to the destination table columns depending on the table schema? Please help.


Thanks

View 5 Replies View Related

Integration Services :: Flat File Destination Columns Are Too Wide (too Many Trailing Spaces)

Nov 6, 2015

I'm loading data from a SQL Server table into a flat file. The flat file connection manager has the following settings:

GENERAL:

Format:Delimited
Text Qualifier:"
Header row delimiter: {CR}{LF}
Header rows to skip : 0
Columns:
Row Delimiter: {CR}{LF}
Column delimiter: comma(,)

View 4 Replies View Related

Integration Services :: SSIS 2008 R2 - Add Columns To Existing Mapping With Destination DB Table

Sep 8, 2015

The only way I know of to add a new column to an existing mapping is to open the Advanced Editor and refresh. This, however, keeps only the default mapping (where the field names match); the rest is wiped out, so the mapping needs to be restored manually afterwards. Risky and annoying at the same time. Is there any alternative?

View 3 Replies View Related

RS2k Issue: PDF Exporting Report With Hidden Columns, Stretches Visible Columns And Misplaces Columns On Spanned Page

Dec 13, 2007

Hello:

I am running into an issue with RS2k PDF export.

Case: Exporting Report to PDF/Printing/TIFF
Report: contains 1 table with 19 columns. 1 column is static; the other 18 are visible at the user's discretion. When printed or exported to PDF, the report naturally spans 2 pages, 16 columns on the first and 3 on the second, and the column widths have been adjusted to provide a perfect page span.

User A elects to hide two of the columns and show the rest. The report compiles, the viewable version is perfect, and the Excel export is perfect. But in the PDF export, on the first page, every fifth column, starting with the last column that was hidden, is expanded to take up additional width. On the spanned page, the first column renders correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show, with the last column expanded to take up the width the original 2 columns would have occupied, plus its own.

We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page-size selection available to the client. There are far too many combinations of what the user can elect to show or hide to work around the effect by building separate tables to show and hide on the same report.

Any help or suggestion on this issue would be appreciated

View 1 Replies View Related

OLE DB DESTINATION And SQL Server Destination

Jul 4, 2007

Hey All:



I was totally confused.

When designing the SSIS data flow, I first tried a SQL Server Destination, because my target server is a SQL Server instance; the task failed on execution.

Then I tried an OLE DB Destination instead of the SQL Server Destination, and the data flow worked.

I cannot figure out why.

By the way, the connection I used is OLE DB, and I chose an OLE DB source as the data source because I could not find a SQL Server data source.

Can anyone tell me the reasons for this?



View 9 Replies View Related

Transact SQL :: Select And Parse Json Data From 2 Columns Into Multiple Columns In A Table?

Apr 29, 2015

I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 of the 6 contain JSON data, so the users have requested that those 2 JSON columns be parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7). Here is what I have done so far:

I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first JSON column in my query (with CROSS APPLY), I get the right data back, but I also get 8 additional rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.

1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?

SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B

If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: The multi-part identifier "A.ITEM6" could not be bound.

2. My second question: how do I get around this error?

SELECT A.ITEM1,A.ITEM2,A.ITEM3,A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5,NULL) B,  fnSplitJson2(A.ITEM6,NULL) C
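
For the second question, the usual fix (a sketch, using the same objects as above) is to chain two CROSS APPLY clauses instead of separating the calls with a comma. The comma is old-style join syntax, and in that form the second function call cannot see alias A, which is exactly what "could not be bound" is complaining about:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C

For the first question, collapsing the 8 rows per input row back into columns generally takes conditional aggregation over the function's output (e.g. MAX(CASE WHEN ... END) per key name, grouped by the product's key columns), since the function itself returns one row per key/value pair.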

I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.

View 14 Replies View Related

T-SQL (SS2K8) :: Select Group On Multiple Columns When At Least One Of Non Grouped Columns Not Match

Aug 27, 2014

First, I'd like to figure out a count of how many rows that are not in the current edition have the following:

Second, I'd like to be able to select the primary keys of all the rows involved.

Third, I'd like to select all the primary keys of just the rows not in the current edition.

Not really sure how to describe this without making a dataset:

CREATE TABLE [Project].[TestTable1](
[TestTable1_pk] [int] IDENTITY(1,1) NOT NULL,
[Source_ID] [int] NOT NULL,
[Edition_fk] [int] NOT NULL,
[Key1_fk] [int] NOT NULL,
[Key2_fk] [int] NOT NULL,

[Code] .....

GROUP BY fails me because I only want the groups where the Edition_fk values don't match...
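
A hedged sketch of all three selections, assuming Source_ID, Key1_fk and Key2_fk identify a logical row across editions (the CREATE TABLE above is truncated, so adjust the grouping keys to suit) and that a @CurrentEdition parameter names the current edition:

DECLARE @CurrentEdition int = 1;  -- assumed parameter

-- 1) Count of rows not in the current edition:
SELECT COUNT(*)
FROM Project.TestTable1
WHERE Edition_fk <> @CurrentEdition;

-- 2) Primary keys of all rows in any group whose editions don't all match:
SELECT t.TestTable1_pk
FROM Project.TestTable1 AS t
JOIN (SELECT Source_ID, Key1_fk, Key2_fk
      FROM Project.TestTable1
      GROUP BY Source_ID, Key1_fk, Key2_fk
      HAVING MIN(Edition_fk) <> MAX(Edition_fk)) AS g
  ON g.Source_ID = t.Source_ID
 AND g.Key1_fk = t.Key1_fk
 AND g.Key2_fk = t.Key2_fk;

-- 3) Primary keys of just the rows not in the current edition:
SELECT TestTable1_pk
FROM Project.TestTable1
WHERE Edition_fk <> @CurrentEdition;

The HAVING MIN() <> MAX() trick is what gets around plain GROUP BY: it keeps only the groups whose Edition_fk values don't all match.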

View 4 Replies View Related

SQL Server 2014 :: Creating A Table With Updatable Columns And Read-only Columns

May 26, 2015

Here is my requirement; I'm not sure if this is possible. I am creating a table called master with columns col1, col2, col3, col4, col5, where col1 and col2 are updatable - this can be done easily.

Col3 and col4 are columns in another table, but these should be just read-only. Is this possible? It is possible with a view, but a view is not friendly with SharePoint CRUD. Col5 is a computed column of col2 and col5(?); if the step above can be done, then I guess this can be done too.
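
If I follow, one hedged sketch (all object names assumed, and reading col5 as computed from col2 and col4): keep col1/col2 in the base table, pull col3/col4 in through a join in a view (read-only by construction), and compute col5 in the view. Updates through the view that touch only col1/col2 still work, because they target a single base table; whether SharePoint's CRUD will cooperate is the open question.

CREATE VIEW dbo.MasterView
AS
SELECT m.Master_pk,
       m.col1,                   -- updatable through the view
       m.col2,                   -- updatable through the view
       o.col3,                   -- read-only: comes from the joined table
       o.col4,                   -- read-only: comes from the joined table
       m.col2 + o.col4 AS col5   -- computed; expressions are never updatable
FROM dbo.Master AS m
JOIN dbo.OtherTable AS o
  ON o.Other_pk = m.Other_fk;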

View 4 Replies View Related

Hiding/Showing Columns Based On The Columns Present In The Dataset

Jun 27, 2007

I have a query which retrieves multiple columns, varying from 5 to 15 based on the input parameter passed. I am using a table to map all these columns. If a column is not retrieved in the dataset (I am not talking about NULL data - the column is completely missing), then I want to hide it in my report.

Can I do that?

Any reply showing me the right way is appreciated.



-Thanks,

Digs

View 3 Replies View Related

How Does It Matter For The Non-clustered Index Key Columns And Included Columns?

Apr 24, 2007

Hi, all experts here,

Thanks a lot for your kind attention.

As I am creating the non-clustered indexes for the tables, I don't quite understand how it really matters whether I put the columns in the index key columns or into the included columns of the index.
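
In short: key columns define the index's sort order and are what the engine can seek and sort on; included columns are stored only at the leaf level, so they make an index covering without widening or re-sorting the keys (and they don't count against the 900-byte / 16-column key limits). A small illustration with assumed table and column names:

CREATE NONCLUSTERED INDEX IX_Orders_Customer
ON dbo.Orders (CustomerId, OrderDate)   -- key columns: usable for seeks and sorts
INCLUDE (TotalDue, OrderStatus);        -- leaf level only: carried just to cover queries

-- Satisfied entirely by the index above, with no key lookups:
SELECT TotalDue, OrderStatus
FROM dbo.Orders
WHERE CustomerId = 42
  AND OrderDate >= '20070101';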

I am really confused about that, and I am looking forward to hearing from you. Thank you very much again for your advice and help.

With best regards,

Yours sincerely,

View 4 Replies View Related

A Word About Meta-data, Pass Through Columns And Derived Columns

Oct 13, 2006

Here's another one of my bitchfests about stuff which annoys the *** out of me in SSIS (and no such problems in DTS):

Do you ever wonder how easy it was to set up a text-file-to-db transform in DTS? I had no problems at all. In SSIS I spent half a day trying to figure out how to get proper column data types for a text file. Of course MS was brilliant enough to add a "Suggest Types" feature to the text file connection manager - BUT guess what - it samples ONLY 1000 rows. So I tried to change that number to 50000 and clicked OK - BUT MS changed it back to 1000 without me noticing - SO NO WONDER some of the data types did not match later on. And boy, what fun it is to change the source columns after you have created a few transforms.

This s**t just breaks... So, a word about Derived Columns - pretty useful feature, eh? It's not f***ing useful if it DELETES SOME of the code itself after there have been changes in the data flow. I can't say how pissed off I am that SSIS went ahead and deleted columns from the flow and messed up derived columns just because the lineage IDs don't match.

Meta-data - it would be useful if you could change it and refresh it. I'm just sick and tired of it showing warnings and errors when there's nothing wrong, so after a change I need to double-click all my transforms to make those red and yellow boxes disappear.

Oh, and why I passionately dislike Derived Columns: you create new fields based on some data, you do some stuff - combine multiple columns into one - but you have no way of saying "remove the columns from the pipeline". Why would you need that? Well, if you have 50K+ rows with 30+ columns, it's EXTRA useless memory overhead for your package.

Hopefully one day I will understand how SSIS works (not an easy task, I say) - then I might be able to spend more time on development and less time on my bitchfest. UNTIL then --> Another Day - Another Hassle with SSIS.

View 5 Replies View Related

T-SQL (SS2K8) :: Converting Row Values To Columns With Dynamic Columns

Jun 11, 2015

Basically, I'm given a daily schedule on two separate rows for shift 1 and shift 2 for the same employee; I'm trying to align both shifts in one row, as shown below in the 'My desired results' section.

Sample Data:

;WITH SampleData ([ColumnA], [ColumnB], [ColumnC], [ColumnD]) AS
(
SELECT 5060,'04/30/2015','05:30', '08:30'
UNION ALL SELECT 5060, '04/30/2015','13:30', '15:30'
UNION ALL SELECT 5060,'05/02/2015','05:30', '08:30'
UNION ALL SELECT 5060, '05/02/2015','13:30', '15:30'

[Code] ....

The results from the above are as follows:

columnA | columnB    | SampleTitle1 | SampleTitle2 | SampleTitle3 | SampleTitle4
5060    | 04/30/2015 | 05:30        | NULL         | NULL         | NULL
5060    | 04/30/2015 | 13:30        | 15:30        | NULL         | NULL
5060    | 05/02/2015 | 05:30        | NULL         | NULL         | NULL
5060    | 05/02/2015 | 13:30        | 15:30        | NULL         | NULL

My desired results with desired headers are as follows:

PERSON | STARTDATE  | STARTTIME1 | ENDTIME1 | STARTTIME2 | ENDTIME2
5060   | 04/30/2015 | 05:30      | 08:30    | 13:30      | 15:30
5060   | 05/02/2015 | 05:30      | 08:30    | 13:30      | 15:30
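
One way to get from the actual results to the desired ones (a sketch against the SampleData CTE above, repeated here so it runs standalone): number each person/date pair's rows by start time, then fold the two shifts onto one row with conditional aggregation.

;WITH SampleData ([ColumnA], [ColumnB], [ColumnC], [ColumnD]) AS
(
    SELECT 5060, '04/30/2015', '05:30', '08:30'
    UNION ALL SELECT 5060, '04/30/2015', '13:30', '15:30'
    UNION ALL SELECT 5060, '05/02/2015', '05:30', '08:30'
    UNION ALL SELECT 5060, '05/02/2015', '13:30', '15:30'
),
Numbered AS
(
    SELECT ColumnA, ColumnB, ColumnC, ColumnD,
           ROW_NUMBER() OVER (PARTITION BY ColumnA, ColumnB
                              ORDER BY ColumnC) AS Shift   -- 1st row = shift 1, 2nd = shift 2
    FROM SampleData
)
SELECT ColumnA AS PERSON,
       ColumnB AS STARTDATE,
       MAX(CASE WHEN Shift = 1 THEN ColumnC END) AS STARTTIME1,
       MAX(CASE WHEN Shift = 1 THEN ColumnD END) AS ENDTIME1,
       MAX(CASE WHEN Shift = 2 THEN ColumnC END) AS STARTTIME2,
       MAX(CASE WHEN Shift = 2 THEN ColumnD END) AS ENDTIME2
FROM Numbered
GROUP BY ColumnA, ColumnB;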

View 3 Replies View Related

Matching A View's Columns To Its Underlying Table's Columns

Jul 20, 2005

Hello,

Using SQL Server 2000, I'm trying to put together a query that will tell me the following information about a view:

The view name
The names of the view's columns
The names of the source tables used in the view
The names of the columns that are used from the source tables

Borrowing code from the VIEW_COLUMN_USAGE view, I've got the code below, which gives me the view name, source table name, and source column name. And I can easily enough get the view columns from the syscolumns table. The problem is that I haven't figured out how to link a source column name to a view column name. Any help would be appreciated.

Gary

select
    v_obj.name as ViewName,
    t_obj.name as SourceTable,
    t_col.name as SourceColumn
from
    sysobjects t_obj,
    sysobjects v_obj,
    sysdepends dep,
    syscolumns t_col
where
    v_obj.xtype = 'V'
    and dep.id = v_obj.id
    and dep.depid = t_obj.id
    and t_obj.id = t_col.id
    and dep.depnumber = t_col.colid
order by
    v_obj.name,
    t_obj.name,
    t_col.name

View 2 Replies View Related

SELECT Query - Different Columns/Number Of Columns In Condition

Sep 10, 2007

I am working on a Statistical Reporting system where:


Data Repository: SQL Server 2005
Business Logic Tier: Views, User Defined Functions, Stored Procedures
Data Access Tier: Stored Procedures
Presentation Tier: Reporting Services

The end user will be able to slice & dice the data for the report by:

different organizational hierarchies
different numbers of layers within a hierarchy
selecting one organization, or selecting All of the organizations within the organizational hierarchy
combinations of selection criteria, where the selection criteria are independent of each other and can also differ

Below is an example of 2 organizational hierarchies:

Hierarchy 1: Country -> Work Group -> Project Team (Project Team within Work Group within Country)

Hierarchy 2: Client -> Contract -> Project (Project within Contract within Client)

Based on the 2 different hierarchies above, here are a couple of use cases:

Country = "USA", Work Group = "Network Infrastructure", Project Team = all teams
Country = "USA", Work Group = all work groups

Client = "Client A", Contract = "2007-2008 Maint", Project = "Accounts Payable Maintenance"
Client = "Client A", Contract = "2007-2008 Maint", Project = all
Client = "Client A", Contract = all

I am totally stuck on:

How to implement the data interface (stored procs) to the reports
How to implement the business logic to handle the different hierarchies & different numbers of levels

I did get help earlier in this forum on how to handle a parameter having a specific value or a NULL value (to select "all"):

(WorkGroup = @argWorkGroup OR @argWorkGroup IS NULL)

Any ideas? Should I be doing this in SQL statements, or should I be looking at using Analysis Services?
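
Staying in plain T-SQL, the catch-all pattern above extends to every independent criterion, with NULL meaning "all" at that level. A hedged sketch (procedure, table and column names are all assumed):

CREATE PROCEDURE dbo.GetProjectStats
    @argCountry     varchar(50) = NULL,
    @argWorkGroup   varchar(50) = NULL,
    @argProjectTeam varchar(50) = NULL
AS
SELECT Country, WorkGroup, ProjectTeam, COUNT(*) AS ProjectCount
FROM dbo.ProjectStats          -- assumed reporting view/table
WHERE (@argCountry     IS NULL OR Country     = @argCountry)
  AND (@argWorkGroup   IS NULL OR WorkGroup   = @argWorkGroup)
  AND (@argProjectTeam IS NULL OR ProjectTeam = @argProjectTeam)
GROUP BY Country, WorkGroup, ProjectTeam;

Each hierarchy would get its own proc (or its own parameter set), which keeps the per-hierarchy logic out of the report itself.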

Thanks for all your help!

View 1 Replies View Related






