Synchronous Transformations, Cont'd

Jan 5, 2008

The following statement is from Microsoft documentation:

If you use the ExclusionGroup property to specify that rows should only go to one or another of a group of outputs, as in the Conditional Split transformation, you must call the DirectRow method to select the appropriate destination for each row. When you have an error output, you must call DirectErrorRow to send rows with problems to the error output instead of the default output.

I have a question about this, because I have never used the ExclusionGroup property. For example, I have a script component where I specify 4 separate outputs, because I am sending different groups of rows to each output. I accomplish this programmatically using a lot of conditionals, and it works fine.

I did not have to use the ExclusionGroup property to do this, so I'm not sure why I would ever need it, or need to call DirectRow. I'm trying to understand this better, because I feel like I may not be understanding DirectRow, or how and when to use it.
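For reference, here is a minimal sketch (the output and column names are hypothetical) of how the two pieces fit together: when several outputs share the same non-zero ExclusionGroup, a row goes only to the output you direct it to, and undirected rows are discarded.

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' Outputs "HighValue" and "Standard" share ExclusionGroup = 1,
    ' so each row must be directed explicitly or it goes nowhere.
    If Row.Amount > 1000 Then
        Row.DirectRowToHighValue()
    Else
        Row.DirectRowToStandard()
    End If
End Sub

With ExclusionGroup left at its default of 0, every synchronous output receives every row automatically, and DirectRow is not involved.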

Thanks

View 1 Replies



Synchronous Vs Asynchronous Outputs

Jan 3, 2008

Can someone please clarify:

If you have a data file and you only want CERTAIN rows to pass to the destination (e.g., a table), and you are using a script component to accomplish this, is this a synchronous or asynchronous transformation?

Q. And how do you assign the values to the output? Do you have to create output columns, or not?

I am very, very confused right now. I can't seem to find a decent answer to what is a very basic question, either in my SSIS book or in the documentation. Perhaps it is so basic that the question doesn't seem valid? I don't know. But I just don't understand this at all.
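For what it's worth, here is a minimal sketch of the synchronous, filter-style approach (the column name Status is hypothetical): the single output stays synchronous to the input, its ExclusionGroup is set to 1, and no new output columns are needed because the input columns flow through.

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' Only rows that are explicitly directed pass through;
    ' everything else is silently dropped.
    If Not Row.Status_IsNull AndAlso Row.Status = "ACTIVE" Then
        Row.DirectRowToOutput0()
    End If
End Sub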

Thank you

View 9 Replies View Related

Synchronous Application Scenario

Oct 1, 2007

I'm new to SSB, so please bear with me. Our application requirements are:
1) Web app gathers user input from a web UI.
2) Web app calls a stored procedure, passing in the user input gathered in step (1).
3) Procedure issues queries to multiple data sources (SQL Server 2005 db's) derived from the user input.
4) Procedure waits for replies from these multiple data sources, governed by a timeout in case a data source is unavailable or slow to respond.

5) Procedure aggregates the data returned in step (4) from multiple data sources into an XML document.
6) Procedure returns the XML document as an output parameter.

This is different from the usual SSB asynchronous application paradigm. In particular, I'm wondering:

How can I set up a synchronous dialog, where the procedure that issues the SEND waits for a reply from the target service? I don't want the initiator procedure to end after SENDing, but rather wait for the replies to those messages so it can aggregate the data from the reply message bodies.
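In case it helps frame the question, here is a rough sketch of the pattern (the service, contract, queue, and message type names are all hypothetical): the initiator SENDs and then blocks on its own queue with a timeout.

DECLARE @h UNIQUEIDENTIFIER;
DECLARE @reply XML;

BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [//App/InitiatorService]
    TO SERVICE   '//App/TargetService'
    ON CONTRACT  [//App/QueryContract]
    WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @h
    MESSAGE TYPE [//App/QueryRequest] (N'<query>...</query>');

-- Wait synchronously for the reply; give up after 30 seconds.
WAITFOR (
    RECEIVE TOP (1) @reply = CAST(message_body AS XML)
    FROM dbo.InitiatorQueue
    WHERE conversation_handle = @h
), TIMEOUT 30000;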

Thanks - Dana Reed

View 4 Replies View Related

Synchronous Script Component: Get Last Row

Sep 25, 2006

Hi, I need to know whether I am on the last row or not in my script component. If I am, I want to alter a column in that row to indicate that I am processing the last row. Is there a way to do it? I tried overriding ProcessInput, but by the time EndOfRowset() tells me the last row has been processed, I can no longer alter the row in the buffer.
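One workaround is to hold each row back until the next one arrives, so the final row can still be flagged before it is written out. A rough sketch (this assumes an asynchronous output whose columns mirror the input, plus hypothetical columns SomeColumn and IsLastRow):

Private pendingValue As String
Private hasPending As Boolean = False

Public Overrides Sub Input0_ProcessInput(ByVal Buffer As Input0Buffer)
    While Buffer.NextRow()
        If hasPending Then
            WriteRow(pendingValue, False)
        End If
        pendingValue = Buffer.SomeColumn
        hasPending = True
    End While

    If Buffer.EndOfRowset() Then
        If hasPending Then
            WriteRow(pendingValue, True)   ' the held-back row is the last one
        End If
        Output0Buffer.SetEndOfRowset()
    End If
End Sub

Private Sub WriteRow(ByVal value As String, ByVal isLast As Boolean)
    Output0Buffer.AddRow()
    Output0Buffer.SomeColumn = value
    Output0Buffer.IsLastRow = isLast
End Sub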

Thank you,

Ccote

View 4 Replies View Related

Synchronous Bulk-Copy Into Two Tables

May 24, 2007

Hi guys,

In my db I have these three tables:

1. Stores
2. Products
3. Parts

Their structure is something like:

Stores: StoreId, StoreName
Products: ProductId, StoreId, ProductName
Parts: PartId, ProductId, PartName

Now, in my application I want to implement a bulk-copy operation so a user can copy products from one store to another one, and when a product is copied to the new store, all of its parts should be copied too. In fact I need a method to insert a product item into the Products table and synchronously copy its parts into the Parts table, repeating these steps until all products are copied. How can I do that without cursors or loops?

Thanks
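A set-based sketch of one way to do it (this assumes ProductId and PartId are IDENTITY columns and that ProductName is unique within a store, so new rows can be matched back to their originals without a cursor):

DECLARE @SourceStore INT, @TargetStore INT;
SET @SourceStore = 1;
SET @TargetStore = 2;

-- 1) Copy the products in one set-based insert.
INSERT INTO Products (StoreId, ProductName)
SELECT @TargetStore, p.ProductName
FROM Products AS p
WHERE p.StoreId = @SourceStore;

-- 2) Copy the parts, matching each new product to its original by name.
INSERT INTO Parts (ProductId, PartName)
SELECT np.ProductId, pt.PartName
FROM Products AS op                            -- original products
JOIN Parts    AS pt ON pt.ProductId = op.ProductId
JOIN Products AS np ON np.ProductName = op.ProductName
                   AND np.StoreId = @TargetStore
WHERE op.StoreId = @SourceStore;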

View 19 Replies View Related

Temporarily Suspending Synchronous Mirroring

May 13, 2006

I have synchronous mirroring. Sometimes I lose the connection to the witness and mirror servers, and at those times the principal server is down. Is there any way to change mirroring to asynchronous when the principal is down due to a communication breakdown between the witness and mirror servers? I can break mirroring, but to re-establish it I have to back up and restore on the other side. If I could switch to asynchronous instead, then when the witness and mirror servers come back I wouldn't need to restore the entire database. Of course I could always use asynchronous mirroring, but that does not fail over automatically. I am thankful for all answers and suggestions. Thanks.
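For reference, the switch itself is just a couple of statements run on the principal once it is reachable again (database and witness names are hypothetical; asynchronous mode requires removing the witness first):

ALTER DATABASE MyDb SET WITNESS OFF;          -- no automatic failover in async mode
ALTER DATABASE MyDb SET PARTNER SAFETY OFF;   -- asynchronous (high-performance) mode

-- Later, to return to synchronous mirroring with automatic failover:
ALTER DATABASE MyDb SET PARTNER SAFETY FULL;
ALTER DATABASE MyDb SET WITNESS = 'TCP://witness.example.com:5022';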

View 3 Replies View Related

Synchronous Merge Replication (Automatic)

Aug 30, 2006

hi there

I am using SQL SERVER 2005.

I am creating a merge replication (between two databases). It's working well, but I need to go to Local Publications, right-click, select "View Synchronization Status", and then start the agent manually to synchronize the databases.

Now I want the two databases to replicate (merge) automatically, without going to any menu. I mean the synchronization should take place automatically whenever either of the databases changes, without touching anything.

For example: if one record is inserted into a table (on commit) of database ONE, it should be reflected to the corresponding table of database TWO.
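The usual way to get this is to schedule the Merge Agent to run continuously rather than on demand. A sketch (the publication and subscriber names are hypothetical, and the frequency value is my reading of the BOL schedule constants, so verify it against Books Online):

EXEC sp_addmergepushsubscription_agent
    @publication    = N'MyMergePub',
    @subscriber     = N'SUBSCRIBERSERVER',
    @subscriber_db  = N'SubDb',
    @frequency_type = 64;  -- 64 = autostart: starts with SQL Server Agent and runs continuously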

Any Idea, or Link or solution???

Gurpreet S. Gill



View 4 Replies View Related

An SSIS Job Runs Synchronous Or Asynchronous?

Mar 19, 2008

Porting an existing SQL 2000 DTS job over to a SQL 2005 server running SSIS.

Background:
The job loads data into an empty work table and performs some work before clearing out the work table.
This job runs every minute.

Question:
If the job happens to take longer than a minute, does SSIS create a second instance of the job?
Or perhaps it does what DTS did and reschedules the job for the next iteration?

Concern:
I need to know because there would be key constraint violations if another instance of the job started before the work table was cleared out.
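For what it's worth, SQL Server Agent never starts a second concurrent instance of the same job; a scheduled run that comes due while the job is still executing is simply skipped. A quick way to see what is currently in flight (a sketch against the msdb catalog in SQL 2005):

SELECT j.name, a.start_execution_date
FROM msdb.dbo.sysjobactivity AS a
JOIN msdb.dbo.sysjobs AS j
  ON j.job_id = a.job_id
WHERE a.start_execution_date IS NOT NULL
  AND a.stop_execution_date IS NULL;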

Thanks in advance


View 1 Replies View Related

1 Row In, Multiple Rows Out: Synchronous Or Asynchronous?

Jan 15, 2008

I'm creating a script component that reads from an OLE DB source and writes to an OLE DB destination. For every input row, I need to output several rows. I tried using the Row.DirectRowToOutput0() method inside a loop in the Input0_ProcessInputRow routine, but that's not working. Should I be using AddRow() instead? If I use AddRow(), does this mean it needs to be an asynchronous transformation?

I remember seeing a blog entry (Jamie's?) that did almost exactly what I wanted, but I can't find it now.
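For the record, AddRow() does imply an asynchronous output: a synchronous output can emit each input row at most once, so a 1-to-N split needs an output with SynchronousInputID set to None. A minimal sketch with hypothetical columns:

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' Explode one input row into RepeatCount output rows.
    For i As Integer = 1 To Row.RepeatCount
        Output0Buffer.AddRow()
        Output0Buffer.Id = Row.Id
        Output0Buffer.Sequence = i
    Next
End Sub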

Any pointers appreciated

View 13 Replies View Related

A/Synchronous Execution Of SSIS ETL Packages

Jun 8, 2006

Hi

I'd like to know if there's a way to control the execution of ETL packages such that:

1) Different packages, or at least packages that don't access the same table or database, run asynchronously with respect to each other (e.g., two different packages run at the same time); and

2) If a package is called for execution more than once by different requests, the calls are forced to run synchronously, one after the other.

If this is possible, what resources would it require? Is this possible on, say, a dual or quad processor machine?
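One lightweight way to get the second behavior is a named application lock taken in an Execute SQL Task at the start of the package; a second invocation then blocks until the first releases it. A sketch (the lock name is hypothetical):

DECLARE @rc INT;
EXEC @rc = sp_getapplock
     @Resource    = N'MyPackageName',
     @LockMode    = N'Exclusive',
     @LockOwner   = N'Session',
     @LockTimeout = -1;   -- wait indefinitely for the earlier run to finish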

Thanks.

View 6 Replies View Related

Help Needed: For Creating Synchronous Transform Component

Mar 13, 2006

Hi
I am currently trying to write a custom transform component in C# that takes a row of data and performs a lookup via an external system. If there is a match, it should send the data from the external system down a match output (which will have different columns from the input) and drop the data that was read; otherwise it should send the data down the unmatched output, which has the same columns as the input.

I would like to write a synchronous transform, because I don't need to read all the rows from the input buffer before I start processing, and I don't want millions of rows loaded in memory.

Can this be done? Also, does anyone have example code for how to do this? I can't see how to send data down the match output buffer, since it will hold the lookup results, which have different columns from the input data, nor how to discard the input data.

Thanks, Steve

View 7 Replies View Related

SYNCHRONOUS TRANSFORMATION - FILE TO TABLE - Help Needed

Apr 2, 2007



Hi,

Can anyone please point me in the right direction?

What I am trying to do should be very straightforward: take a flat file, perform various transformations on various columns using the Script Component, then send the transformed (and untransformed) rows to a table in the database:

FILE SOURCE --> SCRIPT COMPONENT (synchronous transform) --> OLE DB DESTINATION

My question is: how do I do this using scripting? I have yet to see an example of what I'm trying to do. (I have Kirk Haselden's book, Donald Farmer's SSIS scripting book, and the MSDN website, but I have yet to see an example of it!)

How do I account for all the columns that will be both transformed and untransformed, and get them into the table? That is the missing piece of information I can't find anywhere.

The closest thing I found was the code snippet below. Do I need to use this syntax, e.g. Me.Output0Buffer.FirstName = ... (where FirstName is the actual column name)? Then, once I hook up the Script Component to the OLE DB Destination, which uses a connection manager to the table, will it insert FirstName with what I specify?

Help. Thanks.

Me.Output0Buffer.AddRow()                ' adds a new row to an (asynchronous) output
Me.Output0Buffer.FirstName = columnValue ' assigns one of its columns
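For the synchronous case, the snippet above is actually more than is needed. With the output left synchronous to the input there is no AddRow() and no Output0Buffer: every input row flows through automatically, untransformed columns pass through untouched, and transformed columns are simply reassigned. A minimal sketch (hypothetical column name, marked ReadWrite on the input):

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    ' Transformed column: reassign it in place.
    Row.FirstName = Row.FirstName.Trim().ToUpper()
    ' All other columns need no code at all - they flow through unchanged.
End Sub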

View 8 Replies View Related

Transformations

Jul 11, 2006

Hi all,

Can anyone point me to some links that would give me good insight into the SSIS transformations?

Thanks,

Praveen kumar Dayanithi

View 4 Replies View Related

SQL Server Admin 2014 :: Synchronous And Asynchronous Commit

Nov 10, 2014

We are seeing a high number of hadr_sync_wait waits on our server during peak times after setting up an AlwaysOn availability group. We have set it up with synchronous commit and automatic failover. Can we change these settings to asynchronous commit and manual failover whenever we need to, and change them back to synchronous commit during off-peak timings? Any drawbacks to these changes?
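Mechanically the switch can be made on the fly; a sketch with hypothetical availability group and replica names (note the ordering: asynchronous commit requires manual failover, so the failover mode changes first on the way out and last on the way back):

ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLNODE2'
    WITH (FAILOVER_MODE = MANUAL);
ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLNODE2'
    WITH (AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT);

-- During off-peak hours, revert:
ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLNODE2'
    WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
ALTER AVAILABILITY GROUP [MyAG] MODIFY REPLICA ON N'SQLNODE2'
    WITH (FAILOVER_MODE = AUTOMATIC);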

View 4 Replies View Related

Application With AlwaysOn Read Only Replica In Synchronous Commit

Mar 8, 2015

I am trying to implement a read-only replica to move much of an application's data reads to the secondary replica. Initially I had the primary and secondary set to asynchronous commit. QA brought up an issue with creating entities from the application: after creating an entity, the application turns around and repopulates the entire aggregate object, and it seems the application was reading the secondary replica before the data had been committed there. Although I understand the issues that synchronous commits can cause, I went ahead and made the change, as I expected it to fix the issue. After changing the primary replica to synchronous we still had the error, so I also changed the secondary (although that makes no sense), but the issue remains.

View 5 Replies View Related

Change Output Alias On Synchronous Component Programatically

Nov 21, 2006



I wanted to enquire how this is done. I tried doing it in the SetUsageType method I have overridden, but that only allows the description to be changed; basically I need to change the Name.

The best option would be to change it instantly when a user selects a column from the inputs in the custom component, i.e. it changes the output alias to a desired value (the Input tab in the advanced editor).

All this is being done in a custom component which I would like to be synchronous; I can achieve a similar result asynchronously.

Thanks

View 2 Replies View Related

Data Transformations, Or Something Like That, With SQL

Jul 30, 2007

Right, so I'm currently trudging through the SQL video tutorials and such, so it may be that I get to this sooner or later, but as I'm under a deadline, I thought I'd post this question beforehand so I can use that info with what I'm learning now.

Here's my situation: I have an ASP.NET 2.0 site in which I currently use XML files to display the text on the page, and I transform that text using an XSL stylesheet. I want to move that data to a database, but I'm not sure of the best way to do that.

What I'm most concerned with is storing the main text (paragraphs with embedded hyperlinks). Currently, I can get the XSL to pick out the links and transform them from plain XML data to live links when they display on the page, but would I be able to do the same if I were pulling these paragraphs out of a database? Or should I just store the XML data in the database, and still pull that out so I can transform it appropriately with the XSL sheet I already have? (For that matter, can I dynamically write XML content to a database? Or am I just better off keeping my XML files?) What's the best approach for something like this?

Thanks for the help!
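For reference, SQL Server 2005 can store the documents natively in an xml column, which would keep the existing XSL pipeline intact; a sketch with hypothetical table and column names:

CREATE TABLE dbo.PageContent (
    PageId INT IDENTITY PRIMARY KEY,
    Body   XML NOT NULL
);

INSERT INTO dbo.PageContent (Body)
VALUES (N'<page><para>Hello <link href="/about">world</link></para></page>');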

View 4 Replies View Related

Using Pivot Transformations!!!

Jan 10, 2008

I want to change the rows of the following table to columns, to avoid repetition:






Manufacturer:     AOpen
Model:            s661FXm s661FXm Intel P4
System Type:      Motherboard
Standard Memory:  N/A
Maximum Memory:   2 GB
Sockets:          2
Slots/Banks:      2

Manufacturer:     HP/COMPAQ
Model:            Presario SR1917FR AMD Athlon 64 X2 3.06 GHz
System Type:      Desktop
Standard Memory:  1024 (1024MB x1 Removable)
Maximum Memory:   4 GB
Sockets:          4
Slots/Banks:      4



Can this be done using the Pivot transformation? If yes, then which columns get which PivotUsage values, and which column supplies the PivotKeyValue? I am getting a little confused here.
Please help!
Thanks

View 6 Replies View Related

Transformations As A Function?

Aug 3, 2006

I am importing four large flat files and have some formatting issues with dates. I've figured out how to process the dates within a package exactly the way I want. Unfortunately, the process has several steps that I don't want to have to repeat for each date field. Is it possible to call a reusable sequence of transformations that takes parameters and has a return value? Is there any other way to achieve similar results?

View 1 Replies View Related

Which Transformations To Use In The Dataflow?

May 23, 2007

Hi all,
In my data flow I have an OLE DB Source, which is a table on my extract server. I need to do some transformations and stage the table, which will be a dimension in the staging DB.

Q1: I need only 3 columns from the source table. Which transformation do I use to extract just those 3 columns?

Q2: Two of the three columns I need to pass through as-is, with no changes at all. The third column has values like "BOSTON....". I have a vague idea of what I need to do, but need something solid to kick off. The plan is to use a Replace function on this city column (as forum member Spirit1 advised, thanks!) to take out the dots, then write a condition: if BOSTON, assign the code "BOS", which becomes City_Code. This City_Code then has to be looked up in City_Dimension to get the City_Key_Number for Boston, and lastly both City_Code and City_Key_Number have to be sent on to the destination dimension.

Any ideas/suggestions will be appreciated.

Thanks in advance!



ravi

View 5 Replies View Related

Dynamic Transformations

Feb 7, 2006

Hi

I have a DTS package that is creating a table without a fixed number of columns; the number of columns depends on a couple of factors based on the data that I'm pulling from other tables.

After some processing I need to dump all the data in the "dynamic" table into an Excel doc. My problem is with the transformations within the Transform Data Task. I don't know how many fields I will have in my table, and these need to be mapped to columns within the Excel doc. Is it possible to programmatically define the transformations within an ActiveX script, or what else can I do?

Thanks

Johnnie

View 1 Replies View Related

Project Real TestHarness.dtsx - Synchronous Or Asynchronous Process ????

Jan 8, 2008



Hi,



Can someone explain how the Project REAL test harness is supposed to function? When I run it with and without debugging, and by using dtexec, other processes can start and stop before the console prompts (date and number of days to process) have been answered. If you just look at the test harness and the LoadGroupFullDaily, the Increment Date and SQL Audit steps finish prematurely, before the console data is even entered. Is there a property that I am missing?



Thanks,

Larry

View 1 Replies View Related

Data Format Transformations

Jun 18, 2008

I am using SQL 2005 SSIS. I need to do a data conversion for a date field in a txt file. I used the Import Wizard to bring my txt file into SQL 2005, but didn't convert the date. The date is displayed in the flat file as 20070612. Can someone help me convert the date? I added an OLE DB Source to the data flow and selected SQL command; what do I do next, and what do I write?
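Since yyyymmdd is a language-safe ISO format, a plain CONVERT in the source's SQL command is enough; a sketch with hypothetical table and column names:

SELECT CONVERT(DATETIME, ImportDate) AS ImportDate,  -- '20070612' -> 2007-06-12 00:00:00
       OtherColumn
FROM dbo.StagingTable;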

View 9 Replies View Related

OLE DB Transformations And Column DEFAULTS

Jul 2, 2007

I have a column in one of my tables that uses GETDATE as its DEFAULT. When I execute a DTS package to insert data into it, the column values are all the same, but if I use SSIS, the dates differ slightly (by a few ticks every several rows, though not after a consistent number of rows).

Is there an explanation for this difference, and how can I correct it?
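One way to make every row in a load share the same timestamp, regardless of how the pipeline batches its inserts (a sketch with hypothetical names): capture GETDATE() once and supply it explicitly instead of relying on the column DEFAULT.

DECLARE @LoadTime DATETIME;
SET @LoadTime = GETDATE();

INSERT INTO dbo.TargetTable (Col1, LoadDate)
SELECT Col1, @LoadTime
FROM dbo.Staging;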

View 2 Replies View Related

How Can I Use Audit And ROw Count Transformations?

Jan 2, 2008



Hi,

I am trying to get the row counts of the source and target databases and insert them into a RowCounts table, and also to get data about the package name, start time, etc. and insert it into a Logs table. How can I do this?
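A common pattern is a Row Count transformation in each data flow writing to an SSIS variable, followed by an Execute SQL Task that logs everything. A sketch of the statement that task might run (the ? parameters would be mapped to User::SourceRows, System::PackageName, and System::StartTime - all names hypothetical):

INSERT INTO dbo.PackageLog (PackageName, StartTime, SourceRows)
VALUES (?, ?, ?);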

thanks,
Gokhan

View 31 Replies View Related

ActiveX Transformations Gone From SSIS?

May 29, 2007

In good old-fashioned DTS there was the ability to perform custom transformations using an ActiveX/VBScript-type language. Does this still exist, or are we stuck with the Derived Column editor?

View 3 Replies View Related

Package Failure - Multiple Synchronous Data Flows - File Name Not Valid – /3GB /PAE

Apr 23, 2008

I have a package with 10 synchronous data flows which, combined, load about 300MB of flat file data to a database. This package would run successfully on 2 of our database servers, but would regularly fail on a third. The server on which it was failing is a 4-processor box with 16GB RAM running Windows Server 2003, SQL 2005, SSIS and SSRS - much more robust than one of the others that the package worked on. The SSIS error messages returned alternated between the following (with no apparent reason why one would show up rather than another, though the first was the most common):

"The file name "\\Server1\Folder1\File1.txt" specified in the connection was not valid."

"The file name property is not valid. The file name is a device or contains invalid characters."

"An error occurred while initializing the flat file parser."

For the first error message, the error would report different connection managers and their associated file as invalid from run to run. All of the files across the 10 dataflows resided in the same network folder, and the package would read in and process a few of them before failing, so the problem was definitely not the connection string.

Searching the forums, etc. for these errors provided no useful information - given the real cause of the problem, these error messages are worse than unhelpful, they send you looking in the wrong direction. It was only when trying to track down another problem on the same server that I discovered the issue. When trying to copy database backups greater than 12GB over the network to this server, the operation would fail with an "Insufficient System Resources" message.

Some research led to the discovery that problem was caused by the /3GB switch in the boot.ini file of the server (don't let your Server team use that switch if you have 16GB of memory or more). Removing the switch and setting SQL to utilize AWE, fixed both the file copy problem AND the SSIS package failure problem. The SSIS package failed, not due to a bad connection string, but rather to insufficient server resources (read memory) to handle the simultaneous connections.

I hope this may help any others trying to track down this kind of SSIS package failure.

I will also provide here what I have gleaned about setting up memory usage for SQL Server 2005 running on 32-bit Windows Server 2003 (with the caveat that I am no expert - corrections and additional information are welcome).

The following links got me started in my research (thanks to the folks who provided such useful information):
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=55191
http://articles.techrepublic.com.com/5100-10878_11-6091280.html
http://www.simple-talk.com/community/blogs/brian_donahue/archive/2007/09/30/37747.aspx
http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx
http://www.modhul.com/2007/11/10/optimising-system-memory-for-sql-server-part-i/

Also, search BOL for:
Server Memory Options
Enabling Memory Support for Over 4 GB of Physical Memory
Enabling AWE Memory for SQL Server


Windows Server 2003 provides access to 4GB of virtual address space. By default, 2GB is assigned to the OS and 2GB to applications. This default can be changed to 1GB for the OS and 3GB for applications by the use of the /3GB switch in the boot.ini file.

Physical memory over 4GB can be addressed by enabling Physical Address Extension (PAE), which is done by setting the /PAE switch in the boot.ini file. This does not increase the system's virtual address space; rather, it increases the size of the page table (which is maintained within the virtual address space), adding entries to reference the physical memory above 4GB.

It is important to note that these two switches are not interdependent (they do different things, and you can turn each on or off regardless of the other's status), though the combination of them has an impact on server performance and the maximum amount of physical memory which can be addressed.

The /3GB switch only impacts the allocation of the first 4GB of memory (virtual address space) between the OS and applications (default 50/50 % split, with switch on - 25% OS and 75% applications). The /PAE switch enables the system to reference/manage physical memory above 4GB, but does not alter the allocation percentages of the first 4GB of memory between the OS and applications. However, when PAE is enabled, the OS requires more memory within the first 4GB to manage the physical memory above 4GB (due to increased page table entries). With the /3GB switch, the OS has only 1GB of virtual address space, and only enough space to manage a total of 16GB of physical memory. If 32GB of physical memory is installed, 16GB of it will go to waste.

Address Windowing Extensions (AWE) is an API that allows an application to address more than the 2-3GB of memory that is available to applications within the virtual address space (the first 4GB of memory). SQL Server can utilize AWE to take advantage of memory above the first 4GB that is made available via PAE, and can even reserve portions for its own use. I believe (though I can't remember where I got this bit) that SQL utilizes AWE memory only for the page cache (buffer pool - which seems to be a misnomer), and not for other operations.

To enable AWE, see the BOL references above.
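For reference, the T-SQL side of those BOL steps looks like this (run after setting /PAE in boot.ini; the max server memory value is illustrative - see the formula below):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;   -- takes effect after an instance restart
RECONFIGURE;
-- Cap the buffer pool, leaving headroom for the OS and other processes:
EXEC sp_configure 'max server memory', 12288;   -- MB
RECONFIGURE;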

The big question: what are the recommended settings for all of these? That all depends on what you have running on the server. You need to leave space for the OS, SQL Server and any other applications you have.

The hard and fast rules:
If you have more than 4GB of RAM, you must use the /PAE switch in order to take advantage of it.
If you have more than 16GB of RAM, you must NOT use the /3GB switch in order to take advantage of it.

Based on anecdotal evidence, I've noticed the following generally recommended guidelines - assuming the server is dedicated to SQL.

Use of the /3GB switch seems to be a generally accepted practice if you have 8GB of RAM or less. For between 8 and 16GB, some say never use the /3GB switch, others say you can use it up to 12GB, and still others up to 16GB. I interpret this to mean that it all depends on what types of loads are being placed on the server, and that testing on individual servers will be required to determine whether or not to use the switch. Certainly that was my experience - the /3GB switch worked fine with 16GB RAM until the server encountered a certain workload. For me, no more /3GB switch.

For setting SQL to use AWE, most seem to agree that it should be enabled if you have more than 4GB RAM. The setting of max server memory is more complicated. BOL seems to suggest (the 'Server Memory Options' entry) a formula of total physical memory minus 1-2GB for the operating system. Based on a desire to be a bit more conservative, I am now using the following formula:

max server memory = total physical memory
    minus 4GB for the OS and application processes
          (since the AWE memory is utilized for page cache, not SQL processes)
    minus AWE memory required by other applications, including other instances of SQL Server


If anyone has additional insight, or a more refined equation, I could certainly benefit from it.

View 1 Replies View Related

Difference Between 'Fuzzy Lookup' And 'Lookup' Transformations

Mar 5, 2007

What is the difference between the 'Fuzzy Lookup' and 'Lookup' transformations in SSIS? Any real-time scenario for better understanding?

View 1 Replies View Related

Help With Data Transformations With Outer Join?

Jul 20, 2005

Hello all.

I am trying to write a query that "just" switches some data around so it is shown in a slightly different format. I am already able to do what I want in Oracle 8i, but I am having trouble making it work in SQL Server 2000. I am not a database newbie, but I can't seem to figure this one out, so I am turning to the newsgroup. I am thinking that some of the SQL gurus out there have done this very thing a thousand times before and the answer will be obvious to them. This message is pretty long, but hopefully it gives you enough information to replicate the issue.

There are 3 tables involved in my scenario. Potentially a lot more in the real application, but I'm trying to keep this example as simple as possible.

In my database I have many "things". Let's call them "User Records" (table: users) for this example. My app allows the customer to create any number of custom "Extra Fields" (XF's) for a given User Record. The Extra Field definitions are stored in a table which we can call attribs. The actual XF values for a given user record are stored in a third table, let's call it users_attribs.

users_attribs will look something like this (actual DDL below):

UserID | ExtraFieldID | Value
--------------------------------------
User_1 | XF_1         | ham
User_1 | XF_2         | eggs
User_2 | XF_1         | bacon
User_2 | XF_2         | cheese
User_3 | XF_2         | onions

The end result is that I want a SQL query that returns something like this:

UserID | XF_1  | XF_2
-------------------------------------
User_1 | ham   | eggs
User_2 | bacon | cheese
User_3 | NULL  | onions

Potentially there would be one column for each extra field definition. One interesting question is how to get a dynamic number of columns to show up in results (so new XF's show up automatically), but I'm not worried about that for now. Assume I will hard-code a specific set of extra fields into my query.

The key here is that all users must show up in the final result EVEN IF they don't have some extra field value defined. Since User_3 in the example above doesn't have an XF_1 record, we see a NULL in that column in the final result.

With Oracle I am able to accomplish this via an outer join, and I know SQL Server supports outer joins, but I can't seem to make it work. In every version I have tried so far, if any user is missing any extra field value, the entire row for the user goes "missing", and that is my problem.

It seems like one possible solution would be to just go ahead and populate the users_attribs table with a NULL value for that combination of user ID and extra field ID, basically adding a new row like this:

UserID | ExtraFieldID | Value
--------------------------------------
User_3 | XF_1         | NULL

I would like to avoid that if possible, for a number of reasons, particularly the question of *when* that NULL would be added. I don't want my report to touch the database and add stuff at reporting time if at all possible. In Oracle, I seemingly don't have to, and I want to get to that point on SQL Server.

So, here is some specific DDL to recreate this scenario:

CREATE TABLE users (user_id varchar(60), username varchar(60));

-- Extra Field (attribs) definitions
CREATE TABLE attribs (xf_id varchar(60), xf_name varchar(60));

-- Extra Field values for Users
CREATE TABLE users_attribs (user_id varchar(60), xf_id varchar(60), val varchar(60));

-- populate the sample tables
-- sample User recs
INSERT INTO users VALUES ('U_1', 'John Smith');
INSERT INTO users VALUES ('U_2', 'Mary Rogers');

-- sample extra field definitions
INSERT INTO attribs VALUES ('XF_1', 'Extra Field 1');
INSERT INTO attribs VALUES ('XF_2', 'Extra Field 2');
INSERT INTO attribs VALUES ('XF_3', 'Extra Field 3');

-- sample values for User Extra Fields (XF's)
-- U_1 ("John Smith") has complete values for each XF
INSERT INTO users_attribs VALUES ('U_1', 'XF_1', 'XF_1 value for U_1');
INSERT INTO users_attribs VALUES ('U_1', 'XF_2', 'XF_2 value for U_1');
INSERT INTO users_attribs VALUES ('U_1', 'XF_3', 'XF_3 value for U_1');

-- U_2 ("Mary Rogers") only has one value, missing the other two.
INSERT INTO users_attribs VALUES ('U_2', 'XF_2', 'XF_2 value for U_2');

Now, I can get what I want on Oracle, provided that I define a new view that joins the three tables together, then do a separate query on that view that does an outer join. I could dispense with the view, but I don't want to hard-code the XF IDs into the query. I am fine with hard-coding the XF names, though. (Long story.)

-- Create a User Extra Field view that joins Users,
-- extra field definitions (attribs), and values (users_attribs).
CREATE VIEW u_xf_view AS
SELECT u.user_id, at.xf_name, uxf.val
FROM users u, attribs at, users_attribs uxf
WHERE uxf.user_id = u.user_id
  AND uxf.xf_id = at.xf_id;

-- Oracle-only outer join syntax works if you use the view:
SELECT
    u.username AS "User Name",
    uxf1.val   AS "Extra Field 1 Value",
    uxf2.val   AS "Extra Field 2 Value",
    uxf3.val   AS "Extra Field 3 Value"
FROM users u, u_xf_view uxf1, u_xf_view uxf2, u_xf_view uxf3
WHERE uxf1.user_id(+) = u.user_id AND uxf1.xf_name(+) = 'Extra Field 1'
  AND uxf2.user_id(+) = u.user_id AND uxf2.xf_name(+) = 'Extra Field 2'
  AND uxf3.user_id(+) = u.user_id AND uxf3.xf_name(+) = 'Extra Field 3';

-- RESULTS (correct):

User Name     Extra Field 1 Value   Extra Field 2 Value   Extra Field 3 Value
-----------   -------------------   -------------------   -------------------
John Smith    XF_1 value for U_1    XF_2 value for U_1    XF_3 value for U_1
Mary Rogers   NULL                  XF_2 value for U_2    NULL

2 Row(s)

So far I have not been able to get the equivalent result in SQL Server. Like I said, I am really hoping to avoid populating those NULL values. Can anyone think of a way to replicate Oracle's behavior here? I have tried a number of variations on the ANSI join syntax instead of Oracle's (+) operator, but everything I tried so far has only yielded a row when ALL extra fields are populated (or even worse behavior).

I greatly appreciate any assistance you may be able to give. I would be happy to provide any additional information if I forgot to mention something important. I apologize in advance for any broken/wrapped lines. Thank you for taking the time to read this.

I'm going to be out of town for the next week or so, so I won't check for a response until then, but as soon as I get back home I will check back in the newsgroup.

Thank you!!

Preston Landers
pibble (at) yahoo (dot) com
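A sketch of the ANSI equivalent that preserves every user: LEFT OUTER JOIN the view once per extra field, with the xf_name test placed in the ON clause (moving that test into the WHERE clause is what makes the whole row disappear when a value is missing).

SELECT
    u.username AS [User Name],
    uxf1.val   AS [Extra Field 1 Value],
    uxf2.val   AS [Extra Field 2 Value],
    uxf3.val   AS [Extra Field 3 Value]
FROM users u
LEFT OUTER JOIN u_xf_view uxf1
       ON uxf1.user_id = u.user_id AND uxf1.xf_name = 'Extra Field 1'
LEFT OUTER JOIN u_xf_view uxf2
       ON uxf2.user_id = u.user_id AND uxf2.xf_name = 'Extra Field 2'
LEFT OUTER JOIN u_xf_view uxf3
       ON uxf3.user_id = u.user_id AND uxf3.xf_name = 'Extra Field 3';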

View 2 Replies View Related

Designing Reusable Column Transformations?

Jul 27, 2007



We receive thousands of files every week from various clients, and we attempt to clean the columns using the same technique over and over so the data is consistent. The problem is I don't see a way to reuse complex column transformations in different packages. I would hate to have to go change every package if we change the rules for cleaning a column.

So #1: Can you create some kind of script or .NET function that cleans a column and reuse it in multiple packages (or even in the same package)?

#2: Is it possible to call functions from the Derived Column expression builder?

Thanks!

View 3 Replies View Related

Allowing Transformations When Creating Publication For Replication

Dec 2, 2003

I am at my wits' end here. For replication, Books Online clearly states:

"The option to allow transformations is set at the time you create a publication"

However, I cannot find any options that allow me to do this in the Create Publication Wizard.

Once the Publication has been created I see in the Properties in the Subscription Options tab that "Use DTS to transform data before distributing it to a Subscriber" is set to No and there is no way to change it.

Where am I going wrong?

View 1 Replies View Related

Error Testing Transformations In SQL7 DTS Package

Mar 8, 2001

In SQL 7 DTS I'm creating a Data Driven Query Tasks. When I get to writing the transformations in VB Script and try using the Test button to test the script, I get the following message:



Error Source: Microsoft Data Transformation Services Flat File Rowset
Provider
Error opening datafile. The system cannot find the file specified.

The documentation says:

[The transformation is tested] by executing it against a part of the
source data and copying the results to a temporary text file, for
preview purposes.

I can find no place to specify what temporary file is used for the test result and, because it is temporary, have assumed the system creates and
deletes it as needed.

Is this true?

I get this test error for every transformation, even those that work correctly when the DTS package is actually executed.

What can I do to get the transformations to test properly?

Any help or hints will be greatly appreciated.



===============================

William J Brown
Systems Analyst
College of Human Medicine
A114 East Fee Hall
Michigan State University
East Lansing, MI 48824
Voice: (517) 432-7490
Fax: (517) 355-0342
Email: brownwj@msu.edu

View 3 Replies View Related

PIVOT Operator For Variable Number Of Transformations

Feb 19, 2007

Hi, I'm trying to port a pivot query from Access to SQL Server.
I'm trying this query:

SELECT IDMerce,
       [1] AS [Department-1], [2] AS [Department-2],
       [3] AS [Department-3], [4] AS [Department-4]
FROM (SELECT IDMerce, Pezzi, IDMagazzino
      FROM Disponibilita) p
PIVOT (SUM(Pezzi) FOR IDMagazzino IN ([1], [2], [3], [4])) AS pvt


This works, but in my case I don't know in advance how many pivot columns I need. Is there a solution?
Thanks
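The usual workaround is dynamic SQL: build the IN list at run time from the distinct IDMagazzino values, then execute the pivot with sp_executesql. A sketch (the column headers will default to the raw values rather than "Department-n"):

DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

SELECT @cols = COALESCE(@cols + ', ', '')
             + QUOTENAME(CAST(IDMagazzino AS NVARCHAR(10)))
FROM (SELECT DISTINCT IDMagazzino FROM Disponibilita) AS d;

SET @sql = N'SELECT IDMerce, ' + @cols + N'
FROM (SELECT IDMerce, Pezzi, IDMagazzino FROM Disponibilita) p
PIVOT (SUM(Pezzi) FOR IDMagazzino IN (' + @cols + N')) AS pvt;';

EXEC sp_executesql @sql;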

View 1 Replies View Related






