There are many columns in a table (Product) that reference one lookup table (PropertyType).
In the table Product, the columns Property1, Property2, Property3 all contain a numerical value that references PropertyType.PropertyTypeId
How do I select a Product so I get all rows from Product and also the PropertyType that corresponds to Product.Property1, Product.Property2, and Product.Property3?
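The usual approach is to join the lookup table once per referencing column, under a different alias each time; a minimal sketch (PropertyTypeName is an assumed column, and LEFT JOINs guard against NULL property columns):

SELECT p.*,
       pt1.PropertyTypeName AS Property1Type,  -- PropertyTypeName is an assumed column name
       pt2.PropertyTypeName AS Property2Type,
       pt3.PropertyTypeName AS Property3Type
FROM Product p
LEFT JOIN PropertyType pt1 ON pt1.PropertyTypeId = p.Property1  -- one alias per
LEFT JOIN PropertyType pt2 ON pt2.PropertyTypeId = p.Property2  -- referencing column
LEFT JOIN PropertyType pt3 ON pt3.PropertyTypeId = p.Property3;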
I have a common requirement (when I'm processing data rows from an input file) to perform some data manipulation in script then look up a value from a reference table and perform some further data manipulation depending on whether a matching value was found in the lookup table or not.
For example, say I'm processing Customer data rows and the first "word" (/token) of the FullName column might be a title or the title could be missing and it might be a forename or initial instead. I want to check the first word of this FullName column to see if it matches any valid title values in a ReferenceTitles lookup table. If I find a match I want to set my Title column to the value from the ReferenceTitles lookup table, otherwise I want to set it to, say, an empty string. Then I want to process the rest of the FullName column tokens differently depending on whether or not a match was found.
It seems very messy to start coding a script transformation and then have to use a lookup transformation combined with a script transformation on the error output, followed by a union and a sort, and finally a further script transformation (especially as I would like to be able to use variables from the first script in the later processing)...
So what I'm wondering is: Is there an easy/clean way to perform a database lookup (using cached values) from a script so that I can achieve all the above from within a single script component?
In BOL it says: "The Lookup transformation performs an equi-join between values in the transformation input and values in the reference dataset. Using an equi-join means that each row in the transformation input must match at least one row from the reference dataset. If there is no matching entry in the reference dataset, no join occurs and no values are returned from the reference dataset. This is an error, and the transformation fails, unless it is configured to ignore errors or redirect error rows to the error output. "
I have a lookup transformation which is supposed to find a match on two fields in the reference dataset (a table in my case), but strangely, when I execute my package and the reference table is empty, the lookup still finds a match for each row of my input dataset.
Does anyone have an idea why? I couldn't find anything about that in BOL.
So I have been trying to get my SQL query to work for a large database that I have. I have (let's say) two tables, Table_One and Table_Two. Table_One has three columns: Type, Animal and Test_ID, and Table_Two has two columns: Test_Name and Test_ID. An example with values is below:
In Table_One all types come under one column (Type) and the values for all types (Mammal, Fish, Bird, Reptile) come under another column (Animal). Table_One and Table_Two can be linked by Test_ID.
I am trying to create a table such as shown below:
This should be my final table. The approach I am currently using is to make multiple instances of Table_One and use joins to form this final table, so the columns Bird, Reptile, Mammal and Fish all come from different copies of Table_One.
For example:
Select Test_Name AS 'Test_Name',
       Table_Bird.Animal AS 'Birds',
       Table_Mammal.Animal AS 'Mammal',
       Table_Reptile.Animal AS 'Reptile',
       Table_Fish.Animal AS 'Fish'
From Table_One
[Code] .....
The problem with this query is that it only works when the entries for Birds, Mammals, Reptiles and Fish all have a value. If one field is empty, as for Test_Two or Test_Three, it doesn't return that record. I used OR instead of AND in the WHERE clause, but that didn't work either.
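One common alternative that copes with the missing rows is to join Table_One just once and spread the types with conditional aggregation, so a test still comes back even when one of the animal types is absent; a sketch using the tables from the post:

SELECT t2.Test_Name,
       MAX(CASE WHEN t1.Type = 'Bird'    THEN t1.Animal END) AS Bird,
       MAX(CASE WHEN t1.Type = 'Reptile' THEN t1.Animal END) AS Reptile,
       MAX(CASE WHEN t1.Type = 'Mammal'  THEN t1.Animal END) AS Mammal,
       MAX(CASE WHEN t1.Type = 'Fish'    THEN t1.Animal END) AS Fish
FROM Table_Two t2
LEFT JOIN Table_One t1 ON t1.Test_ID = t2.Test_ID  -- LEFT JOIN keeps tests with no animals
GROUP BY t2.Test_Name;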
My source has 2.2 million records and I'm performing an incremental load. In the lookup transformation I used the destination table as the reference, using full cache mode. The first time, the package executed successfully, but when I executed it a second time, the package suddenly hung while running. I then truncated the data in the destination table and restarted the SQL Server services. After doing all this I executed the package again and it worked, but on the second execution the package hung again. I have a laptop with 8GB RAM and an i5 2.5 GHz processor.
I need to write a query that pulls the respective fields from multiple reference tables. For example, here are the tables:

Main Table:
InfoID
-------
Team1
Player1

Table: Player_ref
Player    Team_Player_ref
-------   ---------------
Player1   John doh

Table: Team_ref
Team    Team_Player_ref
-----   ---------------
Team1   My Team
Ideal result table from the query:

InfoID     Count
--------   -----
John Doh   1
My Team    2
Any suggestion to create the ideal results table from a query? Normally, if it only referenced one table, I would do an inner join; however, since there are two reference tables, an inner join alone wouldn't work. A proposed suggestion would certainly be nice. Thanks in advance. --daydream (stuck at the current problem)
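A hedged sketch of one way to get there: LEFT JOIN both reference tables, pick whichever one matched with COALESCE, and count per referenced name (the main table's name here, MainTable, is an assumption):

SELECT COALESCE(t.Team_Player_ref, p.Team_Player_ref) AS InfoID,
       COUNT(*) AS [Count]
FROM MainTable m                                  -- hypothetical name for "Main Table"
LEFT JOIN Player_ref p ON p.Player = m.InfoID     -- matches the player-style values
LEFT JOIN Team_ref   t ON t.Team   = m.InfoID     -- matches the team-style values
GROUP BY COALESCE(t.Team_Player_ref, p.Team_Player_ref);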
I'm creating a new Integration Services project that copies data out of a SQL Server 7 server, transforms it, and places the data on a SQL Server 2005 (SP2) server. When defining a lookup transformation, if I specify an OLE DB connection to my server running SQL Server 7 as the reference table, as soon as I click on the Columns tab, Visual Studio closes/crashes and dumps me to Windows. I don't get an error message. If however I specify a connection to a server running SQL Server 2000 or SQL Server 2005, there are no problems.
Is this supposed to happen?
My workstation is running Windows XP Pro SP2, Visual Studio 2005 Pro.
Microsoft SQL Server Integration Services Designer Version 9.00.1399.00
The server that doesn't work as a reference table source is running Windows 2000 Server SP4 and SQL Server 7.00.623.
I have a business need to create a report by querying data from an MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 out of the 6 contain JSON data, so the users requested to have those 2 JSON columns parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I use the function against the first JSON column in my query (with CROSS APPLY), I get the right data back, but I get 8 additional rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
I updated my query (see below) to call the function twice within the CROSS APPLY clause, and I got this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. Second question: how do I get around this error?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B, fnSplitJson2(A.ITEM6, NULL) C
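For what it's worth, the comma between the two function calls turns the second call into an old-style join that can no longer see the alias A, which is what produces the binding error; repeating the CROSS APPLY keyword fixes it, and conditional aggregation can then collapse the key/value rows back into columns. A minimal sketch, assuming fnSplitJson2 returns Name/Value columns (that part is a guess):

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4,
       MAX(CASE WHEN B.Name = 'KeyB1' THEN B.Value END) AS KeyB1,  -- repeat one line per
       MAX(CASE WHEN C.Name = 'KeyC1' THEN C.Value END) AS KeyC1   -- key/value pair (8 + 7)
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B  -- each call gets its own CROSS APPLY clause,
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C  -- so C can still reference the alias A
GROUP BY A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4;

Expanded to all 15 pairs, that would also give the single row with 19 columns from the first question.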
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
I am facing a problem writing a stored procedure for multiple search criteria.

I am trying to write the query in the procedure as follows:
Select * from Car
where (Price = @Price1 or Price = @Price2 or Price = @Price3)
  and (Manufacture = @Manufacture1 or Manufacture = @Manufacture2 or Manufacture = @Manufacture3)
  and (Model = @Model1 or Model = @Model2 or Model = @Model3)
  and (City = @City1 or City = @City2 or City = @City3)
I am not sure of the query, but I am trying to get the list of cars filtered based on the user input.
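Assuming all three values of each parameter group are always supplied, the parenthesized version above can be tightened with IN; if some parameters may be NULL, each group usually becomes an optional filter like (Price IN (@Price1, @Price2, @Price3) OR @Price1 IS NULL) instead. A sketch:

SELECT *
FROM Car
WHERE Price       IN (@Price1, @Price2, @Price3)                      -- each IN replaces
  AND Manufacture IN (@Manufacture1, @Manufacture2, @Manufacture3)    -- one OR group
  AND Model       IN (@Model1, @Model2, @Model3)
  AND City        IN (@City1, @City2, @City3);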
Say you have a fact table with a few columns that all reference the same key column in a dimension table; how would you write a view to return the information for those keys?
USE MyTestDB;
GO
SET NOCOUNT ON;
IF OBJECT_ID('dbo.FactTemp', 'U') IS NOT NULL
    DROP TABLE dbo.FactTemp;
[Code] ....
I'm using very small data at the moment, and the query plan and statistics don't really indicate which way is better.
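Since the code above is truncated, here is a hedged sketch of the second common shape for comparison, with all table and column names hypothetical: unpivot the fact keys, join the dimension once, and re-pivot. Comparing this plan against the usual one-aliased-join-per-key view on realistic volumes is normally what settles "which way".

SELECT f.FactID,
       MAX(CASE WHEN k.KeyRole = 'Key1' THEN d.DimName END) AS Key1Name,
       MAX(CASE WHEN k.KeyRole = 'Key2' THEN d.DimName END) AS Key2Name,
       MAX(CASE WHEN k.KeyRole = 'Key3' THEN d.DimName END) AS Key3Name
FROM dbo.FactTemp AS f
CROSS APPLY (VALUES (N'Key1', f.Key1),       -- unpivot the three role-playing keys
                    (N'Key2', f.Key2),       -- (all names here are assumptions)
                    (N'Key3', f.Key3)) AS k(KeyRole, KeyValue)
LEFT JOIN dbo.DimTemp AS d ON d.DimKey = k.KeyValue
GROUP BY f.FactID;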
I have a Parent table (Parentid, LastName, FirstName) and a Kids table (Parentid, KidName, Age, Grade, Gender, KidTypeID); each parent can have multiple kids. I need the result as below:
I'm developing an ETL solution that needs to look for duplicate records using a fuzzy lookup. If the lookup table has an identity column, I get the following error. I can get this to work on a local desktop instance of SQL Server, but not on my development server or production box. Any help is greatly appreciated.
Also, I've stripped down the incoming data for the lookup to a very simplified version of what I'd like to use, but if I can't get it to work then I can't add additional columns to match with.
It's like it's trying to add its own Id to the temp table created for the tokens:
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Multiple identity columns specified for table '##FLRef_070403_10:09:36_3532_aeac56a4-8bc0-4ff4-ac41-0984e293261a'. Only one identity column per table is allowed.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Database name 'tempdb' ignored, referencing object in tempdb.".
Error: 0xC004701A at Move Clean Records into Clean Enrollment, DTS.Pipeline: component "Fuzzy Lookup" (300) failed the pre-execute phase and returned error code 0xC0202009.
I have two tables, table1 and table2. I want to check a value from table1 against 4 different columns in table2. What would be the most optimized way to do this? I came up with this SQL, but it runs forever.
select *
from table1 a
where (a.id in (select orig_id from table2 where exctn_dt >= '01-OCT-14'))
   or (a.acct_intrl_id in (select benef_id from table2 where exctn_dt >= '01-OCT-14'))
   or (a.acct_intrl_id in (select send_id from table2 where exctn_dt >= '01-OCT-14'))
   or (a.acct_intrl_id in (select rcv_id from table2 where exctn_dt >= '01-OCT-14'));
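One thing worth trying is collapsing the four subqueries into a single EXISTS, so table2 is probed once per row instead of scanned four times; whether it actually helps depends on indexes on exctn_dt and the four id columns. A sketch using the post's own names:

SELECT a.*
FROM table1 a
WHERE EXISTS (SELECT 1
              FROM table2 b
              WHERE b.exctn_dt >= '01-OCT-14'      -- date filter applied once
                AND (b.orig_id   = a.id
                     OR b.benef_id = a.acct_intrl_id
                     OR b.send_id  = a.acct_intrl_id
                     OR b.rcv_id   = a.acct_intrl_id));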
Hi, I have some tables where I import data in; lots of fields have gotten a NULL value, which the application can not handle. Now, I can replace each NULL value with '' in a column with:

update <table> set [<column>] = '' where [<column>] IS NULL

But because there are lots of columns this is pretty much work, and there are also multiple tables. Is there an easy way to replace all NULL values in all columns in a table? Thanks in advance, Bob
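One common way to script this is to generate the UPDATE statements from the metadata views with dynamic SQL; a sketch, with MyTable as a placeholder table name, restricted to nullable string columns (only those can sensibly take ''):

DECLARE @table sysname;
DECLARE @sql nvarchar(max);
SET @table = N'MyTable';   -- placeholder: substitute the real table name
SET @sql = N'';

SELECT @sql += N'UPDATE ' + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
             + N' SET ' + QUOTENAME(COLUMN_NAME)
             + N' = N'''' WHERE ' + QUOTENAME(COLUMN_NAME) + N' IS NULL;'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @table
  AND DATA_TYPE IN ('char', 'nchar', 'varchar', 'nvarchar')  -- string columns only
  AND IS_NULLABLE = 'YES';

EXEC sp_executesql @sql;

Running the same loop over a list of table names would cover the multiple-table case.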
We did some "at scale" fuzzy lookup tests today and were rather disappointed with the performance. I'm wanting to know your experience so I can set my performance expectations appropriately.
We were doing a fuzzy lookup against a lookup table with 25 million rows. Each row has 11 columns used in the fuzzy lookup, each between 10 and 100 chars. We set CopyReferenceTable=0 and MatchIndexOptions=GenerateAndPersistNewIndex and WarmCaches=true. It took about 60 minutes to build that index table, during which dtexec got up to 4.5GB of memory usage. (Is there a way to tell what % of the index table got cached in memory? Memory kept rising as each "Finished building X% of fuzzy index" progress event scrolled by, all the way up to 100% progress, when it peaked at 4.5GB.) The MaxMemoryUsage setting we left blank so it would use as much as possible on this 64-bit box with 16GB of memory (but only about 4GB was available for SSIS).
After it got done building the index table, it started flowing data through the pipeline. We saw the first buffer of ~9,000 rows get passed from the source to the fuzzy lookup transform. Six hours later it had not finished doing the fuzzy lookup on that first buffer!!! Running Profiler showed us it was firing off lots of singleton SQL queries doing lookups, as expected. So it was making progress, just very, very slowly.
We had set MinSimilarity=0.45 and Exhaustive=False. Those seemed to be reasonable settings for smaller datasets.
Does that performance seem inline with expectations? Any thoughts to improve performance?
I have a table partitioning requirement. We have 10 years of data in a table of 30 billion-plus rows on a 2005 server, and we are upgrading it to 2014. We have to keep 7 years of data. There are no keys on the table and no date column. Since it's a huge amount of data with many users, it slows the process down. We are thinking of partitioning the 7 years quarterly, but as I said there is no date column on the table; we have to use a reference table to get the date. Is there a way I can do the partitioning without adding a date column to the table? Also, will partitioning make queries faster?
I have thought of three ways to do it: 1. leave it as it is; 2. 7 years partitioned on one server; 3. 3 years partitioned on server1 and 4 years partitioned on server2 (for the 4 years, is a snapshot better?).
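For what it's worth, SQL Server can only partition on a column that physically exists in the table, so a persisted date (or a derived quarter key populated from the reference table) would have to be added; partition elimination also only speeds up queries that filter on that column, so the main win is manageability, like switching old quarters out. A minimal quarterly sketch with hypothetical names:

CREATE PARTITION FUNCTION pfQuarterly (date)
AS RANGE RIGHT FOR VALUES
    ('2008-01-01', '2008-04-01', '2008-07-01', '2008-10-01'
     /* ...one boundary per quarter across the 7-year window... */);

CREATE PARTITION SCHEME psQuarterly
AS PARTITION pfQuarterly ALL TO ([PRIMARY]);

-- The table would then be rebuilt on psQuarterly(QuarterDate), where QuarterDate
-- is the new persisted column populated from the reference table.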
Hi everyone. I am updating a table with aggregate results for multiple columns. Below is an example of how I approached this. It works fine but is pretty slow. Anyone have an idea how to increase performance? Thanks for any help.

UPDATE #MyTable
SET HireDate = (SELECT MIN(CASE WHEN Code = 'OHDATE' THEN DateChanged ELSE NULL END)
                FROM HREH
                WHERE #MyTable.HRCo = HREH.HRCo AND #MyTable.HRRef = HREH.HRRef),
    TerminationDate = (SELECT MAX(CASE WHEN Type = 'N' THEN DateChanged ELSE NULL END)
                       FROM HREH
                       WHERE #MyTable.HRCo = HREH.HRCo AND #MyTable.HRRef = HREH.HRRef),
    ReHireDate = (SELECT MAX(CASE WHEN Code = 'HIRE' THEN DateChanged ELSE NULL END)
                  FROM HREH
                  WHERE #MyTable.HRCo = HREH.HRCo AND #MyTable.HRRef = HREH.HRRef)
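One way this kind of update is commonly sped up: compute all three aggregates in a single pass over HREH and update from the grouped result, instead of running three correlated subqueries per row. A sketch with the names from the post:

UPDATE m
SET HireDate        = h.HireDate,
    TerminationDate = h.TerminationDate,
    ReHireDate      = h.ReHireDate
FROM #MyTable AS m
JOIN (SELECT HRCo, HRRef,
             MIN(CASE WHEN Code = 'OHDATE' THEN DateChanged END) AS HireDate,
             MAX(CASE WHEN Type = 'N'      THEN DateChanged END) AS TerminationDate,
             MAX(CASE WHEN Code = 'HIRE'   THEN DateChanged END) AS ReHireDate
      FROM HREH
      GROUP BY HRCo, HRRef) AS h        -- one scan of HREH for all three aggregates
  ON m.HRCo = h.HRCo AND m.HRRef = h.HRRef;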
I have a problem with my SQL statement. I am trying to insert different columns from different tables into one new table; unfortunately, my statement doesn't do this.
IF OBJECT_ID(N'Bezeichnungen') IS NOT NULL
    DROP TABLE Bezeichnungen;
GO
CREATE TABLE Bezeichnungen
(
    Artikelnummer nvarchar(18),
    Artikelbezeichnung nvarchar(80),
    Artikelgruppe nvarchar(13),
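For reference, the usual shape for this kind of statement is an INSERT ... SELECT with the source tables joined together; since the post's actual source tables are in the truncated code, the ones below (Artikel, Gruppen, GruppenID) are purely hypothetical:

INSERT INTO Bezeichnungen (Artikelnummer, Artikelbezeichnung, Artikelgruppe)
SELECT a.Artikelnummer,
       a.Artikelbezeichnung,
       g.Artikelgruppe
FROM Artikel AS a                        -- hypothetical source tables; the real ones
JOIN Gruppen AS g                        -- are in the truncated code above
  ON g.GruppenID = a.GruppenID;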
I am stuck finding a solution to transpose source data from a system, via a metadata look-up table, into a destination table. I need a method to transpose/pivot the source data into columns (which are of various data types). The data types for each column are listed in a metadata table.
Source Data Table:
Table Name: Source
SrcID   AGE   City     Date
01      32    London   01-01-2013
02      35    Lagos    02-01-2013
03      36    NY       03-01-2013
Metadata Table:
Table Name:Metadata
MetaID   Column_Name   Column_Type
11       AGE           col_integer
22       City          col_character
33       Date          col_date
Destination table:
The source data is to be loaded into the destination table (as shown below):
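The destination layout was cut off above, so this is only a guess, but a common way to transpose typed columns through a metadata table is to unpivot them with CROSS APPLY ... VALUES (SQL Server 2008 or later) and join the metadata for the type; a sketch assuming a key/value-style destination:

SELECT s.SrcID,
       m.MetaID,
       m.Column_Type,
       u.Col_Value
FROM Source AS s
CROSS APPLY (VALUES                                        -- one row per source column
    (N'AGE',  CONVERT(nvarchar(50), s.AGE)),
    (N'City', s.City),
    (N'Date', CONVERT(nvarchar(50), s.[Date], 105))        -- 105 = dd-mm-yyyy, per the sample
) AS u(Column_Name, Col_Value)
JOIN Metadata AS m ON m.Column_Name = u.Column_Name;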
I have a table with a string value where all values are separated by a space/blank. I now want to use SQL to split all the values and insert them into a different table, which will later result in deleting the old table, as soon as I have gotten all the values out of it.
Old Table:
Code: ID, StringValue
New Table:
Code: ID, Value1, Value2

Do note: Value1 is INT and Value2 is nvarchar, hence Value2 can contain spaces... I just need to split on the FIRST space, then convert index[0] to int, and store index[1] as it is.
I can split on all spaces and just select them all and re-join them, like SELECT t.val1 + ' ' + t.val2... that is, if I can't find the first space. I mean, the first 2-10 characters of the string can be an integer, but they don't have to be. Should I perhaps do it in code instead of SQL? Now I want to run a query that selects the StringValue from OldTable, splits the string by ' ' (a blank), and then inserts the parts into the new table.
Code:
SELECT CASE CHARINDEX(' ', OldTable.stringvalue, 1)
         WHEN 0 THEN OldTable.stringvalue
         ELSE SUBSTRING(OldTable.stringvalue, 1, CHARINDEX(' ', OldTable.stringvalue, 1) - 1)
       END AS FirstWord
FROM OldTable
I found an example using strange things like CHARINDEX... but the issue still remains, because the first word may or may not be an integer. If it isn't, there is no "first value", and the whole string should be passed into "value2". How do I detect whether the very first character is an integer?
Code:
DECLARE @firstDigit int;
IF ISNUMERIC(SUBSTRING(@postal, 2, 1)) = 1
    SET @firstDigit = CAST(SUBSTRING(@postal, 2, 1) AS int);
ELSE
    SET @firstDigit = -1;
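Putting the pieces together, here is a sketch that splits on the first space only and treats the first token as an INT only when it is purely numeric, otherwise passing the whole string into Value2 (table and column names from the post):

INSERT INTO NewTable (ID, Value1, Value2)
SELECT o.ID,
       CASE WHEN sp.pos > 1
                 AND LEFT(o.StringValue, sp.pos - 1) NOT LIKE '%[^0-9]%'  -- token is all digits
            THEN CAST(LEFT(o.StringValue, sp.pos - 1) AS int)
       END AS Value1,                                                     -- NULL when not numeric
       CASE WHEN sp.pos > 1
                 AND LEFT(o.StringValue, sp.pos - 1) NOT LIKE '%[^0-9]%'
            THEN SUBSTRING(o.StringValue, sp.pos + 1, LEN(o.StringValue)) -- rest after first space
            ELSE o.StringValue                                            -- whole string otherwise
       END AS Value2
FROM OldTable AS o
CROSS APPLY (SELECT CHARINDEX(' ', o.StringValue) AS pos) AS sp;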
Here is the code written so far. It pivots the columns deck and jib_in, but that's it, only those TWO: the one I put inside the aggregate function under the PIVOT function and the one I put inside QUOTENAME().
DECLARE @columns NVARCHAR(MAX), @sql NVARCHAR(MAX);
SET @columns = N'';

SELECT @columns += N', p.' + QUOTENAME(deck)
FROM (SELECT p.deck FROM dbo.report AS p GROUP BY p.deck) AS x;
[Code] ....
I need all the columns to be pivoted and shown in the pivoted table. I am very new to dynamic pivot. I have tried so many ways to add other columns, but to no avail!
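One detail that trips people up: every column that appears in the PIVOT source subquery but is neither the spreading column nor the aggregate becomes an implicit GROUP BY key, and that is usually how the extra columns get onto the pivoted result. A hedged sketch (report_date stands in for whatever identifying columns the real dbo.report has):

DECLARE @columns nvarchar(max), @sql nvarchar(max);
SET @columns = N'';

SELECT @columns += N', ' + QUOTENAME(deck)
FROM (SELECT deck FROM dbo.report GROUP BY deck) AS x;

SET @columns = STUFF(@columns, 1, 2, N'');   -- drop the leading ', '

SET @sql = N'
SELECT pvt.*
FROM (SELECT deck, jib_in, report_date FROM dbo.report) AS src  -- report_date is hypothetical;
PIVOT (MAX(jib_in) FOR deck IN (' + @columns + N')) AS pvt;';   -- it becomes a grouping column

EXEC sp_executesql @sql;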
I have multiple databases on the server, and all my databases have the tables stdVersions and stdChangeLog. The stdVersions table has a field called DatabaseVersion which stores the version of the database. The stdChangeLog table has a field called ChangedOn which stores the date of any change made in the database.
I need to write a query/stored procedure/function that will return all the database names, versions and the dates changed on. The results should look something like this:
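One hedged sketch: build a UNION ALL over every database with dynamic SQL, assuming all of them really do contain both tables (a database missing either table would make the generated batch fail), and taking MAX(ChangedOn) as "the date changed on":

DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT @sql += N'
UNION ALL
SELECT ' + QUOTENAME(name, '''') + N' AS DatabaseName, v.DatabaseVersion, c.LastChangedOn
FROM ' + QUOTENAME(name) + N'.dbo.stdVersions AS v
CROSS JOIN (SELECT MAX(ChangedOn) AS LastChangedOn
            FROM ' + QUOTENAME(name) + N'.dbo.stdChangeLog) AS c'
FROM sys.databases
WHERE database_id > 4;   -- skip the system databases

-- remove the first UNION ALL, then run the generated batch
SET @sql = STUFF(@sql, CHARINDEX(N'UNION ALL', @sql), LEN(N'UNION ALL'), N'');
EXEC sp_executesql @sql;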
Sorry for the confusing subject. Here's what I'm doing: I have a table of products. Products have N categories and subcategories. Right now it's 4, but there could be more down the line, so it needs to be extensible.

So I've created a product table, then a category table that has many categories of products, of which a product can belong to N number of these categories, and finally a ProductCategory "match" table. This is pretty straightforward, but I'm getting confused as to how to write views/sprocs to pull out rows of products that list all the product's categories as columns in a single query/view.

For example: let's say productId 1 is Cap'n Crunch cereal. It is in 4 categories: Cereal, Food for Kids, Crunchy food, and Boxed. So we have:

Product
----------------
1  Capn Crunch

Categories
-----------------
1  Cereal
2  Food for Kids
3  Crunchy food
4  Boxed

ProductCategories
------------------
1  1
1  2
1  3
1  4

How do I go about writing a query that returns a single result set for a view or data set (for use in a GridView control) where I would have the following result:

Product results
------------------------------------------------------------------------
ProductId  ProductName  Category 1  Category 2     Category 3    Category N ...
1          Capn Crunch  Cereal      Food for Kids  Crunchy food  Boxed

Am I just thinking about this all wrong? Sure seems like it.

Cheers,
Will
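A sketch of one common answer: number each product's categories and spread them with conditional aggregation. For a truly open-ended N, the column list has to be built with dynamic SQL, but the fixed-width version shows the idea (column names are assumptions):

SELECT ProductId,
       ProductName,
       MAX(CASE WHEN rn = 1 THEN CategoryName END) AS [Category 1],
       MAX(CASE WHEN rn = 2 THEN CategoryName END) AS [Category 2],
       MAX(CASE WHEN rn = 3 THEN CategoryName END) AS [Category 3],
       MAX(CASE WHEN rn = 4 THEN CategoryName END) AS [Category 4]
FROM (SELECT p.ProductId, p.ProductName, c.CategoryName,
             ROW_NUMBER() OVER (PARTITION BY p.ProductId
                                ORDER BY c.CategoryId) AS rn   -- position of each category
      FROM Product AS p
      JOIN ProductCategories AS pc ON pc.ProductId = p.ProductId
      JOIN Categories AS c ON c.CategoryId = pc.CategoryId) AS t
GROUP BY ProductId, ProductName;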
My report has 10 columns. I can explain my need with an example: assume customer id is the first column of my table. Customer id 101 has 3 departments, a1, b1 and c1, and each particular department has an employee: a1 has emp1, b1 has emp2, and c1 has emp3.

I want output like this: when clicking the + on the customer id drilldown, it displays the 3 departments a1, b1 and c1; when clicking the + on the a1 drilldown, it displays emp1, and correspondingly b1 shows emp2 and c1 shows emp3.

The above explanation covers only one column of the table; I also need this drilling procedure for the remaining columns.

I also need to display customer information weekly, daily, monthly and yearly at the bottom of the report.

Please explain which logic is used to create a drilldown report format like the above, with multiple drilldowns for all columns in a table. If anybody can give a procedure for creating such a format, it would be appreciated.
select CurrencyCode, TransactionCode, TransactionAmount,
       COUNT(TransactionCode) as [No. Of Trans]
from TransactionDetails
where CAST(CurrentTime as date) = CAST(GETDATE() as date)
group by TransactionCode, CurrencyCode, TransactionAmount
order by CurrencyCode
select CurrencyCode, TransactionCode, TransactionAmount,
       COUNT(TransactionCode) as [No. Of Trans]
from TransactionDetails
where CAST(CurrentTime as date) = CAST(GETDATE() as date)
group by TransactionCode
order by CurrencyCode
Of course the second query gives an error, but how can I get my desired result?
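If the goal is to keep CurrencyCode and TransactionAmount on each row while counting per TransactionCode, a windowed count sidesteps the GROUP BY restriction; a sketch against the post's own columns (COUNT(...) OVER needs SQL Server 2005 or later):

SELECT CurrencyCode,
       TransactionCode,
       TransactionAmount,
       COUNT(*) OVER (PARTITION BY TransactionCode) AS [No. Of Trans]  -- count per code,
FROM TransactionDetails                                                -- no GROUP BY needed
WHERE CAST(CurrentTime AS date) = CAST(GETDATE() AS date)
ORDER BY CurrencyCode;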
I have a data flow task with an OLE DB source, a derived column transformation, and an OLE DB destination. My source is a SQL command that returns some values. I have some values that I define in the derived columns, setting default values under the expression column. My question is: I also have some destination columns which, in my OLE DB destination, need another SQL command. How would I do that? Can I attach two or more OLE DB sources to one destination? How would I accomplish that? Thanks