I'm deciding whether to use a CTE or this simpler, faster approach that hijacks a system table:
SELECT s.ORDER_NUMBER, s.PRODUCT_ID, 1 AS QTY, s.VALUE / s.QTY AS VALUE
FROM @SPLITROW s
INNER JOIN master.dbo.spt_values t
    ON t.type = 'P'
   AND t.number BETWEEN 1 AND s.QTY
I just want to know whether it's okay to use system tables like this in a production environment and whether there are any pitfalls in doing so.
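For comparison, a minimal sketch of the CTE alternative, assuming SQL Server 2005 or later and a table variable shaped like the @SPLITROW in the query above (the column names and sample values are carried over from that query, not from any real data):

-- Hypothetical @SPLITROW shape matching the query above
DECLARE @SPLITROW TABLE (ORDER_NUMBER INT, PRODUCT_ID INT, QTY INT, VALUE DECIMAL(10, 2));
INSERT INTO @SPLITROW VALUES (1001, 5, 3, 30.00);

-- Recursive CTE as a numbers source instead of master.dbo.spt_values
WITH Tally (number) AS
(
    SELECT 1
    UNION ALL
    SELECT number + 1 FROM Tally WHERE number < 100   -- cap at the largest QTY expected
)
SELECT s.ORDER_NUMBER, s.PRODUCT_ID, 1 AS QTY, s.VALUE / s.QTY AS VALUE
FROM @SPLITROW s
INNER JOIN Tally t ON t.number BETWEEN 1 AND s.QTY
OPTION (MAXRECURSION 100);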
I imported all rows of my txt file using SSIS 2005 into a table. I am now trying to figure out how to split out the header, payment rows, and maintenance rows. First, some information.
An example of the table contents is here: http://www.webfound.net/split.txt. The table has just one field, of type varchar(100), because the incoming file is a fixed-length file at 100 bytes per row.
The header rows are the rows with HD in them, followed by the detail rows for that header (see http://www.webfound.net/rows.jpg).
I need to:
1) Split out the header rows into a header table
2) Split out the maintenance rows (related to the header) into a maintenance table
3) Split out the payment rows (related to the header) into a payment table
I'll need to maintain a PK/FK relationship between each header and its corresponding maintenance and payment rows in the other 2 tables.
To determine whether a detail row is a payment or a maintenance row, I need to look at characters 30-31: if they contain 'MT' it's a maintenance row, otherwise it's a payment row.
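A minimal T-SQL sketch of that row-type test, assuming the staging table and column are called ImportStaging and RawRow (both names are placeholders) and that the 'HD' marker sits in the first two characters (an assumption; the post doesn't say where it is):

-- Hypothetical staging table loaded by SSIS: one varchar(100) column per file row
CREATE TABLE ImportStaging (RawRow VARCHAR(100));

-- Header rows: 'HD' marker, assumed here to be characters 1-2
SELECT RawRow
FROM ImportStaging
WHERE SUBSTRING(RawRow, 1, 2) = 'HD';

-- Maintenance rows: characters 30-31 = 'MT'
SELECT RawRow
FROM ImportStaging
WHERE SUBSTRING(RawRow, 30, 2) = 'MT';

-- Payment rows: detail rows that are neither header nor maintenance
SELECT RawRow
FROM ImportStaging
WHERE SUBSTRING(RawRow, 1, 2) <> 'HD'
  AND SUBSTRING(RawRow, 30, 2) <> 'MT';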
I need to create a SQL statement to read the string, split it at the "," character, and insert it into individual rows. I also need to insert an ID (the same for all split values).
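A minimal sketch of one way to do this with a recursive CTE, assuming SQL Server 2005 or later; the @ID and @List variables, the sample values, and the commented-out target table are all illustrative assumptions:

DECLARE @ID INT;
DECLARE @List VARCHAR(1000);
SET @ID = 42;                        -- the ID repeated on every split row
SET @List = 'red,green,blue';

WITH Split (Item, Rest) AS
(
    SELECT
        LEFT(@List, CHARINDEX(',', @List + ',') - 1),
        STUFF(@List, 1, CHARINDEX(',', @List + ','), '')
    UNION ALL
    SELECT
        LEFT(Rest, CHARINDEX(',', Rest + ',') - 1),
        STUFF(Rest, 1, CHARINDEX(',', Rest + ','), '')
    FROM Split
    WHERE Rest <> ''
)
SELECT @ID AS ID, Item AS Value
FROM Split;
-- INSERT INTO TargetTable (ID, Value) SELECT @ID, Item FROM Split;  -- hypothetical target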
I was wondering whether there is any way to split a single row of my table into two rows in the design of my report. My table has 16 columns, and I can't design the table with all 16 across because it increases the size of my report body and causes problems when exporting. Instead, what I want to do is design a table with two header rows: the first header with the first 8 columns and the second with the remaining 8 columns.
I was wondering if anyone had a suggestion as to how to delete duplicate rows from a table. I have been doing this:
SELECT * INTO TempUsersNoRepeats FROM TempUsers2 UNION SELECT * FROM TempUsers3
This way I end up with a total of four tables (the fourth table being the original Users table), and I was hoping there was a way I could do this all within the original Users table and not have to create the three TempUsers tables.
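For what it's worth, a rough sketch of deleting duplicates in place, assuming SQL Server 2005 or later; FirstName, LastName, and Email are placeholder columns standing in for whatever defines a duplicate in the Users table:

-- Number the rows within each duplicate group, then delete all but one per group
WITH Numbered AS
(
    SELECT ROW_NUMBER() OVER
           (PARTITION BY FirstName, LastName, Email   -- columns that define a duplicate
            ORDER BY (SELECT 0)) AS rn
    FROM Users
)
DELETE FROM Numbered
WHERE rn > 1;   -- keeps one copy of each duplicate group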
To anyone that is able to help: what I am trying to do is this. I have two tables (Orders and OrderDetails), and my question is on the order details. I would like to set up a stored procedure that inserts the mail order into the Orders table and then inserts multiple OrderDetails rows within the same transaction. I also need to do this in SQL 2000. Right now I have "x" amount of variables for all columns in my Orders table and all columns in my OrderDetails table, i.e. @OColumn1, @OColumn2, @OColumn3, @ODColumn1, @ODColumn2, etc. I would like to create a stored procedure to insert into Orders and have that call another stored procedure to insert all the order details associated with that order.

The only way I can think of doing it is for the program to pass me a string of data per column for the order details and parse the string via T-SQL. I would like to get away from the string format and go with something else. If possible I would like the application to submit a single value per variable multiple times, but if I do it that way it will be running the entire SP again and again. Any suggestions on the best way to solve this would be greatly appreciated. If anyone can come up with a better way, feel free. My only requirement is that it be done in SQL. Thank you.
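For illustration, a minimal sketch of the two-procedure pattern on SQL 2000; the column names (CustomerID, OrderDate, ProductID, Qty) are made up for the example and are not from the post:

CREATE PROCEDURE dbo.InsertOrder
    @CustomerID INT,
    @OrderDate  DATETIME,
    @OrderID    INT OUTPUT
AS
BEGIN
    INSERT INTO Orders (CustomerID, OrderDate)
    VALUES (@CustomerID, @OrderDate);

    SET @OrderID = SCOPE_IDENTITY();   -- hand the new order key back to the caller
END
GO

CREATE PROCEDURE dbo.InsertOrderDetail
    @OrderID   INT,
    @ProductID INT,
    @Qty       INT
AS
BEGIN
    INSERT INTO OrderDetails (OrderID, ProductID, Qty)
    VALUES (@OrderID, @ProductID, @Qty);
END
GO

-- The application opens one transaction, calls InsertOrder once,
-- then calls InsertOrderDetail once per line item with the returned @OrderID.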
I'm trying to come up with an elegant, simple way to compare two consecutive values from the same table. For instance:

SELECT TOP 2 datavalues FROM myTable ORDER BY timestamp DESC

That gives me the two latest values. I want to test the rate of change of these values. If the top row is a 50% increase over the row below it, I'll execute some special logic. What are my options? The only ways I can think of doing this are pretty ugly. Any help is very much appreciated. Thanks! B.
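A rough sketch of one option using two variables; myTable, datavalues, and timestamp are taken from the query above, the 50% threshold from the post, and the numeric type is a guess:

DECLARE @latest   DECIMAL(18, 4);
DECLARE @previous DECIMAL(18, 4);

-- Latest value
SELECT TOP 1 @latest = datavalues
FROM myTable
ORDER BY timestamp DESC;

-- Second-latest value
SELECT TOP 1 @previous = datavalues
FROM myTable
WHERE timestamp < (SELECT MAX(timestamp) FROM myTable)
ORDER BY timestamp DESC;

IF @previous <> 0 AND @latest >= @previous * 1.5
BEGIN
    PRINT 'At least a 50% increase - run the special logic here';
END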
I have 3 tables with the following schema:

Table <Category> {
    UniqueID,
    LastDate DateTime
}
Assume the following tables with data, following the above schema:
Table Cat1 {
    1, D1
    2, D2
    3, D3
}

Table Cat2 {
    2, D4
    3, D5
    4, D6
}

Table Cat3 {
    1, D7
    3, D8
    5, D9
}
I have a Master table and the schema is as follows:

Table Master {
    UniqueId,
    Cat1 DateTime,  -- this is the same as the table name
    Cat2 DateTime,  -- this is the same as the table name
    Cat3 DateTime   -- this is the same as the table name
}
After inserting the data from all these 3 tables, I want my master table to look like this: Table Master {
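The expected result above appears to have been cut off, so the following is only a guess at the intent: one Master row per UniqueId, with each category's LastDate in the matching column. A minimal sketch under that assumption:

-- Gather every UniqueId that appears in any category table,
-- then pull each category's LastDate alongside it.
INSERT INTO Master (UniqueId, Cat1, Cat2, Cat3)
SELECT ids.UniqueID,
       c1.LastDate,
       c2.LastDate,
       c3.LastDate
FROM (SELECT UniqueID FROM Cat1
      UNION
      SELECT UniqueID FROM Cat2
      UNION
      SELECT UniqueID FROM Cat3) ids
LEFT JOIN Cat1 c1 ON c1.UniqueID = ids.UniqueID
LEFT JOIN Cat2 c2 ON c2.UniqueID = ids.UniqueID
LEFT JOIN Cat3 c3 ON c3.UniqueID = ids.UniqueID;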
I have resulting rows from a query similar to the following:
The data comes from a single table that contains only one coverage code column and one coverage date column, but the end user wants the two coverage code types and their dates combined into a single row. So the SELECT looks something like this:
SELECT [Employee ID]     = emp.employee_id,
       [Coverage Code 1] = enr.coverage_code,
       [Coverage Date 1] = enr.coverage_date,
       [Coverage Code 2] = CASE WHEN enr.product_type = 'Accident.Accident'
                                THEN enr.coverage_code
                                ELSE NULL
                           END,
[Code] ....
I basically want to merge the rows with the same Employee ID into a single row like the following:
I know I have done this before and it is probably pretty simple.
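A rough sketch of the usual pattern for collapsing those rows: group on the employee and take the non-NULL value from each conditional column. The column names and product_type test follow the SELECT above; the base table names (employee, enrollment) and the join are assumptions:

SELECT emp.employee_id AS [Employee ID],
       MAX(CASE WHEN enr.product_type <> 'Accident.Accident'
                THEN enr.coverage_code END) AS [Coverage Code 1],
       MAX(CASE WHEN enr.product_type <> 'Accident.Accident'
                THEN enr.coverage_date END) AS [Coverage Date 1],
       MAX(CASE WHEN enr.product_type = 'Accident.Accident'
                THEN enr.coverage_code END) AS [Coverage Code 2],
       MAX(CASE WHEN enr.product_type = 'Accident.Accident'
                THEN enr.coverage_date END) AS [Coverage Date 2]
FROM employee emp
JOIN enrollment enr ON enr.employee_id = emp.employee_id   -- hypothetical join
GROUP BY emp.employee_id;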
We have separate packages for loading data into 4 tables, and these 4 packages all start at the same time. Before loading the data we have to set the flag to 1, and after that we load it. Because of this, we have written an UPDATE statement in each package to set the value to 1.
Now we are getting deadlocks because all the packages hit the same table at the same time, even though each one is updating different records.
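For reference, a sketch of the kind of narrowly keyed update that tends to reduce contention here, assuming a control table named LoadControl with TableName and Flag columns (both names invented for the example):

-- Each package updates only its own control row, keyed on the table it loads,
-- and requests a row lock rather than letting the update take broader locks.
UPDATE LoadControl WITH (ROWLOCK)
SET Flag = 1
WHERE TableName = 'Cat1';   -- each package passes its own table name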
Hi, my name's John and this is my first post here at SQL Team. I've learnt a lot from the forums here but never needed to post until now. I'm pretty good at SQL, but not that good, and I think for the first time in a couple of years I have a query that I'm completely unable to create. Let me explain the setup. I've got three tables, e.g.
This shows a simple setup where the product (computer) has had the action "repaired" performed on it and the product (printer) has had the action "changed motherboard" performed on it. The query I'd like is one that selects all products that have not had certain actions performed on them, e.g. a list of products that have not had every action performed on them, such as
select productid from products where productid not in (select productid from productperformedactions....
which eliminates products that have had an action performed on them, but it also removes them for ALL actions, which is what I'm trying to avoid. All I want is a kind of cross join listing every product/action combination, then inverted, to show all products that have NOT had a given action performed on them. I hope this makes sense.
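A minimal sketch of that cross-join-then-invert idea, assuming the tables are named products, actions, and productperformedactions with productid/actionid columns (the last two names are guesses based on the snippet above):

-- Build every product/action pair, then keep only the pairs that have
-- no matching row in the performed-actions table.
SELECT p.productid, a.actionid
FROM products p
CROSS JOIN actions a
LEFT JOIN productperformedactions ppa
       ON ppa.productid = p.productid
      AND ppa.actionid  = a.actionid
WHERE ppa.productid IS NULL;   -- this product has NOT had this action performed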
Help, is something wrong with my SQL Server? I am unable to return any rows from any table in any database (user or system) on my SQL 7.0 SP2 machine. When I right-click a table in Enterprise Manager and select Design or Open Table, I get no results. Does anyone know why this is happening? It did not always happen, either. Thanks.
I have created a trigger that fires every time a new item is added to TableA. The trigger then inserts 4 rows into TableB, which has two columns (item, task type).
Each row will have the same item, but with a different task type, i.e.:
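The example appears to have been cut off, so here is a minimal sketch of such a trigger; the four task type names ('Pick', 'Pack', 'Ship', 'Invoice') and the exact column names are invented for illustration:

CREATE TRIGGER trg_TableA_Insert
ON TableA
AFTER INSERT
AS
BEGIN
    -- One row per newly inserted item for each of the four task types,
    -- so a single INSERT also covers multi-row inserts into TableA.
    INSERT INTO TableB (item, task_type)
    SELECT i.item, t.task_type
    FROM inserted i
    CROSS JOIN (SELECT 'Pick' AS task_type UNION ALL
                SELECT 'Pack' UNION ALL
                SELECT 'Ship' UNION ALL
                SELECT 'Invoice') t;
END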
I have a simple query that joins a largish fact table (3 million rows) to a view that returns 120 rows. The SKEY in the view is returned via a scalar function. The view returns instantly if queried on its own; however, when it is joined to the fact table in the simple query below, the resulting execution plan runs forever. Interestingly, if I change the INNER JOIN to a LEFT OUTER JOIN, the query returns the matched results almost instantly.
SELECT Dimension.Age_Band.[10_Year_Age_Band],
       COUNT(*)
FROM Fact.APC_Episodes
INNER JOIN Dimension.Age_Band
        ON Fact.APC_Episodes.AGE_BAND_SKEY = Age_Band.AGE_BAND_SKEY
GROUP BY Dimension.Age_Band.[10_Year_Age_Band]
I know joining to a view on a column generated by a scalar function is not a good recipe for performance. I also know I could fix this by populating a physical table from the view first, as I have already tested that, though I'm hoping not to have to go down that route.
Why does a LEFT OUTER JOIN work and not an INNER JOIN, and is there any way I can get the query optimizer to generate an execution plan that works?
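For reference, the workaround mentioned above looks roughly like this as a sketch (the temp-table name is arbitrary; columns are taken from the query above):

-- Materialise the small view first so the optimizer joins to a plain
-- 120-row table instead of re-evaluating the scalar function per fact row.
SELECT AGE_BAND_SKEY, [10_Year_Age_Band]
INTO #Age_Band
FROM Dimension.Age_Band;

SELECT ab.[10_Year_Age_Band],
       COUNT(*)
FROM Fact.APC_Episodes f
INNER JOIN #Age_Band ab
        ON f.AGE_BAND_SKEY = ab.AGE_BAND_SKEY
GROUP BY ab.[10_Year_Age_Band];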
I have an input file with fixed-width columns that I want to import into two tables. 5 of the input columns go to one table and the remaining 15 go to another table. What's a good way to do this in SSIS?
Hi. This is probably a very basic question for most people in this group: how do I split the data in a column into 2 columns? This can be done in Access with an update query, but in MS SQL Server I am not sure how. Here is an example of what I want to achieve.
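Since the example didn't come through, here is a generic sketch assuming the column holds two values separated by a single space; the table and column names (FullNames, FullName, FirstName, LastName) are all invented:

-- Split 'John Smith' style values at the first space into two columns
UPDATE FullNames
SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName) - 1),
    LastName  = SUBSTRING(FullName, CHARINDEX(' ', FullName) + 1, LEN(FullName))
WHERE CHARINDEX(' ', FullName) > 0;   -- skip rows with no separator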
I am using DTS and VBScript in DataPump tasks in order to transfer large amounts of data from text files to an SQL database.
As the database uses a normalized schema, there is often the case of inserting multiple records in a destination table from various fields of the same record of the source text file.
For example, the source record contains information about goods sold, such as date, customer, item code, item name, and total amount, and does so for a maximum of 3 goods per sale (row); it therefore has the structure:
I have tried using a DataPump task and VBScript, and I guess it has to do with the DTSTransformStat_**** constants, but none of the ones I tried seems to work.
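As an alternative to doing it in the transform script, one option is to load the wide source row into a staging table as-is and normalise it in T-SQL afterwards. A rough sketch under that assumption, with the staging and destination column names invented to match the 3-goods-per-sale layout described above:

-- UNION ALL turns the 3 repeating item groups of each staged sale into separate rows
INSERT INTO SalesItems (SaleDate, Customer, ItemCode, ItemName, Amount)
SELECT SaleDate, Customer, ItemCode1, ItemName1, Amount1 FROM SalesStaging WHERE ItemCode1 IS NOT NULL
UNION ALL
SELECT SaleDate, Customer, ItemCode2, ItemName2, Amount2 FROM SalesStaging WHERE ItemCode2 IS NOT NULL
UNION ALL
SELECT SaleDate, Customer, ItemCode3, ItemName3, Amount3 FROM SalesStaging WHERE ItemCode3 IS NOT NULL;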
I have a delimited text file with 650+ columns. The sum of the column lengths of a single row, if fully populated, exceeds 30K bytes. The "killer" fields lengthwise are the "Description" fields: if they were removed from the input file, the remaining columns would occupy about 5000 bytes, which is within the SQL maximum row length.
Can SSIS be used to create these two tables (one without the description fields, the other with those fields but arranged vertically in the table rows)?
The fundamental issue is that I cannot import a single file row into a single SQL table row, because its length could exceed the maximum byte count for a row.
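For what it's worth, the "vertical" table described above might look something like this as a sketch; all table and column names are placeholders:

-- Narrow table: one row per source row, holding the ~5000 bytes of non-description columns
CREATE TABLE RecordCore (
    RecordID INT NOT NULL PRIMARY KEY
    -- ... the remaining non-description columns go here ...
);

-- Vertical table: one row per description field per source row,
-- so no single row approaches the per-row byte limit
CREATE TABLE RecordDescriptions (
    RecordID        INT NOT NULL REFERENCES RecordCore (RecordID),
    DescriptionName VARCHAR(128) NOT NULL,
    DescriptionText VARCHAR(8000) NULL,
    CONSTRAINT PK_RecordDescriptions PRIMARY KEY (RecordID, DescriptionName)
);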
I have a matrix with a single row. The number of columns varies and sometimes reaches 10-15, so the matrix spills onto the next page and inserts blank pages when exported to PDF. I need the column width to be at least 2.5 cm. I need to break the matrix onto the next row, say after the 6th or 8th column, instead of it going to the next page. I tried the example by Chris Hays at http://blogs.msdn.com/chrishays/archive/2004/07/23/HorizontalTables.aspx, but it shows the matrix for each Row Group, which doesn't meet my requirement.
I had a workaround that worked by putting two matrices one below the other and filtering the columns shown in each matrix.
If anybody has faced this issue or solved it, kindly reply; it would be very helpful to me.
One more question: can I get the column number of the matrix?