Transact SQL :: Huge Performance Difference For Same Select Between Environments

Jun 22, 2015

I have encountered a problem with a specific set of tables. The same select yields slightly differing execution plans in two different environments (instances), but the slight variation seems to contain huge differences in stats. I don't know the significance of these stats. The two tables have the exact same indices.

This is the select statement:

SELECT 'xx' FROM DUKS.dbo.Profiler
WHERE DNA_Løbenummer IN
(SELECT DNA_Løbenummer FROM DUKS.dbo.Effektregister
WHERE Sagsnummer = '2015-00002')
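
If the plans differ mainly because the statistics differ, a first step (just a sketch, assuming you can update stats on both instances; the index name below is hypothetical) could be to refresh and then compare the statistics on the tables involved:

USE DUKS;
-- Refresh statistics on both tables in both environments (full scan so the results are comparable)
UPDATE STATISTICS dbo.Effektregister WITH FULLSCAN;
UPDATE STATISTICS dbo.Profiler WITH FULLSCAN;

-- Inspect the histogram of the index used for Sagsnummer;
-- 'IX_Effektregister_Sagsnummer' is a placeholder, replace with the real index name
DBCC SHOW_STATISTICS ('dbo.Effektregister', 'IX_Effektregister_Sagsnummer');

Comparing the histograms from the two instances should show whether the row estimates for Sagsnummer = '2015-00002' really diverge.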

View 17 Replies



Huge Performance Difference When Running UDF In Workstation Vs Server

Dec 13, 2007

Hi,

I created a CLR UDF that returns a large number of rows. When I run it from my VPC (XP, SQL Server Developer Edition and 1 GB memory) it takes approx 2 min and 30 secs to start displaying the rows (using Management Studio). When I run the same query on our development server (Win 2003, SQL Server Enterprise Edition, 8 GB memory and 8 processors) it takes more than 15 min to start displaying the results. Does anybody have an idea why this is happening?
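
One way to narrow it down (a sketch only, not a diagnosis; dbo.MyClrUdf and its argument are placeholders for the actual CLR table-valued function) is to capture timing and I/O for the same call on both machines and compare where the time goes:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- placeholder call: substitute the real CLR function and parameters
SELECT * FROM dbo.MyClrUdf(1);

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

If the server shows far more elapsed time than CPU time, the time is going to waits (I/O, blocking) rather than computation; if CPU dominates, the plan or the function itself behaves differently on the two machines.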

Thanks in advance

View 2 Replies View Related

Significant Performance Difference If SELECT Command Contains User

Oct 25, 2006

SQL 2000 connection string:

user id=MyUserName;password=MyPassword;initial catalog=MyDB;server=MyServer;Connect Timeout=30

This SELECT statement returns its 10 results nearly instantly:

SELECT * FROM MyTable

Ditto from above, but completes in 30-40 seconds:

SELECT * FROM [dbo].[MyTable]

Ditto from above, but completes nearly instantly:

SELECT TOP 1000 * FROM [dbo].[MyTable]

Obviously I have stopped using the [dbo] syntax in my SqlCommands (SELECTs and EXECUTEs) but would still like to know why this is.

vr, Rich

View 3 Replies View Related

Stored Procedure Vs SQL Huge Difference In Execution Time

Jul 23, 2005

I have a Stored Procedure (SP) that creates the data required for a report that I show on a web page. The SP does all the work and just returns back a results set that I dump in an ASP.NET DataGrid. The SP takes a product area and a start and end date as parameters.

Here are the basics of the SP:

1. Create a temp table to store report results; all columns that will be needed are created at this point.
2. Select products and general product data into the temp table.
3. Create a cursor that loops through all the products in the temp table, running a more complex query with each individual product.
4. The results of that query are updated on the temp table based on the current product of the cursor.
5. A complex "totals" query is run and the results from that are inserted into the temp table as the last 3 rows.

In all we are talking about 120 rows in the temp table with 8 columns that are mostly numbers.

I originally wrote this report SP about a month ago and it worked fine, ran in about 10 - 20 seconds based on server traffic and amount of data in the temp table. For the example I'm running there are the 120 products.

Just yesterday the SP started timing out, and when I ran the SP manually from Query Analyzer (QA) (exec SP_NAME ...) with the same parameters as it was getting in the code, it took 6 minutes to complete. I was floored. I immediately copied the SQL out of the SP and pasted it into another QA window, changed the variables to be hard-coded values and ran it. It completed in 10 seconds.

I'm really confused now. I ran a Profiler trace on the two when I ran them again. The SQL code in QA executed again in ~10 seconds with 65,000 reads. When the SP finished some 6 minutes later it had completed with the right results, but it needed 150,000,000 reads to do its job.

How can the exact same SQL code produce such different results (time, disk reads) based on whether it's in an SP or just run from QA, yet still give me the exact same output? The reports both look correct and have the same number of rows.

I asked my Sys Admin if he had done anything to anything and he said no.

I've been reading about recompiles and temp table indexes and all kinds of other stuff that could possibly be affecting it but have gotten nowhere.

Any ideas are appreciated.
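
This pattern (fast with hard-coded values in QA, slow with parameters inside the procedure) is what is usually described as parameter sniffing: the cached plan was compiled for one set of sniffed parameter values and does not suit the values now being passed. One common workaround on SQL Server 2000 is to copy the parameters into local variables inside the procedure, so the optimizer falls back to average-density estimates; a minimal sketch with hypothetical procedure and parameter names:

CREATE PROCEDURE dbo.usp_ProductAreaReport   -- hypothetical name
    @ProductArea varchar(50),
    @StartDate   datetime,
    @EndDate     datetime
AS
BEGIN
    -- Copy parameters to local variables so the cached plan is not built
    -- around one specific set of sniffed values
    DECLARE @Area varchar(50), @From datetime, @To datetime
    SELECT  @Area = @ProductArea, @From = @StartDate, @To = @EndDate

    -- ... build the temp table and run the report queries using @Area, @From and @To ...
END

Creating the procedure WITH RECOMPILE (or dropping the cached plan) is another way to test whether a stale plan is the cause.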

View 5 Replies View Related

DB Engine :: Huge Difference Between Estimated And Actual Rows

Aug 21, 2015

There is a stored procedure that uses a linked server. As we will be migrating to the Amazon cloud, our architect instructed us not to replace the linked server with OPENQUERY.
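
To see where the estimates go wrong (a sketch only; the four-part name below is a placeholder for the real linked-server table), one option is to run the statement with the profile output enabled and compare the Rows and EstimateRows columns for the remote operators:

SET STATISTICS PROFILE ON;

-- Run the statement the procedure executes against the linked server
SELECT *
FROM   LinkedSrv.RemoteDb.dbo.SomeTable    -- placeholder four-part name
WHERE  SomeColumn = 'x';

SET STATISTICS PROFILE OFF;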

View 8 Replies View Related

Huge Performance Issue

Jan 23, 2007

I have a medical DB that loads 150,000 transactions per month. Each month, I load the transactions into a table for the current year. I also have to update records for prior months based on current month information.

For example, out of 186,000 dump records, 150,000 will be loaded into the main table and 36,000 will be used to update records already loaded into the main table.

The tables have 90 columns, and I have a clustered PK using [Soc_Sec_Number] & [Month] & [Row Index]. I need the row index counter (like AutoNumber in MS Access) because I can have multiple transactions per month for the same Soc Sec Number.

===========================================================

My steps are:

1) Load 150,000 records into the main table (for December, this makes the table have 1,800,000 rows).
2) Run queries for the remaining 36,000 rows to update records already loaded into the table containing the 1,800,000 rows.
3) The 36,000 rows have to be split depending upon the update type code, so I am actually running 6 queries using 6,000 rows each against the 1,800,000 records.

The update queries are using inner joins with [Soc Sec #] and [Date], part of my composite Primary Key on both tables.
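
For reference, a sketch of the set-based shape such an update usually takes (the staging table name and updated column are hypothetical); having the composite key indexed on both sides of the join is what keeps each of the six runs from scanning the 1.8 million-row table:

-- dbo.MonthlyDump is a hypothetical staging table holding the 36,000 update rows
UPDATE m
SET    m.Some_Column = d.Some_Column          -- the column(s) driven by the update type
FROM   dbo.MainTable AS m
INNER JOIN dbo.MonthlyDump AS d
        ON  m.Soc_Sec_Number = d.Soc_Sec_Number
        AND m.[Month]        = d.[Month]
WHERE  d.Update_Type_Code = 1;                -- repeat (or parameterize) per update type code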

=============================================================
Problem

This process takes forever, about 4 hours per monthly update. As the months go on, the main table gets larger and the time increases. It took almost 24 hrs to get from January 2004 to June 2004.

I am running SQL Server on my PC, no separate workstation. My PC has 2.8 GHz with about 1 GB of RAM. Could my PC specs be too low? I noticed that Task Manager shows sqlservr.exe using over 657,000 mb of RAM when running.

I also ran a simple Select MAX(Soc_Sec_Number) query that took over 5 minutes. This is way too long, especially since Soc_Sec_Number is part of the composite PK.

Could my queries actually take that long, or are my PC specs too low? My PC seemed to freeze after the June update. Any help appreciated.

View 17 Replies View Related

Performance Issue With Huge Data

Mar 20, 2007

Hello All,

I am using SSIS to transfer data between two SQL Servers (2000). There is no transformation involved, as the source and destination table structures are the same. Even then, the package execution takes a lot of time.

The data in the tables is of the order of 66,000,000 rows, and we were required to kill the package execution after it took more than 24 hours. The CPU usage was more than 13,000 s and disk I/O was well above 330,000,000. I am new to the details of SSIS. Can anyone please tell me why the package has become so resource hungry?



Thanks in advance,

Atul

View 3 Replies View Related

Huge Performance Loss When Using Variables

Oct 26, 2007

I was trying to performance tune a query and came upon this weird issue (weird to me at least, possibly because I am a newbie :-)).

When I run the query mentioned below, I get the results in 1 second, while the same query, if I declare a variable for the value I am comparing, takes 40 seconds. I am on SQL Server 2000. Both queries are being run in the same DB on the same tables, and the only difference between the two is that the date has been declared as a variable. I am able to consistently reproduce this. I tried casting the variable, declaring it as varchar, and replacing the date with getdate() in both places, but the moment I use the variable, the performance goes for a toss. Has anyone come across any similar issues?
TIA
Callista

Query 1
select ae.Col1, ae.Col2, ae.Col3, ae.Col4, ge.Col4, c.Col5, 0
from table1 ae WITH (INDEX=ind_Col3 NOLOCK), table2 ge with (nolock), table3 c with (nolock)
where ae.Col1 = ge.Col1
AND ae.Col4 = c.Col1
AND ae.Col3 > '2007-10-25 14:18:12.380'
and ae.Col6 is not null

Query 2
declare @MyVariable datetime
select @MyVariable = '2007-10-25 14:18:12.380'
select ae.Col1, ae.Col2, ae.Col3, ae.Col4, ge.Col4, c.Col5, 0
from table1 ae WITH (INDEX=ind_Col3 NOLOCK), table2 ge with (nolock), table3 c with (nolock)
where ae.Col1 = ge.Col1
AND ae.Col4 = c.Col1
AND ae.Col3 > @MyVariable
and ae.Col6 is not null
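
One workaround that sometimes restores the fast plan (a sketch, not guaranteed on SQL Server 2000) is to run the statement through sp_executesql with the date as a parameter, so the optimizer can see a concrete value at compile time instead of an opaque local variable:

DECLARE @MyVariable datetime
SELECT @MyVariable = '2007-10-25 14:18:12.380'

EXEC sp_executesql
    N'select ae.Col1, ae.Col2, ae.Col3, ae.Col4, ge.Col4, c.Col5, 0
      from table1 ae with (nolock), table2 ge with (nolock), table3 c with (nolock)
      where ae.Col1 = ge.Col1
        AND ae.Col4 = c.Col1
        AND ae.Col3 > @p_date
        and ae.Col6 is not null',
    N'@p_date datetime',
    @p_date = @MyVariable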

View 5 Replies View Related

Transact SQL :: Backup / Restore Tools Which Can Restore Multiple Environments

Jun 25, 2015

I am looking for a SQL backup/restore tool which can restore multiple environments. Here are the high-level requirements.

1.  We have 4 DBs, ranging from 1 TB to 1.5 TB each. When we restore to QA, DEV, or Staging, we usually restore all 4 of them.
2.  I am looking to complete the restore of the 4 DBs within 1 - 2 hours.

I am evaluating the Dephix software but the setup is very complex and it has given us a lot of issues with Windows Authentication and failures in the middle of the backup. I used Guess Software many years ago but can't find it on the web site any more. Speed is very important for us; I mean completing the restore as fast as possible. We are on SQL 2012 and SQL 2008 R2. We are currently using NETAPP technology and I have the Redgate backup tool, but I am mainly looking for a fast restore process.

View 4 Replies View Related

Transact SQL :: Append A Column To The Huge Transaction Table

Oct 18, 2015

I want to append a column to the transaction table (60 million records in it).

Our transaction table is being used in production, but I have a very small window of time.

Instead of ALTER TABLE (if we use ALTER, take a backup of the table and do the processing, it will take more time), is there any way to append the column to the transaction table?
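
If the new column can be NULL and has no default, ALTER TABLE ... ADD is normally a metadata-only change and does not rewrite the 60 million rows; a minimal sketch with hypothetical names:

-- Adding a NULLable column without a default is a quick metadata change;
-- existing rows are not touched
ALTER TABLE dbo.TransactionTable
    ADD NewColumn int NULL;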

View 8 Replies View Related

Transact SQL :: Table Locking Happening When Removing Huge Data

May 14, 2015

declare @error int, @rowcount int
select @rowcount = COUNT(1) FROM  STG_BCDR;
while @rowcount > 0
begin
 BEGIN TRAN Deletion

[code]....

With the above code I try to delete records batch by batch to avoid table locking on the BCDR table. The total number of records in this BCDR table is 40,000. However, when I run the code and look at the execution plan, the BCDR table still gets a clustered index scan, which means that the locking still happens.

If I change the DELETE TOP (5000) to DELETE TOP (5), then there is a clustered index seek, which is good. The problem is that each pass only deletes the top 5 records, which means it will really take a very long time to remove the data.

How do I handle this situation in order to delete this huge amount of data without table locking happening? If table locking happens, then other users will not be able to access this table at the same time.
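
If the goal is to keep each delete small enough to avoid a table lock while still making progress, one common pattern (a sketch only; the filter column is hypothetical and an index supporting it is assumed) is to loop on @@ROWCOUNT with a moderate batch size:

DECLARE @batch int;
SET @batch = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@batch)
    FROM dbo.BCDR
    WHERE SomeFilterColumn = 'expired';   -- hypothetical delete predicate

    IF @@ROWCOUNT = 0 BREAK;              -- stop when nothing is left to delete
END

Whether each batch does a seek or a scan depends on having an index that covers the delete predicate, not only on the batch size.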

View 6 Replies View Related

Huge Select Into.. (Can I Choose Not To Log?)

Dec 12, 2000

I have a job that selects a lot of data from one database into another.

Can I choose not to log this operation? (It doesn't need to be logged, and the log fills up before it's done.)

Thanks..

type of code:

Insert into database_1
Select * from database_2
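
If the destination table can be created by the load itself, SELECT ... INTO is minimally logged, unlike INSERT ... SELECT into an existing table; a minimal sketch with hypothetical table names:

-- Minimally logged only when the destination database uses the
-- SIMPLE or BULK_LOGGED recovery model
SELECT *
INTO   Database_1.dbo.Target_Table
FROM   Database_2.dbo.Source_Table;

Note that minimal logging also requires the destination database not to be in the FULL recovery model.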

View 1 Replies View Related

IP Or DNS : Difference In Performance ?

Sep 22, 2006

Hello guys,

I would like to know if there is any meaningful difference in speed between using the DNS name ("sql.server.com") or the IP address of the SQL Server in the connection string.
The advantage of using DNS is that if there is any change of IP, I do not have to change the connection strings, but I do not want to lose speed because of the need to resolve the DNS name.

Thanks for any help!

Regards,
Fabian

my favorit hoster is ASPnix : www.aspnix.com !

View 6 Replies View Related

Is There Any Difference Performance Wise

Dec 4, 2007

Dear All,
Is there any difference, performance-wise, between using
select * from my_table
and
select mycol1,mycol2....mycoln from my_table


Actually I've read in one article that there is a big difference.
Please clear up my doubt.
thanks in advance

Vinod
Even you learn 1%, Learn it with 100% confidence.

View 8 Replies View Related

What's The Performance Difference Between WITH And A Subquery?

Aug 24, 2006

Hello Everyone,

Does anyone know if there is a performance difference between the new WITH clause (common table expression) in T-SQL and a subquery?

On a basic functionality level, they seem to perform the same function, but I was wondering if there is any performance difference between the two.
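
For a non-recursive CTE the optimizer generally expands it the same way it would an equivalent derived table, so the two forms below would normally produce the same plan (a sketch with hypothetical table and column names):

-- CTE form
WITH RecentOrders AS
(
    SELECT CustomerId, OrderDate
    FROM   dbo.Orders
    WHERE  OrderDate >= '20060101'
)
SELECT CustomerId, COUNT(*) AS OrderCount
FROM   RecentOrders
GROUP BY CustomerId;

-- Equivalent subquery (derived table) form
SELECT CustomerId, COUNT(*) AS OrderCount
FROM   (SELECT CustomerId, OrderDate
        FROM   dbo.Orders
        WHERE  OrderDate >= '20060101') AS RecentOrders
GROUP BY CustomerId;

The main practical difference shows up when the CTE is referenced more than once in the outer query; it is not materialized, so it can be evaluated more than once.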



Thanks,

Joseph

View 9 Replies View Related

How To Select Specific 2 Rows Out Of A Huge Table

Jul 24, 2013

I have a very large table, and from that table I need just 2 records: one with column1 = 'A' and one with column1 = 'B'.

I don't think I can use OR, IN or CASE operators here, because I need exactly 2 records, not more.
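
One way to get exactly one row for each of the two values (a sketch, assuming any single matching row per value is acceptable; dbo.BigTable is a placeholder name) is to take TOP (1) per value and combine them with UNION ALL:

SELECT * FROM (SELECT TOP (1) * FROM dbo.BigTable WHERE column1 = 'A') AS a
UNION ALL
SELECT * FROM (SELECT TOP (1) * FROM dbo.BigTable WHERE column1 = 'B') AS b;

Adding an ORDER BY inside each TOP (1) subquery would make the choice of row deterministic.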

View 6 Replies View Related

Select Data From Huge Fact Tables

Oct 12, 2006

Hi,

I have a situation where I have 4 tables:

1. 2 Dimensional tables(Parent), DIM1 with 50000 rows, and DIM2 with 1000 rows

2. Fact 1 with 50 columns, 25 Million rows and with FK to DIM1 and DIM2

3. Fact 2 with 40 columns, and 25 Million rows and with FK to DIM1 & DIM2 tables.

Actually fact 1 and fact 2 have the same related data, but since our Analysis cube person wanted the fact table not to have more than 50 columns, we divided the table into 2; they have the same compound key.

That said, I have a situation where I have to select all the columns in both fact tables and do a group by. I wrote the query and ran "Analyze Query in the Database Engine Tuning Advisor" for it. It gave a bunch of recommendations about statistics and indexes, which I created. When I executed the query the result came up in a matter of seconds, which was good.

In the query I had a condition having MarketName='Bridgeview' and DateID = 344 (FK of today-1).

When I wanted the data for last 30 days I changed to DateID in ( > FK of today -32 and < FK of today), the query responded and worked fine.

But when I changed the query to get MarketName='Aurora' (other than the one I used when I ran the Tuning Advisor), the result returned is an empty set. When I removed the MarketName condition, it is supposed to return all markets' data, but it returns only Bridgeview data.

I know the data is in the table for all markets, since reports are rendered from these fact tables for all of these markets(also ran queries to check the fact table data).

I am unable to point out the reason why the query behaves like this. It responds to the date change, but not to the MarketName change.

I really appreciate if anyone can help me point out the problem.

Thanks,

Venkat

View 3 Replies View Related

Performance Difference Of Numeric And String

Sep 15, 2014

I have one question: what is the performance difference between a clustered index on a numeric field and one on a string field? I know that numeric is faster, but why is it faster?

View 1 Replies View Related

Performance Difference Between Query Executed Through ASP.NET And SSMS

Sep 18, 2007

I have also posted this in microsoft.public.sqlserver.programming.

I have a query which, depending on where I run it from, will either take 10 milliseconds or 10 seconds.

The query works perfectly when run in SQL Server Management Studio... in my database of around 70,000 items it returns the results in around 10ms. It uses all my indexes and indexed views correctly.

However when I run the identical query from my ASP.NET application, it takes around 10 seconds... 1000 times longer.
Looking at it in SQL Server Profiler I can't see any difference in the query, except that from ASP.NET it needs 62,531 reads and from SSMS it needs only 318 reads. If I copy the slow-running ASP.NET query from Profiler into SSMS, it runs quickly again. The results returned are the same.

I have provided more details of the query below, but I guess my real question is: What is the best way to debug this? I'm not an expert with SQL Server, so any pointers on where I should start looking to find the difference in how the query is being executed would be a great help.

The query is of the form:

WITH RowPost AS
(
SELECT
ROW_NUMBER() OVER(ORDER BY DateCreated DESC) AS Row,
ItemId,
Title,
....
FROM
Items_View WITH(NOEXPAND)
WHERE ItemX >= @minX AND ItemX <= @maxX AND ItemY >= @minY AND ItemY <= @maxY
)
SELECT
*,
(SELECT Count(*) FROM RowPost) AS [Count]
FROM RowPost
WHERE Row >= @minRow AND Row < @maxRow

Where Items_View is an indexed view, and WITH(NOEXPAND) is being used to force it to use the indexed view (this is optimal). The line beginning "SELECT Count(*)" is to get the total number of results (without having to run the inner query a second time).

This is running against SQL Server Developer Edition.
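
One common cause of exactly this symptom is that SSMS and the .NET SqlClient use different SET options (most often ARITHABORT), so the two callers get separate cached plans that may have been compiled with different sniffed parameter values. A quick sketch to test whether that applies here:

-- Reproduce the typical ADO.NET setting in SSMS, then re-run the WITH RowPost ... query
-- from above in this same session. If it now takes ~10 seconds in SSMS too, the difference
-- is down to SET options / separate cached plans rather than anything in ASP.NET itself.
SET ARITHABORT OFF;

-- Optionally compare the options each connection is actually using
SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
FROM   sys.dm_exec_sessions
WHERE  is_user_process = 1;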

View 5 Replies View Related

Problem: Performance Difference Between MSDE And SQL Express 2005

Feb 4, 2007

Hello, all. I started out thinking my problems were elsewhere, but as I have worked through this I have isolated my problem, currently, as a difference between MSDE and SQL Express 2005 (I'll just call it Express for simplicity).

I have, to try to simplify things, put the exact same DB on two systems, one running MSDE and one running Express. Both have 2 GHz processors (one Intel, one AMD), both have a decent amount of RAM (the Intel system has 1 GB, the AMD system has 512 MB), and plenty of GB of free disk space. MSDE is running on the Intel system, Express is running on the AMD system. To keep things fair I use the exact same DBs and query on both systems. The DBs were created on MSDE, so I sp_detach_db'd them from MSDE and then sp_attach_db'd them to Express (this is how MS says to do a "side-by-side" upgrade, so it's acceptable to do so). After fighting problems with performance differences in different situations I have narrowed the problem down to this:

Executing a simple select statement with a join clause on the databases yields a difference in execution time that is quite great. Using the Express Management program I can run the query against either system (MSDE or Express; the two systems are connected via crossover cable to eliminate any network problems/issues). When running the query against the MSDE system (which is over the network) I consistently get <20 ms response times on the query. When running the query against the Express installation (which is in shared memory) I consistently get 700 ms or longer response times. Both times are for the Total Execution Time.

The query is simply this:

select db1.* from db1.owner.tablename as db1 inner join db2.owner.tablename as db2 on db1.pkey = db2.someid where db1.criteria = 3

So, give me all the columns from one table in one DB (local to the installation), matching the records in another DB (also local to the installation), where one field in the first DB matches a field in the second DB and where, in the first DB, one column value = 3.

The first table has a total record count of 630 records, of which only 12 match the where clause. The second table has a total record count of about 2,700, of which only 12 match up on the 12 out of 630.

Even though the data is the same and I've done the detach and attach, and even run sp_updatestats, the difference in execution time is remarkable, in a bad way.

Checking the execution plan reveals that both queries have the same steps, but on the MSDE system the largest consumer in the process is the Clustered Index Scan of the 630-record table (DB1 in my query example), using 85%. The next big consumer is a Clustered Index Seek against the other table (2,700 rows), using 15%.

The execution plan against the Express system reveals basically the exact opposite: 27% going to the Clustered Index Scan of the 630-record DB1, and 72% going to the Clustered Index Seek of the 2,700-record DB2.

I'm sorry to be stupid, but I have this information and I don't know what to do with it. The best that I can tell from this is that this is the source of my problems. My problems are that on my current systems that my clients use, the data is returned to them faster than they can click the mouse, while on the new system (that is, when they choose (or are forced by attrition) to move to Vista and thus Express 2005) the screen pop is like 1.5 seconds. This creates a poor user experience.

Worse, one process I allow the users to do goes from taking 14-30 seconds to over 4 minutes (all on the same machine with the same OS and version of my program, so it's not a machine or OS or my app problem).

Anyway, I hope someone can shed some light on this now that I've pared it down some.

Thanks in advance.
--HC

View 9 Replies View Related

Performance Difference: Query Window V. Stored Procedure

Oct 24, 2007

Executing the stored procedure took 45 seconds. But copying the code to a query window and setting up the variables (instead of parameters), it took 7 seconds.

In the query window, most of the processing cost (86%) is right up front in a "Distinct Sort." But in exec stored procedure, the cost for this step is 11% and the significant costs are in later "Table Scans."

I don't know why SQL Server would choose different execution plans when the code is identical in each.

Any quick insights?

Many thanks.

View 4 Replies View Related

Query Performance Difference Between Sql Server 2005 And 2000

Aug 1, 2007

Hi,

I'm having an issue with a query I'm running on Sql Server 2005. It's a semi-complex query involving an in-line table function and several left outer joins which are joined on to the results of the function call. Two of the left outer joins are then qualified in a where clause of the form where table.Col is not null; the idea is that the final result set contains data that has no match in those two tables.

The problem revolves around a where clause in the function and the last left outer join (i.e., one of the ones qualified with where not null). When I alter the where clause of the function to further restrict the result set the function returns, the query time shoots up from 1 second to roughly 2-3 minutes. Note that the time the function takes to complete is not affected. The difference in time is purely down to what the query does with the results the function provides. Also note that the change to the where clause produces a subset of the original data; it does not add any more data (it actually restricts the original result set by roughly 1000 rows).

I can bring the query speed back down again by removing the last left outer join - this join takes one of the columns from the function, and joins it to a small table - 924 rows. So it appears that this particular join is the cause of the issue, but only when using the resultset generated from the modified function query.

Now, as the thread title alludes, SQL Server 2000 and 2005 handle this differently, or appear to. When I execute this same query on a SQL 2000 machine, there are no apparent time differences, and the data that is returned is as expected. Does anyone have any suggestions as to what might be causing this and how I can fix it? I could simply return the larger result set and use managed code to filter out the rows I don't want; however, I would like to get to the bottom of this, especially if it's going to affect future queries.

Cheers,

Chris

View 4 Replies View Related

Transact SQL :: Difference Between Exist And In?

Aug 24, 2015

select count(cars.carid)
from Cars
left join RentalOrders
  on cars.CarID = RentalOrders.CarRef
where carid not in (select RentalOrders.CarRef
                    from RentalOrders)

When I wrote the above query for sofiacarrental_v2.2 it showed 30 in the result, but when I changed the query to this:

select count(cars.carid)
from Cars
left join RentalOrders
  on cars.CarID = RentalOrders.CarRef
where not exists (select RentalOrders.CarRef
                  from RentalOrders)

I replaced NOT IN with NOT EXISTS and it showed 0 in the result. Is there any difference between them in terms of usage, or did I make a mistake in the second query?
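
The two queries are not equivalent: the NOT EXISTS subquery in the second query is not correlated with the outer row, so it is false whenever RentalOrders has any rows at all, and the count comes back 0. To make it behave like the NOT IN version, the subquery has to be correlated; a sketch:

select count(cars.carid)
from Cars
where not exists (select 1
                  from RentalOrders
                  where RentalOrders.CarRef = Cars.CarID)

Separately, NOT IN and NOT EXISTS can genuinely differ when CarRef contains NULLs: NOT IN matches nothing in that case, while the correlated NOT EXISTS still returns the cars without orders.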

View 2 Replies View Related

Transact SQL :: Difference Between Data With Group By

Jul 28, 2015

I have a SQL table like this

Events time endTime
Tram 2014-11-28 12:35:50.390 2014-11-28 12:43:19.120
Re-Entry 2014-11-28 12:43:19.120 2014-11-28 12:56:07.040
Tram 2014-11-28 12:56:07.040 2014-11-28 13:15:25.060 // EndDate Before dump
Dump 2014-11-28 13:15:25.060 2014-11-28 13:50:07.233
Tram 2014-11-28 13:50:07.233 2014-11-28 13:55:17.473
Load 2014-11-28 13:55:17.473 2014-11-28 14:06:55.063
Tram 2014-11-28 14:06:55.063 2014-11-28 14:37:12.100
Dump 2014-11-28 14:37:12.100 2014-11-28 14:37:12.100

I want to calculate the difference between two dates, such as the endTime before a Dump and the time. I am expecting output like this:

ROW1   ROW2
 1    00:39:34
 2    23:12:55

You can find the details in SQL Fiddle here.

View 4 Replies View Related

Transact SQL :: Query On Running Value (difference)

Jun 18, 2015

I have a table that is loaded overnight every day, and I need to write a query for a running value difference.

List of Columns (ID, Branch ,Group, Date, Value)

ID    Branch   Group   Date                   Value
1        A           C      2015-06-01            10
2        A          C       2015-06-02            15
3        A          C       2015-06-03            25
4        A          C       2015-06-04            20
5        B          D       2015-06-01            20
6        B          D       2015-06-02            25
7        B          D       2015-06-03            10
8        B          D       2015-06-04            20

I want the Output like below with a Running value difference in comparison to previous day.

ID    Branch   Group   Date          Value    Running Value
1        A           C      2015-06-01            10         10
2        A          C       2015-06-02            15         05
3        A          C       2015-06-03            25         10
4        A          C       2015-06-04            20         -5
5        B          D       2015-06-01            20         20
6        B          D       2015-06-02            25         05
7        B          D       2015-06-03            10        -15
8        B          D       2015-06-04            20         10

Basically I need to compare with the previous day and show the difference. How can I do this in SQL Server 2008 R2?
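
SQL Server 2008 R2 has no LAG function yet, so one approach (a sketch using the column names above; the table name is hypothetical) is to number the rows per Branch/Group by Date and join each row to the previous one:

WITH Ordered AS
(
    SELECT  ID, Branch, [Group], [Date], Value,
            ROW_NUMBER() OVER (PARTITION BY Branch, [Group] ORDER BY [Date]) AS rn
    FROM    dbo.DailyValues            -- hypothetical table name
)
SELECT  cur.ID, cur.Branch, cur.[Group], cur.[Date], cur.Value,
        cur.Value - ISNULL(prev.Value, 0) AS RunningValue   -- first day shows its own value
FROM    Ordered AS cur
LEFT JOIN Ordered AS prev
        ON  prev.Branch = cur.Branch
        AND prev.[Group] = cur.[Group]
        AND prev.rn = cur.rn - 1
ORDER BY cur.Branch, cur.[Group], cur.[Date];

Against the sample data this yields 10, 5, 10, -5 for branch A and 20, 5, -15, 10 for branch B, matching the expected output.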

View 6 Replies View Related

Transact SQL :: Difference Between Index And Primary Key

Aug 10, 2015

What is the difference between the Index and the Primary Key?

View 14 Replies View Related

Transact SQL :: Difference Between Heaps And Balanced Tree?

May 28, 2015

What is the difference between Heaps and Balanced Trees? Is it true that if a table has no clustered index it is a heap?

If so , if the table has only Non clustered index, is it still a Heap?

View 4 Replies View Related

Transact SQL :: Date Difference Between Two Dates In Due And OverDue

Sep 25, 2015

I want a difference in days 

Select datediff(dd, Target_Date, Achv_Date)

Now, the checks are:
1] When Target_Date is greater than Achv_Date, the difference should be greater than 0. For example, for FileID 77608:

Select datediff(dd, '2015-09-24 00:00:00.000', '2015-09-24 10:42:32.823')

I am getting -6 but it should be 6, and I can't switch Target_Date and Achv_Date in DATEDIFF, else I will get the opposite result in the first four records. Basically, I want two columns, TAT and Status, beside achv_date, based on the difference of the two dates (see above), and I also want a result of (No. of 'Yes' in Status / No. of Files that have achv_date), i.e. result = (7/8) = 87%.
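
A sketch of one way to derive the two extra columns (assuming 'Yes' means the file was achieved on or before the target date, and dbo.Files is a placeholder table name; adjust to the real business rule):

SELECT  FileID, Target_Date, Achv_Date,
        ABS(DATEDIFF(dd, Target_Date, Achv_Date))    AS TAT,      -- always a positive day count
        CASE WHEN Achv_Date <= Target_Date THEN 'Yes'
             ELSE 'No' END                           AS Status
FROM    dbo.Files                                    -- hypothetical table name
WHERE   Achv_Date IS NOT NULL;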

View 6 Replies View Related

Transact SQL :: Difference Between Batch And Stored Procedure?

Jun 19, 2015

What is the difference between Batch and Stored Procedure?

View 5 Replies View Related

Transact SQL :: Getting Time Difference In Hours And Minutes?

Jul 10, 2015

Currently my script is using the below mentioned query to find the time difference.

DATEDIFF(HH,DATEADD(SS,hcreacion,fcreacion) ,DATEADD(SS,hcerrar,fcreacion))

If there is a 1 hr 30 minute time difference, I am getting 2 hours as output, but we need 1.30 as output. Is there any way to achieve this?
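
One way to get hours and minutes instead of whole hours (a sketch built on the same columns; the table name is hypothetical) is to take the difference in seconds and split it up yourself:

SELECT  CAST(DATEDIFF(SS, DATEADD(SS, hcreacion, fcreacion),
                          DATEADD(SS, hcerrar,  fcreacion)) / 3600 AS varchar(10))
        + '.' +
        RIGHT('0' + CAST(DATEDIFF(SS, DATEADD(SS, hcreacion, fcreacion),
                                      DATEADD(SS, hcerrar,  fcreacion)) % 3600 / 60 AS varchar(2)), 2)
        AS HoursMinutes           -- a 1 hour 30 minute difference comes out as '1.30'
FROM    dbo.MyTable;              -- hypothetical table holding fcreacion, hcreacion, hcerrar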

View 14 Replies View Related

Transact SQL :: Inventory Difference Between Rows Between Stores

May 8, 2015

I need to work out how to measure a change in inventory over various stores. My SQL 2008 R2 Express DB gets a new row of data every day from each store (about 40 stores) with a single product stock count, "OnHand", and whether there is any new stock on order. When the new stock arrives it is added to the "OnHand" count. I want to measure the delta change per day, per store. I'm stuck on how to separate the stores and how to query the delta of stock. My database looks like this:
                               
TimeStamp        Store          OnHand   OnOrder
2015/04/22 18    1 - Concord    12       0
2015/04/23 11    1 - Concord    11

[code]....

View 17 Replies View Related

Transact SQL :: Total Difference Time In Last Column?

May 23, 2015

I have the below query

DECLARE @GivenDate DATE='2015-05-13'
create table #table (LedgerID int,AttDate Date, checkedtime time,checkedtype varchar(1))
insert into #table (LedgerID,AttDate,checkedtime,checkedtype) values (1232,'2015-05-13','09:01:48.0000000','I')
insert into #table (LedgerID,AttDate,checkedtime,checkedtype) values (1232,'2015-05-13','13:05:52.0000000','O')
insert into #table (LedgerID,AttDate,checkedtime,checkedtype) values (1232,'2015-05-13','14:10:25.0000000','I')

[code]....

The result is like below.

I need a 'TotalDifferenceTime' column as a new column: (OUT1-IN1)+(OUT2-IN2).

I am using SQL Server 2008 R2.
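
If every 'I' row on a day has a matching 'O' row, one sketch is to add the out times and subtract the in times per LedgerID and AttDate, which arithmetically equals (OUT1-IN1)+(OUT2-IN2) without pairing the rows explicitly:

SELECT  LedgerID, AttDate,
        -- (OUT1 + OUT2) - (IN1 + IN2) in seconds equals (OUT1-IN1) + (OUT2-IN2)
        SUM(CASE WHEN checkedtype = 'O'
                 THEN  DATEDIFF(SS, CAST('00:00:00' AS time), checkedtime)
                 ELSE -DATEDIFF(SS, CAST('00:00:00' AS time), checkedtime) END) AS TotalSeconds
FROM    #table
GROUP BY LedgerID, AttDate;

The seconds can then be formatted as hh:mm for display; the approach assumes the in/out events come in balanced pairs within each day.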

View 8 Replies View Related

Transact SQL :: Getting Time Difference Between Same Column In Different Rows

Jun 21, 2015

I have table data like below:

id         type      timestamp
1001    start1    10:34:23:545
1001    start2    10:34:24:545
1001    end2     10:34:24:845
1001    end1     10:34:25:545
1002    start1    10:34:25:645
1002    start2    10:34:25:745
1002    end2     10:34:25:945
1002    end1     10:34:25:965

I need the result as follows

id              millisecond diff start1end1                 millisecond diff start2end2
1001    end1 timestamp-start1 timestamp    end2 timestamp-start2 timestamp
1002    end1 timestamp-start1 timestamp    end2 timestamp-start2 timestamp

SQL Server 2008 R2
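
A sketch using conditional aggregation over the rows for each id (column names taken from the sample above; the table name is hypothetical):

SELECT  id,
        DATEDIFF(MS, MAX(CASE WHEN [type] = 'start1' THEN [timestamp] END),
                     MAX(CASE WHEN [type] = 'end1'   THEN [timestamp] END)) AS [millisecond diff start1end1],
        DATEDIFF(MS, MAX(CASE WHEN [type] = 'start2' THEN [timestamp] END),
                     MAX(CASE WHEN [type] = 'end2'   THEN [timestamp] END)) AS [millisecond diff start2end2]
FROM    dbo.EventLog            -- hypothetical table name
GROUP BY id;

This assumes one start/end pair of each type per id, as in the sample data.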

View 5 Replies View Related






