Inconsistent Performance Results With Large Partitioned Tables

Dec 5, 2007

I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.

The first query completes in around 7s and performs 47,000 logical reads.


select mo.monitor_id,
       mo.site_id,
       mo.testtime,
       sum(mo.NumBytes),
       sum(mo.DNSTime),
       sum(mo.ConnectTime),
       sum(mo.FirstByteTime),
       sum(mo.ContentTime),
       sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime


The second query takes 188s to complete and performs 1.8 million logical reads. The only difference between the two is the set of monitor_id values in the where clause.


select mo.monitor_id,
       mo.site_id,
       mo.testtime,
       sum(mo.NumBytes),
       sum(mo.DNSTime),
       sum(mo.ConnectTime),
       sum(mo.FirstByteTime),
       sum(mo.ContentTime),
       sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime



The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plans, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table starting at the lowest monitor_id, testtime and site_id and running through the highest monitor_id, testtime and site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equal the values from the monitor_raw table.


My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?

One possible workaround would be to execute five individual queries, one for each monitor_id, and then union the results together (sketched below), but this would require significant code changes to my stored procs.
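For reference, a minimal sketch of that UNION ALL rewrite, showing one branch. Each branch filters on a single monitor_id, which encourages a narrow seek per value instead of one seek spanning the whole range between the lowest and highest id (the site_id list is abbreviated here; the full list from the original query would be used in each branch):

select mo.monitor_id, mo.site_id, mo.testtime,
       sum(mo.NumBytes), sum(mo.DNSTime), sum(mo.ConnectTime),
       sum(mo.FirstByteTime), sum(mo.ContentTime), sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id = 152682
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105) -- abbreviated; use the full site_id list from the original query
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime
-- union all
-- ...repeat the block above for monitor_ids 5339, 5341, 5342 and 268080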

Thanks,

Tim




SQL 2005 Full-Text Performance On Large Results

May 10, 2006

Hello everybody,
I've got a little problem which I've been trying to solve for 1-2 years, and I hoped it would go away with SQL 2005 - but that wasn't the case :(.

Situation:
I've just bought a new server containing:
SQL 2005
64-bit environment
4 GB RAM
2x AMD Opteron 2 GHz processors (dual core)
2x RAID controllers (RAID 1) containing
1.1 System
1.2 Data
2.1 Transaction Logs

I've created a full-text table containing all the search terms I need to search.
Table build:
RecID - int - Primary Key
SrcID - varchar(30)
ArticleID - int - referring to an original table
SearchField - varchar(150) - Containing the search terms
timestamp - timestamp field

Fulltext index:
RecID as Primary Key
SearchField as indexed field - Wordbreaker: Neutral (containing several languages), Accent sensitivity off

Now I've got different tables imported in here, resulting in a table size of ~13 million rows.

There is no problem with the performance on this catalog if I search for a term which isn't contained in more than 200-300 records - but if I search for a term which could occur in 200,000 or more, it gets extremely slow.

On the slow query the first records come back almost immediately, but up to 60 seconds pass before the query finishes.
The problem is that I have to sort by a ranking value which is stored externally - so I need all the results in order to sort them...

current (debugging) query:
SELECT ArticleID FROM fullTextTable AS ft INNER JOIN CONTAINSTABLE(FullTextCatalog,SearchField,'"term*"') AS ftRes ON ftRes.[KEY]=ft.idEntry

Now if I check in the Performance Monitor: as soon as I run the query, the 'Avg. Disk Read Queue Length' counter on disk D (SQL data files) jumps to the top until the query has finished. There is almost no read/write activity on C:, where the full-text catalog is stored...

If I rerun the query after it has finished once successfully, it completes in 1-2 seconds - it would be nice to get that result the first time :).

Does anybody know a workaround to this problem?
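One thing that may be worth trying, if an approximate cut-off is ever acceptable: CONTAINSTABLE accepts an optional top_n_by_rank argument that caps how many matches the full-text engine materializes, which avoids handing back hundreds of thousands of keys. A sketch against the schema described above (joining on RecID, the declared primary key; the original query joined on idEntry, which looks like the same column under another name):

SELECT ft.ArticleID, ftRes.RANK
FROM fullTextTable AS ft
INNER JOIN CONTAINSTABLE(fullTextTable, SearchField, '"term*"', 1000) AS ftRes
    ON ftRes.[KEY] = ft.RecID
-- the fourth argument limits the result to the 1000 best-ranked matches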


Query Server To Find All Partitioned Tables, Partition Name, Column Used, Partitioned By

Dec 17, 2007

I want to find a way to get partition info for all the tables in all the databases on a server: showing database name, table name, schema name, what it is partitioned by (maybe year, month, day, number, alpha), the column used in the partition, the current active partition, and the last partition (for date partitions I want to know if the partitions go until 2007, so I can add 2008).

all I've come up with so far is:






Code Block

SELECT distinct o.name From sys.partitions p
inner join sys.objects o on (o.object_id = p.object_id)
where o.type_desc = 'USER_TABLE'
and p.partition_number > 1
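As a starting point for the partition function, partitioning column, and last partition, the catalog views can be joined like this (a sketch for the current database; sys.index_columns flags the partitioning column with partition_ordinal > 0, and iterating over all databases would still need dynamic SQL or sp_MSforeachdb):

SELECT s.name AS schema_name,
       t.name AS table_name,
       pf.name AS partition_function,
       c.name AS partition_column,
       MAX(p.partition_number) AS last_partition
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.indexes i ON i.object_id = t.object_id AND i.index_id IN (0, 1)
JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
JOIN sys.index_columns ic ON ic.object_id = i.object_id
     AND ic.index_id = i.index_id AND ic.partition_ordinal > 0
JOIN sys.columns c ON c.object_id = t.object_id AND c.column_id = ic.column_id
JOIN sys.partitions p ON p.object_id = t.object_id AND p.index_id = i.index_id
GROUP BY s.name, t.name, pf.name, c.name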


Large Number Of Tables And Performance

Jan 25, 2008

Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, as the application will start on one single database server. I tried to find some resources on this on the internet but couldn't find any.

I would really appreciate if you can give me some advice and if you have any good links that would be great...

Waleed Eissa
http://www.waleedeissa.com


Performance Issues With Large Tables

Dec 5, 2007

Hi,

I have a table with over 61 million records, with a clustered index on an identity column (the primary key). Simple count queries take minutes to execute on this table (ex: select count(1) from table1). I checked the statistics on the primary key, and the histogram showed the 39-millionth record as the RANGE_HI_KEY. I updated the statistics on this column and tried re-querying, but it still took at least 5 minutes to give me the count of records in the table. Also, there were no users using the table when I queried. Inserts into this table are working fine. I have other tables in my database with 41 million records that have no such issues. Can anyone point me to the problem areas in such scenarios?
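Not an answer to why the scan itself is slow, but if a metadata-based count is acceptable, sys.dm_db_partition_stats (SQL 2005 and later) avoids touching the 61 million rows entirely - a sketch:

SELECT SUM(row_count) AS approximate_rows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.table1')  -- table name as used in the post
  AND index_id IN (0, 1)                   -- heap or clustered index only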


Thanks,
Harish


Inconsistent SQL Results

Apr 11, 2006

Hi

I have an oddity. If I run a piece of SQL:

SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
where AddedOn BETWEEN '01-jan-2006' and '01-feb-2006'
AND MailTo NOT IN ( '3', 'x')
order by MailTo

I get the results:

EmployeeNo MailTo
----------- ------
608384 1
606135 1
608689 1
609095 1
607163 1
606165 1
606472 1
608758 1
.....for 2594 rows

If I create a stored procedure with the same SQL:

CREATE PROCEDURE dbo.PPS_test
AS
SELECT EmployeeNo, MailTo
FROM ST_PPS.dbo.Employee
where AddedOn BETWEEN '01-jan-2006' and '01-feb-2006'
AND MailTo NOT IN ( '3', 'x')
order by MailTo
GO

and run it:

EXEC PPS_test

I get three extra rows:

EmployeeNo MailTo
----------- ------
607922 NULL
606481 NULL
605599 NULL
606316 1
608871 1
607427 1
608795 1
.....for 2597 rows

Does anyone know what is happening here? It appears that the clause:

MailTo NOT IN ( '3', 'x')

excludes NULL in raw SQL, but includes NULL (correctly, I think) in a stored procedure.

Chloe Crowder

The British Library
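For what it's worth: under ANSI null semantics, MailTo NOT IN ('3', 'x') can never return rows where MailTo is NULL - the predicate expands to MailTo <> '3' AND MailTo <> 'x', which evaluates to UNKNOWN for a NULL. A common suspect when a procedure behaves differently from the identical ad-hoc batch is the ANSI_NULLS setting, which is captured at CREATE PROCEDURE time and travels with the procedure. A rewrite that makes the NULL handling explicit, and behaves the same under either setting - a sketch:

select EmployeeNo, MailTo
from ST_PPS.dbo.Employee
where AddedOn between '01-jan-2006' and '01-feb-2006'
and (MailTo not in ('3', 'x') or MailTo is null) -- include the NULL rows deliberately
order by MailTo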


Inconsistent Results

May 5, 2008

Hi,

I am building a report with a recursive hierarchy for drill-down purposes. The hierarchy is built by querying a SSAS OLAP cube and defining a details grouping for the table/matrix.

Every time I run the report, one or more of the leaf members in the recursive hierarchy "jumps" up to the highest level. At first I thought this might be because the leaf's parents are not part of the returned dataset. However, the queries make sense, and the "offending" members never contain any data (while the query should return only non-empty members), which is why this is very strange behavior. Furthermore, the "offending" member differs between executions of the report, despite the parameters being exactly the same and the cube being untouched between executions.

I am actually pressing "View Report", waiting for the report to execute and when I press "View Report" again, the returned datasets seem to differ, yielding different "offending" members in the report.

When I run the queries individually in the Data-tab in BIDS, the returned datasets are always the same. Execution caching is turned off for the report.

Checking against SSRS's ExecutionLog, the RowCount for consecutive executions with the exact same parameters differ. For example, RowCount:
3094
3080
3079
3088
3087

Why does SSRS behave so inconsistently? Any tips or tricks?


Inconsistent Query Results

Apr 10, 2001

I am running SQL Server 7.0 on NT 4.0. I have created a simple query:

SELECT SUM(month1) As total_month1
FROM eac_manload
WHERE project_number = 8800
and dept IN (50,51,52,55,57,60,61,62,63,64,65,68,69)

The first time I run the query I get the correct result. Subsequent times that I run the query, the result is 1 record with a Null value. The data has not changed. If I stop the MSSQLSERVER service and restart it, I get the correct result the first time and the Null value each time thereafter. Anybody out there with any idea of what is going on here? Any help will be appreciated!!


Inconsistent Results From Stored Procudure

Jan 11, 2000

I have a stored procedure (see below) in which I would like to check whether creating an identity column and making it a primary key succeeded. I check @@error after the exec statement. This used to pick up an error if the table already had an identity column, but it has stopped doing that. Why? And if this is not the way to capture the error after the exec statement, how do I do it?


CREATE PROCEDURE rasp_test3
/*
Written by Judith Farber Abraham
this procedure loops thru sysobjects looking for user tables.
If a user table, does it have a primary key?
If not, add an identity column to table and make it a primary key
*/
--would like to have sp in main db but use from all three
@fixDB nvarchar(50) --the db to which to add PKs

AS
Declare @TableName varchar(50)
Declare @TableID int
Declare @Msg varchar(50)
Declare @ColumnName varchar(50)
Declare @IndexName varchar(50)
Declare @MyCursor nvarchar(500)
Declare @MyCursorC nvarchar(500)
Declare @CName sysname
Declare @Err int --holds @@ERROR so it survives past the next statement
--Set @Msg = '********* Finished adding Ident fields *************'
/*
do for all user tables ( xtype = u )
*/
--note the doubled single quotes around u; the original double quotes only work with QUOTED_IDENTIFIER OFF
set @MyCursor = N'Declare SysCursor cursor for select Name, ID from ' + @fixDB + '.dbo.sysobjects where xtype = ''u'''
execute sp_executesql @MyCursor
open SysCursor
Fetch next from SysCursor into @TableName, @TableID
/* -1 = no record; -2 = row deleted; 0 = got a row */
While (@@Fetch_status <> -1)
Begin
    If (@@Fetch_status <> -2)
    Begin /* have a user row (table) */
        set @ColumnName = @TableName + 'ID'
        set @IndexName = 'PK_' + @ColumnName

        --only add ident and PK if no primary key in table
        If not exists (Select * from Sysobjects where Parent_obj = @TableID and xtype = 'PK')
        Begin
            --add an identity column to user table and make it a Primary key
            EXEC ('ALTER TABLE ' + @TableName + ' ADD ' + @ColumnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ')
            --@@ERROR is reset by every statement, so it must be captured
            --immediately after the EXEC; checking it any later always sees 0,
            --which is why the original error test stopped firing
            set @Err = @@ERROR
            If @Err <> 0
            Begin
                --if error, assume already ident column, so find column name & make PK
                print @Err
                print 'error occurred'
                --set @MyCursorC = N'Declare SysCursorC cursor for SELECT c.name
                --FROM syscolumns c, sysobjects o
                --WHERE ((c.id = o.id) AND (c.status = 128)) AND (o.name = ' + @TableName + ')'
                --execute sp_executesql @MyCursorC
                --Open SysCursorC
                --Fetch next from SysCursorC into @CName
                --print @CName
                --close SysCursorC
                --deallocate SysCursorC
                --Exec ('ALTER TABLE ' + @TableName + ' ADD ' + @ColumnName + ' INT IDENTITY CONSTRAINT ' + @IndexName + ' PRIMARY KEY ' )
            End
        End
    End
    Fetch next from SysCursor into @TableName, @TableID
End
--Print @Msg
Close SysCursor
Deallocate SysCursor
Return

Thanks for any help,
Judith


Inconsistent Stored Procedure Results

Jul 13, 2006

I'm testing some code to look up values from my database and update a specific field when certain conditions are met. I'm having trouble with some code that is giving me the results I expect when I submit one set of parameters, but is not finding anything in the database for another set, when I know the data exists.

Here's the code for my stored procedure, SP:

Code:

@flightid bigint,
@departuretime smalldatetime
AS
SELECT flightid, flightno, departuretime, origincode, destinationcode
FROM flightschedules
WHERE flightid <> @flightid
AND departuretime = CONVERT(SMALLDATETIME, @departuretime, 120)



And here's the vbscript that calls it:


Code:

vOutboundID = 452
vReturnID = 453

'--- Get the flight details ---
strOrigin = "confirmflightdetails '" & vOutboundID & "';"
set rsOrigin = Server.CreateObject("ADODB.Recordset")
rsOrigin.Open strOrigin, objConn
response.write "origin " & rsOrigin("departuretime") & "<BR>"

strReturn = "confirmflightdetails '" & vReturnID & "';"
set rsReturn = Server.CreateObject("ADODB.Recordset")
rsReturn.Open strReturn, objConn
response.write "return " & rsReturn("departuretime") & "<BR>"

strGetOFID = "SP '" & vOutboundID & "', '" & rsOrigin("departuretime") & "';"
set rsOFID = Server.CreateObject("ADODB.Recordset")
rsOFID.Open strGetOFID, objConn

DO WHILE NOT rsOFID.EOF
response.write "OFNO " & rsOFID("flightno") & " " & rsOFID("flightid") & "<br>"
rsOFID.MoveNext
Loop

strGetRFID = "SP '" & vReturnID & "', '" & rsReturn("departuretime") & "';"
set rsRFID = Server.CreateObject("ADODB.Recordset")
rsRFID.Open strGetRFID, objConn

DO WHILE NOT rsRFID.EOF
response.write "RFNO " & rsRFID("flightno") & " " & rsRFID("flightid") & "<br>"
rsRFID.MoveNext
Loop




Here's the code for confirmflightdetails:

Code:

@flightid bigint
AS
SELECT flightid, flightno, departuretime
FROM flightschedules
WHERE flightid = @flightid



When confirmflightdetails is tested, I get the proper results, as confirmed by the response.write statements:

452  109  2006-07-29 08:00:00
453  110  2006-07-29 12:05:00


I put the response.write statements and loops in so I could verify the functionality.

Here's what it produces:

out 452
ret 453
origin 7/29/2006 8:00:00 AM
return 7/29/2006 12:05:00 PM
OFNO 109 450

Here's what it should produce:

out 452
ret 453
origin 7/29/2006 8:00:00 AM
return 7/29/2006 12:05:00 PM
OFNO 109 450
RFNO 110 451

If I do this in query analyzer:

Code:

select flightid, flightno, departuretime
from flightschedules
where flightid > 449 and flightid < 454



this is what I get from the database:

flightid  flightno  departuretime        origin  destination
452       109       2006-07-29 08:00:00  A       C
450       109       2006-07-29 08:00:00  A       B
453       110       2006-07-29 12:05:00  C       A
451       110       2006-07-29 13:15:00  B       A

What I'm trying to do is look up the chosen flight, then find the flight with the matching origin/destination (the other flight leg) on the same day.

I can't figure out why it's working for one set of parameters and not for the other.
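One thing that may be worth checking, since the departure time travels through the recordset as a regionally formatted string ('7/29/2006 12:05:00 PM'): style 120 in the CONVERT describes the ODBC canonical form 'yyyy-mm-dd hh:mi:ss', and strings in other forms are parsed according to the connection's DATEFORMAT/language settings instead, which can shift the value. A sketch contrasting the two (the literals are just illustrations):

-- unambiguous; matches the 120 style the procedure expects
SELECT CONVERT(SMALLDATETIME, '2006-07-29 12:05:00', 120)

-- regional form; how this parses depends on session settings, not on style 120
SELECT CONVERT(SMALLDATETIME, '7/29/2006 12:05:00 PM', 120)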

Thanks in advance for any help!


Inconsistent SP Performance On Different Servers

Jun 8, 2006

One of my developers recently installed a backup of the production database onto his test site. His test server has the same configuration as the production server.

One of the Stored Procedures that is called takes 1:45 to run on his machine, but only 2 seconds on the production server. This same SP takes only 2 seconds on my development database.

The SP is called iteratively, up to 10 times, to run against 10 separate fields. The value of a parameter called @CriteriaClassID determines which portion of the SP runs.

The significant difference in processing time in itself is baffling (since the servers are same specs / configuration, as far as I can tell, and the data is identical, since he has a backup of the most recent production data).

But more baffling: if, in his data, I switch the values from field 1 to field 2, and vice versa, his results take 2 seconds (switching the values in field 1 to field 2 switches the value in @CriteriaClassID which is passed through to this SP).

It's exactly the same SP; the only difference is that field 1 is processed first, field 2 second, field 3 third etc. On the production site and my development site, it doesn't make a difference in the order they are processed. On his machine it does.

Any ideas? I thought perhaps his indexes were corrupted in the rebuild, but we ran a SQL Server maintenance schedule to clean them up, with no improvement.

This is the SP, if it is of any help:

CREATE procedure [dbo].[st_pull_model_data] @ModelID as integer, @CriteriaID as integer
as

declare @ClientID as integer, @CriteriaClassId as char(1)

/* Procedure to pull data from org_model_data and postalcode_model_data for modeling and media analysis */
/* Need to have table #temp_data created outside of SP with fields org_id and zip_code */
/* This procedure is used by SP st_model_data */

If @CriteriaID is not null
begin

set @CriteriaClassId = (Select model_criteria_type from model_criteria where model_criteria_id = @CriteriaID)
if @CriteriaClassID = 'G' -- changes client_id from specific to general, if General is required.
begin
set @ClientID = 0
end
else
begin
set @ClientID = (Select client_id from model where model_id = @ModelID)
end

If @CriteriaClassId in ('G','P')
Begin
update #temp_data
set data1 = postal_criteria_value
from #temp_data t
left outer join
(select postalcode, postal_criteria_value
from postalcode_model_data pmd
join model_org_trade_area mota on mota.zip_code = pmd.postalcode
join model_org mo on mo.model_org_id = mota.model_org_id
where model_criteria_id = @CriteriaID
and client_id = @ClientID
and mo.model_id = @ModelID) as PMD
on PMD.postalcode = t.zip_code
end
else
Begin
update #temp_data
set data1 = org_criteria_value
from #temp_data t
left outer join
(select distinct postalcode, org_criteria_value, omd.org_id
from org_model_data omd
join org o on o.org_id = omd.org_id
join model_org_trade_area mota on mota.zip_code = omd.postalcode
join model_org mo on mo.model_org_id = mota.model_org_id and mo.org_id = o.org_id
where model_criteria_id = @CriteriaID and o.client_id = @ClientID and mo.model_id = @ModelID) as OMD
on OMD.postalcode = t.zip_code and omd.org_id = t.org_id
end
end


Inconsistent Application Performance

Feb 13, 2007

Basically, I have a web-based application that queries aggregates from several big tables that grow rather quickly. I feel I am fairly accomplished at tuning individual queries and have optimized these particular stored procedures as much as I can and often see great results for a period of time. However, after time passes, it seems the stored procedures begins performing really badly. What's frustrating is that I have traced the server, located a stored procedure that consistently executes with an abnormally high duration, but when I run the exact same text directly against the sql server it returns instantly.

If I recompile the stored procedure using my direct connection by including the WITH RECOMPILE clause to the EXEC statement, the problem remains for the web-app. However, if I add the WITH RECOMPILE to the actual stored procedure, it is instantly resolved. I can then remove the WITH RECOMPILE from the stored procedure text and things will remain speedy for a period of time before they again slip back.

I'm curious as to what I can do from a SQL standpoint to correct for this. For now, the app team is setting a counter for each execution and then issuing a WITH RECOMPILE when the counter limit is exceeded.
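For what it's worth, SQL Server 2005 also allows recompilation at the statement level, so only the statement that suffers from the stale plan pays the recompile cost, instead of the whole procedure as with WITH RECOMPILE. A sketch with hypothetical names:

CREATE PROCEDURE dbo.usp_GetDailyAggregates  -- hypothetical procedure
    @FromDate datetime,
    @ToDate   datetime
AS
SELECT CustomerId, SUM(Amount) AS Total     -- hypothetical table and columns
FROM dbo.BigTable
WHERE SaleDate BETWEEN @FromDate AND @ToDate
GROUP BY CustomerId
OPTION (RECOMPILE)  -- recompile just this statement with the current parameter values
GO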

I apologize if this question has been asked before; I attempted a review of the forums but was unable to find any material. Any suggestions, advice, links or general assistance would be most appreciated.


Inconsistent Database Performance

Jul 23, 2005

Hi,

I am facing a peculiar problem in a live database currently in operation in one of my client's projects. An application that is updating 3 tables in the database is failing to update a certain number of fields in one of the tables. The fault is infrequent, and I have checked through the Server Performance Monitor that there is no performance lag on the server at any point in time. The tables are indexed with common key fields. Can anybody help me in this regard?

Thanks & Regards.


Inconsistent Performance From Queues

Nov 13, 2006

Hi everyone! I have a very brief question... I have 10 queues in my database and each of them is sent an equal number of messages. There are instances where they execute/activate the stored procedures very fast, but there are times when they don't. Does anyone have an idea why this happens?

Thank you very much for taking the time to read my post. :)


SQL Server 2014 :: Left Join With A Large Partitioned Table?

Aug 3, 2015

I have a query that has a left join with a large partitioned table. The partitioned table has 10s of millions of records, and each partition has about 100,000 records.

The left join is part of an insert that gets a column from the partitioned table, if the column exists. The query contains the partition ID and all other joined columns are part of a non-clustered index.

Through the profiler, I found that there were millions of reads and the execution plan was giving me a table scan on the partitioned table.

I changed the query to do the insert followed by an update with inner join. That did the trick, but it worries me that SQL Server 2014 behaves differently from 2012 or 2008R2, which can make upgrading very time consuming.
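For reference, a sketch of the insert-followed-by-update pattern described above, with hypothetical table and column names:

-- step 1: insert without touching the partitioned table
INSERT INTO dbo.Target (KeyCol, PartitionID, LookedUpCol)
SELECT s.KeyCol, s.PartitionID, NULL
FROM dbo.Source AS s;

-- step 2: fill in the column with an inner join, which only matches rows that exist
UPDATE t
SET t.LookedUpCol = p.LookedUpCol
FROM dbo.Target AS t
INNER JOIN dbo.BigPartitionedTable AS p
    ON p.PartitionID = t.PartitionID  -- partition key first, for partition elimination
   AND p.KeyCol = t.KeyCol;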


Inconsistent Linked Server Query Results

Feb 26, 2007

Hello,

I have a linked server named 'Charlie_File' to an Excel workbook that I set up in SQL Server 2005 Management Studio. The workbook is on my local C drive. Sometimes I get back the results that I expect when I run the following query:

SELECT * FROM OPENQUERY(Charlie_file, 'SELECT * FROM [Feb$]')

Sometimes, on subsequent runs of the above query, I get the following message;

Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "MSDASQL" for linked server "Charlie_file" reported an error. The provider did not give any information about the error.
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "Charlie_file".


There seems to be about a minute or so of a delay before the query will run correctly on subsequent attempts. Is there a connection issue here where a connection blocks subsequent attempts to select the data within a specific time span?

Thank you for your help!

cdun2


Execution Plans Inconsistent With Performance

May 16, 2008

I've been working with SQL Server 2005 for a while now and I've noticed some odd behavior that I want to bounce of other members of the community. I should preface that I've been a forum viewer (and occasional contributer) here at SQL Team for a while and I've naturally developed a keen sense for optimizations.

Fundamentally, longer stored procedures with perfectly fine/optimized execution plans are inconsistent with real-world performance. In some of these cases, a plan with a low subtree cost on a 4-core machine with 16gb of RAM and two 15-drive SAS arrays under little load takes excessively long to run, or in some cases doesn't complete.

This isn't due to blocking or resource bottlenecks as I'm quite familiar with built in tools to troubleshoot and resolve those issues. In all cases, I am able to rearchitect the stored procedure into a higher subtree cost variant and get reasonable performance, but it's frustrating to have to redo work and there seems to be no common theme other than longer multi-statement procedures.

I've used SQL Server 2000 extensively and did not notice this level of inconsistency in performance with that product version. Just wondering if others in the community have experiences similar or if I'm just crazy.

Thanks for reading my rant.

- Shane


Indexing Issue, Inconsistent Performance

Feb 13, 2006

Hi - Trying to chase down a baffling performance issue. Our database has been running very slow lately, so we are performance tuning the database. In doing so, we created a copy of our production database. In that database, I changed one clustered index on a table to try to improve performance. I ran one query - saw a slight improvement - but saw "lazy spool" in the execution plan.

I tried to change it back to the original index by dropping the changed index and recreating the original index. I then ran the original query - which now went from 5 seconds to 36 seconds.

I then ran DBCC REINDEX on that table. Performance of the query was still markedly worse. I then reran the DBCC REINDEX on all tables, and then I updated each table's statistics. Performance of that query has never returned to the original 5 seconds.

What could be at issue here? Is there something else that I caused in changing the index and changing it back?

Ideas much appreciated.


SQL Server 2012 :: Inconsistent Results When Converting To Time?

Jun 5, 2014

I have a lot of rows of hours, set up like this: 0745, 0800, 2200, 1145 and so on (varchar(5), for some reason).

These are converted into a smalldatetime like this:

CONVERT(smalldatetime, STUFF(timestarted, 3, 0, ':')) [this would give output like this - 1900-01-01 11:45:00]

This code has been in place for years...and we stick the date on later from another column.

But recently, it's started to fail for some rows, with "The conversion of a varchar data type to a smalldatetime data type resulted in an out-of-range value".

My assumption is that new data being added in is junk. If I query for these values and just list them (rather than adding a column to convert them also) that's fine, of course. I've checked all the stuffed (but not yet converted - so 11:45 rather than 1145) output to see if it ISDATE(), and it is. There are no times with hours > 23 or minutes greater than 59 either.

If I add the CONVERT in, we see the error message. But here's the oddity, if I place all of the rows into a holding table, and retry the conversion, there is no error. It's this last bit that is puzzling me. Plus I can't see any errors in the hours data that would cause a conversion problem.

I've put the whole of this into a cursor to try to trap the error rows too, but everything processes fine. Why would it fail when NOT in a cursor?
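Since this is SQL Server 2012, TRY_CONVERT may be a cleaner trap than a cursor: it returns NULL instead of raising the out-of-range error, so the offending rows can be listed directly. A sketch reusing the expression from the post (the table name is an assumption):

SELECT timestarted
FROM dbo.SourceTimes  -- hypothetical table name
WHERE timestarted IS NOT NULL
  AND TRY_CONVERT(smalldatetime, STUFF(timestarted, 3, 0, ':')) IS NULL
-- rows returned here are exactly the ones the plain CONVERT would choke on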


T-SQL (SS2K8) :: Stored Procedure Returning Inconsistent Results?

Mar 11, 2015

Firstly may I say that the sproc I am having problems with and the service that calls it are inherited technical debt from an unsupervised contractor. We are not able to go through a rewriting process at the moment, so we need to live with this if possible.

Background

We have a service written in c# that is processing packages of xml that contain up to 100 elements of goods consignment data. In amongst that element is an identifier for each consignment. This is nvarchar(22) in our table. I have not observed any IDs that are different in length in the XML element.

The service picks up these packages from MSMQ, extracts the data using XPATH and passes the ID into the SPROC in question. This searches for the ID in one of our tables and returns a bool to the service indicating whether it was found or not. If found then we add a new row to another table. If not found then it ignores and continues processing.

Observations

The service seems to be dealing with a top end of around 10 messages a minute... so a max of about 1000 calls to the SPROC per minute. Multi-threading has been used to process these packages but, as I am assured, sprocs are threadsafe. It is completing the calls without issue, but intermittently it will return FALSE. For these IDs I am observing that they mostly do exist in the table (there are odd exceptions where they are legitimately missing). E.g. yesterday I was watching the logs and, on seeing a message saying that an ID had not been found, I checked the database and could see that the ID had been entered a day earlier according to an Entered timestamp.

So the Sproc...

USE [xxxxxxxxxx]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

[Code]....

So on occasions (about 0.33% of the time) it is failing to get a bit 1 setting in @bFound after the SELECT TOP(1).

The only suggestions I can make have been...

change @pIdentifier nvarchar(25) to nvarchar(22)
Trim any potential blanks from either side of both parts of the identifier comparison
Change the SELECT TOP(1) to an EXISTS (sketched below)
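A sketch of that EXISTS variant, keeping the parameter names from the post; the table and column names are assumptions, since the procedure body was not reproduced here:

CREATE PROCEDURE dbo.usp_ConsignmentExists  -- hypothetical wrapper
    @pIdentifier nvarchar(22),              -- narrowed from nvarchar(25), per the first suggestion
    @bFound bit OUTPUT
AS
IF EXISTS (SELECT 1
           FROM dbo.Consignment                 -- hypothetical table name
           WHERE ConsignmentId = @pIdentifier)  -- hypothetical column name
    SET @bFound = 1
ELSE
    SET @bFound = 0
GO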

The only other thought is the two way parameter direction in the C# for the result OUTPUT. Not sure why he did it that way or what the purpose is.

I have been unable to replicate this using a test app and our test databases. Has anyone observed selects failing to find data that is there, like this, before?


MSSQL Express 2005 Performance So Inconsistent!

Jan 26, 2007

Hi, just wondering if anyone has noticed that MSSQL Express 2005 is so inconsistent with its performance. I was testing a query with 2 joins, with the main table having about 13,600,000 records. With the same criteria value, it sometimes finished executing in 3 secs, sometimes in almost 1 min.

Could it be just the volume of the data? This is really driving me crazy!

Any advise will be greatly appreciated.

Rick..


Data Access :: BULK INSERT Yields Inconsistent Results

May 6, 2015

I am getting inconsistent results when BULK INSERTING data from a tab-delimited text file. As part of my testing, I run the same code on the same file again and again, and I get different results every time! I get this on SQL 2005 and SQL 2012 R2.We have an application that imports data from a spreadsheet. The sheet contains section headers with account numbers and detail rows with transactions by date:

AAAA.1234 /* (account number)*/
1/1/2015      $150                 First Transaction
1/3/2015      $24.233              Second Transaction

BBBB.5678
1/1/2015      $350                 Third Transaction
1/3/2015      $24.233              Fourth Transaction

My Import program saves this spreadsheet at tab-delimited text, then I use BULK INSERT to bring the data into a generic table full of varchar(255) fields. There are about 90,000 rows in each day's data; after the BULK INSERT about half of them are removed for various reasons. Next I add a RowID column to the table with the IDENTITY (1,1) property. This gives my raw data unique row numbers.

I then run a routine that converts and copies those records into another holding table that's a copy of the final destination table. That routine parses though the data, assigning the account number in the section header to each detail row. It ends up looking like this:

AAAA.1234     1/1/2015      $150          First Purchase
AAAA.1234     1/3/2015      $24.233     Second Purchase
BBBB.5678     1/1/2015      $350            Third Purchase
BBBB.5678     1/3/2015      $24.233       Fourth Purchase

My technique: I use a cursor to get the starting RowID for each Account Number: I then use the upper and lower RowIDs to do an INSERT into the final table. The query looks like this:

SELECT RowID, SUBSTRING(RowHeader, 6,4) + '.UBC1' AS AccountNumber
FROM   GenericTable
WHERE RowHeader LIKE '____.____%'

Results look like this:

But every time I run the routine, I get different numbers! My results are not accurate; I get inconsistent results EVERY TIME. Here is my code, with table, field and account names changed for business confidentiality. This is a high profile project at my company.

TRUNCATE TABLE GenericImportTable;
ALTER TABLE GenericImportTable DROP COLUMN RowID;
-- terminators restored for a tab-delimited file; the literal tab and newline
-- characters in the original post did not survive the forum formatting
BULK INSERT GenericImportTable FROM 'SERVERGeneralAppnameDataFile.2015.05.04.tab.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 6)

[code]...


SQL Server 2012 :: Adding RowID To Existing Table - Inconsistent Results

May 6, 2015

I am getting inconsistent results when BULK INSERTING data from a tab-delimited text file. As part of my testing, I run the same code on the same file again and again, and I get different results every time! I get this on SQL 2005 and SQL 2012 R2.

We have an application that imports data from a spreadsheet. The sheet contains section headers with account numbers and detail rows with transactions by date:

AAAA.1234 /* (account number)*/
1/1/2015 $150 First Transaction
1/3/2015 $24.233 Second Transaction
BBBB.5678
1/1/2015 $350 Third Transaction
1/3/2015 $24.233 Fourth Transaction

My Import program saves this spreadsheet at tab-delimited text, then I use BULK INSERT to bring the data into a generic table full of varchar(255) fields. There are about 90,000 rows in each day's data; after the BULK INSERT about half of them are removed for various reasons.

Next I add a RowID column to the table with the IDENTITY (1,1) property. This gives my raw data unique row numbers.

I then run a routine that converts and copies those records into another holding table that's a copy of the final destination table. That routine parses though the data, assigning the account number in the section header to each detail row. It ends up looking like this:

AAAA.1234 1/1/2015 $150 First Purchase
AAAA.1234 1/3/2015 $24.233 Second Purchase
BBBB.5678 1/1/2015 $350 Third Purchase
BBBB.5678 1/3/2015 $24.233 Fourth Purchase

My technique: I use a cursor to get the starting RowID for each Account Number: I then use the upper and lower RowIDs to do an INSERT into the final table. The query looks like this:

SELECT RowID, SUBSTRING(RowHeader, 6,4) + '.UBC1' AS AccountNumber
FROM GenericTable
WHERE RowHeader LIKE '____.____%'

Results look like this:

But every time I run the routine, I get different numbers!

Needless to say, my results are not accurate. I get inconsistent results EVERY TIME. Here is my code, with table, field and account names changed for business confidentiality.

TRUNCATE TABLE GenericImportTable;
ALTER TABLE GenericImportTable DROP COLUMN RowID;
BULK INSERT GenericImportTable FROM 'SERVERGeneralAppnameDataFile.2015.05.04.tab.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', FIRSTROW = 6) -- terminators restored for a tab-delimited file
ALTER TABLE GenericImportTable ADD RowID int IDENTITY(1,1) NOT NULL
SELECT RowID, SUBSTRING(RowHeader, 6,4) + '.UBC1' AS AccountNumber
FROM GenericImportTable
WHERE RowHeader LIKE '____.____%'


Partitioned Tables

Mar 19, 2008



Hello,

I am implementing table partitioning on our database with T-SQL.
At the moment (it is under development) the data are correctly located in the relevant filegroup.
Our target is to make it so that the oldest partitions/filegroups can be backed up and removed from the database, to reduce the size of the DB (a time period is used for partitioning).
Then, if the need arises, the filegroup can be restored for reporting or analysis.
Note that data are continuously added, and thus new filegroups are added to represent the new time period (e.g. a new filegroup is the new month).
Based on your experience, is a solution like that possible?
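In principle this is the file/filegroup backup scenario, with caveats around recovery model and edition (online piecemeal restore, where the rest of the database stays available during the restore, is Enterprise-only in SQL 2005). A sketch with hypothetical database and filegroup names:

-- back up one time-period filegroup on its own (simplest once the
-- filegroup has been marked READONLY)
BACKUP DATABASE SalesDB
    FILEGROUP = 'FG_200701'
    TO DISK = 'E:\Backup\SalesDB_FG_200701.bak'

-- later, bring just that filegroup back to run reports against it
RESTORE DATABASE SalesDB
    FILEGROUP = 'FG_200701'
    FROM DISK = 'E:\Backup\SalesDB_FG_200701.bak'
    WITH RECOVERY
-- under the FULL recovery model a read-write filegroup also needs the
-- subsequent log backups; a READONLY filegroup can come back as-is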


Thanks


Partitioned Tables

Sep 28, 2007

When do partitioned tables/indexes become beneficial? When a table has several million rows? Hundreds of millions of rows?

My tables all have clustered indexes based on the bigint identity PK. I am considering partitioning some of the larger tables by year. If the field I use is not part of the current clustered index, then I can't use create index to create my partitions? I need to create an empty table for each year and then use the ALTER TABLE switch? I have header/detail/sub-detail tables. As long as I create the partition function using a similar date field, the partitions will be able to be joined? How do I ensure my indexes will be aligned? Once I set up the partitions, I assume new rows will be stored in the proper partitions based on the value of the date field.

I've read BOL, etc & they are good sources for theory but I need a "Building Partitions for Dummies" type paper with step by step explanations. Anything out there like that?
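On the alignment question: an index is aligned when it is built on the same partition scheme as the table, and for a unique index (such as a primary key) the partitioning column must be part of the index key. A sketch, assuming a partition scheme ps_ByYear over a datetime column (all names are assumptions):

CREATE TABLE dbo.OrderHeader (
    OrderId   bigint IDENTITY(1,1) NOT NULL,
    OrderDate datetime NOT NULL,
    -- the partitioning column must appear in any unique index key,
    -- hence the two-column primary key
    CONSTRAINT PK_OrderHeader PRIMARY KEY CLUSTERED (OrderId, OrderDate)
) ON ps_ByYear(OrderDate)

-- nonclustered indexes created on the same scheme are aligned as well,
-- which is what ALTER TABLE ... SWITCH requires
CREATE NONCLUSTERED INDEX IX_OrderHeader_OrderDate
    ON dbo.OrderHeader (OrderDate) ON ps_ByYear(OrderDate)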

Thanks.


Partitioned Tables And Parameters

Apr 18, 2007

Hello all,
I was wondering if anyone else ran into this and if how you got around it.
In a nutshell, the SQL optimizer is NOT pruning the additional partitions from the execution plan as would be expected when applying a constraint against the partitioned table's partition key through a parameter; instead it scans every partition that you have set up in your partition function range. Yet when you apply the actual value against the table, the plan comes back as expected.



Hmm.... strange......ghost...ooooooo?

I have created a simple test to reproduce:






Code Snippet

CREATE PARTITION FUNCTION [PTFunction](int) AS RANGE LEFT FOR VALUES (1,2,3)
GO

CREATE PARTITION SCHEME [PTDataScheme] AS PARTITION [PTFunction] TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY])
GO

CREATE TABLE tblPartitionTest(
ID int identity(1,1),
PartitionKey int,
Sales money)
ON PTDataScheme(PartitionKey)
GO

INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(1,50.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(2,50.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,10.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,20.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,30.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,40.00);
INSERT INTO tblPartitionTest(PartitionKey,Sales) VALUES(3,50.00);

set showplan_text on;

-- query using the set value as a constraint
select * from tblpartitiontest
where partitionkey = 2

--showplan text result:
|--Table Scan(OBJECT:([tblPartitionTest]), WHERE:([tblPartitionTest].[PartitionKey]=(2)) PARTITION ID:((2)))

-- query using the parameter as a constraint
declare @param_partitionkey int
set @param_partitionkey = 2

select * from tblpartitiontest
where partitionkey = @param_partitionkey

--showplan text result:
|--Table Scan(OBJECT:([tblPartitionTest]), WHERE:([tblPartitionTest].[PartitionKey]=[@param_partitionkey]) PARTITION ID:(RangePartitionNew([@param_partitionkey],(0),(1),(2),(3))))



BTW, I have reproduced this in SP2 as well.



Any thoughts?

Thanks

Eric






SQL 2012 :: Deadlocks In Partitioned Tables?

Oct 31, 2015

We are using partitions and all the tables are properly aligned as per the partition keys. When this particular sp, which inserts data into a table from a different table based on the partition keys, is called by the Web UI, where threading has been applied, a deadlock appears.

Let me make it more clear.

ThreadOne:
Insert into table A(partitionKey,BatchId,...)
select * from table B where partitionkey = 1
ThreadTwo:
Insert into table A(partitionKey,BatchId,...)
select * from table B where partitionkey = 2

I can see that sometimes it gives a deadlock for this procedure. I am not sure about the reason; my guess is that since the tables are partitioned and lock escalation is set to AUTO, the deadlock should not occur.


Determining The Nature Of Partitioned Tables

Aug 13, 2007

I apologize in advance if this is something obvious I've missed ... fresh eyes/brain and all that.

If I have a table that is using a particular partition scheme/function, is there a quick and easy way to determine which column of that table is being used for partitioning? We're examining a number of legacy structures and we're hoping to reduce the time it's going to take us to get the report management wants.

Thanks.


How To Create Weekly Partitioned Tables Automatically

May 14, 2006

Hi,

I need help with a sample script to automatically create weekly partitioned tables and also update the partitioned view accordingly.

Any help is very much appreciated!
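A rough sketch of one weekly step, driven by dynamic SQL - the names, the naming pattern, and the check-constraint bounds are all assumptions, and the ALTER VIEW that rebuilds the UNION ALL list would be generated the same way:

DECLARE @weekStart datetime, @weekEnd datetime, @tbl sysname, @sql nvarchar(max)

SET @weekStart = DATEADD(week, DATEDIFF(week, 0, GETDATE()) + 1, 0) -- next Monday
SET @weekEnd   = DATEADD(day, 7, @weekStart)                        -- exclusive upper bound
SET @tbl       = 'Sales_' + CONVERT(char(8), @weekStart, 112)       -- e.g. Sales_20060522

SET @sql = N'CREATE TABLE dbo.' + QUOTENAME(@tbl) + N' (
    SaleId   int NOT NULL,
    SaleDate datetime NOT NULL
        CHECK (SaleDate >= ''' + CONVERT(char(8), @weekStart, 112) + N'''
           AND SaleDate <  ''' + CONVERT(char(8), @weekEnd, 112) + N'''),
    Amount   money NOT NULL,
    CONSTRAINT ' + QUOTENAME('PK_' + @tbl) + N' PRIMARY KEY (SaleId, SaleDate))'

EXEC sp_executesql @sql
-- the UNION ALL view is then rebuilt the same way: assemble an
-- ALTER VIEW dbo.SalesAll ... SELECT ... FROM each weekly table, and EXEC it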


Thanks,
G


Triggers On Tables Underlying A Partitioned View

Jul 23, 2005

We have a partitioned view with 4 underlying tables. The view and each of the underlying tables are in separate databases on the same server. Inserts and deletes on the view work fine. We then add insert and delete triggers to each of the underlying tables. The triggers modify a different set of tables in the same database as the view (different than the underlying table). The problem is those triggers aren't fired when inserting or deleting via the view. Inserting or deleting on the underlying table directly causes the triggers to fire, but not when the tables are accessed as a result of using the view. Am I missing something? The triggers are 'for insert' and 'for delete'. No 'instead of' or 'after' triggers.


Help With Partitioned Views Or Updating Data From Multiple Tables

Mar 16, 2008

Hi All,

My database's design is set out here. In summary, I'm trying to model a stock exchange for a technical analysis application written in Visual C++. In order to create the hierarchy I'm using a Nested Set Model. I'm now trying to write code to add and delete equities (or, more generically, nodes) in the database using a form presented to the user in my application. I have example SQL code to create the necessary add and delete procedures that calculate the changes to the values in the lft and rgt columns, but these examples focus on a single table, whereas my design aggregates rows from multiple tables using UNION ALL:




Code Snippet
CREATE VIEW vw_NSM_DBHierarchy -- Nested Set Model Database Hierarchy
AS
SELECT clmStockExchange, clmLeft, clmRight FROM tblStockExchange_
UNION ALL
SELECT clmMarkets, clmLeft, clmRight FROM tblMarkets_
UNION ALL
SELECT clmSectors, clmLeft, clmRight FROM tblSectors_
UNION ALL
SELECT clmEPIC, clmLeft, clmRight FROM tblEquities_




Essentially, I'm trying to create an updatable view, but I receive the error "UNION ALL View is not updatable because a partitioning column was not found". I suspect that my design is wrong or lacking, and this problem is highlighting the design flaws, so any suggestions would be greatly appreciated.
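For reference, the rules for an updatable partitioned view are fairly strict: the partitioning column must be part of each member table's primary key, must carry a CHECK constraint that gives each table a non-overlapping range or value, and must appear in each branch's select list - the error suggests SQL Server cannot find such a column across the four tables. A minimal two-table sketch of the shape it expects (all names hypothetical):

CREATE TABLE dbo.Node_SE (
    NodeType char(2) NOT NULL CHECK (NodeType = 'SE'),  -- partitioning column
    NodeName varchar(50) NOT NULL,
    clmLeft  int NOT NULL,
    clmRight int NOT NULL,
    CONSTRAINT PK_Node_SE PRIMARY KEY (NodeType, clmLeft)
)
CREATE TABLE dbo.Node_MK (
    NodeType char(2) NOT NULL CHECK (NodeType = 'MK'),
    NodeName varchar(50) NOT NULL,
    clmLeft  int NOT NULL,
    clmRight int NOT NULL,
    CONSTRAINT PK_Node_MK PRIMARY KEY (NodeType, clmLeft)
)
GO
CREATE VIEW vw_NSM_DBHierarchy
AS
SELECT NodeType, NodeName, clmLeft, clmRight FROM dbo.Node_SE
UNION ALL
SELECT NodeType, NodeName, clmLeft, clmRight FROM dbo.Node_MK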


Paging Large Results In SQL 2005

May 29, 2006

Let's say we have more than 100,000 rows in Table1, and we want to view 10 rows at a time... and by pressing a NEXT button we will see the next 10, and so on....

There are 2 buttons: NEXT and PREVIOUS

so can anyone tell me how to do that in SQL 2005, and what it is correctly called.

I have found code that uses ROW_NUMBER to view results between 2 numbers,

example: rows between 10 and 50....
but it is not quite what I want, so I need some help please, thank you
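This is usually called paging (or pagination), and ROW_NUMBER is the standard way to do it in SQL 2005 - the trick is to compute the row window from a page number instead of hard-coding 10 and 50. A sketch, assuming Table1 has an ID column to order by (the other column name is hypothetical); the NEXT and PREVIOUS buttons just pass @PageNum + 1 or @PageNum - 1 back in:

DECLARE @PageNum int, @PageSize int
SET @PageNum  = 3   -- supplied by the NEXT/PREVIOUS buttons
SET @PageSize = 10;

WITH Numbered AS (
    SELECT ID, SomeColumn,                      -- hypothetical column list
           ROW_NUMBER() OVER (ORDER BY ID) AS rn
    FROM dbo.Table1
)
SELECT ID, SomeColumn
FROM Numbered
WHERE rn BETWEEN (@PageNum - 1) * @PageSize + 1
             AND @PageNum * @PageSize
ORDER BY rn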

By Uncle Sam







