Hi, I am slowly getting to grips with SQL Server. As part of this, I have been attempting to work on producing more efficient queries. This post is regarding what appears to be a discrepancy between the SQL Server execution plan and the actual time taken by a query to run.

My brief is to produce an attendance system for an education establishment (I presume you know I'm not an A-Level student completing a project :p ). Circa 1.5m rows per annum; testing with ~3m rows currently. College_Year could strictly be inferred from the AttDateTime, however it is included as a field because it is a part of just about every PK this table is ever likely to be linked to. Indexes are not fully optimised yet.

Table:

CREATE TABLE [dbo].[AttendanceDets] (
    [College_Year] [smallint] NOT NULL,
    [Group_Code] [char] (12) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Student_ID] [char] (8) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [Session_Date] [datetime] NOT NULL,
    [Start_Time] [datetime] NOT NULL,
    [Att_Code] [char] (1) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO

CREATE CLUSTERED INDEX [IX_AltPK_Clust_AttendanceDets]
    ON [dbo].[AttendanceDets]([College_Year], [Group_Code], [Student_ID], [Session_Date], [Att_Code]) ON [PRIMARY]
GO

CREATE INDEX [All]
    ON [dbo].[AttendanceDets]([College_Year], [Group_Code], [Student_ID], [Session_Date], [Start_Time], [Att_Code]) ON [PRIMARY]
GO

CREATE INDEX [IX_AttendanceDets]
    ON [dbo].[AttendanceDets]([Att_Code]) ON [PRIMARY]
GO

ALL inserts are via an overnight sproc; data comes from a third-party system. Group_Code is 12 chars (no more, no less), Student_ID 8 chars (no more, no less).

I have created a simple sproc which I am using as a benchmark against which I am testing my options. I appreciate that this sproc is an inefficient jack of all trades; it has been designed as such so I can compare its performance to more specific sprocs and possibly some dynamic SQL.
Sproc:

CREATE PROCEDURE [dbo].[CAMsp_Att]
    @College_Year AS SmallInt,
    @Student_ID AS VarChar(8) = '________',
    @Group_Code AS VarChar(12) = '____________',
    @Start_Date AS DateTime = '1950/01/01',
    @End_Date AS DateTime = '2020/01/01',
    @Att_Code AS VarChar(1) = '_'
AS
IF @Start_Date = '1950/01/01'
    SET @Start_Date = CAST(CAST(@College_Year AS Char(4)) + '/08/31' AS DateTime)
IF @End_Date = '2020/01/01'
    SET @End_Date = CAST(CAST(@College_Year + 1 AS Char(4)) + '/07/31' AS DateTime)
SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM dbo.AttendanceDets
WHERE College_Year = @College_Year
    AND Group_Code LIKE @Group_Code
    AND Student_ID LIKE @Student_ID
    AND Session_Date <= @End_Date
    AND Session_Date >= @Start_Date
    AND Att_Code LIKE @Att_Code
GO

My confusion lies with running the below script with Show Execution Plan:

--SET SHOWPLAN_TEXT ON
--GO

DECLARE @Time AS DateTime
SET @Time = GETDATE()

SELECT College_Year, Group_Code, Student_ID, Session_Date, Start_Time, Att_Code
FROM AttendanceDets
WHERE College_Year = 2005
    AND Group_Code LIKE '____________'
    AND Student_ID LIKE '________'
    AND Session_Date <= '2005-11-16'
    AND Session_Date >= '2005-11-16'
    AND Att_Code LIKE '_'

PRINT 'First query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-Seconds'

SET @Time = GETDATE()

EXEC CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16'

PRINT 'Second query took: ' + CAST(DATEDIFF(ms, @Time, GETDATE()) AS VarChar(5)) + ' milli-Seconds'
GO

--SET SHOWPLAN_TEXT OFF
--GO

The execution plan for the first query appears miles more costly than the sproc, yet it is effectively the same query with no parameters. However, my understanding is that the cached plan substitutes literals for parameters anyway. In any case, the first query's cost is listed as 99.52% of the batch, the sproc's as 0.48% (comparing the IO, CPU costs etc. supports this). BUT the text output is:

(10639 row(s) affected)
First query took: 596 milli-Seconds

(10639 row(s) affected)
Second query took: 2856 milli-Seconds

I appreciate that logical and physical performance are not one and the same, but why is there such a huge discrepancy between the two? They are tested on a dedicated test server, and repeated running and switching the order of the queries elicits the same results. Sample data can be provided if requested, but I assumed it would not shed much light.

BTW - I know that additional indexes can bring the plans and execution times closer together; my question is more about the concept.

If you've made it this far - many thanks. If you can enlighten me - infinite thanks.
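For what it's worth when comparing the two: the plan's cost percentages are estimates, so capturing actual I/O and CPU alongside the DATEDIFF timings gives a fairer picture. A minimal sketch of that kind of harness, reusing the calls above:

SET STATISTICS IO ON
SET STATISTICS TIME ON

-- Run both forms and compare logical reads and CPU time in the Messages output:
EXEC CAMsp_Att @College_Year = 2005, @Start_Date = '2005-11-16', @End_Date = '2005-11-16'

SET STATISTICS TIME OFF
SET STATISTICS IO OFF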
I have two databases with identical schemas on the same MS SQL 2000 server. The differences in the data are very slight. Here is my problem: the identical query gets totally different execution plans on the two databases. One is (in my opinion) correct; the other causes the query to take 60 times as long. This is not an exaggeration: on the quick DB the query takes 3 seconds, on the other DB it takes 3 minutes. I have tried the following to help the optimizer pick a better execution plan on the slow DB:
- rebuilding the indexes
- DBCC INDEXDEFRAG
- UPDATE STATISTICS
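For reference, those steps correspond to commands along these lines (SQL 2000 syntax; the table and index names here are placeholders):

DBCC DBREINDEX ('dbo.MyTable')               -- rebuild all indexes on the table
DBCC INDEXDEFRAG (0, 'dbo.MyTable', 1)       -- defragment index id 1 (the clustered index) in place
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN  -- refresh statistics from every row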
I CAN put in a hint to cause the query to execute faster, but my employer now knows about the problem and he (and I) want to know WHY this is happening.
I'm looking for assistance on a problem with SQL Server. We have a database where a particular query returns about 3000 rows. This query takes about 2 minutes on most machines, which is fine in this situation. But on another machine (just one machine), it can run for over 30 minutes and not return. I ran it in Query Analyzer and it was returning about 70 rows every 45-90 seconds, which is completely unacceptable. (I'm a developer, not a DBA, so bear with me here.)

I ran an estimated execution plan for this database on each machine, and the "good" one contains lots of parallelism stuff, in particular the third box in from the left. The "bad" one contains a "Nested Loop" at that position, and NO parallelism.

We don't know exactly when this started happening, but we DO know that some security updates have been installed on this machine (it's at the client location), and also SP1 for Office 2003. So it looks like parallelism has been turned off by one of these fixes. Where do we look for how to turn it back on? This is on SQL Server 2000 SP3.

Thanks for any help you might have for me!

Christine Wolak -- SPL WorldGroup
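On SQL Server 2000, the server-wide settings that govern this are 'max degree of parallelism' (0 means use all available CPUs; 1 effectively disables parallel plans) and 'cost threshold for parallelism'. A quick check, as a sketch:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'max degree of parallelism'       -- a run_value of 1 disables parallelism
EXEC sp_configure 'cost threshold for parallelism'  -- a very high value also suppresses parallel plans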
Hi, we are trying to solve a real puzzle. We have a stored procedure that exhibits *drastically* different execution times depending on how it's executed. When run from QA, it can take as little as 3 seconds. When it is called from an Excel VBA application, it can take up to 180 seconds. Although, at other times, it can take as little as 20 seconds from Excel.

Here's a little background. The 180 second response time *usually* occurs after a data load into a table that is referenced by the stored procedure. A check of DBCC SHOW_STATISTICS shows that the statistics DO get updated after a large amount of data is loaded into the table.

*** So, my first question is: do the updated statistics force a recompile of the stored procedure?

Next, we checked syscacheobjects to see what was going on with the execution plan for this stored procedure. What I expected to see was ONE execution plan for the stored procedure. This is not the case at all. What is happening is that TWO separate COMPILED PLANs are being created, depending on whether the sp is run from QA or from Excel. In addition, there are several EXECUTABLE PLANs that correspond to the two COMPILED PLANs. Depending on *where* the sp is run, the usecount increases for the various EXECUTABLE PLANs.

To me, this does not make any sense! Why are there *multiple* compiled and executable plans for the SAME sp? One theory we have is that we need to call the sp with the dbo qualifier, i.e. EXEC dbo.sp.

Has anyone seen this? I just want to get to the bottom of this and find out why sometimes the query takes 180 seconds and other times only takes 3 seconds!! Please help. Thanks much.
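One thing worth checking here: the plan cache keys compiled plans by the connection's SET options, and Query Analyzer and ADO/OLE DB connections use different defaults (ARITHABORT is the classic one), which by itself produces two compiled plans for the same procedure. The setopts column shows this directly - a sketch, with a hypothetical procedure name:

SELECT cacheobjtype, objtype, usecounts, setopts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%spMyProc%'   -- hypothetical name: substitute your procedure

If the two COMPILED PLANs show different setopts values, that explains the pair. And on the first question: yes, a statistics update on a referenced table does cause the procedure to be recompiled on its next execution.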
I have a table TableA with a few million rows. When I query TableA, the execution plan changes based on the input parameter, as shown below. Why does this happen? How do I resolve it? Any inputs would be appreciated.
SELECT * FROM TableA WHERE Column1 = 1 => SELECT -> Clustered Index Scan (100%)
SELECT * FROM TableA WHERE Column1 = 2 => SELECT -> Clustered Index Scan (100%)
SELECT * FROM TableA WHERE Column1 = 3 => SELECT -> Parallelism (3%) -> Clustered Index Scan (97%)
SELECT * FROM TableA WHERE Column1 = 4 => SELECT -> Nested Loops -> Index Seek (50%) -> Clustered Index Seek (50%) (takes a very long time to retrieve the records)
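This pattern usually means the optimizer's row estimate for each literal value crosses the tipping point between scanning and doing an index seek plus lookups: for Column1 = 4 it estimates few rows, picks the nested-loops seek plan, and then actually finds many more rows than expected. Two things worth trying, as a sketch with hypothetical table and index names:

-- Check the histogram the optimizer is using for Column1:
DBCC SHOW_STATISTICS ('dbo.TableA', 'IX_TableA_Column1')  -- hypothetical statistics/index name

-- Refresh statistics so the per-value estimates are accurate:
UPDATE STATISTICS dbo.TableA WITH FULLSCAN

If the estimates turn out to be accurate and the data in Column1 really is that skewed, a covering index on Column1 (including the columns you select) removes the expensive lookup side of the nested loop.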
In using ADO to connect to SQL Server, I'm trying to retrieve multiple datasets AND statistics that are usually returned via the OnInfoMessage event. For those that are familiar with SQL Server, I need the results returned by the SET STATISTICS IO ON and SET STATISTICS PROFILE ON options. Anyone had any luck doing this before?
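For what it's worth, the two options come back over different channels: STATISTICS IO output arrives as informational messages (the OnInfoMessage / Errors-collection path in ADO), while STATISTICS PROFILE returns an additional result set after each query's rows, reachable with the Recordset's NextRecordset method. A minimal T-SQL sketch, with a hypothetical table name:

SET STATISTICS IO ON        -- emitted as informational messages
SET STATISTICS PROFILE ON   -- emitted as an extra result set per query
SELECT * FROM dbo.MyTable   -- rows come first, then the profile result set
SET STATISTICS PROFILE OFF
SET STATISTICS IO OFF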
I've been working with SQL Server 2005 for a while now and I've noticed some odd behavior that I want to bounce off other members of the community. I should preface this by saying that I've been a forum viewer (and occasional contributor) here at SQL Team for a while, and I've naturally developed a keen sense for optimizations.

Fundamentally, longer stored procedures with perfectly fine/optimized execution plans are inconsistent with real-world performance. In some of these cases, a plan with a low subtree cost, on a 4-core machine with 16 GB of RAM and two 15-drive SAS arrays under little load, takes excessively long to run or in some cases doesn't complete.

This isn't due to blocking or resource bottlenecks, as I'm quite familiar with the built-in tools to troubleshoot and resolve those issues. In all cases, I am able to rearchitect the stored procedure into a higher-subtree-cost variant and get reasonable performance, but it's frustrating to have to redo work, and there seems to be no common theme other than longer multi-statement procedures.

I've used SQL Server 2000 extensively and did not notice this level of inconsistency in performance with that product version. Just wondering if others in the community have had similar experiences, or if I'm just crazy.
Hi there - hoping someone can help me here!

I have a database that has been underperforming on a number of queries recently - in a test environment they take only a few seconds, but on the live data they take up to a minute or so to run. This is using the same data. Every evening a copy of the live data is copied to a backup 'snapshot' database on the same server, and on this copy too the queries only take a second or so to run. (This is testing through the Query Analyser.)

I've studied the execution plans for the same query on the snapshot db and the live db and they seem to be significantly different - why is this? It's looking at the same data and exactly the same code!!

Anybody got any ideas???
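Since the snapshot copy is rebuilt every evening, it effectively gets freshly packed tables and brand-new statistics, while the live tables may be fragmented with stale statistics - often enough to send the optimizer down a different plan. A sketch of what to compare on both databases (hypothetical table/index names, SQL 2000 syntax):

DBCC SHOWCONTIG ('dbo.MyTable')                     -- compare scan density / fragmentation
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyIndex')  -- compare the "Updated" date and rows sampled
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN         -- then re-test the query on the live db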
Is it one execution plan per stored procedure, or can there be more than one? I think not - though Microsoft says it is possible: one for parallel and one for serial execution. I don't believe it's possible for a stored procedure to change execution plans on the fly.

Here's the background. I have an ongoing problem with timeouts occurring in an application, and I narrowed the culprit down to a stored procedure. I couldn't find any obvious issues database-wise - no locks, etc. - so I recompiled (altered) the sproc without making any changes, and the issue cleared for a couple of days. It happened again today, so I recompiled (altered) the sproc and it went away again. There have been no code changes to either the application (so they say) or the stored procedure. I ran the below code snippet to check for sprocs with multiple cached plans, and the offending one came up on a short list.

So, my question is: is it one query plan per sproc, or can there be more than one? I understand the connection issues.
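The snippet itself isn't reproduced here, but a check of this sort can be written against the plan-cache DMVs - a sketch, assuming SQL Server 2005 or later:

SELECT OBJECT_NAME(st.objectid, st.dbid) AS proc_name,
       COUNT(*) AS cached_plan_count
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Proc'
GROUP BY st.objectid, st.dbid
HAVING COUNT(*) > 1

To the actual question: yes, one procedure can legitimately have several cached plans at once - differing SET options on the calling connections are the most common cause, alongside the parallel/serial pair mentioned above.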
SELECT * FROM a LEFT OUTER JOIN b ON a.id = b.id instead of
SELECT * FROM a LEFT JOIN b ON a.id = b.id
generates a different execution plan?
My query is more complex, but when I change "LEFT OUTER JOIN" to "LEFT JOIN" I get a different execution plan, which absolutely baffles me! Especially considering that everything I know, and everything I was able to research, essentially says the "OUTER" is implied in "LEFT JOIN".
After moving off the VS debugger and into Management Studio to exercise our SQLCLR sp, we notice that the 2nd execution gets an error suggesting that our static SqlCommand object is getting reused from the 1st execution (of the sp under Management Studio). If this is expected behavior, we have no problem limiting our statics to only completely reusable objects, but we would first like to know: is this expected? Is the fact that the debugger doesn't show this behavior also expected?
Using SQL Server 2k5 SP1: is there a way to deny users access to a specific column in a table, and deny that same column in all stored procedures and views that use it? I have a password field in a database on which I do not want anyone to have SELECT permissions (except one user). I denied access on the table itself; however, the views still allow the user to select that password. I know I can go through and set this on a view-by-view basis, but I am looking for something a little more global.
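SQL Server 2005 does support column-level permissions, which gets part of the way there - a sketch with hypothetical object and principal names:

-- Hypothetical names throughout
DENY SELECT (Password) ON dbo.Users TO OrdinaryUsersRole  -- block the column for regular users
GRANT SELECT (Password) ON dbo.Users TO TrustedUser       -- allow the one privileged user

Be aware, though, that this won't automatically cover the views: when a view and its base table have the same owner, ownership chaining means SQL Server skips the permission check on the table, which is exactly why a table-level DENY doesn't stop access through the views. Breaking the ownership chain, or applying the DENY on the views as well, are the usual workarounds.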
SQL 7: I have added a Maintenance Plan to back up the master and msdb SQL databases, as well as another database relative to our application called WISE, to 4mm DAT tape. This works fine; however, it appears to always append to the media as opposed to overwriting (preferred). Any help would be appreciated.
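In the BACKUP command the plan ultimately runs, append vs. overwrite is controlled by the NOINIT / INIT option, and NOINIT (append) is the default. For reference, a sketch of the overwrite form - the tape device name here is an assumption:

BACKUP DATABASE WISE TO TAPE = '\\.\TAPE0' WITH INIT  -- INIT overwrites the media; NOINIT appends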
I am going to set up maintenance plans on all our SQL servers (7.0 and 2000). I have found several 'tutorials' on how to do this, but none of them describes the options in detail. Can you guys/gals please help me out? We have a lot of small databases and some medium ones (1-2 GB).
I am new to SQL Server 2000 and need ya all's help! I am trying to set up a database maintenance plan to back up databases and transaction logs. If I do a full backup once a week and a transaction log backup every day, will I have enough backups to be able to restore to any point in time by restoring the full backup and the transaction logs up to that point? In other words, I am asking what the points to consider are, and what a decent backup plan would be. Do transaction logs capture stored procedure changes also?
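That combination does support point-in-time restore, provided the database uses the Full recovery model and the log chain is unbroken. A sketch with hypothetical names and paths:

-- Weekly full backup:
BACKUP DATABASE MyDB TO DISK = 'E:\Backups\MyDB_full.bak' WITH INIT

-- Daily log backup:
BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn'

-- Point-in-time restore: the full backup first, then each log in sequence,
-- stopping at the desired moment:
RESTORE DATABASE MyDB FROM DISK = 'E:\Backups\MyDB_full.bak' WITH NORECOVERY
RESTORE LOG MyDB FROM DISK = 'E:\Backups\MyDB_log.trn'
    WITH STOPAT = '2005-11-16 14:30', RECOVERY

And yes - stored procedure changes are ordinary logged operations (they are rows in system tables), so log backups capture them.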
We have Veritas' Backup Exec running in our enterprise, and the Veritas install actually installs an MS SQL Server MSDE instance on each server in the enterprise.

It looks like it also sets up a default Maintenance Plan within each of the MSDE instances.

I guess my question is: can I manage the Maintenance Plans on these MSDE instances via the SQL Server EM GUI from my desktop? It seems like when I look at the Maintenance Plans, a lot of the options are greyed out or not available. What I am trying to do is modify one of the maintenance plans to have the backups deleted after one week (one of the instances has been running a complete backup on the Backup Exec databases for a year, and there is a year's worth of backups on the server), but the option to "remove files older than" is greyed out.
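For what it's worth, SQL 2000-era maintenance plans execute through xp_sqlmaint, and the "remove files older than" behaviour maps to its -DelBkUps switch, so one workaround may be to schedule the command yourself. A sketch only - the database name and path are hypothetical, and the switch spelling is worth verifying against the sqlmaint utility documentation:

EXECUTE master.dbo.xp_sqlmaint '-D BEDB -BkUpDB "E:\Backups" -BkUpMedia DISK -DelBkUps 1WEEKS'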
Does anybody have any good recommendations for maintenance plans?

Here are a few questions I have:

- When should indexes be re-indexed?
- What should be done first: reorganizing indexes or rebuilding indexes?
- How often should backups be done, and what kind?
- How often should database statistics be updated?
- Do database statistics need to be updated on system databases?
- How often should a database integrity check be done?
- How long should history be kept?
- What is a good order in which the tasks should be done?
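For reference, the individual tasks boil down to commands like these - a sketch assuming SQL Server 2005 syntax and hypothetical names; a common rule of thumb is to reorganize at low-to-moderate fragmentation and rebuild beyond roughly 30%:

ALTER INDEX ALL ON dbo.MyTable REORGANIZE    -- lighter, online, can be interrupted safely
ALTER INDEX ALL ON dbo.MyTable REBUILD       -- heavier; also refreshes those indexes' statistics
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN  -- refresh column statistics
DBCC CHECKDB ('MyDatabase')                  -- integrity check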
I created several Maintenance Plans before installing SP2. Now I need to modify them and I get the following error. I cannot even create new ones, because of the Enumerate error. Please advise if this error is due to the same issues mentioned on this blog.
When replying please cc me at Camilo.Torres@bellsouth.com
TITLE: Microsoft SQL Server Management Studio
Enumerate target servers failed for Job 'Daily Maintenance Plan 1'. (Microsoft.SqlServer.Smo)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.3042.00&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Enumerate+target+servers+Job&LinkId=20476
Failed to retrieve data for this request. (Microsoft.SqlServer.SmoEnum)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&LinkId=20476
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
String or binary data would be truncated. (Microsoft SQL Server, Error: 8152)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.3042&EvtSrc=MSSQLServer&EvtID=8152&LinkId=20476
We have been asked by a customer who uses our software product whether they should upgrade from SQL 6.5 to SQL 7 or SQL 2000. They have one server with 250 clients accessing SQL Server.
I used to think that upgrading to SQL 7 would be cheaper in light of the complaints I've heard from administrators about the cost of going to SQL 2000's new pricing model, so I took a look at Microsoft's web site. Navigating around their web site looking for licensing info is always an adventure!
I saw only SQL 2000 (non-upgrade) pricing on their web site, so I'm assuming that the pricing structure would be the same whether a customer upgrades to SQL 7 or SQL 2000 (minus the discount for upgrading). I see that a processor-based license ($2499 per processor, Standard Edition) with its unlimited CALs would be cheaper for a company with 250 clients, since the cost of CALs for 250 clients under the Standard Edition server-based pricing would be prohibitive.
So if the pricing is the same, the decision to upgrade to SQL 7 or SQL 2000 would rest upon features available and when SQL 7 will be retired.
Does anyone know how long Microsoft will support SQL 7? (I'm defining support as lasting until they stop providing patches and hotfixes, since after that the product is no longer actively supported.) The information I saw on Microsoft's web site made it seem like SQL 7 will be supported for a long time yet.

Does anyone know how long Microsoft will support SQL 6.5? That will give me an idea of how long our customers have to upgrade.
What would you recommend to your customers so as to benefit them most?
Is there a way to find out what Transact-SQL is being executed by a Maintenance Plan? More specifically, if I select Attempt to Repair on the Check Database Integrity screen, I know it is executing DBCC CHECKDB, but which repair option is it using (REPAIR_FAST, REPAIR_ALLOW_DATA_LOSS, or REPAIR_REBUILD)?
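One way to see exactly what the plan executes is to run a SQL Profiler trace (the SQL:BatchStarting and SP:StmtStarting events) while the plan runs. For reference, the repair options are invoked along these lines - a sketch with a hypothetical database name; note that the repair options require single-user mode:

ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB ('MyDB', REPAIR_REBUILD)
ALTER DATABASE MyDB SET MULTI_USER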
A year ago one of our SQL Server 6.5 servers was upgraded to SQL Server 7.0 sp1. My compatibility level still shows 6.5, however. The SQLAgent has been using the 'localsystem' account up until earlier this week. I changed the login to be a domain account with System Administrator permissions and removed the SA permissions from the BuiltinAdministrator group. (My ultimate goal is to limit the access NT Administrators have within my SQL databases.) All of my scheduled jobs run without error except the maintenance plans. (All jobs have an owner of sa.)
The errors that I receive are permission errors - not being able to get into tables in the MSDB database. However, if I open Query Analyzer with the SQLAgent domain account and perform a select on one of the tables in MSDB, it is successful. If I give the BuiltinAdministrator group the SA permissions again, while still keeping the SQLAgent using the new domain account, the maintenance jobs succeed.
Is this an upgrade problem since I do have other SQL 7.0 servers that don't have this problem? How can I correct this?
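A quick diagnostic that may help pin this down: run a T-SQL job step that simply records the security context it executes under, and compare it with what you expect - the context a job actually runs in (for sa-owned jobs vs. the agent's domain account) is often the culprit in permission mismatches like this. A sketch:

-- Put this in a T-SQL job step and check the job output after it runs:
SELECT SUSER_SNAME() AS login_name, USER_NAME() AS msdb_user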