I have a VB6 application using classic ADO (MDAC 2.8) to connect to MS SQL 2000 Server. The application uses a lot of server-side cursors. Now
I want to switch to MS SQL 2005 Server, but I have noticed a very serious
performance problem. SQL Profiler results for execution of the following
commands:
declare @p1 int
set @p1=180150131
declare @p3 int
set @p3=1
declare @p4 int
set @p4=16388
declare @p5 int
set @p5=22221
exec sp_cursoropen @p1 output,N' Select ... from ... where .... order
by ...',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
The results:

            SQL Server 2000    SQL Server 2005
CPU                     234               4703
Reads                 82515             678751
Writes                  136                  1
Duration                296               4867
Both databases are identical and the servers run on the same machine
(Pentium 2.8 GHz, 2 GB RAM) with only one client connected. On forums
I've read that Microsoft doesn't recommend using server-side cursors
on SQL 2005, but is there any way to increase performance to some
acceptable level?
I can't build a data connection in BIDS (error 80040154), and I checked the internet for possible solutions.
I downloaded Microsoft Component Checker and it informed me that I had a mismatch with MDAC 2.8 SP1 and XP SP2.
Specifically, SQLOLEDB.dll (expected 2000.85.1117.0), MSXML3.dll (expected 8.50.2162.0), and SQLXMLX.rll (expected 2000.85.1117.0) seem to be mismatched.
There are all kinds of instructions on the web for various fixes, and I am not certain what is current.
What do I need to load/uninstall in order to fix this problem?
All,
I have a problem regarding SQL Server 2000 SP3. I have an SP that calls another SP, and it inserts about 30,000 records at a time. In the development environment (MS Windows 2003 Enterprise, 256 MB RAM, 3.0 GHz Intel processor) this SP takes about 6 seconds to run. But with the same software on a 2.6 GHz Intel with 1 GB RAM it runs very slowly; it takes more than 135 seconds.
I have read a lot of articles about expanding the SQL memory and giving it a higher process priority, but with no luck. I don't know where the problem is. Do you have any idea what the problem might be?
Thank you in advance,
MAG
Can anyone give me some guidelines as to when to choose JOINs over returning multiple result sets in a stored procedure? For example, I have two tables, Orders and OrderDetails, which are linked by a primary key field. There can be orders without a corresponding record in OrderDetails.
1.) I can return all orders and their details using a stored procedure that has:
SELECT o.order_id AS OrderId, o.customername, od.order_id, od.orderdate FROM orders AS o LEFT OUTER JOIN orderdetails AS od ON (o.order_id = od.order_id)
2.) I can do the same by returning two result sets in a different stored procedure:
SELECT order_id, customername FROM orders
SELECT order_id, orderdate FROM orderdetails
I think the client processing time for the second option will be slightly less, because the result set I need to filter will only be as big as the OrderDetails table (it will not include records for orders that have no details). Regardless, I think this would only make for a small performance gain, so if Option 1 is better for database performance, I would probably go with that. I assume the method to choose also depends on table size and the number of JOINs. Any guidance would be appreciated. Thanks, Al
I have an OPENQUERY query like this:
Select * from openquery([db2],'Select * from tableA')
To only return and process records for a given date range, I changed it to something like this:
Select * from openquery([db2],'Select * from tableA') where datefield > '06/06/2001'
While this works fine, my question is: does it copy all the records from the DB2 server to the SQL Server before filtering them? I think it does. The DB2 table will eventually have over 1,000,000 records, and the SQL Server will use records for a given day/date range only.
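As you suspect, the pass-through text inside OPENQUERY is sent to DB2 exactly as written, and the outer WHERE is applied by SQL Server only after the rows have come back. A sketch of the usual workaround, assuming the same linked server, table, and column names as above (use whatever date-literal format the DB2 side actually expects):

    -- Filter applied on the DB2 side, inside the pass-through query:
    Select * from openquery([db2], 'Select * from tableA where datefield > ''06/06/2001''')

    -- If the cutoff date lives in a variable, OPENQUERY only accepts a literal string,
    -- so the statement has to be built dynamically:
    declare @cutoff char(10), @inner varchar(500), @sql nvarchar(1000)
    set @cutoff = '06/06/2001'
    set @inner  = 'Select * from tableA where datefield > ''' + @cutoff + ''''
    set @sql    = N'Select * from openquery([db2], ''' + replace(@inner, '''', '''''') + N''')'
    exec (@sql)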
I have tried many things, even the worst-case option I was trying to avoid, i.e. uninstalling MSDE SP3 and Analysis Services and their service packs, but the problem is still not solved...
Whenever I open a DTS package in design view and double-click on the link (the line connecting the two) between the source and destination servers, the PC goes to sleep and comes back after a long time, and then the same problem occurs when I press the Transformations tab...
I know this may sound weird but that really is the case :o
When using Named Pipes the issue is no longer there.
On massive batch inserts we sometimes get a long pause at the end of one insert and before beginning the next one. Example:
1000 inserts in the same table and then repeat. This will work fine for 3 or 4 iterations, then pause during the 5th iteration for up to 40 seconds and then simply continue.
When this exact same procedure is done using Named Pipes as the connection method this never happens.
While this is happening neither the server nor the workstation is doing anything: 0% CPU, 0% network; it just sits there.
All this using the SQL Native Client 2005 and ADO.
I created two tables: one partitioned and one non-partitioned.
Filegroups: Jan, Feb, ..., Dec. Partition function boundaries: '20060101', '20060201', ..., '20061201'. I am using a RIGHT range in the partition function. Then I defined a partition scheme on the partition function.
I have more than 700,000 rows in my database. I checked the filegroups and counted the rows, and it works fine.
But when I check the estimated execution plan and timings for a query, they are the same for both the partitioned table and the non-partitioned table.
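For reference, a sketch of the kind of setup being described (RANGE RIGHT, monthly boundaries, one filegroup per month); the table, column, and filegroup names below are illustrative rather than taken from the post:

    -- Monthly boundaries, RANGE RIGHT: each boundary date is the first day of its partition
    CREATE PARTITION FUNCTION pf_Monthly (datetime)
    AS RANGE RIGHT FOR VALUES
        ('20060101','20060201','20060301','20060401','20060501','20060601',
         '20060701','20060801','20060901','20061001','20061101','20061201')

    -- 12 boundaries make 13 partitions, so 13 filegroups are mapped
    CREATE PARTITION SCHEME ps_Monthly
    AS PARTITION pf_Monthly
    TO ([PRIMARY],[Jan],[Feb],[Mar],[Apr],[May],[Jun],[Jul],[Aug],[Sep],[Oct],[Nov],[Dec])

    -- Hypothetical partitioned table; the date column is the partitioning column
    CREATE TABLE dbo.SalesData
    (
        SaleID   int      NOT NULL,
        SaleDate datetime NOT NULL,
        Amount   money    NULL
    ) ON ps_Monthly (SaleDate)

Keep in mind that partitioning only changes the plan when the query filters on the partitioning column so that partition elimination can kick in; a query that touches all partitions will cost out much the same as the non-partitioned table.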
I am new to SQL Server 2000. I am eager to learn what factors/parameters are key for obtaining good retrieval performance of SQL Server 2000 (prompt response to user query).
I recall that someone told me that a recordset with the adOpenStatic cursor type is faster than a recordset with other cursor types.
Is this true or false? Are there really some key parameters for performance tuning?
I am struggling with the response time for a simple count on a fulltext search; it is too slow.
Even using the most simple query on a good server (64-bit dual Opteron, 4 GB RAM, with high-speed 16-disk RAID storage):
select count(*) from content_books where contains(searchData,'"english"')
takes 4 seconds to count the roughly 500,000 results. I have removed all the joins with real table data so that the query runs only inside the fulltext engine.
I would expect this to be down to 4 milliseconds. Isn't it just getting the size of the "english" word result index?
It seems the engine is going through all the results, because if I do a more complex search that returns fewer results the performance is better.
Any clues on how to do this faster? I never read the thousands of records BUT I need to count them...
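One thing that is sometimes tried (only a sketch; whether it actually helps depends on the catalog and the data) is to count the keys coming straight out of the full-text engine via CONTAINSTABLE, so the base table rows are never touched; content_books and searchData are the names from the query above:

    -- Count only the keys the full-text engine returns, without joining back to the base table
    select count(*)
    from CONTAINSTABLE(content_books, searchData, '"english"') as ft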
I am not familiar with cursors at all but need some assistance/direction on how to re-create the following.
The result set will need to give me a list of new customers with sign dates of last year and this current year, plus their sales for each year. If the customer has a date signed of 2007, I need to see the "year of sales" for both 2007 and 2008, whether or not they actually had sales.
I need to convert an Oracle cursor to SQL Server 2005. If at all possible I would like to stay away from the cursor and create this some other way. Any help would be greatly appreciated.
Customer   Date Signed   Year of Sale   Amount
1111       7/1/07        2007           $50,000.0
                         2008           0.0
Below is the oracle cursor that was used to calculate this information in the past.
DECLARE
  MBLDRN      BAV_BUILDER_NEW.BLDRN%TYPE;
  MYR_SIGNED  BAV_BUILDER_NEW.YEAR%TYPE;
  MCOMPANY    BAV_BUILDER_NEW.COMPANY%TYPE;
  MSHORTNAME  BAV_BUILDER_NEW.SHORTNAME%TYPE;
  MDSIGNED    BAV_BUILDER_NEW.DSIGNED%TYPE;
  MDCANCELED  BAV_BUILDER_NEW.DCANCELED%TYPE;
  MDMTERR     BAV_BUILDER_NEW.DMTERR%TYPE;
  MDMNAME     BAV_BUILDER_NEW.DMNAME%TYPE;
  MENGR_TEAM  BAV_BUILDER_NEW.ENGR_TEAM%TYPE;
  MSALESREGON BAV_BUILDER_NEW.SALESREGON%TYPE;
  MDIVCODE    BAV_BUILDER_NEW.DIVCODE%TYPE;
  MYRLOOP     NUMBER;
  MLSTYR      NUMBER;
  MSYEAR      DATE;
  MEYEAR      DATE;
  CURSOR CA IS
    SELECT YEAR, BLDRN, COMPANY, SHORTNAME, DSIGNED, DCANCELED,
           DMTERR, DMNAME, ENGR_TEAM, SALESREGON, DIVCODE
    FROM BAV_BUILDER_NEW
    ORDER BY YEAR;
BEGIN
  SELECT YEAR INTO MLSTYR FROM CURRENT_DATES;
  OPEN CA;
  LOOP
    FETCH CA INTO MYR_SIGNED, MBLDRN, MCOMPANY, MSHORTNAME, MDSIGNED, MDCANCELED,
                  MDMTERR, MDMNAME, MENGR_TEAM, MSALESREGON, MDIVCODE;
    EXIT WHEN CA%NOTFOUND;
    MYRLOOP := MYR_SIGNED;
    WHILE MYRLOOP <= MLSTYR LOOP
      SELECT MIN(SYEAR) INTO MSYEAR FROM UDBDATES WHERE YEAR = MYRLOOP;
      SELECT MAX(EYEAR) INTO MEYEAR FROM UDBDATES WHERE YEAR = MYRLOOP;
      BEGIN
        INSERT INTO BAV_BUILDER_NEW_DATA
          (YR_SIGNED, DIVCODE, SALESREGON, BLDRN, COMPANY, SHORTNAME,
           DSIGNED, DCANCELED, ENGR_TEAM, DMTERR, DMNAME, YR_SALES)
        VALUES
          (MYR_SIGNED, MDIVCODE, MSALESREGON, MBLDRN, MCOMPANY, MSHORTNAME,
           MDSIGNED, MDCANCELED, MENGR_TEAM, MDMTERR, MDMNAME, MYRLOOP);
      END;
      MYRLOOP := MYRLOOP + 1;
    END LOOP;
    COMMIT WORK;
  END LOOP;
  COMMIT WORK;
  CLOSE CA;
END;
/
COMMIT WORK;
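If the goal is to avoid a cursor, the loop above can usually be collapsed into a single set-based INSERT. A sketch under the assumption that the table and column names carry over unchanged to SQL Server and that CURRENT_DATES.YEAR still holds the latest year; note that the MIN(SYEAR)/MAX(EYEAR) lookups in the Oracle version are fetched but never used in the INSERT, so they are dropped here:

    -- Latest year, as in the Oracle block's SELECT YEAR INTO MLSTYR FROM CURRENT_DATES
    declare @LastYear int
    select @LastYear = [YEAR] from CURRENT_DATES

    -- One output row per builder per year, from the year signed up to @LastYear
    ;with Years as
    (
        select min([YEAR]) as Yr from BAV_BUILDER_NEW
        union all
        select Yr + 1 from Years where Yr + 1 <= @LastYear
    )
    insert into BAV_BUILDER_NEW_DATA
        (YR_SIGNED, DIVCODE, SALESREGON, BLDRN, COMPANY, SHORTNAME,
         DSIGNED, DCANCELED, ENGR_TEAM, DMTERR, DMNAME, YR_SALES)
    select b.[YEAR], b.DIVCODE, b.SALESREGON, b.BLDRN, b.COMPANY, b.SHORTNAME,
           b.DSIGNED, b.DCANCELED, b.ENGR_TEAM, b.DMTERR, b.DMNAME, y.Yr
    from BAV_BUILDER_NEW as b
    join Years as y
        on y.Yr between b.[YEAR] and @LastYear
    option (maxrecursion 0)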
Please advise: when I use a loop in the code below it only returns the last value from the sub-table; however, I wanted to fetch all values from the sub-table. Please advise.
ALTER Procedure [dbo].[SyncEmpContGrid] @EmpID As Varchar(20)
As
Declare @CountryEOF bit,
@i int,
@CtR Varchar (1000)
begin
Declare @C_list Cursor
Declare ContCr Cursor Local for
(Select custCountryofExperience from Employees_CountryExperience where Employee=@EmpID)
Set @C_List = contCr
Open @C_List
Fetch next from @C_List into @CtR
While (@@Fetch_Status = 0 )
Begin
set @ctR = @ctR + ','
Fetch next from @C_List into @CtR
update EmployeeCustomTabFields
Set custTxt_Ct = @CtR Where (EmployeeCustomTabFields.Employee = @EmpID);
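For comparison, a minimal sketch of one way the list could be accumulated, keeping the table and column names from the procedure above (@EmpID is the procedure's parameter; @Country is a working variable added here for illustration): fetch into the working variable and append it to a separate accumulator, then run the UPDATE once after the loop.

    declare @CtR varchar(1000), @Country varchar(100)   -- @Country is a hypothetical working variable
    set @CtR = ''

    declare ContCr cursor local fast_forward for
        select custCountryofExperience
        from Employees_CountryExperience
        where Employee = @EmpID

    open ContCr
    fetch next from ContCr into @Country
    while @@FETCH_STATUS = 0
    begin
        set @CtR = @CtR + @Country + ','   -- append; do not fetch into the accumulator itself
        fetch next from ContCr into @Country
    end
    close ContCr
    deallocate ContCr

    -- single update once the whole list has been built
    update EmployeeCustomTabFields
    set custTxt_Ct = @CtR
    where Employee = @EmpID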
What components of I.S. would I use to simulate a cursor in SQL like this?
Example of a simple cursor:
declare c_CURSOR cursor for
    SELECT DISTINCT V.CODIGO
    FROM GB_RELACIONAL.DBO.VISITA V
    INNER JOIN MEDIDAS_LINEAL_PROMOCION_200604_02 M
        ON M.CODIGO_VISITA = V.CODIGO
       AND M.TIEMPO_ENTRE_VISITAS <> V.TIEMPO_ENTRE_VISITAS

OPEN C_CURSOR
FETCH NEXT FROM C_CURSOR INTO @CODIGO
WHILE @@FETCH_STATUS = 0
BEGIN
    -- do various jobs: selects, inserts, etc. using @CODIGO
    FETCH NEXT FROM C_CURSOR INTO @CODIGO
END
CLOSE C_CURSOR
DEALLOCATE C_CURSOR
What I need is to pass @codigo on every loop iteration to other components in I.S. that do inserts into other related tables.
My questions are:
a) Which component would I use to simulate a looping cursor like the example?
b) How do I store and pass @codigo to the other I.S. components that do the selects and inserts?
c) Which I.S. components would I use, and how, to receive @codigo and do the corresponding selects and inserts in the related tables using that variable?
I need an idea of the I.S. components, their configuration, and the flow for doing something like this...
We are in the process of upgrading from SQL 7.0 SP1 on an NT 4 EE SP6a active/passive cluster to SQL 2000 SP2 on the same NT 4 EE SP6a active/passive cluster (single instance, I think it is called now).
My question now is about MDAC.
Do I need to upgrade all my clients (W98 SE), which currently have MDAC 2.5 installed, to version 2.6?
Are there any MS articles that clearly explain this topic?
We have a SQL Server 7.0 running on NT 4.0 with MDAC 2.1 installed.
We also have a SQL Server 2000 running on Windows 2000 with MDAC 2.6 installed, as well as other instances of 7.0 and 2000.
I am trying to set up push replication from the box with SQL 7.0 and MDAC 2.1 installed to a 2000 server with MDAC 2.6.
A SQL 7 to 2000 push works if an instance of 7 and 2000 are installed on the same machine. I can push to any remote machine, except for the one server with MDAC 2.1 installed.
We have a large project that imports msado21.tlb. We want to import at least msado25.tlb to access the ADODB::_Stream object, but we are unsure what exactly this entails. I've noticed no problems testing that change, but I cannot find any changelog for the MDAC upgrade. Is anyone aware of, or can anyone point me to, a list of changes (fixes, modifications, caveats, etc.) between MDAC 2.1 and MDAC 2.5? The binary msado file is always much newer, of course, but we want to link with the older type libraries to maintain maximum compatibility.
STATIC: Defines a cursor that makes a temporary copy of the data to be used by the cursor. All requests to the cursor are answered from this temporary table in tempdb; therefore, modifications made to base tables are not reflected in the data returned by fetches made to this cursor, and this cursor does not allow modifications.
It says that modifications are not allowed in a static cursor. I have a question regarding that.
Static cursor:
declare ll cursor global static for select name, salary from ag
open ll
fetch from ll
while @@FETCH_STATUS = 0
fetch from ll
update ag set salary = 200 where 1 = 1
close ll
deallocate ll
In "AG" table, "SALARY" was 100 for all the entries. When I run the Cursor, it showed the salary value as "100" correctly.After the cursor was closed, I run the query select * from AG.But the result had updated to salary 200 as given in the cursor. file says modifications is not allowed in the static cursor.But I am able to update the data using static cursor.
Hello,
I have a test database with a table A containing 10,000 rows and a table B containing 100,000 rows. Rows in B are "children" of rows in A: each row in A has 10 related rows in B (i.e. B has a foreign key to A).
Using ODBC I am executing the following loop 10,000 times, expressed below in pseudo-code:
"select * from A order by a_pk option (fast 1)"
"fetch from A result set"
"select * from B where fk_to_a = 'xxx' order by b_pk option (fast 1)"
"fetch from B result set" (repeated 10 times)
In the above pseudo-code 'xxx' is the primary key of the current A row. NOTE: it is not a mistake that we are repeatedly doing the A query and retrieving only the first row.
When the queries use fast-forward-only cursors this takes about 2.5 minutes. When the queries use dynamic cursors this takes about 1 hour.
Does anyone know why the dynamic cursor is killing performance? Because of the SQL Server ODBC driver it is not possible to have nested/multiple fast-forward-only cursors, hence I need to explore other alternatives.
I can only assume that a different query plan is getting constructed for the dynamic cursor case versus the fast-forward-only cursor, but I have no way of finding out what that query plan is.
All help appreciated.
Kevin
I'm trying to implement an sp_MSforeachsp; however, when I call sp_MSforeach_worker I get the following errors. Can you please explain this problem to me so I can overcome the issue?
Msg 16958, Level 16, State 3, Procedure sp_MSforeach_worker, Line 31
Could not complete cursor operation because the set options have changed since the cursor was declared.
Msg 16958, Level 16, State 3, Procedure sp_MSforeach_worker, Line 32
Could not complete cursor operation because the set options have changed since the cursor was declared.
Msg 16917, Level 16, State 1, Procedure sp_MSforeach_worker, Line 153
Cursor is not open.
here is the stored procedure:
Alter PROCEDURE [dbo].[sp_MSforeachsp]
@command1 nvarchar(2000)
, @replacechar nchar(1) = N'?'
, @command2 nvarchar(2000) = null
, @command3 nvarchar(2000) = null
, @whereand nvarchar(2000) = null
, @precommand nvarchar(2000) = null
, @postcommand nvarchar(2000) = null
AS
/* This procedure belongs in the "master" database so it is accessible to all databases */
/* This proc returns one or more rows for each stored procedure */
/* @precommand and @postcommand may be used to force a single result set via a temp table. */
declare @retval int
if (@precommand is not null) EXECUTE(@precommand)
/* Create the select */
EXECUTE(N'declare hCForEachTable cursor global for
What is the best way of reliably closing a cursor inside a BEGIN CATCH block? The simple problem statement is: I want to check whether the cursor is open and, if so, close it. I'm trying to use the CURSOR_STATUS function and it seems to return -3 (both when the cursor is open and when it is not). Is this a bug or am I missing something?
I have removed all of my code and only provided what is necessary to repro the problem.
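For what it's worth, a sketch of the pattern I would expect to work (the cursor name and the dummy work inside TRY are made up here). The scope argument passed to CURSOR_STATUS has to match how the cursor was declared ('local' vs 'global', or 'variable' for a cursor variable); passing the wrong scope is the usual reason it returns -3, which means a cursor with that name does not exist.

    begin try
        declare cur cursor local for
            select name from sys.objects       -- hypothetical work
        open cur
        raiserror('simulated failure', 16, 1)  -- force a jump into the CATCH block
        close cur
        deallocate cur
    end try
    begin catch
        if CURSOR_STATUS('local', 'cur') >= 0   -- 1 = open with rows, 0 = open but empty
            close cur
        if CURSOR_STATUS('local', 'cur') >= -1  -- -1 = closed but still allocated
            deallocate cur
        -- re-raise or log the original error here as needed
    end catch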
Hi folks. Got a real doozy going. One of the network techs reinstalled NT4 SP5 on a server. This reverted the SQL driver back to 2.something. Now SQLServerAgent jobs will not run. I have tried to install MDAC 2.5, but it doesn't change the driver up to the new 3.xxx driver. I searched around on here and folks said that you have to move the MDAC executable to a new directory; for the new MDAC I don't see MDAC.EXE. I did find MDAC.COM and renamed that. Tried to rerun MDAC 2.5 but still got the old SQLSRV32.DLL. Uninstalled SQL Server and reinstalled; still got the old SQLSRV32.DLL. Tried to apply SQL Server SP1. It died. Found that I am supposed to shut down some services. Did that. SP1 still dies. Tried to apply SQL Server SP2. It tries to pick up where it died and it dies again. So I'm currently uninstalling SQL Server, rebooting, installing SQL Server, rebooting, and trying SP2 again. Didn't think that it was brain surgery just to install a new flippin version of the ODBC driver.. SHEEZ!!!!
I'm in a mild debate with some colleagues of mine. Can someone shed some light on this for me?
(This is related to a problem we are having with empty recordsets being returned where we know there should be rows returned. I think it could be caused by incorrect/outdated MDAC components.)
We use SQL 7 SP2 and ADO DSN-less connection strings.
MSSQL 7.0 SP2, Win2K Pro, ATI 32 MB video, 1.3 GHz AMD, 1 GB RAM.
No other apps are affected... my once fast system is slowed to a crawl. 5+ GB free on C:, 37 GB on E:, and NO apps loaded in the systray. NO background processes, none at all.
I installed MDAC 2.6 on a system to try to correct an OLE error when using sp_OACreate that causes a broken ODBC connection and shuts down SQL every other execution.
***** Now, the system is running slow, I tried reinstalling, moving the data to a single local drive. (No other apps appear to be slowed)
Uninstalled MDAC 2.6 and reverted to 2.5 still slow.
Reinstalling MSSQL 7 and SP2, and MDAC 2.5 still slow...
As an example, when I run a simple xp_cmdshell 'dir c:\mssql7\data\*data.mdf', the resulting 10 lines are slowly typed across the screen. On most systems the text is refreshed so fast we only see a flicker.
Cleaning the registry: still slow... Profiler shows DML and Ghost lines that appear to suck the life out of my system, but it's only pegged at 2%, like it's doing something, but what(?)
Tonight I may have to reformat and reload; however, I've seen this before and would really like a clue.