Error While Handling Large Number Of Records

Jan 29, 2008

Hi

I have created a stored procedure which handles global updates. I am using a cursor to fetch rows one by one for updating (this is required for implementing the business logic).

Now when I execute this stored procedure, it gives me a deadlock error, and I don't know why I am getting it (approx. 150,000 rows).

If I first remove unnecessary records from the table (approx. 50,000), it works fine. Is there any way to handle this? I am calling this stored procedure from my Windows service.

Please give me a good solution if possible.
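A common mitigation worth trying (a sketch only - the table and column names are hypothetical, since the actual procedure isn't shown): if the procedure holds one transaction open across the whole cursor loop, commit every few thousand rows instead, so locks are released periodically and there is less opportunity to deadlock with other sessions.

-- Sketch: cursor update committing every 1000 rows.
-- dbo.MyTable and the SET clause are stand-ins for the real logic.
DECLARE @id INT, @rows INT;
SET @rows = 0;

DECLARE upd_cursor CURSOR LOCAL STATIC FOR
    SELECT id FROM dbo.MyTable;

OPEN upd_cursor;
BEGIN TRAN;

FETCH NEXT FROM upd_cursor INTO @id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.MyTable
    SET val = val + 1              -- stand-in for the business logic
    WHERE id = @id;

    SET @rows = @rows + 1;
    IF @rows % 1000 = 0            -- release locks periodically
    BEGIN
        COMMIT TRAN;
        BEGIN TRAN;
    END;

    FETCH NEXT FROM upd_cursor INTO @id;
END;

COMMIT TRAN;
CLOSE upd_cursor;
DEALLOCATE upd_cursor;

Capturing the deadlock graph (trace flag 1204, or the deadlock graph event in SQL Server 2005+ Profiler) would show which two statements are actually colliding.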
 

View 3 Replies



Inserting Large Number Of Records

Jun 20, 2006

Hi!

The DB I am working with has about 10 tables and some of the tables have 200,000 to 500,000 records. All tables have a clustered index on the primary key.

I think the INSERT performance could be better - I add thousands of records at a time from many connections.

Is there a way to defer the update of indexes, so that I can update the tables and then let the indexes regenerate?

thanks,

Jas.
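There's no way to defer index maintenance on a per-insert basis, but if this is SQL Server 2005 or later, nonclustered indexes can be disabled for the load and rebuilt afterwards. A sketch (table and index names are hypothetical; the clustered index must stay, since disabling it takes the table offline):

-- Sketch: stop maintaining a nonclustered index during the bulk load,
-- then rebuild it once at the end.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders DISABLE;

-- ... perform the bulk inserts here ...

ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

Whether this wins depends on load size: one rebuild is cheaper than millions of incremental index updates, but queries can't use the index while it is disabled.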

View 4 Replies View Related

SQL Server 2012 :: Delete Large Number Of Records?

Sep 8, 2015

I have the following scenario:

SQL database on SQL 2012

Large production table, 15 million records

The table has 3 years of data

New monthly data is being added every month.

New monthly data is loaded, checked, and finally approved after 6 or 7 iterations. Because of these iterations, the monthly data set is added, then deleted, then added again several times. Because the table is big, this process takes time. Any thoughts on how to make the delete/insert process faster? Keep in mind I cannot change much, because it is a production table and is being accessed by other users for other analysis.

Delete is done based on trx_date which is a year/month combo, like 201508.

The table has monthly sales by customer aggregated.

The table structure is:

CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,

[code].....
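One approach that usually helps here (a sketch only, based on the structure above): delete the month in small batches, so each transaction stays short, the log doesn't balloon, and other users are blocked only briefly between batches.

-- Sketch: delete one month (e.g. 201508) in chunks of 10,000 rows.
DECLARE @month INT;
SET @month = 201508;

WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.Sales
    WHERE Trx_Date = @month;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left for this month
END;

Since this is SQL 2012, the longer-term fix is to partition the table by Trx_Date: switching a month's partition out (ALTER TABLE ... SWITCH) makes the delete side of each iteration nearly instant, at the cost of a one-time redesign.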

View 9 Replies View Related

SQL Server 2005 Full Text Performance With Large Number Of Records

Dec 12, 2007

Hi
We are using the SQL Server 2005 Full Text Service. The data is not huge, but each record is small and there are a large number of records: 35 million now, with 11 GB of data and about 1.6 GB of FT catalog on the table, and this is expected to grow to at least 10 times that size. The issue is that FTS takes a long time to return results when the number of hits (rows) coming back is large; for some searches it takes a very long time. With the same data and catalog, full-text queries for less common words return in a timely manner. The nature of the problem doesn't allow us to keep only the top results - we need all the results - so it's not about the size of the data but the number of results returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.

I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches in smaller catalogs. Is there any other way this issue can be solved?

If splitting is the only way, can you give me an idea of the statistical/standard limit to the number of search results / catalog size at which FTS gives good results?
 
Thanks in advance
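If the table does get split, the smaller catalogs can still be queried as one result set on a single server; a minimal sketch (hypothetical table/column names, each part having its own full-text index). Spreading the parts across servers would instead mean linked servers or combining results in the application.

-- Sketch: search two horizontal partitions and combine the hits.
SELECT d.DocId, k.[RANK]
FROM CONTAINSTABLE(dbo.Docs_Part1, SearchText, 'keyword') AS k
JOIN dbo.Docs_Part1 AS d ON d.DocId = k.[KEY]
UNION ALL
SELECT d.DocId, k.[RANK]
FROM CONTAINSTABLE(dbo.Docs_Part2, SearchText, 'keyword') AS k
JOIN dbo.Docs_Part2 AS d ON d.DocId = k.[KEY];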

View 1 Replies View Related

Error Occurs While Deleting Large Volume Of Records

Jan 6, 2004

I wrote an ActiveX service to delete records from several tables, which hold more than 100,000 records. When I test it by deleting 1,000 records it is OK, but once the volume increases to 100,000 it gives me an error message saying the operation failed with a timeout.

How can I overcome this problem? Please help!

View 8 Replies View Related

Methods For Handling Large Datasets

Apr 7, 2008

Let's say you want to offer users access to a table that has over 1,000,000 records in it. You don't want to fetch the entire table into a dataset; even with paging, it will still cause a performance lag, will it not?

Let's say there was a column that held last names. You could then divide up all the data into alphabetical groups by letter. That will at least let you split up the data 26 different ways (letters in the alphabet). That's still roughly 38,000 possible records in the dataset at any given time.

I suppose since the database and web server are very close to each other, physically, the latency shouldn't be too bad. However, I'm worried about having several dozen or more users, each requesting datasets with tens of thousands of records. That would most likely cause the memory use on the web server to spike. Right now, it's just one user doing this sort of processing, but that might grow to several dozen.

Ideas?
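One way to avoid both giant datasets and letter-bucketing is to page on the database side, so the web server only ever materializes the rows for the page being viewed. A minimal sketch (SQL Server 2005+, hypothetical table/column names):

-- Sketch: return rows 101-150 only (page 3 of 50-row pages).
SELECT LastName, FirstName
FROM (
    SELECT LastName, FirstName,
           ROW_NUMBER() OVER (ORDER BY LastName, FirstName) AS rn
    FROM dbo.People
) AS numbered
WHERE rn BETWEEN 101 AND 150;

Each request then costs one indexed query instead of a 38,000-row dataset in web-server memory.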

View 3 Replies View Related

Handling Large Data Values In Textboxes

Mar 28, 2007

Hi, I've got a report using a List item that vertically displays the columns from a table. The problem I run into is that some of the fields in this table contain large blocks of text where users have entered comments and such.

I am using Textboxes to display this data.

So my report will look something like
-----
Field label 1 Field value 1
Field label 2 Field value 2
Field label 3


<white space>



<page break>

Field value 3 ---> this is a big block of text
Field label 4 Field value 4
etc
------
It appears as though the report attempts to keep the contents of each textbox together, even if that means breaking onto an entirely new page to do so. I would prefer the data to flow more naturally, with the page breaking in the middle of the data being displayed should it be too large to fit on the page it started on.

-----
Field label 1 Field value 1
Field label 2 Field value 2
Field label 3 Field value 3 --- As much as can fit on this page

<page break>

Field value 3 ---> remaining data that broke over the page
Field label 4 Field value 4
etc
------

Any suggestions would be appreciated.

View 3 Replies View Related

Handling Character Large Objects In Java Application

Jul 13, 2004

In my Java application, I have a stream of character data in a java.io.Reader object.
I am using a PreparedStatement object to insert data into a table containing such a large object column (datatype: text). I am using the following API call:
PreparedStatement.setCharacterStream(colIndex, reader, size);

In order to find the size for the above call, I first read the stream and find its length (which consumes the Reader). After this I get the following error message and the data is not inserted:

Exception during insertion : Failed for MYTABLE Reason [Microsoft][SQLServer 2000 Driver for JDBC]Transliteration failed.

Is there any alternate method to handle this? Please help.

View 1 Replies View Related

Handling Large Queries In A Table Valued User Defined Function

May 20, 2008

Hello,

We have created several table-valued user-defined functions in a production SQL Server 2005 DB that return large numbers (tens of thousands) of rows obtained through a web service. Our code is based on the MSDN article "Extending SQL Server Reporting Services with SQL CLR Table-Valued Functions".

What we have found in our implementations of variations of this code on three separate servers is that as the rowset grows, the length of time required to return the rows grows far faster than linearly. With 10 columns, we have maxed out at approximately 2,500 rows. Once our rowset hit that size, no rows were being returned and the queries were timing out.

Here is a chart comparing the time elapsed to the rows returned at that time, for a sample trial I ran:

Sec / Actual Rows Returned
0 0
10 237
20 447
30 481
40 585
50 655
60 725
70 793
80 860
90 940
100 1013
110 1081
120 1115
130 1151
140 1217
150 1250
160 1325
170 1325
180 1430
190 1467
200 1502
210 1539
220 1574
230 1610
240 1645
250 1679
260 1715
270 1750
280 1787
290 1822
300 1857
310 1892
320 1923
330 1956
340 1988
350 1988
360 2022
370 2060
380 2094
390 2094
400 2130
410 2160
420 2209
430 2237
440 2237
450 2274
460 2274
470 2308
480 2342
490 2380
500 2380
510 2418
520 2418
530 2451
540 2480
550 2493
560 2531
570 2566

It took 570 seconds (just over 9 1/2 minutes) to return 2,566 rows.

The minute breakdown during my trial is as follows:
1 = 655 (+ 655)
2 = 1081 (+ 426)
3 = 1325 (+244)
4 = 1610 (+285)
5 = 1822 (+212)
6 = 1988 (+166)
7 = 2160 (+172)
8 = 2308 (+148)
9 = 2451 (+143)

As you can tell, except for a few discrepancies in the resulting row count at minutes 4 and 7 (which I attribute to timing, as the results grid in SQL Server Management Studio was updated once every 5 seconds or so), as time went on, fewer and fewer rows were returned in a given time period. This was a "successful" run, as the entire rowset was returned, but on more than several occasions we have hit the limit and had 0 new rows per minute towards the end of execution.

Allow me to explain the code in further detail:

[SqlFunction(FillRowMethodName = "FillListItem")]
public static IEnumerable DiscoverListItems(...)
{

ArrayList listItems = new ArrayList();

SPToSQLService service = new SPToSQLService();

[...]

DataSet itemQueryResult = service.DoItemQuery(...); // This is a synchronous call returning a DataSet from the Web Service

//Load the DS to the ArrayList


return listItems;
}


public static void FillListItem(object obj, out string col1, out string col2, out string col3, ...)
{

ArrayList item = (ArrayList) obj;


col1 = item.Count > 0 ? (string) item[0] : "";
col2 = item.Count > 1 ? (string) item[1] : "";  // guard each index, not just index 0
col3 = item.Count > 2 ? (string) item[2] : "";
[...]
}

As you will notice, the web service is called, and the DataSet is loaded into an ArrayList object (containing ArrayList objects) before the main ArrayList is returned by the UDF method. There are 237 rows returned within 10 seconds, which leads me to believe that all of this has occurred within those 10 seconds: the DiscoverListItems method has executed completely, and the ArrayList is now being iterated through by the code calling the FillListItem method. I believe that this iteration is causing the result set to be returned at a decreasing rate. I know that the DiscoverListItems code is only executed once and that the web service is only called once.


Now a lot of my larger queries (> 20,000 rows) have timed out because of this behaviour, and my workaround was to customize my web service to page the data in reasonable chunks and call my UDFs in a loop using T-SQL. This means calling the web service up to 50 times per query in order to return the result set.

Surely someone else who has used table-valued UDFs has come across this problem. I would appreciate some feedback from someone in the know as to whether I'm doing something wrong in my code, or how to optimize SQL Server properly to allow better performance with CLR functions.

Thanks,

Dragan Radovic

View 7 Replies View Related

Enable Error Handling When Writing Custom Source Component / Custom Error Handling Component

Apr 21, 2006

1) We are writing a custom source component for Oracle with OCI calls. Could someone please let me know how to enable error handling for it?

2) Is it possible to write a custom error handling component for SSIS? If yes, could you please help me with how to write it?

Thanks in advance.

View 1 Replies View Related

Handling Out-dated Transaction Records

Sep 1, 2006

For performance reasons, I believe many programs should have a housekeeping procedure to clean up transaction history. Is there any best practice for this? Or should it be done simply by moving transaction data from the transaction table into a history table? Are there any better approaches or considerations I should be aware of?
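On SQL Server 2005 and later, the move itself can be done atomically, so no row is lost between the copy and the purge. A sketch with hypothetical table/column names (the history table must have a compatible column layout):

-- Sketch: archive transactions older than 90 days in one statement.
-- OUTPUT..INTO writes each deleted row to the history table as part
-- of the same operation, so the move is atomic.
DELETE FROM dbo.Transactions
OUTPUT DELETED.* INTO dbo.TransactionsHistory
WHERE TranDate < DATEADD(DAY, -90, GETDATE());

For large volumes this is usually run in batches (DELETE TOP (n) ... in a loop) so each transaction stays short.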

View 1 Replies View Related

Handling Multiple Data Records With Sp

May 2, 2007

I have a need to read a table and extract records that match a criteria (d'oh! - not that simple). Then I put the records in a text message and send them out via email. It works OK, but the problem is when I have multiple records for one entry. Since SQL doesn't have arrays per se, I need to load the records into a "temporary array", then assemble the message at the end to include the records. Any suggestions on how to "emulate" an array in a stored procedure (SQL)? I'm not sure what direction to follow. Thanks.
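The usual stand-in for an array in T-SQL is a table variable (or a temp table). A minimal sketch with hypothetical names - collect the matching rows, then fold them into one message body (NVARCHAR(MAX) needs SQL 2005+; on SQL 2000 use NVARCHAR(4000)):

-- Sketch: gather matching rows, then append each one to the message.
DECLARE @lines TABLE (line NVARCHAR(200));

INSERT INTO @lines (line)
SELECT OrderRef
FROM dbo.Orders
WHERE CustomerId = 42;

DECLARE @body NVARCHAR(MAX);
SET @body = N'';

SELECT @body = @body + line + NCHAR(13) + NCHAR(10)
FROM @lines;

-- @body now holds one line per matching record, ready to email.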

View 2 Replies View Related

How To Number Records / Duplicates Receiving Same Number

Feb 19, 2013

I have a large table of customers. I would like to add a column that contains an integer, unique to each customer. The trick is that this file contains many duplicate customers, so I want the duplicates to all share the same number. The numbers don't have to be sequential or anything - just duplicate customers getting the same one.
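DENSE_RANK does exactly this: every row with the same key gets the same number, and distinct keys get distinct numbers. A sketch - the columns that define "the same customer" are hypothetical here:

-- Sketch: duplicates (same name + postcode) share a CustomerNo.
SELECT FirstName, LastName, Postcode,
       DENSE_RANK() OVER (ORDER BY LastName, FirstName, Postcode) AS CustomerNo
FROM dbo.Customers;

To persist it, the same expression can feed an UPDATE through a CTE against the new column.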

View 8 Replies View Related

Number Of SQL Queries Too Large?

Jul 28, 2004

Hi All,

I'm currently in the middle of building quite a large CMS using ASP.NET and MSSQL2K, and I have begun to question whether the number of queries used to build one page is too many.

For one page (View Forum) I am getting all of the templates and checking access, then pulling a list of threads, getting the first and last posts, then user info for the first and last posts... anyway, to view 10 threads on the page the number of queries comes to about 54, and the page takes 0.064 seconds to load.

My question is: is this too many queries to be running for a single page load? All queries use stored procedures.

Thanks Guys.

View 3 Replies View Related

Large Number Of Connections

Jul 19, 2004

hi,
Does a large number of connections to a SQL Server result in slow-running queries? By a large number of connections I mean in the range of 2,000 to 2,500 connections to various databases on the same server.

This happens on the production database. If I download a copy of the database and run the specific queries on my local box, they take hardly 1 second to execute, whereas on the production box they take about 45 to 60 seconds. I am working on the indexes, but I was not sure whether the number of connections to a server affects the performance of queries, or whether the indexes just need defragmenting.

View 2 Replies View Related

Very Large Number Of Rows

Jul 23, 2005

We are busy designing a generic analytical system at work that will hold multiple analytic types over time. This system is being developed in SQL 2000.

Example of table:

IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)

ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type
The [PK] tag indicates the composite primary key.

Our scenario is the following:
* For this time series data, we expect around 250 days per year (working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items

Looking at the combined calculation, this comes to roughly 20 years * 250 days * 50 types * 20,000 items, or around 5 billion rows.

We will be inserting around 50 * 20,000, or around 1 million, rows each day. The inserts will take place in the middle of the night (outside the main query time) - this could be done through something like BCP or BULK INSERT.

Our real problem is that we have not worked with such large tables before and are nervous that our system is going to grind to a halt. Our biggest tables are around 20 million rows at the moment.

Scanning through Google and Microsoft's own site we have found a partitioning method that is available:
http://www.microsoft.com/resources/...art5/c1861.mspx

Having experimented with the above system, it seems rather quirky, and from the available literature it seems no more effective than a clustered index as far as queries go.

It needs to be optimized for queries like: given the ItemID and the AnalyticType, search for a specific date or a specific range of dates.

If anyone has any experience or helpful suggestions I would really appreciate it.

Thanks
A
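Given that query shape, the composite clustered key can be ordered to match it - ItemId and AnalyticType leading, date last - so each lookup or date-range scan reads one contiguous slice of the index. A minimal sketch under the definitions above (not a tested design for 5 billion rows):

-- Sketch: clustered key ordered for ItemId + AnalyticType + date-range queries.
CREATE TABLE dbo.Analytics (
    ItemId        INT            NOT NULL,
    AnalyticType  INT            NOT NULL,
    AnalyticDate  DATETIME       NOT NULL,
    Value         NUMERIC(28,15) NOT NULL,
    CONSTRAINT PK_Analytics PRIMARY KEY CLUSTERED
        (ItemId, AnalyticType, AnalyticDate)
);

-- The kind of query the key is designed for:
SELECT AnalyticDate, Value
FROM dbo.Analytics
WHERE ItemId = 12345
  AND AnalyticType = 7
  AND AnalyticDate BETWEEN '20050101' AND '20050331';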

View 4 Replies View Related

Large Number Of Rows.

Apr 3, 2007

Hi all,

A select query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?

Does the number of records returned affect the performance in spite of the indexing?

Thanks,

DBLearner

View 3 Replies View Related

Handling SQL RollBack Transaction For More Records In A Single Process

Mar 6, 2008

I have over 500 transaction records handled in a single DB process within a SQL transaction (Begin, Commit, RollBack, End).
Is there any limitation on the following rollback function when handling more records (e.g. over 500)?

Public Shared Sub RollBackTransaction()
    Dim transactionObj As Object
    Try
        transactionObj = SqlTransaction.GetExistingTransaction
        If (Not IsNothing(transactionObj)) Then
            CType(transactionObj, SqlTransaction).RollBack()
        End If
    Catch ex As Exception
        Throw New Exception(ex.Message)
    End Try
End Sub

View 2 Replies View Related

Integration Services :: Handling Duplicate Records In SSIS?

Oct 14, 2011

I have one SSIS package moving data from staging to destination. In the staging table we have duplicate data, but in the destination table four columns form the primary key. How can I handle the duplicate records in the OLE DB source?
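A common way to handle this is to de-duplicate in the OLE DB source's SQL command itself, keeping one row per primary-key combination. A sketch with hypothetical column names (the ORDER BY decides which duplicate survives):

-- Sketch: keep a single row per value of the four key columns.
SELECT k1, k2, k3, k4, payload
FROM (
    SELECT k1, k2, k3, k4, payload,
           ROW_NUMBER() OVER (PARTITION BY k1, k2, k3, k4
                              ORDER BY load_date DESC) AS rn
    FROM dbo.Staging
) AS s
WHERE rn = 1;

Alternatively, the Sort transformation with "remove rows with duplicate sort values" does the same inside the data flow, but pushing it into the source query is usually cheaper.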

View 8 Replies View Related

Inserting Large Number Of Rows

Nov 24, 1999

Hi,

I need to insert a very large number of rows into a table (in SQL Server 7.0) using ADO.
Could you please tell me if there is a way to do a FAST insert, something similar to BCP ... or any other way of inserting a large number of rows efficiently?

Thanks
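If the rows can be staged to a file first, BULK INSERT (available in SQL Server 7.0) goes through the same fast load path as bcp but runs from T-SQL, so it can be kicked off from ADO. A sketch with hypothetical file/table names:

-- Sketch: bulk-load a tab-delimited file.
BULK INSERT dbo.TargetTable
FROM 'C:\data\rows.txt'
WITH (
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR = '\n',
    TABLOCK          -- table lock; helps enable minimally logged loading
);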

View 2 Replies View Related

Large Number Of Databases On 2005

Feb 23, 2007

For anyone with a larger number of databases (500+): how many do you have in a single instance? If you are using multiple instances on a single server, how many DBs per instance? Here is why I'm asking:

We are experiencing 701 "out of system memory" errors and temporary (usually) system freezes when the error occurs. We have 32-bit 2005 version 9.00.2153.00, 32 GB of memory, AWE enabled, on a quad dual-core 3 GHz hyperthreaded server. Neither the bPool nor VAS shows any pressure when the "out of system memory" error occurs. Since this error usually indicates a VAS problem, we tried increasing VAS to 1 GB with the -g flag; it made no difference.

PSS has been working on the case for 3 weeks. They don't seem to be finding any evidence of memory pressure either. When I last spoke to the escalation engineer yesterday, it seemed they are going to recommend reducing the number of databases on the server. I asked for clarification as to whether we are hitting a 32-bit barrier, an instance limitation, or both, and am awaiting the answer.

How many databases do you have on your server? We had between 1,700 and 1,900 (the number varies) at times when the error occurred. We are now at 1,500 and have not had the error in the 2 days since reducing the number of databases.

View 4 Replies View Related

Large Number Of Tables And Performance

Jan 25, 2008

Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, as the application will start on one single database server. I tried to find resources on this on the internet but couldn't find any.

I would really appreciate if you can give me some advice and if you have any good links that would be great...

Waleed Eissa
http://www.waleedeissa.com

View 9 Replies View Related

Question On Displaying Large Number Of Records?

Oct 20, 2006

I have a datagrid that needs to display a log table which has more than a million records. Since that is a huge number, it is not possible to fill a dataset using "select * from log_table" and bind it to the datagrid.

Is there any way to display the first 100 rows on the first page, and show the next 100 rows if the user clicks on page 2?

Thank you very much in advance!
Justin

View 4 Replies View Related

Large Number Of Processes In Activity Monitor

Jun 4, 2007

All;

We have a Windows app and a web app that share business objects pointing to a single database. When a Windows user logs in, an average of 50 processes are created in the first few seconds and never go away. The details window is blank, and they all remain sleeping from that point on.

I have stepped through the code to see if there is anything odd going on, but most of the processes are created when validating the number of parameters a stored procedure has, or the length of the stored procedure name. This translates to 1,000-1,500 processes on average.

Is this normal? Will it hurt performance? Is there a way to remove them?

View 1 Replies View Related

Deleting Large Number Of Rows Times Out

Sep 14, 2007

I have a table with about 10 million rows; it's been logging data incorrectly and I want to start again.

DELETE FROM tblLog

I just get a message about the server timing out. What can I do?

Thanks
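If the aim really is to start the table again, TRUNCATE deallocates whole pages instead of logging every row delete, so it finishes in seconds where a 10-million-row DELETE times out. A sketch (note: it cannot be used if foreign keys reference the table, and it resets any identity column):

TRUNCATE TABLE tblLog;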

View 5 Replies View Related

NTEXT Vs NVARCHAR For Large Number Of Columns

Jul 23, 2005

Hi all,

I need to store data into about 104 columns. This is problematic with MS SQL, since it doesn't support rows over 8 KB in total size. Most of the columns are of type NVARCHAR(255), which means we can't have more than 8092/(255*2) = 15 columns of this type. With a row length of more than 8 KB, SQL gives a warning that any rows over that amount will be truncated.

So far I'm seeing two possible solutions to this problem:

1. Split the data into multiple tables with the same ID column across all tables, and then join them in SELECT statements.

2. Use NTEXT instead of NVARCHAR. NTEXT's length is 16 bytes because it contains a pointer to the actual value stored somewhere else. However, NTEXT doesn't support regular indexing, only indexing through a Full-Text Index catalog. In that case I'll need to use "WHERE CONTAINS(columnName, 'sometext')" to perform searches, which is bearable.

I'm inclined toward #2. However, I haven't used full-text indexes before and don't know their limitations. Will I run into problems with NTEXT? Is there a better solution?

Thanks.
-Oleg.

View 7 Replies View Related

Large Number Of INSERT Statements - Not All Are Executed

Feb 9, 2007

Hello!

I have a developer who is playing around with some SQL statements using VB.NET. He has a test table in a SQL 2000 database, and he has about 2000 generated INSERT statements.

When the 2000 INSERT statements are run in SQL Query Analyzer, all 2000 rows are added to the table. When he tries to send the 2000 statements to SQL Server through his app, a random number of statements do not get executed. But SQL Profiler shows that each of the 2000 statements is getting sent to the server.

I suggested that he add a "GO" statement at the end of the INSERT block, but the statement fails when that is sent to the server. ("GO" is a batch separator understood by Query Analyzer, not by the server itself, which is why it fails over a client connection.)

I know that this is not the ideal manner of inserting bulk data into the system, but now we are all just curious as to why SQL Server doesn't execute each individual INSERT.

Any thoughts?

View 3 Replies View Related

How To Manage IDENTITY In Tables With Large Number Of Rows?

May 23, 2002

Table structure: col1 IDENTITY (seed=1 increment=1) + few other columns (col2...col7) + one text column (col 8)
I have around 50,000,000 rows per day inserted into the table T1. At the end of the day 40,000,000 rows are deleted. I have to keep the records for 12 months and then archive them. The database is 24/7 web serving and there is no downtime allowed. The IDENTITY column will go out of range (overflow) after less than two years, unless the identity seed is reset to the start value (seed=1, increment=1).
At the end of the 12th month, data is archived into another table and only the last month is kept in T1, so table T1 enters the new year with data from the last month of the previous year. A few other tables refer to this table using their own field with values from T1.IDENTITY (referential integrity is not enforced). The identity column in T1 is needed as a unique ID for some search actions. Performance is an issue, therefore the bigint data type is used for this identity column rather than decimal.

Another problem I have is how to do a table update on one column (1 million rows to be updated out of 2 million) with minimum impact on the users who are querying this table heavily. No need to mention that it is a 24/7 web app with no downtime.

Thank you in advance.


Goran
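On the identity-overflow point: once a year's rows have been archived out, the seed can be pulled back down with DBCC CHECKIDENT - a sketch, safe only if no row remaining in T1 (or still referenced elsewhere) can collide with the reissued values:

-- Sketch: make the next inserted identity value 1.
DBCC CHECKIDENT ('T1', RESEED, 0);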

View 1 Replies View Related

Deleting Large Number Of Rows In SQL Server 2000

Jan 26, 2004

Hi,

I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!

I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.

At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or the traffic on the server. But it will increase to about 500 clients in 2004, and 1,000-1,500 in 2005, so I really need a smarter solution than what I have today to be able to use the log efficiently.

98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I do however like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.

This presents problems on multiple levels, mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log. It works, but it's not exactly elegant...

The server is a 1100 MHz P3 / 512MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but the problem is more of a "bad design" than "slow hardware" problem.

My log is pretty simple, as follows:

LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar[2500], no index
LogTimeStamp- datetime - index asc


I have deducted 3 different solutions:

Method 1:
Simply run "Delete from db_log where logtypeid <> stuff_I_want_to_keep".

This is the simplest and the one I prefer, but it takes too long to complete. Any tips on speeding this process up?


Method 2:
Create a trigger which runs something like "Delete from db_log where logtypeid <> stuff_I_want_to_keep and date < today_minus_two_days" every hour or so. This will ensure that the DB doesn't grow too large. But if I'm away from work for a few days, we might lose data we wanted to keep.


Method 3:

Copy what I want to keep into another table, and empty the log. Sort of like "Insert into db_log_keep stuff_to_keep; drop db_log; create table db_log; " (or truncate, but that takes a long time too)

But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.

However, it seems as this method is what will work best for my set-up, unless there are other suggestions??

Method 4:

...eagerly awaiting ideas!!! :-)




(Also, any tips and/or links to info on maintaining VLDBs are greatly appreciated.)

Thanks in advance for your help! :-)

Nikolai

View 4 Replies View Related

Managing Large Number Of Objects (table,sp,view)

Jul 28, 2007

Hi,
Our database has a very large number of objects. We have a naming convention by module, subproject, etc., but, for example, when we need to open a specific table it still takes time to find it. If we could create custom folders under the Tables folder or the Stored Procedures folder, it would be easier to find an object; we could create subfolders by module and subproject and classify our objects with these folders. Will the next version, SQL Server 2008, support this kind of functionality?

View 8 Replies View Related

Add Column To Existing Table With Large Number Of Rows

Dec 24, 2007

Hey guys,

I need to add a datetime column to an existing table that has about 1.2 million records and is accessed frequently,
but I can't afford to stop the DB at all.

Whenever I run: alter table mytable add Updated_date datetime

it just takes too long, and I have to stop executing the query after a couple of minutes.
I am running SQL Express 2005 SP2. The DB size is over 3 GB, but still under the 4 GB limit.

Can you please advise on how to add this column? It's urgent!!

thanks in advance
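For what it's worth, adding a NULLable column with no default is a metadata-only change, so the ALTER itself should be near-instant; the hang is most likely the statement queuing for a schema-modification lock behind active queries (an assumption, not a confirmed diagnosis). A sketch that fails fast instead of hanging, to be retried in a quieter moment:

-- Sketch: give up after 5 seconds rather than queuing indefinitely
-- behind long-running queries.
SET LOCK_TIMEOUT 5000;
ALTER TABLE mytable ADD Updated_date datetime NULL;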

View 5 Replies View Related

SQL Server 2005 Slows Down After A Large Number Of Queries

Jul 4, 2007

Hi,

We are running SQL Server 2005 Ent Edition with SP2 on a Windows 2003 Ent. Server SP2 with an Intel E6600 dual-core CPU and 4 GB of RAM. We have a C# application which performs a large number of calculations in a loop. The application first loads the transactions that need to be updated, then goes through each row, queries another table to get some values, and updates the transaction.

I have set a limit of 2 GB of RAM for SQL Server, and when I run the application it performs 5 record updates per second (the process described above). After roughly 10,000 records, the application slows down to about 1 record per second. I have tried examining Activity Monitor, but I can't find anything that might indicate what's causing this.

I have read that there are some known issues with hyper-threaded CPUs; however, since my CPU is dual-core, I do not know if the issue applies to those CPUs too, and I have no way to disable one core in the BIOS.

The only thing I have noticed is that if I change the Max Degree of Parallelism when the server slows down (i.e., from 0 to 1 and then back to 0), the server speeds up for another 10,000 record updates and then slows down again. Does anyone have an idea of what's causing it? What does the property change do that makes the server speed up again?

If there is no solution to this problem, does anyone know if there is a stored procedure or anything else that can be used programmatically to speed up the server when it slows down? (This is not the optimal solution; however, I will use it as a workaround.)

Any advice will be greatly appreciated.

Thanks,
Joe

View 3 Replies View Related






