SQL Server 2008 :: High Volume Reads And Writes?

Jul 6, 2015

We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database with read- and write-intensive transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since we have historically had constant issues in the current environment when massive updates are happening. Reads should have higher priority than writes.
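A common first step for "reads over writes" on SQL Server 2008 is row versioning, so readers see the last committed row version instead of waiting on writers' locks. A minimal sketch, assuming a hypothetical database name (versions are kept in tempdb, so tempdb sizing should be reviewed alongside this):

-- Readers stop blocking behind writers; they read the last committed version.
-- Requires briefly exclusive access to the database to switch on.
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;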

View 2 Replies



SQL Server 2008 :: Disk Reads And Writes

Nov 5, 2015

How can I measure disk reads and writes to see if I need to add additional disks to the server?
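One way to get a first answer is to total cumulative reads, writes and stalls per drive letter from the file-stats DMV; a sketch (sys.dm_io_virtual_file_stats covers I/O issued by SQL Server since the service last started):

SELECT LEFT(mf.physical_name, 1) AS drive,
       SUM(vfs.num_of_reads)       AS total_reads,
       SUM(vfs.num_of_writes)      AS total_writes,
       SUM(vfs.io_stall_read_ms)   AS read_stall_ms,   -- time spent waiting on reads
       SUM(vfs.io_stall_write_ms)  AS write_stall_ms   -- time spent waiting on writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
GROUP BY LEFT(mf.physical_name, 1);

High stall times relative to the I/O counts, rather than the raw counts themselves, are what usually justify adding disks.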

View 2 Replies View Related

Reads And Writes To A Sql Server Database Per Table

Aug 1, 2006

Is it possible to find the reads/writes to a SQL Server table?

View 2 Replies View Related

SQL Server Admin 2014 :: Indexes With More Writes Than Reads

Jul 17, 2015

I have inherited a database that is over-indexed, i.e. there are sometimes 10-20 indexes on a table. The performance is at times not great due to blocking from long running queries. I want to clean up the indexes as a starting point.

Through a query I found some time ago on the SQLCat blog I have discovered a large number of indexes in the database that have a huge disparity between reads and writes. The range of difference is sometimes almost 2 million more writes than reads. Should I just drop the indexes that have say, more than 100,000 more writes than reads and then see what the Missing Index DMVs tell me after a few days of running without those indexes?

In some cases there are a few hundred thousand reads but maybe a million writes on the index. Thus, there are a fair number of reads happening, just not in comparison to the number of writes. In some cases there are almost no reads and a million or more writes. I am obviously dropping those indexes. I just am not sure what to do about the indexes that do have a fair number of reads.
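For reference, a query in the same spirit (a sketch against sys.dm_db_index_usage_stats, not the SQLCat query itself; the counters reset at instance restart, so sample over a representative period) surfaces the write-heavy indexes:

SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates           AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND i.is_primary_key = 0          -- never drop constraint-backing indexes blindly
  AND i.is_unique_constraint = 0
ORDER BY s.user_updates - (s.user_seeks + s.user_scans + s.user_lookups) DESC;

Disabling candidates with ALTER INDEX ... DISABLE instead of dropping them makes the rollback a simple rebuild if a query regresses.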

View 9 Replies View Related

What Is The Best Way To Achieve Best Latency For Reads And Writes To SQL Server 2005?

Oct 18, 2007

I am looking into various options to improve the latency of our application (we believe the latency is mainly due to data persistence, i.e. writes and reads from the DB). I am looking into in-memory databases as well. But before making that decision (of using an in-memory database), I would like to see if there is a way to configure SQL Server 2005 to get performance as close as possible to an in-memory database.

My questions:
1. Is there a way I can configure SQL Server 2005 to use a cache that gets loaded on an as-needed basis, so that future database reads/writes happen against the cache as opposed to disk (db writes)?
2. Is SQL Server 2005 recoverable in such configurations?
3. Are there any ideas/resources where I can get more details? (Such as sample configurations with benchmark numbers, previous experiences, etc.)

Thanks
Murthy
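For what it's worth, SQL Server already works this way: the buffer pool caches data pages in RAM on first touch, and writes are hardened to the transaction log before dirty pages are flushed to the data files in the background, which is what makes the configuration recoverable. A sketch for seeing how much of each database is currently held in that cache:

-- 8 KB pages held in the buffer pool, per database.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;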

View 1 Replies View Related

Reads / Writes Per Second.

Oct 30, 2006

How can you find the reads and writes per second of your hard drives in SQL? I am reading my SQL book and it says that your average disk should have 125 or fewer I/Os, and it gave the formula, but as mentioned I don't know how to find the reads and writes.
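The DMV counters are cumulative since the service started, so a per-second figure needs two snapshots; a sketch (sys.dm_io_virtual_file_stats requires SQL Server 2005 or later - on older versions the same numbers come from PerfMon's PhysicalDisk counters):

-- First snapshot of the cumulative counters.
SELECT database_id, file_id, num_of_reads, num_of_writes
INTO #io_snapshot
FROM sys.dm_io_virtual_file_stats(NULL, NULL);

WAITFOR DELAY '00:00:10';   -- 10-second sample window

-- Second snapshot; the difference over the window gives the rate.
SELECT (SUM(v.num_of_reads)  - SUM(s.num_of_reads))  / 10.0 AS reads_per_sec,
       (SUM(v.num_of_writes) - SUM(s.num_of_writes)) / 10.0 AS writes_per_sec
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS v
JOIN #io_snapshot AS s
  ON s.database_id = v.database_id
 AND s.file_id     = v.file_id;

DROP TABLE #io_snapshot;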

View 4 Replies View Related

COUNT Of READS And WRITES On A 6.5 Db.

Jul 21, 2000

Is there a way to get a total count of all SELECT, UPDATE, DELETE and INSERT statements against a SQL Server 6.5 database during a 12-hour period? I'm thinking maybe someone knows of software that reads the log or monitors the server... I've been looking at Performance Monitor and, although it has good information, it doesn't capture DMLs.

FYI - it's for capacity planning.

TIA,
Mike

View 1 Replies View Related

Track Reads And Writes

Mar 5, 2008

Guys,

Is there any way to track which tables have the most reads and writes in a database of 400 tables?

Thanks
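A sketch that rolls the index usage DMV up to table level for the current database (seeks, scans and lookups counted as reads, user_updates as writes; counters reset at instance restart):

SELECT OBJECT_NAME(s.object_id) AS table_name,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS reads,
       SUM(s.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
  AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
GROUP BY OBJECT_NAME(s.object_id)
ORDER BY reads DESC;

Tables never touched since the restart won't appear at all, which is itself useful information.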

View 9 Replies View Related

Checking For Dirty Reads/writes

Apr 17, 2008



Problem statement:

Let's say user A accesses a record and is making an update to a column... next, user B accesses the same record, makes an update to the same column, and saves the data... how can user A check whether an update has been made, to prevent overwriting the data?

Is there a query statement that user A can write to check for this?

I understand locking can be used to prevent this, but is there an alternative to locking?
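The usual lock-free alternative is optimistic concurrency with a rowversion column: read the version together with the row, then make the save conditional on it. A sketch with hypothetical table and column names:

-- One-time setup: the engine maintains RowVer automatically on every write.
ALTER TABLE dbo.Accounts ADD RowVer rowversion;

-- User A reads the row and keeps the version that came with it.
DECLARE @OriginalVer binary(8);
SELECT @OriginalVer = RowVer FROM dbo.Accounts WHERE AccountId = 42;

-- On save, the update succeeds only if nobody changed the row in between.
UPDATE dbo.Accounts
SET Balance = 100.00
WHERE AccountId = 42 AND RowVer = @OriginalVer;

IF @@ROWCOUNT = 0
    RAISERROR('Row was changed by another user - reload and retry.', 16, 1);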

View 5 Replies View Related

Performance Tuning On High Volume Server

Jan 4, 2007

I am looking to improve the performance of my sql server databases.

I currently have a dual-location system. The database server setup is basically a quad Xeon with 4 GB at my office and a dual Xeon with 4 GB at a remote webhosting location. There are separate application/web/intranet servers at each site. The two database servers are replicated, with the local server publishing to the remote server.

The relational database holds circa 26 million records, growing by 10,000 per day, and there are approximately 50,000 queries performed per day.

My theory is that the replication of the two databases is causing a slowdown; despite fast network connections (averaging 200ms between servers) the replication seems to place a large load on the local server. Would it be sensible to replicate to a second local server and then have that replicate to the remote server, placing the burden on the second server?



I am planning to upgrade the local server to a high-capacity 4+ CPU 64-bit server. My problem is that although I have noticed a slowdown in performance over time, I am unsure how to go about measuring and quantifying this in order to diagnose the bottlenecks and ensure that investing in a new server would be worthwhile. Where would one be best advised to start this project?
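A common starting point for quantifying where the time goes (on SQL Server 2005 or later) is the cumulative wait statistics; a sketch:

-- Top waits since the service last started; re-run after an interval
-- and diff the values to see what the server is currently waiting on.
SELECT TOP (10) wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;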

View 5 Replies View Related

High Volume I/O!!!

Feb 28, 2008

I have a summary table with a 9-field composite primary key. Every 10 minutes, my system generates 2 files of 500,000 to 750,000 rows to be summarized into this table. I first bulk insert those into a temp table, and then trigger an inner-join update query to do the updates, followed by a left-outer-join insert query to do the inserts. As the day goes on and millions of rows accumulate in my summary table, this process becomes too slow. Any ideas about causes/solutions?

RLiss
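One detail that often decides the speed of this pattern is whether the temp table is indexed to match the summary table's key, so both joins can seek instead of scan; a sketch with hypothetical names and a shortened key:

-- After the bulk insert, index the staging rows on the summary key.
CREATE CLUSTERED INDEX ix_stage_key ON #stage (KeyCol1, KeyCol2);

-- Update pass: rows that already exist in the summary.
UPDATE s
SET s.Total = s.Total + t.Total
FROM dbo.Summary AS s
JOIN #stage AS t
  ON t.KeyCol1 = s.KeyCol1 AND t.KeyCol2 = s.KeyCol2;

-- Insert pass: rows that did not match anything.
INSERT dbo.Summary (KeyCol1, KeyCol2, Total)
SELECT t.KeyCol1, t.KeyCol2, t.Total
FROM #stage AS t
LEFT JOIN dbo.Summary AS s
  ON s.KeyCol1 = t.KeyCol1 AND s.KeyCol2 = t.KeyCol2
WHERE s.KeyCol1 IS NULL;

On SQL Server 2008 the two passes can also be collapsed into a single MERGE statement.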

View 2 Replies View Related

Handling High Volume Data

Apr 18, 2007

Hi Guys,

I am facing problems with concurrent access in SQL Server 2000. The scenario is that the DB contains one huge denormalized table containing 40 million records.

The application frequently queries this table to populate other derived tables; the SQL queries take a long time to return results.

So if one query is executing, the other user's query goes into a wait state. Please suggest how I can improve this.

Or do I need to upgrade to 2005?

Regards,
Prashant

View 2 Replies View Related

High Volume DB Performance Problems

Mar 19, 2008

Hello, I am a master's student and I am preparing a seminar about high-volume DB performance problems. For example: if I have a table with 1,000,000 records, and this size is growing exponentially over time, what problems may I face with insertion, deletion and search in such a table? And what are the problems in processing such a DB in general?

View 1 Replies View Related

High Page Reads

Jan 17, 2002

SQL 6.5 - 5.5 Gig
NT

Hello,

Throughout the day our document management application generates high bursts of physical page reads when users query the database.

What SQL configuration parameter(s) should I check/modify to ensure that the database is performing at its optimum during these bursts?

Thank You in advance.

View 1 Replies View Related

MSDE For Large Volume And High No. Of Users?

Sep 21, 2005

Hi, we need to use a free database for a project because of a tight budget. Is MSDE OK for handling a large volume of data and 70-80 users? My understanding is that MSDE is optimized for 5 concurrent users. Is MySQL better than MSDE? Thanks, Ben

View 1 Replies View Related

Solution Design For High Volume Of Data

Apr 18, 2006

Hi,

I have been asked to design a solution for a client of mine who basically requires the daily analysis and reconciliation of the differences between 2 extremely large text files.

The files are not in an identical format but are both in some form of delimited format (one is CSV, the other is a little more complex). For the sake of this question, let's assume that I can effectively import each file into an MS SQL table.

Each file will have in excess of 100,000 rows each day (new data for each day).

Whilst I know that MS SQL easily has the capacity to store the data, is there a recommended way to tackle the potential problems? (I imagine that performance is important... they will be running the report every day.)

Or is building the solution as simple as importing the data into 2 tables, and then querying the differences and outputting as a report using Crystal?

Any suggestions appreciated.

Thanks

Rael
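Assuming both files land in staging tables with comparable columns, the daily differences fall out of two EXCEPT queries (EXCEPT requires SQL Server 2005 or later); a sketch with hypothetical names:

-- Rows in file A that are missing from, or differ in, file B.
SELECT AccountRef, TradeDate, Amount FROM dbo.FileA_Staging
EXCEPT
SELECT AccountRef, TradeDate, Amount FROM dbo.FileB_Staging;

-- And the reverse direction.
SELECT AccountRef, TradeDate, Amount FROM dbo.FileB_Staging
EXCEPT
SELECT AccountRef, TradeDate, Amount FROM dbo.FileA_Staging;

At 100,000+ rows per day, an index on the compared columns in both staging tables keeps this cheap.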

View 10 Replies View Related

High Volume Database Query Optimization

Dec 10, 2007

Hello everybody,

I am doing research about high-volume database treatment (maybe a database of terabyte volume), so is there any optimization or specialization for queries dealing with such a database?
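The standard answer at that scale is table partitioning, so queries touch only the relevant slice of the data (partition elimination) and old slices can be switched out instantly. A minimal sketch, assuming a date-keyed table on SQL Server 2005 or later Enterprise Edition:

-- Monthly partitions; boundary values are illustrative.
CREATE PARTITION FUNCTION pf_monthly (datetime)
AS RANGE RIGHT FOR VALUES ('2007-01-01', '2007-02-01', '2007-03-01');

CREATE PARTITION SCHEME ps_monthly
AS PARTITION pf_monthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.FactEvents (
    EventDate datetime NOT NULL,
    Payload   varchar(200) NULL
) ON ps_monthly (EventDate);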

View 5 Replies View Related

Audit Logout, High Reads

May 2, 2007

Hi,

I'm trying to figure out why my SQL Server is flatlined on the CPU. I'm doing a trace and can't help but notice this entry, with crazy-high reads. I'm not sure what this is. It doesn't look good to me, although maybe it's nothing. Any info is much appreciated.

Thanks again!
mike123



Event Class / TextData / ApplicationName / LoginName / CPU / Reads / Writes / Duration

Audit Logout / .Net SqlClient Data Provider / loginName / 3764129784 3146 156

View 3 Replies View Related

Slow Running Query With A High Lob Logical Reads, Lob Data

Jun 1, 2007

We had some slowdown complaints lately and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the uidcustomerid column. Setting profile statistics on shows that every time it executes it does an index seek.

A profiler session showed a huge number of reads for these queries depending on the value being looked up: 1500 through 50000. I set profile IO on and the culprit is lob logical reads. Everything else is 0 or very low. In this case lob logical reads is over 1700.

3 of the columns in the select statement are text columns. When I take them out of the query the lob logical reads drops to 0, and goes up incrementally as I add each column back in.

Is there any way to improve the performance without changing the data types to varchar(max)?


select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL,
DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt,
Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction,
Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging,
vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension,
Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan,
Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b,
Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b,
Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing,
block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion,
block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment,
Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code,
Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date,
terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1,
SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made,
local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId,
accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3,
block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS,
block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime,
OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID
from profiles where UidCustomerID in (352199267)
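One option to test before any type change: for legacy text columns the 'text in row' table option stores small LOB values on the data page itself, which can cut lob logical reads for rows whose text fits; a sketch (the 256-byte threshold is arbitrary):

-- Keep text values up to 256 bytes on the row's own data page.
EXEC sp_tableoption 'profiles', 'text in row', '256';

Failing that, trimming the three text columns out of queries that don't actually display them avoids the LOB fetches entirely.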

View 3 Replies View Related

SQL Server 2008 :: Huge Volume Of Records To Copy To Excel File Through SSIS

Oct 22, 2015

I am copying data from a database to an Excel file through SSIS. The database is MS SQL 2005 and BIDS is also 2005. However, the job doing this task fails every time. As per investigation, the result of the query is more than 100,000 rows, and we know that Excel (97-2003 format) has a limit of 65,536 rows per worksheet. Is there a way of raising the limit, or a better approach so that everything will be copied to the Excel file successfully?

The PrimeOutput method on component "Source - Query" (1) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. End Error Error: 2015-10-22 04:34:58.05 Code: 0xC0047021

Source: Data Flow Task
Description: SSIS Error Code DTS_E_THREADFAILED.

Thread "SourceThread0" has exited with error code 0xC0047038. There may be error messages posted before this with more information on why the thread has exited. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 4:30:00 AM Finished: 4:35:05 AM Elapsed: 304.844 seconds. The package execution failed. The step failed. "

View 1 Replies View Related

SQL 2005 And SQL 2008 Database Volume Capacity

Dec 1, 2007

Hi Everybody:

4-5 years ago, I started my career as a translator by translating the MetaTexis CAT (Computer Aided Translation) software.

It's amazing to see all the improvements that have been made since then, but recently I found some problems regarding databases:

I heard that Access databases are limited to a volume of 2 GB and that SQL 2005 databases are limited to 4 GB, but I think this information is wrong, or at least I was only able to import 10% of that amount.

In other words, 2 GB doesn't represent a database with a volume of 125,000 segments/sentences (for Access) and 4 GB a volume of 250,000 (for SQL 2005).

Concretely, my "mega.mxtm" database is "only" 359 MB and suddenly it refuses to import more sentences. Is that normal? (Microsoft SQL 2005)

Question: Is the new SQL 2008 also limited? Is there any way to free up or increase the volume capacity?

Point 2: Since I upgraded SQL 2005 to 2008 I am not able to open the "old" "mega.mxtm" anymore... :(

Regards!


De Sena Viegas

View 4 Replies View Related

Got An Expensive Server, SQL Is Very Slow On Writes

Feb 2, 2004

Hi,

I am experiencing SQL write performance problems on a very shiny server. It has data files on RAID 1+0, log files on a separate drive, all SCSI; Windows 2003 Server, 6 GB RAM, 2 Xeon processors. I've created a small benchmarking program and run it on my desktop PC and this 'big' server. Here are the results:

Desktop: SQL Server inserts: 78 seconds. Direct writes to the hard disk (just write a string to a file 10,000 times): 13 seconds.

Server: SQL Server inserts: 422 seconds. Direct writes to the hard disk: 16 seconds.

So, for some reason, my 'shiny' machine is 6 times slower on writes than my desktop. When I tried comparing select performance, my shiny server is 10 times faster than my desktop.

Initially I had RAID 5 on the server and it had poorer direct-write performance, but now direct writes seem to be OK, so I reckon this is a problem related to SQL Server.

What can I do to improve the insert performance?

Thanks in advance
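A classic cause of this exact gap is that each singleton INSERT in autocommit mode forces its own synchronous log flush, so the benchmark measures log-disk commit latency rather than throughput: server-class SCSI controllers honour the flush, while desktop IDE drives often just cache it. Batching the inserts into explicit transactions is a quick way to test that theory; a sketch with a hypothetical table:

DECLARE @i int
SET @i = 1

WHILE @i <= 10000
BEGIN
    IF @i % 1000 = 1 BEGIN TRANSACTION      -- open a transaction every 1000 rows
    INSERT dbo.BenchTable (Payload) VALUES ('x')
    IF @i % 1000 = 0 COMMIT TRANSACTION     -- one log flush per 1000 rows
    SET @i = @i + 1
END

If the batched version closes most of the gap, the bottleneck is commit latency on the log drive, not SQL Server itself.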

View 8 Replies View Related

Deploy To Production Server / SSIS Writes Out Garbage Data

Apr 2, 2015

I built an SSIS package (writing out to a flat file) on a 32-bit machine and it works fine. However, when I deploy it to the production server (64-bit), the SSIS package writes out garbage data. After some research I found out that the problem is a 32-bit OS vs 64-bit OS issue. What is my next step? Am I out of luck, so that I will now have to redesign the SSIS package on 64-bit?

View 5 Replies View Related

High Availability To High Protection Without Reconfiguring Mirroring

Apr 23, 2007

Hi,

Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure Database Mirroring? Using the interface in Management Studio, I can change the configuration option to High Performance, but not High Protection despite both of them being Synchronous.

If not, what are the recommended steps to reconfigure the mirror once it has already been configured? Is it just like initially setting up the mirror, or would there be any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option be available?

Thanks,
J.

View 3 Replies View Related

High Safety Changed To High Performance After Failover?

Mar 6, 2008

Hi There

I realise this is a stupid question but I cannot really find any confirmation of this in BOL.

If you are running High Safety with automatic failover, when failover occurs does this automatically change to High Performance mode? Since, for failover to occur, something has happened to the principal, it will be impossible to commit transactions on the new principal and the mirror synchronously, as one of them is no longer available.

So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?

Thanx

View 4 Replies View Related

SQL Server Admin 2014 :: Backing Up To Mount Point - Perfmon Shows Zero Writes

Apr 20, 2015

I'm backing up to a network directory that's actually a mount point on a different server. My backup was slower than usual so I opened perfmon to have a look.

When selecting the mount point from the Logical Disk section in perfmon, I can see that writes/sec and write bytes/sec both show zero for a long period of time, even though the backup's percent complete is increasing. Then all of a sudden the writes to the network share jump massively.

Is there some caching mechanism for backups in SQL where data is only flushed to the disk periodically during a backup?
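Backups do go through internal buffers whose count and size are tunable, which can produce bursty writes at the destination; whether that explains a long flat-zero stretch is another question (the counters may also be attributed to the remote server rather than the local mount point). A sketch for experimenting, with a hypothetical database and path:

BACKUP DATABASE MyDb
TO DISK = '\\server\mountpoint\MyDb.bak'
WITH BUFFERCOUNT = 50,           -- number of in-memory transfer buffers
     MAXTRANSFERSIZE = 4194304,  -- 4 MB per write request (the maximum)
     STATS = 5;                  -- progress message every 5 percent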

View 1 Replies View Related

Log Reads In SQL Server 2005

May 31, 2006

I have a set of triggers that log the history of changes to a table - i.e. I record inserts, updates, deletes (pretty standard audit stuff I suppose). I want to also log reads on that data. If I were using sprocs for reading data, this would be relatively painless, but I am using an O/R mapper to handle my data access, which writes dynamic SQL at runtime (and I don't want to use sprocs with it) and then sends it down to the DB. Is there a way I can intercept reads and log them to the same table I am logging other actions to? I know very little about the new capabilities of SQL Server 2005, but I would think I could somehow, maybe via the new CLR capabilities or similar, get access to these types of events within the database? Anyone? I know I could always do this higher up in the application layers, but I would like to keep all of this at the database level if possible.

Thanks,

View 1 Replies View Related

SQL Server Function/SP To Know Volume's Total Size

May 6, 2008

Hi,

Is there any SQL Server function/stored procedure available to get drives/volumes and mount points' total size and free space information?

I don't want to use a CLR approach for this.

Can you please provide any pointers related to this?

Thanks in advance,

-Kiran
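Two non-CLR options worth a look: xp_fixeddrives returns free space per drive letter (but no totals and no mount points), and sys.dm_os_volume_stats covers mount points with both size and free space, though it only exists from SQL Server 2008 R2 SP1 onward. Sketches of both:

-- Free space per drive letter only.
EXEC master.dbo.xp_fixeddrives;

-- Size and free space per volume, including mount points (2008 R2 SP1+),
-- limited to volumes that actually host database files.
SELECT DISTINCT vs.volume_mount_point,
       vs.total_bytes / 1048576     AS total_mb,
       vs.available_bytes / 1048576 AS free_mb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;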

View 1 Replies View Related

SQL Server Admin 2014 :: How Data Writes In Multiple Secondary Files If There Is No Filegroup Created

Nov 12, 2014

I read that when a SQL Server database has multiple data files within a single filegroup, SQL Server writes data using a proportional fill algorithm, where the amount of data written to a file is proportionate to the amount of free space in that file compared to the other files in the filegroup.

So if no filegroups are created and multiple secondary files are attached to the database, is data stored and written across the multiple files by the same algorithm, or in a different way?

View 2 Replies View Related

Can REPLICATION On SQL Server 2000 Allow Dirty Reads

Dec 1, 2005

All my queries are being blocked while the tables are being replicated and it is causing some 2-minute blocking. Is there a way for the replication to allow dirty reads? I really don't care about that - I would rather have dirty reads than 2-minute waits. Thanks.
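Replication itself doesn't change reader locking, but the reporting queries can opt into dirty reads per session or per query; a sketch (dirty reads can return rows twice, miss rows, or see data that is later rolled back):

-- Per session: every read on this connection ignores writers' locks.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;

-- Per query: hint an individual table instead.
SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK);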

View 1 Replies View Related

SQL Server 2005 Large IO And Logical Reads

May 22, 2008

A table in one of my databases is running very slowly. The IO is very high and below is a printout from the SET STATISTICS IO ON command run on a common query used on the table:


(4162 row(s) affected)

Table 'WebProxyLog'. Scan count 3, logical reads 873660, physical reads 3493, read-ahead reads 505939, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I have a clustered unique index and a nonclustered index on the table. I have run SQL Profiler and opened the trace in Database Engine Tuning Advisor; DTA displays 0% improvement suggestions. I have a number of statistics on the table and index, which are all up to date, and fragmentation is less than 1%. I've tried a number of variations on indexes to improve performance, but to no avail. There is only one query which runs on the table, and the nonclustered index created on the table did significantly improve performance; however, the query still runs at around 23 seconds. The query does bring back a large amount of data, but I'm sure there is a way to bring down the IO and logical reads to improve performance.

The table and index scripts are below:




Code Snippet
-- =================== Table and Clustered index ===========================
CREATE TABLE [dbo].[WebProxyLog](
[ClientIP] [bigint] NULL,
[ClientUserName] [nvarchar](514) NULL,
[ClientAgent] [varchar](128) NULL,
[ClientAuthenticate] [smallint] NULL,
[logTime] [datetime] NULL,
[servername] [nvarchar](32) NULL,
[DestHost] [varchar](255) NULL,
[DestHostIP] [bigint] NULL,
[DestHostPort] [int] NULL,
[bytesrecvd] [bigint] NULL,
[bytessent] [bigint] NULL,
[protocol] [varchar](12) NULL,
[transport] [varchar](8) NULL,
[operation] [varchar](24) NULL,
[uri] [varchar](2048) NULL,
[mimetype] [varchar](32) NULL,
[objectsource] [smallint] NULL,
[rule] [nvarchar](128) NULL,
[SrcNetwork] [nvarchar](128) NULL,
[DstNetwork] [nvarchar](128) NULL,
[Action] [smallint] NULL,
[WebProxyLogid] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [pk_webproxylog_webproxylogid] PRIMARY KEY CLUSTERED
(
[WebProxyLogid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

-- =================== Nonclustered Index ===========================

CREATE NONCLUSTERED INDEX [dta_ix_WebProxyLog_Kaction_clientusername_logtime_uri_mimetype_webproxylogid] ON [dbo].[WebProxyLog]
(
[Action] ASC
)
INCLUDE ( [ClientUserName],
[logTime],
[uri],
[mimetype],
[WebProxyLogid]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

-- =================== Query which is called regularly on the table ===========================

SELECT [User] = CASE
WHEN LEFT(clientusername,3) = 'domain' THEN RIGHT(clientusername,LEN(clientusername) - 3)
ELSE clientusername
END,
logtime AS [Date],
desthost AS [Site],
uri AS [Actual Site]
FROM webproxylog
WHERE CONVERT(Datetime,CONVERT(VarChar(25),logtime,106),106) BETWEEN '20 apr 2008' AND '14 may 2008'
AND(RIGHT(uri,4) NOT IN('.css','.jpg','.gif','.png','.bmp','.vbs'))
AND (RIGHT(uri,3) NOT IN('.js'))
AND LEFT(mimetype,6) = 'text/h'
AND (uri NOT LIKE '%sometext.local%')
AND (uri NOT LIKE '%sometext.co.uk%')
AND [action] = 9
AND (clientusername IN ('USERNAME'))
ORDER BY logtime ASC;





PS There are 60,078,605 rows in the table

Please help!

Many Thanks
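Part of the IO here is self-inflicted: wrapping logtime in CONVERT defeats any index on that column, so every row in the range must be touched and converted. A sketch of the same query with a sargable date window (the half-open range keeps all of 14 May 2008, matching the original BETWEEN after truncation):

SELECT [User] = CASE WHEN LEFT(clientusername, 3) = 'domain'
                     THEN RIGHT(clientusername, LEN(clientusername) - 3)
                     ELSE clientusername END,
       logtime AS [Date],
       desthost AS [Site],
       uri AS [Actual Site]
FROM dbo.WebProxyLog
WHERE logtime >= '20080420' AND logtime < '20080515'   -- index-friendly range
  AND [action] = 9
  AND clientusername = 'USERNAME'
  AND LEFT(mimetype, 6) = 'text/h'
  AND uri NOT LIKE '%sometext.local%'
  AND uri NOT LIKE '%sometext.co.uk%'
  AND RIGHT(uri, 4) NOT IN ('.css', '.jpg', '.gif', '.png', '.bmp', '.vbs')
  AND RIGHT(uri, 3) <> '.js'
ORDER BY logtime ASC;

Paired with a nonclustered index keyed on (clientusername, [action], logtime), the seek range shrinks to one user and one date window instead of all rows with action = 9.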

View 6 Replies View Related

SQL Server 2012 :: Summing Logical Reads Each Day For A Server

Oct 2, 2014

We would like to benchmark our logical reads daily to show our improvement as we tune the queries over time.

I am using sys.dm_exec_query_stats summing the Physical and Logical Reads. Is this a viable option for gathering this metric? Is this a viable metric to gather?

select sum(total_physical_reads) as TotalPhyReads, sum(total_logical_reads) as TotalLogReads from sys.dm_exec_query_stats;

How best can I provide performance-based metrics?
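One caveat to code around: sys.dm_exec_query_stats only covers plans still in cache and loses everything at a restart, so a daily benchmark needs persisting; a sketch with a hypothetical history table:

-- History table for daily snapshots.
CREATE TABLE dbo.ReadBaseline (
    capture_date date NOT NULL DEFAULT (CONVERT(date, GETDATE())),
    total_physical_reads bigint NOT NULL,
    total_logical_reads  bigint NOT NULL
);

-- Run once a day from an Agent job; compare rows day over day.
INSERT dbo.ReadBaseline (total_physical_reads, total_logical_reads)
SELECT SUM(total_physical_reads), SUM(total_logical_reads)
FROM sys.dm_exec_query_stats;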

View 4 Replies View Related

SQL Server Admin 2014 :: Why Number Of Reads Increases During Insert Test

Jun 2, 2014

I am writing a performance baseline test.

The first test writes 5,000,000 rows into one table. I realise this is not representative of OLTP behaviour, but it worked for me as a start for interpreting performance counters and testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1 GB vs 10 GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).

The screenshot shows the results of 2 runs on the same server: Win2012R2, SQL2014, 16 GB RAM.

In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB

The script ensures that the number of bytes inserted in the nvarchar columns is always the same.

This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (columns 5 and 6).

Since ca. 13 GB has to be written, the results of test 1 show the lead time increasing once more than 10 GB has been inserted (columns 8 and 9). In addition, you can see at that moment:

- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stall/sec increases
- lazy writes/sec increases
- readlatency increases (write latency does not)

In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).

Now what I do not understand is:

Why the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increases so extremely during run 1.

I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually you can see that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there's no real impact on write latency.

Finally I want to notice

- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loops

So I wonder why SQL Server starts to execute so many reads in test 1.
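For reference, the shape of such a test is roughly the following sketch (table and sizes hypothetical; the real script also snapshots the counters every 10,000 iterations):

CREATE TABLE dbo.BaselineTest (
    id int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,   -- clustered identity key
    payload nvarchar(1000) NOT NULL
);

DECLARE @i int = 0;
WHILE @i < 5000000
BEGIN
    -- constant byte count per row, per the test design
    INSERT dbo.BaselineTest (payload) VALUES (REPLICATE(N'x', 1000));
    SET @i += 1;
    -- every 10000 iterations: persist counter values from
    -- sys.dm_os_performance_counters into a logging table
END;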

View 4 Replies View Related






