SQL Server 2012 :: Summing Logical Reads Each Day For A Server

Oct 2, 2014

We would like to benchmark our logical reads daily to show our improvement as we tune the queries over time.

I am using sys.dm_exec_query_stats summing the Physical and Logical Reads. Is this a viable option for gathering this metric? Is this a viable metric to gather?

select sum(total_physical_reads) as TotalPhyReads, sum(total_logical_reads) as TotalLogReads from sys.dm_exec_query_stats;

What is the best way to provide performance-based metrics?
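One caveat with this approach: sys.dm_exec_query_stats is cumulative per cached plan, and entries disappear whenever a plan is evicted or the instance restarts, so a single daily SUM can go down as well as up. A hedged sketch of a snapshot table (the table and column names are made up for illustration) that a daily Agent job could populate, so day-over-day deltas can be compared:

-- hypothetical snapshot table; run the INSERT once a day from a job
CREATE TABLE dbo.DailyReadStats
(
    CaptureDate datetime NOT NULL DEFAULT GETDATE(),
    TotalPhyReads bigint NOT NULL,
    TotalLogReads bigint NOT NULL
);

INSERT INTO dbo.DailyReadStats (TotalPhyReads, TotalLogReads)
SELECT SUM(total_physical_reads), SUM(total_logical_reads)
FROM sys.dm_exec_query_stats;

-- day-over-day movement; a negative delta usually means plans were evicted
SELECT CONVERT(date, CaptureDate) AS CaptureDay,
       TotalLogReads,
       TotalLogReads - LAG(TotalLogReads) OVER (ORDER BY CaptureDate) AS LogReadsDelta
FROM dbo.DailyReadStats;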

View 4 Replies



SQL Server 2005 Large IO And Logical Reads

May 22, 2008

A table in one of my databases is running very slowly. The IO is very high and below is a printout from the SET STATISTICS IO ON command run on a common query used on the table:


(4162 row(s) affected)

Table 'WebProxyLog'. Scan count 3, logical reads 873660, physical reads 3493, read-ahead reads 505939, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I have a clustered unique index and a nonclustered index on the table. I have run SQL Profiler and opened the trace in Database Tuning Advisor; DTA displays 0% improvement suggestions. I have a number of statistics on the table and index, which are all up to date, and fragmentation is less than 1%. I've tried a number of variations on indexes to improve performance, but to no avail. There is only one query which runs on the table, and the nonclustered index created on the table did significantly improve performance; however, the query still runs at around 23 seconds. The query does bring back a large amount of data, but I'm sure there is a way to bring down the IO and logical reads to improve performance.

The table and index scripts are below:




Code Snippet
-- =================== Table and Clustered index ===========================
CREATE TABLE [dbo].[WebProxyLog](
[ClientIP] [bigint] NULL,
[ClientUserName] [nvarchar](514) NULL,
[ClientAgent] [varchar](128) NULL,
[ClientAuthenticate] [smallint] NULL,
[logTime] [datetime] NULL,
[servername] [nvarchar](32) NULL,
[DestHost] [varchar](255) NULL,
[DestHostIP] [bigint] NULL,
[DestHostPort] [int] NULL,
[bytesrecvd] [bigint] NULL,
[bytessent] [bigint] NULL,
[protocol] [varchar](12) NULL,
[transport] [varchar](8) NULL,
[operation] [varchar](24) NULL,
[uri] [varchar](2048) NULL,
[mimetype] [varchar](32) NULL,
[objectsource] [smallint] NULL,
[rule] [nvarchar](128) NULL,
[SrcNetwork] [nvarchar](128) NULL,
[DstNetwork] [nvarchar](128) NULL,
[Action] [smallint] NULL,
[WebProxyLogid] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [pk_webproxylog_webproxylogid] PRIMARY KEY CLUSTERED
(
[WebProxyLogid] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

-- =================== Nonclustered Index ===========================

CREATE NONCLUSTERED INDEX [dta_ix_WebProxyLog_Kaction_clientusername_logtime_uri_mimetype_webproxylogid] ON [dbo].[WebProxyLog]
(
[Action] ASC
)
INCLUDE ( [ClientUserName],
[logTime],
[uri],
[mimetype],
[WebProxyLogid]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]

-- =================== Query which is called regularly on the table ===========================

SELECT [User] = CASE
WHEN LEFT(clientusername,3) = 'domain' THEN RIGHT(clientusername,LEN(clientusername) - 3)
ELSE clientusername
END,
logtime AS [Date],
desthost AS [Site],
uri AS [Actual Site]
FROM webproxylog
WHERE CONVERT(Datetime,CONVERT(VarChar(25),logtime,106),106) BETWEEN '20 apr 2008' AND '14 may 2008'
AND(RIGHT(uri,4) NOT IN('.css','.jpg','.gif','.png','.bmp','.vbs'))
AND (RIGHT(uri,3) NOT IN('.js'))
AND LEFT(mimetype,6) = 'text/h'
AND (uri NOT LIKE '%sometext.local%')
AND (uri NOT LIKE '%sometext.co.uk%')
AND [action] = 9
AND (clientusername IN ('USERNAME'))
ORDER BY logtime ASC;
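One thing that stands out: every predicate above is non-SARGable. The CONVERT on logtime and the LEFT call on mimetype have to be evaluated row by row, so no index on those columns can be seeked. A hedged sketch of the same filters in a more index-friendly shape (the suffix tests on uri have no SARGable equivalent and stay as residual predicates):

SELECT [User] = CASE
WHEN LEFT(clientusername,3) = 'domain' THEN RIGHT(clientusername,LEN(clientusername) - 3)
ELSE clientusername
END,
logtime AS [Date],
desthost AS [Site],
uri AS [Actual Site]
FROM webproxylog
WHERE logtime >= '20080420' AND logtime < '20080515' -- half-open range, no CONVERT on the column
AND mimetype LIKE 'text/h%'                          -- prefix LIKE instead of LEFT(mimetype,6)
AND clientusername = 'USERNAME'
AND [action] = 9
AND RIGHT(uri,4) NOT IN('.css','.jpg','.gif','.png','.bmp','.vbs')
AND RIGHT(uri,3) NOT IN('.js')
AND uri NOT LIKE '%sometext.local%'
AND uri NOT LIKE '%sometext.co.uk%'
ORDER BY logtime ASC;

With that shape, a hypothetical index such as (clientusername, [action], logtime) INCLUDE (desthost, uri, mimetype) could seek straight to the requested range instead of scanning.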





PS There are 60,078,605 rows in the table

Please help!

Many Thanks

View 6 Replies View Related

Query Logical, Scan Reads?

Dec 22, 2000

Hi Everybody,

One of my friends asked me, "How do we reduce the logical and scan reads of a query
in SQL Server?".

I really don't know how to answer him.

Can anybody explain this to me?

thanks,
Srini
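Broadly, you reduce logical reads by giving the query an index it can seek on and by asking for fewer rows and columns. A quick way to measure whether a change helped - a sketch using a hypothetical dbo.Orders table, era-appropriate for SQL Server 2000 (covering composite index rather than INCLUDE):

SET STATISTICS IO ON

-- baseline: note the 'logical reads' figure in the Messages tab
SELECT OrderDate FROM dbo.Orders WHERE CustomerID = 42

-- hypothetical covering index for the query above
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID, OrderDate)

-- rerun: logical reads should drop from a full scan to a few seek pages
SELECT OrderDate FROM dbo.Orders WHERE CustomerID = 42

SET STATISTICS IO OFF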

View 2 Replies View Related

Profile Logical Reads Versus STATISTICS IO

May 5, 2015

Why is there often such a dramatic discrepancy between the logical reads recorded in the trace file versus the output of STATISTICS IO?

In the server-side trace I have running I found a reporting procedure that shows having 136,949,501 reads (yes, in hundreds of millions), and it's taking 13,508 seconds to complete.

So I pull the code from the trace and execute it via SSMS - it runs < 1 second, and only generates about 4,000 reads (using various different parameters I get the same result)

The execution plan shows nothing unusual
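Common explanations for the gap: the trace Reads column counts every logical read in the batch, including reads done inside scalar UDFs and nested procedures that STATISTICS IO never attributes to the outer statement, and the traced executions may be using a different cached (parameter-sniffed) plan than the one SSMS compiles fresh. A hedged cross-check is to look at what the cached plan itself has accumulated (the procedure name here is hypothetical):

SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.execution_count,
       ps.total_logical_reads,
       ps.total_logical_reads / ps.execution_count AS avg_logical_reads,
       ps.last_logical_reads
FROM sys.dm_exec_procedure_stats AS ps
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = N'MyReportProc';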

View 5 Replies View Related

Query Optimization: CPU Speed Or Logical Reads Better?

Dec 12, 2005

How do I determine which method I should use if I want to optimize the performance of a database? I took the Northwind database to run my example. My query: I want to retrieve the First and Last Names of the employees that sold between $100,000 and $200,000.

First let me create a function that takes the EmployeeID as the input parameter and returns the Employee's First and Last name:

CREATE FUNCTION dbo.GetEmployeeName(
@EmployeeID INT)
RETURNS VARCHAR(100)
AS
BEGIN
DECLARE @NAME VARCHAR(100)
SELECT @NAME = FirstName + ' ' + LastName
FROM Employees
WHERE EmployeeID = @EmployeeID
RETURN ISNULL(@NAME, '')
END

My first method to run this:

SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID) AS Employee,
SUM(UnitPrice * Quantity) AS Amount
FROM Orders
JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
GROUP BY EmployeeID, dbo.GetEmployeeName(EmployeeID)
HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000

It runs in 4 seconds. And here are the Statistics IO and Time results:

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time: CPU time = 17 ms, elapsed time = 17 ms.
(3 row(s) affected)
Table 'Order Details'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3934 ms.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3935 ms.
SQL Server Execution Times: CPU time = 3844 ms, elapsed time = 3935 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.

Now my 2nd method:

IF (SELECT OBJECT_ID('tempdb..#temp_Orders')) IS NOT NULL
DROP TABLE #temp_Orders
GO
SELECT EmployeeID, SUM(UnitPrice * Quantity) AS Amount
INTO #temp_Orders
FROM Orders
JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
GROUP BY EmployeeID
HAVING SUM(UnitPrice * Quantity) BETWEEN 100000 AND 200000
GO
SELECT EmployeeID, dbo.GetEmployeeName(EmployeeID), Amount
FROM #temp_Orders
GO

It runs in 0 seconds. And here are the Statistics IO and Time results:

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
Table '#temp_Orders0000000000F1'. Scan count 0, logical reads 1, physical reads 0, read-ahead reads 0.
Table 'Order Details'. Scan count 830, logical reads 1672, physical reads 0, read-ahead reads 0.
Table 'Orders'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 15 ms, elapsed time = 19 ms.
(3 row(s) affected)
SQL Server Execution Times: CPU time = 15 ms, elapsed time = 19 ms.
SQL Server Execution Times: CPU time = 15 ms, elapsed time = 20 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 1 ms.
(3 row(s) affected)
Table '#temp_Orders0000000000F1'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 3 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 3 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 3 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.

By the way, why does "SQL Server Execution Times" appear 3 times and not just once?

Summary: the first code is clean, one single SELECT statement, but takes 4 long seconds to execute; its logical reads are very few compared to the second method. The second code is less clean and uses a temp table, but takes 0 seconds to execute; its logical reads are way too high compared to the first method.

What am I supposed to conclude from this example? Which method should I use over the other, and why? Are both methods good depending on which I prefer? If I can wait four seconds, is it better to reduce the logical reads in order to cause less blocking on the live tables in a heavily accessed database? Which method should I choose for my own database?

Calling a function like dbo.GetEmployeeName gets processed per each returned row, correct? That means, if I had a scenario where 1000 records were to be returned, would it be better to dump the 1000 records to a temp table and then call the function to process each record one at a time? Or would the direct approach without a temp table cause slower processing and more blocking/deadlocks, because I am calling the function per each row as I am accessing the tables directly?

Thank you

View 1 Replies View Related

STATISTICS IO: Scan Count Vs Logical Reads

Apr 12, 2006

Hello

I'm doing some performance research. I have an index with the following column order: ClientId, Active, ProductId. Active is a bit field telling whether the product is active or not; there can be more inactive products than active, but there is always at least one active product.
When I'm executing



SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20)

I'm getting the following result: Scan count 1, logical reads 490



When I lead SQL Server down the right path by including the two possible values of Active, executing the following SQL:


SELECT * FROM [table] WHERE ClientId = [id] AND ProductId IN (1,2,3,5,7,9,20) AND Active IN (0,1)

I'm getting the following result: Scan count 14, logical reads 123



With this information, which version would you say is fastest and why?

When I ran this query 1000 times with different ClientIds, I got an average time of 172 ms for the first query and 155 ms for the second one. I have been told that scan count is very expensive... from this example it seems that the cost of 1 scan count is about 20 logical reads?

View 8 Replies View Related

Statistics IO Logical Reads Sometimes 15 Million And Other Times Down To 90 Thousand?

Mar 4, 2006

I am running a query in SQL 2000 SP4, Windows 2000 Server, that is not being shared with any other users or any SQL connections. The db involves a lot of tables, JOINs, LEFT JOINs, UNIONs, etc... OK, it's not pretty code and my job is to make it better.

But for now, one thing I would like to understand with your help is why the same SP on the same server, with everything the same and without me changing anything at all in terms of SQL Server (configuration, code, ...), runs in Query Analyzer in 1:05 minutes, and I see one table get a hit of 15 million logical reads:

Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 147, read-ahead reads 0.

This 'TABLE1' has about 400,000 records.

The second time I ran it, right after, in Query Analyzer again:

Table 'TABLE1'. Scan count 2070, logical reads 15516368, physical reads 0, read-ahead reads 0.

I can see now the physical reads being 0, as it is understandable that SQL is now fetching the data from memory.

But the third time I ran:

Table 'TABLE1'. Scan count 28, logical reads 87784, physical reads 0, read-ahead reads 0.

The Scan count went down from 2070 to 28. I don't know what the Scan count actually is - it scanned the table 28 times? The logical reads went down from 15 million to 87,784 reads, with 2 seconds execution time!

Anybody have any ideas why these numbers change? The problem is I tried various repeats of my test: I rebooted the SQL Server, dropped the database, restored it, ran the same exact query, and it took 3-4-5 seconds with 87,784 reads vs 15 million. Why don't I see 15 million now?

Well, I kept working during the day and I happened to run into another set of 15 million reads again. A few runs would keep going at the pace of 15 million over 1 minute, and eventually the numbers went back down to 87,784 and 2 seconds.

Is it my way of using the computer? Maybe I was opening too many applications and SQL was fighting for memory? Would that explain the 15 million reads? I went and changed my SQL Server to use a fixed memory of 100 megs, restarted it and tested the same query again, but it continued to show 87,784 reads with 2 seconds execution time. I opened all kinds of applications, redid the same test, and I was never able to see 15 million reads again.

Can someone help me with suggestions on what this problem could be, and whether I could find a way to see 15 million reads again?

By the way, with the limited info you have here about the database I am using: is 87,784 reads a terrible number of reads, average, or normal, when the max records in the many tables involved in this SP is 400,000 records? I am guessing it is a terrible number, am I correct?

I would appreciate your help. Thank you

View 4 Replies View Related

SQL Server 2012 :: Admission Dates Overlap And Summing Of Inpatient Costs

Jun 28, 2015

I've been struggling with this for some time. We have to group data based on patients' admission and discharge dates. If a patient's discharge date + 1 day equals the admission date of the patient's next row, we have to group both rows into one row and sum the costs from both rows. Please check out the sample input and expected output for details.

Sample Input
PatientID AdmissionDate DischargeDate Cost
1009 27-07-2014 31-07-2014 1050
1009 01-08-2014 23-08-2014 1070
1009 31-08-2014 31-08-2014 1900
1009 01-09-2014 14-09-2014 1260
1009 01-12-2014 31-12-2014 2090
1024 07-06-2014 28-06-2014 1900
1024 29-06-2014 31-07-2014 2900
1024 01-08-2014 02-08-2014 1800

Expected Output
PatientId AdmissionDate DischargeDate Cost
1009 27-07-2014 23-08-2014 2120
1009 31-08-2014 14-09-2014 3160
1009 01-12-2014 31-12-2014 2090
1024 07-06-2014 02-08-2014 6600

Please Use the below script to generate the source table and fill them up with the sample data.

--Create Table
CREATE TABLE PatientProblem
(
PatientID INT,
AdmissionDate DATETIME,
DischargeDate DATETIME,
Cost MONEY

[Code] ......
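This is a classic gaps-and-islands problem. A minimal sketch against the PatientProblem table above, assuming SQL Server 2012 for LAG: flag each row that does not continue the previous stay, turn the running total of flags into a group number, then aggregate per group.

;WITH Flagged AS
(
    SELECT PatientID, AdmissionDate, DischargeDate, Cost,
           -- 0 when this admission starts the day after the previous discharge
           CASE WHEN DATEADD(DAY, 1, LAG(DischargeDate)
                     OVER (PARTITION BY PatientID ORDER BY AdmissionDate)) = AdmissionDate
                THEN 0 ELSE 1 END AS IsNewStay
    FROM PatientProblem
),
Grouped AS
(
    SELECT *,
           SUM(IsNewStay) OVER (PARTITION BY PatientID ORDER BY AdmissionDate
                                ROWS UNBOUNDED PRECEDING) AS StayNo
    FROM Flagged
)
SELECT PatientID,
       MIN(AdmissionDate) AS AdmissionDate,
       MAX(DischargeDate) AS DischargeDate,
       SUM(Cost) AS Cost
FROM Grouped
GROUP BY PatientID, StayNo
ORDER BY PatientID, AdmissionDate;

Against the sample data this returns the four expected rows (costs 2120, 3160, 2090 and 6600).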

View 9 Replies View Related

Slow Running Query With A High Lob Logical Reads, Lob Data

Jun 1, 2007

We had some slowdown complaints lately, and this query seems to be the culprit almost every single time. The estimated execution plan is a clustered index seek, as there is a clustered index on the uidcustomerid column. Setting profile statistics on shows that every time it executes it does an index seek.

A Profiler session showed a huge number of reads for these queries depending on the value being looked up - 1500 through 50000. I set up the Profiler IO columns, and the culprit is lob logical reads; everything else is 0 or very low. In this case lob logical reads is over 1700.

3 of the columns in the select statement are text columns. When I take them out of the query the lob logical reads drop to 0, and they go up incrementally as I add each column back in.

Is there any way to improve the performance without changing the data types to varchar(max)?


select SID,Last_name,Name_2,First_name,Middle_initial,Descriptives,Telephone_number,mainline,Residence,ADL,
DID_number,Svce_street,Svce_town,Svce_state,Svce_appt,Mailing_street,Mailing_town,Mailing_state,Mailing_appt,
Mailing_zip,Listing,Addl_listing,Published,Listed,Gold_number,PIN,status,SSnumber,tax_jurisdiction,
Bill_date,Past_balance,Service_start_date,Service_end_date,LOA,FCC_type,Line_type,I_W,Jacks,Voice_messaging,
vms_ring_cycles,CCS,phonesmarts,ringmate,voice_dialing,Bill_detail,Contact_Number,Contact_extension,
Best_Time,suspend,suspend_start,suspend_end,credits_allowed,credits_granted,home_region,Calling_Plan,Local_Plan,
Local_Plan_Rate,Flat_Rate,Sales_agent,Community,Building_Mgmt,How_Heard,Incentive_1,Incentive_1a,Incentive_1b,
Incentive_1c,Incentive_2,Incentive_2a,Incentive_2b,Incentive_2c,Incentive_3,Incentive_3a,Incentive_3b,
Incentive_3c,block_operator,block_collect,block_group,block_adult,block_call_return,block_repeat_dialing,
block_call_trace,block_caller_id,block_anonymous,block_all_high_toll,block_regional_and_ld,block_DA_Call_Completion,
block_DA,block_3rd_party,bank,prepayment,dial_around_number,custid,waive_interest,Financial_Treatment,
Other_Feature_1_code,Other_Feature_1_rate,Other_Feature_2_code,Other_Feature_2_rate,Other_Feature_3_code,
Other_Feature_3_rate,Other_Feature_4_code,Other_Feature_4_rate,Partial_Account,mail_date,snp_1_date,snp_2_date,
terminate_date,snp1notified,snp1peak,snp1offpeak,snp2notified,snp2peak,snp2offpeak,avg_days_paid,Pulled_Ld,SNP1,
SNP2,Treatment,Collections,Installment,Nynex_BTN,LD_rate,local_discount,to_month,rounds_up,full_package_made,
local_made,PIC,LPIC,tax_exempt_local,tax_exempt_federal,CommissionedAgent,LDRateID,UidCustomerId,
accVchLineClassUSOC,block_Inter_Reg_LD,block_international,block_DA_3rd_Collect,block_DH2,block_ISP_2,block_ISP_3,
block_ISP4_3_GBAS,block_ISP3_3_GBAS,block_collect_only,block_LD_Reg_DA,block_usage_based,block_ISP5_3_GBAS,
block_ISP5_2_GBAS,block_group_adult,csr_PIC,csr_LPIC,csr_SA,csr_exception,cutover_status,cutover_datetime,
OutsideAgent,prfVchAttributes,uidResellerID,Category,uidDealID
from profiles where UidCustomerID in (352199267)
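Two options short of changing the data types: have callers select the text columns only when they actually need them, or use the 'text in row' table option so that small text values are stored on the data page instead of on separate LOB pages. A hedged sketch (sp_tableoption is the real procedure; the 1000-byte limit is an example value, and existing values only migrate as they are next updated):

-- keep text values up to 1000 bytes on the data row instead of LOB pages
EXEC sp_tableoption 'profiles', 'text in row', '1000';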

View 3 Replies View Related

SQL 2012 :: Deadlocking Under Repeatable Reads

May 5, 2015

Just migrated an application from Oracle to SQL Server, and we are seeing a lot of deadlocking and blocking. I did notice that the app seems to be passing an isolation level of repeatable read. Attached is a .doc of one of the deadlocks. Is there a way to avoid these under the repeatable read isolation level? This example is a select on two tables, using NCIs that cover the where clause, plus an insert doing just a clustered index insert. Is the fix simply to get rid of repeatable read if it's not needed (I guess I have to check with the vendor on that), or is there a way to make this not deadlock while still using repeatable read?
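One hedged option, if the vendor will allow it: enable snapshot isolation and have the sessions that currently request REPEATABLE READ use SNAPSHOT instead - readers then work from row versions and stop taking the shared locks that participate in these deadlocks. Semantics change (writers no longer block readers, and write-write conflicts raise errors), so this is a sketch needing vendor sign-off; the database name is hypothetical:

ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- in the application's connection, instead of REPEATABLE READ:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;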

View 2 Replies View Related

Log Reads In SQL Server 2005

May 31, 2006

I have a set of triggers that log the history of changes to a table - i.e. I record inserts, updates, deletes (pretty standard audit stuff, I suppose). I want to also log reads on that data. If I were using sprocs for reading data, this would be relatively painless, but I am using an O/R mapper to handle my data access, which writes dynamic SQL at runtime (and I don't want to use sprocs with it) and then sends it down to the DB. Is there a way I can intercept reads and log them to the same table I am logging other actions to? I know very little about the new capabilities of SQL Server 2005, but I would think I could somehow, maybe via the new CLR capabilities or similar, get access to these types of events within the database. I know I could always do this higher up in the application layers, but I would like to keep all of this at the database level if possible. Thanks
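On SQL Server 2005, one way to catch reads issued as ad-hoc batches is a server-side trace filtered on the table name. It cannot write into your audit table directly, but the trace file can be loaded with fn_trace_gettable and copied in. A hedged sketch (the path and table name are made up; event 12 is SQL:BatchCompleted, columns 1/11/14 are TextData/LoginName/StartTime):

DECLARE @TraceID int, @maxfilesize bigint;
SET @maxfilesize = 100;
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\read_audit', @maxfilesize, NULL;

EXEC sp_trace_setevent @TraceID, 12, 1, 1;   -- TextData
EXEC sp_trace_setevent @TraceID, 12, 11, 1;  -- LoginName
EXEC sp_trace_setevent @TraceID, 12, 14, 1;  -- StartTime

-- 6 = LIKE: keep only batches that mention the audited table
EXEC sp_trace_setfilter @TraceID, 1, 0, 6, N'%MyAuditedTable%';

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace

-- later, load the captured batches into the audit table:
-- SELECT TextData, LoginName, StartTime
-- FROM fn_trace_gettable(N'C:\Traces\read_audit.trc', DEFAULT);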

View 1 Replies View Related

SQL 2012 :: Is Profiler Reads Column Incorrect For Parallel Plans

Aug 19, 2015

Is the SQL Server Profiler Reads Column Incorrect For Parallel Plans?

I often use profiler as one tool to identify bad plans. The reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.

However, I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated and one table is being scanned (in parallel) due to a missing index. This table had in excess of 4 million rows in it. The rest hit indexes well. However, the entire query generates ONLY 12 READS.

Once corrected, a single-processor plan is used. This looks really efficient and uses 120 reads. That looks like the right figure to me.

Does the profiler only display one thread of a parallel plan perhaps? Or something else?

View 9 Replies View Related

SQL Server 2008 :: Disk Reads And Writes

Nov 5, 2015

How can I measure the disk reads and writes to see if I need to add additional disks to the server?
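Since SQL Server 2005, the per-file I/O counters in sys.dm_io_virtual_file_stats (cumulative since instance start) are the easiest place to look; two snapshots a few hours apart give read/write rates and stall times per file:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.io_stall_read_ms,
       vfs.num_of_writes,
       vfs.num_of_bytes_written,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;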

View 2 Replies View Related

Can REPLICATION On SQL Server 2000 Allow Dirty Reads

Dec 1, 2005

All my queries are being blocked while the tables are being replicated, and it is causing some 2-minute blocking. Is there a way for the replication to allow dirty reads? I really don't care about that - I would rather have dirty reads than 2-minute waits. Thanks.
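Short of changing the replication setup, the usual workaround is to let the affected queries read uncommitted data explicitly - either per query with the NOLOCK hint or per session. A sketch with a hypothetical table:

-- per query:
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE CustomerID = 42

-- or for the whole session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED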

View 1 Replies View Related

Reads And Writes To A Sql Server Database Per Table

Aug 1, 2006

Is it possible to find the reads/writes to a SQL Server table?
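Yes - since SQL Server 2005, sys.dm_db_index_usage_stats accumulates this per index (since the last instance restart), so summing over a table's indexes gives reads (seeks + scans + lookups) and writes per table. A sketch, run inside the database of interest:

SELECT OBJECT_NAME(us.object_id) AS table_name,
       SUM(us.user_seeks + us.user_scans + us.user_lookups) AS reads,
       SUM(us.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
  AND OBJECTPROPERTY(us.object_id, 'IsUserTable') = 1
GROUP BY us.object_id
ORDER BY reads DESC;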

View 2 Replies View Related

SQL Server 2008 :: Create Test Portfolio Data Summing To 100%?

Feb 5, 2015

I'm building a proc to generate fake stock portfolios for testing. I have a list of thousands of symbols, and I want the tester to be able to select how many symbols they want in their fake portfolio, and then give each symbol a random weighting (i.e. percentage held in that security) which, across all the symbols, sums to 100%. The securities here are not the part I care about, it's the weightings summing to 100 that's important.

So test data would look something like this:

/*
--This is the repository of potential symbols I can add to a fake portfolio.
-- So the simple part is basically select top (@symbolCt) from #PossibleSymbols, plus some magic I have yet to determine
if object_id('tempdb.dbo.#PossibleSymbols') is not null drop table #PossibleSymbols
create table #PossibleSymbols
(
SymbolID int
)
insert into #PossibleSymbols (SymbolID)

[code]....
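A hedged sketch of the missing "magic": pick @symbolCt random symbols, give each a random raw weight, then normalize by the sum so the weights total exactly 100 (up to rounding in the final display). The temp table name is made up:

declare @symbolCt int = 25;

if object_id('tempdb.dbo.#FakePortfolio') is not null drop table #FakePortfolio
select top (@symbolCt)
       SymbolID,
       rand(checksum(newid())) as RawWeight  -- per-row random value in [0, 1)
into #FakePortfolio
from #PossibleSymbols
order by newid();                            -- random pick of symbols

-- normalize: each weight divided by the total, scaled to 100
select SymbolID,
       RawWeight * 100.0 / sum(RawWeight) over () as WeightPct
from #FakePortfolio
order by WeightPct desc;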

View 0 Replies View Related

SQL Server 2008 :: High Volume Reads And Writes?

Jul 6, 2015

We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database that has intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since historically we have had constant issues with the current environment when massive updates are happening. Reads should have priority over writes.

View 2 Replies View Related

SQL Server Admin 2014 :: Indexes With More Writes Than Reads

Jul 17, 2015

I have inherited a database that is over-indexed, i.e. there are sometimes 10-20 indexes on a table. The performance is at times not great due to blocking from long running queries. I want to clean up the indexes as a starting point.

Through a query I found some time ago on the SQLCat blog I have discovered a large number of indexes in the database that have a huge disparity between reads and writes. The range of difference is sometimes almost 2 million more writes than reads. Should I just drop the indexes that have say, more than 100,000 more writes than reads and then see what the Missing Index DMVs tell me after a few days of running without those indexes?

In some cases there are a few hundred thousand reads but maybe a million writes on the index. Thus, there are a fair number of reads happening, just not in comparison to the number of writes. In some cases there are almost no reads and a million or more writes. I am obviously dropping those indexes. I just am not sure what to do about the indexes that do have a fair number of reads.
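For reference, a sketch of the kind of per-index comparison involved (sys.dm_db_index_usage_stats, nonclustered indexes only); since the counters reset at instance restart, it's worth checking uptime before trusting the read counts:

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       us.user_seeks + us.user_scans + us.user_lookups AS reads,
       us.user_updates AS writes
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS us
  ON us.object_id = i.object_id AND us.index_id = i.index_id
WHERE us.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
ORDER BY us.user_updates - (us.user_seeks + us.user_scans + us.user_lookups) DESC;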

View 9 Replies View Related

What Is The Best Way To Achieve Best Latency For Reads And Writes To SQL Server 2005?

Oct 18, 2007

I am looking into various options to improve the latency of our application (we figured the latency is mainly due to data persistence - writes and reads from the DB). I am looking into in-memory databases also. But before making that decision (of using an in-memory database), I would like to see if there is a way to configure SQL Server 2005 to get performance as close as possible to an in-memory database.

My questions:
1. Is there a way that I can configure SQL Server 2005 to use a CACHE that gets loaded on an as-needed basis, so that future database reads/writes will happen against the cache as opposed to disk?
2. Is SQL Server 2005 recoverable in such configurations?
3. Are there any ideas/resources where I can get more details? (Such as sample configurations with benchmark numbers, previous experiences, etc.)

Thanks
Murthy

View 1 Replies View Related

SQL 2012 :: Difference Between Logical And Physical Join?

Jul 9, 2014

What is the difference between a Logical Join and a Physical Join, and what are their types?

View 6 Replies View Related

SQL 2012 :: Renaming Logical Filename Of A Database?

Mar 27, 2015

If I run an ALTER DATABASE command that renames the Logical file name of a database, will I run into any unseen consequences in terms of day to day use of the database?
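The rename itself is a metadata-only change; day-to-day use doesn't reference the logical name unless your own scripts do - e.g. RESTORE ... WITH MOVE or DBCC SHRINKFILE calls that name it. A minimal sketch with hypothetical names:

ALTER DATABASE MyDb
MODIFY FILE (NAME = N'OldLogicalName', NEWNAME = N'MyDb_Data');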

View 4 Replies View Related

SQL 2012 :: Alter Logical FileName During Restore

Aug 12, 2015

Can two databases have same logical filenames in the same instance?

In my instance, database already exists say A.

I have to restore DB B with the same backup used for database A in the same instance.

So, once the database B is restored, A and B will have same logical fileNames.

Can I leave it as is? Or do i alter only after B is restored?

Or is there a way where I can change the logical file for B while restoring?
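Logical file names only have to be unique within a database, so A and B can keep the same ones indefinitely; a rename is cosmetic. RESTORE itself cannot change logical names (WITH MOVE refers to the logical names in the backup and only relocates the physical files), so the rename has to happen after the restore. A sketch with hypothetical names and paths:

RESTORE DATABASE B
FROM DISK = N'D:\Backups\A.bak'
WITH MOVE N'A_Data' TO N'D:\Data\B.mdf',   -- logical names come from the backup of A
     MOVE N'A_Log'  TO N'E:\Log\B_log.ldf';

ALTER DATABASE B MODIFY FILE (NAME = N'A_Data', NEWNAME = N'B_Data');
ALTER DATABASE B MODIFY FILE (NAME = N'A_Log',  NEWNAME = N'B_log');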

View 1 Replies View Related

SQL Server Admin 2014 :: Why Number Of Reads Increases During Insert Test

Jun 2, 2014

I am writing a performance baseline test.

The first test writes 5000000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs vmdk, 1GB vs 10GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...)

The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.

In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB

The script assures the number of bytes inserted in the nvarchar columns is always the same.

This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (column 5 and 6)

Since circa 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see at that moment:

- buffer cache hit ratio is decreasing
- page life expectancy becomes terrible
- free list stall/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)

In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).

Now what I do not understand is :

Why do the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increase so dramatically during run 1?

I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually, you can see here that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4000000 rows, but there's no real impact on write latency.

Finally I want to notice

- I'm the only user on this machine
- the table has a clustered index on a identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loops

So I wonder why SQL Server starts to execute so many reads in test 1.

View 4 Replies View Related

SQL Server 2014 :: Profiler Reads Column Incorrect For Parallel Plans?

Aug 19, 2015

I often use profiler as one tool to identify bad plans. The reads column gives me a good indication of excessive IO to dig into and correct if necessary. I often use it with Showplan so I can see what a query does, replicate it and fix it.

However, I have just lost some faith in it. I am looking at a poorly performing query joining five tables. A parallel plan has been generated and one table is being scanned (in parallel) due to a missing index. This table had in excess of 4 million rows in it. The rest hit indexes well. However, the entire query generates ONLY 12 READS.

Once corrected, a single-processor plan is used. This looks really efficient and uses 120 reads. That looks like the right figure to me.

Clearly 12 reads is wrong. Does the profiler only display one thread of a parallel plan perhaps? Or something else?

View 1 Replies View Related

Logical XOR Operator In T-SQL (SQL Server 2000)

Sep 5, 2006

Hi all....

I was wondering why SQL Server 2000 lacks a logical XOR operator. Do you have a ready-to-use function that implements it?

How can I program one myself? I know that A XOR B is equivalent to (NOT A AND B) OR (A AND NOT B). So I tried to create this function:

CREATE FUNCTION XOR
(
    -- Add the parameters for the function here
    @A BIT,
    @B BIT
)
RETURNS BIT
AS
BEGIN
    RETURN NOT @A AND @B OR @A + NOT @B
END

But it didn't compile.

How can I pass boolean values as parameters? any other hint?
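For what it's worth, a version that does compile - a minimal sketch relying on the bitwise ^ operator, which works on BIT parameters (T-SQL has no boolean data type, so NOT/AND/OR cannot appear in a RETURN expression):

CREATE FUNCTION dbo.XOR
(
    @A BIT,
    @B BIT
)
RETURNS BIT
AS
BEGIN
    -- bitwise exclusive-or: 1 when exactly one input bit is 1
    RETURN @A ^ @B
END

-- usage: SELECT dbo.XOR(1, 0) returns 1; SELECT dbo.XOR(1, 1) returns 0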

Thanks

Jaime

View 12 Replies View Related

SQL Server Detected A Logical Consistency-based I/

Mar 16, 2008

This local test DB was doing just fine. I fire up SQL Server again, and all of a sudden I can't access it (expand the database in Management Studio). So I detach it and try to run this script, which was run initially to re-attach it:

USE [master]
GO
CREATE DATABASE [ABS] ON
( FILENAME = N'C:\Database\ABS.mdf' ),
( FILENAME = N'C:\Database\ABS_log.ldf' )
FOR ATTACH
GO
if not exists (select name from master.sys.databases sd where name = N'CMS' and SUSER_SNAME(sd.owner_sid) = SUSER_SNAME() ) EXEC [CMS].dbo.sp_changedbowner @loginame=N'domain\testuser', @map=false
GO

and I get this error: SQL Server detected a logical consistency-based I/O error: torn page

View 1 Replies View Related

SQL Server 2008 :: Error Renaming Logical Filename

Feb 27, 2015

I have a SQL Server 2008 R2 Cluster. I have moved a file to another drive by adding a new file on the drive, doing a shrinkfile with emptyfile. I have removed the old file and when I try to rename the new file to the old files old name, I get an error.

ALTER DATABASE mydb REMOVE FILE myfilename
Msg 5009, Level 16, State 9, Line 1
One or more files listed in the statement could not be found or could not be initialized.

ALTER DATABASE mydb
modify FILE ( NAME = temp_filename
, NEWNAME = myfilename
)
Msg 1828, Level 16, State 3, Line 4

The logical file name "myfilename" is already in use. Choose a different name.

I have completed a failover, thinking the name was being held in cache. I've also renamed the file to something completely different, then tried to rename it to the old filename.

View 9 Replies View Related

SQL Server 2008 :: Renaming Logical File Names

Apr 30, 2015

Is there any danger with renaming the LOGICAL file names behind the database?

There are a bunch of databases that were restored copies and all of them have the same logical database file name. I'm trying to get some growth data so I want the logical files to be different (prefer them to match the actual database name) so I can more easily identify them.

For instance:

database_id name type_desc name physical_name
1 DLMdb1 ROWS DLMDB1 D:\dlmdb1.mdf
1 DLMdb1 LOG DLMDB1_log E:\dlmdb1.ldf
2 DLMdb2 ROWS DLMDB1 D:\dlmdb2.mdf
2 DLMdb2 LOG DLMDB1_log E:\dlmdb2.ldf
3 DLMdb3 ROWS DLMDB1 D:\dlmdb3.mdf
3 DLMdb3 LOG DLMDB1_log E:\dlmdb3.ldf

Am I safe to rename the logical names? I can't think of anything that references those logical file names that I would be breaking [backups, applications].

View 3 Replies View Related

SQL Server Detected A Logical Consistency-based I/O Error

Mar 15, 2007

Did anyone familiar with these errors:

SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x1a00d2fd; actual: 0x1a02d2fd). It occurred during a read of page (1:103329) in database ID 11 at offset 0x00000032742000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\Server02.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). (Source: MSSQLServer, Error number: 824)
Get help: http://help/824

Thank you.

TJ_1



View 17 Replies View Related

SQL Server 2008 :: Logical And Physical File Names Of All Databases

Apr 3, 2011

Is there a query or sp that I can pull all databases on the server along with their logical file names and physical file names?
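sys.master_files covers every database from one query (assuming SQL Server 2005 or later):

SELECT DB_NAME(database_id) AS database_name,
       name AS logical_name,
       type_desc,
       physical_name
FROM sys.master_files
ORDER BY database_name, type_desc;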

View 5 Replies View Related

SQL Server Admin 2014 :: TempDB Configuration For 24 Logical Cores

Aug 27, 2015

I have a server with 2 processors, 12 cores each making it a 24 core logical system.

My tempdb has 4 datafiles and 4 logfiles created.

I know creating more log files in tempdb may not make sense, but how many tempdb data files are supposed to be created?

Should it be 24?
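Not necessarily - the common guidance (e.g. KB 2154845) is one data file per logical core up to eight, so eight equally sized files here, and only adding more (in groups of four) if PAGELATCH contention in tempdb is still observed. A hedged sketch of adding one more equally sized file (path, name and sizes are made up):

ALTER DATABASE tempdb
ADD FILE (NAME = N'tempdev5',
          FILENAME = N'T:\TempDB\tempdev5.ndf',
          SIZE = 4096MB,
          FILEGROWTH = 512MB);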

View 4 Replies View Related

SQL Server 2012 :: How To Insert Data Into Table From Linked Server

Nov 19, 2013

I wonder if it is possible to move data from tables on a linked server to a "normal" database?

Name linked server: Covas
Name table on linked server: tblCountries
Name field: cntCountryName

Name "normal" database: CovasCopy
Name "normal" table: Countries (or dbo.Countries)
Name "normal" field: Country

This is just a test setup. I figure that if I get this working the rest will be easier.

My current query:
select * from openquery(COVAS,'
INSERT INTO CovasCopy.dbo.Countries(Country)
SELECT cntCountryName FROM db_covas.tblCountries;')
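The OPENQUERY above won't do what's intended: the quoted string is executed on the remote server, where CovasCopy doesn't exist. Two hedged variants that run the INSERT locally (the four-part name assumes db_covas/dbo is the remote database and schema; if the linked server is not SQL Server, the OPENQUERY form is the safer bet):

-- four-part naming:
INSERT INTO CovasCopy.dbo.Countries (Country)
SELECT cntCountryName
FROM COVAS.db_covas.dbo.tblCountries;

-- or keep OPENQUERY, but as the source instead of the target:
INSERT INTO CovasCopy.dbo.Countries (Country)
SELECT cntCountryName
FROM OPENQUERY(COVAS, 'SELECT cntCountryName FROM db_covas.tblCountries');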

View 8 Replies View Related

SQL Server 2012 :: Query To Return Top 10 Tables On Each Database On Server?

Nov 21, 2013

I need a query to return the top 10 tables in each database on a server. I have used EXEC sp_msforeachtable 'sp_spaceused ''?''', which returns what I need for one db, but I need it to loop through and gather the info for all dbs on the server. Maybe I need a cursor, not sure. For reporting reasons I would like to include the name of the server and the name of the database.
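A hedged sketch building on the same idea, using the undocumented sp_MSforeachdb to visit every database (it occasionally skips databases, so treat it as a sketch) and sys.allocation_units to size the tables - top 10 by reserved space per database:

if object_id('tempdb..#TableSizes') is not null drop table #TableSizes;
create table #TableSizes
(
    server_name sysname not null default @@SERVERNAME,
    database_name sysname not null,
    table_name nvarchar(300) not null,
    row_count bigint null,
    reserved_kb bigint null
);

exec sp_MSforeachdb N'
use [?];
insert into #TableSizes (database_name, table_name, row_count, reserved_kb)
select top (10)
       db_name(),
       s.name + ''.'' + t.name,
       sum(case when a.type = 1 then p.rows else 0 end),  -- count rows once (in-row data)
       sum(a.total_pages) * 8
from sys.tables t
join sys.schemas s on s.schema_id = t.schema_id
join sys.partitions p on p.object_id = t.object_id
join sys.allocation_units a on a.container_id = p.partition_id
group by s.name, t.name
order by sum(a.total_pages) desc;';

select server_name, database_name, table_name, row_count, reserved_kb
from #TableSizes
order by database_name, reserved_kb desc;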

View 3 Replies View Related






