I have trouble with MSSQL 2000 SP4 (without any hotfixes). During the last two weeks it has started working abnormally. After the last optimization (a few months ago) it worked well (fast, without blocking). Its buffer cache hit ratio was about 99.7-99.8%. Recently it started working slowly, with a lot of blocking and many deadlocks. No queries, jobs or applications were added. Now the buffer cache hit ratio oscillates around 95-98%. I tried updating statistics and reindexing some heavily used tables, but there is either no effect or the effect is very short-lived (after a few hours the problem returns).
Maybe someone knows what it could be?
Is it possible to estimate (using DBCC SHOW_STATISTICS, DBCC SHOWCONTIG or something else) how much each table affects the total buffer cache hit ratio?
This issue just happened recently. The buffer cache hit ratio went from > 90% to 50% and has slowly been climbing back up over 8 hours or so. It's currently at 76%. Is this something I should take action on immediately? It seems to be coming back to normal...
On Microsoft performance monitor, what is the difference between SQL Server Cache Manager: Cache Hit Ratio and SQL Server Buffer Manager: Buffer Cache Hit Ratio? We have a production server where the buffer cache hit ratio is consistently at 99%, which is normal. However, the cache hit ratio is 73%. What is the difference between the two hit ratios, and why would we have such a significant difference between the two?
Is there a way to drop clean buffers at the database level instead of the server/instance level, like the undocumented "DBCC FLUSHPROCINDB (@dbid)"? Is there a workaround for "dbo" to be able to flush the procedure and data cache without being elevated to the "sysadmin" server role?
PS: I am aware of the sp_recompile option that can be used to invalidate cached execution plans. Thx.
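Two documented alternatives that stay at the object/database level are sketched below; the table name dbo.Orders is made up for illustration. sp_recompile needs only ALTER permission on the referenced object, and on SQL Server 2016 and later the plan cache for a single database can be cleared with ALTER DATABASE SCOPED CONFIGURATION, which needs ALTER ANY DATABASE SCOPED CONFIGURATION in that database rather than sysadmin. Neither clears data pages, though; as far as I know there is no per-database equivalent of DBCC DROPCLEANBUFFERS.

-- invalidate cached plans that reference one object (hypothetical table name)
EXEC sp_recompile N'dbo.Orders';

-- SQL Server 2016+: clear the plan cache for the current database only
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;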
I have been seeing a strange statistic on one of our servers. The cache hit ratio has gone beyond 100%; it is currently showing 124%. Has anyone seen this before?
I have a large Dell server with 4 processors and 8 GB of memory on Windows 2000 Advanced Server with SQL 2000 Enterprise Edition running a 3rd-party app. My cache hit ratio averages about 76%. I thought the general rule was that if you get below 80% you should add more memory. However, my memory manager shows I am only using 71% of my memory and have a full gigabyte available. I have SQL Server set to use about 7.1 GB of the 8 GB on the server. My question is: if I am only using 71% of my memory, will adding more memory actually help my cache hit ratio?
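If the instance cannot actually reach the 7.1 GB you configured, adding RAM will not help. On 32-bit SQL Server 2000 Enterprise under Windows 2000 Advanced Server, memory beyond roughly 2-3 GB is only usable when AWE is enabled (and the OS is booted with /PAE), so it is worth checking that setting before buying memory. A minimal sketch of the relevant configuration, using the 7168 MB target from the post:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;          -- takes effect only after a service restart
EXEC sp_configure 'max server memory', 7168; -- value is in MB
RECONFIGURE;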
Hello,
I am trying to figure out why our SQL server is a bit sluggish from time to time. It is running a dual Xeon with 2.5 GB RAM and a fast SCSI I/O subsystem set up as follows:
OS: mirrored, 2 drives
SQL DATA: 16 HDD RAID 10
SQL LOG: 4 HDD RAID 10
SQL tempdb: 4 HDD RAID 10
OS = Win 2003
SQL = SQL 2000 Standard Edition
DBCC SHOWCONTIG shows me nothing special; it looks OK. I launch Performance Monitor and add SQL Server Cache Manager: Hit Ratio, and it is constantly at 7% and never changes up or down, it is just constant. Can this be correct? If so it sounds rather bad; we have a handful of large tables that are heavily used and enough RAM to hold them all in memory, so I really do not understand why the cache hit ratio is not higher. Any hints would be great.
Regards,
Matt
Hi, in my SQL Server 7.0, when using Performance Monitor, I see in the graph that the Buffer Cache Hit Ratio counter (SQL Server Buffer Manager object) is always at the maximum (100). What does this mean? What is the Buffer Cache Hit Ratio? How should I configure SQL Server for performance? Thanks in advance.
Maybe I am just a lot better at this than I thought, but I figure that somewhere there is a mathematical rule that is being overlooked. When I run dbcc sqlperf (lrustats) on some of my production machines, I sometimes end up with a cache hit ratio (which is defined as a percentage, mind you) that is slightly over the limit:
Statistic                        Value
-------------------------------- ------------------------
Cache Hit Ratio                  100.00898
Cache Flushes                    0.0
Free Page Scan (Avg)             0.0
Free Page Scan (Max)             0.0
Min Free Buffers                 331.0
Cache Size                       4362.0
Free Buffers                     9434.0
I suspect some counter somewhere is getting wrapped around its 4-byte limit. Is there any reliable source for getting statistics about SQL Server performance? Users tend to be unreliable and say everything is slow.
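On SQL Server 2000 a less quirky source than DBCC SQLPERF(LRUSTATS) is master.dbo.sysperfinfo, which exposes the same counters Performance Monitor reads. Ratio counters are stored as a value/base pair, so the hit ratio has to be computed by dividing the counter by its base; a minimal sketch:

SELECT (a.cntr_value * 100.0) / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio_pct
FROM master.dbo.sysperfinfo AS a
JOIN master.dbo.sysperfinfo AS b
    ON b.object_name = a.object_name
   AND b.counter_name = 'Buffer cache hit ratio base'
WHERE a.object_name LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio';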
I'm putting together some monitoring scripts. I have the buffer cache hit ratio etc., but I'm struggling to get an accurate script for the current procedure cache hit ratio...
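One approach is to compute it from the performance-counter DMV, keeping in mind that SQL Server stores ratio counters as a value/base pair. A sketch, assuming a version where the counter object is named Plan Cache (SQL Server 2000 exposed it under Cache Manager instead):

SELECT (a.cntr_value * 100.0) / NULLIF(b.cntr_value, 0) AS plan_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON b.[object_name] = a.[object_name]
   AND b.instance_name = a.instance_name
   AND b.counter_name = 'Cache Hit Ratio Base'
WHERE a.[object_name] LIKE '%Plan Cache%'
  AND a.counter_name = 'Cache Hit Ratio'
  AND a.instance_name = '_Total';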
Help, I have recently upgraded from 6.5 to 7.0 and have come across a problem with performance. The problem appears to relate to the buffer cache being flushed; the buffer cache hit ratio drops from 98% to 0% in a matter of a second. It then grows very slowly, then is flushed again, then increases slowly up to 30%.
Does anyone have any ideas as to what would flush the buffer cache?
My problem is that I cannot completely clear the buffer cache on SQL Server 2005 version 9.00.2047.00 (probably SP1).
Right after I run DBCC DROPCLEANBUFFERS in the context of my database (this is a development server, and so far I am the only one working with this particular database), I run a script that queries the sys.dm_os_buffer_descriptors view, also from the context of my database, to make sure that the buffer cache is really clean. However, it shows a large number of entries totalling 42 MB.
I have run both the DBCC and the script in the past too, and they always showed nothing in the results, meaning the buffers really were clean. The reason I am doing this is benchmarking of an existing and a new application.
Does anybody have any ideas or suggestions on how to troubleshoot this issue? I have already closed all connections to this database, but rebooting the server is not an option since other people are also working on it.
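One thing worth ruling out: DBCC DROPCLEANBUFFERS only evicts clean pages, so any pages dirtied since the last checkpoint stay in memory and will still appear in sys.dm_os_buffer_descriptors. A minimal sketch of the usual benchmarking sequence, followed by a check that filters to the current database and counts how many of the remaining pages are dirty:

CHECKPOINT;                 -- write dirty pages to disk so they become clean
DBCC DROPCLEANBUFFERS;      -- now the (clean) pages can actually be evicted

SELECT COUNT(*) AS remaining_pages,
       COUNT(*) / 128 AS remaining_mb,
       SUM(CASE WHEN is_modified = 1 THEN 1 ELSE 0 END) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID();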
We are troubleshooting a performance problem, and the test result is slow the first time but the subsequent runs are faster. Logging out of the application and logging back in (connecting to a new database session) did not clear the buffer cache as I thought it would. When does the database clear the buffer cache? Is it not per database session?
I can issue CHECKPOINT and then run DBCC DROPCLEANBUFFERS to remove clean buffers from the buffer cache. But since we are testing from the application, do we need to run these commands via application code to clear the buffers per database session, or can we run them from a Management Studio session?
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages, CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB FROM sys.dm_os_buffer_descriptors
Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
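Since the buffer pool itself is only holding around 10 GB, it may help to see which memory clerks are consuming the rest of the 58 GB target. On SQL Server 2012 the clerk totals are exposed in pages_kb (older builds split this into single-page and multi-page columns); a sketch:

SELECT TOP (10)
       [type],
       SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY memory_mb DESC;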
Question: Why am I getting 428 pages for which there is no corresponding DB object? Why are so many pages present in sys.dm_os_buffer_descriptors but missing from sys.allocation_units?
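A way to count the pages in question is sketched below. One hedged explanation: pages that fail this join are often system allocation pages or pages left behind by objects that have since been dropped or rebuilt, and pages belonging to other databases will never match if the database_id filter is omitted, so a small non-zero count is not necessarily a problem.

SELECT COUNT(*) AS unmatched_pages
FROM sys.dm_os_buffer_descriptors AS bd
LEFT JOIN sys.allocation_units AS au
    ON bd.allocation_unit_id = au.allocation_unit_id
WHERE bd.database_id = DB_ID()
  AND au.allocation_unit_id IS NULL;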
I'm getting an alert which states that both my Buffer Cache Hit Ratio and PLE are low on one of my SQL Servers though I'm not sure how to correctly check this.
I ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Buffer cache hit ratio'
Which gives me a Buffer Cache Hit Ratio cntr_value of 9, though it's constantly bouncing between 3 and 3000 and is never steady, and I'm unsure if this is normal.
I also ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
Which gives me the Page life expectancy of 209061.
Should these values cause concern, and is this a normal Buffer Cache Hit Ratio? It's constantly swinging between high and low from what I can see. These scripts were pulled from another forum and I'm assuming they're showing the correct values.
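The raw cntr_value of 'Buffer cache hit ratio' is not a percentage on its own; it is a ratio counter that has to be divided by its companion 'Buffer cache hit ratio base' row, which is why the number appears to bounce around. The Page life expectancy counter, by contrast, can be read directly and 209061 seconds is a very healthy value. A minimal sketch of the ratio calculation:

SELECT (a.cntr_value * 100.0) / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS a
JOIN sys.dm_os_performance_counters AS b
    ON b.[object_name] = a.[object_name]
   AND b.counter_name = 'Buffer cache hit ratio base'
WHERE a.[object_name] LIKE '%Buffer Manager%'
  AND a.counter_name = 'Buffer cache hit ratio';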
I encountered the following error while attempting to preview an RDL report I was developing in VS2010 using SSDT: "The size necessary to buffer the XML content exceeded the buffer quota"
We have a set of reports with the same header section in all of them. While developing a new report I copied that header section to the new report with the same dataset names (without any change), but while rendering the report it throws the error "The size necessary to buffer the XML content exceeded the buffer quota".
I have a master package that executes a series of sub packages run from a SQL Agent job. One of those sub packages has been stable for a week, running at least once per day, but it just failed despite having been run once already today with the same set of input data.
There were a series of errors showing in the event log for the Execute Package Task starting with "Buffer Type 15 had a size of 0 bytes.", then "The buffer manager failed to create a new buffer type.", then "The Data Flow task cannot register a buffer type. The type had 32 columns and was for execution tree 3.", then "The layout failed validation." and finally "Error 0xC0012050 while loading package file "C:[Package].dtsx". Package failed validation from the ExecutePackage task. The package cannot run.".
SQLIS.com reports the constant for the error code as DTS_E_REMOTEPACKAGEVALIDATION ( http://wiki.sqlis.com/default.aspx/SQLISWiki/0xC0012050.html ).
I then ran the package on my dev machine in BIDS and it worked fine, so I re-ran the job on the server and this time that package executed OK, but another one fell over without putting anything in the event log.
I'm experiencing a completely random warning from any given row count component within any given data flow task. It occurs sporadically. While it is distracting, I don't see any adverse effects on the data after the packages complete. Can someone weigh in on this warning and let me know if it is indeed benign, or what I may be able to do to fix it?
Here's the warning:
"A call to the ProcessInput method for input 75997 on component "CNT Rows sent for STG table" (75995) unexpectedly kept a reference to the buffer it was passed. The refcount on that buffer was 4 before the call, and 5 after the call returned."
Hi, I have a sample table with the following definition:

CREATE TABLE "dbo"."tab1" (
    "cola" "int" NOT NULL,
    "colb" varchar (30) NOT NULL,
    "colc" "int" NOT NULL,
    CONSTRAINT "PK___2__10" PRIMARY KEY CLUSTERED ("cola", "colc")
)
GO

After adding some 100000 records to this table I ran the sp_spaceused procedure and it gave me the following information:

name - tab1
rows - 100000
reserved - 2122KB
data - 2084KB
index_size - 30KB
unused - 8KB

I also manually calculated the number of data pages using the following equations, which are available in the SQL Server documentation:

Data row size = 8 (size of fixed-length columns) + 30 (size of all variable-length columns) + 1 (number of variable-length columns) + 7 (overhead) = 46
Rows per page = (2032 - 32) / 46 = 43    (32 = page header size)
Number of data pages = 100000 / 43 = 2325 pages = 2325 * 2 = 4650KB
Can anybody tell me why there is a mismatch between the calculation and sp_spaceused? Am I wrong in some steps? Is there any other way to estimate the size of a table? Your valuable suggestions are appreciated. Regards, Jiji
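One likely source of the mismatch: the 2 KB page size in that formula comes from the pre-7.0 documentation, while SQL Server 7.0/2000 use 8 KB pages with 8096 bytes available for rows, and the documented estimate is rows per page = 8096 / (row size + 2). A rough re-work, assuming colb always carries its full 30 characters (a worst-case assumption):

Data row size  = 8 (fixed columns) + 30 (variable data) + overhead  = about 46-50 bytes
Rows per page  = 8096 / (46 + 2)                                    = about 168 rows
Data pages     = 100000 / 168 = about 596 pages, i.e. 596 * 8 KB    = about 4.7 MB

Since sp_spaceused reports only 2084 KB of data, the varchar column is probably holding far fewer than 30 characters per row, so the actual rows are much smaller than this worst-case estimate.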
Dear Experts, I'm going to create a new database for a client, and the initial size might be 10GB. Please guide me: what should the data file size and log file size be?
I'm not creating secondary data files. Is that OK?
Vinod Even you learn 1%, Learn it with 100% confidence.
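There is no single right answer, but a common starting point for a roughly 10 GB database is to pre-size the data file to the expected size up front and give the log somewhere around 10-25% of that, with fixed-size autogrowth rather than percentage growth. A minimal sketch (the database name, paths and growth values are made up for illustration):

CREATE DATABASE ClientDb
ON PRIMARY
(
    NAME = ClientDb_data,
    FILENAME = 'D:\SQLData\ClientDb_data.mdf',
    SIZE = 10GB,
    FILEGROWTH = 512MB
)
LOG ON
(
    NAME = ClientDb_log,
    FILENAME = 'E:\SQLLogs\ClientDb_log.ldf',
    SIZE = 2GB,
    FILEGROWTH = 512MB
);

A single primary data file is generally fine at this size; secondary data files are usually added later for manageability or to spread I/O across drives, not because they are required.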
I'm getting this error when trying to set up a cache dependency... are there any special permissions etc.?

From CS:
SqlCacheDependency dep = new SqlCacheDependency("MySite-Cache", "Products");
Cache.Insert("Products", de.GetAllProductsList(), dep);

From connectionStrings.config:
<add name="SiteDB" connectionString="Data Source=localhost,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

Also tried this using my machine name:
<add name="SiteDB" connectionString="Data Source=<machinename>,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

From web.config:
<caching>
  <sqlCacheDependency enabled="true" pollTime="10000">
    <databases>
      <add name="MySite-Cache" connectionStringName="SiteDB" pollTime="2000"/>
    </databases>
  </sqlCacheDependency>
</caching>

EDIT: So, making progress, but I can't seem to get the table registered for cache dependency. The sample I have says:
aspnet_regsql.exe -E -S .\SqlExpress -d aspnetdb -t Customers -et
and the command line response is: "Enabling the table for SQL cache dependency.. An error has happened. Details of the exception: The table 'Customers' cannot be found in the database."
Where does this "Customers" table come from? There is obviously not an application-specific "Customers" table in aspnetdb. I'm confused, probably more by the example than anything...
Has anyone come across a more accurate method than sp_spaceused to estimate the size of a full database backup for SQL Server 2000?
I have found it to have too great a variance (even after running updateusage) to rely on for any accuracy. I have also looked at using the allocated pages indicated in the GAM pages, but this also seems to be pretty inaccurate.
I have a number of servers where space can be limited, and backups using Maintenance Plans have occasionally failed because they delete the old backups AFTER they take the latest one. I am writing a script which can check the space remaining and adjust the backup accordingly, but the variance I have observed so far with sp_spaceused is too great.
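One alternative worth trying on SQL Server 2000: a full backup contains the used pages of the data files (plus a little active log), and the used-page count can be read per file with FILEPROPERTY(..., 'SpaceUsed'), which may track actual allocations more closely than the sysindexes counters behind sp_spaceused. A sketch to run in the database before backing it up:

SELECT RTRIM(name) AS logical_file,
       size / 128 AS file_size_mb,
       CAST(FILEPROPERTY(RTRIM(name), 'SpaceUsed') AS int) / 128 AS used_mb
FROM sysfiles;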
I am a newbie at using MS SQL Server with Analysis Services. There seems to be no 'cross-validation' tool in MS SQL, which is frequently used in data mining and even in statistics. Is there anyone having similar difficulties? Is there any solution, like a small script to divide the given dataset into multiple folds? Your valuable comments and feedback would be appreciated.
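If the goal is simply k-fold splitting of a relational training table, it can be scripted in T-SQL before handing the data to the mining model (newer releases of Analysis Services also added a built-in cross-validation feature, but a manual split works on any version). A minimal sketch for 10 folds, assuming a hypothetical source table dbo.TrainingData with a key column CaseId:

-- randomly assign each row to one of 10 folds
SELECT CaseId,
       NTILE(10) OVER (ORDER BY NEWID()) AS FoldId
INTO dbo.TrainingFolds
FROM dbo.TrainingData;

-- example: use fold 1 as the holdout set and folds 2-10 for training
SELECT t.*
FROM dbo.TrainingData AS t
JOIN dbo.TrainingFolds AS f ON f.CaseId = t.CaseId
WHERE f.FoldId <> 1;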