My problem is that I cannot completely clear the buffer cache on SQL Server 2005, version 9.00.2047.00 (probably SP1).
Right after I run DBCC DROPCLEANBUFFERS in the context of my database (this is a development server, and so far I am the only one working with this particular database), I run a script that queries the sys.dm_os_buffer_descriptors view, also from the context of my database, to make sure that the buffer cache is really clean. However, it shows a large number of entries totalling 42 MB.
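For reference, the check looks roughly like this (a minimal sketch of such a script; the 8 KB-page-to-MB conversion is the standard one):

-- Count buffer pool pages held for the current database
SELECT COUNT(*) AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID();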
I ran both the DBCC command and the script in the past too, and the script always returned nothing, meaning the buffers really were clean. The reason I am running this is to benchmark an existing application against a new one.
Does anybody have any ideas or suggestions on how to troubleshoot this issue? I have already closed all connections to this database, but rebooting the server is not an option since other people are also working on it.
Is there a way to drop clean buffers at the database level instead of the server/instance level, like the undocumented "DBCC FLUSHPROCINDB (@dbid)"? Is there a workaround for "dbo" to be able to flush the procedure and data cache without being elevated to the "sysadmin" server role?
PS: I am aware of the sp_recompile option that can be used to invalidate cached execution plans. Thx.
What is the difference between CleanExpiredCache and FlushReportFromCache? Do we need to run both of these stored procedures to clear all of the SSRS report cache? Is it possible to clear all of the cache information and retain the logs? If yes, how can we do so? Would deleting the rows in the ReportServerTempDB.dbo.ExecutionCache table achieve this?
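For the last question, the approach people usually describe is along these lines (a sketch only; deleting directly from ReportServerTempDB is not an officially supported operation, so treat it as an assumption to validate in a test environment first):

-- Remove cached report executions; execution history in ReportServer.dbo.ExecutionLog is untouched
DELETE FROM ReportServerTempDB.dbo.ExecutionCache;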
Help! We have recently upgraded from 6.5 to 7.0 and have come across a performance problem. The problem appears to relate to the buffer cache being flushed: the buffer cache hit ratio drops from 98% to 0% in a matter of seconds. It then grows very slowly, is flushed again, then increases slowly up to 30%.
Does anyone have any ideas as to what would flush the buffer cache?
This issue just happened recently. The buffer cache ratio went from over 90% to 50% and has slowly been climbing back up over 8 hours or so. It is currently at 76%. Is this something I should take action on immediately? It seems to be coming back to normal...
Hi, I am having trouble with MSSQL 2000 SP4 (without any hotfixes). During the last two weeks it has started working abnormally. After the last optimization (a few months ago) it worked well (fast, without blocking). Its buffer cache hit ratio was about 99.7-99.8. Since yesterday it has been working slowly, with many blocks and deadlocks. No queries, jobs, or applications have been added. Now the buffer cache hit ratio oscillates around 95-98. I tried updating statistics and reindexing some heavily used tables, but there is no effect, or the effect is very short-lived (after a few hours the problem returns). Maybe someone knows what it could be? Is it possible to estimate (using DBCC SHOW_STATISTICS or DBCC SHOWCONTIG or others) how each table affects the total buffer cache hit ratio? Marek, www.programowanieobiektowe.pl
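(On SQL Server 2005 and later you can at least see how much of the buffer pool each table occupies; a minimal sketch, with the caveats that this DMV does not exist on SQL 2000 and that buffer-pool share is only a rough proxy for a table's effect on the hit ratio:)

-- Buffer pool pages per object; run in the database in question
SELECT OBJECT_NAME(p.object_id) AS table_name,
       COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
  ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
  OR (au.type = 2 AND au.container_id = p.partition_id)
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY cached_pages DESC;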
We are troubleshooting a performance problem, and the test result is slow the first time but the subsequent runs are faster. Logging out of the application and logging back in (connecting to a new database session) did not clear the buffer cache as I thought it would. When does the database clear the buffer cache? Is it not per database session?
I can issue CHECKPOINT and then run DBCC DROPCLEANBUFFERS to clear the buffer cache. But since we are testing from the application, do we need to run these commands via application code to clear the buffers per database session, or can we run them from a Management Studio session?
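For reference, the sequence looks like this (a minimal sketch; note that the buffer cache is shared across the whole instance, not per session):

-- Write dirty pages to disk, then drop clean pages from the buffer pool.
-- Because the buffer cache is instance-wide, running this from Management
-- Studio affects the application's subsequent test runs too.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;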
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages, CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB FROM sys.dm_os_buffer_descriptors

Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache, etc.), but with max memory at 58GB, I would expect a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same RAM/max memory settings, and there was 50GB of data in the cache, with PLE measured in hours.
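One way to dig further is to break the cached pages down per database (a sketch using the same DMV as the query above; database_id 32767 is the internal resource database):

-- Cached pages and MB per database, to see what is occupying the buffer pool
SELECT CASE WHEN database_id = 32767 THEN 'ResourceDb'
            ELSE DB_NAME(database_id) END AS database_name,
       COUNT(*) AS cached_pages,
       CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;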
Question: why am I getting 428 pages for which there is no corresponding DB object? Why are so many pages present in sys.dm_os_buffer_descriptors but missing from sys.allocation_units?
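For context, a check along these lines shows the mismatch (a sketch of the kind of query presumably being run; since sys.allocation_units is per-database while the buffer DMV is server-wide, it must run in the database in question):

-- Buffered pages whose allocation unit does not appear in sys.allocation_units
SELECT bd.allocation_unit_id, COUNT(*) AS orphaned_pages
FROM sys.dm_os_buffer_descriptors AS bd
LEFT JOIN sys.allocation_units AS au
  ON bd.allocation_unit_id = au.allocation_unit_id
WHERE bd.database_id = DB_ID()
  AND au.allocation_unit_id IS NULL
GROUP BY bd.allocation_unit_id;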
I'm getting an alert which states that both my Buffer Cache Hit Ratio and PLE are low on one of my SQL Servers though I'm not sure how to correctly check this.
I ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Buffer cache hit ratio'
This gives me a Buffer Cache Hit Ratio cntr_value of 9, though it constantly dips between 3 and 3000 and is never steady, and I'm unsure if this is normal.
I also ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
Which gives me the Page life expectancy of 209061.
Should these values cause concern, and is this a normal Buffer Cache Hit Ratio? It's constantly jumping between high and low from what I can see. These scripts were pulled from another forum and I'm assuming they're showing the correct values.
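One detail worth noting: 'Buffer cache hit ratio' is a ratio counter, so the raw cntr_value is meaningless on its own; it has to be divided by its companion 'Buffer cache hit ratio base' counter, which likely explains the wildly jumping numbers. A sketch of the usual calculation:

-- Buffer cache hit ratio as a percentage: ratio counter divided by its base counter
SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
  ON r.[object_name] = b.[object_name]
WHERE r.[object_name] LIKE '%Buffer Manager%'
  AND r.[counter_name] = 'Buffer cache hit ratio'
  AND b.[counter_name] = 'Buffer cache hit ratio base';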
In the Microsoft doc on the topic of Buffer Pool Extensions with SSDs in SQL Server 2014, it is written that only clean pages are written to disk. Does this mean data pages that have not been modified yet? Or also those data pages that have already been modified, where the log has finished writing and the transaction has been marked as committed?
Why are clean data pages being written to the L2 cache to make space for other unmodified pages? Shouldn't they be modified first, before letting other unmodified data pages into the cache? They still have to be modified; it makes no sense to me to page them out and page them back in again just to make room for other data pages...
I encountered the following error while attempting to preview an RDL report I was developing in VS2010 using SSDT: "The size necessary to buffer the XML content exceeded the buffer quota".
We have a set of reports with the same header section in all of them. While developing a new report, I copy that header section into the new report with the same dataset names (without any change), but while rendering the report it throws the error "The size necessary to buffer the XML content exceeded the buffer quota".
I have a master package that executes a series of sub-packages, run from a SQL Agent job. One of those sub-packages had been stable for a week, running at least once per day, but it just failed despite having already run once today with the same set of input data.
There were a series of errors showing in the event log for the Execute Package Task starting with "Buffer Type 15 had a size of 0 bytes.", then "The buffer manager failed to create a new buffer type.", then "The Data Flow task cannot register a buffer type. The type had 32 columns and was for execution tree 3.", then "The layout failed validation." and finally "Error 0xC0012050 while loading package file "C:[Package].dtsx". Package failed validation from the ExecutePackage task. The package cannot run.".
SQLIS.com reports the constant for the error code as DTS_E_REMOTEPACKAGEVALIDATION ( http://wiki.sqlis.com/default.aspx/SQLISWiki/0xC0012050.html ).
I then ran the package on my dev machine in BIDS and it worked fine, so I re-ran the job on the server; this time that package executed OK, but another one fell over and did not put anything in the event log.
I'm experiencing a completely random warning from any given row count component within any given data flow task. It occurs sporadically. While distracting, I don't see any adverse effects on the data after the packages complete. Can someone weigh in on this warning and let me know if it is indeed benign, or what I may be able to do to fix it?
Here's the warning:
"A call to the ProcessInput method for input 75997 on component "CNT Rows sent for STG table" (75995) unexpectedly kept a reference to the buffer it was passed. The refcount on that buffer was 4 before the call, and 5 after the call returned."
I'm getting this error when trying to set up a cache dependency. Are there any special permissions etc.?

From CS:

SqlCacheDependency dep = new SqlCacheDependency("MySite-Cache", "Products");
Cache.Insert("Products", de.GetAllProductsList(), dep);

From connectionStrings.config:

<add name="SiteDB" connectionString="Data Source=localhost,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

Also tried this using my machine name:

<add name="SiteDB" connectionString="Data Source=<machinename>,[port]SQLEXPRESS;Integrated Security=true;User Instance=true; AttachDBFileName=|DataDirectory|ASPNETDB.MDF" providerName="System.Data.SqlClient" />

From web.config:

<caching>
  <sqlCacheDependency enabled="true" pollTime="10000">
    <databases>
      <add name="MySite-Cache" connectionStringName="SiteDB" pollTime="2000"/>
    </databases>
  </sqlCacheDependency>
</caching>

EDIT: So, making progress, but I can't seem to get the table registered for cache dependency. The sample I have says:

aspnet_regsql.exe -E -S .SqlExpress -d aspnetdb -t Customers -et

and the command line response is: "Enabling the table for SQL cache dependency. An error has happened. Details of the exception: The table 'Customers' cannot be found in the database."

Where does this "Customers" table come from? There is obviously not an application-specific "Customers" table in aspnetdb. I'm confused, probably more by the example than anything...
I am looking at the plan cache/cached pages from the perspective of sys.dm_os_memory_cache_counters and the SQL Server:Plan Cache - Cache Pages perfmon counter.
For the first one I am using
select (sum(single_pages_kb) + sum(multi_pages_kb)) from sys.dm_os_memory_cache_counters where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'

a slight change from a query in http://blogs.msdn.com/sqlprogrammability/
For the second just perfmon.
The first one gives me a count of about 670,000 pages just for the object and query cache, and the second one gives me a total of about 100,000 pages for five types of caches, including object and query.
If I use the query from http://blogs.msdn.com/sqlprogrammability/ to determine the plan cache size
select (sum(single_pages_kb) + sum(multi_pages_kb) ) * 8 / (1024.0 * 1024.0) as plan_cache_in_GB from sys.dm_os_memory_cache_counters where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'
it gives me about 5 GB, when in fact my SQL Server can access only 2GB max, with Total and Target Server Memory at about 1.5 GB.
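One thing to check (an observation, not a confirmed diagnosis): the single_pages_kb and multi_pages_kb columns are already in kilobytes, so multiplying by 8 treats KB values as 8 KB pages and inflates the result roughly eightfold, which would bring 5 GB down to about 0.6 GB. A unit-consistent version of the query would be:

-- Plan cache size in GB; the *_kb columns are already kilobytes, so no * 8
select (sum(single_pages_kb) + sum(multi_pages_kb)) / (1024.0 * 1024.0) as plan_cache_in_GB
from sys.dm_os_memory_cache_counters
where type = 'CACHESTORE_SQLCP' or type = 'CACHESTORE_OBJCP'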
The scenario is as follows: I have a source with many rows. Each row has a column called max_qty_value. I need to perform a calculation using another column called qty: roughly, qty divided by max_qty_value, rounded up (a ceiling). Once I have that number, I need to write that many duplicate rows. For example, ceiling(15/4) = 4, so I need to write 4 rows to the same target table, as line information for a purchase order.
The Multicast transform appears to support only a fixed and/or predetermined number of outputs. How do I design this logic in SSIS to write out a dynamic number of rows to a target table?
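One common workaround is to expand the rows in the source query itself with a numbers (tally) table, so the data flow receives the duplicates already multiplied out. A sketch, assuming a hypothetical source table po_lines and a numbers table numbers(n) that covers the maximum copy count:

-- Emit CEILING(qty / max_qty_value) copies of each source row
SELECT s.*, n.n AS copy_number
FROM po_lines AS s
JOIN numbers AS n
  ON n.n <= CEILING(s.qty * 1.0 / s.max_qty_value);

Alternatively, an asynchronous Script Component in the data flow can emit multiple output rows per input row, which keeps the logic inside SSIS.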
WHERE (dbo.document.docVisible = 'yes') AND (dbo.document.docHide = 'NO') AND (dbo.t_files.fileMoved = '1') AND (dbo.documentCategory.docCategoryName LIKE '%olic%') AND (dbo.minisite.sectionLive = 1) AND (dbo.t_files.fileName <> N'A LINK') AND (dbo.t_files.fileName NOT IN (SELECT exFileID FROM t_exclude))
OR (dbo.document.docVisible = 'yes') AND (dbo.document.docHide = 'NO') AND (dbo.t_files.fileMoved = '1') AND (dbo.minisite.sectionLive = 1) AND (dbo.t_files.fileName <> N'A LINK') AND (dbo.t_files.fileName NOT IN (SELECT exFileID FROM t_exclude)) AND (dbo.document.docDesc LIKE '%olic%')
OR (dbo.document.docVisible = 'yes') AND (dbo.document.docHide = 'NO') AND (dbo.t_files.fileMoved = '1') AND (dbo.minisite.sectionLive = 1) AND (dbo.t_files.fileName <> N'A LINK') AND (dbo.t_files.fileName NOT IN (SELECT exFileID FROM t_exclude)) AND (dbo.document.docType BETWEEN 1 AND 4)
OR (dbo.document.docVisible = 'yes') AND (dbo.document.docHide = 'NO') AND (dbo.t_files.fileMoved = '1') AND (dbo.minisite.sectionLive = 1) AND (dbo.t_files.fileName <> N'A LINK') AND (dbo.t_files.fileName NOT IN (SELECT exFileID FROM t_exclude))
OR (dbo.t_files.fileName IN ('127-2006-1-27-3321130.pdf', '127-2006-1-30-3726619.pdf', '127-2006-1-27-5700042.pdf', '127-2006-1-27-5678586.pdf', '127-2006-1-27-5693574.pdf', '127-2006-1-27-5873392.pdf'))
ORDER BY dbo.document.docDesc
Right, as you can see, some of the lines look pretty similar. When I try to do:
(one AND two AND three) AND (four OR five), I get
(one AND two AND three AND four) OR (one AND two AND three AND five)
when using Enterprise Manager. Is there any way to keep the first way of doing it? Otherwise, every time I add another OR condition, I'll have to create a new line!
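For illustration, the factored form would look something like this, written by hand rather than through the designer (a sketch only; note that one branch of the original repeats the common conditions with no extra filter, which, if intentional, makes the inner OR list redundant):

-- Common conditions factored out once, extra filters OR'd together
WHERE (
    dbo.document.docVisible = 'yes'
    AND dbo.document.docHide = 'NO'
    AND dbo.t_files.fileMoved = '1'
    AND dbo.minisite.sectionLive = 1
    AND dbo.t_files.fileName <> N'A LINK'
    AND dbo.t_files.fileName NOT IN (SELECT exFileID FROM t_exclude)
    AND (
        dbo.documentCategory.docCategoryName LIKE '%olic%'
        OR dbo.document.docDesc LIKE '%olic%'
        OR dbo.document.docType BETWEEN 1 AND 4
    )
)
OR dbo.t_files.fileName IN ('127-2006-1-27-3321130.pdf', '127-2006-1-30-3726619.pdf', '127-2006-1-27-5700042.pdf', '127-2006-1-27-5678586.pdf', '127-2006-1-27-5693574.pdf', '127-2006-1-27-5873392.pdf')
ORDER BY dbo.document.docDesc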
I received a message that my log file is full, and it will not let me do a database backup until I free up disk space. How do I clean up the log file (_log.ldf)?
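The usual pattern is to back up the log and then shrink the log file (a sketch with placeholder database and file names; routine shrinking is discouraged, this is for one-off recovery of disk space):

-- Back up the transaction log, then shrink the log file (names are placeholders)
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';
DBCC SHRINKFILE (MyDatabase_log, 512);  -- target size in MB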
I have the following query, which strips middle initial data out of the first_name column and populates the middle_initial column with the relevant data.
What my query does not handle is when a first name is a single character (e.g. "A"). In its current state, the "A" is moved to the middle_initial field with a period added to the end ("A."), and the first_name column also keeps the "A". Basically, when a single-character first name is found in the first_name column, I do not want to populate the middle_initial field.
Hope this does not sound too cryptic; query is below along with some sample data when run.
SELECT first_name,
       CASE WHEN SUBSTRING(LTRIM(RTRIM(first_name)), LEN(LTRIM(RTRIM(first_name))) - 1, 1) = ' '
            THEN SUBSTRING(LTRIM(RTRIM(first_name)), LEN(LTRIM(RTRIM(first_name))), 1) + '.'
            ELSE NULL END AS 'middle_initial',
       CASE WHEN SUBSTRING(LTRIM(RTRIM(first_name)), LEN(LTRIM(RTRIM(first_name))) - 1, 1) = ' '
            THEN LEFT(LTRIM(RTRIM(first_name)),
                      CASE WHEN LEN(LTRIM(RTRIM(first_name))) >= 2
                           THEN LEN(LTRIM(RTRIM(first_name))) - 2
                           ELSE LEN(LTRIM(RTRIM(first_name))) END)
            ELSE LTRIM(RTRIM(first_name)) END AS 'first_name_removing_initials'
FROM contact
Sample data:

first_name | middle_initial | first_name_removing_initials
Paul       | NULL           | Paul
A          | A.             | A
A Fred     | NULL           | A Fred
Aaron      | NULL           | Aaron
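A minimal fix is to guard each CASE with a length check so single-character names never take the initial-splitting branch (a sketch based on the query above):

-- Only treat the last character as a middle initial when the trimmed name
-- is longer than one character and the second-to-last character is a space
SELECT first_name,
       CASE WHEN LEN(LTRIM(RTRIM(first_name))) > 1
             AND SUBSTRING(LTRIM(RTRIM(first_name)), LEN(LTRIM(RTRIM(first_name))) - 1, 1) = ' '
            THEN SUBSTRING(LTRIM(RTRIM(first_name)), LEN(LTRIM(RTRIM(first_name))), 1) + '.'
            ELSE NULL END AS middle_initial,
       CASE WHEN LEN(LTRIM(RTRIM(first_name))) > 1
             AND SUBSTRING(LTRIM(RTRIM(first_name)), LEN(LTRIM(RTRIM(first_name))) - 1, 1) = ' '
            THEN LEFT(LTRIM(RTRIM(first_name)), LEN(LTRIM(RTRIM(first_name))) - 2)
            ELSE LTRIM(RTRIM(first_name)) END AS first_name_removing_initials
FROM contact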
I have a db server that had served both my application data and reporting services. I have since installed a second db server to be the reporting services server; the primary box still stores the application data. I am keeping reporting services installed and set up on the primary box, since I would like it to act as a warm standby for the reporting services function.
However, the ReportServerTempDB on my primary box is still quite large. The ChunkData table is close to 10 GB. The CleanupCycleMinutes property is set to the default of 10, so clearly any data in this table is far older than that.
Is there some recommended method of clearing this data? Can I just delete the rows or truncate the table? I would prefer not to uninstall/reinstall Reporting Services, as this machine is actively serving my application data.
Hi everyone, what is the easiest way to clean out all MS SQL 2005 tables? For example, a table named Customers has triggers, a primary key, and foreign keys. When I try to clean out the Customers table with the following statement: delete Customers, SQL Server 2005 asks me to remove all triggers and foreign keys before it will let me. Is there a way to clean out all tables without removing the triggers and foreign keys? Thanks, May
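One common approach uses the undocumented sp_MSforeachtable procedure to disable constraints and triggers, delete, then re-enable everything (a sketch; being undocumented it is unsupported, so test it on a copy of the database first):

-- Disable FK checks and triggers, empty every table, then re-enable everything
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
EXEC sp_MSforeachtable 'ALTER TABLE ? DISABLE TRIGGER ALL';
EXEC sp_MSforeachtable 'DELETE FROM ?';
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';
EXEC sp_MSforeachtable 'ALTER TABLE ? ENABLE TRIGGER ALL';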
Hi, I need some advice about log shipping. The log files on the primary server get cleaned up according to what I have specified in the maintenance plan, but the log files that get shipped to the secondary server stay there forever, wasting my hard disk. Will it cause any problems if I remove them, or can I set it up to remove all files older than the past 2 hours? Please advise. Thanks.