Last Sunday on our primary server I saw some blocking, and the first spid that blocked everything was waiting on the LOGBUFFER wait type. At the time the server was not running hot and our database files are on SSDs. There was no memory pressure (1 TB of memory on the server). It is strange that we should see LOGBUFFER waits when we are running on the fastest disks possible and there was not much activity on the database.
I am trying to find out what could be causing this issue. Why would we be waiting on CPU when it is barely being used? Signal waits vary from 35 to 55% while CPU usage sits at only about 5%. We are using Windows Server 2012 with SQL Server 2012 Standard Edition CU5. There are three instances on the server, each with max server memory set to 50 GB, and the server has a total of 190 GB of memory. The machine has 12 cores with hyperthreading enabled.
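For reference, the signal-wait share (time spent waiting to get back on a scheduler after the resource wait has cleared) can be checked with something like the query below. This is a generic sketch against sys.dm_os_wait_stats, not the exact query used for the figures above.

-- Rough signal-wait vs. resource-wait split since the last restart
-- (or since wait stats were last cleared).
SELECT
    SUM(signal_wait_time_ms)                        AS signal_wait_ms,
    SUM(wait_time_ms - signal_wait_time_ms)         AS resource_wait_ms,
    CAST(100.0 * SUM(signal_wait_time_ms)
         / NULLIF(SUM(wait_time_ms), 0) AS decimal(5, 2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats;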
I am sure I have seen in a monitoring tool in the past that PLE drops to 0 whenever we do a backup. I was doing some reading around this, however, and found something that said backups use memory outside the buffer pool (and therefore outside the min/max server memory settings).
Is this correct, and how can I tell how much memory will be required for a backup?
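The memory a backup uses for its I/O buffers is roughly BUFFERCOUNT multiplied by MAXTRANSFERSIZE, and both can be set explicitly on the BACKUP statement. The sketch below is a generic illustration only; the database name, path, and values are assumptions, not taken from the original post.

-- Backup buffers are allocated outside the buffer pool.
-- Approximate buffer memory = BUFFERCOUNT * MAXTRANSFERSIZE; here 16 * 4 MB = ~64 MB.
BACKUP DATABASE MyDatabase
TO DISK = N'X:\Backups\MyDatabase.bak'
WITH BUFFERCOUNT = 16,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB per buffer
     STATS = 10;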
We are troubleshooting a performance problem: the test is slow the first time but subsequent runs are faster. Logging out of the application and logging back in (connecting on a new database session) did not clear the buffer cache as I thought it would. When does the database clear the buffer cache? Is it not per database session?
I can issue CHECKPOINT and then run DBCC DROPCLEANBUFFERS to clear the buffers from memory. But since we are testing from the application, do we need to run these commands via application code per database session, or can we run them from a Management Studio session?
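For what it's worth, the buffer cache is an instance-wide structure rather than per-session, so running the commands from any session (including Management Studio) affects the whole instance. A minimal sketch, intended only for test environments:

-- Flush dirty pages to disk, then drop clean pages from the buffer pool.
-- Affects the entire instance, not just the current session; do not run in production.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
-- Optionally clear cached plans as well, so the next run compiles fresh.
DBCC FREEPROCCACHE;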
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages, CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB
FROM sys.dm_os_buffer_descriptors

Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
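It can also help to see which databases the cached pages belong to. A per-database breakdown like the sketch below (a generic example, not from the original post) shows whether one database dominates the buffer pool or whether the pool simply is not filling up.

-- Buffer pool usage broken down by database.
SELECT
    CASE WHEN database_id = 32767 THEN 'ResourceDB'
         ELSE DB_NAME(database_id) END          AS database_name,
    COUNT(*)                                    AS buffer_pages,
    CONVERT(decimal(10, 2), COUNT(*) / 128.0)   AS buffer_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pages DESC;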
Question: why am I getting 428 pages for which there is no corresponding database object? Why are so many pages present in sys.dm_os_buffer_descriptors but missing from sys.allocation_units?
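For context, this is the kind of join being described; a sketch of it is below (assumed, since the original query was not posted). Pages whose allocation units belong to another database, or to internal structures, fall out of an inner join like this.

-- Map buffered pages in the current database to their objects.
-- Pages from other databases or from internal allocation units will not match.
SELECT o.name AS object_name, COUNT(*) AS buffer_pages
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
    OR (au.type = 2       AND au.container_id = p.partition_id)
JOIN sys.objects AS o
    ON p.object_id = o.object_id
WHERE bd.database_id = DB_ID()
GROUP BY o.name
ORDER BY buffer_pages DESC;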
I'm getting an alert stating that both my Buffer Cache Hit Ratio and PLE are low on one of my SQL Servers, though I'm not sure how to check this correctly.
I ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Buffer cache hit ratio'
This gives me a Buffer cache hit ratio cntr_value of 9, though it constantly moves between 3 and 3000 and is never steady, and I'm unsure whether this is normal.
I also ran:
SELECT object_name, counter_name, cntr_value FROM sys.dm_os_performance_counters WHERE [object_name] LIKE '%Buffer Manager%' AND [counter_name] = 'Page life expectancy'
This gives me a Page life expectancy of 209061.
Should these values cause concern, and is this a normal Buffer Cache Hit Ratio? It's constantly swinging between high and low from what I can see. These scripts were pulled from another forum and I'm assuming they're showing the correct values.
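One likely explanation for the erratic number: the 'Buffer cache hit ratio' counter is only meaningful when divided by its companion 'Buffer cache hit ratio base' counter. A sketch of that calculation (a generic example, not from the original post):

-- Buffer cache hit ratio must be computed as ratio / base to be meaningful.
SELECT CAST(100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS decimal(6, 2))
           AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
    ON r.object_name = b.object_name
WHERE r.object_name LIKE '%Buffer Manager%'
  AND r.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';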
I encountered the following error while attempting to preview an RDL report I was developing in VS2010 using SSDT: "The size necessary to buffer the XML content exceeded the buffer quota".
We have a set of reports that all share the same header section. While developing a new report I copy that header section into the new report with the same dataset names (without any change), but while rendering the report it throws the error "The size necessary to buffer the XML content exceeded the buffer quota".
I have a master package that executes a series of sub packages run from a SQL Agent job. One of those sub packages has been stable for a week, running at least once per day, but it just failed despite having been run once already today with the same set of input data.
There were a series of errors showing in the event log for the Execute Package Task starting with "Buffer Type 15 had a size of 0 bytes.", then "The buffer manager failed to create a new buffer type.", then "The Data Flow task cannot register a buffer type. The type had 32 columns and was for execution tree 3.", then "The layout failed validation." and finally "Error 0xC0012050 while loading package file "C:[Package].dtsx". Package failed validation from the ExecutePackage task. The package cannot run.".
SQLIS.com reports the constant for the error code as DTS_E_REMOTEPACKAGEVALIDATION ( http://wiki.sqlis.com/default.aspx/SQLISWiki/0xC0012050.html ).
I then ran the package on my dev machine in BIDS and it worked fine, so I re-ran the job on the server. This time that package executed OK, but another one fell over without putting anything in the event log.
I'm experiencing a completely random warning from any given row count component within any given data flow task. It occurs sporadically. Whilst distracting, I don't see any adverse effects on the data after the packages complete. Can someone weigh in on this warning and let me know whether it is indeed benign, or what I may be able to do to fix it?
Here's the warning:
"A call to the ProcessInput method for input 75997 on component "CNT Rows sent for STG table" (75995) unexpectedly kept a reference to the buffer it was passed. The refcount on that buffer was 4 before the call, and 5 after the call returned."
Hello to all, I have a small question. I call an SP from outside the DB. The procedure deletes some records in table T1. Table T1 has an AFTER DELETE trigger. It is very important to me that the SP finishes ASAP, which is why I do not want, and do not need, to wait for the trigger. Will the SP only finish after the trigger has finished? In other words, does the SP "wait" for the trigger? I think it does. Is it possible to set up the trigger (or the procedure) so that it does not wait for the result of the trigger execution? Thank you for a kind reply. Mateusz
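To make the question concrete, here is a minimal sketch of the situation described (table, trigger, and procedure names are made up for illustration). An AFTER trigger runs synchronously as part of the DELETE statement, inside the same transaction, so the procedure does not return until the trigger body has completed.

-- Hypothetical setup: the DELETE in the procedure does not complete
-- until the AFTER DELETE trigger has finished.
CREATE TABLE dbo.T1 (id int PRIMARY KEY, payload varchar(100));
GO
CREATE TRIGGER dbo.trg_T1_delete ON dbo.T1
AFTER DELETE
AS
BEGIN
    -- Whatever work happens here runs inside the caller's DELETE statement.
    WAITFOR DELAY '00:00:05';  -- simulate slow trigger work
END;
GO
CREATE PROCEDURE dbo.usp_DeleteFromT1 @id int
AS
BEGIN
    DELETE FROM dbo.T1 WHERE id = @id;  -- waits ~5 seconds because of the trigger
END;
GO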
I have a query that keeps waiting with a wait type of SOS_SCHEDULER_YIELD. I have checked for CPU pressure as well, and there is nothing else running. The query outline is given below.
SELECT a, b, c, d ....
FROM tableA A
INNER JOIN tableB B ON A.a = B.b
INNER JOIN tableC C ON B.c = C.c
WHERE A.x NOT IN (SELECT x FROM tableX WHERE y = '???')
I restored a copy of the prod DB on a DEV box and ran the query, which runs fine there. But the prod box, which is more powerful, chokes. DEV is a 4 CPU / 4 GB box and PROD is an 8 CPU / 32 GB box. All other database and server settings are exactly the same. DEV is SQL 2005 Standard Edition SP2; PROD is SQL 2005 Enterprise Edition SP2.
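Since SOS_SCHEDULER_YIELD points at scheduler pressure rather than a blocking resource, it can be worth checking whether runnable tasks are queuing on the schedulers while the query runs. A generic sketch (not from the original post):

-- Runnable tasks queued per scheduler; sustained non-zero values while the
-- query runs suggest CPU/scheduler pressure rather than a blocking resource.
SELECT scheduler_id,
       current_tasks_count,
       runnable_tasks_count,
       work_queue_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;  -- visible user schedulers only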
Back in the days of SQL 7.0 I used a lot of ODBC SELECT querying from VB applications, in which I implemented NOLOCK in order to prevent the primary business applications from being locked out of tables once the queries were run.
Now, quite a few years later, I'm busying myself converting a lot of old Access-based forms and queries to T-SQL on SQL Server 2000, and wondering why NOLOCK queries (simple SELECT ones) are immensely slower than a standard SELECT clause.
SELECT * FROM employees
This would be much much faster than the code below, but users would get "The current record could not be accessed, as it is being used by another user", evidently because I'm locking the record while producing the output.
SELECT * FROM employees (nolock)
So this should, as I remember it, do a dirty read on the table, not obstructing other users, and give me a snapshot of the data as it is, even though some rows might be locked for edit.
Could anyone explain to me why the NOLOCK query fails to give me any output? It is as if the NOLOCK request is waiting for the table/records to be freed, in which case I'll never be able to run the query.
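As a side note, the hint written above is the old-style form; the equivalent explicit table hint looks like the sketch below (same table name as in the post). This does not explain the missing output by itself, but ruling out a syntax issue is cheap.

-- Explicit table-hint form of the dirty-read query.
SELECT *
FROM employees WITH (NOLOCK);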
About a month ago I set up mirroring for our DB, high safety with automatic failover. Ever since, the lock waits per minute in the DB have gone from maybe 2-5 to 22-25. I am not sure if this was expected or not, but sometimes it spikes even higher than that.
Does this sound off the charts to anyone or normal for mirroring?
I have MS SQL Server 2005 with SP1 installed, version 9.0.2047. I am trying to debug a Script Task in SSIS. I have a break point on the first line of the code. SSIS runs and eventually launches MS Visual Studio for Applications. The line with the break point is highlighted in yellow. After that Visual Studio is frozen. F5, F11 or any other key press produces a popup which says the following:
Delay notification. Microsoft Visual Studio for Applications is waiting for an operation to complete. If you regularly encounter this delay during normal usage please report this problem to Microsoft. Please include a description of the work you were doing in Microsoft Visual Studio for Applications and when possible instructions how to reproduce this delay. If Microsoft Visual Studio for Applications is waiting on another application you can switch to that application now, or you can continue waiting for this operation to complete.
The popup has two buttons: [Switch to…] and [Continue Waiting]. Neither button allows me to proceed.
Any idea what causes Microsoft Visual Studio for Applications to come to a complete halt?
I have a 2 GHz CPU with 1 GB of RAM. I occasionally see very slow (long-running) queries against a local SQL Server 2005 Express (SP2) database. The issue occurs against different SQL queries, but all of them are rather basic SELECT statements. Perfmon shows that the SQL Server counter "MEMORY GRANT QUEUE WAIT Avg MS" gets extremely high (25000+ ms). Perfmon also shows that paging is not occurring and the system is not under unusual stress. The problem is not reproducible with MSDE.
Has anyone seen this issue, or have any recommendations for a next course of action?
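If it helps, pending and granted query memory can be inspected directly while a slow query is running; the sketch below is a generic example against sys.dm_exec_query_memory_grants (available from SQL Server 2005 onwards), listing sessions that are still waiting for their memory grant.

-- Sessions waiting for (or holding) query workspace memory grants.
SELECT session_id,
       request_time,
       grant_time,           -- NULL while the request is still queued
       requested_memory_kb,
       granted_memory_kb,
       wait_time_ms
FROM sys.dm_exec_query_memory_grants
ORDER BY request_time;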
The scenario is as follows: I have a source with many rows. Each row has a column called max_qty_value. I need to perform a calculation using another column called qty, something like ceiling(qty / max_qty_value). Once I have that number, I need to write that many duplicate rows to the target. For example, ceiling(15 / 4) = 4, so I need to write 4 rows to the same target table as line information for a purchase order.
The multicast transform appears to support only a fixed, predetermined set of outputs. How do I design this logic in SSIS to write out a dynamic number of rows to a target table?
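If the expansion can be pushed into the source query instead of the data flow, a set-based alternative is to join against a numbers (tally) table. This is a swapped-in T-SQL technique rather than an SSIS answer, and the table and column names below are made up for illustration.

-- Expand each source row into ceiling(qty / max_qty_value) output rows
-- by joining to an ad-hoc numbers table. Names are placeholders.
WITH numbers AS (
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects
)
SELECT s.po_number, s.qty, s.max_qty_value, n.n AS line_no
FROM dbo.source_rows AS s
JOIN numbers AS n
    ON n.n <= CEILING(s.qty * 1.0 / s.max_qty_value);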
I'm having a problem importing a text file into a SQL db using DTS. I have to transform some of the data that is being imported so I think Bulk import is out of the question.
Everything works fine until I hit a row that contains more than 255 characters in one cell. Once it encounters that row, it fires this error:
"Error at source for row number 9. Errors encountered so far in this task: 1. General Error: -2147217887 (80040E21). Data for Source Column 3 ('Col3') is too large for the specified buffer size."
I found an entry in the MS Knowledge Base that addresses the symptom, but the workaround doesn't fix it:
Help, we have recently upgraded from 6.5 to 7.0 and have come across a performance problem. The problem appears to relate to the buffer cache being flushed: the buffer cache hit ratio drops from 98% to 0% in a matter of a second. It then grows very slowly, is flushed again, then increases slowly up to 30%.
Does any one have any ideas as to what would flush the buffer cache?
I have a strange problem that I need to solve as soon as possible. I have created two CLR UDTs called point and point_list. Each record of a point_list consists of a list of points. I created a CLR stored procedure which reads some raw data and updates the point_list records. When I execute the stored procedure the following error appears:
System.Data.SqlTypes.SqlTypeException: The buffer is insufficient. Read or write operation failed.
   at System.Data.SqlTypes.SqlBytes.Write(Int64 offset, Byte[] buffer, Int32 offsetInBuffer, Int32 count)
   at System.Data.SqlTypes.StreamOnSqlBytes.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.BinaryWriter.Write(Char ch)
   etc ...
Hi there,
Does anybody know how to increase the MS SQL Server buffer size? I get an error when trying to insert some pictures as OLE objects. When transferring them to the server I get an error saying that the buffer size needs to be increased.
Regards
Rudi W.
A script component receives some input, but I just can't get at the first row.
Basically, if I use the NextRow method in the Do statement, it advances the row collection to the second row before it gets into the code inside the loop. But if I use the EndOfRowset property to define my loop, I get an error:
[PipelineBuffer has encountered an invalid row index value]
I'm guessing this means I have to call NextRow before I access the data in the collection? But that can't be right, because then I miss the first row. What am I missing??
This is the code which works but I miss the first row:
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Dim strConcept As String
    Do While Row.NextRow()
        strConcept = Row.concept
        updateDb(strConcept)
    Loop
End Sub
This is the code which throws the invalid row index error:
Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Dim strConcept As String
    Do While Not Row.EndOfRowset()
        strConcept = Row.concept
        updateDb(strConcept)
        Row.NextRow()
    Loop
End Sub
I've put some Try/Catch blocks in there and the error happens on the line which calls Row.concept.
Can anyone help? It must be something I'm messing up.
All, my weekly load failed and here is the error message I got. Could someone kindly point out what the problem is and how to deal with it?
Thanks
Error: 0xC0047012 at Fact_ResidentService, DTS.Pipeline: A buffer failed while allocating 63936 bytes.
Error: 0xC0047011 at Fact_ResidentService, DTS.Pipeline: The system reports 43 percent memory load. There are 4227104768 bytes of physical memory with 2378113024 bytes free. There are 8796092891136 bytes of virtual memory with 8787211939840 bytes free. The paging file has 10300792832 bytes with 14786560 bytes free.
Error: 0xC0047022 at Fact_ResidentService, DTS.Pipeline: The ProcessInput method on component "Union All 1" (3629) failed with error code 0x8007000E. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.
Error: 0xC02020C4 at Fact_ResidentService, From_Basis [16]: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
Error: 0xC0047038 at Fact_ResidentService, DTS.Pipeline: The PrimeOutput method on component "From_Basis" (16) returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Error: 0xC0047021 at Fact_ResidentService, DTS.Pipeline: Thread "WorkThread0" has exited with error code 0x8007000E.
Error: 0xC0047021 at Fact_ResidentService, DTS.Pipeline: Thread "SourceThread1" has exited with error code 0xC0047038.
Error: 0xC0047039 at Fact_ResidentService, DTS.Pipeline: Thread "WorkThread2" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown.
When running a package created on my local machine I get no errors at all, but when I try to run the same package on the server I get an error specifying Microsoft.SqlServer.Dts.Pipeline.DoesNotFitBufferException: The value is too large to fit in the column data area of the buffer.
I have tried changing the DefaultBufferSize of the data flow task but this makes no difference. I think the buffer size for a particular column is being exceeded, but I cannot find anywhere to set this property.
Upon running DTS manually to transfer data from Excel into SQL Server, I get the error:
----------------------------- ERROR OUTPUT ------------------------------------
Error at Source for Row number 264. Errors encountered so far in this task: 1.
General error -2147217887 (80040E21). Data for source column 3 ('Value') is too large for the specified buffer size.
--------------------------- END ERROR OUTPUT ----------------------------------
*** 'Value' is varchar(4000); the largest value has a length of 1000. *** The network packet size is 4096.
Am I supposed to change the buffer size??
Your kind help is greatly appreciated. Thanks, Ziggy
Time out occurred while waiting for buffer latch type 2, bp 0x18b7d40, page (1:11558916), stat 0xb, object ID 9:1842105603:2, EC 0x5862D9C8 : 0, waittime 300. Not continuing to wait.