SQL Server Admin 2014 :: How To Get Number Of Execution In Specific Time - Not From First
Apr 7, 2015
I have this query
SELECT TOP 100 LTRIM([text]), objectid, total_rows, total_logical_reads, execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC
But execution_count is cumulative from the moment the plan was first cached. I want to know the number of executions within a specific window only, e.g. for one day.
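One workaround (a sketch, not the only approach) is to snapshot the cumulative counters into a table and diff a later snapshot against them; the table name below is my own, and note the counters reset whenever a plan leaves the cache:

-- Take a baseline snapshot of the cumulative counters.
SELECT sql_handle, plan_handle, execution_count, GETDATE() AS capture_time
INTO QueryStatsBaseline
FROM sys.dm_exec_query_stats;

-- A day later, report executions since the baseline.
SELECT LTRIM(t.[text]) AS query_text,
       qs.execution_count - ISNULL(b.execution_count, 0) AS executions_since_baseline
FROM sys.dm_exec_query_stats AS qs
LEFT JOIN QueryStatsBaseline AS b
       ON b.sql_handle = qs.sql_handle
      AND b.plan_handle = qs.plan_handle
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY executions_since_baseline DESC;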
This stored procedure gets some executable queries from a SELECT statement; the cursor fetches each row, executes the query, and inserts the query into table_3 marked as 'E'. From 17:00 onwards, the stored procedure should stop executing the queries and just insert the queries from the SELECT statement into table_3 marked as 'C'.
I don't know why the output in table_3 is quite different from what I expect. This stored procedure comes out with two exactly identical queries, one marked as 'C' and another marked as 'E'.
CREATE PROCEDURE procedure1
AS
DECLARE cursor_1 CURSOR FOR
    SELECT 'This is an executable query' FROM table_1
DECLARE @table_2 NVARCHAR(MAX)  -- type assumed; the original post omits it
DECLARE @stoptime DATETIME = NULL;
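A rough sketch of the intended flow (column names like query_text and flag are assumed); note that because the time is checked once per fetched row, a row fetched just before 17:00 gets 'E' while an identical row fetched just after gets 'C', which would explain seeing the same query with both marks:

DECLARE @q NVARCHAR(MAX);
OPEN cursor_1;
FETCH NEXT FROM cursor_1 INTO @q;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF CAST(GETDATE() AS TIME) < '17:00'
    BEGIN
        EXEC (@q);  -- execute the query, then mark it as executed
        INSERT INTO table_3 (query_text, flag) VALUES (@q, 'E');
    END
    ELSE
        INSERT INTO table_3 (query_text, flag) VALUES (@q, 'C');  -- record only
    FETCH NEXT FROM cursor_1 INTO @q;
END
CLOSE cursor_1;
DEALLOCATE cursor_1;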
I'm looking for a quick script that someone has already written to update statistics (not to rebuild or reorganise) on specific indices in specific databases - I guess loop through a table comprising a list of databases and the indices.
I know Ola has one, but I'm not looking for something that complicated. If I cannot find one I'm going to have to write one myself - I want to avoid re-inventing the wheel, as tomorrow I have to do this work on about 7K+ indices across 10+ databases.
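A minimal sketch of that loop, assuming a work table you populate yourself (dbo.StatsWorkList and its column names are illustrative, and the schema is hard-coded to dbo for brevity):

DECLARE @db SYSNAME, @tbl SYSNAME, @ix SYSNAME, @sql NVARCHAR(MAX);

DECLARE work CURSOR FOR
    SELECT DatabaseName, TableName, IndexName FROM dbo.StatsWorkList;

OPEN work;
FETCH NEXT FROM work INTO @db, @tbl, @ix;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'UPDATE STATISTICS ' + QUOTENAME(@db) + N'.dbo.' + QUOTENAME(@tbl)
             + N' ' + QUOTENAME(@ix) + N';';
    EXEC (@sql);
    FETCH NEXT FROM work INTO @db, @tbl, @ix;
END
CLOSE work;
DEALLOCATE work;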
Is there a way to keep track in real time of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for over 5 minutes.
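One lightweight approach (a sketch): poll sys.dm_exec_requests from an Agent job and act on anything that has been running longer than 5 minutes; what you do with the hits (start a trace, log, alert) is up to you:

SELECT r.session_id,
       r.start_time,
       r.total_elapsed_time / 60000.0 AS elapsed_minutes,
       OBJECT_NAME(t.objectid, t.dbid) AS proc_name
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.total_elapsed_time > 5 * 60 * 1000  -- more than 5 minutes
  AND r.session_id <> @@SPID;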
I have a user who needs access to views like dbo.viewnameabc1, dbo.viewnameabc2 and so on - i.e. dbo.viewnameabc* - and any time the user creates such a view, they should already have permission to select from it.
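One way to get that effect (a sketch): keep those views in a dedicated schema and grant SELECT at the schema level, so every view created there later is covered automatically; rpt and [SomeUser] below are placeholders. (GRANT SELECT ON SCHEMA::dbo would also work, but it covers everything in dbo.)

CREATE SCHEMA rpt;
GRANT SELECT ON SCHEMA::rpt TO [SomeUser];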
We have an Always On setup in our environment with a read-only replica. The primary database has 2 schemas: one is dbo and the other xyz. We have some stored procs created in the dbo schema and in the xyz schema. These stored procs are used by SSRS reports to retrieve the data (select only); no data changes will be made.
When we run the stored procs from the read-only server, the ones in the dbo schema run fine, but the xyz ones are failing with a message saying failed to update the database as this is a read only...
I have heard that high numbers of VLF's aren't good. It can impact performance and can delay recovery time, so I wanted to test that.
I created 2 DBs with 100MB datafile and 50MB logfile.
The TestDB log file had 100MB autogrowth; the TestDB2 log file had 1% growth.
I inserted 1048576 records and took a backup.
I ran DBCC LOGINFO: TestDB had 40 VLFs and TestDB2 had 165 VLFs.
But when I restored both DBs, this is what I got.
TestDB: RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec). SQL Server Execution times: CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2: RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec). SQL Server Execution Times: CPU time = 109 ms, elapsed time = 8314 ms.
The question is: where is the difference? How is TestDB, with 40 VLFs, better than TestDB2, with 165 VLFs?
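For what it's worth, 165 VLFs is still a modest count; the slowdowns people report usually involve thousands of VLFs, and they tend to show up in crash recovery and log scans rather than in a clean restore of this size. To compare counts across databases, the DBCC LOGINFO output can be captured like this (a sketch using the SQL 2012+ column layout):

CREATE TABLE #loginfo (RecoveryUnitId INT, FileId INT, FileSize BIGINT, StartOffset BIGINT,
                       FSeqNo INT, [Status] INT, Parity INT, CreateLSN NUMERIC(25,0));
INSERT INTO #loginfo EXEC ('DBCC LOGINFO(''TestDB'')');
SELECT COUNT(*) AS vlf_count FROM #loginfo;
DROP TABLE #loginfo;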
The first test writes 5000000 rows into one table. I realise this is not representative OLTP behaviour, but it gave me a starting point for interpreting performance counters and for testing several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs vmdk, 1GB vs 10GB network, AMD vs Intel, etc. It also lets me compare several SQL setups (recovery model, max memory config, ...).
The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.
In test 1, min/max server memory was set to 9215MB/10751MB.
In test 2, min/max server memory was set to 13311MB/14847MB.
The script assures the number of bytes inserted in the nvarchar columns is always the same.
This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (column 5 and 6)
Since circa 13GB has to be written, the results of test 1 show the lead time increasing once more than 10GB has been inserted (columns 8 and 9). In addition, you can see that at that moment:
- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)
In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).
Now what I do not understand is :
Why do the number of pages read (instance level), the number of bytes read, and the number of reads (database level) increase so dramatically during run 1?
I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. You can indeed see that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4000000 rows, but there's no real impact on write latency.
Finally, note that:
- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loops (sketched below)
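For reference, the snapshot is along these lines (a sketch; these are the standard counter names from sys.dm_os_performance_counters):

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio', 'Page life expectancy',
                       'Free list stalls/sec', 'Lazy writes/sec')
ORDER BY counter_name;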
So I wonder why SQL Server starts to execute so many reads in test 1.
Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:
Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes (including full-text indexes) – involve a lot of writing at the same time the data files are being read.
Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states: "Use a single log file in most situations as log files are written sequentially."
Our development team wanted to create a database user for each application user in the application and use these for granular data access control, which at first, sounded like a good idea but our initial testing ran into some interesting results.
Our target user base was about 15 million users with an estimated 1% concurrency rate, and, finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds-of-thousands-of-users range, our test database had a hard time performing well even under light loads (even without any concurrent connections).
When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.
(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)
What is the largest number of users any of you have had in a single database?
I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads, and the 60 mirrored databases would consume 240 threads. What all needs to be checked to assess the feasibility of going ahead with an async mirror setup as described above?
I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers being monitored and uploaded to the central MDW monitoring server.
On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.
Is this normal behaviour? I've seen more blocking as a result of this.
Is there any way to reduce the number of SPIDs generated?
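A quick way to see which application and login are creating the sessions (a sketch; the data collector usually identifies itself via program_name, but verify the exact string on your system):

SELECT program_name, login_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
GROUP BY program_name, login_name
ORDER BY session_count DESC;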
I've migrated a server with SQL Server 2008 R2 and Reporting Services onto a new box with SQL Server 2014, but forgot to change the timezone to the correct one. I changed it later, but it seems the reports are still running on the old default timezone. The schedule says the report should run at 6:30 AM, but the Last Run column shows 8:30 PM.
I need to fix it without manually updating each subscription with some date/time conversion.
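A diagnostic sketch, assuming the default ReportServer catalog database: list each subscription schedule next to the SQL Agent job that actually fires it (Reporting Services names those jobs after the ScheduleID GUID), to see which ones are out of step:

SELECT c.[Name]      AS report_name,
       s.ScheduleID,
       s.NextRunTime AS rs_next_run,
       j.name        AS agent_job_name
FROM ReportServer.dbo.ReportSchedule AS rs
JOIN ReportServer.dbo.Schedule AS s ON s.ScheduleID = rs.ScheduleID
JOIN ReportServer.dbo.[Catalog] AS c ON c.ItemID = rs.ReportID
LEFT JOIN msdb.dbo.sysjobs AS j ON j.name = CONVERT(NVARCHAR(36), s.ScheduleID)
ORDER BY report_name;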
Is there any single T-SQL query which provides the info below: when did my AlwaysOn availability group fail over, and from which node did it fail over to which new node (i.e. replica)?
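One approach (a sketch): read the AlwaysOn_health extended-event files, which record replica state changes; this assumes the default AlwaysOn_health session is running and writing .xel files to the default log directory:

SELECT x.e.value('(@timestamp)[1]', 'datetime2') AS event_time,
       x.e.value('(data[@name="previous_state"]/text)[1]', 'varchar(64)') AS previous_state,
       x.e.value('(data[@name="current_state"]/text)[1]', 'varchar(64)') AS current_state
FROM (
    SELECT CAST(event_data AS XML) AS event_xml
    FROM sys.fn_xe_file_target_read_file('AlwaysOn_health*.xel', NULL, NULL, NULL)
    WHERE object_name = 'availability_replica_state_change'
) AS t
CROSS APPLY t.event_xml.nodes('/event') AS x(e)
ORDER BY event_time DESC;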
I've used some info on here to generate random dates within a given range and also random times - independently they work fine, but I can't seem to join them into a single field of datetime. I'm not sure why. The following snippet works fine as two independent fields:
select CAST(CAST(ABS(CHECKSUM(NEWID()))%(780)+(33968) AS DATETIME) as DATE) as theDate,
       CAST(CAST(DATEADD(millisecond, ABS(CHECKSUM(NEWID()))%86400000, '00:00') AS TIME) as varchar(50)) as theTime

But when I try to make it a single datetime field:
select CAST(
         cast(cast(CAST(ABS(CHECKSUM(NEWID()))%(780)+(33968) AS DATETIME) as date) as varchar(50))
         + ' '
         + cast(CAST(CAST(DATEADD(millisecond, ABS(CHECKSUM(NEWID()))%86400000, '00:00') AS TIME) as varchar(50)) as varchar(50))
       as datetime)

it fails with: Conversion failed when converting date and/or time from character string.
So what I am really looking for is a way to join those two values into a single datetime field... or, failing that, how to generate random dates within a range including random times.
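For what it's worth, the conversion error most likely comes from the TIME value: cast to varchar it carries seven fractional-second digits ('hh:mm:ss.nnnnnnn'), and DATETIME parses at most three. Skipping the string round-trip avoids the problem entirely; a sketch using the same arithmetic as above:

SELECT DATEADD(millisecond,
               ABS(CHECKSUM(NEWID())) % 86400000,                       -- random time of day
               CAST(ABS(CHECKSUM(NEWID())) % 780 + 33968 AS DATETIME))  -- random date in range
       AS theDateTime;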
Is there a SQL Server job or SP to deny an AD login access to a SQL Server instance for a certain period of time, i.e. deny access to login ADxyz from 12 PM to 10 PM and restore access to the same login at 10:01 PM?
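A sketch using two Agent jobs to toggle server access for the login; note DENY CONNECT SQL blocks new connections but does not kill sessions that are already open (kill those separately if required), and [DOMAIN\ADxyz] is a placeholder name:

-- Job 1, scheduled at 12:00 PM:
DENY CONNECT SQL TO [DOMAIN\ADxyz];
-- Job 2, scheduled at 10:01 PM:
GRANT CONNECT SQL TO [DOMAIN\ADxyz];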
On one of our SQL Server 2014 boxes each database has a copy-only full backup made every night, in addition to the maintenance plan schedule of a full backup weekly, daily differential backups and log backups.
When performing a PIT restore in SSMS the restore file list lists the most recent copy-only backup as the full backup to use, not the most recent plan full backup. I noticed that using SSMS 2008 to start a PIT restore on the 2014 box does not have this problem, and lists the correct restore file sequence (ignores the copy-only backups).
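As a workaround while stuck in SSMS 2014, the copy-only backups can be identified in msdb so the restore sequence can be picked out manually ('YourDB' is a placeholder):

SELECT database_name, backup_start_date, [type], is_copy_only
FROM msdb.dbo.backupset
WHERE database_name = 'YourDB'
ORDER BY backup_start_date DESC;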
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, but otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported from fn_my_permissions(null, null), and it shows minimal permissions, as I'd expect.
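The custom-server-role approach from above, sketched out (the role name is my own and [SomeLogin] is a placeholder):

USE [master];
CREATE SERVER ROLE [errorlog_reader];
ALTER SERVER ROLE [securityadmin] ADD MEMBER [errorlog_reader];
DENY ALTER ANY LOGIN TO [errorlog_reader];
ALTER SERVER ROLE [errorlog_reader] ADD MEMBER [SomeLogin];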
If I just use a simple select statement, I find that I have 8286 records within a specified date range.
If I use a select statement to pull records that were created from 5 PM onward, and add it to another select statement pulling records created before 5 PM, I get a different count: 7521 + 756 = 8277.
Is there something I am doing incorrectly in the following sql?
DECLARE @startdate date = '03-06-2015'
DECLARE @enddate date = '10-31-2015'
DECLARE @afterTime time = '17:00'

SELECT General_Count = (SELECT COUNT(*) as General FROM Unidata.CrumsTicket ct
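A classic cause of such a mismatch is rows falling out of one bucket, or into both, depending on how the time boundary is compared. A quick check (a sketch; the CreatedDate column name is an assumption) is to split on a single expression so every row lands in exactly one bucket:

DECLARE @startdate date = '2015-03-06';
DECLARE @enddate   date = '2015-10-31';
DECLARE @afterTime time = '17:00';

SELECT SUM(CASE WHEN CAST(ct.CreatedDate AS time) >= @afterTime THEN 1 ELSE 0 END) AS from_5pm,
       SUM(CASE WHEN CAST(ct.CreatedDate AS time) <  @afterTime THEN 1 ELSE 0 END) AS before_5pm,
       COUNT(*) AS total
FROM Unidata.CrumsTicket AS ct
WHERE ct.CreatedDate >= @startdate
  AND ct.CreatedDate <  DATEADD(day, 1, @enddate);  -- end-exclusive upper bound keeps late-evening rows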
I have huge export files in a DB, and I need to check (via a query, of course) whether there are any datasets that have the same value in the first column but a different value in another one.
Like this:
ID   IS NULL
1    1
2    1
3    0
1    0
The expected ID I get as a result of my query should be 1 in this case.
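A sketch of that check, treating the columns as ID and [IS NULL] (dbo.ExportData is a placeholder table name):

SELECT ID
FROM dbo.ExportData
GROUP BY ID
HAVING COUNT(DISTINCT [IS NULL]) > 1;  -- more than one distinct value for the same ID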
I am new to this level of coding in SQL Server 2012, but I am looking to update the TITLE_YEAR field in the temp table with the year the employee is in that title. For example, for employee 11127 the data should look like this:
EMP_ID  Title              DateValue                TITLE_YEAR
3       Senior Consultant  2009-01-01 00:00:00.000  1
3       Director           2010-01-01 00:00:00.000  1
3       Director           2011-01-01 00:00:00.000  2
3       Director           2012-01-01 00:00:00.000  3
3       Director           2013-01-01 00:00:00.000  4
3       Senior Director    2014-01-01 00:00:00.000  1
It seemed quite easy at first glance. I built it up via string concatenation and thought to execute the dynamic SQL with sp_executesql and get the result. As I don't like dynamic SQL, I was wondering if there is any other way...
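One non-dynamic way (a sketch, assuming the temp table is named #titles and that an employee never returns to an earlier title; if titles can repeat after a gap, a gaps-and-islands variant is needed instead):

;WITH ranked AS (
    SELECT TITLE_YEAR,
           ROW_NUMBER() OVER (PARTITION BY EMP_ID, Title ORDER BY DateValue) AS rn
    FROM #titles
)
UPDATE ranked
SET TITLE_YEAR = rn;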
I created a new login and then created a new user [COM] in the DB with the default schema pointing to [COM].
I then created schema [COM] WITH AUTHORIZATION [COM].
I want this [COM] user to have all the permissions it needs on the [COM] schema only. How do I do that? When I try to create table [COM].Table it gives me permission denied.
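Since the schema was created WITH AUTHORIZATION [COM], the user already owns it (and so has CONTROL over it); what's most likely missing is the database-level CREATE TABLE permission, which is required in addition to rights on the target schema. A sketch ('YourDB' is a placeholder):

USE [YourDB];
GRANT CREATE TABLE TO [COM];
-- grant CREATE VIEW / CREATE PROCEDURE the same way if those are needed too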
If I install an instance with Windows-only authentication and then change it to mixed mode, when I enable the sa login the password has already been set. What is the default? Is the password generated? If so, how secure is it, and what algorithm is used?
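As far as I know, setup assigns sa a random password when the instance is installed in Windows-only mode. Either way, rather than relying on whatever setup generated, it's safest to set your own when enabling it (the password below is a placeholder):

ALTER LOGIN [sa] WITH PASSWORD = 'UseAStrongPasswordHere!1';
ALTER LOGIN [sa] ENABLE;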