SQL Server Admin 2014 :: Update Statistics On Frequently Updated Tables
Dec 23, 2014
I'm working on databases where the statistics of some indexes (tables) change too frequently. Once I update them manually, they show a 10-20% change one minute later, and over 100% change five minutes later. The tables are updated very frequently (multiple times per second).
When I run a query against sys.stats, sys.dm_db_stats_properties and other dynamic views, I see that the statistics were last updated when I did it manually, but the modification count has passed the 500 + 20% of rows threshold (the tables have tens of thousands of rows). Auto create and auto update statistics are set to true on all databases, and I don't know why SQL Server does not update them automatically.
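For reference, here is one way to check how far each statistic has drifted past the classic 500 + 20% auto-update threshold; this is a hedged sketch and assumes SQL Server 2008 R2 SP2 / 2012 SP1 or later, where sys.dm_db_stats_properties is available:

-- Compare the modification counter of each statistic against the
-- old-style 500 + 20% auto-update threshold (sketch for illustration).
SELECT  OBJECT_NAME(s.[object_id]) AS table_name,
        s.name                     AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter,
        500 + (0.20 * sp.rows)     AS auto_update_threshold
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.[object_id], 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;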
I have a query that retrieves a single record from searching on two tables. The statement goes like this... select sum(amount) from Table1 A union Table2 B on a.id = b.id where date < ### and date > #### and account = ###
As people run a particular report, this statement is executed time and time again to pull up the numbers necessary for the report. When the report gets slow, I can speed it up by updating the statistics. My concern is that I'm having to update the statistics every hour; otherwise, the query becomes slow. I have noticed that users are inserting data into one of the tables listed above while other users are running the report. I'm sure that's making it more fragmented and ultimately slowing down the query. Do you have any suggestion on how I can make the union of these two tables faster? Or is there anything I could do to speed up the query besides creating clustered indexes? Any help would be appreciated....thank you
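For what it's worth, the hourly fix the poster describes would look something like the following; Table1 and Table2 are the placeholder names from the post, and FULLSCAN is only one possible sampling choice:

-- Refresh all statistics on both tables feeding the report query.
UPDATE STATISTICS dbo.Table1 WITH FULLSCAN;
UPDATE STATISTICS dbo.Table2 WITH FULLSCAN;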
I'm looking for a quick script that someone has already written to update statistics (not to rebuild or re-organise) on specific indices in specific databases - I guess it would loop through a table comprising a list of databases and indices.
I know Ola has one, but I'm not looking for something that complicated. If I cannot find one I'm going to have to write one myself - I want to avoid re-inventing the wheel, as tomorrow I have to do this work and it's about 7K-plus indices across 10+ databases.
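A minimal sketch of the kind of loop described, assuming a control table dbo.StatsToUpdate (DatabaseName, SchemaName, TableName, IndexName) that you populate yourself; all object names below are hypothetical:

DECLARE @db sysname, @sch sysname, @tbl sysname, @idx sysname, @sql nvarchar(max);

DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT DatabaseName, SchemaName, TableName, IndexName
    FROM dbo.StatsToUpdate;

OPEN c;
FETCH NEXT FROM c INTO @db, @sch, @tbl, @idx;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- USE inside dynamic SQL only affects that batch, so each statement
    -- runs in the right database.
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; '
             + N'UPDATE STATISTICS ' + QUOTENAME(@sch) + N'.' + QUOTENAME(@tbl)
             + N' ' + QUOTENAME(@idx) + N';';
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM c INTO @db, @sch, @tbl, @idx;
END

CLOSE c;
DEALLOCATE c;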
We installed SP1 for SQL Server 2014 this past weekend and got this error message in the logs. I found that if you set the db to read-write, it updates the system objects, even after SP1 has completed. Then you can set it back to read-only. I'm just posting this so other people can find it on the internet, as I wasn't able to find it specifically.
Error Log Entry:System objects could not be updated in database 'x' because it is read-only.
Problem: After installing SP1 for SQL Server 2014 you will find this message in the error logs saying read-only databases could not be updated.
Solution: Simply set the db to read-write and the system objects will get updated, long after SP1 was installed.
ALTER DATABASE [x] SET READ_WRITE WITH NO_WAIT
Then set it back to read-only:
ALTER DATABASE [x] SET READ_ONLY WITH NO_WAIT
You should then see these log entries:
System objects could not be updated in database 'x' because it is read-only.
Setting database option READ_WRITE to ON for database 'x'.
Starting up database 'x'.
CHECKDB for database 'x' finished without errors on 2015-07-25 01:02:28.143 (local time). This is an informational message only; no user action is required.
Synchronize Database 'x' (129) with Resource Database.
Setting database option READ_ONLY to ON for database 'x'.
Starting up database 'x'.
CHECKDB for database 'x' finished without errors on 2015-07-25 01:02:29.888 (local time). This is an informational message only; no user action is required.
I had to reinstall my local copy of SQL a few weeks ago, which naturally overwrote the
msdb.dbo.sysmanagement_shared_server_groups_internal and msdb.dbo.sysmanagement_shared_registered_servers_internal tables.
However, I still have the local XML file that SSMS reads, so I can still access the groups; I just get odd errors when trying to re-register my install as the new CMS. How can I rebuild those tables from the XML file, or is there another way to repopulate them?
We are in the middle of re-designing a few tables (namely transaction tables) that will store very large amounts of data and will be hosted on the cloud (Azure). The old design of this product breaks transaction tables into monthly tables, i.e. an ORDERS table would be physically broken into twelve monthly tables over a year, like ORDERS0115 (mmyy), ORDERS0215 and so on.
We are of the opinion that keeping the entire transaction history in one table is better. We would like to know the best practices for transaction tables like the one mentioned above. Is it better to use one table with partitions? I read somewhere that partitions can slow down SELECT queries if not designed and thought through properly. Since this would be hosted on the cloud (Azure), do you think additional things need to be taken care of? How does a site like Amazon keep its transaction tables?
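As a reference point for the partitioned-table option, a minimal sketch of monthly partitioning for a single ORDERS table; boundary dates, column names and the key design are illustrative assumptions only, and Azure SQL Database supports only the PRIMARY filegroup:

-- One partition per month; new boundaries are added over time with SPLIT RANGE.
CREATE PARTITION FUNCTION pfOrdersMonthly (date)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01', '2015-04-01');

CREATE PARTITION SCHEME psOrdersMonthly
AS PARTITION pfOrdersMonthly ALL TO ([PRIMARY]);

CREATE TABLE dbo.ORDERS
(
    OrderID   bigint        NOT NULL,
    OrderDate date          NOT NULL,
    Amount    decimal(18,2) NOT NULL,
    CONSTRAINT PK_ORDERS PRIMARY KEY CLUSTERED (OrderDate, OrderID)
) ON psOrdersMonthly (OrderDate);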
I have 6 tables which are very huge in row count and need to be partitioned for better manageability.
A little info: every day, 300 million records are inserted and 300 million records are deleted across the tables below. We maintain only 8 days' worth of data in these tables, which is why records older than 8 days are continuously deleted.
Master table, which has [ID], [Timestamp] - Table Name: Sample - 2,578,106 rows
Child tables (the foreign key [ID] is common to all of them; there is no timestamp column in the child tables):
dbo.ConnectionDB - 1,147,578,048
dbo.ConnectionSS - 876,458,321
dbo.ConnectionRT - 118,133,857
dbo.ConnectionSample - 100,038,535
dbo.Command - 100,032,235
I would like to partition the above child tables based on the IDs that are inserted every 4 hours. Meaning, all IDs inserted within a given 4-hour window should end up in the same partition.
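A rough sketch of what that could look like, assuming [ID] is an ever-increasing bigint and that a scheduled job captures the highest ID at the end of each 4-hour window to use as the next boundary; the boundary values below are purely illustrative, and the 8-day sliding window would be maintained with SPLIT/MERGE/SWITCH:

CREATE PARTITION FUNCTION pfConnectionById (bigint)
AS RANGE RIGHT FOR VALUES (100000000, 200000000, 300000000);

CREATE PARTITION SCHEME psConnectionById
AS PARTITION pfConnectionById ALL TO ([PRIMARY]);

-- Every 4 hours, the job adds the next boundary (the max ID just captured):
ALTER PARTITION SCHEME psConnectionById NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfConnectionById() SPLIT RANGE (400000000);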
I have a job scheduled that imports a table from an Oracle database. The job runs at 3am and reports success. But for some reason, when I query the table to see how many records there are, I see the same row count as the day before (it should increase every day - student enrollment). When I execute the package manually, the table updates fine.
We face a slow performance issue - the same query takes a long time to execute after we rebuild and reorganize indexes. But after executing the query or procedure 2-3 times, performance gets faster. I have the following questions:
1. Do we need to update stats after we rebuild and reorganize indexes?
2. Will every query and stored procedure be slow for the first 1-2 executions after we rebuild and reorganize indexes?
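For context on question 1: ALTER INDEX ... REBUILD refreshes the statistics of the rebuilt index (with the equivalent of a full scan) as a side effect, whereas REORGANIZE does not touch statistics at all, and neither operation refreshes column statistics. A sketch of the manual follow-up, with placeholder object names:

-- REORGANIZE leaves statistics alone, so refresh them afterwards.
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;
UPDATE STATISTICS dbo.MyTable IX_MyIndex WITH FULLSCAN;

-- Or refresh everything that has changed, including column statistics:
EXEC sys.sp_updatestats;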
If you were to do a fresh install it would set permissions on the disk so everything just works.
Now, when changing the service account (e.g. to a domain user) using Configuration Manager, does it do the same magic (except perhaps where the database data/log files are on another disk)? Or do you need to trawl through the dozens of folders and assign rights manually?
- An MSSQL 2014 Standard server that houses multiple small databases (in excess of a hundred). - These databases are frequently dropped and restored by an application that uses this SQL Server. - There is a business need for this setup at this time, so I can't get away from it. Therefore answers like "don't have so many small databases that are frequently dropped and restored" would not be helpful.
This is the problem I have:
- When I connect SSMS 2014 to the server and expand the "Databases" node, it takes forever to display. In comparison, SSMS 2008 connected to SQL 2008R2 server with the same number of databases displays the Databases tree very quickly.
I ran a trace to see what exactly SSMS 2014 is doing. When the "Databases" node is expanded, it runs a query that checks each database for memory-optimized tables (a new and wonderful feature of SQL 2014, for sure, but I'm not using it, at least not yet). Naturally, when you have to loop through over a hundred DBs, it takes time. Worse yet, if one of these DBs is in the process of being restored, the query sits and waits to time out before proceeding to the next DB. Sometimes this causes outright timeouts. Here is the query:
USE [MyDatabase]
SELECT ISNULL((SELECT TOP 1 1 FROM sys.filegroups FG WHERE FG.[type] = 'FX'), 0) AS [HasMemoryOptimizedObjects]
To be sure, this is NOT a SQL Server performance issue. This server processes a rather heavy workload and has been doing so for over a month, and the workload completes within expected time limits or better. Even so I've done some basic performance measuring, and the server itself is quite all right.
Moreover, if I connect SSMS 2008 to it, I get an error message (Index out of bounds or somesuch), but SSMS 2008 does connect, and displays the Databases tree much faster than SSMS 2014.
I'd like to turn off the option to check for Memory Optimized Objects altogether, as I'm not using the feature.
As per the attachment, I have created the replication, but nothing is populated under Local Subscriptions. At the same time, the subscription database has been created, but the tables are not populated as per the publication tables.
I have recently defragged my SQL server using INDEXDEFRAG. Can somebody please tell me how to update the statistics on all the tables? Thanks in advance.
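For the statistics question, two common approaches are below; sp_updatestats only touches statistics that have seen row modifications, while the sp_MSforeachtable loop (an undocumented but long-standing procedure) does a heavier FULLSCAN on every user table:

-- Lightweight: update only statistics that have seen modifications.
EXEC sys.sp_updatestats;

-- Heavier: full-scan update on every user table.
EXEC sys.sp_MSforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN;';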
Below is the script that I executed to defrag all the tables in my database if anyone needs this.
/*Perform a 'USE <database name>' to select the database in which to run the script.*/
-- Declare variables
SET NOCOUNT ON
DECLARE @tablename VARCHAR (128)
DECLARE @execstr VARCHAR (255)
DECLARE @objectid INT
DECLARE @indexid INT
DECLARE @frag DECIMAL
DECLARE @maxfrag DECIMAL

-- Decide on the maximum fragmentation to allow
SELECT @maxfrag = 20.0

-- Declare cursor
DECLARE tables CURSOR FOR
   SELECT TABLE_NAME
   FROM INFORMATION_SCHEMA.TABLES
   WHERE TABLE_TYPE = 'BASE TABLE'

-- Open the cursor
OPEN tables

-- Loop through all the tables in the database
FETCH NEXT FROM tables INTO @tablename

WHILE @@FETCH_STATUS = 0
BEGIN
   -- Do the showcontig of all indexes of the table
   INSERT INTO #fraglist
   EXEC ('DBCC SHOWCONTIG (''' + @tablename + ''') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
   FETCH NEXT FROM tables INTO @tablename
END

-- Close and deallocate the cursor
CLOSE tables
DEALLOCATE tables

-- Declare cursor for list of indexes to be defragged
DECLARE indexes CURSOR FOR
   SELECT ObjectName, ObjectId, IndexId, LogicalFrag
   FROM #fraglist
   WHERE LogicalFrag >= @maxfrag
      AND INDEXPROPERTY (ObjectId, IndexName, 'IndexDepth') > 0

-- Open the cursor
OPEN indexes

-- Loop through the indexes
FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
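The posted excerpt stops mid-loop and also omits the CREATE TABLE #fraglist step that precedes the first cursor loop in the Books Online example this script is based on (one column for each DBCC SHOWCONTIG ... WITH TABLERESULTS output column). For completeness, a sketch of how the remainder of that example continues - running DBCC INDEXDEFRAG for each index over the threshold and then cleaning up:

WHILE @@FETCH_STATUS = 0
BEGIN
   PRINT 'Executing DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ', '
         + RTRIM(CONVERT(VARCHAR(15), @indexid)) + ') - fragmentation currently '
         + RTRIM(CONVERT(VARCHAR(15), @frag)) + '%'
   -- Defragment the index in the current database (0 = current database)
   SELECT @execstr = 'DBCC INDEXDEFRAG (0, ' + RTRIM(@tablename) + ', '
         + RTRIM(CONVERT(VARCHAR(15), @indexid)) + ')'
   EXEC (@execstr)
   FETCH NEXT FROM indexes INTO @tablename, @objectid, @indexid, @frag
END

-- Close and deallocate the cursor, and drop the work table
CLOSE indexes
DEALLOCATE indexes
DROP TABLE #fraglist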
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, but otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported from fn_my_permissions(null, null), and it shows minimal permissions, as I'd expect.
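A sketch of the approach described, for SQL Server 2012 and later; log_readers and SomeLogin are placeholder names:

USE [master];
GO
-- Custom server role: members inherit securityadmin (so SSMS can call
-- sp_readerrorlog) but are explicitly denied login management.
CREATE SERVER ROLE [log_readers];
ALTER SERVER ROLE [securityadmin] ADD MEMBER [log_readers];
DENY ALTER ANY LOGIN TO [log_readers];
ALTER SERVER ROLE [log_readers] ADD MEMBER [SomeLogin];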
I have got 4 MS Access database files, each with 3 tables - 12 tables in total - which get updated with new data every evening by an external application. In other words, new data gets appended to all 12 of these tables.
I want to have the exact same 4 databases, with 3 tables each (12 tables in total), but WITHIN MS SQL SERVER, and then update all 12 of these tables every evening with the corresponding updates from the respective tables in the MS Access databases.
I do not want to manually update all 12 of these tables into SQL Server every evening. Hopefully there is some easier way to do this automatically.
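One low-friction option (a sketch only, not the only way): pull the new rows straight from each Access file with OPENROWSET inside a nightly SQL Server Agent job. This assumes the Microsoft.ACE.OLEDB provider is installed and 'Ad Hoc Distributed Queries' is enabled; the file path, table, and key column below are hypothetical:

-- Append only the rows that are not already in the SQL Server copy.
INSERT INTO dbo.Table1 (KeyCol, Col1, Col2)
SELECT a.KeyCol, a.Col1, a.Col2
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'C:\Data\Database1.accdb';'Admin';'',
                'SELECT KeyCol, Col1, Col2 FROM Table1') AS a
WHERE NOT EXISTS (SELECT 1 FROM dbo.Table1 AS t WHERE t.KeyCol = a.KeyCol);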
I ran an INDEXDEFRAG against the tables in my DB, & yet the results in DBCC SHOWCONTIG are the same before and after for some of the tables that it defragmented. Why?
In the DBCC SHOWCONTIG before defragging, this was a sample table:
I ran the script from the BOL entry for 'DBCC SHOWCONTIG' to defragment all indexes in a database , under letter E, with @maxfrag = 5%. It returned the following results (there are 5 nonclustered indexes):
My understanding is that whenever an INSERT, DELETE, or UPDATE statement executes that impacts a significant number of rows in a table, its statistics should be automatically updated by SQL Server. But it was a surprise to see in the profiler that after the INSERT, no update-statistics event happens. The moment a SELECT is executed on the table, however, the 'Auto Stats' event shows up. What is the reason behind this? Why does Auto Stats not happen immediately after the INSERT statement?
Following code can be used to illustrate this (in the profiler select the Auto Stats event under Performance):
IF (SELECT OBJECT_ID('t1')) IS NOT NULL
    DROP TABLE t1
GO
CREATE TABLE t1(c1 INT, c2 INT IDENTITY)
INSERT INTO t1 (c1) VALUES(1)
INSERT INTO t1 (c1) VALUES(2)
INSERT INTO t1 (c1) VALUES(3)
CREATE NONCLUSTERED INDEX i1 ON t1(c1)
GO
--Now add 10,000 rows to update so that statistics are updated. Look into the profiler:
SET NOCOUNT ON
GO

DECLARE @n INT
SET @n = 1
WHILE @n <= 10000
BEGIN
    INSERT INTO t1 (c1) VALUES(2)
    SET @n = @n + 1
END

SET NOCOUNT OFF
GO
--Next, run the following statement. The profiler will now show the 'Auto Stats' event
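(The original post omits the statement itself; per the text it is simply a SELECT against t1 that uses the c1 column, for example:)

SELECT * FROM t1 WHERE c1 = 2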
Account getting wrongly updated due to concurrency. We are using SQL 2014 and storing data in a customer_account table for customer account details; however, we are seeing wrong values inserted during concurrent access. Please find the code below:
Declare @date datetime2(7)
Declare @InsDate datetime2(7)
SELECT TOP 1 @Amountremaining = Remaining, @Date = [Datetime]
FROM <customertaccounttable>
[code] ....
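The posted fragment is cut off, but the symptom described (two sessions reading the same Remaining value and both writing) is a classic read-then-write race. A sketch of one common fix, taking an update lock on the row while it is read inside the same transaction that writes the new value; the table and column names follow the fragment above, and @AccountId and the key column are assumptions:

DECLARE @Amountremaining DECIMAL(18, 2), @Date DATETIME2(7), @AccountId INT = 1;

BEGIN TRAN;

-- UPDLOCK/HOLDLOCK keeps a second session from reading the same balance
-- until this transaction commits.
SELECT TOP (1) @Amountremaining = Remaining, @Date = [Datetime]
FROM dbo.customer_account WITH (UPDLOCK, HOLDLOCK)
WHERE AccountId = @AccountId
ORDER BY [Datetime] DESC;

-- ... compute the new remaining amount and insert/update it here ...

COMMIT TRAN;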
Can anyone recommend an approach for identifying the frequently accessed tables and files in a SQL Server application? Particularly if the database is stored on raw partitions instead of FAT or NTFS. Thanks.
I am facing a problem related to locking and isolation level on some tables.
My problem is that there are some tables which are frequently updated, and I also want to run a SELECT query over those tables every second and get only committed records. In the current scenario we start transactions with the read committed isolation level for updating records. But in this scenario I can't get the SELECT query result, because some of the resources are being used by the transactions, and after some time it gives a deadlock error.
So I want a solution where both operations run simultaneously and the SELECT only ever sees committed records, even while a transaction is running. Please help me solve this problem.
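For reference, one commonly used option for exactly this pattern (readers wanting only committed data without blocking behind writers) is read committed snapshot isolation at the database level; MyDatabase is a placeholder, and the change needs a moment with no other active connections:

ALTER DATABASE [MyDatabase] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;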
I am working on an existing infrastructure and I do not have the liberty to change much right now. I am in a situation where the app issues the UPDATE STATISTICS command quite often - so frequently that sometimes one blocks another. Is there any way I can do something like this (see the sketch after this paragraph)?
IF (update_statistics is already going on) don't do anything, else run update statistics
This is a temporary solution until I fix the bad inline SQL code (in the app) and use stored procedures.
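A rough sketch of that check, assuming it is acceptable to simply skip the refresh when another UPDATE STATISTICS is already in flight; dbo.SomeTable is a placeholder:

-- Only run UPDATE STATISTICS if no other session is already running one.
IF NOT EXISTS (
    SELECT 1
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID
      AND r.database_id = DB_ID()
      AND t.text LIKE '%UPDATE STATISTICS%')
BEGIN
    UPDATE STATISTICS dbo.SomeTable;
END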
If I install an instance with Windows-only authentication and then change it to Mixed Mode, when I enable the sa login the password has already been set. What is the default? Is the password generated, and if so, how secure is it and what algorithm is used?
My SQL databases in SQL Server 2014 have the status "suspend", as I saw in SQL Server Management Studio. I can't restore the databases to a serviceable condition through the standard procedures. I need to restore from the .mdf file.
I am using a monitoring system where I can monitor a numeric SQL result, assuming the result is one field and one row. I would like to use this to monitor, say, the free available space or percentage on the master database. DBCC SQLPERF gives me several columns and rows for all databases on the server.
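A sketch of one way to reduce that output to a single value the monitor can consume: capture DBCC SQLPERF(LOGSPACE) into a table variable and select just one database's figure (here the log space used percentage for master; the column names in the table variable are only local labels):

DECLARE @logspace TABLE (
    DatabaseName    sysname,
    LogSizeMB       float,
    LogSpaceUsedPct float,
    Status          int);

INSERT INTO @logspace
EXEC ('DBCC SQLPERF(LOGSPACE) WITH NO_INFOMSGS');

-- One row, one column: percent of log space used in master.
SELECT LogSpaceUsedPct
FROM @logspace
WHERE DatabaseName = N'master';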
In our environment, applications use a DNS name which points to the physical server's IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point the DNS name to the listener IPs? If a failover happens, will DNS resolve to the listener IP address in the subnet where the primary node currently is?
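For reference, a multi-subnet availability group listener carries one IP address per subnet, and the cluster itself registers the A-records in DNS (by default all of them, or only the online one if the RegisterAllProvidersIP network name property is turned off); clients then connect with MultiSubnetFailover=True. A sketch with placeholder names and addresses:

ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'MyAGListener' (
    WITH IP (
        (N'10.10.1.50', N'255.255.255.0'),   -- subnet A
        (N'10.20.1.50', N'255.255.255.0')    -- subnet B
    ),
    PORT = 1433);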