SQL Server 2008 :: Update Statistics With Large Databases

Feb 5, 2015

Currently our database size is around 350 GB. It will grow to 1.5 TB.

We have the following options set at the database level:

Auto Create Statistics: True
Auto Update Statistics: True
Auto Update Statistics Asynchronously: False

We have a weekly Update Statistics job that runs for a very long time. It was created through a maintenance plan using the full scan option.

We previously tested sampling instead of a full scan, but the sampled statistics hurt query performance.

Is there an option to avoid the long job duration?

If we don't update statistics manually, what will happen? How do you maintain statistics with large databases?
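
A hedged middle ground (all object names below are placeholders): rather than full-scanning every table weekly, find the statistics with real churn and refresh only those, reserving FULLSCAN for the tables where sampling has already proven harmful. If the manual job never ran, auto-update would still refresh a statistic once roughly 500 rows plus 20% of the table had changed, but with a small default sample - which is what hurt the queries before.

-- find statistics with churn; sysindexes.rowmodctr is a rough but
-- serviceable change counter on SQL Server 2008
SELECT OBJECT_NAME(i.id) AS table_name, i.name AS stats_name,
       STATS_DATE(i.id, i.indid) AS last_updated, i.rowmodctr
FROM sys.sysindexes AS i
WHERE i.indid > 0 AND i.rowmodctr > 0
ORDER BY i.rowmodctr DESC;

-- then refresh per table, choosing the scan level case by case
UPDATE STATISTICS dbo.SamplingSensitiveTable WITH FULLSCAN;
UPDATE STATISTICS dbo.OtherTable WITH SAMPLE 30 PERCENT;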

View 9 Replies



SQL Server 2008 :: Update Statistics On Tables Of Database After Rebuild Indexes?

Nov 5, 2015

If I rebuild the indexes that are above 30% average fragmentation, should I update statistics afterwards?

Also, how can I see whether I need to update statistics on the tables of my database?
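
Worth knowing before scripting anything: ALTER INDEX ... REBUILD already refreshes that index's statistics with the equivalent of a full scan (REORGANIZE does not), so after a rebuild pass only column statistics and any indexes you skipped are candidates. A minimal follow-up, assuming default sampling is acceptable for those:

-- refreshes only statistics whose underlying data has changed
EXEC sp_updatestats;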

View 3 Replies View Related

SQL Server 2012 :: Detect Update Statistics

May 20, 2014

I am working on an existing infrastructure and I do not have the liberty to change much right now. I am in a situation where the app issues UPDATE STATISTICS commands so often that one sometimes blocks another. Is there any way I can do something like this:

IF EXISTS (SELECT 1 FROM sys.dm_exec_requests
           WHERE command LIKE 'UPDATE STATISTIC%')
    PRINT 'update statistics already running - do nothing';
ELSE
    UPDATE STATISTICS dbo.MyTable;  -- hypothetical table name

This is a temporary solution until I fix the bad inline SQL code in the app and move to stored procedures.

View 8 Replies View Related

SQL Server 2000: Update Statistics Causes Error 3628

May 30, 2007

Hi,

I am using SQL Server 2000 SP4 (Standard Edition); when I run an UPDATE STATISTICS command against a table I get error 3628:



UPDATE STATISTICS IMSV7.BLITEMRT WITH SAMPLE 20 PERCENT
Server: Msg 3628, Level 16, State 1, Line 1
A floating point exception occurred in the user process. Current transaction is canceled.



Does anyone know of a fix or a workaround?

View 3 Replies View Related

SQL Server Express Login Issue With Large Databases

Feb 6, 2007

Hello,

My company works with SQL Server 2005 express locally with Visual Studio to develop websites. Everything works very well.

We use SQL Server 2005 express on our production server as well. We change the database over to a non-User Instance when the site is ready to go live. Everything works fine here as well.

We run into issues when databases get near 100 MB. This is well below the stated database size limit for Express of 4GB.

At that point, about once a day, a site with a "large" database will stop responding. The error we get is: "Cannot open database 'DBNAME' requested by the login. The login failed. Login failed for user 'DBUSER'."

The only way we've found to fix this is to restart the SQL Express service. Obviously, that isn't a very useful alternative.

Has anyone run into anything like this? Could we have some setting wrong?

Would moving to the full version of SQL Server 2005 fix this?
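
One setting worth ruling out (an assumption, not a confirmed diagnosis): SQL Server Express databases are often left with AUTO_CLOSE ON, which shuts a database down whenever the last connection closes; under load the reopen can fail intermittently with exactly this "Cannot open database" login error. Turning it off is a one-line change ('DBNAME' is the placeholder from the error text above):

ALTER DATABASE DBNAME SET AUTO_CLOSE OFF;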

View 3 Replies View Related

SQL Server Admin 2014 :: Update Statistics On Frequently Updated Tables

Dec 23, 2014

I'm working on databases where the statistics on some indexes (tables) change too frequently. A minute after I update them manually they show a 10-20% change, and five minutes later more than a 100% change. The tables are updated very frequently (multiple times per second).

When I run a query against sys.stats, sys.dm_db_stats_properties, and other dynamic views, I see that the statistics were last updated when I did it manually, yet the modification rate has long passed the 500-rows-plus-20% threshold (the tables have tens of thousands of rows). Auto create and auto update statistics are set to true on all databases, and I don't know why SQL Server does not update them automatically.
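
One detail that may explain it: auto-update statistics fires at query compile time, when the optimizer notices a referenced statistic is stale, not at modification time, so a heavily written table can sit far past the threshold until something compiles a plan against it. The modification counter you are already reading is the thing to watch (dbo.MyTable is a placeholder):

-- per-statistic change counters since the last update
SELECT s.name AS stats_name, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.MyTable');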

View 2 Replies View Related

SQL Server 2008 :: Add ID And Primary Key To Large Table(s)

Apr 24, 2015

Background:

* SQL Server 2008 R2
* Database was created from a third party product. The product writes to the 3 tables that I need to make changes to 24/7 and downtime is not an option. All changes must be done live.
* Database overall size is ~200 GB
* The 3 tables I must update make up ~190 GB of that space.
* Tables have no primary key or ID columns. Therefore, the data is highly fragmented.
* Of the ~190 GB of space allocated for the tables, there is roughly 70 GB of actual data.
* Rows of the table are not guaranteed to be unique. In fact, on one of the tables, tests were run with a small sample of data and duplicates were very much evident.

What I'm trying to accomplish here is to get an ID column added to the 3 tables and set that ID field as the primary key. Doing so will force the data to become much less fragmented than it is currently and with purging and new inserts, eventually fragmentation will be nearly non-existent.

Problem:
Making table changes on tables this large while data is constantly being added poses many risks and can cause data loss. This was tried on a smaller table and the entire table was lost in the process; a restore from backup was needed to get back to the most recent log backup point.

Original Solution:
My original plan was to create a backup of each table and run the script below to migrate the majority of the data temporarily into the new table. I could then update the original table (which now would contain much less data) and then migrate the data back.

CREATE TABLE #temp
(
MsgDate varchar(10)
,MsgTime varchar(8)
,MsgPriority varchar(30)
,MsgHostname varchar(255)

[Code] ....

Original Solution Problem:
The problem with the solution above is that it runs a DELETE against the original table using the values from the temporary table. When there are duplicate rows that have not all been inserted into the backup table yet, they will all be removed from the original table, because there is nothing unique to separate them. In my testing, I had 10,000 rows in the original table and ended up with 9,959 rows in the backup table.

Question 1: Is my approach to making these table changes reasonable?
Question 2a: If so, how can I make sure I don't lose data as part of this temporary migration of the data to my backup tables?
Question 2b: If not, what would be a better approach that isn't going to cause disruption to the application that INSERTs data 24/7 and won't have any risk of data loss?
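
For question 2a in particular, a hedged sketch that sidesteps the duplicate-row problem entirely: add the IDENTITY column in place instead of round-tripping rows through a backup table. The caveats are real - adding an IDENTITY column is a size-of-data operation that holds a schema-modification lock while every row is stamped, and ONLINE index builds require Enterprise Edition and no LOB columns - so treat this as a starting point, not a recipe (dbo.MsgTable is a placeholder):

-- 1. add the surrogate key in place; this locks the table while rows are stamped
ALTER TABLE dbo.MsgTable ADD ID BIGINT IDENTITY(1,1) NOT NULL;

-- 2. build the clustered primary key without taking the table offline
ALTER TABLE dbo.MsgTable ADD CONSTRAINT PK_MsgTable PRIMARY KEY CLUSTERED (ID)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);

Once every row carries a unique ID, duplicates are no longer interchangeable and the backup-and-delete migration loses its data-loss failure mode.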

View 9 Replies View Related

SQL Server 2008 :: Large Tables In OLTP

Jul 14, 2015

How many records does a table need before it is considered a large table?

We are getting more deadlocks. We are using the default isolation level, and read and insert statements are blocking each other and causing deadlocks.

I am thinking that purging might reduce the deadlocks.

The table has 15 million records. Is this considered a large table in OLTP systems?

In general, how many records make a table "large"?
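
There is no fixed row count at which a table becomes "large"; 15 million rows is routine for OLTP if the indexes fit the access patterns. For reader/writer deadlocks under the default isolation level, one option often evaluated before purging (a hedged suggestion - measure the extra tempdb version-store load first) is row versioning, which lets readers see the last committed version instead of blocking behind inserts:

-- MyDb is a placeholder; the ALTER needs a moment without other active sessions
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;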

View 1 Replies View Related

SQL SERVER 2005 DATABASE MIRRORING For Large Number Of Databases

Jun 1, 2006

I am trying to enable database mirroring for 100 databases. It works error-free up to 59 (sometimes 60) databases, with the status (principal, synchronized) on the principal. On the 60th or 61st database the status becomes (principal, disconnected). The mirror also starts acting abnormally: connections to the mirror begin to time out and it will not enable database mirroring on any more databases. I have SQL Server 2005 Enterprise with SP1 on the servers; a witness is not included yet.

These are my test servers... I have more than 500 databases on my production servers.

The principal and mirror both use port 5022 for endpoint communication.

All of the databases are critical and all must be included in database mirroring, so I tried to implement database mirroring again. The system has 3 GB of RAM and SQL Server (the mirror) is using 85 MB of RAM, but it still gives this error while trying to enable database mirroring for the 37th database:

"There is insufficient system memory to run this query"

WHY?
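
A likely factor (hedged, since the exact limits depend on the build and hardware): every mirroring session draws worker threads and memory from instance-wide pools, and on a 32-bit server with 3 GB of RAM the address space runs out well before 100 sessions - which would match both the disconnects around database 60 and the "insufficient system memory" error at database 37. The ceilings are easy to inspect:

-- the worker-thread ceiling all mirroring sessions share
SELECT max_workers_count FROM sys.dm_os_sys_info;

-- how many workers are currently alive
SELECT COUNT(*) AS workers_in_use FROM sys.dm_os_workers;

The usual advice for SQL Server 2005 was to mirror only a small number of databases per instance, especially on 32-bit hardware, and to scale out across instances instead.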

View 19 Replies View Related

SQL Server 2008 :: Migrate Contents Of Large Table From One DB To Another?

Mar 3, 2015

I have a large table containing about 800 million rows with an average row length of about 1K. The columns in the table are char columns. I need to move the contents of this table into a similar table where the target columns are varchar. The original table column definitions are compatible with the target table but the reverse is not necessarily true. For example, one column is being changed from int to bigint. The table is partitioned.

So, what is the fastest way to migrate the data? I was thinking of unloading each partition into a flat file and loading the target table with multiple load streams. Is this a good way?
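
Flat files per partition with parallel loads is a reasonable approach. If both tables live on the same instance, a partition-at-a-time INSERT ... SELECT is worth benchmarking against it; a sketch with hypothetical names, assuming the bulk-logged or simple recovery model so the load can be minimally logged into a heap:

-- one statement per source partition; the CAST handles the int-to-bigint widening
INSERT INTO dbo.TargetTable WITH (TABLOCK)   -- TABLOCK enables minimal logging into a heap
SELECT MsgDate, CAST(MsgId AS BIGINT) AS MsgId, MsgText
FROM dbo.SourceTable
WHERE $PARTITION.pf_SourceDate(MsgDate) = 1;  -- pf_SourceDate is a placeholder partition function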

View 0 Replies View Related

SQL Server Admin 2014 :: Database Mirroring For Large Number Of Databases

Oct 27, 2015

I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise and 60 databases, 20 per instance. I need to set up mirroring for each database to a secondary server that also has 4 cores and 3 instances. My understanding is that the mirror server will provide a maximum of 512 worker threads, and the 60 mirrored databases would consume about 240 of them. What needs to be checked to judge the feasibility of the asynchronous mirroring setup described above?

View 0 Replies View Related

SQL Server 2008 :: Replication Error - Field Size Too Large

Feb 1, 2011

I've got two databases on the same server and replicate some tables from one database to the other. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. Two tables on the subscriber have some extra columns.

I get a "field size too large" error when trying to replicate them. Is there a workaround that does not require making the publisher and subscriber tables identical in schema?

View 5 Replies View Related

SQL Server 2008 :: Retrofitting Partitioning To Existing (large) Tables

Feb 9, 2015

We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.

After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support it.

The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.

Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.

For the other non-clustered, non-unique indexes, can I just add the snapshotdate to the end of the index? i.e. will that satisfy the switching requirement?
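
What switching actually requires is that every index be aligned - partitioned with the same function over snapshotdate - not that snapshotdate appear in your key lists. For a non-unique nonclustered index it is enough to create it on the partition scheme; SQL Server adds the partitioning column behind the scenes. Unique indexes are the exception: there the partitioning column must be part of the key. A sketch with hypothetical names:

-- aligned simply by being built on the partition scheme
CREATE NONCLUSTERED INDEX IX_BigTable_SomeCol
    ON dbo.BigTable (SomeCol)
    ON ps_SnapshotDate (snapshotdate);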

View 1 Replies View Related

SQL Server 2008 :: What Could Cause Large Gaps In Servers Default Trace

Feb 12, 2015

I have the default trace on a SQL Server 2008R2 instance enabled and found today that there is a gap of nearly 4 minutes in the trace during a time of the day when there most certainly is not going to be a 4 minute window of nothing.

What, if anything, could cause the default trace to have a gap like this? The SQL Server instance (against my preferences) is hosted on VMware; however, it has its own host, so its resources are not shared with any other server. The data and log files reside on different parts of the SAN. Our IT and network admins are looking into the issue on their end, but when I found a nearly 4-minute gap in the default trace it struck me that this could be something outside of SQL Server.

View 1 Replies View Related

SQL Server 2008 :: How To Find Statements That Cause Large Memory Paging

Apr 22, 2015

I am monitoring our production server, and noticed that periodically we have spikes of Memory Paging Rate (pages/sec).

How can I find the particular queries and stored procedures that cause this?
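
Paging is an operating-system-level symptom, so no DMV ties a hard page fault directly to a statement; a hedged starting point is to see which running statements hold the largest memory grants when a spike hits:

-- active requests ordered by granted workspace memory
SELECT mg.session_id, mg.requested_memory_kb, mg.granted_memory_kb,
       t.text AS query_text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
ORDER BY mg.granted_memory_kb DESC;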

View 5 Replies View Related

SQL Server 2008 :: Speed Up Text Search In Large Result Set?

Jul 14, 2015

I have a query below which filters the detail field in the #TempLogins table. The detail field is a text field containing many kinds of strings, some of them URLs with fragments like "ResultID=5", which is what the ResultIDSearch and ResultSetIDSearch fields contain. The records with entries like "ResultID=5" are the ones I'm trying to filter for.

The problem is that the query takes far too long to run. The #TempLogins table has around 200K records and the #TempSearch table around 80K records.

select * from #TempLogins a where exists
(select 1 from #TempSearch t1 where
a.detail like '%' + t1.ResultIDSearch + '%'
or
a.detail like '%' + t1.ResultSetIDSearch + '%')
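
A LIKE with a leading wildcard can never use an index, so this is inherently an 80K-by-200K string-scan problem. Two mitigations worth benchmarking (hedged - gains vary with the data): split the OR so each EXISTS can short-circuit on its own, and compare with CHARINDEX instead of pattern matching:

select * from #TempLogins a
where exists (select 1 from #TempSearch t1
              where charindex(t1.ResultIDSearch, a.detail) > 0)
   or exists (select 1 from #TempSearch t1
              where charindex(t1.ResultSetIDSearch, a.detail) > 0)

If the "ResultID=5" fragments could be parsed out into their own indexed column at load time, the scan disappears altogether.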

View 1 Replies View Related

SQL Server 2008 :: Large Binary Dataset - Database Or File System?

Jun 2, 2015

I have a well-structured but also very large binary data set that is generated by a C++ application every five minutes. The data needs to be accessed by SQL applications. Since data is generated every five minutes, performance is key, both for write and read. The data set is about 500 MB. If the data is written to the file system, the write performance doesn't involve SQL Server. For reading it, I have a CLR function that reads the portions of the data I need based on offset and length. That works and is very fast. The problem is that the data is stored in the file system, so it is not self-contained within the database.

A second option, which I haven't explored yet, is to write the data into a table as VARBINARY(MAX) and read it using SUBSTRING with the appropriate offset and length. What is SQL Server's write/read performance for binary data of this size, and is there a third option I haven't thought of? I'm using SQL Server 2014.
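
A third option on SQL Server 2014 is FILESTREAM: the blobs live in the NTFS file system (so the C++ writer can stream them through the Win32 API at file-system speed) but remain part of the database, its transactions, and its backups. A sketch, assuming a FILESTREAM filegroup has already been added to the database:

CREATE TABLE dbo.BinarySnapshots
(
    SnapshotId UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
               DEFAULT NEWSEQUENTIALID(),          -- FILESTREAM requires a ROWGUIDCOL
    CapturedAt DATETIME2 NOT NULL,
    Payload    VARBINARY(MAX) FILESTREAM NULL      -- readable with SUBSTRING from T-SQL too
);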

View 5 Replies View Related

SQL Server 2008 :: Making Use Of A Large Transaction File To Delete Records?

Jun 5, 2015

We currently have a database about 300 GB in size. Because our backup system failed for a while, we were left with a transaction log file that grew to about 160 GB. Our backups are working again and everything is fine. My understanding is that the transaction log file is now practically empty, but its allocated size remains 160 GB.

When you delete records, the delete operations are logged to the transaction log file. My understanding is that when a log backup is done, that space in the log file is freed for reuse.

Could I make use of this relatively large transaction log file and start deleting records without actually adding to the log file's size?

The plan is to delete records from logging tables that are not referenced by any other table, without growing the transaction log file. For example, over a period of a few weeks we could delete a chunk of records from a table, then, after a backup has completed, delete another chunk, until we have the table down to only the records we now need. Will this work?
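
Broadly, yes: in the FULL recovery model each log backup marks the used portion of the log as reusable, so deletes issued in modest batches, with log backups running in between, should cycle within the existing 160 GB file instead of growing it. A hedged sketch (table, column, cutoff, and batch size are placeholders):

WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.LoggingTable
    WHERE LogDate < '2015-01-01';
    IF @@ROWCOUNT = 0 BREAK;
    -- pause here if needed so scheduled log backups can recycle the space
END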

View 2 Replies View Related

SQL Server 2008 :: Changing Large / Existing Table To Sparse Columns?

Sep 21, 2015

I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns defined as DECIMAL(13,2), with a large percentage of the values (over 50% in most cases, some as much as 99%) being 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values to NULL and then mark the columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table. I see two approaches:

Option 1: three steps for each column in each table in each database.

Step 1: alter the table to allow NULLs in the column

Step 2: update the table, setting the column to NULL where it equals 0.00

Step 3: alter the table to mark the column as sparse

Option 2:

Step 1: create an entirely new table with sparse column definitions

Step 2: copy the entire table, transforming 0.00 to NULL for the affected columns, via SSIS

Step 3: drop the original table and rename the new table to the original name
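
Option 1 expressed in T-SQL for a single hypothetical column - the UPDATE in the middle is the expensive, fully logged step, so on a 200 GB table it should be batched like any large purge:

-- step 1: allow NULLs
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 DECIMAL(13,2) NULL;

-- step 2: convert the zeros (batch this in production)
UPDATE dbo.BigTable SET Amount01 = NULL WHERE Amount01 = 0.00;

-- step 3: mark the column sparse
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 ADD SPARSE;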

View 0 Replies View Related

SQL Server 2008 :: Update Null Enabled Field Without Interfering With Rest Of INSERT / UPDATE

Apr 16, 2015

If I have a table with one or more nullable fields, and an INSERT or UPDATE leaves one or more of them NULL either explicitly or implicitly, is there a way I can set these to non-NULL values without interfering with the rest of the INSERT or UPDATE, i.e., the other fields in the table?

EXAMPLE:

CREATE TABLE dbo.MYTABLE(
ID NUMERIC(18,0) IDENTITY(1,1) NOT NULL,
FirstName VARCHAR(50) NULL,
LastName VARCHAR(50) NULL,

[Code] ....

If an INSERT looks like any of the following, what can I do to change the NULL being assigned to DateAdded to a real date, preferably the value of GETDATE() at the time of the insert? I've heard of INSTEAD OF triggers, but I'm not trying to override the entire INSERT or UPDATE, just the one (maybe two) fields that are being left NULL or explicitly set to NULL. The same applies to any UPDATE where DateModified is not specified or is explicitly set to NULL: I want DateModified to never be NULL after an UPDATE.

INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded)
VALUES('John','Smith',NULL)

INSERT INTO dbo.MYTABLE( FirstName, LastName)
VALUES('John','Smith')

INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded)
SELECT FirstName, LastName, NULL
FROM MYOTHERTABLE
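
A DEFAULT constraint only fires when the column is omitted, so it covers the second INSERT but not the explicit NULLs. An AFTER trigger (rather than INSTEAD OF, since nothing else in the statement needs overriding) can patch just the two date columns; a sketch that assumes the elided definition includes DateModified and that RECURSIVE_TRIGGERS is OFF, its default:

CREATE TRIGGER trg_MYTABLE_Dates ON dbo.MYTABLE
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET DateAdded    = COALESCE(t.DateAdded, GETDATE()),  -- fill only if still NULL
        DateModified = GETDATE()                          -- always stamp the change
    FROM dbo.MYTABLE AS t
    JOIN inserted AS i ON i.ID = t.ID;
END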

View 9 Replies View Related

SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database that is enabled for mirroring. We need to delete old records - around 500K rows from one table - but the table has foreign key relationships. How are these kinds of deletes done on production servers?

View 2 Replies View Related

SQL Server 2008 :: How To Find Which Queries / Processes Causing Large Memory Paging Rate

Mar 30, 2015

Our monitoring tool shows that our production system periodically experiences a high paging rate - up to 800 memory pages/sec. How can we find out which particular queries, stored procedures, or processes initiate this?
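
Page-file activity is an OS counter, so the closest proxy inside SQL Server is physical I/O per cached statement; a hedged starting point:

-- cached statements by physical reads since they entered the plan cache
SELECT TOP (10)
       qs.total_physical_reads,
       qs.execution_count,
       st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_physical_reads DESC;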

View 3 Replies View Related

SQL Server 2008 :: Grant Access To All Databases

Nov 18, 2010

I have around 600 databases on my server, and a user needs SELECT access to all of them. Will I have to go into each database one by one, create the user, and grant him the db_datareader role, or is there a shorter way?
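
One shortcut, with a caveat: sp_MSforeachdb is undocumented and has been known to skip databases occasionally, so verify the result afterwards. 'ReadOnlyUser' is a placeholder for the real login:

EXEC sp_MSforeachdb
'USE [?];
IF DB_ID(''?'') > 4  -- skip master, tempdb, model, msdb
BEGIN
    CREATE USER [ReadOnlyUser] FOR LOGIN [ReadOnlyUser];
    EXEC sp_addrolemember N''db_datareader'', N''ReadOnlyUser'';
END';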

View 8 Replies View Related

SQL Server 2008 :: Shrink System Databases

Apr 28, 2015

I have some questions:

1. Is it OK to shrink master and model? The transaction logs for those databases are full.
2. Is it OK to set the recovery model of the model database to SIMPLE? At the moment it is in FULL recovery and its transaction log is full too.
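
For what it's worth: master always uses the SIMPLE recovery model, so its log normally truncates on its own, and switching model to SIMPLE is common - just remember that every new database inherits model's settings. A sketch; confirm the logical log file name in sys.master_files before shrinking:

ALTER DATABASE model SET RECOVERY SIMPLE;  -- new databases will inherit SIMPLE

USE model;
DBCC SHRINKFILE (modellog, 10);  -- 'modellog' is the usual logical name; verify first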

View 1 Replies View Related

SQL Server 2008 :: Script To Find A Value In All Databases?

Aug 4, 2015

Is there a script to find a value in all databases? For example, I want to find a value containing the word "SQL" in all databases, and get back which database and which table it exists in.

My script only finds a value in one database.

View 8 Replies View Related

SQL Server 2008 :: Enabling TDE On Databases Which Are Used For Log Shipping

Sep 15, 2015

I have log shipping enabled on databases (primary and secondary) and it works fine. I need to implement TDE on the database. I have experience implementing TDE on databases that are not used for log shipping.

What steps are needed to set up TDE when log shipping is involved?
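
The one extra requirement over a standalone TDE setup is that the secondary must hold the same certificate before it can restore the encrypted log backups. A hedged outline (certificate name, paths, and passwords are placeholders):

-- on the primary
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
BACKUP CERTIFICATE TDECert TO FILE = 'C:\keys\TDECert.cer'
    WITH PRIVATE KEY (FILE = 'C:\keys\TDECert.pvk',
                      ENCRYPTION BY PASSWORD = '<strong password>');

-- on the secondary: restore the same certificate into master first
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE TDECert FROM FILE = 'C:\keys\TDECert.cer'
    WITH PRIVATE KEY (FILE = 'C:\keys\TDECert.pvk',
                      DECRYPTION BY PASSWORD = '<strong password>');

-- back on the primary: enable TDE as usual; log shipping then continues to work
USE MyDb;  -- placeholder database
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE MyDb SET ENCRYPTION ON;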

View 0 Replies View Related

SQL Server 2008 :: How To Migrate Huge Databases

Sep 27, 2015

I would like to migrate around 15 databases from the production server to a new production server. The biggest database is 80 GB.

Is there a fast way to do this with very little downtime?

What I can think of is to shrink the database files and perform a detach-attach.
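
Detach/attach keeps each database down for the whole file copy. A lower-downtime pattern (hedged sketch; paths are placeholders) is to restore full backups on the new server ahead of time WITH NORECOVERY, so the cutover is only as long as the final log backup:

-- days before cutover, per database
RESTORE DATABASE MyDb FROM DISK = 'D:\staging\MyDb.bak' WITH NORECOVERY;

-- at cutover: take the tail of the log, copy it across, and finish
BACKUP LOG MyDb TO DISK = 'D:\staging\MyDb_tail.trn' WITH NORECOVERY;  -- leaves the old copy in RESTORING
RESTORE LOG MyDb FROM DISK = 'D:\staging\MyDb_tail.trn' WITH RECOVERY;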

View 8 Replies View Related

SQL Server 2008 :: Paths Not Well Defined For Databases

Oct 29, 2015

What are the potential risks of keeping data, log, tempdb, backups, etc. on the same drive, although in different folders?

View 4 Replies View Related

SQL Server 2008 :: Find Unused Databases Or When Last Used In A Instance?

Mar 30, 2011

How do I find unused databases in an instance, or when they were last used or accessed?

I'm on SQL Server 2008 R2 64-bit Enterprise.

I need to find when a database was last accessed.
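
sys.dm_db_index_usage_stats records the last read and write against each index, and aggregated per database it gives a rough "last accessed" picture - with the major caveat that it is cleared at every instance restart, so it only covers activity since then:

SELECT DB_NAME(database_id)  AS database_name,
       MAX(last_user_seek)   AS last_seek,
       MAX(last_user_scan)   AS last_scan,
       MAX(last_user_lookup) AS last_lookup
FROM sys.dm_db_index_usage_stats
WHERE database_id > 4          -- skip the system databases
GROUP BY database_id;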

View 3 Replies View Related

SQL Server 2008 :: Restore Number Of Databases Using Powershell

Aug 15, 2012

I have a new problem when restoring a number of databases using PowerShell. The script I'm using is based mainly on this one (Part 2 in particular): [URL] .....

The problem I'm having is with the RedgateGetDatabaseName function. My hunch is that it's down to the different version of Red Gate and how SQL Backup works. Basically, when the function is called it returns both the database name and the number of rows the SQL command in the function affected. I've tried including SET NOCOUNT ON at the start of the SQL command in the function, but it still returns the row count.

View 5 Replies View Related

SQL Server 2008 :: Generate Data Models Of All Databases

Feb 12, 2015

I need to generate data models of all databases for an instance. How can I accomplish this?

View 3 Replies View Related

SQL Server 2008 :: Populating UserID Field That Comes From One Of 2 Databases

Feb 12, 2015

I have 2 databases: one for our intranet & another for our internet website.In the intranet database, we have a table called "Clients". A new client can be added in one of 2 ways:

1) One of our employees manually adds the client via the intranet
2) A new customer subscribes to our services via our website, thus becoming a new "client" (when subscribing online, we also add a record to an "Accounts" table located in our internet website database, but we'd also like to add a record to our intranet's Clients table as well).

In the Clients table is a field called "CreatedBy", which expects the UserId of whoever created the account. Again, this UserId can belong either to an employee via the intranet or to a new customer via our internet website. How do I distinguish where the UserId in the dbo.Clients.CreatedBy field is coming from?

My first thought was to append a negative sign to the CreatedBy UserId if it came from a customer via the website, but that seemed too quirky (i.e., if an employee whose UserId was '10' added a client, Clients.CreatedBy would be '10'; if a customer subscribed via the website, Clients.CreatedBy would be '-10'). So I need a way to keep these databases separate while keeping track of which database a user is coming from when we create a new Client record.
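
A less quirky alternative (a hedged sketch; the column and constraint names are invented) is to record the source explicitly next to CreatedBy, so the pair stays unambiguous even if the two databases ever hand out the same UserId:

ALTER TABLE dbo.Clients
    ADD CreatedBySource CHAR(1) NOT NULL
        CONSTRAINT DF_Clients_CreatedBySource DEFAULT 'I'   -- 'I' = intranet employee
        CONSTRAINT CK_Clients_CreatedBySource
            CHECK (CreatedBySource IN ('I', 'W'));          -- 'W' = website customer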

View 0 Replies View Related






