SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database with mirroring enabled. We need to delete old records, around 500k rows, from a table. But it has foreign key relationships. How are these kinds of deletes best done on production servers?
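A common approach is to delete in small batches, child rows first, so each transaction stays small and the mirroring send queue does not fall far behind. A minimal sketch, assuming hypothetical table and column names (dbo.Child referencing dbo.Parent, with a CreatedDate cutoff):

-- remove referencing rows first so the foreign key is not violated
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) c
    FROM dbo.Child AS c
    INNER JOIN dbo.Parent AS p ON p.Id = c.ParentId
    WHERE p.CreatedDate < '20140101';
    IF @@ROWCOUNT = 0 BREAK;
END

-- then delete the parent rows in the same batched fashion
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.Parent WHERE CreatedDate < '20140101';
    IF @@ROWCOUNT = 0 BREAK;
END

With mirroring, pausing between batches and taking frequent log backups keeps the transaction log and the send queue from ballooning.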

View 2 Replies



Deleting Large Number Of Rows In SQL Server 2000

Jan 26, 2004

Hi,

I have some problems with our database, which is growing too large, and I was hoping someone might have some tips on what I can do!

I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.

At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or the traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005, so I really need a smarter solution than what I have today to be able to use the log efficiently.

98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I do however like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.

This presents problems on multiple levels. Mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log. It works, but it's not exactly elegant...

The server is an 1100 MHz P3 / 512MB / Windows 2000 Server / SQL Server 2000. Faster hardware would help, but this is more of a "bad design" problem than a "slow hardware" problem.

My log is pretty simple, as follows:

LogId        - int            - primary key - clustered index
ClientId     - int            - index asc
LogTypeId    - int            - index asc
LogValue     - nvarchar(2500) - no index
LogTimeStamp - datetime       - index asc


I have come up with 3 different solutions:

Method 1:
Simply run "DELETE FROM db_log WHERE logtypeid <> stuff_I_want_to_keep".

This is the simplest and the one I prefer, but it takes too long to complete. Any tips to speed this process up?
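On SQL Server 2000 a delete like this can be chunked with SET ROWCOUNT, so each pass runs in its own short transaction and lock escalation and log growth stay limited. A sketch, with hypothetical "keep" type ids:

SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    -- each pass deletes at most 10,000 rows in its own short transaction
    DELETE FROM db_log WHERE logtypeid NOT IN (1, 2, 3)  -- hypothetical ids to keep
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0

The index on LogTypeId won't help much here since most rows match; the win comes from the small transactions, which also let other work interleave between passes.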


Method 2:
Create a scheduled job which runs something like "DELETE FROM db_log WHERE logtypeid <> stuff_I_want_to_keep AND date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work a few days we might lose data we wanted to keep.


Method 3:

Copy what I want to keep into another table, and empty the log. Sort of like "INSERT INTO db_log_keep SELECT stuff_to_keep FROM db_log; DROP TABLE db_log; CREATE TABLE db_log;" (or TRUNCATE, but that takes a long time too).

But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.

However, it seems as if this method is what will work best for my setup, unless there are other suggestions?
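A sketch of Method 3, using the hypothetical keep-list from above. Despite the note above, TRUNCATE TABLE only deallocates pages rather than logging each row, so it is normally very fast and there is no need to drop and recreate the table:

BEGIN TRAN
    -- preserve the 1-2% worth keeping
    INSERT INTO db_log_keep SELECT * FROM db_log WHERE logtypeid IN (1, 2, 3)
    -- throw the rest away; deallocates pages instead of logging each row
    TRUNCATE TABLE db_log
COMMIT

-- optional: present both tables as one for querying
-- CREATE VIEW v_db_log AS
--     SELECT * FROM db_log UNION ALL SELECT * FROM db_log_keep

One caution: clients must not be logging during the swap, or rows written between the copy and the truncate would be lost, so run it in a quiet window.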

Method 4:

...eagerly awaiting ideas!!! :-)




(Also, any tips and/or links to info on maintaining VLDBs are greatly appreciated.)

Thanks in advance for your help! :-)

Nikolai

View 4 Replies View Related

Deleting Large Number Of Rows Times Out

Sep 14, 2007

I have a table with about 10 million rows. It's been logging data incorrectly and I want to start again.

DELETE FROM tblLog

I just get a message about the server timing out, what can I do?
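If the whole table really can be emptied, TRUNCATE TABLE deallocates pages instead of logging every deleted row, so it is dramatically faster than DELETE (note it fails if the table is referenced by a foreign key):

-- near-instant even on 10 million rows; also resets any identity seed
TRUNCATE TABLE tblLog

The timeout itself is usually the client's command timeout rather than anything server-side, so raising that timeout on the connection is the other half of the fix if a logged DELETE is genuinely required.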

Thanks

View 5 Replies View Related

SQL SERVER 2005 Database Mirroring For Large Number Of Databases

May 30, 2006

I am trying to enable database mirroring for 100 databases. It works error-free up to 59 databases (sometimes 60), with the status (principal, synchronized) on the principal. On the 60th or 61st database the status becomes (principal, disconnected). The mirror also starts acting abnormally: connections to the mirror begin to time out and it will not enable database mirroring on any more databases. I have SQL Server 2005 Enterprise with SP1 on the servers. A witness is not included yet.

These are my test servers; I have more than 500 databases on my production servers.

Principal and mirror both use port 5022 for endpoint communication.
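Each mirrored database consumes worker threads and memory on both partners, and instances hit a practical ceiling well below 100 databases; Microsoft's guidance for SQL Server 2005 suggested on the order of 10 mirrored databases per instance on 32-bit systems. A quick health check on either partner, as a sketch using the standard catalog view:

SELECT DB_NAME(database_id) AS [database],
       mirroring_role_desc,
       mirroring_state_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;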

View 1 Replies View Related

SQL SERVER 2005 DATABASE MIRRORING For Large Number Of Databases

Jun 1, 2006

I am trying to enable database mirroring for 100 databases. It works error-free up to 59 databases (sometimes 60), with the status (principal, synchronized) on the principal. On the 60th or 61st database the status becomes (principal, disconnected). The mirror also starts acting abnormally: connections to the mirror begin to time out and it will not enable database mirroring on any more databases. I have SQL Server 2005 Enterprise with SP1 on the servers. A witness is not included yet.

These are my test servers; I have more than 500 databases on my production servers.

Principal and mirror both use port 5022 for endpoint communication.

All of the databases are critical and all must be included in database mirroring, so after that I tried to implement database mirroring again. The system has 3 GB of RAM, and SQL Server (mirror) is using 85 MB of RAM, but it still gives this error while trying to enable database mirroring for the 37th database:

"There is insufficient system Memory to run this query"

WHY?

View 19 Replies View Related

SQL Server Admin 2014 :: Database Mirroring For Large Number Of Databases

Oct 27, 2015

I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that has 4 cores and 3 instances. What I understand is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirrored databases would consume 240 threads. What needs to be checked to assess the feasibility of going ahead with an async mirror setup as described above?

View 0 Replies View Related

SQL Server 2008 :: Deleting Data (rows) From Table To Reclaim Space?

Feb 11, 2015

I have a table of 300+ GB. It holds 10 years of data. I need to delete 5 years of data and move it to another server so I can have more space.

If I delete 5 years of data, the transaction log gets huge and the database takes even more disk space because the .ldf file grows. I think I can then shrink the log file and the data file. Is this the best way to do it?
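Deleting in small batches with log backups in between keeps the log from growing, because each backup frees the log space the previous batches used. A minimal sketch, with hypothetical table, column and backup-path names:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.History
    WHERE RecordDate < '20100101';          -- the 5-year cutoff
    SET @rows = @@ROWCOUNT;

    -- under the FULL recovery model, back up the log periodically
    -- (e.g. every N batches) so its space can be reused:
    -- BACKUP LOG YourDb TO DISK = N'X:\backup\YourDb.trn';
END

Copy the 5 years out first (BCP, SSIS, or INSERT...SELECT across a linked server), verify it on the other server, then delete. Shrinking the data file afterwards works but heavily fragments indexes, so plan an index rebuild if you shrink.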

View 8 Replies View Related

Mirroring :: Async Mirror Setup For High Number Of Databases

Oct 27, 2015

I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that has 4 cores and 3 instances.

What I understand is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirrored databases would consume 240 threads.

What needs to be checked to assess the feasibility of going ahead with an async mirror setup as described above?
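Rather than assuming 512, the actual thread ceiling can be read directly on each mirror instance; a sketch:

-- max worker threads as derived by this instance (when the setting is 0 = auto,
-- this shows the value SQL Server actually computed from the CPU count)
SELECT max_workers_count
FROM sys.dm_os_sys_info;

Beyond threads, the send queue, the redo queue and network throughput under peak log generation are what decide whether an async mirror keeps up.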

View 4 Replies View Related

SQL 2012 :: Deleting Large Batches Of Rows - Optimum Batch Size?

Oct 16, 2015

In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.

In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).

I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.

Are there good rule-of-thumb numbers (or even better, some actual statistics and/or explanations) for how many records should be deleted in a single transaction/statement for optimum speed? 50,000? 100,000? 1,000,000? Unlimited? Are there significant differences between 2008, 2012 and 2014?
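The sweet spot is workload-specific (log throughput, index count, triggers, replication/mirroring, and lock escalation at roughly 5,000 locks all play in), so the practical answer is to measure. A small timing harness, with a hypothetical table and predicate, that makes trying different batch sizes easy:

DECLARE @batch int = 50000, @rows int = 1, @t0 datetime2;
WHILE @rows > 0
BEGIN
    SET @t0 = SYSDATETIME();
    DELETE TOP (@batch) FROM dbo.WorkTable
    WHERE EventDate < '20140101';
    SET @rows = @@ROWCOUNT;
    PRINT CONCAT(@rows, ' rows deleted in ',
                 DATEDIFF(millisecond, @t0, SYSDATETIME()), ' ms');
END

Run it at a few batch sizes (5k, 50k, 500k) and watch the ms-per-row; the knee of that curve is the "magic point" described above.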

View 9 Replies View Related

SQL Server 2008 :: Select Alternate Number Of Rows?

Jan 28, 2015

What is a simple query to select alternate rows from a table?
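Tables have no inherent order, so "alternate" needs a defined ordering first; ROW_NUMBER() over some key provides one. A sketch with hypothetical names:

;WITH numbered AS
(
    SELECT *,
           ROW_NUMBER() OVER (ORDER BY SomeKey) AS rn
    FROM dbo.SomeTable
)
SELECT *
FROM numbered
WHERE rn % 2 = 1;   -- odd-numbered rows; use rn % 2 = 0 for the other half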

View 9 Replies View Related

SQL Server 2008 :: Update The Document Number Row For 3k Rows?

Aug 4, 2015

I have a table where I would like to update the document number for 3k rows. The problem I have is that the documents come in sets of two (version 1 and version 2), but both have different numbers. Picture it like this:

DOCNUM: 4445787 Version 1
DOCNUM: 4445790 Version 2

It should be the same docnum (i.e. 4445787 Version 1, 4445787 Version 2).

The challenge is how to assign the docnum from version 1 to version 2 as well. Basically, in SQL we need a way to:

1. Find a way to distinguish the pair of documents in the target db that are the same even though they have different docnums.

2. Update them so that the docnums match.
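Once step 1 is solved, step 2 is a joined UPDATE. A sketch that assumes a hypothetical PairKey column, standing in for whatever business column(s) identify the two versions as the same document:

;WITH v1 AS
(
    SELECT PairKey, DocNum
    FROM dbo.Documents
    WHERE Version = 1
)
UPDATE d
SET d.DocNum = v1.DocNum          -- give version 2 the version-1 number
FROM dbo.Documents AS d
INNER JOIN v1
    ON v1.PairKey = d.PairKey
WHERE d.Version = 2;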

View 7 Replies View Related

Very Large Number Of Rows

Jul 23, 2005

We are busy designing a generic analytical system at work that will hold multiple analytic types over time. This system is being developed in SQL 2000.

Example of table:

IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)

ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type
The [PK] tag indicates the composite primary key.

Our scenario is the following:

* For this time series data, we expect around 250 days per year (working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items

Looking at the combined calculation, this comes to roughly something like 25 * 20,000 * 50 * 250, or around 5 billion rows. We will be inserting around 50 * 20,000, or around 1 million, rows each day. The inserts will take place in the middle of the night (outside the main query time) and could be done through something like BCP or BULK INSERT.

Our real problem is that we have not previously worked with such large tables before and are nervous that our system is going to grind to a halt. Our biggest tables are around 20 million rows at the moment.

Scanning through Google and Microsoft's own site we have found a partitioning method that is available: http://www.microsoft.com/resources/...part5/c1861.mspx

Having experimented with the above system it seems rather quirky, and looking at the available literature it seems that this is no more effective than a clustered index as far as queries go.

It needs to be optimized for queries like: given the ItemID and the AnalyticType, search for a specific date or a specific range of dates.

If anyone has any experience or helpful suggestions I would really appreciate it.

Thanks
A
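For the stated access pattern (equality on ItemId and AnalyticType, then a date range), a composite clustered key in exactly that order makes every such query a single contiguous range scan, and nightly loads arriving in date order append cheaply. A sketch:

CREATE TABLE dbo.Analytic
(
    ItemId       int            NOT NULL,
    AnalyticType int            NOT NULL,
    AnalyticDate datetime       NOT NULL,
    Value        numeric(28,15) NOT NULL,
    CONSTRAINT PK_Analytic
        PRIMARY KEY CLUSTERED (ItemId, AnalyticType, AnalyticDate)
);

At 5 billion rows, also consider splitting the data by year into separate tables behind a partitioned view, which is the supported horizontal-partitioning approach on SQL 2000.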

View 4 Replies View Related

Large Number Of Rows.

Apr 3, 2007

Hi all,



A select query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?



Does the number of records returned affect performance in spite of the indexing?



Thanks,



DBLearner

View 3 Replies View Related

SQL Server 2008 :: Way To Check To Find Number Of Rows And Size Of A Table

Apr 29, 2015

How can we monitor all tables in all databases and send notifications to the team? Is there a way to find the number of rows and the size of a table as of last month, and work out the growth % now?
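There is no built-in history, so the usual approach is to snapshot row counts and sizes into a history table on a schedule (SQL Agent) and compare month over month. A rough per-table sketch for one database, using the standard catalog views:

SELECT t.name AS table_name,
       SUM(CASE WHEN p.index_id IN (0, 1) THEN p.rows ELSE 0 END) AS row_count,
       SUM(a.total_pages) * 8 / 1024 AS reserved_mb
FROM sys.tables AS t
JOIN sys.partitions AS p ON p.object_id = t.object_id
JOIN sys.allocation_units AS a ON a.container_id = p.partition_id
GROUP BY t.name
ORDER BY reserved_mb DESC;

Insert the result into a dated history table each night; the growth % is then a self-join between two snapshot dates.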

View 4 Replies View Related

Inserting Large Number Of Rows

Nov 24, 1999

Hi,

I need to insert a very large number of rows into a table (in SQL Server 7.0) using ADO.
Could you please tell me if there is a way to do a FAST insert, something similar to BCP, or any other way of inserting a large number of rows efficiently.
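From within T-SQL, BULK INSERT (available since 7.0) gives BCP-like speed if the rows can be staged to a flat file first; the path and terminators below are hypothetical:

BULK INSERT dbo.TargetTable
FROM 'C:\data\rows.txt'
WITH
(
    FIELDTERMINATOR = ',',   -- column delimiter in the file
    ROWTERMINATOR   = '\n',  -- one row per line
    TABLOCK                  -- table lock enables the faster, minimally logged path
);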

Thanks

View 2 Replies View Related

SQL Server 2008 :: How To Pivot Unknown Number Of Rows To Columns Using Data As Column Headers

Sep 10, 2015

I have a single table that consists of 4 columns: Entity, ParamName, ParamsValue and ParamiValue. This table stores normalized late-fee parameters for apartments. The Entity field contains a code that identifies the apartment complex. ParamName is a textual field that contains the name of the parameter whose value the other 2 fields define. If the late-fee parameter (as named in ParamName) is numerical, the value for that parameter is in ParamiValue; otherwise it is in ParamsValue.

I don't know if 'pivot' is the correct term for what I am trying to do, because I've looked at the PIVOT examples and I don't see how that will work for this. Using the table and data provided below, how would I construct a query so that I get 1 row per Entity, in which the columns are the ParamsValue or ParamiValue for the ParamName listed in the column header (of the query)?

Below is the DDL to create the table and populate it.

USE [DBA_UTIL]
CREATE TABLE [dbo].[PARAMEXAMPLE](
[Entity] [varchar](16) NULL,

[Code]....
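Conditional aggregation handles this without the PIVOT operator, and it copes with the two value columns naturally. A sketch; since the DDL above is truncated, the ParamName literals are hypothetical examples:

SELECT Entity,
       MAX(CASE WHEN ParamName = 'LateFeeGraceDays' THEN ParamiValue END) AS LateFeeGraceDays,
       MAX(CASE WHEN ParamName = 'LateFeeMethod'    THEN ParamsValue END) AS LateFeeMethod
FROM dbo.PARAMEXAMPLE
GROUP BY Entity;

One MAX(CASE...) per parameter name: numeric parameters read ParamiValue, textual ones ParamsValue. If the set of ParamNames isn't known up front, the same pattern can be generated as dynamic SQL.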

View 4 Replies View Related

How To Manage IDENTITY In Tables With Large Number Of Rows?

May 23, 2002

Table structure: col1 IDENTITY (seed=1, increment=1) + a few other columns (col2...col7) + one text column (col8).
I have around 50,000,000 rows per day inserted into table T1. At the end of the day 40,000,000 rows are deleted. I have to keep the records for 12 months and then archive them. The database serves the web 24/7 and no downtime is allowed. The IDENTITY column will go out of range (overflow) after less than two years, unless the identity seed is reset to the start value (seed=1, increment=1).
At the end of the 12th month, data is archived to another table and only the last month is kept in T1. So T1 enters the new year with data from the last month of the previous year. A few other tables refer to this table using their own columns holding values from T1's IDENTITY column (referential integrity is not enforced). The identity column in T1 is needed as a unique id for some search actions. Performance is an issue, therefore the bigint data type is used for this identity column rather than decimal.

Another problem I have is how to update one column of a table (1 million rows to be updated out of 2 million) with minimum impact on the users who are querying this table heavily. No need to mention that it is a 24/7 web app with no downtime.
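On SQL 2000, the 1-million-row update can be chunked with SET ROWCOUNT so each transaction is short and readers are only briefly blocked. A sketch with hypothetical names, where the WHERE clause both selects the work and marks progress:

DECLARE @newValue varchar(50)
SET @newValue = 'new'

SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    -- each pass updates at most 5,000 rows in its own short transaction
    UPDATE dbo.T1
    SET col2 = @newValue
    WHERE col2 <> @newValue   -- hypothetical selection; also marks progress
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0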

Thank you in advance.


Goran

View 1 Replies View Related

Add Column To Existing Table With Large Number Of Rows

Dec 24, 2007

Hey Guys

I need to add a datetime column to an existing table that has about 1.2 million records and is accessed frequently, but I can't afford to stop the db at all.

Whenever I do: alter table mytable add Updated_date datetime

it just takes too long and I have to stop executing the query after a couple of minutes.
I am running SQL Express 2005 SP2. The db size is over 3 GB but still under the 4 GB limit.

Can you please advise on how to add this column? It's urgent!
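Adding a nullable column with no default, as in the statement above, is a metadata-only change and should complete almost instantly; when it hangs, it is almost always waiting on a schema-modification lock behind in-flight queries, not rewriting rows. A sketch of a safer attempt:

-- give up quickly instead of queueing behind long readers indefinitely
SET LOCK_TIMEOUT 5000;

ALTER TABLE dbo.mytable
    ADD Updated_date datetime NULL;   -- nullable, no default: metadata-only

If it times out, retry in a quieter moment; the operation itself needs only an instant once it acquires the lock.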

thanks in advance

View 5 Replies View Related

T-SQL (SS2K8) :: Deleting Only 1 Row At A Time Instead Of Using Condition And Deleting Many Rows?

Jul 18, 2014

/****** Object: StoredProcedure [dbo].[dbo.ServiceLog] Script Date: 07/18/2014 14:30:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER proc [dbo].[ServiceLogPurge]

-- Purge records dbo.ServiceLog older than 3 months:
-- Purge records in small portions to avoid locking production tables
-- for a long time. The process takes longer, but can co-exist with
-- normal usage of the tables.

[Code] ...

*** Getting this error below when executing the code ***

Msg 102, Level 15, State 1, Procedure ServiceLogPurge, Line 45
Incorrect syntax near 'Failed:'.
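An error like "Incorrect syntax near 'Failed:'" often means a message string near the reported line is missing its quotes, e.g. in a RAISERROR or PRINT. Since the posted body is truncated at [Code], here is a minimal sketch of a purge loop matching the stated intent (3-month retention, small portions); the date column name is hypothetical:

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- each pass deletes at most 5,000 rows, so locks are short-lived
    DELETE TOP (5000) FROM dbo.ServiceLog
    WHERE LogDate < DATEADD(month, -3, GETDATE());
    SET @rows = @@ROWCOUNT;
END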

View 9 Replies View Related

Mirroring :: Any Limit For Configuring Number Of Secondary DB Server

May 7, 2015

In log shipping, is there any limit on the number of secondary DB servers that can be configured?

View 3 Replies View Related

SQL Server 2008 :: Database Mirroring Timeout Setting?

Oct 20, 2011

I have set up database mirroring with a witness server. To prevent unnecessary failovers caused by a slow network or other issues, I changed the timeout setting:

ALTER DATABASE <Database Name>
SET PARTNER TIMEOUT 120

which I understand to mean that if the connection between principal and mirror is broken, the principal database will wait 120 seconds and only after that will automatic failover happen.

If this is true, it does not happen in my case. Failover happens before 120 seconds.
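It is worth confirming the value actually took effect, since it is set per mirroring session. A quick check via the catalog view:

SELECT DB_NAME(database_id) AS [database],
       mirroring_connection_timeout   -- seconds; NULL if the db is not mirrored
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;

Also note the timeout governs when an unresponsive partner is declared unreachable; a hard failure that closes the TCP connection outright can be detected, and failover initiated, well before the timeout expires.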

View 5 Replies View Related

SQL Server 2008 :: Encryption / Decryption On Mirroring Database

May 4, 2015

I have set up mirroring. One of the columns is encrypted, and on the mirror database I can see the decrypted result when I run the query while logged into the server itself (through a remote desktop connection), but when I run the same query from SSMS on my laptop, the result comes back as if the column were not decrypted.

View 0 Replies View Related

Setup And Upgrade :: 2008 R2 Setup UI Has Incomplete List Of Installed Features - Can't Uninstall RS

Jul 23, 2015

We're doing upgrades from SQL 2008 R2 to SQL 2014. The upgrade is blocked because RS is installed but not configured. Our desired action is to uninstall RS and proceed with the upgrade. But when setuparp.exe is launched, it does not list all the features on the 'Select Features' page. In fact, it only lists the last 2 shared features (SQL Client Connectivity SDK and Microsoft Sync Framework). However, all items appear to be listed on the 'Select Instance' page, including RS. I've seen this issue on 2 of our SQL 2008 R2 servers already.

View 3 Replies View Related

Help With Deleting And Foreign Keys.

Feb 3, 2008

I'm working with a SQL 2000 DB which has a table structure, implemented with foreign key constraints, where the references form a loop:

TABLE1 --> TABLE2 --> TABLE3 --> TABLE1

I am trying to create a clean copy of this database, basically by deleting all the data in most of the tables, and these 3 tables are included. Of course, when I run a DELETE on any of these tables, it complains because of the FK. How can I script a delete on the 3 tables without manually deleting row after row until there aren't any more relations?
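One way through a constraint cycle is to disable the foreign keys, empty the tables, then re-enable them with validation; this works on SQL 2000. A sketch, using the three table names as given:

-- stop FK checking on all three tables
ALTER TABLE TABLE1 NOCHECK CONSTRAINT ALL
ALTER TABLE TABLE2 NOCHECK CONSTRAINT ALL
ALTER TABLE TABLE3 NOCHECK CONSTRAINT ALL

DELETE FROM TABLE1
DELETE FROM TABLE2
DELETE FROM TABLE3

-- re-enable and revalidate, so the constraints stay trusted
ALTER TABLE TABLE1 WITH CHECK CHECK CONSTRAINT ALL
ALTER TABLE TABLE2 WITH CHECK CHECK CONSTRAINT ALL
ALTER TABLE TABLE3 WITH CHECK CHECK CONSTRAINT ALL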
Cheers for any help :)

View 11 Replies View Related

Deleting Rows, Halting Server

Dec 11, 2007

Hi,

I've deleted a bunch of rows from my table, so far about 500k from a 9 million row table. This table is hit frequently.

I think maybe my indexes are a mess? It's so slow every page is timing out. I'm now urgently trying to figure out how to resolve this.

I've done UPDATE STATISTICS on the table. Is it now best that I run some command on the indexes directly?

Any help very much appreciated.. !!!
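Defragmenting the indexes directly is the usual next step; which command applies depends on the version. A sketch with a hypothetical table name (and note that big deletes alone rarely wreck indexes, so blocking from the delete batches is worth ruling out too):

-- SQL Server 2005 and later:
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;   -- online, interruptible
-- or, in a maintenance window:
-- ALTER INDEX ALL ON dbo.BigTable REBUILD;

-- SQL Server 2000 equivalent:
-- DBCC DBREINDEX ('dbo.BigTable')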

mike123

View 10 Replies View Related

SQL Server 2008 :: Add ID And Primary Key To Large Table(s)

Apr 24, 2015

Background:

* SQL Server 2008 R2
* Database was created from a third party product. The product writes to the 3 tables that I need to make changes to 24/7 and downtime is not an option. All changes must be done live.
* Database overall size is ~200 GB
* The 3 tables I must update make up ~190 GB of that space.
* Tables have no primary key or ID columns. Therefore, the data is highly fragmented.
* Of the ~190 GB of space allocated for the tables, there is roughly 70 GB of actual data.
* Rows of the table are not guaranteed to be unique. In fact, on one of the tables, tests were ran with a small sample of data and duplicates were very much evident.

What I'm trying to accomplish here is to get an ID column added to the 3 tables and set that ID field as the primary key. Doing so will force the data to become much less fragmented than it is currently and with purging and new inserts, eventually fragmentation will be nearly non-existent.

Problem:
Making table changes on tables this large while data is constantly being added poses many risks and can cause data loss. This was tried on a smaller table than these three and the entire table was lost in the process. Restore from backup was needed to get back to most recent log backup point.

Original Solution:
My original plan was to create a backup of each table and run the script below to migrate the majority of the data temporarily into the new table. I could then update the original table (which now would contain much less data) and then migrate the data back.

CREATE TABLE #temp
(
MsgDate varchar(10)
,MsgTime varchar(8)
,MsgPriority varchar(30)
,MsgHostname varchar(255)

[Code] ....

Original Solution Problem:
The problem with the solution above is that it calls the DELETE function on the original table using the values from the temporary table. When there are duplicate rows, which have not all been inserted into the backup table yet, they will all be removed from the original table because there is nothing unique to separate them out. In my testing, I had 10,000 rows in the original table and ended up with 9,959 rows in the backup table.

Question 1: Is my approach to making these table changes reasonable?
Question 2a: If so, how can I make sure I don't lose data as part of this temporary migration of the data to my backup tables?
Question 2b: If not, what would be a better approach that isn't going to cause disruption to the application that INSERTs data 24/7 and won't have any risk of data loss?
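On the identity question, a sketch of the direct route, with the caveat that on 2008 R2 adding an IDENTITY column is a size-of-data operation (every row is written), so on a 70 GB table it will run long and hold a schema lock; the ONLINE option on the index step requires Enterprise Edition, and the table name is hypothetical:

-- writes an identity value into every existing row: long, logged, blocking
ALTER TABLE dbo.SysLog
    ADD LogId bigint IDENTITY(1,1) NOT NULL;

-- cluster on it; ONLINE = ON keeps the table readable/writable meanwhile
ALTER TABLE dbo.SysLog
    ADD CONSTRAINT PK_SysLog PRIMARY KEY CLUSTERED (LogId)
    WITH (ONLINE = ON);

Given the 24/7 INSERT load, a lower-risk pattern is to create the new table with the identity already in place, backfill in batches keyed on something monotonic (MsgDate/MsgTime), then swap names with sp_rename in a brief window; nothing is deleted from the original until the swap, which sidesteps the duplicate-row DELETE problem entirely.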

View 9 Replies View Related

SQL Server 2008 :: Large Tables In OLTP

Jul 14, 2015

How many records does a table need before it is considered a large table?

We are getting more deadlocks. We are using the default isolation level. Read and insert statements are blocking each other and causing deadlocks.

I am thinking that purging might reduce the deadlocks.

The table has 15 million records. Is this considered a large table in OLTP systems?

In general, how many records make a table a large table?
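On the reader/writer deadlocks specifically: an alternative to purging is row versioning, which lets reads under read committed see a committed version of the row instead of blocking behind inserts and updates. This is a change of concurrency model, not the purge plan stated above, and it adds tempdb version-store overhead, so test first. A sketch:

-- needs a moment of exclusive access; ROLLBACK IMMEDIATE kicks other sessions out
ALTER DATABASE YourDb
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;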

View 1 Replies View Related

SQL Server 2012 :: Delete Large Number Of Records?

Sep 8, 2015

I have the following scenario:

SQL database on SQL 2012

Large production table, 15 million records

The table has 3 years of data

New monthly data is being added every month.

New monthly data is loaded, checked and finally approved after 6 or 7 iterations. Because of this iteration, the monthly data set is added, then deleted, then added, then deleted a few times. Because the table is big, this process takes time. Any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table and is being accessed by other users doing other analysis.

Delete is done based on trx_date which is a year/month combo, like 201508.

The table has monthly sales by customer aggregated.

The table structure is:

CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,

[code].....
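Since the delete key is a single yyyymm value, two options fit. Short term, batch the delete so each iteration's churn doesn't block readers for long; a sketch using the posted column names:

-- delete one month in small chunks
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.Sales
    WHERE Trx_Date = 201508;
    SET @rows = @@ROWCOUNT;
END

An index leading on Trx_Date (or Trx_Date first in the clustered key) matters here; without it, every batch scans the table. Longer term, partitioning the table on Trx_Date turns the monthly cycle into a SWITCH out (effectively instant) followed by a reload, which removes the repeated mass deletes entirely.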

View 9 Replies View Related

SQL Server 2005 Slows Down After A Large Number Of Queries

Jul 4, 2007

Hi,

We are running SQL Server 2005 Ent Edition with SP2 on a Windows 2003 Ent. Server SP2 with an Intel E6600 dual core CPU and 4GB of RAM. We have a C# application which performs a large number of calculations in a loop. The application first loads the transactions that need to be updated, then goes through each row, queries another table to get some values, and updates the transaction.

I have set a limit of 2GB of RAM for SQL Server, and when I run the application it performs 5 record updates (the process described above) per second. After roughly 10,000 records, the application slows down to about 1 record per second. I have examined the Activity Monitor, however I can't find anything that might indicate what's causing this.

I have read that there are some known issues with Hyper-Threaded CPUs; however, since my CPU is dual-core, I do not know if the issue applies to those CPUs too, and I have no way to disable one core in the BIOS.

The only thing that I have noticed is that if I change the Max Degree of Parallelism when the server slows down (i.e. from 0 to 1 and then back to 0), the server speeds up for another 10,000 record updates and then slows down again. Does anyone have an idea of what's causing it? What does the property change do that makes the server speed up again?

If there is no solution for this problem, does anyone know if there is a stored procedure or anything else that can be used programmatically to speed up the server when it slows down? (This is not the optimal solution; however, I will use it as a workaround.)
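A plausible explanation for the MAXDOP observation: changing a server configuration option such as max degree of parallelism flushes the plan cache, so the speed-up may really come from recompiling a plan that had gone stale as the data changed. If that theory holds, the same effect is available directly; a sketch, with a hypothetical table name:

-- refresh the statistics the looped UPDATE depends on
UPDATE STATISTICS dbo.Transactions;

-- or flush cached plans so the next execution recompiles
DBCC FREEPROCCACHE;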

Any advice will be greatly appreciated.

Thanks,
Joe

View 3 Replies View Related

SQL Server 2008 :: Update Statistics With Large Databases

Feb 5, 2015

Currently our database size is around 350 GB. It will grow to 1.5 TB.

We have the

Auto create statistics option :True,
auto update statistics option :True,
auto update statistics asynchronously option : False

at database level

We have a weekly job to update statistics which runs a very long time. It was created through a maintenance plan using the full scan option.

Previously they tested with sampling, but running with sampling instead of full scan affected the queries.

Is there a way to avoid the long job duration?

If we didn't run the statistics job manually, what would happen? How do you maintain statistics on large databases?
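A common middle ground is to stop rescanning everything weekly and instead sample most statistics, reserving full scan for the few where sampling demonstrably hurt query plans. A sketch of the per-object forms, with hypothetical table and index names:

-- explicit sampling on one table (much faster than FULLSCAN)
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 10 PERCENT;

-- full scan reserved for the statistics known to need it
UPDATE STATISTICS dbo.BigTable (IX_BigTable_Date) WITH FULLSCAN;

If the manual job is skipped entirely, auto update statistics still fires, but only after roughly 20% of rows change, which on very large tables can be far too late; that threshold is why scheduled maintenance exists.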

View 9 Replies View Related

SQL Server 2008 :: Migrate Contents Of Large Table From One DB To Another?

Mar 3, 2015

I have a large table containing about 800 million rows with an average row length of about 1K. The columns in the table are char columns. I need to move the contents of this table into a similar table where the target columns are varchar. The original table column definitions are compatible with the target table but the reverse is not necessarily true. For example, one column is being changed from int to bigint. The table is partitioned.

So, what is the fastest way to migrate the data? I was thinking of unloading each partition into a flat file and loading the target table with multiple parallel load streams. Is this a good way?
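Flat files work, but since both tables are on SQL Server, a partition-at-a-time INSERT...SELECT avoids the round trip through disk; with TABLOCK and the bulk-logged or simple recovery model the load can be minimally logged. A sketch, where the partition function and column names are hypothetical:

-- copy partition 1; char -> varchar and int -> bigint convert implicitly
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT *
FROM dbo.SourceTable
WHERE $PARTITION.pf_Source(PartitionKey) = 1;

Run several partitions in parallel sessions for the multiple-stream effect. (Minimal logging for INSERT...SELECT WITH (TABLOCK) needs 2008+ and a heap or empty target; otherwise BCP out/in per partition remains a solid plan.)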

View 0 Replies View Related

SQL Server 2008 :: When Renaming A Table / It Doesn't Rename Foreign Key Constraints

Sep 21, 2015

I'm working on creating a new version of an existing table. I want to keep the old table around, only with a different name. In looking this up I found there's a system procedure named sp_rename, which looks like it will work fine. However, in thinking about this I realized that the foreign key constraints defined on the table (there are 3 of them) are likely not renamed, correct?

If I'm correct, then I suppose I could rename the table, then drop the 3 foreign key constraints and create them anew using different names.
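Correct: renaming a table leaves its constraints (and indexes) with their old names. There is no need to drop and recreate them, though, because sp_rename also renames constraints; they are schema-scoped objects. A sketch with hypothetical names:

EXEC sp_rename 'dbo.Orders', 'Orders_v1';

-- constraints keep their old names; rename each one explicitly
EXEC sp_rename 'dbo.FK_Orders_Customers', 'FK_Orders_v1_Customers', 'OBJECT';

Dropping and recreating also works, but recreating a FK on a big table revalidates every existing row, while sp_rename is metadata-only.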

View 5 Replies View Related

SQL Server 2008 :: Syntax Error When Trying To Change Mirroring For Dedicated Network Through Specific IP

Jun 10, 2015

We have a database in SQL Server 2008 R2 with mirroring, and we want the mirroring traffic to go over a dedicated network. We stopped the endpoint, and when we try to run the following command, a syntax error occurs:

Msg 102, Level 15, State 1, Line 1
Incorrect syntax near '192.168.1.14'.

What is the correct syntax of the command line below?

ALTER ENDPOINT Endpoint_Mirroring AS TCP (LISTENER_IP = '192.168.1.14')
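The likely culprit is the quotes: for an IPv4 address, LISTENER_IP takes the four-part address in parentheses, not as a string literal. A sketch of the corrected statement; the port shown is the common mirroring default and is an assumption here, so restate your endpoint's actual port:

ALTER ENDPOINT Endpoint_Mirroring
    AS TCP (LISTENER_PORT = 5022, LISTENER_IP = (192.168.1.14));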

View 2 Replies View Related






