SQL Server 2012 :: How To Batch Delete Large Table

Jun 16, 2015

I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure what days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer mass of deletions I must do and the log growth.

So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use it to load a second temporary table with 100,000 rows, join that to the table I want to delete from, and loop 10 times to cover the million. The Logging.EvenLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this in a scheduled job with enough time between executions for log backups to run.

DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime
CREATE TABLE #BatchProcess
(
EventDate datetime,
ApplicationID int,

[Code] .....
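A minimal sketch of the batched approach, assuming the table is Logging.EventLog (spelled EvenLog in the post) with the EventDate and WeeksToRetain columns described above; the batch size and delay are illustrative:

DECLARE @BatchSize int = 100000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM Logging.EventLog
    WHERE DATEADD(WEEK, WeeksToRetain, EventDate) < GETDATE();

    IF @@ROWCOUNT = 0 BREAK;

    -- Pause so scheduled log backups can reclaim space between batches.
    WAITFOR DELAY '00:00:05';
END

Note that wrapping EventDate in DATEADD prevents a seek on the clustered index; looping over the distinct WeeksToRetain values with a predicate like WHERE WeeksToRetain = @w AND EventDate < DATEADD(WEEK, -@w, GETDATE()) keeps each batch sargable on EventDate.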

View 9 Replies



SQL Server 2012 :: Delete Large Number Of Records?

Sep 8, 2015

I have the following scenario:

SQL database on SQL 2012

Large Production table 15 Million record

The table has 3 years of data

New monthly data is being added every month.

New monthly data is loaded, checked, and finally approved after 6 or 7 iterations. Because of these iterations, the monthly data set is added, deleted, then added and deleted again a few times. Because the table is big, this process takes time; any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much because it is a production table and is being accessed by other users doing other analysis.

Delete is done based on trx_date which is a year/month combo, like 201508.

The table has monthly sales by customer aggregated.

The table structure is:

CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,

[code].....
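A hedged sketch of one way to shorten the delete phase: remove a month in small batches keyed on Trx_Date, so each transaction is short and other users are blocked for less time. An index on Trx_Date (the table definition above is truncated, so this is an assumption) is what makes each batch cheap:

DECLARE @TrxDate int = 201508;  -- year/month combo, as in the post

WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM dbo.Sales
    WHERE Trx_Date = @TrxDate;

    IF @@ROWCOUNT = 0 BREAK;
END

Longer term, partitioning the table by Trx_Date would let the monthly reload be done with a partition switch, which is nearly instant regardless of row count.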

View 9 Replies View Related

SQL 2012 :: Deleting Large Batches Of Rows - Optimum Batch Size?

Oct 16, 2015

In another forum post, a poster was deleting large numbers of rows from a table in batches of 50,000.

In the bad old days ('80s - '90s), I used to have to delete rows in batches of 500, then 1000, then 5000, due to the size of the transaction rollback segments (yes - Oracle).

I always found that increasing the number of deleted rows in a single statement/transaction improved overall process speed - up to some magic point, at which some overhead in the system began slowing the deletes down, so that deleting a single batch of 10,000 rows took more than twice as much time as deleting two batches of 5,000 rows each.

Are there good rule-of-thumb numbers (or even better, some actual statistics and/or explanations) as to how many records should be deleted in a single transaction/statement for optimum speed? 50,000 - 100,000 - 1,000,000 or unlimited? Are there significant differences between 2008, 2012, and 2014?
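There is probably no universal answer, since the sweet spot depends on log throughput, indexes, and concurrency, so one reasonable approach is simply to time one batch at each candidate size on the target hardware (against a disposable copy of the table). A rough harness, with dbo.BigTable and its EventDate predicate as purely hypothetical stand-ins:

DECLARE @BatchSize int = 50000;
DECLARE @t0 datetime2 = SYSUTCDATETIME();

DELETE TOP (@BatchSize)
FROM dbo.BigTable
WHERE EventDate < '20150101';

SELECT @BatchSize AS BatchSize,
       DATEDIFF(MILLISECOND, @t0, SYSUTCDATETIME()) AS ElapsedMs;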

View 9 Replies View Related

SQL Server 2012 :: Purge Process On A Large Table

Jan 9, 2014

I am attempting to do a rather simple purge task on a very large table. This task will need to take place daily and delete records older than 6 months out of the database. On first pass this will delete well over 130 million rows. I thought the best way to handle this is create a proc and call the proc from a SQL Agent Job that runs nightly. Here is an example of the script:

CREATE PROCEDURE usp_Purge_WCFLogger
AS
SET NOCOUNT ON
-- Note: GO is a batch separator, not a T-SQL statement, so it cannot
-- appear inside a procedure body; a GO here would have ended the
-- CREATE PROCEDURE definition early.
EXEC sp_rename 'dbo.logs', 'logs_work'
SELECT * INTO dbo.Logs_Backup FROM dbo.Logs_Work WHERE [TIMESTAMP] < DATEADD(month, -6, GETDATE())

[Code] .....
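A hedged alternative sketch that avoids the rename/swap entirely: delete in batches inside the proc, so the nightly SQL Agent job can run it directly. It assumes the log table is dbo.Logs with a datetime column named TIMESTAMP, as in the script above:

CREATE PROCEDURE usp_Purge_WCFLogger_Batched
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Cutoff datetime = DATEADD(MONTH, -6, GETDATE());

    WHILE 1 = 1
    BEGIN
        DELETE TOP (100000)
        FROM dbo.Logs
        WHERE [TIMESTAMP] < @Cutoff;

        IF @@ROWCOUNT = 0 BREAK;
    END
END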

View 3 Replies View Related

SQL Server 2012 :: How To Quickly Update / Insert 3M Records In Large Table

Mar 28, 2015

Our system runs a SQL Server 2012 DB. It has a table (table_a) with over 10M records. Our system has to receive a data file from the previous system daily, which contains approximately 3M updated or new records for table_a. My job is to update table_a with the new data.

The initial solution is:

1 Create a table (table_b) whose structure is the same as table_a

2 Use BCP to import updated records into table_b

3 Remove outdated data from table_a:
DELETE a FROM table_a AS a INNER JOIN table_b AS b ON a.key_fields = b.key_fields

4 Append updated or new data into table_a:
insert into table_a select * from table_b

In testing, this solution proved very inefficient; step 3 alone takes several hours. How can I improve it?
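One hedged option is to collapse steps 3 and 4 into a single MERGE driven by the staged load, with an index on table_b(key_fields) to make the join cheap. Column names other than key_fields are illustrative placeholders:

MERGE dbo.table_a AS tgt
USING dbo.table_b AS src
   ON tgt.key_fields = src.key_fields
WHEN MATCHED THEN
    UPDATE SET tgt.col1 = src.col1   -- list the real non-key columns here
WHEN NOT MATCHED BY TARGET THEN
    INSERT (key_fields, col1)
    VALUES (src.key_fields, src.col1);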

View 9 Replies View Related

SQL Server 2012 :: Delete Unmatching Records In Child Table

Feb 24, 2015

I have 2 tables, ResumeSkill (child table) and Skill (parent table). There were duplicates in the parent table, so after removing the foreign key constraint on the child table I deleted all duplicate values from the parent table. But those deleted duplicates still have references in the child table, which now need to be deleted.

ResumeSkill: Id, SkillId
Skill: SkillId, Name

I want to delete all the records from ResumeSkill that don't have a matching SkillId in the Skill table.
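A minimal sketch, using the table and column names from the post:

DELETE rs
FROM dbo.ResumeSkill AS rs
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.Skill AS s WHERE s.SkillId = rs.SkillId
);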

View 2 Replies View Related

SQL Server 2012 :: How To Delete Duplicate Rows Present In Parent Table

Oct 19, 2014

I have a master table and I need to import its rows into the parent and child tables.

Master table name is Flatfile_Inventory
Parent table name is INVENTORY
Child table names are INVENTORY_AMOUNT, INVENTORY_DETAILS, and INVENTORY_VEHICLE
Error details go to LOG_INVENTORY_ERROR

I have 4 duplicate rows in Flatfile_Inventory which I have already inserted into the parent and child tables.

When I run the query again using the stored procedure, it reports that all 4 rows are duplicates and moves them to Log_Inventory_Error.

What I need is this: if Flatfile_Inventory contains duplicate rows, then when inserting into the parent and child tables, I must identify the already-inserted row by its unique ID, delete that row from both the parent and child tables, and insert the latest row from Flatfile_Inventory into the parent and child tables.

-- ===============================================================================================
-- STORED PROCEDURE FOR FLATFILE_INVENTORY
-- ===============================================================================================
USE [IconicMarketing]
---=======================================SALES_CURSUR===========================================
--USE IconicMarketing
--GO
DECLARE
@FileType varchar(50) ,
@ACDealerID varchar(50) ,

[code].....
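A heavily hedged sketch of the delete-then-reinsert step: remove the stale copy from the child tables first (to satisfy the foreign keys), then from the parent, keyed on the shared unique ID. InventoryID and its value are assumed names purely for illustration:

DECLARE @InventoryID int = 12345;  -- hypothetical unique ID of the stale row

DELETE FROM dbo.INVENTORY_AMOUNT  WHERE InventoryID = @InventoryID;
DELETE FROM dbo.INVENTORY_DETAILS WHERE InventoryID = @InventoryID;
DELETE FROM dbo.INVENTORY_VEHICLE WHERE InventoryID = @InventoryID;
DELETE FROM dbo.INVENTORY         WHERE InventoryID = @InventoryID;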

View 2 Replies View Related

SQL 2012 :: Create Clustered Index On A Very Large Table (500 GB)

May 7, 2014

I need to create a clustered index (CI) on a very large SQL Server 2012 database table. The table has approximately 10 billion rows and is 500 GB in size. The job ran for about 20 hours and then failed with the error: "Out of disk space in tempdb". My tempdb is 1.8 TB, but that still was not enough.

Here is my script:

CREATE CLUSTERED INDEX CI_IndexName
ON TableName(Column1,Column2)
WITH (MAXDOP= 4, ONLINE=ON, SORT_IN_TEMPDB = ON, DATA_COMPRESSION=PAGE)
ON sh_WeekDT(Day_DT)
GO
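A hedged variant worth trying: with SORT_IN_TEMPDB = OFF the index sort spills into the target filegroup instead of tempdb, and an offline build avoids the extra version-store overhead that ONLINE = ON incurs, at the cost of blocking access during the build:

CREATE CLUSTERED INDEX CI_IndexName
ON TableName(Column1,Column2)
WITH (MAXDOP= 4, ONLINE=OFF, SORT_IN_TEMPDB = OFF, DATA_COMPRESSION=PAGE)
ON sh_WeekDT(Day_DT)
GO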

View 9 Replies View Related

SQL 2012 :: Partitioning Large Table On Nullable Date

May 15, 2014

I have a very large table that I need to partition. Ideally the table will write to three filegroups. I have defined the Partition function and scheme as follows.

CREATE PARTITION FUNCTION vm_Visits_PM(datetime)
AS RANGE RIGHT FOR VALUES ('2012-07-01', '2013-06-01')
CREATE PARTITION SCHEME vm_Visits_PS
AS PARTITION vm_Visits_PM TO (vm_Visits_Data_Archive2, vm_Visits_Data_Archive, vm_Visits_Data)

This should create three partitions of the vm_Visits table. I am having a few issues. The first has to do with adding a new clustered index primary key to the existing table; the main issue here is that the Closed column is nullable (it is a datetime, by the way). So running the following makes SQL Server upset:

ALTER TABLE dbo.vm_Visits
ADD CONSTRAINT [PK_vm_Visits] PRIMARY KEY CLUSTERED
(
VisitID ASC,
Closed
)
ON [vm_Visits_PS](Closed)

I need to define a primary key on the VisitID column, but I need to include the Closed column in order to partition on it; since a PRIMARY KEY cannot include a nullable column, SQL Server rejects this. I also need to know how I would move data between partitions on a monthly basis. Would I simply update the partition function, or have to do some sort of merge, split, or switch operation?
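On the monthly maintenance question, a hedged sketch of the usual pattern: tell the scheme which filegroup receives the next range, split the function at the new boundary, and merge away boundaries you no longer need (the dates here are illustrative):

ALTER PARTITION SCHEME vm_Visits_PS NEXT USED vm_Visits_Data;
ALTER PARTITION FUNCTION vm_Visits_PM() SPLIT RANGE ('2014-07-01');
ALTER PARTITION FUNCTION vm_Visits_PM() MERGE RANGE ('2012-07-01');

For the nullable-column problem, one common workaround is a unique clustered index (rather than a PRIMARY KEY constraint) on (VisitID, Closed), since a unique index allows a nullable key column.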

View 2 Replies View Related

Problems With Replicating Large Batch Inserts

Nov 14, 2006

I have transactional replication set up between 2 SQL Server 2005 databases on 2 different boxes. Both the log reader and distribution agent run in continuous mode. The distributor resides on a third SQL Server box. We are having performance issues with replication when there are large batch deletes/inserts happening on the publisher. There is a batch job that runs for about 8-10 hours every day on the publisher and deletes/inserts thousands of records as part of transactions. The time to replicate all this data to the subscriber is around 13-15 hours, which is not acceptable to our user community. While monitoring, I found that the distribution database (MSRepl_commands table) at times has millions of records in it, which would explain why the latency is so high. Add to it the fact that there are large transactions occurring on the publisher.

I was wondering if anyone has faced a similar problem before. Are there any configuration changes I can make to the replication infrastructure to reduce latency?

Would appreciate your help.

Thanks.

View 4 Replies View Related

SQL 2012 :: Generating CREATE TABLE Scripts For Large Number Of Tables

Feb 11, 2014

Other than right-clicking on each individual table in SSMS and generating a CREATE script, is there a simple way to generate CREATE TABLE scripts for tables within a given database?

Background: I have a bunch of tables in one database, and I would like to add tables to a second database that have the same names and basic structures of some of the tables from the first database.

I do not need to transfer any data from the tables; this is a separate project that will use a similar data structure. I just want to generate the CREATE TABLE scripts for 30ish tables within the first database, and then I'll tweak the scripts as appropriate and run them against the new database.

[URL] ....

View 7 Replies View Related

SQL Server 2008 :: Making Use Of A Large Transaction File To Delete Records?

Jun 5, 2015

Currently we have a database about 300 GB in size. Because our backup system failed some time ago, we were left with a transaction log file which grew to about 160 GB. However, our backups are working again and everything is working fine. My understanding is that the transaction log file is now practically empty but its capacity remains at 160 GB.

When you delete records the deleted transactions are going to get logged to the transaction file. My understanding is when a backup is done these transactions get discarded out of the transaction file.

Could I make use of this relatively large transaction file and start deleting records without actually adding to the transaction log file's size?

The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we can delete a chunk of records from a table; then, after a backup has completed, we can delete another chunk of records out of this table, until we have got the table down to the records that we now need. Will this work?
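This should work as described, since each log backup frees the logged delete transactions for reuse rather than letting the file grow. A quick way to watch how full the log actually is while the deletes run:

DBCC SQLPERF (LOGSPACE);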

View 2 Replies View Related

SQL 2012 :: Delete From One Table Based On Results Of Other Table

May 28, 2015

I have this table:

with actividades_secundarias as (
select a.*, r.Antigo, r.Novo, rn = row_number()
over (PARTITION BY a.nif_antigo, r.novo ORDER BY a.nif_antigo)
from ACT_SECUNDARIAS a inner join
((select

[Code] ....

I want to make a delete statement like this:

select * into #table1 from actvidades_secundarias where rn>1
Delete from act_secundarias where act_secundarias.nif_antigo = #table1.nif_antigo and act_secundarias.cod_cae = #table1.cod_cae

But it seems that I can't delete like this.
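A hedged fix: after materializing the rn > 1 rows into #table1 (as in the post), the DELETE has to reach #table1 through a FROM/JOIN clause; a bare #table1.column reference is not valid T-SQL. Column names are taken from the post:

DELETE a
FROM act_secundarias AS a
INNER JOIN #table1 AS t
        ON a.nif_antigo = t.nif_antigo
       AND a.cod_cae = t.cod_cae;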

View 5 Replies View Related

SQL 2012 :: Delete Trigger On One Table

Jun 8, 2015

I have a delete trigger on one table. When I delete a single row in this table, that deleted record is recorded to a history table. But sometimes my delete will remove more than one record, and in that case only one record is being recorded into the history table. How can I record all the deleted records into the history table when multiple records are deleted with one single DELETE statement?
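The usual cause is that a trigger fires once per statement, not once per row, so any logic that reads a single value from the deleted pseudo-table captures only one row. A minimal sketch of the set-based form, with all table and column names as illustrative assumptions:

CREATE TRIGGER trg_MyTable_LogDeletes
ON dbo.MyTable
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- "deleted" holds every row removed by the statement, so a
    -- set-based INSERT records all of them, not just one.
    INSERT INTO dbo.MyTable_History (Id, SomeColumn, DeletedAt)
    SELECT d.Id, d.SomeColumn, GETDATE()
    FROM deleted AS d;
END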

View 4 Replies View Related

SQL 2012 :: Delete Record From Table With ID And Rownumber

Mar 17, 2014

I will try my best to explain this. We have a shopping cart on our website; the person that was developing it has now left the company, and I've been given the job of finishing it off.

When I load all the items that the user has entered into his/her cart, I return the Item ID and the RowNumber (ROW_NUMBER() OVER (Order by Id) AS RowNumber).

I'm trying to delete the item from the table using the following query

DELETE FROM [dbo].[Cart.Items] WHERE UniqueID = UniqueID and ItemID = @ItemID and @RowNumber IN (
SELECT ROW_NUMBER() OVER (Order by Id) AS RowNumber)

Now, the reason we are using the RowNumber is that the user can add the same item as many times as they like. For example, if you buy 3 different mobile phones and you want three screen protectors, you will click screen protector 3 times, which adds 3 records into the db with the same ID; the row number is used to find the correct one.

But the above delete is not working.
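A hedged rewrite: compute the row numbers in a CTE over the same filtered set and ordering that was shown to the user, then delete through the CTE. Note the original WHERE UniqueID = UniqueID compares the column to itself (always true), so @UniqueID here is assumed to be a parameter like @ItemID and @RowNumber:

DECLARE @UniqueID int = 1, @ItemID int = 2, @RowNumber int = 3;  -- illustrative values

WITH NumberedItems AS (
    SELECT ItemID,
           ROW_NUMBER() OVER (ORDER BY Id) AS RowNumber
    FROM [dbo].[Cart.Items]
    WHERE UniqueID = @UniqueID
)
DELETE FROM NumberedItems
WHERE RowNumber = @RowNumber
  AND ItemID = @ItemID;

For this to delete the right row, the CTE's filter and ORDER BY must match exactly what was used when the row numbers were displayed.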

View 1 Replies View Related

SQL Server 2012 :: ShrinkFile On Large Database With TRUNCATEONLY

Dec 30, 2013

We have a large OLAP database, about 2.5 TB spread out over 3 data files on three different drives, and recently someone ran a query that created a table that continued to grow until the data files filled the available disk space (about 3 TB total - 1 TB per drive).

Tonight I plan on running a full backup (it's in Simple mode) and running a ShrinkFile on all three files sequentially with TRUNCATEONLY just so it will remove the space after the last extent. Any way to tell ahead of time how much space this will recover?

Granted running a DB Shrink is one of those things you just don't do, but this is a one-time shot and unavoidable to get the file size back under control.
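A way to estimate it ahead of time: compare each file's allocated size with its used pages. The free tail is an upper bound on what TRUNCATEONLY can release, since it only returns space beyond the last allocated extent:

SELECT name,
       size / 128 AS SizeMB,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS UsedMB,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS FreeMB
FROM sys.database_files;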

View 5 Replies View Related

SQL Server 2012 :: Fetching Large Data From Teradata?

Jun 9, 2015

I am fetching a large amount of data from Teradata to SQL Server using a linked server. I am getting the error below:

A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)

View 0 Replies View Related

SQL Server 2012 :: Select Large Data From Multiple Tables

May 10, 2014

In a Library Management database we have these tables

1) Document ( DocNo , Doc_type , permalink,inDate)
2)Title(id, DocNo,Main_Title, Other_Title)
3)Author(id , Author_Name , Author_Family,Type--Like:main author , translator ,....)
4)Publisher(id,DocNo , Name,Publisedate,address)
5)Subject(id,DocNo,Subject)
6)Description(id,DocNo,ISBN,description)--one document may have some ISBN,etc

In the Document table I have 500,000 records.

I want to search for a word in these tables. For example, I want to search for 'Computer'; this word may be in the subject, title, description, etc. How can I do this with the best performance?
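A hedged sketch of the straightforward approach; note that a leading-wildcard LIKE cannot use regular indexes, so at this scale a full-text index on each searched column (queried with CONTAINS) is usually the performant route:

SELECT DocNo FROM Title
WHERE Main_Title LIKE '%Computer%' OR Other_Title LIKE '%Computer%'
UNION
SELECT DocNo FROM [Subject] WHERE [Subject] LIKE '%Computer%'
UNION
SELECT DocNo FROM Description WHERE description LIKE '%Computer%';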

View 3 Replies View Related

SQL Server 2012 :: Merging Two Large Tables (More Than 100m Rows)

Aug 18, 2014

SQL 2012

I have a source table in the staging database stg.fact and it needs to be merged into the warehouse table whs.Fact.

stg.fact is not a delta feed; it is basically an intra-day refresh.

Both tables have a last updated date so its easy to see which have changed.

It will be new (insert) or changed (update) data that I am interested in; there are no deletions.

As the inserts or updates could run into millions of rows, this needs to be efficient.

I expect whs.Fact to go to >150 million rows.

When I have done this before, I started with a T-SQL MERGE statement, and that was not performant once I got to this size.

My original option was to do this in SSIS with a lookup task that marks the inserts and updates and deals with them separately. However, when I set up the lookup transformation, the reference data set would need a package variable in the SQL command, and this does not seem possible with the lookup in 2012! I am currently looking at the Merge Join transformation, and at any clever basic T-SQL that could work, as this will need to be fast; that is where I think T-SQL may be the better route.

Both tables will have >100,000,000 rows
Both tables have the last updated date
The Tables are in different databases but on the same SQL Instance
Each table holds 5 integer columns, one varchar, and one datetime

Last time I used Merge it was a wider table with lots of columns so don't know if this would be an option.
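A hedged T-SQL alternative to MERGE at this scale is to split the work into one set-based UPDATE and one INSERT, each driven by the last-updated date. The key and column names below are illustrative assumptions based on the description (five ints, a varchar, a datetime):

UPDATE w
SET    w.IntCol1     = s.IntCol1,
       w.VarcharCol  = s.VarcharCol,
       w.LastUpdated = s.LastUpdated
FROM   whs.Fact AS w
JOIN   stg.Fact AS s
  ON   s.FactKey = w.FactKey
WHERE  s.LastUpdated > w.LastUpdated;

INSERT INTO whs.Fact (FactKey, IntCol1, VarcharCol, LastUpdated)
SELECT s.FactKey, s.IntCol1, s.VarcharCol, s.LastUpdated
FROM   stg.Fact AS s
WHERE  NOT EXISTS (SELECT 1 FROM whs.Fact AS w WHERE w.FactKey = s.FactKey);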

View 6 Replies View Related

SQL Server 2012 :: Best Way To Handle Like Percentage On Column Too Large For Index

Sep 18, 2015

We have a table of 100M rows, and up until now we were fine with a nonclustered index on a varchar(4000) column because we never went above 900 bytes (yes, it is a bad design). We now need to support international character sets, so the column was updated to nvarchar(4000), and now we have data past the 900-byte index key limit.

The data is long and seems useless, but the business needs it, and they need to be able to search "where bigcolumn like 'test%'". With an index, even with a huge amount of data, it was fast. Now, of course, without an index it is unusable. The wildcard is always at the end of the search. I made a full-text index on the column, and basic queries such as select * from ourtable where contains(bigcolumn, 'AReallyLongStringofTextHere') work fine unless there is a space in the data. We lose thousands of returned rows because of spaces in the data.

I have tried select * from ourtable where contains(bigcolumn, '"AReallyLongStringofTextHere that includes spaces"') but not all of the data is returned. I get 112 rows with the CONTAINS statement, while the table-scanning statement select * from ourtable where bigcolumn like 'AReallyLongStringofTextHere that includes spaces%' returns 1939 rows. I understand that the full-text index is breaking the long string up since it contains spaces. Is there a way to retain the entire string as one index entry, or is there a way to fix my query to return all of the rows?
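One hedged guess at the row-count gap: words such as "that" are stopwords, and stopwords inside a CONTAINS phrase can change which rows match depending on the stoplist. Disabling the stoplist for the index would rule that out:

ALTER FULLTEXT INDEX ON ourtable SET STOPLIST OFF;

Full-text cannot treat the whole string as a single token, though; if exact prefix semantics are required, another option is a persisted computed column holding, say, the first 450 characters (small enough to index) with a regular index on it for the LIKE 'text%' search.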

View 9 Replies View Related

SQL Server 2012 :: Converting Large Excel Spreadsheet To Normalized Data

Aug 7, 2014

I have a large Excel spreadsheet created by a finance user that contains several decades' worth of sales data.

Here is a small sample:

Guest Count

Unit ID    1/2/2011    1/9/2011
3                      0
7                      0
8                      0
9          0           0
15         1696        1202
22         2769        1914
23         2704        2110
25         0           0
28         2838        1882
33         1089        691
36         3581        3064
37         1469        1062

I need to get this data into an SQL table in the following form so I can use it to further manipulate the data and update several other tables. I am thinking that UNPIVOT or CROSS APPLY might be the way to go, but am not sure how to code it.

The desired output:

Unit ID    Date        Guest Count
3          1/2/2011    NULL
7          1/2/2011    NULL
8          1/2/2011    NULL
9          1/2/2011    0
15         1/2/2011    1696

and so on ......

The spreadsheet has 2900 columns and 3500 rows so performance is definitely a consideration as well.
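A hedged sketch using CROSS APPLY (VALUES ...), which scales to wide inputs better than UNPIVOT and, unlike UNPIVOT, keeps the NULLs that the desired output requires. dbo.SalesStaging is an assumed name for the imported sheet, and only two of the date columns are shown; in practice the VALUES list would be generated dynamically from the column names:

SELECT s.[Unit ID],
       v.[Date],
       v.[Guest Count]
FROM dbo.SalesStaging AS s
CROSS APPLY (VALUES
    ('1/2/2011', s.[1/2/2011]),
    ('1/9/2011', s.[1/9/2011])
) AS v([Date], [Guest Count]);

One caveat: an ordinary SQL Server table caps out at 1,024 columns, so a 2,900-column sheet would have to be imported in slices first.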

View 9 Replies View Related

SQL Server 2012 :: Asynchronous Cursor Population Slow For Large Result Sets

Jul 2, 2015

So async cursor population is supposed to create the cursor and return the cursor ID quickly, while the server works on populating the results asynchronously. For a keyset-driven cursor, SQL Server stores the keyset in tempdb, which it then uses to fetch data for cursor results. Anyway, this works fine for smaller tables, but I'm finding that for large result sets the async cursor population is very slow and indeed seems to approximate synchronous time. The wait stat I get while it is running (supposedly asynchronously) is TRANSACTION_MUTEX.

Example:
--enable async cursor
exec dbo.sp_configure 'cursor threshold', 0; reconfigure;
declare @cursor int, @stmt nvarchar(max), @scrollopt int, @ccopt int, @rowcount int;
--example of giant result set
set @stmt = 'select * from sys.all_objects o1, sys.all_objects o2';

[code]...

Note that using the SQL "select * from sys.all_objects o1" is much faster than "select * from sys.all_objects o1, sys.all_objects o2". However, if cursor population is async, I'd expect the time to return a cursor id to be similar between the two.

View 7 Replies View Related

Large Delete && Tran Log

May 26, 2004

I have a table that has 110 million records, and I will be deleting over 60 million of them, but I don't have enough space to hold all the deletes in the transaction log. Is there a way to delete 1 million records, then free the transaction log, then run another delete, all in one script?
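A hedged sketch of the classic pattern for this, in SQL 2000-era syntax (SET ROWCOUNT caps each DELETE; DELETE TOP is the modern form). The table name, purge condition, and backup path are illustrative:

SET ROWCOUNT 1000000

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable
    WHERE EventDate < '20040101'

    IF @@ROWCOUNT = 0 BREAK

    -- Back up the log between chunks so its space is reused, not grown.
    BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_purge.trn'
END

SET ROWCOUNT 0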

View 1 Replies View Related

Large Data Delete

Jul 20, 2005

Hi. I have a SQL 2000 server with 128m rows of data. I want to delete about 65m of that. So far I have bcp'ed the relevant data out and put it into another SQL database. We have a small amount of space for our transaction log, so I cannot delete all 65m rows in one go. So far I have been doing them in 0.5m chunks, but it is extremely slow. Would a faster way be to bcp out the data I want to keep, truncate the table, and bulk import it in again? What happens to log size when the bulk import is happening, and is there another way of doing this? Thanks for any help.
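A hedged sketch of the keep-and-reload route: with the database temporarily in BULK_LOGGED recovery, a BULK INSERT of the kept rows with TABLOCK is minimally logged, so the log stays small. Database, table, and path names are illustrative:

ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED

TRUNCATE TABLE dbo.BigTable

BULK INSERT dbo.BigTable
FROM 'D:\exports\keep_rows.bcp'
WITH (DATAFILETYPE = 'native', TABLOCK)

ALTER DATABASE MyDatabase SET RECOVERY FULL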

View 2 Replies View Related

Large Delete Hangs

Feb 6, 2008



Hi. Does anyone know why a delete statement sometimes hangs when deleting a large amount of data and how to resolve this?

Thanks!

View 2 Replies View Related

Converting .dbf To Sql Server Table...batch File? Command Line?

Jul 20, 2005

My client will be receiving a .dbf file which needs to be uploaded into a SQL Server database table (as an append) every week. They are NOT computer savvy and I would like to automate this process rather than go into Enterprise Manager and run a data transformation. Is there any way to write a batch file or a set of command lines which will do this? Thanks. Monica
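A hedged, era-appropriate sketch: one commonly cited pattern uses the Jet OLE DB provider via OPENROWSET (ad hoc distributed queries must be enabled, and all paths and names here are illustrative). The statement can be saved to a .sql file and run from a one-line batch file with osql or sqlcmd on a schedule:

INSERT INTO dbo.WeeklyData
SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'dBase IV;DATABASE=C:\imports',
                'SELECT * FROM weekly.dbf');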

View 3 Replies View Related

SQL 2012 :: Turn Off Delete In Server?

Jun 11, 2014

Is there any way I can turn off delete in SQL Server? I want to prevent anyone inadvertently deleting rows in tables. I thought that, worst case, I could have triggers on tables to perform rollbacks.
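A lighter-weight option than triggers is a permissions-based block; a minimal sketch, with the role name as an illustrative assumption. Note this does not stop sysadmins, and TRUNCATE TABLE is governed by ALTER permission rather than DELETE:

DENY DELETE ON SCHEMA::dbo TO [ReportingUsers];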

View 3 Replies View Related

Soft Delete In Table, Why Merge Agent Report Hards Delete On Table ?

Feb 1, 2007

Hi seniors

There are two tables involved in replication; let's say table1, whose replicated copy is rep.table1.

We are not deleting records physically in table1; instead, a bit column in table1 is set to true when you want to delete a record. The strange thing is that the replication agent reports this as a hard delete operation on table1, so it downloads the hard delete and deletes the record in the replicated table, which is a serious problem for us.

Please let me know where I am wrong and how to put this right.

There are no triggers on the published tables, and no other trigger has been created on the published table.

regards

Ahmad Drshen

View 6 Replies View Related

SQL Server 2012 :: Delete Last Job In Parallel Processing?

Mar 18, 2015

I have a main program which launches two jobs at a time. Each job does some processing, and at the end I try to delete the jobs after storing the job details in a custom table I created (a cleanup sub-program).

Out of the two jobs, I am able to store one job's details (job_name, job_id, start_time, and end_time) in the custom table and delete that job, but the job that completes last is neither captured nor deleted from the sysjobs and sysjobhistory tables.

I had included this step (which calls the cleanup sub-program to store the job details and delete the job) at the end. I can see from debug messages that the cleanup procedure is getting called, but it neither stores the details nor deletes the job.

When I execute this cleanup program separately, it does store the job details and delete it.

View 0 Replies View Related

SQL Server 2012 :: How To Archive Data And Delete

Apr 22, 2015

I am using SQL 2012 SE. I have 2 databases, say A and B, with the same structure and relationships. There are 65 tables in each database. A is already replicating data to database C for 35 tables. Now, every day, I need to move all data newer than getdate()-1 from A to B for all the tables, and once the move is done I need to delete this data from A; then the same thing the next day, and every day. Since this covers 65 tables, it is challenging to identify the insert order. Once the insert order is identified, the delete order will be the reverse of it.

Is there a tool or any SP that could generate the insert-order script? The Generate Scripts wizard with data scripts out the entire data set, and these databases are almost 400GB; some tables have 200Mil+ rows, so it takes forever.
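A hedged sketch of deriving the order from the foreign keys in metadata: rank each table by its FK depth, load shallow (parent) tables first, and reverse the order for deletes. Self-referencing FKs are skipped, and genuinely circular FK chains would need manual handling:

WITH fk AS (
    SELECT f.parent_object_id     AS child_id,
           f.referenced_object_id AS parent_id
    FROM sys.foreign_keys AS f
    WHERE f.parent_object_id <> f.referenced_object_id
),
ordered AS (
    SELECT o.object_id, 0 AS lvl
    FROM sys.objects AS o
    WHERE o.type = 'U'
      AND NOT EXISTS (SELECT 1 FROM fk WHERE fk.child_id = o.object_id)
    UNION ALL
    SELECT fk.child_id, ordered.lvl + 1
    FROM fk
    JOIN ordered ON fk.parent_id = ordered.object_id
)
SELECT OBJECT_NAME(object_id) AS TableName,
       MAX(lvl)               AS InsertOrder
FROM ordered
GROUP BY object_id
ORDER BY InsertOrder;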

View 1 Replies View Related

SQL 2012 :: Modify SP In Batch

Jul 7, 2015

I face the task of editing the definitions of a bunch of stored procedures: removing the <IF EXISTS THEN DROP> preamble and changing CREATE to ALTER.

Do you think there is a good way to do this in batch, taking the original source from the system tables (even though we have it as physical files in VSS)?

Sure enough, <IF EXISTS THEN DROP> is not uniform; there are variations in syntax and across different numbers of lines, IF EXISTS ... vs. OBJECT_ID IS NOT NULL, etc.

CREATE__sp_PROCEDUREA vs CREATE_spPROcedure (with different spacing).
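A hedged starting point: pull every definition from sys.sql_modules and swap the first CREATE for ALTER; the IF EXISTS/DROP preamble is harder to strip mechanically given the variations noted above, so this only handles the second half of the task:

SELECT OBJECT_SCHEMA_NAME(p.object_id) AS SchemaName,
       p.name                          AS ProcName,
       STUFF(m.definition,
             CHARINDEX('CREATE', m.definition), 6, 'ALTER') AS NewDefinition
FROM sys.procedures AS p
JOIN sys.sql_modules AS m ON m.object_id = p.object_id;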

View 0 Replies View Related

SQL Server 2012 :: Behavioral Difference Between DELETE And TRUNCATE

Apr 4, 2014

I have an issue with a DELETE statement. In the code given below (it is part of the actual proc), if we use TRUNCATE to clean the temp tables, everything goes fine. But if I use DELETE in place of TRUNCATE, the system skips the IF block 'if (@script_type = 1 OR @script_type = 2)'. I am not able to understand this behavioral difference between DELETE and TRUNCATE. Recently the database started being used for replication, but that should not be a reason.

SELECT @max_rows = COUNT('X') FROM #temp_table1
SET @row_cnt = 1
WHILE @row_cnt <= @max_rows
BEGIN

[code],...
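One real behavioral difference that can produce exactly this kind of skipped branch, offered as a hedged illustration: TRUNCATE reseeds IDENTITY columns while DELETE does not, so logic that assumes IDs restart at 1 after cleanup breaks under DELETE:

CREATE TABLE #t (id int IDENTITY(1,1), v int);
INSERT INTO #t (v) VALUES (10);
DELETE FROM #t;                    -- identity NOT reset
INSERT INTO #t (v) VALUES (20);
SELECT id FROM #t;                 -- returns 2
TRUNCATE TABLE #t;                 -- identity reset to the seed
INSERT INTO #t (v) VALUES (30);
SELECT id FROM #t;                 -- returns 1
DROP TABLE #t;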

View 2 Replies View Related

SQL Server 2012 :: Keep Only Top 50 Records For Every User And Delete Remaining

Aug 4, 2015

I have a table 'library' with two columns

user_name, access_time

Every user has more than 500 entries; I want to keep only the top 50 records for every user and delete the remaining records.
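A minimal sketch, assuming "top 50" means the 50 most recent access_time values per user (table and column names from the post):

WITH ranked AS (
    SELECT user_name, access_time,
           ROW_NUMBER() OVER (PARTITION BY user_name
                              ORDER BY access_time DESC) AS rn
    FROM dbo.library
)
DELETE FROM ranked
WHERE rn > 50;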

View 9 Replies View Related






