Retaining Records Of Top N Rows And Deleting The Rest
May 15, 2008
Hi All,
I am writing an SP where I need to pass a value so that only the most recent n records are retained. In this SP I am deleting from a couple of tables based on the value passed to the SP. For example, if the SP is passed the value 10, then only the top 10 records are retained and the rest are deleted.
I have formed the following logic, which I feel can be improved vastly.
I create a temp table and then:
CREATE TABLE #TempAuditTbl (Rownum int PRIMARY KEY, Orderid uniqueidentifier)
INSERT INTO #TempAuditTbl
SELECT ROW_NUMBER() OVER (ORDER BY orderdate desc) AS rownum, Orderid FROM Orders
DELETE Orders FROM Orders INNER JOIN #TempAuditTbl adt ON adt.Orderid = Orders.Orderid AND rownum > @TopnRows
DROP TABLE #TempAuditTbl
OR
DELETE FROM Orders WHERE orderid NOT IN ( SELECT TOP (@TopnRows) OrderID FROM Orders ORDER BY OrderDate DESC)
This way I am able to keep the top n records.
Which of these two solutions is more efficient? Is there a more efficient way to achieve the same result?
Please help.
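For what it's worth, a single-statement alternative (just a sketch, assuming SQL Server 2005 or later and the Orders/Orderid/OrderDate names above) deletes directly through a ranked CTE, so no temp table is needed:
;WITH Ranked AS
(
    SELECT ROW_NUMBER() OVER (ORDER BY OrderDate DESC) AS rownum
    FROM Orders
)
DELETE FROM Ranked
WHERE rownum > @TopnRows;   -- rows ranked below the top N are removed from Orders
Deleting through the CTE affects the underlying Orders table, so this keeps only the newest @TopnRows rows in one pass.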
Hey guys, I have a table full of data that has duplicate records except for two date columns (date1 and date2). What I would like to do is remove the duplicates while retaining the most recent record. How can I do this?
So record 1 looks like this:
Code:
John | Smith | 08/08/2000 | 10/10/2000
Record 2 looks like this:
Code:
John | Smith | 08/10/2005 | 10/10/2005
I'd like to remove the first instance and keep the second (most recent one).
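One possible way (an untested sketch, assuming SQL Server 2005+, a placeholder table name Persons, and that FirstName/LastName identify a duplicate; the real names may differ) is to rank the duplicates by the second date and delete everything but the newest row:
;WITH Dupes AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY FirstName, LastName
                              ORDER BY date2 DESC) AS rn
    FROM Persons
)
DELETE FROM Dupes
WHERE rn > 1;   -- keeps only the most recent row per name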
I have a table employee_test containing sample data. The rows with EmployeeID=6 are duplicate rows. I want to delete the duplicates, retaining one row for EmployeeID=6. Note: I don't want to use a temporary table. I want to do this using a single query or, at most, in an SP query batch. Please advise.
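If the duplicate rows for EmployeeID = 6 are completely identical, a single-statement sketch (assuming SQL Server 2005+ and that you know how many extra copies exist, e.g. 2) could simply use DELETE TOP:
-- removes 2 of the identical rows, leaving one behind (adjust the count as needed)
DELETE TOP (2)
FROM employee_test
WHERE EmployeeID = 6;
Because the rows are identical, it doesn't matter which copies are removed.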
Is there any way to control this scenario? I know the trick to put 10 on each page, but I need to split them unevenly: 10 on the first page and the rest on the second page. Is it possible?
/****** Object: StoredProcedure [dbo].[dbo.ServiceLog] Script Date: 07/18/2014 14:30:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[ServiceLogPurge]
-- Purge records dbo.ServiceLog older than 3 months:
-- Purge records in small portions to avoid locking production tables
-- for a long time. The process takes longer, but can co-exist with
-- normal usage of the tables.
[Code] ...
*** Getting this error below when executing the code ***
Msg 102, Level 15, State 1, Procedure ServiceLogPurge, Line 45 Incorrect syntax near 'Failed:'.
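The full procedure body isn't shown, but the usual shape of such a purge (a sketch only; LogDate is an assumed column name for dbo.ServiceLog) loops over small DELETE TOP batches so locks are released between iterations:
DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.ServiceLog
    WHERE LogDate < DATEADD(month, -3, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left older than 3 months
END
The "Incorrect syntax near 'Failed:'" error suggests an unquoted string literal (for example in a RAISERROR or PRINT message) somewhere around line 45 of the hidden code.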
I have a situation where deleting old records is blocking updates to the latest records on a highly transactional table, and the application is getting timeout errors.
In detail: I have one table called Tran_table1 in an OLTP database. Tran_table1 is a highly transactional table that receives inserts/updates continuously.
While archiving records older than 2 years from Tran_table1 into Tran_table1_archive in batches (using the DELETE ... OUTPUT INTO clause), any UPDATEs on Tran_table1 get blocked, and the result is timeout errors in the application.
Are there any SQL Server hints to avoid this blocking?
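No hint removes writer-versus-writer blocking entirely, but keeping each archive batch small and short-lived usually helps, because row locks are released quickly and never escalate to a table lock. A rough sketch (assuming Tran_table1 has a TranDate column and Tran_table1_archive has a matching structure; the names are illustrative):
DELETE TOP (500)
FROM Tran_table1 WITH (ROWLOCK)
OUTPUT DELETED.* INTO Tran_table1_archive
WHERE TranDate < DATEADD(year, -2, GETDATE());
-- run repeatedly, e.g. in a loop with a short WAITFOR DELAY between batches,
-- so each transaction is brief and concurrent UPDATEs only wait momentarily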
I must admit I don't know all that much about SQL, which is why I hope someone can show me the light. I have a script almost finished; however, I have no idea how to have it trim database entries that are older than, say, 90 days. Any ideas?
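If it's just an age cutoff, something along these lines may be all that's needed (a sketch; MyTable and CreatedDate are placeholder names for the real table and date column):
DELETE FROM MyTable
WHERE CreatedDate < DATEADD(day, -90, GETDATE());   -- anything older than 90 days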
I have a table with a load of orphaned records (I know... poor design). I'm trying to get rid of them, but I'm having a brain cramp.
I need to delete all the records from the table "Floor_Stock" that would be returned by this select statement:
SELECT FLOOR_STOCK.PRODUCT, FLOOR_STOCK.SITE
FROM PRODUCT_MASTER
INNER JOIN FLOOR_STOCK ON PRODUCT_MASTER.PRODUCT = FLOOR_STOCK.PRODUCT
LEFT OUTER JOIN BOD_HEADER ON FLOOR_STOCK.PRODUCT = BOD_HEADER.PRODUCT AND FLOOR_STOCK.SITE = BOD_HEADER.SITE
WHERE (BOD_HEADER.BOD_INDEX IS NULL)
AND (PRODUCT_MASTER.PROD_TYPE IN ('f', 'n', 'k', 'b', 'l', 's'))
I was thinking along the lines of:
DELETE FROM FLOOR_STOCK
INNER JOIN (SELECT FLOOR_STOCK.PRODUCT, FLOOR_STOCK.SITE
            FROM PRODUCT_MASTER
            INNER JOIN FLOOR_STOCK ON PRODUCT_MASTER.PRODUCT = FLOOR_STOCK.PRODUCT
            LEFT OUTER JOIN BOD_HEADER ON FLOOR_STOCK.PRODUCT = BOD_HEADER.PRODUCT AND FLOOR_STOCK.SITE = BOD_HEADER.SITE
            WHERE (BOD_HEADER.BOD_INDEX IS NULL)
            AND (PRODUCT_MASTER.PROD_TYPE IN ('f', 'n', 'k', 'b', 'l', 's'))) F
ON FLOOR_STOCK.PRODUCT = F.PRODUCT AND FLOOR_STOCK.SITE = F.SITE
... but Sql Server just laughs at me: "Incorrect Syntax near the keyword INNER"
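DELETE doesn't accept a join directly on the target like that; SQL Server wants the join in a second FROM clause with the delete target aliased. A sketch built from the SELECT above (not tested against the real schema):
DELETE FS
FROM FLOOR_STOCK AS FS
INNER JOIN PRODUCT_MASTER AS PM
    ON PM.PRODUCT = FS.PRODUCT
LEFT OUTER JOIN BOD_HEADER AS BH
    ON FS.PRODUCT = BH.PRODUCT
   AND FS.SITE = BH.SITE
WHERE BH.BOD_INDEX IS NULL
  AND PM.PROD_TYPE IN ('f', 'n', 'k', 'b', 'l', 's');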
Here is the scenario. I'm working with two tables:
Contact1 Conthist
Contact1 contains basic contact information and conthist contains history records for those contacts. Conthist can hold many records related to a single contact1 record.
The link between the two tables is a column called accountno.
I'm trying to delete any records in conthist that have an accountno that does not exist in contact1. The queries that I've tried keep returning conthist records that do actually have a matching accountno.
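A sketch that tends to behave better than NOT IN when accountno can be NULL (assuming the conthist/contact1 names above):
DELETE ch
FROM conthist AS ch
WHERE NOT EXISTS (SELECT 1
                  FROM contact1 AS c
                  WHERE c.accountno = ch.accountno);   -- no matching contact1 row exists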
I have a couple SQL tables that have been appended to daily over the last two years. There are now about 50,000,000 records in the table. Does anyone know the fastest way to delete records before a certain date to shorten these tables? Delete queries and everything else I've tried are taking way too long.
Apparently, deleting 7,000,000 records from a table of about 20,000,000 is not advisable. We were able to take orders at 8:00AM, but not at 7:59.
So, what's the best way of going about deleting a large number of records? Pretty basic lookup table, no relationships with other tables, 12 or so address-type fields, 4 or 5 simple indexes. I can take it down for a weekend or night, if needed.
DTS the ones to keep to another table, drop the old and rename the new table? Bulk copy out, truncate and bring back in? DTS to text, truncate and import back? Other ways?
Never worked with such a large table and need a little experienced guidance.
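One common pattern for removing a large fraction of a table (only a sketch; MyBigTable and KeepDate are placeholders, and indexes/permissions have to be recreated afterwards) is to copy the keepers out, truncate, and reload or rename:
-- copy the rows to keep into a new table (minimally logged under SIMPLE/BULK_LOGGED recovery)
SELECT *
INTO MyBigTable_keep
FROM MyBigTable
WHERE KeepDate >= '2007-01-01';

TRUNCATE TABLE MyBigTable;          -- fast and minimally logged; allowed here since no FKs reference it

INSERT INTO MyBigTable
SELECT * FROM MyBigTable_keep;      -- or drop MyBigTable and use sp_rename on the new table instead

DROP TABLE MyBigTable_keep;
Since the table has no relationships with other tables, this can be done in the weekend/night window mentioned above.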
My Web Host does not provide administrative privileges to the SQL server I have access to. I would like to delete tens of thousands of records from two of my tables without writing to the Transaction Log. What I'm trying to do is delete these records quickly without using any of the allotted space my web host has set aside for my transaction log (they give me 50 MB and I go way over that when I run a DELETE statement).
I need a sql statement to delete duplicate records.
I have a college table with all colleges in the nation. I noticed that all of the colleges were listed twice. How do I delete all of the duplicate records?
Here is my table:
Colleges
-------------------
schoolID - smallint NOT NULL,
schoolName - varchar(60) NULL
Can someone help me out with the sql statement??? I'm running SQL Server 6.5.
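On 6.5 there is no ROW_NUMBER, so one workable sketch (assuming the duplicate rows are identical in every column, including schoolID) is to pull the distinct rows into a temp table and reload:
SELECT DISTINCT schoolID, schoolName
INTO #colleges_dedup
FROM Colleges

TRUNCATE TABLE Colleges

INSERT INTO Colleges (schoolID, schoolName)
SELECT schoolID, schoolName FROM #colleges_dedup

DROP TABLE #colleges_dedup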
Hi All, I have one table named MyTable, and this table contains only one column, MyCol. It has 10 records and all the records are duplicates, i.e. the value is 7 in all 10 records.
It is something like this:
MyCol
7
7
7
7
7
7
7
7
7
7
Now when I try to delete the 10th record (or any record), it gives me the error "Key column information is insufficient or incorrect. Too many rows were affected by update."
What should I do if I want only 4 records instead of 10 in my table? How do I delete the 6 extra records from the table?
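Because every row is identical, the table designer cannot build a key to target a single row, but from a query window something like this sketch should work (SET ROWCOUNT on older versions, DELETE TOP on 2005 and later):
SET ROWCOUNT 6      -- limit the next statement to 6 rows
DELETE FROM MyTable WHERE MyCol = 7
SET ROWCOUNT 0      -- reset the limit

-- or, on SQL Server 2005 and later:
-- DELETE TOP (6) FROM MyTable WHERE MyCol = 7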
I have a problem where records in the underlying tables of a dataview are being deleted (seemingly at random).
For example:
CREATE TABLE [Employee] (Id int, Name varchar(50))
CREATE TABLE [Company] (Id int, Name varchar(50))
CREATE TABLE [EmployeeCompany] (CompanyId int, EmployeeId int)
CREATE VIEW [dvEmployee] AS
SELECT * FROM [Employee]
INNER JOIN [EmployeeCompany] ON [Employee].[Id] = [EmployeeCompany].[EmployeeId]
CREATE TRIGGER [dvEmployeeUpdate] ON [dbo].[dvEmployee]
INSTEAD OF UPDATE
AS
BEGIN
    UPDATE EmployeeCompany
    SET Status = INSERTED.Status
    FROM EmployeeCompany, INSERTED
    WHERE EmployeeCompany.CompanyId = INSERTED.CompanyId
      AND EmployeeCompany.EmployeeId = INSERTED.EmployeeId
END
Because the column [Status] is a t-sql keyword, does the fact that the trigger contains the line "SET Status = ..." without saying "SET [Status] = ..." mean that I could lose records in the EmployeeCompany table?
Reason I'm asking is we have an already designed database that is littered with columns named the same as sql keywords (almost every table has a [Status] column, and there are many [Password] columns). When using a dataview on these tables, triggers exist that aren't putting the [] around these column names (the same as my dvEmployeeUpdate trigger above), and somehow we are seemingly randomly losing records. It is very rare, and they are getting completely deleted, and it seems to be the tables that contain the keyword columns and are used in dataviews with instead of triggers that don't put [] around the column names. Nowhere in any trigger or stored procedure is there a DELETE FROM on these tables, and the software running on the database uses only the data views, and doesn't directly access the underlying tables.
I've been going through all of the code adding the [], but my question is simply whether or not anyone has heard of this causing the deletion of any records, or whether there may be something else going on that I should be looking into?
Help me out on this one. I have 2 text boxes on my page. The user enters a number in each of those two text boxes. I select that many records randomly from my main table and put them into two other tables. Now the problem is how to delete those randomly selected records from the main table. For example, the main table contains:
srNo UserID
1 abcd
2 trtr
3 tret
4 yghg
5 jjhj
The user enters '2' in text box1 and '1' in text box2, so a total of 3 random records are selected and put into two other tables, say
table1
sr.no UserID
2 trtr
and table2 contains
sr.no. userid
3 tret
5 jjhj
Now I want to delete these records (sr.no 2, 3, 5) from the main table. How do I do it, given that the user can enter any number in the text boxes? Writing multiple delete statements would not be possible. How do I write the statements, or help me with the logic.
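Since the selected rows already sit in table1 and table2, one sketch (assuming srNo is the key in all three tables, and MainTable is a placeholder for the real main table name) is to delete whatever appears in either of them:
DELETE FROM MainTable
WHERE srNo IN (SELECT srNo FROM table1
               UNION
               SELECT srNo FROM table2);   -- works for any number of selected rows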
Hi, I want to delete rows from a group of tables. These tables have a common column, UserID. I heard that there is something called ON DELETE CASCADE, but I don't know how to set it up and use it. Could someone tell me how to do it, or point me to a tutorial which shows how? Thanks.
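As a rough illustration (the table names here are made up), the cascade is declared on the foreign key of each child table, so deleting the parent row removes the matching child rows automatically:
CREATE TABLE Users (
    UserID int PRIMARY KEY
);

CREATE TABLE UserOrders (
    OrderID int PRIMARY KEY,
    UserID  int NOT NULL,
    CONSTRAINT FK_UserOrders_Users
        FOREIGN KEY (UserID) REFERENCES Users (UserID)
        ON DELETE CASCADE          -- deleting a Users row deletes its UserOrders rows
);

-- DELETE FROM Users WHERE UserID = 42;   -- also removes that user's orders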
I have a database that is used to store a lot of data. We load the data on a daily basis, several thousand records per day. The log file is not needed, so what's the best way to delete the records in it and reduce the size? Thanks, Derrick
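If point-in-time recovery really isn't needed, the usual approach (a sketch; MyDb and MyDb_log are placeholder database and logical log file names, and this assumes sufficient permissions) is to switch to SIMPLE recovery and shrink the log file, rather than deleting anything from it:
USE MyDb;
GO
ALTER DATABASE MyDb SET RECOVERY SIMPLE;   -- stop retaining log records for log backups
DBCC SHRINKFILE (MyDb_log, 100);           -- shrink the log file to roughly 100 MB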
I have a table with approx 5 million rows and 36 columns. It takes approx 4 minutes to delete 1 row. The table has 3 indexes in addition to its primary key and has twelve foreign key constraints. We are still using SQL 7. There is a backup run every night as part of the nightly maintenance that reorgs/reindexes and checks the database integrity. Any thoughts?
Hi, I need a suggestion here on a very familiar DB situation. I have a main table, and the primary key of that table is used in many other tables as a foreign key. If I am deleting a record in the main table, how do I make sure that all the corresponding records in the associated tables, where that foreign key is used, get deleted too? What are my options? Thanks.
Hello all, I have a DTS package set up to import a text file on a daily basis. I need to dump the data in 2 tables 7 days after the last import. This is the code that I have: Delete From TblTemp date(Day(-7), CurrentStamp). But for some reason it deletes the data right after it imports it, and it doesn't delete anything out of the other table.
I have a function that opens a connection to an SQL database, issues a SELECT command, and reads the records with an OleDbDataReader. As the records are read, any that match certain criteria are deleted with a DELETE command. Simplified example code is shown below:
Dim dbCmd As OleDb.OleDbCommand = New OleDb.OleDbCommand()
dbCmd.Connection = New OleDb.OleDbConnection(UserDbConnString)
dbCmd.CommandText = "SELECT * FROM [User] ORDER BY UserID"
dbCmd.Connection.Open()
Dim reader as OleDb.OleDbDataReader = dbCmd.ExecuteReader(CommandBehavior.CloseConnection)
While reader.Read()
    If reader("SomeColumn") = SomeCalculatedValue Then
        Dim dbCmd2 As OleDb.OleDbCommand = New OleDb.OleDbCommand()
        dbCmd2.Connection = New OleDb.OleDbConnection(UserDbConnString)
        dbCmd2.CommandText = "DELETE FROM [User] WHERE UserID = " + reader("UserID")
        dbCmd2.Connection.Open()
        dbCmd2.ExecuteNonQuery()
        dbCmd2.Connection.Close()
    End If
End While
reader.Close()
This code worked well with an MS Access database, but when I changed to SQL Server, I get a database timeout error when attempting to do the DELETE. I suspect the reason is that the connection the reader has open has the record locked so it cannot be deleted.
The SQL connection string I am using is something like this:
UserDbConnString = "Provider=SQLOLEDB; Server=(Local); User ID=userid; Password=password; Database=dbname"
The connection string I used for MS Access included the property "Mode=Share Deny None". I wonder if there is some similar way to tell SQL Server to allow editing of records that are open for reading with an OleDbDataReader.
I wrote a script to archive and delete records from a table back in 2005 and 2009.
I can't seem to get the syntax right. Any sample script to simply archive and delete records?
This is what I have so far.
DECLARE @ArchiveDate Datetime
SET @ArchiveDate = (SELECT TOP 1 DATEPART(yyyy, Call_Date) FROM tblCall ORDER BY Call_Date)
--SELECT @ArchiveDate AS ArchiveDate
DECLARE @Active bit
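In case a bare-bones shape helps, an archive-then-delete batch usually looks something like this sketch (tblCallArchive is an assumed archive table with the same structure, and @ArchiveDate is assumed to hold a proper datetime cutoff rather than just a year number):
BEGIN TRANSACTION;

INSERT INTO tblCallArchive
SELECT *
FROM tblCall
WHERE Call_Date < @ArchiveDate;

DELETE FROM tblCall
WHERE Call_Date < @ArchiveDate;

COMMIT TRANSACTION;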
Rajarajan writes: "Kindly don't ignore this as a regular case. This is peculiar. I need to delete one of the duplicate records only if they occur consecutively. E.g.
1. 232
2. 232
3. 345
4. 567
5. 232
Here only the first record has to be deleted. Kindly help me out.
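If the numbering column really is a gap-free sequence, as in the example, one sketch is to delete a row whenever the row immediately after it holds the same value (the table and column names here are assumptions):
DELETE cur
FROM MyTable AS cur
INNER JOIN MyTable AS nxt
    ON nxt.SeqNo = cur.SeqNo + 1
WHERE nxt.Val = cur.Val;   -- removes the earlier row of each consecutive duplicate pair
With the sample data this deletes row 1 (232) and leaves rows 2, 3, 4 and 5, so the non-consecutive 232 in row 5 survives.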
I loaded one table via SSIS and found that it contained many duplicate records (from the input source). I can create a SQL task to delete them, but I wonder if SSIS offers a task "out of the box" to delete dups?
I've got a table with a unique column, "id". I've got the id values of about 300,000 records. These records need to be DELETEd from this table. Is there a way to do this in batch? I can't imagine the only way to do it is:
DELETE FROM Table WHERE id = 1 OR id = 2 OR id = 3... OR id = 300000
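A more practical sketch is to load the 300,000 ids into their own table (for example via BULK INSERT, bcp, or SSIS) and drive a joined delete from it; the names below are placeholders:
CREATE TABLE #ids_to_delete (id int PRIMARY KEY);

-- populate #ids_to_delete here, e.g.:
-- BULK INSERT #ids_to_delete FROM 'C:\ids.txt' WITH (ROWTERMINATOR = '\n');

DELETE t
FROM MyTable AS t
INNER JOIN #ids_to_delete AS d
    ON d.id = t.id;

DROP TABLE #ids_to_delete;
If 300,000 rows in one transaction is too heavy, the joined delete can also be wrapped in a DELETE TOP loop so it runs in smaller batches.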