Hi,
I have a table named "std_attn" where, due to some bad coding, lots of duplicate rows have been created, and the table doesn't have any primary key. What is the best way to remove the duplicates?
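A common approach on SQL Server 2005 and later is to number the duplicates with ROW_NUMBER() inside a CTE and delete everything after the first copy; this is only a sketch, and col1, col2, col3 are placeholders for whatever columns define a duplicate in std_attn:

WITH dups AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY col1, col2, col3
                              ORDER BY col1) AS rn
    FROM dbo.std_attn
)
DELETE FROM dups
WHERE rn > 1;   -- keeps one copy of each duplicate group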
I have three FILESTREAM files (FS1 on the F drive, FS2 on the H drive, FS3 on the E drive) belonging to the same FILESTREAM filegroup of one particular database (DB), which is in the simple recovery model on SQL Server 2012.
FS1 contains a huge number of files, because of which the F drive is completely full.
So I am trying to move some of the extra files from one FILESTREAM container (FS1 on the F drive) to the other FILESTREAM containers (FS2 on the H drive and FS3 on the E drive) using the command:
DBCC SHRINKFILE('FS1', EMPTYFILE)
Then I take a full and a differential backup of the database, issue a CHECKPOINT, and try to delete the files that have already been duplicated from the FILESTREAM container FS1, to get some space back on the F drive, using the command:
/****** Object: StoredProcedure [dbo].[dbo.ServiceLog] Script Date: 07/18/2014 14:30:59 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROC [dbo].[ServiceLogPurge]
-- Purge records dbo.ServiceLog older than 3 months:
-- Purge records in small portions to avoid locking production tables
-- for a long time. The process takes longer, but can co-exist with
-- normal usage of the tables.
[Code] ...
*** Getting this error below when executing the code ***
Msg 102, Level 15, State 1, Procedure ServiceLogPurge, Line 45 Incorrect syntax near 'Failed:'.
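Since the procedure body itself is not shown, here is a minimal sketch of the batched-purge pattern the comments describe, assuming a LogDate column on dbo.ServiceLog (the column name is a guess). The "Incorrect syntax near 'Failed:'" message often points at a malformed string literal or a RAISERROR/PRINT call around the reported line, which is worth comparing against this shape:

WHILE 1 = 1
BEGIN
    -- delete in small batches so production tables are locked only briefly
    DELETE TOP (5000)
    FROM dbo.ServiceLog
    WHERE LogDate < DATEADD(MONTH, -3, GETDATE());

    IF @@ROWCOUNT = 0
        BREAK;   -- nothing older than 3 months is left
END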
Hello, I have a table T1 with fields ID, F1, F2, F3, F4, F5, F6….
I need to find whether there are duplicate rows based on the F1, F2, F3 columns. If there are, I need to set F5 = 'minimum' on the row where ID is MIN(ID), so the row with the smallest ID is marked as the minimum. How can I do this in a stored procedure?
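A possible approach, assuming SQL Server 2005 or later and the column names given above, is to flag the lowest ID in each group of rows that share F1, F2, F3; this is only a sketch:

UPDATE t
SET t.F5 = 'minimum'
FROM T1 AS t
JOIN (
    SELECT F1, F2, F3, MIN(ID) AS MinID, COUNT(*) AS Cnt
    FROM T1
    GROUP BY F1, F2, F3
) AS d
    ON d.MinID = t.ID
WHERE d.Cnt > 1;   -- only touch groups that actually contain duplicates

Wrapping this single statement in CREATE PROCEDURE is then straightforward.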
I have just started developing in SQL Express in the last 2 months, so I am still learning. The problem I'm having with my stored procedure is that I get duplicate rows in my results. The rows are duplicates in terms of the column 'Job No': when the query runs in Access, only one instance of each 'Job No' is returned, but when I recreate the query in SQL Server I get a number of rows back for the same 'Job No'. How would I go about getting just one instance of each 'Job No' back, with the column 'Days to Date' showing the total 'Days to Date' for each Job No? Please see the MS Access results if unsure of what I'm asking.
A copy of the stored procedure is below, with a sample of the output and the MS Access results at the very bottom.
ALTER PROCEDURE [dbo].[sl_DaysDonePerJob] AS
SELECT CASE WHEN [Job No] IS NULL THEN '' ELSE [Job No] END AS [Job No],
    SUM([Actual Days]) AS [Days to Date],
    CONVERT(nvarchar(10), MIN(SessionDate), 101) AS [Start Date],
CONVERT(nvarchar(10),MAX(SessionDate),101) AS [End Date],
MAX(CASE WHEN DATEPART(MM,SessionDate)=1 THEN 'Jan'
WHEN DATEPART(MM,SessionDate)=2 THEN 'Feb'
WHEN DATEPART(MM,SessionDate)=3 THEN 'Mar'
WHEN DATEPART(MM,SessionDate)=4 THEN 'Apr'
WHEN DATEPART(MM,SessionDate)=5 THEN 'May'
WHEN DATEPART(MM,SessionDate)=6 THEN 'Jun'
WHEN DATEPART(MM,SessionDate)=7 THEN 'Jul'
WHEN DATEPART(MM,SessionDate)=8 THEN 'Aug'
WHEN DATEPART(MM,SessionDate)=9 THEN 'Sep'
WHEN DATEPART(MM,SessionDate)=10 THEN 'Oct'
WHEN DATEPART(MM,SessionDate)=11 THEN 'Nov'
WHEN DATEPART(MM,SessionDate)=12 THEN 'Dec' END) AS 'End Month'
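The pasted procedure is cut off before its FROM and GROUP BY clauses. To get a single row per 'Job No' with the summed 'Days to Date', the aggregates above need to be grouped on the job column; a minimal sketch of the missing tail, with the source table name purely a placeholder, would be:

FROM dbo.JobSessions   -- placeholder: the real table holding [Job No], SessionDate and [Actual Days]
GROUP BY CASE WHEN [Job No] IS NULL THEN '' ELSE [Job No] END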
Hi, I've got a db table containing 5 columns (excluding id) consisting of:
1.) First half of a UK postcode
2.) Town name to which the postcode belongs
3.) Latitude of the postcode
4.) Longitude of the postcode
5.) Second part of the postcode

I want to select columns 1, 2, 3 and 4, but once only. There are often several entries where 1 and 2 are the same but 3 and 4 are different, i.e.

WA1  Bewsey and Whitecross  53.386492  -2.596847
WA1  Bewsey and Whitecross  53.388203  -2.590961
WA1  Bewsey and Whitecross  53.388875  -2.598504
WA1  Fairfield and Howley   53.388455  -2.581701
WA1  Fairfield and Howley   53.396117  -2.571789

My current query is:

SELECT DISTINCT Postcode, Town, latitude, longitude
FROM Postcode
WHERE Postcode.Postcode = 'wa1'
ORDER BY Postcode, Town

However, as latitude and longitude differ on each line, DISTINCT does not do what I'm looking for. Can anybody suggest a way of changing the query to just give the first instance of each Postcode/Town combo? I.e.

WA1  Bewsey and Whitecross  53.386492  -2.596847
WA1  Fairfield and Howley   53.388455  -2.581701

Many thanks!
Drew
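One way to get a single row per Postcode/Town combination, assuming "first instance" can be taken as the row with the lowest id in each group, is to pick one representative id per combo and join back to it; a sketch:

SELECT p.Postcode, p.Town, p.latitude, p.longitude
FROM Postcode AS p
JOIN (
    SELECT MIN(id) AS id            -- one representative row per Postcode/Town combo
    FROM Postcode
    WHERE Postcode = 'wa1'
    GROUP BY Postcode, Town
) AS firsts
    ON firsts.id = p.id
ORDER BY p.Postcode, p.Town;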
SELECT R.name, R.age, R.DOB,
    ISNULL(D.Doc1, 'NA') AS doc1,
    ISNULL(C.Doc2, 'NA') AS doc2
FROM REQ R
INNER JOIN RES S ON R.Request_Id = S.Request_Id
INNER JOIN RES1 D ON D.Response_Id = S.Response_Id
INNER JOIN REQ1 C ON C.Request_Id = R.Request_Id
SELECT * FROM RES1 WHERE Response_Id = 111   -- returns 3 rows
SELECT * FROM REQ1 WHERE Request_Id = 222    -- returns 2 rows
So the last two inner joins return 3 * 2 = 6 records, which is wrong here; I want to show the 3 records in the doc1 rows and the 2 records in the doc2 rows...
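One way to avoid the 3 x 2 cross product, assuming the doc1 and doc2 lists only need to be lined up side by side rather than matched on a real key, is to number each list with ROW_NUMBER() and join on that number; the ordering columns are placeholders and this is only a sketch of the idea:

WITH d1 AS (
    SELECT D.Doc1, ROW_NUMBER() OVER (ORDER BY D.Doc1) AS rn
    FROM RES1 D
    JOIN RES S ON D.Response_Id = S.Response_Id
    JOIN REQ R ON R.Request_Id = S.Request_Id
),
d2 AS (
    SELECT C.Doc2, ROW_NUMBER() OVER (ORDER BY C.Doc2) AS rn
    FROM REQ1 C
    JOIN REQ R ON C.Request_Id = R.Request_Id
)
SELECT ISNULL(d1.Doc1, 'NA') AS doc1,
       ISNULL(d2.Doc2, 'NA') AS doc2
FROM d1
FULL OUTER JOIN d2 ON d1.rn = d2.rn;   -- 3 rows pair with 2, leaving 'NA' in the shorter column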
Hi. As the title says, I am trying to figure out how to write a script to prevent duplicate rows before loading data from a couple of CSV files into the OLE DB database table. Another quick question: when I use Data Conversion to convert data from string to a datetime or decimal type, it always returns an error about potential data loss.
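If the CSV rows can be landed in a staging table first, one pattern is to bulk load into staging and then insert only rows that do not already exist in the destination; the table and column names below are placeholders, and the duplicate test should use whatever columns define a duplicate in your data:

INSERT INTO dbo.TargetTable (KeyCol, Col1, Col2)
SELECT DISTINCT s.KeyCol, s.Col1, s.Col2      -- DISTINCT also removes duplicates between the CSV files themselves
FROM dbo.StagingTable AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.TargetTable AS t
    WHERE t.KeyCol = s.KeyCol
);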
The code below shows 4 rows. The first two rows are almost identical, but both of them exist in the same table as different rows. Row number 1 is also related to row number 3, and row number 2 is also related to row number 4. The problem is that I have to use only one of them (row number 1 or 2) together with rows 3 and 4.
I thought of using GROUP BY RECEIPTJURNALMATCH.JURNALTRANSID, but I am getting an error. Thanks in advance, Aldo.
Code Snippet

SELECT RECEIPTJURNALMATCH.JURNALTRANSID AS 'R.JURNALTRANSID',
       RECEIPTJURNALMATCH.MATCHNUM AS 'R.MATCHNUM',
       JURNALTRANSMOVES.ACCOUNTKEY AS 'J.ACCOUNTKEY',
       JURNALTRANSMOVES.SUF AS 'J.TOTAL',
       STOCK.REMARKS AS 'S.REMARKS'
FROM RECEIPTJURNALMATCH
INNER JOIN JURNALTRANSMOVES ON RECEIPTJURNALMATCH.JURNALTRANSID = JURNALTRANSMOVES.ID
LEFT OUTER JOIN STOCK ON RECEIPTJURNALMATCH.STOCKID = STOCK.ID
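GROUP BY fails here because non-aggregated columns such as REMARKS are in the select list. A hedged alternative, assuming SQL Server 2005 or later and that any one of the near-duplicate rows is acceptable, is to pick one row per group with ROW_NUMBER(); the PARTITION BY columns below are a guess at what makes rows 1 and 2 "the same":

WITH matches AS (
    SELECT RECEIPTJURNALMATCH.JURNALTRANSID AS JURNALTRANSID,
           RECEIPTJURNALMATCH.MATCHNUM      AS MATCHNUM,
           JURNALTRANSMOVES.ACCOUNTKEY      AS ACCOUNTKEY,
           JURNALTRANSMOVES.SUF             AS TOTAL,
           STOCK.REMARKS                    AS REMARKS,
           ROW_NUMBER() OVER (PARTITION BY RECEIPTJURNALMATCH.JURNALTRANSID,
                                           JURNALTRANSMOVES.ACCOUNTKEY
                              ORDER BY RECEIPTJURNALMATCH.MATCHNUM) AS rn
    FROM RECEIPTJURNALMATCH
    INNER JOIN JURNALTRANSMOVES ON RECEIPTJURNALMATCH.JURNALTRANSID = JURNALTRANSMOVES.ID
    LEFT OUTER JOIN STOCK ON RECEIPTJURNALMATCH.STOCKID = STOCK.ID
)
SELECT JURNALTRANSID, MATCHNUM, ACCOUNTKEY, TOTAL, REMARKS
FROM matches
WHERE rn = 1;   -- keep one of the near-duplicate rows per group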
I have a table with approx 5 million rows and 36 columns. It takes approx 4 minutes to delete 1 row. The table has 3 indexes in addition to its primary key and has twelve foreign key constraints. We are still using SQL Server 7. There is a backup run every night as part of the nightly maintenance that reorganizes/reindexes and checks the database integrity. Any thoughts?
I have a CSV file that I need to import daily into a SQL Server 2005 table. Much of the table contents could just be overwritten with the new CSV file; however, there is a set of rows within the table that need to be appended to, rather than overwritten. There is no primary key in the CSV file that can be used. I'm not sure this is the best approach, but what I have been trying to do is append the entire CSV file to the existing table, and then go back and delete the duplicates. When I run the DELETE, it does delete the majority of the records, but leaves a couple hundred behind. The number left behind varies with each run; I can't seem to identify a pattern here. Running the DELETE a second time does clean up the rows left behind in the first execution, and gives the result I want. Any thoughts as to why this needs to be run twice? Or is a better approach available? Here is my code:

SELECT [Pkg ID], [Elm (s)], [Type Name (s)], [End Exec Date], [End Exec Time], dupcount = COUNT(*)
INTO temppkgactions
FROM pkgactions
GROUP BY [Pkg ID], [Elm (s)], [Type Name (s)], [End Exec Date], [End Exec Time]
HAVING COUNT(*) > 1
DELETE TOP (SELECT COUNT(*) - 1 FROM dbo.temppkgactions WHERE dupcount > 1)
FROM dbo.pkgactions

DROP TABLE temppkgactions
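For what it's worth, DELETE TOP (...) without an ORDER BY removes that many arbitrary rows from the whole table rather than one per duplicate group, and the subquery counts duplicate groups rather than extra rows, which may explain why the delete has to be run twice. A single-pass alternative, assuming SQL Server 2005 and that the five listed columns fully define a duplicate, is a sketch like:

WITH numbered AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY [Pkg ID], [Elm (s)], [Type Name (s)],
                                           [End Exec Date], [End Exec Time]
                              ORDER BY [Pkg ID]) AS rn
    FROM dbo.pkgactions
)
DELETE FROM numbered
WHERE rn > 1;   -- keep one row from each duplicate group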
What I'm trying to do is delete a user and all their related information within the other tables. I'm not wanting to delete the tables, just the rows for that user and their related information. So my primary key is UserID within the table [alumni], and my three foreign keys are CommentID, PhotoID, and AlbumID within the tables [comments], [photos], and [albums]. Here is some of the code that I have:

<asp:SqlDataSource ID="SqlDataSource2" runat="server"
    ConnectionString="<%$ ConnectionStrings:SoderquistString %>"
    DeleteCommand="DELETE FROM [alumni] WHERE [UserID] = @UserID"
    SelectCommand="SELECT [UserID], [UserName], [FirstName], [LastName], [State] FROM [alumni] WHERE ([State] = @State)">
    <DeleteParameters>
        <asp:Parameter Name="UserID" Type="Int32" />
    </DeleteParameters>
    <SelectParameters>
        <asp:ControlParameter ControlID="DropDownList1" Name="state" PropertyName="SelectedValue" Type="String" />
    </SelectParameters>
</asp:SqlDataSource>

The users are set up in GridView form. Is there some type of DELETE command that I need to be writing that is different from the one above? I have tried adding onto the DELETE statement as follows:

DeleteCommand="DELETE FROM [alumni] WHERE [UserID] = @UserID DELETE FROM [photo] WHERE [UserID] = @UserID; DELETE FROM [album] WHERE [UserID] = @UserID; DELETE FROM [comment] WHERE [UserID] = @UserID;

...but that doesn't work and doesn't look right. I would really appreciate anyone's suggestions or help. Thank you!
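Assuming there are no cascading foreign keys set up, one approach is a DeleteCommand whose SQL removes the child rows before the parent row, with each statement terminated by a semicolon; the table names below follow the question (whether they are singular or plural has to match the actual schema), so treat this as a sketch:

DELETE FROM [comments] WHERE [UserID] = @UserID;
DELETE FROM [photos]   WHERE [UserID] = @UserID;
DELETE FROM [albums]   WHERE [UserID] = @UserID;
DELETE FROM [alumni]   WHERE [UserID] = @UserID;   -- parent row goes last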
I have a problem deleting duplicate rows. I have an identity column in my table; if I try to use a correlated subquery with the DELETE command, it gives an error.
The other problem I have is that I have a date column in my table and I update that column with the current date and time. If I use a query to fetch the records for a particular day, it does not return any rows:
SELECT * FROM rates WHERE ch_date >= '02/11/99' AND ch_date <= '02/11/99'
If I use CONVERT, there are other problems. Is there any way to force the date comparison to be done excluding the time?
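Because ch_date also stores a time of day, rows stamped during the day fall outside an equality-style test against midnight. A common workaround (a sketch, assuming the literal means 11 Feb 1999) is a half-open range covering the whole day:

SELECT *
FROM rates
WHERE ch_date >= '19990211'    -- midnight at the start of the day
  AND ch_date <  '19990212'    -- up to, but not including, the next day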
This is an imaginary problem that came up while discussing ROWID in Oracle.
Consider a table without a primary key, unique key, or unique index. A row has been inserted into the table many times. I want to delete all but one of the duplicated rows. With any WHERE clause, all of the (duplicated) rows will be deleted. In Oracle I can achieve this using ROWID as follows:
DELETE FROM Table_name
WHERE < all column values >
  AND ROWID <> (SELECT MAX(ROWID) FROM Table_name WHERE < all column values >)
How can this be achieved in MS SQL Server 6.5 ?
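SQL Server 6.5 has no ROWID, but a classic workaround for exact-copy duplicates is SET ROWCOUNT, which limits how many rows a DELETE touches; a sketch, with < all column values > standing in, as above, for the literal values of the duplicated row:

-- suppose the row exists 5 times; remove 4 of the 5 copies
SET ROWCOUNT 4
DELETE FROM Table_name WHERE < all column values >
SET ROWCOUNT 0   -- reset so later statements are not limited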
According to Dr. Codd's golden rules for an RDBMS, one rule is that one should be able to reach each data value in the database by using a combination of table name, row identification value, and column name.
Does MS SQL Server 6.5 satisfy this requirement ?
Also, how many of Dr. Codd's 13 golden rules for an RDBMS does MS SQL Server 6.5 satisfy, and which ones does it not?
I have two tables. Table 1 contains distinct IDs (3, 4, 5). Table 2 contains multiple IDs (3, 3, 3, 3, 3, 4, 4, 4, 5, 5, 5). I want to be able to use a join: if an ID is found in Table 2, remove all entries of it. Any ideas? Thanks...
DELETE b
FROM table1 a
JOIN table2 b ON a.id = b.id   -- removes every table2 row whose id exists in table1
In my database, I have a table "tbl_c_extract" that consists of 4 columns that look like the following. I'm looking at a daily batch of around 4,000 records, of which 150 are likely to be duplicates.
In the example above, I need to remove 2 of the entries, leaving only the one with the maximum leave date. In this case, those without a leave date have the 2099 entry.
Using a CTE works exactly as I want it to; however, SQL Server Agent doesn't seem to like the use of the CTE.
Code:

-- the leading semicolon terminates any preceding statement; WITH must start a new statement
;WITH CTE (Proprietary_ID, LeaveDate, RN) AS
(
    SELECT Proprietary_ID,
           LeaveDate,
           -- the ORDER BY decides which row in each Proprietary_ID group gets RN = 1 and is kept
           ROW_NUMBER() OVER (PARTITION BY Proprietary_ID
                              ORDER BY Proprietary_ID, LeaveDate) AS RN
    FROM tbl_c_extract
)
DELETE FROM CTE
WHERE RN > 1   -- everything after the first row in each group is removed
int1 must have the value 1, int2 must have the value 1, int3 must have the value 0, and now we get to the difficult part: the first character in the field [Cost No] has to be different from the letter 'B'. [Cost No] has the datatype varchar(20).
I expected the code to look something like this:

DELETE Table1
FROM Table 1
WHERE ([Int1] = 1) AND ([Int2] = 1) AND ([Int3] = 0) AND ((LEFT([Cost No]), 1) <> 'B')
But I get this error message: The left function requires 2 arguments
What am I doing wrong and what should the right code look like?
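The closing parenthesis after [Cost No] ends the LEFT call too early, so LEFT is being called with a single argument, which is exactly what the error says. Moving the length inside the call gives, as a sketch:

DELETE Table1
FROM Table1
WHERE [Int1] = 1
  AND [Int2] = 1
  AND [Int3] = 0
  AND LEFT([Cost No], 1) <> 'B'   -- first character must not be 'B'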
I need to delete the duplicate rows from a table. How can I do that in SQL Server 7.0? If possible, please write an example, as that would be most useful to me.
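SQL Server 7.0 has no ROW_NUMBER(), so when the duplicates are exact copies one workable approach is to park one instance of each row in a temporary table and reload; a sketch with a placeholder table name:

SELECT DISTINCT *
INTO #dedup
FROM dbo.MyTable          -- placeholder: the table containing the duplicates

BEGIN TRAN
    DELETE FROM dbo.MyTable
    INSERT INTO dbo.MyTable SELECT * FROM #dedup
COMMIT TRAN

DROP TABLE #dedup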
I have 6 read-only tables; every night all the data gets dumped and a new, updated copy of the data is copied over. The tables range from 500,000 rows to almost 4 million. I have indexes set up on the fields I use to query against. My questions are:
1. Since I dump all the data every night and replace it, do I need to rebuild the indexes every night, or is that done after the data is re-entered?
2. I want to use a fill factor on the tables since they are read-only, but will dumping the data every night and re-inputting it have adverse effects with a fill factor?
3. Should I be shrinking the database or defragging it every night because of my data dumps and reloads?
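On the fill factor question: the fill factor is applied only when an index is created or rebuilt, so it pairs naturally with a rebuild after the nightly reload. Assuming SQL Server 2005 or later (on older versions DBCC DBREINDEX plays the same role), a post-load step might look like:

-- after the nightly reload, rebuild the indexes and apply the desired fill factor
ALTER INDEX ALL ON dbo.BigReadOnlyTable    -- placeholder table name
REBUILD WITH (FILLFACTOR = 100);           -- 100 suits read-only tables: no free space is needed for future inserts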
How can I quickly delete thousands of rows in a table (SQL 2000) according to a query, without blowing up the log file? For instance, executing the query
Delete from transactions WHERE transactiondatestamp < DATEADD (m,-4,GETDATE())
increases my log file to almost 6 GB before the job is done and the normal size is re-obtained. In addition, it took a long time to get the job done. With the command TRUNCATE TABLE I cannot use a WHERE clause, unfortunately, although that would be faster.
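One common way to keep both the log growth and the lock durations small on SQL 2000 is to delete in batches, so each delete is its own short transaction; a sketch:

SET ROWCOUNT 10000    -- batch size: each DELETE touches at most 10,000 rows

WHILE 1 = 1
BEGIN
    DELETE FROM transactions
    WHERE transactiondatestamp < DATEADD(m, -4, GETDATE())

    IF @@ROWCOUNT = 0 BREAK   -- nothing old left to delete
END

SET ROWCOUNT 0        -- reset so later statements are unaffected

In the full recovery model the log space is only reused after log backups between batches; in the simple recovery model checkpoints truncate the log, so the file stays small.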
I have an SQL table [Keys] that has various rows such as:

[ID]  [Name]  [Path]  [Customer]
1     Key1    Key1    InHouse
2     Key2    Key2    External
3     Key1    Key1    InHouse
4     Key1    Key1    InHouse
5     Key1    Key1    InHouse

Obviously IDs 1, 3, 4, 5 are all exactly the same and I would like to be left with only:

[ID]  [Name]  [Path]  [Customer]
1     Key1    Key1    InHouse
2     Key2    Key2    External
I cannot create a new table/database or change the unique identifier (which is currently ID). I simply need an SQL script I can run to clean out the duplicates (I know how they got there and the issue has been fixed, but the database is still currently invalid due to all these duplicate entries).
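Since each row has a unique ID, one in-place clean-up, as a sketch, is to keep the lowest ID in each Name/Path/Customer combination and delete the rest:

DELETE k
FROM [Keys] AS k
WHERE k.[ID] NOT IN (
    SELECT MIN([ID])               -- the survivor of each duplicate group
    FROM [Keys]
    GROUP BY [Name], [Path], [Customer]
)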
I'm having problems deleting rows in a referenced table. Are there any tools that show which tables to delete from first, before deleting the rows in the table that contains the primary key?
I have a lot of tables, let's say over 300, so it's hard for me to guess which comes first. What should I keep in mind when deleting rows with referential integrity in place?
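One way to list the tables that reference the one you want to delete from, assuming SQL Server 2005 or later, is to query the catalog views; rows in the referencing tables must be removed first (or the constraints defined with ON DELETE CASCADE) before the parent row can go:

SELECT fk.name                               AS foreign_key,
       OBJECT_NAME(fk.parent_object_id)      AS referencing_table,   -- delete from these first
       OBJECT_NAME(fk.referenced_object_id)  AS referenced_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.MyParentTable');      -- placeholder table name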
I made an application that inserts some data into an MS SQL 2005 Express Edition database. One of the columns in my database stores text. The content of that field must, beyond the existing text, have the current ID appended to its text string. Practically, it means that if on row nr 15 I store the text value "15text", on the next row I will store "16text". I figured out that I can get the value of the max ID by creating the following stored procedure.
Once I get the maxID value, I simply concatenate the max ID and the string like:
string.Concat(max_id.ToString(), file_name);
where "max_id is integer return value from stored procedure and "file_name" is a string to rename like "max_id+file_name"
Two problems occur!!!
Problem nr 1.
Since I insert 10 new rows each time, the values 0-9 are appended to the text, like "(0-9)text".
Problem nr 2.
If I delete some rows, the ID does not get updated. It means that after deleting all rows from the table, the next inserted item gets the last ID that existed before the delete, plus 1. A newly inserted item should get the value 1, since the table is empty after deleting all rows from it!
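The second behaviour is what an IDENTITY column does by design: deleting rows never rewinds the counter. For the first, reading MAX(ID) before the insert returns the previous row's ID rather than the one the new row will actually receive; SCOPE_IDENTITY() after the insert gives the real value. A sketch of both, with placeholder table and column names:

-- get the ID that was actually assigned to the row just inserted (in this session and scope)
INSERT INTO dbo.Files (FileText) VALUES ('text');     -- placeholder table/column
SELECT SCOPE_IDENTITY() AS NewId;

-- after emptying the table, restart the IDENTITY counter so the next row gets 1
DELETE FROM dbo.Files;
DBCC CHECKIDENT ('dbo.Files', RESEED, 0);             -- after a DELETE, the next insert uses 0 + 1 = 1

The new ID can then be appended to the text with an UPDATE of that same row, rather than being guessed from MAX(ID) beforehand.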