I used to work in a Sybase database environment. When I had to insert or update records in the database, I always used "insert ... on existing update"; that way you didn't have to check whether a record already existed (avoiding errors), and you were always sure that after running the scripts the latest version was in the database.
Now I'm looking for the same functionality in MS SQL Server. I've asked a few people, but nobody knows about such an option.
Does anybody here know the SQL Server counterpart of "insert ... on existing skip/update"? If this doesn't exist, it's a minus for MS ;).
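One answer, as a hedged sketch: on SQL Server 2008 and later the closest counterpart is the MERGE statement; on earlier versions the usual pattern is IF EXISTS ... UPDATE ELSE INSERT. The table, columns, and values below are made up for illustration.

-- Upsert with MERGE (SQL Server 2008+); dbo.Customer and its columns are hypothetical
DECLARE @CustomerID int = 42, @Name varchar(50) = 'Contoso';

MERGE dbo.Customer AS target
USING (SELECT @CustomerID AS CustomerID, @Name AS Name) AS source
    ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
    UPDATE SET target.Name = source.Name
WHEN NOT MATCHED THEN
    INSERT (CustomerID, Name) VALUES (source.CustomerID, source.Name);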
The objective is to identify orders where an order fee has been applied incorrectly. I have multiple orders per customer; my table contains an orderID and a customerID. Currently, if the customer places additional orders before the previous orders have been closed/cancelled, then additional fees are being applied.
Let's say I'm comparing order #1 to order #2. I need to identify the rows where the following is true:
The CustID is the same.
Order #2 has a more recent order date.
Order #2 has a FeeDate before the CancelledDate of Order #1 (or Order #1 has no cancellation date).
So in the table, OrderID 2835692 for CustID 24643 has a valid order fee, but all the subsequently placed orders have fees which were applied before the first order was cancelled, so I want to update the FeeInvalid column with a 'Y'. The first fee will always be valid.
I think I understand why the code I am trying doesn't achieve the result I want but I can't figure out how to write it correctly. Below is one example of code I've tried and also code to create the table and insert some test data.
update t1
SET FeeInvalid = 'Y'
FROM MockData t1
Join MockData t2
    on t1.CustID = t2.CustID
WHERE t1.CustID = t2.CustID
  AND t2.OrderDate > t1.OrderDate
  AND t2.FeeDate > t1.CancelledDate

CREATE TABLE [dbo].[MockData](
    [OrderID] [float] NULL,
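A hedged sketch of the fix: the attempted query flags rows where the fee came after the cancellation, which is backwards from the stated requirement, and it never handles a NULL CancelledDate. Assuming the column names above, an update targeting the later order (t2) might look like this:

UPDATE t2
SET t2.FeeInvalid = 'Y'
FROM MockData t2
JOIN MockData t1
    ON t1.CustID = t2.CustID
   AND t2.OrderDate > t1.OrderDate
WHERE (t2.FeeDate < t1.CancelledDate OR t1.CancelledDate IS NULL);

Note the target of the SET is t2 (the later order), since only the subsequently placed orders' fees are invalid; the first order never has an earlier sibling, so its fee is left alone.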
If I have a table with one or more nullable fields, and I want to make sure that when an INSERT or UPDATE occurs and one or more of these fields is left NULL, either explicitly or implicitly, is there a way I can set these to non-null values without interfering with the INSERT or UPDATE as far as the other fields in the table are concerned?
EXAMPLE:
CREATE TABLE dbo.MYTABLE(
    ID NUMERIC(18,0) IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50) NULL,
    LastName VARCHAR(50) NULL,
[Code] ....
If an INSERT looks like any of the following, what can I do to change the NULL being assigned to DateAdded to a real date, preferably the value of GETDATE() at the time of the insert? I've heard of INSTEAD OF triggers, but I'm not trying to override the entire INSERT or UPDATE, just the one (maybe two) fields that are being left as NULL or explicitly set to NULL. The same would apply for any UPDATE where DateModified is not specified or explicitly set to NULL: I would want to change it so that DateModified is not null on any UPDATE.
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) VALUES('John','Smith',NULL)
INSERT INTO dbo.MYTABLE( FirstName, LastName) VALUES('John','Smith')
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) SELECT FirstName, LastName, NULL FROM MYOTHERTABLE
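One possibility, sketched under the assumption that MYTABLE also has DateAdded and DateModified datetime columns (they were cut off in the CREATE TABLE above): an AFTER trigger that only touches the two audit columns and leaves the rest of the statement alone, so it works for all three INSERT shapes shown.

-- Hypothetical AFTER trigger; assumes DateAdded/DateModified exist on dbo.MYTABLE
CREATE TRIGGER dbo.trMYTABLE_AuditDates
ON dbo.MYTABLE
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Replace an implicit or explicit NULL in DateAdded, and stamp DateModified
    -- (this also stamps DateModified on inserts; drop that if unwanted).
    -- Safe from re-firing itself as long as RECURSIVE_TRIGGERS is OFF (the default).
    UPDATE t
    SET DateAdded    = COALESCE(t.DateAdded, GETDATE()),
        DateModified = GETDATE()
    FROM dbo.MYTABLE AS t
    JOIN inserted AS i ON i.ID = t.ID;
END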
I have one table Phone and a table SmsMessage that are linked by Cellnumber. Cellnumber is the primary key in Phone.
For some reason the Cellnumbers in the Phone table are stored with extra trailing spaces, like '+27000000000 ', but in the SmsMessage table the same value is stored as '+27000000000'. However, when I try to perform an update to trim the Cellnumbers, I get the message 'Cannot modify values Cellnumber in Phone because there are dependent values in SmsMessage'.
The fact is that there are no dependent values, but to MSSQL '+27000000000' and '+27000000000 ' are the same! Note that LEN(Cellnumber) gives me the length of the string without the trailing spaces as well.
Even if I remove all relationships from Phone, I still get the same error. Are there more places in MSSQL where relationships are stored besides the Diagrams?
Or is there a command that tells MSSQL to ignore all relationships for the next query?
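A diagnostic aside (my addition, not from the post): SQL Server follows the ANSI rule that trailing spaces are ignored in string equality comparisons, which is why the foreign key sees the padded and unpadded values as dependent, and LEN ignores trailing spaces by design, so it cannot reveal the padding. DATALENGTH can; a quick check, assuming the column names above:

-- LEN strips trailing spaces, DATALENGTH does not
SELECT Cellnumber,
       LEN(Cellnumber)        AS LenNoTrailing,
       DATALENGTH(Cellnumber) AS BytesStored
FROM Phone;
-- for a varchar column, rows where BytesStored > LenNoTrailing have trailing spaces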
I have one table of new records (tableA) that may already exist in tableB. I want to insert these records into tableB if they don't already exist, or update any existing ones with new data if they do already exist. A column (Action) in tableA already tells me whether this is an INSERT, UPDATE, or DELETE. I'm able to derive that I can do an insert with

select * into tableB from tableA where Action = 'INSERT'

....and I think I can handle the delete. But I'm stuck on the update. How do I do the update? An ordinary UPDATE statement just won't do unless I use a cursor to cycle through the recordset. I want to avoid a cursor.
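A hedged, set-based sketch of the update, no cursor needed; the key and data column names are placeholders since the post doesn't give them:

-- Set-based UPDATE via a join (T-SQL's UPDATE ... FROM); key/data columns are hypothetical
UPDATE b
SET b.Col1 = a.Col1,
    b.Col2 = a.Col2
FROM tableB AS b
JOIN tableA AS a
    ON a.KeyCol = b.KeyCol
WHERE a.Action = 'UPDATE';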
I need to create an SSIS package that updates columns in a table from columns in another database where the keys match. What's the best way to do this?
Updating existing rows in a SQL Server 2005 destination table
--I have a CSV file and every row needs to perform an update on a production table
The most straightforward approach is to use an OLE DB Command Data Flow transformation object that sends one SQL statement per row coming in. How do I configure this object?
That might be okay if you've only got a small number of updates; however, if you've got a large set, I prefer landing this set in a temporary table and then doing set-based updates in a following Execute SQL Task. Can you please tell me how I can set up the Execute SQL Task to do set-based updates in the Control Flow?
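Two hedged pointers, since the post asks about both routes. For the OLE DB Command, the UPDATE statement goes in its SqlCommand property with ? parameter markers, and the incoming columns are mapped to those parameters on the Column Mappings tab. For the Execute SQL Task, have the Data Flow land the CSV rows in a staging table first, then put a single set-based statement like the sketch below in the task's SQLStatement property; the table and column names here are assumptions.

-- Statement for the Execute SQL Task; dbo.Staging_CSV and the columns are assumptions
UPDATE p
SET p.Amount = s.Amount,
    p.Status = s.Status
FROM dbo.ProductionTable AS p
JOIN dbo.Staging_CSV AS s
    ON s.OrderID = p.OrderID;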
I have 50 MSDE SQL2K servers; each server has around 10 customer databases. There are 5 stored procedures that need to be updated on 50 * 10 = 500 databases. These 5 stored procedures each have many 'GO' keywords, and 4 of the 5 are more than 8000 characters long. What might be the best way to loop through and execute them automatically, instead of opening an isql/w connection to each database and running the script?
I had bumped into the 'GO' keyword error and the max varchar length of 8000 error. Thanks for the help. David
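One approach, as a sketch: GO is a batch separator that only client tools understand, and EXEC of a varchar(8000) string can hold neither GO nor the full script, so keep the procedures in a .sql file and shell out to osql per database. Everything here (the script path, the database name pattern, xp_cmdshell being enabled) is an assumption to adapt.

-- Loop over the customer databases and run the script file with osql,
-- which understands GO and has no 8000-character limit (SQL 2000 era syntax)
DECLARE @db sysname, @cmd varchar(500)
DECLARE dbs CURSOR FOR
    SELECT name FROM master..sysdatabases WHERE name LIKE 'Cust%'  -- hypothetical pattern
OPEN dbs
FETCH NEXT FROM dbs INTO @db
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = 'osql -E -S (local) -d ' + @db + ' -i C:\scripts\five_procs.sql'
    EXEC master..xp_cmdshell @cmd
    FETCH NEXT FROM dbs INTO @db
END
CLOSE dbs
DEALLOCATE dbs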
I'm building a package to import data from a flat file into a Customer table, and I have set up a Lookup to check if that customer already exists in this table, and if so, perform an Update command instead of the bulk load. I don't expect many updates, if any, which is why I just used the OLE DB Command instead of using a staging table.
I have a bit of a problem executing this within a transaction while having the table lock option set on the SQL Server Destination. Is there any way I can get transaction support for this Data Flow working, as I want to be able to roll back a complete file/import if possible?
How do I insert data into an existing temporary table? Note: I’m primarily a .NET programmer who has to do T-SQL to grab data from time to time.
What I am trying to do is this: 1) Put the scores for all the people who have completed a questionnaire into a temporary table called #GroupConfidence. 2) Add a row at the end that gives an average for each score (i.e. the last row is an average of the column above).
I need my SP to give me a DataSet that I can throw straight to my .NET reporting engine (I don’t want to do any number crunching inside .NET) - that's why I want to add on the 'average' row at the end.
If I do this (below) the temporary table (#GroupConfidence) gets created and the values inserted.
-- Insert the results into the #GroupConfidence table
SELECT RTRIM(UC.FirstName + ' ' + UC.LastName) AS 'FullName',
       RP.SubmitID, RP.GL_Score, RP.GP_Score, RP.GPH_Score,
       RP.DL_Score, RP.MP_Score, RP.Role_MI_Score,
       RP.Role_ASXRE_Score, RP.Role_APRA_Score,
       RP.Overall_Score AS 'AllCategories'
INTO #GroupConfidence
FROM RodResultPercentages RP
JOIN #UsersCompleted UC ON UC.SubmitID = RP.SubmitID
My problem is that #GroupConfidence already exists so in fact I have this code below:
CREATE TABLE #GroupConfidence (
    [FullName] [varchar] (200) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [SubmitID] [int] NOT NULL,
    [GL_Score] [decimal](19, 10) NOT NULL,
    [GP_Score] [decimal](19, 10) NOT NULL,
    [GPH_Score] [decimal](19, 10) NOT NULL,
    [DL_Score] [decimal](19, 10) NOT NULL,
    [MP_Score] [decimal](19, 10) NOT NULL,
    [Role_MI_Score] [decimal](19, 10) NOT NULL,
    [Role_ASXRE_Score] [decimal](19, 10) NOT NULL,
    [Role_APRA_Score] [decimal](19, 10) NOT NULL,
    [AllCategories] [decimal](19, 10) NOT NULL
)
-- Insert the results into the #GroupConfidence table
SELECT RTRIM(UC.FirstName + ' ' + UC.LastName) AS 'FullName',
       RP.SubmitID, RP.GL_Score, RP.GP_Score, RP.GPH_Score,
       RP.DL_Score, RP.MP_Score, RP.Role_MI_Score,
       RP.Role_ASXRE_Score, RP.Role_APRA_Score,
       RP.Overall_Score AS 'AllCategories'
INTO #GroupConfidence
FROM RodResultPercentages RP
JOIN #UsersCompleted UC ON UC.SubmitID = RP.SubmitID
So I get this error: Server: Msg 2714, Level 16, State 1, Line 109 There is already an object named '#GroupConfidence' in the database.
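The fix, as a sketch: SELECT ... INTO always tries to create its target table, so once the temp table has been created explicitly, switch to INSERT ... SELECT, which loads an existing table instead:

-- Insert into the existing temp table rather than re-creating it
INSERT INTO #GroupConfidence
    (FullName, SubmitID, GL_Score, GP_Score, GPH_Score, DL_Score,
     MP_Score, Role_MI_Score, Role_ASXRE_Score, Role_APRA_Score, AllCategories)
SELECT RTRIM(UC.FirstName + ' ' + UC.LastName),
       RP.SubmitID, RP.GL_Score, RP.GP_Score, RP.GPH_Score,
       RP.DL_Score, RP.MP_Score, RP.Role_MI_Score,
       RP.Role_ASXRE_Score, RP.Role_APRA_Score, RP.Overall_Score
FROM RodResultPercentages RP
JOIN #UsersCompleted UC ON UC.SubmitID = RP.SubmitID

The 'average' row appended afterwards can then be a second INSERT ... SELECT over the same temp table using AVG() on each score column.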
When inserting records into a table created through an RDA Pull(), many users experience duplicate key violations. One reason for this is Identity columns: SQL Server CE RDA does not set the seed on Identity columns when a table is pulled.
Source: Microsoft SQL Server 2000 Windows CE Edition
Native Error: 25016
HR: DB_E_INTEGRITYVIOLATION
Description: Value violated the integrity constraints for a column or table.
Interface defining error: IID_IRowsetChange
Param = 0
Param = 0
Param = 0
Param =
Param =
Param =
This error can be returned when the user attempts to insert a row with an automatically incremented Identity column. With RDA, this usually occurs when a user pulls rows from the server and attempts to insert new rows before setting the seed and increment values for the Identity column. By default, the seed and increment values are both 1.
To correct this error, set the seed to the next highest number after the table is pulled, before allowing users to enter data.
What do they mean by setting the seed to the next highest number? Is this the seed of the GUID row in the pulled table? How do you correctly do this with the ALTER TABLE statement? The database that I am pulling has 21 tables, so it would be a pain to have to do this for each one. Does anyone have any other ideas as to why this won't work properly? If I clear out the data from the tables on the publishing server, I can add data to the pull as long as I want, until I do another pull later. After I do that, I keep getting the above issue.
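For what it's worth, a sketch of the pattern the documentation is describing: the seed belongs to the Identity column of each pulled table, and on SQL Server CE it is set with an ALTER TABLE ... IDENTITY statement run after each pull. The table, column, and seed value below are illustrative; the application would read MAX() of the identity column first and plug in that value plus one.

-- SQL Server CE: reseed the identity column after the pull (names/values hypothetical)
-- e.g. if SELECT MAX(OrderID) FROM Orders returned 1000:
ALTER TABLE Orders ALTER COLUMN OrderID IDENTITY (1001, 1)

With 21 tables this would indeed mean one such statement per table with an identity column, generated in a loop by the application after the pull completes.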
I need to run a query and insert the count results, together with the name of the counted table, into an existing table.
I have 20 jobs that import records from text files into separate tables. When that completes, I want the count to be sent to another table along with the name of the table the count was run on. Any ideas? I can't get INSERT ... SELECT to work.
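A hedged sketch of the INSERT ... SELECT form (note the order: INSERT first, then SELECT); the log table and import table names are made up:

-- One statement per imported table, e.g. as the last step of each job
INSERT INTO dbo.ImportLog (TableName, RecordCount, LoadedAt)
SELECT 'CustomerImport', COUNT(*), GETDATE()
FROM dbo.CustomerImport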
I have a field where all of the data is 5 characters in length. The last character denotes a status and will always be an F, H, or T. I want to add a new field (which I will do manually) and populate the new field with the last character from the "old" field. Once that is complete, I want to eliminate that 5th character from the old field.
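A sketch of the two updates, assuming a table MyTable with the old column OldCode and the manually added column StatusCode char(1); all names are placeholders:

-- 1) copy the status character into the new field
UPDATE MyTable SET StatusCode = RIGHT(OldCode, 1)
-- 2) then drop the 5th character from the old field
UPDATE MyTable SET OldCode = LEFT(OldCode, 4)

One caveat: if OldCode is CHAR(5), the shortened value gets space-padded back to 5 characters, so the column may also need to become VARCHAR(5) or CHAR(4) to truly lose the fifth character.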
I am building a data warehouse. I have many columns I want to populate in a fact table using Integration Services. Sample fields are: companyID, companyName, companyNumberOfAccounts, companyNumberOfUsers.
In the Integration Services package, I would like to keep this fact table in place and populated with data, and to build Integration Services packages that update each existing row's specific columns (e.g. companyNumberOfAccounts) one by one. For example, I have a source of data that has companyID and companyNumberOfAccounts data.
Is it possible to use the SQL Server Destination or OLE DB Destination Integration Services elements to import companyID and companyNumberOfAccounts data so that existing records just have their companyNumberOfAccounts column updated?
We are running SQL 7.0. I execute a stored procedure on Server A that inserts data into a table on Server B. I use a linked server to connect from Server A to Server B. When I execute the SP, I get the message 'The command(s) completed successfully'. When I select from the table that was inserted into, I don't see the data that was inserted. Can we update or insert into a table using a linked server?
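Inserts through a linked server do work, using four-part names; a sketch with placeholder server/database/table names, which can also double-check where the rows actually land:

-- Insert via four-part name from Server A (names hypothetical)
INSERT INTO ServerB.SalesDb.dbo.Orders (OrderID, Amount)
VALUES (1, 100.00)

-- Verify through the same linked path you inserted through
SELECT COUNT(*) FROM ServerB.SalesDb.dbo.Orders

If the SP reports success but the rows never show up, it's worth confirming the procedure isn't inserting into a similarly named local table instead of the four-part name.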
I have an Excel file that contains column A with names of components and products, followed by column B, which has each respective quantity on hand. I want to import that data into our website's SQL database, which has a products table with a column, Pf_ID, that has only product names (not component names) and In_Stock, which contains outdated information that I want updated from column B of the Excel file.
I think I've figured out how to use DTS and update the two fields, but I'm afraid that when everything runs, new entries will be created with component information. Is it possible to specify that only rows where Pf_ID matches some row in column A get updated, using that same row's column B as the new In_Stock value? I may have made this more confusing than it needs to be, but I don't have much experience with EM or Excel.
I'm also considering trying to write a macro that will match Pf_IDs in an exported excel file of the products table and take rows out of the excel file with current quantity information putting them in a new excel file to import into the website's database.
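A sketch of the DTS-plus-staging approach, so only matching rows are touched and no new entries get created; the staging table name and its columns are assumptions:

-- After DTS loads the Excel sheet into dbo.StockStaging (ItemName, Quantity):
UPDATE p
SET p.In_Stock = s.Quantity
FROM Products AS p
JOIN dbo.StockStaging AS s
    ON s.ItemName = p.Pf_ID
-- Only rows whose Pf_ID matches an Excel row are updated; component rows are simply ignored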
I am trying my first bulk update to an existing SQL table from a CSV text file. The text file's naming is exactly the same as the SQL table's, with the same attributes.
The statement:

BULK INSERT [Jedox_prod].[dbo].[B_BP_Customer]
FROM 'c:\Baan\jedox_daily\jdcom4401.txt'
WITH
[code]....
The error message is:

Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 2, column 3 (BP_Country).
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".

I have checked and re-checked the BP_Country field (the 1st field after the key) and I am not seeing any mismatches.
I have installed SQL Server 2008 R2 Express (which includes the SSMS tool) on Windows Server 2008 R2 SP1 without any issues. The database was created with no issues, and the full text catalog was created via the wizard, also with no issues, but I cannot run the catalog update process as a scheduled task because SQL Agent is not available in the Express edition.
The full text index information is already being populated and updated by a third party application, so this leaves just the catalog to be updated as and when new full text information is available.
I have a third party SQL scheduler which will run SQL scheduled tasks, but it requires a script to run the full text catalog update process.
Is it possible to extract a script from the existing full text catalog to run the update process, or how would I create a script from scratch to do the same update-catalog process in the third party scheduler?
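A starting point, sketched with a placeholder database, catalog, and table name; which statement applies depends on how the index population is configured, so treat this as an assumption to verify against the existing catalog:

-- Reorganize (master merge) the whole catalog
USE MyAppDatabase;
ALTER FULLTEXT CATALOG MyCatalog REORGANIZE;

-- Or kick off population table by table, if change tracking is manual
ALTER FULLTEXT INDEX ON dbo.MyIndexedTable START INCREMENTAL POPULATION;

Either statement can be saved as a plain .sql script and handed to the third party scheduler in place of a SQL Agent job.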
Hi, how can I store the record insert/update timestamp in a SQL Server 2000 DB programmatically? What are the date/time functions in ASP.NET 2.0? I know that this can be done by setting the default value to the GETDATE() function in SQL, but is there any other way on the ASP page or in the code-behind page?
I have a SQL Server 2000 database containing some columns in Big5. To display these columns correctly, my ASP.NET page must have a directive with CodePage="1252" ContentType="text/html;charset=BIG5". I cannot update or insert Big5 characters into these columns via the .aspx page. I'm using .NET Framework 2.0. Please help me; thanks a lot for any help.
I have an application that calculates a bunch of numbers and then inserts them into a table (or updates a record in the table if it exists). In my test environment it is issuing 100 insert or update requests to the server per second, and it could run for several hours. After the first several hundred requests, the SQL Server is bogging down (processor at 90-100%) and the application slows down while it waits on SQL to update the database.
What would be the best way to optimize this app? Would it help to loop through all my insert/update requests and then send them as one big batch of statements to the server (per 1000 requests or something)? Is there a better way of doing this? Thanks!
All: I have created a stored procedure on SQL Server that does an insert-else-update to a table. The SP starts by doing an "IF NOT EXISTS" check at the top to determine whether it should be an insert or an update. When I run the stored procedure directly on SQL Server (Query Analyzer) it works fine: it updates when I pass in an existing ID#, and does an insert when I pass in a NULL for the ID#. When I run the exact same logic from my aspx.vb code it keeps inserting the data every time! I have debugged the code several times and all the parameters are getting passed in as they should be. Can anyone help, or have any ideas what could be happening? Here is the basic shell of my SP:

CREATE PROCEDURE [dbo].[spHeader_InsertUpdate]
    @FID int = null OUTPUT,
    @FLD1 varchar(50),
    @FLD2 smalldatetime,
    @FLD3 smalldatetime,
    @FLD4 smalldatetime
AS
Declare @rtncode int

IF NOT EXISTS (select * from HeaderTable where FormID = @FID)
Begin
    begin transaction
    --Insert record
    Insert into HeaderTable (FLD1, FLD2, FLD3, FLD4)
    Values (@FLD1, @FLD2, @FLD3, @FLD4)
    SET @FID = SCOPE_IDENTITY();
    --Check for error
    if @@error <> 0
    begin
        rollback transaction
        select @rtncode = 0
        return @rtncode
    end
    else
    begin
        commit transaction
        select @rtncode = 1
        return @rtncode
    end
end
ELSE
Begin
    begin transaction
    --Update record
    Update HeaderTable
    SET FLD2 = @FLD2, FLD3 = @FLD3, FLD4 = @FLD4
    where FormID = @FID;
    --Check for error
    if @@error <> 0
    begin
        rollback transaction
        select @rtncode = 0
        return @rtncode
    end
    else
    begin
        commit transaction
        select @rtncode = 2
        return @rtncode
    end
End
I am experimenting with using CDC to track user changes in our application database. So far I've done the following:
-- ENABLE CDC ON DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_db
GO

-- ENABLE CDC TRACKING ON THE AVA TABLE IN DV_WRP_TEST
USE dv_wrp_test
[Code] ....
The results shown above are what I expect to see. My problem occurs when I use our application to update the same column in the same table. The vb.net application passes a Table Valued Parameter to a stored procedure which updates the table. Below is the creation script for the stored proc:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

if exists (select * from sysobjects
           where id = object_id('dbo.spdv_AVAUpdate')
           and sysstat & 0xf = 4)
    drop procedure dbo.spdv_AVAUpdate
[Code] ....
When I look at the results of CDC, instead of operations 3 and 4, I see 1 (DELETE) and 2 (INSERT) for the change that was initiated from the stored procedure:
-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT *
FROM cdc.dbo_AVA_CT
GO

--RESULTS SHOW OPERATION 1 (DELETE) AND 2 (INSERT) INSTEAD OF 3 AND 4
__$start_lsn            __$end_lsn  __$seqval               __$operation  __$update_mask  AvaKey  AvaDesc  AvaArrKey  AvaSAPAppellationID
0x0031E84F000000740008  NULL        0x0031E84F000000740002  3             0x02            119     Test2    6          NULL
0x0031E84F000000740008  NULL        0x0031E84F000000740002  4             0x02            119     Test3    6          NULL
0x0031E84F00000098000A  NULL        0x0031E84F000000980003  1             0x0F            119     Test3    6          NULL
0x0031E84F00000098000A  NULL        0x0031E84F000000980004  2             0x0F            119     Test4    6          NULL
Why might this be happening, and what can be done to correct it? Also, is there any way to get the user id associated with the CDC change?
1. First I need to update the row, setting the status column from 0 to 1.
2. I need to insert the row if SegmentId = @SegmentId and SubjectId <> @SubjectId and StaffId = @StaffId.
3. I need to insert the row if StaffId <> @StaffId and ClassId = @ClassId and SegmentId <> @SegmentId and SubjectId <> @SubjectId.
I have written a stored procedure to do this. But the problem is, if I do the update, it is reflected in the database (the 0 changes to 1), but then it throws an error like 'cannot insert duplicate'.
Here is the stored procedure I have written:
ALTER PROCEDURE [dbo].[InsertAssignTeacherToSubjects]
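The body of the procedure was cut off in the post, but the symptom (the update succeeds and the insert then fails with a duplicate key) usually means the insert isn't guarded by the same conditions as the update. A hedged sketch of the guard, with a hypothetical table name and the parameters from the post:

-- Inside the procedure: update first, and only insert when no matching row exists
UPDATE dbo.StaffSubjects
SET Status = 1
WHERE SegmentId = @SegmentId AND SubjectId = @SubjectId
  AND StaffId = @StaffId AND Status = 0;

IF @@ROWCOUNT = 0
   AND NOT EXISTS (SELECT 1 FROM dbo.StaffSubjects
                   WHERE SegmentId = @SegmentId
                     AND SubjectId = @SubjectId
                     AND StaffId   = @StaffId)
BEGIN
    INSERT INTO dbo.StaffSubjects (SegmentId, SubjectId, StaffId, ClassId, Status)
    VALUES (@SegmentId, @SubjectId, @StaffId, @ClassId, 0);
END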
If a column is set to allow NULLs, I know that a constraint can be used to supply a default (i.e. GETDATE()) when no value is provided, but what about when an explicit NULL is provided in an INSERT or UPDATE statement? Is there any way other than an AFTER trigger to substitute a value for an explicitly provided NULL? In other words, assuming that dtAsof is a NULL-enabled column, is there any way to override what the following will do to MYTABLE:
If there's no way to do this in SQL Server 2008 R2, then what about later versions of SQL Server? Do any more recent versions have a way to deal with this? We have a third party app that uses a SQL Server back end, and many of the tables have columns for storing audit-like data such as date/time, but many are left with NULL values, and I'd really like to fix that in as passive a way as possible so as not to break the app that uses the database. I know a constraint with a default can be used to override a missing value, but not when a NULL is explicitly provided.
I have a web form with a text field that needs to take in as much as the user decides to type and insert it into an nvarchar(max) field in the database behind it. I've tried using the new .WRITE() clause in my UPDATE statement, but it cuts off the text after a while. Is there a way to insert/update this in SQL 2005 without resorting to BULK INSERT? It bloats the transaction log, and turning the logging off requires a call to sp_dboption (or a straight-up ALTER DATABASE), which I'd like to avoid if I can.
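For reference, a sketch of the append form of UPDATE ... .WRITE on SQL Server 2005+; the table, column, and parameter names are placeholders. Truncation here is more often caused by the client-side parameter being declared as nvarchar(4000) instead of nvarchar(max) than by .WRITE itself, which is worth checking first.

-- Append @chunk to the end of the document: a NULL offset means "append"
DECLARE @id int = 1, @chunk nvarchar(max) = N'...more text from the form...';

UPDATE dbo.Documents
SET Body.WRITE(@chunk, NULL, 0)
WHERE DocumentID = @id;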
Our system runs a SQL Server 2012 DB; it has a table (table_a) which has over 10M records. Our system has to receive a data file from the previous system daily, which contains approximately 3M updated or new records for table_a. My job is to update table_a with the new data.
The initial solution is:
1. Create a table (table_b) whose structure is the same as table_a's
2. Use BCP to import the updated records into table_b
3. Remove outdated data from table_a: DELETE a FROM table_a a INNER JOIN table_b b ON a.key_fields = b.key_fields
4. Append updated or new data into table_a: INSERT INTO table_a SELECT * FROM table_b
As the test results show, this solution is very inefficient; step 3 alone costs several hours. How can I improve it?
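A couple of hedged suggestions, with the key and data columns as placeholders: index the key columns on both tables (without an index, step 3 scans 10M rows per probe), and consider collapsing steps 3 and 4 into a single MERGE, available on SQL Server 2012, so each key is located only once:

-- Assumes a unique index on key_field in both tables; column lists are illustrative
MERGE table_a AS t
USING table_b AS s
    ON t.key_field = s.key_field
WHEN MATCHED THEN
    UPDATE SET t.col1 = s.col1, t.col2 = s.col2   -- list the non-key columns
WHEN NOT MATCHED THEN
    INSERT (key_field, col1, col2)
    VALUES (s.key_field, s.col1, s.col2);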
Hi, when I am using the OLE DB Command to update existing records, I get the following errors. Can anyone please help me with this one?
1)[DTS.Pipeline] Error: The ProcessInput method on component "OLE DB Command 1" (9282) failed with error code 0xC0202009. The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running.
2)
[OLE DB Command 1 [9282]] Error: An OLE DB error has occurred. Error code: 0x80040E10.
3)
[OLE DB Command 1 [9282]] Error: An OLE DB error has occurred. Error code: 0x80040E10.
I am inserting/updating a few tables from a snapshot and reading the same bunch of tables from reporting using READ COMMITTED. It is showing some deadlocks; I think that is right in this situation, as an exclusive (X) lock is not compatible with shared (S) or intent shared (IS) locks.
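If the goal is to stop the readers and writers colliding, one common option (my suggestion, not from the post) is row versioning, which lets READ COMMITTED readers see the last committed version of a row instead of blocking on, or deadlocking with, the writers' X locks:

-- Switch READ COMMITTED to versioned reads for the whole database
-- (the ALTER needs a moment with no other active connections to take effect)
ALTER DATABASE MyReportingDb SET READ_COMMITTED_SNAPSHOT ON;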