Transact SQL :: Use Sequences And Triggers For Identity
Oct 16, 2013
I would like to use sequences and triggers to populate a table's ID field with an int value from a sequence via a before-insert trigger. I have been searching on Google for a few days and cannot find any article that covers this subject.
Is there any sample showing how to create a table with columns Id, Name, and Comment, a sequence (to generate int numbers for the Id field), and a trigger that fires before insert, checks whether the inserted Id is NULL, and fills the field from the sequence, or does nothing if the Id is already set?
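A minimal sketch of one way to do this (all object names are assumptions): SQL Server has no BEFORE triggers, so the usual choices are binding the sequence to the column as a DEFAULT, or an INSTEAD OF INSERT trigger that fills Id only when it comes in as NULL.

CREATE SEQUENCE dbo.Seq_MyTable AS int START WITH 1 INCREMENT BY 1;
GO
CREATE TABLE dbo.MyTable
(
    Id      int          NOT NULL PRIMARY KEY,
    Name    varchar(100) NOT NULL,
    Comment varchar(500) NULL
);
GO
CREATE TRIGGER dbo.tr_MyTable_Insert ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows that already carry an Id are inserted as supplied.
    INSERT INTO dbo.MyTable (Id, Name, Comment)
    SELECT i.Id, i.Name, i.Comment
    FROM inserted AS i
    WHERE i.Id IS NOT NULL;

    -- Rows with a NULL Id get the next value from the sequence.
    -- (Two statements because NEXT VALUE FOR is not allowed inside COALESCE/ISNULL.)
    INSERT INTO dbo.MyTable (Id, Name, Comment)
    SELECT NEXT VALUE FOR dbo.Seq_MyTable, i.Name, i.Comment
    FROM inserted AS i
    WHERE i.Id IS NULL;
END;
GO

If explicit NULL Ids never need to be handled, a simpler route is to skip the trigger and declare the column with DEFAULT (NEXT VALUE FOR dbo.Seq_MyTable).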
I am trying to understand federated servers. So far I have been able to link 2 servers, create a distributed view, and run a select and insert query against the view.
The next thing that I need to understand is how to deal with triggers and with identity columns (i.e. columns that auto-increment).
Our application uses triggers on tables in several cases to keep track of create and update dates. So there is a trigger that automatically updates the datetime field on a table whenever a row is created or changed. This way the programmer doesn't need to remember to update those datetime values. Now, I understand that in the distributed-view architecture the triggers on the underlying tables may not work. Can anyone comment on this?
As well, I have a table with identity enabled, so it increments a counter and creates a unique integer key value for each new record. How does this work when we move to a distributed architecture? How do I ensure a unique key is generated in the distributed-view architecture?
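For the uniqueness part, one common pattern (sketched below with assumed names) is to give each member table of the distributed partitioned view a non-overlapping key range enforced by a CHECK constraint on the partitioning column; the key values themselves then come from the application or a per-server generator, since updatable partitioned views generally do not allow IDENTITY columns on the member tables.

-- On Server1 (range 1 to 4,999,999; names are assumptions):
CREATE TABLE dbo.Customers_1
(
    CustomerID int NOT NULL
        CONSTRAINT PK_Customers_1 PRIMARY KEY
        CONSTRAINT CK_Customers_1_Range CHECK (CustomerID BETWEEN 1 AND 4999999),
    CustomerName varchar(100) NOT NULL
);

-- On Server2 (range 5,000,000 to 9,999,999):
CREATE TABLE dbo.Customers_2
(
    CustomerID int NOT NULL
        CONSTRAINT PK_Customers_2 PRIMARY KEY
        CONSTRAINT CK_Customers_2_Range CHECK (CustomerID BETWEEN 5000000 AND 9999999),
    CustomerName varchar(100) NOT NULL
);

-- The view unions the member tables across the linked servers:
CREATE VIEW dbo.Customers
AS
SELECT CustomerID, CustomerName FROM dbo.Customers_1
UNION ALL
SELECT CustomerID, CustomerName FROM Server2.MyDB.dbo.Customers_2;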
I have tables A and B. A has columns ID, A1, A2, A3, A4, A5. B has columns ID, B1, B2, A1. Table A has a trigger, which I defined as below.
Solution 1

ALTER TRIGGER [dbo].[Tri_A] ON [dbo].[A]
FOR UPDATE
AS
BEGIN
    UPDATE B
    SET    B.A1 = i.A1
    FROM   inserted i
           INNER JOIN B ON B.ID = i.ID
END;
GO
If I change the Solution 1 trigger above to Solution 2, can I improve the trigger's performance dramatically? I mean, only update B.A1 when A.A1 is changed, so that when other columns change, the update is not required.
Solution 2

ALTER TRIGGER [dbo].[Tri_A] ON [dbo].[A]
FOR UPDATE
AS
BEGIN
    IF (UPDATE(A1))
        UPDATE B
        SET    B.A1 = i.A1
        FROM   inserted i
               INNER JOIN B ON B.ID = i.ID
END;
GO
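UPDATE(A1) only reports that A1 appeared in the SET list of the UPDATE statement, not that its value actually changed, so Solution 2 still runs the update when A1 is set to the same value. A further, hedged refinement is to join inserted to deleted and compare the values, updating only rows where A1 really changed:

ALTER TRIGGER [dbo].[Tri_A] ON [dbo].[A]
FOR UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF UPDATE(A1)
        UPDATE B
        SET    B.A1 = i.A1
        FROM   inserted AS i
               INNER JOIN deleted AS d ON d.ID = i.ID
               INNER JOIN B ON B.ID = i.ID
        -- NULL-safe "value actually changed" test
        WHERE  i.A1 <> d.A1
            OR (i.A1 IS NULL AND d.A1 IS NOT NULL)
            OR (i.A1 IS NOT NULL AND d.A1 IS NULL);
END;
GO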
When I try to alter a non-identity column to an identity column using this query:

ALTER TABLE testid ALTER COLUMN test int IDENTITY(1,1)

I get this error message:

Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'identity'.
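ALTER COLUMN cannot add the IDENTITY property to an existing column, which is what Msg 156 is complaining about. One hedged workaround (column names assumed) is to add a new identity column, drop the old one, and rename; note this renumbers the rows. If the existing values must be preserved, the usual alternative is to build a new table with the identity column and copy the data across under SET IDENTITY_INSERT ... ON.

ALTER TABLE dbo.testid ADD test_new int IDENTITY(1,1) NOT NULL;
GO
ALTER TABLE dbo.testid DROP COLUMN test;
GO
EXEC sp_rename 'dbo.testid.test_new', 'test', 'COLUMN';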
Jamie writes: "When inserting a row into a table where the primary key is a sequence number, is there a keyword to find out, once inserted, the sequence number of the newly created row?"
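Assuming "sequence number" here means an IDENTITY column, SCOPE_IDENTITY() is the usual answer; a minimal sketch with assumed names:

INSERT INTO dbo.Orders (CustomerName) VALUES ('Jamie');

-- Identity value generated by the last insert in the current scope;
-- safer than @@IDENTITY, which can pick up values generated by triggers.
SELECT SCOPE_IDENTITY() AS NewOrderID;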
I'm creating a SQL holiday database and I need to write a sequence, but I've been told it's wrong and I don't know why. Can anyone help?

CREATE SEQUENCE SEQ_HOILDAY_SITES
INSERT INTO Details_of_sites_visited(Code_of_the_sites, Sites_name)
VALUES(SEQ_HOILDAY_SITES.NEXTVAL,'124','Yosemite National Park');
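Two things stand out, assuming this is meant to run on SQL Server (2012 or later): sequences are read with NEXT VALUE FOR rather than Oracle's .NEXTVAL, and the VALUES list supplies three values for only two named columns. A hedged correction, dropping the extra '124' (add a third column to the list if it was intended for one):

CREATE SEQUENCE SEQ_HOILDAY_SITES START WITH 1 INCREMENT BY 1;
GO
INSERT INTO Details_of_sites_visited (Code_of_the_sites, Sites_name)
VALUES (NEXT VALUE FOR SEQ_HOILDAY_SITES, 'Yosemite National Park');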
I'm doing a quick Access Project for a file-room organization DB. Generally, records are addressed by a 7-character unique ID, with all sorts of rules, assigned by another system. However, we need to track items that for various reasons do not exist in that system. For these, we enter a code that consists of 2 characters of invalid data followed by a simple incrementing number. In Oracle, I would do this with a sequence. How do I go about this in SQL Server? Basically, I want to write a trigger that says something like this pseudo-code:

before insert:
  if :new.ItemType is Type3 then  -- file local to our file room
    :new.FileID = "IV" + getNextLocalNumber().ToString()
  end if

Note that the FileID is not a key in the normal DB sense. It is a mnemonically unique identifier with some other properties so that people can easily look things up.
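A hedged translation of that pseudo-code into SQL Server terms (all object and column names are assumptions): on SQL Server 2012 or later a SEQUENCE can stand in for the Oracle sequence, and since there is no BEFORE trigger, an INSTEAD OF INSERT trigger builds the FileID.

CREATE SEQUENCE dbo.Seq_LocalFileNumber AS int START WITH 1 INCREMENT BY 1;
GO
CREATE TRIGGER dbo.tr_Items_AssignFileID ON dbo.Items
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Rows that are not local file-room items, or already have a FileID, pass through.
    INSERT INTO dbo.Items (ItemType, FileID, Description)
    SELECT i.ItemType, i.FileID, i.Description
    FROM inserted AS i
    WHERE i.ItemType <> 'Type3' OR i.FileID IS NOT NULL;

    -- Local (Type3) items get 'IV' plus the next local number.
    INSERT INTO dbo.Items (ItemType, FileID, Description)
    SELECT i.ItemType,
           'IV' + CAST(NEXT VALUE FOR dbo.Seq_LocalFileNumber AS varchar(10)),
           i.Description
    FROM inserted AS i
    WHERE i.ItemType = 'Type3' AND i.FileID IS NULL;
END;
GO

On versions before 2012 the same idea is usually done with a small counter table that is incremented inside the trigger's transaction.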
There is this little thing I can't manage to understand about sequences. When I create a table, I make the id the primary key and write something like this:
[Id] [int] IDENTITY(1,1) NOT NULL,
so each time the value is incremented by 1,
but if I delete a row from the table and then insert a new one, instead of inserting id 3, for example, it inserts id = 4.
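That is expected: identity values are handed out monotonically and are never reused after a delete or a rollback, so gaps are normal and harmless for a surrogate key. If the counter really has to be rewound (rarely advisable), DBCC CHECKIDENT can reseed it; the table name below is an assumption:

-- The next identity value generated after this will be 3.
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 2);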
I would like to wrap the following code in a function and reuse it. I use this code in many triggers.
DECLARE @Action as char(1);
SET @Action = (CASE
    WHEN EXISTS(SELECT * FROM INSERTED) AND EXISTS(SELECT * FROM DELETED)
        THEN 'U'  -- Set Action to Updated.
    WHEN EXISTS(SELECT * FROM INSERTED)
        THEN 'I'  -- Set Action to Insert.
    WHEN EXISTS(SELECT * FROM DELETED)
        THEN 'D'  -- Set Action to Deleted.
    ELSE NULL     -- Skip. It may have been a "failed delete".
END)
Is it possible to write a function and pass the INSERTED and DELETED logical tables to it?
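Not directly: inserted and deleted are only visible inside the trigger body, so they cannot be passed to a function as tables (short of copying them into table variables or a table type first). One hedged workaround that still centralizes the logic is to reduce them to two flags in each trigger and move the CASE into a small scalar function:

CREATE FUNCTION dbo.fn_TriggerAction (@HasInserted bit, @HasDeleted bit)
RETURNS char(1)
AS
BEGIN
    RETURN CASE
               WHEN @HasInserted = 1 AND @HasDeleted = 1 THEN 'U'  -- update
               WHEN @HasInserted = 1                     THEN 'I'  -- insert
               WHEN @HasDeleted  = 1                     THEN 'D'  -- delete
               ELSE NULL                                           -- possibly a "failed delete"
           END;
END;
GO

-- Inside each trigger:
DECLARE @Action char(1) = dbo.fn_TriggerAction(
    CASE WHEN EXISTS (SELECT * FROM inserted) THEN 1 ELSE 0 END,
    CASE WHEN EXISTS (SELECT * FROM deleted)  THEN 1 ELSE 0 END);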
I am new to SQL Server. I am using the TSQL2012 database. I have added a new column, processedOrderCount, to the HR.Employees table, and created a trigger on the Sales.Orders table so that whenever an orderid is updated it automatically updates processedOrderCount in HR.Employees. The problem is that orderid is an identity column, so I can't update it. How can we work with an identity column? The code of my trigger is:
IF OBJECT_ID('Sales.trig_Calculate_OrderProcessed', 'tr') IS NOT NULL
    DROP TRIGGER Sales.trig_Calculate_OrderProcessed;
GO
CREATE TRIGGER Sales.trig_Calculate_OrderProcessed ON Sales.Orders
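A hedged completion of that trigger, using the TSQL2012 sample-database names (HR.Employees.empid, Sales.Orders.empid): since orderid is an identity column it is generated on INSERT and never updated afterwards, so firing AFTER INSERT and recounting the affected employees' orders keeps the new column in sync without ever touching orderid.

IF OBJECT_ID('Sales.trig_Calculate_OrderProcessed', 'TR') IS NOT NULL
    DROP TRIGGER Sales.trig_Calculate_OrderProcessed;
GO
CREATE TRIGGER Sales.trig_Calculate_OrderProcessed
ON Sales.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE e
    SET    processedOrderCount = (SELECT COUNT(*)
                                   FROM Sales.Orders AS o
                                   WHERE o.empid = e.empid)
    FROM   HR.Employees AS e
    WHERE  e.empid IN (SELECT empid FROM inserted);
END;
GO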
In my database, all tables have an identity value as the PK.
Today I checked index fragmentation and saw that many clustered and non-clustered indexes have average fragmentation around 99%. I thought that with an identity column, records are always inserted at the end, so no fill factor needs to be assigned to it.
So in this case, do I need to set a fill factor for the clustered and non-clustered indexes?
If yes, then how much for the PK: 95% or something else? And the same for the non-clustered indexes, or should they be around 85 to 90?
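For reference, a fill factor only takes effect when the index is built or rebuilt; setting one explicitly looks like the sketch below (index and table names are assumptions, and whether 90, 95, or the default is right depends on how the non-clustered keys are actually inserted and updated):

ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 90);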
I am designing a database. I want to define an automatic sequence for a table's primary key field. What is the best solution for it? I know I can enable the identity property on a field, but it has some problems (for example, its seed jumps on restarts and unsuccessful inserts). I could also use a calculated sequence: for example, I can take the max of the field's values, increment it, and use it as the key for the newly inserted record.
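The max()+1 approach has race conditions under concurrency, so it is usually avoided. On SQL Server 2012 or later, a hedged alternative is a SEQUENCE bound to the key as a default; NO CACHE reduces (but does not eliminate) gaps after restarts, at some cost in speed. Names below are assumptions:

CREATE SEQUENCE dbo.Seq_MyEntity AS int START WITH 1 INCREMENT BY 1 NO CACHE;
GO
CREATE TABLE dbo.MyEntity
(
    MyKey   int NOT NULL
        CONSTRAINT DF_MyEntity_MyKey DEFAULT (NEXT VALUE FOR dbo.Seq_MyEntity)
        CONSTRAINT PK_MyEntity PRIMARY KEY,
    Payload varchar(100) NULL
);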
What is the escape sequence in a stored procedure?
Here is what I'm trying to achieve:
ALTER PROCEDURE Test ( @Func VarChar(1000) )
AS
DECLARE @SQL VarChar(8000)
SELECT @SQL = 'SELECT DISTINCT TNAME FROM TABLE WHERE FUNC LIKE ' + @Func
Now, my goal is to add a single quote (') before @Func and another one after it. For example, if @Func is "Test", I want my query to be

SELECT DISTINCT TNAME FROM TABLE WHERE FUNC LIKE 'Test'

and NOT

SELECT DISTINCT TNAME FROM TABLE WHERE FUNC LIKE Test
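The escape is a doubled single quote inside the string literal, so the concatenation looks like this (table and column names kept from the question):

SELECT @SQL = 'SELECT DISTINCT TNAME FROM TABLE WHERE FUNC LIKE ''' + @Func + '''';

A safer variant, where it fits, is to skip the concatenation and pass @Func as a real parameter to sp_executesql, which avoids both the quoting problem and SQL injection.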
I have to identify missing records from the example below.
Category BatchNo TransactionNo
CAT1 1 1
CAT1 1 2
CAT1 2 3
CAT1 2 4
CAT1 2 5
CAT1 3 6
CAT1 3 7
CAT1 3 8
CAT1 5 12
CAT1 5 13
CAT1 5 14
CAT1 5 15
CAT1 7 18
CAT2 1 1
CAT2 1 2
CAT2 3 6
CAT2 3 7
CAT2 3 8
CAT2 3 9
CAT2 4 10
CAT2 4 11
CAT2 4 12
CAT2 6 14
I need a script that will identify missing records as below
Category BatchNo
CAT1 4
CAT1 6
CAT2 2
CAT2 5
I do not need to know that CAT1 8 and CAT2 7 are not there as they potentially have not been inserted yet.
Ideally I want a nice clean SQL statement and do not particularly want to add new tables or triggers, although I can deal with views to an extent.
Considerations: up to 50,000 records are added per day. The script only needs to run once a day, and I have insert dates to help me. There are only 12 categories, and batch numbers always start at 1 for each category.
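One hedged way to get this in a single statement (the table name dbo.Transactions is an assumption, columns as in the sample): derive each category's maximum batch number, generate the numbers 1..max from a tally source, and keep the ones that never appear. With only 12 categories and modest batch numbers, sys.all_objects can serve as the tally; swap in a proper numbers table if batch numbers grow large.

WITH MaxBatch AS
(
    SELECT Category, MAX(BatchNo) AS MaxBatchNo
    FROM   dbo.Transactions
    GROUP BY Category
),
Numbers AS
(
    -- roughly 2,000+ numbers; use a real numbers table if batch numbers outgrow this
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM   sys.all_objects
)
SELECT m.Category, n.n AS BatchNo
FROM   MaxBatch AS m
       JOIN Numbers AS n ON n.n <= m.MaxBatchNo
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.Transactions AS t
                   WHERE t.Category = m.Category
                     AND t.BatchNo  = n.n)
ORDER BY m.Category, BatchNo;

Because the numbers only run up to each category's current maximum, batches such as CAT1 8 and CAT2 7 are not reported.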
My current proc updates (using joins of two or three tables) millions of records according to the condition provided for each department.
However, when the proc fails it writes ERROR_MESSAGE(), ERROR_SEVERITY(), and which department failed to an ErrorTable.
Since the records are updated by looping over the selected departments, I already have the department in a temp variable. Now I have been asked to log the specific record where the failure occurred: something like logging the identity column value or primary key value of the record that failed.
I'm trying to insert records into a "holding" table and write the identity column value (Entry_Key) back to the original table. My setup is two tables: tblEWPBulk and tbleFormsUploadEWP. Users enter records into tblEWPBulk and use BatchID to group records; once batch entry has been completed (usually fewer than 30 records) the user clicks an UploadAll button and the records (not all fields) are inserted into tbleFormsUploadEWP. One record in tblEWPBulk can be sent multiple times to the holding table, but tblEWPBulk needs to capture the latest Entry_Key. Records are sent from the holding table to DB2 z/VSE using a SQL stored procedure, and based on certain logic records are marked uploaded or errors are captured... that part works fine.
So, for example, I want to send
BatchID, AccountNumber, Period, ReceiveDate, AccountType, ReturnType, NetProfitOrLoss, TaxCredit FROM tblEWPBulk to the holding table and write Entry_Key (the identity column) back to the record in tblEWPBulk (a field called UploadEntryKey). As I said, one record could be sent to the holding table multiple times until it is uploaded or deleted, and UploadEntryKey always needs to be updated so that when the results are processed, the response from DB2 can be inserted into the table and presented to the user.
No foreign key relationship exists, since records in the holding table get sent to an archive table, the holding table is truncated, and the Entry_Key starting value is reset back to 2000... just some DB2 restrictions.
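A hedged sketch of the write-back (tblEWPBulk's own key is assumed to be called ID, and @BatchID stands for the batch being uploaded): a plain INSERT ... OUTPUT cannot return source columns alongside the new identity, but MERGE can, which makes it easy to map each generated Entry_Key back to the originating row.

DECLARE @BatchID int = 1;                      -- the batch being uploaded (assumed)
DECLARE @map TABLE (BulkID int, Entry_Key int);

MERGE tbleFormsUploadEWP AS tgt
USING (SELECT ID, BatchID, AccountNumber, Period, ReceiveDate,
              AccountType, ReturnType, NetProfitOrLoss, TaxCredit
       FROM tblEWPBulk
       WHERE BatchID = @BatchID) AS src
ON 1 = 0                                       -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (BatchID, AccountNumber, Period, ReceiveDate,
            AccountType, ReturnType, NetProfitOrLoss, TaxCredit)
    VALUES (src.BatchID, src.AccountNumber, src.Period, src.ReceiveDate,
            src.AccountType, src.ReturnType, src.NetProfitOrLoss, src.TaxCredit)
OUTPUT src.ID, inserted.Entry_Key INTO @map (BulkID, Entry_Key);

-- Write the latest Entry_Key back to the originating rows.
UPDATE b
SET    UploadEntryKey = m.Entry_Key
FROM   tblEWPBulk AS b
       JOIN @map  AS m ON m.BulkID = b.ID;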
I am writing an Instead Of Insert trigger. I would like to fire an error when inserting into an 'identity' column. Since UPDATE([ColumnName]) always returns TRUE for insert statements, is there an easy/fast way around this? I don't want to use:
IF (EXISTS(SELECT [i].[AS_ID] FROM [inserted] [i] WHERE [i].[AS_ID] IS NULL))

Here is my pseudo-code:

CREATE VIEW [org].[Assets] WITH SCHEMABINDING
[Code] .....
-- How does this statement need to be written to throw the error?
-- UPDATE([AS_ID]) always returns TRUE
IF (UPDATE([AS_ID]))
    RAISERROR('INSERT into the anchor identity column ''AS_ID'' is not allowed.', 16, 1) WITH NOWAIT;

-- Is there a faster/better method than this?
IF (EXISTS(SELECT [i].[AS_ID] FROM [inserted] [i] WHERE [i].[AS_ID] IS NOT NULL))
    RAISERROR('INSERT into the anchor identity column ''AS_ID'' is not allowed.', 16, 1) WITH NOWAIT;

-- Do Stuff
END;
-- Should error for inserting into the [AS_ID] field (which is an identity field)
INSERT INTO [org].[Assets] ([AS_ID], [Tag], [Name])
VALUES (1, 'f451', 'Paper burns'),
       (2, 'k505.928', 'Paper burns in Chemistry');

-- No error should occur
INSERT INTO [org].[Assets] ([Tag], [Name])
VALUES ('f451', 'Paper burns'),
       ('k505.928', 'Paper burns in Chemistry');
I am working on a text mining application wherein I need to detect unusual/anomalous sentences in text. Certain sentences, which I know occur very frequently, are given a likelihood of 0.2 by PredictCaseLikelihood. Other sentences that are just as frequent get a much higher likelihood (>0.9). I am using the NORMALIZED option. The only significant difference between these sentences is their length: the one with the lower likelihood has only 2 words in it, whereas the one with the higher likelihood has more than 10 words. The problem is that the shorter sentences end up being interpreted as anomalous, when in fact they aren't. Any suggestions?
Table 1:

Name    Add           No    RowID
aa      #a-1,India          10
bb      #a-1,India          11
aa      #a-1,India          12
Table 1 is inserted into Table 2 (using the 1st data flow).
Table 2:
Name    Add           ID  (Note: identity 1,1)
aa      #a-1,India    1
bb      #a-1,India    2
aa      #a-1,India    3
My requirement is: UPDATE Table1 SET No = Table2.ID, based on an exact match of Table1.Name = Table2.Name and Table1.Add = Table2.Add.
That is, get the ID back into the source Table 1.
2nd data flow:

Source (Table1: Name, Add, No)
    |
LOOKUP (Table2: Name, Add; matched lookup columns Name and Add, with ID ticked as the return column)
    | (Match)
OLE DB Command: UPDATE Table1 SET No = ? WHERE RowID = ?  (Param_0 = No, Param_1 = RowID)
My issue is that if Table 1 has duplicates (same Name and Add, but a different RowID), it updates the same ID into Table1.No, i.e. the ID is not written back correctly. Result:
Table 1:
Name    Add           No    RowID
aa      #a-1,India    1     10
bb      #a-1,India    2     11
aa      #a-1,India    1     12
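A hedged set-based alternative to the lookup/OLE DB Command flow (assuming duplicates should be paired off in RowID/ID order): number the duplicates on both sides with ROW_NUMBER and include the row number in the join, so each Table 1 row gets its own Table 2 ID.

WITH T1 AS
(
    SELECT RowID, Name, [Add], [No],
           ROW_NUMBER() OVER (PARTITION BY Name, [Add] ORDER BY RowID) AS rn
    FROM Table1
),
T2 AS
(
    SELECT ID, Name, [Add],
           ROW_NUMBER() OVER (PARTITION BY Name, [Add] ORDER BY ID) AS rn
    FROM Table2
)
UPDATE T1
SET    [No] = T2.ID
FROM   T1
       JOIN T2 ON T2.Name  = T1.Name
              AND T2.[Add] = T1.[Add]
              AND T2.rn    = T1.rn;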
I have a table of raw data where each column can be null. The thought was to create an identity (1,1) key and set it as the primary key for each row (name / address / zip / country / joindate / spending), with the surrogate key "pkid". However, other queries will not use this primary key. For instance, they may count the number of people in a zip code, select all names and addresses, etc. The queries may order by join date, or select all the people that joined on a specific date. No other code would logically use the surrogate primary key, so would it still have any performance benefit? At this time the table would have no other clustered or non-clustered indexes or keys. I'm curious what happens if there are millions of records.
My database tables have an auto-identity field defined as (1,1), but the value sometimes jumps by 1000 and sometimes by 100. I don't understand why this is happening in every table.
I've seen several posts about reading and writing files that have different record types with varying column metadata. My particular file has 11 record types plus several header types and looks something like:
<Header1>
<Header2>
<Detail01-#1>
<Subdetail02>
<Subdetail03>
...
<Detail01-#2>
<Subdetail02>
<Subdetail03>
...
...
Since I need to get different detail and subdetail records, I can't really use the technique of three destination file connection managers found in http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=87269&SiteID=1
I've tried using an Execute SQL task to get the main detail records and then a Foreach ADO enumerator to get the subdetails, but it all seems so kludgy. I'm starting to think that I should just write the bulk of the file-creation code in a C# app instead of trying to squeeze this into SSIS. Opinions? Am I missing some trick in SSIS?
This isn't a problem as such; it's more of a debate.
If a table needs a number of update triggers which do differing tasks, should these triggers be separated out or encapsulated into one all-encompassing trigger? In terms of performance, neither approach makes much of a difference, depending on the tasks performed. I was wondering about maintenance and best practice, etc. My view is that if the triggers do totally different tasks, each should be its own trigger.
While I have learned a lot from this thread I am still basically confused about the issues involved.
I wanted to INSERT a record into a parent table, get the identity back, and use it in a child table. Seems simple.
To my knowledge, mine would be the only process running that would update these tables. I was told that there is no guarantee, because the OLEDB provider could write the second destination row before the first, that the proper parent-child relationship would be generated as expected. It was recommended that I create my own variable in memory to hold the Identity value and use that in my SSIS package.
1. A simple example SSIS .dts example illustrating the approach of using a variable for identity would be helpful.
2. Suppose I actually had two processes updating these tables, running at the same time. Then it seems the "variable" method will also have its problems. Is there a final solution other than locking the tables involved prior to updating them or doing something crazy like using a GUID for the primary key!
3. We have done the type of parent-child inserts I originally described from t-sql for years without any apparent problems. (Maybe we were just lucky.) Is the entire issue simply a t-sql one or does SSIS add a layer of complexity beyond t-sql that needs to be addressed?
I want to insert a new record into a table with an identity field and return the new identity field value back to the data stream (for later insertion as a foreign key in another table).
What is the most direct way to do this in SSIS?
TIA,
barkingdog
P.S. Or should I pass the identity value back in a variable and not make it part of the data stream?
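For the T-SQL side of either approach, a hedged sketch (table and column names assumed): whether the value then travels in the data flow or in a package variable, the insert itself can hand the new identity back through an OUTPUT clause.

DECLARE @NewKeys TABLE (ParentID int);

INSERT INTO dbo.Parent (Name)
OUTPUT inserted.ParentID INTO @NewKeys (ParentID)
VALUES ('example row');

-- Use this value as the foreign key for the child-table insert.
SELECT ParentID FROM @NewKeys;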
I have a table of three columns; the first column is an ID column. However, at creation of the table I did not set this column to auto-increment. I then copied 50 rows from another table into this table, with the ID column values set to zero.
Now I have changed the ID column to auto-increment with seed = 1 and increment = 1, but the problem is I can't figure out how to update this ID column (currently zero in each row) with the auto-increment values so that the ID column has values from 1 to 50. Is there a way to do this?
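An IDENTITY column cannot be UPDATEd at all, so the numbering has to happen while ID is still a plain int column (i.e. before switching it to auto-increment, which itself normally means adding a new identity column or rebuilding the table). A hedged sketch of the numbering step; the ORDER BY that decides which row becomes 1 is an assumption:

WITH numbered AS
(
    SELECT ID,
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn   -- replace with the intended order
    FROM dbo.MyTable
)
UPDATE numbered
SET    ID = rn;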
OK, I just need to know how to get the last record inserted, by the highest IDENTITY number, even if the computer was rebooted and the insert happened two weeks ago (it does not have to be tied to the session). Any help is appreciated. Thanks, Trint
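Assuming the key is an IDENTITY column, two hedged options (table and column names assumed): read the row with the largest identity value directly, or ask for the last identity value generated for the table by any session with IDENT_CURRENT.

-- The most recently generated row, regardless of session or reboots.
SELECT TOP (1) *
FROM   dbo.MyTable
ORDER BY ID DESC;

-- The last identity value generated for this table by any session.
SELECT IDENT_CURRENT('dbo.MyTable') AS LastIdentityValue;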