Best Way To Insert New Records Avoiding Concurrency
Mar 27, 2008
I have a web site where people place orders at the same time. A problem can arise when allocating the next primary key value for a new record by using the maximum existing value + 1.
How can I avoid this problem when inserting into SQL Server?
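One common way to avoid the MAX()+1 race is to let SQL Server generate the key itself with an IDENTITY column. A minimal sketch (the table and column names are made up for illustration):

create table dbo.Orders (
    OrderID   int identity(1,1) primary key,   -- server-assigned, safe under concurrent inserts
    Customer  varchar(100),
    OrderDate datetime default getdate()
)

insert into dbo.Orders (Customer) values ('example customer')
select scope_identity() as NewOrderID           -- the key just generated in this scope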
I have been trying to solve a locking problem for the past couple of days. Please help me!
Scenario: I have an SSIS package with 2 data flow tasks. The 1st data flow task deletes records from 5 tables, and the 2nd data flow task should insert records into 1 of the five tables after the 1st data flow task succeeds. This scenario runs inside a transaction.
In the above scenario the 2nd data flow task hangs at runtime and never completes. With the sp_who2 command I could see that there is an intent shared lock (LCK_M_IS) wait on the table and the status is SUSPENDED.
I don't know how to get out of this locking. Please help.
I tried to port 10,000 records using DTS. After porting 9,900 records I got an error and the package exited without any result. I want to keep the records that had been ported up to the point the error occurred. Please help me.
On my site users can register using the ASP.NET Membership CreateUserWizard control. I am also using the wizard control to design a simple question-and-answer form that logged-in users have access to. It has 2 questions, with a text box for Q1 and a dropdown list for Q2. I have a table in my database called "Players" which has 3 columns: UserId (primary key, of type uniqueidentifier), PlayerName (type string), PlayerGenre (type string).
On completing the wizard and clicking the Finish button, I want the data to be inserted into the SQL Express Players table. I am having problems getting this to work and keep getting exceptions. It would be very helpful if somebody could check the code and advise where the problem is.
To match the answers to the user I get the UserId and insert this into the database too.

protected void Wizard1_FinishButtonClick(object sender, WizardNavigationEventArgs e)
{
    SqlDataSource DataSource = (SqlDataSource)Wizard1.FindControl("InsertArtist1");
    MembershipUser myUser = Membership.GetUser(this.User.Identity.Name);
    Guid UserId = (Guid)myUser.ProviderUserKey;
    String Gender = ((DropDownList)Wizard1.FindControl("PlayerGenre")).SelectedValue;
    DataSource.InsertParameters.Add("UserId", UserId.ToString());
    DataSource.InsertParameters.Add("PlayerGenre", Gender.ToString());
    DataSource.Insert();
}
I am running a script and the following statement throws an error because the DTC service is not running on the remote server:
insert into MyLocalTable execute synonym_MyRemoteProcedure @SomeParameter
Since a transaction is not declared within the script, why is the DTC required? How can I avoid using the DTC? Is there a way to say "this code is not within a distributed transaction"?
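If the remote procedure is reached through a linked server, one option worth checking (available from SQL Server 2008 onward; 'MyRemoteServer' is an assumed linked server name) is to turn off transaction promotion for that linked server, so INSERT ... EXECUTE against it no longer tries to enlist the DTC:

-- SQL Server 2008+ only; the linked server name is an assumption
exec sp_serveroption
     @server   = 'MyRemoteServer',
     @optname  = 'remote proc transaction promotion',
     @optvalue = 'false'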
Requirements: Write an MS SQL Server 2000 stored procedure to:
1. Update the Tasks table by assigning the task to an employee.
2. Increment the employee's Emp_Task_Cnt for each task assigned.
3. Match the employee to the task by matching the Task_Requirement to the Emp_Specialty.
4. Do not exceed the employee's Max_Task_Cnt.
I have a working solution to the requirements, but it involves cursor logic. For all the obvious reasons, I wanted to avoid using a cursor (or a cursor-like looping structure) but could not figure out any other way to avoid processing the Task table one record at a time, because of requirement 4: "Do not allow an employee's Task_Cnt to exceed the Max_Task_Cnt."
Q: Is there a way to do this without using a cursor and still meet all of the requirements?
I'm trying to performance tune a procedure and am sort of being thwarted by caching.
When I first run the procedure, it takes a few seconds which is too long in this case. Subsequent executions in Management Studio are nearly instantaneous, though, which I imagine is due to caching and does not reflect the behavior of the procedure in production.
Is there a way to disable caching so that each execution of the procedure in Management Studio will be consistent and reflect the "first run" performance?
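On a test server (never in production), the usual way to get repeatable "first run" timings is to flush both the data cache and the plan cache before each execution, for example:

CHECKPOINT                    -- write dirty pages so they can be dropped
DBCC DROPCLEANBUFFERS         -- empty the buffer pool, forcing physical reads on the next run
DBCC FREEPROCCACHE            -- clear compiled plans, forcing a fresh compile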
This query uses a cursor to fetch a parameter and pass it to another Stored proc. Is there a straightforward way to do this without using a cursor?
declare @deleteunassigned int

declare cur_unassigned cursor for
    select distinct a.cust_cont_pk
    from cust_cont a, cont_fold_ass b (NOLOCK)
    where a.cust_cont_pk != b.CUST_CONT_PK

open cur_unassigned
fetch next from cur_unassigned into @deleteunassigned

while @@fetch_status = 0
begin
    exec spDeleteCustContbypk @deleteunassigned
    fetch next from cur_unassigned into @deleteunassigned
end

close cur_unassigned
deallocate cur_unassigned
GO
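If spDeleteCustContbypk does nothing more than delete the cust_cont row for the given key (an assumption about what that proc does), the loop can collapse into a single set-based statement. Note that the original != join almost certainly means "not present in cont_fold_ass", which is expressed more safely with NOT EXISTS:

delete cc
from cust_cont cc
where not exists (select 1
                  from cont_fold_ass b
                  where b.CUST_CONT_PK = cc.cust_cont_pk)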
Using small stored procs or sp_executesql dramatically reduces the number of recompiles and increases the reuse of execution plans. This is evident from the usecount in syscacheobjects, from perfmon, and from Profiler. However, I'm at a loss to determine what causes a compilation. Under rare circumstances the usecount for a Compiled Plan does not increase as statements are run; this seems to correspond to when there is no execution plan. It would seem to me that compilation is a resource-intensive task that, if possible (when data and schema are not changing), should be held to a minimum.

How does one encourage the reuse of compiled plans? Is this the same as minimizing compilation? It looks like some of this behavior is changing in SQL 2005...

Thanks,
Danny
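The main lever is to keep the statement text identical between executions and pass the changing values as parameters, which is exactly what sp_executesql is for. A small illustration (the table and column are placeholders):

-- the plan compiled for this exact statement text can be reused for any @CustomerId value
exec sp_executesql
     N'select * from dbo.Orders where CustomerId = @CustomerId',
     N'@CustomerId int',
     @CustomerId = 42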
I have a stored procedure spUpdateClient, which takes as params a number of properties of a client application that wants to register its existence with the database. The sp just needs to add a new row or update an existing row with this data.
I tried to accomplish this with code something like this. (The table I'm updating is called Clients, and its primary key is ClientId, which is a value passed into the sp from the client.)
IF (SELECT COUNT(ClientId) FROM Clients WHERE ClientId = @ClientId) = 0
BEGIN
    -- client not found, create it
    INSERT INTO Clients (ClientId, Hostname, Etc)
    VALUES (@ClientId, @Hostname, @Etc)
END
ELSE
BEGIN
    -- client was found, update it
    UPDATE Clients
    SET Hostname = @Hostname, Etc = @Etc
    WHERE ClientId = @ClientId
END

But the client apps call this every second or so, so soon enough I started getting primary key violations. It looks like one client would make two calls nearly at the same time, both would get a 0 value on the SELECT line, so both would try to insert a new row with the same ClientId. No good. So then I added
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION

at the top, and a COMMIT at the bottom. I thought the first one in would get to run the whole sp, and the next one in would have to wait for the first to be done. Instead I'm now getting deadlock errors. If I understand the docs right, that's because the exclusive lock is not placed on the Clients table until the INSERT happens, not at the SELECT. So when two calls to the sp happen at nearly the same time (call them A and B), A does the SELECT, which takes a shared lock on Clients. Then B does the SELECT, taking its own shared lock. Now A needs an exclusive lock on Clients to do its INSERT, but B still holds its shared lock; B is stuck the same way behind A's lock, so they're deadlocked. I could catch the deadlock in my client app after SQL Server kills one of the transactions, but it seems to me there should be some way to set a lock at the top of the sp that says "nobody else can enter this sp until I exit it". Any such thing? Thanks. Nate Hekman
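One common way around this is to skip the separate existence check entirely: attempt the UPDATE first while asking for an update/key-range lock, and only INSERT when nothing was updated, so two concurrent callers serialize on the same key instead of deadlocking. A sketch of that pattern (not the only option; sp_getapplock is another way to get the literal "one caller at a time" behaviour described above):

begin transaction

-- UPDLOCK + SERIALIZABLE makes the first caller hold the key (or its range) until commit
update Clients with (updlock, serializable)
set Hostname = @Hostname, Etc = @Etc
where ClientId = @ClientId

if @@rowcount = 0
    insert into Clients (ClientId, Hostname, Etc)
    values (@ClientId, @Hostname, @Etc)

commit transaction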
Hello. I have been developing a small site that has two backend SQL Server databases: one for my application data and one for the ASPNETDB database that is created by the ASP.NET Configuration utility. Is it possible to configure the ASP.NET Configuration tool to use my custom database instead of creating a second database called ASPNETDB? Thanks in advance. Kev
I am exclusively using Stored Procedures to access the database, i.e. there are no Ad-Hoc SQL statements anywhere in the C# code. However, one thing I need to be able to do is to allow filtering for data grids on my ASP.NET page. I want to do the filtering in the Stored Procedure using Dynamic SQL to set the WHERE clause. However, one fear of mine is SQL injection from the client. How can I avoid arbitrary SQL injection, yet still allow for a dynamic WHERE clause to be passed into the stored procedure?
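One pattern that keeps the WHERE clause dynamic without ever concatenating user input is to build the SQL only from a fixed set of hard-coded column names and pass every user-supplied value through sp_executesql as a parameter. A sketch using made-up procedure, table, and parameter names:

create procedure dbo.SearchPlayers       -- hypothetical example
    @PlayerName  nvarchar(100) = null,
    @PlayerGenre nvarchar(50)  = null
as
begin
    declare @sql nvarchar(4000)
    set @sql = N'select UserId, PlayerName, PlayerGenre from dbo.Players where 1 = 1'

    -- only trusted, hard-coded fragments are ever concatenated
    if @PlayerName is not null  set @sql = @sql + N' and PlayerName = @PlayerName'
    if @PlayerGenre is not null set @sql = @sql + N' and PlayerGenre = @PlayerGenre'

    -- user values travel as parameters, never as text, so they cannot inject SQL
    exec sp_executesql @sql,
         N'@PlayerName nvarchar(100), @PlayerGenre nvarchar(50)',
         @PlayerName, @PlayerGenre
end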
I currently have an asp script that is generating a 12 month rolling report. From asp I'm running a for loop with 12 iterations, each one sending the following query:
select count(a.aReportDate) as ttl from findings f left outer join audits a on a.aID = f.auditID where f.findingInvalid <> 1 and month(aReportDate) = " & Mo & " and year(aReportDate) = " & Yr
where the Mo and Yr variables are incremented accordingly.
I actually have 4 sets of data being pulled back to populate a graph, so this results in 48 queries with each page load! Obviously not ideal. So I'm hoping to reduce this to 4 queries. I was playing with the following in enterprise manager:
DECLARE @DT DATETIME
DECLARE @CNT INT
SET @DT = '10/31/07'
SET @CNT = 1

WHILE (@CNT < 12)
BEGIN
    select count(a.aReportDate) as ttl
    from findings f
    left outer join audits a on a.aID = f.auditID
    where f.findingInvalid <> 1
      and month(aReportDate) = month(@DT)
      and year(aReportDate) = year(@DT)

    SET @CNT = @CNT + 1
END
I haven't yet added any logic to increment the date, but my concern is that it looks like it is returning 12 separate results. Is there any way to combine this all into one resultset that will be passed back to my asp script? Hopefully this makes sense?
Suggestions on a completely different approach would also be welcome.
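One way to collapse the 48 calls is to let GROUP BY return all twelve months in a single resultset per data set. A sketch of the idea, assuming the rolling window runs from November 2006 through October 2007:

declare @Start datetime, @End datetime
set @Start = '11/01/2006'        -- first day of the earliest month in the window (assumed)
set @End   = '11/01/2007'        -- first day after the last month in the window

select year(a.aReportDate)  as yr,
       month(a.aReportDate) as mo,
       count(a.aReportDate) as ttl
from findings f
left outer join audits a on a.aID = f.auditID
where f.findingInvalid <> 1
  and a.aReportDate >= @Start
  and a.aReportDate <  @End
group by year(a.aReportDate), month(a.aReportDate)
order by yr, mo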
I hope someone can help me revise a long-running query. Here is the query:
select * from table1 where classid is null and productid not in ( select productid from table1 where classid = 67)
Here, table1 can contain several occurrences of a productid, and each occurrence can have a different classid. The possible values of classid are: NULL, 1, 2, 3, 67. Basically I am looking for all records whose classid is null but whose productid never has an instance in table1 where its classid is 67.

Is there something like a "join" statement that will include only the records in the left table that are not in the right table?
Hope someone could help me with this. Thanks in advance.
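Yes - the usual pattern is a LEFT JOIN that keeps only the rows with no match on the right (an "anti-join"); NOT EXISTS does the same job and avoids the NULL pitfalls of NOT IN. For the query above it could look like this:

select t1.*
from table1 t1
left join table1 t67
       on t67.productid = t1.productid
      and t67.classid   = 67
where t1.classid is null
  and t67.productid is null      -- no classid = 67 row exists for this productid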
I have a table in our system that holds temporary data for doing calculations. It will process several million records each time they forecast our products.
Is there any way to have the SQL server NOT add these transactions to the transaction log, since I'm going to wipe the data anyway? I'd like to be able to pick and choose the tables that are 'backed up' into the transaction log...
I am trying to figure out an efficient way of comparing two tables of identical structure and primary keys. I want to do a join where one of the tables reveals values for records which have been modified and/or updated.
To illustrate, I have two tables in the generic form:
id-dt-val
For which the 'val' in table 2 could be different from the 'val' in table 1, for a given id-dt coupling that is identical in both tables.
Does anyone know of an efficient way I could return all id-dt couplings in table 2 which have values that are different from those with the same id-dt couplings in table 1?
NOTE: I am asking this because I am trying to avoid explicit comparisons between the 'val' columns. The tables I am working with in actuality have roughly 900 or so columns, so I don't want to write that kind of monster query (otherwise, I would simply do something like: where a.id = b.id and a.dt = b.dt and a.val <> b.val) - but this won't do in this case.

As a sample query, I have the script below. When I attempt the WHERE NOT EXISTS, as you might expect, I only get the one record in which the id-dt coupling is different from those in table 1, but I'm not sure how to return the other records where the id-dt coupling is the same as in table 1 but where modified values exist:
create table #tab1 ( id varchar(3), dt datetime, val float )
go
create table #tab2 ( id varchar(3), dt datetime, val float )
go

insert into #tab1 values ('ABC','01/31/1990',5.436)
go
insert into #tab1 values ('DEF','01/31/1990',4.427)
go
insert into #tab1 values ('GHI','01/31/1990',7.724)
go

insert into #tab2 values ('XYZ','01/31/1990',3.333)
go
insert into #tab2 values ('DEF','01/31/1990',11.111)
go
insert into #tab2 values ('GHI','01/31/1990',12.112)
go

select a.* from #tab2 a   -- Trouble is, this only returns the XYZ record
where not exists (select b.* from #tab1 b where a.id = b.id and a.dt = b.dt)
go

drop table #tab1
drop table #tab2
go
I really don't want to have to code up a loop to do the value-by-value comparison for inequality, so if anyone knows of an efficient set-based way of doing this, I would really appreciate it.
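If the server is SQL Server 2005 or later, EXCEPT compares every column without having to list all 900 of them; joining the differences back to #tab1 on the key keeps only the modified couplings and drops the genuinely new ones such as XYZ. (On SQL 2000, BINARY_CHECKSUM(*) is a rougher alternative, with a small risk of collisions.)

-- rows in #tab2 that have no identical row in #tab1 (some column differs),
-- restricted to id/dt couplings that also exist in #tab1
select d.*
from (select * from #tab2
      except
      select * from #tab1) d
join #tab1 b on b.id = d.id and b.dt = d.dt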
The C++ application calls the database to look up property data. One troublesome query is a function that returns a table, finding data which is assembled from four or five tables through a view that has a join, and then updating the resulting @table from some other tables. There are several queries inside the function, which are selected according to which parameters are supplied (house #, street, zip, or perhaps parcel number, or house #, street, town, city, ...etc.). If a lot of parameters are provided, and the property is not in the database, then several queries may be attempted -- it keeps going until it runs out of queries or finds something. Usually it takes ~1-2 sec for a hit, but maybe a minute in some failure cases, depending on the distribution of data (~100 million properties in the DB). Some queries operate on the assumption the input data is slightly faulty, and take a relatively long time, e.g., if WHERE ZIP=@Zip fails, we try WHERE ZIP LIKE substring(@Zip,1,3)+'%'. While all this is going on the application may decide the DB is never going to return, and time out; it also seems more likely to throw an exception the longer it has to wait.

Is there a way to cause the DB function to fail if it takes more than a certain amount of time? I could also recast it as a procedure, check the time consumed after every query, and abandon the search if a certain amount of time has elapsed.

Thanks in advance,
Jim Geissman
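There is no built-in per-function time limit, but the "recast it as a procedure and check the clock" idea is straightforward. A rough sketch of the pattern (the lookups themselves are left as comments, and the 5-second budget is an arbitrary example):

declare @start datetime, @budget_ms int
set @start     = getdate()
set @budget_ms = 5000                 -- give the whole search 5 seconds

-- ... run the first, cheapest lookup here ...

if datediff(ms, @start, getdate()) > @budget_ms
    return                            -- out of time: skip the slower fallback lookups

-- ... run the next, more expensive lookup here ...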
I have a Master/Detail table setup - let's call the master "Account" and the detail "Amount". I also have a "black box" stored procedure (BlackBox_sp) which carries out a lot of complex processing.
What I need to do is, for each Account, iterate through its Amount records and call the black box each time. Once I've finished going through all the Amount records, I need to call the black box once more for the Account. This must be done with the Account & Amount rows in a specific order.
So I have something along the lines of
Code Block
DECLARE @Total int

DECLARE Account_cur CURSOR FOR ...       -- the accounts, in the required order
OPEN Account_cur
FETCH NEXT FROM Account_cur
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Total = 0

    DECLARE Amount_cur CURSOR FOR ...    -- this account's amounts, in the required order
    OPEN Amount_cur
    FETCH NEXT FROM Amount_cur
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @Total = @Total + @Amount

        EXEC BlackBox_sp @Amount         -- once per Amount row
        FETCH NEXT FROM Amount_cur
    END
    CLOSE Amount_cur

    EXEC BlackBox_sp @Total              -- once per Account

    FETCH NEXT FROM Account_cur
END
CLOSE Account_cur
Any tips on another approach would be appreciated, given the constraints I have.
Hi there, I'm using a query to fetch data from a table where one of the criteria is an IN(...) clause on the key column of the table. The data being retrieved comes back ordered by the key column of the table even though I haven't specified any ORDER BY clause. I want to know if there is a way to have the data fetched in the order of my IN(...) clause.
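Rows come back in whatever order the engine finds convenient unless an ORDER BY is given, so the trick is to supply the IN-list together with its desired position and join to it. A sketch with made-up table, column, and key values:

select t.*
from MyTable t                                   -- placeholder table
join (select 5 as KeyValue, 1 as SortOrder
      union all select 2, 2
      union all select 9, 3) k
  on k.KeyValue = t.KeyColumn                    -- replaces IN (5, 2, 9)
order by k.SortOrder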
Is there a way to temporarily disable logging to the transaction log?
In our system, we perform purging of our database every night, where the purging consists of 2 steps:
1. For each table, insert the data, to be deleted, into a corresponding "purged" table, to remain there for one day only.
2. For each table, delete the unnecessary data (i.e. same data stored in purged tables in step 1)
During these 2 steps the transaction log grows, and since we perform transaction log backups, the backup at that time is huge. We are running a bit low on hard disk space and I'd like to disable logging to the transaction log while these operations are performed.
I really don't care about being able to recover this data.
I thought that one option is to set the database to simple recovery, then perform the purging of the database, and then change back to full.
However, I think that trans log can grow even if recovery model is simple [although you won't be able to retrieve any changes].
So, is there a way to delete a portion of a table [or insert into it] so that no data is written to the transaction log? (I know that we can use TRUNCATE if we need to remove a whole table's contents without logging.)
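Individual INSERTs and DELETEs are always logged, so the log writes themselves cannot be switched off, but under SIMPLE (or BULK_LOGGED) recovery the space is reused once each transaction commits; doing the purge in small batches then keeps the log file from ballooning. A sketch of the batching idea (the table and column names are invented; on SQL Server 2000 SET ROWCOUNT would replace DELETE TOP):

-- delete in 10,000-row chunks so no single transaction holds much log space
while 1 = 1
begin
    delete top (10000) from dbo.PurgedOrders          -- hypothetical table
    where PurgeDate < dateadd(day, -1, getdate())

    if @@rowcount = 0 break
end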
1. Inserting records from the front end using begin trans ... commit trans. 2. By using stored procedures? Is there any begin trans ... commit trans in a stored procedure? If so, how do I use it?
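Yes - BEGIN TRAN / COMMIT TRAN can live inside the stored procedure itself. A minimal sketch (the procedure, table, and columns are examples only, and TRY/CATCH needs SQL Server 2005 or later):

create procedure dbo.InsertOrder
    @CustomerID int,
    @Amount     money
as
begin
    begin transaction
    begin try
        insert into dbo.Orders (CustomerID, Amount)
        values (@CustomerID, @Amount)

        commit transaction
    end try
    begin catch
        if @@trancount > 0 rollback transaction
        raiserror('InsertOrder failed', 16, 1)    -- surface the failure to the caller
    end catch
end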
I am trying to insert 200 records into a table that has two fields: TagID and Name. TagID is a uniqueidentifier and is not generated by the table. I created some code but it is not working and I am a little bit confused:
declare @i int
select @i = 1

while @i <= 800
begin
    insert into Tags (TagID, [name])
    values (newid(), 'Tag ' + right('000' + convert(varchar(3), @i), 3))

    select @i = @i + 1
end
I currently have two tables called Book and JournalPaper, both of which have a column called Publisher. Currently the data in the Publisher column is the Publisher name that is entered straight into either table and has been duplicated in many cases. To tidy this up I have created a new table called Publisher where each entry will have a unique ID.
I now want to remove the Publisher columns from Book and JournalPaper, replace it with an ID foreign key column and move the Publisher name data into the Publisher table. Is there a way I can do this without duplicating the data as some publishers appear several times on both tables?
Any help with this will be greatly appreciated as my limited SQL is not up to this particular challenge!!! Thanks!
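One way to do the migration without duplicating publishers, sketched with assumed column names (PublisherID as an identity key and PublisherName in the new Publisher table):

-- 1. load each distinct publisher name once (UNION removes duplicates across both tables)
insert into Publisher (PublisherName)
select Publisher from Book
union
select Publisher from JournalPaper
go

-- 2. add the foreign key columns
alter table Book         add PublisherID int null
alter table JournalPaper add PublisherID int null
go

-- 3. back-fill the new keys from the old name columns
update b set PublisherID = p.PublisherID
from Book b join Publisher p on p.PublisherName = b.Publisher

update j set PublisherID = p.PublisherID
from JournalPaper j join Publisher p on p.PublisherName = j.Publisher
go

-- 4. once everything is verified, drop the old name columns
alter table Book         drop column Publisher
alter table JournalPaper drop column Publisher
go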
Hi, I have a DataTable which has at least ten records. I want to insert these records all at once into SQL Server 2000 when the user clicks the Save button. Please send me the code for the procedure and the code for sending the batch of data at once from the ASP.NET form.
I am currently working on a simple page to insert 1.6 million UK postcode records into a SQL Server table. The table has three columns for the postcode, longitude coordinate and latitude coordinate. The data is sourced from a pipe (|) delimited txt file and inserted into the database using a FOR loop. The problem I have is that the page will hang after inserting only 10,000 records; the page displays either an invalid ViewState error or a page-cannot-be-found error. Now I assume the ViewState error stems from the fact that there is a form on the page which simply contains a button to execute the script and a few labels to show the progress, but without the form and associated ViewState the insert still fails to complete... any ideas? Would I be better off running this on a thread, or should I just do it in stages and be patient? I have now modified the page to read the database on load and pick up from where it crashes.
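Pushing 1.6 million rows through a page one INSERT at a time is what is killing it; loading the pipe-delimited file directly on the server with BULK INSERT (or bcp) avoids the page timing out altogether. A sketch, assuming a staging table and a file path that the SQL Server service can see (both names are assumptions):

-- Postcodes(Postcode varchar(10), Longitude float, Latitude float) is an assumed table
bulk insert dbo.Postcodes
from 'C:\data\postcodes.txt'          -- assumed path on the server
with (fieldterminator = '|',
      rowterminator   = '\n',
      tablock)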
Once again - my table should consist of 100 new records for a field MobilePhone (of char type) and the last 5 digits should be randomly chosen. It should look like this: +381 followed by random digits (example: +38156465, where the '+' sign makes it a char type and the digits after +381 are randomly chosen). Does anyone know how to solve this... PLEASE?
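One way to generate the random digits on the server is CHECKSUM(NEWID()), which gives a different pseudo-random number for every call. A sketch that fills 100 rows into an assumed table MobileNumbers(MobilePhone char(9)):

declare @i int
set @i = 1

while @i <= 100
begin
    -- '+381' plus 5 random digits, zero-padded
    insert into MobileNumbers (MobilePhone)
    values ('+381' + right('00000' + convert(varchar(5), abs(checksum(newid())) % 100000), 5))

    set @i = @i + 1
end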
Good day to all, I am new here so I hope I am doing things correctly.
The company I work for makes coils of shaped wire and works a 6 - 6 shift pattern.
I have a database that is updated from a data collection source (MS Access) at 06:00 every morning. This seems to be working OK; my problem is that most coils fit nicely into the 6 - 6 shift pattern, but some now and again drift over into the next shift. I have written a Crystal report that picks up this data. At the moment the coils are put in the database as: [Coil Start Time], [Coil Finish Time], [Coil Start Weight], [Coil Finish Weight], etc.
I have written (been helped to write) a SQL statement that will do the following:
Step 1: If the coil finish time is greater than the shift end time, then set the shift end time to be the coil end time and zero the start and finish weight. Step 2: The original coil record is duplicated and the coil start time set to the start time of the shift, with all other data left alone.
Example of code:
-->>
SELECT [Batch Name], [Batch Start], [Batch End], [Coil Start Weight], [Coil Finish Weight],
       [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE (DATEPART(hour, [Batch Start]) >= 6 AND DATEPART(hour, [Batch End]) < 18)
   OR ((DATEPART(hour, [Batch Start]) < 6 OR DATEPART(hour, [Batch Start]) >= 18)
       AND (DATEPART(hour, [Batch End]) < 6 OR DATEPART(hour, [Batch End]) >= 18))

UNION ALL

SELECT [Batch Name], [Batch Start],
       DATEADD(hour, 17, DATEADD(minute, 59, CONVERT(char(10), [Batch End], 101))),
       0, 0, [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE DATEPART(hour, [Batch Start]) >= 6 AND DATEPART(hour, [Batch Start]) < 18
  AND (DATEPART(hour, [Batch End]) < 6 OR DATEPART(hour, [Batch End]) >= 18)

UNION ALL

SELECT [Batch Name],
       DATEADD(hour, 18, CONVERT(char(10), [Batch Start], 101)),
       [Batch End], [Coil Start Weight], [Coil Finish Weight],
       [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE DATEPART(hour, [Batch Start]) >= 6 AND DATEPART(hour, [Batch Start]) < 18
  AND (DATEPART(hour, [Batch End]) < 6 OR DATEPART(hour, [Batch End]) >= 18)

UNION ALL

SELECT [Batch Name], [Batch Start],
       DATEADD(hour, 5, DATEADD(minute, 59, CONVERT(char(10), [Batch End], 101))),
       0, 0, [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE (DATEPART(hour, [Batch Start]) < 6 OR DATEPART(hour, [Batch Start]) >= 18)
  AND DATEPART(hour, [Batch End]) >= 6 AND DATEPART(hour, [Batch End]) < 18

UNION ALL

SELECT [Batch Name],
       DATEADD(hour, 6, CONVERT(char(10), [Batch Start], 101)),
       [Batch End], [Coil Start Weight], [Coil Finish Weight],
       [Product], [Shift], [Operator ID], [Works Order No]
FROM dbo.tblCoilData
WHERE (DATEPART(hour, [Batch Start]) < 6 OR DATEPART(hour, [Batch Start]) >= 18)
  AND DATEPART(hour, [Batch End]) >= 6 AND DATEPART(hour, [Batch End]) < 18
<<--
I have 2 options now
option 1: Leave this as a SQL View and report from this
option 2: Insert updated records to the tblCoilData table so that the data in the table is permanent
I would prefer option 2 but am a bit of a nugget when it comes to writing update / insert statements. Could someone please help me with this?
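For option 2, the split rows produced by the SELECT above can be written back with an INSERT ... SELECT. A rough sketch, assuming the query has been saved as a view called vwCoilDataSplit (a made-up name) and ignoring, for now, the matching UPDATE that would also be needed to trim the original rows' end times and weights:

insert into dbo.tblCoilData
    ([Batch Name], [Batch Start], [Batch End], [Coil Start Weight], [Coil Finish Weight],
     [Product], [Shift], [Operator ID], [Works Order No])
select v.[Batch Name], v.[Batch Start], v.[Batch End], v.[Coil Start Weight], v.[Coil Finish Weight],
       v.[Product], v.[Shift], v.[Operator ID], v.[Works Order No]
from dbo.vwCoilDataSplit v
where not exists (select 1
                  from dbo.tblCoilData t
                  where t.[Batch Name]  = v.[Batch Name]
                    and t.[Batch Start] = v.[Batch Start])   -- don't re-insert rows already present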
I am trying to insert into the target table using a SELECT with the LIKE keyword. It is inserting the records in an arbitrary order, but the records need to be inserted in a sequence defined by an ORDER BY. The problem is that I am not able to use ORDER BY because of the 42 lakh (4.2 million) records. Any possible solution for this is appreciated.
Following is the example. ------- insert into target table select * from test_policy where policynumb like '[cdklnqtswxyabeghjmpr1z]%'
This statement is used in a stored procedure.
These records should be sorted during insertion, but they are being inserted in an arbitrary order.
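Rows in a table have no guaranteed physical order, so "inserting in sequence" only has a visible effect if the target table has something to order by, for example an IDENTITY column whose values are assigned in ORDER BY sequence during the insert. A sketch (the identity column, column list, and sort column are assumptions about the real schema):

-- assuming target_table also has e.g. LoadSeq int identity(1,1);
-- the ORDER BY guarantees the identity values follow policynumb order
insert into target_table (policynumb, othercolumns)           -- column list is illustrative
select policynumb, othercolumns
from test_policy
where policynumb like '[cdklnqtswxyabeghjmpr1z]%'
order by policynumb

-- later reads must still say ORDER BY (e.g. ORDER BY LoadSeq) to see that order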