I am trying to figure out an efficient way of comparing two tables with identical structure and primary keys. I want to do a join where one of the tables reveals values for records that have been modified or updated.
To illustrate, I have two tables in the generic form:
id-dt-val
For which the 'val' in table 2 could differ from the 'val' in table 1 for a given id-dt coupling that appears in both tables.
Does anyone know of an efficient way I could return all id-dt couplings in table 2 which have values that are different from those with the same id-dt couplings in table 1?
NOTE: I am asking this because I am trying to avoid explicit comparisons between the 'val' columns. The tables I am actually working with have roughly 900 columns, so I want to avoid that kind of monster query (otherwise, I would simply do something like where a.id = b.id and a.dt = b.dt and a.val <> b.val, but that won't do in this case).
As a sample, I have the script below. When I attempt the where not exists, as you might expect, I only get the one record whose id-dt coupling does not appear in table 1. I'm not sure how to also return the records whose id-dt coupling does appear in table 1 but whose values have been modified:
create table #tab1
(
id varchar(3),
dt datetime,
val float
)
go
create table #tab2
(
id varchar(3),
dt datetime,
val float
)
go
insert into #tab1
values
('ABC','01/31/1990',5.436)
go
insert into #tab1
values
('DEF','01/31/1990',4.427)
go
insert into #tab1
values
('GHI','01/31/1990',7.724)
go
insert into #tab2
values
('XYZ','01/31/1990',3.333)
go
insert into #tab2
values
('DEF','01/31/1990',11.111)
go
insert into #tab2
values
('GHI','01/31/1990',12.112)
go
select a.* from #tab2 a --Trouble is, this only returns the XYZ record
where not exists
(select b.* from #tab1 b where a.id = b.id and a.dt = b.dt)
go
drop table #tab1
drop table #tab2
go
I really don't want to have to code up a loop to do a value-by-value comparison for inequality, so if anyone knows of an efficient set-based way of doing this, I would really appreciate it.
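If you can use SQL Server 2005 or later, EXCEPT compares entire rows without naming any of the 900 columns (and, unlike <>, it treats NULLs as equal). A sketch against the sample tables:

```sql
-- Rows of #tab2 that differ from #tab1 in any column, restricted to
-- id-dt couplings that exist in both tables:
select d.*
from (select * from #tab2
      except
      select * from #tab1) d
where exists (select 1 from #tab1 b
              where b.id = d.id and b.dt = d.dt)
```

With the sample data this returns the DEF and GHI rows (changed values) and skips XYZ (no matching id-dt in #tab1). On SQL Server 2000 there is no EXCEPT; comparing BINARY_CHECKSUM(*) between the tables can approximate it, at the cost of rare checksum collisions.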
hi, I am trying to 1) get all the names in one table that match another table, and 2) if count = 0, insert into another table, 'group', after getting its key.
I am totally lost; can somebody point me in the right direction?
How do I get the individual name so that I can insert it into the group table? I tried select @name = name, but that gave no results. Thanks.
while (SELECT name FROM names WHERE name in (select name from Group) and status = 'M') = 0
begin
    insert into group (group_id, name, action) values (@key, name, 'play')
end
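A set-based INSERT ... SELECT avoids the variable and the loop entirely. A sketch using the names from the post (GROUP is a reserved word, hence the brackets), inserting every 'M'-status name that is not already in the group table:

```sql
insert into [Group] (group_id, name, action)
select @key, n.name, 'play'
from names n
where n.status = 'M'
  and not exists (select 1 from [Group] g
                  where g.name = n.name)
```

Each qualifying name is inserted in one statement, so there is no need to fetch names one at a time into @name.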
Hi, I have the pseudo-code below. It displays the correct result; however, it takes quite a while to display everything. I think it's because there are so many nested loops, and each record in each loop issues its own query against the database. I need some advice about how to speed up the process time. Thanks in advance.

...
Do While NOT RS.EOF
    SQL_1 = "SELECT * FROM TB1"
    Set RS2 = Conn.Execute(SQL_1)
    Do While NOT RS2.EOF
        SQL_2 = "SELECT * FROM TB2 WHERE NAME=" + RS2.Fields("Name")
        Set RS3 = Conn.Execute(SQL_2)
        Do While NOT RS3.EOF
            SQL_3 = "SELECT * FROM TB3 WHERE AGE=" + RS3.Fields("Age")
            Set RS4 = Conn.Execute(SQL_3)
            Do While NOT RS4.EOF
                Response.Write RS4.Fields("VAL")
            Loop
        Loop
    Loop
Loop
...
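The nested recordset loops are equivalent to a single join, so one query can replace all the per-row round trips. A sketch with the join keys inferred from the WHERE clauses in the pseudo-code:

```sql
-- One round trip instead of one query per row:
select t3.VAL
from TB1 t1
inner join TB2 t2 on t2.NAME = t1.Name
inner join TB3 t3 on t3.AGE = t2.Age
```

The ASP page then walks a single recordset and writes VAL, which moves the looping from the web server into the database engine where it belongs.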
Struggling with how to implement the following pseudo-code in SQL Server 2000. ** Can you use more than one CURSOR variable? If yes, when you use @@FETCH_STATUS, is it for cur1 or cur2?
Sample data is at the bottom. Thanks for ANY suggestions !!
** Assume TABLE1 is sorted by Record_Type, Order_no, Order_line_no

dim @rectyp
dim @ord#
dim @lin#

Fetch (?) 1st record in TABLE1
While still records in TABLE1
    Set sub_line# = 0
    set @rectyp = Record_Type, @ord# = Order_no, @lin# = Order_line_no
    while @rectyp = Record_Type and @ord# = Order_no and @lin# = Order_line_no
        Set sub_line# = sub_line# + 1
        update TABLE1 set line_ctr = sub_line#
        get next record
    end inner WHILE
end outer WHILE
Sample data as it currently exists:

Record_Type   Order_No   Order_Line_No   Line_Ctr
OP            458001     5               0
OP            458001     5               0
OP            458001     5               0
OP            458001     5               0
OP            458191     1               0
OP            458191     1               0
OP            458308     73              0
OP            458308     73              0
OP            458308     73              0
OP            458308     73              0
Want data to look like this after executing code:

Record_Type   Order_No   Order_Line_No   Line_Ctr
OP            458001     5               1
OP            458001     5               2
OP            458001     5               3
OP            458001     5               4
OP            458191     1               1
OP            458191     1               2
OP            458308     73              1
OP            458308     73              2
OP            458308     73              3
OP            458308     73              4
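One widely used SQL Server 2000 trick avoids cursors entirely: an UPDATE that assigns a column and local variables in the same statement, relying on the table's clustered index order. This behavior is undocumented, so treat this as a sketch and verify the results; it assumes TABLE1 has a clustered index on (Record_Type, Order_no, Order_line_no):

```sql
declare @ctr int, @rectyp varchar(10), @ord int, @lin int
select @ctr = 0, @rectyp = '', @ord = -1, @lin = -1

-- Restart the counter at 1 whenever the (Record_Type, Order_no, Order_line_no)
-- group changes; otherwise increment it.
update TABLE1
set @ctr = line_ctr = case when Record_Type = @rectyp
                            and Order_no      = @ord
                            and Order_line_no = @lin
                           then @ctr + 1 else 1 end,
    @rectyp = Record_Type,
    @ord    = Order_no,
    @lin    = Order_line_no
```

Because the product does not guarantee the update order, spot-check the Line_Ctr values against the expected output above before trusting it in production.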
I have a stored procedure that I am trying to write. I want it to grab a list of ids from one table, then loop through that list and select a group of records from a second table based on the first list. The second list can have 1+ records per item from the first list. Within each record, I need to check the values in two fields; if the values equal something, I update a third field and move on to the next. So I need to be able to loop through the second set also. Currently I have a cursor within a cursor, but was told that is slow and not a good idea, so I am trying to figure out what other options I have.
Hello all. I'm mostly a VB.Net developer. I've been working on an intranet app that allows people in our company to self-register for access to Tools/Systems.
The data structure: one Request with many Users requesting access. Also on the same request, many applications requested for those same people.
After the application routes some 'Manager Reviews' and updates tblRequest, approving the request, I combine the information into tblRegistrations.
So there is a Matrix created. The Users requested are (User1, User2, UserX....) The Apps requested are (App1, App2, Appx....)
I created a Stored Proc that reads the list of Users and inserts a record into tblRegistered for each App that was requested:

User1, App1
User1, App2
User1, App3
User1, Appx
User2, App1
User2, App2
User2, App3
User2, AppX
UserX, AppX
...., ....

I use 2 cursors, nested. The outer cursor is the list of Users, and the inner cursor is the list of Apps.
I would appreciate it if someone could show me how to do this with some other looping structure, or if not, examine the declarative statement of the cursor and define the properties to make it most efficient, i.e. STATIC, LOCAL, FORWARD_ONLY, etc.
As I said, I don't have DB developer experience.
Below is the copy of the Working Stored Proc:
ALTER PROCEDURE [dbo].[sp_Insert_Registration]
    -- Add the parameters for the stored procedure here
    @Request_ID INT
    ,@SessionID nvarchar(150)

--Local Scalar Values
Select @Reg_Date = Request_Date
From dbo.tblRequest
Where Request_ID = @Request_ID

--First Cursor
DECLARE curRequest_Users CURSOR LOCAL STATIC FOR
    SELECT Request_User_FullName, Request_User_IonName
    From dbo.tblRequest_Users
    Where Request_ID = @Request_ID AND request_User_Session_ID = @SessionID

Open curRequest_Users

--Second Cursor
DECLARE curRequest_Applications CURSOR LOCAL STATIC FOR
    SELECT Request_App_ID, Request_App_Role
    From dbo.tblRequest_Applications
    Where Request_ID = @Request_ID AND Request_Apps_SessionID = @SessionID

FETCH curRequest_Users INTO @Request_user_FullName, @Request_User_IonName
WHILE @@FETCH_STATUS = 0
BEGIN
    --Insert the row into tblRegistrations
    --Need to get the Application IDs the user(s) has been approved for
    Open curRequest_Applications
    FETCH curRequest_Applications INTO @Request_App_ID, @Request_App_Role
    WHILE @@FETCH_STATUS = 0
    BEGIN
        Insert Into dbo.tblRegistrations
            (Request_ID, FullName, IonName, Application_ID,
             Application_Role, Reg_Date, Approved, Approval_Date)
        Values
            (@Request_ID, @Request_user_FullName, @Request_User_IonName,
             @Request_App_ID, @Request_App_Role, @Reg_Date, 'True', getdate())
        FETCH curRequest_Applications INTO @Request_App_ID, @Request_App_Role
    END
    --Close the inner cursor
    CLOSE curRequest_Applications
    FETCH curRequest_Users INTO @Request_user_FullName, @Request_User_IonName
END
DEALLOCATE curRequest_Applications
CLOSE curRequest_Users
DEALLOCATE curRequest_Users
END TRY
BEGIN CATCH
    -- Execute error retrieval routine.
    EXECUTE usp_GetErrorInfo;
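The nested cursors produce the cross product of the request's users and applications, so a single INSERT ... SELECT with a CROSS JOIN can replace both loops. A sketch using the table and column names from the posted procedure:

```sql
insert into dbo.tblRegistrations
        (Request_ID, FullName, IonName, Application_ID,
         Application_Role, Reg_Date, Approved, Approval_Date)
select  @Request_ID, u.Request_User_FullName, u.Request_User_IonName,
        a.Request_App_ID, a.Request_App_Role, @Reg_Date, 'True', getdate()
from    dbo.tblRequest_Users u
cross join dbo.tblRequest_Applications a   -- every user paired with every app
where   u.Request_ID = @Request_ID
  and   u.request_User_Session_ID = @SessionID
  and   a.Request_ID = @Request_ID
  and   a.Request_Apps_SessionID = @SessionID
```

One statement inserts the whole User x App matrix, which the optimizer handles far better than row-by-row fetches.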
Can somebody please tell me how I can write a T-SQL statement in SQL Server 2000?
At the same time, how can I get the last digits of an integer variable through T-SQL? What I want is to write right(intVariable, 4), which is VB. I want that in SQL Server 2000.
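In T-SQL (SQL Server 2000) you can either cast to a string and use RIGHT, or use the modulo operator for a single trailing digit. A quick sketch:

```sql
declare @i int
set @i = 123456

select right(cast(@i as varchar(12)), 4)  -- '3456', like VB's Right()
select @i % 10                            -- 6, the last digit via modulo
```

RIGHT mirrors the VB call exactly; % is handy when you need the digit as a number rather than a string.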
I brought my server to its knees by creating 499.9GB of transaction log on a 500GB drive. Oops.
The db recovery model is SIMPLE.
I want to loop through some code and minimize the transaction log file size. My second query here has an explicit transaction. Assume this code will loop about half a billion times. Is one (or both) of these going to record ALL 500,000,000 update statements in the transaction log (remember, SIMPLE recovery)? Will the second one record a single transaction 500,000,000 times but not overwhelm the log file, due to simple recovery?
Any thoughts anyone?
Code Snippet

declare @i bigint
select @i = min(ID) from <MyTable>
While @i is not null
begin
    update <MyTable> set <SomeField> = <SomeValue> where ID = @i
    select @i = min(id) from <MyTable> where ID > @i
end
Code Snippet

declare @i bigint
select @i = min(ID) from <MyTable>
While @i is not null
begin
    BEGIN TRANSACTION
    update <MyTable> set <SomeField> = <SomeValue> where ID = @i
    COMMIT
    select @i = min(id) from <MyTable> where ID > @i
end
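Under SIMPLE recovery the log space of committed transactions is reused at checkpoints, but every statement must still fit in the log until it commits, and half a billion single-row transactions pay huge per-transaction overhead. A middle ground is batching, here sketched with SET ROWCOUNT (which still limits UPDATE on SQL Server 2000/2005; the placeholders are the same ones used above):

```sql
declare @done int
select @done = 0
set rowcount 10000                 -- cap each UPDATE at 10,000 rows
while @done = 0
begin
    update <MyTable> set <SomeField> = <SomeValue>
    where <SomeField> <> <SomeValue>   -- skip rows already updated
    if @@rowcount = 0 set @done = 1
    checkpoint                     -- let SIMPLE recovery reuse the log space
end
set rowcount 0                     -- always reset
```

Each batch commits and checkpoints before the next begins, so the log never holds more than one batch's worth of changes.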
Hi, I'm trying to call a stored procedure in a for loop within an ASPX page. My code runs fine, but it seems to skip over the function that uses the stored procedure. The procedure basically copies files from the client to the server. I have the upload portion working, which physically copies the files from the client to the server, but when it comes to copying the path into a field in a table, it totally skips it.
Someone suggested that the for loop is operating too fast for the stored procedure to work.
I need to modify the following report to produce a weekly version of it. To get a weekly version, simply find the first day of the week for the @dtm1 value:
It should be something like @firstofweek = dateadd(day, (datepart(dw, @dtm1) * -1) + 1, @dtm1) to get the Sunday date.
I think I will then need to loop through from first of week to last of week (the last of week will be the Saturday).
I would suggest having a dayofweek column on your temporary table.
CREATE PROCEDURE rpt_siteMealListWeekly
@dtm1 AS DATETIME
,@cmb1 AS VARCHAR(100)
WITH ENCRYPTION
AS
--DECLARE @dtm1 AS DATETIME
DECLARE @siteid as integer
SELECT @siteid=siteid from site where sitename=@cmb1
--SELECT @dtm1='2007-09-13',@siteid=1
--select convert (char(11),@dtm1,113)
DECLARE @mybit AS INTEGER
SET @mybit=10
SELECT @mybit = CASE datepart(dw,@dtm1)
WHEN 1 THEN 1 -- 'Sunday'
WHEN 2 THEN 2 -- 'Monday'
WHEN 3 THEN 4 -- 'Tuesday'
WHEN 4 THEN 8 -- 'Wednesday'
WHEN 5 THEN 16 -- 'Thursday'
WHEN 6 THEN 32 -- 'Friday'
WHEN 7 THEN 64 -- 'Saturday'
END
CREATE TABLE #tmp_table(
childid INTEGER null
,br BIT null
,di BIT null
,te BIT null
,type integer default 0
)
INSERT #tmp_table
SELECT DISTINCT sa.childid
,CASE WHEN sha.br & @mybit>0 THEN 1 ELSE 0 END
,CASE WHEN sha.di & @mybit>0 THEN 1 ELSE 0 END
,CASE WHEN sha.te & @mybit>0 THEN 1 ELSE 0 END
,0
FROM
sessionAttendance sa
,simplehoursassignment sha
,session s
,child c
WHERE
sha.childid=sa.childid
AND
sha.siteid=sa.siteid
AND
s.siteid=sa.siteid
AND
c.siteid = sa.siteid
AND
c.childid = sa.childid
AND
sa.siteid=@siteid
AND
s.identityid=sa.identityid
AND
s.dayofweek=datepart(dw,@dtm1)
AND
s.siteid=sa.siteid
AND
@dtm1 between cast(floor(cast(sa.datefrom as float))as smalldatetime) and cast(floor(cast(sa.dateto as float))as smalldatetime)
AND
sa.userdefid=0
AND
(
--check not a company holiday
not exists
(select
1
from
companyholidays ch
where
siteID=@siteID
and
@dtm1 between cast(floor(cast(ch.datefrom as float))as smalldatetime)
and
cast(floor(cast(ch.dateto as float))as smalldatetime)
My problem is basically that I need to call a stored proc for each entry in a table, i.e., basically a for loop calling stored procs with a parameter coming from the table. I know two ways of doing this: using a cursor, and using a while loop with a temp table. I don't like either approach. Is there any good practice for this situation?
declare @AID int

declare mycur cursor fast_forward for
    select ID from sometable

open mycur
fetch next from mycur into @AID
while @@fetch_status = 0
begin
    exec dbo.storedproc @AID
    fetch next from mycur into @AID
end
close mycur
deallocate mycur
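An alternative that avoids both the cursor and the temp table is a keyed WHILE loop; it assumes ID is unique and indexed (names taken from the snippet above):

```sql
declare @AID int

-- Walk the keys in order; no cursor or temp table involved.
select @AID = min(ID) from sometable
while @AID is not null
begin
    exec dbo.storedproc @AID
    select @AID = min(ID) from sometable where ID > @AID
end
```

It is still row-by-row (the proc call forces that), but there is nothing to declare, close, or deallocate, and it only needs an index seek per iteration.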
I have a problem when using nested loops in my Control Flow. The package contains an outer Foreach Loop using the Foreach File Enumerator, which in my test case will loop over two files found in a directory. Inside this loop is another Foreach Loop using the Foreach Nodelist Enumerator. Before entering the inner loop a variable, xpath, is set to a value that depends on the current file, i.e. /file[name = '@CurrentFileName']/content. The Nodelist Enumerator is set to use this variable as its OuterXPATHString. Now, this is what happens:
First Iteration: The first file is found and the value of xpath = /file[name = 'test1.txt']/content. When the inner loop is entered it iterates over the content elements under the file with name test1.txt as expected.
Second Iteration: The second file is found and the value of xpath = /file[name = 'test2.txt']/content. When the inner loop is entered it unexpectedly still iterates over the content elements under the file with name test1.txt.
My question is: Should it not be possible to change the loop condition of an inner loop in an outer loop such that the next time it is entered it will be done based on the new condition? It seems that the xpath variable is read once, the first time, and never again. If that is the case, does anyone know of a workaround?
It would appear that if a child package is called more than once from a parent using the 'Execute Package' task, then after the first execution the parent package variables are not applied to the child package. I.e., we build dimensions in a master database and these are then loaded to a number of topic-specific datamarts. We simply pass parent variables to the child that hold source & target connection strings; the first time the package is called the correct database is accessed, but subsequent executes ignore the variables and use the original values. Manipulating our event queue to run the package once results in the correct behaviour.
Are packages cached when they are called from a parent? If so, is there a flag that I have missed to force a reload each time a child is executed?
This has just become a big problem for us, so any guidance would be greatly appreciated.
I am running a script, and the following statement throws an error because the DTC service is not running on the remote server:
insert into MyLocalTable execute synonym_MyRemoteProcedure @SomeParameter
Since a transaction is not declared within the script, why is the DTC required? How can I avoid the usage of the DTC? Is there a way to say "this code is not within a distributed transaction"?
Requirements: Write a MS SQL Server 2000 stored procedure to: 1. Update the Tasks table by assigning the task to an employee. 2. Increment the employee's Emp_Task_Cnt for each task assigned. 3. Match the employee to the task by matching the Task_Requirement to the Emp_Specialty. 4. Do not exceed the employee's Max_Task_Cnt.
I have a working solution to the requirements, but it involves using cursor logic. For all the obvious reasons, I wanted to avoid using a cursor (or cursor-like looping structure) but could not figure out any other way to avoid processing the Task table one record at a time, because of: "4. Do not allow an employee's Task_Cnt to exceed the Max_Task_Cnt."
Q: Is there a way to do this without using a cursor and still meet all of the requirements?
I'm trying to performance tune a procedure and am sort of being thwarted by caching.
When I first run the procedure, it takes a few seconds which is too long in this case. Subsequent executions in Management Studio are nearly instantaneous, though, which I imagine is due to caching and does not reflect the behavior of the procedure in production.
Is there a way to disable caching so that each execution of the procedure in Management Studio will be consistent and reflect the "first run" performance?
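You can flush the relevant caches between runs so each execution starts cold. Do this on a test server only, since both commands affect the whole instance (the procedure name at the end is a hypothetical stand-in for the one being tuned):

```sql
checkpoint                   -- flush dirty pages so the next command can drop them
dbcc dropcleanbuffers        -- empty the data cache: forces cold reads from disk
dbcc freeproccache           -- drop compiled plans: forces recompilation

exec dbo.MyProc              -- hypothetical procedure under test
```

Run the three commands before each timing to get repeatable "first run" numbers in Management Studio.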
This query uses a cursor to fetch a parameter and pass it to another Stored proc. Is there a straightforward way to do this without using a cursor?
declare @deleteunassigned int
declare cur_unassigned cursor for
    select distinct a.cust_cont_pk
    from cust_cont a, cont_fold_ass b (NOLOCK)
    where a.cust_cont_pk != b.CUST_CONT_PK
open cur_unassigned
fetch next from cur_unassigned into @deleteunassigned
while @@fetch_status = 0
begin
    exec spDeleteCustContbypk @deleteunassigned
    fetch next from cur_unassigned into @deleteunassigned
end
close cur_unassigned
deallocate cur_unassigned
GO
Using small stored procs or sp_executesql dramatically reduces the number of recompiles and increases the reuse of execution plans. This is evident from the usecount in syscacheobjects, perfmon, and profiler. However, I'm at a loss to determine what causes a compilation. Under rare circumstances the usecount for Compiled Plan does not increase as statements are run. Seems to correspond to when there is no execution plan. It would seem to me that compilation is a resource-intensive task that, if possible (data and schema are not changing), should be held to a minimum.

How does one encourage the reuse of compiled plans? Is this the same as minimizing compilation? Looks like some of this behavior is changing in SQL 2005...

Thanks,
Danny
I have a stored procedure spUpdateClient, which takes as params a number of properties of a client application that wants to register its existence with the database. The sp just needs to add a new row or update an existing row with this data.
I tried to accomplish this with code something like this. (The table I'm updating is called Clients, and its primary key is ClientId, which is a value passed into the sp from the client.)
IF (SELECT COUNT(ClientId) FROM Clients WHERE ClientId = @ClientId) = 0
BEGIN
    -- client not found, create it
    INSERT INTO Clients (ClientId, Hostname, Etc)
    VALUES (@ClientId, @Hostname, @Etc)
END
ELSE
BEGIN
    -- client was found, update it
    UPDATE Clients
    SET Hostname = @Hostname, Etc = @Etc
    WHERE ClientId = @ClientId
END

But the client apps call this every second or so, so soon enough I started getting primary key violations. It looks like one client would make two calls nearly at the same time, both would get a 0 value on the SELECT line, so both would try to insert a new row with the same ClientId. No good. So then I added
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION

at the top, and a COMMIT at the bottom. I thought the first one in would get to run the whole sp, and the next one in would have to wait for the first to be done. Instead I'm now getting deadlock errors. If I understand the docs right, that's because the exclusive lock is not placed on the Clients table until the INSERT happens, not at the SELECT. So when two calls to the sp happen at nearly the same time (call them A and B), A does the SELECT, and that locks Clients so nobody else can update it. Then B does the SELECT, locking Clients so nobody else (including A) can update it. Now A needs to exclusively lock Clients to do its INSERT, but B still has that read lock on it, and they're deadlocked. I could catch the deadlock in my client app after SQL Server kills one of the transactions, but it seems to me there should be some way to set a lock at the top of the sp that says "nobody else can enter this sp until I exit it". Any such thing? Thanks.

Nate Hekman
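A common fix for exactly this deadlock is to take an update lock with a range hold on the initial read, inside the transaction, so concurrent callers serialize at the SELECT instead of escalating from shared locks at the INSERT. A sketch using the table and column names from the post:

```sql
begin transaction

-- UPDLOCK: read with an update lock (only one caller can hold it);
-- HOLDLOCK: keep it, serializable-style, until the transaction ends.
if exists (select 1 from Clients with (updlock, holdlock)
           where ClientId = @ClientId)
    update Clients
    set Hostname = @Hostname, Etc = @Etc
    where ClientId = @ClientId
else
    insert into Clients (ClientId, Hostname, Etc)
    values (@ClientId, @Hostname, @Etc)

commit transaction
```

The second caller blocks at the EXISTS check until the first commits, then sees the row and takes the UPDATE branch, so neither the PK violation nor the shared-lock deadlock can occur.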
I have 2 tables, with a one to many relationship - lets say customers, and order items.
Each order record has a field that is meant to be a comma-delimited list of reference numbers, driven by the quantity field. So, say an item in the order record has a quantity of 3. The reference number will look like this:
1, 2, 3
And if the next order item for that customer has a quantity of 4, the reference number value is
4, 5, 6, 7
And the final item with quantity of 2:
8, 9
Reference numbers can either be auto assigned (and are in my web application) or manually set. If manually set they will NOT be numeric.
In my web application, it is possible for users to return to a customer's order and edit a line item. My problem is when a user changes the quantity of an item and I have to reset the reference numbers.
If the quantity of line item 2 changes from 4 to 3, I need to reset all the values for that, and any other, order item that comes after it:
4, 5, 6 (2nd) 7,8 (3rd with same quantity of 2).
I felt a cursor would be the best way to handle this. But I am having trouble re-assigning my variable to be the next number in the series when the cursor is running.
This is what I have so far. The print lines and hard coded values are for debugging purposes only.
DECLARE @NumberingType varchar(10)
DECLARE @TotalSum int
DECLARE @DoorLineItemID int
DECLARE @Quantity int
DECLARE @SeedInt int

SET @SeedInt = 1

SELECT @TotalSum = SUM(Quantity)
FROM DoorLineItems
WHERE UniversalOrderID = 12345

DECLARE UpdateRefCursor CURSOR FOR
    SELECT DoorLineItemID, Quantity
    FROM DoorLineItems
    WHERE UniversalOrderID = 12345 AND NumberingType = 1

OPEN UpdateRefCursor

FETCH NEXT FROM UpdateRefCursor INTO @DoorLineItemID, @Quantity

DECLARE @RefNumberLine varchar(1024)
SET @RefNumberLine = ''

WHILE @@FETCH_STATUS = 0
BEGIN
    WHILE @SeedInt <= @Quantity
    BEGIN
        SET @RefNumberLine = @RefNumberLine + CONVERT(varchar, @SeedInt, 101) + ', '
        SET @SeedInt = @SeedInt + 1
Hello. I have been developing a small site that has two backend SQL Server databases. One for my application data and one for the ASPNETDB database that is created by the ASP .NET Configuration utility. Is it possible to configure the ASP .NET Configuration tool to use my custom database instead of creating a second database called ASPNETDB? Thanks in advance. Kev
I am exclusively using Stored Procedures to access the database, i.e. there are no Ad-Hoc SQL statements anywhere in the C# code. However, one thing I need to be able to do is to allow filtering for data grids on my ASP.NET page. I want to do the filtering in the Stored Procedure using Dynamic SQL to set the WHERE clause. However, one fear of mine is SQL injection from the client. How can I avoid arbitrary SQL injection, yet still allow for a dynamic WHERE clause to be passed into the stored procedure?
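The usual answer is sp_executesql: build the WHERE clause only from a whitelist of column names, and always pass the filter values as parameters rather than concatenating them into the string. A sketch (@FilterColumn, @FilterValue, and the Clients table are hypothetical names for illustration):

```sql
declare @sql nvarchar(4000)

-- Only column names from a fixed whitelist ever reach the SQL string.
if @FilterColumn not in ('Hostname', 'City', 'Status')
    raiserror('Invalid filter column', 16, 1)
else
begin
    set @sql = N'select * from dbo.Clients where '
             + quotename(@FilterColumn) + N' = @val'
    -- The value travels as a real parameter, so it can never be executed as SQL.
    exec sp_executesql @sql, N'@val nvarchar(100)', @val = @FilterValue
end
```

Injection requires attacker-controlled text to be executed; here the only dynamic text is a validated, quotename-wrapped column name, and the value is bound, which also lets the plan be reused across filter values.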
I currently have an asp script that is generating a 12 month rolling report. From asp I'm running a for loop with 12 iterations, each one sending the following query:
select count(a.aReportDate) as ttl from findings f left outer join audits a on a.aID = f.auditID where f.findingInvalid <> 1 and month(aReportDate) = " & Mo & " and year(aReportDate) = " & Yr
where the Mo and Yr variables are incremented accordingly.
I actually have 4 sets of data being pulled back to populate a graph, so this results in 48 queries with each page load! Obviously not ideal. So I'm hoping to reduce this to 4 queries. I was playing with the following in enterprise manager:
DECLARE @DT DATETIME
DECLARE @CNT INT
SET @DT = '10/31/07'
SET @CNT = 1
WHILE (@CNT < 12)
BEGIN
    select count(a.aReportDate) as ttl
    from findings f
    left outer join audits a on a.aID = f.auditID
    where f.findingInvalid <> 1
      and month(aReportDate) = month(@DT)
      and year(aReportDate) = year(@DT)

    SET @CNT = @CNT + 1
END
I haven't yet added any logic to increment the date, but my concern is that it looks like it is returning 12 separate results. Is there any way to combine this all into one resultset that will be passed back to my asp script? Hopefully this makes sense?
Suggestions on a completely different approach would also be welcome.
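One alternative to the loop is a single GROUP BY query per data set: return one row per month and let the ASP code pick the twelve rows apart. A sketch using the names from the posted query, assuming a rolling 12-month window ending at @DT:

```sql
select year(a.aReportDate)  as yr,
       month(a.aReportDate) as mo,
       count(a.aReportDate) as ttl
from   findings f
left outer join audits a on a.aID = f.auditID
where  f.findingInvalid <> 1
  and  a.aReportDate >= dateadd(month, -12, @DT)
  and  a.aReportDate <= @DT
group by year(a.aReportDate), month(a.aReportDate)
order by yr, mo
```

Four of these (one per data set) replace all 48 round trips, and each comes back as a single resultset of up to twelve rows.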
Hope someone can help me revise a long-running query. Here is the query:
select * from table1 where classid is null and productid not in ( select productid from table1 where classid = 67)
In here, table1 could have several occurrences of productid, in which productid could have different classids. The possible values of classid are: NULL, 1, 2, 3, 67. Basically, I am looking for all records whose classid is null but which never have an instance in table1 where the classid is 67.
Do you have something like a "join" statement that will include only the records in the left table that are not in the right table?
Hope someone could help me with this. Thanks in advance.
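Both NOT EXISTS and a LEFT JOIN with an IS NULL test express "left rows with no match on the right", and both usually optimize better than NOT IN (which also misbehaves when productid can be NULL). Two equivalent sketches against table1:

```sql
-- NOT EXISTS form: null-classid rows whose productid never appears with classid 67
select t.*
from   table1 t
where  t.classid is null
  and  not exists (select 1 from table1 x
                   where x.productid = t.productid
                     and x.classid = 67)

-- LEFT JOIN form: the unmatched left rows are the ones where the right side is null
select t.*
from   table1 t
left join table1 x on x.productid = t.productid and x.classid = 67
where  t.classid is null
  and  x.productid is null
```

With an index on (productid, classid), either form lets the optimizer do an anti-semi-join instead of re-scanning table1 for every outer row.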
I have a table in our system that holds temporary data for doing calculations. It will process several million records each time they forecast our products.
Is there any way to have the SQL server NOT add these transactions to the transaction log, since I'm going to wipe the data anyway? I'd like to be able to pick and choose the tables that are 'backed up' into the transaction log...
The C++ application calls the database to look up property data. One troublesome query is a function that returns a table, finding data which is assembled from four or five tables through a view that has a join, and then updating the resulting @table from some other tables. There are several queries inside the function, which are selected according to which parameters are supplied (house #, street, zip, or perhaps parcel number, or house #, street, town, city, ...etc.). If a lot of parameters are provided, and the property is not in the database, then several queries may be attempted -- it keeps going until it runs out of queries or finds something. Usually it takes ~1-2 sec for a hit, but maybe a minute in some failure cases, depending on the distribution of data. (~100 mil properties in the DB) Some queries operate on the assumption the input data is slightly faulty, and take relatively a long time, e.g., if WHERE ZIP=@Zip fails, we try WHERE ZIP LIKE substring(@Zip,1,3)+'%'. While all this is going on the application may decide the DB is never going to return, and time out; it also seems more likely to throw an exception the longer it has to wait. Is there a way to cause the DB function to fail if it takes more than a certain amount of time? I could also recast it as a procedure, and check the time consumed after every query, and abandon the search if a certain amount of time has elapsed.

Thanks in advance,
Jim Geissman
I have a Master/Detail table setup - let's call the master "Account" and the detail "Amount". I also have a "black box" stored procedure (BlackBox_sp) which carries out a lot of complex processing.
What I need to do is, for each Account, iterate through its Amount records and call the black box each time. Once I've finished going through all the Amount records, I need to call the black box once more for the Account. This must be done with the Account & Amount rows in a specific order.
So I have something along the lines of
Code Block
DECLARE Total int

DECLARE Account_cur
OPEN Account_cur
FETCH NEXT FROM Account_cur
WHILE FETCH_STATUS = 0
BEGIN
    SET Total = 0

    DECLARE Amount_cur
    OPEN Amount_cur
    FETCH NEXT FROM Amount_cur
    WHILE FETCH_STATUS = 0
    BEGIN
        SET Total = Total + Amount
        EXEC BlackBox_sp (Amount)
    END
    CLOSE Amount_cur

    EXEC BlackBox_sp (Total)
END
CLOSE Account_cur
Any tips on another approach would be appreciated, given the constraints I have.