Transact SQL :: Query To Update A Table With More Than 150 Million Rows Of Data?

Sep 17, 2015

I have been tasked with writing an update query to update a table with more than 150 million rows of data. Here are the table structures:

Source Tables :

OC
CREATE TABLE [dbo].[OC](
[OC] [nvarchar](255) NULL,
[DATE DEBUT] [date] NULL,
[DATE FIN] [date] NULL,
[Code Article] [nvarchar](255) NULL,
[INSERTION] [nvarchar](255) NULL,

[Code] ....

The update requirement is as follows:

DECLARE @Counter INT=0 --This causes the @@rowcount to be > 0
while @@rowcount>0
BEGIN
    SET rowcount 10000
    update r
    set Comp=t.Comp

[Code] ....

The update ran for more than 48 hours and did not finish. How can I speed it up?
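
One pattern that usually helps here (a sketch added for illustration, not the poster's elided code): batch the update into small chunks and make each pass exclude rows that are already correct, so the loop converges and the transaction log stays small. Table and column names below are assumptions standing in for the truncated code:

-- A hedged sketch, assuming hypothetical tables r and t joined on an Id
-- key: update 10,000 rows per pass, skipping rows that already match so
-- every batch makes progress.
DECLARE @BatchSize int = 10000;

WHILE 1 = 1
BEGIN
    UPDATE TOP (@BatchSize) r
    SET    r.Comp = t.Comp
    FROM   dbo.R AS r                          -- hypothetical target
    JOIN   dbo.T AS t ON t.Id = r.Id           -- hypothetical join key
    WHERE  r.Comp <> t.Comp
       OR (r.Comp IS NULL AND t.Comp IS NOT NULL);

    IF @@ROWCOUNT = 0 BREAK;                   -- nothing left to change

    CHECKPOINT;  -- under SIMPLE recovery this lets log space be reused
END;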

View 6 Replies



SQL Server 2012 :: Update Statement Will Not Update Data Beyond 7 Million Plus Rows Out Of 38 Million Rows

Dec 12, 2014

I run the following statement and it will not update beyond 7 million plus rows, and I have about 38 million to complete. I keep checking the updated row count, and after half a day it is still the same, so I know something is wrong, because the update was rolling along with no problem when I initiated it. I need to complete this ASAP, so it is adding to my frustration. (FYI: the 'Acct_Num_CH' field is encrypted.)

SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
WHILE @@ROWCOUNT > 0
BEGIN
SET rowcount 10000
UPDATE [dbo].[CC_Info_T]
SET [Acct_Num_CH] = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v'
WHERE [Acct_Num_CH] IS NOT NULL
END
SET rowcount 0
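
A likely cause, noted here as an observation rather than a confirmed diagnosis: every row that has already been updated still satisfies [Acct_Num_CH] IS NOT NULL, so each pass can keep re-updating the same 10,000 rows and never advance, which would match the frozen row count. A hedged sketch of a terminating version, which excludes rows already holding the new value and uses TOP instead of the deprecated SET ROWCOUNT:

-- Hedged sketch: exclude rows already set to the target value so each
-- batch makes progress toward the remaining rows.
DECLARE @NewValue varchar(64) = 'ayIWt6C8sgimC6t61EJ9d8BB3+bfIZ8v';

WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) [dbo].[CC_Info_T]
    SET    [Acct_Num_CH] = @NewValue
    WHERE  [Acct_Num_CH] IS NOT NULL
      AND  [Acct_Num_CH] <> @NewValue;

    IF @@ROWCOUNT = 0 BREAK;
END;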

View 5 Replies View Related

Transact SQL :: Adding A Column To A Large (100 Million Rows) Table With Default Constraint?

Apr 24, 2013

IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit NOT NULL CONSTRAINT DoNot_Call_Default DEFAULT 0
IF ( @@ERROR <> 0 )
GOTO QuitWithRollback
END

It just takes a LOT of time in SQL Server Management Studio, and cancelling the query also takes a very long time. I am using SQL Server 2008.
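
For background: on SQL Server 2008, adding a NOT NULL column with a default physically updates every one of the 100 million rows inside a single transaction (the metadata-only optimization for this arrived later, in SQL Server 2012 Enterprise), which is why both the run and the cancel are so slow. A hedged workaround sketch that splits the change into smaller steps:

-- Hedged sketch of a lower-impact sequence on SQL Server 2008:
-- 1) metadata-only: add the column as NULL with no default (instant)
ALTER TABLE dbo.Employee ADD DoNotCall bit NULL;

-- 2) backfill in batches so locks and log growth stay bounded
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.Employee
    SET    DoNotCall = 0
    WHERE  DoNotCall IS NULL;
    IF @@ROWCOUNT = 0 BREAK;
END;

-- 3) enforce the final shape (the NOT NULL change still scans the table
--    to validate, but does not rewrite every row)
ALTER TABLE dbo.Employee ALTER COLUMN DoNotCall bit NOT NULL;
ALTER TABLE dbo.Employee
    ADD CONSTRAINT DoNot_Call_Default DEFAULT 0 FOR DoNotCall;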

View 4 Replies View Related

DB Engine :: Deleting 1 Million Records From Transaction Table Of 10 Million Data On 24/7 Environment

Jun 12, 2015

I have a requirement to delete 1 million records from a table holding 10 million rows, and the table is queried on a 24/7 basis (we have no downtime window). How can I achieve that?
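
A common pattern for purging from a hot table, sketched here with assumed names and an assumed purge predicate: delete in small batches so locks stay granular and concurrent readers squeeze through between batches.

-- Hedged sketch: small DELETE batches avoid lock escalation to a table
-- lock (escalation is attempted around 5,000 locks per statement).
WHILE 1 = 1
BEGIN
    DELETE TOP (4000)
    FROM   dbo.BigTable                    -- hypothetical table
    WHERE  CreatedDate < '2014-01-01';     -- hypothetical purge filter

    IF @@ROWCOUNT = 0 BREAK;

    WAITFOR DELAY '00:00:01';  -- brief gap lets 24/7 readers proceed
END;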

View 13 Replies View Related

Query Optimization Having 10 Million Rows

Feb 27, 2015

I have the following table:

table name : emp_master

empid efname emname elamane efathername emothername deptno edob edoj createdby updateby lastupdatedatetime lastactionperformed

empid is the primary key.

This table contains 20 million records, and I want to run the following query against it to get all employee data based on a 10-year cutoff against the joining date:

select empid, efname, emname, elamane, efathername, emothername, deptno, edob, edoj, createdby, updateby, lastupdatedatetime, lastactionperformed
from emp_master
where year(edoj)+10 > year(getdate())

This returns approximately 10 million rows and takes 18 minutes. What approaches should I take to tune this query and reduce its execution time?
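
Two things stand out. First, wrapping the column in YEAR() makes the predicate non-sargable, so no index on edoj can be seeked. Second, returning roughly 10 million wide rows is itself expensive; much of the 18 minutes may simply be transferring data. A hedged, logically equivalent rewrite of the filter (DATEFROMPARTS needs SQL Server 2012+; on older versions build the boundary date with DATEADD):

-- Hedged sketch: YEAR(edoj) + 10 > YEAR(GETDATE()) is the same as
-- YEAR(edoj) >= YEAR(GETDATE()) - 9, i.e. edoj on or after Jan 1 of
-- (current year - 9). Against the bare column the predicate is sargable.
SELECT empid, efname, emname, elamane, efathername, emothername,
       deptno, edob, edoj, createdby, updateby,
       lastupdatedatetime, lastactionperformed
FROM   emp_master
WHERE  edoj >= DATEFROMPARTS(YEAR(GETDATE()) - 9, 1, 1);

-- Supporting index, assuming one does not already exist:
-- CREATE NONCLUSTERED INDEX IX_emp_master_edoj ON emp_master (edoj);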

View 6 Replies View Related

Transact SQL :: Iterate Through Table Rows And Do Conditional Update

Aug 11, 2015

I have a table with the following fields:

ID (int, identity)
Name (nvarchar(255))
Block (nvarchar(50))
Street (nvarchar(255))
Floor (nvarchar(50))
Unit (nvarchar(50))
Address1 (nvarchar(255))
Address2 (nvarchar(255))

I want to iterate through the table and populate Address1 as [Block] [Street] #[Floor]-[Unit]. If the Floor field contains a number < 10 (e.g., '8'), I want to add a '0' before it (e.g., '08'), and the same for Unit. How would I do this using cursors (or another recommended method)? A set-based sketch follows.
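
No cursor is required; one set-based UPDATE covers every row. A hedged sketch, assuming Floor and Unit hold one- or two-digit numeric strings, with a hypothetical table name:

-- Hedged sketch: RIGHT('0' + value, 2) left-pads single digits ('8' ->
-- '08') and leaves two-digit values untouched. Any NULL component will
-- make Address1 NULL; wrap the parts in ISNULL() if that matters.
UPDATE dbo.Locations          -- hypothetical table name
SET    Address1 = Block + ' ' + Street + ' #'
                + RIGHT('0' + Floor, 2) + '-'
                + RIGHT('0' + Unit, 2);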

View 4 Replies View Related

Transact SQL :: Update Table From Another Table For Multiple Rows

Jun 10, 2015

Matrix table has ID column and data below.

ID    Flag   TestDate     Value   Comment
111   2      12/15/2014   7.5     NULL
222   2      NULL         10      received

Matrix_Current table could have 1 or multiple rows as below.

ID    Flag   TestDate     Value   Comment
111   2      01/26/2015   7.9
111   2      02/23/2015   7.9
111   2      04/07/2015   6.8
222   1      NULL         8       test comment 1
222   3      NULL         9       test comment 2

When I run the update below:

UPDATE M
SET M.Flag = MC.Flag, M.TestDate = MC.TestDate,
    M.Value = MC.Value, M.Comment = MC.Comment
FROM dbo.Matrix M INNER JOIN dbo.Matrix_Current MC ON M.ID = MC.ID

The Matrix table ends up with the values below:

ID    Flag   TestDate     Value   Comment
111   2      01/26/2015   7.9
222   1      NULL         8       test comment 1

I want the update to take the last row for each ID from Matrix_Current, so the final table would look like this:

ID    Flag   TestDate     Value   Comment
111   2      04/07/2015   6.8
222   3      NULL         9       test comment 2
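
A hedged sketch of one way to get "last row per ID" behavior: rank the Matrix_Current rows per ID and join only the top-ranked one into the update. The ORDER BY needs a column that actually defines "last"; TestDate is NULL for ID 222, so a hypothetical identity column RowID is assumed here as the tiebreaker. Substitute whatever defines row order in the real table.

-- Hedged sketch: number rows per ID, newest first, update from rn = 1.
;WITH LastRow AS
(
    SELECT ID, Flag, TestDate, Value, Comment,
           ROW_NUMBER() OVER (PARTITION BY ID
                              ORDER BY RowID DESC) AS rn  -- assumed column
    FROM   dbo.Matrix_Current
)
UPDATE M
SET    M.Flag     = LR.Flag,
       M.TestDate = LR.TestDate,
       M.Value    = LR.Value,
       M.Comment  = LR.Comment
FROM   dbo.Matrix AS M
JOIN   LastRow    AS LR ON LR.ID = M.ID AND LR.rn = 1;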

View 3 Replies View Related

500 Million Rows Of Data?

Apr 9, 2008

I'm new to using a DB and have a few questions about what I'm trying to do. I have some historical options data and want to place it into a SQL Express database. (I understand I might need to move to a non-Express edition once the DB gets too big.) A month's worth of data is over 5.5 million rows, so six years' worth is ~400 million rows. Is it possible to put this into a SQL database and be able to search it very fast? I have a month's worth in a DB now and it is pretty slow. Should I use a new table for each month, and so have 6 years * 12 months = 72 tables to increase search speed? I search by date and stock_symbol, and the data looks like this:
 Date, Stock_Symbol, Option_Symbol, Strike, BidPrice, AskPrice, Volume, OpenInterest, (and a few others)
The select statement is simple: SELECT * FROM Options WHERE Date = @Date and StockSymbol = @Symbol
Thanks
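
Splitting the data into 72 monthly tables should not be necessary; with a clustered index that matches the WHERE clause, a seek reads only the requested day and symbol no matter how large the table grows. A hedged sketch (the trailing key column is an assumption, added to make the key more selective):

-- Hedged sketch: cluster on the columns the query filters by, so
-- WHERE Date = @Date AND Stock_Symbol = @Symbol becomes a narrow range
-- seek instead of a scan of hundreds of millions of rows.
CREATE CLUSTERED INDEX CIX_Options_Date_Symbol
    ON dbo.Options ([Date], Stock_Symbol, Option_Symbol);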

View 4 Replies View Related

Transact SQL :: Updating A Table With 45 Million Records

Jul 21, 2015

I am trying to update a large table consisting of 45 million records; the update is taking more than 2 days to complete. Below is my approach:

1. The table has only one clustered index and no other indexes.
2. I am updating in batches of 20,000 records at a time.
3. I changed the recovery model to bulk-logged; auto-growth is set to 300 MB and there is enough disk space for the transaction log.

But still the query is running slowly.
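
One frequent culprit with batched updates (an observation added here, not from the thread): if each batch is found via TOP plus a predicate that must rescan the table, every batch pays a growing scan cost. Walking the clustered key in fixed ranges keeps each batch a cheap seek. A hedged sketch assuming a hypothetical int clustered key Id and a placeholder SET clause:

-- Hedged sketch: key-range batching; each iteration seeks directly to
-- its 20,000-key slice of the clustered index.
DECLARE @MinId int = 0, @MaxId int, @Step int = 20000;
SELECT @MaxId = MAX(Id) FROM dbo.BigTable;    -- hypothetical table/key

WHILE @MinId <= @MaxId
BEGIN
    UPDATE dbo.BigTable
    SET    SomeCol = 0                        -- placeholder assignment
    WHERE  Id >  @MinId
      AND  Id <= @MinId + @Step;

    SET @MinId += @Step;
END;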

View 10 Replies View Related

Transact SQL :: How To Update Multiple Rows In Different Transactions In A Single Table

Jul 16, 2015

We have a control table that indicates whether each load job should start or not; when we start a job, we set its flag to 1.

Below is the Table Structure.

Table Name   IN_USE_FG
CUST_D       0
PROD_D       0
GEO_D        0
DATE_D       0

We have a separate package for loading each of the four tables, and all four packages start at the same time. Before loading the data, each package has to set its flag to 1, so each package runs an UPDATE statement to set the value to 1 for its own row.

We are now getting deadlocks because the packages hit the same table at the same time, even though each one updates a different record.
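
Deadlocks on a tiny control table usually come from scans or page locks rather than the row updates themselves. A hedged sketch of one mitigation: give the table a unique index on the key so each UPDATE is a single-row seek, and request row locks explicitly (column names follow the structure above; the table name itself is an assumption):

-- One-time setup (assumption: no unique index exists yet):
CREATE UNIQUE INDEX UX_JobControl_TableName
    ON dbo.JobControl (Table_Name);

-- In each package: touch only your own row, with row-level locking,
-- in one atomic statement so no lock is held across other work.
UPDATE dbo.JobControl WITH (ROWLOCK)
SET    IN_USE_FG = 1
WHERE  Table_Name = 'CUST_D';   -- each package names only its table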

View 6 Replies View Related

Updating A Column In A Table That Contains 50 Million Rows

Feb 27, 2008

I'm looking for some performance assistance on updating a column value in a table that contains approximately 50 million rows. I have a permanent table in another database that has the key column and the value to be set. My query is listed below, but I'm afraid it will run for quite a while. Any suggestions would be appreciated.

update a
set a.column2 = b.column2
from mytable as a
join mytable1 as b
on a.column1 = b.column1



There is a one to one relationship between the two tables.

View 8 Replies View Related

SQL Server 2014 :: Insert 500 Million Rows Into In-memory Table

Jul 29, 2014

I am doing performance testing of the In-Memory OLTP option in SQL Server 2014. As part of this, I want to insert 500 million rows into an in-memory-enabled test table I have created.

I need a sample script to insert 500 million records into the table.
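
A hedged generator sketch: stacked CROSS JOINs of a ten-row set yield up to 10^9 combinations, and TOP caps the output at 500 million. Table and column names are assumptions. In practice you would likely run this in smaller chunks, since a memory-optimized table must fit entirely in RAM:

-- Hedged sketch: number generator feeding a single INSERT ... SELECT.
;WITH N10 AS
(
    SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS v(n)
),
Nums AS
(
    SELECT TOP (500000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM   N10 a CROSS JOIN N10 b CROSS JOIN N10 c
    CROSS JOIN N10 d CROSS JOIN N10 e CROSS JOIN N10 f
    CROSS JOIN N10 g CROSS JOIN N10 h CROSS JOIN N10 i
)
INSERT INTO dbo.InMemTest (Id, Payload)   -- hypothetical schema
SELECT n, CONCAT('row-', n)
FROM   Nums;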

View 9 Replies View Related

AFTER INSERT Trigger Takes Forever On A Large Table (20 Million Rows)

Aug 30, 2007

I have a table that is being used to log track plays on our website.

Here's the table:


CREATE TABLE [dbo].[Music_BandTrackPlays](
[ListenDate] [datetime] NOT NULL DEFAULT (getdate()),
[TrackId] [int] NOT NULL,
[IPAddress] [varchar](20)
) ON [PRIMARY]


There's a CLUSTERED INDEX on ListenDate ASC and a NON CLUSTERED INDEX on the TrackId.

I have a TRIGGER on the Music_BandTrackPlays table that looks like the following:


CREATE TRIGGER [trig_Increment_Music_BandTrackPlays_PlayCount]
ON [dbo].[Music_BandTrackPlays] AFTER INSERT
AS
UPDATE
Music_BandTracks
SET
Music_BandTracks.PlayCount = Music_BandTracks.PlayCount + TP.PlayCount
FROM
(SELECT TrackId, COUNT(*) AS PlayCount
FROM inserted
GROUP BY TrackId) AS TP
WHERE
Music_BandTracks.TrackId = TP.TrackId


When a simple INSERT statement is done on the Music_BandTrackPlays table, it can take quite a long time; when I remove the TRIGGER, the INSERTs are immediate. The execution plan for the TRIGGER shows that an 'Inserted Scan' is taking up most of the resources.

How exactly is the pseudo 'inserted' table formed?

For now, I think the easiest thing to do is update my logging page so it performs 2 queries: one to UPDATE the Music_BandTracks table and increment the counter, and one to perform the INSERT into the Music_BandTrackPlays table separately.

I'm ok with that solution but I would really like to understand why the TRIGGER is taking so long. The 'inserted' pseudo table will be 1 row 99% of the time. Does SQL Server perform a table scan on all 20 million rows in order to determine what's new and put it in the inserted pseudo table?

Thanks!

View 6 Replies View Related

SQL 2012 :: Snapshot Getting Corrupted After Insert Update Few Million Records Into A Table

Mar 12, 2015

We are facing a weird scenario in which the snapshot is getting corrupted after we insert/update a few million records into a table.

SQL Server 2012
windows server 2008 R2
service pack 1
64-bit OS

View 1 Replies View Related

Transact SQL :: Update Table With Its Value And Data From Row In Temp Table For Matching Record?

Oct 25, 2015

I have a temp table like this

CREATE TABLE #Temp
 (
  ID int,
  Source varchar(50),
  Date datetime,
  CID varchar(50),
  Segments int,
  Air_Date datetime,

[code]....

I am getting this error:

Msg 102, Level 15, State 1, Procedure PublishToDestination, Line 34: Incorrect syntax near 'd'.

View 4 Replies View Related

Transact SQL :: Update Multiple Table Referencing New Table Data

Aug 4, 2015

I have a table called ADSCHL, which contains school_code as its primary key, and two other tables, RGDEGR (common field SCHOOL_code) and RGENRL (Original_school_code), which reference ADSCHL. If a school_code is updated, both RGDEGR (school_code) and RGENRL (original_schoolcode) have to be updated as well. I have been provided new data, which I imported into SQL Server using SSIS as a table named TESTCEP with a column named school_code. I have been assigned the task of updating the old school_code value (ADSCHL) with the new school_code (TESTCEP) and making sure the change happens across all 3 tables.

I tried using a MERGE/update approach, but I am not sure it is going to work.

Update AD
SET AD.SCHOOL_CODE = FD.SCHOOL_Code
FROM dbo.ADSCHL AD
INNER JOIN TESTCEP FD
ON AD.SCHOOL_NAME = FD.School_Name
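
A hedged sketch of one way to keep all three tables in sync: apply the re-keying inside one transaction, matching old codes to new ones through School_Name as the posted query does. This assumes the foreign keys are not enforced during the change; if they are, declaring them ON UPDATE CASCADE and updating only ADSCHL would be the cleaner route.

-- Hedged sketch: re-key parent and children atomically. Table and column
-- names follow the description above; verify against the real schema.
BEGIN TRANSACTION;

UPDATE D
SET    D.SCHOOL_CODE = FD.School_Code
FROM   dbo.RGDEGR AS D
JOIN   dbo.ADSCHL AS AD ON AD.SCHOOL_CODE = D.SCHOOL_CODE
JOIN   dbo.TESTCEP AS FD ON FD.School_Name = AD.SCHOOL_NAME;

UPDATE E
SET    E.Original_school_code = FD.School_Code
FROM   dbo.RGENRL AS E
JOIN   dbo.ADSCHL AS AD ON AD.SCHOOL_CODE = E.Original_school_code
JOIN   dbo.TESTCEP AS FD ON FD.School_Name = AD.SCHOOL_NAME;

UPDATE AD
SET    AD.SCHOOL_CODE = FD.School_Code
FROM   dbo.ADSCHL AS AD
JOIN   dbo.TESTCEP AS FD ON FD.School_Name = AD.SCHOOL_NAME;

COMMIT;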

View 10 Replies View Related

SQL Server 2012 :: Copy A Table With 200 Million Rows To Another Table On Same Server

Aug 11, 2014

I need to copy a table with 200 million rows to another table on the same server using a bulk-loading approach; the table has no primary key or identity column. I am looking for a script to do this bulk insert.
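
BULK INSERT itself reads from a data file, so for a table-to-table copy on the same server the usual bulk path is INSERT ... SELECT with a TABLOCK hint, which is minimally logged into a heap under the SIMPLE or BULK_LOGGED recovery model. A hedged sketch with assumed table names:

-- Hedged sketch: TABLOCK on the heap target engages the bulk-load path.
INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT *
FROM   dbo.SourceTable;

-- If the target table does not exist yet, SELECT ... INTO is also
-- minimally logged and creates it in one statement:
-- SELECT * INTO dbo.TargetTable FROM dbo.SourceTable;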

View 9 Replies View Related

Transact SQL :: Create Index On Temp Table To Reduce Run Time Of Update Query

Apr 29, 2015

I want to create an index on a temp table (#TEMPJOIN2) to reduce the run time of an update query, but I am getting this warning:

"The maximum key length is 900 bytes. The index 'R5IDX_TMP' has maximum length of 1013 bytes. For some combination of large values, the insert/update operation will fail."

What is the right way to create an index on a temporary table?

The update query runs (without the index) for 6 hours 30 minutes; my aim is to reduce the run time by creating the index.

I am also not sure whether putting more columns in the index will cause problems.

Attached are the update query and the index statement.

CREATE NONCLUSTERED INDEX [R5IDX_TMP] ON #TEMPJOIN2
(
[PART] ASC,
[ORG] ASC,
[SPLRNAME] ASC,
[REPITEM] ASC,
[RFQ] ASC, 

[Code] ....
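
The warning means the key columns together can exceed the maximum index key size (900 bytes for key columns in this era of SQL Server). A common fix, sketched here with assumed column roles: keep only the selective columns the update joins on in the key, and move the wide nvarchar columns to INCLUDE, where the 900-byte key limit does not apply.

-- Hedged sketch: narrow key plus INCLUDEd wide columns. Which columns
-- belong in the key depends on the update's join predicates.
CREATE NONCLUSTERED INDEX R5IDX_TMP
ON #TEMPJOIN2 ([PART] ASC, [ORG] ASC)
INCLUDE ([SPLRNAME], [REPITEM], [RFQ]);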

View 7 Replies View Related

Solution To Store Million Of Data In A Table.

Jun 27, 2007

Dear all,

I need to design a database table that will store suppliers' demand information. One supplier will probably have 10,000 records, and there could be 10,000 suppliers, so in total the table could reach 10,000 * 10,000 = 100 million records, which is a very large number of rows to insert into a single table. How can I design the table and structure to cater for this scenario? Thanks.
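
One hundred million rows is routine for SQL Server provided the table is keyed the way it is queried. A hedged design sketch, assuming lookups are per supplier (all column names and types are illustrative):

-- Hedged sketch: the composite clustered key stores each supplier's
-- ~10,000 rows contiguously, so per-supplier queries read a handful of
-- pages rather than scanning the whole table.
CREATE TABLE dbo.SupplierDemand
(
    SupplierId  int           NOT NULL,
    DemandDate  date          NOT NULL,
    ItemCode    nvarchar(50)  NOT NULL,
    Quantity    decimal(18,2) NOT NULL,
    CONSTRAINT PK_SupplierDemand
        PRIMARY KEY CLUSTERED (SupplierId, DemandDate, ItemCode)
);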

Hope to hear from you..

View 10 Replies View Related

Retrieving Data From Table With 7 Million Entries Takes Time

Jul 25, 2007

Can anyone help me with this?
When I select data from the table using a SELECT statement, it takes a huge amount of time. The table contains 7 million entries, and a SELECT with a filter criterion takes around 45 seconds. The system has 4 GB of RAM and a dual-processor CPU. The SELECT statement does not contain any grouping.

Will it really take this much time to retrieve the data?
The table does include an indexed field, so can anyone suggest the different things I can do to make retrieval faster?

Andy

View 5 Replies View Related

Transact SQL :: Table Trigger To Delete All Rows Before Inserting Rows

Jul 24, 2015

I have a SQL script to insert data into a table as below:

INSERT into [SRV1INS2].BB.dbo.Agents2
select * from [SRV2INS14].DD.dbo.Agents

I just want to set a trigger on the Agents2 table that deletes all rows in the table before any insert operation such as the statement above. I had the table trigger below on [SRV1INS2].BB.dbo.Agents2, but it did not do what I intended. (A hedged INSTEAD OF sketch follows the posted code.)

USE [BB]
GO
/****** Object:  Trigger    Script Date: 24/07/2015 3:41:38 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON

[code]....
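
For what it's worth, an AFTER INSERT trigger fires once the new rows are already in the table, so it cannot clear the table "before" the insert; an INSTEAD OF INSERT trigger can, because it intercepts the statement and performs the insert itself. A hedged sketch (assumes the column lists of Agents2 and the incoming insert line up):

-- Hedged sketch: intercept the INSERT, empty the table, then apply the
-- incoming rows from the "inserted" pseudo-table.
CREATE TRIGGER trg_Agents2_ReplaceAll
ON dbo.Agents2
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DELETE FROM dbo.Agents2;      -- clear existing rows first
    INSERT INTO dbo.Agents2
    SELECT * FROM inserted;       -- then insert the new batch
END;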

View 3 Replies View Related

Update Query Containg Static Data And Data From Another Table.

Sep 28, 2006

Hi, first post so apologies if this sounds a bit confusing!! I'm trying to run the following update. On a weekly basis I want to insert all the active user IDs from a users table into a timesheets table, along with the last day of the week and a submitted flag set to 0. I then plan on creating a scheduled job so the script runs weekly. The 3 queries I plan to use are below.

Insert statement:

INSERT INTO TBL_TIMESHEETS (TBL_TIMESHEETS.USER_ID, TBL_TIMESHEETS.WEEK_ENDING, TBL_TIMESHEETS.IS_SUBMITTED)
VALUES ('user ids', 'week end date', '0')

Get user IDs:

SELECT TBL_USERS.USER_ID FROM TBL_USERS WHERE TBL_USERS.IS_ACTIVE = '1'

Get the last date of the week:

SELECT DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 6)

I'm having trouble combining them, as I'm pretty new to this. Is the best approach to use a cursor? If you need any more info let me know. Thanks in advance.
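
No cursor is needed here; the three pieces combine into one INSERT ... SELECT, because the week-ending expression is the same constant for every inserted row. A hedged sketch:

-- Hedged sketch: one timesheet row per active user, in a single
-- set-based statement suitable for a weekly scheduled job.
INSERT INTO TBL_TIMESHEETS (USER_ID, WEEK_ENDING, IS_SUBMITTED)
SELECT U.USER_ID,
       DATEADD(wk, DATEDIFF(wk, 0, GETDATE()), 6),  -- last day of week
       0
FROM   TBL_USERS AS U
WHERE  U.IS_ACTIVE = '1';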

View 4 Replies View Related

Transact SQL :: Query Based On Date - Get Data From Table Above WHERE DateofBirth Is Tomorrow

Oct 30, 2015

Let's say I have a table, tblPersons:

FirstName | LastName    | DateofBirth
--------------------------------------
Thomas    | Alva Edison | 10-10-2015
Benjamin  | Franklin    | 10-10-2015
Thomas    | More        | 11-10-2015
Thomas    | Jefferson   | 12-10-2015

Suppose today's date is 09-10-2015 (in dd-MM-yyyy format). I want to write a query that returns the rows from the table above WHERE DateofBirth is tomorrow, so I would get the following result:

FirstName | LastName    | DateofBirth
--------------------------------------
Thomas    | Alva Edison | 10-10-2015
Benjamin  | Franklin    | 10-10-2015
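
A hedged sketch of the query, assuming DateofBirth is a date column holding literal dates as shown (rather than recurring birthdays): compute tomorrow's date once and compare directly.

-- Hedged sketch: CAST(GETDATE() AS date) drops the time portion so the
-- equality comparison matches whole days.
SELECT FirstName, LastName, DateofBirth
FROM   tblPersons
WHERE  DateofBirth = DATEADD(DAY, 1, CAST(GETDATE() AS date));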

View 15 Replies View Related

Transact SQL :: Update Incoming Rows Based On Percentage Calculation

Apr 2, 2015

This is on SQL Server 2008. Below is a detailed description of the requirement and a sample of the data I am working with.

Requirements:

1. Where 'Trans Description' equals "Purchase" or "Redemption" and 'Channel' is not equal to "Omnibus", find each pair of one purchase and one redemption that match on 'System', 'Account TA Number', 'Product Name', and 'Settled Date', and where the 'Trade Amount' of the purchase and the redemption are within 5% of each other; display those sets of records.

2. If they are deemed wash trades, allow the user to update the pair: 'Trans Description' from "Purchase" to "Exchange In" and 'Trans Description' from "Redemption" to "Exchange Out".

Sample data row:

System:              SCHWABPORTAL
Channel:             US - ASG
Dealer Name:         MILLIMAN
Firm Name:           MILLIMAN
Product Cusip:       64128K777
Product Name:        Strategic Income Fund
Product Share Class: A
Trade ID:            29806259
Settled Date:        30-Jan-15
Account TA Number:   000BY00F2RW
Trans Description:   Redemption
Trade Amount:        $25,68,458.15

[Code] .....
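
A hedged sketch of the pairing query for requirement 1, assuming a single trades table (the table name and exact column names are assumptions): self-join purchases to redemptions on the four matching columns and keep pairs whose amounts are within 5% of each other.

-- Hedged sketch: candidate wash-trade pairs.
SELECT p.[Trade ID]  AS PurchaseTradeId,
       r.[Trade ID]  AS RedemptionTradeId,
       p.[System], p.[Account TA Number],
       p.[Product Name], p.[Settled Date],
       p.[Trade Amount] AS PurchaseAmount,
       r.[Trade Amount] AS RedemptionAmount
FROM   dbo.Trades AS p
JOIN   dbo.Trades AS r
       ON  r.[System]            = p.[System]
       AND r.[Account TA Number] = p.[Account TA Number]
       AND r.[Product Name]      = p.[Product Name]
       AND r.[Settled Date]      = p.[Settled Date]
WHERE  p.[Trans Description] = 'Purchase'
  AND  r.[Trans Description] = 'Redemption'
  AND  p.[Channel] <> 'Omnibus'
  AND  r.[Channel] <> 'Omnibus'
  AND  ABS(p.[Trade Amount] - r.[Trade Amount]) <= 0.05 * p.[Trade Amount];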

View 36 Replies View Related

Copying 4 Million Rows Everyday

Mar 21, 2000

In our database we have a very large table that gets updated every morning. The start of day copies 4 million rows in the fact table from the previous date to today's date (within the same table), followed by some other processing; this takes 1.5 to 2 hours. A DTS package copies these rows into a temp table and then into the fact table.

This table has more than 200 million rows.

Any ideas on how to accomplish this without copying the data twice and without running into locking problems?

Thanks for any suggestions.

View 5 Replies View Related

Deleting 8 Million Rows From Database

Jul 16, 2013

I am deleting 8 million rows from my database. I am wondering how to control the transaction log during the delete; I have also heard something about row locks versus table locks.

View 4 Replies View Related

Which Task Is Best For Loading 1 Million Rows

Mar 24, 2008

Hi,

I have to load 1 million rows from a database (or flat file) to a database (or flat file).
Which task is best suited for this?

Appreciate any assistance in this regard.

Thanks,
Das

View 5 Replies View Related

75 Million Row Update???

Apr 17, 2003

Hi all,
I have a table with approx 75 million rows of names and addresses in it that I am trying to update. So far the update has been running 5 hours with no end in sight. A little background: this is running on a quad Xeon 500 with 3 GB RAM and one 145 GB drive (boooo). Without improving the hardware, can I improve the performance? I have indexed all the WHERE fields that I read on, and I only update the table once or twice a month, but I do daily selects by zip or county (all indexed); I even have a composite key on phone and zip.

I have heard of horizontal partitioning, but I always thought that was reserved for archiving old transactional data that rarely gets read.

When I ran a trace there were plenty of reads but no writes. Is this normal during an update like this?

I have now been running this proc for the past 7 HOURS!!! Any help is appreciated, since all I have is time at this point.

THANKS!!!!

--Set rowcount to 100000 to limit number of updates
--performed in each batch to 100K rows.
Set rowcount 100000

--Declare variable for row count
Declare @rc int
Set @rc=100000

While @rc=100000
Begin

Begin Transaction

--Use tablockx and holdlock to obtain and hold
--an immediate exclusive table lock. This unusually
--speeds the update because only one lock is needed.
Update [2000] With (tablockx, holdlock)
set [source] = '2000'
--Only touch rows not yet set, so each batch makes progress
where [source] <> '2000' or [source] is null

--Get number of rows updated
--Process will continue until fewer than 100000 rows are updated
Select @rc=@@rowcount

--Commit the transaction
Commit
End

View 5 Replies View Related

Help On Updating 1.3 Million Rows On The Production Server

May 4, 2000

I need to update about 1.3 million rows in a table of mine. I am getting the data from one of the columns of the same table and updating a new column. I am doing this using a cursor which I have put in a stored procedure. As this is a production table which users might be accessing (it is a web-based application), I can't slow the system down, so I am willing to run the stored procedure during off-peak hours. However, do I need to put this in a transaction? If I did put it in a transaction, what type of isolation level should I opt for? Data integrity is very important to me and I don't mind compromising on performance. I am doing this because the column which holds the "short description" entry has become too small for business purposes and we want to increase its length from varchar(100) to varchar(150). As this is SQL 6.5, I can't increase the length of the column, so I added a new column and will run the stored proc. What precautions are to be taken? This is on a high-priority basis and very important too.

Thanks in advance...

Stored procedure code:

USE DB_Registration_Dev
GO
IF EXISTS (SELECT NAME FROM SYSOBJECTS WHERE NAME='usp_update_product' AND TYPE='P')
DROP PROCEDURE usp_update_product
GO
CREATE PROC usp_update_product
AS
DECLARE @short_desc varchar(100)
DECLARE @prod_id int

DECLARE sdesc_curs CURSOR
FOR
SELECT [Product].[product_id] , [Product].[short_description]
FROM Product

OPEN sdesc_curs

FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc

WHILE @@FETCH_STATUS = 0
BEGIN
UPDATE Product
SET [Product].[sdesc] = @short_desc
WHERE Product_id=@prod_id
FETCH NEXT FROM sdesc_curs
INTO @prod_id, @short_desc
IF @@FETCH_STATUS <> 0
PRINT ' Finished Updating the table...go ahead and have fun ...! '
END
CLOSE sdesc_curs
DEALLOCATE sdesc_curs
GO
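
As an aside not raised in the thread: the cursor is the slow part. The same copy can be one set-based UPDATE, which SQL Server 6.5 already supports, and it can be rerun safely if it only touches rows not yet copied. A hedged sketch:

-- Hedged sketch: one statement instead of 1.3 million cursor fetches.
UPDATE Product
SET    sdesc = short_description
WHERE  sdesc IS NULL
   OR  sdesc <> short_description   -- skip rows already copied on rerun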

View 1 Replies View Related

Missing 1 Million Rows During BCP PROCESS!! Urgent Help

Mar 12, 1999

Hi, I used the /e option in my bcp command, yet I did not get all the rows from the mainframe into the SQL tables. Here is the case: I have 11 million rows in a file on an FTP server, and I use the code below to bcp it into SQL Server. Can anyone check whether this code is right for the process? I am missing one million rows in the bcp process and do not know why. I added /e to see if there were any errors, but I could not find any error file on my hard drive.
Please check it out and let me know.

regards
Ali


Exec master..xp_cmdshell "bcp dbname..tablename in c:\ftproot\Nbtorder.txt /f d:\ftproot\formatfile\tablename.fmt /S servername /U sa /P password /b 250000 /a 8000 /e errfileORD"

View 2 Replies View Related

Transact SQL :: Firing After Update Trigger - On Table Row Update

Jul 8, 2015

I have a table where each row gets updated multiple times (each column gets filled in) based on incoming telephone call data.
 
Initially, I implemented an AFTER INSERT trigger at row level, thinking that the whole row would be inserted into the table with all column values at once. But the columns are not all filled at once: as the call data arrives, there are multiple updates to the row, with the column data filled in step by step.

I thought of implementing an AFTER UPDATE trigger, but then performance would suffer on each and every hit while the row is updated.

Can I implement an AFTER UPDATE trigger that fires at column level instead of row level, to improve performance?
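
Triggers in SQL Server fire per statement, not per column, but the UPDATE() function inside an AFTER UPDATE trigger reports whether a given column appeared in the SET list, so the trigger can exit immediately for all the intermediate updates. A hedged sketch with assumed table and column names:

-- Hedged sketch: bail out cheaply unless the column of interest was set.
CREATE TRIGGER trg_CallData_FinalStep
ON dbo.CallData
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT UPDATE(FinalColumn)     -- cheap per-statement check
        RETURN;

    -- real work runs only when FinalColumn was assigned
    UPDATE c
    SET    c.ProcessedDate = GETDATE()
    FROM   dbo.CallData AS c
    JOIN   inserted      AS i ON i.CallId = c.CallId;
END;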

View 7 Replies View Related

DataWarehouse - 140 Million Rows - Load Performance Issue

Mar 10, 2000

I need help: I am managing a data warehouse (80 GB database size), and on a daily basis I purge data older than 6 months from a table which has more than 140 million rows. The daily data load performance is degrading. The table has no clustered index (only non-clustered indexes).

I tried dropping and rebuilding the non-clustered indexes; it didn't help.

One way to solve the problem is to drop the non-clustered indexes, bcp the data out, truncate the table, bcp the data back in, and rebuild the non-clustered indexes. This is too risky, and it takes 14 hours just to bcp the data out.

This was not an issue in SQL Server 6.5, because 6.5 always inserted new records at the end of the heap chain (heap = table with non-clustered indexes but no clustered index). In contrast, SQL Server 7.0 first checks for available space in existing pages by using the free-space (PFS) pages, and this is where it is killing the performance.

Thanks for your help!

View 4 Replies View Related

Checking To See If A Records Exists Before Inserting - 3 Million + Rows

Aug 21, 2007

I have one or more CSV files (processed using a foreach loop) on which I'm doing a lot of transform work before inserting the rows into a SQL database table.
Each CSV file usually contains about 2 days' worth of data (it contains date stamps) - somewhere in the region of 60k records per day.
The destination table currently contains 3 million+ rows and will get bigger.
I need to make sure that the data doesn't already exist before inserting it into the destination table.

I've read the following article: http://www.sqlis.com/311.aspx
While the lookup method works, it takes ages and eats up memory, as it caches the 3m+ records before running for each CSV. Obviously this will only get worse as the table grows in size.

To make things a little more efficient, what I'd like to do is first derive the dates I'm dealing with in the current file - essentially storing the max(date) and min(date) in variables. Then the lookup SQL would use those variables to reduce the amount of data that needs to be brought into the transformation to check against before inserting into the destination table.
Lookup SQL e.g.: SELECT * FROM MyTable WHERE Date BETWEEN varMinDate AND varMaxDate.

Ideally I'd use an aggregate transformation and then use the subsequent output from that either in the lookup query or store the output in vars, but I don't think you can do that and I get the feeling I'm approaching this with the wrong mindset.

Any thoughts would be great!

View 6 Replies View Related






