Add Column To Existing Table With Large Number Of Rows

Dec 24, 2007

Hey Guys

I need to add a datetime column to an existing table that has about 1.2 million records and is being accessed frequently,
but I can't afford to stop the DB at all.

whenever i do : alter table mytable add Updated_date datetime

it just takes too long, and I have to stop executing the query after a couple of minutes.
I am running SQL Express 2005 SP2. The DB size is over 3 GB but still under the 4 GB limit.

Can you please advise on how to add this column? It's urgent!

thanks in advance
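
A minimal sketch of the usual low-impact approach (assuming the table is dbo.mytable as above): in SQL Server 2005, adding a NULLable column with no default is a metadata-only change, so the ALTER itself should be near-instant; a long-running ALTER is usually waiting on the schema-modification lock behind other sessions.

-- Nullable, no default: no data pages are touched, only metadata.
ALTER TABLE dbo.mytable
    ADD Updated_date datetime NULL;

-- If it still appears to hang, check whether another session is blocking it:
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

If the column eventually needs a NOT NULL constraint or a default, that can be added later as a separate step.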

View 5 Replies



T-SQL (SS2K8) :: Write Into A Table User-entered Value And Increment That Number N Times As Existing Rows

Jun 25, 2014

I need a script that asks the user for a value and searches for all records that match that value. It should then display the number of records found and ask the user to enter a second value (a starting invoice number). The rest of the script will use this new value and increment it by 1 for each of the records found, i.e. n times. I started the script so that it asks for a HANDLE and displays the number of records found with that HANDLE:

declare @HANDLE as varchar(30)
declare @COUNT as varchar(10)
declare @STARTINV as varchar(20)

set @HANDLE = ?C --This is the parameter to search for records with this value
set @STARTINV = ?C --User will input the starting invoice number
SELECT COUNT(*) AS OrderCount FROM SHIPHIST
WHERE HANDLE = @HANDLE

I just can't figure out how to proceed: how do I use the entered invoice number and increment it by 1 until it reaches the number of records found?

This will be the end results:

Count=5 --results from query
STARTINV=00010 --Value entered by user

Handle,Inv_Num
AAABBB,00010
AAABBB,00011
AAABBB,00012
AAABBB,00013
AAABBB,00014
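
A sketch of one way to produce that result with ROW_NUMBER (SQL Server 2008); the five-digit zero padding and the sample variable values are assumptions based on the output above:

DECLARE @HANDLE   varchar(30) = 'AAABBB';  -- value entered by the user
DECLARE @STARTINV int         = 10;        -- numeric part of the starting invoice (00010)

SELECT HANDLE,
       RIGHT('00000' + CAST(@STARTINV + ROW_NUMBER() OVER (ORDER BY HANDLE) - 1 AS varchar(10)), 5) AS Inv_Num
FROM   SHIPHIST
WHERE  HANDLE = @HANDLE;

Each matching row gets a consecutive invoice number starting at the user-entered value.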

View 9 Replies View Related

Transact SQL :: Adding A Column To A Large (100 Million Rows) Table With Default Constraint?

Apr 24, 2013

IF NOT EXISTS (SELECT TOP 1 1 FROM dbo.syscolumns WHERE id = OBJECT_ID(N'dbo.Employee') AND name = 'DoNotCall')
BEGIN
    ALTER TABLE [dbo].[Employee] ADD [DoNotCall] bit NOT NULL CONSTRAINT DoNot_Call_Default DEFAULT 0
    IF ( @@ERROR <> 0 )
        GOTO QuitWithRollback
END

It just takes a LOT of time in SQL Server Management Studio. I have to cancel the query, and cancelling takes a whole lot of time as well. I am using SQL Server 2008.
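
A sketch of a lower-impact alternative on SQL Server 2008, where adding a NOT NULL column with a default still has to touch every row: add the column as NULLable first (metadata only), backfill it in small batches, then enforce NOT NULL and attach the default.

-- 1) Metadata-only: add the column as NULLable with no default.
ALTER TABLE dbo.Employee ADD DoNotCall bit NULL;

-- 2) Backfill in batches to keep locks and log growth manageable.
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.Employee
    SET    DoNotCall = 0
    WHERE  DoNotCall IS NULL;

    IF @@ROWCOUNT = 0 BREAK;
END

-- 3) Enforce NOT NULL and add the default for future inserts.
ALTER TABLE dbo.Employee ALTER COLUMN DoNotCall bit NOT NULL;
ALTER TABLE dbo.Employee ADD CONSTRAINT DoNot_Call_Default DEFAULT 0 FOR DoNotCall;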

View 4 Replies View Related

Very Large Number Of Rows

Jul 23, 2005

We are busy designing a generic analytical system at work that will hold multiple analytic types over time. This system is being developed in SQL 2000.

Example of table:

IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)

ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type
The [PK] tag indicates the composite primary key.

Our scenario is the following:

* For this time series data, we expect around 250 days per year (working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items

Looking at the combined calculation, this comes to roughly something like 25 * 20,000 * 50 * 250 or around 5 billion rows. We will be inserting around 50 * 20,000 or around 1 million rows each day (the inserts will take place in the middle of the night, outside the main query time) - this could be done through something like BCP or BULK INSERT.

Our real problem is that we have not previously worked with such large tables and are nervous that our system is going to grind to a halt. Our biggest tables are around 20 million rows at the moment.

Scanning through Google and Microsoft's own site we have found a partitioning method that is available:
http://www.microsoft.com/resources/...part5/c1861.mspx

Having experimented with the above system it seems rather quirky and, looking at the available literature, it seems that this is not more effective than a clustered index as far as queries go.

It needs to be optimized for queries like: given the ItemID and the AnalyticType, search for a specific date or a specific range of dates.

If anyone has any experience or helpful suggestions I would really appreciate it.

Thanks
A
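
A sketch of a table and clustered key aligned with the stated access pattern (the table name Analytics is an assumption; the columns are taken from the description above):

CREATE TABLE dbo.Analytics
(
    Id            int IDENTITY(1,1) NOT NULL,
    ItemId        int               NOT NULL,
    AnalyticType  int               NOT NULL,
    AnalyticDate  datetime          NOT NULL,
    Value         numeric(28,15)    NOT NULL,
    CONSTRAINT PK_Analytics PRIMARY KEY CLUSTERED (ItemId, AnalyticType, AnalyticDate)
);

-- A lookup for one item and analytic type over a date range then becomes a single clustered range seek:
SELECT AnalyticDate, Value
FROM   dbo.Analytics
WHERE  ItemId = 42
  AND  AnalyticType = 7
  AND  AnalyticDate BETWEEN '20050101' AND '20051231';

On SQL Server 2000, splitting the data into per-year member tables behind a partitioned view (as in the linked article) mainly helps with loading and archiving a year at a time; for the range queries above, the clustered key does most of the work, which matches the observation in the post.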

View 4 Replies View Related

Large Number Of Rows.

Apr 3, 2007

Hi all,



A SELECT query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?



Does the number of records returned affect performance despite the indexing?



Thanks,



DBLearner

View 3 Replies View Related

Inserting Large Number Of Rows

Nov 24, 1999

Hi,

I need to insert a very large number of rows into a table (in SQL Server 7.0) using ADO.
Could you please tell me if there is a way to do a FAST insert, something similar to BCP, or any other way of
inserting a large number of rows efficiently?

Thanks

View 2 Replies View Related

Deleting Large Number Of Rows Times Out

Sep 14, 2007

I have a table with about 10 million rows. It has been logging data incorrectly and I want to start again.

DELETE FROM tblLog

I just get a message about the server timing out. What can I do?

Thanks
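
A sketch of the two usual options (assuming nothing in tblLog needs to be kept and no foreign keys reference it):

-- Option A: TRUNCATE is minimally logged and far faster than DELETE.
TRUNCATE TABLE tblLog;

-- Option B: if TRUNCATE is not possible, delete in small batches so no
-- single transaction runs long enough to time out.
-- (SET ROWCOUNT batching works on SQL Server 2000/2005; DELETE TOP (n) is the 2005+ form.)
SET ROWCOUNT 10000;
WHILE 1 = 1
BEGIN
    DELETE FROM tblLog;
    IF @@ROWCOUNT = 0 BREAK;
END
SET ROWCOUNT 0;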

View 5 Replies View Related

How To Manage IDENTITY In Tables With Large Number Of Rows?

May 23, 2002

Table structure: col1 IDENTITY (seed=1, increment=1) + a few other columns (col2...col7) + one text column (col8).
Around 50,000,000 rows per day are inserted into the table T1. At the end of the day 40,000,000 rows are deleted. I have to keep the records for 12 months and then archive them. The database serves the web 24/7 and no downtime is allowed. The IDENTITY column will go out of range (overflow) after less than two years, unless the identity seed is reset to the start value (seed=1, increment=1).
At the end of the 12th month the data is archived into another table and only the last month is kept in table T1. So table T1 enters the new year with data from the last month of the previous year. There are a few other tables that refer to this table, using their own field with values from T1's IDENTITY column (referential integrity is not enforced). The identity column in T1 is needed as a unique id for some search actions. Performance is an issue, therefore the bigint data type is used for this identity column rather than decimal.

Another problem I have is how to update one column of the table (1 million rows to be updated out of 2 million) with minimum impact on the users who are querying this table heavily. Needless to say, it is a 24/7 web app with no downtime.

Thank you in advance.


Goran

View 1 Replies View Related

Deleting Large Number Of Rows In SQL Server 2000

Jan 26, 2004

Hi,

I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!

I have about 100 clients, each logging about 10,000 rows of status logs a day. So after just a few days the DB grows very large.

At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005. So I really need a smarter solution than what I have today to be able to use the log efficiently.

98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I do however like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged into two separate tables by the clients, but unfortunately I cannot make the clients change their logging.

This presents problems on multiple levels. Mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log file. It works, but it's not exactly elegant...

The server is a 1100 MHz P3 / 512 MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but this is more of a "bad design" problem than a "slow hardware" problem.

My log is pretty simple, as follows:

LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar(2500), no index
LogTimeStamp - datetime - index asc


I have come up with 3 different solutions:

Method 1:
Simply run "DELETE FROM db_log WHERE logtypeid <> stuff_I_want_to_keep".

This is the simplest and the one I prefer, but it takes too long to complete. Any tips to speed this process up?


Method 2:
Create a scheduled job which runs something like "DELETE FROM db_log WHERE logtypeid <> stuff_I_want_to_keep AND date < today_minus_two_days" every hour or so. This will ensure that the DB doesn't grow too large. But if I'm away from work for a few days we might lose data we wanted to keep.


Method 3:

Copy what I want to keep into another table, and empty the log. Sort of like "INSERT INTO db_log_keep stuff_to_keep; DROP TABLE db_log; CREATE TABLE db_log;" (or TRUNCATE).

But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to UNION them so they would appear as a single table, but that's not ideal either.

However, it seems this method is what will work best for my set-up, unless there are other suggestions?
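
A sketch of the Method 3 idea that avoids ending up with two permanent log tables: copy the 1-2% worth keeping into a holding table, TRUNCATE the log (which is minimally logged and fast, unlike a full DELETE), then move the kept rows back. The LogTypeId values to keep (1, 2) are placeholders.

SELECT * INTO db_log_keep            -- rows worth keeping (startup, errors, ...)
FROM   db_log
WHERE  LogTypeId IN (1, 2);

TRUNCATE TABLE db_log;               -- fast, minimally logged

INSERT INTO db_log (ClientId, LogTypeId, LogValue, LogTimeStamp)
SELECT ClientId, LogTypeId, LogValue, LogTimeStamp
FROM   db_log_keep;                  -- if LogId must be preserved, include it and use SET IDENTITY_INSERT

DROP TABLE db_log_keep;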

Method 4:

...eagerly awaiting ideas!!! :-)




(Also, any tips and/or links to info on maintaining VLDBs are greatly appreciated.)

Thanks in advance for your help! :-)

Nikolai

View 4 Replies View Related

SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database that is enabled for mirroring. We need to delete old records, around 500k rows, from a table, but it has foreign key relationships. How should these kinds of deletes be done on production servers?

View 2 Replies View Related

SQL Server 2008 :: Changing Large / Existing Table To Sparse Columns?

Sep 21, 2015

I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns defined with decimal datatypes (13,2), with a large percentage of them (over 50% in most cases, some as much as 99%) being 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values to NULL and then mark those columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table.

1.) three steps for each column in each table in each db.

Step 1: update table to allow for nulls

Step 2: update table set column = null where column = 0.00

Step 3: alter the table to mark the column as sparse

2.)

Step 1: Create entirely new table with sparse column definitions

Step 2: copy entire table, transforming 0.00 to null for affected columns via SSIS

Step 3: drop original table, rename new table to original name
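
A sketch of the first (in-place) approach for a single column; dbo.BigTable and Amount01 are placeholder names. Note that marking an existing column SPARSE is not a metadata-only change, so on 200+ GB tables steps 2 and 3 should be planned for a maintenance window or batched.

-- Step 1: allow NULLs.
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 decimal(13,2) NULL;

-- Step 2: turn the zeros into NULLs (batch this on very large tables).
UPDATE dbo.BigTable SET Amount01 = NULL WHERE Amount01 = 0.00;

-- Step 3: mark the (now mostly NULL) column as sparse.
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 ADD SPARSE;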

View 0 Replies View Related

Add Sequence Number To Existing Table

Nov 14, 2013

I have a table with no sequence column. I created a new column, but I need a way to populate it with sequence numbers. This is not going to be a primary key or indexed. I assume I will use UPDATE, but how do I do this?

Example

Before
Name | Seq
Paul | NULL
Tom | NULL
Grant | NULL

After
Name | Seq
Paul | 1
Tom | 2
Grant | 3
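
A sketch using ROW_NUMBER through an updatable CTE (the table name dbo.People is an assumption; ROW_NUMBER requires some ORDER BY, and ORDER BY (SELECT NULL) assigns the numbers in an arbitrary order, which matches "not indexed, just a sequence"):

WITH numbered AS
(
    SELECT Seq,
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM   dbo.People
)
UPDATE numbered SET Seq = rn;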

View 1 Replies View Related

How To Enter More Number Of Rows In A Table Having More Number Of Columns At A Time

Sep 24, 2007

Hi

I want to enter rows into a table that has a large number of columns.

For example: I have an employee table with columns (name, address, salary, etc.).
Then, how can I enter data for 100 employees at a time?

Suppose my data is in a .txt file (or) in .xls.

(SQL Server 2005)
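
For the .txt case, a sketch using BULK INSERT (the file path, delimiters and table name are assumptions; the file's columns must line up with the table's columns):

BULK INSERT dbo.Employee
FROM 'C:\data\employees.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 1);

For an .xls file, the Import/Export Wizard in Management Studio (right-click the database, Tasks > Import Data) is usually the simplest route.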

View 1 Replies View Related

How To Show Distinct Rows Of The Column Of The Dataset And Number Of Distinct Rows Of That Column

Mar 29, 2007

Suppose I have a dataset with 11 rows: field1 has 5 rows of "aaa" and 6 rows of "bbb".

I want something like:

field1 rowcount
aaa 5
bbb 6
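
A sketch (the table name dbo.MyTable is an assumption):

SELECT field1, COUNT(*) AS [rowcount]
FROM   dbo.MyTable
GROUP BY field1;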

View 1 Replies View Related

Copying All Rows From One Table Into Another Existing Table And Overwriting Data

Feb 15, 2005

I have 2 tables (both containing the same column names/datatypes), say table1 and table2. table1 is the most recent, but some rows were deleted by accident. table2 is a backup that has all the data we need, but some of it is old. What I want to do is overwrite the rows in table2 that also exist in table1 with the table1 rows, but leave the rows in table2 that do not exist in table1 as they are. Both tables have a primary key, user_id.

Any ideas on how I could do this easily?

thanks
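
A sketch using an UPDATE joined on the primary key (col1 and col2 stand in for the real column names, which are the same in both tables):

UPDATE t2
SET    t2.col1 = t1.col1,
       t2.col2 = t1.col2
FROM   table2 AS t2
JOIN   table1 AS t1
  ON   t1.user_id = t2.user_id;

Rows that exist only in table2 are never matched by the join, so they are left untouched.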

View 1 Replies View Related

Managing Large Number Of Objects (table,sp,view)

Jul 28, 2007

Hi,
Our database has a very large number of objects. We have a naming convention by modules, subprojects, etc., but, for example, when we need to open a specific table it still takes time to find it. If we could create custom folders under the Tables folder or the Stored Procedures folder it would be easier to find an object: we could create subfolders by module and subproject and classify our objects with these folders. Will the next version, SQL Server 2008, support this kind of functionality?

View 8 Replies View Related

T-SQL (SS2K8) :: Updating Existing Table With Max (value) And Row Number (partition By 2 Columns)

Sep 15, 2015

I have 3 columns. I would like to update the table based on the job_cd and permit_nbr columns: if rows have the same job_cd and permit_nbr, the reference_nbr should be the same; otherwise it should take MAX(reference_nbr) from the table + 1. This applies to all rows where the reference_nbr column is NULL.

job_cd  permit_nbr  reference_nbr
ABC1    990         100002
ABC1    990         100002
ABC1    991         100003
ABC1    992         100004
ABC1    993         100005
ABC2    880         100006
ABC2    881         100007
ABC2    881         100007
ABC2    882         100008
ABC2    882         100008
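
A sketch using DENSE_RANK added to the current maximum reference number (the table name dbo.MyTable and the int type of reference_nbr are assumptions; only rows with a NULL reference_nbr are touched, and equal job_cd/permit_nbr pairs get the same rank):

DECLARE @max_ref int;
SELECT @max_ref = ISNULL(MAX(reference_nbr), 0) FROM dbo.MyTable;

WITH ranked AS
(
    SELECT reference_nbr,
           DENSE_RANK() OVER (ORDER BY job_cd, permit_nbr) AS rnk
    FROM   dbo.MyTable
    WHERE  reference_nbr IS NULL
)
UPDATE ranked SET reference_nbr = @max_ref + rnk;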

View 2 Replies View Related

Inserting Rows Into Existing SQL Table

May 21, 2008

Hi all,

I am trying to insert some new rows into an existing SQL table. The table name is Agt_table, and I want to add some data for some new agents into existing columns:
Agent name, agent code, phone number, fax number

Example -
I want to add the following record to my existing table Agt_table
Agent name: ABC Company
Agent code: 012345
Phone #: 555-555-5555
Fax#: 555-555-5555

How would I write the SQL statement to do that?

Thanks so much!!!
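
A sketch of the INSERT (the column names are guesses based on the description; substitute the actual column names from Agt_table):

INSERT INTO Agt_table (AgentName, AgentCode, PhoneNumber, FaxNumber)
VALUES ('ABC Company', '012345', '555-555-5555', '555-555-5555');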

View 12 Replies View Related

SQL 2012 :: Generating CREATE TABLE Scripts For Large Number Of Tables

Feb 11, 2014

Other than right-clicking on each individual table in SSMS and generating a CREATE script, is there a simple way to generate CREATE TABLE scripts for tables within a given database?

Background: I have a bunch of tables in one database, and I would like to add tables to a second database that have the same names and basic structures of some of the tables from the first database.

I do not need to transfer any data from the tables; this is a separate project that will use a similar data structure. I just want to generate the CREATE TABLE scripts for 30-ish tables within the first database, and then I'll tweak the scripts as appropriate and run them against the new database.

[URL] ....

View 7 Replies View Related

Select Multiple Rows Of Data Not From Any Pre-existing Table

Apr 29, 2008

I know that this is legal SQL: "SELECT 1 AS Blah"
I want to do something like this, except I need to select multiple rows, each with a different value for Blah. The query needs to be legal to pass to the SqlCommand.ExecuteReader method. Is this possible?
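
Two ways to sketch it, both of which work as ordinary command text for SqlCommand.ExecuteReader:

-- UNION ALL (works on SQL Server 2000 and later)
SELECT 1 AS Blah
UNION ALL SELECT 2
UNION ALL SELECT 3;

-- Table value constructor (SQL Server 2008 and later)
SELECT v.Blah
FROM (VALUES (1), (2), (3)) AS v(Blah);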

View 3 Replies View Related

Number Of Rows Depending On A Column Value

Sep 26, 2007



Hi all,
I have a table with articles and a count, for example:

Art Count
------------
12A 3
54G 2
54A 4

I would like to query this table and, for each article, retrieve one row per unit of 'Count'.
The query result would be:

Art Count
------------
12A 3
12A 3
12A 3
54G 2
54G 2
54A 4
54A 4
54A 4
54A 4



Is this possible?

Thanks, Perry
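
A sketch using a numbers (tally) table join; dbo.Articles is an assumed name for the table above, and dbo.Numbers is an assumed helper table with a column n holding 1, 2, 3, ... up to at least the largest Count:

SELECT a.Art, a.[Count]
FROM   dbo.Articles AS a
JOIN   dbo.Numbers  AS n
  ON   n.n <= a.[Count]
ORDER BY a.Art;

Each article row is matched by as many numbers as its Count, which repeats it that many times.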

View 3 Replies View Related

Transact SQL :: Existing Table Change To Support Multiple Rows?

Nov 4, 2015

Existing table structure is below:

Table name: Student

Columns in Student are below:

Student_Id
Subject_Id
Quarter
Col1
Col2

The combination of the Student_Id, Subject_Id and Quarter columns is the primary key. One student can take one subject in a quarter. Now the new requirement is that a student can take multiple subjects in a quarter. So we need to add another table like the one below:

New table name: Student_Subject, with the columns below:
Student_id
Subject_Id
Quarter1

The combination of all three columns is the primary key.

After the new table Student_Subject is created, the Subject_Id column is removed from the Student table.

When the user clicks a button after selecting multiple subjects and provides col1 and col2 data, one row gets inserted into the Student table and multiple rows get inserted into the Student_Subject table.

Is there any other table design that satisfies one student can take multiple subjects in a quarter?

View 15 Replies View Related

Adding Column To A Table Before An Existing Column

Mar 30, 2004

I simply need the ability, using SQL, to add columns to an existing table before (or after) columns that already exist.

The MS SQL implementation of ALTER TABLE doesn't seem to provide the before or after placement criteria I require. How is this done in MS SQL using SQL, or is there a stored procedure I can use?

Thanks.

View 5 Replies View Related

Transact SQL :: How To Alter Existing Table Column As Identity Without Dropping Table

Nov 20, 2013

I have created a table as shown below. Now I want to alter the id column to be IDENTITY(1,1) without dropping the table or losing the data.

create table dbo.IdentityTest
(
id int not null,
descript varchar(255) null,
T_date datetime not null
)

View 7 Replies View Related

Joining Large Fact Table To A View That Returns 120 Rows

Jan 19, 2015

I have a simple query that joins a largish fact table (3 million rows) to a view that returns 120 rows. The SKEY in the view is returned via a scalar function. The view returns instantly if queried on its own; however, when joined to the fact table in the simple query below, it results in a query execution plan that runs forever. Interestingly, if I change the INNER JOIN to a LEFT OUTER JOIN the query returns the matched results almost instantly.

Select
Dimension.Age_Band.[10_Year_Age_Band],
Count(*)
From
Fact.APC_Episodes
Inner Join Dimension.Age_Band ON
Fact.APC_Episodes.AGE_BAND_SKEY = Age_Band.AGE_BAND_SKEY
Group By
Dimension.Age_Band.[10_Year_Age_Band]

I know joining to a view using a column generated by a scalar function is not a good recipe for performance. I also know that I could fix this by populating a physical table with the view's output first, as I have already tested that, though I am hoping not to have to go down that route.

Why does a LEFT OUTER JOIN work and not an INNER JOIN, and is there any way I can get the query optimizer to generate an execution plan that works?

View 9 Replies View Related

Column In Query Results To Number Rows Sequentially

Apr 20, 2015

I need a column in my query results that just numbers the rows sequentially (i.e. 1, 2, 3). How can I do that?
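
A sketch using ROW_NUMBER (SQL Server 2005 and later); the table and ORDER BY column are placeholders, so pick whatever ordering the numbering should follow:

SELECT ROW_NUMBER() OVER (ORDER BY t.SomeColumn) AS RowNum,
       t.*
FROM   dbo.SomeTable AS t;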

View 6 Replies View Related

How Can I Add A Column To An Existing Table

Nov 14, 2007

Hi everyone!

I just started with SQL and SQL Server 2005. I can't find in my books how to add a column to an existing table. It should be a primary key, auto-increment column.

Hope someone can help, many thanks in advance

Greetings from Vienna Austria

landau
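
A sketch (assuming the table is called dbo.MyTable and has no primary key yet; IDENTITY is SQL Server's auto-increment):

ALTER TABLE dbo.MyTable
    ADD Id int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_MyTable PRIMARY KEY;

Existing rows are numbered automatically as the column is added.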

View 2 Replies View Related

AFTER INSERT Trigger Takes Forever On A Large Table (20 Million Rows)

Aug 30, 2007

I have a table that is being used to log track plays on our website.

Here's the table:


CREATE TABLE [dbo].[Music_BandTrackPlays](
[ListenDate] [datetime] NOT NULL DEFAULT (getdate()),
[TrackId] [int] NOT NULL,
[IPAddress] [varchar](20)
) ON [PRIMARY]


There's a CLUSTERED INDEX on ListenDate ASC and a NON CLUSTERED INDEX on the TrackId.

I have a TRIGGER on the Music_BandTrackPlays table that looks like the following:


CREATE TRIGGER [trig_Increment_Music_BandTrackPlays_PlayCount]
ON [dbo].[Music_BandTrackPlays] AFTER INSERT
AS
UPDATE
Music_BandTracks
SET
Music_BandTracks.PlayCount = Music_BandTracks.PlayCount + TP.PlayCount
FROM
(SELECT TrackId, COUNT(*) AS PlayCount
FROM inserted
GROUP BY TrackId) AS TP
WHERE
Music_BandTracks.TrackId = TP.TrackId


When a simple INSERT statement is done on the Music_BandTrackPlays table, it can take quite a long time. When I remove the TRIGGER the INSERTs are immediate. The execution plan for the TRIGGER shows that an 'Inserted Scan' is taking up most of the resources.

How exactly is the pseudo 'inserted' table formed?

For now, I think the easiest thing to do is update my logging page so it performs 2 queries: one to UPDATE the Music_BandTracks table and increment the counter, and a separate INSERT into the Music_BandTrackPlays table.

I'm ok with that solution but I would really like to understand why the TRIGGER is taking so long. The 'inserted' pseudo table will be 1 row 99% of the time. Does SQL Server perform a table scan on all 20 million rows in order to determine what's new and put it in the inserted pseudo table?

Thanks!

View 6 Replies View Related

Transact SQL :: Select 1000 Rows At A Time From / Into A Large Temp Table?

May 12, 2015

I am using SQL Server 2008 R2, not Denali, so I cannot use the OFFSET/FETCH clause.

In my stored procedure, I am doing a SELECT INTO #tblTemp FROM..., which is working fine. This resultset is going to be used in an SSIS package which will generate a pipe-delimited .txt file, which is also working fine.

For recoverability sake, I am trying to throttle back on the commit chunks to 1000 rows per commit until there are no more rows. I am trying to avoid large rollbacks.

Q: Am I supposed to handle the transactions (begin/commit/rollback) when the records are being inserted into the temp table, or when they are being selected from the temp table?

Q: Or can I handle this in my SSIS package for a flat file destination? I don't see options for a flat file destination like I do for an OLE DB destination (such as Rows per batch and Maximum insert commit size).
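
A sketch of the numbering-plus-slices pattern on 2008 R2, handling the transactions around the reads from the temp table rather than the load into it (dbo.SourceOrders, dbo.DestOrders and their columns are placeholders):

SELECT ROW_NUMBER() OVER (ORDER BY OrderID) AS rn,
       OrderID, OrderText
INTO   #tblTemp
FROM   dbo.SourceOrders;

DECLARE @batch int = 1000, @start int = 1, @rows int = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRAN;

    INSERT INTO dbo.DestOrders (OrderID, OrderText)
    SELECT OrderID, OrderText
    FROM   #tblTemp
    WHERE  rn BETWEEN @start AND @start + @batch - 1;

    SET @rows = @@ROWCOUNT;
    COMMIT;

    SET @start = @start + @batch;
END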

View 6 Replies View Related

SQL Server 2012 :: Update Table Based On Existing Values In Multiple Rows?

Oct 1, 2015

The objective is to identify orders where an order fee has been applied incorrectly. I have multiple orders per customer; my table contains an orderID and a customerID. Currently, if the customer places additional orders before the previous orders have been closed/cancelled, then additional fees are being applied.

Let's say I'm comparing order #1 to order #2. I need to identify these rows where the following is true:-

The CustID is the same.

Order #2 has a more recent order date.

Order #2 has a FeeDate Before the CancelledDate of Order #1 (or Order #1 has no cancellation date).

So in the table, orderID 2835692 of CustID 24643 has a valid order fee. But all the subsequently placed orders have fees which were applied before the first order was cancelled, and so I want to update the FeeInvalid column with a 'Y'. The first fee will always be valid.

I think I understand why the code I am trying doesn't achieve the result I want, but I can't figure out how to write it correctly. Below is one example of code I've tried, and also code to create the table and insert some test data.

update t1
SET FeeInvalid = 'Y'
FROM MockData t1 Join MockData t2 on t1.CustID = t2.CustID
WHERE t1.CustID = t2.CustID
AND t2.OrderDate > t1.OrderDate
AND t2.FeeDate > t1.CancelledDate

CREATE TABLE [dbo].[MockData](
[OrderID] [float] NULL,

[code]....
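
A sketch that follows the three rules as stated above (same MockData table and aliases as in the post); because the earliest order per customer never appears as the later order in the join, its fee is left untouched:

UPDATE t2
SET    FeeInvalid = 'Y'
FROM   MockData AS t2
JOIN   MockData AS t1
  ON   t1.CustID = t2.CustID
 AND   t2.OrderDate > t1.OrderDate
 AND   (t1.CancelledDate IS NULL OR t2.FeeDate < t1.CancelledDate);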

View 4 Replies View Related

Add SQL Count Column To Existing Table

Apr 22, 2008

I have a table for blog comments. I want to add a column that counts the number of comments for each article.
The existing table looks like this:
CommentID
ArticleID (FK)
commentAuthor
authorEmail
comment
commentDate (getDate())
I would like to add a column that counts the total number of comments for each article. This gives me what I want using the VS query tool:
"SELECT COUNT(comment) AS Expr1 FROM UserComments WHERE (articleid = @articleid)"
But can I add that to the table somehow so it does it automagically???
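
Rather than storing the count in the table, one common approach is to expose it through a view (a sketch; the count is always current because it is computed at query time):

CREATE VIEW dbo.ArticleCommentCounts
AS
SELECT articleid, COUNT(comment) AS CommentCount
FROM   dbo.UserComments
GROUP BY articleid;

A computed column can't contain a subquery, so the usual alternatives are a view like this or a counter column on the articles table maintained by a trigger.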

View 5 Replies View Related

Inserting A Column In An Existing Table

Mar 5, 2004

I have an existing table (see below).

-----------
[FormCode] [varchar] (4) NULL ,
[FiscalYear] [char] (4) NULL
-----------

I want to add the column below, after the [FormCode] column, when my SPROC runs.
-----------
[FiscalMonth] [char] (2) NULL
-----------

Any ideas would be a big help?
TIF

View 7 Replies View Related

How To Archive Table To Another Table With Unique Number For All Rows Once + Date

Apr 28, 2008

I need help archiving a table into another table, adding a unique number for all rows in each run, plus a date/time value (not to the second, only day, hour and minute).
When I insert into the other table I need to add 2 more fields (unique number, date_time).

This is table 1, which I select from:
ID fname new_date val_holiday
----------------------------------------------------

111 aaaa 15/03/2008 1
111 aaaa 16/03/2008 1
111 aaaa 18/03/2008 1
111 aaaa 19/03/2008 1
111 aaaa 20/03/2008 1
111 aaaa 21/03/2008 1

222 bbb 02/05/2008 3
222 bbb 03/05/2008 3
222 bbb 04/05/2008 3
222 bbb 05/05/2008 3
222 bbb 06/05/2008 3
222 bbb 07/05/2008 3
222 bbb 08/05/2008 3
222 bbb 09/05/2008 3

333 ccc 03/04/2008 4
333 ccc 04/04/2008 4

This is table 2, which I insert into:
----------------------------------
ID fname new_date val_holiday unique number date_time
--------------------------------------------------------------------------------------------------------------------

111 aaaa 15/03/2008 1 666 15/04/2008 17:03
111 aaaa 16/03/2008 1 666 15/04/2008 17:03
111 aaaa 18/03/2008 1
111 aaaa 19/03/2008 1 666 15/04/2008 17:03
111 aaaa 20/03/2008 1 666 15/04/2008 17:03
111 aaaa 21/03/2008 1 666 15/04/2008 17:03

222 bbb 02/05/2008 3 666 15/04/2008 17:03
222 bbb 03/05/2008 3
222 bbb 04/05/2008 3 666 15/04/2008 17:03
222 bbb 05/05/2008 3 666 15/04/2008 17:03
222 bbb 06/05/2008 3 666 15/04/2008 17:03
222 bbb 07/05/2008 3 666 15/04/2008 17:03
222 bbb 08/05/2008 3 666 15/04/2008 17:03
222 bbb 09/05/2008 3 666 15/04/2008 17:03

333 ccc 03/04/2008 4 666 15/04/2008 17:03
333 ccc 04/04/2008 4 666 15/04/2008 17:03

For every archive run (insert) into the other table I need to generate a unique batch number plus a date/time value (not to the second, only day, hour and minute).

next insert ......
ID fname new_date val_holiday unique number date_time
--------------------------------------------------------------------------------------------------------------------

111 aaaa 15/03/2008 1 667 15/04/2008 17:15
111 aaaa 16/03/2008 1 667 15/04/2008 17:15
111 aaaa 18/03/2008 1
111 aaaa 19/03/2008 1 667 15/04/2008 17:15

.........................
.....................................................................667 15/04/2008 17:15


next insert ......
ID fname new_date val_holiday unique number date_time
--------------------------------------------------------------------------------------------------------------------

111 aaaa 15/03/2008 1 668 15/04/2008 08:15
111 aaaa 16/03/2008 1 668 15/04/2008 08:15
111 aaaa 18/03/2008 1
111 aaaa 19/03/2008 1 668 15/04/2008 08:15

.........................
.....................................................................668 15/04/2008 08:15



TNX
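
A sketch of one archive run (table1/table2 as above; unique_number and date_time are assumed column names in table2, the batch number is taken as the current maximum plus one, and the timestamp is truncated to the minute):

DECLARE @batch int, @stamp datetime;

SELECT @batch = ISNULL(MAX(unique_number), 0) + 1 FROM table2;

-- keep only the date, hour and minute (strip seconds and milliseconds)
SELECT @stamp = DATEADD(minute, DATEDIFF(minute, 0, GETDATE()), 0);

INSERT INTO table2 (ID, fname, new_date, val_holiday, unique_number, date_time)
SELECT ID, fname, new_date, val_holiday, @batch, @stamp
FROM   table1;

Every row inserted in the same run gets the same batch number and the same minute-level timestamp; the next run gets the next number.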

View 3 Replies View Related






