Reorganizing A Large Table 25 Hours And Counting. HELP!

Nov 7, 2007



I'm currently running a reorganize on a large table of ~60 gig. I started the reorganize when I noticed the fragmentation was 97.95%!!!

Well the reorganize has been running for 25 hours now and apparently bulk inserts cannot happen during this time since my SSIS package just bombed trying to prepare for bulk insert.

Anyway, my question is can I cancel this reorg? I didn't start the reorg through the query analyzer. I saw this cute little reorganize button when I right clicked my indexes, properties, fragmentation in the SQL 2005 management studio. I clicked it and then clicked ok.

I know I should have done an ALTER INDEX REBUILD, but I wasn't comfortable with the process and did the one-click solution that is now killing me.

What happens if I go into task manager and shut down the process? Am I risking a serious side effect of corruption or will SQL just stop so I can rebuild the index properly?
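For anyone else in this spot: REORGANIZE works one page at a time, so cancelling it keeps the compaction already done and triggers only a small rollback. Rather than Task Manager, kill just the session from inside SQL Server. A hedged sketch (the session id is illustrative):

-- Find the session running the reorganize; percent_complete is
-- populated for ALTER INDEX REORGANIZE operations.
SELECT session_id, command, status, percent_complete
FROM sys.dm_exec_requests
WHERE command LIKE 'ALTER INDEX%'

-- Then kill that session (52 is a hypothetical id). Unlike ending the
-- whole SQL Server process, this cancels only the one operation.
KILL 52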




Slow UPDATE - Running For 16 Hours And Counting...

Jun 7, 2004

The following basic UPDATE SQL statement has been running for 16 hours and counting. I need to get this done ASAP.


UPDATE Recipients SET UndeliverableTime = getdate()
FROM Recipients
INNER JOIN Domains ON (Recipients.DomainID = Domains.ID)
INNER JOIN Undeliverables ON (
Recipients.UserName + '@' + Domains.Domain =
Undeliverables.EmailAddress)


Is there any way I can see how far this has gone and how long it will take to finish? Will this take another hour to finish or another week?

Both tables (Recipients and Undeliverables) have approximately 80 million records.

I did a nearly identical operation with another table that had only 7 million records and it took 10.5 hours. I hope this doesn't scale linearly to 115 hours.

I am tempted to cancel, retune, and rerun, but that may trigger a really expensive rollback operation that could take days. Any ideas?
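If you do end up rerunning it: SQL 2000 gives no progress counter for a single UPDATE, and the concatenated-string join (UserName + '@' + Domain) can't use an index, so long runtimes are expected. Batching the update makes progress visible (count the remaining rows) and keeps any cancel cheap, since only the current chunk rolls back. A hedged sketch, assuming UndeliverableTime is NULL until it is set:

-- Update in chunks of 100k rows; each chunk commits on its own.
SET ROWCOUNT 100000   -- SQL 2000 style; on 2005+ use UPDATE TOP (100000)
WHILE 1 = 1
BEGIN
    UPDATE Recipients
    SET UndeliverableTime = getdate()
    FROM Recipients
    INNER JOIN Domains ON (Recipients.DomainID = Domains.ID)
    INNER JOIN Undeliverables ON (
        Recipients.UserName + '@' + Domains.Domain =
        Undeliverables.EmailAddress)
    WHERE Recipients.UndeliverableTime IS NULL   -- assumed: not yet processed

    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0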

thanks!


Transact SQL :: To Display Days Hours Mins Format Based On Business Hours

Apr 22, 2015

I want to display Days Hours Mins Format.

I am having two columns like below:

Col1 (in days)    Col2 (in Hours:Mins)
3 days            4:5

First I have to add Col1 and Col2 (here one day equals 9 hours), so the sum is 31.5.

From this 31.5 I should display 3 Days 4 Hours 30 Mins, because 31.5 contains 3 nine-hour days, plus 4 hours, and the remaining .5 equals 30 mins.
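A hedged T-SQL sketch of that arithmetic, using the 31.5 figure from above and 9 working hours per day:

DECLARE @TotalHours decimal(9,2) = 31.5   -- Col1 * 9 + Col2, already summed
DECLARE @HoursPerDay int = 9

SELECT
    FLOOR(@TotalHours / @HoursPerDay)                    AS [Days],  -- 3
    CAST(@TotalHours AS int) % @HoursPerDay              AS [Hours], -- 4 (decimal-to-int truncates)
    CAST((@TotalHours - FLOOR(@TotalHours)) * 60 AS int) AS [Mins]   -- 30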


Breaking Down Total Hours Worked Into Day And Evening Hours

Sep 21, 2006

I have data coming from a telephony system that keeps track of when an employee makes a phone call to conduct a survey, and which project number is being billed for the time the employee spends on that phone call, in a MS SQL Server 2000 database (which I don't own).

The data is being returned to me in a view (see DDL for w_HR_Call_Log below). I link to this view in MS Access through ODBC to create a linked table. I have my own view in Access that converts the integer numbers for start and end date to Date/Time and inserts some other information I need.

This data is eventually going to be compared with data from some electronic timesheets for purposes of comparing entered hours vs hours actually spent on the telephone, and the people that will be viewing the data need the total time on the telephone as well as that total broken down by day/evening and weekend. Getting weekend durations is easy enough (see SQL for qryTelephonyData below), but I was wondering if anyone knew of efficient set-based methods for doing a day/evening breakdown of some duration given a start date and end date (with the day/evening boundary being 17:59:59)? My impression is that to do this correctly (i.e., handle employees working in different time zones, adjusting for DST, and figuring out what the boundary is for switching from evening back to day) will require procedural code (probably in Visual Basic or VBA). However, if there are set-based algorithms that can accomplish it in SQL, I'd like to explore those, as well. Can anyone give any pointers? Thanks.

DDL for view in MS SQL 2000 database:

CREATE VIEW dbo.w_HR_Call_Log
AS
SELECT TOP 100 PERCENT
    dbo.TRCUsers.WinsID,
    dbo.users.username AS Initials,
    dbo.billing.startdate,
    dbo.billing.startdate + dbo.billing.duration AS EndDate,
    dbo.billing.duration,
    dbo.projects.name AS PrjName,
    dbo.w_GetCallTrackProject6ID(dbo.projects.description) AS ProjID6,
    dbo.w_GetCallTrackProject10ID(dbo.projects.description) AS ProjID10,
    dbo.billing.interactionid
FROM dbo.projects
    INNER JOIN dbo.projectsphone
    INNER JOIN dbo.users
    INNER JOIN dbo.TRCUsers ON dbo.users.userid = dbo.TRCUsers.UserID
    INNER JOIN dbo.billing ON dbo.users.userid = dbo.billing.userid
        ON dbo.projectsphone.projectid = dbo.billing.projectid
        ON dbo.projects.projectid = dbo.projectsphone.projectid
WHERE (dbo.billing.userid <> 0)
ORDER BY dbo.billing.startdate

I don't have access to the tables, but the fields in the view come through as the following data types:

WinsID - varchar(10)
Initials - varchar(30)
startdate - long integer (seconds since 1970-01-01 00:00:00)
enddate - long integer (seconds since 1970-01-01 00:00:00)
duration - long integer (enddate - startdate)
ProjID10 - varchar(15)
interactionid - varchar(255) (the identifier for this phone call)

MS Access SQL statement for qryTelephonyData (based on the view, w_HR_Call_Log):

SELECT dbo_w_HR_Call_Log.WinsID, dbo_w_HR_Call_Log.ProjID10,
    FORMAT(CDATE(DATEADD('s', startdate-(5*60*60), '01-01-1970 00:00:00')), "yyyy-mm-dd") AS HoursDate,
    CDATE(DATEADD('s', startdate-(5*60*60), '01-01-1970 00:00:00')) AS StartDT,
    CDATE(DATEADD('s', enddate-(5*60*60), '01-01-1970 00:00:00')) AS EndDT,
    DatePart('w', [StartDT]) AS StartDTDayOfWeek,
    Duration,
    IIf(StartDTDayOfWeek=1 Or StartDTDayOfWeek=7, Duration, 0) AS WeekendSeconds
FROM dbo_w_HR_Call_Log
WHERE WinsID <> '0'
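For the day/evening split, a set-based approach is possible if a call never spans midnight: compute each call's overlap with the portion of its start date before the 18:00 boundary. Below is a hedged SQL 2000-compatible sketch against a hypothetical view that already exposes StartDT/EndDT as datetimes; it ignores time zones and DST, which would still need handling elsewhere.

-- dbo_w_HR_Call_Log_Converted is hypothetical (StartDT/EndDT as datetimes);
-- assumes no call spans midnight.
SELECT
    t.WinsID,
    t.StartDT,
    t.EndDT,
    CASE
        WHEN t.EndDT   <= t.Boundary THEN DATEDIFF(s, t.StartDT, t.EndDT)  -- entirely daytime
        WHEN t.StartDT >= t.Boundary THEN 0                                -- entirely evening
        ELSE DATEDIFF(s, t.StartDT, t.Boundary)                            -- straddles 18:00
    END AS DaySeconds,
    CASE
        WHEN t.StartDT >= t.Boundary THEN DATEDIFF(s, t.StartDT, t.EndDT)
        WHEN t.EndDT   <= t.Boundary THEN 0
        ELSE DATEDIFF(s, t.Boundary, t.EndDT)
    END AS EveningSeconds
FROM (
    SELECT WinsID, StartDT, EndDT,
           -- midnight of the start date plus 18 hours = the day/evening boundary
           DATEADD(hh, 18, CONVERT(datetime, CONVERT(varchar(10), StartDT, 120))) AS Boundary
    FROM dbo_w_HR_Call_Log_Converted
) t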


Reorganizing Indexes

Jun 13, 2007

Rebuilding Indexes automatically updates statistics.
But does Reorganizing Indexes also update statistics?

Regards
Paresh Motiwala
Boston, USA


Counting Issue In Table

Aug 24, 2004

I have a table with the following data (sample)

Type Year Price
---- ---- -----
A 00 100.00
A 00 200.00
A 01 105.00
A 01 105.00-
B 00 100.00
B 00 200.00
B 01 105.00
B 02 00.00


I need to establish a Type Count. The business rule to do the count is -

For a particular type, take a single year and add up the price. If the price is greater than zero then count = 1 else count = 0. After completing the counts for all the years for a particular type, add up the counts.

Based on the business rule for the above sample data I should have the following count.

Count -
Type A = 1 (For Year 00, price = 300.00 so count = 1 and for year 01, price = .00 (105 + 105.00 - ) and so count = 0. add counts to get a total of 1)

Type B = 2 (For Year 00, price = 300.00 so count = 1 and for year 01, price = 105 and so count = 1 and for year 02 price = 00 so count = 0. add counts to get a total of 2)

How do I implement this count using a query? I am free to add any indicator columns that may be required to perform the counting.
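No indicator columns should be needed; a hedged sketch using two levels of grouping (the table name is hypothetical):

-- Per Type and Year, flag years whose summed Price is positive,
-- then total the flags per Type.
SELECT Type,
       SUM(YearFlag) AS TypeCount
FROM (
    SELECT Type, [Year],
           CASE WHEN SUM(Price) > 0 THEN 1 ELSE 0 END AS YearFlag
    FROM dbo.Prices   -- hypothetical table name
    GROUP BY Type, [Year]
) y
GROUP BY Type

-- Against the sample data this returns A = 1 and B = 2.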

Let me know.

Thanks

Vivek


Converting Decimal Hours To Hours And Minutes

May 13, 2008

I have a float variable that holds a decimal number of hours.

So 1.5 equals 1 hour 30 minutes.

I need to change this to the format 1:30

Any idea how to do this?
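A hedged sketch of one way to build the string (rounding the minutes; the variable name is illustrative):

DECLARE @hrs float
SET @hrs = 1.5

SELECT CAST(FLOOR(@hrs) AS varchar(10)) + ':'
     + RIGHT('0' + CAST(CAST(ROUND((@hrs - FLOOR(@hrs)) * 60, 0) AS int) AS varchar(2)), 2)
     AS HoursMins   -- returns 1:30; the RIGHT('0'+..., 2) pads 1.05 to 1:03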


Counting Rows That Reference Another Table

Oct 20, 2005

Hi,

I have two tables,

Table A(A_ID, Info)

Table B(B_ID, A_ID, Blah) where B.A_ID references A.A_ID

How can I detemine how many records in table B reference each unique A_ID in table A?

I've tried the following but it doesn't work:

Select A.A_ID, COUNT(B.A_ID) FROM A
JOIN B ON A.A_ID = B.A_ID
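For what it's worth, the attempt fails because an aggregate alongside plain columns needs a GROUP BY. A hedged fix; the LEFT JOIN also keeps A_IDs that nothing in B references, counting them as 0:

SELECT A.A_ID, COUNT(B.A_ID) AS RefCount   -- COUNT of a column ignores NULLs
FROM A
LEFT JOIN B ON A.A_ID = B.A_ID
GROUP BY A.A_ID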


Table Name Changes - How To Grab Last 24 Hours

May 30, 2014

My DB saves it's data into a table at the end of each day like:

'e4_event_20140530' where the last bit changes according to the date. So 30th May 2014 in this case.

What I am trying to do is query the last 24 hours. I know I can grab from 2 tables and do a 'between' with times, but that means changing the table names and times in the query every time I run it. I'd just like to run it and have it fetch the last 24 hours at any point in time.

My DB outputs time like '2014-05-30 08:54:23'
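A hedged sketch: derive the two daily table names from GETDATE() and query both with dynamic SQL. The EventTime column name is an assumption; substitute whatever column holds that timestamp:

DECLARE @today varchar(8), @yesterday varchar(8), @sql nvarchar(max)
SET @today     = CONVERT(varchar(8), GETDATE(), 112)                  -- e.g. 20140530
SET @yesterday = CONVERT(varchar(8), DATEADD(dd, -1, GETDATE()), 112)

SET @sql = N'
SELECT * FROM dbo.e4_event_' + @yesterday + N'
WHERE EventTime >= DATEADD(hh, -24, GETDATE())
UNION ALL
SELECT * FROM dbo.e4_event_' + @today + N'
WHERE EventTime >= DATEADD(hh, -24, GETDATE())'

EXEC sys.sp_executesql @sql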


SQL Server 2014 :: Counting Corresponding Rows In Table

Mar 21, 2015

Select statement joining file1 to file2. File1 may have 0, 1, or many corresponding rows in file2. I need to count the corresponding rows in table2. Table2 also has a Boolean column, and I need to count the number of rows where it is true. So I need both the total number of matching rows and the count of those set to true. This is an example of what I have so far. I had to add each column being selected into a GROUP BY to make it work, but I do not know why. Is there some other way this should be set up?

SELECT c.CarId, c.CarName, c.CarColor, COUNT(t.TrailerId) as trailerCount, (add count of boolian, say t.TrailerFull is true)
FROM Car c
LEFT JOIN Trailer t on t.CarId = c.CarId
GROUP BY c.CarId, c.CarName, c.CarColor
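On the GROUP BY question: any selected column that isn't inside an aggregate must appear in the GROUP BY, which is the rule you were bumping into, so the setup above is correct. For the boolean count, a hedged completion of the placeholder (assumes TrailerFull is a bit column):

SELECT c.CarId, c.CarName, c.CarColor,
       COUNT(t.TrailerId) AS trailerCount,
       SUM(CASE WHEN t.TrailerFull = 1 THEN 1 ELSE 0 END) AS fullTrailerCount
FROM Car c
LEFT JOIN Trailer t ON t.CarId = c.CarId
GROUP BY c.CarId, c.CarName, c.CarColor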


Counting Records In All Non System Database Table

Jun 9, 2007

is there a way to get a count of records for each table in a database by table in one query? I can query each table using a count, but this is pretty tedious when you have 50+ tables. Anybody have any ideas?
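A hedged sketch (SQL Server 2005+): the catalog views can answer this in one query, with the caveat that the row counts are maintained by the engine and can drift slightly from a true COUNT(*):

SELECT s.name AS SchemaName, t.name AS TableName, SUM(p.rows) AS [Rows]
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.partitions p ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1)   -- heap or clustered index only, so rows aren't double-counted
GROUP BY s.name, t.name
ORDER BY SUM(p.rows) DESC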


SQL 2012 :: Reorganizing Full Text Catalogs

Feb 25, 2014

I have a handful of databases that are enabled for Full-Text search. After investigating some recent performance issues, I discovered the FullText Catalogs needed to be reorganized. This is a task I knew I wanted to automate, without having to hard-code db names or catalog names. My first thought was to use sp_executesql with dynamic tsql strings. I was quite disappointed to realize that I couldn't use fully qualified names to run either of these commands:

ALTER FULLTEXT CATALOG [DBName].[SchemaName].[CatalogName] REORGANIZE
ALTER FULLTEXT CATALOG [DBName]..[CatalogName] REORGANIZE

My next thought was to create a stored proc on each user db that would do the re-orgs. Then I could have a sql job iterate through the db's and run the sp on each db. Thinking...Hmm...That's do-able, but I don't like it. Add a new db to the server, and I have to remember to create the sp. Relying on my memory to do something isn't always a good idea. Plus, if I have to fix/edit/enhance the sp, I get the pleasure of doing it multiple times on multiple servers. Too much work.

I came up with some code that would dynamically reorganize all the catalogs, but I had to run it while connected to a specific db. How do I run the code while connected to [master], but in the context of a different db? The undocumented proc [sp_MSforeachdb] came to mind. I'd never used it, and was reluctant to do so after reading about other dba's experiences with it. So I came up with my own derivative, just for this one purpose. The code is below.

CREATE PROCEDURE dba.ReorganizeFullTextCatalogs
AS
/*
Purpose:
Reorganizes the FullText Catalogs (as needed) on all user databases.

Inputs: None

History:
02/25/2014  DMason  Created
*/
--This is the tsql statement that gets executed on each db.
DECLARE @InnerSql NVARCHAR(MAX) =
'DECLARE @Tsql NVARCHAR(MAX)

[Code] ......
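For anyone after the same thing, the context problem has a lesser-known answer: calling a database's own copy of sys.sp_executesql runs the batch inside that database, even from a connection to [master]. A hedged sketch of the whole loop (a plain cursor instead of sp_MSforeachdb; names are illustrative):

DECLARE @db sysname, @proc nvarchar(400), @sql nvarchar(max)

SET @sql = N'
    DECLARE @t nvarchar(max);
    SELECT @t = ISNULL(@t, N'''')
              + N''ALTER FULLTEXT CATALOG '' + QUOTENAME(name) + N'' REORGANIZE;''
    FROM sys.fulltext_catalogs;
    IF @t IS NOT NULL EXEC (@t);'

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4 AND state_desc = 'ONLINE'   -- user databases only

OPEN dbs
FETCH NEXT FROM dbs INTO @db
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @proc = QUOTENAME(@db) + N'.sys.sp_executesql'
    EXEC @proc @sql   -- runs in @db's context
    FETCH NEXT FROM dbs INTO @db
END
CLOSE dbs
DEALLOCATE dbs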


SQL 2012 :: Reorganizing Full Text Catalog?

Sep 10, 2014

I am in a dilemma if I should reorganize or rebuild a full text catalog.

My application owner does not want a rebuild as he says that it takes week for the rebuild to occur on these full text indexes.

Will this code just reorganize without taking the full-text indexes offline: ALTER FULLTEXT CATALOG catalog_name REORGANIZE?


Reorganizing/Rebuilding Index Results In More Fragmentation?

Dec 11, 2007

I have been reworking my index maintenance jobs from my old SQL 2000 table and view references to the DMVs and system tables in SQL 2005, and I noted that some of my indexes end up being more fragmented after a reorganization and/or rebuild. That doesn't make much sense to me at all. The code I am executing is:




Code Block
print ' '
print '************* Beginning Index Updates for '+db_name()+' *************'
print ' '

DECLARE @tablename varchar(250),
        @indexname varchar(250),
        @fragpcnt decimal(18,1),
        @indexid int,
        @dbID int

-- Determine DB ID
SELECT @dbID = DB_ID()

DECLARE tnames_cursor CURSOR FOR
    SELECT b.name, c.name, a.avg_fragmentation_in_percent, a.index_id
    FROM sys.dm_db_index_physical_stats (@dbID, NULL, NULL, NULL, NULL) a
    JOIN sys.indexes b ON a.object_id = b.object_id
                      AND a.index_id = b.index_id
    JOIN sys.objects c ON b.object_id = c.object_id
    WHERE a.index_id > 0
    ORDER BY a.page_count DESC

OPEN tnames_cursor
FETCH NEXT FROM tnames_cursor INTO @indexname, @tablename, @fragpcnt, @indexid
WHILE (@@fetch_status = 0)
BEGIN
    -- Determine the table's object ID
    DECLARE @tablenameID int
    SELECT @tablenameID = OBJECT_ID(@tablename)

    IF @fragpcnt > 30
    BEGIN
        EXEC('ALTER INDEX ['+@indexname+'] ON ['+@tablename+'] REBUILD')
        PRINT '***************************************************'
        PRINT 'Index '+@indexname+' was rebuilt.'
        PRINT 'Original fragmentation Percent: ' + convert(varchar, @fragpcnt) + '%'

        SELECT @fragpcnt = a.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats (@dbID, @tablenameID, @indexid, NULL, NULL) a
        JOIN sys.indexes b ON a.object_id = b.object_id
                          AND a.index_id = b.index_id
        JOIN sys.objects c ON b.object_id = c.object_id

        PRINT 'Post Rebuild fragmentation Percent: ' + convert(varchar, @fragpcnt) + '%'
        PRINT ''
    END
    ELSE IF @fragpcnt BETWEEN 5 AND 30
    BEGIN
        EXEC('ALTER INDEX ['+@indexname+'] ON ['+@tablename+'] REORGANIZE')
        PRINT '***************************************************'
        PRINT 'Index '+@indexname+' was Reorganized.'
        PRINT 'Original fragmentation Percent: ' + convert(varchar, @fragpcnt) + '%'

        SELECT @fragpcnt = a.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats (@dbID, @tablenameID, @indexid, NULL, NULL) a
        JOIN sys.indexes b ON a.object_id = b.object_id
                          AND a.index_id = b.index_id
        JOIN sys.objects c ON b.object_id = c.object_id

        PRINT 'Post Reorganization fragmentation Percent: ' + convert(varchar, @fragpcnt) + '%'
        PRINT ''
    END
    ELSE
    BEGIN
        PRINT '***************************************************'
        PRINT 'Index '+@indexname+' was left alone.'
        PRINT 'Original fragmentation Percent: ' + convert(varchar, @fragpcnt) + '%'
        PRINT ''
    END

    FETCH NEXT FROM tnames_cursor INTO @indexname, @tablename, @fragpcnt, @indexid
END

print ' '
print '************* NO MORE TABLES TO INDEX *************'
PRINT 'All indexes for the '+db_name()+' database have been updated.'
print ' '

CLOSE tnames_cursor
DEALLOCATE tnames_cursor





Below are some snippets of the output:

***************************************************
Index _dta_index_wuci_history_8_1123587141__K2_K5 was rebuilt.
Original fragmentation Percent: 58.3%
Post Rebuild fragmentation Percent: 58.3%

***************************************************
Index PK__batchjob__776C5C84 was left alone.
Original fragmentation Percent: 0.0%

***************************************************
Index PK__ContactWebDetail__116A8EFB was rebuilt.
Original fragmentation Percent: 44.4%
Post Rebuild fragmentation Percent: 77.8%

***************************************************
Index PK__managed_object_s__5DCAEF64 was left alone.
Original fragmentation Percent: 0.0%

***************************************************
Index kb_IX_kb_scope_scope_role was rebuilt.
Original fragmentation Percent: 75.0%
Post Rebuild fragmentation Percent: 87.5%

***************************************************
Index PK__query__09A971A2 was left alone.
Original fragmentation Percent: 0.0%

***************************************************
Index PK__email_message__38996AB5 was rebuilt.
Original fragmentation Percent: 85.7%
Post Rebuild fragmentation Percent: 0.0%

***************************************************
If the index begins with PK, then it is the primary key index which is generally the clustered index on the table, but not always. If it has an IX on it, it is generally a non-clustered index on the table, but again not always. In the case of the above, the PK is a clustered index, and the IX is a non-clustered index.

Anyone have any ideas why this is functioning in this manner?
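One thing worth ruling out: very small indexes often show high fragmentation that a rebuild can't reduce, because with only a handful of pages the percentage is dominated by page placement the engine won't bother to fix. A hedged tweak to the cursor's SELECT filters those out (the 1000-page floor is a common rule of thumb, not a hard requirement):

SELECT b.name, c.name, a.avg_fragmentation_in_percent, a.index_id
FROM sys.dm_db_index_physical_stats (@dbID, NULL, NULL, NULL, NULL) a
JOIN sys.indexes b ON a.object_id = b.object_id
                  AND a.index_id = b.index_id
JOIN sys.objects c ON b.object_id = c.object_id
WHERE a.index_id > 0
  AND a.page_count >= 1000   -- tiny indexes produce noisy fragmentation numbers
ORDER BY a.page_count DESC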

Thanks,

Jon


Counting The Inserts And Updates On A Table In A Sql Server Database

Jul 20, 2005

Hello,

Can someone point me to getting the total number of inserts and updates on a table over a period of time? I just want to measure the insert and update activity on the tables.

Thanks.

- Vish
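A hedged sketch, if SQL Server 2005 or later is an option: the operational-stats DMV keeps cumulative leaf-level insert/update counters per index, reset at server restart, so sampling it at the start and end of the period gives the activity in between. The table name is illustrative:

SELECT OBJECT_NAME(s.object_id) AS TableName,
       SUM(s.leaf_insert_count) AS Inserts,
       SUM(s.leaf_update_count) AS Updates
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL) s
GROUP BY s.object_id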


How Trustworthy Is Sys.partitions As A Means For Counting Rows In A Table?

Nov 20, 2007



Hi all,
I have the following function:




Code Block
create function udf_CountRows(@pTableName sysname)
returns int
as
begin
    declare @ret int

    select @ret = SUM(p.rows)
    from sys.partitions p
    inner join sys.objects o
        on p.object_id = o.object_id
        and o.[name] = @pTableName
    where p.index_id in (0, 1)  -- heap or clustered index only, so rows aren't double-counted
    return @ret
end





Can I trust sys.partitions to always return the correct value or does it suffer the same issue as sysindexes prior to SQL2005?

Thanks
Jamie


SQL 2012 :: Index Reorganizing Is 10 Times Slower Than Rebuild?

Oct 19, 2014

Why is index reorganizing 10 times slower than a rebuild with the "ONLINE=ON" clause?


SQL Server 2008 :: Reorganizing All Failed - Disabled Indexes

Jun 16, 2015

We have a maintenance plan that reorganizes all indexes in a database. We disabled one of the indexes on one table and the job failed. How can we set up the maintenance plan to ignore disabled indexes instead of failing?
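The built-in task has no such switch that I know of; one hedged workaround is to replace it with a T-SQL job step that generates REORGANIZE statements only for enabled indexes:

DECLARE @sql nvarchar(max)
SET @sql = N''
SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
    + N' REORGANIZE;' + CHAR(10)
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id
WHERE i.index_id > 0        -- skip heaps
  AND i.is_disabled = 0     -- skip disabled indexes, which is the point
EXEC sys.sp_executesql @sql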


SQL Server 2014 :: Selecting Records From Table 2 While Counting Records In Table 1

Aug 11, 2015

Table1 contains fields Groupid, UserName,Category, Dimension

Table2 contains fields Group, Name,Category, Dimension (Group and Name are not in Table1)

So basically I need to read the records in Table1 using Groupid, and each time there is a Groupid, select records from Table2 where Table2.Category in (Select Category from Table1) and Table2.Dimension in (Select Dimension from Table1).

In Table1 There might be 10 Groupid records all of which are different.
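The description is a little ambiguous, but one hedged reading of it as a single set-based query, rather than a row-by-row pass over Table1:

SELECT t2.[Group], t2.Name, t2.Category, t2.Dimension
FROM Table2 t2
WHERE t2.Category  IN (SELECT Category  FROM Table1)
  AND t2.Dimension IN (SELECT Dimension FROM Table1)

If the match has to hold per Groupid (Category and Dimension from the same Table1 row), a join on both columns would replace the two IN tests.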


Does It Store All The Results To Tempdb Database When I Query Against A Large Table Which Joins Another Table?

Jun 25, 2007

Hi, all experts here,

I am wondering if tempdb stores all results temporarily whenever I query a large fact table with over 4 million records which joins another dimension table? Each time I run the query, tempdb grows to nearly 1GB, which nearly runs out all the space on my local system drive, and as a result performance goes totally down. Is there any way to fix this problem? Thanks a lot in advance and I am looking forward to hearing from you shortly for your kind advices.

With best regards,

Yours sincerely,

Large Table-Table Partition, View Or Other Method?

Aug 27, 2007

Hi everyone,

I use SQL 2005. What is the best practice for dealing with a large table (more than a million rows)? Table partition, view, or something else?

Can you please give some suggestions? It will be very helpful if you can post some references or examples.

Thank you!


PK On A Large Table

Nov 16, 2007

I am developing an application that has a table with lots of records (network traffic), but the data is summarized every so often to create summary records (old records are deleted). The problem is that I have a PK based on an autoincrement ID (int) that will run out of numbers. However, this ID is not referenced anywhere (not a foreign key from another table, not used for deletion, and there are no updates in this table whatsoever).

So my possibilities are:
1.- reseed the id when it is about to run out.
2.- make the id bigint
3.- remove the id and change the PK to 2 other fields
4.- remove the id and without PK

I am leaning toward option 4, because I do not see the need for a PK, but I understand that it is quite out of the norm. So I would like to hear from other people (I do not have much experience with DBs).

I also like option 3. I already have a index on one of the other fields (time).

Any input will be appreciated.

Claudio Robles


BCP Large Table.

Jul 23, 2005

If I use BCP to export a very large table, will that table be blocked for writes during the export process? I don't want to prevent users from accessing that table during the bcp process.

Thank You, TFD.


Storing Large PDF's In Table

May 26, 2008

Hi,
I have this page that uploads PDFs to a table. In principle this works fine.

Until I try to upload large files (3 to 4 MB). I need to upload even larger files than that (I don't really know yet what users are going to come up with), and I get TimeOut problems. Now some people say it is not possible to exceed a limit of about 4 MB, but that there is a workaround by changing something in the web.config file. Can somebody give me info about that? (I am quite a novice really.) I tried to change it like this, but to no avail:

<system.web>
    <httpRuntime maxRequestLength="102400"
                 enable="True"
                 requestLengthDiskThreshold="102400"
                 useFullyQualifiedRedirectUrl="True"
                 executionTimeout="102400" />
</system.web>
Thanks for any help!


Move Large Table From DB To DB

Mar 3, 2004

I have a table of approx 1/2 million rows.

On a nightly basis, this table gets rebuilt in a temporary database. Once the table has been built and scrubbed, I need to move it into our webserver's db.

I'd like to do this with minimal interruption to the website.

Possible techniques:

1) I could set up a DTS package to copy the table object overwriting the destination table

2) I could export to a flat file and then bulk import into the live table (after truncating it)

3) I could run a process to update smaller chunks of data at a time running delete queries and insert queries.

Anybody have a thought on the best way to do this so that the web users would be virtually unaware that anything was happening ?


Duplicate In Large Table

Mar 1, 2002

Hi,

I am absolutely innocent as far as T-SQL is concerned. I need to detect all duplicates (key consists of 5 fields) in the table and delete the duplicates.
I tried different approaches like joins etc but nope.
Any help is appreciated
Thanks
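A hedged sketch for this, if SQL Server 2005 or later is available (the five column names are placeholders for the real key fields): number the rows within each duplicate key, then delete everything past the first.

WITH d AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY col1, col2, col3, col4, col5   -- the 5 key fields
               ORDER BY (SELECT NULL)) AS rn
    FROM dbo.MyTable
)
DELETE FROM d WHERE rn > 1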


Help! Indexing LARGE Table

Aug 6, 2004

OK, I imported 680 million records into an unindexed table. That went well.

Then, I went into Enterprise Manager and added a two column non-unique clustered index to that table to speed access.

It's been running for ~36 hours and I have no idea when it will complete. I have deadlines that I'm going to miss and am very nervous; what can I do?

SQL Server 2000 Enterprise Edition (8.00.818 - sp3 + hotfixes)
Dual 3Ghz Xeon (two physical CPUs each have HyperThreading enabled)
Windows 2000 SP4
4GB RAM (although I just noticed the 3GB OS switch wasn't on)
SCSI boot drive
tempdb, data, and transaction log are on a FibreChannel RAID SAN

Help! Thanks in advance!


Partitioning A Large Table - How Much Is Too Much?

Nov 14, 2007

Hi folks! I'm looking for advice on partitioning a large table. In the DDL below I've changed names to protect the guilty.

My table has this schema:


CREATE TABLE [dbo].[BigTable]
(
[TimeKey] [int] NOT NULL,
[SegmentID] [int] NOT NULL,
[MyVal] [tinyint] NOT NULL
) ON [BigTablePS1] (TimeKey) -- see below for partition scheme

alter table [dbo].[BigTable] add constraint [PK_BigTable]
primary key (timekey asc, SegmentID asc)

-- will evaluate whether this one is needed, my thinking is yes
-- based on the expected select queries.
create index NCI_SegmentID on BigTable(SegmentID asc)


The TimeKey column is sort of like a unix time. It's the number of minutes since 2001/01/01, but always floored to a 5-minute boundary, so only multiples of 5 are allowed.

Now, this table will be rather big. There are about 20k possible SegmentIDs. For every TimeKey from 2008/01/01 to 2009/01/01 (12 months), I'll have on the order of 20000 rows, one for each SegmentID.

For the 12 month period, there are 365*24*60/5=105120 possible TimeKey values. So the total rowcount is over 2 billion. (20k * 105120)

Select queries are expected to be something like this:


-- fetch just one particular row...
select MyVal from BigTable
where TimeKey=5555 and SegmentID=234234

--fetch for a certain set of SegmentID and a particular time...
select
b.SegmentID
,b.MyVal
from BigTable b
join OtherTable t on t.SegmentID=b.SegmentID
where b.TimeKey=5555
and t.SomeColumn='SomeValue'


Besides selects, also I need to be able to efficiently issue update statements against the table with new values in the MyVal column based on a range of TimeKey values (a contiguous span of a few days) and sets of about 1000 SegmentID. updates would always look like this:


update t
set t.MyVal=p.MyVal
from BigTable t
join #myTempTable p on t.TimeKey=p.TimeKey
and t.SegmentId=p.SegmentId


where #myTempTable would have order of 1000*24*60 rows in it, all with contiguous TimeKey values, and about 1000 different SegmentID values. #myTempTable also has a clustered pk on (timekey asc, SegmentId asc).

After the table is loaded, it would never get any inserts or deletes. only selects and updates.

Given the size, and the nature of the select and update queries, this table seems like a good candidate for partitioning. I'm thinking it makes sense to partition on TimeKey.

So my question is, is it stupid to create a separate partition for each day in the year long span of TimeKeys this table covers? That would mean 365 partitions in the partition function and partition scheme. Something like this:


CREATE PARTITION FUNCTION [BigTableRangePF1] (int)
AS RANGE LEFT FOR VALUES
(
3680640 + 0*1440, -- 3680640 is the number of minutes between 2001/01/01 and 2008/01/01
3680640 + 1*1440,
3680640 + 2*1440,
3680640 + 3*1440,
...snip...
3680640 + 363*1440,
3680640 + 364*1440,
3680640 + 365*1440
);
GO

CREATE PARTITION SCHEME [BigTablePS1]
AS PARTITION [BigTableRangePF1]
TO
(
[PRIMARY],[PRIMARY],[PRIMARY],
...snip...
[PRIMARY],[PRIMARY],[PRIMARY]
);
GO


Does anyone have any experience with partitioned tables with so many partitions? Is a few hundred partitions too many? From my understanding of partitions, it seems like having so many will be OK. Is it somehow worse than having hundreds of tables in a database?

Even with one partition for each day, I'll still have 24*60*20000/5 ~ 5m rows in each one.

5m seems like a manageable number. 2b does not.



elsasoft.org


Select On Large Table.

Jul 23, 2005

Greetings All, I was wondering what would happen if I were to do a "select * from table" on a table that has about 5 million rows. Would my read block other writers to the same table? Would it block other readers? I know SQL uses optimistic locking by default, but I am not sure what this means for other users trying to access the same table. Any advice would be greatly appreciated. TFD


Large Table Schema Changes

Jul 20, 2005

Quick question: does SQL do table/schema changes "in place"?

I've got a large table (140+ million rows of very wide data) that we want to change the schema on -- basically to remove a number of unused data elements.

Anyway, does anyone know if SQL will do an in-place change, or if it will copy the table to a new table, thereby increasing my space allocation needs? I'd effectively, temporarily, need space for two tables while the change is happening if it copies the table first. This is not good, as I do not have enough available space at the moment.

If you've got pointers to specific MS docs regarding this issue, please let me have 'em.

Thanks in advance.


Joining Large Table

Apr 29, 2008



I have a query that takes 12 minutes to execute. The query uses around 9 tables, but I have narrowed the problem down to one table that has over 65 million rows. The problem table has only 3 fields:

FieldOne (PrimaryKey)
FieldTwo Varchar(3000)
FieldThree Varchar(3000)

The query uses the primary key of this table to perform the join. FieldTwo and FieldThree are only used as output parameters.

I noticed if I remove FieldTwo and FieldThree from the output (but still leave the table in the query), the query executes in 1 second. However if I include FieldTwo and FieldThree in the output, the query takes over 12 minutes to execute.

I cannot index FieldTwo and FieldThree because of the field size, and I cannot reduce the size of the fields because of the data that needs to be stored in them. How can I index, or do something similar, to speed up the table lookup?


Very Large Table Maintenance

Jul 8, 2007

I have been asked to look at some performance issues with an application that utilises an 800GB table. This table is huge and contains 4 int columns and 1 decimal column. The table has a clustered index that covers 4 of the int columns; it is heavily fragmented and has not been maintained for a long time. The system has limited free space to even attempt rebuilding the index. Does anyone have any experience of running the ALTER INDEX REORGANIZE command on such a large table? Any information on what storage would be required to attempt this, or how long it would take?


Very Large Table Migration

Sep 21, 2006

I have to migrate an Oracle DB to SQL Server 2005, including a 450 GB table with images. The estimate is that it will take about 24 hours to move this data. I'm using SSIS with just one OLEDB input and sending to one OLEDB output. The SSIS process will be running on the destination SQL Server with 2GB of memory and at least half that memory in use by other apps (including SQL Server). I tried the SQL Server Destination but received errors trying to use it.

Are there any suggestions on settings or the best way to do this?







