Reducing Large Tables, Re-indexing And Backing Them Up

Jan 2, 2003

What is the best procedure/sequence for reducing tables containing a large number of rows on a
SQL 2000 server?
The idea is first to check which tables grow extremely fast (all statistics, user or log tables), then trim each table
down to the number of months of data the user wishes to keep.
As a second step, back up the remaining rows of the table as txt files on the hard disk (using DTS), then run UPDATE STATISTICS and re-index the reduced table.
Run the DTS package once every month (delete the oldest month and back up the newest month) and repeat the steps above to keep the size of the tables adequate.
What is a fast way to reduce the number of rows of a large table? The following example produces an error (timeout expired) on my
ADO connection when executed:

SET @str = 'DELETE FROM ' + @ProcessTable + ' WHERE ' + @SelectedColumn +
           ' < DATEADD(m, -' + @KeepMonthsInDatabase + ', GETDATE())'
EXEC (@str)

Adding ConnectionTimeout = 0 did not help, unfortunately.
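A common workaround (a hedged sketch, not from the thread; the table and column names below are placeholders standing in for @ProcessTable and @SelectedColumn): delete in small batches so no single statement runs long enough to hit the client timeout. On SQL 2000 this is typically done with SET ROWCOUNT:

-- Hedged sketch, SQL 2000 style: delete old rows in batches of 10,000
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.StatsLog
    WHERE StatDate < DATEADD(m, -6, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0  -- restore normal behavior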

What is the best way to re-index the table just maintained?
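On SQL 2000 the usual tools are DBCC DBREINDEX followed by UPDATE STATISTICS; a hedged sketch with the same placeholder table name:

-- Hedged sketch: rebuild all indexes on the trimmed table, then refresh stats
DBCC DBREINDEX ('dbo.StatsLog', '', 90)  -- '' = all indexes, 90 = fill factor
UPDATE STATISTICS dbo.StatsLog WITH FULLSCAN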

Thanks

mipo

View 2 Replies



SQL 2012 :: Index Maintenance For Large Tables?

Mar 8, 2014

We have very big tables (terabytes in size) and want to set up a strategy for index maintenance.
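A hedged sketch of the common fragmentation-driven approach (the thresholds are conventional rules of thumb, not from the thread): reorganize lightly fragmented indexes and rebuild heavily fragmented ones, skipping tiny indexes.

-- Hedged sketch: find indexes worth maintaining, by fragmentation
SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000
-- Then per index: REORGANIZE between roughly 10-30%, REBUILD above 30%, e.g.
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD WITH (ONLINE = ON)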

View 3 Replies View Related

Impact Of Non-clustered Index With Included Columns On Large Tables

Nov 14, 2011

I would like to know the impacts (if any) of adding a nonclustered index with included columns on large tables (these tables are populated by bulk insert from text files).
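For reference, the kind of index being asked about looks like this (names are hypothetical); the included columns live only at the leaf level, so they widen the index and add maintenance cost to every bulk insert without affecting the key:

-- Hedged sketch: a nonclustered index with included (non-key) columns
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalAmount)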

View 3 Replies View Related

SQL Server Admin 2014 :: Columnstore Index On Large Tables

Jul 1, 2015

I created a columnstore index on a table with 20 columns and about 1,000,000,000 rows.

About 5M rows are added every day.

SELECT queries became faster because of batch mode, and the table demands less disk space than before.

I also have 6 similar tables with 5,000,000,000 rows and plan to move them to columnstore indexes.

The server has 128 GB of RAM.

What pitfalls could I face if I have so many columnstore indexes on one server?

How could I see problems in the DMVs?
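One place to start (a hedged sketch, table name assumed): the row-group DMV shows open/closed delta stores and undersized compressed rowgroups, which are the usual symptoms of trickle-insert trouble.

-- Hedged sketch: inspect rowgroup health for one columnstore table
SELECT OBJECT_NAME(object_id) AS TableName,
       state_description,
       COUNT(*) AS rowgroups,
       SUM(total_rows) AS total_rows
FROM sys.column_store_row_groups
WHERE object_id = OBJECT_ID('dbo.BigFactTable')
GROUP BY object_id, state_description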

View 3 Replies View Related

Reducing Time On Retrieving Large Amount Of Data

May 5, 2003

Hi,
My application needs to retrieve data from a table which has more than 15 lakh (1.5 million) records. The records keep increasing by thousands every 15 days.
Is there any way I can reduce the retrieval time? Basically I have a SELECT statement with a few conditions and a clause for the IDs of these records.
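The usual first step for this shape of query is an index matching the filter columns; a hedged sketch in which every name is a placeholder:

-- Hedged sketch: index the filtered columns so the SELECT can seek
-- instead of scanning 1.5 million rows
CREATE NONCLUSTERED INDEX IX_Records_Status_Region
ON dbo.Records (Status, Region)

SELECT RecordID, Status, Region, CreatedOn
FROM dbo.Records
WHERE Status = 'A' AND Region = 'EU'
  AND RecordID IN (101, 102, 103)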

View 2 Replies View Related

SQL Server Admin 2014 :: Reducing Index Fragmentation During Inserts

Apr 26, 2015

We have a database with a table that contains around 180m records. Each day a further 70k are inserted. No records are ever deleted, as this table is used for archiving only. Users are required to perform SELECTs on this table constantly, but due to the high number of INSERTs the indexes become very fragmented very quickly. My aim is to avoid the daily index rebuilds that our software house is telling us we have to do.

This is the DDL for the table:

CREATE TABLE [dbo].[Inventory](
[EAN] [bigint] NOT NULL,
[Day] [smalldatetime] NOT NULL,
[State] [int] NOT NULL,
[Quantity] [int] NULL,
[StockValue] [float] NULL,
CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED

[code]...

There are also three non-clustered indexes on this table, each referencing a single column. The problem from my side is that I cannot understand why the three columns in the primary key would also be configured as non-clustered indexes. My solution would be one of the following:

1. Accept the tables are going to be fragmented and require a daily rebuild (don't like this one!)

2. Partition the table

3. Remove the non-clustered Indexes and let the clustered index for the primary key do the work.
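A hedged middle ground between options 1 and 3 (not from the thread): keep the non-clustered indexes but rebuild them less often, with a lower fill factor so the pages have free space to absorb the daily inserts between maintenance windows.

-- Hedged sketch: leave 20% free space in index pages
-- (ONLINE = ON requires Enterprise Edition)
ALTER INDEX ALL ON dbo.Inventory
REBUILD WITH (FILLFACTOR = 80, ONLINE = ON)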

View 9 Replies View Related

Need Help Reducing Database Backup Times

Feb 5, 2007

Hi all,

Currently we take full database backups nightly for our SQL Server 2000 data warehouse systems. The backups take a very long time (over 20 hours) and we would like to find a good way to reduce these backup times. How can I change our backup plan to reduce the long run times? The data size is 1 TB for our data warehouse database server.
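One common pattern for warehouses this size (a hedged sketch; names and paths assumed): a weekly full backup plus nightly differentials, which contain only the extents changed since the last full and therefore usually run far faster.

-- Hedged sketch: weekly full + nightly differential (SQL 2000 syntax)
-- Sunday:
BACKUP DATABASE WarehouseDB TO DISK = 'E:\Backups\WarehouseDB_full.bak' WITH INIT
-- Monday through Saturday:
BACKUP DATABASE WarehouseDB TO DISK = 'E:\Backups\WarehouseDB_diff.bak'
WITH DIFFERENTIAL, INIT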

Thanks

View 3 Replies View Related

Online Index On Large DB (> 4GB)

May 22, 2007

Is anyone using the ONLINE=ON option on large DBs? We have a DB of 5 GB and we are doing some load testing for SQL 2005. We are modifying the index scripts for the upgrade. We will run a load with the ONLINE=ON option, but just wanted to find out if anyone is already doing it on a similar-scale DB and has seen any issues.
Also, we have auto-update stats off at the DB level. Does setting ONLINE=ON require turning auto-update stats on too? I didn't see anything to that effect in BOL, so I was wondering.
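For reference, the option in question looks like this (index and table names hypothetical); it keeps the table readable and writable during the rebuild, at the cost of tempdb version-store usage:

-- Hedged sketch: online index rebuild (SQL 2005 Enterprise Edition)
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (ONLINE = ON)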

Thanks for any feedback.

Dinakar Nethi
SQL Server MVP
************************
Life is short. Enjoy it.
************************
http://weblogs.sqlteam.com/dinakar/

View 7 Replies View Related

CREATE INDEX On Large Table

Jul 23, 2005

SQL Server 7/2000: We have reasonably large tables (3,000,000 rows) that we need to add some indexes to. In a test, it took over 12 hours to CREATE a new INDEX against this table. One of us suggested that we create a temp table with the new index and copy the data from the old table into the new one, then rename it. I understand this took 15 minutes. Why the heck would it be faster to move the data and build multiple indexes incrementally vs adding an index?

View 11 Replies View Related

SQL 2012 :: Create Clustered Index On A Very Large Table (500 GB)

May 7, 2014

I need to create a clustered index (CI) on a very large SQL Server 2012 database table. This table has approximately 10 billion rows and is 500 GB in size. The job ran for about 20 hours and then failed with the error: "Out of disk space in tempdb". My tempdb size is 1.8TB, but that is still not enough.

Here is my script:

CREATE CLUSTERED INDEX CI_IndexName
ON TableName(Column1,Column2)
WITH (MAXDOP= 4, ONLINE=ON, SORT_IN_TEMPDB = ON, DATA_COMPRESSION=PAGE)
ON sh_WeekDT(Day_DT)
GO
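One commonly suggested variation (not from the thread): with SORT_IN_TEMPDB = OFF the intermediate sort runs go to the destination filegroup instead of tempdb, trading tempdb space for free space in the user database. Note that ONLINE = ON also keeps a version store in tempdb, which adds to the footprint.

-- Hedged variation: sort in the destination filegroup rather than tempdb
CREATE CLUSTERED INDEX CI_IndexName
ON TableName(Column1,Column2)
WITH (MAXDOP = 4, ONLINE = ON, SORT_IN_TEMPDB = OFF, DATA_COMPRESSION = PAGE)
ON sh_WeekDT(Day_DT)
GO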

View 9 Replies View Related

SQL Server 2014 :: Index Dates To Numbers With A Large Data Set?

Jun 16, 2015

I am trying to index dates to numbers with a large data set.

The first column is Index, the next is FactorS, the next is Value, the next is Date, and the last is Lag.

Would it be difficult to write code that would determine the lag values? The lag value is based on the date value.

Index FactorS Value Date       Lag
1     XYZ     2.3   12/31/2014 1
2     XYZ     1.4   12/30/2014 2
3     XYZ     3.3   12/29/2014 3
4     ABC     1.8   12/31/2014 1
5     ABC     2.2   12/30/2014 2
6     CBA     1.7   12/31/2014 1
7     CBA     1.8   12/30/2014 2
8     CBA     1.9   12/29/2014 3
9     CBA     2.1   12/28/2014 4
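Going by the sample, Lag is just the row's rank within its factor when ordered by date descending; a hedged sketch (table name assumed, SQL Server 2005+ syntax):

-- Hedged sketch: derive Lag per factor, newest date = 1
SELECT [Index], FactorS, Value, [Date],
       ROW_NUMBER() OVER (PARTITION BY FactorS ORDER BY [Date] DESC) AS Lag
FROM dbo.FactorData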

View 9 Replies View Related

SQL Server 2012 :: Best Way To Handle Like Percentage On Column Too Large For Index

Sep 18, 2015

We have a table of 100M rows, and up until now we were fine with a non-clustered index on a varchar(4000) because we never went above 900 bytes (yes, it is a bad design). We now have the need to support international character sets, so the column was updated to nvarchar(4000), and we have data past the 900-byte index key limit.

The data is long and seems useless, but it is needed by the business, and they need to be able to search "where bigcolumn like 'test%'". With an index, even with a huge amount of data, it was 'fast'. Now, of course, without an index it is unusable. The wildcard is always at the end of the search. I made a full-text index on the column, and basic queries such as: select * from ourtable where contains(bigcolumn, 'AReallyLongStringofTextHere') work fine unless there is a space in the data. We lose thousands of returned rows because of spaces in the data.

I have tried select * from ourtable where contains(bigcolumn, '"AReallyLongStringofTextHere that includes spaces"') but not all of the data is returned. I get 112 rows with the contains statement. The table-scanning statement "select * from ourtable where bigcolumn like 'AReallyLongStringofTextHere that includes spaces%'" returns 1939 rows. I understand that the full-text index is breaking the long string up since it contains spaces. Is there a way to retain the entire string as one index entry, or is there a way to fix my query to return all of the rows?
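One workaround often suggested for trailing-wildcard searches (a hedged sketch, not from the thread; the names and the 450-character prefix, which is the most nvarchar that fits in a 900-byte key, are assumptions): index a persisted prefix of the wide column and recheck the full value with a residual predicate.

-- Hedged sketch: a persisted, indexable prefix of the wide column
ALTER TABLE ourtable
ADD bigcolumn_prefix AS CAST(LEFT(bigcolumn, 450) AS nvarchar(450)) PERSISTED

CREATE NONCLUSTERED INDEX IX_ourtable_prefix ON ourtable (bigcolumn_prefix)

-- Seek on the prefix, then recheck the full column (search term < 450 chars)
SELECT * FROM ourtable
WHERE bigcolumn_prefix LIKE N'AReallyLongStringofTextHere that includes spaces%'
  AND bigcolumn LIKE N'AReallyLongStringofTextHere that includes spaces%'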

View 9 Replies View Related

Large DB Backup

Jun 19, 2008

I have a large DB, around 250GB, in which 70% of the space is occupied by one single table. This is not a high-transaction DB; it has only a few users.

Backup is taking 2:30 hours!

Is there a way I can reduce this time?

In desperation, I am thinking of doing,

1. detach the DB (off peak hours)
2. copy .mdf and .ldf files (backup)
3. attach the DB

Any suggestion ?
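Before resorting to detach/copy/attach, which takes the DB offline, a hedged alternative (paths are placeholders): stripe the backup across files on different physical drives, which often cuts wall-clock time when the backup is I/O-bound.

-- Hedged sketch: striped backup across two drives
BACKUP DATABASE MyBigDB
TO DISK = 'D:\Backups\MyBigDB_1.bak',
   DISK = 'E:\Backups\MyBigDB_2.bak'
WITH INIT, STATS = 10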



------------------------
I think, therefore I am - Rene Descartes

View 20 Replies View Related

How To... Backup Large DB...

Jul 23, 2007

Hardware configuration:

2-node cluster, active-passive

+

For backup: a direct-attach tape library (Dell PowerVault 132T with 2 tape drives) on 1 node. Tape type is LTO (100Gb, and with compression 200Gb). The last filegroup backup ended with errors:

Job 'BackupFG_index_ToTape1' : Step 1, 'step' : Began Executing 2007-07-02
Job 'EscortBackupFG_index_ToTape1' : Step 1, 'step' : Began Executing 2007-07-16 01:00:01

10 percent backed up. [SQLSTATE 01000]
20 percent backed up. [SQLSTATE 01000]
30 percent backed up. [SQLSTATE 01000]
40 percent backed up. [SQLSTATE 01000]
50 percent backed up. [SQLSTATE 01000]
60 percent backed up. [SQLSTATE 01000]
70 percent backed up. [SQLSTATE 01000]
Msg 4028, Sev 16: End of tape has been reached. Remove tape '\\.\Tape1' and mount next tape for BACKUP DATABASE...FILE=<name> of database 'pakdb'. [SQLSTATE 42000]


I am using native SQL Server backup, with this T-SQL script:

BACKUP DATABASE pakdb
FILE = 'pakdb_index',
FILEGROUP = 'db_index'
TO TAPE = '\\.\Tape1' WITH INIT, FORMAT, STATS, NOUNLOAD
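Since the library has two drives, one hedged option (device names assumed) is to back up to both tapes at once, so the backup set spans devices instead of hitting end-of-tape on a single drive:

-- Hedged sketch: stripe the filegroup backup across both tape drives
BACKUP DATABASE pakdb
FILE = 'pakdb_index',
FILEGROUP = 'db_index'
TO TAPE = '\\.\Tape0', TAPE = '\\.\Tape1'
WITH INIT, FORMAT, STATS, NOUNLOAD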


I have 2 ways of solving this problem:

1) using third-party software (Veritas, for example)

2) breaking the large filegroup into several small filegroups - I don't think that's a good option



Question:

How do you solve these problems? Using Veritas (for example) is a problem too: if the server dies completely, and the Veritas repository with it, how can I restore the information from my tapes? (Those are my thoughts.)

View 4 Replies View Related

Backup In Large Datacenters

Apr 9, 2008

Hi,

I know that some of you here work as DBAs in pretty large companies and I have a few questions about backup in such environments. As some of you might have picked up, I recently got a job as a DBA in a major enterprise hosting company and we are having some backup/restore problems. We are currently using TSM (Tivoli) but it seems to be awfully slow and tedious compared to some of the tools out there, like LiteSpeed. We have in excess of 100 SQL Server installations at the moment... how do you guys do it in your environment?

--
Lumbago

View 15 Replies View Related

Backup Large Database

Jan 26, 2007

Hello all,

We have 7 databases in total in the warehouse and these DBs are big. All the data files together are approximately 300G now and they are all still growing. Some are small DBs and some are large DBs (ranging from 20G to 50G of data files). Can someone please recommend the best backup plan for this scenario? The regular backup through SQL 2005 takes too long.

Thanks for your help.

tocroi

View 1 Replies View Related

Differential Backup Size Very Large

Jan 8, 2001

I recently started using differential backups. They are working, but are growing in size a lot quicker than I expected.

The backups are growing by 2.5GB every day, although the total size of all transaction log backups is under 350MB. I would have imagined that the total of the transaction log backups would be a good indicator of total database changes, and therefore the differential backups would approach this figure.

Please explain!

View 2 Replies View Related

Backup Options Large Database

Oct 9, 2014

I've just inherited the job of looking after our SQL server (2012). The server works fine, but I am a bit concerned about the backup strategy we have in place. The current backup strategy is a SQL script that runs every 2 nights and backs up to an iSCSI box elsewhere on the network. One of the databases on the server is currently at 303GB, which will take about a full working day to restore to another server if this one fails. I am just looking at other ways to reduce the amount of time to restore if this server were to fail. I have heard people talking about SQL clusters and configuring replication in posts by others, which I will look into if it's the way to go, but I guess I am just looking at what others would do if they were in the same position with such a large database.

View 1 Replies View Related

.bak File For Differential Backup Is Becoming Too Large

Apr 9, 2008

When we do a full backup the .bak file is 700MB, but in the meantime our differential backup has grown to a size of 20GB. The backup set expires in 1 day, and if the backup file already exists we "append" to it.

What we basically want is a differential backup for one day only. I realize that in SQL Server 2005 you can add a "Clean up" task, but that deletes files, and because we only have 1 .bak file for the differential backup this is not an option.

If we set it to overwrite the backup file if it already exists, what does this mean? Assuming that we run a differential backup every hour, does this mean that the differential backup file will be overwritten every hour?

How can we make sure that we have a differential backup every hour and keep only the differential backups for the last 24 hours?
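A hedged sketch of one way to get exactly that (paths and database name assumed): write each hourly differential to its own time-stamped file with INIT, then have a clean-up task delete files older than 24 hours.

-- Hedged sketch: hourly differential to a time-stamped file
DECLARE @file nvarchar(260)
SET @file = N'D:\Backups\MyDB_diff_'
          + CONVERT(nvarchar(8), GETDATE(), 112)                          -- yyyymmdd
          + N'_' + REPLACE(CONVERT(nvarchar(5), GETDATE(), 108), ':', '') -- hhmm
          + N'.bak'
BACKUP DATABASE MyDB TO DISK = @file WITH DIFFERENTIAL, INIT
-- A separate clean-up step then removes .bak files more than 24 hours old.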

View 10 Replies View Related

Not Able To Restore SQL Server Database From A Large Backup File

Aug 12, 2006

Hi everybody,
On executing the RESTORE command of SQL Server to restore from a backup of 78.3 MB, the "Server Application Unavailable" error message comes up. The error message in the Application log is as follows: aspnet_wp.exe (PID: 2184) was recycled because memory consumption exceeded 152 MB (60 percent of available RAM).
However, using SQL Server's Query Analyzer I am able to restore the database.
What is the solution to this problem?

View 2 Replies View Related

DB Engine :: Restore Failed With Very Large Backup Devices

Jul 1, 2015

I have a database 9.8TB in size and I back it up to 30 backup devices, each 110GB after backup compression. I tried to restore these files to the standby server via a 100MbE network but it always failed. My colleague told me this never happened before, and I said yes, because I have done this a lot of times before, when the backup devices were significantly smaller. He said the only way is to copy the backup files locally and restore locally. But I am trying another method: I am creating more backup devices - 64, the maximum - and trying the restore via the network again. The SQL Server version is Microsoft SQL Server 2012 (SP1)

- 11.0.3401.0 (X64)   Jan  9 2014 13:22:15   Copyright (c) Microsoft Corporation 
Enterprise Edition: Core-based Licensing (64-bit) on Windows NT 6.1 <X64> (Build 7601:
Service Pack 1) .

---  error message ---
5 percent processed.
10 percent processed.
15 percent processed.

[code]....

RESTORE DATABASE is terminating abnormally.
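For reference, a multi-device restore of this shape looks roughly like the following (paths are placeholders, and the FROM list continues through all 30 devices):

-- Hedged sketch: restore a striped backup set over the network
RESTORE DATABASE BigDB
FROM DISK = '\\primary\backups\BigDB_01.bak',
     DISK = '\\primary\backups\BigDB_02.bak',
     DISK = '\\primary\backups\BigDB_03.bak'   -- ...through BigDB_30.bak
WITH NORECOVERY, STATS = 5   -- finish with a final RESTORE ... WITH RECOVERY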

View 11 Replies View Related

RESTORE DATABASE Timeout In SQL 2000 With Large Backup Files

Sep 11, 2007



Hello,

I am attempting to restore a database from within a VB.NET application, using SMO. I am making the following 3 calls:

RESTORE FileListOnly FROM DISK = 'C:\MyDatabase.dat'

USE Master RESTORE DATABASE MyDatabase FROM DISK = 'C:\MyDatabase.dat' WITH NORECOVERY, MOVE 'MyDatabase' TO 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\MyDatabase.mdf', MOVE 'MyDatabase_log' TO 'C:\Program Files\Microsoft SQL Server\MSSQL\Data\LDF\MyDatabase.ldf', REPLACE

RESTORE DATABASE MyDatabase FROM DISK = 'C:\MyDatabase.dat'


This logic works fine with small *.dat files; however, when using a *.dat file of about 4Gb I get an error on the 3rd RESTORE DATABASE call:



ExecuteNonQuery failed for Database 'master'.

An exception occurred while executing a Transact-SQL statement or batch.

Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Operator aborted backup or restore. See the error messages returned to the console for more details.




The same program/logic also works fine when I use MS SQL 2005, and it runs fine from the MS SQL 2005 Query Analyzer for both 2005 and 2000 databases. There seems to be a problem only with MS SQL 2000 from within VB.NET. Does anybody have any idea? I'd appreciate any response. Thanks

Eugene

View 6 Replies View Related

Large Tables In SQL 7.0

Jul 12, 2001

We currently have a data warehouse running on SQL 7.0, SP2. One of our primary fact tables now has well over 155 million rows. The table is not very wide, as it only contains 17 columns, most of which are defined as integers. The entire database is only 20 GB.

The issue is that the loads from the staging table to this fact table have significantly deteriorated over the last month or so, dropping from over 400 transactions per second to around 85. We drop all the indexes on the fact table before we load the data into it.

Are there issues with a manageable table size in SQL 7.0 that we need to be concerned about? And should we consider partitioning the table into several smaller tables and joining them with a "union all" view (sketched below)?
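For reference, the SQL 7.0-era partitioned-view pattern being considered looks roughly like this (member table names are hypothetical, one per month):

-- Hedged sketch: monthly member tables unioned behind one view
CREATE VIEW dbo.FactSales
AS
SELECT * FROM dbo.FactSales_200104
UNION ALL
SELECT * FROM dbo.FactSales_200105
UNION ALL
SELECT * FROM dbo.FactSales_200106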

I really need to get this performance issue resolved, as our IT support vendor is pushing us to port the data warehouse to UDB because they tell us that SQL server is not scalable enough to handle this volume of data.

Thanks for any help you can provide.

George M. Parker

View 6 Replies View Related

Large Tables

Aug 10, 2000

Hi,

How can I partition large tables so that the inserts and updates I am doing on them take less time?

I want to know how I can partition large tables, and if I do that, how the performance is going to be increased.

Thanks.

View 1 Replies View Related

Large Tables

Mar 13, 2001

How can I find the largest 5 or 10 tables in a database?
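On SQL 7.0/2000, one hedged way is to total the reserved pages per table in sysindexes:

-- Hedged sketch: top 10 tables by reserved space
SELECT TOP 10
       OBJECT_NAME(id) AS TableName,
       SUM(reserved) * 8 AS ReservedKB   -- pages are 8 KB
FROM sysindexes
WHERE indid IN (0, 1, 255)   -- heap/clustered data plus text/image
GROUP BY id
ORDER BY ReservedKB DESC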

Thanks in advance
Chan

View 2 Replies View Related

Searching In Large Tables........

Mar 7, 2007

Hi there, I am having a problem related to SQL Server. I have a table called ZipCodes that has around 80,000 rows, and the size of the table is around 100 MB. The table is now on the web server. My problem is that when I fire a query that needs to go through the whole table, the estimated time to execute the query comes to 13 seconds, and the cursor threshold is set to 7 seconds (and I can't change that), so SQL Server cancels the query. Now I need some methodology/technique through which I can search large tables with minimum calculations in minimum time. (Any ideas?)

View 3 Replies View Related

COMPRESSING LARGE TABLES

Mar 19, 2001

Is it possible to compress the large tables in the database,

like the COMPRESS and ARCHIVE options we use to reduce the size of files
stored on an operating system?

I know there is a difference between a file stored on disk and a table created in the database, but currently I am facing space problems wherein I have to manage my database within the space available, so please advise me if the option is available in SQL Server 6.5 or 7.
I will be happy if I get the solution immediately, as I am currently facing this problem and waiting for your reply.
Thank you
Amol

View 1 Replies View Related

Restore Two Tables From Large DB Into A New DB

Feb 12, 2008

I am fairly new to SQL, so please forgive me if my question is a bit elementary. I need to pull two individual tables out of a massive DB into a new DB for testing.
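A hedged sketch of the simplest route (database and table names are placeholders): create the test database and copy the two tables across with SELECT INTO. Note this copies the data but not indexes or constraints.

-- Hedged sketch: pull two tables into a fresh test database
CREATE DATABASE TestDB
GO
SELECT * INTO TestDB.dbo.TableA FROM MassiveDB.dbo.TableA
SELECT * INTO TestDB.dbo.TableB FROM MassiveDB.dbo.TableB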

Thanks for the help.

View 2 Replies View Related

Manipulating Large Tables

Feb 15, 2008

I'm in the midst of a long file conversion job. Today I found one of the tables (converted from csv) to be 6.7 million records. My SQL script, which I use to convert the weird original date format into something the rest of the planet uses, times out due to the size.

Does anyone know of a file utility to automagically split SQL Server 2005 tables for later re-combining once my scripts have successfully completed their task on the smaller tables?
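Rather than splitting the table, a hedged alternative (table and column names and the date style are assumptions; SQL Server 2005 syntax): update in fixed-size batches so no single statement runs long enough to time out.

-- Hedged sketch: batched update of the date column, 50,000 rows at a time
WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) dbo.ImportedData
    SET FixedDate = CONVERT(datetime, RawDate, 103)   -- 103 = dd/mm/yyyy
    WHERE FixedDate IS NULL
    IF @@ROWCOUNT = 0 BREAK
END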

View 7 Replies View Related

Partioning Large Tables

Nov 14, 2007

I am making a warehouse management system. The system will contain a lot of data, but only a small portion of the data will be accessed frequently. Most of the data will be accessed only seldom, but the customer wants to keep all historical data (just in case they should need it sometime). I have figured I need to partition the tables somehow to keep what is fresh in one place and historical data in another place. What is the best way to do this?

I am thinking about making historical tables. For example, I can have a table named PickList and another table named PickListHistorical. When a picklist is processed/complete I can move it over to the PickListHistorical table, but when the users need to search for a specific picklist I have to look in both tables. I can of course create a view for this to make it transparent.

SQL Server 2005 introduced some automatic partitioning. Would it be better to use this than to create my own historical tables? If so, can you please tell me how to do it?
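For the last question, a hedged sketch of SQL Server 2005 native partitioning (the boundary dates, filegroup, and column names are assumptions): the engine routes rows to date-based partitions, so fresh and historical rows live apart without a separate historical table or a view over two tables.

-- Hedged sketch: partition PickList by completion date (SQL Server 2005)
CREATE PARTITION FUNCTION pfPickListDate (datetime)
AS RANGE RIGHT FOR VALUES ('2006-01-01', '2007-01-01')
GO
CREATE PARTITION SCHEME psPickListDate
AS PARTITION pfPickListDate ALL TO ([PRIMARY])
GO
CREATE TABLE dbo.PickList (
    PickListID    int      NOT NULL,
    CompletedDate datetime NOT NULL,
    Status        tinyint  NOT NULL
) ON psPickListDate (CompletedDate)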

Thank you!

View 11 Replies View Related

Comparing Large Tables

Oct 19, 2007



I've successfully created SSIS packages where I compare two tables in different databases on different servers. This works well enough to compare hundreds of thousands of records quickly. However, the process becomes a huge performance problem when trying to compare differences between tables that each contain tens of millions of records.


One database is on a SQL 2005 box and the other DB is SQL 7.0 so the lookup component fails for this type of SQL Server. I've been implementing merge joins and conditional components to do my standard table comparisons.

Is there another way to implement this process or maybe partition it somehow to take pieces of the table at a time and compare them? I'm open to ideas.

View 11 Replies View Related

What Is Most Efficient Way To Use DataAdapters With Large SQL Tables

Jan 9, 2008

I'm using DataAdapters with my SQL database with the intention of having all the SELECT, UPDATE, INSERT, and DELETE commands automatically generated. One table is huge, so I'm wondering: is it more efficient to use "SELECT TOP(1) * FROM hugetable" instead of "SELECT * FROM hugetable" in order to facilitate the generation of commands? I hope this isn't too confusing. Thanks, Geoff

View 2 Replies View Related

4 Seperate Tables Or One Large Table?

May 10, 2008

I have 4 tables with the following record counts:
1) 6755
2) 2021
3) 2021
4) 355

They all have the same columns. However, they need to be separate, or at least when I query them. I'll be accessing this database via the web. I was at first afraid that a large database would cause major slowdown when accessing the db, so I broke it up into 4 tables. If I combined all 4 tables into one large table and just had a column that differentiated the 4, how significant would the change in speed be when accessing the table? It's not a big deal to keep them separate, it's just that when I have to add or remove a column from one table I have to remove it from all the tables. Furthermore, I'm using a module from DevExpress - don't know if anyone has heard of it - but when you use a gridview, it loads up the entire table even though you're paging (which I think is retarded), so for that reason I was afraid it would slow up my access to the db. Any thoughts?

View 2 Replies View Related






