Large Number Of Tables And Performance

Jan 25, 2008

Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, as the application will start on a single database server. I tried to find some resources on this on the internet but couldn't find any.

I would really appreciate any advice, and if you have any good links, that would be great...
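A single shared table keyed by tenant is the usual alternative to thousands of per-customer tables, and it keeps the object and index count manageable; a minimal sketch for comparison (table and column names are hypothetical):

CREATE TABLE dbo.TenantData (
    TenantID int NOT NULL,
    RowID int NOT NULL,
    Payload nvarchar(255) NULL,
    CONSTRAINT PK_TenantData PRIMARY KEY CLUSTERED (TenantID, RowID)
)

-- Every query filters on TenantID first, so a seek touches only that tenant's
-- slice of the clustered index even as the total row count grows:
SELECT RowID, Payload FROM dbo.TenantData WHERE TenantID = 42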

Waleed Eissa
http://www.waleedeissa.com

View 10 Replies

SQL Server 2005 Full Text Performance With Large Number Of Records

Dec 12, 2007

Hi
We are using the SQL Server 2005 Full-Text Service. The data is not huge, but each record is small and there are a large number of records: 35 million rows now, with 11 GB of data and about 1.6 GB of full-text catalog on the table. This is expected to grow to at least 10 times its current size. The issue is that FTS takes a long time to return results when the number of hits (rows) returned is large; for some searches it takes a very long time, while full-text queries for less common words against the same data and catalog return promptly. The nature of the problem doesn't allow us to keep only the top results; we need all the results. So it's not about the size of the data but the number of results returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.
 
I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches against smaller catalogs. Is there any other way this issue can be solved?
 
If splitting is the only way, can you give me an idea of a statistical/standard limit on the number of search results or catalog size at which FTS still gives good results?
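If rank-ordered batches ever become acceptable, the top_n_by_rank argument of CONTAINSTABLE caps how many rows the full-text engine materializes; a minimal sketch against hypothetical table and column names:

SELECT d.RecordID, k.[RANK]
FROM CONTAINSTABLE(dbo.Documents, BodyText, '"common term"', 1000) AS k
INNER JOIN dbo.Documents AS d ON d.RecordID = k.[KEY]
ORDER BY k.[RANK] DESC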
 
Thanks in advance

View 1 Replies View Related

Performance Issues With Large Tables

Dec 5, 2007

Hi,

I have a table with over 61 million records and a clustered index on an identity column (the primary key). Simple count queries take minutes to execute on this table (e.g. select count(1) from table1). I checked the statistics on the primary key, and the histogram showed the 39-millionth record as the RANGE_HI_KEY. I updated the statistics on this column and re-queried, but it still took at least 5 minutes to return the count of records in the table. There were no users on the table when I queried, and inserts into this table work fine. I have other tables in my database with 41 million records that have no such issues. Can anyone point me to the problem areas in such scenarios?
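When only an approximate figure is needed, the row counts SQL Server keeps in its metadata can be read in milliseconds instead of scanning 61 million rows; a sketch for SQL 2005 (the number can drift slightly from the true count):

SELECT SUM(p.rows) AS approximate_rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.table1')  -- table name taken from the post
  AND p.index_id IN (0, 1)                   -- 0 = heap, 1 = clustered index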


Thanks,
Harish

View 6 Replies View Related

Inconsistent Performance Results With Large Partitioned Tables.

Dec 5, 2007

I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.

The first query completed in around 7s and has 47,000 logical reads.


select mo.monitor_id,
       mo.site_id,
       mo.testtime,
       sum(mo.NumBytes),
       sum(mo.DNSTime),
       sum(mo.ConnectTime),
       sum(mo.FirstByteTime),
       sum(mo.ContentTime),
       sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
  and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
  and mo.returncode = 200
  and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
  and mr.escalationlevel = 0
  and mr.monitor_id = mo.monitor_id
  and mr.testtime = mo.testtime
  and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime


The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the set of monitor_id values in the where clause.


select mo.monitor_id,
       mo.site_id,
       mo.testtime,
       sum(mo.NumBytes),
       sum(mo.DNSTime),
       sum(mo.ConnectTime),
       sum(mo.FirstByteTime),
       sum(mo.ContentTime),
       sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
  and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
  and mo.returncode = 200
  and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
  and mr.escalationlevel = 0
  and mr.monitor_id = mo.monitor_id
  and mr.testtime = mo.testtime
  and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime



The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plans, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table from the lowest monitor_id, testtime & site_id through the highest. The first query performs a clustered index seek where monitor_id, testtime and site_id equal the corresponding values from the monitor_raw table.


My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?

One possible workaround would be to execute five individual queries, one for each monitor_id, and then union the results together, but this would require significant code changes to my stored procs.
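As a lighter-weight experiment than rewriting the procs, a join hint appended to the slow statement can sometimes reproduce the seek-per-row shape of the fast plan; a trimmed sketch (hints can just as easily backfire, so compare the actual plans before committing to one):

select mo.monitor_id, mo.site_id, mo.testtime, sum(mo.NumBytes)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
  and mr.monitor_id = mo.monitor_id
  and mr.testtime = mo.testtime
  and mr.site_id = mo.site_id
group by mo.monitor_id, mo.site_id, mo.testtime
option (loop join)  -- nudges the optimizer toward nested-loops seeks into monitor_object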

Thanks,

Tim

View 5 Replies View Related

How To Manage IDENTITY In Tables With Large Number Of Rows?

May 23, 2002

Table structure: col1 IDENTITY (seed=1, increment=1) + a few other columns (col2...col7) + one text column (col8).
Around 50,000,000 rows per day are inserted into table T1, and at the end of the day 40,000,000 rows are deleted. I have to keep the records for 12 months and then archive them. The database is 24/7 web serving and no downtime is allowed. The IDENTITY column will go out of range (overflow) in less than two years unless the identity seed is reset to the start value (seed=1, increment=1).
At the end of the 12th month the data is archived to another table and only the last month is kept in T1, so T1 enters the new year holding just the final month of the previous year. A few other tables refer to this table using their own fields that hold values from T1's IDENTITY column (referential integrity is not enforced). The identity column in T1 is needed as a unique id for some search actions. Performance is an issue, therefore the bigint data type is used for this identity column rather than decimal.

Another problem I have is how to update one column of the table (1 million rows to be updated out of 2 million) with minimum impact on the users who are querying this table heavily. Needless to say, it is a 24/7 web app with no downtime.
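For the 1-million-row update, one common SQL 2000 pattern is to cap each statement with SET ROWCOUNT and loop, so each batch holds locks and log space only briefly; a minimal sketch (column values are hypothetical, and the WHERE clause must exclude rows already updated so the loop terminates):

SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    UPDATE T1 SET col2 = 'new' WHERE col2 = 'old'
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0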

Thank you in advance.


Goran

View 1 Replies View Related

SQL 2012 :: Generating CREATE TABLE Scripts For Large Number Of Tables

Feb 11, 2014

Other than right-clicking on each individual table in SSMS and generating a CREATE script, is there a simple way to generate CREATE TABLE scripts for tables within a given database?

Background: I have a bunch of tables in one database, and I would like to add tables to a second database that have the same names and basic structures of some of the tables from the first database.

I do not need to transfer any data from the tables; this is a separate project that will use a similar data structure. I just want to generate the CREATE TABLE scripts for 30-ish tables within the first database, and then I'll tweak the scripts as appropriate and run them against the new database.
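Besides right-clicking the database in SSMS (Tasks > Generate Scripts handles many tables in one pass), a rough skeleton can also be built from the catalog views; a hedged sketch that emits only column names, types and nullability - keys, identity, defaults and decimal precision are ignored:

SELECT 'CREATE TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' ('
     + STUFF((SELECT ', ' + QUOTENAME(c.name) + ' ' + ty.name
                   + CASE WHEN ty.name IN ('char','nchar','varchar','nvarchar','varbinary')
                          THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                                          ELSE CAST(c.max_length / CASE WHEN ty.name IN ('nchar','nvarchar') THEN 2 ELSE 1 END AS varchar(10)) END + ')'
                          ELSE '' END
                   + CASE WHEN c.is_nullable = 1 THEN ' NULL' ELSE ' NOT NULL' END
              FROM sys.columns c
              JOIN sys.types ty ON ty.user_type_id = c.user_type_id
              WHERE c.object_id = t.object_id
              ORDER BY c.column_id
              FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '')
     + ')'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id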

[URL] ....

View 7 Replies View Related

Number Of SQL Queries Too Large?

Jul 28, 2004

Hi All,

I'm currently in the middle of building quite a large CMS using ASP.NET and MSSQL2K, and I have begun to question whether the number of queries used to build one page is too many.

For one page (View Forum) I am getting all of the templates and checking access, then pulling a list of threads, getting the first and last posts, then user info for the first and last posts... anyway, to view 10 threads on the page, the number of queries comes to about 54, and the page takes 0.064 seconds to load.

My question is: is this too many queries to be running for a single page load? All queries use stored procedures.
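Round trips often cost more than the queries themselves, so one common consolidation is a single procedure returning several result sets that the page reads in order; a minimal sketch (procedure and table names are hypothetical):

CREATE PROCEDURE dbo.GetForumPage @ForumID int
AS
BEGIN
    SET NOCOUNT ON
    -- result set 1: the thread list for the page
    SELECT ThreadID, Title FROM dbo.Threads WHERE ForumID = @ForumID
    -- result set 2: post info for those threads
    SELECT p.ThreadID, p.PostID, p.UserID
    FROM dbo.Posts p JOIN dbo.Threads t ON t.ThreadID = p.ThreadID
    WHERE t.ForumID = @ForumID
END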

Thanks Guys.

View 3 Replies View Related

Large Number Of Connections

Jul 19, 2004

hi,
Does a large number of connections to a SQL Server result in sluggish queries? By a large number of connections I mean in the range of 2000 to 2500 connections to various databases on the same server.
This happens on the production database. If I download a copy of the database and run the specific queries on my local box, they take barely 1 second to execute, whereas on the production box they take about 45 to 60 seconds. I'm working on the indexes, but I was not sure whether the number of connections to a server affects query performance, or whether the indexes just need defragmenting...

View 2 Replies View Related

Very Large Number Of Rows

Jul 23, 2005

We are busy designing a generic analytical system at work that will hold multiple analytic types over time. This system is being developed in SQL 2000.

Example of table:

IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)

ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type
The [PK] tag indicates the composite primary key.

Our scenario is the following:
* For this time series data, we expect around 250 days per year (working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items

Looking at the combined calculation, this comes to roughly something like 25 * 20,000 * 50 * 250, or around 5 billion rows. We will be inserting around 50 * 20,000, or around 1 million, rows each day (the inserts will take place in the middle of the night, outside the main query time) - this could be done through something like BCP or BULK INSERT.

Our real problem is that we have not previously worked with such large tables and are nervous that our system is going to grind to a halt. Our biggest tables are around 20 million rows at the moment. Scanning through Google and Microsoft's own site we have found a partitioning method that is available: http://www.microsoft.com/resources/...art5/c1861.mspx

Having experimented with the above system, it seems rather quirky, and looking at the available literature it seems that this is no more effective than a clustered index as far as queries go.

It needs to be optimized for queries like: given the ItemID and the AnalyticType, search for a specific date or a specific range of dates.

If anyone has any experience or helpful suggestions I would really appreciate it.

Thanks
A
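For exactly that access pattern, a common starting point is a composite clustered primary key whose leading columns match the equality predicates and whose last column carries the range; a sketch of the table as described (the table name is hypothetical):

CREATE TABLE dbo.Analytics (
    ItemId       int            NOT NULL,
    AnalyticType int            NOT NULL,
    AnalyticDate datetime       NOT NULL,
    Value        numeric(28,15) NOT NULL,
    CONSTRAINT PK_Analytics PRIMARY KEY CLUSTERED (ItemId, AnalyticType, AnalyticDate)
)

-- A date-range probe then reads one contiguous slice of the clustered index:
SELECT AnalyticDate, Value
FROM dbo.Analytics
WHERE ItemId = 12345 AND AnalyticType = 7
  AND AnalyticDate BETWEEN '20040101' AND '20040331'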

View 4 Replies View Related

Large Number Of Rows.

Apr 3, 2007

Hi all,

A select query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?

Does the number of records returned affect the performance in spite of the indexing?

Thanks,

DBLearner

View 3 Replies View Related

Improving Large Table Performance

Aug 15, 2007

We have a table that is 800 GB. We are planning to rebuild the clustered index on this table onto a different filegroup. The new filegroup and the files associated with it will sit on a SAN with a 1.5 TB allocation. Does anyone have any suggestions regarding how many files to associate with the filegroup for optimal performance? Apparently we could have 3 LUNs (500 GB each), so would one file on each LUN provide additional performance as opposed to one file on one LUN?
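For the move itself, CREATE INDEX with DROP_EXISTING can rebuild the clustered index directly onto the new filegroup in one operation; a hedged sketch with hypothetical index, table and filegroup names (if the index backs the primary key, the key definition must stay identical):

CREATE UNIQUE CLUSTERED INDEX PK_BigTable
ON dbo.BigTable (ID)
WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
ON FG_SAN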

View 1 Replies View Related

Performance Of NVARCHAR(n) For Large Strings

Jul 25, 2007

What are the performance or storage implications of using a large value for NVARCHAR? For example, if I specify a NVARCHAR(4000) column just to cope with the rare case there are strings that long, but have a table full of strings 255 characters long, is the performance identical to specifying an NVARCHAR(255) column? Is there a reason then NOT to specify NVARCHAR(4000) on everything? i.e. does the query optimizer use it?



I know that row length and how many rows fit in a page (8 KB) affect performance, but my understanding is that NVARCHAR only stores the characters needed, so it wouldn't be affected unless there was actually a longer string. I wanted to know if there are any other considerations to using a large value here.
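The declared maximum doesn't change what is stored; DATALENGTH shows identical byte counts for identical content under both declarations:

DECLARE @short nvarchar(255), @wide nvarchar(4000)
SET @short = N'same text'
SET @wide  = N'same text'
SELECT DATALENGTH(@short) AS short_bytes, DATALENGTH(@wide) AS wide_bytes  -- both return 18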



Also, how does NTEXT compare to using NVARCHAR? I noticed the documentation said it used a new page after 256 characters, which sounds like it's different from how a 4000 byte NVARCHAR would be stored?

View 1 Replies View Related

Inserting Large Number Of Rows

Nov 24, 1999

Hi,

I need to insert a very large number of rows into a table (in SQL Server 7.0) using ADO.
Could you please tell me if there is a way to do a FAST insert, something similar to BCP, or any other way of inserting a large number of rows efficiently?
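If the rows can be staged to a file the server can read, BULK INSERT (available since SQL Server 7.0) avoids the per-row round trips of ADO; a sketch with a hypothetical path and table:

BULK INSERT dbo.TargetTable
FROM 'C:\loads\rows.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n', TABLOCK)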

Thanks

View 2 Replies View Related

Large Number Of Databases On 2005

Feb 23, 2007

For anyone with a larger number of databases (500+): how many do you have in a single instance? If you are using multiple instances on a single server, how many DBs per instance? This is why I'm asking:

We are experiencing 701 "out of system memory" errors and temporary (usually) system freezes when the error occurs. We have 32-bit 2005, version 9.00.2153.00, 32 GB of memory, AWE enabled, on a quad dual-core 3 GHz hyperthreaded server. Neither the buffer pool nor VAS shows any pressure when the "out of system memory" error occurs. Since this error usually indicates a VAS problem, we tried increasing VAS to 1 GB with the -g flag. It made no difference. PSS has been working on the case for 3 weeks. They don't seem to be finding any evidence of memory pressure either. When I last spoke to the escalation engineer yesterday, it seemed they are going to recommend reducing the number of databases on the server. I asked for clarification as to whether we are hitting a 32-bit barrier, an instance limitation, or both; I am awaiting the answer. How many databases do you have on your server? We had between 1700 and 1900 (the number varies) at times when the error occurred. We are now at 1500 and have not had the error in the 2 days since reducing the number of databases...

View 4 Replies View Related

Inserting Large Number Of Records

Jun 20, 2006

Hi!

The DB I am working with has about 10 tables and some of the tables have 200,000 to 500,000 records. All tables have a clustered index on the primary key.

I think INSERT performance could be better - I add thousands of records at a time from many connections.

Is there a way to defer the update of indexes, so that I can update the tables and then let the indexes regenerate?
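On SQL Server 2005 and later, nonclustered indexes can be disabled for the duration of a big load and rebuilt once at the end, which is close to the deferral described; a sketch with hypothetical names (the clustered index must stay enabled):

ALTER INDEX IX_Orders_CustomerID ON dbo.Orders DISABLE
-- ... run the bulk inserts ...
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD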

thanks,

Jas.

View 4 Replies View Related

Admin Performance Issues, Large Amount Of DBs

Mar 10, 2004

I have a large amount of DBs (150) on my SQL Server, and when using Enterprise Manager to do administrative tasks like backup, restore, etc. it takes 1.5 hours to open the Databases folder. The server is 2x P4, 3 GB RAM. Any ideas on how to manage this number of DBs on the same server and instance of SQL?
Cheers!

View 2 Replies View Related

How Do You Improve SQL Performance Over Large Amount Of Data?

Jul 23, 2005

Hi, I am using SQL 2000 and have a table that contains more than 2 million rows of data (and growing). Right now I have encountered 2 problems:

1) Sometimes, when I try to query against this table, I get a SQL command timeout. So I did more testing with Query Analyzer and found that the same queries would not always take about the same time to execute. Could anyone please tell me what affects the speed of a query, and which factor matters most? (I can think of the number of open connections, the server's CPU/memory...)

2) I am not sure whether 2 million rows is considered a lot, but it is starting to take 5-10 seconds to finish some simple queries. I am wondering what the best practices are for handling this amount of data while keeping decent performance?

Thank you,
Charlie Chang
[Charlies224@hotmail.com]

View 5 Replies View Related

Performance Issue On A Single Large Insert

Dec 19, 2006



Hi,

I'm testing mirroring.

1) I have a dedicated NIC for mirroring - 100 Mb. There is no issue with the network (a file of 25 MB goes over in 2.5 seconds).
2) I'm issuing this simple command: Select * into dbo.Table2 from dbo.Table1
3) The size of the table is 25 MB.

In async mode it takes 3-4 seconds (as if it runs only locally); in sync mode it takes 25-29 seconds!

WHY? IS THIS NORMAL? Is there any configuration I can change?

View 5 Replies View Related

Question On Displaying Large Number Of Record?

Oct 20, 2006

I have a datagrid that needs to display a log table which has more than a million records. Since that is a huge number, it is not possible to fill a dataset with "select * from log_table" and bind it to the datagrid. Is there any way to display the first 100 rows on the first page and show the next 100 rows if the user clicks on page 2? Thank you very much in advance! Justin
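On SQL Server 2005, ROW_NUMBER() makes server-side paging straightforward, so each page fetches only its own 100 rows; a sketch (column names are hypothetical, log_table is from the post):

SELECT LogID, LogDate, Message
FROM (SELECT LogID, LogDate, Message,
             ROW_NUMBER() OVER (ORDER BY LogID) AS rn
      FROM log_table) AS numbered
WHERE rn BETWEEN 101 AND 200  -- page 2 at 100 rows per page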

View 4 Replies View Related

Error While Handling Large Number Of Records

Jan 29, 2008

Hi. I have created a stored procedure which handles global updates. I am using a cursor to fetch one row at a time for updating (this is required to implement the business logic). Now when I execute this stored procedure it gives me a deadlock error; I don't know why I am getting this error (approx. number of rows: 1.5 lakh, i.e. 150,000). If I remove the unnecessary records from the table (approx. 50,000), it works fine. Is there any way to handle this? I am calling this stored procedure from my Windows service. Please give me a good solution if possible.

View 3 Replies View Related

Large Number Of Processes In Activity Monitor

Jun 4, 2007

All;

We have a Windows app and a Web app that share business objects pointing to a single database. When a Windows user logs in, an average of 50 processes are created in the first few seconds and never go away. The details window is blank and they all remain sleeping from that point on.

I have stepped through the code to see if there is anything odd going on but most of the processes are created when validating the number of parameters the stored procedure has or the length of the stored procedure name. This translates to 1000-1500 processes on average.

Is this normal? Will it hurt performance? Is there a way to remove them?

View 1 Replies View Related

Deleting Large Number Of Rows Times Out

Sep 14, 2007

I have a table with about 10 million rows. It has been logging data incorrectly and I want to start again.

DELETE FROM tblLog

I just get a message about the server timing out. What can I do?
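If none of the rows are worth keeping, TRUNCATE TABLE deallocates pages instead of logging every row and finishes in seconds; otherwise, deleting in capped batches keeps each transaction short enough not to time out (DELETE TOP is SQL 2005+ syntax, and the filter column is hypothetical):

TRUNCATE TABLE tblLog

-- or, to keep some rows, delete the rest in batches:
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM tblLog WHERE LoggedAt < '20070901'
    IF @@ROWCOUNT = 0 BREAK
END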

Thanks

View 5 Replies View Related

NTEXT Vs NVARCHAR For Large Number Of Columns

Jul 23, 2005

Hi all,

I need to store data in about 104 columns. This is problematic with MS SQL, since it doesn't support rows over 8 KB in total size. Most of the columns are of type NVARCHAR(255), which means we can't have more than 8092/(255*2) = 15 columns of this type. With a row length of more than 8 KB, SQL gives a warning that any rows over that amount will be truncated.

So far I'm seeing two possible solutions to this problem:

1. Split the data into multiple tables with the same ID column across all tables, and then join them in SELECT statements.

2. Use NTEXT instead of NVARCHAR. NTEXT's length is 16 bytes because it contains a pointer to the actual value stored somewhere else. However, NTEXT doesn't support regular indexing, only a Full-Text Index catalog. In this case I'll need to use "WHERE CONTAINS(columnName, 'sometext')" to perform searches, which is bearable.

I'm inclined toward #2. However, I haven't used full-text indices before and don't know their limitations. Will I run into problems with NTEXT? Is there a better solution?

Thanks.
-Oleg.

View 7 Replies View Related

Large Number Of INSERT Statements - Not All Are Executed

Feb 9, 2007

Hello! I have a developer who is playing around with some SQL statements using VB.NET. He has a test table in a SQL 2000 database, and about 2000 generated INSERT statements.

When the 2000 INSERT statements are run in SQL Query Analyzer, all 2000 rows are added to the table. When he tries to send the 2000 statements to SQL Server through his app, a random number of statements do not get executed. But SQL Profiler shows that each of the 2000 statements is getting sent to the server.

I suggested that he add a "GO" statement at the end of the INSERT block, but the statement fails when that is sent to the server.

I know that this is not the ideal way to insert bulk data into the system, but now we are all just curious as to why SQL Server doesn't execute each individual INSERT. Any thoughts?

View 3 Replies View Related

SQL 2005 Full-Text Performance On Large Results

May 10, 2006

Hello everybody,
I've got a little problem which I've been trying to solve for 1-2 years, and I hoped it would go away with SQL 2005 - but that wasn't the case :(.

Situation:
I've just bought a new Server containing:
SQL 2005
64 Bit Enviroment
4 GB RAM
2x AMD Opteron 2 GHz Prozeccors (Dual Core)
2x RAID Controllers (RAID 1) containing
1.1 System
1.2 Data
2.1 Transaction Logs

I've created a full-text table containing all the search terms I need to search.
Table build:
RecID - int - Primary Key
SrcID - varchar(30)
ArticleID - int - referring to an original table
SearchField - varchar(150) - Containing the search terms
timestamp - timestamp field

Fulltext index:
RecID as Primary Key
SearchField as indexed field - Wordbreaker: Neutral (containing several languages), Accent sensitivity off

Now I've got different tables imported in here, resulting in a table size of ~13 million rows.

There is no problem with the performance on this catalog if I search for a term which isn't contained in more than 200-300 recordsets - but if I search for a term which could occur in 200,000 or more, it gets extremely slow.

On the slow query the first records come in after no time, but up to 60 seconds pass before the query finishes. The problem is that I have to sort by a ranking value which is stored externally - so I need all the results in order to sort them...

current (debugging) query:
SELECT ArticleID FROM fullTextTable AS ft INNER JOIN CONTAINSTABLE(FullTextCatalog,SearchField,'"term*"') AS ftRes ON ftRes.[KEY]=ft.idEntry

Now if I check the performance monitor: as soon as I run the query, the 'Avg. Disk Read Queue Length' counter on disk D (SQL data files) jumps to the top until the query has finished. There is almost no read/write activity on C: where the full-text catalog is stored...

If I rerun the query after it has finished once successfully, it completes in under 1-2 seconds; it would be nice to get that result the first time :).

Does anybody know a workaround to this problem?

View 9 Replies View Related

Large Table/slow Query/ Can Performance Be Improved?

Jul 20, 2005

I am having performance issues with a SQL query in Access. My query accesses and joins several tables (one very large one). The tables are linked via ODBC. The client submits the query to the server, separated by several states. It appears the query is retrieving gigs of data from the table and processing the joins on the client. Is there a way to perform more of the work on the server, thereby minimizing the amount of extraneous table data moving across the network and improving performance (woefully slow, about 6 hours)?

View 3 Replies View Related

Creating Indexes On Large Table To Increase Performance

Mar 5, 2008

Dear all,
I'm using SQL Server 2005 Standard Edition.
I have the following stored procedure that is executed against two tables, RecordedCalls and RecordedCallsTags.
The table RecordedCalls has more than 10,000,000 records and RecordedCallsTags about 7,500,000.
The lines marked in baby blue are dynamic (a dynamic where statement) and vary every time this stored procedure is executed; the condition statement may contain 7 columns, or 10, or 2... etc.
Now I want to create non-clustered indexes on the columns used in the where statement, but the DTA suggests different indexing whenever the where statement changes.
So what is the right way to create indexes: one index on all the columns at once, or separate indexes on each column? Sometimes the DTA suggests 5 columns together if I'm using 5 conditions. I can't accumulate all the possible indexes, since the where statement always varies from situation to situation. Below is the SP:


CREATE TABLE #tempLookups (ID int identity(0,1), Code NVARCHAR(100), NameE NVARCHAR(500), NameA NVARCHAR(500))
CREATE TABLE #tempTable (ID int identity(0,1), TypesCount INT, CallsType NVARCHAR(50))

INSERT INTO #tempLookups SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC

INSERT INTO #tempTable SELECT COUNT(DISTINCT(RecordedCalls.ID)) As TypesCount, RecordedCalls.CallType as CallsType
FROM RecordedCalls LEFT OUTER JOIN RecordedCallsTags ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= '9369907'
AND (RecordedCalls.CallDate BETWEEN cast('01 Jan 1910 00:00:00:000' as datetime) AND cast('01 Jan 2210 00:00:00:000' as datetime))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.ChannelID NOT IN ('62061','62062','62063','62064','64110','64111','64112','64113','64114','69860','69861','69862','69863','69866','69867','69868')
AND RecordedCalls.ServerID NOT IN ('2')
AND RecordedCalls.AgentID NOT IN ('1000010000')
AND (RecordedCallsTags.TagID is null OR RecordedCallsTags.TagID NOT IN ('100','200'))
AND RecordedCalls.IsDeleted = 'false'
GROUP BY RecordedCalls.CallType

SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount,
       CASE ('English') WHEN 'Arabic' THEN #tempLookups.NameA ELSE #tempLookups.NameE END AS CallsType
FROM #tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code

DROP TABLE #tempLookups
DROP TABLE #tempTable


Thanks all,
Tayseer

Any suggestions how to create efficient indexes??!!
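One hedged starting point is to index the predicates that appear in every execution (here the CallDate range plus the grouping column) and let the occasional extra conditions be applied as residual filters, rather than chasing a separate DTA suggestion for every variation:

CREATE NONCLUSTERED INDEX IX_RecordedCalls_CallDate_CallType
ON RecordedCalls (CallDate, CallType)
INCLUDE (Duration, ChannelID, ServerID, AgentID, IsDeleted)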

View 2 Replies View Related

Does A Large Amount 'image' Data Affect Overall SQL Performance?

Jul 20, 2007

Recently we added a new table to our SQL 2000 database specifically to store scanned-in images of documents. This new table contains a PK field, a couple of datetime fields, a couple of char(1) fields and one 'image' field.

Before adding this table, the database size was approx. 6 GB. Six months after adding it, the database has grown to 18 GB - 11 GB of which is due to the scanned-in images.

Would this new table affect SQL performance with regard to accessing other data in the database that has nothing to do with the new table?

If so, would moving this new table into its own database be recommended?

Thanks

Rod

View 1 Replies View Related

Deleting Large Number Of Rows In SQL Server 2000

Jan 26, 2004

Hi,

I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!

I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.

At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or traffic on the server. But it will increase to about 500 clients in 2004, and 1000-1500 in 2005, so I really need a smarter solution than what I have today to be able to use the log efficiently.

98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I do, however, like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.

This presents problems on multiple levels. Mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log file. It works, but it's not exactly elegant...

The server is a 1100 MHz P3 / 512MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but the problem is more of a "bad design" than "slow hardware" problem.

My log is pretty simple, as follows:

LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar[2500], no index
LogTimeStamp- datetime - index asc


I have deducted 3 different solutions:

Method 1:
Simply run "Delete from db_log where logtypeid <> stuff_I_want_to_keep".

This is the simplest and the one I prefer, but it takes too long to complete. Any tips to speed this process up?


Method 2:
Create a trigger which runs something like "Delete from db_log where logtypeid <> stuff_I_want_to_keep and date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work for a few days we might lose data we'd wanted to keep.


Method 3:

Copy what I want to keep into another table, and empty the log. Sort of like "Insert into db_log_keep stuff_to_keep; drop db_log; create table db_log; " (or truncate, but that takes a long time too)

But then I would be stuck with two log tables, "48-hour_db_log" and "db_log_keep". I could use a view to "union" them so they would appear as a single table, but that's not ideal either.

However, it seems this method is what will work best for my set-up, unless there are other suggestions??
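A minimal sketch of Method 3, assuming db_log_keep exists with the same columns minus the identity, and that logtypeid values 1 and 2 stand in for the startup/error types to keep. Note that TRUNCATE TABLE is actually fast, since it deallocates pages rather than logging each row; however, rows logged between the two statements would be lost, so it needs a quiet moment:

INSERT INTO db_log_keep (ClientId, LogTypeId, LogValue, LogTimeStamp)
SELECT ClientId, LogTypeId, LogValue, LogTimeStamp
FROM db_log
WHERE LogTypeId IN (1, 2)  -- placeholder for the types worth keeping

TRUNCATE TABLE db_log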

Method 4:

...eagerly awaiting ideas!!! :-)




(Also, whatever tips and/or links to info on maintaining VLDBs are greatly appreciated.)

Thanks in advance for your help! :-)

Nikolai

View 4 Replies View Related

SQL Server 2012 :: Delete Large Number Of Records?

Sep 8, 2015

I have the following scenario:

SQL database on SQL 2012

Large production table, 15 million records

The table has 3 years of data

New monthly data is being added every month.

New monthly data is loaded, checked and finally approved after 6 or 7 iterations. Because of this, each monthly data set is added, then deleted, then added again several times. Because the table is big, this process takes time. Any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table and is being accessed by other users for other analysis.

Delete is done based on trx_date which is a year/month combo, like 201508.

The table has monthly sales by customer aggregated.

The table structure is:

CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,

[code].....
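Since the purge is always one Trx_Date month, deleting it in capped batches keeps each transaction short, so concurrent readers are blocked only briefly; a hedged sketch (the batch size is a tuning knob, and partitioning Sales by Trx_Date with partition switching would be the cleaner long-term fix):

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.Sales WHERE Trx_Date = 201508;
    SET @rows = @@ROWCOUNT;
END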

View 9 Replies View Related

Managing Large Number Of Objects (table,sp,view)

Jul 28, 2007

Hi,
Our database has a very large number of objects. We have a naming convention by module, subproject, etc., but, for example, when we need to open a specific table it still takes time to find it. If we could create custom folders under the Tables folder or the Stored Procedures folder, it would be easier to find an object. We could create subfolders by module and subproject and classify our objects with these folders. Will the next version, SQL Server 2008, support this kind of functionality?

View 8 Replies View Related

Add Column To Existing Table With Large Number Of Rows

Dec 24, 2007

Hey guys,

I need to add a datetime column to an existing table that has about 1.2 million records and is accessed frequently, but I can't afford to stop the DB at all.

Whenever I do: alter table mytable add Updated_date datetime

it just takes too long and I have to stop executing the query after a couple of minutes. I am running SQL Express 2005 SP2; the DB size is over 3 GB but still under the 4 GB limit.

Can you please advise on how to add this column? It's urgent!!
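On SQL Server 2005, adding a NULLable column with no DEFAULT is a metadata-only change, so the ALTER itself is quick; what usually hangs is waiting for the schema-modification lock behind in-flight queries. A lock timeout lets the attempt give up instead of queueing behind, and blocking, the application:

SET LOCK_TIMEOUT 5000  -- give up after 5 seconds instead of waiting indefinitely
ALTER TABLE mytable ADD Updated_date datetime NULL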

thanks in advance

View 5 Replies View Related






