Very Large Number Of Rows

Jul 23, 2005

We are busy designing a generic analytical system at work that will
hold multiple analytic types over time. This system is being developed
in SQL 2000.

Example table:
IDENTITY int
ItemId int [PK]
AnalyticType int [PK]
AnalyticDate DateTime [PK]
Value numeric(28,15)

ItemId - the item for which the analytic is being stored
AnalyticType - an arbitrary type

The [PK] tag indicates the composite primary key.

Our scenario is the following:
* For this time series data, we expect around 250 days per year
(working days) and the dataset could extend to over 20 years
* Up to 50 analytic types
* Up to 20,000 items

Looking at the combined calculation, this comes to roughly
20 (years) * 250 (days) * 50 (types) * 20,000 (items), or around 5 billion rows.

We will be inserting around 50 * 20,000, or around 1 million, rows each day.
The inserts will take place in the middle of the night (outside the
main query window) - this could be done through something like BCP or
BULK INSERT.

Our real problem is that we have not worked with such large
tables before, and we are nervous that our system is going to grind to a
halt. Our biggest tables are around 20 million rows at the moment.

Scanning through Google and Microsoft's own site, we have found a
partitioning method that is available:
http://www.microsoft.com/resources/...part5/c1861.mspx
Having experimented with the above system, it seems rather quirky, and
the available literature suggests it is no more
effective than a clustered index as far as queries go.

It needs to be optimized for queries like:
given the ItemId and the AnalyticType, search for a specific date or a
specific range of dates.
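For what it's worth, here is a minimal sketch of how the composite key could be declared so that exactly this query shape becomes a clustered-index seek (the table name and the sample literal values are invented; the column order in the key is the important part):

CREATE TABLE dbo.Analytic (
    Id int IDENTITY NOT NULL,
    ItemId int NOT NULL,
    AnalyticType int NOT NULL,
    AnalyticDate datetime NOT NULL,
    Value numeric(28,15) NOT NULL,
    -- Clustering on (ItemId, AnalyticType, AnalyticDate) stores each
    -- item/type series contiguously, so a date-range query is a seek
    -- on the first two columns plus a short range scan on the third.
    CONSTRAINT PK_Analytic PRIMARY KEY CLUSTERED
        (ItemId, AnalyticType, AnalyticDate)
)

-- The target query shape:
SELECT AnalyticDate, Value
FROM dbo.Analytic
WHERE ItemId = 42
  AND AnalyticType = 7
  AND AnalyticDate BETWEEN '20050101' AND '20050331'

With 5 billion rows the depth of the clustered index only grows logarithmically, so an individual seek like this should stay fast; the bigger operational worries are index maintenance and the nightly load.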

If anyone has any experience or helpful suggestions I would really
appreciate it.

Thanks

A

View 4 Replies



Large Number Of Rows.

Apr 3, 2007

Hi all,



A select query returns around 1 million rows. The column in the WHERE condition is indexed. This query takes nearly 1 minute to return all the records. Is this normal?



Does the number of records returned affect the performance in spite of the indexing?



Thanks,



DBLearner

View 3 Replies View Related

Inserting Large Number Of Rows

Nov 24, 1999

Hi,

I need to insert a very large number of rows into a table (in SQL Server 7.0) using ADO.
Could you please tell me if there is a way to do a FAST insert, something similar to BCP, or any other way of
inserting a large number of rows efficiently.
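One option, since ADO can execute arbitrary T-SQL, is to have the server pull the file itself with BULK INSERT, which exists in SQL Server 7.0 and runs at close to BCP speed. A sketch (the table name, file path, and terminators are assumptions):

BULK INSERT dbo.TargetTable
FROM 'C:\data\rows.txt'      -- path as seen by the SQL Server machine
WITH (
    FIELDTERMINATOR = ',',   -- match the file's column delimiter
    ROWTERMINATOR = '\n',
    TABLOCK                  -- table lock enables faster, minimally logged loading
)

Executed once via Connection.Execute, the rows never cross the network one at a time the way per-row ADO inserts do.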

Thanks

View 2 Replies View Related

Deleting Large Number Of Rows Times Out

Sep 14, 2007

I have a table with about 10 million rows; it's been logging data incorrectly and I want to start again.

DELETE FROM tblLog

I just get a message about the server timing out, what can I do?
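If every row really can go, two sketches (assuming the timeout is the client-side command timeout, not the server):

-- Option 1: wipe the table. Minimally logged and near-instant,
-- but it requires elevated permissions and resets any identity column.
TRUNCATE TABLE tblLog

-- Option 2: delete in short batches so no single transaction runs
-- long enough to hit the timeout (SQL 2000 syntax; on SQL 2005+
-- "DELETE TOP (10000) FROM tblLog" in the loop does the same job).
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM tblLog
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0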

Thanks

View 5 Replies View Related

How To Manage IDENTITY In Tables With Large Number Of Rows?

May 23, 2002

Table structure: col1 IDENTITY (seed=1, increment=1) + a few other columns (col2...col7) + one text column (col8).

I have around 50,000,000 rows per day inserted into table T1. At the end of the day 40,000,000 rows are deleted. I have to keep the records for 12 months and then archive them. The database serves the web 24/7 and no downtime is allowed. The IDENTITY column will go out of range (overflow) after less than two years, unless the identity seed is reset to the start value (seed=1, increment=1).

At the end of the 12th month the data is archived to another table and only the last month is kept in T1, so T1 enters the new year with data from the last month of the previous year. A few other tables refer to this table using their own field with values from the T1 IDENTITY column (referential integrity is not enforced). The identity column in T1 is needed as a unique id for some search actions. Performance is an issue, therefore the bigint data type is used for this identity column rather than decimal.
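On the overflow itself: once the rows holding low identity values have been archived out of T1, the counter can be wound back with DBCC CHECKIDENT. A one-line sketch (safe only when no rows with low identity values remain, otherwise new ids will collide with existing ones):

DBCC CHECKIDENT ('T1', RESEED, 0)
-- The next inserted row typically gets the new seed plus the
-- increment, i.e. identity value 1 here.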

Another problem I have is how to update one column of a table (1 million rows to be updated out of 2 million) with minimum impact on the users who are querying this table heavily. No need to mention that it is a 24/7 web app with no downtime.
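A common low-impact pattern for the 1-million-row update is to chip away at it in small batches, so each transaction is short and readers get in between batches. A sketch (the column names and values are invented; note the predicate must become false for updated rows, or the loop never ends):

SET ROWCOUNT 5000   -- SQL 2000 style; on 2005+ use UPDATE TOP (5000)
WHILE 1 = 1
BEGIN
    UPDATE T1
    SET col2 = 99,             -- hypothetical new value
        needs_update = 0       -- flip the flag so a row is touched once
    WHERE needs_update = 1     -- hypothetical flag marking the target rows
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0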

Thank you in advance.


Goran

View 1 Replies View Related

Deleting Large Number Of Rows In SQL Server 2000

Jan 26, 2004

Hi,

I have some problems with our database which is growing too large, and was hoping someone might have some tips on what I can do!

I have about 100 clients, each logging about 10 000 rows of status logs a day. So after just a few days the db is growing very large.

At present it's manageable, since I don't need to "dig" into the logs more than a few times a day. The system itself is not affected by the size of the log or the traffic on the server. But the client count will increase to about 500 in 2004, and 1000-1500 in 2005, so I really need a smarter solution than what I have today to be able to use the log efficiently.

98-99% of these rows are status messages which are more or less garbage during normal operation. But I still need to keep them in case an error occurs and we need to go back an hour or two (maybe a day) to see what went wrong. After 24-48 hours these 98-99% are of no use. I would, however, like to keep the remaining 1-2%; they are messages like startup, errors, etc. Ideally they should be logged in two separate tables by the clients, but unfortunately I cannot make the clients change their logging.

This presents problems on multiple levels, mainly in searching, which often times out, but also with backup and storage space. At the moment I check the system for errors, and every other day I just truncate the log file. It works, but it's not exactly elegant...

The server is an 1100 MHz P3 / 512 MB / Windows 2000 Server /
SQL Server 2000. Faster hardware would help, but this is more a "bad design" problem than a "slow hardware" problem.

My log is pretty simple, as follows:

LogId - int - primary key - clustered index
ClientId - int - index asc
LogTypeId - int - index asc
LogValue - nvarchar(2500) - no index
LogTimeStamp - datetime - index asc


I have come up with 3 different solutions:

Method 1:
Simply run "DELETE FROM db_log WHERE logtypeid <> stuff_I_want_to_keep".

This is the simplest and the one I prefer, but it takes too long to complete. Any tips to speed this process up?
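One way to keep Method 1 from blocking everything for minutes at a time is to run it in batches, each a short transaction of its own. A sketch against the schema above (the logtype ids to keep are placeholders; an index leading on LogTypeId makes each pass a seek rather than a scan):

SET ROWCOUNT 5000   -- SQL 2000 way of capping rows per pass
WHILE 1 = 1
BEGIN
    DELETE FROM db_log
    WHERE LogTypeId NOT IN (1, 2)   -- placeholder "keep" ids
      AND LogTimeStamp < DATEADD(day, -2, GETDATE())
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0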


Method 2:
Create a scheduled job which runs something like "DELETE FROM db_log WHERE logtypeid <> stuff_I_want_to_keep AND date < today_minus_two_days" every hour or so. This will ensure that the db doesn't grow too large. But if I'm away from work for a few days we might lose data we wanted to keep.


Method 3:

Copy what I want to keep into another table, and empty the log. Sort of like "INSERT INTO db_log_keep SELECT stuff_to_keep FROM db_log; DROP TABLE db_log; CREATE TABLE db_log;" (or TRUNCATE, but that takes a long time too).

But then I would be stuck with two log tables, "48-hour db_log" and "db_log_keep". I could use a view to UNION them so they would appear as a single table, but that's not ideal either.

However, it seems this method is what will work best for my setup, unless there are other suggestions?

Method 4:

...eagerly awaiting ideas!!! :-)




(Also, any tips and/or links to info on maintaining VLDBs are greatly appreciated.)

Thanks in advance for your help! :-)

Nikolai

View 4 Replies View Related

Add Column To Existing Table With Large Number Of Rows

Dec 24, 2007

Hey Guys

I need to add a datetime column to an existing table that has about 1.2 million records and is being accessed frequently,
but I can't afford to stop the db at all.

Whenever I do: ALTER TABLE mytable ADD Updated_date datetime

it just takes too long and I have to stop executing the query after a couple of minutes.
I am running SQL Express 2005 SP2. The db size is over 3 GB but still under the 4 GB limit.

Can you please advise on how to add this column? It's urgent!
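For what it's worth, adding a NULLable column with no default is normally a metadata-only change in SQL 2005; no data pages are rewritten, so the statement itself is near-instant. The long wait is usually the ALTER queuing behind active sessions for a schema-modification lock. A sketch (the timeout value is an assumption):

-- Fail fast instead of queuing indefinitely behind readers.
SET LOCK_TIMEOUT 5000   -- give up after 5 seconds

-- Nullable, no default: metadata-only, near-instant once the
-- schema-modification lock is granted.
ALTER TABLE mytable ADD Updated_date datetime NULL

If it errors with a lock timeout, retry during a quieter moment; the alter itself should complete in well under a second.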

thanks in advance

View 5 Replies View Related

SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database enabled for mirroring, and we need to delete old records: around 500k rows from one table. But the table has foreign key relationships. How should this kind of delete be done on production servers?
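A common pattern is to delete in batches, child rows before parent rows, so each transaction stays small and the mirroring send queue never balloons. A sketch with invented table and column names:

-- 1) Remove referencing child rows first, in chunks.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) c
    FROM dbo.OrderLines AS c              -- hypothetical child table
    JOIN dbo.Orders AS p ON p.OrderId = c.OrderId
    WHERE p.CreatedDate < '20140101'      -- hypothetical cutoff
    IF @@ROWCOUNT = 0 BREAK
END

-- 2) Then the parents, also in chunks; their children are gone,
--    so the foreign key no longer blocks the delete.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.Orders
    WHERE CreatedDate < '20140101'
    IF @@ROWCOUNT = 0 BREAK
END

Pausing between batches (WAITFOR DELAY) gives the mirror time to catch up if the send queue starts growing.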

View 2 Replies View Related

How To Enter More Number Of Rows In A Table Having More Number Of Columns At A Time

Sep 24, 2007

Hi

I want to enter rows into a table having a large number of columns.

For example: I have an employee table with columns (name, address, salary, etc.).
How can I enter 100 employees' data at a time?

Suppose I have my data in a .txt file or in an .xls file.

( SQL Server 2005)
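For the .txt case, a minimal BULK INSERT sketch (table name, path, and delimiters are assumptions; for .xls the Import/Export Wizard is the usual route):

BULK INSERT dbo.Employee
FROM 'C:\data\employees.txt'   -- path as seen by the server
WITH (
    FIELDTERMINATOR = '\t',    -- tab-separated: name, address, salary
    ROWTERMINATOR = '\n',
    FIRSTROW = 2               -- skip a header row, if the file has one
)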

View 1 Replies View Related

Number Of SQL Queries Too Large?

Jul 28, 2004

Hi All,

I'm currently in the middle of building quite a large CMS using ASP.NET and MSSQL2K, and I have begun to question whether the number of queries I am using to build one page is too many.

For one page (View Forum) I am getting all of the templates and checking access, then pulling a list of threads, getting the first and last posts, then user info for the first and last posts... anyway, to view 10 threads on the page the number of queries comes to about 54, and the page takes 0.064 seconds to load.

My question is: is this too many queries to be running for a single page load? All queries use stored procedures.

Thanks Guys.

View 3 Replies View Related

Large Number Of Connections

Jul 19, 2004

hi,
Does a large number of connections to a SQL Server result in slow queries?
By a large number of connections I mean in the range of 2000 to 2500 connections to various databases on the same server.
This happens on the production database: if I download a copy of the database and run the specific queries on a local box it hardly takes 1 second to execute, whereas on the production box it takes about 45 to 60 secs. I am working on the indexes, but was not sure whether the number of connections to a server affects the performance of the queries or the indexes just need defragmentation...

View 2 Replies View Related


Large Number Of Databases On 2005

Feb 23, 2007

For anyone with a larger number of databases (500+): how many do you have in a single instance? If you are using multiple instances on a single server, how many dbs per instance? This is why I'm asking:

We are experiencing 701 "out of system memory" errors and temporary (usually) system freezes when the error occurs. We have 32-bit 2005 version 9.00.2153.00, 32 GB of memory, AWE enabled, on a quad dual-core 3 GHz hyperthreaded server. Neither the buffer pool nor VAS shows any pressure when the "out of system memory" error occurs. Since this error usually indicates a VAS problem we tried increasing VAS to 1 GB with the -g flag. It made no difference. PSS has been working on the case for 3 weeks. They don't seem to be finding any evidence of memory pressure either. When I last spoke to the escalation engineer yesterday it seemed that they are going to recommend reducing the number of databases on the server. I asked for clarification as to whether we are hitting a 32-bit barrier, an instance limitation, or both. I am awaiting the answer. How many databases do you have on your server? We had between 1700 and 1900 (the number varies) at times when the error occurred. We are now at 1500, and have not had the error in the 2 days since reducing the number of databases...

View 4 Replies View Related

Large Number Of Tables And Performance

Jan 25, 2008

Hi gurus, I'm creating a web application where I will have a large number of tables (between 10k and 20k). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, as the application will start on a single database server. I tried to find some resources on that on the internet but couldn't find any.

I would really appreciate if you can give me some advice and if you have any good links that would be great...

Waleed Eissa
http://www.waleedeissa.com

View 9 Replies View Related

Inserting Large Number Of Records

Jun 20, 2006

Hi!

The DB I am working with has about 10 tables and some of the tables have 200,000 to 500,000 records. All tables have a clustered index on the primary key.

I think INSERT performance could be better - I add thousands of records at a time from many connections.

Is there a way to defer the update of indexes, so that I can update the tables and then let the indexes regenerate?
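There is no true "defer" switch for ordinary indexes, but if the server is SQL 2005 the non-clustered indexes can be disabled before a big load and rebuilt afterwards, which amounts to the same thing. A sketch with an invented index name:

-- Disable each non-clustered index before the load (never disable
-- the clustered index - that makes the whole table inaccessible).
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders DISABLE

-- ... run the bulk inserts ...

-- REBUILD re-enables the index and repopulates it in one pass.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD

On SQL 2000 the equivalent is DROP INDEX before the load and CREATE INDEX after.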

thanks,

Jas.

View 4 Replies View Related

Question On Displaying Large Number Of Record?

Oct 20, 2006

I have a datagrid that needs to display a log table which has more than a million records. Since that is a huge number, it is not possible to get a dataset using "select * from log_table" to fill and bind to the datagrid. Is there any way to display the first 100 rows on the first page and show the next 100 rows if the user clicks on page 2? Thank you very much in advance! Justin
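If the server is SQL 2005, ROW_NUMBER() makes server-side paging straightforward, so each page fetches only its own 100 rows. A sketch (the column names are assumptions):

-- Page 2 = rows 101-200 when ordered by LogId.
SELECT LogId, LogValue, LogTimeStamp
FROM (
    SELECT LogId, LogValue, LogTimeStamp,
           ROW_NUMBER() OVER (ORDER BY LogId) AS rn
    FROM log_table
) AS numbered
WHERE rn BETWEEN 101 AND 200

On SQL 2000 the same effect takes a nested TOP query, but the idea is identical: never pull more than one page across the wire.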

View 4 Replies View Related

Error While Handling Large Number Of Records

Jan 29, 2008

Hi, I have created a stored procedure which handles global updates. I am using a cursor to fetch rows one by one for updating (this is required to implement the business logic). Now when I execute this stored procedure it gives me a deadlock error, and I don't know why I am getting it (approx. 150,000 rows). If I remove unnecessary records from the table (approx. 50,000) it works fine. Is there any way to handle this? I am calling this stored procedure from my Windows service. Please give me a good solution if possible.

View 3 Replies View Related

Large Number Of Processes In Activity Monitor

Jun 4, 2007

All;

We have a Windows App and a Web App that share business objects pointing to a single database. When a Windows user logs in, an average of 50 processes are created in the first few seconds and never go away. The details window is blank and they all remain sleeping from that point on.

I have stepped through the code to see if there is anything odd going on, but most of the processes are created when validating the number of parameters the stored procedure has or the length of the stored procedure name. This translates to 1000-1500 processes on average.

Is this normal? Will it hurt performance? Is there a way to remove them?

View 1 Replies View Related

NTEXT Vs NVARCHAR For Large Number Of Columns

Jul 23, 2005

Hi all,

I need to store data into about 104 columns. This is problematic with MS SQL, since it doesn't support rows over 8 KB in total size. Most of the columns are of type NVARCHAR(255), which means we can't have more than 8092/(255*2) = 15 columns of this type.

With a row length of more than 8 KB, SQL gives a warning that any rows over that amount will be truncated.

So far I'm seeing two possible solutions to this problem:

1. Split data into multiple tables with the same ID column across all tables, and then join them on SELECT statements.

2. Use NTEXT instead of NVARCHAR. NTEXT's length is 16 bytes because it contains a pointer to the actual value stored somewhere else. However, NTEXT doesn't support regular indexing, only through a Full-Text Index catalog. In this case I'll need to use "WHERE CONTAINS(columnName, 'sometext')" to perform searches, which is bearable.

I'm inclined toward #2. However I haven't used Full-Text indices before and don't know their limitations. Will I run into problems with NTEXT? Is there a better solution?

Thanks.
-Oleg.
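For option 1, a sketch of the vertical split (all names invented): each slice carries the shared key, and a view can present the slices as one logical row again.

CREATE TABLE dbo.Wide_A (
    RecordId int NOT NULL PRIMARY KEY,
    Col001 nvarchar(255) NULL,
    -- ... a dozen more nvarchar(255) columns ...
    Col014 nvarchar(255) NULL
)

CREATE TABLE dbo.Wide_B (
    RecordId int NOT NULL PRIMARY KEY
        REFERENCES dbo.Wide_A (RecordId),
    Col015 nvarchar(255) NULL,
    -- ... the next slice of columns ...
    Col028 nvarchar(255) NULL
)
GO

-- One logical row = one row per slice, joined on the shared key.
CREATE VIEW dbo.WideRecord AS
SELECT a.RecordId, a.Col001, a.Col014, b.Col015, b.Col028
FROM dbo.Wide_A a
JOIN dbo.Wide_B b ON b.RecordId = a.RecordId

The join cost is real but small when the key is the clustered primary key on every slice, and unlike NTEXT every column stays indexable.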

View 7 Replies View Related

Large Number Of INSERT Statements - Not All Are Executed

Feb 9, 2007

Hello! I have a developer who is playing around with some SQL statements using VB.NET. He has a test table in a SQL 2000 database, and he has about 2000 generated INSERT statements.

When the 2000 INSERT statements are run in SQL Query Analyzer, all 2000 rows are added to the table. When he tries to send the 2000 statements to SQL Server through his app, a random number of statements do not get executed. But SQL Profiler shows that each of the 2000 statements is getting sent to the server.

I suggested that he add a "GO" statement at the end of the INSERT block, but the statement fails when that is sent to the server (expected, in hindsight: GO is a Query Analyzer batch separator, not a T-SQL statement).

I know that this is not the ideal way to insert bulk data into the system, but now we are all just curious as to why SQL Server doesn't execute each individual INSERT. Any thoughts?

View 3 Replies View Related

SQL Server 2012 :: Delete Large Number Of Records?

Sep 8, 2015

I have the following scenario:

SQL database on SQL 2012

Large Production table 15 Million record

The table has 3 years of data

New monthly data is being added every month.

A new monthly data set is loaded, checked, and finally approved after 6 or 7 iterations. Because of this iteration the monthly data set is added, then deleted, then added again several times. Because the table is big this process takes time; any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table being accessed by other users for other analysis.

Delete is done based on trx_date which is a year/month combo, like 201508.

The table has monthly sales by customer aggregated.

The table structure is:

CREATE TABLE [dbo].[Sales](
    [batch_key] [int] NOT NULL,
    [Company_key] [int] NOT NULL,
    [customer_key] [char](22) NOT NULL,
    [Trx_Date] [int] NOT NULL,
    [account] [nvarchar](35) NOT NULL,
[code].....
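Since each reload touches exactly one month, a batched delete keyed on trx_date keeps every transaction short, so concurrent readers are blocked only briefly. A sketch against the posted schema:

-- Delete the month being reloaded in chunks; each pass is its
-- own short transaction.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.Sales
    WHERE Trx_Date = 201508   -- the year/month combo being replaced
    IF @@ROWCOUNT = 0 BREAK
END

An index leading on Trx_Date turns each pass into a seek; on Enterprise Edition, partitioning the table by Trx_Date and switching the month's partition out would make the delete nearly instantaneous.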

View 9 Replies View Related

Managing Large Number Of Objects (table,sp,view)

Jul 28, 2007

Hi,
Our database has a very large number of objects. We have a naming convention by modules, subprojects, etc., but, for example, when we need to open a specific table it still takes time to find it. If we could create custom folders under the table folder or the stored procedure folder it would be easier to find an object: we could create subfolders by module and subproject and classify our objects with these folders. Will the next version, SQL Server 2008, support this kind of functionality?

View 8 Replies View Related

SQL Server 2005 Slows Down After A Large Number Of Queries

Jul 4, 2007

Hi,

We are running SQL Server 2005 Ent Edition with SP2 on a Windows 2003 Ent. Server SP2 with an Intel E6600 dual-core CPU and 4 GB of RAM. We have a C# application which performs a large number of calculations in a loop. The application first loads the transactions that need to be updated, and then goes to each row, queries another table to get some values, and updates the transaction.

I have set a limit of 2 GB of RAM for SQL Server, and when I run the application it performs 5 record updates (the process described above) per second. After roughly 10,000 records, the application slows down to about 1 record per second. I have tried to examine the Activity Monitor; however, I can't find anything that might indicate what's causing this.

I have read that there are some known issues with hyper-threaded CPUs; however, since my CPU is dual-core, I do not know if the issue applies to those CPUs too, and I have no way to disable one core in the BIOS.

The only thing I have noticed is that if I change the Max Degree of Parallelism when the server slows down (i.e. from 0 to 1 and then back to 0), the server speeds up for another 10,000 record updates and then slows down again. Does anyone have an idea of what's causing it? What does the property change do that makes the server speed up again?

If there is no solution to this problem, does anyone know if there is a stored procedure or anything else that can be used programmatically to speed up the server when it slows down? (This is not the optimal solution; however, I will use it as a workaround.)

Any advice will be greatly appreciated.

Thanks,
Joe

View 3 Replies View Related


SQL SERVER 2005 DATABASE MIRRORING For Large Number Of Databases

Jun 1, 2006

I am trying to enable database mirroring for 100 databases.
It goes error-free until 59 databases (sometimes 60) with the
status (principal, synchronized) on the principal. On the 60th or 61st database
it gives the status (principal, disconnected). Also the mirror starts acting
abnormally: connections to the mirror start to give connection timeouts and it is
not enabling database mirroring on any more databases. I have SQL Server
2005 Enterprise with SP1 on the servers. A witness is not included yet.

These are my test servers... I have more than 500 databases on my production
servers.

Principal and mirror are both using port 5022 for ENDPOINT communication.

All of the databases are critical and all must be included in the database mirroring.
So, after that, I tried to implement database mirroring again...
The system has 3 GB of RAM; SQL Server (mirror) is using 85 MB of RAM but still
gives this error while trying to enable database mirroring for the 37th
database:

"There is insufficient system memory to run this query"

WHY?

View 19 Replies View Related

SQL Server 2005 Full Text Performance With Large Number Of Records

Dec 12, 2007

Hi
We are using the SQL Server 2005 Full-Text Service. The data is not huge, but it is the kind of data where each record is small and there are a large number of records. There are 35 million records now, with 11 GB of data and about 1.6 GB of FT catalog on the table. This is expected to grow to at least 10 times its current size. The issue is that FTS takes a very long time to return results when the number of hits (rows) coming back from FTS is large for some searches. With the same data and catalog, full-text queries for less common words return in a timely fashion. The nature of the problem doesn't allow us to return only the top results; we need all the results. So it's not about the size of the data but the number of results returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.

I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches in smaller catalogs. Is there any other way this issue can be solved?

If splitting is the only way, can you give me an idea of a statistical/standard limit on the number of search results / catalog size at which FTS still gives good results?

Thanks in advance

View 1 Replies View Related

SQL 2012 :: Generating CREATE TABLE Scripts For Large Number Of Tables

Feb 11, 2014

Other than right-clicking on each individual table in SSMS and generating a CREATE script, is there a simple way to generate CREATE TABLE scripts for tables within a given database?

Background: I have a bunch of tables in one database, and I would like to add tables to a second database that have the same names and basic structures of some of the tables from the first database.

I do not need to transfer any data from the tables; this is a separate project that will use a similar data structure. I just want to generate the CREATE TABLE scripts for 30-ish tables within the first database, and then I'll tweak the scripts as appropriate and run them against the new database.

[URL] ....

View 7 Replies View Related

SQL Server Admin 2014 :: Database Mirroring For Large Number Of Databases

Oct 27, 2015

I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise comprising 60 databases, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server with 4 cores and 3 instances. My understanding is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirror databases would consume 240 threads. What needs to be checked to judge the feasibility of going ahead with an async mirror setup as described above?

View 0 Replies View Related

SQL Server Admin 2014 :: MDW Data Collector - Large Number Of SPIDs

Nov 6, 2015

I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers that are being monitored and uploaded to the central MDW monitoring server.

On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.

Is this normal behaviour? I've seen more blocking as a result of this.

Is there any way to reduce the number of SPIDs generated?

View 0 Replies View Related

SQL Server Integration Services - Stop Large Number From Being Converted To Exponential

Sep 23, 2006

I am importing a text file with a column (serial numbers) containing alphanumeric data, some mixed and some purely numeric. The very large values that are all numeric are being converted to exponential notation when I run the file through an import package in SQL Server Integration Services (2005).

Ex. 4110041233214321 --> 4110040000000000 (displays as 4.11E+15)

In the past I dealt with this by importing the text file into Excel and changing the format of the column to number. This works even when many of the values contain alpha characters. I am not sure how to accomplish the same thing without going through Excel. If you have any ideas on this I would be happy to hear from you.

I am importing the text file into a sql table.

View 1 Replies View Related

Retrieving First N Rows From A Large Query

Feb 15, 2007

Sriram writes "Hi,

I want to retrieve only the first n rows from a query which returns a large number of rows.

Say,

select empno, name from emp where deptno=100

returns 1000 rows.

I want to improve the query so that it returns only the first 10 rows and not 1000 rows.

Thanks in Advance,
Sriram."

View 1 Replies View Related

Large Amount Of Pages For Few Rows

Jul 23, 2005

Hello,
I have noticed that some of my tables occupy an extremely large number of pages but contain few rows. An example is a table with 37 rows spread over 22000 pages! The columns are simple integer and char. I fixed the problem by introducing a clustered index; now it only uses 1 page. But can anyone explain this behaviour in SQL Server 2000?
Regards, Jakob Mathiasen

View 4 Replies View Related

Outputting Large Rows (>1000 Chars) Via Osql Or Isql?

Jan 25, 2001

I am attempting to generate files containing more than 1000 characters per line by outputting the results of a stored procedure via osql to a flat file. osql (and isql) appear to force a newline after 1000 characters, even when specifying a -w2000 parameter.

I have also tried to output the results of the stored procedure via DTS and this appears to do the same thing!

Does anybody know how to prevent osql (or isql) from forcing the newline?
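One workaround worth trying: bcp in queryout mode writes result rows to a file without osql's screen-width wrapping (server name, database, and query are placeholders; if the logic must stay in a stored procedure, expose it through a SELECT-able view or function first):

bcp "SELECT wide_col FROM mydb.dbo.wide_rows" queryout out.txt -c -S myserver -T

Here -c writes character data, -S names the server, and -T uses a trusted connection.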

View 1 Replies View Related






