I have a PeopleSoft process in SQL Server which takes a long time to finish. Is there any way to find out how fragmented a table is so I can defragment it, or any way to pre-allocate space for a particular table? A couple of users run the process at the same time, but since SQL Server takes table/page locks, it locks and only releases after the job is done. Can I switch it to row-level locking by executing the sp_tableoption procedure? I would appreciate all your suggestions.
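A quick way to quantify the fragmentation before deciding anything is DBCC SHOWCONTIG; a minimal sketch, assuming SQL Server 2000 and using a PeopleSoft table name purely as a placeholder:

-- Low "Scan Density" and high "Logical Scan Fragmentation" suggest the
-- table is worth defragmenting (table name is hypothetical).
DBCC SHOWCONTIG ('dbo.PS_JOB') WITH TABLERESULTS

As for locking: from SQL Server 7.0 onward the engine chooses row, page, or table locks dynamically on its own, so row-level locking does not need to be switched on through sp_tableoption the way it did in 6.5.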
I have a bunch of heap tables and the fragmentation seems to be high. I am not sure whether I should add indexes to them, as these tables are inserted into and updated every day.
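On SQL Server 2005 or later, sys.dm_db_index_physical_stats reports both fragmentation and forwarded records for heaps; a sketch (the DETAILED scan is heavy, so run it off-hours):

-- index_id = 0 restricts the output to heaps; forwarded records are a
-- heap-specific cost created by updates that a clustered index would avoid.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.avg_fragmentation_in_percent,
       ips.forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ips
WHERE ips.index_id = 0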
I have inherited a database running on SQL 7 SP2. A lot of the tables do not have a clustered index, and some of them are highly fragmented. Below is a sample of the DBCC SHOWCONTIG output.
This table has 5 non-null char columns whose total length is 43 characters, yet the table occupies about 900 MB for only about 670,000 rows! SELECTs on the table are painfully slow.
Since there is no clustered index, it seems I have to rebuild the table to remove the fragmentation. Is there a way to reorganize the table without recreating it?
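One long-standing workaround for rebuilding a heap in place, usable on SQL 7 (which has no ALTER TABLE ... REBUILD): create a throwaway clustered index, which rewrites and compacts the table, then drop it again. A sketch with hypothetical names; note it needs free space roughly the size of the table and causes any non-clustered indexes to be rebuilt:

-- Briefly clustering the heap rewrites it contiguously.
CREATE CLUSTERED INDEX ix_tmp_defrag ON dbo.BigHeap (SomeColumn)
DROP INDEX dbo.BigHeap.ix_tmp_defrag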
Abe writes: I am using SQL Server 2000. When I tried to back up the database, the backup began failing because SQL Server could no longer allocate 964 KB of contiguous blocks (the minimum needed is 1 MB).
What causes the memory or blocks to become fragmented?
How do you solve this issue other than rebooting the server?
How do you prevent this from happening or monitor this issue?
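One workaround short of a reboot is to ask the backup itself for smaller contiguous buffers, so each allocation fits into the already-fragmented address space; a sketch (database name and path are hypothetical, and 64 KB is the minimum MAXTRANSFERSIZE):

-- Smaller transfer size = smaller contiguous allocations per backup buffer.
BACKUP DATABASE MyDb
TO DISK = 'D:\Backups\MyDb.bak'
WITH MAXTRANSFERSIZE = 65536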
I'm doing a network-monitoring app where I basically run checks on servers every few minutes and log the data to a table. Naturally the table can get big quite quickly. What I want is to be able to overwrite the table data at the start of each new day; alternatively, to roll the data up into daily or weekly packets and then overwrite the table data. How do I do this?
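A common shape for this is a nightly job that rolls the raw rows up into a summary table and then truncates the raw table; a sketch with hypothetical table and column names:

-- 1) Roll up yesterday's raw checks into one row per server per day.
INSERT INTO dbo.ServerChecksDaily (ServerName, CheckDay, Checks, Failures)
SELECT ServerName,
       CONVERT(char(8), CheckTime, 112),
       COUNT(*),
       SUM(CASE WHEN Status = 'FAIL' THEN 1 ELSE 0 END)
FROM dbo.ServerChecks
GROUP BY ServerName, CONVERT(char(8), CheckTime, 112)

-- 2) Clear the raw table for the new day (fast, minimally logged).
TRUNCATE TABLE dbo.ServerChecks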
I save each table's size and record count every day and check them every few days.
... insert into @t exec sp_msforeachtable 'exec sp_spaceused ''?''' ...
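For reference, a fuller version of that snippet might look like the sketch below; a temp table stands in for @t because INSERT ... EXEC into a table variable only works from SQL Server 2005 onward, and the columns mirror what sp_spaceused returns for a single object:

CREATE TABLE #t (
    name       sysname,
    rows       varchar(20),
    reserved   varchar(20),
    data       varchar(20),
    index_size varchar(20),
    unused     varchar(20)
)

INSERT INTO #t
EXEC sp_msforeachtable 'exec sp_spaceused ''?'''

SELECT * FROM #t ORDER BY name
DROP TABLE #t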
But today I saw a sudden size increase in one table: about 128 MB in a day (average growth for this table was 4 or 5 MB a day). This growth was for only 4,222 records, while for more records (about 7,000) yesterday we had only 2 MB of growth!
This is the table's information now:
sp_spaceused 'Table1'
Result:
name      rows      reserved     data
Table1    1021319   460328 KB    283104 KB

I tried to guess the reason. I copied these new records to another table, but the result was even stranger: in the new table the size of these records was < 1 MB. I then copied all the records to another table, and the size there was 148 MB (while it is 283 MB in my real database).
I created a thread two days back on a performance problem regarding non-trusted check constraints and foreign key constraints. We are planning to make them trusted to see if it helps. On the other hand, we see 50+ clustered/non-clustered indexes with >90% fragmentation, but the page counts for all these indexes are in the range of 500-900, and I'm reading in the whitepaper that fragmentation in indexes with fewer than 1,000 pages is not a concern.
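For what it's worth, making a constraint trusted again is one statement per constraint; the doubled CHECK is not a typo: WITH CHECK re-validates the existing rows, and CHECK CONSTRAINT re-enables the constraint. A sketch with hypothetical names:

ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers

-- Verify (is_not_trusted should now be 0; for check constraints run the
-- same query against sys.check_constraints):
SELECT name, is_not_trusted
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('dbo.Orders')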
Hi, I use this script, which shows me the size of each table and sums up all the table sizes:
SELECT X.[name],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[rows]), 1), '.00', '')       AS [rows],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[reserved]), 1), '.00', '')   AS [reserved],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[data]), 1), '.00', '')       AS [data],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[index_size]), 1), '.00', '') AS [index_size],
       REPLACE(CONVERT(varchar, CONVERT(money, X.[unused]), 1), '.00', '')     AS [unused]
FROM (
    SELECT CAST(object_name(id) AS varchar(50)) AS [name],
           SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END) AS [rows],
           SUM(CONVERT(bigint, reserved)) * 8 AS reserved,
           SUM(CONVERT(bigint, dpages)) * 8 AS data,
           SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8 AS index_size,
           SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
    FROM sysindexes WITH (NOLOCK)
    WHERE sysindexes.indid IN (0, 1, 255)
      AND sysindexes.id > 100
      AND object_name(sysindexes.id) <> 'dtproperties'
    GROUP BY sysindexes.id WITH ROLLUP
) AS X
ORDER BY X.[name]
The problem is that the sum of all the tables does not match the size of a full database backup. For example, when I run this query against my database I see a total of 111,899 KB, i.e. about 111 MB, but when I do a full backup of that database the backup file is 1.5 GB. Why is that, and where does this size come from?
We have a database with a table that contains around 180m records, and each day a further 70k are inserted. No records are ever deleted, as this table is used for archiving only. Users need to run SELECTs against this table constantly, but due to the high number of INSERTs the indexes become very fragmented very quickly. My aim is to avoid the daily index rebuilds our software house is telling us we have to do.
This is the DDL for the table:
CREATE TABLE [dbo].[Inventory](
    [EAN] [bigint] NOT NULL,
    [Day] [smalldatetime] NOT NULL,
    [State] [int] NOT NULL,
    [Quantity] [int] NULL,
    [StockValue] [float] NULL,
    CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED
[code]...
There are also three non-clustered indexes on this table, each referencing a single column. What I cannot understand is why the three columns of the primary key would also be configured as non-clustered indexes. My solution would be one of the following:
1. Accept the tables are going to be fragmented and require a daily rebuild (don't like this one!)
2. Partition the table
3. Remove the non-clustered indexes and let the clustered index on the primary key do the work (a sketch of this option follows below).
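A sketch of option 3, with hypothetical index names. One caveat: only the leading primary-key column is seekable through the clustered index, so an index on [Day] or [State] alone may still earn its keep for queries that filter on just those columns:

DROP INDEX [IX_Inventory_EAN] ON [dbo].[Inventory]
DROP INDEX [IX_Inventory_Day] ON [dbo].[Inventory]
DROP INDEX [IX_Inventory_State] ON [dbo].[Inventory]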
How can we monitor all the tables in all databases and send notifications to the team? Is there a way to find the number of rows and the size of a table as of last month, and work out the growth % now?
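One way to get there is a daily snapshot job; a sketch (the history table and the dates are hypothetical, and sys.dm_db_partition_stats requires SQL Server 2005 or later):

CREATE TABLE dbo.TableSizeHistory (
    capture_date datetime      NOT NULL,
    table_name   nvarchar(260) NOT NULL,
    row_count    bigint        NOT NULL,
    reserved_kb  bigint        NOT NULL
)

-- Run daily: one row per user table with its current row count and size.
INSERT INTO dbo.TableSizeHistory (capture_date, table_name, row_count, reserved_kb)
SELECT GETDATE(),
       OBJECT_SCHEMA_NAME(ps.object_id) + '.' + OBJECT_NAME(ps.object_id),
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END),
       SUM(ps.reserved_page_count) * 8
FROM sys.dm_db_partition_stats AS ps
WHERE OBJECTPROPERTY(ps.object_id, 'IsUserTable') = 1
GROUP BY ps.object_id

-- Growth % between any two snapshot dates:
SELECT n.table_name,
       (n.reserved_kb - o.reserved_kb) * 100.0 / NULLIF(o.reserved_kb, 0) AS growth_pct
FROM dbo.TableSizeHistory AS n
JOIN dbo.TableSizeHistory AS o ON o.table_name = n.table_name
WHERE CONVERT(char(8), n.capture_date, 112) = '20240201'
  AND CONVERT(char(8), o.capture_date, 112) = '20240101'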
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:
ALTER DATABASE <DBNAME> MODIFY FILE (NAME = <DBLOGFILENAME>, SIZE = 2)
And I'm getting "MODIFY FILE failed. Specified size is less than current size." I also tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
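For context, MODIFY FILE can grow a file but refuses to set a size below the current one, which is exactly the error shown; shrinking goes through DBCC SHRINKFILE instead. A sketch with a hypothetical logical file name (target size is in MB):

USE MyDb
DBCC SHRINKFILE (MyDb_Log, 2)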
The largest table in our database eats up over 4 GB. We ran sp_spaceused for this table: the total length of all its columns (just int, char, varchar, money, and numeric types) is about 200 bytes, and the table has around 1,300,000 rows, but the reserved space for the table is 4,800,000 KB and the data space is around 4,600,000 KB.
How can each row average 3.7 KB when the total size of all columns is just 200 bytes? Is there anything else I need to check? Can anyone suggest what causes this, or is it normal?
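Before chasing real fragmentation it is worth ruling out stale space accounting, since sp_spaceused reads stored counters that can drift badly; a sketch with hypothetical names:

-- Recount the page/row usage counters, then re-report the size.
DBCC UPDATEUSAGE ('MyDb', 'dbo.BigTable')
EXEC sp_spaceused 'dbo.BigTable', @updateusage = 'true'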
I am wondering whether there is a limit on maximum table size in SQL 6.5. I have a table of 2.6 GB with 12,000,000 rows in a SQL 6.5 database. Is this a problem?
Is there a practical size limit, in MB, for a table in SQL Server 6.5?
Is there a size beyond which performance degrades significantly?
I am speaking of raw megabytes. The table in question will consist of only 3 int columns but has the possibility of becoming VERY LARGE (1,000,000+ rows). I am still in the design phase and can change my strategy if this will prove to be a problem.
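A rough back-of-envelope check, assuming 6.5's 2 KB pages with roughly 2,000 usable bytes each and a small per-row overhead on top of the 12 bytes of data:

-- 3 int columns = 12 bytes + row overhead, call it ~20 bytes per row;
-- ~100 rows per 2 KB page; 1,000,000 rows / 100 = 10,000 pages ~ 20 MB.
SELECT (1000000 / (2000 / 20)) * 2 / 1024.0 AS approx_mb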
I have been trying to solve this problem for quite some time and was wondering if I could get some help.
These questions are all about MS SQL 6.5:
1. Is there a limit on the size of a table?
2. If a row would exceed the row-size limit in 6.5, does it make sense to split the columns across more than one table, duplicating a key field so the tables can be joined back together?
3. Roughly how many rows can a table hold before query performance starts to suffer?
Is there a maximum or optimum number of rows I should have in a table so that searches stay as fast as possible? I am a novice programmer and have just developed something for my workplace. The database has a table created by converting data from Excel spreadsheets: there were 24 spreadsheets covering 12 months, each with approximately 500 rows, so the table will hold approximately 24 * 500 = 12,000 records. Should I consider redesigning the database to make searches faster?
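For scale, 12,000 rows is tiny by SQL Server standards; what usually matters far more than row count is having an index that matches the search predicate. A minimal sketch with hypothetical table and column names:

-- Index the column(s) the WHERE clause filters on.
CREATE INDEX IX_MonthlyData_EmployeeId ON dbo.MonthlyData (EmployeeId, EntryDate)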