I was having an interesting discussion about log file size estimation with a fellow colleague who happens to be quite knowledgeable.
He told me that if we identify the most frequently hit tables in a database and then sum their sizes and multiply by 1.5, for OLAP we get a rough estimate of the disk space to allocate for the log file.
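If I understood him correctly, the idea would look something like the sketch below. This is purely illustrative: the table names are hypothetical "hot" tables, the 1.5 factor is his rule of thumb and not something I've verified, and I'm treating "size" as the reserved pages reported by sys.dm_db_partition_stats.

-- sum the reserved size of the most frequently hit tables and multiply by 1.5
-- to get a rough log-space estimate (pages are 8 KB, so the result is in KB)
SELECT CAST(SUM(reserved_page_count) * 8 * 1.5 AS bigint) AS estimated_log_kb
FROM sys.dm_db_partition_stats
WHERE object_id IN (OBJECT_ID('dbo.Orders'),       -- hypothetical "hot" tables
                    OBJECT_ID('dbo.OrderLines'),
                    OBJECT_ID('dbo.Invoices'))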
I have a table that contains 1.6 million rows and is about 2.6 GB in size. One of the columns is ntext and contained some XML. Realistically, I don't need the XML for anything older than about 2 months; however, I'd like to keep some of the data in the other fields. I figured that if I updated the XML to blank, I would see some considerable space savings, but that doesn't appear to be the case. This is the output of sp_spaceused for my table:
name                    rows      reserved    data        index_size  unused
CommitmentsForPosting   1660979   2740336 KB  1857104 KB  312 KB      882920 KB
After I updated the table to remove the XML, the output of sp_spaceused remained the same. My first thought was that it was probably statistics, so I updated statistics for the table, and nothing changed. I then updated statistics for everything in the database; still no change. I then ran DBCC CLEANTABLE for that table. I didn't really expect it to make a difference, and no surprise, it didn't.
The index on the table is a clustered index on just a GUID column. I rebuilt the index and again it made no difference (that's a bit of a lie: index_size changed by a few hundred KB). My next test was to run 'select * into XXX from YYY' to create a copy of this table with the same data and data types. I also created the same clustered index from the original table on the new one. If I then run sp_spaceused on this new copy of the table, I see what I expect to see: a table using approximately 256 MB of space.
name       rows      reserved   data       index_size  unused
SPS_TEST   1660979   266400 KB  266400 KB  304 KB      56 KB
To be honest, this isn't hugely critical, but I'm just curious as to why the original table is still reporting 2.6GB size, when I think it should probably be nearer 256MB.
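For reference, the rough sequence I ran was something like this. The column names (XmlData, CreatedDate) and the database name are simplified placeholders, but the steps are the ones described above:

-- blank out the old XML in the ntext column
UPDATE dbo.CommitmentsForPosting SET XmlData = N'' WHERE CreatedDate < DATEADD(month, -2, GETDATE())

-- attempts to reclaim the space
UPDATE STATISTICS dbo.CommitmentsForPosting
DBCC CLEANTABLE ('MyDatabase', 'dbo.CommitmentsForPosting')
ALTER INDEX ALL ON dbo.CommitmentsForPosting REBUILD

EXEC sp_spaceused 'dbo.CommitmentsForPosting'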
My client's website database is hosted by a third party. I need to alter one of the column definitions for the largest table in the database. Unfortunately, the transaction log fills up if I try to alter the table. I've done all the usual stuff like truncating the log, etc., but the simple fact is that the operation requires more log space than we have available. Therefore, we need to purchase additional disk space for the database. What I'm looking for is a way to roughly estimate how much log space will be required to alter this table so that we purchase enough but not too much additional space. The table has an identity primary key and 4 other single column indexes: one int, one datetime and two varchar(30) columns. Any suggestions? Thanks in advance.
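What I was thinking of trying is running the ALTER on a restored test copy and comparing log usage before and after, roughly as below. The table and column names are placeholders for the real change:

-- on a restored test copy, with the log sized generously
DBCC SQLPERF (LOGSPACE)   -- note Log Size (MB) and Log Space Used (%) before

ALTER TABLE dbo.BigTable ALTER COLUMN SomeColumn varchar(100) NULL   -- placeholder for the real change

DBCC SQLPERF (LOGSPACE)   -- compare afterwards to see roughly how much log the ALTER generated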
I need to full-text index a table so that I can easily search the text fields of that table. The table has about 21,000 rows, and I was wondering how long it might take to full-text index it?
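In case it matters, the kind of setup I have in mind is roughly the following (assuming SQL Server 2005 or later syntax; the catalog, table, column and key index names are made up), and I'm hoping I can just poll the population status while it builds:

CREATE FULLTEXT CATALOG SearchCatalog
CREATE FULLTEXT INDEX ON dbo.Articles (BodyText)
    KEY INDEX PK_Articles ON SearchCatalog

-- 0 = idle, i.e. population has finished
SELECT FULLTEXTCATALOGPROPERTY('SearchCatalog', 'PopulateStatus') AS populate_status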
I have deleted nearly 30 million rows from a table. However, when I use the sp_spaceused command to check the space occupied by the table, I don't see any difference in the data size. In fact, the data size has increased by a few MB after the deletion, but not by much.
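Is it maybe just stale space accounting? I was going to try forcing the usage counters to be recalculated, something like the following (the table name is a placeholder):

-- recalculate page/row counts before reporting
EXEC sp_spaceused @objname = N'dbo.MyTable', @updateusage = N'TRUE'

-- or for the whole current database
DBCC UPDATEUSAGE (0)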
We have an application with a replicated environment set up on SQL Server 2012. Users have a replica on their machines and they replicate to the master database. It has 3 subscriptions subscribed to the publications on the master db.
1) We set up a replica (which uses SQL Server 2012) on a machine with no SQL Server on it. After the initial synchronization (using the replmerge tool) the mdf file had grown to 33 GB and the ldf to 41 GB. I went to SQL Server Management Studio, right-clicked and checked the properties of the local database: the overall size is around 84 GB with very little free space available.
2) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2008 on it. After the initial synchronization (using the replmerge tool) the mdf file had grown to 49 GB and the ldf to 41 GB. In Management Studio the properties of the local database show an overall size of around 90 GB with 16 GB of free space available.
3) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2012 on it. We dropped the local database, recreated it and did the initial synchronization using the replmerge tool. The mdf file had grown to 49 GB and the ldf to 41 GB. In Management Studio the properties of the local database again show an overall size of around 90 GB with 16 GB of free space available.
Why is it allocating the space differently? This is affecting our initial replica setup times.
Hi, I use this script, which shows me the size of each table and sums the sizes of all the tables:
SELECT  X.[name],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[rows]), 1), '.00', '')       AS [rows],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[reserved]), 1), '.00', '')   AS [reserved],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[data]), 1), '.00', '')       AS [data],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[index_size]), 1), '.00', '') AS [index_size],
        REPLACE(CONVERT(varchar, CONVERT(money, X.[unused]), 1), '.00', '')     AS [unused]
FROM (
    SELECT  CAST(object_name(id) AS varchar(50)) AS [name],
            SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END) AS [rows],
            SUM(CONVERT(bigint, reserved)) * 8 AS reserved,
            SUM(CONVERT(bigint, dpages)) * 8 AS data,
            SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8 AS index_size,
            SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
    FROM    sysindexes WITH (NOLOCK)
    WHERE   sysindexes.indid IN (0, 1, 255)
            AND sysindexes.id > 100
            AND object_name(sysindexes.id) <> 'dtproperties'
    GROUP BY sysindexes.id WITH ROLLUP
) AS X
ORDER BY X.[name]
The problem is that the sum of all the tables does not match the size of a full database backup. For example, when I run this query against my database I see a sum of 111,899 KB, which is about 111 MB, but when I do a full backup of that database the size of the backup file is 1.5 GB. Why is that, and where does this size come from?
I have an order system with repeating orders. Each order has an order line with a frequency and an interval type (day, week, month and so on) indicating how often a task should be done.
When a task/job is done, an equivalent receipt with receipt lines is generated.
When a new job/task should be suggested, it has to calculate the shortest next interval length for each repeating order line. This interval length must then be added to the date of the last receipt line.
I want to do all of this in a SPROC. My result should be the one next order line that should be carried out, with a date for when this is.
RESULT
OrderID   TaskID         TaskDescr                          EstDate
5000      GETGROCERIES   Pick up groceries for miss Lama    19.12.2003
Is it possible to do all of this in a SPROC and not have to do any of it in .NET (which this is developed in), and can you help me with some ideas?
To see my starting point, see the diagram at: http://www.promotion.no/div/diagram.gif
My mind is stuck, and this is as far as I've got:
ALTER PROCEDURE dbo.GetNextOrders AS

CREATE TABLE #Interval (
    ivID int IDENTITY(1,1),
    ivTaskID char(20),
    ivTaskDescr nvarchar(200),
    ivHour int
)

-- load the repeat intervals into the temp table
INSERT INTO #Interval (ivTaskID, ivTaskDescr)
SELECT RepID, RepIntName FROM RepeatInterval

-- all invoice detail lines for orders that are currently active
SELECT InvoiceDetails.iID, InvoiceDetails.ReceiptID, InvoiceDetails.OrderID,
       InvoiceDetails.TaskID, InvoiceDetails.TaskDescr, InvoiceDetails.DateRegistered
FROM InvoiceDetails
     LEFT OUTER JOIN Orders ON InvoiceDetails.OrderID = Orders.OrderID
WHERE (Orders.DateStart < GETDATE())
      AND (Orders.DateEnd > GETDATE() OR Orders.DateEnd IS NULL)
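My rough idea for the date calculation was something along these lines, assuming each repeating order line carries a frequency and an interval type that can be mapped to DATEADD units. The OrderLines table and its column names here are guesses based on my diagram, not the final schema:

-- estimated next date = date of the last receipt line + frequency * interval type
SELECT OrderLines.OrderID,
       OrderLines.TaskID,
       CASE OrderLines.IntervalType
            WHEN 'day'   THEN DATEADD(day,   OrderLines.Frequency, MAX(InvoiceDetails.DateRegistered))
            WHEN 'week'  THEN DATEADD(week,  OrderLines.Frequency, MAX(InvoiceDetails.DateRegistered))
            WHEN 'month' THEN DATEADD(month, OrderLines.Frequency, MAX(InvoiceDetails.DateRegistered))
       END AS EstDate
FROM OrderLines
     INNER JOIN InvoiceDetails ON InvoiceDetails.OrderID = OrderLines.OrderID
GROUP BY OrderLines.OrderID, OrderLines.TaskID, OrderLines.IntervalType, OrderLines.Frequency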
I would like to ask for some help with estimating the necessary hardware for a database. I can tell you the following about the usage profile:
- 2 tables of approximately 100,000 records, joined in a 1:1 relation; one record is approximately 500 bytes long
- typical scenario:
  - a SELECT that returns one or zero records
  - if it returned one record, an UPDATE is performed on that record
  - the event is logged in a third table (one INSERT)
- this typical scenario needs to run 100-300 times every second
Can you give me any advice on hardware sizing for this? We can rely on a well-designed database structure (proper indexes, etc.).
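Just as a back-of-the-envelope check of the raw volume involved (this is only the base data, before indexes, row overhead and the log):

-- 100,000 records * 500 bytes per record, expressed in MB
SELECT 100000 * 500 / 1024.0 / 1024.0 AS approx_table_mb   -- roughly 48 MB per table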
Hi, I've read that a SQL Server Express database is limited to a few GB. If I have a database with about 120 tables which store only simple data types (int, money, datetime, varchar, etc.), no blobs or large files, how can I estimate the amount of data or number of rows it can hold before it runs out of space?
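What I was thinking of doing is measuring the average row size of what is already in the database and extrapolating from that, roughly as below. The 4 GB figure is only an example cap (check what your Express edition actually allows), and this assumes a version that has sys.dm_db_partition_stats (2005 or later):

DECLARE @limit_bytes bigint
SET @limit_bytes = CAST(4 AS bigint) * 1024 * 1024 * 1024   -- example cap only

-- average bytes per row across user tables, and roughly how many rows would fit in the cap
SELECT SUM(reserved_page_count) * 8192.0 / NULLIF(SUM(row_count), 0) AS avg_bytes_per_row,
       @limit_bytes / (SUM(reserved_page_count) * 8192.0 / NULLIF(SUM(row_count), 0)) AS approx_row_capacity
FROM sys.dm_db_partition_stats
WHERE index_id IN (0, 1)   -- heap or clustered index only, to avoid double counting rows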
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:

ALTER DATABASE <DBNAME> MODIFY FILE ( NAME = <DBLOGFILENAME>, SIZE = 2 )

and I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
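From what I've read, the file may have to be shrunk first, since MODIFY FILE can only increase a file's size; I was going to try something like this (the database and logical log file names are placeholders):

USE MyDatabase
-- shrink the log file down to 2 MB (target size is given in MB)
DBCC SHRINKFILE (MyDatabase_log, 2)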
I am working on SQL Server 2012 and I have multiple databases there. Out of those, I want to move one of my databases to another SQL Server 2012 instance. For that I was trying to get the approximate size of my database on the current server. As I don't have admin rights, I can't get that. Can I get the approximate size by right-clicking the database and using the Size property under the Database category?
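Alternatively, would something like this give me a usable number even without admin rights? (The database name is a placeholder.)

USE MyDatabase
-- database_size includes data + log; "unallocated space" shows how much of that is empty
EXEC sp_spaceused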
I have a 2.5 TB database that we need to run CHECKDB on. Is there a formula to figure out how long it would take with no options? Doing some research, it appears that we are stuck with doing PHYSICAL_ONLY due to time constraints.
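For reference, these are the two variants we'd be comparing (database name is a placeholder); I assume the realistic way to get a number is to time them on a restored copy, but a formula would be nicer:

-- full check (slow) vs. the reduced physical-only check we are considering
DBCC CHECKDB (N'MyBigDb') WITH NO_INFOMSGS, ALL_ERRORMSGS
DBCC CHECKDB (N'MyBigDb') WITH PHYSICAL_ONLY, NO_INFOMSGS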
Finding the database size from the backup file: I have a SQL 2012 backup file; is there any way to find the estimated database size from the backup? I tried restoring and got an error saying "no space, need additional xxx bytes". Does this error give the exact space needed to restore?
One more question: one of the backup files is 7.2 GB in size, but when I try to restore it, it throws an error saying it needs 292 GB of extra space while only 100 GB is available. How come a 392 GB database becomes a 7.2 GB .bak file?
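Would something like this help before attempting the restore? My understanding is that it lists the data and log files inside the backup along with their sizes, without actually restoring anything (the path is a placeholder):

-- Size is reported in bytes for each data/log file the restore would create
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb.bak'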
I have a 50 GB database with 3 files in the primary filegroup, each of which is around 16 GB. I truncated 2 tables, releasing 33 GB, so the database should have around 17 GB of data now, but when I check the properties it says that none of the files has any empty space.
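To double-check what the properties dialog shows, I was going to look at the per-file numbers directly, something like:

-- allocated size vs. actually used space for each file in the current database (MB)
-- run while connected to the database in question
SELECT name,
       size * 8 / 1024.0                                       AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024.0            AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024.0   AS free_mb
FROM sys.database_files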
As in the title: is there any tool for this? I'm asking because I have some big databases, and processing may take a lot of time (at least a few hours), and I'd be glad if it were possible to know the estimated time before the query runs. I'm using MS SQL 2005 Developer Edition.
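The closest thing I've found so far is the estimated execution plan, which shows estimated row counts and costs without actually running the query, though the costs are unitless rather than a wall-clock time. Something like this (the query itself is just a placeholder):

SET SHOWPLAN_ALL ON
GO
-- the expensive query goes here; it is compiled but not executed while SHOWPLAN_ALL is ON
SELECT COUNT(*) FROM dbo.BigTable
GO
SET SHOWPLAN_ALL OFF
GO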