I'm currently maintaining 4 servers - 1 for public/customers and 3
for backups, development, etc...
I regularly back up the entire SQL database for our public server and
restore it on each of the other servers. Lately, however, the database
backups have grown (in size) incredibly fast - they've gone from about
200MB to 2+ GB in 2 months. (I wasn't entirely surprised by this at
first since our client traffic has drastically increased as well.) The
weird thing, though, is that (on two of the backup servers) when I
restore the backup then use those servers to create a new complete
backup, the new backup is only about 200-300 MB in size.
My assumption is that there's some kind of setting buried deep inside
the SQL configuration that allows it to compress or otherwise alter
backups. Does anyone have any ideas or thoughts as to what may be
causing this?
We're using SQL Server 7 on Windows 2000 servers.
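For context, the backup job runs something like this (a sketch only, with invented names and paths). One thing I want to double-check: with NOINIT, which is the default, each BACKUP appends a new backup set to the same file, so the file itself keeps growing even though any single backup set inside it stays small:

-- Sketch: invented database name and path.
-- NOINIT (the default) appends; WITH INIT overwrites the file instead.
BACKUP DATABASE PublicDB
TO DISK = 'D:\Backups\PublicDB.bak'
WITH NOINIT, STATS = 10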
I have a publisher database set up for merge replication, using a parameterized filter with join filters.
I also have a stored procedure that does deletes and inserts on the table the parameterized filter is applied to, to aid in changing a subscriber's eligibility to receive particular data. I have observed that running the stored procedure takes an extraordinarily long time, and as a result the log file grows to 1.5 - 2.5 times the database size.
At first I reasoned that this might be because I had it set up to use precomputed partitions, and changing eligibility requires recalculating the partitions. As a test, I turned off precomputed partitions. Didn't work. I turned on "optimize synchronization", AKA "keep_partition_changes", which normally is not available when you have precomputed partitions on, and that didn't work either.
At this point, I think I can rule out precomputed partitions being a problem here but I'm stumped now what else I should do to reduce the amount of log writes being required. We do need the parameterized filters & join tables, so that can't go.
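One thing I am considering trying is batching the deletes so no single transaction is huge, letting frequent log backups free space for reuse between batches. A rough sketch of what I mean (table name, filter and batch size all invented):

-- Hypothetical sketch: names and batch size invented.
SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.SubscriberEligibility WHERE EligibilityFlag = 0
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0   -- reset so later statements are unaffected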
We have a database that's growing pretty fast because of firewall logs. We need the data available via an ASP.NET application. I don't have much SQL experience beyond installing it and doing some back-end development, so I'm wondering: is there a general rule of thumb for database size at which you should start breaking it out into smaller segments? If so, what are some good practices?
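The interim approach I have been sketching is just moving older rows to an archive table on a schedule, something like this (table names and the 3-month cutoff are invented):

-- Hypothetical sketch: copy old rows to an archive table, then delete them.
INSERT INTO dbo.FirewallLogArchive
SELECT * FROM dbo.FirewallLog WHERE LogDate < DATEADD(month, -3, GETDATE())

DELETE FROM dbo.FirewallLog WHERE LogDate < DATEADD(month, -3, GETDATE())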
I have a database that seems to have grown out of control. I have tried deleting tables, but that has not really reduced the size. What could have caused the database to grow this big, and what can I do to reduce its size? I have backed up, truncated the logs, and run the shrink database command, all to no avail. Please help.
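For reference, these are essentially the commands I ran (database name changed):

BACKUP LOG MyDB WITH TRUNCATE_ONLY    -- truncate the inactive log
DBCC SHRINKDATABASE (MyDB, 10)        -- shrink, leaving 10% free space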
Does anyone know at what point SQL Server 7.0 decides to grow the database when the autogrow option is set? Our site just went down for 45 minutes because the growing process was taking too long as compared to the data coming in, so the device filled up.
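In the meantime I am wondering whether pre-growing the file with a fixed increment would sidestep the problem; a sketch of what I mean (logical file name and sizes invented; as far as I can tell, each MODIFY FILE can change only one property at a time on this version):

ALTER DATABASE SiteDB MODIFY FILE (NAME = 'SiteDB_Data', SIZE = 4000MB)
ALTER DATABASE SiteDB MODIFY FILE (NAME = 'SiteDB_Data', FILEGROWTH = 200MB)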
Ray? Craig? You guys seem to know all, so jobs.com appreciates your input...
I have seen a couple of cases where an error occurs on a server running SQL Server 2005, and very quickly the log folder at MSSQL.1\MSSQL\LOG starts filling up with files, and does not stop until the entire hard drive is full (at which time the server stops responding). Is there any way to limit the number of .dmp files that are written?
I have a question about databases automatically growing in SQL Server 7. It seems to me that automatic growth will ONLY happen when the database is getting really full, maybe above 90% full. Even if you manually increase the DB size, the increase only lasts a very short time, and the DB goes back to the smaller size again. Any suggestions would be really appreciated. Thanks
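In case it matters, I am going to check whether the 'autoshrink' option is on, since that would at least explain the size dropping back down (sketch, database name invented):

EXEC sp_dboption 'MyDB', 'autoshrink'          -- report the current setting
EXEC sp_dboption 'MyDB', 'autoshrink', 'false' -- turn it off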
I have a database that is growing by 40 to 50 megs a day. I understand that the '_W' objects are statistical information for query performance and not indexes, but does anybody know how much disk space is actually used by these objects? I do have the 'auto create statistics' and 'auto update statistics' options set on.
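In case it helps, this is how I am listing the auto-created statistics (sketch; they show up in sysindexes with a '_WA' name prefix):

SELECT o.name AS table_name, i.name AS stats_name
FROM sysindexes i
JOIN sysobjects o ON i.id = o.id
WHERE i.name LIKE '[_]WA[_]%'
ORDER BY o.name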
For the first time in my long life as a SQL DBA I am seeing behaviour like this. My tempdb database has been growing every damn second since this morning. It has now reached 30 GB, while the log file is nearly empty (217 MB).
We use SQL 2000 Enterprise on Windows 2000 Advanced Server, running Siebel Call Center (version 7.5) with about 300 users.
From time to time, some users obtain and hold a huge number of exclusive locks on the tempdb extents.
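This is roughly how we are spotting them (sketch; tempdb is database id 2, and per the sp_lock documentation resource type 8 should be an extent, though treat that mapping as my assumption):

SELECT req_spid, req_mode, COUNT(*) AS lock_count
FROM master.dbo.syslockinfo
WHERE rsc_dbid = 2        -- tempdb
  AND rsc_type = 8        -- extent (EXT in sp_lock output)
GROUP BY req_spid, req_mode
ORDER BY lock_count DESC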
We're a very small company. I'm the “technical” director who has evolved enough skill in a wide variety of tasks (network setup, machine config, email systems, HTML, ASP, databases...), and then one day you notice that parts of the system are starting to get way more complex and troublesome than the layman knowledge you have can cope with... Well, I think I've got to that point and I need some outside help to get our system to the next level!
OK, some rough details to start with. We run a small but fast-growing vehicle tracking system that sends back a LOT of data via GPRS to our SQL 2000 Enterprise server hosted on a dedicated server in London. The physical machine is a P4 3.2Ghz Dual-core Dell rackmount with 2GB RAM and 2 x 76GB SCSI disks in a RAID 1 array. This is partitioned into a 15GB C: partition and a 51GB D: partition. The system paging file is set to be 1536MB and is on the C: partition. The server is used for everything we do... it runs Smartermail email server (only about 5 or 6 domains and a few users, hardly used at all), SQL server as mentioned, web server & the proxy software that receives incoming data from our tracking devices.
There are 9 or 10 active databases on the SQL server. 8 of them take up less than a gigabyte between them and are sparingly used. The main “active” database on the SQL server is the tracking system – and this is big... As our tracking devices send in data every 10 – 30 seconds, the database is hit with hundreds of thousands of events per day. On a weekday, some half a million rows of data are written to the main “events” table on the database. Over 7 days from 26th November to 2nd December, almost exactly 3 million rows of data were written to the events table. We undertake to hold 3 months or so of data “live” for our customers and I periodically archive data off. I’ve been too busy to archive recently and the database is holding data on the events table going back to July 1st. The physical .mdf file is just under 30GB on partition d: at present. The plan is to drop the active data stored to only 1 – 2 months, but this still leaves a 12GB .mdf file.
The worrying thing with this is that this is only 700 or so devices writing to us at present... we aim to have thousands out there soon! We are looking into how we can hugely improve system performance and look to the future. Our hosting company is recommending VMWare virtual servers and SAN storage, but I’m not entirely sure that is the best way forward.
Our non-tech MD thinks the way forward is to have one database per customer and can't understand when I tell him I think that's bad as it will create all the system tables and bits & pieces for EVERY customer if we do that, right? Also it would be a nightmare to add a new column to a table as I'd have to update every single version of the database too... I want to avoid this unless I'm missing something and this is actually the best way to go forward?
I've had someone mention horizontal partitioning to me, but I'm not sure what implications it has for coding and table naming. Or is it all one big database spread among separate servers?
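From the reading I have done so far, on SQL 2000 horizontal partitioning seems to mean one table per period plus a UNION ALL view, so application code keeps querying a single name. A hypothetical sketch (all names invented):

-- One table per month; the CHECK constraint lets the optimizer skip
-- partitions that cannot match a date predicate.
CREATE TABLE dbo.Events_2007_11 (
    EventId   int NOT NULL,
    EventTime datetime NOT NULL
        CHECK (EventTime >= '20071101' AND EventTime < '20071201'),
    DeviceId  int NOT NULL,
    PRIMARY KEY (EventId, EventTime)
)
GO
-- ...and so on for each month, then a view over them all:
CREATE VIEW dbo.Events AS
SELECT * FROM dbo.Events_2007_11
UNION ALL
SELECT * FROM dbo.Events_2007_12
GO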
Currently our server is drowning in disk access and it's only going to get worse... any suggestions or links to reading I can do online would be great, thanks!
I upgraded from SQL 6.5 to SQL 7 last month, and so far, everything's been going fine.
However, I'm not using my old SQL 6.5 backup scripts, which, when the backup was done, would dump the transaction log with TRUNCATE_ONLY, shrinking the log size.
My SQL 7 server is set up with a Maintenance Plan which does everything, including backup, but the log file seems to be growing and growing. I'm up to 4.5 gigs now, for a database with a data file of 3.5 gigs.
How do I "dump transaction with TRUNCATE_ONLY" on a SQL 7 database?
I have merge replication set up for 6 SQLCE subscribers. I have noticed that the MSmerge_tombstone table is growing at a fast rate regardless of any changes to the data in the database. It seems to be consistently adding 50 rows of data to the table every 2 minutes. As the table grows it causes the SQLCE subscribers to fail with the following message:
ERROR: -2147467259 SQL Server Reconciler failed: Run
ERROR: -2147200925 : Failed to enumerate changes in the filtered articles.
ERROR: 0 : The merge process timed out while executing a query. Reconfigure the QueryTimeout parameter and retry the operation.
I'm sure that this is due to the size of the MSmerge_tombstone.
Should the MSmerge_tombstone table grow at this rate? 36,000 rows every 24hrs!
I understand there is the sp_mergecleanupmetadata stored procedure, but if I use it, does that mean that because I have to reinitialise all the subscribers, they are going to have to pull down the whole subscription again?
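For reference, the call itself is just this; I am still checking Books Online for its parameters and the reinitialisation impact before running it:

-- Run in the publication database; may force subscribers to reinitialise.
EXEC sp_mergecleanupmetadata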
I have since changed a setting to make the subscription expiration 8 days instead of 'never expires', but we're still getting 50 rows added every 2 minutes.
SQL Server 2000 SP3. Hope someone can shed some light on this for me.
The space allocated to the Log in question is 180 GB. During this time period I was running TLog backups every 5 minutes, yet the log continued to chew through to 80 GB used, even after the process was complete and a final TLog backup had been taken. It continued to stay very large until the Full backup was complete -- or something else that I'm unaware of completed. Like every other DBA I typically take a TLog backup to shrink the log, but what appeared to be the case here was the Full completed and it released the used log space. All said, will Transaction Log backups not free up the log during Full backups?
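For anyone wanting to watch the same thing, log usage can be tracked with the standard command below, which reports log size and percent used for every database:

DBCC SQLPERF (LOGSPACE)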
I know that I have read not to back up a database over a network, so I am curious as to what others are doing out there. Back up to the local hard drive on the server and then move the backup files to a repository somewhere on the network? Do others have a file structure out on another server that stores all of the backups from all of the different servers that have SQL 7.0 on them? We are a small company and are just starting to migrate data to SQL Server 7.0.
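What we have been sketching so far is the back-up-locally-then-copy approach, something like this (paths invented):

BACKUP DATABASE MyDB TO DISK = 'E:\Backups\MyDB.bak' WITH INIT
-- the copy step could be a scheduled task, or driven from SQL itself:
EXEC master..xp_cmdshell 'copy E:\Backups\MyDB.bak \\BackupSrv\SQLBackups\'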
I have to perform a backup for disaster recovery purposes before an application upgrade. The upgrade will alter the database and stored procedures. My current schedule takes a backup of master and msdb weekly. The user database uses the Full Recovery model and is backed up daily at 21:00, and the logs daily at 13:00. Assuming the database is modified between the last backup and the upgrade starting at 9:00am, what should my backup strategy be for rollback purposes?
1) Back up master, msdb and the user database to a different location than the normal backups, and use these to restore if required.
2) Back up the master, msdb and user databases using the same jobs, thereby overwriting the original evening backups.
3) Do nothing, and just restore master and msdb from a backup and replay the logs to a given point in time for the user database should the upgrade fail.
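In case it clarifies option 1, this is the shape I have in mind (paths invented):

-- One-off pre-upgrade backups to a separate location, used only for rollback.
BACKUP DATABASE master TO DISK = 'E:\PreUpgrade\master.bak' WITH INIT
BACKUP DATABASE msdb   TO DISK = 'E:\PreUpgrade\msdb.bak'   WITH INIT
BACKUP DATABASE UserDB TO DISK = 'E:\PreUpgrade\UserDB.bak' WITH INIT
BACKUP LOG UserDB TO DISK = 'E:\PreUpgrade\UserDB_log.bak' WITH INIT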
Can anyone tell me what impact dynamic database backups in SQL 6.0/6.5 will have on users of the database?
Will their user processes be blocked? Will their queries run slower than normal (and how much slower)? Will there be a lot of locking activity as SQL tries to back up? Will the server/database run slower?
I am looking for the best method to back up SQL Server databases. Currently we are running a DUMP DATABASE statement to disk and backing up the files to tape through Arcserve.
One problem that I am having is the statement to dump the database. I would like to retain the dump for at least three days and be able to restore the database from any one of those three days. My current statement is:
DUMP DATABASE CHOISDAT TO DISK = 'D:\BACKUP\CHOIS.BAK' WITH NOUNLOAD, STATS = 10, INIT, RETAINDAYS = 3, NOSKIP
but every other day I receive this message from SQL Executive: "Can't open dump device 'D:\BACKUP\CHOIS.BAK', device error or device off line. Please consult the SQL Server error log for more details. (Message 3201)"
What am I doing wrong? Any suggestions?
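One workaround I have been wondering about, instead of RETAINDAYS, is writing each day's dump to its own dated file so any of the last three days can be restored; a sketch (assuming date style 112 is available on 6.5, and cleanup of old files would need handling separately):

DECLARE @fname varchar(80), @cmd varchar(255)
SELECT @fname = 'D:\BACKUP\CHOIS_' + CONVERT(varchar(8), GETDATE(), 112) + '.BAK'
SELECT @cmd = 'DUMP DATABASE CHOISDAT TO DISK = ''' + @fname + ''' WITH INIT, STATS = 10'
EXEC (@cmd)   -- produces e.g. CHOIS_19990407.BAK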
P.S.
Is there any way to tell the Maintenance Wizard to delete old backups? I tried using the wizard, but the backup files still remain on the disk and I have to delete them every week.
I have a database which is 72GB, which is backed up every night as part of the maintenance plan. I have plenty of storage space, and the server that runs the database is fairly powerful (quad-processor 3.2ghz, 64bit, 48GB RAM) and is part of an active-passive cluster. The database backup is also copied to a SAN location.
My issue is with the size of the backup file. As part of the Disaster Recovery plan, I need to copy this database backup file across the network to a remote site, so that in the event of a disaster at the main site, business can continue at the remote site after restoring the database backup file. However, my database backup file is so big that I cannot copy it across the network in time for the next morning. I have tried using WinRar and have managed to achieve a file about 20% of its original size, but it takes 2 hours to produce this file.
Is there any recommended reading for this type of issue? Log shipping / mirroring has been investigated and will be part of the DR model, but the 'powers that be' insist on having a full copy sent to the remote site.
Any suggestions? Thanks in advance guys n gals :-)
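One thing I plan to test, assuming the instance turns out to be SQL Server 2008 or later (native backup compression is not in 2005), is compressing at backup time rather than with WinRar afterwards:

-- Assumption: a SQL Server 2008+ edition that supports backup compression.
BACKUP DATABASE BigDB
TO DISK = 'G:\Backups\BigDB_compressed.bak'
WITH INIT, COMPRESSION, STATS = 5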
Our DBs are set up to do a full backup once a day (late at night) and then transaction log backups during the day at shorter intervals.
I want to set up a dev database on the same server. I want this database to be an automatically restored copy of the live database. So every night, after the full backup of the live DB, I want to restore the live DB to this dev DB.
Can this be automated? Can the restore automatically stop the dev database in case some open connections exist?
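The job step I have in mind looks roughly like this (file paths and logical names invented; assumes SQL 2000 or later). The SINGLE_USER switch is there to kill any open connections before the restore:

ALTER DATABASE DevDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE  -- drop open connections
RESTORE DATABASE DevDB
FROM DISK = 'E:\Backups\LiveDB_full.bak'
WITH REPLACE,
     MOVE 'LiveDB_Data' TO 'E:\Data\DevDB.mdf',
     MOVE 'LiveDB_Log'  TO 'E:\Data\DevDB.ldf'
ALTER DATABASE DevDB SET MULTI_USER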
In a non-clustered environment, I am under the impression that backups must be to a local disk or local tape device.
My plan is to have a separate disk in a clustered environment on a shared array for holding my backups, until they can be transferred somewhere else.
My question is, will SQL Server 2K support backing up to the disk in the shared array, since it is (I believe) not considered a local disk? What key points do I need to know?
I have inherited a new SQL Server 2008 database server and cannot figure out how my user databases are being backed up. This database server is running under a VM.
All the user databases are being backed up nightly per the SQL Server log. The backups are written to a virtual disk and are kicked off by the NT AUTHORITY\SYSTEM user. I cannot see the virtual disk. A restore task does not provide any information about the last backup. I have created a new database, and it is automatically included in the next set of backups.
I have looked at the Windows event viewer without any luck. There are no SQL Server Maintenance Plans or Agent jobs that call a backup. I have also checked the Windows Task Scheduler and cannot find any task that does a backup. Could the backups be called from another server?
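In case it helps anyone answer, this is the query I have been using against msdb to see what is taking the backups. My understanding (treat it as an assumption) is that a device_type of 7 with a GUID-looking physical_device_name means a virtual-device backup, e.g. a VSS/VM snapshot, rather than a SQL job:

SELECT bs.database_name,
       bs.backup_start_date,
       bs.user_name,
       bmf.device_type,           -- 7 = virtual device
       bmf.physical_device_name
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.backup_start_date > DATEADD(day, -7, GETDATE())
ORDER BY bs.backup_start_date DESC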
Hello everyone! Looking for some insight here on database backups that fail.
We have many SQL servers that we maintain by storing Job/Maintenance Plan history on a central server, which then emails out daily reports to let us know what backed up last night and what didn't.
This was easy to do in SQL 2000, not so much in SQL 2005. I have put together a query that gathers the info I need for the successes:
SELECT DISTINCT '00000000-0000-0000-0000-000000000001' AS Plan_ID,
       mpld.line1 AS "Plan Name",
       bud.database_name AS "Database",
       mpld.server_name,
       'Backup Database' AS Activity,
       mpld.succeeded,
       bs.backup_finish_date,
       DATEDIFF(MS, bs.backup_start_date, bs.backup_finish_date) AS Duration,
       bs.backup_start_date,
       mpld.error_number,
       mpld.error_message
FROM msdb.dbo.sysmaintplan_logdetail mpld
INNER JOIN msdb.dbo.backupset bs
    ON (SELECT CONVERT(char(12), mpld.start_time, 109)) = (SELECT CONVERT(char(12), bs.backup_start_date, 109))
    -- ON bs.database_name = bud.database_name
INNER JOIN msdb.dbo.bu_dbs bud
    ON bs.database_name = bud.database_name
WHERE mpld.succeeded = 1
  AND mpld.line2 LIKE 'Backup%'
  AND bs.type = 'd'
  AND bs.backup_start_date > (SELECT CONVERT(char(12), GETDATE() - 1, 109))
ORDER BY bud.database_name DESC
But I am having trouble using a query to determine the databases that FAILED during backup. msdb.dbo.backupset and msdb.dbo.sysmaintplan_logdetail really have nothing, because oftentimes, even if a step in a Maintenance Plan fails, the plan finishes reporting success.
Does anyone know of a good way to gather info about failed database backups?
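The closest I have come up with is inverting the problem: instead of looking for failure rows, flag any database with no successful full backup recorded in the last day (sketch):

SELECT d.name
FROM master.sys.databases d
WHERE d.name <> 'tempdb'
  AND NOT EXISTS (SELECT 1
                  FROM msdb.dbo.backupset bs
                  WHERE bs.database_name = d.name
                    AND bs.type = 'D'   -- full backups
                    AND bs.backup_start_date > DATEADD(day, -1, GETDATE()))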
What would be the best procedure for the following situation?
A heavy-traffic database, growing by gigabytes every day, so full backups every night are needed. The vendor recommends not taking log backups but just copying the log files to another location. Will this help avoid degrading performance during business hours?
If I don't take log backups, I am not able to recover to a point in time if needed. Also, the log files can then grow faster, and I will have to shrink them more often.
I am trying to find out where the maintenance plan is that is backing up the SQL DBs on its own at 12 AM daily, whereas we didn't schedule any maintenance plans at all. We see I/O frozen and resume events every day in the event log:
Log Name:      Application
Source:        MSSQL$MSSQLSERVER2K8
Date:          5/4/2015 12:00:23 AM
Event ID:      3198
Task Category: Server
Level:         Information
Keywords:      Classic
I made a copy of a database "sac_prod" and named the new copy "vgs_prod". Now, when I do a backup of the new database, it still shows the name of the original. Is there any way to change this so it will be the same as the new database name? Here is the BACKUP script:
BACKUP DATABASE vgs_prod
TO DISK = '\\sac-srvr1\data$\Technical\Shared\Production\SQLBackup\LasVegas\vgs_prod_CopyOnly.BAK'
WITH COPY_ONLY
Here are the messages I received from this BACKUP:
Processed 1752 pages for database 'vgs_prod', file 'sac_prod' on file 1.
Processed 6 pages for database 'vgs_prod', file 'sac_prod_log' on file 1.
BACKUP DATABASE successfully processed 1758 pages in 0.412 seconds (34.955 MB/sec).
I would like to change the file 'sac_prod' to be 'vgs_prod' in the first two message lines just above. Thanks,
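If I am reading Books Online correctly, those are the logical file names carried over from the copy, and they can be renamed; this is the documented MODIFY FILE syntax applied to my names:

ALTER DATABASE vgs_prod MODIFY FILE (NAME = 'sac_prod', NEWNAME = 'vgs_prod')
ALTER DATABASE vgs_prod MODIFY FILE (NAME = 'sac_prod_log', NEWNAME = 'vgs_prod_log')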
Michael writes "We are running SQL and Veritas to backup the databases. Supposedly the SQL agent in Veritas, after a full backup, truncates the log files but for some reason this isn't happening... any ideas?"
I back up a database at the beginning of each month with a full and then do nightly diffs on it.
For the same database I run daily fulls and 10-minute log backups.
These two backup routines create/append to two different backup files.
The problem I'm having is that I can't restore the differential backup set. SQL seems to restore the full just fine but always throws an error when it's about to start restoring the last diff. Now forgive me, but I clicked OK on the message and I can't find any record of the error in the logs, but it's something like:
"SQL cannot restore the database as the database has not been restored to the previous correct state"
Are my 10-minute transaction log backups screwing up the diff chain somehow?
This is really doing my head in. Any help appreciated.
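For completeness, the restore sequence I am attempting is essentially this (file names invented); my understanding is the full must be restored WITH NORECOVERY, or the diff on top of it fails with exactly that kind of error:

RESTORE DATABASE MyDB FROM DISK = 'E:\Backups\MyDB_full.bak'
WITH NORECOVERY, REPLACE
RESTORE DATABASE MyDB FROM DISK = 'E:\Backups\MyDB_diff.bak'
WITH RECOVERY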
"A computer once beat me at chess - but it was no match for me at kick boxing" - Emo Phillips.
Is it possible to load both the SQL 7 database and transaction log backups to SQL 2000? I assume it will perform the upgrade during the load. Thanks, James