We are experiencing high CPU utilization across all 4 CPUs at the top of the hour, when our transaction log dump job runs. Has anyone observed this behavior before? Is there anything we can do to mitigate it? Thank you.
Can anyone tell me where I can find information about why SQL Server might be executing a stack and/or symptom dump? That is, what are some of the conditions that might cause this? Every time I try to DBCC DBREINDEX a certain table, stack and symptom dumps occur and SQL Server remains incommunicado (I can't connect, even locally) until I restart it. I'm going to try to fix the table by renaming it, creating a new table (and dependent objects), transferring the data, and toasting the original table. No problem, I've done that before with smashing success. But if this solves my problem, I'll still be no wiser about what caused it in the first place. Any suggestions, questions, comments, or even jeers would be appreciated. Thanks, Chris
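For the record, a rough sketch of that rename-and-rebuild approach (the table and column names here are made up, and any indexes, constraints, and triggers would have to be recreated by hand as well):
Code:
-- Park the suspect table under a new name
EXEC sp_rename 'badtable', 'badtable_old'

-- Recreate the table (schema shown is hypothetical)
CREATE TABLE badtable (
    id   int         NOT NULL,
    name varchar(10) NOT NULL
)

-- Copy the data across
INSERT INTO badtable (id, name)
SELECT id, name FROM badtable_old

-- Once everything checks out, toast the original
DROP TABLE badtable_old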
I am running an Execute SQL task that does a BEGIN TRAN; the next task in the sequence is a data flow task which imports an XML file into two tables. If I do a ROLLBACK TRAN, only one of the two tables is rolled back.
Is it possible to have both tables rolled back from one BEGIN TRAN command, or do I need to split the data task in two and treat each import as a separate issue?
I want to roll back my T-SQL if it encounters an error. I wrote this code:
begin tran mytrans;
insert into table1 values (1, 'test');
insert into table1 values (1, 'jsaureouwrolsjflseorwurw'); -- it will encounter an error here since the max length allowed is 10
commit tran mytrans;
I forced my insert to fail by supplying a value that exceeds the column size. However, it didn't do any rollback. Anything I missed?
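For what it's worth, a COMMIT at the end only commits whatever succeeded: by default a data-size error aborts just the offending statement, not the transaction. A minimal sketch of two common fixes, reusing the same hypothetical table1 (TRY/CATCH is also an option if you are on SQL Server 2005 or later):
Code:
-- Option 1: make any run-time error doom the whole transaction
set xact_abort on
begin tran mytrans;
insert into table1 values (1, 'test');
insert into table1 values (1, 'jsaureouwrolsjflseorwurw');  -- aborts the batch and rolls back everything
commit tran mytrans;

-- Option 2: test @@ERROR after each statement and roll back by hand
begin tran mytrans;
insert into table1 values (1, 'test');
if @@error <> 0 goto err
insert into table1 values (1, 'jsaureouwrolsjflseorwurw');
if @@error <> 0 goto err
commit tran mytrans;
return
err:
rollback tran mytrans;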
Is there a way to configure mirroring to go from High Availability to High Protection without having to reconfigure database mirroring? Using the interface in Management Studio, I can change the configuration option to High Performance, but not to High Protection, even though High Availability and High Protection are both synchronous.
If not, what are the recommended steps to reconfigure the mirror once it has already been set up? Is it just like initially setting up the mirror, or are there any shortcuts I could take? If I stop the mirroring and remove the witness, will the High Protection option become available?
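If the GUI won't expose it, the change can be sketched in T-SQL: High Protection is effectively synchronous mirroring without a witness, so dropping the witness (database name assumed) should leave the session in that mode without a full reconfiguration:
Code:
-- On the principal: keep synchronous safety, drop the witness.
-- Synchronous + no witness = High Protection (no automatic failover).
ALTER DATABASE mydb SET PARTNER SAFETY FULL
ALTER DATABASE mydb SET WITNESS OFF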
I realise this is a stupid question, but I cannot really find any confirmation of this in BOL.
If you are running High Safety with automatic failover, does the session automatically change to High Performance mode when failover occurs? Since for failover to occur something has to have happened to the primary, it will be impossible to commit transactions on the new primary and the mirror synchronously, as one of them is no longer available.
So am I correct in assuming that automatic failover also automatically changes the mode to High Performance for that session?
I need to set up a dump of my main database's transaction log to run every hour, but not overwrite until the 8th hour (keeping 7 log dumps). Can anyone tell me how to set up the scheduled tasks for the transaction log dump? Is there anything special I need to do for the main database dump?
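One way to sketch the rotation (database name and path are assumptions) is a single task scheduled hourly that picks one of seven dump files by the current hour and overwrites it with INIT, so at any moment the last 7 hourly dumps exist:
Code:
-- Hourly task: dump the log to one of 7 rotating files
DECLARE @n int, @cmd varchar(255)
SELECT @n = DATEPART(hh, GETDATE()) % 7
SELECT @cmd = 'DUMP TRANSACTION maindb TO DISK = ''c:\dumps\maindb_log'
    + CONVERT(varchar, @n) + '.dmp'' WITH INIT'
EXEC (@cmd)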
Hi: We are having problems doing database dumps, so we're copying the .DAT files directly to a backup machine. I mean the .DAT files that SQL Server uses as data devices, not dump files; we cannot execute the dump process.
What problems can we expect from doing this? And what could be the reasons we cannot dump our DBs?
I have noticed a question on the Admin exam which involves declaring and using a time variable for scheduling a backup or dump of a database. Is this possible, and has anyone else seen this on the exam or used it in reality?
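It is possible, at least in the sense of building the dump target at run time from a date/time variable; a minimal sketch (database name and path assumed) that stamps the file with the current date:
Code:
-- Build a dump file name from the current date and execute it dynamically
DECLARE @stamp varchar(8), @cmd varchar(255)
SELECT @stamp = CONVERT(varchar(8), GETDATE(), 112)   -- yyyymmdd
SELECT @cmd = 'DUMP DATABASE pubs TO DISK = ''c:\dumps\pubs_'
    + @stamp + '.dmp'' WITH INIT'
EXEC (@cmd)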
Our SQL Server is giving us a headache: after a certain period of time, the SQL service either shuts down by itself or hangs. I've opened the logs and found hex dumps. Can you help me out with these?
I've read that the SQL dump can only be done directly onto the local server. Is there a way to write the dump directly to another server via the network (not copying it afterwards, but writing it during the backup process)? Any help much appreciated.
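Dumping to a UNC path generally works, as long as the account the SQL Server service runs under has write access to the share; a minimal sketch (server, share, and database names assumed):
Code:
-- Dump straight to a network share; the SQL Server service account
-- needs write permission on \\backupsrv\dumps
DUMP DATABASE mydb TO DISK = '\\backupsrv\dumps\mydb.dmp' WITH INIT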
I am testing a procedure to automate the transaction log dump. I am following the steps located in Chapter 22 of the Microsoft SQL Server Administrator's Companion ("Automatic Transaction Log Dumps Using Performance Monitor"). The alert in Performance Monitor appears to fire when the log is 75% full, but it is not running the file that contains the DUMP TRANSACTION command. For the 'Run Program on Alert' box, this is what I have: isql -Ssvrname -Usa -P -i d:\apps\mssql\inn\dump.sql. The dump.sql file contains: 'dump transaction pubs with no_log'
I have also tried the following 4 steps: 1) created a SQL alert message, 2) created an NT Performance Monitor threshold alert to run sqlalrtr to issue a certain error when the pubs log is 75% full, 3) created a TSQL task, and 4) created a SQL Server alert to run the task created in step 3. This appears to do the same thing: the alert is fired off, but the task is never executed. Note: I am able to execute the task from within the Scheduled Tasks window.
I am using standard security with SQL Server 6.5 (SP5a) running on NT4. Thanks for your help in advance.
I am new to Microsoft SQL Server, as I come from an Oracle background. I am preparing for MCP certification Exam 70-229, Designing and Implementing Databases with Microsoft SQL Server 2000 Enterprise Edition. The exam covers SQL Server 2000; is there an exam for SQL Server 2005?
Is it the right exam to take when starting out as a fresher on the Microsoft platform? I am into development, not administration.
If anybody has any exam preparation questions, please forward them to krishna.kanigelpula@gmail.com.
How can I check to make sure that my dumps are not corrupted? I have been using a utility from Microsoft called DSCAN5, but have found that it has some limitations.
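If the server is 7.0 or later (an assumption; the post doesn't say), RESTORE VERIFYONLY is worth a look: it checks that a backup set is complete and readable without actually restoring it, though it does not validate every page of data:
Code:
-- Verify a dump without restoring it (file name assumed)
RESTORE VERIFYONLY FROM DISK = 'c:\dumps\mydb.dmp'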
Does anybody know how to automate the loading of incremental transaction dumps? The manual way is to use the "LOAD TRAN db WITH FILE = x" statement.
Since this has to be done in the right sequence, and I need to automate this task to keep a second server up to date, I would like to know if there is a stored procedure or any other tool which could do the task.
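I'm not aware of a built-in procedure for this, but the loop is easy to sketch: assuming all the incremental dumps sit in order on one device named log_dev (a made-up name), something like this could run on the second server. The FILE option may not accept a variable directly on older versions, hence the dynamic EXEC:
Code:
-- Load dumps 1..5 from the device in sequence (count assumed)
DECLARE @f int, @cmd varchar(255)
SELECT @f = 1
WHILE @f <= 5
BEGIN
    SELECT @cmd = 'LOAD TRANSACTION mydb FROM log_dev WITH FILE = '
        + CONVERT(varchar, @f)
    EXEC (@cmd)
    SELECT @f = @f + 1
END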
I have a package which accesses a DB2 database and pulls data from a single table. I can't tie it to a specific event, but the package has been causing a dump to occur on a rather regular basis. The really odd part is that sometimes, when I add a data viewer on the output link of the OLE DB Source, it works... then it starts to dump again a couple of executions later. There are no date/time values involved in the result set, just character strings. The default code page is set to 1252 and 'use default code page' is set to False... any ideas appreciated; this is really starting to drive me nuts!
Our users believe that we lost some valid data, but no one knows who did it. I thought I could find out from the transaction dumps I take every hour. Can I read a transaction dump (*.TRN) file in SQL Server 7.0, or can I get this information through other means?
I'm going back and forth on an issue and was looking for some outside observations. I have over a dozen SQL Servers on NT, both 6.5 and 7.0. On some servers I'm dumping the databases and backing up the dumps with Veritas BackupExec; on 4 (two SQL 6.5, two 7.0) I'm backing up with the Veritas SQL Backup Agent.
Obviously, if you don't have room to dump a database, you must use the backup agent. That is the case on one of my servers and becoming so on a second. But my personal preference is to do the dump/backup.
(as a side note, one server is not backing up correctly with the backup agent, but ultimately that box will require it due to DB growth, so it is something I have to resolve)
I like the dump system for a few reasons. I find it easier to load from a dump, particularly a single table. Likewise, I find it easier to copy a database by loading from a dump rather than going from a backup, but that's mainly because of BackupExec being a little bit strange on redirecting restores.
Here's my clincher. I find restores via the backup agent to be ridiculously slow. Let's say I have a 5 gig DB that has 1.5 gig in it. The dump size will probably be somewhere below 2 gig. The restore via the backup agent carefully writes the entire 5 gig, even though 3.5 gig of that is empty. This takes a lot of time. Add to that the "post-restore DBCC". I had such a restore take something on the order of 13 HOURS which, needless to say, conflicted with my nightly DBCCs and backups.
OK. End of rant. Any suggestions or thoughts on the subject?
I have a transaction log that has grown to several gigabytes in size. What can be done about this, and what are the pros and cons if I delete it? Also, how can I keep it from getting that big in the future? Thanks!
On my SQL 6.5 box, I have a corrupt tran log. I do not use my tran log, but now I am getting an 1105 error saying that the log is full. I run DUMP TRAN WITH NO_LOG but it does not work, and I cannot perform any other function without getting the 1105 error. I tried to reboot, and now it is hanging during reboot while checking the partition where the tran log resides. I went into VGA mode. Any ideas would be appreciated.
Does this seem right? We have our transaction logs set to "Truncate Log on Checkpoint" and they still grow over 1 GB. Is it possible that one transaction (to a checkpoint) generates this much logged information? Would backing up the transaction log every 5-10 minutes serve me better, or is this just a poorly written application?
Help. I have a database with high transaction rates. The log is 300 MB. No matter what I do, I cannot get it below 64% full. I have dumped and truncated the log, yet it will not budge. It being Friday, and this being a time card application, this is my heaviest transaction day. Please help.
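One thing worth ruling out: a single open transaction pins everything after it in the log, and no amount of dumping will free that space until it commits or is killed. DBCC OPENTRAN (database name assumed, and assuming it's available on your version) shows the oldest active transaction:
Code:
-- Shows the oldest active transaction, if any, that is
-- holding up log truncation
DBCC OPENTRAN ('timecards')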
Hi there, how do I find the space used for the tran log of a db? sp_spaceused gives the space used for the complete database, but I need the space used for the tran log alone. Thanks in advance. pete
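DBCC SQLPERF(LOGSPACE) is the usual tool here; it reports the log size and the percentage in use for every database on the server:
Code:
-- One row per database: log size in MB and percent of log space used
DBCC SQLPERF (LOGSPACE)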
What is the advantage of taking frequent tran log backups (say every 30 mins) as opposed to once a day? Say I back up the data and tran log once every night, and I lose a table at 10:00 am the next day. Can't I recover the database to a point in time by restoring the previous night's backup, then applying the previous night's transaction log, and then applying the transaction log (to the point in time) that I dump as soon as the mishap is reported to me?
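That sequence does work, as long as the log chain is intact and the log device itself survives the mishap; the real advantage of frequent dumps is bounding how much you lose if the log disk dies too. The restore side, sketched in 7.0+ syntax with assumed file names and timestamp:
Code:
-- Last night's full backup, left unrecovered
RESTORE DATABASE mydb FROM DISK = 'c:\dumps\mydb_full.bak' WITH NORECOVERY
-- Apply the log, stopping just before the table was lost
RESTORE LOG mydb FROM DISK = 'c:\dumps\mydb_log.trn'
    WITH RECOVERY, STOPAT = '2001-05-12 09:55:00'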
Reviewing the MSSQL process info screen, I am seeing the same process appear a number of times. It is always the same, being
'IF @@TRANCOUNT > 0 COMMIT TRAN'
Sometimes there can be up to a hundred of these processes (listed in the process info screen). They generally have a 'sleeping' status, but nonetheless I would like to see these processes disappear if they are not being used.
I have checked all of the stored procedures and triggers in the application, and none contain this SQL statement.
When I run Profiler, I get these entries, but Profiler says they belong either to SQL Enterprise Manager or to 'Microsoft Windows 2000 Operating System', not to the application I am running.
Does anyone know where these transactions come from? Can I prevent them from appearing? If not, what is the impact (other than SQL Server having to maintain a connection)?
I have a database of 22 GB in SQL 2000, and the database option is set to full recovery mode. The problem I'm having is that the tran log is growing too fast; this morning it was 24 GB, more than the database itself. Can anyone help me keep it at a manageable size?
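In full recovery mode the log is only reused after a log backup, so with no log backups it will grow without bound. A sketch of the usual fix (database and file names are assumptions): schedule frequent log backups, then shrink the file back down once:
Code:
-- Schedule this every 15-30 minutes so log space gets reused
BACKUP LOG mydb TO DISK = 'c:\dumps\mydb_log.trn' WITH INIT

-- One-off: shrink the physical log file back down
USE mydb
DBCC SHRINKFILE (mydb_log, 1000)  -- logical name and target MB assumed; check sp_helpfile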
All my jobs run fine, except for a transaction log backup job that fails with the following error: Microsoft (R) SQLMaint Utility (Unicode), Version Logged on to SQL Server 'Server1' as 'sa' (non-trusted) Starting maintenance plan 'MaintPlan-TLogs-AllData' on 5/12/2001 10:00:01 AM Backup can not be performed on database 'AllData'. This sub task is ignored. I have not changed the sa or agent password. I cannot figure out why this job started failing; it ran fine for a while. Any insight is appreciated. Thanks
The former programmer wrote this stored procedure. It hadn't been run for a while, so I was given the assignment of getting it working. When I ran the stored procedure, it took almost 9 hours. Then I found that I can't access a few tables, so my guess is there are some issues with table locking. The stored procedure uses this...
Code:
BEGIN TRAN
--blah blah
COMMIT TRAN
ERROR_HANDLER:
ROLLBACK TRAN
Obviously there seems to be a logic error in the middle of the script when running the stored procedure. So how do I cancel the transaction and unlock the tables? I'm unable to access those few tables.
Also, would rebooting the computer help to release the transaction or the table locks?
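Two observations, for what they're worth. First, as quoted, the skeleton has no conditional branch and no RETURN, so even a successful run falls straight through the COMMIT into ERROR_HANDLER and issues a ROLLBACK with no transaction open. Second, a reboot shouldn't be needed: killing the session that owns the orphaned transaction rolls it back and releases the locks. A sketch, with the database name and spid being assumptions:
Code:
-- Find the open transaction and the session holding the locks
DBCC OPENTRAN ('mydb')  -- database name assumed
EXEC sp_who2            -- identify the offending spid
KILL 53                 -- spid hypothetical; KILL rolls its transaction back

-- Corrected skeleton: branch to the handler only on error,
-- and RETURN after the COMMIT
BEGIN TRAN
--blah blah
IF @@ERROR <> 0 GOTO ERROR_HANDLER
COMMIT TRAN
RETURN
ERROR_HANDLER:
ROLLBACK TRAN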