Just wondering if there is a way of querying the transaction log to identify the particularly large queries that are filling it?
If you have taken over another person's solution and the transaction log seems to be filling very fast, what would be the best way to break it down and find the main causes?
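A sketch of one way to dig in, using the undocumented (and unsupported, so treat the results as indicative only) fn_dblog() function; run it in the affected database. It totals log bytes per transaction in the active portion of the log and picks up each transaction's name from its LOP_BEGIN_XACT record:

WITH bytes_per_txn AS (
    SELECT [Transaction ID],
           COUNT(*)                 AS LogRecords,
           SUM([Log Record Length]) AS LogBytes
    FROM   fn_dblog(NULL, NULL)      -- active portion of the log only
    GROUP BY [Transaction ID]
)
SELECT TOP (20)
       b.[Transaction ID],
       b.LogRecords,
       b.LogBytes,
       t.[Transaction Name]
FROM   bytes_per_txn AS b
LEFT JOIN fn_dblog(NULL, NULL) AS t
       ON  t.[Transaction ID] = b.[Transaction ID]
       AND t.Operation = 'LOP_BEGIN_XACT'
ORDER BY b.LogBytes DESC;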
When I try querying the XML document in SQL, I get nothing back unless I remove the schema information. I'm using this:
declare @x xml

select @x = P
from openrowset(bulk 'E:\VehicleOption0514.xml', single_blob) as Products(P)

declare @hdoc int
exec sp_xml_preparedocument @hdoc output, @x
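The usual cause of OPENXML returning nothing is a namespace (the "schema information") declared in the document. sp_xml_preparedocument takes a third argument that binds a prefix to that namespace, which the row pattern must then use. A minimal sketch; the namespace URI and the element names Products/Product/OptionCode are placeholders for whatever the real document contains:

DECLARE @x xml, @hdoc int;

SELECT @x = P
FROM   OPENROWSET(BULK 'E:\VehicleOption0514.xml', SINGLE_BLOB) AS Products(P);

EXEC sp_xml_preparedocument @hdoc OUTPUT, @x,
     N'<root xmlns:v="urn:vehicle-options"/>';      -- hypothetical namespace

SELECT *
FROM   OPENXML(@hdoc, N'/v:Products/v:Product', 2)  -- 2 = element-centric
       WITH (OptionCode varchar(20) N'v:OptionCode');

EXEC sp_xml_removedocument @hdoc;   -- always free the document handle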
I have a table (we will call it DateTable) with several (20) columns, each being a date type. Another table's (Project) PK is referenced in DateTable.
I am trying to write a query that will pull all dates for a specific project from DateTable if they meet certain criteria (i.e. if the date is <= 7 days from now).
I started with a normal select statement selecting each column, with a join to Project, and then a where clause using

(DateTable.ColumnName BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE())) OR (DateTable.ColumnName BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE())) OR ...

and so on for the rest of the columns, all with OR between them.
The problem with this is that because I am using OR, once one of the dates meets the criteria it selects all the dates that are associated with the project. I ONLY want the dates that meet the criteria and don't care about the rest.
Obviously that's because I have all the columns in the select statement... So I need something like:
Select ALL Columns from DateTable d Join Project p on p.ProjectID = d.ProjectID where only the dates BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE())
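One way to get only the qualifying dates is to unpivot the 20 date columns into rows with CROSS APPLY (VALUES ...) and filter the rows. A sketch, with Date1/Date2/Date20 standing in for the real column names:

DECLARE @ProjectID int = 1;   -- sample value

SELECT  p.ProjectID,
        d2.DateName,
        d2.DateValue
FROM    Project AS p
JOIN    DateTable AS d ON d.ProjectID = p.ProjectID
CROSS APPLY (VALUES
        ('Date1',  d.Date1),
        ('Date2',  d.Date2),
        ('Date20', d.Date20)    -- ...and so on for the remaining columns
    ) AS d2 (DateName, DateValue)
WHERE   p.ProjectID = @ProjectID
  AND   d2.DateValue BETWEEN GETDATE() AND DATEADD(day, 7, GETDATE());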
I'm fairly new to SQL and am just setting up a Windows 8 app using an Azure SQL server. The issue I have is looking up a part number supersession and getting the latest number. One part number can have multiple supersessions (i.e. RTC5756 > STC8572 > STC3765 > STC9150 > STC9191 > SFP500160). The data I am supplied monthly has both the superseded items and the supersession information in both columns and is not easy to decipher - for example:
The newest part number is kept in a separate table - called "source" - which in this instance holds SFP500160. I need access to the latest part number, but also to the part's previous numbers, because some people may still stock an item under an old part number and will search by it. Is there an easy and efficient way of doing both the lookup for the supersessions and a join on the two tables, to minimize the queries on the database?
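A recursive CTE can walk the chain in one query. A sketch under assumed table shapes: Source(PartNumber) holding the newest numbers, and Supersession(OldPart, NewPart) holding one row per supersession step:

WITH Chain AS (
    -- anchor: start from every current part number
    SELECT s.PartNumber AS CurrentPart,
           s.PartNumber AS AnyPart,
           0            AS Depth
    FROM   Source AS s
    UNION ALL
    -- recursive step: follow each supersession backwards
    SELECT c.CurrentPart,
           su.OldPart,
           c.Depth + 1
    FROM   Chain AS c
    JOIN   Supersession AS su ON su.NewPart = c.AnyPart
)
SELECT CurrentPart, AnyPart, Depth
FROM   Chain;

Searching customer input against AnyPart returns CurrentPart, so a lookup by any historical number resolves to the latest one in a single round trip.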
I am doing some general housekeeping on a couple of our SQL boxes in the Development environment. All the databases are set to Simple recovery mode; no need for anything else on these boxes. I have a database on all the boxes named "DatabaseMaintenance", which keeps things like all the sprocs for any type of database maintenance, etc.
I would like to schedule a single sproc located in the DatabaseMaintenance database to shrink the transaction logs on a set schedule. They sometimes grow quite large while testing and developing. The thing that I cannot seem to get around is that when using the ShrinkFile command, one must use the log's logical name. If this code is in a sproc located in the DatabaseMaintenance database, it will fail when attempting to call out to a different database, because the log does not exist in the database where the sproc is located.
How can I get around this small dilemma? There are only about 10 databases per box. To a point we really do not care what happens to them. They are on a Full backup schedule daily, just to keep the objects. As I stated previously, the logs will still grow huge at times while pumping data.
Is there a way to create a piece of code that will run against each database on the server, and be stored in a single database? Other than the system databases of course.
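The usual workaround is dynamic SQL that prefixes each DBCC SHRINKFILE with a USE, so the command runs in the target database's context even though the sproc lives in DatabaseMaintenance. A sketch:

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
       + N'USE ' + QUOTENAME(d.name) + N'; '
       + N'DBCC SHRINKFILE (' + QUOTENAME(mf.name) + N', 0, TRUNCATEONLY);'
       + CHAR(10)
FROM   sys.databases AS d
JOIN   sys.master_files AS mf ON mf.database_id = d.database_id
WHERE  mf.type_desc = 'LOG'
  AND  d.database_id > 4                  -- skip the system databases
  AND  d.name <> N'DatabaseMaintenance'
  AND  d.state_desc = 'ONLINE';

EXEC sys.sp_executesql @sql;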
I want to schedule transactional replication. How do I do it? Currently I have set up transactional replication which runs continuously and synchronizes changes immediately.
I need to configure the replication so that it gathers logs from the publication once a day.
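The Distribution Agent is just a SQL Server Agent job, so the usual approach is to stop running it continuously and give it a daily schedule instead. A sketch; the job name is hypothetical (look up the real one under SQL Server Agent > Jobs), and you also need to remove the -Continuous parameter from the agent's job step and disable the "Start automatically when SQL Server Agent starts" schedule:

USE msdb;
EXEC dbo.sp_add_jobschedule
     @job_name          = N'MyPublisher-MyPublication-MySubscriber-1',  -- hypothetical
     @name              = N'Daily distribution run',
     @freq_type         = 4,         -- daily
     @freq_interval     = 1,         -- every 1 day
     @active_start_time = 020000;    -- 02:00:00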
Looking at the recovery models on MSDN, I was under the belief that if you set your database to use the simple recovery model, then when you perform a full backup it would truncate the transaction log file, but this doesn't seem to be the case? We aren't looking to do a point-in-time recovery for this particular database.
We have poor performance spikes on the drive containing our log file, but only for reads, and it seems to happen when we run a re-index job. Is this a likely cause of the poor read performance, and what reads are done from a log file?
I have an update trigger. In this trigger I need to insert a few records into 3 tables. If an error occurs in any of these inserts, the previous inserts should still get committed. This trigger was originally written in Sybase, where it was possible to begin and commit transactions inside the trigger.
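In SQL Server a trigger always runs inside the firing statement's transaction, so you cannot COMMIT inside it the way Sybase allowed. What you can do is give each insert its own savepoint, so a failed insert is undone on its own while the earlier ones commit with the outer transaction. A sketch with hypothetical tables dbo.MyTable and dbo.Log1..Log3; note that a severe, transaction-dooming error (XACT_STATE() = -1) will still roll everything back:

CREATE TRIGGER trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    SAVE TRANSACTION Ins1;
    BEGIN TRY
        INSERT dbo.Log1 (Id) SELECT Id FROM inserted;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = 1 ROLLBACK TRANSACTION Ins1;  -- undo this insert only
    END CATCH;

    SAVE TRANSACTION Ins2;
    BEGIN TRY
        INSERT dbo.Log2 (Id) SELECT Id FROM inserted;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = 1 ROLLBACK TRANSACTION Ins2;
    END CATCH;

    SAVE TRANSACTION Ins3;
    BEGIN TRY
        INSERT dbo.Log3 (Id) SELECT Id FROM inserted;
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = 1 ROLLBACK TRANSACTION Ins3;
    END CATCH;
END;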
I have a publication on SQL Server 2012 that uses transactional replication to 7 subscribers (a mix of SQL Server 2008 R2 and SQL Server 2012). Last night I scheduled the Snapshot job to run to "re-publish" the database to the subscribers; I had a few new tables to push down. Unfortunately the snapshot job became the deadlock victim. Now updates to the publisher are not being sent to the subscribers.
Short of rerunning the snapshot job, is there a way to repair the replication so the updates to the publisher are pushed to the subscribers? The "re-publish" can only be run overnight when there is very little impact to users.
I'm taking the liberty to announce the availability of a suite of articles on my web site about error and transaction handling in SQL Server. In total there are three main parts and three appendixes.
The first part is a short jumpstart, while Part Two is a long in-depth discussion of what can happen in SQL Server in case of an error and what commands are available. Part Three covers implementation and has a lot of examples, as well as a facility for logging and raising errors.
The appendixes cover special areas: linked servers, the CLR and Service Broker.
For a few days now I have been having a discussion with a colleague about shrinking the transaction log as a daily maintenance job on an OLTP database. The problem is I can't figure out a way to convince her she is doing something really wrong. It's not the first discussion... Maintenance Plans.
She has implemented this "solution" with a lot of customers as a fix for VLF fragmentation and huge transaction log sizes. My thoughts about doing this are very clear, and I have used the following arguments, without success, to convince her:
- To solve too many VLFs you have to focus on the actual size of the transaction log and the autogrowth settings, in combination with regular transaction log backups. Check the biggest transaction and size the transaction log based on that. Do not use shrinking as a solution for too many VLFs (a quick VLF count check is sketched after this list).
- Shrinking the transaction log file on a daily basis is disk I/O intensive. And when the transaction log file is then too small for new transactions, it has to grow again, which causes more disk I/O and can cause performance problems.
- It looks unprofessional.
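The VLF argument is easy to back with numbers. On SQL Server 2016 SP2 and later, sys.dm_db_log_info returns one row per VLF; on older versions DBCC LOGINFO does the same:

SELECT COUNT(*) AS VlfCount
FROM   sys.dm_db_log_info(DB_ID());

-- On older versions:
-- DBCC LOGINFO;   -- row count = number of VLFs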
These steps are used every morning at 6:00 AM and a transaction log backup is made every 30 minutes.
Step 1:
DBCC SHRINKFILE (N'', 0, TRUNCATEONLY);
GO

Step 2:
ALTER DATABASE MODIFY FILE (NAME = N'', SIZE = 4098MB);
GO
My main purpose is making sure the customers have the best possible configuration, and I can't accept that this is being implemented. Are there any more arguments available for this issue?
Some lots may not have any transactions for some of the periods between the start and end dates, but I need to report every period between the start and end period for each lot. I have a period table that I thought I could use, but I haven't come up with a way to get the results I'm after.
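A sketch of the usual shape of this query, under assumed tables Lot(LotID, StartDate, EndDate), Period(PeriodID, PeriodStart, PeriodEnd) and Trans(LotID, PeriodID, Amount): join every lot to every period that overlaps its date range first, then LEFT JOIN the transactions so the empty periods survive:

SELECT  l.LotID,
        p.PeriodID,
        COALESCE(SUM(t.Amount), 0) AS PeriodTotal
FROM    Lot AS l
JOIN    Period AS p
        ON  p.PeriodEnd   >= l.StartDate   -- period overlaps the lot's range
        AND p.PeriodStart <= l.EndDate
LEFT JOIN Trans AS t
        ON  t.LotID    = l.LotID
        AND t.PeriodID = p.PeriodID
GROUP BY l.LotID, p.PeriodID
ORDER BY l.LotID, p.PeriodID;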
We use AlwaysOn availability groups with 2 SQL nodes configured (version 11.0.3373.0). My full and differential backups are working for all my databases; however, I am unable to perform any LOG backups.
I have double-checked my availability group settings, and the backup preference is set to 'Prefer Secondary'.
I've tried creating a maintenance job as well as using Ola Hallengren's maintenance script jobs to back them up, but nothing is written to the drive. All jobs return successful every time and take less than 3 seconds to run. There are no events being written to the SQL error log or event log.
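The symptoms (instant success, nothing written) match what maintenance plans and Ola Hallengren's scripts do when the current replica is not the preferred backup replica: they check and silently skip. Worth running on the node where the jobs execute:

SELECT sys.fn_hadr_backup_is_preferred_replica(N'YourDatabaseName') AS IsPreferred;
-- 1 = this replica should take the backup; 0 = the job will skip it here.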
I am basically trying to update a table which reflects account transactions. Accounts get paid in full, but occasionally balance payments can be reversed, and I want to update the table to show this - I need to show which period the account was previously paid in full. I've created a simplified version of the scenario, and below are a couple of examples of things I've tried that do not work. I understand why they do not work, but I'm struggling to figure out how to update the 'PeriodPrevPaidInFull' field.
create table Trans (
    AccNo int,
    Transaction_Period_Index int,
    PeriodOpeningBalance money,
    DebtBalance money,
    PeriodPaidInFull int NULL,
    PeriodPrevPaidInFull int NULL
)
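A sketch of one possible shape for the update; the column meanings are inferred from the names, so treat this as a starting point only. For each row it looks back for the latest earlier row on the same account that has PeriodPaidInFull set, and copies that value:

UPDATE t
SET    PeriodPrevPaidInFull = prev.PeriodPaidInFull
FROM   Trans AS t
OUTER APPLY (
        SELECT TOP (1) p.PeriodPaidInFull
        FROM   Trans AS p
        WHERE  p.AccNo = t.AccNo
          AND  p.Transaction_Period_Index < t.Transaction_Period_Index
          AND  p.PeriodPaidInFull IS NOT NULL
        ORDER BY p.Transaction_Period_Index DESC
       ) AS prev;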
begin try
    declare @param2 int
    begin transaction
    exec proc2 @param2
    commit transaction
end try
begin catch
    if @@trancount > 0 rollback transaction
end catch
I haven't had an opportunity to do this before. I have nested stored procedures, and both insert values into different tables. To maintain atomicity I want to be able to roll back everything if an error occurs in the inner or outer stored procedure.
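The standard pattern is TRY/CATCH with SET XACT_ABORT ON in both procedures: the inner proc's BEGIN TRANSACTION simply increments @@TRANCOUNT when called from the outer one, and a rollback in either CATCH block undoes everything. A sketch with hypothetical tables TableA/TableB:

CREATE PROCEDURE dbo.InnerProc
AS
BEGIN
    SET NOCOUNT, XACT_ABORT ON;
    BEGIN TRY
        BEGIN TRANSACTION;          -- joins the caller's transaction if one exists
        INSERT dbo.TableB (Col) VALUES (1);
        COMMIT TRANSACTION;         -- just decrements @@TRANCOUNT when nested
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;                      -- re-raise so the caller knows it failed
    END CATCH;
END;
GO
CREATE PROCEDURE dbo.OuterProc
AS
BEGIN
    SET NOCOUNT, XACT_ABORT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        INSERT dbo.TableA (Col) VALUES (1);
        EXEC dbo.InnerProc;         -- its inserts are part of this transaction
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;
    END CATCH;
END;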
I have a customer running a database in an availability group, and I am not familiar with the setup... They have a transaction log that quadrupled in size during a data import and update generated by an external application. They have limited server space and would like to shrink the log again now, as this import/update only happens once a year. The way this has always been dealt with in the past was by shrinking the DB and logs after the update.
Now however, when attempting to do a log or db shrink, an error message is generated which says that the log cannot be shrunk as the DB is in use as part of an Availability Group....
The more I search and read up on this subject, the more it looks like the DB has to be removed from the availability group before the log can be shrunk, and then the availability group has to be re-created or restored in some way. Is there a simple solution to this conundrum?
The database is in simple recovery mode, and published with a transactional replication push subscription; just one subscriber, but the database is huge. I don't want to overwrite the schema at the subscriber either.
I had to run an alter database command on a published database, it created so many logs that an extra drive had to be added along with an extra log file to accommodate all the logs.
The problem I have is that I'd like to clear the log records out of the file so I can drop the temporary log file and give the drive back, but I cannot.
I have tried dbcc shrinkfile with the emptyfile option but it never clears, I have also tried it with notruncate and truncateonly options (mainly out of desperation).
I do not need to worry about point in time restore as a full backup is taken before and after the operation. After which the database will be put back into Full recovery mode.
I have looked at log_reuse_wait_desc and the file says 'Replication', so I am now thinking the file cannot empty because replication is keeping one of the VLFs active. I tried dropping and recreating the subscription hoping it might free something up and I could get somewhere, but it made no difference.
Do I have to remove replication completely to get round this? Surely not.
I have also tried putting the database back into full recovery mode, doing a full DB backup and a transaction log backup, but it's made no difference, which is also what makes me think a portion of the log is still active because of replication, and perhaps the transactions have not gone through to the subscriber - which raises another question: why not?
I have not tried restarting SQL server, as I'd like to know a way out of this without having to do that, plus I do not think it would make any difference anyway.
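When log_reuse_wait_desc says 'Replication', the log reader has not acknowledged those transactions, and the commonly cited last resort is sp_repldone, which marks everything in the log as already replicated. Warning: this invalidates the existing subscriptions, so plan to reinitialize them afterwards. Run in the published database:

EXEC sp_repldone @xactid = NULL, @xact_seqno = NULL,
                 @numtrans = 0, @time = 0, @reset = 1;

-- Then take a log backup (or issue CHECKPOINT in simple recovery) and
-- retry DBCC SHRINKFILE with EMPTYFILE to drop the temporary file.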
Any way to have a process run that will not write its changes to the transaction log? I have a process that runs every three hours and has a huge impact on the transaction log (it becomes larger than the database itself). We do hourly backups of the transaction log and normally it is reasonably sized but when this process runs, it gets HUGE.
The process takes source data, massages it and writes it to summary tables. It is not something we need to track as we can recreate the summary tables if needed and it has no impact on the source tables.
Everything is driven through a stored procedure. Is there a way to run a stored procedure and tell it that nothing it does should be written to the transaction log?
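No statement can bypass the transaction log entirely, but some operations are minimally logged under the SIMPLE or BULK_LOGGED recovery model, which can shrink the log footprint dramatically (at the cost of point-in-time restore within the affected log backup interval if you switch to BULK_LOGGED). A sketch of the summary rebuild written to qualify, with hypothetical table and column names:

IF OBJECT_ID(N'dbo.SummaryTable') IS NOT NULL
    DROP TABLE dbo.SummaryTable;

SELECT  src.Region,                        -- hypothetical columns
        SUM(src.Amount) AS TotalAmount
INTO    dbo.SummaryTable                   -- SELECT INTO is minimally logged
FROM    dbo.SourceData AS src              -- under SIMPLE/BULK_LOGGED recovery
GROUP BY src.Region;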
I am creating a report query that returns all unreconciled P/O lines. I am near completion but I am unable to find a way to remove the reconciled records.
I have included a script to produce some sample table, data & query.
The recordset displays 6 rows. All reconciled Supplier Invoices are duplicated and have transaction codes 40 and 50 and a reconcile code of 9 (5024, 921689471).
All unreconciled invoices appear only once and have transaction code 40 and a reconcile code of 0 (4835 & 921978016). These are the only records that I want to show.
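A sketch against assumed names (one P/O lines table with invoice number, transaction code and reconcile code columns): keep a code-40 row only when no reconciled (code 9) row exists for the same invoice:

SELECT  po.*
FROM    POLines AS po
WHERE   po.TransactionCode = 40
  AND   po.ReconcileCode   = 0
  AND   NOT EXISTS (SELECT 1
                    FROM   POLines AS r
                    WHERE  r.InvoiceNo     = po.InvoiceNo
                      AND  r.ReconcileCode = 9);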
I am developing a process to monitor a table in a high-transaction database. The process will count the number of rows in the table to verify whether it has changed or is stuck. Because the database handles a lot of transactions, I don't want to execute a query against it too often. Is there another suitable way to accomplish this goal?
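One common approach is to read the row count from metadata instead of issuing COUNT(*): it touches no data pages and takes no table locks. The figure can lag slightly behind reality, but that is usually fine for "has it changed / is it stuck" monitoring. The table name below is a placeholder:

SELECT SUM(ps.row_count) AS ApproxRows
FROM   sys.dm_db_partition_stats AS ps
WHERE  ps.object_id = OBJECT_ID(N'dbo.MonitoredTable')
  AND  ps.index_id IN (0, 1);   -- heap or clustered index rows only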
I have a setup of transactional replication between one publisher and subscriber on the same server. Now I need to add a new subscriber to the existing publisher. The publisher database name is DB_A and Subscriber 1 is DB_B, so the new subscriber will be DB_C. Is this kind of setup possible on one server?
If yes, then at the time of reinitialization is it going to apply the snapshot to DB_B as well as DB_C? Also, let's say that due to a disk error DB_B gets corrupted: will data still be replicated between DB_A and DB_C? (Assuming the publisher, subscriber 1 and subscriber 2 are sitting on individual disks.)
Got this situation: trying to use single-user mode to recover my hanging DB after losing the ldf (the physical file is gone too). I've tried everything through SSMS and the scripts below that I know of, with no result. Is there anything else I can try to recover it? I don't need the log file.
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
Could not open new database 'MyLostDB'. CREATE DATABASE is aborted.
File activation failure. The physical file name "C:\xxx\MyLostDB.ldf" may be incorrect.
The log cannot be rebuilt because there were open transactions/users when the database was shutdown, no checkpoint occurred to the database, or the database was read-only. This error could occur if the transaction log file was manually deleted or lost due to a hardware or environment failure. (Microsoft SQL Server, Error: 1813)
EXEC sp_attach_single_file_db @dbname = 'Commissions',
     @physname = N'C:\SQLData\MyLostDB.mdf'
GO
CREATE DATABASE Commissions ON (FILENAME = N'C:\SQLData\MyLostDB.mdf')
FOR ATTACH_REBUILD_LOG
GO
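If ATTACH_REBUILD_LOG keeps failing with error 1813 (open transactions at shutdown), the commonly cited last resort is to get the mdf attached as a suspect database (for example by creating a dummy database with the same file layout, taking it offline and swapping in the old mdf), then letting emergency-mode repair rebuild the log. This is a hack and can lose the in-flight transactions, which matches "I don't need the log file" but should be understood before running it:

ALTER DATABASE Commissions SET EMERGENCY;
ALTER DATABASE Commissions SET SINGLE_USER;
DBCC CHECKDB (N'Commissions', REPAIR_ALLOW_DATA_LOSS) WITH NO_INFOMSGS;
ALTER DATABASE Commissions SET MULTI_USER;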
Setting up transactional replication in a test environment. I am willing to bet that most of you take a production backup (if so, how, and using what?), restore the database to your test environment, then run a snapshot to your subscriber and away you go.
But perhaps you take a backup of your publisher and subscriber; if so, how do you know there are no inconsistencies because there were transactions sitting on the distributor?
What do you do if you have additional indexes on the subscriber for reporting, that are not on the publisher?
Here at work we are having issues getting consistent databases set up with T-Rep: missing rows, duplicate keys at the subscriber, etc. How can we avoid these issues?
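One tool worth building into the process: after the snapshot has been applied, ask replication itself to validate the subscriber. A sketch, run at the publisher in the published database (the publication name is hypothetical); results show up in the Distribution Agent history:

EXEC sp_publication_validation
     @publication   = N'MyPublication',
     @rowcount_only = 2,      -- 2 = row count plus binary checksum
     @full_or_fast  = 2;      -- 2 = fast count, falling back to full

For pinpointing and scripting fixes for individual rows, the tablediff.exe utility that ships with SQL Server does a column-level compare between publisher and subscriber tables.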
BEGIN TRANSACTION OUTERTXN
    BEGIN TRANSACTION
        Copy records from live to archive
    END TRANSACTION with commit or rollback
    Execute sproc to write audit log with success or fail
    IF transaction was committed
        BEGIN TRANSACTION
            Delete the archived records from live
        END TRANSACTION with commit or rollback
        Execute sproc to write audit log with success or fail
    END IF
END TRANSACTION OUTERTXN with commit if both inner transactions were successful, or rollback if either failed
IF either inner transaction rolled back
    Execute sproc to write audit log saying the whole process is rolling back
END IF

My problem is that if the outer transaction rolls back then I lose the two audit records, because they are part of the transaction scope. I want these executes to commit even if the master transaction fails.
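The usual workaround: table variables are not affected by ROLLBACK, so audit rows collected in one survive the rollback and can be written to the real audit table afterwards. A sketch with hypothetical tables dbo.Live, dbo.Archive and dbo.AuditLog:

DECLARE @audit TABLE (
    LogTime datetime2 NOT NULL DEFAULT SYSDATETIME(),
    Message nvarchar(400) NOT NULL);

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT dbo.Archive (Id, Payload)              -- copy step
    SELECT Id, Payload FROM dbo.Live WHERE ArchiveFlag = 1;
    INSERT @audit (Message) VALUES (N'Copy to archive succeeded');

    DELETE dbo.Live WHERE ArchiveFlag = 1;        -- delete step
    INSERT @audit (Message) VALUES (N'Delete from live succeeded');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    INSERT @audit (Message)
    VALUES (N'Whole process rolled back: ' + ERROR_MESSAGE());
END CATCH;

-- Outside the transaction: the audit rows survived any rollback.
INSERT dbo.AuditLog (LogTime, Message)
SELECT LogTime, Message FROM @audit;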