I'm seeing some strange behavior from a stored procedure of mine. It essentially grabs a bunch of rows using a fairly simple JOIN... here's everything from the FROM clause on:
FROM Payment PY (NOLOCK)
JOIN (SELECT DISTINCT PY.AccountPaymentId,
             ROW_NUMBER() OVER (ORDER BY PY.AccountPaymentId ASC) AS RowNum
      FROM Payment PY (NOLOCK)) AS SQ
  ON (SQ.AccountPaymentId = PY.AccountPaymentId)
INNER JOIN Payee PE ON PE.PayeeId = PY.PayeeId
INNER JOIN Party PT ON PE.PartyId = PT.PartyId
INNER JOIN Distribution DS ON PY.DistributionId = DS.DistributionId
LEFT OUTER JOIN Account AC ON DS.AccountId = AC.AccountId
INNER JOIN clm CM ON PE.clm_no = cm.clm_no
LEFT OUTER JOIN PartyAddress PA ON PY.PartyAddressId = PA.PartyAddressId
                               AND PT.PartyId = PA.PartyId
WHERE RowNum BETWEEN (((@Page * @PageSize) - @PageSize) + 1)
                 AND ((@Page * @PageSize) - @PageSize) + @PageSize
  AND ((@PayeeName IS NULL) OR (PT.[Name] LIKE '%' + @PayeeName + '%'))
  AND ((@AccountId IS NULL) OR (AC.AccountId = @AccountId))
  AND ((@DistributionId IS NULL) OR (DS.DistributionId = @DistributionId))
  AND ((@PaymentDate IS NULL)
       OR (DATEADD(day, DATEDIFF(day, 0, PY.PaymentDate), 0)
         = DATEADD(day, DATEDIFF(day, 0, @PaymentDate), 0))) -- Ignores the time
  AND ((@PaymentNumber IS NULL) OR (PY.AccountPaymentId = @PaymentNumber))
  AND ((@IsReconciled IS NULL) OR (PY.ReconciledInd = @IsReconciled))
  AND ((@AmountIssued IS NULL) OR (PY.PaymentAmount = @AmountIssued))
  AND ((@AmountPaid IS NULL) OR (PY.AccountPaidAmount = @AmountPaid))
  AND ((@IssueStatus IS NULL) OR (PY.PaymentStatusEnumItemId = @IssueStatus))
  AND ((@AccountStatus IS NULL) OR (PY.AccountStatusEnumItemId = @AccountStatus))
  AND ((@IsReissued IS NULL) OR (PY.ReissuedInd = @IsReissued))
ORDER BY AccountPaymentID ASC
When I pass a 1 for the @IsReconciled parameter, I get the right number of rows back - 9779. But when I pass a 0 (zero), I get no rows back, although there are 222 rows that satisfy the condition.
Is there something I'm overlooking (I don't think there is)? I don't know why 1 works and 0 doesn't...
FYI: the @IsReconciled parameter is set to NULL at the outset of the procedure.
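For anyone else hitting this, one hedged guess (assuming the Payment table above): if ReconciledInd is a nullable bit column and unreconciled rows store NULL rather than 0, the predicate PY.ReconciledInd = @IsReconciled will never match them, because NULL = 0 evaluates to UNKNOWN. A quick diagnostic:

-- Hedged diagnostic: how is ReconciledInd actually stored?
SELECT PY.ReconciledInd, COUNT(*) AS RowCnt
FROM Payment PY (NOLOCK)
GROUP BY PY.ReconciledInd

-- If unreconciled rows turn out to be NULL rather than 0, a NULL-safe
-- version of the filter (an assumption, not the original code) would be:
-- AND ((@IsReconciled IS NULL) OR (ISNULL(PY.ReconciledInd, 0) = @IsReconciled))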
We have an interesting problem. We are attempting to migrate from SQL 2000 to SQL 2005. The schema we have is exactly the same, and the new 2005 box is more powerful than our 2000 box.
Here is our schema:
tbl_Items
    ItemID int PK
    ReferenceID int
    SessionID varchar(255)
    StatusID int

tbl_ItemsStatus
    StatusID int PK
    IsInternalStatus bit

There are indexes on (ReferenceID, SessionID, StatusID) and (SessionID, StatusID).
This is the query:
DECLARE @referenceid INTEGER
SET @referenceid = 1019

SELECT MAX(i2.itemid)
FROM tbl_Items i2 (NOLOCK)
JOIN tbl_ItemsStatus s (NOLOCK) ON i2.StatusID = s.StatusID
WHERE s.IsInternalStatus = 0
  AND i2.referenceid = @referenceid
  AND i2.sessionid IN
  (
      SELECT i3.sessionid
      FROM tbl_Items i3 (NOLOCK)
      WHERE i3.referenceid = @referenceid
        AND i3.status <> 7
        AND i3.status <> 8
        AND i3.status <> 10
        AND i3.itemid IN
        (
            SELECT MAX(i4.itemid)
            FROM tbl_Items i4 (NOLOCK)
            WHERE i4.referenceid = @referenceid
            GROUP BY i4.sessionid
        )
        AND i3.itemid NOT IN
        (
            SELECT MAX(i7.itemid)
            FROM tbl_Items i7 (NOLOCK)
            WHERE i7.referenceid = @referenceid
              AND i7.SessionID IN
              (
                  SELECT i5.SessionID
                  FROM tbl_Items i5 (NOLOCK)
                  WHERE i5.status <> 11
                    AND i5.referenceid = @referenceid
                    AND i5.itemid IN
                    (
                        SELECT MAX(i6.itemid)
                        FROM tbl_Items i6 (NOLOCK)
                        WHERE i6.referenceid = @referenceid
                          AND i6.status IN (7, 11, 8)
                        GROUP BY i6.sessionid
                    )
              )
            GROUP BY i7.SessionID
        )
  )
GROUP BY i2.sessionid
We know this query is pretty bad and can be optimized. However, if we run this query as-is on 2005 it takes about 2 hours to run... if we run the exact same query on 2000 it takes 9 seconds.
So this query takes 2 hours on 2005; however, if we omit either the s.IsInternalStatus = 0 predicate or the i2.referenceid = @referenceid predicate, it takes about 9 seconds.
Why would this be? It makes no sense that omitting one of those WHERE predicates would make the query 2 hours faster. We know it's a bad query... but this doesn't make sense.
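For what it's worth, a common first step after an upgrade like this (a hedged suggestion, not a guaranteed fix for this particular query) is to rebuild statistics so the 2005 optimizer isn't estimating from stale 2000-era data:

-- Refresh statistics after the migration so the new optimizer
-- has current distribution information (table names from the post)
UPDATE STATISTICS tbl_Items WITH FULLSCAN
UPDATE STATISTICS tbl_ItemsStatus WITH FULLSCAN

-- Or, database-wide:
EXEC sp_updatestats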
Guys, I have some data in an Excel sheet. Some of the columns have NULL values for a certain number of rows before the data starts. What makes it so weird is that when previewing this in the wizard, the whole column shows as NULL when the number of leading NULLs is quite large. When there are only a few NULLs, the column works fine!! Can anyone explain this? We tried some manual work, cutting some of the rows from below and putting them at the start, and it worked! It's such strange behavior though. Shiko
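This matches a known quirk of the Jet/Excel driver the wizard uses: it guesses each column's type by sampling only the first few rows (8 by default, per the TypeGuessRows registry setting), so a long run of leading empty cells can make every later value come through as NULL. A hedged workaround is to read the sheet in import mode with IMEX=1; the file path and sheet name below are hypothetical:

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'Excel 8.0;Database=C:\Data\Book1.xls;HDR=YES;IMEX=1',
                'SELECT * FROM [Sheet1$]')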
I was able to successfully create a database maintenance plan for SQL Server 2000 transaction log shipping for a few databases a few weeks ago. Yesterday, I created a few more, but to my surprise, I can no longer do it. I can create a maintenance plan, but the job it creates does not start, even if I force it to start. I did exactly the same thing as before (I document everything I do), but no luck.
I have created 2 packages... one is the parent package, which contains a 50-iteration loop running a secondary package on each iteration... and I have reached the following conclusion:
My package takes an average of 5 seconds from the time it finishes executing one iteration to the time it starts the next... after about 30 iterations... the average time between end and start increases significantly, to about 12 seconds or even more...
All packages have DelayValidation set to false and receive several variables from the parent package... As for the logging, it is done to files based on a path coming from a variable in the parent package.
To execute the parent package I am using dtexecui.exe, and I consider this behavior rather strange... has anyone experienced the same? Can anyone test this?
My environment has 4 x64 processors and 8 GB of memory, so I guess it's good enough to get 0 seconds from end to start.
I have a script component in a data flow that is exhibiting some strange behavior. In the PreExecute event of the data flow, I stuff a recordset into a variable that is declared at the data flow scope. Within the data flow, I use a script component to read in the data from the recordset.
Example:
Dim olead As New Data.OleDb.OleDbDataAdapter
Dim dt1 As New System.Data.DataTable
Dim row As System.Data.DataRow

' Fill the DataTable from the ADO recordset stored in the package
' variable during the data flow's PreExecute event
olead.Fill(dt1, Me.Variables.rsIntRateStrata)
If I display the count of the records in the data table dt1, it shows 42 rows, which is correct. Run the package, everything runs as expected. So far, so good.
Now, I set up another source/destination within the same data flow, as well as a script component between them, the same as the first flow described above. Now my data flow has two parallel flows (different sources and destinations). I copy the same script logic from the first flow into the second. Run the package: no errors, everything is fine... except when I inspect the data, it looks like the transformation isn't working correctly in the second script.
So I display a message box from each script component at run time. The first component displays 42 records, while the second displays 0 records. Same variable, same data flow.
So I delete the first (original) flow from my data flow and run the package again. Now the message box says 42.
What is happening here? Do I have to create two variables to duplicate the same recordset if I need to use it multiple times within the same data flow? Is this a bug?
BEGIN
    DECLARE @datefin_flag datetime, @strip datetime

    SELECT @strip = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)

    SELECT @datefin_flag = DATE_FIN_PERIODE_FISCALE
    FROM DM_LKP_CALENDRIER_PERIODE_F
    WHERE DATE_DEBUT_PERIODE_FISCALE < @strip
      AND DATE_FIN_PERIODE_FISCALE = @strip

    --select @datefin_flag
    --select @strip

    IF (@datefin_flag != @strip)
        RAISERROR('You cant run this', 16, 1)
END
Well, this query should raise the error, but instead it completes successfully, since today's date is not the same as the date in the database. If you SELECT @datefin_flag it returns NULL, and if you SELECT @strip it brings back today's date. How can NULL be treated as equal to today's date?
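That is the answer hiding in the question: when the SELECT matches no rows, @datefin_flag stays NULL, and NULL != @strip evaluates to UNKNOWN rather than TRUE, so the IF branch is simply skipped. A NULL-safe sketch of the check (same names as above) might be:

IF (@datefin_flag IS NULL OR @datefin_flag <> @strip)
    RAISERROR('You cant run this', 16, 1)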
I have a webhost where it seems my control of my database is fairly restricted. I cannot back up the database because I don't have the necessary permissions. I cannot perform a DBCC SHRINKFILE (permissions again), nor many other DBCC commands. I ran into a problem where my log file filled up in the middle of the day and impacted my operations; data was lost.
I found some T-SQL for shrinking the log file, but the statement:

SET @TruncLog = 'BACKUP LOG [' + db_name() + '] WITH TRUNCATE_ONLY';
EXEC (@TruncLog)
will not execute because I cannot execute the BACKUP LOG command (permissions)...
Is there anything else I can do, or am I too far up the creek (without my paddle)?
I've been trying to shrink my SQL 7 transaction log after it grew to 30+ GB. After running DBCC SHRINKFILE with the file name and the new size, I'm getting a result set of:
I'm having a problem shrinking my transaction log. I have a 1 GB database with a 500 MB transaction log. The transaction logs are backed up every 10 minutes, but the log has grown to 2.2 GB. I've tried backing up the transaction log with TRUNCATE_ONLY and then doing a DBCC SHRINKFILE, but it doesn't seem to work. I've checked whether there were any old, long-running queries, but there are none. What else can I do to reduce the transaction log size?
I've had the thought of creating a secondary log file and then deleting the primary, but that isn't allowed. Is there any way I can make the secondary log file the primary and vice versa? That way I should be able to delete the secondary log file to reclaim space.
We have a database that was created with a 50 MB transaction log that is set to autogrow. Due to activity, the log periodically grows to over 200 MB, so we expanded it to 200 MB. But every time the log is backed up, it shrinks to its original size of 50 MB. The autoshrink option is not set, and no one is manually shrinking the log. Any idea what could be causing this? We are using SQL 7, SP 3.
I am having a problem truncating a transaction log. The truncate and database shrink commands execute successfully, but do not reduce the size of the transaction log.
I have steps that I take to shrink transaction log. One of the steps is to delete the transaction log after I detach the database. Am I losing any data by doing that?
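For context, a hedged sketch of the detach/delete/reattach pattern this describes (SQL 2000-era procedures; the names and path are hypothetical): committed data lives in the .mdf, and reattaching from the data file alone makes SQL Server build a fresh, empty log. You do lose the ability to restore from the deleted log, so it only makes sense right after a clean detach and with a good backup in hand:

EXEC sp_detach_db 'MyDatabase'            -- hypothetical database name
-- ...delete the old .ldf in the file system...
EXEC sp_attach_single_file_db 'MyDatabase',
     'C:\Data\MyDatabase.mdf'             -- hypothetical path; a new log is created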
I have a transaction log with 22 MB of used space and 2,632.50 MB of free space. I tried using the following statement to shrink the log and it did not do anything:

USE DB
GO
DBCC SHRINKFILE (DB_log, 50)
GO
CHECKPOINT
Can anybody help me understand what is going on here?
My transaction log blows out to more than 55 GB when I run an index rebuild on a fairly large table. It's OK for the transaction log to get that big, but I want to shrink it back down ASAP while maintaining the integrity of my transactional backups. I'm thinking of a process something like the following:

1. Full database backup
2a. BACKUP LOG database-name WITH TRUNCATE_ONLY
2b. DBCC SHRINKFILE(2, 20, TRUNCATEONLY)
3. Full database backup
Is this the best way to shrink the T-log while maintaining the integrity of my transactional backups? My database is MSSQL 2000 SP4 in full recovery mode.
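One hedged note on step 2a: BACKUP LOG ... WITH TRUNCATE_ONLY discards log records and breaks the log backup chain, which is exactly why the plan needs the second full backup. A variant that keeps the chain intact (file id and target size copied from the post; the backup path is hypothetical) would be:

-- Back up the log normally instead of truncating it...
BACKUP LOG [database-name] TO DISK = 'E:\Backups\dbname_log.trn'
-- ...then shrink the physical file to ~20 MB
DBCC SHRINKFILE(2, 20)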
Gurus, my log file for a DB grows very rapidly, to more than 10 GB every night, so I have scheduled a job to shrink this file. Now, in the Job Activity Monitor it shows a yellow triangle in front of the job and the status as successful, and it ran for only 1 second. It is not shrinking the logs. I have transactional replication implemented. My point is, when the log file grows to more than 10 GB, the job does not shrink the logs. There are around 8 steps in the job, and this is the 7th step.
Hi all, my SQL Server transaction log is getting bigger every day and my HDD is running out of space. So I followed the MS KB article on shrinking the transaction log. After doing so, the log is much, much smaller, as I can see from its size under Enterprise Manager. The problem is that the HDD still shows the same usage. If I shrink the DB and reduce its size, why does the HDD not show it? Is there a way to reclaim the space on the HDD? Thanks all
I am currently having a rather pestering issue with my full/transaction log backups. It seems that after running either, my logs do not truncate; the file stays at its current size and eventually fills to the brim. This issue only began after mirroring was set up. Are there any differences in log file maintenance when dealing with mirrored databases? I made my last attempt to run backups and waited over the weekend in the hope that a checkpoint would occur and my file would be shrunk once again for normal use.
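Mirroring is a 2005+ feature, so a hedged first diagnostic is to ask the server directly why log space can't be reused; if the mirror is disconnected or far behind, the principal has to retain log records for it and no amount of backups will truncate them:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyMirroredDb'   -- hypothetical database name
-- DATABASE_MIRRORING here means the mirror is what is holding up truncation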
OK, I'm stumped. The 37 GB log file associated with my 22 GB database continues to grow--even though the database file is backed up each week and the database is not experiencing heavy use. I've reviewed numerous log-related threads on the SQL Server forums, but I'm still confused as to why my log file keeps growing.
The recovery model for this database is set to Full. The database is backed up every Thursday night. Just the transaction log is backed up on all other nights. (I realize that I could simply change to the Simple recovery model and just back up the entire database each night; however, I was hoping that backing up the database once a week and backing up just the log six nights a week would result in much smaller usage of my hosting company's off-site backup device.)
sys.databases reveals the following for this database: log_reuse_wait=0; log_reuse_wait_desc=NOTHING. DBCC SQLPERF (LOGSPACE) returns a 'Log Space Used' of 0.1946281% and a Status of 0.
My hosting center uses Ahsay Online Backup to perform the nightly backups. The backup log indicates that the log file is being backed up each night--actually, the backup log indicates that the night after the weekly database backup, one log file is backed up, the night after that two log files are backed up, etc., right up until the sixth night when six log files are backed up. I'm not sure if this is the way backups are supposed to work or not.
It's my understanding that the log file won't truncate until SQL Server believes that the transactions in the log file are committed and (from what I've been reading) a checkpoint occurs. Is that correct, or do I have a wrong understanding? In any case, do I need to truncate the log myself, perhaps as part of regular maintenance (as some of the threads indicate)? If so, how can I be sure that truncating the file won't cause a loss of any data? Lastly, is it conceivable that the hosting center's online backup tool is somehow interfering with the log process?
As I implied above, I'm not currently doing any regular maintenance on this database, but I need to start; first, however, I'd like to find out why the log file continues to grow. I'm concerned that if I don't resolve this, the drive will fill up.
I am working on deploying my production DBs to a new server and am looking for some advice. The new server is running RAID-5 (the old one is RAID-0), but it only has one controller, so I cannot create another array group. Since the disk setup is C: and D: (a virtual drive), would I see any benefit from placing my log files on C: and my data files on D:?
The other question is related to the size of my log files. Some of the DBs were not created by me, and there are no constraints on log file growth, resulting in some very large transaction logs. For example, there is a 100 MB DB with a 1 GB transaction log. I restored the DB on my new server and truncated the log file to half the size of the DB (manually; the Enterprise Manager Shrink DB tool doesn't work worth a hoot), and it functions properly. I was just wondering if this would cause any problems down the road if I did all of my DBs like this?
I am restricting log file growth to approximately half the size of the DB.
There is one DB that is 1.5 GB whose log file was restricted to 2 MB, and it works fine, but I feel that I should bump this up a bit.
I have inherited a SQL 2000 database (I am new to SQL DBA), and I found this when I was checking the db properties: the transaction log has grown bigger than the actual data file. I thought transaction log backups would truncate the inactive portion of the log file and shrink the transaction log, but that seems not to be the case; maybe they were truncating the inactive portion of the log, but not shrinking the file. This site does not have a job for truncating the data/log files periodically. What is the best method to deal with this situation, and how can I shrink the transaction log quickly?
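The guess in the question is right: a log backup only marks space inside the file as reusable; it never shrinks the physical file, which takes an explicit DBCC SHRINKFILE. A hedged SQL 2000 sketch (the logical file name must be looked up first; 500 is a hypothetical target size in MB):

EXEC sp_helpfile                    -- find the logical name of the log file
DBCC SHRINKFILE ('MyDb_Log', 500)   -- hypothetical logical name and target size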
I'm getting this when executing the code below. Going from W2K/SQL2k SP4 to XP/SQL2k SP4 over a dial-up link.
If I take away the begin tran and commit it works, but of course, if one statement fails I want a rollback. I'm executing this from a Delphi app, but I get the same from Qry Analyser.
I've tried both with and without the Set XACT . . ., and also tried with Set Implicit_Transactions off.
SET XACT_ABORT ON
BEGIN DISTRIBUTED TRAN

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TRANSACTIONMAIN
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.TRANSACTIONMAIN
SET REPFLAG = 0 WHERE REPFLAG = 1 AND DONE = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.WBENTRY
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.WBENTRY
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.FIXED
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.FIXED
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.ALTCHARGE
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.ALTCHARGE
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE OPENDATASOURCE('SQLOLEDB','Data Source=10.10.10.171;User ID=*****;Password=****').TRANSFERSTN.TSADMIN.TSAUDIT
SET REPFLAG = 0 WHERE REPFLAG = 1

UPDATE TSADMIN.TSAUDIT
SET REPFLAG = 0 WHERE REPFLAG = 1

COMMIT TRAN
It's got me stumped, so any ideas gratefully received. Thx
I have designed an SSIS package for an ETL process. In my package I have to read the data from some tables and then insert it into another table of the same structure.
For reading the data I have written dynamic T-SQL based on some conditions, and it uses 25 different functions to populate the data into 25 different columns. The T-SQL returns correct data and works fine in Enterprise Manager, but in my SSIS package it shows me a timeout ERROR.
I have increased and decreased the timeout to catch the error, but it is still there. I have also tried setting 0 for the CommandTimeout property.
If I use 0 for CommandTimeout, then I get "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
and
Failed to open a fastload rowset for "[dbo].[P@@#$%$%%%]". Check that the object exists in the database.
I am getting this error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction." Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Data.OleDb.OleDbException: Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction. Does anybody have an idea?!
I have a sequence container, and in it I have a script task that drops the existing tables. This sequence container is connected to another sequence container, and all of these are inside a For Each Loop container. When I run the package it works fine on the first loop, but it gives me an error on the second execution.
The message is like this:
Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.
I am getting this error: "Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction."
My transactions are done using a LINKED SERVER. When I manually call the stored procedure from Server 1 it works, but when I call it through Service Broker it doesn't work and gives me this error.
1) Does shrinking a DB have any side effects, or is it a pretty normal operation? 2) Also, in the DB options, is it recommended to have auto shrink checked?
We have many databases whose log files have grown much too big. The one I need to shrink immediately has a 16 MB .mdf file and a 12 GB .ldf file.
I will very much appreciate it if somebody can help me with a step-by-step process to shrink the database/log file (some way). We are in a crunch situation.
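Not an authoritative prescription, but the classic SQL 2000-era sequence for this kind of crunch looks like the following; the database name, logical log name (check sp_helpfile), and backup path are hypothetical, and note that TRUNCATE_ONLY discards log records and breaks the log backup chain, hence the full backup at the end:

USE MyDb
GO
-- Discard the inactive log records (breaks the log backup chain!)
BACKUP LOG MyDb WITH TRUNCATE_ONLY
GO
-- Shrink the physical log file to ~100 MB (hypothetical target)
DBCC SHRINKFILE ('MyDb_Log', 100)
GO
-- Re-establish a valid backup chain
BACKUP DATABASE MyDb TO DISK = 'E:\Backups\MyDb_full.bak'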
Hi all, I've deleted a lot of albums but the size of personal.mdf isn't shrinking. How do I go about achieving this? I've tried to shrink it in SQL Server Management Studio Express, but the file is read-only... please help! Thanks
I have a 13 GB log file with only 121 MB of space used. I have run the DBCC SHRINKFILE command and it has shrunk the file by only about 100 MB. Why can't I get it to shrink to a reasonable size?
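A hedged diagnostic for this classic symptom: SHRINKFILE can only release space back to the last active virtual log file (VLF), so an active VLF sitting near the end of the file stops the shrink cold. The long-standing (undocumented) DBCC LOGINFO shows the layout:

USE MyDb    -- hypothetical database name
GO
DBCC LOGINFO
-- Rows with Status = 2 are active VLFs. If one sits near the end of the
-- file, back up the log (moving the active portion forward) and run
-- DBCC SHRINKFILE again; it may take a few cycles.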