I'm trying to configure log shipping on a SQL Server 2005 instance. I followed the wizard's instructions (see http://technet.microsoft.com/en-us/library/ms190640.aspx) and everything looks right, except that the backup job is somehow not being created on the primary server. The secondary server contains the copy, restore, and alert jobs.
We take a full backup every day and hourly transaction log backups during working hours on our production server, which runs SQL Server 2008 R2 as a clustered instance. For the databases under the simple recovery model we take only full backups. Now we want to implement transaction log shipping to a remote server at another site. I understand that log shipping involves initially restoring a full backup on the remote server and then restoring the transaction log backups that are shipped to it, with NORECOVERY.
My question is whether we can continue taking the daily full backups on the production server, which are sent to offsite storage. Will the full backups taken on the primary server, after log shipping has been implemented, affect the log backups that are restored on the remote server? Will the chain of log backups restored on the secondary server be broken in any way if a full backup is taken on the primary?
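(Full backups do not break the log backup chain, though they do reset the differential base. If that is a concern, the offsite full backup can be taken as a copy-only backup so that it leaves both the log chain and the differential base untouched. A minimal sketch, with a hypothetical database name and path:
BACKUP DATABASE MyProdDB                              -- hypothetical database name
TO DISK = N'\\offsite\backups\MyProdDB_full.bak'      -- hypothetical path
WITH COPY_ONLY, INIT;
-- COPY_ONLY keeps this backup outside the normal backup sequence,
-- so the log shipping restore chain on the secondary is unaffected.)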
I have a database called PrimaryJunk that is being log shipped to another location; the secondary database is SecondaryJunk. From PrimaryJunk to SecondaryJunk the logs ship and apply fine with no issues. So I figured let's make sure I am able to perform a role change and swap the roles, and that is not working well. My original primary database is stuck in the restoring state.
I manually backed up the active transaction log on the primary server by performing a transaction log backup with the option 'Back up the tail of the log and leave the database in the restoring state'.
The MS site has the same step but mentions NORECOVERY. I am not sure if my step above does that automatically. http://msdn2.microsoft.com/en-us/library/ms191233.aspx
I wonder if that's the reason my original primary database is still in the restoring state. Any ideas?
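(For what it's worth, the GUI option described above should be equivalent to a tail-log backup taken with NORECOVERY, so the primary being left in the restoring state is expected. A minimal sketch of the T-SQL form, with a hypothetical path:
BACKUP LOG PrimaryJunk
TO DISK = N'D:\LogShip\PrimaryJunk_tail.trn'   -- hypothetical path
WITH NORECOVERY;  -- backs up the tail and leaves the database in the restoring state
Bringing it back online after the role swap would then be a matter of completing recovery or restoring logs from the new primary.)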
If I perform a point-in-time restore on a database that is currently a primary log shipping database, will the rollback be reflected on the secondary servers, or are extra steps necessary to accomplish this?
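(For reference, a point-in-time restore of the primary would look roughly like the sketch below, with hypothetical names and a hypothetical time. Note that restoring the primary to an earlier point starts a new recovery fork, so I would expect the secondaries to need re-initialization rather than picking up the rollback automatically:
RESTORE DATABASE MyDB FROM DISK = N'D:\Backups\MyDB_full.bak' WITH NORECOVERY, REPLACE;
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_log.trn'
WITH STOPAT = '2013-06-01 10:00:00', RECOVERY;  -- hypothetical point in time)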
I've got log shipping set up and everything seems to be working fine, but the log files are not being deleted from the primary server despite configuring log shipping to retain them for 3 days. I see no errors concerning the log shipping, but I did not configure a monitor. What process is responsible for deleting the older log backups, and how can I look for errors? I could simply set up a job to delete the older files, but that would only mask the issue.
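(As far as I know, cleanup of old log backups is performed by the backup job itself on the primary, via sqllogship.exe, based on the retention setting, and errors it hits land in the log shipping monitor tables in msdb even when no separate monitor server is configured. A quick check, assuming these tables are present on your version:
-- errors recorded by the log shipping agent jobs
SELECT * FROM msdb.dbo.log_shipping_monitor_error_detail ORDER BY log_time DESC;
-- current retention setting for the primary database (in minutes)
SELECT primary_database, backup_retention_period
FROM msdb.dbo.log_shipping_primary_databases;)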
I want to redirect the log shipping primary backup folder to another drive. How do I change the configuration to move the primary log shipping folder to another location on the same server?
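(One way, assuming you would rather script it than re-run the wizard, is to update the primary's backup settings with the log shipping configuration procedure; a sketch with hypothetical names and paths (the copy job on the secondary would also need its source folder updated to match):
EXEC master.dbo.sp_change_log_shipping_primary_database
    @database = N'MyDB',                           -- hypothetical name
    @backup_directory = N'E:\LogShipBackups',
    @backup_share = N'\\MyServer\LogShipBackups';)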
Recently one of our databases went into suspect mode. We resolved it (REPAIR_ALLOW_DATA_LOSS) and the database came online, but when we run CHECKDB on it, it throws the following error:
Msg 7985, Level 16, State 2, Line 2
System table pre-checks: Object ID 3. Could not read and latch page (1:355) with latch type SH. Check statement terminated due to unrepairable error.
DBCC results for 'xxxxxxx'.
Msg 5233, Level 16, State 98, Line 2
Table error: alloc unit ID 196608, page (1:355). The test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. The values are 12716041 and -4.
CHECKDB found 0 allocation errors and 1 consistency errors not associated with any single object.
CHECKDB found 0 allocation errors and 1 consistency errors in database 'xxxxxxx'.
And the error log also continuously logs the message below:
Error: 824, Severity: 24, State: 2. SQL Server detected a logical consistency-based I/O
I have a scenario where a customer is going to be using log shipping to the DR site; however, we need to maintain the normal backup strategy on the current system (i.e., nightly full, every-6-hour differential, and hourly transaction log backups). I know how to set up transaction log shipping, fail over to DR, and back, but now the local backup strategy is going to be an issue. I currently use the [URL] .... maintenance solution.
Is it even possible to keep doing regular local backups, preserving the integrity of your backup strategy, with transaction log shipping enabled?
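(The usual sticking point is the transaction log: log shipping owns the log backup chain, and a second, independent log backup job would take log records the secondary never sees. One option I know of is to leave the full and differential backups as they are, since they don't break the log chain, and make any extra log backups copy-only; a sketch with a hypothetical name and path:
-- extra log backup that does not disturb the log shipping chain
BACKUP LOG MyDB
TO DISK = N'L:\ExtraBackups\MyDB_log_copyonly.trn'   -- hypothetical path
WITH COPY_ONLY;)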
I have created a FormView which I, among other things, use to insert new values into a database. What I want to check is whether the primary key that is entered in the form already exists in the database. If it does, I want a message on my web page; if not, the data can be inserted.
How can I do this?
And if the only way to control this is to create a stored procedure, how do I write such a proc?
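(A common pattern is a stored procedure that tests for the key and reports back via its return value; a minimal sketch against a hypothetical table dbo.Customers with primary key CustomerId:
CREATE PROCEDURE dbo.InsertCustomer   -- hypothetical procedure and table
    @CustomerId int,
    @Name nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM dbo.Customers WHERE CustomerId = @CustomerId)
        RETURN 1;   -- key already present; the page shows a friendly message
    INSERT INTO dbo.Customers (CustomerId, Name)
    VALUES (@CustomerId, @Name);
    RETURN 0;       -- inserted successfully
END;
Note that under concurrent inserts a check-then-insert can race; catching the duplicate-key error instead, as sketched under the next question, is the more robust variant.)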
Hi, can someone please tell me the best practice for checking the primary key field before inserting a record into my database? As an example, I have created an ASP.NET page using VB with a SQL Server database. The web page just inserts two fields into a table (Name & Surname into the Names table). The primary key of the Names table is Name. When I click the Submit button I would like to check that there is not a duplicate primary key. If there is, return a user-friendly message, i.e. 'A record already exists'; if there is no duplicate, add the record. I guess I could use try/catch within the .ASPX page, or would a stored procedure be better? Thanks, Brett
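(An alternative to checking first is to simply attempt the insert and catch the duplicate-key error on the database side; for a primary key violation that is error number 2627. A sketch using the table named in the post, with sample values:
DECLARE @Name nvarchar(50), @Surname nvarchar(50);
SELECT @Name = N'John', @Surname = N'Smith';   -- sample values
BEGIN TRY
    INSERT INTO dbo.Names (Name, Surname) VALUES (@Name, @Surname);
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() IN (2627, 2601)   -- 2627 = PK/unique constraint, 2601 = unique index
        RAISERROR('A record already exists.', 16, 1);  -- friendly message for the page
    ELSE
        RAISERROR('Unexpected error during insert.', 16, 1);
END CATCH;)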
OK, I have a script which is to be run on several databases. Within this script there are commands to create a primary key on a specific table. Can anyone tell me if it is possible to check whether a primary key already exists on a table?
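(One way is to test the table's metadata before issuing the ALTER; either of the following should work on SQL Server 2005 and later (table and column names are hypothetical):
IF OBJECTPROPERTY(OBJECT_ID(N'dbo.MyTable'), 'TableHasPrimaryKey') = 0
    ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (Id);
-- or, equivalently, via the catalog views:
IF NOT EXISTS (SELECT 1 FROM sys.key_constraints
               WHERE [type] = 'PK'
                 AND parent_object_id = OBJECT_ID(N'dbo.MyTable'))
    ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (Id);)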
I have a copy of this database table, and the first thing I noticed was that the primary key was pretty much useless and there were no sensible indexes. Every query hitting this table ended up table scanning. So I thought I would try dropping the existing primary key constraint and then creating a more natural key that would make data retrieval quicker (hopefully). I understand that creating a clustered index on this table is going to take a long time, as ALL the data will need to be reorganised (I estimate at least 1 hour). However, just dropping the existing primary key constraint is taking forever. I can see that the server is doing a lot of disk reading/writing, and the wait type in Activity Monitor is PAGEIOLATCH_EX. I would have thought that just dropping a primary key would not change the data in the table, just delete the associated index.
When I have multiple SQL Server instances running on one server, how can I check the names of those instances without connecting via SQL Server Management Studio?
I have been searching for a means to change the System Failure Error Check policy that comes as part of the Best Practice policies. I want it to look back 24 hours. The WQL query shipped with the policy doesn't have a WHERE clause component that looks at TimeGenerated. That query looks like:
IsNull(ExecuteWql('Numeric', 'root\CIMV2', 'select EventCode from Win32_NTLogEvent where EventCode=6008 and Logfile="System"'), 0)
After searching for an example of how to do this and not finding any that are specific to PBM, I decided to fall back to a very basic approach: use wbemtest.exe to try out WHERE clause additions and see how they work, then plug the result into the policy and see if it works. As a start, I tried the following query in wbemtest.exe:
select EventCode from Win32_NTLogEvent where EventCode = 6008 and Logfile = 'System' and TimeGenerated > '20130101010000.000000-000'
This works great in wbemtest.exe. My next step was to plug this into the policy condition expression as follows:
IsNull(ExecuteWql('Numeric', 'root\CIMV2', 'select EventCode from Win32_NTLogEvent where EventCode=6008 and Logfile="System" and TimeGenerated > "20130101010000.000000-000"'), 0)
When I try to manually evaluate this policy in SSMS, I receive an "Invalid Query" error message. I assume that SWbemDateTime isn't available for use inside Policy-Based Management policies. All the examples of this kind of dynamic date creation I have seen are for use in PowerShell, VBScript, or SSIS. I've played with using DateDiff, DateAdd, and GetDate inside the query string, with no success.
Why does the ExecuteWql above fail? Is it at all possible to dynamically generate a datetime (say, 24 hours ago) as part of the query string parameter of the ExecuteWql call? What might that look like?
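(On the dynamic-date part: the literal WMI expects is a DMTF datetime string (yyyymmddHHMMSS.ffffff+/-UUU). PBM also exposes an ExecuteSql() function, so one possibility worth testing, and it is only an assumption on my part that PBM will accept it nested inside the expression, is to build the string in T-SQL and concatenate it into the WQL. The string-building piece on its own looks like this:
-- build a DMTF-style datetime string for 24 hours ago
DECLARE @cutoff datetime = DATEADD(HOUR, -24, GETDATE());
DECLARE @dmtf varchar(25) =
      CONVERT(varchar(8), @cutoff, 112)                     -- yyyymmdd
    + REPLACE(CONVERT(varchar(8), @cutoff, 108), ':', '')   -- HHMMSS
    + '.000000-000';                                         -- fraction + UTC offset (assumed; adjust for your time zone)
SELECT @dmtf;)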
We have a production server with a database against which a few DTS packages execute every night. Most of them run BULK INSERT stored procedures.
So we have to set the recovery model of the database to simple for that period of time; otherwise it will blow up our logs.
Is there any way we can set up log shipping between our production and standby servers, but pause it for some time, set the recovery model of the primary database to simple, execute the DTS bulk insert jobs, bring it back to the full recovery model, and finally resume log shipping?
Is this possible, and if yes, how can we achieve it?
If not, what could be another DR solution in this scenario?
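(One caution: switching to the simple recovery model breaks the log backup chain, so after switching back to full you would have to re-initialize log shipping from a fresh full backup; switching to bulk-logged instead avoids that. As for the pausing itself, the agent jobs can simply be disabled and re-enabled; a sketch with hypothetical job names:
-- pause log shipping by disabling the agent jobs
EXEC msdb.dbo.sp_update_job @job_name = N'LSBackup_MyDB',  @enabled = 0;  -- on the primary
EXEC msdb.dbo.sp_update_job @job_name = N'LSCopy_MyDB',    @enabled = 0;  -- on the secondary
EXEC msdb.dbo.sp_update_job @job_name = N'LSRestore_MyDB', @enabled = 0;  -- on the secondary
-- ...run the nightly bulk load, then re-enable each job with @enabled = 1)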
I have not used log shipping before and find myself in a position where I need to reboot the secondary node and then the primary node, and I don't actually need to fail over.
Is there anything I need to be aware of? When rebooting the secondary node, I assume the transactions will be held in the primary node's log until the secondary comes back, and things will just carry on once it is back up?
When rebooting the primary node, does nothing need to be done, and will log shipping just start again once it has come back?
We are going to use SQL Server change tracking. The problem is that some of the tables to be tracked have no primary keys; there are only unique clustered indexes. The question is: what is the best way to turn on change tracking for these tables in our circumstances?
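(For what it's worth, change tracking requires a primary key, and as far as I know a unique index alone does not satisfy it. Since a table can carry a nonclustered primary key alongside its unique clustered index, one option is to promote the indexed columns (they must be NOT NULL) to a primary key; a sketch with hypothetical names:
-- prerequisite: change tracking must be enabled at the database level first
ALTER DATABASE MyDB SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
-- promote the unique columns to a nonclustered primary key
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY NONCLUSTERED (KeyCol1, KeyCol2);
-- then enable change tracking on the table
ALTER TABLE dbo.MyTable ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);)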
I was not able to find a forum specifically for log shipping, which is why I am posting this question here. I would appreciate it if someone could provide answers based on their experience.
Can we switch the database recovery model when log shipping is turned on?
We want to switch from full recovery to bulk-logged recovery to make sure bulk insert operations during the after-hours load process get some performance gain.
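(Switching between full and bulk-logged does not break the log backup chain, unlike switching to simple, so in principle log shipping keeps working; the trade-off is that a log backup containing minimally logged operations cannot be restored to a point in time within that backup. A sketch of the toggle, with a hypothetical database name:
-- before the after-hours load
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
-- ...run the bulk insert jobs...
-- after the load; the next scheduled log backup picks everything up
ALTER DATABASE MyDB SET RECOVERY FULL;)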
I'm sure I am missing something obvious; hopefully someone can point it out. After a log shipping failover, I want to fail back to my initial primary server's database; however, the database is marked as loading. How can I mark it as normal?
I did the failover as follows:
I performed a log shipping failover between the two servers Sv1 (primary) and Sv2 (secondary) by doing the following:
1) Stopped the primary database by using sp_change_primary_role (Sv1)
2) Changed the second server to the primary server by running sp_change_secondary_role (Sv2)
3) Changed the monitor role by running sp_change_monitor_role (Sv2)
4) Resolved the logins (Sv2)
5) Now I want to fail back. I copied the TRN files to Sv1 and used SQL Enterprise Manager to restore the database to a point in time. The task completed; however, the database is still marked as loading. I could not use sp_dboption.
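(A database stays in the loading state when every restore was run with NORECOVERY and recovery has never been completed. Assuming all the log backups you want have been applied, one way to bring it online is a recovery-only restore; a sketch with a hypothetical name:
-- complete recovery without restoring anything further
RESTORE DATABASE MyDB WITH RECOVERY;)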
Uma writes "Hi, I have a table whose primary key consists of 6 columns. The total number of columns in the table is 16. Now I want to convert my composite primary key into a simple primary key. There are already 2,200 records in the table, and no referential integrity (foreign keys) exists.
Can I convert the composite primary key in the table into a simple primary key like this?"
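(Since no foreign keys reference the composite key, one straightforward approach is to drop the composite constraint and put the key on a new surrogate column; a sketch with hypothetical table, constraint, and column names:
-- drop the 6-column composite primary key
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
-- add a surrogate key column and make it the new primary key
ALTER TABLE dbo.MyTable ADD MyTableId int IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY (MyTableId);
-- optionally keep the old columns unique to preserve the business rule
ALTER TABLE dbo.MyTable ADD CONSTRAINT UQ_MyTable_Natural UNIQUE (Col1, Col2, Col3, Col4, Col5, Col6);)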
I have recently been looking at a database and wondered if anyone can tell me what the advantages are of supporting a unique column, which can essentially be seen as the primary key, with an identity-seeded integer primary key.
For example:
id [unique auto-incremented integer primary key - not null]
ClientCode [unique index varchar - not null]
name [varchar null]
surname [varchar null]
Isn't it just better to use ClientCode as the primary key straight off? When one references the above table, it can be done more easily with the ClientCode, since you don't have to do a lookup of the ClientCode every time.
We have a table which has one clustered index and one nonclustered index (the primary key). I want to drop the existing clustered index and make the primary key clustered. Is there any easy way to do that? Will DROP_EXISTING help in this situation?
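(As far as I know, DROP_EXISTING rebuilds a single index in place and cannot move the clustering from one index to another, so the usual route is to drop both and re-create the key as clustered. A sketch with hypothetical names; plan a maintenance window, as both steps rewrite the table:
-- drop the current clustered index, leaving the table as a heap
DROP INDEX IX_MyTable_Clustered ON dbo.MyTable;
-- drop the nonclustered primary key constraint
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;
-- re-create the primary key as the clustered index
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (Id);)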
CREATE TABLE [dbo].[property_instance] (
    [property_instance_id] [int] IDENTITY (1, 1) NOT NULL,
    [application_id] [int] NOT NULL,
    [owner_id] [nvarchar] (100) NOT NULL,
    [property_id] [int] NOT NULL,
    [owner_type_id] [int] NOT NULL,
    [property_value] [ntext] NOT NULL,
    [date_created] [datetime] NOT NULL,
    [date_modified] [datetime] NULL
)
I have created an 'artificial' primary key, property_instance_id. The 'true' primary key is (application_id, owner_id, property_id, owner_type_id).
In this specific instance:
- property_instance_id will never be a foreign key into another table
- queries will generally use application_id, owner_id, property_id and owner_type_id in the WHERE clause when searching for a particular row
- once inserted, none of the application_id, owner_id, property_id or owner_type_id columns will ever be modified
I generally like to create artificial primary keys whenever the primary key would otherwise consist of more than 2 columns.
What do people think the advantages and disadvantages of each technique are? Do you recommend I go with the existing model, or should I remove the artificial primary key column and just go with a 4-column primary key for this table?
I need to create a read-only copy of a production database owned by an outside company. We are connected via a WAN link but cannot use replication. They are proposing an initial load via tape, then sending us a text file nightly with the day's changes to the database, which we would then load using BCP, DTS, or some other method. Does anyone have any ideas on using log shipping instead of the text file? It would only be practical to get a fresh load of the entire database once a quarter, or once a month at most. It is a 40+ GB database, and we are expecting 100 to 200 MB of logs per night. For business reasons, we are limited to some type of file transfer mechanism for the data transfer, and we cannot really change their backup schedule, which is nightly full backups and transaction logs every 30 minutes.
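(Their existing 30-minute log backups are, in principle, all that log shipping needs: the files could be pushed over the same file transfer mechanism and restored on your side with STANDBY, which leaves the database readable between restores. A sketch of the restore side, with hypothetical names and paths; users must be disconnected while each log is applied:
-- initial load from the tape full backup, left readable via an undo file
RESTORE DATABASE TheirDB FROM DISK = N'D:\Restore\TheirDB_full.bak'
WITH STANDBY = N'D:\Restore\TheirDB.tuf';
-- each shipped log backup, applied in sequence
RESTORE LOG TheirDB FROM DISK = N'D:\Restore\TheirDB_20130101_0030.trn'
WITH STANDBY = N'D:\Restore\TheirDB.tuf';)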