I recently had a problem with one of my publications that I haven't been able to figure out. Hopefully someone can help me with it.
I got this error last week: 'You must rerun snapshot because current snapshot files are obsolete.' I assume this means the subscription needs to be re-initialized. Is this correct?
If so, I don't understand why. The publication was not changed. The subscription did not expire. The subscription was not marked for reinitialization.
Here are the specifics of my installation:
Merge publication with 1 pull subscription.
Subscriptions set to expire after 14 days.
Merge agent runs every day, every 5 minutes between 12:00:00 AM and 11:59:59 PM.
Snapshot agent was run once when publication was created and then disabled.
Publisher SQL ver: 2000 sp3
Publisher Win ver: 2003 sp1
Subscriber SQL ver: 2000 sp3
Subscriber Win ver: 2003 sp1
The subscription was successfully replicating every 5 minutes. The data for 12:35 shows the first problem. Here's the agent history at the time:
runstatus  start_time     comments                                 error_id
1          5/23/07 12:25  Initializing                             0
3          5/23/07 12:25  Connecting to Publisher 'AASTAUSSQL01'   0
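In case it helps, this is the kind of check I've been running at the publisher (a sketch; 'MyMergePub' is a placeholder for my actual publication name):

EXEC sp_helpmergepublication @publication = N'MyMergePub' -- shows the retention setting
SELECT * FROM dbo.sysmergesubscriptions -- per-subscription status and sync metadata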
I am in need of some help with an error I am seeing on my PDA units. Here is my scenario:
I have 5 PDA units that use merge replication with SQL 2005. These PDA units replicate fine. But when I try to sync a 6th or 7th PDA unit, I see the following error:
Source : Microsoft SQL Server 2005 Mobile Edition - The snapshot for this publication has become obsolete. The snapshot agent needs to be run again before the subscription can be synchronized. -2147198698
I know that if I regenerate a new snapshot, I will be able to sync fine. But here are my concerns and questions:
What happens to the first 5 PDAs when I regenerate a new snapshot? Will they be able to sync without losing data? Does generating a new snapshot reinitialize all the subscriptions?
Am I missing the whole picture with generating a new snapshot?
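For reference, this is what I plan to check at the publisher before regenerating (a sketch; 'MobilePub' is a placeholder for my publication name):

EXEC sp_helpmergepublication @publication = N'MobilePub' -- retention and snapshot settings
SELECT * FROM dbo.sysmergesubscriptions -- per-subscriber sync status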
Hi folks, I am trying an anonymous pull merge replication over VPN. Publisher and FTP server are the same machine (VPNSERVER). In the snapshot configuration in Enterprise Manager I have the following parameters:
Generate snapshot in the following location: \\VPNSERVER\D$\FTP
FTP server name: VPNSERVER
Port: 21
Client path to this folder from FTP root: /ftp/ (I may be wrong here)
Login: repl
The FTP server configuration is: FTP port 21; the repl user has administrative rights. When I run the snapshot agent, snapshot.cab is created in the following folder: \\vpnserver\d$\ftp\ftp\VPNSERVER_repl_repl\20040524164453
When I run the merge agent at the subscriber, I receive the following error:
The schema script '' could not be propagated to the subscriber. The process could not retrieve file 'VPNSERVER_repl_repl\20040524164453\snapshot.cab' from the FTP site 'VPNSERVER'. 200 Type set to I. 200 PORT Command successful. 550 /d:/ftp/VPNSERVER_repl_repl/20040524164453/snapshot.cab: No such file.
If I change the parameter "client path to this folder", I get an error that the snapshot files are obsolete and the snapshot must be run again. I guess the problem is with the exact location of the snapshot file, because when I run the agent I can see the user repl in the FTP monitor, and there is no problem accessing SQL at the publisher.
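Working out the path arithmetic from that 550 error (assuming the FTP root maps to D:\):

client path /ftp/ from the FTP root  ->  D:\ftp\
the agent then appends                   VPNSERVER_repl_repl\20040524164453\snapshot.cab
file the agent requests                  D:\ftp\VPNSERVER_repl_repl\20040524164453\snapshot.cab
file actually created at                 D:\ftp\ftp\VPNSERVER_repl_repl\20040524164453\snapshot.cab

So it looks like the client path may need to be /ftp/ftp/ to match where the snapshot agent actually writes.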
My Snapshot Agent for SQL Server 2005 sometimes does not fully complete. I am expecting around 2073 files in the snapshot folder but only get 492 (this number does change). I have immediate_sync set to true, so I should get a full snapshot every time. It seems to quit after all data has been BCP'ed out and before the schema generation starts. I do get the "The replication agent has not logged a progress message in 10 minutes. This might indicate an unresponsive agent or high system activity. Verify that records are being replicated to the destination and that connections to the Subscriber, Publisher, and Distributor are still active." message in the MSsnapshot_history table.
I am guessing that there is some sort of blocking that is preventing the snapshot agent from continuing and then it times out and dies. But that is a total guess as I have not been able to observe the behavior, only the outcome.
Any suggestions to what the cause is or how I can fix it?
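Since this is SQL 2005, one thing I thought of is polling for blockers while the agent runs, along the lines of this sketch:

SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0 -- requests that are currently blocked, and by whom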
We have a large database with a small number of large tables in it (and a larger number of SMALLER tables), and it is a publisher for a transactional replication scenario. When I create a snapshot to initialize a new subscription, I notice that with the larger tables it sometimes generates multiple files in the snapshot folder, usually in multiples of 16, numbered sequentially.
With other tables, I'll get just one LARGE snapshot file, named:
MyOtherTable_4.bcp
In the latter case, the file can be very large (most recent is 38GB).
In both cases, the subscription will eventually be initialized, but the smaller files generate separate log entries every few minutes in the Replication Monitor, showing 'Bulk copied data into 'MyTable' (34231221 rows)', whereas the larger table generates only ONE log entry, showing 'Bulk copying data into table 'MyOtherTable'', and it may take a couple of hours before anything else shows up... except for an entry saying, 'The process is running and is waiting for a response from the server.'
My question is: what would be the difference between the two tables that would result in one generating MULTIPLE snapshot files, the other only a single, much larger one? The only difference I can see in the table definition is that the one generating multiple files has a clustered index, whereas the others do not.
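From what I've been able to dig up, the Snapshot Agent can bcp a table out in parallel chunks, and I believe the thread count comes from the MaxBcpThreads parameter in the agent profile; the current profile values can be inspected on the distributor with a sketch like this:

SELECT p.profile_name, pa.parameter_name, pa.value
FROM msdb.dbo.MSagent_profiles p
JOIN msdb.dbo.MSagent_parameters pa ON pa.profile_id = p.profile_id
WHERE p.agent_type = 1 -- 1 = Snapshot Agent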
There is a SQL Server 2008 R2 SP3 Clustered Instance that has Transactional Replication. It is by no means a large replication setup in terms of data/article count. SQL Server was recently patched to SP3 and is current on Windows 2008 R2 Patches.
When I added a new article to replication (via 2014 SSMS GUI) it seems to add everything correctly (replication tables/procs show the new article as part of the publication). The Publication is set to allow the snapshot to generate for just new articles (setting immediate_sync & allow_anonymous to false).
When the snapshot agent is run, it runs without error and claims to have generated a snapshot of 1 article. However, the snapshot folder only contains a folder for the instance (which does have the modified time of the snapshot agent execution) and none of the regular bcp/schema files.
The tables never make it to the subscribers and replication continues on without error for the existing articles. No agents produce any errors and running the snapshot agent w/ verbose output provides no errors or insight into any possible issues.
I have tried:
- Dropping and re-adding the article in question
- Setting up a new snapshot folder
- Validating all the settings and configurations
I'm hesitant to reinitialize a subscriber since I am not confident a snapshot can be generated. I'm also wondering if this is related to the SP3 upgrade; new articles are added to the publication every few months, and this is the first time it has been done since the upgrade to SP3.
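For what it's worth, this is how I've been double-checking the publication options (a sketch; 'MyPub' stands in for the real publication name):

EXEC sp_helppublication @publication = N'MyPub' -- immediate_sync and allow_anonymous should both show 0
-- if they don't, allow_anonymous has to be switched off before immediate_sync can be:
EXEC sp_changepublication @publication = N'MyPub', @property = N'allow_anonymous', @value = N'false'
EXEC sp_changepublication @publication = N'MyPub', @property = N'immediate_sync', @value = N'false'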
I need to find all of the SQL Server objects, (tables, procs, functions, etc.) that my application is NOT using, so they can be removed from the system. Does anyone know how to accomplish this?
When I find that a column should no longer be used because of design changes (e.g., a column moved to another table to correct normalization), I add a check constraint to make sure the column must be NULL.
The following lists the obsolete columns:
SELECT col.TABLE_NAME + '.' + col.COLUMN_NAME AS FULL_COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS col
JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu
  ON col.COLUMN_NAME = ccu.COLUMN_NAME
 AND col.TABLE_SCHEMA = ccu.TABLE_SCHEMA
 AND col.TABLE_NAME = ccu.TABLE_NAME
JOIN INFORMATION_SCHEMA.CHECK_CONSTRAINTS cc
  ON ccu.CONSTRAINT_SCHEMA = cc.CONSTRAINT_SCHEMA
 AND ccu.CONSTRAINT_NAME = cc.CONSTRAINT_NAME
WHERE IS_NULLABLE = 'YES'
  AND cc.CHECK_CLAUSE = '([' + col.COLUMN_NAME + '] IS NULL)'
ORDER BY col.TABLE_NAME, col.COLUMN_NAME
In the past, columns could not be set to NULL. This script lists those columns that I need to add NULL checks to:
SELECT col.TABLE_NAME + '.' + col.COLUMN_NAME AS FULL_COLUMN_NAME, cc.CHECK_CLAUSE
FROM INFORMATION_SCHEMA.COLUMNS col
JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE ccu
  ON col.COLUMN_NAME = ccu.COLUMN_NAME
 AND col.TABLE_SCHEMA = ccu.TABLE_SCHEMA
 AND col.TABLE_NAME = ccu.TABLE_NAME
JOIN INFORMATION_SCHEMA.CHECK_CONSTRAINTS cc
  ON ccu.CONSTRAINT_SCHEMA = cc.CONSTRAINT_SCHEMA
 AND ccu.CONSTRAINT_NAME = cc.CONSTRAINT_NAME
WHERE cc.CHECK_CLAUSE LIKE '([[]' + col.COLUMN_NAME + '] =%'
  AND NOT cc.CHECK_CLAUSE LIKE '% OR %'
  AND NOT cc.CHECK_CLAUSE LIKE '% AND %'
  AND NOT cc.CHECK_CLAUSE LIKE '% IS %'
  AND NOT cc.CHECK_CLAUSE LIKE '% IN %'
ORDER BY col.TABLE_NAME, col.COLUMN_NAME
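For the broader question above of finding unused objects (not just columns), here is a starting-point sketch, assuming SQL 2005 or later; note that these statistics reset at every service restart, so they only reflect usage since then:

SELECT OBJECT_NAME(us.object_id) AS table_name,
       us.last_user_seek, us.last_user_scan, us.last_user_lookup, us.last_user_update
FROM sys.dm_db_index_usage_stats us
WHERE us.database_id = DB_ID() -- current database only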
The client issues a query, which sends out individual requests to the 2 OLAP servers that are load balanced. The client evaluates the versions of the returned record sets to ensure consistent data is being returned for a single query. Otherwise, this error will be seen:
The cube has been updated by the server, the data is now obsolete.
These servers sit behind a Cisco 11506 CSS with load balancing based on the 'least busy server' balance type, plus persistence based on cookies.
My developer says this worked fine for a long time and then just 'started happening'.
I seem to have a strange problem applying a snapshot when the tables in the publication have been updated while the snapshot was being generated.
Say for example there is a table called RMAReplacedItem in the publication. When the snapshot starts being applied to the subscriber, a stored procedure called sp_MSins_RMAReplacedItem_msrepl_css gets created that handles an insert if the row already exists (i.e., it updates the row rather than inserting it). However, after all the data has been loaded into the tables, instead of calling this procedure, it tries to call one called sp_MSins_RMAReplacedIte_msrepl_cssm: it takes the last letter of the table name and adds it to the end of the procedure name.
The worst part is that this causes the application of the snapshot to fail, but it doesn't report what the error is; it just tries applying the snapshot again. The only way I have managed to find which call is failing is to run Profiler against the subscriber while the snapshot is being applied and watch for the errors.
I have run sp_browsereplcmds, and the data in there is what is applied to the subscriber, i.e., the wrong procedure name.
All the servers involved are running SQL 2005 Service Pack 2. The publisher and subscriber were both upgraded from SQL 2000, but the distribution server is a fresh install of SQL 2005.
I proposed that on a new server we separate data files, log files, tempdb, backups, etc. onto separate LUNs on a SAN with high-speed solid state drives. I was told that with the new solid-state SAN technology this would decrease performance, and that it does not work the same way as it did when you had RAID 5 arrays and the like. I thought that if things were carried out correctly by a SAN administrator, they would know how to configure for optimal performance.
In the For Loop, how do I iterate from older flat files to newer flat files based on the file's timestamp? If there are older files in that folder, they should be processed first before continuing with the newer ones.
In the first step of my SSIS package I need to get files from FTP and dump them in a local directory, but it's more than that. The logic is:
1. If no files are found, stop executing and send an email saying no files were found.
2. If files are found, compare them with the existing files in our archive folder. If the files already exist in the archive folder, stop executing and send an email saying the files already exist; if they are not in the archive folder yet, transfer them to the local directory for processing.
I know I have to use a script task to do this. I did some research and found examples for each of the above 2 steps, but not both combined, which is why I need some help getting the logic incorporated correctly.
Thanks for the help in advance and i apologize for the long lines of code!
example for step 1: ----------------------------------------------------------------------------------------------------------
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime

Public Class ScriptMain
    ' The execution engine calls this method when the task executes.
    ' To access the object model, use the Dts object. Connections, variables, events,
    ' and logging features are available as static members of the Dts class.
    ' Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
    Public Sub Main()
        Dim cDataFileName As String
        Dim cFileType As String
        Dim cFileFlgVar As String

        ' Reset all of the file-type flag variables.
        WriteVariable("SCFileFlg", False)
        WriteVariable("OOFileFlg", False)
        WriteVariable("INFileFlg", False)
        WriteVariable("IAFileFlg", False)
        WriteVariable("RCFileFlg", False)

        ' Derive the two-letter file type from the file name and set its flag.
        cDataFileName = ReadVariable("DataFileName").ToString
        cFileType = Left(Right(cDataFileName, 4), 2)
        cFileFlgVar = cFileType.ToUpper + "FileFlg"
        WriteVariable(cFileFlgVar, True)

        Dts.TaskResult = Dts.Results.Success
    End Sub

    ' Write a value to a package variable, locking it for the duration of the write.
    Private Sub WriteVariable(ByVal varName As String, ByVal varValue As Object)
        Dim vars As Variables
        Dts.VariableDispenser.LockForWrite(varName)
        Dts.VariableDispenser.GetVariables(vars)
        Try
            vars(varName).Value = varValue
        Finally
            vars.Unlock()
        End Try
    End Sub

    ' Read a value from a package variable, locking it for the duration of the read.
    Private Function ReadVariable(ByVal varName As String) As Object
        Dim result As Object
        Dim vars As Variables
        Dts.VariableDispenser.LockForRead(varName)
        Dts.VariableDispenser.GetVariables(vars)
        Try
            result = vars(varName).Value
        Finally
            vars.Unlock()
        End Try
        Return result
    End Function
End Class
example for step 2: -------------------------------------------------------------------------------------------------------
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Public Class ScriptMain
' The execution engine calls this method when the task executes.
' To access the object model, use the Dts object. Connections, variables, events,
' and logging features are available as static members of the Dts class.
' Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
'
' To open Code and Text Editor Help, press F1.
' To open Object Browser, press Ctrl+Alt+J.
Public Sub Main()
Try
'Create the connection to the ftp server
Dim cm As ConnectionManager = Dts.Connections.Add("FTP")
It works remotely if I run it via the command prompt. But when I add this to a T-SQL job on my remote SQL instance, it runs without deleting anything. What am I missing?
Brief overview... Running Windows Server 2003 Enterprise 64-bit, all service packs and patches current, with SQL Server 2005 Enterprise Edition 64-bit, build: Microsoft SQL Server 2005 - 9.00.3054.00 (X64) Mar 23 2007 18:41:50 Copyright (c) 1988-2005 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 (Build 3790: Service Pack 2)
I cannot import any SSIS packages nor create any new folders under Stored Packages. I have googled the newsgroups and looked at BOL to no avail. HELP!!!!
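One thing worth ruling out (an assumption on my part that this is a permissions issue; a sketch): whether the login belongs to the msdb roles that gate saving packages to the server:

USE msdb
EXEC sp_helprolemember N'db_ssisadmin' -- SSIS administration role in SQL 2005
EXEC sp_helprolemember N'db_ssisltduser'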
I am thinking about replacing the INSERT data script files that I have with XML files. This way I can open the XML file using an XML editor, see the values in a grid, and make changes more easily. Do you see any problem with this approach?
I managed to put together some code that exports a SQL table with its data to an XML file, and also code that reads the XML file's data and inserts it into a table. Now I am researching XSD, dt:datatype, DTD... (I am new to XML) in order to figure out how I can use a single XML file that will hold the SQL Server fields, the datatypes, and their values.
If you have links to some sample code that has anything to do with the datatype export and import I am working on, can you please share them with me? Most importantly, what do you think about the idea of using XML files vs. SQL scripts?
Thank you
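To make the round trip concrete, here's a sketch of what I have so far (dbo.MyTable and its columns are placeholders):

-- export the table's rows as XML
SELECT * FROM dbo.MyTable FOR XML RAW

-- import XML data back into the table via OPENXML
DECLARE @h int, @doc nvarchar(4000)
SET @doc = N'<root><row Col1="1" Col2="abc"/></root>'
EXEC sp_xml_preparedocument @h OUTPUT, @doc
INSERT INTO dbo.MyTable (Col1, Col2)
SELECT Col1, Col2
FROM OPENXML(@h, N'/root/row', 1)
WITH (Col1 int, Col2 varchar(50))
EXEC sp_xml_removedocument @h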
I am new to replication. The problem I am getting is that I am not able to create a snapshot. Whenever I start the Snapshot Agent, it runs for half an hour and then gets stuck on one view, saying 'failed to process bulk copy of data from dbo.syncobj_03x666*'. If I publish tables only, I still get the same error. This view was created by the system. In my case the publisher is also acting as the distributor. I created a SQL username which has access on the subscriber and publisher. I would be grateful if someone could suggest the primary criteria we have to keep in mind while doing replication.
Create jobs to copy database and restore database in destination servers
------------ Robert at 5/7/2002 11:00:30 AM
Yes, and I would rather not use DTS to accomplish this task.
------------ Ray Miao at 5/7/2002 10:02:15 AM
Do you have a direct network connection to the remote server? Did you try DTS?
------------ Robert at 5/7/2002 9:08:06 AM
I've been trying to replicate a database to an off-site server using snapshot replication. It is scheduled to run every hour, but I've noticed that when data is changed at the source it never gets replicated to the destination. Does anyone know why? I can't use transactional replication because not all the tables have primary keys, and they can't be added due to code. Some tables have identity columns and have been created with the NOT FOR REPLICATION option on the subscriber. Any help will be appreciated.
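To flesh out the 'create jobs to copy and restore' suggestion at the top of this thread, a minimal sketch (the database name and paths are placeholders):

-- job step on the source server
BACKUP DATABASE MyDb TO DISK = N'\\offsite\share\MyDb.bak' WITH INIT
-- job step on the destination server
RESTORE DATABASE MyDb FROM DISK = N'\\offsite\share\MyDb.bak'
WITH REPLACE,
     MOVE N'MyDb_Data' TO N'D:\Data\MyDb.mdf',
     MOVE N'MyDb_Log' TO N'D:\Data\MyDb_log.ldf'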
I can set up snapshot replication for those tables without foreign key constraints. But if there are foreign keys involved, there will be an error message indicating that the object cannot be dropped because it is referenced by ....
I have a problem replicating data through snapshot replication, as I have tables with foreign keys at the subscriber end. TRUNCATE TABLE is not working because these tables are referenced by a FK. Recreating the table during the snapshot hits the same problem. Any suggestions?
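One option for this situation (a sketch; the publication and table names are placeholders): configure the article to DELETE the subscriber's rows instead of truncating or dropping the table when the snapshot is applied:

EXEC sp_addarticle
     @publication = N'MyPub',
     @article = N'MyTable',
     @source_object = N'MyTable',
     @pre_creation_cmd = N'delete' -- instead of the default 'drop' (or 'truncate'), both of which FKs block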
1) In snapshot replication, can the subscriber send info back to the publisher (even in a manual process)?
2) In snapshot replication, do we need a distributor set up between the publisher and subscriber if there will only be a single subscriber, or can we write directly to it?
I am replicating all tables in the DB. The tables have PKs, FKs, and identity columns. While truncating the data from the tables on the subscriber, I get constraint errors. Please suggest how to get rid of this error.
We have a production server on the East Coast (SQL Server 2000 SP2; database size is around 30 GB). We have a reporting server on the West Coast. We need to replicate (transactional replication every hour) from the East Coast to the West Coast. Is there any way that I can take a backup, restore up to the last transaction log backup, and then start the replication agent on production (by saying schema and data already exist)? Basically we don't want to snapshot using FTP or bcp over the WAN because it is going to be very slow.
If this is possible, will there be any validation problems?
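If the 'schema and data already exist' route is viable, the subscription setup would be something like this sketch on SQL 2000 (publication and server names are placeholders; @sync_type = 'none' assumes the restored copy is exactly in sync through the last log backup):

EXEC sp_addsubscription
     @publication = N'MyPub',
     @subscriber = N'WestCoastSrv',
     @destination_db = N'MyDb',
     @sync_type = N'none' -- no initial snapshot; schema and data assumed present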
Suppose I want to replicate data from server A to server B using snapshot replication. I did the snapshot replication for the first time, and server B got a snapshot of server A.
The next time I run the snapshot, I want only the incremental data to be replicated, not all of it. Is this possible with snapshot replication? If not, which type of replication should I use?
We have a training server and I've had a request that after each training session, we have the ability to quickly roll the training server back to its previous state so that the next group of people can be trained with the same examples.
Doing a restore requires that maintenance (defrag, cache warm-up, etc.) also run afterwards before the server performs fast enough.
So I was thinking of snapshots as an alternative. When you roll back to a snapshot, does that invalidate what's in the cache or have any adverse effect on query plans, stats, or indexes?
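For reference, the cycle I'm considering looks like this sketch (the names and paths are placeholders, and NAME must be the source database's logical data file name):

-- before the training session
CREATE DATABASE Training_Snap
ON (NAME = N'Training_Data', FILENAME = N'D:\Snapshots\Training_Snap.ss')
AS SNAPSHOT OF Training

-- after the session, roll everything back
RESTORE DATABASE Training FROM DATABASE_SNAPSHOT = N'Training_Snap'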