We have one-way transactional replication running, and today we suddenly started getting this error:
The DELETE statement conflicted with the REFERENCE constraint "FK_BranchDetail_Branch".
The conflict occurred in database "LocationDB", table "dbo.BranchDetail", column 'BranchNumber'.
Message: The row was not found at the Subscriber when applying the replicated command.
Violation of PRIMARY KEY constraint 'PK_Branch'. Cannot insert duplicate key in object 'dbo.Branch'. The duplicate key value is (23456)
Disconnecting from Subscriber 'SQLDB03'

Publisher - SQLDB02.LocationDB
Subscriber - SQLDB03.LocationDB

Tables on both servers:
Branch (BranchNumber, primary key)
BranchDetail (BranchNumber, foreign key referencing Branch)

select * from SQLDB02.LocationDB.Branch -- contains: 23456, 'Texas', ...
select * from SQLDB03.LocationDB.Branch -- contains: 23456, 'NULL', ...

The problem is that the BranchNumber in question, 23456, exists in all four tables (Publisher..Branch, Publisher..BranchDetail, Subscriber..Branch, Subscriber..BranchDetail). Yet when I ran a trace on the Subscriber, I saw repeated commands like:

exec [sp_MSdel_dboBranch] 23456 -- which throws the FK violation
exec [sp_MSins_dboBranch] 23456, 'NULL', ... -- which throws the PK violation

I'm guessing it's trying to update the record on the Subscriber by doing a delete + insert, but it can't do either.

Users do not have access to modify the Subscriber table, but they can modify the Publisher table through the UI, and have been doing so for a long time without issue. There is also a job that updates the Publisher table once every night. We started getting this error around noon today.
Our last resort is to reinitialize the subscription off-hours.
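In the meantime, a diagnostic sketch for inspecting the exact pending command on the Distributor (assumes the default distribution database name; the xact_seqno values are placeholders, taken from the failing command's detail in Replication Monitor):

USE distribution;
GO
-- Show the replicated command(s) the Distribution Agent keeps retrying:
EXEC sp_browsereplcmds
    @xact_seqno_start = '0x0000000000000000000000000000',
    @xact_seqno_end   = '0x0000000000000000000000000000';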
I have many new tables for which I need to write insert, update, and delete triggers manually. Is there any way to generate a trigger script that takes a table name as an input parameter and prints/generates the trigger's script?
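Something along these lines could be a starting point; a minimal sketch that just prints a skeleton, assuming all the generated triggers follow one template (the body here is a placeholder for whatever the real triggers do):

CREATE PROCEDURE dbo.usp_PrintTriggerScript
    @TableName SYSNAME
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @cols NVARCHAR(MAX);

    -- Build a comma-separated list of the table's columns
    SELECT @cols = STUFF((
        SELECT ', ' + QUOTENAME(c.name)
        FROM sys.columns AS c
        WHERE c.object_id = OBJECT_ID(@TableName)
        ORDER BY c.column_id
        FOR XML PATH('')), 1, 2, '');

    PRINT 'CREATE TRIGGER trg_' + PARSENAME(@TableName, 1) + '_IUD';
    PRINT 'ON dbo.' + QUOTENAME(PARSENAME(@TableName, 1));
    PRINT 'AFTER INSERT, UPDATE, DELETE';
    PRINT 'AS';
    PRINT 'BEGIN';
    PRINT '    SET NOCOUNT ON;';
    PRINT '    -- columns available: ' + @cols;
    PRINT '    -- TODO: real trigger body goes here';
    PRINT 'END';
END

Calling EXEC dbo.usp_PrintTriggerScript 'dbo.MyTable' then prints a script you can review and run.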
Hi all, I have a table (Products), and the table is like this:
- Product_ID (Primary Key), Not Null, Identity Specification: Yes, Increment 1, Seed 1
- Name ...
- Price ...

When I insert at the Publisher (PC, SQL Server 2005), it inserts correctly, e.g.:

| Product_ID | Name | Price | Inserted by |
| 1          | P1   | X     | Pc          |
| 2          | P2   | Y     | Pc          |

Then I replicate to the Subscriber (Pocket PC, SQL Mobile). When I insert at the Subscriber, it creates strange, big IDs. Is this normal? Is it to avoid insert conflicts? E.g.:

| Product_ID | Name | Price | Inserted by |
| 1          | P1   | X     | Pc          |
| 2          | P2   | Y     | Pc          |
| 3000       | P3   | Z     | PPC         |
| 6000       | P4   | Y     | PPC         |
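If it helps to confirm, the identity ranges merge replication handed out should be visible at the Publisher with something like this sketch (publication/article names are placeholders; from memory, the identity_support, pub_identity_range, identity_range, and threshold columns in the output describe the automatic ranges):

-- Run at the Publisher:
EXEC sp_helpmergearticle
    @publication = N'ProductsPub',
    @article     = N'Products';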
My manager does not want to go through the process of moving 40 GB of data to the subscriber database over the network as would occur during the initialization phase of replication (exact method of replication has yet to be determined). The subscriber db is for Web Access and will be read-only. He wants to back up a db and use tapes to restore the db on the subscriber db server, and then activate replication. I have told him that you have to allow SQL Server to create the objects on the subscriber side and you have to allow SQL Server to control the data flow, that manually creating the objects on the subscriber side will not work. He says that I am incorrect.
Who is right? I haven't been able to find an answer on the Microsoft site or in the newsgroups.
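For completeness: transactional replication does expose an option where the subscription is created from a restored backup rather than a snapshot, which sounds like what he's describing. A sketch, assuming SQL Server 2005+ transactional replication ends up being the method (all names are placeholders, and the publication must be configured to allow initialization from backup):

EXEC sp_addsubscription
    @publication      = N'MyPublication',
    @subscriber       = N'WebSubscriber',
    @destination_db   = N'MyDb',
    @sync_type        = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'D:\Restore\MyDb.bak';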
I'm trying to get a new database created and then run a script to create the tables, relationships, and indexes, and insert default data. All of this happens during the installation of my Windows application. I'm installing SQL Server 2012 Express as a prerequisite of my application and then opening a connection to that installed SQL Server using Windows Authentication.

E.g.: Data Source=ComputerName\SQLEXPRESS;Initial Catalog=master;Integrated Security=SSPI;

Then I run a query from my code to create the database, e.g.: "CREATE DATABASE [MyDatabaseName]".

From this point I run a script using a batch file containing "SQLCMD ... Myscriptname.sql". In my script the tables are created using "USE [MyDatabaseName] GO CREATE TABLE [dbo].[MyTableName] ...". So the question is: should I have [dbo]. as part of my CREATE TABLE T-SQL commands? Can I remove "[dbo]."? Who would be the owner of the database? If I can remove the [dbo]., should I also remove dbo. from the query strings within my code?
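For illustration, a small sketch of what the schema qualifier actually changes (table names are placeholders): an unqualified CREATE TABLE goes to the calling user's default schema, which for a sysadmin connection resolves to dbo anyway.

USE [MyDatabaseName];
GO
CREATE TABLE [dbo].[MyTableName] (Id INT PRIMARY KEY);   -- explicit schema
GO
CREATE TABLE [MyTableName2] (Id INT PRIMARY KEY);        -- lands in the caller's default schema
GO
-- Verify where each table ended up:
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;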
We have just installed SQL and C# Express. We have lots of experience with SQL, but none with C#. With both the SQL and C# apps running, if we create a new table, view, etc., we cannot see that new object from C#; we see only the objects that existed when we opened the apps (each time).
If we close everything and re-open everything from scratch, the new objects show..!?!?
Thank you for any information on this difficult problem.
I'm passing serialised objects to a stored procedure for the purpose of data inserts. I see this as a way to handle multiple-row inserts efficiently.

However, in my limited use of XML data, I am not so sure how to link the data when I have a dependency on another "object" within the serialised XML.
Below is a code snippet showing what I have so far.
The first insert statement works fine, but how do I retrieve the identifier created by the DB? I want to use an SQL statement that finds the record in the table based on the XML representation (of the PluginInfo), allowing me to insert the ConfigurationInfo with the correct reference to the PluginInfo.
DECLARE @Config NVARCHAR(MAX)
DECLARE @Handle AS INT
DECLARE @TransactionCount AS INT
SELECT @Config = '
<ConfigurationDirectory >
  <ConfigurationInfo groupKey="Notifications" sectionKey="App.Customization.PluginInfo"
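One pattern that avoids re-finding the parent row by its XML content: capture the generated identities together with the natural key at insert time via an OUTPUT clause, then join the children back on that key. A minimal self-contained sketch (table and column names are assumptions, not the real schema):

DECLARE @ConfigXml XML;
SET @ConfigXml = N'
<ConfigurationDirectory>
  <ConfigurationInfo groupKey="Notifications" sectionKey="App.Customization.PluginInfo" />
</ConfigurationDirectory>';

-- Stand-ins for the real tables:
DECLARE @PluginInfo TABLE (PluginInfoId INT IDENTITY(1,1), GroupKey NVARCHAR(100), SectionKey NVARCHAR(200));
DECLARE @Captured   TABLE (PluginInfoId INT, GroupKey NVARCHAR(100), SectionKey NVARCHAR(200));

-- Insert the parents and capture each new identity alongside its natural key:
INSERT INTO @PluginInfo (GroupKey, SectionKey)
OUTPUT inserted.PluginInfoId, inserted.GroupKey, inserted.SectionKey INTO @Captured
SELECT n.value('@groupKey',   'NVARCHAR(100)'),
       n.value('@sectionKey', 'NVARCHAR(200)')
FROM @ConfigXml.nodes('/ConfigurationDirectory/ConfigurationInfo') AS T(n);

-- @Captured now maps groupKey/sectionKey to the DB-generated ids, ready to be
-- joined against when inserting the dependent ConfigurationInfo rows.
SELECT * FROM @Captured;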
I have a small problem with transactions; I don't know how this is happening. To my knowledge, if you start a transaction, only the DML statements inside it should be affected by a rollback or commit. But in MS SQL Server 7.0 and 6.5 it rolls back all the commands, including DDL, which (I thought) it shouldn't. Please let me know if anyone can explain this. For example:

begin transaction t1
create table t1
drop table t2

Then execute the statements below:

select * from t1 -- this query gives you the table, without data
select * from t2 -- you will receive an error that there is no such object

But if you roll back, t1 won't be there in the database, and the dropped table t2 will come back. Please explain how this can happen.
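For reference, a self-contained repro of what I'm describing (t2 stands in for a table that existed before the transaction):

create table t2 (id int)
go
begin transaction t1
    create table t1 (id int)
    drop table t2
rollback transaction t1
go
-- After the rollback, t1 is gone and t2 is back:
select object_id('t1') as t1_object_id, object_id('t2') as t2_object_id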
I have a production database which uses merge and snapshot replication. The merge replication is for 3 tables; the snapshot replication updates the rest of the data once daily. I use the full recovery model and perform database backups (full, differential) and transaction log backups.

I have a database optimization plan which runs 4 times a week. This plan performs an integrity check and rebuilds the indexes. The optimization plan grows the transaction log by about 8 MB each time it runs, and we are running out of space on the drive for our log files. The space is not being reused.

I saw another post where Gail Shaw suggested using SELECT name, log_reuse_wait_desc FROM master.sys.databases to see why the log space is not being reused. On the database in question, this returns "REPLICATION".

A colleague tried backing up the transaction log a couple of times this weekend to truncate it; she was going to run a DBCC SHRINKFILE afterwards, but the truncation failed. Again, looking into it, replication seems to have prevented the truncation.

We are looking at stopping the merge replication, or even removing it, to truncate the log file, and then recreating the merge replication. How should I handle shrinking the log file for now, and what checks or changes can I make that will allow the transaction log space to be reused?
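Before breaking replication, it may be worth confirming how much of the log is genuinely pinned; a diagnostic sketch (the database name is a placeholder):

-- How big is each log and how much of it is actually in use?
DBCC SQLPERF(LOGSPACE);

-- Re-check what is pinning reuse after each log backup:
SELECT name, log_reuse_wait_desc
FROM master.sys.databases
WHERE name = N'MyDb';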
I need to drop and recreate indexes on some of my tables that are currently being replicated. I am not sure how this will affect the ongoing replication. Will this cause a problem for me? Please help.
I have two replicated databases:
1. Database 1 is a live database where all live applications point.
2. Database 2 is a replica of Database 1 and all reporting / BI applications point to this db. This is read-only.
Is it possible for reporting applications, pointing to Database 2, to create temporary tables in the read-only database?
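A quick way to check this: #temp tables are created in tempdb, not in the current database, so a sketch like the following should work even while connected to the read-only replica (the database name is a placeholder):

USE Database2;   -- the read-only reporting database
CREATE TABLE #scratch (id INT);   -- lives in tempdb, which is writable
INSERT INTO #scratch (id) VALUES (1);
SELECT id FROM #scratch;
DROP TABLE #scratch;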
So we've got ServerA.dbA and ServerB.dbB, and it looks like nobody thought about that MView. It works through a linked server now, so sure enough it sucks..

Then it gets even better: we've got a piece of the DW on ServerRep with replicated pieces of ServerA and ServerB. When I asked how it works: "Don't worry man, we took care of everything, go directly to ServerRep for everything..."

And I see that our MView sucks even deeper there, with practically zero performance. I think what's going on is that it still hits that linked server between ServerA.dbA and ServerB.dbB in the background. I can still use it on ServerRep with the NOEXPAND hint, which means it was properly defined...

Can we have that scenario at all? Or should we replicate all the tables to ServerRep and then define/redefine this view directly on the replicated server?
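For the second option, a sketch of what the local-only version could look like once both tables are replicated to ServerRep (all names are placeholders; indexed views need SCHEMABINDING, two-part names, and COUNT_BIG(*) when grouping):

CREATE VIEW dbo.MView
WITH SCHEMABINDING
AS
SELECT a.KeyCol, b.OtherCol, COUNT_BIG(*) AS row_cnt
FROM dbo.TableFromA AS a
JOIN dbo.TableFromB AS b ON b.KeyCol = a.KeyCol
GROUP BY a.KeyCol, b.OtherCol;
GO
CREATE UNIQUE CLUSTERED INDEX IX_MView ON dbo.MView (KeyCol, OtherCol);
GO
-- Then query it locally, with no linked-server hop:
SELECT KeyCol, OtherCol, row_cnt FROM dbo.MView WITH (NOEXPAND);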
I moved this from another forum because it seems more related to replication the longer I look into it! The following code often dies on a deadlock. I have also seen "Spid x blocked by Spid x", where x is the spid of this connection. The view selects from several tables and some other views as well, and many of the underlying tables are being updated by replication. I have seen cases where the replication Insert proc participated in the deadlock, and the table being inserted into is joined twice in the view, suggesting that somehow this causes the deadlock or makes a deadlock more likely?
SELECT * INTO dbo.tbl_stg_LoansMarketingFirstBorrower FROM view_loans_marketing_first_borrower OPTION (MAXDOP 1)
I do not understand how selecting from a view can deadlock with a proc that does not reference tbl_stg_LoansMarketingFirstBorrower. Any suggestions on how to diagnose this? I used trace flag 1204 to identify the proc and underlying table.
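Two likely next steps, sketched below (the database name is a placeholder): trace flag 1222 gives a much fuller deadlock graph than 1204, and row versioning would let the staging SELECT read a consistent snapshot instead of taking the shared locks that collide with the replication inserts (assumes SQL Server 2005+; switching on READ_COMMITTED_SNAPSHOT needs exclusive access to the database).

-- Richer deadlock output than trace flag 1204 (written to the errorlog):
DBCC TRACEON (1222, -1);

-- Let readers see a versioned snapshot instead of taking shared locks:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;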
I have several databases set up for transactional replication to another instance of SQL Server 2005 for failover purposes. Today I restored one of those replicated databases to my development machine and discovered two surprising problems:

1) The Default Values settings in the replicated tables are missing. They are there in the publishing tables, just as they were before I set up replication; however, they are not in the subscribing tables. This is not such a big issue, since I tend to send all default values in insert queries as necessary.

2) The second problem is more of an issue, since I use auto-numbered Identity columns in my tables (yes, I know that's just plain lazy...). Anyway, in the replicated tables, "Is Identity" is indeed set to Yes, but despite the fact that there are thousands of records with incrementally unique IDs, SQL Server is trying to insert a record starting with 1. This, of course, throws a PK constraint error.

Obviously, if I am to use them for failover purposes, these replicated databases need to be identical in every way.

So, what did I do to cause this situation, and how do I fix it?
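On the identity front, an immediate repair on the restored copy is to reseed from the data already present; a sketch (the table name is a placeholder). As for the missing defaults, if I recall correctly that is governed by the articles' @schema_option bits, which by default do not script every constraint to the subscriber.

-- If the current identity value is below the column's max, reset it to that max:
DBCC CHECKIDENT ('dbo.MyTable', RESEED);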
I have a publisher database set up for merge replication, using a parameterized filter with join filters.

I also have a stored procedure that does deletes and inserts on the table the parameterized filter is applied to, to aid in changing a subscriber's eligibility to receive such-and-such data. I have observed that running the stored procedure takes extraordinarily long, and as a result the log file grows to 1.5 - 2.5 times the database size.

At first I reasoned that this might be because I had it set up to use precomputed partitions, and changing the filter data requires recalculating the partitions. As a test, I turned off precomputed partitions. Didn't work. I turned on "optimize synchronization", AKA "keep_partition_changes", which normally is not available when precomputed partitions are on, and that didn't work either.

At this point I think I can rule out precomputed partitions as the problem, but I'm stumped as to what else I can do to reduce the amount of log writes required. We do need the parameterized filters and join filters, so those can't go.
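One thing worth trying, sketched below with hypothetical names: break the proc's deletes and inserts into small batches, with log backups between them, so the per-row overhead from the merge change-tracking triggers sits in many short transactions instead of one huge one.

DECLARE @SubscriberId INT;
SET @SubscriberId = 42;   -- hypothetical filter value

WHILE 1 = 1
BEGIN
    -- Short transactions let log backups reclaim space between batches:
    DELETE TOP (5000) FROM dbo.EligibilityFilter
    WHERE SubscriberId = @SubscriberId;

    IF @@ROWCOUNT = 0 BREAK;
END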
p.s. Does anyone have any needles I can borrow? I think sticking them in my eyes would be nicer than working with SSIS.
===================================
An error occurred while objects were being copied. SSIS Designer could not serialize the SSIS runtime objects. (Microsoft Visual Studio)
===================================
Could not copy object 'Preparation SQL Task' to the clipboard. (Microsoft.DataTransformationServices.Design)
------------------------------ For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%u00ae+Visual+Studio%u00ae+2005&ProdVer=8.0.50727.762&EvtSrc=Microsoft.DataTransformationServices.Design.SR&EvtID=SerializeComponentsFailed&LinkId=20476
------------------------------ Program Location:
at Microsoft.DataTransformationServices.Design.DtsClipboardCommandHelper.SerializeRuntimeObjects(ICollection logicalObjects)
at Microsoft.DataTransformationServices.Design.ControlFlowClipboardCommandHelper.InternalMenuCopy(MenuCommand sender, CommandHandlingArgs args)
===================================
Invalid access to memory location. (Exception from HRESULT: 0x800703E6) (Microsoft.SqlServer.ManagedDTS)
------------------------------ Program Location:
at Microsoft.SqlServer.Dts.Runtime.PersistImpl.SaveToXML(XmlDocument& doc, XmlNode node, IDTSEvents events)
at Microsoft.SqlServer.Dts.Runtime.DtsContainer.SaveToXML(XmlDocument& doc, XmlNode node, IDTSEvents events)
at Microsoft.DataTransformationServices.Design.DtsClipboardCommandHelper.SerializeRuntimeObjects(ICollection logicalObjects)
I have transactional replication set up from Server A to Server B. I want to move the subscriber from B to C. What would be the best approach?

1. Back up the DB from Server B and restore it on Server C; set up replication between A & C.
2. Set up transactional replication between A & C alongside A & B; test that A & C is working fine, then remove B.

If I go with approach 2, I have to replicate approx. 70 GB of data, so if I run both replications on Server A that will stress it, with 140 GB of data moving out. How do I control this large movement? Can the replication be synched manually?
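Approach 1 can avoid moving the 70 GB over the wire at all: restore B's backup on C, then add the subscription without a snapshot. A sketch (names are placeholders; the data on C must genuinely match the publisher at that point):

EXEC sp_addsubscription
    @publication    = N'MyPub',
    @subscriber     = N'ServerC',
    @destination_db = N'MyDb',
    @sync_type      = N'replication support only';   -- no snapshot; assumes data is already in sync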
I have added a new table to a database (existing publication) using T-SQL. I then opened the publication properties and ticked the new table/article so that it would be added for the subscriber. It did not show up.
I did not use a snapshot to initialize the subscription.
Immediate Synch is 0. allow_anonymous is 0.
I marked the subscription to be reinitialized. When I start the Snapshot Agent I get '0% A snapshot was not generated because no subscriptions needed initialization'.

What could I be doing wrong, or have missed? Do I need to drop and recreate the subscription to get the article to show up?
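With immediate_sync = 0, the sequence that usually works when adding an article through T-SQL looks something like this sketch (publication/table names are placeholders):

EXEC sp_addarticle
    @publication   = N'MyPub',
    @article       = N'NewTable',
    @source_object = N'NewTable';

-- Attach the new article to the existing subscriptions:
EXEC sp_refreshsubscriptions @publication = N'MyPub';

-- Then run the Snapshot Agent; it should generate a snapshot
-- covering only the new article.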
Dear All, I don't know the exact reason why, but I'm guessing the reason each table has a new GUID column is transactional replication with updatable subscribers. I actually don't need this. Can I change this to plain transactional replication, or do I need to drop the column in each table? Please guide me.
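To confirm where the column came from, a small check sketch (assumes SQL 2005+): merge replication marks its uniqueidentifier column as ROWGUIDCOL, while updatable subscribers add a plain uniqueidentifier version column.

SELECT OBJECT_NAME(c.object_id) AS table_name,
       c.name AS column_name,
       c.is_rowguidcol
FROM sys.columns AS c
WHERE c.system_type_id = TYPE_ID('uniqueidentifier');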
Arnav Even you learn 1%, Learn it with 100% confidence.
I have transactional replication set up between one publisher and one subscriber on the same server. Now I need to add a new subscriber to the existing publisher. The publisher database name is DB_A and Subscriber 1 is DB_B, so the new subscriber will be DB_C. Is this kind of setup possible on one server?

If yes, then at the time of reinitialization is it going to apply the snapshot to DB_B as well as DB_C? Also, let's say that due to a disk error DB_B gets corrupted; will data still be replicated between DB_A and DB_C? (Assume the publisher, subscriber 1, and subscriber 2 sit on individual disks.)
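On the reinitialization question: it can be scoped to a single subscription rather than all of them. A sketch run at the publisher (the publication name is a placeholder):

EXEC sp_reinitsubscription
    @publication    = N'MyPub',
    @subscriber     = @@SERVERNAME,   -- all databases live on the same instance here
    @destination_db = N'DB_C';        -- only this subscription is reinitialized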
We have a database which is (a subset of tables are) replicated to another via transactional replication. Whilst most changes made at the published database reach the subscriber within a matter of seconds, we have a SQL Agent job which performs a calculation in the published database and then immediately exports data from the subscriber using log shipping. The result is that the calculated changes do not make it through to the exported transaction logs in time.
Is there a way to manually "refresh" the subscriber databases using T-SQL?
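Not a true "refresh" command, but two pieces that could be scripted into the job between the calculation step and the export, sketched here (job and publication names are placeholders): start the Distribution Agent job on demand, and post a tracer token to confirm the changes have landed before continuing.

-- Kick the Distribution Agent (if it runs on a schedule rather than continuously):
EXEC msdb.dbo.sp_start_job @job_name = N'MyPub-DistributionAgent';

-- At the publisher, in the published database: post a tracer token to
-- measure when changes reach the subscriber:
EXEC sp_posttracertoken @publication = N'MyPub';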
I am working with replication on SQL Server 2005 (Standard Edition SP1). The scenario: from time to time one of the coding teams wants to alter objects (mostly tables) being replicated in the publication database, but can't, due to the error raised on adding a column: "Cannot add columns to table 'table1' because it is being published for merge replication", as in SQL Server 2000.

Meanwhile, others want to alter the replicated objects at the subscriber end (the names of objects, adding columns to a replicated table, etc.).

We were working on SQL Server 2000, and to implement this scenario I always went through the long exercise of disabling and reconfiguring the replication setup.

After that, in order to alter the objects in the publication database, a simple DDL script was executed once replication was disabled.

To handle the requirements at the subscriber end, I created tables with the same structure as the replicated tables and replicated the data into these self-created tables by customizing code in the triggers (ins_C9D57350-605A-4D87-85C0-0DB645F1CEC8, etc.) of the replicated tables.

I also have scripts of all the replicated tables' triggers, but when I rerun the Snapshot Agent it replaces the trigger names with new ones, so at that point I lose my changes and have to put the code back into all the tables' triggers. Is there any way to force SQL Server 2000/2005 to generate and rerun the snapshot but use the already-generated GUIDs of the articles instead of creating new ones?

Please let me know if there is any solution/feature or easier way in SQL Server 2005 to avoid this annoying exercise so I can implement this scenario.
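On the publisher-side schema changes, a hedged note: SQL Server 2000 has dedicated procedures for adding and dropping columns on published tables, and SQL Server 2005 can replicate ordinary ALTER TABLE statements when the publication has DDL replication (replicate_ddl) enabled. A sketch of the 2000-era route:

-- SQL Server 2000: add a column to a published table through replication
-- (table/column names are placeholders):
EXEC sp_repladdcolumn
    @source_object = N'table1',
    @column        = N'NewCol',
    @typetext      = N'INT NULL';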
I have a transactional replication environment that creates subscribers on another server as a staging area for an ETL process to a data warehouse application on a third server, which is the report repository. Currently the ETL process runs every 10 minutes, performs its work across approx. 150+ subscriber databases, and consolidates the results to the data warehouse.

I have an SLA of 2 minutes. I'd like to rework the ETL process (which runs as an SSIS job at the moment) to be specific to a single database, and fire that one ETL process only when changes have been applied to that subscriber database. Of these 150+ databases, generally only about 8-10 are updating the subscriber at any given time, per Replication Monitor. I'm thinking that if I only have a few transactions to apply for a single db, the ETL would run in seconds, dynamically, as the subscriber is updated.

The issue is how to fire the ETL process upon completion of updates to the subscriber DB. I'm thinking of using sp_start_job, passing the DBID, to update the warehouse, but I'm unsure whether this is possible, and if so, where to trigger it.
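sp_start_job can't take parameters, so one common workaround, sketched here with hypothetical names, is to have the subscriber side enqueue its database name in a control table and then start a single parameterless job that drains the queue:

-- One-time setup (hypothetical control table):
CREATE TABLE dbo.EtlQueue
(
    database_name SYSNAME  NOT NULL,
    queued_at     DATETIME NOT NULL DEFAULT (GETDATE())
);
GO
-- Run whenever a subscriber database has new changes:
INSERT INTO dbo.EtlQueue (database_name) VALUES (DB_NAME());
EXEC msdb.dbo.sp_start_job @job_name = N'Warehouse ETL';
-- sp_start_job raises an error if the job is already running; that can be
-- caught and ignored, since the running job will drain the queue anyway.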
This procedure runs and I see my GM names come up one at a time but I get no tables created.
Sorry about the sloppy code - I'm not a real pro.
declare @GM varchar(100)   -- varchar, so the value isn't blank-padded like char(100)
declare @SQL varchar(2000)

declare spot cursor scroll for
    select GM from BIM_Historical_Performance.dbo.Performance_Master

open spot
fetch first from spot into @GM

while @@Fetch_Status = 0
begin
    print @GM   -- the names coming up one at a time

    set @SQL = 'select Comp, Billto, Cust_name as Customer, Branch, BAC, [MgMt_Type], '
             + 'BondValue as Bondbucks, BondValueAlloc as Allocbucks, c1 as Curcode, '
             + 'u1 as CurUnderP$, s1 as Cur3mthBIMsales, p1 as performance, I1 as Inside, '
             + '[IS_since] as Insidesince, O1 as Outside, [OS_since] as Outsidesince, '
             + 'c2 as [CodeM-1], c3 as [CodeM-2], c4 as [CodeM-3], c5 as [CodeM-4], '   -- hyphenated aliases must be bracketed
             + 'c6 as [CodeM-5], GM '
             + 'into ' + quotename(@GM) + ' '
             + 'from BIM_Historical_Performance.dbo.Performance_Master '
             + 'where gm = ''' + @GM + ''''   -- the GM value needs quotes inside the dynamic SQL

    exec (@SQL)   -- the original only built the string and never executed it, hence no tables

    fetch next from spot into @GM
end

close spot
deallocate spot