I am not sure how to tackle the following. The environment has servers running SQL Server 2005 and above, each hosting one database (table1 & table2). I have created a new SQL Server 2012 server with a database called Collection (table1 & table2), which will be the central point. I need to do the following (steps 1 and 2 are sketched just after the list):
1. Connect to each server and get the data from a specific table.
2. Add an additional column called "datasourceserver" and populate it with the name of the server the data came from.
3. Schedule this task so that new data is synced to the SQL 2012 server.
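A minimal sketch of steps 1 and 2, assuming a linked server has been created for each source instance; the linked server, source database, and column names below are placeholders:

INSERT INTO Collection.dbo.table1 (Col1, Col2, datasourceserver)
SELECT Col1, Col2, 'SourceServer1'
FROM [SourceServer1].SourceDb.dbo.table1;

Wrapped in a stored procedure (one INSERT per source server, or driven from a table of server names), this can be scheduled with a SQL Server Agent job; an SSIS package with one data flow per source server would achieve the same thing.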
I enabled Data Collection on one of the servers and planned to make it the centralised Management Data Warehouse; I configured data collection on it and can view reports. Next, I went to another server and ran "Set up data collection" to point it at my first instance as the centralised database. The issue is that I can only see reports for the first server. Am I missing something here?
I did exactly as explained in this video [URL] .....
I am using VB as the front-end application and MS SQL Server as the back end. I want to make the database centralized. Does anybody know how to connect to the database across the network?
1) We are providing an e-governance solution for an organization with a centralized database, and the client has provided 5 database servers for it. How should we position the database servers? There are 5,000 concurrent users out of 25,000 total users, SAN storage of approx. 60 TB, a database size of 2 TB, and growth of 1 TB every year.
2) How many instances can we have for the above case?
We have 5 servers running MS SQL Server 2000 SP2. We need latency close to real time.
One server is the central one; changes to centralized tables (articles, for example) happen on this server. On the rest of the servers, changes happen in operational tables (orders, for example).
I have implemented one merge replication publication for all objects in one direction, using the -ExchangeType 2 parameter of the Merge Agent (upload only from subscribers), and another publication containing the centralized tables with -ExchangeType 1 (download only to subscribers).
The -ExchangeType parameter only delays the changes going the other way, so we need to delete from MSmerge_contents and MSmerge_genhistory because we want to discard those other changes.
Schema changes are administered through the first publication, the general one.
The distribution database is on the central server.
Is this correct?
I ask because I have problems with the MSmerge_genhistory index. I think the problem is the use of the same server for the two publications.
This design is meant to make replication administration easier.
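For what it's worth, a small diagnostic sketch, run in the publication database, that lists the articles in each merge publication so you can confirm no table ended up in both publications on the same server; it only uses the standard replication procedures (the publication name is a placeholder):

EXEC sp_helpmergepublication;                                   -- all merge publications in this database
EXEC sp_helpmergearticle @publication = N'GeneralPublication';  -- articles in one publication (name assumed)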
I know how a centralized database works: in a centralized system, I store the data on one server and then replicate that exact information to Server #2 for safekeeping, so all the data ends up on the root server. But I don't know the difference between a cloud database and a centralized database, or how the two work together.
I am new to 2005 mirroring but have read a lot of documentation on it. I want to use mirroring to centralize a bunch of DBs in one location and then back them up from that central point. However, from what I've read, since the mirror is kept in a "recovering" state it cannot be accessed directly. Does that mean I won't be able to back up the mirrors? If so, are there any workarounds or different strategies for this?
I am having a problem with the disk usage reports after creating a centralised MDW on SQL Server 2008; this is the report displayed when you drill down into a specific database.
When I drill down into a database on the server local to the MDW database, the disk usage report is shown correctly.
However, when I drill down into another server's database, I receive the following error: A data source instance has not been supplied for the data source 'DS_TraceEvents'.
I think I am definitely thrashing and not getting anywhere on something I think should be pretty simple to accomplish: I need to pull the total amounts for compartments with different products under the same manifest and the same document number, conditionally based on whether the document types are "Starting" or "Ending", although the values come from the "Adjust" records.
So here is the DDL, sample data, and the ideal return rows
CREATE TABLE #InvLogData (
    Id BIGINT,          -- is actually an identity column
    Manifest_Id BIGINT,
    Doc_Num BIGINT,
    Doc_Type CHAR(1),   -- S = Starting, E = Ending, A = Adjust
    Compart_Id TINYINT,
[Code] ....
I have tried a combination of the below statements but I keep coming back to not being able to actually grab the correct rows.
SELECT DISTINCT(column X) FROM #InvLogData GROUP BY X HAVING COUNT(DISTINCT X) > 1
One further minor problem: I need to make this a set-based solution. This table grows by a couple hundred thousand rows a week; a co-worker suggested using a <shudder/> cursor to do the work, but it would never be performant.
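A hedged, set-based sketch of one reading of the requirement: sum the Adjust amounts only for manifest/document-number pairs that have both a Starting and an Ending record. The Product_Id and Amount columns are assumptions standing in for the elided part of the DDL.

SELECT  d.Manifest_Id,
        d.Doc_Num,
        d.Compart_Id,
        d.Product_Id,                     -- assumed column
        SUM(d.Amount) AS TotalAmount      -- assumed column
FROM    #InvLogData AS d
WHERE   d.Doc_Type = 'A'                  -- the values come from the Adjust records
  AND   EXISTS (SELECT 1 FROM #InvLogData AS s
                WHERE s.Manifest_Id = d.Manifest_Id
                  AND s.Doc_Num     = d.Doc_Num
                  AND s.Doc_Type    = 'S')
  AND   EXISTS (SELECT 1 FROM #InvLogData AS e
                WHERE e.Manifest_Id = d.Manifest_Id
                  AND e.Doc_Num     = d.Doc_Num
                  AND e.Doc_Type    = 'E')
GROUP BY d.Manifest_Id, d.Doc_Num, d.Compart_Id, d.Product_Id;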
I want to write a query where I can see all races as rows and the age ranges as columns.
TblRace
ID, RaceName
TblAgeRange
ID, AgeRange
There is no relationship between these two tables. I need to display the result like below.
Race 17-20 21-30 31-40
A
B
I
W
How do I get this kind of empty data set so that I can fill it in on the front end, or is there a better solution? There will be as many age-range columns as there are rows in TblAgeRange; it is not static, and the above is just an example.
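A minimal sketch, assuming the column names given above: a CROSS JOIN of the two tables produces one row per race/age-range combination, which the front end (or a dynamic PIVOT, since the age ranges are not static) can then shape into the matrix shown above. The Total column is just a placeholder.

SELECT  r.RaceName,
        a.AgeRange,
        CAST(NULL AS int) AS Total        -- placeholder value to be filled in later
FROM    dbo.TblRace AS r
CROSS JOIN dbo.TblAgeRange AS a
ORDER BY r.RaceName, a.AgeRange;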
I am trying to get the PSI Outcome, Expected, and PSIIndex for every month whether there is data or not. I created a CTE of months and left outer joined it with the PSI table, but it is still not pulling every month for every PSIKey.
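A hedged sketch of the usual fix: cross join the month list with the distinct PSIKey values first, and only then left join the PSI table, so every (month, PSIKey) pair survives. The table and column names (dbo.PSI, PSIKey, PSIMonth, Outcome, Expected, PSIIndex) and the date range are assumptions.

WITH Months AS (
    SELECT CAST('2015-01-01' AS date) AS MonthStart
    UNION ALL
    SELECT DATEADD(MONTH, 1, MonthStart)
    FROM Months
    WHERE MonthStart < '2015-12-01'
)
SELECT  m.MonthStart,
        k.PSIKey,
        p.Outcome,
        p.Expected,
        p.PSIIndex
FROM    Months AS m
CROSS JOIN (SELECT DISTINCT PSIKey FROM dbo.PSI) AS k
LEFT JOIN dbo.PSI AS p
       ON p.PSIKey   = k.PSIKey
      AND p.PSIMonth = m.MonthStart
ORDER BY k.PSIKey, m.MonthStart;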
1. Take a subset of data from about 100 tables in a first DB; the tables have multiple references to other tables within this group of 100.
2. Insert the above data into a second DB, a database that already has data in the 100 tables, while maintaining the correct references.
As a general approach, the best way I can think of doing this is as follows:
1. Create mapping tables (OldID, NewID) for every ID that is referenced in a different table.
2. Insert the old data into the new table and output the OldID and NewID into the mapping table (see the sketch below).
3. Use that mapping data to make sure all tables that use those IDs have the new IDs in DB2.
This approach is extremely labor-intensive in the initial implementation and would require a fairly substantial amount of work to maintain going forward.
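On step 2: INSERT ... OUTPUT cannot reference source columns, but MERGE ... OUTPUT can, which makes it a common way to capture the old-to-new ID mapping in one pass. A hedged sketch, where DB1/DB2, Parent/Child, and all column names are placeholders:

DECLARE @ParentMap TABLE (OldID bigint, NewID bigint);

MERGE INTO DB2.dbo.Parent AS tgt
USING (SELECT ParentID, ParentName FROM DB1.dbo.Parent) AS src
   ON 1 = 0                                    -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (ParentName) VALUES (src.ParentName)
OUTPUT src.ParentID, inserted.ParentID         -- old ID from the source, new identity from the target
INTO @ParentMap (OldID, NewID);

-- Child tables are then re-pointed through the mapping:
INSERT INTO DB2.dbo.Child (ParentID, ChildName)
SELECT  m.NewID, c.ChildName
FROM    DB1.dbo.Child AS c
JOIN    @ParentMap AS m ON m.OldID = c.ParentID;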
Here is my query; I don't know whether I'm getting it right.
--Quarter 1
SELECT D.MerchantName, A.MID, A.TID, ISNULL(SUM(A.SumTrxnMon), 0) AS SumTrxnMon, E.FullName, E.DxBEmail
INTO #Quarter1
FROM dbo.tblRPT_Spend AS A
INNER JOIN dbo.tblMer_DeployORetrieveTerm AS B
    ON A.MID = B.MID AND A.TID = B.TID
INNER JOIN
We have a query that joins column A, which is an int, onto column B, which contains only integer values but was created as a varchar and can't be changed to an int at the moment.
Casting column A as a varchar in the ON clause of the left join seems to void the index altogether, and the query just runs forever.
We are talking a few hundred million rows of data in each table.
The temporary solution is to SELECT ... INTO a #Hash table with the correct data type, index it, and then use the #Hash table in the join (sketched below).
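A hedged sketch of that workaround; the table and column names are placeholders:

SELECT  CAST(b.ColB AS int) AS ColB_int,
        b.OtherCol
INTO    #Hash
FROM    dbo.TableB AS b;

CREATE CLUSTERED INDEX IX_Hash_ColB ON #Hash (ColB_int);

SELECT  a.ColA, h.OtherCol
FROM    dbo.TableA AS a
LEFT JOIN #Hash AS h
       ON h.ColB_int = a.ColA;       -- both sides are now int, so the index can be used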
Today I have a very similar situation, only today I am dealing with missing text data, not numeric data.
DECLARE @MissingTextData TABLE (
    RowID int,
    UserID int,
    EmailAddress varchar(20),
    StreetAddress varchar(20)
[code]...
I would like to fill in the NULL columns with data from the other row, and then select the one row that is filled with all the data. I was able to use MAX() for a numeric value, but I am really stumped on the text data; nothing I have tried is working.
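A hedged sketch: MAX() also works on varchar columns, so grouping by the key and taking MAX of each column collapses the partially filled rows into one. It assumes UserID identifies the rows that belong together.

SELECT  UserID,
        MAX(EmailAddress)  AS EmailAddress,
        MAX(StreetAddress) AS StreetAddress
FROM    @MissingTextData
GROUP BY UserID;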
I was assigned a project to read a binary data file (2 GB) and load three of its roughly 50 columns (OrderID, OrderDate, and Price) into a SQL table.
Where do I start? Do I need to convert the entire binary data file into a text file?
I know that SQL Server runs on Windows Server. Is it possible to download data on UNIX using a shell script? Does SQL Server have any binary or command-line client for any operating system other than Windows?
What I need is to split the data into two columns: if the data in column Main starts with 'PR-', output the value to column P, and if it starts with 'CC-', output it to column C (the output needs to be in one table).
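A minimal sketch using CASE expressions; the source table name is a placeholder, and the Main column name follows the post:

SELECT  CASE WHEN Main LIKE 'PR-%' THEN Main END AS P,
        CASE WHEN Main LIKE 'CC-%' THEN Main END AS C
FROM    dbo.SourceTable;

Each row lands in at most one of the two columns (the other stays NULL), so the result remains a single table as required.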
I am using SQL 2012 SE. I have 2 databases, say A and B, with the same structure and relationships, and there are 65 tables in each database. A is already replicating data to database C for 35 of the tables. Every day I need to move the data in A that is newer than getdate()-1 into B for all the tables, and once the move is done I need to delete that data from A; then the same thing the next day, and every day after that. Since this involves 65 tables, it is challenging to identify the insert order. Once the insert order is identified, the delete order will be the reverse of it.
Is there a tool or stored procedure that could generate the insert-order script? The Generate Scripts wizard (data only) scripts out the entire data set, and these databases are almost 400 GB; some tables have 200 million+ rows, so it takes forever.
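Not a full tool, but a hedged sketch that ranks tables by foreign-key depth (referenced tables first), which can serve as the insert order; the delete order is the same list reversed. It excludes self-references, and circular FK chains would need special handling.

WITH fk AS (
    SELECT  f.parent_object_id     AS child_id,
            f.referenced_object_id AS parent_id
    FROM    sys.foreign_keys AS f
    WHERE   f.parent_object_id <> f.referenced_object_id
),
lvl AS (
    SELECT  t.object_id, 0 AS fk_depth
    FROM    sys.tables AS t
    WHERE   NOT EXISTS (SELECT 1 FROM fk WHERE fk.child_id = t.object_id)
    UNION ALL
    SELECT  fk.child_id, l.fk_depth + 1
    FROM    fk
    JOIN    lvl AS l ON l.object_id = fk.parent_id
)
SELECT  OBJECT_SCHEMA_NAME(object_id) AS schema_name,
        OBJECT_NAME(object_id)        AS table_name,
        MAX(fk_depth)                 AS insert_order
FROM    lvl
GROUP BY object_id
ORDER BY insert_order, table_name;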
I have system ID information in the table system_ids. The product IDs and the system ID information column contain a lot of data, but I am looking for two strings in that data to pull into two separate columns. Details below.
Database versions: MS SQL 2008/2012. Table name: system_id's. Column: system id information.
Sample data from the system_id_information column:
######################################## <obj xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:vim25" versionId="5.5" xsi:type="ArrayOfHostSystemIdentificationInfo"><HostSystemIdentificationInfo xsi:type="HostSystemIdentificationInfo"><identifierValue> unknown</identifierValue><identifierType><label>Asset Tag</label><summary>Asset tag of the system</summary><key>AssetTag</key></identifierType>
[Code] .....
I am looking for output of the two columns shown below:
product_id    snumber
654081-B21    MXQ43905SW
For the serial number, this "before" string is common: HostSystemIdentificationInfo"><identifierValue>
and the "after" string is: </identifierValue><identifierType><label>Service tag
The snumber always sits between the before and after strings; the number of characters in snumber varies, and the overall data in each row varies as well.
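A hedged sketch using only string functions. Because the same '<identifierValue>' pattern also appears in front of the Asset Tag value in the sample, it anchors on the "after" string first and then walks backwards (via REVERSE) to the nearest '<identifierValue>' tag. The table and column names follow the post (dbo.system_ids, system_id_information) and may need adjusting; product_id can be pulled the same way with its own before/after strings.

DECLARE @after varchar(200) = '</identifierValue><identifierType><label>Service tag';
DECLARE @tag   varchar(50)  = '<identifierValue>';

SELECT  LTRIM(RTRIM(SUBSTRING(s.system_id_information,
                              a.after_pos - r.rev_pos + 1,     -- start of the serial number
                              r.rev_pos - 1))) AS snumber      -- its length
FROM    dbo.system_ids AS s
CROSS APPLY (SELECT CHARINDEX(@after, s.system_id_information) AS after_pos) AS a
CROSS APPLY (SELECT CHARINDEX(REVERSE(@tag),
                              REVERSE(LEFT(s.system_id_information,
                                           CASE WHEN a.after_pos > 0 THEN a.after_pos - 1 ELSE 0 END))) AS rev_pos) AS r
WHERE   a.after_pos > 0
  AND   r.rev_pos  > 0;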
I am fetching a large amount of data from Teradata into SQL Server using a linked server. I am hitting the error below:
A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)
I wrote code to archive data. I created a control table that sets the maximum number of records to archive and delete per batch, and the code loops until it finishes or until a flag in the table is set to stop archiving. That way I was able to limit the number of records per commit.
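A hedged sketch of that batching pattern; the control, source, and archive table names and columns are placeholders, and the archive table is assumed to have no triggers or foreign keys so it can be an OUTPUT INTO target:

DECLARE @BatchSize int, @StopFlag bit;

SELECT  @BatchSize = MaxRecords,
        @StopFlag  = StopArchiving
FROM    dbo.ArchiveControl;                        -- assumed control table

WHILE @StopFlag = 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM   dbo.SourceTable
    OUTPUT deleted.* INTO dbo.ArchiveTable         -- archive and delete in one statement
    WHERE  CreatedDate < DATEADD(YEAR, -1, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;                       -- nothing left to archive

    SELECT @StopFlag = StopArchiving               -- re-read the stop flag each batch
    FROM   dbo.ArchiveControl;
END;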
The attached T-SQL 2012 below is a query that always comes up with the same results. It does not make any difference what start and end dates are given to it; the results are always the same.
DECLARE @StartDate DATETIME
DECLARE @EndDate DATETIME
SET @StartDate = '2013-07-01'
SET @EndDate = '2015-08-01'
;
WITH Com_House_1 AS (
I'm trying to extract some data from an XML column. In the demo below I would like to obtain the CommandText value, but my attempts so far have been in vain. I'm fairly sure it's just a path issue in the .query() call, but I just can't seem to get it to work.
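Since the demo itself isn't reproduced here, a generic hedged sketch: for pulling a single scalar out of XML, .value() with a singleton path is usually what is wanted rather than .query(); the element names below are placeholders.

DECLARE @x xml = N'<Report><Command><CommandText>SELECT 1</CommandText></Command></Report>';

SELECT @x.value('(/Report/Command/CommandText)[1]', 'nvarchar(max)') AS CommandText;

-- If the real XML declares a default namespace, declare it in the query as well, e.g.:
-- WITH XMLNAMESPACES (DEFAULT 'urn:example')      -- replace with the document's actual namespace
-- SELECT @x.value('(/Report/Command/CommandText)[1]', 'nvarchar(max)') AS CommandText;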