We've had a new server set up with SQL 2012 and I'm in the process of moving data to it from a 2008 (SP2) server.
Details are as follows:-
2012 instance:- Microsoft SQL Server 2012 - 11.0.5058.0 (X64)
May 14 2014 18:34:29
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)
2008 instance:- Microsoft SQL Server 2008 (SP2) - 10.0.4000.0 (X64)
Sep 16 2010 19:43:16
Copyright (c) 1988-2008 Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7600: ) (VM)
I don't want to do a backup/restore routine because there are collation conflicts on the 2008 server. I've created the database and tables on the 2012 instance, and now I want to transfer the data from the 2008 instance to the 2012 one.
The 2012 instance has a linked server to the 2008 instance. I was trying to use sp_MSForEachTable (I know, it's old and will probably disappear shortly), but that doesn't seem to work properly because some of the columns have an identity set up.
Some of the tables have upwards of 10 million records in them and are quite sizeable. How can I achieve the transfer without a backup/restore?
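For what it's worth, the identity problem is usually just the missing SET IDENTITY_INSERT and explicit column list, which sp_MSForEachTable can't supply. A minimal per-table sketch, run on the 2012 instance and assuming a hypothetical linked server named [SQL2008], source database SourceDB, and a table dbo.Customers that already exists on both sides:

[CODE]
-- Run on the 2012 instance. [SQL2008], SourceDB and dbo.Customers are placeholder names.
SET IDENTITY_INSERT dbo.Customers ON;

INSERT INTO dbo.Customers (CustomerID, Name, CreatedDate)   -- explicit column list is required with IDENTITY_INSERT
SELECT CustomerID, Name, CreatedDate
FROM [SQL2008].SourceDB.dbo.Customers;

SET IDENTITY_INSERT dbo.Customers OFF;
[/CODE]

For the 10-million-row tables, running this in batches (or via the Import/Export wizard / an SSIS package, which generate the same kind of statements) keeps the transaction log on the 2012 side manageable.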
1. Take a subset of data from about 100 tables that have multiple references to other tables in this group of 100, from a first DB.
2. Insert the above data into a second DB, a database that already has data in the 100 tables, while maintaining the correct references.
As a general approach, the best way I can think of doing this is as follows:
1. Create mapping tables for every ID that is referenced in a different table (OldID, NewID).
2. Insert the old data into the new table and output the OldID and NewID into the mapping table.
3. Use that mapping data to make sure all tables that use those IDs have the new IDs in DB2.
This approach is extremely labor intensive both on initial implementation and would require a fairly substantial amount of work to maintain going forward.
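For step 2, MERGE with an OUTPUT clause can insert the rows and capture the OldID/NewID pairs in one pass (a plain INSERT ... SELECT can't output source columns). A minimal sketch, assuming a hypothetical dbo.Customer table with an identity key in both databases:

[CODE]
-- Hypothetical example: copy rows from DB1 to DB2 and record OldID -> NewID.
CREATE TABLE #CustomerMap (OldID int NOT NULL, NewID int NOT NULL);

MERGE INTO DB2.dbo.Customer AS tgt
USING (SELECT CustomerID, Name, Email FROM DB1.dbo.Customer) AS src
    ON 1 = 0                              -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (Name, Email) VALUES (src.Name, src.Email)
OUTPUT src.CustomerID, inserted.CustomerID INTO #CustomerMap (OldID, NewID);

-- Dependent tables can then be remapped through the mapping table (step 3):
INSERT INTO DB2.dbo.[Order] (CustomerID, OrderDate, Amount)
SELECT m.NewID, o.OrderDate, o.Amount
FROM DB1.dbo.[Order] AS o
JOIN #CustomerMap AS m ON m.OldID = o.CustomerID;
[/CODE]

It is still one MERGE per parent table, so the maintenance cost the poster describes doesn't disappear, but the mapping capture itself stops being a separate step.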
Is there any simple way to copy tables from one database to another in SQL Server Management Studio or VS 2005? I sometimes split work between home and work, and I often need to copy a table and everything that goes with it (data, stored procedures, etc.) to a different database, but having to create a new database and then copy the data is a pain. Is there an easier way?
Hi. I need to move data from one database table to another across database instances. A simple example of the typical move would be:
[CODE]
INSERT into destination_db.dbo.table1
SELECT column1, column2, column3, column4 from source_db.dbo.table2
[/CODE]
My options are:
1. Create an SSIS package to perform the move.
2. Create sprocs and schedule the data move as jobs.
3. Write .NET code using sprocs to perform the move.
I'll have to move hundreds of thousands of records, so I want the option that provides the best performance. I'm guessing that option 3 will be the slowest.
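For hundreds of thousands of rows, a set-based proc scheduled as a job (option 2) is usually at least as fast as calling the same sprocs from .NET, since the data never leaves the server. A rough batching sketch around the statement above, under the assumption that column1 is a unique key and that both databases are reachable from one instance (add a linked-server prefix otherwise):

[CODE]
-- Move rows in chunks of 50,000 so locks and log growth stay bounded.
DECLARE @batch int = 50000;

WHILE 1 = 1
BEGIN
    INSERT INTO destination_db.dbo.table1 (column1, column2, column3, column4)
    SELECT TOP (@batch) s.column1, s.column2, s.column3, s.column4
    FROM source_db.dbo.table2 AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM destination_db.dbo.table1 AS d
                      WHERE d.column1 = s.column1);   -- assumes column1 uniquely identifies a row

    IF @@ROWCOUNT < @batch BREAK;   -- final (possibly partial) batch copied
END;
[/CODE]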
I have a table, dbo.Table1 (Id, Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8), that I need to split between two tables, dbo.Table2 (Id, Col1, Col3, Col4) and dbo.Table3 (Id, Table2_Id, Col5, Col6, Col7, Col8). But in dbo.Table3 I need to have the Id column from dbo.Table2 populated, since it's a foreign key constraint in dbo.Table3. How do I go about doing this?
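If dbo.Table2.Id simply carries over the Id from dbo.Table1 (i.e. it isn't regenerated by an IDENTITY), the split is just two inserts in foreign-key order. A sketch along those lines (column names taken from the question; adjust if Table3's key column is named differently):

[CODE]
-- Parent rows first, keeping the original Id so the FK value is already known.
INSERT INTO dbo.Table2 (Id, Col1, Col3, Col4)
SELECT Id, Col1, Col3, Col4
FROM dbo.Table1;

-- Child rows then point back at Table2 through that same Id.
INSERT INTO dbo.Table3 (Table2_Id, Col5, Col6, Col7, Col8)
SELECT Id, Col5, Col6, Col7, Col8
FROM dbo.Table1;
[/CODE]

If Table2.Id is an IDENTITY that generates new values instead, capture the old-to-new pairs with MERGE ... OUTPUT into a mapping table (as in the mapping-table sketch earlier in this thread) and join through that for the Table3 insert.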
I have 3 tables with the following schema:
Table <Category> {
    UniqueID,
    LastDate DateTime
}
Assume the following tables with data, following the above schema:
Table Cat1 { (1, D1), (2, D2), (3, D3) }
Table Cat2 { (2, D4), (3, D5), (4, D6) }
Table Cat3 { (1, D7), (3, D8), (5, D9) }
I have a Master table and the schema is as follows:
Table master {
    UniqueId,
    Cat1 DateTime, -- this is the same as the table name
    Cat2 DateTime, -- this is the same as the table name
    Cat3 DateTime  -- this is the same as the table name
}
After inserting the data from all these 3 tables, I want my master table to look like this: Table Master {
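The expected output got cut off above, but assuming the intent is one master row per UniqueId with each CatN column holding that category's LastDate (NULL where the id is missing from that table), a FULL OUTER JOIN sketch would be:

[CODE]
INSERT INTO master (UniqueId, Cat1, Cat2, Cat3)
SELECT COALESCE(c1.UniqueID, c2.UniqueID, c3.UniqueID) AS UniqueId,
       c1.LastDate AS Cat1,
       c2.LastDate AS Cat2,
       c3.LastDate AS Cat3
FROM Cat1 AS c1
FULL OUTER JOIN Cat2 AS c2 ON c2.UniqueID = c1.UniqueID
FULL OUTER JOIN Cat3 AS c3 ON c3.UniqueID = COALESCE(c1.UniqueID, c2.UniqueID);
[/CODE]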
I have two SQL databases, the first one having a schema named sch1 and the second one with a schema named sch2. I need to copy data from several tables belonging to sch1 into sch2 of the second SQL Server DB, and I'd like to create an SSIS 2012 package in order to achieve this goal.
I have a SQL local database in the application. I want to copy a table from one local database to another. The destination table is already created, with one field that is auto-incrementing, another field that is an image, and some other fields that are text. Any solutions on how to do it?
I have a database called marketing, and in it I have a table called products. Right now there are five products in the table, with product_id values 8003, 8004, 8005, 8006 and 8007. I want to create the same table in the database, but my product_id should start from 1, and I only want three products from the old table to be copied into the new table. Any idea how to make this happen?
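One way, assuming the new table's product_id can be an IDENTITY seeded at 1 and that the three wanted products can be picked by their old ids (the table/column names other than product_id below are hypothetical):

[CODE]
-- New table with its own identity starting at 1.
CREATE TABLE marketing.dbo.products_new
(
    product_id   int IDENTITY(1,1) PRIMARY KEY,
    product_name varchar(100) NOT NULL      -- plus whatever other columns products has
);

-- Copy only the three wanted products; product_id is regenerated as 1, 2, 3.
INSERT INTO marketing.dbo.products_new (product_name)
SELECT product_name
FROM marketing.dbo.products
WHERE product_id IN (8003, 8005, 8007)      -- hypothetical choice of three products
ORDER BY product_id;
[/CODE]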
I'd like a really simple way of making a replica of a table. The thing is, I'd like the table name to be a variable. The following code doesn't work; any ideas?
Thanks in advance,
Alph
CREATE Procedure Test
@vMonth as varchar(3)
As
SELECT tbl_Targets.* INTO @vMonth FROM tbl_Targets;
GO
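T-SQL won't accept a variable where a table name is expected, so SELECT ... INTO @vMonth fails at parse time. The usual workaround is dynamic SQL; a sketch of the same procedure:

[CODE]
CREATE PROCEDURE Test
    @vMonth varchar(3)
AS
BEGIN
    DECLARE @sql nvarchar(max);

    -- QUOTENAME guards against injection through the table-name parameter.
    SET @sql = N'SELECT t.* INTO ' + QUOTENAME(@vMonth) + N' FROM tbl_Targets AS t;';

    EXEC sys.sp_executesql @sql;
END
GO
[/CODE]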
I currently have a query that looks like this (@Month and @Year are supplied as parameters):
SELECT
    -- select the sum for each year/month combination using a correlated subquery
    -- (each result from the main query causes another data retrieval operation to be run)
    (SELECT SUM(SalesofProductA) FROM #ABC WHERE [Year] = T.[Year] AND [Month] = T.[Month]) AS [Sum_SalesofProductA]
[Code] ...
Right now, for a particular value of @Month and @Year, I see output like this:
SalesofProductA, SalesofProductB, SalesofProductC
What I would like to see is:
I need to make a query that counts installed developer software for all our developers (from the SCCM database), for licensing purposes. The trick here is that a license should only be counted once per developer, and that should be the highest version. But in the database, the developers can have different versions of the software installed (upgrades) on the same computer, and they often use several computers with different software versions.
So for example: A source table with two developers
-------------------------------------------------------------------
| dev1 | comp1 | Microsoft Visual Studio Ultimate 2013     |
| dev1 | comp1 | Microsoft Visual Studio Professional 2010 |
| dev1 | comp2 | Microsoft Visual Studio Premium 2010      |
| dev2 | comp3 | Microsoft Visual Studio Professional 2010 |
| dev2 | comp4 | Microsoft Visual Studio Premium 2012      |
-------------------------------------------------------------------
I want the result to be:
-----------------------------------------------------
| dev1 | Microsoft Visual Studio Ultimate 2013 |
| dev2 | Microsoft Visual Studio Premium 2012  |
-----------------------------------------------------
I have created a query using cursors that gives me the correct result, but it's way too slow to be acceptable (over 20 min). I also toyed with the idea of creating some sort of CLR proc or function in C# that does the logic, but an SCCM consultant from MS said that if I create any kind of custom objects on the SCCM SQL Server instance, we lose all support from them. So I'm basically stuck with using good old-fashioned T-SQL queries.
My idea now is to use a CTE and combine it with a temp table holding the software and a rank. I feel that I'm on the right track, but I just can't nail it properly.
This is how far I have come now:
IF OBJECT_ID('tempdb..#swRank') IS NULL
    CREATE TABLE #swRank (rankID int NOT NULL UNIQUE, vsVersion nvarchar(255));

INSERT INTO #swRank (rankID, vsVersion) VALUES
    (1, 'Microsoft Visual Studio Ultimate 2013'),
    (2, 'Microsoft Visual Studio Ultimate 2012'),
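Building on that temp table, a windowed query can pick the single highest-ranked (lowest rankID) install per developer without a cursor. A sketch, assuming the SCCM source data is exposed as a hypothetical dbo.InstalledSoftware(dev, comp, software) - swap in the real SCCM view and column names:

[CODE]
WITH ranked AS
(
    SELECT s.dev,
           s.software,
           ROW_NUMBER() OVER (PARTITION BY s.dev
                              ORDER BY r.rankID) AS rn   -- 1 = highest-ranked version for that developer
    FROM dbo.InstalledSoftware AS s
    JOIN #swRank AS r ON r.vsVersion = s.software
)
SELECT dev, software
FROM ranked
WHERE rn = 1
ORDER BY dev;
[/CODE]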
I have found a bunch of duplicate records in our housing database that ideally I need to delete. There are two tables that I need to remove data from: ih_cml_log_entry and ih_cml_log_notes. There is no unique identifier between the tables for a log entry, so I have had to join on the person_ref, log_seq and the date/time of entry. How do I go about deleting the data? I've used the script below to identify what I need to delete:
SELECT *
FROM (
    SELECT cml.person_ref,
           cml.open_date + open_time AS 'datetime',
           cml.open_user,
           cml.log_type,
           ROW_NUMBER() OVER (PARTITION BY cml.person_ref, cml.open_date + cml.open_time, cml.open_user, cml.log_type
                              ORDER BY (SELECT 0)) AS RowNo,
           n.note
    FROM ih_cml_log_entry cml
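Once ROW_NUMBER() has numbered the duplicates, the same pattern can drive the delete directly, because deleting through a CTE removes the underlying rows. A sketch against ih_cml_log_entry alone (the join to ih_cml_log_notes was cut off above, so the notes table would need the equivalent treatment, or a join on the same keys):

[CODE]
WITH dupes AS
(
    SELECT ROW_NUMBER() OVER (PARTITION BY cml.person_ref,
                                           cml.open_date + cml.open_time,
                                           cml.open_user,
                                           cml.log_type
                              ORDER BY (SELECT 0)) AS RowNo
    FROM ih_cml_log_entry AS cml
)
DELETE FROM dupes
WHERE RowNo > 1;      -- keeps one row per duplicate group
[/CODE]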
Hi! I've got a very simple problem I can't find an answer to. I've got an MSDE database and I want to copy a table. I've tried something like: create table2 as select * from table1, with and without the "as", but I can't get it to work and I can't find a good answer on the internet. Very thankful for an answer!
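CREATE TABLE ... AS SELECT is Oracle syntax; in MSDE / SQL Server the equivalent is SELECT ... INTO, which creates the new table and copies the rows in one statement:

[CODE]
-- Creates table2 with table1's column definitions and copies all rows.
SELECT *
INTO   table2
FROM   table1;
[/CODE]

Note that SELECT INTO does not copy constraints, indexes or triggers; those have to be scripted separately if they are needed.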
In a Library Management database we have these tables:
1) Document (DocNo, Doc_type, permalink, inDate)
2) Title (id, DocNo, Main_Title, Other_Title)
3) Author (id, Author_Name, Author_Family, Type -- like: main author, translator, ...)
4) Publisher (id, DocNo, Name, Publisedate, address)
5) Subject (id, DocNo, Subject)
6) Description (id, DocNo, ISBN, description) -- one document may have several ISBNs, etc.
In document table I have 500,000 records.
I want to search for a word in these tables. For example, I want to search for 'Computer'; this word may be in the subject, the title, the description, etc. How can I do this with the best performance?
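With 500,000 documents, LIKE '%Computer%' over several tables can't use ordinary indexes; SQL Server full-text indexing is normally what makes this kind of keyword search fast. A rough sketch of the plain (slower) UNION approach, using the column names listed above, with a note on where full-text would slot in:

[CODE]
-- Simple but unindexed: leading wildcards force scans on every table.
SELECT d.DocNo
FROM   Document AS d
JOIN   Title    AS t ON t.DocNo = d.DocNo
WHERE  t.Main_Title LIKE '%Computer%' OR t.Other_Title LIKE '%Computer%'
UNION
SELECT s.DocNo  FROM Subject     AS s  WHERE s.Subject     LIKE '%Computer%'
UNION
SELECT de.DocNo FROM Description AS de WHERE de.description LIKE '%Computer%';

-- Faster alternative: create full-text indexes on the searchable columns, then
-- replace each LIKE with CONTAINS, e.g. WHERE CONTAINS(s.Subject, 'Computer').
[/CODE]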
I have 6 tables which are very large in row count, and records older than 8 days need to be deleted.
A little info: every day, 300 million records are inserted into the 7 tables below. We should maintain only 8 days' worth of data in these tables. How do I implement a purge script which can delete records in all tables at the same time and with optimized parallelism?
Master table, which has [ID], [Timestamp]. Table name: Sample - 2,578,106.
Child tables: the foreign key [ID] is common to all the tables. There is no timestamp column in the child tables, so the records need to be deleted based on Min(ID) from Sample.
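A common pattern is to purge in batches: compute the cutoff ID once from the master table's timestamp (this assumes [ID] increases with [Timestamp]), then delete the children first and the master last, looping in chunks so locks and log growth stay bounded. A sketch with a hypothetical child table name; repeat the child block per table, or run one SQL Agent job per child table for parallelism:

[CODE]
DECLARE @cutoffID bigint, @batch int = 100000;

-- Everything at or below this ID was inserted more than 8 days ago.
SELECT @cutoffID = MAX(ID)
FROM   dbo.Sample
WHERE  [Timestamp] < DATEADD(DAY, -8, GETDATE());

-- Children first (dbo.Child1 is a placeholder; one such loop per child table).
WHILE 1 = 1
BEGIN
    DELETE TOP (@batch) FROM dbo.Child1 WHERE ID <= @cutoffID;
    IF @@ROWCOUNT < @batch BREAK;
END;

-- Master last, once no child rows reference the purged IDs.
WHILE 1 = 1
BEGIN
    DELETE TOP (@batch) FROM dbo.Sample WHERE ID <= @cutoffID;
    IF @@ROWCOUNT < @batch BREAK;
END;
[/CODE]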
I need to consume a live data feed from a golf tournament. And by consume, I really mean insert (merge) it into our own SQL Server database at regular intervals as the tournament progresses. This site didn't let me upload an XML file, but you can see a sample of the data feed here: URL....
I need to insert this data into 2 tables, Player_Holes and Player_Shots. But while doing the insert, I need to look up several things, such as matching our player ID to theirs via an external_id against the players table, translating the shot types, and some other logic about the process overall.
The columns in my Player_Holes table are: id, player_id, hole_id, round, shots (this is a total # of strokes) and date_created/date_modified. The Shots table is similar: id, player_id, hole_id, round, shot_number, shot_type_id, club, distance, date_created/date_modified.
The only way I know how to do it, is inefficient. I would parse the XML in ColdFusion (please no comments on ColdFusion, that's what we use for webdev), and then loop over it and do inserts for each player, each hole for each round, and the shots would probably be separate for each hole.
It would be so much better and more efficient if I could do it in SQL directly. I've done some research and SQL Server Data Tools looks promising. I've never used it, so I would have to learn, but I'm also not sure whether that would work in this application, since we want to run it as a scheduled task every few minutes.
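Since the feed URL and its exact XML layout aren't shown here, this is only a shape sketch of doing it in T-SQL: load the saved feed file with OPENROWSET(BULK ...), shred it with .nodes(), and MERGE into Player_Holes so a SQL Agent job can rerun it every few minutes without creating duplicates. All element and attribute names below are hypothetical placeholders:

[CODE]
DECLARE @feed xml;

-- Load the saved feed file (path is a placeholder); a job step could refresh the file first.
SELECT @feed = CAST(BulkColumn AS xml)
FROM   OPENROWSET(BULK 'C:\feeds\tournament.xml', SINGLE_BLOB) AS x;

MERGE dbo.Player_Holes AS tgt
USING
(
    SELECT pl.id AS player_id,
           h.value('@hole',  'int') AS hole_id,
           h.value('@round', 'int') AS round,
           h.value('@shots', 'int') AS shots
    FROM  @feed.nodes('/feed/player') AS fp(p)        -- hypothetical element names
    CROSS APPLY p.nodes('hole')       AS fh(h)
    JOIN  dbo.players AS pl
          ON pl.external_id = p.value('@external_id', 'varchar(50)')
) AS src
ON  src.player_id = tgt.player_id
AND src.hole_id   = tgt.hole_id
AND src.round     = tgt.round
WHEN MATCHED THEN
    UPDATE SET shots = src.shots, date_modified = GETDATE()
WHEN NOT MATCHED THEN
    INSERT (player_id, hole_id, round, shots, date_created, date_modified)
    VALUES (src.player_id, src.hole_id, src.round, src.shots, GETDATE(), GETDATE());
[/CODE]

Player_Shots would get a second MERGE of the same shape, with the shot-type translation done through a lookup join in the USING block.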
If F.parent_id (101) = T.team_id (101) and T.team_id (101) = T.parent_folder_id (101), then the output should come out as 'Mobile/c' (this is for F.parent_id = 101).
If F.parent_id = T.team_id and T.team_id != T.parent_folder_id, then the parent_folder_id has to be searched against the team_id column; where it matches, pick the Team_name from that corresponding id.
Ex: F.parent_id = 202 matches T.team_id (202), but this T.team_id (202) does not match T.parent_folder_id (200), so this T.parent_folder_id (200) has to be searched against T.id (200); if T.id (200) now matches T.parent_folder_id (200), then it has to give the names from the start of the hierarchy.
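Walking parent_folder_id back up until a row is its own parent is what a recursive CTE does naturally, and it can build the path (e.g. 'Mobile/c') as it climbs. The exact table definitions aren't shown above, so the sketch below only assumes tables F(parent_id) and T(id, team_id, parent_folder_id, Team_name) as named in the question; treat it as a shape to adapt:

[CODE]
WITH hierarchy AS
(
    -- Anchor: the team row matched by F.parent_id.
    SELECT t.id,
           t.team_id,
           t.parent_folder_id,
           CAST(t.Team_name AS varchar(4000)) AS full_path
    FROM   T AS t
    JOIN   F AS f ON f.parent_id = t.team_id

    UNION ALL

    -- Recurse: look parent_folder_id up in T.id and prepend that row's name,
    -- stopping once a row is its own parent.
    SELECT p.id,
           h.team_id,
           p.parent_folder_id,
           CAST(p.Team_name + '/' + h.full_path AS varchar(4000))
    FROM   hierarchy AS h
    JOIN   T AS p ON p.id = h.parent_folder_id
    WHERE  h.id <> h.parent_folder_id            -- not yet at the root
)
SELECT team_id, full_path
FROM   hierarchy AS h
WHERE  h.id = h.parent_folder_id;                -- keep only the fully resolved paths
[/CODE]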
There will only be a max of 3 Country entries for each Employee. So I want to select the EmployeeId and get the three CountryIds so it would look like this:
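The expected layout got cut off above, but assuming a hypothetical EmployeeCountry(EmployeeId, CountryId) table, numbering each employee's rows and folding them into three columns gives one row per employee:

[CODE]
WITH numbered AS
(
    SELECT EmployeeId,
           CountryId,
           ROW_NUMBER() OVER (PARTITION BY EmployeeId ORDER BY CountryId) AS rn
    FROM   dbo.EmployeeCountry          -- hypothetical table/column names
)
SELECT EmployeeId,
       MAX(CASE WHEN rn = 1 THEN CountryId END) AS CountryId1,
       MAX(CASE WHEN rn = 2 THEN CountryId END) AS CountryId2,
       MAX(CASE WHEN rn = 3 THEN CountryId END) AS CountryId3
FROM   numbered
GROUP BY EmployeeId;
[/CODE]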
I have a requirement to create nested tables in SQL Server. How do I create them? Just to give some background, I am trying to move the RDBMS from Oracle to SQL Server.
The structure of the tables is as follows. I have a table 'Employees' with address as one of the columns. I have one more table with the columns Street, Town, Dist, State. When I query the 'Employees' table, I should see the attribute names and values of all the columns of the address table in the address column.
Employees: with columns: ID, FirstName, LastName, dept, gender, dob, address
Address (Nested table): with columns : Street, Town, Dist, State
This was done in Oracle using nested tables and user-defined data types. What is the alternative for this in SQL Server? How can I achieve this requirement in SQL Server?
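SQL Server has no direct equivalent of Oracle nested tables; the usual substitutes are a plain child table joined through a foreign key, or (on newer versions) an XML/JSON column. A sketch of the child-table approach, with a view that presents the address attributes alongside each employee much as the nested column did in Oracle (data types below are assumptions):

[CODE]
-- Child table replaces the nested table; one address row per employee (or several, if needed).
CREATE TABLE dbo.Address
(
    EmployeeID int NOT NULL REFERENCES dbo.Employees (ID),
    Street     nvarchar(100),
    Town       nvarchar(100),
    Dist       nvarchar(100),
    State      nvarchar(100)
);
GO

-- View that shows the address attributes "inside" the employee row.
CREATE VIEW dbo.vEmployees
AS
SELECT e.ID, e.FirstName, e.LastName, e.dept, e.gender, e.dob,
       a.Street, a.Town, a.Dist, a.State
FROM dbo.Employees AS e
LEFT JOIN dbo.Address AS a ON a.EmployeeID = e.ID;
GO
[/CODE]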
I'm pulling data from XML into tables, but I'm unsure how to link the data after it's imported. This example has names and tasks, and I can pull the data into two tables, but I can't find any way to link the task to the appropriate person. My person and task tables populate without issue, but there's nothing I can find to link the rows together. So in this example Test 1 would go to the first two Tasks and Test 2 would go to the second two work items.
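The link usually comes from shredding the Task nodes relative to the Person node they sit under, rather than in two independent passes. A sketch with hypothetical element names, assuming the XML nests tasks inside each person the way the example describes:

[CODE]
DECLARE @x xml = N'
<People>
  <Person name="Test 1"><Task>Task A</Task><Task>Task B</Task></Person>
  <Person name="Test 2"><Task>Task C</Task><Task>Task D</Task></Person>
</People>';

-- CROSS APPLY keeps each Task attached to the Person node it was nested under,
-- so the person/task pairing survives the import.
SELECT p.value('@name', 'nvarchar(100)') AS PersonName,
       t.value('.',     'nvarchar(200)') AS TaskName
FROM  @x.nodes('/People/Person') AS px(p)
CROSS APPLY p.nodes('Task')      AS tx(t);
[/CODE]

If the real feed links people and tasks by an ID attribute instead of nesting, pull that ID into both inserts and join the two tables on it.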
Is there any way to get the order in which data should be imported into tables when they have primary and foreign key relations?
For example: we have around 170 tables, and when we try to insert data it throws an error stating that table25's data should be inserted first; when we insert data into table25, it says table70, and so on.
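The load order can be computed from the catalog views: tables that reference nothing else load first, then anything that only references already-loaded tables, and so on. A sketch (it assumes there are no circular foreign-key chains between different tables):

[CODE]
WITH fk AS
(
    SELECT parent_object_id AS child_id, referenced_object_id AS parent_id
    FROM   sys.foreign_keys
    WHERE  parent_object_id <> referenced_object_id        -- ignore self-references
),
ordered AS
(
    -- Level 0: tables that reference no other table.
    SELECT t.object_id, 0 AS lvl
    FROM   sys.tables AS t
    WHERE  NOT EXISTS (SELECT 1 FROM fk WHERE fk.child_id = t.object_id)

    UNION ALL

    -- A table can load once every table it references has loaded.
    SELECT fk.child_id, o.lvl + 1
    FROM   fk
    JOIN   ordered AS o ON o.object_id = fk.parent_id
)
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS table_name,
       MAX(lvl)                      AS load_order
FROM   ordered
GROUP BY object_id
ORDER BY load_order, table_name;
[/CODE]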
I'm working on a project where I need to build a small database and then copy it to a server at the client's site. I can't connect directly, so I have to use a VPN connection and use Remote Desktop, copy the database backup from my machine to the cloud, then download it to the client machine. The project is still in the early stages, and the client is still sending me data in CSV files and Excel spreadsheets. I'm periodically needing to do a complete refresh of the database at the client. I've hacked my way through it a couple of times, but I need to know the proper way to do it. I get errors on the restore step, telling me the file is in use.
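The "file is in use" / exclusive-access error on the restore step is usually just other connections (including your own query window) holding the target database open. A hedged sketch of the refresh, run from master, with a placeholder database name and backup path:

[CODE]
USE master;
GO

-- Kick out any existing connections so the restore can get exclusive access.
ALTER DATABASE ClientDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Overwrite the existing database from the copied backup file.
RESTORE DATABASE ClientDB
FROM DISK = N'C:\Backups\ClientDB.bak'     -- hypothetical path/database name
WITH REPLACE;

ALTER DATABASE ClientDB SET MULTI_USER;
GO
[/CODE]

If the data/log file paths differ between your machine and the client's server, add WITH MOVE clauses to the RESTORE to relocate the files.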
Hi, I currently have two 2005 boxes running 9.0.3050 in different DMZs, with the source running a DTS package every night that drops and copies its tables to the destination. It was working up until last Thursday. Nothing has changed in the FW rules and I'm getting no errors. One database is 3.5 GB, which copies fine; the other is 21 GB and runs all night with only a fraction of the tables getting populated. I'm the hardware guy, but I have some understanding of SQL. Thanks in advance for any help.
I have a large table that I need to copy, but I need to generate a new value for my id field using a SPROC and replace my existing ID value. I also have a few mapping tables I need to copy, so I need to store this new ID for later use. I currently have a SPROC that performs all these actions, but it takes about 3 or 4 minutes to complete and completely hogs the CPU time. Thus, I can't perform any actions until it finishes.
I'm looking for a way to run this procedure in the background. Unfortunately, my ID field value is not a GUID nor an IDENTITY column. I've researched Integration Services, but I was unable to find any Data Flow transformations to call a SPROC to retrieve a new ID, nor could I find anything that would let me store my new ID to update my mapping tables. SqlBulkCopy wasn't a good solution either.
If anyone has any insight to this, it would be greatly appreciated. Thanks,