I am using DTS to transfer some tables from one server to another as part of a migration. We want to be down for as little time as possible, but we need the most up-to-date copy of the database tables in question.
I am currently testing the transfer process in our test environment by migrating the data from one database to another on the same SQL instance.
There are 7 tables to transfer and the total size of the database is 450 MB (with around 117 MB used). The two largest tables have around 17,000 records each.
One table (the header) has no text column and takes just a few seconds to transfer. The other table (the detail) has two columns, one of which is a text column (actually, it's not fair to call it the detail table; the relationship is actually one-to-one, but for the sake of this discussion, let's leave it at that).
The header takes seconds to transfer, but the detail takes up to 18 minutes.
Physically, our test server is quite robust: two processors, a three-disk RAID 5 for the data files, and a separate RAID 1 partition for the logs. Performance counters don't indicate any real issues: during the transfer, the disk utilization on the data partition occasionally spikes to a high level but comes right back down until the next spike (the spikes being separated by about 1 minute). No issues with memory, paging, or CPU.
I have removed the clustered index on the affected table as well as the PK. No help.
Are text columns just slow? Is there something that I am missing?
I have an MSSQL 2000 database with about 30 tables in it. On one of those tables I've defined a full-text index on an image field. The table holds around 500 records with binary files.
It functioned well for a time, but now when I try to upload a file into the table it goes extremely slowly (300 KB takes over 3 minutes).
I tried disabling "change tracking", but this didn't help a thing.
Adding blobs to other tables (without a full-text index on them) is still fast.
What could be the reason that the uploading goes so slowly?
To extract data from an ODBC source, try the following:
- Add an ADO.Net Connection Manager.
- Edit the Connection Manager and select the ODBC data provider.
- Configure the Connection Manager to use your DSN or connection string.
- Add a Data Flow Task to your package.
- Add a Data Reader Source adapter to your data flow.
- Edit the Data Reader Source adapter to use the ADO.Net connection manager that you added.
- Edit the Data Reader Source to query for the data you wish to extract.
hth
Donald
Using the steps outlined above as described by Donald Farmer in another post on this forum, I have created an SSIS package which retrieves data from Lotus Notes 6.55. The DSN referenced by the ADO.Net Connection Manager connects to Lotus Notes via the NotesSQL ODBC driver 3.02g.
When I execute the dataflow, data is transferred from Lotus Notes, but the data transfer rate is extremely slow compared to SQL 2000 DTS. In SQL 2000 DTS, we can retrieve just under half a million records from Lotus Notes in about 13 minutes. Utilizing the same DSN on the same machine, SQL 2005 SSIS completes the transfer in about 57 minutes.
Is there anything that can be done to improve the performance in SSIS to retrieve data from Lotus Notes via ADO.Net ODBC?
Is it possible/advisable, when transferring very large amounts of data from server to server, to first transfer the data to a new table, and second, alter the new table to add indexes, defaults, etc. based on the original table?
If it is, what flow item would be used to transfer/alter the indexes and defaults?
I'm very new to SSIS, so the more detail you can give the better.
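If you do go that route, the second step is plain T-SQL that an Execute SQL Task can run once the data flow finishes. A hedged sketch; the table, column, and constraint names below are illustrative, not from any real schema:

Code:
-- Build the indexes and defaults only after the bulk load has completed,
-- which is generally faster than loading into an already-indexed table:
CREATE CLUSTERED INDEX IX_NewTable_ID
    ON dbo.NewTable (ID);

ALTER TABLE dbo.NewTable
    ADD CONSTRAINT DF_NewTable_CreatedDate
    DEFAULT (GETDATE()) FOR CreatedDate;

Scripting the original table's indexes and defaults (for example, with Management Studio's script generation) gives you the exact statements to paste into the task.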
I'm using SQL Server 2000. I have a table named Topic with a column named lineage. lineage has data like the following: //////546707//546707//546707/43213/ Now I want to get the records that have only one "/" in their corresponding lineage column. Can someone tell me how to do that in SQL Server 2000? Thanks, Khushbu
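A hedged sketch of one approach that works on SQL Server 2000: compare the length of lineage with and without the '/' characters to count them (the Topic and lineage names are taken from the post):

Code:
-- Rows whose lineage contains exactly one '/' character:
SELECT *
FROM Topic
WHERE LEN(lineage) - LEN(REPLACE(lineage, '/', '')) = 1;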
I have a table which contains a text column and which is accessed by each page of the site. It is hitting frequent deadlocks. Can anybody suggest a good solution, please?
Table: User

name    field1
Jack    1000|1001|1003
Berg    2000|1001|2004
Paul    1001|1000
Jane    1001
Now, I would like to replace all "1001" with nothing, and also remove the "|" separator behind 1001 if it exists (basically removing both "1001" and "1001|"), so the resulting table looks like this:
name    field1
Jack    1000|1003
Berg    2000|2004
Paul    1000
Jane
My tries have been plenty, but here are some:
Code:
UPDATE UserTable SET field1 = REPLACE(field1, '1001', '')
UPDATE UserTable SET field1 = REPLACE(field1, '1001|', '')
But the above queries replace field1 only if the whole field matches '1001' or '1001|'...
The above queries do work, just like I want them to. I just happened to write them in this order in this post; I did not copy the actual query.
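For anyone else landing here: order matters when stripping a token out of a pipe-delimited list. A hedged sketch against the UserTable above (it assumes no other token embeds '1001' as a substring):

Code:
-- Remove '1001|' first (covers leading and middle positions),
-- then '|1001' (covers the trailing position),
-- then a bare '1001' (the only value in the field):
UPDATE UserTable SET field1 = REPLACE(field1, '1001|', '');
UPDATE UserTable SET field1 = REPLACE(field1, '|1001', '');
UPDATE UserTable SET field1 = '' WHERE field1 = '1001';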
Hi, I am using a data flow task to load data from a text file source into a SQL Server table with an identity column. After mapping all the columns except the identity column, when I execute, the package fails, saying it cannot insert a NULL value into the identity column. I don't know how to get around it. Thanks, V
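One thing to check: the destination has to omit the identity column from its INSERT entirely rather than sending NULL for it (in the OLE DB Destination, leave the column unmapped; with fast load, the "Keep identity" option controls whether explicit values are allowed). The underlying T-SQL behavior, with hypothetical names:

Code:
-- Hypothetical table for illustration:
CREATE TABLE dbo.Target (
    id  INT IDENTITY(1,1) NOT NULL,
    val VARCHAR(50) NULL
);

-- Succeeds: the identity column is omitted, so SQL Server generates id:
INSERT INTO dbo.Target (val) VALUES ('row1');

-- Fails: explicit values for an identity column are rejected unless
-- IDENTITY_INSERT is ON, and NULL is never accepted:
-- INSERT INTO dbo.Target (id, val) VALUES (NULL, 'row2');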
Hi All, I am looking for a stored procedure, or any alternate method, which saves my HTML files' text (with or without tags) in my table column automatically when I upload an HTML file to my file system (local hard drive). Any help will be appreciated. Thanks in advance.
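For the load-a-file part (watching the folder for new uploads is a separate problem), a hedged sketch that works on SQL Server 2005 or later; the table, column, and path names are assumptions for illustration:

Code:
-- Read the entire HTML file as one text value and store it in a column:
INSERT INTO dbo.HtmlDocs (FileName, HtmlText)
SELECT 'page1.html', src.BulkColumn
FROM OPENROWSET(BULK 'C:\uploads\page1.html', SINGLE_CLOB) AS src;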
I need to update a large table, about 55 million rows, without filling the transaction log, in the shortest time possible. The goal is to alter the table and change the data type of the Text column from VARCHAR(7900) to NVARCHAR(MAX).
Since I cannot do it with an ALTER TABLE statement (it would fill up the transaction log) I'm thinking to:
- rename column Text to Text_OLD
- add a Text column of type NVARCHAR(MAX)
- copy values in batches from Text_OLD to Text
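For the first two steps, a minimal sketch against the DATATEXT table shown below:

Code:
-- Rename the existing column, then add the new NVARCHAR(MAX) column:
EXEC sp_rename 'dbo.DATATEXT.Text', 'Text_OLD', 'COLUMN';
ALTER TABLE dbo.DATATEXT ADD [Text] NVARCHAR(MAX) NULL;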
The table is defined like:
create table DATATEXT(
    rID INTEGER NOT NULL,
    sID INTEGER NOT NULL,
    pID INTEGER NOT NULL,
    cID INTEGER NOT NULL,
    err TINYINT NOT NULL,
[Code] ....
I've thought about a stored procedure doing this, but how do I copy values in batches from Text_OLD to Text?
The code I would start with (doing just this part) is the following, but maybe there are more efficient ways to do it, or at least a better way to select @startSeq in the WHILE loop (avoiding selecting a batch of 100,000 sequences and later selecting the max).
declare @startSeq timestamp
declare @lastSeq timestamp

select @lastSeq = MAX(sequence) from [DATATEXT] where [Text] is null
select @startSeq = MIN(Sequence) FROM [DATATEXT] where [Text] is null

BEGIN TRANSACTION T1
WHILE @startSeq < @lastSeq
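One way to sidestep tracking @startSeq entirely is to update in fixed-size batches keyed on [Text] IS NULL. A hedged sketch, assuming SQL Server 2005 or later and the columns above (the batch size is illustrative):

Code:
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    -- Copy one batch; copied rows no longer satisfy the WHERE clause:
    UPDATE TOP (100000) dbo.DATATEXT
    SET [Text] = Text_OLD
    WHERE [Text] IS NULL
      AND Text_OLD IS NOT NULL;

    SET @rows = @@ROWCOUNT;

    -- Under the SIMPLE recovery model, a checkpoint after each batch lets
    -- log space be reused so the transaction log does not keep growing:
    CHECKPOINT;
END

Each batch commits on its own, so no single giant transaction is held open against the log.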
Working on partitioning a few large tables. One of the tables included a text column and the "TEXTIMAGE_ON [PRIMARY]" clause, which would prevent the partitioning of this table. After some research we found that the data was legacy and no longer used, so we updated the column on the affected rows to NULLs and altered the column to a VARCHAR(20).

When I attempted to run the ALTER TABLE SWITCH, I encountered the error:

Msg 4947, Level 16, State 1, Line 1
ALTER TABLE SWITCH statement failed. There is no identical index in source table 'LocalDeltanet.dbo.testresultsjoe' for the index 'PKIDX_testSummary' in target table 'LocalDeltanet.dbo.testresults_part'.

After a lot of grief and testing, I determined that the message was bogus and the real issue is that sys.tables still has "lob_data_space_id" with a value of 1 for this table. I created one copy of the table with the text column altered to VARCHAR, and another with just the VARCHAR to begin with. After copying data from the original table, I tried to run the ALTER SWITCH. It failed once again for the table whose text column had been altered to VARCHAR, but it worked for the one that was VARCHAR from the start.
Since it appears that this value is causing my issues, is there any way to update the table in place? I know I can BCP the data out, but that would take too long and would defeat the advantage of using the ALTER SWITCH method.
BOL states:
The allow updates option is still present in the sp_configure stored procedure, although its functionality is unavailable in Microsoft SQL Server 2005 (the setting has no effect). In SQL Server 2005, direct updates to the system tables are not supported. This means we cannot update the table manually.
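For what it's worth, a hedged sketch of the pattern that did work in the testing above: build the staging table as VARCHAR from the start (so no LOB data space is ever allocated), load it, then switch. The column names are illustrative, and SWITCH still requires matching indexes, constraints, and filegroups:

Code:
-- Staging table created with VARCHAR(20) from the start, so no LOB data
-- space is ever allocated (columns are illustrative, not the real schema):
CREATE TABLE dbo.testresults_stage (
    id      INT         NOT NULL,
    summary VARCHAR(20) NULL,
    CONSTRAINT PK_testresults_stage PRIMARY KEY CLUSTERED (id)
) ON [PRIMARY];

-- Copy the rows across from the original table:
INSERT INTO dbo.testresults_stage (id, summary)
SELECT id, summary
FROM dbo.testresultsjoe;

-- Switch the loaded table into the target partition (this also requires a
-- check constraint on the partitioning column matching the partition range):
ALTER TABLE dbo.testresults_stage
    SWITCH TO dbo.testresults_part PARTITION 1;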
I've created a full-text catalog on one of my databases and added a full-text index on one of my text columns.
The first time I run a query against the data it takes roughly 40-45 seconds to return data. After that it blazes and runs in under a second. If I don't query it for 20-30 minutes, it will take 40-45 seconds again and then fly until the next break.
Is there a configuration setting somewhere that I'm missing? Currently the index is only about 5 MB, so it shouldn't take that long to read in when I'm querying it.
I don't think it has anything to do with size because the actual return can contain anywhere from 10-80K rows and the speed is about the same.
I'm using Standard edition on a 2003 Standard Server if that plays into the potential problem at all. We're currently downloading and installing the new Service Pack to see if that fixes it, but for some reason I'm not holding my breath. I'm assuming that there is something we need to change in the configuration.
Let me know if I'm missing any pertinent information. Thanks.
I fought for days with the following problem: on our development server, FTS was very fast. When I deployed the database by backup/restore to an identical production server, the first query on FTS became incredibly slow; it took about 1 minute for a simple query like Select TOP 1 'a' from ic_ItemContents where CONTAINS(s_Title,'fund'). Subsequent searches would be fast, at least for a while. After FTS had not been used for about half an hour, the next search submitted was very slow again.
This of course led to unacceptable response times for users.
I found out that our DB seems to have the problem described here, although the article does not mention that it could have an impact on performance: http://support.microsoft.com/kb/910067.
After detaching and reattaching the database as described there, FTS is now really fast even on the first attempt; the problem has gone away.
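For reference, the workaround itself is just a detach/reattach. A hedged sketch with an illustrative database name and file paths:

Code:
-- Detach and immediately reattach the database (KB 910067 workaround):
EXEC sp_detach_db @dbname = 'MyDB';
EXEC sp_attach_db @dbname = 'MyDB',
    @filename1 = 'C:\Data\MyDB.mdf',
    @filename2 = 'C:\Data\MyDB_log.ldf';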
System specs: SQL Server 2005 SP2, Windows 2003 SP2.
Perhaps this helps someone else with the same problem.
Hi All, I have multiple text files, let us say:

a1.txt
b1.txt
c1.txt

I have to port this text file data into SQL Server tables which have the same file structure, i.e.:

x1 (SQL Server table)
y2 (SQL Server table)
z3 (SQL Server table)

Now I have to transfer:

a1.txt file data ----to--- x1
b1.txt file data ----to--- y2
c1.txt file data ----to--- z3

using SSIS. Like that, I have to transfer more than 250 files at a time. Manually binding 250 files into the package is a very cumbersome and time-consuming process, so can anyone give your valuable suggestion to solve this issue?
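The SSIS-native route is a Foreach Loop with the Foreach File enumerator feeding a variable-driven connection, but that fits best when every file shares one structure and one destination. Since each file here targets a different table, one alternative worth considering (plain T-SQL rather than SSIS, assuming the files are reachable from the server and each matches its target table's structure) is to drive BULK INSERT from a mapping table. All names and paths below are illustrative:

Code:
-- Mapping table pairing each source file with its destination table:
CREATE TABLE dbo.FileTableMap (FilePath VARCHAR(260) NOT NULL, TableName SYSNAME NOT NULL);

INSERT INTO dbo.FileTableMap VALUES ('C:\data\a1.txt', 'x1');
INSERT INTO dbo.FileTableMap VALUES ('C:\data\b1.txt', 'y2');
INSERT INTO dbo.FileTableMap VALUES ('C:\data\c1.txt', 'z3');

DECLARE @file VARCHAR(260), @table SYSNAME, @sql NVARCHAR(1000);

DECLARE file_cursor CURSOR FOR
    SELECT FilePath, TableName FROM dbo.FileTableMap;

OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @file, @table;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build and run one BULK INSERT per file; adjust the terminators to
    -- match the actual file format:
    SET @sql = N'BULK INSERT dbo.' + QUOTENAME(@table)
             + N' FROM ''' + @file
             + N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM file_cursor INTO @file, @table;
END

CLOSE file_cursor;
DEALLOCATE file_cursor;

Adding the other 247 rows to the mapping table is then data entry rather than package plumbing.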
When I execute a query using the full-text service for the first time from Visual Studio, it shows me the error 'server not responding', but when I execute the query a second time, it works perfectly.
What is the best way to transfer data from the staging table into the main table.
Example:
Staging table name: TableA_stage (# of rows: millions)
Main table name: TableA_main (# of rows: billions)
Note: The staging table may have some data that is the same as in the main table.
Currently I am doing:
- Load data into the staging table (TableA_stage)
- Remove any duplicate rows from the staging table (TableA_stage)
- Disable all indexes on the main table (TableA_main)
- Insert into the main table (TableA_main) from the staging table (TableA_stage)
- Remove any duplicate rows from the main table using a CTE (TableA_main)
- Rebuild indexes on the main table (TableA_main)
The problem with the above method is that it takes a lot of time and the log file grows very big.
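One thing that may cut both the time and the log growth: instead of inserting everything and then de-duplicating the billion-row table with a CTE, insert only the rows that are not already present. A hedged sketch, assuming a key column (KeyCol, an assumption) identifies duplicates:

Code:
-- Insert only staging rows whose key is not already in the main table:
INSERT INTO dbo.TableA_main (KeyCol, Col1, Col2)
SELECT s.KeyCol, s.Col1, s.Col2
FROM dbo.TableA_stage AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.TableA_main AS m
    WHERE m.KeyCol = s.KeyCol
);

Running this in batches (for example, with a TOP clause in a loop) keeps individual transactions, and therefore the log, small.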
I'm having problems designing a package that attempts a fast-load data transfer but falls back to regular speed with error redirection in the event of an error.
The way I designed this was to add one data flow task to my package called "DFT FASTLOAD". The data flow copies a table SRC to another table DEST in the same SQL Server database. In the error handler for the data flow task, I copied the original data flow task and changed the name to "DFT REGULARLOAD with Error redirection". In this data flow task I did not use fast load and additionally redirected errors to a text file.
In the data flow task "DFT FASTLOAD" I am copying from a varchar source field (with non-date strings) to a datetime destination field to force errors. However, the data flow task "DFT REGULARLOAD with Error redirection" never seems to start transferring data from source to destination. It turns yellow (after the error occurs in "DFT FASTLOAD"), but no data is transferred. It seems to hang.
Do I need to increase the MaximumErrorCount or something? The data flow task "DFT FASTLOAD" does not turn red when the error occurs; it just remains yellow, so I assume I'm on the right track, since it seems the error is caught.
I have added screenshots ... hopefully these screenshots will clarify my problem.
Hi everyone, I encountered the error "Need to run the object to perform this operation. Code execution exception: EXCEPTION_ACCESS_VIOLATION" when I tried to import data from Oracle to MS SQL Server with Enterprise Manager (version 8.0) using the DTS Import/Export Wizard. There are 508 rows in the Oracle table, and I did get the first 42 rows imported to SQL Server. Does anyone know what the above error message means and what causes the rest of the rows to fail importing? Thanks very much in advance! Rene Z.
I need to transfer data from one table to another. I will be using the SQL Query Analyzer to do this. This is not a simple transfer of data between identically structured tables; these tables are completely different, for the most part. For instance, I will be selecting certain fields of one table:

SELECT fldOne, fldTwo FROM someTable

I need to take this information, one row at a time, and input it into a different type of table. So, something like this to insert into the other table:

INSERT INTO otherTable (fld1, fld2) VALUES (value1, value2)

I've looked around for a sample to achieve this but may have overlooked it. Anyone have a link that shows this, or a quick sample? Thanks all, Zath
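Using the poster's own names: there is no need to go row by row, since INSERT ... SELECT moves the whole set in one statement:

Code:
-- Copy fldOne and fldTwo from someTable into fld1 and fld2 of otherTable:
INSERT INTO otherTable (fld1, fld2)
SELECT fldOne, fldTwo
FROM someTable;

Any expressions or literals allowed in a SELECT list (CASE, string functions, constants) can be used to reshape the data on the way across.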
I created a schema, Admin. I have to transfer a table from the dbo schema to the admin schema. I keep getting an error that I do not have permission or the table does not exist.
Simply looking for confirmation here - is my syntax correct?
ALTER SCHEMA Admin TRANSFER MyShop.Addresses; (MyShop is the Database, Addresses is the table)
NOTE: When I created the schema, I did not create an inner table. The syntax for that was simply CREATE SCHEMA Admin;
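One note on the syntax question: ALTER SCHEMA ... TRANSFER takes a schema-qualified object name (schema.object), not database.table. So inside the MyShop database, moving the Addresses table from dbo to Admin would look like:

Code:
USE MyShop;
GO
-- TRANSFER expects schema.object; the table moves from dbo into Admin:
ALTER SCHEMA Admin TRANSFER dbo.Addresses;

If the table currently lives in a schema other than dbo, substitute that schema name; the permission error is what you see when the qualifier does not resolve to an existing object.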
My problem is very simple: I'm trying to copy some tables between databases, but these tables are in different schemas. Let's say I have
dbo.tableA
sch1.tableA
sch2.tableA
sch3.tableA

And I just want to copy, let's say, sch1.tableA to a different DB. If I use the Transfer SQL Server Objects task, select the table, save the package, and try to open the task again, all the tables named tableA will be selected! It seems that although it does show the schema (when I am selecting the table manually), it does not store the schema detail in the table list collection property of the task. Could you please recommend any other way to achieve this? Many thanks in advance.
I've got a DB2 database that contains a lot of tables. I need to extract the data from some of them and put them in a SQL Server database. The number of tables needed may change, so I need an easy way of controlling this.
So, I created a lookup table in the target SQL Server DB that lists the tables I want, with any selection criteria needed. I have a For Each loop that goes through every row in the table. Using the current row, it deletes the contents of the destination table (done and working) and then tries to load the source into it. The destination OLE DB uses a variable for the table name to insert into.
The problem I've got is that whilst I can set up the OLE DB Source to use a SQL command from a variable, I have to do it in the advanced editor or in the properties, as the normal editor gives an OLE DB error. This is probably due to the fact that the variable has nothing in it at this point. I've turned ValidateExternalMetadata off to avoid any design-time errors, but when attempting to run I still get:
Code Snippet [DTS.Pipeline] Error: "output "OLE DB Source Output" (11)" contains no output columns. An asynchronous output must contain output columns.
Which I understand, as there are none until runtime. How do I get around this problem of not having any columns to map until I know which table I am processing?
I am trying to transfer a database from one server to another using the Import/Export Wizard in SSIS, and I am consistently getting this error on two different tables so far.
- Execute the transfer with the TransferProvider. (Error)
Messages
* ERROR : errorCode=-1073451000
  description=The package contains two objects with the duplicate name of "output column "ErrorCode" (79)" and "output column "ErrorCode" (14)".
  helpFile=dtsmsg.rll
  helpContext=0
  idofInterfaceWithError={8BDFE893-E9D8-4D23-9739-DA807BCDC2AC}
  (Microsoft.SqlServer.DtsTransferProvider)
This error message is beyond cryptic, and when I click on the link it sends me to a web page that just tells me there is no information available for my issue. I am transferring the tables to an empty database, so I do not understand why I am receiving this error. I have to say that I am not impressed with SSIS at all. I know a lot of developers think it's the best thing since sliced bread; however, either I am doing something wrong or Microsoft needs to come out with a service pack that fixes these bugs...
I created the OLE DB destination on a table that contains:

1. Name varchar(20)
2. Address varchar(50)
3. Age bigint NULL

but I got the following errors:
Error 1: Validation error. Data Flow Task: OLE DB Destination [550]: Column "Name" cannot convert between unicode and non-unicode string data types. Package.dtsx 0 0
Error 2: Validation error. Data Flow Task: OLE DB Destination [550]: Column "Address" cannot convert between unicode and non-unicode string data types. Package.dtsx 0 0
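In SSIS terms, the source is producing Unicode strings (DT_WSTR, the SSIS counterpart of NVARCHAR) while Name and Address are VARCHAR (DT_STR). One fix is a Data Conversion transformation that casts each column to DT_STR before the destination. The other, sketched below on the assumption that the destination table can change, is to widen the columns to NVARCHAR so no conversion is needed:

Code:
-- Make the destination columns Unicode (the table name is illustrative):
ALTER TABLE dbo.MyDestTable ALTER COLUMN Name NVARCHAR(20);
ALTER TABLE dbo.MyDestTable ALTER COLUMN Address NVARCHAR(50);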
On my online store, I need to figure out with SQL how I can copy data from several different tables into another table. There is a table that contains all of the customers' billing info, and another table has the customers' shipping info. The target table is for tracking and processing orders. I will need to populate different fields from different tables, as well as insert values into fields from my code-behind (i.e., customers can choose different types of shipping and payment options). I googled this and keep running into the JOIN statement (which I use all the time). I don't need to join anything for displaying; I literally need to insert data into columns in my orders table, and I'm just not sure what the SQL syntax is for that.
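A hedged sketch of the syntax, with illustrative table and column names: the billing and shipping tables are joined on the customer key so their columns can feed one INSERT, and the values chosen at runtime appear here as literals (in practice, parameters from the code-behind):

Code:
INSERT INTO Orders (CustomerID, BillingAddress, ShippingAddress, ShipMethod, PaymentType)
SELECT b.CustomerID,
       b.Address,      -- from the billing-info table
       s.Address,      -- from the shipping-info table
       'UPS Ground',   -- shipping choice supplied by the application
       'CreditCard'    -- payment choice supplied by the application
FROM BillingInfo AS b
JOIN ShippingInfo AS s
  ON s.CustomerID = b.CustomerID
WHERE b.CustomerID = 42;  -- the customer placing the order

So the JOIN still appears, just inside the INSERT ... SELECT rather than in a display query.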
For example, I have a table "authors" with a column "author_name", and it has three values: "Anne Ringer", "Ann Dull", and "Johnson White". I want to create a new table using a SELECT statement whose columns come from the values of the column "author_name".
Can you tell me how I can accomplish this with SQL?
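If I'm reading the goal correctly, a hedged sketch using SQL Server 2000-compatible syntax: a CASE expression per known value pivots the names into columns, and SELECT ... INTO creates the new table (the new table's name is illustrative):

Code:
-- Each MAX(CASE ...) picks out one author_name value; SELECT ... INTO
-- materializes a new table whose columns are named after those values:
SELECT
    MAX(CASE WHEN author_name = 'Anne Ringer'   THEN author_name END) AS [Anne Ringer],
    MAX(CASE WHEN author_name = 'Ann Dull'      THEN author_name END) AS [Ann Dull],
    MAX(CASE WHEN author_name = 'Johnson White' THEN author_name END) AS [Johnson White]
INTO dbo.AuthorsPivoted
FROM authors;

If the set of names is not known in advance, the column list would have to be built with dynamic SQL instead.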