I'm in the process of converting a rather huge VSAM database into a set of SQL tables.
I am using the same data names from the mainframe (like XDB-NAME to RDB-NAME).
I load the files using the Import and Export Data wizard, and it creates the tables with column names like col001, col002, col003, etc., and always sets the data types to varchar(255).
And I have to cut and paste the data names from the mainframe side to the server side (and the data types too).
So, is there an easier way to do this? Or am I doomed to cut-n-paste my days away...
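One way around this (a sketch only, with a made-up record layout) is to pre-create each destination table with the mainframe names and types, and then point the Import and Export Data wizard at the existing table instead of letting it generate one:

-- Sketch: hypothetical layout; the real columns and types come from the VSAM copybook
CREATE TABLE dbo.RDB_CUSTOMER
(
    RDB_NAME        varchar(30)   NOT NULL,  -- was XDB-NAME on the mainframe
    RDB_ACCOUNT_NO  char(10)      NOT NULL,
    RDB_BALANCE     decimal(11,2) NULL
);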
I'm reading an MS Excel file, then I use a Lookup transformation to get a field from a SQL Server 2005 table. The Lookup transformation editor, after selecting the table, shows a warning that says:
at least one mapping between a column from available input columns and a column from available lookup columns must be defined on the columns page.
So I try to make a relationship in the Lookup transformation editor's column tab where I find the Available input columns and the available lookup columns but I get the following error:
The following columns cannot be mapped: [Department, DEP_CLEGALCODE] One or more columns do not have supported data types, or their data types do not match.
The field in SQL Server is varchar(10) and the input field is a derived column transformation; I have tried different data types but I always get the same error.
The DataFlow is: ExcelSource --> Derived Column --> Lookup --> Flat file destination
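One thing that sometimes gets around the Unicode/non-Unicode mismatch is to use a SQL query (rather than the table directly) as the lookup source and cast the varchar key to nvarchar so it matches the DT_WSTR column coming out of Excel. A sketch, assuming a hypothetical dbo.Department lookup table:

-- Cast the varchar(10) lookup key to nvarchar so it can map to the Unicode input column
SELECT CAST(DEP_CLEGALCODE AS nvarchar(10)) AS DEP_CLEGALCODE,
       DEP_NAME   -- hypothetical column to return from the lookup
FROM   dbo.Department;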
OK, I am finally giving in on this one and asking for some help! I am trying to set up a T-SQL statement that will extract data from all the tables in the current database to a csv file, including column names!
I know that bcp cannot handle the column names, so I tried to get around this by appending the column names from a select, but unfortunately the select gives me the names in alphabetical order and not the order of the fields.
I have tried putting an ORDER BY on the select, but this does not seem to have any effect. I have included the snippet of my script that is causing the problem here:
-- set up the echo command
select @colcommand = 'exec master..xp_cmdshell ''echo ' + @names + ' >> c:\cp\'
     + TABLE_CATALOG + '.' + TABLE_SCHEMA + '.' + TABLE_NAME + '.txt'''
from INFORMATION_SCHEMA.TABLES
where TABLE_NAME = @TABLE
And just in case you are interested in the rest of the script, the full monster is included at the bottom of the post. Also, if you can see any more efficient ways of doing what I am trying to do, please let me know!
-- Script to create a csv file of data from all tables inside the current database

-- declare all variables
declare @command      varchar(2000)  -- command used for bcp
declare @fetch_status int            -- variable for fetch status in cursor
declare @TABLE        varchar(200)   -- variable to hold the table name
declare @colcommand   varchar(2000)  -- variable to hold the column-header command
declare @count        int            -- variable used to detect the first iteration of the column loop
declare @names        varchar(8000)  -- variable used for the column names
declare @delimiter    varchar(10)    -- variable used as the delimiter between column names

SET @delimiter = ','   -- set the delimiter to a comma
select @count = 0      -- initialise the count variable

-- set up a cursor to build the bcp commands that export the data files to csv format
declare bcpcommand cursor READ_ONLY FOR
select 'exec master..xp_cmdshell ''bcp ' + TABLE_CATALOG + '.' + TABLE_SCHEMA + '.' + TABLE_NAME
     + ' out c:\cp\' + TABLE_CATALOG + '.' + TABLE_SCHEMA + '.' + TABLE_NAME + '.txt'
     + ' -c -t, -T -S' + @@servername + ''''
from INFORMATION_SCHEMA.TABLES
where TABLE_TYPE = 'BASE TABLE'

-- set up a cursor to pick up all the tables in the current database (used for the column-names section)
declare dbtables cursor READ_ONLY FOR
select TABLE_NAME
from INFORMATION_SCHEMA.TABLES
where TABLE_TYPE = 'BASE TABLE'

open bcpcommand
select @fetch_status = 0

while @fetch_status = 0
begin
    fetch next from bcpcommand into @command
    select @fetch_status = @@fetch_status
    if @fetch_status <> 0
    begin
        continue
    end

    -- print 'Command to be run : ' + @command
    exec (@command)
end

-- close and tidy up
close bcpcommand
deallocate bcpcommand

-- now create the field-name files and then echo the two files together!
open dbtables
select @fetch_status = 0

while @fetch_status = 0
begin
    fetch next from dbtables into @TABLE
    select @fetch_status = @@fetch_status
    if @fetch_status <> 0
    begin
        continue
    end

    SELECT @names = COALESCE(@names + @delimiter, '') + name
    FROM syscolumns
    where id = (select id from sysobjects where name = @TABLE)

    -- due to the concatenation used, the second iteration onwards has a comma attached to
    -- the front of the line; this section removes the first character
    if @count <> 0
    begin
        select @names = SUBSTRING(@names, 2, LEN(@names) - 1)
    end

    -- set up the echo command
    select @colcommand = 'exec master..xp_cmdshell ''echo ' + @names + ' >> c:\cp\'
         + TABLE_CATALOG + '.' + TABLE_SCHEMA + '.' + TABLE_NAME + '.txt'''
    from INFORMATION_SCHEMA.TABLES
    where TABLE_NAME = @TABLE

    -- print 'COMMAND : ' + @colcommand
    exec (@colcommand)

    -- reset the @names variable for the next iteration, and set @count to 1 to trigger the IF above
    select @names = ''
    select @count = 1
end

-- close and tidy up
close dbtables
deallocate dbtables
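One tweak that may address the alphabetical ordering is to sort the column-name concatenation by ordinal position. This is a sketch only; concatenating into a variable with ORDER BY is not formally guaranteed to honor the order, although it commonly does against these SQL 2000-era system tables:

    -- build the header in column order (colorder), matching the field order bcp writes
    SELECT @names = COALESCE(@names + @delimiter, '') + c.name
    FROM syscolumns c
    JOIN sysobjects o ON o.id = c.id
    WHERE o.name = @TABLE
    ORDER BY c.colorder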
I'm working on my first data warehouse and I'm not sure how I should name the columns in the database.
The first phase of the data warehouse is to store a bunch of data from one third-party source. The source contains over 100 pieces of data, and the business user doesn't even know what some of the fields are, but he wants to store everything. The third party refers to each field with a somewhat cryptic short name and a longer description. The short name isn't always cryptic.
My question is: am I better off naming my columns the same as the source system's short names so that I can easily debug problems later? Should I instead try to shorten their definitions into something meaningful? On a side note, I'm 100% positive that we'll never populate the tables in question with data from an additional source.
I have two databases that exist on two separate servers. The two databases contain the same schema, with tables and columns of the same names. Some tables have slight differences in terms of data types or data type length.
For example, if a table on ServerA has a column named CustomerSale defined as varchar(100) NULL and a table on ServerB has a column named CustomerSale defined as varchar(60) NULL, how can I find whether other columns have similar differences across all tables with the same names and columns on the two servers?
I am using SQL Server 2005, and the two servers are linked servers.
What script can I use to accomplish this task? Thanks.
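A rough sketch of one way to compare them, assuming linked servers named [ServerA] and [ServerB] and a database called MyDB on both (adjust the four-part names to the real setup):

-- Compare type and length for columns that exist on both servers
SELECT a.TABLE_NAME,
       a.COLUMN_NAME,
       a.DATA_TYPE                AS ServerA_Type,
       a.CHARACTER_MAXIMUM_LENGTH AS ServerA_Length,
       b.DATA_TYPE                AS ServerB_Type,
       b.CHARACTER_MAXIMUM_LENGTH AS ServerB_Length
FROM   [ServerA].MyDB.INFORMATION_SCHEMA.COLUMNS a
JOIN   [ServerB].MyDB.INFORMATION_SCHEMA.COLUMNS b
       ON  a.TABLE_SCHEMA = b.TABLE_SCHEMA
       AND a.TABLE_NAME   = b.TABLE_NAME
       AND a.COLUMN_NAME  = b.COLUMN_NAME
WHERE  a.DATA_TYPE <> b.DATA_TYPE
   OR  ISNULL(a.CHARACTER_MAXIMUM_LENGTH, -1) <> ISNULL(b.CHARACTER_MAXIMUM_LENGTH, -1);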
I have an indexing problem. I have a sequence that needs an index number. I want to use a table data type and have a working sample, BUT I cannot reseed the table when needed. How do I do this?
This works only for the first ExitCoilID; then I need to RESEED.
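Since the original sample isn't shown, this is just a sketch of one workaround: DBCC CHECKIDENT cannot reseed a table variable, but it does work on a temporary table, so the sequence table could be switched to a #temp table (the names below are made up apart from ExitCoilID):

-- Hypothetical sequence table as a temp table instead of a table variable
CREATE TABLE #CoilSequence
(
    SeqID      int IDENTITY(1,1) PRIMARY KEY,
    ExitCoilID int NOT NULL
);

INSERT INTO #CoilSequence (ExitCoilID) VALUES (1001);

-- When the next ExitCoilID starts, clear the rows and reseed so the
-- next insert gets SeqID = 1 again
DELETE FROM #CoilSequence;
DBCC CHECKIDENT ('#CoilSequence', RESEED, 0);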
REGEDIT: HKEY_LOCAL_MACHINE\Software\Microsoft\Office\14.0\Access Connectivity Engine\Engines\Excel\TypeGuessRows. Set the TypeGuessRows value to zero (0), and use IMEX=1:
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=D:\destination.xlsx;Extended Properties="Excel 12.0 XML;HDR=YES;IMEX=1";
But in the SQL table, the last 39 records are dumped as NULL wherever the value is alphanumeric. Why? How can I import this dynamically, without doing Text to Columns in Excel on that column?
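For what it's worth, a sketch of a T-SQL import using that same connection information; this assumes 'Ad Hoc Distributed Queries' is enabled and the ACE 12.0 provider is installed, and the destination table and sheet name are placeholders:

-- IMEX=1 plus TypeGuessRows = 0 makes the driver treat mixed-type columns as text
INSERT INTO dbo.ImportTarget   -- hypothetical destination table
SELECT *
FROM OPENROWSET(
        'Microsoft.ACE.OLEDB.12.0',
        'Excel 12.0 Xml;HDR=YES;IMEX=1;Database=D:\destination.xlsx',
        'SELECT * FROM [Sheet1$]');   -- hypothetical sheet name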
I have a SQL text column from SP_who2 in table #SqlStatement:
like the one row shown below:
 "update Panel  set PanelValue=7286 where PanelFirmwareID=4 and PanelSettingID=9004000"
I want to find which table and column names appear in the text.
I tried something like the below:
Select B.Statement
from #sp_who2 A
LEFT JOIN #SqlStatement B ON A.spid = B.spid
where B.Statement IN
(
    SELECT T.name, C.name
    FROM sys.tables T
    JOIN sys.columns C ON T.object_id = C.object_id
    WHERE T.type = 'U'
)
Something like this: find the column names and table names.
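A rough sketch of another way to get at it: IN can't compare a statement against a two-column subquery, but the statement text can be joined to sys.tables / sys.columns with LIKE. This is a crude substring match (prone to false positives on short or common names), and it assumes the #SqlStatement table has spid and Statement columns as above:

SELECT s.spid,
       t.name AS TableName,
       c.name AS ColumnName
FROM   #SqlStatement s
JOIN   sys.tables  t ON s.Statement LIKE '%' + t.name + '%'
JOIN   sys.columns c ON c.object_id = t.object_id
                    AND s.Statement LIKE '%' + c.name + '%';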
I created an SSIS package which exports data from an OLE DB source to a flat file (csv format). For this I have an OLE DB source and a flat file as the destination. I generate the file and file name dynamically, with the column names in the first row. If the dynamically generated file name already exists, I want to append the data to that existing file, but I don't want to append the column names again; I just want to append the rows to the existing rows.
So let's say the first time it generates a file called File1_3132008.csv:
Col1,Col2
1,2
3,4
After some days, if my SSIS package generates the same file name, i.e. File1_3132008.csv, this time I just want to append the rows to the existing file. So the file should look like this:
Col1,Col2
1,2
3,4
5,6
7,8
But instead my file looks like this if I set the Overwrite property to false:
Col1,Col2
1,2
3,4
Col1,Col2
5,6
7,8
Can anyone help me get the file as shown above?
My data flow is: OLE DB source (which calls a stored proc that returns a result set) --> data conversion --> Excel destination.
I am in design mode in Business Intelligence studio. My Excel destination (with an Excel connection) shows no sheet name, even though I have an Execute SQL task before the data flow to create the Excel table called SHEET1. Needless to say, there are no output columns visible to do any mappings. I did go to the Excel connection to set the OpenRowset property to SHEET1, but it seems to have no effect.
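For reference, the kind of DDL an Execute SQL task would run against the Excel connection to create the sheet looks something like the below; the column names and types here are placeholders and would have to match the data flow's output columns before SHEET1 exposes anything to map:

CREATE TABLE `SHEET1`
(
    AccountName NVARCHAR(255),
    TranDate    DATETIME,
    Amount      DOUBLE PRECISION
)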
I can do the export in SQL Server Management Studio and that works fine, but it is basic and does not meet my requirements. I have to customize the package to allow dynamic Excel file names based on account names, and I have to split my result set into multiple Excel sheets because Excel 2003 has a max of 65,536 rows per sheet. Also, when I use the export wizard the source is a table, but eventually the source has to be a stored proc with input parameters.
What am I missing or doing wrong? Thanks in advance
I am trying to find a reference for a client that lists the fields available to be substituted into a data-driven subscription from the query, along with the expected data types. For example, the field for whether or not to include a link to the report seems to expect a bit data type. I have searched and can't seem to find anything. I guess I could walk through the interface and try different data types, but if a list exists, that would be better.
At run time, suppose these values are @SSN = '999-000-000' and @State = 'ABC'.
Now the result is displayed with the state data as 'AB' only.
Output: 1 999-000-000 AB
Instead, it should give a system-generated error.
Here I have 2 questions: 1. Why is it taking only the first 2 characters? 2. Why is there no system-generated error for the length?
I can do validation with the LEN function for these 2 variables; however, if I have 100 variables then that is not a feasible approach. So, what is the reason behind this?
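A small repro, assuming @State was declared as varchar(2): assigning a longer string to a variable truncates silently, whereas inserting the same value into a varchar(2) column raises the truncation error (with the default ANSI_WARNINGS ON):

DECLARE @State varchar(2);
SET @State = 'ABC';
SELECT @State;                          -- returns 'AB', no error raised

CREATE TABLE #t (State varchar(2));
INSERT INTO #t (State) VALUES ('ABC');  -- fails: string or binary data would be truncated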
I am following the SSIS overview video (URL...). I have a flat file whose contents I want to import into a SQL database. I created a data flow task with a flat file source and an OLE DB destination. I am getting the following error: "column "A" cannot convert between unicode and non-unicode string data types". In the source file the data type comes through as string [DT_STR], and in the destination object it is "Unicode string [DT_WSTR]". I used a data conversion object in between, but it doesn't work very well.
I am trying to modify data in a table using the stored procedure below. It doesn't work properly; I seem to only be getting one character. I think it has something to do with my use of "nchar" for the variables. The value I am changing is actually a string.
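A sketch of a likely cause (the names below are made up): declaring a variable or parameter as nchar or nvarchar with no length makes it default to a single character, so only the first character of the string survives:

DECLARE @BadValue  nchar;           -- nchar with no length means nchar(1)
DECLARE @GoodValue nvarchar(100);

SET @BadValue  = 'Replacement text';
SET @GoodValue = 'Replacement text';

SELECT @BadValue  AS Truncated,     -- 'R'
       @GoodValue AS FullString;    -- 'Replacement text'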
Is it possible to easily copy data from one table to another if the data types don't match? I know you can do an INSERT INTO table1 (col1, col2) SELECT col2, col7 FROM table2 if the data types match, but is there a way to do this if they don't? I'm not trying to copy datetimes into bit fields or anything. I just have an old table that I built when I really didn't know what I was doing; now I at least think I have a better understanding of what data types to use, so I want to move the data from the original table to my new one. Most of the fields in the old table are text data types and the new table uses nvarchar(50) data types. Thanks for any suggestions.
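One approach, sketched with hypothetical table and column names: CAST (or CONVERT) in the SELECT lets the INSERT work across mismatched types, for example old text columns into new nvarchar(50) columns. Note that values longer than 50 characters are silently truncated by the explicit CAST rather than raising an error:

INSERT INTO dbo.NewTable (Col1, Col2)
SELECT CAST(OldCol2 AS nvarchar(50)),
       CAST(OldCol7 AS nvarchar(50))
FROM   dbo.OldTable;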
I am new to SQL Server 2000 and need a little help.
I have a table called CMRC_Products with various columns. There is one column called ProductImage that holds the name of every image in my catalog (over 4,000 of them). I want to add .jpg to each row in that particular column without losing what is already there.
I have tried:
UPDATE CMRC_Products SET ProductImage = ProductImage&' .jpg'
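A sketch of the same update using T-SQL's + operator (the & operator is Access/VBA string concatenation, which is why the statement above doesn't work). The WHERE clause is just a guard so rows aren't touched twice if it is rerun, and this assumes ProductImage is a varchar/nvarchar column (a text column would need a CAST first):

UPDATE CMRC_Products
SET    ProductImage = ProductImage + '.jpg'
WHERE  ProductImage NOT LIKE '%.jpg';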
I have a project where I need the ability to update data in a SQL table (SQL 2008) from a tool like Access or Excel. My SQL table has 3 fields: employee number, employee name, and a yes/no value (1 or 0). I want to be able to display the table data (in Access or Excel) and let the user modify the yes/no value, but not the employee number or employee name. How should I handle this in SQL? Should I connect Access directly to the SQL table?
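One option, sketched with hypothetical object names: link Access/Excel to the table (or to a view over it) and use column-level permissions so the user can read everything but only update the flag column:

GRANT SELECT ON dbo.EmployeeFlags TO DataEditors;               -- hypothetical table and role
GRANT UPDATE (IsApproved) ON dbo.EmployeeFlags TO DataEditors;  -- only the yes/no column is updatable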
We need to insert data/rows from a SQL Server 2014 database into MS Access database. The problem is, there are so many columns (100+) in the table and there are so many insert transactions of this kind (from different tables) that it is not very easy to write the code in VB.NET that lists all column names.
Both the Access and SQL Server tables have the same number of columns and equivalent data types, so inserting is not really the problem. It's just: is there a way to write an insert statement in T-SQL that does not name all the columns?
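One possibility, sketched with made-up names and assuming a linked server (here called ACCESS_DB) pointing at the Access database: when the column count, order, and types line up, an INSERT without a column list works positionally:

-- No column list on either side; columns are matched by position
INSERT INTO ACCESS_DB...TargetTable
SELECT *
FROM   dbo.SourceTable;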
I have an Excel sheet with data that needs to be upserted into 2 different tables. In addition, I need to use a value in the spreadsheet to determine the PK from a reference data table for one of the upsert operations.
That is all working now.
The thing I'm struggling with is something I am sure is quite simple - but I'm not seeing a solution from attempts, googling or BOL.
2 of the columns I receive have either nothing, or X in them. The columns they go into are defined as BIT, NOT NULL.
So, in SQL it would be something relatively simple like:
CASE
When IsAvailable = 'X' then 1
When IsAvailable is null then 1
ELSE 0
end
But I don't know how to do this to data that was in a spreadsheet and is now a result set being handed from one task to another.
To outline my current solution:
---- table 1 = this all works -------------
Excel Source --> MultiCast (For Table 1) --> Data conversion for table1 --> Sort for Table1 --> Merge Join for table 1 (left outer join) as 'left' leg
Table1 Source --> Sort Table1 --> Merge Join for table 1 (left Outer join) as 'right' leg
Merge Join for table 1 --> Conditional Split for table1
Conditional Split for table1 (table1 source PK is null) -->Insert Into Table1 Destination
Conditional Split for table1 (table1 source PK is not null) -->Update Table1 OLE DB Command
---- table 2 = this needs to be able to convert X/NULL to BIT -------------
MultiCast (For Table 2)-->Copy Column for Table2 -->Data Conversion for Table 2-->table3 lookup to get FK-->Sort for Table2 merge-->Merge Join for table 2 (left outer join) as 'left' leg
Table2 Source --> Sort Table2 --> Merge Join for table 2 (left outer join) as 'right' leg
Merge Join for table 2 --> Conditional split for table 2
Conditional split for table 2 (table2 source PK is null) --> insert into table 2
Conditional split for table 2 (table2 source PK is not null) --> update table 2 OLE DB command
-----------------------------------------
Now, if I correct the spreadsheet to have 0's and 1's in the two columns, then the process above works. But I cannot (yet) force the business to do that.
I tried to use a SQL command for the Excel source, but there is limited functionality in that command; I cannot use COALESCE, ISNULL or CASE statements, which would allow me to resolve the data at the source.
I've tried to use derived columns to alter the columns. I think that REPLACE(IsAvailable, VariableContainingX, VariableContaining1) might work to change X's to 1, but that doesn't resolve the NULL issue.
I've tried to use a script component to handle the conversion, which REALLY feels like a bad way to do this. The .NET script I wrote is:
-------------.net script code-------------
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper
Imports Microsoft.SqlServer.Dts
Public Class ScriptMain
    Inherits UserComponent

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        If Not (Row.EndOfRowset) Then
            ' Treat a NULL or an "X" as true ("1"); OrElse short-circuits so the
            ' column value is not read when it is NULL
            If (Row.IsDotComVanEnabled_IsNull) OrElse (Row.IsDotComVanEnabled.Equals("X")) Then
                Row.IsDotComVanEnabled = "1"
            End If
            If (Row.IsStoreCollectionEnabled_IsNull) OrElse (Row.IsStoreCollectionEnabled.Contains("X")) Then
                Row.IsStoreCollectionEnabled = "1"
            End If
        End If
    End Sub
End Class
It seems that a simple script:

Update sysxlogins
set name = 'AA001' + substring(name, 9, LEN(name) - 8)
where name like 'ILLINOIS%'

will replace the SQL 2000 domain name correctly in sysxlogins: ILLINOIS\JonesP becomes AA001\JonesP. But for some strange reason ILLINOIS\JonesP can still log on via Query Analyzer, although he is no longer in the sysxlogins table anymore? SQL has been stopped/started, and the server even rebooted, yet BOTH the new and old logins seem to allow QA login. Any thoughts on how the old one is getting through SQL security? Thanks in advance for any help...
We have enabled Change Data Capture for auditing our table changes in SQL Server 2008. There is a request to NULL out a few columns (for all rows) in a couple CDC tables, due to compliance with a certification. Is there a compelling reason not to modify these tables and to leave the audit trail as-is?
In my SSIS package, I have a data flow task that loads a CSV file into a SQL table (OLE DB destination).
I have a couple of CSV files to be loaded. Instead of creating a separate task for each file, can I combine them into a single task?
I was thinking about using a ForEach container.
This approach works if the number of columns in all the CSV files is the same, but in my case it is not.
So what I want is a script task that dynamically modifies the mappings.
Can I do this?
I was browsing the net and found some code which uses IDTSExternalMetadataColumn90, MapOutputColumn, etc., but that code was creating a new package for each mapping.
I couldn't understand the code.
So can you please help me with this?
My script task should modify the mappings in my data flow task. For example, if I have 3 columns in my CSV and 3 columns in the DB, they should be mapped in the same order.
Please help me figure out what is wrong with my code. The script is supposed to load a package (from file). The loaded package already has everything set up to run a query against a local server and output the results to an Excel file. The reason for the outer script is that I need to change the query based on a global variable. When the query changes, though, I think the existing data flow path is no longer valid, so I should remove it and re-create another one with the new input mappings. Here is my code, which runs and throws an exception at the AcquireConnections call.
The error is
Error: 0x2 at Script Task: The script threw an exception: Exception from HRESULT: 0xC020801B
I pieced together this code from the examples in the online books, but I am not sure what to do.
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Imports Microsoft.SqlServer.Dts.Pipeline
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Public Class ScriptMain
Public Sub Main()
'
Dim app As Microsoft.SqlServer.Dts.Runtime.Application = New Application()
Dim package As Microsoft.SqlServer.Dts.Runtime.Package = _
Hi all, I have several reports using a single shared data source. I want to change, at runtime, the database that is used by that data source. Can this be achieved? If not, what are the other solutions? I guess that using a non-shared data source for each report may be a solution (is it?), but it is not the best solution for me. My goal is to allow users to run the same set of reports, viewed in a ReportViewer control, but using different databases (connection-string dependent).