Inconsistency Between Data Source Designer And Underlying Code
Mar 28, 2007
I have 2 data sources that were recently upgraded from SQL Server 2005 Express to the full version of SQL Server 2005. The connection strings have been changed, and the changes appear in the code, but the data source designer still shows the SQLExpress portion of the connection string. This seems to be fouling up SSIS packages that use these data sources. Has anyone else encountered this? If so, what can I do to fix it?
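For comparison, the stale designer value versus what the code now holds would look roughly like this (the server and database names here are illustrative, not from the original post):

    Stale (designer):  Data Source=MYSERVER\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True
    Updated (code):    Data Source=MYSERVER;Initial Catalog=MyDb;Integrated Security=True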
Hi. In this code, how can I create a new data source, a new data source view, a model, and a structure so that it runs dynamically? I am getting a lot of errors about the server and database not existing in the current code. Should I define the server first?
How can I create a data source, data source view, model, and structure? Please post the code and guide me: databasename and srv are unknown in my code. Do I need to add another reference for Analysis Services? Please explain this code:

1) RelationalDataSource dsNew = new RelationalDataSource(
       datasourceName,
       Utils.GetSyntacticallyValidID(datasourceName, typeof(RelationalDataSource)));
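For reference, a minimal AMO sketch along these lines (assuming a project reference to Microsoft.AnalysisServices.dll; the server address, database name, and connection string are placeholders, not values from the post):

    using Microsoft.AnalysisServices;

    Server srv = new Server();
    srv.Connect("Data Source=localhost");  // srv must be connected before anything else

    // Find the target Analysis Services database, or create it if missing
    Database db = srv.Databases.FindByName("MyOlapDb") ?? srv.Databases.Add("MyOlapDb");

    // Create the relational data source, as in the snippet above
    RelationalDataSource dsNew = new RelationalDataSource(
        "MyDataSource",
        Utils.GetSyntacticallyValidID("MyDataSource", typeof(RelationalDataSource)));
    dsNew.ConnectionString =
        "Provider=SQLNCLI.1;Data Source=localhost;Initial Catalog=AdventureWorksDW;Integrated Security=SSPI";
    db.DataSources.Add(dsNew);

    // Push the new definition to the server
    db.Update(UpdateOptions.ExpandFull);
    srv.Disconnect();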
Server: Msg 8928, Level 16, State 1, Line 1
Object ID 650485396, index ID 255: Page (1:5426420) could not be processed. See other errors for details.
DBCC results for 'LTINTERVENTIONS'.
There are 739501 rows in 37787 pages for object 'LTINTERVENTIONS'.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'LTINTERVENTIONS' (object ID 650485396).
This error has cleared up without any CheckTable repair, but the same sort of error appears on other tables.
I restored a backup (last night's 7:30 PM backup, on which CheckDB had reported errors) to another server, ran CheckDB there, and got no errors.
Is the data file corrupted? Can I delete the data file and restore from a backup?
The system is Windows 2003 Server (latest SP), SQL Server 2000 (SP3a) and we have a SAN with RAID 10 arrays for the drives. The errors are appearing on 2 different drives on 2 different databases.
Sunday's CheckDB showed errors on tables that have no errors in last night's CheckDB, while last night's CheckDB flags errors on different tables.
The error logs on the server show only application SQL errors, no system errors.
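For anyone triaging the same symptoms, a sketch of the checks involved (the database name, file paths, and logical file names below are placeholders): run a full consistency check in place, then restore the suspect backup to a scratch database and check that copy, which separates on-disk corruption from transient I/O problems.

    -- Full consistency check, reporting every error
    DBCC CHECKDB ('MyDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS

    -- Restore last night's backup under a scratch name and re-check it
    RESTORE DATABASE MyDatabase_Check
        FROM DISK = 'E:\Backups\MyDatabase.bak'
        WITH MOVE 'MyDatabase_Data' TO 'E:\Scratch\MyDatabase_Check.mdf',
             MOVE 'MyDatabase_Log'  TO 'E:\Scratch\MyDatabase_Check.ldf'
    DBCC CHECKDB ('MyDatabase_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS

When errors move between runs and disappear on a restored copy, that pattern usually points at the I/O path (controller, cache, SAN firmware) rather than a damaged data file.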
I currently have the following code in my designer file:

    <asp:SqlDataSource ID="SqlDataSource1" Runat="server"
        SelectCommand="select Site_name, system_id, ASP_Archive, sitetimes, HPOV_ROC, UPPER(CircuitType) as CircuitType, QwestCircuit_ID, SiteConfig, Site_Nat, PVC_VCI from tblASPCustomerWan order by Asp_Archive asc"
        UpdateCommand="UPDATE tblASPCustomerWan SET [Site_name] = @Site_name, [system_id] = @system_id, [ASP_Archive] = @ASP_Archive, [sitetimes] = @sitetimes, [HPOV_ROC] = @HPOV_ROC, [CircuitType] = @CircuitType, [QwestCircuit_ID] = @QwestCircuit_ID, [SiteConfig] = @SiteConfig, [Site_Nat] = @Site_Nat, [PVC_VCI] = @PVC_VCI"
        ConnectionString="server=localhost;Trusted_Connection=yes;uid=portal_user;pwd=Cr@zyP@55w0rd;database=CusPortal_Staging" />

I would like to change the connection so it takes its value from System.Configuration.ConfigurationManager.AppSettings("appStrConnection"). How can I do that?
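Two common ways to do this in ASP.NET 2.0, sketched here (the key "appStrConnection" is from the post; everything else is as in the markup above). Declaratively, the built-in expression syntax reads appSettings directly:

    ConnectionString="<%$ AppSettings:appStrConnection %>"

Or assign it in the C# code-behind before the data source is used:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Pull the connection string out of <appSettings> at runtime
        SqlDataSource1.ConnectionString =
            System.Configuration.ConfigurationManager.AppSettings["appStrConnection"];
    }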
When using a typed dataset in VS 2005 with SQL CE 3.1, the designer continues to generate the parameter's DbType for the GUID column as 'Object', when as far as I can tell it needs to be the 'Guid' DbType. So each time I modify the typed dataset, I go in and manually change it, and things work fine. My question is whether anyone knows a better workaround for this situation.
I've attached a code sample of the erroneous designer code, and highlighted the line that should instead be set to DbType.Guid.
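The attachment isn't reproduced here, but the manual fix described amounts to changing a generated line like the first one below into the second (the parameter variable name is illustrative):

    // As generated by the dataset designer for the uniqueidentifier column:
    param.DbType = System.Data.DbType.Object;
    // As corrected by hand after each regeneration:
    param.DbType = System.Data.DbType.Guid;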
Hi all there! I am quite new to MS SQL administration, so let me ask how this works on your instances of SQL Server. We have several DTS packages on our server, all of them managed from a workstation that has serious hardware problems. So we would like to solve two problems at once and have decided to develop a systematic way of managing DTS. One of several aspects of this operation would be migrating from system ODBC data source definitions to file ODBC sources (.dsn files) in order to make them easier to manage (backup, for example, and even reusability on other workstations). All .dsn files would be located on a network share (\\server\directory...) which would be set as the default ODBC directory in the ODBC administrator on the management station. When I began to do so, it appeared that the EM DTS Designer does not remember the path to the DSN files (for example, on the design panel I choose file DSN, use the browse button to point at the certain .dsn file, and after saving the DTS package the path disappears). Do you use this facility (file .dsn) in the DTS EM Designer, or has MS perhaps treated it as useless and nobody wants to use it? Regards, K
I don't know if there is a fundamental problem with what I am trying to do, or if I am just having trouble setting it up correctly:
I have a multi-user SQL Server database. I want my users to connect to this database via an Access 2000 Data Project. No problem there. The database consists of one main table and several views (based on the office branch the user works from). For example, there is a Chicago view, an Atlanta view, etc., all extracting different records from the same underlying table. I need my users to have FULL ACCESS (select, update, delete) to their respective VIEWS, but they cannot have access to the underlying table. I've tried several configurations and I'm beginning to think that this may not be possible... is that the case?
If it is not possible to grant access to views but not the underlying table, then what are my other options? The objective is to have a multi-user table that each user "owns a piece of" without being able to see the records belonging to their peers. Do I need to set up a table for every office and somehow link those tables into one main table? How would I avoid duplicate records being entered into the separate tables? Any help would be GREATLY appreciated, as this problem has had me stumped for weeks.
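This setup is normally possible via ownership chaining: as long as the same owner (typically dbo) owns both the views and the base table, SQL Server only checks permissions on the view, so users need no rights on the table at all. A sketch, with placeholder object and role names:

    -- Per-branch view; WITH CHECK OPTION keeps users from inserting
    -- or updating rows outside their own branch
    CREATE VIEW dbo.vwChicago AS
        SELECT * FROM dbo.MainTable WHERE Branch = 'Chicago'
        WITH CHECK OPTION
    GO
    -- Full DML on the view only; nothing is granted on dbo.MainTable
    GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.vwChicago TO ChicagoUsers
    GO

Because the ownership chain from view to table is unbroken, the missing table permissions are never checked when users go through the view, while direct queries against dbo.MainTable are still refused.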
I am trying to develop a cube with translated calculated measures in it. I have also translated all the underlying measures/dimensions used in the action for a header.
I can see my translated measures and dimensions in Excel, but when I go to the underlying data (drillthrough), I see the default members' names, not the translated ones. Please note that I have translated all the underlying members.
I'm having a real hard time coming up with a solution to this problem. I created a custom GridView control from Dino Esposito's "Extending GridView" article, which autogenerates a checkbox column that allows multiple record selection. Once a user checks a box, the entire row gets selected. I added a dropdown list at the top of the page that has only two options, "Yes" or "No". What I'm trying to do is update a boolean column called "contract" (I'm using the Pubs sample database) for all selected rows (via the checked checkboxes) depending on whether the user selects "Yes" or "No" from the dropdown. For example:

1) The user selects "No" in the dropdown.
2) The user checks all rows in the checkbox column for which he wants the "contract" field set to "No".
3) The user then clicks a button called "Submit", and all selected records get updated to "No" in the "contract" column.

The idea is to allow the user to change the boolean values of a field for multiple records, so making individual cells editable is pointless. Anybody have an idea how to go about this?
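One way to wire up the Submit button, sketched under assumptions about names (the grid is GridView1 with au_id as its DataKeyNames, the row checkbox is chkSelect, the dropdown is ddlContract, connString holds the connection string; in Pubs, contract is a bit column on the authors table, keyed by au_id varchar(11); the usual System.Data, System.Data.SqlClient, and System.Web.UI.WebControls usings are assumed):

    protected void btnSubmit_Click(object sender, EventArgs e)
    {
        bool contractValue = (ddlContract.SelectedValue == "Yes");
        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE authors SET contract = @contract WHERE au_id = @id", conn))
        {
            cmd.Parameters.Add("@contract", SqlDbType.Bit);
            cmd.Parameters.Add("@id", SqlDbType.VarChar, 11);
            conn.Open();
            foreach (GridViewRow row in GridView1.Rows)
            {
                CheckBox chk = row.FindControl("chkSelect") as CheckBox;
                if (chk != null && chk.Checked)
                {
                    // Same Yes/No value applied to every checked row
                    cmd.Parameters["@contract"].Value = contractValue;
                    cmd.Parameters["@id"].Value = GridView1.DataKeys[row.RowIndex].Value;
                    cmd.ExecuteNonQuery();
                }
            }
        }
        GridView1.DataBind();  // re-read so the grid shows the new values
    }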
I have two databases DB1 and DB2 DB1 has a source table named 'Source' I have created a login 'Test_user' in DB2 with Public access. I have also created a view named 'Test_view' in DB2 which references data from DB1.dbo.Source
While creating a Script Task in the Control Flow, I am getting a "Package Validation Error". Here is the complete message:
Error at Validate File and Load Data: The task is configured to pre-compile the script, but binary code is not found. Please visit the IDE in Script Task Editor by clicking Design Script button to cause binary code to be generated. (Microsoft.DataTransformationServices.VsIntegration)
As instructed by the message, I opened the script IDE and added the code I need. But when I close the VSA IDE, the package designer displays the same error message.
The worst part of the whole story is that if I close the package designer and reopen it, I find that all the code I wrote in the script task has been deleted by the package designer. This is not at all acceptable: I saved the package and still lost all my work, and had to redo all the coding for that task from scratch.
Please respond if anyone has faced a similar problem.
Thanks in advance!
Anand
PS: If anyone from Microsoft is reading this, please look at what you guys are coding there. Due to the buggy software you deliver, I am losing my credibility.
I have set up a new connection as a connection from a data source, but I cannot see how to use this connection to create my Data Flow source. I have tried using an OLE DB connection, but this is painfully slow! Loading 10,000 rows takes 14-15 minutes. The same process in Access, using SQL on a linked table via DSN, takes 45 seconds.
Have I missed something in my setup of the OLE DB source / connection? Would a DSN source be faster?
Hi, I'm developing an application to encrypt SQL databases, and I need to read the source code of stored procedures and views from VB6. If you know how to do this, please answer me. Greetings.
I am trying to get the source code of a view (the SELECT part) into a T-SQL variable, but I have no idea where to find this source code or how to do the assignment. Any ideas? Thanks.
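A sketch that covers both of the last two questions (the object and variable names are placeholders; from VB6 the same query can simply be run over ADO). On SQL Server 2000 and earlier, object text lives in syscomments:

    -- Definitions longer than 4000 characters span several syscomments rows
    -- (order by colid and concatenate them in that case)
    DECLARE @src nvarchar(4000)
    SELECT @src = c.text
    FROM syscomments c
    JOIN sysobjects o ON o.id = c.id
    WHERE o.name = 'MyView' AND o.type = 'V'
    PRINT @src

On SQL Server 2005, SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.MyView')) does the same in one call. Note that neither approach works if the object was created WITH ENCRYPTION.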
Hi, we use a lot of stored procedures in our project, and I am looking for source code/version control software that can be used with SQL Server 7.0 to track changes to these stored procedures. Does anyone have any suggestions?
Looking for recommendations for the best product out there to facilitate both source control and code deployment. Typical features I would be looking for are bulleted below:
- Control object versions
- Tracking history
- Automated features
- Capture database changes
- Roll-back feature (should such a thing exist)
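Whichever product is chosen, a low-tech interim step (sketched here for SQL Server 7.0, where procedure text lives in syscomments) is to dump every procedure's source and check the output into source control:

    -- List the text of every stored procedure; redirect the output to
    -- per-procedure files that get checked in (e.g. via osql -o)
    SELECT o.name, c.text
    FROM sysobjects o
    JOIN syscomments c ON c.id = o.id
    WHERE o.type = 'P'
    ORDER BY o.name, c.colid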
Hi, I am trying, without luck, to load a package which contains a ScriptTask and read the source code of that task.
I can load the package and get the ScriptTask, no problem. However, I am not sure how to get the source code. I know I have to use the ScriptTaskCodeProvider, and I assume the GetSourceCode() method.
This is what I have so far:
    // Unwrap the ScriptTask from its TaskHost wrapper
    ScriptTask scriptTask = taskHost.InnerObject as ScriptTask;

    // Load the task's VSA project into the code provider, then ask
    // for the source by project name
    ScriptTaskCodeProvider codeProvider = new ScriptTaskCodeProvider();
    codeProvider.LoadFromTask(scriptTask);
    string sourceCode = codeProvider.GetSourceCode(scriptTask.VsaProjectName);
Is there a way to print the source code from a local package? I know you can go into the source and cut and paste it into a Word document. I was wondering whether there is a way, while designing the package, to print all the source code of the SQL statements/queries.
I'm looking to create a table in SQL Server using data from the AS/400. I need some code that extracts it from the AS/400 via ODBC, and I need it to loop and re-create that table every minute. The file it retrieves is a live file, but not very large. I basically need to poll the data. Does anyone have code for something like this? Thanks.
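A sketch of one way to do this (the linked server, DSN, library, and file names are placeholders): define a linked server over the AS/400 ODBC DSN once, then schedule the snapshot rebuild as a SQL Agent job that runs every minute, rather than looping in code.

    -- One-time setup: linked server over the iSeries ODBC DSN
    EXEC sp_addlinkedserver
        @server = 'AS400',
        @srvproduct = '',
        @provider = 'MSDASQL',
        @datasrc = 'MyAS400Dsn'

    -- Job step, scheduled every minute: rebuild the local snapshot table
    IF OBJECT_ID('dbo.As400Snapshot') IS NOT NULL
        DROP TABLE dbo.As400Snapshot
    SELECT *
    INTO dbo.As400Snapshot
    FROM OPENQUERY(AS400, 'SELECT * FROM MYLIB.MYFILE')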
Hi all, I am in a situation where some reports were developed by a colleague who has left the company. The reports were published on the server, but the code behind them has since been inadvertently deleted, and we have no documentation about them.
My questions are:
Is there a way to recover the source code?
Is there a way to produce a report showing a list of the reports and the underlying data structure/stored procedure, even in cases where the RDL is missing (i.e. from the published report)? Many thanks in advance. Massimo
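On the first question: a published report keeps its full RDL inside the ReportServer catalog database, so it can usually be pulled back out with a query along these lines (a sketch for SSRS 2005; the Content column is stored as an image and has to be converted back to text):

    SELECT Name,
           CONVERT(varchar(max), CONVERT(varbinary(max), Content)) AS Rdl
    FROM ReportServer.dbo.Catalog
    WHERE Type = 2  -- 2 = report

Since the recovered RDL contains each dataset's command text, the same query is a starting point for the second question as well.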
Hello, I'm not up to speed on IBM database and provider technologies, and I have an issue I'm not clear on. I am using the .NET ODBC data provider in a connection manager. The DSN uses the iSeries Access ODBC driver. I have iSeries Access for Windows version 5 release 4 installed on the laptop where I built the package.
I have a SQL command configured in a DataReader source that uses the connection manager. Between the source and the SQL Server table destination, I have a Derived Column transform that converts the columns from Unicode to non-Unicode data.
When I run the package, I get an error stating that the package failed because a particular column, 'ADDR1', is set to fail if truncation occurs. I set up error output to a text file so that I could look at the failing row(s). The text file had to be set up using UTF-8 before it would accept data. The ADDR1 column at the destination accepts 30 characters. The data in the ADDR1 column of the failing row is 19 characters long, but I think there is a carriage return character present. I have tried TRIM in the expression for the column, but no luck. I have tried code pages 65001 and 37 in the Derived Column transform, but that has not worked. I remembered to set DefaultCodePage and AlwaysUseDefaultCodePage on the OLE DB destination for each code page change.
Am I not using the correct code page, or do I need to do something else to clean up the data?
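If a stray carriage return or line feed really is the culprit, one thing worth trying (sketched in SSIS expression syntax; LTRIM/RTRIM only strip spaces, not control characters) is to remove those characters explicitly in the Derived Column before the cast:

    (DT_STR,30,1252)REPLACE(REPLACE(ADDR1,"\r",""),"\n","")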
I have a number of interfaces in which I have used an OLE DB source.
The problem I am facing is that the OLE DB source component's code page is not configurable. Now, if I want to deploy an interface in a different environment, where the database has a different collation, it gives an error that the OLE DB source needs new metadata.
Has anybody faced this problem before? Please give me a solution.
[DTS.Pipeline] Error: The PrimeOutput method on component "Excel Source" (351) returned error code 0x80040E21. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Here is my request: print out the row number and the contents of the offending field when an error like this occurs.
Wish-list item: have ONE switch that tells SSIS to keep going and skip the bad row. Right now I have to tell it twice for every field, and it still stops!
Enough already. Why let one bad record stop 65,000?