SSIS - OLE DB - IBM DB2 - SQLDUMP - More Dumps Than Works....
Feb 8, 2007
I have a package that accesses a DB2 database and pulls data from a single table. I can't tie it to a specific event, but the package has been causing a dump to occur on a fairly regular basis. The really odd part is that sometimes, when I add a data viewer on the output link of the OLE DB Source, it works... then it starts to dump again a couple of executions later. There are no date/time values involved in the result set, just character strings. The default code page is set to 1252 and 'use default code page' is set to False. Any ideas appreciated - this is really starting to drive me nuts!
Can anyone tell me where I can find information about why SQL Server might be executing a stack and/or symptom dump? That is, what are some of the conditions that might cause this? Every time I try to run DBCC DBREINDEX on a certain table, stack and symptom dumps occur and SQL Server remains incommunicado (can't connect, even locally) until I restart it. I'm going to try to fix the table by renaming it, creating a new table (and dependent objects), transferring the data, and toasting the original table. No problem, I've done that before with smashing success. But if this solves my problem, I'll still be no wiser about what caused it in the first place. Any suggestions, questions, comments, or even jeers would be appreciated. Thanks, Chris
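For reference, a minimal sketch of that rename/recreate/copy approach. The table name here (dbo.BadTable) is a placeholder, and the real table's indexes, triggers and constraints would need to be recreated by hand:

EXEC sp_rename 'BadTable', 'BadTable_old'        -- keep the suspect table around
-- CREATE TABLE BadTable (...)                   -- recreate the table and its dependent objects here
-- recreate indexes, triggers and constraints as well
INSERT INTO BadTable SELECT * FROM BadTable_old  -- copy the data across
-- verify the copy, then:
-- DROP TABLE BadTable_old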
I would like to loop through a SQL Server table that contains the paths to all the reports (SSRS) we need to run, and then execute the reports via SSIS. What task should I use to do this? Will the For Loop container work for something like this? Can anyone please explain how to do it, or at least how to run an SSRS report from SSIS?
I am running SQL Server 2005 and have built a simple SSIS package to import data from an Oracle database to my SQL Server 2005 database. When I run it in SSIS, it works and imports just fine. When I schedule it, it gives me problems. Help!
It might be a problem with the Oracle provider/client I installed. Is there a client version I can download and install? The one I downloaded from Oracle doesn't work. I bet I did something wrong, though.
Here is my version:
Microsoft SQL Server Management Studio 9.00.2047.00
Microsoft Analysis Services Client Tools 2005.090.2047.00
Microsoft Data Access Components (MDAC) 2000.086.1830.00 (srv03_sp1_rtm.050324-1447)
Microsoft MSXML 2.6 3.0 4.0 6.0
Microsoft Internet Explorer 6.0.3790.1830
Microsoft .NET Framework 2.0.50727.42
Operating System 5.2.3790
Microsoft Visual Studio 2005 Version 8.0.50727.42 (RTM.050727-4200) Microsoft .NET Framework Version 2.0.50727
Installed Edition: IDE Standard
SQL Server Analysis Services Microsoft SQL Server Analysis Services Designer Version 9.00.2047.00
SQL Server Integration Services Microsoft SQL Server Integration Services Designer Version 9.00.2047.00
SQL Server Reporting Services Microsoft SQL Server Reporting Services Designers Version 9.00.2047.00
I have a problem running an SSIS package in a SQL Server Agent job. The package runs fine if I run it from the MSDB location, but if I try to run the job it fails. The job step is set to run as the SQL Agent Service Account. The SQL Server Agent service runs as a domain user, SQLExec. I have logged in as this user and run the SSIS package and it runs fine, but if I create a job with only this step it fails. There isn't much information about where the problem is. Any ideas or ways to troubleshoot this would be very much appreciated.
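One way to get more detail out of a failing job step is to pull the full step message from the job history rather than the truncated text shown in the UI. A minimal sketch, assuming the job is named 'MySSISJob' (substitute the real job name):

EXEC msdb.dbo.sp_help_jobhistory
     @job_name = N'MySSISJob',   -- placeholder job name
     @mode     = N'FULL'         -- FULL returns the complete step message text

The message column of the failing step usually names the component or connection that caused the failure.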
If this is a duplicate post I apologise in advance as my search yielded many results about mail alerts but none like this.
The scenario is that my SSIS package is scheduled to read data from a remote FoxPro source. If it fails for any reason, I have set up an email task to alert internal users and an external helpdesk.
My problem is that if I run it via Management Studio (i.e. SQL Server Agent/Jobs/Start Job) and 'force a failure' by unplugging the network cable, it successfully sends an email alert to all recipients (internal and external). If I let the job execute according to the schedule (still with the network cable unplugged), the job fails (as expected) but no email alerts are sent.
I log onto the server with a valid domain user account that has administrative rights on the server as well as dbo rights on the SQL instance. I deploy my package as the domain user and have checked that the domain user is also the owner of the scheduled job.
I suspect it has something to do with ownership or which user is 'truly executing the scheduled job'. Any ideas would be welcome.
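One hedged workaround, assuming Database Mail is configured on the server with a profile named 'AlertsProfile' (the profile name and addresses below are placeholders), is to send the failure alert from the Agent side, for example in a "run on failure" job step, instead of relying on the package's own email task:

EXEC msdb.dbo.sp_send_dbmail
     @profile_name = N'AlertsProfile',                              -- placeholder Database Mail profile
     @recipients   = N'team@example.com;helpdesk@example.com',      -- placeholder addresses
     @subject      = N'FoxPro import job failed',
     @body         = N'The scheduled SSIS import failed; see the job history for details.'

Because this runs under the Agent's context in both the manual and scheduled cases, it also helps isolate whether the scheduled run is failing before the package's email task is ever reached.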
I am new to SSIS. I would like to know: if I want to transfer data from one Oracle schema to another Oracle schema, and also schedule the packages, can I still use SSIS? If yes, what components need to be installed on the database server and in the development environment? I hope I don't need a full SQL Server database installation in order to use SSIS.
We have a Process Task component set up in a couple of SSIS jobs to call a command batch file that transfers a file via Secure FTP to other servers. The process works fine if we start the SQL Agent job manually; however, when the job is started by the scheduler, it fails with an exit code of 4. Even though there is a proxy set up on the agent job, is a different user account being invoked by the scheduler? We're on 2005 SP1 Hotfix 1 (2153). Thanks
Some more info... we have found that if we leave a login session open on the server (the login is the proxy account), the process works. It appears the issue is the need to create a command window for the command-line/batch process to run in; without an active Windows session it fails. You would think a product designed to run on a server in batch mode would be able to work without this. Is that the case? If so, how? Thanks.
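A quick way to confirm which account each step actually runs under is to look at the proxy assignment on the job steps. A minimal sketch, assuming the job is named 'SFTP Transfer' (a NULL proxy means the step runs as the Agent service account):

SELECT s.step_id, s.step_name, s.subsystem,
       p.name AS proxy_name, c.credential_identity
FROM   msdb.dbo.sysjobs j
JOIN   msdb.dbo.sysjobsteps s ON s.job_id = j.job_id
LEFT JOIN msdb.dbo.sysproxies p ON p.proxy_id = s.proxy_id
LEFT JOIN sys.credentials c     ON c.credential_id = p.credential_id
WHERE  j.name = N'SFTP Transfer'   -- placeholder job name

If credential_identity is not the account you expect, the scheduler really is invoking a different user than your manual tests.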
We're experiencing a problem where our SSIS packages will intermittently hang. There are no log errors or events in the Event Viewer. It happens whether the package is executed from the SQL Agent job or run from BIDS. When running from BIDS it appears to hang inside one of the data flows (several parallel pipes with sorts, merge joins, etc.), and in multiple pipes within the data flow component. The problem is reproducible: we just kill it and re-run, and it hangs in the same places.
Now here's the odd thing: if we simply open and close some of the components in the pipeline after the place it hangs, a subsequent run will go further in the pipeline before hanging. If we open and close all the components after the point it initially hung, the data flow will run fine from there on out. When I say "open and close" I mean no changes are made; we simply double-click the component, like a merge join, then click Close.
To me this does not seem like a memory problem; it seems more likely that something is wrong with the metadata, and opening and closing a component somehow alters the metadata to "right" it.
This seems to occur intermittently after we make modifications to the package. It's as if, when you make any modification, even one unrelated to the data flow, you then have to go through and open and close every component in your package to ensure it will work. Again, no errors or warnings are fired.
I need to set up a dump of my main database's transaction log that runs every hour but does not overwrite until the 8th hour (keeping 7 log dumps). Can anyone tell me how to set up the scheduled tasks for the transaction log dump? Is there anything special I need to do for the main database dump?
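A minimal sketch of the hourly log dump step itself, assuming SQL Server 7.0-style BACKUP LOG syntax, a database named MainDB and a C:\LogDumps folder (all placeholders). Scheduled hourly, it rotates through 7 files, so each file is at least 7 hours old before it is overwritten:

DECLARE @slot int, @file varchar(255)
SELECT @slot = DATEPART(hh, GETDATE()) % 7                              -- rotate through slots 0-6
SELECT @file = 'C:\LogDumps\MainDB_log_' + CONVERT(varchar(2), @slot) + '.trn'
BACKUP LOG MainDB TO DISK = @file WITH INIT                             -- INIT overwrites the old dump in that slot

The full database dump can be a separate scheduled task using BACKUP DATABASE the same way; the log dumps are only restorable if a full dump exists as a base and the 'trunc. log on chkpt.' option is off.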
Hi: Actually we have problems doing database dumps, so we're copying the .DAT files directly to a backup machine. I mean the .DAT files that are the data devices SQL Server uses, not dump files. We cannot execute the dump process.
What problems can we expect from doing this? And what could be the reason we cannot dump our DBs?
I have noticed a question on the Admin exam which involves declaring and using a time variable for scheduling a backup or dump of a database. Is this possible, and has anyone else seen this on the exam or used it in reality?
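It is possible, at least with the 7.0/2000 BACKUP syntax, since BACKUP accepts a variable for the target file. A minimal sketch (database name and path are placeholders):

DECLARE @stamp varchar(12), @file varchar(255)
SELECT @stamp = CONVERT(varchar(8), GETDATE(), 112)        -- e.g. 20070208
SELECT @file  = 'D:\Backups\pubs_' + @stamp + '.bak'
BACKUP DATABASE pubs TO DISK = @file WITH INIT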
Our SQL Servers are giving us a headache: after a certain period of time, the SQL service either shuts down by itself or hangs. I've opened the logs and found hex dumps. Can you help me out with these?
I've read that a SQL dump can only be written directly on the local server. Is there a way to write the dump directly to another server via the network (not copying it afterwards, but writing it during the backup process)? Any help much appreciated.
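Dumping straight to a network share does work, provided the account the SQL Server service runs under has write permission on the share; a mapped drive letter will not be visible to the service, so a UNC path is required. A minimal sketch with placeholder names:

BACKUP DATABASE MyDB
    TO DISK = '\\BACKUPSRV\SQLDumps\MyDB.bak'   -- UNC path to the remote server
    WITH INIT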
According to Microsoft, we can cluster the SSIS service, but it is NOT RECOMMENDED. http://msdn2.microsoft.com/en-us/library/ms345193.aspx
Now, this is the situation I have, where I need to understand how SSIS works.
Environment: active/active cluster environment for SQL Server, with SSIS installed as a stand-alone default instance on both nodes.
Name:              Node 1         Node 2
-----------------  -------------  -------------
Server name:       Nd1            Nd2
SQL Server name:   cs-nd1in01     cs-nd2in02
SSIS server name:  Nd1            Nd2
BTW, this is a consolidated environment, so more than one application is expected to reside on each instance of SQL Server.
The question is around SSIS: what would be the best practice for developing SSIS packages that can work with the above environment?
Scenario: what if my Nd1 fails? SQL Server cs-nd1in01 will fail over to Nd2 and will be available. But how about the SSIS packages? How do they know to use the Nd2 SSIS service when the Nd1 SSIS service is not available? Does anyone have experience setting up SSIS in a cluster environment, but as a non-clustered service?
We are experiencing high CPU utilization across all 4 CPUs at the top of the hour, when our transaction log dump job runs. Has anyone observed this behavior before? Is there anything we can do to mitigate it? Thank you.
I am testing a procedure to automate the transaction log dump. I am following the steps located in Chapter 22 of the Microsoft SQL Server Administrator's Companion ("Automatic Transaction Log Dumps Using Performance Monitor"). The alert in Performance Monitor appears to be starting when the log is 75% full, but the alert is not firing off the file that contains the dump transaction SQL command. For the 'Run Program on Alert' box, this is what I have: isql -Ssvrname -Usa -P -id:appsmssqlinndump.sql The dump.sql file contains: 'dump transaction pubs with no_log'
I have also tried the following 4 steps: 1) created a SQL alert message, 2) created an NT Performance Monitor threshold alert to run sqlalrtr to issue a certain error when the pubs log is 75% full, 3) created a TSQL task, and 4) created a SQL Server alert to run the task created in step 3. This appears to do the same thing: the alert is fired off, but the task is never executed. Note: I am able to execute the task from within the Scheduled Tasks window.
I am using standard security with SQL Server 6.5 (SP5a) running on NT4. Thanks for your help in advance.
I am new to Microsoft SQL Server, as I come from an Oracle background. I am preparing for the MCP certification Exam 70-229, Designing and Implementing Databases with Microsoft SQL Server 2000 Enterprise Edition. The exam covers SQL Server 2000; is there an exam for SQL Server 2005?
Is it the right exam to take to start as a fresher on the Microsoft platform? I am into development, not administration.
If anybody has any exam preparation questions, please forward them to email@example.com.
Does anybody know how to automate the loading of incremental transaction dumps? The manual way is to use the "LOAD TRAN db WITH FILE = x" statement.
Since this has to be done in the right sequence, and I need to automate this task to keep a second server up to date, I would like to know if there is a stored procedure or any other tool which could do the task.
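As far as I know there is no built-in stored procedure for this; the usual approach is to script the loads in file-number order and schedule that script. A minimal sketch in the 6.5 DUMP/LOAD syntax, assuming the hourly dumps are appended to a single dump device named tranlog_dev and the standby database is called StandbyDB (all placeholders):

LOAD DATABASE StandbyDB FROM standby_full_dev              -- restore the base database dump first
LOAD TRANSACTION StandbyDB FROM tranlog_dev WITH FILE = 1
LOAD TRANSACTION StandbyDB FROM tranlog_dev WITH FILE = 2
LOAD TRANSACTION StandbyDB FROM tranlog_dev WITH FILE = 3  -- ...and so on, in dump order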
Our users believe that we lost some valid data, but no one knows who did it. I thought I could find out from the transaction dumps I take every hour. Can I read transaction dump (*.TRN) files in SQL Server 7.0, or can I get this information through other means?
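SQL Server 7.0 has no built-in reader for .TRN files (third-party log readers exist), but the hourly dumps can still help: restore a copy of the database to a point just before the suspected delete and compare it with the current data. A minimal sketch with placeholder names, paths and time:

RESTORE DATABASE MyDB_copy
    FROM DISK = 'D:\Backups\MyDB_full.bak'
    WITH MOVE 'MyDB_Data' TO 'D:\Data\MyDB_copy.mdf',      -- logical file names are placeholders
         MOVE 'MyDB_Log'  TO 'D:\Data\MyDB_copy_log.ldf',
         NORECOVERY
RESTORE LOG MyDB_copy
    FROM DISK = 'D:\Backups\MyDB_log_0900.trn'
    WITH STOPAT = '2007-02-08 09:30', RECOVERY             -- stop just before the suspected delete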
I'm going back and forth on an issue and was looking for some outside observations. I have over a dozen SQL Servers on NT, both 6.5 and 7.0. On some servers I'm dumping the databases and backing up the dumps with Veritas Backup Exec; on 4 (two SQL 6.5, two v7) I'm backing up with the Veritas SQL Backup Agent.
Obviously, if you don't have room to dump a database, you must use the backup agent. That is the case on one of my servers and is becoming so on a second. But my personal preference is to do the dump/backup.
(As a side note, one server is not backing up correctly with the backup agent, but ultimately that box will require the agent due to DB growth, so it is something that I have to resolve.)
I like the dump system for a few reasons. I find it easier to load from a dump, particularly a single table. Likewise, I find it easier to copy a database by loading from a dump rather than restoring from a backup, but that's mainly because Backup Exec is a little bit strange about redirecting restores.
Here's my clincher: I find restores via the backup agent to be ridiculously slow. Let's say I have a 5 GB database that has 1.5 GB in it. The dump size will probably be somewhere below 2 GB. The restore via the backup agent carefully writes the entire 5 GB, even though 3.5 GB of that is empty. This takes a lot of time. Add to that the "post-restore DBCC". I had such a restore take something on the order of 13 HOURS, which, needless to say, conflicted with my nightly DBCCs and backups.
OK. End of rant. Any suggestions or thoughts on the subject?
I have two calls to stored procedures in an SSIS package that fail silently. They are simply not executed in production, but they work fine in test. Nothing happens, and the SQL Server Agent reports that everything has gone just fine.
In test they have one server with databases A and B. No issue there.
In prod they have two servers with databases A and B. On server 1, SQL Server Agent executes a job that includes an SSIS package, which runs a couple of SPs on server 2. That user is db owner of database B on server 2, and yet nothing happens. The SPs are not executed.
If I run the job manually in prod, it works, but not when it is run by the SQL Server Agent account, which, as I said, is even db owner.
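One way to see what is (or is not) reaching server 2 is to have each stored procedure record who actually called it. A minimal diagnostic sketch, assuming an audit table can be added to database B (table and column names are placeholders):

CREATE TABLE dbo.sp_call_audit
(
    called_at  datetime NOT NULL DEFAULT GETDATE(),
    login_name sysname  NOT NULL,
    proc_name  sysname  NULL
)
GO
-- then, as the first statement inside each stored procedure:
-- INSERT INTO dbo.sp_call_audit (login_name, proc_name)
-- VALUES (SUSER_SNAME(), OBJECT_NAME(@@PROCID))

If the table stays empty after a scheduled run, the calls never reach server 2 (pointing at the connection or security context); if rows appear, the procedures do run and the problem is inside them.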
I've got a popular problem: I get a message that server access is denied.
But here is what is different in my error: when I use the same settings, the same database and the same connection string (on an MSDE server), there is no problem.
On the SQL Server I have Windows authentication, but I added all the accounts, such as ASPNET and sa, and I try to connect by:
RETTO - name of my server
server=RETTO;uid=sa;pwd=password;database=db1;
or by
Integrated Security=SSPI;server=RETTO;uid=RETTO\ASPNET;database=db1;
I CAN BROWSE RECORDS, THERE ARE NO PROBLEMS WITH THE CONNECTION!!! But when I try to update, insert or delete something in the database, I get this error that access is denied or the server does not exist!!!
PLEASE HELP, I've been fighting with this for over 5 days!!!
I HAVE ALLOWED MY ACCOUNTS (SA, ASPNET) EVERYTHING: executing stored procedures, and accessing the data tables with insert, delete and update queries. WHERE IS THE PROBLEM!!!??
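For what it's worth, with Windows authentication the ASP.NET worker process needs its own login, a user in db1, and explicit write permissions; SELECT alone will let it browse but not modify. A minimal sketch, assuming the local ASPNET account on the server RETTO and a placeholder table dbo.Orders:

EXEC sp_grantlogin 'RETTO\ASPNET'            -- Windows login for the worker process
GO
USE db1
GO
EXEC sp_grantdbaccess 'RETTO\ASPNET'         -- user in the application database
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO [RETTO\ASPNET]
GO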
I'm having a strange problem with this, but I know (and admit) that the problem is on my PC and nowhere else. My firewall was causing a problem because I was unable to PING the database server; switching it off gets a successful PING immediately. The most useful utility to date is running netstat -an in the command window. This lists all the connections that are live and the ports that are being listened to. I can establish a connection both by running:
TCP 18.104.22.168:1134 22.214.171.124:1433 ESTABLISHED
TCP 126.96.36.199:1135 188.8.131.52:1433 ESTABLISHED
TCP 127.0.0.1:1031 0.0.0.0:0 LISTENING
TCP 127.0.0.1:5354 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51114 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51201 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51202 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51203 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51204 0.0.0.0:0 LISTENING
TCP 127.0.0.1:51206 0.0.0.0:0 LISTENING
UDP 0.0.0.0:445 *:*
UDP 0.0.0.0:500 *:*
UDP 0.0.0.0:1025 *:*
UDP 0.0.0.0:1030 *:*
UDP 0.0.0.0:3456 *:*
UDP 0.0.0.0:4500 *:*
UDP 184.108.40.206:123 *:*
UDP 220.127.116.11:1900 *:*
UDP 18.104.22.168:5353 *:*
UDP 127.0.0.1:123 *:*
UDP 127.0.0.1:1086 *:*
UDP 127.0.0.1:1900 *:*
Both these utilities show as establishing a connection in netstat, so I am able to connect to the database server every time; this worked throughout yesterday and has continued this morning.
The problem is when I attempt to use SQL Server Management Studio. When I attempt to connect to tcp:sql5.hostinguk.net, 1433, nothing shows in netstat at all. There is an option to encrypt the connection in the connection properties tab in Management Studio; when I enable this I do get an entry in netstat -an, see below:
It is almost as if it's trying the different ports, but you get this TIME_WAIT thing. The error message is more meaningful and hopeful, because I get:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.) (.Net SqlClient Data Provider)
I would expect this, as the DNS has not been advised to encrypt the connection.
This is much better than the "Login failed for user 'COX10289'. (.Net SqlClient Data Provider)" that I get, irrespective of whether I enter a password or not.
This is on an XP machine trying to connect to the remote web hosting company via the internet.
I can ping the server
I have enabled Shared Memory and TCP/IP in protocols; Named Pipes and VIA are disabled
I do not have any aliases set up
No I do not force encryption
I wonder if you have any further suggestions for this problem?
I use the code below for updating data from an AS400 linked server. I don't understand how the WHERE NOT EXISTS( sections work; usually they do, but in this case it does not, and I can't seem to find out why.
Does anyone see the error?
--=========================================
--Create a local temporary table that holds
--all the data from the source table
--=========================================
SELECT * INTO #TEMP FROM dbo.LINK_LTTSTOC
--=========================================
--Remove table entries that are no longer
--needed or that have to be updated
--=========================================
DELETE FROM LTTSTOCK
WHERE NOT EXISTS( SELECT * FROM #TEMP
    WHERE LTTSTOCK.WarehouseNo = LTWHLO AND LTTSTOCK.Location = LTWHSL
    AND LTTSTOCK.ItemNo = LTITNO AND LTTSTOCK.NumberAvail = LTAVAL )  --the closing parenthesis was missing here

--Insert data that is missing or that
--needed to be updated and was previously deleted above
--(the SELECT below is a reconstruction: the source columns follow the
-- comparisons above, and rowguid is assumed to come from NEWID())
INSERT INTO dbo.LTTSTOCK(WarehouseNo,Location,ItemNo,NumberAvail,rowguid)
SELECT LTWHLO, LTWHSL, LTITNO, LTAVAL, NEWID()
FROM #TEMP T
WHERE NOT EXISTS( SELECT * FROM dbo.LTTSTOCK
    WHERE WarehouseNo = T.LTWHLO AND Location = T.LTWHSL
    AND ItemNo = T.LTITNO AND NumberAvail = T.LTAVAL )