SQL Server Admin 2014 :: SSIS Job Fails For No Reason
Aug 11, 2015
I have a job that runs an SSIS package. The job seems to run through the package successfully, but at the end it errors out saying "The binary code for the script is not found. ...". The script referred to is near the beginning (not the very first step) and should already have run.
The thing is, I can manually run this package on the server or in Visual Studio without any problem. Also, this job has run on a regular basis without any issues on our old SQL 2008. I'm migrating this to SQL 2014 on Amazon's cloud.
Alongside this package are two other very similar ones. They all work fine. I just can't figure out what can be wrong with this one.
I am not able to start the SSIS service on my laptop. It gives an error saying "The SQL Server Integration Services 11.0 service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs."
I am also not able to start SSDT. It gives an error:
"Microsoft Visual Studio is unable to load this document. To design Integration Services packages in SSDT, SSDT has to be installed by one of these editions of SQL Server: Standard, Enterprise, Developer, or Evaluation."
I have installed the SQL 2012 Evaluation version on my local desktop.
We recently had a problem with DB Mail. SQL jobs that send an email succeeded, but the emails in the jobs failed to send. There was a problem with the email server; the error is included below. We fixed the problem with the email server. How can I get an alert when a DB Mail email fails to send?
Date: 4/23/2015 10:01:06 AM    Log: Database Mail (Database Mail Log)
Log ID: 5907    Process ID: 13204    Mail Item ID: 5702    Last Modified: 4/23/2015 10:01:06 AM    Last Modified By: sa
Message: The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 1 (2015-04-23T10:01:06). Exception Message: Cannot send mails to mail server. (Insufficient system storage. The server response was: 4.3.1 Unable to accept message because the server is out of disk space.). )
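One way to get an alert, sketched below: Database Mail records failures in msdb.dbo.sysmail_faileditems, so a SQL Agent job can poll that view and raise a logged error that an Agent alert or operator notification can then pick up. The 10-minute window and the message text are assumptions; match the window to the polling job's schedule.

-- A minimal sketch: run this from a SQL Agent job on a schedule; it raises
-- a logged error whenever Database Mail items failed in the last 10 minutes.
IF EXISTS (SELECT 1
           FROM msdb.dbo.sysmail_faileditems
           WHERE last_mod_date > DATEADD(MINUTE, -10, GETDATE()))
    RAISERROR ('Database Mail reported failed items in the last 10 minutes.', 16, 1) WITH LOG;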
I have searched extensively and not been able to find a solution to this problem.
The problem: We have one SSIS package that will sometimes 'finish' executing (or crash from a .NET exception) when it certainly has not made its way through all of the data flow components. There are no SSIS error messages, no warnings, and it never happens at the same location in the package's pipeline. The only thing that is instantly visible is a command window that flashes on the screen and disappears too quickly to see anything.
Sometimes the package does actually complete without any problem, but most of the time, it does not.
What we see: If the package is being run through the "Execute Package Utility" (by double-clicking the dtsx file), after a bit a command window flashes on the screen and instantly disappears (no text is visible), then the "Execute Package Utility" disappears. The event viewer on the machine then shows:
If the package is running within Visual Studio, again the command window flashes on the screen, then the "Execution has completed" prompt appears, but any "running" component remains yellow (no red), both within the data flow and control flow (we do not have any event handlers set up). Neither our SQL log provider nor the "Execution Results" tab in Visual Studio shows any type of error message... all SSIS messages just stop right in the middle of the many OnPipelineRowsSent log events (so there is no PackageEnd log event when this happens). The event viewer on the machine contains no useful messages when running within Visual Studio.
And other packages: Are fine. This is only the case for this one package... we have nearly a dozen other packages, all very similar in design, that complete without issue.
We have also tried re-creating this troublesome package from scratch, to no avail.
About the package: The Data Flow is pulling rows from 3 different external SQL data sources (400k-500k rows total), sorting and merging the rows, performing some basic lookups, then SCD'ing the results. This Data Flow is executed multiple times within 2 nested for loops (these nested loops give us particular dates, i.e. years 2000 through 2008, then months 1 through 12 for each year). There is not a single script task in the package. The problem seems to happen most as the data is being pulled from the sources and merged together, but it is not limited to this area.
The environment: We've tried multiple machines with the same result. The current machine specs are as follows: SQL Server 9.0.3042 (SP2); Windows Server 2003 R2, Enterprise x64 Edition, SP2; 3.00GHz x 16 processors, all 64-bit; 63.5 GB of RAM; over 1 terabyte of hard disk space; .NET 2.0.50727.42.
The package was designed using: Visual Studio 2005 with SP1; Microsoft SQL Server Integration Services Designer, Version 9.00.3042.00.
I am quite new to SSIS but managed to build a package which imports text files into SQL. The text files are generated after users complete a manufacturing process on a machine.
The SSIS package is stored in the SSIS catalog, and currently a SQL Agent job runs every evening to import new files that have been created during the day. Users have now requested the ability to run the import process as soon as they have finished their manufacturing runs, as they may want to query the data to look up stats etc.
What is the best way to do this, considering all of the users are not SQL guys and won't have direct logins into the SQL Server or access to SQL Server Management Studio? They will have access to the PC where the files are generated, so ideally I need a batch file which they can just execute to import their new files.
I have seen lots of things on the web about running dtexec, but as the package is stored in the SSIS catalog, how can I execute this remotely?
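One possible shape for that batch file, as a sketch: dtexec does have an /ISServer option for catalog packages, but an alternative that needs only sqlcmd is to call the SSISDB procedures catalog.create_execution and catalog.start_execution. The server, folder, project, and package names below are placeholders, and with -E the users' Windows accounts would still need execute permission on the project in SSISDB.

@echo off
REM run_import.bat - sketch only; replace server/folder/project/package names
sqlcmd -S MySqlServer -E -Q "DECLARE @eid BIGINT; EXEC SSISDB.catalog.create_execution @folder_name = N'ETL', @project_name = N'FileImport', @package_name = N'ImportFiles.dtsx', @execution_id = @eid OUTPUT; EXEC SSISDB.catalog.start_execution @eid;"

By default start_execution returns immediately and the package runs asynchronously on the server, which is probably what you want here.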
I have SSIS 2012 Enterprise, use catalog deployment, and have more than 50 environment variables for connections to databases across my enterprise.
The problem is that when I go to configure the packages after deployment and pick the proper environment variables, they are not sorted, so I have to browse all entries in order to find the proper one.
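As a workaround while configuring, a query against the catalog views returns the variables sorted, which can make the right entry easier to find than the unsorted dialog. A sketch:

-- List all environment variables in SSISDB, sorted by environment and name.
SELECT e.name AS environment_name,
       v.name AS variable_name,
       v.value
FROM SSISDB.catalog.environments AS e
JOIN SSISDB.catalog.environment_variables AS v
     ON v.environment_id = e.environment_id
ORDER BY e.name, v.name;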
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see you can grant execute on xp_readerrorlog but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add the signature to sp_readerrorlog but it's not permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role but then explicitly deny alter any login (best done with a custom server role in 2012+ but otherwise just manually in 2008). I tested this out and it works, I'm not able to alter any logins or increase my own permissions, I also did a check of what's reported from fn_my_permissions(null, null) and it shows minimal permissions like I'd expect.
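For reference, a sketch of that 2012+ approach as described above (the role and login names are placeholders; verify the role nesting on your build):

USE [master];
CREATE SERVER ROLE ErrorLogReaders;
-- Nest the custom role in securityadmin, then explicitly deny the dangerous part.
ALTER SERVER ROLE securityadmin ADD MEMBER ErrorLogReaders;
DENY ALTER ANY LOGIN TO ErrorLogReaders;
ALTER SERVER ROLE ErrorLogReaders ADD MEMBER [DOMAIN\SomeUser];  -- hypothetical login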
If I install an instance with Windows-only authentication and then change it to Mixed Mode, the sa login's password has already been set by the time I enable it. What is the default? Is the password generated at setup, and if so, how secure is it? What algorithm is used for that?
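Whatever setup assigned, it's safest not to rely on it: set your own password at the moment you enable the login. A minimal sketch (the password is a placeholder):

ALTER LOGIN sa WITH PASSWORD = N'<YourStrongPasswordHere>';
ALTER LOGIN sa ENABLE;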
My SQL databases in SQL Server 2014 have the status "Suspect", as I saw in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures. I need to recover from the .mdf file.
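If no clean backup is available, one last-resort option (it can lose data, so treat this as a sketch and attempt a restore first) is emergency-mode repair; the database name is a placeholder:

ALTER DATABASE [MyDb] SET EMERGENCY;
ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'MyDb', REPAIR_ALLOW_DATA_LOSS);  -- last resort: may discard data
ALTER DATABASE [MyDb] SET MULTI_USER;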
I am using a monitoring system where I can monitor a numeric SQL result, assuming the result is one field and one row. I would like to use this to monitor, say, the free available space or percentage on the master database. DBCC SQLPERF gives me a few columns and results for all databases on the server.
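For a single database you can bypass DBCC SQLPERF and compute the free space yourself, which collapses to the one-row, one-column shape the monitor expects. A sketch against master (result in MB; size and FILEPROPERTY count 8 KB pages):

USE master;
SELECT SUM(CAST(size - FILEPROPERTY(name, 'SpaceUsed') AS BIGINT)) * 8 / 1024 AS free_mb
FROM sys.database_files
WHERE type_desc = N'ROWS';  -- data files only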
In our environment, applications use a DNS name which points to the physical server's IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point the DNS to the listener IPs? If a failover happens, can DNS point to the listener IP address for whichever node is currently primary?
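For context, with an Availability Group you don't point DNS at either node yourself: the listener owns its own DNS name, and the cluster registers the listener's IPs (both of them, under the default RegisterAllProvidersIP setting) and brings online the one for the subnet of the current primary. Clients then connect to the listener name, ideally with MultiSubnetFailover=True in the connection string. A sketch of creating such a listener (AG name, listener name, and IPs are placeholders):

ALTER AVAILABILITY GROUP [MyAG]
ADD LISTENER N'MyAgListener' (
    WITH IP ( (N'10.1.0.50', N'255.255.255.0'),    -- one static IP per subnet
              (N'10.2.0.50', N'255.255.255.0') ),
    PORT = 1433);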
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or performance counters?
What are the filters we have to select in Profiler to monitor the SQL Server?
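On 2012 and later, an alternative to Profiler worth considering is an Extended Events session, which is cheaper than a trace and filters server-side. A sketch that captures batches running longer than 5 seconds (the session name and threshold are assumptions):

CREATE EVENT SESSION [LongBatches] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.client_app_name, sqlserver.sql_text)
    WHERE duration > 5000000)  -- duration is in microseconds
ADD TARGET package0.event_file (SET filename = N'LongBatches.xel');
ALTER EVENT SESSION [LongBatches] ON SERVER STATE = START;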
What is the best approach for a read-only copy of a database that is ~1 TB? The primary database is fed nightly by an ETL process. We are currently trying to duplicate the ETL to the read-only server, but that process is not going well. So we are looking at other options that let SQL make the copy.
The primary database is on a Win12R2 with SQL 12 or 14, a 2 node A/P failover cluster.
The read only copy will be on a Win12R2 with SQL 12 or 14. It is not a requirement to fail over to the read only copy if the primary should go down.
What would be the best approach to accomplish the end result?
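Two options that let SQL make the copy are log shipping with STANDBY (readable between restores, available on Standard Edition) and an Availability Group readable secondary (Enterprise Edition). A sketch of the log shipping variant on the reporting server; paths and names are placeholders, and readers are disconnected while each restore runs:

RESTORE DATABASE [BigDb] FROM DISK = N'\\share\BigDb_full.bak'
    WITH STANDBY = N'D:\undo\BigDb_undo.dat', REPLACE;
-- then, after each nightly ETL, apply the new log backups the same way:
RESTORE LOG [BigDb] FROM DISK = N'\\share\BigDb_log.trn'
    WITH STANDBY = N'D:\undo\BigDb_undo.dat';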
I have 10 databases configured as the principal in mirroring, and I need to fail over all of them as part of a planned failover. Instead of writing an ALTER DATABASE ... SET PARTNER FAILOVER statement for each database by hand, is there a script that will generate the failover commands for every database currently in the principal role?
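A sketch of such a generator: run it on the principal instance and execute the statements it prints (each database must be synchronized for a manual failover to be allowed):

SELECT N'ALTER DATABASE ' + QUOTENAME(d.name) + N' SET PARTNER FAILOVER;'
FROM sys.database_mirroring AS m
JOIN sys.databases AS d
     ON d.database_id = m.database_id
WHERE m.mirroring_role_desc = N'PRINCIPAL';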
After installing SSMS on some computers, the only way we can get SSMS to run correctly is to run it as administrator. Is there a way around that? These end users are logging in as themselves and have accounts in SQL Server all set up, but SSMS will only launch for them if we right-click and select "Run as administrator". After doing some digging, it seems that this is a common problem out there.
I have a SQL 2014 install and cannot for the life of me get the maintenance plan to remove old backups. I've tried everything. Rights to the folder where the backups are stored are adequate, the extension set in the cleanup task is as it should be, etc. The log shows the job ran successfully. Running the command manually shows successful completion, but backups are still not removed.
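One way to narrow this down: the cleanup task calls xp_delete_file under the covers, and that proc reportedly skips any file it does not recognize as a SQL Server backup while still reporting success. Running it by hand with the task's own parameters can confirm whether that is what's happening; the path is a placeholder, and note the extension is given without a dot:

DECLARE @cutoff DATETIME = DATEADD(DAY, -7, GETDATE());
-- 0 = backup files; final 1 = include first-level subfolders
EXECUTE master.dbo.xp_delete_file 0, N'D:\Backups\', N'bak', @cutoff, 1;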
When I execute the RESTORE LOG command, the messages window shows how many seconds the restore takes; at the same time, the status bar also shows the seconds the command takes.
The two values are different and can be very different. See the examples below: one restore takes 1.8 seconds, but in total the command takes 4 seconds to complete; the other is 8.1 seconds versus 12 seconds.
What does SQL Server or Windows do after the restoring?
I did an xperf trace. I can see that after the restore is completed, SQL Server does a garbage collect and a log write, which run very quickly, but the storage is busy reading the log file for nearly 2.2 seconds (4 - 1.8) and 4 seconds (12 - 8.1).
See pic 1 above: from 13 to 17, the restore operation is finished, but the storage jumps to 100% active to do some reads, only reads, no writes. Zooming into that period shows pic 2: it reads 4096 (I don't know the unit size) for about 4 seconds. What does this do?
The data file, log file, and backup file are on different drives, but all local drives. The interesting point is that the reads jump after the restore. I tested it on a different server, same result...
I've got an old version of SQL Server 2008 R2 Developer Edition on an old PC which is failing. I've got a new PC and have put SQL Server 2014 Developer Edition onto it. Before the old machine completely dies, I've gotten into SSMS on the old machine and done a backup of the databases I want to save. I've moved the .BAK files to where I can get to them from SSMS on the new machine. I've gotten into SSMS and tried to restore the database to my new machine. However, I'm getting an error that does not make any sense to me.
The database I backed up is named JobSearch. When I backed it up, that was the only database I had selected. Like I said, I copied the .BAK to the new machine, got into SSMS, told it that I wanted to restore the JobSearch database, telling it where I wanted to put it, and it then immediately fails with:
"Restore of database 'JobSearch' failed. System.Data.SqlClient.SqlError: Logical file 'VideoLibrary_Data' is not part of the database 'JobSearch'. Use RESTORE FILELISTONLY to list the logical file names."
Well of course VideoLibrary isn't "the logical file". But neither did I select VideoLibrary (which is a database I also want to move, but I'm doing one at a time). So what in heck is going on here? Why is it complaining about a database I haven't even selected to back up? Why, when I check everything on the old machine, is it backing up JobSearch, but on the new machine it sees VideoLibrary?
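One likely explanation (an assumption, but consistent with the symptom): the .BAK file holds more than one backup set, e.g. an older VideoLibrary backup was appended to the same file, and RESTORE defaults to FILE = 1, the first set. A sketch for checking and picking the right set; the FILE number, paths, and logical names below are placeholders you'd read off the first two commands:

RESTORE HEADERONLY FROM DISK = N'D:\Backups\JobSearch.bak';  -- lists every backup set in the file
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\JobSearch.bak' WITH FILE = 2;
RESTORE DATABASE [JobSearch] FROM DISK = N'D:\Backups\JobSearch.bak'
    WITH FILE = 2,
         MOVE N'JobSearch'     TO N'D:\Data\JobSearch.mdf',
         MOVE N'JobSearch_log' TO N'D:\Data\JobSearch_log.ldf';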
Message: Executed as user: NT AUTHORITY\SYSTEM. The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases [SQLSTATE 42000] (Error 9002). The step failed. I get the above error in my SQL Server Agent job, and I've gotten this type of error in several of my jobs.
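As a first diagnostic step, the error itself points at the column to check; a sketch, best run while the error is occurring:

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'tempdb';

DBCC OPENTRAN ('tempdb');  -- shows the oldest active transaction, a common culprit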