Hi, I recently migrated from SQL Server v7.0 to v2000. The maintenance routines that were set up on the new v2000 server (also a completely new machine) are not running. When they attempt to run, I get the following error:
[Microsoft][ODBC SQL Server Driver][SQL Server]Repair statement not processed. Database needs to be in single user mode.
Consequently, no backups and no repair; the only thing in the maintenance routine that does work is the rebuilding of indexes. Any ideas would be greatly appreciated.
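For what it's worth, the plan's repair option issues a DBCC repair statement, which is what demands single-user mode; many people simply untick "Attempt to repair any minor problems" so the plan runs a plain CHECKDB instead. A minimal sketch of what the repair actually requires (database name is illustrative):

ALTER DATABASE MyUserDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKDB ('MyUserDb', REPAIR_REBUILD)
ALTER DATABASE MyUserDb SET MULTI_USER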
Hi. I'm using the db maint. plan to do database & trans.log backups, and it's failing on my databases that are set to use the 'simple' recovery method (similar to 'trunc. log on checkpoint'). The SQL Agent job then shows up as failed, although what's really happened is that it's just skipped over the databases that are in that mode.
To clarify this, regular database backups complete successfully, while trans.log backups fail due to the databases set to simple recovery.
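A quick way to see which databases the log backup task will trip over (a sketch for SQL 2000; DATABASEPROPERTYEX reports the recovery model):

SELECT name, DATABASEPROPERTYEX(name, 'Recovery') AS recovery_model
FROM master.dbo.sysdatabases
ORDER BY name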
Hello. I set up a new maintenance plan for my SQL 2005 SP2 server to do a full backup and transaction log backup on all databases (system and 3 user-defined).
Every time I execute the maintenance plan, I get the following error in the job history, and no backups are made:
Message
Executed as user: AD-ENT\MKTSQLService. ...00.3042.00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 12:13:35 PM
Error: 2008-04-01 12:13:45.09 Code: 0xC00291EC Source: {AE791AB6-1317-4803-9C53-444B45DAD533} Execute SQL Task Description: Failed to acquire connection "Local server connection". Connection may not be configured correctly or you may not have the right permissions on this connection. End Error
Warning: 2008-04-01 12:13:45.09 Code: 0x80019002 Source: OnPreExecute Description: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors. End Warning
Error: 2008-04-01 12:13:45.14 Code: 0xC0024104 Source: Back Up Database (Full) ... The package execution fa... The step failed.
The MKTSQLService account is the account set up for the SQL Server Agent service to use; it is in the Administrators group of the server OS and has been granted SA in SQL 2005.
I've just set up 2 new SQL 7.0 servers, and my new maintenance jobs - backups, optimisations, consistency check jobs etc - are all mysteriously failing. I've created them both with the Maintenance Wizard, and again by hand. I've attempted manual and scheduled runs. All to no avail. Nor do they populate the sysmainthistory table, although they're configured to do so. The mystery is that we have successfully installed some user DTS processes, and THEIR jobs work. In order to resolve the problem, I've:
(1) ensured the Agent service is running;
(2) ensured adequate space on the drive etc;
(3) verified that sqlmaint.exe exists in MSSQL\Binn;
(4) unchecked the 'attempt to repair minor errors' box (I read this was a known cause of the problem);
I've got some clues to go on, but nothing conclusive:
'sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed.'
Executed as user: NT AUTHORITY\SYSTEM. sqlmaint.exe failed. [SQLSTATE 42000] (Error 22029). The step failed.
Additionally, both servers hold databases that I imported from another server - INCLUDING the msdb - and BOTH have the same problems and the same error messages. This cannot be a coincidence.
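One thing worth checking given the imported msdb: jobs carried over that way keep the old server's name as their originating server, and SQLAgent won't run jobs it doesn't think belong to it. A diagnostic sketch:

SELECT name, originating_server FROM msdb.dbo.sysjobs

If the old server name shows up there, the usual cure is to delete and recreate the jobs (or rebuild the maintenance plans) on the new server.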
I have a Database maintenance plan (DMP) that always fails! The plan reorganises data and index pages and checks database integrity. The plan covers several databases, and it always fails on the same database.
The error message (in the DMP history) is the following: "[Microsoft SQL-DMO (ODBC SQLState: 01000)] Error 0: This server has been disconnected. You must reconnect to perform this operation."
The odd thing is that the DMP is locally executed, so I don't see why the network could be an issue here.
Recently I created a new db in production and restored it from test. I have a nightly backup job that backs up all user databases. The backup for this new db is failing. The message in the error log is:

2005-10-04 00:13:47.65 backup BACKUP failed to complete the command BACKUP DATABASE [MTUDD_GEMINI] TO DISK = N'd:\sqldata\MSSQL\BACKUP\MTUDD_GEMINI \MTUDD_GEMINI_db_200510040013.BAK' WITH INIT, NOUNLOAD, NOSKIP, STATS = 10, NOFORMAT

I notice that the file name seems to have a space in it between the database name and the _db part of the backup name, and a space also appears at the end of the directory name. I can perform a backup successfully using the backup task under All Tasks under the database via Enterprise Manager. Anyone seen this? Any suggestions other than backing up the db manually, dropping the database, recreating it, and restoring it from the backup? -- Mark D Powell --
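Since the database was restored from another server, it is worth checking whether the database name itself picked up a trailing space, which would explain the odd path the job builds. A diagnostic sketch (LEN ignores trailing spaces, DATALENGTH does not):

SELECT name, LEN(name) AS chars_without_trailing_space, DATALENGTH(name) / 2 AS chars_actual
FROM master.dbo.sysdatabases
WHERE name LIKE 'MTUDD_GEMINI%'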
Environment: SQL Server 2005 Enterprise Edition x64, 3-server cluster. Two active servers with separate instances and one passive server. SQL Server was installed on the two active servers.
Problem: When I fail over either of my instances to the passive server in the cluster, my maintenance jobs fail to run and there are error messages in the application event log: "SSIS Subsystem failed to load". I am guessing that not all of the needed components are installed on the passive server? Is this a close guess? If so, exactly what components are missing, and do you have to have another license to install them?
One of my programmers changed their database from Full to Simple recovery. I saw that my job that backs up the Full recovery mode databases failed, so I moved that database to my Simple database backup job plan and removed it from the Full recovery job. I am unable to remove the db from the Transaction Log task on the Full plan, because when I try to edit that job it just says "Databases with Simple Recovery will be excluded".
My transaction log backups are still failing with the following error: "The statement BACKUP LOG is not allowed while the recovery model is SIMPLE. Use BACKUP DATABASE or change the recovery model using ALTER DATABASE. BACKUP LOG is terminating abnormally." Possible failure reasons: problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. I just want to remove that database so my Full recovery backup job does not try to back it up.
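If it helps, the recovery models are easy to verify and change from T-SQL on 2005 (database name is illustrative):

-- which databases are in which recovery model?
SELECT name, recovery_model_desc FROM sys.databases;

-- flip one back if it should really have log backups
ALTER DATABASE SomeUserDb SET RECOVERY FULL;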
Maintenance plan for backup is failing with "Login failed for user 'sa'. [CLIENT: <local machine>]".
I went to the Maintenance Plan and opened the Subplan. I clicked the "Manage Connections" button.
It lists the connection in three columns:
Name: Local Server Connection Server: prod Authentication: SQL Server Authentication
I clicked Edit and it shows the Connection Properties:
It says: enter information to log on to the server. "Use a specific user name and password" is checked. The user name is set to "sa" while the password is empty. I typed in the correct password and pressed OK. When I go back again, the password still shows empty. I tried to run the plan and it fails again. Do you know why it keeps showing the password as blank even after I try to save it?
When I use the following code to execute a RAISERROR from within a CLR routine (stored procedure), and I call this CLR stored procedure from T-SQL within a TRY/CATCH block, the error is not caught in the CATCH block. Why is this happening? Is there any way around this? Any help much appreciated.

try
{
    SqlContext.Pipe.ExecuteAndSend(cmd);  // sends the command's results (and errors) straight down the pipe
}
catch
{
    // exception swallowed inside the CLR routine
}
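For context, a minimal T-SQL repro of the symptom, assuming a hypothetical CLR proc dbo.usp_ClrProc built from the code above: an error sent through SqlPipe.ExecuteAndSend and then swallowed in the CLR code goes straight to the client, so the outer CATCH never fires.

BEGIN TRY
    EXEC dbo.usp_ClrProc;  -- the RAISERROR happens inside the CLR routine
END TRY
BEGIN CATCH
    -- never reached: the pipe has already delivered the error to the client
    SELECT ERROR_NUMBER() AS err, ERROR_MESSAGE() AS msg;
END CATCH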
I am looking for some advice and opinions on daily backup routines on SQL 2000 and SQL 2005; I want to know people's best practices for nightly full backups. Currently I have the following in place:
We have a legacy app where some of our web page URLs were hardcoded into the stored procs (SQL 2000 SP4 database). We have since changed the system, replacing the hardcoded strings with a value stored in a config table. We used the following query to identify the hardcoded URLs (say LegacyPage.asp):
select routine_name from information_schema.routines where routine_definition like '%LegacyPage.asp?%'
However, even after this we keep getting issues with LegacyPage.asp being referenced. Tracing the code, I found that there is at least one SP (say spHardCoded) which does not turn up in the query, but does have the string LegacyPage.asp? in its routine definition. When I run the following query:
select routine_name from information_schema.routines where routine_name = 'spHardCoded' and routine_definition like '%LegacyPage.asp?%'
0 rows are returned!
Am I missing something obvious here or is INFORMATION_SCHEMA.Routines metadata not always updated? Is there any way to force the metadata to be updated, before we query it? Is there a better system catalog view which lets me do the same thing? Any help would be really appreciated.
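One likely culprit: ROUTINE_DEFINITION in INFORMATION_SCHEMA.ROUTINES only holds the first 4000 characters of the routine, so a long proc whose reference sits past that point never matches. A sketch that searches the full text in syscomments instead (the definition is chunked across rows there, so in principle a match could still straddle a 4000-character boundary):

SELECT DISTINCT OBJECT_NAME(id) AS routine_name
FROM syscomments
WHERE text LIKE '%LegacyPage.asp?%'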
I'm running W2K3 SP1. The MDAC Configuration Checker reports MDAC 2.8 SP2. The only discrepancy is MSADCO.DLL, where the expected version is 2.82.1830.0 and my actual version is 2.82.2651.0 (svr03_sp1_gdr.060301-1546).
The error message I'm getting when I use the ODBC Administrator dialog to set up a new entry is: "The setup routines for the SQL Server ODBC driver could not be found. Please reinstall the driver."
This is followed by a dialog titled: "Driver's ConfigDSN, ConfigDriver, or ConfigTranslator failed." The body text of the dialog is "Component not found in the registry."
I have a question that I hope someone can clear up for me. I have come across a number of different suggestions on DB maintenance, for example reindexing with the following script:
USE DatabaseName -- enter the name of the database you want to reindex
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
    SELECT table_name FROM information_schema.tables WHERE table_type = 'BASE TABLE'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX(@TableName, '', 90) -- '' = all indexes on the table, fill factor 90
    FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
My question is, doesn't the maintenance plan have this functionality inherent in it when you create the maintenance jobs to reindex? Is there a benefit to scripting things out vs just using the maintenance plan wizard for this sort of thing and any of the items it covers? I came from an Oracle background where this was a no-brainer but I am a bit confused on the choices with SQL Server.
I am testing some maintenance-task SQL commands such as index rebuild, index reorg, update statistics and DB integrity check on a SQL Server 2014 database. This is a new non-production vendor database (DB size 500 GB, log size 25 GB) which eventually will be created in production. Currently it is in the full recovery model and without log backups. The database has a whole lot of indexes. I am just trying to rebuild and reorganize all the indexes (that need it), in addition to trying to get an idea of how long these maintenance tasks will take and the space needed in the log file to complete them. I would like to execute these tasks manually (the first time) to gather the duration and space information. Eventually, I would probably schedule a weekly job to perform this maintenance.
I ran the index rebuild task on the database and noticed that the log file grew by over 50 GBs. I killed the process and truncated and shrunk the log file back down.
1. Do the index rebuild, index reorg, update statistics and DB integrity check commands all use the log file?
2. Does an index reorg have less impact on the log file than an index rebuild?
3. Should a truncate log and shrink log file be performed after these maintenance commands?
4. Should a full database backup be performed after these maintenance commands? Or before the maintenance commands?
I have read and understand that shrinking is not good for the database (could lead to more fragmentation and more data file growth when data is added) and I know about rebuilding indexes when fragmentation is GT 30% and reorganizing indexes when fragmentation is GT 5% and LE 30%.
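As a sketch of that rebuild/reorganize decision rule in T-SQL (2014 syntax; thresholds as above, output names illustrative):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
            ELSE 'leave alone' END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0  -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;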
Since this is a non-production database maybe I should set the recovery model to simple, run the maintenance commands and leave the database in simple recovery model unless the vendor needs it in full recovery model for some unknown reason.
5. With the simple recovery model the log file should be reused in a circular manner and not grow during these maintenance tasks. Is this correct?
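On question 5: essentially yes - in SIMPLE recovery the log truncates at checkpoint and is reused in a circular fashion, although a single large index rebuild is still one transaction and can grow the log on its own. A sketch of the switch (database name illustrative; going back to FULL restarts the log chain, so take a full backup afterwards):

ALTER DATABASE VendorDb SET RECOVERY SIMPLE;
-- ... run the index maintenance here ...
ALTER DATABASE VendorDb SET RECOVERY FULL;
BACKUP DATABASE VendorDb TO DISK = N'X:\Backups\VendorDb.bak';  -- restart the log backup chain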
I ran it from the command prompt. I used my NT account, which belongs to the Domain Admins NT group. My account does have SQL access as sa.
Also, on one of the servers all jobs are failing with the following message: Unable to connect to SQL Server (local). The NT log records the error that the specific user sqlexec (this is the account the SQL Executive runs under) is not defined as a valid user of a trusted SQL Server connection. I am not able to change the security settings on this server using EM, nor am I able to use the SQL Security Manager; I get an access denied error. What is the workaround for this problem? Will stopping and restarting the SQL service help? ------------
How did you run bcp? In a DOS prompt or as a SQL job? Which NT account did you run bcp under? Did you grant SQL access for that NT account?
Yes I did. It still gives me the same error: 18452, error not associated with a trusted connection. -----------------
Did you enable mixed login mode on the server?
------------ aruna at 1/3/01 2:55:59 PM
Hello Ray,
It still does not work. I granted SA rights for the NT group via SQL Security Manager. For one of the servers I get the following error message: This SQL Server does not support Windows NT SQL Server Security stored procedures.
-------------- From: Ray Miao, 1/3/01 12:51:50 PM - bcp over trusted connections failing (reply)
Use Security Manager to grant access for the NT account.
------------ aruna at 1/3/01 11:59:49 AM
I am attempting to bcp using the -T (trusted connection) option in SQL 6.5. The login security mode is set to integrated. The bcp is however failing with msg 18452, error not associated with a trusted connection. Why is this happening? I do not want to hardcode the sa password in the bcp command.
I have an executable on the C drive, and I have created a job to run that executable.

In the job: C:\Folder\job.exe BA

The job was running until we had a power outage. Now I can't get it to run as a scheduled job; the only way I can get it to run is typing it on the command line. I have tried dropping and recreating this job but nothing works.

The error is: The step did not generate any output.

Do I need to troubleshoot the executable, which is a whole other beast?
I have a scheduled job on a SQL 2000 database which is failing. Here is the error message:
The job failed. Unable to determine if the owner (cacisnasir) of job Integrity Checks Job for DB Maintenance Plan 'IDS' has server access (reason: Could not obtain information about Windows NT group/user 'cacisnasir'. [SQLSTATE 42000] (Error 8198)).
I am the SA on the instance, so I wonder why I would be getting this error message. I am able to log on to this instance and browse and change things, so clearly it recognizes me. But when I run the job it fails. Wonder why? My SQL Server version is 8.0.
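The usual fix when SQL Agent cannot resolve a job owner's Windows account against the domain is to re-own the job to a SQL login; a hedged sketch using sp_update_job:

EXEC msdb.dbo.sp_update_job
    @job_name = N'Integrity Checks Job for DB Maintenance Plan ''IDS''',
    @owner_login_name = N'sa';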
Currently I am building an application for a theme park where I work as a trainee for school; one project for me is to rebuild all the hundreds of databases into a few SQL-driven applications. Now I have a problem with the use of SCOPE_IDENTITY(). Because the data has to be correct before inserting it into the database, I use the transaction features of .NET, and I create one SQL string which I dump in that method. The problem is that I can't use the value of SCOPE_IDENTITY() for some reason; maybe you guys see a mistake in the actual (dynamic) query. Here is the query built up by my program to write the data (of a single form) into the database:
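Without the generated SQL it is hard to say, but the classic gotcha is that SCOPE_IDENTITY() must be read in the same scope and batch as the INSERT; if the program sends the INSERT and the SELECT SCOPE_IDENTITY() as separate batches, it comes back NULL. A minimal sketch with illustrative table names:

DECLARE @NewId int
INSERT INTO dbo.Visitors (VisitorName) VALUES (N'test')
SET @NewId = SCOPE_IDENTITY()  -- must run in the same batch/scope as the INSERT
INSERT INTO dbo.Tickets (VisitorId, TicketType) VALUES (@NewId, N'day pass')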
I have some DTS packages that are sometimes failing - one day success and the next day failure. The following error shows:

DTSRun: Executing...
DTSRun OnStart: DTSStep_DTSExecuteSQLTask_3
DTSRun OnError: DTSStep_DTSExecuteSQLTask_3, Error = -2147217900 (80040E14) Error string: OLE DB provider 'SQLOLEDB' reported an error. Error source: Microsoft OLE DB Provider for SQL Server Help file: Help context: 0
Error Detail Records:
Error: -2147217900 (80040E14); Provider Error: 7399 (1CE7) Error string: OLE DB provider 'SQLOLEDB' reported an error. Error source: Microsoft OLE DB Provider for SQL Server Help file: Help context: 0
Error: -2147217900 (80040E14); Provider Error: 7312 (1C90) Error string: [OLE/DB provider returned message: Timeout expired] Error source: Microsoft OLE DB Provider for SQL Server Help file: Help context: 0
DTSRun OnFinish: DTSStep_DTSExecuteSQLTask_3
DTSRun OnStart: DTSStep_DTSExecuteSQLTask_1
DTSRun OnFinish: DTSStep_DTSExecuteSQLTask_1
DTSRu... Process Exit Code 1. The step failed.
This is the message that I'm getting, and I don't know what to do so that I can access my SQL databases through ColdFusion:
ODBC Error Code = 37000 (Syntax error or access violation)
[Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.
I didn't have any problems with this database until I moved it over to another SQL Server and tried the ColdFusion front end against it. I don't know what to do now.
I have inherited the task of setting some standards for SQL Server setup and usage in my company. Use of SA with and without a password was rampant. As I get DTS jobs and VB code changed to use another account, I have been securing the SA account with a password that no one uses. I now get a multitude of failed logins for the SA account on multiple systems from people trying to log on as SA, not jobs. Is there any way to generate an error message that will report the host PC or server, or the network ID of the user trying to log in with the SA account?
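On SQL 2000 the errorlog alone won't tell you who is knocking, but a lightweight server-side trace on the Audit Login Failed event can capture host, application and login; a hedged sketch (file path is illustrative):

DECLARE @TraceID int, @maxfilesize bigint
SET @maxfilesize = 5  -- MB
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\traces\failed_sa_logins', @maxfilesize, NULL
-- event 20 = Audit Login Failed; columns: 8 = HostName, 10 = ApplicationName, 11 = LoginName
EXEC sp_trace_setevent @TraceID, 20, 8, 1
EXEC sp_trace_setevent @TraceID, 20, 10, 1
EXEC sp_trace_setevent @TraceID, 20, 11, 1
EXEC sp_trace_setfilter @TraceID, 11, 0, 0, N'sa'  -- only rows where the login is 'sa'
EXEC sp_trace_setstatus @TraceID, 1  -- start the trace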
When I create a DTS package to import data from Visual FoxPro, it works if I run it immediately, but when I schedule it to run at a specific time it fails. Any ideas why?
I have a table with a field called remarks as a text field. I have a trigger on it:

Create Trigger trg_inbox_bess506a_mstr_on_del On dbo.inbox_bess506a_mstr
For Delete
As
-- 040226, archive inbox to arc
set nocount on
insert into inbox_bess_mstr_arc (
    pk_id, batch_id, py, appropriation, issueFrom, issueTo,
    submitBy, submitDate, validID, validDate, approveDate,
    approveBy, accountCode, transType --remark
)
select
    pk_id, batch_id, py, appropriation, issueFrom, issueTo,
    submitBy, submitDate, validID, validDate, approveDate,
    approveBy, accountCode, transType --remark
from deleted
return
GO
It fails with an error message: "Server: Msg 21, Level 22, State 1, Procedure trg_inbox_bess506a_mstr_on_del, Line 8 WARNING - Fatal Error 7113 occurred at Dec 22 2004 11:25PM. Please note the error and time, and contact your System Administrator."
It's failing on a field with remarks greater than 1885 chars.
When I used a stored procedure to do the same, it worked. Why is the trigger failing now? Is there a limit on size for triggers and not procedures?
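On the size question: it is probably not trigger size but the text column. In SQL 2000 an AFTER trigger cannot read text/ntext/image columns from the inserted/deleted tables (presumably why remark is commented out above), and oversized text rows can surface fatal chain errors like the 7113 here. One hedged workaround is an INSTEAD OF DELETE trigger, which can still join back to the base table to pick up the text column before doing the delete (a sketch, with an abbreviated column list):

Create Trigger trg_inbox_bess506a_mstr_on_del2 On dbo.inbox_bess506a_mstr
Instead Of Delete
As
set nocount on
-- the rows still exist in the base table here, so the text column is readable
insert into inbox_bess_mstr_arc (pk_id, batch_id, remark)
select m.pk_id, m.batch_id, m.remark
from dbo.inbox_bess506a_mstr m join deleted d on d.pk_id = m.pk_id
-- now perform the actual delete
delete m
from dbo.inbox_bess506a_mstr m join deleted d on d.pk_id = m.pk_id
GO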
The DTS package would execute and immediately fail. A reboot of this server fixed the problem, but does anyone know how to get more info out of DTS to state why it failed? We have branch-on-error and NT event log entries, but nothing specific to state why. The first task is to assign global variables, but I'm not even sure it got that far.
Obviously the problem is fixed now, but if it happens again, some ideas of how to get data out would be useful.
Hello. I have two tables that have the same data in them, but not all the data is in the new table; the old one has 397 more records than the new one, and I need to insert that data into the new table, but it keeps giving me a primary key violation.
SELECT dbo.Revised_MainTable.[IR Number], dbo.Report.[Incident Report No], dbo.Report.Date,
       dbo.Report.[I/RDocument], dbo.Report.TypeOfIncident
FROM dbo.Revised_MainTable
RIGHT OUTER JOIN dbo.Report ON dbo.Revised_MainTable.[IR Number] = dbo.Report.[Incident Report No]
WHERE (dbo.Revised_MainTable.[IR Number] IS NULL)
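If the goal is just to move the missing 397 rows without tripping the primary key, the same anti-join can drive the INSERT (a sketch; the column list is illustrative and has to match the new table):

INSERT INTO dbo.Revised_MainTable ([IR Number], [Date], [I/RDocument], TypeOfIncident)
SELECT r.[Incident Report No], r.[Date], r.[I/RDocument], r.TypeOfIncident
FROM dbo.Report r
LEFT OUTER JOIN dbo.Revised_MainTable m ON m.[IR Number] = r.[Incident Report No]
WHERE m.[IR Number] IS NULL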
I have a SP that basically copies data from one table to another. Some of the data could be duplicates and so the SP detects any primary key violations (error 2627) and if detected uses a random number for the PK and tries the insert again.
This SP works fine when run manually from Management Studio but when scheduled as a job step, it fails. From investigation, it seems that the logic to handle PK violations is being processed but if there are more than around 16 PK violations in the batch copy, the job step fails at around the 17th violation insert and fails to process the rest of the step.
When this happens, as well as seeing the 2627 error logged in the message field of the job log history, it also records an error code 3621 in the SQL Message ID field of the log with Severity 14.
Does anyone know why this SP should fail as a job? I have checked permissions and also tried setting the agent login and job owner to the same account that successfully ran the SP in Management Studio, but this also failed.
At present, the only way to get this job to run is to set the step retry attempts to a number greater than the number of failures. Each time the job is rerun, it will process a certain number before failing, and it only fails after processing a certain number of PK violations. This workaround is fine in a test environment of a few hundred records, but this job needs to process roughly 75,000 records, and if all of these happened to be duplicates it would require over 4,500 retries, assuming it fails after every 16 records.
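As an aside, one way to sidestep the error-count behaviour entirely is to never raise the 2627 in the first place: filter the duplicates out in the INSERT itself, so no error needs handling (a sketch with illustrative names):

INSERT INTO dbo.TargetTable (pk_col, payload_col)
SELECT s.pk_col, s.payload_col
FROM dbo.SourceTable s
WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.pk_col = s.pk_col)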