I just ran into an issue with cascading locks due to a SPID on one of my production servers. When researching the lock, I noticed that there was no SQL text. Neither sp_who2 nor the following query captured anything.
I spoke to the user causing the lock; he had hit a Visual Basic error when this occurred and didn't close out that window. So my guess is that it's due to an uncommitted transaction. However, shouldn't I still see something if that were the case?
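For what it's worth, a sleeping SPID with an open transaction has no active request, so sp_who2 and sys.dm_exec_requests show nothing for it; a sketch like the one below (session id 57 is a placeholder) can usually still pull the last statement the connection sent:

-- Last statement sent on the connection, even when the session is idle,
-- plus whether it still has an open transaction
SELECT s.session_id, s.status, st.transaction_id, t.text
FROM sys.dm_exec_sessions s
JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
LEFT JOIN sys.dm_tran_session_transactions st ON st.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
WHERE s.session_id = 57;   -- placeholder SPID

-- Or, old-school:
DBCC INPUTBUFFER (57);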
We have a database with a table that contains around 180m records. Each day a further 70k are inserted. No records are ever deleted, as this table is used for archiving only. Users are required to perform SELECTs on this table constantly, but due to the high number of INSERTs the indexes become very fragmented very quickly. My aim is to avoid the daily index rebuilds that our software house is telling us we have to do.
This is the DDL for the table:
CREATE TABLE [dbo].[Inventory](
    [EAN] [bigint] NOT NULL,
    [Day] [smalldatetime] NOT NULL,
    [State] [int] NOT NULL,
    [Quantity] [int] NULL,
    [StockValue] [float] NULL,
    CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED
[code]...
There are also three non-clustered indexes on this table, each referencing a single column. The problem from my side is that I cannot understand why the three columns in the primary key would also be configured as non-clustered indexes. My solution would be one of the following:
1. Accept the tables are going to be fragmented and require a daily rebuild (don't like this one!)
2. Partition the table (see the sketch after this list)
3. Remove the non-clustered indexes and let the clustered index on the primary key do the work.
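For option 2, a minimal partitioning sketch might look like the following; the names and boundary dates are placeholders, and it assumes the clustered primary key is on (EAN, Day, State) as described above. Monthly ranges on [Day] mean the daily inserts only ever touch, and fragment, the newest partition:

-- Hypothetical monthly partitioning on [Day]; new boundaries would be added over time
CREATE PARTITION FUNCTION pf_InventoryDay (smalldatetime)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME ps_InventoryDay
AS PARTITION pf_InventoryDay ALL TO ([PRIMARY]);

-- The clustered primary key would then be rebuilt on the scheme, e.g.
-- CONSTRAINT [PK_Inventory] PRIMARY KEY CLUSTERED ([EAN], [Day], [State])
--     ON ps_InventoryDay ([Day])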
I have to disable newly implemented database encryption; it's a necessity, unfortunately. Can I do this during production hours without much of a hit? I know I have to restart the instance after it's done. Can I expect performance impacts or other issues?
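Assuming this is Transparent Data Encryption, the usual order of operations is roughly the sketch below (database name is a placeholder). The background decryption scan adds I/O while it runs, so a quieter window is kinder to the server:

ALTER DATABASE [YourDb] SET ENCRYPTION OFF;

-- encryption_state = 5 means decryption in progress, 1 means fully decrypted
SELECT DB_NAME(database_id) AS dbname, encryption_state, percent_complete
FROM sys.dm_database_encryption_keys;

-- Only once the state reaches 1:
-- USE [YourDb]; DROP DATABASE ENCRYPTION KEY;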
I want to set up a database role so that users can use sp_readerrorlog through SSMS. It does a check on membership in the securityadmin role.
I have tested it and can see that you can grant EXECUTE on xp_readerrorlog, but the SSMS GUI uses sp_readerrorlog.
I thought I could create a user/certificate and add a signature to sp_readerrorlog, but that isn't permitted (likely because it's not a normal database object).
So the other solution is to add the users to the securityadmin role and then explicitly deny ALTER ANY LOGIN (best done with a custom server role in 2012+, but otherwise just manually in 2008). I tested this out and it works: I'm not able to alter any logins or increase my own permissions. I also checked what's reported by fn_my_permissions(null, null), and it shows minimal permissions, as I'd expect.
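For reference, a sketch of the 2012+ custom-server-role variant described above; the role and login names are placeholders:

CREATE SERVER ROLE ErrorLogViewers;

-- Membership of securityadmin satisfies sp_readerrorlog's permission check
ALTER SERVER ROLE securityadmin ADD MEMBER ErrorLogViewers;

-- ...but the role itself is explicitly blocked from touching logins
DENY ALTER ANY LOGIN TO ErrorLogViewers;

ALTER SERVER ROLE ErrorLogViewers ADD MEMBER [DOMAIN\ReportUser];  -- hypothetical login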
Over the weekend I decided to give it the ability to do a case-sensitive character swap. Updating the code was pretty straightforward, but when I was through, I noticed that I was getting Cardinality Estimate warnings that I wasn't getting before.
Anyway, here is some test data and two versions of the executed SQL (the base code is all dynamic and the two code versions are the result of toggling the @MatchCase parameter).
/* ========================================
   CREATE TABLE
   ======================================== */
CREATE TABLE [dbo].[PersonInfoSmall](
    [PersonID] [BIGINT] NOT NULL,
    [FirstName] [NVARCHAR](50) NOT NULL,
    [MiddleName] [NVARCHAR](50) NULL,
    [LastName] [NVARCHAR](50) NOT NULL,
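The full dynamic SQL isn't reproduced above, but the shape of the two @MatchCase variants is presumably something like the sketch below; the explicit COLLATE applied to the column on the case-sensitive branch forces a conversion, which is the typical trigger for the "Type conversion ... may affect CardinalityEstimate" plan warning:

-- @MatchCase = 0: the column's default case-insensitive collation, no conversion
SELECT PersonID, REPLACE(FirstName, N'a', N'b') AS Swapped
FROM dbo.PersonInfoSmall
WHERE FirstName LIKE N'%a%';

-- @MatchCase = 1: an explicit case-sensitive collation; the COLLATE on the column
-- is what tends to raise the CardinalityEstimate warning
SELECT PersonID, REPLACE(FirstName COLLATE Latin1_General_CS_AS, N'a', N'b') AS Swapped
FROM dbo.PersonInfoSmall
WHERE FirstName COLLATE Latin1_General_CS_AS LIKE N'%a%';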
We have a batch job that runs for about 45 minutes and references a couple of tables in its queries. On one of the tables, DD (a dummy name), we want to take a shared lock so that other sessions can read the table but no session can write to it, so at the beginning of the transaction/batch job we take a SHARED table-level lock on this table. Now the locking behavior is strange. If someone wants to read the table (IS on table DD), they work fine, but as soon as a DML statement wants to modify the table and is blocked trying to get an IX lock on DD, other sessions that just want to read the table and need an IS lock get blocked too, so the system is essentially hung after that. I tried the same scenario on SQL Server 2005 and it works fine, so this looks like a bug in SQL Server 2000, unless Microsoft considers it "as designed" locking strategy. We can use the following queries to reproduce the issue:
From QA Window 1, run the following code:
create table dd (dd int primary key, dd1 int,dd2 int)
set nocount on
declare @i int
begin
    set @i = 1
    while (@i < 10000)
    begin
        insert into dd values (@i, @i + 10000, @i + 100000)
        set @i = @i + 1
    end
end
Now we have a table with about 26 pages and 10k rows.
Now we run the following code from the same window (don't commit at this point):
Begin Tran
select 1 from dd with (tablock, holdlock)
And we will see the following using sp_lock:
57 7 850818093 0 TAB S GRANT
Now, in Window 2 in QA, run the following code:
begin tran
insert into dd values (10000, 10001, 100001)
And we will see that the session in Window 2 is waiting for an IX lock:
56 7 850818093 0 TAB IX WAIT
57 7 850818093 0 TAB S GRANT
Now we run the following code from Window 3; we are just reading the table, and it still gets blocked:
select * from dd where dd=99
56 7 850818093 0 TAB IX WAIT
57 7 850818093 0 TAB S GRANT
66 7 850818093 0 TAB IS WAIT
Since this is fixed in SQL Server 2005, we are wondering if this is a known issue and whether any hotfix is available, or whether Microsoft considers it "as designed". What is the alternative to block writers on this table and still allow readers while this batch job is running?
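Two possible workarounds on 2000 (sketches only, not anything Microsoft prescribes) would be for readers either to bypass shared locks entirely, or to fail fast and retry instead of queueing behind the waiting IX request:

-- 1. Readers take no shared locks, so they never queue behind the blocked IX:
select * from dd with (nolock) where dd = 99

-- 2. Or readers give up quickly and can retry, instead of hanging:
set lock_timeout 2000   -- milliseconds
select * from dd where dd = 99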
I am using SQL 2005 with Reporting Services. I have very basic reports; most users can print fine with the print icon in the toolbar, but some users get a blue screen and have to reboot. They have rsclientprint.dll loaded in the browser, and we have deleted and reloaded it. What do I do?
I have a SQL snippet from a 3rd-party application that will not complete its transaction. The SELECT statement executes but the batch does not finish. Instead the session just sits in AWAITING COMMAND for 1000 seconds and then dies, thus killing the UPDATE statement that is supposed to follow.
CREATE TABLE [Sales].[Test_inmem] (
    [c1] [int] NOT NULL,
    [c2] [nvarchar](20) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [ModifiedDate] [datetime2](7) NOT NULL CONSTRAINT [IMDF_Test_ModifiedDate] DEFAULT (sysdatetime()),
[Code] ....
I have to generate 1,000,000 random records into it. I tried various ways to insert records but, not being a developer, could not do it. I want c1 to be a serial number, c2 can be anything, and c3 I want to be the timestamp.
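A rough sketch for the load, assuming just the three columns shown above: cross-joining a catalog view against itself gives enough rows to number off a million values, with c1 as the sequence, c2 as throwaway text, and the current time as the timestamp.

;WITH n AS (
    SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
INSERT INTO Sales.Test_inmem (c1, c2, ModifiedDate)
SELECT rn,                                     -- serial number
       N'Row ' + CAST(rn AS nvarchar(10)),     -- arbitrary text, fits nvarchar(20)
       SYSDATETIME()                           -- load timestamp
FROM n;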
I need a script that inserts the data from an Excel sheet into a table. If a row already exists it should be left alone, unless it has been edited in the Excel sheet, and so on. This process has to go through a stored procedure... but how?
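A rough sketch, not the actual procedure: assume the spreadsheet has first been loaded into a staging table (for example with the Import Wizard or OPENROWSET), here called dbo.ExcelStaging; MERGE then inserts new rows and updates edited ones. The target, staging, key, and value column names are all hypothetical.

CREATE PROCEDURE dbo.usp_LoadFromExcelStaging
AS
BEGIN
    SET NOCOUNT ON;

    MERGE dbo.TargetTable AS t                 -- hypothetical target table
    USING dbo.ExcelStaging AS s                -- hypothetical staging table
          ON t.BusinessKey = s.BusinessKey     -- hypothetical key column
    WHEN MATCHED AND t.SomeValue <> s.SomeValue
        THEN UPDATE SET t.SomeValue = s.SomeValue
    WHEN NOT MATCHED BY TARGET
        THEN INSERT (BusinessKey, SomeValue)
             VALUES (s.BusinessKey, s.SomeValue);
END;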
One process ( a service ) inserts data into this table , as well as updates certain fields of the table periodically.
Another process ( SQL Job ) updates the table with certain defaults and rules that are unknown to the service - to deal with some calculations and removal of null values where we can estimate the values etc.
These 2 processes have started to deadlock each other horribly.
The SQL Job calls one stored procedure that has around 10 statements in it. This stored proc runs every minute. Most of the statements are of the form below; the idea is that once this has corrected the data, the update will not affect those rows again. I guess there are read locks on the selecting part of this query, but it usually updates 0 rows, so I am wondering whether locks are still taken?
UPDATE s
SET equivQty = Qty * ISNULL(p.Factor, 4.5) / 4.5
FROM Stock s
LEFT OUTER JOIN Pack p
    ON s.Product = p.ProductId
   AND s.Pack = p.PackId
WHERE ISNULL(equivQty, 0) <> Qty * ISNULL(p.Factor, 4.5) / 4.5
The deadlocks are always between these statements from the stored procedure - and the service updating rows. I can't really see how the deadlocks occur but they do.
Another suggestion has been to try using an EXISTS check before the update, as below:
IF EXISTS ( SELECT based on the above criteria )
BEGIN
    UPDATE as before
END
Does this reduce the locking at all? I don't know how to test the theory; I added this code to some of the statements, and it didn't seem to make much difference.
Is there a way to make a process (in my case the stored procedure) give up if it can't acquire the locks, rather than being deadlocked, which leads to job failures and emails etc.?
We are currently trying to filter the data that is updated down to only the last few months, to reduce the number of rows even analyzed, as the deadlocking does seem to be affected by the number of rows in the tables.
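On the "give up rather than deadlock" question, one pattern worth testing (a sketch, not the existing procedure) is to make the job the preferred deadlock victim, cap how long it waits for locks, and swallow just those two errors so the Agent job doesn't fail and email:

SET DEADLOCK_PRIORITY LOW;
SET LOCK_TIMEOUT 5000;   -- give up after 5 seconds instead of queueing

BEGIN TRY
    UPDATE s
    SET equivQty = Qty * ISNULL(p.Factor, 4.5) / 4.5
    FROM Stock s
    LEFT OUTER JOIN Pack p
        ON s.Product = p.ProductId
       AND s.Pack = p.PackId
    WHERE ISNULL(equivQty, 0) <> Qty * ISNULL(p.Factor, 4.5) / 4.5;
END TRY
BEGIN CATCH
    -- 1222 = lock request timeout, 1205 = deadlock victim: skip this run of the job
    IF ERROR_NUMBER() NOT IN (1205, 1222)
    BEGIN
        THROW;   -- re-raise anything unexpected (THROW needs SQL 2012+)
    END
END CATCH;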
I have two tables being inserted into in one transaction scope. Table one has 10 rows. After the first table's insert statement (not yet committed), if I run a SELECT on the first table from another session, it blocks until my insert is committed or rolled back; from SSMS, it displays 10 rows and then waits for the transaction scope to finish. My question is: do I need to use a NOLOCK hint in this situation, or is there something wrong with the isolation level? One opinion is that in this situation the table should not hold up a SELECT while the insert is inside the transaction scope.
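One alternative to scattering NOLOCK hints, if dirty reads aren't actually wanted, is a sketch like this (database name is a placeholder): with read-committed snapshot on, readers see the last committed version of the rows instead of blocking behind the open insert transaction.

ALTER DATABASE [YourDatabase]
    SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;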
If I install an instance with Windows-only authentication and then change it to Mixed Mode, when I enable the sa login the password has already been set. What is the default? Is the password generated, and if so, how secure is it? What algorithm is used for that?
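Whatever value setup stored, the practical step (a sketch, with a placeholder password) is to overwrite it with a known strong password at the moment the login is enabled:

ALTER LOGIN sa WITH PASSWORD = N'<a strong password here>';  -- placeholder
ALTER LOGIN sa ENABLE;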
My SQL databases in SQL Server 2014 have the status "suspect", as I saw in SQL Server Management Studio. I can't restore the databases to a serviceable condition through standard procedures. I need to recover from the .mdf file.
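If there really is no usable backup, the usual last-resort sequence for a suspect database is roughly the sketch below (database name is a placeholder). REPAIR_ALLOW_DATA_LOSS can discard data, so copy the .mdf/.ldf files somewhere safe first.

ALTER DATABASE [YourDb] SET EMERGENCY;
ALTER DATABASE [YourDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB (N'YourDb', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [YourDb] SET MULTI_USER;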
I am using a monitoring system where I can monitor a numeric SQL result, assuming the result is one field and one row. I would like to use this to monitor, say, the free available space or percentage on the master database. DBCC SQLPERF gives me a few columns and results for all databases on the server.
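A sketch of a single-row, single-column query the monitor could run instead of DBCC SQLPERF, here returning the free space in the master data file in MB ('master' is the default logical file name; swap it for the file you care about):

USE master;
SELECT CAST(size / 128.0 - FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS decimal(10, 2)) AS FreeSpaceMB
FROM sys.database_files
WHERE name = N'master';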
In our environment, applications use a DNS name which points to the physical server's IP address. Now we are planning to move to 2014. We are planning to have servers in different subnets, so we will have two IP addresses for the listener. How can we point the DNS name to the listener IPs? If a failover happens, can DNS point to the exact listener IP address for the current primary node?
"Process 0:0:0 (0x1e10) Worker 0x00000006B6D341A0 appears to be non-yielding on Scheduler 13. Thread creation time: 12906028806348. Approx Thread CPU Used: kernel 0 ms, user 0 ms. Process Utilization 13%. System Idle 84%. Interval: 70189 ms."
Is it better to run Profiler or performance counters?
What filters do we have to select in Profiler to monitor the SQL Server?
I have a SQL server box running 2014 reporting services. I have another server running IIS v8.
I would like to be able to connect to the IIS site and be given the SSRS report browser.
So externally if I browse to [URL], I am presented with the report server interface, the same as if I browse to http://xxx.xxx.xxx.xxx/reports internally.
What is the best approach for a read-only copy of a database that is ~1 TB? The primary database is fed nightly with an ETL process. We are currently trying to duplicate the ETL to the read-only server, but that process is not going well, so we are looking at other options to let SQL Server make the copy.
The primary database is on a Win12R2 with SQL 12 or 14, a 2 node A/P failover cluster.
The read only copy will be on a Win12R2 with SQL 12 or 14. It is not a requirement to fail over to the read only copy if the primary should go down.
What would be the best approach to accomplish the end result?
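One option to let SQL Server make the copy (a sketch, with hypothetical names and paths) is log shipping done by hand: restore the nightly full backup on the reporting server WITH STANDBY, which leaves the copy read-only but queryable, and lets later log backups be applied the same way.

RESTORE DATABASE [WarehouseCopy]
FROM DISK = N'\\backupshare\Warehouse_full.bak'
WITH STANDBY = N'D:\SQLData\WarehouseCopy_undo.dat',
     MOVE 'Warehouse_Data' TO N'D:\SQLData\WarehouseCopy.mdf',   -- hypothetical logical names
     MOVE 'Warehouse_Log'  TO N'D:\SQLData\WarehouseCopy.ldf',
     REPLACE;

An Availability Group readable secondary would be the other common route, but that does add the HA machinery you said you don't need.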
I have 10 databases which are configured as principal in mirroring. I need to fail over all of the databases as part of a failover; instead of writing a SET PARTNER FAILOVER query for each database, is there a script which will generate the failover statements for the databases that are currently principal?
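A sketch of such a generator: run it on the principal instance, then copy and execute the statements it returns (SET PARTNER FAILOVER has to be issued on the principal).

SELECT N'ALTER DATABASE ' + QUOTENAME(d.name) + N' SET PARTNER FAILOVER;'
FROM sys.database_mirroring AS m
JOIN sys.databases AS d ON d.database_id = m.database_id
WHERE m.mirroring_role_desc = N'PRINCIPAL';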
After installing SSMS on some computers, the only way we can get SSMS to run correctly is to run it as administrator. Is there a way around having to do that? These end users are logging in as themselves and have accounts in SQL Server all set up, but SSMS will only launch for them if we right-click and select "Run as administrator". After doing some digging, it seems that this is a common problem out there.
I have a SQL 2014 install and cannot for the life of me get the maintenance plan to remove old backups. I've tried everything. Rights to the folder where the backups are stored are adequate, the extension set in the cleanup task is as it should be, etc. The log shows the job ran successfully. Running the command manually shows successful completion, but the backups are still not removed.
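For comparison, this is roughly the call the cleanup task generates under the covers (folder and date are placeholders); a common gotcha is that the extension has to be given without the leading dot:

EXECUTE master.dbo.xp_delete_file
    0,                          -- 0 = backup files, 1 = maintenance report files
    N'D:\Backups\',             -- hypothetical backup folder
    N'bak',                     -- extension, no leading dot
    N'2016-01-01T00:00:00',     -- delete files older than this
    1;                          -- include first-level subfolders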
When I execute a RESTORE LOG command, the Messages window shows how many seconds the restore takes; at the same time, the status bar also shows how many seconds the command takes.
The two values are different and can be very different; see the examples below: one restore takes 1.8 seconds but the command takes 4 seconds in total to complete, and the other is 8.1 seconds versus 12 seconds.
What does SQL Server or Windows do after the restoring?
pic a:
pic b:
I did an xperf trace. I can see that after the restore is completed, SQL Server did garbage collection and log writes, which ran very quickly, but storage is busy reading the log file for nearly 2.2 seconds (4 - 1.8) and 4 seconds (12 - 8.1).
pic 1:
pic 2:
See pic 1 above: from 13 to 17, the restore operation is finished, but the storage jumps to 100% active to do some reads, only reads, no writes. Zooming into that period shows pic 2: it reads 4096 (I don't know the unit size) for about 4 seconds. What does this do?
The data file, log file, and backup file are not on different drives; they are all on the same local drive. The interesting point is that the reads jump after the restore. I tested it on a different server, same result...
I've got an old version of SQL Server 2008 R2 Developer Edition on an old PC which is failing. I've got a new PC and have put SQL Server 2014 Developer Edition onto it. Now, before the old machine completely dies, I've gotten into SSMS on the old machine and done a backup of the databases I want to save. I've moved the .BAK files to where I can get to them from SSMS on the new machine. I've gotten into SSMS and tried to restore the database to my new machine. However, I'm getting an error that does not make any sense to me.
The database I've backed up is named JobSearch. When I backed it up, that was the only database I had selected. Like I said, I copied the .BAK to the new machine, got into SSMS, told it I wanted to restore the JobSearch database and where I wanted to put it, and it then immediately fails with:
"Restore of database 'JobSearch' failed. System.Data.SqlClient.SqlError: Logical file 'VideoLibrary_Data' is not part of the database 'JobSearch'. Use RESTORE FILEISTONLY to list the logical file names."
Well, of course VideoLibrary isn't "the logical file". But neither did I select VideoLibrary (which is a database I also want to move, but I'm doing one at a time). So what in heck is going on here? Why is it complaining about a database I haven't even selected to back up? Why, when I check everything on the old machine, is it backing up JobSearch, but on the new machine it sees VideoLibrary?
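A diagnostic sketch (path is a placeholder): a single .bak file can hold several backup sets, for example if VideoLibrary was ever backed up to the same file, and the restore may be reading a different set than expected. These two commands show what is actually inside the file and which logical files each set carries:

RESTORE HEADERONLY   FROM DISK = N'C:\Backups\JobSearch.bak';
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\JobSearch.bak' WITH FILE = 1;

-- If the JobSearch set turns out to be at, say, position 2, point the restore at it:
-- RESTORE DATABASE JobSearch FROM DISK = N'C:\Backups\JobSearch.bak' WITH FILE = 2, ...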