Is it good practice to use #tables in tempdb, created with SELECT INTO statements, as working tables in large, multi-step stored procedures? Large in that there are hundreds of thousands of rows involved in joins, updates, etc. Or is it better practice to bite the bullet and explicitly create/drop such working tables in the actual database? What are the opinions of the experts at hand?
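For concreteness, here is a minimal sketch of the two patterns being compared (the table and column names are invented for illustration):

-- Pattern 1: tempdb work table created on the fly with SELECT INTO.
SELECT o.OrderID, o.CustomerID
INTO #WorkOrders
FROM dbo.Orders AS o
WHERE o.OrderDate >= '20150101';

-- Pattern 2: explicit create/drop of a permanent working table in the user database.
CREATE TABLE dbo.WorkOrders (OrderID INT NOT NULL, CustomerID INT NOT NULL);

INSERT INTO dbo.WorkOrders (OrderID, CustomerID)
SELECT o.OrderID, o.CustomerID
FROM dbo.Orders AS o
WHERE o.OrderDate >= '20150101';

-- ...multi-step work against dbo.WorkOrders...
DROP TABLE dbo.WorkOrders;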
Since upgrading from SQL Server Management Studio 2008 R2, I've noticed that it no longer autosaves queries that have not been manually saved first. If a file has been manually saved, the autorecover files end up in the following directory:
%appdata%\Microsoft\SQL Server Management Studio\11.0\AutoRecover\Dat\Solution1
However, I have ended up in the situation where I have unsaved queries when my computer has crashed and have not been able to recover them.
I have also found references to .sql files stored as temp files in the following directory, but the files there seem to be captured very haphazardly:
So I've been monitoring long-running transactions on a SQL Server that hosts a couple of vendor-supplied databases that look after our factory. Today I noticed a pair that have confused the Excel spreadsheet I've been using to analyze these transactions. So here's the weird thing that I spotted. Given this query:
SELECT p.spid,
       p.login_time,
       at.transaction_begin_time,
       DATEDIFF(second, p.login_time, at.transaction_begin_time) AS [difference]
FROM sys.sysprocesses AS p
INNER JOIN sys.dm_tran_session_transactions AS st ON st.session_id = p.spid
INNER JOIN sys.dm_tran_active_transactions AS at ON st.transaction_id = at.transaction_id
[code]....
I had a look in the event log on the server, which had just been rebooted at around that time. It seems that the clock got changed on boot-up, and the size of the change was quite surprising. This meant that these processes were able to start their transactions *before* they logged on. Hopefully this doesn't cause any other weird problems. So I've requested an investigation into time synchronization on our virtualization hosts... and in the meantime, have set the SQL Server services to 'delayed start'.
We have an app server where Microsoft Reporting Services (Report Builder 2005) is hosted, and a backend database server which is used by Reporting Services.
The clients create very complex reports (usually towards month end) and run them against the database, causing tempdb to grow exponentially, leading to performance degradation and, in the worst case, space issues. Our only solution is to reboot the server.
There must be many systems out there with scenarios similar to mine. How do you handle this scenario? Usually my tempdb is below 15GB, but it has grown to 60GB in some instances.
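One way to get a handle on what is actually consuming the space before deciding on a fix is a breakdown like the sketch below: it shows whether report sorts/hashes (internal objects), temp tables (user objects) or the version store are driving the growth.

-- Breakdown of current tempdb consumption by category, in MB (8 KB pages).
SELECT SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,
       SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_mb
FROM tempdb.sys.dm_db_file_space_usage;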
I'm trying to find some feedback regarding putting the tempdb files on a RAM disk. Specifically I am looking for "production results" that could show the difference/benefit of such a setup. The tests I have already made on a physical server and a VM have shown a boost in overall SQL Server 2012 performance on instances housing data for SharePoint 2013 and Dynamics AX 2012 R2. The graphic below shows the differences between 5 different configurations on the same physical server:
- Physical HD: server with local HD
- Physical SANEX1PRD: server with TempDB files stored on a low-end SAN
- Physical SAN1: server with TempDB files stored on a high-end SAN (around 100000 IOps)
- Physical SAN1 Jumbo: same setup with Jumbo Frames activated on the NIC and DB engine
- Physical RAMdrive: TempDB files stored on a 16 GB soft RAM drive within OS memory
Results were really impressive for the DB engine housing Dynamics AX data. My colleagues from the SharePoint team told me it also boosted overall SharePoint performance a bit, but they did not have any baseline comparison to show. If you have some feedback, results, links, whatever, I am interested. Indeed, before rolling this out to all our SQL Server 2012 instances I would rather collect some *real world* feedback.
I have this job that runs and kills blocking sessions that have been blocking for longer than 10 minutes. I need to add a list of logins that are excluded from this job, but I cannot seem to get it to exclude specific users in this script. Below is the script.
SET NOCOUNT ON

-- Table variable to hold InputBuffer data
DECLARE @Inputbuffer TABLE
(
    EventType NVARCHAR(30) NULL,
    Parameters INT NULL,
    EventInfo NVARCHAR(4000) NULL
)
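Not the original script, but a sketch of the missing filtering step (the login names are placeholders): keep the exclusions in a table and skip them when selecting candidate sessions to kill.

-- Hypothetical exclusion list; replace with the real logins to protect.
DECLARE @ExcludedLogins TABLE (login_name NVARCHAR(128) NOT NULL);
INSERT INTO @ExcludedLogins (login_name)
VALUES (N'sa'), (N'DOMAIN\etl_service');

-- Candidate sessions: user processes whose login is not on the exclusion list.
SELECT s.session_id
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
  AND s.login_name NOT IN (SELECT e.login_name FROM @ExcludedLogins AS e);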
I use Management Studio on the SQL server. Each time I want to run scripts over new data, I have to delete the old data in the database and import the new data (from CSV files into dbo tables). These are the same files every time, except that the data changes. Is it possible to make an automated process for this import?
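A minimal sketch of one common approach (the path, table name and CSV layout are assumptions): truncate and re-import with BULK INSERT, which a SQL Server Agent job can then run on a schedule.

-- Clear last run's data, then reload from the same file location.
TRUNCATE TABLE dbo.ImportTarget;

BULK INSERT dbo.ImportTarget
FROM 'C:\Imports\data.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);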
In tweaking the performance of tempdb by adding additional data files, I now want to reset back to defaults and remove all the additional files I created. I was not able to do it for most of them as they were in use, but by starting the server in single-user mode with all other SQL services shut off, and using sqlcmd, I was able to use ALTER DATABASE tempdb REMOVE FILE <tempdev#> to remove the files... except for one.
I restarted SQL Server and tried the ALTER DATABASE ... REMOVE FILE again, but am always denied with the message that the file can't be removed because it's still in use.
I also tried to shrink it with EMPTYFILE but that also fails with the message that a page is a work table page and can't be removed.
I really need to get tempdb back to just one data file but am unable to find a way to remove this last data file.
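A commonly suggested sequence for this situation (the file name below is a placeholder): cached plans and system caches can pin work-table pages in a tempdb file, so clearing them and then emptying the file in the same session sometimes lets the REMOVE FILE succeed.

-- Clear caches that can hold work-table pages, then empty and drop the file.
DBCC FREEPROCCACHE;
DBCC FREESYSTEMCACHE ('ALL');
DBCC SHRINKFILE (tempdev9, EMPTYFILE);
ALTER DATABASE tempdb REMOVE FILE tempdev9;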
On one of the servers, tempdb is not releasing reserved space after data loads complete. As of now, 99% free space is available in the data file; we tried to shrink the data file, and the space has not been released.
Could a simple update statement on a user database ever cause space usage in tempdb, assuming the update statement fires no triggers and doesn't use any temp tables?
IE, in user DatabaseA:

UPDATE TableX SET col1 = X
Reason I ask is tempdb filled up and the only thing I could see running at that time was the update statement.
INSERT INTO Query_results (login_name, total_elapsed_time)
SELECT login_name, total_elapsed_time
FROM sys.dm_exec_sessions
I need to then kill all sessions at 11:59pm and log all those that are killed, so that I can schedule a job at that time; I have sessions that are blocking my job.
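Not the poster's job, but a minimal sketch of the log-then-kill step (it assumes Query_results exists with the two columns used above; KILL does not accept a variable, hence the dynamic SQL):

DECLARE @spid INT, @sql NVARCHAR(20);

-- All user sessions except our own.
DECLARE spid_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT session_id
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1
      AND session_id <> @@SPID;

OPEN spid_cursor;
FETCH NEXT FROM spid_cursor INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Log the session before killing it.
    INSERT INTO Query_results (login_name, total_elapsed_time)
    SELECT login_name, total_elapsed_time
    FROM sys.dm_exec_sessions
    WHERE session_id = @spid;

    SET @sql = N'KILL ' + CAST(@spid AS NVARCHAR(10));
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM spid_cursor INTO @spid;
END

CLOSE spid_cursor;
DEALLOCATE spid_cursor;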
I have used Extended Events to monitor occurrences of tempdb contention on a production server. I found several entries logged within 30 minutes. Now I am trying to determine whether the tempdb contention is on PFS, GAM or SGAM pages; then I will decide if I need to increase the number of tempdb data files on the production server. Currently, there are 8 tempdb data files configured, each on a separate disk. These are the Page_IDs I found in the extended events for the tempdb files -
Page_ID = 1 for PFS page
Page_ID = 2 for GAM page
Page_ID = 3 for SGAM page
but I found the below Page_IDs, and I know there is a formula that you can use to identify whether a page is PFS, GAM or SGAM. How should I use this formula, and what should I look for to determine if a page is PFS, GAM or SGAM? Is there any threshold value for the duration of the tempdb contention that occurred?
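In case it helps, here is a sketch of the commonly cited modulo test (the example page number is made up): within each data file, PFS pages repeat every 8,088 pages and GAM/SGAM pages every 511,232 pages.

DECLARE @page_id BIGINT = 2935944;  -- hypothetical Page_ID from the XE capture
SELECT CASE
         WHEN @page_id = 1 OR @page_id % 8088 = 0         THEN 'PFS'
         WHEN @page_id = 2 OR @page_id % 511232 = 0       THEN 'GAM'
         WHEN @page_id = 3 OR (@page_id - 1) % 511232 = 0 THEN 'SGAM'
         ELSE 'Not an allocation page'
       END AS page_type;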
Today I've put into production a big database accessed by 200 concurrent users; this database has READ_COMMITTED_SNAPSHOT set to ON. I know that RCSI is very aggressive on tempdb, so I'm monitoring it. I've noticed that the transaction log space usage (%) on tempdb is slowly but steadily increasing: in the last 24 hours it has gone from 99% space free to 37% space free... is this normal? The tempdb log is 35GB in size.
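For what it's worth, a quick way to watch this alongside the other databases is the standard log-space report:

-- Reports log size and percent of log space used for every database, tempdb included.
DBCC SQLPERF (LOGSPACE);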
I have a scenario where a process loads data into a SQL Server 2012 database, doing some manipulation on the data (sorting, aggregation, etc.). Once this process is completed, it does not free up the tempdb space. If I restart the instance, then it does.
Is there any way (apart from shrinking) to release space in tempdb, like writing some post-load SQL queries to delete/truncate the data and logs from tempdb?
Is it possible to manually force/call/start the system AUTOSHRINK process? I have an issue that appears only when the engine shrinking process is running and I need this to reproduce my bug.
I know how to start a "regular" database shrink with DBCC SHRINKDATABASE(xxxx);, but this is not the same as one started by the database engine.
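One hedged idea rather than a documented trigger: the background task only considers databases that have AUTO_SHRINK enabled, so enabling it on a database with plenty of free space may provoke the behaviour, though the exact timing cannot be forced.

-- xxxx is the database from the post; autoshrink then runs on the engine's own schedule.
ALTER DATABASE xxxx SET AUTO_SHRINK ON;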
I have a table with the following columns: employeeSessionID, OpDate, OpHour, sessionStartTime, sessionCloseTime. I need to see how many users remain active per hour. I can calculate how many logged in per hour, but I am stumped on how to count how many are active per hour. I have a single table that stores login data, and I have created a query that pulls only the data needed from the table into a temp table, using this query. Also note it is possible that sessionCloseTime is null if the device has not been logged out; this would need to be counted as active.
TABLE NAME: #empSessionLog — contains the time stamp data OpDate, sessionStartTime and sessionCloseTime.

OpDate       sessionStartTime           sessionCloseTime
2015-01-20   2015-01-20 14:32:59.130    2015-01-20 14:33:14.6299166
2015-01-20   2015-01-20 06:58:33.730    2015-01-20 15:27:16.9133442
2015-01-20   2015-01-20 09:56:22.840    2015-01-20 17:56:29.7555853
2015-01-20   2015-01-20 05:59:18.613    2015-01-20 14:05:19.0426707
[code]....
I can see how many sessions logged in per hour with the following statement:
SELECT opDate,
       FORMAT(DATEPART(HOUR, sessionStartTime), '00') AS opHour,
       COUNT(*) AS Total
FROM #empSessionLog
GROUP BY opDate, FORMAT(DATEPART(HOUR, sessionStartTime), '00')
ORDER BY opDate, FORMAT(DATEPART(HOUR, sessionStartTime), '00') ASC

Results:

opDate       opHour   Total
2015-01-20   04       1
[code]....
Where I am stuck is how to count the sessions that remain active in each hour until the session is closed by sessionCloseTime.
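Not the poster's code, but a sketch of one common approach (assuming all sessions in #empSessionLog open and close on the same OpDate, as in the sample rows): join each session to a tally of the 24 hours and count it in every hour it overlaps; a NULL sessionCloseTime keeps the session counted through the end of the day.

SELECT s.OpDate,
       FORMAT(h.opHour, '00') AS opHour,
       COUNT(*) AS ActiveSessions
FROM #empSessionLog AS s
JOIN (SELECT TOP (24) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS opHour
      FROM sys.all_objects) AS h                      -- tally of hours 0..23
  ON h.opHour >= DATEPART(HOUR, s.sessionStartTime)   -- started by this hour
 AND (s.sessionCloseTime IS NULL                       -- still open counts as active
      OR h.opHour <= DATEPART(HOUR, s.sessionCloseTime))
GROUP BY s.OpDate, h.opHour
ORDER BY s.OpDate, h.opHour;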
I'm trying to kill a bunch of processes in SQL 6.5 and I can't. I'm running the only machine with the SQL tools installed (the server) and it won't let me kill them. I've tried the GUI screens and the KILL statement in ISQL/w. Is there any way around this?
I've stopped SQL Server and rebooted the NT Server. Is there any way I can get rid of these processes? They are locking some tables and keeping me from inserting data from my code. Very frustrating.
Is it possible that Data Collection can cause a massive increase in MB/sec to tempdb? I cannot find the connection with tempdb, and I did set the cache file, but on the same disk.
Or could it be something different? Over the last two weeks, what I observed was the Read/Write MB/s to tempdb increasing progressively.
At one point it was about 20MB/sec.
After a reset it was down to 1MB/sec again...
What I also found: the external company which installed SQL Server created only one file for tempdb. Next week, or during a break if that's possible, I would like to split it into 8 files over the weekend.
I also saw that the tempdb mdf kept growing, even though usage was just 8-10%.
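For the planned split, a minimal sketch of adding one of the extra files (the logical name, path and sizes are assumptions; the files should normally be sized equally):

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4GB,
          FILEGROWTH = 512MB);
-- Repeat for tempdev3 through tempdev8; new files are usable immediately.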
Error:

(1 row(s) affected)
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Msg 5042, Level 16, State 1, Line 1
The file 'tempdev1' cannot be removed because it is not empty.
Note:
=> I restarted SQL Server from SSMS and then ran the same commands mentioned above... and got the same error.
=> I executed the above commands and then restarted the services... no change.
I'm having an argument with our infrastructure architect, who has just gone and bought lots of SSD drives to use for our tempdb data and log files. Sounds great, doesn't it? There is a catch though: his plan is to add the disks to the two available slots in each blade in a RAID 0+1 configuration, effectively giving you one usable drive, and to put both the data and log files on that one disk.
I then pointed out that SQL Server best practice is to host the tempdb data and log files on two separate drives to reduce contention. The architect then basically said that because this isn't spinning disk, read/write contention isn't an issue. I don't agree with this and wanted to get some opinions from the community. I'm still advising that two separate disks should be used, but someone just went and spent £80k ($150k) on SSDs and doesn't want to back down...
I can get a snapshot of tables in tempDB, but I would like to track which procs are causing the load in the tempDB.
I think I can sample and record objects in tempdb, but I would like to record the procs creating the most tempdb usage, and the disk reads/writes associated with those procs.
The DMVs give usage in the individual DBs, but what's a good way to correlate procs in the DBs to tempdb usage?
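A hedged sketch of the live end of this (it only catches currently executing requests, not a history): tie current tempdb page allocations per task to the statement making them.

-- Current tempdb allocations correlated to the running statement.
SELECT r.session_id,
       u.internal_objects_alloc_page_count,  -- sorts, hashes, work tables
       u.user_objects_alloc_page_count,      -- temp tables, table variables
       t.text AS sql_text
FROM sys.dm_db_task_space_usage AS u
JOIN sys.dm_exec_requests AS r
  ON u.session_id = r.session_id
 AND u.request_id = r.request_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE u.internal_objects_alloc_page_count + u.user_objects_alloc_page_count > 0
ORDER BY u.internal_objects_alloc_page_count DESC;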
I have a server which is not running optimally, so I checked the default trace. I have around 600 entries in the default trace which are all Missing Column Statistics, and the database is tempdb. is_auto_create_stats_on and is_auto_update_stats_on are both 1 for tempdb.
So we have new servers that are going to be installed with SQL 2012, and I'm debating the wisdom of splitting tempdb into multiple files.
I know it's a myth that performance automatically improves if you split it into a number of files based on processor count, but I'm debating the wisdom of putting a file on each of my data/log file drives.
For instance, I have a server with a C: drive (OS), D: drive (Data for system DBs and install of programs - 458 GB), an F: drive for user DB data files (767 GB), and a J: drive for log files (255 GB).
Obviously no files are going on C:. I'm debating whether we should even leave the system DBs on the D: drive, given that on our current 2k8 servers we end up with Memory.dmp files overflowing the D: drives, as well as .cabs and other install/update files that tend to collect on that drive over the years.
But if we leave the system DBs on D:, I'm wondering if adding a second tempdb file to F: and a third to J: will improve query performance or not.
Some file names listed could not be created. Check related errors.
[code]
I did not have remote connections enabled yet, so the resolutions I have found that involve sqlcmd or starting in single-user configuration are not working. Is there any way that might allow me to restore the usual tempdb settings, which I think would allow SQL Server to start again?
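Not a guaranteed fix, but a sketch of the usually cited recovery (the logical file names and path are assumptions): start the instance with the -f startup parameter (minimal configuration), connect locally with sqlcmd over shared memory (which should not need remote connections enabled), and repoint tempdb; the change takes effect on the next normal restart.

-- Run against the minimally started instance, then restart normally.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'C:\SQLData\tempdev.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'C:\SQLData\templog.ldf');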