Hi,
I'm a junior DBA and have been given an assignment by my lead to find information on the following.
We are to migrate an existing database of about 4 TB to a:
DELL PowerEdge 2950[Mem:Up to 32GB]
OS : Windows Server 2003 Std Edition X64 SP2
DB : SQL Server Enterprise Edition x64
I am to find out how to design the database to provide optimum performance and failover, while allowing for the future growth of the database.
1) What would be the recommended RAID settings?
2) Placement of tempdb?
3) Should we do clustering, and why?
4) What would data partitioning do to help?
5) Any other aspects to be considered for sizing the database?
6) Placement of data files and log file on separate physical disks? (see the sketch after this post)
7) Indexing?
I have read many sites. I would appreciate it if someone could write suggestions and opinions based on their current database design specs or previous experience, picking out the best design points. Thank you.
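On questions 2 and 6, here is a minimal sketch of the file-placement DDL, assuming hypothetical drive letters (E: for data, F: for logs, T: for tempdb) and a hypothetical database name:

-- Move tempdb to its own spindle (takes effect after the service restarts);
-- tempdev/templog are the default logical names for tempdb's files
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\templog.ldf')

-- Create the user database with data and log on separate physical disks
CREATE DATABASE MigratedDb
ON PRIMARY (NAME = MigratedDb_data, FILENAME = 'E:\SQLData\MigratedDb.mdf', SIZE = 100GB, FILEGROWTH = 10GB)
LOG ON     (NAME = MigratedDb_log,  FILENAME = 'F:\SQLLogs\MigratedDb.ldf', SIZE = 20GB,  FILEGROWTH = 5GB)

Separating the sequential log I/O from the random data I/O is the usual reason behind question 6's arrangement.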
My SSIS packages produce output Excel files. Each Excel file has a set of cells that must have a background color. For instance, the background of cells A1:B2 should be yellow, while the background of column E should be green.
If my Excel files were statically named, this requirement could be implemented with Excel conditional formatting rules.
However, my scenario calls for dynamic Excel filenames. ExcelFilePath is set to be in the form of <timestamp to the milliseconds> <accountname> <costcenter> .xls
So I am not sure what to do: a Script Task (to do what, exactly?). I am not well versed in doing I/O against Excel files.
I'm new to ASP.NET. Please help me with some ideas about the design of the databases and the interface of the web site. We are a public school district, and I want to create a website for families to go online and update their family and student demographic information. Basically, I put the students' information pulled from our student information system software on the web, and the parents can log in online to update it at the beginning of the school year. After that, I will create some reports and let the secretary manually apply the updates to our student information system.

The demographic info includes 3 parts: 1. family street address, city, state, zip; 2. guardian1 primary phone, second phone, emails, and guardian2 primary phone, second phone; 3. student date of birth, gender. There may be multiple students in one family.

But how can I track which fields the parents changed? Shall I do it programmatically in the web form, or later when I create some kind of report? I only want to pull the students with the fields that were updated and give them to the secretary to update manually; I don't want to generate a full report and make them compare which fields changed and then update, as it would take them too much time. Thanks much in advance.
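One way to answer the "which fields changed" question at report time rather than in web-form code is sketched below, assuming two hypothetical tables: StudentDemographics_Snapshot (the original pull from the student information system) and StudentDemographics_Web (the parent-edited copy), with matching columns:

-- Web rows that differ from the snapshot in any tracked field
-- (EXCEPT treats NULLs as equal, so nullable columns compare cleanly)
SELECT StudentID, StreetAddress, City, State, Zip
FROM dbo.StudentDemographics_Web
EXCEPT
SELECT StudentID, StreetAddress, City, State, Zip
FROM dbo.StudentDemographics_Snapshot

With this approach the web form just saves edits, and the secretary's report lists only the rows that actually changed.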
I am in the process of designing a package for populating a Dimension table for my new data warehouse. I would like to discuss with you my proposed solution in the hope that some of you will be able to advise me on any mistakes that I have made so far.
I have 3 source tables which are used to populate my Dimension table (I will restrict each source table to a few columns to keep this simple).
Please note that I haven't included the start and end dates to keep this simple, but in my real solution, the Dimension table has many more columns.
The Meetings and Events tables contain hundreds of thousands of records, so when my package runs, I should only retrieve information that has changed since the last time my Dimension table was populated. This is why I store the time stamps in my Dimension table.
So when my package runs, I get max(Meeting_Time_Stamp) and max(Event_Time_Stamp) from my Dimension table then retrieve all records from the source table where their timestamps are GREATER than the max values.
The problem is this:
If, in the source system, an event is changed, the time stamp associated with that event record will change, but the meeting record associated with that event will not have its time stamp changed.
How do I ensure that when my package runs, I retrieve all meeting and event information even if only one of the two has changed?
Should I build a table made of IDs?! And if I do need to build a table made up of IDs, should I store this as a permanent table in my database or is there an SSIS data structure I can use to hold this information?
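A minimal sketch of one approach, assuming hypothetical source tables Meetings (Meeting_ID, Meeting_Time_Stamp, ...) and Events (Event_ID, Meeting_ID, Event_Time_Stamp, ...) and a dimension table DimMeeting: pull the set of meeting IDs touched on either side, then join back for the full rows.

DECLARE @MaxMeetingTS datetime, @MaxEventTS datetime

SELECT @MaxMeetingTS = MAX(Meeting_Time_Stamp),
       @MaxEventTS   = MAX(Event_Time_Stamp)
FROM dbo.DimMeeting

-- Full meeting + event rows for any meeting where EITHER side changed
SELECT m.*, e.*
FROM dbo.Meetings AS m
JOIN dbo.Events   AS e ON e.Meeting_ID = m.Meeting_ID
WHERE m.Meeting_ID IN
      (SELECT Meeting_ID FROM dbo.Meetings WHERE Meeting_Time_Stamp > @MaxMeetingTS
       UNION
       SELECT Meeting_ID FROM dbo.Events   WHERE Event_Time_Stamp > @MaxEventTS)

Used as the source query of the data flow, this sidesteps the ID table entirely; if you do build one, a permanent staging table has the advantage over an in-memory SSIS structure that you can inspect it between runs.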
So what I need to do is store application info, such as application name, path, server it's installed on, etc., into a table.
I am thinking of designing the table like this, but I'm not sure if this is a good design:
ApplicationInfo
---------------
ID
Application  -- pk
Path
ServerName
There are apps that are installed on all servers, and there are some apps installed only on a few servers. I was thinking of making the "Application" field unique so that only one instance of the application name exists, and then comma-delimiting the "ServerName" field values.
So with this approach records would look like this:
Field        Value
-----------------------------------
ID           "1"
Application  "Adobe"
Path         "C:\Program Files\Adobe"
Server       "ServerA,ServerB,ServerC"

ID           "2"
Application  "Microsoft Office"
Path         "C:\Program Files\Microsoft Office"
Server       "ServerB,ServerC"
I've got a server that has a RAID-5 array partitioned into C: and D: drives (OS Win2K Adv. Server installed on C:). The server also has a mapping to a NAS device using the latest protocols that trick the system into thinking the map is actually a local SCSI drive. That's drive X:. This server is used only for SQL, and contains an OLTP database that sees a lot of use and is pretty heavily indexed.

I am toying with the idea of centralizing my data storage on the NAS (the data center network segment is 1-gigabit Ethernet). So I was thinking about putting my primary data file on the NAS (drive X:) and keeping all tables there, creating a secondary data file on local RAID-5 (drive D:) and putting all non-clustered indexes there, as well as keeping tempdb there and specifying the sort-in-tempdb option. Log files would also remain on D:.

If anyone can suggest a better scenario given the above setup, I'd love to hear it. Much appreciated.

Alexey Aksyonenko
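For the index-placement piece, a minimal sketch using a hypothetical database MyOLTP and a hypothetical index; the nonclustered indexes go to a secondary filegroup on the local RAID (D:):

ALTER DATABASE MyOLTP ADD FILEGROUP IndexFG
ALTER DATABASE MyOLTP ADD FILE
    (NAME = MyOLTP_idx, FILENAME = 'D:\SQLData\MyOLTP_idx.ndf', SIZE = 10GB)
TO FILEGROUP IndexFG

-- Recreate an existing nonclustered index on the new filegroup (repeat per index)
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
    WITH DROP_EXISTING ON IndexFG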
Hi, I am to build a DB that will handle over a million insertions every month. Right now I am designing it. I was wondering if any of you have a tutorial or some guide that talks about the best practices a DBA has to follow before designing a new, huge DB. The DB will be used with ASP and will be online on a dedicated web server in the US only.

I will be thankful if anyone can point me to a tutorial or share their own experiences with such DBs.

Regards,
jaunty Edward
For a long time now I have been using SQL 2005 to hold information about images for different image galleries. I am going through and rebuilding the site from the ground up, and thought this might be a good time to optimize the tables and databases. Right now each game has its own table in the database to hold information about the images (Group, ImageName, Width, Height, AltTag, ImageNumber; Group is to split up by character, and ImageNumber is a unique number for each image so I can use it in the ASP script to create the thumbnail and the internal links from image to image). I was going to stick with this design, but decided I would pop in here and see if anyone has any suggestions on a better way of doing this. And remember, I am recreating all the scripts, so the gallery script used to generate the HTML page for the users can be modified to work with the new DB.
Here is the code to create the current tables for anyone that wants to look at how I made the original ones.
CREATE TABLE [Eaglef90].[cbr](
    [GroupName] [varchar](15) NOT NULL,
    [ImageName] [varchar](50) NOT NULL,
    [Width] [decimal](3, 0) NOT NULL,
    [Height] [decimal](3, 0) NOT NULL,
    [AltText] [varchar](50) NOT NULL,
    [ImageNumber] [decimal](3, 0) NOT NULL
) ON [PRIMARY]
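One common alternative, sketched below on the assumption that the per-game tables all share this shape: collapse them into a single table keyed by a GameName column, so adding a gallery needs no new DDL and the ASP script only changes its WHERE clause:

CREATE TABLE [Eaglef90].[GalleryImages](
    [GameName]    [varchar](30) NOT NULL,   -- replaces one-table-per-game
    [GroupName]   [varchar](15) NOT NULL,
    [ImageName]   [varchar](50) NOT NULL,
    [Width]       [decimal](3, 0) NOT NULL,
    [Height]      [decimal](3, 0) NOT NULL,
    [AltText]     [varchar](50) NOT NULL,
    [ImageNumber] [decimal](3, 0) NOT NULL,
    CONSTRAINT [PK_GalleryImages] PRIMARY KEY ([GameName], [ImageNumber])
) ON [PRIMARY]

The composite key assumes ImageNumber is only unique within a game, as the current one-table-per-game layout suggests.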
-- If I get used to envying others... Those things about my self I pride will slowly fade away. -Stellvia
CREATE TRIGGER [tr_update_doc_count] ON [Documents] FOR INSERT AS

-- (declared variables)

-- I want to get the table reference from the inserted information
SELECT @tableref = Table_id, @Account_id = Account_id FROM inserted

-- Then I get the actual table name from the UsercreatedTables table where it is stored
SELECT @tablename = TableName FROM UsercreatedTables WHERE Table_id = @tableref

-- Now I want to use the @tablename I got from UsercreatedTables in the query
SELECT @Current_Doc_Count = Account_Docs FROM @tablename WHERE Account_id = @Account_id
I am not sure how to get it to use the variable @tablename as the table name in that last query. It gives me an error stating that @tablename is invalid syntax.
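For reference, a table name cannot come from a variable in static T-SQL; the usual workaround is dynamic SQL. A minimal sketch of that last query using sp_executesql (this assumes Account_Docs and Account_id are ints; QUOTENAME guards the injected table name):

DECLARE @sql nvarchar(500)
SET @sql = N'SELECT @cnt = Account_Docs FROM ' + QUOTENAME(@tablename) +
           N' WHERE Account_id = @acct'
EXEC sp_executesql @sql,
     N'@cnt int OUTPUT, @acct int',
     @cnt  = @Current_Doc_Count OUTPUT,
     @acct = @Account_id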
G'day gents. I run a SELECT query against my MySQL database using ASP.NET. The query looks like this:

SELECT DISTINCT(rubbet.ortnamn) AS Bebyggelsenamn,
       COUNT(rubbet.ortnamn) AS 'Antal',
       MIN(rubbet.artal) AS Beläggsår,
       rubbet.belaggsform AS Beläggsform,
       rubbet.harad AS Härad,
       rubbet.socken AS Socken,
       rubbet.ovrigt AS Övrigt,
       rubbet.filnamn AS Filnamn,
       rubbet.path AS Sökväg
FROM rubbet
WHERE rubbet.ortnamn LIKE '%" + searchTextBox.Text + "%'
  AND rubbet.harad LIKE '" + DropDownList1.SelectedValue + "'
  AND rubbet.socken LIKE '" + DropDownList2.SelectedValue + "'
  AND rubbet.ovrigt LIKE '" + ovrigtRadioButtonList.SelectedValue + "'
GROUP BY Bebyggelsenamn
ORDER BY '" + sorteraDropDownList.SelectedValue + "' " + ordning + ";

The query works just fine and runs smoothly. I only have one problem. If a user types, for example, "monkey" in the searchTextBox, the database finds all the instances of monkey, counts them, and presents one line that states: Monkey, 138 instances, bla bla bla.

My problem is with the MIN aggregate function. It selects the lowest rubbet.artal for Monkey, but I want rubbet.belaggsform to get its data from the same row that MIN(rubbet.artal) gets its data from. As it is now, rubbet.belaggsform selects its data from the row with the lowest primary key. For example, the result could look like: Holmagården, 127, , 1873 when it should say: Holmagården, 127, Holmagaard, 1873.

I'm only presenting the columns above; the rest of them are for creating links and whatnot. I guess I should write a subquery of some kind to pick the correct rubbet.belaggsform that corresponds to MIN(rubbet.artal) together with the other factors, but I just can't get it to work. Help me out here please.
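A minimal sketch of the correlated-subquery fix, assuming MySQL 4.1+ for subquery support; it picks the belaggsform from the row holding the minimum artal per ortnamn, with ties left nondeterministic:

SELECT r.ortnamn AS Bebyggelsenamn,
       COUNT(*) AS Antal,
       MIN(r.artal) AS Belaggsar,
       (SELECT r2.belaggsform
        FROM rubbet AS r2
        WHERE r2.ortnamn = r.ortnamn
        ORDER BY r2.artal ASC
        LIMIT 1) AS Belaggsform
FROM rubbet AS r
GROUP BY r.ortnamn

The WHERE clauses from your original query would go into both the outer query and the subquery, so the two agree on which rows are in play.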
I have a database approximately 30 GB in size which needs to be moved from one SQL server to another. Does anyone know the most efficient way of doing this, other than backing up to tape?
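A minimal sketch of the plain backup/copy/restore route, with hypothetical paths and database name; WITH MOVE uses the database's logical file names, which RESTORE FILELISTONLY will show you:

-- On the source server
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak' WITH INIT

-- Copy the .bak across the network to the target server, then:
RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'D:\SQLData\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'E:\SQLLogs\MyDb.ldf'

Detach/attach (sp_detach_db, copy the .mdf/.ldf, attach on the target) avoids the backup step, but it takes the database offline on the source while the files are copied.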
I have been experimenting with SQL Server 2005 partitions. I loaded a terabyte of information into 2 tables. The first holds the document information and the second holds the actual binary document (in this case pdf). Most of the documents are about 1 megabyte in size, but the largest is 212 megabytes.
SQL Server has no problem storing the blobs. The problem occurs when I attempt to get the data.
I ran some quick tests to see how fast I could pull the documents out. The largest took about 24 seconds. The 1 MB documents are sub-second.
Here is how the 212 meg doc breaks down:
Time to load datatable: 18.79 seconds
Time to load byte array: 3.84 seconds
Time to write and open document: 0.01 seconds
If I access the file from a file server, the time is 0.04 seconds to begin showing the document.
As you can see, the longest time period is related to retrieving the data from SQL, and it is much slower than launching it from disk across the network. (Note: the SQL server and file server used for the test sit next to each other.)
My question is: how can I speed up access from SQL Server? I believe the key phrase is "partition aligned". Any suggestions would be appreciated.
I will add the table definitions and partition information as a reply since only 5000 chars are allowed in the post.
I have a 50 GB OLTP production database that currently takes about 50 minutes to back up (a normal SQL flat-file backup to disk).
This database will grow to around a terabyte by next year.
My major concern is how I will be able to back up this DB in 2 hours or less when it is that big.
I have been checking out my options in terms of SAN snapshots/clones, as well as multiple backup devices and a differential/filegroup/full backup strategy.
What I want to know is: if anyone out there is backing up VLDBs, what strategy/methods/tools are you using, including 3rd-party tools, for faster, smaller backups?
Any pointers/best practices for VLDB backups would be greatly appreciated.
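On the multiple-backup-devices option, a minimal sketch with hypothetical paths; striping the backup across devices on separate spindles parallelizes the write, and differentials shrink the nightly window:

-- Weekly full, striped across four devices
BACKUP DATABASE BigDb TO
    DISK = 'G:\Backups\BigDb_1.bak',
    DISK = 'H:\Backups\BigDb_2.bak',
    DISK = 'I:\Backups\BigDb_3.bak',
    DISK = 'J:\Backups\BigDb_4.bak'
WITH INIT, STATS = 5

-- Nightly differential: only extents changed since the last full
BACKUP DATABASE BigDb TO DISK = 'G:\Backups\BigDb_diff.bak'
WITH DIFFERENTIAL, INIT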
Hi,

Situation: I have a web application that is running on a server. IT has told me that the application I developed is taking up an enormous amount of RAM, and that the amount of RAM taken up is slowly increasing, about 10 MB per hour (a memory leak). Furthermore, they have informed me that the RAM is taken up by SQL. My senior developer told me that I probably have SqlConnections open that are not closed. I have checked through the application and it seems they are all closed properly, but I may have missed some, as the application is pretty big.

Questions:
1. Is there an application or piece of code that can monitor how many SQL connections I have open during runtime?
2. Could there be any other common causes for such a memory leak?

Any help with this matter would be very much appreciated. Thank you for your time.

Sincerely,
Jeff
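On question 1, a minimal sketch of a server-side check (sysprocesses exists on SQL 2000; the spid > 50 filter is a rough way to skip system sessions):

SELECT hostname, program_name, COUNT(*) AS open_connections
FROM master.dbo.sysprocesses
WHERE spid > 50
GROUP BY hostname, program_name
ORDER BY open_connections DESC

Run it every few minutes; a count that climbs steadily for your application's program_name supports the unclosed-connection theory.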
Hi everybody. I'm having difficulties with a button handler I put together from a few examples around the internet. Here's my code:

<script runat="server">
Sub Add_To_Cart(ByVal Src As Object, ByVal Args As EventArgs)
    Dim FVArtikel_id As Label = FormViewDisplay.FindControl("Artikel_idLabel")
    Dim FVArtikel_naam As Label = FormViewDisplay.FindControl("Artikel_naamLabel")
    Dim FVArtikel_prijs As Label = FormViewDisplay.FindControl("Artikel_prijsLabel")
    Dim DBConnection As OleDbConnection
    Dim DBCommand As OleDbCommand
    Dim SQLString As String
    Dim SQLAddString As String

    If Not Session("sid") Is Nothing Then
        DBConnection = New OleDbConnection( _
            "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=C:\Documents and Settings\Administrator\Bureaublad\2ehandslego.nl\data\db.mdb")
        DBConnection.Open()

        SQLString = "SELECT Count(*) FROM Orders WHERE sid = '" & CType(Session("sid"), String) & _
                    "' AND Artikel_id = '" & FVArtikel_id.Text & "'"
        DBCommand = New OleDbCommand(SQLString, DBConnection)
        DBCommand.Parameters.AddWithValue("@sid", Session("sid")) 'string?
        DBCommand.Parameters.AddWithValue("@Artikel_id", FVArtikel_id.Text) 'string?

        If DBCommand.ExecuteScalar() = 0 Then
            SQLAddString = "INSERT INTO Orders (sid, Order_datum, " & _
                "Artikel_id, Order_artikel, Order_prijs, Order_hoeveelheid) VALUES (" & _
                "'" & Session("sid") & "', " & _
                "'" & Today & "', " & _
                "'" & FVArtikel_id.Text & "', " & _
                "'" & FVArtikel_naam.Text & "', " & _
                "'" & FVArtikel_prijs.Text & "', 1)"
            DBCommand = New OleDbCommand(SQLAddString, DBConnection)
            DBCommand.Parameters.AddWithValue("@sid", Session("sid")) 'string?
            DBCommand.Parameters.AddWithValue("@Artikel_id", FVArtikel_id.Text) 'string?
            DBCommand.Parameters.AddWithValue("@Artikel_naam", FVArtikel_naam.Text)
            DBCommand.Parameters.AddWithValue("@Artikel_prijs", FVArtikel_prijs.Text) 'string?
            DBCommand.ExecuteNonQuery()
        End If

        DBConnection.Close()
        Src.Text = "Item Added"
        Src.ForeColor = Color.FromName("#990000")
        Src.BackColor = Color.FromName("#E0E0E0")
        Src.Font.Bold = True
    End If
End Sub
</script>

I'm not getting any errors; it seems to me that I'm not getting a 'sid' value passed along, but I don't know what to do about it. I've also already tried step debugging. This is my last resort. I hope you can help me.
I have a database with a dozen or so tables. No table constraints. Logic is all in stored procedures.
I have several Excel spreadsheets of data to import into the database, one spreadsheet per table. Each spreadsheet has additional data (columns) that its target table has no interest in and which should be ignored.
I would appreciate your thoughts on methods and best practices for loading this data to the database.
I am about to investigate SQL Server 2005 Express handling of XML. I am familiar with XML and XSL conversions and it seems to me that XSL conversion of Excel data to XML gives me a lot of flexibility prior to database import for shaping the data.
In short, importing data to the database from an XML source.
I am not familiar with SQL Server's XML capability and would appreciate thoughts on this while I look into it.
And of course alternate ways that I am overlooking.
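For the XML leg, a minimal sketch on SQL Server 2005, assuming a hypothetical file C:\import\rows.xml shaped as <rows><row><Name>...</Name><Qty>...</Qty></row></rows> and a matching target table:

DECLARE @xml xml

SELECT @xml = BulkColumn
FROM OPENROWSET(BULK 'C:\import\rows.xml', SINGLE_BLOB) AS src

INSERT INTO dbo.TargetTable (Name, Qty)
SELECT r.value('(Name)[1]', 'varchar(50)'),
       r.value('(Qty)[1]',  'int')
FROM @xml.nodes('/rows/row') AS t(r)

The nodes()/value() pair shreds the XML into rows and typed columns, so your XSL step only has to produce a flat element-per-column layout.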
Hi, sorry if this is not the right place to post; any help or direction would be appreciated. I'm using SQL Server 2005.
I'm having trouble writing a query and could do with some help. I'll illustrate the problem with an example.
Say I have 2 tables.
Table 1 - tbl_news:

ID   NewsTitle       linkedDoc1   linkedDoc2
1    my news title   5            3

Table 2 - tbl_docs:

ID   docTitle
1    doc1
2    doc2
3    doc3
4    doc4
5    doc5
Now, I'd like to write a query which returns basically the first table, but which gets the doc titles from table 2 using the IDs. How would I write this?
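A minimal sketch: join tbl_docs once per linked-document column (LEFT JOINs in case a link is ever empty):

SELECT n.ID,
       n.NewsTitle,
       d1.docTitle AS linkedDoc1Title,
       d2.docTitle AS linkedDoc2Title
FROM tbl_news AS n
LEFT JOIN tbl_docs AS d1 ON d1.ID = n.linkedDoc1
LEFT JOIN tbl_docs AS d2 ON d2.ID = n.linkedDoc2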
I heard that SQL Server 7.0 has problems when the database reaches 50-100 GB, in areas such as backup, transaction logging, and database administration, and that by 100 GB parallel queries are also affected.
Is this true? Where can I get information on this?
Does anyone have experience/advice with large databases (5-10 Gig)? If so, I was wondering about performance/other benefits of spanning a large database across multiple devices (different disks). Would anyone vote for or against doing this?
I have a production database that is in the low-gigabyte size and growing steadily. No issue there. I wish to completely refresh the development database daily on a second server. What is going to be the fastest, easiest way to do this without hindering performance on the production system?

Thanks,
Craig
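A minimal sketch of the usual approach, with hypothetical paths: restore the full backup production already takes each night onto the second server, so production pays no extra cost beyond the backup it was doing anyway:

RESTORE DATABASE ProdDb_Dev
FROM DISK = '\\backupshare\ProdDb_full.bak'
WITH REPLACE,
     MOVE 'ProdDb_Data' TO 'D:\DevData\ProdDb.mdf',
     MOVE 'ProdDb_Log'  TO 'D:\DevLogs\ProdDb.ldf'

Scheduled as a SQL Agent job that fires after the nightly backup completes, this refreshes dev daily with no additional load on production.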
I need to manage the negative performance implications of defragmenting a 1 TB+ DB. I want to perform an index reorganization if fragmentation is no higher than 30%, and an index rebuild if the fragmentation exceeds 30%.
Firstly, can anyone recommend a script which uses the sys.dm_db_index_physical_stats DMV to ascertain the fragmentation level? Secondly, is there a technique I can employ to prevent the ONLINE operation from completely killing performance on a 24/7 production system?
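On the first question, a minimal sketch (LIMITED mode keeps the scan itself cheap on a 1 TB database):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE WHEN ips.avg_fragmentation_in_percent > 30
            THEN 'REBUILD' ELSE 'REORGANIZE' END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND ips.index_id > 0  -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC

The 5% floor and the 30% split are the commonly cited thresholds, not hard rules; feeding the result into per-index ALTER INDEX ... REORGANIZE/REBUILD statements spread across maintenance windows is one way to keep the impact down.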
I've got a few VLDBs that we want to make smaller. Since the tables are running on legacy stuff, all of it's basically made with ints and chars, and it's horribly inefficient.
The problem I came across is that when I made a new table with the best data types and copied the data from the old table, the new table was the exact same size (excluding the index size). It was estimated that a total of ~20 GB would be saved with this change. As it turned out, 0 bytes were saved by the data type changes.
Why are the two tables the same size, even though one has much more efficient data types?
If you want more information about the table I'm using:
391 columns. 50,147,035 rows. 65,295.625 MB in size.
Hi -- I am fairly new to SSIS development, although I am starting to appreciate it more and more, especially since I have started getting into extending the object model. Here's my question:
I have a data flow that pulls data from any number of different delimited files with different numbers of columns. I have had no problem setting up run-time file locations and file names by using the expressions of a flat file data source, and I have been able to deal fairly easily with varying file delimiters by standardizing the files before they get into the data flow. However, I have not been able to come up with a solution that allows my data source to discover its column info at run time and then pass that information on to the data flow task. All I really care about is having the flat file data source properly parse each row into individual column values, because the data flow itself is able to discover the actual data that the columns hold at run time.
I would very much appreciate any feedback from anyone on possible solutions for this.
I have an MS SQL 2000 database that failed to recover at computer restart. Now the database is marked suspect. How can I manage to recover the data? Thank you.
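A minimal sketch of the usual first steps, with a hypothetical database name; this assumes the data files are physically intact, and you should copy the .mdf/.ldf files somewhere safe before touching anything:

-- Clear the suspect flag so SQL Server retries recovery on restart
EXEC sp_resetstatus 'MyDatabase'

-- After restarting the SQL Server service, check integrity
DBCC CHECKDB ('MyDatabase')

If CHECKDB reports corruption, the repair options can lose data, so restoring from the last good backup is generally the safer route.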
I have been using AlwaysOn AGs for a long time now and currently have about 10 TB of data across 120 databases and 3 AG groups for any application that is on SQL 2012, with great success. Each AG group is running on patch level 11.0.5058.0 with 2 synchronous replicas (on different SANs) in the primary data center and 1 asynchronous replica in DR. Migration has been a non-issue because none of the databases were substantial enough that they could not fit into my maintenance window, which is 12-4 AM on Saturday morning.
My issue is that my last application to migrate to 2012 includes a 4 TB TDE-encrypted database, which is about 10x larger than any of the previous ones I have migrated. The database takes 4 hours to back up even after extensive tuning (I hate TDE!!).
The restore to the primary replica is quick because of incremental seeding, but the issue comes from having to back up the database before adding it to the availability group. 4 hours is my exact outage window and I can't get any more. My plan to migrate the application is:
First outage window:
1) Restore database from 2008 to the 2012 primary replica
2) Change the application A record (or CNAME, not sure which) to the primary replica
3) Run the database on a single node until the next outage window

A week later:
1) Add the database to the availability group
2) Change the A record/CNAME to the listener
What I don't like about this is that I am going an entire week with 1 node instead of 3, which is worrisome. I would love to hear how you would accomplish this, or any comments from people who have worked with VLDBs in availability groups and what you liked/hated/loved about doing it. I am trying to go all in on this software and have loved it so far, but I am getting worried when it comes to the VLDB migration.
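For reference, a minimal sketch of the join step itself (AG and database names hypothetical); on SQL 2012 each secondary is seeded by backup/restore WITH NORECOVERY before joining, and the TDE certificate must already be restored on every replica:

-- On the primary, once the database is restored and online:
ALTER AVAILABILITY GROUP [AG1] ADD DATABASE [BigTdeDb]

-- On each secondary, after restoring the full and log backups WITH NORECOVERY:
ALTER DATABASE [BigTdeDb] SET HADR AVAILABILITY GROUP = [AG1]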
The column I'm adding needs to be part of the clustered PK (it will be the last of three columns) so I need to recreate all the indexes.
My DB is set to the FULL recovery model with ALLOW_SNAPSHOT_ISOLATION ON. I've tried two methods so far.
Method 1:
BEGIN TRANSACTION

CREATE TABLE dbo.Tmp_copyoftablewithnewfield
    (<column definitions, including the new field>) ON PRIMARY

IF EXISTS (SELECT * FROM dbo.originaltable)
    EXEC('INSERT INTO dbo.Tmp_copyoftablewithnewfield (<original fields>)
          SELECT <original fields> FROM dbo.originaltable WITH (HOLDLOCK TABLOCKX)')
GO
DROP TABLE dbo.originaltable
GO
EXECUTE sp_rename N'dbo.Tmp_copyoftablewithnewfield', N'originaltable', 'OBJECT'
GO
<recreate PK constraint>
<rebuild indexes>

COMMIT
Pros: lets me add the new field in the spot I'd like it (not a big deal).
Cons: tons of wasted space and time. It took about 15 hours.
Method 2:

SET XACT_ABORT ON
GO
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
<drop PK constraint>
<drop indexes>
ALTER TABLE [dbo].[originaltable] ADD [newfield] [tinyint] NOT NULL CONSTRAINT [DF_originaltable_newfield] DEFAULT ((1))
Pros: no making a copy of the entire table, taking up 200 GB more space in the db data file.
Cons: my tempdb grew to accommodate the row versioning info for every row in the 200 GB table. It took over 30 hours.
A lot of time and disk space is wasted with both.
Since the db is going to be unavailable to users, I have some flexibility here. I was considering turning ALLOW_SNAPSHOT_ISOLATION OFF, trying method 2 again (which should stop the versioning in tempdb), and then turning it back on.
I was also curious whether setting the database recovery model to SIMPLE would cut down on log usage; I could then set it back to FULL when done.
Do these really need to be in a transaction? If there's some hardware failure or something unexpected I can just restore from backup and do the conversion again. If the presence of the transaction itself is causing more disk usage for logging or any other slowdown, I think I'd rather do without.
Given the amount of time this conversion takes, I wanted to get some feedback other than "just try it" before doing any new tests.
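For what it's worth, a minimal sketch of the settings experiment described above (database name hypothetical); switching recovery models breaks the log backup chain, so a full backup afterwards restarts it:

ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION OFF
ALTER DATABASE MyDb SET RECOVERY SIMPLE

-- ... run Method 2 here ...

ALTER DATABASE MyDb SET RECOVERY FULL
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_after.bak' WITH INIT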
It's arriving from ebuyer tomorrow for 32 quid. I already have an instance of MSDE running on the laptop, and ideally I would like SQL Server to be installed using the current instance, without installing any additional services (hope I got my terminology correct here). Basically, all I would like is to use Enterprise Manager with my existing MSDE database. I was wondering if there is anything I should be doing before I install. Should I shut down the service or leave it running, for instance?

Thanks for any tips,
AndyB
SQL 2000 to SQL 2005 Upgrade Error - Database Down - Help Appreciated
I am upgrading a SQL 2000 Standard database server to SQL 2005 Standard on a Windows 2003 server.
I am logged in as domain admin and started the upgrade as 'sa'.
The upgrade stops with the error:
Service MSSQLSERVER cannot be started. Verify you have sufficient privileges to start system services. The error code is (17185)
The error log shows:
2007-01-04 15:59:38.77 Server Authentication mode is MIXED.
2007-01-04 15:59:38.79 Server Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86)
Oct 14 2005 00:33:37
Copyright (c) 1988-2005 Microsoft Corporation
Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 1)
2007-01-04 15:59:38.79 Server (c) 2005 Microsoft Corporation.
2007-01-04 15:59:38.79 Server All rights reserved.
2007-01-04 15:59:38.79 Server Server process ID is 4188.
2007-01-04 15:59:38.79 Server Logging SQL Server messages in file 'F:\SQLData\MSSQL\log\ERRORLOG'.
2007-01-04 15:59:38.79 Server This instance of SQL Server last reported using a process ID of 2980 at 1/4/2007 3:56:58 PM (local) 1/4/2007 2:56:58 AM (UTC). This is an informational message only; no user action is required.
2007-01-04 15:59:38.79 Server Registry startup parameters:
2007-01-04 15:59:38.79 Server -d F:\SQLData\MSSQL\data\master.mdf
2007-01-04 15:59:38.79 Server -e F:\SQLData\MSSQL\log\ERRORLOG
2007-01-04 15:59:38.79 Server -l F:\SQLData\MSSQL\data\mastlog.ldf
2007-01-04 15:59:38.79 Server Command Line Startup Parameters:
2007-01-04 15:59:38.79 Server -s MSSQLSERVER
2007-01-04 15:59:38.79 Server SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2007-01-04 15:59:38.79 Server Detected 4 CPUs. This is an informational message; no user action is required.
2007-01-04 15:59:38.83 Server Set AWE Enabled to 1 in the configuration parameters to allow use of more memory.
2007-01-04 15:59:39.02 Server Using the static lock allocation specified in the locks configuration option. Allocated 20000 Lock blocks and 20000 Lock Owner blocks per node. This is an informational message only. No user action is required.
2007-01-04 15:59:39.02 Server Multinode configuration: node 0: CPU mask: 0x0000000a Active CPU mask: 0x0000000a. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2007-01-04 15:59:39.04 Server Multinode configuration: node 1: CPU mask: 0x00000005 Active CPU mask: 0x00000005. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2007-01-04 15:59:39.04 Server Attempting to initialize Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required.
2007-01-04 15:59:41.04 Server Attempting to recover in-doubt distributed transactions involving Microsoft Distributed Transaction Coordinator (MS DTC). This is an informational message only. No user action is required.
2007-01-04 15:59:41.04 Server Database Mirroring Transport is disabled in the endpoint configuration.
2007-01-04 15:59:41.04 spid7s Starting up database 'master'.
2007-01-04 15:59:41.05 spid7s 1 transactions rolled forward in database 'master' (1). This is an informational message only. No user action is required.
2007-01-04 15:59:41.07 spid7s 0 transactions rolled back in database 'master' (1). This is an informational message only. No user action is required.
2007-01-04 15:59:41.07 spid7s Recovery is writing a checkpoint in database 'master' (1). This is an informational message only. No user action is required.
2007-01-04 15:59:41.08 spid7s SQL Trace ID 1 was started by login "sa".
2007-01-04 15:59:41.08 spid7s Starting up database 'mssqlsystemresource'.
2007-01-04 15:59:41.11 spid7s Using 'dbghelp.dll' version '4.0.5'
2007-01-04 15:59:41.11 spid7s ***Stack Dump being sent to F:\SQLData\MSSQL\LOG\SQLDump0035.txt
2007-01-04 15:59:41.11 spid7s SqlDumpExceptionHandler: Process 7 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
2007-01-04 15:59:41.27 spid7s Unable to update password policy.
2007-01-04 15:59:41.27 spid7s SQL Trace was stopped due to server shutdown. Trace ID = '1'. This is an informational message only; no user action is required.
Hi, I would like to delete data from a 750-million-row table in chunks of 10,000, without blocking the users. As ours is a 24/7 shop, I do not want to block the users for a long time. An answer to this is highly appreciated. Thanks, Samna
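A minimal sketch of the batched-delete pattern (table and filter are hypothetical); each small transaction keeps locks short-lived, so other users get in between batches:

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable
    WHERE PurgeDate < '20050101'   -- hypothetical purge condition
    SET @rows = @@ROWCOUNT
END

DELETE TOP (n) needs SQL Server 2005; on SQL 2000, use SET ROWCOUNT 10000 before the DELETE (and SET ROWCOUNT 0 after the loop) instead. Frequent log backups during the purge keep the transaction log from ballooning.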
I need the database to give my users an indication that a renewal has been completed. Basically what happens is that once a month each year, a report is generated from SQL of how many employees need their gaming license renewed; the filter is based on a field called final suit. I need to find a way to let them know through the database that an employee has been renewed. Anyone got any ideas?
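One minimal sketch, assuming a hypothetical Employees table: add a renewal-status column that the monthly report can filter on, and set it when a renewal is completed:

ALTER TABLE dbo.Employees ADD LicenseRenewedDate datetime NULL  -- NULL = not yet renewed

-- Mark an employee as renewed (example key value)
UPDATE dbo.Employees SET LicenseRenewedDate = GETDATE() WHERE EmployeeID = 123

-- Monthly report: only employees still awaiting renewal
SELECT EmployeeID, FinalSuit FROM dbo.Employees WHERE LicenseRenewedDate IS NULL

The FinalSuit column name is a guess at your "final suit" field; storing a date rather than a yes/no flag also tells users when the renewal happened.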