I have a schema that is mostly working, but I was wondering if some of you with more experience than I might give me some constructive criticism on my methodology.
Basically, I have a single table that stores data for many records. Each record has a variable number of fields, each of which can be a different data type. Later, queries will pull filtered subsets of data from the table, and do calculations on specific fields. In my implementation, the fields for a record are bound together by the datagroup (uniqueidentifier) column in the LotsOData table, the field name is defined by the dataname column, and the field value is stored in the datavalue column, which is type sql_variant.
One problem I had, and I'm not able to reliably replicate, is that the more complicated queries sometimes raise casting errors on the sql_variant column, even when the data is absolutely correct. I've been able to avoid this case by pre-selecting some of the subqueries into temporary tables first, then joining on the temp tables in the main query, but that seems horribly inefficient.
I've included a sample table, data, and query to demonstrate my basic solution. I was wondering if anybody could provide some insight on a better way of designing a solution for this scenario.
Thanks!
-Eric.
PS: bonus points if you have any insight at all on the casting error I mentioned!!
-- create table
create table LotsOData (
    pk int identity,
    dataname nvarchar(16) not null,
    datagroup uniqueidentifier,
    datavalue sql_variant
);

-- lot of inserts
declare @group_a uniqueidentifier, @group_b uniqueidentifier, @group_c uniqueidentifier;
set @group_a = newid();
set @group_b = newid();
set @group_c = newid();

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_int', @group_a, 1
union all select 'some_int', @group_b, 2
union all select 'some_int', @group_c, 3

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_char', @group_a, 'a'
union all select 'some_char', @group_b, 'b'
union all select 'some_char', @group_c, 'c'

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_string', @group_a, 'abc'
union all select 'some_string', @group_b, '!@#'
union all select 'some_string', @group_c, 'xyz'

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_float', @group_a, 1.23
union all select 'some_float', @group_b, 2.34
union all select 'some_float', @group_c, 3.45

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_datetime', @group_a, cast('01/01/2001 01:00:00' as datetime)
union all select 'some_datetime', @group_b, getdate()
union all select 'some_datetime', @group_c, cast('01/01/2009 01:00:00' as datetime)

-- do some big ugly query:
select
    cast(a.datavalue as datetime) as datatime_data,
    cast(b.datavalue as int) as int_data,
    cast(c.datavalue as char(1)) as char_data,
    cast(d.datavalue as nvarchar(max)) as string_data,
    cast(e.datavalue as float) as float_data,
    cast(b.datavalue as int) * cast(e.datavalue as float) as calc_data
from
    ( select datavalue, datagroup from LotsOData where dataname = 'some_datetime' ) a
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_int' ) b
        on b.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_char' ) c
        on c.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_string' ) d
        on d.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_float' ) e
        on e.datagroup = a.datagroup
where cast(a.datavalue as datetime) between '01/01/2006' and '01/01/2008';
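On the casting error in the PS: I can't prove it from here, but my guess is that the optimizer sometimes evaluates the CAST before the dataname filter, so a cast in the WHERE clause or SELECT list can touch rows of the wrong base type even though the joined data is correct. A defensive sketch (not a guaranteed fix) is to check the sql_variant base type before casting, so mismatched rows fall out as NULL instead of raising an error:

-- sketch: only cast values whose sql_variant base type is what we expect
select
    case
        when cast(sql_variant_property(datavalue, 'BaseType') as sysname)
             in (N'datetime', N'smalldatetime')
        then cast(datavalue as datetime)
    end as datetime_data
from LotsOData
where dataname = 'some_datetime';

The same CASE guard can wrap each cast in the big query, which may also remove the need for the temp-table workaround.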
I am looking for information, books, websites, etc. that will help me figure out the following things in MSSQL and Oracle:
1. Optimizing the database configuration itself.
2. Optimal table and index design.
3. Optimizing SQL statement lookups.
4. Anything else that might help to speed up our database applications.
Any advice at all would be useful. Thanks a lot.
Andy Reynolds
I'm creating a DB to track clients, programs, and client participation in the programs. They are service programs. A client can be in more than one program and a program can have more than one client. Can someone give me an example of how they would lay out the tables? My guess is:
tblClient, ClientID
tblClientProgramLog, ProLogID, ClientID
tblProgramDetails, ProDetailID, ProLogID
tblPrograms, ProgramID, ProDetailID
I appreciate any suggestions,
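For a straight many-to-many between clients and programs, the usual shape is a junction table keyed on both sides rather than a chain of one-to-many tables. A minimal sketch (table and column names are my guesses, not a prescription):

create table tblClient
(
    ClientID int identity primary key,
    ClientName nvarchar(100) not null
)

create table tblProgram
(
    ProgramID int identity primary key,
    ProgramName nvarchar(100) not null
)

-- one row per client per program; enrollment details hang off this row
create table tblClientProgram
(
    ClientID int not null references tblClient (ClientID),
    ProgramID int not null references tblProgram (ProgramID),
    EnrolledDate datetime not null default getdate(),
    constraint PK_tblClientProgram primary key (ClientID, ProgramID)
)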
I'm trying to design a database that handles Clients, Cases, Individual and Group Sessions. My problem is that a client can have individual sessions and belong to more than one group at the same time, so I have a many-to-many relationship to deal with. Also I'm trying to design it so that I can have a form that when a group is selected from a drop down it shows all clients assigned to that group and will let me enter new session data for them.
Just looking for some advice on how to handle the relationships. Maybe someone could show me how they see the relationships working.
My take is that the session should be linked to the case, not the client, but I could be thinking about this incorrectly?
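One possible shape, sketched with assumed table and column names (so treat it as an illustration, not a finished design): group membership becomes a junction table, and each session row hangs off the case, with a nullable GroupID to tell group sessions apart from individual ones.

-- which clients belong to which groups (ClientID/GroupID are FKs to your Clients/Groups tables)
create table ClientGroups
(
    ClientID int not null,
    GroupID int not null,
    constraint PK_ClientGroups primary key (ClientID, GroupID)
)

-- one row per session; CaseID is an FK to Cases, GroupID is null for individual sessions
create table Sessions
(
    SessionID int identity primary key,
    CaseID int not null,
    GroupID int null,
    SessionDate datetime not null,
    Notes nvarchar(max) null
)

The group drop-down form then just selects the clients from ClientGroups for the chosen GroupID and inserts new Sessions rows (plus per-client attendance rows, if you track those) for that group.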
I have a construction estimation system, and I want to develop a project management system. I will be using the same database because there are shared tables. My question is this: critical data tables are considered to be the tables with dollar values, and these tables should not be shared across the whole company. I do, however, need information from these tables, such as the product and the quantity of the product for a given project. When an estimate becomes a project it is assigned a project number. At this point I thought of copying the required data from the estimate side to the project side. This would result in duplicate data in a sense, but the tables will be referenced from two standalone front-end applications. Should I copy the data from one table to another, or create new "views" onto the estimate tables for the project management portion?
What would be the best solution to this problem? I find in some circumstances, a new table is required because additional data will be saved on the "Project Management" side, but not in all cases.
Ok, so I have some horribly convoluted SQL that I would love to optimize. I'm not happy leaving it in its current state, that's for sure!
I'm currently working on our test bed servers, so obviously my stats are out because of the "crap-ness" (yes, that's the technical term) of the hardware, but still, it should NEVER need to take this long!!
Basically, the issue arises in the nasty join to the career table (one employee can have multiple career lines). Just to make things complicated, employees can have any number of career records on any given date, these can even be input for future career events. The following SQL picks out the latest-current career date for each employee based on the career_date being <= GetDate() and the date of entry for this date being the greatest.
From the above we want to return 2007-01-01 | 2006-05-05 13:54:18.000
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT a.sAMAccountName AS 'sAMAccountName'
    , a.userPrincipalName AS 'userPrincipalName'
    , 'TRUE' AS 'Modify'
    , RTRIM(e.unique_identifier) AS 'employeeID'
    , RTRIM(e.employee_number) AS 'employeeNumber'
    , RTRIM(e.known_as) + CASE WHEN RTRIM(e.surname) IS NOT NULL THEN ' ' + RTRIM(e.surname) ELSE NULL END AS 'displayName'
    , RTRIM(e.known_as) AS 'givenName'
    , RTRIM(e.surname) AS 'sn'
    , RTRIM(c.job_title) AS 'title'
    , RTRIM(c.division) AS 'company'
    , RTRIM(c.department) AS 'department'
    , RTRIM(l.description) AS 'physicalDeliveryOfficeName'
    , RTRIM(REPLACE(am.dn,'\','')) AS 'manager'
    , t.full_mobile + CASE WHEN RTRIM(t.mobile_number) IS NOT NULL THEN ' (DD: ' + RTRIM(t.mobile_number) + ')' ELSE NULL END AS 'mobile'
    , t.mobile_number AS 'otherMobile'
    , ad.address_ad_country AS 'c'
    , ad.address_ad_address1
      + CASE WHEN ad.address_ad_address2 IS NOT NULL THEN ', ' + ad.address_ad_address2 ELSE NULL END
      + CASE WHEN ad.address_ad_address3 IS NOT NULL THEN ', ' + ad.address_ad_address3 ELSE NULL END
      + CASE WHEN ad.address_ad_address4 IS NOT NULL THEN ', ' + ad.address_ad_address4 ELSE NULL END
      + CASE WHEN ad.address_ad_address5 IS NOT NULL THEN ', ' + ad.address_ad_address5 ELSE NULL END AS 'streetAddress'
    , ad.address_ad_pobox AS 'postOfficeBox'
    , ad.address_ad_city AS 'l'
    , ad.address_ad_County AS 'st'
    , ad.address_ad_postcode AS 'postalCode'
    , RTRIM(ad.address_ad_telephone)
      + CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NOT NULL
             THEN ' (Ext: ' + RTRIM(a.othertelephone) + ')'
             ELSE CASE WHEN RTRIM(a.othertelephone) IS NOT NULL AND RTRIM(ad.address_ad_telephone) IS NULL
                       THEN 'Ext: ' + RTRIM(a.othertelephone)
                       ELSE NULL
                  END
        END AS 'telephoneNumber'
FROM employee e
LEFT JOIN career c
    ON c.parent_identifier = e.unique_identifier
    AND c.career_date = ( SELECT max(c2.career_date)
                          FROM pwa_master.career c2
                          WHERE c2.parent_identifier = c.parent_identifier
                          AND c2.career_date <= GetDate() )
    AND c.datetime_created = ( SELECT max(c3.datetime_created)
                               FROM pwa_master.career c3
                               WHERE c3.parent_identifier = c.parent_identifier
                               AND c3.career_date = c.career_date )
LEFT OUTER JOIN AD_Import am ON am.employeeNumber = c.manager_number
INNER JOIN AD_Import a ON a.employeeID = e.unique_identifier
LEFT JOIN AD_Telephone t ON t.unique_identifier = e.unique_identifier
LEFT JOIN AD_Address ad ON ad.address_pwa_location = e.location
LEFT JOIN xlocat l ON l.code = c.location
WHERE (a.employeeNumber IS NOT NULL OR a.employeeID IS NOT NULL)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 15203 ms, elapsed time = 8114 ms.
Any advice on what I can do to optimize?
Oh, just to point out that "employee" is a view on the 'people' table. EDIT: I know it's pointing out the obvious, but I'm pulling the manager's "DN" from AD_Import based on the manager_number and employeeNumber matching.
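If this is SQL Server 2005 or later, one thing I would at least test (a sketch under that assumption, not a drop-in replacement) is ranking the career rows once with ROW_NUMBER() instead of the two correlated MAX() subqueries, then joining to the top-ranked row per employee:

-- rank each employee's career rows: latest career_date on or before today first,
-- then latest datetime_created as the tie-breaker
;WITH latest_career AS
(
    SELECT c.*,
           ROW_NUMBER() OVER (
               PARTITION BY c.parent_identifier
               ORDER BY c.career_date DESC, c.datetime_created DESC
           ) AS rn
    FROM pwa_master.career c
    WHERE c.career_date <= GETDATE()
)
SELECT e.unique_identifier,
       c.career_date,
       c.datetime_created
FROM employee e
LEFT JOIN latest_career c
       ON c.parent_identifier = e.unique_identifier
      AND c.rn = 1;

The same CTE can stand in for the career join in the full query; whether it wins depends on the indexing on career (parent_identifier, career_date, datetime_created), so compare the STATISTICS IO output for both versions.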
I am creating a database where:
- I have a Blogs and Folders system.
- I use a common design so I can implement new systems in the future.
Users, Comments, Ratings, View, Tags and Categories are tables common to all systems, i.e., used by Posts and Files in Blogs and Folders.
- One Tag or Category can be associated with many Posts or Files.
- One Comment, View or Rating should be associated with only one Post or one File. I am missing this ... (1)
Relations between a File / Folder and Comments / Ratings / View / Tags / Categories are done using FilesRatings, FoldersViews, etc.
I am using UniqueIdentifier as my primary keys. I checked the ASP.NET Membership tables, a few articles, and a few features in my project, such as renaming files with the GUID of their records. I haven't decided yet between INT and UNIQUEIDENTIFIER.
I am looking for some feedback on the design of my database. One thing I need to improve is mentioned in (1)
Thank You, Miguel
My Database Script:
-- Users ...
create table dbo.Users
(
    UserID uniqueidentifier not null
        constraint PK_User primary key clustered,
    [Name] nvarchar(200) not null,
    Email nvarchar(200) null,
    UpdatedDate datetime not null
)
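On point (1): if a Comment (or View, or Rating) must belong to exactly one Post or one File, the "at most one per junction table" half can be enforced with a unique constraint on the comment side. A sketch, assuming hypothetical FilesComments / PostsComments junction tables in the same style as the script:

-- Files / Comments junction: a comment can be attached to at most one file
create table dbo.FilesComments
(
    FileID uniqueidentifier not null,      -- FK to Files
    CommentID uniqueidentifier not null,   -- FK to Comments
    constraint PK_FilesComments primary key clustered (FileID, CommentID),
    constraint UQ_FilesComments_CommentID unique (CommentID)
)

The cross-table half (the same comment must not also appear in PostsComments) can't be expressed with plain constraints across two tables; the usual options are a trigger, or dropping the comment junction tables and putting nullable PostID and FileID columns directly on Comments with a CHECK that exactly one of them is set.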
Hello everyone. I'm sorry for this urgent post, but I have a critical issue that needs a solution quickly. I am adjusting our sales order tables to handle a couple of different scenarios. Currently we have 2 tables for sales orders:
SALESORDERS
-----------
SORDERNBR int PK,
{ Addtl Header Columns... }

SALESORDERDETAILS
-----------------
SODETAILID int,
SORDERNBR int FK,
PN varchar,
SN varchar(25),
{ Addtl Detail Columns... }
Currently the sales order line item is serial number specific. I need to change the tables to be able to handle different requests like :
Line Item Request ( PN, QTY )
Line Item Request ( SN )
Line Item Request ( PN, GRADE, QTY )
etc.
I am thinking I need to create a new table to hold the specifics for a particular line item. Maybe like this:
SALESORDERSPECS
---------------
SOSPECID int,
SODETAILID int FK,
SPECTYPE varchar,   -- IE: SN, PN, GRADE  { one value per row }
SPECVALUE varchar   -- IE: GRADE A
I'm thinking I would need to rename the SALESORDERDETAILS table to SALESORDERITEMS. SALESORDERITEMS would just contain header info like SalePrice, Warranty, etc...
Then rename SALESORDERSPECS to SALESORDERDETAILS...
Does anyone understand what I'm trying to do? If you need more info please ask. You can also get hold of me through IM.
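If it helps, here is the spec-table idea written out as DDL; a sketch only, with data types and lengths guessed where the post leaves them open:

CREATE TABLE SALESORDERSPECS
(
    SOSPECID int IDENTITY PRIMARY KEY,
    SODETAILID int NOT NULL,          -- FK to SALESORDERITEMS (the renamed details table)
    SPECTYPE varchar(20) NOT NULL,    -- e.g. 'SN', 'PN', 'GRADE' (one value per row)
    SPECVALUE varchar(50) NOT NULL    -- e.g. 'GRADE A'
)

-- a (PN, QTY) line and an (SN) line then become rows like:
-- INSERT INTO SALESORDERSPECS (SODETAILID, SPECTYPE, SPECVALUE) VALUES (1, 'PN', 'ABC-123')
-- INSERT INTO SALESORDERSPECS (SODETAILID, SPECTYPE, SPECVALUE) VALUES (1, 'QTY', '4')
-- INSERT INTO SALESORDERSPECS (SODETAILID, SPECTYPE, SPECVALUE) VALUES (2, 'SN', 'SN0001')

The trade-off with this name/value layout is that quantities and grades end up stored as strings, and any per-spec-type validation has to live in code or triggers, so it's worth being sure the flexibility is really needed.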
Hi All,

I have read MANY posts on how to track changes to data over time. It appears there are two points of view:
1. Each record supports a Change Indicator flag to indicate the current record (would this be EVERY table?)
2. Each table is duplicated as an archive table, and triggers are used to update the archive.

Can someone give me some guidance, based on REAL world experience, on which works best for them? My scenario: I have insurance policies and must track history as policies are updated by customer service reps. Imagine many tables:

Policy
> LifePol
> LifePolRiders
> AccidentPol
> etc...
> DIPol
> DIPolRiders

To me the archive table scenario does not seem scalable at all... some guidance on design would be appreciated. Thanks!!!
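For what it's worth, a common middle ground between the two approaches is a per-table history table populated by an update/delete trigger. A minimal sketch for one table (column names beyond PolicyID are illustrative; whether this scales across your whole table list is exactly the question you're raising):

-- history copy of the Policy table, plus audit columns
CREATE TABLE PolicyHistory
(
    PolicyHistoryID int IDENTITY PRIMARY KEY,
    PolicyID int NOT NULL,
    PolicyStatus varchar(20) NOT NULL,   -- ...repeat whichever Policy columns you need to keep
    ChangedDate datetime NOT NULL DEFAULT GETDATE(),
    ChangedBy sysname NOT NULL DEFAULT SUSER_SNAME()
)
GO

CREATE TRIGGER trg_Policy_History ON Policy
AFTER UPDATE, DELETE
AS
BEGIN
    -- copy the pre-change rows into the history table
    INSERT INTO PolicyHistory (PolicyID, PolicyStatus)
    SELECT d.PolicyID, d.PolicyStatus
    FROM deleted d;
END
GO

The current-row-flag approach avoids the extra tables but pushes a "WHERE IsCurrent = 1" predicate into every query and index, so the trigger/archive route is usually easier to retrofit even though it doubles the table count.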
Any design suggestions for the best way to architect this report using SQL Reporting Services 2005 are appreciated!
My website features a catalog of roughly 50,000 items, each of which may appear in a list of search results or in a detailed view. There are counters on the pages that update totals for such appearances and track other item-specific information in several tables in a SQL database. The catalog of items changes frequently, so the list of item IDs is never exactly the same from month to month.
I've been asked to produce a monthly report of this data for each of the items in the catalog, with reports for the current and previous months (for many years) accessible at all times. Some -- but not all -- items are useful for one purpose or another and so can be considered as belonging to a group of items. Although I have not yet been asked to create a report that aggregates the values for all Group members into a single report for that Group, I can clearly see it would be valuable and will be requested soon.
To ensure the report captures the data for an entire month, it must be run at the very end of each month. That means I will need to run the report using a Schedule that kicks off the process at 12:01am every 1st of the month. The report must be processed and stored for later retrieval and rendering on demand.
Considering the number of items and the indefinite length of time the report data must be retained, my question is really what's the best way to set all this up?
Should I create a report for each item separately? That would mean the scheduled task would have to somehow discover the current list of item IDs (which is available via query from the database) and create and process (but not render) a report for each one (passing the item ID as a report parameter?), adding it to the report history. Although each report would be small and take only a short time to run, overall that seems like it would take a long time and create a huge number of reports to store each month.
Or should I create a single 'master' report that contains all the data for every item for the month, and then use the item ID as a filter on the data when it is rendered? While that means only one report is created each month and added to the history, it would be a much larger report and take much longer to run (with more potential for timeouts and errors to scuttle the whole report). It also means all the data for the entire report has to be loaded every time the report is rendered, even though only 1/50,000 of the data (the data for 1 of the 50,000 items) will actually be viewed with any given rendering. That seems overly cumbersome, slow, and wastefully bandwidth-intensive.
Any alternatives, suggestions, considerations, etc. -- all welcome!
I need to create a text file using information from SQL tables/views in the following format... Can anyone recommend a direction or procedure to look into, i.e., a SQL script, custom DTS, etc.? The items in parentheses identify specific portions of the text file.
Hi,

I have two tables, Table A and Table B, below with some dummy data...

Table A (contains specific unique settings that can be requested)
Id, SettingName
1, weight
2, length

Table B (contains the setting values; here 3 values relate to weight and 1 to length)
Id, Brand, SettingValue
1, A, 100
1, B, 200
1, null, 300
2, null, 5.3

(There is also a list of Brands available in another table.) No primary keys / referential integrity has been set up yet.

Basically, depending upon the Brand requested, a different setting value will be present. If a particular brand is not present (signified by a null in the Brand column in table B), then a default value will be used. Therefore if I request the weight and pass through a Brand of A, I will get 100. If I request the weight but do not pass through a brand (i.e. null), I will get 300.

My question is: what kind of integrity can I apply to avoid the user specifying duplicate Ids and Brands in table B? I cannot apply a composite key on these two fields as a null is present. Table B will probably contain about 50 rows and probably 10 of them will be brand specific. The reason it's done like this is that in the calling client code I want to call some function, e.g.:

getsetting(weight) .... result = 300

Or if it is brand specific:

getsetting(weight, A) ..... result = 100

Any advice on integrity or table restructuring would be greatly appreciated. It's SQL 2000 SP3.

Thanks
brad
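One thing worth knowing before restructuring: a UNIQUE constraint (unlike a PRIMARY KEY) may include a nullable column, and SQL Server treats two NULLs as duplicates for uniqueness, so it will reject a second default (NULL-brand) row for the same setting Id. A sketch, assuming Table B is physically named SettingValues:

-- rejects duplicate (Id, Brand) pairs, including a second NULL-brand default per Id
ALTER TABLE SettingValues
ADD CONSTRAINT UQ_SettingValues_Id_Brand UNIQUE (Id, Brand);

That plus foreign keys from Id to Table A and from Brand to the Brands table covers the duplicates you describe; a trigger is only needed if you later want rules the constraint can't express.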
I have recently been tasked with rewriting a database that holds large volumes of data, whilst ensuring that queries run in optimal time. Having never really delved into this sort of thing before, I hoped you guys might be able to offer some advice and guidance.
The design I have inherited is based around 2 main tables:
[captured_varbinds]
[id] [int] IDENTITY (1, 1) NOT NULL
[captured_trap_id] [int] NOT NULL
[varbind_oid] [varchar] (500)
[varbind_text] [varchar] (500)
The relationship between the two tables is on the "captured_traps (id)" to "captured_varbinds (captured_trap_id)". Currently the "captured_traps" table contains around 350 million rows, the "captured_varbinds" table contains around 900 million rows.
Now as you can probably gather this model runs like a....well it sort of hobbles more than runs hence the need to redesign.
My current thoughts on this are:
- Normalising all varchars - there are a lot of duplicate values in most of the varchar fields (see the sketch after this post).
- Full Text Indexing
However, beyond that I am not sure which route to go down. After googling for most of today I have come across a number of "solutions"; however, I do not want to go steaming down the track of one of these only to discover that it is fatally flawed somewhere.
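As one concrete example of the normalisation item above: pull the repeating varbind_oid strings out into a lookup table and carry a small integer key in the big table instead. A sketch with assumed names, shown for the OID column only (on a 900-million-row table the backfill would need to be done in batches):

-- lookup of distinct OID strings
CREATE TABLE varbind_oids
(
    oid_id int IDENTITY PRIMARY KEY,
    varbind_oid varchar(500) NOT NULL UNIQUE
)

-- populate it from the existing data
INSERT INTO varbind_oids (varbind_oid)
SELECT DISTINCT varbind_oid
FROM captured_varbinds
WHERE varbind_oid IS NOT NULL

-- captured_varbinds then carries an oid_id int column in place of the varchar(500);
-- the backfill joins back on the string, e.g.
-- UPDATE v SET v.oid_id = o.oid_id
-- FROM captured_varbinds v
-- JOIN varbind_oids o ON o.varbind_oid = v.varbind_oid

The same pattern applies to varbind_text if its values repeat; full-text indexing only helps if you genuinely need word searches inside the text, so it's worth confirming the query patterns first.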
I have a small tricky problem here... I need the help of all you experts.
Let me explain in detail. I have three tables
1. Emp Table: Columns -> EmpID, DeptID
2. Dept Table: Columns -> DeptName, DeptID
3. Team Table: Columns -> Date, EmpID1, EmpID2, DeptNo
There is a stored procedure which runs every day and, for EVERY DeptID that exists in the Dept table, selects two employees from the Emp table and puts them in the Team table. Now, assuming that there are several thousand departments in the Dept table, the amount of data entered into the Team table is tremendous every day.
If I continue to run the stored proc for 1 month, the team table will have lots of rows in it and I have to retain all the records.
The real problem is when I want to retrieve data for an employee (EmpID1 or EmpID2) from the Team table and view the related details like date, DeptNo, and EmpID1 or EmpID2 from the Emp table. How do we optimise the data retrieval and storage for the Team table? I cannot use partitions as I have SQL Server 2005 Standard Edition.
Please help me to optimize the query and data retrieval time from Team table.
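Without partitioning, the main lever is indexing the Team table so a lookup by either employee column is a seek rather than a scan. A sketch (the index names and the sample @EmpID value are mine; INCLUDE needs SQL Server 2005, which you have):

-- one nonclustered index per employee column, covering what the report reads
CREATE NONCLUSTERED INDEX IX_Team_EmpID1 ON Team (EmpID1) INCLUDE ([Date], DeptNo, EmpID2);
CREATE NONCLUSTERED INDEX IX_Team_EmpID2 ON Team (EmpID2) INCLUDE ([Date], DeptNo, EmpID1);

-- the per-employee lookup then becomes two seeks instead of a table scan
DECLARE @EmpID int;
SET @EmpID = 12345;

SELECT [Date], DeptNo, EmpID1, EmpID2
FROM Team
WHERE EmpID1 = @EmpID
UNION ALL
SELECT [Date], DeptNo, EmpID1, EmpID2
FROM Team
WHERE EmpID2 = @EmpID;

If the table keeps growing by thousands of rows per day, it's also worth checking that the clustered index (for example on the Date column) matches the insert order, so the daily load stays an append rather than a scatter of page splits.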
Dear Advance,

I used one stored procedure to retrieve 3 different result sets, and in the code-behind I separate them: from the dataset I split out three different DataTables and then show my data as needed. But the main problem is that after retrieving the data from the database I have to use a foreach loop to bind the column data to my different custom classes. Example:

foreach (DataRow oDrow in MyDataTable.Rows)
{
    oClass = new Class();
    oClass.Name1 = oDrow["Name1"].ToString();
    oClass.Name2 = oDrow["Name2"].ToString();
    ....
}

1. So my first question: is there any optimization possible here?
2. My result set is quite long... so should I keep just one hit to the database, or hit it more than once? Currently I am optimizing my web application. In the previous version I had to hit the database 3/4 times for different purposes, but now it hits only one time... though it then takes time in the code-behind to perform the different operations.

Any suggestions?
I have an SP that calls about 10 stored procedures sequentially. The 10 SPs are basically complex update statements, each one individual. Is there any way to optimize this? I know putting the 10 into 1 SP would make it compile faster, but that's about it. Are there any execution tricks for stored procedures firing off sequentially?... or anything I should know?
Hello All, What is the best way to optimize this code or rewrite it using ISNULL ?
CREATE PROCEDURE get_employees (@dept char(8), @class char(5))
AS
IF (@dept IS NULL AND @class IS NOT NULL)
    SELECT * FROM employee
    WHERE employee.dept IS NULL AND employee.class = @class
ELSE IF (@dept IS NULL AND @class IS NULL)
    SELECT * FROM employee
    WHERE employee.dept IS NULL AND employee.class IS NULL
ELSE IF (@dept IS NOT NULL AND @class IS NULL)
    SELECT * FROM employee
    WHERE employee.dept = @dept AND employee.class IS NULL
ELSE
    SELECT * FROM employee
    WHERE employee.dept = @dept AND employee.class = @class
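A single-statement version is possible; the subtlety is that a NULL parameter here means "match rows where the column is NULL", not "match anything". A sketch using ISNULL with a sentinel value (the sentinel must be something that can never occur in real dept/class data), with the sentinel-free OR form shown as a comment:

CREATE PROCEDURE get_employees_single (@dept char(8), @class char(5))
AS
SELECT *
FROM employee
WHERE ISNULL(employee.dept, '~') = ISNULL(@dept, '~')
  AND ISNULL(employee.class, '~') = ISNULL(@class, '~')

-- equivalent without a sentinel:
-- WHERE (employee.dept = @dept OR (@dept IS NULL AND employee.dept IS NULL))
--   AND (employee.class = @class OR (@class IS NULL AND employee.class IS NULL))

Bear in mind that wrapping the columns in ISNULL stops the optimizer from using an index seek on dept/class, so on a large employee table the original IF/ELSE version (one tight plan per case) may actually be the faster choice even though it is more code.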
I am wondering if the size of the data file makes a difference when running inserts and/or doing fetches. Our DB was 11GB in size; I ran a DBCC SHRINKDATABASE and it shrank it to 5.5GB. Now that it is smaller, will it run a select query faster, as opposed to when we run large inserts and it has to automatically grow to accommodate the insert? I am trying to figure out if I should leave my .mdf file large or keep it small, or does it even make a difference? I am only doing large inserts while loading data to get ready for production; after that the inserts will be hourly but much smaller. However, our queries against the DB once it is in production will be much more intensive.
We're building a company wide network monitoring system in Java, and need some advice on the database design and tuning. The application will need to concurrently INSERT, DELETE, and SELECT from our EVENT table as efficiently as possible. We plan to implement an INSERT thread, a DELETE thread, and a SELECT thread within our Java program. The EVENT table will have several hundred million records in it at any given time. We will prune, using DELETE, about every five seconds to keep the active record set down to a user controlled size. And one of the three queries will be executed about every twenty seconds. Finally, we'll INSERT as fast as we can in the INSERT thread. Being new to MSSQL, we need advice on:
1) Server Tuning - Memory allocations, etc.
2) Table Tuning - Field types
3) Index Tuning - Are the indexes right
4) Query Tuning - Hints, etc.
5) Process Tuning - Better ways to INSERT and DELETE, etc.
Thanks, in advance, for any suggestions you can make :-)

The table is

// CREATE TABLE EVENT (
//     ID INT PRIMARY KEY NOT NULL,
//     IPSOURCE INT NOT NULL,
//     IPDEST INT NOT NULL,
//     UNIXTIME BIGINT NOT NULL,
//     TYPE TINYINT NOT NULL,
//     DEVICEID SMALLINT NOT NULL,
//     PROTOCOL TINYINT NOT NULL
// )
//
// CREATE INDEX INDEX_SRC_DEST_TYPE
// ON EVENT (
//     IPSOURCE, IPDEST, TYPE
// )

The SELECTs are

private static String QueryString1 =
    "SELECT ID,IPSOURCE,IPDEST,TYPE " +
    "FROM EVENT " +
    "WHERE ID >= ? " +
    " AND ID <= ?";

private static String QueryString2 =
    "SELECT COUNT(*),IPSOURCE " +
    "FROM EVENT " +
    "GROUP BY IPSOURCE " +
    "ORDER BY 1 DESC";

private static String QueryString3 =
    "SELECT COUNT(*),IPDEST " +
    "FROM EVENT " +
    "WHERE IPSOURCE = ? " +
    " AND TYPE = ? " +
    "GROUP BY IPDEST " +
    "ORDER BY 1 DESC";

The DELETE is

private static String DeleteIDString =
    "DELETE FROM EVENT " +
    "WHERE ID < ?";
There are two main tables in my app. In order to optimize searches via scope conditions, I set many indexes on these two tables.
However, at the same time the two tables are also used by my ETL app; every day there are thousands of rows that need to be updated or inserted, but heavy indexing is not well suited to large modifications. Any idea how to handle this?
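One common pattern, sketched here on the assumption of SQL Server 2005 or later and an ETL window that can absorb the rebuild time, is to disable the nonclustered indexes before the bulk load and rebuild them afterwards, so the load doesn't maintain every index row by row:

-- before the ETL load: disable each nonclustered index (hypothetical names)
ALTER INDEX IX_BigTable_SearchScope ON dbo.BigTable DISABLE;

-- ...run the inserts / updates...

-- after the load: rebuild everything, which also refreshes the index statistics
ALTER INDEX ALL ON dbo.BigTable REBUILD;

-- caution: ALTER INDEX ALL ... DISABLE would also disable the clustered index and
-- make the table unreadable, so disable only the nonclustered indexes, one by one

If thousands of rows per day is actually a small fraction of the table, plain inserts with the indexes in place may be cheap enough; measuring the load both ways is the only reliable answer.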
Hello, what is the meaning of <MissingIndexGroup Impact="99.9521"> in the query plan? Should I create a grouped index? And what is the meaning of Impact="99.9521"?
If the Impact = 100 do you get 100% better performance, and if the Impact = 20 do you get 20% better performance? Is that the meaning?
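Roughly, yes: Impact is the optimizer's estimate of the percentage by which that query's cost would drop if the suggested index existed, so 99.95 means it thinks the query would become almost entirely index-seek work. It is not asking for a special "grouped index"; the MissingIndexGroup element just groups the suggested index definitions. The same suggestions can be read outside the showplan XML from the missing-index DMVs (SQL Server 2005 and later); a sketch:

-- index suggestions the optimizer has recorded since the last service restart
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.avg_user_impact,   -- the same idea as the Impact attribute in the plan
       s.user_seeks
FROM sys.dm_db_missing_index_details d
JOIN sys.dm_db_missing_index_groups g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats s ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;

The estimate says nothing about the cost of maintaining the index on writes, so treat it as a prompt to evaluate the index, not a guarantee of that much speed-up.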
Hi, can anyone help me optimize the SELECT statement in the 3rd step? I am actually writing a monthly report, so for each employee (500 employees) in a row, his attendance totals for all days in a month are displayed. The problem is that in the 3rd step there are actually 31 SELECT statements which are assigned to 31 variables. After I assign these variables, I insert them into a table (4th step) and display it. The troublesome part is the 3rd step: as there are 500 employees, the variables are assigned and inserted into the table 500 x 31 times. This is taking more than 4 minutes, which I know is not required :). Can anyone help me optimize the SELECT statements I have in the 3rd step, or give a better suggestion?

DECLARE @EmpID, @DateFrom, @Total1 ....              -- declaring different variables
SELECT @DateFrom = ...  -- set to start of any month, e.g. 2007-06-01            ...... 1st

Loop (condition -- get all employees, working fine)
BEGIN
    SELECT @EmpID = ... -- get EmployeeID                                        ...... 2nd

    SELECT @Total1 = SUM(Abences)                                                ...... 3rd
    FROM Attendance
    WHERE employee_id_fk = @EmpID                    -- (from 2nd step)
      AND Date_Absent = DATEADD("day", 0, Convert(varchar, @DateFrom))           -- (from 1st step)

    SELECT @Total2 = ...                             -- same as above
    SELECT @Total3 = ...                             -- same as above

    INSERT IN @TABLE (@EmpID, @Total1, ...... @Total31)                          ...... 4th

    Iterate (condition) to next employee                                         ...... 5th
END

It's only the loop which consumes the 4 minutes. If I can somehow optimize this part, I will be most satisfied. Thanks to anyone helping me.
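One set-based alternative to the 500 x 31 loop, sketched against the Attendance columns shown above (@DateFrom is the first day of the month, as before; the @TABLE column names are assumed), is a single GROUP BY over the month with one SUM(CASE ...) per day, so the whole result is produced in one statement:

INSERT INTO @TABLE (EmpID, Total1, Total2, Total3 /* ...Total31 */)
SELECT a.employee_id_fk,
       SUM(CASE WHEN DAY(a.Date_Absent) = 1 THEN a.Abences ELSE 0 END) AS Total1,
       SUM(CASE WHEN DAY(a.Date_Absent) = 2 THEN a.Abences ELSE 0 END) AS Total2,
       SUM(CASE WHEN DAY(a.Date_Absent) = 3 THEN a.Abences ELSE 0 END) AS Total3
       -- ...repeat the pattern up to DAY(a.Date_Absent) = 31
FROM Attendance a
WHERE a.Date_Absent >= @DateFrom
  AND a.Date_Absent < DATEADD(month, 1, @DateFrom)
GROUP BY a.employee_id_fk;

Employees with no absences at all won't appear in this result, so if the report must list all 500, left-join this aggregate from the employee table (or insert zero rows for the missing employees afterwards).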
Could anyone tell me the best way to declare a connection from ASP.NET to a SQL database so that SQL Server can support the maximum number of users? It seems that the way I'm doing it is not correct, because when I make some transactions from my website to the database, the database sends an error message saying that there are no more free connections.
This may sound a little silly, but does anyone have any words of wisdom on how to optimize a server/database for minimum rollback? We have some multimillion-row tables we were trying to run updates against, and after several days they increased the size of the transaction log to the point where it filled up the drive the database files/logs were on. We've now been running a rollback for about five days. I'd like to make sure this doesn't happen again.
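Beyond log sizing, the usual defence is to break the big update into small batches so each transaction, and therefore any rollback, stays short, and so the log space can be reused between batches (in SIMPLE recovery, or with log backups running in FULL). A sketch, assuming SQL Server 2005 or later and a hypothetical dbo.BigTable:

-- repeat small updates until nothing is left to change
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.BigTable
    SET SomeColumn = 'new value'
    WHERE SomeColumn <> 'new value' OR SomeColumn IS NULL;

    IF @@ROWCOUNT = 0 BREAK;
END

Each iteration commits on its own, so if the job is killed halfway you lose at most one batch of work instead of several days of it.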
I am using database maintenance plans on a database that is about 4GB. The database optimization job runs for about an hour. Does this job only do an update of statistics? If I run the stored procedure sp_updatestats on the database it only takes a couple of minutes. Are these two processes doing the same thing? Do I need them if auto create and auto update statistics are turned on?
Trying to optimize a query, and having problems interpreting the data. We have a query that queries 5 tables with 4 INNER JOINS. When I use INNER HASH JOIN, this is the result:
(Using SQL Programmer)
SQL Server Execution Times: CPU time = 40 ms, elapsed time = 80 ms.
Now, when timing the code execution on my ASP page, it's "faster" not using the HASH. Using HASH, there are a few Hash Match/Inner Joins reported in the Execution Plan. Not using HASH, there are Bookmark Lookups/Nested Loops.
My question is: which is better to see for the CPU/server, Bookmark Lookups/Nested Loops or Hash Match/Inner Joins?
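As a general rule, bookmark lookups with nested loops are what you want to see when the driving row counts are small, and hash joins tend to win when large portions of the tables are scanned; neither is better in the abstract, so the elapsed time and logical reads from STATISTICS IO/TIME are the real arbiter. If you do keep a hint, it can also be applied once at the query level instead of on every join; a sketch of both syntaxes on hypothetical tables:

-- per-join hint
SELECT o.OrderID, c.CustomerName
FROM Orders o
INNER HASH JOIN Customers c ON c.CustomerID = o.CustomerID;

-- query-level hint: the optimizer still chooses the join order, but uses hash joins throughout
SELECT o.OrderID, c.CustomerName
FROM Orders o
INNER JOIN Customers c ON c.CustomerID = o.CustomerID
OPTION (HASH JOIN);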
Is there any way to rewrite this query in an optimized way?
SELECT dbo.Table1.EmpId E
FROM dbo.Table1
WHERE EmpId IN
(
    SELECT dbo.Table1.EmpId
    FROM (
        SELECT DISTINCT PersonID, MAX(dtmStatusDate) AS dtmStatusDate
        FROM dbo.Table1
        GROUP BY PersonID
    ) derived_table
    INNER JOIN dbo.Table1
        ON derived_table.PersonID = dbo.Table1.PersonID
       AND derived_table.dtmStatusDate = dbo.Table1.dtmStatusDate
)
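One possible rewrite (a sketch, assuming the IN wrapper isn't doing anything beyond filtering to the latest row per person) is to join to the per-person maximum directly instead of wrapping it in an IN subquery; the DISTINCT is also redundant alongside GROUP BY:

SELECT t.EmpId AS E
FROM dbo.Table1 t
INNER JOIN (
    SELECT PersonID, MAX(dtmStatusDate) AS dtmStatusDate
    FROM dbo.Table1
    GROUP BY PersonID
) latest
    ON latest.PersonID = t.PersonID
   AND latest.dtmStatusDate = t.dtmStatusDate;

An index on Table1 (PersonID, dtmStatusDate) that also covers EmpId is what lets either form resolve with seeks rather than scans.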
How can I optimize the following query?

(SELECT e.SID
 FROM Students s
 JOIN Table1 e ON e.SID = s.SID
 JOIN Table2 ed ON ed.Enrollment = e.Enrollment
 JOIN Table3 t ON t.TNum = e.TNum
 JOIN Table4 bt ON bt.TNum = t.TNum
 JOIN Table5 b ON b.Batch = bt.Batch
 JOIN IPlans i ON i.IPlan = ed.IPlan
 JOIN PGroups g ON g.PGroup = i.PGroup
 WHERE t.TStatus = 'ACP'
   AND ed.EStatus = 'APR'
   AND e.SID = (select distinct SID from Table1 where Enrollment = @DpEnrollment))
   AND (ed.EffectiveDate = (SELECT EffectiveDate
                            FROM Table2 ed
                            JOIN Table1 e ON e.enrollment = ed.enrollment
                            WHERE IPlan = @DpIPlan
                              AND TCoord = @DpTCoord
                              AND AGCoord = @DpAGCoord
                              AND DCoord = @DpDCoord )
        AND DSeq = @DpDSeq)
   AND e.SID = (select distinct SID from Table1 where Enrollment = @DpEnrollment)) )
   AND ed.TerminationDate = (SELECT TerminationDate
                             FROM Table2 ed
                             JOIN Table1 e ON e.enrollment = ed.enrollment
                             WHERE IPlan = @DpIPlan
                               AND TCoord = @DpTCoord
                               AND AGCoord = @DpAGCoord
                               AND DCoord = @DpDCoord )
        AND DSeq = @DpDSeq)
   AND e.SID = (select distinct SID from Table1 where Enrollment = @DpEnrollment)) ) ))
DECLARE @PTEffDate_tmp AS SMALLDATETIME
SELECT @PTEffDate_tmp = DateAdd(day, -1, PDate)
FROM PDates pd
WHERE iplan = @DIPlan
  and pd.TCoord = @DTCoord
  and DType = 'EF'

DECLARE @PTCoord_tmp as char(3)
SELECT @PTCoord_tmp = tc.TCoord
FROM PDates pd
JOIN TCoords tc ON (pd.TCoord = tc.TCoord)
WHERE pd.Iplan = @DIPlan
  and tc.TGroup = @TGroup_tmp
  and PDate = @PTEffDate_tmp
  and DateType = 'TR1'

DECLARE @EStatus_tmp as char(3)
SELECT @EStatus_tmp = EDStatus
From EDetails ed
JOIN ENR e ON (ed.enr = e.enr)
JOIN Trans t ON (e.transID = t.TransID)
WHERE iplan = @DIPlan
  and ed.TCoord = @PTCoord_tmp
  and t.TransS = 'ACP'
  and DCoord = @DCoord
  and CEnr is null