High Volume Database Query Optimization

Dec 10, 2007

Hello everybody,

I am doing research on high-volume database handling (perhaps a database of terabyte scale). Is there any optimization or specialization for queries that deal with such a database?




High Volume I/O!!!

Feb 28, 2008

I have a summary table with a 9-field composite primary key. Every 10 minutes, my system generates 2 files of 500,000 to 750,000 rows to be summarized into this table. I first bulk insert those into a temp table, then trigger an inner-join UPDATE query to do the updates, followed by a left-outer-join INSERT to do the inserts. As the day goes on and millions of rows accumulate in my summary table, this process becomes too slow. Any ideas about causes/solutions?
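A minimal sketch of the update-then-insert pattern described above, with hypothetical names (dbo.Summary, #Staging, Key1/Key2, Total) standing in for the real 9-column key:

-- Hedged sketch; Key1/Key2 stand in for all 9 key columns.
UPDATE s
SET s.Total = s.Total + t.Total
FROM dbo.Summary s
INNER JOIN #Staging t
    ON t.Key1 = s.Key1
   AND t.Key2 = s.Key2

INSERT INTO dbo.Summary (Key1, Key2, Total)
SELECT t.Key1, t.Key2, t.Total
FROM #Staging t
LEFT JOIN dbo.Summary s
    ON s.Key1 = t.Key1
   AND s.Key2 = t.Key2
WHERE s.Key1 IS NULL

With this shape, a clustered index on #Staging matching the summary table's key order, plus up-to-date statistics on the summary table, are usually the first things to check as the table grows.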

RLiss


Handling High Volume Data

Apr 18, 2007

Hi Guys,

I am facing problems with concurrent access in SQL Server 2000. The scenario is that the DB contains one huge de-normalized table containing 40 million records.

The application frequently queries this table to populate other derived tables; the SQL queries take a long time to return results.

So while one query is executing, the other users' queries go into a wait state. Please suggest how I can improve this.

Or do I need to upgrade to 2005?

Regards,
Prashant


High Volume DB Performance Problems

Mar 19, 2008

Hello, I am a master's student preparing a seminar on high-volume DB performance problems. For example: if I have a table with 1,000,000 records, and its length grows exponentially over time, what problems may I face with insertion, deletion, and search in such a table? And what are the problems in processing such a DB in general?


MSDE For Large Volume And High No. Of Users?

Sep 21, 2005

Hi,

We need to use a free database for a project because of a tight budget. Is MSDE OK for handling a large volume of data and 70-80 users? My understanding is that MSDE is optimized for 5 concurrent users. Is MySQL better than MSDE?

Thanks,
Ben


Solution Design For High Volume Of Data

Apr 18, 2006

Hi,

I have been asked to design a solution for a client of mine who basically requires the daily analysis and reconciliation of the differences between 2 extremely large text files.

The files are not in an identical format but are both in some form of delimited format (one is CSV, the other is a little more complex). For the sake of this question, let's assume that I can effectively import each file into an MS SQL table.

Each file will have in excess of 100,000 rows each day (new data for each day).

Whilst I know that MS SQL easily has the capacity to store the data, is there a recommended way to tackle the potential problems? (I imagine that performance is important... they will be running the report every day.)

Or is building the solution as simple as importing the data into 2 tables, and then querying the differences and outputting them as a report using Crystal?
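Assuming both files can indeed be imported into two staging tables with compatible columns (FileA and FileB are hypothetical names), the daily differences reduce to two EXCEPT queries (SQL Server 2005 and later):

-- rows in A that are missing from (or differ in) B
SELECT * FROM FileA
EXCEPT
SELECT * FROM FileB

-- rows in B that are missing from (or differ in) A
SELECT * FROM FileB
EXCEPT
SELECT * FROM FileA

At 100,000+ rows per day, indexing the staging tables on the natural matching key keeps the comparison fast.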

Any suggestions appreciated.

Thanks

Rael


Performance Tuning On High Volume Server

Jan 4, 2007

I am looking to improve the performance of my sql server databases.

I currently have a dual location system, the database server setup is basically a quad xeon with 4gb at my office and a double xeon with 4gb at a remote webhosting location. There are separate application/web/intranet servers at each site. The two databases servers are replicated with the local server publishing to the remote server.

The relational database holds circa 26 million records, growing by a volume of 10,000 per day, there are approximately 50,000 queries performed per day.

My theory is that the replication of the two databases is causing a slowdown; despite fast network connections (averaging 200ms between servers) the replication seems to place a large load on the local server. Would it be sensible to replicate to a second local server and then replicate to the remote server, placing any burden on the second server?



I am planning to upgrade the local server to a high capacity 4+ cpu 64bit server, my problem is that although I have noticed a slow down in performance over time, I am unsure how to go about measuring and quantifying this in order to diagnose the bottlenecks and ensure that investing in a new server would be worthwhile. Where would one be best advised to start this project?
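One hedged starting point for measuring, assuming SQL Server 2005 or later is available, is the plan cache statistics, which rank the most expensive statements without setting up a trace:

-- top statements by total CPU since the plan entered the cache
SELECT TOP 10
    qs.execution_count,
    qs.total_worker_time AS total_cpu,
    qs.total_logical_reads,
    st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC

On SQL Server 2000, a server-side Profiler trace filtered on Duration serves the same purpose.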


SQL Server 2008 :: High Volume Reads And Writes?

Jul 6, 2015

We are in the process of moving existing clustered SQL Server databases to AWS. There is one major database that has intensive read and write transactions. I'm wondering what the best design is to optimize performance for both reads and writes, since historically we have had constant issues in the current environment when massive updates are happening. Reads shall have higher priority over writes.


Store Currency And Volume In Database

Aug 22, 2007

We are creating an enterprise application for fuel, and I am fighting with my DBA about the proper way to store volume and currency in the database. We have 2 main arguments. The first argument is whether we should store costs in the database in $ and convert in the presentation layer, or store the amount and currency in the database. We sell product from the US in dollars, but depending on the customer we may invoice in euros. Our second argument is the same, except with volume and UOM. We often purchase product by BBL but sell/transfer by gallon or ton.
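A common practice is to store each amount with an explicit currency code and each quantity with an explicit UOM, converting only at the presentation or invoicing layer. A hedged sketch, with hypothetical names:

CREATE TABLE dbo.FuelTransaction (
    TransactionID int IDENTITY(1,1) PRIMARY KEY,
    Amount        decimal(19,4) NOT NULL,   -- monetary amount as transacted
    CurrencyCode  char(3)       NOT NULL,   -- ISO 4217: 'USD', 'EUR', ...
    Quantity      decimal(18,6) NOT NULL,   -- volume as transacted
    UOMCode       varchar(10)   NOT NULL    -- 'BBL', 'GAL', 'TON', ...
)

Storing the original currency and UOM preserves the audit trail; converted figures can always be derived from a rate/conversion table, but the reverse is lossy.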

Please tell us the best practice for our dilemma.


SQL 2005 And SQL 2008 Database Volume Capacity

Dec 1, 2007

Hi Everybody:

4-5 years ago, I started my career as a translator, translating the MetaTexis CAT (Computer Aided Translation) software.

It's amazing to see all the improvements that have been made until now, but recently I found some problems regarding databases:

I heard that Access databases are limited to a volume of 2 GB and that SQL 2005 databases are limited to 4 GB, but I think this information is wrong, or at least I was only able to import 10% of that amount.

In other words, 2 GB doesn't correspond to a database with a volume of only 125,000 segments/sentences (for Access), nor 4 GB to a volume of 250,000 (for SQL 2005).

Concretely, my "mega.mxtm" database is "only" 359 MB and suddenly it refuses to import more sentences. Is that normal? (Microsoft SQL 2005)

Question: Is the new SQL 2008 also limited? Is there any way to "free" or increase the volume capacity?
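For what it's worth, the 2 GB figure is the Access file limit and the 4 GB figure is the per-database limit of SQL Server 2005 Express; SQL Server 2008 Express kept the 4 GB cap (it was raised to 10 GB only with 2008 R2 Express). To see how close a database actually is to the cap:

-- run in the context of the database in question
EXEC sp_spaceused

If the database is only 359 MB, the import failure is probably something other than the size cap.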

Point 2: Since I upgraded SQL 2005 to 2008, I am not able to open the "old" "mega.mxtm" anymore... :(

Regards!


De Sena Viegas


Stored Procedure Query Optimization - Query TimeOut Error

Nov 23, 2004

How can I optimize the following stored procedure running on MS SQL Server 2000 SP4?

CREATE PROCEDURE proc1
@Franchise ObjectId
, @dtmStart DATETIME
, @dtmEnd DATETIME
AS
BEGIN


SET NOCOUNT ON

SELECT p.Product
, c.Currency
, c.Minor
, a.ACDef
, e.Event
, t.Dec
, count(1) "Count"
, sum(Amount) "Total"
FROM tb_Event t
JOIN tb_Prod p
ON ( t.ProdId = p.ProdId )
JOIN tb_ACDef a
ON ( t.ACDefId = a.ACDefId )
JOIN tb_Curr c
ON ( t.CurrId = c.CurrId )
JOIN tb_Event e
ON ( t.EventId = e.EventId )
JOIN tb_Setl s
ON ( s.BUId = t.BUId
and s.SetlD = t.SetlD )
WHERE Fran = @Franchise
AND t.CDate >= @dtmStart
AND t.CDate <= @dtmEnd
AND s.Status = 1
GROUP BY p.Product
, c.Currency
, c.Minor
, a.ACDef
, e.Event
, t.Dec

RETURN 1
END



GO
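A hedged first step for a procedure like this, assuming Fran and CDate are columns of tb_Event, is a composite index that lets the date-range predicate seek within a franchise:

CREATE INDEX IX_tb_Event_Fran_CDate ON tb_Event (Fran, CDate)

Checking the execution plan for scans on tb_Setl's (BUId, SetlD) join keys would be the next step.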


SQL Select Query Need For Following Criteria. Please Help, Retrieve Records With Independent Price And Its Total Volume Per Min

Jul 17, 2006

Time       Price   Volume
090048       510      6749
090122       510      2101
090135       510      1000
090204       505      2840
090213       505      220
090222       505      1260
090232       505      850
090242       505      200
090253       510      1200
090313       510      570
090343       510      250
090353       510      160
 
Criteria
Retrieve records with independent price and its total volume per minute
 
SELECT SUBSTRING(st,1,4) AS Ttime,d_price AS Price,SUM(l_cum) AS Volume FROM cmd4
WHERE sd='20060717' AND serial='0455'
GROUP BY SUBSTRING(st,1,4),d_price,l_cum
 
Result of the above query:
Time  Price Volume                 
0900    510     6749
0901    510     1000
0901    510     2101
0902    505     200
0902    505     220
0902    505     850
0902    505    1260
0902    505    2840
0902    510    1200
0903    510    160
0903    510    250
0903    510    570
 
 
The result above is still not a total per minute, e.g.:
 
0901    510            1000
             +
0901    510            2101
            =         
0901    510            3101 <- I NEED THIS
 
Can anyone advise or give me tips on this, please?
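The totals collapse per minute once l_cum is dropped from the GROUP BY list; grouping by the volume column itself keeps every distinct volume in its own group. A hedged rewrite of the query above:

SELECT SUBSTRING(st, 1, 4) AS Ttime, d_price AS Price, SUM(l_cum) AS Volume
FROM cmd4
WHERE sd = '20060717' AND serial = '0455'
GROUP BY SUBSTRING(st, 1, 4), d_price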
 


T-SQL (SS2K8) :: Measuring Volume Of Data Created Temporarily To Replace Usage Of Physical Tables In Query

Sep 12, 2014

How can I measure the volume of data created temporarily to replace usage of physical tables in an SQL query?
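Assuming the question is about tempdb usage (where temp tables, table variables, sorts and spools are materialized), the space-usage DMVs report allocations per session; a hedged sketch:

-- 8 KB pages allocated in tempdb by the current session
SELECT
    session_id,
    user_objects_alloc_page_count * 8 AS user_objects_kb,        -- temp tables, table variables
    internal_objects_alloc_page_count * 8 AS internal_objects_kb -- sorts, hash work, spools
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID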


Query Optimization - Please Help

Aug 15, 2007

Hi,
Can anyone help me optimize the SELECT statement in the 3rd step? I am actually writing a monthly report, so for each employee (500 employees) in a row, his attendance totals for all days in a month are displayed. The problem is that in the 3rd step there are actually 31 SELECT statements, which are assigned to 31 variables. After I assign these variables, I insert them in a table (4th step) and display it. The troublesome part is the 3rd step: as there are 500 employees, the variables are assigned and inserted in the table 500 x 31 times. This is taking more than 4 minutes, which I know is not required :). Can anyone help me optimize the SELECT statements I have in the 3rd step or give a better suggestion?
DECLARE @EmpID ..., @DateFrom ..., @Total1 ...   -- declaring the variables
SELECT @DateFrom = ...   -- set to start of any month, e.g. 2007-06-01      ...... 1st
Loop (condition -- get all employees, working fine)
BEGIN
    SELECT @EmpID = ...   -- get EmployeeID                                 ...... 2nd
    SELECT @Total1 = SUM(Absences)                                          ...... 3rd
    FROM Attendance
    WHERE employee_id_fk = @EmpID                                 -- from 2nd step
      AND Date_Absent = DATEADD(day, 0, CONVERT(varchar, @DateFrom))  -- from 1st step
    SELECT @Total2 = ...   -- same as above
    SELECT @Total3 = ...   -- same as above
    INSERT INTO @Table (@EmpID, @Total1, ...... @Total31)                   ...... 4th
    Iterate (condition) to next employee                                    ...... 5th
END
It's only the loop that consumes the 4 minutes. If I can somehow optimize this part, I will be most satisfied. Thanks to anyone helping me.
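A hedged set-based alternative, assuming the Attendance columns named in the snippet (employee_id_fk, Date_Absent, and an absences column), computes all employees and all 31 days in one pass with conditional SUMs:

SELECT employee_id_fk,
       SUM(CASE WHEN Date_Absent = @DateFrom                   THEN Absences ELSE 0 END) AS Total1,
       SUM(CASE WHEN Date_Absent = DATEADD(day, 1, @DateFrom)  THEN Absences ELSE 0 END) AS Total2,
       -- ...one conditional SUM per day...
       SUM(CASE WHEN Date_Absent = DATEADD(day, 30, @DateFrom) THEN Absences ELSE 0 END) AS Total31
FROM Attendance
WHERE Date_Absent >= @DateFrom
  AND Date_Absent < DATEADD(month, 1, @DateFrom)
GROUP BY employee_id_fk

This replaces the 500 x 31 scalar queries with a single scan of Attendance.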


Query Optimization

Feb 19, 2001

Trying to optimize a query, and having problems interpreting the data. We have a query that queries 5 tables with 4 INNER JOINS. When I use INNER HASH JOIN, this is the result:

(Using SQL Programmer)

SQL Server Execution Times:
CPU time = 40 ms, elapsed time = 80 ms.

Table 'Table1'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.
Table 'Table2'. Scan count 1, logical reads 40, physical reads 0, read-ahead reads 0.
Table 'Table3Category'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0.
Table 'Table4'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0.
Table 'Table5'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0.

When I use INNER JOIN, this is the result:

SQL Server Execution Times:
CPU time = 10 ms, elapsed time = 34 ms.

Table 'Table1'. Scan count 4, logical reads 10, physical reads 0, read-ahead reads 0.
Table 'Table2'. Scan count 311, logical reads 670, physical reads 0, read-ahead reads 0.
Table 'Table3'. Scan count 69, logical reads 102, physical reads 0, read-ahead reads 0.
Table 'Table4'. Scan count 69, logical reads 98, physical reads 0, read-ahead reads 0.
Table 'Table5'. Scan count 1, logical reads 17, physical reads 0, read-ahead reads 0.

Now, when timing the code execution on my ASP page, it's "faster" not using the HASH. Using HASH, there are a few Hash Match/Inner Joins reported in the Execution Plan. Not using HASH, there are Bookmark Lookups/Nested Loops.

My question is: which is better for the CPU/server to "see", Bookmark Lookups/Nested Loops or Hash Match/Inner Joins?

Thanks.


Query Optimization

Mar 14, 2003

Is there any way to rewrite this query in an optimized way?

SELECT dbo.Table1.EmpId E FROM dbo.Table1
WHERE EmpId IN (
    SELECT dbo.Table1.EmpId
    FROM (SELECT PersonID, MAX(dtmStatusDate) AS dtmStatusDate
          FROM dbo.Table1
          GROUP BY PersonID) derived_table
    INNER JOIN dbo.Table1
        ON derived_table.PersonID = dbo.Table1.PersonID
       AND derived_table.dtmStatusDate = dbo.Table1.dtmStatusDate)
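A hedged simplification: the IN wrapper re-reads Table1 for rows the inner join has already identified, so joining directly should return the same EmpIds with one less pass:

SELECT t.EmpId E
FROM dbo.Table1 t
INNER JOIN (SELECT PersonID, MAX(dtmStatusDate) AS dtmStatusDate
            FROM dbo.Table1
            GROUP BY PersonID) derived_table
    ON derived_table.PersonID = t.PersonID
   AND derived_table.dtmStatusDate = t.dtmStatusDate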

Thanks....j


Query Optimization

Mar 7, 2006

How can I optimize the following query?
SELECT e.SID
FROM Students s
JOIN Table1 e ON e.SID = s.SID
JOIN Table2 ed ON ed.Enrollment = e.Enrollment
JOIN Table3 t ON t.TNum = e.TNum
JOIN Table4 bt ON bt.TNum = t.TNum
JOIN Table5 b ON b.Batch = bt.Batch
JOIN IPlans i ON i.IPlan = ed.IPlan
JOIN PGroups g ON g.PGroup = i.PGroup
WHERE t.TStatus = 'ACP'
AND ed.EStatus = 'APR'
AND e.SID = (SELECT DISTINCT SID FROM Table1 WHERE Enrollment = @DpEnrollment)
AND ed.EffectiveDate =
    (SELECT EffectiveDate
     FROM Table2 ed JOIN Table1 e ON e.Enrollment = ed.Enrollment
     WHERE IPlan = @DpIPlan
     AND TCoord = @DpTCoord
     AND AGCoord = @DpAGCoord
     AND DCoord = @DpDCoord
     AND DSeq = @DpDSeq
     AND e.SID = (SELECT DISTINCT SID FROM Table1 WHERE Enrollment = @DpEnrollment))
AND ed.TerminationDate =
    (SELECT TerminationDate
     FROM Table2 ed JOIN Table1 e ON e.Enrollment = ed.Enrollment
     WHERE IPlan = @DpIPlan
     AND TCoord = @DpTCoord
     AND AGCoord = @DpAGCoord
     AND DCoord = @DpDCoord
     AND DSeq = @DpDSeq
     AND e.SID = (SELECT DISTINCT SID FROM Table1 WHERE Enrollment = @DpEnrollment))


Query Optimization

Mar 20, 2006

DECLARE @PTEffDate_tmp AS SMALLDATETIME
SELECT @PTEffDate_tmp = DateAdd(day, -1, PDate)
FROM PDates pd WHERE iplan = @DIPlan and pd.TCoord = @DTCoord and DType = 'EF'

DECLARE @PTCoord_tmp as char(3)
SELECT @PTCoord_tmp = tc.TCoord
FROM PDates pd JOIN TCoords tc ON (pd.TCoord = tc.TCoord)
WHERE pd.Iplan = @DIPlan and tc.TGroup = @TGroup_tmp
and PDate = @PTEffDate_tmp and DateType = 'TR1'

DECLARE @EStatus_tmp as char(3)
SELECT @EStatus_tmp = EDStatus From EDetails ed
JOIN ENR e ON (ed.enr = e.enr)
JOIN Trans t ON (e.transID = t.TransID)
WHERE iplan = @DIPlan
and ed.TCoord = @PTCoord_tmp
and t.TransS = 'ACP'
and DCoord = @DCoord
and CEnr is null


Query Optimization

Aug 17, 2006

How can I optimize my query? Since my DB has more than 1 million rows, it takes a while to do all those joins.
select *
FROM EEMaster eem
JOIN NHistory nh
    ON eem.SNumber = nh.SNumber
    OR eem.OldNumber = nh.SNumber
    OR eem.CID = (REPLICATE('0', 12 - LEN(nh.SNumber)) + nh.SNumber)
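OR conditions in a join predicate usually force scans. One common rewrite splits them into three index-friendly joins combined with UNION (UNION rather than UNION ALL, so rows matching more than one condition are not duplicated). A hedged sketch:

select eem.*, nh.*
from EEMaster eem join NHistory nh on eem.SNumber = nh.SNumber
union
select eem.*, nh.*
from EEMaster eem join NHistory nh on eem.OldNumber = nh.SNumber
union
select eem.*, nh.*
from EEMaster eem join NHistory nh on eem.CID = REPLICATE('0', 12 - LEN(nh.SNumber)) + nh.SNumber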


Query Optimization

Apr 23, 2008

I work on tables containing 10 million plus records.
What are the general steps needed to ensure that my queries run faster?
I know a few:
- The join fields should be indexed
- Selecting only needed fields
- Using CTEs or derived tables as much as I can
- Using table aliases, e.g.

select a.x, b.y
from TableA a
inner join TableB b on a.id = b.id

I will be happy if somebody could share or add more to my list.

Regards to all

Regards to all


Query Optimization

May 1, 2008

Dear all,
The query below takes 7 minutes to execute, so I want to optimize it. Any suggestions?


SELECT DISTINCT VC.O_Id C_Id, VC.Name C_Name,VB.Org_Id B_Id,
VB.code S_Code,VB.Name S_Name, mt12.COLUMN003 M_D_Code,
mt12.COLUMN004 M_D_Name,CQ.COLUMN004 R_Code,
CQ.COLUMN005 R_Date, CQ.COLUMN006 Ser,CQ.COLUMN008 R_Nature,
CQ.COLUMN011 E_Date,mt26.COLUMN003 W_Code, mt26.COLUMN004 W_Name,
mt17.COLUMN005 V_Code,mt17.COLUMN006 V_Name, mt19.column002 I_Code,
mt19.column003 I_Name, mt19.COLUMN0001 R_I_No,mt92.COLUMN001 B_Id,
mt92.COLUMN005 B_No, CASE mt92.COLUMN006 WHEN '0' THEN 'Ser'
WHEN '1' THEN 'Un-Ser' WHEN '2' THEN 'Ret' WHEN '3' THEN 'Retd'
WHEN '4' THEN 'Rep' WHEN '5' THEN 'Repd' WHEN '6' THEN 'Con'
WHEN '7' THEN 'Cond' ELSE mt92.COLUMN006 END S_C_Type,
mt20.COLUMN003 T_G_Code,mt20.COLUMN004 T_G_Name, V.U_Code,V.U_Name,
mt19.column005 I_Quantity,mt20.COLUMN003 T_Code, mt20.COLUMN004 T_Name,
mt59.COLUMN005 T_Price,VR.code C_L_Code,
VR.Name C_L_Name
FROM tab90 CQ
INNER JOIN tab91 mt19 ON mt19.COLUMN002 = CQ.COLUMN001
LEFT JOIN tab92 mt92 ON mt92.COLUMN002 = CQ.COLUMN001
LEFT JOIN tab93 mt93 ON mt93.COLUMN004 = CQ.COLUMN001
INNER JOIN tab12 mt12 ON mt12.COLUMN001 = CQ.COLUMN003
LEFT JOIN tab26 mt26 ON mt26.COLUMN001 = CQ.COLUMN009
LEFT JOIN tab20 mt20 ON mt20.COLUMN001 = mt93.COLUMN005
LEFT JOIN tab59 mt59 ON mt59.COLUMN002=mt20.COLUMN001
LEFT JOIN tab17 mt17 ON mt17.COLUMN001 = CQ.COLUMN010
INNER JOIN VM V ON V.UOM_ID = mt19.COLUMN004
INNER JOIN tab19 mt19b ON mt19b.COLUMN001 = mt19.COLUMN003 -- alias assumed; the original reused mt19, which is not a valid duplicate alias
INNER JOIN vOrg VR ON CQ.COLUMN007 = VR.Org_Id
INNER JOIN vOr VB ON CQ.COLUMN002 = VB.Org_Id
INNER JOIN vOr VC ON VB.Top_Parent = VC.Org_Id
WHERE CQ.COLUMN005 Between '02/01/2007' and '08/25/2008' And VC.O_Id in ('fb243e92-ee74-4278-a2fe-8395214ed54b')




Thanks&Regards,

Msrs


SQL Query Optimization

Jun 18, 2008

Hi All,

Table with initial data (primary key COL1 + COL2):

COL1   COL2   NEW   LATEST
124    1      1     1
125    0      1     1

by default, NEW and LATEST columns will have values 1, 1.

Now, one row is inserted with values (124, 2). The table data should be:

COL1   COL2   NEW   LATEST
124    1      1     0
125    0      1     1
124    2      0     1

The LATEST column value changes for row 1, since there is a repetition of the value 124, meaning that row is no longer the latest.

The NEW column value changes for the inserted row, since it is no longer new; we already have an occurrence of 124 in the first row.

I am not sure if I can solve this with any option other than a cursor; it would be like taking the first row, comparing it with all the other rows, and then moving on.

Please suggest a better approach for doing this.
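A hedged set-based alternative, assuming the inserted values are known as @c1/@c2, the freshly inserted row counts as the latest, and T is the (hypothetical) table name: run two UPDATEs right after the INSERT instead of cursoring over the table.

-- older rows with the same COL1 are no longer the latest
UPDATE T SET LATEST = 0
WHERE COL1 = @c1 AND COL2 <> @c2

-- the inserted row is not new if COL1 was already present
UPDATE T SET NEW = 0
WHERE COL1 = @c1 AND COL2 = @c2
  AND EXISTS (SELECT 1 FROM T t2 WHERE t2.COL1 = @c1 AND t2.COL2 <> @c2)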


Query Optimization

Mar 12, 2007

Okay guys, this will probably be messy. Just throw out some thoughts and I'll deal with it. How do I make this query smaller and more efficient?

Query deleted and link posted: http://theninjalist.com/


Query Optimization

Oct 16, 2007

Dear all,


Query Optimization

Nov 2, 2007

Dear all,


Query Optimization

Apr 9, 2008

Hello!

I have a query:



SELECT *,
.....

(SELECT add_house
FROM hs_address
WHERE add_id = do_address_registration_id) as add_house,
(SELECT add_flat
FROM hs_address
WHERE add_id = do_address_registration_id) as add_flat,

.....
FROM hs_donor
WHERE do_id = 400





Fields add_flat and add_house belong to one table. How might one optimize this query?

P.S. do_address_registration_id can be NULL.


TIA
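Since both correlated subqueries look up hs_address by the same key, a single LEFT JOIN (which also tolerates a NULL do_address_registration_id) can return both columns in one lookup; a hedged rewrite:

SELECT d.*,
       a.add_house,
       a.add_flat
FROM hs_donor d
LEFT JOIN hs_address a ON a.add_id = d.do_address_registration_id
WHERE d.do_id = 400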


Query Optimization

Mar 11, 2008



I am writing a query which will display details of the employee who is handling the maximum number of projects.
Here I am joining 2 tables. One is LUP_EmpProject, which contains employee id, project id, and project date; in this table I have used a composite primary key of employee id, project id, and project date. The other table is EmployeeDetails, which contains employee names and employee ids.

I want to display the details of the employee who is handling the maximum number of projects.
Below is the code, which is working fine, but the query is taking time to execute. Does anybody know how to optimize the code so that I can get the result quickly?




Code Snippet
SELECT EmployeeDetails.FirstName+' '+EmployeeDetails.LastName AS EmpName,
COUNT(LUP_EmpProject.Empid) AS Number_Of_Projects
FROM LUP_EmpProject
INNER JOIN EmployeeDetails
ON LUP_EmpProject.Empid=EmployeeDetails.Empid
GROUP BY EmployeeDetails.FirstName+' '+EmployeeDetails.LastName,
LUP_EmpProject.Empid
HAVING COUNT(LUP_EmpProject.Empid)>0
AND COUNT(LUP_EmpProject.Empid)=(SELECT
MAX(Number_Of_Projects)
FROM (SELECT COUNT(LUP_EmpProject.Empid) Number_Of_Projects
FROM LUP_EmpProject
GROUP BY LUP_EmpProject.Empid)AS sub)




Please help!
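A hedged alternative that avoids aggregating twice is TOP (1) WITH TIES ordered by the project count; WITH TIES keeps every employee who shares the maximum:

SELECT TOP (1) WITH TIES
       EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName AS EmpName,
       COUNT(LUP_EmpProject.Empid) AS Number_Of_Projects
FROM LUP_EmpProject
INNER JOIN EmployeeDetails ON LUP_EmpProject.Empid = EmployeeDetails.Empid
GROUP BY EmployeeDetails.FirstName + ' ' + EmployeeDetails.LastName,
         LUP_EmpProject.Empid
ORDER BY COUNT(LUP_EmpProject.Empid) DESC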


Optimization Of Query

Aug 14, 2007

My Query is like this..


set @Grouptitle = @GroupPFR
set @GroupOrder = 5
set @Unittype = 2
set @MetricName = 'Product to Net Revenue %'
set @MetricOrder = 6

insert into @FinalData (Grouptitle, MetricName, UnitTypeID, WeekDate, WeekValue, GroupOrder, metricOrder)
select @GroupTitle, @MetricName, @UnitType, f1.weekdate,
       max(f1.WeekValue) / case when max(f2.WeekValue) = 0 then NULL else max(f2.WeekValue) end,
       @GroupOrder, @MetricOrder --from @temptable
from @FinalData f1
inner join @FinalData f2 on f1.weekdate = f2.weekdate
where (f1.Grouptitle = @GroupPFR and f1.MetricName = '$ Products')
  and (f2.Grouptitle = @GroupRevenue and f2.MetricName = 'Net Revenue')
group by f1.weekdate


There are many calculations like this in my procedure, and it takes about 3 minutes to run the whole procedure. Because I am doing a GROUP BY, the execution plan shows that 60% of the query time is taken by a SORT operation. Can anyone give me another option for doing this?

Thanks


Need Help On Query Optimization

Dec 5, 2007

Hi all,
I have the following query to be optimized. It just takes too long to complete the execution.

----------------------------------------------------------------------------------
SELECT COUNT(*)
FROM Tbl_A a
INNER JOIN Tbl_B b
ON a.AID = b.AID
INNER JOIN Tbl_C c
ON a.AID = c.AID
INNER JOIN Tbl_D d
ON d.DID = a.DID
INNER JOIN Tbl_E e
ON e.DID = d.DID
INNER JOIN Tbl_F f
ON e.EID = f.EID
WHERE a.Col_1 = 1
AND (a.Col_2 LIKE N'%abc%')
AND a.Col_3 <>
CASE
WHEN d.Col_1 ='ABC' THEN 'BR'
ELSE ''
END
AND c.Col_1 =
CASE
WHEN d.Col_1 ='ABC' THEN 'ABC_COMPANY'
ELSE 'PPRO'
END
AND f.Col_1 = 'val1'
------------------------------------------------------------------------------------------------------------------

Here are the estimated record counts for the tables:
------------------------------------------------------------------------------------------------------------------
Tbl_A has over 150,000 records
Tbl_B has over 150,000 records
Tbl_C has over 450,000 records
Tbl_D has over 33 records
Tbl_E has over 4000 records
Tbl_F has over 5000 records
------------------------------------------------------------------------------------------------------------------
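Two things stand out: the leading-wildcard LIKE N'%abc%' can never use an index seek, and the CASE expressions in the WHERE clause hide the predicates from the optimizer. Assuming d.Col_1 is not nullable, the CASE conditions unfold into plain ORs, a hedged sketch:

AND (   (d.Col_1 =  'ABC' AND a.Col_3 <> 'BR' AND c.Col_1 = 'ABC_COMPANY')
     OR (d.Col_1 <> 'ABC' AND a.Col_3 <> ''   AND c.Col_1 = 'PPRO'))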


Thanks in advance,
Soe Moe


Table Data Retrieval And Optimization Optimization Help

Apr 10, 2008

Hello Everybody,

I have a small tricky problem here... I need the help of all you experts.

Let me explain in detail. I have three tables

1. Emp Table: Columns-> EMPID and DeptID
2. Dept Table: Columns-> DeptName and DeptID
3. Team table : Columns -> Date, EmpID1, EmpID2, DeptNo.

There is a stored procedure which runs every day and, for EVERY deptID that exists in the dept table, selects two employees from the emp table and puts them in the team table. Now, assuming that there are several thousand departments in the dept table, the amount of data entered in the Team table every day is tremendous.

If I continue to run the stored proc for 1 month, the Team table will have lots of rows in it, and I have to retain all the records.

The real problem is when I want to retrieve data for an employee (empid1 or empid2) from the Team table and view the related details like date, deptno, and empid1 or empid2 from the emp table. How do we optimize the data retrieval and storage for the Team table? I cannot use partitions, as I have SQL Server 2005 Standard Edition.

Please help me to optimize the query and data retrieval time from the Team table.
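One hedged approach for the retrieval side: index both employee columns and search through a UNION ALL view, so an employee is found by seek whichever position they occupy:

CREATE INDEX IX_Team_EmpID1 ON Team (EmpID1)
CREATE INDEX IX_Team_EmpID2 ON Team (EmpID2)
GO
CREATE VIEW dbo.vw_TeamMember AS
SELECT Date, DeptNo, EmpID1 AS EmpID FROM Team
UNION ALL
SELECT Date, DeptNo, EmpID2 AS EmpID FROM Team
GO
-- usage: SELECT Date, DeptNo FROM dbo.vw_TeamMember WHERE EmpID = @EmpID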


Thanks,
Ganesh


Sql Query Optimization Help Needed

Feb 5, 2007

I need help in optimizing this query. The major time is taken in calling a remote database. Thanks in advance.

ALTER PROCEDURE dbo.myAccountGetCallLogsTest
    @directorynumber as varchar(10),
    @CallType as tinyint
AS
declare @dt as int

SELECT TOP 1 @dt = datediff(day, C.EstablishDate, getdate())
FROM ALBHM01CGSERVER.Core.dbo.Customer C
INNER JOIN ALBHM01CGSERVER.Core.dbo.UsgSvc U ON C.CustID = U.CustID
WHERE (U.ServiceNumber = @directoryNumber)
ORDER BY C.EstablishDate DESC

IF @dt > 90
    select DN as Number, Remote_DN as [Remote Number], City, StartTime as [Start Time], EndTime as [End Time]
    from vw_Call_Logs
    where DN = '1' + @directoryNumber
      and call_type = @CallType
      and datediff(day, starttime, getdate()) < 90
    order by starttime desc
ELSE
    select DN as Number, Remote_DN as [Remote Number], City, StartTime as [Start Time], EndTime as [End Time]
    from vw_Call_Logs
    where DN = '1' + @directoryNumber
      and call_type = @CallType
      and datediff(day, starttime, getdate()) < @dt
    order by starttime desc
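Besides the remote call, datediff(day, starttime, getdate()) < 90 is computed per row and cannot use an index on starttime. Moving the arithmetic to the constant side is approximately equivalent (up to day-boundary rounding) and sargable:

where DN = '1' + @directoryNumber
  and call_type = @CallType
  and starttime > dateadd(day, -90, getdate())

Caching the remote @dt lookup locally would also help if the Customer data changes rarely.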


Query Optimization In Sql 2005

Mar 28, 2008

Hi, how do I optimize a SQL query in SQL Server 2005? Any ideas?


Query Optimization - Confused!

May 2, 2002

I have a query similar to this:

select count(a.callid) from tbl1 as a
inner join tbl2 as b on a.calldefid=b.calldefid
where a.programid=175


select count(a.callid) from tbl1 as a
inner join tbl2 as b on a.calldefid=b.calldefid
where b.programid=175

callid - pk on tbl1
calldefid - nonclustered index on both tbl1 and tbl2
programid - nonclustered index on both tbl1 and tbl2
tbl2 is the smaller table

From my understanding, the second query will run faster because you reduce the records in the smaller table first, then join to the larger table (tbl1).

But can you explain to me why limiting the rows on tbl1 first, then joining to tbl2, would take longer?
