Wednesday, January 30, 2013 11:26 AM
Hi, I am working on query optimization and have found that most of the query execution time is consumed by the process mentioned above. Can someone please suggest some ideas on how to rectify this?
The indexes of the tables I am mainly using are listed below.
declare @ResultTable table
-- insert the values into the result temp table.
Insert into @ResultTable
SELECT <All Fields required>
FROM HotMetal.TapTicket TT
JOIN HotMetal.TapTicketWeight TTW ON TT.TapTicketID = TTW.TapTicketID
LEFT JOIN Potline.Crucible PO on TT.CrucibleID = PO.CrucibleID AND PO.IsActive = 1 --This is a very small table
- Edited by pepcoder Wednesday, January 30, 2013 11:56 AM
Wednesday, January 30, 2013 11:36 AM
Both TapTicket and TapTicketWeight have a primary key declared as (TapTicketID, Sequence). However, in your query, you only join on TapTicketID. Are you sure this is what you want?
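If the intent really is a one-to-one match per (TapTicketID, Sequence) pair, a sketch of the join on the full composite key would look like this (I'm assuming here that Sequence is the second key column in both tables, as the declared primary keys suggest; the select list is your placeholder):

```sql
-- Sketch: join on the full composite primary key so each row of
-- TapTicket matches at most one row of TapTicketWeight.
SELECT <All Fields required>
FROM HotMetal.TapTicket TT
JOIN HotMetal.TapTicketWeight TTW
    ON  TT.TapTicketID = TTW.TapTicketID
    AND TT.Sequence    = TTW.Sequence   -- second column of the PK
```

With a unique join key on both sides, the optimizer can consider a Merge Join over the clustered index order instead of building a hash table.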
For your current query, the optimizer has to choose a Hash Match, because the join key has duplicates in both tables.
Wednesday, January 30, 2013 11:57 AM
Apologies for the mistake. I had uploaded the indexes of TapTicketWeight in both images. I have corrected it in my first post; please have a look.
Thanks again for your quick reply!!
Please find latest execution plan as well.
- Edited by pepcoder Wednesday, January 30, 2013 12:31 PM
Wednesday, January 30, 2013 6:02 PM
Your query plan shows no Outer Joins. So I guess it doesn't really match your query. In this case, Inner or Outer joining Crucible can make a big difference.
If the query you ran had all Inner Joins, then the optimizer's approach isn't that strange, although another approach could be good as well. It all depends on the statistics.
The alternative approach would be to scan TapTicket, merge join with TapticketWeight and then hash join with Crucible. You could test this approach by writing the query with joins in exactly the order as listed above, and including the hint OPTION (FORCE ORDER) at the end of the query. You can time both approaches and see which one runs fastest.
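A sketch of the hinted version, with the tables listed in the order described above (select list abbreviated as in your post):

```sql
-- Sketch: FORCE ORDER makes the optimizer join the tables in the
-- exact order they appear in the FROM clause, so you can compare
-- this plan's timing against the original Hash Match plan.
SELECT <All Fields required>
FROM HotMetal.TapTicket TT
JOIN HotMetal.TapTicketWeight TTW
    ON TT.TapTicketID = TTW.TapTicketID
LEFT JOIN Potline.Crucible PO
    ON TT.CrucibleID = PO.CrucibleID AND PO.IsActive = 1
OPTION (FORCE ORDER);
```

Run both versions with SET STATISTICS TIME ON and compare the elapsed times; keep the hint only if it is consistently faster, since it prevents the optimizer from adapting to future data changes.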
If you are not looking for the best performance, but were just wondering about the Hash Match, then I would say: don't worry, because there is no real problem with the optimizer's approach.
Wednesday, January 30, 2013 8:52 PM Moderator
Using a table variable for more than about 100 rows is extremely slow, because table variables carry no statistics and the optimizer assumes they hold very few rows. I would highly suggest using a temp table instead.
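A sketch of the switch, assuming the same column list you declared for @ResultTable (the column definitions are a placeholder, as in your original post):

```sql
-- Sketch: a temp table in tempdb gets row-count statistics,
-- which a table variable does not.
CREATE TABLE #ResultTable (<columns as declared for @ResultTable>);

INSERT INTO #ResultTable
SELECT <All Fields required>
FROM HotMetal.TapTicket TT
JOIN HotMetal.TapTicketWeight TTW
    ON TT.TapTicketID = TTW.TapTicketID
LEFT JOIN Potline.Crucible PO
    ON TT.CrucibleID = PO.CrucibleID AND PO.IsActive = 1;

-- ... use #ResultTable ...

DROP TABLE #ResultTable;  -- also dropped automatically when the session ends
```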
Friday, February 08, 2013 7:39 AM Moderator
>Using a table variable for more than about 100 rows is extremely slow
In one test the threshold was at 12,000 rows. The best answer is: "it depends".
Kalman Toth Database & OLAP Architect
Paperback / Kindle: SQL Server 2012 Pro