Performance problems when performing an incremental load using the Lookup transformation?

    Question

  • Hi,

    I am using the Lookup transformation to perform an incremental load. The cache used is 248,030,554 bytes (about 237 MB), and I know the max buffer size is 100 MB. For an incremental upload of 20,000 rows it is taking more than 90 minutes. Is there any way to improve the performance? Moreover, the requirement is Type 2, so I have to use two OLE DB Commands, one for updating and one for inserting.

    Is there an alternative way to achieve the same using other logic?

    Any help or suggestions are appreciated.

    Wednesday, July 17, 2013 5:12 PM

All replies

  • Do not use OLE DB commands. Period. The OLE DB Command fires its statement once for every single row that passes through it, which is what is killing your performance.
    Drop the rows into a staging table and do set-based updates from there, as sketched below.

    My guess is your package will be running in under two minutes once you get rid of the OLE DB commands. Let me know if I'm right ;)
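
    A minimal sketch of the set-based approach, assuming a hypothetical staging table dbo.Staging_Customer and dimension dbo.DimCustomer (adjust names and columns to your schema):

    -- Set-based update: one statement handles all changed rows at once,
    -- instead of the row-by-row round trips an OLE DB Command makes.
    UPDATE d
    SET    d.Name  = s.Name,
           d.Email = s.Email
    FROM   dbo.DimCustomer      AS d
    JOIN   dbo.Staging_Customer AS s
           ON d.CustomerKey = s.CustomerKey;

    -- Set-based insert for source rows not yet in the dimension.
    INSERT INTO dbo.DimCustomer (CustomerKey, Name, Email)
    SELECT s.CustomerKey, s.Name, s.Email
    FROM   dbo.Staging_Customer AS s
    WHERE  NOT EXISTS (SELECT 1
                       FROM   dbo.DimCustomer AS d
                       WHERE  d.CustomerKey = s.CustomerKey);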


    MCSE SQL Server 2012 - Please mark posts as answered where appropriate.

    Wednesday, July 17, 2013 8:47 PM
  • YES, there is a fast way to perform an incremental load. Use the T-SQL MERGE statement to perform UPSERTs...
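
    A minimal upsert sketch, with hypothetical table and column names:

    MERGE dbo.DimCustomer AS tgt
    USING dbo.Staging_Customer AS src
        ON tgt.CustomerKey = src.CustomerKey
    WHEN MATCHED THEN                     -- row exists: update attributes
        UPDATE SET tgt.Name = src.Name, tgt.Email = src.Email
    WHEN NOT MATCHED BY TARGET THEN       -- row is new: insert it
        INSERT (CustomerKey, Name, Email)
        VALUES (src.CustomerKey, src.Name, src.Email);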

    Thanks, hsbal

    Wednesday, July 17, 2013 9:07 PM
  • YES, there is a fast way to perform an incremental load. Use the T-SQL MERGE statement to perform UPSERTs...

    Thanks, hsbal

    How do you handle a Type 2 slowly changing dimension with a MERGE statement?

    MCSE SQL Server 2012 - Please mark posts as answered where appropriate.

    Wednesday, July 17, 2013 9:09 PM
  • YES, there is a fast way to perform an incremental load. Use the T-SQL MERGE statement to perform UPSERTs...

    Thanks, hsbal

    How do you handle a Type 2 slowly changing dimension with a MERGE statement?

    MCSE SQL Server 2012 - Please mark posts as answered where appropriate.


    http://www.mssqltips.com/sqlservertip/2883/using-the-sql-server-merge-statement-to-process-type-2-slowly-changing-dimensions/
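
    The core pattern from that tip, sketched here with hypothetical table and column names: the MERGE expires the current row when an attribute changes, and its OUTPUT clause feeds an outer INSERT that creates the new current version.

    INSERT INTO dbo.DimCustomer (CustomerKey, Name, StartDate, EndDate)
    SELECT CustomerKey, Name, GETDATE(), NULL
    FROM (
        MERGE dbo.DimCustomer AS tgt
        USING dbo.Staging_Customer AS src
            ON  tgt.CustomerKey = src.CustomerKey
            AND tgt.EndDate IS NULL                    -- match only the current row
        WHEN NOT MATCHED BY TARGET THEN                -- brand-new dimension member
            INSERT (CustomerKey, Name, StartDate, EndDate)
            VALUES (src.CustomerKey, src.Name, GETDATE(), NULL)
        WHEN MATCHED AND tgt.Name <> src.Name THEN     -- attribute changed: expire old row
            UPDATE SET tgt.EndDate = GETDATE()
        OUTPUT $action AS MergeAction, src.CustomerKey, src.Name
    ) AS changes
    WHERE changes.MergeAction = 'UPDATE';              -- insert the new current row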

    Thanks, hsbal

    • Marked as answer by moh1kumr Thursday, July 18, 2013 4:28 PM
    Wednesday, July 17, 2013 9:22 PM
  • Thanks hsbal, the post works for me.

    But my question is: how do I handle the Lookup transform when its cache exceeds the 100 MB buffer size?

    Any help is appreciated.

    Thursday, July 18, 2013 4:36 PM
  • Hi,

    I am using the Lookup transform and the data size is huge; when it caches the rows, the size exceeds the max buffer size (100 MB). Is there any way we can improve the performance?

    Thursday, July 18, 2013 4:44 PM
  • You might want to use partial cache mode.

    You should also look at changing the design of the package, e.g. using a Merge Join transform (note that both of its inputs must be sorted) or a Cache Connection Manager.

    You would need to test which option improves the performance for you.


    Abhinav http://bishtabhinav.wordpress.com/

    Thursday, July 18, 2013 5:32 PM
    Moderator
  • You don't need to pay attention to the maximum buffer size; it has nothing to do with the lookup.

    If the lookup is configured in full cache mode, it will load the entire table specified in the lookup into memory.

    So you need to take two points into account:

    1. Make sure the machine has enough memory (obviously)

    2. Make your lookup table as small as possible. Don't select the table from the dropdown; instead, write a SQL query that takes only the columns you actually need (see the sketch below).
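
    For example, instead of picking the whole table, the lookup query could look something like this (table and column names are hypothetical):

    -- Return only the business key and surrogate key the lookup needs,
    -- and only current rows, to keep the cache as small as possible.
    SELECT CustomerKey, CustomerSK
    FROM   dbo.DimCustomer
    WHERE  EndDate IS NULL;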


    MCSE SQL Server 2012 - Please mark posts as answered where appropriate.

    Thursday, July 18, 2013 7:31 PM
  • As I mentioned in the other thread, the maximum buffer size of the data flow has nothing to do with the lookup component.

    100 MB isn't that much. How big is the lookup table? Do you need all the columns?


    MCSE SQL Server 2012 - Please mark posts as answered where appropriate.

    Thursday, July 18, 2013 8:03 PM