How do I use Integration Services to achieve Push-mode processing for Analysis Services?

  • General discussion

  •  [This is just for sharing information about commonly asked questions collected from the forums. If you have a better way or any feedback, please reply directly in this thread so that community users can benefit from your answers.]

    I have some data sources which are plain text files and I want to directly load the data from these sources to my SSAS cubes. How can I do that?

    Usually, when you process a cube, the SSAS service sends SQL queries to the underlying data source in order to load data into the cube. This is typical pull-mode processing. In some cases, however, the data source is not a relational database (for example, a plain text file), so pull-mode processing is not an option because the source cannot accept SQL queries of any kind. In such cases you can use push-mode processing instead.
    Microsoft SSIS provides this capability: it can fetch data from the underlying data source and push it directly into SSAS cubes.
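    The difference between the two modes can be sketched as a toy model (all class and function names here are illustrative only, not real SSAS/SSIS APIs):

```python
class SqlSource:
    """A source that understands SQL (e.g. a relational database)."""
    def __init__(self, rows):
        self._rows = rows

    def query(self, sql):
        # Pretend we executed the SQL against the source.
        return list(self._rows)

def pull_process(cube, source):
    # Pull mode: the cube engine issues the query itself,
    # so the source must be able to execute SQL.
    cube.extend(source.query("SELECT * FROM fact_table"))

def push_process(cube, rows):
    # Push mode: a client (standing in for SSIS here) has already
    # read the source -- a text file, say -- and hands finished rows
    # to the cube; the source never sees a query.
    cube.extend(rows)
```

    In push mode the only component that must understand the source format is the client, which is why SSIS can load plain text files into a cube.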

    For example, suppose the Internet Order Quantity for calendar year 2001 is 1,013, and for some reason you need to add a new record from a txt file to the cube:
    ProductKey    OrderDateKey    DueDateKey    ShipDateKey    CustomerKey    PromotionKey    CurrencyKey    SalesTerritoryKey    SalesOrderNumber    SalesOrderLineNumber    RevisionNumber    OrderQuantity    UnitPrice    ExtendedAmount    UnitPriceDiscountPct    DiscountAmount    ProductStandardCost    TotalProductCost    SalesAmount    TaxAmt    Freight    CarrierTrackingNumber    CustomerPONumber
    310    20010701    20010713    20010708    21768    1    19    6    SO75124    1    1    1    3578.27    3578.27    0    0    2171.2942    2171.2942    3578.27    286.2616    89.4568    NULL    NULL
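    Step 1 below asks you to paste this record into a txt file. As a convenience, a small Python sketch that writes it out tab-delimited, with the column-name row first (the file name and location are illustrative only):

```python
import csv
import os
import tempfile

# The 23 column names and the sample record from the walkthrough above.
header = ["ProductKey", "OrderDateKey", "DueDateKey", "ShipDateKey",
          "CustomerKey", "PromotionKey", "CurrencyKey", "SalesTerritoryKey",
          "SalesOrderNumber", "SalesOrderLineNumber", "RevisionNumber",
          "OrderQuantity", "UnitPrice", "ExtendedAmount",
          "UnitPriceDiscountPct", "DiscountAmount", "ProductStandardCost",
          "TotalProductCost", "SalesAmount", "TaxAmt", "Freight",
          "CarrierTrackingNumber", "CustomerPONumber"]
record = ["310", "20010701", "20010713", "20010708", "21768", "1", "19", "6",
          "SO75124", "1", "1", "1", "3578.27", "3578.27", "0", "0",
          "2171.2942", "2171.2942", "3578.27", "286.2616", "89.4568",
          "NULL", "NULL"]

path = os.path.join(tempfile.gettempdir(), "new_order.txt")  # hypothetical path
with open(path, "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(header)   # first row holds the column names
    writer.writerow(record)   # the new fact row to push into the partition
```

    The column-name row is what lets you check "Column names in the first data row" in step 2.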
    1. Create a package and drag a "Data Flow Task" onto the Control Flow tab. Copy and paste the record above into a txt file.
    2. Switch to the Data Flow tab and create a "Flat File Source" pointing to the txt file. Select the option "Column names in the first data row", keep the other default settings, then switch to the Columns tab and review the data to make sure the file can be read without any problem.
    3. In the Flat File Source Editor, select the option "Retain null values from the source as null values in the data flow".
    4. Drag a "Partition Processing Destination" onto the Data Flow tab, connect the "Flat File Source" to the "Partition Processing Destination", double-click the "Partition Processing Destination", and create a data source pointing to the Adventure Works DW 2008 multidimensional database.
    5. Select the partition Internet_Sales_2001 as the target object, and select "Add (incremental)" as the processing method, for testing purposes only.
    6. Map the input columns to the partition attributes. Every attribute should be mapped to an input column.

    7. Run the package, then check the Internet Order Quantity for calendar year 2001; it should now be 1,014.
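    The expected result follows from the processing method chosen in step 5: "Add" (ProcessAdd) appends the new fact row to the partition, so the Internet Order Quantity measure rises by the row's OrderQuantity. A quick arithmetic check, assuming the pre-load total of 1,013:

```python
# The sample record, split into its 23 tab-separated columns.
row = ("310\t20010701\t20010713\t20010708\t21768\t1\t19\t6\tSO75124\t1\t1\t"
       "1\t3578.27\t3578.27\t0\t0\t2171.2942\t2171.2942\t3578.27\t"
       "286.2616\t89.4568\tNULL\tNULL").split("\t")

order_quantity = int(row[11])      # 12th column is OrderQuantity
print(1013 + order_quantity)       # expected total after the run: 1014
```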

    Please remember to mark the replies as answers if they help, and unmark them if they provide no help.
    Monday, June 21, 2010 7:39 AM

All replies

  • Useful information.

    Thanks, Charles.

    Sunday, September 12, 2010 8:24 AM