
Automated Export/Extract to SQL Server

This is a unique client request that I am trying to assist with. The need is to export directly from a Pyramid Discovery to a SQL database. My thought was to set up a SQL Server SSIS job and ingest a CSV file, but I need to make sure that CSV file is produced automatically and daily. The client wants to pull directly from Pyramid rather than through a subscription, so as to avoid tying the process to a specific user's account.

Is there some connection function to pull or publish directly from Pyramid to an external SQL Server environment? The client does not have direct access to the original SQL Server to pull the source data directly. 

5 replies

    • NPANS
    • 8 mths ago

    There are multiple techniques for this:

    1. Use the 'Data Flow' tools in Model: set the chosen report as the source and choose your SQL Server as the target. It goes straight through, with no CSV or other intermediate file. Then put that data flow on a daily schedule. The whole thing works out of the box.
    2. Use the OData connector end point for the report and suck it into SQL Server via something like SSIS (clumsy compared to #1)
    3. Use the Query APIs to run the report, extract the results programmatically and load it into SQL Server using any language (lots of extra work compared to #1 or #2)
    4. Use the Publish tools to push a CSV file daily. Then (with a bunch of effort), suck the CSV into your SQL Server (clumsiest)

    Based on what you want, I would choose #1.
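    If you do end up on option #2, the general shape of the ingest side can be sketched as below. This is a minimal illustration, not Pyramid-specific code: the feed URL, authentication, and column names all depend on your environment, and the sample payload here is made up. It only shows the standard OData v4 JSON shape (a top-level "value" array of rows) and how you might stage rows for SQL Server; in practice you would bind parameters with pyodbc or let SSIS handle the load.

```python
import json

def odata_rows(payload: str) -> list:
    """Extract the row array from a standard OData v4 JSON response.

    OData responses wrap results in a top-level "value" array; when the
    feed is paged, an "@odata.nextLink" key points at the next page.
    """
    doc = json.loads(payload)
    return doc.get("value", [])

def insert_statements(table: str, rows: list) -> list:
    """Build INSERT text for illustration only - real code should use
    parameter binding (e.g. pyodbc) rather than string formatting."""
    stmts = []
    for row in rows:
        cols = ", ".join(f"[{c}]" for c in row)
        vals = ", ".join(repr(v) for v in row.values())
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return stmts

# Sample payload shaped like an OData feed (hypothetical columns):
sample = '{"value": [{"Year": 2024, "Sales": 100}, {"Year": 2025, "Sales": 120}]}'
rows = odata_rows(sample)
print(insert_statements("dbo.Extract", rows)[0])
```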

      • terrance_smith.1
      • 8 mths ago

       Clarification on option 1: I can get this set up, but I can't seem to set the filter options on the discover from the modeling tool. Is that an option, or does the discovery have to be configured without filters, or with a parameter that always selects the most recent value? I am guessing the report should not have dynamic filters like Term/Year that would need to be changed over time?

      • NPANS
      • 8 mths ago

       The plot thickens...

      If you want to automate this process with a UI experience, then options 1 and 2 are no good - they run without a UI, so there is no point at which you could intercept the query. If you want to intercept the query without the UI, use programmatic filtering logic with option 3. Option 4 can work with a UI - but its true power is in scheduling publications to run automatically, without UI interception.

      Options 1, 2, and 4 can all include a framework that injects a "current" slicer value into the query to drive the extraction, and this can run without a UI. It works with any selection, including years.

      So, build a 'list' to represent what you would want to pick every time this runs. The list can include logic that resolves to one item. (Alternatively, it resolves to more than one, and you auto-pick the first or last item.) Then include that list in your report. Each time the report runs, the list is evaluated and a new selection is injected into your extract. For example, the list could be "get a listing of years from the date column, then take the last year." Your extract will then always work off the last year of data.
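      The "list that resolves to the last year" idea above can be sketched in plain code. This is only an illustration of the selection logic, not Pyramid's own implementation; the function name and date values are made up for the example:

```python
from datetime import date

def current_selection(dates: list) -> int:
    """Mimic a list that resolves to one item: collect the distinct
    years found in a date column, then keep only the most recent one."""
    years = sorted({d.year for d in dates})
    return years[-1]  # a single value to inject into the extract

# Hypothetical date column values spanning several years:
column = [date(2022, 9, 1), date(2023, 1, 15), date(2023, 6, 30)]
print(current_selection(column))  # the extract would filter on this year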

    • Customer Solutions Architect
    • Moshe_Yossef
    • 8 mths ago

    You can create a data flow in the model.

    The data source is the discover.

    The target node is the SQL target.

    Run the model with an API call, and you're done.
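    As a sketch, triggering the flow from code might look like the following. Note that the endpoint path, request body keys, and auth header here are placeholders, not verified Pyramid API calls - check your Pyramid version's REST API documentation for the real ones. The snippet only assembles the request; sending it is left commented out:

```python
import json
import urllib.request

def build_run_request(base_url: str, item_id: str, token: str) -> urllib.request.Request:
    """Assemble (but do not send) an HTTP request asking the server to
    execute a saved data flow by its item id. The path, body keys, and
    auth header are hypothetical placeholders."""
    body = json.dumps({"itemId": item_id}).encode()
    return urllib.request.Request(
        url=f"{base_url}/API/run-data-flow",  # placeholder path
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "paToken": token,  # placeholder auth header
        },
    )

req = build_run_request("https://pyramid.example.com", "abc-123", "TOKEN")
# urllib.request.urlopen(req)  # uncomment to actually send it
print(req.full_url)
```

    Schedule that script (Windows Task Scheduler, cron, or an SQL Agent job step) and the daily run needs no user interaction.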

     

    Although I have to say - I'm wondering what the actual use case is; it sounds like a long way around just to avoid subscriptions...

    • terrance_smith.1
    • 8 mths ago

    I am trying option 1 right now and need to wait on client SQL information to confirm. But this sounds exactly like what I was looking for. Preemptive thanks and I will update this again once I get it working. 

Content aside

  • Status: Answered
  • 1 Like
  • Last active 8 mths ago
  • 5 Replies
  • 48 Views
  • 3 Following