
What is the maximum size for a text variable?

Hello,

I am seeking information on the maximum allowable size for a text variable within Pyramid Analytics.

Currently, I am utilizing a REST API node within a master flow, employing a GET method to retrieve data from QuickBooks API. The response is formatted in JSON, and it appears that the only method to handle this response is to read it directly into a text variable. Subsequently, this data is processed through a Python script node within a data flow, where it is read into a stream and then stored in the IMDB.

I am concerned about the potential size of the JSON response, which may exceed 70+ MB, and whether this could be too large for a text variable within Pyramid. Has this approach been successfully implemented by anyone? Any guidance or advice based on your experience would be greatly appreciated.
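The flow described above (JSON text variable, then a Python script node, then a tabular stream into the IMDB) can be sketched roughly as follows. This is a generic sketch, not Pyramid's actual node API: the variable name and payload shape are assumptions, not the real QuickBooks response.

```python
import json

import pandas as pd

# Hypothetical stand-in for the text variable filled by the REST API node;
# the real QuickBooks response would be far larger and shaped differently.
response_text = '{"rows": [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Beta"}]}'

# Parse the JSON text and flatten the records into a tabular stream
# that the data flow could then write to the IMDB.
payload = json.loads(response_text)
df = pd.json_normalize(payload["rows"])
```

`json.loads` itself handles payloads well beyond 70 MB; the open question is whether the Pyramid text variable delivers the full response intact.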

5 replies

    • Daniel_Van_Wyk
    • 7 mths ago

    After testing with a larger result set through the REST API, we can see an issue. A small set seems to be OK, but with larger sets returned I get a generic error message, not at the API node but in the data flow. I cannot see the actual value being passed through the variable, and the error message is generic. The "Variables" tab does show a value, but that value is not the response from the API request; the response I can see through Postman is completely different.

      • Principal Customer Solutions Consultant
      • Mark_Oldfield.1
      • 7 mths ago

      Hi  

      Have you tried using the JSON file source within a data flow instead? It is aimed at receiving a JSON data payload, and a web application API is one of its use cases, whereas a variable is not.
       

      To connect to an API endpoint on a website, use the URL option.

      Below is an example of me connecting to my energy provider's API endpoint and returning the JSON response. The JSON Extract node can then be used to extract the elements from the document.
       

      Further details on JSON file sources can be found here.

      Hope this helps,

      Mark.

      • Daniel_Van_Wyk
      • 7 mths ago

      Hello  , 

       

      Thank you for the quick response.

      This seems to be closer to what we need. However, the JSON response from the source might not be what the node expects: it is nested and recursive. In one test response the node picks up about 200 columns to import, and in another about 1,200+ columns, with only one row of data, even though there is more than one record in the source. In other words, the more data there is, the deeper the levels go.

      We do have a way of resolving the recursiveness of the JSON in a Python script, which is why we want to pick up the text from the response and feed it into the Python script. Do you have any advice on this?
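      One generic way to resolve the recursiveness in a Python script (a sketch of the idea, not Daniel's actual code) is a recursive flattener that turns nested dicts and lists into dot-separated column names. The sample payload here is hypothetical.

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested dicts/lists into a single flat dict
    with dot-separated keys and one scalar value per key."""
    out = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            out.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            out.update(flatten(value, f"{prefix}{i}."))
    else:
        out[prefix.rstrip(".")] = obj
    return out

# Hypothetical nested response fragment.
raw = '{"invoice": {"id": 7, "lines": [{"amount": 10}, {"amount": 5}]}}'
flat = flatten(json.loads(raw))
# flat == {"invoice.id": 7, "invoice.lines.0.amount": 10, "invoice.lines.1.amount": 5}
```

      Applying the flattener per top-level record, rather than to the whole document, keeps one row per record instead of one very wide row.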

      • Principal Customer Solutions Consultant
      • Mark_Oldfield.1
      • 7 mths ago

      Hi  

      The two approaches you could use are either putting a Python script node into your flow (with the option to create a new table if applicable), or using a Python script as a source object, connecting to your API and parsing your JSON from within the Python script.
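      A minimal sketch of the second option, a Python script acting as the source. The endpoint URL and the "rows" key are placeholders for whatever the real API actually returns.

```python
import json
import urllib.request

import pandas as pd

# Hypothetical endpoint; substitute the real API URL and authentication.
URL = "https://api.example.com/v1/invoices"

def fetch(url):
    """Call the API endpoint and return the raw JSON text."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def to_table(payload_text):
    """Parse a JSON payload and normalise its records into a DataFrame."""
    records = json.loads(payload_text).get("rows", [])
    return pd.json_normalize(records)

# In the source script, the resulting DataFrame becomes the output table:
# df = to_table(fetch(URL))
```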

       

      Thanks, Mark.

      • Daniel_Van_Wyk
      • 7 mths ago

      Hello  , 

       

      Thank you for the suggestions. It seems the latter would be the best approach here: to have it all in a Python script. This would also allow us to parameterize the URL a bit, as well as the headers.
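      Parameterizing the URL and headers inside the script might look like this; the base URL, query parameters, and token are all placeholders, and in a real flow they could be fed in from Pyramid variables.

```python
import urllib.request
from urllib.parse import urlencode

# Hypothetical values; substitute the real endpoint and credentials.
BASE_URL = "https://api.example.com/v1/invoices"
params = {"start_date": "2024-01-01", "page_size": 500}
headers = {"Authorization": "Bearer <token>", "Accept": "application/json"}

# Build the parameterised request; urlopen(request) would execute the call.
url = f"{BASE_URL}?{urlencode(params)}"
request = urllib.request.Request(url, headers=headers)
```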
