Technical Questions on LLM/GenAI Architecture and Data Flow
Hello PA Team,
I'm looking for a deeper technical understanding of how the GenAI/LLM integration works, specifically regarding data flow and security. I've seen the general documentation, but I have a few more detailed architectural questions.
Could you please provide clarification on the following points:
What data is sent? When a GenAI feature is used, what exactly is included in the payload sent to the external LLM provider? For example, does it send metadata like table/column names, the query structure, or aggregated/sample data from the visual?
Where is it sent? What are the specific endpoints (domains/IPs) that the Pyramid server communicates with for these features? This is important for firewall configurations (I've sketched the kind of connectivity check I have in mind just after these questions).
How is it sent? What protocol and port are used for this communication (e.g., HTTPS on port 443)?
Is there any logging? Can I audit or log the prompts sent to the LLM and the responses received back from it? Having a full trail of the entire "conversation" would be very helpful for monitoring and troubleshooting.
A more technical description of this process would be extremely valuable for understanding the security and data governance aspects of using this feature in an enterprise environment.
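For reference, this is roughly the check I'd run from the Pyramid server to validate the firewall path once the actual endpoints are confirmed. The domain below is only a placeholder assumption on my part, not a confirmed Pyramid or LLM-provider address:

```python
# Hypothetical connectivity check for the firewall question above.
# ASSUMED_ENDPOINT is a placeholder, not a confirmed provider domain;
# swap in the real endpoint(s) once they are known.
import socket
import ssl

ASSUMED_ENDPOINT = "api.openai.com"   # placeholder assumption
PORT = 443                            # HTTPS

def check_https(host: str, port: int = 443, timeout: float = 5.0) -> None:
    # Open a TCP connection and complete a TLS handshake to confirm the
    # firewall allows outbound HTTPS to the given host.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print(f"Connected to {host}:{port} using {tls.version()}")
            print(f"Certificate subject: {cert.get('subject')}")

if __name__ == "__main__":
    check_https(ASSUMED_ENDPOINT, PORT)
```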
Thanks!
2 replies
Hi,
I found some information regarding LLM audit/logging that should help with your last question.
Essentially, navigate to the admin console and open Logs; in the log settings, scroll down to "Detailed LLM Logging". Enabling it captures the extra LLM data (the prompts sent and the responses received), and the related settings let you adjust how long that data is retained, with a "Purge Now" option to clear the logged data immediately. The only catch is that this is heavier on the database and should only be used for diagnostic purposes.
Also, keep in mind that logging can slow down the application.
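If you want to pull the logged prompts and responses out programmatically once that setting is on, something along these lines could work as a starting point. Please note the connection string, table name (llm_log), and column names below are placeholders I've made up for illustration; the actual schema of the Pyramid repository database would need to be confirmed first.

```python
# Minimal sketch for reviewing detailed LLM log entries.
# ASSUMPTIONS: the connection string, the llm_log table, and the
# created_at/prompt/response columns are all placeholders -- verify the
# real repository schema before relying on this.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=repo-host;DATABASE=pyramid;Trusted_Connection=yes"
)

def dump_recent_llm_activity(limit: int = 20) -> None:
    with pyodbc.connect(CONN_STR) as conn:
        cursor = conn.cursor()
        # Hypothetical table and columns: created_at, prompt, response
        cursor.execute(
            "SELECT TOP (?) created_at, prompt, response "
            "FROM llm_log ORDER BY created_at DESC",
            limit,
        )
        for created_at, prompt, response in cursor.fetchall():
            print(f"[{created_at}] prompt={(prompt or '')[:80]!r} "
                  f"response={(response or '')[:80]!r}")

if __name__ == "__main__":
    dump_recent_llm_activity()
```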
Here is a link for your reference with more details on this subject:
Log Settings