Getting prompt logs
INFERENCE DEFEND
This article is meant for Inference Defend users.
ROLES AND PERMISSIONS
To complete the tasks described in this section, make sure you have the required permissions.
Each time you send a prompt to an LLM, the prompt is saved and stored as a log you can view.
Get the prompt logs
To get the prompt logs, you need to call the `cai.client.prompts.get()` method in your request. The `cai.client.prompts.get()` method accepts multiple optional parameters you can include to refine your results.
In this scenario, we are going to get the prompt logs without including any optional parameters.
To get the prompt logs:

1. Add your token value to the following sample:

```python
from calypsoai import CalypsoAI

# Define the URL and token for CalypsoAI
CALYPSOAI_URL = "https://www.us1.calypsoai.app"
CALYPSOAI_TOKEN = "ADD-YOUR-TOKEN-HERE"

# Initialize the CalypsoAI client
cai = CalypsoAI(url=CALYPSOAI_URL, token=CALYPSOAI_TOKEN)

# Get the prompt logs
prompts = [prompt for prompt in cai.prompts.iterate()]

# Print the response
print(prompts)
```

2. Run the script.
3. Analyze the response.

```json
{
  "next": "eyJyb3ciOjEwLCJsaW1pdCI6MTAsInR5cGUiOiJyb3cifQ==",
  "prev": null,
  "prompts": [
    {
      "externalMetadata": null,
      "fromTemplate": false,
      "id": "01975a7e-f050-706e-afa8-1e22f1064f21",
      "input": "Hello world",
      "memory": null,
      "orgId": null,
      "parentId": null,
      "preserve": false,
      "projectId": "01975a7d-ba51-70a9-97c1-8158db2a8957",
      "provider": "01975a76-69c9-700f-8871-b689fb827e7f",
      "receivedAt": "2025-06-10T15:39:17.968442Z",
      "result": {
        "files": null,
        "outcome": "blocked",
        "providerResult": null,
        "response": null,
        "scannerResults": [
          {
            "completedDate": "2025-06-10T15:39:18.969754Z",
            "customConfig": false,
            "data": {
              "type": "custom"
            },
            "outcome": "failed",
            "scanDirection": "request",
            "scannerId": "019745f2-abad-700e-a805-93993f59e036",
            "scannerVersionMeta": null,
            "startedDate": "2025-06-10T15:39:17.978433Z"
          }
        ]
      },
      "type": "prompt",
      "userId": "919ff136-9cfa-4f8a-b347-c0cde08aca7c"
    },
    (...)
  ]
}
```
DEFAULT RESPONSE
Our sample Python request uses the `cursor` attribute to return a paginated list of prompts, starting from the most recent prompt. By default, there is a limit of 10 results per page, and only the prompts sent by the owner of the token are shown. You can use the `next` and `prev` cursor properties to view prompts on specific pages.
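The `next` and `prev` values are opaque pagination tokens. In the sample response above, the `next` token happens to be base64-encoded JSON describing the page position; treat that encoding as an implementation detail rather than a contract, but decoding it can be useful for debugging:

```python
import base64
import json

# The "next" cursor from the sample JSON response above
cursor = "eyJyb3ciOjEwLCJsaW1pdCI6MTAsInR5cGUiOiJyb3cifQ=="

# Cursors are opaque tokens; in this sample the token is
# base64-encoded JSON describing the page position. Do not rely
# on this format in production code.
decoded = json.loads(base64.b64decode(cursor))
print(decoded)  # {'row': 10, 'limit': 10, 'type': 'row'}
```

Pass the token back to the API unchanged when requesting the next or previous page; the decode step is for inspection only.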
Refine your search
You may want to refine your search to get the logs for prompts that match your specific search requirements.
In this scenario, we are going to get the prompt logs for all blocked prompts from all users, with a limit of 1000 results shown per page.
To get the prompt logs:

1. Add your token value to the following sample:

```python
from calypsoai import CalypsoAI
from calypsoai.datatypes import PromptOutcome, PromptType

# Define the URL and token for CalypsoAI
CALYPSOAI_URL = "https://www.us1.calypsoai.app"
CALYPSOAI_TOKEN = "ADD-YOUR-TOKEN-HERE"

# Initialize the CalypsoAI client
cai = CalypsoAI(url=CALYPSOAI_URL, token=CALYPSOAI_TOKEN)

# Get the prompt logs for blocked prompts from all users,
# with up to 1000 results per page
prompts = cai.client.prompts.get(
    limit=1000,
    type_=[PromptType.PROMPT],
    outcomes=[PromptOutcome.BLOCKED],
    onlyUser=False,
)

# Print the response
print(prompts.model_dump_json(indent=2))
```

2. Run the script.
3. Analyze the response.
Our sample Python request produces a paginated list very similar to the sample JSON response in Get the prompt logs, showing the prompts that match your search criteria.
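Once you have the matching prompt logs, you can post-process them with plain Python. As a hypothetical sketch (the field names below match the JSON response shown earlier in this article, but the entries themselves are made-up data), counting blocked prompts per user might look like this:

```python
from collections import Counter

# Hypothetical prompt-log entries, shaped like the JSON response
# shown earlier in this article (the values are made up)
prompts = [
    {"userId": "user-a", "result": {"outcome": "blocked"}},
    {"userId": "user-a", "result": {"outcome": "blocked"}},
    {"userId": "user-b", "result": {"outcome": "blocked"}},
]

# Tally blocked prompts per user
blocked_per_user = Counter(
    p["userId"] for p in prompts if p["result"]["outcome"] == "blocked"
)
print(blocked_per_user)  # Counter({'user-a': 2, 'user-b': 1})
```

The same pattern works for any field in the log entries, such as grouping by `projectId` or by scanner outcome.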