
Prompts and scans in CalypsoAI

Prompts and scans are fundamental features in our Inference Defend product. Both features involve scanning input content, but serve distinct purposes and follow different workflows.

Prompts

A prompt is a text input you send to an LLM (Large Language Model) to generate a response.

Before the prompt is sent to the LLM, our scanners analyze it to verify that it complies with the security and content policies you have configured.

  • If the prompt is not flagged by the scanners, it is sent to the LLM.
    • If a scanner is configured to scan responses, the LLM response is also scanned to ensure compliance with your configured policies, and is blocked if flagged by a scanner.
  • If the prompt is flagged by the scanners, it is blocked and not sent to the LLM, preventing inappropriate content from reaching the LLM.

This bidirectional scanning, covering both the prompt and the response, provides comprehensive protection for your users and applications.
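
As a rough sketch of this flow, the request below sends a prompt through the Platform and checks whether a scanner blocked it. The endpoint path and the payload and response fields (`input`, `blocked`, `reason`, `output`) are illustrative assumptions, not the documented CalypsoAI API; consult the API reference for the real Prompts resource.

```python
import requests

# Hypothetical endpoint and field names -- placeholders for illustration,
# not the documented CalypsoAI Prompts API.
API_URL = "https://your-platform-host/api/prompts"
HEADERS = {"Authorization": "Bearer <your-token>"}

resp = requests.post(
    API_URL,
    headers=HEADERS,
    json={"input": "Summarize our Q3 security report."},  # the user's prompt
    timeout=30,
)
result = resp.json()

# If no scanner flags the prompt (or the response), the LLM output comes
# back; otherwise the request is blocked before or after the LLM call.
if result.get("blocked"):                     # hypothetical field
    print("Blocked by scanner:", result.get("reason"))
else:
    print("LLM response:", result.get("output"))
```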

Scans

A scan request is a text input you submit for direct analysis by our scanners. As with a prompt, the input is checked against your configured security and content policies, but it is never sent to an LLM and does not leave our Platform.

Scan requests let you apply our scanning engine in non-LLM contexts: you can validate content, analyze text, or check data against your custom policies, extending the same protections to other areas of your operations.
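
For example, a content-moderation pipeline could validate user-generated text before storing it. The sketch below uses the same hypothetical conventions as above (placeholder endpoint and field names, not the documented API):

```python
import requests

# Hypothetical Scans endpoint -- placeholder names for illustration only.
API_URL = "https://your-platform-host/api/scans"
HEADERS = {"Authorization": "Bearer <your-token>"}

resp = requests.post(
    API_URL,
    headers=HEADERS,
    json={"input": "User-submitted text to validate against your policies."},
    timeout=30,
)
verdict = resp.json()

# The text is never forwarded to an LLM; you receive only the scan outcome
# and decide on the next steps yourself.
if verdict.get("flagged"):                    # hypothetical field
    print("Rejected:", verdict.get("reason"))
else:
    print("Accepted: content passed all configured scanners.")
```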

WHAT'S THE MAIN DIFFERENCE?

Both the Prompts and the Scans API resources use our scanners.

  • Successful prompts are sent to an LLM.
  • Scan requests are not sent to an LLM. You simply get the outcome of the scan and must decide on the next steps.

Scanning direction

The direction of the scan depends on whether you are sending a prompt or a scan request.

When sending a prompt, you can configure scanners to analyze:

  • The initial request (the user's prompt)
  • The final response from the LLM
  • Both the request and the response

When sending a scan request, the text input is not sent to an LLM, so only the request is scanned.
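
To make the three options concrete, a scanner's direction settings might be expressed as in the sketch below; the field names are assumptions rather than the documented configuration schema.

```python
# Hypothetical scanner settings -- field names are assumptions, not the
# documented CalypsoAI schema.
scanner_config = {
    "name": "pii-detector",
    "scan_request": True,    # analyze the initial request (the user's prompt)
    "scan_response": True,   # analyze the LLM's reply (prompts only)
}

# For a scan request there is no LLM call, so only the request side of
# this configuration applies; "scan_response" has no effect there.
```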

RESPONSE SCANNERS NOT WORKING

A response scanner never triggers for scan requests (there is no LLM response to scan), and it does not trigger for prompts whose initial request is blocked. The prompt must successfully reach the LLM for a response to be generated and scanned.

Sending prompts

To send a prompt to an LLM, you must first create an authorization token and a provider. Providers are companies that make or host LLMs, and each provider can include several models.
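
As a loose illustration of these prerequisites, the token is supplied with each API call and a provider groups the models of one LLM company. Every name below is a placeholder assumption, not the documented API.

```python
# Hypothetical provider record -- placeholder names for illustration.
# A provider represents a company that makes or hosts LLMs, and it can
# include several models.
provider = {
    "name": "example-provider",
    "models": ["example-model-small", "example-model-large"],
}

# The authorization token accompanies every API call, for example:
HEADERS = {"Authorization": "Bearer <your-token>"}
```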
