Process documents with LLMs
LLMs are applied during the `segment_processing` step; see that section to learn more. This is how you can configure them:
The `LlmProcessing` configuration controls which language models are used for processing segments and provides fallback strategies if the primary model fails.
| Field | Type | Description | Default |
|---|---|---|---|
| `llm_model_id` | String | The ID of the model to use for processing. If not provided, the system default model is used. | System default |
| `fallback_strategy` | FallbackStrategy | Strategy to use if the primary model fails. | System default |
| `max_completion_tokens` | Integer | Maximum number of tokens to generate in the model response. | None |
| `temperature` | Float | Controls randomness in model responses (0.0 = deterministic; higher values are more random). | 0.0 |
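As a rough illustration of the field shapes above, here is a minimal Python sketch. The dataclass below mirrors the table and is purely illustrative, not the SDK's actual class definition; the model ID `"some-model"` is a hypothetical placeholder.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class LlmProcessing:
    # Field names and defaults mirror the table above; `None` means
    # "fall back to the system default" for the first two fields.
    llm_model_id: Optional[str] = None           # None -> system default model
    fallback_strategy: Optional[str] = None      # None -> system default strategy
    max_completion_tokens: Optional[int] = None  # None -> no explicit cap
    temperature: float = 0.0                     # 0.0 -> deterministic output

# Example: pick a model explicitly and cap the response length.
config = LlmProcessing(llm_model_id="some-model", max_completion_tokens=1024)
print(asdict(config))
```

Serializing with `asdict` shows the payload shape such a configuration would contribute to a request body.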
- `FallbackStrategy.none()`: No fallback is used. If the primary model fails, the operation returns an error.
- `FallbackStrategy.default()`: Use the system default fallback model.
- `FallbackStrategy.model("model-id")`: Specify a particular model ID to use as a fallback. This gives you explicit control over which alternative model is used.

Note: The table of available models is dynamically generated by fetching data from our API, so model availability may change over time.
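The three strategies above can be sketched as a simple retry flow. This is a standalone Python sketch, not the service's implementation: the tuple encoding stands in for the `FallbackStrategy` variants, and `call_model`, `"flaky-model"`, and `"default-model"` are hypothetical names used only for illustration.

```python
def run_with_fallback(prompt, primary_id, fallback_strategy, call_model):
    """Try the primary model; on failure, apply the fallback strategy.

    fallback_strategy is one of:
      ("none",)            -> re-raise the primary model's error
      ("default",)         -> retry with the system default model
      ("model", "some-id") -> retry with an explicitly chosen model
    """
    SYSTEM_DEFAULT = "default-model"  # hypothetical system default ID
    try:
        return call_model(primary_id, prompt)
    except RuntimeError:
        kind = fallback_strategy[0]
        if kind == "none":
            raise  # no fallback: surface the error to the caller
        retry_id = SYSTEM_DEFAULT if kind == "default" else fallback_strategy[1]
        return call_model(retry_id, prompt)

# Toy model caller: "flaky-model" always fails, everything else echoes.
def call_model(model_id, prompt):
    if model_id == "flaky-model":
        raise RuntimeError("primary model failed")
    return f"{model_id}: {prompt}"

print(run_with_fallback("hi", "flaky-model", ("model", "backup-model"), call_model))
# -> backup-model: hi
```

With `("none",)` the same failure propagates as an error instead of being retried, matching the behavior described above.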