LLM processing is configured as part of the `segment_processing` step; see the segment processing documentation to learn more.

This is how you can configure the LLMs:
## LLM Processing Options
The `LlmProcessing` configuration controls which language models are used to process segments, and provides fallback strategies if the primary model fails.
| Field | Type | Description | Default |
|---|---|---|---|
| `llm_model_id` | String | The ID of the model to use for processing. If not provided, the system default model is used. | System default |
| `fallback_strategy` | FallbackStrategy | Strategy to use if the primary model fails. | System default |
| `max_completion_tokens` | Integer | Maximum number of tokens to generate in the model response. | None |
| `temperature` | Float | Controls randomness in model responses (0.0 = deterministic, higher = more random). | 0.0 |
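To make the fields concrete, here is a minimal, self-contained sketch that mirrors the table above. Note that this `LlmProcessing` class is a hypothetical stand-in written for illustration, not the real SDK class, and the model ID is a placeholder:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in mirroring the documented LlmProcessing fields;
# the real class is provided by the Chunkr SDK.
@dataclass
class LlmProcessing:
    llm_model_id: Optional[str] = None        # None -> system default model
    fallback_strategy: Optional[str] = None   # None -> system default strategy
    max_completion_tokens: Optional[int] = None
    temperature: float = 0.0                  # 0.0 = deterministic output

# Pin a specific model (placeholder ID) and cap the response length;
# every field left unset falls back to its documented default.
config = LlmProcessing(llm_model_id="some-model-id", max_completion_tokens=512)
print(config.temperature)  # 0.0 (the documented default)
```

Leaving a field unset keeps the documented default, so a bare `LlmProcessing()` relies entirely on system defaults.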
## Fallback Strategies

When working with language models, reliability is important. Chunkr provides three fallback strategies to handle cases where your primary model fails:

- `FallbackStrategy.none()`: No fallback will be used. If the primary model fails, the operation returns an error.
- `FallbackStrategy.default()`: Use the system default fallback model.
- `FallbackStrategy.model("model-id")`: Specify a particular model ID to use as the fallback. This gives you explicit control over which alternative model is used.
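The three constructors above can be modeled with a small, self-contained sketch. This `FallbackStrategy` class is a stand-in written to illustrate the documented behavior, not the SDK's implementation, and the model ID is a placeholder:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal stand-in for the three documented constructors; the real
# FallbackStrategy is provided by the Chunkr SDK.
@dataclass(frozen=True)
class FallbackStrategy:
    kind: str                        # "none" | "default" | "model"
    model_id: Optional[str] = None   # only set when kind == "model"

    @staticmethod
    def none() -> "FallbackStrategy":
        # Fail hard: primary-model errors surface to the caller.
        return FallbackStrategy("none")

    @staticmethod
    def default() -> "FallbackStrategy":
        # Defer to the system default fallback model.
        return FallbackStrategy("default")

    @staticmethod
    def model(model_id: str) -> "FallbackStrategy":
        # Explicitly choose which model handles the retry.
        return FallbackStrategy("model", model_id)

fallback = FallbackStrategy.model("my-backup-model")  # placeholder ID
print(fallback.kind, fallback.model_id)  # model my-backup-model
```

`FallbackStrategy.none()` is the right choice when a silent model swap would be worse than an error, e.g. when downstream code depends on one model's output format.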
## Example Usage

Here's how to configure LLM processing in different scenarios:
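The exact call shape depends on your client SDK, so as a schematic, here are three common scenarios expressed as plain dictionaries using the field names from the options table above (all model IDs are placeholders):

```python
# 1. Rely entirely on system defaults: no fields set.
default_config = {}

# 2. Pin a primary model and fail hard if it errors (no fallback).
strict_config = {
    "llm_model_id": "primary-model-id",        # placeholder
    "fallback_strategy": {"kind": "none"},
    "temperature": 0.0,
}

# 3. Pin a primary model with an explicit fallback and a token cap.
resilient_config = {
    "llm_model_id": "primary-model-id",        # placeholder
    "fallback_strategy": {"kind": "model", "model_id": "backup-model-id"},
    "max_completion_tokens": 1024,
}

print(resilient_config["fallback_strategy"]["model_id"])  # backup-model-id
```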
## Available Models

The following models are currently available for use with Chunkr:

Note: This table is dynamically generated by fetching data from our API. Model availability may change over time.