Core Concepts
This section describes the key building blocks of Contextica. Understanding these concepts will help you define, configure, and optimize AI functions in a structured way.
AI Functions
AI Functions are the foundation of Contextica. They are plain Java methods that represent tasks you want an LLM to perform.
The typical workflow when working with AI Functions is:
- Create an interface and declare methods annotated with @AIFunction.
- Provide configuration for each function using @AIFunctionConfiguration.
- Run the generate-contexts process to optimize prompts.
- Use the function in your application at runtime.
Function Declaration
AI Functions are declared inside an interface annotated with @AIService.
@AIService(name = "classificationService", provider = LLMProvider.OPENAI, model = LLMModel.GPT_4O_MINI)
public interface ClassificationService {
@AIFunction
String classify(String input);
}
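At runtime, the generated implementation of this interface is used like any other Java service. The snippet below is a minimal sketch: the `Contextica.create(...)` factory method and the example output are assumptions used for illustration, not a documented part of the API.

```java
public class ClassificationExample {
    public static void main(String[] args) {
        // The Contextica.create(...) factory is a hypothetical entry point used
        // here for illustration; the actual bootstrap API may differ.
        ClassificationService service = Contextica.create(ClassificationService.class);

        // The AI Function is invoked like any other Java method. Contextica injects
        // the argument into the stored context and sends the resulting prompt to the LLM.
        String label = service.classify("The delivery was late and the package was damaged.");
        System.out.println(label); // e.g. "Negative"
    }
}
```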
Function Configuration
Each AI Function can have its own configuration.
- Configuration is defined in a class annotated with @AIServiceConfiguration.
- Each method annotated with @AIFunctionConfiguration returns a FunctionConfiguration object.
- This object specifies the task type, description, examples, and evaluation hints.
```java
@AIServiceConfiguration(client = ClassificationService.class)
class ClassificationServiceConfig {

    @AIFunctionConfiguration
    public FunctionConfiguration classify() {
        return FunctionConfiguration.builder()
                .description("Classify the input into one of: Positive, Neutral, Negative")
                .taskType(TaskType.CLASSIFY)
                .build();
    }
}
```
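The same builder is used for every function. As a second, illustrative sketch, a hypothetical summarization function could be configured as shown below; the `SummarizationService` interface, the `summarize` method, and the `TaskType.SUMMARIZE` constant are assumptions based on the task types listed in the next section.

```java
@AIServiceConfiguration(client = SummarizationService.class)
class SummarizationServiceConfig {

    // Assumes a SummarizationService interface declaring an @AIFunction
    // method named summarize(String input).
    @AIFunctionConfiguration
    public FunctionConfiguration summarize() {
        return FunctionConfiguration.builder()
                .description("Summarize the input article in no more than three sentences")
                .taskType(TaskType.SUMMARIZE)
                .build();
    }
}
```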
Task Types
Task types describe the intended purpose of an AI Function. Selecting the right task type ensures that Contextica applies the appropriate optimization and evaluation strategy.
| Task Type | Purpose |
|---|---|
| GENERATE | Create longer or more detailed content from a short instruction. |
| SUMMARIZE | Condense longer text into its main ideas. |
| EXTRACT | Pull specific information from a larger body of text. |
| CLASSIFY | Categorize or label input text into predefined groups. |
| TRANSFORM | Convert text into a different structure or format (e.g., JSON, Markdown, HTML). |
| TRANSLATE | Convert text from one language into another. |
| CHAT | Provide interactive, conversational responses to user input. |
The task type directly influences the candidate templates, the optimization strategy, and the evaluation strategy that Contextica applies.
See the Usage section for guidance on choosing the correct task type.
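As a quick illustration of how a task type pairs with a function, the hypothetical extraction service below would be configured with `TaskType.EXTRACT`; the service, interface, and method names are invented for this example.

```java
@AIService(name = "invoiceService", provider = LLMProvider.OPENAI, model = LLMModel.GPT_4O_MINI)
public interface InvoiceService {

    // Pulls one specific piece of information out of a larger body of text,
    // which matches the purpose of the EXTRACT task type.
    @AIFunction
    String extractInvoiceNumber(String invoiceText);
}
```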
Contexts
A context is the optimized prompt selected for a given AI Function.
- Contexts are stored in .contextica/contexts.json (with backups).
- At runtime, when a function is invoked, Contextica retrieves the stored context, injects input parameters, and sends the resulting prompt to the LLM.
This ensures repeatable and auditable behavior without manual prompt writing.
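Conceptually, the runtime step amounts to template substitution. The sketch below is illustrative only; the `{input}` placeholder syntax is an assumption and does not reflect the actual format of stored contexts.

```java
public class ContextInjectionSketch {
    public static void main(String[] args) {
        // Illustrative only: the {input} placeholder syntax is an assumption,
        // not the actual format used in contexts.json.
        String storedContext =
                "Classify the following text as Positive, Neutral, or Negative:\n{input}";
        String input = "The checkout process was confusing and slow.";

        // Contextica performs an equivalent substitution before calling the LLM.
        String prompt = storedContext.replace("{input}", input);
        System.out.println(prompt);
    }
}
```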
Optimization Engine
The optimization engine is responsible for generating and selecting the best prompt for each AI Function.
The process typically follows these steps:
- Inspect the task type.
- Choose an optimization and evaluation strategy.
- Generate initial candidate prompts.
- Produce variations where applicable.
- Evaluate candidates against developer-provided examples.
- Select and store the highest scoring context.
For some task types (for example, TRANSLATE or CHAT), a predefined template is used instead of running a full optimization process.
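For the task types that do run the full process, the final selection step can be pictured as an evaluate-and-pick loop over candidate prompts. The sketch below uses invented types and method names purely to illustrate the idea; it is not Contextica's internal API.

```java
import java.util.List;

// All types and names in this sketch are hypothetical; they illustrate the
// evaluate-and-select step only.
interface CandidateEvaluator {
    double score(String candidatePrompt, List<String> developerExamples);
}

record Candidate(String prompt, double score) {}

class ContextSelectionSketch {

    static Candidate selectBest(List<String> candidatePrompts,
                                List<String> developerExamples,
                                CandidateEvaluator evaluator) {
        Candidate best = null;
        for (String prompt : candidatePrompts) {
            // Score each candidate prompt against the developer-provided examples.
            double score = evaluator.score(prompt, developerExamples);
            if (best == null || score > best.score()) {
                best = new Candidate(prompt, score);
            }
        }
        // The highest-scoring candidate is stored as the function's context.
        return best;
    }
}
```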