Core Concepts

This section describes the key building blocks of Contextica. Understanding these concepts will help you define, configure, and optimize AI functions in a structured way.


AI Functions

AI Functions are the foundation of Contextica. They are plain Java methods that represent tasks you want an LLM to perform.

The typical workflow when working with AI Functions is:

  1. Create an interface and declare methods annotated with @AIFunction.
  2. Provide configuration for each function using @AIFunctionConfiguration.
  3. Run the generate-contexts process to optimize prompts.
  4. Use the function in your application at runtime.

Function Declaration

AI Functions are declared inside an interface annotated with @AIService.

@AIService(name = "classificationService", provider = LLMProvider.OPENAI, model = LLMModel.GPT_4O_MINI)
public interface ClassificationService {

    @AIFunction
    String classify(String input);
}

Function Configuration

Each AI Function can have its own configuration.

  • Configuration is defined in a class annotated with @AIServiceConfiguration.
  • Each method annotated with @AIFunctionConfiguration returns a FunctionConfiguration object.
  • This object specifies the task type, description, examples, and evaluation hints.

@AIServiceConfiguration(client = ClassificationService.class)
class ClassificationServiceConfig {

    @AIFunctionConfiguration
    public FunctionConfiguration classify() {
        return FunctionConfiguration.builder()
                .description("Classify the input into one of: Positive, Neutral, Negative")
                .taskType(TaskType.CLASSIFY)
                .build();
    }
}

Task Types

Task types describe the intended purpose of an AI Function. Selecting the right task type ensures that Contextica applies the appropriate optimization and evaluation strategy.

Task Type    Purpose
GENERATE     Create longer or more detailed content from a short instruction.
SUMMARIZE    Condense longer text into its main ideas.
EXTRACT      Pull specific information from a larger body of text.
CLASSIFY     Categorize or label input text into predefined groups.
TRANSFORM    Convert text into a different structure or format (e.g., JSON, Markdown, HTML).
TRANSLATE    Convert text from one language into another.
CHAT         Provide interactive, conversational responses to user input.

The task type directly influences which candidate templates, optimization strategy, and evaluation strategy Contextica applies.

See the Usage section for guidance on choosing the correct task type.
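As an illustration, a summarization function is declared and configured the same way as the classification example above, only with a different task type. This sketch reuses only the annotations and builder methods shown earlier in this section; the service name and description are made up for the example.

```java
@AIService(name = "summaryService", provider = LLMProvider.OPENAI, model = LLMModel.GPT_4O_MINI)
public interface SummaryService {

    @AIFunction
    String summarize(String article);
}

@AIServiceConfiguration(client = SummaryService.class)
class SummaryServiceConfig {

    @AIFunctionConfiguration
    public FunctionConfiguration summarize() {
        return FunctionConfiguration.builder()
                .description("Condense the article into its main ideas")
                .taskType(TaskType.SUMMARIZE)
                .build();
    }
}
```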


Contexts

A context is the optimized prompt selected for a given AI Function.

  • Contexts are stored in .contextica/contexts.json (with backups).
  • At runtime, when a function is invoked, Contextica retrieves the stored context, injects input parameters, and sends the resulting prompt to the LLM.

This ensures repeatable and auditable behavior without manual prompt writing.
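The injection step can be pictured as simple placeholder substitution: the stored context contains named placeholders, and each one is replaced with the corresponding argument before the prompt is sent to the LLM. The following is a self-contained sketch, not Contextica's actual implementation; the `{input}` placeholder syntax is an assumption for illustration.

```java
import java.util.Map;

public class PromptInjection {
    // Replace each {name} placeholder in the stored context with its argument.
    static String inject(String template, Map<String, String> args) {
        String result = template;
        for (var e : args.entrySet()) {
            result = result.replace("{" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String context = "Classify the following text as Positive, Neutral, or Negative: {input}";
        System.out.println(inject(context, Map.of("input", "I love this library")));
        // prints: Classify the following text as Positive, Neutral, or Negative: I love this library
    }
}
```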


Optimization Engine

The optimization engine is responsible for generating and selecting the best prompt for each AI Function.

The process typically follows these steps:

  1. Inspect the task type.
  2. Choose an optimization and evaluation strategy.
  3. Generate initial candidate prompts.
  4. Produce variations where applicable.
  5. Evaluate candidates against developer-provided examples.
  6. Select and store the highest-scoring context.

For some task types (for example, TRANSLATE or CHAT), a predefined template is used instead of running a full optimization process.
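Steps 5 and 6 above can be sketched as a scoring loop over candidate prompts: evaluate each candidate against every developer-provided example, total the scores, and keep the winner. This is a self-contained toy with a pluggable scoring function; Contextica's real evaluator would run each candidate through the LLM and compare outputs against the expected results.

```java
import java.util.List;
import java.util.function.BiFunction;

public class CandidateSelection {
    record Example(String input, String expected) {}

    // Score every candidate against all examples and keep the highest total.
    static String selectBest(List<String> candidates,
                             List<Example> examples,
                             BiFunction<String, Example, Double> score) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String candidate : candidates) {
            double total = 0;
            for (Example ex : examples) {
                total += score.apply(candidate, ex);
            }
            if (total > bestScore) {
                bestScore = total;
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> candidates = List.of(
                "Classify: {input}",
                "Classify the sentiment of the following text: {input}");
        List<Example> examples = List.of(new Example("great product", "Positive"));
        // Toy scorer: prefer the longer template. A real evaluator would call
        // the LLM and compare its output to the example's expected label.
        String best = selectBest(candidates, examples,
                (prompt, ex) -> (double) prompt.length());
        System.out.println(best);
    }
}
```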