
Usage

This section explains the recommended workflow for implementing AI features with Contextica. The process is designed to keep development structured, repeatable, and optimized for both quality and efficiency.


When building an AI-powered feature, follow these steps:

  1. Split the feature into AI functions
    Break down the AI feature into smaller methods with the appropriate task types (e.g., GENERATE, CLASSIFY, SUMMARIZE).

  2. Assemble example data
    For task types that require optimization, prepare a small training set of input–output examples based on real-world use cases.

  3. Declare the AI function
    Define a Java interface annotated with @AIService, and add methods annotated with @AIFunction.

  4. Configure the function
    Create a configuration class annotated with @AIServiceConfiguration. For each function, add a method annotated with @AIFunctionConfiguration that returns a FunctionConfiguration.
    Include examples if the task type requires optimization.

  5. Run the context generation process
    Run the generate-contexts goal of the Maven plugin, for example with the following command:

     mvn contextica:generate-contexts

    Contextica will generate candidate prompts, evaluate them, and store the best-performing context.

  6. Analyze the report
    Review the generated report (stored in the /.contextica/reports directory) to ensure the prompts behave correctly and match your expectations.

  7. Lock the function if satisfied
    Once a function produces high-quality results, lock it. This prevents it from being regenerated in future runs, which saves tokens and cost and avoids regressions.

  8. Test at runtime
    Use the generated service in your application code and confirm it produces the desired results.

  9. Commit and deploy
    Commit the contexts.json file (stored in /.contextica/contexts) and your service code, then deploy your application as usual.

Declaring and Configuring Functions

Create an interface and annotate it with @AIService. Then, add AIFunctions with the @AIFunction annotation. For each AIFunction, create an AIFunctionConfiguration as follows:

@AIService(name = "classificationService", provider = LLMProvider.OPENAI, model = LLMModel.GPT_5_MINI)
public interface ClassificationService {

    @AIFunction(description = "Detect whether or not the input contains sensitive data")
    SensitiveDataClasses detectSensitiveData(String input);

    public enum SensitiveDataClasses {
        PERSONALLY_IDENTIFIABLE_INFORMATION,
        PROTECTED_HEALTH_INFORMATION,
        FINANCIAL_DATA,
        CONFIDENTIAL,
        NONE
    }
}

@AIServiceConfiguration(service = ClassificationService.class)
class ClassificationServiceConfiguration {

    @AIFunctionConfiguration
    public FunctionConfiguration detectSensitiveData() {
        List<Example> examples = List.of(
                Example.withInput("John Smith lives at 22 King Street, London, and his phone number is 0044-7749-123456")
                        .withOutput(SensitiveDataClasses.PERSONALLY_IDENTIFIABLE_INFORMATION.name()),
                Example.withInput("Our new hire, Emily Carter, has employee ID 45781")
                        .withOutput(SensitiveDataClasses.PERSONALLY_IDENTIFIABLE_INFORMATION.name()),
                Example.withInput("Patient Jane Doe was diagnosed with Type II diabetes and prescribed Metformin")
                        .withOutput(SensitiveDataClasses.PROTECTED_HEALTH_INFORMATION.name()),
                Example.withInput("The MRI scan shows early signs of multiple sclerosis")
                        .withOutput(SensitiveDataClasses.PROTECTED_HEALTH_INFORMATION.name()),
                Example.withInput("The customer’s credit card number is 4532 7812 9012 4456, expiry 05/27, CVV 842")
                        .withOutput(SensitiveDataClasses.FINANCIAL_DATA.name()),
                Example.withInput("Quarterly P&L shows €2.1M in revenue and €1.4M in expenses.")
                        .withOutput(SensitiveDataClasses.FINANCIAL_DATA.name()),
                Example.withInput("The unreleased product roadmap includes Project Falcon, targeting Q4 2025")
                        .withOutput(SensitiveDataClasses.CONFIDENTIAL.name()),
                Example.withInput("M&A discussions with company X are ongoing, valuation expected at €150M")
                        .withOutput(SensitiveDataClasses.CONFIDENTIAL.name()),
                Example.withInput("Let's schedule the marketing campaign review for next Tuesday at 3 PM")
                        .withOutput(SensitiveDataClasses.NONE.name()),
                Example.withInput("The office cafeteria will be closed on Friday for maintenance.")
                        .withOutput(SensitiveDataClasses.NONE.name())
        );
        return FunctionConfiguration.builder()
                .taskType(TaskType.CLASSIFY)
                .locked(false)
                .examples(examples)
                .build();
    }
}

Function Configuration properties

Property          Description
description       A human-readable explanation of what the function does.
taskType          The type of AI task to perform (e.g., GENERATE, SUMMARIZE, CLASSIFY).
context           A user-defined prompt that is sent to the LLM. This overrides any default prompt template or generated prompt.
instructions      Defines how the output should be structured, restricted, and formatted. Includes rules, restrictions, structure, format, keywords, and reading level.

Instructions properties

Property          Description
rules             Guidelines the model should follow when generating output (e.g., “cite sources,” “quantify impacts”).
restrictions      Hard constraints on what the model must not do (e.g., “do not fabricate numbers”).
structure         Defines the output sections and their content expectations (e.g., Executive Summary, Valuation).
format            Specifies the output format (e.g., MARKDOWN, JSON).
targetAudience    Specifies the target audience for the generated output.
roleContext       Specifies the role the LLM should assume.
keywords          Domain-specific terms to bias the output toward.
readingLevel      Intended reading level or target audience sophistication (e.g., PROFESSIONAL).
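
Most of these properties are exercised in the GENERATE example below; keywords is not, so here is a minimal sketch of an Instructions block that also sets it. This assumes keywords accepts a list of strings; verify against your Contextica version.

Instructions instructions = Instructions.builder()
        .rules(List.of("Use the provided terminology consistently."))
        .format(OutputFormat.MARKDOWN)
        .readingLevel(ReadingLevel.PROFESSIONAL)
        .roleContext("Act as a financial analyst.")
        // Assumption: keywords(...) takes a List<String>.
        .keywords(List.of("EBITDA", "free cash flow", "net revenue retention"))
        .build();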

Task types

Generate

The GENERATE task type is used to produce a larger piece of text from a shorter input, usually following a set of instructions. For example: generating a financial report section, or producing a product description from product specifications.

Example AIFunction:

@AIFunction(description = "Generates a basic financial report for the specified ticker symbol")
String generateFinancialReport(String ticker);

Example AIFunctionConfiguration:

@AIFunctionConfiguration
public FunctionConfiguration generateFinancialReport() {
    return FunctionConfiguration.builder()
            .description("Generate a basic financial research report on the company with the specified ticker symbol")
            .taskType(TaskType.GENERATE)
            .instructions(Instructions.builder()
                    .rules(List.of(
                            "Be precise and concise.",
                            "Cite sources inline and include a References section."
                    ))
                    .restrictions(List.of(
                            "Do NOT fabricate data.",
                            "Do NOT provide personal financial advice."
                    ))
                    .structure(OutputStructure.builder()
                            .addSection("Executive Summary", "One paragraph thesis...")
                            .addSection("Company Overview", "Business model, revenue streams...")
                            .addSection("Valuation", "DCF, comps, sensitivities...")
                            .build())
                    .format(OutputFormat.MARKDOWN)
                    .readingLevel(ReadingLevel.PROFESSIONAL)
                    .roleContext("Act as a senior equity research analyst.")
                    .targetAudience("Audience is institutional investors.")
                    .build())
            .build();
}

Summarize

The SUMMARIZE task type is straightforward and essentially the opposite of GENERATE: it condenses a large piece of input text into a shorter summary.

Example AIFunction:

@AIFunction(description = "Generates a summary of the specified input text")
String generateSummary(String text);

Example AIFunctionConfiguration:

@AIFunctionConfiguration
public FunctionConfiguration generateSummary() {
    return FunctionConfiguration.builder()
            .taskType(TaskType.SUMMARIZE)
            .locked(true)
            .examples(List.of(
                    Example.withInput("The cat chased a mouse across the garden, but eventually got tired and fell asleep under a tree.")
                            .withOutput("The cat chased a mouse but fell asleep under a tree."),
                    Example.withInput("Albert Einstein developed the theory of relativity, which changed how scientists understood space and time.")
                            .withOutput("Einstein developed the theory of relativity."),
                    Example.withInput("The company launched a new smartphone with improved battery life, faster performance, and a better camera.")
                            .withOutput("The company launched a new smartphone with major improvements."),
                    Example.withInput("During the storm, many trees fell and power lines were damaged, leaving thousands without electricity.")
                            .withOutput("The storm caused tree falls and power outages."),
                    Example.withInput("Leonardo da Vinci was a painter, inventor, and thinker, best known for the Mona Lisa and his scientific sketches.")
                            .withOutput("Leonardo da Vinci was a painter and inventor known for the Mona Lisa.")
            ))
            .build();
}

Extract

The EXTRACT task type is similar to SUMMARIZE: it also produces a smaller piece of output text from a larger piece of input text. The difference is that it extracts specific information from the (unstructured) input text rather than summarizing it.

Example AIFunction:

@AIFunction(description = "Extracts information related to a specific subject from the source text")
String extractInformation(@ContextParameter(name = "input") String text,
                          @ContextParameter(name = "subject") String subject);

Example AIFunctionConfiguration:

@AIFunctionConfiguration
public FunctionConfiguration extractInformation() {
    List<Example> examples = List.of(
            Example.withInputs(Map.of(
                    "subject", "invoice total",
                    "input", "Thank you for your purchase from Northwind. Invoice #AC-9041 was issued today for the laptop. Total due: $1,284.90 (USD). Please pay by 15 September."
            )).withOutput("Total due: $1,284.90 (USD)."),

            Example.withInputs(Map.of(
                    "subject", "meeting time",
                    "input", "Product kickoff is scheduled for next week. The kickoff is at 14:30 CET on 3 Oct 2025. We'll meet in room Atlas. Please arrive 10 minutes early."
            )).withOutput("14:30 CET on 3 Oct 2025"),

            Example.withInputs(Map.of(
                    "subject", "tracking number",
                    "input", "Your order left the warehouse this morning. Carrier: UPS. Tracking No.: 1Z999AA10123456784. It should arrive by Friday."
            )).withOutput("1Z999AA10123456784"),

            Example.withInputs(Map.of(
                    "subject", "contact email",
                    "input", "Our support hours run 9-17 CET. Reach us at support@acme.io for critical issues. Non-urgent questions can go to the forum. We usually reply within a day."
            )).withOutput("support@acme.io"),

            Example.withInputs(Map.of(
                    "subject", "forecast high temperature",
                    "input", "A heatwave is moving into Lisbon. Highs will reach 31°C with dry winds. Nights cool to 19°C. Consider early morning workouts."
            )).withOutput("31°C"),

            Example.withInputs(Map.of(
                    "subject", "CEO name",
                    "input", "After the board vote concluded, the company announced new leadership. New CEO: Priya Raman, effective immediately. She replaces the interim chief. Markets reacted positively."
            )).withOutput("Priya Raman")
    );

    return FunctionConfiguration.builder()
            .taskType(TaskType.EXTRACT)
            .locked(true)
            .examples(examples)
            .build();
}

Classify

CLASSIFY is a very common task type that assigns a piece of input text to one of a set of categories. To use the CLASSIFY task type you must specify an enum with the available categories, as well as a training set of examples so Contextica can generate an optimized prompt.

Example AIFunction:

@AIFunction(description = "Detect whether or not the input contains sensitive data")
SensitiveDataClasses detectSensitiveData(String input);

Example AIFunctionConfiguration:

@AIFunctionConfiguration
public FunctionConfiguration detectSensitiveData() {
    List<Example> examples = List.of(
            Example.withInput("John Smith lives at 22 King Street, London, and his phone number is 0044-7749-123456")
                    .withOutput(SensitiveDataClasses.PERSONALLY_IDENTIFIABLE_INFORMATION.name()),
            Example.withInput("Our new hire, Emily Carter, has employee ID 45781")
                    .withOutput(SensitiveDataClasses.PERSONALLY_IDENTIFIABLE_INFORMATION.name()),
            Example.withInput("Patient Jane Doe was diagnosed with Type II diabetes and prescribed Metformin")
                    .withOutput(SensitiveDataClasses.PROTECTED_HEALTH_INFORMATION.name()),
            Example.withInput("The MRI scan shows early signs of multiple sclerosis")
                    .withOutput(SensitiveDataClasses.PROTECTED_HEALTH_INFORMATION.name()),
            Example.withInput("The customer’s credit card number is 4532 7812 9012 4456, expiry 05/27, CVV 842")
                    .withOutput(SensitiveDataClasses.FINANCIAL_DATA.name()),
            Example.withInput("Quarterly P&L shows €2.1M in revenue and €1.4M in expenses.")
                    .withOutput(SensitiveDataClasses.FINANCIAL_DATA.name()),
            Example.withInput("The unreleased product roadmap includes Project Falcon, targeting Q4 2025")
                    .withOutput(SensitiveDataClasses.CONFIDENTIAL.name()),
            Example.withInput("M&A discussions with company X are ongoing, valuation expected at €150M")
                    .withOutput(SensitiveDataClasses.CONFIDENTIAL.name()),
            Example.withInput("Let’s schedule the marketing campaign review for next Tuesday at 3 PM")
                    .withOutput(SensitiveDataClasses.NONE.name()),
            Example.withInput("The office cafeteria will be closed on Friday for maintenance.")
                    .withOutput(SensitiveDataClasses.NONE.name())
    );
    return FunctionConfiguration.builder()
            .taskType(TaskType.CLASSIFY)
            .locked(true)
            .examples(examples)
            .build();
}

Transform

TRANSFORM is a straightforward task type that takes an input text and transforms it into another data format such as JSON, Markdown, or HTML. Every AIFunction of type TRANSFORM takes two context parameters: an input and a format. Without these two parameters, the AIFunction will not work correctly.

Example AIFunction:

@AIFunction(description = "Transform text into the specified format")
String transform(@ContextParameter(name = "input") String text,
                 @ContextParameter(name = "format") String format);

Example AIFunctionConfiguration:

@AIFunctionConfiguration
public FunctionConfiguration transform() {
    return FunctionConfiguration.builder()
            .taskType(TaskType.TRANSFORM)
            .build();
}

The TRANSFORM task type will use a default template. It will not be optimized by the context-generation process. Therefore, it does not require examples.
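
At runtime, a TRANSFORM function is invoked like any other AIFunction. A minimal sketch, assuming the transform method above is declared in an interface named TransformationService (the interface name is illustrative):

// Hypothetical interface name; transform(...) matches the AIFunction declared above.
TransformationService transformationService = Contextica.createService(TransformationService.class);
String json = transformationService.transform("Name: Ada Lovelace, Role: Engineer", "JSON");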

Translate

TRANSLATE is another straightforward task type that takes an input text and translates it into another language. Every AIFunction of type TRANSLATE takes two context parameters: an input and a targetLanguage. Without these two parameters, the AIFunction will not work correctly.

Example AIFunction:

@AIFunction(description = "Translates a piece of text into a different language")
String translate(@ContextParameter(name = "input") String text,
                 @ContextParameter(name = "targetLanguage") String language);

Example AIFunctionConfiguration:

@AIFunctionConfiguration
public FunctionConfiguration translate() {
    return FunctionConfiguration.builder()
            .taskType(TaskType.TRANSLATE)
            .build();
}

The TRANSLATE task type will use a default template. It will not be optimized by the context-generation process. Therefore, it does not require examples.
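
Runtime usage follows the same pattern. A minimal sketch, assuming the translate method above lives in an interface named TranslationService (an illustrative name):

// Hypothetical interface name; translate(...) matches the AIFunction declared above.
TranslationService translationService = Contextica.createService(TranslationService.class);
String dutch = translationService.translate("Good morning, how can I help you?", "Dutch");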

Chat

The CHAT task type is the simplest of all: it sends the input text directly to the LLM. It does not use a default or an optimized prompt.

Example AIFunction:

@AIFunction(description = "Answers a user's query directly")
String getResponse(String message);

Example AIFunctionConfiguration:

@AIFunctionConfiguration
public FunctionConfiguration getResponse() {
    return FunctionConfiguration.builder()
            .taskType(TaskType.CHAT)
            .build();
}

This is most useful when providing a chat-based UI to the end user. It is also recommended to combine it with streaming (see Streaming responses below).
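
Like any other AIFunction, getResponse needs to live in an @AIService interface. A minimal sketch, reusing the ChatService name that appears in the streaming example below (provider and model are placeholders, pick whatever your project uses):

@AIService(name = "chatService", provider = LLMProvider.OPENAI, model = LLMModel.GPT_5_MINI)
public interface ChatService {

    @AIFunction(description = "Answers a user's query directly")
    String getResponse(String message);
}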

Context generation process

By default, Contextica will determine the prompt generation strategy as follows:

Task Type         Generation strategy
GENERATE          Generate & optimize
SUMMARIZE         Generate & optimize
EXTRACT           Generate & optimize
CLASSIFY          Generate & optimize
TRANSFORM         Use default template
TRANSLATE         Use default template
CHAT              No prompt generated

When Contextica optimizes prompts, the generation process may not produce the desired prompt. This is almost always the result of a flawed set of training examples.
To help Contextica generate the right prompts, follow these guidelines:

Provide enough training examples
Provide a large enough set of training examples so the context generation engine can produce a prompt that is fit for your real-world use cases. For example, when optimizing a classification task, make sure you have positive and negative examples for each classification category.

Provide varied training examples
Provide a varied set of training examples so that the LLM does not overfit to your examples.

Avoid unclear or ambiguous examples
If the examples are vague or contradictory, the LLM will struggle to identify the intended task. Ensure inputs and outputs are consistent and unambiguous.

Balance positive and negative cases
If only positive or ideal examples are provided, the LLM may fail when unexpected inputs appear. Include edge cases, invalid inputs, and counterexamples (see the sketch after this list).

Prevent over-similarity in examples
If examples are too similar, the LLM may overfit and fail to generalize. Vary sentence structures, word choice, and scenarios to reflect real-world diversity.

Check for mismatched input-output pairs
Sometimes the output doesn’t actually follow logically from the input. Verify each training example carefully before adding it.

Avoid domain drift
Examples that don’t match the production domain cause the LLM to underperform in practice. Build your training set from real data that matches your end-user environment.

Keep examples concise and relevant
Examples that are excessively long or noisy may confuse optimization. Keep them focused on the intended task and strip out irrelevant details.
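
As an illustration of balanced, varied examples, the sketch below extends the sensitive-data classifier with a near-miss counterexample: text that mentions money but discloses no actual financial data. The inputs are invented for illustration; the Example API is the same one used throughout this page.

List<Example> balancedExamples = List.of(
        // Positive case: real financial data.
        Example.withInput("Wire transfer of €48,000 to IBAN DE89 3704 0044 0532 0130 00 was approved.")
                .withOutput(SensitiveDataClasses.FINANCIAL_DATA.name()),
        // Near-miss counterexample: mentions money, but nothing sensitive is disclosed.
        Example.withInput("The team debated whether the conference budget should be increased next year.")
                .withOutput(SensitiveDataClasses.NONE.name())
);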

Locking functions

If you are satisfied with the prompt generated by the generation process, you can lock an AIFunction. This will cause the function to be skipped during future context generation runs.

If you do not lock functions, the prompts will be regenerated on subsequent context generation runs. This results in longer context generation runs, wasted input/output tokens and money, and potentially prompts that perform worse than the previous version.

@AIFunctionConfiguration
public FunctionConfiguration generateProductDescription() {
    return FunctionConfiguration.builder()
            .description("Generate a product description based on the product specifications provided")
            .taskType(TaskType.GENERATE)
            .locked(true) // Setting locked to true skips context generation for this function
            .examples(...) // Optimization examples go here
            .build();
}

Overriding context generation

Contextica also provides the option to override the use of default or generated prompts. You can do this by specifying the context property on the FunctionConfiguration returned by the @AIFunctionConfiguration method. This causes Contextica to skip all prompt generation logic: it will not use a default or optimized prompt. It also means you are responsible for referencing the correct input parameter names in your custom context.

Example AIFunction:

@AIFunction(description = "AIFunction with a custom context specified by the developer")
String customQuery(String input);

Example AIFunctionConfiguration:

@AIFunctionConfiguration
public FunctionConfiguration customQuery() {
    String context = """
            Answer the following question: ${input}
            """;
    return FunctionConfiguration.builder()
            .context(context)
            .build();
}
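
When the custom context takes more than one parameter, the ${...} placeholders have to line up with the @ContextParameter names on the method. A minimal sketch under that assumption; the answerAboutDocument function is invented for illustration.

// In the @AIService interface: one @ContextParameter per placeholder used in the context.
@AIFunction(description = "Answers a question about a given document")
String answerAboutDocument(@ContextParameter(name = "document") String document,
                           @ContextParameter(name = "question") String question);

// In the @AIServiceConfiguration class: placeholder names match the parameter names exactly.
@AIFunctionConfiguration
public FunctionConfiguration answerAboutDocument() {
    String context = """
            Using only the document below, answer the question.

            Document: ${document}
            Question: ${question}
            """;
    return FunctionConfiguration.builder()
            .context(context)
            .build();
}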

Runtime Usage

Once you have declared your AIFunction and generated the contexts, you can use it in your application as follows:

ClassificationService classificationService = Contextica.createService(ClassificationService.class);
SensitiveDataClasses category = classificationService.detectSensitiveData("...");

In the code above, you first create the Contextica service and then invoke the detectSensitiveData() method. Behind the scenes, Contextica automatically selects the optimized generated prompt for this function, injects the input parameter into the prompt, sends the request to the LLM, and parses the response into the enum value.
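
Functions with multiple @ContextParameter arguments are invoked in exactly the same way. A minimal sketch, assuming the extractInformation function shown earlier is declared in an interface named ExtractionService (an illustrative name):

// Hypothetical interface name; extractInformation(...) matches the AIFunction from the EXTRACT section.
ExtractionService extractionService = Contextica.createService(ExtractionService.class);
String trackingNumber = extractionService.extractInformation(
        "Your order left the warehouse this morning. Carrier: UPS. Tracking No.: 1Z999AA10123456784.",
        "tracking number");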

Streaming responses

Contextica provides support for streaming LLM responses through a class called StreamingContext. This is especially useful for use cases where an end user is waiting for the response (e.g., in a chat-based interface).

Here is an example of a Spring Boot controller that uses streaming:

@GetMapping(value = "/generate", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public SseEmitter generate(@RequestParam String input) {
    ChatService chatService = Contextica.createService(ChatService.class);

    SseEmitter emitter = new SseEmitter(0L);

    CompletableFuture.runAsync(() -> {
        try {
            StreamingContext.run(true, chunk -> {
                try {
                    emitter.send(SseEmitter.event().data(chunk));
                } catch (Exception ignored) {}
            }, () -> chatService.getResponse(input));
            emitter.complete();
        } catch (Exception e) {
            emitter.completeWithError(e);
        }
    });

    return emitter;
}

This example combines a Server-Sent Events emitter provided by the Spring framework with the StreamingContext and AIService provided by Contextica.