Introduction

Overview

Contextica is a declarative context programming library for Java. It enables developers to declare AI functions as plain Java interfaces and methods, which are then automatically converted into optimized prompts for large language models (LLMs).

Instead of manually designing and tuning prompts, developers can focus on defining the intent of their functions. Contextica handles prompt generation, optimization, evaluation, and execution at runtime. This leads to faster development, higher-quality outputs, and improved consistency across applications.

Key characteristics of Contextica include:

Declarative
Define AI function intent with Java annotations and configuration instead of hand-crafting prompts.

Optimized
Automatically generate multiple prompt candidates, evaluate them against examples, and select the best-performing prompt.

Observable
Generate HTML reports for full traceability of the optimization process.

Portable
Use in any JVM application or integrate with Spring Boot microservices without additional infrastructure.


Key Concepts

Understanding the following concepts will help you get the most out of Contextica:

AI Functions
Plain Java methods annotated with @AIFunction inside an @AIService interface. These methods describe what you want the LLM to do.
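A minimal sketch of what such a declaration might look like. To keep the example self-contained, the annotation declarations below are placeholders standing in for Contextica's own `@AIService` and `@AIFunction`; in a real project you would import them from the library instead.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Placeholder annotations standing in for Contextica's, so this
// sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface AIService {}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface AIFunction {}

// An AI function is a plain method on an annotated interface.
// The method declares intent only -- there is no prompt text here;
// the prompt is generated and optimized by the library.
@AIService
interface ReviewService {
    @AIFunction
    String summarizeReview(String reviewText);
}
```

Note that the interface carries no implementation: the method signature and annotations are the entire declaration.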

Function Configuration
Per-method configuration defined via @AIFunctionConfiguration, which lets developers supply examples for the prompt optimization process and additional instructions for the LLM.

Task Types
Built-in task types such as GENERATE, SUMMARIZE, EXTRACT, CLASSIFY, TRANSFORM, TRANSLATE, and CHAT. These cover common use cases and guide the optimization engine when generating prompts.
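A sketch of how a task type and optimization examples might be attached to a function. The annotation shape below (`taskType` and `examples` elements, and the `->` convention inside the example strings) is an assumption written for this illustration; consult the Contextica reference for the library's actual configuration elements.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// The built-in task types listed above, reproduced here so the
// sketch compiles on its own.
enum TaskType { GENERATE, SUMMARIZE, EXTRACT, CLASSIFY, TRANSFORM, TRANSLATE, CHAT }

// Placeholder for Contextica's per-method configuration annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface AIFunctionConfiguration {
    TaskType taskType();
    // Input/expected-output pairs used during prompt optimization.
    String[] examples() default {};
}

interface TicketClassifier {
    // The task type guides the optimization engine; the examples are
    // what candidate prompts are evaluated against.
    @AIFunctionConfiguration(
        taskType = TaskType.CLASSIFY,
        examples = {
            "My card was charged twice -> billing",
            "The app crashes on startup -> bug"
        }
    )
    String categorize(String ticketText);
}
```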

Optimization Engine
The component that creates candidate prompts, applies systematic mutations, evaluates the results against the provided examples, and selects the best-performing prompt. This process ensures that the selected prompt behaves consistently and reliably for the defined task.
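The evaluate-and-select step can be pictured as the toy loop below. The class, the scoring scheme (exact-match fraction), and the `runModel` callback that stands in for an LLM call are all illustrative assumptions, not Contextica's internal API; the real engine also mutates candidates and evaluates actual model output.

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.BinaryOperator;

// Toy sketch of candidate evaluation and selection. `runModel` stands in
// for an LLM call: given a prompt template and an input, it returns output.
class PromptSelector {
    record Example(String input, String expected) {}

    // Score a candidate prompt as the fraction of examples whose expected
    // output it reproduces exactly.
    static double score(String candidate, List<Example> examples,
                        BinaryOperator<String> runModel) {
        long correct = examples.stream()
                .filter(e -> runModel.apply(candidate, e.input).equals(e.expected))
                .count();
        return (double) correct / examples.size();
    }

    // Keep the highest-scoring candidate.
    static String selectBest(List<String> candidates, List<Example> examples,
                             BinaryOperator<String> runModel) {
        return candidates.stream()
                .max(Comparator.comparingDouble(c -> score(c, examples, runModel)))
                .orElseThrow();
    }
}
```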

Contexts
The optimized prompts selected by the engine. These are stored and reused at runtime, ensuring stable results and repeatable application behavior.
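The store-and-reuse behavior can be illustrated with the simplified stand-in below; the class and method names are hypothetical, not Contextica's API. The point is only the pattern: the winning prompt is persisted once after optimization and looked up on every subsequent call rather than regenerated, which is what keeps runtime behavior stable.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the runtime context store. After an
// optimization run, the selected prompt is saved per function and
// reused on every call instead of being regenerated.
class ContextStore {
    private final Map<String, String> optimizedPrompts = new ConcurrentHashMap<>();

    // Store the winning prompt produced by the optimization run.
    void save(String functionName, String prompt) {
        optimizedPrompts.put(functionName, prompt);
    }

    // At runtime, look up the stored prompt for a function.
    String contextFor(String functionName) {
        return optimizedPrompts.get(functionName);
    }
}
```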


Why Contextica

Modern generative AI development often suffers from duplicated prompt logic, inconsistent quality, and a lack of governance. Contextica addresses these issues by providing a structured and repeatable approach:

Consistency
Centralize and standardize prompt logic across teams and projects.

Velocity
Reduce boilerplate code and shorten development cycles with a declarative API and automated context generation.

Quality
Improve LLM output quality through algorithmic evaluation and selection of the most effective prompts.

Governance
Maintain auditability and versioning through generated reports and stored prompt contexts.

Maintenance
Adapt easily to changes in underlying LLM providers or models by rerunning the optimization process, without rewriting prompt logic.

In short, Contextica provides a robust foundation for building enterprise-grade generative AI applications in Java.


Next Steps

To get started with Contextica:

  • Install the library and dependencies
  • Define your first AI service and function
  • Run the optimization process
  • Use your AI function

See the Getting Started guide for detailed instructions.