Frequently Asked Questions

This page answers common questions about setup, optimization, providers, and licensing.


Common Setup Issues

Q: My AI service cannot be created.
A: Ensure your interface is annotated with @AIService. Without this annotation, Contextica cannot create the service.
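As a minimal sketch, a service interface might look like the following. The annotation names come from this FAQ; the package, interface, and method names are illustrative assumptions, not the library's actual API.

```java
import ai.contextica.annotations.AIService;   // hypothetical package name
import ai.contextica.annotations.AIFunction;  // hypothetical package name

// The @AIService annotation on the interface is what allows Contextica
// to create the service at runtime.
@AIService
public interface SentimentService {

    // Each AI-backed method is marked with @AIFunction.
    @AIFunction
    String classifySentiment(String review);
}
```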

Q: I see an error about missing @AIFunctionConfiguration.
A: Every @AIFunction requires a corresponding configuration method annotated with @AIFunctionConfiguration. Add a configuration for each function.
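The pairing could be sketched as below. Only the annotation names and the one-configuration-per-function rule come from this FAQ; the static method convention, the FunctionConfiguration type, and the builder-style API are assumptions for illustration.

```java
@AIService
public interface SentimentService {

    @AIFunction
    String classifySentiment(String review);

    // Hypothetical companion configuration for the function above.
    // Without a method like this, service creation fails with the
    // missing-@AIFunctionConfiguration error.
    @AIFunctionConfiguration
    static FunctionConfiguration classifySentimentConfiguration() {
        return FunctionConfiguration.builder()   // builder API is assumed
                .build();
    }
}
```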

Q: The application complains about a missing API key.
A: Add your provider credentials to application.properties, for example:
contextica.llm.openai.api.key=sk-...

Q: Why is my AI function not optimizing?
A: Some task types (TRANSLATE, CHAT, TRANSFORM) use predefined templates and do not go through optimization.
In addition, a function's @AIFunctionConfiguration can set the property locked=true, which freezes the current context and prevents further optimization.
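A locked configuration could be sketched as follows. The locked property is named in this FAQ; the configuration type and builder-style API are illustrative assumptions.

```java
// Hypothetical configuration with locked=true: once locked, Contextica
// keeps the stored context as-is and skips further optimization runs
// for this function.
@AIFunctionConfiguration
static FunctionConfiguration summarizeConfiguration() {
    return FunctionConfiguration.builder()   // builder API is assumed
            .locked(true)
            .build();
}
```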


Context Optimization Tips

Q: How can I reduce costs during context generation?
A: Be deliberate in choosing your examples. More examples increase LLM calls, which means higher costs and longer optimization runs. Use enough examples for generalization, but avoid unnecessarily large sets.

Q: How do I ensure good optimization quality?
A: Pick the right task type and carefully design your examples. Too few examples may cause overfitting, while poorly designed examples can lead to weak prompts.

Q: Should I rerun optimization when switching models?
A: Yes. When changing the underlying LLM model, rerun the context generation process to adapt prompts to the new model's behavior.


Provider Support

Q: Which LLM providers are supported?
A: Currently only OpenAI is supported. Support for more providers and models is coming soon.

Q: How do I know which models are available?
A: Check the LLMModel enum in the library. Some evaluation strategies, such as embeddings-based evaluation, require providers that expose embeddings.
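To inspect the available models programmatically, you could iterate the enum's values. Only the LLMModel enum name comes from this FAQ; the package and the surrounding class are illustrative assumptions.

```java
import ai.contextica.llm.LLMModel;   // hypothetical package name

public class ListModels {
    public static void main(String[] args) {
        // Print every model constant declared in the LLMModel enum.
        for (LLMModel model : LLMModel.values()) {
            System.out.println(model.name());
        }
    }
}
```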


Usage and Maintenance

Q: Do I have to write prompts manually?
A: No. You only declare intent via annotations and configuration. Contextica generates, optimizes, and stores the prompts automatically.

Q: Where are optimized contexts stored?
A: By default, in .contextica/contexts.json; automatic backups are available in the Enterprise Edition only. The stored context is automatically injected at runtime.

Q: Can I audit or review what Contextica generated?
A: Yes. Reports are produced during context generation and stored in the .contextica/reports directory. The Community Edition provides basic reporting, while the Enterprise Edition provides advanced reporting and history.


Licensing and Editions

Q: Can I use the Community Edition in production?
A: No. The Community Edition is free but limited to non-commercial, personal, and research use.

Q: How is the Enterprise Edition licensed?
A: It is licensed on a per-developer-seat basis. Every developer using Contextica must have a seat.

Q: Who do I contact for support or licensing questions?
A: Contact Agensys for licensing, sales, and support inquiries.