Overview #
AI Configuration in DnXT Administrator manages the artificial intelligence features integrated into the DnXT Suite. DnXT AI provides capabilities such as intelligent document analysis, content suggestions, automated quality checks, and retrieval-augmented generation (RAG) for contextual assistance. This guide covers how to configure AI providers, models, RAG settings, and tenant-level AI policies.
All AI configuration is accessed through the AI Configuration sub-tabs within the Configurations module. The configuration is organized into five sub-tabs: AI Settings, Providers, Models, Tenant Config, and Audit Logs.
Accessing AI Configuration #
- Log in to DnXT Administrator.
- Click Configurations in the left sidebar.
- Select the AI Configuration tab.
- The AI Configuration view displays five sub-tabs: AI Settings, Providers, Models, Tenant Config, and Audit Logs.
AI Settings #
The AI Settings sub-tab is the primary configuration panel for enabling and configuring AI capabilities. It is divided into several sections.
Provider Selection #
Select which AI service provider DnXT should use for AI operations. Available options:
| Provider | Description | Requirements |
|---|---|---|
| Azure OpenAI | Microsoft Azure-hosted OpenAI models with enterprise security and compliance | Azure subscription with OpenAI service deployed |
| OpenAI | Direct connection to OpenAI’s API service | OpenAI API key with appropriate model access |
| Local | Self-hosted AI models running on your own infrastructure | Local model server with compatible API endpoint |
Azure OpenAI Settings #
When Azure OpenAI is selected as the provider, configure the following:
| Field | Description | Example |
|---|---|---|
| Endpoint | The Azure OpenAI resource endpoint URL | https://yourcompany-openai.openai.azure.com/ |
| API Key | The Azure OpenAI API key | (masked) |
| Deployment Name | The name of the deployed model in your Azure resource | gpt-4-turbo |
| API Version | The Azure OpenAI API version to use | 2024-02-01 |
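For reference, the Endpoint, Deployment Name, and API Version fields combine into the request URL that the Azure OpenAI REST API expects; the API Key travels separately in the `api-key` request header. A minimal sketch, using the example values from the table:

```python
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Combine the Endpoint, Deployment Name, and API Version fields into
    the chat-completions URL that Azure OpenAI expects. The API Key is not
    part of the URL; it is sent in the "api-key" request header."""
    return (f"{endpoint.rstrip('/')}/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

url = azure_chat_url("https://yourcompany-openai.openai.azure.com/",
                     "gpt-4-turbo", "2024-02-01")
```

If an AI feature fails after saving, comparing this constructed URL against your Azure resource's Keys and Endpoint page is a quick way to spot a mistyped endpoint or deployment name.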
OpenAI Settings #
When OpenAI is selected as the provider, configure:
| Field | Description |
|---|---|
| API Key | Your OpenAI API key |
| Organization ID | Your OpenAI organization identifier (optional) |
| Model | The model to use (e.g., gpt-4, gpt-4-turbo, gpt-3.5-turbo) |
Local Model Settings #
When Local is selected as the provider, configure:
| Field | Description |
|---|---|
| Endpoint URL | The URL of your local model server (must be accessible from the DnXT server) |
| Model Name | The name of the model as recognized by the local server |
| Authentication | API key or other authentication method (if required by the local server) |
RAG Settings #
RAG (Retrieval-Augmented Generation) enhances AI responses by providing relevant context from your organization’s document repository. When a user asks DnXT AI a question, the system first retrieves relevant documents from your knowledge base and includes them as context for the AI model.
RAG configuration includes:
- Enable RAG — Toggle RAG functionality on or off
- Knowledge Base Path — The repository path containing documents to index for RAG
- Chunk Size — The size of each text chunk used for indexing; smaller chunks retrieve more precisely, larger chunks carry more surrounding context
- Overlap Size — The amount of text shared between adjacent chunks, so that context is not lost at chunk boundaries
- Top K Results — The number of relevant chunks to retrieve for each query
- Embedding Model — The model used to generate vector embeddings for semantic search
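The interplay of Chunk Size and Overlap Size can be sketched as a simple character-based splitter (an illustration only; DnXT's actual indexer may measure chunks in tokens rather than characters):

```python
def chunk_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text into fixed-size chunks for indexing; each chunk shares
    `overlap` characters with the previous one to preserve continuity."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far the window advances each time
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("abcdefghij", chunk_size=4, overlap=2)
# Each chunk begins with the last 2 characters of the previous chunk.
```

With these semantics, a 10,000-character document split with a chunk size of 1,000 and an overlap of 200 yields 13 chunks, so increasing overlap raises index size as well as retrieval continuity.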
Configuring AI Settings #
- Navigate to AI Configuration > AI Settings.
- Select your Provider from the dropdown.
- Fill in the provider-specific settings (endpoint, API key, and model or deployment name).
- Configure RAG Settings if you want contextual AI responses.
- Click Save.
Providers #
The Providers sub-tab displays a registry of all AI service providers that have been configured in the system. It provides an overview of each provider’s status, endpoints, and configuration health.
This tab is primarily informational — use it to verify that providers are correctly configured and to troubleshoot connectivity issues.
Models #
The Models sub-tab lists all AI models available through the configured providers. Each model entry shows:
| Field | Description |
|---|---|
| Model Name | The identifier of the model |
| Provider | Which provider hosts this model |
| Type | The model type (e.g., Chat, Embedding, Completion) |
| Status | Whether the model is active and available for use |
Administrators can enable or disable specific models to control which capabilities are available to end users.
Tenant Config #
The Tenant Config sub-tab allows administrators to set AI policies at the tenant level. This is particularly useful in multi-tenant environments where different organizations may have different AI usage requirements.
Tenant-level AI configuration includes:
- AI Enabled — Enable or disable AI features for the tenant
- Allowed Models — Restrict which models are available to the tenant’s users
- Usage Limits — Set rate limits or token quotas for AI usage
- Data Processing Agreement — Track acceptance of data processing terms
Audit Logs #
The Audit Logs sub-tab provides a dedicated log of all AI-related activity. Unlike the main Audit Trail (which captures all system events), the AI Audit Logs focus specifically on AI interactions.
What Gets Logged #
- Every AI query submitted by a user
- The model and provider used for each query
- Token usage (input and output tokens)
- Response times
- Errors and failed requests
- RAG retrieval details (which documents were used as context)
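One way to picture a single log entry is as a record carrying the fields above. The sketch below is purely illustrative; the field names are assumptions, not DnXT's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIAuditEntry:
    """Hypothetical shape of one AI audit log entry (field names assumed)."""
    user: str
    model: str
    provider: str
    input_tokens: int
    output_tokens: int
    latency_ms: float
    status: str                                      # e.g. "ok" or an error code
    rag_sources: list = field(default_factory=list)  # documents used as context

    @property
    def total_tokens(self) -> int:
        """Combined usage, the figure most relevant for cost tracking."""
        return self.input_tokens + self.output_tokens

entry = AIAuditEntry("alice", "gpt-4-turbo", "azure", 120, 80, 950.0, "ok")
```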
Using AI Audit Logs #
- Navigate to AI Configuration > Audit Logs.
- Browse the log entries, which are displayed in reverse chronological order.
- Use filters to narrow results by date, user, model, or status.
- Click any log entry to expand its details (query text, response, token counts, latency).
Setting Up AI: Step-by-Step #
Azure OpenAI Setup #
- Create an Azure OpenAI resource in the Azure Portal.
- Deploy a model (e.g., GPT-4 Turbo) within the resource.
- Copy the Endpoint and API Key from the Azure resource’s Keys and Endpoint page.
- In DnXT, navigate to AI Configuration > AI Settings.
- Select Azure OpenAI as the provider.
- Paste the Endpoint, API Key, and Deployment Name.
- Click Save.
- Test by using an AI feature in DnXT Publisher (e.g., AI-powered document analysis).
OpenAI Direct Setup #
- Create an OpenAI account and generate an API key at platform.openai.com.
- In DnXT, navigate to AI Configuration > AI Settings.
- Select OpenAI as the provider.
- Enter the API Key and optionally the Organization ID.
- Select the desired Model.
- Click Save.
Local Model Setup #
- Deploy a compatible AI model on your infrastructure (e.g., using Ollama, vLLM, or a custom inference server).
- Ensure the model server exposes an OpenAI-compatible API endpoint.
- In DnXT, navigate to AI Configuration > AI Settings.
- Select Local as the provider.
- Enter the Endpoint URL of your model server.
- Enter the Model Name.
- Click Save.
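Step 2 above requires an OpenAI-compatible endpoint, which means the server must accept the standard chat-completions request shape. A minimal smoke-test sketch (the `localhost:11434` address is Ollama's default and is only an example):

```python
import json

def local_chat_request(endpoint_url: str, model_name: str, prompt: str):
    """Build the URL and JSON body for a minimal OpenAI-compatible
    chat-completions request against a local model server."""
    url = f"{endpoint_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        "model": model_name,  # must match the name the local server reports
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = local_chat_request("http://localhost:11434", "llama3", "ping")
```

Sending this request with any HTTP client (from the DnXT server, to confirm network reachability) and receiving a well-formed JSON response is a reasonable pre-check before pointing DnXT at the endpoint.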
FAQ #
Is my regulatory data sent to external AI services? #
When using Azure OpenAI or OpenAI as the provider, data is sent to those services for processing. Azure OpenAI offers enterprise data protection guarantees (data is not used for model training). For maximum data privacy, use the Local provider option, which keeps all data on your infrastructure.
Can I use multiple AI providers simultaneously? #
The AI Settings tab configures the active provider. Only one provider is active at a time. However, you can configure different providers for different tenants using the Tenant Config sub-tab.
How do I monitor AI costs? #
Use the Audit Logs sub-tab to track token usage, which directly correlates to API costs. For Azure OpenAI, you can also monitor costs through the Azure Portal’s cost management tools. For OpenAI, check usage at platform.openai.com.
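Since providers bill per token, the token counts from the Audit Logs translate into cost with simple arithmetic. A sketch (the per-1,000-token prices below are placeholders; check your provider's current price sheet):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price: float, out_price: float) -> float:
    """Estimate API cost in USD from token counts.
    Prices are per 1,000 tokens; input and output are billed separately."""
    return (input_tokens / 1000) * in_price + (output_tokens / 1000) * out_price

# Placeholder prices, not real rates:
cost = estimate_cost(120_000, 40_000, in_price=0.01, out_price=0.03)
```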
What happens if the AI provider is unavailable? #
AI features will fail gracefully. Users will see an error message indicating that the AI service is temporarily unavailable. All non-AI functionality in DnXT continues to work normally.
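The graceful-failure behavior described above follows a common wrapper pattern; this sketch illustrates the idea and is not DnXT's actual implementation:

```python
def ask_ai(query: str, call_provider) -> str:
    """Wrap a provider call so that outages degrade to a user-facing
    message instead of surfacing a raw exception. `call_provider` stands
    in for whatever client actually talks to the configured provider."""
    try:
        return call_provider(query)
    except Exception:
        # Non-AI functionality is unaffected; only this feature degrades.
        return "The AI service is temporarily unavailable. Please try again later."

def unreachable_provider(query: str) -> str:
    raise ConnectionError("provider endpoint is down")

message = ask_ai("Summarize this document", unreachable_provider)
```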
Can I disable AI for specific users? #
AI access is controlled at the tenant level through Tenant Config and at the feature level through Permission Management. You can disable AI features for specific roles by unchecking AI-related permissions in the Module Access permission tree.