AI Configuration

Overview #

AI Configuration in DnXT Administrator manages the artificial intelligence features integrated into the DnXT Suite. DnXT AI provides capabilities such as intelligent document analysis, content suggestions, automated quality checks, and retrieval-augmented generation (RAG) for contextual assistance. This guide covers how to configure AI providers, models, RAG settings, and tenant-level AI policies.

All AI configuration is accessed through the AI Configuration sub-tabs within the Configurations module. The configuration is organized into five sub-tabs: AI Settings, Providers, Models, Tenant Config, and Audit Logs.

Key Concept: DnXT AI connects to external AI services (Azure OpenAI, OpenAI, or local models) to power intelligent features. The administrator controls which providers and models are available, configures connection settings, and monitors AI usage through audit logs.

Accessing AI Configuration #

  1. Log in to DnXT Administrator.
  2. Click Configurations in the left sidebar.
  3. Select the AI Configuration tab.
  4. The AI Configuration view displays five sub-tabs: AI Settings, Providers, Models, Tenant Config, and Audit Logs.

AI Settings #

The AI Settings sub-tab is the primary configuration panel for enabling and configuring AI capabilities. It is divided into several sections.

Provider Selection #

Select which AI service provider DnXT should use for AI operations. Available options:

| Provider | Description | Requirements |
| --- | --- | --- |
| Azure OpenAI | Microsoft Azure-hosted OpenAI models with enterprise security and compliance | Azure subscription with OpenAI service deployed |
| OpenAI | Direct connection to OpenAI’s API service | OpenAI API key with appropriate model access |
| Local | Self-hosted AI models running on your own infrastructure | Local model server with compatible API endpoint |

Azure OpenAI Settings #

When Azure OpenAI is selected as the provider, configure the following:

| Field | Description | Example |
| --- | --- | --- |
| Endpoint | The Azure OpenAI resource endpoint URL | https://yourcompany-openai.openai.azure.com/ |
| API Key | The Azure OpenAI API key | (masked) |
| Deployment Name | The name of the deployed model in your Azure resource | gpt-4-turbo |
| API Version | The Azure OpenAI API version to use | 2024-02-01 |
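The four fields above combine into a single request URL at query time. As a rough illustration of how Azure OpenAI assembles them (the helper name is illustrative; DnXT builds this internally):

```python
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI chat-completions URL from the Endpoint,
    Deployment Name, and API Version settings."""
    base = endpoint.rstrip("/")
    return (f"{base}/openai/deployments/{deployment}"
            f"/chat/completions?api-version={api_version}")

url = azure_chat_url(
    "https://yourcompany-openai.openai.azure.com/",
    "gpt-4-turbo",
    "2024-02-01",
)
print(url)
```

This makes the division of labor visible: the Endpoint identifies your Azure resource, the Deployment Name selects the model you deployed inside it, and the API Version pins the request schema.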

OpenAI Settings #

When OpenAI is selected as the provider, configure:

| Field | Description |
| --- | --- |
| API Key | Your OpenAI API key |
| Organization ID | Your OpenAI organization identifier (optional) |
| Model | The model to use (e.g., gpt-4, gpt-4-turbo, gpt-3.5-turbo) |
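For reference, a direct OpenAI request authenticates with the API key in an `Authorization` header and, when set, passes the organization identifier in an `OpenAI-Organization` header. A minimal sketch (the helper name is illustrative):

```python
def openai_headers(api_key: str, org_id: str = "") -> dict:
    """HTTP headers for a direct OpenAI API request.
    The organization header is only included when an org ID is configured."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if org_id:
        headers["OpenAI-Organization"] = org_id
    return headers

print(openai_headers("sk-example", "org-example"))
```

This is why the Organization ID field is optional: omitting it simply bills usage to the API key's default organization.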

Local Model Settings #

When Local is selected as the provider, configure:

| Field | Description |
| --- | --- |
| Endpoint URL | The URL of your local model server (must be accessible from the DnXT server) |
| Model Name | The name of the model as recognized by the local server |
| Authentication | API key or other authentication method (if required by the local server) |

Tip: For organizations with strict data governance requirements, the Local provider option ensures that no data leaves your infrastructure. All AI processing happens on your own servers, eliminating concerns about data transmission to external services.
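Before saving a Local configuration, it helps to sanity-check that the Endpoint URL is a well-formed http(s) address the DnXT server could actually reach. A minimal sketch (the helper and hostname are hypothetical; DnXT performs its own validation):

```python
from urllib.parse import urlparse

def validate_endpoint_url(url: str) -> bool:
    """Basic sanity check on a local model server URL:
    it must use http or https and include a host."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)

print(validate_endpoint_url("http://llm.internal:11434/v1"))  # True
print(validate_endpoint_url("ftp://llm.internal"))            # False: wrong scheme
```

A URL that passes this check can still be unreachable (firewalls, DNS), so follow up with a real connectivity test from the DnXT server after saving.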

RAG Settings #

RAG (Retrieval-Augmented Generation) enhances AI responses by providing relevant context from your organization’s document repository. When a user asks DnXT AI a question, the system first retrieves relevant documents from your knowledge base and includes them as context for the AI model.

RAG configuration includes:

  • Enable RAG — Toggle RAG functionality on or off
  • Knowledge Base Path — The repository path containing documents to index for RAG
  • Chunk Size — The size of text chunks used for indexing (affects retrieval precision)
  • Overlap Size — The overlap between adjacent chunks (ensures context continuity)
  • Top K Results — The number of relevant chunks to retrieve for each query
  • Embedding Model — The model used to generate vector embeddings for semantic search
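The interplay of Chunk Size and Overlap Size can be illustrated with a simple character-based chunker (a sketch only; production indexers typically chunk by tokens rather than characters):

```python
def chunk_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    """Split text into fixed-size chunks where adjacent chunks share
    `overlap` characters, preserving context across chunk boundaries."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

print(chunk_text("abcdefghij", chunk_size=4, overlap=2))
# ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Smaller chunks make retrieval more precise but risk splitting a relevant passage; the overlap mitigates that by repeating the boundary text in both neighbors.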

Configuring AI Settings #

  1. Navigate to AI Configuration > AI Settings.
  2. Select your Provider from the dropdown.
  3. Fill in the provider-specific settings (endpoint, API key, model).
  4. Configure RAG Settings if you want contextual AI responses.
  5. Click Save.

Important: AI API keys provide access to paid services. Store them securely and rotate them periodically according to your organization’s security policies. Never share API keys or include them in documentation.

Providers #

The Providers sub-tab displays a registry of all AI service providers that have been configured in the system. It provides an overview of each provider’s status, endpoints, and configuration health.

This tab is primarily informational — use it to verify that providers are correctly configured and to troubleshoot connectivity issues.

Models #

The Models sub-tab lists all AI models available through the configured providers. Each model entry shows:

| Field | Description |
| --- | --- |
| Model Name | The identifier of the model |
| Provider | Which provider hosts this model |
| Type | The model type (e.g., Chat, Embedding, Completion) |
| Status | Whether the model is active and available for use |

Administrators can enable or disable specific models to control which capabilities are available to end users.
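Conceptually, the model registry behaves like a filtered list: end users only ever see models that are active, optionally narrowed by type. A sketch with hypothetical registry entries:

```python
# Hypothetical registry rows mirroring the Models sub-tab columns.
models = [
    {"name": "gpt-4-turbo", "provider": "Azure OpenAI", "type": "Chat", "active": True},
    {"name": "text-embedding-3-small", "provider": "Azure OpenAI", "type": "Embedding", "active": True},
    {"name": "gpt-3.5-turbo", "provider": "OpenAI", "type": "Chat", "active": False},
]

def available_models(registry, model_type=None):
    """Names of models exposed to end users: active, optionally by type."""
    return [m["name"] for m in registry
            if m["active"] and (model_type is None or m["type"] == model_type)]

print(available_models(models, "Chat"))  # ['gpt-4-turbo']
```

Disabling a model in the registry removes it from this filtered view without deleting its configuration, so it can be re-enabled later.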

Tenant Config #

The Tenant Config sub-tab allows administrators to set AI policies at the tenant level. This is particularly useful in multi-tenant environments where different organizations may have different AI usage requirements.

Tenant-level AI configuration includes:

  • AI Enabled — Enable or disable AI features for the tenant
  • Allowed Models — Restrict which models are available to the tenant’s users
  • Usage Limits — Set rate limits or token quotas for AI usage
  • Data Processing Agreement — Track acceptance of data processing terms

Tip: Use Tenant Config to gradually roll out AI features. Start by enabling AI for a pilot tenant, monitor usage and quality via Audit Logs, and then enable it for additional tenants once you are confident in the configuration.
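The tenant-level checks above amount to a policy gate evaluated before each query is dispatched. A sketch (field names are illustrative, not DnXT's actual schema):

```python
# Hypothetical tenant configuration record.
tenant_config = {
    "ai_enabled": True,
    "allowed_models": {"gpt-4-turbo"},
    "monthly_token_quota": 1_000_000,
}

def can_run_query(cfg, model, tokens_used_this_month, requested_tokens):
    """Return (allowed, reason) by applying tenant AI policy in order:
    feature toggle, model allow-list, then token quota."""
    if not cfg["ai_enabled"]:
        return False, "AI is disabled for this tenant"
    if model not in cfg["allowed_models"]:
        return False, f"model {model!r} is not allowed for this tenant"
    if tokens_used_this_month + requested_tokens > cfg["monthly_token_quota"]:
        return False, "monthly token quota exceeded"
    return True, "ok"

print(can_run_query(tenant_config, "gpt-4-turbo", 0, 2_000))  # (True, 'ok')
```

Ordering matters: the cheap toggle check runs first, so a disabled tenant never touches the allow-list or usage accounting.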

Audit Logs #

The Audit Logs sub-tab provides a dedicated log of all AI-related activity. Unlike the main Audit Trail (which captures all system events), the AI Audit Logs focus specifically on AI interactions.

What Gets Logged #

  • Every AI query submitted by a user
  • The model and provider used for each query
  • Token usage (input and output tokens)
  • Response times
  • Errors and failed requests
  • RAG retrieval details (which documents were used as context)
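Because token counts are logged per query, the audit log doubles as a usage ledger. A sketch of aggregating token usage per user from hypothetical log rows (the row shape is illustrative):

```python
from collections import defaultdict

# Hypothetical audit-log rows with the fields listed above.
log = [
    {"user": "alice", "model": "gpt-4-turbo", "input_tokens": 1200, "output_tokens": 300, "status": "ok"},
    {"user": "alice", "model": "gpt-4-turbo", "input_tokens": 800, "output_tokens": 150, "status": "ok"},
    {"user": "bob", "model": "gpt-4-turbo", "input_tokens": 500, "output_tokens": 0, "status": "error"},
]

def tokens_per_user(entries):
    """Total input + output tokens per user, counting successful requests only."""
    totals = defaultdict(int)
    for e in entries:
        if e["status"] == "ok":
            totals[e["user"]] += e["input_tokens"] + e["output_tokens"]
    return dict(totals)

print(tokens_per_user(log))  # {'alice': 2450}
```

The same grouping idea works per model or per tenant, which is how the log supports both cost monitoring and rollout decisions.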

Using AI Audit Logs #

  1. Navigate to AI Configuration > Audit Logs.
  2. Browse the log entries, which are displayed in reverse chronological order.
  3. Use filters to narrow results by date, user, model, or status.
  4. Click any log entry to expand its details (query text, response, token counts, latency).

Compliance Note: AI Audit Logs provide the traceability required for validating AI-assisted regulatory decisions. In regulated environments, maintain these logs as part of your compliance documentation to demonstrate that AI features are being used appropriately.

Setting Up AI: Step-by-Step #

Azure OpenAI Setup #

  1. Create an Azure OpenAI resource in the Azure Portal.
  2. Deploy a model (e.g., GPT-4 Turbo) within the resource.
  3. Copy the Endpoint and API Key from the Azure resource’s Keys and Endpoint page.
  4. In DnXT, navigate to AI Configuration > AI Settings.
  5. Select Azure OpenAI as the provider.
  6. Paste the Endpoint, API Key, and Deployment Name.
  7. Click Save.
  8. Test by using an AI feature in DnXT Publisher (e.g., AI-powered document analysis).

OpenAI Direct Setup #

  1. Create an OpenAI account and generate an API key at platform.openai.com.
  2. In DnXT, navigate to AI Configuration > AI Settings.
  3. Select OpenAI as the provider.
  4. Enter the API Key and optionally the Organization ID.
  5. Select the desired Model.
  6. Click Save.

Local Model Setup #

  1. Deploy a compatible AI model on your infrastructure (e.g., using Ollama, vLLM, or a custom inference server).
  2. Ensure the model server exposes an OpenAI-compatible API endpoint.
  3. In DnXT, navigate to AI Configuration > AI Settings.
  4. Select Local as the provider.
  5. Enter the Endpoint URL of your model server.
  6. Enter the Model Name.
  7. Click Save.

FAQ #

Is my regulatory data sent to external AI services? #

When using Azure OpenAI or OpenAI as the provider, data is sent to those services for processing. Azure OpenAI offers enterprise data protection guarantees (data is not used for model training). For maximum data privacy, use the Local provider option, which keeps all data on your infrastructure.

Can I use multiple AI providers simultaneously? #

The AI Settings tab configures the active provider. Only one provider is active at a time. However, you can configure different providers for different tenants using the Tenant Config sub-tab.

How do I monitor AI costs? #

Use the Audit Logs sub-tab to track token usage, which directly correlates to API costs. For Azure OpenAI, you can also monitor costs through the Azure Portal’s cost management tools. For OpenAI, check usage at platform.openai.com.
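Token counts from the audit log translate to cost with simple arithmetic. A sketch with hypothetical per-1K-token prices (always check your provider's current price list; these numbers are made up for illustration):

```python
def estimate_cost(input_tokens, output_tokens, in_price_per_1k, out_price_per_1k):
    """Rough API cost estimate from audit-log token counts.
    Input and output tokens are usually priced differently."""
    return (input_tokens / 1000) * in_price_per_1k \
         + (output_tokens / 1000) * out_price_per_1k

# e.g. 250k input + 50k output tokens at hypothetical $0.01 / $0.03 per 1k:
print(f"${estimate_cost(250_000, 50_000, 0.01, 0.03):.2f}")  # $4.00
```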

What happens if the AI provider is unavailable? #

AI features will fail gracefully. Users will see an error message indicating that the AI service is temporarily unavailable. All non-AI functionality in DnXT continues to work normally.
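The graceful-failure behavior amounts to catching provider errors at the call boundary and returning a clear message instead of letting the failure propagate into non-AI workflows. A simplified sketch (not DnXT's actual code):

```python
def ask_ai(query, backend):
    """Wrap an AI call so a provider outage degrades gracefully
    rather than breaking the surrounding workflow."""
    try:
        return {"ok": True, "answer": backend(query)}
    except Exception as exc:  # network errors, timeouts, provider 5xx, ...
        return {"ok": False,
                "error": f"AI service temporarily unavailable ({exc})"}

def broken_backend(query):
    raise ConnectionError("provider unreachable")

print(ask_ai("Summarize this SOP", broken_backend))
```

The caller checks the `ok` flag and shows the error message to the user; everything outside the AI call keeps working.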

Can I disable AI for specific users? #

AI access is controlled at the tenant level through Tenant Config and at the feature level through Permission Management. You can disable AI features for specific roles by unchecking AI-related permissions in the Module Access permission tree.
