Imagine a business analyst whose task is to extract necessary information from a data warehouse to generate various reports. They need to be proficient in crafting SQL queries of any complexity—a challenging and time-consuming feat that requires deep understanding of database structures and SQL language. But what if this business analyst had an AI assistant to whom they could describe the required data and format in natural language? This assistant would understand the request and convert it into an SQL query, retrieving the exact data needed.

Well, we made this a reality. We developed a Natural Language Interface for Data Platforms. This system utilizes a conversational AI to interactively engage with the user, receiving questions, translating them into SQL using internal knowledge of the data model, and returning the results within the same interface. Users receive precisely the data they asked for, with the desired grouping, sorting, and other specifications.

Architecture overview

The AI core engine of our system is built on a Retrieval-Augmented Generation (RAG) architecture, following standard practices.

We further expand this architecture by adding Knowledge Sources, such as:

  • Data Warehouse Metadata.
  • Operational Data Store (ODS) Data Definition Language (DDL) commands. DDL is a subset of SQL used to define and manage database structures such as tables, indexes, and relationships. Examples of DDL commands include CREATE, ALTER, and DROP.
  • Questions and Answers curated from users’ Like/Dislike feedback submitted through the chat.
  • User Access Information.
  • Customer documentation.

The Knowledge Sources are stored in a PostgreSQL database as vectorized embeddings, which are used to enrich the context of the questions provided to the LLM through prompts.
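
For illustration, here is a minimal sketch of how one such knowledge source could be stored as an embedding in PostgreSQL. It assumes the pgvector extension, a 1536-dimensional embedding model, and an illustrative table layout; none of these specifics are mandated by our solution, and the embed_text helper is a placeholder for whichever embedding endpoint you use.

```python
# Hypothetical sketch: storing one knowledge-source entry as a vector embedding.
# Assumes the pgvector extension; table and column names are illustrative only.
import psycopg2

def embed_text(text: str) -> list[float]:
    """Placeholder embedding; replace with a real model (OpenAI, Bedrock, open source, ...)."""
    return [0.0] * 1536

def to_pgvector(vec: list[float]) -> str:
    """Format a Python list as a pgvector literal, e.g. '[0.1,0.2,...]'."""
    return "[" + ",".join(map(str, vec)) + "]"

conn = psycopg2.connect("dbname=nli_kb user=nli")
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS knowledge_sources (
            id        serial PRIMARY KEY,
            source    text,            -- e.g. 'ods_ddl', 'qa_feedback', 'customer_docs'
            content   text,
            embedding vector(1536)
        );
    """)
    ddl = "CREATE TABLE orders (order_id int PRIMARY KEY, customer_id int, ordered_at date);"
    cur.execute(
        "INSERT INTO knowledge_sources (source, content, embedding) VALUES (%s, %s, %s::vector);",
        ("ods_ddl", ddl, to_pgvector(embed_text(ddl))),
    )
```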

Step-by-Step Process

Let’s examine the technical implementation of the solution by going through the sequential stages of processing a user query. We will explore the interactions among various modules, as depicted in the diagram below.

1. User Query Submission

The journey begins when a user inputs a question into the Web App. This natural language query is sent through a REST API, which acts as the bridge between the user interface and the AI system. The REST API module receives natural language queries, validates them, and routes them to the Natural Language Processing (NLP) module for interpretation. It also keeps track of ongoing user sessions, ensuring continuity and preserving conversational context.
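
As an example of what this entry point might look like, here is a minimal sketch using FastAPI. The framework, route, and payload shape are illustrative assumptions, and interpret_and_translate is a hypothetical stand-in for the downstream NLP module.

```python
# Minimal, hypothetical sketch of the REST API entry point (FastAPI assumed).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class QueryRequest(BaseModel):
    session_id: str   # keeps conversational context across turns
    question: str     # the user's natural-language question

@app.post("/query")
def submit_query(req: QueryRequest) -> dict:
    # Basic validation, then hand off to the NLP module (hypothetical helper).
    if not req.question.strip():
        return {"error": "empty question"}
    sql, answer = interpret_and_translate(req.session_id, req.question)  # hypothetical
    return {"sql": sql, "answer": answer}
```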

2. Query Interpretation

Having received the query from the REST API, the NLP module interprets it to identify candidate table names, views, functions, or specific data points, determining the components needed to construct the SQL query. To handle queries that contain relative terms such as “last semester”, the system uses contextual values: it references a framework that translates such terms into specific date ranges or identifiers relevant to the client’s data. This framework relies on a comprehensive set of contextual parameters and historical data to accurately convert time-based or context-dependent phrases into appropriate SQL WHERE clauses.
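
The sketch below illustrates the idea with a deliberately simplified mapping: it resolves “last semester” into a concrete date range and turns it into a WHERE clause. The semester boundaries and the column name are assumptions; in practice the framework draws on client-specific contextual parameters and historical data.

```python
# Illustrative only: resolve a relative phrase into a date range for a WHERE clause.
from datetime import date

def resolve_relative_period(phrase: str, today: date) -> tuple[date, date]:
    if phrase == "last semester":
        # Assumption: semesters run Jan-Jun and Jul-Dec.
        if today.month >= 7:
            return date(today.year, 1, 1), date(today.year, 6, 30)
        return date(today.year - 1, 7, 1), date(today.year - 1, 12, 31)
    raise ValueError(f"unknown phrase: {phrase}")

start, end = resolve_relative_period("last semester", date(2024, 9, 15))
where_clause = f"WHERE enrolled_at BETWEEN '{start}' AND '{end}'"
print(where_clause)   # WHERE enrolled_at BETWEEN '2024-01-01' AND '2024-06-30'
```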

3. Schema and Metadata Retrieval

The NLP Module collaborates with the Schema Analyzer Module to obtain the full schema details for the identified database tables, views, and functions, including their descriptions and example values. This metadata is stored in the PostgreSQL database in vectorized form, meaning it is converted into embeddings: numerical representations that capture the semantic relationships and context of the data.
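
Continuing the illustrative pgvector setup from the earlier snippet, a similarity search over the stored metadata could look like this; the operator, table, and helpers are assumptions for the sketch, not a description of our internal implementation.

```python
# Sketch: fetch the k most relevant metadata/DDL snippets for a question by cosine
# distance (pgvector's <=> operator). Reuses embed_text/to_pgvector from above.
def top_schema_matches(cur, question: str, k: int = 5) -> list[tuple[str, str]]:
    q_vec = to_pgvector(embed_text(question))
    cur.execute(
        """
        SELECT source, content
        FROM knowledge_sources
        ORDER BY embedding <=> %s::vector   -- smallest cosine distance first
        LIMIT %s;
        """,
        (q_vec, k),
    )
    return cur.fetchall()
```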

4. Constructing SQL Query

Using the retrieved schema and metadata, the NLP Module assembles the prompt for the Large Language Model (LLM), which constructs the SQL query. This process is enhanced by a repository of “golden queries” that serve as exemplary templates, ensuring high accuracy in the translation from natural language to SQL. The flexibility of our solution allows for the integration of various LLMs, such as those available through public APIs from OpenAI and Amazon Bedrock, as well as many others, including open-source options.
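
To make the idea concrete, here is a hedged sketch of how schema context and golden queries might be combined into a single prompt. The template is illustrative, and call_llm is a hypothetical wrapper around whichever provider (OpenAI, Amazon Bedrock, or an open-source model) is plugged in.

```python
# Illustrative prompt assembly for text-to-SQL; not the product's actual template.
def build_sql_prompt(question: str, schema_snippets: list[str], golden_queries: list[str]) -> str:
    return "\n\n".join([
        "You are a SQL assistant. Generate a single SQL query for the question below.",
        "Relevant schema:\n" + "\n".join(schema_snippets),
        "Example (golden) queries:\n" + "\n".join(golden_queries),
        f"Question: {question}\nSQL:",
    ])

def generate_sql(question: str, schema_snippets: list[str], golden_queries: list[str]) -> str:
    prompt = build_sql_prompt(question, schema_snippets, golden_queries)
    return call_llm(prompt)   # hypothetical wrapper around the chosen LLM provider
```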

5. Query Validation

The constructed SQL query is passed to the SQL Verification Module. This module performs a “dry-run” to simulate query execution without altering live data. It checks for syntax correctness, execution efficiency, and schema alignment. Based on the analysis, the module may suggest modifications or optimizations to improve query performance.
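
On a PostgreSQL target, one simple way to approximate such a dry run is EXPLAIN, which parses and plans a query without executing it, surfacing syntax or schema errors along with a cost estimate. The actual verification module is more involved; this is only a sketch of the principle.

```python
# Sketch: validate a generated query with EXPLAIN (parse + plan, no execution).
import psycopg2

def dry_run(conn, sql: str) -> tuple[bool, str]:
    try:
        with conn.cursor() as cur:
            cur.execute("EXPLAIN " + sql)
            plan = "\n".join(row[0] for row in cur.fetchall())
        return True, plan                 # query is valid; the plan hints at efficiency
    except psycopg2.Error as exc:
        conn.rollback()                   # clear the failed transaction state
        return False, str(exc)            # syntax or schema problem to report back
```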

6. Access Control

Throughout the process, the Access Control Module ensures that the generated SQL queries only access data that the user is authorized to view. It enforces role-based permission checks, monitors SQL query requests in real time, and provides notifications for unauthorized access attempts, maintaining system security and user awareness.
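
A drastically simplified version of such a role-based check might look like the following; the permission map and the notify_security helper are illustrative assumptions, not the product’s API.

```python
# Illustrative role-based gate applied before a generated query is released.
FORBIDDEN = {"analyst": {"salaries", "ssn_records"}}   # role -> tables it may not read

def is_query_allowed(role: str, tables_in_query: set[str]) -> bool:
    blocked = FORBIDDEN.get(role, set()) & tables_in_query
    if blocked:
        notify_security(role, blocked)   # hypothetical alert on an unauthorized attempt
        return False
    return True
```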

7. Response and Execution

Once validated, the SQL query is sent back to the Web App, where users can manually execute it within their database environments. Upon execution, the Web App constructs a clear, human-readable answer for the user, displays the SQL query, and generates a report preview based on the retrieved data.
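
For illustration, a backend helper that executes the approved query and returns the first rows as a report preview could be as simple as the sketch below; pandas is our assumption for tabular handling here.

```python
# Sketch: run the user-approved query and return a small preview of the result set.
from contextlib import closing

import pandas as pd
import psycopg2

def preview_report(dsn: str, sql: str, limit: int = 20) -> pd.DataFrame:
    with closing(psycopg2.connect(dsn)) as conn:
        return pd.read_sql_query(sql, conn).head(limit)   # first rows as the preview
```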

The Feedback and History Retention Module collects user feedback on system responses, logs query-response histories, and allows users to access previous chat histories. This module supports continuous system improvement through feedback analysis and history recording.
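
A minimal sketch of such history logging is shown below; the query_history table layout is an assumption made for illustration.

```python
# Illustrative history/feedback record written after each interaction.
def log_interaction(cur, session_id: str, question: str, sql: str, feedback: str | None) -> None:
    cur.execute(
        """
        INSERT INTO query_history (session_id, question, generated_sql, feedback, created_at)
        VALUES (%s, %s, %s, %s, now());
        """,
        (session_id, question, sql, feedback),   # feedback is 'like', 'dislike', or NULL
    )
```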

This diagram provides a clearer view of each stage in the entire process.

Watch this demo video, which clearly demonstrates how our solution works.

AI Learning & Tuning

It’s crucial to establish a robust feedback loop and continuously tune the system for improvements. Just like any other RAG-based solution in the industry, our product will keep evolving by incorporating high-quality data into our metadata descriptions. This includes providing more detailed descriptions of the Operational Data Store (ODS) metadata, such as tables and views, and additional descriptions of specific columns and their meanings, as well as supplemental knowledge in the form of documents.

Here’s how we achieve this through our AI learning and tuning process:

  • User Feedback Analysis: We carefully analyze user feedback to identify areas where the AI responses did not meet expectations. This helps us understand where improvements are needed.
  • Admin Panel Intervention: System administrators use the admin panel to review user queries, the corresponding AI responses, and user-suggested preferred responses. This intervention is crucial for identifying and addressing issues.
  • Data Refinement: Based on the feedback, administrators can modify the input data, correct any highlighted issues, and enhance the descriptions of metadata. This refinement ensures the AI has accurate and detailed information to work with.
  • Learning and Tuning: After updating the data, the system’s RAG mechanism uses this refined data for future queries. This continuous learning process helps improve the precision and relevancy of the AI’s responses.

Through these steps, our AI system can learn and adapt, ensuring that it becomes more accurate and reliable with each iteration.
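
As a concrete, and again illustrative, example of this loop: once an administrator approves a corrected question-and-SQL pair, it can be embedded and written back into the knowledge base so that future retrievals benefit from it. The snippet reuses the assumed table and helper functions from the earlier sketches.

```python
# Sketch: promote an admin-approved Q&A pair into the vectorized knowledge base.
def promote_feedback(cur, question: str, approved_sql: str) -> None:
    content = f"Q: {question}\nSQL: {approved_sql}"
    cur.execute(
        "INSERT INTO knowledge_sources (source, content, embedding) VALUES (%s, %s, %s::vector);",
        ("qa_feedback", content, to_pgvector(embed_text(content))),
    )
```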

Unlock Your Data with Natural Language Interface

If you’re intrigued by the potential of this solution and want to learn more about how it can benefit your organization, we’d love to hear from you. Whether you have specific questions, need a detailed demo, or want to discuss how to integrate this powerful tool into your existing data infrastructure, our team is here to help.

Do you have a data store? Let’s add a Natural Language Interface to your data!

Reach out to us, and let’s unlock the power of your data with a Natural Language Interface.

Ready to start the conversation?