In the world of investment data, staying current and compliant is a must. Large Language Models (LLMs) like ChatGPT offer impressive capabilities, but they’re limited by the static nature of their training data and the risk that they’ll generate plausible-sounding but inaccurate responses—a phenomenon known as hallucination. 

Retrieval-augmented generation (RAG) addresses these limitations, enabling LLMs to integrate real-time, accurate data from external sources. This article explores how RAG can be used with investment data within a compliant framework, the critical role of the orchestration layer, and the essential function of an analytics and attribution engine in helping ensure precise, compliant output.

Understanding Retrieval-Augmented Generation

RAG combines the generative power of LLMs with the precision of real-time data retrieval. The approach queries up-to-date information from external knowledge stores and embeds it into LLM prompts so that output is accurate and grounded in context. LLMs face two significant challenges:

  • Outdated training data: Models like ChatGPT have a knowledge cut-off date that limits their ability to provide accurate information on recent trends and events.
  • Hallucinations: LLMs sometimes generate false information when there's a gap in their knowledge, leading to potential factual errors and compliance issues in investment reporting.

RAG addresses these challenges by integrating a retrieval mechanism that fetches precise, up-to-date information, grounding LLM outputs in real-world data. This capability is particularly useful for generating investment reports, market analysis, and compliance documents, where accuracy is non-negotiable.
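
As a rough illustration of that retrieve-then-generate pattern, here is a minimal Python sketch. The functions retrieve_market_data() and call_llm() are hypothetical stubs standing in for a real data feed and model client, not references to any specific vendor API.

```python
# Minimal sketch of the RAG loop: retrieve fresh data, ground the prompt, generate.
# retrieve_market_data() and call_llm() are hypothetical stubs, not a vendor API.

def retrieve_market_data(query: str) -> list[str]:
    """Stand-in for fetching up-to-date snippets (prices, filings, news)."""
    return ["Example snippet: latest closing price ...", "Example snippet: 10-Q highlights ..."]

def call_llm(prompt: str) -> str:
    """Stand-in for the actual model call."""
    return "[response grounded in the retrieved context]"

def build_grounded_prompt(query: str, context: list[str]) -> str:
    """Embed retrieved context into the prompt so the model answers from real data."""
    context_block = "\n".join(f"- {snippet}" for snippet in context)
    return (
        "Answer using ONLY the context below. If it is insufficient, say so.\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {query}"
    )

def answer(query: str) -> str:
    context = retrieve_market_data(query)
    return call_llm(build_grounded_prompt(query, context))

print(answer("What did the latest earnings report say?"))
```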

RAG: Not Just a Tool

RAG isn't a single tool, but a collection of strategies tailored to enhance the relevance and accuracy of LLMs. When implementing RAG, it’s crucial to align business needs with desired outcomes. Key considerations include:

  • Context definition: Understand what you want the model to know when performing a task or answering a question. Determine the data needed, its format, and the best retrieval method based on the task's context.
  • Context creation: Efficient RAG implementation requires defining and creating the right context so the LLM can produce highly relevant output. The process involves more than vector search, which is powerful for similarity searches but not always suitable for precise matches. For example, creating a customer context might require querying Salesforce, support systems, and other databases to compile a comprehensive textual representation for the LLM (a sketch of this follows the list).
  • Data retrieval: Focus on how to retrieve and represent the data in a way the LLM can effectively use. This might involve precise database queries rather than similarity searches, especially when exact matches are required.
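
As a hedged sketch of the customer-context example above, the snippet below compiles text from three hypothetical source functions (a CRM profile, open support tickets, portfolio holdings); all of the functions and field names are illustrative stand-ins, not real integrations.

```python
# Sketch: compiling a customer context from several systems into one text block.
# All three fetch functions are hypothetical stubs standing in for real integrations.

def fetch_crm_profile(customer_id: str) -> dict:
    return {"name": "Example Client", "segment": "Wealth", "risk_profile": "Moderate"}

def fetch_open_support_tickets(customer_id: str) -> list[str]:
    return ["Question about Q2 statement", "Request to update mailing address"]

def fetch_portfolio_holdings(customer_id: str) -> list[dict]:
    return [{"ticker": "AAPL", "weight": 0.12}, {"ticker": "IEF", "weight": 0.30}]

def build_customer_context(customer_id: str) -> str:
    """Exact lookups (not similarity search) compiled into text the LLM can use."""
    profile = fetch_crm_profile(customer_id)
    tickets = fetch_open_support_tickets(customer_id)
    holdings = fetch_portfolio_holdings(customer_id)
    lines = [
        f"Client: {profile['name']} ({profile['segment']}, {profile['risk_profile']} risk)",
        "Open support tickets: " + "; ".join(tickets),
        "Holdings: " + ", ".join(f"{h['ticker']} {h['weight']:.0%}" for h in holdings),
    ]
    return "\n".join(lines)

print(build_customer_context("C-123"))
```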

RAG enhances LLM relevance, but every additional variable you inject into the prompt shapes both the effectiveness of the RAG deployment and the demands placed on the orchestration layer.

The Role of the Orchestration Layer

The orchestration layer manages the interaction between the LLM, retrieval tools, and the user's input. It acts as the backbone of a RAG-enabled system, ensuring seamless integration and operation of the various components. When it comes to investment data, the orchestration layer handles a wide range of sources, such as Bloomberg, MSCI, and Morningstar, and ensures that they cohesively inform the LLM’s output. Here’s how it functions, with a minimal code sketch after the list:

  1. User input handling: The orchestration layer receives user prompts and associated metadata.
  2. Data source integration: It manages the interaction with multiple data sources, ensuring comprehensive coverage of the required information. This step is necessary since many sources still rely on old-school weekly or monthly collection of spreadsheets via File Transfer Protocol (FTP), adding complexity to the integration process.
  3. Context compilation: It compiles the retrieved data into a coherent context, filling in gaps and mitigating the risk of hallucination by the LLM.
  4. LLM integration: The orchestration layer sends the compiled prompt, enriched with retrieved context, to the LLM for inference.
  5. Response management: After receiving the LLM’s response, the orchestration layer confirms the output is accurate and compliant with regulations before delivery to the user.
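
Here is the minimal sketch promised above. Every helper (fetch_from_sources(), run_llm(), passes_compliance()) is a placeholder for real integrations and checks, so treat it as the shape of the loop rather than a reference implementation.

```python
# Sketch of the orchestration loop described above. Every helper is a placeholder.

def fetch_from_sources(prompt: str, metadata: dict) -> list[str]:
    """Step 2: pull from multiple feeds (APIs, files delivered via FTP, vector stores)."""
    return ["Benchmark returns for the period ...", "Fund fact-sheet data ..."]

def compile_context(snippets: list[str]) -> str:
    """Step 3: merge retrieved data into one coherent context block."""
    return "\n".join(f"- {s}" for s in snippets)

def run_llm(prompt: str) -> str:
    """Step 4: stand-in for the actual model call."""
    return "[draft report grounded in retrieved context]"

def passes_compliance(text: str) -> bool:
    """Step 5: placeholder compliance/accuracy check before delivery."""
    return "guaranteed returns" not in text.lower()

def orchestrate(user_prompt: str, metadata: dict) -> str:
    snippets = fetch_from_sources(user_prompt, metadata)            # steps 1-2
    context = compile_context(snippets)                             # step 3
    draft = run_llm(f"Context:\n{context}\n\nTask: {user_prompt}")  # step 4
    if not passes_compliance(draft):                                # step 5
        raise ValueError("Response failed compliance review")
    return draft

print(orchestrate("Summarise Q2 fund performance.", {"user": "analyst"}))
```

The design point worth keeping from this sketch is that the compliance check sits inside the loop, so a draft that fails review never reaches the user.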

Key Components of the Orchestration Layer

The orchestration layer is made up of several moving parts that power its ability to retrieve and contextualize data:

  • System integration tools help structure the orchestration layer, managing how prompts are built and how context is integrated.
  • Custom logic knits together various tools and handles specific business logic and compliance checks.
  • Validation mechanisms ensure that prompts stay within token limits and that responses adhere to compliance standards.

Integrating an Analytics and Attribution Engine

An analytics and attribution engine is essential when working with portfolio data, particularly for performing complex calculations and remaining compliant with regulations. The engine performs relevant analytics on the data retrieved by the orchestration layer and ensures that the data fed into the LLM is accurate and processed according to regulatory standards. Some of the engine’s key functions include:

  1. Data validation: Ensures all data used for RAG is compliant with financial regulations.
  2. Performance metrics: Analyzes the effectiveness of the RAG system, providing insights into the accuracy and relevance of the responses.
  3. Risk calculations: Uses methods such as Monte Carlo simulations and value at risk (VaR), which depend on storing and processing large datasets (a minimal VaR sketch follows this list).
  4. Attribution analysis: Tracks and attributes the source of data for transparency and compliance in data usage.
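
To make the risk-calculation point concrete, here is a small, self-contained Monte Carlo VaR sketch. The normal-return assumption and parameters are illustrative only; a production engine would use the portfolio's actual return model and far more scenarios.

```python
# Illustrative Monte Carlo value-at-risk: simulate portfolio P&L and take a loss
# quantile. Parameters are made up; real engines model returns far more carefully.
import random

def monte_carlo_var(value: float, mu: float, sigma: float,
                    confidence: float = 0.95, n_sims: int = 100_000) -> float:
    """One-period VaR: the loss not exceeded with the given confidence."""
    simulated_pnl = [value * random.gauss(mu, sigma) for _ in range(n_sims)]
    simulated_pnl.sort()
    # The (1 - confidence) quantile of P&L is the cutoff loss.
    cutoff = simulated_pnl[int((1 - confidence) * n_sims)]
    return -cutoff

var_95 = monte_carlo_var(value=1_000_000, mu=0.0005, sigma=0.01)
print(f"1-day 95% VaR ~ ${var_95:,.0f}")
```

In a RAG pipeline, figures like this would typically be computed by the engine and passed to the LLM as context, rather than asking the model to do the arithmetic itself.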

Implementing RAG for Investment Data

Follow these steps to prepare your data for retrieval and to refine the RAG-enabled LLM's performance. Brief, illustrative code sketches follow several of the steps.

1. Data Aggregation and Cleaning

  • Aggregate data: Collect relevant investment data such as market reports, financial statements, earnings reports, stock prices, and bond yields.
  • Clean data: Remove any personally identifiable information (PII) or sensitive data to ensure compliance.
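
As one hedged example of the cleaning step, the snippet below redacts a few common PII patterns (emails, US-style phone numbers, account-number-like digit runs) with regular expressions; real compliance pipelines use far more thorough detection.

```python
# Sketch: redact a few common PII patterns before documents enter the RAG pipeline.
# Real compliance tooling is far more thorough; these regexes are illustrative only.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{8,12}\b"),  # crude stand-in for account numbers
}

def redact_pii(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact_pii("Contact jane.doe@example.com or 415-555-0199 re account 123456789."))
```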

2. Data Loading and Storage

  • Load data: Use tools to load various document types into your system.
  • Store data: Use vector stores to store data in a format optimized for quick retrieval based on textual similarity.
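
A minimal in-memory illustration of loading and storing follows. The embed() function is a toy hashing-based stand-in for a real embedding model, used only so the sketch runs without external dependencies.

```python
# Sketch: chunk documents, "embed" them, and keep them in a tiny in-memory store.
# embed() is a toy stand-in for a real embedding model, used only so this runs offline.
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash character trigrams into a fixed-size, normalised vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        idx = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def chunk(text: str, size: int = 400) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

store: list[dict] = []  # each entry: {"text": ..., "vector": ..., "source": ...}

def load_document(text: str, source: str) -> None:
    for piece in chunk(text):
        store.append({"text": piece, "vector": embed(piece), "source": source})

load_document("Q2 earnings grew 8% year over year ...", source="earnings_report.pdf")
print(len(store), "chunks stored")
```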

3. Contextual Retrieval

  • Query vector stores: Retrieve the most relevant data chunks that match the user's query.
  • API-based retrieval: Supplement vector store data with real-time data from APIs, ensuring the most current information is used.
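
The sketch below shows the retrieval step under the same hedged assumptions: query_vector_store() is a stub for whatever store you use, and the quote endpoint is a hypothetical URL, not a real market-data API.

```python
# Sketch: combine vector-store hits with real-time API data before prompting.
# query_vector_store() is a stub; the quotes URL is hypothetical, not a real vendor API.
import json
import urllib.request

def query_vector_store(query: str, top_k: int = 3) -> list[str]:
    """Stand-in for a similarity search against your document store."""
    return ["Fund objective and strategy ...", "Trailing 12-month performance ..."]

def fetch_live_quote(ticker: str) -> dict:
    """Hypothetical real-time quote endpoint; swap in your market-data provider."""
    url = f"https://example.com/api/quotes/{ticker}"  # hypothetical URL
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def build_context(query: str, ticker: str) -> str:
    chunks = query_vector_store(query)
    try:
        quote = fetch_live_quote(ticker)
        chunks.append(f"{ticker} last price: {quote.get('price')}")
    except (OSError, ValueError):
        chunks.append(f"{ticker} live quote unavailable")  # degrade gracefully
    return "\n".join(f"- {c}" for c in chunks)

print(build_context("How has the fund performed?", "AAPL"))
```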

4. Prompt Construction

  • Create prompt templates: Develop templates that incorporate system prompts, user input, and retrieved context.
  • Fill variables: Populate templates with real-time data and user-specific information.
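
A hedged sketch of a prompt template for this step is below; the wording, variable names, and guardrail instructions are illustrative rather than a prescribed format.

```python
# Sketch: a prompt template combining system instructions, retrieved context,
# and the user's question. Wording and variable names are illustrative only.

PROMPT_TEMPLATE = """\
System: You are an assistant producing investment commentary. Use ONLY the context
provided. If the context does not cover the question, say so. Do not give advice
that could be construed as a personal recommendation.

Context (retrieved {as_of}):
{context}

User question: {question}
"""

def build_prompt(context: str, question: str, as_of: str) -> str:
    return PROMPT_TEMPLATE.format(context=context, question=question, as_of=as_of)

print(build_prompt(
    context="- Fund returned 4.2% in Q2\n- Benchmark returned 3.8% in Q2",
    question="How did the fund do against its benchmark last quarter?",
    as_of="2024-06-30",
))
```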

5. Compliant Inference

  • Clean prompts: Ensure no sensitive data is included in prompts.
  • Token management: Validate prompt length to stay within LLM token limits.
  • Perform inference: Send the prompt to the LLM and receive a contextually enriched response.
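
Step 5 might look like the sketch below. The four-characters-per-token estimate is a rough heuristic (use a real tokenizer in production), and call_llm() is again a placeholder for your model client.

```python
# Sketch: scrub the prompt, check a rough token budget, then run inference.
# The 4-chars-per-token estimate is a heuristic; call_llm() is a placeholder client.

MAX_TOKENS = 8_000

def scrub(prompt: str) -> str:
    """Remove sensitive markers before the prompt leaves your environment."""
    return prompt.replace("[INTERNAL ONLY]", "")

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic; use a real tokenizer in production

def call_llm(prompt: str) -> str:
    return "[contextually enriched response]"  # placeholder for the real model call

def compliant_inference(prompt: str) -> str:
    prompt = scrub(prompt)
    if estimate_tokens(prompt) > MAX_TOKENS:
        raise ValueError("Prompt exceeds token budget; trim the retrieved context")
    return call_llm(prompt)

print(compliant_inference("Context: ...\nQuestion: Summarise Q2 performance."))
```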

6. Enhancing Performance and Compliance

  • Optimize source data: Ensure high-quality data is used for retrieval to improve the accuracy of LLM responses.
  • Refine text splitting: Adjust how text is chunked to maintain contextual integrity.
  • Tune system prompts: Modify prompts to better guide the LLM in using the provided context.
  • Filter results: Implement metadata filters to refine the retrieval results further.
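
As one hedged example of metadata filtering, the snippet below keeps only chunks whose source is on a whitelist and whose as-of date passes a recency cut-off; the field names and source list are illustrative, not a fixed schema.

```python
# Sketch: filter retrieved chunks on metadata (source and recency) before they
# reach the prompt. Field names and the source whitelist are illustrative only.
from datetime import date

ALLOWED_SOURCES = {"bloomberg", "msci", "morningstar"}

def filter_chunks(chunks: list[dict], cutoff: date) -> list[dict]:
    return [
        c for c in chunks
        if c["source"] in ALLOWED_SOURCES and c["as_of"] >= cutoff
    ]

chunks = [
    {"text": "Index methodology update ...", "source": "msci", "as_of": date(2024, 5, 1)},
    {"text": "Old blog commentary ...", "source": "blog", "as_of": date(2021, 1, 10)},
]
print(filter_chunks(chunks, cutoff=date(2024, 1, 1)))
```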

The Need for RAG

While fine-tuning enhances LLM performance for specific tasks, it is static and can’t adapt to real-time changes. The combination of fine-tuning and RAG enablement, however, provides a comprehensive solution that can keep up with the needs of your business. Fine-tuning with compliance data and feedback on outputs significantly improves the accuracy and relevance of the model. When combined with RAG, which augments LLMs with real-time data, the result is a powerful, adaptable application that meets the stringent demands of the investment sector.

Acid Test: Daizy's Patent-Pending Hallucination Inference Technology

Here at Daizy, we employ a patent-pending hallucination inference technology called "Acid Test." This technology is designed to detect and mitigate hallucinations by cross-referencing LLM outputs with verified data sources. Acid Test ensures that the information generated by the LLM is accurate and compliant, enhancing the reliability of the RAG system.

Conclusion

RAG provides a robust solution for keeping LLMs relevant and accurate, particularly in the investment sector, where data compliance and currency are critical. With a well-designed orchestration layer, businesses can hand generated content to their compliance teams with 99% of the work already done. A compliance human in the loop is still critical, but the objective is to cut turnaround times from days to minutes. By integrating a wide range of data sources, the orchestration layer sharply reduces the scope for hallucination, and RAG's reliable outputs make it an invaluable tool in modern investment analysis and reporting.