
AI 101 Guide for Private Equity

To leverage AI successfully, private markets firms first need to decipher which AI applications are appropriate for their legal, compliance, and governance workflows. Ontra’s updated AI 101 guide lays the foundation for understanding AI and adopting reliable tools.


Intro to AI 101

Now more than ever, private equity needs technology-based solutions to address the expanding legal workload generated by an increasingly complex industry. Rapid growth in private markets, heightened competition, and mounting regulatory scrutiny have all contributed to this complexity.

Historically, private fund managers have attempted to meet these demands by hiring larger internal legal and compliance teams and outsourcing to external law firms. While these approaches provide short-term relief, the associated costs make them unsustainable over the long term.

Now, AI provides a cost-effective way to streamline processes and improve efficiency. AI legal tech tools have enormous potential for core private fund activities, such as making investments, and administrative and legal tasks, such as managing and negotiating key contracts. Generic AI tools, however, don’t meet private managers’ exacting standards for legal work. They need a purpose-built AI platform for private markets.


Common AI Terms

  • Artificial Intelligence (AI): A broad field focused on the creation of machines capable of performing complex tasks that typically require human intelligence, such as understanding and generating language and making decisions.
  • Deep Learning Networks: Multi-layered neural networks that can learn to approximate almost any function.
  • Fine-tuning: The process of training a pre-trained model on a smaller, domain-specific dataset to optimize its performance for a particular task. Fine-tuning allows general models to adapt to specialized use cases without requiring extensive training from scratch.
  • Large Language Model (LLM): A deep learning neural network that can perform natural language processing tasks. These models are called large because of the number of parameters they contain (often in the billions) and the amount of data involved in training them.
  • Machine Learning (ML): Uses algorithms and statistical models to analyze and learn from patterns in data, giving machines the ability to learn from experience without being explicitly programmed by humans.
  • Natural Language Processing (NLP): A field of machine learning in which machines can understand language as people speak and write it, which enables machines to recognize, understand, translate, and generate text.
  • Neural Networks: A specific class of machine learning algorithms inspired by the human brain. These networks contain numerous interconnected neurons. Each neuron performs a specific function, processing inputs and producing outputs that it sends to other neurons.
  • Precision: The accuracy of a model’s predictions, calculated by dividing the number of true positives by the total number of predicted positives (both true and false). For example, if a model predicts 100 out of 100 contracts contain standstill clauses and only 60 actually do, then the precision is 60%.
  • Recall: Measures the proportion of actual positives a model correctly identifies. For example, say a model predicts 90 out of 100 contracts contain a standstill clause when, in fact, all 100 do. Then recall equals 90%. (Both metrics are worked through in the short sketch after this list.)
  • Retrieval-Augmented Generation (RAG): A technique that combines information retrieval with generative AI. It retrieves relevant documents or data from a source and uses a language model to generate responses based on the retrieved content. This approach improves the accuracy and relevance of generated outputs, especially in specific or complex domains.
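
To make the precision and recall definitions concrete, here is a minimal Python sketch that reproduces the standstill-clause numbers from the examples above (the counts come from those examples, not real data):

```python
# Minimal sketch: precision and recall for a hypothetical standstill-clause
# detector, using the counts from the definitions above.

def precision(true_positives: int, false_positives: int) -> float:
    """Share of predicted positives that are truly positive."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives: int, false_negatives: int) -> float:
    """Share of actual positives the model correctly identifies."""
    return true_positives / (true_positives + false_negatives)

# Precision: 100 contracts flagged as containing the clause, but only 60 do.
print(precision(true_positives=60, false_positives=40))  # 0.6 -> 60%

# Recall: all 100 contracts contain the clause; the model flags 90 of them.
print(recall(true_positives=90, false_negatives=10))     # 0.9 -> 90%
```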

How AI Works: Training then Inference

AI systems use algorithms to analyze data and identify patterns and relationships in that data. Developers can train algorithms by using various machine learning techniques. In general, developers provide the AI system with large amounts of data and adjust the system’s parameters until it can accurately perform a given task on new data.

Training is not inherently continuous — it’s an intensive and iterative process with feedback loops. The difference between training and inference is critical to understanding how data remains private and secure when using AI.

The success of an AI application depends on many components, but the following two stand out:

  • The models, which are the systems used to learn from data.
  • The volume and quality of the data the developers use for training.

Models

Models can be either open source or closed source (also known as proprietary). Open source code is generally available to the public, and depending on the license, parties may be able to access, modify, and distribute the model royalty-free. Proprietary models may contain open-source code but rely on private source data to deliver unique capabilities. Only authorized parties can access these models.


Data

During training, models are exposed to large quantities of labeled and unlabeled data to learn how to perform specific tasks. These datasets can also be either open source or proprietary. A model’s accuracy depends on the volume and relevance of the training data used. Modern LLMs are trained on vast amounts of data, which gives them a better understanding of human language in all its forms.


Training

To create functional AI models, developers feed training algorithms labeled and unlabeled data. Labeled data is information that’s tagged to guide the algorithms. Unlabeled data is raw data without any labels or tags.

Throughout training, developers will use supervised and unsupervised learning. Typically, during supervised learning, the algorithm is given labeled data — the labels are the types of outputs the AI model is learning to produce. During unsupervised learning, the AI model is given unlabeled data to find the structure or patterns within that data.

Developers will also use other techniques, such as reinforcement learning, during which they provide feedback to the model after it performs a specific action. Over time, the feedback enables the model to make better decisions.
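
To illustrate the difference, here is a minimal sketch using scikit-learn (an assumed library choice; the tiny dataset is invented). The same inputs are used once with labels for supervised learning and once without labels for unsupervised clustering:

```python
# Minimal sketch of supervised vs. unsupervised learning (scikit-learn).
# The four-row dataset is invented purely for illustration.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Each row describes a document with two toy features:
# [page count, number of defined terms]
X = [[2, 10], [3, 12], [40, 250], [55, 300]]
y = [0, 0, 1, 1]  # labels: 0 = simple contract, 1 = complex contract

# Supervised: the algorithm sees the inputs AND the labeled outputs
# it should learn to produce.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[45, 280]]))  # -> [1] (complex)

# Unsupervised: the algorithm sees only the inputs and finds structure
# (here, two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # cluster assignments discovered without labels
```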


Inference

Once a model is trained, a user can give it a problem, and it will provide an answer. This process is called inference. The AI model draws a conclusion or makes a prediction based on the inference data the user provides. This process does not train the model further.
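
The boundary between training and inference is easy to see in code. In this minimal scikit-learn sketch, training sets the model’s parameters once, and subsequent predictions read those parameters without changing them:

```python
# Sketch: inference does not train the model further. predict() reads
# the learned parameters; it never updates them.
from sklearn.linear_model import LogisticRegression

X = [[2, 10], [3, 12], [40, 250], [55, 300]]
y = [0, 0, 1, 1]
model = LogisticRegression().fit(X, y)    # training happens here, once

before = model.coef_.copy()
model.predict([[5, 20], [60, 320]])       # inference on new data
assert (model.coef_ == before).all()      # parameters are unchanged
```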


Prompts

A prompt is the input provided to an LLM to elicit a desired response. It can be as simple as a question or statement or a more structured guide to the model’s output. The design of a prompt significantly affects the quality, relevance, and accuracy of the model’s response.

Prompts typically include instructions, context, or examples to shape the response. For example:

  • A simple prompt: “Explain machine learning.”
  • A structured prompt: “You are an expert in AI. Explain machine learning to a beginner in three sentences.”


Prompts play a central role in LLMs because these models generate responses purely based on the context provided in the input. Crafting effective prompts, sometimes referred to as prompt engineering, is a key skill for leveraging LLMs efficiently, especially when working on complex tasks.
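
As a sketch of how the two prompts above might be sent programmatically, here is an example using the OpenAI Python SDK (an assumed provider; it requires an API key in the OPENAI_API_KEY environment variable, and any chat-style LLM API follows the same pattern):

```python
# Sketch: sending a simple vs. a structured prompt to an LLM.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

simple = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain machine learning."}],
)

structured = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are an expert in AI."},
        {"role": "user", "content":
            "Explain machine learning to a beginner in three sentences."},
    ],
)

print(simple.choices[0].message.content)
print(structured.choices[0].message.content)
```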

Types of AI Models

AI models are broadly classified as predictive or generative.


Predictive models

Predictive models make decisions or predictions about future outcomes (for example, predicting the complexity of an upcoming contract negotiation) by identifying patterns and trends in data. Predictive models can deliver consistent, accurate results when trained on high volumes of relevant information, and they can automate many manual tasks with minimal human oversight. However, the quality of their outputs declines precipitously with poor training data, as the sketch below illustrates.
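
A small sketch of that sensitivity, using scikit-learn and synthetic data: the same model is trained twice, once on clean labels and once with 40% of the labels deliberately corrupted, and its accuracy on held-out data typically drops in the second case.

```python
# Sketch: a predictive model's output quality tracks its training data
# quality. Synthetic data; the 40% label corruption simulates a poor
# training set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("clean labels:", clean.score(X_te, y_te))

rng = np.random.default_rng(0)
noisy_y = y_tr.copy()
flip = rng.random(len(noisy_y)) < 0.4       # corrupt 40% of the labels
noisy_y[flip] = 1 - noisy_y[flip]

noisy = LogisticRegression(max_iter=1000).fit(X_tr, noisy_y)
print("noisy labels:", noisy.score(X_te, y_te))  # typically lower
```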


Generative models

Generative models create unique text, images, audio, and synthetic data (for example, drafting a legal clause) by mimicking content they have previously analyzed. Generative models allow legal professionals to tackle use cases that require context-specific, text-based responses.

Unfortunately, these models have two considerable shortcomings. First, they’re prone to hallucinating: fabricating baseless assertions that they present as fact. Second, they can generate different answers to the same question, a behavior known as non-determinism. For these reasons, generative AI requires human fact checkers: professionals familiar with the subject matter and the way in which the organization will use the AI outputs.
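
The second shortcoming is easy to observe directly. In this sketch (same OpenAI SDK assumption as in the prompt example), an identical prompt is sent three times with a sampling temperature above zero, and the responses frequently differ:

```python
# Sketch: non-deterministic outputs. With temperature > 0, the same
# prompt can yield different responses on every call. Assumes the
# OpenAI Python SDK and an OPENAI_API_KEY, as in the prompt example.
from openai import OpenAI

client = OpenAI()
prompt = [{"role": "user", "content": "Name one risk of using generative AI."}]

answers = {
    client.chat.completions.create(
        model="gpt-4o-mini", messages=prompt, temperature=1.0
    ).choices[0].message.content
    for _ in range(3)
}
print(f"{len(answers)} distinct answers from 3 identical calls")  # often > 1
```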


Why Humans in the Loop Matter

While the financial services and legal industries can leverage AI more now than ever, the technology is still not advanced enough to replace human expertise. It’s essential that humans remain involved when 1) training AI models and 2) reviewing inference outputs.

When an AI tool keeps humans in the loop, these professionals provide the model with explicit qualitative feedback regarding whether the output is correct. This feedback loop is invaluable in improving the quality of AI outputs over time.
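
What capturing that feedback might look like in practice is sketched below; the record fields and reviewer interface are hypothetical, not a description of any particular product:

```python
# Minimal human-in-the-loop sketch: every AI output passes through a
# reviewer, and each verdict is logged as explicit feedback that can
# later drive evaluation or retraining. Field names are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Review:
    prompt: str
    model_output: str
    approved: bool               # the reviewer's explicit verdict
    correction: Optional[str]    # the expert's fix when the output is wrong

feedback_log: list[Review] = []

def human_in_the_loop(
    prompt: str,
    model_output: str,
    reviewer: Callable[[str, str], tuple[bool, Optional[str]]],
) -> str:
    approved, correction = reviewer(prompt, model_output)
    feedback_log.append(Review(prompt, model_output, approved, correction))
    return model_output if approved else (correction or model_output)
```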

How to Evaluate AI Outputs

The most practical way to evaluate AI tools is by comparing their outputs to a professional’s expertise and work quality. While developers may rely on technical metrics to assess performance, users should focus on the tool’s practical value: Does it deliver accurate, comprehensive, and reliable answers? Does it improve efficiency without adding risk?

During each test, identify whether the tool produces inaccurate data, overlooks key steps, or generates superfluous information. These insights will help determine whether the tool meets your needs and adds value to your workflow.


To evaluate effectively, test the AI tool on specific use cases and scrutinize its outputs for the following (a minimal spot-check sketch follows this list):

  • Errors or incomplete results
  • Missing critical information
  • Unnecessary or irrelevant content
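
Here is a minimal sketch of such a spot check (the questions and gold answers are invented): compare the tool’s answers on known cases against an expert’s answers and flag errors, omissions, and mismatches.

```python
# Sketch: spot-checking an AI tool against expert "gold" answers.
# Questions and answers are invented for illustration.
gold = {
    "Does contract A contain a standstill clause?": "yes",
    "What is fund B's management fee?": "2%",
    "Who is the counterparty in NDA C?": "Acme Capital",
}
tool = {
    "Does contract A contain a standstill clause?": "yes",
    "What is fund B's management fee?": "1.5%",   # an error to catch
    # NDA C question unanswered: missing critical information
}

for question, expected in gold.items():
    actual = tool.get(question)
    if actual is None:
        print(f"MISSING: {question}")
    elif actual != expected:
        print(f"ERROR:   {question} -> got {actual!r}, expected {expected!r}")
    else:
        print(f"OK:      {question}")
```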


Conventional AI vs. Ontra’s Purpose-Built Solutions

Several factors differentiate conventional AI tools from those purpose-built for private markets.


Industry-specific data

Little information about private fund contracts and legal documents exists in the public domain, which is why conventional AI can’t fully meet the needs of the financial services and legal industries in terms of relevance and precision. Private fund managers, asset managers, and investment banks need AI solutions built specifically for this industry.

Though Ontra is not currently training its own models, it relies on years of proprietary, industry-specific data to develop its solutions for private markets. Using this data set as a reference yields higher-quality AI-generated results.


In-depth contract frameworks

Private markets contracts contain nuances not found in contracts from other industries. For AI-powered contract automation to be effective, the solution needs a thorough understanding of all the clauses in a given document as well as the associated details. Ontra has processed over one million private markets contracts, which have become the basis for our extensive, industry-specific contract frameworks. These proprietary “contract anatomies” can contain hundreds, if not thousands, of fields, providing our AI models with a robust understanding of industry-specific contracts.
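
As a purely hypothetical illustration of the idea, a tiny slice of a contract anatomy could be encoded as a structured schema like the one below; real frameworks can run to hundreds or thousands of fields, and all the field names here are invented:

```python
# Hypothetical illustration of a "contract anatomy": a structured schema
# an extraction model fills in per document. All field names are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NDAAnatomy:
    counterparty: str
    effective_date: str               # e.g., "2025-01-15"
    term_months: int
    has_standstill_clause: bool
    standstill_months: Optional[int]  # populated only when the clause exists
    governing_law: str
```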


Proprietary AI systems

Based on industry-specific use cases, Ontra has defined the types of answers private markets firms need to accelerate their legal workflows. For example, users may need to access precedent while negotiating a contract or quickly summarize an executed document. Based on our deep understanding of these requirements, we’ve built predictive and generative AI systems to produce the high-quality, context-specific outputs users need.


LLM integrations

Ontra integrates commercial LLMs into its legal tech platform, combining the power of LLMs with our proprietary contract frameworks and machine learning models. LLMs have several strengths over task-specific machine learning models: a single LLM can perform many tasks well, enabling developers to use the same model across several problems, and LLMs require less task-specific data and offer a quicker validation cycle, enabling developers to experiment faster. Ontra aims to leverage state-of-the-art LLMs to continue providing private markets firms with the best AI experience.


Integrated solutions

Ontra offers a seamless user experience by embedding AI outputs throughout our solutions. The AI surfaces the right information at the right time to the right user to maximize productivity. For example, when a lawyer is reviewing a contract term, an AI feature can retrieve previously negotiated contracts that contain similar provisions and highlight relevant clauses.
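
A minimal sketch of the retrieval idea behind such a feature, using TF-IDF cosine similarity from scikit-learn (the clauses are invented, and production systems typically use richer embeddings, but the principle is the same):

```python
# Sketch: retrieving the precedent clause most similar to the clause
# under review. Invented clauses; TF-IDF stands in for richer embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

precedents = [
    "Recipient shall not solicit employees of the Company for 12 months.",
    "This Agreement is governed by the laws of the State of Delaware.",
    "Recipient agrees to a standstill period of 18 months.",
]
under_review = "The standstill obligation lasts for twelve months."

matrix = TfidfVectorizer().fit_transform(precedents + [under_review])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = scores.argmax()
print(f"Closest precedent: {precedents[best]!r} (similarity {scores[best]:.2f})")
```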


Human-in-the-loop supervision

The private funds industry requires a high degree of precision and can’t rely on AI systems that produce answers with low relevance and accuracy. Conventional generative AI models are prone to present fabricated facts as true and deliver inconsistent responses to questions. Ontra avoids these pitfalls by tapping trained experts, including our Legal Network of over 600 independent legal professionals, to review the accuracy of our AI outputs. We use human feedback to deliver strong outputs and ensure our customers can be confident in our solutions.

"We're incredibly pleased with Ontra's use of cutting-edge technology to deliver legal solutions optimized for quality, speed, and low cost."
Neal Kalechofsky
Former VP of Alternatives Legal, AllianceBernstein

AI Privacy and Security

Financial services and legal users of AI tools are most concerned with accuracy and data security. Private fund managers handle an immense volume of sensitive data — investor information, deal analyses, and business intelligence. Any AI tool they use must be able to access this information to offer actionable outputs while adhering to the industry’s privacy and security requirements.


Given this concern, reliable AI providers take a comprehensive approach to security, including:


  • Enterprise-grade infrastructure, such as AWS, Google Cloud, and Cloudflare
  • Advanced encryption techniques, such as TLS 1.3 and AES-256 with rotating keys (an illustrative sketch follows this list)
  • Multiple levels of redundancy, such as site-by-site data replication, load balancing, and real-time document backups
  • SOC 2 Type 2 Audit
  • ISO 27001:2022 Certification
  • Zero Data Retention (ZDR) agreements, which prohibit LLM providers from storing prompt data or using it in any way, including for training
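
As a flavor of one item on this list, here is a minimal AES-256 authenticated encryption sketch using Python’s cryptography package; it is illustrative only, since providers implement encryption and key rotation inside their managed infrastructure:

```python
# Minimal sketch of AES-256 authenticated encryption (AES-256-GCM) with
# the `cryptography` package (pip install cryptography). Illustrative only.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key; rotated in practice
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # unique nonce per message

ciphertext = aesgcm.encrypt(nonce, b"sensitive investor data", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"sensitive investor data"
```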

Learn More About Ontra

For those without technical training, AI may always seem a bit of a mystery. However, by understanding the fundamentals, you and your colleagues can evaluate and choose the right AI tools for your needs.


Learn more about Ontra's AI-powered platform for private markets


Ontra is not a law firm and does not provide any legal services, legal advice, or referral services and, as a result, we do not provide any legal representation to clients, nor do we participate in any legal representation of clients. The contents of this article are for informational purposes only, and are not intended to constitute or be relied upon as legal, tax, accounting, regulatory, or other professional advice, opinion, or recommendation by Ontra or its affiliates. For assistance or guidance regarding the impact or applicability of the topics discussed in this article to your business, please consult your legal or other professional advisers.