A richer dialogue between human experts and large language models may improve outcomes.
Artificial intelligence and human talent are intertwined in investment research today, a symbiotic relationship that seems likely to last as long as research itself. The quest to unlock outperformance through distinctive analysis is the bedrock of traditional fundamental and quantitative research, and we believe that AI is strengthening that foundation.
Large language models (LLMs) are a key part of the AI toolkit. Parsing vast amounts of unstructured data, news and reports at a speed and scale beyond human capabilities, they extract pertinent financial insights, market trends and potential investment signals—sharing them in high-quality, human-like text. LLMs are essentially well-read, highly knowledgeable associates available to analysts 24/7.
For research analysts and portfolio managers to harness LLMs’ full potential in making more informed, more timely investment decisions, they must ask the right questions and provide the right instructions. In doing so, analysts must amplify one of their own abilities: communication.
That’s where prompt engineering comes in: crafting precise queries that direct LLMs to generate the most relevant and accurate insights.
Prompt Engineering: Leading the Conversation
Put simply, an LLM is on its best behavior when analysts furnish well-crafted prompts. This critical series of carefully designed questions or instructions provides models with needed context, clarity on the specific task being requested and specifications for the output.
It’s crucial to clearly define stakeholder expectations and specific requirements from the start. This can be accomplished by applying patterns or techniques to improve the effectiveness of prompts, making them more consistent and more scalable—effective across multiple uses. This process is typically iterative, with analysts refining prompts based on the feedback and results from the LLM.
This process can take considerable bandwidth from both analysts and data scientists—and require a healthy dose of trial and error. But given the stakes, we think the investment of time and effort is capital well spent, with a potentially powerful payoff in more targeted, more reliable and more meaningful insights from LLMs that may enhance investment outcomes.
Understanding Prompt-Engineering Techniques
What are some of the methods in prompt engineering that may reduce the amount of “capital” that needs to be invested in interacting with an LLM, while keeping its output more on point? The real-world investment applications can be diverse and complex—and proprietary!—but a few basic hypothetical examples can shed some light.
Imagine you’re an entry-level credit analyst reviewing a loan document, and you want to double-check your understanding of the term “debtor in possession.” Asking the LLM a direct question such as “Explain the term ‘debtor in possession’ to me” may produce the following response:
“Debtor in possession (DIP) is a term used in bankruptcy law to refer to a company or individual that has filed for bankruptcy and is allowed to continue operating their business while they restructure their debts. The DIP is given the authority to manage their own assets and operations, subject to court approval, and is responsible for developing a plan to repay their creditors. This is in contrast to a trustee, who is appointed by the court to manage the assets of a bankrupt company or individual. The DIP is typically given certain advantages, such as priority in payment of certain debts, to help them reorganize and emerge from bankruptcy as a viable business.”
While this explanation might be useful to some people, it may not delve as deeply into the topic as an investment expert would require. Eliciting that level of response requires more clarification as to the intended audience. Armed with the context that the query is coming from a credit analyst, the LLM can provide a more detailed response for someone with greater financial acumen (Display).
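The pattern above—prepending audience context to an otherwise identical question—can be sketched programmatically. The `build_prompt` helper and the persona wording below are hypothetical illustrations for this article, not any particular firm's or vendor's API; the same idea applies whatever LLM interface is actually used.

```python
# Minimal sketch of context-enriched prompting. The build_prompt helper
# and persona text are hypothetical, invented for illustration only.

def build_prompt(question: str, persona: str = "") -> str:
    """Combine an optional audience/persona preamble with the question."""
    if not persona:
        return question  # direct question: general-audience answer
    return (
        f"You are answering for the following audience: {persona}\n\n"
        f"Question: {question}"
    )

question = "Explain the term 'debtor in possession' to me."

# Direct question -- tends to elicit a layperson-level explanation.
basic = build_prompt(question)

# Same question with audience context -- steers the model toward
# the more technical, credit-focused answer the analyst needs.
contextual = build_prompt(
    question,
    persona="an entry-level credit analyst reviewing a loan document",
)

print(contextual)
```

The prompt itself is the only thing that changes between the two calls; the added context is what shifts the depth and framing of the model's response.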