Improving Core Earnings Analysis with LLMs




Scaling Core Earnings Measurement With Large Language Models | CLS Blue Sky Blog

Large language models (LLMs) are transforming how we analyze corporations, presenting new opportunities and potential pitfalls. Built on deep learning and trained to predict word sequences, these models can now perform, with remarkable accuracy, complex tasks that previously relied on human judgment.

In their recent paper, authors Matthew Shaffer and Charles C.Y. Wang explore the application of LLMs in estimating “core earnings,” which reflect a firm’s ongoing profitability, distinct from transitory financial items. Traditional measures mandated by Generally Accepted Accounting Principles (GAAP) often include these one-time items, complicating the accurate assessment of a company’s true economic performance.

The authors highlight the challenges involved in delineating core earnings, which requires extensive analysis of financial statements and footnotes. Historically, analysts have faced a trade-off between in-depth, qualitative research and broad quantitative analysis. LLMs have the potential to bridge this gap, increasing both the depth and the efficiency of analysis without the biases that human analysts might introduce.

Initial tests using GPT-4o demonstrated that a standard, out-of-the-box application of the LLM could produce significant errors in estimating core earnings. This prompted the authors to refine their approach by directly instructing the model through a structured, sequential method. This new method yielded more reliable outputs that aligned better with expected core earnings characteristics.
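The paper's actual prompts are not reproduced here, but the structured, sequential approach can be illustrated with a minimal sketch. In practice, each step below would be sent to the model (e.g., GPT-4o) in sequence; the step wording and the `estimate_core_earnings` helper are hypothetical assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch of a structured, sequential prompting pipeline for
# core-earnings estimation. Step wording and the helper are illustrative
# assumptions, not the authors' actual prompts.

SEQUENTIAL_STEPS = [
    "Step 1: List every line item in the income statement and footnotes.",
    "Step 2: Classify each item as recurring (core) or transitory (one-time).",
    "Step 3: Report each transitory item's signed effect on net income.",
    "Step 4: Compute core earnings as net income minus the net transitory effect.",
]

def estimate_core_earnings(net_income: float,
                           transitory_effects: dict[str, float]) -> float:
    """Back out one-time items from GAAP net income.

    `transitory_effects` maps each one-time item to its signed contribution
    to net income (gains positive, charges negative).
    """
    return net_income - sum(transitory_effects.values())

# Example: $100M GAAP net income that includes a -$10M restructuring charge
# and a +$15M one-time gain on an asset sale.
core = estimate_core_earnings(100.0, {"restructuring": -10.0,
                                      "gain_on_sale": 15.0})
print(core)  # 95.0
```

Breaking the task into ordered steps, rather than asking for a core-earnings figure in one shot, is what the authors found produced more reliable outputs than the out-of-the-box approach.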

Tests revealed that the sequential approach not only produced smoother earnings measures, indicative of effective separation of transitory and recurring elements, but also showed promise in predicting future earnings and reflecting market valuations accurately.

While their findings underline the transformative potential of LLMs in financial analysis, the authors caution against over-reliance on these models. They emphasize the need for human oversight to catch errors that automated analyses may introduce, particularly in high-stakes investment decisions.

In conclusion, this research not only advances the conversation around LLMs in corporate financial analysis but also sets a framework for future studies to harness their capabilities effectively while remaining cognizant of their current limitations.

Article by Matthew Shaffer, Assistant Professor at USC, and Charles C.Y. Wang, Professor of Business Administration at Harvard Business School. Read more in their paper, “Scaling Core Earnings Measurement with Large Language Models.”

