
AI in Financial Analysis – a Newcomer with Uncertain Potential

According to DVFA Investment Professionals, AI is increasingly easing analytical workflows, yet uncertainties regarding data quality, regulation, and liability constrain its use and make human oversight indispensable.

DVFA surveyed its Investment Professionals on how they assess the current use of artificial intelligence in terms of benefits, limitations, and obstacles in their professional environment. “In many teams, AI has become a productivity tool—especially for text, structuring, and the rapid preparation of content. The key is to establish governance and data quality in such a way that speed also translates into reliability,” comments Christoph Schlienkamp, Head of the newly established DVFA Expert Committee on Artificial Intelligence.

Text generation currently provides the greatest value

At present, DVFA Investment Professionals see the most worthwhile application of artificial intelligence in their work in text creation, for example for summaries or minutes, but also for preparing Q&A catalogues (50% of all responses).

In second place, far behind at 18%, is support for idea generation in research, such as hypotheses, screenings, and peer comparisons. Close behind in the ranking of key applications is assistance with data work, for example extracting or structuring KPIs for reports (17%).

By contrast, AI has so far delivered only limited added value for monitoring early-warning signals or for news/sentiment analysis (5%), and similarly for metric calculations within company valuations (2%).

Ad hoc use via single prompts still dominates

When asked about their current AI usage, the vast majority of respondents (71%) stated that they use AI tools primarily “ad hoc” via individual prompts—selectively and often experimentally. One in five (21%) already uses the new possibilities for standardized workflows or templates, for example to review reports. The largely automated use of end-to-end agents has evidently not yet gained traction, accounting for only 4% of responses. A further 3% do not use AI at all.

Publicly accessible tools are the leading AI setup

Nearly half (47%) of Investment Professionals who already use AI rely on publicly accessible tools such as ChatGPT or Gemini without using internal data. The second most important setup, at 25% of responses, consists of solutions such as in-house models in protected environments, where internal data can be used without the risk of, for example, copyright conflicts.

Almost one in four responses (23%) referred to the use of Microsoft Copilot (M365/Teams/Excel). At 2%, the use of specialist platforms—for example those of research or data providers—currently represents only a niche application in financial analysis.

Data quality, compliance rules, and copyright issues slow adoption

In day-to-day work, the strongest obstacle to the use of AI tools among DVFA Investment Professionals is that the quality of input data in large public AI models is often inadequate and not verifiable (46% of responses).

There is also considerable uncertainty regarding compliance with copyright, data protection, and regulatory requirements across the AI process chain, which means that internal policies hinder broader AI adoption, not least for liability reasons (27%).

High integration effort (such as IT interfaces and processes), reputational risks, and limited client acceptance represent further barriers (14%).

Additional reasons for the still cautious use of AI in core financial-analysis processes include a lack of skills, unclear guidelines, internal resistance to change, and fears of a growing detachment from the professional identity of the “independent-thinking financial analyst” (13%).

Strict human oversight remains indispensable

Almost half of respondents (49%) agreed that metric calculations or company valuations used as a decision basis should never be produced solely by AI or would, in any case, require oversight by “human intelligence.”

AI-generated research ideas as well as news, sentiment, and early-warning signals—areas where relatively high value from AI is expected—must also always be validated (24% of responses). One in ten respondents demands the same for AI-generated text, such as summaries or Q&A suggestions. The assessment is somewhat less strict regarding AI-generated KPI extraction or data structuring from reports (5%).

In comments, some respondents noted that essentially “everything must be checked again,” similar to reviewing the work of a junior employee. Unlike a talented staff member, however, the potential of AI tools appears limited as long as they draw on arbitrary, partly questionable and increasingly AI-generated sources and repeatedly incorporate “hallucinated” passages into their results. For liability reasons alone, such AI products should therefore not be used without reservation, particularly in far-reaching investment decisions.

“The survey demonstrates healthy professionalism: AI is welcome as assistance, but not as a substitute for analytical responsibility. Especially in valuation and decision-relevant research, transparent methods, plausibility checks, and clear guardrails are required,” says Volker Sack, Head of the DVFA Commission on Corporate Analysis, to which the AI Expert Committee is thematically assigned.
