Investors are using artificial intelligence to extract the truth behind CEOs’ calm words


On his last earnings call as chief executive of genetic sequencing company Illumina, Francis deSouza did his best to stay positive.

A controversial $8bn acquisition of cancer screening company Grail had triggered a campaign by activist investor Carl Icahn, battles with competition authorities on both sides of the Atlantic, and criticism from Grail’s founding directors.

deSouza told analysts that the drama affected only “a very small part of the company.”

But every time he was asked about Grail, his speaking rate, pitch and volume shifted, according to Speech Craft Analytics, which uses artificial intelligence to analyse audio recordings. There was also an increase in filler words such as “um” and “ah,” and even audible gulps.
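To illustrate the kind of signal involved, the sketch below extracts per-frame volume (RMS) and a crude pitch estimate via autocorrelation from an audio signal. It is a toy, not Speech Craft Analytics’ method; the function name, frame size and pitch search range are invented for the example.

```python
import numpy as np

def voice_features(signal, sr=16000, frame_ms=40):
    """Per-frame RMS volume and a crude autocorrelation pitch estimate."""
    n = int(sr * frame_ms / 1000)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    pitches = []
    for f in frames:
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[n - 1:]  # lags 0 .. n-1
        lo, hi = sr // 400, sr // 50  # search roughly 50-400 Hz
        lag = lo + int(np.argmax(ac[lo:hi]))
        pitches.append(sr / lag)
    return rms, np.array(pitches)

# Synthetic "speech": one second of a 200 Hz tone
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 200 * t)
rms, pitch = voice_features(tone, sr)
```

A real pipeline would track how these per-frame statistics change when a sensitive topic comes up, relative to the speaker’s baseline.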

This combination “points to anxiety and stress, specifically when tackling such a sensitive issue,” according to David Pope, senior data scientist at Speech Craft Analytics.

deSouza resigned less than two months later.

The idea that audio recordings could offer clues to executives’ true feelings has caught the attention of some of the world’s largest investors.

Many funds already use algorithms to search transcripts of earnings calls and company presentations to gather signals from executives’ choice of words — a field known as “natural language processing” or NLP. Now they are trying to find more messages in the way these words are pronounced.
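A minimal version of that transcript-scoring approach can be sketched as a dictionary-based tone score. The word lists here are illustrative stand-ins; real systems use finance-specific dictionaries (such as Loughran-McDonald) or trained models.

```python
# Toy positive/negative word lists -- illustrative only
POSITIVE = {"strong", "growth", "record", "confident", "improved"}
NEGATIVE = {"decline", "headwinds", "uncertainty", "impairment", "weak"}

def tone_score(transcript: str) -> float:
    """Net tone in [-1, 1]: (positive - negative) / (positive + negative)."""
    words = [w.strip(".,!?\u201c\u201d\"").lower() for w in transcript.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

Scoring “We delivered record growth despite persistent headwinds” counts two positive hits and one negative, giving a net-positive tone; it is exactly this kind of word-choice signal that executives have learned to game, which is what pushes funds toward audio.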

“The idea is that audio captures more than just what’s in text,” said Mike Chen, head of alternative alpha research at Robeco Asset Management. “Even if you have a sophisticated semantic machine, it only picks up semantics.”

Hesitations and filler words tend to be left out of transcripts, and AI can also pick up “micro-tremors” that the human ear cannot detect.

Robeco, which manages more than $80 billion in algorithmically driven funds, making it one of the largest quant companies, began adding audio signals captured by artificial intelligence to its strategies earlier this year. Chen said it has added to the returns, and he expects more investors to follow suit.

The use of voice represents a new level in the cat-and-mouse game between fund managers and executives.

“We found tremendous value in the texts,” said Yin Luo, head of quantitative research at Wolfe Research. “The problem it has created for us and many others is that published sentiment has become more and more positive… (because) the company’s management knows that its messages are being analysed.”

Several research papers have found that presentations have become increasingly positive since the advent of NLP, as companies adjust their language to play to the algorithms.

A research paper Luo co-wrote earlier this year found that combining traditional NLP with voice analysis was an effective way to differentiate between companies as their disclosures become increasingly “standardised.”

Although costs are falling, this approach is still relatively expensive. Robeco spent three years investing in new technology infrastructure before it even began work on integrating voice analysis.

Chen spent years trying out voice before joining Robeco, but found the technology wasn’t advanced enough. Although available insights are improving, limitations remain.

To avoid jumping to conclusions based on personality differences (some executives may be naturally more emotional than others), the most reliable analysis comes from comparing the same person’s speech across calls over time. But this makes it difficult to judge a newly appointed leader, a time when insight would arguably be particularly useful.
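The per-speaker baseline idea reduces to comparing a new observation against that executive’s own history, for instance as a z-score. This is a sketch; the feature (filler words per minute) and the numbers are hypothetical.

```python
import statistics

def anomaly_score(history, current):
    """Z-score of the current call's feature vs. this speaker's own baseline."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (current - mu) / sigma if sigma else 0.0

# Hypothetical: this CEO's filler-word rate on past calls, then a spike
filler_rate_history = [2.1, 1.8, 2.0, 2.3, 1.9]  # fillers per minute
spike = anomaly_score(filler_rate_history, 4.5)  # large positive z-score
```

The obvious weakness, as the article notes, is that a new CEO arrives with an empty `history`, so there is no baseline to score against.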

An executive at a company that provides NLP analysis said: “One of the limitations even in NLP is that a change of CEO disrupts the overall signal (of the analysis). This disruption effect should be stronger with audio.”

Developers must also take care not to build their own biases into audio-based algorithms, where differences such as gender, class or race can be more pronounced than in text.

“We’re very careful to make sure that conscious biases that we’re aware of don’t spread, but it’s possible that there are unconscious biases,” Chen said. “Having a large and diverse research team at Robeco helps.”

Algorithms can give misleading results when analysing someone speaking in a non-native language, and an approach that works in one language may not work in another.

Just as companies adapted to text analysis, Pope expects investor relations teams to begin coaching executives on tone of voice and other behaviours that transcripts miss. Voice analysis struggles with trained actors who can stay convincingly in character, but replicating that may be easier said than done for executives.

“Very few of us are good at modifying our voices,” he said. “It’s easier for us to choose our words carefully. We’ve learned to do this since we were very young to avoid getting into trouble.”
