“Saying ‘AI’ 15 times is no longer enough.”
Stuart Kaiser, head of US equity trading strategy at Citi, spoke to the Financial Times on Wednesday (June 19) about the current state of artificial intelligence (AI) investing.
As the report points out, most of the stocks that soared during last year’s AI boom have fallen this year, suggesting investors may be becoming more cautious about betting on companies that claim to benefit from the rise of artificial intelligence.
The surge in the share prices of companies such as Nvidia, now the world’s most valuable publicly traded company, has sparked debate about whether the U.S. stock market is being fueled by speculative hype, according to the Financial Times.
“AI remains a big topic, but if you can’t provide evidence, you’re going to suffer,” Kaiser said.
About 60% of the stocks in the S&P 500 have risen this year, but more than half of the stocks in Citi’s “AI Winners Basket” have fallen, according to the report. By contrast, more than 75% of the companies in that group rose last year.
“Investors are paying attention to the earnings story of ‘AI’ stocks,” Mona Mahajan, senior investment strategist at Edward Jones, told the Financial Times. “The differentiator with a company like Nvidia is that they’re showing real data while also making profits.”
In other AI-related news, PYMNTS wrote an article on Tuesday about a problem plaguing businesses: AI systems that confidently serve up plausible, but inaccurate, information — a phenomenon known as “hallucinations.”
As companies increasingly rely on AI for decision-making, the risks posed by such fabricated output are drawing growing scrutiny. At the heart of the issue is a type of AI system employed by many modern tech companies: large language models (LLMs).
“An LLM is built to predict the most likely next word,” Kelwin Fernandes, CEO of NILG.AI, an AI solutions company, told PYMNTS. “Instead of basing its answer on factual reasoning or understanding, it bases its answer on the probability of the most likely sequence of words.”
This reliance on probabilities means that if the training data used to develop the AI is flawed, or if the system misinterprets the intent of a query, it can produce confidently delivered but inaccurate responses: hallucinations.
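A deliberately simplified toy model can illustrate the mechanism Fernandes describes. The sketch below is not how a real LLM works internally (real models use neural networks over vast corpora, not bigram counts), but it shows the core failure mode: a purely statistical next-word predictor confidently repeats whatever its training data says most often, even when that data is wrong. The corpus, function names, and the planted error are all invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training corpus that deliberately contains a factual error,
# repeated more often than the correct statement ("paris is in france").
corpus = (
    "paris is in italy . paris is in italy . paris is in france ."
).split()

# Count bigram frequencies: how often each word follows another.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = bigrams[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, prob = most_likely_next("in")
print(word, round(prob, 2))  # "italy" wins (2 of 3 occurrences) despite being wrong
```

The model answers with high statistical confidence, yet its most probable continuation is false, because probability over word sequences is not the same thing as factual grounding. Scaled up, this is the essence of an LLM hallucination.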
“While the technology is rapidly evolving, it’s still possible for results to be completely wrong,” PolyAI CTO Tsung-Hsien Wen told PYMNTS.
