#6 What Are LLMs Really Created For? Hint: It’s Not Creativity

09.01.2025  

Plastics rarely make headlines. However, an article about black plastic, published in October 2024 in the peer-reviewed journal Chemosphere, caused a wave of panic. It claimed that your kitchen spatulas were toxic, suggesting that every meal prepared with them was a ticket to the toxicology ward. Media outlets from Newsmax and Food and Wine to the Daily Mail and CNN were ablaze with reports, and the public quickly began debating the grim findings and wondering how to dispose of their supposedly poisonous spatulas. But in December the study was fed into the language model ChatGPT o1, which identified a calculation error. Black plastic — and kitchen spatulas — were deemed safe once again!
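
For readers curious about the mistake itself: press coverage of the December correction reported that the authors compared an estimated daily intake of about 34,700 ng of the flame retardant BDE-209 against a safety threshold they listed as 42,000 ng per day, whereas the EPA reference dose of 7,000 ng per kilogram of body weight, applied to a 60 kg adult, actually works out to 420,000 ng per day. Here is a minimal sketch of that arithmetic, using the figures as reported in the press rather than taken from the paper itself:

```python
# Reconstruction of the reported calculation error in the Chemosphere paper.
# Figures come from press coverage of the correction and are illustrative only.

reference_dose_ng_per_kg = 7_000   # EPA reference dose for BDE-209, ng per kg body weight per day
body_weight_kg = 60                # adult body weight assumed in the study

correct_limit = reference_dose_ng_per_kg * body_weight_kg  # 420,000 ng/day
published_limit = 42_000           # threshold printed in the paper (missing a factor of ten)

estimated_intake = 34_700          # estimated daily intake from black plastic utensils, ng/day

print(estimated_intake / correct_limit)    # ~0.08: well below the safe limit
print(estimated_intake / published_limit)  # ~0.83: looks alarmingly close to the limit
```

With the misplaced factor of ten, the estimated exposure appeared to reach roughly 80 percent of the safe limit instead of about 8 percent, and that single slip is what fueled the headlines.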

This story perfectly illustrates the true purpose of large language models (LLMs): analyzing data at a scale that surpasses the capabilities of both the human mind and conventional computer algorithms.

LLMs Are Not for Texts

Most people view LLMs as tools for generating text, creating artwork, or composing music. These capabilities are fascinating, and at AUP training sessions we often explore how to use them effectively. However, the primary purpose of language models is to bring order to the chaos of information. We live in an era where the volume of data far exceeds our ability to process and comprehend it. While basic computer algorithms handle structured tables well, they are powerless when faced with the vast ocean of unstructured text, where outcomes hinge on context.

LLMs can do more than just read billions of pages; they can identify connections between events that humans might overlook. They detect patterns in chaos and do so with remarkable speed.

Generating text, images, or music is merely a marketing strategy — a storefront designed to capture attention and secure funding. Licensing, integration into business services, and social media applications are all means of gathering resources to advance the core function of LLMs: analysis.

Remember Hari Seldon from Isaac Asimov's Foundation trilogy? His theory of psychohistory predicted humanity's future over millennia using mathematics and the analysis of large-scale societal data. While that was fiction, artificial intelligence brings us closer to a similar reality. Imagine LLMs not only identifying patterns but also building models of civilization's development, forecasting crises, determining optimal paths for progress, and helping to prevent chaos.

The Risks of Absolute Predictions

The use of LLMs on a global scale not only transforms how we address problems but also redefines the problems themselves. In the past, we often waited until a situation reached a critical point before taking action. Now, we have tools that enable us to predict potential catastrophes in advance.

Imagine that an LLM predicts an economic crisis in 2030, and governments around the world start acting on that prediction. Politicians change tax policies, corporations cut costs, consumers start saving. Will the predicted crisis still happen? Or did the prediction itself change the future so much that the crisis receded, or a completely different problem emerged? And did the LLM's forecast create the new panic in the first place?

This phenomenon can be called "third-level prediction": the mere fact of knowing about the future changes what was supposed to happen. Physics has a similar idea in the uncertainty principle, where the act of observation changes the observed object. But here the stakes are higher: we are talking about the impact on the economy, the climate, social movements, and political decisions.

The Future in Human Hands

This raises another question: won't LLM predictions that turn out to be wrong become something like myths? Like astrological forecasts, they can influence human behavior, yet their accuracy becomes unattainable precisely because millions of people change their actions in response to them. As a result, the future turns out differently every time. Do we risk ending up in a situation where, instead of accurate predictions, we get phantom scenarios that look scientifically and mathematically sound but do not work in reality? And if so, should we rely on LLM predictions at all?

Consider this situation: an LLM models several scenarios for addressing a global challenge, such as the climate crisis or the threat of World War III. One scenario is optimal; another is faster but riskier. Who makes the decision? A person? A government? The UN? Or the artificial intelligence itself? The more power we give to LLM predictions, the more pressing the question of how much autonomy such models should have. And if an LLM does predict the outbreak of World War III, won't states be tempted to strike first and "accelerate" the course of events?

Sources for Further Exploration

"How a simple math error sparked a panic about black plastic kitchen utensils"

"Large Language Model Prediction Capabilities: Evidence from a Real-World Forecasting Task"

"An Analysis of Large Language Models: Their Impact and Potential Applications"

"DeepMind and BioNTech build AI lab assistants for scientific research"


"AUP-info" multimedia online media 
(identifier in the Register of Entities in the Media Sector: R40-00988).