#9 Is smart AI enough for us?

17.02.2025  

Discussions about AI often revolve around the question: “Will it become smarter than us?” But perhaps something else is more important: “How smart do we need to make it?”

Einstein once said that we cannot solve our problems with the same level of thinking that created them [1]. If this is true, then creating super-intelligent artificial intelligence (Artificial General Intelligence, AGI) may be the only way to solve humanity's most complex problems.

But to understand what an effective end result should look like, it is worth examining the nature of human thinking, its limitations, and how those limitations can find their way into the AI algorithms we build. If artificial intelligence learns from human thought patterns, won't it inherit our inflexibility? The choice is ours: we can create an AI that solves problems differently than we do, one that mimics our way of thinking, or perhaps something fundamentally different.

Mental labels that simplify reality

We like to think we are rational. We think that when we argue or solve complex scientific problems, we are applying pure logic. But this is just an illusion.

Our brains were not designed for objective analysis; they were designed for survival. Instead of carefully calculating every decision, we use heuristics - mental shortcuts that help us act quickly, even if they are not always accurate. Tversky and Kahneman's study “Judgment under Uncertainty: Heuristics and Biases” [4] shows that most of our judgements rest on approximate strategies that save effort but also systematically lead to errors. Simply put, we don't analyse reality in detail so much as we fit it to the patterns we already know. This saves the brain's resources but produces predictable mistakes.

  • Availability heuristic. We estimate the likelihood of an event by how easily similar cases come to mind. That's why, after news of a plane crash, people become afraid to fly, even though the statistical probability of an accident remains negligible. Meanwhile, far more people die in car accidents every day, yet this causes neither fear nor a refusal to drive.
  • The anchoring effect. The first number we see shapes our subsequent estimates. If someone offers a car for $50,000 and then lowers the price to $40,000, that amount seems like a bargain, although without the first number we might consider it overpriced. Not to mention the old marketing trick we all know about and still fall for: $299.99 is not $300.
  • Confirmation bias. We tend to trust information that supports our existing beliefs, even if it is unfounded - a mechanism exploited by political propaganda and social media.

For example, in a classic experiment on the anchoring effect [4], participants were asked whether the share of African countries among UN members was higher or lower than a number they had just been shown (for instance, 10% or 65%), and were then asked to give their own estimate. Those shown 10% gave significantly lower answers than those shown 65%.

These same mental shortcuts (heuristics) have an analogue in how language models work. Constrained by processing power, memory, and a limited budget of tokens, they do not compute a perfect answer or analyse information from scratch; they produce the most likely continuation based on the patterns in their training data.
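A minimal sketch of this idea is shown below. The candidate tokens and their scores are invented for illustration; a real model derives such scores from billions of learned parameters, but the final step is the same: convert scores into probabilities and emit the likeliest continuation, not a verified fact.

```python
import math

# Toy next-token step. The candidate tokens and their logits are made up;
# a real language model computes logits over its entire vocabulary.
logits = {"Paris": 9.1, "Lyon": 5.3, "pizza": 0.7}

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    m = max(scores.values())
    exps = {tok: math.exp(s - m) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy choice of the most likely pattern

print(probs)        # roughly {'Paris': 0.98, 'Lyon': 0.02, 'pizza': 0.0002}
print(next_token)   # 'Paris'
```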

Thinking as an excuse, not an analysis

Another serious problem is that even when we think we are analysing a situation thoroughly, it is often just an after-the-fact justification for a decision that has already been made.

Most people do not think from first principles, but simply use familiar patterns from past experience, culture or social environment [2]. This means that our decisions are not based on rational analysis, but rather on impulsive choices that we later try to explain with logic.

Evidence of this can be found in split-brain studies [5]. In the 1980s, Michael Gazzaniga studied patients whose brain hemispheres had been surgically disconnected. He found that the left hemisphere (responsible for speech) made up stories to explain actions initiated by the right hemisphere, even though it did not actually know their true motives. For example, when the right hemisphere was shown the word “stand”, the patient stood up, and the left hemisphere (which had not seen the word) explained that he “wanted to go for coffee”.

This suggests that our consciousness often works as a “press secretary” that justifies decisions made at another level. And this is not only a feature of human thinking. Modern large language models (LLMs) operate on a similar principle - they do not analyse reality, they simply produce the most likely answer consistent with the data they were trained on.

In his book “Thinking, Fast and Slow”, Daniel Kahneman notes that people are confident in their judgements even when they are obviously wrong [6]. This explains why experts can make catastrophically wrong predictions but not doubt their own competence.

What kind of AI are we building?

If our thinking is based on heuristics, should we strive to create AI that thinks differently? Won't it turn out that truly rational AI will become incomprehensible and unacceptable to us?

Studies suggest that people often trust a model whose decisions resemble their own more than one that is strictly objective. This means that if artificial intelligence is too logical and devoid of human biases, people may simply refuse to use it. Such cases already exist: even commercial companies are adding instructions for their models to sound more human and to take into account the perceptions and needs of the audience.
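What such instructions look like varies by company and product; the example below is purely hypothetical, written in the widespread system-message format used by chat-style APIs, and only illustrates the kind of “be more human” guidance described above.

```python
# Hypothetical system prompt; the wording is invented, not taken from any
# real deployment. Chat-style APIs commonly accept a list of role-tagged
# messages like this one.
messages = [
    {
        "role": "system",
        "content": (
            "You are a support assistant. Be accurate, but sound human: "
            "acknowledge the user's frustration, avoid jargon, and adapt "
            "your tone to the audience's level of technical knowledge."
        ),
    },
    {"role": "user", "content": "My order hasn't arrived and nobody replies."},
]
# This list would then be passed to the provider's chat endpoint;
# the exact client call depends on the SDK.
```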

Currently, there are three main approaches to the development of AGI:

  • An AI prophet is a model that always gives the right answers. It knows a great deal, has absorbed the accumulated knowledge of humanity, and has access to the latest news from around the planet. But it cannot always explain why.
  • An AI engineer is a model that can not only recognise a problem but also find a practical solution. For example, robots guided by AGI could walk, run, change a light bulb in a room, or repair a water pipe in time.
  • An AI human is a model that not only understands what needs to be done but also grasps context and has a sense of humour, ethics, and intuition. It can carry on a conversation and offer heartfelt advice. This is an ideal we are not even close to yet.

Do we need an AI that only predicts events, one that takes action, or one that understands context and adapts like a human? Going further, we face a harder question: what does “intelligent” even mean, and do we want AI to think differently from humans? Or perhaps, instead of forcing it to mimic human thinking, we should focus on creating systems that complement us rather than repeat our mistakes?

And if an AGI is too rational, it may make decisions that strike humans as unacceptable on ethical or emotional grounds. For example, what if it decides that severe restrictions on freedoms are necessary for the survival of humanity?

Is it possible to think without prejudice?

Most people believe that AI should be objective. But here is the paradox: if AI is trained on human data, it inherits all our mistakes. Long-time LLM users have already noticed this - the most popular language models have become “lazy”, showing an unwillingness to consider a query in depth and defaulting to the most general answers. This is a side effect of the constraints imposed during model training.

When OpenAI released one of the first public LLMs, many expected it to be “fair and unbiased”. But what does fairness mean for a machine? If it is asked to solve controversial issues, it will try to balance between different views. But this is also a choice - it makes decisions not because it understands, but because it has been taught to do so. You can read more about this in our other publication “#8 The Paradox of Sensitivity and Censorship in Generative Models” [3].

If we want to create a truly independent, intelligent AI, will it be able to avoid biases? After all, if AGI thinks as inflexibly as we do, won't it become just as much of an obstacle to progress as our own limitations?

AGI as a mirror or as a mentor

Humans are not independent rational agents. Instead of deep analysis, we lean on heuristics and cognitive biases, simply reaching for familiar patterns. If artificial intelligence learns from human data, it adopts this trait as well. But what does this mean for AGI?

There are two possible ways of development:

  • An AGI mirror reproduces our thinking, improves our efficiency, but does not bring anything fundamentally new. This model will be a convenient tool that confirms our beliefs rather than challenges them.
  • An AGI mentor asks new questions that we would not have formulated on our own. It doesn't just provide answers, but helps us think better, go beyond preconceptions, and expand our horizons.

It would seem obvious that we want to create a “mentor”. But is it possible?

AGI already has one of the main features of human thinking - once formed, beliefs are difficult to change. LLM algorithms, like our brains, seek stability and resist radical change. This means that we can create a system that does not help us develop, but only reinforces our old ideas.

Let's imagine a world where AGI optimises information for our preferences, just like social media does. Instead of fostering intellectual development, it will only feed into existing beliefs, making society even more polarised.
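As a rough illustration (the scoring rule and the numbers are invented), the sketch below shows how a naive preference-optimising feed drifts this way: if items are ranked purely by closeness to what the user already liked, challenging material never reaches the top.

```python
# Toy feed ranking: each item carries a "viewpoint" score in [-1, 1];
# the user profile is just the average viewpoint of previously liked items.
# Ranking by closeness to that profile reinforces existing beliefs
# instead of challenging them.

liked_history = [0.8, 0.6, 0.9]                 # user liked strongly "+1-leaning" items
profile = sum(liked_history) / len(liked_history)

candidates = {
    "article_confirming_view": 0.85,
    "neutral_explainer": 0.0,
    "well_argued_counterpoint": -0.7,
}

def score(item_viewpoint, user_profile):
    # Higher score = closer to what the user already agrees with.
    return -abs(item_viewpoint - user_profile)

feed = sorted(candidates, key=lambda name: score(candidates[name], profile), reverse=True)
print(feed)  # ['article_confirming_view', 'neutral_explainer', 'well_argued_counterpoint']
```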

Therefore, the real future of AGI is not just in solving problems, but in forcing us to ask the right questions. If it does not help us change our thinking, it will become a trap rather than a saviour. The main question is whether we dare to create an intelligence that will argue with us...

Sources

  1. Albert Einstein https://www.goodreads.com/quotes/320600-we-can-not-solve-our-problems-with-the-same-level
  2. Humans as Heuristic Thinkers: A Multi-Disciplinary Analysis https://gist.github.com/ruvnet/f5d35a42823ded322116c48ea3bbbc92
  3. #8 The Paradox of Sensitivity and Censorship in Generative Models https://www.aup.com.ua/en/novostien/8-the-paradox-of-sensitivity-and-censorship-in-generating-models/
  4. Judgment under Uncertainty: Heuristics and Biases https://pubmed.ncbi.nlm.nih.gov/17835457/
  5. Who’s in Charge of Our Minds? The Interpreter https://fs.blog/michael-gazzaniga-the-interpreter/
  6. Daniel Kahneman, Thinking, Fast and Slow https://www.goodreads.com/quotes/1132839-confidence-is-a-feeling-which-reflects-the-coherence-of-the

 

#AUPtrends #LLM

"AUP-info" multimedia online media 
(identifier in the Register of Entities in the Media Sector: R40-00988).