Daniel B. answered 07/25/25
A retired computer professional teaching math and physics
I would like to offer an opposing view to Alexander's.
The short answer to your question is:
"Artificial intelligence could understand human values as well as humans understand
their own values, but in practice it never will, because doing so would be
prohibitively expensive."
Alexander brings up the Chinese room thought experiment; you can read about it, and the counter-arguments to it, here:
https://en.wikipedia.org/wiki/Chinese_room
My interpretation of the Chinese room experiment is that the person in the room
can find relations between Chinese writings and material or abstract objects,
but he can never understand the true meaning of the characters.
That is true, but it does not point to any shortcoming of mere syntactic manipulation of symbols,
only to the limitation of his access to richer context.
If a person who speaks only English is locked in the room,
he might figure out that the Chinese people outside the room use the two characters "你" and "您"
to refer to him.
But it would be impossible for him to figure out whether one of them means "you",
whether one of them is his Chinese name, or whether they mean something else entirely.
I would argue that he would not understand their meaning even if somebody translated or
explained them to him in English.
The reason is that those two characters have no English-language equivalent,
nor does the distinction they express exist in English-speaking cultures.
The only way to understand their meaning is by observing their use in the Chinese culture.
My argument is that "understanding" a word or concept means knowing how to use it in context.
Exhibit A: Children learn the meaning of words from nothing more than observing how they are used in context.
Exhibit B: The easiest way to explain the meaning of a word to someone is to give him examples
of how it is used in context.
Exhibit C: Understanding a mathematical theorem does not mean being able to recite its wording,
but being able to apply it in the context of proofs.
Exhibit D: We measure the degree of understanding a word by the variety of contexts in which
the word can be applied.
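These exhibits amount to a distributional view of meaning. As a toy illustration (my own sketch, not part of the original argument), we can represent each word by the set of contexts it appears in, and measure similarity of use by how much those context sets overlap:

```python
from collections import defaultdict

# Toy corpus; each sentence is a list of words.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the mat".split(),
    "a cat drinks milk".split(),
    "a dog drinks milk".split(),
    "he sat on a chair".split(),
]

# Represent each word by the set of its immediate neighbors.
contexts = defaultdict(set)
for sentence in corpus:
    for i, word in enumerate(sentence):
        if i > 0:
            contexts[word].add(sentence[i - 1])
        if i < len(sentence) - 1:
            contexts[word].add(sentence[i + 1])

def overlap(a, b):
    """Jaccard overlap of two words' context sets: a crude proxy
    for 'being usable in the same contexts'."""
    ca, cb = contexts[a], contexts[b]
    return len(ca & cb) / len(ca | cb)

# 'cat' and 'dog' occur in identical contexts in this tiny corpus,
# so their overlap is 1.0; 'cat' and 'on' overlap less.
print(overlap("cat", "dog"))  # 1.0
print(overlap("cat", "on"))   # 0.75
```

This is the same intuition that underlies distributional word embeddings: words that can be swapped into the same contexts end up close together.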
Assuming I have convinced you that "understanding" means "knowing how to apply in context",
let me point out that Large Language Models essentially just keep applying words in context.
Thus, computers are capable of understanding concepts the same way as people
(who also know nothing more than how to apply concepts in context).
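The core loop of "applying words in context" can be caricatured with a bigram model (a deliberately tiny sketch of my own, vastly simpler than a real LLM): learn which words follow which, then generate by repeatedly picking a plausible continuation of the current context.

```python
import random
from collections import Counter, defaultdict

# Tiny training corpus.
text = "the cat sat on the mat and the dog sat on the rug".split()

# The whole 'model' is a table of observed contexts:
# for each word, how often each other word followed it.
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def next_word(prev, rng):
    """Pick the next word in proportion to how often it
    followed `prev` in the training text."""
    counter = follows[prev]
    return rng.choices(list(counter), weights=list(counter.values()))[0]

# Generate by repeatedly applying words in context.
rng = random.Random(0)
word, out = "the", ["the"]
for _ in range(8):
    if not follows[word]:  # dead end: no observed continuation
        break
    word = next_word(word, rng)
    out.append(word)
print(" ".join(out))
```

Every sentence it produces is built solely from observed usage; an LLM does the same thing with a far richer notion of context and billions of parameters instead of a lookup table.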
Or equivalently, our brains are just computers.
But they are much more efficient than the electronic versions.
With the roughly 20 W our brains consume, we accomplish far more than computers can accomplish
with megawatts.
That is why I say that computers could in principle reach the same level of understanding
as people, but getting there would be prohibitively expensive.