LLM and meaning

I have a lot of trouble when I read, everywhere, that Large Language Models capture 'meaning.' The origin of my problem is probably terminological: for me, meaning cannot be separated from the person who interprets the language. And not only persons; I would include aliens, computers, or any entity that interprets symbols the way humans do.

But lately, with the appearance of LLMs, I'm forced to look at this in other ways, and none of them convince me. What is this big snapshot of language? Can I see it as just another Markov chain? Is this another symbol muncher, or something completely different?
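To make the Markov chain comparison concrete, here is a minimal sketch in plain Python (an n-gram toy of my own choosing, not anything taken from an actual LLM): it records which symbols follow which, and samples from those counts. Nothing in it interprets anything; it only pushes representations around.

```python
import random
from collections import defaultdict

def train_markov(words, order=1):
    """Count which word follows each context of `order` words."""
    transitions = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        transitions[context].append(words[i + order])
    return transitions

def generate(transitions, start, length=20):
    """Sample a sequence by repeatedly picking a plausible next symbol."""
    context = start
    output = list(context)
    for _ in range(length):
        followers = transitions.get(context)
        if not followers:
            break
        output.append(random.choice(followers))
        context = tuple(output[-len(context):])
    return " ".join(output)

# Toy corpus: the chain only ever sees symbols and their co-occurrence counts.
corpus = "the cat sat on the mat the dog sat on the rug".split()
model = train_markov(corpus, order=1)
print(generate(model, start=("the",)))
```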

Still, I don't believe that what it captures is meaning; I can only think about it in terms of representations.

Representations

Long before computers, people communicated through speech, writing, artistic depictions, etc. So many representations. Then the WWW started, and people could share these ubiquitously through URLs.

With the World Wide Web in place we can share more representations than ever.

Does our ability to communicate improve? Each person brings their own experiences, biases, and cultural references to the conversation. The traditional gap between what one person interprets and what the other interprets is still there.

More than knowing whether an LLM captures meaning, I'm interested in learning whether LLMs can shed some light on the following question:

When two persons communicate through a medium, how can they diminish the gap between interpretations?

Exploring that question looks like a big challenge for anyone interested in semantics or in preserving meaning. How do we figure out better ways for people to really communicate?

Image of a computer, in 1892


Not long ago, a computer was a person; then it was a tool. What is it now?