The "meaning" of any word, concept, or object is different for different individuals, depending on the information (knowledge) about the word, concept, or object currently available to them. All meaning is "contextual," and the most important context is what is currently in the individual's mind. This includes the immediate external context: the word being heard or read is surrounded by text, both explicitly and implicitly, by those alternative words that could substitute for it with little change in meaning. In our information theory of mind, it is the past experiences reproduced (played back) by the Experience Recorder and Reproducer (ERR) that provide most of the meaningful context for a word or object. If the agent has had no past experiences that resemble the current experience in some way, the agent may find no meaning at all. The simplest case would be a new word, seen for the very first time. If the word is not isolated, the meanings of the familiar surrounding text will bring back their past uses clearly enough to allow the agent to guess the meaning of the new word in that context. In any case, this fresh experience with the word will be stored away along with that context for future reference.

The problem of the "Meaning of Meaning" has a rich history in the past century or two of analytic language philosophy. Gottfried Leibniz hoped for an ambiguity-free ideal language with exactly one term for each concept. It would reduce language to a kind of mathematics in which the meaning of complex combinations of terms could be worked out precisely. In the middle of the nineteenth century, John Stuart Mill tried to simplify proper nouns by insisting that they are just names for the things we are talking about in sentences or propositions. Nouns are subjects; predicates are the attributes of the subject.
Leibniz was an inspiration for Bertrand Russell, whose logical atomism imagined "logical atoms" of meaning that could combine, following strict rules, to form complex concepts. But Russell and the great logician Gottlob Frege tangled over exactly how words describe, denote, or refer to concepts and objects. Is an absolute meaning to be found in the dictionary definitions of how a word refers to an object, independent of the intentions of a speaker or the inferences of the hearer? Frege distinguished between the straight reference of a word and what he called its "sense." Why does the statement "Aristotle is the author of De Anima" carry more information than the identity statement "Aristotle is Aristotle"? Our information theory of meaning finds the answer in the reader's past experience (or none) of De Anima. Russell's young collaborator in early logical atomism, Ludwig Wittgenstein, broke with Russell and insisted that meaning depends on the use to which a word is being put. There is no objective, independent meaning for a word as the object it "stands for." This relativism became more extreme when Jacques Derrida showed how the meaning of a word can be deferred and disseminated by the words following it in time.
Charles Sanders Peirce and the great linguist and inventor of structuralism, Ferdinand de Saussure, had accepted straightforward connections, like Peirce's triad object-percept-concept and Saussure's dual signifier/signified (s/S) for a symbol and its object. These were captured in the book "The Meaning of Meaning" by Ogden and Richards as their "semantic triangle": symbol (word), reference (thought/concept), and referent (object).

Willard van Orman Quine thought he could escape ambiguities in meaning. In his book "Word and Object" he urged "naturalizing" epistemology by focusing on the empirical connections made by speakers when they say what they mean. Favoring extensionality over intentionality (or intensionality, as he preferred), he said to look at how a speaker of another language says what a word means, or how a baby learns the meaning of new words, by a process of behavioral conditioning and ostension (pointing at things). He said one may not be a behaviorist in psychology, but one cannot avoid being a behaviorist in linguistics.

But post-moderns like Derrida and Roland Barthes showed that fundamental ambiguities of language cannot be removed, that the dictionary definitions summarizing past uses in a community of discourse only trap meaning in a "circle of signifiers" without a referent object (s/Z). New uses are always being created, a consequence of our theory of humans as "co-creators" of our universe. Are we then living in a Humpty Dumpty world of "When I use a word, it means just what I choose it to mean - neither more nor less"? H. P. Grice insisted that the intentions of the "utterer" carry the meaning. Or do we need to consider the "reader response" to any text? In Claude Shannon's theory of the communication of information, the emphasis is on the new information at the receiver carried in the message from the sender. But Shannon never claimed the meaning was carried in the message itself.
It depends on the information in the message, but only in the context of the information in the receiver's experience recorder and reproducer (ERR).
Information in an Object or Concept

The information content in a material object is independent of any human. In particular, it is independent of human language, the words, names, or descriptions that humans use to communicate about the object. This is because most language consists of symbols or signs that have an arbitrary relation to the object or concept being signified, as shown by the semiotics of Charles Sanders Peirce and the semiology of Ferdinand de Saussure. But both Saussure and Peirce recognized that some natural signs are not arbitrary, because they contain some information that is isomorphic with some of the information in the object or concept. These especially include icons, abstract pictures of some aspect of the object. Information philosophy generalizes this insight well beyond the two-dimensional images or "pictures" that Ludwig Wittgenstein briefly considered as a theory of meaning, atomic facts that could be shown, if not said; for example, Tractatus 2.16, "In order to be a picture a fact must have something in common with what it pictures." Information philosophy identifies the total information in a material object with the yes/no answers to all the questions that can be asked, or with the true/false statements that can be said, about the object. In modern information theory terms, information philosophy "digitizes" the object. From each answer or truth value we can, in principle, derive a "bit" of information. While "total" information is hopelessly impractical to measure precisely, whatever information we can "abstract" from a "concrete" object gives us a remarkably simple answer to one of the deepest problems in metaphysics: the existential status of ideas, of Platonic "Forms," including the entities of logic and mathematics.
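The "digitizing" idea can be made concrete with Shannon's measure: each independent yes/no answer about an object contributes at most one bit, and the average information in an answer depends on how predictable it was. The following is only an illustrative sketch; the object and its questions are hypothetical examples, not part of any formal apparatus of information philosophy.

```python
import math

# A hypothetical object described by answers to yes/no questions.
# Each answer is one binary digit of the object's abstract information.
answers = {
    "is it solid?": True,
    "is it heavier than 1 kg?": False,
    "is it man-made?": True,
}

# Each independent yes/no answer carries at most one bit.
bits_upper_bound = len(answers)

def entropy_bits(p):
    """Shannon entropy of a yes/no question answered 'yes' with
    probability p: H(p) = -p*log2(p) - (1-p)*log2(1-p) bits."""
    if p in (0.0, 1.0):
        return 0.0  # a foregone conclusion carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(bits_upper_bound)    # 3 answers -> at most 3 bits
print(entropy_bits(0.5))   # a 50/50 question is maximally informative: 1.0
print(entropy_bits(0.99))  # a near-certain question tells us very little
```

The upper bound is reached only when every question is independent and maximally uncertain; real questions about real objects overlap, which is one reason "total" information is impractical to measure.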
Information in Minds

The information theory of meaning starts with the information philosophy model of the mind, which asserts that the mind is the abstract information being processed by the brain, a material information structure that works as a biological information processor. The meaning in a message incoming to the mind, which could be just a perception of sensations from the environment and not necessarily from another human being with intentions, is found in the past experiences of the agent that are brought to mind by the Experience Recorder and Reproducer (ERR) based on the content of the message. This nicely captures the subjectivism or relativism of meaning. It also gets close to answering Thomas Nagel's provocative question "What Is It Like to Be a Bat?" The past experiences reproduced, complete with their feelings, depend on what has been recorded and what can be reproduced (played back). A frog cannot play back the experience of concave objects flying by, because the frog's eye has filtered them out before they reach the brain and its experience recorder.
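The ERR's role in meaning can be caricatured as a content-addressable memory: experiences are recorded with their contextual features and feelings, and a new stimulus plays back whichever past experiences share features with it. This is only a toy sketch under that assumption; the class, its method names, and the feature/feeling representation are all hypothetical illustrations, not a specification of the ERR.

```python
class ExperienceRecorder:
    """Toy content-addressable memory caricaturing the ERR."""

    def __init__(self):
        self.experiences = []  # list of (feature set, feeling) pairs

    def record(self, features, feeling):
        # Store an experience together with its context.
        self.experiences.append((set(features), feeling))

    def reproduce(self, stimulus):
        """Play back feelings of past experiences sharing any feature
        with the stimulus, most-overlapping first -- a crude stand-in
        for the 'meaning' the stimulus evokes."""
        stimulus = set(stimulus)
        hits = [(len(feats & stimulus), feeling)
                for feats, feeling in self.experiences
                if feats & stimulus]
        hits.sort(key=lambda h: -h[0])  # stable sort keeps recording order on ties
        return [feeling for _, feeling in hits]

err = ExperienceRecorder()
err.record(["dog", "bark", "park"], "joy")
err.record(["dog", "bite"], "fear")
err.record(["rain", "park"], "calm")

# A dog in the park evokes all three memories, closest match first;
# a never-experienced stimulus evokes nothing -- no meaning at all.
print(err.reproduce(["dog", "park"]))  # ['joy', 'fear', 'calm']
print(err.reproduce(["cat"]))          # []
```

The empty result for an unfamiliar stimulus mirrors the new-word case above: with no resembling past experiences recorded, playback produces no meaning, and the fresh experience must first be recorded with its context.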