The Basics of Language
We use language almost constantly. Whether you are speaking with a friend, writing an email, or reading a novel, you are employing language in one way or another. Although most people have a firm grasp of language, it is actually a highly complex system, one that has baffled many of the greatest thinkers. That complexity is perhaps one reason so many computer systems fail to speak in our place, to correct our grammar, or to translate our words into foreign languages.
To begin with, language is considered part of semiotics, a fancy word for systems of communication. Semiotic systems rely on signs and symbols, such as words, to convey meaning. One of the simplest semiotic systems is a traffic light, which is why linguists frequently use it as a starting point.
A traffic light is a system that uses three colors to communicate meaning, and it is widely understood by the general public. Red means stop, yellow means yield, and green means go. These colors are largely arbitrary: there is no natural connection between a color and its meaning, so you could easily substitute purple for red or blue for green, as long as everyone understood the changes.
In addition to their arbitrary nature, these lights are also differential. In other words, you can tell them apart. If there were three red lights, communication would come to a halt because you couldn't distinguish between them. So, in a sense, stop means stop because it does not mean go. Red is red, in part, because it is not green.
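Both properties can be made concrete in a few lines of code. The Python sketch below is purely illustrative (the mappings and function names are invented, not a real API): a semiotic system is modeled as nothing more than a shared mapping from signs to meanings.

```python
# A semiotic system modeled as a shared mapping from signs to meanings.
# All names and values here are invented for illustration.

standard = {"red": "stop", "yellow": "yield", "green": "go"}

# Arbitrariness: different signs work just as well, provided every
# participant shares the same mapping.
alternate = {"purple": "stop", "yellow": "yield", "blue": "go"}

def interpret(sign, system):
    """Look up what a sign means within a given system."""
    return system.get(sign, "unintelligible")

print(interpret("red", standard))      # -> stop
print(interpret("purple", alternate))  # -> stop

# Differentiality: signs must be distinguishable from one another.
# If all three lights were red, the system collapses to a single sign
# and can no longer carry three meanings (duplicate keys merge).
collapsed = {"red": "stop", "red": "stop", "red": "stop"}
print(len(collapsed))  # -> 1
```

Swapping the key names leaves the system working, which is the arbitrariness; collapsing distinct keys into one destroys it, which is the differentiality.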
Language functions in a similar manner. These ideas are often attributed to Ferdinand de Saussure, though many of them date back at least to the seventeenth century in Western philosophy. In "An Essay Concerning Human Understanding," John Locke asserts that there is a dual system of signification: the signified (a concept) and the signifier (a word). If I have a concept or picture of a tree in my head, I use the letters "t-r-e-e" to express that concept.
Three Basic Ways to Think of Language
Though linguists have developed and discovered many categories and aspects of language, there are three that are worth noting when talking about AutoCorrect and translation tools. These include syntax, semantics, and pragmatics.
Syntax. This is the bare bones of language: the arrangement of words and phrases, grammar, and other structural components. Without proper syntax, readers and listeners will be utterly confused.
Semantics. This is the meaning or definition of words. For instance, a chair is defined as an individual seat. But the same word can also mean the head of a department or organization, like the chair of a committee.
In his 1957 book Syntactic Structures, Noam Chomsky uses the following sentence to show that syntax and semantics are independent: "Colorless green ideas sleep furiously." Syntactically, or grammatically, the sentence is perfectly well formed; it is nonsense nonetheless, because it is semantically unsound.
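This split between form and meaning can be illustrated with a toy checker. In the Python sketch below, the part-of-speech lexicon, the grammar rule, and the single semantic constraint are all invented for the example; real parsers and semantic models are vastly more sophisticated.

```python
import re

# Toy part-of-speech lexicon, invented for this sketch.
lexicon = {
    "colorless": "ADJ", "green": "ADJ", "ideas": "NOUN",
    "sleep": "VERB", "furiously": "ADV",
}

def is_syntactic(words):
    """Crude grammar: optional adjectives, a noun, a verb, an optional adverb."""
    tags = " ".join(lexicon[w] for w in words)
    return re.fullmatch(r"(ADJ )*NOUN VERB( ADV)?", tags) is not None

def is_semantic(words):
    """One made-up constraint: nothing can be both colorless and green."""
    return not ("colorless" in words and "green" in words)

sentence = "colorless green ideas sleep furiously".split()
print(is_syntactic(sentence))  # -> True: grammatically well formed
print(is_semantic(sentence))   # -> False: semantically unsound
```

The sentence sails through the grammar check and fails the meaning check, which is precisely Chomsky's point.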
Pragmatics. This is all about context. For example, assume you are waiting for an important package in the mail, and your spouse knows this. You ask your spouse, "What time is it?" They may respond by saying, "The mail hasn't come yet." This doesn't answer your question literally, but pragmatically it works: your spouse infers what you really want to know (linguists call this kind of indirect answer a conversational implicature).
In an influential essay on language and literature titled "Discourse in Life and Discourse in Art," Mikhail Bakhtin argues that language carries a social component. Words only make sense if other people use the same words, and communication is a social event between two or more people. In short, there are "extraverbal" components to speech and writing that must be considered. Bakhtin argues that "verbal discourse is a social event," an idea that applies to literature and scientific discourse, as well as everyday speech. Language is an event of exchange, and it's important to understand the context of such an event in order to grasp meaning.
What Does This Have to Do With AutoCorrect?
If language relies heavily on social meaning and on the context in which it is uttered, confusion can arise very easily. Software that translates too literally or fails to correct language usually lacks the complexity to grasp social understanding, which is itself constantly in flux.
Rhetorically speaking, every statement is both static and dynamic. It is static in the sense that it depends on a specific context: the speaker, the audience, the environment, the topic, and so on. It is dynamic in the sense that it can change over time, taking on new meaning and losing old meaning. In literature, for example, a "dead metaphor" is a phrase whose original imagery has faded even though it remains widely understood (e.g., "falling in love" or "the leg of a table"). Language changes in leaps and bounds, making it nearly impossible for some computers to keep up.
Can Computers Keep Up?
Some scholars believe that computers will never reach the mental capacity of human beings; this isn't necessarily true, at least when it comes to language. AutoCorrect and translation tools that fail to capture meaning are really just simple software programs. In theory, a complex computer system that mirrored the human mind could keep up with social understanding and linguistic cues. That is easier said than done, however.
For now, the key to successful language software is imitation: how well can a machine act as though it understands what is happening? This is especially difficult given factors like regional dialects, cultural background, race, religion, and countless other variables.
Language and Computers
The Turing Test, a thought experiment developed by Alan Turing, actually relies on a language game to distinguish humans from computers. Turing asks: if a computer can think and communicate like a human behind closed doors, is there really a difference?
A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.
The premise of the Turing Test is this:
Imagine you are in a room with two doors. Behind one door is a human, and behind the other is a computer. You can only communicate with each via slips of paper. Now you must determine which is the human. For Turing, if a computer is complex enough to seem like a human, then there is little difference between the two. This is sometimes called a "Black Box" theory of the mind.
Ever played around with Cleverbot? This feisty computer can simulate human conversation to a degree, leaving many to question the parameters for artificial intelligence (AI). Despite the simulation of communication, Bakhtin would argue there isn't really a linguistic exchange taking place when a computer talks back, an idea expanded by John Searle.
The Chinese Room Experiment
Searle distinguishes between strong AI and weak AI. Strong AI is the claim that a suitably complex computer would genuinely understand, making it indistinguishable from a human mind; weak AI is the more modest claim that computers can merely imitate human action and communication. To challenge strong AI, Searle developed the Chinese Room thought experiment.
Here's how it goes:
Imagine you are in a sealed room with a single slot to the outside. You are given a set of manuals written in Chinese—a language that is completely foreign to you. Basically, the manuals say: If A, then reply B. Now imagine someone slips paper through the slot, a paper covered with Chinese symbols.
Now you must take these symbols, look up a reply in your manual, and send back a slip with the proper response. To the Chinese speakers outside the room, it seems as though you understand Chinese. In reality, you are simply mimicking communication. Throughout the whole exchange, the semantics were missing; you still don't understand Chinese, despite your ability to produce a suitable response.
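Computationally, the manual's "If A, then reply B" structure is nothing more than a lookup table. Here is a minimal Python sketch, with made-up symbol pairs:

```python
# The Chinese Room "manual" as a lookup table: replies are produced by
# pattern matching alone, with no grasp of what the symbols mean.
# The entries below are invented for illustration.

manual = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "Fine, thanks."
    "今天天气如何？": "今天天气很好。",  # "How's the weather?" -> "It's nice."
}

def room_reply(symbols):
    """Return the manual's reply, or None if no rule matches,
    just as the person in the room would be stuck without a rule."""
    return manual.get(symbols)

print(room_reply("你好吗？"))  # a fluent-looking reply, zero understanding
```

From the outside the replies look fluent; inside, only string matching has taken place. That gap between producing the right symbols and understanding them is exactly Searle's point.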
This is what happens in a computer, Searle would say, because it always follows programming. There is no understanding, and therefore no communication. As Bakhtin argues, language is actually a social event; ergo, a computer can merely imitate the process.
Most computer systems, like AutoCorrect or translation software, are not complex enough to use pragmatics or semantics. Because language is highly dependent on these functions, many computer systems fail to capture our intended meaning. Even if a computer can manage to translate well or correct your grammar, it is controversial to claim that language and communication are really taking place.
© 2016 Sebastian A Williams