“The biolinguistic perspective views a person’s language in all of its aspects – sound, meaning, structure — as a state of some component of the mind, understanding “mind” in the sense of 18th century scientists who recognized that after Newton’s demolition of the “mechanical philosophy,” based on the intuitive concept of a material world, no coherent mind-body problem remains, and we can only regard aspects of the world “termed mental,” as the result of “such an organical structure as that of the brain,” as chemist-philosopher Joseph Priestley observed. Thought is a “little agitation of the brain,” David Hume remarked; and as Darwin commented a century later, there is no reason why “thought, being a secretion of the brain,” should be considered “more wonderful than gravity, a property of matter.” By then, the more tempered view of the goals of science that Newton introduced had become scientific common sense: Newton’s reluctant conclusion that we must be satisfied with the fact that universal gravity exists, even if we cannot explain it in terms of the self-evident “mechanical philosophy.” As many commentators have observed, this intellectual move “set forth a new view of science” in which the goal is “not to seek ultimate explanations” but to find the best theoretical account we can of the phenomena of experience and experiment (I. Bernard Cohen).”
=== Noam Chomsky, Biolinguistics and the Human Capacity ===
Thomas Kuhn (Kuhn, 1970) argued, in a spirit not unlike that of the Chomsky-Foucault debate, that scientific progress exhibits a jagged pattern: a plateau, the status quo, that gets built upon until our understanding of an issue finally peaks and undergoes a paradigm shift, a wholesale change in the perspectives, methodologies and programmatic approaches brought to the research question at hand. In the natural sciences one can think of Isaac Newton’s invention of the calculus, or his abandonment of the mechanical philosophy of the universe, as examples that fundamentally changed the very heart of physics. Materialism itself, the soul of the natural sciences, would face an identity crisis of sorts with Schrödinger’s work, while Darwin’s advent caused as major a paradigm shift in our understanding of natural life as any, and probably bigger than most, in the entire history of science. For cognitive science, sitting at the interesting intersection of computational science, evolutionary biology and the neural sciences, the identity crisis has been a long-standing one, particularly given its subject – cognition as a biological phenomenon. On the one hand, the question of what makes humans unique has a long history of debate in the philosophy and epistemology of knowledge, dating back to the ancient Greeks; on the other, the 20th-century (and later) philosophy and epistemology of the human mind was torn between the abstract, disembodied nature of its subject matter and the materialism that is a prerequisite of the natural sciences. While neither the abstract nature of cognition (and of other natural entities like the natural numbers) nor the materialist commitments of the natural sciences were without supporting empirical evidence, Darwin’s advent meant that the classical dualist approach, with an intractable split between the physical and the mental, could no longer be sustained.
The radical behaviourism of the 1950s attempted to provide a material grounding for cognition, but the inability of Skinner’s approach to account for creative behaviour pointed to the need to go beyond inductive learning in explaining cognition and behaviour in humans (and elsewhere). The task at hand was not so much to categorize witnessed behaviour and then look for post-hoc stimuli to which that behaviour could be a response, as to find a causally explanatory theory of what drives spontaneous behaviour in the face of limited experience.
So, while one could indeed no longer presume that human cognition and consciousness were something mystical, separate from and opaque to basic biology, linking cognition to basic biology has since proved to be one of the hardest challenges for science. Chomsky’s famous review of Skinner (Chomsky, 1959) helped refocus a research program that had been asking the same questions for thousands of years – “How is it that humans are able to achieve such magnificent feats as writing poetry, composing symphonies, solving the equations of physics, and pondering the nature of the Universe and their own place in it?” In short, Chomsky (Chomsky, 1965, 1975, 2002, 2005; Hauser, Chomsky, & Fitch, 2002) flipped the program on its head and argued that since humans, as biological organisms, display much the same organic constituency as other animals while attaining far more complex abilities, the focus should rather be on what the organisms themselves (i.e. innate human nature) contribute towards their abilities and activities.
Obviously, a study of what an organism has evolved with involves a study of its evolutionary history and its genetic make-up (Dawkins, 2006; Lewontin, 1974). But that would only provide us with a theory of what kind of hardware system is available, so to speak. In other words, comparing a conscious organism to a functioning computer, the organism’s evolutionary history lets us peek into the engineering choices made, while its genetic make-up gives us a theory of how a complex system (the organism) is put together. But the real problem concerns the question of how much an organism can achieve with what it is born with. Humans, in spite of their organic make-up being so similar to that of other organisms, are capable of things no other species on Earth is (Chomsky, 2007). One of the hallmarks of our species is the notion of Language in the abstract – i.e. not individual languages so much as the ability to structure thought and concepts in a manner that allows for open-ended possibility in both the quantity and the scope of things we can contemplate and communicate. Any language can be used to learn mathematics and write poetry. No language imposes a limit on how much you can say or write with it, even though how much we actually hear or say or write over our entire lifetimes is rather narrowly limited. Yet any baby can not only pick up any language on mere casual exposure (something rather hard to do beyond a critical period of development), but once picked up, the language is deployed every moment in novel ways that have not been encountered before. We say and do things that we have never said/heard and/or done/seen done before. How does a child attain this ability, and why is it that other animal communication systems do not support such abilities? The answers to such questions involve (a) understanding what the architecture of Human Language is (i.e. what rules and principles govern it, where they came from, and why they are the way they are), and (b) how the human brain comes to support such an architecture, and why it is the only one that can.
We try to hunt for answers to these questions using two parallel lines of inquiry. A more formal and mathematical line of inquiry is useful for looking into the (often hidden) architecture of Language as a computational system. This requires some unpacking – what is hidden, and why ‘computational’? Languages perform the ridiculously complicated task of encoding, transferring and decoding an infinite array of information! Yet the rules of grammar are finite, or they could never be learned. What kind of architecture allows for infinite productivity from finite primitives? How do the principles of such an architecture facilitate that productivity? Where else in the universe do we see such productivity (e.g. the decimal system can produce infinitely many numbers with just ten discrete digits, 0–9!), and to what extent does Language reflect architectural constraints similar to those of such other systems?
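The digit example above can be made concrete with a few lines of code (a toy illustration of our own, not part of any cited work): with a fixed inventory of ten symbols, the number of distinct numerals of a given length grows without bound as the length grows, so the finite inventory supports unlimited outputs.

```python
# Toy illustration: a finite symbol inventory (ten digits) yields
# unboundedly many distinct outputs (numerals) as length increases.
DIGITS = "0123456789"

def n_digit_numerals(n):
    """Count distinct numerals of length n (no leading zero):
    9 choices for the first position, 10 for each of the rest."""
    return (len(DIGITS) - 1) * len(DIGITS) ** (n - 1)

for n in (1, 2, 3, 6):
    print(n, n_digit_numerals(n))  # 9, 90, 900, 900000 — no upper limit
```

The point is only that productivity here comes from combination, not from the size of the inventory: the same ten primitives recur at every position.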
Systems that are productive in a way that allows them to create more from less often do so by transforming one state of affairs into another. This allows for open-ended productivity without violating a basic principle of physics – you can neither create nor destroy stuff. But you can certainly process, package and repackage information! Think of Lego blocks... you could easily rearrange the same pieces to make something unique, and do so more than once. A better analogy is the integer system! Ten symbols (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) can, and do, support an infinite range of numbers and the many mathematical operations that underlie the universe. Such systems are computational in that they combine and recombine information to create endless new outputs. Computational processes also obey certain principles of nature that allow inferences, deductions and other logical operations to hold, enabling efficient transfer and processing of information, and they are truth-preserving. Language, like the number system, behaves similarly and often displays comparable operational patterns (e.g. adherence to Fibonacci sequences in phrase structure, cf. Carnie, Medeiros, & Boeckx, 2005; Uriagereka, 2000), and hence displays all the properties of computation. Yet many of these operations are unknown to the lay person, and in fact are debated only by experts. They are hidden patterns, almost like the reverse side of a tapestry that no one ever looks at directly, but without which the beautiful patterns we do like to look at would not be possible!
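The combine-and-recombine idea can be sketched as a toy grammar (our own minimal illustration, not a fragment of any real linguistic analysis – the rules and words are invented for the example): a finite rule set with one recursive rule generates more and more distinct sentences at every depth of embedding, which is exactly the “more from less” property described above.

```python
# Toy recursive grammar: finite rules, unbounded output.
# "S" (sentence) embeds inside "VP" (verb phrase), so each extra level
# of embedding multiplies the number of derivable sentences.
from itertools import product

RULES = {
    "S":  [("NP", "VP")],
    "NP": [("the cat",), ("the dog",)],
    "VP": [("sleeps",), ("thinks", "that", "S")],  # recursion: VP re-embeds S
}

def expand(symbol, depth):
    """All strings derivable from `symbol` with at most `depth` embeddings
    of S; the bound exists only so the demo terminates."""
    if symbol not in RULES:   # a terminal word
        return [symbol]
    if depth < 0:             # no embedding budget left
        return []
    results = []
    for production in RULES[symbol]:
        # spend one unit of depth each time S is re-introduced
        sub = [expand(s, depth - 1 if s == "S" else depth) for s in production]
        for combo in product(*sub):
            results.append(" ".join(combo))
    return results

print(len(expand("S", 0)))  # 2 sentences with no embedding
print(len(expand("S", 1)))  # 6 — each extra depth multiplies the options
```

Since nothing but the depth bound stops the recursion, the same three rules license sentences of any length – a miniature of the finite-rules/infinite-productivity point, in the spirit of the generative grammars of Chomsky (1965, 2002).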
Our brains are uniquely capable of this type of computation, and we use it not just to speak or communicate, but to do algebra, write symphonies and think about our place in the Universe! The brain is the hardware that runs a very special software, and it creates the infinite possibilities that have made us the dominant species on this planet. Both deserve and demand a proper understanding in the finest possible detail, and a formal inquiry into the algorithm of this software is necessary both to understand what kind of hardware could run it, and to check whether the kind of brain-software we propose can really be run by the hardware we are stuck with (inside our heads)!
A related line of inquiry, informed by formal methods but often involving sophisticated neuro-imaging, eye-tracking and other instrumental methods, is useful in studying how the brain enables and supports Language. Children learn to use it within a handful of months, beginning to babble by themselves as they approach the end of the first year of their lives. Moreover, while adults are often more able in their use of a language once picked up, they are remarkably bad compared to babies at picking up new ones! How do babies manage to pick up the rules of Language, and produce only rule-abiding sentences and sounds, before they even have a notion of rules or languages or what they mean? Chomsky, along with legendary biologists and geneticists – the Nobel laureates Luria and Jacob, as well as Pollock (Wexler, 2013) – has argued that much of the restriction on the architecture of Language is hardwired into the human genome. Babies are born with an expectation of what rules an information-encoding system must adhere to if it is to be productive in this sense, and any (and all) systems that meet the criteria are readily picked up by the developing brain. In other words, children do not learn rules at all; they merely confirm them. Or rather, their brains do, without the children consciously knowing what they are doing while they are doing it! But what is special about human neonate brains? What changes after the critical period to limit such abilities so drastically in adults? Is there a continuity between adult and neonate brains? Are there specific parts of the brain dedicated to specific types of linguistic operations? Do they show a specialization that reflects the super-specialized (linguistic) system they work with?
Taken together, these two lines of inquiry dovetail into a concentrated effort to understand both the biology of the uniquely human mind/brain that grants us an ability without parallel in the known biological world, and the intricacies of that ability itself. What constitutes an impossible language? What boundaries of human-ness would it violate? That, in a nutshell, is our primary concern. In simpler words, perhaps, it can be said that computationalist biolinguistics studies the talking brain in order to better understand the echoes of Language and Mind.
Toys… err, Tools!!!
Carnie, A., Medeiros, D., & Boeckx, C. (2005). Some consequences of natural law in syntactic structure. Ms., University of Arizona and Harvard University.
Chomsky, N. (1959). A review of BF Skinner’s Verbal Behavior. Language, 35(1), 26–58.
Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge: M.I.T. Press.
Chomsky, N. (1975). The logical structure of linguistic theory. New York: Plenum Press.
Chomsky, N. (2002). Syntactic structures. Berlin; New York: Mouton de Gruyter.
Chomsky, N. (2005). Three Factors in Language Design. Linguistic Inquiry, 36(1), 1–22. https://doi.org/10.1162/0024389052993655
Chomsky, N. (2007). Biolinguistic explorations: Design, development, evolution. International Journal of Philosophical Studies, 15(1), 1–21.
Dawkins, R. (2006). The selfish gene (30th anniversary ed.). Oxford, UK: Oxford University Press. (Original work published 1976).
Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science, 298(5598), 1569–1579.
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed., enlarged). Chicago: University of Chicago Press.
Lewontin, R. C. (1974). The genetic basis of evolutionary change. New York: Columbia University Press.
Uriagereka, J. (2000). Rhyme and Reason: An Introduction to Minimalist Syntax. MIT Press.
Wexler, K. (2013, February). Luria’s biolinguistic suggestion and the growth of language. https://doi.org/10.1017/CBO9780511980435.009