1     A first look at universals

Ricardo Mairal and Juana Gil




Grammatica una et eadem est secundum substantiam in omnibus linguis, licet accidentaliter varietur. [“Grammar is one and the same in all languages in substance, though it may vary accidentally.”]     Roger Bacon


1     The debate on language universals

1.1     Introduction

For the last several decades we have been living in what has been called, for better or for worse, the postmodern era: a cultural movement, or climate of social sensitivity, which, in contrast to the traditional values of the rationalistic, globalizing version of Modernism inherited from the Enlightenment, defends ideological positions based on heterogeneity, dispersion, and difference. In recent years, contingency and individuality have gradually taken precedence over permanence and universality. As Harvey (1989) so accurately states, the views most highly valued in the postmodern world are generally those that concede greater importance to particularism and fragmentation, focus on the individual nature and interest of the parts rather than the whole, and are ultimately conducive to the disarticulation or deconstruction of all human sociocultural and economic activities. In the same way that moral values and instruction are not thought to be universally applicable, many well-known scholars of this era, even in the realm of science – especially the social sciences (e.g. the work of Lyotard) and, to a lesser extent, physics and mathematics (in line with Spengler) – affirm that there are no general principles that can be objectively evaluated independently of the spatiotemporal context in which they were initially proposed.

   Given the present state of affairs, all research on language universals (i.e. properties shared by all languages) may now seem almost paradoxical, to say the least, and it is hardly accidental that enthusiasm for the analysis of linguistic variation in all of its manifestations has increased. Yet the quest to discover what is invariable and what is shared still persists, as do its results, because, while certain scholars fervently defend individual truth, many others, just as prestigious in their respective fields, strive to find proof of universal reason in all areas of knowledge, including language.


The authors would like to thank Ignacio Bosque, José María Brucart, Violeta Demonte, and Carlos Piera for their useful suggestions regarding the first draft of this chapter, which were invaluable for the final version. Of course, any errors or oversights still remaining in the text are our responsibility. This chapter was translated into English by Pamela Faber of the University of Granada, Spain.

   As is well known, the dialectical tension between these two positions is not a recent state of affairs. For several centuries, particularly in the area of philosophy, the same questions have repeatedly surfaced in relation to the possible existence of universal entities: which properties, relations, functions, numbers, classes, and so on can be considered universal, and, supposing that universals actually do exist, what the exact relation is between these abstract universal entities and the “particular” entities that embody them.

   The answers to these questions have laid the foundation for philosophical schools of thought throughout the ages: realism, in the early Middle Ages; nominalism, which dominated the latter part of the fifteenth century – with the sudden appearance of empiricism and positivism – together with its variant, the conceptualist approach; and finally the rationalist revolution1 of the seventeenth century, which provided the especially fertile context for the discussion of universals that concerns us here.

   To a great extent, the Renaissance was an individualistic and plural era, which fostered the meticulous description of events (and languages) rather than an explanation for them based on general underlying principles. However, in the seventeenth and eighteenth centuries, with the Scientific Revolution or the Enlightenment, the concept of universal reason first arose, according to which the general takes precedence over the particular, the abstract over the concrete, and the non-temporal over the historical (Pinillos, 1997, 76ff.). This historical period produced philosophers such as Descartes, Leibniz, Locke, Condillac, Diderot, and Rousseau; linguists and pedagogues such as Beauzée, Comenius, and Wilkins; physicists such as Newton; as well as many other great scholars in all realms of knowledge, too numerous to cite in their entirety. To a greater or lesser extent, all of them influenced the linguistic ideas of the time,2 which were centered on efforts to create new artificial and universal languages,3 and produced pioneering work in the comparison of languages4 as well as the publication of philosophical grammars that were theoretical rather than descriptive, the most important of which was the Grammaire générale et raisonnée of Port-Royal by Claude Lancelot and Antoine Arnauld (Paris, 1660). In this Grammaire, which is générale in the sense of aiming to be valid for all languages, and which is based on the philosophy of Descartes,5 the authors formulate a series of universal principles underlying language in general.

   Cartesian philosophy opened the door to the serious discussion of universals. One of its basic premises was the defence of innateness: the belief that if objects in the real world are knowable, which they evidently are, it is because of the existence of innate ideas or conceptual structures that have not reached us by way of our senses or imagination, that are not generalizations made by induction, and that are not even in need of empirical confirmation. Rather, they already exist in the mind and constitute an eminently human characteristic. If certain ideas are innate, they must be shared by everyone, and can thus be regarded as universal. This leads to the conclusion that innate ideas are universal, and that experiential data, which can be considered contingent, is deduced and interpreted on the basis of those innate ideas.

   It is precisely this conception of the origin of knowledge that established the opposition (more conventional than real) between the two most prominent schools of pre-Kantian philosophy (see footnote 1): continental rationalism (e.g. Descartes, Spinoza, Leibniz) on one side of the dividing line, and British empiricism (e.g. Locke, Berkeley, Hume) on the other. In vivid contrast to the rationalists, empiricists affirmed that all knowledge comes from perception, and thus cannot be derived from innate principles, but only from experience. What is interesting for our purposes is that both schools have had an important impact on the contemporary discussion and consideration of the problem of universals.

   Let us first focus our attention on the rationalists. It is well known that rationalism greatly influenced not only the general intellectual panorama of its era, but also the more recent generative model of linguistic analysis, which will be discussed in greater detail in the following sections. These conceptions were passed on to new generations of linguists through the writings of Descartes and his followers, and also thanks to the legacy of rationalist thinkers such as Leibniz, whose ideas on language and thought coincide to a great degree with those of Descartes: Cartesian innate ideas, for example, essentially correspond to Leibniz’s eternal and necessary truths of reason. Part of the difference between innate ideas and truths of reason, however, is evidenced in the fact that they have served as the basis for different research perspectives on language universals.

   In fact, in the strictly Cartesian concept of language, as Acero (1993, 15ff.) very clearly states, innate universal ideas are always accurate and valid, regardless of the data provided by experience and knowledge: “Whatever the real world may be like . . ., it has no effect on the fact that my ideas regarding objective reality are ideas and thus have a typically representational function. The access of understanding to ideas, to the content within them and to its operations with that content – what Descartes euphemistically calls ‘self-knowledge’ – does not depend on any connection with the real world. According to Descartes, even if such links were severed, representations would not be affected” (Acero, 1993, 16).6 Strictly speaking, this Cartesian postulate is static in that it presupposes a predetermined, clearly delimited schema, one that cannot be modeled by external factors, and to which human knowledge and experience must adapt.

   On the other hand, according to Leibniz, truths or innate principles (e.g. the principle of contradiction) and ideas or innate concepts (e.g. cause, unity, identity, etc.) are only those that can be derived from pure understanding and common sense, and therefore from the mind, never from the senses. This notwithstanding, experience may be necessary to enable us to know these innate ideas or truths: the mind has the power, faculty, or competence to find within itself those ideas that are virtually innate, and which experience helps it to discover. As a result, there is a dynamic, circular conception of the interrelation between mind and objective experience.

   Moreover, Leibniz believes that human beings mentally configure what they apprehend through experience, and that this configuration is, to a certain extent, mediated by language and thereby intimately related to the cognitive process of which it is a part. Since there is not one language but many, and all of them are the product of an innate human language faculty and of the diversity of human interactions with their surroundings, experiential data from the outside world will be mentally structured according to the dictates of each individual language.7 This premise, at once philosophical and anthropological, explains the interest shown by Leibniz in the study of different languages as a means of discovering features shared by all of them (Wierzbicka, 2001).8 It is directly linked to the ideas of other great philosophers and linguists, such as Wilhelm von Humboldt,9 Franz Boas, and Edward Sapir, who, despite accepting the possibility of the “universal unity of language” (above all, in the case of Humboldt), clearly opted for an anthropological approach based on the principle of linguistic relativity, with its extreme corollaries regarding the subjectivity of speech and the social nature of languages.10

   It thus becomes increasingly evident that even among the so-called “rationalists,” there are important differences regarding the conception of innate universal ideas. On the one hand, we have a conception that can be described as more intrinsic, in the sense that such intellectual truths are considered to be unconscious (although they can reach our consciousness through introspection), unlearned, hardwired into the human brain by Nature, and vitally necessary for the interpretation of experience and for language learning. On the other hand, there is the extrinsic conception, exemplified in the philosophy of Leibniz, centered on experiential data derived from the senses, which seeks to discover a shared grammar through the formal comparative study of individual languages, understood as indicators of essential characteristics of human language in general. It is the first path – that of strict Cartesian philosophy, with its interest in general grammars – that Chomsky’s work follows, whereas the second – the comparative typological analysis of a wide range of languages – leads to the work of Sapir, Jakobson, and Greenberg. As will be explained in the next section, these two paths also represent two different ways of understanding linguistic universals, which came back into the spotlight of contemporary linguistics in the second half of the twentieth century.


1.2     The debate continues

As previously mentioned in the first section, in the history of linguistics (as in the history of philosophy in general) the debate regarding universal properties has centered not merely on how to define them or how they should be approached, but on whether they actually exist at all. We previously mentioned the dichotomy established between rationalism and empiricism, and we described and concisely outlined rationalist proposals. Empiricism is associated with philosophers such as Locke and, above all, Condillac11 in the seventeenth and eighteenth centuries, and with Bréal and Taine at the end of the nineteenth century, whose reflections on language heralded the beginning of the anti-universalist movement – subsequent to romanticism and the positivism of the nineteenth-century Neogrammarian movement12 – which would last from Saussure onwards into the twentieth century. Ideas traditionally linked to Saussure, such as the arbitrariness of the sign, its conventionality and surface linearity, the communicative nature of language, and the conception of languages as social institutions, had already appeared in the work of the linguists cited above. Obviously, linguistic universals had no place in the ideological framework that emerged, just as they had no place in most of the structuralism derived from the work of Saussure,13 nor did they arouse the interest of any of the representatives of post-structuralism. Post-structuralist thinkers, such as Foucault, became the most fervent defenders of the idea that each language should be described in its own terms, and of the arbitrariness of the sign, which they transformed into a product of sociocultural contingency.14

   However, the twentieth century was not characterized solely by the structuralist and, above all, the post-structuralist rejection of the idea of language universals. On the contrary, this century also witnessed a renewed impetus in the search for properties common to all languages, by linguists within the Humboldtian tradition (some of whom were mentioned in the previous section) as well as by generative linguists, who, as we have also pointed out, took a different approach to this problem.

   The research on language typology outlined by Humboldt (and before that by Leibniz) – still speculative and eager to pinpoint connections between perception and the organization of grammars, combining a “psychological” tendency, like the one represented by Sapir, with a more anthropological one, like that represented by Boas – produced a great number of descriptions and classifications of languages documented on the five continents. These studies were for the most part descriptive, and often limited to the elaboration of new taxonomies. It was not until the mid twentieth century that Jakobson breathed new life into typological studies by establishing laws of general (though not universal) validity. His proposals were further developed by Greenberg (1957), who defined his well-known series of empirically based implicational universals (of the form “if a language has property X, it also has property Y”). Just as the contribution of Greenberg and his followers laid the foundations for research methodology in language typology by offering empirical results to explain the nature of universals, at approximately the same time Noam Chomsky – who, in response to structuralism, had begun to create his Generative Grammar (Syntactic Structures had been published in 1957 and Aspects of the Theory of Syntax [1965] had just come out) – opened up new horizons in linguistic research by conceiving a model based on hypothetico-deductive criteria. This is thus the period that gave birth to the two great paradigms for the study of universals, which would dominate the linguistic panorama throughout the second half of the twentieth century and the beginning of the twenty-first: the Greenberg approach and the Chomskyan15 approach.

   Put concisely, the Greenberg approach is based on the description and analysis of the greatest possible number of language samples, and accordingly endeavors to establish genuinely cross-linguistic generalizations, or universals of languages: in other words, intrinsic properties shared by all languages. In contrast, the Chomskyan approach seeks the specification of linguistic universals, that is, those internal aspects of linguistic theory that are regarded as universal. In the latter case, it is the basic premises of the model that are universal; they are explained in terms of the well-known innateness hypothesis, and are consequently considered to be part of our genetic make-up (e.g. Hawkins, 1988).


1.3     Past and present

As we have endeavored to explain, any mention of linguistic universals signifies the continuation of a journey begun many years ago, and refers to a subject of reflection on the part of both linguists and philosophers throughout the ages, a topic of debate that has been a constant in the history of our discipline. On the one hand, there have always been those who have defended linguistic homogeneity and the properties shared by all languages, whether such properties are derived empirically or through introspection. On the other hand, the opposing side has invariably rejected universality, maintaining that the origins of knowledge, values, and ideas are particular; in other words, that they are dependent on and conditioned by their sociocultural context. The predominance of one belief or the other has always depended on the historical period.

   In the present day, what we have are different understandings of the idea of universals, determined by the divergent epistemological foundations of the two theoretical approaches that accept their existence, which can be classified in two major categories: formal vs. functional theories of language.

   The proponents of formal models consider that the similarities found in all languages can be explained in terms of the human capacity for cognition or knowledge of language (linguistic competence), which is innate in all human beings, and thus universal. The primary goal of the generative model, which is the most representative theory of this type, is the characterization of that knowledge, or the elaboration of a universal grammar. In contrast, the functional approach is a label covering a wide range of linguistic paradigms that, according to Hall (1992, 1), share the idea that form is constrained by function – in other words, the idea that regularities in languages are determined by a number of psychological or general functional parameters, which are the natural result of the fact that languages are first and foremost a means of communication.16

   What is extremely significant, in our opinion, and something that we would like to highlight in this chapter, is that, even though these two schools of thought were initially opposed to each other, the years have not widened this separation, but instead have gradually brought them closer to each other. In this sense, the evolution and gradual approximation of positions have produced a new vision of linguistic theory, since all models, without exception and of whatever tendency, acknowledge the necessity of accounting for grammatical phenomena in a great number of languages. It is very revealing to look at the data in language studies carried out within both formal and functional linguistic frameworks, because it eloquently reflects the prevailing awareness that linguistic models should be capable of explaining not just one language, but many.

   The distance between formal and functional perspectives on universals is rapidly diminishing, and being replaced by more complex, integrated approaches. In Section 4, we shall try to explain why we believe that such an approximation is not only positive, but vitally necessary for a deeper understanding of certain aspects of linguistic behavior. Nevertheless, despite vanishing differences, it is obvious that each approach still possesses certain differentiating features, which will be examined in the pages that follow.


2     The underlying causes of the debate

2.1     Language or linguistic universals?

Without a doubt, the debate on universals has its roots in the conception of the universal held by each of the two linguistic schools. According to Ferguson (1978, 12, 16), the universals formulated at the Dobbs Ferry Conference were statistical and implicational (see footnote 15), whereas those at the Austin Conference were not. This is hardly surprising considering the conference titles and the corresponding difference between linguistic universals and language universals, which lay at the foundation of their respective proposals. In consonance with what we explained in the preceding section, our objective is to examine the principal ideas that constitute the backbone of the study of universals within contemporary linguistics, ideas which, in our opinion, can be summarized in the dialectical tug-of-war between functionalists and generativists over whether to use internal or external criteria for their study and analysis.

   The distinction between internal universals (theoretical linguistic universals) and external universals (empirical universals based on languages and their surface structure) arose from the following theoretical and methodological characteristics of each of the two linguistic frameworks.

  1. Generative and functional models are based on radically different conceptions of the nature of language. Basic features of the generative model are innateness, modularity, and psychological adequacy, whereas functional–cognitive models, which focus on function, meaning, and usage, are non-modular and experience-based.
  2. Generative linguists seek a theory of the language faculty known as Universal Grammar (hereafter UG), as well as the establishment of the principles that govern this faculty (Chomsky, 2003, 18), an idea that will be discussed in greater detail later on. For this purpose, they use a hypothetico-deductive approach, testing the elements of a formal model that will ultimately account for human linguistic abilities. For example, if linguists begin with the hypothesis that speakers mentally associate the sentences of a language with the syntactic structures of their constituents, and that these constituents are projections of lexical categories (N, ADJ, V) or functional categories (quantification, determination, subordination), then it is necessary to explain not only how this association is produced, or by means of what operations and principles it occurs, but also why languages differ with regard to the number of their projections as well as the extension assigned to grammatical categories.17 In contrast, one of the methodological postulates that characterize functional linguistic models is their more empiricist orientation. Functionalists start from a representative sample of languages and formulate generalizations based on these data. Consequently, their mode of analysis is inductive.18



