Andreev encyclopædia

intellect

(Latin intellectus: mind, reason, cognition, understanding; the Latin translation of the ancient Greek concept nous, 'mind', and identical to it in meaning).
A synonym: reason. An adjective: intellectual.
In English intellect

Intellect is mind, understanding, reason; the ability to think, to cognize rationally, to understand and comprehend, as distinct from such faculties of the soul as, for example, feeling, will, intuition, imagination, and the like; discernment; a quality of the psyche consisting of the ability to adapt to new situations, to learn from experience, to understand and apply abstract concepts, and to use one's knowledge to manage one's environment. It is the general capacity for cognition and for coping with difficulties, uniting all of a person's cognitive abilities: sensation, perception, memory, representation, thinking, and imagination.

Reason is a philosophical category expressing the highest type of mental activity: the ability to think in universals and the capacity for analysis, abstraction, and generalization.

Text of the article
Gallery
Used sources
Links to D. Andreev’s texts
Links to A. Andreeva’s texts
Links to accompanying materials to D. Andreev's texts
Local links
External links
Bibliography
Quotations
Literary supplement

The intellect is the totality of those mental functions (comparison, abstraction, concept formation, judgment, inference, and so on) that turn perceptions into knowledge or critically review and analyze knowledge already acquired. As early as the Middle Ages the disputed question arose whether the will is subordinate to the intellect or, on the contrary, the intellect to the will. The first view was represented by Thomas Aquinas, the second by such English thinkers as Duns Scotus and William of Ockham. The prevailing view now is that although the intellect, like the will, depends on the relevant circumstances, it nevertheless, as belonging to the sphere of spirit, stands above the will, which belongs to the sphere of the psychic.

In Scholasticism the term "intellect" was used to denote the higher cognitive faculty (the supersensible apprehension of spiritual essences), in contrast to reason (ratio) as the lower cognitive faculty (of elementary abstraction). Kant used these terms in the reverse sense: intellect (German Verstand, understanding) as the faculty of forming concepts, and reason (German Vernunft) as the faculty of forming metaphysical ideas. This usage spread in subsequent German philosophy and was finally fixed by Hegel in his conception of understanding (intellect) and reason.

In animal psychology, the intellect (or "manual thinking") of higher animals refers to those reactions, accessible mainly to apes, that are characterized by the suddenness with which a problem is solved, the ease with which a solution, once found, is reproduced, its transfer to a situation somewhat different from the original one, and, finally, the ability to solve "two-phase" problems.

In a number of philosophical currents reason is the highest principle and essence (panlogism) and the basis of human cognition and conduct (rationalism). A distinctive cult of reason is characteristic of the Age of Enlightenment.

Философский энциклопедический словарь [Philosophical Encyclopedic Dictionary]. –

Links to D. Andreev’s texts

On the Rose of the World: multitudes upon multitudes will respond to its call, provided that the call is addressed not so much to the intellect as to the heart, sounding in works of genius of the word, of music, of theatre, of architecture (2: 25).

Intellectual love can be felt only for the products of the intellect: with the mind one can love an idea, a thought, a theory, a scientific discipline. Thus one can love physiology, microbiology, even parasitology, but not lymph, not bacteria, and not fleas. Love of nature may be a phenomenon of a physiological order, of an aesthetic order, and, finally, of an ethical and religious order. There is only one order to which it cannot belong: the intellectual (2: 74).


Links to A. Andreeva’s texts

The Banderite women wept over Turgenev's novella because they saw their own fate in it and felt themselves to be the "Lenochkas" of the book. These really were honest people, of a heroic cast, and of a very low intellectual level (ПНК: 178).


Links to accompanying materials to D. Andreev's texts


Local links

Eckhart

External links

Intellect
Reason
Category: Intellect

on intellect



Science · Consciousness





Andreev encyclopædia

intellect

A synonym: reason. An adjective: intellectual.
In Russian интеллект.

Intellect is a term used in studies of the human mind, and refers to the ability of the mind to come to correct conclusions about what is true or real, and about how to solve problems. Historically the term intellect comes from the Greek philosophical term nous, which was translated into Latin as intellectus (derived from the verb intelligere) and into French (and then English) as intelligence.

Reason is the capacity for consciously making sense of things, applying logic, establishing and verifying facts, and changing or justifying practices, institutions, and beliefs on the basis of new or existing information.

Text of the article

Sternberg, Robert J. Intelligence

Introduction

Theories of intelligence

Psychometric theories

Cognitive theories

Cognitive-contextual theories

Biologic theories

Hemispheric studies

Brain-wave studies

Blood-flow studies

Development of intelligence

Jean Piaget's work

Post-Piaget theories

The environmental viewpoint

Measuring intelligence

The IQ test

The distribution of IQ scores

The malleability of intelligence

Reason

Gallery
Used sources
Links to D. Andreev’s texts
Links to A. Andreeva’s texts
Links to accompanying materials to D. Andreev's texts
Local links
External links
Bibliography

General works

Theories of intelligence

Development of intelligence

Measuring intelligence

Malleability of intelligence
Quotations
Literary supplement

Robert J. Sternberg,
IBM Professor of Psychology and Education, Yale University.

Intelligence

Introduction

Intelligence is the ability to adapt effectively to the environment, either by making a change in oneself or by changing the environment or finding a new one.

Much of the excitement among investigators in the field of intelligence derives from their trying to determine exactly what intelligence is. Different investigators have emphasized different aspects of intelligence in their definitions. For example, in a 1921 symposium on the definition of intelligence, the American psychologist Lewis M. Terman emphasized the ability to think abstractly, while another American psychologist, Edward L. Thorndike, emphasized learning and the ability to give good responses to questions. In a similar 1986 symposium, however, psychologists generally agreed on the importance of adaptation to the environment as the key to understanding both what intelligence is and what it does. Such adaptation may occur in a variety of environmental situations. For example, a student in school learns the material that is required to pass or do well in a course; a physician treating a patient with an unfamiliar disease adapts by learning about the disease; an artist reworks a painting in order to make it convey a more harmonious impression. For the most part, adapting involves making a change in oneself in order to cope more effectively, but sometimes effective adaptation involves either changing the environment or finding a new environment altogether.

Effective adaptation draws upon a number of cognitive processes, such as perception, learning, memory, reasoning, and problem solving. The main trend in defining intelligence, then, is that it is not itself a cognitive or mental process, but rather a selective combination of these processes purposively directed toward effective adaptation to the environment. For example, the physician noted above learning about a new disease adapts by perceiving material on the disease in medical literature, learning what the material contains, remembering crucial aspects of it that are needed to treat the patient, and then reasoning to solve the problem of how to apply the information to the needs of the patient. Intelligence, in sum, has come to be regarded as not a single ability but an effective drawing together of many abilities. This has not always been obvious to investigators of the subject, however, and, indeed, much of the history of the field revolves around arguments regarding the nature and abilities that constitute intelligence.

Theories of intelligence

Theories of intelligence, as is the case with most scientific theories, have evolved through a succession of paradigms that have been put forward to clarify our understanding of the idea. The major paradigms have been those of psychological measurement (often called psychometrics); cognitive psychology, which concerns itself with the mental processes by which the mind functions; the merger of cognitive psychology with contextualism (the interaction of the environment and processes of the mind); and biologic science, which considers the neural bases of intelligence.

Psychometric theories

Psychometric theories have generally sought to understand the structure of intelligence: What form does it take, and what are its parts, if any? Such theories have generally been based on and tested by the use of data obtained from paper-and-pencil tests of mental abilities that include analogies (e.g., lawyer : client :: doctor : ?), classifications (e.g., Which word does not belong with the others? robin, sparrow, chicken, bluejay), and series completions (e.g., What number comes next in the following series? 3, 6, 10, 15, 21, ?).
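
As an illustration of the series-completion format, the short Python sketch below (written for this article, not taken from any test manual) extrapolates such a series by taking successive differences until they become constant; for the example above the differences are 3, 4, 5, 6, so the next term is 28.

    # Extrapolate a number series by repeated differencing; this covers
    # polynomial-style items such as 3, 6, 10, 15, 21, not every possible series.
    def next_term(series):
        rows = [list(series)]
        while len(set(rows[-1])) > 1:        # stop once the differences are constant
            prev = rows[-1]
            rows.append([b - a for a, b in zip(prev, prev[1:])])
        return sum(row[-1] for row in rows)  # add the last entry of each level back up

    print(next_term([3, 6, 10, 15, 21]))     # -> 28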

Underlying the psychometric theories is a psychological model according to which intelligence is a composite of abilities measured by mental tests. This model is often quantified by assuming that each test score is a weighted linear composite of scores on the underlying abilities. For example, performance on a number-series test might be a weighted composite of number, reasoning, and possibly memory abilities for a complex series. Because the mathematical model is additive, it assumes that less of one ability can be compensated for by more of another ability in test performance. For instance, two people could gain equivalent scores on a number-series test if a deficiency in number ability in the one person relative to the other was compensated for by superiority in reasoning ability.
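
The additive model can be made concrete with a small Python sketch; the weights and ability scores below are purely hypothetical, chosen only to show how a deficit on one ability can be offset by a surplus on another.

    # A test score as a weighted linear composite of underlying ability scores.
    def composite_score(abilities, weights):
        return sum(w * a for w, a in zip(weights, abilities))

    weights = (0.5, 0.25, 0.25)   # number, reasoning, memory (illustrative weights)

    # Two hypothetical examinees: a 20-point deficit in number ability is offset
    # by a 40-point surplus in reasoning, so the composite scores come out equal.
    print(composite_score((60, 40, 50), weights))   # 52.5
    print(composite_score((40, 80, 50), weights))   # 52.5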

The first of the major psychometric theories was that of the British psychologist Charles E. Spearman, who published his first major article on intelligence in 1904. Spearman noticed what, at the turn of the century, seemed like a peculiar fact: People who did well on one mental ability test tended to do well on the others, and people who did not do well on one of them also tended not to do well on the others. Spearman devised a technique for statistical analysis, which he called factor analysis, that examines patterns of individual differences in test scores and is said to provide an analysis of the underlying sources of these individual differences. Spearman's factor analyses of test data suggested to him that just two kinds of factors underlie all individual differences in test scores. The first and more important factor Spearman labeled the “general factor,” or g, which is said to pervade performance on all tasks requiring intelligence. In other words, regardless of the task, if it requires intelligence, it requires g. The second factor is specifically related to each particular test. But what, exactly, is g? After all, calling something a general factor is not the same as understanding what it is. Spearman did not know exactly what the general factor might be, but he proposed in 1927 that it might be something he labeled “mental energy.”
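
The following Python sketch conveys the spirit of Spearman's observation rather than his actual procedure: given a correlation matrix in which all tests correlate positively (the numbers here are invented), the leading eigenvector of that matrix can stand in for a single general factor on which every test loads; Spearman's own tetrad-difference method and later factor-analytic techniques are more elaborate.

    import numpy as np

    # Invented correlation matrix for four mental tests; every entry is positive,
    # the pattern Spearman noticed in real test batteries.
    R = np.array([
        [1.00, 0.60, 0.55, 0.50],
        [0.60, 1.00, 0.52, 0.48],
        [0.55, 0.52, 1.00, 0.45],
        [0.50, 0.48, 0.45, 1.00],
    ])

    # A one-factor, principal-component stand-in for g: the leading eigenvector,
    # scaled by the square root of its eigenvalue, gives each test's loading on
    # a single general factor.
    eigenvalues, eigenvectors = np.linalg.eigh(R)
    g_loadings = np.abs(eigenvectors[:, -1]) * np.sqrt(eigenvalues[-1])
    print(np.round(g_loadings, 2))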

The American psychologist L.L. Thurstone disagreed not only with Spearman's theory but also with his isolation of a single factor of general intelligence. Thurstone argued that the appearance of just a single factor was an artifact of the way Spearman did his factor analysis and that if the analysis were done in a different and more appropriate way, seven factors would appear, which Thurstone referred to as the “primary mental abilities.” The seven primary mental abilities identified by Thurstone were verbal comprehension (as involved in the knowledge of vocabulary and in reading); verbal fluency (as involved in writing and in producing words); number (as involved in solving fairly simple numerical computation and arithmetical reasoning problems); spatial visualization (as involved in mentally visualizing and manipulating objects, as is required to fit a set of suitcases into an automobile trunk); inductive reasoning (as involved in completing a number series or in predicting the future based upon past experience); memory (as involved in remembering people's names or faces); and perceptual speed (as involved in rapidly proofreading to discover typographical errors in a typed text).

It is a possibility, of course, that Spearman was right and Thurstone was wrong, or vice versa. Other psychologists, however, such as the Canadian Philip E. Vernon and the American Raymond B. Cattell, suggested another possibility – that both were right in some sense. In the view of Vernon and Cattell, abilities are hierarchical. At the top of the hierarchy is g, or general ability. But below g in the hierarchy are successive levels of gradually narrowing abilities, ending with Spearman's specific abilities. Cattell, for example, suggested in a 1971 work that general ability can be subdivided into two further kinds of abilities, fluid and crystallized. Fluid abilities are the reasoning and problem-solving abilities measured by tests such as the analogies, classifications, and series completions described above. Crystallized abilities can be said to derive from fluid abilities and be viewed as their products, which would include vocabulary, general information, and knowledge about specific fields. John L. Horn, an American psychologist, suggested that crystallized ability more or less increases over the life span, whereas fluid ability increases in the earlier years and decreases in the later ones.

Most psychologists agreed that a broader subdivision of abilities was needed than was provided by Spearman, but not all of these agreed that the subdivision should be hierarchical. J.P. Guilford, an American psychologist, proposed a structure-of-intellect theory, which in its earlier versions postulated 120 abilities. For example, in an influential 1967 work Guilford argued that abilities can be divided into five kinds of operations, four kinds of contents, and six kinds of products. These various facets of intelligence combine multiplicatively, for a total of 5 × 4 × 6, or 120 separate abilities. An example of such an ability would be cognition (operation) of semantic (content) relations (product), which would be involved in recognizing the relation between lawyer and client in the analogy problem, lawyer : client :: doctor : ?. In 1984 Guilford increased the number of abilities proposed by his theory, raising the total to 150.

It had become apparent that there were serious problems with psychometric theories, not just individually but as a basic approach to the question. For one thing, the number of abilities seemed to be getting out of hand. A movement that had started by postulating one important ability had come, in one of its major manifestations, to postulating 150. Because parsimony is usually regarded as one of several desirable features of a scientific theory, this number caused some disturbance. For another thing, the psychometricians, as practitioners of factor analysis were called, didn't seem to have any strong scientific means of resolving their differences. Any method that could support so many theories seemed somewhat suspect, at least in the use to which it was being put. Most significant, however, was the seeming inability of psychometric theories to say anything substantial about the processes underlying intelligence. It is one thing to discuss “general ability” or “fluid ability,” but quite another to describe just what is happening in people's minds when they are exercising the ability in question. The cognitive psychologists proposed a solution to these problems, which was to study directly the mental processes underlying intelligence and, perhaps, relate them to the factors of intelligence proposed by the psychometricians.

Cognitive theories

During the era of psychometric theories, the study of intelligence was dominated by those investigating individual differences in people's test scores. In an address to the American Psychological Association in 1957, the American psychologist Lee Cronbach, a leader in the testing field, decried the fact that some psychologists study individual differences and others study commonalities in human behaviour but never do the two meet. In Cronbach's address his plea to unite the “two disciplines of scientific psychology” led, in part, to the development of cognitive theories of intelligence and of the underlying processes posited by these theories.

Without an understanding of the processes underlying intelligence it is possible to come to misleading, if not wrong, conclusions when evaluating overall test scores or other assessments of performance. Suppose, for example, that a student does poorly on the type of verbal analogies questions commonly found on psychometric tests. A possible conclusion is that the student does not reason well. An equally plausible interpretation, however, is that the student does not understand the words or is unable to read them in the first place. A student seeing the analogy, audacious : pusillanimous :: mitigate : ?, might be unable to solve it because of a lack of reasoning ability, but a more likely possibility is that the student does not know the meanings of the words. A cognitive analysis enables the interpreter of the test score to determine both the degree to which the poor score is due to low reasoning ability and the degree to which it is a result of not understanding the words. It is important to distinguish between the two interpretations of the low score, because they have different implications for understanding the intelligence of the student. A student might be an excellent reasoner but have only a modest vocabulary, or vice versa.

Underlying most cognitive approaches to intelligence is the assumption that intelligence comprises a set of mental representations (e.g., propositions, images) of information and a set of processes that can operate on the mental representations. A more intelligent person is assumed to represent information better and, in general, to operate more quickly on these representations than does a less intelligent person. Researchers have sought to measure the speed of various types of thinking. Through mathematical modeling, they divide the overall time required to perform a task into the constituent times needed to execute each mental process. Usually, they assume that these processes are executed serially – one after another – and, hence, that the processing times are additive. But some investigators allow for partially or even completely parallel processing, in which case more than one process is assumed to be executed at the same time. Regardless of the type of model used, the fundamental unit of analysis is the same: a mental process acting upon a mental representation.
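
A schematic Python version of this kind of additive modeling (all numbers are invented for illustration): if each trial is described by how many times it calls on each postulated process, and processing is assumed to be strictly serial, ordinary least squares recovers an estimated duration for each process.

    import numpy as np

    # Each row: how many times one trial calls on each of three postulated processes.
    counts = np.array([
        [1, 0, 1],
        [1, 1, 1],
        [2, 1, 1],
        [2, 2, 1],
        [3, 2, 2],
    ])
    # Observed total reaction times in milliseconds (invented, with a little noise).
    rt = np.array([648.0, 812.0, 938.0, 1102.0, 1755.0])

    # Under the serial, additive assumption the total time is the count-weighted sum
    # of per-process durations, so least squares estimates those durations.
    durations, *_ = np.linalg.lstsq(counts, rt, rcond=None)
    print(np.round(durations))   # roughly 130, 160, 520 ms with these numbers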

A number of cognitive theories of intelligence have evolved. Among them is that of the American psychologists Earl B. Hunt, Nancy Frost, and Clifford E. Lunneborg, who in 1973 showed one way in which psychometrics and cognitive modeling could be combined. Instead of starting with conventional psychometric tests, they began with tasks that experimental psychologists were using in their laboratories to study the basic phenomena of cognition, such as perception, learning, and memory. They showed that individual differences in these tasks, which had never before been taken seriously, were in fact related (although rather weakly) to patterns of individual differences in psychometric intelligence test scores. These results, they argued, showed that the basic cognitive processes might be the building blocks of intelligence.

Following is an example of the kind of task Hunt and his colleagues studied in their research. The experimental subject is shown a pair of letters, such as “A A,” “A a,” or “A b.” The subject's task is to respond as quickly as possible to one of two questions: “Are the two letters the same physically?” or “Are the two letters the same only in name?” In the first pair the letters are the same physically, and in the second pair the letters are the same only in name. The psychologists hypothesized that a critical ability underlying intelligence is that of rapidly retrieving lexical information, such as letter names, from memory. Hence, they were interested in the time needed to react to the question about letter names. They subtracted the reaction time to the question about physical match from the reaction time to the question about name match in order to isolate and set aside the time required for sheer speed of reading letters and pushing buttons on a computer. The critical finding was that the score differences seemed to predict psychometric test scores, especially those on tests of verbal ability, such as verbal analogies and reading comprehension. The testing group concluded that verbally facile people are those who have the underlying ability to absorb and then retrieve from memory large amounts of verbal information in short amounts of time. The time factor was the significant development here.
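
The subtraction logic can be written out in a few lines of Python (the reaction times below are invented, not data from the original study): the name-match time includes reading the letters, deciding, and responding plus retrieving the letter names from memory, while the physical-match time includes everything except the retrieval, so the difference of the two mean times serves as an index of lexical-access speed.

    # Hunt-style difference score for one subject: mean name-match reaction time
    # minus mean physical-match reaction time, both in milliseconds.
    def lexical_access_index(name_match_rts, physical_match_rts):
        mean = lambda xs: sum(xs) / len(xs)
        return mean(name_match_rts) - mean(physical_match_rts)

    # Invented trials: pairs such as "A a" judged same in name, "A A" judged same
    # physically.
    print(lexical_access_index([545, 560, 550, 565], [470, 480, 475, 485]))  # 77.5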

A few years later, the American psychologist Robert J. Sternberg suggested an alternative approach to studying the cognitive processes underlying human intelligence. He argued that Hunt and his colleagues had found only a weak relation between basic cognitive tasks and psychometric test scores because the tasks they were using were at too low a level. Although low-level cognitive processes may be involved in intelligence, according to Sternberg they are peripheral rather than central. He proposed that psychologists should study the tasks found on the intelligence tests and then determine the mental processes and strategies that people use to perform those tasks.

Sternberg began his study with the analogies tasks such as lawyer : client :: doctor : ?. He determined that the solution to such analogies requires a set of component cognitive processes: namely, encoding of the analogy terms (e.g., retrieving from memory attributes of the terms lawyer, client, and so on), inferring the relation between the first two terms of the analogy (e.g., figuring out that a lawyer provides professional services to a client), mapping this relation to the second half of the analogy (e.g., figuring out that both a lawyer and a doctor provide professional services), applying this relation to generate a completion (e.g., realizing that the person to whom a doctor provides professional services is a patient), and then responding. Using techniques of mathematical modeling applied to reaction-time data, Sternberg proceeded to isolate the components of information processing. He determined whether or not each experimental subject did, indeed, use these processes, how the processes were combined, how long each process took, and how susceptible each process was to error. Sternberg later showed that the same cognitive processes are involved in a wide variety of intellectual tasks, and he suggested that these and other related processes underlie scores on intelligence tests.

Other cognitive psychologists have pursued different paths in the study of human intelligence, including the building of computer models of human cognition. Two leaders in this field have been the American psychologists Allen Newell and Herbert A. Simon. In the late 1950s and early 1960s they worked with a computer expert, Clifford Shaw, to construct a computer model of human problem solving. Called the General Problem Solver, it could solve a wide range of fairly structured problems, such as logical proofs and mathematical word problems. Their program relied heavily on a heuristic procedure called “means-ends analysis,” which, at each step of problem solving, determined how close the program was to a solution and then tried to find a way to bring the program closer to where it needed to be. In 1972, Newell and Simon proposed a general theory of problem solving, much of which was implemented on the computer.

Most of the problems studied by Newell and Simon were fairly well structured, in that it was possible to identify a discrete set of moves that would lead from the beginning to the end of a problem. For example, in logical-theorem proving the final result is known, and what is needed is a discrete set of steps that lead to that solution. Even in chess, another object of study, a discrete set of moves can be determined that will lead from the beginning of a game to checkmate. The biggest problem for a computer program (or a human player, for that matter) is in deciding which of myriad possible moves will most contribute toward winning a game. Other investigators have been concerned with less well-structured problems, such as how a text is comprehended, or how people are reminded of things they already know when reading a text.

All of the cognitive theories described so far have in common their primary reliance on what psychologists call the serial processing of information. Fundamentally, this means that cognitive processes are executed in series, one after another. In solving an algebra problem, for example, first the problem is studied, then an attempt is made to formulate some equations to define knowns and unknowns, then the equations may be used to solve for the unknowns, and so on. The assumption is that people process chunks of information one at a time, seeking to combine the processes used into an overall strategy for solving a problem.

For many years, various psychologists have challenged the idea that cognitive processing is primarily serial. They have suggested that cognitive processing is primarily parallel, meaning that humans actually process large amounts of information simultaneously. It has long been known that the brain works in such a way, and it seems reasonable that cognitive models should reflect this reality. It proved, however, to be difficult to distinguish between serial and parallel models of information processing, just as it had been difficult earlier to distinguish between different factor models of human intelligence. Subsequently advanced techniques of mathematical and computer modeling were brought to bear on this problem, and various researchers, including the American psychologists David E. Rumelhart and Jay L. McClelland, proposed what they call “parallel distributed processing” models of the mind. These models postulated that many types of information processing occur at once, rather than just one at a time.

Even with computer modeling, some major problems regarding the nature of intelligence remain. For example, a number of psychologists, such as the American Michael E. Cole, have argued that cognitive processing does not take into account that the description of intelligence may differ from one culture to another and may even differ from one group to another within a culture. Moreover, even within the mainstream cultures of North America and Europe, it had become well known that conventional tests, even though they may predict academic performance, do not reliably predict performance in jobs or other life situations beyond school. It seemed, therefore, that not only cognition but also the context in which cognition operates had to be taken into account.

Cognitive-contextual theories

Cognitive-contextual theories deal with the way that cognitive processes operate in various environmental contexts. Two of the major theories of this type are that of the American psychologist Howard Gardner and that of Sternberg. In 1983 Gardner proposed a theory of what he called “multiple intelligences.” Earlier theorists had gone so far as to contend that intelligence comprises multiple abilities. But Gardner went a step further, arguing that there is no single intelligence. In his view, intelligences are multiple, including, at a minimum, linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, and intrapersonal intelligence. Some of these intelligences are quite similar to the abilities proposed by the psychometric theorists, but others are not. For example, the idea of a musical intelligence is relatively new, as is the idea of a bodily-kinesthetic intelligence, which encompasses the particular faculties of athletes and dancers. Gardner derived his listing of intelligences from a variety of sources, including studies of cognitive processing, of brain damage, of exceptional individuals, and of cognition across cultures. Gardner proposed that whereas most concepts of intelligence had been ethnocentric and culturally biased, his was universal, based upon biologic and cross-cultural data as well as upon data derived from the cognitive performance of a wide array of people.

An alternative theory also taking into account both cognition and context is Sternberg's “triarchic” theory of human intelligence. Sternberg agreed with Gardner that conventional notions of intelligence were too narrow. But he disagreed as to how psychologists should go beyond traditional conceptions, suggesting that abilities such as musical and bodily-kinesthetic ones are talents rather than intelligences in that they are fairly specific and are not prerequisites for adaptation in most cultures.

According to Sternberg, intelligence has three aspects. These are not multiple intelligences, as in Gardner's scheme. Where Gardner viewed the various intelligences as separate and independent, Sternberg posited three integrated and interdependent aspects of intelligence. These aspects relate intelligence to what goes on internally within a person, to what goes on in the external world, and to experience, which mediates between the internal and external worlds.

The first aspect consists of the cognitive processes and representations that form the core of all thought. Sternberg distinguished three kinds of processes: those involved in deciding what to do and later in deciding how well it was done, those involved in doing what one had decided to do, and those involved in learning how to do it in the first place. For example, when deciding which of two brands of a product to buy, a shopper might first decide how to decide which is a better purchase (say, in terms of unit cost), then actually do the required calculation. Of course the shopper had first to have learned how to calculate unit prices.

The second aspect consists of the application of these processes to the external world. According to Sternberg, mental processes serve three functions in the everyday world: adaptation to existing environments, the shaping of existing environments into new ones, and the selection of new environments when old ones prove unsatisfactory. The theory holds that more intelligent persons are not just those who can execute many cognitive processes quickly or well; rather, their greater intelligence is reflected in knowing what their strengths and weaknesses are and capitalizing upon their strengths while remedying or compensating for their weaknesses. More intelligent persons, then, find a niche in which they can operate most efficiently. The third aspect of Sternberg's triarchic theory is the integration of the internal and external worlds through experience. One measure of intelligence is the ability to cope with relatively novel situations. For example, intelligence might be measured by taking someone who is well adapted to one culture and placing him in an unfamiliar one, in order to assess his ability to cope with a new situation. Or in the case of a person for whom an automobile is of critical importance, intelligence may be gauged according to the way that person functions when the car is being repaired and unavailable for a short period. Sternberg also suggested that another facet of experience that is important in evaluating intelligence is the automatization of cognitive processing, which occurs when a once relatively novel task becomes familiar. For example, when children first learn to read, they are confronted with a relatively novel task. But one is interested not only in how well children deal with the novelty of reading but also in how well they render reading automatic and subconscious. The abilities to cope with relative novelty and to automatize cognitive processing are seen as interrelated: The more a person is able to automatize the tasks of daily life, the more mental resources there are left to cope with novelty.

Biologic theories

The theories discussed above seek to understand intelligence in terms of underlying hypothetical constructs, whether these constructs are factors (e.g., verbal ability, spatial ability), as is the case with psychometric theories; cognitive processes (e.g., retrieval of information from memory, inferring relations); or cognitive processes as they interact with context (e.g., musical intelligence or the shaping of the environment). Some theorists, however, have taken a radically different approach, seeking to understand intelligence directly in terms of its biologic bases without intervening hypothetical constructs. These theorists, usually called reductionists, believe that a true understanding of intelligence can result only from the identification of its biologic substrates. Reductionism takes an appealing philosophical stance, and some would argue that there is no real alternative if the goal is to explain rather than merely to describe behaviour. But the case is not an open-and-shut one. In trying to discover why an automobile does not start in the morning, for example, the driver does not usually imagine that the basic problem is one involving molecules and atoms. The driver is probably better advised to analyze the performance of components, such as the starter or carburetor. Even if the automobile's molecular or atomic reactions could be analyzed, such an analysis would probably be unhelpful. The example suggests that the biologic approaches to intelligence should be looked at as complementary to, rather than as replacing, other approaches. Although relatively little is known about the biologic bases of intelligence, progress has been made on three different fronts, all involving studies of brain operation.

Hemispheric studies

One front has been the investigation of types of intellectual performance as related to the regions of the brain from which they originate. A researcher in this area, the American psychologist Jerre Levy, investigated the functioning of the two hemispheres of the brain. Levy and others found that the left hemisphere is superior in analytical functioning, of which the use of language, for instance, is a prime example. The right hemisphere, on the other hand, is superior in many forms of visual and spatial performance and tends to be more synthetic and holistic in its functioning than the left. Such patterns of hemispheric specialization are complex and cannot easily be generalized.

The specialization of the two hemispheres of the brain is exemplified in an early study by Levy and another American psychologist, Roger W. Sperry, who worked with split-brain patients, individuals who have had their corpus callosum severed. Because the corpus callosum, in the normal brain, links the two hemispheres, in these patients the hemispheres function independently of each other. But, as in normal persons, the right side of the body connects with the left hemisphere of the brain, and the left side connects with the right hemisphere.

Levy and Sperry asked split-brain patients to match small wooden blocks held in either their left or their right hand (but not looked at) with corresponding two-dimensional pictures. They found that the left hand did better than the right at this task, but, of more interest, they found that the two hands appeared to use different strategies in solving the problem. Their analysis demonstrated that the right hand found it relatively easier to deal with patterns that are readily described in words but difficult to discriminate visually. In contrast, the left hand found it easier to deal with patterns requiring visual discrimination.

Brain-wave studies

A second front of research has involved the use of brain-wave recordings to study the relation between these waves and performance either on ability tests or on various kinds of cognitive tasks. Researchers in some of these studies found a relationship between certain aspects of electroencephalogram (EEG) waves and scores on a standard psychometric test of intelligence.

Blood-flow studies

A third and relatively new front of research has involved the measurement of blood flow in the brain, which is a fairly direct indicator of functional activity in brain tissue. In such studies the amount and location of blood flow in the brain is monitored while subjects perform cognitive tasks. John Horn, a prominent researcher in this area, found that older adults show decreased blood flow to the brain, that such decreases are greater in some areas of the brain than in others, and that the decreases are particularly notable in those areas responsible for close concentration, spontaneous alertness, and the encoding of new information. These findings highlight the importance not only of understanding intelligence in general but also of understanding it as a faculty that develops over time.

Development of intelligence

There have been a number of approaches to the study of the development of intelligence. Psychometric theorists, for instance, have sought to understand how intelligence develops in terms of changes in the factors of intelligence over time and changes in the amounts of the various abilities that children have. For example, the concept of mental age was popular during the first half of the 20th century. A given mental age was held to represent an average child's level of mental functioning for a given chronological age. Thus, an average 12-year-old would have a mental age of 12, but an above-average 10-year-old or a below-average 14-year-old might also have a mental age of 12 years. The concept of mental age fell into disfavour, however, and has come to be used only rarely. Two main reasons appear to have caused this change. First, the concept does not seem to work after about the age of 16. The mental test performance of, say, a 25-year-old is generally no better than that of a 24- or 23-year-old. Furthermore, in later adulthood some test scores seem to start declining. Second, many psychologists believe that intellectual development does not exhibit the kind of smooth continuity that the concept of mental age appears to imply. Rather, development seems to come in intermittent bursts, whose timing can differ from one child to another.

Jean Piaget's work

The landmark work in intellectual development has not come out of psychometry but rather out of a tradition forged by the Swiss psychologist Jean Piaget. Over the course of a long career, Piaget formulated what became one of the monumental theories in the history of psychology. Two of its main aspects concern the mechanisms by which intellectual development takes place and the periods through which children develop. As modeled by Piaget, the child explores the world and observes regularities and makes generalizations, much as a scientist does. The first part of Piaget's theory recognizes two fundamental cognitive processes that work in somewhat reciprocal fashion. The first is what Piaget called assimilation, a process that involves incorporating new information into an already existing cognitive structure. Suppose, for example, that a child knows how to solve problems that require a percentage of a given number to be calculated. The child then learns how to solve problems that ask what percentage of a number another number is. The child already has a cognitive structure, or what Piaget called a “schema,” for percentage problems and can incorporate the new knowledge into the existing structure. But suppose the child is to learn next how to solve time-rate-distance problems, never before having dealt with this type of problem. In this case, the child may need to call upon the second process, accommodation, to form a new cognitive structure that can incorporate the new information. Cognitive development, according to Piaget, represents a dynamic equilibrium between the two processes of assimilation and accommodation.

The second part of Piaget's theory postulates that there are four major periods in intellectual development. The first, the sensorimotor period, extends from birth through roughly two years of age. During this period, a child learns how to modify reflexes to make them more adaptive, to coordinate actions, to retrieve hidden objects, and, eventually, to begin representing information mentally. During the second, preoperational, period from about two to seven years of age, a child experiences the growth of language and mental imagery and learns to focus on single perceptual dimensions, such as colour and size. The third, concrete-operational, period from about seven to 12 years of age is the time during which a child develops an important set of skills referred to as conservation skills. For example, suppose that water is poured from a wide, short beaker into a tall and narrow one. A preoperational child, asked which beaker has more water, will say that the second beaker does (the tall, thin one); a concrete-operational child, however, will recognize that the amount of water in the beakers must be the same. Finally, children emerge into the fourth, formal-operational, period, which begins about age 12 and continues throughout life. The formal-operational child develops thinking skills in all logical combinations and learns to think with abstract concepts. For example, a concrete-operational child asked to determine all possible orderings, or permutations, of four digits, such as 3-7-5-8, will have great difficulty doing so. The formal-operational child, however, will adopt a strategy of systematically varying alternations of digits, starting perhaps with the last digit and working toward the first. This systematic way of thinking is not possible for the normal concrete-operational child.
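
The permutation task mentioned above can be made concrete with a few lines of Python; the point is only to show the size and structure of the problem (4! = 24 orderings), not to model how a child actually solves it.

    from itertools import permutations

    # All orderings of the four digits in the formal-operational example.
    orderings = list(permutations((3, 7, 5, 8)))
    print(len(orderings))    # 24
    print(orderings[:3])     # [(3, 7, 5, 8), (3, 7, 8, 5), (3, 5, 7, 8)]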

Piaget's theory had a major impact on views of intellectual development, but it no longer enjoys the widespread acceptance it commanded, particularly from the 1950s through the 1970s. One reason for this is that the theory deals primarily with scientific and logical modes of thought and much less with aesthetic, intuitive, and other modes of thought. Another reason is that Piaget tended to overestimate the ages at which children could first perform certain cognitive tasks. Despite its diminished influence, however, Piaget's theory continues to serve as a basis for other theories.

Post-Piaget theories

Later theories of intellectual development, influenced by Piaget, have taken several courses. One has been to expand on Piaget by suggesting a possible fifth, adult, period of development, such as problem finding. A second course has been to suggest quite different periods of development from those suggested by Piaget. A third course has been to accept that intellectual development occurs through the periods Piaget proposed, but to hold that the cognitive bases of development differ from those recognized by him. Some of these theories emphasize the importance of memory capacity. For example, it has been shown that children's difficulties in solving transitive inference problems (such as: If A is greater than B, B is greater than C, and D is less than C, which is the greatest?) result primarily from memory limitations rather than reasoning limitations, as argued by Piaget. A fourth course has been to focus on the role of knowledge in development. Some investigators argue that much of what has been attributed to reasoning and problem-solving ability in intellectual development is actually better attributed to the increasing knowledge base of a child. Although the above approaches are diverse, they are all related as alternative responses to Piaget.

The environmental viewpoint

The views of intellectual development described above all emphasize the importance of the organism in intellectual development. But an alternative viewpoint emphasizes the importance of the environmental context and, particularly, the social environment. This view is related to the cognitive-contextual theories discussed above. Championed originally by L.S. Vygotsky, a Soviet psychologist, this viewpoint suggests that intellectual development may be largely influenced by a child's interactions with others: A child sees others thinking and acting in certain ways and then internalizes and models what is seen. An Israeli psychologist, Reuven Feuerstein, has elaborated upon this point of view, suggesting that the key to intellectual development is what he calls mediated learning experience. The parent mediates, or interprets, the environment for the child, and it is largely through this mediation that the child learns to understand and interpret the world.

Measuring intelligence

Almost all of the theories discussed above have in common the use of fairly complex tasks for gauging intelligence in both children and adults. Some of these tasks have been explicitly discussed – for example, those requiring recognition of analogies, classification of similar terms, extrapolation of number series, performance of transitive inferences, and the like. How did theorists of intelligence come to use these tasks in preference to the myriad other possibilities? That question is answered through the discussion of the measurement of intelligence.

Although the kinds of complex tasks that have been discussed above fall into a single tradition for the measurement of intelligence, the field actually has two major traditions. The tradition that has been discussed most prominently and has been most influential is that of the French psychologist Alfred Binet. But an earlier tradition, and one that still shows some influence upon the field, is that of the English scientist Sir Francis Galton.

The publication in 1859 of Charles Darwin's The Origin of Species had a profound effect on many lines of scientific endeavour. The book suggested that the capabilities of humans are, in some sense, continuous with those of lower animals and, hence, can be understood through scientific investigation. One person who was strongly influenced by Darwin's thinking was his cousin Sir Francis Galton. For seven years – from 1884 to 1890 – Galton maintained a laboratory at the South Kensington Museum in London, where, for a small fee, visitors could have themselves measured on a variety of psychophysical tasks, such as weight discrimination and sensitivity to musical pitch. But Galton believed that these tests measured more than just psychophysical abilities. He believed that psychophysical abilities were the basis of intelligence and, hence, that his tasks were measures of intelligence. The earliest formal intelligence test, therefore, required a person to perform such simple tasks as deciding which of two weights was heavier or showing how forcefully he could squeeze his hand. The Galtonian tradition was taken to the United States by the psychologist James McKeen Cattell. One of Cattell's students, Clark Wissler, collected data showing that scores on Galtonian types of tasks were not good predictors of grades in college, or even of each other. Cattell continued his work in psychometric research, however, and with Edward L. Thorndike developed the principal facility in the United States for mental testing and measurement.

The IQ test

The more influential tradition of mental testing was developed by Binet and his collaborator, Theodore Simon, in France. In 1904 the minister of public instruction in Paris named a commission to study or create tests that would insure that mentally retarded children received an adequate education. The minister was also concerned that certain children were being placed in classes for the retarded not because they were retarded but because they had behaviour problems, and teachers did not want them in their classrooms. Even before Wissler's research, Binet, who was charged with developing the new test, had flatly rejected the Galtonian tradition, believing that Galton's tests measured fairly trivial abilities. He proposed instead that tests of intelligence should measure skills such as judgment, comprehension, and reasoning – the same kinds of skills measured on most intelligence tests today. Binet's early test was taken to the United States by a Stanford University psychologist, Lewis Terman, whose version came to be called the Stanford-Binet test. This test has been revised frequently and continues in use.

The Stanford-Binet test, and others like it, have traditionally yielded at the very least an overall score referred to as an intelligence quotient, or IQ. Some tests, such as the Wechsler Adult Intelligence Scale (Revised) and the Wechsler Intelligence Scale for Children (Revised) yield an overall IQ as well as separate IQs for verbal and performance subtests. An example of a verbal subtest would be vocabulary, whereas an example of a performance subtest would be picture arrangement, the latter requiring an examinee to arrange a set of pictures into a sequence so that they tell a comprehensible story.

IQ was originally computed as the ratio of mental age to chronological (physical) age, multiplied by 100. Thus, if a child of 10 had a mental age of 12 (that is, performed on the test at the level of an average 12-year-old), the child was assigned an IQ of (12/10) × 100, or 120. If the 10-year-old had a mental age of eight, the child's IQ would be (8/10) × 100, or 80. A score of 100, where the mental age equals the chronological age, is average.
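
Restated as a one-line computation in Python, reproducing the worked figures above:

    # Ratio IQ: mental age divided by chronological age, multiplied by 100.
    def ratio_iq(mental_age, chronological_age):
        return 100.0 * mental_age / chronological_age

    print(ratio_iq(12, 10))   # 120.0
    print(ratio_iq(8, 10))    # 80.0
    print(ratio_iq(10, 10))   # 100.0, the average case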

As discussed above, the concept of mental age has fallen into disrepute, and few tests continue to involve the computation of mental ages. Many tests still yield an IQ, but they are most often computed on the basis of statistical distributions. The scores are assigned on the basis of what percentage of people of a given group would be expected to have a certain IQ. (See psychological testing.)

The distribution of IQ scores

Intelligence test scores follow an approximately normal distribution, meaning that most people score near the middle of the distribution of scores and that scores drop off fairly rapidly in frequency as one moves in either direction from the centre. For example, on the IQ scale, about two out of three scores fall between IQs of 85 and 115, and about 19 out of 20 scores fall between 70 and 130. Put another way, only one out of 20 scores differs from the average IQ (100) by more than 30 points.
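
These proportions follow directly from the normal curve. The small Python check below assumes the common deviation-IQ convention of a mean of 100 and a standard deviation of 15; the standard deviation itself is an assumption here, since the paragraph states only the resulting proportions.

    from math import erf, sqrt

    # Proportion of a normal distribution lying between two IQ values.
    def proportion_between(lo, hi, mean=100.0, sd=15.0):
        cdf = lambda x: 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))
        return cdf(hi) - cdf(lo)

    print(round(proportion_between(85, 115), 3))   # about 0.683, roughly two in three
    print(round(proportion_between(70, 130), 3))   # about 0.954, roughly 19 in 20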

It has been common to associate certain levels of IQ with labels. For example, at the upper end, the label “gifted” is sometimes assigned to people with IQs over a certain point, such as 130. And at the lower end, mental retardation has been classified into different degrees depending upon IQ, so that, for example, IQs of 70–84 have been classified as borderline retarded, IQs of 55–69 as mildly retarded, IQs of 40–54 as moderately retarded, IQs of 25–39 as severely retarded, and IQs below 25 as profoundly retarded. Labeling schemes like these, however, have pitfalls and are in some ways dangerous.

First, the labels assume that conventional intelligence tests provide sufficient information to classify someone as either gifted, on the one hand, or mentally retarded, on the other. But most authorities would agree that this assumption is almost certainly false. Conventional intelligence tests are useful in providing information about some people some of the time, but the information they provide is about a fairly narrow range of abilities. To label someone as mentally retarded solely on the basis of a single test score is to risk doing a potentially great disservice and injustice to that person. Most psychologists and other authorities recognize that social as well as strictly intellectual skills are important in classifying a person as retarded. If a person adapts well to the environment, then it seems inappropriate to refer to that person as mentally retarded, a term with inescapably pejorative connotations.

Second, giftedness is generally recognized as more than just a degree of intelligence, even broadly defined. Most psychologists who have studied gifted persons agree that a variety of aspects make up giftedness. Howard E. Gruber, the Swiss psychologist, and the American psychologist Mihaly Csikszentmihalyi are among a number of researchers who are skeptical that the giftedness observed in children has much to do with the giftedness observed in adults. Gruber believes that giftedness unfolds over the course of a lifetime and involves achievement at least as much as intelligence. Gifted people, he contends, have life plans that they seek to realize, and these plans develop over the course of many years. To measure giftedness merely in terms of a single test score would be, for Gruber, a trivialization of the concept.

Third, a given test score can mean different things to different people. An IQ score for a person who has grown up in a ghetto home and gone to an inadequate school does not have the same meaning as the same IQ score for someone who has grown up in an upper-middle-class suburban environment and gone to a well-endowed school. An IQ score also does not mean the same thing for a person whose first language is not English but who takes a test in English, as it does for a native English-speaker. Another factor is that some people are “test-anxious” and may do poorly on almost any standardized test. Based on these and similar drawbacks, it has come to be believed generally that scores have to be interpreted carefully on an individual basis.

Psychologists now believe that IQ represents only a part of intelligence, and intelligence is only one factor in both retardation and giftedness. Earlier rigid concepts in the field of intelligence measurement, which led to labeling, have had undesirable effects. The growth of a more recent concept, the malleability of intelligence, has also served to discredit labeling.

The malleability of intelligence

Intelligence has historically been conceptualized as a more or less fixed trait. This view perceives intelligence as something people are born with, and the function of development is to allow this genetic endowment to express itself. A number of investigators have taken the approach that intelligence is highly heritable, transmitted through the genes. Other investigators believe that intelligence is minimally heritable, if at all. Most authorities take an intermediate position.

Various methods are used to assess the heritability of intelligence. Notable among these is the study of identical twins reared apart. For a variety of reasons, identical twins are occasionally separated at or near birth. If the twins are raised apart, and if it is assumed that when twins are separated they are randomly distributed across environments (often a dubious assumption), then the twins would have in common all of their genes but none of their environment, except for chance environmental overlap. As a result, the correlation between their performance on intelligence tests can provide an estimate of the proportion of variation in test scores due to heredity. Another method of computing the hereditary effect on intelligence involves comparing the relationship between intelligence test scores of identical twins and those of fraternal twins.
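
Two textbook estimators implicit in this paragraph can be written compactly in Python (a simplification of real behaviour-genetic modeling; the correlations in the example are invented): the test-score correlation of identical twins reared apart is itself taken as a heritability estimate, and Falconer's formula doubles the gap between identical- and fraternal-twin correlations when both kinds of twins are reared together.

    # Pearson correlation of paired twin scores; for identical twins reared apart
    # this correlation is itself used as an estimate of heritability.
    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / (sxx * syy) ** 0.5

    # Falconer's formula: h2 = 2 * (r_identical - r_fraternal), both reared together.
    def falconer_h2(r_mz, r_dz):
        return 2.0 * (r_mz - r_dz)

    print(falconer_h2(0.86, 0.60))   # 0.52 with these invented correlations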

Considering the large number of studies that have investigated the heritability of intelligence, it is surprising that so much disagreement exists among researchers. It has been estimated that roughly half the variation in intelligence test scores is caused by hereditary influences. But it is significant that estimates of heritability can differ among ethnic and racial groups, as well as across time within a single group. Moreover, the estimates are computed, for the most part, on the basis of intelligence test scores, so that the estimates are only for that part of intelligence measured by the tests.

Whatever the heritability factor of IQ, a separate issue is whether intelligence can be increased. Work by a New Zealand researcher, James Flynn, has shown that, in the late 20th century, scores on intelligence tests have been rising rather steadily throughout the world. Although the reason for the increase has not been satisfactorily explained, there is little doubt that this is a developing phenomenon requiring careful investigation.

Despite the general increase in scores, average IQs continue to vary both across countries and across different socioeconomic groups. For example, many researchers have found a positive correlation between socioeconomic status and IQ, although they disagree over the reason for the relationship. Most investigators agree that differences in educational opportunities play an important role, and some investigators believe that there is a hereditary basis for the difference as well. But there is simply no broad consensus on the issue of why the differences exist, and, again, it should be noted that the differences are based on IQ, not broadly defined intelligence.

It is important to understand that no matter how heritable intelligence is, some aspects of it are still malleable. Heritability of a trait is a separate issue from its malleability. A person's height, for example, is 90 percent heritable; the best predictor of height is the height of a person's parents. Yet, because of better nutrition and health care, average heights in the United States have climbed during the 20th century. Thus, with intervention, even a highly heritable trait can be modified. There is a growing body of evidence that aspects of intelligence, too, can be modified. Intelligence, in the view of many authorities, is not a fixed trait, with its level a foregone conclusion the day a person is born. A program of training in intellectual skills can increase some aspects of a person's level of intelligence. No training program – no environmental condition of any sort – can make a genius of a person with low measured intelligence. But some gains are possible, and programs have been developed for increasing intellectual skills. A main trend for psychologists in the intelligence field has been to combine testing and training functions in order to enable people to optimize their intelligence.
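
The height analogy can be made concrete with a toy simulation; all of the numbers below are invented and stand in for no real data. Within each generation the trait is largely predicted by the parental value, yet an environmental improvement shared by everyone still raises every individual, and hence the average.

    # Toy simulation with invented parameters: a trait can be strongly inherited
    # within a population and still shift upward under an environmental change.
    import random

    random.seed(0)
    POP_MEAN = 170.0  # arbitrary baseline mean, e.g. height in centimetres

    def offspring(parent: float, boost: float) -> float:
        """Child's deviation from the old mean mostly tracks the parent's deviation
        (strong inheritance); a shared environmental improvement shifts everyone."""
        return POP_MEAN + 0.9 * (parent - POP_MEAN) + random.gauss(0, 2) + boost

    parents = [random.gauss(POP_MEAN, 7) for _ in range(10_000)]
    children = [offspring(p, boost=3.0) for p in parents]  # e.g. better nutrition

    print(sum(parents) / len(parents))    # close to 170
    print(sum(children) / len(children))  # close to 173, despite strong inheritance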

Reason

Reason, in philosophy, is the faculty or process of drawing logical inferences. The term “reason” is also used in several other, narrower senses. Reason stands in opposition to sensation, perception, feeling, and desire, as the faculty (whose existence is denied by empiricists) by which fundamental truths are intuitively apprehended; these fundamental truths are the causes or “reasons” of all derivative facts. According to the German philosopher Immanuel Kant, reason is the power of synthesizing into unity, by means of comprehensive principles, the concepts that are provided by the intellect. That reason which gives a priori principles Kant calls “pure reason,” as distinguished from “practical reason,” which is specially concerned with the performance of actions. In formal logic the drawing of inferences (frequently called “ratiocination,” from Latin ratiocinari, “to use the reasoning faculty”) has been classified since Aristotle as deductive (from generals to particulars) and inductive (from particulars to generals).
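
The Aristotelian contrast can be shown with a worked example (the Socrates syllogism used here is a standard textbook illustration, not drawn from the source): deduction moves from a general rule and a particular premise to a particular conclusion, while induction runs the other way, from observed particulars to a general conjecture.

    % A standard textbook example of deduction, "from generals to particulars",
    % written as an inference rule (example chosen for illustration, not from the source).
    \[
      \frac{\forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr)
            \qquad \mathrm{Man}(\mathrm{Socrates})}
           {\mathrm{Mortal}(\mathrm{Socrates})}
    \]

Induction, by contrast, would pass from many observed mortal men to the general premise itself, a step that is probable rather than logically necessary.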

In theology, reason, as distinguished from faith, is the human intelligence exercised upon religious truth whether by way of discovery or by way of explanation. The limits within which the reason may be used have been laid down differently in different churches and periods of thought: on the whole, modern Christianity, especially in the Protestant churches, tends to allow to reason a wide field, reserving, however, as the sphere of faith the ultimate (supernatural) truths of theology.

Human intelligence // Encyclopædia Britannica. – .
Reason // Encyclopædia Britannica. – .

Links to D. Andreev’s texts

Links to A. Andreeva’s texts

Links to accompanying materials to D. Andreev's texts

Local links

External links

Intellect
Reason
Category: Intelligence

Bibliography on intellect

General works

Introductions that provide a frame of reference and the terminology necessary for understanding the study of intelligence are found in such comprehensive sources as

Harré, Rom; Lamb, Roger (eds.). The Encyclopedic Dictionary of Psychology. – 1983.

Wolman, Benjamin B. (ed.). Handbook of Intelligence: Theories, Measurements, and Applications. – 1985.

Gregory, Richard L. (ed.). The Oxford Companion to the Mind. – 1987.

Corsini, Raymond J. (ed.). Encyclopedia of Psychology. 4 vol. – 1984.

Applications and varieties of intelligence are discussed in

Sternberg, Robert J.; Wagner, Richard K. (eds.). Practical Intelligence: Nature and Origins of Competence in the Everyday World. – 1986.

A comprehensive but fairly elementary introduction to the field is

Kail, Robert; Pellegrino, James W. Human Intelligence: Perspectives and Prospects. – 1985.

For current research in the field, articles in Psychology Today (monthly) provide coverage on a general level.

Theories of intelligence

For early theories, see

Binet, Alfred; Simon, Théodore. The Development of Intelligence in Children: The Binet-Simon Scale, trans. from French. – 1916, reprinted 1983.

Reeves, Joan W. Thinking About Thinking: Studies in the Background of Some Psychological Approaches. – 1965. # A summary of Binet's work.

Spearman, Charles E. The Nature of “Intelligence” and the Principles of Cognition. – 1923, reprinted 1973.

Spearman, Charles E. The Abilities of Man: Their Nature and Measurement. – 1927, reissued 1970.

Later theories are presented in

Gardner, Howard. Frames of Mind: The Theory of Multiple Intelligences. – 1983.

Cattell, Raymond B. Intelligence: Its Structure, Growth, and Action. – 1987.

Sternberg, Robert J. Beyond IQ: A Triarchic Theory of Human Intelligence. – 1985.

Sternberg, Robert J. The Triarchic Mind: A New Theory of Human Intelligence. – 1988.

Cole, Michael; Means, Barbara. Comparative Studies of How People Think. – 1981.

Sperry, Roger. Science and Moral Priority: Merging Mind, Brain, and Human Values. – 1983.

Comprehensive reviews of the field are presented in

Sternberg, Robert J. (ed.). Handbook of Human Intelligence. – 1982.

Sternberg, Robert J. Human Abilities: An Information-Processing Approach. – 1985.

Sternberg, Robert J.; Detterman, Douglas K. (eds.). What Is Intelligence?: Contemporary Viewpoints on Its Nature and Definition. – 1986.

Development of intelligence

A definitive summary of Piaget's earlier work is presented in

Flavell, John H. The Developmental Psychology of Jean Piaget. – 1963.

Piaget's works on the mechanisms of intellectual development and fundamental cognitive processes are available in English translations:

Piaget, Jean. The Psychology of Intelligence, trans. by Malcolm Piercy and D.E. Berlyne. – 1950, reprinted 1981; originally published in French, 1947.

The Essential Piaget / Ed. by Howard E. Gruber and J. Jacques Vonèche. – 1977, reprinted 1982.

For other influential views, see

Vygotsky, L.S. Mind in Society: The Development of Higher Psychological Processes / Trans. from Russian, ed. by Michael Cole et al. – 1978. # The cognitive-contextual theory of intellectual development.

Feuerstein, Reuven. The Dynamic Assessment of Retarded Performers: The Learning Potential Assessment Device, Theory, Instruments, and Techniques. – 1979, reprinted 1985. # An analysis of the role of learning experiences in intellectual development.

Measuring intelligence

Early traditional approaches to evaluating intelligence are presented in

Galton, Francis. Hereditary Genius: An Inquiry into Its Laws and Consequences. – 1869, reissued 1978.

Galton, Francis. Inquiries into Human Faculty and Its Development. – 1883, reissued 1973.

For American investigations at the beginning of the 20th century and their results, see

Thorndike, Edward L. et al. The Measurement of Intelligence. – 1927, reprinted 1973.

Developments and applications of Binet's tradition of mental testing are described in

Terman, Lewis M. The Measurement of Intelligence: An Explanation of and a Complete Guide for the Use of the Stanford Revision and Extension of the Binet-Simon Intelligence Scale. – 1916, reprinted 1975.

For later research in the field, see

Vernon, Philip E. The Measurement of Abilities / 2nd ed. – 1972.

Wechsler, David. Wechsler's Measurement and Appraisal of Adult Intelligence / 5th enl. ed., rev. by Joseph D. Matarazzo. – 1972.

Anastasi, Anne. Psychological Testing / 6th ed. – 1988.

Malleability of intelligence

Arguments for and against the heritability of intelligence are presented in

Jensen, Arthur R. Genetics and Education. – 1972.

Loehlin, John C.; Nichols, Robert C. Heredity, Environment, & Personality: A Study of 850 Sets of Twins. – 1976.

Vernon, Philip E. Intelligence, Heredity, and Environment. – 1979.

Eysenck, H.J.; Kamin, Leon J. Intelligence: The Battle for the Mind. – 1981.


Science Consciousness

This web-page was created by M.N. Belgorodsky on March 15, 2013
and last updated on September 23, 2013.