Towards a Definition of Intelligence

Since the development of intelligence tests, psychologists have tried to define intelligence. So far no satisfactory definition has been given. In fact, the problem is not so much how to define intelligence as how to explain the individual differences obtained with so-called intelligence tests. In actual tests, knowledge is certainly a factor which has to be accounted for. However, the knowledge factor should not occur in intelligence tests. If knowledge were the explanatory factor, then the differences could easily be nullified by teaching the subject the knowledge required for the task. If the knowledge factor is completely cancelled out, then individual differences can only arise from the speed and accuracy of work. If errors are allowed, then there is a certain trade-off between how fast the task is performed and how many mistakes are made in performing it. This trade-off is often referred to as the speed-accuracy trade-off. A possible way to circumvent the problem of the speed-accuracy trade-off is to allow the subject no errors. Another factor which has to be cancelled out is learning, or test habituation; therefore the task should be fully overlearned before the test is actually taken. As a final test measure one may use the time spent on the task or the amount of work done in a given amount of time. However, in order to understand the nature of this measure it is not sufficient to have at one's disposal a single test score.

The progress of science depends, among other things, on controlled observation and experimentation on the one hand and the development of models of the obtained data on the other. However, in order to check whether a model holds, one needs structural information (obtained from the observations and experiments). By structural information is meant the availability of data as patterns or structures (not as single data points). Experimentation and/or controlled observation (a psychological test) is needed in order to eliminate the possible influence of unintended factors. The more such factors play a role in the emergence of the final data, the more complex the genesis of the data is and the more complex the models have to be in order to explain the data. Structural information is needed in order to make predictions possible, which is necessary to check the model empirically. Predictions are about properties of data structures, not about single data points.

For example, if one has at one's disposal only the sixth digit (4) of the number sequence below

1 2 3 1 3 4 2 3 3 1 4 5 1 1 3 2 3 4 3 1 2 3 . . .

one is never in a position to find the underlying rule. One needs at least a part of the series to invent and check the underlying rule. The same holds in the empirical sciences. For example, in the case of the development of Bohr's original atomic theory the data consisted of spectral lines, that is, positional patterns of spectral lines. Experimentation was needed to study the spectral lines of elements, and not of compounds of elements, such as molecules and mixtures of molecules. It is impossible to make an explanatory model for the spectral composition of a single element. The spectral compositions of many elements are needed in order to develop and test a model that explains the various spectral compositions. A theory (or model) is always about data structures and not about a single data point.
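The point can be made concrete in a small sketch. The two candidate rules below are purely hypothetical assumptions for illustration (the text does not reveal the sequence's actual rule): both reproduce the single sixth digit, yet neither survives a check against the partial series, showing why a lone data point cannot identify an underlying rule.

```python
# The partial series from the text; its true generating rule is not given here.
series = [1, 2, 3, 1, 3, 4, 2, 3, 3, 1, 4, 5, 1, 1, 3, 2, 3, 4, 3, 1, 2, 3]

# Two hypothetical candidate rules (illustrative assumptions, not the real rule).
# Each maps a 0-based position i to a predicted digit.
rules = {
    "always 4": lambda i: 4,
    "(i mod 3) + 2": lambda i: (i % 3) + 2,
}

# Both candidates reproduce the single sixth digit (index 5, value 4) ...
for name, rule in rules.items():
    print(name, "fits the sixth digit:", rule(5) == series[5])

# ... but neither survives a check against the whole partial series.
for name, rule in rules.items():
    fits = all(rule(i) == digit for i, digit in enumerate(series))
    print(name, "fits the full series:", fits)
```

A single observation is consistent with indefinitely many rules; only a data structure (here, a stretch of the series) can eliminate candidates.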

In a similar way, in the case of pure speed tests, not a single reaction time is recorded, such as the total time needed for the task, but a series of reaction times on a series of equivalent partial tasks. In the Attention Concentration Test these partial tasks are the consecutive bars presented to the subject. The various reaction times constitute a reaction time curve. What has to be explained are the individual reaction time curves, both within subjects and between subjects. Inhibition theory gives such an explanation. In a particular model of inhibition theory (the Poisson inhibition model), the parameter a1, which is the rate of inhibition increase during periods of attention (work), is directly proportional to the standard deviation of the response times (Smit and van der Ven, 1995, formula 24). Therefore, the standard deviation (or the logarithm of the standard deviation) will be used as an operational definition of intelligence. It has been observed (van der Ven, 1998) that a1 depends on the task (how much attention it requires) and on the subject (ability and effort).
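A minimal sketch of the proposed measure, using invented reaction-time curves for two hypothetical subjects (illustrative numbers, not real Attention Concentration Test data): the standard deviation of each curve, or its logarithm, is computed as the operational score.

```python
import math
import statistics

# Hypothetical reaction-time curves (seconds) over a series of equivalent
# partial tasks -- illustrative values only, not real test data.
rt_curves = {
    "subject_A": [0.61, 0.58, 0.64, 0.60, 0.63, 0.59, 0.62, 0.60],
    "subject_B": [0.55, 0.90, 0.48, 1.10, 0.52, 0.95, 0.50, 1.05],
}

for subject, curve in rt_curves.items():
    mean = statistics.mean(curve)
    sd = statistics.stdev(curve)  # sample standard deviation of the curve
    print(f"{subject}: mean={mean:.3f}  sd={sd:.3f}  log(sd)={math.log(sd):.3f}")
```

Note that the two invented subjects have broadly comparable mean reaction times, yet subject_B's curve has a much larger standard deviation; on the inhibition account, this points to a higher rate of inhibition increase a1.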

The main reason why a1 is associated with intelligence is that a1 is also proportional to the average stationary distraction time E(T0). In the Poisson inhibition model one has:

             a1
   E(T0) = _____      with a1 > 0, c0 > 0, c1 > 0
           c0 c1
That is, when a1 is high, the average stationary distraction time is large. This means that with a high a1 the subject is interrupted by relatively long periods of distraction, which may also be more frequent. During these periods of distraction the subject is not able to continue mental activity. Information can be lost, especially regarding the overall view of the problem, which will seriously impair the subject's mental performance. In this view, lack of mental performance (or intelligence) is essentially a matter of an increased rate of attentional failure.
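The relation can be sketched numerically. Taking the denominator of the formula to be c0 c1, in line with the stated positivity conditions, and using parameter values that are illustrative assumptions rather than fitted estimates:

```python
# E(T0) = a1 / (c0 * c1): average stationary distraction time in the
# Poisson inhibition model. Parameter values below are illustrative
# assumptions, not estimates from any actual data set.
def mean_stationary_distraction_time(a1: float, c0: float, c1: float) -> float:
    if not (a1 > 0 and c0 > 0 and c1 > 0):
        raise ValueError("all parameters must be positive")
    return a1 / (c0 * c1)

low_a1 = mean_stationary_distraction_time(a1=0.5, c0=2.0, c1=1.5)
high_a1 = mean_stationary_distraction_time(a1=2.0, c0=2.0, c1=1.5)
print(low_a1, high_a1)  # raising a1 lengthens the average distraction time
```

Holding c0 and c1 fixed, quadrupling a1 quadruples E(T0): the same parameter that inflates the standard deviation of the reaction times also lengthens the average distraction.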

In 1687 Newton wrote the Philosophiae Naturalis Principia Mathematica. It contains Newton's laws of motion, forming the foundation of classical mechanics, as well as his law of universal gravitation. By deriving Kepler's laws of planetary motion from the three laws of motion, he was the first to show that the motion of bodies on Earth (the apple falling from the tree) and of celestial bodies is governed by the same set of natural laws. In Newton's time Kepler's laws of planetary motion, especially his first law, which states that the orbit of a planet about a star is an ellipse with the star at one focus, were already well known. However, nobody could explain the logic behind Kepler's first law. The merit of Newton's theory was that it provided such an explanation.

Similarly, in 1995 Smit and van der Ven published the Poisson Inhibition Model. It was by then also well known that trial-to-trial variability (the standard deviation of each subject's reaction times) frequently surpassed response speed as a predictor of intelligence (see, among others, Jensen, 1982, and Larson and Alderton, 1990). However, as in the case of Kepler's first law, nobody could tell why the standard deviation should be that important. The answer came from Inhibition Theory.

References

Jensen, A.R. (1982). Reaction time and psychometric "g". In H.J. Eysenck (Ed.), A model for intelligence. New York: Springer-Verlag.

Larson, G.E. & Alderton, D.L. (1990). Reaction Time Variability and Intelligence: A "Worst Performance" Analysis of Individual Differences. Intelligence, 14, 309-325.

Smit, J.C. & van der Ven, A.H.G.S. (1995). Inhibition in Speed and Concentration Tests: the Poisson Inhibition Model. Journal of Mathematical Psychology, 39, 265-273.

Ven, A.H.G.S. van der (1998). Inhibition Theory and the Concept of Perseveration. In: Cornelia E. Dowling, Fred S. Roberts & Peter Theuns (Eds.), Recent Progress in Mathematical Psychology. London: Lawrence Erlbaum Associates.