Theoretical Background of the Attention Concentration Test

Introduction

This text has three main divisions. The first deals with current intelligence tests and why they are poorly suited to the measurement of quantitative mental abilities. The second discusses concentration tests as a possible alternative. The third presents inhibition theory, which will be used as a foundation for mental measurement.

Current Intelligence Tests

Everyone who is able to read this text is also able to solve any problem or puzzle invented by man, provided that he/she has the required knowledge and experience (practice) and is prepared to spend the necessary time to actually solve the problem. The time needed to solve a problem, which does not yet have an apparent solution at the start, may be prolonged by mental arrests caused by non-quantitative factors, such as:
lack of knowledge,
lack of past experience, and
mind set.
An example of a mind set is that, when a person does not immediately see the solution to a problem, he/she might think it cannot be solved. Another example is that, when people have to solve a mathematical problem and think they have no gift for mathematics, they might conclude that they cannot solve the problem and, consequently, will not take the necessary time to discover whether the problem can be solved or not. The idea of mind set (in German: Einstellung) was first proposed by the early Wuerzburg cognitive psychologists; it denotes the frame of mind through which an individual contemplates a problem.

If the response time is used for the measurement of purely quantitative factors, then qualitative factors, such as the three mentioned above, should not play any part in the emergence of the final response time. The Attention Concentration Test was specifically developed to measure quantitative mental ability factors; it therefore consists of only simple mental tasks which are easy to perform and which can be practiced before actual administration. In this way, qualitative factors such as knowledge, experience, and mind set are eliminated.

Before going into the nature of these quantitative factors, it will first be shown how the above-mentioned qualitative factors influence the time needed to solve a problem, if it is solved at all. All mental problems can be divided into two categories: Rule Applying and Rule Finding Problems. Therefore, the subject will be discussed separately for Rule Applying Problems and Rule Finding Problems.

Rule Applying Problems or Deductive Reasoning

Rule Applying problems always require sequentially ordered steps to achieve the solution. Each step is a transition from one state to another. The sequence of states is usually not immediately apparent; therefore, the task involves either trial and error, going from one state to another in an arbitrary way, or a systematic search of all possible states, followed by a search for the minimum number of steps. Some examples are as follows:
The Tower of Hanoi Problem
The standard version of the Tower of Hanoi Problem consists of three pegs and a pyramid of three disks decreasing in size from bottom to top. The disks start out on a specific peg and the goal is to move the disks to another selected peg without putting a larger disk on top of a smaller one, and without moving more than one disk at a time. A demonstration of the Tower of Hanoi problem on the Internet has been made by FWU-SHAN SHIEH, for those who want to try solving the problem on their own.
The Water Jug Problem
In the Water-Jug Problem the task is to use three different-sized jars to measure out a specified amount of milk (or any other liquid). The jars have no measuring marks on the side. Usually, the largest jar is completely full in the initial state of the task. It is only allowed to pour milk from one jar into another until the latter is full or the former is empty.
Slide Puzzles
One may find many examples on the Internet, both picture slide puzzles and number slide puzzles.
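For the Tower of Hanoi mentioned above, the rule can be applied mechanically by recursion. The following Python sketch (peg names are illustrative) returns the list of moves for three disks:

```python
def hanoi(n, source, target, spare):
    """Return the sequence of moves that transfers n disks from source to target."""
    if n == 0:
        return []
    # move the n-1 smaller disks out of the way, move the largest disk,
    # then move the smaller disks on top of it
    return (hanoi(n - 1, source, spare, target)
            + [(n, source, target)]
            + hanoi(n - 1, spare, target, source))

for disk, frm, to in hanoi(3, "A", "C", "B"):
    print(f"move disk {disk} from {frm} to {to}")  # 2**3 - 1 = 7 moves in total
```

Note that each recursive call simply answers the question "what do I do next"; no global view of the solution path is needed.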
The solution always consists of passing through a number of states, starting from the initial state (the problem state) and ending in the end state (the solution state). At each state there is always the possibility to go to a next state; however, this might be a state which was encountered before. In this way one may enter a loop. This should be avoided, and another next state should be looked for. For some states there may be no new next state; in that case one enters a dead end. If a final solution exists, one can always go some states back and follow another path, and in that way still arrive at the required solution state. So, in principle, Rule Applying problems can always be solved. For the actual solution enough memory space should be available, as the subject has to keep track of intermediate results, that is, of the states that were already encountered. If the subject is allowed to make use of paper and pencil, or some other physical device, such as a computer, he/she can always achieve the solution, although it may take a lot of time. However, in some cases the set of possible states and transitions may be too large. Examples are the games of chess, checkers, and go. Even for a computer, it would require too much memory space to cover all possible states and all possible transitions, and too much time to run through them in order to find, at each state, the shortest track to a desired end state. So, although these problems are solvable in principle, they are almost impossible to solve in practice, because of time and space limitations.
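The systematic search just described, going from state to state while avoiding states encountered before, can be sketched for the Water-Jug Problem. The sketch below uses a breadth-first search in Python; the jug capacities (8, 5, 3), the full 8-unit jar at the start, and the goal of 4 units are illustrative assumptions, not part of the original description:

```python
from collections import deque

def water_jug(capacities, start, goal_amount):
    """Breadth-first search over jug states; a visited set prevents loops."""
    seen = {start}
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if goal_amount in state:
            return path + [state]
        for i in range(len(state)):
            for j in range(len(state)):
                if i == j:
                    continue
                # pour jug i into jug j until j is full or i is empty
                amount = min(state[i], capacities[j] - state[j])
                if amount == 0:
                    continue
                nxt = list(state)
                nxt[i] -= amount
                nxt[j] += amount
                nxt = tuple(nxt)
                if nxt not in seen:   # a state encountered before would start a loop
                    seen.add(nxt)
                    queue.append((nxt, path + [state]))
    return None  # dead end everywhere: no solution exists

for step in water_jug((8, 5, 3), (8, 0, 0), 4):
    print(step)
```

The `seen` set plays exactly the role of "keeping track of the states that were already encountered", and the `return None` branch corresponds to a problem that has no solution at all.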

In Rule Applying problems, subjects frequently cannot solve the problem because they think that they cannot solve it. For example, when they cannot instantaneously see the solution at the beginning, they erroneously conclude that some specific knowledge is required which they do not possess. This inhibits them from trying to go from a current state (e.g. the initial state) to a next state, although that very experience would teach them whether specific knowledge is required or not. They also do not know that, in order to go from one state to the next, it is not necessary to know the final solution (the correct path) of the problem, or whether a solution exists at all. In summary, they do not know that it is just a question of going from one state to the next. They have to be told that it is sufficient to ask oneself at each state: what do I do next, without worrying about whether that leads to the solution or not. Here a failure in general education becomes manifest. Children should be taught about this type of problem (Rule Applying) and how one can always achieve a solution (if necessary by trial and error) if it exists.

When the subject knows about this general strategy but is not allowed to use pencil and paper, that is, has to solve the problem mentally, then he has to memorize intermediate results. This might be a hindrance to arriving at the final solution, because intermediate results may be forgotten. However, if one is interested in measuring memory space, then one should use specific memory tests, in this case short-term memory tests, and not tests which contain problem-solving items. If subjects are allowed to use pencil and paper, know about the general solving strategy (at each state you can always do something), and are prepared to continue until the solution is found, then they will always arrive at the solution. The time needed, however, will differ from subject to subject. Now one could argue that the time needed to solve the problem could be used as a measure of mental speed. However, the time needed to solve the problem, referred to as the solution time, can be long for some subject just because that particular subject accidentally went down some paths which did not lead to the solution state. Hence the fact that one subject used more time than others does not necessarily say anything about his general speed of information processing. Therefore, if a researcher is interested in measuring mental speed, he should make use of simple, repetitive tasks, in which fluctuations in test-taking time (between subjects as well as within subjects) "... cannot be attributed to the nature of the work, but only to the worker himself." (Spearman, 1927, p. 321). Another source of variation in response time may be the fact that subjects, when in a given state of the solving process, do not immediately see what a possible next new state might be, while that state actually does exist. They may think they have encountered a dead end while in fact they have not. This may create superfluous loops which all take extra time.
A longer solution time may also be caused by a temporary mental breakdown, usually induced by the inclination to work too fast. Especially when the subject is not allowed to make use of paper, pencil, or other tools, and has to work from memory, he might completely lose the overview of the problem and may have to start all over. This, however, is also a matter of getting to know that working faster is counterproductive. Periods of loss of attention are quite normal and no reason to become nervous.

Sometimes the number of steps in a problem may be small, for example one, where the step consists of a transition from the initial state to the end state. However, the number of possible ways in which the step can be made may be large. Typical examples are the so-called Matchstick Problems. These problems are composed of collections of adjacent squares, with each side representing a removable matchstick. The task is to remove a given number of matchsticks, leaving a certain number of complete squares and no excess lines. The initial state consists of the intact collection of adjacent squares and the end state consists of the collection with a certain number of matchsticks removed. The subject has to check how many squares are left and whether this number equals the number asked for. This has to be done repeatedly for all possible collections with the required number of matchsticks removed. The latter is a combinatorial problem. A subject who is not familiar with combinatorics may easily overlook some of the possibilities. However, this is a matter of experience and patience, not of ability whatsoever. Sometimes the solution requires the unusual resort to squares of larger-than-normal size, where a square of normal size is defined as a square consisting of four matchsticks. Functional fixedness might play a role here. For example, the subject could think that only squares of normal size are allowed. The most important reason for the emergence of functional fixedness might be that, in presenting the problem, it is not explicitly mentioned that squares of larger-than-normal size should be taken into account. In that case one should not be surprised when subjects disregard these possibilities. If one knows that solutions of larger-than-normal size should also be included, then even a triangular matchstick problem of the same kind will not be too difficult to solve.
How functional fixedness caused by misinterpreting the purpose of the puzzle can play a devastating role in solving a problem is demonstrated by a well-known riddle, in which one has to build four equilateral triangles, each made up of three matchsticks, using only six matchsticks. In trying to solve this riddle, subjects usually implicitly assume that the solution should be given in two-dimensional space. Naturally, they then conclude that there is no solution, whereas the six matchsticks in fact form a tetrahedron in three-dimensional space.

When the problem consists of only one step, i.e. the transition from the initial state to the end state, but the number of possible choices for that step is very large, then it may be worthwhile to use some model to describe the problem. The model may be easy to handle and may serve as a tool to find the final answer. A nice example is the Gold Coins Puzzle. This puzzle consists of a six-pointed star with 52 coins on the outer points of the star. The problem is to place the 26 remaining coins at the six intersections of the star in such a way that the coins on each of the six lines that make up the star total 26. There are many ways to place the 26 coins arbitrarily at the six intersections, and it may take a lot of time before the requested configuration is obtained. In order to circumvent this problem, one could work with a model. In this particular case one could define the problem in terms of a set of simple linear equations of the form x + y = c, where c is a known number. The numbers of coins to be placed at the six intersections are indicated by letters below.


                           3

                    9   a     b   7

                      f         c

                   12   e     d  11

                          10

Now one can redefine the problem as follows:

     a + b = 10
     b + c = 12
     c + d =  9
     d + e =  3
     e + f =  7
     f + a = 11

One must find values for a, b, c, d, e, and f such that the given equalities hold. The use of the model therefore consists of two steps: first, one must hit upon the idea of using linear equations; then one must think of some method to solve them. This may take a lot of time. It may even be out of one's control if and when a solution is born in one's mind. Sometimes it may even be better to let the problem rest for a while and wait until the mind, at its own moment of choice, comes up with an answer. One thing is clear: the time needed to solve the problem may be very unpredictable. In order to solve the equations, one may note that there is only a limited number of solutions for d + e = 3:

     d = 0 and  e = 3
     d = 1 and  e = 2
     d = 2 and  e = 1
     d = 3 and  e = 0

If one takes d = 0 and e = 3, then the remaining equations can be solved:

     c + 0 =  9 => c = 9
     b + 9 = 12 => b = 3
     a + 3 = 10 => a = 7
     f + 7 = 11 => f = 4

So, for a = 7, b = 3, c = 9, d = 0, e = 3, and f = 4, there exists a solution to the problem. It is now also clear that there is more than one solution, namely four: one for d = 0, one for d = 1, one for d = 2, and one for d = 3. However, an appropriate model to simplify the problem will not always be found. It may be that the required knowledge is present, but that the subject is not able to make the necessary association, because he/she has insufficient experience with such applications. It may also be that the required knowledge is not available. In both cases a lack of knowledge and/or experience may hinder the final solution. However, these factors have nothing to do with intellectual capacity.
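Once the model is in place, the enumeration of all four solutions can be carried out mechanically, as the following Python sketch shows:

```python
# d + e = 3 limits d to the values 0, 1, 2, 3; each remaining unknown
# then follows from one equation, and the last equation acts as a check
solutions = []
for d in range(4):
    e = 3 - d       # d + e = 3
    c = 9 - d       # c + d = 9
    b = 12 - c      # b + c = 12
    a = 10 - b      # a + b = 10
    f = 11 - a      # f + a = 11
    if e + f == 7:  # the remaining equation must also hold
        solutions.append((a, b, c, d, e, f))
for s in solutions:
    print(s)        # four solutions, one for each value of d
```

The check e + f = 7 happens to succeed for every value of d, which is why the puzzle has exactly four solutions.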

Sometimes the seemingly apparent difficulty of Rule Applying problems is not caused by the number of steps, which again may be very small, but by the fact that applying the rule requires special abilities, such as spatial visualization. Well-known examples of visualization problems are:

Three Dimensional Space
The problem may, for example, consist of a two-dimensional stimulus figure and a number of perspective drawings of three-dimensional objects (the response options). The stimulus figure is pictured as a flat piece of metal (or any other comparable material) which is to be mentally bent, rolled, or both. Lines indicate where the stimulus figure is to be bent. The subject must sort out which one of the drawings of three-dimensional objects can be made from the stimulus figure.
Cut Figures
A two-dimensional geometric figure is presented as a stimulus figure. The figure is supposed to have been cut into three (or any other number of) parts. These three parts, together with three 'wrong' parts, are shown as response figures. One of the three parts is marked with a cross (or any other sign). The other two parts must be found by the subject.
Rotation
A simple two-dimensional asymmetric geometric figure is presented as a stimulus figure. Five (or any other number of) response figures are rotated and/or flipped-over versions of the stimulus figure. Two of them (or any other number, less than the number of response figures) have only been rotated. These two figures must be found by the subject.
The question arises why these figural problems may sometimes seem very difficult. They all require the manipulation of objects in an imaginative or virtual space. The problems become extremely easy when they are presented in such a way that manipulation in physical space is possible. For example, in the case of Three-Dimensional Space problems, one could offer the unfolded figures as physical objects, using paper or cardboard, and allow the subject to bend and/or roll the figures physically along the dotted lines. In the case of Cut Figures, one could copy the figure parts onto separate physical objects and allow the subject to lay the parts against each other physically. In the case of Rotation problems, one could offer the response figures as real physical objects and allow the subject to manipulate them, i.e. rotate them and/or flip them over in real space. One may expect that, when these problems are presented in this way, even nursery school children are able to solve them. When manipulation in real space is not allowed, the solution can only be found by manipulation in virtual space. One might think that visualization represents a separate mental ability. However, this is not necessarily true, because when subjects are given sufficient practice time and all the time they need to solve each problem, they would probably solve all problems. This implies that there are no differences between people with respect to the question of whether they can solve figural problems or not. No differences implies no specific ability. If one uses the solution time as a measure of performance, then it need not be visualization that explains why people have different solution times; it could also be mere speed of information processing.
The reason why most people think that visualization represents a separate mental ability is probably connected to the fact that in primary school children get ample practice with subjects such as reading, writing, verbal comprehension, arithmetic, arithmetic story problems, etc. Except for partial disabilities, such as word-blindness, differences in text comprehension and arithmetic reasoning (as far as time is not involved) are probably merely related to factors such as knowledge or experience. People cannot imagine that the same would be true for a subject such as visualization or space perception, to which at the moment not much attention is given at primary schools. If it were, they would regard visualization as no more difficult than reading, writing, simple computation, or driving an automobile.

In the case of problems which specifically require visualization, it is quite clear what should be done, but not quite clear how it should be done. Especially when encountering the problem for the first time, the subject might feel quite uncomfortable, not knowing how to apply the rule or what kind of strategy to use. As is always the case with Rule Applying problems, it must be known that the only way to find the correct answer is simply to start and to persist. The only way to get used to the problem is to look for possible strategies, which, after some practice, can be performed in an automatic way. Naturally, it will take some time before a certain stage of practice is attained. However, when the task is overlearned, applying the rule does not seem very difficult anymore. So the question whether a rule is difficult to apply or not is only a matter of being familiar with the rule or not. Driving an automobile is a nice example. Subjects experience car driving as very difficult when trying to drive a car for the first time in their life. However, after having taken some lessons and with ample practice, they find it difficult to imagine that driving a car was once very hard to do. The same holds true for the acquisition of language in early childhood, or learning to read and write at primary school age. Those who have prior experience with visualization problems may well find these problems easier; when familiarity must still be acquired during the test, the problems will be experienced as more difficult.

The difficulties which children at elementary school frequently encounter with arithmetic word (or story) problems are a striking example of how the subject must have had some past experience with the particular rule in question. In the case of arithmetic word problems, the solution is sometimes not difficult to find once the problem is translated into the form of an algebraic equation. Take, for example, the following problem:

First one takes the square of a certain number; next one subtracts the number eight from the result. The final result is also equal to eight. What is the original number?
If one knows about equations, in which letters may represent numbers, then one knows that this problem can be written in the following form:

     n**2 - 8 = 8

which implies that n**2 = 16, and n = 4 or n = -4. Therefore, the natural solution is 4. Knowing about equations and knowing that letters may represent numbers belongs to algebra. The reason why children at elementary school have difficulties with this kind of word problem is simply that they have not studied algebra yet, as this subject is taught in secondary school. But even people who are acquainted with elementary algebra sometimes have difficulties in solving these problems, simply because they have not learned to describe them in terms of algebraic equations. They may never even have realized that it can be sensible to describe problems by using models. It is very clear that past experience plays a paramount role in finding the correct rule.

Rule Finding Problems or Inductive Reasoning

In Rule Finding problems all available information is organized according to some underlying rule, and the problem is to derive the rule from that information. This is always done by assuming or hypothesizing a certain rule and subsequently applying the rule in question in order to check whether the available information (data) is consistent with it. If the data are inconsistent with the rule, a new rule must be hypothesized, which again must be checked against the data. The process of Rule Finding thus always consists of two stages: rule hypothesizing and rule checking, the latter being equivalent to Rule Applying. The stage of rule hypothesizing may also be called the stage of induction, because a rule is introduced (the Latin verb 'inducere' means to introduce something). Sometimes a little reasoning is required in order to discover how the rule works in some particular case. This process of reasoning may be called deduction. Typical examples of Rule Finding problems are the items from the IQ test edited by Neeteson Internet Productions.

Note that the process of theory building in the sciences completely corresponds to the process of Rule Finding as described above. In the sciences the available data are obtained under systematic observation or in experimentation. In the sciences, however, not only latent rules are required in order to understand seeming irregularity, but also latent entities, such as latent objects. In many cases the particular theory at hand is named after the latent objects which are assumed to exist in order to understand the data. An example is atomic theory. The latent reality is described in terms of atoms and sub-atomic elements, such as protons, neutrons, and electrons. The data, however, consist of the so-called black lines in the spectra of all sorts of materials. It is not accidental that one of the standard works on this topic is called "Atomic Structure and Spectral Lines" (see Sommerfeld, 1923).

Rule Finding problems may also be impossible to solve when the underlying rule is not known to the subject or when he/she has very little experience with it. Take, for example, the following number sequence:


       1 2 3 1 3 4 2 3 3 1 4 5 1 1 3 2 3 4 3 1 2 3 . . .

People who try to solve this problem by using the usual mathematical number sequences will not find the solution, and for them the problem will seem almost impossible to solve. One might think that the following partitioning into sets of three and sets of two might work:

       1 2 3     4 2 3     4 5 1     2 3 4     2 3 . . .
             1 3       3 1       1 3       3 1     . . .

The sequence of the sets of three suggests that there should be an order of the numbers within the sets of three and an order of the sets of three within the sequence. However, this approach will not lead to the final solution. Now if one disregards for a moment the pairs 1 3 and 3 1 and applies a partitioning into sets of four, one obtains the following result:

       1 2 3 4    2 3 4 5    1 2 3 4   2 3 . . .

The final solution appears to be very simple. However, even for mathematicians this is a difficult problem, simply because they are not used to this kind of problem (lack of knowledge and experience). Note that 'difficult' does not mean 'complex'. The underlying rule is very simple. People call a problem 'difficult' when they do not yet see the solution and think that they might not be able to solve it. When somebody says that a problem is difficult, it merely means that the person simply does not (yet) know the solution. It does not necessarily mean that the solution is complex.
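The rule can be stated as: an underlying run of ascending groups of four (1 2 3 4, 2 3 4 5, then repeating), into which the pairs 1 3 and 3 1 are alternately inserted after every third element of the run. A Python sketch that reproduces the given prefix (the repetition of the two four-groups is an assumption, since only the prefix is shown):

```python
from itertools import cycle, islice

def sequence():
    """Ascending groups of four with the pairs 1 3 and 3 1 alternately inserted."""
    def fours():
        for start in cycle([1, 2]):   # 1 2 3 4, 2 3 4 5, repeating
            yield from range(start, start + 4)
    pairs = cycle([(1, 3), (3, 1)])
    src = fours()
    while True:
        for _ in range(3):            # after every third element of the run...
            yield next(src)
        yield from next(pairs)        # ...a pair is inserted

print(list(islice(sequence(), 22)))
# → [1, 2, 3, 1, 3, 4, 2, 3, 3, 1, 4, 5, 1, 1, 3, 2, 3, 4, 3, 1, 2, 3]
```

As the text argues, the generating rule is computationally trivial; the difficulty lies entirely in hypothesizing it.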

In some cases the underlying rule does belong to the expert knowledge of the subject, and it may still be the case that, for some reason, the subject is not able to hit upon the idea of the required rule. For example, the following problem might seem almost impossible to solve for many people.


       A       E F   H I   K . . . . ?
         B C D     G     J . . . . . ?

They suppose that some underlying sequence of numbers should explain the partitioning of the letters, assuming that the letters represent numbers, such as A = 1, B = 2, C = 3, etc. However, the actual rule is childishly simple: the letters above consist of straight lines only, while the letters below also have curved parts. Note that subjects who do not immediately see the solution would still call this problem very 'difficult'.

Speed and Concentration Tests

The measurement of quantitative mental factors has a long tradition in psychology. It started with the introduction of so-called speed tests and concentration tests. The development of speed tests mainly originated from the Anglo-American tradition of intelligence measurement, whereas concentration tests came from the European tradition. The conventional speed tests required subjects to engage in repetitive activities, such as letter cancellation, detecting differences in simple shapes, adding three digits, and so on. In the majority of cases, however, no attempt was made to time individual items. At the same time, exactly the same type of tests were used in Europe. In Europe, however, the durations of individual items or groupings of items were measured and employed in the assessment of subject performance. These tests, which are actually speed tests in the conventional meaning of the word, were referred to as concentration tests. The difference lies not in test content or test instruction (work as quickly and as accurately as possible), but in performance registration. Instead of one gross measure, such as the number of items correct or the total time needed to complete the test, individual item scores were used, such as the time needed to complete each separate item or grouping of items. The characteristics of these two distinct trends in speed measurement can thus be summarized in terms of differences in time registration procedures, not in test content. Since these differences are important for the measurement of attention, from now on only the history of concentration tests will be discussed. Concentration tests were already used at the very beginning of the twentieth century. Binet (1900), for example, the author of the well-known intelligence scale, reports an extensive study on the measurement of concentration. He refers to it as "la force d'attention volontaire".
He made use, among others, of a so-called letter cancellation test originally proposed by Bourdon (1895). This test consisted of crossing out five letters, such as the vowels of the alphabet, in a meaningful text during 10 minutes. For each 1-minute period the number of crossed-out letters and the number of errors were recorded. Binet was well aware of the importance of the fluctuation in speed and errors, suggesting the mean deviation as a measure of performance. However, in reporting the final results he gave only general level scores, such as the mean number of crossings and the mean number of errors during the first and second 5-minute periods.
Note that the mean of a set of numbers is their total divided by their number; for example, the mean of the numbers 3, 5, 12, 5, 5 is equal to:
            3 + 5 + 12 + 5 + 5
            __________________ = 6
                    5

Note further that the mean deviation of a set of numbers is the mean of the absolute values of the deviations from the mean; for example, the mean deviation of the numbers 3, 5, 12, 5, 5 is equal to:
            |3-6| + |5-6| + |12-6| + |5-6| + |5-6|
            ______________________________________ = 2.4
                              5

where |a-b| = a-b, when a is greater than b,
  and |a-b| = b-a, when a is smaller than b.
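The two definitions above can be written out directly; a short Python sketch using the same five numbers:

```python
def mean_deviation(xs):
    """Mean of the absolute deviations from the mean."""
    m = sum(xs) / len(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

numbers = [3, 5, 12, 5, 5]
print(sum(numbers) / len(numbers))  # mean: 6.0
print(mean_deviation(numbers))      # mean deviation: 2.4
```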

Moreover, the test was still subject to learning. However, concentration tests should be overlearned in advance, because the purpose of the test is to measure the ability to concentrate and not learning ability.

Subsequent researchers became increasingly aware that concentration test tasks should be relatively easy. Learning effects should be avoided and the relevant information should be found in the short-term oscillation of the measure of performance. Godefroy (1915) again stressed the importance of the fluctuation in the response times. He also proposed the mean deviation of the response time as an indication of concentration. Spearman, several decades later, even considered oscillation to be a separate universal factor, in addition to what he called the general factor (not further identified) and perseveration (Spearman, 1927, p. 327). A typical manifestation of this factor (oscillation) "... is supplied by the fluctuations which always occur in any person's continuous output of mental work, even when this is so devised as to remain of approximately constant difficulty." (Spearman, 1927, p. 320). According to Spearman "... almost any kind of continuous work can be arranged so as to manifest the same phenomenon. In all cases alike, the output will throughout exhibit fluctuations that cannot be attributed to the nature of the work, but only to the worker himself." (p. 321). More recently, Jensen (1982), discussing his reaction time experiments, noted that trial-to-trial variability (the standard deviation of a subject's reaction times) frequently surpassed response speed as a predictor of intelligence.


Note that the standard deviation of a set of numbers is equal to the square root of the variance of these numbers. The variance of a set of numbers is defined as the mean of the squared deviations from the mean; for example, the variance of the numbers 3, 5, 12, 5, 5 is equal to:
         (3-6)**2 + (5-6)**2 + (12-6)**2 + (5-6)**2 + (5-6)**2
         _____________________________________________________ = 9.6
                                   5

The standard deviation is equal to sqrt(9.6) = 3.098. When the numbers are obtained from a sample and the sample variance is used to estimate the population variance, the sum of the squared deviations is divided by the number of deviations minus one. In the example, one obtains:
         (3-6)**2 + (5-6)**2 + (12-6)**2 + (5-6)**2 + (5-6)**2
         _____________________________________________________ = 12.0
                                   4

The sample standard deviation is equal to sqrt(12.0) = 3.464.
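The computation above can be checked with a short Python sketch (a plain restatement of the two formulas, nothing specific to the test itself):

```python
import math

def variance(xs, sample=False):
    """Mean of the squared deviations from the mean; for a sample
    estimate the sum of squares is divided by n - 1 instead of n."""
    n = len(xs)
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    return ss / (n - 1) if sample else ss / n

xs = [3, 5, 12, 5, 5]
print(variance(xs))                                    # 9.6
print(round(math.sqrt(variance(xs)), 3))               # 3.098
print(variance(xs, sample=True))                       # 12.0
print(round(math.sqrt(variance(xs, sample=True)), 3))  # 3.464
```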
According to Larson and Alderton (1990), numerous studies suggest that Jensen's observation was correct and that measures of variability, such as the mean deviation or standard deviation have a robust statistical relationship to intelligence.

At present the typical concentration test consists of a simple mental task such as addition of one-digit numbers, cancellation of letters, crossing out sets of dots, etc. The task has to be performed for a relatively long period of time varying from 10 to 30 minutes. Performance is measured by a time series that consists of either a series of response times in which each response time is the result of a fixed number of responses, or a series of response counts in which each count is obtained in a fixed amount of time. A well-known example of the former is the Bourdon-Vos test (Vos, 1988), which is a children's version of the Bourdon-Wiersma test (see Huiskam and de Mare, 1947 and Kamphuis, 1962) used in The Netherlands. A well-known example of the latter is the Pauli test (see Arnold, 1964) used in Germany, which is a single digit addition task. The time series consists of the number of additions per minute during a thirty minute period.

Related Tasks

Concentration tests could be described as prolonged work tasks. However, prolonged work tasks also occur in experiments on reminiscence. The study of reminiscence also has a long history, which is briefly described in Eysenck and Frith (1977, chapter 1). "Reminiscence is a technical term, coined by Ballard in 1913, denoting improvement in the performance of a partially learned act that occurs while the subject is resting, i.e., not performing the act in question." (Eysenck and Frith, 1977, p. 3). The reality of the phenomenon was first experimentally demonstrated by Oehrn (1895). In experiments on reminiscence the same task is always administered twice or more. Learning curves are obtained which usually include a pre-rest period of massed practice, a rest period, and a post-rest period. The tasks which are used are highly sensitive to learning. One is mainly interested in long-term trend effects, disregarding the short-term fluctuations of the individual response times. In contrast with reminiscence tasks, concentration tests typically consist of tasks which are already familiar to the subject before administration. Usually some practice trials are given before the actual test is administered in order to eliminate any remaining learning effects. In concentration tasks one is mainly interested in the short-term fluctuations of the response times. In reminiscence tasks the interest is primarily in the long-term trend. Although the tasks used in experiments on reminiscence are prolonged work tasks, they should be clearly distinguished from the tasks used in concentration tests. The latter should be overlearned before actual test administration, whereas the former should not.

Additionally, vigilance tasks should be strictly distinguished from the tasks used in concentration or attention tests. In vigilance tasks the subject is required to keep watch for inconspicuous signals (either visual or auditory) over long periods of time (one hour or more). Systematic scientific investigation of vigilance was initiated by Mackworth (1950), who simulated the task of maintaining radar watch for submarines by using a clock pointer which moved in a series of steps. The subjects watched the pointer and reported the relatively infrequent occasions on which the pointer gave a double jump. "The most important finding is the so-called vigilance decrement: the probability of signal detection tends to decrease over time." (Eysenck, 1982, p. 80). Unlike vigilance tasks, concentration tests consist of stimuli (or items), which are presented over short periods of time (10 to 20 minutes), each requiring a response. Responses occur frequently, instead of infrequently as in vigilance tasks. In vigilance research one is mainly interested in studying the effect of fatigue or boredom. In concentration tests, the task should be completed before fatigue or boredom may play a role.

Inhibition Theory

As was already mentioned above, psychologists started using the mean deviation of the responses (response times or response counts) as an indication of attention. Although measures of variation, such as the mean deviation or the standard deviation, may be intuitively appealing, they still lack any explicit theoretical foundation. One can always ask: what exactly is measured with the mean deviation (or the mean, if one still wants to use a measure indicating the level of performance)? This question can only be answered in the realm of an explanatory theory about the fluctuations of the response times (or response counts). Moreover, it could well be the case that, given the theory, a completely different measure should be preferred over the measure which has intuitive appeal. Such a theory does exist and it will be referred to as the inhibition theory. According to inhibition theory, any mental activity which requires a minimum amount of mental effort is considered as a continuous flow of alternating periods of attention (or work) and distraction (or non-work). In periods of attention the person is actually working at the task, whereas in periods of distraction the person is not working on the task. Distractions are unconscious, involuntary periods of non-work. Distractions should not be confused with periods of non-work in which subjects consciously take time-out. The notion of intermediate periods of distraction has already been suggested by many authors, such as Peak and Boring (1926), Bills (1931, 1935, 1964) and Berger (1982). Scientific theories are always specified in terms of a formal model (usually a mathematical model) in order to be able to explain the observable data. For example, Newton's classical mechanics was a model specification of gravitation theory, developed especially to explain how the planets move in the sky.
However, gravitation theory not only applies to the movement of the planets, but to the movement of any object in the universe, including bodies falling to earth. In the same way inhibition theory has also been specified in terms of more specific models, which were especially developed to explain the response time fluctuations in concentration tests. But it does not only apply to the responses in concentration tests; it is a general theory about the flow of attention during any mental activity. All inhibition models developed thus far require that performance is recorded as a series of response (or reaction) times. Subjects should be instructed to work as quickly and as accurately as possible and the items should be answered in a self-paced, continuous manner, in which the subject cannot afford to take intermediate rest pauses between responses. Naturally, the task should be overlearned in advance.

The reaction time, which the subject needs for a certain response (a bar of colours or dice in the case of the IAT) is considered as the sum of a series of alternating real working times (or attention times) and non-working times (or distraction times), such as is shown in the next figure, where the letter a refers to an individual working time and the letter d refers to an individual distraction time.


---------------------------------------------------------------- (manifest)
_____......____........______......____....__________......____ (latent)
  a1    d1   a2    d2     a3    d3   a4  d4     a5      d5   a6

Note that the individual working and distraction times vary in a random way. The actual response will be given as soon as the sum of the individual working times has reached a threshold (A), which represents the total real working time: the time a subject needs to accomplish the amount of work to produce the response. The inhibition models all assume that the period occupied in producing a response always begins with a real working time and always ends with a real working time. They further assume that the total real working time is constant across responses. In the case of the Attention Concentration Test, this means that, for a given subject, the total real working time is assumed to be the same for each bar. Naturally, the total real working time may vary across subjects. This assumption of a constant total real working time can be experimentally induced by taking care that the amount of work needed for each response (or for each bar in the case of the IAT) is the same. But even if this is the case, one could still argue that there might yet remain some minor variation in the total real working time across responses (or bars). However, it is assumed that this variation is negligible in comparison to the variation in total distraction time. The total distraction time is the sum of the individual distraction times. If all the individual working times are taken together, as well as all the individual distraction times, then one obtains the figure underneath.

_________________________________..............................
  a1   a2   a3   a4     a5     a6   d1     d2     d3   d4   d5

The observed, manifest response time can therefore be considered as the sum of the total real working time: A,

     A = a1 + a2 + a3 + a4 + a5 + a6,

and the total distraction time: D,

     D = d1 + d2 + d3 + d4 + d5.

That is, T = A + D, where T represents the observed response time. Note that differences in T are only caused by differences in D. For a given subject, across responses (or bars), there are only differences in D, the total distraction time. However, across subjects, or across different test administrations, there may also be differences in A, the total real working time. The total distraction time D is determined by the number of distractions and by the durations of the individual distraction times. The number of distractions, however, is determined by the durations of the individual working times. The longer the individual working times are, the smaller the number of distractions. The shorter the individual working times are, the greater the number of distractions. In any case, the actual response time or reaction time can be considered as a series of alternating individual working times and distraction times. The question arises: how do these individual distraction and working times emerge in time?
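As a sketch, the decomposition T = A + D can be written out in a few lines of Python. The interval durations below are made-up illustrative numbers, not data from the test:

```python
# A single response time decomposed into alternating working times (a) and
# distraction times (d), as in the figure above. The durations are made-up
# illustrative numbers (in seconds), not data from the test.
work_times = [0.3, 0.5, 0.2, 0.4, 0.6, 0.3]    # a1 .. a6
distraction_times = [0.1, 0.2, 0.1, 0.3, 0.2]  # d1 .. d5

A = sum(work_times)         # total real working time (constant across responses)
D = sum(distraction_times)  # total distraction time (varies across responses)
T = A + D                   # observed, manifest response time

print(round(A, 1), round(D, 1), round(T, 1))  # 2.3 0.9 3.2
```

For a given subject, A stays fixed from response to response; only the list of distraction times (and hence D and T) changes.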

In probability theory the probabilistic behavior of an individual distraction time (or working time) can be described in terms of its density or distribution function.


In probability theory the distribution function F(t) of t is defined as the probability that the random variable T is less than or equal to t, which is equal to one minus the probability that the random variable T is greater than t:

   F(t) = 1 - P(T>t)

The density function f(t) of t is defined as the derivative of F(t):

   f(t) = dF(t)/dt = F'(t).


However, it is more natural to describe these individual distraction or working times in terms of what is known as the hazard rate or hazard function of t.
The hazard rate l(t) of t is defined as the ratio of the density function of t and one minus the distribution function of t:

            f(t)
   l(t) = ________
          1 - F(t)

It can easily be proved that

             d
   l(t) = - __ ln[1 - F(t)]
            dt

The quantity ln[1-F(t)] is known as the log survivor function, and response times are often summarized by plotting ln[1-F(t)] versus t, in which case the negative of the slope is the hazard function.
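These definitions can be checked numerically. The Python sketch below uses the exponential distribution as an assumed example (it is not one of the inhibition models): its hazard rate is constant, and minus the slope of the log survivor function recovers that rate.

```python
import math

lam = 2.0  # rate of an exponential distribution (an arbitrary example value)

def F(t):
    """Distribution function: P(T <= t) = 1 - exp(-lam * t)."""
    return 1.0 - math.exp(-lam * t)

def f(t):
    """Density function: the derivative of F(t)."""
    return lam * math.exp(-lam * t)

def hazard(t):
    """Hazard rate l(t) = f(t) / (1 - F(t))."""
    return f(t) / (1.0 - F(t))

# For the exponential distribution the hazard rate is constant (= lam).
for t in (0.1, 0.5, 1.0, 2.0):
    assert abs(hazard(t) - lam) < 1e-9

# The log survivor function ln[1 - F(t)] is a straight line with slope -lam,
# so minus its (numerical) slope recovers the hazard rate.
slope = (math.log(1 - F(1.1)) - math.log(1 - F(1.0))) / 0.1
print(round(-slope, 6))  # close to lam = 2.0
```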
The non-mathematical reader, however, may not be familiar with concepts such as the probability density function, the distribution function and the hazard function, where the latter is especially important for the present discussion. One may understand the concept of hazard function or hazard rate as follows. Suppose a subject is in a state of distraction. If the switch from a state of distraction to a state of work has not yet occurred, then one may think of some tendency for it to occur in the next instant of time. This tendency, denoted by l(t), may be time dependent and is therefore written as a function of time. It is exactly what mathematicians mean when they refer to the concept of hazard rate or hazard function. One can think of this tendency as either staying constant, l(t) = c, or increasing, or decreasing as the time increases since the subject entered the current state, which in the example is the state of distraction. According to inhibition theory the subject is alternately in a state of work or attention and in a state of rest or distraction. The transition tendencies (or hazard rates) to switch from one state to the other are assumed to vary according to the level of a latent, unobservable quantity called inhibition. It is further assumed that inhibition, like fatigue, increases during periods of work and decreases during periods of rest (or distraction). Note that inhibition is similar to fatigue, but not the same as fatigue. This idea of inhibition governing the times in which the subject dwells in a state of work or rest was already suggested by Spearman when he related oscillation in performance to what he assumed to be an alternating process of energy consumption (read: inhibition increase) and energy recuperation (read: inhibition decrease) (see Spearman, 1927, p. 327).
From now on subscripts will be indicated by small letters, since the markup language (HTML) which was used for this text, does not allow the use of subscripts.
The transition tendencies l0 (0 for rest) and l1 (1 for work) are assumed to depend on inhibition, where inhibition itself is dependent on time. Therefore the transition tendencies are denoted as l0[y(t)] and l1[y(t)], where y denotes inhibition. Note that l0[y(t)] is the tendency to switch from a state of rest to a state of work, given that the subject is in a state of rest, and l1[y(t)] the tendency to switch from a state of work to a state of rest, given that the subject is in a state of work. The transition tendencies l0[y(t)] and l1[y(t)] are assumed to change with the level of inhibition in such a way that when inhibition is high, distractions will be long relative to the length of work intervals. This causes inhibition to decrease. Note that inhibition decreases during distraction intervals. Likewise, when inhibition is low, distractions will be relatively short and as a result inhibition will rise. Note that during working intervals inhibition increases. This makes it plausible that inhibition will tend to behave like a stationary process, fluctuating around a central region and tending to return to this region whenever it finds itself outside of it. For example, if the initial inhibition happens to be low, one will have short distractions (and hence short reaction times) in the beginning of the test. As a consequence, the inhibition gradually increases and this causes distractions (and hence also reaction times) to become longer. So, one gets an upward trend in the reaction time curve. This holds even when the subject is working at a constant speed during the whole test, from the beginning to the end. The opposite phenomenon, a downward trend in the reaction time curve, is to be expected when the initial inhibition is high (relative to the stationary mean value). Note that a trend (upward or downward) can also be caused by a gradual change in working speed.
This trend, however, should not be confused with the trend due to the underlying inhibition process.

Inhibition Models

Scientific theories can only be validated by empirically checking whether the observed phenomena really correspond to the predicted phenomena, that is, the phenomena which are to be expected according to the theory. This is only possible when the theory is further specified in terms of a more formal model, which is usually a mathematical model. This also holds for inhibition theory. Several inhibition models have been proposed in the past. All of these models are based upon the following two assumptions:
  i. During periods of work (attention, processing) inhibition Y(t) increases in a linear way with a constant slope a1.
  ii. During periods of rest (distraction, non-processing) inhibition Y(t) decreases in a linear way with a constant slope a0.
One of these models is the so-called beta-inhibition model (see Smit and van der Ven, 1995, page 269). The beta-inhibition model is a reaction time model.
In the beta-inhibition model, it is assumed that the inhibition Y(t) oscillates between two boundaries which are 0 and M (M for Maximum), where M is positive. Additionally, the following assumptions are introduced for the respective switch tendencies.
  iii. The switch tendency l1[y(t)] (from work to rest) is described as:
    
                      c1M
       l1[y(t)] = __________          with c1 > 0
                     M - y(t)
    
    
    
  iv. The switch tendency l0[y(t)] (from rest to work) is described as:
    
                     c0
       l0[y(t)] = ______              with c0 > 0.
                     y(t)
    
    
    
Note that, according to assumption iii, as y(t) goes to M (during a work interval), the switch tendency (transition rate) l1[y(t)] goes to infinity and this forces a switch (transition) to state 0 (rest) before the inhibition can reach M. Note further that, according to assumption iv, as y(t) goes to zero (during a distraction), the switch tendency (transition rate) l0[y(t)] goes to infinity and this forces a switch (transition) to state 1 (work) before the inhibition can reach zero.

The model has Y(t) fluctuating in the interval between 0 and M. The stationary distribution for Y(t)/M in this model is a beta distribution, which is the reason for calling it the beta inhibition model.


For the non-mathematician, it is sufficient to know that in the beta inhibition model the inhibition Y(t) fluctuates between two boundaries: a lower boundary, which is more or less arbitrarily set at 0, and an upper boundary, which is set to M, where M is a positive number. During a period of work, when the inhibition Y(t) goes to M, the switch tendency l1[y(t)] becomes so strong that a switch to a state of distraction always occurs before the inhibition reaches M. Similarly, during a period of rest, when the inhibition Y(t) goes to 0, the switch tendency l0[y(t)] becomes so strong that a switch to a state of work always occurs before the inhibition reaches 0. The inhibition process, i.e. the fluctuation of Y(t) in real time, is determined by the following six quantities:
  1. The upper boundary of Y(t): M. The lower boundary is equal to 0.
  2. The initial inhibition, i.e. the inhibition at the beginning of the task: y(0).
  3. The slope of inhibition increase during work intervals: a1.
  4. The slope of inhibition decrease during rest intervals: a0.
  5. The sensitivity of the tendency to switch from work to rest to inhibition increase: c1.
  6. The sensitivity of the tendency to switch from rest to work to inhibition decrease: c0.
Note that these quantities are all larger than zero. Moreover, the value of the initial inhibition y(0) should always be larger than 0 and smaller than M, i.e. 0 < y(0) < M. A simulation program for the beta inhibition model is available (last updated June 17, 1998).
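The six quantities can be turned into a rough discrete-time simulation. The Python sketch below is an assumption-laden approximation (fixed step dt, switch probability taken as tendency times dt, made-up parameter values); it is not the simulation program mentioned above, only an illustration of assumptions iii and iv.

```python
import random

# Discrete-time sketch of the beta inhibition model. All parameter values
# are made up for illustration.
random.seed(1)
M  = 10.0   # upper boundary of inhibition (lower boundary is 0)
y  = 5.0    # initial inhibition y(0), with 0 < y(0) < M
a1 = 1.0    # slope of inhibition increase during work
a0 = 1.0    # slope of inhibition decrease during rest
c1 = 0.5    # sensitivity of the work -> rest switch tendency
c0 = 0.5    # sensitivity of the rest -> work switch tendency
dt = 0.001  # time step of the approximation
state = 1   # 1 = work, 0 = rest (distraction)

trajectory = []
for _ in range(20000):  # 20 simulated seconds
    if state == 1:
        y = min(y + a1 * dt, M - 1e-9)       # inhibition rises during work
        tendency = c1 * M / (M - y)          # l1[y(t)], diverges as y -> M
        if random.random() < tendency * dt:  # switch probability ~ l1 * dt
            state = 0
    else:
        y = max(y - a0 * dt, 1e-9)           # inhibition falls during rest
        tendency = c0 / y                    # l0[y(t)], diverges as y -> 0
        if random.random() < tendency * dt:  # switch probability ~ l0 * dt
            state = 1
    trajectory.append(y)

# The simulated inhibition stays strictly inside the interval (0, M).
print(min(trajectory), max(trajectory))
```

Because both tendencies diverge near the boundaries, the simulated inhibition never reaches 0 or M, which is exactly the behaviour described in the text.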

In the last few years several models have been developed, which are closely related to the beta inhibition model. One of these models is known as the Poisson inhibition model.


This model is the same as the beta inhibition model except for assumption iii, which is now replaced by the following assumption.
  v. The switch tendency l1[y(t)] (from work to rest) is constant:
    
       l1[y(t)] = c1                with c1 > 0
    
    
    
So, the Poisson inhibition model is based on assumptions i, ii, iv and v. Assumption v implies that in a state of work the tendency to switch to a state of distraction is no longer dependent on Y(t), but is constant over time. From a mathematical point of view the Poisson inhibition model has some very convenient properties, which make it possible to derive the most important statistical characteristics of the model. The model is described in more detail in Smit and van der Ven (1995). It can be proved that in the Poisson inhibition model the number of distractions has a Poisson distribution with mean c1A. This is the reason for the "Poisson" in the name of the model. The stationary distribution for Y(t) is a gamma distribution (which is why it has also been called the gamma inhibition model). Note that the Poisson inhibition model no longer has an upper boundary. It has only a lower boundary, which is equal to 0.
For the non-mathematical reader it is sufficient to know that in the case of the Poisson inhibition model the switch tendency during work is constant. A simulation program for the Poisson inhibition model is also available (last updated June 17, 1998). One can also imagine a model in which the switch tendency during rest is constant.
In the Poisson inhibition model the switch tendency during a period of work is constant, i.e. it is the same at each moment of time, whereas the switch tendency during a period of distraction is dependent on the inhibition Y(t). One could also imagine that the switch tendency during a period of distraction is constant, whereas the switch tendency during a period of work is dependent on the inhibition Y(t). In that particular case assumption iv of the beta inhibition model is replaced by the following assumption:
  vi. The switch tendency l0[y(t)] (from rest to work) is constant:
    
       l0[y(t)] = c0                with c0 > 0
    
    
    
Consequently, this model would then be based on assumptions i, ii, iii and vi. A simulation program for this model is also available (last updated June 17, 1998). As can be seen from the simulations, the model has two disadvantages.
  1. Inhibition has an upper boundary: M, but no lower boundary.
  2. Inhibition can be negative.
These properties are less desirable, because one wants psychological attributes to be positive. However, if one also wants to have a constant switch tendency during periods of distraction, one could proceed as follows. Instead of the underlying concept of inhibition, Y(t), one could work with the inverse of inhibition, 1/Y(t), and define Z(t) = 1/Y(t). One could call Z(t) mental energy, a concept which was introduced by Spearman (1927, Chapter IX, page 117). During work, mental energy will decrease, while during rest (distraction) mental energy will increase again. Consequently, assumptions i and ii are now replaced by the following assumptions:
  vii. During periods of work (attention, processing) mental energy Z(t) decreases in a linear way with a constant slope a1.
  viii. During periods of rest (distraction, non-processing) mental energy Z(t) increases in a linear way with a constant slope a0.
The new model would then be based on these assumptions, i.e. assumptions vii and viii, and on the following assumptions:
  ix. The switch tendency l1[z(t)] (from work to rest) is described as:
    
                     c1
       l1[z(t)] = ______              with c1 > 0.
                     z(t)
    
    
  x. The switch tendency l0[z(t)] (from rest to work) is constant:
    
       l0[z(t)] = c0             with c0 > 0.
    
    
Note that mental energy decreases during work periods and increases during rest periods. A model which is very similar to this model has been discussed in van der Ven, Smit and Jansen (1989). This model has the same structure as the Poisson inhibition model. In the Poisson inhibition model the switch tendency during work is constant, whereas in the present model the switch tendency during rest is constant. A simulation program for this model is also available (last updated June 17, 1998).
Both the Poisson inhibition model, in which the switch tendency during work is constant, and the models in which the switch tendency during rest is constant have mainly been developed for practical reasons: on the one hand these models preserve the dependency of the working and distraction times on inhibition (at least partly), on the other hand they are mathematically more tractable.

The models which have been discussed thus far all have in common that at least one of the two switch tendencies (from work to rest and from rest to work) is dependent on inhibition. In the beta inhibition model both are dependent on the inhibition, in the Poisson inhibition model only the switch tendency from rest to work is dependent on the inhibition, and in the last model only the switch tendency from work to rest is dependent on the inhibition. However, one could also imagine that neither of the two switch tendencies is dependent on the inhibition. This model is known as the Poisson Erlang model and has been discussed in more detail in Pieters and van der Ven (1982). Actually, it was the first model which was published in the sequence of inhibition models. At that time the notion of inhibition as an explanatory concept had not yet been introduced. Failure to explain certain statistical phenomena in the data gave rise to the development of the inhibition models.


In the Poisson Erlang model it is assumed that both switch tendencies are constant (and independent of inhibition).
  1. The switch tendency l1(t) (from work to rest) is described as:
    
       l1(t) = c1          with c1 > 0
    
    
  2. The switch tendency l0(t) (from rest to work) is described as:
    
       l0(t) = c0          with c0 > 0.
    
    

In this model the number of distractions has a Poisson distribution and, for each number of distractions, the total distraction time has an Erlang distribution. This is the reason for the "Poisson" and the "Erlang" in the name of the model.
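Because both tendencies are constant, the model is easy to simulate directly. The Python sketch below (made-up parameter values; not the simulation program referred to elsewhere in this text) draws work intervals with constant rate c1 and distraction durations with constant rate c0, and checks the implied mean response time A + c1A/c0.

```python
import random

# Monte-Carlo sketch of the Poisson Erlang model. Parameter values are
# made up for illustration; they are not taken from the cited papers.
random.seed(42)
A  = 2.0   # total real working time per response (constant)
c1 = 3.0   # constant switch tendency from work to rest
c0 = 5.0   # constant switch tendency from rest to work

def response_time():
    """One response: accumulate work until the threshold A is reached,
    inserting an exponentially distributed distraction at each switch."""
    t = 0.0
    work_done = 0.0
    while True:
        work = random.expovariate(c1)   # work interval until the next switch
        if work_done + work >= A:       # threshold reached: respond
            return t + (A - work_done)
        t += work
        work_done += work
        t += random.expovariate(c0)     # distraction duration

times = [response_time() for _ in range(20000)]
mean_T = sum(times) / len(times)
# Expected: E[T] = A + E[N] * E[d] = A + (c1 * A) / c0 = 2.0 + 1.2 = 3.2,
# since the number of distractions N is Poisson with mean c1 * A.
print(round(mean_T, 2))
```

The simulated number of distractions per response follows the Poisson distribution with mean c1A, and the total distraction time, given that number, follows an Erlang distribution, as stated above.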


The main difference between the previously discussed inhibition models and the Poisson Erlang model is that the former models can describe a possible long-term trend in the reaction times. Reaction time curves usually show a long-term trend, which cannot be explained by the Poisson Erlang model. A simulation program for the Poisson Erlang model is also available (last updated June 17, 1998).

                         To be continued ...

References

Abelson, A. (1911).
The measurement of mental ability of "backward" children. British Journal of Psychology, 4, 268-314.

Arnold, W. (1975).
Der Pauli-Test. New York: Springer-Verlag.

Berger, M. (1982).
The "Scientific Approach" to Intelligence: An Overview of its History with special Reference to Mental Speed. In: Eysenck, H.J. (1982). A Model of Intelligence. New York: Springer.

Bourdon, B. (1895).
Observations comparatives sur la reconnaissance, la discrimination et l'association [Comparative observations on recognition, discrimination and association]. Revue Philosophique, 40, 153-185.

Breukelen, G.J.P. van, Jansen, R.W.T.L., Roskam, E.E.Ch.I., van der Ven, A.H.G.S. and Smit, J.C. (1987).
Concentration, speed and precision in simple mental tasks. In: E.E. Roskam and R. Suck (Eds). Progress in mathematical psychology. Amsterdam: Elsevier.

Eysenck, M.W. (1982).
Attention and arousal. Berlin, Springer.

Godefroy, J.C.L. (1915).
Onderzoekingen over de aandachtsbepaling by gezonden en zielszieken [Studies on the measurement of concentration using healthy subjects and mentally ill subjects]. Groningen (The Netherlands), University of Groningen, Dissertation.

Bills, A.G. (1931).
Blocking: a new principle of mental fatigue. American Journal of Psychology, 43, 230-275.

Bills, A.G. (1935).
Fatigue, oscillations and blocks. Journal of Experimental Psychology, 18, 562-573.

Bills, A.G. (1964).
A study of blocking and other response variables in psychotic brain-damage and personality-disturbed patients. Behavioral Research Therapy, 2, 99-106.

Binet, A. (1900).
Attention et adaptation [Attention and adaptation]. L'annee psychologique, 6, 248-404.

Eysenck, H.J. and Frith, C.D. (1977).
Reminiscence, motivation and personality. London: Plenum Press.

Hoel, P.G., Port, S.C. and Stone, C.J. (1972).
Introduction to Stochastic Processes. Boston: Houghton Mifflin.

Kamphuis, G.H. (1962).
Een bijdrage tot de geschiedenis van de Bourdon-test [A contribution to the history of the Bourdon test]. Nederlands Tijdschrift voor de Psychologie, 17, 247-268.

Larson, G.E. and Alderton, D.L. (1990).
Reaction Time Variability and Intelligence: A "Worst Performance" Analysis of Individual Differences. Intelligence 14, 309-325.

Mackworth, N.H. (1950).
Researches in the measurement of human performance. Med Res Counc Spec Rep Ser 268.

Oehrn, A. (1896).
Experimentelle Studien zur Individualpsychologie [Experimental research on the study of individual differences]. Psychologische Arbeiten, 1, 92-151.

Peak, H. and Boring, E.G. (1926).
The factor of speed in intelligence. Journal of Experimental Psychology 9, 71.

Pieters, J.P.M. and van der Ven, A.H.G.S. (1982).
Precision, speed and distraction in time-limit tests. Applied psychological measurement, 6, 93-109.

Pieters, J.P.M. (1985).
Reaction time analysis of simple mental tasks: a general approach. Acta Psychologica, 59, 227-269.

Ripley, B.D. (1987).
Stochastic simulation. New York: Wiley.

Smit, J.C. and van der Ven, A.H.G.S. (1995).
Inhibition in Speed and Concentration Tests: The Poisson Inhibition Model. Journal of Mathematical Psychology. 39, 265-274.

Spearman, C. (1927).
The Abilities of Man. London: MacMillan.

van der Ven, A.H.G.S. and Smit, J.G. (1982).
Serial Reaction Times in Concentration Tests and Hull's Concept of Reactive Inhibition. In: Micko, H.C. and Schulz, U. (Eds.) Formalization of Psychological Theories. Proceedings of the 13th European Mathematical Psychology Group Meeting, Bielefeld. Report of the Universitaet Bielefeld, Schwerpunkt Mathematisierung, D-4800 Bielefeld, F.R. Germany.

van der Ven, A.H.G.S., Smit, J.C. and Jansen, R.W.T.L. (1989).
Inhibition in prolonged work tasks. Applied psychological measurement, 13, 177-191.

Vos, P. (1988).
De Bourdon concentratietest voor kinderen [The Bourdon concentration test for children]. Lisse: Swets en Zeitlinger.