If the response time is used for the measurement of purely quantitative factors, then qualitative factors, such as the three mentioned above, should not play any part in the emergence of the final response time. The Attention Concentration Test was especially developed to measure quantitative mental ability factors; it therefore consists only of simple mental tasks which are easy to perform and which can be practiced before the actual administration. In this way, qualitative factors such as knowledge, experience, and mind set are eliminated.
Before going into the nature of these quantitative factors, it will first be shown how the above mentioned qualitative factors influence the time needed to solve a problem, if it is solved at all. All mental problems can be divided into two categories: Rule Applying and Rule Finding Problems. Therefore, the subject will be discussed separately for Rule Applying Problems and Rule Finding Problems.
In Rule Applying problems, subjects frequently cannot solve the problem, because they think that they cannot solve it. For example, when they cannot instantaneously see the solution in the beginning, they erroneously come to the conclusion that some specific knowledge is required which they do not possess. This inhibits them from trying to go from a current state (e.g. the initial state) to a next state, although that very experience would teach them whether some specific knowledge is required or not. They also do not know that, in order to go from one state to the next, it is not necessary to know what the final solution (the correct path) of the problem is, or whether a solution exists at all. In summary, they do not know that it is just a question of going from one state to the next. They have to be told that it is sufficient to ask oneself at each state: what do I do next, without bothering about whether that might be the solution or not. Here a failure in general education becomes manifest. Children have to be taught about this type of problem (Rule Applying) and how one can always achieve a solution (eventually by trial and error) if it exists.
When the subject knows about this general strategy, and he is not allowed to use pencil and paper, that is, he has to solve the problem mentally, then he should memorize intermediate results. This might be a hindrance for arriving at the final solution, because intermediate results may be forgotten. However, if one is interested in measuring memory space, then one should use specific memory tests, in this case short-term memory tests, and not tests which contain problem solving items. If subjects are allowed to use pencil and paper, know about the general solving strategy (at each state you can always do something), and are prepared to continue until the solution is found, then they will always arrive at the solution. The time needed, however, will differ from subject to subject. Now one could argue that the time needed to solve the problem could be used as a measure of mental speed. However, the time needed to solve the problem, referred to as the solution time, can be long for some subject just because that particular subject accidentally went through some paths which did not lead to the solution state. Hence the fact that one subject used more time than others does not necessarily say anything about his general speed of information processing. Therefore, if a researcher is interested in measuring mental speed, he should make use of simple, repetitive tasks, in which fluctuations in test-taking time (between subjects as well as within subjects) "... cannot be attributed to the nature of the work, but only to the worker himself." (Spearman, 1927, p. 321). Another source of variation in response time may be the fact that subjects, when in a given state of the solving process, do not immediately see what a possible next new state might be, while that state actually does exist. They may think they have encountered a dead end, while in fact they have not. This may create superfluous loops which all take extra time.
A longer solution time may also be caused by a temporary mental breakdown, usually induced by the inclination to work too fast. Especially when the subject is not allowed to make use of paper, pencil or other tools, and has to work from memory, he might completely lose the overview of the problem and may have to start all over. This, however, is also a matter of getting to know that working faster is counterproductive. Periods of loss of attention are quite normal and no reason to become nervous.
Sometimes the number of steps in a problem may be small, for example one, where a step consists of a transition from the initial state to the end state. However, the number of possible choices in which a step can be made may be large. Typical examples are the so-called Matchstick Problems. These problems are composed of collections of adjacent squares, with each side representing a removable matchstick. The task is to remove a given number of matchsticks, leaving a certain number of complete squares and no excess lines. The initial state consists of the intact collection of adjacent squares and the end state consists of the collection with a certain number of matchsticks removed. The subject has to check how many squares are left and whether this number is equal to the number asked for. This has to be done repeatedly for all possible collections with the required number of matchsticks removed. The latter is a combinatorial problem. When the subject is not familiar with combinatorics, he may easily overlook some of the possibilities. However, this is a matter of experience and patience, and not of ability whatsoever. Sometimes the solution requires the unusual resort to squares of larger-than-normal size, where a square of normal size is defined as a square consisting of four matchsticks. Functional fixedness might play a role here. For example, the subject could think that only squares of normal size are allowed. The most important reason for the emergence of functional fixedness might be that, in presenting the problem, it is not explicitly mentioned that squares of larger-than-normal size should be taken into account. In that case one should not be surprised when the subject disregards these possibilities. If one knows that solutions of larger-than-normal size should also be included, then it will not be too difficult to solve the following triangular matchstick problem.
How functional fixedness caused by misinterpreting the purpose of the puzzle can play a devastating role in solving a problem is demonstrated by a well-known riddle, in which one has to build four equilateral triangles, each made up of three matchsticks, using only six matchsticks. In trying to solve this riddle, subjects usually implicitly assume that the solution should be given in two-dimensional space. Naturally, they then conclude that there is no solution, whereas in three dimensions the six matchsticks simply form a tetrahedron.
When the problem consists of only one step, i.e. the transition from the initial state to the end state, but the number of possible choices in which a step can be made is very large, then it may be worthwhile to use some model to describe the problem. The model may be easy to handle and may serve as a tool to find the final answer. A nice example is the Gold Coins Puzzle. This puzzle consists of a six-pointed star with 52 coins on the outer points of the star. The problem is to place the 26 remaining coins at the six intersections of the star in such a way that the number of coins along each of the six lines that make up the star totals 26. There are many ways to arbitrarily place the 26 coins at the six intersections, and it may take a lot of time before the requested configuration is obtained. In order to circumvent this problem, one could work with a model. In this particular case one could define the problem in terms of a set of simple linear equations of the form x + y = c, where c is a known number. The numbers of coins to be placed at the six intersections are indicated by letters in the figure underneath.
[Figure: a six-pointed star; the outer points hold 3, 9, 7, 12, 11 and 10 coins, and the six intersections are labelled a, b, c, d, e and f.]

Now one can redefine the problem as follows:
a + b = 10
b + c = 12
c + d = 9
d + e = 3
e + f = 7
f + a = 11

One must find values for a, b, c, d, e, and f, such that the given equalities hold. The use of the model, therefore, consists of two steps: firstly, one must hit upon the idea of using linear equations; then one must think of some method to solve the equations. This may take a lot of time. It may even be out of one's control if and when a solution is born in one's mind. Sometimes it may even be better to let the problem rest for a while, and wait until the mind itself, at its own moment of choice, comes up with an answer. One thing is clear: the time needed to solve the problem may be very unpredictable. In order to solve the equations one may note that there is only a limited number of solutions for d + e = 3:
d = 0 and e = 3
d = 1 and e = 2
d = 2 and e = 1
d = 3 and e = 0

If one resorts to d = 0 and e = 3, then the remaining equations can be solved:
c + 0 = 9  => c = 9
b + 9 = 12 => b = 3
a + 3 = 10 => a = 7
f + 7 = 11 => f = 4

So, for a = 7, b = 3, c = 9, d = 0, e = 3, and f = 4, there exists a solution to the problem. It is now also clear that there is more than one solution, namely four: one for d = 0, one for d = 1, one for d = 2 and one for d = 3. However, it is not always the case that an appropriate model to simplify the problem will be found. It may be that the required knowledge is present, but that the subject is not able to make the necessary association, because he/she does not have sufficient experience with these applications. It may also be that the required knowledge is not available. In both cases a lack of knowledge and/or experience may hinder reaching the final solution. However, these have nothing to do with intellectual capacity.
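The enumeration described above can be sketched in a few lines of code; the function name and the printed output are of course only illustrative:

```python
def solve_gold_coins():
    # Enumerate the four possible choices for d in d + e = 3 and solve
    # the remaining equations by substitution, as in the text. The line
    # totals 10, 12, 9, 3, 7 and 11 come from the puzzle description.
    solutions = []
    for d in range(4):              # d + e = 3, so d can only be 0, 1, 2 or 3
        e = 3 - d
        c = 9 - d                   # from c + d = 9
        b = 12 - c                  # from b + c = 12
        a = 10 - b                  # from a + b = 10
        f = 11 - a                  # from f + a = 11
        if e + f == 7:              # check the remaining equation e + f = 7
            solutions.append((a, b, c, d, e, f))
    return solutions

for s in solve_gold_coins():
    print(s)  # four solutions, one for each value of d
```

Running the sketch confirms the claim in the text: all four choices for d lead to a valid placement of the 26 coins.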
Sometimes the seemingly apparent difficulty of Rule Applying problems is not caused by the number of steps, which again may even be very small, but by the fact that applying the rule requires special abilities, such as spatial visualization. Well-known examples of visualization problems are:
In the case of problems which specifically require visualization, it is quite clear what should be done; it is not quite clear how it should be done. Especially when encountering the problem for the first time, the subject might feel quite uncomfortable, not knowing how to apply the rule or what kind of strategy to use. As is always the case with rule applying problems, it must be known that the only way to find the correct answer is simply to start and to persist. The only way to get used to the problem is to look for possible strategies, which, after some practice, can be performed in an automatic way. Naturally, it will take some time before a certain stage of practice is attained. However, when the task is overlearned, applying the rule does not seem very difficult anymore. So the question whether a rule is difficult to apply or not is only a matter of being familiar with the rule or not. Driving an automobile is a nice example. Subjects experience car driving as very difficult when trying to drive a car for the first time in their life. However, after having taken some lessons and with ample practice, they find it difficult to imagine that driving a car was once very difficult to do. The same holds true for the acquisition of language in early childhood, or learning to read and write at primary school age. If one has prior experience with visualization problems, one may well find these problems easier. When familiarity still must be obtained during the test, the problem will be experienced as more difficult.
The difficulties which children at elementary school frequently encounter with arithmetic word (or story) problems are a striking example of how the subject must have had some experience in the past with the particular rule in question. For example, in the case of arithmetic word problems the solution is sometimes not difficult to find once the problem is translated into the form of an algebraic equation. Take, for example, the next problem:
n**2 - 8 = 8

which implies that n**2 = 16 and n = 4 or n = -4. Therefore, the natural solution is 4. Knowing about equations and knowing that letters may represent numbers belongs to algebra. The reason why children at elementary school have difficulties with this kind of word problem is simply that they have not studied algebra yet, as this subject is taught in secondary school. But even people who are acquainted with elementary algebra sometimes have difficulties in solving these problems, simply because they have not learned to describe them in terms of algebraic equations. They might never even have realized that it may be sensible to describe problems by using models. It is very clear that past experience plays a paramount role in finding the correct rule.
Note that the process of theory building in the sciences completely corresponds to the process of Rule Finding as described above. In the sciences the available data are the data obtained under systematic observation or in experimentation. In the sciences, however, not only latent rules are required in order to be able to understand seeming irregularity, but also latent entities, such as latent objects. In many cases the particular theory at hand is named after the latent objects which are assumed to exist in order to understand the data. An example is atomic theory. The latent reality is described in terms of atoms and sub-atomic elements, such as protons, neutrons and electrons. The data, however, consist of the so-called black lines in the spectra of all sorts of materials. It is not accidental that one of the standard works on this topic is called "Atomic Structure and Spectral Lines" (see Sommerfeld, 1923).
Rule Finding Problems may also be impossible to solve when the underlying rule is not known to the subject or when he/she has very little experience with the rule. Take, for example, the following number sequence:
1 2 3 1 3 4 2 3 3 1 4 5 1 1 3 2 3 4 3 1 2 3 . . .

People who try to solve this problem by using the usual mathematical number sequences will not find the solution, and for them the problem will seem almost impossible to solve. One might think that the following partitioning into sets of three and sets of two might work:
1 2 3   4 2 3   4 5 1   2 3 4   2 3 . . .
1 3   3 1   1 3   3 1 . . .

The sequence of the sets of three suggests that there should be an order of the numbers within the sets of three and an order of the sets of three within the sequence. However, this approach will not lead to the final solution. Now if one disregards for a moment the pairs 1 3 and 3 1 and applies a partitioning into sets of four, one obtains the following result:
1 2 3 4   2 3 4 5   1 2 3 4   2 3 . . .

The final solution appears to be very simple. However, even for mathematicians this is a difficult problem, simply because they are not used to this kind of problem (lack of knowledge and experience). Note that 'difficult' does not mean 'complex'. The underlying rule is very simple. People call a problem 'difficult' when they do not yet see the solution and think that they might not be able to solve it. When somebody says that a problem is difficult, it merely means that the person simply does not (yet) know the solution. It does not necessarily mean that the solution is complex.
In some cases the underlying rule does belong to the expert knowledge of the subject, and it may still be the case that, for some reason, the subject is not able to hit upon the idea of the required rule. For example, the following problem might seem almost impossible to solve for many people.
A E F H I K . . . . ?
B C D G J . . . . . ?

Subjects suppose that some underlying sequence of numbers should explain the partitioning of the letters, assuming that the letters represent numbers, such as A = 1, B = 2, C = 3, etc. However, the actual rule is childishly simple: the letters above consist of straight lines only, while the letters below also have curved parts. Note that subjects who do not immediately see the solution would still call this problem very 'difficult'.
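The 'straight versus curved' rule can be expressed directly in code. A small sketch, where the set of curved capitals is simply listed by hand (an assumption about the standard printed letter forms) and the helper name is made up for illustration:

```python
# Capital letters whose standard printed form contains a curved
# stroke, listed by hand (an assumption about the typeface).
CURVED = set("BCDGJOPQRSU")

def straight_only(letter):
    # True when the letter is drawn with straight strokes only.
    return letter not in CURVED

letters = "ABCDEFGHIJK"
top = [ch for ch in letters if straight_only(ch)]
bottom = [ch for ch in letters if not straight_only(ch)]
print("".join(top))     # AEFHIK
print("".join(bottom))  # BCDGJ
```

Applied to the letters A through K, the sketch reproduces exactly the two rows of the puzzle.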
The mean of the numbers 3, 5, 12, 5, 5 is equal to:

3 + 5 + 12 + 5 + 5
__________________ = 6
        5

Note further that the mean deviation of a set of numbers is the mean of the absolute values of the deviations from the mean. For example, the mean deviation of the numbers 3, 5, 12, 5, 5 is equal to:

|3-6| + |5-6| + |12-6| + |5-6| + |5-6|
______________________________________ = 2.4
                  5

where |a-b| = a - b when a is greater than b, and |a-b| = b - a when a is smaller than b.
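As a sketch, the mean deviation can be computed in a few lines (the function name is only illustrative):

```python
def mean_deviation(numbers):
    # The mean of the absolute values of the deviations from the mean.
    m = sum(numbers) / len(numbers)
    return sum(abs(x - m) for x in numbers) / len(numbers)

print(mean_deviation([3, 5, 12, 5, 5]))  # 2.4
```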
Subsequent researchers became increasingly
aware that concentration test tasks
should be relatively easy. Learning effects should be avoided
and the relevant information should be found in the short-term
oscillation of the measure of performance.
Godefroy (1915) stressed again the importance of the fluctuation in the
response times. He also proposed the mean deviation of the response time
as an indication of concentration. Spearman, several decades later,
considered even
oscillation to be a separate universal factor, in addition to
what he called the general factor (not further identified)
and perseveration (Spearman, 1927, p. 327). A typical
manifestation of this factor (oscillation) "... is supplied by the fluctuations which
always occur in any person's continuous output of mental work,
even when this is so devised as to remain of approximately constant
difficulty." (Spearman, 1927, p. 320). According to Spearman "...
almost any kind of continuous work can be arranged so as to manifest
the same phenomenon. In all cases alike, the output will throughout
exhibit fluctuations that cannot be
attributed to the nature of the work, but only to the worker himself."
(p. 321). More recently, Jensen (1982), discussing his reaction time
experiments, noted that trial-to-trial variability (the standard
deviation of a subject's reaction times) frequently surpassed response
speed as a predictor of intelligence.
Note that the standard deviation of a set of numbers is equal to
the square root of the variance of these numbers. The variance of
a set of numbers is defined as the mean
of the squared deviations from the mean. For example,
the variance of the numbers 3, 5, 12, 5, 5 is equal to:
(3-6)**2 + (5-6)**2 + (12-6)**2 + (5-6)**2 + (5-6)**2
_____________________________________________________ = 9.6
5
The standard deviation is equal to sqrt(9.6) = 3.098.
When the numbers are obtained from a sample and the sample variance
is used to estimate the population variance then
the sum of the squared deviations is divided by the number of deviations
minus one. In the example, one obtains:
(3-6)**2 + (5-6)**2 + (12-6)**2 + (5-6)**2 + (5-6)**2
_____________________________________________________ = 12.0
4
The sample standard deviation is equal to sqrt(12.0) = 3.464.
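The computations above can be summarized in a short routine; the `sample` flag and the function name are illustrative choices:

```python
import math

def variance(numbers, sample=False):
    # Mean of the squared deviations from the mean; with sample=True
    # the sum of squares is divided by n - 1 instead of n.
    m = sum(numbers) / len(numbers)
    ss = sum((x - m) ** 2 for x in numbers)
    return ss / (len(numbers) - 1 if sample else len(numbers))

times = [3, 5, 12, 5, 5]
print(variance(times))                                    # 9.6
print(round(math.sqrt(variance(times)), 3))               # 3.098
print(variance(times, sample=True))                       # 12.0
print(round(math.sqrt(variance(times, sample=True)), 3))  # 3.464
```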
According to Larson and Alderton
(1990), numerous studies suggest that Jensen's observation was correct
and that measures of variability, such as the mean deviation or
standard deviation have a robust statistical relationship to intelligence.
At present the typical concentration test consists of a simple mental task such as addition of one-digit numbers, cancellation of letters, crossing out sets of dots, etc. The task has to be performed for a relatively long period of time varying from 10 to 30 minutes. Performance is measured by a time series that consists of either a series of response times in which each response time is the result of a fixed number of responses, or a series of response counts in which each count is obtained in a fixed amount of time. A well-known example of the former is the Bourdon-Vos test (Vos, 1988), which is a children's version of the Bourdon-Wiersma test (see Huiskam and de Mare, 1947 and Kamphuis, 1962) used in The Netherlands. A well-known example of the latter is the Pauli test (see Arnold, 1964) used in Germany, which is a single digit addition task. The time series consists of the number of additions per minute during a thirty minute period.
Additionally, vigilance tasks should be strictly distinguished from the tasks used in concentration or attention tests. In vigilance tasks the subject is required to keep watch for inconspicuous signals (either visual or auditory) over long periods of time (one hour or more). Systematic scientific investigation of vigilance was initiated by Mackworth (1950), who simulated the task of maintaining radar watch for submarines by using a clock pointer which moved on a series of steps. The subjects watched the pointer and reported the relatively infrequent occasions on which the pointer gave a double jump. "The most important finding is the so-called vigilance decrement: the probability of signal detection tends to decrease over time." (Eysenck, 1982, p. 80). Unlike vigilance tasks, concentration tests consist of stimuli (or items), which are presented over short periods of time (10 to 20 minutes), each requiring a response. Responses occur frequently, instead of infrequently as in vigilance tasks. In vigilance research one is mainly interested in studying the effect of fatigue or boredom. In concentration tests, the task should be completed before fatigue or boredom may play a role.
The reaction time, which the subject needs for a certain response (a bar of colours or dice in the case of the IAT) is considered as the sum of a series of alternating real working times (or attention times) and non-working times (or distraction times), such as is shown in the next figure, where the letter a refers to an individual working time and the letter d refers to an individual distraction time.
----------------------------------------------------------------  (manifest)
_____......____........______......____....__________......____  (latent)
  a1   d1   a2    d2     a3    d3   a4  d4     a5     d5    a6

Note that the individual working and distraction times vary in a random way. The actual response will be given as soon as the sum of the individual working times has reached a threshold (A), which represents the total real working time, i.e. the time a subject needs to accomplish the amount of work to produce the response. The inhibition models all assume that the period occupied in producing a response always begins with a real working time and always ends with a real working time. They further assume that the total real working time is constant across responses. In the case of the Attention Concentration Test, this means that, for a given subject, the total real working time is assumed to be the same for each bar. Naturally, the total real working time may vary across subjects. This assumption of a constant total real working time can be experimentally induced by taking care that the amount of work needed for each response (or for each bar in the case of the IAT) is the same. But even if this is the case, one could still argue that there might remain some minor variation in the total real working time across responses (or bars). However, it is assumed that this variation is negligible in comparison to the variation in total distraction time. The total distraction time is the sum of the individual distraction times. If all the individual working times are taken together, as well as all the individual distraction times, then one obtains the figure underneath.
_________________________________..............................
  a1  a2   a3   a4    a5     a6    d1    d2    d3    d4   d5

The observed, manifest response time can therefore be considered as the sum of the total real working time A,

A = a1 + a2 + a3 + a4 + a5 + a6,

and the total distraction time D,

D = d1 + d2 + d3 + d4 + d5.

That is, T = A + D, where T represents the observed response time. Note that differences in T are only caused by differences in D: for a given subject, across responses (or bars), there are only differences in D, the total distraction time. Across subjects, or across different test administrations, there may also be differences in A, the total real working time. The total distraction time D is determined by the number of distractions and by the durations of the individual distraction times. The number of distractions, however, is determined by the durations of the individual working times. The longer the individual working times are, the smaller the number of distractions; the shorter the individual working times are, the greater the number of distractions. In any case, the actual response time or reaction time can be considered as a series of alternating individual working times and distraction times. The question arises: how do these individual distraction and working times emerge in time?
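The decomposition T = A + D can be illustrated with a small simulation. Note that the exponential distributions for the individual working and distraction times, as well as all parameter values, are assumptions made only for this sketch; the inhibition models themselves specify these times differently:

```python
import random

def simulate_response_time(A=2.0, mean_work=0.5, mean_distraction=0.3, seed=1):
    # Accumulate alternating individual working times (a) and distraction
    # times (d) until the total real working time reaches the threshold A.
    # The observed response time is then T = A + D, where D is the total
    # distraction time. The period begins and ends with a working time.
    rng = random.Random(seed)
    worked = 0.0
    D = 0.0
    while True:
        a = rng.expovariate(1.0 / mean_work)          # individual working time
        if worked + a >= A:
            break                                     # response produced during work
        worked += a
        D += rng.expovariate(1.0 / mean_distraction)  # individual distraction time
    return A + D                                      # observed response time T

T = simulate_response_time()
print(T >= 2.0)  # T can never be shorter than the total real working time A
```

Whatever the interval distributions, the observed time always exceeds A by exactly the total distraction time, which is the only source of within-subject variation.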
In probability theory the probabilistic behavior of an individual distraction time (or working time) can be described in terms of its density or distribution function. The distribution function F(t) of t is defined as the probability that the random variable T is less than or equal to t, which is equal to one minus the probability that the random variable T is greater than t:

F(t) = 1 - P(T > t)

The density function f(t) of t is defined as the derivative of F(t):

f(t) = dF(t)/dt = F'(t).
However, it is more natural to describe these individual distraction or working times in terms of what is known as the hazard rate or hazard function of t. The hazard rate l(t) of t is defined as the ratio of the density function of t and one minus the distribution function of t:

          f(t)
l(t) = ________
       1 - F(t)
It can easily be proved that

          d
l(t) = - __ ln[1 - F(t)]
         dt

The quantity ln[1 - F(t)] is known as the log survivor function, and response times are often summarized by plotting ln[1 - F(t)] versus t, in which case the negative of the slope is the hazard function.
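For readers who want to check the relation l(t) = f(t)/(1 - F(t)) = -d/dt ln[1 - F(t)] numerically, the following sketch uses an exponential distribution, for which the hazard rate is constant; the rate value 1.5 is arbitrary:

```python
import math

# Exponential distribution with rate r: F(t) = 1 - exp(-r t) and
# f(t) = r exp(-r t), so the hazard f(t) / (1 - F(t)) equals r for all t.
r = 1.5

def F(t):
    return 1.0 - math.exp(-r * t)

def f(t):
    return r * math.exp(-r * t)

for t in (0.1, 1.0, 3.0):
    hazard = f(t) / (1.0 - F(t))
    # central-difference slope of -ln[1 - F(t)]: should also equal r
    h = 1e-6
    slope = -(math.log(1.0 - F(t + h)) - math.log(1.0 - F(t - h))) / (2.0 * h)
    print(round(hazard, 6), round(slope, 6))  # both equal r = 1.5
```

A constant hazard means that the tendency to leave the current state does not depend on how long one has already been in it; increasing or decreasing hazards correspond to other distributions.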
The non-mathematical reader, however, may not be familiar with concepts,
such as probability density function, distribution function and hazard
function, where the latter is especially important for the present
discussion. One may understand the concept of hazard function or hazard
rate as follows.
Suppose a subject is in a state of distraction. If the switch from a
state of distraction to a state of work has not yet occurred, then one
may think of some tendency for it to occur in the next instant of time.
This tendency, denoted by l(t), may be time dependent and is therefore
written as a function of time. This is exactly what mathematicians mean
when they refer to the concept of hazard rate or hazard function. One
can think of this tendency as staying constant, l(t) = c, or as
increasing or decreasing as the time increases since the subject entered
the current state, which in the example is the state of distraction.
According to inhibition theory the subject is alternately in
a state of work or attention and in a state of rest or distraction.
The transition tendencies (or hazard rates) to switch from one state
to the other are assumed to vary according to the level of a latent,
unobservable quantity called inhibition. It is further assumed
that inhibition, like fatigue, increases during periods of work and
decreases during periods of rest (or distraction). Note that inhibition
is similar to fatigue, but not the same as fatigue. This idea of
inhibition governing the times in which the subject dwells in a state
of work or rest was already suggested by Spearman, when he related
oscillation in performance to what he assumed to be an alternating
process of energy consumption (read: inhibition increase) and energy
recuperation (read: inhibition decrease) (see Spearman, 1927, p. 327).
From now on subscripts will be indicated by small letters, since the
markup language (HTML) which was used for this text, does not allow the
use of subscripts.
The transition tendencies l0 (0 for rest) and l1
(1 for work) are
assumed to depend on inhibition, where inhibition itself
is dependent on time. Therefore the transition tendencies are
denoted as l0[y(t)] and l1[y(t)], where y
denotes inhibition.
Note, that l0[y(t)] is the tendency to switch from a state of rest
to a state of work, given that the subject is in a state of rest,
and l1[y(t)] the tendency to switch from a state of work to a state of
rest, given that the subject is in a state of work.
The transition tendencies l0[y(t)] and l1[y(t)] are
assumed to change
with the level of inhibition in such a way that, when inhibition
is high, distractions will be long relative to the length of work
intervals. This causes inhibition to decrease. Note that inhibition
decreases during distraction intervals. Likewise, when inhibition is
low, distractions will be relatively short and as a result inhibition
will rise. Note that during working intervals inhibition increases.
This makes it plausible that inhibition will tend to behave like a
stationary process, fluctuating around a central region and tending to
return to this region whenever it finds itself outside of it. For
example, if the initial inhibition happens to be low, one will have short
distractions (and hence short reaction times) in the beginning of the
test. As a consequence, the inhibition gradually increases and this
causes distractions (and hence also reaction times) to become longer.
So, one gets an upward trend in the reaction time curve. This holds
even when the subject is working with a constant speed during the whole
test, from the beginning to the end. The opposite phenomenon, a downward
trend in the reaction time curve, is to be expected when the initial
inhibition is high (relative to the stationary mean value). Note, that
trend (upward or downward) can also be caused by a gradual change in
working speed. This trend, however, should not be confused with the trend
due to the underlying inhibition process.
             c1 M
l1[y(t)] = ________    with c1 > 0
           M - y(t)

             c0
l0[y(t)] = ______      with c0 > 0.
            y(t)
The model has Y(t) fluctuating in the interval between 0 and M. The stationary distribution for Y(t)/M in this model is a beta distribution (reason to call it the beta inhibition model).
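A crude discrete-time simulation can illustrate this stationary behavior. The Euler-style discretization, the step size, and all parameter values below are illustrative assumptions, not part of the model as stated:

```python
import random

def simulate_beta_inhibition(steps=20000, dt=0.001, M=10.0, c0=1.0, c1=1.0, seed=2):
    # Inhibition y grows during work and decays during rest; the switch
    # tendencies are l1 = c1*M/(M - y) (work -> rest) and l0 = c0/y
    # (rest -> work). The switch probability per step is hazard * dt.
    rng = random.Random(seed)
    y = M / 2.0                                # start inhibition mid-range
    working = True
    path = []
    for _ in range(steps):
        if working:
            y = min(M - 1e-9, y + dt)          # inhibition increases during work
            if rng.random() < c1 * M / (M - y) * dt:
                working = False
        else:
            y = max(1e-9, y - dt)              # inhibition decreases during rest
            if rng.random() < c0 / y * dt:
                working = True
        path.append(y)
    return path

path = simulate_beta_inhibition()
print(0.0 < min(path) and max(path) < 10.0)   # Y(t) stays inside (0, M)
```

The feedback visible in the simulation is exactly the mechanism described above: high inhibition lengthens distractions, which lowers inhibition, and vice versa, so Y(t) keeps fluctuating inside the interval (0, M).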
In the last few years several models have been developed which are
closely related to the beta inhibition model. One of these models is
known as the Poisson inhibition model. This model is the same as the
beta inhibition model, except for assumption iii, which is now
replaced by the following assumption.
So, the Poisson inhibition model is based on the assumptions
i, ii, iv and v. Assumption v implies
that in a state of work the tendency to switch to a state of distraction
is no longer dependent on Y(t), but is constant with time.
From a mathematical point of view
the Poisson inhibition model has some very convenient
properties, which make it possible to derive the most important
statistical characteristics of the model. The model is described in
more detail in Smit and van der Ven (1995). It can be proved that in
the Poisson inhibition model the number of distractions has a Poisson
distribution with mean c1A. This is the reason for the "Poisson"
in the name of the model. The stationary distribution for Y(t)
is a gamma distribution (the reason why it has also been called the
gamma inhibition model). Note that the Poisson inhibition model no
longer has an upper boundary; it has only a lower boundary, which is
equal to 0.
Assumption v: l1[y(t)] = c1, with c1 > 0.
For the non-mathematical reader it is sufficient to know, that in the
case of the Poisson inhibition model the switch tendency during work is
constant.
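The claim that the number of distractions per response has a Poisson distribution with mean c1A can be illustrated by simulation: with a constant switch tendency c1 during work, distractions occur as a Poisson process over the total real working time A. All parameter values below are illustrative:

```python
import random

def count_distractions(A=2.0, c1=3.0, rng=random):
    # With a constant hazard c1 during work, the working times between
    # successive distractions are exponential with rate c1, so the
    # number of distractions before the threshold A is Poisson(c1 * A).
    t = 0.0
    n = 0
    while True:
        t += rng.expovariate(c1)   # working time until the next distraction
        if t >= A:
            return n               # threshold A reached: response produced
        n += 1

rng = random.Random(7)
counts = [count_distractions(rng=rng) for _ in range(20000)]
mean_count = sum(counts) / len(counts)
print(abs(mean_count - 6.0) < 0.2)  # sample mean is close to c1 * A = 6.0
```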
A simulation program for the Poisson inhibition model is also available.
One can also imagine a model in which the switch tendency during rest
is constant.
In the Poisson inhibition model the switch tendency
during a period of work is constant, i.e. it is the same at each moment
of time, whereas the switch tendency during a period of
distraction is dependent on the inhibition Y(t). One could also imagine that
the switch tendency during a period of distraction is
constant, whereas the switch tendency during a
period of work is dependent on the inhibition Y(t). In that particular case
assumption iv of the beta inhibition model is replaced by the following
assumption (vi):

l0[y(t)] = c0 with c0 > 0

Consequently, this model would then be based on the
assumptions i, ii,
iii and vi.
If you want to download the simulation program for this model, please
click here.
Last update of this program was on June 17, 1998.
As can be seen from the
simulations, the model has two disadvantages.
These properties are less desirable, because one wants psychological
attributes to be positive. However, if one also wants a
constant switch tendency during periods of distraction, one could proceed as
follows. Instead of the underlying concept of inhibition, Y(t), one could
work with the inverse of inhibition, 1/Y(t), and define Z(t) = 1/Y(t).
One could call Z(t) mental energy, a concept which was introduced
by Spearman (1927, Chapter IX, page 117). During work, mental energy will decrease,
while during rest (distraction) mental energy will increase again.
Consequently, assumptions i and ii are now replaced
by two new assumptions, vii and viii: during a period of work mental
energy decreases, and during a period of distraction (rest) it increases
again. The new model would then be based on these assumptions,
vii and viii, and on the following assumptions:

l1[z(t)] = c1/z(t) with c1 > 1

l0[z(t)] = c0 with c0 > 0

If you want to download the simulation program for this model, please
click here.
Last update of this program was on June 17, 1998.
Note that mental energy decreases
during work periods and increases during rest periods.
A model which is very similar to this model has been discussed in
van der Ven, Smit and Jansen (1989).
This model has the same structure as the Poisson inhibition model:
in the Poisson inhibition model the switch tendency during work is constant,
whereas in the present model the switch tendency during rest is constant.
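As with the Poisson inhibition model, the dynamics of this mental-energy variant can be sketched in Python. The depletion and recovery rates and the initial level of mental energy are illustrative assumptions; only the two switch tendencies, c1/z(t) during work and the constant c0 during rest, follow the assumptions above:

```python
import random

def simulate_mental_energy(
    b1=1.0,   # assumed rate at which mental energy decreases during work
    b0=2.0,   # assumed rate at which it recovers during distraction
    c1=1.5,   # constant in the switch tendency l1[z(t)] = c1 / z(t)
    c0=1.0,   # constant switch tendency during rest: l0[z(t)] = c0
    z0=5.0,   # assumed initial level of mental energy
    dt=0.001, # time step of the discretised simulation
    total_time=100.0,
    seed=1,
):
    """Toy discretisation of the mental-energy model.

    The switch tendency during work, c1 / z(t), grows as mental energy
    is depleted; the switch tendency during rest is the constant c0,
    so z(t) stays positive and has no upper boundary.
    """
    rng = random.Random(seed)
    z = z0
    working = True
    for _ in range(int(total_time / dt)):
        if working:
            z = max(z - b1 * dt, 1e-9)        # energy is consumed during work
            if rng.random() < (c1 / z) * dt:  # hazard rises as z is depleted
                working = False
        else:
            z += b0 * dt                      # energy recovers during rest
            if rng.random() < c0 * dt:        # constant hazard: rest -> work
                working = True
    return z
```

Because the rest-to-work hazard is constant, this sketch mirrors the Poisson inhibition model with the roles of the two states exchanged.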
Both the Poisson inhibition model, in which the switch tendency during
work is constant, and the models in which the switch tendency during
rest is constant have mainly been developed for practical reasons:
on the one hand these models preserve, at least partly, the dependency
of the working and distraction times on inhibition;
on the other hand these models
are mathematically more tractable.
The models which have been discussed thus far all have in common
that at least one of the two switch tendencies (from work to rest
and from rest to work) is dependent on inhibition. In the beta inhibition
model both are dependent on the inhibition, in the Poisson inhibition model
only the switch tendency from rest to work is dependent on the inhibition,
and in the last model only the switch tendency from work to rest is
dependent on the inhibition. However, one could also imagine that neither
of the two switch tendencies is dependent on the inhibition. This model
is known as the Poisson Erlang model and has been discussed in more
detail in Pieters and van der Ven (1982). It was actually the first
model to be published in the sequence of inhibition models. At that
time the notion of inhibition as an explanatory concept had not yet been
introduced; the failure to explain certain statistical phenomena in the data
gave rise to the development of the inhibition models.
In this model the number of distractions has a Poisson distribution
and, for each number of distractions, the total distraction time has an
Erlang distribution. This is the reason for the "Poisson" and the
"Erlang" in the name of the model.
In the Poisson Erlang model it is assumed that both switch tendencies are
constant (and independent of inhibition):

l1(t) = c1 with c1 > 0

l0(t) = c0 with c0 > 0
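Because both switch tendencies are constant, the Poisson Erlang model can be simulated directly, without stepping through time: the number of distractions within a fixed amount of real work time A is Poisson with mean c1A, and the total distraction time given k distractions is a sum of k exponential durations, i.e. Erlang. A minimal Python sketch, in which the parameter values are illustrative and the reaction time is taken to be real work time plus total distraction time:

```python
import random

def simulate_poisson_erlang(work_time=10.0, c1=0.5, c0=2.0, seed=1):
    """Direct simulation of one reaction time under the Poisson Erlang model.

    The number of distractions during the real work time A is generated
    by counting exponential(c1) inter-event times within A (a Poisson
    count with mean c1*A); each distraction lasts an exponential(c0)
    time, so the total distraction time given k distractions is Erlang.
    """
    rng = random.Random(seed)
    # Count distractions: exponential inter-event times within work_time.
    k, t = 0, rng.expovariate(c1)
    while t < work_time:
        k += 1
        t += rng.expovariate(c1)
    # Total distraction time: sum of k exponential(c0) durations (Erlang).
    distraction_time = sum(rng.expovariate(c0) for _ in range(k))
    # Reaction time = real work time + total distraction time.
    return work_time + distraction_time, k
```

Since neither hazard depends on inhibition, nothing in this sketch carries over from one trial to the next, which is why the model cannot produce a long-term trend.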
The main difference between the previously discussed inhibition models
and the Poisson Erlang model is that the former models can describe
a possible long-term trend
in the reaction times. Reaction time curves usually show a long-term
trend, which cannot be explained by the Poisson Erlang model.
If you want to download a simulation program for the Poisson Erlang
model, please click here.
Last update of this program was on June 17, 1998.
To be continued ...