Psychology Experiment Framework
A representative human-subject experiment application. In
each block of trials, the subject is asked to respond to each image presentation with
"yes" or "no" according to whether they recognize it as showing
a particular emotion, say "happiness". Investigators may look at
response-correctness and response times to see if these measurements correlate with other
factors of interest, say gender, ethnic background, age, or psychological diagnoses.
(McGivern, R. and others)
The animation here is a simulation of the actual experiment application, which is of
course somewhat more random. The animation stops after a while; reload the page to restart it.
Significant Need, but Significant Challenges
By the late '80s I had become increasingly involved in developing software for
psychology research, and had discovered a field facing some significant software
development problems:
- Standard Structure, Diverse Details: The coarse
structure of experiments is fairly standardized, because each follows the scientific
method, and must feed results into known data analysis techniques. However, at a fine
level, each experiment investigates something unique, which introduces novel
requirements for the detail-level behavior of the software each time. This hindered
attempts to commercialize psych experiment software.
- Application Behavior Eludes Parameterization: An
experiment is usually investigating complex or subtle behavior of the animal or human
subject, hence the software must implement complementary complex or subtle behavior. A few
commercial products had attempted to provide "no-programming-required"
flexibility by offering a host of parameters, but these mostly served as a lesson that if
you want to specify behavior, a programming language is hard to beat. (Partial exception:
environments that allowed definition of state-machines, which would generate program
code.)
- Moving Target Requirements: An experiment is itself a
work-in-progress as the investigators try out different variations to get at the variables
of interest. Thus software requirements are a moving target for much of the life of
the experiment.
The above factors tended to argue for custom programming, but this generally faced a
number of problems:
- Programmers are Expensive: Many psych research groups
can't afford the services of an experienced professional software developer to develop
software from scratch. Some would hire student programmers, or attempt to learn
programming skills themselves, but this would often result in highly idiosyncratic and
unreliable software that resisted use by novice experimenters.
- Unfamiliar Application Domain: Few software developers
are familiar with the experiment field, so much time (= $$$) is wasted simply getting the
developer to understand the field (particularly if the developer is inexperienced at
eliciting requirements and doesn't understand the moving-target aspect).
- Tricky Programming: The stimulus-presentation and
response-capture portions of the software often must function with near-millisecond
resolution, which is a programming specialty not suited to inexperienced programmers.
- Workflow Implications: There are also user-interface and
longer-term data management issues to consider (part of the overall lab process flow),
again not suited to inexperienced developers.
Bearing these factors in mind, there seemed to be a need for an experiment development
environment which didn't eliminate the need for an experienced programmer, but
instead allowed a programmer to build applications more quickly (focusing on the tricky
bits), while reducing the hours of work to a level that clients could afford.
The framework I developed had several features:
- Model Familiar to Investigators: Based on a model of
experiment design familiar to experimenters: subjects, runs, trials, independent variables
and levels, treatment combinations, dependent variables and so forth. These are
identified in initial discussions with the investigator, and are recorded in simple
syntax, which (in database terms) essentially captures the meta-data for the experiment.
- Screen Layout: In a separate process, the programmer uses a
simple syntax to lay out the screen which will appear as a control panel while the
experiment runs.
- Code Generator: These two inputs are fed to code generators to
automatically produce source code (Borland Pascal) for several of the application modules,
including the user interface, the ability to read experimenter-supplied sequence
information, the trial-to-trial sequencing, and the data management.
- Programmer Focus: The programmer is left to concentrate
on just two aspects of the application:
- any experiment-specific stimulus routines (say palette-swap animations, or sound-card
control)
- a "trial" procedure (typically a state-machine) which will vary its behavior
by responding to IV levels supplied to it by the generated code. Note that since the
meta-data was used to generate the data-management source code, all of the variables used
by the programmer (independent variable levels as input, dependent variables as output)
were type-checkable by the compiler, eliminating an otherwise sensitive source of coding
errors.
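To make the division of labor concrete, here is a minimal sketch in the spirit of the
framework's Borland Pascal output. It is not the actual generated code: the type and
routine names (TTrialIVs, TTrialDVs, RunTrial and so on) are illustrative, and Crt's
Delay and ReadKey stand in for the real millisecond-resolution stimulus and
response-timing routines. The IV and DV declarations are what the code generator would
emit from the meta-data; the programmer writes only the trial procedure against those
compiler-checked types.

  program TrialSketch;
  { Illustrative only: typed IV/DV declarations as a code generator might emit
    them from the experiment meta-data, plus a hand-written trial procedure. }
  uses Crt;

  type
    TEmotion  = (emHappy, emSad, emNeutral);    { IV "Emotion" and its levels }
    TResponse = (rsYes, rsNo, rsNone);

    TTrialIVs = record                { generated: inputs to one trial }
      Emotion    : TEmotion;
      ExposureMs : Integer;           { stimulus duration, ms }
    end;

    TTrialDVs = record                { generated: outputs of one trial }
      Response   : TResponse;
      ReactionMs : Integer;           { response latency, ms }
    end;

  { Hand-written: a simple trial as a present/hold/collect sequence }
  procedure RunTrial(const IVs: TTrialIVs; var DVs: TTrialDVs);
  var
    Key : Char;
  begin
    WriteLn('Stimulus shown: emotion level ', Ord(IVs.Emotion));  { stand-in for image display }
    Delay(IVs.ExposureMs);                                        { hold the stimulus }
    WriteLn('(stimulus cleared -- press Y or N)');
    Key := UpCase(ReadKey);                { stand-in for timed response capture }
    case Key of
      'Y': DVs.Response := rsYes;
      'N': DVs.Response := rsNo;
    else
      DVs.Response := rsNone;
    end;
    DVs.ReactionMs := 0;      { real code would time from stimulus offset to key press }
  end;

  var
    IVs : TTrialIVs;
    DVs : TTrialDVs;
  begin
    IVs.Emotion    := emHappy;      { in a real run these come from the }
    IVs.ExposureMs := 500;          { experimenter-supplied sequence information }
    RunTrial(IVs, DVs);
    WriteLn('Recorded response code: ', Ord(DVs.Response));
  end.

Because the compiler sees the same declarations that the data-management code was
generated from, a slip such as assigning to a nonexistent dependent variable fails at
compile time rather than silently producing a bad data file.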
The resulting experiment applications have a number of critical features:
- DOS Target: At that time, a reasonable target
environment was DOS -- relatively cheap PCs and an operating system that would get out of
the way when asked.
- User-Determined Trial Parameters and Sequences: The IVs
are fixed at compile-time, but their trial-to-trial levels and sequences are not -- this
allows experimenters considerable latitude to incrementally refine the experiment without
requiring programmer intervention.
- Windowed UI: Consistent "windowed" user
interface (using TurboVision) for operators means that one set of instructions and
training (for experiment assistants) works across many applications.
- Many Standard Features: Menus provide access to many
standard features, such as ability to choose from defined sequences each run, ability to
exercise and troubleshoot stimuli and attached hardware, practice trials, inspection of
experiment "structure" (IVs and their descriptions, levels of IVs and their
physical level definitions), control panel and so on.
- Robust Data Files, Processing Tool: All data files contained
meta-data (info that describes the data) so that a second tool I developed could read any
data file from any experiment application, combine data from multiple files, and perform
the summaries and manipulations needed to prepare data for a variety of
analysis destinations (such as spreadsheet, database, statistics package). An
illustrative file layout follows this list.
- Dual Monitor: Experiments that provided visual stimuli
supported a second screen for the experimenter control panel.
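To illustrate the self-describing data file idea, a result file might carry a
description of the experiment's structure ahead of the trial records, so that the
processing tool can interpret any file without experiment-specific code. This is a
sketch of the concept only, not the actual file format the framework used; the
experiment name, variables and values shown are invented for illustration.

  ; meta-data block: describes this experiment's structure
  EXPERIMENT  FaceEmotion
  IV  Emotion     levels: happy, sad, neutral
  IV  ExposureMs  units:  ms
  DV  Response    levels: yes, no, none
  DV  ReactionMs  units:  ms
  ; data block: one record per trial
  ; Subject  Run  Trial  Emotion  ExposureMs  Response  ReactionMs
    S014     1    1      happy    500         yes       642
    S014     1    2      sad      500         no        710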
Over several years, many dozens of applications were built with this framework,
spawning hundreds of variants by way of different sets of parameters, stimuli and
sequences supplied by experimenters. Experiments involved animal or human subjects,
for projects at SDSU and UCSD, and some ran at sites across the US and
elsewhere (and continue to be run today). Personally, it gave me an affordable way
to be involved in fascinating and varied research, on topics such
as autism, AIDS, ADD, and even a few not beginning with "A".
One major factor gradually overtook this approach, however: newer users were
(understandably) less and less inclined to acquire even rudimentary DOS skills, and it's
not practical to work in a DOS environment without those skills. By about 1994
MS Windows 3.1 had become the PC users' environment of choice. In addition, clients wanted
to present visual and auditory stimuli that their Windows platforms appeared to be capable
of presenting.
However, there persisted a development "dark ages" in which programming was
either guru-stressingly difficult (working with the raw API) or unsettlingly vague and
unpredictable (VB), and in any case Windows was unsuited to doing anything with precise
timing. So it once again became uneconomical to try to satisfy clients' changed
needs.
Recent developments, however, might make this approach worthwhile again:
- Much better programming environments in which there's a chance to get predictable
millisecond-scale behavior (Delphi, C++ Builder, DirectX).
- Ubiquitous relational database support (ODBC, MS Access) would readily replace
text-format data files.
It remains to be seen whether this trail will be pursued!