Last edit: 05-09-15 Graham Wideman


AFNI -- All Helps on One Page
Article created: 2003-01-20

Overview

The NIMH AFNI home site provides a listing of the results of the -help option for all approximately 230 AFNI programs, which is an excellent service. I wanted them all on one page for quick browsing and searching en masse, so that is what is assembled here.

Special Request: If you anticipate using this page frequently, please save me some bandwidth, and yourself some time, by saving the page to your local computer (File > Save As, or similar).

Some Notes

Version: See the bottom line of each help output, where the "auto-generated" date indicates when this information was produced by the AFNI team. Note that I do not recreate this page frequently.

Listing Order: The listing order is alphabetical, ignoring case and ignoring the initial "@" character on some AFNI commands/scripts.

Improved handling of brackets: In a few cases this listing is slightly improved over the AFNI site: some AFNI programs emit help with angle brackets (greater-than/less-than signs) as part of their "Usage" line. Where these appear in the web version of the AFNI documentation, browsers read the brackets and the enclosed text as unknown HTML tags, and consequently do not display that text (e.g., as in SUMA_AlignToExperiment <EXPERIMENT Anatomy> <SURFACE Anatomy>). The listing below handles this problem by translating these brackets to their "character entity" form.

Help Output of all AFNI Programs

1dcat
Usage: 1dcat a.1D b.1D ...
where each file a.1D, b.1D, etc. is an ASCII file of numbers
arranged in rows and columns.
The row-by-row catenation of these files is written to stdout.

TIMESERIES (1D) INPUT
---------------------
A timeseries file is in the form of a 1D or 2D table of ASCII numbers;
for example:   3 5 7
               2 4 6
               0 3 3
               7 2 9
This example has 4 rows and 3 columns.  Each column is considered as
a timeseries in AFNI.  The convention is to store this type of data
in a filename ending in '.1D'.

When specifying a timeseries file to a command-line AFNI program, you
can select a subset of columns using the '[...]' notation:
  'fred.1D[5]'            ==> use only column #5
  'fred.1D[5,9,17]'       ==> use columns #5, #9, and #17
  'fred.1D[5..8]'         ==> use columns #5, #6, #7, and #8
  'fred.1D[5..13(2)]'     ==> use columns #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  'fred.1D[0..$(3)]'      ==> use columns #0, #3, #6, #9, ....
Similarly, you select a subset of the rows using the '{...}' notation:
  'fred.1D{0..$(2)}'      ==> use rows #0, #2, #4, ....
You can also use both notations together, as in
  'fred.1D[1,3]{1..$(2)}' ==> columns #1 and #3; rows #1, #3, #5, ....
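
The selector syntax above can be sketched in a few lines of Python. This is an illustrative re-implementation of the notation as described, not AFNI's own parser; the function name expand_selector is made up for this sketch:

```python
def expand_selector(spec, last):
    """Expand the body of an AFNI-style '[...]' or '{...}' selector into
    a list of indices.  'last' is the index that '$' stands for.
    Handles the forms shown above: 'a', 'a,b,c', 'a..b', 'a..b(step)'.
    Illustrative sketch only -- not AFNI's own parser."""
    indices = []
    for part in spec.split(','):
        if '..' in part:
            lo, hi = part.split('..')
            step = 1
            if '(' in hi:                       # 'a..b(step)' form
                hi, step = hi.rstrip(')').split('(')
                step = int(step)
            lo = last if lo == '$' else int(lo)
            hi = last if hi == '$' else int(hi)
            indices.extend(range(lo, hi + 1, step))
        else:
            indices.append(last if part == '$' else int(part))
    return indices

# The examples from the help text above:
print(expand_selector('5', 20))          # [5]
print(expand_selector('5,9,17', 20))     # [5, 9, 17]
print(expand_selector('5..8', 20))       # [5, 6, 7, 8]
print(expand_selector('5..13(2)', 20))   # [5, 7, 9, 11, 13]
print(expand_selector('0..$(3)', 9))     # [0, 3, 6, 9]
```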

You can also input a 1D time series 'dataset' directly on the command
line, without an external file. The 'filename' for such input has the
general format
  '1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float.  For
example
   -a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned to a 1D time series of 35
values, alternating in blocks between the values 0 and 1.
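
The '1D:' inline format just described can likewise be sketched in Python (an illustrative expansion of the format, not AFNI's parser; the name expand_1d_string is invented here):

```python
def expand_1d_string(spec):
    """Expand a '1D:n_1@val_1,n_2@val_2,...' string into a list of
    floats, per the format described above.  Illustrative sketch."""
    body = spec[3:] if spec.startswith('1D:') else spec
    values = []
    for piece in body.split(','):
        if '@' in piece:                 # 'n@val' means n copies of val
            n, val = piece.split('@')
            values.extend([float(val)] * int(n))
        else:                            # a bare number counts once
            values.append(float(piece))
    return values

series = expand_1d_string('1D:5@0,10@1,5@0,10@1,5@0')
print(len(series))    # 35
```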
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1ddot
Usage: 1ddot [options] 1Dfile 1Dfile ...
- Prints out correlation matrix of the 1D files and
  their inverse correlation matrix.
- Output appears on stdout.

Options:
 -one  =  Make 1st vector be all 1's.
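
The correlation matrix that 1ddot prints can be sketched in plain Python as the Pearson correlation of each pair of input columns (illustrative only; 1ddot also prints the inverse matrix, which is omitted here, and the names corr and correlation_matrix are made up):

```python
from math import sqrt

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / sqrt(sxx * syy)

def correlation_matrix(columns):
    """Matrix of correlations between every pair of columns, as in the
    first half of 1ddot's output.  Illustrative sketch."""
    return [[corr(a, b) for b in columns] for a in columns]

cols = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
for row in correlation_matrix(cols):
    print(['%+.2f' % v for v in row])
```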
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1deval
Usage: 1deval [options] -expr 'expression'
Evaluates the expression and writes the result to stdout.
Any single letter from a-z can be used as the independent
variable in the expression.

Options:
  -del d   = Use 'd' as the step for the variable in the
               expression [default = 1.0]
  -num n   = Evaluate the expression 'n' times.
               If -num is not used, then the length of an
               input time series is used.  If there are no
               time series input, then -num is required.
  -a q.1D  = Read time series file q.1D and assign it
               to the symbol 'a' (as in 3dcalc).
  -index i.1D = Read index column from file i.1D and
                 write it out as 1st column of output.
                 This option is useful when working with
                 surface data.
Examples:
  1deval -expr 'sin(2*PI*t)' -del 0.01 -num 101 > sin.1D
  1deval -expr 'a*b*x' -a fred.1D -b ethel.1D > x.1D

TIMESERIES (1D) INPUT
---------------------
A timeseries file is in the form of a 1D or 2D table of ASCII numbers;
for example:   3 5 7
               2 4 6
               0 3 3
               7 2 9
This example has 4 rows and 3 columns.  Each column is considered as
a timeseries in AFNI.  The convention is to store this type of data
in a filename ending in '.1D'.

When specifying a timeseries file to a command-line AFNI program, you
can select a subset of columns using the '[...]' notation:
  'fred.1D[5]'            ==> use only column #5
  'fred.1D[5,9,17]'       ==> use columns #5, #9, and #17
  'fred.1D[5..8]'         ==> use columns #5, #6, #7, and #8
  'fred.1D[5..13(2)]'     ==> use columns #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  'fred.1D[0..$(3)]'      ==> use columns #0, #3, #6, #9, ....
Similarly, you select a subset of the rows using the '{...}' notation:
  'fred.1D{0..$(2)}'      ==> use rows #0, #2, #4, ....
You can also use both notations together, as in
  'fred.1D[1,3]{1..$(2)}' ==> columns #1 and #3; rows #1, #3, #5, ....

You can also input a 1D time series 'dataset' directly on the command
line, without an external file. The 'filename' for such input has the
general format
  '1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float.  For
example
   -a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned to a 1D time series of 35
values, alternating in blocks between the values 0 and 1.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1dfft
Usage: 1dfft [options] infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with the absolute
value of the FFT of the input columns.  The length of the file
will be 1+(FFT length)/2.

Options:
  -ignore sss = Skip the first 'sss' lines in the input file.
                [default = no skipping]
  -use uuu    = Use only 'uuu' lines of the input file.
                [default = use them all, Frank]
  -nfft nnn   = Set FFT length to 'nnn'.
                [default = length of data (# of lines used)]
  -tocx       = Save Re and Im parts of transform in 2 columns.
  -fromcx     = Convert 2 column complex input into 1 column
                  real output.
  -hilbert    = When -fromcx is used, the inverse FFT will
                  do the Hilbert transform instead.
  -nodetrend  = Skip the detrending of the input.

Nota Bene:
 * Each input time series has any quadratic trend of the
     form 'a+b*t+c*t*t' removed before the FFT, where 't'
     is the line number.
 * The FFT length will be a power-of-2 times at most one
     factor of 3 and one factor of 5.  The smallest such
     length >= to the specified FFT length will be used.
 * If the FFT length is longer than the file length, the
     data is zero-padded to make up the difference.
 * Do NOT call the output of this program the Power Spectrum!
     That is something else entirely.
 * If 'outfile' is '-', the output appears on stdout.
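
The FFT-length rule in the notes above (a power of 2 times at most one factor of 3 and at most one factor of 5, smallest such length >= the request) can be sketched as follows. This is an illustrative reading of the rule, meaning candidate lengths of the form 2^a, 3*2^a, 5*2^a, or 15*2^a; the name afni_fft_length is invented here:

```python
def afni_fft_length(n):
    """Smallest length >= n that is a power of 2 times at most one
    factor of 3 and at most one factor of 5, per the rule above.
    Illustrative sketch, not AFNI's own code."""
    best = None
    mult = 1
    while mult < 2 * n:                 # powers of 2 up to a safe bound
        for extra in (1, 3, 5, 15):     # 1, one 3, one 5, or both
            length = mult * extra
            if length >= n and (best is None or length < best):
                best = length
        mult *= 2
    return best

print(afni_fft_length(100))   # 120 (= 8 * 15; 100 itself has two factors of 5)
print(afni_fft_length(64))    # 64
```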
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1dgrayplot
Usage: 1dgrayplot [options] tsfile
Graphs the columns of a *.1D type time series file to the screen,
sort of like 1dplot, but in grayscale.

Options:
 -install   = Install a new X11 colormap (for X11 PseudoColor)
 -ignore nn = Skip first 'nn' rows in the input file
                [default = 0]
 -flip      = Plot x and y axes interchanged.
                [default: data columns plotted DOWN the screen]
 -sep       = Separate scales for each column.
 -use mm    = Plot 'mm' points
                [default: all of them]
 -ps        = Don't draw plot in a window; instead, write it
              to stdout in PostScript format.
              N.B.: If you view this result in 'gv', you should
                    turn 'anti-alias' off, and switch to
                    landscape mode.

TIMESERIES (1D) INPUT
---------------------
A timeseries file is in the form of a 1D or 2D table of ASCII numbers;
for example:   3 5 7
               2 4 6
               0 3 3
               7 2 9
This example has 4 rows and 3 columns.  Each column is considered as
a timeseries in AFNI.  The convention is to store this type of data
in a filename ending in '.1D'.

When specifying a timeseries file to a command-line AFNI program, you
can select a subset of columns using the '[...]' notation:
  'fred.1D[5]'            ==> use only column #5
  'fred.1D[5,9,17]'       ==> use columns #5, #9, and #17
  'fred.1D[5..8]'         ==> use columns #5, #6, #7, and #8
  'fred.1D[5..13(2)]'     ==> use columns #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  'fred.1D[0..$(3)]'      ==> use columns #0, #3, #6, #9, ....
Similarly, you select a subset of the rows using the '{...}' notation:
  'fred.1D{0..$(2)}'      ==> use rows #0, #2, #4, ....
You can also use both notations together, as in
  'fred.1D[1,3]{1..$(2)}' ==> columns #1 and #3; rows #1, #3, #5, ....

You can also input a 1D time series 'dataset' directly on the command
line, without an external file. The 'filename' for such input has the
general format
  '1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float.  For
example
   -a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned to a 1D time series of 35
values, alternating in blocks between the values 0 and 1.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1dnorm
Usage: 1dnorm infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, with each column being
L2 normalized.
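
The L2 normalization 1dnorm performs on each column can be sketched in one small function (illustrative; l2_normalize is a made-up name, and this assumes "L2 normalized" means scaled so each column's sum of squares is 1):

```python
from math import sqrt

def l2_normalize(column):
    """Scale a column so its sum of squares is 1, as 1dnorm does for
    each column.  Illustrative sketch."""
    norm = sqrt(sum(v * v for v in column))
    return [v / norm for v in column]

print(l2_normalize([3.0, 4.0]))   # [0.6, 0.8]
```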
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1dplot
Usage: 1dplot [options] tsfile ...
Graphs the columns of a *.1D type time series file to the screen.

Options:
 -install   = Install a new X11 colormap.
 -sep       = Plot each column in a separate sub-graph.
 -one       = Plot all columns together in one big graph.
                [default = -sep]
 -dx xx     = Spacing between points on the x-axis is 'xx'
                [default = 1]
 -xzero zz  = Initial x coordinate is 'zz' [default = 0]
 -nopush    = Don't 'push' axes ranges outwards.
 -ignore nn = Skip first 'nn' rows in the input file
                [default = 0]
 -use mm    = Plot 'mm' points [default = all of them]
 -xlabel aa = Put string 'aa' below the x-axis
                [default = no axis label]
 -ylabel aa = Put string 'aa' to the left of the y-axis
                [default = no axis label]

 -stdin     = Don't read from tsfile; instead, read from
              stdin and plot it. You cannot combine input
              from stdin and tsfile(s).  If you want to do
              so, see program 1dcat.

 -ps        = Don't draw plot in a window; instead, write it
              to stdout in PostScript format.
              N.B.: If you view this result in 'gv', you should
                    turn 'anti-alias' off, and switch to
                    landscape mode.

 -xaxis b:t:n:m    = Set the x-axis to run from value 'b' to
                     value 't', with 'n' major divisions and
                     'm' minor tic marks per major division.
                     For example:
                       -xaxis 0:100:5:20
                     Setting 'n' to 0 means no tic marks or labels.

 -yaxis b:t:n:m    = Similar to above, for the y-axis.  These
                     options override the normal autoscaling
                     of their respective axes.

 -ynames aa bb ... = Use the strings 'aa', 'bb', etc., as
                     labels to the right of the graphs,
                     corresponding to each input column.
                     These strings CANNOT start with the
                     '-' character.

 -volreg           = Makes the 'ynames' be the same as the
                     6 labels used in plug_volreg for
                     Roll, Pitch, Yaw, I-S, R-L, and A-P
                     movements, in that order.

You may also select a subset of columns to display using
a tsfile specification like 'fred.1D[0,3,5]', indicating
that columns #0, #3, and #5 will be the only ones plotted.
For more details on this selection scheme, see the output
of '3dcalc -help'.

Example: graphing a 'dfile' output by 3dvolreg, when TR=5:
   1dplot -volreg -dx 5 -xlabel Time 'dfile[1..6]'

You can also input more than one tsfile, in which case the files
will all be plotted.  However, if the files have different column
lengths, the shortest one will rule.

The colors for the line graphs cycle between black, red, green, and
blue.  You can alter these colors by setting Unix environment
variables of the form AFNI_1DPLOT_COLOR_xx -- cf. README.environment.
You can alter the thickness of the lines by setting the variable
AFNI_1DPLOT_THIK to a value between 0.00 and 0.05 -- the units are
fractions of the page size.

TIMESERIES (1D) INPUT
---------------------
A timeseries file is in the form of a 1D or 2D table of ASCII numbers;
for example:   3 5 7
               2 4 6
               0 3 3
               7 2 9
This example has 4 rows and 3 columns.  Each column is considered as
a timeseries in AFNI.  The convention is to store this type of data
in a filename ending in '.1D'.

When specifying a timeseries file to a command-line AFNI program, you
can select a subset of columns using the '[...]' notation:
  'fred.1D[5]'            ==> use only column #5
  'fred.1D[5,9,17]'       ==> use columns #5, #9, and #17
  'fred.1D[5..8]'         ==> use columns #5, #6, #7, and #8
  'fred.1D[5..13(2)]'     ==> use columns #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  'fred.1D[0..$(3)]'      ==> use columns #0, #3, #6, #9, ....
Similarly, you select a subset of the rows using the '{...}' notation:
  'fred.1D{0..$(2)}'      ==> use rows #0, #2, #4, ....
You can also use both notations together, as in
  'fred.1D[1,3]{1..$(2)}' ==> columns #1 and #3; rows #1, #3, #5, ....

You can also input a 1D time series 'dataset' directly on the command
line, without an external file. The 'filename' for such input has the
general format
  '1D:n_1@val_1,n_2@val_2,n_3@val_3,...'
where each 'n_i' is an integer and each 'val_i' is a float.  For
example
   -a '1D:5@0,10@1,5@0,10@1,5@0'
specifies that variable 'a' be assigned to a 1D time series of 35
values, alternating in blocks between the values 0 and 1.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1dsum
Usage: 1dsum [options] a.1D b.1D ...
where each file a.1D, b.1D, etc. is an ASCII file of numbers arranged
in rows and columns. The sum of each column is written to stdout.

Options:
  -ignore nn = skip the first nn rows of each file
  -use    mm = use only mm rows from each file
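
The per-column summing with -ignore and -use can be sketched like so (illustrative; column_sums is an invented name, and the example table is the 4-row/3-column one used in the timeseries notes above):

```python
def column_sums(rows, ignore=0, use=None):
    """Sum each column of a table of numbers, after skipping the first
    'ignore' rows and then keeping at most 'use' rows -- mirroring
    1dsum's -ignore and -use options.  Illustrative sketch."""
    kept = rows[ignore:]
    if use is not None:
        kept = kept[:use]
    return [sum(col) for col in zip(*kept)]

rows = [[3, 5, 7],
        [2, 4, 6],
        [0, 3, 3],
        [7, 2, 9]]
print(column_sums(rows))              # [12, 14, 25]
print(column_sums(rows, ignore=1))    # [9, 9, 18]
```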
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1dsvd
Usage: 1dsvd [options] 1Dfile 1Dfile ...
- Computes SVD of the matrix formed by the 1D file(s).
- Output appears on stdout; to save it, use '>' redirection.

Options:
 -one     = Make 1st vector be all 1's.
 -cond    = Only print condition number (ratio of extremes)
 -sing    = Only print singular values
 -1Dright = Only output right eigenvectors, in a .1D format
            This can be useful for reducing the number of
            columns in a design matrix.  The singular values
            are printed at the top of each vector column.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
1dtranspose
Usage: 1dtranspose infile outfile
where infile is an AFNI *.1D file (ASCII list of numbers arranged
in columns); outfile will be a similar file, but transposed.
You can use a column subvector selector list on infile, as in
  1dtranspose 'fred.1D[0,3,7]' ethel.1D

* This program may produce files with lines longer than a
   text editor can handle.
* If 'outfile' is '-' (or missing entirely), output goes to stdout.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
24swap
Usage: 24swap [options] file ...
Swaps bytes pairs and/or quadruples on the files listed.
Options:
 -q            Operate quietly
 -pattern pat  'pat' determines the pattern of 2 and 4
                 byte swaps.  Each element is of the form
                 2xN or 4xN, where N is the number of
                 bytes to swap as pairs (for 2x) or
                 as quadruples (for 4x).  For 2x, N must
                 be divisible by 2; for 4x, N must be
                 divisible by 4.  The whole pattern is
                 made up of elements separated by colons,
                 as in '-pattern 4x39984:2x0'.  If bytes
                 are left over after the pattern is used
                 up, the pattern starts over.  However,
                 if a byte count N is zero, as in the
                 example below, then it means to continue
                 until the end of file.

 N.B.: You can also use 1xN as a pattern, indicating to
         skip N bytes without any swapping.
 N.B.: A default pattern can be stored in the Unix
         environment variable AFNI_24SWAP_PATTERN.
         If no -pattern option is given, the default
         will be used.  If there is no default, then
         nothing will be done.
 N.B.: If there are bytes 'left over' at the end of the file,
         they are written out unswapped.  This will happen
         if the file is an odd number of bytes long.
 N.B.: If you just want to swap pairs, see program 2swap.
         For quadruples only, see program 4swap.
 N.B.: This program will overwrite the input file!
         You might want to test it first.

 Example: 24swap -pat 4x8:2x0 fred
          If fred contains 'abcdabcdabcdabcdabcd' on input,
          then fred has    'dcbadcbabadcbadcbadc' on output.
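
The pattern behavior, including the example just above, can be sketched in Python. This is an illustrative re-implementation of the documented rules (swap_pattern is a made-up name), operating on an in-memory byte string rather than overwriting a file:

```python
def swap_pattern(data, pattern):
    """Apply a 24swap-style pattern to a byte string: '2xN' swaps N
    bytes as pairs, '4xN' as quadruples, '1xN' skips N bytes unswapped;
    a count of 0 means 'until end of input', and the pattern repeats if
    bytes remain.  Leftover bytes at the end pass through unswapped.
    Illustrative sketch, not 24swap's own code."""
    elements = []
    for elem in pattern.split(':'):
        width, count = elem.split('x')
        elements.append((int(width), int(count)))
    out = bytearray()
    pos = 0
    while pos < len(data):
        for width, count in elements:
            n = len(data) - pos if count == 0 else count
            chunk = data[pos:pos + n]
            if width == 1:
                out += chunk                      # plain skip
            else:
                for i in range(0, len(chunk) - width + 1, width):
                    out += chunk[i:i + width][::-1]   # reverse each group
                out += chunk[len(chunk) - len(chunk) % width:]  # leftovers
            pos += len(chunk)
            if pos >= len(data):
                break
    return bytes(out)

# The example from the help text:
print(swap_pattern(b'abcdabcdabcdabcdabcd', '4x8:2x0'))  # b'dcbadcbabadcbadcbadc'
```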
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
2dImReg

Program:          2dImReg 
Initial Release:  04 Feb 1998 
Latest Revision:  02 Dec 2002 

This program performs 2d image registration.  Image alignment is      
performed on a slice-by-slice basis for the input 3d+time dataset,    
relative to a user specified base image.                              
                                                                      
Usage:                                                                
2dImReg                                                               
-input fname           Filename of input 3d+time dataset to process   
-basefile fname        Filename of 3d+time dataset for base image     
                         (default = current input dataset)            
-base num              Time index for base image  (0 <= num)          
                         (default:  num = 3)                          
-nofine                Deactivate fine fit phase of image registration
                         (default:  fine fit is active)               
-fine blur dxy dphi    Set fine fit parameters                        
   where:                                                             
     blur = FWHM of blurring prior to registration (in pixels)        
               (default:  blur = 1.0)                                 
     dxy  = Convergence tolerance for translations (in pixels)        
               (default:  dxy  = 0.07)                                
     dphi = Convergence tolerance for rotations (in degrees)          
               (default:  dphi = 0.21)                                
                                                                      
-prefix pname     Prefix name for output 3d+time dataset              
                                                                      
-dprefix dname    Write files 'dname'.dx, 'dname'.dy, 'dname'.psi     
                    containing the registration parameters for each   
                    slice in chronological order.                     
                    File formats:                                     
                      'dname'.dx:    time(sec)   dx(pixels)           
                      'dname'.dy:    time(sec)   dy(pixels)           
                      'dname'.psi:   time(sec)   psi(degrees)         
-dmm              Change dx and dy output format from pixels to mm    
                                                                      
-rprefix rname    Write files 'rname'.oldrms and 'rname'.newrms       
                    containing the volume RMS error for the original  
                    and the registered datasets, respectively.        
                    File formats:                                     
                      'rname'.oldrms:   volume(number)   rms_error    
                      'rname'.newrms:   volume(number)   rms_error    
                                                                      
-debug            Lots of additional output to screen                 
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
2swap
Usage: 2swap [-q] file ...
-- Swaps byte pairs on the files listed.
   The -q option means to work quietly.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
3dAFNIto3D
Usage: 3dAFNIto3D [options] dataset
Reads in an AFNI dataset, and writes it out as a 3D file.

OPTIONS:
 -prefix ppp  = Write result into file ppp.3D;
                  default prefix is same as AFNI dataset's.
 -bin         = Write data in binary format, not text.
 -txt         = Write data in text format, not binary.

NOTES:
* At present, all bricks are written out in float format.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
3dAFNItoANALYZE
Usage: 3dAFNItoANALYZE [-4D] [-orient code] aname dset
Writes AFNI dataset 'dset' to 1 or more ANALYZE 7.5 format
.hdr/.img file pairs (one pair for each sub-brick in the
AFNI dataset).  The ANALYZE files will be named
  aname_0000.hdr aname_0000.img   for sub-brick #0
  aname_0001.hdr aname_0001.img   for sub-brick #1
and so forth.  Each file pair will contain a single 3D array.

* If the AFNI dataset does not include sub-brick scale
  factors, then the ANALYZE files will be written in the
  datum type of the AFNI dataset.
* If the AFNI dataset does have sub-brick scale factors,
  then each sub-brick will be scaled to floating format
  and the ANALYZE files will be written as floats.
* The .hdr and .img files are written in the native byte
  order of the computer on which this program is executed.

Options
-------
-4D [30 Sep 2002]:
 If you use this option, then all the data will be written to
 one big ANALYZE file pair named aname.hdr/aname.img, rather
 than a series of 3D files.  Even if you only have 1 sub-brick,
 you may prefer this option, since the filenames won't have
 the '_0000' appended to 'aname'.

-orient code [19 Mar 2003]:
 This option lets you flip the dataset to a different orientation
 when it is written to the ANALYZE files.  The orientation code is
 formed as follows:
   The code must be 3 letters, one each from the
   pairs {R,L} {A,P} {I,S}.  The first letter gives
   the orientation of the x-axis, the second the
   orientation of the y-axis, the third the z-axis:
      R = Right-to-Left          L = Left-to-Right
      A = Anterior-to-Posterior  P = Posterior-to-Anterior
      I = Inferior-to-Superior   S = Superior-to-Inferior
   For example, 'LPI' means
      -x = Left       +x = Right
      -y = Posterior  +y = Anterior
      -z = Inferior   +z = Superior
 * For display in SPM, 'LPI' or 'RPI' seem to work OK.
    Be careful with this: you don't want to confuse L and R
    in the SPM display!
 * If you DON'T use this option, the dataset will be written
    out in the orientation in which it is stored in AFNI
    (e.g., the output of '3dinfo dset' will tell you this.)
 * The dataset orientation is NOT stored in the .hdr file.
 * AFNI and ANALYZE data are stored in files with the x-axis
    varying most rapidly and the z-axis most slowly.
 * Note that if you read an ANALYZE dataset into AFNI for
    display, AFNI assumes the LPI orientation, unless you
    set environment variable AFNI_ANALYZE_ORIENT.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
3dAFNItoMINC
Usage: 3dAFNItoMINC [options] dataset
Reads in an AFNI dataset, and writes it out as a MINC file.

OPTIONS:
 -prefix ppp  = Write result into file ppp.mnc;
                  default prefix is same as AFNI dataset's.
 -floatize    = Write MINC file in float format.

NOTES:
* Multi-brick datasets are written as 4D (x,y,z,t) MINC
   files.
* If the dataset has complex-valued sub-bricks, then this
   program won't write the MINC file.
* If any of the sub-bricks have floating point scale
   factors attached, then the output will be in float
   format (regardless of the presence of -floatize).
* This program uses the MNI program 'rawtominc' to create
   the MINC file; rawtominc must be in your path.  If you
   don't have rawtominc, you must install the MINC tools
   software package from MNI.  (But if you don't have the
   MINC tools already, why do you want to convert to MINC
   format anyway?)
* At this time, you can find the MINC tools at
     ftp://ftp.bic.mni.mcgill.ca/pub/minc/
   You need the latest version of minc-*.tar.gz and also
   of netcdf-*.tar.gz.

-- RWCox - April 2002
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
3dAFNItoNIFTI
++ Program 3dAFNItoNIFTI: AFNI version=AFNI_2005_08_24_1751
Usage: 3dAFNItoNIFTI [options] dataset
Reads an AFNI dataset, writes it out as a NIfTI-1.1 (.nii) file.

NOTES:
* The nifti_tool program can be used to manipulate
   the contents of a NIfTI-1.1 file.
* The input dataset can actually be in any input format
   that AFNI can read directly (e.g., MINC-1).
* There is no 3dNIFTItoAFNI program, since AFNI programs
   can directly read .nii files.  If you wish to make such
   a conversion anyway, one way to do so is like so:
     3dcalc -a ppp.nii -prefix ppp -expr 'a'

OPTIONS:
  -prefix ppp = Write the NIfTI-1.1 file as 'ppp.nii'.
                  Default: the dataset's prefix is used.
                  If you want a compressed file, try
                  using a prefix like 'ppp.nii.gz'.
  -verb       = Be verbose = print progress messages.
                  Repeating this increases the verbosity
                  (maximum setting is 3 '-verb' options).
  -float      = Force the output dataset to be 32-bit
                  floats.  This option should be used when
                  the input AFNI dataset has different
                  float scale factors for different sub-bricks,
                  an option that NIfTI-1.1 does not support.

The following options affect the contents of the AFNI extension
field that is written by default into the NIfTI-1.1 header:

  -pure       = Do NOT write an AFNI extension field into
                  the output file.  Only use this option if
                  needed.  You can also use the 'nifti_tool'
                  program to strip extensions from a file.
  -denote     = When writing the AFNI extension field, remove
                  text notes that might contain subject
                  identifying information.
  -oldid      = Give the new dataset the input dataset's
                  AFNI ID code.
  -newid      = Give the new dataset a new AFNI ID code, to
                  distinguish it from the input dataset.
     **** N.B.:  -newid is now the default action.
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
3dAFNItoNIML
Usage: 3dAFNItoNIML [options] dset
 Dumps AFNI dataset header information to stdout in NIML format.
 Mostly for debugging and testing purposes!

 OPTIONS:
  -data          == Also put the data into the output (will be huge).
  -tcp:host:port == Instead of stdout, send the dataset to a socket.
                    (implies '-data' as well)

-- RWCox - Mar 2005
This page auto-generated on Thu Aug 25 16:49:35 EDT 2005
3dAFNItoRaw
Usage: 3dAFNItoRaw [options] dataset
Convert an AFNI brik file with multiple sub-briks to a raw file with
  each sub-brik voxel concatenated voxel-wise.
For example, a dataset with 3 sub-briks X,Y,Z with elements x1,x2,x3,...,xn,
  y1,y2,y3,...,yn and z1,z2,z3,...,zn will be converted to a raw dataset with
  elements x1,y1,z1, x2,y2,z2, x3,y3,z3, ..., xn,yn,zn 
The dataset is kept in the original data format (float/short/int)
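
The voxel-wise interleaving described above amounts to a zip-and-flatten over the sub-bricks, sketched here with each sub-brick as a flat list of voxel values (illustrative only; interleave_subbricks is an invented name):

```python
def interleave_subbricks(subbricks):
    """Concatenate several sub-bricks voxel-wise, producing
    x1,y1,z1, x2,y2,z2, ... as described above.  Illustrative sketch
    of the output ordering only."""
    return [v for voxel in zip(*subbricks) for v in voxel]

x = [1, 2, 3]
y = [10, 20, 30]
z = [100, 200, 300]
print(interleave_subbricks([x, y, z]))  # [1, 10, 100, 2, 20, 200, 3, 30, 300]
```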
Options:
  -output / -prefix = name of the output file (not an AFNI dataset prefix)
    the default output name will be rawxyz.dat

  -datum float = force floating point output. Floating point forced if any
    sub-brik scale factors not equal to 1.


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dANALYZEtoAFNI
** DON'T USE THIS PROGRAM!  REALLY!
USE 3dcopy OR to3d INSTEAD.

IF YOU CHOOSE TO USE IT ANYWAY, PERHAPS
BECAUSE IT WORKS BETTER ON YOUR 12th
CENTURY PLANTAGENET ANALYZE FILES,
ADD THE OPTION -OK TO YOUR COMMAND
LINE.

Usage: 3dANALYZEtoAFNI [options] file1.hdr file2.hdr ...
This program constructs a 'volumes' stored AFNI dataset
from the ANALYZE-75 files file1.img file2.img ....
In this type of dataset, there is only a .HEAD file; the
.BRIK file is replaced by the collection of .img files.
- Other AFNI programs can read (but not write) this type
  of dataset.
- The advantage of using this type of dataset vs. one created
   with to3d is that you don't have to duplicate the image data
   into a .BRIK file, thus saving disk space.
- The disadvantage of using 'volumes' for a multi-brick dataset
   is that all the .img files must be kept with the .HEAD file
   if you move the dataset around.
- The .img files must be in the same directory as the .HEAD file.
- Note that you put the .hdr files on the command line, but it is
   the .img files that will be named in the .HEAD file.
- After this program is run, you must keep the .img files with
   the output .HEAD file.  AFNI doesn't need the .hdr files, but
   other programs (e.g., FSL, SPM) will want them as well.

Options:
 -prefix ppp   = Save the dataset with the prefix name 'ppp'.
                  [default='a2a']
 -view vvv     = Save the dataset in the 'vvv' view, where
                  'vvv' is one of 'orig', 'acpc', or 'tlrc'.
                  [default='orig']

 -TR ttt       = For multi-volume datasets, create it as a
                  3D+time dataset with TR set to 'ttt'.
 -fbuc         = For multi-volume datasets, create it as a
                  functional bucket dataset.
 -abuc         = For multi-volume datasets, create it as an
                  anatomical bucket dataset.
   ** If more than one ANALYZE file is input, and none of the
       above options is given, the default is as if '-TR 1s'
       was used.
   ** For single volume datasets (1 ANALYZE file input), the
       default is '-abuc'.

 -geomparent g = Use the .HEAD file from dataset 'g' to set
                  the geometry of this dataset.
   ** If you don't use -geomparent, then the following options
       can be used to specify the geometry of this dataset:
 -orient code  = Tells the orientation of the 3D volumes.  The code
                  must be 3 letters, one each from the pairs {R,L}
                  {A,P} {I,S}.  The first letter gives the orientation
                  of the x-axis, the second the orientation of the
                  y-axis, the third the z-axis:
                   R = right-to-left         L = left-to-right
                   A = anterior-to-posterior P = posterior-to-anterior
                   I = inferior-to-superior  S = superior-to-inferior
 -zorigin dz   = Puts the center of the 1st slice at the
                  given offset ('dz' in mm).  This distance
                  is in the direction given by the corresponding
                  letter in the -orient code.  For example,
                    -orient RAI -zorigin 30
                  would set the center of the first slice at
                  30 mm Inferior.
   ** If the above options are NOT used to specify the geometry
       of the dataset, then the default is '-orient RAI', and the
       z origin is set to center the slices about z=0.

 It is likely that you will want to patch up the .HEAD file using
 program 3drefit.

 -- RWCox - June 2002.


** DON'T USE THIS PROGRAM!  REALLY!
USE 3dcopy OR to3d INSTEAD.

IF YOU CHOOSE TO USE IT ANYWAY, PERHAPS
BECAUSE IT WORKS BETTER ON YOUR 12th
CENTURY PLANTAGENET ANALYZE FILES,
ADD THE OPTION -OK TO YOUR COMMAND
LINE.

-- KRH - April 2005.

This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dAnatNudge
Usage: 3dAnatNudge [options]
Moves the anat dataset around to best overlap the epi dataset.

OPTIONS:
 -anat aaa   = aaa is a 'scalped' (3dIntracranial) high-resolution
                anatomical dataset [a mandatory option]
 -epi eee    = eee is an EPI dataset [a mandatory option]
                The first [0] sub-brick from each dataset is used,
                unless otherwise specified on the command line.
 -prefix ppp = ppp is the prefix of the output dataset;
                this dataset will differ from the input only
                in its name and its xyz-axes origin
                [default=don't write new dataset]
 -step sss   = set the step size to be sss times the voxel size
                in the anat dataset [default=1.0]
 -x nx       = search plus and minus nx steps along the EPI
 -y ny          dataset's x-axis; similarly for ny and the
 -z nz          y-axis, and for nz and the z-axis
                [default: nx=1 ny=5 nz=0]
 -verb       = print progress reports (this is a slow program)

NOTES
*Systematically moves the anat dataset around and finds the shift
  that maximizes overlap between the anat dataset and the EPI
  dataset.  No rotations are done.
*Note that if you use -prefix, a new dataset will be created that
  is a copy of the anat, except that its origin will be shifted
  and it will have a different ID code than the anat.  If you want
  to use this new dataset as the anatomy parent for the EPI
  datasets, you'll have to use
    3drefit -apar ppp+orig eee1+orig eee2+orig ...
*If no new dataset is written (no -prefix option), then you
  can use the 3drefit command emitted at the end to modify
  the origin of the anat dataset.  (Assuming you trust the
  results - visual inspection is recommended!)
*The reason the default search grid is mostly along the EPI y-axis
  is that axis is usually the phase-encoding direction, which is
  most subject to displacement due to off-resonance effects.
*Note that the time this program takes will be proportional to
  (2*nx+1)*(2*ny+1)*(2*nz+1), so using a very large search grid
  will result in a very large usage of CPU time.
*Recommended usage:
 + Make a 1-brick function volume from a typical EPI dataset:
     3dbucket -fbuc -prefix epi_fb epi+orig
 + Use 3dIntracranial to scalp a T1-weighted volume:
     3dIntracranial -anat spgr+orig -prefix spgr_st
 + Use 3dAnatNudge to produce a shifted anat dataset
     3dAnatNudge -anat spgr_st+orig -epi epi_fb+orig -prefix spgr_nudge
 + Start AFNI and look at epi_fb overlaid in color on the
    anat datasets spgr_st+orig and spgr_nudge+orig, to see if the
    nudged dataset seems like a better fit.
 + Delete the nudged dataset spgr_nudge.
 + If the nudged dataset DOES look better, then apply the
    3drefit command output by 3dAnatNudge to spgr+orig.
*Note that the x-, y-, and z-axes for the epi and anat datasets
  may point in different directions (e.g., axial SPGR and
  coronal EPI).  The 3drefit command applies to the anat
  dataset, NOT to the EPI dataset.
*If the program runs successfully, the only thing set to stdout
  will be the 3drefit command string; all other messages go to
  stderr.  This can be useful if you want to capture the command
  to a shell variable and then execute it, as in the following
  csh fragment:
     set cvar = `3dAnatNudge ...`
     if( $cvar[1] == "3drefit" ) $cvar
  The test on the first sub-string in cvar allows for the
  possibility that the program fails, or that the optimal
  nudge is zero.
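The csh fragment above can be expressed in a Bourne-type shell as well. The sketch below is hypothetical: fake_nudge merely stands in for an actual 3dAnatNudge run (its 3drefit arguments are invented), so only the capture-and-test pattern is being illustrated:

```shell
# fake_nudge stands in for '3dAnatNudge -anat ... -epi ...' (assumption:
# AFNI is not actually run here); on success the real program prints a
# 3drefit command string to stdout, and everything else goes to stderr.
fake_nudge() { echo "3drefit -dzorigin 3.0 spgr+orig"; }

cmd=$(fake_nudge)
case "$cmd" in
  3drefit*) echo "would apply: $cmd" ;;   # in real use: eval "$cmd"
  *)        echo "no nudge needed" ;;
esac
```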
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dAnhist
Usage: 3dAnhist [options] dataset
Input dataset is a T1-weighted high-res of the brain (shorts only).
Output is a list of peaks in the histogram, to stdout, in the form
  ( datasetname #peaks peak1 peak2 ... )
In the C-shell, for example, you could do
  set anhist = `3dAnhist -q -w1 dset+orig`
Then the number of peaks found is in the shell variable $anhist[2].
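The same capture works in a POSIX shell, sketched here with a hypothetical fake_anhist standing in for '3dAnhist -q -w1 dset+orig' (the dataset name and peak values are invented):

```shell
# fake_anhist mimics the stdout of '3dAnhist -q -w1 dset+orig', which has
# the form "( datasetname #peaks peak1 peak2 ... )".
fake_anhist() { echo "( dset+orig 2 520.0 780.0 )"; }

set -- $(fake_anhist)    # word-split: $1='(' $2=name $3=#peaks ...
npeaks=$3                # third field = number of peaks found
echo "peaks found: $npeaks"
```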

Options:
  -q  = be quiet (don't print progress reports)
  -h  = dump histogram data to Anhist.1D and plot to Anhist.ps
  -F  = DON'T fit histogram with stupid curves.
  -w  = apply a Winsorizing filter prior to histogram scan
         (or -w7 to Winsorize 7 times, etc.)
  -2  = Analyze top 2 peaks only, for overlap etc.

  -label xxx = Use 'xxx' for a label on the Anhist.ps plot file
                instead of the input dataset filename.
  -fname fff = Use 'fff' for the filename instead of 'Anhist'.

If the '-2' option is used, AND if 2 peaks are detected, AND if
the -h option is also given, then stdout will be of the form
  ( datasetname 2 peak1 peak2 thresh CER CJV count1 count2 count1/count2)
where 2      = number of peaks
      thresh = threshold between peak1 and peak2 for decision-making
      CER    = classification error rate of thresh
      CJV    = coefficient of joint variation
      count1 = area under fitted PDF for peak1
      count2 = area under fitted PDF for peak2
      count1/count2 = ratio of the above quantities
NOTA BENE
---------
* If the input is a T1-weighted MRI dataset (the usual case), then
   peak 1 should be the gray matter (GM) peak and peak 2 the white
   matter (WM) peak.
* For the definitions of CER and CJV, see the paper
   Method for Bias Field Correction of Brain T1-Weighted Magnetic
   Resonance Images Minimizing Segmentation Error
   JD Gispert, S Reig, J Pascau, JJ Vaquero, P Garcia-Barreno,
   and M Desco, Human Brain Mapping 22:133-144 (2004).
* Roughly speaking, CER is the ratio of the overlapping area of the
   2 peak fitted PDFs to the total area of the fitted PDFs.  CJV is
   (sigma_GM+sigma_WM)/(mean_WM-mean_GM), and is a different, ad hoc,
   measurement of how much the two PDFs overlap.
* The fitted PDFs are NOT Gaussians.  They are of the form
   f(x) = b((x-p)/w,a), where p=location of peak, w=width, 'a' is
   a skewness parameter between -1 and 1; the basic distribution
   is defined by b(x)=(1-x^2)^2*(1+a*x*abs(x)) for -1 < x < 1.
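As a numeric illustration of the basic distribution b(x) described above (the skewness a=0.5 is an arbitrary choice, not from the help text):

```shell
# Evaluate b(x) = (1-x^2)^2 * (1 + a*x*|x|) at a few points in [-1,1].
# Note b(-1)=b(1)=0 and b(0)=1; positive 'a' skews mass toward x>0.
awk 'BEGIN {
  a = 0.5
  n = split("-1 -0.5 0 0.5 1", xs, " ")
  for (i = 1; i <= n; i++) {
    x = xs[i]
    b = (1 - x*x)^2 * (1 + a * x * (x < 0 ? -x : x))
    printf "b(%+.1f) = %.4f\n", x, b
  }
}'
```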

-- RWCox - November 2004
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dANOVA
++ Program 3dANOVA: AFNI version=AFNI_2005_08_24_1751
This program performs single factor Analysis of Variance (ANOVA)      
on 3D datasets                                                        
                                                                      
---------------------------------------------------------------       
                                                                      
Usage:                                                                
-----                                                                 
                                                                      
3dANOVA                                                               
   -levels r                   : r = number of factor levels          
                                                                      
   -dset 1 filename            : data set for factor level 1          
         . . .                            . . .                       
   -dset 1 filename              data set for factor level 1          
         . . .                            . . .                       
   -dset r filename              data set for factor level r          
         . . .                             . . .                      
   -dset r filename              data set for factor level r          
                                                                      
  [-voxel num]                 : screen output for voxel # num        
                                                                      
  [-diskspace]                 : print out disk space required for    
                                 program execution                    
                                                                      
The following commands generate individual AFNI 2-sub-brick datasets: 
  (In each case, output is written to the file with the specified     
   prefix file name.)                                                 
                                                                      
  [-ftr prefix]                : F-statistic for treatment effect     
                                                                      
  [-mean i prefix]             : estimate of factor level i mean      
                                                                      
  [-diff i j prefix]           : difference between factor levels     
                                                                      
  [-contr c1...cr prefix]      : contrast in factor levels            
                                                                      
The following command generates one AFNI 'bucket' type dataset:       
                                                                      
  [-bucket prefix]             : create one AFNI 'bucket' dataset whose 
                                 sub-bricks are obtained by             
                                 concatenating the above output files;  
                                 the output 'bucket' is written to file 
                                 with prefix file name                  

N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used,       
      e.g., -dset 2 'fred+orig[3]'                                    

Example of 3dANOVA:                                                   
------------------                                                    
                                                                      
 Example is based on a study with one factor (independent variable)   
 called 'Pictures', with 3 levels:                                    
        (1) Faces, (2) Houses, and (3) Donuts                         
                                                                      
 The ANOVA is being conducted on subject Fred's data:                 
                                                                      
 3dANOVA -levels 3                     \                             
         -dset 1 fred_Faces+tlrc       \                             
         -dset 2 fred_Houses+tlrc      \                             
         -dset 3 fred_Donuts+tlrc      \                             
         -ftr Pictures                 \                             
         -mean 1 Faces                 \                             
         -mean 2 Houses                \                             
         -mean 3 Donuts                \                             
         -diff 1 2 FvsH                \                             
         -diff 2 3 HvsD                \                             
         -diff 1 3 FvsD                \                             
         -contr  1  1 -1 FHvsD         \                             
         -contr -1  1  1 FvsHD         \                             
         -contr  1 -1  1 FDvsH         \                             
         -bucket fred_ANOVA                                           

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
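To illustrate what the sub-brick selector above means, here is a small shell sketch that expands '[3..5]' the way AFNI interprets it (the parsing is illustrative only, not AFNI's actual code):

```shell
# '[3..5]' in a dataset name selects sub-bricks 3, 4, and 5.
sel='r1+orig[3..5]'
range=${sel#*\[}; range=${range%]}   # -> "3..5"
lo=${range%%.*};  hi=${range##*.}    # -> "3" and "5"
seq "$lo" "$hi" | tr '\n' ' '; echo  # the selected sub-brick indexes
```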
---------------------------------------------------
Also see HowTo#5 - Group Analysis on the AFNI website:                
http://afni.nimh.nih.gov/pub/dist/HOWTO/howto/ht05_group/html/index.shtml

This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dANOVA2

Program:          3dANOVA2 
Author:           B. Douglas Ward 
Initial Release:  09 Dec 1996 
Latest Revision:  02 Aug 2005 

This program performs two-factor ANOVA on 3D data sets 

Usage: 
3dANOVA2 
-type k          type of ANOVA model to be used:                      
                    k=1  fixed effects model  (A and B fixed)         
                    k=2  random effects model (A and B random)        
                    k=3  mixed effects model  (A fixed, B random)     
                                                                      
-alevels a                     a = number of levels of factor A       
-blevels b                     b = number of levels of factor B       
-dset 1 1 filename             data set for level 1 of factor A       
                                        and level 1 of factor B       
 . . .                           . . .                                
                                                                      
-dset i j filename             data set for level i of factor A       
                                        and level j of factor B       
 . . .                           . . .                                
                                                                      
-dset a b filename             data set for level a of factor A       
                                        and level b of factor B       
                                                                      
[-voxel num]                   screen output for voxel # num          
[-diskspace]                   print out disk space required for      
                                  program execution                   
                                                                      
                                                                      
The following commands generate individual AFNI 2 sub-brick datasets: 
  (In each case, output is written to the file with the specified     
   prefix file name.)                                                 
                                                                      
[-ftr prefix]                F-statistic for treatment effect         
[-fa prefix]                 F-statistic for factor A effect          
[-fb prefix]                 F-statistic for factor B effect          
[-fab prefix]                F-statistic for interaction              
[-amean i prefix]            estimate mean of factor A level i        
[-bmean j prefix]            estimate mean of factor B level j        
[-xmean i j prefix]          estimate mean of cell at level i of      
                                factor A, level j of factor B         
[-adiff i j prefix]          difference between levels i and j of     
                                factor A                              
[-bdiff i j prefix]          difference between levels i and j of     
                                factor B                              
[-xdiff i j k l prefix]      difference between cell mean at A=i,B=j  
                                and cell mean at A=k,B=l              
[-acontr c1 ... ca prefix]   contrast in factor A levels              
[-bcontr c1 ... cb prefix]   contrast in factor B levels              
[-xcontr c11 ... c1b c21 ... c2b  ...  ca1 ... cab  prefix]           
                             contrast in cell means                   
                                                                      
                                                                      
The following command generates one AFNI 'bucket' type dataset:       
                                                                      
[-bucket prefix]         create one AFNI 'bucket' dataset whose       
                           sub-bricks are obtained by concatenating   
                           the above output files; the output 'bucket'
                           is written to file with prefix file name   


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 4 'fred+orig[3]'                                        

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dANOVA3

Program:          3dANOVA3 
Author:           B. Douglas Ward 
Initial Release:  29 Jan 1997 
Latest Revision:  19 Jul 2004 

This program performs three-factor ANOVA on 3D data sets.           

Usage: 
3dANOVA3 
-type  k          type of ANOVA model to be used:                     
                         k = 1   A,B,C fixed;          AxBxC          
                         k = 2   A,B,C random;         AxBxC          
                         k = 3   A fixed; B,C random;  AxBxC          
                         k = 4   A,B fixed; C random;  AxBxC          
                         k = 5   A,B fixed; C random;  AxB,BxC,C(A)   
                                                                      
-alevels a                     a = number of levels of factor A       
-blevels b                     b = number of levels of factor B       
-clevels c                     c = number of levels of factor C       
-dset 1 1 1 filename           data set for level 1 of factor A       
                                        and level 1 of factor B       
                                        and level 1 of factor C       
 . . .                           . . .                                
                                                                      
-dset i j k filename           data set for level i of factor A       
                                        and level j of factor B       
                                        and level k of factor C       
 . . .                           . . .                                
                                                                      
-dset a b c filename           data set for level a of factor A       
                                        and level b of factor B       
                                        and level c of factor C       
                                                                      
[-voxel num]                   screen output for voxel # num          
[-diskspace]                   print out disk space required for      
                                  program execution                   
                                                                      
                                                                      
The following commands generate individual AFNI 2 sub-brick datasets: 
  (In each case, output is written to the file with the specified     
   prefix file name.)                                                 
                                                                      
[-fa prefix]                F-statistic for factor A effect           
[-fb prefix]                F-statistic for factor B effect           
[-fc prefix]                F-statistic for factor C effect           
[-fab prefix]               F-statistic for A*B interaction           
[-fac prefix]               F-statistic for A*C interaction           
[-fbc prefix]               F-statistic for B*C interaction           
[-fabc prefix]              F-statistic for A*B*C interaction         
                                                                      
[-amean i prefix]           estimate of factor A level i mean         
[-bmean i prefix]           estimate of factor B level i mean         
[-cmean i prefix]           estimate of factor C level i mean         
[-xmean i j k prefix]       estimate mean of cell at factor A level i,
                               factor B level j, factor C level k     
                                                                      
[-adiff i j prefix]         difference between factor A levels i and j
[-bdiff i j prefix]         difference between factor B levels i and j
[-cdiff i j prefix]         difference between factor C levels i and j
[-xdiff i j k l m n prefix] difference between cell mean at A=i,B=j,  
                               C=k, and cell mean at A=l,B=m,C=n      
                                                                      
[-acontr c1...ca prefix]    contrast in factor A levels               
[-bcontr c1...cb prefix]    contrast in factor B levels               
[-ccontr c1...cc prefix]    contrast in factor C levels               
                                                                      
                                                                      
The following command generates one AFNI 'bucket' type dataset:       
                                                                      
[-bucket prefix]         create one AFNI 'bucket' dataset whose       
                           sub-bricks are obtained by concatenating   
                           the above output files; the output 'bucket'
                           is written to file with prefix file name   


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 4 5 'fred+orig[3]'                                      

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dAttribute
Usage: 3dAttribute [options] aname dset
Prints (to stdout) the value of the attribute 'aname' from
the header of dataset 'dset'.  If the attribute doesn't exist,
prints nothing and sets the exit status to 1.

Options:
  -name = Include attribute name in printout
  -all  = Print all attributes [don't put aname on command line]
          Also implies '-name'.  Attributes print in whatever order
          they are in the .HEAD file, one per line.  You may want
          to do '3dAttribute -all elvis+orig | sort' to get them
          in alphabetical order.
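Because a missing attribute prints nothing and sets the exit status to 1, scripts can branch on the exit code. A stubbed sketch (fake_attr and its attribute/value are invented; a real call would be '3dAttribute aname dset+orig'):

```shell
# fake_attr mimics 3dAttribute: print the value if the attribute exists,
# otherwise print nothing and return nonzero status.
fake_attr() { [ "$1" = "DATASET_NAME" ] && echo "elvis" || return 1; }

if val=$(fake_attr DATASET_NAME); then
  echo "found: $val"
else
  echo "attribute missing"
fi
```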
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dAutobox
Usage: 3dAutobox dataset
Computes the size of a box that fits around the volume.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dAutomask
Usage: 3dAutomask [options] dataset
Input dataset is EPI 3D+time.
Output dataset is a brain-only mask dataset.
Method:
 + Uses 3dClipLevel algorithm to find clipping level.
 + Keeps only the largest connected component of the
   supra-threshold voxels, after an erosion/dilation step.
 + Writes result as a 'fim' type of functional dataset.
Options:
  -prefix ppp = Write mask into dataset with prefix 'ppp'.
                 [default='automask']
  -q          = Don't write progress messages (i.e., be quiet).
  -eclip      = After creating the mask, remove exterior
                 voxels below the clip threshold.
  -dilate nd  = Dilate the mask outwards 'nd' times.
  -SI hh      = After creating the mask, find the most superior
                 voxel, then zero out everything more than 'hh'
                 millimeters inferior to that.  hh=130 seems to
                 be decent (for human brains).
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dAutoTcorrelate
Usage: 3dAutoTcorrelate [options] dset
Computes the correlation coefficient between each pair of
voxels in the input dataset, and stores the output into
a new anatomical bucket dataset.

Options:
  -pearson  = Correlation is the normal Pearson (product moment)
                correlation coefficient [default].
  -spearman = Correlation is the Spearman (rank) correlation
                coefficient.
  -quadrant = Correlation is the quadrant correlation coefficient.

  -polort m = Remove polynomial trend of order 'm', for m=-1..3.
                [default is m=1; removal is by least squares].
                Using m=-1 means no detrending; this is only useful
                for data/information that has been pre-processed.

  -autoclip = Clip off low-intensity regions in the dataset,
  -automask =  so that the correlation is only computed between
               high-intensity (presumably brain) voxels.  The
               intensity level is determined the same way that
               3dClipLevel works.

  -prefix p = Save output into dataset with prefix 'p'
               [default prefix is 'ATcorr'].

  -time     = Save output as a 3D+time dataset instead
               of an anat bucket.

Notes:
 * The output dataset is anatomical bucket type of shorts.
 * The output file might be gigantic and you might run out
    of memory running this program.  Use at your own risk!
 * The program prints out an estimate of its memory usage
    when it starts.  It also prints out a progress 'meter'
    of 1 dot per 10 output sub-bricks.
 * This is a quick hack for Peter Bandettini. Now pay up.

-- RWCox - Jan 31 2002
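To make the memory warning concrete, a back-of-envelope estimate (the voxel count is an assumed example, not from the help): with nvox voxels correlated, the output holds nvox sub-bricks of nvox shorts each, i.e. roughly 2*nvox^2 bytes.

```shell
# All-pairs correlation output stored as shorts: ~2 * nvox^2 bytes.
awk 'BEGIN {
  nvox  = 20000                  # assumed number of in-mask voxels
  bytes = 2.0 * nvox * nvox      # shorts are 2 bytes each
  printf "approx output: %.1f GB\n", bytes / 1e9
}'
```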
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3daxialize
Usage: 3daxialize [options] dataset
Purpose: Read in a dataset and write it out as a new dataset
         with the data brick oriented as axial slices.
         The input dataset must have a .BRIK file.
         One application is to create a dataset that can
         be used with the AFNI volume rendering plugin.

Options:
 -prefix ppp  = Use 'ppp' as the prefix for the new dataset.
               [default = 'axialize']
 -verb        = Print out a progress report.

The following options determine the order/orientation
in which the slices will be written to the dataset:
 -sagittal    = Do sagittal slice order [-orient ASL]
 -coronal     = Do coronal slice order  [-orient RSA]
 -axial       = Do axial slice order    [-orient RAI]
                 This is the default AFNI axial order, and
                 is the one currently required by the
                 volume rendering plugin; this is also
                 the default orientation output by this
                 program (hence the program's name).

 -orient code = Orientation code for output.
                The code must be 3 letters, one each from the
                pairs {R,L} {A,P} {I,S}.  The first letter gives
                the orientation of the x-axis, the second the
                orientation of the y-axis, the third the z-axis:
                 R = Right-to-left         L = Left-to-right
                 A = Anterior-to-posterior P = Posterior-to-anterior
                 I = Inferior-to-superior  S = Superior-to-inferior
                If you give an illegal code (e.g., 'LPR'), then
                the program will print a message and stop.
          N.B.: 'Neurological order' is -orient LPI

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dBRAIN_VOYAGERtoAFNI
Usage: 3dBRAIN_VOYAGERtoAFNI -input BV_VOLUME.vmr
 Converts a BrainVoyager vmr dataset to AFNI's BRIK format
 The conversion is based on information from BrainVoyager's
 website: www.brainvoyager.com. Sample data and information
 provided by Adam Greenberg and Nikolaus Kriegeskorte.

  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dbuc2fim

Program: 3dbuc2fim 
Author:  B. D. Ward 
Initial Release:  18 March 1998 
Latest Revision:  15 August 2001 

This program converts bucket sub-bricks to a fim (fico, fitt, fift, ...)
type dataset.

Usage:                                                              

3dbuc2fim  -prefix pname  d1+orig[index]                              
     This produces a fim dataset.                                   

 -or-                                                               

3dbuc2fim  -prefix pname  d1+orig[index1]  d2+orig[index2]            
     This produces a fico (fitt, fift, ...) dataset,                  
     depending on the statistic type of the 2nd sub-brick,
     with   d1+orig[index1] -> intensity sub-brick of pname           
            d2+orig[index2] -> threshold sub-brick of pname         

 -or-                                                               

3dbuc2fim  -prefix pname  d1+orig[index1,index2]                      
     This produces a fico (fitt, fift, ...) dataset,                  
     depending on the statistic type of the 2nd subbrick,             
     with   d1+orig[index1] -> intensity sub-brick of pname           
            d1+orig[index2] -> threshold sub-brick of pname         

where the options are:
     -prefix pname = Use 'pname' for the output dataset prefix name.
 OR  -output pname     [default='buc2fim']

     -session dir  = Use 'dir' for the output dataset session directory.
                       [default='./'=current working directory]
     -verb         = Print out some verbose output as the program
                       proceeds 

Command line arguments after the above are taken as input datasets.  
A dataset is specified using one of these forms:
   'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
Sub-brick indexes start at 0. 

N.B.: The sub-bricks are output in the order specified, which may
 not be the order in the original datasets.  For example, using
           fred+orig[5,3]
 will cause the sub-brick #5 in fred+orig to be output as the intensity
 sub-brick, and sub-brick #3 to be output as the threshold sub-brick 
 in the new dataset.

N.B.: The '$', '(', ')', '[', and ']' characters are special to
 the shell, so you will have to escape them.  This is most easily
 done by putting the entire dataset plus selection list inside
 single quotes, as in 'fred+orig[5,9]'.

This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dbucket
Concatenate sub-bricks from input datasets into one big
'bucket' dataset.
Usage: 3dbucket options
where the options are:
     -prefix pname = Use 'pname' for the output dataset prefix name.
 OR  -output pname     [default='buck']

     -session dir  = Use 'dir' for the output dataset session directory.
                       [default='./'=current working directory]
     -glueto fname = Append bricks to the end of the 'fname' dataset.
                       This command is an alternative to the -prefix 
                       and -session commands.                        
     -dry          = Execute a 'dry run'; that is, only print out
                       what would be done.  This is useful when
                       combining sub-bricks from multiple inputs.
     -verb         = Print out some verbose output as the program
                       proceeds (-dry implies -verb).
     -fbuc         = Create a functional bucket.
     -abuc         = Create an anatomical bucket.  If neither of
                       these options is given, the output type is
                       determined from the first input type.

Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
   'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
You can also add a sub-brick selection list after the end of the
dataset name.  This allows only a subset of the sub-bricks to be
included into the output (by default, all of the input dataset
is copied into the output).  A sub-brick selection list looks like
one of the following forms:
  fred+orig[5]                     ==> use only sub-brick #5
  fred+orig[5,9,17]                ==> use #5, #9, and #17
  fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
  fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  fred+orig[0..$(3)]

N.B.: The sub-bricks are output in the order specified, which may
 not be the order in the original datasets.  For example, using
  fred+orig[0..$(2),1..$(2)]
 will cause the sub-bricks in fred+orig to be output into the
 new dataset in an interleaved fashion.  Using
  fred+orig[$..0]
 will reverse the order of the sub-bricks in the output.
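The selector grammar above (single indexes, '..' or '-' ranges, '(s)' strides, '$' for the last sub-brick, and comma-separated concatenation) can be sketched in a few lines of Python. This is a hypothetical illustration, not AFNI code; the function name parse_selector is made up:

```python
# Sketch of AFNI-style sub-brick selector expansion (hypothetical helper,
# not part of AFNI).  '$' means the last sub-brick; 'a..b(s)' and 'a-b(s)'
# are strided ranges; comma-separated pieces are concatenated in order,
# so '[$..0]' reverses and '[0..$(2),1..$(2)]' interleaves.

def parse_selector(selector, nbricks):
    """Expand e.g. '[0..$(2),1..$(2)]' for a dataset with nbricks sub-bricks."""
    out = []
    for piece in selector.strip("[]").split(","):
        step = 1
        if "(" in piece:                      # strided form a..b(s)
            piece, s = piece[:-1].split("(")
            step = int(s)
        if ".." in piece:
            a, b = piece.split("..")
        elif "-" in piece and not piece.startswith("-"):
            a, b = piece.split("-")
        else:
            a = b = piece
        a = nbricks - 1 if a == "$" else int(a)
        b = nbricks - 1 if b == "$" else int(b)
        direction = 1 if b >= a else -1       # descending ranges reverse order
        out.extend(range(a, b + direction, direction * step))
    return out
```

For example, `parse_selector("[0..$(3)]", 10)` yields `[0, 3, 6, 9]`, matching the every-third-sub-brick example above.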

N.B.: Bucket datasets have multiple sub-bricks, but do NOT have
 a time dimension.  You can input sub-bricks from a 3D+time dataset
 into a bucket dataset.  You can use the '3dinfo' program to see
 how many sub-bricks a 3D+time or a bucket dataset contains.

N.B.: The '$', '(', ')', '[', and ']' characters are special to
 the shell, so you will have to escape them.  This is most easily
 done by putting the entire dataset plus selection list inside
 single quotes, as in 'fred+orig[5..7,9]'.

N.B.: In non-bucket functional datasets (like the 'fico' datasets
 output by FIM, or the 'fitt' datasets output by 3dttest), sub-brick
 [0] is the 'intensity' and sub-brick [1] is the statistical parameter
 used as a threshold.  Thus, to create a bucket dataset using the
 intensity from dataset A and the threshold from dataset B, and
 calling the output dataset C, you would type
    3dbucket -prefix C -fbuc 'A+orig[0]' -fbuc 'B+orig[1]'

WARNING: using this program, it is possible to create a dataset that
         has different basic datum types for different sub-bricks
         (e.g., shorts for brick 0, floats for brick 1).
         Do NOT do this!  Very few AFNI programs will work correctly
         with such datasets!
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dcalc
Program: 3dcalc                                                         
Author:  RW Cox et al                                                   
                                                                        
3dcalc - AFNI's calculator program                                      
                                                                        
     This program does voxel-by-voxel arithmetic on 3D datasets         
     (each output voxel is computed independently; no inter-voxel       
     computation).                                                      
                                                                        
     The program assumes that the voxel-by-voxel computations are being 
     performed on datasets that occupy the same space and have the same 
     orientations.                                                      
                                                                        
------------------------------------------------------------------------
Usage:                                                                  
-----                                                                   
       3dcalc -a dsetA [-b dsetB...] \                                 
              -expr EXPRESSION       \                                 
              [options]                                                 
                                                                        
Examples:                                                               
--------                                                                
1. Average datasets together, on a voxel-by-voxel basis:                
                                                                        
     3dcalc -a fred+tlrc -b ethel+tlrc -c lucy+tlrc \                  
            -expr '(a+b+c)/3' -prefix subjects_mean                     
                                                                        
2. Perform arithmetic calculations between the sub-bricks of a single   
   dataset by noting the sub-brick number on the command line:          
                                                                        
     3dcalc -a 'func+orig[2]' -b 'func+orig[4]' -expr 'sqrt(a*b)'       
                                                                        
3. Create a simple mask that consists only of values in sub-brick #0    
   that are greater than 3.14159:                                       
                                                                        
     3dcalc -a 'func+orig[0]' -expr 'ispositive(a-3.14159)' \          
            -prefix mask                                                
                                                                        
4. Normalize subjects' time series datasets to percent change values in 
   preparation for group analysis:                                      
                                                                        
   Voxel-by-voxel, the example below divides each intensity value in    
   the time series (epi_run1+orig) by the voxel's mean value (mean+orig)
   to get a percent change value. The 'ispositive' term zeroes out      
   voxels whose mean value is less than 167 (these are most likely      
   background/noncortical voxels), labeling them as zero in the output  
   dataset 'percent_chng+orig'.                                         
                                                                        
     3dcalc -a epi_run1+orig -b mean+orig     \                        
            -expr '100 * a/b * ispositive(b-167)' -prefix percent_chng  
                                                                        
5. Create a compound mask from a statistical dataset, where 3 stimuli   
   show activation.                                                     
      NOTE: 'step' and 'ispositive' are identical expressions that can  
            be used interchangeably:                                    
                                                                        
     3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \ 
            -expr 'step(a-4.2)*step(b-2.9)*step(c-3.1)'              \ 
            -prefix compound_mask                                       
                                                                        
6. Same as example #5, but this time create a mask of 8 different values
   showing all combinations of activations (i.e., not only where        
   everything is active, but also each stimulus individually, and all   
   combinations).  The output mask dataset labels voxel values as such: 
        0 = none active    1 = A only active    2 = B only active       
        3 = A and B only   4 = C only active    5 = A and C only        
        6 = B and C only   7 = all A, B, and C active                   
                                                                        
     3dcalc -a 'func+orig[12]' -b 'func+orig[15]' -c 'func+orig[18]' \ 
            -expr 'step(a-4.2)+2*step(b-2.9)+4*step(c-3.1)'          \ 
            -prefix mask_8                                              
                                                                        
7. Create a region-of-interest mask comprised of a 3-dimensional sphere.
   Values within the ROI sphere will be labeled as '1' while values     
   outside the mask will be labeled as '0'. Statistical analyses can    
   then be done on the voxels within the ROI sphere.                    
                                                                        
   The example below puts a solid ball (sphere) of radius 3=sqrt(9)     
   about the point with coordinates (x,y,z)=(20,30,70):                 
                                                                        
     3dcalc -a anat+tlrc                                              \
             -expr 'step(9-(x-20)*(x-20)-(y-30)*(y-30)-(z-70)*(z-70))' \
            -prefix ball                                                
                                                                        
 8. Some datasets are 'short' (16 bit) integers with a scalar attached, 
    which allow them to be smaller than float datasets and to contain   
    fractional values.                                                  
                                                                        
    Dataset 'a' is always used as a template for the output dataset. For
    the examples below, assume that datasets d1+orig and d2+orig consist
    of small integers.                                                  
                                                                        
    a) When dividing 'a' by 'b', the result should be scaled, so that a 
       value of 2.4 is not truncated to '2'. To avoid this truncation,  
       force scaling with the -fscale option:                           
                                                                        
          3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot -fscale 
                                                                        
    b) If it is preferable that the result is of type 'float', then set 
       the output data type (datum) to float:                           
                                                                        
          3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot \      
                 -datum float                                           
                                                                        
     c) Perhaps an integral division is desired, so that 9/4=2, not 2.25.
       Force the results not to be scaled (opposite of example 8b) using
       the -nscale option:                                              
                                                                        
          3dcalc -a d1+orig -b d2+orig -expr 'a/b' -prefix quot -nscale 
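Example 7's ball expression can be checked numerically. Below is a plain-Python sketch (not AFNI itself; the function names are made up for illustration) of what step() computes at each voxel: 1 where the argument is positive, 0 elsewhere, so the mask is 1 strictly inside the radius-3 sphere about (20,30,70):

```python
# Per-voxel evaluation behind example 7 (illustrative sketch, not AFNI).
# AFNI's step(x) is 1 for x > 0 and 0 otherwise, so the expression
# step(9 - (x-20)^2 - (y-30)^2 - (z-70)^2) marks the interior of a
# radius-3 ball centered at (20, 30, 70).

def step(v):
    return 1 if v > 0 else 0

def ball_mask(x, y, z, cx=20.0, cy=30.0, cz=70.0, rsq=9.0):
    return step(rsq - (x - cx)**2 - (y - cy)**2 - (z - cz)**2)
```

For instance, `ball_mask(20, 30, 70)` is 1 (center) and `ball_mask(20, 30, 74)` is 0 (4 mm outside the radius).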
                                                                        
------------------------------------------------------------------------
                                                                        
ARGUMENTS for 3dcalc (must be included on command line):                
--------------------                                                    
                                                                        
 -a dname    = Read dataset 'dname' and call the voxel values 'a' in the
               expression (-expr) that is input below. Up to 24 dnames  
               (-a, -b, -c, ... -z) can be included in a single 3dcalc  
               calculation/expression.                                  
               ** If some letter name is used in the expression, but    
                  not present in one of the dataset options here, then  
                  that variable is set to 0.                            
               ** If the letter is followed by a number, then that      
                  number is used to select the sub-brick of the dataset 
                  which will be used in the calculations.               
                     E.g., '-b3 dname' specifies that the variable 'b'  
                     refers to sub-brick '3' of that dataset            
                     (indexes in AFNI start at 0).                      
                                                                        
 -expr       = Apply the expression - within quotes - to the input      
               datasets (dnames), one voxel at time, to produce the     
               output dataset.                                          
------------------------------------------------------------------------
 OPTIONS for 3dcalc:                                                    
 -------                                                                
                                                                        
  -verbose   = Makes the program print out various information as it    
               progresses.                                              
                                                                        
  -datum type= Coerce the output data to be stored as the given type,   
               which may be byte, short, or float.                      
               [default = datum of first input dataset]                 
                                                                        
  -fscale    = Force scaling of the output to the maximum integer       
               range. This only has effect if the output datum is byte  
               or short (either forced or defaulted). This option is    
               often necessary to eliminate unpleasant truncation       
               artifacts.                                               
                 [The default is to scale only if the computed values   
                  seem to need it -- are all <= 1.0 or there is at      
                  least one value beyond the integer upper limit.]      
                                                                        
                ** In earlier versions of 3dcalc, scaling (if used) was 
                   applied to all sub-bricks equally -- a common scale  
                   factor was used.  This would cause trouble if the    
                   values in different sub-bricks were in vastly        
                   different scales. In this version, each sub-brick    
                   gets its own scale factor. To override this behavior,
                   use the '-gscale' option.                            
                                                                        
  -gscale    = Same as '-fscale', but also forces each output sub-brick 
               to get the same scaling factor.  This may be desirable   
               for 3D+time datasets, for example.                       
                                                                        
  -nscale    = Don't do any scaling on output to byte or short datasets.
               This may be especially useful when operating on mask     
               datasets whose output values are only 0's and 1's.       
               ** Another way to achieve the effect of '-b3' is described
                  below in the dataset 'INPUT' specification section.   
                                                                        
  -prefix pname = Use 'pname' for the output dataset prefix name.       
                  [default='calc']                                      
                                                                        
  -session dir  = Use 'dir' for the output dataset session directory.   
                  [default='./'=current working directory]              
                                                                        
  -dt tstep     = Use 'tstep' as the TR for manufactured 3D+time datasets.
                                                                        
  -TR tstep     = Same as '-dt'.  If neither option is given, the TR    
                  defaults to 1 second.                                 
                                                                        
  -taxis N      = If only 3D datasets are input (no 3D+time or .1D files),
    *OR*          then normally only a 3D dataset is calculated.  With  
  -taxis N:tstep: this option, you can force the creation of a time axis
                  of length 'N', optionally using time step 'tstep'.  In
                  such a case, you will probably want to use the pre-   
                  defined time variables 't' and/or 'k' in your         
                  expression, or each resulting sub-brick will be       
                  identical. For example:                               
                  '-taxis 121:0.1' will produce 121 points in time,     
                  spaced with TR 0.1.                                   
                                                                        
            N.B.: You can also specify the TR using the -dt option.     
            N.B.: You can specify 1D input datasets using the           
                  '1D:n@val,n@val' notation to get a similar effect.    
                  For example:                                          
                     -dt 0.1 -w '1D:121@0'                              
                  will have pretty much the same effect as              
                     -taxis 121:0.1
            N.B.: For both '-dt' and '-taxis', the 'tstep' value is in 
                  seconds.  You can suffix it with 'ms' to specify that
                  the value is in milliseconds instead; e.g., '-dt 2000ms'.
                                                                        
  -rgbfac A B C = For RGB input datasets, the 3 channels (r,g,b) are    
                  collapsed to one for the purposes of 3dcalc, using the
                  formula value = A*r + B*g + C*b                       
                                                                        
                  The default values are A=0.299 B=0.587 C=0.114, which 
                  gives the grayscale intensity.  To pick out the Green 
                  channel only, use '-rgbfac 0 1 0', for example.  Note 
                  that each channel in an RGB dataset is a byte in the  
                  range 0..255.  Thus, '-rgbfac 0.001173 0.002302 0.000447'
                  will compute the intensity rescaled to the range 0..1.0
                  (i.e., 0.001173=0.299/255, etc.)                      
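The -rgbfac collapse is a simple weighted sum; the sketch below (plain Python, not AFNI; the function name is hypothetical) illustrates the default grayscale weights and the single-channel case:

```python
# Sketch of the -rgbfac collapse: an RGB voxel (each channel a byte in
# 0..255) is reduced to one value via value = A*r + B*g + C*b.
# The defaults A=0.299, B=0.587, C=0.114 give grayscale intensity.

def collapse_rgb(r, g, b, A=0.299, B=0.587, C=0.114):
    return A * r + B * g + C * b
```

With '-rgbfac 0 1 0', only the green channel survives: `collapse_rgb(0, 200, 0, 0, 1, 0)` gives 200.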
                                                                        
------------------------------------------------------------------------
DATASET TYPES:                                                          
-------------                                                           
                                                                        
 The most common AFNI dataset types are 'byte', 'short', and 'float'.   
                                                                        
 A byte value is an 8-bit unsigned integer (0..255), a short value is a 
 16-bit signed integer (-32768..32767), and a float value is a 32-bit   
 real number.  A byte value has almost 3 decimals of accuracy, a short  
 has almost 5, and a float has approximately 7 (from a 23+1 bit         
 mantissa).                                                             
                                                                        
 Datasets can also have a scalar attached to each sub-brick. The main   
 use of this is allowing a short type dataset to take on non-integral   
 values, while being half the size of a float dataset.                  
                                                                        
 As an example, consider a short dataset with a scalar of 0.001. This   
 could represent values between -32.768 and +32.767, at a resolution of 
 0.001.  One could represent the difference between 4.916 and 4.917, for
 instance, but not 4.9165. Each number has 15 bits of accuracy, plus a  
 sign bit, which gives 4-5 decimal places of accuracy. If this is not   
 enough, then it makes sense to use the larger type, float.             
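The short-with-scalar storage described above can be sketched as integer quantization. This is an illustrative model only (the encode/decode names and overflow handling are made up, not AFNI's implementation), assuming a scalar of 0.001:

```python
# Sketch of short-with-scale-factor storage: a 16-bit integer in
# -32768..32767 times a per-brick scalar.  With scalar 0.001 the
# representable range is about -32.768..+32.767 in steps of 0.001,
# so 4.916 and 4.917 are distinct but 4.9165 is not representable.

SCALE = 0.001

def encode(value):
    s = round(value / SCALE)          # nearest representable step
    if not -32768 <= s <= 32767:
        raise OverflowError("value outside short range at this scale")
    return s

def decode(stored):
    return stored * SCALE
```

Round-tripping 4.916 is exact to the step size, while 4.9165 falls between two representable steps and is rounded to one of them.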
                                                                        
------------------------------------------------------------------------
3D+TIME DATASETS:                                                       
----------------                                                        
                                                                        
 This version of 3dcalc can operate on 3D+time datasets.  Each input    
 dataset will be in one of these conditions:                            
                                                                        
    (A) Is a regular 3D (no time) dataset; or                           
    (B) Is a 3D+time dataset with a sub-brick index specified ('-b3'); or
    (C) Is a 3D+time dataset with no sub-brick index specified ('-b').  
                                                                        
 If there is at least one case (C) dataset, then the output dataset will
 also be 3D+time; otherwise it will be a 3D dataset with one sub-brick. 
 When producing a 3D+time dataset, datasets in case (A) or (B) will be  
 treated as if the particular brick being used has the same value at each
 point in time.                                                         
                                                                        
 Multi-brick 'bucket' datasets may also be used.  Note that if multi-brick
 (bucket or 3D+time) datasets are used, the lowest letter dataset will  
 serve as the template for the output; that is, '-b fred+tlrc' takes    
 precedence over '-c wilma+tlrc'.  (The program 3drefit can be used to  
 alter the .HEAD parameters of the output dataset, if desired.)         
                                                                        
------------------------------------------------------------------------
INPUT DATASET NAMES
-------------------
 An input dataset is specified using one of these forms:
    'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
 You can also add a sub-brick selection list after the end of the
 dataset name.  This allows only a subset of the sub-bricks to be
 read in (by default, all of a dataset's sub-bricks are input).
 A sub-brick selection list looks like one of the following forms:
   fred+orig[5]                     ==> use only sub-brick #5
   fred+orig[5,9,17]                ==> use #5, #9, and #17
   fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
   fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
 Sub-brick indexes start at 0.  You can use the character '$'
 to indicate the last sub-brick in a dataset; for example, you
 can select every third sub-brick by using the selection list
   fred+orig[0..$(3)]

 N.B.: The sub-bricks are read in the order specified, which may
 not be the order in the original dataset.  For example, using
   fred+orig[0..$(2),1..$(2)]
 will cause the sub-bricks in fred+orig to be input into memory
 in an interleaved fashion.  Using
   fred+orig[$..0]
 will reverse the order of the sub-bricks.

 N.B.: You may also use the syntax <A..B> after the name of an input 
 dataset to restrict the range of values read in to the numerical
 values in A..B, inclusive.  For example,
    fred+orig[5..7]<100..200>
 creates a 3 sub-brick dataset in which values less than 100 or
 greater than 200 in the original are set to zero.
 If you use the <> sub-range selection without the [] sub-brick
 selection, it is the same as if you had put [0..$] in front of
 the sub-range selection.
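The effect of the <A..B> sub-range selector on a brick's values can be sketched as a simple clamp-to-zero (plain Python, not AFNI; the function name is made up):

```python
# Sketch of the <A..B> sub-range selector: values outside A..B
# (inclusive) are replaced with zero as the brick is read in.

def apply_subrange(values, lo, hi):
    return [v if lo <= v <= hi else 0 for v in values]
```

So for <100..200>, a row of values 50, 100, 150, 200, 250 becomes 0, 100, 150, 200, 0.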

 N.B.: Datasets using sub-brick/sub-range selectors are treated as:
  - 3D+time if the dataset is 3D+time and more than 1 brick is chosen
  - otherwise, as bucket datasets (-abuc or -fbuc)
    (in particular, fico, fitt, etc datasets are converted to fbuc!)

 N.B.: The characters '$ ( ) [ ] < >'  are special to the shell,
 so you will have to escape them.  This is most easily done by
 putting the entire dataset plus selection list inside forward
 single quotes, as in 'fred+orig[5..7,9]', or double quotes "x".
                                                                        
** WARNING: you cannot combine sub-brick selection of the form          
               -b3 bambam+orig       (the old method)                   
            with sub-brick selection of the form                        
               -b  'bambam+orig[3]'  (the new method)                   
            If you try, the Doom of Mandos will fall upon you!          
                                                                        
------------------------------------------------------------------------
1D TIME SERIES:                                                         
--------------                                                          
                                                                        
 You can also input a '*.1D' time series file in place of a dataset.    
 In this case, the value at each spatial voxel at time index n will be  
 the same, and will be the n-th value from the time series file.        
 At least one true dataset must be input.  If all the input datasets    
 are 3D (single sub-brick) or are single sub-bricks from multi-brick    
 datasets, then the output will be a 'manufactured' 3D+time dataset.    
                                                                        
 For example, suppose that 'a3D+orig' is a 3D dataset:                  
                                                                        
   3dcalc -a a3D+orig -b b.1D -expr "a*b"                             
                                                                        
 The output dataset will be 3D+time with the value at (x,y,z,t) being    
 computed by a3D(x,y,z)*b(t).  The TR for this dataset will be set      
 to 'tstep' seconds -- this could be altered later with program 3drefit.
 Another method to set up the correct timing would be to input an       
 unused 3D+time dataset -- 3dcalc will then copy that dataset's time    
 information, but simply do not use that dataset's letter in -expr.     
                                                                        
 If the *.1D file has multiple columns, only the first column read will 
 be used in this program.  You can select a column to be the first by   
 using a sub-vector selection of the form 'b.1D[3]', which will         
 choose the 4th column (since counting starts at 0).                    
                                                                        
 '{...}' row selectors can also be used - see the output of '1dcat -help'
 for more details on these.  Note that if multiple timeseries or 3D+time
 or 3D bucket datasets are input, they must all have the same number of 
 points along the 'time' dimension.                                     
                                                                        
------------------------------------------------------------------------
'1D:' INPUT:                                                            
-----------                                                             
                                                                        
 You can input a 1D time series 'dataset' directly on the command line, 
 without an external file.  The 'filename' for such input takes the     
 general format                                                         
                                                                        
   '1D:n_1@val_1,n_2@val_2,n_3@val_3,...'                               
                                                                        
 where each 'n_i' is an integer and each 'val_i' is a float.  For       
 example                                                                
                                                                        
    -a '1D:5@0,10@1,5@0,10@1,5@0'                                       
                                                                        
 specifies that variable 'a' be assigned a 1D time series of 35 values, 
 alternating in blocks between value 0 and value 1.                     
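The 'n@val' notation expands mechanically: each piece contributes n copies of val. A small Python sketch (hypothetical helper, not part of AFNI):

```python
# Sketch expanding the '1D:n@val,n@val,...' notation: each piece n@val
# contributes n copies of val, so '1D:5@0,10@1,5@0,10@1,5@0' is a
# 35-point series alternating in blocks between 0 and 1.

def expand_1d(spec):
    assert spec.startswith("1D:")
    series = []
    for piece in spec[3:].split(","):
        n, val = piece.split("@")
        series.extend([float(val)] * int(n))
    return series
```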
                                                                        
------------------------------------------------------------------------
'I:*.1D' and 'J:*.1D' and 'K:*.1D' INPUT:                               
----------------------------------------                                
                                                                        
 You can input a 1D time series 'dataset' to be defined as spatially    
 dependent instead of time dependent using a syntax like:               
                                                                        
   -c I:fred.1D                                                         
                                                                        
 This indicates that the n-th value from file fred.1D is to be associated
 with the spatial voxel index i=n (respectively j=n and k=n for 'J:' and 
 'K:' input dataset names).  This technique can be useful if you want to 
 scale each slice by a fixed constant; for example:                     
                                                                        
   -a dset+orig -b K:slicefactor.1D -expr 'a*b'                         
                                                                        
 In this example, the '-b' value only varies in the k-index spatial     
 direction.                                                             
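The per-slice scaling that 'K:' input makes possible can be sketched in plain Python (an illustrative model with a made-up 2x2x3 grid, not AFNI code):

```python
# Sketch of 'K:slicefactor.1D' input: the n-th value of the 1D file is
# attached to spatial index k = n, so in the expression 'a*b' every
# voxel in the k-th slice of dataset 'a' is multiplied by one constant.

def scale_slices(vol, slice_factors):
    """Return vol[i][j][k] * slice_factors[k] for every voxel."""
    return [[[v * slice_factors[k] for k, v in enumerate(row)]
             for row in plane]
            for plane in vol]
```

With factors [1, 10, 100], the row [1, 1, 1] becomes [1, 10, 100]: each k-slice gets its own multiplier while i and j are untouched.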
                                                                        
------------------------------------------------------------------------
COORDINATES and PREDEFINED VALUES:                                      
---------------------------------                                       
                                                                        
 If you don't use '-x', '-y', or '-z' for a dataset, then the voxel     
 spatial coordinates will be loaded into those variables.  For example, 
 the expression 'a*step(x*x+y*y+z*z-100)' will zero out all the voxels  
 inside a 10 mm radius of the origin x=y=z=0.                           
                                                                        
 Similarly, the '-t' value, if not otherwise used by a dataset or *.1D  
 input, will be loaded with the voxel time coordinate, as determined    
 from the header file created for the OUTPUT.  Please note that the units
 of this are variable; they might be in milliseconds, seconds, or Hertz.
 In addition, slices of the dataset might be offset in time from one    
 another, and this is allowed for in the computation of 't'.  Use program
 3dinfo to find out the structure of your datasets, if you are not sure.
 If no input datasets are 3D+time, then the effective value of TR is    
 tstep in the output dataset, with t=0 at the first sub-brick.          
                                                                        
 Similarly, the '-i', '-j', and '-k' values, if not otherwise used,     
 will be loaded with the voxel spatial index coordinates.  The '-l'     
 (letter 'ell') value will be loaded with the temporal index coordinate.
                                                                        
 Otherwise undefined letters will be set to zero.  In the future,       
 new default values for other letters may be added.                     
                                                                        
 NOTE WELL: By default, the coordinate order of (x,y,z) is the order in 
 *********  which the data array is stored on disk; this order is output
             by 3dinfo.  The options below can change this order:
                                                                        
 -dicom }= Sets the coordinates to appear in DICOM standard (RAI) order,
 -RAI   }= (the AFNI standard), so that -x=Right, -y=Anterior , -z=Inferior,
                                        +x=Left , +y=Posterior, +z=Superior.
                                                                        
 -SPM   }= Sets the coordinates to appear in SPM (LPI) order,           
 -LPI   }=                      so that -x=Left , -y=Posterior, -z=Inferior,
                                        +x=Right, +y=Anterior , +z=Superior.
                                                                        
------------------------------------------------------------------------
DIFFERENTIAL SUBSCRIPTS [22 Nov 1999]:                                  
-----------------------                                                 
                                                                        
 Normal calculations with 3dcalc are strictly on a per-voxel basis:
 there is no 'cross-talk' between spatial or temporal locations.
 The differential subscript feature allows you to specify variables
 that refer to different locations, relative to the base voxel.
 For example,
   -a fred+orig -b 'a[1,0,0,0]' -c 'a[0,-1,0,0]' -d 'a[0,0,2,0]'
 means: symbol 'a' refers to a voxel in dataset fred+orig,
        symbol 'b' refers to the following voxel in the x-direction,
        symbol 'c' refers to the previous voxel in the y-direction,
        symbol 'd' refers to the 2nd following voxel in the z-direction.

 To use this feature, you must define the base dataset (e.g., 'a')
 first.  Then the differentially subscripted symbols are defined
 using the base dataset symbol followed by 4 integer subscripts,
 which are the shifts in the x-, y-, z-, and t- (or sub-brick index)
 directions. For example,

   -a fred+orig -b 'a[0,0,0,1]' -c 'a[0,0,0,-1]' -expr 'median(a,b,c)'

 will produce a temporal median smoothing of a 3D+time dataset (this
 can be done more efficiently with program 3dTsmooth).

 Note that the physical directions of the x-, y-, and z-axes depend
 on how the dataset was acquired or constructed.  See the output of
 program 3dinfo to determine what direction corresponds to what axis.

 For convenience, the following abbreviations may be used in place of
 some common subscript combinations:

   [1,0,0,0] == +i    [-1, 0, 0, 0] == -i
   [0,1,0,0] == +j    [ 0,-1, 0, 0] == -j
   [0,0,1,0] == +k    [ 0, 0,-1, 0] == -k
   [0,0,0,1] == +l    [ 0, 0, 0,-1] == -l

 The median smoothing example can thus be abbreviated as

   -a fred+orig -b a+l -c a-l -expr 'median(a,b,c)'

 When a shift calls for a voxel that is outside of the dataset range,
 one of three things can happen:

   STOP => shifting stops at the edge of the dataset
   WRAP => shifting wraps back to the opposite edge of the dataset
   ZERO => the voxel value is returned as zero

 Which one applies depends on the setting of the shifting mode at the
 time the symbol using differential subscripting is defined.  The mode
 is set by one of the switches '-dsSTOP', '-dsWRAP', or '-dsZERO'.  The
 default mode is STOP.  Suppose that a dataset has range 0..99 in the
 x-direction.  Then when voxel 101 is called for, the value returned is

   STOP => value from voxel 99 [didn't shift past edge of dataset]
   WRAP => value from voxel 1  [wrapped back through opposite edge]
   ZERO => the number 0.0 

 You can set the shifting mode more than once - the most recent setting
 on the command line applies when a differential subscript symbol is
 encountered.
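
 The three modes amount to simple index arithmetic.  The shift_index
 helper below is a hypothetical stand-in for what 3dcalc resolves
 internally at each voxel, using the 0..99 example above:

```shell
# Toy model of the STOP/WRAP/ZERO shifting modes for an index range
# 0..n-1 (illustrative only; 3dcalc does this per voxel internally).
shift_index() {  # usage: shift_index MODE INDEX N
  awk -v mode="$1" -v i="$2" -v n="$3" 'BEGIN {
    if (i >= 0 && i < n)     print i
    else if (mode == "STOP") print ((i < 0) ? 0 : n - 1)
    else if (mode == "WRAP") print (((i % n) + n) % n)
    else                     print "ZERO"   # value is read as 0.0
  }'
}

# Requesting voxel 101 from a 0..99 range, as in the text:
shift_index STOP 101 100   # -> 99
shift_index WRAP 101 100   # -> 1
shift_index ZERO 101 100   # -> ZERO
```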

------------------------------------------------------------------------
PROBLEMS:
-------- 

 * Complex-valued datasets cannot be processed.
 * This program is not very efficient (but is faster than it once was).
 * Differential subscripts slow the program down even more.

------------------------------------------------------------------------
EXPRESSIONS:
----------- 

 Arithmetic expressions are allowed, using + - * / ** and parentheses.
 As noted above, datasets are referred to by single letter variable names.
 At this time, C relational, boolean, and conditional expressions are
 NOT implemented.  Built-in functions include:

   sin  , cos  , tan  , asin  , acos  , atan  , atan2,                  
   sinh , cosh , tanh , asinh , acosh , atanh , exp  ,                  
   log  , log10, abs  , int   , sqrt  , max   , min  ,                  
   J0   , J1   , Y0   , Y1    , erf   , erfc  , qginv, qg ,             
   rect , step , astep, bool  , and   , or    , mofn ,                  
   sind , cosd , tand , median, lmode , hmode , mad  ,                  
   gran , uran , iran , eran  , lran  , orstat,                         
   mean , stdev, sem  , Pleg

 where:
 * qg(x)    = reversed cdf of a standard normal distribution
 * qginv(x) = inverse function to qg
 * min, max, atan2 each take 2 arguments ONLY
 * J0, J1, Y0, Y1 are Bessel functions (see Watson)
 * Pleg(m,x) is the m'th Legendre polynomial evaluated at x
 * erf, erfc are the error and complementary error functions
 * sind, cosd, tand take arguments in degrees (vs. radians)
 * median(a,b,c,...) computes the median of its arguments
 * mad(a,b,c,...) computes the MAD of its arguments
 * mean(a,b,c,...) computes the mean of its arguments
 * stdev(a,b,c,...) computes the standard deviation of its arguments
 * sem(a,b,c,...) computes the standard error of the mean of its arguments,
                  where sem(n arguments) = stdev(same)/sqrt(n)
 * orstat(n,a,b,c,...) computes the n-th order statistic of
    {a,b,c,...} - that is, the n-th value in size, starting
    at the bottom (e.g., orstat(1,a,b,c) is the minimum)
 * lmode(a,b,c,...) and hmode(a,b,c,...) compute the mode
    of their arguments - lmode breaks ties by choosing the
    smallest value with the maximal count, hmode breaks ties by
    choosing the largest value with the maximal count
    [median,lmode,hmode take a variable number of arguments]
 * gran(m,s) returns a Gaussian deviate with mean=m, stdev=s
 * uran(r)   returns a uniform deviate in the range [0,r]
 * iran(t)   returns a random integer in the range [0..t]
 * eran(s)   returns an exponentially distributed deviate
 * lran(t)   returns a logistically distributed deviate

 You may use the symbol 'PI' to refer to the constant of that name.
 This is the only two-letter symbol defined; all input files are
 referred to by 1 letter symbols.  The case of the expression is
 ignored (in fact, it is converted to uppercase as the first step
 in the parsing algorithm).

 The following functions are designed to help implement logical
 functions, such as masking of 3D volumes against some criterion:
       step(x)    = {1 if x>0        , 0 if x<=0},
       astep(x,y) = {1 if abs(x) > y , 0 otherwise} = step(abs(x)-y)
       rect(x)    = {1 if abs(x)<=0.5, 0 if abs(x)>0.5},
       bool(x)    = {1 if x != 0.0   , 0 if x == 0.0},
    notzero(x)    = bool(x),
     iszero(x)    = 1-bool(x) = { 0 if x != 0.0, 1 if x == 0.0 },
     equals(x,y)  = 1-bool(x-y) = { 1 if x == y , 0 if x != y },
   ispositive(x)  = { 1 if x > 0; 0 if x <= 0 },
   isnegative(x)  = { 1 if x < 0; 0 if x >= 0 },
   and(a,b,...,c) = {1 if all arguments are nonzero, 0 if any are zero}
    or(a,b,...,c) = {1 if any arguments are nonzero, 0 if all are zero}
  mofn(m,a,...,c) = {1 if at least 'm' arguments are nonzero, 0 otherwise}
  argmax(a,b,...) = index of largest argument; = 0 if all args are 0
  argnum(a,b,...) = number of nonzero arguments

  [These last 5 functions take a variable number of arguments.]
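
 To make the first few definitions concrete, here is a small awk
 re-implementation of step, astep, and bool.  These functions are
 evaluated inside 3dcalc's expression parser; the shell functions
 below are illustrative stand-ins only:

```shell
# Shell/awk stand-ins for three of 3dcalc's logical functions.
step()  { awk -v x="$1" 'BEGIN { print ((x > 0) ? 1 : 0) }'; }
astep() { awk -v x="$1" -v y="$2" \
            'BEGIN { a = (x < 0) ? -x : x; print ((a > y) ? 1 : 0) }'; }
bool()  { awk -v x="$1" 'BEGIN { print ((x != 0) ? 1 : 0) }'; }

step 3       # -> 1
step -0.5    # -> 0
astep -4 2   # -> 1   (since |-4| > 2)
bool 0       # -> 0
```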

 The following 27 new [Mar 1999] functions are used for statistical
 conversions, as in the program 'cdf':
   fico_t2p(t,a,b,c), fico_p2t(p,a,b,c), fico_t2z(t,a,b,c),
   fitt_t2p(t,a)    , fitt_p2t(p,a)    , fitt_t2z(t,a)    ,
   fift_t2p(t,a,b)  , fift_p2t(p,a,b)  , fift_t2z(t,a,b)  ,
   fizt_t2p(t)      , fizt_p2t(p)      , fizt_t2z(t)      ,
   fict_t2p(t,a)    , fict_p2t(p,a)    , fict_t2z(t,a)    ,
   fibt_t2p(t,a,b)  , fibt_p2t(p,a,b)  , fibt_t2z(t,a,b)  ,
   fibn_t2p(t,a,b)  , fibn_p2t(p,a,b)  , fibn_t2z(t,a,b)  ,
   figt_t2p(t,a,b)  , figt_p2t(p,a,b)  , figt_t2z(t,a,b)  ,
   fipt_t2p(t,a)    , fipt_p2t(p,a)    , fipt_t2z(t,a)    .

 See the output of 'cdf -help' for documentation on the meanings of
 and arguments to these functions.  (After using one of these, you
 may wish to use program '3drefit' to modify the dataset statistical
 auxiliary parameters.)
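
 For example, converting a t-statistic sub-brick to z-scores might
 look like the sketch below.  The dataset name, sub-brick index, and
 the 20 degrees of freedom are hypothetical, and the call is guarded
 so it is a no-op on systems without AFNI installed:

```shell
# Sketch: t (20 d.o.f.) -> z conversion on a hypothetical t-stat
# sub-brick, via the fitt_t2z() expression function.
dof=20
if command -v 3dcalc >/dev/null 2>&1; then
  3dcalc -a 'func+orig[1]' -expr "fitt_t2z(a,$dof)" -prefix func_z
  # (3drefit could then update func_z's statistical auxiliary
  #  parameters, per the note above.)
fi
```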

 Computations are carried out in double precision before being
 truncated to the final output 'datum'.

 Note that the quotes around the expression are needed so the shell
 doesn't try to expand * characters, or interpret parentheses.

 (Try the 'ccalc' program to see how the expression evaluator works.
  The arithmetic parser and evaluator is written in Fortran-77 and
  is derived from a program written long ago by RW Cox to facilitate
  compiling on an array processor hooked up to a VAX.  It's a mess,
  but it works - somewhat slowly.)
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dClipLevel
Usage: 3dClipLevel [options] dataset
Estimates the value at which to clip the anatomical dataset so
that background regions are set to zero.
Method:
  Find the median of all positive values >= clip value.
  Set the clip value to 0.50 of this median.
  Repeat until the clip value doesn't change.
Options:
  -mfrac ff = Use the number ff instead of 0.50 in the algorithm.
  -verb     = The clip value is always printed to stdout.  If
                this option is used to select verbose output,
                progress reports are printed to stderr as well.

N.B.: This program only works with byte- and short-valued
        datasets, and prints a warning message if any input
        voxels are negative.  If the dataset has more than one
        sub-brick, all sub-bricks are used to build the histogram.
N.B.: Use at your own risk!  You might want to use the AFNI Histogram
        plugin to see if the results are reasonable.  This program is
        likely to produce bad results on images gathered with local
        RF coils, or with pulse sequences with unusual contrasts.

A csh command line for the truly adventurous:
  afni -dset "v1:time+orig<`3dClipLevel 'v1:time+orig[4]'` .. 10000>"
(the dataset is from the 'sample96.tgz' data samples).  Can you
figure out what this does?
(Hint: each type of quote "'` means something different to csh.)
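
Since the clip value goes to stdout, it is easy to capture in a script
and reuse, e.g. to build a rough brain mask.  The dataset 'anat+orig'
is hypothetical, and a fallback value keeps the sketch runnable on
systems without AFNI installed:

```shell
# Capture 3dClipLevel's output and use it as a mask threshold.
if command -v 3dClipLevel >/dev/null 2>&1; then
  clip=$(3dClipLevel anat+orig)
else
  clip=500   # placeholder for the dry run; real value comes from 3dClipLevel
fi
echo "clip level: $clip"

if command -v 3dcalc >/dev/null 2>&1; then
  3dcalc -a anat+orig -expr "step(a-$clip)" -prefix anat_mask
fi
```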
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dclust

Program: 3dclust 
Author:  RW Cox et al 
Date:    21 Jul 2005 

3dclust - performs simple-minded cluster detection in 3D datasets       
                                                                        
     This program can be used to find clusters of 'active' voxels and   
     print out a report about them.                                     
      * 'Active' refers to nonzero voxels that survive the threshold    
         that you (the user) have specified                             
      * Clusters are defined by a connectivity radius parameter 'rmm'   
                                                                        
      Note: by default, this program clusters on the absolute values    
            of the voxels                                               
----------------------------------------------------------------------- 
Usage: 3dclust [editing options] [other options] rmm vmul dset ...      
-----                                                                   
                                                                        
Examples:                                                               
--------                                                                
                                                                        
    3dclust         -1clip   0.3  5 2000 func+orig'[1]'                 
    3dclust -1noneg -1thresh 0.3  5 2000 func+orig'[1]'                 
    3dclust -1noneg -1thresh 0.3  5 2000 func+orig'[1]' func+orig'[3]'
                                                                        
    3dclust -noabs  -1clip 0.5   -dxyz=1  1  10 func+orig'[1]'          
    3dclust -noabs  -1clip 0.5            5 700 func+orig'[1]'          
                                                                        
    3dclust -noabs  -2clip 0 999 -dxyz=1 1  10 func+orig'[1]'           
                                                                        
    3dclust                   -1clip 0.3  5 3000 func+orig'[1]'         
    3dclust -quiet            -1clip 0.3  5 3000 func+orig'[1]'         
    3dclust -summarize -quiet -1clip 0.3  5 3000 func+orig'[1]'         
----------------------------------------------------------------------- 
                                                                        
Arguments (must be included on command line):                           
---------                                                               
                                                                        
   rmm            : cluster connection radius (in millimeters).         
                    All nonzero voxels closer than rmm millimeters      
                    (center-to-center distance) to the given voxel are  
                    included in the cluster.                            
                     * If rmm = 0, then clusters are defined by nearest-
                       neighbor connectivity                            
                                                                        
   vmul           : minimum cluster volume (micro-liters)               
                    i.e., determines the size of the volume cluster.    
                     * If vmul = 0, then all clusters are kept.         
                     * If vmul < 0, then the absolute vmul is the minimum
                          number of voxels allowed in a cluster.        
                                                                        
   dset           : input dataset (more than one allowed, but only the
                    first sub-brick of each dataset is used)
                                                                        
 The results are sent to standard output (i.e., the screen)             
                                                                        
----------------------------------------------------------------------- 
                                                                        
Options:                                                                
-------                                                                 
                                                                        
* Editing options are as in 3dmerge (see 3dmerge -help)                 
  (including -1thresh, -1dindex, -1tindex, -dxyz=1 options)             
                                                                        
* -noabs      => Use the signed voxel intensities (not the absolute     
                 value) for calculation of the mean and Standard        
                 Error of the Mean (SEM)                                
                                                                        
* -summarize  => Write out only the total nonzero voxel                 
                 count and volume for each dataset                      
                                                                        
* -nosum      => Suppress printout of the totals                        
                                                                        
* -verb       => Print out a progress report (to stderr)                
                 as the computations proceed                            
                                                                        
* -quiet      => Suppress all non-essential output                      
                                                                        
* -mni        => If the input dataset is in +tlrc coordinates, this     
                 option will stretch the output xyz-coordinates to the  
                 MNI template brain.                                    
                                                                        
           N.B.1: The MNI template brain is about 5 mm higher (in S),   
                  10 mm lower (in I), 5 mm longer (in PA), and tilted   
                  about 3 degrees backwards, relative to the Talairach- 
                  Tournoux Atlas brain.  For more details, see          
                    http://www.mrc-cbu.cam.ac.uk/Imaging/mnispace.html  
           N.B.2: If the input dataset is not in +tlrc coordinates,     
                  then the only effect is to flip the output coordinates
                  to the 'LPI' (neuroscience) orientation, as if you    
                   gave the '-orient LPI' option.
                                                                        
* -isovalue   => Clusters will be formed only from contiguous (in the   
                 rmm sense) voxels that also have the same value.       
                                                                        
           N.B.:  The normal method is to cluster all contiguous        
                  nonzero voxels together.                              
                                                                        
* -isomerge   => Clusters will be formed from each distinct value       
                 in the dataset; spatial contiguity will not be         
                 used (but you still have to supply rmm and vmul        
                 on the command line).                                  
                                                                        
           N.B.:  'Clusters' formed this way may well have components   
                   that are widely separated!                           
                                                                        
* -prefix ppp => Write a new dataset that is a copy of the              
                 input, but with all voxels not in a cluster            
                 set to zero; the new dataset's prefix is 'ppp'         
                                                                        
           N.B.:  Use of the -prefix option only affects the            
                  first input dataset                                   
----------------------------------------------------------------------- 
                                                                        
E.g., 3dclust -1clip 0.3  5  3000 func+orig'[1]'                        
                                                                        
  The above command tells 3dclust to find potential cluster volumes for 
  dataset func+orig, sub-brick #1, where the threshold has been set     
  to 0.3 (i.e., only voxels with values >0.3 or <-0.3 survive the
  threshold).  Voxels must be no more than 5 mm apart, and the cluster volume
  must be at least 3000 micro-liters in size.                           
                                                                        
Explanation of 3dclust Output:                                          
-----------------------------                                           
                                                                        
   Volume       : Number of voxels that make up the volume cluster      
                                                                        
   CM RL        : Center of mass (CM) for the cluster in the Right-Left 
                  direction (i.e., the coordinates for the CM)          
                                                                        
   CM AP        : Center of mass for the cluster in the                 
                  Anterior-Posterior direction                          
                                                                        
   CM IS        : Center of mass for the cluster in the                 
                  Inferior-Superior direction                           
                                                                        
   minRL, maxRL : Bounding box for the cluster, min and max             
                  coordinates in the Right-Left direction               
                                                                        
   minAP, maxAP : Min and max coordinates in the Anterior-Posterior     
                  direction of the volume cluster                       
                                                                        
   minIS, max IS: Min and max coordinates in the Inferior-Superior      
                  direction of the volume cluster                       
                                                                        
   Mean         : Mean value for the volume cluster                     
                                                                        
   SEM          : Standard Error of the Mean for the volume cluster     
                                                                        
   Max Int      : Maximum Intensity value for the volume cluster        
                                                                        
   MI RL        : Maximum Intensity value in the Right-Left             
                  direction of the volume cluster                       
                                                                        
   MI AP        : Maximum Intensity value in the Anterior-Posterior     
                  direction of the volume cluster                       
                                                                        
   MI IS        : Maximum Intensity value in the Inferior-Superior      
                  direction of the volume cluster                       
----------------------------------------------------------------------- 
                                                                        
Nota Bene:                                                              
                                                                        
   * The program does not work on complex- or rgb-valued datasets!      
                                                                        
   * Using the -1noneg option is strongly recommended!                  
                                                                        
   * 3D+time datasets are allowed, but only if you use the              
     -1tindex and -1dindex options.                                     
                                                                        
   * Bucket datasets are allowed, but you will almost certainly         
     want to use the -1tindex and -1dindex options with these.          
                                                                        
   * SEM values are not realistic for interpolated data sets!           
     A ROUGH correction is to multiply the SEM of the interpolated      
     data set by the square root of the number of interpolated          
     voxels per original voxel.                                         
                                                                        
   * If you use -dxyz=1, then rmm should be given in terms of           
     voxel edges (not mm) and vmul should be given in terms of          
     voxel counts (not microliters).  Thus, to connect to only          
     3D nearest neighbors and keep clusters of 10 voxels or more,       
     use something like '3dclust -dxyz=1 1.01 10 dset+orig'.            
     In the report, 'Volume' will be voxel count, but the rest of       
     the coordinate dependent information will be in actual xyz         
     millimeters.                                                       
                                                                        
  * The default coordinate output order is DICOM.  If you prefer        
    the SPM coordinate order, use the option '-orient LPI' or           
    set the environment variable AFNI_ORIENT to 'LPI'.  For more        
    information, see file README.environment.                           
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dCM
Usage: 3dCM [options] dset
Output = center of mass of dataset, to stdout.
  -mask mset   Means to use the dataset 'mset' as a mask:
                 Only voxels with nonzero values in 'mset'
                 will be averaged from 'dataset'.  Note
                 that the mask dataset and the input dataset
                 must have the same number of voxels.
  -automask    Generate the mask automatically.
  -set x y z   After computing the CM of the dataset, set the
                 origin fields in the header so that the CM
                 will be at (x,y,z) in DICOM coords.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dConvolve

Program:          3dConvolve 
Author:           B. Douglas Ward 
Initial Release:  28 June 2001 
Latest Revision:  28 Feb  2002 

Program to calculate the voxelwise convolution of given impulse response   
function (IRF) time series contained in a 3d+time dataset with a specified 
input stimulus function time series.  This program will also calculate     
convolutions involving multiple IRF's and multiple stimulus functions.     
Input options include addition of system noise to the estimated output.    
Output consists of an AFNI 3d+time dataset which contains the estimated    
system response.  Alternatively, if all inputs are .1D time series files,  
then the output will be a single .1D time series file.                     
                                                                       
Usage:                                                                 
3dConvolve                                                             
-input fname         fname = filename of 3d+time template dataset      
[-input1D]           flag to indicate all inputs are .1D time series   
[-mask mname]        mname = filename of 3d mask dataset               
[-censor cname]      cname = filename of censor .1D time series        
[-concat rname]      rname = filename for list of concatenated runs    
[-nfirst fnum]       fnum = number of first time point to calculate by 
                       convolution procedure.  (default = largest
                       -stim_maxlag value)
[-nlast  lnum]       lnum = number of last time point to calculate by  
                       convolution procedure.  (default = last point)  
[-polort pnum]       pnum = degree of polynomial corresponding to the  
                       baseline model  (default: pnum = 1)             
[-base_file bname]   bname = file containing baseline parameters       
                                                                       
-num_stimts num      num = number of input stimulus time series        
                       (default: num = 0)                              
-stim_file k sname   sname = filename of kth time series input stimulus
[-stim_minlag k m]   m = minimum time lag for kth input stimulus       
                       (default: m = 0)                                
[-stim_maxlag k n]   n = maximum time lag for kth input stimulus       
                       (default: n = 0)                                
[-stim_nptr k p]     p = number of stimulus function points per TR     
                       Note: This option requires 0 slice offset times 
                       (default: p = 1)                                
                                                                       
[-iresp k iprefix]   iprefix = prefix of 3d+time input dataset which   
                       contains the kth impulse response function      
                                                                       
[-errts eprefix]     eprefix = prefix of 3d+time input dataset which   
                       contains the residual error time series         
                       (i.e., noise which will be added to the output) 
                                                                       
[-sigma s]           s = std. dev. of additive Gaussian noise          
                       (default: s = 0)                                
[-seed d]            d = seed for random number generator              
                       (default: d = 1234567)                          
                                                                       
[-xout]              flag to write X matrix to screen                  
[-output tprefix]    tprefix = prefix of 3d+time output dataset which  
                       will contain the convolved time series data     
                       (or tprefix = prefix of .1D output time series  
                       if the -input1D option is used)                 
                                                                       
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dcopy
Usage 1: 3dcopy [-verb] [-denote] old_prefix new_prefix
  Will copy all datasets using the old_prefix to use the new_prefix;
    3dcopy fred ethel
  will copy   fred+orig.HEAD    to ethel+orig.HEAD
              fred+orig.BRIK    to ethel+orig.BRIK
              fred+tlrc.HEAD    to ethel+tlrc.HEAD
              fred+tlrc.BRIK.gz to ethel+tlrc.BRIK.gz

Usage 2: 3dcopy old_prefix+view new_prefix
  Will copy only the dataset with the given view (orig, acpc, tlrc).

Usage 3: 3dcopy old_dataset new_prefix
  Will copy the non-AFNI formatted dataset (e.g., MINC, ANALYZE, CTF)
  to the AFNI formatted dataset with the given new prefix.

Notes:
* The new datasets have new ID codes.  If you are renaming
   multiple datasets (as in Usage 1), then if the old +orig
   dataset is the warp parent of the old +acpc and/or +tlrc
   datasets, then the new +orig dataset will be the warp
   parent of the new +acpc and +tlrc datasets.  If any other
   datasets point to the old datasets as anat or warp parents,
   they will still point to the old datasets, not these new ones.
* The BRIK files are copied if they exist, keeping the compression
   suffix unchanged (if any).
* The old_prefix may have a directory name attached in front,
   as in 'gerard/manley/hopkins'.
* If the new_prefix does not have a directory name attached
   (i.e., does NOT look like 'homer/simpson'), then the new
   datasets will be written in the current directory ('./').
* The new_prefix cannot JUST be a directory (unlike the Unix
   utility 'cp'); you must supply a filename prefix, even if it
   is identical to the filename prefix in old_prefix.
* The '-verb' option will print progress reports; otherwise, the
   program operates silently (unless an error is detected).
* The '-denote' option will remove any Notes from the file.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dCRUISEtoAFNI
Usage: 3dCRUISEtoAFNI -input CRUISE_HEADER.dx
 Converts a CRUISE dataset defined by a header in OpenDX format.
 The conversion is based on sample data and information
 provided by Aaron Carass from JHU's IACL iacl.ece.jhu.edu
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dDeconvolve
++ Program 3dDeconvolve: AFNI version=AFNI_2005_08_24_1751
Program to calculate the deconvolution of a measurement 3d+time dataset    
with a specified input stimulus time series.  This program will also       
perform multiple linear regression using multiple input stimulus time      
series. Output consists of an AFNI 'bucket' type dataset containing the    
least squares estimates of the linear regression coefficients, t-statistics
for significance of the coefficients, partial F-statistics for significance
of the individual input stimuli, and the F-statistic for significance of   
the overall regression.  Additional output consists of a 3d+time dataset   
containing the estimated system impulse response function.                 
                                                                       
Usage:                                                                 
3dDeconvolve
                                                                       
**** Input data and control options:                                   
-input fname         fname = filename of 3d+time input dataset         
                       (more than  one filename  can be  given)        
                       (here,   and  these  datasets  will  be)        
                       (catenated  in time;   if you do this, )        
                       ('-concat' is not needed and is ignored)        
[-input1D dname]     dname = filename of single (fMRI) .1D time series 
[-nodata [NT [TR]]]  Evaluate experimental design only (no input data)
[-mask mname]        mname = filename of 3d mask dataset               
[-automask]          build a mask automatically from input data        
                      (will be slow for long time series datasets)     
[-censor cname]      cname = filename of censor .1D time series        
[-concat rname]      rname = filename for list of concatenated runs    
[-nfirst fnum]       fnum = number of first dataset image to use in the
                       deconvolution procedure. (default = max maxlag) 
[-nlast  lnum]       lnum = number of last dataset image to use in the 
                       deconvolution procedure. (default = last point) 
[-polort pnum]       pnum = degree of polynomial corresponding to the  
                       null hypothesis  (default: pnum = 1)            
[-legendre]          use Legendre polynomials for null hypothesis      
[-nolegendre]        use power polynomials for null hypotheses         
                       (default is -legendre)                          
[-nodmbase]          don't de-mean baseline time series                
                       (i.e., polort>1 and -stim_base inputs)          
[-dmbase]            de-mean baseline time series (default if polort>0)
[-nocond]            don't calculate matrix condition number           
[-svd]               Use SVD instead of Gaussian elimination (default) 
[-nosvd]             Use Gaussian elimination instead of SVD           
[-rmsmin r]          r = minimum rms error to reject reduced model     
                                                                       
**** Input stimulus options:                                           
-num_stimts num      num = number of input stimulus time series        
                       (0 <= num)   (default: num = 0)                 
-stim_file k sname   sname = filename of kth time series input stimulus
[-stim_label k slabel] slabel = label for kth input stimulus           
[-stim_base k]       kth input stimulus is part of the baseline model  
[-stim_minlag k m]   m = minimum time lag for kth input stimulus       
                       (default: m = 0)                                
[-stim_maxlag k n]   n = maximum time lag for kth input stimulus       
                       (default: n = 0)                                
[-stim_nptr k p]     p = number of stimulus function points per TR     
                       Note: This option requires 0 slice offset times 
                       (default: p = 1)                                
                                                                       
[-stim_times k tname Rmodel]                                           
   Generate the k-th response model from a set of stimulus times       
   given in file 'tname'.  The response model is specified by the      
   'Rmodel' argument, which can be one of                              
     'GAM(p,q)'    = 1 parameter gamma variate                         
     'SPMG'        = 2 parameter SPM gamma variate + derivative        
     'POLY(b,c,n)' = n parameter polynomial expansion                  
     'SIN(b,c,n)'  = n parameter sine series expansion                 
     'TENT(b,c,n)' = n parameter tent function expansion               
     'BLOCK(d,p)'  = 1 parameter block stimulus of duration 'd'        
                     (can also be called 'IGFUN' which stands)         
                     (for 'incomplete gamma function'        )         
     'EXPR(b,c) exp1 ... expn' = n parameter; arbitrary expressions    
                                                                       
[-basis_normall a]                                                     
   Normalize all basis functions for '-stim_times' to have             
   amplitude 'a' (must have a > 0).  The peak absolute value           
   of each basis function will be scaled to be 'a'.                    
   NOTE: -basis_normall only affects -stim_times options that
         appear LATER on the command line
                                                                       
[-slice_base k sname]                                                  
       Inputs the k'th stimulus time series from file sname,           
   AND specifies that this regressor belongs to the baseline,          
   AND specifies that the regressor is different for each slice in     
       the input 3D+time dataset.  The sname file should have exactly  
       nz columns of input, where nz=number of slices, OR it should    
       have exactly 1 column, in which case this input is the same     
       as using '-stim_file k sname' and '-stim_base k'.               
 N.B.: * You can't use -stim_minlag or -stim_maxlag or -stim_nptr      
         with this value of k.                                         
       * You can't use this option with -input1D or -nodata.           
       * The intended use of this option is to provide slice-          
         dependent physiological noise regressors, e.g., from program  
         1dCRphase.                                                    
                                                                       
**** General linear test (GLT) options:                                
-num_glt num         num = number of general linear tests (GLTs)       
                       (0 <= num)   (default: num = 0)                 
[-glt s gltname]     Perform s simultaneous linear tests, as specified 
                       by the matrix contained in file gltname         
[-glt_label k glabel]  glabel = label for kth general linear test      
[-gltsym gltname]    Read the GLT with symbolic names from the file    
                                                                       
[-TR_irc dt]                                                           
   Use 'dt' as the stepsize for computation of integrals in -IRC_times 
   options.  Default is to use value given in '-TR_times'.             
                                                                       
**** Options for output 3d+time datasets:                              
[-iresp k iprefix]   iprefix = prefix of 3d+time output dataset which  
                       will contain the kth estimated impulse response 
[-tshift]            Use cubic spline interpolation to time shift the  
                       estimated impulse response function, in order to
                       correct for differences in slice acquisition    
                       times. Note that this affects only the 3d+time
                       output dataset generated by the -iresp option.  
[-sresp k sprefix]   sprefix = prefix of 3d+time output dataset which  
                       will contain the standard deviations of the     
                       kth impulse response function parameters        
[-fitts  fprefix]    fprefix = prefix of 3d+time output dataset which  
                       will contain the (full model) time series fit   
                       to the input data                               
[-errts  eprefix]    eprefix = prefix of 3d+time output dataset which  
                       will contain the residual error time series     
                       from the full model fit to the input data       
[-TR_times dt]                                                         
   Use 'dt' as the stepsize for output of -iresp and -sresp file       
   for response models generated by '-stim_times' options.             
   Default is same as time spacing in the '-input' 3D+time dataset.    
   The units here are in seconds!                                      
                                                                       
**** Options to control the contents of the output bucket dataset:     
[-fout]            Flag to output the F-statistics                     
[-rout]            Flag to output the R^2 statistics                   
[-tout]            Flag to output the t-statistics                     
[-vout]            Flag to output the sample variance (MSE) map        
[-nobout]          Flag to suppress output of baseline coefficients    
                     (and associated statistics)                       
[-nocout]          Flag to suppress output of regression coefficients  
                     (and associated statistics)                       
[-full_first]      Flag to specify that the full model statistics will 
                     appear first in the bucket dataset output         
[-bucket bprefix]  Create one AFNI 'bucket' dataset containing various 
                     parameters of interest, such as the estimated IRF 
                     coefficients, and full model fit statistics.      
                     Output 'bucket' dataset is written to bprefix.    
                                                                       
[-xsave]           Flag to save X matrix into file bprefix.xsave       
                     (only works if -bucket option is also given)      
[-noxsave]         Don't save X matrix (this is the default)           
[-cbucket cprefix] Save the regression coefficients (no statistics)    
                     into a dataset named 'cprefix'.  This dataset     
                     will be used in a -xrestore run instead of the    
                     bucket dataset, if possible.                      
                                                                       
[-xrestore f.xsave] Restore the X matrix, etc. from a previous run     
                     that was saved into file 'f.xsave'.  You can      
                     then carry out new -glt tests.  When -xrestore    
                     is used, most other command line options are      
                     ignored.                                          
                                                                       
**** The following options control the screen output only:             
[-quiet]             Flag to suppress most screen output               
[-xout]              Flag to write X and inv(X'X) matrices to screen   
[-xjpeg filename]    Write a JPEG file graphing the X matrix           
[-progress n]        Write statistical results for every nth voxel     
[-fdisp fval]        Write statistical results for those voxels        
                       whose full model F-statistic is > fval          

 -jobs J   Run the program with 'J' jobs (sub-processes).
             On a multi-CPU machine, this can speed the
             program up considerably.  On a single CPU
             machine, using this option is silly.
             J should be a number from 1 up to the
             number of CPUs sharing memory on the system.
             J=1 is normal (single process) operation.
             The maximum allowed value of J is 32.
         * For more information on parallelizing, see
           http://afni.nimh.nih.gov/afni/doc/misc/afni_parallelize
         * Use -mask to get more speed; cf. 3dAutomask.

** NOTE **
This version of the program has been compiled to use
double precision arithmetic for most internal calculations.
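
As a rough illustration of the voxelwise multiple linear regression described at the top of this help (least squares coefficient estimates plus an F-statistic for the full regression, solved via SVD as with the default -svd route), here is a minimal NumPy sketch. The boxcar stimulus, noise level, and data are made-up stand-ins; this is not 3dDeconvolve's internals.

```python
# Sketch of one voxel's regression: solve y = X @ b by least squares,
# then form the F statistic for the full model vs. an intercept-only one.
import numpy as np

def fit_full_model(X, y):
    b, _, _, _ = np.linalg.lstsq(X, y, rcond=None)  # SVD-based solve
    resid = y - X @ b
    n, p = X.shape                                  # time points, regressors
    sse_full = float(np.sum(resid ** 2))
    sse_base = float(np.sum((y - y.mean()) ** 2))   # intercept-only model
    F = ((sse_base - sse_full) / (p - 1)) / (sse_full / (n - p))
    return b, F

# Hypothetical data: intercept + one boxcar stimulus regressor.
rng = np.random.default_rng(0)
t = np.arange(40)
stim = (t % 10 < 3).astype(float)                   # fake stimulus timing
X = np.column_stack([np.ones(40), stim])
y = 2.0 + 1.5 * stim + 0.1 * rng.standard_normal(40)
b, F = fit_full_model(X, y)   # b ~ [2.0, 1.5], large F
```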
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dDeconvolve_f
**
** 3dDeconvolve_f is now disabled by default.
** It is dangerous, due to roundoff problems.
** Please use 3dDeconvolve from now on!
**
** HOWEVER, if you insist on using 3dDeconvolve_f, then:
**        + Use '-OK' as the first command line option.
**        + Check the matrix condition number;
**            if it is greater than 100, BEWARE!
**
** RWCox - July 2004
**
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3ddelay

Program: 3ddelay 
Author:  Ziad Saad (using B. Douglas Ward's 3dfim+ to read and write bricks) 
Date:    Jul 22 2005 

The program estimates the time delay between each voxel time series
in a 3D+time dataset and a reference time series [1][2].
The estimated delays are relative to the reference time series.
For example, a delay of 4 seconds means that the voxel time series
is delayed by 4 seconds with respect to the reference time series.

                                                                       
Usage:                                                                 
3ddelay                                                                 
-input fname       fname = filename of input 3d+time dataset           
-ideal_file rname  rname = input ideal time series file name           
   The length of the reference time series should be equal to           
     that of the 3d+time data set. 
     The reference time series vector is stored in an ascii file.        
     The program assumes that there is one value per line and that all
     values in the file are part of the reference vector.
     PS: Unlike with 3dfim and FIM in AFNI, values over 33333 are treated
     as part of the time series.                                          
-fs fs             Sampling frequency in Hz. of data time series (1/TR). 
-T  Tstim          Stimulus period in seconds. 
                   If the stimulus is not periodic, you can set Tstim to 0.
[-prefix bucket]   The prefix for the results Brick.
                   The first subbrick is for Delay.
                   The second subbrick is for Covariance, which is an estimate
                   of the power in voxel time series at the frequencies present 
                   in the reference time series.
                   The third subbrick is for the Cross Correlation Coefficients between
                   FMRI time series and reference time series.
                   The fourth subbrick contains estimates of the Variance of voxel time series.
                   The default prefix is the prefix of the input 3D+time brick 
                   with a '.DEL' extension appended to it.
[-uS/-uD/-uR]      Units for delay estimates. (Seconds/Degrees/Radians)
                   You can't use Degrees or Radians as units unless 
                   you specify a value for Tstim > 0.
[-phzwrp]          Delay (or phase) wrap.
                   This switch maps delays from: 
                   (Seconds) 0->T/2 to 0->T/2 and T/2->T to -T/2->0
                   (Degrees) 0->180 to 0->180 and 180->360 to -180->0
                   (Radians) 0->pi to 0->pi and pi->2pi to -pi->0
                   You can't use this option unless you specify a 
                   value for Tstim > 0.
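
The -phzwrp mapping tabulated above (for seconds) can be sketched as a one-line rule: delays below T/2 are unchanged, and delays in [T/2, T) are shifted down by a full period. This is an illustration of the table only, not 3ddelay's source code.

```python
# Sketch of the -phzwrp delay wrap (seconds): [0, T/2) stays put,
# [T/2, T) maps to [-T/2, 0).  Mirrors the help-text table above.
def wrap_delay(d, T):
    """Map a delay d in [0, T) to the wrapped range [-T/2, T/2)."""
    return d if d < T / 2 else d - T

T = 40.0
assert wrap_delay(5.0, T) == 5.0     # below T/2: unchanged
assert wrap_delay(30.0, T) == -10.0  # T/2 -> T maps to -T/2 -> 0
```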

[-bias]            Do not correct for the bias in the estimates [1][2]
[-mask mname]      mname = filename of 3d mask dataset                 
                   only voxels with non-zero values in the mask will be
                   considered.
[-nfirst fnum]     fnum = number of first dataset image to use in      
                     the delay estimate. (default = 0)                 
[-nlast  lnum]     lnum = number of last dataset image to use in       
                     the delay estimate. (default = last)              
[-nodsamp ]        Do not correct a voxel's estimated delay by the time 
                   at which the slice containing that voxel was acquired.

[-co CCT]          Cross Correlation Coefficient threshold value.
                   This is only used to limit the ascii output (see below).
[-nodtrnd]         Do not remove the linear trend from the data time series.
                   Only the mean is removed. Regardless of this option, 
                   no detrending is done to the reference time series.
[-asc [out]]       Write the results to an ascii file for voxels with 
[-ascts [out]]     cross correlation coefficients larger than CCT.
                   If 'out' is not specified, a default name similar 
                   to the default output prefix is used.
                   With -asc, only the files 'out' and 'out.log' are written to disk (see ahead).
                   With -ascts, an additional file, 'out.ts', is written to disk (see ahead).
                   There are 9 columns in 'out' which hold the following values:
                    1- Voxel Index (VI) : Each voxel in an AFNI brick has a unique index.
                          Indices map directly to XYZ coordinates.
                          See AFNI plugin documentations for more info.
                    2..4- Voxel coordinates (X Y Z): Those are the voxel slice coordinates.
                          You can see these coordinates in the upper left side of the 
                          AFNI window. To do so, you must first switch the voxel
                          coordinate units from mm to slice coordinates. 
                          Define Datamode -> Misc -> Voxel Coords ?
                          PS: The coords that show up in the graph window
                              could be different from those in the upper left side 
                              of AFNI's main window.
                    5- Duff : A value of no interest to you. It is preserved for backward 
                          compatibility.
                    6- Delay (Del) : The estimated voxel delay.
                    7- Covariance (Cov) : Covariance estimate.
                    8- Cross Correlation Coefficient (xCorCoef) : Cross Correlation Coefficient.
                    9- Variance (VTS) : Variance of voxel's time series.

                   The file 'out' can be used as an input to two plugins:
                     '4Ddump' and '3D+t Extract'

                   The log file 'out.log' contains all parameter settings used for generating 
                   the output brick. It also holds any warnings generated by the plugin.
                   Some warnings, such as 'null time series ...', or
                   'Could not find zero crossing ...', are harmless.
                   I might remove them in future versions.

                   A line (L) in the file 'out.ts' contains the time series of the voxel whose
                   results are written on line (L) in the file 'out'.
                   The time series written to 'out.ts' do not contain the ignored samples;
                   they are detrended and have zero mean.

                                                                      
Random Comments/Advice:
   The longer your time series, the better. It is generally recommended that
   the largest delay be less than N/10, N being the length of the time series.
   The algorithm does go all the way to N/2.

   If you have/find questions/comments/bugs about the plugin, 
   send me an E-mail: ziad@nih.gov

                          Ziad Saad Dec 8 00.

   [1] : Bendat, J. S. (1985). The Hilbert transform and applications to correlation measurements,
          Bruel and Kjaer Instruments Inc.
   [2] : Bendat, J. S. and G. A. Piersol (1986). Random Data analysis and measurement procedures, 
          John Wiley & Sons.
   Author's publications on delay estimation using the Hilbert Transform:
   [3] : Saad, Z.S., et al., Analysis and use of FMRI response delays. 
         Hum Brain Mapp, 2001. 13(2): p. 74-93.
   [4] : Saad, Z.S., E.A. DeYoe, and K.M. Ropella, Estimation of FMRI Response Delays. 
         Neuroimage, 2003. 18(2): p. 494-504.

This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dDespike
Usage: 3dDespike [options] dataset
Removes 'spikes' from the 3D+time input dataset and writes
a new dataset with the spike values replaced by something
more pleasing.

Method:
 * L1 fit a smooth-ish curve to each voxel time series
    [see -corder option for description of the curve].
 * Compute the MAD of the difference between the curve and
    the data time series (the residuals).
 * Estimate the standard deviation 'sigma' of the residuals
    as sqrt(PI/2)*MAD.
 * For each voxel value, define s = (value-curve)/sigma.
 * Values with s > c1 are replaced with a value that yields
    a modified s' = c1+(c2-c1)*tanh((s-c1)/(c2-c1)).
 * c1 is the threshold value of s for a 'spike' [default c1=2.5].
 * c2 is the upper range of the allowed deviation from the curve:
    s=[c1..infinity) is mapped to s'=[c1..c2)   [default c2=4].
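
The Method steps above can be sketched numerically. In this illustration the fitted curve is simplified to the series median (the real program fits the -corder curve by L1 regression); the data values are made up. This is not 3dDespike's source code.

```python
# Numerical sketch of the despiking rule described above.
import numpy as np

def despike(v, c1=2.5, c2=4.0):
    curve = np.median(v)                    # stand-in for the L1 curve fit
    mad = np.median(np.abs(v - curve))      # MAD of the residuals
    sigma = np.sqrt(np.pi / 2.0) * mad      # sigma estimate from MAD
    s = (v - curve) / sigma
    # Values with s > c1 get the tanh-squashed s'; others pass through.
    s_new = np.where(s > c1,
                     c1 + (c2 - c1) * np.tanh((s - c1) / (c2 - c1)),
                     s)
    return curve + s_new * sigma

v = np.array([10.0, 11.0, 9.0, 10.0, 50.0, 10.0, 11.0, 9.0, 10.0])
w = despike(v)   # the 50.0 spike is pulled toward the curve
```

Non-spike values reconstruct unchanged (curve + s*sigma = v), while the spike is clamped near curve + c2*sigma.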

Options:
 -ignore I  = Ignore the first I points in the time series:
               these values will just be copied to the
               output dataset [default I=0].
 -corder L  = Set the curve fit order to L:
               the curve that is fit to voxel data v(t) is

                       k=L [        (2*PI*k*t)          (2*PI*k*t) ]
 f(t) = a+b*t+c*t*t + SUM  [ d * sin(--------) + e * cos(--------) ]
                       k=1 [  k     (    T   )    k     (    T   ) ]

               where T = duration of time series;
               the a,b,c,d,e parameters are chosen to minimize
               the sum over t of |v(t)-f(t)| (L1 regression);
               this type of fitting is insensitive to large
               spikes in the data.  The default value of L is
               NT/30, where NT = number of time points.

 -cut c1 c2 = Alter default values for the spike cut values
               [default c1=2.5, c2=4.0].
 -prefix pp = Save de-spiked dataset with prefix 'pp'
               [default pp='despike']
 -ssave ttt = Save 'spikiness' measure s for each voxel into a
               3D+time dataset with prefix 'ttt' [default=no save]
 -nomask    = Process all voxels
               [default=use a mask of high-intensity voxels, ]
               [as created via '3dAutomask -dilate 4 dataset'].

Caveats:
* Despiking may interfere with image registration, since head
   movement may produce 'spikes' at the edge of the brain, and
   this information would be used in the registration process.
   This possibility has not been explored.
* Check your data visually before and after despiking and
   registration!
   [Hint: open 2 AFNI controllers, and turn Time Lock on.]
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dDetrend
Usage: 3dDetrend [options] dataset
This program removes components from voxel time series using
linear least squares.  Each voxel is treated independently.
The input dataset may have a sub-brick selector string; otherwise,
all sub-bricks will be used.

General Options:
 -prefix pname = Use 'pname' for the output dataset prefix name.
                   [default='detrend']
 -session dir  = Use 'dir' for the output dataset session directory.
                   [default='./'=current working directory]
 -verb         = Print out some verbose output as the program runs.
 -replace      = Instead of subtracting the fit from each voxel,
                   replace the voxel data with the time series fit.
 -normalize    = Normalize each output voxel time series; that is,
                   make the sum-of-squares equal to 1.
           N.B.: This option is only valid if the input dataset is
                   stored as floats!
 -byslice      = Treat each input vector (infra) as describing a set of
                   time series interlaced across slices.  If NZ is the
                   number of slices and NT is the number of time points,
                   then each input vector should have NZ*NT values when
                   this option is used (usually, they only need NT values).
                   The values must be arranged in slice order, then time
                   order, in each vector column, as shown here:
                       f(z=0,t=0)       // first slice, first time
                       f(z=1,t=0)       // second slice, first time
                       ...
                       f(z=NZ-1,t=0)    // last slice, first time
                       f(z=0,t=1)       // first slice, second time
                       f(z=1,t=1)       // second slice, second time
                       ...
                       f(z=NZ-1,t=NT-1) // last slice, last time

Component Options:
These options determine the components that will be removed from
each dataset voxel time series.  They may be repeated to specify
multiple regression.  At least one component must be specified.

 -vector vvv   = Remove components proportional to the columns vectors
                   of the ASCII *.1D file 'vvv'.  You may use a
                   sub-vector selector string to specify which columns
                   to use; otherwise, all columns will be used.
                   For example:
                    -vector 'xyzzy.1D[3,5]'
                   will remove the 4th and 6th columns of file xyzzy.1D
                   from the dataset (sub-vector indexes start at 0).

 -expr eee     = Remove components proportional to the function
                   specified in the expression string 'eee'.
                   Any single letter from a-z may be used as the
                   independent variable in 'eee'.  For example:
                    -expr 'cos(2*PI*t/40)' -expr 'sin(2*PI*t/40)'
                   will remove sine and cosine waves of period 40
                   from the dataset.  Another example:
                    -expr '1' -expr 't' -expr 't*t'
                   will remove a quadratic trend from the data.

 -del ddd      = Use the numerical value 'ddd' for the stepsize
                   in subsequent -expr options.  If no -del option
                   is ever given, then the TR given in the dataset
                   header is used for 'ddd'; if that isn't available,
                   then 'ddd'=1.0 is assumed.  The j-th time point
                   will have independent variable = j * ddd, starting
                   at j=0.  For example:
                     -expr 'sin(x)' -del 2.0 -expr 'z**3'
                   means that the stepsize in 'sin(x)' is delta-x=TR,
                   but the stepsize in 'z**3' is delta-z = 2.

 N.B.: expressions are NOT calculated on a per-slice basis when the
        -byslice option is used.  If you want to do this, you could
        compute vectors with the required time series using 1deval.
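
Removing components "using linear least squares," as described above, amounts to regressing each voxel series on the component columns and keeping the residual. A minimal NumPy sketch (illustration of the idea only, not 3dDetrend's source); the components here match the '-expr 1 -expr t -expr t*t' quadratic-trend example:

```python
# Sketch of least-squares component removal: project out the columns of C.
import numpy as np

t = np.arange(50, dtype=float)
y = 3.0 + 0.5 * t + 0.01 * t * t + np.sin(t)      # quadratic trend + signal
C = np.column_stack([np.ones_like(t), t, t * t])  # components to remove
beta, _, _, _ = np.linalg.lstsq(C, y, rcond=None) # least-squares fit
detrended = y - C @ beta                          # residual after removal
```

The residual is (numerically) orthogonal to every removed component, so refitting the same components to it yields near-zero coefficients.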

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3ddot
Usage: 3ddot [options] dset1 dset2
Output = correlation coefficient between 2 dataset bricks
         - you can use sub-brick selectors on the dsets
         - the result is a number printed to stdout
Options:
  -mask mset   Means to use the dataset 'mset' as a mask:
                 Only voxels with nonzero values in 'mset'
                 will be averaged from 'dataset'.  Note
                 that the mask dataset and the input dataset
                 must have the same number of voxels.
  -mrange a b  Means to further restrict the voxels from
                 'mset' so that only those mask values
                 between 'a' and 'b' (inclusive) will
                 be used.  If this option is not given,
                 all nonzero values from 'mset' are used.
                 Note that if a voxel is zero in 'mset', then
                 it won't be included, even if a < 0 < b.
  -demean      Means to remove the mean from each volume
                 prior to computing the correlation.
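
The correlation 3ddot reports, with the -mask and -demean ideas included, can be sketched as follows. The arrays are hypothetical stand-ins for dataset bricks; this illustrates the arithmetic, not 3ddot's source code.

```python
# Sketch of a masked, optionally demeaned correlation between two "bricks".
import numpy as np

def brick_corr(a, b, mask=None, demean=True):
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    if mask is not None:                      # keep only nonzero-mask voxels
        keep = np.asarray(mask).ravel() != 0
        a, b = a[keep], b[keep]
    if demean:                                # remove each volume's mean
        a = a - a.mean()
        b = b - b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 4.0, 6.0, 8.0])            # perfectly correlated with a
r = brick_corr(a, b)
```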

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dDTeig
Usage: 3dDTeig [options] dataset
Computes eigenvalues and eigenvectors for an input dataset of
 6 sub-bricks Dxx,Dxy,Dxz,Dyy,Dyz,Dzz.
 The results are stored in a 14-subbrick bucket dataset.
 The resulting 14-subbricks are
  lambda_1,lambda_2,lambda_3,
  eigvec_1[1-3],eigvec_2[1-3],eigvec_3[1-3],
  FA,MD.

The output is a bucket dataset.  The input dataset
may use a sub-brick selection list, as in program 3dcalc.
 Mean diffusivity (MD) calculated as simple average of eigenvalues.
 Fractional Anisotropy (FA) calculated according to Pierpaoli C, Basser PJ.
 Microstructural and physiological features of tissues elucidated by
 quantitative-diffusion tensor MRI, J Magn Reson B 1996; 111:209-19
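The FA and MD definitions cited above (Pierpaoli & Basser) can be written out directly from the three eigenvalues. This is a sketch of the standard formulas, not 3dDTeig's own code:

```python
import math

def fa_md(l1, l2, l3):
    """Fractional anisotropy and mean diffusivity from the three
    eigenvalues of the diffusion tensor."""
    md = (l1 + l2 + l3) / 3.0                    # mean diffusivity
    num = (l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2
    den = l1*l1 + l2*l2 + l3*l3
    fa = math.sqrt(1.5 * num / den) if den > 0 else 0.0
    return fa, md

# Isotropic tensor: FA = 0; fully anisotropic (one nonzero): FA = 1
print(fa_md(1.0, 1.0, 1.0))   # (0.0, 1.0)
```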

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3ddup
Usage: 3ddup [options] dataset
 'Duplicates' a 3D dataset by making a warp-on-demand copy.
 Applications:
   - allows AFNI to resample a dataset to a new grid without
       destroying an existing data .BRIK
   - change a functional dataset to anatomical, or vice-versa

OPTIONS:
  -'type'           = Convert to the given 'type', which must be
                       chosen from the same list as in to3d
  -session dirname  = Write output into given directory (default=./)
  -prefix  pname    = Use 'pname' for the output dataset prefix
                       (default=dup)
N.B.: Even if the new dataset is anatomical, it will not contain
      any markers, duplicated from the original, or otherwise.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dDWItoDT
Usage: 3dDWItoDT [options] gradient-file dataset
Computes 6 principal direction tensors from multiple gradient vectors
 and corresponding DTI image volumes.
 The program takes two parameters as input :  
    a 1D file of the gradient vectors with lines of ASCII floats Gxi,Gyi,Gzi.
    Only the non-zero gradient vectors are included in this file (no G0 line).
    a 3D bucket dataset with Np+1 sub-briks where the first sub-brik is the
    volume acquired with no diffusion weighting.
 Options:
   -prefix pname = Use 'pname' for the output dataset prefix name.
    [default='DT']

   -automask =  mask dataset so that the tensors are computed only for
    high-intensity (presumably brain) voxels.  The intensity level is
    determined the same way that 3dClipLevel works.

   -mask dset = use dset as mask to include/exclude voxels

   -nonlinear = compute iterative solution to avoid negative eigenvalues.
    This is the default method.

   -linear = compute simple linear solution.

   -reweight = recompute weight factors at end of iterations and restart

   -max_iter n = maximum number of iterations for convergence (Default=10).
    Values can range from -1 to any positive integer less than 101.
    A value of -1 is equivalent to the linear solution.
    A value of 0 results in only the initial estimate of the diffusion tensor
    solution adjusted to avoid negative eigenvalues.

   -max_iter_rw n = max number of iterations after reweighting (Default=5)
    Values can range from 1 to any positive integer less than 101.

   -eigs = compute eigenvalues, eigenvectors, fractional anisotropy and mean
    diffusivity in sub-briks 6-19. Computed as in 3dDTeig

   -debug_briks = add sub-briks with Ed (error functional), Ed0 (orig. error),
     number of steps to convergence and I0 (modeled B0 volume)

   -cumulative_wts = show overall weight factors for each gradient level
    May be useful as a quality control

   -verbose nnnnn = print convergence steps every nnnnn voxels that survive to
    convergence loops (can be quite lengthy).

   -drive_afni nnnnn = show convergence graphs every nnnnn voxels that survive
    to convergence loops. AFNI must have NIML communications on (afni -niml).

 Example:
  3dDWItoDT -prefix rw01 -automask -reweight -max_iter 10 \
            -max_iter_rw 10 tensor25.1D grad02+orig.

 The output is a 6 sub-brick bucket dataset containing Dxx,Dxy,Dxz,Dyy,Dyz,Dzz.
 Additional sub-briks may be appended with the -eigs and -debug_briks options.
 These results are appropriate as the input to the 3dDTeig program.
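The model being fit here is the standard diffusion-tensor signal equation, S_i = S0 * exp(-b * g_i' D g_i), with D the symmetric tensor (Dxx, Dxy, Dxz, Dyy, Dyz, Dzz). The forward model can be sketched as below; the b-value and tensor values are made-up examples, and this is not 3dDWItoDT's fitting code:

```python
import math

def predicted_signal(s0, b, g, D):
    """Predicted DWI signal for unit gradient g = (gx, gy, gz) and
    symmetric tensor D = (Dxx, Dxy, Dxz, Dyy, Dyz, Dzz)."""
    gx, gy, gz = g
    dxx, dxy, dxz, dyy, dyz, dzz = D
    quad = (gx*gx*dxx + gy*gy*dyy + gz*gz*dzz
            + 2.0*(gx*gy*dxy + gx*gz*dxz + gy*gz*dyz))
    return s0 * math.exp(-b * quad)

# Isotropic tensor: the signal is the same for every unit gradient
D_iso = (1e-3, 0.0, 0.0, 1e-3, 0.0, 1e-3)
print(predicted_signal(1000.0, 1000.0, (1, 0, 0), D_iso))  # ~367.88
```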


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dEntropy
Usage: 3dEntropy dataset ...
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dExtrema
++ Program 3dExtrema: AFNI version=AFNI_2005_08_24_1751
This program finds local extrema (minima or maxima) of the input       
dataset values for each sub-brick of the input dataset.  The extrema   
may be determined either for each volume, or for each individual slice.
Only those voxels whose corresponding intensity value is greater than  
the user specified data threshold will be considered.                  

Usage: 3dExtrema  options  datasets                                  
where the options are:                                                 
-prefix pname    = Use 'pname' for the output dataset prefix name.     
  OR                 [default = NONE; only screen output]              
-output pname                                                          
                                                                       
-session dir     = Use 'dir' for the output dataset session directory. 
                     [default='./'=current working directory]          
                                                                       
-quiet           = Flag to suppress screen output                      
                                                                       
-mask_file mname = Use mask statistic from file mname.                 
                   Note: If file mname contains more than 1 sub-brick, 
                   the mask sub-brick must be specified!               
-mask_thr m        Only voxels whose mask statistic is greater         
                   than m in absolute value will be considered.
                                                                       
-data_thr d        Only voxels whose value (intensity) is greater      
                   than d in absolute value will be considered.
                                                                       
-sep_dist d        Min. separation distance [mm] for distinct extrema  
                                                                       
Choose type of extrema (one and only one choice):                      
-minima            Find local minima.                                  
-maxima            Find local maxima.                                  
                                                                       
Choose form of binary relation (one and only one choice):              
-strict            >  for maxima,  <  for minima                       
-partial           >= for maxima,  <= for minima                       
                                                                       
Choose boundary criteria (one and only one choice):                    
-interior          Extrema must be interior points (not on boundary)   
-closure           Extrema may be boundary points
                                                                       
Choose domain for finding extrema (one and only one choice):           
-slice             Each slice is considered separately                 
-volume            The volume is considered as a whole                 
                                                                       
Choose option for merging of extrema (one and only one choice):        
-remove            Remove all but strongest of neighboring extrema     
-average           Replace neighboring extrema by average              
-weight            Replace neighboring extrema by weighted average     
                                                                       
Command line arguments after the above are taken to be input datasets. 
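The -strict/-partial and -interior/-closure choices above can be illustrated on a toy 1-D series (3dExtrema itself works on 3-D volumes or individual slices; this is only an illustrative sketch):

```python
def local_maxima(vals, strict=True, interior=True):
    """Indices of local maxima in a 1-D series.  strict=True uses '>'
    against both neighbors; strict=False uses '>='.  interior=True
    excludes the two boundary points (the -closure option admits them)."""
    better = (lambda a, b: a > b) if strict else (lambda a, b: a >= b)
    n = len(vals)
    out = []
    for i in range(n):
        if interior and (i == 0 or i == n - 1):
            continue                      # boundary points excluded
        nbrs = [vals[j] for j in (i - 1, i + 1) if 0 <= j < n]
        if all(better(vals[i], v) for v in nbrs):
            out.append(i)
    return out

v = [1, 3, 2, 2, 5, 5, 0]
print(local_maxima(v))                    # [1] -- strict rejects the plateau
print(local_maxima(v, strict=False))      # [1, 4, 5] -- partial admits it
```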


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dFDR

Program:          3dFDR 
Author:           B. Douglas Ward 
Initial Release:  31 January 2002 
Latest Revision:  31 January 2002 

This program implements the False Discovery Rate (FDR) algorithm for       
thresholding of voxelwise statistics.                                      
                                                                           
Program input consists of a functional dataset containing one (or more)    
statistical sub-bricks.  Output consists of a bucket dataset with one      
sub-brick for each input sub-brick.  For non-statistical input sub-bricks, 
the output is a copy of the input.  However, statistical input sub-bricks  
are replaced by their corresponding FDR values, as follows:                
                                                                           
For each voxel, the minimum value of q is determined such that             
                               E(FDR) <= q                                 
leads to rejection of the null hypothesis in that voxel. Only voxels inside
the user specified mask will be considered.  These q-values are then mapped
to z-scores for compatibility with the AFNI statistical threshold display: 
                                                                           
               stat ==> p-value ==> FDR q-value ==> FDR z-score            
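The p-value-to-q-value step above is the Benjamini-Hochberg construction: each q is the smallest bound on E(FDR) that would still reject that voxel. A sketch of the c(N)=1 case follows (3dFDR additionally maps q back to z-scores, which is omitted here):

```python
def fdr_qvalues(pvals):
    """Benjamini-Hochberg q-values: q for the voxel with rank r among
    N sorted p-values is min over ranks j >= r of (N * p_(j) / j)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    q = [0.0] * n
    prev = 1.0
    for rank in range(n, 0, -1):          # walk from largest p downward
        i = order[rank - 1]
        prev = min(prev, n * pvals[i] / rank)
        q[i] = prev
    return q

print([round(x, 3) for x in fdr_qvalues([0.01, 0.02, 0.04, 0.5])])
# [0.04, 0.04, 0.053, 0.5]
```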
                                                                           
Usage:                                                                     
  3dFDR                                                                    
    -input fname       fname = filename of input 3d functional dataset     
      OR                                                                   
    -input1D dname     dname = .1D file containing column of p-values      
                                                                           
    -mask_file mname   Use mask values from file mname.                    
                       Note: If file mname contains more than 1 sub-brick, 
                       the mask sub-brick must be specified!               
                       Default: No mask                                    
                                                                           
    -mask_thr m        Only voxels whose corresponding mask value is       
                       greater than or equal to m in absolute value will   
                       be considered.  Default: m=1                        
                                                                           
                       Constant c(N) depends on assumption about p-values: 
    -cind              c(N) = 1   p-values are independent across N voxels 
    -cdep              c(N) = sum(1/i), i=1,...,N   any joint distribution 
                       Default:  c(N) = 1                                  
                                                                           
    -quiet             Flag to suppress screen output                      
                                                                           
    -list              Write sorted list of voxel q-values to screen       
                                                                           
    -prefix pname      Use 'pname' for the output dataset prefix name.     
      OR                                                                   
    -output pname                                                          
                                                                           


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dfim

Program: 3dfim 
Author:  R. W. Cox and B. D. Ward 
Initial Release:  06 Sept 1996 
Latest Revision:  15 August 2001 

 Program:   3dfim 

Purpose:   Calculate functional image from 3d+time data file. 
Usage:     3dfim  [-im1 num]  -input fname  -prefix name 
              -ideal fname  [-ideal fname] [-ort fname] 
 
 options are:
 -im1 num        num   = index of first image to be used in time series 
                         correlation; default is 1  
  
 -input fname    fname = filename of 3d + time data file for input
  
 -prefix name    name  = prefix of filename for saving functional data
  
 -ideal fname    fname = filename of a time series to which the image data
                         is to be correlated. 
  
 -percent p      Calculate percentage change due to the ideal time series 
                 p     = maximum allowed percentage change from baseline 
                         Note: values greater than p are set equal to p. 
  
 -ort fname      fname = filename of a time series to which the image data
                         is to be orthogonalized 
  
             N.B.: It is possible to specify more than
             one ideal time series file. Each one is separately correlated
             with the image time series and the one most highly correlated
             is selected for each pixel.  Multiple ideals are specified
             using more than one '-ideal fname' option, or by using the
             form '-ideal [ fname1 fname2 ... ]' -- this latter method
             allows the use of wildcarded ideal filenames.
             The '[' character that indicates the start of a group of
             ideals can actually be any ONE of these: [{/%
             and the ']' that ends the group can be:  ]}/%
  
             [Format of ideal time series files:
             ASCII; one number per line;
             Same number of lines as images in the time series;
             Value over 33333 --> don't use this image in the analysis]
  
             N.B.: It is also possible to specify more than
             one ort time series file.  The image time series is  
             orthogonalized to each ort time series.  Multiple orts are 
             specified by using more than one '-ort fname' option, 
             or by using the form '-ort [ fname1 fname2 ... ]'.  This 
             latter method allows the use of wildcarded ort filenames.
             The '[' character that indicates the start of a group of
             ideals can actually be any ONE of these: [{/%
             and the ']' that ends the group can be:  ]}/%
  
             [Format of ort time series files:
             ASCII; one number per line;
             At least same number of lines as images in the time series]
  
  
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dfim+

Program: 3dfim+ 
Author:  B. Douglas Ward 
Initial Release:  28 April 2000 
Latest Revision:  29 October 2004 

Program to calculate the cross-correlation of an ideal reference waveform  
with the measured FMRI time series for each voxel.                         
                                                                       
Usage:                                                                 
3dfim+                                                                 
-input fname       fname = filename of input 3d+time dataset           
[-input1D dname]   dname = filename of single (fMRI) .1D time series   
[-mask mname]      mname = filename of 3d mask dataset                 
[-nfirst fnum]     fnum = number of first dataset image to use in      
                     the cross-correlation procedure. (default = 0)    
[-nlast  lnum]     lnum = number of last dataset image to use in       
                     the cross-correlation procedure. (default = last) 
[-polort pnum]     pnum = degree of polynomial corresponding to the    
                     baseline model  (pnum = 0, 1, etc.)               
                     (default: pnum = 1)                               
[-fim_thr p]       p = fim internal mask threshold value (0 <= p <= 1) 
                     (default: p = 0.0999)                             
[-cdisp cval]      Write (to screen) results for those voxels          
                     whose correlation stat. > cval  (0 <= cval <= 1)  
                     (default: disabled)                               
[-ort_file sname]  sname = input ort time series file name             
-ideal_file rname  rname = input ideal time series file name           
                                                                       
            Note:  The -ort_file and -ideal_file commands may be used  
                   more than once.                                     
            Note:  If files sname or rname contain multiple columns,   
                   then ALL columns will be used as ort or ideal       
                   time series.  However, individual columns or        
                   a subset of columns may be selected using a file    
                   name specification like 'fred.1D[0,3,5]', which     
                   indicates that only columns #0, #3, and #5 will     
                   be used for input.                                  

[-out param]       Flag to output the specified parameter, where       
                   the string 'param' may be any one of the following: 
                                                                       
    Fit Coef       L.S. fit coefficient for Best Ideal                
  Best Index       Index number for Best Ideal                        
    % Change       P-P amplitude of signal response / Baseline        
    Baseline       Average of baseline model response                 
 Correlation       Best Ideal product-moment correlation coefficient  
  % From Ave       P-P amplitude of signal response / Average         
     Average       Baseline + average of signal response              
  % From Top       P-P amplitude of signal response / Topline         
     Topline       Baseline + P-P amplitude of signal response        
 Sigma Resid       Std. Dev. of residuals from best fit               
         All       This specifies all of the above parameters       
 Spearman CC       Spearman correlation coefficient                   
 Quadrant CC       Quadrant correlation coefficient                   
                                                                       
            Note:  Multiple '-out' commands may be used.               
            Note:  If a parameter name contains embedded spaces, the   
                   entire parameter name must be enclosed by quotes,   
                   e.g.,  -out 'Fit Coef'                                   
                                                                       
[-bucket bprefix]  Create one AFNI 'bucket' dataset containing the     
                   parameters of interest, as specified by the above   
                   '-out' commands.                                    
                   The output 'bucket' dataset is written to a file    
                   with the prefix name bprefix.                       
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dFourier
3dFourier 
(c) 1999 Medical College of Wisconsin
by T. Ross and K. Heimerl
Version 0.8 last modified 8-17-99

Usage: 3dFourier [options] dataset

The parameters and options are:
	dataset		an afni compatible 3d+time dataset to be operated upon
	-prefix name	output name for new 3d+time dataset [default = fourier]
	-lowpass f 	low pass filter with a cutoff of f Hz
	-highpass f	high pass filter with a cutoff of f Hz
	-ignore n	ignore the first n images [default = 1]
	-retrend	Any mean and linear trend are removed before filtering.
			This will restore the trend after filtering.

Note that by combining the lowpass and highpass options, one can construct
bandpass and notch filters
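Which frequencies a bandpass built from -lowpass and -highpass would keep can be worked out from the DFT bin spacing, f = k/(N*TR). The sketch below only computes the surviving bin indices; the TR and cutoffs are made-up values and this is not 3dFourier's actual filter code:

```python
def kept_bins(n, tr, highpass=None, lowpass=None):
    """Non-negative DFT bin indices k whose frequency f = k/(n*TR)
    satisfies highpass <= f <= lowpass (None disables that side)."""
    kept = []
    for k in range(n // 2 + 1):
        f = k / (n * tr)
        if highpass is not None and f < highpass:
            continue
        if lowpass is not None and f > lowpass:
            continue
        kept.append(k)
    return kept

# 64 images at TR = 2 s: frequency resolution is 1/128 Hz
print(kept_bins(64, 2.0, highpass=0.01, lowpass=0.1))   # bins 2..12
```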

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dfractionize
Usage: 3dfractionize [options]

* For each voxel in the output dataset, computes the fraction
    of it that is occupied by nonzero voxels from the input.
* The fraction is stored as a short in the range 0..10000,
    indicating fractions running from 0..1.
* The template dataset is used only to define the output grid;
    its brick(s) will not be read into memory.  (The same is
    true of the warp dataset, if it is used.)
* The actual values stored in the input dataset are irrelevant,
    except in that they are zero or nonzero (UNLESS the -preserve
    option is used).

The purpose of this program is to allow the resampling of a mask
dataset (the input) from a fine grid to a coarse grid (defined by
the template).  When you are using the output, you will probably
want to threshold the mask so that voxels with a tiny occupancy
fraction aren't used.  This can be done in 3dmaskave, by using
3dcalc, or with the '-clip' option below.

Options are [the first 2 are 'mandatory options']:
  -template tset  = Use dataset 'tset' as a template for the output.
                      The output dataset will be on the same grid as
                      this dataset.

  -input iset     = Use dataset 'iset' for the input.
                      Only the sub-brick #0 of the input is used.
                      You can use the sub-brick selection technique
                      described in '3dcalc -help' to choose the
                      desired sub-brick from a multi-brick dataset.

  -prefix ppp     = Use 'ppp' for the prefix of the output.
                      [default prefix = 'fractionize']

  -clip fff       = Clip off voxels that are less than 'fff' occupied.
                      'fff' can be a number between 0.0 and 1.0, meaning
                      the fraction occupied, can be a number between 1.0
                      and 100.0, meaning the percent occupied, or can be
                      a number between 100.0 and 10000.0, meaning the
                      direct output value to use as a clip level.
                   ** Some sort of clipping is desirable; otherwise,
                        an output voxel that is barely overlapped by a
                        single nonzero input voxel will enter the mask.
                      [default clip = 0.0]

  -warp wset      = If this option is used, 'wset' is a dataset that
                      provides a transformation (warp) from +orig
                      coordinates to the coordinates of 'iset'.
                      In this case, the output dataset will be in
                      +orig coordinates rather than the coordinates
                      of 'iset'.  With this option:
                   ** 'tset' must be in +orig coordinates
                   ** 'iset' must be in +acpc or +tlrc coordinates
                   ** 'wset' must be in the same coordinates as 'iset'

  -preserve       = When this option is used, the program will copy
     or               the nonzero values of input voxels to the output
  -vote               dataset, rather than create a fractional mask.
                      Since each output voxel might be overlapped
                      by more than one input voxel, the program 'votes'
                      for which input value to preserve.  For example,
                      if input voxels with value=1 occupy 10% of an
                      output voxel, and inputs with value=2 occupy 20%
                      of the same voxel, then the output value in that
                      voxel will be set to 2 (provided that 20% is >=
                      to the clip fraction).
                   ** Voting can only be done on short-valued datasets,
                        or on byte-valued datasets.
                   ** Voting is a relatively time-consuming option,
                        since a separate loop is made through the
                        input dataset for each distinct value found.
                   ** Combining this with the -warp option does NOT
                        make a general +tlrc to +orig transformer!
                        This is because for any value to survive the
                        vote, its fraction in the output voxel must be
                        >= clip fraction, regardless of other values
                        present in the output voxel.

Example usage:
 3dfractionize -template a+orig -input b+tlrc -warp anat+tlrc -clip 0.2
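The three-way interpretation of the -clip argument (fraction, percent, or direct output value) can be written as a small function. This is a sketch of the documented rule only, not 3dfractionize's source:

```python
def clip_to_output_value(fff):
    """Interpret a -clip argument, returning the threshold in the
    output dataset's 0..10000 occupancy-fraction units."""
    if 0.0 <= fff <= 1.0:          # fraction occupied
        return fff * 10000.0
    if 1.0 < fff <= 100.0:         # percent occupied
        return fff * 100.0
    if 100.0 < fff <= 10000.0:     # direct output value
        return fff
    raise ValueError("clip value out of range")

print(clip_to_output_value(0.2))     # 20% as a fraction -> 2000.0
print(clip_to_output_value(20.0))    # 20% as a percent  -> 2000.0
print(clip_to_output_value(2000.0))  # direct value      -> 2000.0
```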

This program will also work in going from a coarse grid to a fine grid,
but it isn't clear that this capability has any purpose.
-- RWCox - February 1999
         - October 1999: added -warp and -preserve options
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dFriedman

Program: 3dFriedman 
Author:  B. Douglas Ward 
Initial Release:  23 July 1997 
Latest Revision:  02 December 2002 

This program performs the nonparametric Friedman test for               
randomized complete block design experiments.                     

Usage:                                                              
3dFriedman                                                          
-levels s                      s = number of treatments             
-dset 1 filename               data set for treatment #1            
 . . .                           . . .                              
-dset 1 filename               data set for treatment #1            
 . . .                           . . .                              
-dset s filename               data set for treatment #s            
 . . .                           . . .                              
-dset s filename               data set for treatment #s            
                                                                    
[-workmem mega]                number of megabytes of RAM to use    
                                 for statistical workspace          
[-voxel num]                   screen output for voxel # num        
-out prefixname                Friedman statistics are written      
                                 to file prefixname                 


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 'fred+orig[3]'                                          
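The statistic this program computes per voxel is the classical Friedman chi-square, built by ranking the s treatments within each block. A pure-Python sketch (no tie correction, and not 3dFriedman's own code):

```python
def friedman_stat(blocks):
    """Friedman statistic for a randomized complete block design.
    `blocks` is a list of per-block measurement lists, one value per
    treatment; ties are broken arbitrarily in this sketch."""
    n = len(blocks)            # number of blocks
    k = len(blocks[0])         # number of treatments
    rank_sums = [0.0] * k
    for block in blocks:
        order = sorted(range(k), key=lambda j: block[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    s = sum(r * r for r in rank_sums)
    return 12.0 * s / (n * k * (k + 1)) - 3.0 * n * (k + 1)

# Perfectly consistent ordering across 3 blocks of 3 treatments
print(friedman_stat([[1, 2, 3]] * 3))   # 6.0, the maximum for n=3, k=3
```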

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dFWHM

Program: 3dFWHM 
Author:  B. Douglas Ward 
Initial Release:  20 February 1997 
Latest Revision:  08 March 2004 

This program estimates the Full Width at Half Maximum (FWHM).  

Usage: 
3dFWHM 
-dset file         file = name of input AFNI 3d dataset  
[-mask mname]      mname = filename of 3d mask dataset   
[-quiet]           suppress screen output                
[-out file]        file = name of output file            
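For a Gaussian smoothing kernel, the FWHM relates to the standard deviation by FWHM = sigma * 2*sqrt(2*ln 2) ~= 2.3548 * sigma. A small converter for that background relation (not 3dFWHM's estimator, which works from the data's spatial autocorrelation):

```python
import math

# FWHM = sigma * 2*sqrt(2*ln 2) for a Gaussian kernel
FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))

def sigma_to_fwhm(sigma):
    return sigma * FWHM_PER_SIGMA

def fwhm_to_sigma(fwhm):
    return fwhm / FWHM_PER_SIGMA

print(round(sigma_to_fwhm(1.0), 4))   # 2.3548
```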

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dhistog
Compute histogram of 3D Dataset
Usage: 3dhistog [editing options] [histogram options] dataset

The editing options are the same as in 3dmerge
 (i.e., the options starting with '-1').

The histogram options are:
  -nbin #   Means to use '#' bins [default=100]
            Special Case: for short or byte dataset bricks,
                          set '#' to zero to have the number
                          of bins set by the brick range.
  -dind i   Means to take data from sub-brick #i, rather than #0
  -omit x   Means to omit the value 'x' from the count;
              -omit can be used more than once to skip multiple values.
  -mask m   Means to use dataset 'm' to determine which voxels to use
  -doall    Means to include all sub-bricks in the calculation;
              otherwise, only sub-brick #0 (or that from -dind) is used.
  -notit    Means to leave the title line off the output.
  -log10    Output log10() of the counts, instead of the count values.
  -min x    Means specify minimum of histogram.
  -max x    Means specify maximum of histogram.

The histogram is written to stdout.  Use redirection '>' if you
want to save it to a file.  The format is a title line, then
three numbers printed per line:
  bottom-of-interval  count-in-interval  cumulative-count

-- by RW Cox (V Roopchansingh added the -mask option)
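The three-column output format described above (bottom-of-interval, count, cumulative count) can be reproduced with a toy histogram. The binning logic here is illustrative, not 3dhistog's:

```python
def histog(values, nbin, lo, hi):
    """Return (bottom-of-interval, count, cumulative-count) rows for
    `nbin` equal-width bins spanning [lo, hi]."""
    width = (hi - lo) / nbin
    counts = [0] * nbin
    for v in values:
        if lo <= v <= hi:
            i = min(int((v - lo) / width), nbin - 1)  # top edge -> last bin
            counts[i] += 1
    cum = 0
    rows = []
    for i, c in enumerate(counts):
        cum += c
        rows.append((lo + i * width, c, cum))
    return rows

for bottom, count, cumulative in histog([1, 2, 2, 3, 9], nbin=5, lo=0, hi=10):
    print(f"{bottom:8.3f} {count:8d} {cumulative:8d}")
```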

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100.200>'                                 {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dinfo
Prints out sort-of-useful information from a 3D dataset's header
Usage: 3dinfo [-verb OR -short] dataset [dataset ...]
  -verb means to print out lots of stuff
  -short means to print out less stuff
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dIntracranial

Program: 3dIntracranial 
Author:  B. D. Ward 
Initial Release:  04 June 1999 
Latest Revision:  21 July 2005 

3dIntracranial - performs automatic segmentation of intracranial region.
                                                                        
   This program will strip the scalp and other non-brain tissue from a  
   high-resolution T1 weighted anatomical dataset.                      
                                                                        
----------------------------------------------------------------------- 
                                                                        
Usage:                                                                  
-----                                                                   
                                                                        
3dIntracranial                                                          
   -anat filename   => Filename of anat dataset to be segmented         
                                                                        
   [-min_val   a]   => Minimum voxel intensity limit                    
                         Default: Internal PDF estimate for lower bound 
                                                                        
   [-max_val   b]   => Maximum voxel intensity limit                    
                         Default: Internal PDF estimate for upper bound 
                                                                        
   [-min_conn  m]   => Minimum voxel connectivity to enter              
                         Default: m=4                                   
                                                                        
   [-max_conn  n]   => Maximum voxel connectivity to leave              
                         Default: n=2                                   
                                                                        
   [-nosmooth]      => Suppress spatial smoothing of segmentation mask  
                                                                        
   [-mask]          => Generate functional image mask (complement)      
                         Default: Generate anatomical image            
                                                                        
   [-quiet]         => Suppress output to screen                        
                                                                        
   -prefix pname    => Prefix name for file to contain segmented image  
                                                                        
   ** NOTE **: The newer program 3dSkullStrip will probably give        
               better segmentation results than 3dIntracranial!         
----------------------------------------------------------------------- 
                                                                        
Examples:                                                               
--------                                                                
                                                                        
   3dIntracranial -anat elvis+orig -prefix elvis_strip                 
                                                                        
   3dIntracranial -min_val 30 -max_val 350 -anat elvis+orig -prefix strip
                                                                        
   3dIntracranial -nosmooth -quiet -anat elvis+orig -prefix elvis_strip 
                                                                        
----------------------------------------------------------------------- 
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dKruskalWallis

Program: 3dKruskalWallis 
Author:  B. Douglas Ward 
Initial Release:  23 July 1997 
Latest Revision:  02 Dec  2002 

This program performs the nonparametric Kruskal-Wallis test for      
comparison of multiple treatments.                                

Usage:                                                              
3dKruskalWallis                                                     
-levels s                      s = number of treatments             
-dset 1 filename               data set for treatment #1            
 . . .                           . . .                              
-dset 1 filename               data set for treatment #1            
 . . .                           . . .                              
-dset s filename               data set for treatment #s            
 . . .                           . . .                              
-dset s filename               data set for treatment #s            
                                                                    
[-workmem mega]                number of megabytes of RAM to use    
                                 for statistical workspace          
[-voxel num]                   screen output for voxel # num        
-out prefixname                Kruskal-Wallis statistics are written
                                 to file prefixname                 


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 'fred+orig[3]'                                          

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
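For reference, the Kruskal-Wallis H statistic that the program computes per voxel can be sketched in a few lines of Python. This toy version ranks a handful of numbers and omits the tie correction a production implementation would apply:

```python
# Kruskal-Wallis H: rank all observations pooled together, then
#   H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
# where R_i is the rank sum and n_i the size of treatment group i.
# (Toy sketch: no tie correction.)
def kruskal_wallis_h(groups):
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    term = sum(r * r / len(g) for r, g in zip(rank_sums, groups))
    return 12.0 / (n_total * (n_total + 1)) * term - 3 * (n_total + 1)

print(round(kruskal_wallis_h([[1, 2], [3, 4]]), 6))   # 2.4
```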
3dLocalstat
Usage: 3dLocalstat [options] dataset

This program computes statistics at each voxel, based on a
local neighborhood of that voxel.
 - The neighborhood is defined by the '-nbhd' option.
 - Statistics to be calculated are defined by the '-stat' option(s).

OPTIONS
-------
 -nbhd 'nnn' = The string 'nnn' defines the region around each
               voxel that will be extracted for the statistics
               calculation.  The format of the 'nnn' string is one of:
               * 'SPHERE(r)' where 'r' is the radius in mm;
                 the neighborhood is all voxels whose center-to-
                 center distance is less than or equal to 'r'.
                 ** A negative value for 'r' means that the region
                    is calculated using voxel indexes rather than
                    voxel dimensions; that is, the neighborhood
                    region is a "sphere" in voxel indexes of
                    "radius" abs(r).
               * 'RECT(a,b,c)' is a rectangular block which
                 proceeds plus-or-minus 'a' mm in the x-direction,
                 'b' mm in the y-direction, and 'c' mm in the
                 z-direction.  The correspondence between the
                 dataset xyz axes and the actual spatial orientation
                 can be determined by using program 3dinfo.
                 ** A negative value for 'a' means that the region
                    extends plus-and-minus abs(a) voxels in the
                    x-direction, rather than plus-and-minus a mm.
                     Mutatis mutandis for negative 'b' and/or 'c'.
               * If no '-nbhd' option is given, the region extracted
                 will just be the voxel and its 6 nearest neighbors.

 -stat sss   = Compute the statistic named 'sss' on the values
               extracted from the region around each voxel:
               * mean   = average of the values
               * stdev  = standard deviation
               * var    = variance (stdev*stdev)
               * cvar   = coefficient of variation = stdev/fabs(mean)
               * median = median of the values
               * MAD    = median absolute deviation
               * min    = minimum
               * max    = maximum
               * absmax = maximum of the absolute values
               * num    = number of the values in the region:
                          with the use of -mask or -automask,
                          the size of the region around any given
                          voxel will vary; this option lets you
                          map that size.  It may be useful if you
                          plan to compute a t-statistic (say) from
                          the mean and stdev outputs.
               * ALL    = all of the above, in that order
               More than one '-stat' option can be used.

 -mask mset  = Read in dataset 'mset' and use the nonzero voxels
               therein as a mask.  Voxels NOT in the mask will
               not be used in the neighborhood of any voxel. Also,
               a voxel NOT in the mask will have its statistic(s)
               computed as zero (0).
 -automask   = Compute the mask as in program 3dAutomask.
               -mask and -automask are mutually exclusive: that is,
               you can only specify one mask.

 -prefix ppp = Use string 'ppp' as the prefix for the output dataset.
               The output dataset is always stored as floats.

Author: RWCox - August 2005.  Instigator: ZSSaad.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
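Each '-stat' reduction above is a simple function of the neighborhood's values; a Python sketch over one hypothetical neighborhood (not 3dLocalstat's code):

```python
import statistics

# Compute the '-stat' reductions named in the help for one
# neighborhood's values: mean, stdev, var, cvar, median, MAD,
# min, max, absmax, num.  (Hypothetical helper for illustration.)
def local_stats(vals):
    mean = statistics.mean(vals)
    stdev = statistics.stdev(vals)               # sample standard deviation
    med = statistics.median(vals)
    return {
        "mean":   mean,
        "stdev":  stdev,
        "var":    stdev * stdev,
        "cvar":   stdev / abs(mean),             # coefficient of variation
        "median": med,
        "MAD":    statistics.median([abs(v - med) for v in vals]),
        "min":    min(vals),
        "max":    max(vals),
        "absmax": max(abs(v) for v in vals),
        "num":    len(vals),
    }

print(local_stats([2.0, -1.0, 4.0, 3.0, 2.0]))
```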
3dLRflip
Usage: 3dLRflip [-prefix ppp] dataset
Flips the Left-to-Right rows of a dataset.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dMannWhitney

Program: 3dMannWhitney 
Author:  B. Douglas Ward 
Initial Release:  23 July 1997 
Latest Revision:  02 Dec  2002 

This program performs the nonparametric Mann-Whitney two-sample test. 

Usage: 
3dMannWhitney 
-dset 1 filename               data set for X observations          
 . . .                           . . .                              
-dset 1 filename               data set for X observations          
-dset 2 filename               data set for Y observations          
 . . .                           . . .                              
-dset 2 filename               data set for Y observations          
                                                                    
[-workmem mega]                number of megabytes of RAM to use    
                                 for statistical workspace          
[-voxel num]                   screen output for voxel # num        
-out prefixname                estimated population delta and       
                                 Wilcoxon-Mann-Whitney statistics   
                                 written to file prefixname         


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 'fred+orig[3]'                                          

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
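The Wilcoxon-Mann-Whitney statistic written by -out can be sketched from its definition, the count of (x, y) pairs with x > y, scoring ties as 1/2 (a toy illustration, not 3dMannWhitney's code):

```python
# U = number of (x, y) pairs with x > y, counting ties as 1/2.
# (Toy definition-level sketch, not the program's implementation.)
def mann_whitney_u(xs, ys):
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

print(mann_whitney_u([1, 2, 3], [4, 5]))   # 0.0: every x is below every y
print(mann_whitney_u([4, 5], [1, 2, 3]))   # 6.0: every x is above every y
```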
3dmaskave
Usage: 3dmaskave [options] dataset
Computes average of all voxels in the input dataset
which satisfy the criterion in the options list.
If no options are given, then all voxels are included.
Options:
  -mask mset   Means to use the dataset 'mset' as a mask:
                 Only voxels with nonzero values in 'mset'
                 will be averaged from 'dataset'.  Note
                 that the mask dataset and the input dataset
                 must have the same number of voxels.
               SPECIAL CASE: If 'mset' is the string 'SELF',
                             then the input dataset will be
                             used to mask itself.  That is,
                             only nonzero voxels from the
                             #miv sub-brick will be used.
  -mindex miv  Means to use sub-brick #'miv' from the mask
                 dataset.  If not given, miv=0.
  -mrange a b  Means to further restrict the voxels from
                 'mset' so that only those mask values
                 between 'a' and 'b' (inclusive) will
                 be used.  If this option is not given,
                 all nonzero values from 'mset' are used.
                 Note that if a voxel is zero in 'mset', then
                 it won't be included, even if a < 0 < b.

  -dindex div  Means to use sub-brick #'div' from the dataset.
                 If not given, all sub-bricks will be processed.
  -drange a b  Means to only include voxels from the dataset whose
                 values fall in the range 'a' to 'b' (inclusive).
                 Otherwise, all voxel values are included.

  -slices p q  Means to include only voxels from the dataset
                 whose slice numbers are in the range 'p' to 'q'
                 (inclusive).  Slice numbers range from 0 to
                 NZ-1, where NZ can be determined from the output
                 of program 3dinfo.  The default is to include
                 data from all slices.
                 [There is no provision for geometrical voxel]
                 [selection except in the slice (z) direction]

  -sigma       Means to compute the standard deviation as well
                 as the mean.
  -median      Means to compute the median instead of the mean.
  -max         Means to compute the max instead of the mean.
  -min         Means to compute the min instead of the mean.
                 (-sigma is ignored with -median, -max, or -min)
  -dump        Means to print out all the voxel values that
                 go into the average.
  -udump       Means to print out all the voxel values that
                 go into the average, UNSCALED by any internal
                 factors.
                 N.B.: the scale factors for a sub-brick
                       can be found using program 3dinfo.
  -indump      Means to print out the voxel indexes (i,j,k) for
                 each dumped voxel.  Has no effect if -dump
                 or -udump is not also used.
                 N.B.: if nx,ny,nz are the number of voxels in
                       each direction, then the array offset
                       in the brick corresponding to (i,j,k)
                       is i+j*nx+k*nx*ny.
 -q     or
 -quiet        Means to print only the minimal results.
               This is useful if you want to create a *.1D file.

The output is printed to stdout (the terminal), and can be
saved to a file using the usual redirection operation '>'.

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
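The mask/mrange selection rule, and the (i,j,k) array-offset convention noted under -indump, can be sketched in Python (toy values, not 3dmaskave's code):

```python
# Average only those dataset voxels whose mask value is nonzero and
# (optionally) inside the -mrange interval [a, b].  Toy sketch.
def mask_average(data, mask, mrange=None):
    a, b = mrange if mrange else (float("-inf"), float("inf"))
    kept = [d for d, m in zip(data, mask) if m != 0 and a <= m <= b]
    return sum(kept) / len(kept)

def offset(i, j, k, nx, ny):
    # AFNI brick array offset for voxel (i, j, k), per the -indump note
    return i + j * nx + k * nx * ny

data = [10, 20, 30, 40]
mask = [0, 1, 2, 3]
print(mask_average(data, mask))                  # 30.0: mean of 20, 30, 40
print(mask_average(data, mask, mrange=(2, 3)))   # 35.0: mean of 30, 40
print(offset(1, 2, 3, nx=4, ny=5))               # 69
```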
3dmaskdump
Usage: 3dmaskdump [options] dataset dataset ...
Writes to an ASCII file values from the input datasets
which satisfy the mask criteria given in the options.
If no options are given, then all voxels are included.
This might result in a GIGANTIC output file.
Options:
  -mask mset   Means to use the dataset 'mset' as a mask:
                 Only voxels with nonzero values in 'mset'
                 will be printed from 'dataset'.  Note
                 that the mask dataset and the input dataset
                 must have the same number of voxels.
  -mrange a b  Means to further restrict the voxels from
                 'mset' so that only those mask values
                 between 'a' and 'b' (inclusive) will
                 be used.  If this option is not given,
                 all nonzero values from 'mset' are used.
                 Note that if a voxel is zero in 'mset', then
                 it won't be included, even if a < 0 < b.
  -index       Means to write out the dataset index values.
  -noijk       Means not to write out the i,j,k values.
  -xyz         Means to write the x,y,z coordinates from
                 the 1st input dataset at the start of each
                 output line.  These coordinates are in
                 the 'RAI' order.
  -o fname     Means to write output to file 'fname'.
                 [default = stdout, which you won't like]

  -cmask 'opts' Means to execute the options enclosed in single
                  quotes as a 3dcalc-like program, and produce
                   a mask from the resulting 3D brick.
       Examples:
        -cmask '-a fred+orig[7] -b zork+orig[3] -expr step(a-b)'
                  produces a mask that is nonzero only where
                  the 7th sub-brick of fred+orig is larger than
                  the 3rd sub-brick of zork+orig.
        -cmask '-a fred+orig -expr 1-bool(k-7)'
                  produces a mask that is nonzero only in the
                  7th slice (k=7); combined with -mask, you
                  could use this to extract just selected voxels
                  from particular slice(s).
       Notes: * You can use both -mask and -cmask in the same
                  run - in this case, only voxels present in
                  both masks will be dumped.
              * Only single sub-brick calculations can be
                  used in the 3dcalc-like calculations -
                  if you input a multi-brick dataset here,
                  without using a sub-brick index, then only
                  its 0th sub-brick will be used.
              * Do not use quotes inside the 'opts' string!

  -xbox x y z   Means to put a 'mask' down at the dataset (not DICOM)
                  coordinates of 'x y z' mm.  By default, this box is
                  1 voxel wide in each direction.  You can specify
                  instead a range of coordinates using a colon ':'
                  after the coordinates; for example:
                    -xbox 22:27 31:33 44
                  means a box from (x,y,z)=(22,31,44) to (27,33,44).

  -dbox x y z   Means the same as -xbox, but the coordinates are in
                  DICOM order (+x=Left, +y=Posterior, +z=Superior).
                  These coordinates correspond to those you'd enter
                  into the 'Jump to (xyz)' control in AFNI, and to
                  those output by default from 3dclust.
  -nbox x y z   Means the same as -xbox, but the coordinates are in
                  'neuroscience' order (+x=Right, +y=Anterior, +z=Superior)

  -ibox i j k   Means to put a 'mask' down at the voxel indexes
                  given by 'i j k'.  By default, this picks out
                  just 1 voxel.  Again, you can use a ':' to specify
                  a range (now in voxels) of locations.
       Notes: * Boxes are cumulative; that is, if you specify more
                  than 1 box, you'll get more than one region.
              * If a -mask and/or -cmask option is used, then
                  the intersection of the boxes with these masks
                  determines which voxels are output; that is,
                  a voxel must be inside some box AND inside the
                  mask in order to be selected for output.
              * If boxes select more than 1 voxel, the output lines
                  are NOT necessarily in the order of the options on
                  the command line.
              * Coordinates (for -xbox, -dbox, and -nbox) are relative
                  to the first dataset on the command line.

  -quiet        Means not to print progress messages to stderr.

Inputs after the last option are datasets whose values you
want to be dumped out.  These datasets (and the mask) can
use the sub-brick selection mechanism (described in the
output of '3dcalc -help') to choose which values you get.

Each selected voxel gets one line of output:
  i j k val val val ....
where (i,j,k) = 3D index of voxel in the dataset arrays,
and val = the actual voxel value.  Note that if you want
the mask value to be output, you have to include that
dataset in the dataset input list again, after you use
it in the '-mask' option.

N.B.: This program doesn't work with complex-valued datasets!

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
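The first -cmask example, '-expr step(a-b)', is just a voxelwise comparison; a Python sketch with two hypothetical sub-bricks:

```python
# step(x) is 1 for x > 0, else 0 -- so '-expr step(a-b)' marks the
# voxels where sub-brick a is larger than sub-brick b.
def step(x):
    return 1 if x > 0 else 0

a = [5.0, 1.0, 3.0, 3.0]      # e.g. fred+orig[7] (hypothetical values)
b = [2.0, 4.0, 3.0, 0.5]      # e.g. zork+orig[3] (hypothetical values)
cmask = [step(x - y) for x, y in zip(a, b)]
print(cmask)                  # [1, 0, 0, 1]
```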
3dMax
Usage: 3dMax [options] dataset
Compute maximum and/or minimum voxel values of an input dataset

The output is a number printed to the console.  The input dataset
may use a sub-brick selection list, as in program 3dcalc.
Options :
  -quick = get the information from the header only (default)
  -slow = read the whole dataset to find the min and max values
  -min = print the minimum value in dataset
  -max = print the maximum value in dataset (default)
  -mean = print the mean value in dataset (implies slow)
  -count = print the number of voxels included (implies slow)
  -positive = include only positive voxel values (implies slow)
  -negative = include only negative voxel values (implies slow)
  -zero = include only zero voxel values (implies slow)
  -non-positive = include only voxel values 0 or negative (implies slow)
  -non-negative = include only voxel values 0 or greater (implies slow)
  -non-zero = include only voxel values not equal to 0 (implies slow)
  -mask dset = use dset as mask to include/exclude voxels
  -automask = automatically compute mask for dataset
    Cannot be combined with -mask
  -help = print this help screen

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dMean
Usage: 3dMean [options] dset dset ...
Takes the voxel-by-voxel mean of all input datasets;
the main reason is to be faster than 3dcalc.

Options [see 3dcalc -help for more details on these]:
  -verbose    = Print out some information along the way.
  -prefix ppp = Sets the prefix of the output dataset.
  -datum ddd  = Sets the datum of the output dataset.
  -fscale     = Force scaling of the output to the maximum integer range.
  -gscale     = Same as '-fscale', but also forces each output sub-brick
                  to get the same scaling factor.
  -nscale     = Don't do any scaling on output to byte or short datasets.

  -sd *OR*    = Calculate the standard deviation (square root of the
  -stdev         variance, using n-1 in the denominator) instead of
                 the mean (cannot be used with -sqr or -sum).

  -sqr        = Average the squares, instead of the values.
  -sum        = Just take the sum (don't divide by number of datasets).

N.B.: All input datasets must have the same number of voxels along
       each axis (x,y,z,t).
    * At least 2 input datasets are required.
    * Dataset sub-brick selectors [] are allowed.
    * The output dataset origin, time steps, etc., are taken from the
       first input dataset.
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dMedianFilter
Usage: 3dMedianFilter [options] dataset
Computes the median in a spherical neighborhood around each point in the
input to produce the output.

Options:
  -irad x    = Radius in voxels of spherical regions
  -verb      = Be verbose during run
  -prefix pp = Use 'pp' for prefix of output dataset
  -automask  = Create a mask (a la 3dAutomask)

Output dataset is always stored in float format.  If the input
dataset has more than 1 sub-brick, only sub-brick #0 is processed.

-- Feb 2005 - RWCox
This page auto-generated on Thu Aug 25 16:49:36 EDT 2005
3dmerge

Program 3dmerge 
Last revision: 02 Nov 2001 

Edit and/or merge 3D datasets
Usage: 3dmerge [options] datasets ...
where the options are:
EDITING OPTIONS APPLIED TO EACH INPUT DATASET:
  -1thtoin         = Copy threshold data over intensity data.
                       This is only valid for datasets with some
                       thresholding statistic attached.  All
                       subsequent operations apply to this
                       substituted data.
  -2thtoin         = The same as -1thtoin, but do NOT scale the
                       threshold values from shorts to floats when
                       processing.  This option is only provided
                       for compatibility with the earlier versions
                       of the AFNI package '3d*' programs.
  -1noneg          = Zero out voxels with negative intensities
  -1abs            = Take absolute values of intensities
  -1clip val       = Clip intensities in range (-val,val) to zero
  -2clip v1 v2     = Clip intensities in range (v1,v2) to zero
  -1uclip val      = These options are like the above, but do not apply
  -2uclip v1 v2        any automatic scaling factor that may be attached
                       to the data.  These are for use only in special
                       circumstances.  (The 'u' means 'unscaled'.  Program
                       '3dinfo' can be used to find the scaling factors.)
               N.B.: Only one of these 'clip' options can be used; you cannot
                        combine them to perform multiple clippings.
  -1thresh thr     = Use the threshold data to censor the intensities
                       (only valid for 'fith', 'fico', or 'fitt' datasets).
               N.B.: The value 'thr' is floating point, in the range
                           0.0 < thr < 1.0  for 'fith' and 'fico' datasets,
                       and 0.0 < thr < 32.7 for 'fitt' datasets.
  -1blur_sigma bmm = Gaussian blur with sigma = bmm (in mm)
  -1blur_rms bmm   = Gaussian blur with rms deviation = bmm
  -1blur_fwhm bmm  = Gaussian blur with FWHM = bmm
  -t1blur_sigma bmm= Gaussian blur of threshold with sigma = bmm(in mm)
  -t1blur_rms bmm  = Gaussian blur of threshold with rms deviation = bmm
  -t1blur_fwhm bmm = Gaussian blur of threshold with FWHM = bmm
  -1zvol x1 x2 y1 y2 z1 z2
                   = Zero out entries inside the 3D volume defined
                       by x1 <= x <= x2, y1 <= y <= y2, z1 <= z <= z2 ;
               N.B.: The ranges of x,y,z in a dataset can be found
                       using the '3dinfo' program. Dimensions are in mm.
               N.B.: This option may not work correctly at this time, but
                       I've not figured out why!

 CLUSTERING
  -dxyz=1  = In the cluster editing options, the spatial clusters
             are defined by connectivity in true 3D distance, using
             the voxel dimensions recorded in the dataset header.
             This option forces the cluster editing to behave as if
             all 3 voxel dimensions were set to 1 mm.  In this case,
             'rmm' is then the max number of grid cells apart voxels
             can be to be considered directly connected, and 'vmul'
             is the min number of voxels to keep in the cluster.
       N.B.: The '=1' is part of the option string, and can't be
             replaced by some other value.  If you MUST have some
             other value for voxel dimensions, use program 3drefit.
 
  The following cluster options are mutually exclusive: 
  -1clust rmm vmul = Form clusters with connection distance rmm
                       and clip off data not in clusters of
                       volume at least vmul microliters
  -1clust_mean rmm vmul = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the average
                            intensity of the cluster. 
  -1clust_max rmm vmul  = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the maximum
                            intensity of the cluster. 
  -1clust_amax rmm vmul = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the maximum
                            absolute intensity of the cluster. 
  -1clust_smax rmm vmul = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the maximum
                            signed intensity of the cluster. 
  -1clust_size rmm vmul = Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the size 
                            of the cluster (in multiples of vmul).   
  -1clust_order rmm vmul= Same as -1clust, but all voxel intensities 
                            within a cluster are replaced by the cluster
                            size index (largest cluster=1, next=2, ...).
 * If rmm is given as 0, this means to use the 6 nearest neighbors to
     form clusters of nonzero voxels.
 * If vmul is given as zero, then all cluster sizes will be accepted
     (probably not very useful!).
 * If vmul is given as negative, then abs(vmul) is the minimum number
     of voxels to keep.
 
  The following commands produce erosion and dilation of 3D clusters.  
  These commands assume that one of the -1clust commands has been used.
  The purpose is to avoid forming strange clusters with 2 (or more)    
  main bodies connected by thin 'necks'.  Erosion can cut off the neck.
  Dilation will minimize erosion of the main bodies.                   
  Note:  Manipulation of values inside a cluster (-1clust commands)    
         occurs AFTER the following two commands have been executed.   
  -1erode pv    For each voxel, set the intensity to zero unless pv %  
                of the voxels within radius rmm are nonzero.           
  -1dilate      Restore voxels that were removed by the previous       
                command if there remains a nonzero voxel within rmm.   
 
  The following filter options are mutually exclusive: 
  -1filter_mean rmm   = Set each voxel to the average intensity of the 
                          voxels within a radius of rmm. 
  -1filter_nzmean rmm = Set each voxel to the average intensity of the 
                          non-zero voxels within a radius of rmm. 
  -1filter_max rmm    = Set each voxel to the maximum intensity of the 
                          voxels within a radius of rmm. 
  -1filter_amax rmm   = Set each voxel to the maximum absolute intensity
                          of the voxels within a radius of rmm. 
  -1filter_smax rmm   = Set each voxel to the maximum signed intensity 
                          of the voxels within a radius of rmm. 
  -1filter_aver rmm   = Same idea as '_mean', but implemented using a
                          new code that should be faster.
 
  The following threshold filter options are mutually exclusive: 
  -t1filter_mean rmm   = Set each correlation or threshold voxel to the 
                          average of the voxels within a radius of rmm. 
  -t1filter_nzmean rmm = Set each correlation or threshold voxel to the 
                          average of the non-zero voxels within 
                          a radius of rmm. 
  -t1filter_max rmm    = Set each correlation or threshold voxel to the 
                          maximum of the voxels within a radius of rmm. 
  -t1filter_amax rmm   = Set each correlation or threshold voxel to the 
                          maximum absolute intensity of the voxels 
                          within a radius of rmm. 
  -t1filter_smax rmm   = Set each correlation or threshold voxel to the 
                          maximum signed intensity of the voxels 
                          within a radius of rmm. 
  -t1filter_aver rmm   = Same idea as '_mean', but implemented using a
                          new code that should be faster.
 
  -1mult factor    = Multiply intensities by the given factor
  -1zscore         = If the sub-brick is labeled as a statistic from
                     a known distribution, it will be converted to
                     an equivalent N(0,1) deviate (or 'z score').
                     If the sub-brick is not so labeled, nothing will
                     be done.

The above '-1' options are carried out in the order given above,
regardless of the order in which they are entered on the command line.

N.B.: The 3 '-1blur' options just provide different ways of
      specifying the radius used for the blurring function.
      The relationships among these specifications are
         sigma = 0.57735027 * rms = 0.42466090 * fwhm
      The requisite convolutions are done using FFTs; this is by
      far the slowest operation among the editing options.
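The blur-width relationships quoted in the note can be sketched as a small conversion helper (an illustrative Python snippet, not part of 3dmerge; the constants are taken directly from the help text above):

```python
def blur_widths(sigma):
    """Convert a Gaussian blur 'sigma' to the equivalent rms and fwhm
    radii, using the relationships in the 3dmerge help:
    sigma = 0.57735027 * rms = 0.42466090 * fwhm."""
    rms = sigma / 0.57735027   # rms  = sqrt(3) * sigma
    fwhm = sigma / 0.42466090  # fwhm ~= 2.35482 * sigma
    return rms, fwhm
```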

OTHER OPTIONS:
  -datum type = Coerce the output data to be stored as the given type,
                  which may be byte, short, or float.
          N.B.: Byte data cannot be negative.  If this datum type is chosen,
                  any negative values in the edited and/or merged dataset
                  will be set to zero.
  -keepthr    = When using 3dmerge to edit exactly one dataset of a
                  functional type with a threshold statistic attached,
                  normally the resulting dataset is of the 'fim'
                  (intensity only) type.  This option tells 3dmerge to
                  copy the threshold data (unedited in any way) into
                  the output dataset.
          N.B.: This option is ignored if 3dmerge is being used to
                  combine 2 or more datasets.
          N.B.: The -datum option has no effect on the storage of the
                  threshold data.  Instead use '-thdatum type'.

  -doall      = Apply editing and merging options to ALL sub-bricks 
                  uniformly in a dataset.
          N.B.: All input datasets must have the same number of sub-bricks
                  when using the -doall option. 
          N.B.: The threshold specific options (such as -1thresh, 
                  -keepthr, -tgfisher, etc.) are not compatible with 
                  the -doall command.  Neither are the -1dindex or
                  the -1tindex options.
          N.B.: All labels and statistical parameters for individual 
                  sub-bricks are copied from the first dataset.  It is 
                  the responsibility of the user to verify that these 
                  are appropriate.  Note that sub-brick auxiliary data 
                  can be modified using program 3drefit. 

  -1dindex j  = Uses sub-brick #j as the data source, and uses sub-brick
  -1tindex k  = #k as the threshold source.  With these, you can operate
                  on any given sub-brick of the input dataset(s) to produce
                  as output a 1 brick dataset.  If desired, a collection
                  of 1 brick datasets can later be assembled into a
                  multi-brick bucket dataset using program '3dbucket'
                  or into a 3D+time dataset using program '3dTcat'.
          N.B.: If these options aren't used, j=0 and k=1 are the defaults.

  The following option allows you to specify a mask dataset that
  limits the action of the 'filter' options to voxels that are
  nonzero in the mask:

  -1fmask mset = Read dataset 'mset' (which can include a
                  sub-brick specifier) and use the nonzero
                  voxels as a mask for the filter options.
                  Filtering calculations will not use voxels
                  that are outside the mask.  If an output
                  voxel does not have ANY masked voxels inside
                  the rmm radius, then that output voxel will
                  be set to 0.
         N.B.: * Only the -1filter_* and -t1filter_* options are
                 affected by -1fmask.
               * In the linear averaging filters (_mean, _nzmean,
                 and _expr), voxels not in the mask will not be used
                 or counted in either the numerator or denominator.
                 This can give unexpected results.  If the mask is
                 designed to exclude the volume outside the brain,
                 then voxels exterior to the brain, but within 'rmm',
                 will have a few voxels inside the brain included
                 in the filtering.  Since the sum of weights (the
                 denominator) is only over those few intra-brain
                 voxels, the effect will be to extend the significant
                 part of the result outward by rmm from the surface
                 of the brain.  In contrast, without the mask, the
                 many small-valued voxels outside the brain would
                 be included in the numerator and denominator sums,
                 which would barely change the numerator (since the
                 voxel values are small outside the brain), but would
                 increase the denominator greatly (by including many
                 more weights).  The effect in this case (no -1fmask)
                 is to make the filtering taper off gradually in the
                 rmm-thickness shell around the brain.
               * Thus, if the -1fmask is intended to clip off non-brain
                 data from the filtering, its use should be followed by
                  a masking operation using 3dcalc:
      3dmerge -1filter_aver 12 -1fmask mask+orig -prefix x input+orig
      3dcalc  -a x -b mask+orig -prefix y -expr 'a*step(b)'
      rm -f x+orig.*
                 The desired result is y+orig - filtered using only
                 brain voxels (as defined by mask+orig), and with
                 the output confined to the brain voxels as well.

  The following option allows you to specify an almost arbitrary
  weighting function for 3D linear filtering:

  -1filter_expr rmm expr
     Defines a linear filter about each voxel of radius 'rmm' mm.
     The filter weights are proportional to the expression evaluated
     at each voxel offset in the rmm neighborhood.  You can use only
     these symbols in the expression:
         r = radius from center
         x = dataset x-axis offset from center
         y = dataset y-axis offset from center
         z = dataset z-axis offset from center
         i = x-axis index offset from center
         j = y-axis index offset from center
         k = z-axis index offset from center
     Example:
       -1filter_expr 12.0 'exp(-r*r/36.067)'
     This does a Gaussian filter over a radius of 12 mm.  In this
     example, the FWHM of the filter is 10 mm. [in general, the
     denominator in the exponent would be 0.36067 * FWHM * FWHM.
     This is the only way to get a Gaussian blur combined with the
     -1fmask option.  The radius rmm=12 is chosen where the weights
     get smallish.]  Another example:
       -1filter_expr 20.0 'exp(-(x*x+16*y*y+z*z)/36.067)'
     which is a non-spherical Gaussian filter.
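As a check on the constant above, the weight formula falls to half its peak value at r = FWHM/2 (a hedged Python sketch for illustration, not part of 3dmerge):

```python
import math

def gaussian_weight(r, fwhm):
    # The -1filter_expr weight above: exp(-r*r / (0.36067*FWHM*FWHM)).
    # For FWHM = 10 mm the denominator is 36.067, as in the example.
    return math.exp(-r * r / (0.36067 * fwhm * fwhm))
```

At r = 5 mm with FWHM = 10 mm the weight is 0.5, confirming the half-width-at-half-maximum interpretation.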

  The following option lets you apply a 'Winsor' filter to the data:

  -1filter_winsor rmm nw
     The data values within the radius rmm of each voxel are sorted.
     Suppose there are 'N' voxels in this group.  We index the
     sorted voxels as s[0] <= s[1] <= ... <= s[N-1], and we call the
     value of the central voxel 'v' (which is also in array s[]).
                 If v < s[nw]    , then v is replaced by s[nw]
        otherwise If v > s[N-1-nw], then v is replaced by s[N-1-nw]
       otherwise v is unchanged
     The effect is to increase 'too small' values up to some
     middling range, and to decrease 'too large' values.
     If N is odd, and nw=(N-1)/2, this would be a median filter.
     In practice, I recommend that nw be about N/4; for example,
       -dxyz=1 -1filter_winsor 2.5 19
     is a filter with N=81 that gives nice results.
   N.B.: This option is NOT affected by -1fmask
   N.B.: This option is slow!
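The Winsor rule above can be sketched for a single voxel's neighborhood (an illustrative Python snippet; the actual program applies this over the full 3D grid):

```python
def winsorize(values, center_index, nw):
    """Apply the Winsor rule from the 3dmerge help to one voxel.
    values: intensities of the N voxels within rmm of the voxel;
    values[center_index] is the central voxel's value v."""
    s = sorted(values)             # s[0] <= s[1] <= ... <= s[N-1]
    v = values[center_index]
    if v < s[nw]:
        return s[nw]               # raise 'too small' values
    if v > s[len(s) - 1 - nw]:
        return s[len(s) - 1 - nw]  # lower 'too large' values
    return v                       # otherwise unchanged
```

With N odd and nw = (N-1)/2, both bounds collapse to the median, reproducing the median-filter special case noted above.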

MERGING OPTIONS APPLIED TO FORM THE OUTPUT DATASET:
 [That is, different ways to combine results. The]
 [following '-g' options are mutually exclusive! ]
  -gmean     = Combine datasets by averaging intensities
                 (including zeros) -- this is the default
  -gnzmean   = Combine datasets by averaging intensities
                 (not counting zeros)
  -gmax      = Combine datasets by taking max intensity
                 (e.g., -7 and 2 combine to 2)
  -gamax     = Combine datasets by taking max absolute intensity
                 (e.g., -7 and 2 combine to 7)
  -gsmax     = Combine datasets by taking max signed intensity
                 (e.g., -7 and 2 combine to -7)
  -gcount    = Combine datasets by counting number of 'hits' in
                   each voxel (see below for definition of 'hit')
  -gorder    = Combine datasets in order of input:
                * If a voxel is nonzero in dataset #1, then
                    that value goes into the voxel.
                * If a voxel is zero in dataset #1 but nonzero
                    in dataset #2, then the value from #2 is used.
                * And so forth: the first dataset with a nonzero
                    entry in a given voxel 'wins'
  -gfisher   = Takes the arctanh of each input, averages these,
                  and outputs the tanh of the average.  If the input
                  datum is 'short', then input values are scaled by
                  0.0001 and output values by 10000.  This option
                  is for merging bricks of correlation coefficients.

  -nscale    = If the output datum is shorts, don't do the scaling
                  to the max range [similar to 3dcalc's -nscale option]
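The -gfisher combination above amounts to averaging correlations in Fisher z-space (a minimal Python sketch; the 0.0001/10000 scaling applied to 'short' data is omitted):

```python
import math

def gfisher(corrs):
    # Average correlation coefficients as -gfisher does:
    # arctanh each input, average, then tanh the average.
    zs = [math.atanh(r) for r in corrs]
    return math.tanh(sum(zs) / len(zs))
```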

MERGING OPERATIONS APPLIED TO THE THRESHOLD DATA:
 [That is, different ways to combine the thresholds.  If none of these ]
 [are given, the thresholds will not be merged and the output dataset  ]
 [will not have threshold data attached.  Note that the following '-tg']
 [command line options are mutually exclusive, but are independent of  ]
 [the '-g' options given above for merging the intensity data values.  ]
  -tgfisher  = This option is only applicable if each input dataset
                  is of the 'fico' or 'fith' types -- functional
                  intensity plus correlation or plus threshold.
                  (In the latter case, the threshold values are
                  interpreted as correlation coefficients.)
                  The correlation coefficients are averaged as
                  described by -gfisher above, and the output
                  dataset will be of the fico type if all inputs
                  are fico type; otherwise, the output datasets
                  will be of the fith type.
         N.B.: The difference between the -tgfisher and -gfisher
                  methods is that -tgfisher applies to the threshold
                  data stored with a dataset, while -gfisher
                  applies to the intensity data.  Thus, -gfisher
                  would normally be applied to a dataset created
                  from correlation coefficients directly, or from
                  the application of the -1thtoin option to a fico
                  or fith dataset.

OPTIONAL WAYS TO POSTPROCESS THE COMBINED RESULTS:
 [May be combined with the above methods.]
 [Any combination of these options may be used.]
  -ghits count     = Delete voxels that aren't nonzero in at least
                       'count' datasets (nonzero = a 'hit')
  -gclust rmm vmul = Form clusters with connection distance rmm
                       and clip off data not in clusters of
                       volume at least vmul microliters

The '-g' and '-tg' options apply to the entire group of input datasets.

OPTIONS THAT CONTROL THE NAMES OF THE OUTPUT DATASET:
  -session dirname  = write output into given directory (default=./)
  -prefix  pname    = use 'pname' for the output dataset prefix
                       (default=mrg)

NOTES:
 **  If only one dataset is read into this program, then the '-g'
       options do not apply, and the output dataset is simply the
       '-1' options applied to the input dataset (i.e., edited).
 **  A merged output dataset is ALWAYS of the intensity-only variety.
 **  You can combine the outputs of 3dmerge with other sub-bricks
       using the program 3dbucket.
 **  Complex-valued datasets cannot be merged.
 **  This program cannot handle time-dependent datasets without -doall.
 **  Note that the input datasets are specified by their .HEAD files,
       but that their .BRIK files must exist also!

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

 ** Input datasets using sub-brick selectors are treated as follows:
      - 3D+time if the dataset is 3D+time and more than 1 brick is chosen
      - otherwise, as bucket datasets (-abuc or -fbuc)
       (in particular, fico, fitt, etc. datasets are converted to fbuc)
 ** If you are NOT using -doall, and choose more than one sub-brick
     with the selector, then you may need to use -1dindex to further
     pick out the sub-brick on which to operate (why you would do this
     I cannot fathom).  If you are also using a thresholding operation
     (e.g., -1thresh), then you also MUST use -1tindex to choose which
     sub-brick counts as the 'threshold' value.  When used with sub-brick
      selection, 'index' refers to the dataset AFTER it has been read in:
          -1dindex 1 -1tindex 3 'dset+orig[4..7]'
     means to use the #5 sub-brick of dset+orig as the data for merging
     and the #7 sub-brick of dset+orig as the threshold values.
 ** The above example would better be done with
          -1tindex 1 'dset+orig[5,7]'
     since the default data index is 0. (You would only use -1tindex if
     you are actually using a thresholding operation.)
 ** -1dindex and -1tindex apply to all input datasets.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dMINCtoAFNI
Usage: 3dMINCtoAFNI [-prefix ppp] dataset.mnc
Reads in a MINC formatted file and writes it out as an
AFNI dataset file pair with the given prefix.  If the
prefix option isn't used, the input filename will be
used, after the '.mnc' is chopped off.

NOTES:
* Setting environment variable AFNI_MINC_FLOATIZE to Yes
   will cause MINC datasets to be converted to floats on
   input.  Otherwise, they will be kept in their 'native'
   data type if possible, which may cause problems with
   scaling on occasion.
* The TR recorded in MINC files is often incorrect.  You may
   need to fix this (or other parameters) using 3drefit.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dnewid
Assigns a new ID code to a dataset; this is useful when making
a copy of a dataset, so that the internal ID codes remain unique.

Usage: 3dnewid dataset [dataset ...]
 or
       3dnewid -fun [n]
       to see what n randomly generated ID codes look like.
       (If the integer n is not present, 1 ID code is printed.)

How ID codes are created (here and in other AFNI programs):
----------------------------------------------------------
The AFNI ID code generator attempts to create a globally unique
string identifier, using the following steps.
1) A long string is created from the system identifier
   information ('uname -a'), the current epoch time in seconds
   and microseconds, the process ID, and the number of times
   the current process has called the ID code function.
2) This string is then hashed into a 128 bit code using the
   MD5 algorithm. (cf. file thd_md5.c)
3) This 128-bit code is then converted to a 22 character string
   using Base64 encoding, replacing '/' with '-' and '+' with '_'.
   With these changes, the ID code can be used as a Unix filename
   or an XML name string. (cf. file thd_base64.c)
4) A 4 character prefix is attached at the beginning to produce
   the final ID code.  If you set the environment variable
   IDCODE_PREFIX to something, then its first 3 characters and an
   underscore will be used for the prefix of the new ID code,
   provided that the first character is alphabetic and the other
   2 alphanumeric; otherwise, the default prefix 'NIH_' will be
   used.
The source code is function UNIQ_idcode() in file niml.c.
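The four steps can be mimicked in a few lines (a hedged Python sketch of the scheme as described above, not the actual UNIQ_idcode() source):

```python
import base64
import hashlib
import os
import time

def make_idcode(prefix="NIH_"):
    # Step 1: gather unique-ish state (system id, time, process ID).
    raw = f"{os.uname()} {time.time()} {os.getpid()}"
    # Step 2: hash it to 128 bits with MD5.
    digest = hashlib.md5(raw.encode()).digest()
    # Step 3: Base64 encode (16 bytes -> 22 chars after dropping the
    # '==' padding) and make it filename/XML safe.
    code = base64.b64encode(digest).decode()[:22]
    code = code.replace("/", "-").replace("+", "_")
    # Step 4: attach the 4-character prefix.
    return prefix + code
```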
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dNLfim

Program:          3dNLfim 
Author:           B. Douglas Ward 
Initial Release:  19 June 1997 
Latest Revision:  07 May 2003 

This program calculates a nonlinear regression for each voxel of the  
input AFNI 3d+time data set.  The nonlinear regression is calculated  
by means of a least squares fit to the signal plus noise models which 
are specified by the user.                                            
                                                                      
Usage:                                                                
3dNLfim                                                               
-input fname       fname = filename of 3d + time data file for input  
[-mask mset]       Use the 0 sub-brick of dataset 'mset' as a mask    
                     to indicate which voxels to analyze (a sub-brick 
                     selector is allowed)  [default = use all voxels] 
[-ignore num]      num   = skip this number of initial images in the  
                     time series for regression analysis; default = 3
[-inTR]            set delt = TR of the input 3d+time dataset         
                     [The default is to compute with delt = 1.0 ]     
                     [The model functions are calculated using a      
                      time grid of: 0, delt, 2*delt, 3*delt, ... ]    
[-time fname]      fname = ASCII file containing each time point      
                     in the time series. Defaults to even spacing     
                     given by TR (this option overrides -inTR).       
-signal slabel     slabel = name of (non-linear) signal model         
-noise  nlabel     nlabel = name of (linear) noise model              
-sconstr k c d     constraints for kth signal parameter:              
                      c <= gs[k] <= d                                 
-nconstr k c d     constraints for kth noise parameter:               
                      c+b[k] <= gn[k] <= d+b[k]                       
[-nabs]            use absolute constraints for noise parameters:     
                      c <= gn[k] <= d                                 
[-nrand n]         n = number of random test points                   
[-nbest b]         b = find opt. soln. for b best test points         
[-rmsmin r]        r = minimum rms error to reject reduced model      
[-fdisp fval]      display (to screen) results for those voxels       
                     whose f-statistic is > fval                      
                                                                      
                                                                      
The following commands generate individual AFNI 2 sub-brick datasets: 
                                                                      
[-freg fname]      perform f-test for significance of the regression; 
                     output 'fift' is written to prefix filename fname
[-frsqr fname]     calculate R^2 (coef. of multiple determination);   
                     store along with f-test for regression;          
                     output 'fift' is written to prefix filename fname
[-fsmax fname]     estimate signed maximum of signal; store along     
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-ftmax fname]     estimate time of signed maximum; store along       
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-fpsmax fname]    calculate (signed) maximum percentage change of    
                     signal from baseline; output 'fift' is           
                     written to prefix filename fname                 
[-farea fname]     calculate area between signal and baseline; store  
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-fparea fname]    percentage area of signal relative to baseline;    
                     store with f-test for regression; output 'fift'  
                     is written to prefix filename fname              
[-fscoef k fname]  estimate kth signal parameter gs[k]; store along   
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-fncoef k fname]  estimate kth noise parameter gn[k]; store along    
                     with f-test for regression; output 'fift' is     
                     written to prefix filename fname                 
[-tscoef k fname]  perform t-test for significance of the kth signal  
                     parameter gs[k]; output 'fitt' is written        
                     to prefix filename fname                         
[-tncoef k fname]  perform t-test for significance of the kth noise   
                     parameter gn[k]; output 'fitt' is written        
                     to prefix filename fname                         
                                                                      
                                                                      
The following commands generate one AFNI 'bucket' type dataset:       
                                                                      
[-bucket n prefixname]   create one AFNI 'bucket' dataset containing  
                           n sub-bricks; n=0 creates default output;  
                           output 'bucket' is written to prefixname   
The mth sub-brick will contain:                                       
[-brick m scoef k label]   kth signal parameter regression coefficient
[-brick m ncoef k label]   kth noise parameter regression coefficient 
[-brick m tmax label]      time at max. abs. value of signal          
[-brick m smax label]      signed max. value of signal                
[-brick m psmax label]     signed max. value of signal as percent     
                             above baseline level                     
[-brick m area label]      area between signal and baseline           
[-brick m parea label]     signed area between signal and baseline    
                             as percent of baseline area              
[-brick m tscoef k label]  t-stat for kth signal parameter coefficient
[-brick m tncoef k label]  t-stat for kth noise parameter coefficient 
[-brick m resid label]     std. dev. of the full model fit residuals  
[-brick m rsqr  label]     R^2 (coefficient of multiple determination)
[-brick m fstat label]     F-stat for significance of the regression  
                                                                      
                                                                      
The following commands write the time series fit for each voxel       
to an AFNI 3d+time dataset:                                           
[-sfit fname]      fname = prefix for output 3d+time signal model fit 
[-snfit fname]     fname = prefix for output 3d+time signal+noise fit 
                                                                      

 -jobs J   Run the program with 'J' jobs (sub-processes).
             On a multi-CPU machine, this can speed the
             program up considerably.  On a single CPU
             machine, using this option is silly.
              J should be a number from 1 up to the
              number of CPUs sharing memory on the system.
             J=1 is normal (single process) operation.
             The maximum allowed value of J is 32.
         * For more information on parallelizing, see
             http://afni.nimh.nih.gov/afni/doc/misc/parallize.html
         * Use -mask to get more speed; cf. 3dAutomask.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dnoise
Usage: 3dnoise [-blast] [-snr fac] [-nl x ] datasets ...
Estimates the noise level in 3D datasets, and optionally
sets voxels below the noise threshold to zero.
This only works on datasets that are stored as shorts,
and whose elements are all nonnegative.
  -blast   = Set values at or below the cutoff to zero.
               In 3D+time datasets, a spatial location
               is set to zero only if a majority of time
               points fall below the cutoff; in that case
               all the values at that location are zeroed.
  -snr fac = Set cutoff to 'fac' times the estimated
               noise level.  Default fac = 2.5.  What to
               use for this depends strongly on your MRI
               system -- I often use 5, but our true SNR
               is about 100 for EPI.
  -nl x    = Set the noise level to 'x', skipping the
               estimation procedure.  Also sets fac=1.0.
               You can use program 3dClipLevel to get an
               estimate of a value for 'x'.
Author -- RW Cox
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dNotes
Program: 3dNotes 
Author:  T. Ross 
(c)1999 Medical College of Wisconsin 
                                                                        
3dNotes - a program to add, delete and show notes for AFNI datasets.    
 
----------------------------------------------------------------------- 
                                                                        
Usage: 3dNotes [-a "string"] [-h "string"] [-HH "string"] [-d num] [-help] dataset
 
Examples: 
 
3dNotes -a      "Subject sneezed in scanner, Aug 13 2004" elvis+orig     
3dNotes -h      "Subject likes fried PB & banana sandwiches" elvis+orig  
3dNotes -HH     "Subject has left the building" elvis+orig
3dNotes -d 2 -h "Subject sick of PB'n'banana sandwiches" elvis+orig  
 
----------------------------------------------------------------------- 
                                                                        
Explanation of Options:
---------------------- 
   dataset       : AFNI compatible dataset [required].
                                                                        
   -a   "str"  : Add the string "str" to the list of notes.
                                                                        
                    Note that you can use the standard C escape codes,
                    \n for newline, \t for tab, etc.
                                                                        
   -h   "str"   : Append the string "str" to the dataset's history.  This
                    can only appear once on the command line.  As this is
                    added to the history, it cannot easily be deleted. But,
                    history is propagated to the children of this dataset.
                                                                        
   -HH  "str"   : Replace any existing history note with "str".  This 
                    line cannot be used with '-h'.
                                                                        
   -d   num       : Deletes note number num.
                                                                        
   -help          : Displays this screen.
                                                                        
                                                                        
The default action, with no options, is to display the notes for the
dataset.  If there are options, all deletions occur first and essentially
simultaneously.  Then, notes are added in the order listed on the command
line.  If you do something like -d 10 -d 10, it will delete both notes 10
and 11.  Don't do that.

This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dnvals
Prints out the number of sub-bricks in a 3D dataset
Usage: 3dnvals [-verb] dataset
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dOverlap
Usage: 3dOverlap [options] dset1 dset2 ...
Output = count of number of voxels that are nonzero in ALL
         of the input dataset sub-bricks
The result is simply a number printed to stdout.  (If a single
brick was input, this is just the count of the number of nonzero
voxels in that brick.)
Options:
  -save ppp = Save the count of overlaps at each voxel into a
              dataset with prefix 'ppp' (properly thresholded,
              this could be used as a mask dataset).
Example:
  3dOverlap -save abcnum a+orig b+orig c+orig
  3dmaskave -mask 'abcnum+orig<3..3>' a+orig
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dpc
Principal Component Analysis of 3D Datasets
Usage: 3dpc [options] dataset dataset ...

Each input dataset may have a sub-brick selector list.
Otherwise, all sub-bricks from a dataset will be used.

OPTIONS:
  -dmean        = remove the mean from each input brick (across space)
  -vmean        = remove the mean from each input voxel (across bricks)
                    [N.B.: -dmean and -vmean are mutually exclusive]
                    [default: don't remove either mean]
  -vnorm        = L2 normalize each input voxel time series
                    [occurs after the de-mean operations above,]
                    [and before the brick normalization below. ]
  -normalize    = L2 normalize each input brick (after mean subtraction)
                    [default: don't normalize]
  -pcsave sss   = 'sss' is the number of components to save in the output;
                    it can't be more than the number of input bricks
                    [default = all of them = number of input bricks]
  -prefix pname = Name for output dataset (will be a bucket type);
                    also, the eigen-timeseries will be in 'pname'.1D
                    (all of them) and in 'pnameNN.1D' for eigenvalue
                    #NN individually (NN=00 .. 'sss'-1, corresponding
                    to the brick index in the output dataset)
                    [default value of pname = 'pc']
  -1ddum ddd    = Add 'ddd' dummy lines to the top of each *.1D file.
                    These lines will have the value 999999, and can
                    be used to align the files appropriately.
                    [default value of ddd = 0]
  -verbose      = Print progress reports during the computations
  -float        = Save eigen-bricks as floats
                    [default = shorts, scaled so that |max|=10000]
  -mask mset    = Use the 0 sub-brick of dataset 'mset' as a mask
                    to indicate which voxels to analyze (a sub-brick
                    selector is allowed) [default = use all voxels]

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dproject
Projection along cardinal axes from a 3D dataset
Usage: 3dproject [editing options]
        [-sum|-max|-amax|-smax] [-output root] [-nsize] [-mirror]
        [-RL {all | x1 x2}] [-AP {all | y1 y2}] [-IS {all | z1 z2}]
        [-ALL] dataset

Program to produce orthogonal projections from a 3D dataset.
  -sum     ==> Add the dataset voxels along the projection direction
  -max     ==> Take the maximum of the voxels [the default is -sum]
  -amax    ==> Take the absolute maximum of the voxels
  -smax    ==> Take the signed maximum of the voxels; for example,
                -max  ==> -7 and 2 go to  2 as the projected value
                -amax ==> -7 and 2 go to  7 as the projected value
                -smax ==> -7 and 2 go to -7 as the projected value
  -first x ==> Take the first value greater than x
  -nsize   ==> Scale the output images up to 'normal' sizes
               (e.g., 64x64, 128x128, or 256x256)
               This option only applies to byte or short datasets.
  -mirror  ==> The radiologists' and AFNI convention is to display
               axial and coronal images with the subject's left on
               the right of the image; the use of this option will
               mirror the axial and coronal projections so that
               left is left and right is right.

  -output root ==> Output projections will named
                   root.sag, root.cor, and root.axi
                   [the default root is 'proj']

  -RL all      ==> Project in the Right-to-Left direction along
                   all the data (produces root.sag)
  -RL x1 x2    ==> Project in the Right-to-Left direction from
                   x-coordinate x1 to x2 (mm)
                   [negative x is Right, positive x is Left]
                   [OR, you may use something like -RL 10R 20L
                        to project from x=-10 mm to x=+20 mm  ]

  -AP all      ==> Project in the Anterior-to-Posterior direction along
                   all the data (produces root.cor)
  -AP y1 y2    ==> Project in the Anterior-to-Posterior direction from
                   y-coordinate y1 to y2 (mm)
                   [negative y is Anterior, positive y is Posterior]
                   [OR, you may use something like -AP 10A 20P
                        to project from y=-10 mm to y=+20 mm  ]

  -IS all      ==> Project in the Inferior-to-Superior direction along
                   all the data (produces root.axi)
  -IS y1 y2    ==> Project in the Inferior-to-Superior direction from
                   z-coordinate z1 to z2 (mm)
                   [negative z is Inferior, positive z is Superior]
                   [OR, you may use something like -IS 10I 20S
                        to project from z=-10 mm to z=+20 mm  ]

  -ALL         ==> Equivalent to '-RL all -AP all -IS all'

* NOTE that a projection direction will not be used if the bounds aren't
   given for that direction; thus, at least one of -RL, -AP, or -IS must
   be used, or nothing will be computed!
* NOTE that in the directions transverse to the projection direction,
   all the data is used; that is, '-RL -5 5' will produce a full sagittal
   image summed over a 10 mm slice, irrespective of the -IS or -AP extents.
* NOTE that the [editing options] are the same as in 3dmerge.
   In particular, the '-1thtoin' option can be used to project the
   threshold data (if available).

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
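The four projection reductions above (-sum, -max, -amax, -smax) can be sketched in a few lines of pure Python (illustrative only, not AFNI code), reproducing the -7/2 example from the help text:

```python
def project(values, mode="sum"):
    """Collapse the voxel values along a projection axis, mimicking the
    3dproject -sum / -max / -amax / -smax reductions."""
    if mode == "sum":
        return sum(values)
    if mode == "max":
        return max(values)
    if mode == "amax":
        return max(abs(v) for v in values)
    if mode == "smax":
        return max(values, key=abs)  # signed value of largest magnitude
    raise ValueError("unknown projection mode: " + mode)
```

For the voxel values -7 and 2, -max gives 2, -amax gives 7, and -smax gives -7, matching the table above.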
3drefit
Changes some of the information inside a 3D dataset's header.
Note that this program does NOT change the .BRIK file at all;
the main purpose of 3drefit is to fix up errors made when
using to3d.
To see the current values stored in a .HEAD file, use the command
'3dinfo dataset'.  Using 3dinfo both before and after 3drefit is
a good idea to make sure the changes have been made correctly!

Usage: 3drefit [options] dataset ...
where the options are
  -orient code    Sets the orientation of the 3D volume(s) in the .BRIK.
                  The code must be 3 letters, one each from the
                  pairs {R,L} {A,P} {I,S}.  The first letter gives
                  the orientation of the x-axis, the second the
                  orientation of the y-axis, the third the z-axis:
                     R = right-to-left         L = left-to-right
                     A = anterior-to-posterior P = posterior-to-anterior
                     I = inferior-to-superior  S = superior-to-inferior
               ** WARNING: when changing the orientation, you must be sure
                  to check the origins as well, to make sure that the volume
                  is positioned correctly in space.

  -xorigin distx  Puts the center of the edge voxel off at the given
  -yorigin disty  distance, for the given axis (x,y,z); distances in mm.
  -zorigin distz  (x=first axis, y=second axis, z=third axis).
                  Usually, only -zorigin makes sense.  Note that this
                  distance is in the direction given by the corresponding
                  letter in the -orient code.  For example, '-orient RAI'
                  would mean that '-zorigin 30' sets the center of the
                  first slice at 30 mm Inferior.  See the to3d manual
                  for more explanations of axes origins.
               ** SPECIAL CASE: you can use the string 'cen' in place of
                  a distance to force that axis to be re-centered.

  -xorigin_raw xx Puts the center of the edge voxel at the given COORDINATE
  -yorigin_raw yy rather than the given DISTANCE.  That is, these values
  -zorigin_raw zz directly replace the offsets in the dataset header,
                  without any possible sign changes.

  -duporigin cset Copies the xorigin, yorigin, and zorigin values from
                  the header of dataset 'cset'.

  -dxorigin dx    Adds distance 'dx' (or 'dy', or 'dz') to the center
  -dyorigin dy    coordinate of the edge voxel.  Can be used with the
  -dzorigin dz    values input to the 'Nudge xyz' plugin.
               ** WARNING: you can't use these options at the same
                  time you use -orient.

  -xdel dimx      Makes the size of the voxel the given dimension,
  -ydel dimy      for the given axis (x,y,z); dimensions in mm.
  -zdel dimz   ** WARNING: if you change a voxel dimension, you will
                  probably have to change the origin as well.

  -TR time        Changes the TR time to a new value (see 'to3d -help').
  -notoff         Removes the slice-dependent time-offsets.
  -Torg ttt       Set the time origin of the dataset to value 'ttt'.
                  (Time origins are set to 0 in to3d.)
               ** WARNING: these 3 options apply only to 3D+time datasets.

  -newid          Changes the ID code of this dataset as well.

  -nowarp         Removes all warping information from dataset.

  -apar aset      Set the dataset's anatomy parent dataset to 'aset'
               ** N.B.: The anatomy parent is the dataset from which the
                  transformation from +orig to +acpc and +tlrc coordinates
                  is taken.  It is appropriate to use -apar when there is
                  more than 1 anatomical dataset in a directory that has
                  been transformed.  In this way, you can be sure that
                  AFNI will choose the correct transformation.  You would
                   use this option on all the +orig datasets that are
                  aligned with 'aset' (i.e., that were acquired in the
                  same scanning session).
               ** N.B.: Special cases of 'aset'
                   aset = NULL --> remove the anat parent info from the dataset
                   aset = SELF --> set the anat parent to be the dataset itself

  -clear_bstat    Clears the statistics (min and max) stored for each sub-brick
                  in the dataset.  This is useful if you have done something to
                  modify the contents of the .BRIK file associated with this
                  dataset.
  -redo_bstat     Re-computes the statistics for each sub-brick.  Requires
                  reading the .BRIK file, of course.  Also does -clear_bstat
                  before recomputing statistics, so that if the .BRIK read
                  fails for some reason, then you'll be left without stats.

  -statpar v ...  Changes the statistical parameters stored in this
                  dataset.  See 'to3d -help' for more details.

  -markers        Adds an empty set of AC-PC markers to the dataset,
                  if it can handle them (is anatomical, is in the +orig
                  view, and isn't 3D+time).
               ** WARNING: this will erase any markers that already exist!

  -view code      Changes the 'view' to be 'code', where the string 'code'
                  is one of 'orig', 'acpc', or 'tlrc'.
               ** WARNING: The program will also change the .HEAD and .BRIK
                  filenames to match.  If the dataset filenames already
                  exist in the '+code' view, then this option will fail.
                  You will have to rename the dataset files before trying
                  to use '-view'.  If you COPY the files and then use
                  '-view', don't forget to use '-newid' as well!

  -label2 llll    Set the 'label2' field in a dataset .HEAD file to the
                  string 'llll'.  (Can be used as in AFNI window titlebars.)

  -denote         Means to remove all possibly-identifying notes from
                  the header.  This includes the History Note, other text
                  Notes, keywords, and labels.

  -byteorder bbb  Sets the byte order string in the header.
                  Allowable values for 'bbb' are:
                     LSB_FIRST   MSB_FIRST   NATIVE_ORDER
                   Note that this does not change the .BRIK file!
                   That can be done with the programs 2swap and 4swap.

  -appkey ll      Appends the string 'll' to the keyword list for the
                  whole dataset.
  -repkey ll      Replaces the keyword list for the dataset with the
                  string 'll'.
  -empkey         Destroys the keyword list for the dataset.

  -atrcopy dd nn  Copy AFNI header attribute named 'nn' from dataset 'dd'
                  into the header of the dataset(s) being modified.
                  For more information on AFNI header attributes, see
                  documentation file README.attributes. More than one
                  '-atrcopy' option can be used.
          **N.B.: This option is for those who know what they are doing!
                  It can only be used to alter attributes that are NOT
                  directly mapped into dataset internal structures, since
                  those structures are mapped back into attribute values
                  as the dataset is being written to disk.  If you want
                  to change such an attribute, you have to use the
                  corresponding 3drefit option directly.

  -atrstring n 'x' Copy the string 'x' into the dataset(s) being
                   modified, giving it the attribute name 'n'.
                   To be safe, the 'x' string should be in quotes.
          **N.B.: You can store attributes with almost any name in
                  the .HEAD file.  AFNI will ignore those it doesn't
                  know anything about.  This technique can be a way of
                  communicating information between programs.  However,
                  when most AFNI programs write a new dataset, they will
                  not preserve any such non-standard attributes.

  -'type'         Changes the type of data that is declared for this
                  dataset, where 'type' is chosen from the following:
       ANATOMICAL TYPES
         spgr == Spoiled GRASS             fse == Fast Spin Echo  
         epan == Echo Planar              anat == MRI Anatomy     
           ct == CT Scan                  spct == SPECT Anatomy   
          pet == PET Anatomy               mra == MR Angiography  
         bmap == B-field Map              diff == Diffusion Map   
         omri == Other MRI                abuc == Anat Bucket     
       FUNCTIONAL TYPES
          fim == Intensity                fith == Inten+Thr       
         fico == Inten+Cor                fitt == Inten+Ttest     
         fift == Inten+Ftest              fizt == Inten+Ztest     
         fict == Inten+ChiSq              fibt == Inten+Beta      
         fibn == Inten+Binom              figt == Inten+Gamma     
         fipt == Inten+Poisson            fbuc == Func-Bucket     
-copyaux auxset   Copies the 'auxiliary' data from dataset 'auxset'
                  over the auxiliary data for the dataset being
                  modified.  Auxiliary data comprises sub-brick labels,
                  keywords, and statistics codes.
                  '-copyaux' occurs BEFORE the '-sub' operations below,
                  so you can use those to alter the auxiliary data
                  that is copied from auxset.

The options below allow you to attach auxiliary data to sub-bricks
in the dataset.  Each option may be used more than once so that
multiple sub-bricks can be modified in a single run of 3drefit.

  -sublabel  n ll  Attach to sub-brick #n the label string 'll'.
  -subappkey n ll  Add to sub-brick #n the keyword string 'll'.
  -subrepkey n ll  Replace sub-brick #n's keyword string with 'll'.
  -subempkey n     Empty out sub-brick #n's keyword string

  -substatpar n type v ...
                  Attach to sub-brick #n the statistical type and
                  the auxiliary parameters given by values 'v ...',
                  where 'type' is one of the following:
         type  Description  PARAMETERS
         ----  -----------  ----------------------------------------
         fico  Cor          SAMPLES  FIT-PARAMETERS  ORT-PARAMETERS
         fitt  Ttest        DEGREES-of-FREEDOM
         fift  Ftest        NUMERATOR and DENOMINATOR DEGREES-of-FREEDOM
         fizt  Ztest        N/A
         fict  ChiSq        DEGREES-of-FREEDOM
         fibt  Beta         A (numerator) and B (denominator)
         fibn  Binom        NUMBER-of-TRIALS and PROBABILITY-per-TRIAL
         figt  Gamma        SHAPE and SCALE
         fipt  Poisson      MEAN

The following options allow you to modify VOLREG fields:
  -vr_mat <VAL1> ... <VAL12>   Use these twelve values for VOLREG_MATVEC_index.
  [-vr_mat_ind <INDEX>]        Index of VOLREG_MATVEC_index field to be modified. Optional, default index is 0.
                               Note: You can only modify one VOLREG_MATVEC_index at a time.
  -vr_center_old <X> <Y> <Z>   Use these 3 values for VOLREG_CENTER_OLD.
  -vr_center_base <X> <Y> <Z>  Use these 3 values for VOLREG_CENTER_BASE.

++ Last program update: 08 Jul 2005
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
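The -orient rule above (three letters, one from each of the pairs {R,L}, {A,P}, {I,S}, in any axis order, no pair reused) is easy to get wrong on the command line. A minimal validity check, written here as an editorial sketch rather than AFNI code:

```python
def valid_orient(code):
    """True if 'code' is a legal 3-letter orientation code in the
    3drefit -orient sense: one letter from each of {R,L}, {A,P},
    {I,S}, with no pair used twice."""
    pairs = [set("RL"), set("AP"), set("IS")]
    code = code.upper()
    if len(code) != 3:
        return False
    used = []
    for ch in code:
        for p in pairs:
            if ch in p and p not in used:
                used.append(p)
                break
        else:
            return False  # letter unknown, or its pair already used
    return len(used) == 3
```

So 'RAI', 'LPI', and 'ASL' are legal, while 'RLA' (two letters from the same pair) is not.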
3dRegAna

Program:          3dRegAna 
Author:           B. Douglas Ward 
Initial Release:  10 Oct 1997 
Latest Revision:  24 Aug 2005 

This program performs multiple linear regression analysis.          

Usage: 
3dRegAna 
-rows n                             number of input datasets          
-cols m                             number of X variables             
-xydata X11 X12 ... X1m filename    X variables and Y observations    
  .                                   .                               
  .                                   .                               
  .                                   .                               
-xydata Xn1 Xn2 ... Xnm filename    X variables and Y observations    
                                                                      
-model i1 ... iq : j1 ... jr   definition of linear regression model; 
                                 reduced model:                       
                                   Y = f(Xj1,...,Xjr)                 
                                 full model:                          
                                   Y = f(Xj1,...,Xjr,Xi1,...,Xiq)     
                                                                      
[-diskspace]       print out disk space required for program execution
[-workmem mega]    number of megabytes of RAM to use for statistical  
                   workspace  (default = 12)                          
[-rmsmin r]        r = minimum rms error to reject constant model     
[-fdisp fval]      display (to screen) results for those voxels       
                   whose F-statistic is > fval                        
                                                                      
[-flof alpha]      alpha = minimum p value for F due to lack of fit   
                                                                      
                                                                      
The following commands generate individual AFNI 2 sub-brick datasets: 
                                                                      
[-fcoef k prefixname]        estimate of kth regression coefficient   
                               along with F-test for the regression   
                               is written to AFNI `fift' dataset      
[-rcoef k prefixname]        estimate of kth regression coefficient   
                               along with coef. of mult. deter. R^2   
                               is written to AFNI `fith' dataset      
[-tcoef k prefixname]        estimate of kth regression coefficient   
                               along with t-test for the coefficient  
                               is written to AFNI `fitt' dataset      
                                                                      
                                                                      
The following commands generate one AFNI 'bucket' type dataset:       
                                                                      
[-bucket n prefixname]     create one AFNI 'bucket' dataset having    
                             n sub-bricks; n=0 creates default output;
                             output 'bucket' is written to prefixname 
The mth sub-brick will contain:                                       
[-brick m coef k label]    kth parameter regression coefficient       
[-brick m fstat label]     F-stat for significance of regression      
[-brick m rstat label]     coefficient of multiple determination R^2  
[-brick m tstat k label]   t-stat for kth regression coefficient      


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -xydata command. That is, if an input dataset contains
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -xydata 2.17 4.59 7.18  'fred+orig[3]'                          

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
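The full-versus-reduced model comparison that 3dRegAna performs is the standard partial F-test. As a hedged one-predictor illustration (plain Python, not 3dRegAna's actual implementation): the reduced model is the constant Y = b0, the full model is Y = b0 + b1*X, and F = ((SSE_reduced - SSE_full)/q) / (SSE_full/(n - p)):

```python
def sse_constant(y):
    """Error sum of squares for the reduced model y = b0 (the mean)."""
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y)

def sse_linear(x, y):
    """Error sum of squares for the full model y = b0 + b1*x, using the
    closed-form least-squares slope."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    b0 = my - b1 * mx
    return sum((b - (b0 + b1 * a)) ** 2 for a, b in zip(x, y))

def f_stat(x, y):
    """Partial F comparing full vs. reduced model: q = 1 extra
    parameter, n - 2 error degrees of freedom."""
    n = len(y)
    sse_r, sse_f = sse_constant(y), sse_linear(x, y)
    return ((sse_r - sse_f) / 1) / (sse_f / (n - 2))
```

A large F means the extra X variables (here just one) explain far more variance than the reduced model alone.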
3drename
Usage 1: 3drename old_prefix new_prefix
  Will rename all datasets using the old_prefix to use the new_prefix;
    3drename fred ethel
  will change fred+orig.HEAD    to ethel+orig.HEAD
              fred+orig.BRIK    to ethel+orig.BRIK
              fred+tlrc.HEAD    to ethel+tlrc.HEAD
              fred+tlrc.BRIK.gz to ethel+tlrc.BRIK.gz

Usage 2: 3drename old_prefix+view new_prefix
  Will rename only the dataset with the given view (orig, acpc, tlrc).
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
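The prefix-renaming behavior described above (everything matching old_prefix plus a '+view' suffix moves to new_prefix) can be sketched with stdlib file operations; this is an editorial illustration of the pattern, not 3drename's source:

```python
import os

def rename_prefix(directory, old_prefix, new_prefix):
    """Rename every file whose name starts with old_prefix followed by
    a '+view' suffix (as 3drename does), returning (old, new) pairs."""
    renamed = []
    for name in sorted(os.listdir(directory)):
        if name.startswith(old_prefix + "+"):
            new_name = new_prefix + name[len(old_prefix):]
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new_name))
            renamed.append((name, new_name))
    return renamed
```

Files that merely share the prefix without a '+view' suffix (e.g. 'notes.txt') are left alone, matching the fred/ethel example above.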
3dresample
3dresample - reorient and/or resample a dataset

    This program can be used to change the orientation of a
    dataset (via the -orient option), or the dx,dy,dz
    grid spacing (via the -dxyz option), or change them
    both to match that of a master dataset (via the -master
    option).

    Note: if both -master and -dxyz are used, the dxyz values
          will override those from the master dataset.

 ** Warning: this program is not meant to transform datasets
             between view types (such as '+orig' and '+tlrc').

             For that purpose, please see '3dfractionize -help'.

------------------------------------------------------------

  usage: 3dresample [options] -prefix OUT_DSET -inset IN_DSET

  examples:

    3dresample -orient asl -rmode NN -prefix asl.dset -inset in+orig
    3dresample -dxyz 1.0 1.0 0.9 -prefix 119.dset -inset in+tlrc
    3dresample -master master+orig -prefix new.dset -inset old+orig

  note:

    Information about a dataset's voxel size and orientation
    can be found in the output of program 3dinfo

------------------------------------------------------------

  options: 

    -help            : show this help information

    -hist            : output the history of program changes

    -debug LEVEL     : print debug info along the way
          e.g.  -debug 1
          default level is 0, max is 2

    -version         : show version information

    -dxyz DX DY DZ   : resample to new dx, dy and dz
          e.g.  -dxyz 1.0 1.0 0.9
          default is to leave unchanged

          Each of DX,DY,DZ must be a positive real number,
          and will be used for a voxel delta in the new
          dataset (according to any new orientation).

    -orient OR_CODE  : reorient to new axis order.
          e.g.  -orient asl
          default is to leave unchanged

          The orientation code is a 3 character string,
          where the characters come from the respective
          sets {A,P}, {I,S}, {L,R}.

          For example OR_CODE = LPI is the standard
          'neuroscience' orientation, where the x-axis is
          Left-to-Right, the y-axis is Posterior-to-Anterior,
          and the z-axis is Inferior-to-Superior.

    -rmode RESAM     : use this resampling method
          e.g.  -rmode Linear
          default is NN (nearest neighbor)

          The resampling method string RESAM should come
          from the set {'NN', 'Li', 'Cu', 'Bk'}.  These
          are for 'Nearest Neighbor', 'Linear', 'Cubic'
          and 'Blocky' interpolation, respectively.
          See 'Anat resam mode' under the 'Define Markers'
          window in afni.

    -master MAST_DSET: align dataset grid to that of MAST_DSET
          e.g.  -master master.dset+orig

          Get dxyz and orient from a master dataset.  The
          resulting grid will match that of the master.  This
          option can be used with -dxyz, but not with -orient.

    -prefix OUT_DSET : required prefix for output dataset
          e.g.  -prefix reori.asl.pickle

    -inset IN_DSET   : required input dataset to reorient
          e.g.  -inset old.dset+orig

------------------------------------------------------------

  Author: R. Reynolds - Version 1.8 <August 3, 2005>

This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
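The default -rmode NN (nearest neighbor) resampling can be illustrated in one dimension: new voxel centers are laid out at the new spacing across the same physical extent, and each takes the value of the old voxel containing it. This is a simplified sketch under those assumptions, not 3dresample's 3-D implementation:

```python
def resample_nn(row, d_old, d_new):
    """Nearest-neighbor resample one row of voxel values from spacing
    d_old to d_new (mm), covering the same physical extent."""
    extent = len(row) * d_old
    n_new = max(1, round(extent / d_new))
    out = []
    for i in range(n_new):
        center = (i + 0.5) * d_new                   # new voxel center (mm)
        j = min(int(center / d_old), len(row) - 1)   # old voxel containing it
        out.append(row[j])
    return out
```

Downsampling [1, 2, 3, 4] from 1.0 mm to 2.0 mm keeps every second value; upsampling duplicates values instead of interpolating, which is exactly why NN is the safe default for label/mask datasets.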
3dretroicor
Usage: 3dretroicor [options] dataset

Performs Retrospective Image Correction for physiological
motion effects, using a slightly modified version of the
RETROICOR algorithm described in:

  Glover, G. H., Li, T., & Ress, D. (2000). Image-based method
for retrospective correction of physiological motion effects in
fMRI: RETROICOR. Magnetic Resonance in Medicine, 44, 162-167.

Options (defaults in []'s):

 -ignore    = The number of initial timepoints to ignore in the
              input (These points will be passed through
              uncorrected) [0]
 -prefix    = Prefix for new, corrected dataset [retroicor]

 -card      = 1D cardiac data file for cardiac correction
 -cardphase = Filename for 1D cardiac phase output
 -threshold = Threshold for detection of R-wave peaks in input
              (Make sure it's above the background noise level;
              Try 3/4 or 4/5 times range plus minimum) [1]

 -resp      = 1D respiratory waveform data for correction
 -respphase = Filename for 1D resp phase output

 -order     = The order of the correction (2 is typical;
              higher-order terms yield little improvement
              according to Glover et al.) [2]

 -help      = Display this message and stop (must be first arg)

Dataset: 3D+time dataset to process

** The input dataset and at least one of -card and -resp are
    required.

NOTES
-----

The durations of the physiological inputs are assumed to equal
the duration of the dataset. Any constant sampling rate may be
used, but 40 Hz seems to be acceptable. This program's cardiac
peak detection algorithm is rather simplistic, so you might try
using the scanner's cardiac gating output (transform it to a
spike wave if necessary).

This program uses slice timing information embedded in the
dataset to estimate the proper cardiac/respiratory phase for
each slice. It makes sense to run this program before any
program that may destroy the slice timings (e.g. 3dvolreg for
motion correction).

Author -- Fred Tam, August 2002

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
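The cardiac phase assignment at the heart of RETROICOR maps each acquisition time linearly onto [0, 2*pi) within its cardiac cycle, i.e. between consecutive detected R-wave peaks. A minimal sketch of that mapping (the peak times are assumed already detected; this is not 3dretroicor's code):

```python
import math

def cardiac_phase(t, peaks):
    """Phase in [0, 2*pi) of time t within its cardiac cycle, given
    sorted times of detected R-wave peaks (Glover et al., 2000)."""
    for t0, t1 in zip(peaks, peaks[1:]):
        if t0 <= t < t1:
            return 2 * math.pi * (t - t0) / (t1 - t0)
    raise ValueError("t falls outside the detected peak intervals")
```

A time halfway between two peaks gets phase pi; the same construction, applied to inspiration/expiration, yields the respiratory phase.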
3dROIstats
Usage: 3dROIstats -mask[n] mset [options] datasets
Options:
  -mask[n] mset Means to use the dataset 'mset' as a mask:
                 If n is present, it specifies which sub-brick
                 in mset to use a la 3dcalc.  Note: do not include
                 the brackets if specifying a sub-brick, they are
                 there to indicate that they are optional.  If not
                 present, 0 is assumed
                 Voxels with the same nonzero values in 'mset'
                 will be statisticized from 'dataset'.  This will
                 be repeated for all the different values in mset.
                 I.e. all of the 1s in mset are one ROI, as are all
                 of the 2s, etc.
                 Note that the mask dataset and the input dataset
                 must have the same number of voxels and that mset
                 must be BYTE or SHORT (i.e., float masks won't work
                 without the -mask_f2short option).
                 
  -mask_f2short  Tells the program to convert a float mask to short
                 integers, by simple rounding.  This option is needed
                 when the mask dataset is a 1D file, for instance
                 (since 1D files are read as floats).

                 Be careful with this, it may not be appropriate to do!

  -numROI n     Forces the assumption that the mask dataset's ROIs are
                 denoted by 1 to n inclusive.  Normally, the program
                 figures out the ROIs on its own.  This option is 
                 useful if a) you are certain that the mask dataset
                 has no values outside the range [0 n], b) there may 
                 be some ROIs missing between [1 n] in the mask data-
                 set and c) you want those columns in the output any-
                 way so the output lines up with the output from other
                 invocations of 3dROIstats.  Confused?  Then don't use
                 this option!

  -debug        Print out debugging information
  -quiet        Do not print out labels for columns or rows

The following options specify what stats are computed.  By default
the mean is always computed.

  -nzmean       Compute the mean using only non-zero voxels.  Implies
                 the opposite for the normal mean computed
  -nzvoxels     Compute the number of non-zero voxels
  -minmax       Compute the min/max of all voxels
  -nzminmax     Compute the min/max of non-zero voxels
  -sigma        Means to compute the standard deviation as well
                 as the mean.
  -summary      Only output a summary line with the grand mean across all briks
                 in the input dataset. 

The output is printed to stdout (the terminal), and can be
saved to a file using the usual redirection operation '>'.

N.B.: The input datasets and the mask dataset can use sub-brick
      selectors, as detailed in the output of 3dcalc -help.

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
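The core grouping that 3dROIstats performs (all the 1s in the mask form one ROI, all the 2s another, and so on, with per-ROI means by default) can be sketched over flat voxel lists; an editorial illustration, not the program's implementation:

```python
def roi_stats(mask, data):
    """Group voxels by nonzero mask label and return {label: mean},
    the default statistic 3dROIstats reports per ROI."""
    sums, counts = {}, {}
    for label, value in zip(mask, data):
        if label != 0:          # label 0 means "not in any ROI"
            sums[label] = sums.get(label, 0.0) + value
            counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}
```

Zero-labeled voxels are excluded entirely, which is distinct from the -nzmean option above (that one excludes zero-valued *data* voxels within an ROI).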
3drotate
Usage: 3drotate [options] dataset
Rotates and/or translates all bricks from an AFNI dataset.
'dataset' may contain a sub-brick selector list.

GENERIC OPTIONS:
  -prefix fname    = Sets the output dataset prefix name to be 'fname'
  -verbose         = Prints out progress reports (to stderr)

OPTIONS TO SPECIFY THE ROTATION/TRANSLATION:
-------------------------------------------
*** METHOD 1 = direct specification:
At most one of these shift options can be used:
  -ashift dx dy dz = Shifts the dataset 'dx' mm in the x-direction, etc.,
                       AFTER rotation.
  -bshift dx dy dz = Shifts the dataset 'dx' mm in the x-direction, etc.,
                       BEFORE rotation.
    The shift distances by default are along the (x,y,z) axes of the dataset
    storage directions (see the output of '3dinfo dataset').  To specify them
    anatomically, you can suffix a distance with one of the symbols
    'R', 'L', 'A', 'P', 'I', and 'S', meaning 'Right', 'Left', 'Anterior',
    'Posterior', 'Inferior', and 'Superior', respectively.

  -rotate th1 th2 th3
    Specifies the 3D rotation to be composed of 3 planar rotations:
       1) 'th1' degrees about the 1st axis,           followed by
       2) 'th2' degrees about the (rotated) 2nd axis, followed by
       3) 'th3' degrees about the (doubly rotated) 3rd axis.
    Which axes are used for these rotations is specified by placing
    one of the symbols 'R', 'L', 'A', 'P', 'I', and 'S' at the end
    of each angle (e.g., '10.7A').  These symbols denote rotation
    about the 'Right-to-Left', 'Left-to-Right', 'Anterior-to-Posterior',
    'Posterior-to-Anterior', 'Inferior-to-Superior', and
    'Superior-to-Inferior' axes, respectively.  A positive rotation is
    defined by the right-hand rule.
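The three-angle scheme above composes rotations about successively rotated axes. A standard identity makes this easy to compute: such an intrinsic sequence equals the product of the fixed-axis matrices taken in the same order, R = R1*R2*R3 for column vectors. The sketch below illustrates that composition in pure Python; it is illustrative only and does not reproduce 3drotate's R/L/A/P/I/S sign conventions:

```python
import math

def rot(axis, degrees):
    """3x3 right-handed rotation matrix about coordinate axis 0, 1, or 2."""
    c, s = math.cos(math.radians(degrees)), math.sin(math.radians(degrees))
    i, j = [(1, 2), (2, 0), (0, 1)][axis]   # the two axes that rotate
    m = [[1.0 if r == col else 0.0 for col in range(3)] for r in range(3)]
    m[i][i], m[i][j], m[j][i], m[j][j] = c, -s, s, c
    return m

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def compose(th1, th2, th3, axes=(0, 1, 2)):
    """th1 about the 1st axis, then th2 about the rotated 2nd axis, then
    th3 about the doubly rotated 3rd axis: the total is R1 * R2 * R3."""
    r1, r2, r3 = (rot(a, th) for a, th in zip(axes, (th1, th2, th3)))
    return matmul(r1, matmul(r2, r3))
```

For example, a single +90-degree rotation about the first axis carries the second axis onto the third, and any composition of rotations remains orthogonal.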

*** METHOD 2 = copy from output of 3dvolreg:
  -rotparent rset
    Specifies that the rotation and translation should be taken from the
    first 3dvolreg transformation found in the header of dataset 'rset'.
  -gridparent gset
    Specifies that the output dataset of 3drotate should be shifted to
    match the grid of dataset 'gset'.  Can only be used with -rotparent.
    This dataset should be one that is properly aligned with 'rset' when
    overlaid in AFNI.
  * If -rotparent is used, then don't use -matvec, -rotate, or -[ab]shift.
  * If 'gset' has a different number of slices than the input dataset,
    then the output dataset will be zero-padded in the slice direction
    to match 'gset'.
  * These options are intended to be used to align datasets between sessions:
     S1 = SPGR from session 1    E1 = EPI from session 1
     S2 = SPGR from session 2    E2 = EPI from session 2
 3dvolreg -twopass -twodup -base S1+orig -prefix S2reg S2+orig
 3drotate -rotparent S2reg+orig -gridparent E1+orig -prefix E2reg E2+orig
     The result will have E2reg rotated from E2 in the same way that S2reg
     was from S2, and also shifted/padded (as needed) to overlap with E1.

*** METHOD 3 = give the transformation matrix/vector directly:
  -matvec_dicom mfile
  -matvec_order mfile
    Specifies that the rotation and translation should be read from file
    'mfile', which should be in the format
           u11 u12 u13 v1
           u21 u22 u23 v2
           u31 u32 u33 v3
    where each 'uij' and 'vi' is a number.  The 3x3 matrix [uij] is the
    orthogonal matrix of the rotation, and the 3-vector [vi] is the -ashift
    vector of the translation.
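A minimal sketch of applying such a matrix+vector file (the mfile contents below are hypothetical, and this is not AFNI's reader): each point transforms as x' = U x + v.

```python
import numpy as np

# Hypothetical mfile contents: a 90-degree rotation about z
# plus a shift.  Real files come from e.g. 3dTagalign -matvec.
mfile_text = """0 -1 0  2.0
                1  0 0  0.0
                0  0 1 -1.5"""

rows = [list(map(float, line.split()))
        for line in mfile_text.splitlines()]
M = np.array(rows)            # 3 rows x 4 columns
U, v = M[:, :3], M[:, 3]      # orthogonal matrix and shift vector

def transform(x):
    """Apply the rotation then the shift: x' = U @ x + v."""
    return U @ np.asarray(x, float) + v
```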

*** METHOD 4 = copy the transformation from 3dTagalign:
  -matvec_dset mset
    Specifies that the rotation and translation should be read from
    the .HEAD file of dataset 'mset', which was created by program
    3dTagalign.
  * If -matvec_dicom is used, the matrix and vector are given in Dicom
     coordinate order (+x=L, +y=P, +z=S).  This is the option to use
     if mfile is generated using 3dTagalign -matvec mfile.
  * If -matvec_order is used, the matrix and vector are given in the
     coordinate order of the dataset axes, whatever they may be.
  * You can't mix -matvec_* options with -rotate and -*shift.

*** METHOD 5 = input rotation+shift parameters from an ASCII file:
  -dfile dname  *OR*  -1Dfile dname
    With these methods, the movement parameters for each sub-brick
    of the input dataset are read from the file 'dname'.  This file
    should consist of columns of numbers in ASCII format.  Six (6)
    numbers are read from each line of the input file.  If the
    '-dfile' option is used, each line of the input should be at
    least 7 numbers, and be of the form
      ignored roll pitch yaw dS dL dP
    If the '-1Dfile' option is used, then each line of the input
    should be at least 6 numbers, and be of the form
      roll pitch yaw dS dL dP
          (These are the forms output by the '-dfile' and
           '-1Dfile' options of program 3dvolreg; see that
           program's -help output for the hideous details.)
    The n-th sub-brick of the input dataset will be transformed
    using the parameters from the n-th line of the dname file.
    If the dname file doesn't contain as many lines as the
    input dataset has sub-bricks, then the last dname line will
    be used for all subsequent sub-bricks.  Excess columns or
    rows will be ignored.
  N.B.: Rotation is always about the center of the volume.
          If the parameters are derived from a 3dvolreg run
          on a dataset with a different center in xyz-space,
          the results may not be what you want!
  N.B.: You can't use -dfile/-1Dfile with -points (infra).
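The '-1Dfile' reading rules above (six numbers per line, excess columns ignored, last line reused) can be sketched as follows; this is a hypothetical re-implementation of the documented behavior, not AFNI's parser:

```python
def read_1Dfile_params(text, n_subbricks):
    """Return one (roll, pitch, yaw, dS, dL, dP) tuple per
    sub-brick.  Six numbers are read per line, extra columns are
    ignored, and the last line is reused when the file has fewer
    lines than the dataset has sub-bricks."""
    lines = [l for l in text.splitlines() if l.strip()]
    params = [tuple(map(float, l.split()[:6])) for l in lines]
    while len(params) < n_subbricks:
        params.append(params[-1])    # last line covers the rest
    return params[:n_subbricks]      # excess rows are ignored
```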

POINTS OPTIONS (instead of datasets):
------------------------------------
 -points
 -origin xo yo zo
   These options specify that instead of rotating a dataset, you will
   be rotating a set of (x,y,z) points.  The points are read from stdin.
   * If -origin is given, the point (xo,yo,zo) is used as the center for
     the rotation.
   * If -origin is NOT given, and a dataset is given at the end of the
     command line, then the center of the dataset brick is used as
     (xo,yo,zo).  The dataset will NOT be rotated if -points is given.
   * If -origin is NOT given, and NO dataset is given at the end of the
     command line, then xo=yo=zo=0 is assumed.  You probably don't
     want this.
   * (x,y,z) points are read from stdin as 3 ASCII-formatted numbers per
     line, as in 3dUndump.  Any succeeding numbers on input lines will
     be copied to the output, which will be written to stdout.
   * The input (x,y,z) coordinates are taken in the same order as the
     axes of the input dataset.  If there is no input dataset, then
       negative x = R  positive x = L  }
       negative y = A  positive y = P  } e.g., the DICOM order
       negative z = I  positive z = S  }
     One way to dump some (x,y,z) coordinates from a dataset is:

      3dmaskdump -mask something+tlrc -o xyzfilename -noijk
                 '3dcalc( -a dset+tlrc -expr x -datum float )'
                 '3dcalc( -a dset+tlrc -expr y -datum float )'
                 '3dcalc( -a dset+tlrc -expr z -datum float )'

     (All of this should be on one command line.)
============================================================================

Example: 3drotate -prefix Elvis -bshift 10S 0 0 -rotate 30R 0 0 Sinatra+orig

This will shift the input 10 mm in the superior direction, followed by a 30
degree rotation about the Right-to-Left axis (i.e., nod the head forward).

============================================================================
Algorithm: The rotation+shift is decomposed into 4 1D shearing operations
           (a 3D generalization of Paeth's algorithm).  The interpolation
           (i.e., resampling) method used for these shears can be controlled
           by the following options:

 -Fourier = Use a Fourier method (the default: most accurate; slowest).
 -NN      = Use the nearest neighbor method.
 -linear  = Use linear (1st order polynomial) interpolation (least accurate).
 -cubic   = Use the cubic (3rd order) Lagrange polynomial method.
 -quintic = Use the quintic (5th order) Lagrange polynomial method.
 -heptic  = Use the heptic (7th order) Lagrange polynomial method.
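To illustrate the accuracy trade-off between these resampling methods (a toy 1D sketch, not the shear code used by 3drotate), compare a fractional shift done by nearest-neighbor versus linear interpolation:

```python
import numpy as np

def shift_nn(x, d):
    """Nearest-neighbor resample of 1D array x shifted by d voxels;
    samples falling outside the array become zero."""
    idx = np.round(np.arange(len(x)) - d).astype(int)
    ok = (idx >= 0) & (idx < len(x))
    out = np.zeros(len(x))
    out[ok] = np.asarray(x, float)[idx[ok]]
    return out

def shift_linear(x, d):
    """Linear (1st-order) resample: each output sample is a
    weighted sum of the two bracketing input voxels."""
    x = np.asarray(x, float)
    pos = np.arange(len(x)) - d
    i0 = np.floor(pos).astype(int)
    w = pos - i0
    out = np.zeros(len(x))
    for k, (i, f) in enumerate(zip(i0, w)):
        if 0 <= i < len(x):
            out[k] += (1 - f) * x[i]
        if 0 <= i + 1 < len(x):
            out[k] += f * x[i + 1]
    return out
```

Nearest-neighbor preserves values exactly but quantizes position; linear interpolation keeps subvoxel position but smears a spike across neighbors.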

 -Fourier_nopad = Use the Fourier method WITHOUT padding
                * If you don't mind - or even want - the wraparound effect
                * Works best if dataset grid size is a power of 2, possibly
                  times powers of 3 and 5, in all directions being altered.
                * The main use would seem to be to un-wraparound poorly
                  reconstructed images, by using a shift; for example:
                   3drotate -ashift 30A 0 0 -Fourier_nopad -prefix Anew A+orig
                * This option is also available in the Nudge Dataset plugin.

 -clipit  = Clip results to input brick range [now the default].
 -noclip  = Don't clip results to input brick range.

 -zpad n  = Zeropad around the edges by 'n' voxels during rotations
              (these edge values will be stripped off in the output)
        N.B.: Unlike to3d, in this program '-zpad' adds zeros in
               all directions.
        N.B.: The environment variable AFNI_ROTA_ZPAD can be used
               to set a nonzero default value for this parameter.

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dRowFillin
Usage: 3dRowFillin [options] dataset
Extracts 1D rows in the given direction from a 3D dataset,
searches for blank (zero) regions, and fills them in if
the blank region isn't too large and it is flanked by
the same value on either edge.  For example:
     input row = 0 1 2 0 0 2 3 0 3 0 0 4 0
    output row = 0 1 2 2 2 2 3 3 3 0 0 4 0
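The fill rule above can be sketched in Python (a hypothetical re-implementation of the documented behavior, not AFNI's C code): a run of zeros is filled only when it is no longer than maxgap and both flanking values match.

```python
def fill_row(row, maxgap=9):
    """Fill zero gaps of length <= maxgap whose two flanking
    nonzero values are equal; gaps touching the row ends or with
    mismatched flanks are left alone."""
    out = list(row)
    n = len(out)
    i = 0
    while i < n:
        if out[i] == 0:
            j = i
            while j < n and out[j] == 0:
                j += 1              # gap occupies indices i..j-1
            if 0 < i and j < n and out[i - 1] == out[j] \
               and (j - i) <= maxgap:
                for k in range(i, j):
                    out[k] = out[i - 1]
            i = j
        else:
            i += 1
    return out
```

Applied to the documented input row, this reproduces the documented output row.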

OPTIONS:
 -maxgap N  = set the maximum length of a blank region that
                will be filled in to 'N' [default=9].
 -dir D     = set the direction of fill to 'D', which can
                be one of the following:
                  A-P, P-A, I-S, S-I, L-R, R-L, x, y, z
                The first 6 are anatomical directions;
                 the last 3 refer to the dataset's
                 internal axes [no default value].
 -prefix P  = set the prefix to 'P' for the output dataset.

N.B.: If the input dataset has more than one sub-brick,
      only the first one will be processed.

The intention of this program is to let you fill in slice gaps
made when drawing ROIs with the 'Draw Dataset' plugin.  If you
draw every 5th coronal slice, say, then you could fill in using
  3dRowFillin -maxgap 4 -dir A-P -prefix fredfill fred+orig

This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dSkullStrip
Usage: A program to extract the brain from surrounding
  tissue in MRI T1-weighted images. The largely automated
  process consists of three steps:
  1- Preprocessing of volume to remove gross spatial image 
  non-uniformity artifacts and reposition the brain in
  a reasonable manner for convenience.
  2- Expand a spherical surface iteratively until it envelops
  the brain. This is a modified version of the BET algorithm:
     Fast robust automated brain extraction, 
      by Stephen M. Smith, HBM 2002 v 17:3 pp 143-155
    Modifications include the use of:
     . outer brain surface
     . expansion driven by data inside and outside the surface
     . avoidance of eyes and ventricles
     . a set of operations to avoid the clipping of certain brain
       areas and reduce leakage into the skull in heavily shaded
       data
     . two additional processing stages to ensure convergence and
       reduction of clipped areas.
  3- The creation of various masks and surfaces modeling brain
     and portions of the skull

  3dSkullStrip  < -input VOL >
             [< -o_TYPE PREFIX >] [< -prefix Vol_Prefix >] 
             [< -spatnorm >] [< -no_spatnorm >] [< -write_spatnorm >]
             [< -niter N_ITER >] [< -ld LD >] 
             [< -shrink_fac SF >] [< -var_shrink_fac >] 
             [< -no_var_shrink_fac >] [< -shrink_fac_bot_lim SFBL >]
             [< -pushout >] [< -no_pushout >] [< -exp_frac FRAC >]
             [< -touchup >] [< -no_touchup >]
             [< -fill_hole R >] [< -NN_smooth NN_SM >]
             [< -smooth_final SM >] [< -avoid_vent >] [< -no_avoid_vent >]
             [< -use_skull >] [< -no_use_skull >] 
             [< -avoid_eyes >] [< -no_avoid_eyes >] 
             [< -perc_int PERC_INT >] 
             [< -max_inter_iter MII >] [-mask_vol]
             [< -debug DBG >] [< -node_dbg NODE_DBG >]
             [< -demo_pause >]

  NOTE: Program is in Beta mode, please report bugs and strange failures
        to ziad@nih.gov

  Mandatory parameters:
     -input VOL: Input AFNI (or AFNI readable) volume.
                 

  Optional Parameters:
     -o_TYPE PREFIX: prefix of output surface.
        where TYPE specifies the format of the surface
        and PREFIX is, well, the prefix.
        TYPE is one of: fs, 1d (or vec), sf, ply.
        More on that below.
     -prefix VOL_PREFIX: prefix of output volume.
        If not specified, the prefix is the same
        as the one used with -o_TYPE.
        The output volume is skull stripped version
        of the input volume. In the earlier version
        of the program, a mask volume was written out.
        You can still get that mask volume instead of the
        skull-stripped volume with the option -mask_vol . 
     -mask_vol: Output a mask volume instead of a skull-stripped
                volume.
                 The mask volume contains:
                 0: Voxel outside surface
                 1: Voxel just outside the surface. This means the voxel
                    center is outside the surface but inside the 
                    bounding box of a triangle in the mesh. 
                 2: Voxel intersects the surface (a triangle), but center
                    lies outside.
                 3: Voxel contains a surface node.
                 4: Voxel intersects the surface (a triangle), center lies
                    inside surface. 
                 5: Voxel just inside the surface. This means the voxel
                    center is inside the surface and inside the 
                    bounding box of a triangle in the mesh. 
                 6: Voxel inside the surface. 
     -spatnorm: (Default) Perform spatial normalization first.
                 This is a necessary step unless the volume has
                 been 'spatnormed' already.
     -no_spatnorm: Do not perform spatial normalization.
                   Use this option only when the volume 
                   has been run through the 'spatnorm' process
     -spatnorm_dxyz DXYZ: Use DXYZ for the spatial resolution of the
                           spatially normalized volume. The default 
                           is the lowest of all three dimensions.
                           For human brains, use a DXYZ of 1.0; for
                           primate brains, use the default setting.
     -write_spatnorm: Write the 'spatnormed' volume to disk.
     -niter N_ITER: Number of iterations. Default is 250
        For denser meshes, you need more iterations
        N_ITER of 750 works for LD of 50.
     -ld LD: Parameter to control the density of the surface.
             Default is 20. See CreateIcosahedron -help
             for details on this option.
     -shrink_fac SF: Parameter controlling the brain vs non-brain
             intensity threshold (tb). Default is 0.6.
              tb = (Imax - t2) SF + t2 
             where t2 is the 2 percentile value and Imax is the local
             maximum, limited to the median intensity value.
             For more information on tb, t2, etc. read the BET paper
             mentioned above. Note that in 3dSkullStrip, SF can vary across 
             iterations and might be automatically clipped in certain areas.
             SF can vary between 0 and 1.
              0: Intensities < median intensity are considered non-brain
             1: Intensities < t2 are considered non-brain
     -var_shrink_fac: Vary the shrink factor with the number of
             iterations. This reduces the likelihood of a surface
             getting stuck on large pools of CSF before reaching
             the outer surface of the brain. (Default)
     -no_var_shrink_fac: Do not use var_shrink_fac.
     -shrink_fac_bot_lim SFBL: Do not allow the varying SF to go
             below SFBL . Default 0.65. 
             This option helps reduce potential for leakage below 
             the cerebellum.
     -pushout: Consider values above each node in addition to values
               below the node when deciding on expansion. (Default)
     -no_pushout: Do not use -pushout.
     -exp_frac FRAC: Speed of expansion (see BET paper). Default is 0.1.
     -touchup: Perform touchup operations at end to include
               areas not covered by surface expansion. 
               Use -touchup -touchup for aggressive makeup.
               (Default is -touchup)
     -no_touchup: Do not use -touchup
     -fill_hole R: Fill small holes that can result from small surface
                   intersections caused by the touchup operation.
                   R is the maximum number of pixels on the side of a hole
                   that can be filled. Big holes are not filled.
                   If you use -touchup, the default R is 10. Otherwise 
                   the default is 0.
                   This is a less than elegant solution to the small
                   intersections which are usually eliminated
                   automatically. 
     -NN_smooth NN_SM: Perform Nearest Neighbor coordinate interpolation
                       every few iterations. Default is 72
     -smooth_final SM: Perform final surface smoothing after all iterations.
                       Default is 20 smoothing iterations.
                       Smoothing is done using Taubin's method, 
                       see SurfSmooth -help for detail.
     -avoid_vent: avoid ventricles. Default.
     -no_avoid_vent: Do not use -avoid_vent.
     -avoid_eyes: avoid eyes. Default
     -no_avoid_eyes: Do not use -avoid_eyes.
     -use_skull: Use outer skull to limit expansion of surface into
                 the skull due to very strong shading artifacts.
                 This option is buggy at the moment, use it only 
                 if you have leakage into skull.
     -no_use_skull: Do not use -use_skull (Default).
     -send_no_skull: Do not send the skull surface to SUMA if you are
                     using  -talk_suma
     -perc_int PERC_INT: Percentage of segments allowed to intersect
                         surface. Ideally this should be 0 (Default). 
                          However, a few surfaces might have small stubborn
                         intersections that produce a few holes.
                         PERC_INT should be a small number, typically
                         between 0 and 0.1
     -max_inter_iter N_II: Number of iterations to remove intersection
                           problems. With each iteration, the program
                           automatically increases the amount of smoothing
                           to get rid of intersections. Default is 4
     -blur_fwhm FWHM: Blur dset after spatial normalization.
                      Recommended when you have lots of CSF in brain
                      and when you have protruding gyri (finger like)
                      Recommended value is 2..4. 
     -interactive: Make the program stop at various stages in the 
                   segmentation process for a prompt from the user
                   to continue or skip that stage of processing.
                   This option is best used in conjunction with options
                   -talk_suma and -feed_afni
     -demo_pause: Pause at various step in the process to facilitate
                  interactive demo while 3dSkullStrip is communicating
                  with AFNI and SUMA. See 'Eye Candy' mode below and
                  -talk_suma option. 
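The -shrink_fac threshold formula given above, tb = (Imax - t2) SF + t2 with Imax first limited to the median intensity, can be sketched directly (an illustrative helper, not AFNI code; parameter names are ours):

```python
def brain_threshold(Imax, t2, median, SF=0.6):
    """Compute the brain/non-brain intensity threshold
    tb = (Imax - t2) * SF + t2, after limiting the local
    maximum Imax to the median intensity as documented."""
    Imax = min(Imax, median)
    return (Imax - t2) * SF + t2
```

With SF = 0 the threshold collapses to t2, and with SF = 1 it reaches the (median-limited) local maximum, so larger SF values classify more voxels as non-brain.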

 Specifying output surfaces using -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
            the name of the parent surface for the patch.
            using the -ipar_TYPE option.
            This option is only for ConvertSurface 
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.

  SUMA communication options:
      -talk_suma: Send progress with each iteration to SUMA.
      -refresh_rate rps: Maximum number of updates to SUMA per second.
                         The default is the maximum speed.
      -send_kth kth: Send the kth element to SUMA (default is 1).
                     This allows you to cut down on the number of elements
                     being sent to SUMA.
      -sh <SUMAHOST>: Name (or IP address) of the computer running SUMA.
                      This parameter is optional, the default is 127.0.0.1 
      -ni_text: Use NI_TEXT_MODE for data transmission.
      -ni_binary: Use NI_BINARY_MODE for data transmission.
                  (default is ni_binary).
      -feed_afni: Send updates to AFNI via SUMA's talk.


     -visual: Equivalent to using -talk_suma -feed_afni -send_kth 5

     -debug DBG: debug levels of 0 (default), 1, 2, 3.
        This is no Rick Reynolds debug, which is oft nicer
        than the results, but it will do.
     -node_dbg NODE_DBG: Output lots of parameters for node
                         NODE_DBG for each iteration.

  Tips:
     I ran the program with the default parameters on 200+ datasets.
     The results were quite good in all but a couple of instances; here
     are some tips on fixing trouble spots:

     Clipping in frontal areas, close to the eye balls:
        + Try -no_avoid_eyes option
     Clipping in general:
        + Use lower -shrink_fac, start with 0.5 then 0.4
     Some lobules are not included:
        + Use a denser mesh (like -ld 50) and increase iterations 
        (-niter 750). The program will take much longer to run in that case.
        + Instead of using denser meshes, you could try blurring the data 
        before skull stripping. Something like -blur_fwhm 2 did
        wonders for some of my data with the default options of 3dSkullStrip
        Blurring is a lot faster than increasing mesh density.
         + Also use a smaller -shrink_fac if you have lots of CSF between gyri.
     Massive chunks missing:
        + If brain has very large ventricles and lots of CSF between gyri, the
        ventricles will keep attracting the surface inwards. In such cases, use
        the -visual option to see what is happening and try these options to 
        reduce the severity of the problem:
            -blur_fwhm 2 -use_skull

 Eye Candy Mode: (previous restrictions removed)
  You can run BrainWarp and have it send successive iterations
 to SUMA and AFNI. This is very helpful in following the
 progression of the algorithm and determining the source
 of trouble, if any.
  Example:
     afni -niml -yesplugouts &
     suma -niml &
     3dSkullStrip -input Anat+orig -o_ply anat_brain -talk_suma -feed_afni -send_kth 5

  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dSpatNorm
**ERROR: -help is unknown option!
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dStatClust

Program:          3dStatClust 
Author:           B. Douglas Ward 
Initial Release:  08 October 1999 
Latest Revision:  15 August 2001 

Perform agglomerative hierarchical clustering for user specified 
parameter sub-bricks, for all voxels whose threshold statistic   
is above a user specified value.

Usage: 3dStatClust options datasets 
where the options are:
-prefix pname    = Use 'pname' for the output dataset prefix name.
  OR                 [default='SC']
-output pname

-session dir     = Use 'dir' for the output dataset session directory.
                     [default='./'=current working directory]
-verb            = Print out verbose output as the program proceeds.

Options for calculating distance between parameter vectors: 
   -dist_euc        = Calculate Euclidean distance between parameters 
   -dist_ind        = Statistical distance for independent parameters 
   -dist_cor        = Statistical distance for correlated parameters 
The default option is:  Euclidean distance. 

-thresh t tname  = Use threshold statistic from file tname. 
                   Only voxels whose threshold statistic is greater 
                   than t in absolute value will be considered. 
                     [If file tname contains more than 1 sub-brick, 
                     the threshold stat. sub-brick must be specified!]
-nclust n        = This specifies the maximum number of clusters for 
                   output (= number of sub-bricks in output dataset).

Command line arguments after the above are taken as parameter datasets.


INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dSurf2Vol
3dSurf2Vol - map data from a surface domain to an AFNI volume domain

  usage: 3dSurf2Vol [options] -spec SPEC_FILE -surf_A SURF_NAME \
             -grid_parent AFNI_DSET -sv SURF_VOL \
             -map_func MAP_FUNC -prefix OUTPUT_DSET

    This program is meant to take as input a pair of surfaces,
    optionally including surface data, and an AFNI grid parent
    dataset, and to output a new AFNI dataset consisting of the
    surface data mapped to the dataset grid space.  The mapping
    function determines how to map the surface values from many
    nodes to a single voxel.

    Surfaces (from the spec file) are specified using '-surf_A'
    (and '-surf_B', if a second surface is input).  If two
    surfaces are input, then the computed segments over node
    pairs will be in the direction from surface A to surface B.

    The basic form of the algorithm is:

       o for each node pair (or single node)
           o form a segment based on the xyz node coordinates,
             adjusted by any '-f_pX_XX' options
           o divide the segment up into N steps, according to 
             the '-f_steps' option
           o for each segment point
               o if the point is outside the space of the output
                 dataset, skip it
               o locate the voxel in the output dataset which
                 corresponds to this segment point
               o if the '-cmask' option was given, and the voxel
                 is outside the implied mask, skip it
               o if the '-f_index' option is by voxel, and this
                 voxel has already been considered, skip it
               o insert the surface node value, according to the
                 user-specified '-map_func' option

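As a rough illustration of that loop (a hypothetical sketch, not AFNI's implementation; it assumes a unit voxel grid and one scalar value per node pair):

```python
import numpy as np

def map_segment(node_a, node_b, value, vol_shape, f_steps=15,
                f_index="voxel", accum=None):
    """Walk f_steps evenly spaced points along the segment from
    node_a to node_b, locate each point's voxel, and record the
    value there.  With f_index='voxel' a voxel takes at most one
    contribution per segment; with 'points' every step counts,
    which weights a later average by segment occupancy."""
    if accum is None:
        accum = {}
    a, b = np.asarray(node_a, float), np.asarray(node_b, float)
    seen = set()
    for t in np.linspace(0.0, 1.0, f_steps):
        p = a + t * (b - a)
        ijk = tuple(np.floor(p).astype(int))
        if not all(0 <= c < s for c, s in zip(ijk, vol_shape)):
            continue                  # point outside the volume
        if f_index == "voxel" and ijk in seen:
            continue                  # voxel already counted
        seen.add(ijk)
        accum.setdefault(ijk, []).append(value)
    return accum

def apply_ave(accum):
    """The 'ave' mapping function: mean of all contributions
    landing in each voxel."""
    return {ijk: float(np.mean(vals)) for ijk, vals in accum.items()}
```
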
  Surface Coordinates:

      Surface coordinates are assumed to be in the Dicom
      orientation.  This information may come from the option
      pair of '-spec' and '-sv', with which the user provides
      the name of the SPEC FILE and the SURFACE VOLUME, along
      with '-surf_A' and optionally '-surf_B', used to specify
      actual surfaces by name.  Alternatively, the surface
      coordinates may come from the '-surf_xyz_1D' option.
      See these option descriptions below.

      Note that the user must provide either the three options
      '-spec', '-sv' and '-surf_A', or the single option,
      '-surf_xyz_1D'.

  Surface Data:

      Surface domain data can be input via the '-sdata_1D'
      option.  In such a case, the data is with respect to the
      input surface.  The first column of the sdata_1D file
      should be a node index, and following columns are that
      node's data.  See the '-sdata_1D' option for more info.

      If the surfaces have V values per node (pair), then the
      resulting AFNI dataset will have V sub-bricks (unless the
      user applies the '-data_expr' option).

  Mapping Functions:

      Mapping functions exist because a single volume voxel may
      be occupied by multiple surface nodes or segment points.
      Depending on how dense the surface mesh is, the number of
      steps provided by the '-f_steps' option, and the indexing
      type from '-f_index', even a voxel which is only 1 cubic
      mm in volume may have quite a few contributing points.

      The mapping function defines how multiple surface values
      are combined to get a single result in each voxel.  For
      example, the 'max' function will take the maximum of all
      surface values contributing to each given voxel.

      Current mapping functions are listed under the '-map_func'
      option, below.

------------------------------------------------------------

  examples:

    1. Map a single surface to an anatomical volume domain,
       creating a simple mask of the surface.  The output
       dataset will be fred_surf+orig, and the orientation and
       grid spacing will follow that of the grid parent.  The
       output voxels will be 1 where the surface exists, and 0
       elsewhere.

    3dSurf2Vol                       \
       -spec         fred.spec                \
       -surf_A       pial                     \
       -sv           fred_anat+orig           \
       -grid_parent  fred_anat+orig           \
       -map_func     mask                     \
       -prefix       fred_surf

    2. Map the cortical grey ribbon (between the white matter
       surface and the pial surface) to an AFNI volume, where
        the resulting volume is restricted to the mask implied by
       the -cmask option.

       Surface data will come from the file sdata_10.1D, which
       has 10 values per node, and lists only a portion of the
       entire set of surface nodes.  Each node pair will form
       a segment of 15 equally spaced points, the values from
       which will be applied to the output dataset according to
       the 'ave' filter.  Since the index is over points, each
       of the 15 points will have its value applied to the
       appropriate voxel, even multiple times.  This weights the
       resulting average by the fraction of each segment that
       occupies a given voxel.

       The output dataset will have 10 sub-bricks, according to
       the 10 values per node index in sdata_10.1D.

    3dSurf2Vol                       \
       -spec         fred.spec                               \
       -surf_A       smoothwm                                \
       -surf_B       pial                                    \
       -sv           fred_anat+orig                          \
       -grid_parent 'fred_func+orig[0]'                      \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)' \
       -sdata_1D     sdata_10.1D                             \
       -map_func     ave                                     \
       -f_steps      15                                      \
       -f_index      points                                  \
       -prefix       fred_surf_ave

    3. The inputs in this example are identical to those in
       example 2, including the surface dataset, sdata_10.1D.
       Again, the output dataset will have 10 sub-bricks.

       The surface values will be applied via the 'max_abs'
       filter, with the intention of assigning to each voxel the
       node value with the most significance.  Here, the index
       method does not matter, so it is left as the default,
       'voxel'.

       In this example, each node pair segment will be extended
       by 20% into the white matter, and by 10% outside of the
       grey matter, generating a "thicker" result.

    3dSurf2Vol                       \
       -spec         fred.spec                               \
       -surf_A       smoothwm                                \
       -surf_B       pial                                    \
       -sv           fred_anat+orig                          \
       -grid_parent 'fred_func+orig[0]'                      \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)' \
       -sdata_1D     sdata_10.1D                             \
       -map_func     max_abs                                 \
       -f_steps      15                                      \
       -f_p1_fr      -0.2                                    \
       -f_pn_fr       0.1                                    \
       -prefix       fred_surf_max_abs

    4. This is similar to example 2.  Here, the surface nodes
       (coordinates) come from 'surf_coords_2.1D'.  But these
       coordinates do not happen to be in Dicom orientation;
       they are in the same orientation as the grid parent, so
       the '-sxyz_orient_as_gpar' option is applied.

       Even though the data comes from 'sdata_10.1D', the output
       AFNI dataset will only have 1 sub-brick.  That is because
       of the '-data_expr' option.  Here, each applied surface
       value will be the average of the sines of the first 3
       data values (columns of sdata_10.1D).

    3dSurf2Vol                       \
       -surf_xyz_1D  surf_coords_2.1D                        \
       -sxyz_orient_as_gpar                                  \
       -grid_parent 'fred_func+orig[0]'                      \
       -sdata_1D     sdata_10.1D                             \
       -data_expr   '(sin(a)+sin(b)+sin(c))/3'               \
       -map_func     ave                                     \
       -f_steps      15                                      \
       -f_index      points                                  \
       -prefix       fred_surf_ave_sine

    5. In this example, voxels will get the maximum value from
       column 3 of sdata_10.1D (as usual, column 0 is used for
       node indices).  The output dataset will have 1 sub-brick.

       Here, the output dataset is forced to be of type 'short',
       regardless of what the grid parent is.  Also, there will
       be no scaling factor applied.

       To track the numbers for surface node #1234, the '-dnode'
       option has been used, along with '-debug'.  Additionally,
       '-dvoxel' is used to track the results for voxel #6789.

    3dSurf2Vol                       \
       -spec         fred.spec                               \
       -surf_A       smoothwm                                \
       -surf_B       pial                                    \
       -sv           fred_anat+orig                          \
       -grid_parent 'fred_func+orig[0]'                      \
       -sdata_1D     sdata_10.1D'[0,3]'                      \
       -map_func     max                                     \
       -f_steps      15                                      \
       -datum        short                                   \
       -noscale                                              \
       -debug        2                                       \
       -dnode        1234                                    \
       -dvoxel       6789                                    \
       -prefix       fred_surf_max

------------------------------------------------------------

  REQUIRED COMMAND ARGUMENTS:

    -spec SPEC_FILE        : SUMA spec file

        e.g. -spec fred.spec

        The surface specification file contains the list of
        mappable surfaces that are used.

        See @SUMA_Make_Spec_FS and @SUMA_Make_Spec_SF.

        Note: this option, along with '-sv', may be replaced
              by the '-surf_xyz_1D' option.

    -surf_A SURF_NAME      : specify surface A (from spec file)
    -surf_B SURF_NAME      : specify surface B (from spec file)

        e.g. -surf_A smoothwm
        e.g. -surf_A lh.smoothwm
        e.g. -surf_B lh.pial

        This parameter is used to tell the program which surfaces
        to use.  The '-surf_A' parameter is required, but the
        '-surf_B' parameter is optional.

        The surface names must uniquely match those in the spec
        file, though a sub-string match is good enough.  The
        surface names are compared with the names of the surface
        node coordinate files.

        For instance, given a spec file that has only the left
        hemisphere in it, 'pial' should produce a unique match
        with lh.pial.asc.  But if both hemispheres are included,
        then 'pial' would not be unique (matching rh.pial.asc,
        also).  In that case, 'lh.pial' would be better.

    -sv SURFACE_VOLUME     : AFNI dataset

        e.g. -sv fred_anat+orig

        This is the AFNI dataset that the surface is mapped to.
        This dataset is used for the initial surface node to xyz
        coordinate mapping, in the Dicom orientation.

        Note: this option, along with '-spec', may be replaced
              by the '-surf_xyz_1D' option.

    -surf_xyz_1D SXYZ_NODE_FILE : 1D coordinate file

        e.g. -surf_xyz_1D my_surf_coords.1D

        This ascii file contains a list of xyz coordinates to be
        considered as a surface, or 2 sets of xyz coordinates to
        be considered as a surface pair.  As usual, these points
        are assumed to be in Dicom orientation.  Another option
        for coordinate orientation is to use that of the grid
        parent dataset.  See '-sxyz_orient_as_gpar' for details.

        This option is an alternative to the pair of options, 
        '-spec' and '-sv'.

        The number of rows of the file should equal the number
        of nodes on each surface.  The number of columns should
        be either 3 for a single surface, or 6 for two surfaces.
        
        sample line of an input file (one surface):
        
        11.970287  2.850751  90.896111
        
        sample line of an input file (two surfaces):
        
        11.97  2.85  90.90    12.97  2.63  91.45
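
A two-surface coordinate file of this shape can be assembled by pasting two
single-surface files side by side.  The sketch below is illustrative only
(the file names surfA.1D, surfB.1D, and surf_pair.1D are hypothetical, not
AFNI conventions), and the awk check verifies the 6-column format:

```shell
# Hypothetical sketch: build a 6-column two-surface file for
# -surf_xyz_1D by pasting two 3-column single-surface files.
# File names here are illustrative only.
printf '11.970287 2.850751 90.896111\n' > surfA.1D
printf '12.970000 2.630000 91.450000\n' > surfB.1D
paste -d ' ' surfA.1D surfB.1D > surf_pair.1D
# every row should now have 6 columns (3 per surface)
awk 'NF != 6 { bad = 1 } END { exit bad }' surf_pair.1D && echo "OK: 6 columns"
```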
        

    -grid_parent AFNI_DSET : AFNI dataset

        e.g. -grid_parent fred_function+orig

        This dataset is used as a grid and orientation master
        for the output AFNI dataset.

    -map_func MAP_FUNC     : surface to dataset function

        e.g. -map_func max
        e.g. -map_func mask -f_steps 20

        This function applies to the case where multiple data
        points get mapped to a single voxel, which is expected
        since surfaces tend to have a much higher resolution
        than AFNI volumes.  In the general case data points come
        from each point on each partitioned line segment, with
        one segment per node pair.  Note that these segments may
        have length zero, such as when only a single surface is
        input.

        See "Mapping Functions" above, for more information.

        The current mapping function for one surface is:

          mask   : For each xyz location, set the corresponding
                   voxel to 1.

        The current mapping functions for two surfaces are as
        follows.  These descriptions are per output voxel, and
        over the values of all points mapped to a given voxel.

          mask2  : if any points are mapped to the voxel, set
                   the voxel value to 1

          ave    : average all values

          count  : count the number of mapped data points

          min    : find the minimum value from all mapped points

          max    : find the maximum value from all mapped points

          max_abs: find the number with maximum absolute value
                   (the resulting value will retain its sign)

    -prefix OUTPUT_PREFIX  : prefix for the output dataset

        e.g. -prefix anat_surf_mask

        This is used to specify the prefix of the resulting AFNI
        dataset.

  ------------------------------
  SUB-SURFACE DATA FILE OPTIONS:

    -sdata_1D SURF_DATA.1D : 1D sub-surface file, with data

        e.g. -sdata_1D roi3.1D

        This is used to specify a 1D file, which contains
        surface indices and data.  The indices refer to the
        surface(s) read from the spec file.
        
        The format of this data file is a surface index and a
        list of data values on each row.  To be a valid 1D file,
        each row must have the same number of columns.
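
As a sketch of that format (the node indices and data values below are
hypothetical), the following builds a small valid file and applies the
same-column-count rule as a check:

```shell
# Hypothetical sketch: a tiny valid -sdata_1D file.  Column 0 holds
# the surface node index; the remaining columns are data values.
cat > roi3.1D <<'EOF'
0  1.0  0.5  2.0
1  1.5  0.5  2.2
4  0.9  0.4  1.8
EOF
# to be a valid 1D file, every row must have the same column count
awk 'NR == 1 { n = NF } NF != n { bad = 1 } END { exit bad }' roi3.1D \
  && echo "valid 1D file"
```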

  ------------------------------
  OPTIONS SPECIFIC TO SEGMENT SELECTION:

    (see "The basic form of the algorithm" for more details)

    -f_steps NUM_STEPS     : partition segments

        e.g. -f_steps 10
        default: -f_steps 2   (or 1, the number of surfaces)

        This option specifies the number of points to divide
        each line segment into, before mapping the points to the
        AFNI volume domain.  The default is the number of input
        surfaces (usually, 2).  The default operation is to have
        the segment endpoints be the actual surface nodes,
        unless they are altered with the -f_pX_XX options.

    -f_index TYPE          : index by points or voxels

        e.g. -f_index points
        e.g. -f_index voxels
        default: -f_index voxels

        Along a single segment, the default operation is to
        apply only those points mapping to a new voxel.  The
        effect of the default is that a given voxel will have
        at most one value applied per node pair.

        If the user applies this option with 'points' or 'nodes'
        as the argument, then every point along the segment will
        be applied.  This may be preferred if, for example, the
        user wishes to have the average weighted by the number
        of points occupying a voxel, not just the number of node
        pair segments.

    Note: the following -f_pX_XX options are used to alter the
          locations of the segment endpoints, per node pair.
          The segments are directed, from the node on the first
          surface to the node on the second surface.  To modify
          the first endpoint, use a -f_p1_XX option, and use
          -f_pn_XX to modify the second.

    -f_p1_fr FRACTION      : offset p1 by a length fraction

        e.g. -f_p1_fr -0.2
        e.g. -f_p1_fr -0.2  -f_pn_fr 0.2

        This option moves the first endpoint, p1, by a distance
        of the FRACTION times the original segment length.  If
        the FRACTION is positive, it moves in the direction of
        the second endpoint, pn.

        In the example, p1 is moved by 20% away from pn, which
        will increase the length of each segment.

    -f_pn_fr FRACTION      : offset pn by a length fraction

        e.g. -f_pn_fr  0.2
        e.g. -f_p1_fr -0.2  -f_pn_fr 0.2

        This option moves pn by a distance of the FRACTION times
        the original segment length, in the direction from p1 to
        pn.  So a positive fraction extends the segment, and a
        negative fraction reduces it.

        In the example above, using 0.2 adds 20% to the segment
        length past the original pn.
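
The fractional-offset arithmetic can be sketched on a single axis
(the endpoint values p1=10, pn=20 are hypothetical, chosen only to make
the numbers easy to follow):

```shell
# Hypothetical sketch of the fractional-offset arithmetic on one
# axis: p1=10, pn=20, so the original segment length is 10.
awk 'BEGIN {
  p1 = 10; pn = 20; len = pn - p1
  p1_new = p1 + (-0.2) * len     # -f_p1_fr -0.2 : p1 moves away from pn
  pn_new = pn + ( 0.2) * len     # -f_pn_fr  0.2 : pn moves away from p1
  printf "p1=%g pn=%g length=%g\n", p1_new, pn_new, pn_new - p1_new
}'
# prints: p1=8 pn=22 length=14
```

So the pair -f_p1_fr -0.2 / -f_pn_fr 0.2 grows each segment by 40% of its
original length, 20% at each end.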

    -f_p1_mm DISTANCE      : offset p1 by a distance in mm.

        e.g. -f_p1_mm -1.0
        e.g. -f_p1_mm -1.0  -f_pn_mm 1.0

        This option moves p1 by DISTANCE mm., in the direction
        of pn.  If the DISTANCE is positive, the segment gets
        shorter.  If DISTANCE is negative, the segment will get
        longer.

        In the example, p1 is moved away from pn, extending the
        segment by 1 millimeter.

    -f_pn_mm DISTANCE      : offset pn by a distance in mm.

        e.g. -f_pn_mm  1.0
        e.g. -f_p1_mm -1.0  -f_pn_mm 1.0

        This option moves pn by DISTANCE mm., in the direction
        from the first point to the second.  So if DISTANCE is
        positive, the segment will get longer.  If DISTANCE is
        negative, the segment will get shorter.

        In the example, pn is moved 1 millimeter farther from
        p1, extending the segment by that distance.

  ------------------------------
  GENERAL OPTIONS:

    -cmask MASK_COMMAND    : command for dataset mask

        e.g. -cmask '-a fred_func+orig[2] -expr step(a-0.8)'

        This option will produce a mask to be applied to the
        output dataset.  Note that this mask should form a
        single sub-brick.

        This option follows the style of 3dmaskdump (since the
        code for it was, uh, borrowed from there (thanks Bob!)).

        See '3dmaskdump -help' for more information.

    -data_expr EXPRESSION  : apply expression to surface input

        e.g. -data_expr 17
        e.g. -data_expr '(a+b+c+d)/4'
        e.g. -data_expr '(sin(a)+sin(b))/2'

        This expression is applied to the list of data values
        from the surface data file input via '-sdata_1D'.  The
        expression is applied for each node or node pair, to the
        list of data values corresponding to that node.

        The letters 'a' through 'z' may be used as input, and
        refer to columns 1 through 26 of the data file (where
        column 0 is a surface node index).  The data file must
        have enough columns to support the expression.  It is
        valid to have a constant expression without a data file.
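
What the expression computes per row can be sketched with awk (the input
row and node index below are hypothetical; 'a', 'b', 'c' map to the first
three data columns, per the description above):

```shell
# Hypothetical sketch: emulate -data_expr '(sin(a)+sin(b)+sin(c))/3'
# on one input row.  awk field $1 is the node index; a, b, c are the
# first three data columns ($2, $3, $4).
printf '7 0 0 0\n' |
  awk '{ printf "node %d -> %.4f\n", $1, (sin($2)+sin($3)+sin($4))/3 }'
# prints: node 7 -> 0.0000
```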

    -datum DTYPE           : set data type in output dataset

        e.g. -datum short
        default: same as that of grid parent

        This option specifies the data type for the output AFNI
        dataset.  Valid choices are byte, short and float, which
        are 1, 2 and 4 bytes for each data point, respectively.

    -debug LEVEL           : verbose output

        e.g. -debug 2

        This option is used to print out status information 
        during the execution of the program.  Current levels are
        from 0 to 5.

    -dnode DEBUG_NODE      : extra output for that node

        e.g. -dnode 123456

        This option requests additional debug output for the
        given surface node.  This index is with respect to the
        input surface (included in the spec file, or through the
        '-surf_xyz_1D' option).

        This will have no effect without the '-debug' option.

    -dvoxel DEBUG_VOXEL    : extra output for that voxel

        e.g. -dvoxel 234567

        This option requests additional debug output for the
        given volume voxel.  This 1-D index is with respect to
        the output AFNI dataset.  One good way to find a voxel
        index to supply is from output via the '-dnode' option.

        This will have no effect without the '-debug' option.

    -hist                  : show revision history

        Display module history over time.

    -help                  : show this help

        If you can't get help here, please get help somewhere.

    -noscale               : no scale factor in output dataset

        If the output dataset is an integer type (byte, shorts
        or ints), then the output dataset may end up with a
        scale factor attached (see 3dcalc -help).  With this
        option, the output dataset will not be scaled.

    -sxyz_orient_as_gpar   : assume gpar orientation for sxyz

        This option specifies that the surface coordinate points
        in the '-surf_xyz_1D' option file have the orientation
        of the grid parent dataset.

        When the '-surf_xyz_1D' option is applied the surface
        coordinates are assumed to be in Dicom orientation, by
        default.  This '-sxyz_orient_as_gpar' option overrides
        the Dicom default, specifying that the node coordinates
        are in the same orientation as the grid parent dataset.

        See the '-surf_xyz_1D' option for more information.

    -version               : show version information

        Show version and compile date.

------------------------------------------------------------

  Author: R. Reynolds  - version  3.6a (March 22, 2005)

                (many thanks to Z. Saad and R.W. Cox)

This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dSurfMask
Usage: 3dSurfMask <-i_TYPE SURFACE> <-prefix PREFIX>
                <-grid_parent GRID_VOL> [-sv SURF_VOL] [-mask_only]
 
  Creates a volumetric dataset that marks the inside
    of the surface.  Voxels in the output dataset are set to the following
  values:
     0: Voxel outside surface
     1: Voxel just outside the surface. This means the voxel
        center is outside the surface but inside the 
        bounding box of a triangle in the mesh. 
     2: Voxel intersects the surface (a triangle), but center lies outside.
     3: Voxel contains a surface node.
     4: Voxel intersects the surface (a triangle), center lies inside surface. 
     5: Voxel just inside the surface. This means the voxel
        center is inside the surface and inside the 
        bounding box of a triangle in the mesh. 
     6: Voxel inside the surface. 

  Mandatory Parameters:
     -i_TYPE SURFACE: Specify input surface.
             You can also use -t* and -spec and -surf
             methods to input surfaces. See below
             for more details.
     -prefix PREFIX: Prefix of output dataset.
     -grid_parent GRID_VOL: Specifies the grid for the
                  output volume.
  Other parameters:
     -mask_only: Produce an output dataset where voxels
                 are 1 inside the surface and 0 outside,
                 instead of the more nuanced output above.

 Specifying input surfaces using -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           SF: Caret/SureFit format
           BV: BrainVoyager format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
       If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dSurfMaskDump
3dSurfMaskDump - dump ascii dataset values corresponding to a surface

This program is used to display AFNI dataset values that
correspond to a surface.  The surface points are mapped to xyz
coordinates, according to the SURF_VOL (surface volume) AFNI
dataset.  These coordinates are then matched to voxels in other
AFNI datasets.  So given any other AFNI dataset, this program
can output all of the sub-brick values that correspond to each
of the surface locations.  The user also has options to mask
regions for output.

Different mappings are allowed from the surface(s) to the grid
parent dataset.  The mapping function is a required parameter to
the program.

The current mapping functions are:

    ave       : for each node pair (from 2 surfaces), output the
                average of all voxel values along that line
                segment
    mask      : each node in the surface is mapped to one voxel
    midpoint  : for each node pair (from 2 surfaces), output the
                dataset value at their midpoint (in xyz space)

  usage: 3dSurfMaskDump [options] -spec SPEC_FILE -sv SURF_VOL \
                    -grid_parent AFNI_DSET -map_func MAP_FUNC

  examples:

    3dSurfMaskDump                       \
       -spec         fred.spec                \
       -sv           fred_anat+orig           \
       -grid_parent  fred_anat+orig           \
       -map_func     mask                     \

    3dSurfMaskDump                       \
       -spec         fred.spec                               \
       -sv           fred_anat+orig                          \
       -grid_parent 'fred_epi+orig[0]'                       \
       -map_func     mask                                    \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)' \
       -debug        2                                       \
       -output       fred_surf_vals.txt

    3dSurfMaskDump                       \
       -spec         fred.spec                               \
       -sv           fred_anat+orig                          \
       -grid_parent  fred_anat+orig                          \
       -map_func     ave                                     \
       -m2_steps     10                                      \
       -m2_index     nodes                                   \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)' \
       -output       fred_surf_ave.txt


  REQUIRED COMMAND ARGUMENTS:

    -spec SPEC_FILE        : SUMA spec file

        e.g. -spec fred.spec

        The surface specification file contains the list of
        mappable surfaces that are used.

        See @SUMA_Make_Spec_FS and @SUMA_Make_Spec_SF.

    -sv SURFACE_VOLUME     : AFNI dataset

        e.g. -sv fred_anat+orig

        This is the AFNI dataset that the surface is mapped to.
        This dataset is used for the initial surface node to xyz
        coordinate mapping, in the Dicom orientation.

    -grid_parent AFNI_DSET : AFNI dataset

        e.g. -grid_parent fred_function+orig

        This dataset is used as a grid and orientation master
        for the output.  Output coordinates are based upon
        this dataset.

    -map_func MAP_FUNC     : surface to dataset function

        e.g. -map_func ave
        e.g. -map_func ave -m2_steps 10
        e.g. -map_func ave -m2_steps 10 -m2_index nodes
        e.g. -map_func mask
        e.g. -map_func midpoint

        Given one or more surfaces, there are many ways to
        select voxel locations, and to select corresponding
        values for the output dataset.  Some of the functions
        will have separate options.

        The current mapping functions are:

          ave      : Given 2 related surfaces, for each node
                     pair, output the average of the dataset
                     values located along the segment joining
                     those nodes.

                  -m2_steps NUM_STEPS :

                     The -m2_steps option may be added here, to
                     specify the number of points to use in the
                     average.  The default and minimum is 2.

                     e.g.  -map_func ave -m2_steps 10
                     default: -m2_steps 2

                  -m2_index TYPE :

                      The -m2_index option is used to specify
                      whether the average is taken by indexing
                      over distinct nodes or over distinct voxels.

                     For instance, when taking the average along
                     one node pair segment using 10 node steps,
                     perhaps 3 of those nodes may occupy one
                     particular voxel.  In this case, does the
                     user want the voxel counted only once, or 3
                     times?  Each case makes sense.
                     
                     Note that this will only make sense when
                     used along with the '-m2_steps' option.
                     
                     Possible values are "nodes", "voxels".
                     The default value is voxels.  So each voxel
                     along a segment will be counted only once.
                     
                     e.g.  -m2_index nodes
                     e.g.  -m2_index voxels
                     default: -m2_index voxels

          mask     : For each surface xyz location, output the
                     dataset values of each sub-brick.

          midpoint : Given 2 related surfaces, for each node
                     pair, output the dataset value with xyz
                     coordinates at the midpoint of the nodes.

  options:

    -cmask MASK_COMMAND    : (optional) command for dataset mask

        e.g. -cmask '-a fred_func+orig[2] -expr step(a-0.8)'

        This option will produce a mask to be applied to the
        output dataset.  Note that this mask should form a
        single sub-brick.

        This option follows the style of 3dmaskdump (since the
        code for it was, uh, borrowed from there (thanks Bob!)).

        See '3dmaskdump -help' for more information.

    -debug LEVEL           :  (optional) verbose output

        e.g. -debug 2

        This option is used to print out status information 
        during the execution of the program.  Current levels are
        from 0 to 4.

    -help                  : show this help

        If you can't get help here, please get help somewhere.

    -outfile OUTPUT_FILE   : specify a file for the output

        e.g. -outfile some_output_file
        e.g. -outfile mask_values_over_dataset.txt
        e.g. -outfile stderr
        default: write to stdout

        This is where the user will specify which file they want
        the output to be written to.  Note that the output file
        should not yet exist.

        Two special (valid) cases are stdout and stderr, either
        of which may be specified.

    -noscale               : no scale factor in output dataset

        If the output dataset is an integer type (byte, shorts
        or ints), then the output dataset may end up with a
        scale factor attached (see 3dcalc -help).  With this
        option, the output dataset will not be scaled.

    -version               : show version information

        Show version and compile date.


  Author: R. Reynolds  - version 2.3 (July 21, 2003)

                (many thanks to Z. Saad and R.W. Cox)

This page auto-generated on Thu Aug 25 16:49:37 EDT 2005
3dTagalign
Usage: 3dTagalign [options] dset
Rotates/translates dataset 'dset' to be aligned with the master,
using the tagsets embedded in their .HEAD files.

Options:
 -master mset  = Use dataset 'mset' as the master dataset
                   [this is a nonoptional option]

 -nokeeptags   = Don't put transformed locations of dset's tags
                   into the output dataset [default = keep tags]

 -matvec mfile = Write the matrix+vector of the transformation to
                   file 'mfile'.  This can be used as input to the
                   '-matvec_out2in' option of 3dWarp, if you want
                   to align other datasets in the same way (e.g.,
                   functional datasets).

 -rotate       = Compute the best transformation as a rotation + shift.
                   This is the default.

 -affine       = Compute the best transformation as a general affine
                   map rather than just a rotation + shift.  In all
                   cases, the transformation from input to output
                   coordinates is of the form
                      [out] = [R] [in] + [V]
                   where [R] is a 3x3 matrix and [V] is a 3-vector.
                   By default, [R] is computed as a proper (det=1)
                   rotation matrix (3 parameters).  The '-affine'
                   option says to fit [R] as a general matrix
                   (9 parameters).
           N.B.: An affine transformation can rotate, rescale, and
                   shear the volume.  Be sure to look at the dataset
                   before and after to make sure things are OK.

 -rotscl       = Compute transformation as a rotation times an isotropic
                   scaling; that is, [R] is an orthogonal matrix times
                   a scalar.
           N.B.: '-affine' and '-rotscl' do unweighted least squares.

 -prefix pp    = Use 'pp' as the prefix for the output dataset.
                   [default = 'tagalign']
 -verb         = Print progress reports
 -dummy        = Don't actually rotate the dataset, just compute
                   the transformation matrix and vector.  If
                   '-matvec' is used, the mfile will be written.

Nota Bene:
* Cubic interpolation is used.  The transformation is carried out
  using the same methods as program 3dWarp.

Author: RWCox - 16 Jul 2000, etc.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dTcat
Concatenate sub-bricks from input datasets into one big 3D+time dataset.
Usage: 3dTcat options
where the options are:
     -prefix pname = Use 'pname' for the output dataset prefix name.
 OR  -output pname     [default='tcat']

     -session dir  = Use 'dir' for the output dataset session directory.
                       [default='./'=current working directory]
     -glueto fname = Append bricks to the end of the 'fname' dataset.
                       This command is an alternative to the -prefix 
                       and -session commands.                        
     -dry          = Execute a 'dry run'; that is, only print out
                       what would be done.  This is useful when
                       combining sub-bricks from multiple inputs.
     -verb         = Print out some verbose output as the program
                       proceeds (-dry implies -verb).
                       Using -verb twice results in quite lengthy output.
     -rlt          = Remove linear trends in each voxel time series loaded
                       from each input dataset, SEPARATELY.  That is, the
                       data from each dataset is detrended separately.
                       At least 3 sub-bricks from a dataset must be input
                       for this option to apply.
             Notes: (1) -rlt removes the least squares fit of 'a+b*t'
                          to each voxel time series; this means that
                          the mean is removed as well as the trend.
                          This effect makes it impractical to compute
                          the % Change using AFNI's internal FIM.
                    (2) To have the mean of each dataset time series added
                          back in, use this option in the form '-rlt+'.
                          In this case, only the slope 'b*t' is removed.
                    (3) To have the overall mean of all dataset time
                          series added back in, use this option in the
                          form '-rlt++'.  In this case, 'a+b*t' is removed
                          from each input dataset separately, and the
                          mean of all input datasets is added back in at
                          the end.  (This option will work properly only
                          if all input datasets use at least 3 sub-bricks!)
                    (4) -rlt can be used on datasets that contain shorts
                          or floats, but not on complex- or byte-valued
                          datasets.

Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
   'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.

SUB-BRICK SELECTION:
You can also add a sub-brick selection list after the end of the
dataset name.  This allows only a subset of the sub-bricks to be
included into the output (by default, all of the input dataset
is copied into the output).  A sub-brick selection list looks like
one of the following forms:
  fred+orig[5]                     ==> use only sub-brick #5
  fred+orig[5,9,17]                ==> use #5, #9, and #17
  fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
  fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
Sub-brick indexes start at 0.  You can use the character '$'
to indicate the last sub-brick in a dataset; for example, you
can select every third sub-brick by using the selection list
  fred+orig[0..$(3)]

NOTES:
* The TR and other time-axis properties are taken from the
  first input dataset that is itself 3D+time.  If no input
  datasets contain such information, then TR is set to 1.0.
  This can be altered using the 3drefit program.

* The sub-bricks are output in the order specified, which may
  not be the order in the original datasets.  For example, using
     fred+orig[0..$(2),1..$(2)]
  will cause the sub-bricks in fred+orig to be output into the
  new dataset in an interleaved fashion.  Using
     fred+orig[$..0]
  will reverse the order of the sub-bricks in the output.
  If the -rlt option is used, the sub-bricks selected from each
  input dataset will be re-ordered into the output dataset, and
  then this sequence will be detrended.

* You can use the '3dinfo' program to see how many sub-bricks
  a 3D+time or a bucket dataset contains.

* The '$', '(', ')', '[', and ']' characters are special to
  the shell, so you will have to escape them.  This is most easily
  done by putting the entire dataset plus selection list inside
  single quotes, as in 'fred+orig[5..7,9]'.

* You may wish to use the 3drefit program on the output dataset
  to modify some of the .HEAD file parameters.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dTcorrelate
Usage: 3dTcorrelate [options] xset yset
Computes the correlation coefficient between corresponding voxel
time series in two input 3D+time datasets 'xset' and 'yset', and
stores the output in a new 1 sub-brick dataset.

Options:
  -pearson  = Correlation is the normal Pearson (product moment)
                correlation coefficient [default].
  -spearman = Correlation is the Spearman (rank) correlation
                coefficient.
  -quadrant = Correlation is the quadrant correlation coefficient.

  -polort m = Remove polynomial trend of order 'm', for m=-1..3.
                [default is m=1; removal is by least squares].
                Using m=-1 means no detrending; this is only useful
                for data/information that has been pre-processed.

  -ort r.1D = Also detrend using the columns of the 1D file 'r.1D'.
                Only one -ort option can be given.  If you want to use
                more than one, create a temporary file using 1dcat.

  -autoclip = Clip off low-intensity regions in the two datasets,
  -automask =  so that the correlation is only computed between
               high-intensity (presumably brain) voxels.  The
               intensity level is determined the same way that
               3dClipLevel works.

  -prefix p = Save output into dataset with prefix 'p'
               [default prefix is 'Tcorr'].

Notes:
 * The output dataset is functional bucket type, with one
    sub-brick, stored in floating point format.
 * Because both time series are detrended prior to correlation,
    the results will not be identical to using FIM or FIM+ to
    calculate correlations (whose ideal vector is not detrended).
 * This is a quick hack for Mike Beauchamp.  Thanks for you-know-what.
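A minimal usage sketch, assuming two existing 3D+time datasets (the names 'xset+orig', 'yset+orig', and the output prefix are hypothetical):

```shell
# Voxel-wise Spearman correlation between two 3D+time datasets,
# restricted to high-intensity (presumably brain) voxels and with
# linear detrending (the -polort default).
3dTcorrelate -spearman -automask -polort 1 \
             -prefix corr_xy xset+orig yset+orig
```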

-- RWCox - Aug 2001
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dThreetoRGB
Usage #1: 3dThreetoRGB [options] dataset
Usage #2: 3dThreetoRGB [options] dataset1 dataset2 dataset3

Converts 3 sub-bricks of input to an RGB-valued dataset.
* If you have 1 input dataset, then sub-bricks [0..2] are
   used to form the RGB components of the output.
* If you have 3 input datasets, then the [0] sub-brick of
   each is used to form the RGB components, respectively.
* RGB datasets have 3 bytes per voxel, with values ranging
   from 0..255.

Options:
  -prefix ppp = Write output into dataset with prefix 'ppp'.
                 [default='rgb']
  -scale fac  = Multiply input values by 'fac' before using
                 as RGB [default=1].  If you have floating
                 point inputs in range 0..1, then using
                 '-scale 255' would make a lot of sense.
  -mask mset  = Only output nonzero values where the mask
                 dataset 'mset' is nonzero.
  -fim        = Write result as a 'fim' type dataset.
                 [this is the default]
  -anat       = Write result as an anatomical type dataset.
Notes:
* Input datasets must be byte-, short-, or float-valued.
* You might calculate the component datasets using 3dcalc.
* You can also create RGB-valued datasets in to3d, using
   2D raw PPM image files as input, or the 3Dr: format.
* RGB fim overlays are transparent in AFNI in voxels where all
   3 bytes are zero - that is, it won't overlay solid black.
* At present, there is limited support for RGB datasets.
   About the only thing you can do is display them in 2D
   slice windows in AFNI.

-- RWCox - April 2002
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dToutcount
Usage: 3dToutcount [options] dataset
Calculates the number of 'outliers' in a 3D+time dataset, at each
time point, and writes the results to stdout.

Options:
 -mask mset = Only count voxels in the mask dataset.
 -qthr q    = Use 'q' instead of 0.001 in the calculation
                of alpha (below): 0 < q < 1.

 -autoclip }= Clip off 'small' voxels (as in 3dClipLevel);
 -automask }=   you can't use this with -mask!

 -range     = Print out median+3.5*MAD of outlier count with
                each time point; use with 1dplot as in
                3dToutcount -range fred+orig | 1dplot -stdin -one
 -save ppp  = Make a new dataset, and save the outlier Q in each
                voxel, where Q is calculated from voxel value v by
                Q = -log10(qg(abs((v-median)/(sqrt(PI/2)*MAD))))
             or Q = 0 if v is 'close' to the median (not an outlier).
                That is, 10**(-Q) is roughly the p-value of value v
                under the hypothesis that the v's are iid normal.
              The prefix of the new dataset (float format) is 'ppp'.

 -polort nn = Detrend each voxel time series with polynomials of
                order 'nn' prior to outlier estimation.  Default
                value of nn=0, which means just remove the median.
                Detrending is done with L1 regression, not L2.

OUTLIERS are defined as follows:
 * The trend and MAD of each time series are calculated.
   - MAD = median absolute deviation
         = median absolute value of time series minus trend.
 * In each time series, points that are 'far away' from the
    trend are called outliers, where 'far' is defined by
      alpha * sqrt(PI/2) * MAD
      alpha = qginv(0.001/N) (inverse of reversed Gaussian CDF)
      N     = length of time series
 * Some outliers are to be expected, but if a large fraction of the
    voxels in a volume are called outliers, you should investigate
    the dataset more fully.

Since the results are written to stdout, you probably want to redirect
them to a file or another program, as in this example:
  3dToutcount -automask v1+orig | 1dplot -stdin

NOTE: also see program 3dTqual for a similar quality check.

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dTqual
Usage: 3dTqual [options] dataset
Computes a `quality index' for each sub-brick in a 3D+time dataset.
The output is a 1D time series with the index for each sub-brick.
The results are written to stdout.

Note that small values of the index are 'good', indicating that
the sub-brick is not very different from the norm.  The purpose
of this program is to provide a crude way of screening FMRI
time series for sporadic abnormal images, such as might be
caused by large subject head motion or scanner glitches.

Do not take the results of this program too literally.  It
is intended as a GUIDE to help you find data problems, and no
more.  It is not an assurance that the dataset is good, and
it may indicate problems where nothing is wrong.

Sub-bricks with index values much higher than others should be
examined for problems.  How you determine what 'much higher' means
is mostly up to you.  I suggest graphical inspection of the indexes
(cf. EXAMPLE, infra).  As a guide, the program will print (stderr)
the median quality index and the range median-3.5*MAD .. median+3.5*MAD
(MAD=Median Absolute Deviation).  Values well outside this range might
be considered suspect; if the quality index were normally distributed,
then values outside this range would occur only about 1% of the time.

OPTIONS:
  -spearman = Quality index is 1 minus the Spearman (rank)
               correlation coefficient of each sub-brick
               with the median sub-brick.
               [This is the default method.]
  -quadrant = Similar to -spearman, but using 1 minus the
               quadrant correlation coefficient as the
               quality index.

  -autoclip = Clip off low-intensity regions in the median sub-brick,
  -automask =  so that the correlation is only computed between
               high-intensity (presumably brain) voxels.  The
               intensity level is determined the same way that
               3dClipLevel works.  This prevents the vast number
               of nearly 0 voxels outside the brain from biasing
               the correlation coefficient calculations.

  -clip val = Clip off values below 'val' in the median sub-brick.

  -range    = Print the median-3.5*MAD and median+3.5*MAD values
               out with EACH quality index, so that they
               can be plotted (cf. Example, infra).
     Notes: * These values are printed to stderr in any case.
            * This is only useful for plotting with 1dplot.
            * The lower value median-3.5*MAD is never allowed
                to go below 0.

EXAMPLE:
   3dTqual -range -automask fred+orig | 1dplot -one -stdin
will calculate the time series of quality indexes and plot them
to an X11 window, along with the median+/-3.5*MAD bands.

NOTE: cf. program 3dToutcount for a somewhat different quality check.

-- RWCox - Aug 2001
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dTSgen

Program: 3dTSgen 
Author:  B. Douglas Ward 
Date:    09 September 1999 

This program generates an AFNI 3d+time data set.  The time series for 
each voxel is generated according to a user specified signal + noise  
model.                                                              

Usage:                                                                
3dTSgen                                                               
-input fname       fname = filename of prototype 3d + time data file  
[-inTR]            set the TR of the created timeseries to be the TR  
                     of the prototype dataset                         
                     [The default is to compute with TR = 1.]         
                     [The model functions are called for a  ]         
                     [time grid of 0, TR, 2*TR, 3*TR, ....  ]         
-signal slabel     slabel = name of (non-linear) signal model         
-noise  nlabel     nlabel = name of (linear) noise model              
-sconstr k c d     constraints for kth signal parameter:              
                      c <= gs[k] <= d                                 
-nconstr k c d     constraints for kth noise parameter:               
                      c+b[k] <= gn[k] <= d+b[k]                       
-sigma  s          s = std. dev. of additive Gaussian noise           
[-voxel num]       screen output for voxel #num                       
-output fname      fname = filename of output 3d + time data file     
                                                                      
                                                                      
The following commands generate individual AFNI 1 sub-brick datasets: 
                                                                      
[-scoef k fname]   write kth signal parameter gs[k];                  
                     output 'fim' is written to prefix filename fname 
[-ncoef k fname]   write kth noise parameter gn[k];                   
                     output 'fim' is written to prefix filename fname 
                                                                      
                                                                      
The following commands generate one AFNI 'bucket' type dataset:       
                                                                      
[-bucket n prefixname]   create one AFNI 'bucket' dataset containing  
                           n sub-bricks; n=0 creates default output;  
                           output 'bucket' is written to prefixname   
The mth sub-brick will contain:                                       
[-brick m scoef k label]   kth signal parameter regression coefficient
[-brick m ncoef k label]   kth noise parameter regression coefficient 
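A hedged sketch of a basic invocation (the dataset names are hypothetical, and the model labels are placeholders that must name signal/noise models actually known to 3dTSgen):

```shell
# Simulate a 3D+time dataset on the grid of a prototype dataset,
# using its TR, a (placeholder) signal + noise model, and additive
# Gaussian noise with standard deviation 2.0.
3dTSgen -input proto+orig \
        -inTR \
        -signal SignalModel -noise NoiseModel \
        -sigma 2.0 \
        -output simulated
```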
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dTshift
Usage: 3dTshift [options] dataset
Shifts voxel time series from the input dataset so that the separate
slices are aligned to the same temporal origin.  By default, uses the
slicewise shifting information in the dataset header (from the 'tpattern'
input to program to3d).

Method:  detrend -> interpolate -> retrend (optionally)

The input dataset can have a sub-brick selector attached, as documented
in '3dcalc -help'.

The output dataset time series will be interpolated from the input to
the new temporal grid.  This may not be the best way to analyze your
data, but it can be convenient.

Warnings:
* Please recall the phenomenon of 'aliasing': frequencies above 1/(2*TR) can't
  be properly interpolated.  For most 3D FMRI data, this means that cardiac
  and respiratory effects will not be treated properly by this program.

* The images at the beginning of a high-speed FMRI imaging run are usually
  of a different quality than the later images, due to transient effects
  before the longitudinal magnetization settles into a steady-state value.
  These images should not be included in the interpolation!  For example,
  if you wish to exclude the first 4 images, then the input dataset should
  be specified in the form 'prefix+orig[4..$]'.  Alternatively, you can
  use the '-ignore ii' option.

* It seems to be best to use 3dTshift before using 3dvolreg.

Options:
  -verbose      = print lots of messages while program runs

  -TR ddd       = use 'ddd' as the TR, rather than the value
                  stored in the dataset header using to3d.
                  You may attach the suffix 's' for seconds,
                  or 'ms' for milliseconds.

  -tzero zzz    = align each slice to time offset 'zzz';
                  the value of 'zzz' must be between the
                  minimum and maximum slice temporal offsets.
            N.B.: The default alignment time is the average
                  of the 'tpattern' values (either from the
                  dataset header or from the -tpattern option)

  -slice nnn    = align each slice to the time offset of slice
                  number 'nnn' - only one of the -tzero and
                  -slice options can be used.

  -prefix ppp   = use 'ppp' for the prefix of the output file;
                  the default is 'tshift'.

  -ignore ii    = Ignore the first 'ii' points. (Default is ii=0.)
                  The first ii values will be unchanged in the output
                  (regardless of the -rlt option).  They also will
                  not be used in the detrending or time shifting.

  -rlt          = Before shifting, the mean and linear trend
  -rlt+         = of each time series is removed.  The default
                  action is to add these back in after shifting.
                  -rlt  means to leave both of these out of the output
                  -rlt+ means to add only the mean back into the output
                  (cf. '3dTcat -help')

  -Fourier = Use a Fourier method (the default: most accurate; slowest).
  -linear  = Use linear (1st order polynomial) interpolation (least accurate).
  -cubic   = Use the cubic (3rd order) Lagrange polynomial interpolation.
  -quintic = Use the quintic (5th order) Lagrange polynomial interpolation.
  -heptic  = Use the heptic (7th order) Lagrange polynomial interpolation.

  -tpattern ttt = use 'ttt' as the slice time pattern, rather
                  than the pattern in the input dataset header;
                  'ttt' can have any of the values that would
                  go in the 'tpattern' input to to3d, described below:

   alt+z = altplus   = alternating in the plus direction
   alt+z2            = alternating, starting at slice #1 instead of #0
   alt-z = altminus  = alternating in the minus direction
   alt-z2            = alternating, starting at slice #nz-2 instead of #nz-1
   seq+z = seqplus   = sequential in the plus direction
   seq-z = seqminus  = sequential in the minus direction
   @filename         = read temporal offsets from 'filename'

  For example if nz = 5 and TR = 1000, then the inter-slice
  time is taken to be dt = TR/nz = 200.  In this case, the
  slices are offset in time by the following amounts:

             S L I C E   N U M B E R
   tpattern    0   1   2   3   4   Comment
   --------- --- --- --- --- ---   -------------------------------
   altplus     0 600 200 800 400   Alternating in the +z direction
   alt+z2    400   0 600 200 800   Alternating, but starting at #1
   altminus  400 800 200 600   0   Alternating in the -z direction
   alt-z2    800 200 600   0 400   Alternating, starting at #nz-2 
   seqplus     0 200 400 600 800   Sequential  in the +z direction
   seqminus  800 600 400 200   0   Sequential  in the -z direction

  If @filename is used for tpattern, then nz ASCII-formatted numbers
  are read from the file.  These indicate the time offsets for each
  slice. For example, if 'filename' contains
     0 600 200 800 400
  then this is equivalent to 'altplus' in the above example.
  (nz = number of slices in the input dataset)

N.B.: if you are using -tpattern, make sure that the units supplied
      match the units of TR in the dataset header, or provide a
      new TR using the -TR option.

As a test of how well 3dTshift interpolates, you can take a dataset
that was created with '-tpattern alt+z', run 3dTshift on it, and
then run 3dTshift on the new dataset with '-tpattern alt-z' -- the
effect will be to reshift the dataset back to the original time
grid.  Comparing the original dataset to the shifted-then-reshifted
output will show where 3dTshift does a good job and where it does
a bad job.
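The round-trip test described above might be scripted as follows (the dataset names are hypothetical; 'epi+orig' is assumed to have been created by to3d with '-tpattern alt+z'):

```shell
# Shift to a common temporal origin using the tpattern in the header,
# then reshift with the mirror pattern, as suggested above.
3dTshift -prefix shifted epi+orig
3dTshift -tpattern alt-z -prefix reshifted shifted+orig
# 'reshifted+orig' should match 'epi+orig' up to interpolation error.
```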

-- RWCox - 31 October 1999

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dTsmooth
Usage: 3dTsmooth [options] dataset
Smooths each voxel time series in a 3D+time dataset and produces
as output a new 3D+time dataset (e.g., lowpass filter in time).

General Options:
  -prefix ppp  = Sets the prefix of the output dataset to be 'ppp'.
                   [default = 'smooth']
  -datum type  = Coerce output dataset to be stored as the given type.
                   [default = input data type]

Three Point Filtering Options [07 July 1999]
--------------------------------------------
The following options define the smoothing filter to be used.
All these filters  use 3 input points to compute one output point:
  Let a = input value before the current point
      b = input value at the current point
      c = input value after the current point
           [at the left end, a=b; at the right end, c=b]

  -lin = 3 point linear filter: 0.15*a + 0.70*b + 0.15*c
           [This is the default smoother]
  -med = 3 point median filter: median(a,b,c)
  -osf = 3 point order statistics filter:
           0.15*min(a,b,c) + 0.70*median(a,b,c) + 0.15*max(a,b,c)

  -3lin m = 3 point linear filter: 0.5*(1-m)*a + m*b + 0.5*(1-m)*c
              Here, 'm' is a number strictly between 0 and 1.

General Linear Filtering Options [03 Mar 2001]
----------------------------------------------
  -hamming N  = Use N point Hamming or Blackman windows.
  -blackman N     (N must be odd and bigger than 1.)
  -custom coeff_filename.1D (odd # of coefficients must be in a 
                             single column in ASCII file)
   (-custom added Jan 2003)
    WARNING: If you use long filters, you do NOT want to include the
             large early images in the program.  Do something like
                3dTsmooth -hamming 13 'fred+orig[4..$]'
             to eliminate the first 4 images (say).
 The following options determine how the general filters treat
 time points before the beginning and after the end:
  -EXTEND = BEFORE: use the first value; AFTER: use the last value
  -ZERO   = BEFORE and AFTER: use zero
  -TREND  = compute a linear trend, and extrapolate BEFORE and AFTER
 The default is -EXTEND.  These options do NOT affect the operation
 of the 3 point filters described above, which always use -EXTEND.
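Two usage sketches (the dataset name 'fred+orig' is hypothetical), following the warning above about skipping the early pre-steady-state images for long filters:

```shell
# 3-point median filter over each voxel time series.
3dTsmooth -med -prefix smooth_med fred+orig

# 11-point Hamming window with linear-trend extrapolation at the ends,
# skipping the first 4 images via a sub-brick selector.
3dTsmooth -hamming 11 -TREND -prefix smooth_ham 'fred+orig[4..$]'
```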

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dTstat
Usage: 3dTstat [options] dataset
Computes one or more voxel-wise statistics for a 3D+time dataset
and stores them in a bucket dataset.

Options:
 -mean   = compute mean of input voxels [DEFAULT]
 -slope  = compute mean slope of input voxels vs. time
 -stdev  = compute standard deviation of input voxels
             [N.B.: this is computed after    ]
             [      the slope has been removed]
 -cvar   = compute coefficient of variation of input
             voxels = stdev/fabs(mean)
   **N.B.: You can add NOD to the end of the above 2
           options to turn off detrending, as in
             -stdevNOD or -cvarNOD

 -MAD    = compute MAD (median absolute deviation) of
             input voxels = median(|voxel-median(voxel)|)
             [N.B.: the trend is NOT removed for this]
 -DW    = compute Durbin-Watson Statistic of
             input voxels
             [N.B.: the trend is removed for this]
 -median = compute median of input voxels  [undetrended]
 -min    = compute minimum of input voxels [undetrended]
 -max    = compute maximum of input voxels [undetrended]
 -absmax    = compute absolute maximum of input voxels [undetrended]
 -argmin    = index of minimum of input voxels [undetrended]
 -argmax    = index of maximum of input voxels [undetrended]
 -argabsmax    = index of absolute maximum of input voxels [undetrended]

 -prefix p = use string 'p' for the prefix of the
               output dataset [DEFAULT = 'stat']
 -datum d  = use data type 'd' for the type of storage
               of the output, where 'd' is one of
               'byte', 'short', or 'float' [DEFAULT=float]
 -autocorr n = compute autocorrelation function and return
               first n coefficients
 -autoreg n = compute autoregression coefficients and return
               first n coefficients
    [N.B.: -autocorr 0 and/or -autoreg 0 will return coefficients
           equal to the length of the input data]

The output is a bucket dataset.  The input dataset
may use a sub-brick selection list, as in program 3dcalc.
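A brief sketch (the dataset name is hypothetical), combining statistics options with a sub-brick selector to drop the first few images:

```shell
# Voxel-wise mean and (detrended) standard deviation, ignoring the
# first 4 sub-bricks; single quotes protect the selector characters.
3dTstat -mean -stdev -prefix stats 'epi+orig[4..$]'
```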

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dttest
Gosset (Student) t-test of sets of 3D datasets
Usage 1: 3dttest [options] -set1 datasets ... -set2 datasets ...
   for comparing the means of 2 sets of datasets (voxel by voxel).

Usage 2: 3dttest [options] -base1 bval -set2 datasets ...
   for comparing the mean of 1 set of datasets against a constant.

OUTPUTS:
 A single dataset is created that is the voxel-by-voxel difference
 of the mean of set2 minus the mean of set1 (or minus 'bval').
 The output dataset will be of the intensity+Ttest ('fitt') type.
 The t-statistic at each voxel can be used as an interactive
 thresholding tool in AFNI.

t-TESTING OPTIONS:
  -set1 datasets ... = Specifies the collection of datasets to put into
                         the first set. The mean of set1 will be tested
                         with a 2-sample t-test against the mean of set2.
                   N.B.: -set1 and -base1 are mutually exclusive!
  -base1 bval        = 'bval' is a numerical value that the mean of set2
                         will be tested against with a 1-sample t-test.
  -set2 datasets ... = Specifies the collection of datasets to put into
                         the second set.  There must be at least 2 datasets
                         in each of set1 (if used) and set2.
  -paired            = Specifies the use of a paired-sample t-test to
                         compare set1 and set2.  If this option is used,
                         set1 and set2 must have the same cardinality.
                   N.B.: A paired test is intended for use when the set1 and set2
                         dataset function values may be pairwise correlated.
                         If they are in fact uncorrelated, this test has less
                         statistical 'power' than the unpaired (default) t-test.
                         This loss of power is the price that is paid for
                         insurance against pairwise correlations.
  -unpooled          = Specifies that the variance estimates for set1 and
                         set2 be computed separately (not pooled together).
                         This only makes sense if -paired is NOT given.
                   N.B.: If this option is used, the number of degrees
                         of freedom per voxel is a variable, rather
                         than a constant.
  -dof_prefix ddd    = If '-unpooled' is also used, then a dataset with
                         prefix 'ddd' will be created that contains the
                         degrees of freedom (DOF) in each voxel.
                         You can convert the t-value in the -prefix
                         dataset to a z-score using the -dof_prefix dataset
                         using commands like so:
           3dcalc -a 'pname+orig[1]' -b ddd+orig \
                  -datum float -prefix ddd_zz -expr 'fitt_t2z(a,b)'
           3drefit -substatpar 0 fizt ddd_zz+orig
                         At present, AFNI is incapable of directly dealing
                         with datasets whose DOF parameter varies between
                         voxels.  Converting to a z-score (with no parameters)
                         is one way of getting around this difficulty.
  -workmem mega      = 'mega' specifies the number of megabytes of RAM
                         to use for statistical workspace.  It defaults
                         to 12.  The program will run faster if this is
                         larger (see the NOTES section below).

The -base1 or -set1 command line switches must follow all other options
(including those described below) except for the -set2 switch.

INPUT EDITING OPTIONS: The same as are available in 3dmerge.

OUTPUT OPTIONS: these options control the output files.
  -session  dirname  = Write output into given directory (default=./)
  -prefix   pname    = Use 'pname' for the output dataset prefix
                       (default=tdif)
  -datum    type     = Use 'type' to store the output difference
                       in the means; 'type' may be short or float.
                       How the default is determined is described
                       in the notes below.

NOTES:
 ** To economize on memory, 3dttest makes multiple passes through
      the input datasets.  On each pass, the entire editing process
      will be carried out again.  For efficiency's sake, it is
      better to carry out the editing using 3dmerge to produce
      temporary datasets, and then run 3dttest on them.  This applies
      with particular force if a 'blurring' option is used.
      Note also that editing a dataset requires that it be read into
      memory in its entirety (so that the disk file is not altered).
      This will increase the memory needs of the program far beyond
      the level set by the -workmem option.
 ** The input datasets are specified by their .HEAD files,
      but their .BRIK files must exist also! This program cannot
      'warp-on-demand' from other datasets.
 ** This program cannot deal with time-dependent or complex-valued datasets!
      By default, the output dataset function values will be shorts if the
      first input dataset is byte- or short-valued; otherwise they will be
      floats.  This behavior may be overridden using the -datum option.
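Two hedged sketches of the usage forms at the top of this help (all dataset and prefix names are hypothetical); note the rule above that -set1/-base1 must follow the other options, with -set2 last:

```shell
# Usage 1: unpaired two-sample t-test, group A vs. group B.
3dttest -prefix AvsB \
        -set1 subjA1+tlrc subjA2+tlrc subjA3+tlrc \
        -set2 subjB1+tlrc subjB2+tlrc subjB3+tlrc

# Usage 2: one-sample t-test of group B against a constant of 0.
3dttest -prefix Bvs0 -base1 0 \
        -set2 subjB1+tlrc subjB2+tlrc subjB3+tlrc
```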

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
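
As a rough illustration of the '[3..5]' sub-brick selector form, a hypothetical helper might expand it into an explicit index list (this is a sketch, not AFNI's actual parser, which also supports strides and other forms):

```python
import re

# Hypothetical helper: expand a simple AFNI-style sub-brick selector
# such as "[0,3..5]" into an explicit index list.  AFNI's real parser
# handles more forms than this sketch does.
def expand_subbrick_selector(sel):
    indices = []
    for part in sel.strip("[]").split(","):
        m = re.match(r"^(\d+)\.\.(\d+)$", part)
        if m:
            indices.extend(range(int(m.group(1)), int(m.group(2)) + 1))
        else:
            indices.append(int(part))
    return indices

print(expand_subbrick_selector("[3..5]"))    # [3, 4, 5]
print(expand_subbrick_selector("[0,3..5]"))  # [0, 3, 4, 5]
```
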
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dUndump
Usage: 3dUndump [options] infile ...
Assembles a 3D dataset from an ASCII list of coordinates and
(optionally) values.

Options:
  -prefix ppp  = 'ppp' is the prefix for the output dataset
                   [default = undump].
  -master mmm  = 'mmm' is the master dataset, whose geometry
    *OR*           will determine the geometry of the output.
  -dimen I J K = Sets the dimensions of the output dataset to
                   be I by J by K voxels.  (Each I, J, and K
                   must be >= 2.)  This option can be used to
                   create a dataset of a specific size for test
                   purposes, when no suitable master exists.
          ** N.B.: Exactly one of -master or -dimen must be given.
  -mask kkk    = This option specifies a mask dataset 'kkk', which
                   will control which voxels are allowed to get
                   values set.  If the mask is present, only
                   voxels that are nonzero in the mask can be
                   set in the new dataset.
                   * A mask can be created with program 3dAutomask.
                   * Combining a mask with sphere insertion makes
                     a lot of sense (to me, at least).
  -datum type  = 'type' determines the voxel data type of the
                   output, which may be byte, short, or float
                   [default = short].
  -dval vvv    = 'vvv' is the default value stored in each
                   input voxel that does not have a value
                   supplied in the input file [default = 1].
  -fval fff    = 'fff' is the fill value, used for each voxel
                   in the output dataset that is NOT listed
                   in the input file [default = 0].
  -ijk         = Coordinates in the input file are (i,j,k) index
       *OR*        triples, as might be output by 3dmaskdump.
  -xyz         = Coordinates in the input file are (x,y,z)
                   spatial coordinates, in mm.  If neither
                    -ijk nor -xyz is given, the default is -ijk.
          ** N.B.: -xyz can only be used with -master. If -dimen
                   is used to specify the size of the output dataset,
                   (x,y,z) coordinates are not defined (until you
                   use 3drefit to define the spatial structure).
  -srad rrr    = Specifies that a sphere of radius 'rrr' will be
                   filled about each input (x,y,z) or (i,j,k) voxel.
                   If the radius is not given, or is 0, then each
                   input data line sets the value in only one voxel.
                   * If '-master' is used, then 'rrr' is in mm.
                   * If '-dimen' is used, then 'rrr' is in voxels.
  -orient code = Specifies the coordinate order used by -xyz.
                   The code must be 3 letters, one each from the pairs
                   {R,L} {A,P} {I,S}.  The first letter gives the
                   orientation of the x-axis, the second the orientation
                   of the y-axis, the third the z-axis:
                     R = right-to-left         L = left-to-right
                     A = anterior-to-posterior P = posterior-to-anterior
                     I = inferior-to-superior  S = superior-to-inferior
                   If -orient isn't used, then the coordinate order of the
                   -master dataset is used to interpret (x,y,z) inputs.
          ** N.B.: If -dimen is used (which implies -ijk), then the
                   only use of -orient is to specify the axes ordering
                   of the output dataset.  If -master is used instead,
                   the output dataset's axes ordering is the same as the
                   -master dataset's, regardless of -orient.

Input File Format:
 The input file(s) are ASCII files, with one voxel specification per
 line.  A voxel specification is 3 numbers (-ijk or -xyz coordinates),
 with an optional 4th number giving the voxel value.  For example:

   1 2 3 
   3 2 1 5
   5.3 6.2 3.7
   // this line illustrates a comment

 The first line puts a voxel (with value given by -dval) at point
 (1,2,3).  The second line puts a voxel (with value 5) at point (3,2,1).
 The third line puts a voxel (with value given by -dval) at point
 (5.3,6.2,3.7).  If -ijk is in effect, and fractional coordinates
 are given, they will be rounded to the nearest integers; for example,
 the third line would be equivalent to (i,j,k) = (5,6,4).
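
Since the input format is plain ASCII, such a file is easy to generate from a script.  A minimal Python sketch (the filename undump_pts.1D is hypothetical), including the nearest-integer rounding applied under -ijk:

```python
# Write a small 3dUndump-style input file (hypothetical filename),
# then show the nearest-integer rounding applied to fractional
# coordinates when -ijk is in effect.
lines = ["1 2 3", "3 2 1 5", "5.3 6.2 3.7", "// a comment line"]
with open("undump_pts.1D", "w") as f:
    f.write("\n".join(lines) + "\n")

def round_ijk(x, y, z):
    # Fractional (i,j,k) coordinates are rounded to the nearest integers.
    return (round(x), round(y), round(z))

print(round_ijk(5.3, 6.2, 3.7))  # (5, 6, 4)
```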

Notes:
* This program creates a 1 sub-brick file.  You can 'glue' multiple
   files together using 3dbucket or 3dTcat to make multi-brick datasets.
* If an input filename is '-', then stdin is used for input.
* By default, the output dataset is of type '-fim', unless the -master
   dataset is an anat type. You can change the output type using 3drefit.
* You could use program 1dcat to extract specific columns from a
   multi-column rectangular file (e.g., to get a specific sub-brick
   from the output of 3dmaskdump), and use the output of 1dcat as input
   to this program.
* [19 Feb 2004] The -mask and -srad options were added this day.
   Also, a fifth value on an input line, if present, is taken as a
   sphere radius to be used for that input point only.  Thus, input
      3.3 4.4 5.5 6.6 7.7
   means to put the value 6.6 into a sphere of radius 7.7 mm centered
   about (x,y,z)=(3.3,4.4,5.5).

-- RWCox -- October 2000
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dUniformize

Program: 3dUniformize 
Author:  B. D. Ward 
Initial Release:  28 January 2000 
Latest Revision:  16 April 2003 

This program corrects for image intensity non-uniformity.

Usage: 
3dUniformize  
-anat filename    Filename of anat dataset to be corrected            
                                                                      
[-quiet]          Suppress output to screen                           
                                                                      
-prefix pname     Prefix name for file to contain corrected image     

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dVol2Surf
3dVol2Surf - map data from a volume domain to a surface domain

  usage: 3dVol2Surf [options] -spec SPEC_FILE -sv SURF_VOL \
                    -grid_parent AFNI_DSET -map_func MAP_FUNC

This program is used to map data values from an AFNI volume
dataset to a surface dataset.  A filter may be applied to the
volume data to produce the value(s) for each surface node.

The surface and volume domains are spatially matched via the
'surface volume' AFNI dataset.  This gives each surface node xyz
coordinates, which are then matched to the input 'grid parent'
dataset.  This grid parent is an AFNI dataset containing the
data values destined for output.

Typically, two corresponding surfaces will be input (via the
spec file and the '-surf_A' and '-surf_B' options), along with
a mapping function and relevant options.  The mapping function
will act as a filter over the values in the AFNI volume.

Note that an alternative to using a second surface with the
'-surf_B' option is to define the second surface by using the
normals from the first surface.  By default, the second surface
would be defined at a distance of 1mm along the normals, but the
user may modify the applied distance (and direction).  See the
'-use_norms' and '-norm_len' options for more details.

For each pair of corresponding surface nodes, let NA be the node
on surface A (such as a white/grey boundary) and NB be the
corresponding node on surface B (such as a pial surface).  The
filter is applied to the volume data values along the segment
from NA to NB (consider the average or maximum as examples of
filters).

Note: if either endpoint of a segment is outside the grid parent
      volume, that node (pair) will be skipped.

Note: surface A corresponds to the required '-surf_A' argument,
      while surface B corresponds to '-surf_B'.

By default, this segment only consists of the endpoints, NA and
NB (the actual nodes on the two surfaces).  However the number
of evenly spaced points along the segment may be specified with
the -f_steps option, and the actual locations of NA and NB may
be altered with any of the -f_pX_XX options, covered below.

As an example, for each node pair, one could output the average
value from some functional dataset along a segment of 10 evenly
spaced points, where the segment endpoints are defined by the
xyz coordinates of the nodes.  This is example 3, below.

The mapping function (i.e. filter) is a required parameter to
the program.

Brief descriptions of the current mapping functions are as
follows.  These functions are defined over a segment of points.

    ave       : output the average of all voxel values along the
                segment
    mask      : output the voxel value for the trivial case of a
                segment - defined by a single surface point
    median    : output the median value from the segment
    midpoint  : output the dataset value at the segment midpoint
    mode      : output the mode of the values along the segment
    max       : output the maximum volume value over the segment
    max_abs   : output the dataset value with max abs over seg
    min       : output the minimum volume value over the segment
    seg_vals  : output _all_ volume values over the segment (one
                sub-brick only)
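
These filters are simple reductions over the values sampled along a segment.  A Python sketch of their definitions (illustration only; the function name is hypothetical, the trivial 'mask' case is omitted, and seg_vals' one-sub-brick restriction is ignored):

```python
import statistics

# Hypothetical sketch of the mapping filters applied to the values
# sampled along one node-pair segment (not AFNI source code).
def apply_filter(name, vals):
    if name == "ave":      return sum(vals) / len(vals)
    if name == "median":   return statistics.median(vals)
    if name == "midpoint": return vals[len(vals) // 2]
    if name == "mode":     return statistics.mode(vals)
    if name == "max":      return max(vals)
    if name == "min":      return min(vals)
    if name == "max_abs":  return max(vals, key=abs)
    if name == "seg_vals": return vals
    raise ValueError("unknown filter: %s" % name)

seg = [2.0, -7.0, 4.0, 4.0, 1.0]
print(apply_filter("ave", seg))      # 0.8
print(apply_filter("max_abs", seg))  # -7.0
print(apply_filter("mode", seg))     # 4.0
```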

  --------------------------------------------------

  examples:

    1. Apply a single surface mask to output volume values over
       each surface node.  Output is one value per sub-brick
       (per surface node).

    3dVol2Surf                                \
       -spec         fred.spec                \
       -surf_A       smoothwm                 \
       -sv           fred_anat+orig           \
       -grid_parent  fred_anat+orig           \
       -map_func     mask                     \
       -out_1D       fred_anat_vals.1D

    2. Apply a single surface mask to output volume values over
       each surface node.  In this case restrict input to the
       mask implied by the -cmask option.  Supply additional
       debug output, and more for surface node 1874

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -sv           fred_anat+orig                           \
       -grid_parent 'fred_epi+orig[0]'                        \
       -cmask       '-a fred_func+orig[2] -expr step(a-0.6)'  \
       -map_func     mask                                     \
       -debug        2                                        \
       -dnode        1874                                     \
       -out_niml     fred_epi_vals.niml

    3. Given a pair of related surfaces, for each node pair,
       break the connected line segment into 10 points, and
       compute the average dataset value over those points.
       Since the index is nodes, each of the 10 points will be
       part of the average.  This could be changed so that only
       values from distinct volume voxels are considered (by
       changing the -f_index from nodes to voxels).  Restrict
       input voxels to those implied by the -cmask option.
       Output is one average value per sub-brick (per surface
       node).

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -out_1D       fred_func_ave.1D

    4. Similar to example 3, but restrict the output columns to
       only node indices and values (i.e. skip 1dindex, i, j, k
       and vals).

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -skip_col_1dindex                                      \
       -skip_col_i                                            \
       -skip_col_j                                            \
       -skip_col_k                                            \
       -skip_col_vals                                         \
       -out_1D       fred_func_ave_short.1D

    5. Similar to example 3, but each of the node pair segments
       has grown by 10% on the inside of the first surface,
       and 20% on the outside of the second.  This is a 30%
       increase in the length of each segment.  To shorten the
       node pair segment, use a '+' sign for p1 and a '-' sign
       for pn.
       As an interesting side note, '-f_p1_fr 0.5 -f_pn_fr -0.5'
       would give a zero length vector identical to that of the
       'midpoint' filter.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      voxels                                   \
       -f_p1_fr      -0.1                                     \
       -f_pn_fr      0.2                                      \
       -out_1D       fred_func_ave2.1D

    6. Similar to example 3, instead of computing the average
       across each segment (one average per sub-brick), output
       the volume value at _every_ point across the segment.
       The output here would be 'f_steps' values per node pair,
       though the output could again be restricted to unique
       voxels along each segment with '-f_index voxels'.
       Note that only sub-brick 0 will be considered here.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     seg_vals                                 \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -out_1D       fred_func_segvals_10.1D

    7. Similar to example 6, but make sure there is output for
       every node pair in the surfaces.  Since it is expected
       that some nodes are out of bounds (meaning that they lie
       outside the domain defined by the grid parent dataset),
       the '-oob_value' option is added to include a default
       value of 0.0 in such cases.  And since it is expected
       that some node pairs are "out of mask" (meaning that
       their resulting segment lies entirely outside the cmask),
       the '-oom_value' was added to output the same default
       value of 0.0.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -surf_B       pial                                     \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_func+orig                           \
       -cmask        '-a fred_func+orig[2] -expr step(a-0.6)' \
       -map_func     seg_vals                                 \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -oob_value    0.0                                      \
       -oom_value    0.0                                      \
       -out_1D       fred_func_segvals_10_all.1D

    8. This is a basic example of calculating the average along
       each segment, but where the segment is produced by only
       one surface, along with its set of surface normals.  The
       segments will be 2.5 mm in length.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_anat+orig                           \
       -use_norms                                             \
       -norm_len     2.5                                      \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -out_1D       fred_anat_norm_ave.2.5.1D

    9. This is the same as example 8, but where the surface
       nodes are restricted to the range 1000..1999 via the
       options '-first_node' and '-last_node'.

    3dVol2Surf                                                \
       -spec         fred.spec                                \
       -surf_A       smoothwm                                 \
       -sv           fred_anat+orig                           \
       -grid_parent  fred_anat+orig                           \
       -first_node   1000                                     \
       -last_node    1999                                     \
       -use_norms                                             \
       -norm_len     2.5                                      \
       -map_func     ave                                      \
       -f_steps      10                                       \
       -f_index      nodes                                    \
       -out_1D       fred_anat_norm_ave.2.5.1D

  --------------------------------------------------

  REQUIRED COMMAND ARGUMENTS:

    -spec SPEC_FILE        : SUMA spec file

        e.g. -spec fred.spec

        The surface specification file contains the list of
        mappable surfaces that are used.

        See @SUMA_Make_Spec_FS and @SUMA_Make_Spec_SF.

    -surf_A SURF_NAME      : name of surface A (from spec file)
    -surf_B SURF_NAME      : name of surface B (from spec file)

        e.g. -surf_A smoothwm
        e.g. -surf_A lh.smoothwm
        e.g. -surf_B lh.pial

        This is used to specify which surface(s) will be used by
        the program.  The '-surf_A' parameter is required, as it
        specifies the first surface.  The '-surf_B' parameter
        specifies an optional second surface, and so is not
        required.

        Note that any need for '-surf_B' may be fulfilled using
        the '-use_norms' option.

        Note that any name provided must be in the spec file,
        uniquely matching the name of a surface node file (such
        as lh.smoothwm.asc, for example).  Note that if both
        hemispheres are represented in the spec file, then there
        may be both lh.pial.asc and rh.pial.asc, for instance.
        In such a case, 'pial' would not uniquely determine a
        surface, but the name 'lh.pial' would.

    -sv SURFACE_VOLUME     : AFNI volume dataset

        e.g. -sv fred_anat+orig

        This is the AFNI dataset that the surface is mapped to.
        This dataset is used for the initial surface node to xyz
        coordinate mapping, in the Dicom orientation.

    -grid_parent AFNI_DSET : AFNI volume dataset

        e.g. -grid_parent fred_function+orig

        This dataset is used as a grid and orientation master
        for the output (i.e. it defines the volume domain).
        It is also the source of the output data values.

    -map_func MAP_FUNC     : filter for values along the segment

        e.g. -map_func ave
        e.g. -map_func ave -f_steps 10
        e.g. -map_func ave -f_steps 10 -f_index nodes

        The current mapping function for 1 surface is:

          mask     : For each surface xyz location, output the
                     dataset values of each sub-brick.

        Most mapping functions are defined for 2 related input
        surfaces (such as white/grey boundary and pial).  For
        each node pair, the function will be performed on the
        values from the 'grid parent dataset', and along the
        segment connecting the nodes.

          ave      : Output the average of the dataset values
                     along the segment.

          max      : Output the maximum dataset value along the
                     connecting segment.

          max_abs  : Output the dataset value with the maximum
                     absolute value along the segment.

          median   : Output the median of the dataset values
                     along the connecting segment.

          midpoint : Output the dataset value with xyz
                     coordinates at the midpoint of the nodes.

          min      : Output the minimum dataset value along the
                     connecting segment.

          mode     : Output the mode of the dataset values along
                     the connecting segment.

          seg_vals : Output all of the dataset values along the
                     connecting segment.  Here, only sub-brick
                     number 0 will be considered.

  ------------------------------

  options specific to functions on 2 surfaces:

          -f_steps NUM_STEPS :

                     Use this option to specify the number of
                     evenly spaced points along each segment.
                     The default is 2 (i.e. just use the two
                     surface nodes as endpoints).

                     e.g.     -f_steps 10
                     default: -f_steps 2

          -f_index TYPE :

                     This option specifies whether to use all
                     segment point values in the filter (using
                     the 'nodes' TYPE), or to use only those
                     corresponding to unique volume voxels (by
                      using the 'voxels' TYPE).

                     For instance, when taking the average along
                     one node pair segment using 10 node steps,
                     perhaps 3 of those nodes may occupy one
                     particular voxel.  In this case, does the
                     user want the voxel counted only once, or 3
                     times?  Each way makes sense.
                     
                     Note that this will only make sense when
                     used along with the '-f_steps' option.
                     
                     Possible values are "nodes", "voxels".
                     The default value is voxels.  So each voxel
                     along a segment will be counted only once.
                     
                     e.g.  -f_index nodes
                     e.g.  -f_index voxels
                     default: -f_index voxels

          -f_keep_surf_order :

                      Deprecated.

                     See required arguments -surf_A and -surf_B,
                     above.

          Note: The following -f_pX_XX options are used to alter
                the lengths and locations of the computational
                segments.  Recall that by default, segments are
                defined using the node pair coordinates as
                endpoints.  And the direction from p1 to pn is
                from the inner surface to the outer surface.

          -f_p1_mm DISTANCE :

                     This option is used to specify a distance
                     in millimeters to add to the first point of
                     each line segment (in the direction of the
                     second point).  DISTANCE can be negative
                     (which would set p1 to be farther from pn
                     than before).

                     For example, if a computation is over the
                     grey matter (from the white matter surface
                     to the pial), and it is wished to increase
                     the range by 1mm, set this DISTANCE to -1.0
                     and the DISTANCE in -f_pn_mm to 1.0.

                     e.g.  -f_p1_mm -1.0
                     e.g.  -f_p1_mm -1.0 -f_pn_mm 1.0

          -f_pn_mm DISTANCE :

                     Similar to -f_p1_mm, this option is used
                     to specify a distance in millimeters to add
                     to the second point of each line segment.
                     Note that this is in the same direction as
                     above, from point p1 to point pn.
                     
                     So a positive DISTANCE, for this option,
                     would set pn to be farther from p1 than
                     before, and a negative DISTANCE would set
                     it to be closer.

                     e.g.  -f_pn_mm 1.0
                     e.g.  -f_p1_mm -1.0 -f_pn_mm 1.0

          -f_p1_fr FRACTION :

                     Like the -f_pX_mm options above, this
                     is used to specify a change to point p1, in
                     the direction of point pn, but the change
                     is a fraction of the original distance,
                     not a pure change in millimeters.
                     
                     For example, suppose one wishes to do a
                     computation based on the segments spanning
                     the grey matter, but to add 20% to either
                     side.  Then use -0.2 and 0.2:

                     e.g.  -f_p1_fr -0.2
                     e.g.  -f_p1_fr -0.2 -f_pn_fr 0.2

          -f_pn_fr FRACTION :

                     See -f_p1_fr above.  Note again that the
                     FRACTION is in the direction from p1 to pn.
                     So to extend the segment past pn, this
                     FRACTION will be positive (and to reduce
                     the segment back toward p1, this -f_pn_fr
                     FRACTION will be negative).

                     e.g.  -f_pn_fr 0.2
                     e.g.  -f_p1_fr -0.2 -f_pn_fr 0.2

                     Just for entertainment, one could reverse
                     the order that the segment points are
                     considered by adjusting p1 to be pn, and
                     pn to be p1.  This could be done by adding
                     a fraction of 1.0 to p1 and by subtracting
                     a fraction of 1.0 from pn.

                     e.g.  -f_p1_fr 1.0 -f_pn_fr -1.0
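
Putting -f_steps and the fractional endpoint shifts together, the segment-point arithmetic might be sketched as follows (an illustration of the descriptions above; AFNI's internal computation may differ in detail, and the function name is hypothetical):

```python
# Sketch: generate the computational segment for one node pair (NA, NB),
# applying -f_p1_fr/-f_pn_fr style fractional endpoint shifts and then
# placing -f_steps evenly spaced points.  Illustration, not AFNI source.
def segment_points(na, nb, f_steps=2, p1_fr=0.0, pn_fr=0.0):
    d = [b - a for a, b in zip(na, nb)]            # direction p1 -> pn
    p1 = [a + p1_fr * di for a, di in zip(na, d)]  # shift first endpoint
    pn = [b + pn_fr * di for b, di in zip(nb, d)]  # shift second endpoint
    pts = []
    for s in range(f_steps):
        t = s / (f_steps - 1) if f_steps > 1 else 0.0
        pts.append(tuple(p + t * (q - p) for p, q in zip(p1, pn)))
    return pts

# Default: just the two endpoints.
print(segment_points((0, 0, 0), (3, 0, 0)))  # [(0.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
# Grow 10% inside p1 and 20% outside pn, as in example 5 above:
print(segment_points((0, 0, 0), (3, 0, 0), f_steps=2, p1_fr=-0.1, pn_fr=0.2))
```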

  ------------------------------

  options specific to use of normals:

    Notes:

      o Using a single surface with its normals for segment
        creation can be done in lieu of using two surfaces.

      o Normals at surface nodes are defined by the average of
        the normals of the triangles including the given node.

      o The default normals have a consistent direction, but it
        may be the opposite of what it should be.  For this reason,
        the direction is verified by default, and may be negated
        internally.  See the '-keep_norm_dir' option for more
        information.

    -use_norms             : use normals for second surface

        Segments are usually defined by connecting corresponding
        node pairs from two surfaces.  With this option, the
        user can use one surface, along with its normals, to
        define the segments.

        By default, each segment will be 1.0 millimeter long, in
        the direction of the normal.  The '-norm_len' option
        can be used to alter this default action.

    -keep_norm_dir         : keep the direction of the normals

        Normal directions are verified by checking that the
        normals of the outermost 6 points point away from the
        center of mass.  If they point inward instead, then
        they are negated.

        This option will override the directional check, and
        use the normals as they come.

        See also -reverse_norm_dir, below.

    -norm_len LENGTH       : use LENGTH for node normals

        e.g.     -norm_len  3.0
        e.g.     -norm_len -3.0
        default: -norm_len  1.0

        For use with the '-use_norms' option, this allows the
        user to specify a directed distance to use for segments
        based on the normals.  So for each node on a surface,
        the computation segment will be from the node, in the
        direction of the normal, a signed distance of LENGTH.

        A negative LENGTH means to use the opposite direction
        from the normal.

        The '-surf_B' option is not allowed with the use of
        normals.

    -reverse_norm_dir      : reverse the normal directions

        Normal directions are verified by checking that the
        normals of the outermost 6 points point away from the
        center of mass.  If they point inward instead, then
        they are negated.

        This option will override the directional check, and
        reverse the direction of the normals as they come.

        See also -keep_norm_dir, above.
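
The normal averaging and outward-direction check described in these notes can be sketched as (hypothetical helper names; for simplicity this checks a single node, whereas the real check uses the outermost 6 points):

```python
# Sketch (not AFNI source): a node normal as the average of the normals
# of the triangles sharing that node, plus a direction check against the
# center of mass.  AFNI's actual check uses the outermost 6 points.
def node_normal(tri_normals):
    n = len(tri_normals)
    return tuple(sum(t[i] for t in tri_normals) / n for i in range(3))

def maybe_negate(normal, node_xyz, center_of_mass):
    # Negate the normal if it points toward the center of mass.
    outward = tuple(p - c for p, c in zip(node_xyz, center_of_mass))
    dot = sum(n * o for n, o in zip(normal, outward))
    return normal if dot >= 0 else tuple(-n for n in normal)

print(node_normal([(1, 0, 0), (0, 1, 0)]))             # (0.5, 0.5, 0.0)
print(maybe_negate((-1, 0, 0), (2, 0, 0), (0, 0, 0)))  # (1, 0, 0)
```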

  ------------------------------

  general options:

    -cmask MASK_COMMAND    : (optional) command for dataset mask

        e.g. -cmask '-a fred_func+orig[2] -expr step(a-0.8)'

        This option will produce a mask to be applied to the
        input AFNI dataset.  Note that this mask should form a
        single sub-brick.

        This option follows the style of 3dmaskdump (since the
        code for it was, uh, borrowed from there (thanks Bob!)).

        See '3dmaskdump -help' for more information.

    -debug LEVEL           :  (optional) verbose output

        e.g. -debug 2

        This option is used to print out status information 
        during the execution of the program.  Current levels are
        from 0 to 5.

    -first_node NODE_NUM   : skip all previous nodes

        e.g. -first_node 1000
        e.g. -first_node 1000 -last_node 1999

        Restrict surface node output to those with indices at
        least as large as NODE_NUM.  In the first example, the
        first 1000 nodes are ignored (those with indices from 0 through
        999).

        See also, '-last_node'.

    -dnode NODE_NUM        :  (optional) node for debug

        e.g. -dnode 1874

        This option is used to print out status information 
        for node NODE_NUM.

    -gp_index SUB_BRICK    : choose grid_parent sub-brick

        e.g. -gp_index 3

        This option allows the user to choose only a single
        sub-brick from the grid_parent dataset for computation.
        Note that this option is virtually useless when using
        the command-line, as the user can more directly do this
        via brick selectors, e.g. func+orig'[3]'.
        
        This option was written for the afni interface.

    -help                  : show this help

        If you can't get help here, please get help somewhere.

    -hist                  : show revision history

        Display module history over time.

        See also, -v2s_hist

    -last_node NODE_NUM    : skip all following nodes

        e.g. -last_node 1999
        e.g. -first_node 1000 -last_node 1999

        Restrict surface node output to those with indices no
        larger than NODE_NUM.  In the first example, nodes above
        1999 are ignored (those with indices from 2000 on up).

        See also, '-first_node'.

    -no_headers            : do not output column headers

        Column header lines all begin with the '#' character.
        With the '-no_headers' option, these lines will not be
        output.

    -oob_index INDEX_NUM   : specify default index for oob nodes

        e.g.     -oob_index -1
        default: -oob_index  0

        By default, nodes which lie outside the box defined by
        the -grid_parent dataset are considered out of bounds,
        and are skipped.  If an out of bounds index is provided,
        or an out of bounds value is provided, such nodes will
        not be skipped, and will have indices and values output,
        according to the -oob_index and -oob_value options.
        
        This INDEX_NUM will be used for the 1dindex field, along
        with the i, j and k indices.
        

    -oob_value VALUE       : specify default value for oob nodes

        e.g.     -oob_value -999.0
        default: -oob_value    0.0

        See -oob_index, above.
        
        VALUE will be output for nodes which are out of bounds.

    -oom_value VALUE       : specify default value for oom nodes

        e.g. -oom_value -999.0
        e.g. -oom_value    0.0

        By default, node pairs defining a segment which gets
        completely obscured by a command-line mask (see -cmask)
        are considered "out of mask", and are skipped.

        If an out of mask value is provided, such nodes will not
        be skipped.  The output indices will come from the first
        segment point, mapped to the AFNI volume.  All output vN
        values will be the VALUE provided with this option.

        This option is meaningless without a '-cmask' option.

    -out_1D OUTPUT_FILE    : specify a 1D file for the output

        e.g. -out_1D mask_values_over_dataset.1D

        This is where the user will specify which file they want
        the output to be written to.  In this case, the output
        will be in readable, column-formatted ASCII text.

        Note : the output file should not yet exist.
             : -out_1D or -out_niml must be used

    -out_niml OUTPUT_FILE  : specify a niml file for the output

        e.g. -out_niml mask_values_over_dataset.niml

        The user may use this option to get output in the form
        of a niml element, with binary data.  The output will
        contain (binary) columns of the form:

            node_index  value_0  value_1  value_2  ...

        A major difference between 1D output and niml output is
        that the value_0 column number will be 6 in the 1D case,
        but will be 2 in the niml case.  The index columns will
        not be used for niml output.

        Note : the output file should not yet exist.
             : -out_1D or -out_niml must be used

    -skip_col_nodes        : do not output node column
    -skip_col_1dindex      : do not output 1dindex column
    -skip_col_i            : do not output i column
    -skip_col_j            : do not output j column
    -skip_col_k            : do not output k column
    -skip_col_vals         : do not output vals column
    -skip_col_results      : only output ONE result column
                             (seems to make the most sense)
    -skip_col_non_results  : skip everything but the results
                             (i.e. only output result columns)

        These options are used to restrict output.  Each option
        will prevent the program from writing that column of
        output to the 1D file.

        For now, the only effect that these options can have on
        the niml output is by skipping nodes or results (all
        other columns are skipped by default).

    -v2s_hist              : show revision history for library

        Display vol2surf library history over time.

        See also, -hist

    -version               : show version information

        Show version and compile date.

  --------------------------------------------------

Output from the program defaults to 1D format, in ascii text.
For each node (pair) that results in output, there will be one
line, consisting of:

    node    : the index of the current node (or node pair)

    1dindex : the global index of the AFNI voxel used for output

              Note that for some filters (min, max, midpoint,
              median and mode) there is a specific location (and
              therefore voxel) that the result comes from.  It
              will be accurate (though median may come from one
              of two voxels that are averaged).

              For filters without a well-defined source (such as
              average or seg_vals), the 1dindex will come from
              the first point on the corresponding segment.

              Note: this will _not_ be output in the niml case.

    i j k   : the i j k indices matching 1dindex

              These indices are based on the orientation of the
              grid parent dataset.

              Note: these will _not_ be output in the niml case.

    vals    : the number of segment values applied to the filter

              Note that when -f_index is 'nodes', this will
              always be the same as -f_steps, except when using
              the -cmask option.  In that case, along a single 
              segment, some points may be in the mask, and some
              may not.

              When -f_index is 'voxels' and -f_steps is used,
              vals will often be much smaller than -f_steps.
              This is because many segment points may lie in a
              single voxel.

              Note: this will _not_ be output in the niml case.

    v0, ... : the requested output values

              These are the filtered values, usually one per
              AFNI sub-brick.  For example, if the -map_func
              is 'ave', then there will be one segment-based
              average output per sub-brick of the grid parent.

              In the case of the 'seg_vals' filter, however,
              there will be one output value per segment point
              (possibly further restricted to voxels).  Since
              output is not designed for a matrix of values,
              'seg_vals' is restricted to a single sub-brick.
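
As an illustration of the column layout described above, here is a minimal Python sketch (not part of AFNI; the sample line uses made-up values) that splits one non-header line of the default 1D output into named fields:

```python
# Minimal sketch: split one non-header line of the default 1D output
# into named fields.  Lines beginning with '#' are column headers and
# should be skipped.  Sample values below are hypothetical.

def parse_v2s_line(line):
    """Map one output line to a dict using the documented column order:
    node, 1dindex, i, j, k, vals, then result values v0..vN."""
    tok = line.split()
    return {
        "node":    int(tok[0]),
        "1dindex": int(tok[1]),
        "ijk":     tuple(int(t) for t in tok[2:5]),
        "vals":    int(tok[5]),
        "v":       [float(t) for t in tok[6:]],  # value_0 starts at column 6
    }

sample = "1874  90821  33  41  55  10  2.317"
rec = parse_v2s_line(sample)
print(rec["node"], rec["v"])
```

Note that this layout applies only to the 1D output; in the niml case the index, i/j/k, and vals columns are absent, so value_0 sits in column 2.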


  Author: R. Reynolds  - version  6.4 (June 2, 2005)

                (many thanks to Z. Saad and R.W. Cox)

This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dvolreg
Usage: 3dvolreg [options] dataset
Registers each 3D sub-brick from the input dataset to the base brick.
'dataset' may contain a sub-brick selector list.

OPTIONS:
  -verbose        Print progress reports.  Use twice for LOTS of output.
  -Fourier        Perform the alignments using Fourier interpolation.
  -heptic         Use heptic polynomial interpolation.
  -quintic        Use quintic polynomial interpolation.
  -cubic          Use cubic polynomial interpolation.
                    Default = Fourier [slowest and most accurate interpolator]
  -clipit         Clips the values in each output sub-brick to be in the same
                    range as the corresponding input volume.
                    The interpolation schemes can produce values outside
                    the input range, which is sometimes annoying.
                    [16 Apr 2002: -clipit is now the default]
  -noclip         Turns off -clipit
  -zpad n         Zeropad around the edges by 'n' voxels during rotations
                    (these edge values will be stripped off in the output)
              N.B.: Unlike to3d, in this program '-zpad' adds zeros in
                     all directions.
              N.B.: The environment variable AFNI_ROTA_ZPAD can be used
                     to set a nonzero default value for this parameter.
  -prefix fname   Use 'fname' for the output dataset prefix.
                    The program tries not to overwrite an existing dataset.
                    Default = 'volreg'.

  -base n         Sets the base brick to be the 'n'th sub-brick
                    from the input dataset (indexing starts at 0).
                    Default = 0 (first sub-brick).
  -base 'bset[n]' Sets the base brick to be the 'n'th sub-brick
                    from the dataset specified by 'bset', as in
                       -base 'elvis+orig[4]'
                    The quotes are needed because the '[]' characters
                    are special to the shell.

  -dfile dname    Save the motion parameters in file 'dname'.
                    The output is in 9 ASCII formatted columns:

                    n  roll  pitch  yaw  dS  dL  dP  rmsold rmsnew

           where:   n     = sub-brick index
                    roll  = rotation about the I-S axis }
                    pitch = rotation about the R-L axis } degrees CCW
                    yaw   = rotation about the A-P axis }
                      dS  = displacement in the Superior direction  }
                      dL  = displacement in the Left direction      } mm
                      dP  = displacement in the Posterior direction }
                   rmsold = RMS difference between input brick and base brick
                   rmsnew = RMS difference between output brick and base brick
       N.B.: If the '-dfile' option is not given, the parameters aren't saved.
       N.B.: The motion parameters are those needed to bring the sub-brick
             back into alignment with the base.  In 3drotate, it is as if
             the following options were applied to each input sub-brick:
              -rotate <ROLL>I <PITCH>R <YAW>A  -ashift <DS>S <DL>L <DP>P
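
To illustrate the 9-column layout above, here is a minimal, hypothetical Python sketch for reading one '-dfile' line (the helper and the sample values are invented for illustration, not AFNI code):

```python
# Minimal sketch: read the 9 ASCII columns written by 3dvolreg -dfile.
# The sample line below uses invented motion-parameter values.

FIELDS = ("n", "roll", "pitch", "yaw", "dS", "dL", "dP", "rmsold", "rmsnew")

def parse_dfile_line(line):
    """Return a dict mapping the documented column names to values."""
    rec = dict(zip(FIELDS, (float(v) for v in line.split())))
    rec["n"] = int(rec["n"])          # sub-brick index is an integer
    return rec

sample = "4  0.12  -0.35  0.08  0.9  -0.2  0.4  153.2  97.6"
rec = parse_dfile_line(sample)
print(rec["n"], rec["roll"], rec["dS"])
```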

  -1Dfile ename   Save the motion parameters ONLY in file 'ename'.
                    The output is in 6 ASCII formatted columns:

                    roll pitch yaw dS  dL  dP

                  This file can be used in FIM as an 'ort', to detrend
                  the data against correlation with the movements.
                  This type of analysis can be useful in removing
                  errors made in the interpolation.

  -rotcom         Write the fragmentary 3drotate commands needed to
                  perform the realignments to stdout; for example:
                    3drotate -rotate 7.2I 3.2R -5.7A -ashift 2.7S -3.8L 4.9P
                  The purpose of this is to make it easier to shift other
                  datasets using exactly the same parameters.

  -tshift ii      If the input dataset is 3D+time and has slice-dependent
                  time-offsets (cf. the output of 3dinfo -v), then this
                  option tells 3dvolreg to time shift it to the average
                  slice time-offset prior to doing the spatial registration.
                  The integer 'ii' is the number of time points at the
                  beginning to ignore in the time shifting.  The results
                  should be like running program 3dTshift first, then running
                  3dvolreg -- this is primarily a convenience option.
            N.B.: If the base brick is taken from this dataset, as in
                  '-base 4', then it will be the time shifted brick.
                  If for some bizarre reason this is undesirable, you
                  could use '-base this+orig[4]' instead.

  -rotparent rset
    Specifies that AFTER the registration algorithm finds the best
    transformation for each sub-brick of the input, an additional
    rotation+translation should be performed before computing the
    final output dataset; this extra transformation is taken from
    the first 3dvolreg transformation found in dataset 'rset'.
  -gridparent gset
    Specifies that the output dataset of 3dvolreg should be shifted to
    match the grid of dataset 'gset'.  Can only be used with -rotparent.
    This dataset should be one that is properly aligned with 'rset' when
    overlaid in AFNI.
  * If 'gset' has a different number of slices than the input dataset,
    then the output dataset will be zero-padded in the slice direction
    to match 'gset'.
  * These options are intended to be used to align datasets between sessions:
     S1 = SPGR from session 1    E1 = EPI from session 1
     S2 = SPGR from session 2    E2 = EPI from session 2
 3dvolreg -twopass -twodup -base S1+orig -prefix S2reg S2+orig
 3dvolreg -rotparent S2reg+orig -gridparent E1+orig -prefix E2reg \
          -base 4 E2+orig
     Each sub-brick in E2 is registered to sub-brick E2+orig[4], then the
      rotation from S2 to S2reg is also applied, with shifting+padding
     applied to properly overlap with E1.
  * A similar effect could be done by using commands
 3dvolreg -twopass -twodup -base S1+orig -prefix S2reg S2+orig
 3dvolreg -prefix E2tmp -base 4 E2+orig
 3drotate -rotparent S2reg+orig -gridparent E1+orig -prefix E2reg E2tmp+orig
    The principal difference is that the latter method results in E2
    being interpolated twice to make E2reg: once in the 3dvolreg run to
    produce E2tmp, then again when E2tmp is rotated to make E2reg.  Using
    3dvolreg with the -rotparent and -gridparent options simply skips the
    intermediate interpolation.

          *** Please read file README.registration for more   ***
          *** information on the use of 3dvolreg and 3drotate ***

 Algorithm: Iterated linearized weighted least squares to make each
              sub-brick as like as possible to the base brick.
              This method is useful for finding SMALL MOTIONS ONLY.
              See program 3drotate for the volume shift/rotate algorithm.
              The following options can be used to control the iterations:
                -maxite     m = Allow up to 'm' iterations for convergence
                                  [default = 19].
                -x_thresh   x = Iterations converge when maximum movement
                                  is less than 'x' voxels [default=0.020000],
                -rot_thresh r = And when maximum rotation is less than
                                  'r' degrees [default=0.030000].
                -delta      d = Distance, in voxel size, used to compute
                                  image derivatives using finite differences
                                  [default=0.700000].
                -final   mode = Do the final interpolation using the method
                                  defined by 'mode', which is one of the
                                  strings 'NN', 'cubic', 'quintic', 'heptic',
                                  or 'Fourier'
                                  [default=mode used to estimate parameters].
            -weight 'wset[n]' = Set the weighting applied to each voxel
                                  proportional to the brick specified here
                                  [default=smoothed base brick].
                                N.B.: if no weight is given, and -twopass is
                                  engaged, then the first pass weight is the
                                  blurred sum of the base brick and the first
                                  data brick to be registered.
                   -edging ee = Set the size of the region around the edges of
                                  the base volume where the default weight will
                                  be set to zero.  If 'ee' is a plain number,
                                  then it is a voxel count, giving the thickness
                                  along each face of the 3D brick.  If 'ee' is
                                   of the form '5%', then it is a fraction
                                  of each brick size.  For example, '5%' of
                                  a 256x256x124 volume means that 13 voxels
                                  on each side of the xy-axes will get zero
                                  weight, and 6 along the z-axis.  If this
                                  option is not used, then 'ee' is read from
                                  the environment variable AFNI_VOLREG_EDGING.
                                  If that variable is not set, then 5% is used.
                                N.B.: This option has NO effect if the -weight
                                  option is used.
                                N.B.: The largest % value allowed is 25%.
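
The '%' arithmetic above can be sketched as follows (assuming round-to-nearest conversion, which reproduces the 5%-of-256x256x124 example; this helper is illustrative, not AFNI code):

```python
# Minimal sketch of the -edging '%' arithmetic: a percentage is
# converted to a per-axis voxel thickness that gets zero weight
# along each face of the 3D brick.  Rounding to nearest reproduces
# the documented example: 5% of 256 -> 13 voxels, 5% of 124 -> 6.

def edge_voxels(dims, pct):
    """Zero-weight voxel thickness along each face, per axis."""
    return tuple(round(d * pct / 100.0) for d in dims)

print(edge_voxels((256, 256, 124), 5))
```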
                     -twopass = Do two passes of the registration algorithm:
                                 (1) with smoothed base and data bricks, with
                                     linear interpolation, to get a crude
                                     alignment, then
                                 (2) with the input base and data bricks, to
                                     get a fine alignment.
                                  This method is useful when aligning high-
                                  resolution datasets that may need to be
                                  moved more than a few voxels to be aligned.
                  -twoblur bb = 'bb' is the blurring factor for pass 1 of
                                  the -twopass registration.  This should be
                                  a number >= 2.0 (which is the default).
                                  Larger values would be reasonable if pass 1
                                  has to move the input dataset a long ways.
                                  Use '-verbose -verbose' to check on the
                                  iterative progress of the passes.
                                N.B.: when using -twopass, and you expect the
                                  data bricks to move a long ways, you might
                                  want to use '-heptic' rather than
                                  the default '-Fourier', since you can get
                                  wraparound from Fourier interpolation.
                      -twodup = If this option is set, along with -twopass,
                                  then the output dataset will have its
                                  xyz-axes origins reset to those of the
                                  base dataset.  This is equivalent to using
                                  '3drefit -duporigin' on the output dataset.
                       -sinit = When using -twopass registration on volumes
                                  whose magnitude differs significantly, the
                                  least squares fitting procedure is started
                                  by doing a zero-th pass estimate of the
                                  scale difference between the bricks.
                                  Use this option to turn this feature OFF.
              -coarse del num = When doing the first pass, the first step is
                                  to do a number of coarse shifts in order to
                                  find a starting point for the iterations.
                                  'del' is the size of these steps, in voxels;
                                  'num' is the number of these steps along
                                  each direction (+x,-x,+y,-y,+z,-z).  The
                                  default values are del=10 and num=2.  If
                                  you don't want this step performed, set
                                  num=0.  Note that the amount of computation
                                  grows as num**3, so don't increase num
                                  past 4, or the program will run forever!
                             N.B.: The 'del' parameter cannot be larger than
                                   10% of the smallest dimension of the input
                                   dataset.
              -wtinp          = Use sub-brick[0] of the input dataset as the
                                  weight brick in the final registration pass.

 N.B.: * This program can consume VERY large quantities of memory.
          (Rule of thumb: 40 bytes per input voxel.)
          Use of '-verbose -verbose' will show the amount of workspace,
          and the steps used in each iteration.
       * ALWAYS check the results visually to make sure that the program
          wasn't trapped in a 'false optimum'.
       * The default rotation threshold is reasonable for 64x64 images.
          You may want to decrease it proportionally for larger datasets.
       * -twopass resets the -maxite parameter to 66; if you want to use
          a different value, use -maxite AFTER the -twopass option.
       * The -twopass option can be slow; several CPU minutes for a
          256x256x124 volume is a typical run time.
       * After registering high-resolution anatomicals, you may need to
          set their origins in 3D space to match.  This can be done using
          the '-duporigin' option to program 3drefit, or by using the
          '-twodup' option to this program.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dWarp
Usage: 3dWarp [options] dataset
Warp (spatially transform) a 3D dataset.
--------------------------
Transform Defining Options: [exactly one of these must be used]
--------------------------
  -matvec_in2out mmm = Read a 3x4 affine transform matrix+vector
                        from file 'mmm':
                         x_out = Matrix x_in + Vector

  -matvec_out2in mmm = Read a 3x4 affine transform matrix+vector
                         from file 'mmm':
                         x_in = Matrix x_out + Vector

     ** N.B.: The coordinate vectors described above are
               defined in DICOM ('RAI') coordinate order.
               (Also see the '-fsl_matvec' option, below.)
     ** N.B.: Using the special name 'IDENTITY' for 'mmm'
               means to use the identity matrix.
     ** N.B.: You can put the matrix on the command line
               directly by using an argument of the form
       'MATRIX(a11,a12,a13,a14,a21,a22,a23,a24,a31,a32,a33,a34)'
               in place of 'mmm', where the aij values are the
               matrix entries (aij = i-th row, j-th column),
               separated by commas.
             * You will need the 'forward single quotes' around
               the argument.
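
As a sketch of how such a 3x4 matrix+vector acts on a coordinate per the x_out = Matrix x_in + Vector convention above (the apply_matvec helper and its values are illustrative, not part of AFNI):

```python
# Minimal sketch: apply a 3x4 [Matrix | Vector] affine transform to a
# DICOM-ordered ('RAI') coordinate, i.e. x_out = Matrix x_in + Vector.
# The matrix values here are illustrative, not from a real 'mmm' file.

def apply_matvec(matvec, xyz):
    """matvec: 3 rows of [a1, a2, a3, v]; xyz: input (x, y, z)."""
    return tuple(sum(row[j] * xyz[j] for j in range(3)) + row[3]
                 for row in matvec)

# The special name 'IDENTITY' corresponds to this matrix+zero vector:
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
print(apply_matvec(identity, (10.0, -5.0, 2.5)))
```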

  -tta2mni = Transform a dataset in Talairach-Tournoux Atlas
              coordinates to MNI-152 coordinates.
  -mni2tta = Transform a dataset in MNI-152 coordinates to
              Talairach-Tournoux Atlas coordinates.

  -matparent mset = Read in the matrix from WARPDRIVE_MATVEC_*
                     attributes in the header of dataset 'mset',
                     which must have been created by program
                     3dWarpDrive.  In this way, you can apply
                      a transformation matrix computed by
                      3dWarpDrive to another dataset.

     ** N.B.: The above option is analogous to the -rotparent
                option in program 3drotate.  Use of -matparent
                should be limited to datasets whose spatial
                coordinate system corresponds to that which
                was used for input to 3dWarpDrive (i.e., the
                input to 3dWarp should overlay properly with
                the input to 3dWarpDrive that generated the
                -matparent dataset).
              Sample usages:
 3dWarpDrive -affine_general -base d1+orig -prefix d2WW -twopass -input d2+orig
 3dWarp      -matparent d2WW+orig -prefix epi2WW epi2+orig

-----------------------
Other Transform Options:
-----------------------
  -linear     }
  -cubic      } = Chooses spatial interpolation method.
  -NN         } =   [default = linear]
  -quintic    }

  -fsl_matvec   = Indicates that the matrix file 'mmm' uses FSL
                    ordered coordinates ('LPI').  For use with
                    matrix files from FSL and SPM.

  -newgrid ddd  = Tells program to compute new dataset on a
                    new 3D grid, with spacing of 'ddd' mm.
                  * If this option is given, then the new
                    3D region of space covered by the grid
                    is computed by warping the 8 corners of
                    the input dataset, then laying down a
                    regular grid with spacing 'ddd'.
                  * If this option is NOT given, then the
                    new dataset is computed on the old
                    dataset's grid.

  -gridset ggg  = Tells program to compute new dataset on the
                    same grid as dataset 'ggg'.

  -zpad N       = Tells program to pad input dataset with 'N'
                    planes of zeros on all sides before doing
                    transformation.
---------------------
Miscellaneous Options:
---------------------
  -prefix ppp   = Sets the prefix of the output dataset.

This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dWarpDrive
Usage: 3dWarpDrive [options] dataset
Warp a dataset to match another one (the base).

This program is a generalization of 3dvolreg.  It tries to find
a spatial transformation that warps a given dataset to match an
input dataset (given by the -base option).  It will be slow.

--------------------------
Transform Defining Options: [exactly one of these must be used]
--------------------------
  -shift_only         =  3 parameters (shifts)
  -shift_rotate       =  6 parameters (shifts + angles)
  -shift_rotate_scale =  9 parameters (shifts + angles + scale factors)
  -affine_general     = 12 parameters (3 shifts + 3x3 matrix)
  -bilinear_general   = 39 parameters (3 + 3x3 + 3x3x3)

  N.B.: At this time, the image intensity is NOT 
         adjusted for the Jacobian of the transformation.
  N.B.: -bilinear_general is not yet implemented.

-------------
Other Options:
-------------
  -linear   }
  -cubic    } = Chooses spatial interpolation method.
  -NN       } =   [default = linear; inaccurate but fast]
  -quintic  }     [for accuracy, try '-cubic -final quintic']

  -base bbb   = Load dataset 'bbb' as the base to which the
                  input dataset will be matched.
                  [This is a mandatory option]

  -verb       = Print out lots of information along the way.
  -prefix ppp = Sets the prefix of the output dataset.
  -input ddd  = You can put the input dataset anywhere in the
                  command line option list by using the '-input'
                  option, instead of always putting it last.

-----------------
Technical Options:
-----------------
  -maxite    m  = Allow up to 'm' iterations for convergence.
  -delta     d  = Distance, in voxel size, used to compute
                   image derivatives using finite differences.
                   [Default=1.0]
  -weight  wset = Set the weighting applied to each voxel
                   proportional to the brick specified here.
                   [Default=computed by program from base]
  -thresh    t  = Set the convergence parameter to be RMS 't' voxels
                   movement between iterations.  [Default=0.03]
  -twopass      = Do the parameter estimation in two passes,
                   coarse-but-fast first, then fine-but-slow second
                   (much like the same option in program 3dvolreg).
                   This is useful if large-ish warping is needed to
                   align the volumes.
  -final 'mode' = Set the final warp to be interpolated using 'mode'
                   instead of the spatial interpolation method used
                   to find the warp parameters.
  -parfix n v   = Fix the n'th parameter of the warp model to
                   the value 'v'.  More than one -parfix option
                   can be used, to fix multiple parameters.
  -1Dfile ename = Write out the warping parameters to the file
                   named 'ename'.  Each sub-brick of the input
                   dataset gets one line in this file.  Each
                   parameter in the model gets one column.
  -float        = Write output dataset in float format, even if
                   input dataset is short or byte.

----------------------
AFFINE TRANSFORMATIONS:
----------------------
The options below control how the affine transformations
(-shift_rotate, -shift_rotate_scale, -affine_general)
are structured in terms of 3x3 matrices:

  -SDU or -SUD }= Set the order of the matrix multiplication
  -DSU or -DUS }= for the affine transformations:
  -USD or -UDS }=   S = triangular shear (params #10-12)
                    D = diagonal scaling matrix (params #7-9)
                    U = rotation matrix (params #4-6)
                  Default order is '-SDU', which means that
                  the U matrix is applied first, then the
                  D matrix, then the S matrix.

  -Supper      }= Set the S matrix to be upper or lower
  -Slower      }= triangular [Default=lower triangular]

  -ashift OR   }= Apply the shift parameters (#1-3) after OR
  -bshift      }= before the matrix transformation. [Default=after]

The matrices are specified in DICOM-ordered (x=-R+L,y=-A+P,z=-I+S)
coordinates as:

  [U] = [Rotate_y(param#6)] [Rotate_x(param#5)] [Rotate_z(param #4)]
        (angles are in degrees)

  [D] = diag( param#7 , param#8 , param#9 )

        [    1        0     0 ]        [ 1 param#10 param#11 ]
  [S] = [ param#10    1     0 ]   OR   [ 0    1     param#12 ]
        [ param#11 param#12 1 ]        [ 0    0        1     ]

 For example, the default (-SDU/-ashift/-Slower) has the warp
 specified as [x]_warped = [S] [D] [U] [x]_in + [shift].
 The shift vector comprises parameters #1, #2, and #3.
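
The default composition can be sketched in Python as follows (parameter values are illustrative, and U is reduced to a single z-rotation, param #4 only, for brevity; this is a sketch of the formula above, not AFNI code):

```python
# Minimal sketch of the default -SDU/-ashift/-Slower warp:
#   [x]_warped = [S] [D] [U] [x]_in + [shift]
# U is simplified to a rotation about z alone (param #4).

import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]

def warp(x, shift, ang_z_deg, scale, shear):
    c = math.cos(math.radians(ang_z_deg))
    s = math.sin(math.radians(ang_z_deg))
    U = [[c, -s, 0], [s, c, 0], [0, 0, 1]]                       # params #4-6
    D = [[scale[0], 0, 0], [0, scale[1], 0], [0, 0, scale[2]]]   # params #7-9
    S = [[1, 0, 0], [shear[0], 1, 0], [shear[1], shear[2], 1]]   # params #10-12
    y = matvec(matmul(S, matmul(D, U)), x)                       # S D U order
    return [y[i] + shift[i] for i in range(3)]                   # -ashift: after

# With zero rotation/shear and unit scales, the warp reduces to a shift:
print(warp([1.0, 2.0, 3.0], [0.5, 0.0, -1.0], 0.0, (1, 1, 1), (0, 0, 0)))
```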

 The goal of the program is to find the warp parameters such that
   I([x]_warped) = s * J([x]_in)
 as closely as possible in a weighted least squares sense, where
 's' is a scaling factor (an extra, invisible, parameter), J(x)
 is the base image, I(x) is the input image, and the weight image
 is a blurred copy of J(x).

 Using '-parfix', you can specify that some of these parameters
 are fixed.  For example, '-shift_rotate_scale' is equivalent to
 '-affine_general -parfix 10 0 -parfix 11 0 -parfix 12 0'.
 Don't attempt to use the '-parfix' option unless you understand
 this example!

-------------------------
  RWCox - November 2004
-------------------------
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dWavelets

Program: 3dWavelets 
Author:  B. Douglas Ward 
Initial Release:  28 March 2000 
Latest Revision:  02 December 2002 

Program to perform wavelet analysis of an FMRI 3d+time dataset.        
                                                                       
Usage:                                                                 
3dWavelets                                                             
-type wname          wname = name of wavelet to use for the analysis   
                     At present, there are only two choices for wname: 
                        Haar  -->  Haar wavelets                       
                        Daub  -->  Daubechies wavelets                 
-input fname         fname = filename of 3d+time input dataset         
[-input1D dname]     dname = filename of single (fMRI) .1D time series 
[-mask mname]        mname = filename of 3d mask dataset               
[-nfirst fnum]       fnum = number of first dataset image to use in    
                       the wavelet analysis. (default = 0)             
[-nlast  lnum]       lnum = number of last dataset image to use in     
                       the wavelet analysis. (default = last)          
[-fdisp fval]        Write (to screen) results for those voxels        
                       whose F-statistic is >= fval                    
                                                                       
Filter options:                                                        
[-filt_stop band mintr maxtr] Specify wavelet coefs. to set to zero    
[-filt_base band mintr maxtr] Specify wavelet coefs. for baseline model
[-filt_sgnl band mintr maxtr] Specify wavelet coefs. for signal model  
     where  band  = frequency band                                     
            mintr = min. value for time window (in TR)                 
            maxtr = max. value for time window (in TR)                 
                                                                       
Output options:                                                        
[-coefts cprefix]   cprefix = prefix of 3d+time output dataset which   
                       will contain the forward wavelet transform      
                       coefficients                                    
                                                                       
[-fitts  fprefix]   fprefix = prefix of 3d+time output dataset which   
                       will contain the full model time series fit     
                       to the input data                               
                                                                       
[-sgnlts sprefix]   sprefix = prefix of 3d+time output dataset which   
                       will contain the signal model time series fit   
                       to the input data                               
                                                                       
[-errts  eprefix]   eprefix = prefix of 3d+time output dataset which   
                       will contain the residual error time series     
                       from the full model fit to the input data       
                                                                       
The following options control the contents of the bucket dataset:      
[-fout]            Flag to output the F-statistics                     
[-rout]            Flag to output the R^2 statistics                   
[-cout]            Flag to output the full model wavelet coefficients  
[-vout]            Flag to output the sample variance (MSE) map        
                                                                       
[-stat_first]      Flag to specify that the full model statistics will 
                     appear prior to the wavelet coefficients in the   
                     bucket dataset output                             
                                                                       
[-bucket bprefix]  bprefix = prefix of AFNI 'bucket' dataset containing
                     parameters of interest, such as the F-statistic   
                     for significance of the wavelet signal model.     
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dWilcoxon

Program: 3dWilcoxon 
Author:  B. Douglas Ward 
Initial Release:  23 July 1997 
Latest Revision:  02 Dec  2002 

This program performs the nonparametric Wilcoxon signed-rank test 
for paired comparisons of two samples. 

Usage: 
3dWilcoxon                                                          
-dset 1 filename               data set for X observations          
 . . .                           . . .                              
-dset 1 filename               data set for X observations          
-dset 2 filename               data set for Y observations          
 . . .                           . . .                              
-dset 2 filename               data set for Y observations          
                                                                    
[-workmem mega]                number of megabytes of RAM to use    
                                 for statistical workspace          
[-voxel num]                   screen output for voxel # num        
-out prefixname                estimated population delta and       
                                 Wilcoxon signed-rank statistics are
                                 written to file prefixname         


N.B.: For this program, the user must specify 1 and only 1 sub-brick  
      with each -dset command. That is, if an input dataset contains  
      more than 1 sub-brick, a sub-brick selector must be used, e.g.: 
      -dset 2 'fred+orig[3]'                                          
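
Putting the pieces together, a complete invocation might look like the
sketch below (the dataset and prefix names are invented for illustration;
only options documented above are used):

```shell
# Hypothetical sketch: paired Wilcoxon test between sub-brick 3 of
# two conditions, two subjects.  All file names here are made up.
3dWilcoxon                            \
    -dset 1 'subj1_condA+orig[3]'     \
    -dset 1 'subj2_condA+orig[3]'     \
    -dset 2 'subj1_condB+orig[3]'     \
    -dset 2 'subj2_condB+orig[3]'     \
    -workmem 40                       \
    -out wilcoxon_AB
```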

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dWinsor
Usage: 3dWinsor [options] dataset
Apply a 3D 'Winsorizing' filter to a short-valued dataset.

Options:
 -irad rr   = include all points within 'distance'
                rr in the operation, where distance
                is defined as sqrt(i*i+j*j+k*k), and
                (i,j,k) are voxel index offsets
                [default rr=1.5]

 -cbot bb   = set bottom clip index to bb
                [default = 20% of the number of points]
 -ctop tt   = set top clip index to tt
                [default = 80% of the number of points]

 -nrep nn   = repeat filter nn times [default nn=1]
                if nn < 0, means to repeat filter until
                 less than abs(nn) voxels change

 -keepzero  = don't filter voxels that are zero
 -clip xx   = set voxels at or below 'xx' to zero

 -prefix pp = use 'pp' as the prefix for the output
                dataset [default pp='winsor']

 -mask mmm  = use 'mmm' as a mask dataset - voxels NOT
                in the mask won't be filtered
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dZcat
Usage: 3dZcat [options] dataset dataset ...
Concatenates datasets in the slice (z) direction.  Each input
dataset must have the same number of voxels in each slice, and
must have the same number of sub-bricks.

Options:
  -prefix pname = Use 'pname' for the output dataset prefix name.
                    [default='zcat']
  -datum type   = Coerce the output data to be stored as the given
                    type, which may be byte, short, or float.
  -fscale     = Force scaling of the output to the maximum integer
                  range.  This only has effect if the output datum
                  is byte or short (either forced or defaulted).
                  This option is sometimes necessary to eliminate
                  unpleasant truncation artifacts.
  -nscale     = Don't do any scaling on output to byte or short datasets.
                   This may be especially useful when operating on mask
                   datasets whose output values are only 0's and 1's.
  -verb         = Print out some verbosity as the program
                    proceeds.

Command line arguments after the above are taken as input datasets.
A dataset is specified using one of these forms:
   'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.

INPUT DATASET NAMES
-------------------
This program accepts datasets that are modified on input according to the
following schemes:
  'r1+orig[3..5]'                                    {sub-brick selector}
  'r1+orig<100..200>'                                {sub-range selector}
  'r1+orig[3..5]<100..200>'                          {both selectors}
  '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'  {calculation}
For the gruesome details, see the output of 'afni -help'.

Notes:
* You can use the '3dinfo' program to see how many slices a
    dataset comprises.
* There must be at least two datasets input (otherwise, the
    program doesn't make much sense, does it?).
* Each input dataset must have the same number of voxels in each
    slice, and must have the same number of sub-bricks.
* This program does not deal with complex-valued datasets.
* See the output of '3dZcutup -help' for a C shell script that
    can be used to take a dataset apart into single slice datasets,
    analyze them separately, and then assemble the results into
    new 3D datasets.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dZcutup
Usage: 3dZcutup [options] dataset
Cuts slices off a dataset in its z-direction, and writes a new
dataset.  The z-direction and number of slices in a dataset
can be determined using the 3dinfo program.
Options:
 -keep b t   = Keep slices numbered 'b' through 't', inclusive.
                 This is a mandatory option.  If you want to
                 create a single-slice dataset, this is allowed,
                 but AFNI may not display such datasets properly.
                 A single slice dataset would have b=t.  Slice
                 numbers start at 0.
 -prefix ppp = Write result into dataset with prefix 'ppp'
                 [default = 'zcutup']
Notes:
 * You can use a sub-brick selector on the input dataset.
 * 3dZcutup won't overwrite an existing dataset (I hope).
 * This program is adapted from 3dZeropad, which does the
     same thing, but along all 3 axes.
 * You can glue datasets back together in the z-direction
     using program 3dZcat.  A sample C shell script that
      uses these programs to carry out an analysis of a large
     dataset is:

  #!/bin/csh
  # Cut 3D+time dataset epi07+orig into individual slices

  foreach sl ( `count -dig 2 0 20` )
    3dZcutup -prefix zcut${sl} -keep $sl $sl epi07+orig

    # Analyze this slice with 3dDeconvolve separately

    3dDeconvolve -input zcut${sl}+orig.HEAD            \
                 -num_stimts 3                         \
                 -stim_file 1 ann_response_07.1D       \
                 -stim_file 2 antiann_response_07.1D   \
                 -stim_file 3 righthand_response_07.1D \
                 -stim_label 1 annulus                 \
                 -stim_label 2 antiann                 \
                 -stim_label 3 motor                   \
                 -stim_minlag 1 0  -stim_maxlag 1 0    \
                 -stim_minlag 2 0  -stim_maxlag 2 0    \
                 -stim_minlag 3 0  -stim_maxlag 3 0    \
                 -fitts zcut${sl}_fitts                \
                 -fout -bucket zcut${sl}_stats
  end

  # Assemble slicewise outputs into final datasets

  time 3dZcat -verb -prefix zc07a_fitts zcut??_fitts+orig.HEAD
  time 3dZcat -verb -prefix zc07a_stats zcut??_stats+orig.HEAD

  # Remove individual slice datasets

  /bin/rm -f zcut*
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dZeropad
Usage: 3dZeropad [options] dataset
Adds planes of zeros to a dataset (i.e., pads it out).

Options:
  -I n = adds 'n' planes of zero at the Inferior edge
  -S n = adds 'n' planes of zero at the Superior edge
  -A n = adds 'n' planes of zero at the Anterior edge
  -P n = adds 'n' planes of zero at the Posterior edge
  -L n = adds 'n' planes of zero at the Left edge
  -R n = adds 'n' planes of zero at the Right edge
  -z n = adds 'n' planes of zeros on EACH of the
          dataset z-axis (slice-direction) faces

 -RL a = These options specify that planes should be added/cut
 -AP b = symmetrically to make the resulting volume have
 -IS c = 'a', 'b', and 'c' slices in the respective directions.

 -mm   = pad counts 'n' are in mm instead of slices:
         * each 'n' is an integer
         * at least 'n' mm of slices will be added/removed:
            n =  3 and slice thickness = 2.5 mm ==> 2 slices added
            n = -6 and slice thickness = 2.5 mm ==> 3 slices removed

 -master mset = match the volume described in dataset 'mset':
                * mset must have the same orientation and grid
                   spacing as dataset to be padded
                * the goal of -master is to make the output dataset
                   from 3dZeropad match the spatial 'extents' of
                   mset (cf. 3dinfo output) as much as possible,
                   by adding/subtracting slices as needed.
                * you can't use -I,-S,..., or -mm with -master

 -prefix ppp = write result into dataset with prefix 'ppp'
                 [default = 'zeropad']
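
The '-mm' rounding above can be sketched in shell: the pad count in slices
is the smallest whole number of slices covering at least |n| mm.  This
formula is inferred from the two examples in the help text, not taken
from the AFNI source:

```shell
#!/bin/sh
# Inferred sketch of the '-mm' arithmetic: convert a pad request in
# millimeters into a pad count in slices, rounding away from zero so
# that at least |n| mm of slices are added (n>0) or removed (n<0).
mm_to_slices () {
  # $1 = requested pad in mm, $2 = slice thickness in mm
  awk -v n="$1" -v dz="$2" 'BEGIN {
    a = (n < 0) ? -n : n
    s = int(a / dz)
    if (s * dz < a) s++        # ceil(|n| / dz)
    if (n < 0) s = -s
    print s
  }'
}

mm_to_slices  3 2.5   # help example: 2 slices added
mm_to_slices -6 2.5   # help example: 3 slices removed
```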

Nota Bene:
 * You can use negative values of n to cut planes off the edges
     of a dataset.  At least one plane must be added/removed
     or the program won't do anything.
 * Anat parent and Talairach markers are NOT preserved in the
     new dataset.
 * If the old dataset has z-slice-dependent time offsets, and
     if new (zero filled) z-planes are added, the time offsets
     of the new slices will be set to zero.
 * You can use program '3dinfo' to find out how many planes
     a dataset has in each direction.
 * Program works for byte-, short-, float-, and complex-valued
     datasets.
 * You can use a sub-brick selector on the input dataset.
 * 3dZeropad won't overwrite an existing dataset (I hope).

 Author: RWCox - July 2000
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
3dZregrid
Usage: 3dZregrid [option] dataset
Alters the input dataset's slice thickness and/or number.

OPTIONS:
 -dz D     = sets slice thickness to D mm
 -nz N     = sets slice count to N
 -zsize Z  = sets thickness of dataset (center-to-center of
              first and last slices) to Z mm
 -prefix P = write result in dataset with prefix P
 -verb     = write progress reports to stderr

At least one of '-dz', '-nz', or '-zsize' must be given.
On the other hand, using all 3 is over-specification.
The following combinations make sense:
 -dz only                   ==> N stays fixed from input dataset
                                 and then is like setting Z = N*D
 -dz and -nz together       ==> like setting Z = N*D
 -dz and -zsize together    ==> like setting N = Z/D
 -nz only                   ==> D stays fixed from input dataset
                                 and then is like setting Z = N*D
 -zsize only                ==> D stays fixed from input dataset
                                 and then is like setting N = Z/D
 -nz and -zsize together    ==> like setting D = Z/N

NOTES:
 * If the input is a 3D+time dataset with slice-dependent time
    offsets, the output will have its time offsets cleared.
    It probably makes sense to do 3dTshift BEFORE using this
    program in such a case.
 * The output of this program is centered around the same
    location as the input dataset.  Slices outside the
    original volume (e.g., when Z is increased) will be
    zero.  This is NOT the same as using 3dZeropad, which
    only adds zeros, and does not interpolate to a new grid.
 * Linear interpolation is used between slices.  However,
    new slice positions outside the old volume but within
    0.5 old slice thicknesses will get a copy of the last slice.
    New slices outside this buffer zone will be all zeros.

EXAMPLE:
 You have two 3D anatomical datasets from the same subject that
 need to be registered.  Unfortunately, the first one has slice
 thickness 1.2 mm and the second 1.3 mm.  Assuming they have
 the same number of slices, then do something like
  3dZregrid -dz 1.2 -prefix ElvisZZ Elvis2+orig
  3dvolreg -base Elvis1+orig -prefix Elvis2reg ElvisZZ+orig
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
@4Daverage
**********************************
This script is somewhat outdated.
I suggest you use 3dMean which is
faster, meaner and not limited to
the alphabet.   ZSS, 03/14/03
**********************************

Usage : @4Daverage <AVERAGE prefix brick 3D+t> <3D+t brik names...>
This script file uses 3dcalc to compute average 3D+time bricks
example : @4Daverage NPt1av NPt1r1+orig NPt1r2+orig NPt1r3+orig
The output NPt1av+orig is the average of the three bricks
 NPt1r1+orig, NPt1r2+orig and NPt1r3+orig

You can use wildcards such as
 @4Daverage test ADzst2*.HEAD AFzst2r*.HEAD 
 Make sure you do not pass both .HEAD and .BRIK names.
 If you do so they will be counted twice.
The bricks to be averaged must be listed individually.
The total number of bricks that can be averaged at once (26)
is determined by 3dcalc.
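
The double-counting warning above is easy to reproduce: an AFNI brick is
stored as a .HEAD/.BRIK file pair, so a glob that matches both extensions
lists every brick twice.  A small throwaway demonstration (file names
invented):

```shell
#!/bin/sh
# Each AFNI brick is a .HEAD/.BRIK pair on disk, so a glob matching
# both extensions counts every brick twice.  Uses empty files in a
# temporary directory; no real datasets involved.
d=$(mktemp -d)
touch "$d/NPt1r1+orig.HEAD" "$d/NPt1r1+orig.BRIK" \
      "$d/NPt1r2+orig.HEAD" "$d/NPt1r2+orig.BRIK"

ls "$d"/NPt1r*+orig.HEAD | wc -l   # 2 bricks: pass only .HEAD names
ls "$d"/NPt1r*+orig.*    | wc -l   # 4 matches: each brick seen twice

rm -rf "$d"
```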

Ziad Saad Nov 21 97, Marquette University
Modified to accept wild cards Jan 24 01, FIM/LBC/NIH
Ziad S. Saad (ziad@nih.gov)
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
4swap
Usage: 4swap [-q] file ...
-- Swaps byte quadruples on the files listed.
   The -q option means to work quietly.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
abut
ABUT:  put noncontiguous FMRI slices together [for to3d]

method: put zero valued slices in the gaps, then
        replicate images to simulate thinner slices

Usage:
   abut [-dzin thickness] [-dzout thickness] [-root name]
        [-linear | -blocky] [-verbose] [-skip n+gap] ... images ...

   -dzin   the thickness value in mm;  if not given,
             taken to be 1.0 (in which case, the output
             thickness and gap sizes are simply relative
             to the slice thickness, rather than absolute)

   -dzout  the output slice thickness, usually smaller than
             the input thickness;  if not given, the program
             will compute a value (the smaller the ratio
             dzout/dzin is, the more slices will be output)

   -root   'name' is the root (or prefix) for the output
             slice filename;  for example, '-root fred.'
             will result in files fred.0001, fred.0002, ...

   -linear if present, this flag indicates that subdivided slices
             will be linearly interpolated rather than simply
             replicated -- this will make the results smoother
             in the through-slice direction (if dzout < dzin)

   -blocky similar to -linear, but uses AFNI's 'blocky' interpolation
             when possible to put out intermediate slices.
             Both interpolation options only apply when dzout < dzin
             and when an output slice has a non-gappy neighbor.

   -skip   'n+gap' indicates that a gap is to be inserted
             between input slices #n and #n+1, where n=1,2,...;
             for example, -skip 6+5.5 means put a gap of 5.5 mm
             between slices 6 and 7.

   More than one -skip option is allowed.  They must all occur
   before the list of input image filenames.
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
adwarp

Program:          adwarp.c 
Author:           R. W. Cox and B. D. Ward 
Initial Release:  02 April 1999 
Latest Revision:  15 August 2001 

Usage: adwarp [options]
Resamples a 'data parent' dataset to the grid defined by an
'anat parent' dataset.  The anat parent dataset must contain
in its .HEAD file the coordinate transformation (warp) needed
to bring the data parent dataset to the output grid.  This
program provides a batch implementation of the interactive
AFNI 'Write' buttons, one dataset at a time.

  Example: adwarp -apar anat+tlrc -dpar func+orig

  This will create dataset func+tlrc (.HEAD and .BRIK).

Options (so to speak):
----------------------
-apar aset  = Set the anat parent dataset to 'aset'.  This
                is a nonoptional option (must be present).

-dpar dset  = Set the data parent dataset to 'dset'.  This
                is a nonoptional option (must be present).
              Note: dset may contain a sub-brick selector,
              e.g.,  -dpar 'dset+orig[2,5,7]'             

-prefix ppp = Set the prefix for the output dataset to 'ppp'.
                The default is the prefix of 'dset'.

-dxyz ddd   = Set the grid spacing in the output dataset to
                'ddd' mm.  The default is 1 mm.

-verbose    = Print out progress reports.
-force      = Write out result even if it means deleting
                an existing dataset.  The default is not
                to overwrite.

-resam rrr  = Set resampling mode to 'rrr' for all sub-bricks
                     --- OR ---                              
-thr   rrr  = Set resampling mode to 'rrr' for threshold sub-bricks
-func  rrr  = Set resampling mode to 'rrr' for functional sub-bricks

The resampling mode 'rrr' must be one of the following:
                 NN = Nearest Neighbor
                 Li = Linear Interpolation
                 Cu = Cubic Interpolation
                 Bk = Blocky Interpolation

NOTE:  The default resampling mode is Li for all sub-bricks. 
This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
afni
GPL AFNI: Analysis of Functional NeuroImages, by RW Cox (rwcox@nih.gov)
This is Version AFNI_2005_08_24_1751
[[Precompiled binary linux_gcc32: Aug 25 2005]]

 ** This software was designed to be used only for research purposes. **
 ** Clinical uses are not recommended, and have never been evaluated. **
 ** This software comes with no warranties of any kind whatsoever,    **
 ** and may not be useful for anything.  Use it at your own risk!     **
 ** If these terms are not acceptable, you aren't allowed to use AFNI.**
 ** See 'Define Datamode->Misc->License Info' for more details.       **

----------------------------------------------------------------
USAGE 1: read in sessions of 3D datasets (created by to3d, etc.)
----------------------------------------------------------------
   afni [options] [session_directory ...]

   -purge       Conserve memory by purging data to disk.
                  [Use this if you run out of memory when running AFNI.]
                  [This will slow the code down, so use only if needed.]
   -posfunc     Set up the color 'pbar' to use only positive function values.
   -R           Recursively search each session_directory for more session
                  subdirectories.
       WARNING: This will descend the entire filesystem hierarchy from
                  each session_directory given on the command line.  On a
                  large disk, this may take a long time.  To limit the
                  recursion to 5 levels (for example), use -R5.
   -ignore N    Tells the program to 'ignore' the first N points in
                  time series for graphs and FIM calculations.
   -im1 N       Tells the program to use image N as the first one for
                  graphs and FIM calculations (same as '-ignore N-1')
   -tlrc_small  These options set whether to use the 'small' or 'big'
   -tlrc_big      Talairach brick size.  The compiled in default for
                  the program is now 'big', unlike AFNI 1.0x.
   -no1D        Tells AFNI not to read *.1D timeseries files from
                  the dataset directories.  The *.1D files in the
                  directories listed in the AFNI_TSPATH environment
                  variable will still be read (if this variable is
                  not set, then './' will be scanned for *.1D files.)

   -noqual      Tells AFNI not to enforce the 'quality' checks when
                  making the transformations to +acpc and +tlrc.
   -unique      Tells the program to create a unique set of colors
                  for each AFNI controller window.  This allows
                  different datasets to be viewed with different
                  grayscales or colorscales.  Note that -unique
                  will only work on displays that support 12 bit
                  PseudoColor (e.g., SGI workstations) or TrueColor.
   -orient code Tells afni the orientation in which to display
                  x-y-z coordinates (upper left of control window).
                  The code must be 3 letters, one each from the
                  pairs {R,L} {A,P} {I,S}.  The first letter gives
                  the orientation of the x-axis, the second the
                  orientation of the y-axis, the third the z-axis:
                   R = right-to-left         L = left-to-right
                   A = anterior-to-posterior P = posterior-to-anterior
                   I = inferior-to-superior  S = superior-to-inferior
                  The default code is RAI ==> DICOM order.  This can
                  be set with the environment variable AFNI_ORIENT.
                  As a special case, using the code 'flipped' is
                  equivalent to 'LPI' (this is for Steve Rao).
   -noplugins   Tells the program not to load plugins.
                  (Plugins can also be disabled by setting the
                   environment variable AFNI_NOPLUGINS.)
   -yesplugouts Tells the program to listen for plugouts.
                  (Plugouts can also be enabled by setting the
                   environment variable AFNI_YESPLUGOUTS.)
   -YESplugouts Makes the plugout code print out lots of messages
                  (useful for debugging a new plugout).
   -noplugouts  Tells the program NOT to listen for plugouts.
                  (This option is available to override
                   the AFNI_YESPLUGOUTS environment variable.)
   -skip_afnirc Tells the program NOT to read the file .afnirc
                  in the home directory.  See README.setup for
                  details on the use of .afnirc for initialization.
   -layout fn   Tells AFNI to read the initial windows layout from
                  file 'fn'.  If this option is not given, then
                  environment variable AFNI_LAYOUT_FILE is used.
                  If neither is present, then AFNI will do whatever
                  it feels like.

   -niml        If present, turns on listening for NIML-formatted
                  data from SUMA.  Can also be turned on by setting
                  environment variable AFNI_NIML_START to YES.
   -np port     If present, sets the NIML socket port number to 'port'.
                  This must be an integer between 1024 and 65535,
                  and must be the same as the '-np port' number given
                  to SUMA.  [default = 53211]

   -com ccc     This option lets you specify 'command strings' to
                  drive AFNI after the program startup is completed.
                  Legal command strings are described in the file
                  README.driver.  More than one '-com' option can
                  be used, and the commands will be executed in
                  the order they are given on the command line.
            N.B.: Most commands to AFNI contain spaces, so the 'ccc'
                  command strings will need to be enclosed in quotes.

 * If no session_directories are given, then the program will use
    the current working directory (i.e., './').
 * The maximum number of sessions is now set to  80.
 * The maximum number of datasets per session is 4096.
 * To change these maximums, you must edit file '3ddata.h' and then
    recompile this program.

-----------------------------------------------------
USAGE 2: read in images for 'quick and dirty' viewing
-----------------------------------------------------
(Most advanced features of AFNI will be disabled.)

   afni -im [options] im1 im2 im3 ...

   -im          Flag to read in images instead of 3D datasets
                   (Talairach and functional stuff won't work)
   -dy yratio   Tells afni the downscreen pixel size is 'yratio' times
                  the across-screen (x) pixel dimension (default=1.0)
   -dz zratio   Tells afni the slice thickness is 'zratio' times
                  the x pixel dimension (default=1.0)
   -orient code Tells afni the orientation of the input images.
                  The code must be 3 letters, one each from the
                  pairs {R,L} {A,P} {I,S}.  The first letter gives
                  the orientation of the x-axis, the second the
                  orientation of the y-axis, the third the z-axis:
                   R = right-to-left         L = left-to-right
                   A = anterior-to-posterior P = posterior-to-anterior
                   I = inferior-to-superior  S = superior-to-inferior
                  (the default code is ASL ==> sagittal images).
                  Note that this use of '-orient' is different from
                  the use when viewing datasets.
   -resize      Tells afni that all images should be resized to fit
                  the size of the first one, if they don't already fit
                  (by default, images must all 'fit' or afni will stop)
   -datum type  Tells afni to convert input images into the type given:
                  byte, short, float, complex are the legal types.
 The image files (im1 ...) are the same formats as accepted by to3d.

 New image display options (alternatives to -im) [19 Oct 1999]:
   -tim         These options tell AFNI to arrange the input images
   -tim:<NT>    into an internal time-dependent dataset.  Suppose that
   -zim:<NZ>    there are N input 2D slices on the command line.
              * -tim alone means these are N points in time (1 slice).
              * -tim:<NT> means there are nt points in time (nt is
                  an integer > 1), so there are N/nt slices in space,
                  and the images on the command line are input in
                  time order first (like -time:tz in to3d).
              * -zim:<NZ> means there are nz slices in space (nz is
                  an integer > 1), so there are N/nz points in time,
                  and the images on the command line are input in
                  slice order first (like -time:zt in to3d).

 N.B.: You may wish to use the -ignore option to set the number of
        initial points to ignore in the time series graph if you use
        -tim or -zim, since there is no way to change this from
        within an AFNI run (the FIM menus are disabled).
 N.B.: The program 'aiv' (AFNI image viewer) can also be used to
        look at images.

-------------------------------------------------------
USAGE 3: read in datasets specified on the command line
-------------------------------------------------------

  afni -dset [options] dname1 dname2 ...

where 'dname1' is the name of a dataset, etc.  With this option, only
the chosen datasets are read in, and they are all put in the same
'session'.  Follower datasets are not created.

INPUT DATASET NAMES
-------------------
 An input dataset is specified using one of these forms:
    'prefix+view', 'prefix+view.HEAD', or 'prefix+view.BRIK'.
 You can also add a sub-brick selection list after the end of the
 dataset name.  This allows only a subset of the sub-bricks to be
 read in (by default, all of a dataset's sub-bricks are input).
 A sub-brick selection list looks like one of the following forms:
   fred+orig[5]                     ==> use only sub-brick #5
   fred+orig[5,9,17]                ==> use #5, #9, and #17
   fred+orig[5..8]     or [5-8]     ==> use #5, #6, #7, and #8
   fred+orig[5..13(2)] or [5-13(2)] ==> use #5, #7, #9, #11, and #13
 Sub-brick indexes start at 0.  You can use the character '$'
 to indicate the last sub-brick in a dataset; for example, you
 can select every third sub-brick by using the selection list
   fred+orig[0..$(3)]

 N.B.: The sub-bricks are read in the order specified, which may
 not be the order in the original dataset.  For example, using
   fred+orig[0..$(2),1..$(2)]
 will cause the sub-bricks in fred+orig to be input into memory
 in an interleaved fashion.  Using
   fred+orig[$..0]
 will reverse the order of the sub-bricks.

 N.B.: You may also use the syntax <A..B> after the name of an input
 dataset to restrict the range of values read in to the numerical
 values in A..B, inclusive.  For example,
    fred+orig[5..7]<100..200>
 creates a 3 sub-brick dataset in which values from the original
 that are less than 100 or greater than 200 are set to zero.
 If you use the <> sub-range selection without the [] sub-brick
 selection, it is the same as if you had put [0..$] in front of
 the sub-range selection.

 N.B.: Datasets using sub-brick/sub-range selectors are treated as:
  - 3D+time if the dataset is 3D+time and more than 1 brick is chosen
  - otherwise, as bucket datasets (-abuc or -fbuc)
    (in particular, fico, fitt, etc datasets are converted to fbuc!)

 N.B.: The characters '$ ( ) [ ] < >'  are special to the shell,
 so you will have to escape them.  This is most easily done by
 putting the entire dataset plus selection list inside forward
 single quotes, as in 'fred+orig[5..7,9]', or double quotes "x".

CALCULATED DATASETS
-------------------
 Datasets may also be specified as runtime-generated results from
 program 3dcalc.  This type of dataset specifier is enclosed in
 quotes, and starts with the string '3dcalc(':
    '3dcalc( opt opt ... opt )'
 where each 'opt' is an option to program 3dcalc; this program
 is run to generate a dataset in the directory given by environment
 variable TMPDIR (default=/tmp).  This dataset is then read into
 memory, locked in place, and deleted from disk.  For example
    afni -dset '3dcalc( -a r1+orig -b r2+orig -expr 0.5*(a+b) )'
 will let you look at the average of datasets r1+orig and r2+orig.
 N.B.: using this dataset input method will use lots of memory!

-------------------------------
GENERAL OPTIONS (for any usage)
-------------------------------

   -q           Tells afni to be 'quiet' on startup
   -Dname=val   Sets environment variable 'name' to 'val' inside AFNI;
                  will supersede any value set in .afnirc.
   -gamma gg    Tells afni that the gamma correction factor for the
                  monitor is 'gg' (default gg is 1.0; greater than
                  1.0 makes the image contrast larger -- this may
                  also be adjusted interactively)
   -install     Tells afni to install a new X11 Colormap.  This only
                  means something for PseudoColor displays.  Also, it
                   usually causes the notorious 'technicolor' effect.
   -ncolors nn  Tells afni to use 'nn' gray levels for the image
                  displays (default is 80)
   -xtwarns     Tells afni to show any Xt warning messages that may
                  occur; the default is to suppress these messages.
   -tbar name   Uses 'name' instead of 'AFNI' in window titlebars.
   -flipim and  The '-flipim' option tells afni to display images in the
   -noflipim      'flipped' radiology convention (left on the right).
                  The '-noflipim' option tells afni to display left on
                  the left, as neuroscientists generally prefer.  This
                  latter mode can also be set by the Unix environment
                  variable 'AFNI_LEFT_IS_LEFT'.  The '-flipim' mode is
                  the default.
   -trace       Turns routine call tracing on, for debugging purposes.
   -TRACE       Turns even more verbose tracing on, for more debugging.
   -nomall      Disables use of the mcw_malloc() library routines.

N.B.: Many of these options, as well as the initial color set up,
      can be controlled by appropriate X11 resources.  See the
      file AFNI.Xdefaults for instructions and examples.

----------
REFERENCES
----------
The following papers describe some of the components of the AFNI package.

RW Cox.  AFNI: Software for analysis and visualization of functional
  magnetic resonance neuroimages.  Computers and Biomedical Research,
  29: 162-173, 1996.

  * The first AFNI paper, and the one I prefer you cite if you want to
    refer to the AFNI package as a whole.

RW Cox, A Jesmanowicz, and JS Hyde.  Real-time functional magnetic
  resonance imaging.  Magnetic Resonance in Medicine, 33: 230-236, 1995.

  * The first paper on realtime FMRI; describes the algorithm used in
    3dfim+, the interactive FIM calculations, and in the realtime plugin.

RW Cox and JS Hyde.  Software tools for analysis and visualization of
  FMRI Data.  NMR in Biomedicine, 10: 171-178, 1997.

  * A second paper about AFNI and design issues for FMRI software tools.

RW Cox and A Jesmanowicz.  Real-time 3D image registration for
  functional MRI.  Magnetic Resonance in Medicine, 42: 1014-1018, 1999.

  * Describes the algorithm used for image registration in 3dvolreg
    and in the realtime plugin.

ZS Saad, KM Ropella, RW Cox, and EA DeYoe.  Analysis and use of FMRI
  response delays.  Human Brain Mapping, 13: 74-93, 2001.

  * Describes the algorithm used in 3ddelay (cf. '3ddelay -help').

This page auto-generated on Thu Aug 25 16:49:38 EDT 2005
@AfniOrient2RAImap
Usage: @AfniOrient2RAImap <ORIENTATION code> .....
returns the index map for the RAI directions

examples:
@AfniOrient2RAImap RAI
returns: 1 2 3
@AfniOrient2RAImap LSP
returns: -1 -3 -2
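
The mapping shown in the examples can be reproduced with a short Python sketch (an illustration of the documented behavior, not the script itself): each letter of the orientation code maps to its RAI axis number (R/L = 1, A/P = 2, I/S = 3), negated when the direction opposes R, A, or I.

```python
def rai_index_map(code):
    """Signed RAI index map of a 3-letter orientation code, as reported
    by @AfniOrient2RAImap (toy re-implementation for illustration)."""
    axes = {'R': 1, 'L': -1, 'A': 2, 'P': -2, 'I': 3, 'S': -3}
    return [axes[c] for c in code.upper()]

print(rai_index_map('RAI'))   # [1, 2, 3]
print(rai_index_map('LSP'))   # [-1, -3, -2]
```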

Ziad Saad (ziad@nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland
This page auto-generated on Thu Aug 25 16:49:39 EDT 2005
afni_vcheck
Usage: afni_vcheck
 Prints out the AFNI version with which it was compiled,
 and checks across the Web for the latest version available.
N.B.: Doing the check across the Web will mean that your
      computer's access to our server will be logged here.
      If you don't want this, don't use this program!
This page auto-generated on Thu Aug 25 16:49:39 EDT 2005
aiv
Usage: aiv [-v] [-p xxxx ] image ...
AFNI Image Viewer program.
Shows the 2D images on the command line in an AFNI-like image viewer.
Can also read images in NIML '<MRI_IMAGE...>' format from a TCP/IP socket.
Image file formats are those supported by to3d:
 * various MRI formats (e.g., DICOM, GEMS I.xxx)
 * raw PPM or PGM
 * JPEG (if djpeg is in the path)
 * GIF, TIFF, BMP, and PNG (if netpbm is in the path)

The '-v' option will make aiv print out the image filenames
as it reads them - this can be a useful progress meter if
the program starts up slowly.

The '-p xxxx' option will make aiv listen to TCP/IP port 'xxxx'
for incoming images in the NIML '<MRI_IMAGE...>' format.  The
port number must be between 1024 and 65535, inclusive.  For
conversion to NIML '<MRI_IMAGE...>' format, see program im2niml.

Normally, at least one image must be given on the command line.
If the '-p xxxx' option is used, then you don't have to input
any images this way; however, since the program requires at least
one image to start up, a crude 'X' will be displayed.  When the
first image arrives via the socket, the 'X' image will be replaced.
Subsequent images arriving by socket will be added to the sequence.

-----------------------------------------------------------------
Sample program fragment, for sending images from one program
into a copy of aiv (which that program also starts up):

#include "mrilib.h"
NI_stream ns; MRI_IMAGE *im; float *far; int nx,ny;
system("aiv -p 4444 &");                               /* start aiv */
ns = NI_stream_open( "tcp:localhost:4444" , "w" ); /* connect to it */
while(1){
  /** ......... create 2D nx X ny data into the far array .........**/
  im = mri_new_vol_empty( nx , ny , 1 , MRI_float );  /* fake image */
  mri_fix_data_pointer( far , im );                  /* attach data */
  NI_element *nel = mri_to_niml(im);     /* convert to NIML element */
  NI_write_element( ns , nel , NI_BINARY_MODE );     /* send to aiv */
  NI_free_element(nel); mri_clear_data_pointer(im); mri_free(im);
}
NI_stream_writestring( ns , "<NI_DO ni_verb='QUIT'>" ) ;
NI_stream_close( ns ) ;  /* do this, or the above, if done with aiv */

-- Author: RW Cox
This page auto-generated on Thu Aug 25 16:49:39 EDT 2005
@Align_Centers
Usage: @Align_Centers <VOL1> <VOL2> 

Creates a copy of Vol1 (Vol1_Shft) with a modified origin
   to make the centers Vol1_Shft and Vol2 coincide.
   Vol1_Shft is written out to the directory containing Vol2.

Requires 3drefit newer than Oct. 02/02.

Ziad Saad (ziad@nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland

This page auto-generated on Thu Aug 25 16:49:39 EDT 2005
AlphaSim

Program:          AlphaSim 
Author:           B. Douglas Ward 
Initial Release:  18 June 1997 
Latest Revision:  02 December 2002 

This program performs alpha probability simulations.  

Usage: 
AlphaSim 
-nx n1        n1 = number of voxels along x-axis                      
-ny n2        n2 = number of voxels along y-axis                      
-nz n3        n3 = number of voxels along z-axis                      
-dx d1        d1 = voxel size (mm) along x-axis                       
-dy d2        d2 = voxel size (mm) along y-axis                       
-dz d3        d3 = voxel size (mm) along z-axis                       
[-mask mset]      Use the 0 sub-brick of dataset 'mset' as a mask     
                    to indicate which voxels to analyze (a sub-brick  
                    selector is allowed)  [default = use all voxels]  
                  Note:  The -mask command REPLACES the -nx, -ny, -nz,
                         -dx, -dy, and -dz commands.                  
[-fwhm s]     s  = Gaussian filter width (FWHM)                       
[-fwhmx sx]   sx = Gaussian filter width, x-axis (FWHM)               
[-fwhmy sy]   sy = Gaussian filter width, y-axis (FWHM)               
[-fwhmz sz]   sz = Gaussian filter width, z-axis (FWHM)               
[-sigma s]    s  = Gaussian filter width (1 sigma)                    
[-sigmax sx]  sx = Gaussian filter width, x-axis (1 sigma)            
[-sigmay sy]  sy = Gaussian filter width, y-axis (1 sigma)            
[-sigmaz sz]  sz = Gaussian filter width, z-axis (1 sigma)            
[-power]      perform statistical power calculations                  
[-ax n1]      n1 = extent of active region (in voxels) along x-axis   
[-ay n2]      n2 = extent of active region (in voxels) along y-axis   
[-az n3]      n3 = extent of active region (in voxels) along z-axis   
[-zsep z]     z = z-score separation between signal and noise         
-rmm r        r  = cluster connection radius (mm)                     
-pthr p       p  = individual voxel threshold probability             
-iter n       n  = number of Monte Carlo simulations                  
[-quiet]     suppress screen output                                   
[-out file]  file = name of output file                               
This page auto-generated on Thu Aug 25 16:49:39 EDT 2005
byteorder
Usage: byteorder
Prints out a string indicating the byte order of the CPU on
which the program is running.  For this computer, we have:

CPU byte order = LSB_FIRST
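
The same determination can be made in a couple of lines of Python (an illustrative sketch, not the program's C source): pack the integer 1 in native order and inspect the first byte.

```python
import struct

# If the first byte of a native-order int 1 is 1, the CPU stores the
# least significant byte first, which is what byteorder calls LSB_FIRST.
cpu_order = 'LSB_FIRST' if struct.pack('=I', 1)[0] == 1 else 'MSB_FIRST'
print('CPU byte order =', cpu_order)
```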
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
cat_matvec
Usage: cat_matvec [-MATRIX] matvec_spec matvec_spec ...

Catenates 3D rotation+shift matrix+vector transformations.
Each matvec_spec is of the form

  mfile [-opkey]

'mfile' specifies the matrix, and can take 4 forms:

=== FORM 1 ===
mfile is the name of an ASCII file with 12 numbers arranged
in 3 lines:
      u11 u12 u13 v1
      u21 u22 u23 v2
      u31 u32 u33 v3
where each 'uij' and 'vi' is a number.  The 3x3 matrix [uij]
is the matrix of the transform, and the 3-vector [vi] is the
shift.  The transform is [xnew] = [uij]*[xold] + [vi].

=== FORM 2 ===
mfile is of the form 'dataset::attribute', where 'dataset'
is the name of an AFNI dataset, and 'attribute' is the name
of an attribute in the dataset's header that contains a
matrix+vector.  Examples:
 'fred+orig::VOLREG_MATVEC_000000'        = fred+orig from 3dvolreg
 'fred+acpc::WARP_DATA'                   = fred+acpc warped in AFNI
 'fred+orig::WARPDRIVE_MATVEC_FOR_000000' = fred+orig from 3dWarpDrive
 'fred+orig::ROTATE_MATVEC_000000'        = fred+orig from 3drotate

=== FORM 3 ===
mfile is of the form
 'MATRIX(u11,u12,u13,v1,u21,u22,u23,v2,u31,u32,u33,v3)'
directly giving all 12 numbers on the command line.  You will
need the 'forward single quotes' around this argument.

=== FORM 4 ===
mfile is of the form
 '-rotate xI yR zA'
where 'x', 'y', and 'z' are angles in degrees, specifying rotations
about the I, R, and A axes respectively.  The letters 'I', 'R', 'A'
specify the axes, and can be altered as in program 3drotate.
(The 'quotes' are mandatory here because the argument contains spaces.)


=== COMPUTATIONS ===
If [U] [v] are the matrix/vector for the first mfile, and
   [A] [b] are the matrix/vector for the second mfile, then
the catenated transformation is
  matrix = [A][U]   vector = [A][v] + [b]
That is, the second mfile transformation follows the first.

The optional 'opkey' (operation key) following each mfile
starts with a '-', and then is a set of letters telling how
to treat the input.  The only opkey currently defined is

  -I = invert the transformation:
                     -1              -1
       [xold] = [uij]  [xnew] - [uij]  [vi]

The transformation resulting by catenating the transformations
is written to stdout in the same 3x4 ASCII file format.  This can
be used as input to '3drotate -matvec_dicom' (provided [uij] is a
proper orthogonal matrix), or to '3dWarp -matvec_xxx'.

N.B.: If only 9 numbers can be read from an mfile, then those
      values form the [uij] matrix, and the vector is set to zero.
N.B.: The '-MATRIX' option indicates that the resulting matrix will
      be written to stdout in the 'MATRIX(...)' format (FORM 3).
      This feature could be used, with clever scripting, to input
      a matrix directly on the command line to program 3dWarp.
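
The catenation and inversion rules above are easy to check numerically.  A minimal NumPy sketch with made-up matrices (not AFNI code; the numbers are arbitrary):

```python
import numpy as np

# Two made-up transforms: [U],[v] applied first, then [A],[b].
U = np.eye(3)
v = np.array([1.0, 0.0, 0.0])
A = np.diag([2.0, 2.0, 2.0])
b = np.array([0.0, 5.0, 0.0])

M = A @ U          # catenated matrix  [A][U]
t = A @ v + b      # catenated vector  [A][v] + [b]

# The -I opkey: xold = inv(M) xnew - inv(M) t
Minv = np.linalg.inv(M)
tinv = -Minv @ t

x = np.array([1.0, 2.0, 3.0])
xnew = M @ x + t                          # forward transform
print(np.allclose(Minv @ xnew + tinv, x)) # True: inversion undoes it
```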
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
ccalc
Usage: ccalc [-eval <EXPR>]
With no command line parameters:
Interactive numerical calculator, using the same
expression syntax as 3dcalc.  Mostly for playing.
With -eval <EXPR> option:
Calculates expr and quits. 
Do not use variables in expr.
Example: ccalc -eval '3 + 5 * sin(22)' 
or: ccalc -eval 3 +5 '*' 'sin(22)'
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
cdf
Usage 1: cdf [-v] -t2p statname t params
Usage 2: cdf [-v] -p2t statname p params
Usage 3: cdf [-v] -t2z statname t params

This program does various conversions using the cumulative distribution
function (cdf) of certain canonical probability functions.  The optional
'-v' indicates to be verbose -- this is for debugging purposes, mostly.

Usage 1: Converts a statistic 't' to a tail probability.
Usage 2: Converts a tail probability 'p' to a statistic.
Usage 3: Converts a statistic 't' to a N(0,1) value (or z-score)
         that has the same tail probability.

The parameter 'statname' refers to the type of distribution to be used.
The numbers in the params list are the auxiliary parameters for the
particular distribution.  The following table shows the available
distribution functions and their parameters:

   statname  Description  PARAMETERS
   --------  -----------  ----------------------------------------
       fico  Cor          SAMPLES  FIT-PARAMETERS  ORT-PARAMETERS
       fitt  Ttest        DEGREES-of-FREEDOM
       fift  Ftest        NUMERATOR and DENOMINATOR DEGREES-of-FREEDOM
       fizt  Ztest        N/A
       fict  ChiSq        DEGREES-of-FREEDOM
       fibt  Beta         A (numerator) and B (denominator)
       fibn  Binom        NUMBER-of-TRIALS and PROBABILITY-per-TRIAL
       figt  Gamma        SHAPE and SCALE
       fipt  Poisson      MEAN
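
As a sanity check on the t2p/p2t round trip, here is a pure-Python sketch for the fizt (normal) case.  It assumes the upper-tail convention; the exact tail convention AFNI uses for each statistic is not restated here, so treat that as an assumption of the sketch.

```python
import math

def norm_sf(z):
    """Upper-tail probability of N(0,1): the '-t2p fizt' direction."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def norm_isf(p, lo=-10.0, hi=10.0):
    """Inverse of norm_sf by bisection: the '-p2t fizt' direction."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_sf(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = norm_sf(1.96)             # statistic -> tail probability
print(round(p, 4))            # ~0.025
print(round(norm_isf(p), 4))  # ~1.96: round trip recovers the statistic
```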

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
@CheckForAfniDset
Usage: @CheckForAfniDset <NAME> .....
example: @CheckForAfniDset /Data/stuff/Hello+orig.HEAD
returns 0 if neither .HEAD nor .BRIK(.gz) exist
        1 if only .HEAD exists
        2 if both .HEAD and .BRIK(.gz) exist

Ziad Saad (ziad@nih.gov)
  SSCC/NIMH/ National Institutes of Health, Bethesda Maryland

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
cjpeg
usage: /var/www/html/pub/dist/bin/linux_gcc32/cjpeg [switches] [inputfile]
Switches (names may be abbreviated):
  -quality N     Compression quality (0..100; 5-95 is useful range)
  -grayscale     Create monochrome JPEG file
  -optimize      Optimize Huffman table (smaller file, but slow compression)
  -progressive   Create progressive JPEG file
  -targa         Input file is Targa format (usually not needed)
Switches for advanced users:
  -dct int       Use integer DCT method (default)
  -dct fast      Use fast integer DCT (less accurate)
  -dct float     Use floating-point DCT method
  -restart N     Set restart interval in rows, or in blocks with B
  -smooth N      Smooth dithered input (N=1..100 is strength)
  -maxmemory N   Maximum memory to use (in kbytes)
  -outfile name  Specify name for output file
  -verbose  or  -debug   Emit debug output
Switches for wizards:
  -baseline      Force baseline quantization tables
  -qtables file  Use quantization tables given in file
  -qslots N[,...]    Set component quantization tables
  -sample HxV[,...]  Set component sampling factors
  -scans file    Create multi-scan JPEG per script file
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
@clip_volume
Usage 1: A script to clip regions of a volume

   @clip_volume <-input VOL> <-below Zmm> [ [-and/-or] <-above Zmm> ]

   Mandatory parameters:
      -input VOL: Volume to clip
    + At least one of the options below:
      -below Zmm: Set to 0 slices below Zmm
                  Zmm (and all other coordinates) are in RAI
                  as displayed by AFNI on the top left corner
                  of the AFNI controller
      -above Zmm: Set to 0 slices above Zmm
      -left  Xmm: Set to 0 slices left of Xmm
      -right  Xmm: Set to 0 slices right of Xmm
      -anterior Ymm: Set to 0 slices anterior to Ymm
      -posterior Ymm: Set to 0 slices posterior to Ymm

    Optional parameters:
      -and (default): Combine with next clipping planes using 'and'
      -or           : Combine with next clipping planes using 'or'
      -verb         : Verbose, show command
      -prefix PRFX  : Use PRFX for output prefix. Default is the 
                      input prefix with _clp suffixed to it.

Example:
@clip_volume -below -30 -above 53 -left 20 -right -13 -anterior -15 \
             -posterior 42 -input ABanat+orig. -verb -prefix sample

Written by Ziad S. Saad (ziad@nih.gov)
                        SSCC/NIMH/NIH/DHHS

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
@CommandGlobb
Usage: @CommandGlobb -com <PROGRAM line Command> -session <OUTPUT> -newxt <EXTENSION> -list <BRICK 1> <BRICK 2> ...

<PROGRAM line Command> : The entire command line for the program desired
The command is best put between single quotes; do not use \ to break a long line within the quotes
<BRIK*> : a list of bricks (or anything)
<EXTENSION> : if the program requires a -prefix option, then you can specify the extension
 which will get appended to the Brick names before +orig
<OUTPUT> : The output directory 

example
@CommandGlobb -com '3dinfo -v' -list *.HEAD
will execute 3dinfo -v on each of the *.HEAD headers

@CommandGlobb -com '3dZeropad -z 4' -newxt _zpd4 -list ADzst*vr+orig.BRIK
will run 3dZeropad with the -z 4 option on all the bricks ADzst*vr+orig.BRIK

Ziad S. Saad (ziad@nih.gov). FIM/LBC/NIMH/NIH. Wed Jan 24 
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
CompareSurfaces
   Usage:    CompareSurfaces 
             -spec <SPEC file>
             -hemi <L or R>
             -sv1 <VOLPARENTALIGNED1.BRIK>
             -sv2 <VOLPARENTALIGNED2.BRIK> 
             [-prefix <FILEPREFIX>]

   NOTE: This program is now superseded by SurfToSurf

   This program calculates the distance, at each node in Surface 1 (S1), to Surface 2 (S2).
   The distances are computed along the local surface normal at each node in S1.
   S1 and S2 are the first and second surfaces encountered in the spec file, respectively.

   -spec <SPEC file>: File containing surface specification. This file is typically 
                      generated by @SUMA_Make_Spec_FS (for FreeSurfer surfaces) or 
                      @SUMA_Make_Spec_SF (for SureFit surfaces).
   -hemi <LEFT or RIGHT>: specify the hemisphere being processed 
   -sv1 <VOLUME BRIK parent>:volume parent BRIK for first surface 
   -sv2 <VOLUME BRIK parent>:volume parent BRIK for second surface 

Optional parameters:
   [-prefix <FILEPREFIX>]: Prefix for distance and node color output files.
                           Existing file will not be overwritten.
   [-onenode <INDEX>]: output results for node index only. 
                       This option is for debugging.
   [-noderange <ISTART> <ISTOP>]: output results from node istart to node istop only. 
                                  This option is for debugging.
   NOTE: -noderange and -onenode are mutually exclusive
   [-nocons]: Skip mesh orientation consistency check.
              This speeds up the start time so it is useful
              for debugging runs.

  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

   For more help: http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm


   If you can't get help here, please get help somewhere.
++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005


    Shruti Japee LBC/NIMH/NIH shruti@codon.nih.gov Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov 

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
ConvertDset
Usage: 
  ConvertDset -o_TYPE -input DSET [-i_TYPE] [-prefix OUT_PREF]
  Converts a surface dataset from one format to another.
  Mandatory parameters:
     -o_TYPE: TYPE of output datasets
              where TYPE is one of:
           niml_asc (or niml): for ASCII niml format.
           niml_bi:            for BINARY niml format.
           1D:                 for AFNI's 1D ascii format.
     -input DSET: Input dataset to be converted.
  Optional parameters:
     -i_TYPE: TYPE of input datasets
              where TYPE is one of:
           niml: for niml data sets.
           1D:   for AFNI's 1D ascii format.
           1Dp:  like 1D but with no comments
                 or other 1D formatting gimmicks.
           dx: OpenDX format, expects to work on 1st
               object only.
           If no format is specified, the program will 
           guess; however, that might slow 
           operations down considerably.
     -prefix OUT_PREF: Output prefix for data set.
                       Default is something based
                       on the input prefix.
  Notes:
     -This program will not overwrite pre-existing files.
     -The new data set is given a new idcode.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

    Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov    Thu Apr  8 16:15:02 EDT 2004

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
ConvertSurface
Usage:  ConvertSurface <-i_TYPE inSurf> <-o_TYPE outSurf> 
    [<-sv SurfaceVolume [VolParam for sf surfaces]>] [-tlrc] [-MNI_rai/-MNI_lpi]
    reads in a surface and writes it out in another format.
    Note: This is not a general utility conversion program. 
    Only fields pertinent to SUMA are preserved.
 Specifying input surfaces using -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
    -ipar_TYPE ParentSurf specifies the parent surface. Only used
            when -o_fsp is used, see -o_TYPE options.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
       If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying output surfaces using -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
       fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
             the name of the parent surface for the patch,
             using the -ipar_TYPE option.
            This option is only for ConvertSurface 
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
    -orient_out STR: Output coordinates in STR coordinate system. 
                      STR is a three character string following AFNI's 
                      naming convention. The program assumes that the native  
                      orientation of the surface is RAI, unless you use the 
                      -MNI_lpi option. The coordinate transformation is carried 
                      out last, just before writing the surface to disk.
    -make_consistent: Check the consistency of the surface's mesh (triangle
                      winding). This option will write out a new surface even 
                      if the mesh was consistent.
                      See SurfQual -help for mesh checks.
    -acpc: Apply acpc transform (which must be in acpc version of 
        SurfaceVolume) to the surface vertex coordinates. 
        This option must be used with the -sv option.
    -tlrc: Apply Talairach transform (which must be a talairach version of 
        SurfaceVolume) to the surface vertex coordinates. 
        This option must be used with the -sv option.
    -MNI_rai/-MNI_lpi: Apply Andreas Meyer Lindenberg's transform to turn 
        AFNI tlrc coordinates (RAI) into MNI coord space 
        in RAI (with -MNI_rai) or LPI (with -MNI_lpi).
        NOTE: the -MNI_lpi option has not been tested yet (I have no data
        to test it on).  Verify alignment with AFNI and please report
        any bugs.
        This option can be used without the -tlrc option,
        but that assumes that surface nodes are already in
        AFNI RAI tlrc coordinates.
   NOTE: The vertex coordinates of the input surfaces are only
         transformed if -sv option is used. If you do transform surfaces, 
         take care not to load them into SUMA with another -sv option.

    Options for applying arbitrary affine transform:
    [xyz_new] = [Mr] * [xyz_old - cen] + D + cen
    -xmat_1D mat: Apply transformation specified in 1D file mat.1D.
                  to the surface's coordinates.
                  [mat] = [Mr][D] is of the form:
                  r11 r12 r13 D1
                  r21 r22 r23 D2
                  r31 r32 r33 D3
    -xcenter x y z: Use vector cen = [x y z]' for rotation center.
                    Default is cen = [0 0 0]'
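
The affine recipe above can be sketched with NumPy.  This is illustrative only; mat stands for the 3x4 [Mr | D] array that a hypothetical -xmat_1D file would supply, and the numbers are made up.

```python
import numpy as np

def transform_nodes(xyz, mat, cen=(0.0, 0.0, 0.0)):
    """Apply [xyz_new] = [Mr] * ([xyz_old] - cen) + D + cen to an Nx3
    array of node coordinates; mat is the 3x4 [Mr | D] matrix."""
    mat = np.asarray(mat, dtype=float)
    Mr, D = mat[:, :3], mat[:, 3]
    cen = np.asarray(cen, dtype=float)
    return (np.asarray(xyz, dtype=float) - cen) @ Mr.T + D + cen

# Identity rotation plus a shift of (1, 2, 3):
mat = [[1, 0, 0, 1],
       [0, 1, 0, 2],
       [0, 0, 1, 3]]
print(transform_nodes([[0, 0, 0], [1, 1, 1]], mat))
# [[1. 2. 3.]
#  [2. 3. 4.]]
```

With an identity Mr, the cen terms cancel, matching the default cen = [0 0 0]' behavior.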
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

		 Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov 	 Wed Jan  8 13:44:29 EST 2003 
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
ConvexHull
Usage: A program to find the convex hull of a set of points.
  This program is a wrapper for the Qhull program.
  see copyright notice by running suma -sources.

  ConvexHull  
     usage 1: < -input VOL >
              < -isoval V | -isorange V0 V1 | -isocmask MASK_COM >
              [<-xform XFORM>]
     usage 2: < i_TYPE input surface >
              [<-sv SURF_VOL>]
     usage 3: < -input_1D XYZ >
     common optional:
              [< -o_TYPE PREFIX>]
              [< -debug DBG >]

  Mandatory parameters, choose one of three usage modes:
  Usage 1:
     You must use one of the following two options:
     -input VOL: Input AFNI (or AFNI readable) volume.
     You must use one of the following iso* options:
     -isoval V: Create isosurface where volume = V
     -isorange V0 V1: Create isosurface where V0 <= volume < V1
     -isocmask MASK_COM: Create isosurface where MASK_COM != 0
        For example: -isocmask '-a VOL+orig -expr (1-bool(a-V))' 
        is equivalent to using -isoval V. 
     NOTE: -isorange and -isocmask are only allowed with -xform mask
            See -xform below for details.

  Usage 2:
     -i_TYPE SURF:  Use the nodes of a surface model
                    for input. See help for i_TYPE usage
                    below.

  Usage 3:
     -input_1D XYZ: Construct the convex hull of the points
                    contained in 1D file XYZ. If the file has
                    more than 3 columns, use AFNI's [] selectors
                    to specify the XYZ columns.

  Optional Parameters:
     Usage 1 only:
     -xform XFORM:  Transform to apply to volume values
                    before searching for sign change
                    boundary. XFORM can be one of:
            mask: values that meet the iso* conditions
                  are set to 1. All other values are set
                  to -1. This is the default XFORM.
            shift: subtract V from the dataset and then 
                   search for 0 isosurface. This has the
                   effect of constructing the V isosurface
                   if your dataset has a continuum of values.
                   This option can only be used with -isoval V.
            none: apply no transforms. This assumes that
                  your volume has a continuum of values 
                  from negative to positive and that you
                  are seeking the 0 isosurface.
                  This option can only be used with -isoval 0.
     Usage 2 only:
     -sv SURF_VOL: Specify a surface volume which contains
                   a transform to apply to the surface node
                   coordinates prior to constructing the 
                   convex hull.
     All Usage:
     -o_TYPE PREFIX: prefix of output surface.
        where TYPE specifies the format of the surface
        and PREFIX is, well, the prefix.
        TYPE is one of: fs, 1d (or vec), sf, ply.
        Default is: -o_ply 
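[Editor's note: the 'mask' XFORM described above can be sketched in a few lines of Python. `xform_mask` and `meets_iso` are hypothetical names for illustration, not part of ConvexHull.]

```python
def xform_mask(values, meets_iso):
    """Apply the 'mask' XFORM sketched in the help above: values that
    meet the iso* condition become 1, all others become -1."""
    return [1 if meets_iso(v) else -1 for v in values]

# e.g. -isorange 2.0 3.0 applied to a tiny 1-D "volume"
masked = xform_mask([0.5, 2.0, 3.5], lambda v: 2.0 <= v < 3.0)
print(masked)  # [-1, 1, -1]
```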

 Specifying input surfaces using -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
        If you supply a surface volume, the coordinates of the input surface
        are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying output surfaces using -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
             In addition to outSurf, you need to specify
             the name of the parent surface for the patch
             using the -ipar_TYPE option.
             This option is only for ConvertSurface.
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.


  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
count
Usage: count [options] bot top [step]

* Produces many numbered copies of the root and/or suffix,
    counting from 'bot' to 'top' with stride 'step'.
* If 'bot' > 'top', counts backwards with stride '-step'.
* If step is of the form 'R#', then '#' random counts are produced
    in the range 'bot..top' (inclusive).
* 'bot' and 'top' must not be negative; step must be positive.

Options:
  -digits n    prints numbers with 'n' digits [default=4]
  -root rrr    prints string 'rrr' before the number [default=empty]
  -suffix sss  prints string 'sss' after the number [default=empty]
  -scale fff   multiplies each number by the factor 'fff';
                 if this option is used, -digits is ignored and
                 the floating point format '%g' is used for output.
                 ('fff' can be a floating point number.)

The main application of this program is for use in C shell programming:
  foreach fred ( `count 1 20` )
     mv wilma.${fred} barney.${fred}
  end
The backward quote operator in the foreach statement executes the
count program, captures its output, and puts it on the command line.
The loop body renames each file wilma.0001 to wilma.0020 to barney.0001
to barney.0020.  Read the man page for csh to get more information.  In
particular, the csh built-in command '@' can be useful.
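[Editor's note: the basic forward-counting case can also be reproduced in Python for scripting outside csh. `count_like` is an illustrative stand-in, not the count program itself; it omits the 'R#' random and backward-counting modes.]

```python
def count_like(bot, top, step=1, digits=4, root="", suffix=""):
    """Emulate count's basic forward case: zero-padded numbers from
    bot to top with stride step, wrapped in optional root/suffix."""
    return [f"{root}{n:0{digits}d}{suffix}" for n in range(bot, top + 1, step)]

print(" ".join(count_like(1, 5)))  # 0001 0002 0003 0004 0005
print(count_like(1, 3, root="wilma."))  # ['wilma.0001', 'wilma.0002', 'wilma.0003']
```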
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
CreateIcosahedron
Usage: CreateIcosahedron [-rad r] [-rd recDepth] [-ld linDepth] 
                         [-ctr ctr] [-prefix fout] [-help]

   -rad r: size of icosahedron. (optional, default 100)

   -rd recDepth: recursive (binary) tessellation depth for icosahedron 
       (optional, default: 3) 
       (recommended to approximate the number of nodes in the brain: 6)
       let rd2 = 2 * recDepth
       Nvert = 2 + 10 * 2^rd2
       Ntri  = 20 * 2^rd2
       Nedge = 30 * 2^rd2

   -ld linDepth: number of edge divides for linear icosahedron tessellation
       (optional, default uses binary tessellation).
       Nvert = 2 + 10 * linDepth^2
       Ntri  = 20 * linDepth^2
       Nedge = 30 * linDepth^2

   -nums: output the number of nodes (vertices), triangles, edges, total volume, and total area, then quit

   -nums_quiet: same as -nums but less verbose. For the machine in you.

   -ctr ctr: coordinates of center of icosahedron. 
       (optional, default 0,0,0)

   -tosphere: project nodes to sphere.

   -prefix fout: prefix for output files. 
       (optional, default CreateIco)

   -help: help message
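[Editor's note: the -rd and -ld node/triangle/edge formulas above can be checked numerically; this sketch also verifies Euler's formula V - E + F = 2 for a closed surface. Function names are illustrative only.]

```python
def ico_counts_recursive(rec_depth):
    """Counts for binary tessellation: rd2 = 2*recDepth, factor 2^rd2."""
    f = 2 ** (2 * rec_depth)
    return 2 + 10 * f, 20 * f, 30 * f   # Nvert, Ntri, Nedge

def ico_counts_linear(lin_depth):
    """Counts for linear tessellation with lin_depth edge divides."""
    f = lin_depth ** 2
    return 2 + 10 * f, 20 * f, 30 * f

nv, nt, ne = ico_counts_recursive(3)    # the default recDepth
print(nv, nt, ne)                       # 642 1280 1920
assert nv - ne + nt == 2                # Euler's formula for a sphere
```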

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005


       Brenna D. Argall LBC/NIMH/NIH bargall@codon.nih.gov 
       Ziad S. Saad     SSCC/NIMH/NIH ziad@nih.gov
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
dicom_hdr
Usage: dicom_hdr [options] fname [...]
Prints information from the DICOM file 'fname' to stdout.

OPTIONS:
 -hex     = Include hexadecimal printout for integer values.
 -noname  = Don't include element names in the printout.
 -sexinfo = Dump Siemens EXtra INFO text (0029 1020), if present
             (can be VERY lengthy).
 -v n     = Dump n words of binary data also.

Based on program dcm_dump_file from the RSNA, developed at
the Mallinckrodt Institute of Radiology.  See the source
code file mri_dicom_hdr.c for their Copyright and license.

SOME SAMPLE OUTPUT LINES:

0028 0010      2 [1234   ] //              IMG Rows// 512
0028 0011      2 [1244   ] //           IMG Columns// 512
0028 0030     18 [1254   ] //     IMG Pixel Spacing//0.488281\0.488281
0028 0100      2 [1280   ] //    IMG Bits Allocated// 16
0028 0101      2 [1290   ] //       IMG Bits Stored// 12
0028 0102      2 [1300   ] //          IMG High Bit// 11

* The first 2 numbers on each line are the DICOM group and element tags,
   in hexadecimal.
* The next number is the number of data bytes, in decimal.
* The next number [in brackets] is the offset in the file of the data,
   in decimal.  This is where the data bytes start, and does not include
   the tag, Value Representation, etc.
* If -noname is NOT given, then the string in the '// ... //' region is
   the standard DICOM dictionary name for this data element.  If this string
   is blank, then this element isn't in the dictionary (e.g., is a private
   tag, or an addition to DICOM that I don't know about, ...).
* The value after the last '//' is the value of the data in the element.
* In the example above, we have a 512x512 image with 0.488281 mm pixels,
   with 12 bits (stored in 16 bits) per pixel.
* For vastly more detail on DICOM standard, you can start with the
   documents at ftp://afni.nimh.nih.gov/dicom/ (1000+ pages of PDF).
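[Editor's note: the field layout described in the bullets above can be parsed mechanically. This is a sketch based only on the sample lines shown here, not on the DICOM standard; `parse_hdr_line` is a hypothetical helper.]

```python
import re

def parse_hdr_line(line):
    """Split one dicom_hdr output line into (group, element, nbytes,
    offset, name, value), following the sample layout above."""
    m = re.match(r"(\w{4}) (\w{4})\s+(\d+) \[(\d+)\s*\] //\s*(.*?)//\s*(.*)", line)
    group, elem, nbytes, offset, name, value = m.groups()
    return group, elem, int(nbytes), int(offset), name.strip(), value.strip()

sample = "0028 0010      2 [1234   ] //              IMG Rows// 512"
print(parse_hdr_line(sample))
# ('0028', '0010', 2, 1234, 'IMG Rows', '512')
```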
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
dicom_to_raw
Usage: dicom_to_raw fname ...
Reads images from DICOM file 'fname' and writes them to raw
file(s) 'fname.raw.0001' etc.
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
Dimon
Dimon - monitor real-time acquisition of DICOM image files
    (or GEMS 5.x I-files, as 'Imon')

    This program is intended to be run during a scanning session
    on a scanner, to monitor the collection of image files.  The
    user will be notified of any missing slice or any slice that
    is acquired out of order.

    When collecting DICOM files, it is recommended to run this
    once per run, only because it is easier to specify the input
    file pattern for a single run (it may be very difficult to
    predict the form of input filenames for runs that have not
    yet occurred).

    This program can also be used off-line (away from the scanner)
    to organize the files, run by run.  If the DICOM files have
    a correct DICOM 'image number' (0x0020 0013), then Dimon can
    use the information to organize the sequence of the files, 
    particularly when the alphabetization of the filenames does
    not match the sequencing of the slice positions.  This can be
    used in conjunction with the '-GERT_Reco' option, which will
    write a script that can be used to create AFNI datasets.

    See the '-dicom_org' option, under 'other options', below.

    If no -quit option is provided, the user should terminate the
    program when it is done collecting images according to the
    input file pattern.

    Dimon can be terminated using <CTRL-C>.

  ---------------------------------------------------------------
  realtime notes for running afni remotely:

    - The afni program must be started with the '-rt' option to
      invoke the realtime plugin functionality.

    - If afni is run remotely, then AFNI_TRUSTHOST will need to be
      set on the host running afni.  The value of that variable
      should be set to the IP address of the host running Dimon.
      This may be set as an environment variable, or via the .afnirc
      startup file.

    - The typical default security on a Linux system will prevent
      Dimon from communicating with afni on the host running afni.
      The iptables firewall service on afni's host will need to be
      configured to accept the communication from the host running
      Dimon, or it (iptables) will need to be turned off.
  ---------------------------------------------------------------
  usage: Dimon [options] -infile_prefix PREFIX
     OR: Dimon [options] -infile_pattern "PATTERN"

  ---------------------------------------------------------------
  examples (no real-time options):

    Dimon -infile_pattern 's8912345/i*'
    Dimon -infile_prefix   s8912345/i
    Dimon -help
    Dimon -infile_prefix   s8912345/i  -quit
    Dimon -infile_prefix   s8912345/i  -nt 120 -quit
    Dimon -infile_prefix   s8912345/i  -debug 2
    Dimon -infile_prefix   s8912345/i  -dicom_org -GERT_Reco -quit

  examples (with real-time options):

    Dimon -infile_prefix s8912345/i -rt 

    Dimon -infile_pattern 's*/i*' -rt 
    Dimon -infile_pattern 's*/i*' -rt -nt 120
    Dimon -infile_pattern 's*/i*' -rt -quit

  ** detailed real-time example:
    Dimon                                    \
       -infile_pattern 's*/i*'               \
       -rt -nt 120                           \
       -host some.remote.computer            \
       -rt_cmd "PREFIX 2005_0513_run3"     \
       -quit                                 

    This example watches for input files matching 's*/i*', expects
    120 repetitions (TRs), and invokes the real-time processing,
    sending data to a computer called some.remote.computer
    (where afni is running, and which considers THIS computer to
    be trusted - see the AFNI_TRUSTHOST environment variable).

  ---------------------------------------------------------------
    Multiple DRIVE_AFNI commands are passed through '-drive_afni'
    options, one requesting to open an axial image window, and
    another requesting an axial graph, with 160 data points.

    See README.driver for acceptable DRIVE_AFNI commands.

    Also, multiple commands specific to the real-time plugin are
    passed via '-rt_cmd' options.  The PREFIX command sets the
    prefix for the datasets output by afni.  The GRAPH_XRANGE and
    GRAPH_YRANGE commands set the graph dimensions for the 3D
    motion correction graph (only).  And the GRAPH_EXPR command
    is used to replace the 6 default motion correction graphs with
    a single graph of the given expression: the square root of the
    mean squared entry of the 3 rotation parameters (roll, pitch
    and yaw), ignoring the 3 shift parameters (dx, dy and dz).

    See README.realtime for acceptable realtime plugin commands.

    Dimon                                                   \
       -infile_pattern 's*/i*.dcm'                         \
       -nt 160                                             \
       -rt                                                 \
       -host some.remote.computer.name                     \
       -drive_afni 'OPEN_WINDOW axialimage'                \
       -drive_afni 'OPEN_WINDOW axialgraph pinnum=160'     \
       -rt_cmd 'PREFIX eat.more.cheese'                    \
       -rt_cmd 'GRAPH_XRANGE 160'                          \
       -rt_cmd 'GRAPH_YRANGE 1.02'                         \
       -rt_cmd 'GRAPH_EXPR sqrt((d*d+e*e+f*f)/3)'            
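[Editor's note: the GRAPH_EXPR expression in this example reduces the three rotation parameters to a single RMS value; numerically it behaves like this sketch (`rotation_rms` is an illustrative name).]

```python
from math import sqrt

def rotation_rms(d, e, f):
    """sqrt((d*d + e*e + f*f)/3), the GRAPH_EXPR used above, where
    d, e, f stand for the roll, pitch and yaw motion parameters."""
    return sqrt((d * d + e * e + f * f) / 3)

print(rotation_rms(0.0, 0.0, 0.0))  # 0.0
print(rotation_rms(1.0, 1.0, 1.0))  # 1.0
```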

  ---------------------------------------------------------------
  notes:

    - Once started, unless the '-quit' option is used, this
      program exits only when a fatal error occurs (single
      missing or out of order slices are not considered fatal).
      Otherwise, it keeps waiting for new data to arrive.

      With the '-quit' option, the program will terminate once
      there is a significant (~2 TR) pause in acquisition.

    - To terminate this program, use <CTRL-C>.

  ---------------------------------------------------------------
  main options:

    For DICOM images, either -infile_pattern or -infile_prefix
    is required.

    -infile_pattern PATTERN : specify pattern for input files

        e.g. -infile_pattern 'run1/i*.dcm'

        This option is used to specify a wildcard pattern matching
        the names of the input DICOM files.  These files should be
        sorted in the order that they are to be assembled, i.e.
        when the files are sorted alphabetically, they should be
        sequential slices in a volume, and the volumes should then
        progress over time (as with the 'to3d' program).

        The pattern for this option must be within quotes, because
        it will be up to the program to search for new files (that
        match the pattern), not the shell.

    -infile_prefix PREFIX   : specify prefix matching input files

        e.g. -infile_prefix run1/i

        This option is similar to -infile_pattern.  By providing
        only a prefix, the user need not use wildcard characters
        with quotes.  Using PREFIX with -infile_prefix is
        equivalent to using 'PREFIX*' with -infile_pattern (note
        the needed quotes).

        Note that it may not be a good idea to use, say 'run1/'
        for the prefix, as there might be a readme file under
        that directory.

        Note also that it is necessary to provide a '/' at the
        end, if the prefix is a directory (e.g. use run1/ instead
        of simply run1).

  ---------------------------------------------------------------
  real-time options:

    -rt                : specify to use the real-time facility

        With this option, the user tells 'Dimon' to use the real-time
        facility, passing each volume of images to an existing
        afni process on some machine (as specified by the '-host'
        option).  Whenever a new volume is acquired, it will be
        sent to the afni program for immediate update.

        Note that afni must also be started with the '-rt' option
        to make use of this.

        Note also that the '-host HOSTNAME' option is not required
        if afni is running on the same machine.

    -drive_afni CMND   : send 'drive afni' command, CMND

        e.g.  -drive_afni 'OPEN_WINDOW axialimage'

        This option is used to pass a single DRIVE_AFNI command
        to afni.  For example, 'OPEN_WINDOW axialimage' will open
        such an axial view window on the afni controller.

        Note: the command 'CMND' must be given in quotes, so that
              the shell will send it as a single parameter.

        Note: this option may be used multiple times.

        See README.driver for more details.

    -host HOSTNAME     : specify the host for afni communication

        e.g.  -host mycomputer.dot.my.network
        e.g.  -host 127.0.0.127
        e.g.  -host mycomputer
        the default host is 'localhost'

        The specified HOSTNAME represents the machine that is
        running afni.  Images will be sent to afni on this machine
        during the execution of 'Dimon'.

        Note that the environment variable AFNI_TRUSTHOST must be
        set on the machine running afni.  Set this equal to the
        name of the machine running Imon (so that afni knows to
        accept the data from the sending machine).

    -pause TIME_IN_MS : pause after each new volume

        e.g.  -pause 200

        In some cases, the user may wish to slow down a real-time
        process.  This option will cause a delay of TIME_IN_MS
        milliseconds after each volume is found.

    -rev_byte_order   : pass the reverse of the BYTEORDER to afni

        Reverse the byte order that is given to afni.  In case the
        detected byte order is not what is desired, this option
        can be used to reverse it.

        See the (obsolete) '-swap' option for more details.

    -rt_cmd COMMAND   : send COMMAND(s) to realtime plugin

        e.g.  -rt_cmd 'GRAPH_XRANGE 120'
        e.g.  -rt_cmd 'GRAPH_XRANGE 120 \n GRAPH_YRANGE 2.5'

        This option is used to pass commands to the realtime
        plugin.  For example, 'GRAPH_XRANGE 120' will set the
        x-scale of the motion graph window to 120 (repetitions).

        Note: the command 'COMMAND' must be given in quotes, so
        that the shell will send it as a single parameter.

        Note: this option may be used multiple times.

        See README.realtime for more details.

    -swap  (obsolete) : swap data bytes before sending to afni

        Since afni may be running on a different machine, the byte
        order may differ there.  This option will force the bytes
        to be reversed, before sending the data to afni.

        ** As of version 3.0, this option should not be necessary.
           'Dimon' detects the byte order of the image data, and then
           passes that information to afni.  The realtime plugin
           will (now) decide whether to swap bytes in the viewer.

           If for some reason the user wishes to reverse the order
           from what is detected, '-rev_byte_order' can be used.

    -zorder ORDER     : slice order over time

        e.g. -zorder alt
        e.g. -zorder seq
        the default is 'alt'

        This option allows the user to alter the slice
        acquisition order in real-time mode, similar to the slice
        pattern of the '-sp' option.  The main differences are:
            o  only two choices are presently available
            o  the syntax is intentionally different (from that
               of 'to3d' or the '-sp' option)

        ORDER values:
            alt   : alternating in the Z direction (over time)
            seq   : sequential in the Z direction (over time)
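[Editor's note: the two ORDER values can be illustrated with index lists. This assumes 'alt' means even-indexed slices first, then odd (as with to3d's alt+z pattern); it is a sketch of the idea, not Dimon's actual code.]

```python
def slice_order(nz, order="alt"):
    """Illustrative slice acquisition orders for nz slices:
    'seq' is bottom-to-top; 'alt' is assumed to be evens then odds."""
    if order == "seq":
        return list(range(nz))
    return list(range(0, nz, 2)) + list(range(1, nz, 2))

print(slice_order(5, "alt"))  # [0, 2, 4, 1, 3]
print(slice_order(5, "seq"))  # [0, 1, 2, 3, 4]
```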

  ---------------------------------------------------------------
  other options:

    -debug LEVEL       : show debug information during execution

        e.g.  -debug 2
        the default level is 1, the domain is [0,3]
        the '-quiet' option is equivalent to '-debug 0'

    -dicom_org         : organize files before other processing

        e.g.  -dicom_org

        When this flag is set, the program will attempt to read in
        all files subject to -infile_prefix or -infile_pattern,
        determine which are DICOM image files, and organize them
        into an ordered list of files per run.

        This may be necessary since the alphabetized list of files
        will not always match the sequential slice and time order
        (which means, for instance, that '*.dcm' may not list
        files in the correct order).

        In this case, if the DICOM files contain a valid 'image
        number' field (0x0020 0013), then they will be sorted
        before any further processing is done.

        Notes:

        - This does not work in real-time mode, since the files
          must all be organized before processing begins.

        - The DICOM images need valid 'image number' fields for
          organization to be possible (DICOM field 0x0020 0013).

        - This works well in conjunction with '-GERT_Reco', to
          create a script to make AFNI datasets.  There will be
          a single file per run that contains the image filenames
          for that run (in order).  This is fed to 'to3d'.

    -help              : show this help information

    -hist              : display a history of program changes

    -nice INCREMENT    : adjust the nice value for the process

        e.g.  -nice 10
        the default is 0, and the maximum is 20
        a superuser may use down to the minimum of -19

        A positive INCREMENT to the nice value of a process will
        lower its priority, allowing other processes more CPU
        time.

    -nt VOLUMES_PER_RUN : set the number of time points per run

        e.g.  -nt 120

        With this option, if a run stalls before the specified
        VOLUMES_PER_RUN is reached (notably including the first
        run), the user will be notified.

        Without this option, Dimon will compute the expected number
        of time points per run based on the first run (and will
        allow the value to increase based on subsequent runs).
        Therefore Dimon would not detect a stalled first run.

    -quiet             : show only errors and final information

    -quit              : quit when there is no new data

        With this option, the program will terminate once a delay
        in new data occurs.  This is most appropriate to use when
        the image files have already been collected.

    -sort_by_num_suffix : sort files according to numerical suffix

        e.g.  -sort_by_num_suffix

        With this option, the program will sort the input files
        according to the trailing '.NUMBER' in the filename.  This
        NUMBER will be evaluated as a positive integer, not via
        an alphabetic sort (so numbers need not be zero-padded).

        This is intended for use on interleaved files, which are
        properly enumerated, but only in the filename suffix.
        Consider a set of names for a single, interleaved volume:

          im001.1  im002.3  im003.5  im004.7  im005.9  im006.11
          im007.2  im008.4  im009.6  im010.8  im011.10

        Here the images were named by 'time' of acquisition, and
        were interleaved.  So an alphabetic sort is not along the
        slice position (z-order).  However the slice ordering was
        encoded in the suffix of the filenames.

        NOTE: the suffix numbers must be unique
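[Editor's note: the interleaved-name example above sorts as follows under a numeric-suffix sort. `sort_by_num_suffix` here is a Python sketch of the idea, not Dimon's implementation.]

```python
def sort_by_num_suffix(names):
    """Order filenames by the integer after the final '.', as the
    -sort_by_num_suffix option does for names like 'im001.1'."""
    return sorted(names, key=lambda n: int(n.rsplit(".", 1)[1]))

names = ["im001.1", "im002.3", "im003.5", "im004.7", "im005.9",
         "im006.11", "im007.2", "im008.4", "im009.6", "im010.8",
         "im011.10"]
print(sort_by_num_suffix(names)[:4])
# ['im001.1', 'im007.2', 'im002.3', 'im008.4']
```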

    -start_file S_FILE : have Dimon process starting at S_FILE

        e.g.  -start_file 043/I.901

        With this option, any earlier I-files will be ignored
        by Dimon.  This is a good way to start processing a later
        run, if it is desired not to look at the earlier data.

        In this example, all files in directories 003 and 023
        would be ignored, along with everything in 043 up through
        I.900.  So 043/I.901 might be the first file in run 2.

    -use_imon          : revert to Imon functionality

    -version           : show the version information

  ---------------------------------------------------------------
  GERT_Reco options:

    -GERT_Reco        : output a GERT_Reco_dicom script

        Create a script called 'GERT_Reco_dicom', similar to the
        one that Ifile creates.  This script may be run to create
        the AFNI datasets corresponding to the I-files.

    -gert_outdir OUTPUT_DIR  : set output directory in GERT_Reco

        e.g. -gert_outdir subject_A7
        e.g. -od subject_A7
        the default is '-gert_outdir .'

        This will add '-od OUTPUT_DIR' to the @RenamePanga command
        in the GERT_Reco script, creating new datasets in the
        OUTPUT_DIR directory, instead of the 'afni' directory.

    -sp SLICE_PATTERN  : set output slice pattern in GERT_Reco

        e.g. -sp alt-z
        the default is 'alt+z'

        This option allows the user to alter the slice
        acquisition pattern in the GERT_Reco script.

        See 'to3d -help' for more information.

  ---------------------------------------------------------------

  Author: R. Reynolds - version 2.1 (August 23, 2005)

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
djpeg
usage: /var/www/html/pub/dist/bin/linux_gcc32/djpeg [switches] [inputfile]
Switches (names may be abbreviated):
  -colors N      Reduce image to no more than N colors
  -fast          Fast, low-quality processing
  -grayscale     Force grayscale output
  -scale M/N     Scale output image by fraction M/N, eg, 1/8
  -bmp           Select BMP output format (Windows style)
  -gif           Select GIF output format
  -os2           Select BMP output format (OS/2 style)
  -pnm           Select PBMPLUS (PPM/PGM) output format (default)
  -targa         Select Targa output format
Switches for advanced users:
  -dct int       Use integer DCT method (default)
  -dct fast      Use fast integer DCT (less accurate)
  -dct float     Use floating-point DCT method
  -dither fs     Use F-S dithering (default)
  -dither none   Don't use dithering in quantization
  -dither ordered  Use ordered dither (medium speed, quality)
  -map FILE      Map to colors used in named image file
  -nosmooth      Don't use high-quality upsampling
  -onepass       Use 1-pass quantization (fast, low quality)
  -maxmemory N   Maximum memory to use (in kbytes)
  -outfile name  Specify name for output file
  -verbose  or  -debug   Emit debug output
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
DTIStudioFibertoSegments
Usage: DTIStudioFibertoSegments [options] dataset
Convert a DTIStudio Fiber file to a SUMA segment file
Options:
  -output / -prefix = name of the output file (not an AFNI dataset prefix)
    the default output name will be rawxyzseg.dat

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
@DTI_studio_reposition
@DTI_studio_reposition <DTI_HDR_VOLUME> <AFNI_REFERENCE_VOLUME>
This script reslices and repositions a DTI_studio ANALYZE
volume to match the original volume used to input data
into DTI_studio.
Check realignment with AFNI to be sure all went well.
Example:
FA35.hdr is a (renamed) volume from DTI_studio that contains
   fractional anisotropy
sample_dti_vol+orig is an AFNI volume of one DWI in the first
   gradient direction (54 slices). sample_dti_vol+orig was 
   created using: 
   to3d -prefix sample_dti_vol DTIepi2new-000[0-4]?.dcm DTIepi2new-0005[0-4].dcm 
To create a version of FA35 that is in alignment with sample_dti_vol do:
@DTI_studio_reposition FA35.hdr sample_dti_vol+orig

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
ent16
Usage: ent16 [-%nn]
Computes an estimate of the entropy of stdin.
If the flag '-%75' is given (e.g.), then the
  exit status is 1 only if the input could be
  compressed at least 75%, otherwise the exit
  status is 0.  Legal values of 'nn' are 1..99.
In any case, the entropy and compression estimates
  are printed to stdout, even if no '-%nn' flag is
  given.

METHOD: entropy is estimated by building a histogram
        of all 16 bit words in the input, then summing
        over -p[i]*log2(p[i]), i=0..65535.  Compression
        estimate seems to work pretty well for gzip -1
        in most cases of binary image data.
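[Editor's note: the entropy estimate described in METHOD can be reproduced for small inputs. This sketch omits the compression estimate, since its exact formula is not given in the help; `entropy16` is an illustrative name.]

```python
from collections import Counter
from math import log2

def entropy16(data: bytes) -> float:
    """Estimate entropy (bits per 16-bit word): histogram all 16-bit
    words in the input, then sum -p[i]*log2(p[i]) over the histogram."""
    words = [int.from_bytes(data[i:i + 2], "little")
             for i in range(0, len(data) - 1, 2)]
    n = len(words)
    return -sum((c / n) * log2(c / n) for c in Counter(words).values())

# Two equally likely 16-bit words -> 1 bit of entropy per word
print(entropy16(b"\xaa\xbb" * 50 + b"\xcc\xdd" * 50))  # 1.0
```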

SAMPLE USAGE (csh syntax):
  ent16 -%75 < fred+orig.BRIK
  if( $status == 1 ) gzip -1v fred+orig.BRIK
This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
FD2
 Functional Display (32x32, 64x64, 128x128, 256x256) in X11 window.
 EPI images in 256x256 Signa format are accepted too.
 It displays an EPI time- or frequency-course window.

 Usage: /var/www/html/pub/dist/bin/linux_gcc32/FD2 [options] image1, [image2, ..., image10000]

 Where options are:
    -d display       - X11 display
    -geom geometry   - initial geometry
    -nc #_of_colors  - initial number of colors [2-200] (def 64)
    -sp #_of_degree  - range of color spectrum [0-360] degree (def 240)
    -gam gamma       - gamma correction (1 for no correction)
    -num #_of_images - # of images in time course [2-10000].
    -im1 image_#     - first image in time course. Previous images will be 
                       filled with this one for proper timing of others.
    -ideal ref_file  - use ref_file for fim-like calculations
    -pcthresh #      - use # as threshold for correlation
    -extra           - files after this are not used in time course
                       (used instead of -num option)
    -fim_colors L thr1 pos1 neg2 ... thrL posL negL
                     - set up L thresholds and colors for FIM overlay
    -gsize x y       - set graph window size to x by y pixels
    -fmag val        - magnify scale of FFT by val
    -grid val        - initial grid separation 
    -phase           - image has negative values (for FFT)
    -cf              - center image to the frame
    -swap            - byte-swap data (default is no)
                       *** this is a new default!

 Events:

  Program quit      : <q> or <Q>
  Change to colors  : <C>
  Change to B & W   : <B>
  Swap colors       : <S>
  Restore colors    : Button_3 at image center 
  Squeeze colors    : #2 or #3 button - right side of image
  Expand  colors    :                   left  side of image
  Circ. color bar   : #2 or #3 button at the color bar
  Color saturation  : #2 or #3 button - the top or bottom
  Exact image number: press <I>, enter_number, <CR>
  First image       : 1
  Last image        : l
  Next     image    : >
  Previous image    : <
                      dragging red pointer works too
  Scale Plot up         : +
  Scale Plot down       : -
  Increase Grid Spacing : G
  Decrease Grid Spacing : g
  Toggle Grid and Colors: r
  Toggle Frame colors   : R
  Increase matrix size  : M
  Decrease matrix size  : m
  Exact matrix size     : N #of_size <CR> (1 to 25 only)
  Save minigraph in ASCII file   : press <P>
    [with xxx_yyy.suffix filename] press <W>
  Save current image to a file   : press <S>
  Save averaged image (not norm) : press <X>
  Position frame in the image    : press Button_1 in the image area,
                                    drag cursor, and release button.
  Center frame on desired pixel  : press Button_1 over desired minigraph.
  Rotate image 90 deg. clockwise : press Button_3 in [Rot] window.
                counterclockwise : press Button_1 in [Rot] window.
  Change to differential display : press [Diff] window. Set first and
                                   last image for averaged reference.
  Average of set of images       : press [AvIm] (can be used in Diff mode).
  Compute FIM overlay            : press [FIM], choose ref file,threshold,
                                   then press [GO]

  Last image in time course      : L
  Toggle autoscale of images     : A
  Hide FIM overlay               : H
  Hide frame in image            : h
  Toggle overlay checkerboard    : O
  Read image into program        : F (for file)
  Remove image from program      : K (for kill)
  Move to image 1..9             : 1,2,...9
  Toggle common graph baselines  : b
  Toggle baseline to zero        : x

  Add/[subtract] 3600 from pixel : D / [d]
  In FT edit mode: 
                increase value   : Arrow Up
                decrease value   : Arrow Down
          Shift or Control Arrow : larger changes 
                undo last change : u
                undo all changes : U

This page auto-generated on Thu Aug 25 16:49:40 EDT 2005
fftest
Usage: fftest [-q] len num nvec
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
file_tool
/var/www/html/pub/dist/bin/linux_gcc32/file_tool - display or modify sections of a file

    This program can be used to display or edit data in arbitrary
    files.  If no '-mod_data' option is provided (with DATA), it
    is assumed the user wishes only to display the specified data
    (using both '-offset' and '-length', or using '-ge_XXX').

  usage: /var/www/html/pub/dist/bin/linux_gcc32/file_tool [options] -infiles file1 file2 ...

  examples:

   ----- help examples -----

   1. get detailed help:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -help

   2. get descriptions of GE struct elements:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -help_ge

   ----- GEMS 4.x and 5.x display examples -----

   3. display GE header and extras info for file I.100:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -ge_all -infiles I.100

   4. display GEMS 4.x series and image headers for file I.100:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -ge4_all -infiles I.100

   5. display run numbers for every 100th I-file in this directory

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -ge_uv17 -infiles I.?42
      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -ge_run  -infiles I.?42

   ----- general value display examples -----

   6. display the 32 characters located 100 bytes into each file:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -offset 100 -length 32 -infiles file1 file2

   7. display the 8 4-byte reals located 100 bytes into each file:

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -disp_real4 -offset 100 -length 32 -infiles file1 file2

   ----- character modification examples -----

   8. in each file, change the 8 characters at 2515 to 'hi there':

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_data "hi there" -offset 2515 -length 8 -infiles I.*

   9. in each file, change the 21 characters at 2515 to all 'x's
      (and print out extra debug info)

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -debug 1 -mod_data x -mod_type val -offset 2515 \
                -length 21 -infiles I.*

   ----- raw number modification examples -----

  10. in each file, change the 3 short integers starting at position
      2508 to '2 -419 17'

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_data '2 -419 17' -mod_type sint2 -offset 2508 \
                -length 6 -infiles I.*

  11. in each file, change the 3 binary floats starting at position
      2508 to '-83.4 2 17' (and set the next 8 bytes to zero by
      setting the length to 20, instead of just 12).

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_data '-83.4 2 17' -mod_type float4 -offset 2508 \
                -length 20 -infiles I.*

  12. in each file, change the 3 binary floats starting at position
      2508 to '-83.4 2 17', and apply byte swapping

      /var/www/html/pub/dist/bin/linux_gcc32/file_tool -mod_data '-83.4 2 17' -mod_type float4 -offset 2508 \
                -length 12 -swap_bytes -infiles I.*

  notes:

    o  Use of '-infiles' is required.
    o  Use of '-length' or a GE information option is required.
    o  As of this version, only modification with text is supported.
       Editing binary data is coming soon to a workstation near you.

  special options:

    -help              : show this help information
                       : e.g. -help

    -version           : show version information
                       : e.g. -version

    -hist              : show the program's modification history

    -debug LEVEL       : print extra info along the way
                       : e.g. -debug 1
                       : default is 0, max is 2

  required 'options':

    -infiles f1 f2 ... : specify input files to print from or modify
                       : e.g. -infiles file1
                       : e.g. -infiles I.*

          Note that '-infiles' should be the final option.  This is
          to allow the user an arbitrary number of input files.

  GE info options:

      -ge_all          : display GE header and extras info
      -ge_header       : display GE header info
      -ge_extras       : display extra GE image info
      -ge_uv17         : display the value of uv17 (the run #)
      -ge_run          : (same as -ge_uv17)
      -ge_off          : display file offsets for various fields

  GEMS 4.x info options:

      -ge4_all         : display GEMS 4.x series and image headers
      -ge4_image       : display GEMS 4.x image header
      -ge4_series      : display GEMS 4.x series header
      -ge4_study       : display GEMS 4.x study header

  raw ascii options:

    -length LENGTH     : specify the number of bytes to print/modify
                       : e.g. -length 17

          This includes numbers after the conversion to binary.  So
          if -mod_data is '2 -63 186', and -mod_type is 'sint2' (or
          signed shorts), then 6 bytes will be written (2 bytes for
          each of 3 short integers).

       ** Note that if the -length argument is MORE than what is
          needed to write the numbers out, the remainder of the length
          bytes will be written with zeros.  If '17' is given for
          the length, and 3 short integers are given as data, there 
          will be 11 bytes of 0 written after the 6 bytes of data.
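The zero-padding rule described above can be illustrated in Python (a sketch only; `mod_sint2` is a hypothetical helper, and little-endian byte order is assumed here, whereas file_tool presumably writes in native order unless -swap_bytes is given):

```python
import struct

def mod_sint2(buf: bytearray, offset: int, length: int, values) -> None:
    """Write 'values' as 2-byte signed ints at 'offset', then zero-pad
    the rest of 'length' bytes, mimicking '-mod_type sint2'."""
    data = b"".join(struct.pack("<h", v) for v in values)  # little-endian assumed
    if len(data) > length:
        raise ValueError("-length too small for the data")
    data += b"\x00" * (length - len(data))   # pad the remainder with zeros
    buf[offset:offset + length] = data

buf = bytearray(b"\xff" * 32)
mod_sint2(buf, 8, 17, [2, -63, 186])   # 6 data bytes + 11 zero bytes
```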

    -mod_data DATA     : specify a string to change the data to
                       : e.g. -mod_data hello
                       : e.g. -mod_data '2 -17.4 649'
                       : e.g. -mod_data "change to this string"

          This is the data that will be written into the modified
          file.  If the -mod_type is 'str' or 'char', then the
          output data will be those characters.  If the -mod_type
          is any other (i.e. a binary numerical format), then the
          output will be the -mod_data, converted from numerical
          text to binary.

       ** Note that a list of numbers must be contained in quotes,
          so that it will be processed as a single parameter.

    -mod_type TYPE     : specify the data type to write to the file
                       : e.g. -mod_type string
                       : e.g. -mod_type sint2
                       : e.g. -mod_type float4
                       : default is 'str'

        TYPE can be one of:

          str       : perform a string substitution
          char, val : perform a (repeated?) character substitution
          uint1     : single byte unsigned int   (binary write)
          sint1     : single byte   signed int   (binary write)
          uint2     : two    byte unsigned int   (binary write)
          sint2     : two    byte   signed int   (binary write)
          uint4     : four   byte unsigned int   (binary write)
          sint4     : four   byte   signed int   (binary write)
          float4    : four   byte floating point (binary write)
          float8    : eight  byte floating point (binary write)

          If 'str' is used, which is the default action, the data is
          replaced by the contents of the string DATA (from the
          '-mod_data' option).

          If 'char' is used, then LENGTH bytes are replaced by the
          first character of DATA, repeated LENGTH times.

          For any of the others, the list of numbers found in the
          -mod_data option will be written in the supplied binary
          format.  LENGTH must be large enough to accommodate this
          list.  And if LENGTH is higher, the output will be padded
          with zeros, to fill to the requested length.

    -offset OFFSET     : use this offset into each file
                       : e.g. -offset 100
                       : default is 0

          This is the offset into each file for the data to be
          read or modified.

    -quiet             : do not output header information

  numeric options:

    -disp_int2         : display 2-byte integers
                       : e.g. -disp_int2

    -disp_int4         : display 4-byte integers
                       : e.g. -disp_int4

    -disp_real4        : display 4-byte real numbers
                       : e.g. -disp_real4

    -swap_bytes        : use byte-swapping on numbers
                       : e.g. -swap_bytes

          If this option is used, then byte swapping is done on any
          multi-byte numbers read from or written to the file.

  - R Reynolds, version: 3.2a (March 22, 2005), compiled: Aug 25 2005

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
fim2
 Usage: fim2 [options] image_files ...
 where 'image_files ...' is a sequence of MRI filenames,
  
 options are:
 -pcnt #         correlation coeff. threshold will be 1 - 0.01 * #
 -pcthresh #     correlation coeff. threshold will be #
 -im1 #          index of image file to use as first in time series;
                   default is 1; previous images are filled with this
                   image to synchronize with the reference time series
 -num #          number of images to actually use, if more than this
                   many are specified on the command line;  default is
                   to use all images
 -non            this option turns off the default normalization of
                   the output activation image;  the user should provide
                   a scaling factor via '-coef #', or '1' will be used
 -coef #         the scaling factor used to convert the activation output
                   from floats to short ints (if -non is also present)
  
 -ort fname      fname = filename of a time series to which the image data
                   will be orthogonalized before correlations are computed;
                   any number of -ort options (from 0 on up) may be used
 -ideal fname    fname = filename of a time series to which the image data
                   is to be correlated; exactly one such time series is
                   required;  if the -ideal option is not used, then the
                   first filename after all the options will be used
       N.B.: This version of fim2 allows the specification of more than
             one ideal time series file.  Each one is separately correlated
             with the image time series and the one most highly correlated
             is selected for each pixel.  Multiple ideals are specified
             using more than one '-ideal fname' option, or by using the
             form '-ideal [ fname1 fname2 ... ]' -- this latter method
             allows the use of wildcarded ideal filenames.
             The '[' character that indicates the start of a group of
             ideals can actually be any ONE of these: [{/%
             and the ']' that ends the group can be:  ]}/%
  
       [Format of ort and ideal time series files:
        ASCII; one number per line;
        Same number of lines as images in the time series;
        Value over 33333 --> don't use this image in the analysis]
  
 -polref #       use polynomials of order 0..# as extra 'orts';
 [or -polort #]    default is 0 (yielding a constant vector).
                   Use # = -1 to suppress this feature.
  
 -fimfile fname  fname = filename to save activation magnitudes in;
                   if not given, the last name on the command line will
                   be used
 -corr           if present, indicates to write correlation output to
                   image file 'fimfile.CORR' (next option is better)
 -corfile fname  fname = filename to save correlation image in;
                   if not present, and -corr is not present, correlation
                   image is not saved.
 -cnrfile fname  fname = filename to save contrast-to-noise image in;
                   if not present, will not be computed or saved;
                   CNR is scaled by 100 if images are output as shorts
                   and is written 'as-is' if output as floats (see -flim).
                   [CNR is defined here to be alpha/sigma, where
                    alpha = amplitude of normalized ideal in a pixel
                    sigma = standard deviation of pixel after removal
                            of orts and ideal
                    normalized ideal = ideal scaled so that trough-to-peak
                      height is one.]
 -sigfile fname  fname = filename to save standard deviation image in;
                   the standard deviation is of what is left after the
                   least squares removal of the -orts, -polrefs, and -ideal.
                  N.B.: This is always output in the -flim format!
 -fitfile fname  Image files of the least squares fit coefficients of
                   all the -ort and -polref time series that
                   are projected out of the data time series before
                   the -ideal is fit.  The actual filenames will
                   be fname.01 fname.02 ....
                   Their order is -orts, then -polrefs, and last -ideal.
                  N.B.: These are always output in the -flim format!
 -subort fname   A new timeseries of images is written to disk, with
                   names of the form 'fname.0001', etc.  These images
                   have the orts and polrefs (but not ideals) subtracted out.
                  N.B.: These are always output in the -flim format!
 -flim           If present, write outputs in mrilib 'float' format,
                   rather than scale+convert to integers
                   [The 'ftosh' program can later convert to short integers]
 -clean          if present, then output images won't have the +/- 10000
                   values forced into their corners for scaling purposes.
 -clip           if present, output correlations, etc., will be set to
                   zero in regions of low intensity.
 -q              if present, indicates 'quiet' operation.
 -dfspace[:0]    Indicates to use the 'dfspace' filter (a la imreg) to
                   register the images spatially before filtering.
 -regbase fname   Indicates to read image in file 'fname' as the base
                   image for registration.  If not given, the first image
                   in the time series that is used in the correlation
                   computations will be used.  This is also the image
                   that is used to define 'low intensity' for the -clip option.
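The core per-pixel computation fim2 describes -- correlate each pixel's time series with one or more ideals and keep the best -- can be sketched in Python (illustrative; `corr` and `best_ideal` are made-up names, and whether fim2 compares signed or absolute correlations is not spelled out here):

```python
import math

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def best_ideal(pixel_ts, ideals):
    """Return (index, r) of the ideal most highly correlated with the
    pixel's time series, in the spirit of multiple '-ideal' options."""
    return max(enumerate(corr(pixel_ts, i) for i in ideals),
               key=lambda t: t[1])
```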
  
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
float_scan
Usage: float_scan [options] input_filename
Scans the input file of IEEE floating point numbers for
illegal values: infinities and not-a-number (NaN) values.

Options:
  -fix     = Writes a copy of the input file to stdout (which
               should be redirected using '>'), replacing
               illegal values with 0.  If this option is not
               used, the program just prints out a report.
  -v       = Verbose mode: print out index of each illegal value.
  -skip n  = Skip the first n floating point locations
               (i.e., the first 4*n bytes) in the file

N.B.: This program does NOT work on compressed files, nor does it
      work on byte-swapped files (e.g., files transferred between
      Sun/SGI/HP and Intel platforms), nor does it work on images
      stored in the 'flim' format!

The program 'exit status' is 1 if any illegal values were
found in the input file.  If no errors were found, then
the exit status is 0. You can check the exit status by
using the shell variable $status.  A C-shell example:
   float_scan fff
   if ( $status == 1 ) then
      float_scan -fix fff > Elvis.Aaron.Presley
      rm -f fff
      mv Elvis.Aaron.Presley fff
   endif
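The same scan-and-fix idea can be sketched in Python (an illustration, not the program itself; native 4-byte floats are assumed, consistent with the no-byte-swapping caveat above, and `scan_floats` is a made-up name):

```python
import math
import struct

def scan_floats(raw: bytes, fix: bool = False):
    """Count Inf/NaN values among 4-byte floats; optionally return a
    copy with illegal values replaced by 0, like 'float_scan -fix'."""
    out = bytearray(raw)
    bad = 0
    for i in range(0, len(raw) - 3, 4):
        (v,) = struct.unpack("f", raw[i:i + 4])
        if not math.isfinite(v):          # catches +/-Inf and NaN
            bad += 1
            if fix:
                out[i:i + 4] = struct.pack("f", 0.0)
    return bad, bytes(out)

raw = struct.pack("3f", 1.5, float("inf"), float("nan"))
bad, fixed = scan_floats(raw, fix=True)   # bad == 2
```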
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
from3d

Program:          from3d 
Author:           B. Douglas Ward 
Initial Release:  30 August 1996 
Latest Revision:  15 August 2001 

Usage:   from3d [options] -input fname -prefix rname
Purpose: Extract 2D image files from a 3D AFNI dataset.
Options:
-v             Print out verbose information during the run.
-nsize         Adjust size of 2D data file to be NxN, by padding
                 with zeros, where N is a power of 2.
-raw           Write images in 'raw' format (just the data bytes)
                 N.B.: there will be no header information saying
                       what the image dimensions are - you'll have
                       to get that information from the x and y
                       axis information output by 3dinfo.
-float         Write images as floats, no matter what they are in
                 the dataset itself.
-zfirst num    Set 'num' = number of first z slice to be extracted.
                 (default = 1)
-zlast num     Set 'num' = number of last z slice to be extracted.
                 (default = largest)
-tfirst num    Set 'num' = number of first time slice to be extracted.
                 (default = 1)
-tlast num     Set 'num' = number of last time slice to be extracted.
                 (default = largest)
-input fname   Read 3D dataset from file 'fname'.
                 'fname' may include a sub-brick selector list.
-prefix rname  Write 2D images using prefix 'rname'.

               (-input and -prefix are non-optional options: they)
               (must be present or the program will not execute. )

N.B.: * Image data is extracted directly from the dataset bricks.
         If a brick has a floating point scaling factor, it will NOT
         be applied.
      * Images are extracted parallel to the xy-plane of the dataset
         orientation (which can be determined by program 3dinfo).
         This is the order in which the images were input to the
         dataset originally, via to3d.
      * If either of these conditions is unacceptable, you can also
         try to use the Save:bkg function from an AFNI image window.
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
FSread_annot
Usage:  
  FSread_annot   <-input ANNOTFILE>  
                 [-col_1D annot.1D.col]  
                 [-roi_1D annot.1D.roi] 
                 [-cmap_1D annot.1D.cmap]
                 [-show_FScmap]
                 [-help]  
  Reads a FreeSurfer annotation file and outputs
  an equivalent ROI file and/or a colormap file 
  for use with SUMA.

  Required options:
     -input ANNOTFILE: Binary formatted FreeSurfer
                       annotation file.
     AND one of the optional options.
  Optional options:
     -col_1D annot.1D.col: Write a 4-column 1D color file. 
                           The first column is the node
                           index followed by r g b values.
                           This color file can be imported 
                           using the 'c' option in SUMA.
                           If no colormap was found in the
                           ANNOTFILE then the file has 2 columns
                           with the second being the annotation
                           value.
     -roi_1D annot.1D.roi: Write a 5-column 1D roi file.
                           The first column is the node
                           index, followed by its index in the
                           colormap, followed by r g b values.
                           This roi file can be imported 
                           using the 'Load' button in SUMA's
                           'Draw ROI' controller.
                           If no colormap was found in the
                           ANNOTFILE then the file has 2 columns
                           with the second being the annotation
                           value. 
     -cmap_1D annot.1D.cmap: Write a 4-column 1D color map file.
                             The first column is the color index,
                             followed by r g b and flag values.
                             The name of each color is inserted
                             as a comment because 1D files do not
                             support text data.
     -show_FScmap: Show the info of the colormap in the ANNOT file.


++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
ftosh
ftosh: convert float images to shorts, by RW Cox
Usage: ftosh [options] image_files ...

 where the image_files are in the same format to3d accepts
 and where the options are

  -prefix pname:  The output files will be named in the format
  -suffix sname:  'pname.index.sname' where 'pname' and 'sname'
  -start  si:     are strings given by the first 2 options.
  -step   ss:     'index' is a number, given by 'si+(i-1)*ss'
                  for the i-th output file, for i=1,2,...
              *** Default pname = 'sh'
              *** Default sname = nothing at all
              *** Default si    = 1
              *** Default ss    = 1

  -nsize:         Enforce the 'normal size' option, to make
                  the output images 64x64, 128x128, or 256x256.

  -scale sval:    'sval' and 'bval' are numeric values; if
  -base  bval:    sval is given, then the output images are
  -top   tval:    formed by scaling the inputs by the formula
                  'output = sval*(input-bval)'.
              *** Default sval is determined by finding
                  V = largest abs(input-bval) in all the input
                  images and then sval = tval / V.
              *** Default tval is 32000; note that tval is only
                  used if sval is not given on the command line.
              *** Default bval is 0.
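The default scaling rule can be written out directly (a sketch; ftosh works on whole image files, while `ftosh_scale` here is a hypothetical helper operating on a plain list of numbers):

```python
def ftosh_scale(values, bval=0.0, tval=32000.0, sval=None):
    """Scale floats toward short-int range: output = sval*(input-bval).
    If sval is not given, sval = tval / max|input-bval|, matching
    ftosh's stated defaults."""
    if sval is None:
        v = max(abs(x - bval) for x in values)
        sval = tval / v
    return [int(round(sval * (x - bval))) for x in values]

shorts = ftosh_scale([0.5, -1.0, 0.25])   # largest |value| maps to +/-32000
```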
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
ge_header
Usage: ge_header [-verb] file ...
Prints out information from the GE image header of each file.
Options:
 -verb: Print out some probably useless extra stuff.
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
@GetAfniOrient
Usage: @GetAfniOrient <NAME> .....
example: @GetAfniOrient Hello+orig.HEAD
returns the orient code of Hello+orig.HEAD
Ziad Saad (ziad@nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
@GetAfniPrefix
Usage: @GetAfniPrefix <NAME> .....
example: @GetAfniPrefix /Data/stuff/Hello+orig.HEAD
returns the afni prefix of name (Hello)
Wildcards are treated as regular characters:
example: @GetAfniPrefix 'AAzst1r*+orig'
returns : AAzst1r*

Ziad Saad (ziad@nih.gov)
  LBC/NIMH/ National Institutes of Health, Bethesda Maryland

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
@GetAfniView
Usage: @GetAfniView <NAME> .....
example: @GetAfniView /Data/stuff/Hello+orig.HEAD
returns the afni view of Name (+orig)
Wildcards are treated as regular characters:
example: @GetAfniView 'AAzst1r*+orig'
returns : +orig
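The prefix/view splitting that @GetAfniPrefix and @GetAfniView perform can be sketched in Python (`afni_prefix_view` is a made-up name; the real scripts are shell scripts and may handle cases not covered here):

```python
def afni_prefix_view(name: str):
    """Split an AFNI dataset name like '/Data/stuff/Hello+orig.HEAD'
    into (prefix, view). Wildcards pass through as plain characters."""
    base = name.split("/")[-1]            # drop any directory part
    for ext in (".HEAD", ".BRIK"):
        if base.endswith(ext):
            base = base[:-len(ext)]
    prefix, _, view = base.rpartition("+")
    return prefix, "+" + view

p, v = afni_prefix_view("/Data/stuff/Hello+orig.HEAD")
```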

Ziad Saad (ziad@nih.gov)
LBC/NIMH/ National Institutes of Health, Bethesda Maryland

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
Ifile
Usage: Ifile [Options] <FILE List> 

	[-nt]: Do not use time stamp to identify complete scans.
	       Complete scans are identified from 'User Variable 17'
	       in the image header.
	[-sp Pattern]: Slice acquisition pattern.
	               Sets the slice acquisition pattern.
	               The default option is alt+z.
	               See to3d -help for acceptable options.
	[-od Output_Directory]: Set the output directory in @RenamePanga.
	                        The default is afni .

	<FILE List>: Strings of wildcards defining series of
	              GE-Real Time (GERT) images to be assembled
	              as an afni brick. Example:
	              Ifile '*/I.*'
	          or  Ifile '083/I.*' '103/I.*' '123/I.*' '143/I.*'

	The program attempts to identify complete scans from the list
	of images supplied on command line and generates the commands
	necessary to turn them into AFNI bricks using the script @RenamePanga.
	If at least one complete scan is identified, a script file named GERT_Reco
	is created and executing it creates the afni bricks placed in the afni directory.

How does it work?
	With the -nt option: Ifile uses the variable 'User Variable 17' in the 
	I file's header. This variable appears to be incremented each time a new
	scan is started. (Thanks to S. Marrett for discovering the elusive variable.)
	Without -nt option: Ifile first examines the modification time for each image and 
	infers from that which images form a single scan. Consecutive images that are less 
	than T seconds apart belong to the same scan. T is set based on the mean
	time delay difference between successive images. The threshold currently
	used works for the test data that we have. If it fails for your data, let us
	know and supply us with the data. Once a set of images is grouped into a 
	scan, the sequence of slice locations is analysed and duplicate slices, missing slices,
	and incomplete volumes are detected. Sets of images that do not pass these tests
	are ignored.
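The time-gap grouping heuristic can be sketched in Python (illustrative; Ifile derives the threshold T from the mean inter-image delay, while here T is simply a parameter, and `group_scans` is a made-up name):

```python
def group_scans(mtimes, T):
    """Group sorted file modification times into scans: consecutive
    images less than T seconds apart belong to the same scan."""
    scans = [[mtimes[0]]]
    for prev, cur in zip(mtimes, mtimes[1:]):
        if cur - prev < T:
            scans[-1].append(cur)     # same scan continues
        else:
            scans.append([cur])       # gap too large: new scan starts
    return scans

# two scans: images at 0..2 s, a long pause, then images at 100..102 s
groups = group_scans([0, 1, 2, 100, 101, 102], T=10)
```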

Preserving Time Info: (not necessary with -nt option but does not hurt to preserve anyway)
	It is important to preserve the file modification time info as you copy or untar
	the data. If you neglect to do so and fail to write down where each scan ends
	and/or begins, you might have a hell of a time reconstructing your data.
	When copying image directories, use  cp -rp ???/*  and when untaring 
	the archive, use  tar --atime-preserve -xf Archive.tar  on linux.
	On Sun and SGI, tar -xf Archive.tar preserves the time info.

Future Improvements:
	Out of justifiable laziness, and for other less convincing reasons, I have left 
	Ifile and @RenamePanga separate. They can be combined into one program but its usage
	would become more complicated. At any rate, the user should not notice any difference
	since all they have to do is run the script GERT_reco that is created by Ifile.

	   Dec. 12/01 (Last modified July 24/02) SSCC/NIMH 
	Robert W. Cox(rwcox@nih.gov) and Ziad S. Saad (ziad@nih.gov)

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
im2niml
Usage: im2niml imagefile [imagefile ...]
Converts the input image(s) to a text-based NIML element
and writes the result to stdout.  Sample usage:
 aiv -p 4444 &
 im2niml zork.jpg | nicat tcp:localhost:4444
-- Author: RW Cox.
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imand
Usage: imand [-thresh #] input_images ... output_image
* Only pixels nonzero in all input images
* (and above the threshold, if given) will be output.
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imaver
Usage: imaver out_ave out_sig input_images ...
       (use - to skip output of out_ave and/or out_sig)
* Computes the mean and standard deviation, pixel-by-pixel,
   of a whole bunch of images.
* Writes output images in 'short int' format if inputs are
   short ints, otherwise output images are floating point.
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imcalc
Do arithmetic on 2D images, pixel-by-pixel.
Usage: imcalc options
where the options are:
  -datum type = Coerce the output data to be stored as the given type,
                  which may be byte, short, or float.
                  [default = datum of first input image]
  -a dname    = Read image 'dname' and call the voxel values 'a'
                  in the expression.  'a' may be any letter from 'a' to 'z'.
               ** If some letter name is used in the expression, but not
                  present in one of the image options here, then that
                  variable is set to 0.
  -expr "expression"
                Apply the expression within quotes to the input images,
                  one voxel at a time, to produce the output image.
                  ("sqrt(a*b)" to compute the geometric mean, for example)
  -output name = Use 'name' for the output image filename.
                  [default='imcalc.out']

See the output of '3dcalc -help' for details on what kinds of expressions
are possible.  Note that complex-valued images cannot be processed (byte,
short, and float are OK).
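The voxel-by-voxel evaluation that imcalc (and 3dcalc) performs can be sketched in Python (a toy version; `imcalc` here is a made-up function that handles just two images and Python-syntax expressions, not imcalc's real expression language):

```python
import math

def imcalc(expr: str, a, b):
    """Evaluate 'expr' pixel-by-pixel over two same-sized 2D images
    'a' and 'b' -- the spirit of imcalc's -expr option."""
    f = eval("lambda a, b: " + expr, {"sqrt": math.sqrt})
    return [[f(av, bv) for av, bv in zip(ra, rb)]
            for ra, rb in zip(a, b)]

# geometric mean of two 1x2 images, as in the "sqrt(a*b)" example
gm = imcalc("sqrt(a*b)", [[4.0, 9.0]], [[1.0, 4.0]])
```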
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imcutup
Usage: imcutup [options] nx ny fname1
Breaks up larger images into smaller image files of size
nx by ny pixels.  Intended as an aid to using image files
which have been catenated to make one big 2D image.
OPTIONS:
  -prefix ppp = Prefix the output files with string 'ppp'
  -xynum      = Number the output images in x-first, then y [default]
  -yxnum      = Number the output images in y-first, then x
  -x.ynum     = 2D numbering, x.y format
  -y.xnum     = 2D numbering, y.x format
For example:
  imcutup -prefix Fred 64 64 3D:-1:0:256:128:1:zork.im
will break up the big 256 by 128 image in file zork.im
into 8 images, each 64 by 64.  The output filenames would be
  -xynum  => Fred.001 Fred.002 Fred.003 Fred.004
             Fred.005 Fred.006 Fred.007 Fred.008

  -yxnum  => Fred.001 Fred.003 Fred.005 Fred.007
             Fred.002 Fred.004 Fred.006 Fred.008

  -x.ynum => Fred.001.001 Fred.002.001 Fred.003.001 Fred.004.001
             Fred.001.002 Fred.002.002 Fred.003.002 Fred.004.002

  -y.xnum => Fred.001.001 Fred.001.002 Fred.001.003 Fred.001.004
             Fred.002.001 Fred.002.002 Fred.002.003 Fred.002.004

You may want to look at the input image file with
  afni -im fname  [then open the Sagittal image window]
before deciding on what to do with the image file.

N.B.: the file specification 'fname' must result in a single
      input 2D image - multiple images can't be cut up in one
      call to this program.
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imdump
Usage: imdump input_image
* Prints out nonzero pixels in an image;
* Results to stdout; redirect (with >) to save to a file;
* Format: x-index y-index value, one pixel per line.
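
A typical use, per the redirection note above (the image filename is hypothetical):

```shell
# Save the nonzero pixels of an image to a text file,
# one "x-index y-index value" triple per line.
imdump anat.im > anat_pixels.txt
```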
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
immask
Usage: immask [-thresh #] [-mask mask_image] [-pos] input_image output_image
* Masks the input_image and produces the output_image;
* Use of -thresh # means all pixels with absolute value below # in
   input_image will be set to zero in the output_image
* Use of -mask mask_image means that only locations that are nonzero
   in the mask_image will be nonzero in the output_image
* Use of -pos means only positive pixels from input_image will be used
* At least one of -thresh, -mask, -pos must be used; more than one is OK.
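
As a sketch, the three masking options above can be combined in one call (filenames are hypothetical):

```shell
# Zero all pixels with absolute value below 50, keep only
# positive pixels, and restrict output to where mask.im is nonzero.
immask -thresh 50 -mask mask.im -pos input.im output.im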
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
Imon
Imon - monitor real-time acquisition of I-files

    This program is intended to be run during a scanning session
    on a GE scanner, to monitor the collection of I-files.  The
    user will be notified of any missing slice or any slice that
    is acquired out of order.

    It is recommended that the user run 'Imon' just after the
    scanner is first prepped, and then watch for error messages
    during the scanning session.  The user should terminate the
    program when they are done with all runs.

    Note that 'Imon' can also be run separate from scanning, either
    to verify the integrity of I-files, or to create a GERT_Reco2
    script, which is used to create AFNI datasets.

    At the present time, the user must use <CTRL-C> to terminate
    the program.

  ---------------------------------------------------------------
  usage: Imon [options] -start_dir DIR

  ---------------------------------------------------------------
  examples (no real-time options):

    Imon -start_dir 003
    Imon -help
    Imon -start_dir 003 -GERT_reco2 -quit
    Imon -start_dir 003 -nt 120 -start_file 043/I.901
    Imon -debug 2 -nice 10 -start_dir 003

  examples (with real-time options):

    Imon -start_dir 003 -rt
    Imon -start_dir 003 -rt -host pickle
    Imon -start_dir 003 -nt 120 -rt -host pickle

  ** detailed real-time example:

    This example scans data starting from directory 003, expects
    160 repetitions (TRs), and invokes the real-time processing,
    sending data to a computer called some.remote.computer.name
    (where afni is running, and which considers THIS computer to
    be trusted - see the AFNI_TRUSTHOST environment variable).

    Multiple DRIVE_AFNI commands are passed through '-drive_afni'
    options, one requesting to open an axial image window, and
    another requesting an axial graph, with 160 data points.

    See README.driver for acceptable DRIVE_AFNI commands.

    Also, multiple commands specific to the real-time plugin are
    passed via the '-rt_cmd' options.  The PREFIX command sets the
    prefix for the datasets output by afni.  The GRAPH_XRANGE and
    GRAPH_YRANGE commands set the graph dimensions for the 3D
    motion correction graph (only).  And the GRAPH_EXPR command
    is used to replace the 6 default motion correction graphs with
    a single graph, according to the given expression, the square
    root of the average squared entry of the 3 rotation parameters,
    roll, pitch and yaw, ignoring the 3 shift parameters, dx, dy
    and dz.

    See README.realtime for acceptable DRIVE_AFNI commands.

    Imon                                                   \
       -start_dir 003                                      \
       -nt 160                                             \
       -rt                                                 \
       -host some.remote.computer.name                     \
       -drive_afni 'OPEN_WINDOW axialimage'                \
       -drive_afni 'OPEN_WINDOW axialgraph pinnum=160'     \
       -rt_cmd 'PREFIX eat.more.cheese'                    \
       -rt_cmd 'GRAPH_XRANGE 160'                          \
       -rt_cmd 'GRAPH_YRANGE 1.02'                         \
       -rt_cmd 'GRAPH_EXPR sqrt((d*d+e*e+f*f)/3)'            

  ---------------------------------------------------------------
  notes:

    - Once started, this program exits only when a fatal error
      occurs (single missing or out of order slices are not
      considered fatal).

      ** This has been modified.  The '-quit' option tells Imon
         to terminate once it runs out of new data to use.

    - To terminate this program, use <CTRL-C>.

  ---------------------------------------------------------------
  main option:

    -start_dir DIR     : (REQUIRED) specify starting directory

        e.g. -start_dir 003

        The starting directory, DIR, must be of the form 00n,
        where n is a digit.  The program then monitors all
        directories of the form ??n, created by the GE scanner.

        For instance, with the option '-start_dir 003', this
        program watches for new directories 003, 023, 043, etc.

  ---------------------------------------------------------------
  real-time options:

    -rt                : specify to use the real-time facility

        With this option, the user tells 'Imon' to use the real-time
        facility, passing each volume of images to an existing
        afni process on some machine (as specified by the '-host'
        option).  Whenever a new volume is acquired, it will be
        sent to the afni program for immediate update.

        Note that afni must also be started with the '-rt' option
        to make use of this.

        Note also that the '-host HOSTNAME' option is not required
        if afni is running on the same machine.

    -drive_afni CMND   : send 'drive afni' command, CMND

        e.g.  -drive_afni 'OPEN_WINDOW axialimage'

        This option is used to pass a single DRIVE_AFNI command
        to afni.  For example, 'OPEN_WINDOW axialimage' will open
        such an axial view window on the afni controller.

        Note: the command 'CMND' must be given in quotes, so that
              the shell will send it as a single parameter.

        Note: this option may be used multiple times.

        See README.driver for more details.

    -host HOSTNAME     : specify the host for afni communication

        e.g.  -host mycomputer.dot.my.network
        e.g.  -host 127.0.0.127
        e.g.  -host mycomputer
        the default host is 'localhost'

        The specified HOSTNAME represents the machine that is
        running afni.  Images will be sent to afni on this machine
        during the execution of 'Imon'.

        Note that the environment variable AFNI_TRUSTHOST must be
        set on the machine running afni.  Set this equal to the
        name of the machine running Imon (so that afni knows to
        accept the data from the sending machine).

    -rev_byte_order   : pass the reverse of the BYTEORDER to afni

        Reverse the byte order that is given to afni.  In case the
        detected byte order is not what is desired, this option
        can be used to reverse it.

        See the (obsolete) '-swap' option for more details.

    -rt_cmd COMMAND   : send COMMAND(s) to realtime plugin

        e.g.  -rt_cmd 'GRAPH_XRANGE 120'
        e.g.  -rt_cmd 'GRAPH_XRANGE 120 \n GRAPH_YRANGE 2.5'

        This option is used to pass commands to the realtime
        plugin.  For example, 'GRAPH_XRANGE 120' will set the
        x-scale of the motion graph window to 120 (repetitions).

        Note: the command 'COMMAND' must be given in quotes, so
        that the shell will send it as a single parameter.

        Note: this option may be used multiple times.

        See README.realtime for more details.

    -swap  (obsolete) : swap data bytes before sending to afni

        Since afni may be running on a different machine, the byte
        order may differ there.  This option will force the bytes
        to be reversed, before sending the data to afni.

        ** As of version 3.0, this option should not be necessary.
           'Imon' detects the byte order of the image data, and then
           passes that information to afni.  The realtime plugin
           will (now) decide whether to swap bytes in the viewer.

           If for some reason the user wishes to reverse the order
           from what is detected, '-rev_byte_order' can be used.

    -zorder ORDER     : slice order over time

        e.g. -zorder alt
        e.g. -zorder seq
        the default is 'alt'

        This option allows the user to alter the slice
        acquisition order in real-time mode, similar to the slice
        pattern of the '-sp' option.  The main differences are:
            o  only two choices are presently available
            o  the syntax is intentionally different (from that
               of 'to3d' or the '-sp' option)

        ORDER values:
            alt   : alternating in the Z direction (over time)
            seq   : sequential in the Z direction (over time)

  ---------------------------------------------------------------
  other options:

    -debug LEVEL       : show debug information during execution

        e.g.  -debug 2
        the default level is 1, the domain is [0,3]
        the '-quiet' option is equivalent to '-debug 0'

    -help              : show this help information

    -hist              : display a history of program changes

    -nice INCREMENT    : adjust the nice value for the process

        e.g.  -nice 10
        the default is 0, and the maximum is 20
        a superuser may use down to the minimum of -19

        A positive INCREMENT to the nice value of a process will
        lower its priority, allowing other processes more CPU
        time.

    -nt VOLUMES_PER_RUN : set the number of time points per run

        e.g.  -nt 120

        With this option, if a run stalls before the specified
        VOLUMES_PER_RUN is reached (notably including the first
        run), the user will be notified.

        Without this option, Imon will compute the expected number
        of time points per run based on the first run (and will
        allow the value to increase based on subsequent runs).
        Therefore Imon would not detect a stalled first run.

    -quiet             : show only errors and final information

    -quit              : quit when there is no new data

        With this option, the program will terminate once a delay
        in new data occurs.  This is most appropriate to use when
        the image files have already been collected.

    -start_file S_FILE : have Imon process starting at S_FILE

        e.g.  -start_file 043/I.901

        With this option, any earlier I-files will be ignored
        by Imon.  This is a good way to start processing a later
        run, if it is desired not to look at the earlier data.

        In this example, all files in directories 003 and 023
        would be ignored, along with everything in 043 up through
        I.900.  So 043/I.901 might be the first file in run 2.

    -version           : show the version information

  ---------------------------------------------------------------
  GERT_Reco2 options:

    -GERT_Reco2        : output a GERT_Reco2 script

        Create a script called 'GERT_Reco2', similar to the one
        that Ifile creates.  This script may be run to create the
        AFNI datasets corresponding to the I-files.

    -gert_outdir OUTPUT_DIR  : set output directory in GERT_Reco2

        e.g. -gert_outdir subject_A7
        e.g. -od subject_A7
        the default is '-gert_outdir afni'

        This will add '-od OUTPUT_DIR' to the @RenamePanga command
        in the GERT_Reco2 script, creating new datasets in the
        OUTPUT_DIR directory, instead of the 'afni' directory.

    -sp SLICE_PATTERN  : set output slice pattern in GERT_Reco2

        e.g. -sp alt-z
        the default is 'alt+z'

        This option allows the user to alter the slice
        acquisition pattern in the GERT_Reco2 script.

        See 'to3d -help' for more information.

  ---------------------------------------------------------------

  Author: R. Reynolds - version 3.3a (March 22, 2005)

                        (many thanks to R. Birn)

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imreg
Usage: imreg [options] base_image image_sequence ...
 * Registers each 2D image in 'image_sequence' to 'base_image'.
 * If 'base_image' = '+AVER', will compute the base image as
   the average of the images in 'image_sequence'.
 * If 'base_image' = '+count', will use the count-th image in the
   sequence as the base image.  Here, count is 1,2,3, ....

OUTPUT OPTIONS:
  -nowrite        Don't write outputs, just print progress reports.
  -prefix pname   The output files will be named in the format
  -suffix sname   'pname.index.sname' where 'pname' and 'sname'
  -start  si      are strings given by the first 2 options.
  -step   ss      'index' is a number, given by 'si+(i-1)*ss'
                  for the i-th output file, for i=1,2,...
                *** Default pname = 'reg.'
                *** Default sname = nothing at all
                *** Default si    = 1
                *** Default ss    = 1

  -flim           Write output in mrilib floating point format
                  (which can be converted to shorts using program ftosh).
                *** Default is to write images in format of first
                    input file in the image_sequence.

  -quiet          Don't write progress report messages.
  -debug          Write lots of debugging output!

  -dprefix dname  Write files 'dname'.dx, 'dname'.dy, 'dname'.phi
                    for use in time series analysis.

ALIGNMENT ALGORITHMS:
  -bilinear       Uses bilinear interpolation during the iterative
                    adjustment procedure, rather than the default
                    bicubic interpolation. NOT RECOMMENDED!
  -modes c f r    Uses interpolation modes 'c', 'f', and 'r' during
                    the coarse, fine, and registration phases of the
                    algorithm, respectively.  The modes can be selected
                    from 'bilinear', 'bicubic', and 'Fourier'.  The
                    default is '-modes bicubic bicubic bicubic'.
  -mlcF           Equivalent to '-modes bilinear bicubic Fourier'.

  -wtim filename  Uses the image in 'filename' as a weighting factor
                    for each voxel (the larger the value the more
                    importance is given to that voxel).

  -dfspace[:0]    Uses the 'iterated differential spatial' method to
                    align the images.  The optional :0 indicates to
                    skip the iteration of the method, and to use the
                    simpler linear differential spatial alignment method.
                     ACCURACY: displacements of at most a few pixels.
                *** This is the default method (without the :0).

  -cmass            Initialize the translation estimate by aligning
                    the centers of mass of the images.
              N.B.: The reported shifts from the registration algorithm
                    do NOT include the shifts due to this initial step.

The next two options are used to play with the -dfspace algorithm,
which has a 'coarse' fit phase and a 'fine' fit phase:

  -fine blur dxy dphi  Set the 3 'fine' fit parameters:
                         blur = FWHM of image blur prior to registration,
                                  in pixels [must be > 0];
                         dxy  = convergence tolerance for translations,
                                  in pixels;
                         dphi = convergence tolerance for rotations,
                                  in degrees.

  -nofine              Turn off the 'fine' fit algorithm. By default, the
                         algorithm is on, with parameters 1.0, 0.07, 0.21.
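
A hedged example run, using the '+AVER' base-image form documented above (the epi.* filenames are hypothetical):

```shell
# Register each 2D image to the average of the sequence, using
# the default iterated dfspace method; outputs use the default
# 'reg.' prefix described under OUTPUT OPTIONS.
imreg '+AVER' epi.001 epi.002 epi.003
```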
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imrotate
Usage: imrotate [-linear | -Fourier] dx dy phi input_image output_image
Shifts and rotates an image:
  dx pixels rightwards (not necessarily an integer)
  dy pixels downwards
  phi degrees clockwise
  -linear means to use bilinear interpolation (default is bicubic)
  -Fourier means to use Fourier interpolation
Values outside the input_image are taken to be zero.
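
For example (image filenames hypothetical):

```shell
# Shift a slice 3.5 pixels rightwards, 0 pixels downwards, and
# rotate it 30 degrees clockwise, using Fourier interpolation.
imrotate -Fourier 3.5 0 30 slice.im slice_rot.im
```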
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imstack
Usage: imstack [options] image_filenames ...
Stacks up a set of 2D images into one big file (a la MGH).
Options:
  -datum type   Converts the output data file to be 'type',
                  which is either 'short' or 'float'.
                  The default type is the type of the first image.
  -prefix name  Names the output files to be 'name'.b'type' and 'name'.hdr.
                  The default name is 'obi-wan-kenobi'.
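
A sketch of a typical call (the slice filenames are hypothetical):

```shell
# Stack a set of 2D slices into one float-valued file pair,
# stack.bfloat and stack.hdr.
imstack -datum float -prefix stack slice*.im
```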
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imstat
Calculation of statistics of one or more images.
Usage: imstat [-nolabel] [-pixstat prefix] [-quiet] image_file ...
  -nolabel        = don't write labels on each file's summary line
  -quiet          = don't print statistics for each file
  -pixstat prefix = if more than one image file is given, then
                     'prefix.mean' and 'prefix.sdev' will be written
                     as the pixel-wise statistics images of the whole
                     collection.  These images will be in the 'flim'
                     floating point format.  [This option only works
                     on 2D images!]
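
For instance, with several 2D images (filenames hypothetical):

```shell
# Print per-file statistics, and also write the pixel-wise
# stats.mean and stats.sdev images (in 'flim' format).
imstat -pixstat stats run1.im run2.im run3.im
```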
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
imupsam
Usage: imupsam [-A] n input_image output_image
* Upsamples the input 2D image by a factor of n and
    writes result into output_image; n must be an
    integer in the range 2..30.
* 7th order polynomial interpolation is used in each
    direction.
* Inputs can be complex, float, short, PGM, PPM, or JPG.
* If input_image is in color (PPM or JPG), output will
    be PPM unless output_image ends in '.jpg'.
* If output_image is '-', the result will be written
    to stdout (so you could pipe it into something else).
* The '-A' option means to write the result in ASCII
    format: all the numbers for the file are output,
    and nothing else (no header info).
* Author: RW Cox -- 16 April 1999.
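
A minimal example (the image filenames are hypothetical):

```shell
# Upsample a PGM image by a factor of 4 in each direction;
# writing to '-' instead would send the result to stdout.
imupsam 4 small.pgm big.pgm
```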
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
inspec
Usage: inspec <-spec specfile> [-detail d] [-h/-help]
Outputs information found from specfile.
    -spec specfile: specfile to be read
    -detail d: level of output detail; default is 1.
               Available levels are 1, 2 and 3.
    -h or -help: This message here.
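
For example (the spec filename is hypothetical):

```shell
# Report the contents of a SUMA spec file at the most
# verbose detail level.
inspec -spec subject_lh.spec -detail 3
```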
++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

      Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov 
     Dec 2 03

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
IsoSurface
Usage: A program to perform isosurface extraction from a volume.
  Based on code by Thomas Lewiner (see below).

  IsoSurface  < -input VOL | -shape S GR >
              < -isoval V | -isorange V0 V1 | -isocmask MASK_COM >
              [< -o_TYPE PREFIX>]
              [< -debug DBG >]

  Mandatory parameters:
     You must use one of the following two options:
     -input VOL: Input volume.
     -shape S GR: Built in shape.
                  where S is the shape number, 
                  between 0 and 9 (see below). 
                  and GR is the grid size (like 64).
                  If you use -debug 1 with this option
                  a .1D volume called mc_shape*.1D is
                  written to disk. Watch the debug output
                  for a command suggesting how to turn
                  this 1D file into a BRIK volume for viewing
                  in AFNI.
     You must use one of the following iso* options:
     -isoval V: Create isosurface where volume = V
     -isorange V0 V1: Create isosurface where V0 <= volume < V1
     -isocmask MASK_COM: Create isosurface where MASK_COM != 0
        For example: -isocmask '-a VOL+orig -expr (1-bool(a-V))' 
        is equivalent to using -isoval V. 
     NOTE: -isorange and -isocmask are only allowed with -xform mask
            See -xform below for details.

  Optional Parameters:
     -xform XFORM:  Transform to apply to volume values
                    before searching for sign change
                    boundary. XFORM can be one of:
            mask: values that meet the iso* conditions
                  are set to 1. All other values are set
                  to -1. This is the default XFORM.
            shift: subtract V from the dataset and then 
                   search for 0 isosurface. This has the
                   effect of constructing the V isosurface
                   if your dataset has a continuum of values.
                   This option can only be used with -isoval V.
            none: apply no transforms. This assumes that
                  your volume has a continuum of values 
                  from negative to positive and that you
                   are seeking the 0 isosurface.
                  This option can only be used with -isoval 0.
     -o_TYPE PREFIX: prefix of output surface.
        where TYPE specifies the format of the surface
        and PREFIX is, well, the prefix.
        TYPE is one of: fs, 1d (or vec), sf, ply.
        Default is: -o_ply 

 Specifying output surfaces using -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
            the name of the parent surface for the patch.
            using the -ipar_TYPE option.
            This option is only for ConvertSurface 
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.


     -debug DBG: debug levels of 0 (default), 1, 2, 3.
        This is no Rick Reynolds debug, which is oft nicer
        than the results, but it will do.

  Built In Shapes:
     0: Cushin
     1: Sphere
     2: Plane
     3: Cassini
     4: Blooby
     5: Chair
     6: Cyclide
     7: 2 Torus
     8: mc case
     9: Drip

  NOTE:
  The code for the heart of this program is a translation of:
  Thomas Lewiner's C++ implementation of the algorithm in:
  Efficient Implementation of Marching Cubes' Cases with Topological Guarantees
  by Thomas Lewiner, Hélio Lopes, Antônio Wilson Vieira and Geovan Tavares 
  in Journal of Graphics Tools. 
  http://www-sop.inria.fr/prisme/personnel/Thomas.Lewiner/JGT.pdf
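
Putting the pieces above together, a hedged example run (the dataset name is hypothetical):

```shell
# Extract the isosurface where the mask volume equals 1, using the
# default 'mask' transform, and write it in PLY format as brain.ply.
IsoSurface -input mask+orig -isoval 1 -o_ply brain
```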

  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
MakeColorMap
Usage1: 
MakeColorMap <-fn Fiducials_Ncol> [-pos] [-ah prefix] [-h/-help]
    Creates a colormap of N colors that contains the fiducial colors.
    -fn Fiducials_Ncol: Fiducial colors and their indices in the color
                        map are listed in file Fiducials_Ncol.
       Each row contains 4 tab delimited values:
       R G B i
       R G B values are between 0 and 1 and represent the 
       i-th color in the colormap. i should be between 0 and
       N-1, N being the total number of colors in the colormap.

Usage2: 
MakeColorMap <-f Fiducials> <-nc N> [-sl] [-ah prefix] [-h/-help]
    Creates a colormap of N colors that contains the fiducial colors.
    -f Fiducials:  Fiducial colors are listed in an ascii file Fiducials. 
       Each row contains 3 tab delimited R G B values between 0 and 1.
    -nc N: Total number of colors in the color map.
    -sl: (optional, default is NO) if used, the last color in the Fiducial 
       list is omitted. This is useful in creating cyclical color maps.

Usage3: 
MakeColorMap <-std MapName>
    Returns one of SUMA's standard colormaps. Choose from:
    rgybr20, ngray20, gray20, bw20, bgyr19, 
    matlab_default_byr64, roi128, roi256, roi64

Common options to all usages:
    -ah prefix: (optional, AFNI hex format;
                 default is RGB values in decimal form)
       use this option if you want a color map formatted to fit 
       in AFNI's .afnirc file. The colormap is written out as 
      prefix_01 = #xxxxxxx 
      prefix_02 = #xxxxxxx
       etc...
    -h or -help: displays this help message.

Example Usage 1: Creating a colormap of 20 colors that goes from 
Red to Green to Blue to Yellow to Red.

   The file FidCol_Nind contains the following:
   1 0 0 0
   0 1 0 5
   0 0 1 10
   1 1 0 15
   1 0 0 19

   The following command will generate the RGB colormap in decimal form:
   MakeColorMap -fn FidCol_Nind 

   The following command will generate the colormap and write it as 
   an AFNI color palette file:
   MakeColorMap -fn FidCol_Nind -ah TestPalette > TestPalette.pal

Example Usage 2: Creating a cyclical version of the colormap in usage 1:

   The file FidCol contains the following:
   1 0 0
   0 1 0
   0 0 1
   1 1 0
   1 0 0

   The following command will generate the RGB colormap in decimal form:
   MakeColorMap -f FidCol -sl -nc 20 

Example Usage 3: MakeColorMap -std ngray20 

To read in a new colormap into AFNI, either paste the contents of 
TestPalette.pal in your .afnirc file or read the .pal file using 
AFNI as follows:
1- run afni
2- Define Function --> right click on Inten (over colorbar) 
   --> Read in palette (choose TestPalette.pal)
3- set the #colors chooser (below colorbar) to 20 (the number of colors in 
   TestPalette.pal).
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 
++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

    Ziad S. Saad & Rick R. Reynolds SSCC/NIMH/NIH ziad@nih.gov    Tue Apr 23 14:14:48 EDT 2002

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
@make_stim_file
@make_stim_file - create a time series file, suitable for 3dDeconvolve

    This script reads in column headers and stimulus times for
    each header (integers), and computes a 'binary' file (all
    0s and 1s) with column headers, suitable for use as input to
    3dDeconvolve.

    The user must specify an output file on the command line (using
    -outfile), and may specify a maximum repetition number for rows
    of output (using -maxreps).
------------------------------
  Usage: @make_stim_file [options] -outfile OUTFILE

  examples:

    @make_stim_file -outfile green_n_gold
    @make_stim_file -outfile green_n_gold < my_input_file
    @make_stim_file -maxreps 200 -outfile green_n_gold -headers
    @make_stim_file -help
    @make_stim_file -maxreps 200 -outfile green_n_gold -debug 1
------------------------------
  options:

    -help            : show this help information

    -debug LEVEL     : print debug information along the way
          e.g. -debug 1
          the default is 0, max is 2

    -outfile OUTFILE : (required) results are sent to this output file
          e.g. -outfile green.n.gold.out

    -maxreps REPS    : use REPS as the maximum repetition time
          e.g. -maxreps 200
          the default is to use the maximum rep time from the input

          This option basically pads the output columns with 0s,
          so that each column has REPS rows (of 1s and 0s).

    -no_headers      : do not include headers in output file
          e.g. -no_headers
          the default is print column headers (# commented out)
------------------------------
  Notes:

    1. It is probably easiest to use redirection from an input file
       for execution of the program.  That way, mistakes can be more
       easily fixed and retried.  See 'Sample execution 2'.

    2. Since most people start off with stimulus data in columns, and
       since this program requires input in rows for each header, it
       may be easiest to go through a few initial steps:
           - make sure all data is in integer form
           - make sure all blank spaces are filled with 0
           - save the file to an ascii data file (without headers)
           - use AFNI program '1dtranspose' to convert column data
             to row format
           - add the column headers back to the top of the ascii file

    3. The -maxreps option is recommended when using redirection, so
       that the user does not have to add the value to the bottom of
       the file.
------------------------------
  Sample execution 1: (typing input on command line)

    a. executing the following command:

       @make_stim_file -outfile red_blue_out

    b. and providing input data as follows:

       headers -> red blue
       'red' -> 2 4
       'blue' -> 2 3 5
       maxreps -> 6

    c. will produce 'red_blue_out', containing:

       red blue
       0 0
       1 1
       0 1
       1 0
       0 1
       0 0
------------------------------
  Sample execution 2: (using redirection)

    a. given input file 'my_input_file': (a text file with input data)

       red blue
       2 4
       2 3 5
       6

    b. run the script using redirection with -maxreps option

      @make_stim_file -maxreps 6 -outfile red_blue_out < my_input_file

    c. now there exists output file 'red_blue_out':

       red blue
       0 0
       1 1
       0 1
       1 0
       0 1
       0 0
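The transformation shown in both samples (stimulus-time lists in, 0/1 columns out) can be sketched in a few lines of Python. This is a hypothetical re-implementation for illustration only; 'make_stim_columns' is not part of AFNI, and the real script is a shell wrapper:

```python
# Hypothetical helper (not part of AFNI): rep time r gets a 1 in a
# stimulus column exactly when r appears in that stimulus's list.
def make_stim_columns(stim_times, maxreps):
    headers = list(stim_times)
    rows = [[1 if rep in stim_times[h] else 0 for h in headers]
            for rep in range(1, maxreps + 1)]
    return headers, rows

headers, rows = make_stim_columns({'red': [2, 4], 'blue': [2, 3, 5]}, 6)
print(' '.join(headers))                  # red blue
for row in rows:
    print(' '.join(str(v) for v in row))  # matches 'red_blue_out' above
```

Note how -maxreps 6 pads both columns with trailing 0s out to 6 rows, as described under the option.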
------------------------------
  R. Reynolds
------------------------------
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
MapIcosahedron
Usage: MapIcosahedron <-spec specFile> 
                      [-rd recDepth] [-ld linDepth] 
                      [-morph morphSurf] 
                      [-it numIt] [-prefix fout] 
                      [-verb] [-help]

Creates new versions of the original-mesh surfaces using the mesh
of an icosahedron. 

   -spec specFile: spec file containing original-mesh surfaces
        including the spherical and warped spherical surfaces.

   -rd recDepth: recursive (binary) tessellation depth for icosahedron.
        (optional, default:3) See CreateIcosahedron for more info.

   -ld linDepth: number of edge divides for linear icosahedron tessellation
        (optional, default uses binary tessellation).
        See CreateIcosahedron -help for more info.

   *Note: Enter -1 for recDepth or linDepth to let the program
          choose a depth that best approximates the number of nodes in
          original-mesh surfaces.
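The node counts behind these depth options follow the standard icosahedral-mesh formulas: recursive depth d gives 10*4**d + 2 nodes, and linear edge division n gives 10*n**2 + 2. A sketch of how a linear depth could be chosen to approximate an original mesh's node count, in the spirit of the '-1' convention above ('ico_nodes_linear' and 'best_linear_depth' are hypothetical names, not MapIcosahedron's internals):

```python
# Standard icosahedron tessellation node count for linear edge division n.
def ico_nodes_linear(n):
    return 10 * n * n + 2

# Pick the division whose node count is closest to a target (a sketch of
# what '-ld -1' is described as doing; the real choice may differ).
def best_linear_depth(target_nodes, max_n=200):
    return min(range(1, max_n + 1),
               key=lambda n: abs(ico_nodes_linear(n) - target_nodes))

print(ico_nodes_linear(64))      # 40962 nodes for '-ld 64'
print(best_linear_depth(36000))  # 60 (36002 nodes)
```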

   -morph morphSurf: surface state to which icosahedron is inflated 
        acceptable inputs are 'sphere.reg' and 'sphere'
        (optional, default uses sphere.reg over sphere).

   -it numIt: number of smoothing iterations
        (optional, default none).

   -prefix fout: prefix for output files.
        (optional, default MapIco)

   NOTE: See program SurfQual -help for more info on the following 2 options.
   [-sph_check]: Run tests for checking the spherical surface (sphere.asc)
                The program exits after the checks.
                This option is for debugging FreeSurfer surfaces only.

   [-sphreg_check]: Run tests for checking the spherical surface (sphere.reg.asc)
                The program exits after the checks.
                This option is for debugging FreeSurfer surfaces only.

   -sph_check and -sphreg_check are mutually exclusive.

   -verb: When specified, includes original-mesh surfaces 
       and icosahedron in output spec file.
       (optional, default does not include original-mesh surfaces)

NOTE 1: The algorithm used by this program is applicable
      to any surfaces warped to a spherical coordinate
      system. However for the moment, the interface for
      this algorithm only deals with FreeSurfer surfaces.
      This is only due to user demand and available test
      data. If you want to apply this algorithm using surfaces
      created by other programs such as SureFit and Caret, 
      send ziad@nih.gov a note and some test data.

NOTE 2: At times, the standard-mesh surfaces are visibly
      distorted in some locations from the original surfaces.
      So far, this has only occurred when original spherical 
      surfaces had topological errors in them. 
      See SurfQual -help and SUMA's online documentation 
      for more detail.

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005


       Brenna D. Argall LBC/NIMH/NIH brenna.argall@nih.gov 
       Ziad S. Saad     SSC/NIMH/NIH ziad@nih.gov
          Fri Sept 20 2002

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
mayo_analyze
Usage: mayo_analyze file.hdr ...
Prints out info from the Mayo Analyze 7.5 header file(s)
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
mpeg_encode
Usage:  mpeg_encode [options] param_file
Options:
	-stat stat_file:  append stats to stat_file
	-quiet n:  don't report remaining time for at least n seconds
	-realquiet:  output nothing at all if successful
	-no_frame_summary:  suppress frame summary lines
	-float_dct:  use more accurate floating point DCT
	-gop gop_num:  encode only the numbered GOP
	-combine_gops:  combine GOP files instead of encode
	-frames first_frame last_frame:  encode only the specified frames
	-combine_frames:  combine frame files instead of encode
	-nice:  run slave processes nicely
	-max_machines num_machines:  use at most num_machines machines
	-snr:  print signal-to-noise ratio
	-bit_rate_info rate_file:  put bit rate in specified file
	-mv_histogram:  show histograms of motion vectors
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
mpegtoppm
Usage:  mpegtoppm [-prefix ppp] file.mpg
Writes files named 'ppp'000001.ppm, etc.
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
mritopgm
Converts an image to raw pgm format.
Results go to stdout and should be redirected.
Usage:   mritopgm [-pp] input_image
Example: mritopgm fred.001 | ppmtogif > fred.001.gif

  The '-pp' option expresses a clipping percentage.
  That is, if this option is given, the pp%-brightest
  pixel is mapped to white; all above it are also white,
  and all below are mapped linearly down to black.
  The default is that pp=100; that is, the brightest
  pixel is white.  A useful operation for many MR images is
    mritopgm -99 fred.001 | ppmtogif > fred.001.gif
  This will clip off the top 1% of voxels, which are often
  super-bright due to arterial inflow effects, etc.
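The '-pp' clipping rule can be sketched in Python (a hypothetical 'clip_to_pgm' for illustration; the real mritopgm is a C program and its rounding at the percentile boundary may differ):

```python
# Sketch of '-pp' clipping: the pp%-brightest pixel maps to white (255),
# everything above it clips to white, everything below scales linearly.
def clip_to_pgm(pixels, pp=100):
    ordered = sorted(pixels)
    white = ordered[max(0, int(len(ordered) * pp / 100.0) - 1)]
    return [min(255, int(255 * v / white)) if white else 0 for v in pixels]

print(clip_to_pgm([0, 50, 100, 200], pp=100))  # [0, 63, 127, 255]
print(clip_to_pgm([0, 50, 100, 200], pp=75))   # [0, 127, 255, 255]
```

With pp=75 the 100-valued pixel becomes white and the 200-valued pixel clips, mirroring the 'mritopgm -99' example above.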
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
nifti1_test
Usage: nifti1_test [-n2|-n1|-na|-a2] infile [prefix]

 If prefix is given, then the options mean:
  -a2 ==> write an ANALYZE 7.5 file pair: prefix.hdr/prefix.img
  -n2 ==> write a NIFTI-1 file pair: prefix.hdr/prefix.img
  -n1 ==> write a NIFTI-1 single file: prefix.nii
  -na ==> write a NIFTI-1 ASCII+binary file: prefix.nia
  -za2 ==> write an ANALYZE 7.5 file pair: prefix.hdr.gz/prefix.img.gz
  -zn2 ==> write a NIFTI-1 file pair: prefix.hdr.gz/prefix.img.gz
  -zn1 ==> write a NIFTI-1 single file: prefix.nii.gz
  -zna ==> write a NIFTI-1 ASCII+binary file: prefix.nia.gz
 The default is '-n1'.

 If prefix is not given, then the header info from infile
 file is printed to stdout.

 Please note that the '.nia' format is NOT part of the
 NIFTI-1 specification, but is provided mostly for ease
 of visualization (e.g., you can edit a .nia file and
 change some header fields, then rewrite it as .nii)

sizeof(nifti_1_header)=348
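That 348-byte size is itself the first field of every NIFTI-1 file: 'sizeof_hdr' is an int32 that must equal 348. A minimal sketch of using it as a sanity check (assumes a little-endian file; 'looks_like_nifti1' is a hypothetical helper, not part of nifti1_test):

```python
import struct

NIFTI1_HEADER_SIZE = 348  # the sizeof(nifti_1_header) printed above

# Check the first four bytes of a file against the required sizeof_hdr.
def looks_like_nifti1(first_four_bytes):
    (sizeof_hdr,) = struct.unpack('<i', first_four_bytes)
    return sizeof_hdr == NIFTI1_HEADER_SIZE

print(looks_like_nifti1(struct.pack('<i', 348)))  # True
```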
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
nifti_stats
Demo program for computing NIfTI statistical functions.
Usage: nifti_stats [-q|-d|-1|-z] val CODE [p1 p2 p3]
 val can be a single number or in the form bot:top:step.
 default ==> output p = Prob(statistic < val).
  -q     ==> output is 1-p.
  -d     ==> output is density.
  -1     ==> output is x such that Prob(statistic < x) = val.
  -z     ==> output is z such that Normal cdf(z) = p(val).
  -h     ==> output is z such that 1/2-Normal cdf(z) = p(val).
 Allowable CODEs:
  CORREL      TTEST       FTEST       ZSCORE      CHISQ       BETA      
  BINOM       GAMMA       POISSON     NORMAL      FTEST_NONC  CHISQ_NONC
  LOGISTIC    LAPLACE     UNIFORM     TTEST_NONC  WEIBULL     CHI       
  INVGAUSS    EXTVAL      PVAL        LOGPVAL     LOG10PVAL 
 Parameters following CODE are the distributional parameters, as needed.

Results are written to stdout, 1 number per output line.
Example (piping output into AFNI program 1dplot):
 nifti_stats -d 0:4:.001 INVGAUSS 1 3 | 1dplot -dx 0.001 -stdin
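For the NORMAL code, the default output Prob(statistic < val) can be cross-checked against the standard normal CDF computed from the error function (a sketch for verification only; nifti_stats uses its own library routines):

```python
from math import erf, sqrt

# Standard normal CDF: Prob(Z < x) = (1 + erf(x/sqrt(2))) / 2.
def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

print(round(normal_cdf(0.0), 3))   # 0.5
print(round(normal_cdf(1.96), 3))  # 0.975
```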

Author - RW Cox - SSCC/NIMH/NIH/DHHS/USA/EARTH - March 2004

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
nifti_tool
nifti_tool

   - display, modify or compare nifti structures in datasets
   - copy a dataset by selecting a list of volumes from the original
   - copy a dataset, collapsing any dimensions, each to a single index
   - display a time series for a voxel, or more generally, the data
       from any collapsed image, in ASCII text

  This program can be used to display information from nifti datasets,
  to modify information in nifti datasets, to look for differences
  between two nifti datasets (like the UNIX 'diff' command), and to copy
  a dataset to a new one, either by restricting any dimensions, or by
  copying a list of volumes (the time dimension) from a dataset.

  Only one action type is allowed, e.g. one cannot modify a dataset
  and then take a 'diff'.

  one can display - any or all fields in the nifti_1_header structure
                  - any or all fields in the nifti_image structure
                  - the extensions in the nifti_image structure
                  - the time series from a 4-D dataset, given i,j,k
                  - the data from any collapsed image, given dims. list

  one can modify  - any or all fields in the nifti_1_header structure
                  - any or all fields in the nifti_image structure
          add/rm  - any or all extensions in the nifti_image structure
          remove  - all extensions and descriptions from the datasets

  one can compare - any or all field pairs of nifti_1_header structures
                  - any or all field pairs of nifti_image structures

  one can copy    - an arbitrary list of dataset volumes (time points)
                  - a dataset, collapsing across arbitrary dimensions
                    (restricting those dimensions to the given indices)

  Note: to learn about which fields exist in either of the structures,
        or to learn a field's type, size of each element, or the number
        of elements in the field, use either the '-help_hdr' option, or
        the '-help_nim' option.  No further options are required.
  ------------------------------

  usage styles:

    nifti_tool -help                 : show this help
    nifti_tool -help_hdr             : show nifti_1_header field info
    nifti_tool -help_nim             : show nifti_image field info

    nifti_tool -ver                  : show the current version
    nifti_tool -hist                 : show the modification history
    nifti_tool -nifti_ver            : show the nifti library version
    nifti_tool -nifti_hist           : show the nifti library history


    nifti_tool -copy_brick_list -infiles f1'[indices...]'
    nifti_tool -copy_collapsed_image I J K T U V W -infiles f1

    nifti_tool -disp_hdr [-field FIELDNAME] [...] -infiles f1 ...
    nifti_tool -disp_nim [-field FIELDNAME] [...] -infiles f1 ...
    nifti_tool -disp_exts -infiles f1 ...
    nifti_tool -disp_ts I J K [-dci_lines] -infiles f1 ...
    nifti_tool -disp_ci I J K T U V W [-dci_lines] -infiles f1 ...

    nifti_tool -mod_hdr  [-mod_field FIELDNAME NEW_VAL] [...] -infiles f1 ...
    nifti_tool -mod_nim  [-mod_field FIELDNAME NEW_VAL] [...] -infiles f1 ...

    nifti_tool -add_afni_ext    'extension in quotes' [...] -infiles f1 ...
    nifti_tool -add_comment_ext 'extension in quotes' [...] -infiles f1 ...
    nifti_tool -rm_ext INDEX [...] -infiles f1 ...
    nifti_tool -strip_extras -infiles f1 ...

    nifti_tool -diff_hdr [-field FIELDNAME] [...] -infiles f1 f2
    nifti_tool -diff_nim [-field FIELDNAME] [...] -infiles f1 f2

  ------------------------------

  options for copy actions:

    -copy_brick_list   : copy a list of volumes to a new dataset
    -cbl               : (a shorter, alternative form)

       This action allows the user to copy a list of volumes (over time)
       from one dataset to another.  The listed volumes can be in any
       order and contain repeats, but are of course restricted to
       the set of values {0, 1, ..., nt-1}, from dimension 4.

       This option is a flag.  The index list is specified with the input
       dataset, contained in square brackets.  Note that square brackets
       are special to most UNIX shells, so they should be contained
       within single quotes.  Syntax of an index list:

       notes:

         - indices start at zero
         - indices end at nt-1, which has the special symbol '$'
         - single indices should be separated with commas, ','
             e.g. -infiles dset0.nii'[0,3,8,5,2,2,2]'
         - ranges may be specified using '..' or '-' 
             e.g. -infiles dset0.nii'[2..95]'
             e.g. -infiles dset0.nii'[2..$]'
         - ranges may have step values, specified in ()
           example: 2 through 95 with a step of 3, i.e. {2,5,8,11,...,95}
             e.g. -infiles dset0.nii'[2..95(3)]'

       This functionality applies only to 4-dimensional datasets.

       e.g. to copy sub-bricks 0 and 7:
       nifti_tool -cbl -prefix new_07.nii -infiles dset0.nii'[0,7]'

       e.g. to copy an entire dataset:
       nifti_tool -cbl -prefix new_all.nii -infiles dset0.nii'[0..$]'

       e.g. to copy every other time point, skipping the first three:
       nifti_tool -cbl -prefix new_partial.nii -infiles dset0.nii'[3..$(2)]'
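The index-list rules above can be sketched as a small expander in Python ('expand_index_list' is a hypothetical illustration, not nifti_tool's parser; the '-' range form is omitted for brevity):

```python
# Expand an index list such as '0,3,8' or '2..95(3)' or '3..$(2)',
# following the notes above: '$' means nt-1, '..' is a range, and a
# parenthesized value after a range is its step.
def expand_index_list(spec, nt):
    last = nt - 1                        # '$' resolves to nt-1
    out = []
    for part in spec.split(','):
        step = 1
        if '(' in part:                  # e.g. 2..95(3)
            part, step_str = part.rstrip(')').split('(')
            step = int(step_str)
        if '..' in part:
            lo, hi = part.split('..')
            lo = last if lo == '$' else int(lo)
            hi = last if hi == '$' else int(hi)
            out.extend(range(lo, hi + 1, step))
        else:
            out.append(last if part == '$' else int(part))
    return out

print(expand_index_list('3..$(2)', 10))  # [3, 5, 7, 9]
print(expand_index_list('2..95(3)', 100))
```

For a dataset with nt=10, '3..$(2)' selects the same volumes as the 'new_partial.nii' example above.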


    -copy_collapsed_image ... : copy a list of volumes to a new dataset
    -cci I J K T U V W        : (a shorter, alternative form)

       This action allows the user to copy a collapsed dataset, where
       some dimensions are collapsed to a given index.  For instance, the
       X dimension could be collapsed to i=42, and the time dimensions
       could be collapsed to t=17.  To collapse a dimension, set Di to
       the desired index, where i is in {0..ni-1}.  Any dimension that
       should not be collapsed must be listed as -1.

       Any number of (valid) dimensions can be collapsed, even down to
       a single value, by specifying enough valid indices.  The resulting
       dataset will then have a reduced number of non-trivial dimensions.

       Assume dset0.nii has nim->dim[8] = { 4, 64, 64, 21, 80, 1, 1, 1 }.
       Note that this is a 4-dimensional dataset.

         e.g. copy the time series for voxel i,j,k = 5,4,17
         nifti_tool -cci 5 4 17 -1 -1 -1 -1 -prefix new_5_4_17.nii

         e.g. read the single volume at time point 26
         nifti_tool -cci -1 -1 -1 26 -1 -1 -1 -prefix new_t26.nii

       Assume dset1.nii has nim->dim[8] = { 6, 64, 64, 21, 80, 4, 3, 1 }.
       Note that this is a 6-dimensional dataset.

         e.g. copy all time series for voxel i,j,k = 5,0,17, with v=2
              (and add the command to the history)
         nifti_tool -cci 5 0 17 -1 -1 2 -1 -keep_hist -prefix new_5_0_17_2.nii

         e.g. copy all data where i=3, j=19 and v=2
              (I do not claim a good reason to do this)
         nifti_tool -cci 3 19 -1 -1 -1 2 -1 -prefix new_mess.nii

       See '-disp_ci' for more information (which displays/prints the
       data, instead of copying it to a new dataset).
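The collapse rule for the seven I J K T U V W arguments amounts to building an index tuple: each argument either fixes its dimension to one index or, when -1, keeps the whole dimension. A sketch ('collapse_slices' is a hypothetical helper; applying the tuple to an array is left to whatever array library holds the data):

```python
# Turn the seven -cci arguments into an index tuple: -1 keeps a
# dimension whole (a full slice); any other value fixes it.
def collapse_slices(indices):
    return tuple(slice(None) if i == -1 else i for i in indices)

# '-cci 5 4 17 -1 -1 -1 -1': one voxel in space, time kept whole.
print(collapse_slices([5, 4, 17, -1, -1, -1, -1]))
```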

  ------------------------------

  options for display actions:

    -disp_hdr          : display nifti_1_header fields for datasets

       This flag means the user wishes to see some of the nifti_1_header
       fields in one or more nifti datasets. The user may want to specify
       multiple '-field' options along with this.  This option requires
       one or more files input, via '-infiles'.

       If no '-field' option is present, all fields will be displayed.

       e.g. to display the contents of all fields:
       nifti_tool -disp_hdr -infiles dset0.nii
       nifti_tool -disp_hdr -infiles dset0.nii dset1.nii dset2.nii

       e.g. to display the contents of select fields:
       nifti_tool -disp_hdr -field dim -infiles dset0.nii
       nifti_tool -disp_hdr -field dim -field descrip -infiles dset0.nii

    -disp_nim          : display nifti_image fields for datasets

       This flag option works the same way as the '-disp_hdr' option,
       except that the fields in question are from the nifti_image
       structure.

    -disp_exts         : display all AFNI-type extensions

       This flag option is used to display all nifti_1_extension data,
       for only those extensions of type AFNI (code = 4).  The only
       other option used will be '-infiles'.

       e.g. to display the extensions in datasets:
       nifti_tool -disp_exts -infiles dset0.nii
       nifti_tool -disp_exts -infiles dset0.nii dset1.nii dset2.nii

    -disp_ts I J K    : display ASCII time series at i,j,k = I,J,K

       This option is used to display the time series data for the voxel
       at i,j,k indices I,J,K.  The data is displayed in text, either all
       on one line (the default), or as one number per line (via the
       '-dci_lines' option).

       Notes:

         o This function applies only to 4-dimensional datasets.
         o The '-quiet' option can be used to suppress the text header,
           leaving only the data.
         o This option is short for using '-disp_ci' (display collapsed
           image), restricted to 4-dimensional datasets.  i.e. :
               -disp_ci I J K -1 -1 -1 -1

       e.g. to display the time series at voxel 23, 0, 172:
       nifti_tool -disp_ts 23 0 172            -infiles dset1_time.nii
       nifti_tool -disp_ts 23 0 172 -dci_lines -infiles dset1_time.nii
       nifti_tool -disp_ts 23 0 172 -quiet     -infiles dset1_time.nii

    -disp_collapsed_image  : display ASCII values for collapsed dataset
    -disp_ci I J K T U V W : (a shorter, alternative form)

       This option is used to display all of the data from a collapsed
       image, given the dimension list.  The data is displayed in text,
       either all on one line (the default), or as one number per line
       (by using the '-dci_lines' flag).

       The '-quiet' option can be used to suppress the text header.

       e.g. to display the time series at voxel 23, 0, 172:
       nifti_tool -disp_ci 23 0 172 -1 0 0 0 -infiles dset1_time.nii

       e.g. to display z-slice 14, at time t=68:
       nifti_tool -disp_ci -1 -1 14 68 0 0 0 -infiles dset1_time.nii

       See '-cci' for more information; it copies such data to a new
       dataset, instead of printing it to the terminal window.

  ------------------------------

  options for modification actions:

    -mod_hdr           : modify nifti_1_header fields for datasets

       This action is used to modify some of the nifti_1_header fields in
       one or more datasets.  The user must specify a list of fields to
       modify via one or more '-mod_field' options, which include field
       names, along with the new (set of) values.

       The user can modify a dataset in place, or use '-prefix' to
       produce a new dataset, to which the changes have been applied.
       It is recommended to normally use the '-prefix' option, so as not
       to ruin a dataset.

       Note that some fields have a length greater than 1, meaning that
       the field is an array of numbers, or a string of characters.  In
       order to modify an array of numbers, the user must provide the
       correct number of values, and enclose those values in quotes, so
       that they are seen as a single option.

       To modify a string field, put the string in quotes.

       The '-mod_field' option takes a field_name and a list of values.


       e.g. to modify the contents of various fields:
       nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                  -mod_field qoffset_x -17.325
       nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                  -mod_field dim '4 64 64 20 30 1 1 1 1'
       nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                  -mod_field descrip 'beer, brats and cheese, mmmmm...'

       e.g. to modify the contents of multiple fields:
       nifti_tool -mod_hdr -prefix dnew -infiles dset0.nii  \
                  -mod_field qoffset_x -17.325 -mod_field slice_start 1

       e.g. to modify the contents of multiple files (must overwrite):
       nifti_tool -mod_hdr -overwrite -mod_field qoffset_x -17.325   \
                  -infiles dset0.nii dset1.nii

    -mod_nim          : modify nifti_image fields for datasets

       This action option is used the same way that '-mod_hdr' is used,
       except that the fields in question are from the nifti_image
       structure.

    -strip_extras     : remove extensions and descriptions from datasets

       This action is used to attempt to 'clean' a dataset of general
       text, in order to make it more anonymous.  Extensions and the
       nifti_image descrip field are cleared by this action.

       e.g. to strip all *.nii datasets in this directory:
       nifti_tool -strip -overwrite -infiles *.nii

  ------------------------------

  options for adding/removing extensions:

    -add_afni_ext EXT : add an AFNI extension to the dataset

       This option is used to add AFNI-type extensions to one or more
       datasets.  This option may be used more than once to add more than
       one extension.

       The '-prefix' option is recommended, to create a new dataset.
       In such a case, only a single file may be taken as input.  Using
       '-overwrite' allows the user to overwrite the current file, or
       to add the extension(s) to multiple files, overwriting them.

       e.g. to add a generic AFNI extension:
       nifti_tool -add_afni_ext 'wow, my first extension :)' -prefix dnew \
                  -infiles dset0.nii

       e.g. to add multiple AFNI extensions:
       nifti_tool -add_afni_ext 'wow, my first extension :)'      \
                  -add_afni_ext 'look, my second...'              \
                  -prefix dnew -infiles dset0.nii

       e.g. to add an extension, and overwrite the dataset:
       nifti_tool -add_afni_ext 'some AFNI extension' -overwrite \
                  -infiles dset0.nii dset1.nii 

    -add_comment_ext EXT : add a COMMENT extension to the dataset

       This option is used to add COMMENT-type extensions to one or more
       datasets.  This option may be used more than once to add more than
       one extension.  This option may also be used with '-add_afni_ext'.

       The '-prefix' option is recommended, to create a new dataset.
       In such a case, only a single file may be taken as input.  Using
       '-overwrite' allows the user to overwrite the current file, or
       to add the extension(s) to multiple files, overwriting them.

       e.g. to add a comment about the dataset:
       nifti_tool -add_comment 'converted from MY_AFNI_DSET+orig' \
                  -prefix dnew                                    \
                  -infiles dset0.nii

       e.g. to add multiple extensions:
       nifti_tool -add_comment  'add a comment extension'         \
                  -add_afni_ext 'and an AFNI XML style extension' \
                  -add_comment  'dataset copied from dset0.nii'   \
                  -prefix dnew -infiles dset0.nii

    -rm_ext INDEX     : remove the extension given by INDEX

       This option is used to remove any single extension from the
       dataset.  Multiple extensions require multiple options.

       notes  - extension indices begin with 0 (zero)
              - to view the current extensions, see '-disp_exts'
              - all extensions can be removed using ALL or -1 for INDEX

       e.g. to remove the extension #0:
       nifti_tool -rm_ext 0 -overwrite -infiles dset0.nii

       e.g. to remove ALL extensions:
       nifti_tool -rm_ext ALL -prefix dset1 -infiles dset0.nii
       nifti_tool -rm_ext -1  -prefix dset1 -infiles dset0.nii

       e.g. to remove the extensions #2, #3 and #5:
       nifti_tool -rm_ext 2 -rm_ext 3 -rm_ext 5 -overwrite -infiles dset0.nii

  ------------------------------

  options for showing differences:

    -diff_hdr         : display header field diffs between two datasets

       This option is used to find differences between two datasets.
       If any fields are different, the contents of those fields is
       displayed (unless the '-quiet' option is used).

       A list of fields can be specified by using multiple '-field'
       options.  If no '-field' option is given, all fields will be
       checked.

       Exactly two dataset names must be provided via '-infiles'.

       e.g. to display all nifti_1_header field differences:
       nifti_tool -diff_hdr -infiles dset0.nii dset1.nii

       e.g. to display selected nifti_1_header field differences:
       nifti_tool -diff_hdr -field dim -field intent_code  \
                  -infiles dset0.nii dset1.nii 

    -diff_nim         : display nifti_image field diffs between datasets

       This option works the same as '-diff_hdr', except that the fields
       in question are from the nifti_image structure.

  ------------------------------

  miscellaneous options:

    -debug LEVEL      : set the debugging level

       Level 0 will attempt to operate with no screen output except errors.
       Level 1 is the default.
       Levels 2 and 3 give progressively more information.

       e.g. -debug 2

    -field FIELDNAME  : provide a field to work with

       This option is used to provide a field to display, modify or
       compare.  This option can be used along with one of the action
       options presented above.

       See '-disp_hdr', above, for complete examples.

       e.g. nifti_tool -field descrip
       e.g. nifti_tool -field descrip -field dim

    -infiles file0... : provide a list of files to work with

       This parameter is required for any of the actions, in order to
       provide a list of files to process.  If input filenames do not
       have an extension, the directory will be searched for any
       appropriate files (such as .nii or .hdr).

       See '-mod_hdr', above, for complete examples.

       e.g. nifti_tool -infiles file0.nii
       e.g. nifti_tool -infiles file1.nii file2 file3.hdr

    -mod_field NAME 'VALUE_LIST' : provide new values for a field

       This parameter is required for any of the modification actions.
       If the user wants to modify any fields of a dataset, this is
       where the fields and values are specified.

       NAME is a field name (in either the nifti_1_header structure or
       the nifti_image structure).  If the action option is '-mod_hdr',
       then NAME must be the name of a nifti_1_header field.  If the
       action is '-mod_nim', NAME must be from a nifti_image structure.

       VALUE_LIST must be one or more values, as many as are required
       for the field, contained in quotes if more than one is provided.

       Use 'nifti_tool -help_hdr' to get a list of nifti_1_header fields
       Use 'nifti_tool -help_nim' to get a list of nifti_image fields

       See '-mod_hdr', above, for complete examples.

       e.g. modifying nifti_1_header fields:
            -mod_field descrip 'toga, toga, toga'
            -mod_field qoffset_x 19.4 -mod_field qoffset_z -11
            -mod_field pixdim '1 0.9375 0.9375 1.2 1 1 1 1'

    -keep_hist         : add the command as COMMENT (to the 'history')

        When this option is used, the current command will be added
        as a NIFTI_ECODE_COMMENT type extension.  This provides the
        ability to keep a history of commands affecting a dataset.

       e.g. -keep_hist

    -overwrite        : any modifications will be made to input files

       This option is used so that all field modifications, including
       extension additions or deletions, will be made to the files that
       are input.

       In general, the user is recommended to use the '-prefix' option
       to create new files.  But if overwriting the contents of the
       input files is preferred, this is how to do it.

       See '-mod_hdr' or '-add_afni_ext', above, for complete examples.

       e.g. -overwrite

    -prefix           : specify an output file to write changes into

       This option is used to specify an output file to write, after
       modifications have been made.  If modifications are being made,
       then either '-prefix' or '-overwrite' is required.

       If no extension is given, the output extension will be '.nii'.

       e.g. -prefix new_dset
       e.g. -prefix new_dset.nii
       e.g. -prefix new_dset.hdr

    -quiet            : report only errors or requested information

       This option is equivalent to '-debug 0'.

  ------------------------------

  basic help options:

    -help             : show this help

       e.g.  nifti_tool -help

    -help_hdr         : show nifti_1_header field info

       e.g.  nifti_tool -help_hdr

    -help_nim         : show nifti_image field info

       e.g.  nifti_tool -help_nim

    -ver              : show the program version number

       e.g.  nifti_tool -ver

    -hist             : show the program modification history

       e.g.  nifti_tool -hist

    -nifti_ver        : show the nifti library version number

       e.g.  nifti_tool -nifti_ver

    -nifti_hist       : show the nifti library modification history

       e.g.  nifti_tool -nifti_hist

  ------------------------------

  R. Reynolds
  compiled: Aug 25 2005
  version 1.8 (April 19, 2005)

This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
@NoExt
Usage: @NoExt <NAME> <EXT1> <EXT2> .....
example: @NoExt Hello.HEAD HEAD BRIK
returns Hello
@NoExt Hello.BRIK HEAD BRIK
returns Hello
@NoExt Hello.Jon HEAD BRIK
returns Hello.Jon
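The behavior in the three examples can be sketched as follows ('no_ext' is a hypothetical re-implementation for illustration; the real @NoExt is a shell script):

```python
# Strip the final extension only when it is one of the listed ones;
# otherwise return the name unchanged.
def no_ext(name, *exts):
    base, dot, ext = name.rpartition('.')
    return base if dot and ext in exts else name

print(no_ext('Hello.HEAD', 'HEAD', 'BRIK'))  # Hello
print(no_ext('Hello.Jon', 'HEAD', 'BRIK'))   # Hello.Jon
```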
Ziad Saad (ziad@nih.gov)
LBC/NIMH/National Institutes of Health, Bethesda, Maryland
This page auto-generated on Thu Aug 25 16:49:41 EDT 2005
nsize
Usage: nsize image_in image_out
  Zero pads 'image_in' to NxN, N=64,128,256,512, or 1024, 
  whichever is the closest size larger than 'image_in'.
  [Works only for byte and short images.]
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
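The size rule nsize describes can be sketched directly ('nsize_target' is a hypothetical helper, not the program's code):

```python
# Pick the smallest N in {64, 128, 256, 512, 1024} that covers both
# image dimensions, as nsize's help describes.
def nsize_target(width, height):
    for n in (64, 128, 256, 512, 1024):
        if n >= max(width, height):
            return n
    raise ValueError('image larger than 1024x1024')

print(nsize_target(100, 80))  # 128
```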
p2t
*** NOTE: This program has been superseded by program 'cdf' ***

Usage #1: p2t p dof
  where p   = double sided tail probability for t-distribution
        dof = number of degrees of freedom to use
  OUTPUT = t value that matches the input p

Usage #2: p2t p N L M
  where p   = double sided tail probability of beta distribution
        N   = number of measured data points
        L   = number of nuisance parameters (orts)
        M   = number of fit parameters
  OUTPUT = threshold for correlation coefficient

Usage #3: p2t p
  where p   = one sided tail probability of Gaussian distribution
  OUTPUT = z value for which P(x>z) = p

Usage #4: p2t p dof N
  where p   = double sided tail probability for distribution of
                the mean of N  iid zero-mean t-variables
        dof = number of degrees of freedom of each t-variable
        N   = number of t variables averaged
  OUTPUT = threshold for the t average statistic
  N.B.: The method used for this calculation is the Cornish-
        Fisher expansion in N, and is only an approximation.
        This also requires dof > 6, and the results will be
        less accurate as dof approaches 6 from above!
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
@parse_afni_name
Usage 1: A script to parse an AFNI name

   @parse_afni_name <NAME>

Outputs the path, prefix, view and sub-brick selection string.

This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
plugout_drive
Usage: plugout_drive [-host name] [-v]
This program connects to AFNI and sends commands
 that the user specifies interactively or on command line
 over to AFNI to be executed.

Options:
  -host name  Means to connect to AFNI running on the
                computer 'name' using TCP/IP.  The default is to
                connect on the current host using shared memory.
  -v          Verbose mode.
  -port pp    Use TCP/IP port number 'pp'.  The default is
                8099, but if two plugouts are running on the
                same computer, they must use different ports.
  -name sss   Use the string 'sss' for the name that AFNI assigns
                to this plugout.  The default is something stupid.
  -com 'ACTION DATA'  Execute the following command. For example:
                       -com 'SET_FUNCTION SomeFunction'
                       will switch AFNI's function (overlay) to
                       dataset with prefix SomeFunction. 
                      Make sure ACTION and DATA are together enclosed
                       in one pair of single quotes.
                      There are numerous actions listed in AFNI's
                       README.driver file.
                      You can use the option -com repeatedly. 
  -quit  Quit after you are done with all the -com commands.
         The default is for the program to wait for more
          commands to be typed at the terminal's prompt.

NOTE:
You will need to turn plugouts on in AFNI using one of the
following methods: 
 1. Including '-yesplugouts' as an option on AFNI's command line
 2. From AFNI: Define Datamode->Misc->Start Plugouts
 3. Set environment variable AFNI_YESPLUGOUTS to YES in .afnirc
Otherwise, AFNI won't be listening for a plugout connection.

This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
plugout_ijk
Usage: plugout_ijk [-host name] [-v]
This program connects to AFNI and sends (i,j,k)
dataset indices to control the viewpoint.

Options:
  -host name  Means to connect to AFNI running on the
                computer 'name' using TCP/IP.  The default is to
                connect on the current host using shared memory.
  -v          Verbose mode.
  -port pp    Use TCP/IP port number 'pp'.  The default is
                8009, but if two plugouts are running on the
                same computer, they must use different ports.
  -name sss   Use the string 'sss' for the name that AFNI assigns
                to this plugout.  The default is something stupid.
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
plugout_tt
Usage: plugout_tt [-host name] [-v]
This program connects to AFNI and receives notification
whenever the user changes Talairach coordinates.

Options:
  -host name  Means to connect to AFNI running on the
                computer 'name' using TCP/IP.  The default is to
                connect on the current host using shared memory.
  -ijk        Means to get voxel indices from AFNI, rather
                than Talairach coordinates.
  -v          Verbose mode: prints out lots of stuff.
  -port pp    Use TCP/IP port number 'pp'.  The default is
                8001, but if two copies of this are running on
                the same computer, they must use different ports.
  -name sss   Use the string 'sss' for the name that AFNI assigns
                to this plugout.  The default is something stupid.
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
@Purify_1D
Usage: @Purify_1D [<-sub SUB_STRING>] dset1 dset2 ...
Purifies a series of 1D files for faster I/O into matlab.
  -sub SUB_STRING: You can use the sub-brick selection
                   mode, a la AFNI, to output a select
                   number of columns. See Example below.
  -suf STRING:     STRING is attached to the output prefix
                   which is formed from the input names

Example:
    @Purify_1D -sub '[0,3]' somedataset.1D.dset

Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov

This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
quickspec
Usage:  quickspec 
        <-tn TYPE NAME> ...
        <-tsn TYPE STATE NAME> ...
        [<-spec specfile>] [-h/-help]
  Use this spec file as a quick and dirty way of 
  loading a surface into SUMA or the command line programs.

Options:
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           SF: Caret/SureFit format
           BV: BrainVoyager format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names:
           the coord file followed by the topo file.
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
   -spec specfile: Name of spec file output.
                   Default is quick.spec
                   The program will only overwrite 
                   quick.spec (the default) spec file.
   -h or -help: This message here.

  You can use any combination of -tn and -tsn options.
  Fields in the spec file that are not (or cannot be) specified
  by this program are set to default values.

   This program was written to ward off righteous whiners and is
  not meant to replace the venerable @SUMA_Make_Spec_XX scripts.

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

      Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov 
		 Tue Dec 30

This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
@RenamePanga
Usage: @RenamePanga <DIR #> <FIRST # Image> <# slices> <# reps> <OUTPUT Root>
                   [-kp] [-i] [-oc] [-sp Pattern] [-od Output Directory]

Creates AFNI bricks from RealTime GE EPI series.

This script is designed to run from the directory where the famed RT image directories are copied to.
If the data were copied from fim3T-adw using @RTcp, this directory should be something like:
/mnt/arena/03/users/sdc-nfs/Data/RTime/2005.08.25/<PID>/<EXAM #>/

<DIR #> : (eg: 3) The directory number where the first image of the series is stored.
<FIRST # Image> : (eg: 19) The number of the first image in the series.
<# slices> : (eg: 18) The number of slices making up the imaged volume.
<# reps> : (eg: 160) The number of samples in your time series.
<OUTPUT Root> : (eg: PolcCw) The prefix for the output brick.
                 Bricks are automatically saved into the output directory
                 Unless you use -kp option, bricks are automatically named
                 <OUTPUT Root>_r# where # is generated each time you 
                 run the script and successfully create a new brick.

Optional Parameters:
-i : Launches to3d in interactive mode. This allows you to double check the automated settings.
 -kp: Forces @RenamePanga to use the prefix you designate without modification.
 -oc: Performs outlier check. This is useful to do, but it slows to3d down and
  may be annoying when checking your data while scanning. If you choose -oc, the
  outliers are written to a .1D file and placed in the output directory.
 -sp Pattern: Sets the slice acquisition pattern. The default option is alt+z.
  see to3d -help for various acceptable options.
 -od <OUTPUT Directory>: Directory where the output (bricks and 1D files) will
  be stored. The default directory is ./afni


A log file (MAPLOG_Panga) is created in the current directory.

Panga: A state of revenge.
***********
Dec 4 2001 Changes:
- No longer requires the program pad_str.
- Uses to3d to read geometric slice information.
- Allows for bypassing the default naming convention.
- You need to be running AFNI built after Dec 3 2001 to use this script.
- Swapping needs are now determined by to3d.
If to3d complains about not being able to determine swapping needs, check the data manually.
- Geom parent option (-gp) has been removed.
- TR is no longer set from command line, it is obtained from the image headers.
Thanks to Jill W., Mike B. and Shruti J. for reporting bugs and testing the scripts.
***********

Usage: @RenamePanga <DIR #> <FIRST # Image> <# slices> <# reps> <OUTPUT Root>
                   [-kp] [-i] [-oc] [-sp Pattern] [-od Output Directory]

 Version 3.2 (09/02/03)  Ziad Saad (ziad@nih.gov) Dec 5 2001   SSCC/LBC/NIMH.
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
rmz
Usage: rmz [-q] [-#] filename ...
 -- Zeros out files before removing them
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
ROI2dataset
Usage: 
   ROI2dataset <-prefix dsetname> [...] <-input ROI1 ROI2 ...>
               [<-of ni_bi|ni_as|1D>] 
               [<-dom_par_id idcode>] 
    This program transforms a series of ROI files
    to a node dataset. This data set will contain
    the node indices in the first column and their
    ROI values in the second column.
    Duplicate node entries (nodes that are part of
    multiple ROIs) will be ignored. You will be
    notified when this occurs. 

Mandatory parameters:
    -prefix dsetname: Prefix of output dataset.
                      Program will not overwrite existing
                      datasets.
    -input ROI1 ROI2....: ROI files to turn into a 
                          data set. This parameter MUST
                          be the last one on command line.

Optional parameters:
(all optional parameters must be specified before the
 -input parameters.)
    -h | -help: This help message
    -of FORMAT: Output format of dataset. FORMAT is one of:
                ni_bi: NIML binary
                ni_as: NIML ascii (default)
                1D   : 1D AFNI format.
    -dom_par_id id: Idcode of domain parent.
                    When specified, only ROIs that have the same
                    domain parent are included in the output.
                    If id is not specified then the first
                    domain parent encountered in the ROI list
                    is adopted as dom_par_id.
                    1D roi files do not have domain parent 
                    information. They will be added to the 
                    output data under the chosen dom_par_id.
    -pad_to_node max_index: Output a full dset from node 0 
                            to node max_index (a total of 
                            max_index + 1 nodes). Nodes that
                            are not part of any ROI will get
                            a default label of 0 unless you
                            specify your own padding label.
    -pad_val padding_label: Use padding_label (an integer) to
                            label nodes that do not belong
                            to any ROI. Default is 0.

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov 
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
rotcom
Usage: rotcom '-rotate aaI bbR ccA -ashift ddS eeL ffP' [dataset]

Prints to stdout the 4x3 transformation matrix+vector that would be
applied by 3drotate to the given dataset.

The -rotate and -ashift options combined must be input inside single
quotes (i.e., as one long command string):
 * These options follow the same form as specified by '3drotate -help'.
 * That is, if you include the '-rotate' component, it must be followed
   by 3 angles.
 * If you include the '-ashift' component, it must be followed by 3 shifts;
 * For example, if you only want to shift in the 'I' direction, you could use
     '-ashift 10I 0 0'.
 * If you only want to rotate about the 'I' direction, you could use
     '-rotate 10I 0R 0A'.

Note that the coordinate order for the matrix and vector is that of
the dataset, which can be determined from program 3dinfo.  This is the
only function of the 'dataset' command line argument.

If no dataset is given, the coordinate order is 'RAI', which means:
    -x = Right      [and so +x = Left     ]
    -y = Anterior   [    so +y = Posterior]
    -z = Inferior   [    so +z = Superior ]
For example, the output of command
   rotcom '-rotate 10I 0R 0A'
is the 3 lines below:
0.984808 -0.173648  0.000000  0.000
0.173648  0.984808  0.000000  0.000
0.000000  0.000000  1.000000  0.000
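The three rows above are just the 10-degree rotation written out numerically (0.984808 = cos 10°, 0.173648 = sin 10°), with a zero shift vector in the fourth column. A minimal sketch that reproduces them, for checking only (not 3drotate's code):

```python
import math

def rotcom_matrix_10I():
    """Rebuild the 3x4 matrix printed above for '-rotate 10I 0R 0A':
    a 10-degree rotation about the I(nferior) axis in RAI coordinate
    order, with a zero shift vector appended as the last column."""
    th = math.radians(10.0)
    c, s = math.cos(th), math.sin(th)
    return [[c,  -s,  0.0, 0.0],
            [s,   c,  0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0]]
```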

-- RWCox - Nov 2002
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
RSFgen

Program:          RSFgen 
Author:           B. Douglas Ward 
Initial Release:  06 July 1999 
Latest Revision:  13 March 2003 

Sample program to generate random stimulus functions.                  
                                                                       
Usage:                                                                 
RSFgen                                                          
-nt n            n = length of time series                             
-num_stimts p    p = number of input stimuli (experimental conditions) 
[-nblock i k]    k = block length for stimulus i  (1<=i<=p)            
                     (default: k = 1)                                  
[-seed s]        s = random number seed                                
[-quiet]         flag to suppress screen output                        
[-one_file]      place stimulus functions into a single .1D file       
[-one_col]       write stimulus functions as a single column of decimal
                   integers (default: multiple columns of binary nos.) 
[-prefix pname]  pname = prefix for p output .1D stimulus functions    
                   e.g., pname1.1D, pname2.1D, ..., pnamep.1D          
                                                                       
The following Random Permutation, Markov Chain, and Input Table options
are mutually exclusive.                                                
                                                                       
Random Permutation options:                                            
-nreps i r       r = number of repetitions for stimulus i  (1<=i<=p)   
[-pseed s]       s = stim label permutation random number seed         
                                     p                                 
                 Note: Require n >= Sum (r[i] * k[i])                  
                                    i=1                                
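                 The feasibility condition above is a one-liner to
                 check before running (a hypothetical helper for
                 illustration, not part of RSFgen):

```python
def schedule_fits(n, reps, blocks):
    """Check the Random Permutation requirement n >= sum_i r[i]*k[i]:
    can n time points hold all requested stimulus repetitions, each
    occupying a block of k[i] points?"""
    return n >= sum(r * k for r, k in zip(reps, blocks))
```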
                                                                       
Markov Chain options:                                                  
-markov mfile    mfile = file containing the transition prob. matrix   
[-pzero z]       probability of a zero (i.e., null) state              
                     (default: z = 0)                                  
                                                                       
Input Table row permutation options:                                   
[-table dfile]   dfile = filename of column or table of numbers        
                 Note: dfile may have a column selector attached       
                 Note: With this option, all other input options,      
                       except -seed and -prefix, are ignored           
                                                                       
                                                                       
Warning: This program will overwrite pre-existing .1D files            
                                                                       
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
rtfeedme
Usage: rtfeedme [options] dataset [dataset ...]
Test the real-time plugin by sending all the bricks in 'dataset' to AFNI.
 * 'dataset' may include a sub-brick selector list.
 * If more than one dataset is given, multiple channel acquisition
    will be simulated.  Each dataset must then have the same datum
    and dimensions.
 * If you put the flag '-break' between datasets, then the datasets
    in each group will be transmitted in parallel, but the groups
    will be transmitted serially (one group, then another, etc.).
    + For example:
        rtfeedme A+orig B+orig -break C+orig -break D+orig
       will send the A and B datasets in parallel, then send
       the C dataset separately, then send the D dataset separately.
       (That is, there will be 3 groups of datasets.)
    + There is a 1 second delay between the end transmission for
       a group and the start transmission for the next group.
    + You can extend the inter-group delay by using a break option
       of the form '-break_20' to indicate a 20 second delay.
    + Within a group, each dataset must have the same datum and
       same x,y,z,t dimensions.  (Different groups don't need to
       be conformant to each other.)
    + All the options below apply to each group of datasets;
       i.e., they will all get the same notes, drive commands, ....

Options:
  -host sname =  Send data, via TCP/IP, to AFNI running on the
                 computer system 'sname'.  By default, uses the
                 current system, and transfers data using shared
                 memory.  To send on the current system using
                 TCP/IP, use the system 'localhost'.

  -dt ms      =  Tries to maintain an inter-transmit interval of
                 'ms' milliseconds.  The default is to send data
                 as fast as possible.

  -3D         =  Sends data in 3D bricks.  By default, sends in
                 2D slices.

  -buf m      =  When using shared memory, sets the interprocess
                 communications buffer to 'm' megabytes.  Has no
                 effect if using TCP/IP.  Default is m=1.
                 If you use m=0, then a 50 Kbyte buffer is used.

  -verbose    =  Be talkative about actions.
  -swap2      =  Swap byte pairs before sending data.

  -nzfake nz  =  Send 'nz' as the value of nzz (for debugging).

  -drive cmd  =  Send 'cmd' as a DRIVE_AFNI command; e.g.,
                   -drive 'OPEN_WINDOW A.axialimage'
                 If cmd contains blanks, it must be in 'quotes'.
                 Multiple -drive options may be used.

  -note sss   =  Send 'sss' as a NOTE to the realtime plugin.
                 Multiple -note options may be used.

  -gyr v      =  Send value 'v' as the y-range for realtime motion
                 estimation graphing.
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
SampBias
Usage:
  SampBias -spec SPECFILE -surf SURFNAME -plimit limit -dlimit limit -out FILE

  Mandatory parameters:
     -spec SpecFile: Spec file containing input surfaces.
     -surf SURFNAME: Name of input surface 
     -plimit limit: maximum length of path along surface in mm.
                    default is 50 mm
     -dlimit limit: maximum Euclidean distance in mm.
                    default is 1000 mm
     -out FILE: output dataset


  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

 blame Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
ScaleToMap
Usage:  ScaleToMap <-input IntFile icol vcol>  
    [-cmap MapType] [-cmapfile Mapfile] [-cmapdb Palfile] [-frf] 
    [-clp/-perc_clp clp0 clp1] [-apr/-anr range]
    [-interp/-nointerp/-direct] [-msk msk0 msk1] [-nomsk_col]
    [-msk_col R G B] [-br BrightFact]
    [-h/-help] [-verb] [-showmap] [-showdb]

    -input IntFile icol vcol: input data.
       Infile: 1D formatted ascii file containing node values
       icol: index of node index column 
       (-1 if the node index is implicit)
       vcol: index of node value column.
       Example: -input ValOnly.1D -1 0 
       for a 1D file containing node values
       in the first column and no node indices.
       Example: -input NodeVal.1D 1 3
       for a 1D file containing node indices in
       the SECOND column and node values in the 
       FOURTH column (index counting begins at 0)
    -v and -iv options are now obsolete.
       Use -input option instead.
    -cmap MapName: (optional, default RGYBR20) 
       choose one of the standard colormaps available with SUMA:
       RGYBR20, BGYR19, BW20, GRAY20, MATLAB_DEF_BYR64, 
       ROI64, ROI128
       You can also use AFNI's default paned color maps:
       The maps are labeled according to the number of 
       panes and their sign. Example: afni_p10
       uses the positive 10-pane afni colormap.
       afni_n10 is the negative counterpart.
       These maps are meant to be used with
       the options -apr and -anr listed below.
       You can also load non-default AFNI colormaps
       from .pal files (AFNI's colormap format); see option
       -cmapdb below.
    -cmapdb Palfile: read color maps from AFNI .pal file
       In addition to the default paned AFNI colormaps, you
       can load colormaps from a .pal file.
       To access maps in the Palfile you must use the -cmap option
       with the label formed by the name of the palette, its sign
       and the number of panes. For example, the following palette:
       ***PALETTES deco [13]
       should be accessed with -cmap deco_n13
       ***PALETTES deco [13+]
       should be accessed with -cmap deco_p13
    -cmapfile Mapfile: read color map from Mapfile.
       Mapfile: 1D formatted ascii file containing the colormap.
                each row defines a color in one of two ways:
                R  G  B        or
                R  G  B  f     
       where R, G, B specify the red, green and blue values
       (between 0 and 1) and f specifies the fraction of the range
       reached at this color. Think of the values at the right of
       the AFNI colorbar.
       The use of fractions (which are optional) allows you to create
       non-linear color maps, where colors cover differing fractions of 
       the data range.
       Sample colormap with positive range only (a la AFNI):
               0  0  1  1.0
               0  1  0  0.8
               1  0  0  0.6
               1  1  0  0.4
               0  1  1  0.2
       Note the order in which the colors and fractions are specified.
       The bottom color of the +ve colormap should be at the bottom of the
       file and have the lowest +ve fraction. The fractions here define a
       linear map, so they are not necessary, but they illustrate the format
       of the colormaps.
       Comparable colormap with negative range included:
               0  0  1   1.0
               0  1  0   0.6
               1  0  0   0.2
               1  1  0  -0.2
               0  1  1  -0.6
       The bottom color of the -ve colormap should have the 
       lowest -ve fraction. 
       You can use -1 -1 -1 for a color to indicate a no color
       (like the 'none' color in AFNI). Values mapped to this
       'no color' will be masked as with the -msk option.
       If your 1D color file has more than three or four columns,
       you can use the [] convention adopted by AFNI programs
       to select the columns you need.
    -frf: (optional) first row in file is the first color.
       As explained in the -cmapfile option above, the first 
       (bottom, index 0) color of the colormap should be 
       at the bottom of the file. If the opposite is true, use
       the -frf option to signal that.
       This option is only useful with -cmapfile.
    -clp/-perc_clp clp0 clp1: (optional, default no clipping)
       clips values in IntVect. If -clp is used, then values in vcol
       < clp0 are clipped to clp0 and values > clp1 are clipped to clp1.
       If -perc_clp is used, then vcol is clipped to the values
       corresponding to the clp0 and clp1 percentiles.
       The -clp/-perc_clp options are mutually exclusive with -apr/-anr.
    -apr range: (optional) clips the values in IntVect to [0 range].
       This option allows range of colormap to be set as in AFNI, 
       with Positive colorbar (Pos selected).
       This option is mutually exclusive with -clp/-perc_clp.
       Set range = 0 for autoranging.
       If you use -apr and your colormap contains fractions, you
       must use a positive range colormap.
    -anr range: (optional) clips the values in IntVect to [-range range].
       This option allows range of colormap to be set as in AFNI, 
       with Negative colorbar (Pos NOT selected).
       This option is mutually exclusive with -clp/-perc_clp.
       Set range = 0 for autoranging.
       If you use -anr and your colormap contains fractions, you
       must use a negative range colormap.
    -interp: (default) use color interpolation between colors in colormap
       If a value is assigned between two colors on the colorbar,
       it receives a color that is an interpolation between those two colors.
       This is the default behaviour in SUMA and AFNI when using the continuous
       colorscale. Mutually exclusive with -nointerp and -direct options.
    -nointerp: (optional) turns off color interpolation within the colormap
       Color assignment is done a la AFNI when the paned colormaps are used.
       Mutually exclusive with -interp and -direct options.
    -direct: (optional) values (typecast to integers) are mapped directly
       to index of color in color maps. Example: value 4 is assigned
       to the 5th (index 4) color in the color map (same for values
       4.2 and 4.7). This mapping scheme is useful for ROI indexed type
       data. Negative data values are set to 0 and values >= N_col 
       (the number of colors in the colormap) are set to N_col -1
    -msk_zero: (optional) values that are 0 will get masked no matter
       what colormaps or mapping schemes you are using. 
       AFNI masks all zero values by default.
    -msk msk0 msk1: (optional, default is no masking) 
       Values in vcol (BEFORE clipping is performed) 
       between [msk0 msk1] are masked by the masking color.
    -msk_col R G B: (optional, default is 0.3 0.3 0.3) 
       Sets the color of masked voxels.
    -nomsk_col: do not output nodes that got masked.
       It does not make sense to use this option with
       -msk_col.
    -br BrightFact: (optional, default is 1) 
       Applies a brightness factor to the colors 
       of the colormap and the mask color.
    -h or -help: displays this help message.
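    The fraction-based interpolated lookup described under the -cmapfile
    and -interp options above can be sketched in a few lines (an
    illustration of the mapping scheme under the stated file format,
    not SUMA's implementation):

```python
def map_value(v, cmap, vmin, vmax):
    """Interpolated colormap lookup: cmap is a list of (R, G, B, f)
    rows as in the -cmapfile format, where f is the fraction of the
    range reached at that color.  Returns an (R, G, B) tuple."""
    # normalize v into [0, 1] over the display range, clipping outliers
    t = (v - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)
    # order the color stops bottom-up by fraction
    stops = sorted(cmap, key=lambda row: row[3])
    if t <= stops[0][3]:
        return stops[0][:3]
    for lo, hi in zip(stops, stops[1:]):
        if t <= hi[3]:
            # linear interpolation between the two bracketing colors
            w = (t - lo[3]) / (hi[3] - lo[3])
            return tuple(l + w * (h - l) for l, h in zip(lo[:3], hi[:3]))
    return stops[-1][:3]
```

    With the sample positive-range colormap shown earlier, a value at the
    top of the range maps to pure blue, and values between stops blend
    linearly.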

   The following options are for debugging and sanity checks.
    -verb: (optional) verbose mode.
    -showmap: (optional) print the colormap to the screen and quit.
       This option is for debugging and sanity checks.
    -showdb: (optional) print the colors and colormaps of AFNI
       along with any loaded from the file Palfile.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

    Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov 
      July 31/02 

This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
serial_helper
------------------------------------------------------------
/var/www/html/pub/dist/bin/linux_gcc32/serial_helper - pass motion parameters from socket to serial port

    This program is meant to receive registration (motion?)
    correction parameters from afni's realtime plugin, and to
    pass that data on to a serial port.

    The program is meant to run as a tcp server.  It listens
    for a connection, then processes data until a termination
    flag is received (sending data from the tcp socket to the
    serial port), closes the new connection, and goes back
    to a listening state.

    The basic outline is:

    open tcp server socket
    repeat forever:
        wait for a tcp client connection
        open a serial port
        while the client sends new data
            write that data to the serial port
        close the serial port and client socket

    The expected client is the realtime plugin to afni,
    plug_realtime.so.  If the afni user has their environment
    variable AFNI_REALTIME_MP_HOST_PORT set as HOST:PORT,
    then for EACH RUN, the realtime plugin will open a tcp
    connection to the given HOST and PORT, pass the magic hello
    data (0xabcdefab), pass the 6 motion parameters for each
    time point, and signal a closure by passing the magic bye
    data (0xdeaddead).

    On this server end, the 'repeat forever' loop will do the
    following.  First it will establish the connection by
    checking for the magic hello data.  If that data is found,
    the serial port will be opened.

    Then it will repeatedly check the incoming data for the
    magic bye data.  As long as that check fails, the data is
    assumed to be valid motion parameters.  And so 6 floats at a
    time are read from the incoming socket and passed to the
    serial port.
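    The wire format just described (hello magic, 6 floats per time
    point, bye magic) can be sketched from the client side as follows.
    This is an illustration of the described framing only, not
    plug_realtime code, and the little-endian byte order is an
    assumption:

```python
import struct

HELLO = 0xabcdefab   # magic hello word (byte order assumed little-endian)
BYE   = 0xdeaddead   # magic bye word

def encode_run(mp_frames):
    """Pack one run's worth of motion parameters as described above:
    hello magic, then 6 floats per time point, then the bye magic."""
    msg = struct.pack('<I', HELLO)
    for frame in mp_frames:
        assert len(frame) == 6       # 6 motion parameters per time point
        msg += struct.pack('<6f', *frame)
    msg += struct.pack('<I', BYE)
    return msg
```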

  usage: /var/www/html/pub/dist/bin/linux_gcc32/serial_helper [options] -serial_port FILENAME
------------------------------------------------------------
  examples:

    1. display this help :

        /var/www/html/pub/dist/bin/linux_gcc32/serial_helper -help

    2. display the module history :

        /var/www/html/pub/dist/bin/linux_gcc32/serial_helper -hist

    3. display the current version number :

        /var/www/html/pub/dist/bin/linux_gcc32/serial_helper -ver

  * 4. run normally, using the serial port file /dev/ttyS0 :

        /var/www/html/pub/dist/bin/linux_gcc32/serial_helper -serial_port /dev/ttyS0

  * 5. same as 4, but specify socket number 53214 :

        /var/www/html/pub/dist/bin/linux_gcc32/serial_helper -serial_port /dev/ttyS0 -sock_num 53214

    6. same as 5, but specify minimum and maximum bounds on
       the values :

        /var/www/html/pub/dist/bin/linux_gcc32/serial_helper                       \
            -serial_port /dev/ttyS0            \
            -sock_num 53214                    \
            -mp_min -12.7                      \
            -mp_max  12.7

    7. run the program in socket test mode, without serial
       communication, and printing all the incoming data

        /var/www/html/pub/dist/bin/linux_gcc32/serial_helper -no_serial -debug 3

    8. same as 4, but use debug level 3 to see the parameters
       that will be passed on, and duplicate all output to the
       file, helper.output

       note: this command is for the C shell (tcsh), and will not work
             under bash (for bash, use '2>&1 | tee' instead)

        /var/www/html/pub/dist/bin/linux_gcc32/serial_helper -serial_port /dev/ttyS0 -debug 3 |& tee helper.out
------------------------------------------------------------
  program setup:

    1. Start '/var/www/html/pub/dist/bin/linux_gcc32/serial_helper' on the computer with the serial port that
       the motion parameters should be written to.  Example 4
       is the most likely case, though it might be useful to
       use example 8.

    2. On the computer which will be used to run 'afni -rt',
       set the environment variable AFNI_REALTIME_MP_HOST_PORT
       to the appropriate host:port pair.  See the '-sock_num'
       option below for more details.

       This variable can also be set in the ~/.cshrc file, or
       as part of the AFNI environment via the ~/.afnirc file.

    3. Start 'afni -rt'.  Be sure to request 'realtime' graphing
       of the '3D: realtime' Registration parameters.

    4. Start receiving data (sending it to the realtime plugin).

       Note that for testing purposes, it may work well to get a
       set of I-files (say, in directories 003, 023, etc.), and
       to use Imon to send not-so-real-time data to afni.  An
       example of Imon for this purpose might be:

           Imon -start_dir 003 -quit -rt -host localhost

       See 'Imon -help' for more information.
------------------------------------------------------------
  'required' parameter:

    -serial_port FILENAME : specify output serial port
                          : -serial_port /dev/ttyS0

        If the user is not using any of the 'special' options,
        below, then this parameter is required.

        The FILENAME is the device file for the serial port
        which will be used for output.
------------------------------
  special options (for information or testing):

    -help            : show this help information

    -hist            : show the module history

    -debug LEVEL     : set the debugging level to LEVEL
                     : e.g. -debug 2
                     : default is 0, max is 3

    -no_serial       : turn off serial port output

        This option is used for testing the incoming data,
        when output to a serial port is not desired.  The
        program will otherwise operate normally.

    -version         : show the current version number
------------------------------
  'normal' options:

    -mp_max MAX_VAL  : limit the maximum value of the MP data
                     : e.g. -mp_max 12.7
                     : default is 12.7

        If any incoming data is greater than this value, it will
        be set to this value.  The default of 12.7 is used to
        scale incoming floats to signed bytes.

    -mp_min MIN_VAL  : limit the minimum value of the MP data
                     : e.g. -mp_min -12.7
                     : default is -12.7

        If any incoming data is less than this value, it will
        be set to this value.  The default of -12.7 is used to
        scale incoming floats to signed bytes.

    -sock_num SOCK   : specify socket number to serve
                     : e.g. -sock_num 53214
                     : default is 53214

        This is the socket the program will use to listen for
        new connections.  This is the socket number that should
        be provided to the realtime plugin via the environment
        variable, AFNI_REALTIME_MP_HOST_PORT.

        On the machine the user runs afni from, that environment
        variable should have the form HOST:PORT, where a basic
        example might be localhost:53214.
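The clamp-then-scale step described for -mp_min/-mp_max can be sketched as follows. This is a minimal Python illustration, not serial_helper's actual code; the scale factor of 10 (mapping 12.7 to the signed-byte limit 127) is an assumption, since the help text states only the +/-12.7 defaults:

```python
# Minimal sketch of the clamp-then-scale step described above.
# The factor of 10 (so that 12.7 maps to the signed-byte limit 127)
# is an assumption; the help text only states the +/-12.7 defaults.
MP_MIN, MP_MAX = -12.7, 12.7

def to_signed_byte(value, lo=MP_MIN, hi=MP_MAX):
    clamped = max(lo, min(hi, value))   # out-of-range data is clipped
    return int(round(clamped * 10))     # 12.7 -> 127, -12.7 -> -127

print([to_signed_byte(v) for v in (-20.0, -1.23, 0.0, 3.5, 99.0)])
```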
------------------------------------------------------------
  Authors: R. Reynolds, T. Ross  (March, 2004)
------------------------------------------------------------
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
sfim
MCW SFIM: Stepwise Functional IMages, by RW Cox

Usage: sfim [options] image_files ...

  + image_files are in the same format AFNI accepts
  + options are from the following:

  -sfint iname:   'iname' is the name of a file which has
                  the interval definitions; an example is
                    3*# 5*rest 4*A 5*rest 4*B 5*rest 4*A 5*rest
                  which says:
                    - ignore the 1st 3 images
                    - take the next 5 as being in task state 'rest'
                    - take the next 4 as being in task state 'A'
                    and so on;
                  task names that start with a nonalphabetic character
                  are like the '#' above and mean 'ignore'.
              *** the default 'iname' is 'sfint'

  -base bname:    'bname' is the task state name to use as the
                  baseline; other task states will have the mean
                  baseline state subtracted; if there are no task
                  states from 'iname' that match 'bname', this
                  subtraction will not occur.
              *** the default 'bname' is 'rest'

  -localbase:     if this option is present, then each non-base
                  task state interval has the mean of the two
                  nearest base intervals subtracted instead of the
                  grand mean of all the base task intervals.

  -prefix pname:  'pname' is the prefix for output image filenames for
                  all states:  the i'th interval with task state name
                  'fred' will be written to file 'pname.fred.i'.
              *** the default 'pname' is 'sfim'

  Output files are the base-mean-removed averages for each non-base
  task interval, and simply the mean for each base task interval.
  Output images are in the 'flim' (floating pt. image) format, and
  may be converted to 16 bit shorts using the program 'ftosh'.
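The interval-definition syntax accepted by -sfint can be illustrated with a small parser sketch. parse_sfint is a hypothetical helper, not part of sfim; it expands each 'count*state' token into per-image state labels, treating states whose name begins with a nonalphabetic character as 'ignore':

```python
# Hypothetical parser for an 'sfint'-style interval definition, expanding
# "3*# 5*rest 4*A" into one state label per image; states whose name
# starts with a nonalphabetic character (like '#') mean 'ignore' (None).
def parse_sfint(line):
    labels = []
    for token in line.split():
        count, state = token.split('*', 1)
        if not state[0].isalpha():      # e.g. '#' -> ignore these images
            state = None
        labels.extend([state] * int(count))
    return labels

states = parse_sfint("3*# 5*rest 4*A 5*rest")
print(len(states), states[:4])
```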
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
siemens_vision
Usage: siemens_vision [options] filename ...
Prints out information from the Siemens .ima file header(s).

The only option is to rename the file according to the
TextImageNumber field stored in the header.  The option is:

  -rename ppp

which will rename each file to the form 'ppp.nnnn.ima',
where 'nnnn' is the image number expressed with 4 digits.

When '-rename' is used, the header info from the input files
will not be printed.
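The 'ppp.nnnn.ima' naming scheme can be sketched in a line of Python; renamed is a hypothetical helper, not part of siemens_vision, showing only how the image number is zero-padded to four digits:

```python
# Illustrative sketch of the 'ppp.nnnn.ima' naming scheme: the image
# number from the header is zero-padded to four digits.
def renamed(prefix, image_number):
    return f"{prefix}.{image_number:04d}.ima"

print(renamed("scan", 7))    # scan.0007.ima
```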
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
sqwave
Usage: /var/www/html/pub/dist/bin/linux_gcc32/sqwave [-on #] [-off #] [-length #] [-cycles #]
      [-init #] [-onkill #] [-offkill #] [-initkill #] [-name name]
FatalError
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
strblast
Usage: strblast [options] TARGETSTRING filename ...
Finds exact copies of the target string in each of
the input files, and replaces all characters with
some junk string.

options:

  -help              : show this help

  -new_char CHAR     : replace TARGETSTRING with CHAR (repeated)

      This option is used to specify what TARGETSTRING is
      replaced with.  In this case, replace it with repeated
      copies of the character CHAR.

  -new_string STRING : replace TARGETSTRING with STRING

      This option is used to specify what TARGETSTRING is
      replaced with.  In this case, replace it with the string
      STRING.  If STRING is not long enough, then CHAR from the
      -new_char option will be used to complete the overwrite
      (or the character 'x', by default).

  -unescape          : parse TARGETSTRING for escaped characters
                       (includes '\t', '\n', '\r')

      If this option is given, strblast will parse TARGETSTRING
      replacing any escaped characters with their encoded ASCII
      values.

Examples:
  strings I.001 | more # see if Subject Name is present
  strblast 'Subject Name' I.*

  strblast -unescape "END OF LINE\n"       infile.txt
  strblast -new_char " " "BAD STRING"      infile.txt
  strblast -new_string "GOOD" "BAD STRING" infile.txt

Notes and Warnings:
  * strblast will modify the input files irreversibly!
      You might want to test if they are still usable.
  * strblast reads files into memory to operate on them.
      If the file is too big to fit in memory, strblast
      will fail.
  * strblast  will do internal wildcard expansion, so
      if there are too many input files for your shell to
      handle, you can do something like
         strblast 'Subject Name' 'I.*'
      and strblast will expand the 'I.*' wildcard for you.
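The core overwrite that strblast performs can be sketched in Python. blast is a hypothetical stand-in for the actual C implementation, shown only to illustrate that each match is replaced by a same-length run of a fill character, leaving file size and byte offsets unchanged:

```python
# In-memory sketch of strblast's core idea: every occurrence of the target
# is overwritten with a same-length run of a fill character, so the file
# size (and all byte offsets after the match) stay unchanged.
def blast(data: bytes, target: bytes, fill: bytes = b"x") -> bytes:
    return data.replace(target, fill * len(target))

scrubbed = blast(b"Subject Name: Jane Doe", b"Jane Doe")
print(scrubbed)
```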
This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
suma
Usage:  
 Mode 1: Using a spec file to specify surfaces
                suma -spec <SPEC file> 
                     [-sv <SURFVOL>] [-ah AfniHost]

   -spec <SPEC file>: File containing surface specification. 
                      This file is typically generated by 
                      @SUMA_Make_Spec_FS (for FreeSurfer surfaces) or 
                      @SUMA_Make_Spec_SF (for SureFit surfaces). 
                      The Spec file should be located in the directory 
                      containing the surfaces.
   [-sv <SURFVOL>]: Anatomical volume used in creating the surface 
                     and registered to the current experiment's anatomical 
                    volume (using @SUMA_AlignToExperiment). 
                    This parameter is optional, but linking to AFNI is 
                     not possible without it. If you find the need for it 
                    (as some have), you can specify the SurfVol in the 
                    specfile. You can do so by adding the field 
                    SurfaceVolume to each surface in the spec file. 
                    In this manner, you can have different surfaces using
                    different surface volumes.
   [-ah <AFNIHOST>]: Name (or IP address) of the computer running AFNI. This 
                     parameter is optional, the default is localhost. 
                     When both AFNI and SUMA are on the same computer, 
                     communication is through shared memory. You can turn that 
                     off by explicitly setting AfniHost to 127.0.0.1
   [-niml]: Start listening for NIML-formatted elements.
   [-dev]: Allow access to options that are not well polished for consumption.

 Mode 2: Using -t_TYPE or -t* options to specify surfaces on command line.
         -sv, -ah, -niml and -dev are still applicable here. This mode 
         is meant to simplify the quick viewing of a surface model.
                suma [-i_TYPE surface] [-t* surface] 
         Surfaces specified on command line are placed in a group
         called 'DefGroup'.
         If you specify nothing on command line, you will have a random
         surface created for you. Some of these surfaces are generated
         using Thomas Lewiner's sample volumes for creating isosurfaces.
         See suma -sources for a complete reference.

 Specifying input surfaces using -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
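The vec (1D) coord/topo layout described above can be illustrated with a small reader sketch. The parser functions are hypothetical helpers, and the data is an inline tetrahedron rather than real surface files:

```python
# Sketch of the 'vec'/1D surface format described above: the coord file
# holds 3 floats per line (X Y Z of each node), the topo file 3 ints per
# line (the three node indices of each triangle).
def parse_coords(text):
    return [tuple(float(v) for v in line.split())
            for line in text.strip().splitlines()]

def parse_topo(text):
    return [tuple(int(v) for v in line.split())
            for line in text.strip().splitlines()]

coords = parse_coords("0 0 0\n1 0 0\n0 1 0\n0 0 1")
topo = parse_topo("0 2 1\n0 1 3\n0 3 2\n1 2 3")   # a closed tetrahedron
print(len(coords), len(topo))
```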
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           SF: Caret/SureFit format
           BV: BrainVoyager format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.

 Modes 1 & 2: You can mix the two modes for loading surfaces but the -sv
              option may not be properly applied.
              If you mix these modes, you will have two groups of
              surfaces loaded into SUMA. You can switch between them
              using the 'Switch Group' button in the viewer controller.

  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 
   [-visuals] Shows the available glxvisuals and exits.
   [-version] Shows the current version number.
   [-latest_news] Shows the latest news for the current 
                  version of the entire SUMA package.
   [-all_latest_news] Shows the history of latest news.
   [-progs] Lists all the programs in the SUMA package.
   [-sources] Lists code sources used in parts of SUMA.

   For help on interacting with SUMA, press 'ctrl+h' with the mouse 
   pointer inside SUMA's window.
   For more help: http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm

   If you can't get help here, please get help somewhere.

   ++ SUMA version 2004_12_29
New Programs:
  + SurfClust: Program to find clusters of activation
               on the surface.
  + IsoSurface: Program to create isosurfaces from AFNI volumes.
  + ConvexHull: Program to create the convex hull of a set of
                points.
  + 3dSkullStrip: Program to remove the skull from anatomical 
                  volumes.
  + 3dCRUISEtoAFNI: Program to convert CRUISE volumes to AFNI
  + 3dBRAIN_VOYAGERtoAFNI: Program to convert BrainVoyager .vmr
                           volumes to AFNI
  + SurfMesh: Program to increase or decrease a mesh's density.
  + SurfMask: Program to find the volume enclosed by a surface.
  + SurfToSurf: Program to interpolate between non-isotopic surfaces.
Modifications:
  + SUMA:
    o Slight modification to threshold scale.
    o Added environment variable SUMA_ThresholdScalePower.
    o Fixed a few kinks in the surface controller.
    o Fixed ROI drawing trace on OSX.
    o Added geodesic distance measurements in ROI drawing
    controller.
    o Suma can read surfaces specified on command line.
    o Fixed bug reading AFNI generated niml files.
    o Useful axis displayed with F2 key.
    o Fixed bug with recursive function used to fill ROIs.
    o Support for reading CRUISE surfaces in OpenDX format
    o Support for reading BrainVoyager surfaces (.srf) format
    o Mouse motion effect is modulated with Zoom level
    o F8 toggles between orthographic and perspective viewing
  + ConvertSurface:
    o Option -make_consistent added to make the winding
    of the mesh consistent.  
  + SurfQual:
    o Checks and warns about mesh's winding inconsistency.
  + SurfSmooth:
    o Added NN_geom, nearest neighbor interpolation option.
    o Combined with -match_vol or -match_area, this geometry
    smoothing mode can be used to inflate surfaces.
  + SurfaceMetrics:
    o Option -vol calculates the volume of the closed surface.
  + SurfPatch:
    o Option -vol to calculate the volume between two isotopic
    surface patches.
  + ROI2dataset:
    o Option -pad_to_node and -pad_label to output datasets
    containing full node listings.
  + ConvertDset:
    o Option -o_1dp was added to write 1D file data only,
    without additional comments.

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005



    Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov 

This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
@SUMA_AlignToExperiment
Usage: @SUMA_AlignToExperiment <EXPERIMENT Anatomy> <SURFACE Anatomy> 
                     [dxyz] [-wd] [-prefix PREFIX] [-EA_clip_below CLP]
creates a version of Surface Anatomy that is registered to Experiment Anatomy.

Mandatory parameters:
<EXPERIMENT Anatomy>: Name of high resolution anatomical data set in register 
        with experimental data.
<SURFACE Anatomy> Path and Name of high resolution anatomical data set used to 
        create the surface.

Optional parameters:
   [DXYZ|-dxyz DXYZ]: This optional parameter indicates that the anatomical 
        volumes must be downsampled to dxyz mm voxel resolution before 
        registration. That is only necessary if 3dvolreg runs out of memory.
        You MUST have 3dvolreg that comes with afni distributions newer than 
        version 2.45l. It contains an option for reducing memory usage and 
        thus allows the registration of large data sets.
   [-wd]: Use 3dWarpDrive's general affine transform (12 param) instead of 
        3dvolreg's 6 parameters.
        If the anatomical coverage differs markedly between 'Experiment 
        Anatomy' and 'Surface Anatomy', you might need to use -EA_clip_below 
        option or you could end up with a very distorted brain.
   [-EA_clip_below CLP]: Set slices below CLPmm in 'Experiment Anatomy' to zero.
        Use this if the coverage of 'Experiment Anatomy' dataset
        extends far below the data in 'Surface Anatomy' dataset.
        To get the value of CLP, use AFNI to locate the slice
        below which you want to clip and set CLP to the z coordinate
        from AFNI's top left corner. Coordinate must be in RAI, DICOM.
   [-prefix PREFIX]: Use PREFIX for the output volume. Default is the prefix 
        of the 'Surface Anatomy' suffixed by _AlndExp.


NOTE: You must run the script from the directory where Experiment Anatomy resides.

Example 1: For datasets with no relative distortion and comparable coverage.
           Using 6 param. rigid body transform.
@SUMA_AlignToExperiment DemoSubj_spgrsa+orig. \
                        ../FreeSurfer/SUMA/DemoSubj_SurfVol+orig.

Example 2: For datasets with some distortion and different coverage.
           Using 12 param. transform and clipping of areas below cerebellum:
@SUMA_AlignToExperiment ABanat+orig. DemoSubj_SurfVol+orig. \
                       -wd -prefix DemoSubj_SurfVol_WD_AlndExp \
                       -EA_clip_below -30

More help may be found at http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm

Ziad Saad (ziad@nih.gov)
SSCC/NIMH/ National Institutes of Health, Bethesda Maryland

This page auto-generated on Thu Aug 25 16:49:42 EDT 2005
suma_change_spec
Unknown option: help
suma_change_spec:
 This program changes SUMA's surface specification (Spec) files.
 At minimum, the flags input and state are required.
Available flags:
  input: Which is the SUMA Spec file you want to change.
  state: The state within the Spec file you want to change.
  domainparent: The new Domain Parent for the state within the 
	Spec file you want to change.
  output: The name to which your new Spec file will be temporarily
	written. (This flag is optional; if omitted, the new Spec
	file will be temporarily written to 'input_file.change'.)
  remove: This flag will remove the automatically created backup.
  anatomical: This will add 'Anatomical = Y' to the selected
	SurfaceState.
Usage:
 This program will take the user given flags and create a spec file,
 named from the output flag or <INPUT>.change.  It will then take
 this new spec file and overwrite the original input file.  If the -remove
 flag is not used the original input file can be found at <INPUTFILE>.bkp.
 If the -remove is used the .bkp file will be  automatically deleted.

 ex. suma_change_spec -input <FILE> -state <STATENAME> 
	-domainparent <NEW_PARENT> -anatomical
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
@SUMA_Make_Spec_FS
@SUMA_Make_Spec_FS - prepare for surface viewing in SUMA

    This script goes through the following steps:
      - verify existence of necessary programs 
        (afni, to3d, suma, mris_convert)
      - determine the location of surface and COR files
      - creation of ascii surface files via 'mris_convert'
      - creation of left and right hemisphere SUMA spec files
      - creation of an AFNI dataset from the COR files via 'to3d'

      - all created files are stored in a new SUMA directory

  Usage: @SUMA_Make_Spec_FS [options] -sid SUBJECT_ID

  examples:

    @SUMA_Make_Spec_FS -sid subject1
    @SUMA_Make_Spec_FS -help
    @SUMA_Make_Spec_FS -fspath subject1/surface_stuff -sid subject1
    @SUMA_Make_Spec_FS -neuro -sid 3.14159265 -debug 1

  options:

    -help    : show this help information

    -debug LEVEL    : print debug information along the way
          e.g. -debug 1
          the default level is 0, max is 2

    -fspath PATH    : path to 'surf' and 'orig' directories
          e.g. -fspath subject1/surface_info
          the default PATH value is './', the current directory

          This is generally the location of the 'surf' directory,
          though having PATH end in surf is OK.  The mri/orig
          directory should also be located here.

          Note: when this option is provided, all file/path
          messages will be with respect to this directory.

    -neuro          : use neurological orientation
          e.g. -neuro
          the default is radiological orientation

          In the default radiological orientation, the subject's
          right is on the left side of the image.  In the
          neurological orientation, left is really left.

    -sid SUBJECT_ID : required subject ID for file naming


  notes:

    0. More help may be found at http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm
    1. Surface file names should look like 'lh.smoothwm'.
    2. Patches of surfaces need the word patch in their name, in
       order to use the correct option for 'mris_convert'.
    3. Flat surfaces must have .flat in their name.
    4. You can tailor the script to your needs. Just make sure you rename it or risk
       having your modifications overwritten with the next SUMA version you install.

     R. Reynolds (rickr@codon.nih.gov), Z. Saad (ziad@nih.gov)

This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
@SUMA_Make_Spec_SF
@SUMA_Make_Spec_SF - prepare for surface viewing in SUMA

    This script goes through the following steps:
      - determine the location of surfaces and 
        the AFNI volume data sets used to create them.
      - creation of left and right hemisphere SUMA spec files

      - all created files are stored in SURFACES directory

  Usage: @SUMA_Make_Spec_SF [options] -sid SUBJECT_ID

  examples:

    @SUMA_Make_Spec_SF -sid subject1
    @SUMA_Make_Spec_SF -help
    @SUMA_Make_Spec_SF -sfpath subject1/surface_stuff -sid subject1

  options:

    -help    : show this help information

    -debug LEVEL    : print debug information along the way
          e.g. -debug 1
          the default level is 0, max is 2

    -sfpath PATH    : path to directory containing 'SURFACES'
                      and AFNI volume used in creating the surfaces.
          e.g. -sfpath subject1/surface_models
          the default PATH value is './', the current directory

          This is generally the location of the 'SURFACES' directory,
          though having PATH end in SURFACES is OK.  

          Note: when this option is provided, all file/path
          messages will be with respect to this directory.


    -sid SUBJECT_ID : required subject ID for file naming


  notes:

    0. More help may be found at http://afni.nimh.nih.gov/ssc/ziad/SUMA/SUMA_doc.htm
    1. Surface file names should look like the standard names used by SureFit:
       rw_1mmLPI.L.full.segment_vent_corr.fiducial.58064.coord
       Otherwise the script cannot detect them. You will need to decide which
       surface is the most recent (the best) and the script helps you by listing
       the available surfaces with the most recent one first.
       This sorting usually works except when the time stamps on the surface files
       are messed up. In such a case you just need to know which one to use.
       Once the fiducial surface is chosen, its complementary surfaces are selected
       using the node number in the file name.
    3. You can tailor the script to your needs. Just make sure you rename it or risk
       having your modifications overwritten with the next SUMA version you install.

     R. Reynolds (rickr@codon.nih.gov), Z. Saad (ziad@nih.gov)

This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
SurfaceMetrics
Error Main_SUMA_SurfaceMetrics (SUMA_Load_Surface_Object.c:3246):
 Too few parameters

Usage: SurfaceMetrics <-Metric1> [[-Metric2] ...] 
                  <-spec SpecFile> <-surf_A insurf> 
                  [<-sv SurfaceVolume [VolParam for sf surfaces]>]
                  [-tlrc] [<-prefix prefix>]

Outputs information about a surface's mesh

   -Metric1: Replace -Metric1 with the following:
      -vol: calculates the volume of a surface.
            Volume unit is the cube of your surface's
            coordinates unit, obviously.
            Volume's sign depends on the orientation
            of the surface's mesh.
            Make sure your surface is a closed one
            and that winding is consistent.
            Use SurfQual to check the surface.
            If your surface's mesh has problems,
            the result is incorrect. 
            Volume is calculated using Gauss's theorem,
            see [Hughes, S.W. et al. 'Application of a new 
            discreet form of Gauss's theorem for measuring 
            volume' in Phys. Med. Biol. 1996].
      -conv: output surface convexity at each node.
         Output file is prefix.conv. Results in two columns:
         Col.0: Node Index
         Col.1: Convexity
         This is the measure used to shade sulci and gyri in SUMA.
         C[i] = Sum(dj/dij) over all neighbors j of i
         dj is the distance of neighboring node j to the tangent plane at i
         dij is the length of the segment ij
      -area: output area of each triangle. 
         Output file is prefix.area. Results in two columns:
         Col.0: Triangle Index
         Col.1: Triangle Area
      -curv: output curvature at each node.
         Output file is prefix.curv. Results in nine columns:
         Col.0: Node Index
         Col.1-3: vector of 1st principal direction of surface
         Col.4-6: vector of 2nd principal direction of surface
         Col.7: Curvature along T1
         Col.8: Curvature along T2
         Curvature algorithm by G. Taubin from: 
         'Estimating the tensor of curvature of surface 
         from a polyhedral approximation.'
      -edges: outputs info on each edge. 
         Output file is prefix.edges. Results in five columns:
         Col.0: Edge Index (into a SUMA structure).
         Col.1: Index of the first node forming the edge
         Col.2: Index of the second node forming the edge
         Col.3: Number of triangles containing edge
         Col.4: Length of edge.
      -node_normals: Outputs segments along node normals.
                     Segments begin at node and have a default
                     magnitude of 1. See option 'Alt+Ctrl+s' in 
                     SUMA for visualization.
      -face_normals: Outputs segments along triangle normals.
                     Segments begin at centroid of triangles and 
                     have a default magnitude of 1. See option 
                     'Alt+Ctrl+s' in SUMA for visualization.
      -normals_scale SCALE: Scale the normals by SCALE (1.0 default)
                     For use with options -node_normals and -face_normals
      -coords: Output coords of each node after any transformation 
         that is normally carried out by SUMA on such a surface.
         Col. 0: Node Index
         Col. 1: X
         Col. 2: Y
         Col. 3: Z
      -sph_coords: Output spherical coords of each node.
      -sph_coords_center x y z: Shift each node by  x y z
                                before calculating spherical
                                coordinates. Default is the
                                center of the surface.
          Both sph_coords options output the following:
          Col. 0: Node Index
          Col. 1: R (radius)
          Col. 2: T (azimuth)
          Col. 3: P (elevation)

      You can use any or all of these metrics simultaneously.
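The signed volume reported by -vol follows from Gauss's (divergence) theorem. A minimal sketch, assuming a closed and consistently wound mesh: each triangle (a, b, c) contributes one sixth of the scalar triple product of its vertex coordinates, and the sign follows the winding, as the help text warns:

```python
# Minimal sketch of the divergence-theorem volume computed by -vol:
# for a closed, consistently wound triangle mesh, each triangle (a, b, c)
# contributes dot(a, cross(b, c)) / 6 to the signed volume.
def signed_volume(nodes, triangles):
    vol = 0.0
    for i, j, k in triangles:
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = nodes[i], nodes[j], nodes[k]
        vol += (ax * (by * cz - bz * cy)
                + ay * (bz * cx - bx * cz)
                + az * (bx * cy - by * cx)) / 6.0
    return vol

# Unit tetrahedron with outward-facing winding; its volume is 1/6.
nodes = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(signed_volume(nodes, tris))
```

Reversing the winding of every triangle flips the sign of the result, which is why SurfQual's consistency check matters before trusting -vol.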

   -spec SpecFile: Name of specfile containing surface of interest.
                   If the surface does not have a spec file, use the 
                   program quickspec to create one.
   -surf_A insurf: Name of surface of interest. 
                   NOTE: i_TYPE inSurf option is now obsolete.

   -sv SurfaceVolume [VolParam for sf surfaces]: Specify a surface volume
                   for surface alignment. See ConvertSurface -help for more info.

   -tlrc: Apply Talairach transform to surface.
                   See ConvertSurface -help for more info.

   -prefix prefix: Use prefix for output files. (default is prefix of inSurf)
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov 
       Mon May 19 15:41:12 EDT 2003

This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
SurfClust
Usage: A program to perform clustering analysis on surfaces.
  SurfClust <-spec SpecFile> 
            <-surf_A insurf> 
            <-input inData.1D dcol_index> 
            <-rmm rad>
            [-amm2 minarea]
            [-prefix OUTPREF]  
            [-out_clusterdset] [-out_roidset] 
            [-out_fulllist]
            [-sort_none | -sort_n_nodes | -sort_area]

  The program can output a table of the clusters on the surface,
  a mask dataset formed by the different clusters, and a clustered
  version of the input dataset.

  Mandatory parameters:
     -spec SpecFile: The surface spec file.
     -surf_A insurf: The input surface name.
     -input inData.1D dcol_index: The input 1D dataset
                                  and the index of the
                                  datacolumn to use
                                  (index 0 for 1st column).
                                  Values of 0 indicate 
                                  inactive nodes.
     -rmm rad: Maximum distance between an activated node
               and the cluster to which it belongs.
               Distance is measured on the surface's graph (mesh).

  Optional Parameters:
     -thresh_col tcolind: Index of thresholding column.
                          Default is column 0.
      -thresh tval: Apply thresholding prior to clustering.
                   A node n is considered if thresh_col[n] > tval.
     -athresh tval: Apply absolute thresholding prior to clustering.
                    A node n is considered if | thresh_col[n] | > tval.
     -amm2 minarea: Do not output results for clusters having
                    an area less than minarea.
     -prefix OUTPREF: Prefix for output.
                      Default is the prefix of 
                      the input dataset.
                      If this option is used, the
                      cluster table is written to a file called
                      OUTPREF_ClstTable_rXX_aXX.1D. Otherwise the
                      table is written to stdout. 
     -out_clusterdset: Output a clustered version of inData.1D 
                       preserving only the values of nodes that 
                       belong to clusters that passed the rmm and amm2
                       conditions above.
                       The clustered dset's prefix has
                       _Clustered_rXX_aXX affixed to the OUTPREF
     -out_roidset: Output an ROI dataset with the value
                   at each node being the rank of its
                   cluster. The ROI dataset's prefix has
                   _ClstMsk_rXX_aXX affixed to the OUTPREF
                    where XX represent the values for the
                    -rmm and -amm2 options respectively.
                   The program will not overwrite pre-existing
                   dsets.
     -out_fulllist: Output a value for all nodes of insurf.
                     This option must be used in conjunction with
                     -out_roidset and/or -out_clusterdset.
                    With this option, the output files might
                    be mostly 0, if you have small clusters.
                    However, you should use it if you are to 
                    maintain the same row-to-node correspondence
                    across multiple datasets.
     -sort_none: No sorting of ROI clusters.
     -sort_n_nodes: Sorting based on number of nodes
                    in cluster.
     -sort_area: Sorting based on area of clusters 
                 (default).
     -update perc: Pacify me when perc of the data have been
                   processed. perc is between 1% and 50%.
                   Default is no update.
     -no_cent: Do not find the central nodes.
               Finding the central node is a 
               relatively slow operation. Use
               this option to skip it.

  The cluster table output:
  A table where each row shows results from one cluster.
  Each row contains 13 columns:   
     Col. 0  Rank of cluster (sorting order).
     Col. 1  Number of nodes in cluster.
     Col. 2  Total area of cluster. Units are the
             surface coordinates' units^2.
     Col. 3  Mean data value in cluster.
     Col. 4  Mean of absolute data value in cluster.
     Col. 5  Central node of cluster (see below).
     Col. 6  Weighted central node (see below).
     Col. 7  Minimum value in cluster.
     Col. 8  Node where minimum value occurred.
     Col. 9  Maximum value in cluster.
     Col. 10 Node where maximum value occurred.
     Col. 11 Variance of values in cluster.
     Col. 12 Standard error of the mean ( sqrt(variance / number of nodes) ).
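  As an illustration of how the statistical columns fit together, here is a
  hypothetical Python sketch (the sample-variance convention, n - 1 divisor,
  is an assumption; the help text does not specify it):

```python
import math

def cluster_stats(nodes, values):
    # Summarize one cluster along the lines of Cols. 1, 3, 4, 7-12.
    # NOTE: sample variance (n - 1 divisor) is an assumption here.
    n = len(values)
    mean = sum(values) / n                      # Col. 3
    abs_mean = sum(abs(v) for v in values) / n  # Col. 4
    vmin, vmax = min(values), max(values)       # Cols. 7, 9
    node_min = nodes[values.index(vmin)]        # Col. 8
    node_max = nodes[values.index(vmax)]        # Col. 10
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # Col. 11
    sem = math.sqrt(var / n)                    # Col. 12
    return {"n": n, "mean": mean, "abs_mean": abs_mean,
            "min": (vmin, node_min), "max": (vmax, node_max),
            "var": var, "sem": sem}
```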
   The CenterNode n is such that: 
   ( sum (Uia * dia * wi) ) - ( Uca * dca * sum (wi) ) is minimal
     where i is a node in the cluster
           a is an anchor node on the surface
           sum is carried over all nodes i in a cluster
           w. is the weight of a node 
              = 1.0 for central node 
              = value at node for the weighted central node
           U.. is the unit vector between two nodes
           d.. is the distance between two nodes on the graph
              (an approximation of the geodesic distance)
   If -no_cent is used, CenterNode columns are set to 0.
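  If the graph-distance terms are replaced by Euclidean displacements
  (U.. * d.. then becomes just the vector between the two nodes), the
  criterion above reduces to picking the cluster node nearest the weighted
  centroid. A hypothetical Python sketch of that reduced form (SurfClust
  itself uses graph distances, so its answer can differ):

```python
def central_node(coords, cluster, weights=None):
    # coords: (x, y, z) per node; cluster: node indices in the cluster.
    # weights: 1.0 everywhere for the central node (Col. 5), or the
    # data value at each node for the weighted central node (Col. 6).
    if weights is None:
        weights = [1.0] * len(cluster)
    total = sum(weights)
    centroid = [sum(w * coords[i][k] for i, w in zip(cluster, weights)) / total
                for k in range(3)]
    def dist2(i):
        return sum((coords[i][k] - centroid[k]) ** 2 for k in range(3))
    return min(cluster, key=dist2)
```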

  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
SurfMeasures
SurfMeasures - compute measures from the surface dataset(s)

  usage: SurfMeasures [options] -spec SPEC_FILE -out_1D OUTFILE.1D

    This program is meant to read in a surface or surface pair,
    and to output any user-requested measures over the surfaces.
    The surfaces must be specified in the SPEC_FILE.

 ** Use the 'inspec' command for getting information about the
    surfaces in a spec file.

    The output will be a 1D format text file, with one column
    (or possibly 3) per user-specified measure function.  Some
    functions require only 1 surface, some require 2.

    Current functions (applied with '-func') include:

        ang_norms    : angular difference between normals
        ang_ns_A     : angular diff between segment and first norm
        ang_ns_B     : angular diff between segment and second norm
        coord_A      : xyz coordinates of node on first surface
        coord_B      : xyz coordinates of node on second surface
        n_area_A     : associated node area on first surface
        n_area_B     : associated node area on second surface
        n_avearea_A  : for each node, average area of triangles (surf A)
        n_avearea_B  : for each node, average area of triangles (surf B)
        n_ntri       : for each node, number of associated triangles
        node_vol     : associated node volume between surfaces
        nodes        : node number
        norm_A       : vector of normal at node on first surface
        norm_B       : vector of normal at node on second surface
        thick        : distance between surfaces along segment
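
  Purely for illustration, the 'thick' measure above amounts to a per-node
  segment length between corresponding nodes; a minimal Python sketch
  (assuming node i on surface A pairs with node i on surface B, as for
  isotopic meshes):

```python
import math

def thick(coords_A, coords_B):
    # Per-node distance along the A->B segment; node i on surface A
    # is assumed to correspond to node i on surface B.
    return [math.dist(a, b) for a, b in zip(coords_A, coords_B)]
```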

------------------------------------------------------------

  examples:

    1. For each node on the surface smoothwm in the spec file,
       fred.spec, output the node number (the default action),
       the xyz coordinates, and the area associated with the
       node (1/3 of the total area of triangles having that node
       as a vertex).

        SurfMeasures                                   \
            -spec       fred1.spec                     \
            -sv         fred_anat+orig                 \
            -surf_A     smoothwm                       \
            -func       coord_A                        \
            -func       n_area_A                       \
            -out_1D     fred1_areas.1D                   
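
        The 1/3-of-triangle-area rule described above can be sketched in
        Python (hypothetical helper, not part of SurfMeasures):

```python
import math

def node_areas(coords, triangles):
    # Each node receives 1/3 of the area of every triangle
    # that has it as a vertex (the n_area_A rule above).
    areas = [0.0] * len(coords)
    for a, b, c in triangles:
        u = [coords[b][k] - coords[a][k] for k in range(3)]
        v = [coords[c][k] - coords[a][k] for k in range(3)]
        cross = [u[1] * v[2] - u[2] * v[1],
                 u[2] * v[0] - u[0] * v[2],
                 u[0] * v[1] - u[1] * v[0]]
        tri_area = 0.5 * math.sqrt(sum(x * x for x in cross))
        for node in (a, b, c):
            areas[node] += tri_area / 3.0
    return areas
```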

    2. For each node of the surface pair smoothwm and pial,
       display the:
         o  node index
         o  node's area from the first surface
         o  node's area from the second surface
         o  node's (approximate) resulting volume
         o  thickness at that node (segment distance)
         o  coordinates of the first segment node
         o  coordinates of the second segment node

         Additionally, display total surface areas, minimum and
         maximum thicknesses, and the total volume for the
         cortical ribbon (the sum of node volumes).

        SurfMeasures                                   \
            -spec       fred2.spec                     \
            -sv         fred_anat+orig                 \
            -surf_A     smoothwm                       \
            -surf_B     pial                           \
            -func       n_area_A                       \
            -func       n_area_B                       \
            -func       node_vol                       \
            -func       thick                          \
            -func       coord_A                        \
            -func       coord_B                        \
            -info_area                                 \
            -info_thick                                \
            -info_vol                                  \
            -out_1D     fred2_vol.1D                     

    3. For each node of the surface pair, display the:
         o  node index
         o  angular diff between the first and second norms
         o  angular diff between the segment and first norm
         o  angular diff between the segment and second norm
         o  the normal vectors for the first surface nodes
         o  the normal vectors for the second surface nodes

        SurfMeasures                                   \
            -spec       fred2.spec                     \
            -surf_A     smoothwm                       \
            -surf_B     pial                           \
            -func       ang_norms                      \
            -func       ang_ns_A                       \
            -func       ang_ns_B                       \
            -func       norm_A                         \
            -func       norm_B                         \
            -out_1D     fred2_norm_angles.1D             

    4. Similar to #3, but output extra debug info, and in
       particular, info regarding node 5000.

        SurfMeasures                                   \
            -spec       fred2.spec                     \
            -sv         fred_anat+orig                 \
            -surf_A     smoothwm                       \
            -surf_B     pial                           \
            -func       ang_norms                      \
            -func       ang_ns_A                       \
            -func       ang_ns_B                       \
            -debug      2                              \
            -dnode      5000                           \
            -out_1D     fred2_norm_angles.1D             

    5. For each node, output the volume, thickness and areas,
       but restrict the nodes to the list contained in column 0
       of file sdata.1D.  Furthermore, restrict those nodes to
       the mask inferred by the given '-cmask' option.

        SurfMeasures                                                   \
            -spec       fred2.spec                           \
            -sv         fred_anat+orig                       \
            -surf_A     smoothwm                             \
            -surf_B     pial                                 \
            -func       node_vol                             \
            -func       thick                                \
            -func       n_area_A                             \
            -func       n_area_B                             \
            -nodes_1D   'sdata.1D[0]'                        \
            -cmask      '-a sdata.1D[2] -expr step(a-1000)'  \
            -out_1D     fred2_masked.1D                  

------------------------------------------------------------

  REQUIRED COMMAND ARGUMENTS:

    -spec SPEC_FILE       : SUMA spec file

        e.g. -spec fred2.spec

        The surface specification file contains a list of
        related surfaces.  In order for a surface to be
        processed by this program, it must exist in the spec
        file.

    -surf_A SURF_NAME     : surface name (in spec file)
    -surf_B SURF_NAME     : surface name (in spec file)

        e.g. -surf_A smoothwm
        e.g. -surf_A lh.smoothwm
        e.g. -surf_B lh.pial

        This is used to specify which surface(s) will be used
        by the program.  The 'A' and 'B' correspond to other
        program options (e.g. the 'A' in n_area_A).

        The '-surf_B' parameter is required only when the user
        wishes to input two surfaces.

        Any surface name provided must be unique in the spec
        file, and must match the name of the surface data file
        (e.g. lh.smoothwm.asc).

    -out_1D OUT_FILE.1D   : 1D output filename

        e.g. -out_1D pickle_norm_info.1D

        This option is used to specify the name of the output
        file.  The output file will be in the 1D ascii format,
        with 2 rows of comments for column headers, and 1 row
        for each node index.

        There will be 1 or 3 columns per '-func' option, with
        a default of 1 for "nodes".

------------------------------------------------------------

  ALPHABETICAL LISTING OF OPTIONS:

    -cmask COMMAND        : restrict nodes with a mask

        e.g.     -cmask '-a sdata.1D[2] -expr step(a-1000)'

        This option will produce a mask to be applied to the
        list of surface nodes.  The total mask size, including
        zero entries, must match the number of nodes.  If a
        specific node list is provided via the '-nodes_1D'
        option, then the mask size should match the length of
        the provided node list.
        
        Consider the provided example using the file sdata.1D.
        If a surface has 100000 nodes (and no '-nodes_1D' option
        is used), then there must be 100000 values in column 2
        of the file sdata.1D.

        Alternately, if the '-nodes_1D' option is used, giving
        a list of 42 nodes, then the mask length should also be
        42 (regardless of 0 entries).

        See '-nodes_1D' for more information.
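        The length rule can be sketched as follows, with AFNI's step()
        rewritten in Python (illustration only; real '-cmask' expressions
        are evaluated by AFNI's own expression parser):

```python
def step(x):
    # AFNI-style step(): 1 when x > 0, else 0.
    return 1 if x > 0 else 0

def apply_cmask(node_list, mask_col):
    # mask_col must be exactly as long as node_list (the rule above),
    # here applying the example expression step(a - 1000).
    if len(mask_col) != len(node_list):
        raise ValueError("mask length must match the node list length")
    return [n for n, a in zip(node_list, mask_col) if step(a - 1000)]
```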

    -debug LEVEL          : display extra run-time info

        e.g.     -debug 2
        default: -debug 0

        Valid debug levels are from 0 to 5.

    -dnode NODE           : display extra info for node NODE

        e.g. -dnode 5000

        This option can be used to display extra information
        about node NODE during surface evaluation.

    -func FUNCTION        : request output for FUNCTION

        e.g. -func thick

        This option is used to request output for the given
        FUNCTION (measure).  Some measures produce one column
        of output (e.g. thick or ang_norms), and some produce
        three (e.g. coord_A).  These options, in the order they
        are given, determine the structure of the output file.

        Current functions include:

            ang_norms    : angular difference between normals
            ang_ns_A     : angular diff between segment and first norm
            ang_ns_B     : angular diff between segment and second norm
            coord_A      : xyz coordinates of node on first surface
            coord_B      : xyz coordinates of node on second surface
            n_area_A     : associated node area on first surface
            n_area_B     : associated node area on second surface
            n_avearea_A  : for each node, average area of triangles (surf A)
            n_avearea_B  : for each node, average area of triangles (surf B)
            n_ntri       : for each node, number of associated triangles
            node_vol     : associated node volume between surfaces
            nodes        : node number
            norm_A       : vector of normal at node on first surface
            norm_B       : vector of normal at node on second surface
            thick        : distance between surfaces along segment

          Note that the node volumes are approximations.  Places
          where either normal points in the 'wrong' direction
          will be incorrect, as will be the parts of the surface
          that 'encompass' this region.  Maybe we could refer
          to this as a mushroom effect...

          Basically, expect the total volume to be around 10%
          too large.

          ** for more accuracy, try 'SurfPatch -vol' **

    -help                 : show this help menu

    -hist                 : display program revision history

        This option is used to provide a history of changes
        to the program, along with version numbers.

  NOTE: the following '-info_XXXX' options are used to display
        pieces of 'aggregate' information about the surface(s).

    -info_all             : display all final info

        This is a short-cut to get all '-info_XXXX' options.

    -info_area            : display info on surface area(s)

        Display the total area of each triangulated surface.

    -info_norms           : display info about the normals

        For 1 or 2 surfaces, this will give (if possible) the
        average angular difference between:

            o the normals of the surfaces
            o the connecting segment and the first normal
            o the connecting segment and the second normal

    -info_thick           : display min and max thickness

        For 2 surfaces, this is used to display the minimum and
        maximum distances between the surfaces, along each of
        the connecting segments.

    -info_vol             : display info about the volume

        For 2 surfaces, display the total computed volume.
        Note that this node-wise volume computation is an
        approximation, and tends to run ~10 % high.

        ** for more accuracy, try 'SurfPatch -vol' **

    -nodes_1D NODELIST.1D : request output for only these nodes

        e.g.  -nodes_1D node_index_list.1D
        e.g.  -nodes_1D sdata.1D'[0]'

        The NODELIST file should contain a list of node indices.
        Output from the program would then be restricted to the
        nodes in the list.
        
        For instance, suppose that the file BA_04.1D contains
        a list of surface nodes that are located in Brodmann's
        Area 4.  To get output from the nodes in that area, use:
        
            -nodes_1D BA_04.1D
        
        For another example, suppose that the file sdata.1D has
        node indices in column 0, and Brodmann's Area indices in
        column 3.  To restrict output to the nodes in Brodmann's
        area 4, use the pair of options:
        
            -nodes_1D 'sdata.1D[0]'                     \
            -cmask '-a sdata.1D[3] -expr (1-bool(a-4))' 

    -sv SURF_VOLUME       : specify an associated AFNI volume

        e.g. -sv fred_anat+orig

        If there is any need to know the orientation of the
        surface, a surface volume dataset may be provided.

    -ver                  : show version information

        Show version and compile date.

------------------------------------------------------------

  Author: R. Reynolds  - version 1.11 (October 6, 2004)

This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
SurfMesh
Usage:
  SurfMesh <-i_TYPE SURFACE> <-o_TYPE OUTPUT> <-edges FRAC> 
           [-sv SURF_VOL]
 
  Example:
  SurfMesh -i_ply surf1.ply -o_ply surf1_half -edges 0.5

  Mandatory parameters:
     -i_TYPE SURFACE: Input surface. See below for details. 
              You can also use the -t* method or
              the -spec SPECFILE -surf SURFACE method.
     -o_TYPE OUTPUT: Output surface, see below.
     -edges FRAC: Surface will be simplified to its number of
              edges times FRAC (fraction). Default is 0.5.
              A FRAC greater than 1 refines the surface instead.

 Specifying input surfaces using -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
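    For illustration, the vec (1D) coord/topo layout described above can be
    generated by hand; a minimal sketch of a hypothetical one-triangle
    surface:

```python
# One triangle in the vec (1D) layout: coord rows are "X Y Z" floats,
# topo rows are "v1 v2 v3" node indices into the coord file.
coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
topo = [(0, 1, 2)]

coord_text = "\n".join("%g %g %g" % xyz for xyz in coords) + "\n"
topo_text = "\n".join("%d %d %d" % tri for tri in topo) + "\n"
```

    Written to a pair of files (e.g. brain.1D.coord and brain.1D.topo),
    these would be given in that order to the -i_1D option.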
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           SF: Caret/SureFit format
           BV: BrainVoyager format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
        If you supply a surface volume, the coordinates of the input surface
         are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying a surface using -surf_? method:
    -surf_A SURFACE: specify the name of the first
            surface to load. If the program requires
            or allows multiple surfaces, use -surf_B
            ... -surf_Z .
            You need not use _A if only one surface is
            expected.
            SURFACE is the name of the surface as specified
            in the SPEC file. The use of -surf_ option 
            requires the use of -spec option.
 Specifying output surfaces using -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
             In addition to outSurf, you need to specify
             the name of the parent surface for the patch
             using the -ipar_TYPE option.
             This option is only for ConvertSurface.
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.

  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

 Originally written by Jakub Otwinowski.
 Now maintained by Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
 This program uses the GTS library gts.sf.net
 for fun read "Fast and memory efficient polygonal simplification" (1998) 
 and "Evaluation of memoryless simplification" (1999) by Lindstrom and Turk.
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
SurfPatch
Usage:
  SurfPatch <-spec SpecFile> <-surf_A insurf> <-surf_B insurf> ...
            <-input nodefile inode ilabel> <-prefix outpref>  
            [-hits min_hits] [-masklabel msk] [-vol]

Usage 1:
  The program creates a patch of surface formed by nodes 
  in nodefile.
  Mandatory parameters:
     -spec SpecFile: Spec file containing input surfaces.
     -surf_X: Name of input surface X where X is a character
              from A to Z. If surfaces are specified using two
              files, use the name of the node coordinate file.
     -input nodefile inode ilabel: 
            nodefile is the file containing nodes defining the patch.
            inode is the index of the column containing the nodes
            ilabel is the index of the column containing labels of
                    the nodes in column inode. If you want to use
                    all the nodes in column inode, then set this 
                    parameter to -1 (default). 
                   If ilabel is not equal to 0 then the corresponding 
                   node is used in creating the patch.
                   See -masklabel option for one more variant.
     -prefix outpref: Prefix of output patch. If more than one surface
                       is entered, then the prefix will have _X added
                       to it, where X is a character from A to Z.
                       Output format depends on the input surface's format.
                       With that setting, checking for pre-existing files
                       is only done just before writing the new patch, which is
                       annoying. You can set the output type ahead of time
                       with the -out_type option. This way checking for
                       pre-existing output files can be done at the outset.

  Optional parameters:
     -out_type TYPE: Type of all output patches, regardless of input surface type.
                     Choose from: FreeSurfer, SureFit, 1D and Ply.
     -hits min_hits: Minimum number of nodes specified for a triangle
                     to be made a part of the patch (1 <= min_hits <= 3)
                     default is 2.
     -masklabel msk: If specified, then only nodes that are labeled
                      with msk are considered for the patch.
                      This option is useful if you have an ROI dataset file
                      and wish to create a patch from one out of many ROIs
                      in that file. This option must be used with ilabel 
                      specified (not = -1).
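
  The -hits rule can be sketched in Python (hypothetical helper, not
  SurfPatch's actual code):

```python
def patch_triangles(triangles, patch_nodes, min_hits=2):
    # Keep a triangle when at least min_hits of its 3 vertices
    # are in the patch node set (1 <= min_hits <= 3).
    keep = set(patch_nodes)
    return [t for t in triangles
            if sum(v in keep for v in t) >= min_hits]
```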

Usage 2:
  The program can also be used to calculate the volume between the same patch
  on two isotopic surfaces. See -vol option below.
       -vol: Calculate the volume formed by the patch on surf_A
             and surf_B. For this option, you must specify two and
            only two surfaces with surf_A and surf_B options.
      -vol_only: Only calculate the volume, don't write out patches.

  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
SurfQual
Usage: A program to check the quality of surfaces.
  SurfQual <-spec SpecFile> <-surf_A insurf> <-surf_B insurf> ...
             <-sphere> [-self_intersect] [-prefix OUTPREF]  

  Mandatory parameters:
     -spec SpecFile: Spec file containing input surfaces.
     -surf_X: Name of input surface X where X is a character
              from A to Z. If surfaces are specified using two
              files, use the name of the node coordinate file.
  Mesh winding consistency and 2-manifold checks are performed
  on all surfaces.
  Optional parameters:
     -self_intersect: Check if surface is self intersecting.
                      This option is rather slow, so be patient.
                      In the presence of intersections, the output file
                      OUTPREF_IntersNodes.1D.dset will contain the indices
                      of nodes forming segments that intersect the surface.
  Most other checks are specific to spherical surfaces (see option below).
     -sphere: Indicates that surfaces read are spherical.
              With this option you get the following output.
              - Absolute deviation between the distance (d) of each
                node from the surface's center and the estimated
                 radius (r). The distances, abs(d - r), are
                 computed and written to the file OUTPREF_Dist.1D.dset.
                The first column represents node index and the 
                second is the absolute distance. A colorized 
                version of the distances is written to the file 
                OUTPREF_Dist.1D.col (node index followed 
                by r g b values). A list of the 10 largest absolute
                distances is also output to the screen.
              - Also computed is the cosine of the angle between 
                 the normal at a node and the direction vector
                 formed by the center and that node. Since both vectors
                are normalized, the cosine of the angle is the dot product.
                On a sphere, the abs(dot product) should be 1 or pretty 
                close. Nodes where abs(dot product) < 0.9 are flagged as
                bad and written out to the file OUTPREF_BadNodes.1D.dset .
                The file OUTPREF_dotprod.1D.dset contains the dot product 
                values for all the nodes. The files with colorized results
                are OUTPREF_BadNodes.1D.col and OUTPREF_dotprod.1D.col .
                A list of the bad nodes is also output to the screen for
                convenience. You can use the 'j' option in SUMA to have
                the cross-hair go to a particular node. Use 'Alt+l' to
                have the surface rotate and place the cross-hair at the
                center of your screen.
              NOTE: For detecting topological problems with spherical
                surfaces, I find the dot product method to work best.
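               The dot-product test can be sketched as follows
               (hypothetical Python helper; SurfQual's own implementation
               is not shown in this help):

```python
import math

def sphere_check(center, nodes, normals, tol=0.9):
    # Flag node i as bad when |normal_i . unit(node_i - center)| < tol.
    # On a true sphere both vectors are parallel, so the value is near 1.
    bad = []
    for i, (p, n) in enumerate(zip(nodes, normals)):
        d = [p[k] - center[k] for k in range(3)]
        dlen = math.sqrt(sum(x * x for x in d))
        nlen = math.sqrt(sum(x * x for x in n))
        dot = sum(d[k] * n[k] for k in range(3)) / (dlen * nlen)
        if abs(dot) < tol:
            bad.append(i)
    return bad
```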
  Optional parameters:
     -prefix OUTPREF: Prefix of output files. If more than one surface
                       is entered, then the prefix will have _X added
                      to it, where X is a character from A to Z.
                      THIS PROGRAM WILL OVERWRITE EXISTING FILES.
                      Default prefix is the surface's label.

  Comments:
     - The colorized (.col) files can be loaded into SUMA (with the 'c'
     option). By focusing on the bright spots, you can find trouble spots
     which would otherwise be very difficult to locate.
     - You should also pay attention to the messages output when the 
      surfaces are being loaded, particularly warnings that edges (segments
      that join 2 nodes) are shared by more than 2 triangles. For a proper
     closed surface, every segment should be shared by 2 triangles. 
     For cut surfaces, segments belonging to 1 triangle only form
     the edge of that surface.
     - There are no utilities within SUMA to correct these defects.
     It is best to fix these problems with the surface creation
     software you are using.
     - Some warnings may be redundant. That should not hurt you.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
SurfSmooth
Usage:  SurfSmooth <-spec SpecFile> <-surf_A insurf> <-met method> 

   Some methods require additional options detailed below.
   I recommend using the -talk_suma option to watch the 
   progression of the smoothing in real-time in suma.

   Method specific options:
      LB_FEM: <-input inData.1D> <-fwhm f>
              This method is used to filter data
              on the surface.
      LM: [-kpb k] [-lm l m] [-surf_out surfname]
          This method is used to filter the surface's
          geometry (node coordinates).
      NN_geom: smooth by averaging coordinates of 
               nearest neighbors.
               This method causes shrinkage of surface
               and is meant for test purposes only.

   Common options:
      [-Niter N] [-output out.1D] [-h/-help] 
      [-add_index] [-ni_text|-ni_binary] [-talk_suma]


   Detailed usage:
      -spec SpecFile: Name of specfile containing surface of interest.
                      If the surface does not have a spec file, use the 
                      program quickspec to create one.
      -surf_A insurf: Name of surface of interest. 
                      NOTE: i_TYPE inSurf option is now obsolete.
      -met method: name of smoothing method to use. Choose from:
                 LB_FEM: The method by Chung et al. 03.
                         This method is used for filtering 
                         data on the surface, not for smoothing the
                         surface's geometry per se. See References below.
                 LM: The smoothing method proposed by G. Taubin 2000
                     This method is used for smoothing
                     a surface's geometry. See References below.
                 NN_geom: A simple nearest neighbor coordinate smoothing.
                          This interpolation method causes surface shrinkage
                          that might need to be corrected with the -match_*
                          options below. 

   Options for LB_FEM:
      -input inData.1D: file containing data (in 1D format)
                        Each column in inData.1D is processed separately.
                        The number of rows must equal the number of
                        nodes in the surface. You can select certain
                        columns using the [] notation adopted by AFNI's
                        programs.
      -fwhm f: Full Width at Half Maximum in surface coordinate units (usually mm)
               of an equivalent Gaussian filter had the surface been flat.
               With curved surfaces, the equation used to estimate FWHM is 
               an approximation. 
               Blurring on the surface depends on the geodesic rather 
               than the Euclidean distances. See Ref #1 for more details 
               on this parameter.

   Options for LM:
      -kpb k: Band pass frequency (default is 0.1).
              Values should be in the range 0 < k < 10.
      -lm l m: Lambda and Mu parameters. Sample values are:
               0.6307 and -.6732
      NOTE: -lm and -kpb options are mutually exclusive.
      -surf_out surfname: Writes the surface with smoothed coordinates
                          to disk. For SureFit and 1D formats, only the
                          coord file is written out.
      NOTE: -surf_out and -output are mutually exclusive.
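   For reference, the sample -lm values above are consistent with the
   default -kpb of 0.1 via Taubin's pass-band relation. A quick numeric
   check (illustrative only, not SurfSmooth code):

```python
# Taubin (Eurographics 2000) relates the lambda/mu pair to the
# pass-band frequency by k_pb = 1/lambda + 1/mu.
lam, mu = 0.6307, -0.6732     # the sample -lm values quoted above
kpb = 1.0 / lam + 1.0 / mu    # approximately 0.1, the default -kpb
```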

   Options for NN_geom:
      -match_size r: Adjust node coordinates of smoothed surface to 
                    approximate the original's size.
                   Node i on the filtered surface is repositioned such 
                   that |c i| = 1/N sum(|cr j|) where
                   c and cr are the centers of the smoothed and original
                   surfaces, respectively.
                   N is the number of nodes that are within r [surface 
                   coordinate units] along the surface (geodesic) from node i.
                   j is one of the nodes neighboring i.
      -match_vol tol: Adjust node coordinates of smoothed surface to 
                    approximate the original's volume.
                   Nodes on the filtered surface are repositioned such
                   that the volume of the filtered surface equals, 
                   within tolerance tol, that of the original surface. 
                   See option -vol in SurfaceMetrics for information about
                   and calculation of the volume of a closed surface.
      -match_area tol: Adjust node coordinates of smoothed surface to 
                    approximate the original's surface area.
                    Nodes on the filtered surface are repositioned such
                    that the surface area of the filtered surface equals, 
                   within tolerance tol, that of the original surface. 
      -match_sphere rad: Project nodes of smoothed surface to a sphere
                   of radius rad. Projection is carried out along the 
                   direction formed by the surface's center and the node.

   Common options:
      -Niter N: Number of smoothing iterations (default is 100)
                For practical reasons, this number must be a multiple of 2
          NOTE: For LB_FEM method, the number of iterations controls the
                iteration steps (dt in Ref #1).
                dt = fwhm*fwhm / (16*Niter*log(2));
                dt must satisfy conditions that depend on the internodal
                distance and the spatial derivatives of the signals being 
                filtered on the surface.
                As a rule of thumb, if increasing Niter does not alter
                the results then your choice is fine (smoothing has converged).
                For an example of the artifact caused by small Niter see:
          http://afni.nimh.nih.gov/sscc/staff/ziad/SUMA/SuSmArt/DSart.html
      -output out.1D: Name of output file. 
                      The default is inData_sm.1D with LB_FEM method
                      and NodeList_sm.1D with LM method.
      -add_index : Output the node index in the first column.
                   This is not done by default.
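   The dt relation quoted under -Niter can be evaluated directly; this is
   just arithmetic on the formula above, not SurfSmooth code:

```python
import math

def lb_fem_dt(fwhm, niter):
    """Integration step implied by dt = fwhm^2 / (16 * Niter * log(2))."""
    return fwhm * fwhm / (16.0 * niter * math.log(2))

# For the sample command below (-fwhm 8 -Niter 100):
dt = lb_fem_dt(8.0, 100)   # about 0.0577
```

   Per the rule of thumb above, if doubling Niter (which halves dt) does
   not change the filtered result, smoothing has converged.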

  SUMA communication options:
      -talk_suma: Send progress with each iteration to SUMA.
      -refresh_rate rps: Maximum number of updates to SUMA per second.
                         The default is the maximum speed.
      -send_kth kth: Send the kth element to SUMA (default is 1).
                     This allows you to cut down on the number of elements
                     being sent to SUMA.
      -sh <SUMAHOST>: Name (or IP address) of the computer running SUMA.
                      This parameter is optional; the default is 127.0.0.1 
      -ni_text: Use NI_TEXT_MODE for data transmission.
      -ni_binary: Use NI_BINARY_MODE for data transmission.
                  (default is ni_binary).
      -feed_afni: Send updates to AFNI via SUMA's talk.


  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

   Sample command lines for data smoothing:
      SurfSmooth  -spec quick.spec -surf_A NodeList.1D -met LB_FEM   \
                  -input in.1D -Niter 100 -fwhm 8 -add_index         \
                  -output in_sm8.1D 
         This command filters (on the surface) the data in in.1D
         and puts the output in in_sm8.1D with the first column 
         containing the node index and the second containing the 
         filtered version of in.1D.

         The surface used in this example had no spec file, so 
         a quick.spec was created using:
         quickspec -tn 1D NodeList.1D FaceSetList.1D 

         You can colorize the input and output data using ScaleToMap:
         ScaleToMap  -input in.1D 0 1 -cmap BGYR19       \
                      -clp MIN MAX > in.1D.col
         ScaleToMap  -input in_sm8.1D 0 1 -cmap BGYR19   \
                      -clp MIN MAX > in_sm8.1D.col

         For help on using ScaleToMap see ScaleToMap -help
         Note that the MIN MAX represent the minimum and maximum
         values in in.1D. You should keep them constant in both 
         commands in order to be able to compare the resultant colorfiles.
         You can import the .col files with the 'c' command in SUMA.

         You can send the data to SUMA with each iteration.
         To do so, start SUMA with these options:
         suma -spec quick.spec -niml &
         and add these options to SurfSmooth's command line above:
         -talk_suma -refresh_rate 5

   Sample command lines for surface smoothing:
      SurfSmooth  -spec quick.spec -surf_A NodeList.1D -met LM    \
                  -output NodeList_sm100.1D -Niter 100 -kpb 0.1   
         This command smoothes the surface's geometry. The smoothed
         node coordinates are written out to NodeList_sm100.1D. 

   Sample command for considerable surface smoothing and inflation
   back to original volume:
       SurfSmooth  -spec quick.spec -surf_A NodeList.1D -met NN_geom \
                   -output NodeList_inflated_mvol.1D -Niter 1500 \
                   -match_vol 0.01
   Sample command for considerable surface smoothing and inflation
   back to original area:
       SurfSmooth  -spec quick.spec -surf_A NodeList.1D -met NN_geom \
                   -output NodeList_inflated_marea.1D -Niter 1500 \
                   -match_area 0.01

   References: 
      (1) M.K. Chung et al.   Deformation-based surface morphometry
                              applied to gray matter deformation. 
                              Neuroimage 18 (2003) 198-213
          M.K. Chung   Statistical morphometry in computational
                       neuroanatomy. Ph.D. thesis, McGill Univ.,
                       Montreal, Canada
      (2) G. Taubin.       Mesh Signal Processing. 
                           Eurographics 2000.

   See Also:   
       ScaleToMap to colorize the output, however it is better
       to load surface datasets directly into SUMA and colorize
       them interactively.

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
SurfToSurf
Usage: SurfToSurf <-i_TYPE S1> [<-sv SV1>]
                  <-i_TYPE S2> [<-sv SV1>]
                  [<-prefix PREFIX>]
                  [<-output_params PARAM_LIST>]
                  [<-node_indices NODE_INDICES>]
                  [<-proj_dir PROJ_DIR>]
                  [<-data DATA>]
                  [<-node_debug NODE>]
                  [<-debug DBG_LEVEL>]
                  [-make_consistent]
 
  This program is used to interpolate data from one surface (S2)
 to another (S1), assuming the surfaces are quite similar in
 shape but have different meshes (non-isotopic).
 This is done by projecting each node (nj) of S1 along the normal
 at nj and finding the closest triangle t of S2 that is intersected
 by this projection. Projection is actually bidirectional.
 If such a triangle t is found, the nodes (of S2) forming it are 
 considered to be the neighbors of nj.
 Values (arbitrary data, or coordinates) at these neighboring nodes
 are then transferred to nj using barycentric interpolation or 
 nearest-node interpolation.
 Nodes whose projections fail to intersect triangles in S2 are given
 nonsensical values of -1 and 0.0 in the output.
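 The barycentric transfer described above can be sketched in Python. This
 is a hypothetical illustration of interpolation within one triangle
 (vertex positions and data values are invented), not SurfToSurf's code:

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c).
    p is assumed to lie in (or be projected into) the triangle's plane."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    wb = (d11 * d20 - d01 * d21) / denom   # weight of vertex b
    wc = (d00 * d21 - d01 * d20) / denom   # weight of vertex c
    return 1.0 - wb - wc, wb, wc

# Invented example: S1 node nj projects onto triangle t of S2 at point p;
# the data at t's three nodes is blended with the barycentric weights.
a, b, c = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
data_at_nodes = np.array([10.0, 20.0, 30.0])
w = barycentric_weights(np.array([0.25, 0.25, 0.]), a, b, c)
interp = float(np.dot(w, data_at_nodes))   # -> 17.5
```

 Nearest-node interpolation would instead copy the single value from the
 closest of the three vertices (roughly, the one with the largest weight).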

 Mandatory input:
  Two surfaces are required at input. See -i_TYPE options
  below for more information. 

 Optional input:
  -prefix PREFIX: Specify the prefix of the output file.
                  The output file is in 1D format at the moment.
                  Default is SurfToSurf
  -output_params PARAM_LIST: Specify the list of mapping
                             parameters to include in output
     PARAM_LIST can have any or all of the following:
        NearestTriangleNodes: Use Barycentric interpolation (default)
                              and output indices of 3 nodes from S2
                              that neighbor nj of S1
        NearestNode: Use only the closest node from S2 (of the three 
                     closest neighbors) to nj of S1 for interpolation
                     and output the index of that closest node.
        NearestTriangle: Output index of triangle t from S2 that
                         is the closest to nj along its projection
                         direction. 
        DistanceToSurf: Output distance (signed) from nj, along 
                        projection direction to S2.
                        This is the parameter output by the precursor
                        program CompareSurfaces
        ProjectionOnSurf: Output coordinates of projection of nj onto 
                          triangle t of S2.
        Data: Output the data from S2, interpolated onto S1
              If no data is specified via the -data option, then
              the XYZ coordinates of SO2's nodes are considered
              the data.
  -data DATA: 1D file containing data to be interpolated.
              Each row i contains data for node i of S2.
              You must have one row for each node making up S2.
              In other terms, if S2 has N nodes, you need N rows
              in DATA. 
              Each column of DATA is processed separately (think
              sub-bricks, and spatial interpolation).
              You can use [] selectors to choose a subset 
              of columns.
              If -data option is not specified and Data is in PARAM_LIST
              then the XYZ coordinates of SO2's nodes are the data.
  -node_indices NODE_INDICES: 1D file containing the indices of S1
                              to consider. The default is all of the
                              nodes in S1. Only one column of values is
                              allowed here, use [] selectors to choose
                              the column of node indices if NODE_INDICES
                              has multiple columns in it.
  -proj_dir PROJ_DIR: 1D file containing projection directions to use
                      instead of the node normals of S1.
                      Each row should contain one direction for each
                      of the nodes forming S1.
  -make_consistent: Force a consistency check and correct triangle 
                    orientation of S1 if needed. Triangles are also
                    oriented such that the majority of normals point
                    away from center of surface.
                    The program might not succeed in repairing some
                    meshes with inconsistent orientation.

 Specifying input surfaces using -i_TYPE options: 
    -i_TYPE inSurf specifies the input surface,
            TYPE is one of the following:
       fs: FreeSurfer surface. 
           If surface name has .asc it is assumed to be
           in ASCII format. Otherwise it is assumed to be
           in BINARY_BE (Big Endian) format.
           Patches in Binary format cannot be read at the moment.
       sf: SureFit surface. 
           You must specify the .coord followed by the .topo file.
       vec (or 1D): Simple ascii matrix format. 
            You must specify the coord (NodeList) file followed by 
            the topo (FaceSetList) file.
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
            Only vertex and triangulation info is preserved.
       bv: BrainVoyager format. 
           Only vertex and triangulation info is preserved.
       dx: OpenDX ascii mesh format.
           Only vertex and triangulation info is preserved.
           Requires presence of 3 objects, the one of class 
           'field' should contain 2 components 'positions'
           and 'connections' that point to the two objects
           containing node coordinates and topology, respectively.
 Specifying surfaces using -t* options: 
   -tn TYPE NAME: specify surface type and name.
                  See below for help on the parameters.
   -tsn TYPE STATE NAME: specify surface type state and name.
        TYPE: Choose from the following (case sensitive):
           1D: 1D format
           FS: FreeSurfer ascii format
           PLY: ply format
           SF: Caret/SureFit format
           BV: BrainVoyager format
        NAME: Name of surface file. 
           For SF and 1D formats, NAME is composed of two names
           the coord file followed by the topo file
        STATE: State of the surface.
           Default is S1, S2.... for each surface.
 Specifying a Surface Volume:
    -sv SurfaceVolume [VolParam for sf surfaces]
        If you supply a surface volume, the coordinates of the input surface
         are modified to SUMA's convention and aligned with SurfaceVolume.
        You must also specify a VolParam file for SureFit surfaces.
 Specifying a surface specification (spec) file:
    -spec SPEC: specify the name of the SPEC file.
 Specifying a surface using -surf_? method:
    -surf_A SURFACE: specify the name of the first
            surface to load. If the program requires
            or allows multiple surfaces, use -surf_B
            ... -surf_Z .
            You need not use _A if only one surface is
            expected.
            SURFACE is the name of the surface as specified
            in the SPEC file. The use of -surf_ option 
            requires the use of -spec option.
 Specifying output surfaces using -o_TYPE options: 
    -o_TYPE outSurf specifies the output surface, 
            TYPE is one of the following:
       fs: FreeSurfer ascii surface. 
        fsp: FreeSurfer ascii patch surface. 
            In addition to outSurf, you need to specify
             the name of the parent surface for the patch
             using the -ipar_TYPE option.
             This option is only for ConvertSurface.
       sf: SureFit surface. 
           For most programs, you are expected to specify prefix:
           i.e. -o_sf brain. In some programs, you are allowed to 
           specify both .coord and .topo file names: 
           i.e. -o_sf XYZ.coord TRI.topo
           The program will determine your choice by examining 
           the first character of the second parameter following
           -o_sf. If that character is a '-' then you have supplied
           a prefix and the program will generate the coord and topo names.
       vec (or 1D): Simple ascii matrix format. 
            For most programs, you are expected to specify prefix:
            i.e. -o_1D brain. In some programs, you are allowed to 
            specify both coord and topo file names: 
            i.e. -o_1D brain.1D.coord brain.1D.topo
            coord contains 3 floats per line, representing 
            X Y Z vertex coordinates.
            topo contains 3 ints per line, representing 
            v1 v2 v3 triangle vertices.
       ply: PLY format, ascii or binary.
  Common Debugging Options:
   [-trace]: Turns on In/Out debug and Memory tracing.
             For speeding up the tracing log, I recommend 
             you redirect stdout to a file when using this option.
             For example, if you were running suma you would use:
             suma -spec lh.spec -sv ... > TraceFile
             This option replaces the old -iodbg and -memdbg.
   [-TRACE]: Turns on extreme tracing.
   [-nomall]: Turn off memory tracing.
   [-yesmall]: Turn on memory tracing (default).
   [-novolreg]: Ignore any Volreg or Tagalign transformations
                present in the Surface Volume.
  NOTE: For programs that output results to stdout
    (that is to your shell/screen), the debugging info
    might get mixed up with your results.
 

++ SUMA version 2004_12_29

CVS tag:
   SUMA_2005_04_29_1733

Compile Date:
   Aug 25 2005

       Ziad S. Saad SSCC/NIMH/NIH ziad@nih.gov     
       Shruti Japee LBC/NIMH/NIH  shruti@codon.nih.gov 
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
tfim
MCW TFIM: t-tests on sets of functional images, by RW Cox

Usage 1: tfim [options] -set1 image_files ... -set2 image_files ...
Usage 2: tfim [options] -base1 bval -set2 image_files ...

In usage 1, the collection of images files after '-set1' and the
collection after '-set2' are averaged and differenced, and the
difference is tested for significance with a 2 sample Student t-test.

In usage 2, the collection of image files after '-set2' is averaged
and then has the constant numerical value 'bval' subtracted, and the
difference is tested for significance with a 1 sample Student t-test.

N.B.: The input images can be in the usual 'short' or 'byte'
      formats, or in the floating point 'flim' format.
N.B.: If, in either set of images, a given pixel has zero variance
      (i.e., is constant), then the t-test is not performed.
      In that pixel, the .tspm file will be zero.

Options are:

 -prefix pname: 'pname' is used as the prefix for the output
                  filenames.  The output image files are
                   + pname.diff = average of set2 minus average of set1
                                  (or minus 'bval')
                   + pname.tspm = t-statistic of difference
                  Output images are in the 'flim' (floating pt. image)
                  format, and may be converted to 16 bit shorts using
                  the program 'ftosh'.
              *** The default 'pname' is 'tfim', if -prefix isn't used.
 -pthresh pval: 'pval' is a numeric value between 0 and 1, giving
                  the significance level (per voxel) to threshold the
                  output with; voxels with (2-sided) t-statistic
                  less significant than 'pval' will have their diff
                  output zeroed.
              *** The default is no threshold, if -pthresh isn't used.
 -eqcorr dval:  If present, this option means to write out the file
                   pname.corr = equivalent correlation statistic
                              =  t/sqrt(dof+t^2)
                  The number 'dval' is the value to use for 'dof' if
                  dval is positive.  This would typically be the total
                  number of data images used in forming the image sets,
                  if the image sets are from sfim or fim.
                  If dval is zero, then dof is computed from the number
                  of images in -set1 and -set2; if these are averages
                  from program sfim, then dof will be smallish, which in
                  turn means that significant corr values will be higher
                  than you may be used to from using program fim.
              *** The default is not to write, if -eqcorr isn't used.
 -paired:       If present, this means that -set1 and -set2 should be
                  compared using a paired sample t-test.  This option is
                  illegal with the -base1 option.  The number of samples
                  in the two sets of images must be equal.
                  [This test is implemented by subtracting -set1 images
                   from the -set2 images, then testing as in '-base1 0'.]
              *** The default is to do an unpaired test, if -paired isn't
                  used.  In that case, -set1 and -set2 don't need to have
                  the same number of images.
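 The equivalent correlation defined under -eqcorr is a simple transform
 of t; a quick check of the formula with illustrative values:

```python
import math

def eq_corr(t, dof):
    """Equivalent correlation r = t / sqrt(dof + t^2), per -eqcorr above."""
    return t / math.sqrt(dof + t * t)

r = eq_corr(3.0, 61.0)   # e.g. a t of 3 with 61 degrees of freedom
```

 Note how a larger dof shrinks r for the same t, which is why averages
 from sfim (smallish dof) yield larger significant corr values than fim.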
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
to3d
to3d: 2D slices into 3D datasets for AFNI, by RW Cox
Usage: to3d [options] image_files ...
       Creates 3D datasets for use with AFNI from 2D image files

The available options are
  -help   show this message
  -'type' declare images to contain data of a given type
          where 'type' is chosen from the following options:
       ANATOMICAL TYPES
         spgr == Spoiled GRASS
          fse == Fast Spin Echo
         epan == Echo Planar
         anat == MRI Anatomy
           ct == CT Scan
         spct == SPECT Anatomy
          pet == PET Anatomy
          mra == MR Angiography
         bmap == B-field Map
         diff == Diffusion Map
         omri == Other MRI
         abuc == Anat Bucket
       FUNCTIONAL TYPES
          fim == Intensity
         fith == Inten+Thr
         fico == Inten+Cor
         fitt == Inten+Ttest
         fift == Inten+Ftest
         fizt == Inten+Ztest
         fict == Inten+ChiSq
         fibt == Inten+Beta
         fibn == Inten+Binom
         figt == Inten+Gamma
         fipt == Inten+Poisson
         fbuc == Func-Bucket
                 [for paired (+) types above, images are fim first,]
                 [then followed by the threshold (etc.) image files]

  -statpar value value ... value [* NEW IN 1996 *]
     This option is used to supply the auxiliary statistical parameters
     needed for certain dataset types (e.g., 'fico' and 'fitt').  For
     example, a correlation coefficient computed using program 'fim2'
     from 64 images, with 1 ideal, and with 2 orts could be specified with
       -statpar 64 1 2

  -prefix  name      will write 3D dataset using prefix 'name'
  -session name      will write 3D dataset into session directory 'name'
  -geomparent fname  will read geometry data from dataset file 'fname'
                       N.B.: geometry data does NOT include time-dependence
  -anatparent fname  will take anatomy parent from dataset file 'fname'

  -nosave  will suppress autosave of 3D dataset, which normally occurs
           when the command line options supply all needed data correctly

  -view type [* NEW IN 1996 *]
    Will set the dataset's viewing coordinates to 'type', which
    must be one of these strings:  orig acpc tlrc

TIME DEPENDENT DATASETS [* NEW IN 1996 *]
  -time:zt nz nt TR tpattern  OR  -time:tz nt nz TR tpattern

    These options are used to specify a time dependent dataset.
    '-time:zt' is used when the slices are input in the order
               z-axis first, then t-axis.
    '-time:tz' is used when the slices are input in the order
               t-axis first, then z-axis.

    nz  =  number of points in the z-direction (minimum 1)
    nt  =  number of points in the t-direction
            (thus exactly nt * nz slices must be read in)
    TR  =  repetition interval between acquisitions of the
            same slice, in milliseconds (or other units, as given below)

    tpattern = Code word that identifies how the slices (z-direction)
               were gathered in time.  The values that can be used:

       alt+z = altplus   = alternating in the plus direction
       alt+z2            = alternating, starting at slice #1
       alt-z = altminus  = alternating in the minus direction
       alt-z2            = alternating, starting at slice #nz-2
       seq+z = seqplus   = sequential in the plus direction
       seq-z = seqminus  = sequential in the minus direction
       zero  = simult    = simultaneous acquisition
               @filename = read temporal offsets from 'filename'

    For example, if nz = 5 and TR = 1000, then the inter-slice
    time is taken to be dt = TR/nz = 200.  In this case, the
    slices are offset in time by the following amounts:

                    S L I C E   N U M B E R
      tpattern        0    1    2    3    4  Comment
      ----------   ---- ---- ---- ---- ----  -------------------------------
      altplus         0  600  200  800  400  Alternating in the +z direction
      alt+z2        400    0  600  200  800  Alternating, but starting at #1
      altminus      400  800  200  600    0  Alternating in the -z direction
      alt-z2        800  200  600    0  400  Alternating, starting at #nz-2 
      seqplus         0  200  400  600  800  Sequential  in the +z direction
      seqminus      800  600  400  200    0  Sequential  in the -z direction
      simult          0    0    0    0    0  All slices acquired at once

    If @filename is used for tpattern, then nz ASCII-formatted numbers are
    read from the file.  These are used to indicate the time offsets (in ms)
    for each slice. For example, if 'filename' contains
       0 600 200 800 400
    then this is equivalent to 'altplus' in the above example.
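    The tabulated offsets can be reconstructed with a short sketch. This
    mimics the table above for a few tpattern codes (to3d's internal
    implementation may differ; alt+z2 and alt-z2 are omitted here):

```python
def slice_offsets(nz, tr, tpattern):
    """Per-slice acquisition time offsets, reproducing the table above."""
    dt = tr / nz
    if tpattern == "altplus":
        order = list(range(0, nz, 2)) + list(range(1, nz, 2))
    elif tpattern == "altminus":
        order = [nz - 1 - s for s in
                 list(range(0, nz, 2)) + list(range(1, nz, 2))]
    elif tpattern == "seqplus":
        order = list(range(nz))
    elif tpattern == "seqminus":
        order = list(range(nz - 1, -1, -1))
    elif tpattern == "simult":
        return [0.0] * nz
    else:
        raise ValueError("unsupported tpattern: " + tpattern)
    offsets = [0.0] * nz
    for k, sl in enumerate(order):
        offsets[sl] = k * dt      # slice sl is acquired k-th
    return offsets

# nz = 5, TR = 1000 as in the example above:
# slice_offsets(5, 1000, "altplus") -> [0.0, 600.0, 200.0, 800.0, 400.0]
```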

    Notes:
      * Time-dependent functional datasets are not yet supported by
          to3d or any other AFNI package software.  For many users,
          the proper dataset type for these datasets is '-epan'.
      * Time-dependent datasets with more than one value per time point
          (e.g., 'fith', 'fico', 'fitt') are also not allowed by to3d.
      * If you use 'abut' to fill in gaps in the data and/or to
          subdivide the data slices, you will have to use the @filename
          form for tpattern, unless 'simult' or 'zero' is acceptable.
      * At this time, the value of 'tpattern' is not actually used in
          any AFNI program.  The values are stored in the dataset
          .HEAD files, and will be used in the future.
      * The values set on the command line can't be altered interactively.
      * The units of TR can be specified by the command line options below:
            -t=ms or -t=msec  -->  milliseconds (the default)
            -t=s  or -t=sec   -->  seconds
            -t=Hz or -t=Hertz -->  Hertz (for chemical shift images?)
          Alternatively, the units symbol ('ms', 'msec', 's', 'sec',
            'Hz', or 'Hertz') may be attached to TR in the '-time:' option,
            as in '-time:zt 16 64 4.0sec alt+z'
 ****** 15 Aug 2005 ******
      * Millisecond time units are no longer stored in AFNI dataset
          header files.  For backwards compatibility, the default unit
          of TR (i.e., without a suffix 's') is still milliseconds, but
          this value will be converted to seconds when the dataset is
          written to disk.  Any old AFNI datasets that have millisecond
          units for TR will be read in to all AFNI programs with the TR
          converted to seconds.

  -Torg ttt = set time origin of dataset to 'ttt' [default=0.0]

COMMAND LINE GEOMETRY SPECIFICATION [* NEW IN 1996 *]
   -xFOV   <DIMEN1><DIREC1>-<DIMEN2><DIREC2>
     or       or
   -xSLAB  <DIMEN1><DIREC1>-<DIREC2>

   (Similar -yFOV, -ySLAB, -zFOV and -zSLAB options are also present.)

 These options specify the size and orientation of the x-axis extent
 of the dataset.  <DIMEN#> means a dimension (in mm); <DIREC> is
 an anatomical direction code, chosen from
      A (Anterior)    P (Posterior)    L (Left)
      I (Inferior)    S (Superior)     R (Right)
 Thus, 20A-30P means that the x-axis of the input images runs from
 20 mm Anterior to 30 mm Posterior.  For convenience, 20A-20P can be
 abbreviated as 20A-P.

 -xFOV  is used to mean that the distances are from edge-to-edge of
          the outermost voxels in the x-direction.
 -xSLAB is used to mean that the distances are from center-to-center
          of the outermost voxels in the x-direction.

 Under most circumstances, -xFOV , -yFOV , and -zSLAB would be the
 correct combination of geometry specifiers to use.  For example,
 a common type of run at MCW would be entered as
    -xFOV 120L-R -yFOV 120A-P -zSLAB 60S-50I

Z-AXIS SLICE OFFSET ONLY
 -zorigin distz  Puts the center of the 1st slice at the given
                 offset ('distz' in mm).  This distance
                 is in the direction given by the corresponding
                 letter in the -orient code.  For example,
                   -orient RAI -zorigin 30
                 would set the center of the first slice at
                 30 mm Inferior.
    N.B.: This option has no effect if the FOV or SLAB options
          described above are used.

INPUT IMAGE FORMATS [* SIGNIFICANTLY CHANGED IN 1996 *]
  Image files may be single images of unsigned bytes or signed shorts
  (64x64, 128x128, 256x256, 512x512, or 1024x1024) or may be grouped
  images (that is, 3- or 4-dimensional blocks of data).
  In the grouped case, the string for the command line file spec is like

    3D:hglobal:himage:nx:ny:nz:fname   [16 bit input]
    3Ds:hglobal:himage:nx:ny:nz:fname  [16 bit input, swapped bytes]
    3Db:hglobal:himage:nx:ny:nz:fname  [ 8 bit input]
    3Di:hglobal:himage:nx:ny:nz:fname  [32 bit input]
    3Df:hglobal:himage:nx:ny:nz:fname  [floating point input]
    3Dc:hglobal:himage:nx:ny:nz:fname  [complex input]
    3Dd:hglobal:himage:nx:ny:nz:fname  [double input]

  where '3D:' or '3Ds:' signals this is a 3D input file of signed shorts
        '3Db:'          signals this is a 3D input file of unsigned bytes
        '3Di:'          signals this is a 3D input file of signed ints
        '3Df:'          signals this is a 3D input file of floats
        '3Dc:'          signals this is a 3D input file of complex numbers
                         (real and imaginary pairs of floats)
        '3Dd:'          signals this is a 3D input file of double numbers
                         (will be converted to floats)
        hglobal = number of bytes to skip at start of whole file
        himage  = number of bytes to skip at start of each 2D image
        nx      = x dimension of each 2D image in the file
        ny      = y dimension of each 2D image in the file
        nz      = number of 2D images in the file
        fname   = actual filename on disk to read

  * The ':' separators are required.  The k-th image starts at
      BYTE offset hglobal+(k+1)*himage+vs*k*nx*ny in file 'fname'
      for k=0,1,...,nz-1.
  * Here, vs=voxel length=1 for bytes, 2 for shorts, 4 for ints and floats,
      and 8 for complex numbers.
  * As a special case, hglobal = -1 means read data starting at
      offset len-nz*(vs*nx*ny+himage), where len=file size in bytes.
      (That is, to read the needed data from the END of the file.)
  * Note that there is no provision for skips between data rows inside
      a 2D slice, only for skips between 2D slice images.
  * The int, float, and complex formats presume that the data in
      the image file are in the 'native' format for this CPU; that is,
      there is no provision for data conversion (unlike the 3Ds: format).
  * Double input will be converted to floats (or whatever -datum is)
      since AFNI doesn't support double precision datasets.
  * Whether the 2D image data is interpreted as a 3D block or a 3D+time
      block depends on the rest of the command line parameters.  The
      various 3D: input formats are just ways of inputting multiple 2D
      slices from a single file.
  * SPECIAL CASE: If fname is ALLZERO, then this means not to read
      data from disk, but instead to create nz nx*ny images filled
      with zeros.  One application of this is to make it easy to create
      a dataset of a specified geometry for use with other programs.
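The byte-offset rule in the bullets above is easy to get wrong by hand; here is a minimal Python sketch of it (the helper name is hypothetical, not part of AFNI):

```python
def image_offset(k, hglobal, himage, nx, ny, vs):
    """Byte offset of the k-th 2D image (k = 0..nz-1) in a 3D: file.

    'hglobal' bytes are skipped once at the start of the file, and
    'himage' bytes are skipped before each 2D image of nx*ny voxels,
    each voxel being 'vs' bytes long.
    """
    return hglobal + (k + 1) * himage + vs * k * nx * ny

# 16-bit 64x64 slices (vs=2) with no headers: slice 3 starts at
# 2 * 3 * 64 * 64 = 24576 bytes into the file.
print(image_offset(3, 0, 0, 64, 64, 2))
```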

The 'raw pgm' image format is also supported; it reads data into 'byte' images.

* ANALYZE (TM) .hdr/.img files can now be read - give the .hdr filename on
  the command line.  The program will detect if byte-swapping is needed on
  these images, and can also set the voxel grid sizes from the first .hdr file.
  If the 'funused1' field in the .hdr is positive, it will be used to scale the
  input values.  If the environment variable AFNI_ANALYZE_FLOATIZE is YES, then
  .img files will be converted to floats on input.

* Siemens .ima image files can now be read.  The program will detect if
  byte-swapping is needed on these images, and can also set voxel grid
  sizes and orientations (correctly, I hope).
* Some Siemens .ima files seem to have their EPI slices stored in
  spatial order, and some in acquisition (interleaved) order.  This
  program doesn't try to figure this out.  You can use the command
  line option '-sinter' to tell the program to assume that the images
  in a single .ima file are interleaved; for example, if there are
  7 images in a file, then without -sinter, the program will assume
  their order is '0 1 2 3 4 5 6'; with -sinter, the program will
  assume their order is '0 2 4 6 1 3 5' (here, the number refers
  to the slice location in space).
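The '-sinter' ordering described above can be sketched as follows, assuming (per the example) that even spatial locations come first in the file and odd ones second:

```python
def sinter_order(nz):
    """Spatial slice locations, in file order, for an interleaved
    .ima file: even locations first, then odd ones (this is the
    assumption that the -sinter option encodes)."""
    return list(range(0, nz, 2)) + list(range(1, nz, 2))

print(sinter_order(7))  # [0, 2, 4, 6, 1, 3, 5]
```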

* GEMS I.* (IMGF) 16-bit files can now be read. The program will detect
  if byte-swapping is needed on these images, and can also set voxel
  grid sizes and orientations.  It can also detect the TR in the
  image header.  If you wish to rely on this TR, you can set TR=0
  in the -time:zt or -time:tz option.
* If you use the image header's TR and also use @filename for the
  tpattern, then the values in the tpattern file should be fractions
  of the true TR; they will be multiplied by the true TR once it is
  read from the image header.

 NOTES:
  * Not all AFNI programs support all datum types.  Shorts and
      floats are safest. (See the '-datum' option below.)
  * If '-datum short' is used or implied, then int, float, and complex
      data will be scaled to fit into a 16 bit integer.  If the '-gsfac'
      option below is NOT used, then each slice will be SEPARATELY
      scaled according to the following choice:
      (a) If the slice values all fall in the range -32767 .. 32767,
          then no scaling is performed.
      (b) Otherwise, the image values are scaled to lie in the range
          0 .. 10000 (original slice min -> 0, original max -> 10000).
      This latter option is almost surely not what you want!  Therefore,
      if you use the 3Di:, 3Df:, or 3Dc: input methods and store the
      data as shorts, I suggest you supply a global scaling factor.
      Similar remarks apply to '-datum byte' scaling, with even more force.
  * To3d now incorporates POSIX filename 'globbing', which means that
      you can input filenames using 'escaped wildcards', and then to3d
      will internally do the expansion to the list of files.  This is
      desirable because some systems limit the number of command-line
      arguments to a program, and you may wish to input more slice
      files than that limit allows.  For example,
          to3d exp.?.*
      might overflow the system command line limitations.  The way to do
      this using internal globbing would be
          to3d exp.\?.\*
      where the \ characters indicate to pass the wildcards ? and *
      through to the program, rather than expand them in the shell.
      (a) Note that if you choose to use this feature, ALL wildcards in
          a filename must be escaped with \ or NONE must be escaped.
      (b) Using the C shell, it is possible to turn off shell globbing
          by using the command 'set noglob' -- if you do this, then you
          do not need to use the \ character to escape the wildcards.
      (c) Internal globbing of 3D: file specifiers is supported in to3d.
          For example, '3D:0:0:64:64:100:sl.\*' could be used to input
          a series of 64x64x100 files with names 'sl.01', 'sl.02' ....
          This type of expansion is specific to to3d; the shell will not
          properly expand such 3D: file specifications.
      (d) In the C shell (csh or tcsh), you can use single quotes
          to prevent shell expansion of the wildcards, as in the command
              to3d '3D:0:0:64:64:100:sl.*'
    The globbing code is adapted from software developed by the
    University of California, Berkeley, and is copyrighted by the
    Regents of the University of California (see file mcw_glob.c).

RGB datasets [Apr 2002]
-----------------------
You can now create RGB-valued datasets.  Each voxel contains 3 byte values
ranging from 0..255.  RGB values may be input to to3d in one of several ways:
 * Using raw PPM formatted 2D image files.
 * Using JPEG formatted 2D files.
 * Using TIFF, BMP, GIF, PNG formatted 2D files [if netpbm is installed].
 * Using the 3Dr: input format, analogous to 3Df:, etc., described above.
RGB datasets can be created as functional FIM datasets, or as anatomical
datasets:
 * RGB fim overlays are transparent in AFNI only where all three
    bytes are zero - that is, you can't overlay solid black.
 * At present, there is limited support for RGB datasets.
    About the only thing you can do is display them in 2D slice
    viewers in AFNI.
You can also create RGB-valued datasets using program 3dThreetoRGB.

Other Data Options
------------------
  -2swap
     This option will force all input 2 byte images to be byte-swapped
     after they are read in.
  -4swap
     This option will force all input 4 byte images to be byte-swapped
     after they are read in.
  -8swap
     This option will force all input 8 byte images to be byte-swapped
     after they are read in.
  BUT PLEASE NOTE:
     Input images that are auto-detected to need byte-swapping
     (GEMS I.*, Siemens *.ima, ANALYZE *.img, and 3Ds: files)
     will NOT be swapped again by one of the above options.
     If you want to swap them again for some bizarre reason,
     you'll have to use the 'Byte Swap' button on the GUI.
     That is, -2swap/-4swap will swap bytes on input files only
     if they haven't already been swapped by the image input
     function.

  -zpad N   OR
  -zpad Nmm 
     This option tells to3d to write 'N' slices of all zeros on each side
     in the z-direction.  This will make the dataset 'fatter', but make it
     simpler to align with datasets from other scanning sessions.  This same
     function can be accomplished later using program 3dZeropad.
   N.B.: The zero slices will NOT be visible in the image viewer in to3d, but
          will be visible when you use AFNI to look at the dataset.
   N.B.: If 'mm' follows the integer N, then the padding is measured in mm.
          The actual number of slices of padding will be rounded up.  So if
          the slice thickness is 5 mm, then '-zpad 16mm' would be the equivalent
          of '-zpad 4' -- that is, 4 slices on each z-face of the volume.
   N.B.: If the geometry parent dataset was created with -zpad, the spatial
          location (origin) of the slices is set using the geometry dataset's
          origin BEFORE the padding slices were added.  This is correct, since
          you need to set the origin on the current dataset as if the padding
          slices were not present.
   N.B.: Unlike the '-zpad' option to 3drotate and 3dvolreg, this adds slices
          only in the z-direction.
   N.B.: You can set the environment variable 'AFNI_TO3D_ZPAD' to provide a
          default for this option.

  -gsfac value
     will scale each input slice by 'value'.  For example,
     '-gsfac 0.31830989' will scale by 1/Pi (approximately).
     This option only has meaning if one of '-datum short' or
     '-datum byte' is used or implied.  Otherwise, it is ignored.

  -datum type
     will set the voxel data to be stored as 'type', which is currently
     allowed to be short, float, byte, or complex.
     If -datum is not used, then the datum type of the first input image
     will determine what is used.  In that case, the first input image will
     determine the type as follows:
        byte       --> byte
        short      --> short
        int, float --> float
        complex    --> complex
     If -datum IS specified, then all input images will be converted
     to the desired type.  Note that the list of allowed types may
     grow in the future, so you should not rely on the automatic
     conversion scheme.  Also note that floating point datasets may
     not be portable between CPU architectures.

  -nofloatscan
     tells to3d NOT to scan input float and complex data files for
     illegal values - the default is to scan and replace illegal
     floating point values with zeros (cf. program float_scan).

  -in:1
     Input of huge 3D: files (with all the data from a 3D+time run, say)
     can cause to3d to fail from lack of memory.  The reason is that
     the images from a file are all read into RAM at once, and then
     are scaled, converted, etc., as needed, then put into the final
     dataset brick.  This switch will cause the images from a 3D: file
     to be read and processed one slice at a time, which will lower the
     amount of memory needed.  The penalty is somewhat more I/O overhead.

NEW IN 1997:
  -orient code
     Tells the orientation of the 3D volumes.  The code must be 3 letters,
     one each from the pairs {R,L} {A,P} {I,S}.  The first letter gives
     the orientation of the x-axis, the second the orientation of the
     y-axis, the third the z-axis:
        R = right-to-left         L = left-to-right
        A = anterior-to-posterior P = posterior-to-anterior
        I = inferior-to-superior  S = superior-to-inferior
     Note that the -xFOV, -zSLAB constructions can convey this information.

NEW IN 2001:
  -skip_outliers
     If present, this tells the program to skip the outlier check that is
     automatically performed for 3D+time datasets.  You can also turn this
     feature off by setting the environment variable AFNI_TO3D_OUTLIERS
     to "No".
  -text_outliers
    If present, tells the program to only print out the outlier check
     results in text form, not graph them.  You can make this the default
     by setting the environment variable AFNI_TO3D_OUTLIERS to "Text".
    N.B.: If to3d is run in batch mode, then no graph can be produced.
          Thus, this option only has meaning when to3d is run with the
          interactive graphical user interface.
  -save_outliers fname
    Tells the program to save the outliers count into a 1D file with
    name 'fname'.  You could graph this file later with the command
       1dplot -one fname
    If this option is used, the outlier count will be saved even if
    nothing appears 'suspicious' (whatever that means).
  NOTES on outliers:
    * See '3dToutcount -help' for a description of how outliers are
       defined.
    * The outlier count is not done if the input images are shorts
       and there is a significant (> 1%) number of negative inputs.
    * There must be at least 6 time points for the outlier count to
       be carried out.

OPTIONS THAT AFFECT THE X11 IMAGE DISPLAY
   -gamma gg    the gamma correction factor for the
                  monitor is 'gg' (default gg is 1.0; greater than
                  1.0 makes the image contrast larger -- this may
                  also be adjusted interactively)
   -ncolors nn  use 'nn' gray levels for the image
                  displays (default is 80)
   -xtwarns     turn on display of Xt warning messages
This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
@TTxform_anat
A script to transform an anatomical dataset
to match a template in TLRC space. 
Usage: @TTxform_anat [options] <-base template> <-input anat>
Mandatory parameters:
   -base template :  Skull-stripped volume in TLRC space (+tlrc)
   -input anat    :  Original (with skull) anatomical volume (+orig)
Optional parameters:
   -no_ss         :  Do not strip skull of input data set
                     (because skull has already been removed
                      or because template still has the skull)
   -keep_view     :  Do not mark output dataset as +tlrc
   -pad_base  MM  :  Pad the base dset by MM mm in each direction.
                     That is needed to make sure that datasets
                     requiring wild rotations do not get cropped.
                     Default is MM = 30
   -verb          :  Yakiti yak yak

Example:
@TTxform_anat -base N27_SurfVol_NoSkull+tlrc. -input DemoSubj_spgrsa+orig.

This page auto-generated on Thu Aug 25 16:49:43 EDT 2005
@UpdateAfni
Usage: @UpdateAfni
Updates AFNI on your computer using wget

If you are using the program for the first time,
you must add some info about your computer into the script.
You can easily do so by modifying the template in the block SETDESTIN.

IMPORTANT: Rename this script once you modify it.  Otherwise,
it will get overwritten whenever you update your AFNI distribution.

Before the update begins, executables from the current version
are copied into the $localBIN.bak directory.

For more info, see:
http://afni.nimh.nih.gov/~cox/afni_wget.html

Ziad Saad (ziad@nih.gov)
LBC/NIMH/ National Institutes of Health, Bethesda Maryland
This page auto-generated on Thu Aug 25 16:49:44 EDT 2005
Vecwarp
Usage: Vecwarp [options]
Transforms (warps) a list of 3-vectors into another list of 3-vectors
according to the options.  Error messages, warnings, and informational
messages are written to stderr.  If a fatal error occurs, the program
exits with status 1; otherwise, it exits with status 0.

OPTIONS:
 -apar aaa   = Use the AFNI dataset 'aaa' as the source of the
               transformation; this dataset must be in +acpc
               or +tlrc coordinates, and must contain the
               attributes WARP_TYPE and WARP_DATA which describe
               the forward transformation from +orig coordinates
               to the 'aaa' coordinate system.
             N.B.: The +orig version of this dataset must also be
                   readable, since it is also needed when translating
                   vectors between SureFit and AFNI coordinates.
                   Only the .HEAD files are actually used.

 -matvec mmm = Read an affine transformation matrix-vector from file
               'mmm', which must be in the format
                   u11 u12 u13 v1
                   u21 u22 u23 v2
                   u31 u32 u33 v3
               where each 'uij' and 'vi' is a number.  The forward
               transformation is defined as
                   [ xout ]   [ u11 u12 u13 ] [ xin ]   [ v1 ]
                   [ yout ] = [ u21 u22 u23 ] [ yin ] + [ v2 ]
                   [ zout ]   [ u31 u32 u33 ] [ zin ]   [ v3 ]

 Exactly one of -apar or -matvec must be used to specify the
 transformation.
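The forward -matvec transformation above is a plain 3x3 matrix multiply plus a shift; a minimal sketch in Python (the function name is hypothetical, and -backward corresponds to applying the inverse map):

```python
def forward(u, v, xin):
    """Forward affine map [xout] = U [xin] + v, where 'u' is a 3x3
    matrix given as a list of rows and 'v' is a 3-vector."""
    return [sum(u[i][j] * xin[j] for j in range(3)) + v[i]
            for i in range(3)]

# Identity matrix plus a pure translation by v:
u = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
v = [1.0, 2.0, 3.0]
print(forward(u, v, [4.0, 5.0, 6.0]))  # [5.0, 7.0, 9.0]
```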

 -forward    = -forward means to apply the forward transformation;
   *OR*        -backward means to apply the backward transformation
 -backward     * For example, if the transformation is specified by
                  '-apar fred+tlrc', then the forward transformation
                  is from +orig to +tlrc coordinates, and the backward
                  transformation is from +tlrc to +orig coordinates.
               * If the transformation is specified by -matvec, then
                  the matrix-vector read in defines the forward
                  transform as above, and the backward transformation
                  is defined as the inverse.
               * If neither -forward nor -backward is given, then
                  -forward is the default.

 -input iii  = Read input 3-vectors from file 'iii' (from stdin if
               'iii' is '-' or the -input option is missing).  Input
               data may be in one of the following ASCII formats:

               * SureFit .coord files:
                   BeginHeader
                   lines of text ...
                   EndHeader
                   count
                   int x y z
                   int x y z
                   et cetera...
                 In this case, everything up to and including the
                 count is simply passed through to the output.  Each
                 (x,y,z) triple is transformed, and output with the
                 int label that precedes it.  Lines that cannot be
                 scanned as 1 int and 3 floats are treated as comments
                 and are passed through to the output unchanged.
               N.B.: SureFit coordinates are
                   x = distance Right    of Left-most      dataset corner
                   y = distance Anterior to Posterior-most dataset corner
                   z = distance Superior to Inferior-most  dataset corner
                 For example, if the transformation is specified by
                   -forward -apar fred+tlrc
                 then the input (x,y,z) are relative to fred+orig and the
                 output (x,y,z) are relative to fred+tlrc.  If instead
                   -backward -apar fred+tlrc
                 is used, then the input (x,y,z) are relative to fred+tlrc
                 and the output (x,y,z) are relative to fred+orig.
                 For this to work properly, not only fred+tlrc must be
                 readable by Vecwarp, but fred+orig must be as well.
                 If the transformation is specified by -matvec, then
                 the matrix-vector transformation is applied to the
                 (x,y,z) vectors directly, with no coordinate shifting.

               * AFNI .1D files with 3 columns
                   x y z
                   x y z
                   et cetera...
                 In this case, each (x,y,z) triple is transformed and
                 written to the output.  Lines that cannot be scanned
                 as 3 floats are treated as comments and are passed
                 through to the output unchanged.
               N.B.: AFNI (x,y,z) coordinates are in DICOM order:
                   -x = Right     +x = Left
                   -y = Anterior  +y = Posterior
                   -z = Inferior  +z = Superior

 -output ooo = Write the output to file 'ooo' (to stdout if 'ooo'
               is '-', or if the -output option is missing).  If the
               file already exists, it will not be overwritten unless
               the -force option is also used.

 -force      = If the output file already exists, -force can be
               used to overwrite it.  If you want to use -force,
               it must come before -output on the command line.

EXAMPLES:

  Vecwarp -apar fred+tlrc -input fred.orig.coord > fred.tlrc.coord

This transforms the vectors defined in original coordinates to
Talairach coordinates, using the transformation previously defined
by AFNI markers.

  Vecwarp -apar fred+tlrc -input fred.tlrc.coord -backward > fred.test.coord

This does the reverse transformation; fred.test.coord should differ from
fred.orig.coord only by roundoff error.

Author: RWCox - October 2001
This page auto-generated on Thu Aug 25 16:49:44 EDT 2005
waver
Usage: waver [options] > output_filename
Creates an ideal waveform timeseries file.
The output goes to stdout, and normally would be redirected to a file.

Options: (# refers to a number; [xx] is the default value)
  -WAV = Sets waveform to Cox special                    [default]
           (cf. AFNI FAQ list for formulas)
  -GAM = Sets waveform to form t^b * exp(-t/c)
           (cf. Mark Cohen)

  -EXPR "expression" = Sets waveform to the expression given,
                         which should depend on the variable 't'.
     e.g.: -EXPR "step(t-2)*step(12-t)*(t-2)*(12-t)"
     N.B.: The peak value of the expression on the '-dt' grid will
           be scaled to the value given by '-peak'; if this is not
           desired, set '-peak 0', and the 'natural' peak value of
           the expression will be used.

  -FILE dt wname = Sets waveform to the values read from the file
                   'wname', which should be a single column .1D file
                   (i.e., 1 ASCII number per line).  The 'dt' value
                   is the time step (in seconds) between lines
                   in 'wname'; the first value will be at t=0, the
                   second at t='dt', etc.  Intermediate time values
                   will be linearly interpolated.  Times past the
                   end of the 'wname' file will have the
                   waveform value set to zero.
               *** N.B.: If the -peak option is used AFTER -FILE,
                         its value will be multiplied into the result.

These options set parameters for the -WAV waveform.
  -delaytime #   = Sets delay time to # seconds                [2]
  -risetime #    = Sets rise time to # seconds                 [4]
  -falltime #    = Sets fall time to # seconds                 [6]
  -undershoot #  = Sets undershoot to # times the peak         [0.2]
                     (this should be a nonnegative factor)
  -restoretime # = Sets time to restore from undershoot        [2]

These options set parameters for the -GAM waveform:
  -gamb #        = Sets the parameter 'b' to #                 [8.6]
  -gamc #        = Sets the parameter 'c' to #                 [0.547]
  -gamd #        = Sets the delay time to # seconds            [0.0]
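The -GAM waveform is just the formula t^b * exp(-t/c) given earlier.  A small Python sketch (not waver's actual code, and without the -peak rescaling) shows that the unscaled curve peaks at t = b*c, about 4.7 s with the default parameters:

```python
import math

def gam(t, b=8.6, c=0.547):
    """Unscaled -GAM waveform t^b * exp(-t/c); zero for t <= 0."""
    return t ** b * math.exp(-t / c) if t > 0 else 0.0

# Setting the derivative of b*ln(t) - t/c to zero gives t = b*c,
# so the unscaled peak sits near t = 8.6 * 0.547, about 4.7 seconds.
t_peak = 8.6 * 0.547
```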

These options apply to all waveform types:
  -peak #        = Sets peak value to #                        [100]
  -dt #          = Sets time step of output AND input          [0.1]
  -TR #          = '-TR' is equivalent to '-dt'

The default is just to output the waveform defined by the parameters
above.  If an input file is specified by one of the options below, then
the timeseries defined by that file will be convolved with the ideal
waveform defined above -- that is, each nonzero point in the input
timeseries will generate a copy of the waveform starting at that point
in time, with the amplitude scaled by the input timeseries value.
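The convolution described in this paragraph can be sketched in a few lines of Python (a simplified model on integer time steps, ignoring waver's -dt resampling):

```python
def convolve_stim(stim, wave):
    """Each nonzero point in 'stim' starts a copy of 'wave' at that
    time step, scaled by the stimulus value; overlapping copies add."""
    out = [0.0] * (len(stim) + len(wave) - 1)
    for i, s in enumerate(stim):
        if s != 0.0:
            for j, w in enumerate(wave):
                out[i + j] += s * w
    return out

print(convolve_stim([0, 1, 0, 2], [1.0, 0.5]))  # [0.0, 1.0, 0.5, 2.0, 1.0]
```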

  -xyout         = Output data in 2 columns:
                     1=time 2=waveform (useful for graphing)
                     [default is 1 column=waveform]

  -input infile  = Read timeseries from *.1D formatted 'infile';
                     convolve with waveform to produce output
              N.B.: you can use a sub-vector selector to choose
                    a particular column of infile, as in
                      -input 'fred.1D[3]'

  -inline DATA   = Read timeseries from command line DATA;
                     convolve with waveform to produce output
                     DATA is in the form of numbers and
                     count@value, as in
                     -inline 20@0.0 5@1.0 30@0.0 1.0 20@0.0 2.0
     which means a timeseries with 20 zeros, then 5 ones, then 30 zeros,
     a single 1, 20 more zeros, and a final 2.
     [The '@' character may actually be any of: '@', '*', 'x', 'X'.
      Note that * must be typed as \* to prevent the shell from
      trying to interpret it as a filename wildcard.]
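The count@value notation above expands mechanically; here is a hypothetical Python parser for it (not waver's own code), handling the alternate '@', '*', 'x', 'X' separators:

```python
def parse_inline(tokens):
    """Expand waver -inline tokens: 'count@value' repeats 'value'
    'count' times, and a bare number is a single time point."""
    out = []
    for tok in tokens:
        for sep in "@*xX":
            if sep in tok:
                count, value = tok.split(sep)
                out += [float(value)] * int(count)
                break
        else:  # no separator found: a single value
            out.append(float(tok))
    return out

print(parse_inline(["3@0.0", "2@1.0", "0.5"]))  # [0.0, 0.0, 0.0, 1.0, 1.0, 0.5]
```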

  -tstim DATA    = Read discrete stimulation times from the command line
                     and convolve the waveform with delta-functions at
                     those times.  In this input format, the times do
                     NOT have to be at intervals of '-dt'.  For example
                       -dt 2.0 -tstim 5.6 9.3 13.7 16.4
                     specifies a TR of 2 s and stimuli at 4 times
                     (5.6 s, etc.) that do not correspond to integer
                     multiples of TR.  DATA values cannot be negative.
                   If the DATA is stored in a file, you can read it
                     onto the command line using something like
                       -tstim `cat filename`
                     using the backward-single-quote operator
                     of the usual Unix shells.
   ** 12 May 2003: The times after '-tstim' can now also be specified
                     in the format 'a:b', indicating a continuous ON
                     period from time 'a' to time 'b'.  For example,
                       -dt 2.0 -tstim 13.2:15.7 20.3:25.3
                     The amplitude of a response of duration equal to
                     'dt' is equal the the amplitude of a single impulse
                     response (which is the special case a=b).  N.B.: This
                     means that something like '5:5.01' is very different
                     from '5' (='5:5').  The former will have a small amplitude
                     because of the small duration, but the latter will have
                     a large amplitude because the case of an instantaneous
                     input is special.  It is probably best NOT to mix the
                     two types of input to '-tstim' for this reason.
                     Compare the graphs from the 2 commands below:
                       waver -dt 1.0 -tstim 5:5.1 | 1dplot -stdin
                       waver -dt 1.0 -tstim 5     | 1dplot -stdin
                     If you prefer, you can use the form 'a%c' to indicate
                     an ON interval from time=a to time=a+c.
   ** 13 May 2005: You can now add an amplitude to each response individually.
                     For example
                       waver -dt 1.0 -peak 1.0 -tstim 3.2 17.9x2.0 23.1x-0.5
                     puts the default response amplitude at time 3.2,
                     2.0 times the default at time 17.9, and -0.5 times
                     the default at time 23.1.

  -when DATA     = Read time blocks when stimulus is 'on' (=1) from the
                     command line and convolve the waveform with
                     a zero-one input.  For example:
                       -when 20..40 60..80
                     means that the stimulus function is 1.0 for time
                     steps number 20 to 40, and 60 to 80 (inclusive),
                     and zero otherwise.  (The first time step is
                     numbered 0.)

  -numout NN     = Output a timeseries with NN points; if this option
                     is not given, then enough points are output to
                     let the result tail back down to zero.

At least one option is required, or the program will just print this message
to stdout.  Only one of the 3 timeseries input options above can be used.

If you have the 'xmgr' graphing program, then a useful way to preview the
results of this program is through a command pipe like
   waver -dt 0.25 -xyout -inline 16@1 40@0 16@1 40@0 | xmgr -source stdin
Using the cruder AFNI package program 1dplot, you can do something like:
   waver -GAM -tstim 0 7.7 | 1dplot -stdin

If a square wave is desired, see the 'sqwave' program.
This page auto-generated on Thu Aug 25 16:49:44 EDT 2005
whereami
** bad option -help
This page auto-generated on Thu Aug 25 16:49:44 EDT 2005
whirlgif
whirlgif Rev 1.00 (C) 1996 by Kevin Kadow
                  (C) 1991,1992 by Mark Podlipec

whirlgif is a quick program that reads a series of GIF files, and produces
a single gif file composed of those images.

Usage: whirlgif [-v] [-trans index ] [-time delay] [-o outfile]
                [-loop] [-i incfile] file1 [ -time delay] file2

options:
   -v              verbose mode
   -loop [count]   add the Netscape 'loop' extension.
   -time delay     inter-frame timing.
   -trans index    set the colormap index 'index' to be transparent
   -o outfile      write the results to 'outfile'
   -i incfile      read a list of names from 'incfile'

TIPS

If you don't specify an output file, the GIF will be sent to stdout. This is
a good thing if you're using this in a CGI script, a very bad thing if you
run this from a terminal and forget to redirect stdout.

The output file (if any) and -loop _MUST_ be specified before any gif images.

You can specify several delay statements on the command line to change
the delay between images in the middle of an animation, e.g.

      whirlgif -time 5 a.gif b.gif c.gif -time 100 d.gif -time 5 e.gif f.gif

Although it's generally considered to be evil, you can also specify
several transparency statements on the command line, to change the transparent
color in the middle of an animation. This may cause problems for some programs.


BUGS
  + The loop 'count' is ineffective because Netscape always loops infinitely.
  + Should be able to specify delay in an 'incfile' list (see next bug).
  + Does not handle filenames starting with a - (hyphen), except in 'incfile'.

This program is available from http://www.msg.net/utility/whirlgif/
-------------------------------------------------------------------
Kevin Kadow     kadokev@msg.net
Based on 'txtmerge' written by:
Mark Podlipec   podlipec@wellfleet.com
This page auto-generated on Thu Aug 25 16:49:44 EDT 2005
Xphace
Usage: Xphace im1 [im2]
Image mergerizing.
Image files are in PGM format.
This page auto-generated on Thu Aug 25 16:49:44 EDT 2005

 

 

