GMTMATH(1) Generic Mapping Tools GMTMATH(1)
NAME
gmtmath - Reverse Polish Notation calculator for data tables
SYNOPSIS
gmtmath [ -At_f(t).d ] [ -Ccols ] [ -Fcols ] [ -H[i][nrec] ] [ -I ] [
-Nn_col/t_col ] [ -Q ] [ -S[f|l] ] [ -Tt_min/t_max/t_inc[*]|tfile ] [
-V ] [ -b[i|o][s|S|d|D[ncol]|c[var1/...]] ] [ -f[i|o]colinfo ] [
-m[i|o][flag] ] operand [ operand ] OPERATOR [ operand ] OPERATOR ... =
[ outfile ]
DESCRIPTION
gmtmath will perform operations like add, subtract, multiply, and
divide on one or more table data files or constants using Reverse
Polish Notation (RPN) syntax (e.g., Hewlett-Packard calculator-style).
Arbitrarily complicated expressions may therefore be evaluated; the
final result is written to an output file [or standard output]. When
two data tables are on the stack, each element in file A is modified by
the corresponding element in file B. However, some operators only
require one operand (see below). If no data tables are used in the
expression then options -T, -N can be set (and optionally -b to
indicate the data domain). If STDIN is given, <stdin> will be read and
placed on the stack as if a file with that content had been given on
the command line. By default, all columns except the "time" column are
operated on, but this can be changed (see -C).
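To make the RPN evaluation model concrete, here is a minimal stack machine sketched in Python (an illustration of the evaluation order only; rpn_eval is a hypothetical helper, not part of GMT):

```python
# Minimal RPN evaluator illustrating how gmtmath processes its command
# line: operands push values onto the stack, operators pop their inputs
# and push the result.
def rpn_eval(tokens):
    stack = []
    ops = {
        "ADD": lambda a, b: a + b,
        "SUB": lambda a, b: a - b,
        "MUL": lambda a, b: a * b,
        "DIV": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()           # top of stack is the 2nd operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # a numerical constant
    return stack[-1]

# Mirrors the scalar command: gmtmath -Q 1 1.75 ADD 2.2 DIV =
print(rpn_eval(["1", "1.75", "ADD", "2.2", "DIV"]))  # approximately 1.25
```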
operand
If operand can be opened as a file it will be read as an ASCII
(or binary, see -bi) table data file. If not a file, it is
interpreted as a numerical constant or a special symbol (see
below). The special argument STDIN means that stdin will be
read and placed on the stack; STDIN can appear more than once if
necessary.
outfile
The name of a table data file that will hold the final result.
If not given then the output is sent to stdout.
OPERATORS
Choose among the following 131 operators. "args" are the number
of input and output arguments.
Operator args Returns
ABS 1 1 abs (A).
ACOS 1 1 acos (A).
ACOSH 1 1 acosh (A).
ACOT 1 1 acot (A).
ACSC 1 1 acsc (A).
ADD 2 1 A + B.
AND 2 1 NaN if A and B == NaN, B if A == NaN, else A.
ASEC 1 1 asec (A).
ASIN 1 1 asin (A).
ASINH 1 1 asinh (A).
ATAN 1 1 atan (A).
ATAN2 2 1 atan2 (A, B).
ATANH 1 1 atanh (A).
BEI 1 1 bei (A).
BER 1 1 ber (A).
CEIL 1 1 ceil (A) (smallest integer >= A).
CHICRIT 2 1 Critical value for chi-squared-distribution, with
alpha = A and n = B.
CHIDIST 2 1 chi-squared-distribution P(chi2,n), with chi2 = A
and n = B.
COL 1 1 Places column A on the stack.
CORRCOEFF 2 1 Correlation coefficient r(A, B).
COS 1 1 cos (A) (A in radians).
COSD 1 1 cos (A) (A in degrees).
COSH 1 1 cosh (A).
COT 1 1 cot (A) (A in radians).
COTD 1 1 cot (A) (A in degrees).
CPOISS 2 1 Cumulative Poisson distribution F(x,lambda), with
x = A and lambda = B.
CSC 1 1 csc (A) (A in radians).
CSCD 1 1 csc (A) (A in degrees).
D2DT2 1 1 d^2(A)/dt^2 2nd derivative.
D2R 1 1 Converts Degrees to Radians.
DDT 1 1 d(A)/dt Central 1st derivative.
DILOG 1 1 dilog (A).
DIV 2 1 A / B.
DUP 1 2 Places duplicate of A on the stack.
EQ 2 1 1 if A == B, else 0.
ERF 1 1 Error function erf (A).
ERFC 1 1 Complementary Error function erfc (A).
ERFINV 1 1 Inverse error function of A.
EXCH 2 2 Exchanges A and B on the stack.
EXP 1 1 exp (A).
FACT 1 1 A! (A factorial).
FCRIT 3 1 Critical value for F-distribution, with alpha =
A, n1 = B, and n2 = C.
FDIST 3 1 F-distribution Q(F,n1,n2), with F = A, n1 = B,
and n2 = C.
FLIPUD 1 1 Reverse order of each column.
FLOOR 1 1 floor (A) (greatest integer <= A).
FMOD 2 1 A % B (remainder after truncated division).
GE 2 1 1 if A >= B, else 0.
GT 2 1 1 if A > B, else 0.
HYPOT 2 1 hypot (A, B) = sqrt (A*A + B*B).
I0 1 1 Modified Bessel function of A (1st kind, order
0).
I1 1 1 Modified Bessel function of A (1st kind, order
1).
IN 2 1 Modified Bessel function of A (1st kind, order
B).
INRANGE 3 1 1 if B <= A <= C, else 0.
INT 1 1 Numerically integrate A.
INV 1 1 1 / A.
ISNAN 1 1 1 if A == NaN, else 0.
J0 1 1 Bessel function of A (1st kind, order 0).
J1 1 1 Bessel function of A (1st kind, order 1).
JN 2 1 Bessel function of A (1st kind, order B).
K0 1 1 Modified Bessel function of A (2nd kind, order
0).
K1 1 1 Modified Bessel function of A (2nd kind, order
1).
KEI 1 1 kei (A).
KER 1 1 ker (A).
KN 2 1 Modified Bessel function of A (2nd kind, order
B).
KURT 1 1 Kurtosis of A.
LE 2 1 1 if A <= B, else 0.
LMSSCL 1 1 LMS scale estimate (LMS STD) of A.
LOG 1 1 log (A) (natural log).
LOG10 1 1 log10 (A) (base 10).
LOG1P 1 1 log (1+A) (accurate for small A).
LOG2 1 1 log2 (A) (base 2).
LOWER 1 1 The lowest (minimum) value of A.
LRAND 2 1 Laplace random noise with mean A and std.
deviation B.
LSQFIT 1 0 Let current table be [A | b]; return least
squares solution x = A \ b.
LT 2 1 1 if A < B, else 0.
MAD 1 1 Median Absolute Deviation (L1 STD) of A.
MAX 2 1 Maximum of A and B.
MEAN 1 1 Mean value of A.
MED 1 1 Median value of A.
MIN 2 1 Minimum of A and B.
MOD 2 1 A mod B (remainder after floored division).
MODE 1 1 Mode value (Least Median of Squares) of A.
MUL 2 1 A * B.
NAN 2 1 NaN if A == B, else A.
NEG 1 1 -A.
NEQ 2 1 1 if A != B, else 0.
NOT 1 1 NaN if A == NaN, 1 if A == 0, else 0.
NRAND 2 1 Normal, random values with mean A and std.
deviation B.
OR 2 1 NaN if A or B == NaN, else A.
PLM 3 1 Associated Legendre polynomial P(A) degree B
order C.
PLMg 3 1 Normalized associated Legendre polynomial P(A)
degree B order C (geophysical convention).
POP 1 0 Delete top element from the stack.
POW 2 1 A ^ B.
PQUANT 2 1 The B'th Quantile (0-100%) of A.
PSI 1 1 Psi (or Digamma) of A.
PV 3 1 Legendre function Pv(A) of degree v = real(B) +
imag(C).
QV 3 1 Legendre function Qv(A) of degree v = real(B) +
imag(C).
R2 2 1 R2 = A^2 + B^2.
R2D 1 1 Convert Radians to Degrees.
RAND 2 1 Uniform random values between A and B.
RINT 1 1 rint (A) (nearest integer).
ROOTS 2 1 Treats col A as f(t) = 0 and returns its roots.
ROTT 2 1 Rotate A by the (constant) shift B in the t-
direction.
SEC 1 1 sec (A) (A in radians).
SECD 1 1 sec (A) (A in degrees).
SIGN 1 1 sign (+1 or -1) of A.
SIN 1 1 sin (A) (A in radians).
SINC 1 1 sinc (A) (sin (pi*A)/(pi*A)).
SIND 1 1 sin (A) (A in degrees).
SINH 1 1 sinh (A).
SKEW 1 1 Skewness of A.
SQR 1 1 A^2.
SQRT 1 1 sqrt (A).
STD 1 1 Standard deviation of A.
STEP 1 1 Heaviside step function H(A).
STEPT 1 1 Heaviside step function H(t-A).
SUB 2 1 A - B.
SUM 1 1 Cumulative sum of A.
TAN 1 1 tan (A) (A in radians).
TAND 1 1 tan (A) (A in degrees).
TANH 1 1 tanh (A).
TCRIT 2 1 Critical value for Student's t-distribution, with
alpha = A and n = B.
TDIST 2 1 Student's t-distribution A(t,n), with t = A, and
n = B.
TN 2 1 Chebyshev polynomial Tn(-1<A<+1) of degree B.
UPPER 1 1 The highest (maximum) value of A.
XOR 2 1 B if A == NaN, else A.
Y0 1 1 Bessel function of A (2nd kind, order 0).
Y1 1 1 Bessel function of A (2nd kind, order 1).
YN 2 1 Bessel function of A (2nd kind, order B).
ZCRIT 1 1 Critical value for the normal-distribution, with
alpha = A.
ZDIST 1 1 Cumulative normal-distribution C(x), with x = A.
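The difference between FMOD (remainder after truncated division) and MOD (remainder after floored division) only shows up for negative operands; a quick illustration in Python (not GMT code):

```python
import math

# FMOD: remainder after truncated division; the sign follows A,
# matching C's fmod().
print(math.fmod(-7.0, 3.0))                  # -1.0
# MOD: remainder after floored division; the sign follows B.
print(-7.0 - 3.0 * math.floor(-7.0 / 3.0))   # 2.0
```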
SYMBOLS
The following symbols have special meaning:
PI 3.1415926...
E 2.7182818...
EULER 0.5772156...
TMIN Minimum t value
TMAX Maximum t value
TINC t increment
N The number of records
T Table with t-coordinates
OPTIONS
-A Requires -N and will partially initialize a table with values
from the given file containing t and f(t) only. The t is placed
in column t_col while f(t) goes into column n_col - 1 (see -N).
-C Select the columns that will be operated on until next
occurrence of -C. List columns separated by commas; ranges like
1,3-5,7 are allowed. -C (no arguments) resets the default
action of using all columns except time column (see -N). -Ca
selects all columns, including time column, while -Cr reverses
(toggles) the current choices.
-F Give a comma-separated list of desired columns or ranges that
should be part of the output (0 is first column) [Default
outputs all columns].
-H Input file(s) has header record(s). If used, the default number
of header records is N_HEADER_RECS. Use -Hi if only input data
should have header records [Default will write out header
records if the input data have them]. Blank lines and lines
starting with # are always skipped.
-I Reverses the output row sequence from ascending time to
descending [ascending].
-N Select the number of columns and the column number that contains
the "time" variable. Columns are numbered starting at 0 [2/0].
-Q Quick mode for scalar calculation. Shorthand for -Ca -N1/0
-T0/0/1.
-S Only report the first or last row of the results [Default is all
rows]. This is useful if you have computed a statistic (say the
MODE) and only want to report a single number instead of
numerous records with identical values. Append l to get the
last row and f to get the first row only [Default].
-T Required when no input files are given. Sets the t-coordinates
of the first and last point and the equidistant sampling
interval for the "time" column (see -N). Append * if you are
specifying the number of equidistant points instead. If there
is no time column (only data columns), give -T with no
arguments; this also implies -Ca. Alternatively, give the name
of a file whose first column contains the desired t-coordinates
which may be irregular.
-V Selects verbose mode, which will send progress reports to stderr
[Default runs "silently"].
-bi Selects binary input. Append s for single precision [Default is
d (double)]. Uppercase S or D will force byte-swapping.
Optionally, append ncol, the number of columns in your binary
input file if it exceeds the columns needed by the program. Or
append c if the input file is netCDF. Optionally, append
var1/var2/... to specify the variables to be read.
-bo Selects binary output. Append s for single precision [Default
is d (double)]. Uppercase S or D will force byte-swapping.
Optionally, append ncol, the number of desired columns in your
binary output file. [Default is same as input, but see -F]
-m Multiple segment file(s). Segments are separated by a special
record. For ASCII files the first character must be flag
[Default is '>']. For binary files all fields must be NaN and
-b must set the number of output columns explicitly. By default
the -m setting applies to both input and output. Use -mi and
-mo to give separate settings to input and output.
ASCII FORMAT PRECISION
The ASCII output formats of numerical data are controlled by parameters
in your .gmtdefaults4 file. Longitude and latitude are formatted
according to OUTPUT_DEGREE_FORMAT, whereas other values are formatted
according to D_FORMAT. Be aware that the format in effect can lead to
loss of precision in the output, which can lead to various problems
downstream. If you find the output is not written with enough
precision, consider switching to binary output (-bo if available) or
specify more decimals using the D_FORMAT setting.
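As an illustration of the precision loss described above, a format with few significant digits discards information in much the same way a restrictive D_FORMAT setting would (a Python sketch, not GMT code):

```python
# A coarse output format truncates the value; reading it back in no
# longer reproduces the original number.
x = 123.456789012345
print("%.6g" % x)                 # '123.457' -- 6 significant digits survive
print(float("%.6g" % x) == x)     # False: precision was lost on output
```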
NOTES ON OPERATORS
(1) The operators PLM and PLMg calculate the associated Legendre
polynomial of degree L and order M in x which must satisfy -1 <= x <=
+1 and 0 <= M <= L. x, L, and M are the three arguments preceding the
operator. PLM is not normalized and includes the Condon-Shortley phase
(-1)^M. PLMg is normalized in the way that is most commonly used in
geophysics. The C-S phase can be added by using -M as argument. PLM
will overflow at higher degrees, whereas PLMg is stable until ultra
high degrees (at least 3000).
(2) Files that have the same names as some operators, e.g., ADD, SIGN,
=, etc. should be identified by prepending the current directory (i.e.,
./LOG).
(3) The stack depth limit is hard-wired to 100.
(4) All functions expecting a positive radius (e.g., LOG, KEI, etc.)
are passed the absolute value of their argument.
(5) The DDT and D2DT2 functions only work on regularly spaced data.
(6) All derivatives are based on central finite differences, with
natural boundary conditions.
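The central-difference scheme behind DDT can be sketched as follows (a simplified Python illustration; the one-sided treatment of the end points is an assumption about how the natural boundary conditions are realized, and gmtmath's internals may differ):

```python
def central_first_derivative(y, dt):
    """Central 1st derivative of regularly spaced samples y with spacing dt."""
    n = len(y)
    d = [0.0] * n
    # interior points: symmetric (central) difference
    for i in range(1, n - 1):
        d[i] = (y[i + 1] - y[i - 1]) / (2.0 * dt)
    # end points: one-sided differences (assumed boundary handling)
    d[0] = (y[1] - y[0]) / dt
    d[-1] = (y[-1] - y[-2]) / dt
    return d

# d/dt of t^2 sampled at t = 0,1,2,3: interior values match 2t exactly
print(central_first_derivative([0.0, 1.0, 4.0, 9.0], 1.0))
```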
(7) ROOTS must be the last operator on the stack, only followed by =.
EXAMPLES
To take the square root of the content of the second data column being
piped through gmtmath by process1 and pipe it through a 3rd process,
use
process1 | gmtmath STDIN SQRT = | process3
To take log10 of the average of 2 data files, use
gmtmath file1.d file2.d ADD 0.5 MUL LOG10 = file3.d
Given the file samples.d, which holds seafloor ages in m.y. and
seafloor depth in m, use the relation depth (in m) = 2500 + 350 *
sqrt (age) to print the depth anomalies:
gmtmath samples.d T SQRT 350 MUL 2500 ADD SUB = | lpr
To take the average of columns 1 and 4-6 in the three data sets
sizes.1, sizes.2, and sizes.3, use
gmtmath -C1,4-6 sizes.1 sizes.2 ADD sizes.3 ADD 3 DIV = ave.d
To take the 1-column data set ages.d and calculate the modal value and
assign it to a variable, try
set mode_age = `gmtmath -S -T ages.d MODE =`
To evaluate the dilog(x) function for coordinates given in the file
t.d:
gmtmath -Tt.d T DILOG = dilog.d
To use gmtmath as a RPN Hewlett-Packard calculator on scalars (i.e., no
input files) and calculate arbitrary expressions, use the -Q option.
As an example, we will calculate the value of Kei (((1 + 1.75)/2.2) +
cos (60)) and store the result in the shell variable z:
set z = `gmtmath -Q 1 1.75 ADD 2.2 DIV 60 COSD ADD KEI =`
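For reference, the argument handed to KEI here can be checked step by step (a Python sketch; the kei function itself is omitted since it is not in the Python standard library):

```python
import math

# Mirrors the RPN sequence 1 1.75 ADD 2.2 DIV 60 COSD ADD:
# ((1 + 1.75) / 2.2) + cos(60 degrees); COSD takes degrees.
arg = (1.0 + 1.75) / 2.2 + math.cos(math.radians(60.0))
print(arg)  # approximately 1.75
```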
To use gmtmath as a general least squares equation solver, imagine that
the current table is the augmented matrix [ A | b ] and you want the
least squares solution x to the matrix equation A * x = b. The
operator LSQFIT does this; it is your job to populate the matrix
correctly first. The -A option will facilitate this. Suppose you have
a 2-column file ty.d with t and y(t) and you would like to fit the
model y(t) = a + b*t + c*H(t-t0), where H is the Heaviside step
function for a given t0 = 1.55. Then, you need a 4-column augmented
table loaded with t in column 1 and your observed y(t) in column 3.
The calculation becomes
gmtmath -N4/1 -Aty.d -C0 1 ADD -C2 1.55 STEPT ADD -Ca LSQFIT =
solution.d
Note we use the -C option to select which columns we are working on,
then make active all the columns we need (here all of them, with -Ca)
before calling LSQFIT. The second and fourth columns (col numbers 1
and 3) are preloaded with t and y(t), respectively, the other columns
are zero. If you already have a precalculated table with the augmented
matrix [ A | b ] in a file (say lsqsys.d), the least squares solution
is simply
gmtmath -T lsqsys.d LSQFIT = solution.d
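LSQFIT amounts to an ordinary least-squares solve of A * x = b built from the augmented table [ A | b ]. The same computation for a simple straight-line model can be sketched in plain Python via the normal equations (an independent illustration, not GMT code):

```python
def lsq_line(t, y):
    """Least-squares fit of y = a + b*t via the normal equations."""
    n = len(t)
    st, sy = sum(t), sum(y)
    stt = sum(ti * ti for ti in t)
    sty = sum(ti * yi for ti, yi in zip(t, y))
    det = n * stt - st * st
    a = (sy * stt - st * sty) / det   # intercept
    b = (n * sty - st * sy) / det     # slope
    return a, b

# Data lying exactly on y = 2 + 3t is recovered exactly
print(lsq_line([0.0, 1.0, 2.0, 3.0], [2.0, 5.0, 8.0, 11.0]))  # (2.0, 3.0)
```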
REFERENCES
Abramowitz, M., and I. A. Stegun, 1964, Handbook of Mathematical
Functions, Applied Mathematics Series, vol. 55, Dover, New York.
Holmes, S. A., and W. E. Featherstone, 2002, A unified approach to the
Clenshaw summation and the recursive computation of very high degree
and order normalised associated Legendre functions. Journal of
Geodesy, 76, 279-299.
Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery,
1992, Numerical Recipes, 2nd edition, Cambridge Univ. Press, New York.
Spanier, J., and K. B. Oldham, 1987, An Atlas of Functions, Hemisphere
Publishing Corp.
SEE ALSO
GMT(1), grdmath(1)
GMT 4.5.14 1 Nov 2015 GMTMATH(1)