GISS GCM ModelE
Frequently Asked Questions About the GISS Model
Here are some frequently asked questions and some standard responses. Please feel free to add new questions (and answers!), or to amend a response to make it clearer or more complete.
- Why has all the coding been moved out of main?
- Why is modular code so great anyway?
- Why are there three files for every module, why not just one?
- How do I know where to find definitions and declarations then?
- Why doesn't the model produce 'nice' output as it runs?
- Something is wrong with the advection, my tracers are going haywire!
- The model crashed in the dynamics near the pole. What should I do?
- Why should I switch to using modelE?
- Why do I get single precision answers from double precision variables, and what are all those 'd0's doing?
- Why does GISS sometimes use the line number format for FORTRAN files?
- Is line number format 'harder' to edit?
- Don't revision control systems (rcs or cvs) provide equal functionality?
- Will the update system ever change?
- The model core dumps!
- What did we do?
- Why did we do this?
- How was it done?
- How does it work?
- So what are the actual interfaces?
- Any examples?
- What is preprocessing?
- How to set up global preprocessing options in a rundeck
- How to use global preprocessing options in a source file
- How does it work?
- Any recommendations on usage?
1) Why has all the coding been moved out of main?
In order to increase the modularity of the code, many tasks that were previously in main and input (input/output, initialisations, diagnostics etc.) have been moved to the relevant physical module. Thus, main should not vary as much between model versions. Also, changes to a module (to the number of prognostic variables, the functionality etc.) should now only concern that module and not the rest of the model. This should make it easier to 'plug and play' new functionality.
2) Why is modular code so great anyway?
So that if you want to change something, you only need to do it in the part of the code directly affected. There should be far fewer unforeseen consequences from seemingly minor changes.
3) Why are there three files for every module, why not just one?
There are generally three logical parts to each module: a common block (which owns the variables on the main model grid and/or saves output from a module that might be used by another part of the code), the local physics (for instance, the pure 1D column physics) and the drivers (the programs that access the local physics routines, do initialisation etc.). The reason that they are in separate files is mainly because of the logical dependencies between the main model and module variables. Consider a situation where the module needs information from another module - i.e. there is a USE statement in one of the drivers. Conceivably, that module might USE a variable from this module also. Since we now use an automatically generated list of dependencies in the makefile, this would create a circular dependency if the USE statements and the variable definitions were in the same file. Therefore, at minimum, the common block and the drivers must be separated. We generally choose to make the local physics a separate file also, because it is not usually dependent on anything except the CONSTANT module and possibly the main model common block. Thus it does not need to be recompiled every time something in the drivers or common block changes. However, sometimes the local physics is in the same file as the COM module.
4) How do I know where to find definitions and declarations then?
The rules are relatively simple (but not yet implemented 100% of the time).
- Variables defined on the model grid are in the COM file.
- Module specific parameters are in the local physics module.
- Local variables are defined wherever they are needed.
- Variables from other parts of the code are listed in USE ... ONLY statements.
- Drivers can USE both the COM variables and the local physics parameters.
One exception to this is when local parameters are required by another part of the model (e.g. for the input NAMELIST); then the variable might be defined in the local physics module, USE'd by the COM module, and only accessed through COM.
5) Why doesn't the model produce 'nice' output as it runs?
The GISS GCM output takes the form of accumulation files, and since the accumulated quantities are neither averaged nor scaled, looking directly at these 'acc' files is not very illuminating. So it is a fact that the running output is not 'nice'. However, there are a number of reasons why we do it this way. Firstly, the accumulated arrays are generally the basic model values; the 'nice' diagnostics that are required are generally a function of a number of these variables. Accumulated arrays and files can be concatenated very easily (and thus averaged trivially) without having to worry about the length of time each array was accumulated (say, a different number of days in two different months). The average of derived quantities (such as the albedo) is not the same as the quantity derived from the average (which is usually what is required); accumulated arrays allow you to do both, depending on what is needed. Finally, a great many diagnostics can be derived after the fact, without having to have thought about them ahead of time.
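The albedo point can be made concrete with a small Python sketch (the numbers are invented purely for illustration):

```python
# Two accumulation periods: reflected and incident solar radiation.
# (Hypothetical numbers, purely for illustration.)
reflected = [30.0, 10.0]
incident = [100.0, 50.0]

# Average of the derived quantity (mean of the per-period albedos):
avg_of_albedos = sum(r / i for r, i in zip(reflected, incident)) / 2

# Quantity derived from the accumulated (summed) fields:
albedo_of_sums = sum(reflected) / sum(incident)

print(avg_of_albedos)  # 0.25
print(albedo_of_sums)  # 0.2666...
```

With the raw accumulated fields on disk, either quantity can be computed after the fact; with only the per-period albedos saved, the second one is lost.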
6) Something is wrong with the advection, my tracers are going haywire!
This is not a problem in the advection. The GISS dynamical core is unique in that it uses more information than just the mean tracer amount in a particular grid box. We currently use two flavours of tracer advection: the linear and the quadratic upstream schemes. The linear scheme carries 3 first order moments along with the mean, while the quadratic scheme carries an additional 6 second order moments. These can be thought of as representing the spatial gradients of the tracer concentration at the centre of a gridbox. Thus a higher degree of accuracy for the tracer distribution in the box can be found as a Taylor series expansion in the moments. This is used in the advection routines to accurately move tracer mass from box to box. However, as should be clear, if the moments are incorrect or corrupted in any way, the advection will be compromised, even to the point where negative tracer amounts might appear. This is usually what has happened when the model crashes after advecting tracers.
What could cause the moments to be corrupted? Generally, the coding of the tracers did not consider the effects of a particular physical process on the moments. The most serious omission is failing to reduce the moments in the same proportion as the tracer mass is reduced. This omission can leave tracer moments significantly larger than the mean tracer amount, which implies a sub-grid scale tracer profile that is not positive definite. Please read the document Using tracer moments in the GISS GCM for further information on how to use the moments in a useful fashion.
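The "reduce the moments in the same proportion" rule can be sketched as follows. This is a hypothetical Python helper, not actual model code; the real moment handling lives in the Fortran tracer routines:

```python
def reduce_tracer(mass, moments, fraction):
    """Remove a fraction of the tracer mass from a gridbox and scale
    the sub-grid moments by the same factor, so that the implied
    sub-grid profile keeps its shape and stays positive definite."""
    keep = 1.0 - fraction
    return mass * keep, [m * keep for m in moments]

# Removing 25% of the mass scales every moment by 0.75 as well:
new_mass, new_moments = reduce_tracer(8.0, [2.0, -1.0, 0.5], 0.25)
print(new_mass)     # 6.0
print(new_moments)  # [1.5, -0.75, 0.375]
```

Forgetting the second half of this operation (scaling the moments) is exactly the omission described above: the moments stay large while the mean shrinks.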
7) The model crashed in the dynamics near the pole. What should I do?
Occasionally (every 15-20 model years), the model will produce very fast velocities in the lower stratosphere near the pole (levels 7 or 8 for the standard layering). This will produce a number of warnings from the advection (such as 'limitq warning: abs(a)>1') and then finally a crash ('limitq error: new sn < 0'). There are a number of things you can do to get past such an error: i) go back to the last monthly rsf file and use ISTART=4; ii) change DT to a smaller value (like 180 sec), run past the crash for a couple of days, and then increase DT back to normal afterwards.
The second option is more straightforward and can be dealt with automatically (see here for more details). The first option is only guaranteed to work if the number of hours that have elapsed from the start of the run to the end of the last month is not an integer multiple of NRAD (this ensures that the model will follow a different path). If there is still a problem, going back to the previous month's restart file generally works.
Please make a note in the rundeck that this happened, and how you fixed it.
8) Why should I switch to using modelE?
It is of course up to you. However, there are a number of reasons why it makes sense for you to make the effort involved in upgrading.
- modelE fixed a number of bugs (some major, some minor) that may still exist in your code.
- subsequent changes and upgrades to modelE will be significantly easier than they were for model II'
- adding diagnostics is much more straightforward
- acc files now have more information (NOT YET IMPLEMENTED) and 'pd'-like programs are now more robust
- modelE is double precision throughout
- modelE will eventually be the only model version supported by the programmers.
- modelE is more modular, more flexible and easier to understand
- modelE has made explicit most of the dependencies that were hidden in model II'. Thus minor changes are less likely to have devastating knock on effects.
- modelE is written using a lot of Fortran 90, and a lot less Fortran 66.
- modelE comes with automatic Makefile generation from the rundeck.
- modelE comes ready for parallelisation, coupling and tracers.
9) Why do I get single precision answers from double precision variables, and what are all those 'd0's doing?
All numbers in the GCM should be accurate to the degree of their representation; however, many are not. This mostly stems from the automatic conversion that takes place when a single precision or integer number is assigned to a double precision variable. In the following examples the double precision variables will only be accurate to 7 or so decimal digits (instead of the 15 or so expected).
  REAL*8 X
  X = 0.1                  => 0.10000000149011612
  X = 1/3                  => 0.  (integer division)
  X = 1./3.                => 0.3333333432674408
  X = 1./3                 => 0.3333333432674408
  X = 0.3333333333333333   => 0.3333333432674408  (!!!!)
To get double precision results you must use the 'd' exponent notation, i.e. X=0.1d0 or X=1d-1, and X=1d0/3d0, 1./3d0, or even 1d0/3.
Note that for decimals exactly representable in binary there are no problems, i.e. X=0.5 is the same as X=5d-1. Where mixed-mode division is concerned, the integer is converted to the type of the real operand. Thus

  1./IM                              gives only single precision
  REAL*8 :: FIM = IM, BYIM = 1./FIM  gives double precision

(since the denominator is already double precision).
On some compilers there is an option (such as -r8) that removes these problems, but not on all. Hence, for maximum portability, we are trying to be explicit about writing those 'd0's out.
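The effect is easy to reproduce outside Fortran. Python floats are always double precision, but round-tripping a value through a 4-byte float with the standard struct module mimics what happens when a single precision constant is assigned to a REAL*8 variable:

```python
import struct

def as_single(x):
    """Round a double precision value to the nearest 4-byte float and
    widen it back -- mimicking REAL*8 X; X = 0.1 in Fortran."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(as_single(0.1))        # 0.10000000149011612
print(as_single(1.0 / 3.0))  # 0.3333333432674408
print(as_single(0.5))        # 0.5 (exactly representable in binary)
```

The first two values match the Fortran examples above exactly: the damage is done the moment the constant is stored in single precision, and no later widening can recover the lost digits.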
10) Why does GISS use the line number format for FORTRAN files?
Line number format is the practice of using the first 8 characters of a line to insert line number information in an 'F8.3' format.
Historically, this is a holdover from the days of the punchcard when all lines required a line number. Subsequently, it was used at GISS to make additions/corrections to files without having to copy the entire rest of the code (the update system). Further refinements have led to functional updates that allow the 'piggy-backing' of specific functionality (such as tracer code, or ocean coupling) on top of the underlying model code in a relatively model-version independent way. We have also found it very useful in tracking bugs, notifying users about code changes and referencing in general.
In modelE line number format has been replaced by the CVS revision control system and preprocessing options.
11) Is line number format 'harder' to edit?
Since the format is plain ASCII, it is just as easy to edit as any other text format in any standard editor. In addition we have tools which work very well: a) giss-fortran-mode.el allows the standard FORTRAN mode in Emacs to work with line numbers; b) the fillnum utility fills in any missing line numbers, so you never need to put them in by hand (this could be combined with the makefile, so that it would occur automatically); c) with options to upd you can choose whatever format you like to edit (a .U file, or a .f file with line numbers first or last).
12) Don't revision control systems (rcs or cvs) provide equal functionality?
One part of the update file system (of which line numbers are part) is indeed to save space by storing only changes from a base file, and this is replicated - and improved upon - by rcs or cvs. However, another part of the system involves the addition of functionality that is generally independent of the base code (i.e. the tracer code, the stratospheric code, the coupling to the ocean, options for diagnostic output etc.), which can be applied directly to any version of the base code, hopefully with only minimal changes. This is not replicated by standard rcs/cvs systems (and indeed they do not try). This functionality is to some extent similar to the use of pre-processor directives, but as the number of options grows (as it will), this produces unwieldy and overly complicated base code. Finally, line numbers are a very easy way to refer to code sections, either to describe what is going on, or to refer to specific lines, for instance, when a bug is discovered.
13) Will the update system ever change?
We have already seen increases in the functionality of the update system that have made it much more useful for precisely the functions that rcs/cvs do not cover. However, for the new modelE code, which was built directly in CVS, the update system will probably not be re-introduced.
14) The model core dumps!
This problem can arise from a multitude of causes; only the most common ones unrelated to (heretofore undetected) programming errors can be outlined here.
- The disk on which your simulation is running has filled up.
- If you are running a model version known to work in a computing environment other than yours, check that your "stack" is large enough. See the stacksize discussion of the System Requirements section for details. Linux systems in particular have small stack defaults.
2. FAQs about the Parameter database.
1) What did we do?
We replaced parameter lists JC and RC and corresponding common blocks with a Parameter Database.
2) Why did we do this?
The following goals were pursued:
- make restart files more universal, so that they would remain compatible if some parameters are added or removed
- make the introduction of new parameters simpler and more straightforward
- allow each module to work with its own parameters independently, i.e. add/remove parameters without editing any source code outside of the module which is using them
- make the code more readable and more self-documenting and get rid of structures like BLOCK DATA which are not in Fortran 90 style
3) How was it done?
We created a module which keeps a database of all parameters used in the model, and provided a library of interface routines. These interface routines are used to get the values of parameters from the database (or to set new parameters in the database). Rundeck parameters are loaded automatically (with one call in INPUT) into this database when the program starts. The database is saved to the restart file each time one is written.
4) How does it work?
For each parameter the database keeps its name as a character string and its value (or list of values for an array). To get the value of a parameter "my_param" and assign it to the variable var one would do:

  call get_param( "my_param", var )

If one wants to create a new parameter in the database with the name "new_par" and set it to the value var one would do:

  call set_param( "new_par", var )

It is recommended that you use the same name for the variable and for its name in the database, i.e.:

  call set_param( "new_par", new_par )
  call get_param( "my_param", my_param )

This will improve the readability of the code.
At restart, all parameters from the rundeck are first loaded into the database under the names as they appear in the rundeck. Then the restart file is read, and those parameters which were saved to it but were not present in the rundeck are added to the database. If a parameter is present both in the rundeck and in the restart file, preference is given to the rundeck and the restart file value is ignored. This is done so that you can always overwrite certain parameters using the rundeck.
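In Python terms, treating the database as a simple dict, the precedence rule amounts to the following sketch (the function and parameter names here are invented):

```python
def build_database(rundeck_params, restart_params):
    """Rundeck parameters are loaded first; restart-file parameters
    are then added only if not already present, so the rundeck always
    wins when both define the same name."""
    db = dict(rundeck_params)
    for name, value in restart_params.items():
        db.setdefault(name, value)   # keep the rundeck value if set
    return db

db = build_database({'dt': 450}, {'dt': 900, 'nrad': 5})
print(db)  # {'dt': 450, 'nrad': 5}
```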
5) So what are the actual interfaces?
Rundeck: All parameters have been moved out of the namelist block to a new block which starts with &&PARAMETERS and ends with &&END_PARAMETERS. This block is read by a special parser (not a namelist). The syntax of this block is similar to that of the namelist, with the following restrictions:
- only one parameter per line can be specified
- character strings should be single quoted
- arrays can be assigned only as a whole, no support for single elements
- all elements of an array should be on the same line
- repeaters (i.e. expressions like 2*123) are not supported
- only linear arrays are supported
- the only types supported are: integer, real*8, and character*1 to character*16
Since the parser reads parameters from the rundeck straight into the database, there is no need to describe those parameters in advance in any part of the code. The drawback is that the parser has to derive the type of the data and the size of arrays from the information provided in the rundeck. The following convention is used:
- if single quoted (') strings are present on the line then the data is of type character
- else if decimal points (.) are present then the data is real*8
- else the data is integer
- the number of data entries on the line specifies the dimension of the array; if only one data entry is present then the parameter is a scalar (or an array of dimension one, which is equivalent)
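That convention can be sketched as a toy Python function (an illustration of the rule, not the actual parser):

```python
def infer(value_text):
    """Infer the type and dimension of a rundeck value string using
    the convention above: quotes -> character, decimal point ->
    real*8, otherwise integer; the number of entries on the line
    gives the array dimension (1 means a scalar)."""
    if "'" in value_text:
        kind = 'character'
        dim = value_text.count("'") // 2   # number of quoted strings
    elif '.' in value_text:
        kind = 'real*8'
        dim = len(value_text.split())
    else:
        kind = 'integer'
        dim = len(value_text.split())
    return kind, dim

print(infer("1. 2. 3."))    # ('real*8', 3)
print(infer("42"))          # ('integer', 1)
print(infer("'on' 'off'"))  # ('character', 2)
```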
The library utilizes Fortran 90 interface blocks in such a way that the type of the parameters (integer, real*8, or character) is recognized automatically, so the names of the subroutines are the same for all types. If the dimension of the parameter is given then it is treated as an array; if the dimension is omitted then it is a scalar. Beware that Fortran 90 treats scalars and arrays of dimension one differently, so if you confuse them it may generate an error.
The formal arguments in the subroutines below are:
- name: character*(*) - the name of the parameter, a character string no longer than 32 bytes
- value: a scalar variable or a linear array of type integer, real*8, or character*1 to character*16
- dim: integer - the dimension of an array; should be omitted for scalars
- opt: character*1 - an optional "option" (e.g. opt='o' to overwrite)
- kunit: integer - unit number for reading/writing
- ovrwrt: logical - reading "overwrite" option
Simple copy routines to copy parameters to/from database:
set_param( name, value, dim, opt ) - puts a parameter with the name <name> and the value <value> into the database
get_param( name, value, dim ) - copies the value of the parameter <name> from the database to the variable <value>
These functions check whether the parameter is present in the database, and will generate an error if you are trying to get a parameter which is not in the database, or if you are trying to set a parameter which is already set. If you really need to overwrite a parameter which is already in the database, use opt='o'. Otherwise <opt> should be omitted.
Query logical function:
is_set_param( name ) - returns .true. if parameter <name> is present in the database, .false. otherwise
A convenient function which is a combination of those above:
sync_param( name, value, dim ) - literally consists of the following:

  if( is_set_param( name ) ) then
    call get_param( name, value, dim )
  else
    call set_param( name, value, dim )
  endif
So what it does is check whether parameter <name> is in the database and, if so, copy it to <value>. Otherwise it leaves <value> unchanged and copies it to the database under the name <name>. This function is provided as a convenient tool, since this is what you will do most often. At restart, each module will check whether certain parameters were provided in the rundeck or in the restart file (i.e. they are in the database already) and use those values. If not, it will use the default values and will also copy them to the database so that they are saved to the restart file for future reference.
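Treating the database as a dict, the logic of sync_param can be sketched in Python (names are invented for the sketch):

```python
database = {}

def sync_param(name, value):
    """If <name> is already in the database, use the stored value;
    otherwise register <value> as the default. Either way, the
    returned value and the database end up in agreement."""
    if name in database:
        return database[name]
    database[name] = value
    return value

dt = sync_param('dt', 450)        # not in database: default is stored
print(dt, database['dt'])         # 450 450
database['kocean'] = 1            # pretend this came from the rundeck
print(sync_param('kocean', 0))    # 1 (rundeck value wins over default)
```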
Subroutines to work with pointers:
alloc_param( name, pvalue, initval, dim ) - allocates space in the
database for the parameter <name>, fills it with data provided
in <initval> and returns pointer to it in <pvalue>
get_pparam( name, pvalue, dim ) - returns pointer <pvalue>
to the parameter data of the parameter <name> in the database
These functions are provided as an additional tool and are actually not needed if parameters in the database are treated as "parameters", i.e. set only once at the beginning of the run and then used as constants. But it appeared that some data which was in the "parameters" common block was changed during the run (some of it quite often, and in different places). To keep the parameter database up to date with those changes one would have to call "set_param" each time after the data is changed, which is rather awkward. Instead, one can use pointers to the parameter data in the database. Pointers in Fortran 90 are nothing but aliases of the objects they point to. So if you get a pointer to a parameter from the database and then work with it as if it were a regular variable, you will automatically update it in the database and there will be no need to call "set_param". So these subroutines can be used if one wants to keep a dynamic variable in the database (so that it is automatically saved to the restart file).
read_param( kunit, ovrwrt ) - reads the parameter database from the unit <kunit>
write_param( kunit ) - writes the parameter database to the unit <kunit>
kunit: integer - the unit number from/to which the reading/writing is performed
ovrwrt: logical - if .true. then reading overwrites those parameters that are already in the database; if .false. then those parameters which are already in the database are left unchanged and only new parameters are added
Other useful subroutines:
print_param( kunit ) - does formatted output to the unit <kunit> in a way similar to a namelist
kunit: integer, intent(in) - the unit number corresponding to the file to which we want to print the database
query_param( n, name, dim, ptype ) - returns the information about the parameter by its number <n> in the database. The name is always converted to lower case. It returns 'EMPTY' (upper case) in <name> if <n> doesn't correspond to any parameter (i.e. if <n> is bigger than the number of parameters in the database)
dim: integer, intent(out) - returns the dimension of the parameter (i.e. <dim> = 1 for scalars, <dim> > 1 for arrays)
ptype: character*1, intent(out) - returns the type of the parameter: 'i' for integer, 'r' for real*8, 'c' for character
parse_params( kunit ) - parses the information in the file <kunit> and loads the parameters into the database, overwriting the existing values
kunit: integer - the unit number corresponding to the file with the rundeck information (it should contain a block starting with &&PARAMETERS and ending with &&END_PARAMETERS)
This subroutine is actually located in a separate module PARSER, but it seemed logical to describe it here.
6) Any examples?
Here is an example of typical usage of "query_param". One should keep in mind that, though "get_param" is a generic interface, one should call it with arguments of the correct type to extract the information. That is why one needs to use "select case" below.
  subroutine ex_param
  ! this is an example subroutine which shows how to loop
  ! over all parameters in the database
  ! it does the same thing as print_param
  USE PARAM
  integer, parameter :: MAXDIM=64
  character*32 name
  integer n, i, dim
  character*1 ptype
  integer ic(MAXDIM)
  real*8 rc(MAXDIM)
  character*16 cc(MAXDIM)

  n = 1
  print *, 'printing parameter database'
  do
    call query_param( n, name, dim, ptype )
    if ( name == 'EMPTY' ) exit
    if ( dim > MAXDIM ) then
      print *, 'dim of param ', name, ' is > MAXDIM'
      stop 'MAXDIM too small'
    endif
    select case( ptype )
    case ('i')  ! integer
      call get_param( name, ic, dim )
      print *, name, ( ic(i), i=1, dim )
    case ('r')  ! real*8
      call get_param( name, rc, dim )
      print *, name, ( rc(i), i=1, dim )
    case ('c')  ! character
      call get_param( name, cc, dim )
      print *, name, ( cc(i), i=1, dim )
    end select
    n = n + 1
  enddo
  end subroutine ex_param
3. FAQs about Preprocessing instructions.
1) What is preprocessing?
The preprocessor is a program that runs before the actual compiler starts and does certain editing to the source code according to preprocessing instructions. All preprocessing instructions start with # in the first column. The most typical example of preprocessor usage in Fortran code would be:
  ............... some fortran code ..............
  #ifdef OPTION_A
  ............... fortran code specific for OPTION_A ..............
  #endif
  .............. more fortran code ..............
In the above example the code between #ifdef OPTION_A and #endif will be taken into account only if the name OPTION_A was defined (with the instruction #define OPTION_A) somewhere earlier in the file. Otherwise it will be treated as commented out. So the preprocessor allows you to optionally include/exclude certain parts of the code. There are also other useful preprocessing commands, like name substitution, though with name substitution one should be very careful in fixed format Fortran, since it is easy to create a line which after substitution will be longer than 72 characters. The following is a list of typical preprocessing instructions:

  #define #undef #include #ifdef #ifndef #if #else #endif

SGI manual pages provide good explanations of these instructions; on SGI, see the cpp man page.
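The mechanics of #define/#ifdef/#endif can be illustrated with a toy preprocessor in Python (a sketch of the idea only: no nesting, no #else, nothing like a real CPP):

```python
def preprocess(lines):
    """Minimal sketch: handle '#define NAME' and non-nested
    '#ifdef NAME ... #endif'; all other lines pass through
    whenever the current region is being kept."""
    defined = set()
    out = []
    keep = True
    for line in lines:
        if line.startswith('#define '):
            defined.add(line.split()[1])
        elif line.startswith('#ifdef '):
            keep = line.split()[1] in defined
        elif line.startswith('#endif'):
            keep = True
        elif keep:
            out.append(line)
    return out

src = ['#define OPTION_A',
       'common code',
       '#ifdef OPTION_A',
       'code for OPTION_A',
       '#endif',
       '#ifdef OPTION_B',
       'code for OPTION_B',
       '#endif',
       'more code']
print(preprocess(src))
# ['common code', 'code for OPTION_A', 'more code']
```

The OPTION_B block is dropped because OPTION_B was never defined, exactly as an undefined name causes code to be "commented out" above.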
2) How to set up global preprocessing options in a rundeck?
Your rundeck may contain an optional block which starts with a line Preprocessor Options and ends with a line End Preprocessor Options. Everything between those lines is treated as a set of preprocessing definitions and will be included into corresponding source files. Here is a simple example of preprocessing block:
  Preprocessor Options
  #define TRACERS_ON   ! include tracers code
  End Preprocessor Options
It defines the name TRACERS_ON. Text which starts with '!' is considered a comment and is ignored. Trailing spaces and empty lines are also ignored.
3) How to use global preprocessing options in a source file?
If you want to use global preprocessing options in a certain source file you should include the line:

  #include "rundeck_opts.h"

at the very beginning of that file. Otherwise all global preprocessing names will stay undefined (the compiler will not give any warnings about this). A typical use of a preprocessing option would be:
  #include "rundeck_opts.h"
  .......... some code ..........
  #ifdef TRACERS_ON
  .......... some tracers code here ..........
  #endif
  .......... some code ..........
The code between #ifdef TRACERS_ON and #endif will be included only when the global name TRACERS_ON is defined in the rundeck. Otherwise this code will be ignored by the compiler.
4) How does it work?
When the Makefile starts to compile the model it reads the global preprocessing options block from the rundeck and compares it to the contents of the file rundeck_opts.h. If they are identical, the file rundeck_opts.h is left unchanged; otherwise it is overwritten with the new preprocessing block. The file rundeck_opts.h should be included into each source file which wants to use the global preprocessing options. This is done by putting the line

  #include "rundeck_opts.h"

at the beginning of the source file. Note that one should use the CPP #include (which starts from the first column), and not the Fortran include. When the Makefile checks dependencies it takes into account the dependency on rundeck_opts.h, so that when rundeck_opts.h is changed, all files which #include it are automatically recompiled.
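The compare-before-overwrite step matters because rewriting rundeck_opts.h would bump its timestamp, and make would then recompile every dependent file even when nothing changed. The idea can be sketched in Python (a hypothetical helper, not the actual Makefile rule):

```python
import os

def update_opts_file(path, new_block):
    """Overwrite <path> with <new_block> only if the contents differ,
    so the file's timestamp (and hence make's dependency check) is
    left alone when the options have not changed."""
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == new_block:
                return False   # identical: leave the mtime untouched
    with open(path, 'w') as f:
        f.write(new_block)
    return True                # rewritten: dependents will recompile
```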
5) Any recommendations on usage?
There is an understanding that global preprocessing options should be used only when there is no other convenient way to reach the same goal. One should keep in mind that once the global preprocessing block in a rundeck is changed, all files that include rundeck_opts.h will be recompiled, which will most probably force a recompile of the entire model. So one should limit such options to those which do not change too often from rundeck to rundeck. This functionality is introduced mainly for options which are global (i.e. used in many source files at the same time) and which need to be documented (that's why this block is in the rundeck). A typical example would be an option which controls the inclusion of the tracers code into the model (as in the example above). There is also a general convention that names defined by the preprocessor are constructed of capital letters only, with '_' introduced for readability, e.g. USE_OPTION_A. I suggest that we observe it; this helps to distinguish such names from the rest of the code, which is usually in lower case. And yes, names used by the preprocessor are case-sensitive (unlike names in Fortran).