Welcome to the Climate Data Operators¶
- Welcome to the Climate Data Operators
- Open Development or How to Get Help
- Installation and Supported Platforms
- Download / Compile / Install
- Known Problems
- CDO Mailing Lists
- Using CDO at MPIM and DKRZ
CDO is a large tool set for working on climate and NWP model data. NetCDF 3/4, GRIB 1/2 (including SZIP and JPEG compression), EXTRA, SERVICE and IEG are supported as I/O formats. Apart from that, CDO can be used to analyse any kind of gridded data not related to climate science.
CDO has very small memory requirements and can process files larger than the physical memory.
CDO is open source and released under the terms of the GNU General Public License v2 (GPL).
There is no man-page since operator descriptions are built into the interpreter:
cdo -h [operator]

More documentation is available:
- ECA Climate Indices Package
- Tutorial, continuously under development
- Tutorial de CDO em português Autor: Guilherme Martins
- Reference card, for everyone with a really large coffee mug
- OpenMP support
- Citing CDO
Open Development or How to Get Help¶
We encourage users to use both the forum and the issue tracking system. To be most helpful, we recommend the following:
- Include the version of CDO in your postings. Use cdo -V to print it.
- If you are not sure whether to use the forum or the issue tracker, use the forum first. This will clarify things.
- Do not create a forum entry and an issue on the same topic. This is annoying and does not solve your problem.
- Almost all problems have to do with files, and NetCDF files in particular can be very specific. To track down such problems, the original data is needed. This does not mean that the full data set is needed. Please use the following methods to shrink the data before uploading:
- Select the single variable from the file that causes the problem. This can be done with CDO using selvar.
- If your problem arises with a single timestep, send us only one! Use operators from the seltime group.
- Remap to a coarse grid with CDO's remapping facilities.
- If the file is still too large, there might be a public ftp server, where you can upload it.
- If the data cannot be uploaded, include the output of ncdump -h for your input NetCDF data.
- For tracking down errors during the configuration, it's good to have the config.log file.
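The shrinking steps above can be sketched as a short shell session. The variable name tas and the file names are placeholders for your own data, and the target grid is just one possible choice:

```shell
# Extract only the problematic variable (here assumed to be called "tas")
cdo selname,tas input.nc tas_only.nc

# Keep only the first timestep
cdo seltimestep,1 tas_only.nc tas_step1.nc

# Remap to a coarse global 10-degree lon/lat grid to shrink the file further
cdo remapbil,r36x18 tas_step1.nc tas_small.nc

# If even that is too large to upload, attach the header only
ncdump -h input.nc > header.txt
```
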
Installation and Supported Platforms¶
CDO should compile easily on every POSIX-compatible operating system, e.g. IBM's AIX, HP-UX and Sun's Solaris, as well as on most Linux distributions, BSD variants and Cygwin. Thus it is possible to use CDO similarly on general-purpose PCs and Unix-based high-performance clusters.
In the HPC context it is quite common to install software via source code compilation, because these machines tend to be highly tuned: special libraries, special compilers and special directories make binary software delivery useless, even if the operating system supports a package management system like rpm (e.g. AIX). That's why CDO uses a customisable build process based on autoconf and automake. For more commonly used Unix systems, some progress has been made to ease the installation of CDO. Further information can be found here:
Download / Compile / Install¶
CDO is distributed as source code; it has to be compiled and installed by the user. Please download the current release from here. For high portability CDO is built with the autotools. After unpacking the archive, check all configure options with ./configure --help.
The most important options are described in the manual. Some functionality (e.g. I/O formats) will only be available when CDO is built/linked against the corresponding library. If you need to install those libraries too, you may consider using libs4cdo, a preconfigured package which contains all external functionality for CDO. After successful configuration type
make && make install
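A typical build from source might look like the following sketch. The install prefix and the netCDF path are examples, not requirements; adjust them to your system:

```shell
# Unpack the downloaded release archive
tar -xzf cdo-current.tar.gz
cd cdo-*

# List all available configure options
./configure --help

# Configure with netCDF support; paths are placeholders for your installation
./configure --prefix=$HOME/cdo --with-netcdf=/usr/local

# Build and install
make && make install
```
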
Errors with operator chaining and netCDF4/HDF5 files¶
CDO is a multi-threaded application. When operators are chained, they all run in parallel on different threads. Therefore all external libraries should be compiled thread-safe; using non-thread-safe libraries can cause unexpected errors. In particular, netCDF4 (HDF5) in combination with operator chaining causes problems if the HDF5 library is not compiled thread-safe.
The errors can vary between runs. Typical error messages are:

Error (xxx) : NetCDF: HDF error
cdo(xxx) malloc: *** error for object xxx: pointer being freed was not allocated
Segmentation fault (core dumped)
Bus error (core dumped)

A workaround is to change the output file format to standard netCDF:

cdo -f nc fldmean -selname,XX ifile.nc4 ofile.nc

Since CDO version 1.5.8 you can lock the I/O with the option -L. This will serialize all I/O accesses.
cdo -L fldmean -selname,XX ifile.nc4 ofile.nc4
netCDF with packed data¶
Packing reduces the data volume by reducing the precision of the stored numbers. In netCDF it is implemented using the attributes add_offset and scale_factor. CDO supports netCDF files with packed data but cannot automatically repack the data; that is, the attributes add_offset and scale_factor are never changed. If you use a CDO operator which changes the range of the data, you also have to make sure that the modified data can still be packed with the same add_offset and scale_factor. Otherwise the result could be wrong. You will get the following error message if some data values are out of the range of the packed data type:
Error (cdf_put_vara_double) : NetCDF: Numeric conversion not representable

In this case you have to change the data type to single or double precision floating-point. This can be done with the CDO option -b F32 or -b F64.
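As an illustration of the problem: an operation that shifts the value range, such as adding a constant, can push results outside the range representable with the file's packing parameters. The file names and the constant below are made up:

```shell
# May fail with "Numeric conversion not representable" if the shifted
# values no longer fit the packed (short integer) value range:
cdo addc,100 packed.nc ofile.nc

# Writing the result as 32-bit floats sidesteps the packing limits
cdo -b F32 addc,100 packed.nc ofile.nc
```
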
Lost netCDF variables/dimensions after processing with CDO¶
CDO processes only the data variables and the corresponding coordinate variables of a netCDF file. All coordinate variables and dimensions which are not assigned to a data variable will be lost after processing with CDO!
Static build with netcdf 4.x incl. dap¶
For a static binary linked against a default netCDF 4.1.1 installation, the DAP dependencies have to be added manually, because nc-config does not keep track of them. Add

LIBS='-lcurl -lgssapi_krb5 -lssl -lcrypto -ldl -lidn -ldes425 -lkrb5 -lk5crypto -lcom_err -lkrb5support -lresolv'

to the ./configure call. You may also need the shared runtime environment of your compiler; for gcc, add -lgcc_s to LIBS. If this does not work, dependencies can be checked through the package management or with ldd, if a shared version is available. Kerberos-related linker flags are described by the krb5-config script. Like CDO itself, netCDF uses libtool for building, which keeps track of further dependencies and uses runtime library paths for linking to shared libs. That's why it is recommended to use shared instead of static linking.
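Checking the dependencies with the mentioned tools might look like this sketch (the cdo path is whatever your shell resolves; krb5-config must be installed):

```shell
# Show which shared libraries a dynamically linked cdo binary would load
ldd "$(command -v cdo)"

# Print the Kerberos-related linker flags for the LIBS variable
krb5-config --libs
```
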
EXTRA formatted files with mixed precision¶
The EXTRA format has a header section with four integer values followed by the data section. Both sections can have an accuracy of 4 or 8 bytes (single or double precision). There is no real standard for the EXTRA format, but the header and data sections should have the same precision. Since version 1.4.2, CDO cannot process an EXTRA file with a header precision of 4 bytes and a data precision of 8 bytes.
CDO Mailing Lists¶
Two electronic mailing lists are available for users to subscribe to:
is a read-only low volume list for important announcements and new release information about CDO.
is a read-only low volume list for announcements of new CDO installations at MPIM and DKRZ.
You can subscribe to the lists by filling out the form on the following web pages:
We also use a newsfeed for announcing releases.
Using CDO at MPIM and DKRZ¶
Users at MPIM and DKRZ find the executable (cdo) of the installed CDO version in
The following machines are supported:
|MPIM|Linux Cluster (thunder)|squeeze-x64 (x86_64)|
The latest and all previously installed CDO versions are available through the module system. Use
module load cdo/1.X.Y

to load CDO version 1.X.Y, or

module load cdo

to load the latest version.