Welcome to the Climate Data Operators

CDO is a large tool set for working on climate and NWP model data. NetCDF 3/4, GRIB 1/2 including SZIP (or AEC) and JPEG compression, EXTRA, SERVICE and IEG are supported as IO formats. Apart from that, CDO can be used to analyse any kind of gridded data, not only data from climate science.
CDO has very small memory requirements and can process files larger than the physical memory.

CDO is open source and released under the 3-clause BSD License (BSD-3-Clause).

Documentation

The CDO User Guide is available as PDF and HTML.

There is no man page, since the operator descriptions are built into the interpreter:

cdo -h [operator]

More documentation is available online.

How to Get Help

We encourage users to use both the Forums and the Issues tracking system. If you are not sure whether to use the Forums or the Issues list, start with the Forums, especially the Support list.

To be most helpful, we recommend the following:

  • Please include the version of CDO in your postings. Use
    cdo -V
  • If possible, check your calls with the latest CDO release - the problem may already be solved
  • Input files: Almost all problems have to do with files. To reproduce such problems, the original data is needed. This does not mean that the full data set is needed. Please use the following methods to shrink the data before uploading (see the sketch after this list):
    • Select the single variable from the file that causes the problem. This can be done with the CDO operator selvar.
    • If your problem arises with a single timestep, send us only that one! Use operators from the seltime module.
    • Remap to a coarse grid with CDO's remapping facilities.
    • If the file is still too large, there might be a public FTP server where you can upload it.
    • If the data cannot be uploaded, include the output of ncdump -h for your input NetCDF data.
  • Building problems: Please submit the config.log file created by the configure call.
  • No cross-posting: Do NOT create Forums entries and Issues on the same topic. This is annoying and does not solve your problem.
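
A sketch of such a shrinking pipeline, assuming a NetCDF input file ifile.nc with a problematic variable named tas (file and variable names are placeholders; selname is the current name for variable selection):

cdo -V                                 # report the CDO version for the posting
cdo selname,tas ifile.nc var.nc        # keep only the problematic variable
cdo seltimestep,1 var.nc step1.nc      # keep only the first timestep
cdo remapbil,r36x18 step1.nc small.nc  # remap to a coarse predefined global grid
ncdump -h small.nc                     # header dump, if uploading is impossible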

To see if there is already an answer to your question, you can search the FAQ, the Forums and Issues lists, and the internet.

Installation and Supported Platforms

CDO should compile easily on every POSIX-compatible operating system like IBM's AIX, HP-UX and Sun's Solaris, as well as on most Linux distributions, BSD variants and Cygwin. Thus it is possible to use CDO in the same way on general-purpose PCs and Unix-based high-performance clusters.
In the HPC case it is quite common to install software via source code compilation, because these machines tend to be highly tuned beasts. Special libraries, special compilers and special directories make binary software delivery simply useless, even if the operating system supports a package management system like rpm (e.g. AIX). That is why CDO uses a customisable build process with autoconf and automake. For more commonly used Unix systems, some progress has been made to ease the installation of CDO. Further information can be found here:

External links:

Download / Compile / Install

CDO is distributed as source code - it has to be compiled and installed by the user. Please download the current release from here. For high portability, CDO is built with autotools. After unpacking the archive, check all configure options with

./configure --help

The most important options are described in the manual. Some functionality (e.g. IO formats) will only be available when CDO is built/linked against the corresponding library. If you need to install those libraries too, you may consider using libs4cdo, a preconfigured package which contains all external functionality for CDO. After successful configuration, type

make && make install
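
A minimal sketch of such a build with NetCDF support; the installation prefix and the NetCDF root directory are placeholders and must match your system:

./configure --prefix=$HOME/cdo --with-netcdf=/usr/local
make && make install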

Common Issues and Known Problems

Segfault with netcdf4 files

NetCDF4 is based on the HDF5 library, which can be built thread-safe or non-thread-safe. Depending on this, concurrent IO on NetCDF4 files (like in operator chains) may lead to segmentation faults in the underlying HDF5 library. Because CDO has to deal with whatever HDF5 installation is on the target system, there is a special CDO command line option for the serialization of IO: -L. Please add it to your CDO calls accordingly!

make check fails with tsformat.test.8 - Errors with operator chaining and netCDF4/HDF5 files

CDO is a multi-threaded application. When chaining operators, possibly all operators are running in parallel on different threads. Therefore all external libraries should be compiled thread-safe. Using non-thread-safe libraries can cause unexpected errors! Especially netCDF4 (HDF5) in combination with operator chaining can cause problems if the HDF5 library is not compiled thread-safe.

If you compile CDO yourself, you can check this by running

make check

Usually the test called tsformat.test number 8 will fail on systems with a non-thread-safe HDF5 installation.
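
Alternatively, you can query the HDF5 installation directly; a sketch, assuming the h5cc compiler wrapper of that installation is in your PATH and reports a Threadsafety line in its configuration summary:

h5cc -showconfig | grep -i threadsafety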

The runtime errors could vary for different runs. Typical error messages are:

Error (xxx) : NetCDF: HDF error
cdo(xxx) malloc: *** error for object xxx: pointer being freed was not allocated
segmentation fault (core dumped)
Bus error (core dumped)

A workaround is to change the output file format to standard NetCDF:

cdo -f nc fldmean -selname,XX ifile.nc4 ofile.nc

Since CDO version 1.5.8 you can lock the I/O with the option -L. This will serialize all I/O accesses:

cdo -L fldmean -selname,XX ifile.nc4 ofile.nc4

netCDF with packed data

Packing reduces the data volume by reducing the precision of the stored numbers. In NetCDF it is implemented using the attributes add_offset and scale_factor. CDO supports NetCDF files with packed data but cannot automatically repack the data. That means the attributes add_offset and scale_factor are never changed. If you are using a CDO operator which changes the range of the data, you also have to make sure that the modified data can still be packed with the same add_offset and scale_factor. Otherwise the result could be wrong. You will get the following error message if some data values are out of the range of the packed data type:

Error (cdf_put_vara_double) : NetCDF: Numeric conversion not representable

In this case you have to change the data type to single or double precision floating point. This can be done with the CDO option -b F32 or -b F64.
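
For example, a sketch with placeholder file names, doubling a packed variable while forcing 32-bit float output:

cdo -b F32 mulc,2 ifile.nc ofile.nc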

As of CDO release 2.3.0, NetCDF data is always written out unpacked if the operator changes the range of the data. Use the new operator pack to repack the data if required:

cdo pack -<operator> infile outfile

Wrong result of binary operation for missing value of 0 or 1

Be careful when using binary operations (modules: Expr, Cond, Comp, Compc, ...) on data with a missing value of 0 or 1.
The result will be wrong in most cases, because it is impossible to distinguish such a missing value from the result of the operation.
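
One possible workaround, sketched here with placeholder names and a threshold chosen for illustration, is to move the missing value out of the result range before the operation:

cdo setmissval,-9e33 ifile.nc tmp.nc   # assumes -9e33 never occurs as a valid data value
cdo gtc,273.15 tmp.nc ofile.nc         # the 0/1 result is now distinguishable from the missing value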

Lost netCDF variables/dimensions after processing with CDO

CDO processes only the data variables and the associated coordinate variables of a netCDF file. All coordinate
variables and dimensions that are not assigned to a data variable will be lost after processing with CDO!

Static build with netcdf 4.x incl. dap

For a static binary linked to a NetCDF 4.1.1 default installation, the dependencies of DAP have to be added manually, because nc-config does not keep track of them. Add

LIBS='-lcurl -lgssapi_krb5 -lssl -lcrypto -ldl -lidn -ldes425 -lkrb5 -lk5crypto -lcom_err -lkrb5support -lresolv'

to the ./configure call. You may need the shared runtime environment of your compiler; for gcc, add -lgcc_s to LIBS. If this does not work, dependencies can be checked through the package management or with ldd, if a shared version is available. Kerberos-related bindings are described by the krb5-config script. Like CDO itself, NetCDF uses libtool for building. It keeps track of further dependencies and uses runtime library paths for linking to shared libs. That is why it is recommended to use shared instead of static linking.

GRIB1 encoding/decoding

To encode and decode GRIB1 records, CDO uses the internal library cgribex. This is a lightweight version of the ECMWF GRIBEX library. cgribex is highly optimized for ECHAM data and doesn't support the full GRIB1 standard. Even when CDO is configured with the ecCodes library, cgribex is still used for GRIB1 data.
To use ecCodes to encode/decode GRIB1 data, either disable cgribex during configuration (--disable-cgribex) or use the CDO option --eccodes.
If you encounter a problem with the cgribex library, use ecCodes via the CDO option --eccodes.
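
For example, a sketch with placeholder file names, copying a GRIB1 file with ecCodes instead of cgribex:

cdo --eccodes copy ifile.grb ofile.grb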

SZIP compressed GRIB1 files

SZIP compression of GRIB1 records is a local extension to the GRIB standard at the MPI for Meteorology. SZIP-compressed GRIB1 records can only be decoded correctly with tools from the MPI for Meteorology (e.g. CDO). It is recommended neither to share nor to generate those files outside the MPI for Meteorology!

CDO Mailing Lists

There is no CDO mailing list anymore. We use a newsfeed for announcing releases.

Using CDO at MPIM and DKRZ

CDO is installed on the computer systems of MPIM and DKRZ.
The latest and all previously installed CDO versions are available via the module system. Use

module load cdo/2.X.Y

to load CDO version 2.X.Y, or

module load cdo

to load the latest version.