Welcome to the Climate Data Operators

CDO is a large tool set for working on climate and NWP model data. NetCDF 3/4, GRIB 1/2 (including SZIP and JPEG compression), EXTRA, SERVICE and IEG are supported as I/O formats. Apart from that, CDO can be used to analyse any kind of gridded data, even if it is not related to climate science.
CDO has very small memory requirements and can process files larger than the physical memory.

CDO is open source and released under the terms of the GNU General Public License v2 (GPL).

Documentation

There are operators for the following topics:
  • File information and file operations
  • Selection and Comparison
  • Modification of meta data
  • Arithmetic operations
  • Statistical analysis
  • Regression and Interpolation
  • Vector and spectral Transformations
  • Formatted I/O
  • Climate indices

Full documentation is available as HTML or PDF. For an overview, see the operator catalogue or the alphabetical list.

There is no man-page since operator descriptions are built into the interpreter:

cdo -h [operator]

More documentation is available via external links.

Open Development

We encourage users to use both the forum and the issue tracking system. To be most helpful, we recommend the following:

  • Include the version of CDO in your postings. Use
    cdo -V
  • If you are not sure whether to use the forum or the issue tracker, start with the forum. This will help clarify things.
  • Do not create forum entries and issues on the same topic. This is annoying and does not solve your problem.
  • Almost all problems have to do with files; NetCDF files in particular can be very specific. To track down such problems, the original data is needed. This does not mean that the full data set is needed. Please use the following methods to shrink the data before uploading (see the example commands after this list):
    • Select the single variable from the file which causes the problem. This can be done with CDO using selvar.
    • If your problem arises with a single timestep, send us only that one! Use operators from the seltime module.
    • Remap to a coarse grid with CDO's remapping facilities.
    • If the file is still too large, there might be a public FTP server where you can upload it.
    • If the data cannot be uploaded, include the output of ncdump -h for your input NetCDF data.
  • For tracking down errors during the configuration, it is good to have the config.log file.
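
A sketch of how the shrinking steps might look; the file names, the variable name tsurf and the target grid r72x36 are only placeholders:

cdo selvar,tsurf in.nc var.nc            # select one problematic variable
cdo seltimestep,1 var.nc var_t1.nc       # keep only the first timestep
cdo remapbil,r72x36 var_t1.nc small.nc   # remap to a coarse global grid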

Supported Platforms

CDO should compile easily on every POSIX-compatible operating system such as IBM's AIX, HP-UX and Sun's Solaris, as well as on most Linux distributions, BSD variants and Cygwin. It is thus possible to use CDO in much the same way on general-purpose PCs and on Unix-based high-performance clusters.
On HPC systems it is quite common to install software by compiling the source code, because these machines tend to be highly tuned: special libraries, special compilers and special directory layouts make binary software delivery of little use, even if the operating system supports a package management system like rpm (e.g. AIX). That is why CDO uses a customisable build process based on autoconf and automake. For more commonly used Unix systems, some progress has been made to ease the installation of CDO. Further information can be found here.

Download / Compile / Install

CDO is distributed as source code - it has to be compiled and installed by the user. Please download the current release from here. For high portability CDO is built with autotools. After unpacking the archive, check all configure options with

./configure --help

The most important options are described in the manual. Some functionality (e.g. certain I/O formats) is only available when CDO is built/linked against the corresponding library. If you need to install those libraries as well, you may consider using libs4cdo, a preconfigured package which contains all external functionality for CDO. After successful configuration, type

make && make install
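
A typical configure call might look like the following sketch; the installation prefix and the library paths are placeholders, and the exact option names should be checked against the output of ./configure --help for your CDO version:

./configure --prefix=$HOME/cdo \
            --with-netcdf=/path/to/netcdf \
            --with-hdf5=/path/to/hdf5 \
            --with-zlib=/path/to/zlib
make && make install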

Known Problems

netCDF with packed data

Packing reduces the data volume by reducing the precision of the stored numbers. In netCDF it is implemented via the attributes add_offset and scale_factor. CDO supports netCDF files with packed data but cannot automatically repack the data; the attributes add_offset and scale_factor are never changed. If you use a CDO operator which changes the range of the data, you also have to make sure that the modified data can still be packed with the same add_offset and scale_factor. Otherwise you have to change the data type to single or double precision floating point. This can be done with the CDO option -b F32 or -b F64.
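
For example, if an operation shifts the values out of the range covered by the original add_offset and scale_factor, writing the result as 32-bit floats avoids the problem. A sketch, with the addc operator and the file names chosen only for illustration:

cdo -b F32 addc,100 in.nc out.nc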

Lost netCDF variables/dimensions after processing with CDO

CDO processes only the data variables and the corresponding coordinate variables of a netCDF file. All coordinate variables and dimensions which are not assigned to a data variable will be lost after processing with CDO!
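
To see in advance which data variables CDO will process, the file can be inspected, for example with:

cdo sinfon in.nc

Only the data variables shown there, together with their coordinates, are carried over to the output file.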

Static build with netCDF 4.x incl. DAP

For a static binary linked against a default netCDF 4.1.1 installation, the dependencies of DAP have to be added manually, because nc-config does not keep track of them. Add

LIBS='-lcurl -lgssapi_krb5 -lssl -lcrypto -ldl -lidn -ldes425 -lkrb5 -lk5crypto -lcom_err -lkrb5support -lresolv'
to the ./configure call. You may also need the shared runtime environment of your compiler; for gcc, add -lgcc_s to LIBS. If this does not work, the dependencies can be checked through the package management system or with ldd, if a shared version of the library is available. The Kerberos-related linker flags are described by the krb5-config script. Like CDO itself, netCDF uses libtool for building; libtool keeps track of further dependencies and uses runtime library paths when linking to shared libraries. That is why it is recommended to use shared instead of static linking.
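
For example, the configure call might then look like the following sketch (the netCDF path and the exact list of libraries depend on your system):

./configure --with-netcdf=/path/to/netcdf \
    LIBS='-lcurl -lgssapi_krb5 -lssl -lcrypto -ldl -lidn -ldes425 -lkrb5 -lk5crypto -lcom_err -lkrb5support -lresolv -lgcc_s'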

EXTRA formatted files with mixed precision

The EXTRA format has a header section with 4 integer values, followed by the data section. The header and the data section can have an accuracy of 4 or 8 bytes (single or double precision). There is no real standard for the EXTRA format, but the header and the data section should have the same precision. Since version 1.4.2, CDO can no longer process an EXTRA file with a header precision of 4 bytes and a data precision of 8 bytes.
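
When writing EXTRA files with CDO, a uniform precision for the output can be requested explicitly with the -b option, so that mixed-precision files are avoided in the first place. A sketch, with placeholder file names:

cdo -f ext -b F64 copy in.nc out.ext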

CDO Mailing Lists

Two electronic mailing lists are available for users to subscribe to:

  • cdo-announce
    is a read-only low volume list for important announcements and new release information about CDO.
  • cdo-intern
    is a read-only low volume list for announcements of new CDO installations at MPIM and DKRZ.

You can subscribe to the lists by filling out the form on the following web pages:

https://lists.mpimet.mpg.de/mailman/listinfo/cdo-announce
https://lists.mpimet.mpg.de/mailman/listinfo/cdo-intern

We also use a newsfeed for announcing releases.

Using CDO at MPIM and DKRZ

Users at MPIM and DKRZ find the executable (cdo) of the installed CDO version in /client/bin.
The following machines are supported:
Site   Machine                    System
DKRZ   IBM HRLE2 (blizzard)       aix-6.1.0
DKRZ   Linux Server (lizard)      rhel55-x64
ZMAW   Linux Cluster (thunder)    squeeze-x64 (x86_64)
MPIM   Linux Cluster (squall)     lenny-x64 (x86_64)

The latest and all previously installed CDO versions are available via the module system. Use

module load cdo/1.X.Y
to load CDO version 1.X.Y, or
module load cdo
to load the latest version.