Large memory usage with HDF5 input files
When a field file has been written by a single process (or a few processes) and is later read by a large number of processes, a large amount of memory is used. The reason is that the smallest "unit" a single process can read from the HDF5 file is the data written by one of the processes that wrote it. Therefore, if a single process wrote the entire HDF5 field file, every reading process must read the entire file to obtain the information it needs. Most of this data is subsequently discarded, but before that happens the aggregate memory footprint is very large: with P reading processes, the peak memory is roughly P times the file size rather than one file size spread across the readers. A solution to this problem was attempted in !1351 (merged).
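The scale of the problem can be illustrated with a small back-of-the-envelope sketch (the function name and the example sizes here are purely illustrative, not part of any actual I/O code): it compares the peak aggregate memory when every reader must load the whole file against the ideal case where the file is evenly partitioned across the readers.

```python
# Peak aggregate memory (in GB) across all reading processes.
def aggregate_read_memory_gb(file_size_gb, n_readers, partitioned):
    """Model peak aggregate read memory.

    partitioned=False models the problematic case described above:
    the file was written by one process, so each of the n_readers
    must temporarily hold the entire file.

    partitioned=True models the ideal case: each reader only ever
    holds its own 1/n_readers share of the file.
    """
    if partitioned:
        # n_readers * (file_size_gb / n_readers) == file_size_gb
        return file_size_gb
    # Every reader holds a full copy of the file simultaneously.
    return file_size_gb * n_readers


# Hypothetical example: a 10 GB field file read by 1024 processes.
print(aggregate_read_memory_gb(10, 1024, partitioned=False))  # 10240 GB peak
print(aggregate_read_memory_gb(10, 1024, partitioned=True))   # 10 GB peak
```

The gap grows linearly with the number of readers, which is why the problem only becomes visible at scale.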