Friday 15 June 2012

python - IOError: Can't read data (Can't open directory) - Missing gzip compression filter




I have never worked with HDF5 files before, and to get started I received some example files. I've been checking out the basics with h5py, looking at the different groups in these files, their names, keys, values and so on. Everything works fine until I want to look at the datasets saved in the groups. I can get their .shape and .dtype, but when I try accessing a random value by indexing (e.g. grp["dset"][0]), I get the following error:

---------------------------------------------------------------------------
IOError                                   Traceback (most recent call last)
<ipython-input-45-509cebb66565> in <module>()
      1 print geno["matrix"].shape
      2 print geno["matrix"].dtype
----> 3 geno["matrix"][0]

/home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/dataset.pyc in __getitem__(self, args)
    443         mspace = h5s.create_simple(mshape)
    444         fspace = selection._id
--> 445         self.id.read(mspace, fspace, arr, mtype)
    446
    447         # Patch up the output for NumPy

/home/sarah/anaconda/lib/python2.7/site-packages/h5py/h5d.so in h5py.h5d.DatasetID.read (h5py/h5d.c:2782)()

/home/sarah/anaconda/lib/python2.7/site-packages/h5py/_proxy.so in h5py._proxy.dset_rw (h5py/_proxy.c:1709)()

/home/sarah/anaconda/lib/python2.7/site-packages/h5py/_proxy.so in h5py._proxy.H5PY_H5Dread (h5py/_proxy.c:1379)()

IOError: Can't read data (Can't open directory)

I've posted this problem in the h5py Google group, where it was suggested that there might be a filter on the dataset that I don't have installed. But the HDF5 file was created using gzip compression, which should be a portable standard, as far as I understood. Does anyone know what I might be missing here? I can't find a description of this error or of similar problems anywhere, and the file, including the problematic dataset, can be opened with the HDFView software.
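For context, a minimal sketch of the access pattern described above; the file name is a placeholder, and the variable and dataset names (geno, "matrix") are taken from the traceback:

import h5py

# Open one of the example files read-only ("example.h5" is hypothetical).
geno = h5py.File("example.h5", "r")

# Metadata access works fine:
print geno["matrix"].shape
print geno["matrix"].dtype

# ...but actually reading a value fails, because reading a chunked,
# compressed dataset has to run the decompression filter:
print geno["matrix"][0]    # IOError: Can't read data (Can't open directory)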

EDIT: Apparently, the error occurs because, for some reason, the gzip compression filter is not available on my system. If I try to create an example file with gzip compression, this happens:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-33-dd7b9e3b6314> in <module>()
      1 grp = f.create_group("subgroup")
----> 2 grp_dset = grp.create_dataset("dataset", (50,), dtype="uint8", chunks=True, compression="gzip")

/home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/group.pyc in create_dataset(self, name, shape, dtype, data, **kwds)
     92         """
     93
---> 94         dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds)
     95         dset = dataset.Dataset(dsid)
     96         if name is not None:

/home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/dataset.pyc in make_new_dset(parent, shape, dtype, data, chunks, compression, shuffle, fletcher32, maxshape, compression_opts, fillvalue, scaleoffset, track_times)
     97
     98     dcpl = filters.generate_dcpl(shape, dtype, chunks, compression, compression_opts,
---> 99                                  shuffle, fletcher32, maxshape, scaleoffset)
    100
    101     if fillvalue is not None:

/home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/filters.pyc in generate_dcpl(shape, dtype, chunks, compression, compression_opts, shuffle, fletcher32, maxshape, scaleoffset)
    101
    102     if compression not in encode:
--> 103         raise ValueError('Compression filter "%s" is unavailable' % compression)
    104
    105     if compression == 'gzip':

ValueError: Compression filter "gzip" is unavailable

Does anyone have experience with that? The installation of the HDF5 library and of the h5py bundle didn't seem to go wrong...
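A quick way to confirm the diagnosis is to ask the HDF5 library that h5py was built against whether the gzip (DEFLATE) filter is registered at all, via h5py's low-level h5z module. A minimal diagnostic sketch, not part of the original question:

from h5py import h5z

# gzip compression is HDF5's DEFLATE filter; filter_avail() reports
# whether the linked HDF5 library has it registered.
print h5z.filter_avail(h5z.FILTER_DEFLATE)    # False on the broken setup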

(I can't comment; my reputation is too low.)

I had the same issue. I ran "conda update anaconda" and the problem was gone.
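After the update, a small round trip like the following sketch (the file name is hypothetical) should run without errors if the gzip filter is available again:

import numpy as np
import h5py

# Write a small gzip-compressed dataset, then read it back.
with h5py.File("gzip_test.h5", "w") as f:
    f.create_dataset("dataset", data=np.arange(50, dtype="uint8"),
                     chunks=True, compression="gzip")

with h5py.File("gzip_test.h5", "r") as f:
    print f["dataset"][0]    # prints 0; no IOError once the filter is present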

python linux hdf5 anaconda h5py
