This repository contains Python 2/3 readers for interacting with Sigproc filterbank (.fil), HDF5 (.h5) and guppi raw (.raw) files, as used in the Breakthrough Listen search for intelligent life.
Sometimes the pip installation can fail if a system dependency is missing. To fix this, make sure you have curl installed, then install the required system dependencies with the command below:
curl https://raw.githubusercontent.com/UCBerkeleySETI/blimpy/master/dependencies.txt | xargs -n 1 sudo apt install --no-install-recommends -y
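The pipeline above downloads the dependency list and hands each package name to apt one invocation at a time. As a quick illustration of how `xargs -n 1` splits its input (using `echo` in place of `apt install`, with made-up package names):

```shell
# Each whitespace-separated item becomes one separate command invocation.
printf 'git curl hdf5-tools\n' | xargs -n 1 echo installing
# prints:
#   installing git
#   installing curl
#   installing hdf5-tools
```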
The latest release can be installed via pip directly from this repository:
python3 -m pip install -U git+https://github.com/UCBerkeleySETI/blimpy
Alternatively, the latest development code can be cloned from the GitHub repo and installed with python setup.py install or pip install . (with sudo if required), or installed directly with the following terminal command:
python3 -m pip install -U https://github.com/UCBerkeleySETI/blimpy/tarball/master
To install everything required to run the unit tests, run:
python3 -m pip install -e .[full]
You will need numpy, h5py, astropy, scipy, and matplotlib as dependencies. A pip install should pull in numpy, h5py, and astropy, but you may still need to install scipy and matplotlib separately. To interact with compressed files, you'll also need the hdf5plugin package.
Note that h5py generally needs to be installed in this way:
$ python3 -m pip install --no-binary=h5py h5py
After installation, the following command will display the metadata (header) values and some information about the data matrix:
watutil -i
Other command line utilities available post-installation:
bl_scrunch: Scrunch an HDF5 file (.h5) or a Sigproc Filterbank file (.fil) to an output HDF5 file. This has the effect of down-sampling adjacent fine frequency cells.
bldice: Extract a smaller frequency region from a Filterbank file (.fil or .h5 file).
calcload: Calculate the Waterfall max_load value needed to load the entire data array for a given Filterbank file.
dsamp: Down-sample (time-dimension only) from one Filterbank file to another.
fil2h5: Convert a .fil file into .h5 format.
h52fil: Convert an .h5 file into .fil format.
peek: Display a selected portion of values from the data matrix of a Filterbank file.
rawhdr: Display the header fields of a raw guppi file.
rawutil: Plot data in a guppi raw file.
srcname: Patch the header source_name field in a .h5 file.
stax: For a collection of .h5 or .fil files sharing the same frequency range, create a vertical stack of waterfall plots as a single PNG file.
stix: For a single very large Filterbank file, create a horizontal or vertical stack of waterfall plots as a single PNG file.
watutil: Information/read/write/plot utility for Filterbank files.
Use the -h flag with any of the above command line utilities to display their available arguments.
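The frequency "scrunching" performed by bl_scrunch can be illustrated with a toy numpy sketch (this is not bl_scrunch's actual implementation, just the idea): groups of adjacent fine channels are averaged together, reducing frequency resolution while improving sensitivity per cell.

```python
import numpy as np

# Toy spectrum with 8 fine frequency channels.
spectrum = np.arange(8, dtype=float)  # [0., 1., ..., 7.]

# Scrunch by a factor of 4: average each group of 4 adjacent channels.
factor = 4
scrunched = spectrum.reshape(-1, factor).mean(axis=1)

print(scrunched)  # [1.5 5.5]
```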
The blimpy.Waterfall class provides a Python API for interacting with filterbank data. It supports all BL filterbank data products; see this example Jupyter notebook for an overview.
From the Python, IPython, or Jupyter notebook environments:
from blimpy import Waterfall
fb = Waterfall('/path/to/filterbank.fil')
#fb = Waterfall('/path/to/filterbank.h5') #works the same way
fb.info()
data = fb.data
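The frequency axis of the data matrix can be reconstructed from the standard Sigproc header fields fch1 (frequency of the first channel), foff (channel width, often negative), and nchans. A minimal sketch using made-up header values (real files carry these in fb.header):

```python
import numpy as np

# Hypothetical Sigproc header values for illustration only.
fch1 = 1500.0    # frequency of the first channel, MHz
foff = -0.5      # channel width, MHz (negative: frequencies descend)
nchans = 4       # number of frequency channels

freqs = fch1 + foff * np.arange(nchans)
print(freqs)  # [1500.  1499.5 1499.  1498.5]
```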
The Guppi Raw format can be read using the GuppiRaw class from guppi.py:
from blimpy import GuppiRaw
gr = GuppiRaw('/path/to/guppirawfile.raw')
header, data = gr.read_next_data_block()
or
from blimpy import GuppiRaw
gr = GuppiRaw('/path/to/guppirawfile.raw')
for header, data_x, data_y in gr.get_data():
    # process each block of data here
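Raw voltages are complex-valued per polarization, so a common first processing step is forming total power (Stokes I) as the sum of squared magnitudes of the two polarizations. A sketch with tiny synthetic arrays standing in for the data_x and data_y blocks above:

```python
import numpy as np

# Synthetic stand-ins for one block of dual-polarization complex voltages.
data_x = np.array([1 + 1j, 2 + 0j], dtype=np.complex64)
data_y = np.array([0 + 1j, 1 + 1j], dtype=np.complex64)

# Total power per sample: |x|^2 + |y|^2.
power = np.abs(data_x) ** 2 + np.abs(data_y) ** 2
print(power)  # [3. 6.]
```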
Note: most users should start analysis with filterbank files, which are smaller in size and have been generated from the guppi raw files.
The blimpy images are pushed to a public repository after each successful build on Travis. If you have Docker installed, you can run the following commands to pull our images, which have the environment and dependencies set up for you.
docker pull fx196/blimpy:py3_kern_stable
Here is a more complete guide on using blimpy in Docker.
A detailed overview of the data formats used in Breakthrough Listen can be found in our data format paper. An archive of data files from the Breakthrough Listen program is provided at seti.berkeley.edu/opendata.