This single-cell PyTorch dataloader / Lightning DataModule is designed to be used with lamindb and scPRINT. It allows you to:
- load thousands of datasets containing millions of cells in a few seconds.
- preprocess the data per dataset and download it locally (normalization, filtering, etc.).
- create more complex single-cell datasets.
- extend it to your needs.
It is built on top of lamindb and the `.mapped()` function by Sergey: https://github.com/Koncopd.
The package has been designed together with the scPRINT paper and model.
I created this dataloader for my PhD project, and I use it to load and preprocess thousands of datasets containing millions of cells in a few seconds. I believed that people applying AI to single-cell RNA-seq and other sequencing datasets would want such a tool, which did not exist at the time.
Install it with pip:

```bash
pip install scdataloader
# or, with dev dependencies:
pip install "scDataLoader[dev]"
```
You will also need a lamin instance; you can initialize one locally:

```bash
lamin init --storage ./testdb --name test --schema bionty
```
If you are starting out with lamin and had to run `lamin init`, you will also need to populate your ontologies. This is because scPRINT uses ontologies to define its cell types, diseases, sexes, ethnicities, etc. You can do it manually or with our function:
```python
from scdataloader.utils import populate_my_ontology

# to populate everything (recommended; can take 2-10 min):
populate_my_ontology()

# or only the minimum needed by the tool:
populate_my_ontology(
    organisms=["NCBITaxon:10090", "NCBITaxon:9606"],
    sex=["PATO:0000384", "PATO:0000383"],
    celltypes=None,
    ethnicities=None,
    assays=None,
    tissues=None,
    diseases=None,
    dev_stages=None,
)
```
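As a quick sanity check, you can inspect what was populated. This is a minimal sketch assuming the bionty lamindb plugin is installed; exact registry names and record counts may differ across versions:

```python
import bionty as bt

# each populated ontology becomes a set of records in its bionty registry;
# .df() returns them as a pandas DataFrame
print(bt.Organism.df().head())
print(bt.CellType.df().shape)
```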
If you want to use the latest version of scDataLoader and work on the code yourself, use `git clone` and `pip install -e` instead of `pip install`:
```bash
git clone https://github.com/jkobject/scDataLoader.git
pip install -e "scDataLoader[dev]"
```
```python
# initialize a local lamin database first (from the command line):
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty

import lamindb as ln
from scdataloader import utils, Preprocessor, DataModule

# preprocess a dataset (here `adata` is your own AnnData object)
preprocessor = Preprocessor(
    do_postp=False,
    force_preprocess=True,
)
adata = preprocessor(adata)

# register it in lamindb as a collection
art = ln.Artifact(adata, description="test")
art.save()
ln.Collection(art, name="test", description="test").save()

# create the datamodule on top of the collection
datamodule = DataModule(
    collection_name="test",
    organisms=["NCBITaxon:9606"],  # organisms we will work on
    how="most expr",  # the collator keeps only the most expressed genes
    max_len=1000,  # only the 1000 most expressed
    batch_size=64,
    num_workers=1,
    validation_split=0.1,
)
```
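Once created, the DataModule can be iterated over directly. A minimal sketch (the exact batch keys depend on the collator settings, and depending on your version you may need to call `setup()` first):

```python
# draw one batch from the training dataloader
datamodule.setup()
for batch in datamodule.train_dataloader():
    # batch is a dict assembled by the collator, e.g. batch["x"] holds expression
    print({k: getattr(v, "shape", v) for k, v in batch.items()})
    break
```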
```python
# initialize a local lamin database first (from the command line):
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty

from tqdm import tqdm
from scdataloader import utils, Preprocessor, SimpleAnnDataset, Collator, DataLoader

# preprocess the dataset (again, `adata` is your own AnnData object)
preprocessor = Preprocessor(
    do_postp=False,
    force_preprocess=True,
)
adata = preprocessor(adata)

# create the dataset
adataset = SimpleAnnDataset(
    adata, obs_to_output=["organism_ontology_term_id"]
)

# create the collator
col = Collator(
    organisms="NCBITaxon:9606",
    valid_genes=adata.var_names,
    max_len=2000,  # maximum number of genes to use
    how="most expr",  # one of "some", "most expr", "random_expr"
    # genelist=[geneA, geneB] if how=="some"
)

# create the dataloader
dataloader = DataLoader(
    adataset,
    collate_fn=col,
    batch_size=64,
    num_workers=4,
    shuffle=False,
)

# predict (`model` is your own trained model)
for batch in tqdm(dataloader):
    gene_pos, expression, depth = (
        batch["genes"],
        batch["x"],
        batch["depth"],
    )
    model.predict(
        gene_pos,
        expression,
        depth,
    )
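```

Note that `model` here stands in for any model that consumes gene indices, expression values, and sequencing depth (for example scPRINT); `model.predict(...)` is illustrative and not part of scDataLoader.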
```python
# initialize a local lamin database first (from the command line):
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty

import lamindb as ln
from scdataloader import utils
from scdataloader.preprocess import LaminPreprocessor, additional_postprocess, additional_preprocess

# preprocess the datasets of the cellxgene-census collection
DESCRIPTION = "preprocessed by scDataLoader"

cx_dataset = (
    ln.Collection.using(instance="laminlabs/cellxgene")
    .filter(name="cellxgene-census", version="2023-12-15")
    .one()
)
print(cx_dataset, len(cx_dataset.artifacts.all()))

do_preprocess = LaminPreprocessor(
    additional_postprocess=additional_postprocess,
    additional_preprocess=additional_preprocess,
    skip_validate=True,
    subset_hvg=0,
)

preprocessed_dataset = do_preprocess(
    cx_dataset, name=DESCRIPTION, description=DESCRIPTION, start_at=6, version="2"
)

# create dataloaders on top of the preprocessed collection
import tqdm
from scdataloader import DataModule

datamodule = DataModule(
    collection_name=DESCRIPTION,  # the collection created above
    organisms=["NCBITaxon:9606"],  # organisms we will work on
    how="most expr",  # the collator keeps only the most expressed genes
    max_len=1000,  # only the 1000 most expressed
    batch_size=64,
    num_workers=1,
    validation_split=0.1,
    test_split=0,
)

for i in tqdm.tqdm(datamodule.train_dataloader()):
    print(i)  # or do something else with the batch
    break

# with lightning:
# Trainer(model, datamodule)
```
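To train with Lightning instead of iterating manually, pass the DataModule to a `Trainer`. A minimal sketch, assuming `model` is your own `LightningModule` (for example an scPRINT instance), which is not provided by this package:

```python
import lightning as L

# the DataModule plugs directly into Lightning's training loop
trainer = L.Trainer(max_epochs=1)
trainer.fit(model, datamodule=datamodule)
```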
See the notebooks in the docs for more examples.
You can use the command line to preprocess a large database of datasets, as shown here for cellxgene. This allows parallelization and easier usage.
```bash
scdataloader --instance "laminlabs/cellxgene" --name "cellxgene-census" --version "2023-12-15" --description "preprocessed for scprint" --new_name "scprint main" --start_at 10 >> scdataloader.out
```
For more information on command-line usage, please refer to the scPRINT and Lightning documentation.
To update your ontologies to the latest sources:

```python
import bionty as bt

bt.reset_sources()

# then reload your instance from the command line:
# lamin load <your instance>

import lnschema_bionty as lb

lb.dev.sync_bionty_source_to_latest()
```
To load all ontologies:

```python
from scdataloader import utils

utils.populate_my_ontology()  # this might take 5-20 min
```
Read the CONTRIBUTING.md file.
This project is licensed under the MIT License - see the LICENSE file for details.
Awesome single cell dataloader created by @jkobject