Bridging computer vision and biology. BioCLIP models learn hierarchical representations of the natural world, enabling advanced species classification, trait prediction, and more.
Choose the right BioCLIP model for your task, from the latest BioCLIP 2 (ViT-L/14) to BioCAP and the original BioCLIP (ViT-B/16).
Go to Models →

Explore TreeOfLife datasets for training biological vision models, from 10M to 214M images across hundreds of thousands of taxa.
Go to Training Data →

Evaluate your model on biologically relevant benchmarks, including Rare Species and IDLE-OO Camera Traps.
Go to Benchmarks →

Use pybioclip, TreeOfLife-toolbox, TaxonoPy, and other tools for data processing, or to integrate BioCLIP into your Python code or computational pipeline.
Go to Software →

Try interactive demos for zero-shot classification and open-ended species identification without writing any code.
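Under the hood, CLIP-style zero-shot classification scores an image embedding against a text embedding for each candidate label and picks the best match. A minimal NumPy sketch with toy embeddings (the label names, dimensions, and values are illustrative stand-ins, not BioCLIP's actual weights):

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    """L2-normalize so dot products equal cosine similarities."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Hypothetical text embeddings, one per candidate species label.
labels = ["Danaus plexippus", "Vanessa cardui", "Papilio machaon"]
text_embeddings = normalize(rng.standard_normal((len(labels), 512)))

# Fake an image embedding that lies close to the first label's embedding.
image_embedding = normalize(text_embeddings[0] + 0.1 * rng.standard_normal(512))

# Zero-shot classification: softmax over scaled cosine similarities.
logits = 100.0 * text_embeddings @ image_embedding
probs = np.exp(logits - logits.max())
probs /= probs.sum()

best = labels[int(np.argmax(probs))]
print(best)  # the label whose text embedding the image embedding was built near
```

In real use, the image embedding comes from BioCLIP's vision encoder and each text embedding from its text encoder; only the scoring step shown here stays the same.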
Go to Demos →

Read the research papers behind BioCLIP, BioCLIP 2, BioCAP, and the TreeOfLife datasets.
Go to Papers →

The central warehouse for all BioCLIP assets: this collection aggregates every model version, the training datasets, the benchmarks, and the interactive demos.
Use this collection if you need direct access to raw model weights (SafeTensors/PyTorch), want to download the TreeOfLife or benchmark datasets, or are looking for quick per-image predictions.