SST Segmentation
Segment traits from specimen images and apply automatic landmarking.
Learning Objectives
By the end of this tutorial, you will be able to:
- Understand how SST segments traits in a set of specimen images given just one annotated example.
- Use the provided SST tutorial notebook to segment traits in butterfly images or your own images.
- Use X-AnyLabeling to annotate a specimen image with the help of Segment Anything Model 2 (SAM 2), a powerful AI model.
Prerequisites
- No local setup is required for the hands-on portion of SST. We provide a ready-to-use tutorial notebook. You can further explore SST in its official repository if interested. You can also try SST in CyVerse. An installation guide is provided below in the Setup section.
- You are welcome to bring your own specimen images of interest. To try SST on your own images, X-AnyLabeling is required for annotation; an installation guide is provided below in the Setup section. If you do not have your own images but would still like to try X-AnyLabeling, install it as well and annotate the butterfly images we provide here.
Setup
(Optional) Installing SST in CyVerse
Follow the steps below to install SST in CyVerse.
Step 0: In the CyVerse Discovery Environment, launch Jupyter Lab PyTorch GPU. Click "Go to Analysis" and wait for "Launching VICE app: jupyter-lab-pytorch-gpu" to complete.
Step 1: Click Terminal to open a new terminal session. This should place you in ~/data-store. Create a working directory for fast local I/O:
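For example (the `this-session` directory name matches the paths used later in this tutorial):

```shell
# Create a working directory on the fast local disk and move into it
mkdir -p ~/data-store/this-session
cd ~/data-store/this-session
```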
Step 2: Install uv for fast environment setup:
Step 3: Create and activate a virtual environment:
Step 4: Clone and install SST:
Step 5: Download SAM 2 checkpoint:
cd checkpoints
wget https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt
cd ..
Step 6: Create data directories for the demo:
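For example, run from the SST directory (the paths match the segmentation command shown later in this tutorial):

```shell
# Directories for the support/query images and the segmentation output
mkdir -p demo_input/query_images demo_output
```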
Then download example images from here and upload them to ~/data-store/this-session/SST/demo_input.
(Optional) Installing X-AnyLabeling
Follow the steps below to install X-AnyLabeling on your local machine.
Option 1: Install from source (for GPU-enabled machines)
Step 0: If you do not have Conda, download and install Miniconda from the official website.
Step 1: Choose a working directory and open the terminal. Create a new Conda environment with Python version 3.12, and activate it:
Step 2: Follow the instructions here to install PyTorch and TorchVision dependencies.
Step 3: Install SAM 2 on a GPU-enabled machine:
Step 4: Clone the repository of X-AnyLabeling:
Step 5: Install the dependencies for X-AnyLabeling (choose the variant that fits your machine):
pip install -U uv
# CPU [Windows/Linux/macOS]
uv pip install -e .[cpu]
# CUDA 12.x is the default GPU option [Windows/Linux]
uv pip install -e .[gpu]
# CUDA 11.x [Windows/Linux]
uv pip install -e .[gpu-cu11]
Step 6: After installation, you can verify it by running the following command:
Step 7: After verification, you can run the application directly:
Option 2: Download released package
Download a released package from GitHub Releases and run it directly. The released package is CPU-only and may run more slowly than a source install.
Background
Standard deep learning methods require many labeled images to train an effective segmentation model. SST instead achieves high-quality trait segmentation across specimen images using only one labeled image per species: it concatenates the specimen images into a "pseudo-video" and reframes trait segmentation as a tracking task. With this framing, SST needs only one labeled image as the first frame and leverages video segmentation models such as SAM 2 to propagate segments to the subsequent frames.
Steps
SST
The following steps are also detailed in the tutorial notebook.
- Clone the SST repository and set up the environment.
- Download the SAM 2 checkpoint.
- Use the provided butterfly images or upload your own.
- Define the support image, support mask, and query images, then run SST trait segmentation on the query images.
- Visualize the results.
If you are using SST in CyVerse, run:
uv run python src/sst/segment.py \
--support_image demo_input/support_image.jpg \
--support_mask demo_input/support_mask.png \
--query_images demo_input/query_images/ \
--output demo_output/ \
--output_format "png"
Then you can see the results in ~/data-store/this-session/SST/demo_output.
X-AnyLabeling
- Open X-AnyLabeling (see Setup).
- In the left sidebar, click Open Dir (the first icon) and select the directory containing the images you want to label.
- In the left sidebar, click Auto Labeling (the "AI" icon at the bottom). A No Model button will then appear on the top toolbar. Click it and select "Segment Anything 2.1 (Large)". Note: If your machine has limited hardware resources, you may experience lag or freezing. In that case, select a smaller Segment Anything 2.1 model, such as Base, Small, or Tiny.
- Use Point (Q) to include an area, Point (E) to exclude an area, and +Rect to select an area within a bounding box. Use Clear (B) to remove all current annotations. Combine these tools to precisely label your target areas. Once you finish an area, use Finish (F) to save your annotation. Each time you click Finish (F), the annotation is automatically saved to a JSON file in the same directory as the image you are currently annotating; the JSON file shares the image's filename.
- You can also click the "Move and edit the selected polygons" icon in the middle of the left sidebar to manually adjust the area points with the mouse.
Note: X-AnyLabeling saves model weights on your local machine. If you wish to delete them later, you can find them at the following default paths:
- Windows: C:\Users\<Your User Name>\xanylabeling_data\models\
- macOS: /Users/<Your User Name>/xanylabeling_data/models/
- Linux: /home/<Your User Name>/xanylabeling_data/models/