biobench.webssl
DinoVisionTransformer(img_size=224, patch_size=16, in_chans=3, embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.0, qkv_bias=True, ffn_bias=True, proj_bias=True, drop_path_rate=0.0, drop_path_uniform=False, init_values=None, embed_layer=PatchEmbed, act_layer=torch.nn.GELU, block_fn=Block, ffn_layer='mlp', block_chunks=1, num_register_tokens=0, interpolate_antialias=False, interpolate_offset=0.1)
Bases: Module
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| img_size | int, tuple | input image size | 224 |
| patch_size | int, tuple | patch size | 16 |
| in_chans | int | number of input channels | 3 |
| embed_dim | int | embedding dimension | 768 |
| depth | int | depth of transformer | 12 |
| num_heads | int | number of attention heads | 12 |
| mlp_ratio | float | ratio of MLP hidden dim to embedding dim | 4.0 |
| qkv_bias | bool | enable bias for qkv if True | True |
| proj_bias | bool | enable bias for proj in attn if True | True |
| ffn_bias | bool | enable bias for ffn if True | True |
| drop_path_rate | float | stochastic depth rate | 0.0 |
| drop_path_uniform | bool | apply uniform drop rate across blocks | False |
| init_values | float | layer-scale init values | None |
| embed_layer | Module | patch embedding layer | PatchEmbed |
| act_layer | Module | MLP activation layer | GELU |
| block_fn | Module | transformer block class | Block |
| ffn_layer | str | "mlp", "swiglu", "swiglufused" or "identity" | 'mlp' |
| block_chunks | int | split block sequence into block_chunks units for FSDP wrap | 1 |
| num_register_tokens | int | number of extra cls tokens (so-called "registers") | 0 |
| interpolate_antialias | bool | apply anti-aliasing when interpolating positional embeddings | False |
| interpolate_offset | float | work-around offset to apply when interpolating positional embeddings | 0.1 |
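A minimal usage sketch, assuming the class is importable from biobench.webssl and follows the standard DINOv2 forward contract (returning CLS-token features of width embed_dim in eval mode):

```python
import torch
from biobench.webssl import DinoVisionTransformer

# A ViT-B/16-shaped backbone using the defaults documented above.
model = DinoVisionTransformer(
    img_size=224, patch_size=16, embed_dim=768, depth=12, num_heads=12
)
model.eval()

x = torch.randn(1, 3, 224, 224)  # one 3-channel 224x224 image
with torch.no_grad():
    feats = model(x)
print(feats.shape)  # expected under the assumed contract: torch.Size([1, 768])
```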
DropPath(drop_prob=None)
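DropPath implements the stochastic depth regularizer that drop_path_rate above controls. A minimal sketch of the usual timm-style behavior, for illustration only (the module's actual code may differ in details):

```python
import torch
import torch.nn as nn

class DropPathSketch(nn.Module):
    """Stochastic depth: randomly zero a whole sample's residual branch."""

    def __init__(self, drop_prob: float = 0.0):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.drop_prob == 0.0 or not self.training:
            return x  # identity at inference or with zero drop rate
        keep_prob = 1.0 - self.drop_prob
        # One Bernoulli draw per sample, broadcast over all other dims.
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)
        mask = x.new_empty(shape).bernoulli_(keep_prob)
        return x * mask / keep_prob  # rescale to keep the expectation unchanged
```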
NestedTensorBlock(dim, num_heads, mlp_ratio=4.0, qkv_bias=False, proj_bias=True, ffn_bias=True, drop=0.0, attn_drop=0.0, init_values=None, drop_path=0.0, act_layer=torch.nn.GELU, norm_layer=torch.nn.LayerNorm, attn_class=Attention, ffn_layer=Mlp)
Bases: Block
forward_nested(x_list)

Runs the block on a list of tensors by nesting them together into a single batch.
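A hedged sketch of nested execution. Nesting variable-length sequences typically requires an xFormers-backed memory-efficient attention; the MemEffAttention import below is an assumption (it exists in DINOv2's layers, which this module mirrors), not part of the documented API:

```python
import torch
from biobench.webssl import NestedTensorBlock, MemEffAttention  # MemEffAttention assumed

blk = NestedTensorBlock(dim=768, num_heads=12, attn_class=MemEffAttention)

# Two token sequences of different lengths, run together without padding.
x_list = [torch.randn(1, 197, 768), torch.randn(1, 257, 768)]
out_list = blk.forward_nested(x_list)  # a list of outputs, one per input
```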
PatchEmbed(img_size=224, patch_size=16, in_chans=3, embed_dim=768, norm_layer=None, flatten_embedding=True)
Bases: Module
2D image to patch embedding: (B,C,H,W) -> (B,N,D)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| img_size | int or tuple[int, int] | Image size. | 224 |
| patch_size | int or tuple[int, int] | Patch token size. | 16 |
| in_chans | int | Number of input image channels. | 3 |
| embed_dim | int | Number of linear projection output channels. | 768 |
| norm_layer | Callable or None | Normalization layer. | None |
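A short sketch of the documented (B, C, H, W) -> (B, N, D) contract, assuming PatchEmbed is importable from biobench.webssl:

```python
import torch
from biobench.webssl import PatchEmbed

# A 224x224 input with 16x16 patches gives N = (224 / 16) ** 2 = 196 tokens.
embed = PatchEmbed(img_size=224, patch_size=16, in_chans=3, embed_dim=768)
x = torch.randn(2, 3, 224, 224)  # (B, C, H, W)
tokens = embed(x)                # (B, N, D) = (2, 196, 768)
```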
get_attn_bias_and_cat(x_list, branges=None)

Performs the index select, concatenates the tensors, and returns the attention bias from the cache.
init_weights_vit_timm(module, name='')

ViT weight initialization matching the original timm implementation (for reproducibility).
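The timm scheme truncated-normal-initializes linear weights (std 0.02) and zeroes biases. A hedged sketch of that convention and of how such an initializer is typically applied, not the function's exact body:

```python
import torch.nn as nn

def init_sketch(module: nn.Module, name: str = "") -> None:
    # timm convention: trunc-normal Linear weights, zero biases.
    if isinstance(module, nn.Linear):
        nn.init.trunc_normal_(module.weight, std=0.02)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Module.apply walks every submodule recursively.
mlp = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
mlp.apply(init_sketch)
```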
webssl_dino1b_full2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-1B: DINOv2's "giant2" architecture (ViT "little g"). Close to ViT-giant, with embed dim 1536 and 24 heads, i.e. embed dim 64 per head
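A hedged construction sketch. The factories appear to define architectures only; the checkpoint path below is hypothetical, and loading pretrained weights is assumed to happen separately:

```python
import torch
from biobench.webssl import webssl_dino1b_full2b_224

# Patch size 14 at 224px -> (224 / 14) ** 2 = 256 patch tokens plus CLS.
model = webssl_dino1b_full2b_224()

# Hypothetical checkpoint file, not part of the documented API.
state = torch.load("webssl_dino1b_full2b_224.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()
```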
webssl_dino2b_full2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-2B (LLM-inspired scaling)
webssl_dino2b_heavy2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-2B (LLM-inspired scaling)
webssl_dino2b_light2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-2B (LLM-inspired scaling)
webssl_dino300m_full2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-300M: DINOv2's "large" architecture (ViT-L)
webssl_dino3b_full2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-3B (LLM-inspired scaling)
webssl_dino3b_heavy2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-3B (LLM-inspired scaling)
webssl_dino3b_light2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-3B (LLM-inspired scaling)
webssl_dino5b_full2b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-5B (LLM-inspired scaling)
webssl_dino7b_full8b_224(img_size=224, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-7B (LLM-inspired scaling), pretrained at 224x224 resolution
webssl_dino7b_full8b_378(img_size=378, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-7B (LLM-inspired scaling), pretrained at 378x378 resolution
webssl_dino7b_full8b_518(img_size=518, patch_size=14, num_register_tokens=0, **kwargs)
Web-DINO ViT-7B (LLM-inspired scaling), pretrained at 518x518 resolution
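Patch size stays at 14 across the three 7B resolution variants, so only the token grid changes: 224 -> 16x16 = 256 patches, 378 -> 27x27 = 729, 518 -> 37x37 = 1369. A hedged sketch at the largest resolution (the 7B embed dim is not documented here, so the output width is left unstated):

```python
import torch
from biobench.webssl import webssl_dino7b_full8b_518

model = webssl_dino7b_full8b_518()
model.eval()

x = torch.randn(1, 3, 518, 518)  # 37 x 37 = 1369 patch tokens
with torch.no_grad():
    feats = model(x)  # assumed CLS features; width depends on the 7B config
```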