
Commit 506a444

authored by Adam J. Stewart

Docs: avoid duplicate target links (#1299)

* Docs: avoid duplicate target links
* Correct format of anonymous hyperlink target
* Update all remaining links
* Display colab badge links
* Document more decoders

Signed-off-by: Adam J. Stewart <ajstewart426@gmail.com>
1 parent e380568 commit 506a444

13 files changed

Lines changed: 33 additions & 51 deletions


docs/quickstart.rst

Lines changed: 3 additions & 3 deletions
@@ -6,7 +6,7 @@
 Segmentation model is just a PyTorch nn.Module, which can be created as easy as:
 
 .. code-block:: python
-
+
     import segmentation_models_pytorch as smp
 
     model = smp.Unet(
@@ -65,5 +65,5 @@ Check the following examples:
    :target: https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb
    :alt: Open In Colab
 
-- Finetuning notebook on Oxford Pet dataset with `PyTorch Lightning <https://github.com/qubvel/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb>`_ |colab-badge|
-- Finetuning script for cloth segmentation with `PyTorch Lightning <https://github.com/ternaus/cloths_segmentation>`_
+- Finetuning notebook on Oxford Pet dataset with `PyTorch Lightning <https://github.com/qubvel/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb>`__ |colab-badge|
+- Finetuning script for cloth segmentation with `PyTorch Lightning <https://github.com/ternaus/cloths_segmentation>`__

docs/save_load.rst

Lines changed: 6 additions & 5 deletions
@@ -70,8 +70,8 @@ For example:
 Saving with preprocessing transform (Albumentations)
 ----------------------------------------------------
 
-You can save the preprocessing transform along with the model and push it to the Hub.
-This can be useful when you want to share the model with the preprocessing transform that was used during training,
+You can save the preprocessing transform along with the model and push it to the Hub.
+This can be useful when you want to share the model with the preprocessing transform that was used during training,
 to make sure that the inference pipeline is consistent with the training pipeline.
 
 .. code:: python
@@ -104,12 +104,13 @@ Conclusion
 
 By following these steps, you can easily save, share, and load your models, facilitating collaboration and reproducibility in your projects. Don't forget to replace the placeholders with your actual model paths and names.
 
-|colab-badge|
+|binary-segmentation-intro|
+|save-load-model-and-share-with-hf-hub|
 
-.. |colab-badge| image:: https://colab.research.google.com/assets/colab-badge.svg
+.. |binary-segmentation-intro| image:: https://colab.research.google.com/assets/colab-badge.svg
    :target: https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/binary_segmentation_intro.ipynb
    :alt: Open In Colab
 
-.. |colab-badge| image:: https://colab.research.google.com/assets/colab-badge.svg
+.. |save-load-model-and-share-with-hf-hub| image:: https://colab.research.google.com/assets/colab-badge.svg
    :target: https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/save_load_model_and_share_with_hf_hub.ipynb
    :alt: Open In Colab

segmentation_models_pytorch/decoders/deeplabv3/model.py

Lines changed: 4 additions & 8 deletions
@@ -14,7 +14,7 @@
 
 
 class DeepLabV3(SegmentationModel):
-    """DeepLabV3_ implementation from "Rethinking Atrous Convolution for Semantic Image Segmentation"
+    """`DeepLabV3`__ implementation from "Rethinking Atrous Convolution for Semantic Image Segmentation"
 
     Args:
         encoder_name: Name of the classification model that will be used as an encoder (a.k.a backbone)
@@ -52,9 +52,7 @@ class DeepLabV3(SegmentationModel):
     Returns:
         ``torch.nn.Module``: **DeepLabV3**
 
-    .. _DeeplabV3:
-        https://arxiv.org/abs/1706.05587
-
+    .. __: https://arxiv.org/abs/1706.05587
     """
 
     @supports_config_loading
@@ -141,7 +139,7 @@ def load_state_dict(self, state_dict, *args, **kwargs):
 
 
 class DeepLabV3Plus(SegmentationModel):
-    """DeepLabV3+ implementation from "Encoder-Decoder with Atrous Separable
+    """`DeepLabV3+`__ implementation from "Encoder-Decoder with Atrous Separable
     Convolution for Semantic Image Segmentation"
 
     Args:
@@ -180,9 +178,7 @@ class DeepLabV3Plus(SegmentationModel):
     Returns:
         ``torch.nn.Module``: **DeepLabV3Plus**
 
-    Reference:
-        https://arxiv.org/abs/1802.02611v3
-
+    .. __: https://arxiv.org/abs/1802.02611v3
     """
 
     @supports_config_loading
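The pattern applied throughout these docstrings is reStructuredText's anonymous hyperlink target. A named target like ``.. _DeepLabV3:`` must be unique across a rendered document, so when Sphinx concatenates several docstrings onto one page, repeated names trigger duplicate-target warnings. An anonymous reference (trailing ``__``) instead binds to the nearest following anonymous target and can repeat freely. A minimal sketch of the replacement form:

```rst
`DeepLabV3`__ resolves to the nearest following anonymous target.

.. __: https://arxiv.org/abs/1706.05587
```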

segmentation_models_pytorch/decoders/dpt/model.py

Lines changed: 2 additions & 1 deletion
@@ -15,7 +15,7 @@
 
 class DPT(SegmentationModel):
     """
-    DPT is a dense prediction architecture that leverages vision transformers in place of convolutional networks as
+    `DPT`__ is a dense prediction architecture that leverages vision transformers in place of convolutional networks as
     a backbone for dense prediction tasks
 
     It assembles tokens from various stages of the vision transformer into image-like representations at various resolutions
@@ -69,6 +69,7 @@ class DPT(SegmentationModel):
     Returns:
         ``torch.nn.Module``: DPT
 
+    .. __: https://arxiv.org/abs/2103.13413
     """
 
     # fails for encoders with prefix tokens

segmentation_models_pytorch/decoders/fpn/model.py

Lines changed: 2 additions & 4 deletions
@@ -12,7 +12,7 @@
 
 
 class FPN(SegmentationModel):
-    """FPN_ is a fully convolution neural network for image semantic segmentation.
+    """`FPN`__ is a fully convolution neural network for image semantic segmentation.
 
     Args:
         encoder_name: Name of the classification model that will be used as an encoder (a.k.a backbone)
@@ -51,9 +51,7 @@ class FPN(SegmentationModel):
     Returns:
         ``torch.nn.Module``: **FPN**
 
-    .. _FPN:
-        http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf
-
+    .. __: http://presentations.cocodataset.org/COCO17-Stuff-FAIR.pdf
     """
 
     @supports_config_loading

segmentation_models_pytorch/decoders/linknet/model.py

Lines changed: 2 additions & 3 deletions
@@ -13,7 +13,7 @@
 
 
 class Linknet(SegmentationModel):
-    """Linknet_ is a fully convolution neural network for image semantic segmentation. Consist of *encoder*
+    """`Linknet`__ is a fully convolution neural network for image semantic segmentation. Consist of *encoder*
     and *decoder* parts connected with *skip connections*. Encoder extract features of different spatial
     resolution (skip connections) which are used by decoder to define accurate segmentation mask. Use *sum*
     for fusing decoder blocks with skip connections.
@@ -67,8 +67,7 @@ class Linknet(SegmentationModel):
     Returns:
         ``torch.nn.Module``: **Linknet**
 
-    .. _Linknet:
-        https://arxiv.org/abs/1707.03718
+    .. __: https://arxiv.org/abs/1707.03718
     """
 
     @supports_config_loading

segmentation_models_pytorch/decoders/manet/model.py

Lines changed: 2 additions & 4 deletions
@@ -13,7 +13,7 @@
 
 
 class MAnet(SegmentationModel):
-    """MAnet_ : Multi-scale Attention Net. The MA-Net can capture rich contextual dependencies based on
+    """`MAnet`__: Multi-scale Attention Net. The MA-Net can capture rich contextual dependencies based on
     the attention mechanism, using two blocks:
 
     - Position-wise Attention Block (PAB), which captures the spatial dependencies between pixels in a global view
@@ -72,9 +72,7 @@ class MAnet(SegmentationModel):
     Returns:
         ``torch.nn.Module``: **MAnet**
 
-    .. _MAnet:
-        https://ieeexplore.ieee.org/abstract/document/9201310
-
+    .. __: https://ieeexplore.ieee.org/abstract/document/9201310
     """
 
     @supports_config_loading

segmentation_models_pytorch/decoders/pan/model.py

Lines changed: 2 additions & 4 deletions
@@ -13,7 +13,7 @@
 
 
 class PAN(SegmentationModel):
-    """Implementation of PAN_ (Pyramid Attention Network).
+    """Implementation of `PAN`__ (Pyramid Attention Network).
 
     Note:
         Currently works with shape of input tensor >= [B x C x 128 x 128] for pytorch <= 1.1.0
@@ -54,9 +54,7 @@ class PAN(SegmentationModel):
     Returns:
         ``torch.nn.Module``: **PAN**
 
-    .. _PAN:
-        https://arxiv.org/abs/1805.10180
-
+    .. __: https://arxiv.org/abs/1805.10180
     """
 
     @supports_config_loading

segmentation_models_pytorch/decoders/pspnet/model.py

Lines changed: 2 additions & 3 deletions
@@ -13,7 +13,7 @@
 
 
 class PSPNet(SegmentationModel):
-    """PSPNet_ is a fully convolution neural network for image semantic segmentation. Consist of
+    """`PSPNet`__ is a fully convolution neural network for image semantic segmentation. Consist of
     *encoder* and *Spatial Pyramid* (decoder). Spatial Pyramid build on top of encoder and does not
     use "fine-features" (features of high spatial resolution). PSPNet can be used for multiclass segmentation
     of high resolution images, however it is not good for detecting small objects and producing accurate,
@@ -68,8 +68,7 @@ class PSPNet(SegmentationModel):
     Returns:
         ``torch.nn.Module``: **PSPNet**
 
-    .. _PSPNet:
-        https://arxiv.org/abs/1612.01105
+    .. __: https://arxiv.org/abs/1612.01105
     """
 
     @supports_config_loading

segmentation_models_pytorch/decoders/segformer/model.py

Lines changed: 2 additions & 4 deletions
@@ -12,7 +12,7 @@
 
 
 class Segformer(SegmentationModel):
-    """Segformer is simple and efficient design for semantic segmentation with Transformers
+    """`Segformer`__ is simple and efficient design for semantic segmentation with Transformers
 
     Args:
         encoder_name: Name of the classification model that will be used as an encoder (a.k.a backbone)
@@ -45,9 +45,7 @@ class Segformer(SegmentationModel):
     Returns:
         ``torch.nn.Module``: **Segformer**
 
-    .. _Segformer:
-        https://arxiv.org/abs/2105.15203
-
+    .. __: https://arxiv.org/abs/2105.15203
     """
 
     @supports_config_loading
