Commit 0650d55

DOC adding backlinks to docstrings

1 parent 05ad745

121 files changed: 818 additions, 121 deletions


doc/modules/clustering.rst

Lines changed: 4 additions & 12 deletions
@@ -879,13 +879,11 @@ classes according to some similarity metric.
 
 .. currentmodule:: sklearn.metrics
 
+.. _adjusted_rand_score:
 
 Adjusted Rand index
 -------------------
 
-Presentation and usage
-~~~~~~~~~~~~~~~~~~~~~~
-
 Given the knowledge of the ground truth class assignments ``labels_true``
 and our clustering algorithm assignments of the same samples
 ``labels_pred``, the **adjusted Rand index** is a function that measures

@@ -1000,13 +998,11 @@ random labelings by defining the adjusted Rand index as follows:
 * `Wikipedia entry for the adjusted Rand index
   <http://en.wikipedia.org/wiki/Rand_index#Adjusted_Rand_index>`_
 
+.. _mutual_info_score:
 
 Mutual Information based scores
 -------------------------------
 
-Presentation and usage
-~~~~~~~~~~~~~~~~~~~~~~
-
 Given the knowledge of the ground truth class assignments ``labels_true`` and
 our clustering algorithm assignments of the same samples ``labels_pred``, the
 **Mutual Information** is a function that measures the **agreement** of the two

@@ -1168,12 +1164,11 @@ calculated using a similar form to that of the adjusted Rand index:
 * `Wikipedia entry for the Adjusted Mutual Information
   <http://en.wikipedia.org/wiki/Adjusted_Mutual_Information>`_
 
+.. _homogeneity_completeness:
+
 Homogeneity, completeness and V-measure
 ---------------------------------------
 
-Presentation and usage
-~~~~~~~~~~~~~~~~~~~~~~
-
 Given the knowledge of the ground truth class assignments of the samples,
 it is possible to define some intuitive metric using conditional entropy
 analysis.

@@ -1329,9 +1324,6 @@ mean of homogeneity and completeness**:
 Silhouette Coefficient
 ----------------------
 
-Presentation and usage
-~~~~~~~~~~~~~~~~~~~~~~
-
 If the ground truth labels are not known, evaluation must be performed using
 the model itself. The Silhouette Coefficient
 (:func:`sklearn.metrics.silhouette_score`)

doc/modules/covariance.rst

Lines changed: 1 addition & 0 deletions
@@ -248,6 +248,7 @@ paper. It is the same algorithm as in the R ``glasso`` package.
   graphical lasso" <http://biostatistics.oxfordjournals.org/content/9/3/432.short>`_,
   Biostatistics 9, pp 432, 2008
 
+.. _robust_covariance:
 
 Robust Covariance Estimation
 ============================
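The new ``_robust_covariance`` anchor targets the robust-estimation section; a hedged sketch of one estimator from that module, ``MinCovDet``, on synthetic data (the data and parameters here are illustrative, not from the docs):

```python
import numpy as np
from sklearn.covariance import MinCovDet

# Synthetic correlated 2-D Gaussian sample (illustrative only).
rng = np.random.RandomState(0)
X = rng.multivariate_normal(mean=[0, 0], cov=[[1.0, 0.3], [0.3, 1.0]], size=200)

# Minimum Covariance Determinant: robust estimates of location and scatter.
mcd = MinCovDet(random_state=0).fit(X)
print(mcd.location_)    # robust mean estimate
print(mcd.covariance_)  # robust covariance estimate
```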

doc/modules/decomposition.rst

Lines changed: 3 additions & 3 deletions
@@ -554,9 +554,9 @@ structure of the error covariance :math:`\Psi`:
 * :math:`\Psi = \sigma^2 \mathbf{I}`: This assumption leads to
   the probabilistic model of :class:`PCA`.
 
-* :math:`\Psi = diag(\psi_1, \psi_2, \dots, \psi_n)`: This model is called Factor
-  Analysis, a classical statistical model. The matrix W is sometimes called
-  the "factor loading matrix".
+* :math:`\Psi = diag(\psi_1, \psi_2, \dots, \psi_n)`: This model is called
+  :class:`FactorAnalysis`, a classical statistical model. The matrix W is
+  sometimes called the "factor loading matrix".
 
 Both model essentially estimate a Gaussian with a low-rank covariance matrix.
 Because both models are probabilistic they can be integrated in more complex
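Since the hunk above turns "Factor Analysis" into a link to :class:`FactorAnalysis`, a quick sketch of where the factor loading matrix W and the diagonal :math:`\Psi` surface in that estimator's API (random data, purely illustrative):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.RandomState(0)
X = rng.randn(100, 5)  # 100 samples, 5 features (illustrative)

fa = FactorAnalysis(n_components=2).fit(X)
# components_ is the factor loading matrix W mentioned in the text;
# noise_variance_ holds the diagonal entries psi_i of Psi.
print(fa.components_.shape)      # (2, 5)
print(fa.noise_variance_.shape)  # (5,)
```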

doc/modules/ensemble.rst

Lines changed: 6 additions & 4 deletions
@@ -780,6 +780,8 @@ accessed via the ``feature_importances_`` property::
 
 .. currentmodule:: sklearn.ensemble.partial_dependence
 
+.. _partial_dependence:
+
 Partial dependence
 ..................
 

@@ -989,10 +991,10 @@ calculated as follows:
 ================  ==========  ==========  ==========
 classifier        class 1     class 2     class 3
 ================  ==========  ==========  ==========
-classifier 1      w1 * 0.2    w1 * 0.5    w1 * 0.3
-classifier 2      w2 * 0.6    w2 * 0.3    w2 * 0.1
+classifier 1      w1 * 0.2    w1 * 0.5    w1 * 0.3
+classifier 2      w2 * 0.6    w2 * 0.3    w2 * 0.1
 classifier 3      w3 * 0.3    w3 * 0.4    w3 * 0.3
-weighted average  0.37        0.4         0.3
+weighted average  0.37        0.4         0.3
 ================  ==========  ==========  ==========
 
 Here, the predicted class label is 2, since it has the

@@ -1031,7 +1033,7 @@ Vector Machine, a Decision Tree, and a K-nearest neighbor classifier::
    :scale: 75%
 
 Using the `VotingClassifier` with `GridSearch`
---------------------------------------------
+----------------------------------------------
 
 The `VotingClassifier` can also be used together with `GridSearch` in order
 to tune the hyperparameters of the individual estimators::
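The weighted-average table in the second hunk can be checked with a few lines of NumPy. The weights are left symbolic in the docs, so equal weights (w1 = w2 = w3 = 1) are assumed here for illustration; the winning class is the same either way:

```python
import numpy as np

# Per-classifier class probabilities, taken from the table above.
probas = np.array([[0.2, 0.5, 0.3],   # classifier 1
                   [0.6, 0.3, 0.1],   # classifier 2
                   [0.3, 0.4, 0.3]])  # classifier 3

weights = [1, 1, 1]  # assumption: equal weights, not specified in the table

avg = np.average(probas, axis=0, weights=weights)
# argmax indexes classes from 0, so index 1 corresponds to class 2.
print(avg.argmax() + 1)  # 2
```

This reproduces the doc's conclusion that class 2 wins, since its averaged probability (0.4) is the largest.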

doc/modules/feature_extraction.rst

Lines changed: 1 addition & 0 deletions
@@ -826,6 +826,7 @@ Some tips and tricks:
   Customizing the vectorizer can also be useful when handling Asian languages
   that do not use an explicit word separator such as whitespace.
 
+.. _image_feature_extraction:
 
 Image feature extraction
 ========================
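The section behind the new ``_image_feature_extraction`` anchor is built around patch extraction; a small sketch with ``extract_patches_2d`` on a made-up image (shapes chosen here only for illustration):

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d

# A tiny 4x4 RGB "image" with arbitrary values (illustrative only).
one_image = np.arange(4 * 4 * 3).reshape((4, 4, 3))

# All overlapping 2x2 patches: (4 - 2 + 1) ** 2 = 9 of them.
patches = extract_patches_2d(one_image, (2, 2))
print(patches.shape)  # (9, 2, 2, 3)
```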

doc/modules/feature_selection.rst

Lines changed: 5 additions & 0 deletions
@@ -13,6 +13,8 @@ improve estimators' accuracy scores or to boost their performance on very
 high-dimensional datasets.
 
 
+.. _variance_threshold:
+
 Removing features with low variance
 ===================================
 

@@ -45,6 +47,8 @@ so we can select using the threshold ``.8 * (1 - .8)``::
 As expected, ``VarianceThreshold`` has removed the first column,
 which has a probability :math:`p = 5/6 > .8` of containing a zero.
 
+.. _univariate_feature_selection:
+
 Univariate feature selection
 ============================
 

@@ -101,6 +105,7 @@ univariate p-values:
 
 :ref:`example_feature_selection_plot_feature_selection.py`
 
+.. _rfe:
 
 Recursive feature elimination
 =============================
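The ``.8 * (1 - .8)`` threshold mentioned in the second hunk comes from the Bernoulli variance formula p(1 - p): a boolean column that takes one value in more than 80% of samples has variance below 0.16. A sketch consistent with that reasoning (toy data in the spirit of the docs):

```python
from sklearn.feature_selection import VarianceThreshold

# Boolean features; the first column is zero in 5 of 6 samples.
X = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]

# Drop columns whose variance falls below .8 * (1 - .8) = 0.16.
# Column 1 has variance (1/6)(5/6) ~= 0.14 < 0.16, so it is removed.
sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
X_new = sel.fit_transform(X)
print(X_new.shape)  # (6, 2)
```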

doc/modules/grid_search.rst

Lines changed: 2 additions & 0 deletions
@@ -72,6 +72,8 @@ evaluated and the best combination is retained.
   classifier (here a linear SVM trained with SGD with either elastic
   net or L2 penalty) using a :class:`pipeline.Pipeline` instance.
 
+.. _randomized_parameter_search:
+
 Randomized Parameter Optimization
 =================================
 While using a grid of parameter settings is currently the most widely used
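A sketch of the randomized search the new anchor points at: candidates are sampled from distributions rather than enumerated exhaustively. Note the import path is ``sklearn.model_selection`` in current releases (it was ``sklearn.grid_search`` at the time of this commit); the estimator and distributions below are illustrative choices:

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Draw 10 candidate settings from the distributions instead of a full grid.
param_distributions = {"max_depth": randint(1, 8),
                       "min_samples_split": randint(2, 20)}
search = RandomizedSearchCV(DecisionTreeClassifier(random_state=0),
                            param_distributions, n_iter=10, random_state=0)
search.fit(X, y)
print(search.best_params_)
```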

doc/modules/kernel_approximation.rst

Lines changed: 3 additions & 0 deletions
@@ -43,6 +43,7 @@ kernel function or a precomputed kernel matrix.
 The number of samples used - which is also the dimensionality of the features computed -
 is given by the parameter ``n_components``.
 
+.. _rbf_kernel_approx:
 
 Radial Basis Function Kernel
 ----------------------------

@@ -98,6 +99,7 @@ use of larger feature spaces more efficient.
 
 * :ref:`example_plot_kernel_approximation.py`
 
+.. _additive_chi_kernel_approx:
 
 Additive Chi Squared Kernel
 ---------------------------

@@ -130,6 +132,7 @@ with the approximate feature map provided by :class:`RBFSampler` to yield an app
 feature map for the exponentiated chi squared kernel.
 See the [VZ2010]_ for details and [VVZ2010]_ for combination with the :class:`RBFSampler`.
 
+.. _skewed_chi_kernel_approx:
 
 Skewed Chi Squared Kernel
 -------------------------
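The ``n_components`` parameter described in the first hunk is the dimensionality of the approximate feature map. A sketch with ``RBFSampler`` feeding a linear classifier (toy data; ``max_iter``/``tol`` settings are illustrative):

```python
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDClassifier

X = [[0, 0], [1, 1], [1, 0], [0, 1]]
y = [0, 0, 1, 1]

# Monte Carlo approximation of the RBF kernel feature map;
# n_components (default 100) sets the output dimensionality.
rbf_feature = RBFSampler(gamma=1, random_state=1)
X_features = rbf_feature.fit_transform(X)
print(X_features.shape)  # (4, 100)

# A linear model on the mapped features approximates a kernelized SVM.
clf = SGDClassifier(max_iter=10, tol=None, random_state=0)
clf.fit(X_features, y)
print(clf.score(X_features, y))
```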

doc/modules/linear_model.rst

Lines changed: 6 additions & 0 deletions
@@ -266,6 +266,7 @@ They also tend to break when the problem is badly conditioned
 
 * :ref:`example_linear_model_plot_lasso_model_selection.py`
 
+.. _elastic_net:
 
 Elastic Net
 ===========

@@ -486,6 +487,9 @@ previously chosen dictionary elements.
   <http://blanche.polytechnique.fr/~mallat/papiers/MallatPursuit93.pdf>`_,
   S. G. Mallat, Z. Zhang,
 
+
+.. _bayesian_regression:
+
 Bayesian Regression
 ===================
 

@@ -752,6 +756,8 @@ while with ``loss="hinge"`` it fits a linear support vector machine (SVM).
 
 * :ref:`sgd`
 
+.. _perceptron:
+
 Perceptron
 ==========
 
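Of the sections gaining anchors here, Elastic Net is the one defined by a tunable penalty mix; a minimal sketch (toy data, and the ``alpha``/``l1_ratio`` values are illustrative):

```python
from sklearn.linear_model import ElasticNet

# l1_ratio blends the L1 (lasso) and L2 (ridge) penalties;
# alpha scales the overall regularization strength.
regr = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=0)
regr.fit([[0, 0], [1, 1], [2, 2]], [0, 1, 2])
print(regr.coef_, regr.intercept_)
```

With ``l1_ratio=1`` this reduces to the Lasso, and with ``l1_ratio=0`` to Ridge.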

doc/modules/manifold.rst

Lines changed: 1 addition & 0 deletions
@@ -149,6 +149,7 @@ The overall complexity of Isomap is
   <http://www.sciencemag.org/content/290/5500/2319.full>`_
   Tenenbaum, J.B.; De Silva, V.; & Langford, J.C. Science 290 (5500)
 
+.. _locally_linear_embedding:
 
 Locally Linear Embedding
 ========================
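A sketch of the estimator behind the new ``_locally_linear_embedding`` anchor, reducing a handful of digit images to two dimensions (the digits dataset and 100-sample slice are illustrative choices):

```python
from sklearn.datasets import load_digits
from sklearn.manifold import LocallyLinearEmbedding

X, _ = load_digits(return_X_y=True)

# Embed the first 100 digit images (64-D) into 2 dimensions.
embedding = LocallyLinearEmbedding(n_components=2)
X_transformed = embedding.fit_transform(X[:100])
print(X_transformed.shape)  # (100, 2)
```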
