
sklearn.neighbors.radius_neighbors_graph

sklearn.neighbors.radius_neighbors_graph(X, radius, mode='connectivity', metric='minkowski', p=2, metric_params=None, include_self=False)[source]

Computes the (weighted) graph of neighbors for points in X.

Neighborhoods are restricted to the points at a distance lower than radius.

Read more in the User Guide.

Parameters:

X : array-like or BallTree, shape = [n_samples, n_features]

Sample data, in the form of a numpy array or a precomputed BallTree.

radius : float

Radius of neighborhoods.

mode : {‘connectivity’, ‘distance’}, optional

Type of returned matrix: ‘connectivity’ will return the connectivity matrix with ones and zeros; with ‘distance’ the edges are the distances between points, computed with the selected metric.
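For example, the following sketch (same toy data as the Examples section below) shows that with mode=’distance’ the stored edge weights are the actual distances within the radius; exact array formatting may vary with the NumPy version:

>>> from sklearn.neighbors import radius_neighbors_graph
>>> X = [[0], [3], [1]]
>>> A = radius_neighbors_graph(X, 1.5, mode='distance', include_self=False)
>>> A.toarray()
array([[ 0.,  0.,  1.],
       [ 0.,  0.,  0.],
       [ 1.,  0.,  0.]])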

metric : string, default ‘minkowski’

The distance metric used to calculate the neighbors within a given radius for each sample point. The DistanceMetric class gives a list of available metrics. The default distance is ‘euclidean’ (the ‘minkowski’ metric with p=2).
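As a brief sketch, any metric name accepted by DistanceMetric can be passed directly, for example ‘manhattan’:

>>> from sklearn.neighbors import radius_neighbors_graph
>>> X = [[0, 0], [1, 1], [3, 0]]
>>> A = radius_neighbors_graph(X, 2.5, metric='manhattan', include_self=False)
>>> A.toarray()
array([[ 0.,  1.,  0.],
       [ 1.,  0.,  0.],
       [ 0.,  0.,  0.]])

Here only the pair at Manhattan distance 2 (within the radius of 2.5) is connected.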

include_self : bool, default=False

Whether or not to mark each sample as the first nearest neighbor to itself. If None, then True is used for mode=’connectivity’ and False for mode=’distance’, as this preserves backwards compatibility.
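For instance, rerunning the connectivity example from the Examples section below with include_self=False leaves the diagonal at zero (a sketch; formatting may vary):

>>> from sklearn.neighbors import radius_neighbors_graph
>>> X = [[0], [3], [1]]
>>> A = radius_neighbors_graph(X, 1.5, mode='connectivity', include_self=False)
>>> A.toarray()
array([[ 0.,  0.,  1.],
       [ 0.,  0.,  0.],
       [ 1.,  0.,  0.]])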

p : int, default 2

Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
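A small sketch of the effect of p: with the same radius, a pair of points that is within Euclidean (p=2) range can fall outside Manhattan (p=1) range:

>>> from sklearn.neighbors import radius_neighbors_graph
>>> X = [[0, 0], [1, 1], [3, 0]]
>>> radius_neighbors_graph(X, 1.5, p=2, include_self=False).toarray()
array([[ 0.,  1.,  0.],
       [ 1.,  0.,  0.],
       [ 0.,  0.,  0.]])
>>> radius_neighbors_graph(X, 1.5, p=1, include_self=False).toarray()
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])

The first two points are at Euclidean distance √2 ≈ 1.41 (inside the radius) but at Manhattan distance 2 (outside it).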

metric_params : dict, optional

Additional keyword arguments for the metric function.

Returns:

A : sparse matrix in CSR format, shape = [n_samples, n_samples]

A[i, j] is assigned the weight of the edge that connects i to j.

See also

kneighbors_graph

Examples

>>> X = [[0], [3], [1]]
>>> from sklearn.neighbors import radius_neighbors_graph
>>> A = radius_neighbors_graph(X, 1.5, mode='connectivity', include_self=True)
>>> A.toarray()
array([[ 1.,  0.,  1.],
       [ 0.,  1.,  0.],
       [ 1.,  0.,  1.]])