Division by zero is unavoidable in a vectorized implementation of `p_norm`, since zeros appear along the diagonal of the distance matrix. There are (at least) two strategies to overcome this:

1. Catch the NumPy division-by-zero warnings and convert the affected output to `nan`.
2. Add a small epsilon along the diagonal so that zeros no longer appear.
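The two strategies can be sketched as follows. This is a minimal illustration, not the actual `layout.py` code; `pairwise_pnorm` is a hypothetical stand-in for however the pairwise distance matrix is built.

```python
import numpy as np

def pairwise_pnorm(points, p=2):
    # Hypothetical helper: pairwise p-norm distance matrix.
    # The diagonal (distance of each point to itself) is zero.
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, ord=p, axis=-1)

def inverse_distances_nan(points, p=2):
    # Strategy 1: let the division happen, then convert the resulting
    # infinities (from the zero diagonal) to nan.
    D = pairwise_pnorm(points, p)
    with np.errstate(divide="ignore"):
        inv = 1.0 / D  # diagonal entries become inf
    inv[np.isinf(inv)] = np.nan
    return inv

def inverse_distances_eps(points, p=2, eps=1e-9):
    # Strategy 2: perturb the diagonal by a small epsilon so every
    # entry of the divisor is nonzero and the result stays finite.
    D = pairwise_pnorm(points, p) + np.eye(len(points)) * eps
    return 1.0 / D
```

Note that strategy 1 also maps coincident points (zero off-diagonal distances) to `nan`, while strategy 2 gives them a large but finite value of `1/eps`; which behavior is preferable depends on how the downstream layout objective consumes the matrix.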
* P_norm layout
A Kamada-Kawai-like algorithm with a variable p-norm distance function.
Now also passing dim, center, scale as kwargs to layout callables.
* Set p_norm as find_embedding default
* p_norm unit test
test_dimension also changed as the behavior is slightly different now.
* Minor changes
Layout.d changed to Layout.dim, documentation changes, split a test.
* Timeout changed to perf_counter
* Fixed typo, added TODO #122
Strategy 1 is currently implemented.
(See `minorminer/minorminer/layout/layout.py`, line 125 at commit `3c0331f`.)
We should compare run-time differences and layout quality differences between the two strategies.
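For the run-time half of that comparison, a small harness using `time.perf_counter` (the clock this PR already switched to) would suffice. This is a hypothetical sketch; `fn` is whatever callable wraps a given strategy, and averaging over repeats smooths out clock jitter.

```python
import time

def time_strategy(fn, points, repeats=100):
    # perf_counter is a monotonic, high-resolution clock,
    # appropriate for benchmarking short code paths.
    start = time.perf_counter()
    for _ in range(repeats):
        fn(points)
    return (time.perf_counter() - start) / repeats
```

Layout quality is harder to automate; comparing the final Kamada-Kawai-style stress of the two strategies on the same test graphs would be one concrete measure.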