
Optimized anomaly score calculation for PatchCore for both num_neighb… #633

Merged (7 commits) on Nov 2, 2022
2 changes: 1 addition & 1 deletion anomalib/models/patchcore/config.yaml
@@ -6,7 +6,7 @@ dataset:
   category: bottle
   image_size: 224
   train_batch_size: 32
-  test_batch_size: 1
+  test_batch_size: 32
   num_workers: 8
   transform_config:
     train: null
13 changes: 10 additions & 3 deletions anomalib/models/patchcore/torch_model.py
@@ -153,8 +153,12 @@ def nearest_neighbors(self, embedding: Tensor, n_neighbors: int) -> Tuple[Tensor
             Tensor: Patch scores.
             Tensor: Locations of the nearest neighbor(s).
         """
-        distances = torch.cdist(embedding, self.memory_bank, p=2.0)  # euclidean norm
-        patch_scores, locations = distances.topk(k=n_neighbors, largest=False, dim=1)
+        distances = torch.cdist(embedding, self.memory_bank, p=2.0)  # euclidean norm
+        if n_neighbors == 1:
+            # when n_neighbors is 1, speed up computation by using min instead of topk
+            patch_scores, locations = distances.min(1)
+        else:
+            patch_scores, locations = distances.topk(k=n_neighbors, largest=False, dim=1)
         return patch_scores, locations
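The fast path above relies on `Tensor.min` returning the same values and indices as `topk(k=1, largest=False)` while skipping the partial sort. A minimal standalone sketch (the tensor shapes here are illustrative, not taken from the PR):

```python
import torch

torch.manual_seed(0)
embedding = torch.randn(4, 8)    # 4 query patch features, 8-dim each
memory_bank = torch.randn(10, 8) # 10 stored patch features

# Pairwise Euclidean distances, as in nearest_neighbors()
distances = torch.cdist(embedding, memory_bank, p=2.0)

# k == 1: min avoids the partial sort that topk performs
min_scores, min_locs = distances.min(1)
topk_scores, topk_locs = distances.topk(k=1, largest=False, dim=1)

# Same results, up to the extra singleton dim that topk keeps
assert torch.allclose(min_scores, topk_scores.squeeze(1))
assert torch.equal(min_locs, topk_locs.squeeze(1))
```

Note the one shape difference: `min(1)` returns `(n,)` tensors while `topk` returns `(n, 1)`, which downstream code needs to account for.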

@@ -168,6 +172,9 @@ def compute_anomaly_score(self, patch_scores: Tensor, locations: Tensor, embedding: Tensor) -> Tensor:
             Tensor: Image-level anomaly scores
         """
 
+        # Don't need to compute weights if num_neighbors is 1
+        if self.num_neighbors == 1:
+            return patch_scores.amax(1)
         # 1. Find the patch with the largest distance to it's nearest neighbor in each image
         max_patches = torch.argmax(patch_scores, dim=1)  # (m^test,* in the paper)
         # 2. Find the distance of the patch to it's nearest neighbor, and the location of the nn in the membank
@@ -179,7 +186,7 @@ def compute_anomaly_score(self, patch_scores: Tensor, locations: Tensor, embedding: Tensor) -> Tensor:
         # 4. Find the distance of the patch features to each of the support samples
         distances = torch.cdist(embedding[max_patches].unsqueeze(1), self.memory_bank[support_samples], p=2.0)
        # 5. Apply softmax to find the weights
-        weights = (1 - F.softmax(distances.squeeze()))[..., 0]
+        weights = (1 - F.softmax(distances.squeeze(), 1))[..., 0]
         # 6. Apply the weight factor to the score
         score = weights * score  # S^* in the paper
         return score
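The last change passes the reduction axis to `F.softmax` explicitly instead of relying on PyTorch's deprecated implicit-dim inference. A small sketch of why the axis matters (the distance values are made up for illustration):

```python
import torch
import torch.nn.functional as F

# Two images, three support samples each; values are illustrative only.
distances = torch.tensor([[1.0, 2.0, 3.0],
                          [4.0, 5.0, 6.0]])

# dim=1 normalizes across each image's support samples (the intended axis).
w_dim1 = (1 - F.softmax(distances, dim=1))[..., 0]

# Normalizing across the batch axis instead mixes distances between images
# and produces different weights.
w_dim0 = (1 - F.softmax(distances, dim=0))[..., 0]
```

Here `w_dim1` gives identical weights for both rows (softmax is shift-invariant and the rows differ only by a constant offset), while `w_dim0` does not, so being explicit about the axis keeps the re-weighting correct regardless of how `distances.squeeze()` collapses dimensions.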