
MTAD-GAT #9

Closed
wants to merge 4 commits into from

Conversation

gonzachiar
Collaborator

  • Add MTAD-GAT implementation (training and inference).
  • Inference and metrics are currently implemented in a crude way; this will change in the future.
  • Added a new function to load datasets more easily.

@gonzachiar gonzachiar requested a review from fede-bello June 23, 2024 19:38
X,
Y,
test_size=test_size,
shuffle=shuffle,
Collaborator
Hmm, I think we shouldn't be shuffling or stratifying time-series data.
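For reference, a minimal sketch of an ordered split for time-series, using sklearn's train_test_split with shuffle=False so the test set is the most recent portion and no future data leaks into training (shapes and values are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy multivariate series: 10 time steps, 3 features (illustrative only).
X = np.arange(30).reshape(10, 3)
Y = np.arange(10)

# shuffle=False keeps chronological order: train is the earliest 80%,
# test is the latest 20%. Stratification is likewise not applicable here.
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, shuffle=False
)
```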

Collaborator
Be careful, it's done in several places

class SlidingWindowDataset(Dataset):
    def __init__(
        self,
        data: torch.Tensor,  # TODO: Check if a tensor is a numpy ArrayLike
Collaborator

Seems to be. From the NumPy interoperability docs:

"However, we can treat PyTorch tensors as NumPy arrays without the need for explicit conversion:"

https://numpy.org/doc/stable/user/basics.interoperability.html



# TODO: Standardize
def adjust_predicts(score, label, threshold, pred=None, calc_latency=False):
Collaborator

Wow, this function is a mess. I would try to simplify that for-loop with its million if's if possible. I also don't like that the last return statement returns a bare value instead of a tuple; that feels like a trouble-maker. I'd add return predict, None, matching the first return statement.
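A hedged sketch of what the suggestion could look like: a simplified point-adjust implementation (not the PR's exact logic) in which every branch returns a two-tuple, so callers can always unpack the result the same way:

```python
import numpy as np

def adjust_predicts(score, label, threshold, pred=None, calc_latency=False):
    """Point-adjust: if any point inside a true anomaly segment is
    flagged, flag the whole segment (simplified illustrative sketch)."""
    predict = (score > threshold) if pred is None else pred.astype(bool)
    label = label.astype(bool)
    adjusted = predict.copy()
    latency = 0
    i, n = 0, len(label)
    while i < n:
        if label[i]:
            # Find the end of this contiguous anomaly segment.
            j = i
            while j < n and label[j]:
                j += 1
            if adjusted[i:j].any():
                # Count how late the first detection came, then
                # mark the whole segment as detected.
                first_hit = i + int(np.argmax(adjusted[i:j]))
                latency += first_hit - i
                adjusted[i:j] = True
            i = j
        else:
            i += 1
    if calc_latency:
        return adjusted, latency / max(1, int(label.sum()))
    # Return a tuple in every branch so callers can always unpack.
    return adjusted, None
```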

Collaborator

I'm not sure about this one. It's a lot of logic (that we don't really understand) just to run a single evaluation method, pot_eval. Maybe we remove that evaluation method for now and reconsider it if it turns out to be really useful?

from models.mtad_gat.model import MTAD_GAT
from models.mtad_gat.utils import adjust_anomaly_scores

# TODO:
Collaborator

?

@fede-bello fede-bello left a comment


Don't forget to add docstrings
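For reference, a minimal docstring sketch in the Google style; the function name, signature, and dataset identifiers below are all hypothetical:

```python
def load_dataset(name: str, train: bool = True):
    """Load a dataset split by name (hypothetical signature).

    Args:
        name: Identifier of the dataset to load.
        train: Whether to return the training split instead of the test split.

    Returns:
        The requested split as an array-like object.
    """
    raise NotImplementedError
```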

@gonzachiar gonzachiar closed this Oct 11, 2024