sktime
Issue #8: time series anomalies/changepoint interface upgrade, moving skchange to 2nd or 1st party?
Hey @fkiraly! I have actually been meaning to contact you about the points you raise, so it's great that you found your way here on your own through my collaborator. First, I'd be happy to help with the rework of the annotator/anomalies/changepoints/segmentation interface. What is the best way for me to contribute? Is it mainly by following and contributing to the issues you mention, and getting started by following the instructions here: https://www.sktime.net/en/latest/get_involved/contributing.html? Meanwhile, could you elaborate on what making skchange a 2nd party library means? What needs to be done in sktime and skchange to make that happen? I know you have adapted several separate packages, e.g. tsfresh, to the sktime framework. Is that an example of a 2nd party integration?
Thanks for the reply, @Tveten!
Normally, a looser contribution pattern, e.g., via issues, would be what I would recommend, but the annotation module is "special" in the sense that it has been in an experimental stage for a while and we are moving it to a maturing stage, consolidating interfaces, so it is at an earlier stage than other modules. Given this and your dependencies, I think it might be best if we have regular touch points with @Alex-JG3 and possibly other users. Concretely, I suggest:
For synchronous contact, we have sync meetings on Mondays 12 UTC (workstream tech meetings) and Fridays 13 UTC (meetups; content varies between presentations and tech planning). Serendipitously, there is also the upcoming 2024-2025 roadmap planning meeting on June 21, i.e., very soon! It might make sense for you to join and suggest priority roadmap items based on your use cases and user base. On Discord, we can send quick messages to schedule if any other timings are needed.
I would say it's up to you, if you feel you have the capacity to maintain. As you say, the trade-off is between maintenance burden and agility. In any model, we are happy to help. As said, the annotation module is less consolidated than others, so it would be important imo to coordinate, especially when it concerns framework architecture and roadmap.
Let me be a bit more precise about what I mean by the different models. The below applies to packages usable via sktime:
There are also some typical transitions we've seen over the years:
Thanks for the thorough reply! I think a 2nd party arrangement sounds like a good solution for now; then we can see what happens down the line. A few concrete steps towards this are:
Something like that? I'm very open to additional suggestions!
@Tveten, makes absolute sense. As an additional point, what would be very useful is joining the discussion and/or active development on improvements to the 2nd party developer experience, together with @felipeangelimvieira. This PR came out of that: sktime/sktime#6588. I will open an issue with some of the current improvements around this, and also "indexing", i.e., discoverability.
@fkiraly Fantastic!
Here: sktime/sktime#6639
FYI, we've now refactored the annotation module tests so they can be used in 3rd and 2nd party packages. The change should be available in the next release this week. What would be great is if we could jointly progress API conformance, and also widen the interface so it allows things you want, such as multivariate, panel, etc. Are you back from holiday?
Great! I'm back as of today. I'll start working on the API conformance this week. |
Excellent! Meanwhile, I'll also be working on some of the suggestions you and others made on the annotation module design, e.g., multivariate, use in pipelines as transformations, etc.
Hey @fkiraly! I have now attempted conforming to the new annotator API. I honestly think it was pretty hard. It took me quite a long time to understand the design and what the new requirements for me as an extender are. Which methods do I have to implement and which are optional? How do all the new methods work together? What restrictions do they put on the output types of different detectors' predict method? What do I have to do to add support for a new task like collective anomaly detection? After studying the new class more, I have the following comments:
That being said, I like the overall design as regards getting rid of the fmt and labels arguments, and using predict for sparse output and transform for dense output. I think the design is on the right track! Due to the difficulties I had, I ended up coding up my own suggestion of a base class, which you can find here:

Here are some of the main differences between BaseDetector and BaseSeriesAnnotator. BaseDetector is as lightweight as possible. The main methods are:
In addition, there are two abstract converter methods:
where .transform(X) = sparse_to_dense(self.predict(X)). So for a minimal version of a detector to be implemented, the required methods are _fit, _predict and sparse_to_dense.

Regarding the naming "score_transform" vs. "transform_scores": I think detector.score_transform is closer to the natural language "apply the detector's score transform to data X", while transform_scores reads like "_scores" in a subscript sense. This is a minor point of personal preference, and I really have no problem using transform_scores if that makes conformance with sktime easier.

Common detector types = subclasses of BaseDetector

For common detector types like anomaly detectors or changepoint detectors, subclasses define the output formats of .predict and .transform. This fully separates point anomaly detectors from changepoint detectors and any other detector types. This improves readability and maintainability in my opinion. For full examples, see:
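For a toy illustration of the minimal contract (not the full examples referred to above, and not skchange's actual code): the sketch below uses plain Python lists instead of pandas objects, sparse_to_dense takes the series length as an extra argument for simplicity, and ThresholdAnomalyDetector is a made-up example detector.

```python
from abc import ABC, abstractmethod


class BaseDetector(ABC):
    """Sketch of a lightweight detector base class.

    Subclasses implement _fit, _predict (sparse output), and
    sparse_to_dense; transform is derived from predict.
    """

    def fit(self, X):
        self._fit(X)
        return self

    def predict(self, X):
        # sparse output, e.g. a list of anomaly or changepoint indices
        return self._predict(X)

    def transform(self, X):
        # dense output, derived from the sparse one
        return self.sparse_to_dense(self.predict(X), len(X))

    @abstractmethod
    def _fit(self, X): ...

    @abstractmethod
    def _predict(self, X): ...

    @staticmethod
    @abstractmethod
    def sparse_to_dense(sparse, n): ...


class ThresholdAnomalyDetector(BaseDetector):
    """Toy point anomaly detector: flags values above a threshold."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def _fit(self, X):
        pass  # nothing to learn in this toy example

    def _predict(self, X):
        # sparse: indices of anomalous points
        return [i for i, x in enumerate(X) if x > self.threshold]

    @staticmethod
    def sparse_to_dense(sparse, n):
        # dense: 0/1 anomaly label per time point
        flagged = set(sparse)
        return [1 if i in flagged else 0 for i in range(n)]
```

Here, only _fit, _predict and sparse_to_dense are implemented by the extender; fit, predict and transform come for free from the base class.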
A new anomaly detector can then inherit from PointAnomalyDetector, and only needs to implement _fit and _predict. Adding support for a new generic detector -- collective anomalies in my case -- is also simply a matter of following the same recipe: make a corresponding subclass. A new exotic detector can also be added by inheriting directly from BaseDetector and implementing _fit, _predict and sparse_to_dense (and other methods optionally).

Output formats

For clarity and maintainability, I have made the decision to reduce the number of output type options, at least for now. On a high level:
Finally

Sorry for the long read, and that I just went ahead and made my separate implementation rather than conforming. I hope some of the ideas are useful for the BaseAnnotator design. All feedback is very welcome!
@Tveten, I had a look at your proposal, interesting! I very much like that it is leaner! Though I see a key issue regarding stability in what you inherit from. Further, can I ask for an explanation: how do you account for the case where the user may want to use the same object to return a segmentation vs the changepoints?
A fix in both sktime and skchange: let the dependency of BaseDetector be on BaseEstimator:
Just to make sure we speak about the same thing:
With this definition, a list of changepoints is equivalent to a segmentation where the inclusive end-point of each interval is the changepoint and each interval has a unique label. However, the changepoint representation is sparser when applicable. Since all the methods in skchange are changepoint detectors and don't do any grouping of the resulting segments, I have chosen to drop the interval-based segmentation representation for now, just to keep things as simple as possible. As kind of a middle ground, however, the transform method of changepoint detectors returns segment labels. The transform of changepoint detectors could also have returned a dense 0-1 indicator of changepoint locations, but I think the segment labels are much more useful.

Currently, my design thinking is that .predict should be "as sparse a representation as possible", while .transform should be "as dense as possible" in the sense of containing a lot of information. This is also why I have chosen to give each collective anomaly a separate label, rather than labelling them all as "1" like point anomaly detectors do.

If you really want the interval-based segmentation at some point, and maybe it's more relevant for the annotators in sktime, I think the best solution would be to make a BaseSegmentor or something that defines the output format. Then each concrete detector needs to decide whether it fits best as a ChangepointDetector or BaseSegmentor, where I would always go with the sparsest choice. I guess you could also make an adapter that converts any ChangepointDetector into a Segmentor.
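A small sketch of the changepoint-to-segment-label equivalence described above, on plain lists (the function name and list-based types are illustrative assumptions, not skchange's API):

```python
def changepoints_to_segment_labels(changepoints, n):
    """Convert a sparse list of changepoints into dense segment labels.

    A changepoint is the inclusive end-point of a segment, so
    changepoints [1, 3] over 6 points give segments
    [0, 1], [2, 3], [4, 5] with labels 0, 1, 2.
    """
    labels = []
    label = 0
    ends = set(changepoints)
    for i in range(n):
        labels.append(label)
        if i in ends:
            label += 1
    return labels


# sparse representation: two changepoints; dense: a label per point
changepoints_to_segment_labels([1, 3], 6)  # → [0, 0, 1, 1, 2, 2]
```

The sparse list and the dense labels carry the same information here; the dense form is what a transform-style method would return.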
Agreed with the first. For the second, I'd specifically say it should be the "most reasonable expectation" for a return, when the condition is that the return is a univariate time series with the same index as the input.
Yes, though segmentations can also be label-less or overlapping. If you get a segmentation from consecutive changepoints, it will always cover the range from the first to the last point exhaustively and without overlap, but in my conceptual model that is not a necessary property for general segmentations.
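To sketch that distinction (a hypothetical representation, not sktime's actual data format): a general segmentation can be held as (start, end, label) records with inclusive end-points, which permits overlaps and gaps, while a changepoint-derived segmentation always partitions the index range exactly.

```python
def is_exhaustive_nonoverlapping(segments, n):
    """Check whether (start, end, label) segments, end-inclusive,
    cover the index range 0..n-1 exactly once -- the property that
    changepoint-derived segmentations always have."""
    covered = [0] * n
    for start, end, _label in segments:
        for i in range(start, end + 1):
            covered[i] += 1
    return all(c == 1 for c in covered)


# derived from consecutive changepoints: partitions 0..5 exactly
from_changepoints = [(0, 1, 0), (2, 3, 1), (4, 5, 2)]

# a general segmentation: overlapping segments are allowed
general = [(0, 3, "a"), (2, 5, "b")]
```

In this toy representation, only the first example passes the check, matching the point that exhaustive non-overlapping coverage is a special property, not a defining one.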
Yes, of course, this is something I've been thinking about, but I recoil from this solution for two reasons:
The situation also reminds me of when we had different API designs for univariate and multivariate forecasters, or even different base classes for single-time-series transformations and panel (collection-of-time-series) transformations. That did not age well, as it led to many adapters that turned out to be unnecessary, and to case distinctions in composites.
I see all your points. Regarding the point-based ChangepointDetector vs. interval-based Segmentor, I guess the choice/trade-off is:
From the skchange perspective, the number of different tasks isn't that big, so I don't see an issue with one base class per task, at least not yet. From the sktime perspective, where the annotator is slightly more general, I can see how it might be better to fit all the tasks into a single base class. The cost is keeping track of all the private "adaptor" methods like _sparse_points_to_dense, _sparse_segments_to_dense, etc., and which restrictions they put on the output types in the end. From a developer/extender perspective, I found this quite involved to keep track of, but maybe I'm overcomplicating things and simply like to keep matters isolated.

One base class split I would consider, though, is to distinguish anomaly detection from change detection. It could turn out hard to maintain a design where the output format of point/collective anomaly detectors could influence the output format of changepoint detectors or segmentors, and vice versa. It is just simpler to think about what the output format of change detectors or segmentors should be without simultaneously taking into account what the format for anomaly detectors should be.
Here is another "compromise" option: there could be a joint base class for all detectors, but multiple sub-base-classes with a clear extension pattern. That is, there would be two or three extension templates depending on the subtask, e.g., changepoint, anomaly, segment.
Isn't that what I've implemented now? Without the explicit extension templates.
Yes, indeed. And we have since decided to move to a joint base class, see sktime/sktime#7323 - so this issue is closed very nicely!
Great package!
I was pointed to this by one of your collaborators, and wanted to let you know that we are currently reworking the anomalies/changepoints interface - the API had some inconsistencies for a while and we are planning to move it to a more mature state over the next minor release cycles.
It would be great to pool ideas and perhaps work on this together!
@Alex-JG3 (sktime core developer) is currently driving the rework.
Relevant issues:
Input and feedback on the interface designs are much appreciated! Criticism especially, as you are a "consumer" of the interface.
Given the current state of skchange, we could also help:
- moving skchange from 3rd party to a 2nd party library, with synced API checks, CI, and indexing of the algorithms through the sktime index, while retaining it as a separate library;
- or moving the algorithms into sktime proper, with ownership and maintenance assigned to authors.

What do you think? FYI @Alex-JG3