Please leave a comment if there are any changes or additions that you would like to see in the next version.
Current suggestions.
Changes to package structure
Containerized methods submission
This will mean that a container, instead of a Python function, can be submitted. The default methods will also need to be wrapped inside containers.
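As a rough illustration, a method container could expose a fixed entrypoint that reads the simulated time series from a mounted path and writes its connectivity estimates back. The paths, the (time x node) input layout, and the sliding-window method below are all assumptions for the sketch, not an existing TVC_benchmarker interface.

```python
# entrypoint.py -- a minimal sketch of a containerized method.
# The mounted paths and input layout are assumptions, not a spec.
import numpy as np

INPUT_PATH = "/data/timeseries.npy"     # mounted by the harness (assumed)
OUTPUT_PATH = "/data/tvc_estimate.npy"

def sliding_window_correlation(ts, window=20):
    """Estimate time-varying connectivity with a simple sliding window.

    ts: array of shape (time, nodes).
    Returns an array of shape (windows, nodes, nodes).
    """
    t, n = ts.shape
    est = np.empty((t - window + 1, n, n))
    for i in range(t - window + 1):
        est[i] = np.corrcoef(ts[i:i + window].T)
    return est

if __name__ == "__main__":
    ts = np.load(INPUT_PATH)
    np.save(OUTPUT_PATH, sliding_window_correlation(ts))
```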
Containerized simulation submission
This will entail that people can submit specific simulations to TVC_benchmarker in containers. Some work will be needed on the additional information that must be supplied for inclusion in future reports (e.g. motivations for certain parameters/assumptions). This should hopefully remove all individual biases from tvc_benchmarker (or at least crowdsource the bias).
Different evaluation possibilities
Simulation 4 could instead be evaluated by the accuracy of state membership. This would allow for class-membership methods.
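For illustration, state-membership accuracy could be computed permutation-invariantly, since a method's state labels are only defined up to relabeling. The sketch below assumes one state label per time point and uses Hungarian matching; none of this is an existing tvc_benchmarker API.

```python
# A minimal sketch of permutation-invariant state-membership accuracy.
import numpy as np
from scipy.optimize import linear_sum_assignment

def state_accuracy(true_states, est_states, n_states):
    """Best-permutation accuracy between two label sequences."""
    confusion = np.zeros((n_states, n_states), dtype=int)
    for t, e in zip(true_states, est_states):
        confusion[t, e] += 1
    # Maximize matched counts by minimizing the negated confusion matrix.
    row, col = linear_sum_assignment(-confusion)
    return confusion[row, col].sum() / len(true_states)

# Example: a label swap (0 <-> 1) still scores perfectly.
print(state_accuracy([0, 0, 1, 1, 2], [1, 1, 0, 0, 2], n_states=3))  # 1.0
```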
Metadata for containers
Some methods (e.g. classifiers, or methods that use spectral properties) will only be applicable in certain simulations. Having metadata for both simulations and methods, to match which methods are compatible with which simulations, will assist here.
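One hedged sketch of how such matching could work; the field names and simulation names here are purely illustrative, not a fixed schema:

```python
# Each container ships a small declaration of what its method requires and
# what each simulation provides (illustrative schema only).
method_meta = {
    "name": "spectral_method_x",
    "requires": {"spectral_properties"},
}
simulation_meta = {
    "sim-a": {"provides": set()},
    "sim-b": {"provides": {"spectral_properties", "state_labels"}},
}

def compatible_simulations(method, simulations):
    """Return the simulations that supply everything the method requires."""
    return [name for name, meta in simulations.items()
            if method["requires"] <= meta["provides"]]

print(compatible_simulations(method_meta, simulation_meta))  # ['sim-b']
```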
Continuous integration to evaluate the validity of new methods
At the moment there is some manual work whenever someone submits a new method. This can be automated.
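As a sketch, a CI job could run each submitted method on a small toy simulation and assert basic output sanity before any manual review. The run_method stand-in and the (time, node, node) output convention below are assumptions, not the project's actual submission interface.

```python
# test_submission.py -- a minimal sketch of an automated validity check.
import numpy as np

def run_method(ts, window=20):
    """Stand-in for invoking the submitted container; here a trivial
    sliding-window correlation so the sketch is self-contained."""
    return np.stack([np.corrcoef(ts[i:i + window].T)
                     for i in range(ts.shape[0] - window + 1)])

def test_output_shape():
    ts = np.random.default_rng(0).standard_normal((100, 2))  # toy (time x node) data
    est = run_method(ts)
    assert est.ndim == 3, "expected (time, node, node) connectivity estimates"
    assert est.shape[1:] == (2, 2)
    assert np.isfinite(est).all(), "estimates must not contain NaN/inf"
```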
Changes to default simulations
Add burn-in
When there is an autocorrelation, it may take 1-3 time points for the series to stabilize. Adding a burn-in excludes these points.
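A minimal sketch of the idea, assuming an AR(1)-style autoregressive term; the coefficient and burn-in length are illustrative, not the defaults used by the simulations:

```python
import numpy as np

def simulate_ar1(n_timepoints, phi=0.8, burn_in=3, seed=0):
    """Simulate an AR(1) series and discard the first `burn_in` points."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_timepoints + burn_in)
    for t in range(1, len(x)):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x[burn_in:]  # the pre-stabilization points are excluded

ts = simulate_ar1(200)
```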
Make mu_r and alpha fully independent
A possible reformulation (raised by a reviewer) would allow these two parameters to be independent of each other. However, this needs to be thoroughly tested to confirm it has no effect on other parameters, especially in combination with the other changes that will be made.
Simulate training data
Some methods may tune parameters on training data. One possibility is to provide an extra dataset that those methods can use.
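A minimal sketch, assuming simulations are reproducible from a seed: the training dataset would be generated with the same parameters as the benchmark data but a different seed. The simulate() function here is a stand-in, not one of the actual default simulations.

```python
import numpy as np

def simulate(seed, n_timepoints=500):
    """Stand-in simulation: two correlated Gaussian time series."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, 0.5], [0.5, 1.0]])
    return rng.multivariate_normal(np.zeros(2), cov, size=n_timepoints)

train = simulate(seed=1)  # released for parameter tuning
test = simulate(seed=2)   # held out for the benchmark itself
```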
Add spectral properties
Multiple possibilities here:
Other suggestions welcome
Add multivariate simulations
Multiple possibilities here:
Some parts of this are hard. These multivariate time series will have some modularity/clustering, and this can be inferred from empirical data. However, it has to be decided whether between-network communication should occur at certain events (e.g. all nodes from network A increase their communication with network B) or whether nodes do so independently of each other. The level of synchrony in between-network communication could affect multivariate methods, so it should probably be made a parameter that is varied.
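As a sketch of what such a synchrony parameter could look like (all names and values illustrative): synchrony=1 makes all between-network node pairs share the same communication events, while synchrony=0 makes each pair's events independent.

```python
import numpy as np

def between_network_events(n_timepoints, n_pairs, event_prob=0.1,
                           synchrony=0.5, seed=0):
    """Return a (time, pair) boolean array of coupling events.

    synchrony interpolates between one network-wide event process
    (shared by all node pairs) and fully independent per-pair events.
    """
    rng = np.random.default_rng(seed)
    shared = rng.random(n_timepoints) < event_prob            # network-wide events
    independent = rng.random((n_timepoints, n_pairs)) < event_prob
    use_shared = rng.random((n_timepoints, n_pairs)) < synchrony
    return np.where(use_shared, shared[:, None], independent)

events = between_network_events(300, n_pairs=6)
```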
Timeline for new suggestions
New suggestions will be accepted until summer 2019. Then the ideas will be locked and the coding will begin.
Update March 2020: Other projects have taken priority and I have not been able to spend much time on this yet, but I will try to return to it whenever I have the time.