MOEAD for a multi-objective minimisation problem #3
Hi @samantha2017, thank you for your interest. Short(ish) answers:
The minimization/maximization is handled by the weights vector; see […]. There is also a specific case that sets solutions above the maximum weight to a penalty value.

The weight parameter in […]. In the 2-objective knapsack problem, the objectives are […], which makes sense: one of the solutions is […]; on the other end, there's a solution with 10 items and a weight of […]. If I edit the weights vector to […], the algorithm explicitly searches for the heaviest and most valuable objects. The solutions are much higher in value in that case, but none of them are lighter than […].

Hope this is helpful. Feel free to continue the discussion, or let me know if there's anything you think can be improved with the code or documentation, and don't hesitate if you have further questions!
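For illustration, a minimal sketch of that penalty idea, loosely modeled on DEAP's knapsack example; the constants, the `items` layout, and the penalty values here are assumptions, not necessarily the repo's exact code:

```python
import random

# Illustrative constants (stand-ins, not necessarily the repo's values).
MAX_ITEM = 50
MAX_WEIGHT = 50

# items: index -> (weight, value)
items = {i: (random.randint(1, 10), random.uniform(0, 100)) for i in range(20)}

def eval_knapsack(individual):
    """Return (weight, value) for a set of item indices; overweight or
    oversized solutions get a penalty fitness so they end up dominated."""
    weight = sum(items[i][0] for i in individual)
    value = sum(items[i][1] for i in individual)
    if len(individual) > MAX_ITEM or weight > MAX_WEIGHT:
        # Penalty: huge weight (a minimized objective), zero value
        # (a maximized objective).
        return 10000, 0
    return weight, value

print(eval_knapsack({0, 1, 2}))
```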
Hi @mbelmadani, Thank you so much for the great explanation, very helpful! And the idea of initial weights rather than uniform ones is really cool, I should work on it!

By the way, the problem I am actually solving is EMO for binary classification, with the minority- and majority-class rates as conflicting objectives. I have a few questions that it would be great if you could help me with, please:

1. In NSGA-II the final internal population is returned as the approximation to the Pareto front (PF), so the number of solutions in NSGA-II's PF equals pop_size. However, in your MOEA/D implementation you use a halloffame, which is a DEAP Pareto front (tools.ParetoFront()). The PF of MOEA/D is the set of solutions in the halloffame, and its size is not necessarily equal to pop_size. For the purpose of comparing the PFs of these two algorithms, is it fair to compare them even though the numbers of solutions are not the same? (I have seen that the original MOEA/D paper (Qingfu Zhang) doesn't use an external population and instead uses the final population. I tried this, but the PF was terrible!)

2. In my problem (an application of EMO) I mainly care about finding a set of solutions with a good trade-off close to the z-point, which means I care less about the diversity of solutions along the PF. With the current MOEA/D I get better solutions close to the ideal point than with NSGA-II (i.e., the middle part of MOEA/D's PF dominates the solutions in NSGA-II's PF). Since I would like to highlight this outcome, do you have any suggestions for appropriate metrics/indicators for the comparison? Hypervolume is not great because in my problem MOEA/D does not explore much, though it exploits well. DEAP has convergence, diversity, and IGD, but I'm not sure any of them is appropriate for this purpose.

Thank you so much in advance for your time,
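To make question 1 concrete, a sketch of one way to put the two fronts on equal footing using DEAP's non-dominated sorting; `pop_nsga2` and `hof_moead` are hypothetical names for the results of the two runs:

```python
from deap import tools

# pop_nsga2: the final NSGA-II population; hof_moead: the MOEA/D
# tools.ParetoFront() hall of fame. To compare like with like, extract
# only the first (non-dominated) front from NSGA-II's population, since
# the raw population may still contain dominated individuals.
first_front = tools.sortNondominated(pop_nsga2, len(pop_nsga2),
                                     first_front_only=True)[0]
print("NSGA-II front size:", len(first_front))
print("MOEA/D front size:", len(hof_moead))
```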
Sorry for the delay,
Maybe an additional suggestion: you could plot the objectives' stats from the log output and see how they individually progress relative to one another. Perhaps one of your objectives is exploited more rapidly by one algorithm than by the other. In my case I could tell which method worked better because one of my objectives was effectively unbounded (a p-value, which can get arbitrarily small) while the other was bounded (accuracy, i.e. a percentage), so I was looking for solutions with the lowest p-value possible while maintaining a reasonable accuracy. It wasn't necessarily true that the absolute lowest p-value was the best solution, but generally speaking it did follow that trend.

In case you're interested, when I was developing an MOEA for my thesis, there were a few assumptions I could make:

a) I had an external validation tool I could use to validate solutions. Specifically, it was for prediction of DNA binding sites, and I was able to check whether I predicted the correct/expected ones and how close my prediction was to the reference target (giving me p-values/E-values). So to me, the best solution set was the Pareto front that gave the best predictions after validation. Maybe you could design a data set where you know what the optimal solution would be and see which method comes the closest.

b) It was very cheap/fast for me to test all the solutions in a PF. However, it was generally not possible to find a consistent rule for which solution in the PF would be the best (the middle one, the one with the highest value on a given objective, etc.), though later I found an objective that produced much smaller fronts (1 or 2 solutions) that were fairly optimal and might have been better in practice. At the time I wasn't concerned about PF size and could simply test everything whenever I had a new PF, and in my case NSGA-IIR was very consistent at identifying the "best" solutions.

Hope this helps, will try to get back sooner this time if you have any questions!
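To make the plotting suggestion concrete, here is a minimal DEAP statistics setup that records each objective separately; the registered names and the logbook usage are assumptions about a typical setup, not necessarily what moead.py does:

```python
import numpy
from deap import tools

# Passing axis=0 makes each statistic an array with one entry per
# objective, rather than a single scalar over all fitness values.
stats = tools.Statistics(key=lambda ind: ind.fitness.values)
stats.register("avg", numpy.mean, axis=0)
stats.register("min", numpy.min, axis=0)
stats.register("max", numpy.max, axis=0)

# After a run that fills a logbook with these stats, each objective's
# trajectory can be pulled out generation by generation, e.g.:
#   gens = logbook.select("gen")
#   obj0_min = [rec[0] for rec in logbook.select("min")]
#   obj1_min = [rec[1] for rec in logbook.select("min")]
# and plotted with matplotlib to compare how each objective progresses
# under the two algorithms.
```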
Hi @mbelmadani, thanks for this thread. I am trying to evaluate the performance of MOEA/D on benchmark functions and was wondering if you have developed a problem file similar to knapsack.py for the benchmarks. Also, since you are using the Tchebycheff approach for decomposition in moead.py, I wanted to make sure I understand what you mean by lines 394 and 395. Is it related to the implementations in jMetal and DEAP, or to the nature of the problem (minimize or maximize)?
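For reference while reading the answer below, the Tchebycheff scalarization from the original MOEA/D paper, as a minimal sketch with illustrative names (whether line 394 implements exactly this form is not confirmed here):

```python
def tchebycheff(fitness, weights, z_star):
    """Tchebycheff scalarization from the original MOEA/D paper:
    g^te(x | w, z*) = max_i w_i * |f_i(x) - z*_i|, to be minimized.
    fitness are the objective values f(x), weights is the subproblem's
    weight vector, and z_star is the reference (ideal) point."""
    return max(w * abs(f - z) for f, w, z in zip(fitness, weights, z_star))

# Example: two objectives, ideal point at the origin.
print(tchebycheff((3.0, 1.0), (0.5, 0.5), (0.0, 0.0)))  # -> 1.5
```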
Hi @NimishVerma, at line 394 of moead.py (commit 4fcedb4): […]

I don't have any other benchmarks. You could take a look at DEAP and jMetal to see if they have any toy examples you could replicate. I think my master's thesis had an implementation of MOEA/D (https://github.com/mbelmadani/motifgp/blob/master/motifgp/motifgp.py), but I did not benchmark/publish any results. If you're using this as a benchmark and are interested in 3-objective problems, consider the following issue: #6. It appears the method used to initialize default weights might have a bug for small population sizes and is overall not very uniform.
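For anyone hitting that weight-initialization issue, a commonly used alternative (not this repo's method) is the simplex-lattice (Das-Dennis) construction, sketched here for illustration:

```python
from itertools import combinations

def simplex_lattice_weights(n_obj, divisions):
    """Generate uniformly spread weight vectors on the simplex
    (Das-Dennis simplex-lattice design): every vector whose components
    are non-negative multiples of 1/divisions summing to 1."""
    weights = []
    # Stars-and-bars: choose n_obj - 1 "bar" positions among
    # divisions + n_obj - 1 slots to enumerate all compositions.
    for bars in combinations(range(divisions + n_obj - 1), n_obj - 1):
        parts = []
        prev = -1
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(divisions + n_obj - 2 - prev)
        weights.append([p / divisions for p in parts])
    return weights

# Example: 3 objectives with 12 divisions -> C(14, 2) = 91 vectors.
w = simplex_lattice_weights(3, 12)
print(len(w), w[0], w[-1])
```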
Hi @mbelmadani, thanks for the quick response. I have already implemented bi-objective benchmarks, so that won't be an issue. I am curious: I flipped line 394 of moead.py (commit 4fcedb4) again since I am working on minimization problems, but I end up getting an empty HOF. Am I doing it wrong by flipping the operator?
Try leaving the code as is, but set your minimization/maximization in the
DEAP creator object:
https://github.com/mbelmadani/moead-py/blob/4fcedb4a4df9bf98634c81a2aac1d42367a9e183/knapsack.py#L93
`1.0` is for maximization objectives and `-1.0` for minimization. This lets
you mix and match direction between objectives.
The order of the weights matches the order of the values returned by the
evaluation function. E.g.
https://github.com/mbelmadani/moead-py/blob/4fcedb4a4df9bf98634c81a2aac1d42367a9e183/knapsack.py#L114
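Concretely, a minimal sketch of that setup, assuming knapsack-style objectives (weight minimized, value maximized); the names here follow DEAP conventions rather than the exact repo code:

```python
from deap import base, creator

# The sign of each weight sets the direction per objective:
# -1.0 minimizes, 1.0 maximizes. Here: minimize the first returned
# value (weight), maximize the second (value).
creator.create("Fitness", base.Fitness, weights=(-1.0, 1.0))
creator.create("Individual", set, fitness=creator.Fitness)

# items: index -> (weight, value); illustrative data.
items = {0: (5, 40.0), 1: (3, 25.0), 2: (8, 10.0)}

def evaluate(individual):
    # Must return values in the same order as the weights tuple above.
    weight = sum(items[i][0] for i in individual)
    value = sum(items[i][1] for i in individual)
    return weight, value

ind = creator.Individual({0, 1})
ind.fitness.values = evaluate(ind)
print(ind.fitness.values)  # (8, 65.0): weight minimized, value maximized
```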
Hi,
I am testing your MOEA/D implementation on a minimisation problem and comparing the results with NSGA-II. The results show that MOEA/D does not explore as much as NSGA-II. Since the current version works for the knapsack problem, should I expect it to work for a minimisation problem as well, or should I make some alterations inside the MOEA/D code to set it up for minimisation?
Thank you,