Clarification on Constraints and Penalties #177
Hi. Have you defined the constraints as inequalities according to the scheme expression >= 0.0? Antonio
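For illustration, here is a minimal, library-agnostic sketch of that scheme (the constraint expressions and values below are made up): each constraint is written so that it must evaluate to >= 0.0, the expression's value is what gets stored, and a negative number therefore encodes the degree of violation directly.

```python
# Hypothetical constraints written in the "expression >= 0.0" form:
#   g1: x[0] + x[1] - 1 >= 0      g2: 4 - x[0] >= 0
def evaluate_constraints(x):
    g1 = x[0] + x[1] - 1.0
    g2 = 4.0 - x[0]
    return [g1, g2]  # copy these values into solution.constraints

values = evaluate_constraints([0.2, 0.3])
overall_violation = sum(v for v in values if v < 0.0)  # 0.0 when feasible, negative otherwise
print(values, overall_violation)
```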
Hello Antonio,
I am using NSGAII with the dominance_comparator. How is solution.constraints[c] exploited by the DominanceWithConstraintsComparator()? Is there a way to use other comparators with NSGAII, or to parameterize this one? Any explanation of how constraints are handled would be helpful. Also, would it help if unsatisfied constraints received a negative penalty proportional to their degree of violation? For now they are just "counted" with "-1". Finally, what would be the simplest way, perhaps through an "observer", to know the proportion of non-viable solutions in each population? Thanks for your support!
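For reference, this is roughly what I have in mind (a sketch only: `my_problem`, the operator settings, and the exact import paths are placeholders and may need adjusting for the jMetalPy version in use). The comparator would be passed through the `dominance_comparator` argument, and an observer registered on `algorithm.observable` would receive the current population in its `update()` call, from which the infeasible fraction can be computed.

```python
# Sketch: swap the dominance comparator and track the infeasible ratio per generation.
# Assumed imports/signatures; verify against your installed jMetalPy version.
from jmetal.algorithm.multiobjective.nsgaii import NSGAII
from jmetal.core.observer import Observer
from jmetal.operator import PolynomialMutation, SBXCrossover
from jmetal.util.comparator import DominanceWithConstraintsComparator
from jmetal.util.termination_criterion import StoppingByEvaluations


class InfeasibleRatioObserver(Observer):
    """Prints the share of solutions with at least one violated constraint."""

    def update(self, *args, **kwargs):
        solutions = kwargs.get("SOLUTIONS", [])
        if solutions:
            infeasible = sum(1 for s in solutions if any(c < 0.0 for c in s.constraints))
            print(f"infeasible ratio: {infeasible / len(solutions):.2%}")


algorithm = NSGAII(
    problem=my_problem,  # placeholder: your constrained problem instance
    population_size=100,
    offspring_population_size=100,
    mutation=PolynomialMutation(probability=0.05, distribution_index=20.0),
    crossover=SBXCrossover(probability=0.9, distribution_index=20.0),
    termination_criterion=StoppingByEvaluations(max_evaluations=25000),
    dominance_comparator=DominanceWithConstraintsComparator(),  # or another Comparator
)
algorithm.observable.register(InfeasibleRatioObserver())
algorithm.run()
```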
Hi everyone,
I’m working on a mono-objective optimization problem with penalties based on constraint violations. Currently, the evaluate function calculates the objectives, then calls __evaluate_constraints, which sets penalties. The full penalty score is stored in solution.constraints[0].
However, this doesn’t seem to guide the search as expected. Would it help to declare one constraint for each variable (I have 5–30 variables) and update solution.constraints[i] based on violations for each associated variable? This might give more precise guidance than a single penalty score.
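Something like the following is what I have in mind (a sketch with hypothetical per-variable bound constraints, since my real constraints are not shown here): each variable gets its own entry in solution.constraints, holding a negative value proportional to its violation rather than a flat penalty.

```python
def evaluate_constraints_per_variable(solution, lower_bound, upper_bound):
    """One constraint per variable, stored as an 'expression >= 0.0' value."""
    for i, x in enumerate(solution.variables):
        if x < lower_bound[i]:
            solution.constraints[i] = x - lower_bound[i]    # negative: how far below the bound
        elif x > upper_bound[i]:
            solution.constraints[i] = upper_bound[i] - x    # negative: how far above the bound
        else:
            solution.constraints[i] = 0.0                   # satisfied
```

The problem would of course need to declare as many constraints as variables so that solution.constraints has one slot per variable.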
Also, I’ve found that treating penalties as a second objective works better, achieving a penalty of 0 while keeping good values for the first objective.
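Concretely, that bi-objective variant looks roughly like this (assumed helper names; the actual objective and violation computations are whatever the problem already defines):

```python
def evaluate(self, solution):
    solution.objectives[0] = self.compute_cost(solution)    # hypothetical: original objective
    violations = self.compute_violations(solution)          # hypothetical: non-negative violation degrees
    solution.objectives[1] = sum(violations)                 # second objective: drive total penalty to 0
    return solution
```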
Thanks for your support!