
Clarification on Constraints and Penalties #177

Open
yhamadi75 opened this issue Nov 12, 2024 · 2 comments

@yhamadi75

Hi everyone,

I’m working on a mono-objective optimization problem with penalties based on constraint violations. Currently, the evaluate function calculates the objectives, then calls __evaluate_constraints, which sets penalties. The full penalty score is stored in solution.constraints[0].

However, this doesn’t seem to guide the search as expected. Would it help to declare one constraint for each variable (I have 5–30 variables) and update solution.constraints[i] based on violations for each associated variable? This might give more precise guidance than a single penalty score.
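To make the two options concrete, here is a minimal, framework-free sketch (the bound checks and function names are hypothetical illustrations, not part of jMetalPy):

```python
def per_variable_violations(x, lower, upper):
    """One constraint entry per variable: 0.0 when the variable is in
    bounds, otherwise the negative signed distance to the nearest bound."""
    out = []
    for xi, lo, hi in zip(x, lower, upper):
        if xi < lo:
            out.append(xi - lo)   # negative violation
        elif xi > hi:
            out.append(hi - xi)   # negative violation
        else:
            out.append(0.0)       # satisfied
    return out

def single_penalty(x, lower, upper):
    """The aggregated variant: one value holding the total penalty."""
    return sum(per_variable_violations(x, lower, upper))
```

With the first variant, `solution.constraints` would hold one entry per variable; with the second, everything is folded into `solution.constraints[0]`.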

Also, I’ve found that treating penalties as a second objective works better, achieving a penalty of 0 while keeping good values for the first objective.

Thanks for your support!

@ajnebro
Contributor

ajnebro commented Nov 14, 2024

Hi.
If you add a large number of constraints, that could lead to many infeasible solutions, which would make the search more difficult.

Have you defined the constraints as inequalities according to the scheme expression >= 0.0?
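As an illustration of that scheme (the constraint itself is hypothetical, not from the issue): an inequality such as g(x) <= b is rewritten so that a non-negative value means "satisfied":

```python
def leq_as_geq_zero(g_value, bound):
    """Rewrite g(x) <= bound as (bound - g(x)) >= 0.0 and
    return the left-hand side; a negative result means violated."""
    return bound - g_value

# Example: x0 + x1 <= 10 for x = (3, 4) gives 10 - 7 = 3 (satisfied)
```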

Antonio

@yhamadi75
Author

Hello Antonio,
In my setting, each constraint can be transformed into an inequality of the form expression >= 0.
I have done the following:

  • the problem is mono-objective with k constraints;
  • evaluate() computes the objective of the current solution, then calls __evaluate_constraints;
  • in __evaluate_constraints, an unsatisfied constraint c gets a negative penalty: solution.constraints[c] = -penalty.
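The steps above can be sketched in pure Python with a stub solution object (the names mirror the description, not jMetalPy's internals):

```python
class StubSolution:
    """Minimal stand-in for a jMetalPy solution (illustrative only)."""
    def __init__(self, variables, number_of_constraints):
        self.variables = variables
        self.objectives = [0.0]
        self.constraints = [0.0] * number_of_constraints

def evaluate_constraints(solution, constraint_funcs, penalty=1.0):
    """Each function returns >= 0.0 when satisfied; an unsatisfied
    constraint c gets solution.constraints[c] = -penalty."""
    for c, g in enumerate(constraint_funcs):
        solution.constraints[c] = 0.0 if g(solution.variables) >= 0.0 else -penalty
```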

I am using NSGAII with the dominance_comparator:

from jmetal.algorithm.multiobjective.nsgaii import NSGAII
from jmetal.operator import PolynomialMutation, SBXCrossover
from jmetal.util.comparator import DominanceWithConstraintsComparator
from jmetal.util.termination_criterion import StoppingByEvaluations

algorithm = NSGAII(
    problem=problem,
    population_size=100,
    offspring_population_size=100,
    mutation=PolynomialMutation(probability=1.0 / problem.number_of_variables(), distribution_index=20),
    crossover=SBXCrossover(probability=1.0, distribution_index=20),
    termination_criterion=StoppingByEvaluations(max_evaluations=max_evaluations),
    dominance_comparator=DominanceWithConstraintsComparator(),
)
As a result, invalid solutions are filtered out, but in the end the objective value is not great. I compared this against a bi-objective NSGAII that minimizes both the penalty and the objective.

How is "solution.constraints[c]" exploited by DominanceWithConstraintsComparator()? Is there a way to use other comparators with NSGAII, or to parameterize this one? Any explanation of how constraints are handled would be helpful.
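For what it is worth, the usual jMetal-style scheme (and, as far as I can tell, what a constraint-aware dominance comparator implements) is: less total violation beats more, and Pareto dominance on the objectives decides among equally-violated (including feasible) solutions. A framework-free sketch, assuming constraints[c] < 0 encodes a violation:

```python
def overall_violation(constraints):
    """Sum of the negative entries; 0.0 means feasible."""
    return sum(c for c in constraints if c < 0.0)

def dominates(obj_a, obj_b):
    """Pareto dominance for minimization."""
    return all(a <= b for a, b in zip(obj_a, obj_b)) and any(
        a < b for a, b in zip(obj_a, obj_b))

def constrained_compare(sol_a, sol_b):
    """-1 if sol_a is preferred, 1 if sol_b, 0 if neither.
    Each sol_* is an (objectives, constraints) pair."""
    va, vb = overall_violation(sol_a[1]), overall_violation(sol_b[1])
    if va != vb:                        # less total violation wins
        return -1 if va > vb else 1
    if dominates(sol_a[0], sol_b[0]):
        return -1
    if dominates(sol_b[0], sol_a[0]):
        return 1
    return 0
```

Note that under this scheme the comparator only sees the aggregated violation, which is one reason a constant "-1" per violated constraint gives the search little to work with.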

Lastly, would it be helpful for unsatisfied constraints to get a negative penalty proportional to their degree of violation? For now they are just "counted" with "-1".
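Degree-proportional penalties usually give the comparator a gradient to follow. A minimal sketch of the two encodings (the g values are hypothetical, with g >= 0 meaning satisfied):

```python
def counted(g_values):
    """Current encoding: every violated constraint contributes -1."""
    return [-1.0 if g < 0.0 else 0.0 for g in g_values]

def proportional(g_values):
    """Alternative: keep the (negative) violation degree itself."""
    return [g if g < 0.0 else 0.0 for g in g_values]
```

With the counted encoding, two solutions violating the same constraint look identical even if one is nearly feasible; the proportional encoding lets the comparator rank them.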

What would be the simplest way, maybe through an "observer", to know the proportion of non-viable solutions in each population?
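One possible shape for such an observer (a sketch, not tested against the library: I am assuming jMetalPy's convention that observers are plain objects with an update(*args, **kwargs) method, registered via algorithm.observable.register(...), which receive the current population under a SOLUTIONS keyword):

```python
class FeasibilityObserver:
    """Tracks the fraction of infeasible solutions per notification.
    Assumes a constraint value < 0.0 marks a violation."""

    def __init__(self):
        self.history = []   # one infeasible ratio per update

    def update(self, *args, **kwargs):
        solutions = kwargs.get("SOLUTIONS") or []
        if not solutions:
            return
        infeasible = sum(
            1 for s in solutions if any(c < 0.0 for c in s.constraints))
        ratio = infeasible / len(solutions)
        self.history.append(ratio)
        print(f"infeasible: {ratio:.2%}")

# algorithm.observable.register(FeasibilityObserver())  # assumed registration API
```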

Thanks for your support!
