
Update bounds for StateGoals with target for keep_soft_constraints = False #1101

Open · SGeeversAtVortech opened this issue Feb 11, 2019 · 4 comments


In GitLab by @tpiovesan on Feb 11, 2019, 22:57

With the keep_soft_constraints = False option, for a StateGoal with a target we could update the bounds of the state instead of converting the soft constraints into hard ones.
The main advantage is that we would prevent the creation of slack variables, thus simplifying the task for the solver. A downside is that a solver relaxes constraints more than it relaxes the bounds on variables, so we would create 'harder' constraints. That said, I don't see how this could create real issues.
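
A minimal sketch of the idea (the helper, the state name, and the numbers are hypothetical; target_min / target_max are modelled on the StateGoal attributes):

import numpy as np

def tighten_bounds_for_target(bounds, state, target_min, target_max):
    # Intersect the existing bounds of ``state`` with the goal's target range,
    # instead of adding a hard constraint with slack variables.
    lb, ub = bounds[state]
    bounds[state] = (np.maximum(lb, target_min), np.minimum(ub, target_max))
    return bounds

# Hypothetical usage once the priority containing the goal is done:
bounds = tighten_bounds_for_target({"storage.level": (0.0, 2.0)}, "storage.level", 0.5, 1.2)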

One has to make sure that, when using HomotopyMixin, the original bounds are used at the first priority.

@baayen: opinions?


In GitLab by @baayen on Feb 13, 2019, 07:50

I think this is a nice idea worth exploring.

Because the implementation has the potential to be quite hairy (e.g., what to do with subclasses overriding bounds()?), I would suggest doing some benchmarking first to see whether the effort is worth it.
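
To illustrate the potential hairiness: a hypothetical subclass (class and state names made up; module paths as in RTC-Tools) that overrides bounds() and sets a key unconditionally would silently discard any bound updates injected by the mixin:

from rtctools.optimization.collocated_integrated_optimization_problem import (
    CollocatedIntegratedOptimizationProblem,
)
from rtctools.optimization.goal_programming_mixin import GoalProgrammingMixin

class Example(GoalProgrammingMixin, CollocatedIntegratedOptimizationProblem):
    def bounds(self):
        bounds = super().bounds()
        # Overwrites whatever the mixin put there, including any
        # goal-derived bound updates.
        bounds["storage.level"] = (0.0, 10.0)
        return bounds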


In GitLab by @vreeken on Feb 28, 2019, 16:47

Some findings:

  • Moving these types of constraints to bounds seems to help with singular-matrix warnings (IPOPT + ma97) in a polder case. This seems to be related to #1101 (Update bounds for StateGoals with target for keep_soft_constraints = False).
  • Cannot move all StateGoal constraints to bounds, as self.bounds() does not support ensemble_member as an argument.
  • Moving to bounds might also affect goals that are initialized with a delay (e.g., when a priority is already done) and that refer to bounds() for their function range to scale themselves (like StateGoals do); see the sketch after the tl;dr below.

tl;dr:
Getting rid of these simple constraints made by GPMixin helps, but we cannot solve it in GPMixin.
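
To illustrate the last point: a StateGoal derives its function_range from bounds() at construction time, so a goal instantiated after the bounds have been tightened would scale itself differently. A simplified sketch (not the actual RTC-Tools implementation):

class ExampleStateGoal:
    # Simplified sketch of how a StateGoal scales itself using bounds().
    state = "storage.level"

    def __init__(self, optimization_problem):
        # If the bounds were tightened in an earlier priority, a goal
        # constructed afterwards sees a narrower function_range, i.e.
        # a different scaling.
        self.function_range = optimization_problem.bounds()[self.state]

    def function(self, optimization_problem, ensemble_member):
        return optimization_problem.state(self.state)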



In GitLab by @vreeken on Feb 28, 2019, 19:14

A quite slow but much more "catch-all" approach would be to preprocess in OptimizationProblem. It takes about 5 seconds to do this for a 96-hour run of Rijnland (80,000 variables/constraints). About 80% of the time is spent calculating the Jacobian. The loop runs three times (i.e., it performs substitutions twice; in the last iteration we find there is nothing left to substitute).

import casadi as ca
import numpy as np


def get_xvals(g_jac_csr, inds, b_vals, c_vals):
    """
    Turns simple constraints in the coefficient matrix ``g_jac_csr`` into a
    constant. For example, the constraint "a * X + b = c" would result in
    X = (c - b) / a.

    We assume that ``inds`` only selects rows containing a single coefficient.

    :param g_jac_csr: A sparse CSR matrix representation of the Jacobian of the constraints
    :param inds: Boolean mask of the rows/constraints that should be turned into values
    :param b_vals: The constants in the constraints
    :param c_vals: The right-hand sides
    :returns: A tuple of (X values, X indices, flip), where flip marks negative coefficients
    """
    # Each selected row contains exactly one entry, so ``indptr[row]`` points
    # directly at that entry in ``data``/``indices``
    offsets = g_jac_csr.indptr[np.where(inds)]
    a_coefficients = g_jac_csr.data[offsets]
    x_inds = g_jac_csr.indices[offsets]
    b_coefficients = b_vals[inds]
    c_coefficients = c_vals[inds]
    x_vals = (c_coefficients - b_coefficients) / a_coefficients
    return x_vals, x_inds, (a_coefficients < 0)

def move_constraints_to_bounds(x, lbx, ubx, g, lbg, ubg, f):
    lbg = ca.veccat(*lbg).toarray().ravel()
    ubg = ca.veccat(*ubg).toarray().ravel()

    while True:
        # Find the linear constraints: evaluate the Jacobian at x = NaN, so
        # that entries depending nonlinearly on x become NaN, while entries
        # of linear constraints remain constant.
        g_sjac = ca.Function('Af', [x], [ca.jacobian(g, x)])
        g_sb = ca.Function('bf', [x], [g])

        g_jac_eval = g_sjac(np.nan)
        g_jac_csc = g_jac_eval.tocsc()
        g_jac_csr = g_jac_csc.tocsr()

        # Evaluating g at x = 0 yields the constant term b of each (linear) row
        b = g_sb(0.0).toarray().ravel()

        g_is_linear = ~np.isnan(g_jac_csc.sum(1).A.ravel())  # Summing this axis is faster with CSC compared to CSR

        # Find the rows in the jacobian with only a single entry
        g_single_variable = (np.diff(g_jac_csr.indptr) == 1)

        # The intersection of all selections are constraints like we want
        g_constant_assignment = g_is_linear & g_single_variable

        # Find the rows which are equality constraints
        g_eq_constraint = (lbg == ubg)
        x_vals, x_inds, _ = get_xvals(g_jac_csr, g_constant_assignment & g_eq_constraint, b, lbg)
        
        # TODO: Insert check that value is within bounds
        lbx[x_inds] = x_vals
        ubx[x_inds] = x_vals

        # Find the rows which are inequality constraints >= lbg
        g_ineq_constraint_lbg = np.isfinite(lbg) & np.isposinf(ubg)
        x_vals, x_inds, flip_inds = get_xvals(g_jac_csr, g_constant_assignment & g_ineq_constraint_lbg, b, lbg)
        lbx[x_inds[~flip_inds]] = np.maximum(x_vals[~flip_inds], lbx[x_inds[~flip_inds]])
        ubx[x_inds[flip_inds]] = np.minimum(x_vals[flip_inds], ubx[x_inds[flip_inds]])

        # Find the rows which are inequality constraints <= ubg
        g_ineq_constraint_ubg = np.isneginf(lbg) & np.isfinite(ubg)
        x_vals, x_inds, flip_inds = get_xvals(g_jac_csr, g_constant_assignment & g_ineq_constraint_ubg, b, ubg)
        lbx[x_inds[flip_inds]] = np.maximum(x_vals[flip_inds], lbx[x_inds[flip_inds]])
        ubx[x_inds[~flip_inds]] = np.minimum(x_vals[~flip_inds], ubx[x_inds[~flip_inds]])

        # Remove the constraints that we will move to bounds
        inds = g_constant_assignment & (g_eq_constraint | g_ineq_constraint_lbg | g_ineq_constraint_ubg)
        if not np.any(inds):
            # No more replacements to do
            break
        g = g[np.flatnonzero(~inds).tolist()]  # Indexing is only fast with a list of _Python_ ints
        lbg = lbg[~inds]
        ubg = ubg[~inds]

        # Replace constant variables in constraints
        x_inds = (lbx == ubx)
        x_vals = lbx[x_inds]
        f, g = ca.substitute([f, g], [x[np.flatnonzero(x_inds).tolist()]], [x_vals])

    return f, g, lbg, ubg

I call it like this in OptimizationProblem:

nlp['f'], nlp['g'], lbg, ubg = move_constraints_to_bounds(nlp['x'], lbx, ubx, nlp['g'], lbg, ubg, nlp['f'])
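
For the TODO in the snippet above, a minimal sketch of the missing check for the equality case could be (to be inserted at the TODO; x_vals, x_inds, lbx and ubx as in the function):

        # Refuse to pin a variable outside its current simple bounds, which
        # would otherwise make the problem silently infeasible.
        out_of_bounds = (x_vals < lbx[x_inds]) | (x_vals > ubx[x_inds])
        if np.any(out_of_bounds):
            raise ValueError(
                "Equality constraints fix variables outside their bounds "
                "(state vector indices: {})".format(x_inds[out_of_bounds]))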


In GitLab by @vreeken on Aug 30, 2019, 17:15

We could override transcribe() in GPMixin and change the lbx / ubx directly ourselves. The only thing we need is an additional routine to find out where each variable lives in the state vector. This information is available internally, but not easily available publicly yet (self._CollocatedIntegratedOptimizationProblem__indices). Of course, one could use the public self.state_vector() and check which indices it is indexing, but that is probably a bit slow.

Not sure how "pretty" overriding transcribe() is though; it's very low level.

The code snippet above is still much better, though, as it fixes a lot more potential problems.
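
For reference, a rough sketch of what such a transcribe() override could look like. The return signature follows CollocatedIntegratedOptimizationProblem.transcribe(); the _bound_updates mapping and the _state_vector_indices() helper are hypothetical:

import numpy as np

from rtctools.optimization.collocated_integrated_optimization_problem import (
    CollocatedIntegratedOptimizationProblem,
)

class BoundUpdatingMixin(CollocatedIntegratedOptimizationProblem):
    # Hypothetical mapping of state name -> (target_min, target_max),
    # to be filled by the goal programming logic.
    _bound_updates = {}

    def transcribe(self):
        discrete, lbx, ubx, lbg, ubg, x0, nlp = super().transcribe()
        for state, (target_min, target_max) in self._bound_updates.items():
            # Hypothetical helper; the real mapping currently lives in the
            # private self.__indices attribute.
            inds = self._state_vector_indices(state)
            lbx[inds] = np.maximum(lbx[inds], target_min)
            ubx[inds] = np.minimum(ubx[inds], target_max)
        return discrete, lbx, ubx, lbg, ubg, x0, nlp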
