Switch to Project.toml, require Julia 1, test agreement with CUTEst #36
Conversation
Thanks for the update. The problem we have here is that we use these JuMP models through NLPModelsJuMP, which uses MathProgBase. So, to update to JuMP 0.20, we'd need to update NLPModelsJuMP to use MathOptInterface. For now we need to use JuMP 0.18. To compute the objective at various points, you can use NLPModelsJuMP:

```julia
julia> model = srosenbr(2)

julia> nlp = MathProgNLPModel(model)

julia> obj(nlp, [0.0; 0.1])
2.0
```
Ah, that's fine then; this can wait until that's ready. Thanks for the very helpful tip about NLPModels (I should have figured that out on my own). I have an application where I'd like to evaluate the objective for intervals as a way of computing rigorous bounds. But I get this:

```julia
julia> obj(nlp, [0.0; 0.1])
2.0

julia> using IntervalArithmetic

julia> iv(x) = prevfloat(x)..nextfloat(x)
iv (generic function with 1 method)

julia> obj(nlp, iv.([0.0; 0.1]))
ERROR: MethodError: no method matching forward_eval(::Array{Float64,1}, ::Array{Float64,1}, ::Array{ReverseDiffSparse.NodeData,1}, ::SparseArrays.SparseMatrixCSC{Bool,Int64}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Interval{Float64},1}, ::Array{Float64,1}, ::Array{Float64,1}, ::Array{Float64,1}; user_operators=ReverseDiffSparse.UserOperatorRegistry(Dict{Symbol,Int64}(), MathProgBase.SolverInterface.AbstractNLPEvaluator[], Dict{Symbol,Int64}(), Any[], Any[], Any[]))
Closest candidates are:
  forward_eval(::AbstractArray{T,1}, ::AbstractArray{T,1}, ::AbstractArray{ReverseDiffSparse.NodeData,1}, ::Any, ::Any, ::Any, ::AbstractArray{T,1}, ::Any, ::Any, ::Any; user_operators) where T at /home/tim/.julia/packages/ReverseDiffSparse/BQuct/src/forward.jl:20
  forward_eval(::AbstractArray{T,1}, ::AbstractArray{T,1}, ::AbstractArray{ReverseDiffSparse.NodeData,1}, ::Any, ::Any, ::Any, ::AbstractArray{T,1}, ::Any, ::Any) where T at /home/tim/.julia/packages/ReverseDiffSparse/BQuct/src/forward.jl:20 got unsupported keyword argument "user_operators"
  forward_eval(::AbstractArray{T,1}, ::AbstractArray{T,1}, ::AbstractArray{ReverseDiffSparse.NodeData,1}, ::Any, ::Any, ::Any, ::AbstractArray{T,1}, ::Any) where T at /home/tim/.julia/packages/ReverseDiffSparse/BQuct/src/forward.jl:20 got unsupported keyword argument "user_operators"
Stacktrace:
 [1] forward_eval_all(::JuMP.NLPEvaluator, ::Array{Interval{Float64},1}) at /home/tim/.julia/packages/JuMP/I7whV/src/nlp.jl:445
 [2] eval_f(::JuMP.NLPEvaluator, ::Array{Interval{Float64},1}) at /home/tim/.julia/packages/JuMP/I7whV/src/nlp.jl:477
 [3] obj(::MathProgNLPModel, ::Array{Interval{Float64},1}) at /home/tim/.julia/packages/NLPModelsJuMP/Ao59I/src/mpb_model.jl:135
 [4] top-level scope at REPL[11]:1
```

Any thoughts?
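(One possible workaround, not part of either package's API: under JuMP 0.18 the objective can be extracted as an expression graph via `MathProgBase.obj_expr` and evaluated generically, sidestepping the `Float64` work arrays inside ReverseDiffSparse. A minimal sketch; the helper names `subst` and `obj_generic` are hypothetical, and this is slow but type-generic:)

```julia
using JuMP, MathProgBase

# Substitute numeric values for the x[i] placeholders in the expression
# graph; variables appear as Expr(:ref, :x, i) in MathProgBase graphs.
subst(ex, x) = ex   # numbers and operator symbols pass through unchanged
function subst(ex::Expr, x)
    if ex.head == :ref && ex.args[1] == :x
        return x[ex.args[2]]   # replace x[i] by its value (e.g. an Interval)
    end
    return Expr(ex.head, map(a -> subst(a, x), ex.args)...)
end

# Evaluate the objective of a JuMP 0.18 model at x, for any number type.
function obj_generic(model, x)
    d = JuMP.NLPEvaluator(model)
    MathProgBase.initialize(d, [:ExprGraph])
    return eval(subst(MathProgBase.obj_expr(d), x))
end
```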
Unfortunately, that issue is in a very large backlog because it involves understanding MOI internals. If you change the […]. On the […].
It is already set to […].
Sorry, I meant fix at 0.18, so it uses the MPB version of JuMP.
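(For context: since this PR switches to Project.toml, pinning JuMP to the MathProgBase era could be expressed with a compat entry like the sketch below; the exact bounds are an assumption, not taken from the PR.)

```toml
[compat]
JuMP = "~0.18"   # stay on the MathProgBase-based JuMP (assumed bound)
julia = "1"
```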
Also fixes deprecated syntax
OK, I reverted to JuMP 0.18. More importantly, I discovered a bug in one of the functions ([…]):

```julia
julia> length(problem_status)
168

julia> sum(kv[2].samexval for kv in problem_status)
144

julia> sum(kv[2].sameobjval for kv in problem_status)
137

julia> sum(kv[2].samegradval for kv in problem_status)
124
```

Looks like there is probably stuff to fix. Is it an explicit goal to default to the same problem size as CUTEst, or is that one area where disagreement is allowed? (Note that the remaining fields redefine the problem so it has the same number of vars as CUTEst.)
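(The comparison driver itself isn't shown in the thread; below is a hypothetical sketch of how one `problem_status` entry might be computed, with the `compare` function and its use of default problem constructors assumed:)

```julia
using CUTEst, NLPModels, NLPModelsJuMP, OptimizationProblems

# Build the CUTEst and JuMP versions of one problem and compare the
# starting point, objective value, and gradient at the CUTEst start.
function compare(probname::String)
    cmodel = CUTEstModel(uppercase(probname))
    jmodel = MathProgNLPModel(getfield(OptimizationProblems,
                                       Symbol(probname))(cmodel.meta.nvar))
    x0 = cmodel.meta.x0
    status = (samexval    = jmodel.meta.x0 ≈ x0,
              sameobjval  = obj(jmodel, x0) ≈ obj(cmodel, x0),
              samegradval = grad(jmodel, x0) ≈ grad(cmodel, x0))
    finalize(cmodel)   # release the CUTEst problem
    return status
end
```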
Thanks for the tests. I'm wary of a comparison with CUTEst because bugs in the SIF files are very hard to detect (and there must be some). The problems here have been implemented directly from the reference papers or reports. I would rather compare against AMPL problems (with AmplNLReader), but not all of them have been coded up in AMPL.
Here are my fixes so far. I am getting info mostly by reference to https://github.com/mpf/Optimization-Test-Problems, and with one exception those have agreed with CUTEst:

```diff
diff --git a/src/arglina.jl b/src/arglina.jl
index d0beada..c910649 100644
--- a/src/arglina.jl
+++ b/src/arglina.jl
@@ -12,7 +12,7 @@
export arglina
"Linear function with `n` parameters and `m` observations - full rank"
-function arglina(n::Int=100, m::Int=200)
+function arglina(n::Int=100, m::Int=2n)
m < n && @warn("arglina: must have m ≥ n")
m = max(m, n)
diff --git a/src/arglinb.jl b/src/arglinb.jl
index 2b207cc..59edf0e 100644
--- a/src/arglinb.jl
+++ b/src/arglinb.jl
@@ -12,7 +12,7 @@
export arglinb
"Linear function with `n` parameters and `m` observations - rank 1"
-function arglinb(n::Int=10, m::Int=20)
+function arglinb(n::Int=10, m::Int=2n)
m < n && @warn("arglinb: must have m ≥ n")
m = max(m, n)
diff --git a/src/arglinc.jl b/src/arglinc.jl
index defb46a..5b8d531 100644
--- a/src/arglinc.jl
+++ b/src/arglinc.jl
@@ -12,7 +12,7 @@
export arglinc
"Linear function with `n` parameters and `m` observations - rank 1, zero columns and rows"
-function arglinc(n::Int=10, m::Int=20)
+function arglinc(n::Int=10, m::Int=2n)
m < n && @warn("arglinc: must have m ≥ n")
m = max(m, n)
diff --git a/src/beale.jl b/src/beale.jl
index 1141436..5e68118 100644
--- a/src/beale.jl
+++ b/src/beale.jl
@@ -22,7 +22,7 @@ function beale(args...)
@NLobjective(
nlp,
Min,
- (1.5 + x[1] * (1.0 - x[2]))^2 + (2.25 + x[1] * (1.0 - x[2]^2))^2 + (2.625 + x[1] * (1.0 - x[2]^3))^2
+ (-1.5 + x[1] * (1.0 - x[2]))^2 + (-2.25 + x[1] * (1.0 - x[2]^2))^2 + (-2.625 + x[1] * (1.0 - x[2]^3))^2
)
return nlp
diff --git a/src/broydn7d.jl b/src/broydn7d.jl
index 2bdcee4..86b3866 100644
--- a/src/broydn7d.jl
+++ b/src/broydn7d.jl
@@ -50,14 +50,14 @@ function broydn7d(n :: Int=100, p :: Float64=7/3)
nlp = Model()
- @variable(nlp, x[i=1:n], start=(-1.0))
+ @variable(nlp, x[i=1:n], start=1.0)
@NLobjective(
nlp,
Min,
- abs(1 - 2 * x[2] + (3 - x[1] / 2) * x[1])^p +
- sum(abs(1 - x[i-1] - 2 * x[i+1] + (3 - x[i] / 2) * x[i])^p for i=2:n-1) +
- abs(1 - x[n-1] + (3 - x[n] / 2) * x[n])^p +
+ abs(1 - 2 * x[2] + (3 - 2x[1]) * x[1])^p +
+ sum(abs(1 - x[i-1] - 2 * x[i+1] + (3 - 2x[i]) * x[i])^p for i=2:n-1) +
+ abs(1 - x[n-1] + (3 - 2x[n]) * x[n])^p +
sum(abs(x[i] + x[i + n2])^p for i=1:n2)
)
diff --git a/src/brybnd.jl b/src/brybnd.jl
index a4f1ebe..8d5ae88 100644
--- a/src/brybnd.jl
+++ b/src/brybnd.jl
@@ -44,7 +44,7 @@ function brybnd(n :: Int=100; ml :: Int=5, mu :: Int=1)
nlp = Model()
- @variable(nlp, x[i=1:n], start=(-1.0))
+ @variable(nlp, x[i=1:n], start=1.0) # Note: some start at -1, e.g., https://github.com/mpf/Optimization-Test-Problems/blob/master/cute/brybnd.mod
@NLobjective(
nlp,
@@ -54,11 +54,12 @@ function brybnd(n :: Int=100; ml :: Int=5, mu :: Int=1)
x[i] * (2 + 5 * x[i]^2) + 1 -
sum(
x[j] * (1 + x[j])
- for j = max(1, i-ml) : min(n, i+mu) if j != i
+ for j = (max(1, i-ml) : min(n, i+mu)) if j != i # does this != syntax actually work?
)
)^2 for i=1:n
)
)
+ error("this is broken")
return nlp
end
diff --git a/src/nondia.jl b/src/nondia.jl
index 0e44632..cd2a859 100644
--- a/src/nondia.jl
+++ b/src/nondia.jl
@@ -33,7 +33,7 @@ function nondia(n :: Int=100)
@NLobjective(
nlp,
Min,
- (x[1] - 1.0)^2 + sum((100.0*x[1] - x[i-1]^2)^2 for i=2:n)
+ (x[1] - 1.0)^2 + 100*sum((x[1] - x[i-1]^2)^2 for i=2:n)
)
return nlp
diff --git a/src/scosine.jl b/src/scosine.jl
index a387734..c2ccb41 100644
--- a/src/scosine.jl
+++ b/src/scosine.jl
@@ -26,10 +26,7 @@ function scosine(n :: Int=100)
n = max(2, n)
nlp = Model()
- p = zeros(n)
- for i=1:n
- p[i] = exp(6.0 * (i-1) / (n-1))
- end
+ p = [exp(12.0 * (i-1) / (n-1)) for i = 1:n]
@variable(nlp, x[i=1:n], start=1.0/p[i])
```
Ah, now I see there is https://github.com/JuliaSmoothOptimizers/AmplNLReader.jl. I don't have AMPL, unfortunately.
Many thanks for checking!
This is all quite difficult to check because there are variants of all these problems in the literature and existing models. Checking is mostly done by visual inspection. That's why we tried to give extensive references in the comments.
Wow, really interesting! Is there any kind of central place for reporting errors in AMPL models? That seems pretty important, given that people use what are nominally the same test problems across many papers. (And, do they come with version numbers so one can report which set of bugs one was working with? 😄 ) Is it safe to rule out the possibility of a typo in a paper, and that test suites have incorporated corrections? Seems like that should be clearly documented if it happened, so I suppose it's safe to take the paper as the ultimate authority unless there is a clear comment to the contrary. For the record I provided the list in #37. I've reduced this to the fixes. What do you want to do about the part of this that is a comparison with CUTEst? Keep it or discard it?
Found more errors ([…]).
src/morebv.jl
```diff
@@ -41,7 +41,9 @@ function morebv(n :: Int=100)
   @NLobjective(
     nlp,
     Min,
-    sum((2.0 * x[i] - x[i-1] - x[i+1] + (h^2 / 2.0) * (x[i] + (i - 1) * h + 1)^3)^2 for i=2:n-1)
+    sum((2.0 * x[i] - x[i-1] - x[i+1] + (h^2 / 2.0) * (x[i] + (i - 1) * h + 1)^3)^2 for i=2:n-1) +
```
Thanks for the further improvements. I now think the initialization setting `x0[1] = x0[n] = 0` was a misreading of the problem, and that http://www.cs.cas.cz/matonoha/download/V1081.pdf suggests starting all points at 0.5. I made that further change. But note that https://github.com/mpf/Optimization-Test-Problems/blob/master/cute/morebv.mod seems to use a very different initialization.
Ok. Let's add a note about the initial point that you choose, if you don't mind.
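(A sketch of such a note as a comment in src/morebv.jl; the wording is illustrative, not taken from the PR:)

```julia
# Initial point: all variables start at 0.5, following
# http://www.cs.cas.cz/matonoha/download/V1081.pdf.  Note that
# mpf/Optimization-Test-Problems (cute/morebv.mod) initializes differently.
@variable(nlp, x[i=1:n], start=0.5)
```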
test/runtests.jl
Outdated
```julia
x0 = getvalue.(omodel[:x])
d = JuMP.NLPEvaluator(omodel)
MathProgBase.initialize(d, [:Grad])
x0 = getvalue.(omodel[:x])
```
We created NLPModelsJuMP for this.
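(A sketch of the suggested replacement, using the MathProgBase-era NLPModelsJuMP API shown earlier in the thread; `nlp.meta.x0` is the standard NLPModels metadata field:)

```julia
using NLPModels, NLPModelsJuMP

nlp = MathProgNLPModel(omodel)  # wrap the JuMP model once
x0 = nlp.meta.x0                # starting point recorded in the model
fx = obj(nlp, x0)               # objective value
gx = grad(nlp, x0)              # gradient
```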
Interestingly, I get an `OutOfMemoryError()` with NLPModelsJuMP but not with `JuMP.NLPEvaluator`.
test/runtests.jl
```julia
samenvars, samexval, sameobjval, samegradval = true, true, true, true
isok = false
if isa(x0, AbstractVector)
    if length(x0) != length(cmodel.meta.x0)
```
Here you can query `cmodel.meta.nvar`.
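(Applied to the snippet above, that suggestion would read as below; the branch body `samenvars = false` is an assumption, since the original snippet is truncated at the `if`:)

```julia
# meta.nvar records the number of variables directly,
# equivalent to length(cmodel.meta.x0) but more direct.
if length(x0) != cmodel.meta.nvar
    samenvars = false
end
```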
test/runtests.jl
```julia
    d, x0 = optpr_problem(probname, length(cmodel.meta.x0))
end
if (x0 ≈ cmodel.meta.x0)
    samexval = false
```
Not sure I understand here. Is this back to front?
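(The condition does appear inverted; presumably the intent was to flag a mismatch, along the lines of:)

```julia
if !(x0 ≈ cmodel.meta.x0)  # the points differ, so record the disagreement
    samexval = false
end
```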
test/runtests.jl
```julia
gc, go = sort!(grad(cmodel, x0)), sort!(grad(d, x0))
if !(gc ≈ go)
    samegradval = false
    c = gc \ go
```
What's happening here?
Trying to see if it's just linearly rescaled, e.g., if `f(x) -> c*f(x)` then the gradient components would be multiplied by a constant. I'll get rid of that.
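(For the record, that check works because `\` between two vectors performs a least-squares fit of a single scalar; a tiny illustration with made-up numbers:)

```julia
gc = [2.0, 4.0, 6.0]
go = [1.0, 2.0, 3.0]
c = gc \ go      # least-squares scalar: 0.5
go ≈ c .* gc     # true: the gradients differ by a constant factor
```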
Thanks for combing through the problems so carefully! Here are some answers:
The only place I can think of is mpf's repository. Unfortunately, there wasn't any good bug-tracking facility in place when those problems were created, so no version number I'm afraid. In CUTEst, we decided to leave the buggy problems untouched for historical heritage reasons (so as not to invalidate numerical results produced with them in the literature), and to create new problems for the corrected version. Not great but hey. You can find them here: https://bitbucket.org/optrove/sif/src/master/
Yes, we take the paper as the reference problem formulation. If a typo were to be detected, we would probably create a new problem with the corrected version.
I like it, but I would suggest using NLPModelsJuMP in there so that the comparisons still work when (one day) we upgrade to MOI. I would also suggest moving it to a subfolder and not making it a required test, because some CUTEst problems have been corrected but the corrected problem has a different name.
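(One way such an opt-in test could be wired up; the subfolder and environment-variable names here are hypothetical:)

```julia
# In test/runtests.jl: only run the CUTEst comparison when asked for.
if get(ENV, "TEST_CUTEST", "false") == "true"
    include("cutest/compare_cutest.jl")
end
```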
OK, I have made the suggested changes. I dropped the fancy "try to detect a permutation of the coordinates" check because that seems to be too difficult to do generically. This just reports on any difference.
Also worth noting that while this may correct all bugs here in the unconstrained problems, I did not check any problems with constraints. My current application is for unconstrained methods, so this is all I need for now. But #37 at least lists them. Some of the disagreements are just matters of precision, so #37 applies a bit of subjective interpretation to report the ones that I think really count.
Many thanks @timholy!
This is a very handy package, but it's not compatible with newer versions of JuMP because of its current registration: https://github.com/JuliaRegistries/General/blob/master/O/OptimizationProblems/Compat.toml. Tagging a new version will fix that.
One thing I'd like to do is extract the various objectives and try evaluating them at different points. How, for example, would you do this?
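(As it happens, the pattern shown at the top of this thread answers that question, assuming the MathProgBase-era NLPModelsJuMP API:)

```julia
using OptimizationProblems, NLPModels, NLPModelsJuMP

model = srosenbr(2)             # any JuMP model from this package
nlp = MathProgNLPModel(model)   # wrap it as an NLPModel
obj(nlp, [0.0; 0.1])            # objective value at a point
grad(nlp, [0.0; 0.1])           # gradient at the same point
```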