Marina #27

Merged 23 commits on Jun 18, 2024
3 changes: 3 additions & 0 deletions examples/Project.toml
@@ -9,9 +9,12 @@ Gurobi = "2e9cd046-0924-5485-92f1-d5272153d98b"
HiGHS = "87dc4568-4c63-4d18-b0c0-bb2238e4078b"
Images = "916415d5-f1e6-5110-898d-aaa5f9f070e0"
JuMP = "4076af6c-e467-56ae-b986-b466b2749572"
LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
PyPlot = "d330b81b-6aea-500a-939a-2ce795aea3ee"
QuasiMonteCarlo = "8a4e6c94-4038-4cdc-81c3-7e6ffdb2a71b"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
TestImages = "5e47fb64-e119-507b-a336-dd2b206d9990"
517 changes: 517 additions & 0 deletions examples/conv_neural_networks.ipynb

Large diffs are not rendered by default.

3,952 changes: 3,952 additions & 0 deletions examples/neural_networks.ipynb

Large diffs are not rendered by default.

225 changes: 225 additions & 0 deletions examples/neural_networks_parallel.ipynb
@@ -0,0 +1,225 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Example: Neural Networks – Running bound tightening in parallel"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"using Distributed\n",
"using Gurobi\n",
"using Gogeta\n",
"using JuMP\n",
"using Random\n",
"using Flux"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this example, we show how neural networks can be formulated as MILPs more quickly by using parallel computing. As in the previous example, we initialize an arbitrary neural network with random weights (the model is exactly the same as in the neural network example)."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Chain(\n",
" Dense(2 => 10, relu), \u001b[90m# 30 parameters\u001b[39m\n",
" Dense(10 => 50, relu), \u001b[90m# 550 parameters\u001b[39m\n",
" Dense(50 => 20, relu), \u001b[90m# 1_020 parameters\u001b[39m\n",
" Dense(20 => 5, relu), \u001b[90m# 105 parameters\u001b[39m\n",
" Dense(5 => 1), \u001b[90m# 6 parameters\u001b[39m\n",
") \u001b[90m # Total: 10 arrays, \u001b[39m1_711 parameters, 7.309 KiB."
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"begin\n",
" Random.seed!(1234);\n",
"\n",
" model = Chain(\n",
" Dense(2 => 10, relu),\n",
" Dense(10 => 50, relu),\n",
" Dense(50 => 20, relu),\n",
" Dense(20 => 5, relu),\n",
" Dense(5 => 1)\n",
" )\n",
"end"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Using `addprocs()`, we inntiailize 4 parallel processes or 'workers'."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"4-element Vector{Int64}:\n",
" 22\n",
" 23\n",
" 24\n",
" 25"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"addprocs(4)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to prevent Gurobi from obtaining a new licence for each 'worker', we need to specify the same `Gurobi` environment for each one"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@everywhere using Gurobi\n",
"@everywhere ENV = Ref{Gurobi.Env}()\n",
"\n",
"@everywhere function init_env()\n",
" global ENV\n",
" ENV[] = Gurobi.Env()\n",
"end\n",
"\n",
"for worker in workers()\n",
" fetch(@spawnat worker init_env())\n",
"end"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Regardless of the solver, we must also specify `Gurobi` as optimizer."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@everywhere using JuMP\n",
"@everywhere function set_solver!(jump)\n",
" set_optimizer(jump, () -> Gurobi.Optimizer(ENV[]))\n",
" set_silent(jump)\n",
"end"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As before, we need to define boundaries for innitial variables in which MILP formulation gurantees the same output as neural network"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"init_U = [1.0, 1.0];\n",
"init_L = [-1.0, -1.0];"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Once the workers are set up, you can use `NN_formulate!()` function with a parameter `parallel=true` "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"@everywhere using Gogeta\n",
"jump = Model()\n",
"@time U, L = NN_formulate!(jump, model, init_U, init_L; bound_tightening=\"standard\", silent=true, parallel=true);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The function will again update constraints of the empty jump model and output the boundaries for the neurons. You can also change `bound_tightening` parameter to other approaches. Once you got this formulation, the model can be optmized in the same way as before."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"output_neuron = jump_model[:x][maximum(keys(jump_model[:x].data))]\n",
"@objective(jump_model, Max, output_neuron)\n",
"optimize!(jump_model)\n",
"\n",
"println(\"The model found next solution:\\n\", value.(jump_model[:x][0, :]))\n",
"println(\"With objective function: \", objective_value(jump_model) )\n",
"solution = Float32.([i for i in value.(jump_model[:x][0, :])])\n",
"println(\"The output of the NN for solution given by jump model: \", NN_model(solution)[1])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Julia 1.10.0",
"language": "julia",
"name": "julia-1.10"
},
"language_info": {
"file_extension": ".jl",
"mimetype": "application/julia",
"name": "julia",
"version": "1.10.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}