Added thirdparty PSO
GallVp committed Oct 5, 2019
1 parent 01a2df0 commit dc85aaf
Showing 36 changed files with 2,718 additions and 4 deletions.
README.md: 3 changes (2 additions, 1 deletion)
@@ -49,4 +49,5 @@ $ git clone https://github.com/GallVp/emgGO

emgGO uses the following third-party libraries. The licenses for these libraries can be found next to the source files in their respective libs/thirdpartlib folders.

1. `energyop` Copyright (c) 2014, Hooman Sedghamiz. Source is available [here](https://au.mathworks.com/matlabcentral/fileexchange/45406-teager-keiser-energy-operator-vectorized).
2. `PSOt` Copyright (c) 2005, Brian Birge. Source is available [here](https://au.mathworks.com/matlabcentral/fileexchange/7506-particle-swarm-optimization-toolbox).
libs/PSOt/DemoPSOBehavior.m: 148 changes (148 additions, 0 deletions)
@@ -0,0 +1,148 @@
% demopsobehavior.m
% demo of the pso.m function
% the pso tries to find the minimum of the f6 function, a standard
% benchmark
%
% on the plots, blue is current position, green is Pbest, and red is Gbest

% Brian Birge
% Rev 3.0
% 2/27/06

clear all
close all
clc
help demopsobehavior
warning off

functnames = {'ackley','alpine','DeJong_f2','DeJong_f3','DeJong_f4',...
'Foxhole','Griewank','NDparabola',...
'Rastrigin','Rosenbrock','f6','f6mod','tripod',...
'f6_bubbles_dyn','f6_linear_dyn','f6_spiral_dyn'};
disp('Static test functions, minima don''t change w.r.t. time/iteration:');
disp(' 1) Ackley');
disp(' 2) Alpine');
disp(' 3) DeJong_f2');
disp(' 4) DeJong_f3');
disp(' 5) DeJong_f4');
disp(' 6) Foxhole');
disp(' 7) Griewank');
disp(' 8) NDparabola (for this demo N = 2)');
disp(' 9) Rastrigin');
disp('10) Rosenbrock');
disp('11) Schaffer f6');
disp('12) Schaffer f6 modified (5 f6 functions translated from each other)');
disp('13) Tripod');
disp(' ');
disp('Dynamic test functions, minima/environment evolves over time/iteration:');
disp('14) f6_bubbles_dyn');
disp('15) f6_linear_dyn');
disp('16) f6_spiral_dyn');

functchc=input('Choose test function ? ');
functname = functnames{functchc};

disp(' ');
disp('1) Intense graphics, shows error topology and surfing particles');
disp('2) Default PSO graphing, shows error trend and particle dynamics');
disp('3) no plot, only final output shown, fastest');
plotfcn=input('Choose plotting function ? ');
if plotfcn == 1
    plotfcn = 'goplotpso4demo';
    shw = 1;   % how often to update display
elseif plotfcn == 2
    plotfcn = 'goplotpso';
    shw = 1;   % how often to update display
else
    plotfcn = 'goplotpso';
    shw = 0;   % how often to update display
end


% set flag for 'dynamic function on', only used at very end for tracking plots
dyn_on = 0;
if functchc == 14 || functchc == 15 || functchc == 16   % the three *_dyn test functions
    dyn_on = 1;
end

%xrng=input('Input search range for X, e.g. [-10,10] ? ');
%yrng=input('Input search range for Y ? ');
xrng=[-30,30];
yrng=[-40,40];
disp(' ');
% if =0 then we look for minimum, =1 then max
disp('0) Minimize')
disp('1) Maximize')
minmax=input('Choose search goal ?');
% minmax=0;
disp(' ');
mvden = input('Max velocity divisor (2 is a good choice) ? ');
disp(' ');
ps = input('How many particles (24 - 30 is common)? ');
disp(' ');
disp('0) Common PSO - with inertia');
disp('1) Trelea model 1');
disp('2) Trelea model 2');
disp('3) Clerc Type 1" - with constriction');
modl = input('Choose PSO model ? ');
% note: if errgoal=NaN then unconstrained min or max is performed
if minmax == 1
    % errgoal = 0.97643183;   % max for f6 function (close enough for termination)
    errgoal = NaN;
else
    % errgoal = 0;   % min
    errgoal = NaN;
end
minx = xrng(1);
maxx = xrng(2);
miny = yrng(1);
maxy = yrng(2);

%--------------------------------------------------------------------------
dims=2;
varrange=[];
mv=[];
for i = 1:dims
    varrange = [varrange; minx maxx];
    mv = [mv; (varrange(i,2) - varrange(i,1))/mvden];
end

ac = [2.1,2.1];   % acceleration constants, only used for modl=0
Iwt = [0.9,0.6];  % inertia weights, only used for modl=0
epoch = 400; % max iterations
wt_end = 100; % iterations it takes to go from Iwt(1) to Iwt(2), only for modl=0
errgrad = 1e-99; % lowest error gradient tolerance
errgraditer=100; % max # of epochs without error change >= errgrad
PSOseed = 0; % if=1 then can input particle starting positions, if=0 then all random
% starting particle positions (here the first ps-10 particles at zero, just for an example)
PSOseedValue = repmat(0, ps-10, 1);

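% Assemble the 13-entry options vector expected by pso_Trelea_vectorized:
% display interval, max epochs, swarm size, the two acceleration constants,
% start/end inertia weights, inertia ramp length (wt_end), error-gradient
% tolerance and its window, error goal, PSO model, and the seed flag.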
psoparams=...
[shw epoch ps ac(1) ac(2) Iwt(1) Iwt(2) ...
wt_end errgrad errgraditer errgoal modl PSOseed];

% run pso
% vectorized version
[pso_out,tr,te]=pso_Trelea_vectorized(functname, dims,...
mv, varrange, minmax, psoparams,plotfcn,PSOseedValue);


%--------------------------------------------------------------------------
% display best params, this only makes sense for static functions, for dynamic
% you'd want to see a time history of expected versus optimized global best
% values.
disp(' ');
disp(' ');
disp(['Best fit parameters: ']);
disp([' cost = ',functname,'( [ input1, input2 ] )']);
disp(['---------------------------------']);
disp([' input1 = ',num2str(pso_out(1))]);
disp([' input2 = ',num2str(pso_out(2))]);
disp([' cost = ',num2str(pso_out(3))]);
disp([' mean cost = ',num2str(mean(te))]);
disp([' # of epochs = ',num2str(tr(end))]);

%% optional, save picture
%set(gcf,'InvertHardcopy','off');
%print -dmeta
%print('-djpeg',['demoPSOBehavior.jpg']);
libs/PSOt/ReadME.txt: 112 changes (112 additions, 0 deletions)
@@ -0,0 +1,112 @@
-------------------------------------------------------------
-------------------------------------------------------------
PSOt, particle swarm optimization toolbox for matlab.

May be distributed freely as long as none of the files are
modified.

Send suggestions to [email protected]

Updates will be posted periodically at the Mathworks User
Contributed Files website (www.mathworks.com) under the
Optimization category.

To install:
Extract into any directory you want but make sure the matlab
path points to that directory and the subdirectories
'hiddenutils' and 'testfunctions'.
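
For example, from the Matlab prompt (the path below is just a
placeholder for wherever you extracted the toolbox; genpath also
picks up the 'nnet' subdirectory):

   addpath(genpath('path/to/PSOt'))   % add the toolbox and its subfolders
   savepath                           % optional: keep it across sessions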

Enjoy! - Brian Birge

-------------------------------------------------------------
-------------------------------------------------------------

INFO
Quick start: just type ... out = pso_Trelea_vectorized('f6',2)
and watch it work!

This is a PSO toolbox implementing the Common, Clerc 1", and
Trelea types, along with an alpha version of tracking changing
environments. It can search for the min, max, or 'distance' of a
user-developed cost function. It is very easy to use and hack, has
reasonably good documentation (type help for any function and
it should tell you what you need), and will take advantage of
vectorized cost functions. It uses similar syntax to Matlab's
optimization toolbox. It also includes a suite of static and
dynamic test functions and a dedicated PSO-based neural network
trainer for use with MathWorks' neural network toolbox.
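
As a rough sketch (not from the original docs; the convention of one
row of inputs per particle in the cost function, and the two-argument
call working with a custom function name on the path, are assumptions
to check against 'help pso_Trelea_vectorized'), a vectorized cost
function and a minimal call could look like:

   % contents of a hypothetical mycost.m: one row of inputs per
   % particle, one cost value per row
   function out = mycost(in)
   out = sum(in.^2, 2);   % N-dimensional parabola, minimum at the origin

   % at the prompt: search 2 dimensions with the default settings
   out = pso_Trelea_vectorized('mycost', 2);
   % as in DemoPSOBehavior.m, out(1:2) is the best position found
   % and out(3) the cost there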

Run 'DemoPSOBehavior' to explore the various functions, options,
and visualizations.

Run 'demoPSOnet' to see a neural net trained with PSO
(requires neural net toolbox).


This toolbox is in constant development and I welcome
suggestions. The main program 'pso_Trelea_vectorized.m' lists
various papers you can look at in the comments.

Usage ideas: find a global min/max, optimize the training of
neural nets, track error-topology changes, teach PSO,
investigate emergence, tune control systems/filters, provide a
paradigm for multi-agent interaction, etc.

-------------------------------------------------------------
-------------------------------------------------------------


Files included:


** in main directory:

0) ReadMe.txt - this file, duh
1) A Particle Swarm Optimization (PSO) Primer.pdf - powerpoint converted to pdf presentation explaining the very basics of PSO
2) DemoPSOBehavior.m - demo script, useful to see how the pso main function is called
3) goplotpso4demo.m - plotting routine called by the demo script, useful to see how custom plotting can be developed, though this routine slows down the PSO a lot
4) goplotpso.m - default plotting routine used by pso algorithm
5) pso_Trelea_vectorized.m - main PSO algorithm function, implements Common, Trelea 1&2, Clerc 1", and an alpha version of tracking environmental changes.



** in 'hiddenutils'

1) forcerow, forcecol.m - utils to force a vector to be a row or column, superseded by Matlab 7 functions I believe but I think they are still called in the main algo
2) normmat.m - takes a matrix and reformats the data to fit between a new range, very flexible
3) linear_dyn, spiral_dyn.m - helpers for the dynamic test functions listed in the 'testfunctions' directory



** in 'testfunctions'

A bunch of useful functions (mostly 2D) for testing. See help for each one for specifics. Here's a list of the names:

Static test functions, minima don't change w.r.t. time/iteration:
1) Ackley
2) Alpine
3) DeJong_f2
4) DeJong_f3
5) DeJong_f4
6) Foxhole
7) Griewank
8) NDparabola
9) Rastrigin
10) Rosenbrock
11) Schaffer f6
12) Schaffer f6 modified (5 f6 functions translated from each other)
13) Tripod

Dynamic test functions, minima/environment evolves over time (NOT iteration, though easily modified to do so):
14) f6_bubbles_dyn
15) f6_linear_dyn
16) f6_spiral_dyn



** in 'nnet' (all these require Matlab's Neural Net toolbox)

1) demoPSOnet - standalone demo to show neural net training
2) trainpso - the neural net toolbox plugin, set net.trainFcn to this (see the sketch after this list)
3) pso_neteval - wrapper used by trainpso to call the main PSO optimizer; this is the cost function that PSO will optimize
4) goplotpso4net - default graphing plugin for trainpso, shows net architecture, relative weight indications, error, and PSO details on run
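
As a minimal sketch of how the trainer is wired in (the function names
come from the list above; feedforwardnet and train are standard Neural
Net toolbox calls, and any extra training parameters trainpso may need
are shown by demoPSOnet rather than assumed here):

   net = feedforwardnet(5);              % a small feed-forward network
   net.trainFcn = 'trainpso';            % use the PSO-based trainer from PSOt
   % net = train(net, inputs, targets);  % then train as usual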