
Major refactor to be consistent with Cogent Core coding principles #129

Merged · 10 commits · Aug 11, 2024
9 changes: 5 additions & 4 deletions decoder/softmax.go
@@ -80,7 +80,7 @@ func (sm *SoftMax) InitLayer(ncats int, layers []emer.Layer) {
 	sm.Layers = layers
 	nin := 0
 	for _, ly := range sm.Layers {
-		nin += ly.Shape().Len()
+		nin += ly.AsEmer().Shape.Len()
 	}
 	sm.Init(ncats, nin)
 }
@@ -143,12 +143,13 @@ func (sm *SoftMax) ValuesTsr(name string) *tensor.Float32 {
 func (sm *SoftMax) Input(varNm string, di int) {
 	off := 0
 	for _, ly := range sm.Layers {
-		tsr := sm.ValuesTsr(ly.Name())
-		ly.UnitValuesTensor(tsr, varNm, di)
+		lb := ly.AsEmer()
+		tsr := sm.ValuesTsr(lb.Name)
+		lb.UnitValuesTensor(tsr, varNm, di)
 		for j, v := range tsr.Values {
 			sm.Inputs[off+j] = v
 		}
-		off += ly.Shape().Len()
+		off += lb.Shape.Len()
 	}
 }
 
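The change above is the recurring move throughout this PR: instead of interface getter methods like `ly.Shape()` and `ly.Name()`, each `emer.Layer` implementation exposes its embedded base struct via `AsEmer()`, and callers read exported fields from that. A minimal sketch of the pattern, with illustrative type and field names inferred only from the calls visible in this diff:

```go
package main

import "fmt"

// LayerBase holds structural data shared by all layer implementations;
// fields are exported directly rather than wrapped in getter methods.
type LayerBase struct {
	Name  string
	Shape []int // stands in for tensor.Shape in this sketch
}

// Len returns the number of units implied by the shape.
func (lb *LayerBase) Len() int {
	n := 1
	for _, s := range lb.Shape {
		n *= s
	}
	return n
}

// Layer is the minimal interface: one method exposing the base.
type Layer interface {
	AsEmer() *LayerBase
}

// LeabraLayer is a hypothetical concrete layer embedding the base;
// algorithm-specific state would live here, invisible to the interface.
type LeabraLayer struct {
	LayerBase
}

func (ly *LeabraLayer) AsEmer() *LayerBase { return &ly.LayerBase }

func main() {
	var ly Layer = &LeabraLayer{LayerBase{Name: "Hidden", Shape: []int{4, 5}}}
	lb := ly.AsEmer()
	fmt.Println(lb.Name, lb.Len()) // Hidden 20
}
```

Calling `AsEmer()` once and reusing the result (the `lb` variable in the diff) avoids repeated interface dispatch inside the loop.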
10 changes: 5 additions & 5 deletions doc.go
@@ -23,11 +23,11 @@ and easier support for making permuted random lists, etc.
 
 * netview provides the NetView interactive 3D network viewer, implemented in the Cogent Core 3D framework.
 
-* path is a separate package for defining patterns of connectivity between layers
-(i.e., the ProjectionSpecs from C++ emergent). This is done using a fully independent
-structure that *only* knows about the shapes of the two layers, and it returns a fully general
-bitmap representation of the pattern of connectivity between them. The leabra.Path code
-then uses these patterns to do all the nitty-gritty of connecting up neurons.
+* path is a separate package for defining patterns of connectivity between layers.
+This is done using a fully independent structure that *only* knows about the shapes
+of the two layers, and it returns a fully general bitmap representation of the pattern
+of connectivity between them. The leabra.Path code then uses these patterns to do
+all the nitty-gritty of connecting up neurons.
 This makes the pathway code *much* simpler compared to the ProjectionSpec in C++ emergent,
 which was involved in both creating the pattern and also all the complexity of setting up the
 actual connections themselves. This should be the *last* time any of those pathway patterns
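The doc comment above describes the core idea of the path package: a pattern generator sees only the two layer shapes and emits a general bitmap of send-to-recv connectivity, which the Path code then walks to build actual synapses. A minimal sketch of that contract — the interface and type names here are illustrative, not the package's actual API:

```go
package main

import "fmt"

// Pattern generates connectivity knowing only the send and recv layer
// sizes; cons[r][s] == true means recv unit r receives from send unit s.
type Pattern interface {
	Connect(nSend, nRecv int) [][]bool
}

// OneToOne connects send unit i to recv unit i.
type OneToOne struct{}

func (OneToOne) Connect(nSend, nRecv int) [][]bool {
	cons := make([][]bool, nRecv)
	for r := range cons {
		cons[r] = make([]bool, nSend)
		if r < nSend {
			cons[r][r] = true
		}
	}
	return cons
}

func main() {
	var pat Pattern = OneToOne{}
	cons := pat.Connect(3, 3)
	fmt.Println(cons[1][1], cons[1][2]) // true false
	// A Path implementation would iterate this bitmap to allocate the
	// actual connections, keeping pattern logic fully separate.
}
```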
8 changes: 4 additions & 4 deletions ecmd/std.go
@@ -46,7 +46,7 @@ func LogFilename(logName, netName, runName string) string {
 
 // ProcStd processes the standard args, after Parse has been called
 // for help, note, params, tag and wts
-func (ar *Args) ProcStd(params *emer.Params) {
+func (ar *Args) ProcStd(params *emer.NetParams) {
 	if ar.Bool("help") {
 		ar.Usage()
 		os.Exit(0)
@@ -55,8 +55,8 @@ func (ar *Args) ProcStd(params *emer.Params) {
 		mpi.Printf("note: %s\n", note)
 	}
 	if pars := ar.String("params"); pars != "" {
-		params.ExtraSets = pars
-		mpi.Printf("Using ParamSet: %s\n", params.ExtraSets)
+		// params.ExtraSets = pars // todo:
+		// mpi.Printf("Using ParamSet: %s\n", params.ExtraSets)
 	}
 	if tag := ar.String("tag"); tag != "" {
 		params.Tag = tag
@@ -71,7 +71,7 @@
 // setting the log files for standard log file names using netName
 // and params.RunName to identify the network / sim and run params, tag,
 // and starting run number
-func (ar *Args) ProcStdLogs(logs *elog.Logs, params *emer.Params, netName string) {
+func (ar *Args) ProcStdLogs(logs *elog.Logs, params *emer.NetParams, netName string) {
 	runName := params.RunName(ar.Int("run")) // used for naming logs, stats, etc
 	if ar.Bool("epclog") {
 		fnm := LogFilename("epc", netName, runName)
2 changes: 1 addition & 1 deletion egui/netview.go
@@ -48,6 +48,6 @@ func (gui *GUI) SaveNetData(extra string) {
 	if gui.NetData == nil {
 		return
 	}
-	ndfn := gui.NetData.Net.Name() + "_" + extra + ".netdata.gz"
+	ndfn := gui.NetData.Net.AsEmer().Name + "_" + extra + ".netdata.gz"
 	gui.NetData.SaveJSON(core.Filename(ndfn))
 }
21 changes: 11 additions & 10 deletions elog/context.go
@@ -8,6 +8,7 @@ import (
 	"fmt"
 	"log"
 
+	"cogentcore.org/core/base/errors"
 	"cogentcore.org/core/tensor"
 	"cogentcore.org/core/tensor/stats/metric"
 	"cogentcore.org/core/tensor/stats/stats"
@@ -233,27 +234,27 @@ func (ctx *Context) ItemColTensorScope(scope etime.ScopeKey, itemNm string) tens
 ///////////////////////////////////////////////////
 // Network
 
-// Layer returns layer by name as the emer.Layer interface --
-// you may then need to convert to a concrete type depending.
+// Layer returns layer by name as the emer.Layer interface.
+// May then need to convert to a concrete type depending.
 func (ctx *Context) Layer(layNm string) emer.Layer {
-	return ctx.Net.LayerByName(layNm)
+	return errors.Log1(ctx.Net.AsEmer().EmerLayerByName(layNm))
 }
 
 // GetLayerTensor gets tensor of Unit values on a layer for given variable
 // from current ctx.Di data parallel index.
 func (ctx *Context) GetLayerTensor(layNm, unitVar string) *tensor.Float32 {
 	ly := ctx.Layer(layNm)
 	tsr := ctx.Stats.F32Tensor(layNm)
-	ly.UnitValuesTensor(tsr, unitVar, ctx.Di)
+	ly.AsEmer().UnitValuesTensor(tsr, unitVar, ctx.Di)
 	return tsr
 }
 
-// GetLayerRepTensor gets tensor of representative Unit values on a layer for given variable
+// GetLayerSampleTensor gets tensor of representative Unit values on a layer for given variable
 // from current ctx.Di data parallel index.
-func (ctx *Context) GetLayerRepTensor(layNm, unitVar string) *tensor.Float32 {
+func (ctx *Context) GetLayerSampleTensor(layNm, unitVar string) *tensor.Float32 {
 	ly := ctx.Layer(layNm)
 	tsr := ctx.Stats.F32Tensor(layNm)
-	ly.UnitValuesRepTensor(tsr, unitVar, ctx.Di)
+	ly.AsEmer().UnitValuesSampleTensor(tsr, unitVar, ctx.Di)
 	return tsr
 }
 
@@ -265,10 +266,10 @@ func (ctx *Context) SetLayerTensor(layNm, unitVar string) *tensor.Float32 {
 	return tsr
 }
 
-// SetLayerRepTensor sets tensor of representative Unit values on a layer for given variable
+// SetLayerSampleTensor sets tensor of representative Unit values on a layer for given variable
 // to current ctx.Di data parallel index.
-func (ctx *Context) SetLayerRepTensor(layNm, unitVar string) *tensor.Float32 {
-	tsr := ctx.GetLayerRepTensor(layNm, unitVar)
+func (ctx *Context) SetLayerSampleTensor(layNm, unitVar string) *tensor.Float32 {
+	tsr := ctx.GetLayerSampleTensor(layNm, unitVar)
 	ctx.SetTensor(tsr)
 	return tsr
 }
12 changes: 7 additions & 5 deletions elog/stditems.go
@@ -9,6 +9,7 @@ import (
 	"reflect"
 	"time"
 
+	"cogentcore.org/core/base/errors"
 	"cogentcore.org/core/math32/minmax"
 	"cogentcore.org/core/tensor/stats/split"
 	"cogentcore.org/core/tensor/stats/stats"
@@ -292,26 +293,27 @@ func (lg *Logs) RunStats(stats ...string) {
 // to it so there aren't any duplicate items.
 // di is a data parallel index di, for networks capable of processing input patterns in parallel.
 func (lg *Logs) AddLayerTensorItems(net emer.Network, varNm string, mode etime.Modes, etm etime.Times, layClasses ...string) {
-	layers := net.LayersByClass(layClasses...)
+	en := net.AsEmer()
+	layers := en.LayersByClass(layClasses...)
 	for _, lnm := range layers {
 		clnm := lnm
-		cly := net.LayerByName(clnm)
+		cly := errors.Log1(en.EmerLayerByName(clnm))
 		itmNm := clnm + "_" + varNm
 		itm, has := lg.ItemByName(itmNm)
 		if has {
 			itm.Write[etime.Scope(mode, etm)] = func(ctx *Context) {
-				ctx.SetLayerRepTensor(clnm, varNm)
+				ctx.SetLayerSampleTensor(clnm, varNm)
 			}
 		} else {
 			lg.AddItem(&Item{
 				Name:      itmNm,
 				Type:      reflect.Float32,
-				CellShape: cly.RepShape().Sizes,
+				CellShape: cly.AsEmer().SampleShape.Sizes,
 				FixMin:    true,
 				Range:     minmax.F32{Max: 1},
 				Write: WriteMap{
 					etime.Scope(mode, etm): func(ctx *Context) {
-						ctx.SetLayerRepTensor(clnm, varNm)
+						ctx.SetLayerSampleTensor(clnm, varNm)
 					}}})
 		}
 	}
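For context, a typical call to AddLayerTensorItems from a simulation's log-configuration code might look like the following sketch. The Sim struct, its field names, the "Hidden" layer class, and the "ActM" variable name are all hypothetical examples, not part of this PR:

```go
package sim

import (
	"github.com/emer/emergent/v2/elog"
	"github.com/emer/emergent/v2/emer"
	"github.com/emer/emergent/v2/etime"
)

// Sim is a hypothetical simulation holding a network and its logs.
type Sim struct {
	Net  emer.Network
	Logs elog.Logs
}

// ConfigLogItems records the "ActM" unit variable as a tensor item for
// every layer with class "Hidden", written at each training trial.
func (ss *Sim) ConfigLogItems() {
	ss.Logs.AddLayerTensorItems(ss.Net, "ActM", etime.Train, etime.Trial, "Hidden")
}
```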
12 changes: 4 additions & 8 deletions emer/README.md
@@ -1,15 +1,11 @@
 Docs: [GoDoc](https://pkg.go.dev/github.com/emer/emergent/v2/emer)
 
-Package emer provides minimal interfaces for the basic structural elements of neural networks
-including:
+Package emer provides minimal interfaces for the basic structural elements of neural networks including:
 * emer.Network, emer.Layer, emer.Unit, emer.Path (pathway that interconnects layers)
 
-These interfaces are intended to be just sufficient to support visualization and generic
-analysis kinds of functions, but explicitly avoid exposing ANY of the algorithmic aspects,
-so that those can be purely encoded in the implementation structs.
+These interfaces are intended to be just sufficient to support visualization and generic analysis kinds of functions, but explicitly avoid exposing ANY of the algorithmic aspects, so that those can be purely encoded in the implementation structs.
 
-At this point, given the extra complexity it would require, these interfaces do not support
-the ability to build or modify networks.
+At this point, given the extra complexity it would require, these interfaces do not support the ability to build or modify networks.
 
-Also added support for managing parameters in the `emer.Params` object, which handles standard parameter set logic and support for applying to networks, and the new `NetSize` map for configuring network size.
+Also added support for managing parameters in the `emer.Params` object, which handles standard parameter set logic and support for applying to networks, and the `NetSize` map for configuring network size.
 
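The README's separation — interfaces just sufficient for visualization and generic analysis, with all algorithmic state kept in implementation structs — can be illustrated with a minimal sketch. The names below are illustrative stand-ins, not the actual emer API:

```go
package main

import "fmt"

// Viewable is the kind of minimal, algorithm-agnostic surface the README
// describes: just enough for visualization and generic analysis.
type Viewable interface {
	NumUnits() int
	UnitValue(varNm string, idx int) float32
}

// RateLayer is a hypothetical implementation; its algorithm state
// (gain, traces, etc.) never leaks into the interface.
type RateLayer struct {
	acts []float32
	gain float32 // algorithm-specific; invisible to viewers
}

func (ly *RateLayer) NumUnits() int { return len(ly.acts) }

func (ly *RateLayer) UnitValue(varNm string, idx int) float32 {
	switch varNm {
	case "Act":
		return ly.acts[idx]
	}
	return 0
}

// PrintHeatRow is a stand-in for a generic visualization routine that
// works on any implementation through the minimal interface.
func PrintHeatRow(v Viewable) {
	for i := 0; i < v.NumUnits(); i++ {
		fmt.Printf("%.2f ", v.UnitValue("Act", i))
	}
	fmt.Println()
}

func main() {
	PrintHeatRow(&RateLayer{acts: []float32{0.1, 0.9, 0.5}, gain: 2})
}
```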
91 changes: 0 additions & 91 deletions emer/enumgen.go

This file was deleted.
