Added parallel configuration for conditional layers (#57)
* added version to run-compare

* updated run-compare to take command line args

* fixed argument parsing order for extra args

* proposed mean-reduced loss function after recon + KL

* added experiment to isolate adv-cond config

* updated workflow memory requirements

* fixed find issues in run-compare

* fixed run-compare ls issues

* added mean and variance to tensorboard log

* renamed experiment config to parallel

* removed layer and made concat_config ConcatBlockConfig

* added parallel support for conditional layers

* added comments for alternate loss addition

* added scripts to gather data from TSV files
jdenhof authored Nov 21, 2024
1 parent 1527a22 commit f05869e
Showing 10 changed files with 487 additions and 48 deletions.
130 changes: 130 additions & 0 deletions configs/model/parallel/adversarial-conditional.yaml
@@ -0,0 +1,130 @@
class_path: cmmvae.models.CMMVAEModel
init_args:
  kl_annealing_fn:
    class_path: cmmvae.modules.base.annealing_fn.LinearKLAnnealingFn
    init_args:
      min_kl_weight: 0.1
      max_kl_weight: 0.5
      warmup_steps: 1e4
      climax_steps: 6e4
  record_gradients: false
  adv_weight: 25
  gradient_record_cap: 20
  autograd_config:
    class_path: cmmvae.config.AutogradConfig
    init_args:
      adversarial_gradient_clip:
        class_path: cmmvae.config.GradientClipConfig
        init_args:
          val: 10
          algorithm: norm
      vae_gradient_clip:
        class_path: cmmvae.config.GradientClipConfig
        init_args:
          val: 10
          algorithm: norm
      expert_gradient_clip:
        class_path: cmmvae.config.GradientClipConfig
        init_args:
          val: 10
          algorithm: norm
  module:
    class_path: cmmvae.modules.CMMVAE
    init_args:
      vae:
        class_path: cmmvae.modules.CLVAE
        init_args:
          latent_dim: 128
          encoder_config:
            class_path: cmmvae.modules.base.FCBlockConfig
            init_args:
              layers: [ 512, 256 ]
              dropout_rate: 0.0
              use_batch_norm: True
              use_layer_norm: False
              activation_fn: torch.nn.ReLU
              return_hidden: True
          decoder_config:
            class_path: cmmvae.modules.base.FCBlockConfig
            init_args:
              layers: [ 128, 256, 512 ]
              dropout_rate: 0.0
              use_batch_norm: False
              use_layer_norm: False
              activation_fn: torch.nn.ReLU
          conditional_config:
            class_path: cmmvae.modules.base.FCBlockConfig
            init_args:
              layers: [ 128 ]
              dropout_rate: 0.0
              use_batch_norm: False
              use_layer_norm: True
              activation_fn: null
          concat_config:
            class_path: cmmvae.modules.base.ConcatBlockConfig
            init_args:
              dropout_rate: 0.0
              use_batch_norm: False
              use_layer_norm: False
              activation_fn: torch.nn.ReLU
          conditionals:
            - assay
            - dataset_id
            - donor_id
            - species
            - tissue
          selection_order:
            - parallel
      experts:
        class_path: cmmvae.modules.base.Experts
        init_args:
          experts:
            - class_path: cmmvae.modules.base.Expert
              init_args:
                id: human
                encoder_config:
                  class_path: cmmvae.modules.base.FCBlockConfig
                  init_args:
                    layers: [ 60530, 1024, 512 ]
                    dropout_rate: [ 0.1, 0.0 ]
                    use_batch_norm: True
                    use_layer_norm: False
                    activation_fn: torch.nn.ReLU
                decoder_config:
                  class_path: cmmvae.modules.base.FCBlockConfig
                  init_args:
                    layers: [ 512, 1024, 60530 ]
                    dropout_rate: 0.0
                    use_batch_norm: False
                    use_layer_norm: False
                    activation_fn: torch.nn.ReLU
            - class_path: cmmvae.modules.base.Expert
              init_args:
                id: mouse
                encoder_config:
                  class_path: cmmvae.modules.base.FCBlockConfig
                  init_args:
                    layers: [ 52437, 1024, 512 ]
                    dropout_rate: [ 0.1, 0.0 ]
                    use_batch_norm: True
                    use_layer_norm: False
                    activation_fn: torch.nn.ReLU
                decoder_config:
                  class_path: cmmvae.modules.base.FCBlockConfig
                  init_args:
                    layers: [ 512, 1024, 52437 ]
                    dropout_rate: 0.0
                    use_batch_norm: False
                    use_layer_norm: False
                    activation_fn: torch.nn.ReLU
      adversarials:
        - class_path: cmmvae.modules.base.FCBlockConfig
          init_args:
            layers: [ 256, 128, 64, 1 ]
            dropout_rate: 0.0
            use_batch_norm: False
            use_layer_norm: False
            activation_fn:
              - torch.nn.ReLU
              - torch.nn.ReLU
              - torch.nn.Sigmoid
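
Note: the kl_annealing_fn settings above read as a simple schedule: hold the KL weight at min_kl_weight for warmup_steps, then ramp linearly to max_kl_weight by climax_steps. Below is a minimal sketch of such a schedule; the class and method names are assumptions for illustration, not the actual cmmvae.modules.base.annealing_fn API.

# Hypothetical sketch of a linear KL annealing schedule; the real
# LinearKLAnnealingFn in cmmvae may differ in structure and naming.
class LinearKLAnnealingSketch:
    def __init__(self, min_kl_weight, max_kl_weight, warmup_steps, climax_steps):
        self.min_kl_weight = min_kl_weight
        self.max_kl_weight = max_kl_weight
        self.warmup_steps = warmup_steps
        self.climax_steps = climax_steps

    def __call__(self, step):
        # Hold at min during warmup, ramp linearly, then stay at max.
        if step <= self.warmup_steps:
            return self.min_kl_weight
        if step >= self.climax_steps:
            return self.max_kl_weight
        frac = (step - self.warmup_steps) / (self.climax_steps - self.warmup_steps)
        return self.min_kl_weight + frac * (self.max_kl_weight - self.min_kl_weight)

# With min 0.1, max 0.5, warmup 1e4, climax 6e4:
# step 10_000 -> 0.1, step 35_000 -> 0.3, step 60_000 -> 0.5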
127 changes: 127 additions & 0 deletions configs/model/parallel/conditional.yaml
@@ -0,0 +1,127 @@
class_path: cmmvae.models.CMMVAEModel
init_args:
  kl_annealing_fn:
    class_path: cmmvae.modules.base.annealing_fn.LinearKLAnnealingFn
    init_args:
      min_kl_weight: 0.1
      max_kl_weight: 1.0
      warmup_steps: 1e4
      climax_steps: 4e4
  record_gradients: false
  adv_weight: 0
  gradient_record_cap: 20
  autograd_config:
    class_path: cmmvae.config.AutogradConfig
    init_args:
      adversarial_gradient_clip:
        class_path: cmmvae.config.GradientClipConfig
        init_args:
          val: 10
          algorithm: norm
      vae_gradient_clip:
        class_path: cmmvae.config.GradientClipConfig
        init_args:
          val: 10
          algorithm: norm
      expert_gradient_clip:
        class_path: cmmvae.config.GradientClipConfig
        init_args:
          val: 10
          algorithm: norm
  module:
    class_path: cmmvae.modules.CMMVAE
    init_args:
      vae:
        class_path: cmmvae.modules.CLVAE
        init_args:
          latent_dim: 128
          encoder_config:
            class_path: cmmvae.modules.base.FCBlockConfig
            init_args:
              layers: [ 512, 256 ]
              dropout_rate: 0.0
              use_batch_norm: True
              use_layer_norm: False
              activation_fn: torch.nn.ReLU
              return_hidden: True
          decoder_config:
            class_path: cmmvae.modules.base.FCBlockConfig
            init_args:
              layers: [ 128, 256, 512 ]
              dropout_rate: 0.0
              use_batch_norm: False
              use_layer_norm: False
              activation_fn: torch.nn.ReLU
          conditional_config:
            class_path: cmmvae.modules.base.FCBlockConfig
            init_args:
              layers: [ 128 ]
              dropout_rate: 0.0
              use_batch_norm: False
              use_layer_norm: True
              activation_fn: null
          concat_config:
            class_path: cmmvae.modules.base.ConcatBlockConfig
            init_args:
              dropout_rate: 0.0
              use_batch_norm: False
              use_layer_norm: False
              activation_fn: torch.nn.ReLU
          conditionals:
            - assay
            - dataset_id
            - donor_id
            - species
            - tissue
          selection_order:
            - parallel
      experts:
        class_path: cmmvae.modules.base.Experts
        init_args:
          experts:
            - class_path: cmmvae.modules.base.Expert
              init_args:
                id: human
                encoder_config:
                  class_path: cmmvae.modules.base.FCBlockConfig
                  init_args:
                    layers: [ 60530, 1024, 512 ]
                    dropout_rate: [ 0.1, 0.0 ]
                    use_batch_norm: True
                    use_layer_norm: False
                    activation_fn: torch.nn.ReLU
                decoder_config:
                  class_path: cmmvae.modules.base.FCBlockConfig
                  init_args:
                    layers: [ 512, 1024, 60530 ]
                    dropout_rate: 0.0
                    use_batch_norm: False
                    use_layer_norm: False
                    activation_fn: torch.nn.ReLU
            - class_path: cmmvae.modules.base.Expert
              init_args:
                id: mouse
                encoder_config:
                  class_path: cmmvae.modules.base.FCBlockConfig
                  init_args:
                    layers: [ 52437, 1024, 512 ]
                    dropout_rate: [ 0.1, 0.0 ]
                    use_batch_norm: True
                    use_layer_norm: False
                    activation_fn: torch.nn.ReLU
                decoder_config:
                  class_path: cmmvae.modules.base.FCBlockConfig
                  init_args:
                    layers: [ 512, 1024, 52437 ]
                    dropout_rate: 0.0
                    use_batch_norm: False
                    use_layer_norm: False
                    activation_fn: torch.nn.ReLU
      adversarials:
        # - class_path: cmmvae.modules.base.FCBlockConfig
        #   init_args:
        #     layers: [ 256, 128, 64, 1 ]
        #     dropout_rate: 0.0
        #     use_batch_norm: False
        #     use_layer_norm: False
        #     activation_fn: torch.nn.Sigmoid
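
Note: the selection_order: [parallel] entry, this commit's headline change, presumably switches the conditional layers from sequential to parallel application. The sketch below is illustrative only and not the cmmvae.modules.CLVAE implementation; the block sizes mirror conditional_config (a single 128-unit layer with layer norm and no activation), and it assumes each covariate embedding has dimension latent_dim.

import torch
import torch.nn as nn

# Illustrative sketch only: one conditional block per covariate, applied
# in parallel, with outputs merged by a concat block (standing in for
# ConcatBlockConfig). The real cmmvae wiring may differ.
class ParallelConditionalsSketch(nn.Module):
    def __init__(self, latent_dim, conditionals):
        super().__init__()
        # One FC block per covariate, matching conditional_config:
        # layers [128], layer norm on, activation_fn null.
        self.blocks = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(latent_dim, 128), nn.LayerNorm(128))
            for name in conditionals
        })
        # Merge step standing in for the concat block.
        self.concat = nn.Sequential(
            nn.Linear(latent_dim + 128 * len(conditionals), latent_dim),
            nn.ReLU(),
        )

    def forward(self, z, covariate_embeddings):
        # Every block sees its covariate embedding independently ("parallel"),
        # rather than each block feeding into the next ("sequential").
        outs = [self.blocks[name](covariate_embeddings[name]) for name in self.blocks]
        return self.concat(torch.cat([z, *outs], dim=-1))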
18 changes: 18 additions & 0 deletions scripts/gather-norms.py
@@ -0,0 +1,18 @@
import tensorflow as tf
import pandas as pd

# Replace with the path to your TensorBoard log file
log_file = "/mnt/projects/debruinz_project/denhofja/cmmvae/lightning_logs/run-experiment/adversarial-conditional.5a9df4a./events.out.tfevents.1730760440.g001.clipper.gvsu.edu.3079995.0"
data = []

for event in tf.compat.v1.train.summary_iterator(log_file):
    for value in event.summary.value:
        # Modify 'grad_norm' to the exact tag name used for gradient norms in your logs
        if "grad_norm" in value.tag:
            data.append(
                {"step": event.step, "grad_norm": value.simple_value, "tag": value.tag}
            )

# Convert to DataFrame and save to CSV
df = pd.DataFrame(data)
df.to_csv("gradient_data.csv", index=False)
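
Once the script has run, the resulting gradient_data.csv can be inspected directly with pandas, for example:

import pandas as pd

df = pd.read_csv("gradient_data.csv")
# Peak gradient norm per logged tag across training
print(df.groupby("tag")["grad_norm"].max())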