
custom model error #32

Open
mhmd97z opened this issue Sep 19, 2023 · 7 comments

mhmd97z commented Sep 19, 2023

Hi there,
I tried to verify a custom model using alpha-beta-CROWN, but I ran into an error I could not figure out how to fix. I'd appreciate it if you could take a look.

YAML file:

general:
  enable_incomplete_verification: False
  loss_reduction_func: max
  conv_mode: matrix
model:
  name: cmars
  input_shape: 19
specification:
  vnnlib_path: cmars/vnnlib/large_1/N8/h32/basic.vnnlib
attack:
  pgd_order: skip
solver:
  batch_size: 500000  
  bound_prop_method: forward+backward
  beta-crown:
    iteration: 10  
bab:
  initial_max_domains: 100
  branching:
    method: naive  # Split on input space.
    input_split:
      enable: True
      adv_check: .inf

The model is defined in model_defs.py:

def cmars() -> nn.Sequential:
    return nn.Sequential(
            nn.Linear(19, 32),
            nn.ReLU(),
            nn.Linear(32, 32),
            nn.ReLU(),
            nn.Linear(32, 140),
            nn.ReLU(),
            nn.Linear(140, 1),
    )

basic.vnnlib:

(declare-const X_0 Real)
(declare-const X_1 Real)
(declare-const X_2 Real)
(declare-const X_3 Real)
(declare-const X_4 Real)
(declare-const X_5 Real)
(declare-const X_6 Real)
(declare-const X_7 Real)
(declare-const X_8 Real)
(declare-const X_9 Real)
(declare-const X_10 Real)
(declare-const X_11 Real)
(declare-const X_12 Real)
(declare-const X_13 Real)
(declare-const X_14 Real)
(declare-const X_15 Real)
(declare-const X_16 Real)
(declare-const X_17 Real)
(declare-const X_18 Real)

(declare-const Y_0 Real)

(assert (or 
	(and (<= X_0 0.11281310021877289) (>= X_0 0.08347102999687195) (<= X_1 -0.04856841638684273) (>= X_1 -0.07791049033403397) (<= X_2 -0.004555195104330778) (>= X_2 -0.033897269517183304) (<= X_3 0.06879998743534088) (>= X_3 0.03945791721343994) (<= X_4 0.1568262130022049) (>= X_4 0.12748414278030396) (<= X_5 0.2448524385690689) (>= X_5 0.21551036834716797) (<= X_6 0.2595234811306) (>= X_6 0.23018139600753784) (<= X_7 0.33287864923477173) (>= X_7 0.303536593914032) (<= X_8 0.42090487480163574) (>= X_8 0.391562819480896) (<= X_9 0.42090487480163574) (>= X_9 0.391562819480896) (<= X_10 0.391562819480896) (>= X_10 0.36222073435783386) (<= X_11 0.47958904504776) (>= X_11 0.4502469599246979) (<= X_12 0.6262993812561035) (>= X_12 0.5969573259353638) (<= X_13 0.6996545791625977) (>= X_13 0.6703125238418579) (<= X_14 0.7143256068229675) (>= X_14 0.6849835515022278) (<= X_15 0.6262993812561035) (>= X_15 0.5969573259353638) (<= X_16 0.5529442429542542) (>= X_16 0.5236021280288696) (<= X_17 0.4502469599246979) (>= X_17 0.42090487480163574) (<= X_18 0.4942600727081299) (>= X_18 0.46491798758506775) (>= Y_0 0))
))

The error:

Traceback (most recent call last):
  File "abcrown.py", line 661, in <module>
    main()
  File "abcrown.py", line 584, in main
    refined_betas=refined_betas, attack_images=all_adv_candidates, attack_margins=attack_margins)
  File "abcrown.py", line 410, in complete_verifier
    rhs=rhs, timeout=timeout, attack_images=this_spec_attack_images)
  File "abcrown.py", line 200, in bab
    rhs=rhs, timeout=timeout, branching_method=arguments.Config["bab"]["branching"]["method"])
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/batch_branch_and_bound_input_split.py", line 164, in input_bab_parallel
    bounding_method=bounding_method,
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/beta_CROWN_solver.py", line 1141, in build_the_model
    x=(x,), C=self.c, method=bounding_method, cutter=self.cutter, bound_upper=False)
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 1319, in compute_bounds
    self.check_prior_bounds(final)
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 847, in check_prior_bounds
    self.check_prior_bounds(n)
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 847, in check_prior_bounds
    self.check_prior_bounds(n)
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 847, in check_prior_bounds
    self.check_prior_bounds(n)
  [Previous line repeated 6 more times]
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 855, in check_prior_bounds
    node.inputs[i], prior_checked=True)
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/bound_general.py", line 883, in compute_intermediate_bounds
    node=node, concretize=True)
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/forward_bound.py", line 32, in forward_general
    self.forward_general(node=l_pre, offset=offset)
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/forward_bound.py", line 40, in forward_general
    linear = node.linear = node.bound_forward(self.dim_in, *inp)
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/linear.py", line 651, in bound_forward
    y.upper.transpose(-1, -2) if y.upper is not None else None
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/linear.py", line 509, in bound_forward
    lw, lb, uw, ub = BoundLinear.bound_forward_mul(x.lw, x_lb, x.uw, x_ub, w)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "/home/mzi/sys-rl-verif/alpha-beta-CROWN/complete_verifier/auto_LiRPA/operators/linear.py", line 457, in bound_forward_mul
    def bound_forward_mul(x_lw: Tensor, x_lb: Tensor, x_uw: Tensor, x_ub: Tensor, w: Tensor):
        w_pos, w_neg = w.clamp(min=0), w.clamp(max=0)
        lw = x_lw.matmul(w_pos) + x_uw.matmul(w_neg)
             ~~~~~~~~~~~ <--- HERE
        uw = x_uw.matmul(w_pos) + x_lw.matmul(w_neg)
        lb = x_lb.matmul(w_pos) + x_ub.matmul(w_neg)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (19x1 and 19x32)
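The shape error itself is easy to reproduce in isolation: matrix multiplication requires the inner dimensions to agree, so a (19, 1) operand against a (19, 32) weight fails, while (1, 19) works. A minimal sketch of the shape rule in plain Python (this mimics the PyTorch error message; it is not the auto_LiRPA code):

```python
def matmul_shape(a, b):
    """Return the result shape of a 2-D matmul a @ b, or raise an error
    in the same style PyTorch uses when inner dimensions mismatch."""
    (m, k1), (k2, n) = a, b
    if k1 != k2:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied ({m}x{k1} and {k2}x{n})")
    return (m, n)

print(matmul_shape((1, 19), (19, 32)))  # (1, 32): batched input works
try:
    matmul_shape((19, 1), (19, 32))     # the shape pair from the traceback
except ValueError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (19x1 and 19x32)
```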

mhmd97z commented Sep 19, 2023

I guess this has something to do with defining input_shape: 19. Instead, it should be (1, 19), but it is not straightforward to define that in a YAML file.
Also, I could not find any example in the exp_configs folder that includes input_shape.
Finally, when I tried running custom_specs.yaml, it threw the following assertion error:
AssertionError: vnnlib does not have shape information, please specify by --input_shape
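In case it helps others hitting this: YAML flow-sequence syntax can express the batched shape directly. A sketch of the model section with the batch dimension included (assuming [1, 19] is the intended shape for this model):

```yaml
model:
  name: cmars
  input_shape: [1, 19]  # batch dimension first, then the 19 input features
```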

huanzhang12 (Member) commented:

Hi @mhmd97z, thank you for reporting this issue to us!

As a workaround, can you try to change bound_prop_method: forward+backward to bound_prop_method: backward? This will use CROWN as the bounding method. In addition, we will release a new version of the software in October, and we will try to fix this issue in the upcoming release.

@shizhouxing It looks like it is an issue with forward mode bound propagation. Can you take a look?

shizhouxing (Member) commented:

@huanzhang12 As mentioned by @mhmd97z, the issue is gone when the input shape is set to [1, 19]. The issue is not related to the forward bound propagation.

shizhouxing (Member) commented:

@mhmd97z You may refer to the help message regarding input_shape:
https://github.com/Verified-Intelligence/alpha-beta-CROWN/blob/main/complete_verifier/arguments.py#L118

huanzhang12 (Member) commented:

@shizhouxing If the input shape is wrong, we should give an error early, not deep inside the forward mode bound propagation code. Can you ensure that we check the shape early and the code does not crash inside bound propagation?


shizhouxing commented Sep 20, 2023

> @shizhouxing If the input shape is wrong, we should give an error early, not deep inside the forward mode bound propagation code. Can you ensure that we check the shape early and the code does not crash inside bound propagation?

I think we only need to check if the batch dimension is missing. If the shape is totally wrong, BoundedModule can’t be created. Or the batch dimension can just be omitted when specifying the input shape.
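One possible shape of that early check, as a hypothetical helper (normalize_input_shape and its leading-1 heuristic are assumptions for illustration, not existing alpha-beta-CROWN code):

```python
def normalize_input_shape(shape):
    """Hypothetical pre-check: coerce a user-supplied input_shape into a
    batched shape, so a bare 19 or (19,) becomes (1, 19).

    This only repairs a missing batch dimension; a completely wrong shape
    would still fail later, e.g. when the BoundedModule is created."""
    if isinstance(shape, int):
        shape = (shape,)
    shape = tuple(shape)
    if not shape:
        raise ValueError("input_shape must be non-empty")
    if shape[0] != 1:
        # Assume the batch dimension was omitted and prepend it.
        shape = (1,) + shape
    return shape

print(normalize_input_shape(19))            # (1, 19)
print(normalize_input_shape([1, 19]))       # (1, 19), already batched
print(normalize_input_shape((3, 32, 32)))   # (1, 3, 32, 32)
```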


mhmd97z commented Sep 20, 2023

Thanks for the response.
