
Erroneous literal string in AutoAttack metadata #2548

Open
lockwoodar opened this issue Jan 3, 2025 · 0 comments · May be fixed by #2550
Labels
bug Something isn't working

Comments


Describe the bug

With the changes made in ART 1.19.0 to allow specifying the pool size for parallel AutoAttack execution, a simple bug was introduced into the metadata returned. Instead of returning the actual value intended for num_attacks, a literal placeholder string is returned (with no interpolation). This also impacts line #345.

To Reproduce
Steps to reproduce the behavior:

I am using a HEART library venv with ART 1.19.0 installed to demonstrate reproducibility:

# assert ART 1.19.0 is installed within venv
(heart-dev) adam@lockwood:~/ibm/heart-sandbox$ conda list | grep -i "adversarial"
adversarial-robustness-toolbox 1.19.0                   pypi_0    pypi

# repl
(heart-dev) adam@lockwood:~/ibm/heart-sandbox$ python
Python 3.11.11 | packaged by conda-forge | (main, Dec  5 2024, 14:17:24) [GCC 13.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>>
>>> from tests.utils import get_cifar10_image_classifier_pt
>>> from art.utils import load_dataset
>>> from art.attacks.evasion.auto_attack import AutoAttack
>>> from art.attacks.evasion.projected_gradient_descent.projected_gradient_descent_pytorch import (
...   ProjectedGradientDescentPyTorch,
... )
>>> from os import cpu_count
>>> import numpy as np
>>>
>>>
>>> labels = ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
>>> ptc = get_cifar10_image_classifier_pt(from_logits=True, is_jatic=False)
>>> pool = cpu_count() - 1
>>> print(cpu_count())
>>>
>>>
>>> (x_train, y_train), (_, _), _, _ = load_dataset("cifar10")
>>> x_train = x_train[:10].transpose(0, 3, 1, 2).astype("float32")
>>> y_train = y_train[:10]
>>>
>>>
>>> attacks = []
>>> attacks.append(
...   ProjectedGradientDescentPyTorch(
...     estimator=ptc, norm=np.inf, eps=0.1, max_iter=10, targeted=False, batch_size=32, verbose=False
...   )
... )
>>>
>>>
>>> attack_noparallel = AutoAttack(estimator=ptc, attacks=attacks, targeted=True, parallel_pool_size = 0)
>>> attack_parallel = AutoAttack(estimator=ptc, attacks=attacks, targeted=True, parallel_pool_size = pool)
>>>
>>>
>>> no_parallel_adv = attack_noparallel.generate(x=x_train, y=y_train)
>>> parallel_adv = attack_parallel.generate(x=x_train, y=y_train)
>>>
>>>
>>> # print attack_noparallel metadata
>>> print(repr(attack_noparallel))
AutoAttack(targeted=True, parallel_pool_size=0, num_attacks={len(self.attacks)})
BestAttacks:
image 1: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 2: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 3: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 4: n/a
image 5: n/a
image 6: n/a
image 7: n/a
image 8: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 9: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 10: n/a
>>>
>>>
>>> # print attack_parallel metadata
>>> print(repr(attack_parallel))
AutoAttack(targeted=True, parallel_pool_size=15, num_attacks={len(self.args)})
BestAttacks:
image 1: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 2: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 3: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 4: n/a
image 5: n/a
image 6: n/a
image 7: n/a
image 8: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 9: ProjectedGradientDescentPyTorch(norm=inf, eps=0.1, eps_step=0.1, targeted=True, num_random_init=0, batch_size=32, minimal=False, summary_writer=None, decay=None, max_iter=10, random_eps=False, verbose=False, )
image 10: n/a
>>>

Expected behavior
In the example above, you can see that both the non-parallel and parallel executions return metadata containing literal placeholder strings (num_attacks={len(self.attacks)} and num_attacks={len(self.args)}, respectively). This happens because, in the implicit string concatenation, the fragment after the line break is not prefixed with f, so the placeholder is never interpolated. The expected behavior is that num_attacks is replaced by the actual value.
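For clarity, here is a minimal, self-contained sketch of the pitfall (the string contents are simplified; only the missing f prefix matters):

```python
# In Python's implicit string concatenation, each fragment needs its own
# f prefix; a fragment without one keeps its {placeholders} verbatim.
num_attacks = 3

broken = (
    "AutoAttack(targeted=True, "
    "num_attacks={num_attacks})"   # no f prefix: braces kept literally
)
fixed = (
    "AutoAttack(targeted=True, "
    f"num_attacks={num_attacks})"  # f prefix: value is interpolated
)

print(broken)  # AutoAttack(targeted=True, num_attacks={num_attacks})
print(fixed)   # AutoAttack(targeted=True, num_attacks=3)
```

Adding the f prefix to the continuation fragment in AutoAttack's repr would produce the expected interpolated output.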

Screenshots
n/a

System information (please complete the following information):

  • OS = Ubuntu 22.04.4 LTS
  • Python version = 3.11
  • ART version or commit number = 1.19.0
  • PyTorch version = 2.3.1