aiml.attack package

Submodules

aiml.attack.adversarial_attacks module

adversarial_attacks.py

This module contains eight adversarial attacks from the ART library:

1. AutoProjectedGradientDescent
2. CarliniL0Method
3. CarliniL2Method
4. CarliniLInfMethod
5. DeepFool
6. PixelAttack
7. SquareAttack
8. ZooAttack

aiml.attack.adversarial_attacks.auto_projected_cross_entropy(estimator, eps=0.3, batch_size=32, eps_step=0.1, norm=inf)[source]

Create an Auto Projected Gradient Descent attack instance with cross-entropy loss.

Parameters:
  • estimator – The classifier to attack.

  • batch_size (int) – Batch size for the attack.

  • norm – Norm to use for the attack.

  • eps (float) – Maximum perturbation allowed.

  • eps_step (float) – Step size of the attack.

Returns:

An instance of AutoProjectedGradientDescent.
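
Example (a minimal sketch, not part of the package: the toy model, the ART PyTorchClassifier wrapper, and the random input batch are illustrative placeholders):

    import numpy as np
    import torch.nn as nn
    from art.estimators.classification import PyTorchClassifier

    from aiml.attack.adversarial_attacks import auto_projected_cross_entropy

    # Placeholder model: flatten 1x28x28 inputs into 10 class logits.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        input_shape=(1, 28, 28),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    # Build the APGD attack with cross-entropy loss and craft adversarial examples.
    attack = auto_projected_cross_entropy(classifier, eps=0.3, eps_step=0.1, batch_size=32)
    x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)  # placeholder inputs
    x_adv = attack.generate(x=x_test)

The examples for the other attack functions below reuse the classifier and x_test placeholders defined here.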

aiml.attack.adversarial_attacks.auto_projected_difference_logits_ratio(estimator, eps=0.3, batch_size=32, eps_step=0.1, norm=inf)[source]

Create an Auto Projected Gradient Descent attack instance with difference logits ratio loss.

Parameters:
  • estimator – The classifier to attack.

  • batch_size (int) – Batch size for the attack.

  • norm – Norm to use for the attack.

  • eps (float) – Maximum perturbation allowed.

  • eps_step (float) – Step size of the attack.

Returns:

An instance of AutoProjectedGradientDescent.
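
Usage mirrors the cross-entropy variant; a brief sketch reusing the placeholder classifier and x_test from the example above:

    from aiml.attack.adversarial_attacks import auto_projected_difference_logits_ratio

    # Same APGD setup; only the internal loss (difference logits ratio) differs.
    attack = auto_projected_difference_logits_ratio(classifier, eps=0.3, eps_step=0.1)
    x_adv = attack.generate(x=x_test)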

aiml.attack.adversarial_attacks.carlini_L0_attack(classifier, confidence=0.0, batch_size=32, learning_rate=0.01, binary_search_steps=10, max_iter=10, targeted=False, initial_const=0.01, mask=None, warm_start=True, max_halving=5, max_doubling=5, verbose=True)[source]

Create a Carlini L0 attack instance.

Parameters:
  • classifier – The classifier to attack.

  • batch_size (int) – Batch size for the attack.

  • confidence (float) – Confidence parameter.

  • targeted (bool) – Whether the attack is targeted.

  • learning_rate (float) – Learning rate for optimization.

  • binary_search_steps (int) – Number of binary search steps.

  • max_iter (int) – Maximum number of optimization iterations.

  • initial_const (float) – Initial constant for optimization.

  • mask – Mask for the attack.

  • warm_start (bool) – Whether to use warm-starting.

  • max_halving (int) – Maximum number of times to halve the constant.

  • max_doubling (int) – Maximum number of times to double the constant.

  • verbose (bool) – Whether to display verbose output.

Returns:

An instance of CarliniL0Method.
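
Example (a sketch reusing the placeholder classifier and x_test from the APGD example above):

    from aiml.attack.adversarial_attacks import carlini_L0_attack

    # Untargeted Carlini L0 attack; a small max_iter keeps the sketch fast.
    attack = carlini_L0_attack(classifier, confidence=0.0, max_iter=5, verbose=False)
    x_adv = attack.generate(x=x_test)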

aiml.attack.adversarial_attacks.carlini_L2_attack(classifier, confidence=0.0, batch_size=32, learning_rate=0.01, binary_search_steps=10, max_iter=10, targeted=False, initial_const=0.01, max_halving=5, max_doubling=5, verbose=True)[source]

Create a Carlini L2 attack instance.

Parameters:
  • classifier – The classifier to attack.

  • batch_size (int) – Batch size for the attack.

  • confidence (float) – Confidence parameter.

  • targeted (bool) – Whether the attack is targeted.

  • learning_rate (float) – Learning rate for optimization.

  • binary_search_steps (int) – Number of binary search steps.

  • max_iter (int) – Maximum number of optimization iterations.

  • initial_const (float) – Initial constant for optimization.

  • max_halving (int) – Maximum number of times to halve the constant.

  • max_doubling (int) – Maximum number of times to double the constant.

  • verbose (bool) – Whether to display verbose output.

Returns:

An instance of CarliniL2Method.
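
Example (a sketch reusing the placeholder classifier and x_test from the APGD example above):

    from aiml.attack.adversarial_attacks import carlini_L2_attack

    # Untargeted Carlini L2 attack with otherwise default optimization settings.
    attack = carlini_L2_attack(classifier, confidence=0.0, max_iter=5, verbose=False)
    x_adv = attack.generate(x=x_test)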

aiml.attack.adversarial_attacks.carlini_Linf_attack(classifier, confidence=0.0, batch_size=32, learning_rate=0.01, max_iter=10, targeted=False, decrease_factor=0.9, initial_const=1e-05, largest_const=20.0, const_factor=2.0, verbose=True)[source]

Create a Carlini Linf attack instance.

Parameters:
  • classifier – The classifier to attack.

  • batch_size (int) – Batch size for the attack.

  • confidence (float) – Confidence parameter.

  • targeted (bool) – Whether the attack is targeted.

  • learning_rate (float) – Learning rate for optimization.

  • max_iter (int) – Maximum number of optimization iterations.

  • decrease_factor (float) – Factor for decreasing the constant.

  • initial_const (float) – Initial constant for optimization.

  • largest_const (float) – Maximum constant for optimization.

  • const_factor (float) – Factor for modifying the constant.

  • verbose (bool) – Whether to display verbose output.

Returns:

An instance of CarliniLInfMethod.
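
Example (a sketch reusing the placeholder classifier and x_test from the APGD example above):

    from aiml.attack.adversarial_attacks import carlini_Linf_attack

    # Untargeted Carlini Linf attack; the constant schedule is controlled by
    # initial_const, largest_const, and const_factor.
    attack = carlini_Linf_attack(classifier, confidence=0.0, max_iter=5, verbose=False)
    x_adv = attack.generate(x=x_test)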

aiml.attack.adversarial_attacks.deep_fool_attack(classifier, epsilon=1e-06, batch_size=32, max_iter=100, nb_grads=10, verbose=True)[source]

Create a Deep Fool attack instance.

Parameters:
  • classifier – The classifier to attack.

  • batch_size (int) – Batch size for the attack.

  • max_iter (int) – Maximum number of iterations.

  • epsilon (float) – Perturbation size.

  • nb_grads (int) – Number of gradients to compute.

  • verbose (bool) – Whether to display verbose output.

Returns:

An instance of DeepFool.
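
Example (a sketch reusing the placeholder classifier and x_test from the APGD example above):

    from aiml.attack.adversarial_attacks import deep_fool_attack

    # DeepFool perturbs each input towards the nearest decision boundary.
    attack = deep_fool_attack(classifier, max_iter=50, verbose=False)
    x_adv = attack.generate(x=x_test)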

aiml.attack.adversarial_attacks.pixel_attack(classifier, max_iter=100, th=None, es=1, targeted=False, verbose=True)[source]

Create a Pixel Attack instance.

Parameters:
  • classifier – The classifier to attack.

  • th – Threshold for attack.

  • es (int) – Early stopping criterion.

  • max_iter (int) – Maximum number of iterations.

  • targeted (bool) – Whether the attack is targeted.

  • verbose (bool) – Whether to display verbose output.

Returns:

An instance of PixelAttack.
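
Example (a sketch reusing the placeholder classifier and x_test from the APGD example above):

    from aiml.attack.adversarial_attacks import pixel_attack

    # Black-box pixel attack; a small max_iter keeps the sketch quick.
    attack = pixel_attack(classifier, max_iter=10, th=None, es=1, verbose=False)
    x_adv = attack.generate(x=x_test)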

aiml.attack.adversarial_attacks.square_attack(estimator, eps=0.3, batch_size=32, max_iter=100, norm=inf, adv_criterion=None, loss=None, p_init=0.8, nb_restarts=1, verbose=True)[source]

Create a Square Attack instance.

Parameters:
  • estimator – The estimator to attack.

  • batch_size (int) – Batch size for the attack.

  • norm – Norm to use for the attack.

  • adv_criterion – Adversarial criterion for the attack.

  • loss – Loss function for the attack.

  • max_iter (int) – Maximum number of iterations.

  • eps (float) – Maximum perturbation allowed.

  • p_init (float) – Initial perturbation scaling factor.

  • nb_restarts (int) – Number of restarts for the attack.

  • verbose (bool) – Whether to display verbose output.

Returns:

An instance of SquareAttack.
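
Example (a sketch reusing the placeholder classifier and x_test from the APGD example above):

    from aiml.attack.adversarial_attacks import square_attack

    # Black-box Square Attack; eps bounds the perturbation under the chosen norm.
    attack = square_attack(classifier, eps=0.1, max_iter=50, verbose=False)
    x_adv = attack.generate(x=x_test)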

aiml.attack.adversarial_attacks.zoo_attack(classifier, confidence=0.0, batch_size=32, learning_rate=0.01, max_iter=10, binary_search_steps=1, targeted=False, initial_const=0.001, abort_early=True, use_resize=True, use_importance=True, nb_parallel=128, variable_h=0.0001, verbose=True)[source]

Create a Zoo Attack instance.

Parameters:
  • classifier – The classifier to attack.

  • batch_size (int) – Batch size for the attack.

  • confidence (float) – Confidence parameter.

  • targeted (bool) – Whether the attack is targeted.

  • learning_rate (float) – Learning rate for optimization.

  • max_iter (int) – Maximum number of optimization iterations.

  • binary_search_steps (int) – Number of binary search steps.

  • initial_const (float) – Initial constant for optimization.

  • abort_early (bool) – Whether to abort early during optimization.

  • use_resize (bool) – Whether to use resize during optimization.

  • use_importance (bool) – Whether to use importance during optimization.

  • nb_parallel (int) – Number of parallel threads.

  • variable_h (float) – Variable for determining step size.

  • verbose (bool) – Whether to display verbose output.

Returns:

An instance of ZooAttack.
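
Example (a sketch reusing the placeholder classifier and x_test from the APGD example above; resize and importance sampling are disabled here only because the placeholder inputs are small):

    from aiml.attack.adversarial_attacks import zoo_attack

    # Black-box ZOO attack; nb_parallel trades speed against gradient-estimate quality.
    attack = zoo_attack(
        classifier,
        max_iter=5,
        use_resize=False,
        use_importance=False,
        nb_parallel=64,
        verbose=False,
    )
    x_adv = attack.generate(x=x_test)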

aiml.attack.attack_evaluation module

attack_evaluation.py

This module contains the function attack_evaluation, which takes the selected attack method and its parameters, generates adversarial images by slightly perturbing the given images with that attack, saves the resulting images to the “img” folder, and returns the classifier’s accuracy on them.

aiml.attack.attack_evaluation.attack_evaluation(attack_n, para_n, model, classifer, dataset, batch_size_attack, num_threads_attack, device, nb_classes, require_n=3, dry=False, attack_para_list=[], now_time='0')[source]

Check the performance of adversarial attack methods against the ML model.

Parameters:
  • attack_n (int) – Attack number (0 to 7).

  • para_n (int) – Parameter number for selecting a combination of attack parameters.

  • model (MLModel) – The machine learning model.

  • classifier (PyTorchClassifier) – The PyTorch classifier defined using the ART library.

  • dataset – The dataset to be modified with adversarial attacks.

  • batch_size_attack (int) – Batch size for the adversarial image data loader.

  • num_threads_attack (int) – Number of worker threads for the adversarial image data loader.

  • device (str) – “cpu” or “gpu”.

  • nb_classes (int) – The number of possible labels.

  • require_n (int) – The number of images per label that will be modified to produce adversarial examples.

  • dry (bool) – When True, the code only tests one example.

  • attack_para_list (list) – List of parameter combinations for the attack.

  • now_time (str) – The program start time.

Returns:

Accuracy of the classifier on the adversarial examples, as a fraction (1.0 = 100%).

Return type:

float
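
A hedged call sketch (every argument below is a placeholder; the expected MLModel, classifier, dataset, and attack_para_list structures are defined elsewhere in the package):

    from aiml.attack.attack_evaluation import attack_evaluation

    accuracy = attack_evaluation(
        attack_n=0,                 # AutoProjectedGradientDescent (cross-entropy)
        para_n=0,                   # first parameter combination in attack_para_list
        model=ml_model,             # placeholder: the machine learning model (MLModel)
        classifer=classifier,       # placeholder ART classifier (keyword spelled as in the signature)
        dataset=test_dataset,       # placeholder: dataset to perturb
        batch_size_attack=16,
        num_threads_attack=0,
        device="cpu",
        nb_classes=10,
        require_n=3,
        dry=True,                   # only test one example
        attack_para_list=attack_parameters,  # placeholder: package-specific parameter combinations
        now_time="0",
    )
    print(f"Accuracy on adversarial examples: {accuracy:.2%}")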

Module contents