aiml.evaluation package

Submodules

aiml.evaluation.check_accuracy module

check_accuracy.py

This module defines functions for calculating a model's accuracy on a given dataset.

aiml.evaluation.check_accuracy.check_accuracy(model, dataloader, device)[source]

Calculate the accuracy of a dataset using a pre-trained machine learning model.

Parameters:
  • model – The pre-trained machine learning model.

  • dataloader – The dataloader for the dataset to be tested.

  • device (str) – The device to use, either ‘cpu’ or ‘gpu’.

Returns:

The accuracy of the dataset when tested on the provided model.

Return type:

float
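
A minimal usage sketch; the model, dataset, and loader below are illustrative stand-ins, not part of this API:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    from aiml.evaluation.check_accuracy import check_accuracy

    # Illustrative stand-ins: any classifier and a matching dataloader will do.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    test_set = datasets.FakeData(size=64, image_size=(1, 28, 28),
                                 num_classes=10, transform=transforms.ToTensor())
    test_loader = DataLoader(test_set, batch_size=16)

    device = "gpu" if torch.cuda.is_available() else "cpu"  # values documented above
    acc = check_accuracy(model, test_loader, device)
    print(f"accuracy: {acc}")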

aiml.evaluation.check_accuracy.check_accuracy_with_flags(model, dataloader, device)[source]

Calculate the accuracy of a dataset using a machine learning model.

Parameters:
  • model – The machine learning model for evaluation.

  • dataloader – The dataloader for the dataset to be tested.

  • device (str) – The device to use, either ‘cpu’ or ‘gpu’.

Returns:

The accuracy of the dataset when tested on the provided model, together with a list showing which images were correctly recognized (True) or not (False).

Return type:

tuple (float, list)
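
A sketch building on the example above; unpacking into two values assumes the documented accuracy and flag list come back as a single (float, list) tuple:

    from aiml.evaluation.check_accuracy import check_accuracy_with_flags

    # Reuses model, test_loader, and device from the previous sketch.
    acc, flags = check_accuracy_with_flags(model, test_loader, device)
    misclassified = [i for i, ok in enumerate(flags) if not ok]
    print(f"accuracy: {acc}, misclassified image indices: {misclassified[:5]}")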

aiml.evaluation.dynamic module

dynamic.py

This module provides the decide_attack function, which decides the next attack to apply and its parameters.

aiml.evaluation.dynamic.decide_attack(result_list, attack_para_list=[], now_time='0', ori_acc=0.9)[source]

Write the results of the previous attack to a text file and determine the next attack and its parameters based on the attack history.

Parameters:
  • result_list (list) – A list whose first element is the overall mark; each subsequent element is a sublist recording one previous attack as its attack number, parameter number, and accuracy.

  • attack_para_list (list) – A list storing the parameter combinations for each attack.

  • now_time (str) – The program start time.

  • ori_acc (float) – The original accuracy (the accuracy when the model is tested on clean images).

Returns:

  • next_attack_number (int) – The number of the next attack (could be the same attack or the next one in the attack_method_list).

  • next_parameter_number (int) – The number of the next parameter set.

  • continue_testing (bool) – Whether to continue testing attacks.

Return type:

tuple (int, int, bool)
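
A hedged usage sketch; the history below is made up, and unpacking the result into three values assumes the documented returns come back as a single tuple:

    from aiml.evaluation.dynamic import decide_attack

    # Hypothetical history: overall mark first, then one
    # [attack_number, parameter_number, accuracy] sublist per finished attack.
    result_list = [0.9, [0, 0, 0.41]]

    next_attack, next_param, keep_going = decide_attack(
        result_list,
        attack_para_list=[[[0.03], [0.06], [0.13], [0.25]]],  # parameters per attack
        now_time="0",
        ori_acc=0.9,
    )
    if keep_going:
        print(f"next: attack {next_attack}, parameter set {next_param}")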

aiml.evaluation.evaluate module

evaluate.py

This module provides the evaluate function, which evaluates the model with the given data and attack methods.

aiml.evaluation.evaluate.evaluate(input_model, input_test_data, input_train_data=None, input_shape=None, clip_values=None, nb_classes=None, batch_size_attack=64, num_threads_attack=0, batch_size_train=64, batch_size_test=64, num_workers=4, require_n=3, dry=False, attack_para_list=[[[0.03], [0.06], [0.13], [0.25]], [[0.03], [0.06], [0.13], [0.25]], [[0], [10], [100]], [[0], [10], [100]], [[0], [10], [100]], [[1e-06]], [[100]], [[0.03], [0.06], [0.13], [0.25]], [[0], [10], [100]]])[source]

Evaluate the model’s performance using the provided data and attack methods.

Parameters:
  • input_model (str or model) – A string of the name of the machine learning model or the machine learning model itself.

  • input_test_data (str or dataset) – A string of the name of the testing dataset or the testing dataset itself.

  • input_train_data (str or dataset, optional) – A string of the name of the training dataset or the training dataset itself (default is None).

  • input_shape (tuple, optional) – Shape of input data (default is None).

  • clip_values (tuple, optional) – Range of input data values (default is None).

  • nb_classes (int, optional) – Number of classes in the dataset (default is None).

  • batch_size_attack (int, optional) – Batch size for attack testing (default is 64).

  • num_threads_attack (int, optional) – Number of threads for attack testing (default is 0).

  • batch_size_train (int, optional) – Batch size for training data (default is 64).

  • batch_size_test (int, optional) – Batch size for test data (default is 64).

  • num_workers (int, optional) – Number of workers to use for data loading (default is 4).

  • require_n (int, optional) – The number of adversarial images required for each class (default is 3).

  • dry (bool, optional) – When True, only test one example (default is False).

  • attack_para_list (list, optional) – List of parameter combinations for each attack method.

Returns:

None.
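
A dry-run sketch reusing the illustrative model and dataset from the check_accuracy example; since evaluate returns None, results are reported by the package rather than returned:

    from aiml.evaluation.evaluate import evaluate

    # Model and dataset objects are passed directly here; string names are
    # documented to work as well.
    evaluate(
        input_model=model,          # from the check_accuracy sketch
        input_test_data=test_set,
        input_shape=(1, 28, 28),
        clip_values=(0.0, 1.0),     # pixel range after ToTensor
        nb_classes=10,
        dry=True,                   # only test one example
    )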

Module contents