Merge pull request #248 from AI-SDC/development
merge dev into main
rpreen authored Oct 30, 2023
2 parents 59ab093 + ac2441a commit 6a5f477
Showing 29 changed files with 460 additions and 89 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -88,6 +88,9 @@ ipython_config.py
# pyenv
.python-version

# development files
development_files/

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -53,7 +53,7 @@ repos:

# Black format Python and notebooks
- repo: https://github.com/psf/black
rev: 23.9.1
rev: 23.10.0
hooks:
- id: black-jupyter

6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,11 @@
# Changelog

## Version 1.1.2 (Oct 30, 2023)

Changes:
* Fix a bug related to the `rules.json` path when running from package ([#247](https://github.com/AI-SDC/AI-SDC/pull/247))
* Update user stories ([#247](https://github.com/AI-SDC/AI-SDC/pull/247))

## Version 1.1.1 (Oct 19, 2023)

Changes:
6 changes: 3 additions & 3 deletions CITATION.cff
@@ -1,8 +1,8 @@
cff-version: 1.2.0
title: AI-SDC
version: 1.1.1
doi: 10.5281/zenodo.10021954
date-released: 2023-10-19
version: 1.1.2
doi:
date-released: 2023-10-30
license: MIT
repository-code: https://github.com/AI-SDC/AI-SDC
languages:
4 changes: 4 additions & 0 deletions README.md
@@ -8,6 +8,10 @@

A collection of tools and resources for managing the statistical disclosure control of trained machine learning models. For a brief introduction, see [Smith et al. (2022)](https://doi.org/10.48550/arXiv.2212.01233).

### User Guides

A collection of user guides can be found in the 'user_stories' folder of this repository. These guides include configurable examples from the perspective of both a researcher and a TRE, with separate scripts for each. Instructions on which scripts to use and how to use them are included in the README of the ['user_stories'](https://github.com/AI-SDC/AI-SDC/tree/user_story_visibility/user_stories) folder.

## Content

* `aisdc`
4 changes: 3 additions & 1 deletion aisdc/attacks/attack_report_formatter.py
@@ -4,6 +4,7 @@

import json
import os
import pathlib
import pprint
import shutil
from datetime import date
@@ -153,7 +154,8 @@ def _is_instance_based_model(self, instance_based_model_score):

def _tree_min_samples_leaf(self, min_samples_leaf_score):
# Find min samples per leaf requirement
risk_appetite_path = "./aisdc/safemodel/rules.json"
base_path = pathlib.Path(__file__).parents[1]
risk_appetite_path = os.path.join(base_path, "safemodel", "rules.json")
min_samples_leaf_appetite = None

with open(risk_appetite_path, "r+", encoding="utf-8") as f:
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -13,7 +13,7 @@
project = "GRAIMATTER"
copyright = "2023, GRAIMATTER and SACRO Project Team"
author = "GRAIMATTER and SACRO Project Team"
release = "1.1.1"
release = "1.1.2"

# -- General configuration ---------------------------------------------------

2 changes: 1 addition & 1 deletion setup.py
@@ -9,7 +9,7 @@

setup(
name="aisdc",
version="1.1.1",
version="1.1.2",
license="MIT",
maintainer="Jim Smith",
maintainer_email="james.smith@uwe.ac.uk",
184 changes: 104 additions & 80 deletions user_stories/README.md
@@ -1,80 +1,104 @@
## User story 1: Ideal Case
- User creates an object "target" of type aisdc.attacks.target.Target and provides a separate code file that does the translation between the data in the format provided and the data in the format to be input to the machine learning model.
- User creates a model "model" from the safeXClassifier class and calls model.fit().
- User calls model.preliminary_check() to make sure their hyper-parameters are within the TRE risk appetite for algorithm X.
- User calls model.run_attack(target) for different attack types and iterates over different hyper-parameters until they have an accurate model, and they interpret attack results as safe.
- User calls model.request_release() with parameters modelsavefile.sav and again passing the target object (without it request_release does not run attacks).
- LIRA, worst_case, and attribute_inference attacks are run automatically,
- results are stored ready for the TRE output checkers to look at.
- System also saves the results of model.posthoc_check() for poor practice, model edits etc.
- TRE checker has everything they need to make a decision with no further processing.

## User story 2: Next Case
- User provides Target object and code, uses safeXClassifier() but does not pass data object to request_release() or save processed form of data.
- safeXClassifier report checks for class disclosure and TRE risk appetite for algorithm X.
- TRE output checker has to manually recreate processed data using code provided.
- TRE output checker is unable to run any attacks UNLESS they also know exactly which rows from the dataset were used for training and testing.
- So dataset object needs to store those specific details OR use fixed values for seed (e.g. to sklearn.train_test_split()) and be extremely transparent about how stratification was done.
- If TRE has enough info to recreate train/test processed data, then they can
- Run attacks from script.
- Then the post-processing script
- Then make a judgement.

## User Story 3: User provides dataset object but does not use safeXClassifier
- In this case we don’t currently have any checking for TRE-approved hyper-parameters or for class disclosure.
- But if it is a type where we have a safemodel version, we could create functionality to load it and then check hyper-parameters using existing code
- This raises the issue of whether safeModelClassifiers should have a load() option ?? – Is currently commented out
- Could also provide method for checking for k-anonymity (and possible pure nodes) where appropriate by refactoring safemodels.
- TREs need to manually configure and start scripts to do LIRA, Worst_Case and Attribute_Inference attacks
- NB this assumes their classifier outputs probabilities.

## User Story 4: User does not use safeXClassifier, or provide dataset object
### but does provide description of pre-processing,
### and provides output probabilities for the train and test set they have used (and true classes?)
#### Status: in progress, still to create the TRE script
- We cannot assume that the TRE has the capability to get the right bits of pre-processing code from their source code.
- Do we insist on this (would be needed for ‘outside world’)? what if this is commercially sensitive?
- TRE can in theory run LIRA and worst-case but not attribute inference attacks.
- There is a risk that they have misidentified the train/test splits to give us ones which make the classifier look less disclosive
- But this probably falls outside our remit?
- Recommend reject???
- We could automate generalisation (as a lower bound) and worst-case attacks if they give output probabilities – so we need to specify the format.
- TRE would need actual copies of processed data to run LIRA

**THIS would be the version that lets people use R**

## User Story 5: User creates differentially private algorithm (not via our code) and provides sufficient details to create data object.
#### Status: not implemented yet
- How do we know what the actual epsilon value is?
- If it is a keras model we can reload and query it if they have stored the training object as part of the model save (we need epochs, dataset size, L2 norm clip, noise values).
- But then their stored model probably has disclosive values in anyway …
- So would have to delete before release.
- And anyway, are keras models safe against attacks that change ‘trainable’ to true for different layers and then do repeated queries, viz. attacks on federated learning?
- If it is non-keras then do we take it on trust??
- Probably yes – that comes under safe researcher??

- TRE can recreate processed training and test sets and run attacks.
- Does the actual epsilon value matter if we are doing that?
- Yes probably, because it is the sort of thing a TRE may well set as a policy.

## User Story 6: Worst Case
#### Status: not implemented yet
- User makes R model for a tree-based classifier that we have not experimented with.
- TREs get researcher to provide at minimum the processed train and test files.

- From those we can’t run LIRA (because what would shadow models be?)
- but we can run worst-case attacks from the command line or a script if their model outputs probabilities.
- And we can measure generalisation error.
- But not attribute inference.
- We have no way of checking against class disclosure e.g. all training items in a specific subgroup ending in a ‘pure’ node.

- Very hard to check and recommend release

## 7: User provides safemodel with no data
- User loads in data and pre-processes it outwith the Target object
- User uses SafeDecisionTreeClassifier
- User calls request_release() themselves, but does not pass data object to request_release() or save processed form of data.
- SafeDecisionTreeClassifier report checks for class disclosure and TRE risk appetite for algorithm X.
- User may send the dataset to TRE, but does not provide details of pre-processing, nor gives details about which samples were used for training/testing
- TRE has to rely on their own judgement and what the researcher has told them - AISDC in this case cannot provide any additional assistance
# User Stories
In this section there are code examples of how the AI-SDC tools can be used by both researchers in a Trusted Research Environment (TRE) and TRE output checkers. Each project is unique, and therefore how the AI-SDC tools are applied may vary from case to case. The user guides have been split into 8 'user stories', each designed to fit a different use case.

The following diagram is intended to help identify the closest use-case match for a project:

![User Stories](user_stories_flow_chart.drawio.png)

## General description
The user stories are coding examples intended to maximise the chances of successfully and smoothly egressing a Machine Learning (ML) model from the TRE. These guides help researchers create appropriate ML models and the metadata files necessary for output checking of the ML model prior to egress, saving time and effort and ultimately optimising costs.

Each user story consists of at least 2 files:
> - **user_story_[x]_researcher_template.[py/R]** Example of how to generate an ML model, for TRE users/researchers.
> - **user_story_[x]_tre.py** Example of how to perform attacks and generate a report.

Extra examples of how to use [safemodels](https://github.com/AI-SDC/AI-SDC/tree/development/example_notebooks) and perform [attacks](https://github.com/AI-SDC/AI-SDC/tree/development/examples) can be found by following the corresponding links.

## Programming languages

Although the AI-SDC tools are written in Python, some projects may use a different programming language to create their target model. However, where possible Python should be preferred, as more extensive risk-disclosure testing has been performed with it.

While most of the stories are Python examples, `user_story_4` is written in R.

## Instructions

**For researchers or users**
1. Select the user story that best matches the project.
2. Familiarise yourself with the relevant user-story example, and discuss this with the TRE. Understanding how the process works for both sides will increase the chances of a smooth project.
3. Pre-process data and generate the ML model as appropriate for the project inside the TRE. Remember to follow the relevant researcher user story example code (**user_story_[x]_researcher.[py/R]**).
4. Make sure you have generated all the metadata, data and files required for output checking.
5. Fill out `default_config.yaml` with the appropriate fields. An example of this file, with the required experiment parameters, can be found [here](https://github.com/AI-SDC/AI-SDC/blob/user_story_visibility/user_stories/default_config.yaml).
6. Run the command `python generate_disclosure_risk_report.py`.
7. View the output in the **release_files** folder, where all the files, data and metadata required for egress are placed. A folder called **training_artefacts** is also created; this will include the training and testing data and any detailed results of the attacks.

*Alternative to steps 5 and 6*

5. Create a new configuration file, with a different name, using the same format as the [default_config.yaml](https://github.com/AI-SDC/AI-SDC/blob/user_story_visibility/user_stories/default_config.yaml) file (see the sketch after this list).
6. Run the command `python generate_disclosure_risk_report.py --config <your_config_file_name>`.
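
The sketch below illustrates one way to build such a configuration file from Python. Only the `user_story` key is taken from the repository's `default_config.yaml`; the other field names are placeholders and should be replaced with the real keys from that file.

```python
# Illustrative sketch only: apart from "user_story", the field names below are
# placeholders -- copy the real keys from default_config.yaml.
import yaml

config = {
    "user_story": 1,                 # number chosen from the flow chart above
    "model_file": "model.sav",       # hypothetical field name
    "train_data_file": "train.csv",  # hypothetical field name
    "test_data_file": "test.csv",    # hypothetical field name
}

with open("my_config.yaml", "w", encoding="utf-8") as f:
    yaml.safe_dump(config, f)

# Then run: python generate_disclosure_risk_report.py --config my_config.yaml
```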

**For TRE output checkers**
1. Select the user story that best matches the project. Preferably, this should have been agreed with the user/researcher beforehand.
2. Familiarise yourself with the relevant user-story example, both for researchers and the TRE. Understanding how the process works for both sides will increase the chances of a smoothly running project.
3. Once the researcher/user requests egress of the ML model, perform attacks according to the corresponding **user_story_[x]_tre.py** example.
4. Look at the reports produced and make a judgement on model egress.

## The user stories in detail

Unless otherwise specified, the stories are for Python machine learning models.

### User story 1: Ideal Case

The user is familiar with the AI-SDC tools, and the ML classifier chosen for the project has been wrapped with the SafeModel class. This class, as the name indicates, ensures that the most leaky ML hyperparameters cannot be set, thereby reducing the risk of data leakage from the generated model. Moreover, the user creates the `Target` object provided by the AI-SDC tools, which makes it easier to generate the data and metadata files required for the model release.

The user can perform attacks to check the viability of their model for release and has the opportunity to make changes where necessary. Once the user is satisfied and has run all the attacks, the TRE staff perform an output check and make a decision. In this way the user minimises the time to release.
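
A minimal sketch of this workflow is given below. The class and method names follow the description above, but the exact import paths, argument names and attack identifiers are assumptions; the definitive version is `user_story_1_researcher_template.py`.

```python
# Sketch of the ideal-case (user story 1) workflow. Import paths, argument
# names and the attack identifier are assumptions based on the description
# above -- see user_story_1_researcher_template.py for the working example.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from aisdc.attacks.target import Target
from aisdc.safemodel.classifiers import SafeDecisionTreeClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The SafeModel wrapper stops the most leaky hyper-parameters from being set.
model = SafeDecisionTreeClassifier(min_samples_leaf=10)
model.fit(X_train, y_train)
model.preliminary_check()  # compare hyper-parameters with the TRE risk appetite

# Wrap the processed data so attacks can be run and the split reproduced.
target = Target(model=model)
target.add_processed_data(X_train, y_train, X_test, y_test)  # assumed helper

# Iterate: run attacks, adjust hyper-parameters, repeat until results look safe.
model.run_attack(target, "worst_case")  # attack name is illustrative

# Save the model and re-run the attacks so the output checker has everything.
model.request_release("model.sav", target=target)  # argument names are assumptions
```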

### User story 2: SafeModel class and Target object employed

The user is familiar with the AI-SDC tools, and the ML classifier chosen for the project has been wrapped with the SafeModel class. This class, as the name indicates, ensures that the most leaky ML hyperparameters cannot be set, thereby reducing the risk of data leakage from the generated model.

In this example, the user does not pass the `Target` object provided by the AI-SDC tools to the function `request_release`, which would otherwise generate all the data and metadata files required for the output check. This means that the output checker has to recreate the processed data (code provided by the user). The user also needs to state which rows of the data were used for training and testing the model. Once all of this is provided, the output checker can run attacks, generate reports and make a decision on model release.
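
One simple way to make the train/test rows recoverable is to fix the random seed (and be explicit about any stratification), or to share the row indices themselves. The sketch below assumes a hypothetical CSV dataset with a `label` column.

```python
# Sketch: make the train/test split reproducible so the TRE output checker can
# recreate exactly which rows were used. The file and column names are
# hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("dataset.csv")  # hypothetical dataset
X, y = df.drop(columns="label"), df["label"]

# A fixed random_state and explicit stratification let the checker regenerate
# the identical split from the preprocessing code alone.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Alternatively (or additionally), record the exact row indices used.
pd.Series(X_train.index, name="train_index").to_csv("train_indices.csv", index=False)
pd.Series(X_test.index, name="test_index").to_csv("test_indices.csv", index=False)
```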

### User Story 3: User provides dataset object but does not use SafeModel

A vast number of classifiers are available, and the SafeModel wrapper exists for only a few of them. Therefore, for some projects it will not be possible to use the SafeModel class.

However, the user has provided a copy of their training data alongside the model to be released. By using this package, the TRE can therefore check the hyperparameters of the model, as well as run attacks and generate reports that will help it decide whether the model should be released.
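
A minimal sketch of such a hyperparameter check is shown below; the file name, the hyper-parameter inspected and the threshold are illustrative (the package's own risk appetite lives in `aisdc/safemodel/rules.json`).

```python
# Sketch (user story 3): the TRE loads the researcher's pickled scikit-learn
# style model and inspects its hyper-parameters. The file name and threshold
# are illustrative; the package's own thresholds are in aisdc/safemodel/rules.json.
import pickle

with open("researcher_model.pkl", "rb") as f:
    model = pickle.load(f)

params = model.get_params()  # available on scikit-learn style estimators
print(params)

# Example policy check for tree-based models: very small leaves can memorise
# individual records.
if "min_samples_leaf" in params and params["min_samples_leaf"] < 5:
    print("WARNING: min_samples_leaf is below the assumed risk appetite")
```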

### User Story 4: User does not use safeXClassifier or provide a dataset object
#### but does provide a description of pre-processing and the output probabilities for the train and test sets they have used

In this example, a researcher has a model (written in Python or R, for example) which makes predictions based on some data. The researcher has not provided a copy of their training data, but has provided, in CSV format, the output probabilities for each class their model predicts, for each sample in their dataset.

Using this package and this user story, the TRE can run some of the attacks available in the package. Doing so will generate a report, which will help the TRE decide whether the model should be released.
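
A sketch of producing such probability files is shown below; the dataset, model and file names are illustrative, and the exact layout expected by the scripts should be taken from the `user_story_4` examples.

```python
# Sketch: export per-class output probabilities for the train and test sets as
# CSV files (the hand-over format described above). Names are illustrative;
# check the user_story_4 examples for the exact layout expected.
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# One row per sample, one column per class probability.
pd.DataFrame(model.predict_proba(X_train)).to_csv("train_probabilities.csv", index=False)
pd.DataFrame(model.predict_proba(X_test)).to_csv("test_probabilities.csv", index=False)
```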

### User Story 5: User creates differentially private algorithm (not via our code) and provides sufficient details to create data object.
##### Status: not yet implemented

In this example, a researcher has built a differentially private algorithm but has not provided details of the training/testing data.

At the time of writing (October 2023), we are not in a position to automate the verification of claimed epsilon values.

Additionally, some packages include disclosive values as metadata embedded in their models, which would need to be extracted and removed prior to release.

We are therefore not able to recommend release of these models at the time of writing, although this work is still ongoing.

### User Story 6: Worst Case
##### Status: not yet implemented

In this example, a researcher has built a model of a kind that has not yet been tested with the aisdc package, and has not provided details of their training or testing data.

At the time of writing (October 2023), experiments are still being carried out to determine what we can tell in terms of class disclosure.

Therefore, this user story is still under experimentation/implementation.

### User Story 7: User provides safemodel with no data

In this example, a user builds a model using the SafeModel class and wraps their data in a Target object. However, the researcher forgets to call request_release() or Target.save(), which prevents any useful information about the training data or model from reaching the TRE.

Because of this, we are unable to proceed with this release, and the user is requested to call one of the above functions.

### User Story 8: User does not use a safemodel and provides no data

In this example, a user builds a model but does not use a SafeModel, and does not wrap their training/testing data in a Target object. The user only provides a description of the pre-processing which has been done.

Unfortunately, at this point, we cannot provide a recommendation to either release or reject the model. The researcher should be prompted to either wrap their data in a Target object, or provide a copy of their training and testing data.
2 changes: 1 addition & 1 deletion user_stories/default_config.yaml
@@ -5,7 +5,7 @@
# All other parameters need to be set by either the researcher or TRE

# Scenario to be run
user_story: 4
user_story: UNDEFINED

### Details of experiments and files - please replace these with the relevant filenames

9 changes: 8 additions & 1 deletion user_stories/generate_disclosure_risk_report.py
@@ -21,6 +21,7 @@
from user_story_3 import user_story_3_tre
from user_story_4 import user_story_4_tre
from user_story_7 import user_story_7_tre
from user_story_8 import user_story_8_tre

if __name__ == "__main__":
parser = argparse.ArgumentParser(
@@ -49,7 +50,11 @@
)

user_story = config["user_story"]
if user_story == 1:
if user_story == "UNDEFINED":
print(
"User story not selected, please select a user story by referring to user_stories_flow_chart.png and adding the relevant number to the the first line of 'default_config.yaml'"
)
elif user_story == 1:
user_story_1_tre.run_user_story(config)
elif user_story == 2:
user_story_2_tre.run_user_story(config)
@@ -59,5 +64,7 @@
user_story_4_tre.run_user_story(config)
elif user_story == 7:
user_story_7_tre.run_user_story(config)
elif user_story == 8:
user_story_8_tre.run_user_story(config)
else:
raise NotImplementedError(f"User story {user_story} has not been implemented")