Add Membership Inference Attacks #557

Open
joshua-oss opened this issue May 11, 2023 · 0 comments

Tabular queries and tabular synthetic data are amenable to the idealized "Leave One Out" (LOO) membership inference attack described in [1]. This attack can be used in conjunction with the Bayesian empirical privacy estimation utility at [2].
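
A minimal sketch of one LOO trial, assuming hypothetical `train_fn` (the mechanism under test, e.g. a synthesizer or query release) and `score_fn` (the adversary's distinguisher); the resulting (member, guess) pairs are what would be fed to the Bayesian estimation utility in [2], whose exact API is not shown here:

```python
import numpy as np

def loo_attack_trial(train_fn, score_fn, data, candidate, rng):
    """One trial of the idealized Leave-One-Out attack.

    train_fn(data) -> release (e.g., synthetic dataset or query answers)
    score_fn(release, candidate) -> higher means "candidate looks present"
    """
    # Flip a fair coin to decide whether the candidate joins the training data.
    member = int(rng.integers(0, 2))
    train = np.vstack([data, candidate]) if member else data
    release = train_fn(train)
    # The adversary guesses membership by thresholding the score.
    guess = int(score_fn(release, candidate) > 0.0)
    return member, guess
```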

The effectiveness of the LOO attack depends on choosing candidate records that allow singling out while being maximally mutually distinguishable. We could provide a utility to choose these candidates and run the membership inference attack (MIA).
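
One plausible selection heuristic (an assumption, not a settled design) is greedy farthest-point selection on standardized numeric columns, which picks records that are far from one another and from the bulk of the data:

```python
import numpy as np

def choose_candidates(data, k, rng):
    """Pick k mutually distant records as LOO candidates."""
    # Standardize so no single column dominates the distance metric.
    X = (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-9)
    idx = [int(rng.integers(len(X)))]           # random seed point
    d = np.linalg.norm(X - X[idx[0]], axis=1)   # distance to chosen set
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                 # farthest from chosen set
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return data[idx]
```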

Desiderata:
P0: Enable the strongest possible LOO attack
P1: Enable the LOO attack with slightly relaxed assumptions about how distinguishable the candidates are
P1: Enable the LOO attack with candidates chosen uniformly at random, reporting the average success rate of the adversary (see the sketch after this list)
P2: Enable the LOO attack based on metadata about which columns are considered more or less public, and what portion of the population the adversary is assumed to have auxiliary information about
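
For the random-candidate P1 variant, a minimal sketch (reusing the hypothetical `loo_attack_trial` above) that averages adversary accuracy over many trials:

```python
import numpy as np

def average_success_rate(train_fn, score_fn, data, n_trials=100, seed=0):
    """Candidates drawn uniformly at random; returns the adversary's
    average accuracy over n_trials LOO trials."""
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_trials):
        i = int(rng.integers(len(data)))
        candidate = data[i]
        rest = np.delete(data, i, axis=0)   # leave the candidate out
        member, guess = loo_attack_trial(train_fn, score_fn, rest, candidate, rng)
        correct += int(member == guess)
    return correct / n_trials
```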

[1] https://arxiv.org/abs/2111.09679
[2] https://github.com/microsoft/responsible-ai-toolbox-privacy
