
About the Chapter 15 code for federated learning under differential privacy #13

Open
Niujie-hot opened this issue Dec 11, 2022 · 9 comments

@Niujie-hot

In the Chapter 15 code for federated learning under differential privacy, after the models are aggregated, adding noise makes all of the model's predictions NaN, so the loss becomes NaN and the accuracy stays at 10. Could you explain why this happens?
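For reference, the sketch below shows roughly where the noise enters in a DP-FedAvg style server step; it is not the book's actual code, and the names aggregate_with_noise, client_diffs, conf['sigma'], and conf['C'] are assumptions. If the noise standard deviation (here sigma * C / num_clients) is large relative to the averaged update, the perturbed weights can push activations into overflow and produce NaN, which is one thing worth checking.

import torch

def aggregate_with_noise(global_model, client_diffs, conf):
    # Hypothetical sketch of DP-FedAvg server aggregation: average the
    # (already clipped) client updates, then add Gaussian noise whose
    # scale is tied to the clipping bound C.
    num_clients = len(client_diffs)
    sigma = conf['sigma']  # noise multiplier (assumed config key)
    C = conf['C']          # clipping bound (assumed config key)
    for name, param in global_model.named_parameters():
        # Mean of the per-client parameter differences for this layer.
        update = sum(diff[name] for diff in client_diffs) / num_clients
        # Gaussian noise with std proportional to the clipped sensitivity.
        noise = torch.randn_like(param) * sigma * C / num_clients
        param.data.add_(update.to(param.device) + noise)
    return global_model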

@xiaoqu1

xiaoqu1 commented Feb 9, 2023

Has this been resolved yet?

@xiaoqu1

xiaoqu1 commented Feb 11, 2023

I gave it a try; the problem goes away after running a few more rounds.
[screenshot attached]

@Niujie-hot
Author

Answer:

I wrote my own resnet18 network and then simplified it a bit, and this problem has not appeared since.

@Niujie-hot
Author

I gave it a try; the problem goes away after running a few more rounds. [screenshot attached]

OK, thank you.

@ZL-BUAA

ZL-BUAA commented Apr 4, 2023

Hello, could you share the simplified network? And what values do the hyperparameters need to be tuned to for this to work?

@Niujie-hot
Author

Hello, could you share the simplified network? And what values do the hyperparameters need to be tuned to for this to work?

I kept resnet18's original settings and just removed some of the layers.
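The simplified network itself is never posted in the thread, so the following is only a guess at what "removed some of the layers" could look like: reusing torchvision's ResNet building blocks with one BasicBlock per stage instead of ResNet-18's two. Treat it as a sketch, not the author's actual model.

from torchvision.models.resnet import ResNet, BasicBlock

def simplified_resnet(num_classes=10):
    # Hypothetical reduction: [1, 1, 1, 1] blocks per stage instead of
    # ResNet-18's [2, 2, 2, 2]; everything else is left at the defaults.
    return ResNet(BasicBlock, [1, 1, 1, 1], num_classes=num_classes)

model = simplified_resnet()
print(sum(p.numel() for p in model.parameters()))  # noticeably fewer parameters than resnet18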

@ZL-BUAA

ZL-BUAA commented Apr 4, 2023

Which layers exactly did you remove?

@ZL-BUAA

ZL-BUAA commented Apr 5, 2023

Hello, the differential privacy in the code uses the Gaussian mechanism. With the hyperparameters set in your code, what is the corresponding ε?
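No one answers this in the thread. For the classical Gaussian mechanism the calibration sigma = sqrt(2 * ln(1.25/delta)) * Δf / ε can be inverted to read off a per-round ε from the noise scale; the sketch below assumes the clipping bound C plays the role of the sensitivity Δf and that delta is chosen separately. Note that this classical bound is only valid when the resulting ε is below 1, and a full analysis over many training rounds would additionally need composition (e.g., an RDP/moments accountant).

import math

def gaussian_epsilon(sigma, sensitivity, delta=1e-5):
    # Invert the classical calibration sigma = sqrt(2*ln(1.25/delta)) * sensitivity / epsilon.
    # Only meaningful for the standard analysis when the result is below 1.
    return math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / sigma

# Example with assumed values: C as the sensitivity, sigma as the noise standard deviation.
print(gaussian_epsilon(sigma=1.0, sensitivity=0.1))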

@TyrQueen

Hello, when the client computes the clipping coefficient norm_scale, the denominator is 0 and the call crashes. How should the parameters be adjusted to fix this error?
File "DP-FedAvg/client.py", line 62, in local_train
norm_scale = min(1, self.conf['C'] / (model_norm))
ZeroDivisionError: float division by zero

I looked into it and the problem is in model.py:

import math
import torch

def model_norm(model_1, model_2):
    # L2 norm of the parameter-wise difference between the two models.
    squared_sum = 0
    for name, layer in model_1.named_parameters():
        squared_sum += torch.sum(torch.pow(layer.data - model_2.state_dict()[name].data, 2))
    return math.sqrt(squared_sum)

At this point the two models' parameters are identical, so squared_sum comes out as 0.

So how should this part be changed?
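One common workaround (not from the book's code) is simply to guard against the zero denominator: when the locally trained model has not moved away from the global model, the update norm is exactly zero and there is nothing to clip, so the scale can default to 1. If the norm is still zero after local training, it is also worth checking that the local optimizer is actually updating the weights. A sketch, using the names from the traceback above (the attribute names in the usage comment are assumptions):

def clipping_scale(C, diff_norm):
    # min(1, C / ||Δw||), with the degenerate case ||Δw|| == 0 handled
    # explicitly (an all-zero update needs no clipping).
    if diff_norm == 0:
        return 1.0
    return min(1.0, C / diff_norm)

# Usage sketch inside local_train (self.conf['C'] and model_norm come from the repo):
# norm_scale = clipping_scale(self.conf['C'], model_norm(self.local_model, self.global_model))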
