
Commit

Merge pull request #5134 from FederatedAI/develop-1.11.3
Update 1.11.3 to Master
dylan-fan authored Sep 8, 2023
2 parents 8767db5 + 1f3e579 commit 5fdc5cf
Showing 29 changed files with 579 additions and 369 deletions.
6 changes: 3 additions & 3 deletions .gitmodules
@@ -1,12 +1,12 @@
[submodule "fateboard"]
path = fateboard
url = https://github.com/FederatedAI/FATE-Board.git
branch = v1.11.1
branch = v1.11.2
[submodule "eggroll"]
path = eggroll
url = https://github.com/WeBankFinTech/eggroll.git
branch = v2.5.1
branch = v2.5.2
[submodule "fateflow"]
path = fateflow
url = https://github.com/FederatedAI/FATE-Flow.git
branch = v1.11.1
branch = v1.11.2
2 changes: 1 addition & 1 deletion README.md
@@ -50,7 +50,7 @@ Deploying FATE to multiple nodes to achieve scalability, reliability and managea
- [EggRoll](https://github.com/WeBankFinTech/eggroll): A simple high-performance computing framework for (federated) machine learning.
- [AnsibleFATE](https://github.com/FederatedAI/AnsibleFATE): A tool to optimize and automate the configuration and deployment operations via Ansible.
- [FATE-Builder](https://github.com/FederatedAI/FATE-Builder): A tool to build package and docker image for FATE and KubeFATE.
- [FATE-LLM](https://github.com/FederatedAI/FATE-LLM/blob/main/README.md)
- [FATE-LLM](https://github.com/FederatedAI/FATE-LLM/blob/main/README.md): A framework to support federated learning for large language models (LLMs).
## Documentation

### FATE Design
7 changes: 7 additions & 0 deletions RELEASE.md
@@ -1,3 +1,10 @@
## Release 1.11.3
### Major Features and Improvements
> FederatedML
* FedAVGTrainer update code structure: support OffsiteTuningTrainer
* FedAVGTrainer update log format: report batch progress instead of batch index


## Release 1.11.2
### Major Features and Improvements
> FederatedML
@@ -183,7 +183,7 @@ wget https://webank-ai-1251170195.cos.ap-guangzhou.myqcloud.com/fate/${version}/
scp *.tar.gz app@192.168.0.1:/data/projects/install
scp *.tar.gz app@192.168.0.2:/data/projects/install
```
Note: this document requires the deployed FATE version >= 1.7.0; replace ${version} with the version number, e.g. 1.11.2, without the "v" character
Note: this document requires the deployed FATE version >= 1.7.0; replace ${version} with the version number, e.g. 1.11.3, without the "v" character
### 5.2 操作系统参数检查

**在目标服务器(192.168.0.1 192.168.0.2 192.168.0.3)app用户下执行**
2 changes: 1 addition & 1 deletion deploy/standalone-deploy/README.md
@@ -41,7 +41,7 @@ export version={FATE version for this deployment}
example:

```bash
export version=1.11.2
export version=1.11.3
```

### 2.2 Pulling images
4 changes: 2 additions & 2 deletions deploy/standalone-deploy/README.zh.md
@@ -35,13 +35,13 @@
Set the environment variables required for deployment (note: variables set this way are only valid in the current terminal session; if you open a new session, e.g. by logging in again or opening a new window, please set them again)

```bash
export version={FATE version for this deployment, e.g. 1.11.2}
export version={FATE version for this deployment, e.g. 1.11.3}
```

Example:

```bash
export version=1.11.2
export version=1.11.3
```

### 2.2 Pulling images
1 change: 0 additions & 1 deletion doc/federatedml_component/README.md
@@ -40,7 +40,6 @@ provide:
| [Homo-LR](logistic_regression.md) | HomoLR | Build homo logistic regression model through multiple parties. | Table, values are instances. | Table, values are instances. | | Logistic Regression Model, consists of model-meta and model-param. |
| [Homo-NN](homo_nn.md) | HomoNN | Build homo neural network model through multiple parties. | Table, values are instances. | Table, values are instances. | | Neural Network Model, consists of model-meta and model-param. |
| [Hetero Secure Boosting](ensemble.md) | HeteroSecureBoost | Build hetero secure boosting model through multiple parties | Table, values are instances. | Table, values are instances. | | SecureBoost Model, consists of model-meta and model-param. |
| [Hetero Fast Secure Boosting](ensemble.md) | HeteroFastSecureBoost | Build hetero secure boosting model through multiple parties in layered/mix manners. | Table, values are instances. | Table, values are instances. | | FastSecureBoost Model, consists of model-meta and model-param. |
| [Evaluation](evaluation.md) | Evaluation | Output the model evaluation metrics for user. | Table(s), values are instances. | | | |
| [Hetero Pearson](correlation.md) | HeteroPearson | Calculate hetero correlation of features from different parties. | Table, values are instances. | | | |
| [Hetero-NN](hetero_nn.md) | HeteroNN | Build hetero neural network model. | Table, values are instances. | Table, values are instances. | | Hetero Neural Network Model, consists of model-meta and model-param. |
1 change: 0 additions & 1 deletion doc/federatedml_component/README.zh.md
@@ -30,7 +30,6 @@ The Federatedml module includes federated implementations of many common machine learning algorithms. All…
| [Homo-LR](logistic_regression.md) | HomoLR | Build a horizontal logistic regression module across multiple parties. | Table, values are Instance | | | Logistic regression model, composed of the model itself and the model parameters |
| [Homo-NN](homo_nn.md) | HomoNN | Build a horizontal neural network module across multiple parties. | Table, values are Instance | | | Neural network model, composed of the model itself and the model parameters |
| [Hetero Secure Boosting](ensemble.md) | HeteroSecureBoost | Build a vertical Secure Boost module across multiple parties. | Table, values are Instance | | | SecureBoost model, composed of the model itself and the model parameters |
| [Hetero Fast Secure Boosting](ensemble.md) | HeteroFastSecureBoost | Quickly build tree models in layered/mix mode. | Table, values are Instance | Table, values are Instance | | FastSecureBoost model |
| [Evaluation](evaluation.md) | Evaluation | Output model evaluation metrics for the user. | Table(s), values are Instance | | | |
| [Hetero Pearson](correlation.md) | HeteroPearson | Compute the Pearson correlation coefficients of features from different parties. | Table, values are Instance | | | |
| [Hetero-NN](hetero_nn.md) | HeteroNN | Build a vertical neural network module. | Table, values are Instance | | | Vertical neural network model |
2 changes: 1 addition & 1 deletion eggroll
8 changes: 4 additions & 4 deletions fate.env
@@ -1,7 +1,7 @@
FATE=1.11.2
FATEFlow=1.11.1
FATEBoard=1.11.1
EGGROLL=2.5.1
FATE=1.11.3
FATEFlow=1.11.2
FATEBoard=1.11.2
EGGROLL=2.5.2
CENTOS=7.2
UBUNTU=16.04
PYTHON=3.8
21 changes: 17 additions & 4 deletions python/fate_client/pipeline/component/homo_nn.py
@@ -44,7 +44,8 @@
'loss': None,
'optimizer': None,
'nn_define': None,
'ds_config': None
'ds_config': None,
'server_init': False
}
except Exception as e:
print(e)
@@ -65,7 +66,10 @@ class HomoNN(FateComponent):
torch_seed, global random seed
loss, loss function from fate_torch
optimizer, optimizer from fate_torch
ds_config, config for deepspeed
model, a fate torch sequential defining the model structure
server_init, whether to initialize the model, loss and optimizer on the server; if configs are provided, they will be
used. In the current version this option is specifically designed for offsite-tuning
"""

@extract_explicit_parameter
@@ -82,7 +86,9 @@ def __init__(self,
loss=None,
optimizer: OptimizerType = None,
ds_config: dict = None,
model: Sequential = None, **kwargs):
model: Sequential = None,
server_init: bool = False,
**kwargs):

explicit_parameters = copy.deepcopy(DEFAULT_PARAM_DICT)
if 'name' not in kwargs["explict_parameters"]:
@@ -94,8 +100,15 @@ def __init__(self,
self.input = Input(self.name, data_type="multi")
self.output = Output(self.name, data_type='single')
self._module_name = "HomoNN"
self._updated = {'trainer': False, 'dataset': False,
'torch_seed': False, 'loss': False, 'optimizer': False, 'model': False}
self._updated = {
'trainer': False,
'dataset': False,
'torch_seed': False,
'loss': False,
'optimizer': False,
'model': False,
'ds_config': False,
'server_init': False}
self._set_param(kwargs["explict_parameters"])
self._check_parameters()

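The hunks above add a `server_init` switch to the `HomoNN` pipeline component (defaulting to `False`) and register it in the component's update dict. As a rough illustration only, the sketch below shows how the flag might be passed when declaring the component; the model architecture and the trainer/dataset settings are placeholder assumptions and are not part of this commit.

```python
# Hypothetical usage sketch, not taken from this commit.
import torch as t
from pipeline import fate_torch_hook
from pipeline.component import HomoNN
from pipeline.component.nn import TrainerParam, DatasetParam

fate_torch_hook(t)  # patch torch so layers/losses/optimizers become FATE-torch objects

# placeholder model: a small binary classifier over 30 features
model = t.nn.Sequential(
    t.nn.Linear(30, 16),
    t.nn.ReLU(),
    t.nn.Linear(16, 1),
    t.nn.Sigmoid()
)

homo_nn_0 = HomoNN(
    name='homo_nn_0',
    model=model,
    loss=t.nn.BCELoss(),
    optimizer=t.optim.Adam(model.parameters(), lr=0.01),
    trainer=TrainerParam(trainer_name='fedavg_trainer', epochs=10, batch_size=128),
    dataset=DatasetParam(dataset_name='table'),
    torch_seed=100,
    server_init=True  # new in 1.11.3: also initialize model/loss/optimizer on the server
)
```

With the default `server_init=False` the server side does not build these objects; setting it to `True` lets the server initialize the model, loss and optimizer from the provided configs, which, per the docstring above, is currently aimed at offsite-tuning.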
35 changes: 25 additions & 10 deletions python/fate_client/pipeline/component/nn/backend/torch/cust.py
@@ -3,9 +3,12 @@
from pipeline.component.nn.backend.torch.base import FateTorchLayer, FateTorchLoss
import difflib

ML_PATH = 'federatedml.nn'
LLM_PATH = "fate_llm"

MODEL_PATH = None
LOSS_PATH = None
LLM_MODEL_PATH = '{}.model_zoo'.format(LLM_PATH)
MODEL_PATH = '{}.model_zoo'.format(ML_PATH)
LOSS_PATH = '{}.loss'.format(ML_PATH)


def str_simi(str_a, str_b):
@@ -45,9 +48,14 @@ class CustModel(FateTorchLayer, nn.Module):

def __init__(self, module_name, class_name, **kwargs):
super(CustModel, self).__init__()
assert isinstance(module_name, str), 'name must be a str, specify the module in the model_zoo'
assert isinstance(class_name, str), 'class name must be a str, specify the class in the module'
self.param_dict = {'module_name': module_name, 'class_name': class_name, 'param': kwargs}
assert isinstance(
module_name, str), 'name must be a str, specify the module in the model_zoo'
assert isinstance(
class_name, str), 'class name must be a str, specify the class in the module'
self.param_dict = {
'module_name': module_name,
'class_name': class_name,
'param': kwargs}
self._model = None

def init_model(self):
@@ -62,11 +70,18 @@ def forward(self, x):
def get_pytorch_model(self, module_path=None):

if module_path is None:
return get_class(
self.param_dict['module_name'],
self.param_dict['class_name'],
self.param_dict['param'],
MODEL_PATH)
try:
return get_class(
self.param_dict['module_name'],
self.param_dict['class_name'],
self.param_dict['param'],
MODEL_PATH)
except BaseException:
return get_class(
self.param_dict['module_name'],
self.param_dict['class_name'],
self.param_dict['param'],
LLM_MODEL_PATH)
else:
return get_class(
self.param_dict['module_name'],
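The change above points `CustModel` at two model-zoo roots: when no `module_path` is given, it first resolves `module_name`/`class_name` under `federatedml.nn.model_zoo` and, if that import fails, retries under `fate_llm.model_zoo`. A minimal sketch of that lookup path, using a hypothetical module and class name, might look like this:

```python
# Hypothetical sketch; 'my_net' / 'MyNet' stand in for a real model_zoo entry.
from pipeline.component.nn.backend.torch.cust import CustModel

# refers to class MyNet defined in federatedml/nn/model_zoo/my_net.py
# (or, with the fallback added here, in fate_llm/model_zoo/my_net.py)
cust = CustModel(module_name='my_net', class_name='MyNet', class_num=10)

# when the job runs, the wrapped torch module is materialized roughly like this:
net = cust.get_pytorch_model()  # tries MODEL_PATH first, then LLM_MODEL_PATH
```

The fallback appears intended to let models shipped in `fate_llm.model_zoo` be referenced from the pipeline in the same way as built-in model-zoo models, without extra configuration.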
2 changes: 1 addition & 1 deletion python/fate_client/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "fate_client"
version = "1.11.2"
version = "1.11.3"
description = "Clients for FATE, including flow_client and pipeline"
authors = ["FederatedAI <contact@FedAI.org>"]
license = "Apache-2.0"
2 changes: 1 addition & 1 deletion python/fate_client/setup.py
@@ -48,7 +48,7 @@

setup_kwargs = {
"name": "fate-client",
"version": "1.11.2",
"version": "1.11.3",
"description": "Clients for FATE, including flow_client and pipeline",
"long_description": "FATE Client\n===========\n\nTools for interacting with FATE.\n\nquick start\n-----------\n\n1. (optional) create virtual env\n\n .. code-block:: bash\n\n python -m venv venv\n source venv/bin/activate\n\n\n2. install FATE Client\n\n .. code-block:: bash\n\n pip install fate-client\n\n\nPipeline\n========\n\nA high-level python API that allows user to design, start,\nand query FATE jobs in a sequential manner. For more information,\nplease refer to this `guide <./pipeline/README.rst>`__\n\nInitial Configuration\n---------------------\n\n1. Configure server information\n\n .. code-block:: bash\n\n # configure values in pipeline/config.yaml\n # use real ip address to configure pipeline\n pipeline init --ip 127.0.0.1 --port 9380 --log-directory ./logs\n\n\nFATE Flow Command Line Interface (CLI) v2\n=========================================\n\nA command line interface providing series of commands for user to design, start,\nand query FATE jobs. For more information, please refer to this `guide <./flow_client/README.rst>`__\n\nInitial Configuration\n---------------------\n\n1. Configure server information\n\n .. code-block:: bash\n\n # configure values in conf/service_conf.yaml\n flow init -c /data/projects/fate/conf/service_conf.yaml\n # use real ip address to initialize cli\n flow init --ip 127.0.0.1 --port 9380\n\n",
"author": "FederatedAI",
2 changes: 1 addition & 1 deletion python/fate_test/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "fate_test"
version = "1.11.2"
version = "1.11.3"
description = "test tools for FATE"
authors = ["FederatedAI <contact@FedAI.org>"]
license = "Apache-2.0"