-
A standard way of achieving variable re-use is OmegaConf's variable interpolation feature. I strongly recommend reading through the OmegaConf documentation. Using variable interpolation to re-use parameters, your config would look like this:

```yaml
split:
  run: False
  # mandatory:
  root_path: D:/breast_seg/db_test
  data_dim: 3
  train_dim: 3
train:
  run: False
  # mandatory:
  root_path: ${split.root_path}
  data_dim: ${split.data_dim}
  train_dim: ${split.train_dim}
predict:
  run: True
  # mandatory:
  root_path: ${split.root_path}
  data_dim: ${split.data_dim}
  train_dim: ${split.train_dim}
```
-
I am currently trying to replace argparse with Hydra config files to set the hyperparameters of a deep learning neural network.
I succeeded in using a config.yaml file linked to a Hydra main file to run a training and a prediction.
However, the process loads three .py files, and there are some parameters common to them (the file path and the number of labels, for example).
Is there a way to use a parameter several times in a config.yaml file supported by Hydra?
Main file structure:
Config file:
Best,