Data and scripts necessary to run genomic partitioning and prediction models on free amino acid traits measured in a diverse panel of 313 Arabidopsis lines.
- R v3.6.0
- PLINK v1.90b4 64-bit
- Miniconda3 (includes conda v4.6.7)
- Snakemake v5.4.2 (install to virtual environment)
To install Snakemake in a virtual environment, run:
conda env create --name multiblup --file environment.yaml
For future use, activate this environment with:
source activate multiblup
Edit the config.yaml file to specify the paths for the genotype and phenotype data.
Before running Snakemake, download the Arabidopsis Regional Mapping (RegMap) data (Horton et al. 2012):
cd data/external
wget https://github.com/Gregor-Mendel-Institute/atpolydb/blob/master/250k_snp_data/call_method_75.tar.gz
tar -xvf call_method_75.tar.gz
- data/raw/aa360_raw_ratios.csv: Raw measurements (nmol/mg seed) of 65 free amino acid traits measured in 313 accessions of Arabidopsis thaliana, as reported by Angelovici et al. 2013.
- data/processed/aa360_BLUEs.txt: Environment-adjusted best linear unbiased estimates (BLUEs) for 65 free amino acid traits, calculated using the HAPPI-GWAS pipeline from Slaten et al. 2020. See notebooks/01-calculate_BLUEs.Rmd for details.
- data/processed/aa360_covars.txt: Principal components from genotype data to model population structure.
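For orientation, here is a minimal R sketch for loading the processed files. It assumes they are tab-delimited with a header row and accession IDs in the first column; adjust to the actual layout if it differs.

```r
# Minimal sketch: load the processed trait and covariate files.
# Assumes tab-delimited files with a header row and accession IDs in the
# first column; adjust to the actual file layout.
blues  <- read.delim("data/processed/aa360_BLUEs.txt")
covars <- read.delim("data/processed/aa360_covars.txt")

str(blues)   # expect one row per accession, one column per FAA trait (65 traits)
str(covars)  # principal components used to model population structure
```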
The Snakemake workflow includes a Snakefile (which specifies the steps of the workflow and its variables) and rule files in rules/ that run specific commands.
The Snakefile defines variable names used throughout the workflow, with notes on how to include specific pathways and MapMan bincodes; other variables include trait names, the number of random SNP sets, and the number of cross-validations to perform.
The rule all: section is a pseudo-rule that tracks the expected output of each command in the rule files.
To run the workflow, edit the cluster configuration settings in submit.json and then run submit.sh.
rules/common.smk - specifies the location of the config.yaml file
- filter and convert genotype data to PED format
- export TAIR10 Ensembl gene IDs for SNP data
- calculate SNP weightings (these aren't actually used, could skip)
- run PCA and export a covariate file (two PCs are included here - recommend adjusting depending on your data)
- create training and testing sets for cross-validation (see the R sketch after this list)
- Note: you may need to run this step separately on the command line; repeating the command via loops/snakemake sometimes does not generate unique folds
- export kinship matrix for all SNPs
- estimate variances and heritability
- genomic prediction and cross-validation (REML + calculating BLUPs)
- summarize GBLUP output (reports/gblup.Rdata)
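The fold files themselves are produced by the workflow; the sketch below only illustrates, in R, one way to build non-overlapping training/testing ID lists. The paths, file names, and the assumption that accession IDs sit in the first column of the BLUEs file are hypothetical.

```r
# Illustration only: split accession IDs into k non-overlapping folds and
# write training/testing ID files. Paths and column position are assumptions.
set.seed(42)  # fixing the seed per run helps avoid repeated/identical folds
ids  <- as.character(read.delim("data/processed/aa360_BLUEs.txt")[[1]])
k    <- 5
fold <- sample(rep(seq_len(k), length.out = length(ids)))

dir.create("data/processed/cv", showWarnings = FALSE, recursive = TRUE)
for (i in seq_len(k)) {
  writeLines(ids[fold != i], sprintf("data/processed/cv/train_fold%02d.txt", i))
  writeLines(ids[fold == i], sprintf("data/processed/cv/test_fold%02d.txt", i))
}
```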
- filter and export a list of pathway SNPs (includes a 2.5 kb buffer before and after each pathway gene; see the sketch after this list)
- calculate kinship matrices for each pathway (list1) and all remaining SNPs (list2)
- estimate variances and heritability for each SNP partition
- genomic prediction and cross-validation with multiple kernels/random effects (REML + calculating BLUPs)
- summarize MultiBLUP output (reports/multiblup.RData)
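As a rough illustration of the pathway SNP filter described above, the R sketch below selects SNPs within 2.5 kb of pathway gene coordinates; `snp_map` and `pathway_genes` are toy stand-ins for the workflow's actual SNP map and TAIR10 gene annotations.

```r
# Toy stand-ins for the real SNP map and pathway gene coordinates
snp_map <- data.frame(snp_id = c("chr1_1000", "chr1_60000"),
                      chr = c(1, 1), pos = c(1000, 60000),
                      stringsAsFactors = FALSE)
pathway_genes <- data.frame(chr = 1, start = 500, end = 2000)

buffer <- 2500  # 2.5 kb buffer before and after each pathway gene

in_pathway <- vapply(seq_len(nrow(snp_map)), function(i) {
  any(pathway_genes$chr == snp_map$chr[i] &
      snp_map$pos[i] >= pathway_genes$start - buffer &
      snp_map$pos[i] <= pathway_genes$end + buffer)
}, logical(1))

# list1 = pathway SNPs, list2 = all remaining SNPs (one kernel each)
writeLines(snp_map$snp_id[in_pathway],  "list1")
writeLines(snp_map$snp_id[!in_pathway], "list2")
```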
- first option: generate 5000 random gene groups with a uniform distribution of SNPs. This is useful for examining how partition size influences the heritability explained/model fit, or for comparing many partitions of varying size to an empirical distribution (output: reports/lr_null_results.csv)
- second option: generate 1000 random gene groups for each pathway, excluding pathway SNPs and sampling a similar number of SNPs/genes (see the sketch after this list)
- calculate kinship matrices for each random group
- estimate variances and heritability for each random SNP set
- summarize results across all 1000 gene groups (reports/null_dist_results_pathways/{pathway}_null.csv)
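A rough R sketch of the second option (size-matched random gene groups) is shown below; the gene ID vectors are toy stand-ins for the workflow's actual inputs.

```r
# Toy stand-ins for the full gene list and the pathway gene list
all_genes     <- paste0("AT1G", sprintf("%05d", 1:500))
pathway_genes <- all_genes[1:10]

n_groups   <- 1000
candidates <- setdiff(all_genes, pathway_genes)  # exclude pathway genes

set.seed(1)
random_groups <- lapply(seq_len(n_groups), function(i) {
  sample(candidates, length(pathway_genes))      # match the number of genes
})
```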
R notebooks to summarize results.
Using raw FAA trait data, calculate BLUEs for each trait using the HAPPI-GWAS
pipeline.
Checks the quality of model output and summarizes prediction results for the GBLUP and MultiBLUP models (e.g. proportion of heritability explained, likelihood ratio, prediction accuracy, reliability, bias, MSE).
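For reference, one common set of definitions for these prediction summaries is sketched below; this is an assumption, and the notebook may compute them differently. The toy vectors stand in for observed and predicted test-set values.

```r
# One common set of definitions for the prediction summaries (an assumption;
# the notebook may compute these differently). obs and pred are observed
# phenotypes and predicted values for a test set; h2 is the estimated
# heritability of the trait. Toy values let the sketch run as-is.
obs  <- c(1.2, 0.8, 1.5, 0.3, 1.1)
pred <- c(1.0, 0.9, 1.3, 0.5, 1.2)
h2   <- 0.6

accuracy    <- cor(obs, pred) / sqrt(h2)   # prediction accuracy
reliability <- accuracy^2                  # reliability of predictions
bias        <- coef(lm(obs ~ pred))[2]     # regression slope; 1 = unbiased
mse         <- mean((obs - pred)^2)        # mean squared error
```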
Examines properties of the random gene groups (e.g. distribution of the likelihood ratio) and performs quantile regression to establish 95th percentiles for the proportion of heritability explained and the likelihood ratio (see the nice discussion of this approach in Edwards et al. 2016).
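A minimal sketch of the quantile regression step using the quantreg package is shown below; the column names and toy data are assumptions, not the notebook's actual inputs.

```r
# Sketch: fit the 95th percentile of the likelihood ratio as a function of
# partition size across random gene groups. Column names (n_snps, lr) and
# the toy data are assumptions. The same approach can be applied to the
# proportion of heritability explained.
library(quantreg)

set.seed(1)
null_dist <- data.frame(n_snps = sample(100:5000, 500, replace = TRUE))
null_dist$lr <- rexp(500) * log(null_dist$n_snps)  # toy null distribution

fit95 <- rq(lr ~ n_snps, tau = 0.95, data = null_dist)

# A pathway stands out if its observed likelihood ratio exceeds the predicted
# 95th percentile for a random group with the same number of SNPs.
predict(fit95, newdata = data.frame(n_snps = 1200))
```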
Identifies pathways that pass significance criteria based on comparison to random gene groups with the same number of SNPs (proportion of h2, likelihood ratio, and increase in prediction accuracy).
Code to create figures used in the manuscript.
Project based on the cookiecutter data science project template. #cookiecutterdatascience
├── LICENSE
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- manuscript documents
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- R notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
├── environment.yaml <- The requirements file for reproducing the analysis conda environment
│
├── src <- Source code for use in this project.
│ ├── data <- Scripts to download or generate data
│ │
│ ├── features <- Scripts to turn raw data into features for modeling
│ │
│ ├── models <- Scripts to train models and then use trained models to make
│ │ predictions
│ │
│ └── visualization <- Scripts to create exploratory and results oriented visualizations
│
├── Snakefile          <- Snakemake workflow to execute analyses
│
└── submit.json <- Configuration settings to run snakemake on a computing cluster