Releases: 4dn-dcic/tibanna
0.8.4
- Fixed an issue where the auto-determined EBS size was sometimes not an integer.
- Input files in the unicorn input json can now be written in the format `s3://bucket/key` instead of `{'bucket_name': bucket, 'object_key': key}`.
- `command` can be written as a list for aesthetic purposes (e.g. `[command1, command2, command3]` is equivalent to `command1; command2; command3`). A sketch combining both changes is shown below.
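A minimal sketch (not from the release itself) combining both changes, assuming a hypothetical shell-language run with made-up bucket, file, and image names:

```python
# Hypothetical unicorn input json fragment (shown as a Python dict) illustrating
# the s3://bucket/key input format and the list form of `command`.
input_json_fragment = {
    "args": {
        "language": "shell",                  # assumed shell-language run for illustration
        "container_image": "ubuntu:20.04",    # hypothetical image
        # list form, equivalent to "sort /data1/input/in.txt > out.txt; gzip out.txt"
        "command": [
            "sort /data1/input/in.txt > out.txt",
            "gzip out.txt",
        ],
        "input_files": {
            # previously written as {'bucket_name': 'my-bucket', 'object_key': 'in.txt'}
            "file:///data1/input/in.txt": "s3://my-bucket/in.txt",
        },
    },
}
```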
0.8.3
- A newly introduced issue of `--usergroup` not working properly with `deploy_unicorn` / `deploy_core` is now fixed.
- One can now specify `mem` (in GB) and `cpu` instead of `instance_type`; the most cost-effective instance type will be auto-determined.
- One can now set `behavior_on_capacity_limit` to `other_instance_types`, in which case Tibanna will try the top 10 instance types in order of decreasing hourly cost.
- EBS size can be specified in the format `3x`, `5.5x`, etc. to make it 3 (or 5.5) times the total input size. A config sketch using these options follows this list.
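A minimal, hypothetical `config` sketch (as a Python dict) using the new options; the values are illustrative only:

```python
# Hypothetical `config` section illustrating the 0.8.3 options.
config = {
    "mem": 8,    # GB, used together with cpu instead of instance_type
    "cpu": 4,    # cores; the cheapest matching instance type is auto-determined
    "behavior_on_capacity_limit": "other_instance_types",  # fall back to other instance types
    "ebs_size": "3x",  # 3 times the total input size
}
```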
0.8.2
- A newly introduced issue with the `tibanna_4dn` CLI is now fixed.
- One can now directly send in a command and a container image without any CWL/WDL (language = `shell`).
- One can now send a local/remote (http or s3) Snakemake workflow file to awsem and run it (either the whole thing, a single step, or multiple steps in it) (language = `snakemake`).
- Output target and input file dictionary keys can now be a file name instead of an argument name (must start with `file://`).
  - Input file dictionary keys must be under `/data1/input`, `/data1/out`, or either `/data1/shell` or `/data1/snakemake` (depending on the language option).
- With the shell / snakemake option, one can also `exec` into the running docker container after sshing into the EC2 instance.
- The `dependency` field can be in `args`, in `config`, or outside both in the input json. A sketch of a shell-language input json using these features follows this list.
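A minimal sketch of a shell-language input json (as a Python dict); the image, command, buckets, file names, and dependency ARN are hypothetical:

```python
# Hypothetical shell-language input json (no CWL/WDL), shown as a Python dict.
input_json = {
    "args": {
        "language": "shell",
        "container_image": "ubuntu:20.04",      # hypothetical image
        "command": "gzip /data1/input/in.txt",
        "input_files": {
            # file-name key (starts with file://) instead of an argument name
            "file:///data1/input/in.txt": {
                "bucket_name": "my-bucket",      # hypothetical bucket
                "object_key": "in.txt",
            },
        },
        "output_S3_bucket": "my-output-bucket",
        "output_target": {
            # output target key as a file name, also starting with file://
            "file:///data1/input/in.txt.gz": "results/in.txt.gz",
        },
        # dependency may also be placed in config or at the top level of the input json
        "dependency": {"exec_arn": ["arn:aws:states:..."]},  # hypothetical, truncated ARN
    },
    "config": {"ebs_size": 10, "shutdown_min": 30},
}
```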
0.8.1
- `deploy_core` (and `deploy_unicorn`) not working in a non-venv environment fixed.
- Local CWL/WDL files and CWL/WDL files on S3 are supported.
- New issue with opening the browser with `run_workflow` fixed.
0.8.0
- Tibanna can now be installed via `pip install tibanna`! (no need to git clone)
- Tibanna now has its own CLI! Instead of `invoke run_workflow`, one should use `tibanna run_workflow`.
- Tibanna's API now has its own class! Instead of `from core.utils import run_workflow`, one should use `from tibanna.core import API` and then `API().run_workflow(...)`.
- The API `run_workflow()` can now directly take an input json file as well as an input dictionary (both through the `input_json` parameter), as sketched below.
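A minimal usage sketch of the new API class; the input json file name and dictionary contents are hypothetical:

```python
from tibanna.core import API

api = API()

# Pass a path to an input json file (hypothetical file name) ...
api.run_workflow(input_json="my_input.json")

# ... or pass an equivalent dictionary through the same parameter.
input_dict = {
    "args": {"app_name": "md5", "input_files": {}},  # hypothetical minimal args
    "config": {"ebs_size": 10},
}
api.run_workflow(input_json=input_dict)
```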
- The `rerun` CLI now has the `--appname_filter` option exposed.
- The `rerun_many` CLI now has `--appname-filter`, `--shutdown-min`, `--ebs-size`, `--ebs-type`, `--ebs-iops`, `--key-name`, and `--name` options exposed. The API also now has corresponding parameters.
- The `stat` CLI and API both have a new parameter `n` (`-n`) that prints out only the first n lines. The option `-v` (`--verbose`) is now replaced by `-l` (`--long`).
Pony
- `tibanna_4dn` is now separate from `tibanna`, inherits from `tibanna` classes, and uses modules from `tibanna`.
- Unicorn deployment: `tibanna deploy_unicorn`
- Pony deployment: `tibanna_4dn deploy_pony`
0.8.0b1
0.7.0
Now Tibanna uses Python 3.6 (Python 2.7 is deprecated)

- Newly introduced issue with non-list secondary output target handling fixed.
- Fixed the issue with top command reporting from EC2 not working any more.
- The `run_workflow` function no longer alters the original input dictionary.
- Instances now auto-terminate when CPU utilization is zero (inactivity) for an hour (mostly due to an AWS-related issue, but could be others).
- The `rerun` function with a run name that contains a uuid at the end (to differentiate identical run names) now removes it from `run_name` before adding another uuid.
Pony
- md5/filesize for extra files not working - fixed
- Tibanna initiator now uses data environment
- fdn connection exception handling is added to Tibanna initiator
- A newly introduced error of updating wfr status upon awsem error not working is now fixed
- spot instance capacity / instance limit handling options added
0.6.1
- Default public bucket support is now deprecated, since it also allows access to all buckets in one's own account. Users must specify buckets at deployment, even for public buckets. If the user doesn't specify any bucket, the deployed Tibanna will only have access to the public Tibanna test buckets of the 4DN account.
- A newly introduced issue of `rerun` with no `run_name` in `config` fixed.
Pony
- Newly introduced issue with `bed2multivec` handling at `update_ffmeta_awsem` now fixed.
0.6.0
- The input json can now be simplified (see the sketch after this list):
  - `app_name`, `app_version`, `input_parameters`, `secondary_output_target`, `secondary_files` fields can now be omitted (now optional).
  - `instance_type`, `ebs_size`, `EBS_optimized` can be omitted if benchmark is provided (`app_name` is a required field to use benchmark).
  - `ebs_type`, `ebs_iops`, `shutdown_min` can be omitted if using the defaults ('gp2', '', 'now', respectively).
  - `password` and `key_name` can be omitted if the user doesn't care to ssh into running/failed instances.
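A minimal sketch of a simplified input json (as a Python dict), assuming a benchmark exists for a hypothetical `md5` app so that instance type and EBS settings can be auto-determined; all names and locations are made up:

```python
# Hypothetical simplified input json: optional fields omitted, benchmark-driven sizing.
input_json = {
    "args": {
        "app_name": "md5",                                # required to use benchmark
        "cwl_main_filename": "md5.cwl",                   # hypothetical CWL file
        "cwl_directory_url": "https://example.com/cwl/",  # hypothetical CWL location
        "input_files": {
            "input_file": {"bucket_name": "my-bucket", "object_key": "in.txt"},
        },
        "output_S3_bucket": "my-output-bucket",
        "output_target": {"report": "results/md5_report"},
    },
    # instance_type, ebs_size, EBS_optimized, ebs_type, ebs_iops, shutdown_min,
    # password and key_name are all omitted here.
    "config": {"log_bucket": "my-log-bucket"},
}
```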
- Issue with rerun with a short run name containing uuid now fixed.

Pony

- `bedtomultivec` workflow's higlass registration support added.
0.5.9
- Wrong requirement of `SECRET` env is removed from unicorn installation.
- deploy_unicorn without specified buckets also works.
- deploy_unicorn now has a `--usergroup` option.
- Cloud metric statistics aggregation with runs > 24 hr now fixed.
- `invoke -l` lists all invoke commands.
- `invoke add_user`, `invoke list` and `invoke users` added.
- `log()` function not assuming default step function fixed.
- Fixed `invoke log` working only for currently running jobs.
Pony
- app_version auto-inserted to workflow run title
- output_quality_metric field removed from workflow run objects
- better error message for update_ffmeta for AWSEM error