Currently, experiment results are semi-manually coded using Bayesian-style reasoning to come up with the weights.
However, it's possible to do this more rigorously by using a well-established graph-based modeling framework such as Bayesian networks.
Work on this started a few months ago, and I had a very fruitful conversation about the topic with Joss, who provided key insight.
As part of this activity, the plan is to move forward by doing some more modeling with Bayesian networks and seeing how it works.
Some sub-activities as part of this might include:
Coming up with labeled data (probably enriched with what we have from the feedback reporting system) to validate the model and/or bootstrap/train it
Building some kind of web interface to make it quicker to label data (currently it takes too many clicks to do it via Explorer for many measurements)
Refining and experimenting with different features for the Bayesian network
Iterating on various configurations of the Bayesian network
Considering extending the observation data format to make it easier to extract the necessary features
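To make the modeling idea above concrete, here is a minimal sketch of the kind of reasoning a Bayesian network formalizes, done in pure Python by direct enumeration (no pgmpy dependency). The variable names ("blocked", "dns_anomaly", "http_failure") and all probabilities are hypothetical placeholders, not the actual model or its weights.

```python
# Minimal Bayesian inference by enumeration over a tiny two-feature network.
# All node names and numbers are illustrative assumptions, not the real model.

# Prior: P(blocked)
p_blocked = {True: 0.2, False: 0.8}

# CPDs: P(feature | blocked)
p_dns = {True: {True: 0.9, False: 0.1},    # P(dns_anomaly | blocked)
         False: {True: 0.05, False: 0.95}}
p_http = {True: {True: 0.8, False: 0.2},   # P(http_failure | blocked)
          False: {True: 0.1, False: 0.9}}

def posterior_blocked(dns_anomaly, http_failure):
    """P(blocked | evidence), computed by enumerating the joint."""
    joint = {}
    for b in (True, False):
        joint[b] = (p_blocked[b]
                    * p_dns[b][dns_anomaly]
                    * p_http[b][http_failure])
    z = sum(joint.values())
    return joint[True] / z

print(round(posterior_blocked(True, True), 3))  # → 0.973
```

A real implementation would delegate the enumeration to a library such as pgmpy's `VariableElimination`, but the arithmetic being performed is the same.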
There are still a few critical theoretical hurdles to overcome, which are questions I would like to pose to people who have more experience with this, namely:
What are some best practices or rules of thumb for determining the optimal cardinality of the nodes, and when is it appropriate to split a particular proposition into more sub-propositions?
How do you deal with the fact that the state of a particular proposition might be undefined? Is it OK for it to just be T | F, or is it recommended to explicitly add an "unknown" state?
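One way the explicit-"unknown" option could work, sketched in pure Python with made-up numbers: give the observed node three states (T, F, unknown) instead of two. If P(unknown | parent) is the same for every parent state, observing "unknown" leaves the posterior at the prior, which is usually the behavior you want from missing evidence.

```python
# Illustrative sketch of an explicit "unknown" state; all numbers are assumptions.

p_parent = {True: 0.3, False: 0.7}

# P(obs | parent) with a third "unknown" state; each row sums to 1, and the
# "unknown" column is identical across parent states (uninformative).
p_obs = {True:  {"T": 0.72, "F": 0.08, "unknown": 0.2},
         False: {"T": 0.08, "F": 0.72, "unknown": 0.2}}

def posterior(obs_state):
    """P(parent=True | obs) by enumeration."""
    joint = {p: p_parent[p] * p_obs[p][obs_state] for p in (True, False)}
    return joint[True] / sum(joint.values())

print(round(posterior("unknown"), 3))  # → 0.3, i.e. the prior P(parent=True)
```

The alternative (keeping the node binary and simply not entering evidence when the state is undefined) is equivalent in this simple case; the explicit state matters once "unknown" itself carries signal, e.g. when a measurement failing to produce the feature is correlated with blocking.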
Are there best practices on the optimal size of the CPD tables? (pgmpy has a hard limit of 32, but manually populating tables even 10+ columns wide is extremely tedious.) Are there tricks for splitting the nodes up in such a way as to keep the cardinality low?
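For context on why the table width blows up: a node's CPD has one column per joint configuration of its parents, i.e. the product of the parent cardinalities. One standard structural trick (sometimes called "divorcing parents") is to insert intermediate nodes that each summarize a subset of the parents. The sketch below only does the column-count arithmetic; the 6-parent structure is a hypothetical example, not the actual model.

```python
# Back-of-the-envelope CPD sizes, showing how "divorcing" parents via
# intermediate summary nodes shrinks the tables to populate by hand.
from math import prod

def cpd_columns(parent_cards):
    """Number of columns in a node's CPD table (one per parent configuration)."""
    return prod(parent_cards) if parent_cards else 1

# A binary node with 6 binary parents: one wide table.
flat = cpd_columns([2] * 6)            # 2**6 = 64 columns

# Divorced: two intermediate nodes each summarizing 3 of the parents,
# then the child depends only on the two intermediates.
divorced = (cpd_columns([2] * 3)       # intermediate 1: 8 columns
            + cpd_columns([2] * 3)     # intermediate 2: 8 columns
            + cpd_columns([2, 2]))     # child: 4 columns

print(flat, divorced)  # → 64 20
```

The trade-off is that the intermediate nodes impose an independence assumption (the child only sees the parents through the summaries), so they have to correspond to propositions that actually make sense in the domain.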