
Clarify decision process #103

Open
j--- opened this issue Mar 28, 2022 · 2 comments

Comments


j--- commented Mar 28, 2022

Thanks for this work. The project is clear about its goals, and its documentation is clear about how to run the script. However, it's not clear to me what decision the score is meant to inform, and that decision should be reflected in the design of the score. I understand the goal is to secure critical projects, but what is the decision?
Is it a go/no-go decision on additional support for a project? If so, there would need to be a lot of work on deciding what score is the "cut line" and on ensuring that the partition into those two groups is stable. This could be hard if the score (or its noise) is continuous, because variation and noise need to be especially low around the cut line -- which would require a great deal of precise work, study of the parameters, and so on.
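To illustrate the cut-line concern, here is a hypothetical sketch (not the project's actual scoring code; the threshold and noise level are made-up parameters): a project whose true score sits near the cut line flips classification under small measurement noise, while one far from the line is classified stably.

```python
import random

random.seed(0)

THRESHOLD = 0.5   # hypothetical go/no-go cut line
NOISE = 0.02      # hypothetical per-measurement noise (std dev)
TRIALS = 1000

def flip_rate(true_score: float) -> float:
    """Fraction of noisy measurements that land on the other
    side of the threshold from the true score."""
    flips = 0
    for _ in range(TRIALS):
        observed = true_score + random.gauss(0, NOISE)
        if (observed >= THRESHOLD) != (true_score >= THRESHOLD):
            flips += 1
    return flips / TRIALS

print(flip_rate(0.70))   # far from the cut: essentially never flips
print(flip_rate(0.505))  # right at the cut: flips close to 40% of the time
```

The point is that a go/no-go design makes the score's behavior near one specific value matter far more than its behavior anywhere else.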
Is it an incremental resourcing decision, where the ordinal ordering of the projects determines proportional support? Then every pairwise ordering of scores needs to be stable, but there is no special cut line with particular sensitivity.
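Under that reading, the stability property to test is pairwise: does re-measuring ever swap the relative order of two projects? A minimal sketch with made-up scores from two hypothetical measurement runs:

```python
from itertools import combinations

def pairwise_inversions(scores_a, scores_b):
    """Count project pairs whose relative order differs between
    two score vectors (same projects at the same indices)."""
    inversions = 0
    for i, j in combinations(range(len(scores_a)), 2):
        # A negative product means the pair's order flipped between runs.
        if (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j]) < 0:
            inversions += 1
    return inversions

run1 = [0.91, 0.74, 0.73, 0.40]  # made-up scores, measurement run 1
run2 = [0.90, 0.72, 0.75, 0.41]  # run 2: the two middle projects swap
print(pairwise_inversions(run1, run2))  # 1
```

Close scores anywhere in the ranking can swap, so the design burden is spread across the whole range rather than concentrated at a single threshold.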
Or will the actual score be used to say how much more important one project is than another? Since the algorithm uses a log function, would I be right in assuming the score is a log scale of importance, analogous to the Richter scale of severity? That is, is a 0.8 ten times more critical than a 0.7, and ten times less critical than a 0.9?
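To make the Richter-scale reading concrete, here is a hypothetical sketch of that interpretation (not the project's actual definition): suppose each 0.1 of score corresponded to one decade of underlying "importance". Then equal score differences would mean equal multiplicative ratios.

```python
def importance_ratio(score_a: float, score_b: float,
                     decade: float = 0.1) -> float:
    """Multiplicative importance of a over b, under the assumed
    log scale where a score gap of `decade` is worth a factor of 10."""
    return 10 ** ((score_a - score_b) / decade)

print(importance_ratio(0.8, 0.7))  # ~10 under this reading
print(importance_ratio(0.9, 0.7))  # ~100
```

Whether that reading is intended is exactly the question: if the score is only ordinal, these ratios are meaningless, and consumers of the score need to know which interpretation applies.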

oliverchang (Contributor) commented

@calebbrown is planning some big fixes here to make this project more easily usable and the scoring more accurate, configurable and transparent.


j--- commented Mar 29, 2022

Thanks. Is there an issue or project I should watch to follow that progress? I'm not sure that transparency by itself will fix this design question, but it will help make sure we can ask it.
