Since summer we have discussed potential metrics for ranking projects. In this post I would like to suggest a concrete model for combining them.
Components of the model
The idea would be to combine four different sets of metrics that then add up to a score that determines the rank. Here is a link to an overview of the quantitative model in a Google sheet.
The maximum score for the ranking is 200; the project with the highest score holds the top rank. Here are the four sets of metrics:
- Platform Data (max 25 points): data we can extract directly from profiles or donation data (number of donations, activity, amount recently donated, number of hearts).
- GIVholder Opinion (max 25 points): a normalized score for the GIVPower Rank (see the normalization sketch after this list).
- Maker Opinion (max 25 points): a normalized score for the Budget Box Rank.
- Impact Evaluator Opinion (max 25 points): a score for the (potential) social impact of the project. This is a half-automatic, half-human assessment based on three possible kinds of data: awards/certificates OR evidence (studies, etc.) OR a strong theory of change. Details on this can be found in this Google Doc (Social Impact Metrics for the Giveth Ranking).
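How a rank (the GIVPower Rank or the Budget Box Rank) gets mapped onto a 0–25 score is not fixed by this proposal. As a minimal sketch of one option, assuming a simple linear mapping where rank 1 earns the full 25 points and the last rank earns 0, it could look like this (function and parameter names are illustrative only):

```ts
// One possible linear normalization: rank 1 → 25 points, last rank → 0 points.
// maxPoints and totalProjects are parameters; this mapping is only an illustration.
function normalizeRank(rank: number, totalProjects: number, maxPoints = 25): number {
  if (totalProjects <= 1) return maxPoints; // a single project gets full points
  const position = (totalProjects - rank) / (totalProjects - 1); // 1 for rank 1, 0 for last rank
  return position * maxPoints;
}

// Example: in a list of 50 projects, rank 10 maps to roughly 20.4 points.
console.log(normalizeRank(10, 50)); // ≈ 20.41
```

Other curves (e.g. giving the top few ranks disproportionately more points) would work just as well; the only requirement is that the result stays within the 25-point budget of that component.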
In addition:
- Verification status adds another 100 points, so a verified project will always be ranked higher than a non-verified project.
- If two projects have the same score, the one with the most recent project update is ranked higher.
- We could hard-code that a number of brand-new projects (or “projects of the month”, etc.) are always visible high up in the ranking (or next to the ranked projects).
- Users could have the option to rank using only a subset of these metrics.
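To make the arithmetic concrete, here is a minimal sketch of how the combined score and the tie-break could work. The `ProjectMetrics` shape and its field names are assumptions for illustration, not part of any existing Giveth codebase:

```ts
// Hypothetical shape of the per-project inputs; field names are assumptions.
interface ProjectMetrics {
  name: string;
  platformData: number;      // 0–25, from profile/donation data
  givholderOpinion: number;  // 0–25, normalized GIVPower Rank
  makerOpinion: number;      // 0–25, normalized Budget Box Rank
  impactEvaluator: number;   // 0–25, social impact assessment
  verified: boolean;         // verification adds 100 points
  lastUpdate: Date;          // tie-breaker: most recent update wins
}

// Combined score: four 25-point components plus the 100-point verification bonus (max 200).
function totalScore(p: ProjectMetrics): number {
  const base =
    p.platformData + p.givholderOpinion + p.makerOpinion + p.impactEvaluator;
  return base + (p.verified ? 100 : 0);
}

// Rank projects: higher score first; on a tie, the more recently updated project first.
function rankProjects(projects: ProjectMetrics[]): ProjectMetrics[] {
  return [...projects].sort((a, b) => {
    const diff = totalScore(b) - totalScore(a);
    if (diff !== 0) return diff;
    return b.lastUpdate.getTime() - a.lastUpdate.getTime();
  });
}
```

Because the verification bonus (100) is larger than the sum of all other components (4 × 25 = 100 at most), a verified project can never be outranked by a non-verified one, which is exactly the guarantee stated above.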
This mixture of metrics should ideally strike a good balance between popularity and expert opinion, between objective data and subjective assessment, and between automatic data and data that requires human review by the team, and it should limit the influence of metrics that can be easily gamed.
The maximum scores (i.e. the weights) of the different metrics could be adjusted over time if the current distribution turns out to be off.
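If the weights do need tuning later, one way to keep that cheap is to pull them into a single configuration object rather than hard-coding them throughout the scoring logic. The names below are illustrative only:

```ts
// Illustrative weight configuration; the values mirror the current proposal and could be tuned later.
const rankingWeights = {
  platformData: 25,
  givholderOpinion: 25,
  makerOpinion: 25,
  impactEvaluator: 25,
  verificationBonus: 100,
} as const;

// Rebalancing (e.g. giving Impact Evaluator Opinion more weight) then means
// changing this object, not the scoring code itself.
```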
Next Steps
Ashley has already talked to Carlos to get feedback on the general availability of the internal data sources. We have a general green light here, but should of course assess the details. The next step should be feedback from you on this post, followed by some kind of vote.
Potential following steps are then:
- Checking availability of external data sources (impact certificates, etc.).
- Building a template for integrating this into the verification process.
- Designing the technical specs for the ranking algorithm.
- Translating into code.
- Training Impact Assessors.
Let Ashley and me know what you think!