What is this about?
How should projects be ranked on the platform? The question is important because ranking will influence visibility (and thereby how many donations a project receives), and because the first six projects on the start page in particular will give visitors an impression of what Giveth stands for. Ranking algorithms are always value judgments by a platform.
The rank of a project could also one day influence GIVBack distributions (which would be an extension of GIVpower). But that’s further down the road because it is more complicated and optional.
This post is meant to present a draft model and to kick off a discussion, not to present a full solution. It is really important for me to understand how people feel about certain metrics/factors that should go into this.
This post is also a preparation for the next Strategy Session of the Connect WG on the topic of impact on June 28 (wanna join?).
Both this post and the upcoming session are also meant to help structure some of the discussion we already had in a pretty wild forum thread back in Jan-March on verification, trust ratings, etc.
More on next steps below, at the end of this post.
Ranking Factors
Here is a possible model with 8 factors that could go into the calculation of a project rank on the platform. “Project rank” means the default overall ranking displayed when a visitor comes to giveth.io. Users can of course sort/filter by any of these factors separately. The ranking should also be used to determine the order of results when a filter is applied or when a user enters a search term.
Here are the factors:
Verification Status
That’s a fairly easy one, and it’s more of an implicit factor. Projects that are not verified do not even show up on the platform, and it should stay that way. There are still ideas about how to improve the verification criteria, but verification is essentially meant to confirm the “public good intention” of a project as a basis for GIVBack eligibility.
Popularity among donors/GIV holders
This is the GIVpower curation mechanism, and it should of course be a factor in the overall ranking of projects. GIVpower takes into account the number of tokens used to “vote” in favor of a project and the number of rounds the donor is willing to lock tokens for. GIVpower users can also delegate their voting power.
Number of donations
Despite the option to delegate voting power, GIVpower is mostly a plutocratic principle: if you hold more tokens, you have more voting power. That is consistent with Giveth aiming to be a donor-centric platform, but even donors will expect more than a reflection of their own peers’ preferences (which is not a proxy for project impact or quality). An additional factor could be the number of donations a project has received, which is roughly equivalent to its number of supporters, i.e. something like a public vote of confidence. This would ensure we also factor in the preferences of donors who, for whatever reason, decide not to participate in GIVpower. It also creates an incentive for projects to get more people to donate.
Likes from users
This is already implemented on the platform as an option for users to signal a preference for a project, and it should be given some weight. However, its signaling power is weaker than that of an actual donation. Users might distribute likes without ever donating to anything. It is also possible that users treat the “like” feature as a bookmarking tool to keep an overview of projects they want to check out for whatever reason. So it is not entirely clear whether a like is a “vote” in favor of a project.
Social Impact
This one is notoriously elusive, but we should at least try to factor it in. We will discuss this in more detail in the upcoming Strategy Session. One simple start could be to give a boost to projects that have really solid evidence of impact (through some kind of beneficiary survey or even a study). Some of the flagship projects we are trying to get on the platform (UBI projects or Extinction Rebellion, for example) can provide that and demonstrate best practice. This factor tends to favor bigger and more mature projects (which can afford to spend a little time and thought on this), but it should count for something nonetheless.
One of the challenges here is that we either invite projects to self-assess this (which invites manipulation) or Giveth needs to verify this in some form (which might not be scalable). To be discussed.
Activity
How active are projects? How much are they willing to engage with the public and/or donors? Similar to impact, this is not easy to operationalize (Is it updates on the platform? Twitter activity? Something else?) and to make it both manipulation-resistant and scalable. Ashley is working on this, and it is also part of the upcoming Strategy Session.
It would also be helpful to clarify how important this is for us and why. Is this about knowing that a project is still alive? About how well they communicate in general? Or about how much they engage with the Giveth Community and thereby qualify as candidates to become (with Giveth’s help) DAOs and microeconomies one day?
Freshness
Newcomers on the platform should have a chance to be visible. With only the criteria above, chances are slim that they will. Maybe we could make sure that one of the six projects displayed on the Giveth start page is always one that joined (or was verified) no more than two weeks before.
Team Pick/Expert Curation
From time to time the Giveth team (maybe supported by impact experts) could explicitly recommend a project, indicated by a label like “Project of the Month” or “Team Pick”. Something like this is happening already: the highlighted projects in the monthly newsletter are exactly that. But right now these do not show up in the rankings on the platform. Maybe, similar to “freshness”, one of the six projects on the start page could always be one of these.
Weighting
For the ranking, the factors could be given different weights in the ranking algorithm. For example:
- 20% GIVpower votes
- 10% Number of donations
- 10% Likes from users
- 40% Social Impact
- 20% Activity
In addition:
- non-visible if unverified
- one “fresh” in the first six projects
- one “pick” in the first six projects
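To make the proposal concrete, here is a minimal sketch of how such a weighted ranking could be computed. All field names, the normalization to [0, 1], and the weights themselves are illustrative assumptions for discussion, not an actual Giveth implementation:

```python
# Illustrative sketch of the weighted ranking proposed above.
# Field names, normalization, and weights are assumptions.

WEIGHTS = {
    "givpower": 0.20,   # GIVpower votes
    "donations": 0.10,  # number of donations
    "likes": 0.10,      # likes from users
    "impact": 0.40,     # social impact score
    "activity": 0.20,   # activity score
}

def rank_score(project: dict) -> float:
    """Weighted sum of factor scores, each normalized to [0, 1].

    Unverified projects score zero here; in practice they would be
    filtered out entirely and never shown.
    """
    if not project.get("verified", False):
        return 0.0
    return sum(w * project.get(factor, 0.0) for factor, w in WEIGHTS.items())

# Example: a verified project with strong impact evidence
p = {"verified": True, "givpower": 0.5, "donations": 0.3,
     "likes": 0.2, "impact": 0.9, "activity": 0.6}
print(round(rank_score(p), 3))  # → 0.63
```

Note how the 40% impact weight dominates: a project with solid impact evidence can outrank one with more GIVpower votes, which is exactly the value judgment the weighting encodes. The “fresh” and “pick” slots would then be applied on top of this score when composing the start page.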
What I think should not be factored in, by the way, is the amount already donated. Users can easily filter by that, but the ranking logic should not favor projects just because they have already raised a lot. Otherwise the same projects would always sit at the top of the list.
We could of course play with something like “trending projects” (most donations or highest donated amount in the last month/week/24h). This might create some excitement (just as “trending” does on Twitter), but I am not sure it sets the right incentives for donors.
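Purely for illustration, a “trending” list is cheap to compute from donation timestamps we already have. The data shape and window length below are my assumptions:

```python
from datetime import datetime, timedelta

def trending(donations, window_days=7, now=None):
    """Rank project ids by donation count within a recent window.

    `donations` is assumed to be an iterable of (project_id, timestamp)
    pairs -- an illustrative shape, not the actual Giveth schema.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    counts = {}
    for project_id, timestamp in donations:
        if timestamp >= cutoff:
            counts[project_id] = counts.get(project_id, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

# Example: project "b" got two recent donations, "c" only an old one
donations = [
    ("a", datetime(2022, 5, 30)),
    ("b", datetime(2022, 5, 31)),
    ("b", datetime(2022, 5, 29)),
    ("c", datetime(2022, 4, 1)),
]
print(trending(donations, window_days=7, now=datetime(2022, 6, 1)))
# → ['b', 'a']
```

The incentive concern stands regardless of implementation: a rolling window rewards short donation bursts, which is easy to game with many small donations unless a minimum donation size or similar safeguard is added.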
Donor needs, impact needs, decentralization vs. centralization
From a bird’s eye view, a ranking should express Giveth’s loyalty to two things:
- What donors/token holders like and want: ability to trust the platform, finding exciting projects, finding emotionally gripping projects, expressing preferences for projects, using the GIV tokens in the economy, co-directing the path Giveth is on
- What creates actual social impact: channeling as much funding as possible to projects which most efficiently and effectively provide public goods and which actually change the life of people out there
If we are loyal only to the first, we end up with projects that sound sexy and are very good at communicating. That might work for a while, but without some minimum standard of impact validation, Giveth will eventually compromise its credibility.
Some of the right balance between these two can be achieved through decentralized mechanisms, and we should use these as much as possible: for philosophical reasons, but also because Giveth itself cannot handle much complexity (i.e. workload) with the small staff resources it has. Donor preference (GIVpower), number of donations, likes, “freshness”: these can all be handled in a decentralized manner (or can already be calculated from the data we have).
But we also need to make sure that we consider factors that cannot easily be decentralized or just calculated right now. These might have to be taken care of by the Giveth team for a while still:
- Verification is an example (although GIVpower might replace it one day, not sure if this is possible)
- Social Impact is an example (although impact measurement and validation might over time, at least partially, be decentralized through projects like ixo and others)
- And I am not sure about assessing the activity of a project. There are countable elements here, but also aspects that might still require human interpretation
Next Steps
Please comment: What resonates with you? Where do you see flaws? Where do your values differ on this topic?
I am planning to collect feedback for two weeks or so, then see how the discussion goes in the Strategy Session on impact on June 28.
After that I would propose an updated version of this and see where you all stand on potentially implementing it.