Concrete Project Ranking Model

Since summer we have discussed potential metrics for ranking projects. In this post I would like to suggest a concrete model for how to combine them.

Components of the model

The idea would be to combine four different sets of metrics that then add up to a score that determines the rank. Here is a link to an overview of the quantitative model in a Google sheet.
The maximum score for the ranking is 200. The project with the score of 200 has the top rank. Here are the four sets of metrics:

  1. Platform Data (max 25 points)
    Data we can extract directly from profiles or donation data: number of Donations, activity, amount recently donated, number of hearts

  2. GIVholder Opinion (max 25 points)
    A normalized score for the GIVPower Rank

  3. Maker Opinion (max 25 points)
    A normalized score for the Budget Box Rank

  4. Impact Evaluator Opinion (max 25 points)
    A score for the social impact (potential) of the project. This is a half automatic, half human assessment based on the three possible kinds of data: awards/certificates OR evidence (studies, etc.) OR a strong theory of change. Details on this can be found in this Google Doc (Social Impact Metrics for the Giveth Ranking).

In addition:

  • Verification status adds another 100 points, so a verified project will always be ranked higher than a non-verified project.
  • If two projects have the same score, the one with the most recent project update is ranked higher.
  • We could hard-code that there are always a number of brand new projects (or “projects of the month”, etc.) visible high up in the ranking (or next to the ranked projects).
  • Users could have the option to rank using only a subset of these metrics.

This mixture of metrics should ideally ensure a good balance between popularity and expert opinion, between objective data and subjective assessment, and between automatic data and data that requires real eyes from the team, and it should limit the influence of metrics that can be more easily gamed.
The scores (=weights) of the different metrics could be adapted over time if the current distribution turns out to be off.
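The model above can be sketched in a few lines. This is a rough illustration only: the function and field names are my own assumptions, and the authoritative weights live in the linked Google Sheet.

```python
# Sketch of the proposed composite score (hypothetical names; the
# authoritative weights live in the Google Sheet linked above).

def composite_score(platform, givholder, maker, impact, verified):
    """Each component is assumed to be pre-normalized to the 0..1 range."""
    weights = {"platform": 25, "givholder": 25, "maker": 25, "impact": 25}
    score = (
        weights["platform"] * platform
        + weights["givholder"] * givholder
        + weights["maker"] * maker
        + weights["impact"] * impact
    )
    if verified:
        score += 100  # verification bonus: verified projects outrank non-verified ones
    return score

def rank_projects(projects):
    # Tiebreaker: among equal scores, the most recently updated project wins.
    return sorted(projects, key=lambda p: (p["score"], p["last_update"]), reverse=True)
```

The sort key is a tuple, so projects are ordered by score first and update recency second, matching the tiebreaker rule above.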

Next Steps

Ashley has already talked to Carlos to get feedback on the general availability of the internal data sources. We have a general green light here but should of course assess in detail.

The next step should be feedback from you on this post, and then some kind of vote.

Potential following steps are then:

  1. Checking availability of external data sources (impact certificates, etc.).
  2. Building a template for integrating this into the verification process.
  3. Designing the technical specs for the ranking algorithm.
  4. Translating into code.
  5. Training Impact Assessors

Let Ashley and me know what you think!

4 Likes

I’m sure I’m confused, so apologies if this question was already answered back in the summer, when I was just starting with Giveth. But now that GIVpower is released, and so many hours were and are being spent on it, what are the additional benefits or end goals of this additional project ranking model?

I see the 4 categories, and 3 of them are separate factors from GIVpower. I really do like the Impact Evaluator Opinion specifically, but I’m just trying to understand the full scope of why. Is it just for more internal analytics gathering and use?

Thanks in advance for breaking it down for me. :slight_smile:

Hi Jake,
GIVpower is great, but it adds a certain bias to ranking:

  1. It’s plutocratic (money talks). People with more GIV have more ranking power (and ranking power leads to more GIV). These will mostly be donors or (at this point) contributors.
  2. It’s a popularity vote, not an expert vote. Actual social impact of a project is not factored in. GIV holders have no special expertise around social impact, it’s mostly pure confidence or personal sympathy.

To answer your question: the benefit of this ranking model is to strengthen other voices beyond donors that are needed for a balanced ranking, especially projects/makers and experts, and also to factor in some of the data from the profile itself (e.g. update activity). The previous forum post linked right at the beginning also goes into the background of why more than GIVpower is needed.

6 Likes

Thanks for breaking it down quickly for me @rainer, as I was still not crystal clear after reviewing the previous forum post. Now it makes perfect sense to me, and I like what the end goals / benefits are!

But who will make up the evaluation panel / Impact Assessors for this? What groups of people? I assume some internal Giveth contributors, some external specialists, and some project owners and donors, or would those be considered too partial?

3 Likes

@rainer.hoell What about having different filter views or ranking pages for different criteria? For example, the GIVpower ranking could be the default, then maybe there is a social impact ranking, category ranking, popularity ranking, etc., each with their own page. I think that might be easier than trying to find the perfect ranking system (since donors often use the filters and search options anyway to find what’s important to them).

4 Likes

Praise you @rainer.hoell for starting this conversation again and suggesting this very well-thought-out proposal. I really like it in general, but I have a few comments.

@aabugosh I like the idea of the different sorting/ranking options. But I also see the value of a default option (the one many people will stick to) that balances different categories.

I see a lot of value in having makers’ opinions and Impact Evaluators. I think that giving governance to impact evaluators will have HUGE second-order and network effects, including expanding our stakeholders to reputable organizations & individuals working in the space without giving out GIV tokens. Following this line of thought, I would argue that we could add to the quantitative model spreadsheet a category for also using Budget Box (AKA Pairwise) for reputable organizations & individuals in the impact space. It would also be simpler to implement at first.

I think in general the 25 points/category makes sense as an ultimate goal. However, I suggest using a ramp or quarterly triggers to reach the final numbers, since it will take time to build a diversified pool of Impact Evaluators, and we just launched GIVpower, which is currently one of the main utilities of GIV. For example:

2023 Q1: GIVpower 100% (other solutions will still be in development anyway)
2023 Q2: GIVpower 70%, Platform data 10%, Maker opinion 10%, Impact evaluators 10% (if any category is not yet ready, its weight could be absorbed by the other categories)
2023 Q3: GIVpower 40%, Platform data 20%, Maker opinion 20%, Impact evaluators 20% (same fallback as above)
2023 Q4: Each category 25% (or points)
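The ramp above could be expressed as a simple weight schedule. In this sketch the quarter labels, category keys, and the even-redistribution rule for not-yet-ready categories are all illustrative assumptions.

```python
# Sketch of the suggested quarterly ramp. Quarter labels, category keys,
# and the equal-redistribution rule are illustrative assumptions.

WEIGHT_RAMP = {
    "2023-Q1": {"givpower": 100, "platform": 0, "maker": 0, "impact": 0},
    "2023-Q2": {"givpower": 70, "platform": 10, "maker": 10, "impact": 10},
    "2023-Q3": {"givpower": 40, "platform": 20, "maker": 20, "impact": 20},
    "2023-Q4": {"givpower": 25, "platform": 25, "maker": 25, "impact": 25},
}

def weights_for(quarter, ready=("givpower", "platform", "maker", "impact")):
    """Weight of any not-yet-ready category is absorbed evenly by the ready ones."""
    base = WEIGHT_RAMP[quarter]
    absorbed = sum(w for cat, w in base.items() if cat not in ready)
    active = [cat for cat in base if cat in ready]
    return {cat: base[cat] + absorbed / len(active) for cat in active}
```

For example, if Platform data is not ready in Q2, its 10% is split evenly across the three remaining categories, so the total always stays at 100%.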

I also think it’s great that unverified projects can get a rank anyway, even if in principle it is lower than for verified projects. My gut feeling is that it doesn’t need to be 100 points. GIVpower boosting is also exclusive to verified projects; maybe it could be more like 50 points.

1 Like

So happy to see this up in the forum and it has been fun collaborating and brainstorming!

Things that I feel are worth mentioning or asking so that we can consider them in this model.

  • Who gets to participate in Pairwise (Budget Box)? Just verified projects? Can we allow everyone to participate, but give extra weight to those users with projects, verified projects, expertise, etc.? We could use non-transferable tokens… although we wouldn’t be able to revoke the token, so once we issue it, they would have access forever. Maybe we should consider giving people who have a history in the non-profit charity space extra weight as well… hmmm.

  • I want to point to Mitch’s comment in the forum post about mandatory updates. He mentions ways that we can take updates into consideration when determining rank.

Going further, if we choose to incorporate project updates into a project ranking system we could say that any updates above the downvote threshold are disqualified, meaning the project is ranked as if that update never existed. We could also allow project mods to manually disqualify updates as well.

To start, it would be included in the verification process. Projects will be incentivized to offer this information because it means their project will rank higher. A lot of organizations already have this information; it’s just a matter of sharing it with us. We are also taking into consideration that smaller organizations may not have this information, and offer them partial points for at least submitting a well-thought-out theory of change. There is a document linked in the original post that outlines the idea for impact assessment with little overhead. Do you have any ideas on how we can bring more diversity to the team of Impact Assessors, taking into consideration that we have a small team and not a ton of internal resources for it? Maybe another blockchain-for-good or impact evaluation DAO that would want to collaborate somehow?

I assume we will have filters for each… you could filter or sort by GIVpower rank, pairwise score, impact score… and the default would take into consideration all of these things for a well-rounded score.

5 Likes

I would also love to consider how we can take categories into consideration on our projects page. It would be so cool to see something similar to Netflix, where our projects are shown in rows by category and you can scroll through them horizontally… this would also play into the ranking, as you would see the highest-ranked projects for each category rather than the top ones of the whole platform.

6 Likes

Homepage Redesign

Following the GIVernance call today I would invite anyone who would like to see changes in the homepage design to follow and comment on this thread - Improving the Homepage

Thoughts

I like the idea of updating our default “quality score” metrics to something more robust. Changing the project ranking, however, looks to be very complicated: we only just got it working a few weeks ago with the launch of GIVpower, and we do not yet know how this plays out over time, or what the pros and cons of the current default ranking model truly are.

Budget Box (Pairwise) will definitely be a cool addition, but there’s currently no idea of which user group will be included to participate in this product, or how to go about getting them tokens to participate.

#4 - Impact Evaluator Opinion - seems very interesting. I can say from talks with other web3 fundraising platforms such as Gitcoin and Inverter that there is a lot of interest in creating a model of “proof-of-projecthood” badges or attestations; we’re currently working on some sort of project registry in this regard for the implementation of GIVfi.

For this point as well, we would want to avoid any lengthy and recurring manual processes for the verification team to go through, or else this will create scaling challenges later on down the road. The devil is in the details on this one: it looks good on paper, but I’m skeptical of how this would work practically.


A short rant on the “Giveth plutocracy”

I tend to disagree with this general panic over GIVpower/Project Ranking being plutocratic. I think this is the system we sought to build from day 1, functioning normally, and now over the last month it has seemingly turned into a moral crisis. We decided collectively to build a donor-driven donation platform and to reward and empower those who give.

This is how the system is supposed to work - you make a donation, you get GIV and then you get to do a bunch of cool stuff within Giveth with your tokens.

The “plutocracy” is the people funding projects - they are the ones donating and fuelling the purpose of Giveth. Does it seem like we shouldn’t be giving them power?

If this doesn’t seem right then we should revisit what our goals are and what our mission is, but to me it looks like we’re building the system that we promised to deliver.

3 Likes

In general I think we need to just be more specific in what is being proposed.

The current Rank affects 3 things:

  1. Default Sort
  2. Rank Number on the Page
  3. GIVbacks %

Right now all 3 of those things are the same… and they were planned on being the same in the design of the project pages.

I think it is easy to make #1, the default sort, be influenced by 25/25/25/25.

I think it is really hard for #3, the GIVbacks reward, to be influenced by 25/25/25/25, as it very much impacts the token holders.

#2, the Rank number, is very connected to #3 and GIVpower right now… so I would lean toward keeping it connected to that until we make a strong case that the 25/25/25/25 sort is better than the GIVpower sort… mostly just out of practicality… it’s a lot of UX work to find a replacement feedback mechanism for the value that rank and projected rank provide to GIVpower.

So if the only interest is in the default sort… this is an easy thing to experiment with and prove that it’s better.

1 Like

My thoughts on the 4 sets of metrics for the 25/25/25/25 idea

1. Platform Data

Platform data was a mess; it needs to be played with and tweaked a lot before I would want to integrate it into the system… The old sort was soooo bad, and we were only using platform data. That said, if we play with it, I’m sure we could get some good data here.

2. GIVholder Opinion

In general, GIVpower was very well thought out, I think we have an excellent way of getting token holder sentiment.

3. Maker Opinion

Pairwise (formerly Budget Box) will be really cool for allowing projects to have input… but we will need to make sure we have a good number of projects participating before we take the score too seriously… how many project owners are enough, do you think? 20? 40? Either way, I think this will be really cool and engaging, and hopefully it will encourage projects to fundraise on our platform.

4. Impact Evaluator Opinion

I have a lot of faith that the Impact Assessment from our verification team could be easy to integrate and could provide valuable, clean data for our system… but it might not be fair to call it “Impact Assessment” until they have practiced it for a while, really figured it out, and caught up with the backlog of projects. It seems like this might be broken up into 2 pieces: our verification team’s subjective score… and maybe bonus points for those projects that are ranked highly by other assessors (GiveWell / CharityWatch / etc.). We should cater to the donors that we have… which is the crypto audience. Impact is always subjective… but crypto donors will probably find the impact of Coin Center and the EFF to be a lot higher than other impact assessments would, and we should keep that in mind, since this is the main audience that will use this sort to find projects.

I want to add a 5th group:

5. Donors

We can simply use Pairwise for this… anyone who has donated to a verified project on Giveth - we have their address and their donation amounts… They should be able to rank projects subjectively beyond just their donations… also, inviting them to rank projects may bring in more donations.

Is 25/25/25/25 the right weighting…

Honestly, it seems like the token holders should decide on the weighting. It would be interesting to see how people would vote to adjust it, if we gave them an easy way to do it.

Path towards executing these things.

Slowly. I think we can play with the default sort a lot, and make these things available as individual filters, and as they prove themselves to work… we can integrate them into the Ranking, first by Snapshot vote, then much later maybe by building a GIVpower voting mechanism. The long-term plan is likely to change, I think, but working towards getting this ranking data seems worthwhile.

3 Likes

Interesting read.
I agree with @mitch that the plutocracy here is given to individuals with a for-good trajectory. Also, having that weight on GIV, creates a buying pressure, which is a common benefit for all of us who receive GIV from donors.

At the same time, I see the point of @rainer.hoell and @WhyldWanderer about balancing the ranking.
Having additional metrics is a promising idea: it could incentivize use of the platform in general by increasing the ranking of projects that keep their pages updated, provide evidence of the things done, and get the curated opinions of others (which should be tied to a reputation).

I even find that these kinds of metrics would improve the impact-making of the projects. For example, if a project needs to understand its theory of change in order to put it on its campaign page, then the project will have a greater chance of achieving its goal.

However, the interface is key to getting the attention of donors, and having many things on the campaign page might increase the chance of losing their attention.
I would suggest having sub-tabs on each project’s page (campaign page). For example:
About, Budget, Theory of Change [following a template maybe], Impact Metrics, Team, Socials.

Also, DM me through Discord if you need focus group volunteers. I am happy to help in the continuous improvement of Giveth.
Finally, thank you all who have supported Urbánika with your philanthropic-plutocratic power (:rofl:). Long live the rich GIVers!

Jokes aside, blessings to all of you who work on making the GIV economy. I wish you a year full of love, joy, wisdom, good health, prosperity, and memorable fun experiences.

8 Likes

What a legend. Thx @HBesso31

2 Likes

Rainer and I have been collaborating on this for a while with the guidance of Griff… which is part of the reason we started looking at using Pairwise (Budget Box) to begin with: to balance out GIVpower. We have taken a lot of steps to consider the complications and complexity of implementation and maintenance, and have chosen methods that take little to no overhead, assuming we were going to use Pairwise either way. The suggested additions to the verification process for impact measurement are little to no extra work for the team, and are things that we are already asking for (theory of change, etc.). The main decisions would be around Pairwise and how it could work to serve as a great balance to GIVpower, as well as the weighting and points system. I don’t think it is a question as to whether or not this is a valuable initiative. What we really need is feedback around the parameters: the weights and points that certain data is worth… what does the community think?

I don’t think it is fair to token-gate our public forum - I think anyone should be allowed to leave feedback about what they think the weights should be. Eventually this will go to a DAO vote where token holders will vote to pass it or not - so the token holders do get to decide the weights, right?

History of this Initiative

This is something that has been a hot topic for a while now… There have been numerous discussions and strategy sessions formed to contemplate and strategize around these topics. There was a whole Connect working group formed around how to meet the needs of projects and how to make Giveth less plutocratic… and if Giveth is really about revolutionizing philanthropy, we should really take these things into consideration. Do we want to keep building the same system that is already not working?… where those with the money have the power and make the rules?

This post was created for ideation and brainstorming… no big decisions are required… Let’s start with something to work with and adjust it as it evolves… once we can get a visual of how it functions as a whole.

Here are a few resources where relevant discussions have happened around this topic in the past… it’s really not a new topic.
Forum post on Revising Quality Score
Forum post on Project Ranking Metrics
Forum post on GIVmatching
Forum post on Rewarding Projects
Strategy Session on Impact
Work done in the Connect WG

1 Like

Turned into a moral crisis over the last month? As you can see above, it’s not a new topic… the plutocracy is just more obvious now that GIVpower is live.

Look at the projects page… how did those projects get to the top? They are owned by someone who contributes or has contributed to Giveth, they are friends with someone who works for Giveth, they got a FAT GIVdrop, or they have somehow captured the support of a Giveth contributor. There is no chance for the little guys to even compete with that…

You are asking project owners to withhold funding from their missions, from making real impact, so that they can have a minuscule amount of GIVpower or voting power in the Gardens compared to your giant pile of tokens, which you get to use to drive the platform and its development… or make decisions like who wins the popularity contest. And it’s not like the majority of your voting rights or GIVpower came from donating either… it’s great that you contribute to building the platform and you get tokens for that… and you got a GIVdrop - that’s cool - but who are we building this platform for? Ourselves? We are kind of in an echo chamber. Shouldn’t we be incorporating the advice of experts and those on the ground doing the work?

“We intended for the plutocracy to be those who are funding projects” as you say.
Look at the donations (we get to see each donation made when we review GIVbacks data)… who is donating each round? It’s Giveth contributors and close friends of Giveth contributors… or highly curated large donations that are hand-held from point A to point B… Maybe it’s difficult for you to see the plutocracy because you have a large portion of the GIV, and as a token holder, of course you like the GIVpower ranking, because it means that you get to put all of your favorite projects at the top…

I think we can do better… How do we use the power of this awesome tool and economy that we are building to flip the script on the charity and non-profit sector? Let’s not rebuild the same broken system… shouldn’t we incorporate a mechanism so that the poor and many have an advantage over the rich and the few?

I’m not saying that we shouldn’t still be a donor-driven platform… or that we should stop rewarding GIVers… or that we should change our mission. I just want to take a look at the big picture. We have had multiple people from the non-profit sector point to this aspect of Giveth… it is not appealing to projects because it doesn’t change the power dynamic. They still struggle with donors having all of the power and telling them what to do - when they really are the ones who know what’s best to make the biggest impact in their field. They deserve to be a part of Building the Future of Giving… and if our goal isn’t actually to revolutionize philanthropy and fund the projects that are making a difference in the world (public goods), and is instead to play games with tokens, then I’m not sure what I am doing here.

Some interesting Data from the last round :point_down:

Round 26 Data

239 Total Eligible Donations - $3514.71

  • 127 Giveth Contributors past/present - $1819.62
  • 29 Anonymous (could be contributors) - $858.61
  • 16 GIVdrop Recipients that aren’t contributors - $250.62
  • 67 Donations made by non-Giveth contributors that didn’t receive a GIVdrop - $585.86
5 Likes

This discussion is getting a little complex, so I’d love to just simplify things a little with an explanation of the rank: what it is, what it means… and what I think can/should be played with.

GIVpower background

GIVpower was (in part) created as a way to decentralize GIVbacks issuance (check out this interesting but old spec). Prior to GIVpower, donations to any verified project were rewarded with up to 75% back in GIV, streamed over time. Now, with GIVpower, when you donate to the project with the most GIVpower you can get up to 80% back… whereas donating to a project with no GIVpower yields only up to 50% back.
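To illustrate that 50%-80% range, here is one way the relationship could be sketched. The linear interpolation below is purely an assumption for illustration; the real GIVbacks mechanism may use discrete rank brackets instead.

```python
# Illustrative only: a linear interpolation between the 50% floor (no
# GIVpower) and the 80% ceiling (top-boosted project). The actual
# GIVbacks mechanism may use discrete rank brackets instead.

def givbacks_percent(rank, total_projects):
    """rank 1 = most GIVpower; the lowest rank gets the floor."""
    if total_projects <= 1:
        return 0.80
    fraction = (total_projects - rank) / (total_projects - 1)  # 1.0 at the top
    return 0.50 + 0.30 * fraction
```

Under this assumption, the top-ranked project earns donors the 80% ceiling and every step down the ranking slides smoothly toward the 50% floor.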

This is fantastic because essentially GIV holders are able to use their GIV to control the way GIV is being used as incentives for donations.

GIV holders get to decide what GIV goes out, and to whom - thereby using GIV as a reward for people who donate to the public goods projects they believe in.

Where the rank started

The “rank” - from my perspective - came about as an answer to the question “How do we show which project is the best to donate to? How do we show which project was boosted the most with GIVpower?”

The decision to sort the projects according to the “rank” by default was an extension of this - show the projects that yield the most GIVbacks first and therefore attract more donors to them, so that GIVpower essentially is the means for our community to curate Giveth projects in a decentralized way.

The fact that donors get GIVbacks, and therefore big donors become big GIV holders, is aligned with the concept of Giveth being “by the donors, for donors” - which is something we set out to do at the launch of the GIVeconomy.

What the rank actually IS

The concept of “rank” in this forum post now seems to have been a bit lost so I want to bring it back. It is not some arbitrary measure of which projects are the “best”, but rather…

IMO:

  • We should not add new metrics into the GIVbacks % - this should be controlled by GIV holders via GIVpower (as it was designed) so that they can influence GIV issuance and use the GIVeconomy to express their values via public goods projects they believe in.

  • We should not actually change the “Rank Number” on the page until we consider this from a design perspective… Right now GIVpower is built with a “projected rank” & a “current rank”… and you impact the rank by boosting… it’s a game & it took a considerable amount of time to build & implement… and it was just launched.

What I think we should continue to discuss here

We can definitely improve the default sorting of projects on the projects page, and can play with adding in new metrics as per this forum post.

I think we should add in new sort functions, based on the conclusions of this forum discussion, like:

  • sort by impact
  • project owner favourites
  • GIVpower
  • number of donations

etc.

And I think it’s worthwhile playing with a revised default sort that is some combination of these things (whether 25/25/25/25, or otherwise).

TL;DR

The rank is not an arbitrary measure of which project is the “best”. It is a necessary component of the gamified UX of GIVpower project curation, took considerable time to build & implement, and right now, is also an indicator of which projects yield the most GIVbacks. Imo… practical next steps are

  1. Improve our design to show explicitly which projects yield the most GIVbacks
  2. Add in new options for “sort by” - based on metrics like impact, updates, donations, etc
  3. Continue the discussion here under the context that we are deciding how to improve the default sort of Giveth projects on the projects page.

After we have tested & improved the default sort, we can revisit the idea of changing the rank number, working closely with the design team… but this should be much later, after we have concrete evidence that it works well.

1 Like

From @WhyldWanderer on governance call:

The only thing this points system will initially impact is the default sort on the homepage.

1 Like

From the governance call: I would like to explore the possibility of displaying the quality score breakdown on the Projects page - mainly the GIVholder, Maker, and Impact Evaluator Opinion data.

Thanks @Griff and @karmaticacid for clarifying the different meanings of “rank”. I agree that there are at least two, and they answer different questions that potential donors could have:

  1. Display/Quality rank, answering the question: Where can I maximize my social impact?

  2. GIVPower rank, answering the question: Which project has the strongest recommendation from GIVholders/donors? And (at least right now): Where can I maximize my personal reward for a donation?

This post is about the first one, determining what users see when they come to the Giveth page. I was not aware that most people’s default understanding of “rank” is the second one. That one would not be affected. Maybe we want to do that in the future, but not right now.

@WhyldWanderer has written everything I could have written (and lots more) about the limitations of GIVpower as a quality/impact indicator. It’s not bad, it’s just incomplete.

Thanks for all the other great suggestions:

  • Yes, ideally users should absolutely be able to filter according to all the criteria separately if they want to do so.
  • Yes, we should look at @Cotabe’s idea to increase the weight of additional criteria over time.
  • Yes, it would be great if we could display the different scores on the projects page.

I would be hesitant to add a fifth group, “Donors”. Given that GIVpower is mostly the donors’ voice already, this would essentially double the weight of donors.

Next step: @WhyldWanderer and I will pull some data from the platform to compile a sample dataset, and then do some ranking simulations in a spreadsheet.

Please keep adding your comments, they are really insightful!

1 Like