Project Ranking Metrics

What is this about?

How should projects be ranked on the platform? The question is important because it will influence visibility (and thereby how many donations a project receives) and because especially the first six projects on the start page will give visitors an impression of what Giveth stands for. Ranking algorithms are always value judgments of a platform.

The rank of a project could also one day influence GIVbacks distributions (which would be an extension of GIVpower). But that is further down the road because it is more complicated and optional.

This post is meant to present a draft model and to kick off a discussion, not to present a full solution. It is really important for me to understand how people feel about certain metrics/factors that should go into this.

This post is also a preparation for the next Strategy Session of the Connect WG on the topic of impact on June 28 (wanna join?).

Both this post and the upcoming session are also meant to help structure some of the discussion we already had in a pretty wild forum thread back in Jan-March on verification, trust ratings, etc.

More on next steps below, at the end of this post.

Ranking Factors

Here is a possible model with 8 factors that could go into the calculation of a project rank on the platform. “Project rank” means the default overall ranking displayed when a visitor comes to giveth.io. Users can of course sort/filter by any of these factors separately. The ranking should also be used to determine the order of results when a filter is used or when a user enters a search term.

Here are the factors:

  1. Verification Status
    That’s a fairly easy one, and it’s more of an implicit factor. Projects that are not verified do not even show up on the platform, and it should stay that way. There are still ideas about how to potentially improve the verification criteria, but verification is essentially meant to confirm the “public good intention” of a project as a basis for GIVbacks eligibility.

  2. Popularity among donors/GIV holders
    This is the GIVpower curation mechanism, and it should of course be a factor in the overall ranking of projects. GIVpower takes into account the number of tokens used to “vote” in favor of a project and the number of rounds for which the donor is willing to lock tokens (a rough sketch of this calculation follows after this list). GIVpower users can also delegate their voting power.

  3. Number of donations
    Despite the option to delegate voting power, GIVpower is mostly a plutocratic principle: if you hold more tokens, you have more voting power. That is consistent with Giveth aiming to be a donor-centric platform, but even donors themselves will expect more than just seeing a reflection of their own peers’ preferences (which is not a proxy for project impact or quality). One more factor that could be included is the number of donations a project has received, which is roughly equivalent to the number of supporters it has, i.e. something like a public vote of confidence. This would make sure we also factor in the preferences of donors who, for whatever reason, decide not to participate in GIVpower. It also creates an incentive for projects to get more people to donate.

  4. Likes from users
    This is already implemented on the platform as an option for users to signal a preference for a project. It should be given some weight. However, its signaling power is weaker than that of an actual donation: users might distribute likes without ever donating to anything. Also, it’s possible that users use the “like” feature basically as a bookmark to keep an overview of projects they want to check out for whatever reason. So it’s not entirely clear if this is a “vote” in favor of a project.

  5. Social Impact
    This one is notoriously elusive, but we should at least try to factor it in. We will discuss this in more detail in the upcoming Strategy Session. One simple start could be to give a boost to projects that have really solid evidence of impact (through some kind of beneficiary survey or even a study). Some of the flagship projects we are trying to get on the platform (UBI projects or Extinction Rebellion, for example) can provide that and demonstrate best practice. This factor tends to favor bigger and more mature projects (which can afford to spend a little time and thought on this), but it should count for something nonetheless.
    One of the challenges here is that we either invite projects to self-assess this (which invites manipulation) or Giveth needs to verify this in some form (which might not be scalable). To be discussed.

  6. Activity
    How active are projects? How much are they willing to engage with the public and/or donors? Similar to impact, this is not easy to operationalize (Is it updates on the platform? Twitter activity? Something else?), and it is hard to make both manipulation-resistant and scalable. Ashley is working on this, and it is also part of the upcoming Strategy Session.
    It would also be helpful to clarify how important this is for us and why. Is this about knowing that a project is still alive? Or about how well they communicate in general? Or about how much they engage with the Giveth Community and thereby qualify as candidates to become (with Giveth’s help) DAOs and microeconomies one day?

  7. Freshness
    Newcomers on the platform should have a chance to be visible. With only the criteria above, chances are slim that they will. Maybe we could make sure that one of the six projects displayed on the Giveth start page is one that has joined (or been verified) not more than two weeks before.

  8. Team Pick/Expert Curation
    From time to time the Giveth team (maybe supported by impact experts) could explicitly recommend a project, maybe indicated by a label like “Project of the Month” or “Team Pick”. Something like this is happening already: The highlighted projects in the monthly newsletter are exactly that. But right now these do not show up in the rankings on the platform. Maybe, similar to “freshness”, one of the six projects on the start page could always be one of these.
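As a side note on factor 2, here is a minimal sketch of how a donor’s GIVpower could be derived from staked tokens and lock duration. The square-root multiplier is purely an illustrative assumption; the actual curve is whatever the GIVpower spec defines.

```typescript
// Illustrative only: voting power grows linearly with the amount staked
// and sub-linearly with the number of rounds it is locked for.
// The sqrt(1 + rounds) multiplier is an assumption, not the official curve.
function givpowerOf(givStaked: number, roundsLocked: number): number {
  return givStaked * Math.sqrt(1 + roundsLocked);
}

// Example: 1000 GIV locked for 3 rounds yields 2000 GIVpower under this assumption.
```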

Weighting

The factors could be given different weights in the ranking algorithm. For example:

  • 20% GIVpower votes
  • 10% Number of donations
  • 10% Likes from users
  • 40% Social Impact
  • 20% Activity

In addition:

  • non-visible if unverified
  • one “fresh” in the first six projects
  • one “pick” in the first six projects
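To make the arithmetic concrete, here is a minimal sketch of how such a weighted rank could be computed. Everything in it is illustrative: the field names, the 0-1 min-max normalization, and the idea that impact and activity arrive as pre-computed scores are all assumptions, not decisions.

```typescript
// Minimal sketch of the proposed weighted ranking (illustrative only).

interface Project {
  id: string;
  verified: boolean;
  givpowerVotes: number;  // factor 2: GIVpower staked in favor of the project
  donationCount: number;  // factor 3: number of donations received
  likes: number;          // factor 4: likes from users
  impactScore: number;    // factor 5: assumed pre-computed score, e.g. 0..1
  activityScore: number;  // factor 6: assumed pre-computed score, e.g. 0..1
}

// Weights from the example above (they sum to 1.0).
const WEIGHTS = {
  givpowerVotes: 0.2,
  donationCount: 0.1,
  likes: 0.1,
  impactScore: 0.4,
  activityScore: 0.2,
};

type FactorKey = keyof typeof WEIGHTS;

// Min-max normalization so factors with different units are comparable.
function normalize(value: number, min: number, max: number): number {
  return max > min ? (value - min) / (max - min) : 0;
}

function rankProjects(projects: Project[]): Project[] {
  // Factor 1: unverified projects do not show up at all.
  const visible = projects.filter((p) => p.verified);

  const keys = Object.keys(WEIGHTS) as FactorKey[];
  const bounds = keys.map((key) => ({
    key,
    min: Math.min(...visible.map((p) => p[key])),
    max: Math.max(...visible.map((p) => p[key])),
  }));

  const score = (p: Project) =>
    bounds.reduce(
      (sum, { key, min, max }) => sum + WEIGHTS[key] * normalize(p[key], min, max),
      0
    );

  return [...visible].sort((a, b) => score(b) - score(a));
}
```

The “one fresh project” and “one team pick” rules would then be a post-processing step on the sorted list (swapping one of each into the first six slots) rather than part of the score itself.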

What I think should not be factored in, by the way, is the amount already donated. Users can easily filter by that, but the ranking logic should not favor projects just because they have already fundraised a lot. In that case we would always have the same projects at the top of the list.

We could of course play with something like “trending projects” (most donations/donated amount in the last month/week/24h). This might create some excitement (just as “trending” does on Twitter). But I am not sure if this sets the right incentives for donors.

Donor needs, impact needs, decentralization vs. centralization

From a bird’s eye view, a ranking should express Giveth’s loyalty to two things:

  1. What donors/token holders like and want: ability to trust the platform, finding exciting projects, finding emotionally gripping projects, expressing preferences for projects, using the GIV tokens in the economy, co-directing the path Giveth is on
  2. What creates actual social impact: channeling as much funding as possible to projects which most efficiently and effectively provide public goods and which actually change the lives of people out there

If we only feel loyal to the first one, we end up with projects that sound sexy and are very good at communicating. That might work for a while, but without some minimum standard of impact validation, Giveth will potentially compromise its credibility.

Some of the right balance between these two can be achieved through decentralized mechanisms, and we should use these as much as possible: for philosophical reasons, but also because Giveth itself cannot handle too much complexity (= workload) with the little staff resources it has. Donor preference (GIVpower), number of donations, likes, “freshness”: these can all be handled in a decentralized manner (or can be calculated already from the data we have).

But we also need to make sure that we consider factors that cannot easily be decentralized or just calculated right now. These might have to be taken care of by the Giveth team for a while still:

  • Verification is an example (although GIVpower might replace it one day, not sure if this is possible)
  • Social Impact is an example (although impact measurement and validation might over time, at least partially, be decentralized through projects like ixo and others)
  • And I am not sure about assessing the activity of a project. There are countable elements here, and aspects that might still require human interpretation.

Next Steps

Please comment: Where do you resonate? Where do you see flaws? Where do your values differ on this topic?

I am planning to collect feedback for two weeks or so, then see how the discussion goes in the Strategy Session on impact on June 28.

After that I would propose an updated version of this and see where you all stand on potentially implementing it.

6 Likes

I might have misunderstood, but unverified projects do currently appear on the Giveth platform – what separates the projects out is that verified projects have the ‘verified’ badge? So your new model is suggesting to change the current treatment of unverified projects?

There is a flip side to this. On the surface I agree that projects that receive the most donations should not simply rise to the top, for the reason you stated.

However, we do want to encourage projects to run their own fundraising campaigns. Active and successful fundraising is good marketing for both Giveth and the project: it raises awareness, increases engagement, and creates a connection between project and donors. In order to encourage projects to actively engage in their own fundraising, should there be some kind of ranking credit to reflect these activities?

I accept that there is overlap between this point and metric no. 3 (no. of donations), which is perhaps sufficient to encapsulate a reward for the efforts of projects in their fundraising activities.

1 Like

I think ‘trending’ would take this into account – I like this idea a lot. (Sorry, I replied before reading the next point.)

Nice post! We’ve definitely mused on this topic more than once. We actually have (had?) a “quality score” metric that was implemented a year ago using # of donations, # of likes and length of project description. It was pretty basic, and it existed before any of the project statuses like “unlisted”, “listed” and “verified”. It definitely needs an update!
The original article announcing quality score can be found here:
https://medium.com/giveth/what-dappened-march-17-31-3f5f201bfbf3
(very young version of Giveth!)

Also a quick refresher on how we handle listed, unlisted and cancelled projects can be found in our documentation:

I like most of these because we can keep track of them automatically. Metrics that we don’t have to manually quantify are great because they are scalable without needing extra contributor resources.

  1. Yes, Verified projects should always be on top of listed projects

  2. Users “stake” GIVpower onto projects to curate them; this staked amount is constantly changing as users stake and unstake GIVpower. We could look retroactively at a slice of time (2 weeks/1 month) and factor in the projects with the highest average GIVpower staked on them.
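To make that concrete, here is a minimal sketch of the windowed average, assuming we periodically record per-project GIVpower snapshots (the snapshot shape is hypothetical):

```typescript
// Hypothetical snapshot: total GIVpower staked on a project at a point in time.
interface GivpowerSnapshot {
  projectId: string;
  timestamp: number; // unix ms
  stakedGivpower: number;
}

// Average GIVpower per project over a trailing window (e.g. 2 weeks),
// so momentary stake/unstake spikes do not dominate the ranking.
function averageGivpower(
  snapshots: GivpowerSnapshot[],
  windowMs: number,
  now: number = Date.now()
): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const s of snapshots) {
    if (now - s.timestamp > windowMs) continue; // outside the window
    const entry = sums.get(s.projectId) ?? { total: 0, count: 0 };
    entry.total += s.stakedGivpower;
    entry.count += 1;
    sums.set(s.projectId, entry);
  }
  const averages = new Map<string, number>();
  for (const [id, { total, count }] of sums) {
    averages.set(id, total / count);
  }
  return averages;
}
```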

3 & 4. Likes and donations are great, but they are not sybil-proof: a single user could create an unlimited number of accounts and like a single project many, many times. Similarly, a single user or many unique users could make many small donations, either from one account or from countless accounts; there is no way to tell if a donation is genuine or gamed. Instead you could go for the total USD value of donations raised.

We had an issue maybe 5-6 months ago with a project that was spun up with little to no project description or information; an army of donors began making hundreds of tiny 0.1 DAI donations, and overnight it became one of the top Giveth projects.
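One cheap, partial mitigation would be to count only unique donor addresses whose total giving clears a minimum threshold. This is just an illustration: the threshold is arbitrary, and it is still not sybil-proof, since one person can fund many addresses; it only makes the attack more expensive.

```typescript
interface Donation {
  donorAddress: string;
  usdValue: number;
}

const MIN_USD = 5; // arbitrary threshold for illustration

// Count unique donor addresses whose total giving exceeds a minimum,
// so armies of 0.1 DAI donations contribute nothing to the metric.
function qualifiedDonorCount(donations: Donation[]): number {
  const totals = new Map<string, number>();
  for (const d of donations) {
    totals.set(d.donorAddress, (totals.get(d.donorAddress) ?? 0) + d.usdValue);
  }
  let count = 0;
  for (const total of totals.values()) {
    if (total >= MIN_USD) count += 1;
  }
  return count;
}
```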

5 & 8. These both seem a bit arbitrary, and #5 especially seems hard to maintain. Who would define the social impact metrics? Who will actually go through hundreds of projects submitting social impact evidence and score them? It seems like a lot of work for something that could prove hard to quantify. #8, I think, works a bit counter to the purpose of GIVpower, which is to provide community curation. Assuming a good chunk of our contributors are also GIV whales, we do have the power ourselves to curate using GIVpower.

  6. An easy metric is to check when the project was last updated; this also ties into #7.
  7. I don’t like the hard requirement of always having a new project on the front page, but I do like new projects having a higher score than older ones. Like most recently updated projects, I think we could give this a weight.

An interesting feature could be to take, say, a slice of the top 20 projects using our ranking metrics, and every cycle (x weeks/months) show a new batch of top projects, at least giving a chance for successful projects to have some time in the spotlight.
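A rough sketch of that rotation, assuming we already have a ranked list of top projects (the slot count and cycle length are placeholders):

```typescript
// Rotate which slice of the top-ranked projects gets front-page spotlight.
// Each cycle the window advances, so every top project eventually gets shown.
function spotlightSlice<T>(topProjects: T[], cycleIndex: number, slots = 6): T[] {
  if (topProjects.length === 0) return [];
  const start = (cycleIndex * slots) % topProjects.length;
  return Array.from(
    { length: Math.min(slots, topProjects.length) },
    (_, i) => topProjects[(start + i) % topProjects.length]
  );
}
```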

2 Likes

I generally agree with you on the factors and their weight distribution. I have to say that the current system hugely overlooks “Social Impact”, which is a misalignment with Giveth’s mission to support social good, as well as a miss with donors’ general desire to achieve social impact in the world as a basic reason to give.

Here are some visible flaws shown in the current top projects:

  1. A bias toward projects related to Giveth in some way – mostly because the current donor population is very concentrated within Giveth’s closest community.

  2. There isn’t a defined understanding of what “social proof” or “social reputation” mean, and the fluid interpretation is problematic. If you take a closer look at the top Giveth projects, you soon find out that some of the “social proof” may not be that credible. As an example, the “Feed the Hunger” project ranks very high, but if you look deeper, the pictures on the project profile and within the project updates aren’t authentic but taken from news and stock photos. The updates also don’t really talk about how the project can be sustainable and achieve social impact to alleviate poverty and tackle the causes of homelessness in interior BC, where addiction is a problem (in addition to feeding a very small homeless population in a high-welfare first-world country and a town of less than 100K residents). There should be a basic standard of social impact being measured and a basic standard of social proof observed (authentic project information etc.). Otherwise, Giveth’s reputation as a platform could be in danger when sophisticated donors come looking for projects.

  3. A misunderstanding of, and confusion among, platform-based activity, verification, and real-world impact. These are in fact very different concepts:

  • Verification: a basic proof that the project is real, credible, and has a core mission to support social good

  • Platform activity: a way to connect with donors and the platform as part of stewardship and donor retention.

  • Real-World Impact: # of people/organizations helped, environment protected, etc. This does not equal platform activities and updates, and it needs real proof. Although this could be challenging, it should be a core measurement based on what Giveth set out to achieve, which is social good.

I feel that the proposed weight system is a good balance of these three factors.

  4. Plutocracy. One of the reasons that attracted me to the platform is the quote that “people on the ground should make the final decision” (Ostrom on the Commons). In today’s nonprofit world, one big problematic power imbalance is high-net-worth individuals or “philanthropists” calling the shots (not necessarily just bureaucrats and institutions; in fact, in some countries we are looking at an 80-20 split in philanthropic giving between powerful rich individuals and institutions). In a way, I was hoping that Giveth is on the side of the “people on the ground”. However, I feel that the current system still enables the people with the most money to call the shots (no different from the real-world power imbalance we are experiencing). I think this is why social impact needs to be addressed more, and the platform needs to be more project-centric (vs. big-donor-centric) to become truly a “people’s platform”. In turn, enabling great projects to build their capacity to fundraise and attract donations from regular donors who just want to contribute can, at scale, bring more success than a few high-net-worth crypto donors, and at the same time fulfill the platform’s ambition to create a balanced social sector (build social projects, support them, and engage regular donors more in giving).

In general, I really like the proposal and agree with most of it. Thank you.

2 Likes

Also wondering if this is the proposed approach for unverified projects (not confirmed to be for social good), as unverified projects do show up, and in fact some even come up as top funded.

Thank you! Ashley also just mentioned there is a forum post on the quality scoring system currently in use.
It is here:

I was not aware of that. Good to have the full picture.

I think it’s going to be important to make sure that whatever we put up there for rankings is not easy to game and has some time component that rates recent actions higher than actions in the past.
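For the time component, exponential decay is one simple option. A sketch, where the 30-day half-life is an arbitrary assumption:

```typescript
// Exponential time decay: an action loses half its weight every HALF_LIFE_DAYS.
const HALF_LIFE_DAYS = 30; // arbitrary; shorter means the ranking reacts faster

function decayedWeight(actionTimestamp: number, now: number = Date.now()): number {
  const ageDays = (now - actionTimestamp) / (1000 * 60 * 60 * 24);
  return Math.pow(0.5, ageDays / HALF_LIFE_DAYS);
}

// Example: a time-decayed donation count. A donation from today counts
// close to 1; one from two months ago counts about 0.25.
function decayedDonationCount(donationTimestamps: number[]): number {
  return donationTimestamps.reduce((sum, t) => sum + decayedWeight(t), 0);
}
```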

2 Likes

Great discussion. What it brings up for me is that Giveth, as a platform and a team, can play a role that conditions and shifts both Donors and Projects.

The issue we are familiar with is that philanthropy has a tendency to become disconnected from ground-level impact, because it relies by nature on project reports, case studies, and beneficiary surveys to understand what’s happening at the social ground level.

These traditional communications deliverables help donors understand impact by compressing time, to the point that they leave little awareness or space for the micro-experience at the Project level.

The challenge then is to translate the value of a quantifiable donation into its impact in a way that allows qualitative social experiences to deepen the relationship between Donors and Projects, and thereby shift the philanthropy culture.

On the subject of Activity weighting: Giveth is helping facilitate relationships between Donors and Projects, and it benefits from identifying and monitoring where each respectively falls on a few spectrums.

Web3 inclination:
Active Engagement <> Passive Engagement

Relationship Orientation:
Donor Dependent <> Donor Independent

Learning Style:
by Doing <> by Following

These spectrums can help inform which projects are a good fit for external communications that just meet “donate to” criteria vs. an initiative supported by a campaign requiring detailed, ongoing external communications about the evolution of a group/community into a regenerative economy.

We as the Giveth team can keep an eye out for what we’re looking for by also classifying what others are looking for, so we have better opportunities to develop content that keeps donors and projects engaged.

For example,
Identifying a Donor who’d like to be paired with a Project as they embark on a regenerative finance journey together.

This is where I believe the strength of the Giveth team and community will come in most handy. A team pick will always bring a nuanced and contextualized understanding of the Project to a Donor audience, because we each look for different things.

For example, I’m super curious about the LatAm eco-village and eco-university movement because they represent the evolution of human civilization. The interplay of technology (human systems) and nature (living systems) makes for a rich social and cultural experience that would also appeal to broader audiences.

2 Likes

Projects that are not verified do show up on the platform currently.

The current progression is unlisted > listed > verified > traceable.
Unlisted status is the only status that is currently not visibly shown in the projects list.
Traceable status will soon be phased out and we are no longer encouraging projects to strive for this status or promoting the GivethTRACE platform. I assume that as TRACE phases out, we will look at what was successful there and how we may implement some of the most valued features into the current platform.

I would have to disagree with this sentiment. I believe that not having a ‘verified’ badge shouldn’t discredit a project or put it at a disadvantage. Not being enrolled in the GIVbacks program can already put it in the second-choice position for donors. The only thing ‘verified’ means is that the project was approved to be enrolled in the GIVbacks program. Maybe we should consider changing the term ‘verified’, as it insinuates that ‘unverified’ projects are of lesser quality. But that is for another conversation.

I like the idea of taking the number of donations into consideration.
Do you think this metric would incentivize people to game the system by making many small donations rather than one large one in order to get their project to rank higher? It is nice that it would encourage projects to get more people to donate, though. Both of these things assume that the user knows in detail how this ranking system works and can use that information to their benefit.

I would love to see some simulations of what the result of these weights would look like when actualized. Is there any easy way to visualize this before implementation?

Again, I disagree that listed (but unverified) projects should be hidden from the platform. This is why we have the unlisted status for projects that don’t meet quality assurance guidelines.

I agree, and I think that this is an important variable to include in this equation. I look forward to exploring what this looks like in the Strategy Session.

This is a question that also popped up for me while reading the original post. How can we ensure that we are not incentivizing users to game the system so that their project appears on the top?

I would assume this responsibility would be left to the verification team and I can attest to the fact that we do not have the resources for this. I’m really interested in learning more about projects like ixo that Rainer mentioned above.

I think this could be a better option than #7 above.

Personally, I would love to see how we can use the layout of the projects page to diversify which projects are seen by donors as well. I imagine a Netflix-like setup where the projects page shows each category or cause the way Netflix shows categories like comedy, sci-fi, etc.

If things are presented in rows by category or cause, that gives many more opportunities for a project to be at the ‘top’. The user would scroll sideways through each category rather than through one page dictated by the filter. They could also choose to view each category as a whole page.

I think we can also really expand the options we allow donors to sort and filter projects by to negate the heavy need for a ranking system. By making it easier for donors to connect with the projects they care about, we reduce the need for heavy curation and ranking algorithms.

1 Like

This!

I’d rather see users do the filtering and ranking based on their own choices than have this implied by the platform.

I’m still asking myself: Why are we ranking the projects anyway?

I wrote a monologue here.

1 Like

I understand your perspective, @markop. I think the issue, as I understand it, is that the projects have to be presented in SOME kind of order, whether we should be “involved” or not. And how they are presented on the website affects fundraising, visibility, etc. So how is that facilitated?

@WhyIdWanderer, about verified projects, I think I disagree with your comment:

“The only thing that ‘verified’ means is that the project was approved to be enrolled in the GIVbacks program.”

I don’t believe this is how “verified” is presented by Giveth to its projects and donors. In the docs, for example, the first line of the Verified Projects page is: “‘Verified’ is a seal of approval for legitimate projects on Giveth.”

Further explanation from the same page (which is also expressed elsewhere in the GIViverse):

“Verified Projects need to show that they are doing work to create non-excludable value for society and that they have some reputation at stake that would prevent them from gaming or manipulating the GIVbacks program for personal gain.”

In the Donor 101 video course, for example, we say: “What is the criteria to get verified? It just means that someone from our team has to verify that this is a real project, and that they need to prove in some way, through social media or any other means, that they are raising money for public goods and that it’s not for personal gain.” GIVbacks video

2 Likes

Exactly! And I think that this may be a mistake. It gives the idea that projects that don’t have a verified badge are not legitimate. I think we want to stay away from painting this picture, because even projects without a verified badge are legitimate. This is exactly why I’m thinking through the pros and cons of changing the term ‘verified’ to something that just labels projects as ‘eligible for GIVbacks’ in one word. I think this is especially important as we move toward a decentralized system of curation/verification through GIVpower and ranking.

2 Likes

I think we are looking into ranking systems so that eventually the system will be decentralized and the project verification team will become less involved than the GIVcurators, who will most likely take the lead on deciding which projects are eligible for GIVbacks in the future. Also, the ranking system encourages higher-quality projects in general, from descriptions to updates.

I think the idea is that GIVpower in tandem with project ranking will serve to determine which projects give more GIVbacks than others. I know the details have not been worked out yet but I think this is the general idea.

3 Likes

Exactly this, GIVbacks are starting out as a flat centralized process… but in the long run, we will want to use our ranking system to distribute GIVbacks in a decentralized way.

We also will want to use it for distributing Matching Funds. The GIVmatching Spec relies on it.

1 Like

Bump
bump
bump
bump
BUMP

2 Likes

We have a project in the early research phase making it easy for projects and impact evaluators to qualitatively rank projects.

Test out Budget Box (the name will change)

and if you want to advise on improvements make issues here:

4 Likes

I love Budget Box <3

I think it would be awesome to give project creators curation power over project ranking. They deserve more governance power too.

I am also very excited about building relationships with impact investors and philanthropists before asking for an investment or grant. Just giving them access to Giveth’s Budget Box and starting to build rapport :slight_smile:

1 Like