Project Ranking Metrics

I might have misunderstood, but unverified projects do currently appear on the Giveth platform – what separates them is that verified projects have the ‘verified’ badge. So is your new model suggesting a change to the current treatment of unverified projects?

There is a flip side to this. On the surface I agree that the projects receiving the most donations should not simply rise to the top, for the reason you stated.

However, we do want to encourage projects to run their own fundraising campaigns. Active and successful fundraising is good marketing for both Giveth and the project: it raises awareness, increases engagement, and creates connection between project and donors. To encourage projects to actively engage in their own fundraising, should there be some kind of ranking credit to reflect these activities?

I accept that there is overlap between this point and metric no. 3 (no. of donations), which is perhaps sufficient to encapsulate a reward for projects’ fundraising efforts.


I think ‘trending’ would take this into account – I like this idea a lot. (Sorry, I replied before reading the next point.)

Nice post! We’ve definitely mused on this topic more than once. We actually have (had?) a “quality score” metric that was implemented a year ago using # of donations, # of likes, and length of project description. It was pretty basic, and it predates project statuses like “unlisted”, “listed”, and verified. It definitely needs an update!
The original article announcing quality score can be found here:
(very young version of Giveth!)

Also a quick refresher on how we handle listed, unlisted and cancelled projects can be found in our documentation:

I like most of these because we can keep track of them automatically. Metrics that we don’t have to manually quantify are great because they are scalable without needing extra contributor resources.

  1. Yes, Verified projects should always be on top of listed projects

  2. Users “stake” GIVpower onto projects to curate them; this stake amount is constantly changing as users stake and unstake GIVpower. We could look retroactively at a slice of time (2 weeks/1 month) and factor in the projects with the highest average GIVpower on them.

3 & 4. Likes and donations are great, but they are not sybil-proof: a single user could make an unlimited number of accounts and like a single project many times over. Similarly, a single user or many unique users could make many small donations, from one account or from many, and there’s no way to tell whether a donation is genuine or gamed. Instead you could go for total USD value of donations raised.

We had an issue maybe 5-6 months ago with a project that was spun up with little to no project description or information; an army of donors began putting hundreds of tiny 0.1 DAI donations on it, and overnight it became one of the top Giveth projects.

5 & 8. These both seem a bit arbitrary, and #5 especially seems hard to maintain. Who would define the social impact metrics? Who will actually go through hundreds of projects submitting social impact evidence and score them? It seems like a lot of work for something that could prove hard to quantify. #8, I think, works a bit counter to the purpose of GIVpower, which is to provide community curation. Assuming a good chunk of our contributors are also GIV whales, we do have the power ourselves to curate using GIVpower.

  1. An easy metric is to check when the project was last updated; this also ties into #7.
  2. I don’t like the hard requirement of putting new projects on the front page, but I do like that new projects have a higher score than older ones; like most recently updated projects, I think we could give this a weight.

An interesting feature could be to have, say, a slice of the top 20 projects using our ranking metrics; every cycle (x weeks/months) we show a new batch of top projects, at least giving a chance for successful projects to have some time in the spotlight.
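The rotating top-slice idea above could be sketched roughly like this. This is a minimal illustration, not an implementation proposal: the field names, the verified-first bonus, and the 0.6/0.4 weights are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    verified: bool
    avg_givpower: float  # average GIVpower staked over the time slice
    usd_raised: float    # total USD value of donations in the time slice

def rank_score(p: Project) -> float:
    # A large constant bonus keeps every verified project above every
    # unverified one, regardless of the other metrics.
    base = 1_000_000 if p.verified else 0
    return base + 0.6 * p.avg_givpower + 0.4 * p.usd_raised

def top_slice(projects, n=20):
    """Return the current top-n batch; re-run each cycle (x weeks/months)
    over a fresh time slice to rotate which projects get the spotlight."""
    return sorted(projects, key=rank_score, reverse=True)[:n]

projects = [
    Project("A", verified=True, avg_givpower=500, usd_raised=1_000),
    Project("B", verified=False, avg_givpower=9_000, usd_raised=50_000),
    Project("C", verified=True, avg_givpower=100, usd_raised=200),
]
print([p.name for p in top_slice(projects, n=2)])  # → ['A', 'C']
```

Because the averages are computed over a bounded window, older activity naturally ages out of the score each cycle.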


I generally agree with you on the factors and their weight distribution. I have to say that the current system hugely overlooks “Social Impact”, which is a misalignment with Giveth’s mission to support social good, as well as with donors’ general desire to achieve social impact in the world as a basic reason to give.

Here are some visible flaws shown in the current top projects:

  1. A favored position for projects related to Giveth in some way – mostly because the current donor population is very concentrated within Giveth’s closest community.

  2. There isn’t a defined understanding of what “social proof” or “social reputation” mean, and the fluid interpretation is problematic. Take a closer look at the top Giveth projects and you can soon find that some of the “social proof” may not be as credible. As an example, the “Feed the Hunger” project ranks very high, but if you look deeper, the pictures on the project profile and within the project updates aren’t authentic – they come from news and stock photos. The updates also don’t really address how the project can be sustainable and achieve social impact to alleviate poverty and tackle the causes of homelessness in interior BC, where addiction is a problem (in addition to feeding a very small homeless population in a high-welfare first-world country and a town of fewer than 100K residents). There should be a basic standard of social impact being measured and a basic standard of social proof observed (authentic project information, etc.). Otherwise, Giveth’s reputation as a platform could be in danger when sophisticated donors come looking for projects.

  3. A misunderstanding and confusion among platform-based activity, verification, and real-world impact. These are in fact very different concepts:

  • Verification: a basic proof that the project is real, credible, and has a core mission to support social good

  • Platform activity: a way to connect with donors and the platform as part of stewardship and donor retention.

  • Real World Impact: # of people/organizations helped, environment protected, etc. This does not equal platform activity and updates, and it needs real proof. Although this could be challenging, it should be a core measurement based on what Giveth set out to achieve, which is social good.
    I feel that the proposed weight system is a good balance of these three factors.

  1. Plutocracy. One of the reasons the platform attracted me is the quote that “people on the ground should make the final decision” (Ostrom on the Commons). In today’s nonprofit world, one big problematic power imbalance is high-net-worth individuals or “philanthropists” calling the shots (not necessarily just bureaucrats and institutions; in fact, in some countries we are looking at an 80-20 split in philanthropic giving between powerful rich individuals and institutions). In a way, I was hoping that Giveth would be on the side of the “people on the ground”. However, I feel that the current system still enables the people with the most money to call the shots (no different from the real-world power imbalance we are experiencing). I think this is why social impact needs to be addressed more, and the platform needs to be more project-centric (vs. big-donor-centric) to become truly a “people’s platform”. In turn, enabling great projects to build their capacity to fundraise and attract donations from regular donors who just want to contribute can, at scale, bring more success than a few high-net-worth crypto donors, and at the same time fulfill the platform’s ambition to create a balanced social sector (build social projects, support them, and engage regular donors more in giving).

In general, I really like the proposal and agree with most of it. Thank you.


Also wondering if this is a proposed approach for unverified projects (not for social good), as unverified projects do show up and in fact some even come up as top funded.

Thank you! Ashley also just mentioned there is a forum post on the quality scoring system currently in use.
It is here:

I was not aware of that. Good to have the full picture.

I think it’s going to be important to make sure that whatever we put up there for rankings is not easy to game and has some time component that rates recent actions higher than actions in the past.
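One simple way to get that time component is exponential half-life decay, where each action’s weight halves after a fixed number of days. A minimal sketch; the 14-day half-life is an arbitrary assumption:

```python
HALF_LIFE_DAYS = 14.0  # assumption: an action loses half its weight every two weeks

def decayed_weight(action_ts, now):
    """Weight a single action by its age, halving every HALF_LIFE_DAYS."""
    age_days = (now - action_ts) / 86_400
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def recency_score(timestamps, now):
    """Sum of decayed weights: a recent burst of activity outscores an
    equally sized burst from months ago."""
    return sum(decayed_weight(ts, now) for ts in timestamps)

now = 1_700_000_000
recent_burst = [now - d * 86_400 for d in (1, 2, 3)]   # three actions this week
old_burst = [now - d * 86_400 for d in (60, 61, 62)]   # three actions ~2 months ago
print(recency_score(recent_burst, now) > recency_score(old_burst, now))  # → True
```

This also blunts some gaming: a pile of tiny donations made long ago stops propping up a project’s rank after a few half-lives.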


Great discussion. What it brings up for me is that Giveth as a platform, and the team, can play a role that conditions and shifts both Donors and Projects.

The issue we are familiar with is that Philanthropy has a tendency to become disconnected from ground-level impact, because by nature it relies on project reports, case studies, and beneficiary surveys to understand what’s happening at the social ground level.

These traditional communications deliverables help Donors understand impact by compressing time, to the point that they leave little awareness or space for the micro experience at the Project level.

The challenge then is to translate the value of a quantifiable donation into its impact, so that qualitative social experiences can deepen the relationship between Donors and Projects and thereby shift the Philanthropy culture.

On the subject of Activity weighting: Giveth is helping facilitate relationships between Donors and Projects, and benefits by identifying and monitoring where each respectively falls on a few spectrums.

Web3 inclination:
Active Engagement <> Passive Engagement

Relationship Orientation:
Donor Dependent <> Donor Independent

Learning Style:
by Doing <> by Following

These spectrums can help inform which projects are a good fit for external communications that just meet “donate to” criteria, versus an initiative supported by a campaign requiring detailed, ongoing external communications about the evolution of a group/community into a regenerative economy.

We as the Giveth team can keep an eye out for what we’re looking for by also classifying what others are looking for so we have better opportunities to develop content that keeps donors and projects engaged.

For example,
Identifying a Donor who’d like to be paired with a Project as they embark on a regenerative finance journey together.

This is where I believe the strength of Giveth team and community will most come in handy. A team pick will always invite a nuanced and contextualized understanding about the Project to a Donor audience because we each look for different things.

For example, I’m super curious about the LatAm eco villages and eco university movement because they represent the evolution of human civilization. The interplay of technology (human systems) and nature (living systems) makes for a rich social and cultural experience that would also appeal to broader audiences.


Projects that are not verified do show up on the platform currently.

The current progression is unlisted > listed > verified > traceable.
Unlisted status is the only status that is currently not visibly shown in the projects list.
Traceable status will soon be phased out and we are no longer encouraging projects to strive for this status or promoting the GivethTRACE platform. I assume that as TRACE phases out, we will look at what was successful there and how we may implement some of the most valued features into the current platform.

I would have to disagree with this sentiment. I believe that not having a ‘verified’ badge shouldn’t discredit a project or put it at a disadvantage. Not being enrolled in the GIVbacks program can already put it in the second-choice position for donors. The only thing ‘verified’ means is that the project was approved for enrollment in the GIVbacks program. Maybe we should consider changing the term ‘verified’, as it insinuates that ‘unverified’ projects are of lesser quality. But that is for another conversation.

I like the idea of taking the number of donations into consideration.
Do you think this metric would incentivize people to game the system by making many small donations rather than one large one in order to get their project to rank higher? It is nice that it would encourage projects to get more people to donate, though. Both of these things assume the user knows in detail how this ranking system works and can use that information to their benefit.

I would love to see some simulations of what the result of these weights would look like when actualized. Is there any easy way to visualize this before implementation?
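A quick way to preview the effect before implementation is a toy simulation that reorders the same projects under different weight vectors. Everything below (metric names, normalized scores, weights) is invented for illustration:

```python
def rank(projects, weights):
    """Order project names by a weighted sum of normalized metric scores."""
    def score(p):
        return sum(weights[m] * p[m] for m in weights)
    return [p["name"] for p in sorted(projects, key=score, reverse=True)]

# Metric values pre-normalized to [0, 1] so the weights are comparable.
projects = [
    {"name": "A", "givpower": 0.9, "usd": 0.2, "recency": 0.5},
    {"name": "B", "givpower": 0.3, "usd": 0.9, "recency": 0.4},
    {"name": "C", "givpower": 0.5, "usd": 0.5, "recency": 0.9},
]

curation_heavy = {"givpower": 0.6, "usd": 0.2, "recency": 0.2}
donation_heavy = {"givpower": 0.2, "usd": 0.6, "recency": 0.2}

print(rank(projects, curation_heavy))  # → ['A', 'C', 'B']
print(rank(projects, donation_heavy))  # → ['B', 'C', 'A']
```

Plotting a few such orderings side by side (even in a spreadsheet) would show how sensitive the top slots are to each weight.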

Again, I disagree that listed projects should be hidden from the platform.
This is why we have the unlisted status for projects that don’t meet quality assurance guidelines.

I agree and I think that this an important variable to include in this equation. I look forward to exploring what this looks like in the Strategy Session.

This is a question that also popped up for me while reading the original post. How can we ensure that we are not incentivizing users to game the system so that their project appears on the top?

I would assume this responsibility would be left to the verification team and I can attest to the fact that we do not have the resources for this. I’m really interested in learning more about projects like ixo that Rainer mentioned above.

I think this could be a better option than #7 above.

Personally, I would love to see how we can use the layout of the projects page to diversify which projects donors see as well. I imagine a Netflix-like setup where the projects page shows each category or cause the way Netflix shows categories like comedy, sci-fi, etc.

If things are presented in rows by category or cause, it gives that many more opportunities for a project to be at the ‘top’. The user would scroll sideways through each category rather than one page dictated by the filter. They could also choose to view each category as a whole page.

I think we can also really expand the options we allow donors to sort and filter projects by to negate the heavy need for a ranking system. By making it easier for donors to connect with the projects they care about, we reduce the need for heavy curation and ranking algorithms.



I’d rather see users do the filtering and ranking based on their own choices than have this implied by the platform.

I’m still asking myself: Why are we ranking the projects anyway?

I wrote a monologue here.


I understand your perspective, @markop. I think the issue, as I understand it, is that the projects have to be presented in SOME kind of order, whether we should be « involved » or not. And how they are presented on the website affects fundraising, visibility, etc. So how is that facilitated?

@WhyIdWanderer about verified projects, I think I disagree with your comment:

« The only thing that ‘verified’ means is that the project was approved to be enrolled in the GIVbacks program. »

I don’t believe this is how « verified » is presented by Giveth to its projects and donors. In the docs, for example, the first line of the Verified Projects page is: « ‘Verified’ is a seal of approval for legitimate projects on Giveth. »

Further explanation from the same page (which is also expressed elsewhere in the GIViverse):

« Verified Projects need to show that they are doing work to create non-excludable value for society and that they have some reputation at stake that would prevent them from gaming or manipulating the GIVbacks program for personal gain. »

In the video donor 101 course, for example, we say: « What is the criteria to get verified? It just means that someone from our team has to verify that this is a real project and that they need to prove in some way through social media or any other means they are raising money for public goods and that it’s not for personal gain. » GIVbacks video


Exactly! And I think that this may be a mistake. It gives the idea that projects without a verified badge are not legitimate. I think we want to stay away from painting this picture, because even projects without a verified badge are legitimate. This is exactly why I’m thinking through the pros and cons of changing the term ‘verified’ to something that simply labels projects as ‘eligible for GIVbacks’ in one word. I think this is especially important as we move toward a decentralized system of curation/verification through GIVpower and ranking.


I think we are looking into ranking systems so that eventually the system will be decentralized and the project verification team will become less involved than the GIVcurators, who will most likely take the lead on deciding which projects are eligible for GIVbacks in the future. Also, the ranking system encourages higher-quality projects in general, from descriptions to updates.

I think the idea is that GIVpower in tandem with project ranking will serve to determine which projects give more GIVbacks than others. I know the details have not been worked out yet but I think this is the general idea.


Exactly this, GIVbacks are starting out as a flat centralized process… but in the long run, we will want to use our ranking system to distribute GIVbacks in a decentralized way.

We also will want to use it for distributing Matching Funds. The GIVmatching Spec relies on it.




We have a project in the early research phase making it easy for projects and impact evaluators to qualitatively rank projects.

Test out Budget Box (name will change)

and if you want to advise on improvements make issues here:


I love Budget Box <3

I think it would be awesome to give project creators curation over project ranking. They deserve more governance power too.

I am also very excited about building relationships with impact investors and philanthropists before asking for an investment or grant. Just give them access to Giveth’s Budget Box and start building rapport :slight_smile:


Budget Box is going to be renamed… we are going to vote on it with the app itself :smiley:

I will post the link in a day or 2.