9 Emerging trends in managing awards, grants, and scholarships

In our latest webinar, we explored 9 emerging trends in managing awards, grants, scholarships, abstracts, and more. These strategies streamline and enhance these programs and provide a more engaging experience for everyone involved. In this webinar we cover:

  • Who is Reviewr
  • Dual nominator/nominee workflows
  • Collecting references
  • AI and plagiarism content detection
  • Creating a fair, non-biased review and selection process
  • Broadening the evaluation criteria
  • Normalizing results and identifying judge trends 
  • Measuring and reporting on impact
  • Enhancing program transparency
  • Data security and privacy
Outlined below are discussion points for each of these 9 key emerging trends in managing awards, grants, scholarships, abstracts, and more, along with a link to the relevant portion of the webinar and an interactive tour of Reviewr demonstrating that specific trend.
 
Let’s get started!

Trend 1: Dual nominator/nominee workflows

View/Watch Trend

The dual nominator/nominee workflow is structured to optimize both participation and data accuracy in award, scholarship, and grant applications. First, nominators submit basic information about themselves and the nominee. This lightweight first step reduces the barrier to entry and allows for a high volume of entries, which is crucial for initial engagement. It also raises awareness of the program among potential self-applicants who simply did not know about it. Rather than relying on the nominator to send the nominee a link to apply, organizations now have valuable data and insight into who has been nominated, enabling automated email follow-ups and conversion metric tracking. In the second phase, nominees themselves are prompted to provide a detailed, comprehensive account of their qualifications and fill out the full application, putting ownership on the applicants themselves. This two-tiered approach ensures:

  • High Participation: Simple initial nominations encourage more entries as well as the ability to monitor conversions and boost participation through automated reminders.
  • Rich Data Collection: Detailed applications from nominees allow for a thorough evaluation based on substantial and self-verified information.
  • Consistent Selection: Evaluating only data submitted directly by the applicant ensures that all applicants are judged fairly, not on materials submitted about them by others.
  • Streamlined Processing: Separating casual entries from serious applications early in the process helps manage resources more efficiently. (A minimal sketch of this two-phase flow follows the list.)
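
As a rough illustration of the two-phase flow, a nomination record and its conversion metric could be modeled as below. All names (`Phase`, `Nomination`, `conversion_rate`) are hypothetical sketches, not Reviewr's actual data model:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Phase(Enum):
    NOMINATED = "nominated"    # phase 1: lightweight nomination received
    INVITED = "invited"        # nominee emailed a link to the full application
    SUBMITTED = "submitted"    # phase 2: full application completed


@dataclass
class Nomination:
    nominator_email: str
    nominee_email: str
    nominee_name: str
    phase: Phase = Phase.NOMINATED
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def conversion_rate(nominations: list[Nomination]) -> float:
    """Share of nominees who went on to complete the full application."""
    if not nominations:
        return 0.0
    submitted = sum(1 for n in nominations if n.phase is Phase.SUBMITTED)
    return submitted / len(nominations)
```

Tracking each record's phase is what enables the automated follow-ups and conversion metrics mentioned above.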

 

Trend 2: Reference collection

View/Watch Trend

The revamped method of reference collection focuses on a standardized approach, ensuring consistent, reliable, and relevant information from each reference. By transitioning from narrative-based reference letters to a structured questionnaire format, the system achieves several key improvements:

Avoid reference letters:

  • Historically, references were collected in letter format, but this is now an outdated and risky method.

  • Letters create a barrier for references.

    • Hard to write

    • Takes time and effort

    • Multi-step to create, write, and send back.

    • Not all references are created equal: some are better written than others, some had more time put into them, etc. Is this a fair representation of the scholarship applicant?

    • Hard to blind PII in a letter.

Leverage reference templates:

  • Outline 3-5 questions that each reference should answer.

  • Consider adding a reference evaluation scoresheet.

  • Lowers the time and effort barrier for references.

  • Creates data consistency amongst all applicants.

  • Creates consistency in the review process with defined data sets.

Use Reviewr's automated reference collection process (sketched below):

  • Award nominees or nominators will enter the name and email of the reference

  • Triggers an email notification to reference

  • Reference clicks on a link that brings them to a reference template

  • Reference simply fills out the template with the ability to save, log out, and work at their own pace.

  • Both award program managers and applicants have visibility into the progress of references.

  • Actual reference content can be blinded from the applicant.

  • Upon submission, the reference template is automatically attached to the applicant profile.

  • Reference data can be blinded more easily for the review team.
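
A minimal sketch of this automated flow, assuming hypothetical names and a placeholder URL (this is not Reviewr's API):

```python
import secrets
from dataclasses import dataclass, field


@dataclass
class ReferenceRequest:
    applicant_id: str
    reference_name: str
    reference_email: str
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))
    completed: bool = False
    answers: dict[str, str] = field(default_factory=dict)  # question -> response


def invite_link(req: ReferenceRequest) -> str:
    """Tokenized link emailed to the reference (placeholder domain)."""
    return f"https://example.com/reference/{req.token}"


def submit_reference(req: ReferenceRequest, answers: dict[str, str]) -> None:
    """Store the completed template; it is then attached to the applicant profile."""
    req.answers = dict(answers)
    req.completed = True


def progress(requests: list[ReferenceRequest]) -> str:
    """Status visible to program managers and applicants without exposing content."""
    done = sum(1 for r in requests if r.completed)
    return f"{done}/{len(requests)} references received"
```

Separating the progress view from the answer content is what allows applicants to see that references are in without seeing what they say.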

 

Benefits to this revamped reference submission process:

Standardization and Reliability

  • Uniform Response Format: The use of a structured questionnaire for all references eliminates the variability inherent in letters. Each reference answers the same set of questions, which directly relate to the applicant’s qualifications and suitability for the program.

  • Direct Comparison: Standardized responses facilitate direct comparisons among candidates, enhancing the fairness of the evaluation process.

Efficiency and Completeness

  • Automated Reminders: The system sends automated reminders to references to complete their submissions, ensuring timely collection of all necessary information.

  • Ease of Submission: The online format simplifies the submission process for references, increasing the likelihood of complete and thoughtful responses.

Enhanced Data Integrity

  • Reduction of Bias: By focusing on specific, relevant questions, the potential for biased or overly subjective evaluations is minimized, leading to more objective and actionable data.

  • Improved Evaluation Quality: Evaluators receive high-quality, relevant data that accurately reflects the candidate’s capabilities, supporting better-informed decision-making.

This change in approach to collecting references ensures that the evaluation process is both efficient and equitable, providing a solid foundation for assessing candidates’ suitability and potential.

 

Trend 3: AI content detection

View/Watch Trend

As artificial intelligence (AI) continues to evolve, the need to distinguish between human-generated and AI-assisted content has become increasingly important. This distinction is crucial in settings where originality and authenticity are highly valued, such as awards, grants, scholarships, calls for speakers, abstract collection, fellowships, and more. The implementation of AI content detection tools is aimed at identifying the extent to which AI has been used in the preparation of submissions. These Reviewr tools can differentiate between content that is entirely generated by AI and content where AI assisted the human creator. Similarly, it is critical to identify any plagiarized material, along with the source from which the content was derived.

The AI content detection system integrates several functionalities (a flagging sketch follows the list):

  • Detection and Analysis: It can identify the presence of AI-generated text, providing a percentage score that indicates the extent of AI or plagiarized content involvement.
  • Maintaining Integrity: By flagging submissions that heavily rely on AI, the system ensures that the contributions reflect the true capabilities and efforts of the applicants. This is essential for maintaining the integrity of competitive academic and professional environments.
  • Policy Enforcement and Education: Understanding the role of AI in submissions allows institutions to enforce existing policies or develop new guidelines regarding the use of AI technologies. It also provides an educational opportunity to inform participants about acceptable practices related to AI use.
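
As a minimal sketch of threshold-based flagging on top of a detector's percentage scores, something like the following could surface submissions for human review. The thresholds and function names are illustrative assumptions, not Reviewr defaults:

```python
# Scores are fractions in [0.0, 1.0] returned by an external detection service.
AI_FLAG_THRESHOLD = 0.40            # assumed policy choice; tune per program
PLAGIARISM_FLAG_THRESHOLD = 0.15


def flag_submission(ai_score: float, plagiarism_score: float) -> list[str]:
    """Return human-readable flags for reviewers to investigate further."""
    flags = []
    if ai_score >= AI_FLAG_THRESHOLD:
        flags.append(f"AI-generated content: {ai_score:.0%} of text")
    if plagiarism_score >= PLAGIARISM_FLAG_THRESHOLD:
        flags.append(f"Possible plagiarism: {plagiarism_score:.0%} matched to sources")
    return flags
```

Flagging rather than auto-rejecting keeps the final judgment with the program managers, which suits the policy-enforcement and education goals above.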

The strategic implementation of these tools helps maintain a level playing field, ensuring that all submissions are evaluated fairly based on their merits and the genuine effort of the participants. This system is particularly important in an era where AI tools are readily accessible and capable of producing content that could potentially blur the lines between human and machine-generated work.

 

Trend 4 & 5: Review workflows and promoting a fair, non-biased selection

View/Watch Trend

Creating a fair, non-biased framework is essential for any evaluation process, especially those involving diverse applicant pools such as scholarships, awards, and grants. Reviewr employs several strategies to ensure fairness and objectivity:

  • Randomized Review Assignments: Reviewr can randomly assign applications to reviewers. This method helps prevent any preconceived notions or biases that reviewers might have towards certain candidates, promoting a more equitable evaluation process. There are many ways to structure a review and selection workflow that still rely on a form of randomization, even if the workflow consists of phases, committees, categories, etc.

    • The key component of randomized submission-to-evaluator pairing is to first identify the workload capacity of reviewers, and from there determine how many times each individual submission must be reviewed to get a proper gauge of application quality. For example, we may decide that reviewers should not evaluate more than 20 applications due to time constraints, while each applicant must still be reviewed by 5 different evaluators. Within Reviewr, we simply enter these metrics and the system auto-generates the assignments (see the sketch after this list).

  • Anonymization of Submissions: To further prevent bias, Reviewr can redact any identifying information from applications. This ensures that evaluations are based solely on the merits of the content and the qualifications of the candidates, not on any personal or demographic characteristics. Information that is not relevant to the review and selection process, such as data collected for record-keeping purposes, should also be redacted even if it is not PII, to reduce the burden on the selection team and ensure a seamless experience.

  • Transparency in Processes: The organization provides clear guidelines and criteria for evaluations, which are openly communicated to all participants. This transparency helps build trust in the fairness of the process and ensures that all candidates understand how decisions are made.

These measures are designed to create an environment where all candidates have an equal opportunity to be evaluated fairly, based on their abilities and achievements without the influence of bias or favoritism.
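
For the randomized pairing described under the first bullet, here is a minimal sketch using the webinar's example numbers (at most 20 reviews per judge, 5 reviews per applicant). The greedy strategy and function names are an illustrative assumption, not Reviewr's actual algorithm:

```python
import random
from collections import defaultdict


def assign_reviews(applicants, reviewers, reviews_per_applicant, max_per_reviewer):
    """Randomly pair applicants with reviewers so every applicant receives
    `reviews_per_applicant` evaluations and no reviewer exceeds `max_per_reviewer`."""
    if reviews_per_applicant > len(reviewers):
        raise ValueError("Need at least as many reviewers as reviews per applicant.")
    if len(applicants) * reviews_per_applicant > len(reviewers) * max_per_reviewer:
        raise ValueError("Total workload exceeds combined reviewer capacity.")

    load = defaultdict(int)    # reviewer -> number of assigned reviews
    assignments = {}           # applicant -> list of reviewers

    order = list(applicants)
    random.shuffle(order)      # randomize processing order
    for applicant in order:
        # Prefer the least-loaded reviewers, breaking ties randomly, so the
        # capacity constraint stays satisfiable for later applicants.
        candidates = sorted(reviewers, key=lambda r: (load[r], random.random()))
        chosen = [r for r in candidates if load[r] < max_per_reviewer]
        chosen = chosen[:reviews_per_applicant]
        if len(chosen) < reviews_per_applicant:
            raise ValueError("Constraints unsatisfiable; relax the workload limits.")
        for r in chosen:
            load[r] += 1
        assignments[applicant] = chosen
    return assignments


# Webinar example: 5 reviews per applicant, no judge reviews more than 20.
pairs = assign_reviews([f"applicant{i}" for i in range(40)],
                       [f"judge{j}" for j in range(12)],
                       reviews_per_applicant=5, max_per_reviewer=20)
```

The up-front feasibility checks mirror the webinar's point: decide reviewer capacity and reviews-per-applicant first, and the assignments follow from those two numbers.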

 

The approach to broadening evaluation criteria for awards, grants, scholarships, calls for speakers, abstracts, and more is designed to recognize the variety of applicant qualities and achievements beyond traditional academic or professional metrics. This inclusive evaluation strategy uses a range of criteria that reflect the diverse strengths and potential of applicants, and those criteria should align directly with the mission and vision of your organization. There are key characteristics you are looking for in the ideal candidate, and these are the items to measure against:

Incorporation of Diverse Evaluation Metrics

  • Beyond Academics and Professional Achievements: Criteria now include evaluations of leadership qualities, community involvement, innovative thinking, and resilience, among others.

  • Cultural and Contextual Relevance: Adjusting criteria to reflect the specific cultural, social, and economic contexts of applicants, recognizing that excellence and potential can be conveyed differently across different settings.

Customized Weighting of Criteria

  • Flexible Weighting System: Criteria can be weighted differently depending on their relevance to the specific goals of the program (see the sketch below).

    • For example, for leadership-focused awards, leadership experiences and potential might be weighted more heavily.

  • Adaptation: The criteria and their weightings are regularly reviewed and adapted based on feedback from stakeholders and evolving program goals.
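
A minimal sketch of flexible weighting, assuming hypothetical criterion names; weights are normalized so raising one emphasis does not require rescaling the rest:

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-criterion scores using program-specific weights (normalized to sum to 1)."""
    total = sum(weights.values())
    return sum(scores[criterion] * w / total for criterion, w in weights.items())


# Hypothetical leadership-focused award: leadership counts twice as much.
weights = {"academics": 1.0, "leadership": 2.0, "community": 1.0, "innovation": 1.0}
scores = {"academics": 7, "leadership": 9, "community": 6, "innovation": 8}
print(weighted_score(scores, weights))  # 7.8
```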

Scoring Models

  • Holistic Scoring: Instead of strict numerical scoring, a more narrative-based evaluation may be used to capture the nuances of each candidate’s contributions and potential impacts.

  • Feedback-Driven Improvements: Ongoing adjustments to scoring models based on evaluator feedback and participant outcomes help ensure that the evaluation process remains relevant and effective.

By broadening the evaluation criteria, programs can more accurately and fairly assess the true breadth of talents and potentials, ensuring that candidates are selected not only for what they have achieved but also for what they can achieve.

 

Trend 6: Normalize results and identify judging trends

View/Watch Trend

To ensure fairness across different evaluators who may have varying standards of strictness, normalizing results is a critical process. This involves adjusting scores to a common scale and calibrating the assessments across different reviewers. This process reduces the impact of biases and ensures consistent evaluations, no matter who conducts them.

Let us tell you a story. Year after year, an Ivy League university (which we will not name) had evaluators question why particular candidates were not selected. Those who reviewed the applicants thought they were strong fits, in some cases among the highest they had evaluated. For the sake of privacy, let's call one of these evaluators Bob. Bob, who never scores an applicant higher than 15, evaluated a particular candidate as a 14. Bob was later shocked to hear that this applicant was not selected, as it was the highest-rated applicant Bob had scored. Looking into the data, it was identified that because not all applicants were evaluated by all reviewers, some evaluators were consistently scoring people as 25s. While that particular candidate was scored "highly" by Bob's standards, the score was actually quite low compared to the standards of other evaluators. Herein lies the problem: how do we not only identify judging trends for particular evaluators, but also take those trends into account when generating results?

  • Standardization Techniques: Implementing statistical methods to adjust scores based on the average stringency or leniency of reviewers. Reviewr's normalization report identifies each judge's average score across all the applicants they review, then compares that overall average against a particular candidate's score to see whether that applicant scored higher or lower than how the judge normally scores. We can then create an "average" baseline and use it for comparison across other judges and applicants (a minimal sketch follows this list).

  • Calibration Sessions: Regular meetings where reviewers discuss and align on scoring standards to ensure consistency in how evaluation criteria are applied.

  • Feedback Loops: Incorporating feedback from both reviewers and candidates to continuously refine the scoring and normalization processes.
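
As a minimal sketch of the per-judge normalization idea, z-scores (one common standardization technique; the webinar does not specify Reviewr's exact formula) put every judge's scores on a comparable scale:

```python
from statistics import mean, stdev


def normalize_scores(raw: dict[str, dict[str, float]]) -> dict[str, float]:
    """Convert each judge's raw scores to z-scores relative to that judge's own
    scoring history, then average across judges per applicant.

    `raw` maps judge -> {applicant: score}. On this scale, Bob's 14 (against his
    usual ceiling of 15) can rank above another judge's habitual 25s.
    """
    per_applicant: dict[str, list[float]] = {}
    for judge, scores in raw.items():
        mu = mean(scores.values())
        sigma = stdev(scores.values()) if len(scores) > 1 else 1.0
        sigma = sigma or 1.0  # guard: a judge who gives identical scores
        for applicant, s in scores.items():
            per_applicant.setdefault(applicant, []).append((s - mu) / sigma)
    return {a: mean(zs) for a, zs in per_applicant.items()}
```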

This normalization is vital for maintaining the integrity of the evaluation process, ensuring that all candidates are judged fairly and equitably, irrespective of which reviewers assess their applications.

 

Trend 7: Measuring and reporting on impact

View/Watch Trend

Measuring the impact of programs such as scholarships, grants, and awards is crucial for understanding their effectiveness and for reporting to stakeholders. This involves a systematic approach to collecting and analyzing data to assess whether the program’s objectives are being met. Key aspects of this process include:

  • Long-Term Tracking: Monitoring the progress of recipients over time to evaluate the long-term impact of the program. This helps in understanding how effectively the program contributes to the career or educational advancements of its beneficiaries.

  • Outcome-Based Metrics: Establishing specific, measurable outcomes that the program aims to achieve. These could include academic success, professional achievements, community impact, or other relevant metrics depending on the program’s goals.

  • Regular Reporting: Providing periodic reports to stakeholders, which detail the outcomes and successes of the program. This transparency is essential for maintaining trust and support from donors, participants, and the public.

  • Feedback Collection: Gathering insights from participants and other stakeholders through surveys, interviews, and other feedback mechanisms. This information is invaluable for refining the program and addressing any areas where it may not be meeting expectations.

Ultimately, impact and grant reporting is not the challenge; collecting the proper data from applicants to generate these reports is. It's critical to collect this data in a timely, seamless manner so it can be easily recalled when generating reports.

Example grant reporting template for applicants (a structured sketch follows the list):

  • Defined time period update

  • Funded budget vs actual fund utilization

  • Key successes

  • Key challenges

  • Stated goals vs actual outcomes

  • Explanation of discrepancies

  • What can be improved moving forward

  • Applicant target metrics

    • Targeted vs realized

    • Explanation of discrepancies

  • Applicant qualitative impact

  • Applicant quantitative impact

  • Project plan moving forward
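
Structured along the lines of the template above, a report record with a simple budget-variance check could look like this (illustrative field names, not Reviewr's schema):

```python
from dataclasses import dataclass


@dataclass
class GrantReport:
    period: str                      # defined time period for the update
    funded_budget: float
    actual_spend: float
    stated_goals: list[str]
    actual_outcomes: list[str]
    key_successes: str
    key_challenges: str
    discrepancy_explanation: str

    def budget_variance(self) -> float:
        """Positive = underspend, negative = overspend, as a share of the award."""
        return (self.funded_budget - self.actual_spend) / self.funded_budget
```

Collecting reports in a consistent structure like this is what makes them easy to recall and aggregate when generating impact reports later.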

Effective impact measurement not only demonstrates the value of the program but also provides essential data that can be used to improve future iterations of the program, ensuring that it continues to meet the needs of its participants and achieves its intended goals.

 

Trend 8 & 9: Transparency and data security

View/Watch Trend

Enhancing transparency in the process of selection and evaluation is key to building trust among participants and stakeholders. Transparency involves clear communication about the processes, criteria, and decisions made during the evaluation and selection phases. Here’s how transparency is integrated:

  • Clear Criteria and Processes: Detailed documentation of all criteria and processes used in the selection and evaluation stages is made available to all participants. This ensures that everyone understands the basis on which decisions are made.

  • Accessibility of Information: Making information about the process easily accessible to all candidates, including potential applicants and unsuccessful candidates, helps demystify the process and reduces perceptions of bias or unfairness.

  • Real-Time Updates: Providing participants with real-time updates on their status within the program, as well as any changes to the process or criteria. This ongoing communication helps keep participants informed and engaged.

  • Post-Selection Feedback: Offering detailed feedback to participants after the selection process, especially to those who were not selected. This can provide valuable insights into areas for improvement and encourage future participation.

  • Build community trust and create an environment for self development and growth.

    • Did we follow the Proof of Process (POP)?

      • Sets the stage to ensure a fair, non-biased, trustworthy, and compliant program

      • A public-facing statement that outlines the entire submission, application, and selection process, and that can ultimately be mapped back to show the process was followed.

      • Elements of a Proof of Process (POP) (a structured sketch follows this list)

        • Dates and deadlines

        • Outline of the submission process

        • Outline of the review and selection process

        • Post selection process

    • If an applicant was not selected, are we prepared to prove why?
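
One way to make a POP auditable is to capture its elements in a structured record, so each decision can be checked against the published process. This is a hypothetical sketch, not a Reviewr feature:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ProofOfProcess:
    open_date: date                  # dates and deadlines
    deadline: date
    submission_process: str          # outline of the submission process
    review_process: str              # outline of the review and selection process
    post_selection_process: str      # feedback, appeals, announcements

    def within_published_window(self, event: date) -> bool:
        """Check that a program event falls inside the published dates, so each
        decision can be mapped back to the stated process."""
        return self.open_date <= event <= self.deadline
```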

By enhancing transparency, the organization not only fosters a fairer environment but also builds a positive reputation, encouraging more individuals to participate confidently in the program.

Maintaining rigorous data security and privacy standards is essential, especially as programs handle sensitive personal information of participants. Here’s how data security and privacy are addressed:

  • These programs are critical to your organizations and, for many applicants, are life-changing opportunities.

      • They need to be treated as such.

  • Lack of security control or transparency in the Proof of Process can be a deterrent to potential applicants.

  • Compliance with global standards

      • In an ever-changing and dynamic environment, compliance standards must be constantly monitored. 

        • GDPR, FERPA, the California Consumer Privacy Act (CCPA), etc.

      • Leave this to the experts (Reviewr).

    • SOC 2 Type 2 Certification

      • Industry gold standard for data security and privacy 

      • While particular tools may be compliant on their own (form builders, association management software, etc.), they may not be when combined to power a program they were not intended for.

  • This is why it's critical to leverage dedicated award, grant, scholarship, and abstract management software.

  • Transparency in Data Usage: Clearly communicating to participants how their data will be used, who will have access to it, and under what circumstances. Participants should also be informed about their rights regarding their data, including access, correction, and deletion rights.

  • Regular Security Audits: Conducting regular security audits and penetration testing to identify and address vulnerabilities. This proactive approach ensures that the organization remains ahead of potential security threats.

Data security and privacy are critical not only for compliance and protecting participants but also for maintaining the trust and integrity of the program. These measures ensure that participants feel secure in providing their information, knowing that it will be handled with the utmost care and confidentiality.
