Last month, Svetlana Negroustoueva, the lead of the evaluation function at CGIAR, visited Washington, DC, to attend the 4th Conference on Evaluating Environment and Development, organized by the Independent Evaluation Office (IEO) of the Global Environment Facility (GEF). While there, she took the opportunity to conduct in-person key informant interviews with staff of the International Food Policy Research Institute (IFPRI), one of CGIAR’s research centers, as part of a series of evaluations of science groups. Science journalist Alan Dove interviewed Svetlana about her trip; below is the edited transcript of their discussion.

Alan Dove (AD): What was the purpose of your trip to Washington, DC?

Svetlana Negroustoueva (SN): There was a dual purpose. First, it was to present and engage at the Conference on Evaluating Environment and Development, a very high-level global event for the evaluation profession organized by the Global Environment Facility (GEF) with partners. The conference was both a thematic and a professional one, covering environment, climate change, and related topics.

The second reason for my trip was to conduct in-person data collection for an ongoing real-time cluster evaluation of CGIAR science groups (ToRs). This took place at IFPRI, one of the CGIAR centers, which is headquartered in Washington, DC.

AD: Let’s start with the conference. What did you learn, and what were the main highlights for you?

SN: While it was my third time attending this conference, it was the first time that CGIAR’s evaluation function was actively represented there. I was honored to showcase our experience alongside other colleagues in two different but insightful sessions: one on the role of science in evaluation, and one on inclusion.

The first panel, on ‘Science Informing Evaluation’, was an opportunity to dive into the core of what we do, and to talk about how we frame and evaluate agricultural research-for-development (AR4D) interventions in CGIAR. We highlighted the continued importance of experimenting with, and bringing to light, new approaches to evaluating the nexus of environment and climate change. That followed the spirit of the conference because it brought together the social sciences and the biophysical and natural sciences. Evaluating that nexus requires different, systematic approaches that vary by geographic and thematic context. The nexus sits at the heart of R4D, and our independent evaluation function considers both what gets evaluated and how.

Participants of the first panel on ‘Science Informing Evaluation’. Svetlana Negroustoueva, IAES Evaluation Function Lead, farthest right. Photo: IAES Evaluation Function.

In 2023, the UN Secretary-General announced the creation of an independent Scientific Advisory Board to advise UN leaders on breakthroughs in science and technology, and on how to harness the benefits of these advances while mitigating potential risks. The Secretary-General recently stated that “Scientific and technological progress can support efforts to achieve the Sustainable Development Goals”. In the foyer of the conference room, I presented a poster on evaluating quality of science (QoS) and had very insightful exchanges with evaluation peers who were keen to learn more about our method.

Since 2020, our evaluations of the evolving CGIAR research portfolio have ranged from the Water, Lands and Environment CRP (resource) to the ongoing evaluation of the three science groups. In the CGIAR-wide Evaluation Policy, we included a designated Quality of Science evaluation criterion to single out elements of the legitimacy of how science is conducted, delivered, and scaled, and to assess the credibility of scientific outputs. Geeta Batra, the IEO Director, wrote a reflection on our session on how science can inform evaluation. The EvalSDG insight #18 on the topic is forthcoming.

AD: Bringing together social and natural sciences like that can be tough because of their inherent differences, right?

Svetlana Negroustoueva presenting CGIAR to stakeholders. Photo: IAES Evaluation Function. 

SN: CGIAR is an integrated research partnership with a mission to deliver science and innovation that advance the transformation of food, land, and water systems in a climate crisis. Part of the tension may stem from one type of science not recognizing the value of the other. But research for development (R4D) necessitates the full continuum of interdisciplinary science. In our context, there is an unspoken perception that social scientists are not “true” or “real” scientists compared to those who work in the biological and natural sciences. But that’s changing. In our Synthesis of learning from a decade of CGIAR research programs (2021), which drew on 44 evaluations, the need to integrate social science into the design and implementation of the R4D portfolio was strongly highlighted. Our evaluation teams are led by evaluation professionals, most often social scientists, with team members (subject-matter experts) covering each topic, to enhance the credibility of results and the framing of recommendations in our AR4D context.

Related to social science, in the second panel, on ‘Inclusion’, we featured the evaluation of CGIAR’s GENDER Platform. Our value-add to the session was in bringing together three sides of the evaluation: the managers (IAES), the peer reviewers (ERG), and representatives of management (who develop and implement the Management Response). Frank Place of IFPRI, a CGIAR center, highlighted the value of including the evaluand in the evaluation, in order to arrive at actionable recommendations, and discussed how that is being implemented.

AD: Were there other major themes that came out of the conference?

SN: In the discussion on evaluation methodologies to increase rigor, we highlighted the recent use of Social Network Analysis (SNA) as part of the evaluation of the CGIAR Genebank Platform. SNA quantifies the collaborative efforts of various stakeholders to achieve shared objectives. It can enhance evaluation processes by capturing and visualizing the nuances of relationships in an intervention or program: revealing insights into collaboration dynamics, identifying strong connections, and pinpointing gaps where interactions could be improved or established, thus boosting a group’s overall effectiveness.
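To make that concrete, here is a minimal sketch of the kind of analysis SNA involves, using Python’s networkx library. The stakeholder names and collaboration links are hypothetical, purely for illustration; they do not come from the Genebank Platform evaluation itself.

```python
# Hypothetical SNA sketch with networkx (not the actual data or
# method from the Genebank Platform evaluation).
import networkx as nx

# Nodes are stakeholders; edges are observed collaborations.
G = nx.Graph()
G.add_edges_from([
    ("Genebank", "Center A"),
    ("Genebank", "Center B"),
    ("Center A", "Partner NGO"),
    ("Center B", "Partner NGO"),
    ("Donor", "Genebank"),
])

# Degree centrality surfaces the most connected actors (strong ties).
for node, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")

# Stakeholder pairs with no direct link point to gaps where
# interactions could be established or improved.
print("Potential new connections:", list(nx.non_edges(G)))
```

In a real evaluation, the edges would come from interview or survey data, and centrality and gap measures would feed into the visualizations and findings described above.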

We also discussed artificial intelligence and what to do with it in the context of science, research, and evaluation, especially at the nexus of the various sciences. That discussion took place in one of the sessions on the role of communities of practice at the nexus, which bring together professionals with shared interests and expertise in certain topics for the greater good.

Another important theme, timely for our current priorities, was the urgency of and need for evaluating right here and right now, in real time. Learning needs to be cyclical, and it takes effort and strategy to work out how best to use evaluative evidence for learning. So rather than waiting until everything is done to evaluate it, you need to evaluate in real time to provide feedback.

AD: Besides attending the evaluation conference, you also met with the team for the Science Groups evaluation, similar to Ibtissem Jouini’s recent trip to Kenya. What did you find?

SN: Indeed, I was joined by the evaluation team leader and one of the subject-matter experts to conduct data collection at IFPRI. This was a formal data collection exercise as part of the cluster evaluations of CGIAR’s science groups, as opposed to the scoping work Ibtissem did, so the two were complementary but different types of exercises. Ours was targeted more specifically at the Systems Transformation science group.

Echoing the learning from Kenya, a recurring theme in the IFPRI interviews was that change can be scary, especially with limited and inconsistent guidance and communication around what’s coming next. There was also an appreciation for the role of CGIAR centers: the energy comes from the centers and their leadership, and that’s important. Interviewees also appreciated the growing volume of collaborations with other initiatives and partners, which inspired them to look for more, and different, partners.

Interviewees praised renewed empowerment and inclusion: namely, that the initiatives’ lead and co-lead structure has brought more women and more young scientists into leadership positions. They also noted the importance of sequencing R4D activities, and that risk mitigation should take place before initiatives begin. For example, when you assess a risk based on evidence, you familiarize yourself with lessons from previous and ongoing work; with that information in mind, whatever comes out is likely to be more coherent and to avoid repeating past mistakes. We will be sharing more in the coming weeks and months on what we learn from evaluating the science groups and from our work around QoS, so follow IAES on social media.