In this week’s episode, host Kristin Hayes talks with Priya Donti, cofounder and executive director of Climate Change AI, a nonprofit that works at the intersection of climate change and machine learning. Donti discusses various types of artificial intelligence, the applications of artificial intelligence in the energy transition and climate policymaking, and the importance of interdisciplinary collaboration in the ethical development and implementation of artificial intelligence.
Listen to the Podcast
Notable Quotes
- Defining artificial intelligence (AI): “Artificial intelligence … is basically any computational algorithm that performs some kind of complex task, and this refers to both symbolic techniques—you write down a set of rules and get an algorithm to automatically reason over them—and to data-driven techniques, where you have a large amount of data, and you’re trying to automatically extract patterns and insights from that in order to then apply it.” (4:35)
- Artificial intelligence is one tool in the climate toolbox: “Tackling climate change is an all-hands-on-deck effort, and it’s going to require the employment of all of the tools and approaches we have across society, and that includes AI and machine learning. AI will be essential in some use cases … [and] helpful in accelerating timelines in some cases. For example, it’s used to accelerate the process of scientific experimentation for clean technologies like batteries.” (10:17)
- Effective implementation of artificial intelligence requires engagement with all stakeholders: “Doing this work right requires deep collaboration across different domains of expertise—AI, climate, operationalization, policy, social science.” (23:11)
Top of the Stack
- “Putting the ‘Smarts’ into the Smart Grid: a Grand Challenge for Artificial Intelligence” by Sarvapali D. Ramchurn, Perukrishnen Vytelingum, Alex Rogers, and Nicholas R. Jennings
- “Tackling Climate Change with Machine Learning” by David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Sasha Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla P. Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, and Yoshua Bengio
- Climate Change and Artificial Intelligence: Recommendations for Government Action by Peter Clutton-Brock, David Rolnick, Priya L. Donti, Lynn H. Kaack, Tegan Maharaj, Alexandra (Sasha) Luccioni, and Hari Prasanna Das
- “Environmental Justice in the Age of Big Data: Challenging Toxic Blind Spots of Voice, Speed, and Expertise” by Alice Mah
The Full Transcript
Kristin Hayes: Hello and welcome to Resources Radio, a weekly podcast from Resources for the Future. I’m your host, Kristin Hayes. My guest today is Dr. Priya Donti, cofounder and executive director of Climate Change AI, which is a global nonprofit initiative to catalyze impactful work at the intersection of climate change and machine learning.
Priya earned her PhD in 2022 in both the Computer Science Department and the Department of Engineering and Public Policy at Carnegie Mellon University, and she’ll join the faculty of MIT’s Electrical Engineering and Computer Science Department this fall.
I love talking with all of our Resources Radio guests, but I'll admit I've been particularly longing for a conversation on this topic—that is, artificial intelligence (AI) and climate change—for some time now, and I am pretty darn excited about this opportunity to get a few of my questions answered. And I do have many questions, given what I would consider the vast promise and peril of artificial intelligence.
Priya and I will talk terminology, algorithms, and some of the really incredible ways in which machine learning is being applied to climate challenges. Stay with us.
Hi, Priya. It’s really great to talk with you.
Priya Donti: Thanks for having me on.
Kristin Hayes: We always like to start our conversations with what I would consider a more fulsome introduction to our guests, so let me ask a little bit about you and your background and what drew you to working at this intersection of AI and climate change. I’m really curious how those interests came together in the form of this new Climate Change AI nonprofit, as well.
Priya Donti: I’ve been working on the topic of AI and climate change since 2016 or so, but my interests started way earlier than that. In my first week of high school, I had this amazing teacher who put aside the first few weeks of our biology class for a climate and sustainability curriculum. Within that, we learned about the fact that climate change is fundamentally a really human issue and one that’s disproportionately going to affect the world’s most disadvantaged populations. That really resonated with me and made me want to dedicate my life to working on this.
But I didn’t actually know how I would do that. I graduated from high school, went to undergrad, and fell in love with computer science. At the time, this was a bit of a conundrum for me, because I didn’t really understand if and how computer science could play a role in addressing climate change. But luckily, toward the end of undergrad, I stumbled upon this paper called “Putting the ‘Smarts’ into the Smart Grid,” which talked about how AI and machine learning would actually be really critical to helping us create next-generation power grids that could integrate renewables. I was hooked. I ended up getting a fellowship to travel for a year and interview people about power grids before starting my PhD working on this topic.
A couple of years into my PhD, I met several like-minded people who similarly wanted to employ their AI and machine learning skills for climate action—some coming from the AI and machine learning side and some coming from the climate side. We realized that there was huge potential to mobilize the broader community around accelerating climate action through the use of AI. That’s how Climate Change AI was born as a nonprofit.
Kristin Hayes: Amazing. That’s great. Thank you for that introduction.
And, oh boy, as I’ve mentioned, I think I’m in the territory of perhaps overly excited for the opportunity to talk to you about this. I always do like to start with some contextual questions, but I think that’s especially important in this case, because I think terminology can be a challenge in this complicated and very rapidly evolving space.
So I would love to start with just a little bit of terminology review and maybe in particular how to think about the terms “artificial intelligence” or “AI” as compared to “machine learning.” Then, also, are there other terms out there in the world in which you work that our listeners should have in mind as we start our conversation?
Priya Donti: Artificial intelligence really refers to any computational algorithm that can perform a complex task. These are often tasks we associate with human intelligence in some way—speech or perception or reasoning—but not always. Forecasting solar power, for example, is not something that I, at least, personally do in my free time.
Artificial intelligence, though, is basically any computational algorithm that performs some kind of complex task, and this refers to both symbolic techniques—you write down a set of rules and get an algorithm to automatically reason over them—and also to data-driven techniques, where you have a large amount of data and you’re trying to automatically extract patterns and insights from that in order to then apply it. When people talk about machine learning, they’re talking about this second type of AI, where you’re learning automated insights from large amounts of data and applying them in some way.
Now, within machine learning, one term that often comes up is “deep learning.” Deep learning basically refers to a kind of machine learning where you’re using a specific kind of model called a neural network in order to actually draw these automated insights from data.
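To make the data-driven flavor concrete, here is a minimal, purely illustrative sketch—not from the episode—of a tiny neural network extracting a pattern (a sine curve) from noisy data using plain Python and NumPy. The data, model size, and learning rate are all toy assumptions.

```python
# Minimal sketch of "learning from data": a one-hidden-layer neural network
# fit by gradient descent. Everything here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the hidden "pattern" is a sine curve under noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal((200, 1))

# One hidden layer of 16 tanh units -- a (very small) neural network.
W1 = 0.5 * rng.standard_normal((1, 16)); b1 = np.zeros((1, 16))
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros((1, 1))

lr = 0.05
for step in range(2000):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # Backward pass: gradients of the mean squared error.
    n = len(X)
    dW2 = h.T @ err / n
    db2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = X.T @ dh / n
    db1 = dh.mean(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final mean squared error:", float((err**2).mean()))
```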
Kristin Hayes: Okay. Another term that seems to sort of float around in the data science world is “big data.” It certainly seems like volumes of data are really important for any of the terms that we’re talking about here. Is that a fair assessment, that this is really about working with large-scale data sets—that that’s really where the benefits of artificial intelligence and machine learning come in?
Priya Donti: Yeah, primarily you want settings where you have a large volume of data, though the meaning of "large" can be subjective, depending on the use case that you're looking at, and you want high-quality data. The quality of the insight you're going to get from a machine learning algorithm is limited by the quality of the data that you're feeding in, so that quality is really important.
There's also some ongoing work that's trying to figure out, even in settings where you don't have large data—maybe you have medium data, but you have some other kinds of knowledge or insights; for example, knowledge about the physics of your power grid—how you can actually merge those two types of knowledge so you can learn both from the medium data and from the existing physical knowledge you might already have.
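As a hedged illustration of that idea—learning jointly from limited data and known physics—here is a toy sketch in which the physical knowledge enters as a penalty term in the loss. The power-line example, parameter names, and numbers are all hypothetical, not drawn from any project mentioned here.

```python
# Hypothetical sketch of physics-informed learning: blend a data-fit term
# with a penalty encoding prior physical knowledge.
import numpy as np

rng = np.random.default_rng(1)

# Suppose physics says line losses grow quadratically with current
# (P = R * I^2), but we only have a few noisy measurements ("medium data")
# and we do not know the resistance R.
I = rng.uniform(0.5, 2.0, size=8)                 # measured currents
P = 0.7 * I**2 + 0.05 * rng.standard_normal(8)    # noisy measured losses

# Model: P_hat = a*I + b*I^2. Physics says the linear term should be ~0,
# so we penalize a -- that is the physical knowledge entering the loss.
a, b = 0.0, 0.0
lr, lam = 0.02, 5.0   # lam weights the physics penalty
for _ in range(5000):
    resid = a * I + b * I**2 - P
    grad_a = 2 * (resid * I).mean() + 2 * lam * a   # data term + physics prior
    grad_b = 2 * (resid * I**2).mean()
    a -= lr * grad_a
    b -= lr * grad_b

print(f"learned a={a:.3f} (physics says ~0), b={b:.3f} (true R was 0.7)")
```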
Kristin Hayes: Okay, thanks. Well, one more contextual question here around ChatGPT. I think a lot of folks—and I'll admit I'm very much included—only really started focusing on AI developments with the public release of ChatGPT last November. We sort of talk about ChatGPT as if it's all of artificial intelligence, but it's clearly not; it's clearly one type. Can you say a little bit more about what type of AI or machine learning ChatGPT is? I'm curious for your opinion, too, on why you think ChatGPT really exploded into the public consciousness in a way that all of the other uses of AI that had been around us for many more years maybe didn't.
Priya Donti: That’s a great question. ChatGPT is a kind of AI called generative AI. This is a type of AI that analyzes data to learn some kind of distribution or pattern or structure of that data, and then it generates outputs that are meant to seem similar to the data it was trained on. In this case, ChatGPT was trained on text data from the internet, and it then tries to produce answers that replicate, in some way, the structure and pattern in that underlying data.
I think it's really exploded into the public consciousness because of the user interface. Maybe one way to think about this is to draw an analogy to the early internet. The early origins of the internet existed in the 1960s with the first computer networks. By the late 1980s, you were able to send messages or files or read articles, but you still needed to be an expert in how computers and computing worked in order to do that. But when the first web browsers launched in the early '90s, all of a sudden you didn't need that deep technical expertise for the internet to be really tangible and usable. I think that the ChatGPT interface—the interface in front of the GPT model—is what has enabled this to be really tangible and accessible to a lot of people.
But, as you emphasized in your question, I think it’s worth noting that this is one type of AI—it’s not the entirety of AI. I really hope that this leap into the public consciousness serves as an entry point to not necessarily just hopping on the bandwagon of this one kind of AI, but using that as an opportunity to really learn about what AI is more broadly and what its strengths, limitations, and risks are and to really use it in a principled way across many of our applications.
Kristin Hayes: Yeah, that’s great. I’m sure we’ll come back to these terms throughout our conversation, so it’s great to have this baseline.
But let’s turn more explicitly to this question of that intersection between AI and climate change. You’ve already hinted at this, but I want to ask you all the same. As I began to poke around on this topic, one of the very first blogs that I came across was from the Boston Consulting Group, and it had what I would characterize as a rather provocative headline of—and I’m going to quote here—“AI is Essential for Solving the Climate Crisis.” I found that fascinating. They didn’t say useful, they said essential, which is a pretty strong claim.
I wanted to start by getting your overall sense of the importance of AI for tackling climate change. Where are you in that understanding of how essential this might be to solving an issue as complicated—and frankly as data-driven—as climate change?
Priya Donti: What I'd say is that tackling climate change is an all-hands-on-deck effort, and it's going to require the employment of all of the tools and approaches we have across society, and that includes AI and machine learning. AI will be essential in some use cases. Arguably, it's hard to imagine how we're going to optimize an increasingly renewable and distributed power grid without AI and similar data-driven techniques. AI is going to be helpful in accelerating timelines in some cases. For example, it's used to accelerate the process of scientific experimentation for clean technologies like batteries.
There are other places where AI will be a force multiplier, some places where it'll provide a small assist, and some where it's not useful at all. I would say that AI is essential, but in the same way that policy, engineering, and other tools and forms of action are essential: for any of these, we really need to understand where they're needed and do our best to get those use cases off the ground.
Kristin Hayes: Okay. Well, that’s great, and I think that’s kind of an important grounding for the rest of our conversation, too, and we’ll come back to some additional questions on that. But we will now spend the next 45—I’m just kidding, we’re not going to spend 45 minutes, but I really would love to give you some time to talk through some of the promising applications of AI to climate-related challenges that you’ve seen. It was a great grounding, again, that you just gave—it’s not for everything, but it seems every day that something new is announced about some sort of promising application. I think you’re probably as versed in these as anybody, so can you just highlight a few examples for our listeners that might show both the range and scale of some of these solutions that are being proposed?
Priya Donti: Absolutely. I want to talk about a couple of different themes where AI and machine learning can be used and then give a few examples. One of these may be unsurprising—it’s improving predictions by analyzing past data in order to provide some kind of foresight.
For example, the nonprofit Open Climate Fix has been working with the UK power system operator, National Grid Electricity System Operator, to implement deep learning–based forecasts of electricity supply and demand. They were actually able to cut the error of the previous electricity demand forecasts in half by using deep learning to take in a combination of historical demand data, weather data, and even video data of clouds moving overhead—just different streams of data—and to use all of that to come up with a more nuanced forecast.
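As a toy sketch of that general recipe—and emphatically not Open Climate Fix's actual system—one can stack several data streams into a single feature matrix and let a small neural network learn a next-hour demand forecast. All of the data below is simulated, and the feature choices are assumptions for illustration.

```python
# Toy multi-stream demand forecasting with scikit-learn (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
hours = np.arange(24 * 365)

# Synthetic stand-ins for the real data streams.
demand = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)
temperature = 15 + 8 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 1, hours.size)
cloud_cover = rng.uniform(0, 1, hours.size)  # e.g., features distilled from sky video

# Features at hour t predict demand at hour t+1.
X = np.column_stack([demand[:-1], temperature[:-1], cloud_cover[:-1],
                     np.sin(2 * np.pi * hours[:-1] / 24)])
y = demand[1:]

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X[:-500], y[:-500])  # hold out the last ~3 weeks for evaluation
print("held-out MAE:", np.abs(model.predict(X[-500:]) - y[-500:]).mean())
```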
Similarly, there’s a company in Kenya called Selina Wamucii, and they developed an AI-based tool called Kuzi, which provides early warnings on locust outbreaks by, again, analyzing many different types of data—agricultural data, weather data, and satellite data—and gleaning insight from that in order to understand where there might be an increase in climate-induced locust outbreaks. That’s one theme: forecasting.
Another one is distilling large, unstructured data sources into actionable information. For example, the MAAP project provides a real-time picture of deforestation in the Amazon by analyzing satellite imagery and flagging where deforestation might be occurring, which then enables interventions to stop it. Similarly, in the public sector, the UN Satellite Centre, UNOSAT, has used AI to do high-frequency flood reporting—analyzing satellite imagery in an automated manner and giving daily updates on how flooding is evolving—which has improved disaster response in Asia and Africa.
A third theme is that AI can optimize complex, real-world systems to improve their efficiency. For example, there are a number of companies that are using AI to optimize the heating and cooling systems in buildings in ways that have shown reductions, depending on the numbers you get from different companies, of 10 percent to 30 percent in energy use. Similarly, power system operators and researchers are working to improve the readiness of AI for optimizing power grids to help us integrate renewables. For example, RTE France runs this machine learning challenge called “Learning to Run a Power Network,” which validates the use of reinforcement-learning techniques for this.
I guess the last theme I’ll talk about is accelerating the process of scientific discovery. The startup Aionics, for example, has helped battery manufacturers cut down design times by a factor of 10 by analyzing the outcomes of past experiments to suggest promising future ones.
These are just a few examples, and, for listeners who are interested, I'd encourage you to check out the paper "Tackling Climate Change with Machine Learning" that we wrote at the launch of Climate Change AI. It tries to provide an overview and research agenda for how AI and machine learning can play a role. AI really has applications across many climate change mitigation sectors—power, transport, agriculture, and so forth—but also a lot of uses in helping us adapt to climate change and in strengthening tools like policy, finance, and education that support all of that work.
Kristin Hayes: Wow. Okay. Boy, I once again have a million follow-up questions, but let me try to limit myself here to two, if I could. That was fantastic. It’s great to hear about those applications.
Is there a summary of the ways that AI is being used, so that folks who don't want to reinvent the wheel—who essentially want to learn what other companies and countries are doing—can in fact understand those applications in a consolidated way and maybe build on the learning that's already happened out there? That's my first question: How do others who aren't actually involved in these individual developments learn about them, aside from wonderful podcast conversations like ours?
Then, a kind of parallel question is around where this expertise is coming from—how this artificial intelligence and machine learning expertise is in fact being embedded in these various other entities. Can you say a little bit more about either of those things?
Priya Donti: Yeah. In terms of discovery of projects, one way to learn more about these is at Climate Change AI. I referenced this report, “Tackling Climate Change with Machine Learning.” We also have an interactive-summaries version of it, where you can scroll through different use cases and applications, and we also run a workshop series at the machine learning conferences, where people can submit the work that they’re doing, and those are also available on our website, so you can scroll through there and get a sense of what the different applications are.
In terms of further aggregation, I guess we’re working on it. If you want to let us know about a use case that you’re working on or work with us on trying to consolidate this data—and in fact, make it easier for people to discover use cases and such—we’re really happy to talk to people about that.
In terms of the kinds of expertise, there's a combination needed. There's literacy—understanding where AI can play a role and having a mental model of the promises and the risks—but also implementation capacity to work with the data and deal with the algorithms, and social science capabilities to understand the broader context in which this algorithmic development is happening, which stakeholders need to be engaged, and how to audit and evaluate these kinds of algorithms. This expertise can either be built up in house, or, of course, many entities look to external consultancies and such, though this does create risks of capture, or of certain tech companies or consultancies with more of this expertise driving directions. So, some healthy mix of in-house and external capacity ends up being important.
Kristin Hayes: Okay, great. Hearing all that, I can tell that even as you approach this with what I would characterize as a fair amount of enthusiasm, you’ve also been very candid that AI isn’t a silver bullet. Both you and your colleagues have written that in various reports. I think that’s come across in your comments so far, too. Maybe we can talk about that for just a minute. It’s not a silver bullet. It shouldn’t be considered the solution to every single piece of the climate challenge. So, can you give me an example of where AI actually isn’t a good fit for solving a climate-related problem? Maybe that would help put this in context for our listeners, as well.
Priya Donti: Absolutely. I think there are a couple of places. One is places where there are fundamentally value-based judgments that need to be made. I’ve been pitched before on this use case of, “Well, what if we could just write down all of the data that we have in a country and use AI to make our public policies for us? That way, we just don’t have to do the work.”
It’s worth noting, first of all, that that’s not necessarily particularly realistic. But also, the input data that you put into an algorithm, the way you design it, has a lot of value judgments laden in it, and putting data together and running algorithms on top of it isn’t a way to avoid making those value judgments. It’s a way to pretend that you avoided making them. So, any place where there are really explicit value judgments that need to be made, there isn’t the ability, in general, to just automate away those value judgments. You have to lean into that when doing policymaking, for example.
Another kind of place where I think AI is often pitched as promising, but where the picture is maybe less optimistic than that, is causal inference. There are lots of situations where people will be like, “Oh, well, I implemented a policy. I want to understand what the causal factors were in making that policy work or not work. Can we throw AI at it?” But it’s worth noting that AI, and machine learning in particular, is statistics. So, all of the things that you usually hear about correlation being something you can capture with statistics more easily than causation apply. And so, there are legitimately good use cases of AI and machine learning within broader causal inference workflows, but it’s not going to solve your causal-inference problem right out the gate.
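A tiny simulation, invented purely for illustration, makes the point: a predictive fit happily reports a large "policy effect" that is entirely driven by a hidden confounder, which a proper causal workflow has to adjust for.

```python
# Correlation is not causation: a confounded policy evaluation in miniature.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

wealth = rng.normal(0, 1, n)                  # hidden confounder
policy = (wealth + rng.normal(0, 1, n)) > 0   # wealthier regions adopt the policy
outcome = 2.0 * wealth + rng.normal(0, 1, n)  # outcome driven by wealth alone

# A purely predictive comparison finds a strong "effect" of the policy...
naive = outcome[policy].mean() - outcome[~policy].mean()
print(f"naive policy 'effect': {naive:.2f} (true causal effect: 0)")

# ...which vanishes once the confounder is adjusted for (here via regression).
X = np.column_stack([np.ones(n), policy.astype(float), wealth])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"policy effect after adjusting for wealth: {coef[1]:.2f}")
```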
There are also places where sometimes what's needed is automation, but not necessarily AI. I've started to see some use cases recently where people are basically saying, "There isn't a way to easily gather certain kinds of well-structured data. Can we use GPT to generate that data for us?" But often in those situations, the correct solution is to scrape that data from trusted sources in an automated way using a web scraper. There are times when, I think, the interface on recent AI technologies causes them to be used in cases where some other type of automation is actually what's needed.
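For that kind of task, a plain web scraper—no AI involved—is often the right tool. Here is a minimal sketch; the URL and the table layout are hypothetical placeholders, not a real data source.

```python
# Minimal web scraper: pull a structured table from a trusted page.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.org/emissions-table", timeout=30)  # placeholder URL
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for tr in soup.select("table tr")[1:]:  # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if cells:
        rows.append(cells)

print(f"scraped {len(rows)} structured rows")
```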
And then, in general, there are some problems that involve small data, or that need manual analysis, or where AI is just part of a bigger strategy. It doesn't make sense to fully instrument your building with fancy optimization of heating and cooling systems if you're not going to insulate it first. So, I think just being cognizant of that bigger context is also important.
Kristin Hayes: Right. That’s great. I’m super glad to have that set of counterexamples too, because I do think that gives a pretty robust picture of both the uses and where it’s potentially not the most useful tool.
I want to come back to barriers for just a second. We talked a little bit about where the expertise comes from and whether there’s in-house expertise versus external expertise. I imagine that certainly could be a barrier for some organizations in terms of really taking the most advantage of AI. But what are some of the other barriers that you see to more widespread adoption of AI in the appropriate use cases related to climate change? Maybe I’ll phrase it another way, too: if you were the tsarina, if you will, of this climate change–AI universe, what would you actually do to help overcome some of those barriers, as well?
Priya Donti: Well, if I were the tsarina, I would probably just take away the problem of climate change, and then we’re done.
Kristin Hayes: Fair enough.
Priya Donti: But in a more practical sense, I think the barriers, in addition to expertise, literacy, and personnel, are also data and data-collection systems and digital infrastructure—both the literal, raw data, and also the computational infrastructure or the simulation environments for your power grid or your industrial system or your building that allow you to actually test these algorithms out.
In addition, as I mentioned earlier, doing this work right requires deep collaboration across different domains of expertise—AI, climate, operationalization, policy, social science. There often isn't enough of that collaboration, and, in some cases, there's just a lack of digital-innovation pathways in established industries. If I want to create an algorithm for power-grid optimization today, it's really hard to validate that advancement in technical readiness and figure out the pathway to deployment on a real system, because we often lack agreement on how to evaluate or simulate the results of these methods and on the metrics you need to use to do that; that pathway to deployment and the relevant infrastructure aren't always there.
I think there’s a lot that can be done—again, on this expertise aspect—to build internal capacity within a wider range of organizations, incentivize organizations to share use cases and data and best practices, create data task forces, have cross-sectoral innovation centers and education. I think there are lots of things like that that need to be done.
For those listeners who are interested in digging into some of these aspects more deeply, we wrote a report back in 2021 for the Global Partnership on AI that really tries to provide actionable strategies for governments and intergovernmental organizations on how to align the use of AI with climate action. In there, there’s a bit of detailing of some of these bottlenecks in more depth, as well.
I think there’s a lot that can be done. Governments, industries, nongovernmental organizations—I think everybody has a role to play in increasing education and readiness and so forth.
Kristin Hayes: Yeah, that’s great. I’m particularly interested in that question of simulation, because you’re right, it’s not like you can just deploy things that have worked on an intellectual basis on the power grid, for example, without actually testing those solutions—but how do you test those solutions when you don’t actually have a simulation option as robust as the power grid itself? That’s a really interesting point.
I also wanted to ask if you have seen any forward motion in any of the directions that perhaps you and your colleagues did identify back in 2021. Are some of those collaborations and education efforts starting to take place? Or do you feel like we’re still right at the beginning of this?
Priya Donti: I think there's definitely stuff starting to take place, but, in some sense, we are still toward the beginning. On the education front, for example, UNEP, the UN Environment Programme, launched an education module on AI and machine learning for climate and the sustainable development goals that is specifically aimed at policymakers in national and international organizations.
I think it’s really great to see that resource out there, and I hope people take advantage of it to increase literacy within their own organizations. In addition, I would say that there has been movement on creating additional simulation infrastructure.
I think I referenced it earlier—the Learning to Run a Power Network Challenge by RTE France. What they did is not only put out a challenge but put out simulation infrastructure underneath that challenge to actually enable people to test out what their solutions will look like on a power grid. They continue to run subsequent versions of that challenge where they further mature the challenge and the underlying environment.
So, there’s definitely some movement here, but I think there needs to be a lot more that happens. In some sense, a lot of this is the creation of public goods, so I think some of this has to be from public funding, from public policy, from nongovernmental organizations, from organizations that are really incentivized to create these public goods to then enable a lot of this innovation and implementation to happen.
Kristin Hayes: Yeah. Well, I know we're getting close to the end of the substantive portion of our conversation today, so I have one last content question for you on this topic: I think it's fair to say the elephant in the room here is the deep concern that has been expressed by many individuals and across many media outlets around the potential for social damage—maybe even social destruction—from AI. There are real challenges here, real risks, so how do you think about taking advantage of the promise of AI while also mitigating those dangers? Then, on a more specific level, how can individuals who are really consumers of AI—as opposed to developers of AI, to a great extent—think about avoiding some of those pitfalls, as well, particularly as we think about applying AI to climate change?
Priya Donti: That's a great question. When we talk about AI's dangers, I think people's minds go in two different directions when they hear that term. One of those is existential risks—superintelligent AI, automated warfare, and so forth.
But I think what is often more pressing and widespread is its clear and present harms: exacerbating biases and inequities through its data or how it's designed or used; questions of trustworthiness and accountability when it's used in policymaking contexts; and also its climate impacts—both its own footprint through hardware and compute, but maybe more importantly through how it's widely used to accelerate oil and gas exploration and extraction, to increase societal consumption through advertising, or to facilitate the development of autonomous vehicles in a way that potentially locks in our dependence on private transportation as opposed to public transit.
It really comes down to this combination of which kinds of applications we disproportionately work on or enable AI to be leveraged for, and also how we work on them.
In terms of what we do: countries, for example, come up with AI strategies, and I think there needs to be a deep integration of ethical and climate considerations into those strategies—not just creating a subset of "AI for good" applications in addition to business-as-usual AI, but really shaping business-as-usual AI itself, and creating algorithmic auditing and transparency frameworks that equip people and organizations to do this kind of thinking both up front, when a project is being developed, and as a project is being deployed and evaluated.
And, of course, these regulatory interventions are essential, but as you mentioned, individual researchers and organizations need to be thinking about this, too. So in some sense, there’s no replacement for interdisciplinary and cross-functional collaboration where you’re really engaging researchers and implementing industries and end users and affected parties and so forth to come together and really shape the way in which we’re evaluating if work is aligned with these ethical values and really ensuring that we’re building that in right from the get-go in a project.
Kristin Hayes: Great. Well, you have given us so much to think about, and you've already mentioned some great resources that folks can check out. We will make sure to list those alongside the recording on RFF's website. But I did want to invite you to offer up any other content you might want to recommend to our listeners as part of our Top of the Stack closing feature. It could be a book, another article, another podcast. Priya, what's on the top of your stack?
Priya Donti: There's an article from 2017 that I really like. It's called "Environmental Justice in the Age of Big Data: Challenging Toxic Blind Spots of Voice, Speed, and Expertise." What this piece really looks at is the implications of the big-data era for how we think about environmental justice issues—in this case, specifically the measurement of toxic releases and things like that. I find this article to be a reminder that it is really important to look at what is captured in big data—and we've talked about a lot of use cases where AI can play a role in that—but it's also important to think about what is systematically not captured in that data and to make sure that's included in the narrative, as well.
Kristin Hayes: Right. Fantastic. Well, thank you again. I really appreciate you taking the time. You’ve done a great job of talking us through these very complicated issues. A lot of food for thought as this obviously continues to be of high visibility and importance in the media ecosystem. I’m assuming it’s only going to grow in importance as time goes on. So, thank you so much for this grounding and for your time.
Priya Donti: Thanks so much.
Kristin Hayes: You’ve been listening to Resources Radio, a podcast from Resources for the Future. If you have a minute, we’d really appreciate you leaving us a rating or a comment on your podcast platform of choice. Also, feel free to send us your suggestions for future episodes.
This podcast is made possible with the generous financial support of our listeners. You can help us continue producing these discussions on the topics that you care about by making a donation to Resources for the Future online at rff.org/donate.
RFF is an independent, nonprofit research institution in Washington, DC. Our mission is to improve environmental, energy, and natural resource decisions through impartial economic research and policy engagement. The views expressed on this podcast are solely those of the podcast guests, and may differ from those of RFF experts, its officers, or its directors. RFF does not take positions on specific legislative proposals.
Resources Radio is produced by Elizabeth Wason, with music by Daniel Raimi. Join us next week for another episode.