Centering the Humanity in Technology
February 27, 2025
Civic Science Fellow Christine Custis Brings Accountability, Agency, and Inclusion to AI Policy and Governance
“For the longest time, we joked that our phones were listening to us,” says Christine Custis, Civic Science Fellow at the Institute for Advanced Study. “But they don’t have to listen. We give them everything—our location, our preferences, our choices. And all these systems are doing is guiding us toward the light, like an amphipod under the control of a parasite. But we have to ask ourselves: at what cost?”

Inspired by an image by former Civic Science Fellow Anand Varma (CSF 2020-21)
A computer scientist by training and an AI innovator with more than two decades of experience, Custis has seen firsthand how technology quietly commandeers human agency.
“We’re not the users anymore; we’re the product,” she says. “Our data is the product.”
Now, as a Civic Science Fellow in Alondra Nelson’s Science, Technology, and Social Values Lab, Custis is on a mission to push for something different: technology that respects autonomy, embraces inclusivity, and prioritizes humanity. At the intersection of ethics, policy, and society, she’s asking complex questions about how artificial intelligence impacts human lives—questions that can generate actionable answers.
The Path to Civic Science
Custis’s career began in technical spaces. She honed her expertise at IBM and The MITRE Corporation before becoming director of programs and research at the Partnership on AI. Over two decades, she built a reputation as a dynamic problem-solver, tackling issues including AI safety, labor policy, and transparency. Yet despite her professional success, something felt incomplete.
“My assignments were always about impacting people, but they could sometimes be very vacant of those people’s insights and inputs,” she says. “I didn’t want to just deliver a technical asset. I wanted to see its effects on the people it was actually for.”
This desire to merge technical innovation with a focus on humanity led her to the Civic Science Fellowship program. Already familiar with Alondra Nelson’s work bridging science, technology, and society, Custis saw a post on LinkedIn announcing Nelson’s return to the Institute for Advanced Study from a stint in the White House Office of Science and Technology Policy, along with an opening for a fellowship in her lab. Intrigued, Custis applied and was quickly selected.
“It felt serendipitous,” Custis says of the transition. “This work lets me position myself as a scientist whose content and context better inform civil life, because it’s more about people.”
Interdisciplinary Work at the Lab
Custis plays a central role in the lab by supporting one of its flagship projects: the AI Policy and Governance Working Group. This initiative convenes leaders from industry, academia, civil society, and government to explore the ethical and societal implications of artificial intelligence.
“This group is a space to have deep thought about the ethical issues around AI—the responsible design, development, and use of it,” she says. While the 18-month-old group initially focused on responding to government proposals, such as a request for input on dual-use AI models, Custis sees a shift ahead. “We’re looking at how to be more forward-thinking, how we can set strategy instead of just reacting.”
Each formal meeting of the Working Group combines private sessions with public-facing workshops, illustrating the group’s commitment to inclusion. At a March meeting in Hawaii, Custis attended a session that brought together educators, artists, policymakers, and members of the local community to discuss the risks and threats posed by generative AI systems.
“You’d hear from educators sharing how they’re using AI or what they fear about it, as well as artists asking, ‘What about me? How does this impact my work?’” she says. “It became this open dialogue where people who aren’t necessarily technical experts had the chance to engage.”
Custis says public engagement like this shouldn’t be optional but standard practice. “These conversations should happen way more often. People who don’t use but are affected by these systems—what I think of as impacted non-users—deserve to be part of the dialogue.”
Even behind closed doors, Custis sees the diversity of perspectives within the working group as central to its success. By blending rigorous policy work with lived experiences, the group ensures that its strategies and recommendations reflect real-world concerns. Helping to plan these meetings and synthesize the results is one of Custis’s primary contributions.
“It’s one thing to look at a policy proposal,” she says. “It’s another to see how, say, an educator responds or how different communities interpret the impact.”
Beyond the working group, Nelson’s lab is what Custis calls a “thinker’s space,” filled with fellows and affiliates from a wide range of disciplines.
“I get to hear the research of folks writing books, creating documentaries, and exploring big questions,” she says. “And sometimes the most interesting discussions just happen when you’re walking around or having coffee together.”
Human Agency and Inclusion
Custis’s work is grounded in accountability and inclusion. At its heart is her concern for human agency—people’s ability to maintain autonomy in a world increasingly shaped by technology, including artificial intelligence. “The coercive architecture of so many applications strips us of our autonomy,” she says. “We’re handing over our data willingly. And then these systems guide us. Toward what? And who benefits?”
For Custis, preserving agency alone isn’t enough. She argues that scientists and technologists must prioritize bi-directional collaboration with the people their work affects.
“It’s not done until everyone participates,” she says. “The product, the policy, the science—it’s not done until the people impacted are part of the process, their lived experiences are part of the science. Including those voices isn’t just ethical, it makes the science better.”
Custis’s time at the Partnership on AI revealed how deeply entrenched resistance to inclusivity can be. During external discussions on AI in healthcare, she encountered extreme opposition when advocating for patient voices in the conversation.
“I remember getting so much pushback for wanting to include patient advocates in the breakout rooms, and it was just baffling to me. Why wouldn’t we invite these voices into the room?” she says. “It struck me how technologists or specialists can easily see the people that their product or service is for as being in the way.”
At the root of Custis’s work is a guiding principle: “What we build and how we build it says a lot about what we value. We need to design for humanity’s needs—not just for innovation’s sake.”
Looking Ahead: A Living Network
As her fellowship progresses, Custis is thinking deeply about the future—not only her own but also the legacy of the Civic Science Fellowship. “This fellowship isn’t just about the 18 months,” she says. “It’s about seeding the future. Maybe nothing comes of a connection now, or even in the next five years. But in the sixth year? You’ll remember who to call.”
Custis sees the fellowship model as critical for tackling the most complex societal challenges, particularly in fields like AI governance.
“We are all part of the answer,” she says. “Scientists, technologists, impacted non-users, advocates—everyone. And if we don’t talk and work together, we lose the nuanced truth.”
Beyond her work with AI governance, Custis is planning a collection of essays centered on the theme of disembodiment: How do we maintain our sense of self and our ability to think and choose for ourselves when, for example, we’re constantly entangled with AI systems that are designed to learn from us and influence us, often without our full understanding or control?
“It’s the way we sort of lose ourselves in artificial intelligence, or we actually allow it to commandeer our lives,” she says. Drawing on her own experiences as a product innovator and blending that with broader societal commentary, she hopes to explore both the practical and personal dimensions of ethical AI.
The writing project ties into the larger questions she explores: How do we build responsibly? How do we ensure marginalized voices aren’t left out? And how do we create systems that reflect the full complexity of human lives?
Custis knows these questions don’t have simple answers, but that’s precisely the point.
“This is a chance to slow down and ask the big questions,” she says, “before we lose sight of the humanity we’re trying to serve.”
Christine Custis’s Civic Science Fellowship at the Institute for Advanced Study is supported by the Rita Allen Foundation and The Kavli Foundation.