
On Tech Ethics Podcast – Considerations for Using AI in IRB Operations

Season 1 – Episode 15 – Considerations for Using AI in IRB Operations

This episode discusses considerations for using artificial intelligence (AI) in Institutional Review Board (IRB) operations.

 


Episode Transcript


 

Daniel Smith: Welcome to On Tech Ethics with CITI Program. I’m pleased to welcome Myra Luna-Lucero back to the podcast. Myra is the research compliance director at Columbia University’s Teachers College. Today, we are going to discuss considerations for the use of artificial intelligence in institutional review board operations. Before we get started, I want to quickly note that this podcast is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization’s attorneys if you have questions or concerns about the relevant laws and regulations that may be discussed in this podcast. In addition, the views expressed in this podcast are solely those of our guest. And on that note, welcome back to the podcast, Myra.

Myra Luna-Lucero: Thanks so much, Daniel. Happy to be here.

Daniel Smith: I’m really looking forward to hearing your thoughts on the use of AI in IRB operations. But first, can you quickly tell us more about yourself and what you are currently focusing on at Teachers College?

Myra Luna-Lucero: Sure. Yes. As you mentioned, I’m the research compliance director here at Teachers College. And right now, we’re working very hard on trying to think through the steps that we would need to understand the current diverse and dynamic research that’s coming out of our research community and part of that is AI. So I’m happy to be able to talk a little bit about some of the strategies we’ve taken and also just some of the questions that we have still, even as we’re thinking through some of these processes.

Daniel Smith: Wonderful. I know a lot of people have been grappling with this for a while now, so I’m looking forward to hearing your thoughts and I think they’ll be really helpful. So can you start by providing a brief overview of the operational aspects of an IRB and what are some of the key responsibilities of an IRB office?

Myra Luna-Lucero: The primary purpose of the Institutional Review Board is fundamentally to protect the rights and welfare of human subjects involved in research activities. As IRB administrators, we really are beholden to primary ethical principles and the Common Rule. And in pursuit of those responsibilities, we’re trying to evaluate the risk-benefit ratio for any of the studies that we review. And right now, we’re just reviewing such dynamic and multimodal proposals for data collection. And as IRB reviewers, we’re trying to think through design guides and checklists that not only support our ability to review such protocols, but also work to support the researcher as they’re designing their research projects. We’re also thinking through some of the federal, state, city, and local policies within human subjects research and how we can balance all of those parts together as we’re thinking about larger projects that support our research community.

So one of the things I love most about this job is that there’s such excitement in the protocols that we review. There’s such enthusiasm from researchers grappling with challenges to solve and prospects of making the world a better place. And so at the end of the day, it’s one of the joys of the job, as opposed to some of the more challenging, rigorous administrative parts that you have to get through. But I definitely like the conversations that I have with researchers about the dynamic work they’re doing in the field.

Daniel Smith: So I think that’s a really helpful overview of IRB operations for those of us that are not already familiar, but can you share some of the primary reasons why you and other IRB professionals are considering the integration of AI into IRB operations?

Myra Luna-Lucero: Yeah, I feel like even hearing that question, it sounds like a very weighted prospect. And I think for me to answer it, I need to walk back just a little bit and say we in the IRB profession are typically trying to understand how researchers navigate in the field. Because as you remember, our responsibility is to protect the rights and welfare of the human subject. And in doing that, we need to understand what researchers are proposing and the kind of work that they want to engage with human subjects in the field. And I already know that researchers are very interested in AI, artificial intelligence, and they’re already interested in these multimodal aspects of research design. So it helps me in understanding the work that they’re already interested in when I review their protocols, because I’m able to better assess the risk, I’m better able to provide suggestions or revision requests in helping make that project more robust and more aligned with ethical considerations for protecting the rights and welfare of human subjects.

So I’m considering AI in the IRB operations more so because I want to understand how researchers are using it, and I want to think deeply about perhaps how it can improve our own administrative processes, and moreover, what we need to do in creating safeguards and making standard operating procedure decisions that protect data, that protect privacy and confidentiality, that protect data use. And so it’s this big idea of AI in IRB operations with a lot of undercurrent of balancing openness, curiosity, exploring AI functions, and then very deeply rooted in how we can make safeguards, policies and procedures, and expectations of our job, as well as how we’re going to review such protocols when researchers submit them for our consideration in, again, protecting the rights and welfare of human subjects.

Daniel Smith: That certainly makes sense. So can you provide an example of how you personally have used AI to better understand how it works in the context of how researchers are using it, and then maybe also an example of how you’re thinking about its potential use in IRB operations?

Myra Luna-Lucero: Such a great question. I do have to say that I am not an expert in AI. I am approaching it much like I would think of exploring any new technology. It is me trying to understand the use, the function of the technology, but I am very far from being the expert. You can think of it as I’m wise enough to know data security and data operations, but still trying to figure out the limits or limitlessness of AI in the context of research. So what we’ve been thinking about in the IRB office is ways to streamline our operations and use AI that allows us to be more responsive to researcher inquiries on a quicker, more efficient basis. And one of the conversations has revolved around a chatbot or generative AI that can produce content to support the research community.

And even just that concept, I have not developed anything. Nothing has been produced. We don’t have anything pilot tested yet. These are just preliminary concepts. The idea of a chatbot and the idea of a generative AI that can produce content or support content creation for researchers. Even just those topics alone have created lengthy, lengthy discussions with technology specialists, with colleagues like you, with researchers. And I don’t think we’re even at the point of thinking about how AI can be used on a day-to-day basis, but these conversations have inspired such rich thoughts and such rich concepts about, “Okay, we have this use of technology, we have this resource, what can we do with it?” If we think about it in terms of a chatbot, very simply, a chatbot in this case would be an option on, say, the IRB website that would be populated by frequently asked questions. And it would serve very similarly to the chatbots individuals have experienced when getting a prescription or when consulting websites about where they can find an item, et cetera.

But it would be populated by content that is related to the IRB or to research with human subjects. A user would log in, they would start the chat: how do I submit an IRB protocol? The chatbot would auto-generate content that basically says, click this link, or here are the steps to create a new IRB protocol. That does seem a little bit more approachable. It does seem like a new version of technology that we could apply. The AI transition from that is much more complicated. And that would look something along the lines of an AI that can be used to generate consent form language that a researcher can then use in their protocol submission package.
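As a rough, hypothetical illustration of the FAQ-style chatbot Myra describes, the Python sketch below matches a visitor's question against a small, hand-curated ("homegrown") list of question-and-answer pairs rather than calling a generative model. The questions, answers, and URL are invented placeholders, not actual IRB resources.

import difflib

# Hand-curated ("homegrown") FAQ entries; the questions, answers, and URL
# below are hypothetical placeholders, not a real IRB resource.
FAQ = {
    "how do i submit an irb protocol":
        "Start a new protocol in the submission system and follow the "
        "step-by-step checklist: https://example.edu/irb/new-protocol",
    "how long does irb review take":
        "Review times vary by review type; exempt reviews are typically "
        "faster than full-board reviews.",
    "where can i find consent form templates":
        "Consent form templates are posted on the IRB forms page.",
}

def answer(user_question: str) -> str:
    """Return the closest canned FAQ answer, or hand off to a human."""
    key = user_question.lower().strip(" ?!.")
    match = difflib.get_close_matches(key, list(FAQ.keys()), n=1, cutoff=0.5)
    if match:
        return FAQ[match[0]]
    return "I'm not sure; please contact the IRB office directly."

print(answer("How do I submit an IRB protocol?"))

Because every answer is drawn from a fixed, vetted list, a chatbot like this only ever returns content the IRB office wrote itself, which is part of what makes it feel more approachable than a generative system.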

Now, here’s where things get a little uncertain. So what would that consent form generation look like exactly? And are we removing the learning process for a researcher to generate their own content by bypassing it and having an AI create that content for them? And I think that’s where we’re stuck in our IRB considerations of the use of AI. I had a very deep conversation with a researcher who shares my concern about the importance of a researcher generating their own thoughts, even though it may be convenient and easy to call upon ChatGPT to generate that language for you. But there is, in essence, a fundamental learning experience, a critical thinking, analytical thought process that goes into generating new content as it pertains to a research study.

And so I’m feeling that in this context, perhaps one of our standard operating procedures is that we would highly encourage researchers, and maybe this is the language we have to use now, not to use generative content sources and rather ask that they generate the content themselves, but consider that AI might be used for reviewing content after it has already been generated. So much like an editor might review content, or much like a pilot test of that content, it could look for fluency of language, natural speech, or it could look for age appropriateness, et cetera. So this is where the debate is still ongoing.
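One concrete version of that editor or pilot-test role is an automated readability check run on consent form language after the researcher has drafted it. The sketch below is only a minimal illustration under that assumption: it estimates a Flesch-Kincaid grade level with a rough syllable heuristic, which is far cruder than what a production tool or a generative reviewer would do, but it shows the review-after-generation idea.

import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, dropping one for a silent trailing "e".
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

consent_excerpt = ("Your participation is voluntary. You may stop at any time. "
                   "Your answers will be kept confidential.")
print(f"Estimated reading grade level: {fk_grade(consent_excerpt):.1f}")

A check like this flags consent language written above the reading level of the intended population without ever generating the researcher's content for them.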

So going back to your question, can I provide some examples of the use of AI? Yes, but we’re not quite there in understanding how it would be operationally used. We are thinking about limitations and considerations, focusing on the balance between the learning part of human interaction, researchers learning to write their own content, and the convenience of an AI that can maybe produce more robust content. This is what we’re really debating, and I don’t have an answer. I can just say that these conversations are rooted in the hope of using AI in smart and ethical and efficient ways, while simultaneously encouraging researchers to generate their content and really think deeply about the work that they’re doing. And it’s not to say that they’re not doing that already, but there is a convenience in taking that AI route. And we’re trying to think through what that means for the learning opportunity of generating their own content. So that’s the idea right now of some of the concepts we’re thinking about for AI in the IRB.

Daniel Smith: The example of the consent form sounds like it could be pretty useful to have essentially a second set of eyes, for lack of a better term, on a consent form to make sure that it’s optimized for readability and understandability and things like that. So just focusing on that example for a moment, can you talk a bit about some of the challenges that you see with integrating that from an operational perspective?

Myra Luna-Lucero: I think that’s a great question. I think it is rooted in homegrown versus open access. Now, I’m going to describe that a little bit more. So if the AI is populated with a data source that is homegrown, in other words, it is based solely on institutional knowledge, content that can be credibly cited, that does have validity, that AI could potentially, as you described, be a second set of eyes, an editor, so to speak, for the vetting of content. And it could provide an abundant amount of resources for researchers who may be struggling with framing content and organizing their content. But again, that’s thinking of it if the AI is “taught,” put that in quotes, about content or provided content that is homegrown, because there’s a lot more control that you can have over content that is homegrown.

Now, if that AI is provided with content that is more open source, so it’s an open repository, or it is from a maybe uncertain source that is described one way but where there’s not clear transparency on when or how that open source was created, it’s a little bit of a trickier situation. Now, we want to have faith that there are good actors producing open-source data that has robust content that is validated and vetted and organized in a systematic way with high-level ethical standards. But as we also know, there are bad actors who may create convoluted content or may create content that is biased one way or another. And when we’re dabbling in this open source, it does get a little bit harder to maintain a sense of where that source’s credibility is.

Now, what does that mean? That means that researchers and IRB reviewers are going to have to think very deeply about the credibility of the sources an AI is populated with. And there’s the time factor too. While a homegrown, content-generated AI seems like one of the more reliable, safer ways to ensure the content is valid and credible, it is very time-consuming. And so in a lot of ways, it is more adaptable or easily accessible to target some of these open sources. And it’s, again, not to say good or bad or to make some kind of claim one way or another, it’s just another consideration.

And so in conversations that I’ve had with researchers who specialize in AI and robotics and technology, they’re trying to find a balance in themselves and in the work that they’re proposing for both this homegrown AI populated content and tapping open sources, that there is transparency and there is credibility, and there is content that can be validated. And so it can be all of these qualifiers. It can be an amazing source, and it can improve the accessibility and content review of some of these protocols. But at the same time, we have to think deeply about the source that the AI is pulling from and how that source is valid or how that source is credible.

Daniel Smith: I want to take a quick break to tell you more about CITI Program’s Technology, Ethics, and Regulations course. This course, which was developed by various experts in technology ethics, includes modules specific to AI in human subjects research. You can learn more about this course and others at citiprogram.org. Now, back to the conversation with Myra.

You’ve talked about working with different groups at your institution that sound like they’re very helpful resources when you’re thinking about this and weighing these decisions. So can you just talk some more about that and what that collaboration looks like and the types of questions that you’re asking them and helpful inputs that you’re getting as you think through these things?

Myra Luna-Lucero: I think that’s a great question. I want to give a quick analogy of what it feels like. It feels like I’m walking into a museum exhibit and there are people that have clear knowledge about one artifact and they’re giving me lots and lots of information, and I’m very happy to have that encounter. And then I go to another artifact, and then there’s nobody and everybody’s, “We don’t know where we got this from. We have no idea.” And then I’m trying to find a docent or a curator or a manager of this exhibit to help me understand the content, but they don’t exist or maybe they have some uncertainty. And so it’s like I’m walking around all of these artifacts or all of these curated spaces in terms of technology. We have AI, robotics, generative AI. We have predictive AI, we have data mining, all of these artifacts in this space, and I’m trying to find answers. But what’s happening is that I’m getting robust content about some things and I’m getting a lot of shrugs, and we don’t know yet in other cases.

And so I’m trying to walk around and figure out these spaces, and there’s a lot of where is that from and where do I go and what direction do I head? And there’s no answers because there’s not enough information or it hasn’t been tested yet. It hasn’t been vetted yet, but there’s some areas that there’s a lot of information about, and there’s experts that can talk about that content in robust ways. And so it’s taking those moments when I’m getting information from an individual who has a lot of content to share and then trying to add it to the other pieces in this space and seeing, “Does it relate to this context? Okay, this is an unknown artifact. Okay, what would it look like if it did do this? How would it function if we were to propose that?”

And that’s really what it feels like. It feels like I’m trying to make sense of a space. I’m trying to look at these artifacts. I’m trying to piece together information while simultaneously being curious and okay with limitations and responsive to the “we don’t know yet” and “we’re not sure yet,” because I think that’s part of the narrative. I’ve gone to several webinars. I’ve visited and communicated with several researchers, and that seems to be a very common conversation. We’re all in this space. We’re all looking at all of these artifacts. We’re trying to figure out origins and meaning and purpose. We have some information that’s very well-versed, and we have others that we just don’t know yet. It’s an uncomfortable space. It is a confusing space, but it’s also a very exciting space to be in as well. And I think for me, my personality, and the kinds of ways that I want to think through the world, being comfortable with not knowing is part of the process of dealing with some of this very robust technology, this very robust AI, but not being stagnant.

And I think what’s also important is that I can be in a space of unknowing, and I can be in a space of curiosity, and I can be in a space of confusion, but I can also be in a space of answer seeking and trying to be proactive and trying to broach these conversations. Even if the conversation is like, “Okay. Well, are robots going to take over the world? I don’t think so. Well, let’s just have that thought for a second. Okay. No, great.” Now, we can move forward and think through what tools are available, how we can utilize these tools, and what it means to use these tools in this particular way. And so I hope that analogy conveys, as best as I can construct it, the feelings that I have as I’m engaging with these topics. It’s wonder, it’s curiosity, it’s excitement, it’s let’s be more efficient. And then it’s also like, “Whoa, where’d that come from? And what does that mean?” And I think that’s a perfectly wonderful space to be in, so long as you’re doing so with caution and critical thinking and mindfulness.

Daniel Smith: As you’ve been navigating this learning process for a while now, do you have any tips or lessons learned that you can share with our audience?

Myra Luna-Lucero: Yes, I would love to share some tips and, I think, some considerations. The first is to ask questions. And sometimes it feels intimidating, and some people will say, “Well, I barely learned how to use one version of technology and now it’s 10 times faster and 10 times bigger. And I don’t know how to ask the question.” Sometimes it’s okay to just say, “I don’t know how to ask this question. Can somebody help me think it through?” And I think that thought partner, having somebody that you can bounce these thoughts back and forth with, is really important, because sometimes you want to just ask the big questions, the big philosophical questions that you may not find answers for. But having that thought partner to bounce those questions off of is very important. I went to a webinar and one of the questions was, well, how do we grapple with all of this emerging technology when we as humans are already functioning in a challenging world? How are we now having to deal with all of this technology?

And I think part of it is to remember that you can pace yourself. You can pause a study, you can discuss with a researcher, you can ask experts. You don’t have to embrace all the technology all at once. You can create standard operating procedures that allow you to keep your ethical education up while simultaneously ensuring that researchers are supported in the work that they want to do, the inspiring work that they want to propose using technology, and communicate with them and say, “Give me a minute to figure out how this technology is used. Let me think it through as I’m considering ways to mitigate risk.” And pausing is part of that process. So the thought partner, pausing until you have a grasp of the kind of work that the researcher is proposing. And I have adopted a pretty clear plan to jot down standard operating procedures as I’m having conversations with these experts or with these colleagues about AI.

And I’m not saying that those standard operating procedures will become a final version or a final policy, but it’s moreover to think, “Okay, this person said this is a data security risk. I should think this through a little bit.” If it were a data security risk and this is what it would look like, I jot that down and have this running list of standard operating procedures that then I can vet with the IRB chair or the IRB board or other IRB specialists. And some of the risks to think about, and I mentioned this before, is the data source for AI, where’s the data coming from? And was that data ethically collected? And is that a consideration that you need to make right now? Or is this something you can think through in a more vetted way? And how do we fall in line with that kind of data source?

What does it mean when an AI is populated with a questionable data source? The second is that, with AIs and data mining, there are still ways to re-identify populations. If there are enough data points about one individual person, there is a possibility that that person could be re-identified, even by an AI. So we need to think about consent forms and communicating with researchers about what it means to collect anonymous data. Is it truly anonymous, or is there enough personally identifiable information, enough data points, that could re-identify that person, even if the original intention was to collect that data in anonymous ways? And then I think the third part of this reassessment and consideration is public versus private spaces. AIs will be in public spaces, and they will be in private spaces. And in the long run, considering AI is going to be part of the natural cycle of IRB protocol review, and it could even potentially be a conversation that an IRB reviewer has with a researcher: did you pilot test these measures with an AI before a human subject?

And that feels like an odd sentence to say, but it could be part of the risk mitigation is testing some of the measures within AI before it’s been tested with a human. So having, again, a thought partner, pausing, thinking through what you need to do to process your standard operating procedures, seeking content experts and remaining curious, I think those are some of the highlights. I kind of dabbled and went back and forth between some of those thoughts, but I do think that’s really where I’m at right now as I’m thinking about AI and research with human subjects.
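Myra’s second risk above, re-identification from combined data points, can be made concrete with a small fabricated example. Even when no names are collected, a few quasi-identifiers taken together can isolate a single person; the k-anonymity measure sketched below reports the smallest group of records sharing the same quasi-identifier values. The records and fields are invented purely for illustration.

from collections import Counter

# Fabricated, "anonymous" survey records; no names were collected.
records = [
    {"zip": "10027", "age": 34, "role": "doctoral student"},
    {"zip": "10027", "age": 51, "role": "faculty"},
    {"zip": "10027", "age": 34, "role": "faculty"},
    {"zip": "10031", "age": 34, "role": "doctoral student"},
    {"zip": "10031", "age": 34, "role": "faculty"},
    {"zip": "10031", "age": 28, "role": "doctoral student"},
]

def k_anonymity(rows, quasi_identifiers):
    # Size of the smallest group sharing the same quasi-identifier values;
    # k = 1 means at least one respondent is uniquely identifiable.
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return min(groups.values())

print(k_anonymity(records, ["zip"]))         # 3: each ZIP code is shared by three people
print(k_anonymity(records, ["zip", "age"]))  # 1: one ZIP-and-age combination is unique

The drop from k = 3 to k = 1 when a second field is added is exactly the concern raised above: data collected with anonymous intent can still re-identify someone once enough data points are combined.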

Daniel Smith: I think those are all really great tips and considerations for everybody as they also navigate this space. On that note, are you aware of any other resources out there that could help folks navigate these issues?

Myra Luna-Lucero: Yeah. I mean, honestly, the CITI training has a plethora of webinars on this topic, and I think they do a very good job of dissecting some of these complex topics while simultaneously leaving open the questions that we don’t know yet. And I think that seeking those kinds of sources is really important. I have even gone so far as to just Google AI and see what articles come out or what is prompted, what is the first hit on a web search. I also think that it’s important to just listen to podcasts and play with AI, see what comes out of it, even for your own personal understanding. Because when you know about the use of this technology, you can better assess potential risk factors. So those are some sources. I don’t have a repository of sources that I go to for this particular topic, but I can say that I am open to diverse resources, because I think that’s really going to help educate me as I’m assessing protocols, since researchers are already very interested in using AI in their projects.

Daniel Smith: I will definitely include some links to some of the resources that CITI Program offers on the topic in our show notes so our listeners can learn more. And on that note, do you have any final thoughts that you would like to share that we’ve not already touched on today?

Myra Luna-Lucero: So I had an interesting conversation with a colleague about AI, and there’s apprehension and there’s uncertainty about what AI means, all these bigger projects and such. And we were discussing, all right, what is the value of humans now that we have to have these big conversations about humans and artificial intelligence? And at the end of the day, research compliance reviewers care about research. We care deeply about the protection of human subjects. We care deeply about ethical standards, and we will always be part of that typical research plan because we have that deep care. And there’s nobody that can convince me that an AI can care as much as an IRB reviewer. So that will always be true. And then I think the other part that is good to think about when you’re thinking about AI is that humans also have hunches.

We have instincts. We have these gut feelings. And as we’re thinking about reviewing protocols that involve diverse topics, we’re going to be thinking through the risk factors. And a lot of that is those hunches about how one thing could be a potential risk, or creating a safeguard to protect that population. So that’s something to always think about. And then the third is that humans are illogical, and that’s okay too. And I think those kinds of illogical thought processes and those types of inquiries add to these larger discussions. So although AI may feel overwhelming, although technology is growing at a rapid pace, we always have that deep care. We can always rely on those internal signals, that hunch, and we can ask questions that may not be logical, and that’s okay because we’re thinking and analyzing something that is quite complex. And holding onto those factors, I think, is really important as you are exploring AI, as you’re exploring these topics, and as you’re thinking through standard operating procedures that apply to your organization.

Daniel Smith: Thank you again for the wonderful conversation today, Myra.

Myra Luna-Lucero: Thanks. I had a great time. Thanks so much.

Daniel Smith: And I also invite everyone to visit citiprogram.org to learn more about our courses and webinars on research, ethics, and compliance. You may be interested in our Essentials of Responsible AI course, which covers the principles, governance approaches, practices, and tools for responsible AI development and use. And with that, I look forward to bringing you all more conversations on all things tech ethics.

 


How to Listen and Subscribe to the Podcast

You can find On Tech Ethics with CITI Program on several of the most popular podcast services. Subscribe on your favorite platform to receive updates when new episodes are released. You can also subscribe to this podcast by pasting “https://feeds.buzzsprout.com/2120643.rss” into your podcast app.





Meet the Guest


Myra Luna-Lucero, EdD – Columbia University

Dr. Myra Luna-Lucero is the Research Compliance Director at Teachers College, Columbia University. In addition to supporting researchers, she has recently launched an ethics internship program and an extensive transformation of the College’s IRB website. She regularly offers seminars and workshops on research compliance and IRB leadership.


Meet the Host


Daniel Smith, Associate Director of Content and Education and Host of On Tech Ethics Podcast – CITI Program

As Associate Director of Content and Education at CITI Program, Daniel focuses on developing educational content in areas such as the responsible use of technologies, humane care and use of animals, and environmental health and safety. He received a BA in journalism and technical communication from Colorado State University.