February 2026
This document outlines the values guiding Connecticut College Library’s evaluation and integration of GenAI research features within our licensed academic collections. Library database vendors are rapidly introducing capabilities such as AI-enhanced semantic search, AI-generated answers synthesized from retrieved sources, summaries of results and articles, and identification of key authors, research, and emerging trends into their products (Tay 2025). The Connecticut College Library is committed to learning as much as possible about these enhancements, assessing their fit, and educating our community on both the benefits and the limitations. Our choices will be intentional and aligned with our mission to foster information literacy, support academic excellence, and uphold ethical information practices.
Scope note: At this time, the library’s primary focus is on evaluating licensed services through our library vendors.
Values
The following core values (Accountability, Access, Autonomy, Assurance) guide our evaluation and integration of GenAI tools when selecting new products or considering whether to enable them within existing licensed academic databases/collections.
Accountability
- Transparency: Provide clear, verifiable information about data sources, model architecture and training, algorithmic behavior, and limitations. Librarians must understand how tools work in order to educate users. We prioritize interfaces that make functionality visible to end users.
- Social Responsibility: Align partnerships with values of diversity, inclusion, and environmental sustainability. Expect ethical labor practices, responsible data use, bias mitigation, and transparent sustainability commitments.
Access
- Accessibility: Conform to ADA and WCAG 2.1 (or higher) standards so tools and insights are maximally useful to all users.
- Collection Discovery: Enhance the ability to discover and access resources across all library collections and databases.
Autonomy
- Choice and User Agency: Support flexible implementation with opt-in/opt-out controls at institutional and individual levels. Preserve traditional research methods (e.g., keyword search, controlled vocabularies, citation chasing) alongside GenAI features, with clear labeling and explanations of each.
- Privacy: Provide robust protections for user data and intellectual property. Patron activity must remain confidential and must not be used to train external models without explicit, opt-in consent. Minimize data collection and provide retention and deletion controls.
Assurance
- Interoperability: Integrate cleanly with existing library and IT infrastructure (e.g., SSO, OpenAthens, link resolvers, discovery layers).
- Security: Maintain security, privacy, and institutional control over settings and data.
Landscape and Commitment to Our Community
We must acknowledge the practical realities and limitations inherent in working with third-party vendors and the quickly evolving AI landscape. First, many AI models operate as 'black boxes': their internal decision-making processes are opaque, and in some instances even the vendors themselves may not fully understand how their tools generate outputs.
Furthermore, vendors often restrict configurability. Despite our insistence on granular control, some platforms simply do not offer this functionality. While we strive to conduct thorough due diligence aligned with our values, we may not always receive satisfactory answers. In cases where a tool provides critical value but lacks full transparency, we must weigh these trade-offs carefully.
Finally, the AI landscape is dynamic; vendors frequently update products, enable new features, or adjust algorithms without prior notice. We ask for the continued trust and understanding of our users, knowing that we remain guided by our values to make the best choices for our community.
Implementation
- Collections staff, such as the Director of Library Collections, Access & Discovery and/or the Assistant Director for Library Collections, learn of emerging AI tools, features, or options through communication from vendors, consortial partners, research support librarians, or library patrons.
- Collections staff log the tool, vendor, feature scope, source/date, and notes in an AI decision log.
- Collections staff apply the values identified in this document to choose one of: enable the tool on a pilot basis, disable the tool on a pilot basis, or defer pending more information, and record the decision and rationale in the AI decision log.
- If more information is needed before deciding to pilot, collections staff will consult with research support staff and, if still unclear, the Librarian of the College. Collections staff will then decide whether to enable the tool on a pilot basis, and log the decision as noted above.
- At least once per semester, collections staff will review pilot decisions with research support staff, note any concerns or issues, and either finalize ongoing use of the tool or decline to do so. The final decision and effective date will be logged, and any necessary communication to library staff and/or patrons will be planned.
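The decision-log fields and workflow outcomes described above could be modeled as a simple record. The following is a hypothetical sketch only: the field names, status labels, and the example entry are illustrative assumptions, not the library's actual log format.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

# Hypothetical statuses mirroring the workflow above: pilot enable/disable,
# defer, then a final decision after the per-semester review.
class Status(Enum):
    PILOT_ENABLED = "enabled on a pilot basis"
    PILOT_DISABLED = "disabled on a pilot basis"
    DEFERRED = "deferred pending more information"
    FINALIZED = "ongoing use finalized"
    DECLINED = "ongoing use declined"

@dataclass
class DecisionLogEntry:
    tool: str            # the AI tool or feature under review
    vendor: str
    feature_scope: str
    source: str          # how staff learned of the feature (vendor, consortium, patron...)
    logged_on: date
    status: Status
    rationale: str       # values-based reasoning for the decision
    notes: str = ""

# Example entry (all values illustrative):
entry = DecisionLogEntry(
    tool="AI-generated answer summaries",
    vendor="Example Vendor",
    feature_scope="Summaries synthesized from top search results",
    source="Vendor announcement",
    logged_on=date(2026, 2, 1),
    status=Status.DEFERRED,
    rationale="Awaiting vendor details on training-data provenance.",
)
print(entry.status.value)
```

A structured record like this would make the per-semester review step straightforward: staff can filter entries by status and revisit anything still marked as a pilot or deferral.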
Contact Us
For questions about this document, its implementation, or research tools, please email refdesk@conncoll.edu. A reference librarian will receive your message and respond.
References
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
Reidsma, M. (2020). Masked by Trust: Bias in Library Discovery. Litwin Books.
Tay, A. (2025). Testing AI Academic Search Engines - What to find out and how to test (2).
Zhu, J. (2023, Oct. 16). Reflecting on a decade with the Open Discovery Initiative: Insights from IEEE. The Scholarly Kitchen. https://scholarlykitchen.sspnet.org/2023/10/16/guest-post-reflecting-on-a-decade-with-the-open-discovery-initiative-insights-from-ieee/
Zuboff, S. (2020). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.