Responding to the Threat of AI Agents in Pedagogy
February 2026
1. Executive Summary
- Blackboard commits to the protection of academic integrity through advocacy, ongoing engagement with institutions, and—where technologically possible—product delivery.
- AI Agents can enter online learning environments and complete tasks on behalf of students. These are now widely available, and in some instances being marketed to students directly.
- Blackboard considers AI Agents that replace learning an unethical use of generative AI in education. They threaten both the learning process and academic integrity, and don’t align with our Trustworthy AI Approach.
- At this time, AI Agents cannot be identified or blocked by vendors of learning technology working in isolation. This is also the case for web-based services in other categories.
- Accordingly, Blackboard believes a multi-faceted and industry-wide approach is necessary, combining both pedagogical and technological best practices to advance academic integrity.
- As an important part of this, we encourage vendors of AI Agents to adopt HTTP Message Signatures as proposed by the Internet Engineering Task Force (IETF), allowing their use in online learning environments to be identified. If this, or a similar mechanism of identification, is implemented at scale, it will give institutions the ability to understand AI Agent use by their students and develop policies for appropriate behavior.
- We advise institutions to consider opportunities to evolve pedagogical approaches that promote authentic student work. We are also engaging in proprietary research in this area.
2. Introduction: AI Agents and the impact on teaching and learning
The last three years, dating back to the release of ChatGPT in November 2022, have seen an ongoing and important conversation about the appropriate role for generative AI in higher education. Academic integrity has been a central theme, amid fears that students will engage AI to produce assessable work and present it as their own.
Recent developments with agentic AI have once again placed academic integrity under the microscope. AI Agents—in particular AI Browsers or AI Browser Extensions—are widely available tools that can perform online tasks at the user’s instruction. This goes beyond simply providing a response to a query, as is the case with ChatGPT and other common generative AI tools, and allows AI to mimic the online behaviors of a human user and complete important processes without human oversight. This has already made a huge impact across a wide range of industries—online shopping, filing taxes, coding, and job applications are examples of common tasks where AI Agents may be employed—and, indeed, will affect various departments within higher education institutions in different ways.
Our focus here is pedagogy. This information is not designed to cover every potential use case within the education sector; instead, it looks specifically at how AI Agents impact the learning process and academic integrity.
Doing so requires a baseline understanding of three crucial points:
- AI Agents now have the capability to log in to online learning platforms on a student’s behalf (provided the student shares their credentials with the AI Agent), scan the information therein, and complete course activities such as assignments, discussions, and more. Because they appear as the student user, they are very difficult to detect (more information in the ‘AI Agents cannot be identified by EdTech vendors alone’ section below).
- There are already tools on the market, such as Perplexity’s Comet browser, that are designed for education and/or being marketed directly to students.
- As with all areas of AI, these tools are developing at a rapid pace and adoption among students is growing. We believe it is essential that the higher education sector collaborates to quickly assess the impact they are having on the learning process, leading to the development of responsible policies.
Our approach to ethical AI in higher education
At Blackboard, we take an optimistic view of the role of generative AI in pedagogy. We have led the market in developing native tools that add efficiency and engagement to online learning, all backed by our Trustworthy AI Approach. We have also engaged actively with the higher education community around research and policy development, and have publicly advocated for the importance of authentic assessment as a core component of academic integrity in the AI era. This has been positively received by our partner institutions and other industry voices.
At the center of our thinking here is what Ethan Mollick, Associate Professor at the Wharton School, describes as “co-intelligence”. Put simply, this entails a combination of human and artificial intelligence to achieve a desired goal, supported by strong guidelines on the appropriate role for each. Our view is that the workforce is already moving in this direction, and, accordingly, that higher education can best meet the needs of learners by doing the same.
AI Agents are a threat to learning and academic integrity
Many third-party AI Agents, however, are not examples of co-intelligence. Quite the opposite, in fact — they remove the student almost entirely, meaning the only intelligence involved is artificial. They do not assist the student’s accrual of knowledge, and they impede the instructor’s and institution’s ability to assess learner understanding and progress.
In short, we believe that AI Agents can be detrimental to the learning process when they replace the act of learning. This directly threatens academic integrity and warrants a concerted effort from the higher education sector to address.
AI Agents cannot be identified by EdTech vendors alone
Many within our global learning network have expressed concerns about AI Agents, and understandably asked if we can prevent their use in our Blackboard learning management system (LMS). Upon receiving these inquiries, we immediately released a blog post, providing full transparency to our partner institutions on the limitations EdTech vendors face in this area.
In the LMS, AI Agents don’t show up as separate apps or users—because they use the user’s account and credentials, they look identical to normal student activity. Because they operate at the web interaction and automation layers—filling in forms, clicking buttons, making API calls, etc.—they fall outside what the LMS itself can control. Most AI Agent activities in the platform are indistinguishable from user activities.
In short, given currently available technologies, it is not possible for Blackboard—or any other LMS vendor or provider of a web-based service—to reliably detect an AI Agent, much less block one.
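The detection gap can be illustrated with a minimal sketch. The marker list and User-Agent strings below are hypothetical; the point is that any signal an agent sends voluntarily can also be withheld or spoofed, so a server-side check sees ordinary student traffic.

```python
# Illustrative only: why signal-based detection fails today.
# Suppose an LMS tried to spot AI Agents from the User-Agent
# header, using a (hypothetical) list of known agent markers.
KNOWN_AGENT_MARKERS = ["Comet", "AI-Agent"]

def looks_like_agent(user_agent: str) -> bool:
    """Naive check: does the request announce itself as an agent?"""
    return any(marker in user_agent for marker in KNOWN_AGENT_MARKERS)

# An agent that announces itself is caught...
print(looks_like_agent("Comet/1.0 (AI Browser)"))  # True

# ...but an agent driving a real browser session presents a normal
# browser User-Agent, plus the student's own credentials, so the
# check sees nothing unusual.
print(looks_like_agent(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"))  # False
```

Any heuristic of this kind can be defeated the same way, which is why cooperation from agent vendors, rather than unilateral detection, is the viable path.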
3. Working together to address AI Agents in pedagogy
We don’t believe a silver bullet exists here; no single action or step will resolve the ethical challenges of AI Agents immediately. Nor do we believe that simply trying to block AI will lead to good results for the sector.
Instead, what is required is a multi-faceted approach with both technological and pedagogical elements – underpinned by a commitment to evolve as AI Agent technologies change.
Technology: A collaborative approach to identify use of AI Agents
A sustainable solution to the challenge of AI Agents in education depends on direct cooperation from those developing these technologies. Academic integrity can only be protected, and institutions truly supported, if the creators of agentic AI tools give education technology providers reliable means to identify AI Agent activity inside LMS platforms.
To make this possible, agents themselves must carry verifiable signals that distinguish their actions from those of human users. One promising technical standard is HTTP Message Signatures, as promoted by the Internet Engineering Task Force (IETF). With this approach, every interaction initiated by an AI Agent, such as logging in to a platform or completing an activity, contains a cryptographic signature embedded in the HTTP request. This signature allows the LMS and other educational platforms to immediately and accurately recognize actions performed by an AI Agent, and to make this information available to institutions.
If developers of AI Agent tools implement HTTP Message Signatures or similar mechanisms, Blackboard and other EdTech vendors could detect and flag agentic activity, giving institutions the ability to uphold policy and academic standards in partnership with their technology providers.
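As a rough illustration of how such a scheme could work, the sketch below loosely follows the shape of HTTP Message Signatures (RFC 9421) but is heavily simplified: real deployments cover more request components and typically use asymmetric keys rather than the shared HMAC secret assumed here, and all names (key IDs, hosts, paths) are hypothetical.

```python
import base64
import hashlib
import hmac

def signature_base(method, authority, path, created, keyid):
    """Build a 'signature base' string over the covered request components,
    loosely following the structure used by HTTP Message Signatures."""
    params = f'("@method" "@authority" "@path");created={created};keyid="{keyid}"'
    lines = [
        f'"@method": {method}',
        f'"@authority": {authority}',
        f'"@path": {path}',
        f'"@signature-params": {params}',
    ]
    return "\n".join(lines)

def sign(secret, method, authority, path, created, keyid):
    """The AI Agent signs each outgoing request with its key."""
    base = signature_base(method, authority, path, created, keyid)
    mac = hmac.new(secret, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

def verify(secret, method, authority, path, created, keyid, signature):
    """The LMS recomputes the signature; a valid match identifies the
    request as agent-initiated and names the agent via its key ID."""
    expected = sign(secret, method, authority, path, created, keyid)
    return hmac.compare_digest(expected, signature)

# A hypothetical agent submits an assignment and signs the request.
secret = b"agent-vendor-shared-key"
sig = sign(secret, "POST", "lms.example.edu",
           "/courses/101/assignments/7/submit", 1767225600, "comet-agent-v1")

# The LMS verifies the signature and can flag the activity for admins.
print(verify(secret, "POST", "lms.example.edu",
             "/courses/101/assignments/7/submit", 1767225600, "comet-agent-v1", sig))
```

The key property is that the signal is cryptographic and attached by the agent vendor, so the LMS does not have to guess from behavior: an unsigned request is presumed human, a signed one is verifiably agentic.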
For more details on HTTP Message Signatures, see the IETF’s proposed architecture.
Pedagogy: An ongoing dedication to promoting original student work
While these technological advancements hold promise, we would not encourage institutions to think of them as a ‘catch all’. The rise of generative AI is going to require evolution of pedagogical practices, and AI Agents are no exception.
At Blackboard, we’re dedicated to helping institutions in this area. As a recent example, Lisa A. Clark, EdD, Associate Vice President of Academic Transformation, has produced a reframed view of Bloom’s Taxonomy, providing valuable insights into how authentic assessment can be evolved and applied in the AI era.
Leading educators and institutions are also publishing best practices based on their experiences. We encourage our institutional partners to engage in these conversations, and to consider the following strategies (among others) as part of the response to AI agents:
- Vanderbilt University outlines the importance of discussing academic integrity and its value directly with students, as well as localizing tasks and inviting personal reflection to elicit authentic responses.
- Harvard University details how AI can be successfully integrated into tasks to promote AI literacy, complemented by an oral defense to ensure comprehension.
- Educators at Curtin University and Deakin University in Australia articulate that generative AI can be a catalyst for positive change, allowing higher education to “move away from assessment practices that sort, rank, and grade students toward approaches that recognize diverse ways of knowing.”
- Our great partners at Lamar University, spearheaded by Associate Provost Ashley Dockens, advocate for a tailored approach to AI in each program area. Their “AI in Every Program” initiative evaluates ethical use in different contexts to establish pedagogical best practices.
4. Blackboard’s commitment to a sustainable solution
We are deeply committed to advancing both the technological and pedagogical sides of this equation, as follows:
Technology –
- We will advocate for the application of HTTP Message Signatures, or similar methods of detection, in all appropriate forums, including:
- In direct conversations with major technology partners.
- In major industry bodies with which we are affiliated: 1EdTech, Educause, JISC, and more.
- In dialogue with analysts, industry publications, and other forums of thought leadership.
- Where relevant, in dialogue with governments and regulatory bodies.
- Should advancements be made, we commit to prioritizing development of the Blackboard LMS to allow our partner institutions to benefit.
- In the case of HTTP Message Signatures, this would involve providing administrators with tools to track, understand, and report on AI Agent usage.
Pedagogy –
- We will continue to develop activities and assessment tasks in the Blackboard platform that promote original submissions from students.
- The expanded use of audio and video, driven by the development of Video Studio, is a recent example of this in action.
- We will continue to listen to partner institutions and develop our platform in a way that responds to their challenges.
- Ideas submitted via our Idea Exchange that are designed to promote authenticity in student work will be given priority in our development plans.
- We will engage in ongoing research and sharing of best practices, through our annual user conference, the Community, and elsewhere.
In pursuing the above, we commit to upholding the following values:
- Collaboration – We will work openly with all relevant stakeholders—including direct competitors—for the benefit of the higher education sector.
- Privacy and Security – We will ensure that all user data is protected as part of this collaborative approach, in line with our company policies and government regulations, and continue to operate in line with our Trustworthy AI framework.
- Transparency – As demonstrated by our initial blog post, we commit to providing our learning community with full transparency on how this situation evolves and any implications for institutions. This commitment extends to an open invitation for all our partner institutions to discuss this area in more depth.
