Thought Leadership
October 1, 2025

The Latest Front in the Battle for Academic Integrity: Initial Thoughts in Response to the Rise of AI Agents

# AI
# Higher Ed

Unmasking AI Agents: The Next Academic Integrity Challenge

Dom Gore
Ben Burrett
Concern about the use of AI Agents in education has exploded in recent weeks, driven by offerings such as Perplexity Comet and Google Homework Help, which are being actively marketed to learners. For the uninitiated, AI Agents—sometimes referred to as AI Browsers or AI Browser Extensions—are readily accessible online tools that go beyond creating content in response to a user prompt (as one might expect from ChatGPT, for example) to actively performing web-based tasks at the user’s instruction. This includes the capability to log in to the LMS on a student’s behalf (provided the student shares their credentials with the AI Agent), scan the information therein, and even attempt course activities—all backed by some of the world’s most powerful AI models. Understandably, recent growth in learner access to these tools has been met with alarm from many in the higher education sector.
A crucial point must immediately be clarified: AI Agents don’t show up as separate apps or users—they look identical to normal student activity in the LMS. Because they operate at the web interaction and automation layers—filling in forms, clicking buttons, making API calls, etc.—they fall outside what the LMS itself can control. Many are completely invisible to the platform.
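To make this concrete, here is a minimal sketch of agent-style browser automation, assuming a Python environment with the Playwright library installed. The LMS URL, form selectors, and credentials below are hypothetical placeholders, not any real institution's configuration. Because the script drives a genuine browser, the LMS receives ordinary client traffic and has no automation signal to detect.

```python
# A minimal sketch of agent-style browser automation (hypothetical LMS
# URL and selectors; requires `pip install playwright` and
# `playwright install chromium`).
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Launch a real Chromium instance: the server sees normal browser
    # requests, with standard headers, cookies, and JavaScript execution.
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()

    # Log in exactly as a student would, by filling the login form.
    page.goto("https://lms.example.edu/login")      # hypothetical URL
    page.fill("#username", "student@example.edu")   # hypothetical selector
    page.fill("#password", "shared-credential")
    page.click("button[type=submit]")

    # Open a course page and read the same DOM a human sees; nothing in
    # this flow is visible to the LMS as automation.
    page.goto("https://lms.example.edu/courses/101")
    print(page.inner_text("main"))

    browser.close()
```

Nothing in this flow calls an LMS API in an unusual way; every action travels the same rendering and network path as human input, which is why application-level detection has so little to work with.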
It is also important to stress that this is true for all websites, not just online learning environments. A recent research paper, The Hidden Dangers of Browsing AI Agents, explains that these tools are deliberately designed to slip past application-level detection and can interact with systems in ways that go unnoticed. The BBC has made a similar point, noting that AI Agents are “difficult to trace, let alone block, because they disguise themselves as standard client activity within existing digital ecosystems.”
In short, given currently available technologies, it is not possible for Blackboard®—or any other LMS vendor or provider of a web-based service—to reliably detect an AI Agent, much less block one.
In the absence of technical controls, any short-term responses to the potential misuse of AI Agents will need to be delivered through institutional policies. Genuine prevention of student access to AI Agents will only be possible for institutions that control the whole network and/or device that students use, which may be a route the K-12 sector explores but has limited application in higher ed. Other tools for consideration include lockdown browsers, which create a sandboxed environment around the LMS. Institutions can also update their generative AI policies to clearly define the permitted and prohibited uses of AI Agents and provide students with thorough training on these updated policies.
Here at Anthology, our fundamental view remains—as we’ve articulated previously in the context of AI plagiarism—that blocking AI in the classroom is extremely difficult, and institutions will see best results by adapting their pedagogical approach to the AI era. AI already has a pivotal role in both students’ daily lives and many desirable areas of employment, and higher ed should focus on preparing learners for a world where human and artificial intelligence are constantly applied in combination. At the heart of addressing challenges such as those presented by AI Agents is learner engagement—when students are invested in their course, and see a clear correlation between what they’re learning and opportunities in the workforce, they’re more inclined to want to genuinely understand the subject matter, and to use AI in a way that aids rather than inhibits or bypasses this understanding.
Rethinking assessment practices emerges as a key priority in this context. With this in mind, we have recently assembled a Future of Assessment Working Group, and the insights shared by this global collective are already informing our development plans. Spearheaded by Lisa Clark, EdD, associate vice president of academic transformation at Anthology, our new white paper reframes Bloom’s Taxonomy for the AI era, providing actionable steps for institutions to deliver authentic assessment tasks. We’ll follow this with a broader paper on the future of assessment in October and further content in the months to come. Keep an eye on our LinkedIn and the Anthology Community for the latest!
Finally, as AI applications and their use continue to expand exponentially, the importance of ethical frameworks grows in parallel. Our Anthology Trustworthy Approach remains the guiding light as we add market-leading, native AI innovations to Blackboard, and we continue to engage with institutional partners from all corners of the globe to help establish effective policies for their teachers, students, and other stakeholders.
