As AI’s capabilities grow, so does its acceptance in educational settings, whether by choice or by necessity. In both K–12 and higher education, it has moved quickly from a topic of speculation to a daily presence in classrooms, offices and study sessions. Teachers, professors and students are finding new ways to integrate AI into their routines, while schools and universities race to establish clear guidelines for its use.
K–12 teachers are experimenting with AI for lesson planning, drafting practice materials and differentiating instruction for varied learning needs. Some use it to generate example problems, simplify complex readings or model written responses at different skill levels, giving educators more time to focus on direct interaction with students. But this raises questions about quality control, transparency and the boundary between instructional support and over-reliance on the new tools. Meanwhile, students are using AI to brainstorm ideas, outline essays, translate text and check their understanding of new concepts. Surveys show that usage is higher among older high school students, particularly in settings where technology access is widespread. The patterns vary, though: some students use AI to supplement their learning, while others give in to temptation and use it as a shortcut, prompting concerns from teachers who want to protect the integrity of student work without shutting down legitimate learning uses.
AI governance in K–12 remains inconsistent. International bodies such as UNESCO recommend human-centered policies, transparency in data use and age-appropriate guidelines. In the United States, state departments of education and local school districts are publishing their own standards, frequently focusing on disclosure rules, lists of acceptable tools and prohibited uses during assessments. Some, such as Washington State, frame AI as a way to deepen inquiry and reflection rather than replace student thinking.
In contrast, higher education has moved faster toward experimentation. Professors are using AI to create course outlines, draft feedback, adapt reading lists for different ability levels and prepare interactive classroom activities. Faculty in technical disciplines employ it to generate code snippets or data analysis templates, while humanities instructors use it to frame discussion questions or create parallel reading materials. Administrative staff also deploy AI for scheduling, communications and academic advising support.

To no one’s surprise, college students’ use mirrors much of what is happening in K–12, but with greater independence. They apply AI to generate study guides, work through practice problems and refine drafts. Instructors are responding with strategies that integrate AI into coursework in transparent ways, such as assignments that require students to critique AI-generated responses or annotate their work to show where and how AI contributed. Blanket bans, where attempted, have proven difficult to enforce and, importantly, clash with the expectations that students will face in the workplace.
In both K–12 and university settings, the challenge is no longer whether AI will be part of teaching and learning, but rather how it will be used responsibly and effectively. Teachers and professors are looking for AI tools that support their creativity, uphold academic integrity and comply with local policies. They need ways to create assessments that are both engaging and resistant to AI-enabled shortcuts, to map student understanding in real time and to respond with substantive feedback that helps learners improve. They also need systems that can grow with them as technology evolves, integrating as seamlessly as possible into their existing workflows without adding complexity.
We think this is where our AI-augmented assessment authoring engine, QueryTek, will offer a new way forward. Designed for academic publishing and adaptable to both K–12 and higher education contexts, QueryTek is an AI-assisted assessment and feedback engine that works in collaboration with subject matter experts. It allows educators to produce high-quality assessments tied to clear objectives and cognitive levels, and to deliver meaningful rejoinders: targeted feedback crafted to deepen understanding. QueryTek’s architecture supports governance by letting institutions define their rules for AI use and embed those rules directly into assessment creation and delivery. As AI becomes a more integrated part of content creation, QueryTek gives educators a trusted collaborator, one that saves time, enhances instructional quality and scales easily alongside the evolving educational landscape.
Sources:
Ellucian. 2024 Higher Education Research Study. Reston, VA: Ellucian, 2024. https://www.ellucian.com/resources/2024-ellucian-higher-education-research-study
EDUCAUSE. “2024 EDUCAUSE Horizon Report: Teaching and Learning Edition.” EDUCAUSE, 2024. https://www.educause.edu/horizon-report-teaching-and-learning-2024
Pew Research Center. “Teens and ChatGPT: AI Use in Schools.” Pew Research Center, 2024. https://www.pewresearch.org/internet/2024/05/02/teens-and-chatgpt-ai-use-in-schools
RAND Corporation. “Teachers’ Use of Artificial Intelligence.” RAND, 2024. https://www.rand.org/pubs/research_reports/RRA2517-1.html
UNESCO. Guidance for Generative AI in Education and Research. Paris: UNESCO, 2023. https://unesdoc.unesco.org/ark:/48223/pf0000384948
Washington Office of Superintendent of Public Instruction. “Human–AI–Human: Guidance for AI in Washington Schools.” Olympia, WA: OSPI, 2024. https://www.k12.wa.us/ai-guidance
Wiley. 2024 Voice of the Student and Instructor Report. Hoboken, NJ: Wiley, 2024. https://www.wiley.com/en-us/network/education/voice-of-the-student-and-instructor
Extanto Technology. “Vermont and the Responsible Use of AI.” 2022. https://extanto.com/articles/vermont-and-the-responsible-use-of-ai/
Extanto Technology. “Vermont’s AI Policy Informs User Guide for Responsible AI Implementation.” 2025. https://extanto.com/articles/vermonts-ai-policy-informs-user-guide-for-responsible-ai-implementation/