For the past few years, Baltimore City Public Schools teacher Lee Krempel has watched students try to pass off generative artificial intelligence as their work — and the giveaways were often glaring.
“One time I knew for sure this kid, just from their class performance, didn’t actually read Hamlet that closely, and suddenly they had ideas in an essay on feminism in Hamlet and psychoanalytic criticism in Hamlet … it was actually kind of hilarious,” Krempel said.
But Krempel, who teaches 12th grade English and AP Literature, hasn’t always been sure how to handle these new forms of plagiarism. He’s grateful for the AI guidance City Schools released last week to help teachers navigate use of the technology in the ChatGPT era.
Dawn Shirey, the district’s director of virtual learning and instructional technology, led the development of the new guidance. After hearing the struggles of teachers like Krempel, she wanted to make sure staff had clear direction on how to manage AI use in the classroom.
For now, the guidance is only viewable through a City Schools account as the district works to create a public-facing page.
The guidelines include:
- A definition of generative AI and an outline of commonly used tools
- An introduction to a generative AI “acceptable use scale” to guide students and teachers on assignments
- An explanation of inappropriate uses of AI, including submitting AI-generated work without citation or harassing another student
- Guidance on how teachers can address plagiarism and enforce academic integrity
- An outline of privacy rules for using AI tools
FAQ: Baltimore Schools’ AI guidance
What is the new guidance for?
To help Baltimore City Public Schools teachers navigate generative artificial intelligence use in classrooms. It includes definitions of generative AI, an acceptable use scale for assignments, explanations of inappropriate uses, academic integrity enforcement methods, privacy rules and advice against relying on AI detection software.
Who developed this guidance?
Dawn Shirey, the district’s director of virtual learning and instructional technology, led the development. She convened a workgroup, drew on recommendations from TeachAI, and hosted listening sessions with parents, teachers, special education staff and students.
Where can I access the guidance?
At time of publication, the guidance is only viewable through a City Schools account. The district is working to create a public-facing page.
What is the ‘acceptable use’ scale?
The district created a five-level generative AI acceptable use scale. At Level 1, students may not use AI at all. At Level 5, students can use AI freely with personal oversight, as long as they cite the tool and link any chats to their work. Teachers can apply different levels based on specific assignments.
What constitutes inappropriate use of AI?
Inappropriate uses include submitting AI-generated work without citation and using AI to harass another student.
How should teachers handle suspected plagiarism?
The guidance advises teachers not to rely on online AI checkers to prove plagiarism, but instead to draw on their knowledge of a student’s past work.
Why doesn’t the district recommend AI detection software?
The district has heard of too many false positives and false negatives with AI checkers, and recent academic studies have confirmed they are unreliable.
How are consequences determined for AI misuse?
Teachers can determine their own approaches within the district’s existing academic integrity policy. Some teachers may allow students to redo essays, while others assign zeros for plagiarized work.
What training is available for teachers?
The district held optional professional learning sessions the week before classes began, including discussions on AI ethics and how to integrate AI tools into the classroom. The sessions concluded with a pitch competition where teachers developed and presented their own ideas.
Instead of unreliable AI checkers, guiding teachers to trust their instincts
To draft the guidance, Shirey convened a workgroup, drawing on recommendations from TeachAI, an initiative that advises on AI in education. During the last academic year, she convened a series of listening sessions with parents, teachers, special education staff and students.
She found that teachers often felt uncertain about how to address suspected plagiarism, while students were unsure how to use AI appropriately as a tool.
A key part of the new guidance advises teachers not to rely on online AI checkers to prove plagiarism, but instead to draw on their knowledge of a student’s past work. These AI “detectors” remain inconsistent, per a January 2025 study from the Journal of Applied Learning and Teaching.
“We’ve heard of too many false positives and false negatives with checkers… So we really don’t want folks to rely on it,” Shirey said.
When ChatGPT first went live, Krempel, the English teacher, sometimes turned to these detection tools. He now regrets using the software to start those conversations with students.
“I’m doing real-time writing with students in class all the time, so I’m familiar with their voice and the level of complexity in their sentences — I can tell without the software,” Krempel told Technical.ly.
When plagiarism is suspected, teachers can decide on their own approaches, aligned with the district’s existing academic integrity policy. In Krempel’s classes, students may redo essays, while another teacher, who asked not to be named for fear of administrative blowback, assigns a zero for plagiarized work.
“The first time it happened, it was a warning, and I explained to them that they were going to get a zero in the grade book for it… and if it happened again, it would be a referral and a call home,” the teacher said. “But there wasn’t super clear guidance on how to approach that in the past.”
Flexible guidelines, because not all teaching is the same
At Level 1 of the district’s new generative AI acceptable use scale, students may not use AI at all. At Level 5, they can use it freely with personal oversight, as long as they cite the tool and link any chats to their work.
Krempel has seen that students often don’t understand what constitutes inappropriate use of AI.
“It makes sense that a 16- or 17-year-old, who hasn’t quite developed an idea of plagiarism, thinks they can just snatch some of the language that ChatGPT has used and put it in an essay, unattributed,” Krempel said.
Plagiarism was teachers’ biggest focus during the listening groups, per Shirey, but the new guidance also addresses the biases present in generative AI tools, encourages teachers to discuss how the technology can reinforce stereotypes, and refers teachers to the district’s bullying policy if students use AI to harass others.
The district held optional professional learning sessions for staff members the week before classes began to help teachers understand the guidance and integrate AI tools into their classrooms. Sessions included how to introduce grade-level appropriate discussions on the ethics of AI and concluded with a pitch competition, where teachers developed and presented their own ideas. One teacher created a Gemini Gem that adjusts the difficulty levels of primary sources and provides a Spanish translation.
High school English teacher Forrest Gertin helped lead the sessions. A proponent of new technologies in the classroom, Gertin uses the tools to coach the school debate team by prompting students with follow-up questions. He sees a lot of benefit from the new guidance.
“We really wanted to slow down the process,” Gertin said, “from ‘Oh my god, it’s here’ to how can it help and really improve the learning experience for our students.”