I enjoy listening to exceptionally smart, well-informed practitioners discuss their work in detail. This habit has me tuning into Supreme Court oral arguments during lunch or playing panel discussions and conference talks while performing mindless copy-and-paste tasks—like creating Google calendars for teachers’ schedules or K-12 pacing guides.
Yesterday I stumbled upon an hour-long video of a Stanford Law School program featuring AI thought leaders from industry and academia. They explored topics aimed at deepening our understanding of AI’s potential for social good, with a particular focus on legal applications. The discussion touched on security challenges posed by AI systems, including vulnerabilities, adversarial attacks, and the need for robust defense mechanisms.
What does that mean to a public high school teacher, building principal, or district administrator? Not much in practical terms, except that it underscores the vast scope of AI discourse and the breadth of perspectives on the subject. Even skimming through the video and catching just a minute of any speaker’s remarks reveals the enormous gap between how AI is discussed in academia and industry versus how it’s framed in K-12 education.
Using NotebookLM to Extract Key Insights
This video also demonstrates how AI tools like NotebookLM can help digest and analyze content. After uploading the video, I skimmed the AI-generated study guide to find the sections relevant to social studies and education. By querying NotebookLM and using it to review segments of the video, I pulled quotes and pinpointed key takeaways. Two points stood out as particularly insightful for educators.
Soft Power: AI’s ability to shape perceptions of truth
One of the phrases in NotebookLM’s study guide struck me as particularly alarming:
“Soft Power Problem: AI shaping perception of truth, values, history (e.g., models developed in China promoting specific political narratives).”
As a history educator witnessing one of the most aggressive efforts to rewrite American history, I was immediately drawn to this phrase. I asked NotebookLM to direct me to the exact moment in the video where it was discussed. While I could have just read the transcript, I wanted to hear the speaker’s own words, and you should too. It only takes three or four minutes.
Listen to Christina Q. Knight discuss her geopolitical concerns about AI, particularly how it wields “soft power” to influence thought:
Hearing these views expressed aloud is a different experience from reading them. That distinction is interesting in its own right, but it’s a conversation for another time.
Knight’s warning is critical for educators, especially those in social studies and history:
“When these models are embedded in healthcare, education, financial systems, this could raise huge concerns. This is also a soft power problem because these models are increasingly shaping how people perceive truth, what we think about values and history, and informing our opinions… and models developed in China, for instance, have to abide by very specific requirements to uphold certain values and promote certain political narratives… This is very harmful when it’s shaping the global information ecosystem.”
Social studies teachers should consider this alongside findings from the Social Perception Lab, published by the Network Contagion Research Institute:
- Nearly 30% of Americans polled in late summer/early fall believed Trump staged assassination attempts in Pennsylvania and Florida to gain sympathy.
- 28% believed immigrants were eating pets in Springfield, Ohio.
We’re already living in a world where conflicting “realities” diverge from material truth. How should this inform our approach to AI in classrooms?
The Need for Human Verification
NotebookLM’s briefing document included a useful analogy for educators: Students using AI should be like drivers in self-driving cars—alert and never fully trusting the system.
Shreya Rajpal (CEO & Co-founder of Guardrails AI) noted:
“the critical issue here is that all of our AI systems today are not at the level of reliability where you can consume what they’ve built without human intervention or human verification.”
She expanded with a compelling comparison:
“I think this kind of like abstracts to a broader problem with AI generated outputs, which is that the burden for the human changes from creation to verification. And there’s been like numerous studies in the past, et cetera, that show that when that burden shifts, like verification is just like, it’s an easier problem to mess up. We saw this in self-driving all the time, right? For example, like at self-driving car companies, the driver, so to speak, is in the driver’s seat. Even if they’re not actively driving, they still need to be extremely alert. And it’s a much higher burden than just driving the car yourself, right? Because you have to be like very attuned to your surroundings. And we’ve seen countless times before that, when there’s a situation like these, like mess ups happen and people take their eye off the ball.”
Again, listening to her speak is more impactful than just reading the transcript.
Rajpal’s point about the difficulty of sustained vigilance is crucial for educators. If AI can distort students’ perception of truth, our lessons must emphasize critical verification. We should also worry about how today’s transactional education system incentivizes students to seek “correct answers” quickly rather than cultivating their ability to discern truth and think for themselves.
It’s important to take the discussions of leaders in AI industry and academia into account when thinking about AI in K-12 public education. You’ll have a better sense of the world we’re preparing young people to enter while avoiding the tarpit of academic integrity debates.
