Monday, February 16, 2026

Educational Democracy in the Age of ChatGPT: Stop Grading Learning


A headline caught my eye: “ChatGPT is in classrooms. How should educators now assess student learning?”

And my immediate response was: why should we assess student learning at all?

Why are we still treating education like a competitive sport—like a game with gatekeepers—where someone in authority decides what counts as “knowing,” then measures students against it?

Because if we’re honest, “assessment” often hides a deeper assumption:

What I want you to learn is more important than what you want to learn.

Educational democracy: the student’s learning belongs to the student

In the programs we’re building at aibia, we don’t use grades. We don’t “assess” student work in the usual sense.

We read it. We respond to it. We give feedback designed to challenge the work, stretch it, deepen it, and push it beyond where it already is.

The goal is not compliance. The goal is independent thought.

Our classrooms—and our field-based programs—are designed so students can follow their own ideas, their own critiques, their own questions: the things that genuinely stand out to them, the things that matter to them, the things they can build a life around.

This is what we’re calling educational democracy.

And it’s grounded in a method we call critical reimaginative theory: we move beyond critique, start reimagining answers to those critiques, and ask what it would mean to rebuild the world differently—then practice that rebuilding in real places, with real people, under real constraints.

The real problem isn’t AI. It’s that academia won’t interrogate its foundations.

The article itself is smart in the way it talks about generative AI as a classroom reality. It makes useful points about the “post-plagiarism” situation—how education may need to move beyond panicked policing and into clearer expectations, process-based learning, and preserving student voice. 

But what struck me is what almost nobody says out loud:

We keep starting from the same old foundation, as if it’s unquestionable.

  • Why are we demanding students learn what we want them to learn?
  • Why are we demanding they become who we want them to become?
  • Why do we treat knowledge like something we “own,” then distribute downward?
  • Why are we assessing students?

In a decolonial world—especially in a world where we no longer control access to knowledge—those questions aren’t optional. They’re the whole point.

Universities don’t control knowledge anymore—and that’s not a tragedy

Take something like the Peloponnesian War.

Students don’t need a university class just to access information about it. They can look it up in seconds. They’ll read Wikipedia. Watch a documentary. Pull up a timeline. Or ask an AI system for a quick overview and engage in a real-time dialog about it.

And you know what?

That may be all the knowledge they actually need about it.

If what we’re offering is mostly controlled access to information, then yes—AI makes us obsolete. But the deeper reality is: the internet already did that.

The question is what education is for when information is everywhere.

“It’s just theory” is the most revealing insult in modern education

I once had a friend dismiss what I was doing by saying: “Who cares? It’s just theory.” He’d gone into an MBA/accounting pathway. For him, learning was mostly about qualification—proof that someone can do a job.

Fair enough: there are areas where demonstration matters—coding, medicine, engineering, accounting, trades—where competence has real consequences and you need credible signals of capability. But that’s not what most of higher education pretends to be.

So much of what we “teach” is ideas. Frameworks. Interpretation. Ethics. Politics. Meaning. Critique. And in those spaces, the obsession with assessment often turns learning into compliance: learn what I assign, write what I recognize, mirror what I reward.

Assignments often fail because they start from the wrong authority

We hand students an assignment and say: “Write 2,000 words on X.” But maybe they don’t care about X. Maybe X has no emotional gravity for them. Maybe it doesn’t connect to their life, their struggles, their hopes, their questions.

So what are we doing?

We’re telling them what they’re supposed to be interested in. Then we act surprised when they disengage—or when they use the easiest tool available to produce something “acceptable.”

That’s not a student failure. That’s a design failure.

AI makes the old gatekeeping model look even more absurd

The more powerful generative AI becomes, the more obvious it is that the old model was already collapsing:

  • Students can access information instantly.
  • They can explore philosophies they were never taught.
  • They can self-direct at a level that was impossible 30 years ago.
  • All in real time, and in dialog with an authoritative source they have a relationship with.

When I was growing up, nobody ever taught me anarchism as a serious tradition. I got the negative caricature: bombs, chaos, disorder. Yet, today, a student hears the word once and can learn the entire intellectual history in an afternoon—thinkers, movements, debates, critiques, variations.

So who are we to pretend that at the doorway of a university classroom, students suddenly must learn what we decree?

Post-plagiarism isn’t a crisis. It’s a mirror.

If “plagiarism” feels like an existential threat, it’s because learning has been commodified. Plagiarism is what happens when education becomes a product, and student work becomes a credentialing artifact, and “success” becomes something you chase for approval.

But if the goal is the learner’s journey toward meaning—toward becoming a fully fledged, independent, autonomous self—then the question changes. 

It becomes less: “Who wrote this sentence?”

And more: “What did you learn? What did you struggle with? What changed in you? What can you now see, do, or imagine that you couldn’t before?”

That’s why I don’t hear “post-plagiarism” as scary. I hear it as a chance to finally admit the truth:

We should have been building education for learning all along—not for policing.

Let the dinosaur go

There’s a line in the “post-plagiarism” framing that sticks with me: it’s not about panic, it’s about rethinking what it means to learn and demonstrate knowledge in a world where human cognition interacts with digital systems. 

Exactly.

But let’s go further. If we keep trying to force AI into the old assessment regime, we’re basically trying to keep a dinosaur alive. And anybody who’s seen Jurassic Park knows: holding onto the dinosaur is not the best thing...

Let it go.

Let students run free—not in chaos, but in responsibility. Stop treating education as competition, conflict, and gatekeeping. Start treating it as facilitation: helping people become more capable of thinking, judging, imagining, and building.

Educational democracy means the student’s learning belongs to the student.

And in a world where knowledge is everywhere, that’s not idealism.

Those are simply the new rules of the game...

Friday, January 9, 2026

AI Didn’t Make Me Smarter. It Made Academia Less Ableist.

I just watched a PBS NewsHour segment on generative AI in higher education, and it landed with a strange mix of déjà vu and relief. The piece frames this moment well: the current senior class is the first to have spent almost their whole college career in the age of generative AI, and schools are scrambling because the tech is moving faster than policies (or detection) can keep up.

But the part that hit me wasn’t the “future of education” rhetoric. It was the familiar storyline of gatekeepers on one side, pragmatists on the other, and a whole set of people—often the ones already struggling to fit into academia’s “normal”—left to absorb the impact.  

The PBS framing: policing vs. pedagogy

PBS follows a philosophy professor who describes a recognizable shift in student writing: suddenly polished, impersonal, and “business document”-ish. Then we get the enforcement reality: detection tools, time-consuming investigations, and professors describing teaching as turning into policing.

And then the other side of the debate appears: faculty and administrators saying students are going to use these tools anyway, so the only sane approach is teaching responsible use. One professor encourages students to use AI to critique their own work and deepen understanding; Ohio State rolls out an “AI fluency” initiative that requires undergrads across disciplines to learn and use AI tools.

PBS also highlights something that the 'panic' discourse often forgets: AI isn’t only about writing essays. In the segment, it’s used to speed up research processes (like scanning large sets of recordings) and to support creative/technical experimentation.

All of that is real. But here’s what I want to add: for some of us, AI isn’t primarily a cheating tool or way to crunch data faster. It’s an accommodation tool—maybe the best one we’ve ever had.

My version of “academic integrity” is surviving a system not built for me

I wasn’t diagnosed with a learning 'disability' until late—19. That meant my early education was the old-school version of “try harder.” Eventually I got the standard accommodations: extra time on exams, alternative formats... the slow, piecemeal accessibility fixes that help you limp through a system designed around a narrow definition of “good academic work.”

Yet even with accommodations, I hit the same wall again and again: academia is built around normative minds, with reading and writing as the standard metric for success. If you don’t read quickly, don’t draft cleanly, don’t naturally produce “proper academic prose,” you can have strong ideas and still never make it—or be treated like you don’t belong.

My journey through academia has always been a struggle, school after school, disclose or not, accommodations or just try harder. Mistakenly switching around page numbers gets you scolded, made an example of: "How could you think like that?" "What were you thinking?" "If he were at university X, he'd be gone already..." Standing up in these situations, against supervisors trying to bang your head into normative logics, brings retaliation, and disclosing at the beginning of a job gets you fired before you even start. At one point I even had to file a lawsuit against an institution, about which all I'm allowed to say now is that it 'has been resolved.' Which, as anyone familiar with lawsuit settlements can tell you, didn't leave me empty-handed.

These situations aren't even about the difficulties of writing for the normative-minded academic audience, or about navigating the 'publish or perish' academic job market, where 'accommodation' for extra time doesn't exist. Do you really think a university is going to pay you a full salary and let you teach part time to accommodate your research and publishing?

So when I hear parts of the AI debate framed like “kids these days just want shortcuts,” I get the critique—but I also want to flip the lens:

What if AI isn’t a shortcut around working—what if it’s a ramp into a building that never installed stairs?

The stat nobody knows what to do with: 86% are already using it

PBS cites a survey: 86% of college students are using AI tools like ChatGPT, Claude, and Google Gemini for schoolwork. (For a widely cited source on the same “86%” claim, see the Digital Education Council Global AI Student Survey.) Academia can argue about whether that’s good. But operationally, the world has already moved in that direction, and the merits are easy to see across most of life. So the real question is: what do we do now—especially for students and scholars who were historically excluded by the reading/writing bottleneck?

What AI actually does for me (and what it doesn’t)

AI didn’t magically give me better ideas. I’ve been generating ideas for decades. The difference is that AI helps me translate my thinking into a structure that the “normative academic mind” can actually digest.

Here’s my current workflow in plain terms:

  • I start from material I already have (hundreds of thousands of words across years).
  • I collate everything related to one concept (say, “integrated autonomy”) into one document.
  • I feed that to AI and ask it to extract themes, claims, and through-lines—basically what a research assistant would do.
  • Then I have it generate an outline and work through it: I push it away from generic phrasing, force it to keep my terminology, and rebuild the outline until the argument matches what I actually mean.
  • Then we draft sections one by one: it offers a first draft, and I make it my own, in my wording, perspectives, and argument.
  • I do the human work that matters: the research, the conceptualizing, the argument, making judgment calls, adding nuance, selecting what to cut, and verifying every factual/citation claim.
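The collation step in the workflow above can be sketched as a small script. Everything here is hypothetical illustration, not the author's actual tooling: the notes folder, the `.txt` format, and the keyword-matching approach are all assumptions; a real version might match synonyms or use embeddings rather than a literal string.

```python
from pathlib import Path

def collate_concept(notes_dir: str, concept: str, out_file: str) -> int:
    """Gather every paragraph mentioning a concept into one document.

    Scans all .txt files under notes_dir, keeps paragraphs that mention
    the concept (case-insensitive), and writes them to out_file with a
    source tag so each excerpt stays traceable back to its origin.
    Returns the number of paragraphs collected.
    """
    collected = []
    for path in sorted(Path(notes_dir).rglob("*.txt")):
        text = path.read_text(encoding="utf-8")
        # Treat blank-line-separated chunks as paragraphs.
        for para in text.split("\n\n"):
            if concept.lower() in para.lower():
                collected.append(f"[source: {path.name}]\n{para.strip()}")
    Path(out_file).write_text("\n\n".join(collected), encoding="utf-8")
    return len(collected)
```

The resulting single document (say, everything touching “integrated autonomy”) is what then gets handed to the AI for theme extraction, with each excerpt still labeled by source file.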

AI helps with the worst parts of academic labor for me: 1) structuring the argument for the normative academic mind, and 2) chasing sources and doing initial mapping. Not because I can’t do it, but because it costs me disproportionate time and energy (and, I would argue, it does for any researcher). Used carefully, AI becomes the difference between “I can’t even get into this literature maze” and “I can get oriented and start doing real scholarship.”

But I’m not naive about the trade-offs. The PBS segment is right that detection is messy—and also that AI itself can be wrong. It can hallucinate. It can confidently invent citations. It can homogenize voice and turn even the most critical content into neoliberal tech bro business terminology if you let it. (On detector reliability and bias concerns, see Stanford HAI on detector bias and OpenAI’s note explaining why it discontinued its AI text classifier for low accuracy.)

So the issue isn’t “AI or no AI.” It’s AI with vigilance, transparency, and clear norms.

A middle path: integrity through process, not prohibition

Here’s what I wish the AI-in-academia debate would say out loud:

  • Detection tools aren’t a foundation for justice. PBS shows false accusations and students describing how arbitrary the “signals” can feel (even punctuation choices).
  • If you want integrity, shift toward process-based evaluation: drafts, outlines, version history, oral defenses, and reflective memos that explain how a text was produced.
  • Teach students how to use AI like a tool: brainstorming, critique, study questions, structure suggestions—with disclosure and accountability.
  • And ultimately, recognize that EVERY academic writer, from a first-year BA student to a full professor, can always use a sounding board, research assistant, and proofreader.

This is also where publishing ethics is heading: major guidance emphasizes that AI tools can’t be authors, and humans remain responsible for accuracy, originality, and disclosure of AI use. See COPE’s guidance on authorship and AI tools and the ICMJE recommendations on disclosing AI-assisted technologies (and, for a major journal policy example, Science journals’ editorial policies).

The real revolution: who gets to participate in knowledge production

My blunt take is this: AI is exposing how exclusionary academia has been. For a long time, the system (self-)selected for one cognitive style and output and then called it “merit.” Now we have a technology that can reduce the penalty for being dyslexic, neurodivergent, non-native in academic English, or simply non-normative in how you structure thought.

That doesn’t mean “anything goes.” It means we finally have a chance to build academic culture around what we say we value: knowledge production and knowledge dissemination—without quietly excluding everyone who can’t perform the rituals in exactly the approved way.

If higher ed is serious, it will stop treating AI as only a threat and start treating it as both:

  • a challenge to assessment and authorship norms, and
  • a once-in-a-generation accessibility lever.

And for people like me? It’s not just changing how I write. It might be the reason my work actually reaches the world.
