Friday, January 9, 2026

AI Didn’t Make Me Smarter. It Made Academia Less Ableist.

 I just watched a PBS NewsHour segment on generative AI in higher education, and it landed with a strange mix of déjà vu and relief. The piece frames this moment well: the current senior class is the first to have spent almost their whole college career in the age of generative AI, and schools are scrambling because the tech is moving faster than policies (or detection) can keep up.

But the part that hit me wasn’t the “future of education” rhetoric. It was the familiar storyline of gatekeepers on one side, pragmatists on the other, and a whole set of people—often the ones already struggling to fit into academia’s “normal”—left to absorb the impact.  

The PBS framing: policing vs. pedagogy

PBS follows a philosophy professor who describes a recognizable shift in student writing: suddenly polished, impersonal, and “business document”-ish. Then we get the enforcement reality: detection tools, time-consuming investigations, and professors describing teaching as turning into policing.

And then the other side of the debate appears: faculty and administrators saying students are going to use these tools anyway, so the only sane approach is teaching responsible use. One professor encourages students to use AI to critique their own work and deepen understanding; Ohio State rolls out an “AI fluency” initiative that requires undergrads across disciplines to learn and use AI tools.

PBS also highlights something that the 'panic' discourse often forgets: AI isn’t only about writing essays. In the segment, it’s used to speed up research processes (like scanning large sets of recordings) and to support creative/technical experimentation.

All of that is real. But here’s what I want to add: for some of us, AI isn’t primarily a cheating tool or way to crunch data faster. It’s an accommodation tool—maybe the best one we’ve ever had.

My version of “academic integrity” is surviving a system not built for me

I wasn’t diagnosed with a learning 'disability' until late—19. That meant my early education was the old-school version of “try harder.” Eventually I got the standard accommodations: extra time on exams, alternative formats... the slow, piecemeal accessibility fixes that help you limp through a system designed around a narrow definition of “good academic work.”

Yet even with accommodations, I hit the same wall again and again: academia is built around normative minds, with reading and writing as the standard metric of success. If you don’t read quickly, don’t draft cleanly, don’t naturally produce “proper academic prose,” you can have strong ideas and still never make it, or be treated like you don’t belong.

My journey through academia has always been a struggle, school after school, disclose or not, accommodations or just try harder. Mistakenly switching around page numbers gets you scolded, made an example of. "How could you think like that," "what were you thinking," "if he was at university X, he'd be gone already..." Standing up against these situations, against supervisors trying to bang your head into normative logics, brings retaliation, and disclosing at the beginning of a job gets you fired before you even start. At one point I even had to file a lawsuit against an institution, about which all I'm allowed to say now is that 'it has been resolved.' Which, as anyone familiar with lawsuit settlements will know, didn't leave me empty-handed.

These situations aren't even about the difficulties of writing for a normative-minded academic audience, or of navigating the 'publish or perish' academic job market, where 'accommodation' in the form of extra time doesn't exist. Do you really think a university is going to pay you a full salary and let you teach part-time to accommodate your research and publishing?

So when I hear parts of the AI debate framed like “kids these days just want shortcuts,” I get the critique—but I also want to flip the lens:

What if AI isn’t a shortcut around working—what if it’s a ramp into a building that never installed stairs?

The stat nobody knows what to do with: 86% are already using it

PBS cites a survey: 86% of college students are using AI tools like ChatGPT, Claude, and Google Gemini for schoolwork. (For a widely cited source on the same “86%” claim, see the Digital Education Council Global AI Student Survey.) Academia can argue about whether that’s good. But operationally, the world has already moved in that direction, and the merits are easy to see across most of life. So the real question is: what do we do now—especially for students and scholars who were historically excluded by the reading/writing bottleneck?

What AI actually does for me (and what it doesn’t)

AI didn’t magically give me better ideas. I’ve been generating ideas for decades. The difference is that AI helps me translate my thinking into a structure that the “normative academic mind” can actually digest.

Here’s my current workflow in plain terms:

  • I start from material I already have (hundreds of thousands of words across years).
  • I collate everything related to one concept (say, “integrated autonomy”) into one document.
  • I feed that to AI and ask it to extract themes, claims, and through-lines—basically, as a research assistant would do. 
  • Then I have it generate an outline and work through it: I push it away from generic phrasing, force it to keep my terminology, and rebuild the outline until the argument matches what I actually mean.
  • Then we draft sections one by one: it offers a first draft, and I make it my own, in my wording, my perspective, and my argument.
  • I do the human work that matters: the research, the conceptualizing, the argument, making judgment calls, adding nuance, selecting what to cut, and verifying every factual/citation claim.

AI helps with the worst parts of academic labor for me: 1) structuring the argument for the normative academic mind, and 2) chasing sources and doing the initial mapping. Not because I can’t do it, but because it costs me disproportionate time and energy (and, I would argue, it does for any researcher). Used carefully, AI becomes the difference between “I can’t even get into this literature maze” and “I can get oriented and start doing real scholarship.”

But I’m not naive about the trade-offs. The PBS segment is right that detection is messy—and also that AI itself can be wrong. It can hallucinate. It can confidently invent citations. It can homogenize voice and turn even the most critical content into neoliberal tech bro business terminology if you let it. (On detector reliability and bias concerns, see Stanford HAI on detector bias and OpenAI’s note explaining why it discontinued its AI text classifier for low accuracy.)

So the issue isn’t “AI or no AI.” It’s AI with vigilance, transparency, and clear norms.

A middle path: integrity through process, not prohibition

Here’s what I wish the AI-in-academia debate would say out loud:

  • Detection tools aren’t a foundation for justice. PBS shows false accusations and students describing how arbitrary the “signals” can feel (even punctuation choices).
  • If you want integrity, shift toward process-based evaluation: drafts, outlines, version history, oral defenses, and reflective memos that explain how a text was produced.
  • Teach students how to use AI like a tool: brainstorming, critique, study questions, structure suggestions—with disclosure and accountability.
  • And ultimately, recognize that EVERY academic writer - from a first-year BA student to a full professor - can always use a sounding board, a research assistant, and a proofreader.

This is also where publishing ethics is heading: major guidance emphasizes that AI tools can’t be authors, and humans remain responsible for accuracy, originality, and disclosure of AI use. See COPE’s guidance on authorship and AI tools and the ICMJE recommendations on disclosing AI-assisted technologies (and, for a major journal policy example, Science journals’ editorial policies).

The real revolution: who gets to participate in knowledge production

My blunt take is this: AI is exposing how exclusionary academia has been. For a long time, the system (self-)selected for one cognitive style and output and then called it “merit.” Now we have a technology that can reduce the penalty for being dyslexic, neurodivergent, non-native in academic English, or simply non-normative in how you structure thought.

That doesn’t mean “anything goes.” It means we finally have a chance to build academic culture around what we say we value: knowledge production and knowledge dissemination—without quietly excluding everyone who can’t perform the rituals in exactly the approved way.

If higher ed is serious, it will stop treating AI as only a threat and start treating it as both:

  • a challenge to assessment and authorship norms, and
  • a once-in-a-generation accessibility lever.

And for people like me? It’s not just changing how I write. It might be the reason my work actually reaches the world.
