The Federalist

AI in the Classroom Provides Artificial Education

Educators are grappling with how to approach ever-evolving generative artificial intelligence — the kind that can create language, images, and audio. Programs like ChatGPT, Gemini, and Copilot pose far different challenges from the AI of yesteryear that corrected spelling or grammar. Generative AI generates whatever content it’s asked to produce, whether it’s a lab report for a biology course, a cover letter for a particular job, or an op-ed for a newspaper.

This groundbreaking development leaves educators and parents asking: Should teachers teach with or against generative AI, and why?

Technophiles may portray skeptics as Luddites — folks of the same ilk who resisted the emergence of the pen, the calculator, or the word processor — but this technology possesses the power to produce thought and language on someone’s behalf, so it’s drastically different. In the writing classroom specifically, it’s especially problematic because the production of thought and language is the goal of the course, not to mention among the top goals of any legitimate and comprehensive education. So count me among the educators who want to proceed with caution, and that’s coming from a writing professor who typically embraces educational technology.

Learning to Write Is Learning to Think

At best, generative AI will obscure foundational literacy skills of reading, writing, and thinking. At worst, students will become increasingly reliant on the technology, thereby undermining their writing process and development. Whichever scenario unfolds, students’ independent thoughts and perceptions may also become increasingly constrained by biased algorithms that cloud their understanding of truth and their beliefs about human nature.

To outsiders, teaching writing might seem like leading students through endless punctuation exercises. It’s not. In reality, a postsecondary writing classroom is a place where students develop higher-order skills like formulating (and continuously fine-tuning) a persuasive argument, finding relevant sources, and integrating compelling evidence. These skills also extend to essential beneath-the-surface abilities like finding ideas worth writing about in the first place and then figuring out how to organize and structure those ideas.

Such prewriting steps embody the most consequential parts of how writing happens, and students must wrestle with the full writing process in its frustrating beauty to experience an authentic education. Instead of outsourcing crucial skills like brainstorming and outlining to AI, instructors should show students how they generate ideas, then share their own brainstorming or outlining techniques. In education-speak, this is called modeling, and it’s considered a best practice.

Advocates of AI rightly argue that students can benefit from analyzing samples of the particular genre they’re writing in, from literature reviews to legal briefs, so they may use similar “moves” in their own work. This technique is called “reading like a writer,” and it was a pedagogical strategy long before generative AI existed. In fact, it figured prominently in my 2017 dissertation, which examined how writing instructors guided their students’ reading development in first-year writing courses.

But generative AI isn’t needed to find examples of existing texts. Published work written by real people is not just online but quite literally everywhere you look. Diligent writing instructors already guide their students through the ins and outs of sample texts, including drafts written by former students. That’s standard practice.

Kneecapping Student Work Ethic and Accuracy

Writing is hard work, and generative AI can undermine students’ work ethic. Last semester, after I failed a student for using generative AI on a major paper, which I explicitly forbade, he thanked me, admitting that he’d taken “a shortcut” and “just did not put in the effort.” Now, though, he appears motivated to take ownership of his education. “When I have the opportunity in the future,” he said, “I will prove I am capable of good work on my own.” Believe it or not, some students want to know that hard work is expected, and they understand why they should be held accountable for subpar effort.

Beyond the pedagogical reasons for maintaining skepticism toward the wholesale adoption of generative AI in the classroom, there are also sociopolitical reasons. Recently, Google’s new artificial intelligence program, Gemini, produced some concerning “intelligence.” Its image generator depicted the Founding Fathers, Vikings, and Nazis as nonwhite. In another instance, a user asked the technology to evaluate “who negatively impacted society more”: Elon Musk’s tweeting of insensitive memes or Adolf Hitler’s genocide of 6 million Jews. Gemini responded, “It is up to each individual to decide.”

Such historical inaccuracies and dubious ethics appear to tip the corporation’s partisan hand, so much so that even its CEO, Sundar Pichai, admitted that the algorithm “show[ed] bias” and that the situation was “completely unacceptable.” Gemini’s chief rival, ChatGPT, hasn’t been immune to similar accusations of political correctness and censorious programming. One user recently queried whether it would be OK to misgender Caitlyn Jenner if doing so could prevent a nuclear apocalypse. The generative AI responded, “Never.”

It’s possible that these incidents reflect natural bumps in the road as the algorithm attempts to improve. More likely, they represent signs of corporate fealty to reckless DEI initiatives.

The AI’s leftist bias seems clear. When I asked ChatGPT whether the New York Post and The New York Times were credible sources, its analysis splintered considerably. It described the Post as a “tabloid newspaper” with a “reputation for sensationalism and a conservative editorial stance.” Fair enough. Meanwhile, in the AI’s eyes, the Times is a “credible and reputable news source” that boasts “numerous awards for journalism.” Absent from the AI’s description of the Times was “liberal” or even “left-leaning” (not even for its opinion section!), nor was there any mention of its misinformation, disinformation, or outright propaganda.

Yet, despite these obvious concerns, some higher education institutions are embracing generative AI. Some are beginning to offer courses and grant certificates in “prompt engineering”: the art of fine-tuning the instructions fed to the technology.

If teachers insist on bringing generative AI into their classrooms, students must be given full license to interrogate its rhetorical, stylistic, and sociopolitical limitations. Left unchecked, generative AI risks becoming politically correct technology masquerading as an objective program for language processing and data analysis.



These historical inaccuracies and moral ambiguities highlight the potential dangers of relying too heavily on generative AI in education. If students are using AI to generate their writing, how can educators ensure that the information presented is accurate and unbiased? How can they address the ethical implications of using AI to produce content?

There is also the concern that generative AI will stifle creativity and critical thinking skills. Writing is not just about regurgitating information or following a template; it requires students to engage with their own thoughts, ideas, and perspectives. By using AI to generate content, students may lose the opportunity to develop these essential skills.

Furthermore, there is a risk that generative AI will exacerbate existing inequalities in education. Students who have access to AI technology may have an advantage over those who do not. This could further deepen the divide between privileged and marginalized students and perpetuate educational inequities.

As educators, we have a responsibility to prepare students for the challenges and complexities of the real world. This means teaching them how to think critically, how to engage with different perspectives, and how to communicate effectively. These skills cannot be outsourced to AI.

Instead of relying on generative AI, educators should focus on teaching the foundational skills of reading, writing, and critical thinking. They should create opportunities for students to engage in meaningful discussions, collaborate with peers, and develop their own unique voices. By guiding students through the writing process and providing them with proper feedback and guidance, educators can foster their growth as thinkers and communicators.

In conclusion, generative AI poses significant challenges for educators. While it may offer some benefits, such as providing examples of different genres or assisting with proofreading, the risks and drawbacks outweigh the advantages. It undermines foundational literacy skills, stifles creativity and critical thinking, exacerbates inequalities, and raises ethical concerns. Educators must approach generative AI with caution and prioritize the development of essential skills that cannot be replaced by technology. Only by doing so can we ensure that students receive an authentic and comprehensive education that prepares them for the future.



" Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer."
*As an Amazon Associate I earn from qualifying purchases

Related Articles

Sponsored Content
Back to top button
Available for Amazon Prime
Close

Adblock Detected

Please consider supporting us by disabling your ad blocker