AI EDUCATION: AI in Education and Media Raising Big Questions


Each week we find a new topic for our readers to learn about in our AI Education column. 

There are some industries, particularly highly regulated ones like finance, where AI implementation is still in very early stages and many users are only now figuring out the how and why of artificial intelligence. 

There are others, like education and journalism, where artificial intelligence's impact was instantaneous and pervasive, and which are now struggling to develop ethical frameworks and best practices to guide their ongoing use of AI even as the technology forces rapid change. 

In schools and journalism, AI policy remains fragmented largely by design, though best practices are emerging. School policy is highly localized because most U.S. schools are localized, locally funded and locally controlled. Journalism, operating under a free press, is similar: no AI-specific regulation binds publishers beyond existing media law covering malpractice such as defamation and incitement. 

In this week’s AI Education column, we’re taking a bird’s-eye view of what’s happening with artificial intelligence in education and journalism, and what it may mean for the technology moving forward. 

How We Got Here 

The country is in the middle of public school back-to-school season, and guidelines on the use of AI in the classroom and coursework are being published. According to Delaware Online, the Delaware Department of Education released statewide guidance last week on AI in the classroom, offering best practices, including the recommendation that teachers set clear expectations for AI use in classroom discussions. 

On the university level, schools are also grappling with AI. The University of Missouri is giving instructors free rein to permit or restrict AI use, but is also requiring them to publish their AI policies in course syllabi each semester, according to the Columbia Missourian. 

In journalism, Aaron Pelczar, a 40-year-old cub reporter at the Cody Enterprise in Wyoming, resigned after admitting that he used AI to help write stories, according to the Associated Press. Pelczar’s use of AI was extensive enough to include generated quotes that were falsely attributed to public figures. 

And in Australia, the science magazine Cosmos is being criticized for using AI to generate content, according to the ABC, due in part to unfortunate timing: it began publishing AI-generated content just after an unrelated decision to cut half its staff for financial reasons. At least in Cosmos’ case, the use of AI is clearly disclosed. 

Gen AI Is Great for Schools and Publications 

Artificial intelligence, when carefully applied, is a great tool for researchers—including teachers, professors, students, reporters and editors, for whom research is an essential part of the job—thanks to its ability to sift through large amounts of data. AI helped crack open the Panama Papers case, assisting reporters and investigators in reading and understanding 2.6 terabytes worth of data related to businesses and wealthy individuals who offshored assets to avoid—or evade—taxes. 

Generative AI is also a useful creative tool, and it can help journalists, researchers and educators personalize their work for every audience, individual or group, in an automated manner. Teachers can offer instructional materials and exercises tailored to each student to help them learn in the ways they learn best, and journalists can deliver content to each reader or media consumer in the medium and format that reaches them best. 

AI can also help in the classroom and in media with idea generation, translation, collaboration, communication, interactive content creation and accessibility. It can draft boilerplate stories about hires and capital raises and report financial market results, freeing up journalists to do what makes them shine: talk to people, study, investigate, learn and report. 

AI might, in the long run, make us more efficient and happier at our jobs. That remains to be seen. Efficiency-creating technology tends to pile on more work and can have a zero-sum or negative impact on worker happiness, especially in knowledge industries. 

AI Has Drawbacks 

Institutions have to create safe spaces for students, especially children and seniors, to learn about artificial intelligence. Those spaces should also be as safe as possible from identity theft, harassment and bullying. 

An AI model is only as good as its training, so schools and journalists alike must understand and mitigate the biases in any tools they adopt. Used well, AI may itself mitigate some of the natural bias of journalists and researchers. Results generated with artificial intelligence should be received skeptically, and carefully fact-checked and edited before being published or turned in. 

Ethical guidelines for students, teachers and journalists almost always call for disclosure. AI should be cited in any research or coursework when it is used, and journalists should inform their readers when artificial intelligence aided in the reporting or writing of their work.  

Thinking About the Deeper Implications 

In their published guidelines, thought leaders in journalism and education have also cautioned against the overuse of AI. Students and educators may lean on it too heavily, leading to lazy scholarship and atrophied critical, analytical and creative abilities. Journalists may overuse AI as well, and fail to hone the skepticism and linguistic fluency that come with steady reporting and writing. 

AI is also a new technology. Adopting AI tools too early risks exposure to business failures: some of these startups will not last, potentially leaving a school system or a newspaper without essential infrastructure. 

Education and journalism are rooted in trust. Readers and viewers have to be able to trust that the information being reported to them is real and accurate, and to maintain trust with a broad audience, reporters and editors have to adopt a stance neutral enough to appeal to many opposing viewpoints. Educators, too, have to be trusted by their students to be accurate and truthful, and successful students must be trusted by their educators to do their own work and present it to the best of their ability. These industries’ relationship with artificial intelligence has to be built on the same kind of trust. 

AI Could Change Us—Hopefully for the Better 

The genie isn’t going back into the bottle. Just as the calculator changed the way math is taught and the way people do math in their careers, and the internet changed the way we research and communicate, and the printing press and book changed the way we pass knowledge across generations, so AI will change the way we write and learn and think. Teachers and publishers are powerless to stop this change, because the change isn’t just happening within an industry or the economy; it’s happening in our brains. 

Over time, the ability to store and access information in your own brain has gone from an immensely powerful skill to one that is almost unnecessary for survival. In a thousand generations, we’ve gone from hunter-gatherers using all of our massive brains to survive to organisms with yottabytes of data ready to be interpreted at our fingertips. AI can certainly help us train up our brains. 

The great hope, of course, is that we can use all the grey matter this technology frees up to do more good for the world. But there’s no reason that has to be the case, especially if journalists continue to use AI sloppily in ways that damage the credibility of their publications, or students use the technology to cheat themselves out of an education, or the use of AI in the classroom further stunts the cognitive and ethical development of our children.