October 5, 2025

What failures of past educational technologies can teach us about the future of AI in schools



This article was originally published on The Conversation.

American technologists have been telling educators to hurry up and adopt their new inventions for over a century. In 1922, Thomas Edison said that in the near future, all textbooks would be replaced by film strips, because text was only 2% efficient while film was 100% efficient. These made-up statistics are a good reminder that people can be brilliant technologists while being incompetent education reformers.

I think of Edison whenever I hear technologists insisting that educators must adopt artificial intelligence as quickly as possible to get ahead of the transformation that is about to sweep over schools and society.

At MIT, I study the history and future of education technology, and I have never found an example of a school system – a country, a state or a municipality – that rapidly adopted a new digital technology and saw lasting benefits for its students. The first districts to encourage students to bring mobile phones into class did not prepare young people for the future any better than schools that took a more cautious approach. There is no evidence that the first countries to connect their classrooms to the internet pulled ahead in economic growth, educational attainment or citizen well-being.

New education technologies are only as powerful as the communities that guide their use. Opening a new browser tab is easy; creating the conditions for good learning is hard.

It takes years for educators to develop new practices and norms, for students to adopt new routines and for families to identify new support mechanisms before a new invention reliably improves learning. But as AI spreads through schools, historical analysis and new research conducted with K-12 teachers and students offer guidance for navigating the uncertainty and minimizing harm.

We have been wrong and overconfident before

I started teaching high school history students to search the web in 2003. At the time, library and information science experts were developing a web pedagogy that encouraged students to read websites closely, looking for markers of credibility: citations, proper formatting and an "About" page. We gave students checklists such as the CRAAP test – Currency, Relevance, Authority, Accuracy and Purpose – to guide their evaluations. We taught students to avoid Wikipedia and to trust websites with .org or .edu domains over .com domains. All of this seemed reasonable and evidence-informed at the time.

The first peer-reviewed article demonstrating effective methods for teaching students to search the web was published in 2019. It showed that novices using these commonly taught techniques failed tests evaluating their ability to sort truth from fiction on the web. It also showed that expert online fact-checkers used a completely different approach: quickly leaving a page to see how other sources characterize it. This method, now called lateral reading, led to faster and more accurate searches. The work was a gut punch for a veteran teacher like me. We had spent nearly two decades teaching millions of students obviously ineffective search strategies.

Today, there is a cottage industry of consultants, keynote speakers and "thought leaders" traveling the country claiming to train educators on how to use AI in schools. National and international organizations publish AI literacy frameworks claiming to know what students need for their futures. Technologists build applications that encourage teachers and students to use generative AI as tutors, as lesson planners, as editors or as conversation partners. These approaches have about as much evidentiary support today as the CRAAP test did when it was invented.

There is a better approach than making overconfident assumptions: rigorously test new practices and strategies, and only widely advocate those with solid evidence of effectiveness. As with web literacy, that evidence will take a decade or more to emerge.

But there is a difference this time. AI is what I have called an "arrival technology." AI was not invited into schools through an adoption process, like the purchase of a desktop computer or a smartboard – it crashes the party, then starts rearranging the furniture. This means that schools have to do something. Teachers feel that urgency. But they also need support: over the past two years, my team has interviewed nearly 100 educators from across the United States, and a common refrain is "don't make us go it alone."

3 strategies for a cautious path forward

While waiting for better answers from the education research community, which will take years, teachers will have to act as researchers themselves. I recommend three guideposts for moving forward with AI under conditions of uncertainty: humility, experimentation and assessment.

First, regularly remind students and teachers that everything schools try – literacy frameworks, teaching practices, new assessments – is a best guess. In four years, students may be told that what they first learned about using AI has proven completely wrong. We should all be prepared to revise our thinking.

Second, schools should look at their students and their curricula and decide what kinds of experiments they would like to run with AI. Some parts of a curriculum might invite playfulness and bold new efforts, while others warrant more caution.

On our podcast "The Homework Machine," we interviewed Eric Timmons, a teacher in Santa Ana, California, who teaches elective filmmaking classes. His students' final assessments are complex films that require multiple technical and artistic skills to produce. An enthusiast, Timmons uses AI to develop his curriculum, and he encourages students to use AI tools to solve filmmaking problems and draft technical scripts. He is not worried that AI will do everything for students; as he puts it: "My students like to make films … So why would they replace this with AI?"

His is among the most thoughtful examples of an "all in" approach that I have encountered. But I could not imagine recommending something similar for a course like ninth grade English, where the pivotal introduction to high school writing should probably be treated with more caution.

Third, when teachers launch new experiments, they should recognize that local assessment will happen much faster than rigorous science. Whenever schools roll out a new AI policy or teaching practice, educators should collect a stack of related student work developed before AI was used in instruction. If you let students use AI tools for formative feedback on science labs, pull a stack of lab reports from around 2022. Then collect the new lab reports. Examine whether the post-AI lab reports show improvement on the outcomes that matter to you, and revise practices accordingly.

Between local educators and the international community of education researchers, people will learn a great deal about AI in schools by 2035. We may find that AI is like the web: a place with risks, but ultimately so full of important and useful resources that we keep inviting it into schools. Or we may find that AI is like mobile phones, where the negative effects on well-being and learning ultimately outweigh the potential gains, and it is therefore best handled with more aggressive restrictions.

Everyone in education feels an urgency to resolve the uncertainty around generative AI. But we don't need a race to generate answers first – we need a race to get the answers right.

Justin Reich, Professor of Digital Media, Massachusetts Institute of Technology (MIT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.
