A man wearing a black shirt and dark jacket talks to an unseen audience.

PHOTO CREDIT: LIFETOUCH


If you have an iPhone, you’ve held Adam Cheyer’s invention in your hand. Siri, the voice-activated virtual assistant embedded in Apple products, started life as an app developed by Cheyer and two other co-founders and introduced in February 2010. It didn’t take long for people to notice the potential of Siri. Three weeks after the app launched, Steve Jobs offered to buy it. Now the artificial intelligence (AI) tool, or another version of it, is everywhere.

Jobs “saw the coming wave of AI before anyone else,” Cheyer said. “Like a drip, drip, drip of a dam that’s about to burst, AI aspects started coming out fast and furious.”

Cheyer was a keynote speaker at the NSBA Annual Conference in Orlando in April. He outlined a brief history of AI up to the introduction of ChatGPT in November 2022. “I’ve been a paid professional for over 35 years in this field,” he said. “I was working along, step by step, and AI was getting better and better and then a hockey stick curve took off. In the past decade, there have been things that have happened that I never thought I would see in my lifetime. That is meant to be a fairly shocking statement. It was to me.”

He also talked about the impact it will have on students, teachers, and school leaders. This article is adapted from his address.

This is going to be the most important talk I’ve ever given on AI because I’m sure you’ve heard about ChatGPT and this explosion of AI. Out of every field and industry, education is going to have to wrestle with this the most. It’s an incredible opportunity, but it has to be handled delicately. I’m not here to give you the answer, but hopefully I will give you some insights so that we can start the discussion together about how to apply AI in education.

What is GPT? It stands for Generative Pre-Trained Transformer. Generative means it can generate words. You give it the first few words and it will predict the next word. It’s like your autocorrect or your cellphone predictor on steroids.

Pre-trained—That’s the steroids part. These models are trained on huge amounts of text [large datasets from the internet], using millions of dollars of compute time.

Transformer—Think of it as a machine learning model with certain properties. If you type in “John hit the….” what’s the next word? The model has learned probabilities from its training data. Ball is the most likely outcome; penguin is probably way down the list. It’s trying to predict the next word, and then the next word, and so on. And it’s doing this in a few ways. This is the transformer part. It has all these probabilities. So now the sentence is “John hit the ball out of the park, and then he….” There are several words it could be. But here’s the key thing about the transformer model: earlier neural network techniques looked only at the last couple of words to make the next prediction. Using what’s called an attention model, transformers can learn which parts of the sentence are important for the prediction. AI never did that before.
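The next-word idea above can be sketched in a few lines of Python. The words and probabilities here are made up for illustration, not taken from any real model:

```python
# Toy next-word prediction: made-up probabilities for illustration only.
# After the prompt "John hit the", some continuations are far more
# likely than others.
next_word_probs = {
    "ball": 0.70,
    "wall": 0.15,
    "brakes": 0.10,
    "penguin": 0.05,
}

# Greedy prediction: pick the single most probable next word.
most_likely = max(next_word_probs, key=next_word_probs.get)
print(most_likely)  # ball
```

A real model assigns a probability to every word in its vocabulary at every step, conditioned on the whole preceding text, but the selection step works on the same principle.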

GPT and similar AIs do not always use the most probable word. There’s a parameter called temperature that tells the model to sometimes take the second- or third-best choice. It makes the output more interesting, because if you always predict the most probable word, you get bland, not-so-interesting text. So AI is trying to be creative, not necessarily accurate. You might say “two plus two equals...” and four is the highest probability. But because of the temperature parameter, it might generate three and a half instead, or maybe five. It’s important to realize that these models are architected to generate “creative” text rather than absolutely accurate text.
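A rough sketch of how a temperature parameter reshapes those probabilities. The numbers and the rescaling rule here are simplified assumptions for illustration, not GPT’s actual implementation:

```python
def apply_temperature(probs, temperature):
    """Rescale a probability distribution by a temperature parameter.
    Temperature below 1 sharpens it (safer, more predictable picks);
    temperature above 1 flattens it (more 'creative' picks)."""
    weights = {w: p ** (1.0 / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    return {w: wt / total for w, wt in weights.items()}

# Made-up probabilities for the prompt "two plus two equals..."
probs = {"four": 0.90, "five": 0.06, "three and a half": 0.04}

cold = apply_temperature(probs, 0.5)  # near-greedy: "four" dominates
hot = apply_temperature(probs, 2.0)   # flatter: wrong answers more likely

print(round(cold["four"], 3))  # 0.994
print(round(hot["four"], 3))   # 0.681
```

At high temperature, “four” drops from 90% to about 68%, so an answer like “three and a half” gets sampled noticeably more often; that is the creativity-over-accuracy trade-off described above.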

Just a few months ago, ChatGPT went mainstream. It is the fastest-adopted technology by far. It took five days to reach a million users and two months to reach 100 million users. Nothing like this has ever happened. And there’s a reason for it. It’s important.

AI is getting faster and faster. Where is this going to lead? Do we need to fear AI? Stephen Hawking said development of AI could spell the end of the human race. Elon Musk piled on by saying that with artificial intelligence, we’re summoning the demon. Now, ironically, he’s the one who funded and started OpenAI, which makes ChatGPT.

Yes, it’s amazing, but little progress has been made on artificial general intelligence, the type of stuff we do every day. We’re not passing human intelligence. Yet.

A man types on a laptop displaying sketches of people talking

PHOTO CREDIT: SUTTHIPHONG/STOCK.ADOBE.COM


Every 10 years, the world changes. We had the introduction of Windows, and about 10 years later we had the web. Ten years later, we had mobile computers. Ten years later is now. I predicted that around 2022, there would be a new paradigm for interaction as important as the web and mobile and it would be based on a conversational assistant. AI and ChatGPT are not just a curiosity. They will be more important to you, and to our students, than the internet. They bring problems to wrestle with, opportunities to address.

How should AI be used in education? What do we need to do about it? I can’t give you the answers, but here are a few points: First, technology evolves quickly, but people don’t change as fast. And when shiny technology—iPads, Siri, ChatGPT—comes along, it often distracts from the most important part of education. Education’s most essential component lies not in the pages of a book nor on the pages of a screen, but in the motivation ignited when a student gazes into the eyes of an inspirational teacher. That always must be first and foremost. 

But educators have been coming up to me with lots of fears. They ask: Will kids cheat using GPT? You can type in a prompt, and it’ll generate an essay for them. When the internet first came out, we had similar fears. It used to be that educational materials were in books, which had publishers and editors to validate the information. And you had to go to the library. With the internet, anyone can put anything online, true or not true. It could misinform. It might not be high-quality enough for our students. But the internet’s still important, right? For all the good and all the bad the internet has brought, overall, it has brought good.

There are tools you can use to see if something has been AI-generated or written by humans. It’s just like when the internet came out: people were afraid kids would upload essays, and then you could just search for essays, download them, and pass them off as your own. There are plagiarism detection tools. We just need to set up the policies and the expectations for teachers and students.

Teachers ask: Will I lose my job to AI? Lawyers ask this question too. GPT is pretty darn good; it passes the bar exam. The best answer I have for them is that lawyers will not lose their jobs to AI. Lawyers will lose their jobs to lawyers using and applying AI. The same message holds true for educators.

In March 2017, Harvard Business School, in cooperation with Harvard Medical School, published papers showing that AI plus a doctor could improve predictions of whether potentially cancerous tumors are malignant or benign by more than 30%, which saves huge amounts of lives, stress, and money. So this theme started to emerge: AI on its own can’t do it, but AI plus a human outperforms the human alone.

AI can be used to help teachers. They can use it to tailor lesson plans to be more personalized and appealing to individual students. You can take an existing math word problem and, say my student likes basketball, ask it to be rewritten in terms of a basketball context. My other student likes poetry; can you rewrite it in terms of poetry?

A professor at the Wharton School of the University of Pennsylvania came up with a policy for his class. He told them, “I expect you to use AI in this class. Sometimes it will even be required. If you do minimum work on your prompts, it’s going to produce mediocre results. It will take work and learning to get this tool to behave well. And I’m going to teach you. Don’t trust anything it says. It hallucinates. It has old data. You are responsible for anything that you deliver. So you have to validate everything. It’s a tool, but you must acknowledge using it. It’s plagiarism if you use ChatGPT to write something and you don’t acknowledge it. I need to see the prompts that you use and know exactly what parts you did and what parts they did. And then use it appropriately.”

GPT-style AI is not going away; we cannot ignore it. It will be more important than the internet. In the next 10 years, we will be figuring out how to apply it, how to use it, what the risks are, what the policies should be, and what regulation should be. I’m optimistic that AI will be a positive force whose good will overall outweigh the negatives, but there are risks to address. We must teach the proper application of the technology, caution students about the perils, and help them develop their own critical thinking skills. You need to figure out the right policies for our schools and our classrooms. The groundwork you lay will forge and shape how this evolves and how the next generation evolves. So please take the time to start this discussion.
