AI for Teachers, An Open Textbook: Edition 1

Individual or collective AI

The key question of tomorrow's AI could well be: 'who is the AI working for?'

When you use a tool that is supposed to benefit your learning, you expect it to do exactly that. But could the tool in fact be optimizing a more complex function than simply fulfilling your needs? And does this matter, provided you still get the expected result? Let's see.

Of course, when the AI is built by a private company, it makes sense to understand its business model, because that will tell you who the company is ultimately working for. If it is software bought on a one-off basis by parents, those parents will need convincing before other parents become interested. If the buyers are schools, teachers or governments, the arguments will change, and so will the software.

We should remember that with machine-learning-based AI software, the learning takes place with respect to an objective function: the neural network can be trained to minimize the pupil's learning time, to maximize quiz results, or to optimize a combination of both factors.
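To make the idea of a combined objective function concrete, here is a minimal sketch. Everything in it (the function name, the weighting scheme, the 0-100 quiz scale, the `alpha` parameter) is invented for illustration; real educational software would use a far richer model.

```python
# Hypothetical illustration: a weighted objective an adaptive-learning
# system might minimize. Names, scales and weights are invented.

def objective(learning_time_minutes, quiz_score, alpha=0.5):
    """Lower is better: trade off time spent against quiz performance.

    alpha in [0, 1] controls the balance between the two factors.
    Assumes quiz_score is on a 0-100 scale.
    """
    time_cost = learning_time_minutes
    score_cost = 100 - quiz_score  # penalize low scores
    return alpha * time_cost + (1 - alpha) * score_cost

# Two candidate lesson plans for the same pupil:
fast_but_shallow = objective(learning_time_minutes=30, quiz_score=70)
slow_but_thorough = objective(learning_time_minutes=60, quiz_score=95)
print(fast_but_shallow, slow_but_thorough)
```

Note that simply changing `alpha` changes which lesson plan "wins": the choice of objective function is itself a design decision, made by whoever builds the system.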

But in many cases, the learning will take place in a social environment, and the AI's recommendations may have an effect not just on the individual, but on other individuals or on the group as a whole.

To explore this idea, let's look at how the popular Waze system works. Even if it has little direct impact on teaching (although many teachers will use it to arrive at school on time!), it is relevant here.

Waze

Waze is an app used to help car drivers find their route: it is, in short, a navigation system. But Waze also has many features of a social network, as much of the data it uses to analyze traffic conditions does not come from official open-data repositories or cameras, but from the users themselves. According to the company itself, no fewer than 150 million people use Waze every month, across all platforms.1

For those who don’t use Waze, here is a very simple summary of how it works:
You are on your way to work, like every day. You know your way, but you will still use Waze. And so will a large fraction of the cars around you. On your map, you will find the route computed to bring you where you want to go, but also elements like the estimated time of arrival, which is updated every few minutes as traffic conditions change where you are and in the zones you will pass through on your way. You can also be told that there is an object on the road 260 m ahead, a car accident at 1 km, a traffic jam in 3 km. Depending on these updates, the system can propose an alternative route which will "save you" 7 minutes...
For this to work, you, as a Wazer, will be entering information and warning fellow Wazers, via the system, that there is an animal wandering where you are or (and this is important) that the object supposed to be on the road is no longer there.

Where is the AI?

There is AI in the computation of the expected times, the routes, and so on. This means taking into account static information (distances) but also dynamic information (the speeds of the cars). Waze will also use your own history to take your driving patterns into account.2
Waze will even know whether the traffic lights are synchronized to your advantage or not.

But there is more to it:
When a Wazer enters information about something new, how does the system take it into account? Suppose I warn that the road is blocked: what is supposed to happen? A human expert could double-check (are other users saying the same?), use a model telling them how much credit should be given to this particular user, check whether the user has really stopped... The AI will do the same.
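The "how much credit should be given to this user" step can be sketched as a weighted vote. This is an assumed piece of logic for illustration only, not Waze's actual algorithm: each report is weighted by the reporter's past reliability, and the event is believed once enough reliable users agree.

```python
# Sketch (assumed logic, not Waze's real algorithm): combine several
# reports of "road blocked", each weighted by the reporter's estimated
# reliability, into a single confidence score.

def blocked_confidence(reports):
    """reports: list of (says_blocked: bool, reliability: float in [0, 1]).

    Returns the reliability-weighted fraction of reporters who say
    the road is blocked.
    """
    if not reports:
        return 0.0
    weight_for = sum(rel for says, rel in reports if says)
    total = sum(rel for _, rel in reports)
    return weight_for / total

# Two trusted users report a blockage; one low-reliability user disagrees.
reports = [(True, 0.9), (True, 0.6), (False, 0.3)]
confidence = blocked_confidence(reports)
print(confidence > 0.7)  # above a plausible acceptance threshold
```

A single unreliable user cannot flip the outcome, which is exactly the protection a human expert would apply by hand.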

And more. When the system detects a traffic jam on the usual road, it will send users along a different path. But how can the system know that the traffic jam has cleared if it doesn't send some users into it to check? The users already stuck cannot give that information. So the system has to send some traffic into the problem area to find out whether the problem is solved.
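This "send a few drivers in to check" behaviour is a classic exploration/exploitation trade-off. A minimal sketch, using the standard epsilon-greedy strategy (our choice of technique for illustration; Waze's actual method is not public), might look like this:

```python
import random

# Epsilon-greedy sketch of the exploration/exploitation trade-off:
# most drivers get the currently-best-known route, but a small fraction
# is sent down the congested road so its estimate stays up to date.

def choose_route(estimated_minutes, epsilon=0.05, rng=random):
    """estimated_minutes: dict mapping route name -> travel-time estimate."""
    if rng.random() < epsilon:
        # Explore: probe a random route to refresh its estimate.
        return rng.choice(list(estimated_minutes))
    # Exploit: send the driver along the fastest known route.
    return min(estimated_minutes, key=estimated_minutes.get)

routes = {"highway": 45, "secondary": 30}
picks = [choose_route(routes) for _ in range(1000)]
# Exploitation dominates: most drivers take the faster secondary road,
# while a handful are still routed onto the highway as probes.
print(picks.count("secondary"), picks.count("highway"))
```

Notice the ethical edge: the few "exploring" drivers pay a personal cost (a possibly slower trip) so that the estimates improve for everyone else.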

Some ethical considerations?

There are a number of ethical considerations:
1. Waze knows a lot about you: where you live and work, your usual stops, your habits. It will also show you adverts, to which you may or may not respond.
2. In order to satisfy as many customers as possible, Waze has to solve many exploration/exploitation dilemmas like the one above. How does it make that decision? Is there a "right" way of making it?
3. Using these tools on a regular basis has consequences for our capacity to solve such problems ourselves. It is now known that our (human) cognitive capacities are being affected. As an example (surely not an isolated one), one author of this textbook was using Waze on a complicated Monday morning. The system told him to leave the highway to avoid congestion. After 2 km of a pleasant secondary road, Waze changed its mind and suggested that the best route was to drive back to the highway. What matters in this example is not that the system revised its optimized route (which makes sense) but that our dependency on such AI-driven systems leaves us incapable of making our own judgements.3

Consequences for education

To our knowledge, this issue of group handling doesn’t occur in education. Yet. When resources are unlimited (access to a web platform, for example), the situation is of little consequence. But suppose the resources are limited: only 3 pupils can use the robot at the same time. In this case an AI system will be proposing which pupils should have access to the robot, and many factors can govern that decision. If the system wants to be fair, the decision may be random. But many will not be happy with that. If the system wants to obtain the best results for the whole classroom, it may allocate more resources to disadvantaged children. But if the system is tasked with ensuring that at least 90% of the pupils get grade XYZ at the end of the term, it will unavoidably choose some pupils to be part of the other 10%.
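The last scenario can be made concrete with a small sketch. Everything here is hypothetical (pupil names, scores, the target grade, the assumption that a robot slot adds a fixed `boost` of points): the point is only to show how a threshold-style objective quietly writes off the pupils furthest below the target.

```python
# Hypothetical sketch: a greedy optimizer whose objective is "maximize
# the number of pupils at or above the target grade" allocates scarce
# robot slots to pupils closest to the threshold. All numbers invented.

def allocate_slots(predicted_scores, target=60, slots=3, boost=10):
    """Pick which pupils get robot time under the threshold objective.

    predicted_scores: dict pupil -> predicted score without extra help.
    A slot is assumed to raise a pupil's score by `boost` points.
    """
    # Only pupils below target but within reach of it "count" for the
    # objective; pupils too far below can never be pushed over the line.
    candidates = [p for p, s in predicted_scores.items()
                  if s < target and s + boost >= target]
    # Closest to the threshold first: the cheapest "wins" for the optimizer.
    candidates.sort(key=lambda p: target - predicted_scores[p])
    return candidates[:slots]

scores = {"Ana": 55, "Ben": 58, "Cleo": 30, "Dmitri": 52, "Eve": 80}
print(allocate_slots(scores))  # Cleo, far below target, is never chosen
```

Under this objective, Cleo (the pupil most in need of help) receives none of it, while a fairness-oriented or equity-oriented objective would rank the same pupils completely differently. The objective function, not the algorithm, is where the ethical choice lives.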

The role of the teacher

A teacher in the AI era must understand how such systems work and what the caveats of the algorithms are, and must make sure that she or he makes the decisions. This is easier said than done. A teacher can use an AI system because, as with the navigation tool described above, it can give benefits to all. But a teacher can, and should, contrast the decision proposed by the AI with their own experience. Wasting 15 minutes on a road isn't a big deal. Making the wrong call for your pupils is.

------------------------------------------------------------------------------------------------------

1 https://www.cozyberries.com/waze-statistics-users-facts/ and https://www.autoevolution.com/news/waze-reveals-how-many-users-run-the-app-on-android-and-iphone-197107.html for some facts and figures concerning Waze.

2 Petranu, Y. Under the Hood: Real-time ETA and How Waze Knows You’re on the Fastest Route
https://medium.com/waze/under-the-hood-real-time-eta-and-how-waze-knows-youre-on-the-fastest-route-78d63c158b90

3 Clemenson, G.D., Maselli, A., Fiannaca, A.J. et al. Rethinking GPS navigation: creating cognitive maps through auditory clues. Sci Rep 11, 7764 (2021). https://doi.org/10.1038/s41598-021-87148-4
https://www.nature.com/articles/s41598-021-87148-4
