Ever since the start of the pandemic, more and more public school students have been using laptops, tablets or similar devices issued by their schools.
The percentage of teachers who reported their schools had provided their students with such devices doubled from 43% before the pandemic to 86% during the pandemic, a September 2021 report shows.
In one sense, it might be tempting to celebrate how schools are doing more to keep their students digitally connected during the pandemic. The problem is, schools are not just providing kids with computers to keep up with their schoolwork. Instead – in a trend that could easily be described as Orwellian – the vast majority of schools are also using those devices to keep tabs on what students are doing in their personal lives.
Indeed, 80% of teachers and 77% of high school students reported that their schools had installed artificial intelligence-based surveillance software on these devices to monitor students’ online activities and what is stored on them.
This student surveillance is taking place – at taxpayer expense – in cities and school communities throughout the United States.
For instance, the Minneapolis school district paid over $355,000 to student surveillance company Gaggle for monitoring tools through 2023. Three-quarters of reported incidents – that is, cases where the system flagged students’ online activity – took place outside school hours.
In Baltimore, where the public school system uses the GoGuardian surveillance app, police officers are sent to children’s homes when the system detects students typing keywords related to self-harm.
Safety versus privacy
Vendors claim these tools keep students safe from self-harm or online activities that could lead to trouble. However, privacy groups and news outlets have raised questions about those claims.
Vendors often refuse to reveal how their artificial intelligence programs were trained and the type of data used to train them.
Privacy advocates fear these tools may harm students by criminalizing mental health problems and deterring free expression.
As a researcher who studies privacy and security issues in various settings, I know that intrusive surveillance techniques cause emotional and psychological harm to students, disproportionately penalize minority students and weaken online security.
Artificial intelligence not intelligent enough
Even the most advanced artificial intelligence lacks the ability to fully understand human language and context. This is why student surveillance systems generate a lot of false positives instead of flagging real problems.
In some cases, these surveillance programs have flagged students discussing music deemed suspicious and even students talking about the novel “To Kill a Mockingbird.”
Harm to students
When students know they are being monitored, they are less likely to share their true thoughts online and are more careful about what they search for. This can discourage vulnerable groups, such as students with mental health issues, from getting needed services.
When students know that their every move and everything they read and write is being watched, they are also less likely to develop into self-confident adults. In general, surveillance undermines students’ capacity to act independently and reason analytically. It also hinders the development of the skills and mindset they need to exercise their rights.
More adverse impact on minorities
U.S. schools disproportionately discipline minority students. African American students are more than three times as likely to be suspended as their white peers.
After evaluating flagged content, vendors report any concerns to school officials, who take disciplinary actions on a case-by-case basis. The lack of oversight in schools’ use of these tools could lead to further harm for minority students.
The situation is worsened by the fact that Black and Hispanic students rely more on school devices than their white peers do. This in turn makes minority students more likely to be monitored and exposes them to greater risk of intervention.
When both minority students and their white peers are monitored, the former group is more likely to be penalized. The datasets used to train artificial intelligence programs often underrepresent the language written and spoken by minority groups, and the people working in this field lack diversity. As a result, these programs are more likely to flag minority students’ language as problematic.
Leading AI models are 50% more likely to flag tweets written by African Americans as “offensive” than those written by others. They are 2.2 times more likely to flag tweets written in African American slang.
These tools also affect sexual and gender minorities more adversely. Gaggle has reportedly flagged “gay,” “lesbian” and other LGBTQ-related terms because they are associated with pornography, even though the terms are often used to describe one’s identity.
Increased security risk
These surveillance systems also increase students’ cybersecurity risks. First, to comprehensively monitor students’ activities, surveillance vendors compel students to install certificates known as root certificates on their devices. A root certificate sits at the top of a device’s chain of trust and functions as a “master certificate” that determines the entire system’s security. The drawback is that a vendor’s root certificate can bypass the cybersecurity checks built into these devices.
Gaggle, which scans digital files of more than 5 million students each year, installs such certificates. This tactic of installing certificates is similar to the approach that authoritarian regimes, such as the Kazakhstani government, use to monitor and control their citizens and that cybercriminals use to lure victims to infected websites.
Second, surveillance system vendors use insecure systems that hackers can exploit. In March 2021, computer security software company McAfee found several vulnerabilities in student monitoring system vendor Netop’s Vision Pro Education software. For instance, Netop did not encrypt communications between teachers and students, leaving them open to unauthorized access.
The software was used by over 9,000 schools worldwide to monitor millions of students. The vulnerability allowed hackers to gain control over webcams and microphones in students’ computers.
Finally, personal information of students that is stored by the vendors is susceptible to breaches. In July 2020, criminals stole 444,000 students’ personal data – including names, email addresses, home addresses, phone numbers and passwords – by hacking online proctoring service ProctorU. This data was then leaked online.
Schools would do well to look more closely at the harm caused by their surveillance of students and to question whether these tools actually make students safer – or less safe.
Author Bio: Nir Kshetri is Professor of Management at the University of North Carolina – Greensboro