Google’s AI plans make its dominance of education more alarming

When Google CEO Sundar Pichai addressed the company’s annual I/O developer conference on May 18, 2021, he made two announcements suggesting that Google is now the world’s most powerful organization in the field of education. Opening the keynote, broadcast live from the gardens of the Mountain View campus, Pichai celebrated how Google had been able to “help students and teachers continue learning from anywhere” during the pandemic.

Minutes later, he announced Google’s new AI language platform, a central part of the company’s long-term AI strategy, with a specific education use case as an example. LaMDA (Language Model for Dialogue Applications), he claimed, could allow students to ask questions in natural language and receive sensible, factual and interesting conversational responses.

“So if a student wanted to learn more about space,” Pichai wrote on the corporate blog, “the model would give sensible answers, making learning even more fun and engaging. If that student then wanted to switch over to a different topic,” he added, “LaMDA could continue the conversation without any retraining.” The company’s plan is to integrate LaMDA into Workspace, its suite of cloud computing tools, software and products.

These proclamations indicate how Google plans to advance its education business in the wake of COVID-19 disruptions: consolidating the huge growth of its platforms in schools and integrating AI into teaching and learning. This is raising new concerns among privacy campaigners and researchers, as it gives Google access to data on students and schools internationally.

Google’s world classroom

With schools reopening around the world, Google has worked hard to ensure that the significant market gains it made in 2020 can be sustained and reinforced as students return to physical, rather than virtual, classrooms. With the number of users of its digital learning platform, Google Classroom, up from 40 million a year earlier to 150 million, it announced a new “roadmap” for the platform in early 2021.

“As more teachers use Classroom as a ‘hub’ for learning during the pandemic, many schools are treating it as their learning management system (LMS),” Classroom’s program manager wrote. “While we didn’t set out to create an LMS, Classroom is committed to meeting the evolving needs of schools.”

The roadmap for Classroom as an LMS was just one plan presented during its Learning with Google conference, which also included the launch of 40 new Chromebook laptop models and feature upgrades for its educational products. These developments illustrate the sustained strategic expansion Google has pursued in education over the past 15 years, since launching its free education software in 2006 and its low-cost Chromebooks in 2011. Its competitive edge in school hardware and software only advanced during the pandemic.

Google’s steady expansion into education has long been contentious. Five years ago, the digital rights nonprofit Electronic Frontier Foundation filed a formal complaint with the Federal Trade Commission against Google for collecting and mining students’ personal information from Chromebooks and Google Apps for Education (since renamed Workspace for Education) without permission or opt-out options. Researchers from the University of Borås in Sweden have highlighted how the Google Apps for Education privacy policy disguised its business model, making it almost impossible to determine what data it collected on students and what Google used it for.

Google’s data mining in education has only become more contested. In February 2020, the New Mexico attorney general filed a complaint alleging that Google violates the privacy of students who use its Chromebooks and software, in breach of both federal law and the Student Privacy Pledge, of which Google is itself a signatory. Google, the attorney general said, had pledged to only collect, store, use and share student data for expressly educational purposes, but continued to use it for commercial purposes.

Nonetheless, in the months that followed, Google continued to expand into education systems around the world, often with the support of state or country-level education departments and international organizations such as the OECD.

Controversies over data collection and sharing are likely to intensify with the expansion of Classroom. Research published by a team from universities in Australia and the UK, to which I recently contributed, highlighted how hundreds of external educational technology providers are integrated into Classroom, potentially allowing Google to extend its data mining practices far beyond the platform itself. Classroom’s roadmap confirms its intention to expand these integrations through a “marketplace” of Classroom “add-ons” that teachers can assign without requiring additional student logins. This makes Classroom itself the primary gateway through which students access other non-Google resources.

These developments give Google extraordinary gatekeeping power in the education technology industry, as it sets the rules for third-party vendors to integrate with Classroom and for the exchange of data between them. In its new role as an LMS, Classroom can even integrate with existing school information systems, acting as a key interface between a school and its student records.

Together, Classroom’s expansion and integrations prioritize a particular teaching model based on the constant collection and exchange of student data between platforms through Google Cloud. The distinction between commercial and educational purposes is increasingly difficult to discern in these developments. Google’s data mining business model has become symbiotic with the digital approaches to teaching and learning that Google itself has helped establish as a global model for the future of education.

Techno-ethical audit

Google now seems likely to push its new artificial intelligence features into schools as well. Education won’t be the only sector of society affected by Google’s conversational AI interface, although, as Sundar Pichai’s announcement at I/O made clear, education is an obvious use case for these technologies.

Large language model technologies are among the most controversial of Google’s recent developments. Late last year, a group of researchers, including the two co-leads of Google’s own ethical AI team, produced a research paper arguing that harmful stereotypes, biases and misleading information are embedded in these models. Google subsequently fired the paper’s authors from its ethical AI team, leading to widespread condemnation and serious questions about the long-term ethical implications of its AI strategy.

This raises the troubling question of whether building Google’s language AI technologies into educational products could embed bias and misinformation in schools. At I/O, Pichai asserted that further development will ensure “fairness, accuracy, safety and privacy” are built into LaMDA before full deployment, although the firing of its ethical AI specialists weakens the credibility of such claims.

According to the authors of a new research paper, “Don’t be evil: should we use Google in schools?”, the business deserves much closer scrutiny before any further expansion in education. Using a “techno-ethical audit” method, the University of North Texas research team found that “Google extracts students’ personal data, circumvents laws meant to protect it, targets students for profit, obscures the company’s intent in its terms of service, recommends harmful information and distorts students’ knowledge.”

The techno-ethical audit is an important step in responding to Google’s growing role in education. But broader questions remain about the influence of private technology companies in public schooling and education systems, and the potential of new AI and cloud computing platforms to change the practices and priorities of the school sector.

The involvement of private companies in education is not new, but the international scale of Big Tech’s influence, and the technological and ethical implications of emerging platforms, AI and data systems in schools, demand fresh attention. Google has produced the underlying hardware, software, cloud and data systems that education systems increasingly depend on, at scales that cross geographic and political boundaries. These are technical, ethical and political issues that should not simply be delegated to educators and school leaders. They must be addressed at the regulatory level, and as part of a collective democratic discussion about the future of schooling beyond the pandemic.


Ben Williamson is a Senior Fellow at the University of Edinburgh, UK, and is on Twitter @BenPatrickWill.

