“We can act now,” say Kalervo Gulson, Sam Sellar, Andrew Murphie, and Simon Taylor, “to ensure that the Australian experience of Artificial Intelligence (AI) in schools is positive…”
Artificial Intelligence is quickly becoming part of everyday life, and it is moving just as quickly into education policy.
In this article, we discuss some fundamental aspects of education, including the relationship between skills and AI, and possible approaches to adapting to and preparing for AI in schools and society as a whole. We conclude with suggestions and resources for students and citizens who want to learn more about AI.
AI refers to autonomous computer systems that use algorithms to learn from patterns in big data sets in order to make predictions (Russell & Norvig, 2016; Walsh, 2016). The application of AI in conjunction with “big data” presents new opportunities to address complex and intractable social and political problems (Elish & Boyd, 2017). However, there is also a need for caution.
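The core idea in this definition, learning a pattern from past data and using it to predict new cases, can be illustrated with a deliberately tiny sketch. Everything here is invented for illustration (the study-hours data, the linear model); real AI systems use far richer data and models, but the learn-then-predict loop is the same.

```python
# Toy illustration of "learning patterns from data to make predictions".
# All data below is made up; a least-squares line stands in for a model.

def fit_line(xs, ys):
    """Fit y = a*x + b by least squares — a minimal 'learning' step."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical historical data: hours of study vs. test score.
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 61, 68, 71]

a, b = fit_line(hours, scores)

# Use the learned pattern to predict an unseen case (6 hours of study).
predicted = a * 6 + b
print(round(predicted, 1))
```

The point of the sketch is not the statistics but the structure: the system's "knowledge" is entirely derived from the examples it was given, which is why the quality and biases of those examples matter so much in what follows.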
Education, skills, and artificial intelligence
There is agreement that automation, as part of Artificial Intelligence, will replace some job functions and some workers, although estimates of the type and extent of this substitution differ. Furthermore, Hajkowicz et al. (2016) suggest that to face future workforce difficulties, Australian society will need to offer young people the appropriate skills for current and future demands, along with workplace and lifelong learning opportunities to support retraining.
Luckin et al. (2016), in one of the first papers on AI in education, warn against being lured by new technology and call for a sustained focus on pedagogy. Among “21st-century” skills, Heckman (2011) emphasises the importance of “attentiveness, perseverance, impulse control, and sociability”.
Many skill frameworks have been identified in the research literature and in national curricula. However, there appears to be some agreement on the broad categories of essential skills: cognitive skills that have traditionally been emphasised in formal education, non-cognitive skills (both inter- and intrapersonal), and skills that enable people to interact effectively with information and communication technologies (ICT).
The ethical problems surrounding AI systems are wide-ranging, ‘spanning creation, uses, and results’, as Campolo et al. (2017) point out. In this article, the following sections address the ethical development and use of AI, preparing citizens, including students of all ages, for an AI environment, and the application of AI in public policy sectors such as education.
Ethics and AI
It is critical to diversify the types of professionals working in AI development (Campolo et al., 2017). Strategies to improve the gender imbalance in STEM education will be required to address the lack of diversity among developers (OECD, 2018). Luckin (2017) has also urged educators to collaborate with AI developers, noting that “everyone has to be included in a discussion about what AI should and should not be built to achieve”. As Campolo et al. (2017) point out, ‘training data, algorithms, and other design choices that shape AI systems may reflect and magnify existing cultural preconceptions and injustices’.
Education, health, and other social policy domains are high-stakes areas for AI adoption. It will be critical to take steps to eliminate biases in automated decision-making when judging learning ability, illness risk, and medical diagnoses, among other things.
Regulation and data privacy
As machine learning and algorithms become more embedded in the mediated infrastructure of everyday life, there is a clear need for mechanisms to increase transparency, regulation, and algorithmic literacy, as well as ways to monitor what algorithms are doing in practice and to create effective accountability mechanisms (Ananny, 2016). This will entail determining which areas of regulation require revision or creation.
As companies produce and manage data systems in education (Williamson, 2017), critical considerations include what happens to student, parent, and other kinds of data when they are used in these systems, as well as who owns the data and who has access to it (Zeide, 2017).
Some recommendations emphasise the importance of personal data ownership and opt-in rather than opt-out programmes (Tene & Polonetsky, 2012). The use of Google Mail in schools could serve as a test case for an ‘opt-in’ strategy.
Use of AI in public agencies
Much of the provision of automated systems rests on the proprietary knowledge of corporations. There have been calls for core government agencies, particularly those responsible for “high stakes” domains such as criminal justice, healthcare, welfare, and education, to ‘no longer use “black box” AI and algorithmic systems’ (Campolo et al., 2017, p. 1).
The term “black box” refers to the fact that the workings of these systems are either secret (because of proprietary knowledge) or cannot be discovered (because of how various forms of Artificial Intelligence perform computations).
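A small sketch can make the first kind of opacity, proprietary secrecy, tangible. The class, the factor names, and the weights below are all hypothetical stand-ins; the point is that a caller receives only a score and has no way to see that one factor dominates the result.

```python
# Minimal sketch of the "black box" problem: the caller sees only the
# output, never the reasoning. Weights and factors are invented examples.

class BlackBoxScorer:
    """Exposes a score while hiding how it is computed, as proprietary
    systems typically do."""

    def __init__(self):
        # Internal parameters a school or agency cannot inspect or audit.
        self._weights = {"attendance": 0.2, "postcode": 0.7, "grades": 0.1}

    def score(self, student):
        return sum(self._weights[k] * student.get(k, 0)
                   for k in self._weights)

student = {"attendance": 0.9, "postcode": 0.2, "grades": 0.95}
result = BlackBoxScorer().score(student)
print(round(result, 3))
# The caller gets a single number, with no way to see that 'postcode'
# carries far more weight than 'grades' — the auditability gap at issue.
```

The second kind of opacity is harder still: in some machine learning systems, even the developers cannot give a human-readable account of why a particular output was produced, which is precisely why calls to keep such systems out of high-stakes public decisions have gained traction.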
As some decision-making becomes automated, there must be recognition of the narrowness that results when automation strips away context. That is, system and school administrators will have to rethink how they formulate goals and use data while acknowledging the limitations and risks of automated systems (Campolo et al., 2017, p. 13), particularly the risk of missing the critical contextual details that shape complex social domains like education.
Conclusion: What should we learn in an AI society?
The debate over how Artificial Intelligence will affect society over the coming decades centres on whether technological development will play out differently this time than in prior periods of severe upheaval.
One of the most important things for anyone who works in, teaches in, or attends schools is understanding how AI works and what it can and cannot achieve. Short of everyone becoming a computer scientist, national governments are already attempting to provide avenues for everyone to learn what AI might mean for society and how AI works.
While anyone can take one of the well-known Coursera AI courses, the Finnish government has created an online course aimed at teaching the fundamentals of AI to 1% of the population, or about 55,000 people.
Anyone with an internet connection can enrol in and complete this course. The NSW Department of Education in Australia has started commissioning research, hosting events, and providing pertinent resources, some of which are available in a free collection.
When new proposals for automation and AI are put forward, it is critical that educators and regulators address challenges like those outlined above. This effort cannot simply be left to AI suppliers or algorithm builders. The teaching profession and education authorities will need to devote time and resources to learning about, understanding, and shaping these new technologies together, ideally before they become commonplace in our schools and educational institutions.
References

Adamson, F., & Darling-Hammond, L. (2015). Policy pathways for twenty-first-century skills. In P. Griffin & E. Care (Eds.), Assessment and teaching of 21st-century skills: Methods and approach (pp. 293-310). Dordrecht: Springer Netherlands.
Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology, & Human Values, 41(1), 93-117.
Campolo, A., Sanfilippo, M., Whittaker, M., & Crawford, K. (2017). AI Now 2017 report. New York: AI Now. Retrieved from https://ainowinstitute.org/AI_Now_2017_Report.pdf
Elish, M. C., & boyd, d. (2017). Situating methods in the magic of Big Data and AI. Communication Monographs, 1-24.
Hajkowicz, S., Reeson, A., Rudd, L., Bratanova, A., Hodgers, L., Mason, C., & Boughen, N. (2016). Tomorrow’s digitally enabled workforce: Megatrends and scenarios for jobs and employment in Australia over the coming twenty years. Brisbane: CSIRO. Retrieved from https://www.acs.org.au/content/dam/acs/acs-documents/16-0026_DATA61_REPORT_TomorrowsDigiallyEnabledWorkforce_WEB_160128.pdf
Heckman, J. J. (2011). The economics of inequality: The value of early childhood education. American Educator, 35, 31-47.
Luckin, R. (2017). The implications of Artificial Intelligence for teachers and schooling. In L. Loble, T. Creenaune, & J. Hayes (Eds.), Future Frontiers: Education for an AI world (pp. 109-125). Melbourne: Melbourne University Press/ NSW Department of Education.
Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence Unleashed: An argument for AI in education. Retrieved from https://www.pearson.com/content/dam/corporate/global/pearson-dot-com/files/innovation/Intelligence-Unleashed-Publication.pdf
OECD (2018). PISA 2015: Results in focus. Paris: OECD Publishing.
Russell, S., & Norvig, P. (2016). Artificial Intelligence: A modern approach (3rd ed.). Harlow, Essex: Pearson Education Limited.
Serholt, S., Barendregt, W., Vasalou, A., Alves-Oliveira, P., Jones, A., Petisca, S., & Paiva, A. (2017). The case of classroom robots: teachers’ deliberations on the ethical tensions. AI & Society, 32(4), 613-631.
Tene, O., & Polonetsky, J. (2012). Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology and Intellectual Property, 11(5), 239-273.
Walsh, T. (2016). The singularity may never be near. arXiv preprint arXiv:1602.06462.
Williamson, B. (2017). Big data in education: The digital future of learning, policy and practice. London: SAGE.
Zeide, E. (2017). The structural consequences of big data-driven education. Big Data, 5(2), 164-172.