
Research Group and Colloquium About the Impact of AI on Ethics and Social Justice in Education

Welcome! The Research Group will have its first meeting (virtually) on Monday, July 12, 2021, at 10am CT/11am ET

The Research Group Proposal 

Abstract: We propose to convene a research group on the ethical and social justice concerns arising from the use of artificial intelligence in education, in order to develop guidelines and best practices that ensure AI benefits all students and that potential harms are mitigated. Artificial intelligence (AI) is expected to transform the world, including education and learning, in this century. This raises the question of how we can ensure that AI is applied ethically in education. We know that inattention to these issues can, and likely will, lead to headline-grabbing inequities like those reported in facial recognition [10], healthcare [11], and the criminal-justice system [12].

We propose to bring together a small, knowledgeable group of 15-20 ed-tech practitioners, AI experts, AI ethics experts, learning scientists, education ethicists, and student data privacy experts to investigate and discuss the ethical and social justice issues that arise when AI is used in education. We will also create a colloquium series aimed at a broad audience of stakeholders within education. At the end of the series, we expect to produce a summary paper, a set of framing questions, and a set of proposed guidelines for the development of ethical educational technology (including educational data mining and learning analytics).

Artificial intelligence within education: Artificial intelligence in educational applications is used to guide student learning, to recommend educational interventions, to provide feedback to students, to grade student work, and to provide student analytics. Analytics may be provided to students, faculty, and/or education administrators, and may be used to estimate student knowledge or to predict student outcomes. Large-scale collection of data about students, their learning, and their environment (learning analytics), together with the mining and analysis of these data, goes hand in hand with the development of educational tools augmented by artificial intelligence (AI-augmented education).
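
To make one of these functions concrete, the sketch below (in Python) shows Bayesian Knowledge Tracing, one common approach to estimating student knowledge from a sequence of answers. The parameter values and the answer sequence are illustrative inventions, not taken from any particular system.

    # A minimal sketch of Bayesian Knowledge Tracing (BKT), one common way
    # AI-augmented tools estimate a student's mastery of a skill.
    # All parameter values below are illustrative only.
    P_INIT = 0.2      # prior probability the student already knows the skill
    P_TRANSIT = 0.15  # probability of learning the skill on each attempt
    P_SLIP = 0.1      # probability of a wrong answer despite knowing the skill
    P_GUESS = 0.25    # probability of a right answer without knowing the skill

    def bkt_update(p_know, correct):
        """Update the estimated mastery after one observed answer."""
        if correct:
            posterior = p_know * (1 - P_SLIP) / (
                p_know * (1 - P_SLIP) + (1 - p_know) * P_GUESS)
        else:
            posterior = p_know * P_SLIP / (
                p_know * P_SLIP + (1 - p_know) * (1 - P_GUESS))
        # Allow for the chance the student learned the skill on this attempt.
        return posterior + (1 - posterior) * P_TRANSIT

    p = P_INIT
    for answer in [True, False, True, True]:  # hypothetical answer sequence
        p = bkt_update(p, answer)
        print(f"estimated mastery: {p:.2f}")

Even in this tiny example, the ethical questions below apply: the estimate depends entirely on parameters fit from past data, and a poorly fit model can systematically under- or over-estimate some students' knowledge.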

Artificial intelligence (AI) is expected to transform the world in this century, which raises the question of how we can ensure that AI is applied ethically. The use of AI in learning faces many of the same ethical questions: how do we decide where investment and development in AI are both ethical and most useful, how do we ensure that AI agents behave ethically, how do we ensure that AI is deployed and used ethically, and which functions should we not delegate to artificial agents? Ethical issues specific to AI-augmented learning include those concerning learner motivation, learner agency (the ability to set goals, take action, reflect on progress and adjust, and the related beliefs about one's capacity to achieve goals), learner engagement, and interest-driven exploration.

There is an urgent need to discuss the ethics of AI-augmented education. Studies have shown that technologies such as educational data mining and learning analytics can further disadvantage certain students [1, 2]. Students may struggle to make the information displayed in learning analytics dashboards actionable [3, 4], may get discouraged rather than encouraged, or may respond with rote learning instead of developing the meta-cognitive skills supported by their own sense of agency [5, 6]. This is partially due to the limited integration of learning analytics and other emerging technologies with learning theories [7, 8]. There is an opportunity for technology and education leaders to work together proactively to address these emergent issues, to ensure that the promise of increased and improved educational attainment applies to all students, including those from historically marginalized and under-represented groups, and that students continue to develop higher-level skills.

Issues surrounding FATE (Fairness, Accountability, Transparency, and Ethics) are inherent to educational technology. Indeed, some intelligent tutoring systems (ITS: computer systems that aim to provide immediate and customized instruction or feedback to learners) have been shown (via simulations) to produce larger learning gains for higher-performing students than for lower-performing students [9]. Bias is an inherent part of machine learning (ML), which relies on past data to predict future events. For example, face-recognition software recognizes female and non-white faces less accurately than those of white men [10]. Healthcare [11] and the criminal-justice system [12] are other sectors plagued by models used in ways for which they were not developed, among other machine-learning pitfalls [13]. The development of AI and ML for education sits within the larger technology context, where business-as-usual practices prioritize efficiency over resiliency [14], and within the educational context, where repeated investments in technical solutions have failed to deliver their intended benefits [15]. In addition to FATE, it is critical that AI systems value and protect student privacy and follow privacy norms appropriate specifically to an educational context [16]. Even the perceived threat of impropriety could derail the entire movement, as evidenced by the high-profile collapse of the $100M inBloom project [17, 18, 19].
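
As a concrete illustration of how such disparities are surfaced, the sketch below audits a hypothetical model's predictions by demographic group; the records and group labels are invented for illustration and do not come from any real system.

    # Illustrative per-group accuracy audit for a hypothetical model --
    # the kind of check that exposed the disparities reported in [10].
    # All records below are invented for illustration.
    from collections import defaultdict

    # (group, true_label, predicted_label) triples from a hypothetical model
    records = [
        ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
        ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0),
    ]

    hits, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        hits[group] += int(truth == prediction)

    for group in sorted(totals):
        print(f"{group}: accuracy = {hits[group] / totals[group]:.2f}")
    # A large gap between groups signals that the model may be harming
    # some students more than others and warrants investigation.

Such an audit is only possible when the relevant group labels are collected at all, a tension taken up in the Topics list below.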

Prior Work: Rice University is already a leader in many areas of AI-augmented educational technology. OpenStax, an organization within Rice that produces high-quality, free, open textbooks and low-cost digital learning tools, uses machine learning developed in Rich Baraniuk's lab. Rice and OpenStax are enhancing Rice's reputation and reach by building research infrastructure that enables a broad coalition of leading research teams around the country in cognitive science, education, and machine learning. In addition to current NSF and philanthropic funding, Baraniuk, Vardi, and leading research teams nationally have submitted a major proposal to NSF to create a new AI Institute: SET-SAIL (Students, Environments, and Technologies for Scalable AI-based Learning). Rice is also leading in ethics in technology through the Initiative on Technology, Culture and Society and the establishment of Rice's COMP 301 course on Computing, Ethics, and Society.

Externally, organizations such as ACM FAccT and AI Now are engaging in research and discussion around ethics in AI, but for the most part, the use of AI in education is left out of the conversation. There are, however, a few nascent initiatives, including the Institute for Ethical AI in Education and workshops on ethics in AI for education at larger conferences: FATED 2020 (Fairness, Accountability, and Transparency in Educational Data), Ethics in AIED: Who Cares? (an AIED 2019 conference workshop), and Advances and Opportunities: Machine Learning for Education (a NeurIPS 2020 workshop). In addition, there is an increasing amount of academic scholarship on the value of emerging modes and methods of student engagement, such as embedding game characteristics and principles within learning activities, integrating educational values in digital game design [20, 21], and the "gamification" of education [22, 23, 24]. We propose to bring these pioneering researchers and ideas to Rice and to work together to move the conversation forward.

Ethics Research Group and Colloquium Format. Led by Antón, Ferreira, Fletcher, Jelinkova, and Vardi, the research group will bring together educational-technology implementers, researchers, and ethicists to develop guiding questions and principles for ethical issues in AI-augmented education and to plan broader engagement with stakeholders, including educators, parents, and students. The research group is envisioned as a sequence of three interactive sessions held over the next three seasons: a 3-hour virtual session in the Summer, a virtual follow-up in the Fall, and a 1-2 day in-person workshop in the Spring of 2022. The virtual sessions will bring participants together for presentations and discussion at a time that can accommodate participants in the US and Europe. We also envision a colloquium that builds on the research group's work: a series of panels open to a broad audience of education stakeholders, designed to engage the broader public in critical conversations around the ethical use of AI within education.

Resulting Products. A core group of participants (the conveners and participants so inclined) will produce a white paper from the colloquium proceedings that includes a set of framing questions to guide ethical decision-making around the use of AI in higher education and a proposal for guidelines for the development of ethical educational technology, data mining, and learning analytics. We will also produce an op-ed for a wider audience (for example, at https://theconversation.com).

Framing Questions:

  • How do we decide where investment and development in AI-augmented education is both ethical and most useful?
  • How do we ensure that AI agents behave ethically?
  • How do we ensure that AI-augmented education is deployed and used ethically?
  • What functions in education do we not delegate to artificial agents? 
  • What gaps in research need to be filled to answer these questions?

Topics:

  • Sources of bias in the educational context
  • Metrics that ensure equitable outcomes for students
  • Availability of data for training AI algorithms that includes and is representative of diverse students
  • Discoverability of negative outcomes
  • Potential harms of technology solutionism (the belief that every problem has a technological solution)
  • Balancing privacy, transparency, and the need to ensure fairness by collecting sensitive data for evaluation
  • The role of contextual privacy in education (privacy expectations and choices are not fixed, but dependent on context such as who the information will be shared with and why)
  • Incorporating accessibility (for disabilities) and inclusive design (designing for as broad a population of abilities as reasonably possible) in AI-augmented education
  • Factors of AI explainability, trust, and recourse in educational settings: whether the decisions, recommendations, and analytics developed with AI can be explained and understood; whether people over-trust or under-trust them, and in what contexts; and whether people can take reasonable and relevant actions to change negative decisions (see the sketch following this list).
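
To ground the last topic, here is a minimal sketch of a recourse check against a simple linear risk model; the features, weights, and threshold are hypothetical, chosen only to illustrate the idea of actionable recourse.

    # Minimal recourse check: given a (hypothetical) linear "at risk" score,
    # can a student change actionable features enough to flip the decision?
    # Weights, features, and threshold are invented for illustration.
    import math

    WEIGHTS = {"missed_deadlines": 0.8, "forum_posts": -0.3, "quiz_avg": -1.2}
    BIAS = 0.5
    ACTIONABLE = ["missed_deadlines", "forum_posts"]  # quiz history is fixed

    def risk(features):
        """Probability the model flags the student as at risk."""
        z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
        return 1 / (1 + math.exp(-z))

    student = {"missed_deadlines": 3.0, "forum_posts": 1.0, "quiz_avg": 0.6}
    print(f"initial risk: {risk(student):.2f}")

    # Greedy one-feature search for a change that lowers risk below the
    # decision threshold -- a crude stand-in for real recourse methods.
    for feature in ACTIONABLE:
        candidate = dict(student)
        step = -1.0 if WEIGHTS[feature] > 0 else 1.0  # move against the weight
        for _ in range(5):
            candidate[feature] = max(0.0, candidate[feature] + step)
            if risk(candidate) < 0.5:
                print(f"recourse: set {feature} to {candidate[feature]:g}")
                break

If no actionable change can flip the decision, the student has no recourse, which is precisely the kind of situation the framing questions above are meant to surface.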

References

[1] Lonn, S., Aguilar, S. J., & Teasley, S. D. (2015). Investigating student motivation in the context of a learning analytics intervention during a summer bridge program. Computers in Human Behavior, 47, 90–97. https://doi.org/10.1016/j.chb.2014.07.013

[2] Schumacher, C., & Ifenthaler, D. (2018). The importance of students’ motivational dispositions for designing learning analytics. Journal of Computing in Higher Education, 30(3), 599–619. https://doi.org/10.1007/s12528-018-9188-y

[3] Jivet, I., Scheffel, M., Schmitz, M., Robbers, S., Specht, M., & Drachsler, H. (2020). From students with love: An empirical study on learner goals, self-regulated learning and sense-making of learning analytics in higher education. Internet and Higher Education, 47, 100758. https://doi.org/10.1016/j.iheduc.2020.100758

[4] Jivet, I., Scheffel, M., Specht, M. & Drachsler, H. (2018). License to evaluate: Preparing Learning Analytics Dashboards for educational practice. In LAK’18: International Conference on Learning Analytics and Knowledge, March 7–9, 2018, Sydney, NSW, Australia. ACM, New York, NY, USA, https://doi.org/10.1145/3170358.3170421

[5] Lim, L., Joksimović, S., Dawson, S., & Gašević, D. (2019). Exploring students’ sensemaking of learning analytics dashboards: Does frame of reference make a difference? ACM International Conference Proceeding Series, 250–259. https://doi.org/10.1145/3303772.3303804

[6] Wise, A. F. (2014). Designing pedagogical interventions to support student use of learning analytics. Proceedings of the Fourth International Conference on Learning Analytics and Knowledge – LAK '14, 203–211. https://doi.org/10.1145/2567574.2567588

[7] Zawacki-Richter, O., & Latchem, C. (2018). Exploring four decades of research in Computers & Education. Computers & Education, 122, 136–152. https://doi.org/10.1016/j.compedu.2018.04.001

[8] Marzouk, Z., Rakovic, M., Liaqat, A., Vytasek, J., Samadi, D., Stewart-Alonso, J., Ram, I., Woloshen, S., Winne, P. H., & Nesbit, J. C. (2016). What if learning analytics were based on learning science? Australasian Journal of Educational Technology, 32(6), 1–18. https://doi.org/10.14742/ajet.3058

[9] Doroudi, S., & Brunskill, E. (2019, March). Fairer but not fair enough on the equitability of knowledge tracing. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (pp. 335-339).

[10] Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency (pp. 77-91).

[11] Parikh, R. B., Obermeyer, Z., & Navathe, A. S. (2019). Regulation of predictive analytics in medicine. Science, 363(6429), 810-812.

[12] Yapo, A., & Weiss, J. (2018). Ethical implications of bias in machine learning. In Proceedings of the 51st Hawaii International Conference on System Sciences (HICSS).

[13] Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. ACM.

[14] Vardi, M. Y. (2020). Efficiency vs. resilience: What COVID-19 teaches computing. Communications of the ACM, 63(5), 9. https://doi.org/10.1145/3388890

[15] Reich, J. (2020). Failure to Disrupt: Why Technology Alone Can't Transform Education. Harvard University Press. ISBN 9780674089044.

[16] Nissenbaum, H. F. (2010). Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford Law Books.

[17] Herold, B. (2014). inBloom to shut down amid growing data-privacy concerns. Education Week, 21. Retrieved from http://blogs.edweek.org/edweek/DigitalEducation/2014/04/inbloom_to_shut_down_amid_growing_data_privacy_concerns.html

[18] Regan, P. M., & Jesse, J. (2018). Ethical challenges of edtech, big data and personalized learning: twenty-first century student sorting and tracking. Ethics and Information Technology, 21(3), 167–179. https://doi.org/10.1007/s10676-018-9492-2

[19] Reidenberg, J. R., & Schaub, F. (2018). Achieving big data privacy in education. Theory and Research in Education, 16(3), 263–279. https://doi.org/10.1177/1477878518805308

[20] Flanagan, M., & Nissenbaum, H. (2014). Values at Play in Digital Games. MIT Press.

[21] Rooney, P. (2012). A theoretical framework for serious game design: Exploring pedagogy, play and fidelity and their implications for the design process. International Journal of Game-Based Learning, 2(4), 41–60. https://doi.org/10.4018/ijgbl.2012100103

[22] Dichev, C., & Dicheva, D. (2017). Gamifying education: What is known, what is believed and what remains uncertain: A critical review. International Journal of Educational Technology in Higher Education, 14, 9. https://doi.org/10.1186/s41239-017-0042-5

[23] Nah, F. F.-H., Zeng, Q., Telaprolu, V. R., Ayyappa, A. P., & Eschenbrenner, B. (2014). Gamification of education: A review of literature. In F. F.-H. Nah (Ed.), HCI in Business (HCIB 2014), Lecture Notes in Computer Science, vol. 8527. Springer, Cham. https://doi.org/10.1007/978-3-319-07293-7_39

[24] Kim, S., Song, K., Lockee, B., & Burton, J. (2017). Gamification in Learning and Education: Enjoy Learning Like Gaming. Springer International Publishing AG. Accessed January 8, 2021. ProQuest Ebook Central.