The willy-nilly embrace of AI in education, and ChatGPT in particular, has me thinking again about the importance of rubrics for the evaluation of education technology. I think we are at a place where all of the effort that has gone into the vetting of education technology is more important than ever. I am not a Luddite or a technophobe. I have been involved in education technology for more decades than I will admit. I am not “wringing my hands,” as many ed tech folks dismissively claim educators are doing about AI. But this is not my first rodeo. Scores of tools, apps, and programs have come and gone over the years. A lot of this reminds me of the Web 2.0 flurry around 2008-10, when faculty and instructional designers embraced every tool as the latest thing despite the lack of accessibility and the coming spectre of “sustainability” – that is, when we all figured out that corporations do not do anything for free. Many of the same people who were excoriating companies for flogging non-accessible software and apps ten years ago now seem to be very excited about AI. Pages have been written on ed tech and the corporate takeover of education, yet somehow all of that gets forgotten. I think it is important to consider why, and what we should be doing instead.
There are a number of issues around using AI in education. I am not concerned about the plagiarism issue because that is an easily addressed instructional design problem, one I wrote about elsewhere in this blog even before anyone was concerned about AI. There are other issues that I think are even more troubling.
One of the ironies of all the hubbub around ChatGPT is that it is happening during a much-needed examination of how we approach diversity, equity, and inclusion (DEI) in education. Serious steps have been taken; as just one example, I would point to the numerous rubrics available for analyzing a syllabus or a course for DEI issues. And yet it all stops when it comes to the latest thing in education. Large language model (LLM) AI uses huge amounts of data scraped from an internet produced by a society that is endemically racist. I think this represents a huge step backwards.
As reported in Cyber Security Hub, “Italy has made the decision to temporarily ban ChatGPT within the country due to concerns that it violates the General Data Protection Regulation (GDPR). GDPR is a law concerning data and data privacy which imposes security and privacy obligations on those operating within the European Union (EU) and the European Economic Area (EEA).” The Italian Data Protection Agency is concerned about all of the personal data that was used to “train” ChatGPT. It is unclear how OpenAI uses the personal data that ChatGPT was trained on and what it does with the information it gleans from its use. Your email address, phone number, and browser information, combined with what it can glean from the kinds of questions being entered, provide ChatGPT with a wide array of actionable personal information that can be monetized.
There are so many ethical issues here, starting with how ChatGPT, for instance, was trained using exploitative labor practices. But there are also unresolved copyright issues. It is all well and good that the courts seem to be saying that text, images, and audio created by AI are not copyrightable, but ethically, AI is not creating anything – it is exploiting the creative work of others. Eventually, this will have to be addressed. If I Xerox a poem or artwork and say it is mine, I am wrong: it is a derivative work based on someone else’s creation. Once the courts understand how LLM AI works and what is really going on, we may get a different reading, but I am not holding my breath.
So Why Are We Doing This?
The real question for me is why ed tech and instructional design folks are diving into this headlong. Many are not just testing and evaluating; they are actively promoting it as a tool to do their job and coaching instructors in the “proper use” of a tool we know has serious problems. I would also add that very few people really understand ChatGPT well enough to make that recommendation. I hear instructional designers use all kinds of verbs around ChatGPT that are misleading and dangerous (it “thinks,” it “creates”).
Maybe there is an allure to an “easy” way to do instructional design or to write a paper. Thinking and processing information is hard work – but I think it is work worth doing. If the cost of saving some time and effort is reinforcing an inherently racist (or at least substantially biased) program that was built using exploitative labor practices, then maybe it is just not worth it. I am not excited about thinking being off-loaded to an algorithm; doing the work should be part of the fun. The travel industry brags that it has been using canned ad copy for years. When was the last time you were excited by travel ad copy?
FOMO is the “fear of missing out” – we had better do this before someone else does. No instructional designer or “futurist” wants to be left behind in the next ed tech revolution. Except it is not an ed tech revolution; it is a corporate raking in of user information and cash, because none of the corporations want to miss out either.
This is somewhat related, but the idea here is that, as an instructional designer, unless you are using the latest version of WidgetWare3000, you are somehow less than. There has to be room for analyzing what we are doing and why we are doing it. I think that being on the cutting edge should include a careful look at what we are recommending to faculty and why. I am not saying that there is NO role for AI in education. I am asking why there is no oversight on what these corporations are doing in education.
And let’s face it: there is no real oversight. None of the recommendations, and few of the education technology purchasing rubrics used by IT departments, carry any real consequences for anyone using whatever software a faculty or staff member decides to use. Is this really an academic freedom issue? One would hope vetting was part of a budget process, but then it only applies to things purchased with that particular budget or purchased in a particular way. No one in education is willing to take responsibility until someone sues. We have made a lot of great inroads into accessibility, but we still have a long way to go. ChatGPT presents the same problem with data security.
Ed tech companies that engage in open-washing and routine copyright violations are thrilled with AI as courts determine that images and text generated by AI prompts are not subject to copyright, even though the text generated may have been derived from materials under copyright. So the open-washers are legally in the clear, even though ethically they are, as usual, neck deep in the steaming miasma of for-profit at any cost.
What Is To Be Done?
I am not asking anyone to abolish AI or anything like that. I am only asking that we follow through on the efforts meant to vet education technology before we recommend its use in education. That’s it. We have VPATs for education technology, but those deal only with accessibility; we need to include data security, among other things, as well.
Ed Tech Evaluation Rubrics: An Annotated Reading List
Anstey, Lauren, & Watson, Gavan. (2018). A Rubric for Evaluating E-Learning Tools in Higher Education. Educause Review. An examination of the Rubric for eLearning Tool Evaluation. This rubric includes criteria for privacy, data protection, and rights. It also considers cost under the “Accessibility” criterion.
Digital Tool Evaluation Rubric. (n.d.). Mississippi Department of Education. This rubric is for K-12 instructors, and it is surprisingly comprehensive. It includes data security and “support for all learners.” I like the simplicity, but for our purposes, I think it needs more links out to explanations and research.
Sinkerson, Caroline, et al. (2014). RAIT: A Balanced Approach to Evaluating Educational Technologies. Educause Review. I like the approach in general and the method used for evaluation. This method does not look at student privacy, accessibility, or DEI issues but focuses solely on the instructor’s relationship to the technology.
TrustEd Apps™ Data Privacy Rubric. (2018). 1EdTech (formerly IMS Global). This is a narrow rubric meant to evaluate how an app handles user privacy. This is particularly important considering that ed tech companies may either directly sell their user data (disaggregated or not) or sell their data to companies who in turn sell it to data brokers. Why isn’t this in other ed tech evaluation rubrics?
Lee, C-Y., & Cherner, T. S. (2015). A comprehensive evaluation rubric for assessing instructional apps. Journal of Information Technology Education: Research, 14, 21-53. Significantly, this method includes “cultural sensitivity” as a criterion. See “Appendix A” of this article for a copy of their rubric.
ELC Equity and Tech Rubric (Draft). (2023). E-Learning Council. Washington State Board for Community and Technical Colleges. This is a draft that the DEI workgroup from the ELC is working on. It covers a lot of the issues that are being discussed around AI in education. I think this is going to be a real game-changer in how we look at ed tech in WA state. The link will be live when they have finished. Watch this space…
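For what it is worth, the kind of vetting these rubrics encourage boils down to a simple, repeatable checklist: score each criterion, and let certain criteria (privacy, accessibility) be deal-breakers rather than just points on an average. Here is a minimal, purely illustrative sketch in Python; the criteria names, the 0–3 scoring scale, and the “critical criterion” rule are my own assumptions, not taken from any one rubric above.

```python
# Illustrative sketch only: a minimal vetting checklist in the spirit of the
# rubrics listed above. Criteria names, scale, and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    score: int             # 0 (fails) to 3 (fully meets)
    critical: bool = False # a failing critical criterion blocks adoption

def vet_tool(tool_name: str, criteria: list[Criterion]) -> str:
    """Return a recommendation; any critical failure blocks the tool outright."""
    for c in criteria:
        if c.critical and c.score == 0:
            return f"{tool_name}: do not adopt (fails critical criterion '{c.name}')"
    avg = sum(c.score for c in criteria) / len(criteria)
    verdict = "recommend" if avg >= 2 else "needs review"
    return f"{tool_name}: {verdict} (avg {avg:.1f}/3)"

criteria = [
    Criterion("data privacy / GDPR compliance", score=0, critical=True),
    Criterion("accessibility (VPAT)", score=2, critical=True),
    Criterion("DEI / cultural sensitivity", score=1),
    Criterion("cost and sustainability", score=2),
]
print(vet_tool("ExampleChatTool", criteria))
# → ExampleChatTool: do not adopt (fails critical criterion 'data privacy / GDPR compliance')
```

The point of the sketch is the design choice, not the code: averaging alone lets a shiny feature set paper over a privacy failure, which is exactly the pattern the post is objecting to.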
If you have other resources that you think might address these issues, feel free to put a link in the comments below.