On the Banality of AI

I read an interesting post on Mastodon by the Algorithmic Sabotage Research Group (ASRG), “an ongoing, conspiratorial, aesthetico-political, practice-led research framework focused on the intersection of digital culture and information technology,” that referenced Hannah Arendt’s Eichmann in Jerusalem: A Report on the Banality of Evil. The post argues that AI has the characteristics of Eichmann in that it shares “the inability to critique instructions, the lack of reflection on consequences, a commitment to the belief that a correct ordering is being carried out.” Despite my issues with the post’s hyperbole, it raises some interesting and necessary questions. Eichmann was a sentient being and was responsible for his evil. AI is not “thoughtless” – it doesn’t think at all (unless you consider auto-complete thinking). The people and corporations behind some of the harms of AI need to be held responsible and assist in the toxic clean-up – they are the thoughtless ones. The sad thing is that this is not a new relationship we have with technology. In a fit of irony, while writing a presentation on AI, I decided to ask ChatGPT about this post.

I asked: What six things should we do to address this statement:
“While #AI produces thoughtlessness, in the sense that political philosopher Hannah Arendt meant when interpreting the actions of Nazi war criminal Adolf Eichmann[3]; the inability to critique instructions, the lack of reflection on consequences, a commitment to the belief that a correct ordering is being carried out, “Algorithmic Sabotage” is intensified by the new forms of machinic knowing and the nascent becoming of an anti-worker and anti-community computational complex, making necessary a restructuring that reorients the focus from the miasma of AI and its concomitant toxic algorithmic operations of optimisation to developing techniques for redistributing social power, starting from a feminist standpoint and progress towards the implementation of prefigurative strategies of resistance, agency and refusal, to inhibit, slow down or reverse the emergence of harmful racialized practices of exteriorization and exclusion driven by algorithms.”

ChatGPT said:

“To address the complex issues raised in the statement about AI and its harmful social impacts, here are six actionable strategies:

1. Develop Ethical AI Frameworks and Governance

  • Establish AI Ethics Boards: Create diverse, multidisciplinary ethics boards to oversee AI development and ensure systems are aligned with moral and ethical standards, minimizing harm to marginalized communities.
  • Ethical AI Certification: Introduce a certification process for AI systems that meet rigorous ethical guidelines, similar to certifications in environmental sustainability, ensuring that technologies prioritize social good over optimization.

2. Ensure Algorithmic Transparency and Accountability

  • Mandatory Audits for Algorithms: Require companies and institutions deploying AI to conduct regular transparency audits. These audits should analyze algorithmic decision-making processes for biases, fairness, and discriminatory impacts.
  • Public Algorithm Registries: Governments and organizations should create public registries where the details of algorithms used in critical decision-making are published, including their intended purpose, training data, and performance metrics.

3. Promote Data Sovereignty and Individual Control

  • Data Ownership Rights: Advocate for laws and policies that grant individuals control over their personal data, ensuring they can opt out of systems that misuse or commodify their information for exclusionary or exploitative purposes.
  • Community Data Governance: Support the establishment of community-driven data trusts where local communities can govern how data is used, particularly in public services or social welfare systems impacted by AI.

4. Restructure AI to Prioritize Social and Economic Equity

  • Inclusive AI Development: Mandate that AI development teams include members from underrepresented and marginalized communities to ensure that systems are designed with a broad range of perspectives and needs in mind.
  • Address Economic Displacement: Develop policies that address the economic displacement caused by AI-driven automation, such as universal basic income (UBI) or retraining programs that prepare workers for roles that cannot be easily automated.

5. Create Spaces for Prefigurative Strategies of Resistance

  • Support Grassroots Activism: Encourage and fund grassroots movements that resist harmful AI practices, especially those focusing on anti-worker and anti-community algorithms. These movements can push for collective bargaining rights for AI-affected workers.
  • Technology Cooperatives: Foster the development of technology cooperatives where AI tools are built, governed, and maintained democratically by workers and users, creating an alternative to corporate-controlled AI systems.

6. Address and Combat Harmful Racialized and Gendered Practices

  • Anti-Discrimination Laws for AI: Implement strict anti-discrimination laws that prevent AI from perpetuating racial, gender, or class biases. These laws should hold organizations accountable for any discriminatory outputs from their AI systems.
  • Bias-Detection Tools: Support the development of tools and frameworks that automatically detect and mitigate biases in AI systems, ensuring that AI-driven decisions do not reinforce existing social inequities.

These six actions—focusing on ethical governance, transparency, equity, and community empowerment—can help reorient AI development away from harmful practices and towards more just and inclusive outcomes.”

I think these are interesting solutions, but they are solutions that should be in place for most technology, especially education technology. One could easily replace the word “AI” here with “Education Technology” and not be far from the mark. The problem with educational institutions is that they tend to let industries determine what tools we need and how we use them. We keep paying for tools, like Canvas for instance, that we know are not Section 508 compliant; we just shrug our shoulders and keep buying tools that we know do harm. This is why we need to keep having these conversations about AI and demand higher standards and practices from all of our technology.

I asked it to create an image expressing these strategies:

[AI-generated picture of a protest against AI, with many misspelled words on the signs.]

I agree! Ethical algornpabbily now!!
