AI, Problem-Solving, and Visual Thinking

Photo of Picasso from Apple Computer's "Think Different" campaign.

I have been looking at AI as a thinking tool. Not as something to generate content (which I think it is essentially lousy at), but as an aid to thinking. I have heard a number of people at conferences and elsewhere talk about how AI is like a calculator, and how calculators were supposed to be the end of math. What calculators really did was get rid of a lot of the grunt work with the pencil and allow people to think about steps and processes instead. It is the difference between writing with a pen in a notebook and word processing. Back in the 90s, an uncle of mine described the word processor as the chainsaw of the writing world. (He grew up in Minnesota in the 30s and spent a lot of time chopping wood with an axe.) Imagine if someone were given a chainsaw and started using it as an axe. That person would chuck the new tool and go back to using the old one – and they should. We are in a similar place with AI: I don't think we are using it right, at least not quite yet.

As an example, I asked AI to generate guitar tablature for the tune “Happy Birthday.” It produced some tab. I then asked it to rewrite the tab in the form of one of Bach’s two-part inventions, and it did. It even explained the voices and the process. The problem was that it did not know the tune to “Happy Birthday” – all it produced was gibberish. Its original tune for “Happy Birthday” was wrong, and the two-part invention was just as bad. What was absolutely spot on was the explanation of the process of creating a two-part invention. If I had the time, I could have fed it the corrected tune, added some chords, and fixed it. I have countless examples of AI getting fundamentals and facts wrong: if I put in a chapter of an OER textbook and ask it to create 20 multiple choice questions, it may ask “What color is the table header on page 25?” if it feels the chapter is a little thin. So, in my experience, it is good at processes, not facts.

Picasso once commented on computers: “But they are useless. They can only give you answers.” So when you are working with AI, getting “answers” is really the wrong way to use it, although in some limited ways it can do that, since it is also trained on Wikipedia. What it can really do well is understand and express information through processes. I started thinking about what I do to think through ideas and what problem-solving methods I use. In my ABE and English classes, I would have students use Rogerian and Toulmin methods to analyze arguments. I would also use alternative problem-solving techniques such as Synectics. In short, I am interested in anything that helps my students analyze arguments apart from classical debate, which creates “right and wrong” and “winners and losers.” I think we are all too familiar with why these are necessary skills to teach. Jennifer Gonzalez has a great list of alternatives to traditional debate. She focuses on speaking and listening, which are lost arts. Maybe some of these non-traditional methods could be used with AI to help students think about problems in new ways.

While revisiting all of this, I ran into Kye Gomez’s “Tree of Thought” prompts. If you have not heard of it, it is an approach to problem-solving that aims to map out the different paths and potential solutions to a problem, structured similarly to a decision tree. The method is grounded in the principles of cognitive science and systems thinking: the emphasis is on understanding and navigating the complexity of thought processes by visualizing them as interconnected branches, each representing different possible outcomes and actions. The approach is particularly relevant in complex problem-solving scenarios where traditional linear thinking may fall short.

The method involves breaking down a problem into its core components and exploring each branch’s possible decisions and outcomes. This helps in understanding the problem from multiple perspectives and encourages a comprehensive analysis of potential solutions. By visualizing thought processes as a tree, individuals can systematically evaluate the implications of each decision, leading to more informed and strategic choices.

Here are some key aspects of the “Tree of Thought” method:

  1. Pattern Recognition: Recognizing and organizing different types of information and knowledge to form coherent patterns.
  2. Iterative Learning: Continuously refining and adapting thoughts based on new information and feedback.
  3. Non-Linear Thinking: Moving away from linear, step-by-step problem-solving approaches to more dynamic and interconnected thinking.
  4. Knowledge Flow: Understanding that knowledge is not static but flows and evolves, requiring flexible thinking structures.

In the context of problem-solving, the “Tree of Thought” method serves as a powerful tool for navigating complexity and making informed decisions based on a holistic view of the problem space. Mind you, this is meant to train the AI to think, but teaching students these prompt-engineering techniques can bring critical thinking to a whole new level. What I like about it is that it can replace the “debate” kind of thinking and allow contending voices around a problem to come to some kind of agreement about what the problem might be, who or what is affected by it, and what solutions might arise from shared understandings rather than the winner/loser paradigm of traditional debates. For students, imagine using it to test Stephen Downes’ Guide to the Logical Fallacies, which I have used with my English 101 students. One could create a Tree of Thought where two of the “experts” engage in one or more of the logical fallacies and one does not. We could also add something like Nikol’s Thirteen Problem Solving Models to the GPT’s knowledge bank.
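The branch-expand-prune cycle the method describes can be sketched in a few lines of code. This is a toy illustration only, not Gomez's actual implementation: the `expand` function and the numeric scores are invented stand-ins for what a language model (the panel of "experts") would actually supply.

```python
# A minimal sketch of Tree of Thought as a beam search over candidate
# "thoughts": expand each branch, score the candidates, and prune the
# weak ones before going deeper. Scores here are fake; in practice a
# model would generate and evaluate the branches.

from dataclasses import dataclass, field

@dataclass
class Thought:
    text: str
    score: float
    children: list = field(default_factory=list)

def expand(thought):
    """Stand-in for asking a language model for next-step candidates."""
    return [Thought(f"{thought.text} -> step {i}", thought.score + i)
            for i in range(3)]

def tree_of_thought(root, depth=2, beam=2):
    """Repeatedly expand the frontier and keep only the `beam` best branches."""
    frontier = [root]
    for _ in range(depth):
        candidates = []
        for t in frontier:
            t.children = expand(t)
            candidates.extend(t.children)
        # Pruning is the "expert realises they're wrong and leaves" move.
        frontier = sorted(candidates, key=lambda t: t.score, reverse=True)[:beam]
    return frontier

best = tree_of_thought(Thought("problem", 0.0))
print([t.text for t in best])
```

In practice, `expand` would be a call to a chatbot and the score a judgment (by a person or another prompt) about whether an "expert" should stay at the table.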

In this case, I went into ChatGPT-4o and put in the prompt: “Imagine three different experts are answering this question. All experts will write down 1 step of their thinking, then share it with the group. Then all experts will go on to the next step, etc. If any expert realises they’re wrong at any point then they leave. The question is: What is the connection between student engagement and retention?” It gave me three well-thought-out perspectives and some shared conclusions. I thought it was really useful, but it seemed dense and complex, and I wanted to separate out the arguments. I asked it if it could put the arguments into a concept map. It said it could and gave me an outline that I could use to build a concept map. That is fine, but it occurred to me that ChatGPT can output code, so I asked for the concept map in HTML5 so I could visualize the arguments. It did it. I next asked for the different levels in the map to be color coded, and it did that as well.

a section of a concept map
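You do not have to rely on the chatbot to produce the HTML, either. If it returns the concept map as a nested outline, a short script can do the color-coded rendering locally. The outline text, labels, and colors below are invented for illustration; only the technique (one color per level of the map) comes from the experiment described above.

```python
# Render a nested (label, children) outline as a color-coded HTML list,
# one color per depth level, as a stand-in for asking the chatbot
# for finished HTML. The outline and hex colors are made up.

LEVEL_COLORS = ["#1b6ca8", "#2e8b57", "#b5651d"]  # depth 0, 1, 2+

def outline_to_html(node, depth=0):
    """Recursively turn an outline node into a nested <li>/<ul> tree."""
    label, children = node
    color = LEVEL_COLORS[min(depth, len(LEVEL_COLORS) - 1)]
    html = f'<li style="color:{color}">{label}'
    if children:
        html += "<ul>" + "".join(outline_to_html(c, depth + 1)
                                 for c in children) + "</ul>"
    return html + "</li>"

# A hypothetical outline standing in for the engagement/retention map.
outline = ("Student engagement and retention",
           [("Expert 1: sense of belonging", [("Peer networks", [])]),
            ("Expert 2: feedback loops", [("Early alerts", [])])])

page = f"<!DOCTYPE html><html><body><ul>{outline_to_html(outline)}</ul></body></html>"
print(page)
```

Saving the output to a file and opening it in a browser shows each level of the map in its own color, which is essentially what ChatGPT produced for me.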

I started thinking about this: my English students could do the same thing with any of their readings. Any article or book can have a lens of information visualization applied to it (anything in the Periodic Table of Visualization Methods, for instance), allowing students to get a toehold on different ideas in the ways they understand best.

Posted in AI

Old School Machine Learning and A.I.

We have been using ML and AI around here from jump street. “Watch, remember, repeat” is the “pedagogy” in all the commercial education platforms!

Posted in education

CFP: AI Institute at Clover Park Technical College

Clover Park Technical College's mascot, Simon Robot.

Clover Park Technical College has a CFP for its 2024 AI Institute, which examines the innovative and ethical use of AI in higher education.

This event will take place on Fri., Aug. 9th and Sat., Aug. 10th, 2024. We invite educators, researchers, instructional designers, and forward-thinking practitioners to contribute their insights and expertise.

This event is free and virtual for all. It was previously known as the OER Faculty Institute. We at first envisioned these institutes as focusing on community and technical colleges, but there was so much interest from a wide variety of institutions that we felt everyone would benefit if it were truly open.

CFP form and Registration

Posted in AI

A Short Annotated Reading List for AI Ethics and Policy

Robot reading a journal.

Fatuous illustration by Dall-E.

This is a reading list of essays, resources, and fully hatched policies at other institutions for AI and ethics. It is meant to provide a background for those interested in developing an AI policy for their courses or for their institution. I think it is important to understand the risks of using AI as well as the potential benefits. By understanding the two together, I think we will see that developing ways to mitigate the dangers is absolutely critical. The purpose of this list is strictly utilitarian, not exhaustively academic. I am interested in striking a balance between student care and innovation, and I think the readings below reflect that balance. I have written more on the local college policy issues, as well as a number of posts on the ethical issues of AI and of education tech in general, elsewhere on this blog.

AI Ethics Guidelines Global Inventory. (2021). Algorithm Watch.
This is useful for looking at issues with AI from around the world. Looking at other policies allows us to view how other organizations and governments have sought to solve problems: problems we may not have yet considered.

Artificial Intelligence Policies: Guidelines and Considerations. (2024). Learning Innovation and Lifetime Education. Duke University.
I included this as a mature, thought-out plan that seems to account for a number of approaches that faculty might need depending on what and how they are teaching.

Atwell, Sue. (2024). From Principles to Practice: Taking a whole institution approach to developing your Artificial Intelligence Operational Plan. National Center for AI.
This article is useful for mapping out how to bring in all of the users and stakeholders into the planning process. Gathering the local voices is important for cultivating buy-in to any policy. It is also an inclusive, shared-governance way of helping faculty and staff evolve their thinking about new technology.

Antoniak, Maria. (2023). Using Large Language Models With Care: How to be mindful of current risks when using chatbots and writing assistants. Medium.
The list of ten risks is useful in discussions about why a policy on AI is needed. This is a good introductory essay on ethical issues around AI.

Bender, Emily M. et al. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ACM Digital Library. Association for Computing Machinery.
This is a thorough and close look at the ethical issues around large language models and what to do about them. It is a seminal paper on the topic and a “must read.”

Brandon, Esther et al. (2023). Cross-Campus Approaches to Building a Generative AI Policy. Educause Review.
Again, leadership in developing policies should always ask “who have we included in the policy-making process?” It is too easy to exclude voices in the name of expediency.

Cardona, Miguel A. et al. (2023). Artificial Intelligence and the Future of Teaching and Learning.  Office of Education Technology. Department of Education.
This is a good high-level view of AI in education and includes a balance of opportunities, challenges, and risks.

Eaton, Lance. (2024). Syllabi Policies for AI Generative Tools. Google Form.
A Google form and spreadsheet of hundreds of examples of course policies on AI. “This resource is created…for the purposes of sharing and helping other instructors see the range of policies available by other educators to help in the development of their own for navigating AI-Generative Tools…”

Ethical AI for Teaching and Learning. (nd). Center for Teaching and Innovation. Cornell University.
A thoughtful assessment of the issues: “Building literacy in Generative AI includes addressing ethics, privacy, and equity with intention. There are many open questions, including legal questions, regarding the ethical design, development, use, and evaluation of generative AI in teaching and learning. While generative AI may potentially be powerfully useful, concerns and sensitivities surround a number of key issues…”

Ethics guidelines for trustworthy AI. (2019). European Commission.
Again, looking at AI and ethics through the lens of other cultures than the U.S.A can help us be aware of issues that our own biases may not allow us to see. “The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy.” 

Gašević, D., Siemens, G., & Sadiq, S. (2023). Empowering learners for the age of artificial intelligence. Computers & Education: Artificial Intelligence, 4, 100130.
This paper includes a discussion about how generative AI’s weaknesses can be turned into possible strengths given the proper training of teachers and students in its use. 

K-12 Generative AI Readiness Checklist. (2023). The Council of the Great City Schools.
This is an important document because it shows the stark difference between how K-12 adopts technology v. Higher Ed. This represents a thoughtful investigation and method in how to gather the voices and concerns around the adoption of new technology. We need a version of this for colleges.

U.S. University Policies on ChatGPT. (2023). Scribbr. Google Spreadsheet.
We have the Eaton spreadsheet (above) for class policies; this is a useful collection of college policies. “This spreadsheet details the AI policies of 100 American universities. It is current through June 2023 and updated as circumstances change.”

If you have a resource that you think we should be looking at in addition to these, feel free to add it by commenting below or dropping me a line. Thanks!

Posted in AI, education

Multi-Modality: Creating a Student Centric, Flexible Choice Instruction/Learning Model

Century College logo

I am at Achieving the Dream’s Dream 24 conference in Orlando, FL, with a team from Clover Park Technical College. I am interested in looking at HyFlex models from the instructional design perspective. These are my notes for colleagues – they are impressionistic and represent my own interests as an instructional designer – your results may vary, contents may settle during shipping.

From the program:

Century College’s Multi-Modality Model addresses student needs for flexibility and high-impact learning, increasing access and equity for all students. Al multi-modality classroom is where a faculty member uses three modalities (face-to-face, online synchronous, and online asynchronous) to teach a course and the key component is student choice, where students can choose any modality on any given day throughout the semester to fit their schedule and learning style preference. Resulting from lessons learned during and affter the pandemic, college leadership employed and shared governance structure where faculty and administration worked collaboratively creating an innovative model that includes professional development, student supports, state of the art technology, and more! A robust data-driven assessment showed strong student outcomes, narrowing of equity gaps, and advancement of teaching and learning. Participants will explore how this model can be successfully implemented at their own institutions.

Century College is a career and technical college in Minnesota: 11,000 students; 169 degrees, diplomas, and certificates; 45% students of color; and 50% first in their family to attend college.

They invested $4.4 million into 103 classrooms, 10 labs, and 21 student gathering spaces.

They discussed institutional buy-in. Multimodal classes provide flexibility to students.

One of the first speakers was the faculty union leader. The college is 60-70% full-time faculty, which makes buy-in all the more important. There is a faculty shared-governance process, and communication was the key to faculty buy-in. They had trouble implementing online learning in the past because of the lack of support and faculty buy-in; they needed to provide the tools and support to make it work.

This started with a student, a mother who couldn’t make it to class, so an instructor used Adobe Connect to live-stream the course for her. It grew out of students’ need for flexibility.

Benefits: increased access, increased enrollments, and student success.
Challenges: support for faculty and workload, support for students, high-quality captioned videos, communication and logistics.

Structure:

  • Student choice of three modalities
  • Faculty training
  • Classroom tech support
  • Resources to support students
  • Assessment for improvement

Planning for implementation was discussed, with a two-year timeline. Faculty get paid for professional development, and they have to go through the training in order to teach in the model. The teaching and learning center, student workers, and IT work together.

They contracted with a leader in the HyFlex model (Brian Beatty of San Francisco State), took what they learned from him, and built their own model after it.

Technology:

  • Front camera
  • Speakers
  • On-call IT support
  • Tech assistant
  • Room microphone
  • Podium microphone
  • Projector
  • Assistive hearing device
  • Wireless lavalier mic

This took a lot of training to get it to work for faculty.

Changes in course design: the faculty member said “go paperless” – use the LMS for everything, even if you are teaching face-to-face. The course functions as one community – you are not teaching three classes simultaneously. They use tools like Perusall, a social annotation tool. Clear, transparent assessments and rubrics provided ahead of time ensure that all three modalities are assessed equitably.

Students enrolled in multi-modality sections have higher pass rates than hybrid and online sections of the same course and had pass rates comparable to f2f sections. It helped narrow the equity gaps as well.

To implement:

Include a research and best-practices review, hold collaborative discussions with faculty and admin, get student feedback, develop a model based on student choice, and be willing to invest.

__________________________

I am interested in this because of COVID and also because we did something like this 12 years ago at Tacoma Community College for the Health Information Management courses. Those students included people already working in the health field; some were parents or employed elsewhere. We used a very simple model: the LMS in conjunction with Elluminate (like Zoom) and a live phone connection.

Posted in education

AI: What is your policy?

I wrote this for the Teaching & Learning Center Newsletter, but I wanted a version of it in the blogosphere in case someone has a take on the five positions in this brief article. We are also discussing AI policy in an AI task force:

Readers of this newsletter may already be familiar with “generative AI,” like ChatGPT, which launched in November 2022. It can generate papers, reports, test questions and answers, computer code, images, and more when asked questions by the user. It has some educators worried it will encourage plagiarism. Some institutions have gone as far as banning it. We currently do not have an “official” position at Clover Park on the student use of AI. Given that, an instructor is probably going to take one of these positions (suggested by Jeff Budlong in Inside Iowa State):

  • It’s not allowed: only content by a student or as part of an assigned group is accepted.  
  • It’s allowed with attribution: AI-assisted work on some assignments is allowed when students identify what parts of the assignment were AI generated and how it helped them.  
  • It’s allowed in limited instances: AI can be used to prepare for assignments by brainstorming, but students must show how it helped them reach the result.  
  • Use is encouraged broadly or required: students can use AI but must identify what parts it generated.   

Risking a little hubris, I might add a fifth position, “ignoring it completely because you are confident enough in your instructional design that the only way for a student to use external tools like AI would be to apply the same critical thinking skills they would need to work through the assignment in the first place.”  

What should your position be on AI in your classroom? The TLC has a number of options to help faculty make that decision. We have offered and will continue to offer workshops and podcasts around AI, we are holding an AI Faculty Institute over the summer, and you can always talk to us in the TLC. We can help you look at your curriculum and make suggestions about where AI can help either you or your students, or how to craft your assignments and policies to communicate your expectations to your students.   

If you would like to explore more policies, I would recommend Syllabi Policies for Generative AI Tools which has collected over 70 policies from different universities and colleges. Also, Texas A&M University has a wide variety of policies gathered from other universities in a more condensed format.  

If you have a sixth position, I would love to hear from you! Comment below, catch me on Mastodon, or drop me a line. 

Posted in AI

Instructional Design Positions at Clover Park

The logo of Clover Park Technical College

Clover Park Technical College is looking for two instructional designers for project positions. I work as an Instructional Designer at Clover Park, and I can tell you it is an excellent place to work!

Clover Park Technical College (CPTC) is seeking a highly skilled Instructional and Curriculum Design Specialist to support college faculty in curriculum design. The ideal candidate will be passionate about supporting the college’s diverse student body and will help support the college’s work to cultivate an inclusive culture and campus climate by valuing diversity, sustaining an environment of belonging, and promoting equitable opportunities for all. The successful candidate will work to eliminate equity gaps and help students succeed in their chosen career path and in an increasingly diverse work environment. Well-qualified applicants should be enthusiastic about continuously working to help faculty keep the curriculum relevant, researching and implementing the latest developments in curriculum with the intent of helping to create a more inclusive classroom environment.

The Instructional Design and Curriculum Specialist is responsible for supporting faculty in the researching, planning, and development/design of program courses, scope and sequences, syllabi, program maps, and adoption and development of Open Educational Resources (OER). Additionally, the Design and Curriculum Specialist will help facilitate faculty training such as Fundamentals of Teaching Online, ACUE, and Reading Apprenticeship, as well as a variety of faculty learning communities. The purpose of this appointment is to support the redesign of the curriculum to create a more inclusive and equitable classroom experience for students. This includes strengthening student learning outcomes that address the identified requirements of the industry. This is a grant-funded project position that will be funded up to August 2024.

This position has been designated as a bargaining unit position represented by the AFT Professional Staff, Local 6431, and is overtime-exempt.

Clover Park Technical College celebrates the many individuals that make up our community and embraces the opportunity to learn from our differences and similarities. CPTC values equity and respect. We seek to create an environment of innovation and excellence and focus on student success, lifelong learning, and social responsibility.

Apply on the college’s website.

Posted in education, instructionaldesign

AI: An “F” for Fake

Orson meets Terminator

I have been playing with voice cloning to try to get a sense of where the technology is and how it can be used in instructional design. There are a number of moral and legal issues with voice cloning, of course – like how do you know the voice you are hearing is who you think it is, and that what they are saying is what they are actually saying? But there are some very useful applications of this technology in education, such as cross-lingual voice cloning for dubbing videos in other languages. As Perez et al. put it: “The rapid progress of modern AI tools for automatic speech recognition and machine translation is leading to a progressive cost reduction to produce publishable subtitles for educational videos in multiple languages.” It has also been used by an instructor with ALS who had lost her voice to the illness but got it back virtually through voice cloning. It can also be used to create more natural voices in educational materials that use text-to-speech (TTS).

But not all of these tools are created equal. In my experiments I used Orson Welles’s voice because it is so distinctive (and I am a fan). A voice that distinctive should be low-hanging fruit for AI. To be fair, I am only trying free or freemium tools, so your experiments will certainly vary. I work a lot with adjunct instructors and students, so cost is a factor.

The first one I tried was VoCloner. I was not impressed. It sounds more like Tennessee Williams bumped a little towards the North Atlantic:

The next one, PlayHT, gave much better results:

PlayHT seems to have picked up the cadence of his voice pretty well. PlayHT only lets you clone one voice; if you want to use other voices, they want you to pay. But you can delete a voice and create a new one any time you want. I look forward to more chicanery with this later – maybe for #DS106?

If you have tools that you are using for voice cloning that you think our faculty might find useful, feel free to post a comment below or send me a note. Thanks!

Posted in AI

Edu Predictions: Where is my conveyor belt sidewalk?

An outlandish car of the future.

I have been rather disappointed lately that over the last few years we have not had much in the way of predictions in the education sector. For a while there it seemed like pedagogical prognostic punditry was a widespread New Year’s ritual across the blogosphere. I think that COVID and AI have a lot to do with fogging up the crystal balls. A year before COVID shut down schools, it would have been hard to predict that a virus was going to change teaching (hopefully forever). Maggie Grady at the University of Buffalo points out that COVID has made us more flexible as teachers, changed how we communicate, and introduced “new” teaching methods such as the flipped classroom. I will be interested in seeing what the research shows about how these changes were implemented, how long the changes persist, and how they affect student success and retention.

I think AI would be hard for most in the education world to make predictions about because most educators really don’t understand what it is, how we will use it, or how it is changing. I keep reading education articles about AI that still apply concepts like “thinking” and “creating” to AI (“Gosh, it thinks just like we do!”). The idea is that since we read a bunch of books and articles and come up with ideas, and ChatGPT does the same thing, it must therefore be “thinking” – as if the sum total of thinking were the regurgitation of information. But I digress, which is how you know that I did not use ChatGPT to write this: ChatGPT is terrible at meandering digressions (see Victor Hugo or Balzac for really good digressions). My point is that AI technology is changing so rapidly that it would be foolhardy to make predictions about where it will all wind up.

All of this isn’t to say that you can’t find ANY predictions about education for 2024. There are a number of industry and private consultant types that have a lot to say but most are thinly veiled sales pitches. You can read a lot of the “AI is important and nothing will help your students succeed with AI better than CainCo’s Widgetware 3000…” type of thing at places like eSchool News. Stephen Downes was kind enough to post a link to 157 trends from mostly corporate as presentations in a Google Drive.

Moving sidewalks of the future from 1959

Posted in future

Efforts to Regulate A.I.

I have not posted here in a while because of conferences, in-service, travel, and other local elearning events. It was a whirlwind summer that included AI in Education, a conference at the University of Central Florida. We then presented with our newfound expertise at a nursing deans’ conference, and we are working on AI initiatives on campus, including a summer AI Faculty Institute. I have been attempting to take a measured approach, looking at the ethics, accessibility, data privacy, climate impacts, racism, etc. But that does not seem to be a popular take on all of this. I am interested in really examining it in order to push for a truly open and transparent version of these kinds of tools, geared towards education rather than profit.

There has been some movement on regulating A.I., in the form of one step forward and then two steps back. UNESCO has had some interesting discussions, meetings, and publications, but I am waiting for something with more teeth. It is important, though, to get down the principles we are talking about. It would be ridiculous to avoid stating them just because we may not all agree on every one of them.

I am impressed with how fast Joe Biden got something together. The teeth in this one are pointing in the right direction: make the ethical principles a requirement for federal funding. Hit them in the pocketbook. It will take state governments years to get a statement down that will mean anything. This started last October with the Blueprint for an AI Bill of Rights (the rights are for us, not the AI).

The sad thing is that we have safeguards in place for student data (FERPA) and for accessibility, yet the onus for protecting students is on the schools and faculty – not the ed tech corporations. Again and again, ed tech companies build and sell products that end up selling student information to data brokers and/or not being accessible. Pretty much the same scenario is playing out here. Rishi Sunak hosted the international AI summit, and his big message was that AI needs to be regulated, but not at the expense of “innovation” (innovation = corporate profits).

Posted in AI