Active Learning Strategies H5P

I see this graphic everywhere, but the information that originally went with it is not easy to find. I took a copy of a PDF from Caltech and created this with it:

I am sure there are better ways to do this, but I am playing with H5P in the Teaching & Learning Center at Clover Park.

Posted in education | 2 Comments

On the Banality of AI

I read an interesting post on Mastodon by the Algorithmic Sabotage Research Group (ASRG), “an ongoing, conspiratorial, aesthetico-political, practice-led research framework focused on the intersection of digital culture and information technology,” that referenced Hannah Arendt’s Eichmann in Jerusalem: A Report on the Banality of Evil. It argues that AI has the characteristics of Eichmann in that it shares “the inability to critique instructions, the lack of reflection on consequences, a commitment to the belief that a correct ordering is being carried out.” There are some interesting and necessary questions here despite the hyperbolic issues I have with this post. Eichmann was a sentient being and is responsible for his evil. AI is not “thoughtless” – it doesn’t think at all (unless you consider auto-complete thinking). I think the people and corporations behind some of the issues with AI need to be held responsible and assist in the toxic clean-up – they are the thoughtless ones. The sad thing is that this is not a new relationship we have with technology. In a fit of irony, while writing a presentation on AI, I decided to ask it about this post.

I asked: What six things should we do to address this statement:
“While #AI produces thoughtlessness, in the sense that political philosopher Hannah Arendt meant when interpreting the actions of Nazi war criminal Adolf Eichmann[3]; the inability to critique instructions, the lack of reflection on consequences, a commitment to the belief that a correct ordering is being carried out, “Algorithmic Sabotage” is intensified by the new forms of machinic knowing and the nascent becoming of an anti-worker and anti-community computational complex, making necessary a restructuring that reorients the focus from the miasma of AI and its concomitant toxic algorithmic operations of optimisation to developing techniques for redistributing social power, starting from a feminist standpoint and progress towards the implementation of prefigurative strategies of resistance, agency and refusal, to inhibit, slow down or reverse the emergence of harmful racialized practices of exteriorization and exclusion driven by algorithms.”

ChatGPT said:

“To address the complex issues raised in the statement about AI and its harmful social impacts, here are six actionable strategies:

1. Develop Ethical AI Frameworks and Governance

  • Establish AI Ethics Boards: Create diverse, multidisciplinary ethics boards to oversee AI development and ensure systems are aligned with moral and ethical standards, minimizing harm to marginalized communities.
  • Ethical AI Certification: Introduce a certification process for AI systems that meet rigorous ethical guidelines, similar to certifications in environmental sustainability, ensuring that technologies prioritize social good over optimization.

2. Ensure Algorithmic Transparency and Accountability

  • Mandatory Audits for Algorithms: Require companies and institutions deploying AI to conduct regular transparency audits. These audits should analyze algorithmic decision-making processes for biases, fairness, and discriminatory impacts.
  • Public Algorithm Registries: Governments and organizations should create public registries where the details of algorithms used in critical decision-making are published, including their intended purpose, training data, and performance metrics.

3. Promote Data Sovereignty and Individual Control

  • Data Ownership Rights: Advocate for laws and policies that grant individuals control over their personal data, ensuring they can opt out of systems that misuse or commodify their information for exclusionary or exploitative purposes.
  • Community Data Governance: Support the establishment of community-driven data trusts where local communities can govern how data is used, particularly in public services or social welfare systems impacted by AI.

4. Restructure AI to Prioritize Social and Economic Equity

  • Inclusive AI Development: Mandate that AI development teams include members from underrepresented and marginalized communities to ensure that systems are designed with a broad range of perspectives and needs in mind.
  • Address Economic Displacement: Develop policies that address the economic displacement caused by AI-driven automation, such as universal basic income (UBI) or retraining programs that prepare workers for roles that cannot be easily automated.

5. Create Spaces for Prefigurative Strategies of Resistance

  • Support Grassroots Activism: Encourage and fund grassroots movements that resist harmful AI practices, especially those focusing on anti-worker and anti-community algorithms. These movements can push for collective bargaining rights for AI-affected workers.
  • Technology Cooperatives: Foster the development of technology cooperatives where AI tools are built, governed, and maintained democratically by workers and users, creating an alternative to corporate-controlled AI systems.

6. Address and Combat Harmful Racialized and Gendered Practices

  • Anti-Discrimination Laws for AI: Implement strict anti-discrimination laws that prevent AI from perpetuating racial, gender, or class biases. These laws should hold organizations accountable for any discriminatory outputs from their AI systems.
  • Bias-Detection Tools: Support the development of tools and frameworks that automatically detect and mitigate biases in AI systems, ensuring that AI-driven decisions do not reinforce existing social inequities.

These six actions—focusing on ethical governance, transparency, equity, and community empowerment—can help reorient AI development away from harmful practices and towards more just and inclusive outcomes.”

I think these are interesting solutions, but they are solutions that should be in place for most technology, especially education technology. One could easily replace the word “AI” here with “Education Technology” and not be far from the mark. The problem with educational institutions is that they tend to let industries determine what tools we need and how we use them. We keep paying for tools, like Canvas for instance, that we know are not Section 508 compliant; we just shrug our shoulders and keep paying for tools that we know do harm. This is why we need to keep having the conversations about AI and demand higher standards and practices from all of our technology.

I asked it to create an image expressing these strategies: AI generated picture of a protest against AI with many misspelled words on signs.

I agree! Ethical algornpabbily now!!

Posted in AI | Tagged , , | Leave a comment

Old Course Development Process Docs

All of these are ancient and need to be updated and to have the WA Course Design checklist incorporated:

Posted in education | Leave a comment

AI and the Four-Legged Chicken Problem

I presented last week at Clover Park Technical College’s AI Institute. I talked about how education technologies, ideas, and processes are often incomplete and misunderstood because of the insistence on seeing the new technology through the old way of doing things, and trying to kludge it in. I used the analogy of the four-legged chicken, from some students I overheard discussing whether or not chickens had four legs because you apparently get four legs in a bucket of chicken. I wanted to update the problem of not having enough information, or not understanding how the parts are related to the whole, to make an effective evaluation of new technology.

Since it was the AI Institute, I wanted ChatGPT to make a picture of a four-legged chicken. I went back and forth with it, asking for a four-legged chicken, and kept getting any number of legs but four. This is typical of what it produced: An AI-generated picture of a chicken with seven legs.

I asked it why the chicken had seven legs instead of four, and it said “It seems the image generation process has misunderstood the request again, resulting in an incorrect number of legs. This could be due to the complexity of the task, as generating an animal with an unusual number of limbs can sometimes lead to unpredictable results.

I’ll make another attempt to create an image with exactly four legs, making sure to emphasize that specific detail. Let me try once more.” An AI-generated image of a chicken with three legs.

I was totally befuddled as to why it just couldn’t produce a four-legged chicken. I mentioned it in my presentation as something that AI didn’t seem capable of doing, and Meagan Sanders, who also presented at the AI Institute, casually sent me a picture of a chicken with four legs: An AI-generated chicken with four legs.

So I asked myself, “Why can she get a chicken with four legs and I can’t?” And then it struck me: in working with ESL students, constructions (and pronunciations) of phrases like “two-legged” were sometimes problematic. Maybe ChatGPT has the same problem. I then asked it for a “chicken with four legs” and got this: Another AI-generated chicken with four legs.

I have not been able to consistently reproduce this as an experiment. With “four-legged humans” I often get what looks like two figures super-imposed on one another giving me a human with four legs and four arms.

I think all of this points out two things: 1) I am spending too much time on four-legged chickens. 2) We are not yet ready to trust AI with the nuclear codes.

Posted in AI | Tagged , , , | Leave a comment

Some Resources from the AI in Education Conference, Orlando 2024

Conference logo. I was at the Teaching and Learning with AI 2024 Conference in Orlando, Florida on July 22nd-24th. I am posting this mostly for members of our team who stayed behind and gallantly supported the Teaching & Learning Center back at Clover Park Technical College. We went to this conference last year and got so much out of it, made so many great connections, that I am thrilled we were able to make it back (despite the ferocious humidity in Orlando). I would recommend this conference to anyone: teachers, admins, K-12 or college, instructional designers, folks in the OER space, or anyone having any interest not only in the future of ed tech but of work in general. I want to share a few books, resources, tools, and websites that I know I will come back to or use with our OER faculty (and others) in the future, and by future I mean this Fall!

This is a list of just a few resources that I think we should examine at Clover Park for our professional development. I went to many more presentations and made a lot of great connections with people and ideas, but these are some highlights for the Teaching & Learning Center:

  • Artificial Intelligence (AI). (n.d.) Missouri Online.
    This site shows what the university system in Missouri believes are the most important points to hit in their faculty professional development around AI: “Generative AI has rapidly expanded and significantly impacted higher education. Use the links below to explore the available resources for harnessing the power of AI in your teaching and learning activities.”
  • Artificial Intelligence and Information Literacy. (n.d.) Center for Teaching and Learning Transformation. University of Maryland. Online module.
    This, or something like it edited for the needs of a technical college, would be useful for both faculty and students. I downloaded the module and put it on our server, but the module was built using some commercial tool add-ons and videos hosted on UM’s server. I would like to build something like this that could be put out into the Creative Commons and Canvas Commons without the proprietary distractions. The module is live on UM’s server.
  • Conrad, Kathryn and Kaperman, Sean (n.d.) Critical AI Literacy for Educators. Padlet.
    Lots of resources in this curated list. I will be mining this for a while!
  • Mollick, Ethan and Mollick, Lilach (2023) Assigning AI: Seven Approaches for Students with Prompts. Wharton School of the University of Pennsylvania & Wharton Interactive.
    This extensive article is broken down into seven sections representing different roles that AI can play with a student. It is a useful introduction to creating prompts and will lead you to think about other possible roles and uses of AI.
  • Teaching with AI @ Auburn. (n.d.) Biggio Center. Auburn University. Canvas Course.
    “This self-paced, asynchronous online course has an estimated total duration of approximately 8-10 hours.” I like the topics covered and how it is sequenced. The website gives a good outline of the course, and it is a good model to compare against our own courses here.
  • Yee, Kevin et al. (2023) ChatGPT Assignments to Use in Your Classroom Today. FCTL Press. Orlando, Florida.
    This book is a useful resource for teachers. I would recommend this to any teacher who wanted to experiment with a few simple assignments before taking the plunge into AI. 
  • Yee, Kevin et al. (2023) AI Hacks for Educators. FCTL Press. Orlando, Florida.
    This book is focused not on students but on helping teachers save time and streamline their work using AI for things like writing curriculum, outlining a course, updating syllabi, creating rubrics, etc. This is important because the number one challenge that most faculty report is lack of time.

Again, these are just a few resources for our team around professional development. I will add more to this later as I digest it all. If you have some favorite resources for professional development around AI, feel free to post them in a comment below.

Note: One of the short courses I would recommend for faculty professional development is the AI for Education Course, a two-hour course that is a quick introduction, gets faculty using the tools, and provides a badge (microcredential).

Posted in AI | Tagged , , , , | Leave a comment

AI, Problem-Solving, and Visual Thinking

Photo of Picasso from Apple Computer's "Think Different" campaign. I have been looking at AI as a thinking tool. Not as something to generate content (which I think it is essentially lousy at), but as an aide to thinking. I have heard a number of people at conferences and elsewhere talk about how AI is like a calculator and how calculators were supposed to be the end of math. What calculators really did was get rid of a lot of the grunt work with the pencil and allow people to think about steps and processes instead. It is the difference between writing with a pen in a notebook and word processing. Back in the 90s, an uncle of mine described the word processor as the chainsaw of the writing world. (He grew up in Minnesota in the 30s and spent a lot of time chopping wood with an axe.) Imagine if someone were given a chainsaw and started using it as an axe. That person would chuck the new tool and go back to using the old – and they should. We are in a similar place with AI: I don’t think we are using it right, at least, not quite yet.

As an example, I asked AI to generate guitar tablature for the tune “Happy Birthday.” It produced some tab. I then asked it to rewrite it in the form of one of Bach’s two-part inventions. And it did. It even explained the voices and the process. The problem was that it did not know the tune to “Happy Birthday” – all it produced was gibberish. Its original tune for “Happy Birthday” was wrong, and the two-part invention was just as bad. What was absolutely spot-on was the explanation of the process of creating a two-part invention. If I had the time, I could have fed it the corrected tune, added some chords, and fixed it. I have countless examples of AI getting fundamentals and facts wrong: if I put in a chapter of an OER textbook and ask it to create 20 multiple-choice questions, it may ask “What color is the table header on page 25?” if it feels the chapter is a little thin. So, in my experience, it is good at processes, not facts.

Picasso once commented on computers: “But they are useless. They can only give you answers.” So when you are working with AI, getting the “answers” is really the wrong way to use it, although in some limited ways, it can do that since it is also trained on Wikipedia. What it can really do well is to understand and express information through processes. I started thinking about what I do to think about ideas and what problem-solving methods I use. In my ABE and English classes, I would have students use Rogerian and Toulmin methods to analyze arguments. I would also use alternative problem-solving techniques such as Synectics. In short, I am interested in anything that helps my students analyze arguments apart from classical debate that creates “right and wrong” and “winners and losers.” I think we are all too familiar with why these are necessary skills to teach. Jennifer Gonzalez has a great list of alternatives to traditional debate. She focuses on speaking and listening which are lost arts. Maybe some of these non-traditional methods could be used in AI to help students think about problems in new ways.

While revisiting all of this, I ran into Kye Gomez’s “Tree of Thought” prompts. If you have not heard of this, it is an approach to problem-solving that aims to map out the different paths and potential solutions to a problem, structured similarly to a decision tree. This method is grounded in the principles of cognitive science and systems thinking, where the emphasis is on understanding and navigating the complexity of thought processes by visualizing them as interconnected branches, each representing different possible outcomes and actions. This approach is particularly relevant in complex problem-solving scenarios where traditional linear thinking may fall short.

The method involves breaking down a problem into its core components and exploring each branch’s possible decisions and outcomes. This helps in understanding the problem from multiple perspectives and encourages a comprehensive analysis of potential solutions. By visualizing thought processes as a tree, individuals can systematically evaluate the implications of each decision, leading to more informed and strategic choices.

Here are some key aspects of the “Tree of Thought” method:

  1. Pattern Recognition: Recognizing and organizing different types of information and knowledge to form coherent patterns.
  2. Iterative Learning: Continuously refining and adapting thoughts based on new information and feedback.
  3. Non-Linear Thinking: Moving away from linear, step-by-step problem-solving approaches to more dynamic and interconnected thinking.
  4. Knowledge Flow: Understanding that knowledge is not static but flows and evolves, requiring flexible thinking structures.
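The branch-and-prune loop behind these aspects can be sketched in code. What follows is a minimal, hypothetical sketch (not Gomez’s actual implementation): the `propose` and `score` functions are stubs standing in for calls to a language model, so only the control flow – expand candidate thoughts, rate them, drop weak branches the way a wrong “expert” leaves the table – is meant literally.

```python
# Minimal Tree-of-Thought-style search sketch.
# `propose` and `score` are stubs; a real version would prompt an LLM
# for candidate next steps and for a rating of each branch.

from dataclasses import dataclass, field

@dataclass
class Thought:
    text: str
    depth: int
    children: list = field(default_factory=list)

def propose(parent: Thought) -> list[str]:
    # Stub: a real version asks the model for candidate next steps.
    return [f"{parent.text} -> option {i}" for i in range(1, 3)]

def score(text: str) -> float:
    # Stub: a real version asks the model to rate each branch.
    return -len(text)  # arbitrary heuristic for the demo

def expand(root: Thought, max_depth: int = 2, beam: int = 1) -> Thought:
    frontier = [root]
    for _ in range(max_depth):
        next_frontier = []
        for node in frontier:
            candidates = [Thought(t, node.depth + 1) for t in propose(node)]
            candidates.sort(key=lambda c: score(c.text), reverse=True)
            kept = candidates[:beam]  # prune weak branches ("experts leave")
            node.children.extend(kept)
            next_frontier.extend(kept)
        frontier = next_frontier
    return root

tree = expand(Thought("What links engagement and retention?", 0))
```

The interesting design choice is the `beam` parameter: with a beam of one this collapses into ordinary chain-of-thought, while a wider beam keeps several contending “voices” alive at each step.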

In the context of problem-solving, the “Tree of Thought” method serves as a powerful tool for navigating complexity and making informed decisions based on a holistic view of the problem space. Mind you, this is meant to train the AI to think, but teaching students these prompt-engineering techniques can bring critical thinking to a whole new level. What I like about it is that it can replace the “debate” kind of thinking and allow contending voices around a problem to come to some kind of agreement about what the problem might be, who or what is affected by it, and what solutions might arise from shared understandings rather than the winner/loser paradigm of traditional debates. For students, imagine using it to test Stephen Downes’ Guide to the Logical Fallacies, which I have used with my English 101 students. One could create a Tree of Thought where two of the “experts” engage in one or more of the logical fallacies and one does not. We could also add something like Nikol’s Thirteen Problem Solving Models to the GPT’s knowledge bank as well.

In this case, I went into ChatGPT-4o and put in the prompt: Imagine three different experts are answering this question. All experts will write down 1 step of their thinking, then share it with the group. Then all experts will go on to the next step, etc. If any expert realises they’re wrong at any point then they leave. The question is “What is the connection between student engagement and retention?” It gave me three well-thought-out perspectives and some shared conclusions. I thought it was really useful, but it seemed dense and complex. I wanted to separate out the arguments, so I asked it if it could put the arguments into a concept map. It said it could and gave me an outline that I could use to build a concept map. That is fine, but it occurred to me that ChatGPT can output code, so I asked for the concept map to be put into HTML5 so I could visualize the arguments. It did it. I next asked for the different levels in the map to be color-coded, and it did that as well.
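For the curious, the outline-to-HTML5 step is simple enough to sketch by hand. This is a hypothetical sketch, not what ChatGPT actually emitted: the outline content and color palette are invented for illustration, but it shows the idea of walking a nested outline and color-coding each level of the concept map.

```python
# Hypothetical sketch: turn a nested outline (like the one ChatGPT
# returned) into a standalone HTML5 page, color-coded by depth.

outline = {
    "Student engagement and retention": {
        "Behavioral engagement": {"Attendance": {}, "Participation": {}},
        "Emotional engagement": {"Belonging": {}},
    }
}

LEVEL_COLORS = ["#1f77b4", "#2ca02c", "#ff7f0e", "#d62728"]  # one per depth

def to_html(node: dict, depth: int = 0) -> str:
    # Recursively render the outline as nested <ul>/<li> elements,
    # styling each level with its own color.
    items = []
    for label, children in node.items():
        color = LEVEL_COLORS[depth % len(LEVEL_COLORS)]
        items.append(
            f'<li style="color:{color}">{label}'
            + (f"<ul>{to_html(children, depth + 1)}</ul>" if children else "")
            + "</li>"
        )
    return "".join(items)

page = f"<!DOCTYPE html><html><body><ul>{to_html(outline)}</ul></body></html>"
```

Opening `page` in a browser gives a crude but readable color-coded map; a nicer version could swap the nested lists for an SVG or a JavaScript mind-map library.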

a section of a concept map

I started thinking about this: my English students could do the same thing with any of their readings. Any information, article, or book can have a lens of information visualization applied to it (anything in the Periodic Table of Visualization Methods, for instance), allowing students to get a toe-hold into different ideas in ways that they understand best.

Posted in AI | Tagged , , , | Leave a comment

Old School Machine Learning and A.I.

We have been using ML and AI around here from jump street. “Watch, remember, repeat” is the “pedagogy” in all the commercial education platforms!

Posted in education | Leave a comment

CFP: AI Institute at Clover Park Technical College

Clover Park Technical College's mascot, Simon Robot. Clover Park Technical College has a CFP for its 2024 AI Institute, which examines the innovative and ethical use of AI in higher education.

This event will take place on Fri., Aug. 9th and Sat., Aug., 10th, 2024. We invite educators, researchers, instructional designers, and forward-thinking practitioners to contribute their insights and expertise.

This event is free and virtual for all. It was previously known as the OER Faculty Institute. We at first envisioned these institutes to focus on community and technical colleges, but there was so much interest from a wide variety of institutions that we felt everyone would benefit if it were truly open.

CFP form and Registration

Posted in AI | Tagged , , , , , , , | Leave a comment

A Short Annotated Reading List for AI Ethics and Policy

Robot reading a journal.

Fatuous illustration by Dall-E.

This is a reading list of essays, resources, and fully fledged policies at other institutions for AI and ethics. It is meant to provide a background for those interested in developing an AI policy for their courses or for their institution. I think it is important to understand the risks of using AI as well as the potential benefits. By understanding the two together, I think we will see that developing ways to mitigate the dangers is absolutely critical. The purpose of this list is strictly utilitarian and not exhaustively academic. I am interested in striking a balance between student care and innovation, and I think the readings below reflect that balance. I have written more on the local college policy issues, as well as a number of posts on the ethical issues with AI and with education tech in general, elsewhere on this blog.

AI Ethics Guidelines Global Inventory. (2021). Algorithm Watch.
This is useful for looking at issues with AI from around the world. Looking at other policies allows us to view how other organizations and governments have sought to solve problems: problems we may not have yet considered.

Artificial Intelligence Policies: Guidelines and Considerations. (2024). Learning Innovation and Lifetime Education. Duke University.
I included this as a mature, thought-out plan that seems to account for a number of approaches that faculty might need depending on what and how they are teaching.

Atwell, Sue. (2024). From Principles to Practice: Taking a whole institution approach to developing your Artificial Intelligence Operational Plan. National Center for AI.
This article is useful for mapping out how to bring in all of the users and stakeholders into the planning process. Gathering the local voices is important for cultivating buy-in to any policy. It is also an inclusive, shared-governance way of helping faculty and staff evolve their thinking about new technology.

Antoniak, Maria. (2023). Using Large Language Models With Care: How to be mindful of current risks when using chatbots and writing assistants. Medium.
The list of ten risks are useful in discussions about why a policy on AI is needed. This is a good introductory essay on ethical issues around AI.

Bender, Emily M. et al. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? ACM Digital Library. Association for Computing Machinery.
This is a thorough and close look at the ethical issues around AI and what to do about it. This is a seminal paper on the topic and is a “must read.”

Brandon, Esther et al. (2023). Cross-Campus Approaches to Building a Generative AI Policy. Educause Review.
Again, leadership in developing policies should always ask “who have we included in the policy-making procedures?” It is too easy to exclude voices in the name of expediency.

Cardona, Miguel A. et al. (2023). Artificial Intelligence and the Future of Teaching and Learning.  Office of Education Technology. Department of Education.
This is a good 30,000-foot view of AI in education and includes a balance of opportunities, challenges, and risks.

Eaton, Lance. (2024). Syllabi Policies for AI Generative Tools. Google Form.
A Google form and spreadsheet of hundreds of examples of course policies on AI. “This resource is created…for the purposes of sharing and helping other instructors see the range of policies available by other educators to help in the development of their own for navigating AI-Generative Tools…”

Ethical AI for Teaching and Learning. (n.d.). Center for Teaching Innovation. Cornell University.
A thoughtful assessment of the issues: “Building literacy in Generative AI includes addressing ethics, privacy, and equity with intention. There are many open questions, including legal questions, regarding the ethical design, development, use, and evaluation of generative AI in teaching and learning. While generative AI may potentially be powerfully useful, concerns and sensitivities surround a number of key issues…”

Ethics guidelines for trustworthy AI. (2019). European Commission.
Again, looking at AI and ethics through the lens of other cultures than the U.S.A can help us be aware of issues that our own biases may not allow us to see. “The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy.” 

Gašević, D., Siemens, G., & Sadiq, S. (2023). Empowering learners for the age of artificial intelligence. Computers & Education: Artificial Intelligence, 4, 100130.
This paper includes a discussion about how generative AI’s weaknesses can be turned into possible strengths given the proper training of teachers and students in its use. 

K-12 Generative AI Readiness Checklist. (2023). The Council of the Great City Schools.
This is an important document because it shows the stark difference between how K-12 adopts technology vs. Higher Ed. It represents a thoughtful investigation and method for gathering the voices and concerns around the adoption of new technology. We need a version of this for colleges.

U.S. University Policies on ChatGPT. (2023). Scribbr. Google Spreadsheet.
We have the Eaton spreadsheet (above) for class policies; this is a useful collection of college policies. “This spreadsheet details the AI policies of 100 American universities. It is current through June 2023 and updated as circumstances change.”

If you have a resource that you think we should be looking at in addition to these, feel free to add it by commenting below or dropping me a line. Thanks!

Posted in AI, education | Tagged , , , , | Leave a comment

Multi-Modality: Creating a Student Centric, Flexible Choice Instruction/Learning Model

Century College logo. I am at Achieving the Dream’s Dream 24 conference with a team from Clover Park Technical College in Orlando, FL. I am interested in looking at HyFlex models from the instructional design perspective. These are my notes for colleagues – they are impressionistic and represent my own interests as an instructional designer – your results may vary, contents may settle during shipping.

From the program:

Century College’s Multi-Modality Model addresses student needs for flexibility and high-impact learning, increasing access and equity for all students. A multi-modality classroom is where a faculty member uses three modalities (face-to-face, online synchronous, and online asynchronous) to teach a course, and the key component is student choice, where students can choose any modality on any given day throughout the semester to fit their schedule and learning style preference. Resulting from lessons learned during and after the pandemic, college leadership employed a shared governance structure where faculty and administration worked collaboratively, creating an innovative model that includes professional development, student supports, state-of-the-art technology, and more! A robust data-driven assessment showed strong student outcomes, narrowing of equity gaps, and advancement of teaching and learning. Participants will explore how this model can be successfully implemented at their own institutions.

Century College is a career and technical college in Minnesota: 11,000 students; 169 degrees, diplomas, and certificates; 45% students of color; and 50% first in family to attend college.

They invested $4.4 million in 103 classrooms, 10 labs, and 21 student gathering spaces.

They discussed institutional buy-in. Multimodal classes provide flexibility to students.

One of the first speakers was the faculty union leader. Faculty are 60-70% full-time, which makes buy-in all the more important. There is a faculty shared-governance process, and communication was the key to faculty buy-in. They had trouble implementing online learning in the past because of the lack of support and faculty buy-in; they needed to provide the tools and support to make it work.

This started with a student, a mother who couldn’t make it to class, for whom an instructor used Adobe Connect to live-stream the course. It grew from students’ need for flexibility.

Benefits: increased access, increased enrollments, and student success.
Challenges: support for faculty and workload, support for students, high-quality captioned videos, communication and logistics.

Structure:

  • Student choice among three modalities
  • Faculty training
  • Classroom tech support
  • Resources to support students
  • Assessment for improvement

Planning for implementation was discussed, with a two-year timeline. Faculty get paid for professional development, and faculty have to go through the training in order to teach. The teaching and learning center, student workers, and IT work together.

They contracted with a leader in the HyFlex model (Brian Beatty) from San Francisco State. They took what they learned from him and built their own model after it.

Technology:

  • Front camera
  • Speakers
  • On-call IT support
  • Tech assistant
  • Room microphone
  • Podium microphone
  • Projector
  • Assisted hearing device
  • Wireless lavalier mic

This took a lot of training to get it to work for faculty.

Changes in course design: the faculty member said “go paperless” – use the LMS for everything, even if you are teaching face-to-face. The class functions as one community – you are not teaching three classes simultaneously. They use tools like Perusall, a social annotation tool. Clear, transparent assessments and rubrics provided ahead of time ensure that all three modalities are assessed equitably.

Students enrolled in multi-modality sections have higher pass rates than hybrid and online sections of the same course, and pass rates comparable to face-to-face sections. It helped narrow the equity gaps as well.

To implement:

Include research and a best-practices review, collaborative discussions between faculty and admin, student feedback, a model based on student choice, and a willingness to invest.

__________________________

I am interested in this because of COVID and also because we did this 12 years ago at Tacoma Community College for the Health Information Management courses. Those students included people already working in the health field; some were parents or employed elsewhere. We used a very simple model: the LMS in conjunction with Elluminate (like Zoom) and a live phone connection.

Posted in education | Tagged , , , , , , | Leave a comment