Course Rubrics: Illinois’ Quality Online Course Initiative

I have not used this rubric to evaluate a course, but I have read about it, and no survey of rubrics for course development would be complete without looking at this one. The Illinois Online Network and the Illinois Virtual Campus developed their own quality online course rubric and evaluation system. Their goal “is to help colleges and universities to improve accountability of their online courses.”

What problems does this rubric solve?
According to their website, the main objectives of this project were to:

  • create a useful evaluation tool (rubric) that can help faculty develop quality online courses
  • identify “best practices” in online courses
  • recognize faculty, programs, and institutions that are creating quality online courses

How was it created?
Again, according to the site: “The first step of our process was to brainstorm criteria for what makes a quality online course. As we came up with our ideas, we noticed several patterns and overlapping information. Therefore, we decided to chunk the information in to six categories.” I appreciate that this was an in-house project, but I would like to know more about any research that went into it besides “brainstorming.” I am sure that faculty with experience and expertise contributed to this project, but knowing how they identified “best practices” is important. There are plenty of things teachers once did as “best practices” that we just don’t do anymore (e.g., physical pain as a learning motivator).

What does the rubric assess?
After the brainstorming process, each category was broken down into topics and finally into individual criteria. The six categories are:

  1. Instructional Design
    The instructional design section is fairly basic. It defines learning as a “transfer of knowledge and skills,” which is a bit dated.
  2. Communication, Interaction, and Collaboration
    I am glad there is communication, interaction, and collaboration in this rubric. They could go a little further and think about community building beyond group work.
  3. Student Evaluation & Assessment
    As expected, assessments are aligned with learning objectives. Some items in this category belong under “syllabus boilerplate,” such as a statement of FERPA rules.
  4. Learner Support and Resources
    This is a critical area as those in the commercial MOOC world found out. Supporting students and building that support into the curriculum is crucial to student success in online learning. The rubric assesses a bare minimum of support.
  5. Web Design
    With its concerns about “scrolling,” “pop-up windows,” and “frames,” the rubric’s gray 1998 roots are showing. This section needs a refresh with some Universal Design for Learning.
  6. Course Evaluation
    “Opportunities for learner feedback throughout the course on issues surrounding the course’s physical structure (e.g. spelling mistakes, navigation, dead links, etc.) are provided.” – likewise for Instruction and Content.

Weaknesses of the Rubric
The one that stands out as a surprise is the lack of discussion about accessibility and universal design. I would like to see this rubric refreshed and backed up with research. There is a page on their site that has links marked “research” and “resources” but that seems to be an unimplemented plan. The pedagogy around the rubric needs updating as well.

Strengths of the Rubric
There are some things to like here, though. First, it is used for course development, not merely as a “fix-it” tool. It is also used to recognize exemplary courses and faculty. This is one of the most important uses of rubrics on a college campus: the discussion of a rubric, an adoption plan, and implementation can bring faculty together to develop quality online programs from within the culture of the campus. It becomes another hub for connecting faculty with one another. Finally, it is openly licensed, which means other campuses can benefit from this work as they customize a process for the needs of their own teaching and learning communities.

Posted in assessment

Course Rubrics: Quality Matters

I have a lot of experience with this rubric as a teacher, instructional designer, and administrator. I have worked with the Quality Matters rubric in some form or another since the early 2000s. I was certified as a reviewer through the California State University Chancellor’s Office and have reviewed numerous courses using the rubric. While there are many things to like about the rubric, I am a big fan of the early organization. One of the most useful things Quality Matters has accomplished is its research library, which gathers research on the standards of the rubric, such as interaction.

History
The QM rubric has certainly been influential. Before fiscal year 2007, during the grant period, the rubric and research were freely available to the public (after that, QM introduced a business model of fees and subscriptions to support the research and training). According to their site, Quality Matters began with a small group of colleagues in the MarylandOnline, Inc. (MOL) consortium who were trying to solve a common problem among institutions: how do we measure and guarantee the quality of a course? In 2003, they applied for a Fund for the Improvement of Postsecondary Education (FIPSE) grant from the U.S. Department of Education to create the Quality Matters program.

What does the rubric assess?
The rubric addresses eight research-based standards for successful, quality online courses. They include:

  1. Course Overview and Introduction
  2. Learning Objectives (Competencies)
  3. Assessment and Measurement
  4. Instructional Materials
  5. Course Activities and Learner Interaction
  6. Course Technology
  7. Learner Support
  8. Accessibility and Usability

Strengths of QM
I have seen this rubric in action. It has its shortcomings, but one of the most important things it does is get an instructor to think about what happens in an online course and how courses (online, hybrid, and face-to-face) can be successful. The rubric gets faculty to think about how they communicate the goals and expectations of the course to their students (standards 1 and 2). Sometimes the disconnect in a course is that an objective isn’t actually assessed. I can personally attest that faculty with little to no online experience have been able to use the QM rubric to build a successful online course. The tool helps faculty understand issues in online learning that they may not have considered before, such as student interaction and the accessibility of course materials.

I have worked with a few instructors who, after going through the Quality Matters training, not only created an excellent online course, but applied the principles to improve their face-to-face classes as well. It is a great opportunity to get instructors thinking about teaching and learning in general rather than their subject in particular.

This points to one of the things this rubric does well: it is not so much the rubric itself as the process. Some of the best work on a college campus happens when you bring faculty together to review one another’s courses and talk about what worked for them and what didn’t. Faculty are often very creative in meeting the outcomes of the standards in ways that I am sure the QM folks could not have imagined.

I like the caveat included in the rubric about accessibility (standard 8): “Meeting QM’s accessibility Standards does not guarantee or imply that specific country/federal/state/local accessibility regulations are met. Consult with an accessibility specialist to ensure that accessibility regulations are met.” The idea that a course could be declared accessible by checking off five items in a rubric is a bit ridiculous.

Weaknesses of QM
I think the financial model is an issue. I know a lot of faculty who are enthusiastic about QM because it pays faculty to review courses out of the fees it charges institutions. I understand and value that recognition of faculty expertise and time, yet we typically don’t pay faculty extra to observe and review face-to-face courses. For small, cash-strapped institutions, subscriptions and fees are a burden.

The QM rubric seems weighted towards objectives and assessment and, in my experience, does not allow for adequate assessment of teacher-student or student-student interaction, community building, fostering student agency, or opportunities for students to guide the learning experience. I have written before on this blog about my conflicted relationship with course objectives. With the rise of online learning over the last 20 years, we are learning a lot about how students learn (e.g., Connectivism), and as much as I appreciate all of the research that has gone into this rubric, it needs a refresh. Standards 2-6 all have “learning objectives and competencies” as the first concern. While learning objectives have their uses, they are often more important to the institution’s ability to (supposedly) measure them than to student learning. Sally Stewart did some interesting research around these questions (see The Trouble with Learning Outcomes). And I get it: students need to know what will be covered in their Psych 100 class, but the objectives don’t teach the class. Often, the most important things that happen in a classroom are the things that can’t be measured, and we will not see a standard for “opportunities for transformative learning experiences” or creativity anytime soon. I am not above negotiating course objectives when students express a need or desire, or when we need to change the direction of a class due to current events – that kind of flexibility and openness is valuable to learners. But how do we measure it? Should we? Doesn’t that kind of teaching style contribute to the success of the students?

Additionally, if rubrics are going to be used to educate faculty on how online learning works, then they need to measure the openness of a course: does the instructor use OER or open textbooks? Accessibility should also account for economic accessibility.

QM Rubric as a Facilitation Tool
Overall, the rubric is worth using as a starting point: the opportunities for faculty to work together on courses and programs are invaluable. It is a useful tool for introducing face-to-face faculty to online learning and for changing courses that are one-way information dumps into something more engaging. Personally, I think this can happen without the cost, and the tools institutions use should be open (at least Creative Commons licensed).

 

Posted in assessment

Course Rubrics: OEI

The first rubric I will look at in this seven-part series on course evaluation rubrics is the California Community Colleges’ Online Education Initiative (OEI) rubric.

Who wrote it?
This rubric was created by the California Community College Chancellor’s Office to provide a checklist demonstrating that the online courses offered through the Online Education Initiative align with state standards; the accreditor ACCJC’s Guide to Evaluating DE; and national standards (iNACOL). After reviewing the rubric, I think they did a good job of meeting, and even exceeding, those standards.

There are a couple of things I liked about the OEI rubric right away – it recognizes established standards (expressed in the rubric) and its primary use is self-evaluation. The rubric is supported by a useful website from the California Community Colleges Chancellor’s Office called “Online Course Design Standards.” The page explains how the rubric came about, and links at the bottom provide the latest two versions of the rubric along with training materials. Most importantly, it is all offered to the public under an open license (CC BY, Creative Commons’ most flexible license), making it free to use, unlike other options such as “Quality Matters.” In their discussion of the license, they ask for feedback, which I take to mean they are aware of the organic nature of our current understanding of how online education works.

What problems does this solve?
According to the site, “The Rubric is intended to establish standards relating to course design, interaction and collaboration, assessment, learner support, and accessibility in order to ensure the provision of a high quality learning environment that promotes student success and conforms to existing regulations.”

This has the potential to solve a number of issues with online courses, including communicating to instructors who are new to online learning some of the differences between online and face-to-face teaching. In my experience, an instructor’s first impulse is to recreate online, as much as possible, what happens in the face-to-face class, which we know doesn’t work.

“In order for a course to be offered as a part of the Online Education Initiative (OEI) course exchange, it must meet established standards relating to course design, instruction, and accessibility that are intended to promote a quality learning environment that conforms to existing regulations. Prior to the submission of a course for OEI consideration, it is helpful for the faculty member to review these guidelines and conduct a self-evaluation. The outcome of this self-evaluation is a component of the OEI Course Application process.” I think the self-evaluation component of this process is absolutely critical no matter which rubric you are using, mainly because the individual instructor is often the only person who understands the degree of interactivity a course might have. Each instructor has a different way of implementing successful online teaching techniques, and the evidence of that success often has to be located first by the instructor. Instructors can use the rubric not only to fix problems with a less successful course but also to identify why a course, or aspects of it, succeed.

What is in it?
The rubric has five sections, each with several elements. In the first three sections (A, B, and C), elements are assessed as Incomplete, Aligned, or Additional Exemplary Elements (a rating that recognizes “design choices that further enhance the student experience in the online learning environment”). Sections D and E are marked either Incomplete or Aligned because they address elements that are required by law to be present.

Section A: Content Presentation
“The 13 elements for quality course design in this section address how content is
organized and accessed in the course management system. Key elements include
course navigation, learning objectives, and access to student support information.”

Section B: Interaction
“The 8 elements in this section address instructor initiated and student initiated
communication. Key elements of quality course design covered in this section include
regular effective contact, student-to-student collaboration, and communication activities
that build a sense of community among online learners.”

Section C: Assessment
“The 8 elements in this section address the variety and effectiveness of assessments
within the course. Key elements include the alignment of objectives and assessments,
the clarity of instructions for completing assessments, and evidence of timely and regular
feedback.”

Section D: Accessibility
“The 23 elements in this section are reviewed to determine if a student using assistive
technologies will be able to access course content as required by Section 508 of the
Rehabilitation Act of 1973 (also known as ‘508 Compliance’).”

Section E: Institutional Accessibility Concerns
“The 4 elements in this section cover accessibility of external tools and third-party
content. While the accessibility elements in Section D are primarily under the control of
faculty when developing a course, the elements in Section E may be outside the purview
of the instructor which would require additional consideration or intervention at the
institutional level.”
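Pulling the section descriptions together, the rubric's shape could be modeled for a simple self-evaluation script like this. This is a sketch of my own: the section names and element counts come from the rubric, but the function names and data layout are invented.

```python
# A rough model (my own, not part of the OEI rubric) of the rubric's
# structure: five sections, each with a fixed number of elements, where
# only sections A-C allow the "Additional Exemplary Elements" rating.

SECTIONS = {
    # letter: (title, element count, exemplary rating allowed)
    "A": ("Content Presentation", 13, True),
    "B": ("Interaction", 8, True),
    "C": ("Assessment", 8, True),
    "D": ("Accessibility", 23, False),
    "E": ("Institutional Accessibility Concerns", 4, False),
}

def allowed_ratings(section):
    """Return the ratings an element in the given section may receive."""
    _title, _count, exemplary = SECTIONS[section]
    ratings = ["Incomplete", "Aligned"]
    if exemplary:
        ratings.append("Additional Exemplary Elements")
    return ratings

def total_elements():
    """Total number of elements a reviewer must examine (13+8+8+23+4 = 56)."""
    return sum(count for _title, count, _exemplary in SECTIONS.values())
```

Even this toy version makes one thing visible: over half of the 56 elements (the 27 in sections D and E) concern accessibility, which says a lot about the rubric's priorities.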

I would estimate that a 5-unit online English 101 course could be reviewed by an experienced course reviewer using this rubric in about 6 to 8 hours. It is thorough, addresses the standards it claims to, and recognizes some of the difficulties in evaluating accessibility. There are opportunities in online course evaluation to create a larger project outside of a corporate model such as Quality Matters. The Creative Commons license on the OEI rubric means that others can not only use the rubric but also share research data, such as changes in student success and completion rates for courses that were reviewed versus those that were not. The OEI rubric is an important step in that direction.

I will be looking for more research into the successful use of this rubric.

If you have any additional information or experiences with this rubric, please feel free to post a comment below.

Posted in education

Evaluating Online Courses: a prelude

This is the first of a seven-part series examining how different institutions evaluate online courses. Each of the seven postings after this will feature a different institution’s rubric for evaluating online courses. I am specifically interested in what institutions have created, adapted, or purchased, and why. Each posting will attempt to look at:

  • What problems does this rubric solve?
  • How is this solution tied into the campus community?
  • What is unique to this campus culture that is addressed by this rubric?

One of the issues I have had as an instructor, instructional designer, and administrator is that the evaluation of online courses tends to be an afterthought – something you do only after a course is created and built. It is important that the specifications for online courses are communicated to faculty before the design process begins. This assumes that there is a “design process,” which is not always the case. For rubrics to work effectively in online course evaluation, they must be part of the course development process. This is not as intuitive as it sounds, because the historical processes for getting online courses created can differ from department to department within a college.

All of this raises the question: why use a rubric for evaluating courses at all? We don’t have rubrics for evaluating face-to-face courses, so why do we need them for online courses? First of all, instructors and students have a lifetime of experience in face-to-face classrooms, and there are accepted standards for what constitutes a face-to-face classroom and how it should work. We do not have that yet in online learning, despite the decades colleges have been offering online classes. With some notable exceptions, early online courses were concerned with the transfer of information, which is not the same as teaching and learning.

Many instructors and institutions were not clear on how learning occurs online. Since the 90s, there has been a lot of research on how people teach and learn online successfully. Interestingly enough, that research follows the same lines as the research on effective teaching in face-to-face courses; I am thinking here of Chickering and Ehrmann’s online follow-up to Chickering and Gamson’s research on face-to-face undergraduate success. Rubrics for evaluating online courses can be a method for communicating the expectations and best practices of online learning and teaching to teachers who are new to, or have little experience with, teaching online.

All of that said, these rubrics are not a teacher evaluation tool. One can build a course to the specifications of a rubric and still have an unsuccessful course. Its implementation and success will always rest in the teacher’s hands.

And not all rubrics are created equal. Some are very basic; some try to evaluate things that are difficult to account for in a rubric: cultural inclusivity, for instance, may best be accounted for by formal peer and student evaluations. Not all of the rubrics sufficiently stress the things we now know are critical to online course success, such as community, or the latest research around Connectivism and Open Pedagogy. Despite the research into successful online learning, there can still be great disparities at a university between its online and face-to-face courses. One cause is that, despite the widespread implementation of various solutions, each institution is unique – each is embedded in a community with different needs. The questions I hope to answer this summer are: how do we account for this uniqueness while applying a solution from outside the institution? What are we really doing when we adopt a course evaluation rubric? What are the concerns or opportunities when adopting one?

My goal at the end of these postings will be to present a meta-rubric for evaluating course evaluation rubrics (something along the lines of Dr. Mollinix at Monmouth University) and to develop an adoption plan for instructors, departments, or institutions that will address the needs of the learning community, the professional development of the instructors, and the goals of the departments or institutions.

The rubrics I am evaluating include:

  1. OEI: Course Design Rubric for the Online Education Initiative, (California Community College’s Chancellor’s Office)
  2. Quality Matters
  3. Quality Online Course Initiative Rubric and Checklist (Illinois Online Network)
  4. Online Course Evaluation Project (OCEP), (Monterey Institute for Technology and Education)
  5. Quality Assurance Checklist (Central Michigan University)
  6. Online Course Assessment Tool (OCAT) and Peer Assessment Process, (Western Carolina University).
  7. Quality Online Learning and Teaching, (California State University)

Let me know if you would like me to look at other rubrics, or tell me about your experience with online course evaluation rubrics (the good, the bad, and the ugly). I would love to hear from you.

Posted in education

Nomic: Games in Adult Basic Education

I have not posted here in a while. I have moved back up to Washington State, set up an education consulting business, and am, gratefully, back teaching again. I have been very focused on my teaching at Green River Community College, where I am teaching ABE (Language Arts/Social Studies and Language Arts/Science). I have always loved teaching at this level because this is where teachers can make a huge difference in the lives of others. Teach students how to read “The Compleat Angler” and you have fed them for a day. Teach them how to write an effective cover letter for a job and you can feed them and their families for a lifetime.

One of the cool things I have had the opportunity to do is update the Nomic game I used to use in English 101 and 102 for ABE. If you are not familiar with Nomic, I have posted about it here a few times before. I had the help of Jacqui Cain, who has a certificate in reading, my own humble experience, and the assistance of three students who read it for our three C’s: clarity, cohesion, and conciseness. The link to the old posting will give you the old game. The new version can be downloaded here from my site.

Okay, so I am interested in two things – any thoughts you may have on this game, AND, if you are using games in ABE or developmental education in your classroom, I would love to hear about it.

Posted in teaching

CmapTools for iPad


I have used Cmap on nearly every computer I have owned or worked with. It is a very useful tool and, I think, very underrated. I used to think it was difficult to work with until I had a conversation about Cmaps at a conference with Cable Green. He said that my issue was with the connecting verbs between the concepts: while I might feel that the connections were forced, it might be because my thinking about the concepts was not complete. This represented a real breakthrough in my work with concept maps. It was also very humbling because I was presenting on concept maps at that very conference! Nothing like having to rethink your positions at the last minute, but that is why I go to conferences. If you are not familiar with the tool, it is an open source concept mapping tool:

Cmap software is a result of research conducted at the Florida Institute for Human & Machine Cognition (IHMC). It empowers users to construct, navigate, share and criticize knowledge models represented as concept maps.

I am interested in concept mapping as a tool for writing and creating drafts but also for group projects and exploring ideas with others either in the same room or remotely. I have written about concept mapping previously and it looks like I need to update that posting!

I have not seen Cmap for iPad yet, but I am looking forward to working with it. It seems like a natural fit: “CmapTools for iPad is the best concept mapping App for the iPad. It is the perfect tool to rapidly construct concept maps and share them via the Cmap Cloud.”

Source: CmapTools for iPad – Cmap

Posted in education

Cleaning Up WordPress Tags


I woke up very early and couldn’t get back to sleep, so I decided to clean up the back end of this site. When I imported this blog from Blogger.com, WordPress converted all of my tags to categories. I had over 900 categories. I started to delete them a few at a time and realized that was going to take a long time. Then something hit me: the default category was set to “Uncategorized,” which is not very helpful. Since the default topic of this blog is “education,” I made that the default category instead. To do that, go to Settings > Writing and use the pull-down menu to select a new default. That was pretty easy. Then, while deleting things that should not be categories on the Categories page, I noticed at the bottom of the page a “category to tag converter,” which lists all of your categories on one page and lets you convert them all at once. I have finally gotten them down to a manageable, useful list. I could have asked someone about this, but I wasn’t sure what I was asking for at first – a common tech issue!


Posted in wordpress

GIMP: Can I have my MFA now?

I taught myself to make an animated GIF in GIMP (GNU Image Manipulation Program) today, following the brief tutorial at eLearnHub. GIMP is a great open source replacement for Photoshop. In fact, there is a version called GIMPShop that arranges the menus in a way more familiar to Adobe users. Both sites have extensive tutorials for pretty much anything an instructional designer, instructor, or artist would like to do with GIMP or GIMPShop. Animated GIFs can be used in tutorials and demonstrations – anything that would benefit from a brief animated example. I pulled the images of the walking cat from a 1963 book called “Animated Movie and TV Cartooning” (“magic-animator“) by Paul Robinson, part of a series of publications called the “Cartoonists Exchange Post Graduate Course.” I clipped the cat pictures out of the PDF using Jing, which I have been using for years as an image/screen capture utility to build tutorials. Okay, I think I have proved my artistic chops. I am ready for my Honorary Degree…
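In GIMP, each layer becomes one frame of the GIF on export. Outside of GIMP, the same layers-to-frames idea can be scripted; here is a minimal sketch using the Pillow library (an assumption on my part – this is not how the eLearnHub tutorial does it, and the frame images and filename are placeholders):

```python
# A minimal sketch of assembling still frames into a looping animated
# GIF with Pillow - the same idea as stacking layers in GIMP.
from PIL import Image

def make_gif(frames, path, ms_per_frame=100):
    """Save a list of PIL Images as a looping animated GIF."""
    first, *rest = frames
    first.save(
        path,
        save_all=True,          # write every frame, not just the first
        append_images=rest,     # the remaining frames, in order
        duration=ms_per_frame,  # delay per frame in milliseconds
        loop=0,                 # 0 means loop forever
    )

# For the cat animation, the frames would be the clipped images,
# e.g. [Image.open(f"cat_{i}.png") for i in range(8)].
```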

 

Posted in art

Virtual TA: Online Student Success and Retention Issues

Here are a few articles for our bot project:

“Issues of isolation, disconnectedness, and technological problems may be factors that influence a student to leave a course.”
http://onlinelearningconsortium.org/sites/default/files/v8n4_willging_2.pdf

“The authors point out that pedagogy and support structures must be enhanced to ensure the success of students who avail themselves of online learning options.”
http://ccrc.tc.columbia.edu/press-releases/online-students-drop-out-more-often-than-classroom-counterparts.html

One of the principles of good practice in undergraduate education is prompt feedback.
http://teaching.uncc.edu/learning-resources/articles-books/best-practice/education-philosophy/seven-principles

A virtual TA can help resolve tech issues and point students toward solutions. It can connect students to campus support services that may already be online. Virtual TAs also address the problem of how to provide students with immediate feedback.
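As a toy illustration of the triage idea behind such a bot: match a student's message against keyword-tagged support resources and reply immediately, falling back to the instructor when nothing matches. Everything here – the keywords, replies, and resources – is invented placeholder content, not any real campus's.

```python
# A toy keyword-triage sketch of a virtual TA. Real bots would use far
# richer matching, but the immediate-feedback idea is the same.

SUPPORT = [
    ({"password", "login", "locked"},
     "Try the IT help desk's password reset page."),
    ({"video", "audio", "plugin"},
     "See the LMS media troubleshooting guide."),
    ({"lonely", "isolated", "overwhelmed"},
     "Campus counseling offers online appointments."),
]

def triage(message):
    """Return the first canned reply whose keywords appear in the message."""
    words = set(message.lower().split())
    for keywords, reply in SUPPORT:
        if words & keywords:  # any keyword overlap counts as a match
            return reply
    # Nothing matched: escalate to a human rather than guess.
    return "I'll forward this to your instructor for a personal reply."
```

Even this crude version addresses the isolation and technology issues the articles above raise: the student gets an instant pointer to existing support instead of waiting days for a reply.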

Posted in AI

Being Sentient Means Never Having to Say You’re Sorry


I am reading the Stanford Report on AI. This statement from the report brought back all the things I read about AI growing up: “Contrary to the more fantastic predictions for AI in the popular press, the Study Panel found no cause for concern that AI is an imminent threat to humankind. No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future.” That said, I love hearing about breakthroughs in artificial intelligence. They are usually accompanied by breathless timelines (this breakthrough means that within 10 years, robots will take the place of x workers) or even more dire predictions, like Elon Musk’s declaration that AI is an “existential threat to humanity.” The AI is not the threat; the threat is our misperception of the nature of technology and consciousness. If we think that machines can make decisions, and we rely on those “decisions,” you can bet we will be extinct in 100 years. We will be destroyed not by the machines but by our misplacement of power and trust in those machines. The folks programming AI, and those employing it, have no idea what intelligence really means. It is not playing chess. It is not driving a car. One has only to drive down any metropolitan freeway on a weekday between 4 and 7 PM to know that intelligence has nothing to do with driving. I know this first hand because my Android phone and my wife’s iPhone have both sent us dangerously down one-way streets. Does that make the technology evil? Evil implies some sort of intention. Did the phone plan that, or intend to do it? If phones cannot do these two things, planning and intention, why do we think they can drive?

Intelligence is not even making music, because making music requires self-reflection, inspiration, a life-long relationship with music, and all of the experience and history that implies. I will believe that computers can really make music when a robot drops out of college and moves back into his mother’s (creator’s?) garage to become a drummer. But modern computers do not have that kind of agency or motivation. No computer has ever leaned back in a chair and said, “I am restless. I am going to the park to play chess.” No computer has ever felt compelled to create.


The science of cognition is still in its infancy. How do you go about making something “artificial” when you don’t know what you are making an artifice of? My generation grew up on Disney films where all one had to do was get electrocuted by a computer and, all of a sudden, you are super smart. In the early 80s, I remember the TRS-80s at the college library had Eliza on them, and that primitive code sent people out asking questions about AI. There are so many talking computers and robots in the media that I think we have come to believe that intelligence is somehow inherent in technology.

The computer folks do not have the background in the humanities to sufficiently define “intelligence,” and the humanities folks do not have a sufficient background in technology to understand the programming. Both camps think they are talking about the same thing, and they are not. The Turing Test does not measure intelligence; it just records whether someone has had a conversation with a computer sufficiently inane that one could not tell whether it was with a computer or a person.

Descartes and Spinoza are completely unaccounted for in the current discussions of AI and intelligence. I know that there is a lot going on in the cognitive sciences and AI right now, but going back to basics, as simple as they are, would reveal a lot. I would dare anyone to ask a modern AI program to give some account of its own existence, or to stack Spinoza’s three kinds of cognition next to any database and say we are even close to the complexity of human thought. Am I intelligent because I know something? Does a database of information really know something? Does a program that relies on branching-tree metaphors really “understand”? Am I intelligent because I speak French or English? Many idiots can speak French or English and often do so with great eloquence. And if you have been following the great conversation about intelligence, consciousness, and the nature of the mind at all for the last 2,500 years, you would know that embodiment is a huge deal in thinking about consciousness. None of these issues, two and a half millennia old, are really settled, and yet we let someone who neither knows nor cares what has been said about ethics or consciousness in even the last 30 years program a car and put one of these contraptions on the street.

What we don’t get is that the AI is us. We are the AI. We created AI out of every capacity that AI itself lacks: our intentions, fears, planning, vision, inspiration, intellect, culture, and history. Everything that constitutes our being creates AI. AI is another poem. It is a religion. It is art, bound to its time and culture. It is not a science. The science of AI is a game. It plays with every current conception, misperception, and convention of intelligence. The computer scientists are so sure there is a formula, and the humanists are so sure there is magic, when we alone determine what counts as intelligence. How do we measure it? How do we assess it? Will a computer one day ask, “What was my original face before I was born?”

Maybe a computer will one day be able to say “I think, therefore I am,” but we have to decide whether that statement is true for that computer or not. Does a recording of those words on a cassette tape mean my stereo is sentient? The fact that I can create narrow, mechanical conditions under which a computer makes such a declaration says more about my definition of consciousness than about the consciousness of technology. The nature of sentience is unclear at best. We are clever enough to make computers mimic human activity, but that is far from being intelligent. And I am not just a blind sceptic: I know we can accomplish a lot through AI, analytics, and cognitive science. I just think we need a real definition of intelligence before we declare something intelligent.

I am writing a short story in response to these issues because I think a 20-page research paper is a lame response to a situation as ridiculous as this. Only art can capture the sublime absurdity of these questions. I am basically an autodidact: as a technician I am a good artist, and as an artist, I am a good technician. Watch this space…
