Monday, August 11, 2014

Just A Few More Common Mistakes to Avoid in Online Course Design and Development

As a follow-up to my 5 Common Mistakes to Avoid in Online Course Design and Development and A Few More Common Mistakes to Avoid in Online Course Design and Development posts, I want to share a few more mistakes that I have been guilty of myself and have observed higher education faculty make during the online course development process.


  1. Mistake #8: Not giving yourself enough time to build your online course.  Faculty members are busy.  Whether you're a graduate teaching assistant, an adjunct instructor, or a tenure-track (or non-tenure-track) professor, you are continuously juggling teaching, research, and service responsibilities, and it is easy to become overwhelmed.  You may even be tempted to delay building and designing your online course; however, doing so typically rushes course creation and causes quality to decline.  This frequently occurs during a learning management system (LMS) migration or during the summer when faculty are preparing to teach an online course for the first time in the upcoming fall semester.

    My suggestion:  Begin building your course 16 weeks or more before the course start date, and establish development benchmarks along the way.  By beginning the course development process early, it becomes much easier to focus on course quality and still devote time to other priorities.


  2. Mistake #9: Not using rubrics.  I want to be clear up front that rubrics are not the only mechanism for assessing students' understanding of content or completion of assignments/projects.  Other methods, such as holistic scoring and grading checklists, can be used.  However, well-developed rubrics, in my opinion, are ideal for the online learning environment for several reasons:
    • They ensure consistency when grading the same type of assignment over time.  Discussion forums, reading briefs, journal entries, and other writing prompts are good examples where you have specific, consistent expectations (quality of responses, spelling and grammar, timeliness, etc.) throughout the semester, regardless of the topic or content in focus.
    • They give students clear direction on what is expected to achieve a specific mastery level/rating across specific criteria for a given assignment or task.
    • They minimize student confusion about grading and assignment/task requirements.
    • They help students see the connection between the assignment/task at hand and specific, measurable learning outcomes or objectives.
    • They help you provide constructive feedback when students do not meet a specific mastery level.  In other words, a rubric gives you, the instructor, an opportunity to justify why points were deducted or what the student could have done differently to achieve a higher mastery level.  Rubrics also provide a foundation for students to reflect on and improve their own skills over time, using the instructor feedback as a guide.

    My suggestion:  Create a rubric for every assignment in your course.  In cases where students complete the same type of task for different topics/content, such as threaded discussions, create one rubric that can be reused; ensure that the rubric takes timeliness, quality of content/responses, participation, and so on into account, and that these details are clearly communicated.  Even if students are discussing or writing about different topics throughout the semester, you can save significant time in the course development process by creating a single rubric that can be used for several assignments.  More specifically: one rubric for discussion boards, one rubric for journal entries, one rubric for reading briefs, etc.


  3. Mistake #10: Minimal use of tests and quizzes. From personal experience at the undergraduate level, tests and quizzes are ideal for tasks that require memorization and recall of facts from online/text readings, PowerPoint slides, videos, or other instructional media.  However, this mistake is often overlooked at the graduate level (master's and doctoral), where the typical faculty mentality is that graduate students are too advanced for traditional online tests and quizzes.  Traditional tests and quizzes may not be used as often at the graduate level as at the undergraduate level, but they do have their place in the online learning environment.

    My suggestion:  Consider creating short ungraded quizzes for content evaluation purposes.  This is the equivalent of asking impromptu questions in a live face-to-face environment, but now you're doing it online and asynchronously.  If students don't understand a concept, data collected from this ungraded (low-stakes) quiz can reveal those content gaps and give you (the instructor) an opportunity to address them or revise instruction accordingly (this is an example of formative assessment as a process).  Another option is to create an icebreaker quiz (graded but low-stakes, or ungraded) at the beginning of the semester to assess whether students have read the course syllabus and understand the course navigation and structure.  I did this as a "scavenger hunt" activity worth 10 points toward participation, where graduate students had to score 80% or higher but had unlimited attempts so they could revisit the syllabus, course navigation, and other elements of the online course.


  4. Mistake #11: Not modeling student expectations in online discussions. When I first started teaching online as a teaching assistant during my doctoral studies, working with an experienced professor in an introductory research methods course, I noticed that the professor structured and facilitated online discussions quite differently from what I had experienced as a student in my other graduate courses.  The "post and respond to 2-3 other classmates" model of discussion was the norm and did not engage me much as a student.  In observing the veteran faculty member, I noticed one key recurring theme that was critical to successful, engaging, and meaningful online discussions: modeling student expectations for posting and participating in constructive discussions.

    My suggestion:  For the first (and perhaps second) threaded discussion at the start of the semester, place yourself in the students' shoes and participate in the discussion by meeting the requirements (another reason for well-designed rubrics) yourself.  For example, in initial postings to a module's discussion question(s), I expect students' responses to be thorough, concise, free of grammar and spelling errors, and supported by in-text citations from course readings or external sources.  So I post an initial response of my own as an example of what a thorough, concise, well-cited response might look like.  I want students' ideas to be original, but they must write in a way that someone with no knowledge of the discussion question could still understand.  Furthermore, I expect students to participate in discussions continuously throughout the duration of a given module.  Simply responding to 2-3 classmates' initial postings with "yes, I agree" or "I disagree" and calling it done is neither meaningful nor constructive, and it is not acceptable.  Once again, I model my expectation for continuous participation by responding to a few students constructively, making sure my responses (a) are thorough and concise, (b) cite supporting evidence as to why I "agree" or "disagree" with an idea (and yes, it's OK to disagree), and (c) are spread out across the duration of the module.  If a requirement is for students to respond to three or more classmates, but not all at once, then be sure your students know this up front.  Communicating your expectations for online discussion is necessary, but modeling them is essential as well.


What have you experienced in online course design and development?  Feel free to share your ideas, thoughts, and suggestions in the comments section below.
