7 Maintaining the Quality of an Online Professional Doctorate
Despite the increased number of online programs in the United States and the developments in online communication technologies that have improved online teaching and learning over the last two decades, online education is still viewed with skepticism and as a lesser alternative to on-campus education at institutions of higher education. Early research and efforts to measure the quality of online education have often focused on comparing online offerings to on-campus offerings and have attempted to establish that the quality of the former is equivalent to that of the latter (Phipps & Merisotis, 1999; Thompson & Irele, 2007). An underlying assumption of these efforts is that existing on-campus programs are of the highest quality. Based on our experience, we contend that online programs are different from on-campus programs and must, therefore, be designed, implemented, and evaluated differently. Online programs must be designed, developed, and ready for use before students access course materials, so instructional design and quality-control processes must be in place before a course begins, which is not often the case with on-campus offerings. Furthermore, since online students may never visit the campus of the college or university at which they study, they need online support with registration and other student services. Faculty members must be familiar with online technologies and be able to communicate, organize, and teach in online environments. Several additional factors contribute to the success of online education, foremost among them being the quality assurance and evaluation integrated within an online program.
There are several facets to the maintenance of quality in an online professional doctorate from institutional and programmatic perspectives (e.g., accreditation, accountability, competitive advantage, cost-effectiveness, and student retention). As faculty members and leaders of an online professional doctorate, we have approached quality maintenance from a programmatic perspective, focusing on two areas: the quality and the impact of the doctorate. In this chapter, we focus on the measurement of quality for the continual improvement of the online professional doctorate, leaving the topic of impact for the next chapter. We begin with a brief overview of the leading frameworks used to measure quality in online education and discuss how they pertain to quality maintenance in an online professional doctorate. We then share examples and research from the EdD in educational technology at the University of Florida (UF EdD EdTech) and describe how to ensure and maintain quality in various areas of the online doctorate.
QUALITY IN ONLINE EDUCATION
Meyer (2002) describes quality in distance education as a “complex and difficult concept, one that depends on a range of factors arising from the student, the curriculum, the instructional design, technology used, faculty characteristics, and so on” (p. 101). This complexity is compounded by the use of multiple terms such as quality improvement, quality assurance, and quality management. Quality can be defined differently depending on the stakeholders involved, their perspectives, and the prescribed guidelines or requirements of accrediting agencies. Simply put, quality assurance focuses on what must be put in place while building online programs, and quality management deals with measuring and maintaining quality.
At the institutional level, quality assurance includes strategic plans, strategic partnerships, compliance procedures, course development resources and support, professional development for faculty who teach online, technical infrastructure, the usability of online content for diverse learners and on mobile devices, and student support services. Several organizations around the world have compiled quality indicators to help institutions that offer online programs. For example, the Australasian Council on Open, Distance, and e-Learning (Sankey, Carter, Marshall, Obexer, Russell, & Lawson, 2014) and the Canadian Association for Community Education, together with the Office of Learning Technologies of Human Resources Development Canada (Barker, 2002), have published easy-to-use benchmarks, guidelines, and performance indicators for professionals seeking to assure quality at the institutional level. The European Association of Distance Teaching Universities (Williams, Kear, & Rosewell, 2012) provides guidelines for quality assessment of e-learning that include a comprehensive list of indicators and markers of excellence that can be used to ensure quality from the institutional level down to the course level. In the United States, the Institute for Higher Education Policy (Merisotis & Phipps, 2000) released a report more than fifteen years ago on quality indicators that could serve as benchmarks for success in distance education; the indicators were categorized according to institutional support, course development, teaching and learning, course structure, student support, faculty support, and evaluation and assessment. In 2002, the Council for Higher Education Accreditation (CHEA, 2002) identified additional areas important for accreditation and quality assurance, such as institutional resources, institutional organizational structure, and student learning outcomes. Together, these reports have informed more recent frameworks on the quality of online education.
The Online Learning Consortium (OLC), previously known as the Sloan Consortium, has played a key role in efforts to define quality in online education in the United States. The OLC provides a framework for quality online education that is built on five pillars: learning effectiveness, student satisfaction, faculty satisfaction, access, and scale (i.e., cost-effectiveness and commitment). The pillars are described in terms of goals, processes or practices, sample metrics, and progress indices and can be used by higher education institutions. The OLC also provides a Quality Scorecard that administrators can use to “identify, measure and quantify elements of quality within an online education program” (onlinelearningconsortium.org/consult/quality-scorecard). The scorecard, which is grounded in research, lists indicators in the following nine categories, each of which can be scored on a scale of 0 to 3 (a sketch of how such ratings might be tallied follows the list):
- institutional support (e.g., policies for college credit and intellectual property)
- technology support (e.g., infrastructure, technology delivery systems, faculty/student technology skills)
- course development/instructional design (e.g., course content, course design, student-centred instruction)
- course structure (e.g., course organization, accessibility of materials, grading policies)
- teaching and learning (e.g., different types of interactions, library support, feedback)
- social and student engagement
- faculty support (e.g., professional development, technical assistance for faculty)
- student support (e.g., advising, administrative support)
- evaluations and assessment (e.g., evaluation of learning outcomes, faculty performance, program effectiveness)
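As a minimal sketch of how scorecard-style ratings might be tallied, the following Python snippet aggregates indicator ratings into per-category totals and flags categories that fall below a chosen threshold. Only the category names come from the scorecard itself; the indicator names, ratings, and the 70 percent review threshold are hypothetical, and this is our illustration rather than an official OLC tool.

```python
# Minimal sketch (not an official OLC tool): tallying scorecard-style
# indicator ratings (0-3) by category and flagging weak categories.
# Indicator names, ratings, and the 70% threshold are hypothetical.

from collections import defaultdict

ratings = [  # (category, indicator, rating) as a program team might record them
    ("institutional support", "intellectual property policy", 3),
    ("institutional support", "college credit policy", 2),
    ("technology support", "delivery system reliability", 3),
    ("course structure", "grading policy clarity", 1),
    ("student support", "online advising", 2),
]

MAX_PER_INDICATOR = 3  # each indicator is scored 0-3

totals = defaultdict(lambda: [0, 0])  # category -> [earned, possible]
for category, _indicator, rating in ratings:
    totals[category][0] += rating
    totals[category][1] += MAX_PER_INDICATOR

for category, (earned, possible) in sorted(totals.items()):
    share = earned / possible
    flag = "  <- review" if share < 0.7 else ""
    print(f"{category}: {earned}/{possible} ({share:.0%}){flag}")
```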
At the course level, the national benchmarks and rubrics established by the Quality Matters (QM) program (qmprogram.org) are widely used by institutions of higher education in the United States to ensure the quality of online and blended courses. The eight standards, each rated on a three-point scale, are as follows: course overview and introduction, learning objectives, assessment and measurement, instructional materials, learning activities and interaction, course technology, learner support, and accessibility and usability. QM emphasizes the alignment of learning objectives, measurement, materials, interactions, and course technology as integral to online learning quality. The Blackboard Exemplary Course Program rubric (Blackboard Community Programs, 2012) is another commonly used rubric for course quality; it evaluates course design (e.g., goals, objectives, content presentation, learner engagement, technology use), interaction and collaboration (e.g., interaction logistics, communication strategies, building of community), assessment (e.g., expectations, assessment design, self-assessment), and learner support (e.g., orientation, software, instructor role, course policies, technical/accessibility issues, accommodations for disabilities, feedback).
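Because QM treats alignment as central to quality, one simple audit is to map each course component to the objectives it serves and then flag components that address no objective or objectives that no assessment covers. The sketch below is our illustration of such an audit, not part of the QM rubric; all course data in it are hypothetical.

```python
# Illustrative alignment audit (not part of the QM rubric): every course
# component should map to at least one learning objective, and every
# objective should be covered by at least one assessment.

objectives = {"O1": "critique research designs", "O2": "write a literature review"}

# component -> objectives it claims to address (hypothetical course data)
assessments = {"annotated bibliography": {"O2"}, "design critique memo": {"O1"}}
activities = {"weekly discussion": {"O1", "O2"}, "library scavenger hunt": set()}

def orphans(components):
    """Components that address no stated objective."""
    return [name for name, objs in components.items() if not objs & objectives.keys()]

def uncovered(components):
    """Objectives not addressed by any component."""
    covered = set().union(*components.values()) if components else set()
    return [o for o in objectives if o not in covered]

print("Activities with no objective:", orphans(activities))
print("Objectives with no assessment:", uncovered(assessments))
```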
The above resources provide benchmarks, indicators, and rubrics that can be used to assure and measure quality at the institutional level and in individual courses in an online professional doctorate. The validated and widely used OLC Quality Scorecard is a valuable instrument that can ensure quality in an online program and identify areas for quality improvement. We perceive the above resources to be foundational and informative for those embarking on and providing online education. Additionally, we believe that a program-specific and program-integrated approach to quality improvement is necessary for an online professional doctorate because of the purpose and nature of doctoral studies. Unlike in an online master’s program, coursework in an online doctoral program builds toward doctoral candidacy and a dissertation; thus, the individual courses contribute to the learning outcomes of the holistic curriculum, as do the non-course-specific experiences (e.g., synchronous and asynchronous interactions, online dissertation mentoring). Assessing the quality of individual courses can be valuable in an online professional doctorate but does not reflect the quality of the curriculum as a whole or the students’ preparedness for independent research and writing during the dissertation phase. Student experiences in an online professional doctorate must be assessed across courses and non-course-specific interactions and activities to ensure that the program’s larger goals are being met.
During the initial stages of the UF EdD EdTech, we struggled to find instruments from prior research on quality in online education that could assess the quality of the curriculum as a whole or measure students’ development as researching professionals. Quality assessment in the first program offering therefore focused on the program design and on student and faculty satisfaction with that design, and the results were used to improve the program design for the second cohort. Subsequent quality assessment efforts combined existing instruments from prior research with our own instruments from the first offering, adapted as the program evolved and matured. In the next section, drawing on our own research and the instruments used in our program, we present ways in which quality can be assessed in an online professional doctorate with the aim of improving program offerings for subsequent cohorts.
ASSESSING QUALITY IN AN ONLINE PROFESSIONAL DOCTORATE
At the time of writing, we have enrolled four cohorts—beginning in 2008, 2010, 2012, and 2014—of full-time professionals from diverse educational environments. We conducted formative research to improve the program’s quality and impact for each subsequent cohort. We also conducted summative research with the first two cohorts after at least half of the students from each cohort had graduated. Based on our research and experiences with quality assessment in the UF EdD EdTech, we propose the following three areas for assessing quality in online professional doctorates: online teaching and learning before candidacy, online mentoring and research during dissertation, and institutional support (see table 2).
Table 2. Instruments and Resources for Assessing Quality in an Online Professional Doctorate
| Instruments | Focus | Resources |
|---|---|---|
| **Online teaching and learning before candidacy** | | |
| CoI program survey for an online professional doctorate | Online teaching and learning in a community of researching professionals | Kumar et al., 2011; Kumar & Ritzhaupt, 2014 |
| Faculty interviews | Faculty perspectives on teaching, the curriculum, the online community of inquiry, the development of scholarly thinking, challenges, and support structures | Kumar & Dawson, 2012b |
| Student focus groups | Building of community across online courses and non-course-specific virtual spaces; development of scholarly thinking; readiness for qualifying exams; student perceptions of the entire curriculum | Kumar, 2014b |
| Facebook interaction analysis | Building of community in student-driven virtual spaces | Kenney et al., 2013; Kumar & Hart, 2014 |
| Information-literacy surveys and citation analysis | Information-literacy needs analysis; satisfaction with information-literacy instruction; acquisition of information-literacy skills; need for specialized information-literacy instruction | Kumar & Edwards, 2013; Kumar et al., 2012; Kumar & Ochoa, 2012 |
| Course evaluations and open-ended responses on individual courses | Quality of individual courses | Blackboard, 2012; institution-specific course evaluations; QM rubric |
| **Online mentoring and research during dissertations** | | |
| Student interviews after dissertation completion | Online mentoring of dissertations, the dissertation process, best practices, challenges, peer support, institutional support | Henriksen et al., 2014; Kumar et al., 2013 |
| Faculty interviews | Process, challenges, and best practices regarding the online mentoring of dissertations | Kumar & Johnson, 2017 |
| Analysis of dissertations | Quality of research or the dissertation product according to the guiding principles | Dawson & Kumar, 2014 |
| **Institutional support** | | |
| OLC Quality Scorecard, student interviews or surveys, and faculty interviews | Support for teaching in an online program, marketing, admissions, technology, administrative and registration support | Sankey et al., 2014; CHEA, 2002; Williams et al., 2012; OLC (onlinelearningconsortium.org/consult/quality-scorecard) |
| Faculty self-assessment and interviews | Prior experiences with online teaching and learning, online mentoring, competencies | PSU (weblearning.psu.edu); Schichtel, 2010; Williams, 2003 |
In what follows, we discuss in detail quality assessment in the first two areas, with examples from the UF EdD EdTech. The resources reviewed above include robust instruments for assessing the quality of the third area, institutional support, so we address it only briefly here. Furthermore, we believe that advising and support services for students should be integrated into online teaching and learning processes during coursework, online mentoring during the dissertation process, and institutional and administrative support throughout the program. Likewise, we perceive faculty preparedness and support as an area that is the responsibility of the institution and believe that it should be considered in the assessment of quality in all three areas—online teaching and learning before candidacy, online mentoring during dissertations, and institutional support.
Online Teaching and Learning Before Candidacy
In an online professional doctorate that is designed based on the community of inquiry framework, online teaching and learning before candidacy comprises several areas ranging from individual courses to the building of a community of inquiry. In this section, we describe the data sources that have provided insight, from both student and faculty perspectives, into the quality of instruction, interactions, and support within our program.
The community-of-inquiry survey for online professional doctorates. The UF EdD EdTech was designed to develop a community of researching professionals based on Garrison et al.’s (2000) framework. Notwithstanding the wealth of research available on this topic, we struggled to find a survey that would assess the quality of such a community as it develops in an online program that includes courses and non-course interactions. We decided that the analysis of a random sample of discussion forums across online courses would not adequately represent community development. As a result, we developed a survey for our first cohort and adapted an existing survey for our second cohort in the UF EdD EdTech.
The survey for our first cohort was based on Garrison et al.’s (2000) community-of-inquiry framework but included items specific to our program to ensure that we were evaluating all the aspects of our first program offering. Items in the survey pertained to students’ satisfaction with program elements, learning environments, and support; their perceptions of learning and community building; and the relevance and transferability of their learning to their practice (Kumar et al., 2011). The survey, with an overall reliability of 0.88, contained three sections: Faculty Instruction and Feedback (Cronbach’s alpha = 0.90); Support, Learning Environments, and Community-Building (Cronbach’s alpha = 0.76); and Application of Learning (Cronbach’s alpha = 0.96). It included open-ended questions about student challenges and suggestions for improving the program (Kumar et al., 2011). Items in this survey about community building and learning environments can be adapted for other online professional doctorates based on the environments being used, and items for application of learning can also be adapted based on the goals of the program. We found the anonymity of respondents to be beneficial for quality improvement, since students may not have been as forthright if their identities were known. For instance, students rated faculty presence, cognitive presence, and administrative support highly but reported that the goals of the doctorate had not been clearly communicated at the beginning. The results also highlighted the strengths and weaknesses of the learning environments used for community building, enabling us to probe further during focus groups or faculty interviews and resulting in a redesign of certain learning environments for the next program offering (Kumar, 2014c).
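The reliability figures above are Cronbach’s alpha coefficients. For readers who wish to compute comparable estimates for their own survey sections, a minimal sketch follows; the five-point Likert responses are fabricated for illustration.

```python
# Minimal sketch: Cronbach's alpha for one survey section.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# The 5-point Likert responses below are fabricated for illustration.

import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: rows = respondents, columns = items in one section."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)      # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

section = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(section):.2f}")
```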
For the second iteration of the program, we adapted the community of inquiry (CoI) survey for online courses (Arbaugh et al., 2008) to make it relevant for online programs, and we included items from our first survey specific to our online professional doctorate. The survey contained items in three sections: Faculty Presence (Cronbach’s alpha = 0.93), Social Presence (Cronbach’s alpha = 0.91), and Cognitive Presence (Cronbach’s alpha = 0.93). It included open-ended questions in each area and asked for suggestions for improvement. Kumar and Ritzhaupt (2014) document how we adapted individual items from the original CoI survey. For instance, some universal changes were made: “instructor” was changed to “faculty,” “participants” to “cohort,” and “course” to “program.” Item 7 in the CoI survey, “The instructor clearly communicated important course goals,” was reworded to “The faculty clearly communicated program goals for Year 1,” and item 34, “I can apply the knowledge created in this course to my work or other non-class related activities,” became “I have applied knowledge or skills gained from Year 1 of the EdD program to my practice/work environment.”
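Because several of these adaptations are systematic term substitutions, a first pass can even be scripted before items are reviewed by hand. The sketch below is our illustration of applying the universal changes named above; program-specific rewording (e.g., “Year 1”) still requires manual editing.

```python
# Sketch: applying the universal term substitutions described above to
# CoI survey items; program-specific rewording (e.g., "Year 1") must
# still be done by hand.

import re

SUBSTITUTIONS = {
    "instructor": "faculty",
    "participants": "cohort",
    "course": "program",
}

def adapt(item: str) -> str:
    for old, new in SUBSTITUTIONS.items():
        # \b keeps "course" from matching inside "coursework"
        item = re.sub(rf"\b{old}\b", new, item, flags=re.IGNORECASE)
    return item

print(adapt("The instructor clearly communicated important course goals."))
# -> "The faculty clearly communicated important program goals."
```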
This survey can be useful for others who would like to assess the development of a community of inquiry in an online professional doctorate. In the UF EdD EdTech, for example, students rated the on-campus orientation lower than other interactions and environments for the building of community; open-ended responses revealed that the orientation contained too many information sessions and not enough opportunities for interactions among students (Kumar & Ritzhaupt, 2014). In subsequent program offerings, we presented some of the orientation information online and built in more activities for students. Despite consistently high ratings, we continue to use this survey at the end of the first year of each program cycle because it provides us with a comprehensive picture of student satisfaction, program strengths and weaknesses, and student needs. We highly recommend using a survey at the end of the first year that is specific to the online professional doctorate and then following up with qualitative techniques to collect data on problematic areas.
Faculty interviews. In addition to exploring the student perspective, it is critical to collect data from faculty members about their experiences teaching and advising in the online professional doctorate in the first year. We have found it helpful to interview faculty members about each of the following: online teaching, learning environments, and curriculum in the professional doctorate; the development of an online community of inquiry and scholarly thinking; the challenges they face while teaching and mentoring online; the support structures they need; and any other concerns they may have. In our program, all faculty members involved in the development and implementation of the first offering participated in thirty- to forty-five-minute interviews conducted by a new faculty member. They were largely positive about their experiences and collaborative efforts, but they also reflected on challenges, such as program workload and advising in the context of research, teaching, and service; the expectations of rigorous doctoral work in the online environment; and the management of learning environments. We used the results of the CoI survey to probe for the faculty members’ perspectives on areas that were challenging for students. For example, students rated Google Groups for non-course-specific interactions lower than other environments, and the faculty members reflected that because this group was faculty led, it served more as a question-and-answer forum than as a community. This led to a redesign, with a transfer of responsibility and more student autonomy for the next cohort (Kumar, 2014c). Likewise, the importance of learning presence emerged during faculty interviews and was subsequently integrated into the program design. Interviews provide useful insight into faculty members’ understanding of and work in the online professional doctorate and into how they can be better supported while teaching and mentoring online. If program leaders cannot find a partnering researcher to conduct interviews, we recommend a reflective meeting in which each faculty member shares thoughts on specific questions and all participants discuss the strengths, weaknesses, and challenges of various aspects of the online professional doctorate.
Student focus groups. In addition to administering an anonymous student survey at the end of the first year, we conduct student focus groups at the end of the second year to better understand the development of scholarly thinking, community building across courses and non-course-specific program spaces, and student satisfaction with the curriculum. If an online professional doctorate includes qualifying or comprehensive exams, conducting the focus groups after the exams helps assess students’ perceptions of preparedness for doctoral candidacy. In the UF EdD EdTech, program leaders formulate the questions, but faculty members or researchers who are not associated with the program conduct the focus groups, which allows for some level of anonymity among students providing feedback and ensures the validity of the qualitative data collection process. We find the results of these focus groups useful because students reflect on their experiences over the course of two years from their perspectives as doctoral candidates who have completed qualifying exams, which is different from the data they provide as they are working toward candidacy. Students reflect on their experiences from a bigger-picture perspective and provide suggestions for various aspects of the program, including community-building activities, the sequencing of courses in the curriculum, and the timing of information-literacy instruction.
Facebook interaction analysis. We have found sense of community and peer support across all program-related spaces to be essential for student satisfaction, learning, and retention in our online professional doctorate (Kumar, 2014c). During focus groups, students have highlighted the importance of student-driven spaces in which they interact with each other as people and professionals, not just as students in a program. Two students undertook an analysis of the interactions within their Facebook group to identify the ways in which they engaged with each other and the topics that were most discussed and that contributed to community building. They expected Facebook to be the focus of social interactions among their group, but they found that 93 percent of conversations in the first six months centred on the program (e.g., courses, assignments, professors; Kenney et al., 2013). We used the results of the Facebook analysis to communicate to future cohorts the value of learning presence, student engagement, and a student-driven virtual space. To avoid privacy and conflict-of-interest issues, we suggest that participating students or external researchers, rather than faculty members or program leaders, study the topics and types of interactions within such spaces to understand their value in the online professional doctorate.
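Proportions such as the 93 percent figure reported by Kenney et al. (2013) come from coding each post into a topic category and tallying the results. A minimal sketch of that tally follows; the coded posts are fabricated for illustration, and in practice human coders assign the categories before any counting happens.

```python
# Minimal sketch: computing topic proportions from manually coded posts.
# The coded posts are fabricated; in practice each post is assigned a
# category by human coders before this tally runs.

from collections import Counter

coded_posts = [
    "courses", "assignments", "professors", "assignments", "social",
    "courses", "assignments", "courses", "professors", "courses",
]

PROGRAM_TOPICS = {"courses", "assignments", "professors"}

counts = Counter(coded_posts)
program_share = sum(counts[t] for t in PROGRAM_TOPICS) / len(coded_posts)

for topic, n in counts.most_common():
    print(f"{topic}: {n} ({n / len(coded_posts):.0%})")
print(f"Program-centred share: {program_share:.0%}")
```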
Information-literacy surveys and citation analyses. We suggest using surveys to assess whether information-literacy instruction is meeting the needs of professional doctoral students and whether students are acquiring and practicing the skills and strategies needed in their research. In the UF EdD EdTech, we adopted a program-integrated approach to information literacy. Before students begin the program, we distribute a needs-assessment survey about students’ prior experiences with library instruction; their familiarity with discipline-specific databases and citation styles; and their perceived confidence, anxiety, and expertise regarding library resources. In addition to providing information on the skills and content considered essential to the doctorate, the results of the needs assessment inform the content and design of information-literacy instruction during the first year of the program (Kumar et al., 2012).
When the students have been in the program for six months, we use a post-instruction survey to assess student satisfaction with information-literacy instruction, acquisition of information-literacy skills and strategies, and anxiety and self-efficacy regarding information literacy. Students in our program have reported improved skills and confidence but have also made suggestions regarding the timing of instruction, formats that worked better for them, and topics for further instruction; we integrated these suggestions into information-literacy instruction for subsequent cohorts (Kumar & Ochoa, 2012). This survey provides us with information about professional students’ technical and information-literacy needs and skills, and, more importantly, it makes students aware of how essential such skills are to their doctoral studies. Surveys have shown that most of our students have plenty of technological expertise and are comfortable using public search engines but are often unfamiliar with or unsure how to search within discipline-specific databases (Kumar & Edwards, 2013).
In addition to surveys, information-literacy activities that are integrated into initial coursework help us assess whether students are able to search for, find, and manage relevant literature and address any gaps in their knowledge. For example, students can be asked to find a peer-reviewed article about a particular topic. Artifacts and assignments can also be analyzed to assess students’ information-literacy skills. In the UF EdD EdTech, students are required to complete a literature review by the end of their first year. In one instance, librarians undertook a citation analysis of students’ literature reviews to investigate the extent to which information-literacy instruction had been successful in helping students with this assignment. Although students were found to have acquired information-literacy skills, the citation analysis revealed that they were not completely successful at critiquing research, a skill that is essential for scholarly thinking in an online professional doctorate (Kumar, 2014b).
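Parts of a citation analysis can be automated once students’ reference lists are exported in a structured format. The sketch below summarizes source types, peer-review status, and recency from hypothetical exported records; judging how well students critique research, as noted above, still requires human reading.

```python
# Sketch: summarizing exported reference records from a literature review.
# Records and fields are hypothetical; source-type and peer-review labels
# would normally come from a citation-manager export (e.g., RIS/BibTeX).

from collections import Counter

references = [
    {"type": "journal", "year": 2010, "peer_reviewed": True},
    {"type": "journal", "year": 2008, "peer_reviewed": True},
    {"type": "website", "year": 2011, "peer_reviewed": False},
    {"type": "book",    "year": 1999, "peer_reviewed": False},
    {"type": "journal", "year": 2012, "peer_reviewed": True},
]

type_counts = Counter(ref["type"] for ref in references)
peer_reviewed_share = sum(r["peer_reviewed"] for r in references) / len(references)
recent_share = sum(r["year"] >= 2007 for r in references) / len(references)

print("Source types:", dict(type_counts))
print(f"Peer-reviewed: {peer_reviewed_share:.0%}")
print(f"Recent sources (2007 or later): {recent_share:.0%}")
```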
Collaboration with academic librarians is crucial not only for the design and implementation of program-integrated information literacy but also for the assessment procedures described in this section. Faculty in an online professional doctorate do not always have expertise in information-literacy skills, and the electronic media and interfaces used by libraries change rapidly; thus, the inclusion of a librarian in quality assessment in this area is essential. We have found it challenging to update, validate, and implement information-literacy surveys in a timely manner across our cohorts over the years because of several changes in library leadership. Although incoming librarians have been enthusiastic and willing to collaborate on information-literacy integration and design, not all librarians are experts in information-literacy assessment, especially for nontraditional students.
Course evaluations and open-ended responses on individual courses. We believe that quality in an online professional doctorate should be assessed holistically from a program perspective and should not be approached in the same manner as it is in a master’s program, in which the assessment of individual courses may be sufficient. Nevertheless, when courses are created and offered for the first time, new faculty are teaching in the program, or courses are redesigned for quality improvement, we recommend reviewing course evaluations and using course-specific rubrics (e.g., QM, Blackboard Rubric) to assess the quality of individual courses. For example, when we offered a new blended course in our program, student comments and ratings from standard course evaluations provided useful feedback on ways to improve group activities and the scaffolding of presentations in the course. Similarly, student feedback on research courses offered by a different department revealed the need for resources and activities that were more aligned with students’ research interests and professional contexts.
Mentoring and Research During Dissertations
Students’ relationships with their mentors and their sense of connectedness to the program, institution, and peers contribute to their ability to persist through multiple challenges and move successfully through the dissertation process (Kumar et al., 2013; Kumar & Johnson, 2014). In this section, we describe data-collection processes that can greatly contribute to quality improvement during this critical phase of an online professional doctorate.
Interviews with program graduates. Interviews with students shortly after they graduate can provide significant insight into their experiences during the dissertation process. The questions posed during such interviews focus on the writing of dissertation proposals, the online mentoring of dissertations, the dissertation process, faculty best practices, student challenges, and peer and institutional support. The data that emerged from interviews with the first nine graduates of the program highlighted the online mentoring strategies used by faculty members that had helped students and revealed effective strategies used by the students themselves (Kumar, Johnson, & Hardemon, 2013). These data were shared with subsequent cohorts and integrated into program materials as practices to emulate; they were also shared with faculty members, who discussed the strategies that worked and reflected on ways to address the challenges that students experience. During the interviews, program graduates also suggested improvements in areas of institutional support, such as clearer instruction on the Institutional Review Board guidelines for research and increased familiarity with the on-campus office that assists students with the formatting of dissertations. These suggestions were then integrated into the program. We believe that a qualitative approach to data collection on the dissertation stage is necessary. In the UF EdD EdTech, interviews with graduates have provided valuable insight into the dissertation experiences of students conducting independent research in environments different from those of traditional research and into the impact of the dissertation process on their professional environments.
Faculty interviews. In addition to conducting graduate interviews, we interviewed faculty members about their experiences with the online mentoring of dissertations—specifically, about strategies, challenges, and other factors that influenced the dissertation process. These interviews were extremely insightful after the first program offering because until then, the participating faculty members had never mentored a dissertation online, nor had they mentored a professional doctoral dissertation. Their reflections on which strategies had worked well for them, what challenges arose regarding boundaries and expectations, and how we could improve the curriculum to better prepare students for the dissertation process were useful for quality improvement of the program (Kumar & Johnson, 2017). For example, faculty members emphasized the need for community and peer support not only during the initial coursework leading up to candidacy but also during the dissertation process; this resulted in increased small-group mentoring by faculty in subsequent cohorts. If it is not possible to conduct interviews with the faculty members, we highly recommend that faculty members participate in a structured reflection about their experiences during the dissertation process, followed by the sharing of reflections with each other.
Analysis of dissertations. While graduate and faculty interviews can shed light on the process of mentoring dissertations online, it is equally important to analyze the product of this phase of an online professional doctorate—the dissertations produced by students. In chapter 5, we presented the guiding principles for dissertations in the UF EdD EdTech and our analysis of the first twenty-three dissertations to assess whether the guiding principles were being fulfilled. Those involved in designing other online professional doctorates may wish to craft their own guiding principles and analyze the alignment of dissertations completed within their programs with those principles. Such an analysis can reveal how dissertations are conducted in professional environments (Dawson & Kumar, 2014); the quality of dissertations; the types of methodologies that students are using (Dawson & Kumar, 2016); and the ways in which dissertations are impacting professional environments. Program leaders can identify areas that may need more attention in the curriculum of the doctorate or during online mentoring of dissertations to ensure research rigour and dissertation quality.
Institutional Support
As we described earlier in this chapter, the OLC Quality Scorecard (onlinelearningconsortium.org/consult/quality-scorecard) is a robust instrument with which to assess administrative and institutional support for an online program. It addresses areas such as technology infrastructure, accessibility of materials, credits, intellectual property, faculty support, student support, course development, and instructional design. Additionally, it can be worthwhile to collect feedback (e.g., through interviews) from faculty members who teach in an online professional doctorate and from students (e.g., through focus groups and surveys) to learn more about their perceptions of quality in these areas.
Faculty preparedness and support for online teaching and mentoring contribute to the quality of an online professional doctorate. The existing literature on the competencies needed for teaching online describes several roles for online faculty. Categorized as pedagogical, social, managerial, and technical (Berge, 2008), these roles include teacher, instructional designer, technology expert, administrator, site manager, graphic designer, support person, editor, librarian, and evaluation expert (Thach & Murphy, 1995). The main competencies that online faculty need fall into four categories: instruction, communication, technology, and management (Williams, 2003). Hicks’s (2014) review of research on and instruments for faculty professional development related to online teaching provides insight into current approaches to faculty readiness and support. However, since the surveys and instruments in the literature do not specifically address the competencies, needs, and preparedness of faculty in online doctoral programs, we cannot recommend existing instruments for the assessment of quality in this area. Institutions of higher education often develop their own instruments, which are then made available to others. For example, Penn State University has a self-assessment tool that faculty members can use to evaluate their competencies in this area (weblearning.psu.edu/FacultySelfAssessment). We propose that the interviews conducted to understand faculty experiences of teaching in an online professional doctorate before candidacy and after the dissertation process (described earlier in this chapter) also include questions about perceived needs and support for teaching online. Course evaluations by students and the use of learning analytics are two other sources of data about institutional support and faculty preparedness for teaching and mentoring in an online professional doctorate.
KEY CONSIDERATIONS
The quality assessment procedures and instruments that we have presented in this chapter will be most valuable if the following key considerations are taken into account.
Approaching quality from a holistic program perspective. In an online professional doctorate, it is important to focus on the quality assessment of the complete curriculum as a sequence of learning activities that connect theory, research, and practice; that reflect progressive learning and the development of scholarly thinking; and that encompass different types of online and face-to-face interactions among learners, faculty members, and peers across multiple learning spaces. Unlike in a master’s or bachelor’s program, assessing the quality of individual courses, although valuable, does not reflect the nature of an online doctorate, which includes teaching and learning during coursework as well as individual work and online mentoring during the dissertation. Quality assessment of an online professional doctorate requires formative and summative research about both the process and the product: research that assesses quality each year but also assesses the quality of the dissertation process, the dissertations produced, and the impact of the program on researching professionals and their professional environments.
In the UF EdD EdTech, we assess quality during the year in certain areas (e.g., information literacy), and at the end of each academic year in other areas (e.g., student satisfaction). We have been able to create a timeline for quality assessment based on the content of the curriculum, but others might find it challenging to identify specific points during an online professional doctorate when quality should be assessed. A key consideration from a holistic program perspective is the alignment of the points of assessment and the instruments with the different stages, phases, and content of the curriculum.
Defining the purpose of quality assessment. The purpose of quality assessment (e.g., accreditation, funding, quality improvement) influences how quality is assessed. In the UF EdD EdTech, the purpose of quality assessment is the continual improvement of the program. To this end, during the first offering, the different instruments we used were grounded in theory and literature and were often adapted or created to be specific to the program design. Rather than using pre-existing instruments unchanged, we relied on more open-ended methods of data collection to assess how the curriculum was working and which areas might be improved. During later offerings of the program, we have continued to focus on quality improvement but have shifted our main emphasis to quality maintenance, for which we adapted existing instruments or refined the instruments used earlier for data collection. Furthermore, the instruments used with each cohort (e.g., the questions asked during the interviews or focus groups) need to be changed occasionally to align them with the changes that have been made in the program design. At the same time, the specific composition and needs of each cohort of students guide us in varying the emphasis on content areas. For example, enculturation into the field of educational technology became an emphasis with a cohort that included many students without previous degrees in the discipline.
Although we did not anticipate doing so at the time of data collection, we have used the data to raise awareness about the program among students and faculty at our institution and to market the program to prospective professional students. Additionally, the data collected about the impact of our online doctorate help us ensure that the program is achieving its goals and impacting educational practice.
Aligning quality assessment with program characteristics and goals. It is important to ensure that quality assessment procedures align with the specific characteristics of the online professional doctorate and its goals. Existing instruments for assessing the quality of online programs, such as the OLC Quality Scorecard, cover the generic elements that must be present in quality online programs. However, online professional doctorates vary widely by discipline, country, and institution. Our model at the University of Florida consists of online coursework followed by individual research, but other online professional doctorates might include little or no coursework and require students to begin work on their dissertations from the outset. Yet others might emphasize collaborative projects or dissertations and may value certain types of knowledge and skills specific to their discipline. Online professional doctorates might also require students and faculty to use specific technologies (e.g., during the dissertation mentoring process) or may involve faculty from multiple departments or from widely dispersed geographic regions. In each case, existing assessment frameworks and instruments will need to be adapted to measure the program being studied.
Collaborating with others for quality assessment. For data collection during quality assessment, we highly recommend partnering with colleagues who have not been involved in the program design in order to benefit from a more objective and critical perspective. In the UF EdD EdTech, one of the authors joined the program a year after it had begun. Because she had not been involved in program development, she was able to conduct quality assessment objectively. However, as her program leadership responsibilities increased, collaborations with faculty or graduate students outside the program became essential. We have found it valuable to partner with on-campus doctoral students who have expertise in qualitative and quantitative research and who are not associated with the online professional doctorate. We sometimes have doctoral students in the UF EdD EdTech who specialize in the areas of adult learning or online teaching and learning, so we can partner with these students within the program for quality assessment. For instance, one group of students interviewed peers as part of a qualitative research project about challenges that researching professionals face with time management, and they then presented their results at a leading conference. Although program faculty members were not involved, the results of this project provided key information that informed program redesign. As another example of student research, two students analyzed Facebook interactions among students in the cohort and then shared the anonymized results with the program leader and subsequently published an article about their project.
CONCLUSION
Quality assurance and assessment in any academic program have become increasingly important, not only to demonstrate high academic standards but also for administrative and accreditation procedures such as maintaining rankings (in regions where these exist) and securing or sustaining funding. As the need for researching professionals grows and online professional doctorates increase in number, quality assurance and management are essential to ensure and demonstrate the robustness of online doctoral education. Strategies and instruments used in such endeavours must be commensurate with the complexity of educational experiences in an online professional doctorate (e.g., the inclusion of non-course-specific experiences and interactions, and of pre- and post-candidacy activities) and should align with a program-specific and program-integrated plan for quality. Existing frameworks and instruments often have to be adapted for program-level application to address the goals of a doctorate that includes mentoring for both independent research and dissertation writing. Furthermore, institutional-level support for online educational endeavours, as well as for faculty and students, has to be ensured and evaluated.