Stage III: Common Standard 4 – Continuous Improvement
Common Standard 4 Elements | IIA Stage III Common Standards Submission Requirements | Artifacts to Submit |
---|---|---|
(4.1) The education unit and its programs regularly assess their effectiveness in relation to the course of study offered, fieldwork and clinical practice, and support services for candidates. Both the unit and its programs regularly and systematically collect, analyze, and use candidate and program completer data as well as data reflecting the effectiveness of unit operations to improve programs and their services. | Provide a graphic depiction and a description (not to exceed 300 words) of the multi-year unit assessment cycle which will be used for continuous improvement. Specify the unit assessment activities, when they will occur, and which position(s) will be responsible for collecting and analyzing data as well as determining program modifications. _________________________________ Briefly describe (not to exceed 200 words) and embed links to evidence for how the unit will regularly assess the effectiveness of the proposed program(s) in relation to the course of study offered, fieldwork and clinical practice, and support services for candidates. _________________________________ Briefly describe (not to exceed 200 words) how the unit and programs will regularly and systematically collect, analyze, and use candidate and program completer data as well as data reflecting the effectiveness of unit operations to improve programs and their services. | **Multi-Year Unit Assessment Cycle for Continuous Improvement**<br>The Mary Lou Fulton College (MLFC) implements a multi-tiered assessment cycle to ensure continuous improvement across all academic programs. Annually, each degree and graduate certificate program develops an assessment plan outlining program learning outcomes (PLOs), competencies, assessment methods, and success measures. Common Assessments, aligned to specific PLOs, are embedded in coursework and scored using standardized rubrics in Canvas. Course instructors evaluate student performance, and the MLFC data team aggregates results to identify trends (Grad Dashboards). The Continuous Improvement Topical Action Group (CI TAG), in collaboration with the University Office of Evaluation and Educational Effectiveness (UOEEE), leads the analysis of assessment data. The CI TAG works with the Office of Data Strategy and program directors to interpret findings and recommend curricular or instructional changes. These discussions occur during Division and program area meetings, and results are published on the MLFC MyEducation Hub for transparency and faculty access. Each semester, graduating students complete an End of Program Survey, providing additional feedback on program effectiveness. Every seven years, all academic programs undergo a comprehensive Academic Program Review (APR), coordinated by the University Program Review and Accreditation (UPRA) Office. This process includes a self-study, an external review, and an action plan, offering a deep evaluation of program quality and alignment with institutional goals. This continuous, multi-year cycle ensures that data-driven decisions support strategic improvements in teaching, learning, and student success.<br><br>**MLFC Program Effectiveness Assessment Overview**<br>MLFC will regularly assess program effectiveness through a comprehensive system of Common Assessments (CAs), annual assessment reports, and edTPA data. Each course includes 1–3 CAs aligned to Program Learning Outcomes (PLOs), scored using standardized rubrics in Canvas. These assessments are implemented uniformly across all sections and evaluated at specific Progression Indicator Levels. Aggregated CA data (disaggregated by program, campus, modality, and demographics) provides longitudinal insights into candidate mastery and supports equity-focused analysis.<br>3.1 Example of Curriculum Map & PLO Integration<br>3.3.1 Example of Aggregate Graduates Progression Indicators Dashboard<br>Annual assessment reports summarize data collection, results, and continuous improvement efforts, guiding curricular and instructional decisions. Fieldwork and clinical practice will be aligned to edTPA, a nationally normed, multiple-measures assessment that captures candidates’ readiness to teach diverse learners. edTPA data will inform iterative improvements to curriculum, clinical experiences, and candidate support services. All data are analyzed by the CI TAG, the Office of Data Strategy, and program directors, with results discussed in program meetings and published in the MLFC MyEducation Hub. [Insert MLFC MyEducation Hub PDF] This integrated system ensures evidence-based, continuous improvement across coursework, clinical practice, and candidate support. |
(4.2) The continuous improvement process includes multiple sources of data including: a. the extent to which candidates are prepared to enter professional practice; and b. feedback from key constituents such as employers and community partners about the quality of the preparation. | Provide an annotated list of data sources which will be included in the continuous improvement cycle (e.g., candidate surveys, employer surveys, exit interview data), including data on the extent to which candidates are prepared to enter professional practice and feedback from key constituents such as employers and community partners about the quality of the preparation.<br>Describe how these data sources will be included in the unit’s continuous improvement process.<br>Any other relevant data that will be gathered as part of the continuous improvement process must also be included.<br>Draft documentation is acceptable. | **Annotated List of Data Sources for Continuous Improvement**<br><br>**1. Student Perception Measures.** These sources provide insight into students’ experiences and perceptions of support, instruction, and program relevance.<br>• Purpose: Gauge early student experiences and identify support needs. Use: Inform onboarding and advising improvements.<br>• **Course and Instructor Evaluation.** Purpose: Collect feedback on course content, instruction quality, and the learning environment. Use: Guide course-level revisions and faculty development.<br>• **End of Program Survey.** Purpose: Assess overall satisfaction and perceived preparedness for professional practice. Use: Evaluate program coherence and capstone effectiveness.<br><br>**2. Student Success Measures.** These data sources track academic and professional outcomes, offering a performance-based view of program effectiveness. Upon approval from the CTC, we will add our CA programs to these student success measures.<br>• Purpose: Evaluate student learning outcomes across key competencies. Use: Identify curriculum strengths and gaps.<br>• Purpose: Measure readiness for licensure and alignment with state standards. Use: Trigger curriculum or support interventions when rates fall below targets.<br>• **ASU Graduate & Law Student Report Card.** Purpose: Capture feedback on program quality and employment alignment. Use: Assess career readiness and relevance of training.<br>• **ASU Undergraduate Alumni Survey.** Purpose: Understand long-term career success and retrospective program evaluation. Use: Inform strategic planning and alumni engagement.<br><br>**3. External Student Success Measures.** These sources will provide third-party validation of program quality and graduate preparedness.<br>• Purpose: Assess graduate performance and workplace readiness. Use: Align curriculum with employer expectations.<br>• Purpose: Ensure compliance with state standards and eligibility for institutional recommendation. Use: Maintain accreditation and program approval.<br>• Purpose: Benchmark program quality nationally. Use: Identify strengths and areas for improvement.<br>• **MLFC Partner Meetings.** Purpose: Evaluate both student and partnership experiences through at least one meeting each semester with MLFC partners, in order to continuously enhance curriculum and student support. Use: Identify strengths and areas for improvement.<br><br>**Integration into the Continuous Improvement Process**<br>MLFC employs a systematic and data-driven approach to continuous improvement through the following mechanisms:<br>• **CI TAG.** A faculty-led committee that reviews all data sources to identify trends, flag underperforming areas, and recommend targeted interventions.<br>• **PLO Monitoring.** Compares program outcomes against internal benchmarks and division-wide goals across four domains; 15–16 outcomes are tracked per program, outcomes not meeting expectations are flagged, and action plans are implemented in the following academic year.<br>• **Student and Alumni Feedback.** Used to refine curriculum, enhance support services, and improve instructional quality (End of Program Survey example). |