Accreditation
Assessment
Methodology
The methodology for academic year 2017-18 (AY2017-18) differs from that of AY2016-17 so that we meet the requirements of the Council on Social Work Education (CSWE) and can assess student outcomes during a year of curricular transition in both the Bachelor of Social Work (BSW) and Master of Social Work (MSW) programs.
During AY2017-18, Salem State’s BSW and MSW programs collected assessment data to help determine whether students achieved benchmark scores in the various competency areas for social work practice. Our assessment is based on the CSWE’s 2015 Educational Policy and Accreditation Standards (EPAS). Each program designed and collected two forms of assessment data: one field-based measure and one academic-based measure. The School of Social Work’s Assessment Task Force (ATF), consisting of members from our BSW and MSW programs as well as our Department of Field Education, coordinates the data analysis and feedback process. Data collection took place throughout AY2017-18, and, in order to fully digest these data, faculty members began analyzing all data in spring 2018.
Description of Data Collection Process
To collect two measures for each of the School of Social Work’s programs, we took the following steps.
First, through the use of an automated data collection system, ALCEA Software, the school’s Department of Field Education collects data from field instructors about their students’ performance using the 2015 EPAS. Field instructors are given guidance by the Department of Field Education on how to use the scoring approach in evaluating their student interns. This allows for the collection of data in a live practice situation.
Second, we developed student-level rubrics linking competency dimensions to specific assignments in each program. These rubrics were distributed to faculty in specifically targeted academic courses, and the BSW and MSW Program Coordinators facilitated data collection from all graduating students. Individual student-level data were aggregated for each competency, and scores were sorted to determine the percentage of students who met the benchmark.
Determination of Benchmarks
Specific competency benchmarks were determined by the faculty members of the BSW and MSW programs and members of the Department of Field Education. These stakeholders established 3.00 as the benchmark score for each measure and set a goal of 80% of students meeting or exceeding that benchmark.
Data Analysis
Data are compiled and sorted in a spreadsheet to determine the percentage of students who met or exceeded the benchmark score.
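The spreadsheet calculation described above can be sketched in a few lines. The scores below are hypothetical examples, not actual assessment data; only the 3.00 benchmark and 80% goal come from the methodology itself.

```python
# Sketch of the benchmark-rate calculation described above.
# The score list is hypothetical, not actual student data.

BENCHMARK = 3.00  # benchmark score for each measure
GOAL = 0.80       # goal: 80% of students meet or exceed the benchmark


def benchmark_rate(scores):
    """Return the fraction of students meeting or exceeding the benchmark."""
    met = sum(1 for s in scores if s >= BENCHMARK)
    return met / len(scores)


# Example: one competency's scores for eight students
scores = [3.5, 2.8, 4.0, 3.0, 2.5, 3.7, 3.2, 3.9]
rate = benchmark_rate(scores)
print(f"{rate:.0%} met the benchmark; goal met: {rate >= GOAL}")
# → 75% met the benchmark; goal met: False
```

In this illustrative case, 6 of 8 students meet the 3.00 benchmark (75%), which falls short of the 80% goal.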
Assessment Methodology Prior to Academic Year 2017-18
During AY2016-17, both the BSW and MSW programs collected assessment data to help determine whether our students were achieving benchmark scores in the various competency areas for social work practice. Our approach was modeled after Stephen Holloway’s (2013) Some Suggestions on Educational Program Assessment and Continuous Improvement for the 2008 EPAS, although our rate calculation methods differ from the approach described there. The assessment process is conducted by the School of Social Work’s Assessment Task Force (ATF), consisting of members from our BSW and MSW programs.
Reports for academic years 2015-16 and 2016-17 detail the data collection and data consideration processes we engaged in between fall 2015 and spring 2017. Data collection took place in the 2016 and 2017 spring semesters for academic years 2015-16 and 2016-17, respectively. In order to fully digest these data, ongoing data consideration by all faculty members and staff took place in the fall semesters of 2016 and 2017.
Description of Data Collection Process
The School of Social Work’s ATF set out to develop an assessment approach that was embedded in the school’s work process in order to foster a culture of continuous quality improvement. Each year, the ATF engages in a two-part data collection effort designed to yield two measures per competency. This data collection process takes place in April of each year.
First, through the use of an automated data collection system, the school’s Department of Field Education collects data from field instructors about their students. Field instructors are given guidance on how to use the scoring approach in evaluating their student interns. For this portion of our assessment process, we used the 2015 EPAS because of our field education department’s commitment to use this version of EPAS along with sister schools in New England.
Second, through the use of an online survey, the BSW and MSW Program Coordinators facilitate data collection from all graduating students. For this portion of the assessment process, we used the 2008 EPAS because our existing data collection system from the previous year used that version, which we needed for comparability. Additionally, a set of survey questions focused on learning more about students’ experiences with the explicit and implicit curriculum. In assessing the competencies, students rated themselves on the practice behaviors at the time they completed the survey, at the end of their senior year with their field internships just finishing. Student survey data are usually collected by designating time for survey completion in BSW and MSW students’ field seminars.
Through the use of this dual approach, two measures are gathered for each competency, one of which is based on the demonstration of the competency in a real practice situation. Each competency is derived from an average of the related practice behavior scores, which are all weighted equally.
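The equally weighted averaging described above can be illustrated with a short sketch. The ratings below are hypothetical, not actual student data.

```python
# Sketch of the competency-score calculation: each competency score is the
# equally weighted average of its related practice-behavior ratings.
# The ratings below are hypothetical examples.


def competency_score(practice_behavior_ratings):
    """Average the practice-behavior ratings for one competency."""
    return sum(practice_behavior_ratings) / len(practice_behavior_ratings)


# Example: one student's ratings on a competency's three practice behaviors
ratings = [3.0, 4.0, 3.5]
print(competency_score(ratings))  # → 3.5
```

Because every practice behavior carries equal weight, a simple arithmetic mean suffices; no weighting scheme needs to be applied.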
Determination of Benchmarks
Benchmarks are determined for each competency by the faculty members of the BSW and MSW programs along with members of the Department of Field Education. These stakeholders established 3.00 as the benchmark for each measure, based on our previous, lower benchmark and the curricular changes made to address low scores in previous years. The BSW and MSW program faculty and field education staff determined that a goal of 80% was ideal. To determine whether a student’s performance meets the benchmark, field instructors rate students’ performance in the field, and students rate their own competencies in practice through self-assessment.
Data are analyzed by sorting in a spreadsheet to determine the percentage of students who met or exceeded the benchmark score (see link, below). We departed from Holloway’s (2013) approach because his suggested calculation method was mathematically incorrect.