
Tuesday, June 7, 2016

Assessment



Eliminating Tests Through Continual Assessment

by Leslie Tyler

We’re at an inflection point with our approach to testing and measurement …

“Common Core was such a good idea,” remarked a middle school administrator I recently spoke with. “But then the testing ruined it.” My colleagues and I at Edulastic hear this all the time, as we continue to provide teachers, administrators, and school districts with a free, easy-to-use online assessment platform for K-12 that lets teachers track students’ progress toward the Common Core State Standards and create and share fully customized assignments.

Educators have been working on the transition to the new Common Core State Standards over the past 4-5 years. But last year’s final implementation step – administering the standardized tests meant to ascertain whether students met the more rigorous standards – has caused enough controversy to undo that work, overturning the standards themselves in some states. To date, at least 10 states have abandoned Common Core or have announced intentions to do so. President Obama concurred with the test critics, saying,

“Learning is about so much more than just filling in the right bubble. So we’re going to… make sure that we’re not obsessing about testing.’’

So what went wrong? Besides the wholesale change in the test content and delivery, the primary mistake was placing such a large bet on the outcomes. Results could affect federal funding. Teachers and administrators could be fired and “failing” schools taken over or closed. With these types of penalties, testing changed completely from a vital part of teaching and learning into a ruler to rap knuckles.

The Upside of Tests

Good teachers have been giving tests for centuries to understand what students know and what they still need to learn. Such so-called “formative” tests vary widely in method and definition – from students’ reflecting on their work to a quiz on last night’s reading – but they nevertheless provide essential information to teachers and students about what to cover next.  In fact, recent research shows that formative assessment actually helps students retain what they learn.

While it’s a bit Pollyannaish to propose replacing standardized tests with formative ones, we could eliminate the most negative effects by doing more formative assessments.

Following are some of the biggest testing pain points and ways to alleviate them through low-stakes, continual assessment:

1. Too much time away from teaching
In a survey Edulastic conducted last summer, we found that educators’ top concern with the new tests was the time required of students: 70% were somewhat or very concerned about it. Unlike formative assessments, which provide immediate data on understanding so that teachers can adjust instruction, standardized tests do not return results until it’s too late to do anything about them. A recent study on testing released by the Council of the Great City Schools found that 39% of school districts had to wait two to four months for test results, which often did not arrive until after school was out for the year. This is where new products such as Edulastic come in: by grading automatically and producing reports, they save teachers time and instantly show where each student is secure and where they struggle.
2. Increased anxiety for students and teachers
Having just one chance to show what you know, with stiff penalties for failure, increases anxiety for teachers and students. In contrast, formative techniques like pre-tests and post-tests help students focus on and practice the most important concepts. Continual assessment reduces anxiety because it’s designed to reveal what a student has learned and has yet to learn, as opposed to whether the student has succeeded or failed.
3. Lack of reliable data on mastery or progress
Perhaps the most discouraging thing about our current standardized testing scheme is the scarcity of data it produces on student learning. Continual formative assessment produces thousands of time-series data points, allowing educators to say with confidence that a student has mastered a standard or skill. To get this level of confidence from a single, comprehensive test, students would need to answer dozens of questions for each standard, requiring hours of testing (see pain point #1).
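
To make the data argument concrete, here is a minimal Python sketch of how responses from many small, standards-tagged formative assessments could be rolled up into a per-standard mastery estimate. The function name, the 80% success threshold, and the minimum-evidence rule are illustrative assumptions, not any particular platform's method.

from collections import defaultdict

def mastery_by_standard(responses, threshold=0.8, min_items=10):
    # responses: (standard, correct) pairs collected over time,
    # e.g. ("CCSS.MATH.6.RP.A.1", True); threshold and min_items are
    # illustrative assumptions, not a real scoring rule.
    totals = defaultdict(lambda: [0, 0])  # standard -> [correct, attempted]
    for standard, correct in responses:
        totals[standard][1] += 1
        if correct:
            totals[standard][0] += 1
    report = {}
    for standard, (right, attempted) in totals.items():
        rate = right / attempted
        # Claim mastery only when there is enough evidence and a high success rate.
        report[standard] = {
            "attempted": attempted,
            "rate": round(rate, 2),
            "mastered": attempted >= min_items and rate >= threshold,
        }
    return report

The specific rule matters less than the principle: dozens of low-stakes data points per standard, gathered over weeks, support a far more defensible mastery claim than a handful of items on a single test.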

Clearing Roadblocks to Change

Historically, standardized tests have aimed to make student performance (and, by proxy, teacher competency) easy to compare. Unfortunately, they are simply inadequate for this task. But how might we answer vital questions like, “How are our schools doing?” and “What do we need to adjust?”

To answer these questions at all levels – from individual students to whole states – we need more formative assessment practice and better data collection systems. Many teachers and schools already make formative and common assessments part of their curriculum. Grade-level teachers review results together to figure out what’s working and what needs to be revised or redone. We need more support for this type of professional development, and programs like Edulastic provide it through webinars, DIY training materials, and in-person professional development sessions.

Second, we need better, more standardized data collection systems. Providing teachers with banks of high-quality assessment items to include in their continual assessment mix will yield comparative data on student performance while promoting learning. Aligning teacher-created formative assessments with standards allows standardized data collection – rather than standardized tests – to exponentially expand the number of data points available on student proficiency.

We’re at an inflection point with our approach to testing and measurement. Educators now have access to tools that give them better research, technology, and data to create a new, more efficient system of comprehensive assessment. If we can’t eliminate standardized tests, we can at least reduce their downside, and spend the time and money we save on assessment practices that promote learning and get us closer to answering “How are we doing?”


Authentic Assessment, Deeper Learning: ePortfolios in Higher Education

Sunday, September 29, 2013

Distance Education



Using Classroom Assessment Techniques: A Proactive Approach for Online Learning




There are two main forms of assessment often used within the online classroom. Both formative and summative assessments evaluate student learning and help instructors guide instructional planning and delivery. While the purpose of a summative assessment is to check for mastery following instruction, formative assessment focuses on informing teachers of ways to improve student learning during lesson delivery (Gaulden, 2010). Each type of assessment has a specific place and role within education, both traditional and online.

To increase efficiency and success, formative assessments such as Angelo and Cross’s (1993) Classroom Assessment Techniques (CATs) can be used to check for student understanding prior to the summative assessment in the online classroom. The following strategies have been found to be simple and effective for both the instructor and the student in online modalities.

1. Directed Paraphrasing (Angelo & Cross, 1993)
The ultimate goal for teachers is to provide students with lessons that allow for the highest level of mastery and application. Directed paraphrasing gives teachers a quick snapshot of what students have learned. It also sharpens summarization and paraphrasing skills by asking students to translate specialized information into language they understand (Angelo & Cross, 1993). This strategy could be used by:
  • Identifying the desired objective to be communicated to students (e.g. Students will evaluate the importance of professional dispositions ideal for the field of teaching.)
  • Requesting that students write, to a specific audience, a paraphrased summary of what they have learned (e.g. In three to five sentences, directed to your fellow teachers, paraphrase the professional dispositions that are ideal for the field of teaching.) This question may be posed before instruction to assess prior knowledge or during instruction to assess the presented material.
  • Following student responses, the instructor participates by posting additional discussion responses or comments that provide both individual and group feedback and address any areas of confusion or misunderstanding.

2. Student-Generated Test Questions (Angelo & Cross, 1993)
Teachers can assess what information is best remembered and most important to students by engaging them in developing their own test questions. This gives instructors insight into what information students deem useful, what questions they would consider fair, and how well they can answer their own questions. To use this strategy in the online classroom:
  • Identify the desired objective, assignment, or exam to be communicated to students (e.g. Students will evaluate contemporary issues in educational policy.)
  • Determine how many questions students will create. (Typically one to two questions will suffice.)
  • Prior to summative assessment (quiz, assignment, essay, or exam), ask students to develop questions to be posted within the discussion forum. (e.g. Following this week’s topic and discussion, create one to two questions regarding contemporary issues in educational policy. Please provide your answer to the question(s). A variation of this could ask that students provide answers to other students’ questions.)
  • Following the student-posed questions, the instructor provides both individual and group feedback through additional discussion responses or comments, helping students perform better on the test or other summative assessment.

3. Double-Entry Journal (Angelo & Cross, 1993)
Application is one of the essential elements of student comprehension. To promote application of specific objectives, instructors can introduce the double-entry journal within the discussion forum. In this strategy, students read, analyze, and respond to assigned text through a simple graphic organizer (Angelo & Cross, 1993). Using a T-chart, students reserve one side for excerpts of the text that stood out to them and the other side for their explanation, analysis, and possible application of each excerpt. This can be conducted in an online classroom by:
  • Selecting a short, vital reading or section of text that is particularly challenging for students.
  • Presenting students with a T-chart template to do the following:
    • Left column – students list and copy three to five meaningful excerpts from the specified text.
    • Right column – students explain why each portion of the text was selected in addition to any reactions to their choices.
  • Following student completion, use the charts to promote discussion within the forums by providing feedback and guidance to students regarding their selections, in addition to a whole-class summary.

The above practices are only a small sample of the possibilities for online formative assessment. If used properly, the student feedback collected through formative assessments such as CATs allows instructors to check for understanding, guide instruction, and support student mastery. An important reminder for online educators is to maximize the use of discussion forums: the fast-paced nature of online education leaves no time to waste, so adding CATs to discussion forums is a proactive approach to student learning and success.

References:
Angelo, T. A., & Cross, K. P. (1993). Classroom assessment techniques. San Francisco: Jossey-Bass.

Gaulden, S. (2010). Classroom assessment techniques. Essex County College. Retrieved from http://sloat.essex.edu/sloat/delete/contentforthewebsite/classroom_assessment_techniques.pdf

* Emily Bergquist and Rick Holbeck are currently working as ground and online instructors as well as managers of online full-time faculty at Grand Canyon University.

Wednesday, May 23, 2012

Evaluation

AEA365 | A Tip-a-Day by and for Evaluators

 

‘Grit’ as a Measure of Academic Success

Shelly Engelman & Tom McKlin

The primary objective of many programs that we evaluate is to empower a broad range of elementary, middle, and high school students to learn STEM content and reasoning skills. Many of our programs theorize that increasing exposure to and content knowledge in STEM will translate into more diverse students persisting through the education pipeline. Our evaluation questions often probe the affective (e.g., emotions, interests) and cognitive (e.g., intelligence, abilities) aspects of learning and achievement; however, the conative side (volition, initiative, perseverance) of academic success has been largely ignored in educational assessment. While interest and content knowledge do contribute to achieving goals, psychologists have recently found that grit – defined as perseverance and passion for long-term goals – is potentially the most important predictor of success. In fact, research indicates that the correlation between grit and achievement is twice as large as the correlation between IQ and achievement.
Lessons Learned: Studies investigating grit have found that “gritty” students:
  • Earn higher GPAs in college, even after controlling for SAT scores,
  • Obtain more education over their lifetimes, even after controlling for SES and IQ,
  • Outperform other contestants in the Scripps National Spelling Bee, and
  • Withstand the first grueling year as cadets at West Point.
Even among educators, research suggests that teachers who demonstrate grit are more effective at producing higher academic gains in students.
Rad Resource Articles:
Hot Tip: Grit may be assessed with the 8-item Grit Scale developed and validated by Duckworth and colleagues (2009).
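
As a rough illustration of how a short self-report scale like this is typically scored, here is a minimal Python sketch that averages eight 1-5 Likert responses after reverse-scoring half of the items. The reverse-scored positions shown here are assumptions for illustration; the actual item wording, scoring key, and norms belong to Duckworth and colleagues' published instrument.

def score_grit(responses, reverse_items=(1, 3, 5, 7)):
    # responses: eight integers, each 1-5 (Likert scale).
    # reverse_items: zero-based positions assumed reverse-keyed for this example only.
    if len(responses) != 8 or any(r not in range(1, 6) for r in responses):
        raise ValueError("Expected 8 responses on a 1-5 scale")
    adjusted = [6 - r if i in reverse_items else r for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)  # higher average = grittier, maximum 5.0

print(score_grit([4, 2, 5, 3, 4, 2, 5, 1]))  # -> 4.25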
Future Consideration:  The major takeaway from studies on grit is that conative skills like grit often have little to do with the traditional ways of measuring achievement (via timed content knowledge assessments) but explain a larger share of individual variation when it comes to achievement over a lifetime. As we design evaluation plans for programs hoping to improve achievement and transition students through higher education, we may consider measuring the degree to which these programs affect the volitional components of goal-oriented motivation. Recently, two schools have developed programs to foster grit in students. Read their stories below:


