Wednesday, May 14, 2014

Student Evaluation


Student Evaluations Aren’t Useless. They’re Just Poorly Used.

By Jonathan Malesic

If it’s early May, then it must be time to talk about what student evaluations of teaching are worth. In a recent essay in Slate, Rebecca Schuman claims that student evaluations are “useless” in their current form, because they encourage students to punish rigorous teachers with low scores and mean comments (and, all too often, sexist or racist ones). The article has gotten a lot of attention from academics I know, who have shared their own stories of uninformed and upsetting comments.

Schuman argues that in light of the unreliability of evaluations, it is unjust to base hiring, firing, and promotion on them. This is especially true for graduate students and part-time faculty members, for whom course evaluations are sometimes the sole documented indicator of job performance.

She is right about the injustice of relying so heavily on a faulty measure of teaching quality, but that in itself does not mean that student evaluations of teaching are useless. It just means we need to use them better. (And, in fact, Schuman does not say we should discard the evaluations altogether, though she does recommend making them more accountable by removing students’ anonymity.)

So how can we make evaluations work better to assess and improve our teaching? Here are four ways to start.

1. Don’t make them the only measure of teaching effectiveness.

Because student evaluations are imperfect, they must be supplemented by other measures that, together, can produce a more reliable picture of a teacher’s effectiveness. Ideally, we could look at what students can actually do at the end of a course (or, in educational lingo, use “valid, direct assessment”). Schuman points to how well her students could communicate in German by the end of her course. But even such an assessment does not tell the whole story, because not all of our courses’ goals are readily measurable after 15 weeks of class. We may want to wait to pass judgment on an instructor until we know how his or her students fare in subsequent courses, or how many get into medical school, or how many become valued resources in their communities. We could even take the Athenian lawgiver Solon’s advice to the extreme, and call no one a good teacher until all of her students are dead.

In the meantime, though, we do need to make decisions regarding faculty evaluation and development. Classroom visits can help us understand the teacher as a performer and facilitator. Assignments can tell us about rigor and creativity. Responses to students’ papers, exams, and lab reports can indicate the teacher’s empathy and ability to pinpoint how students can improve. Small Group Instructional Diagnosis can elicit constructive student feedback while filtering out the extreme student voices.

2. Know the capabilities and limitations of the specific evaluation instrument.

Where possible, employ forms that have been shown to be statistically valid measures of several independent aspects of the instructor and course. The Student Evaluation of Educational Quality (SEEQ) is one example. At King’s College, we moved to the SEEQ after an analysis of our previous instrument showed that it was measuring only one variable: whether the student liked the class or not. In addition to measuring students’ perception of the instructor’s enthusiasm, preparation, and other aspects of his or her teaching, SEEQ also asks students about the relative pace and workload of the course, revealing whether they find the course “hard,” and so giving more context to interpret the raw evaluation scores.
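The analysis described above, checking whether an evaluation form really measures more than one dimension, can be sketched with a simple principal-component check on the inter-item correlations. This is only an illustration of the idea, not the actual analysis run at King’s College; the `variance_explained` helper and the simulated ratings are invented for the example.

```python
import numpy as np

def variance_explained(item_responses):
    """Fraction of variance captured by each principal component
    of the inter-item correlation matrix.

    item_responses: (n_students, n_items) array of ratings.
    If the first component dominates, the form is effectively
    measuring a single variable (e.g., overall liking of the class).
    """
    r = np.corrcoef(np.asarray(item_responses, dtype=float), rowvar=False)
    eigvals = np.linalg.eigvalsh(r)[::-1]  # sort largest first
    return eigvals / eigvals.sum()

# Simulated data: one latent "liking" factor drives all six items,
# so the form only appears to measure six things.
rng = np.random.default_rng(0)
liking = rng.normal(size=(200, 1))
items = liking + 0.3 * rng.normal(size=(200, 6))
share = variance_explained(items)
# share[0] will be large, signaling a one-dimensional instrument.
```

A multidimensional instrument like the SEEQ should instead spread variance across several components, one per distinct aspect of teaching.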

3. Acknowledge that evaluation scores are correlated with students’ expected grades.

Or, perhaps more accurately, acknowledge that they are correlated with expected grades relative to students’ GPAs. Once you do that, you can measure the relationship between scores and grades and adjust the scores accordingly. Teachers whose scores are consistently above the adjusted mean are doing something that students appreciate, and teachers whose scores are consistently below the adjusted mean are doing something that students don’t appreciate. Find out what those things are by talking with them about their teaching, visiting their classrooms, and reading their assignments and comments on student work.
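As a rough illustration of the adjustment described above (a sketch only; real instruments, grade data, and institutional practice will differ), one could regress raw section-level scores on the gap between students’ expected grades and their GPAs, and recenter the residuals on the overall mean. The `adjusted_scores` helper and the numbers below are hypothetical.

```python
import numpy as np

def adjusted_scores(raw_scores, expected_grade_gap):
    """Remove the linear effect of grade expectations from scores.

    raw_scores: mean evaluation score per course section.
    expected_grade_gap: mean (expected grade minus student GPA) per
        section, a stand-in for whatever grade data is available.
    Returns scores recentered on the overall mean, so sections can be
    compared to an adjusted mean rather than the raw one.
    """
    x = np.asarray(expected_grade_gap, dtype=float)
    y = np.asarray(raw_scores, dtype=float)
    b, a = np.polyfit(x, y, 1)      # fit y = a + b*x by least squares
    residuals = y - (a + b * x)     # score unexplained by the grade gap
    return residuals + y.mean()

# Sections promising higher grades than GPAs predict tend to draw
# higher raw scores; the adjustment removes that lift.
raw = [4.6, 4.1, 3.8, 4.4, 3.5]
gap = [0.8, 0.2, -0.1, 0.6, -0.5]
adj = adjusted_scores(raw, gap)
```

Teachers consistently above or below the adjusted mean are then the ones worth talking with, visiting, and reading, as suggested above.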

And just to head off one avenue of critique, I’ll say that doing things that students appreciate is not the same as pandering to them. If students feel valued, if they feel comfortable in a class, if they feel supported, if they just like being in the class, then several obstacles to their learning are removed, and they have a better chance of success.

4. Don’t leave student evaluation of teaching for the end of the course.

Student evaluations of teaching are useless if the teacher finds out what students think about the course only after the course is over. If you want to know what students think about the course while there is still time to make adjustments, you can ask them to fill out the evaluation forms at midsemester. (This should be done anonymously, because while grades are still unsettled, students’ worries about retribution have more legitimacy.) The midterm can also be an opportunity to get students to evaluate their learning, so that they can make changes in the second half of the course. Evidence indicates that instructors who solicit and respond to such feedback end up with higher end-of-term evaluations.

Student evaluations of teaching do not tell us everything we’d like to know about ourselves as teachers. And they do permit students to turn nasty. But they do tell us something. By using them as one measure of teaching effectiveness, we do not capitulate to the student-as-consumer model of education. Rather, we put a measure of faith in our students’ sense (if imperfect) of what is good for them. We acknowledge that our courses are about their learning and that we care about how better to enable that.

* Jonathan Malesic is an associate professor of theology and director of the Center for Excellence in Learning and Teaching at King’s College, in Wilkes-Barre, Pa.

Tuesday, May 13, 2014

Technophobia


How Safe is Your Teaching Job From Technology?

Is Technology Really a Threat to Your Role as an Educator? This question arises from time to time, especially among the technophobes out there, so I thought it deserved a direct response! – KW


by Dillon Wallace


This question has been circulating ominously since the 1980s, when computers first made their way into schools: will technology ever replace teachers? Even after 30-plus years, the answer is still a resounding no. Teaching is an art, built on a strong foundation of passion and a complex body of knowledge. It’s not a simple, repetitive task with few variables involved. Educating is about customizing, planning and engaging students from all walks of life in an environment where each individual has the chance to succeed.


There isn’t a one-size-fits-all approach to teaching. Every student learns at a different pace and in his or her own unique way, be it through demonstrations, hands-on learning, small class sizes, standard lectures, solo reading, etc. Teaching is about tailoring, adjusting and emotionally engaging, something a machine can never replace, because teachers are not, and never will be, simple mechanisms handing out information to students.

Technology is not a threat to teachers’ jobs because teachers assume the roles of leaders, guides, initiators and mentors keeping students on path, encouraging them when they struggle and inspiring them to always reach beyond their grasp. Technology can spew out information, but teachers can roll up their sleeves and lend a hand at learning or an ear at listening in the ultimate plan for success.

In fact, today’s technology and tomorrow’s tech will more than likely only enhance teaching and student learning. With so much technology at our disposal, teachers have many more possibilities with which to reach their students. Technology and teachers will not only continue to coexist, but they will benefit one another.

Here are just a few areas in which teaching can benefit from and be empowered by technology.

  • Relevant engagement – Teachers can always implement a human connection by selecting relevant content tailored toward their curriculum, customize options for levels of difficulty, alternate ways to learn and more. They can read the situation and act accordingly based on what is and what isn’t effectively working. And now with tools like laptops, tablets and other smart devices, it’s even easier to connect anywhere and anytime with students.

  • Complex learning – A teacher and a chalkboard can’t always capture the imagination of a student. Technology can help improve students’ understanding of complex concepts through the use of in-depth animations, simulations and visualizations (videos, etc.).

  • World wide access – Technology can provide added benefits to the classroom when it comes to accessing research, connecting to people, organizing group projects, recording data and more.

  • Real world expertise – More than ever, technology has provided students with the same tools professionals use in their day-to-day careers, giving hands-on learning a whole new meaning. Whether you’re a writer, composer, producer, researcher, number cruncher, analyst, designer – whatever – there are tools at students’ disposal, virtually everywhere.

Technology will not replace teachers now or in the near future, as the two continue to improve opportunities for students to learn. In an increasingly digital world, it’s important to remember that the passion for teaching and the love of learning form a human connection that can only be shared between a teacher and his or her students.


Saturday, May 10, 2014

"Now you try!"




Teaching Encounter Provides an Up-Close Look at Learning Something New





I just taught a dear friend how to knit, and in doing so I revisited how very challenging it is to teach something you can do easily. 


Knitting, like so many of the skills we teach, including concrete skills like running a lathe and abstract ones like critical thinking, cannot be learned in theory. It is learned by doing. “Now you try,” I say after several slow, deliberate demonstrations of the motions. Oh my, such clumsy confusion. “Here, let me show you again.” I slow down even further and talk through the movements needed to make a stitch. Good gracious, I can hardly watch these tortured, truncated movements, so far from the peaceful, rhythmic flow of knitting. As the confusion continues, thoughts start going through my mind. How many times am I going to have to show her? It can’t possibly be this hard. And why am I feeling frustrated?







I worry about order and pacing. Knitting starts (and ends) with two basic stitches, knit and purl. She has those, sort of. Is it time to move on? What’s next? Common combinations of the two basic stitches? Or should I work on her technique? She’s got to stop propping the needles up on her belly. And we need to move on to something other than this scraggly “sampler” (as she calls it). The needle size and yarn weight are creating something less than lovely. It’s not the kind of knitting that inspires continued effort and she still has lots to learn. I search for something that she can do. It needs to offer challenge but also the strong possibility of success.


Then there are the mistakes. I have forgotten how many ways there are to do it wrong. I aspire to learner-centered teaching, which means she needs to take the lead in identifying and fixing her mistakes. Great in theory—hard in practice, I quickly discover. She fixes things slowly, laboriously, and that takes time away from knitting. She focuses with intense ferocity, which is not only tiring, but it seems to lead to even more errors. And some of the mistakes are serious. Whole rows must come out. I step in and make some of the corrections, wondering if that’s a good idea.


My mind whirls. What feedback do I need to provide? How many corrective messages? I think I’m offering too many and they’re worded so negatively. “No, not over the needle, the yarn goes between the two needles.” “No, that’s not a purl bump.” I should be more positive. I should be asking more questions, offering hints, and making helpful suggestions.


She’s trying so hard. Will she notice if I laud the effort and not the knitting? She is improving but more slowly than I expected. I look at her work and know it would be dishonest to call it good. “Look here, see these four stitches? Wow! They are so smooth. That’s exactly what you want. Good job.” She sighs. I sigh. This is hard work.


We have four days together with books to discuss, walks to take, and a friendship to celebrate. Despite working diligently on the knitting, we are running out of time. I’m afraid she isn’t ready to do this on her own. I buy her books and talk about online resources. She will need the help of a couple friends—people she tells me aren’t good teachers. I feel like I’ve failed.


My flight leaves early. I’m getting the coffee on and there on the counter sits her project—or is it her project? Oh my God, she has done 12 error-free rows! I look more closely. Yup, there are no mistakes. She’s beside me now. Joy dances from her face to mine. “I did it!” “Yes, you did and it looks beautiful!” We hug, celebrating the learning and the teaching.


It’s good to have these in-your-face teaching experiences, I decide. With so many students and too much content to teach, it’s easy to miss the struggles of individual learners and not notice how teacher actions aid or confound the process. It’s easy to be perplexed by a learner’s confusion and quickly draw conclusions about ability. Most important, it’s easy to forget that what now seems simple, straightforward, and perfectly obvious is usually not that way when it’s first encountered. Teaching takes a lot of patience; learning takes a lot of persistence.


Science Communication Platforms


Internet interactivity has changed the way science is communicated

Elton Alisson

New web 2.0 platforms – as the interactive use of the internet is known – such as blogs and social networks have transformed the way science is communicated and increased the diffusion of scientific content in many countries, including Brazil.




This was the assessment of experts on a panel about the use of social media in science communication during the 13th International Public Communication of Science and Technology (PCST) conference, held May 5–8 in Salvador, Bahia.

With the central theme “Science communication for social inclusion and political engagement,” the meeting took place in Latin America for the first time and brought together researchers from more than 50 countries to discuss science communication practices and strategies adopted in different parts of the globe.

“With the advent of new online media, traditional views of science communication are being redefined,” said Dominique Brossard, professor and chair of the Department of Life Sciences Communication at the University of Wisconsin-Madison in the United States.

“We have more and more science blogs in many countries, largely written not by scientists or science journalists but by ordinary people with specific interests in particular scientific topics who, in trying to understand science, have been producing content in a way that was not done ten years ago,” said Brossard, who leads the university’s Science, Media and the Public research lab (SCIMEP).

According to Brossard, beyond blogs, other social media such as Facebook and Twitter have had a strong impact on public engagement with science and technology.

More empirical data are needed, however, to assess the true scale of that impact, how the public relates to these new media, and how information is diffused through these new channels, the researcher said.

“Several studies have shown that social networks contribute to the diffusion of news on many topics, including science and technology, and that the public broadly favors the publication of news on social networks,” she noted.

“But research on online science communication still poses many challenges, and more studies are needed to test our assumptions, which differ from those we held about traditional media,” Brossard said.

According to the researcher, some recent studies – such as the Reuters Institute Digital News Report 2013, published last July by the Reuters Institute for the Study of Journalism at Oxford University in the United Kingdom – indicate that the public increasingly consumes news online. News about science and technology is no exception.

“People increasingly turn to online environments to find information about science and to follow scientific progress,” she said.

“And, in many countries, people increasingly look for scientific information through search engines such as Google, rather than following specific sources such as the websites of their country’s major newspapers,” Brossard noted.

One characteristic of news published in today’s online universe, according to the researcher, is that it is increasingly contextualized – that is, accompanied by comment sections and “tweeted,” “retweeted,” and reproduced on social networks.

According to Brossard, these news “traces” can serve as indicators, giving communication researchers empirical data for studies of online communication. “They can give us clues for analyzing the effects of science news in the online universe, for example,” she said.

One finding from a study by her group at SCIMEP, based on some of these “contextual cues,” as she calls them, is that the comments published on a news story can change how readers interpret it.

“We found that comments can change other readers’ perception and opinion of the results of scientific research reported in a story published on an online platform,” said Brossard.

To minimize this effect, some outlets, such as the American popular-science magazine Popular Science, decided to disable the reader comment sections of their online editions, the researcher noted. “That action gave us empirical evidence for our conclusions,” Brossard said.

Dialogue with the public

In the panelists’ assessment, despite social media’s contribution to increasing the diffusion of science-related content around the world, these platforms are still little used and little explored by science communicators.

Both science journalists and scientists are underrepresented in the digital universe, they noted.

“Scientists and science journalists need to adapt and be more present in these new media,” said Mohammed Yahia, editor of Nature Middle East – the British scientific journal’s site focused on science news from the Arab world.

“To do that, you have to be willing to listen to what the public wants to know and be open to comments about your work that are often terrible, but also to receive very good suggestions that can help improve the narrative of your stories about scientific discoveries,” he said.

In Yahia’s view, social media allow science communicators to get closer to the public and to engage them in the stories they tell.

One experiment in some European countries and in the United States, he said, involves science podcasts – audio files distributed over the internet – in which listeners are invited to answer a question about a scientific problem and their responses are incorporated into subsequent episodes.

“Science communicators need to try to involve their audience in producing their stories,” said Yahia. “When they do, the public also comes to feel ownership of the story being told and takes greater interest in researching a given scientific topic, adding and sharing information,” he said.

Sunday, May 4, 2014

Faculty Development Evaluation




Developing a Framework for a Customized Faculty Development Evaluation Plan


 

Faculty development programs exist, at least to some degree, to help faculty become better teachers, better scholars, and better members of the campus community. Schools invest in faculty development in different ways and at different levels. Yet increasing calls for accountability in higher education are demanding evidence of return on investment. In other words, colleges and universities that are spending time, money, or other resources on faculty development need to determine and show what is working—and improve or abandon what isn’t. Hence the need to evaluate faculty development efforts and to determine their impact.




The first step in creating a comprehensive evaluation plan is developing a framework. Center staff can build on this framework over time, creating additional evaluations, assessments, and reviews as well as enriching those already in place.

Determining where to start is one of the biggest challenges in developing any kind of plan. Fortunately, no one has to reinvent the wheel: there are numerous models to replicate for all kinds of planning. Yet for a faculty development evaluation plan, relying on a curricular model is often the most fruitful approach.

The curricular lens

Faculty developers come from a wide variety of disciplines, but all share experience in instructional development. Everyone can understand and relate to a curricular model and approach to instruction. In this instance, faculty development is just another form of instruction.

So to begin, consider an academic degree program. That academic degree program is usually one of several offered in an academic school or department. The academic degree program is broken down into individual courses. Each course is broken down into individual classes.

Use that same framework for faculty development. The faculty development center is analogous to the academic school or department. Specific center programs—brown bag series, grant programs, mentoring programs, etc.—are analogous to academic degree programs. Within those programs you have individual offerings, such as monthly topics in a brown bag series or social events in the mentoring program.

Programs and offerings vary from institution to institution, but common programs include the following: new faculty orientation; a first-year teacher series; instructional technology; faculty learning communities; consultations services; mentoring programs; grant programs; brown-bag programs; midcareer programs; TA programs; scholarship, teaching, and learning programs; book clubs; and intensives (a series that extends over several months with a variety of offerings connected to the development effort) focused on various initiatives such as active learning, scholarship, grading, critical thinking, or writing.

Important differences

While the curricular model is incredibly useful in creating a framework for a faculty development evaluation plan, there are two key differences between academic curricula and faculty development programs. First, the curricular model is based on the assumption that students matriculate through academic degree programs and graduate. These programs are terminal. Faculty development, on the other hand, is continuous.

The second important difference is that while students can choose an academic degree program within a school or department, there are some set requirements that they must meet. Faculty development is entirely optional. Faculty pick and choose what—if anything—they want to attend. In most cases nothing is mandatory, and that can affect evaluation strategies.

Levels of evaluation

Different programs or offerings require different measurements in terms of both content and degree. Imagine throwing a pebble into a pond. Sometimes the pebble is small. In the case of faculty development, that would be a workshop. The impact—like the workshop itself—will be small. Other times the pebble is large, more like a rock, such as a course redesign that involves much of the faculty and requires large investments of staff time and energy. In that case, the effort is intended to make a big difference and its impact should be large.

The idea is to determine what kind of impact is expected given the nature of the offering, and to match the measures to that expected impact in degree. In other words, measure only where you expect to see an impact. Measuring low-impact efforts at deeper levels will not produce meaningful information or results. Only high-impact efforts, such as intensive programs, will ripple to the outer levels.


Excerpted from How to Evaluate the Impact of Faculty Development Programs, a whitepaper based on a Magna Online Seminar of the same title presented by Dr. Sue Hines, who directs the faculty development program and teaches in the Doctor of Education in Leadership program at Saint Mary’s University of Minnesota.