Monday, June 30, 2014

Health spending

Health spending starts to rise but remains weak in Europe, says OECD

30/06/14 – Health spending has started to rise again after stagnating or even falling in many OECD countries during the crisis. But the pace of growth remains well below pre-crisis rates, especially in Europe, according to OECD Health Statistics 2014.

In Europe, health spending continued to fall in 2012 in Greece, Italy, Portugal and Spain, as well as in the Czech Republic and Hungary. In Greece, health spending in real terms was 25% lower in 2012 than in 2009, primarily driven by cuts in public spending.
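As a back-of-the-envelope check (my arithmetic, not a figure from the report): a 25% cumulative real-terms fall over the three years from 2009 to 2012 corresponds to an average decline of roughly 9% per year.

```python
# Convert a cumulative real-terms change over several years into the
# implied compound annual rate. The -25% figure is the reported
# cumulative change in Greek health spending, 2009-2012.
cumulative_change = -0.25
years = 3

annual_rate = (1 + cumulative_change) ** (1 / years) - 1
print(f"Implied average annual change: {annual_rate:.1%}")  # about -9.1%
```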

By contrast, outside Europe, Chile and Mexico saw strong growth in health spending in 2012, at 6.5% and 8.5% respectively, largely due to further efforts towards universal coverage and access to healthcare. Health spending in Korea has continued to grow at an annual rate of 6% since 2009, mainly driven by increases in private spending.

In the United States, health spending grew by 2.1% in 2012, above the OECD average but similar to growth rates in 2010 and 2011.

Overall health spending accounted for 9.3% of GDP on average across OECD countries in 2012, little changed from 9.2% in 2011, but up from 8.6% before the crisis.

Continued reductions in pharmaceutical spending
While spending on hospital and outpatient care grew in many countries in 2012, almost two-thirds of OECD countries have experienced real falls in pharmaceutical spending since 2009. Reductions have been driven by price cuts, often through negotiations with manufacturers, and by a growing generic market share. This share has increased due to patent expirations for a number of high-volume, high-cost brand-name drugs, and policies promoting the use of cheaper generic drugs. The generic share grew on average by 20% between 2008 and 2012 to reach 24% of total pharmaceutical expenditure. The increase was particularly steep in Spain (+100%), France (+60%), Denmark (+44%) and the United Kingdom (+28%).
[Chart: Annual growth in pharmaceutical spending, in real terms]
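As a quick consistency check on those two figures (an inference on my part, not stated in the release): if the generic share rose by 20% in relative terms to reach 24% of total pharmaceutical expenditure, the implied 2008 share is 20%.

```python
# Back out the 2008 generic market share from the 2012 share and the
# reported relative growth between 2008 and 2012.
share_2012 = 0.24       # generic share of pharmaceutical expenditure in 2012
relative_growth = 0.20  # 20% relative increase over 2008-2012

share_2008 = share_2012 / (1 + relative_growth)
print(f"Implied 2008 generic share: {share_2008:.0%}")  # 20%
```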

These are some of the recent trends shown in OECD Health Statistics 2014, the most comprehensive source of comparable statistics on health and health systems across the 34 OECD countries. Covering the period 1960 to 2013, this interactive database can be used for comparative analyses on health status, risk factors to health, health care resources and utilisation, and health expenditure and financing.

Country notes are available for all 34 OECD countries. The website also includes an Excel file with 50 key indicators.

OECD Health Statistics 2014 is available in OECD.Stat, the statistics portal for all OECD databases.
An embeddable data visualisation for this publication is available at:
Please use the ‘+share/embed’ button to customize this tool for your country and language and to generate the embed code for your website.

For further information about the content, please contact Francesca Colombo (tel. + 33 1 45 24 93 60), Gaétan Lafortune (tel. +33 1 45 24 92 67) or David Morgan (tel. +33 1 45 24 76 09) in the OECD Health Division.

Saturday, June 28, 2014

Effective learning

An Effective Learning Environment is a Shared Responsibility

By Maryellen Weimer, PhD

Teachers can tell students they'd like to have a positive climate for learning in the classroom, but teachers can't create that climate single-handedly, and trying to legislate the behaviors that do and don't contribute to learning rates, on a scale of one to ten, about a two on the least-effective end. Much better are activities that develop awareness and call for a commitment from everyone.

Whether it’s a student who is texting during class, an online student who makes minimal comments to the discussion board, or a teacher who marches nonstop through mountains of material, the learning environment is defined by a combination of individual behaviors, and everybody contributes to what that environment becomes.

It's a responsibility shared by teachers and students. But it's not an obligation most students seem willing to accept.

Here's an example. I got the idea from a straightforward descriptive inquiry that involved a student cohort and a faculty cohort. Students were asked to identify a set of faculty behaviors that they found irritating. Faculty were asked to name student behaviors that irritated them. I've recommended updating the query by asking students to identify faculty behaviors that make it difficult for them to learn and asking faculty to identify student behaviors that make their best teaching difficult to deliver. Now I'm thinking that I'd update the questions further by asking students for a second list: a list of those things other students do that make learning in the classroom or online difficult.

I think there's merit in having students make their lists in small groups, but if you don't want to take class time with this activity, you could collect the responses online. What you want are two lists of the most common behaviors, say the top five or six things that teachers do and students do that make it difficult to learn. Along with these two lists, you want to construct your list: the one that names the five or six things that students do that make it tough to teach effectively. You could post the lists, but I think their impact is enhanced if you discuss the results with your students. An open, free-flowing discussion reinforces the importance of the issues raised and will likely be an eye-opening exercise for some.

The beauty of the strategy is how it makes clear that what the class ends up being is the result of actions taken by the teacher and the students. What individuals do matters to the class as a whole, and the behaviors on the list are things students and the teacher can avoid doing. The teacher can take the lead, promising that she’ll try to avoid those behaviors that make it hard for students to learn. Would students work to avoid those behaviors that make the teaching and the learning of others more difficult? Can we be in this together?

Of course, it's not a surefire, works-every-time, creates-instructional-bliss kind of solution. But it does develop awareness and establish a set of benchmarks that can be used to confront the behaviors, should they emerge, and to solicit and provide feedback at various intervals in the course.

But I think we need to do more. Students don’t understand that those places in which learning occurs have special features. Just like cultures in the lab need distinct conditions to grow, minds require unique environments to acquire information, arrive at ideas and insights, and ultimately thrive. These environments are places where learners feel safe, where they know each other, where they don’t just occupy the space, but share in it with energizing ideas, information, and perspectives. They are places filled with the sounds of minds at work.

I often think of Larry Lesser's piece that appeared in the November 2010 issue of The Teaching Professor newsletter. He came across a poem in a Jewish prayer book that expressed noble intentions for a worship space. It caused him to write a set of noble intentions for his classroom spaces, starting with what he intended to do and be, followed by his aspirations for students. His intentions transform classrooms, lifting them from ordinary places to spaces of power, filled with potential. Sharing those intentions changed how his students thought about the classroom.

References:

Appleby, D. C. (1990). Faculty and student perceptions of irritating behaviors in the college classroom. Journal of Staff, Program, & Organizational Development, 8(1), 41-46.

Lesser, L. M. (2010). Opening intentions. The Teaching Professor, 24(9), 4.

Wednesday, June 25, 2014

Medical curriculum guidelines

New medical curriculum guidelines take effect

Mariana Tokarnia

The new national curriculum guidelines for medical degree programs take effect today (the 23rd), with the publication of Resolução 3/2014 in the Diário Oficial da União. Medical schools will have until December 2018 to implement the changes; however, classes opening as of this Monday will have one year to implement the new curriculum.

Among the main changes is a mandatory internship in the Unified Health System (Sistema Único de Saúde, SUS), covering primary care and urgent and emergency services. Under the resolution, the internship must last at least two years, with 30% of its course load completed within the SUS.

In addition, students will be assessed by the government every two years. The assessment will be mandatory, and the result will count as part of the ranking process for medical residency program exams. The test will be prepared by the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (Inep), which is responsible for assessments such as the Exame Nacional do Ensino Médio (Enem). Inep has two years to begin administering the assessment.

The curriculum guidelines for medical programs in force until now dated from 2001, and their reformulation had been planned since the launch of the Programa Mais Médicos. Under the new resolution, the undergraduate medical program remains six years long; an extension to eight years had been considered during discussion of the program.

The expectation is that 11,447 places in medical programs will be opened by 2017 (3,615 at federal universities and 7,832 at private institutions), and that 12,372 new residency places will be offered over the same period.


Why is no value placed on teaching experience in UK universities?

It's time universities recognised the value of teaching experience instead of hiring based only on number of publications
Anonymous academic

After having slogged for three or four years (or seven if, like me, you were part-time and juggling childcare) with sleepless nights and too much caffeine, you stumble towards PhD completion and apply for a temporary teaching contract. Perhaps you think this is a natural first step to a permanent post in academia, the proverbial "foot in the door"? Well, you'd be wrong.

As I, and many others like me, are finding to our cost, no value is placed on teaching experience or ability in terms of either hiring or promotion. In fact, this teaching experience is without value if it's not accompanied by those all-important publications – the one and only measure for employment in higher education today, most notably among "old" universities.

As a recent article on these pages highlights, more than a third of the UK's total academic workforce is on temporary contracts. This rises to more than half when zero-hours teaching staff are included.

In the past three years I have applied for well over 30 posts, and have only been successful on two short-term teaching contracts, for which there was never any hope of permanence or even renewal.

The reason is simple – I have been told by people on hiring panels that how applications are sifted is simply by skipping straight to the publications list. And if the candidate does not have enough publications, it gets tossed aside, regardless of the other experience they might have on their CV. So having teaching experience doesn't hurt, but neither does it help.

The picture isn't any better across the Atlantic. Margaret Vojtko, an adjunct professor (the term often applied to contractual or non-tenured academic staff in the US and Canada), died in autumn 2013 penniless and nearly homeless. She had put in 25 years of contract teaching at Duquesne University, but received no severance or retirement benefits.

This, it would seem, is not exceptional – over three quarters of American university faculty are now adjunct professors, with prospects for full-time permanent posts rapidly diminishing. In Canada, it is estimated that around half of all university teaching is done by contractual academics.

The first concern here relates to teaching quality. What has come to be known as the corporatisation of higher education certainly disincentivises investment in teaching quality, and my experience echoes this.

I have met some wonderful students with whom I might have nurtured a longer-term relationship and whose feedback would have been valuable in enhancing the teaching on the degree – but I have no incentive to invest in the department in this way.
Universities are increasingly using short-term contracts – ranging between eight and 11 months – to cover just the teaching period. This has the added benefit to the university that there is no contractual obligation to re-employ, making teaching staff the most vulnerable of all. I know of one case where this has happened consecutively over a period of 15 years. This sort of temporary hiring represents significant short- to medium-term cost savings.

As higher education is under increasing funding pressure, moving academics on to temporary contracts makes good "business" sense, but also suggests they are cheap and disposable.

Part of the reason for this imbalance is that the standing of a university is increasingly based on its ranking in research assessments like the research excellence framework (REF), which measures "impact" only through research income and publishing profile. Surely, with the introduction of £9,000 student fees, effective, committed teaching also needs to be understood as a form of income generation?

Although teaching-contract roles increasingly specify that candidates should be "research active", this activity is not really factored into the workload.

A full-time, permanent social science lecturer does not, on average, have more than four to six contact hours per semester, and has research time built into their contract.

In my first contract post, by contrast, the teaching was rearranged so that I taught an extra first-year course with 60 students in addition to what the member of staff I was covering would have taught. In terms of contact time – lectures and seminars – this meant anywhere from eight to 15 hours a week, which did not include preparation time, dissertation supervision, office hours, marking or invigilation.

Given limited work opportunities, many pursue these jobs in the hope of being kept on a more permanent basis. But in higher education, unlike in other industries or sectors, teaching has no value and could not be considered a "way in" to a more permanent post.

Permanent posts are now the reserve of candidates with long lists of publications or funding, whether or not they are good teachers. The vast majority of graduates are likely to drift from one dead-end teaching contract to the next, with no prospect of permanent employment and no other avenue of support while they try to get publications out.

It is a vicious circle, and a completely unfair one at that. Surely it is time universities recognised teaching as an important professional skill with its own hiring and promotion pathway.

This week's anonymous academic is a lecturer at a Russell Group university.

Tuesday, June 24, 2014

Virtual patients

Virtual Patients in Primary Care: Developing a Reusable Model That Fosters Reflective Practice and Clinical Reasoning

Helena Salminen, MD, PhD; Nabil Zary, MD, PhD; Karin Björklund, OT; Eva Toth-Pal, MD, PhD; Charlotte Leanderson, MD, PhD
Karolinska Institutet, Stockholm, Sweden

Background: Primary care is an integral part of the medical curriculum at Karolinska Institutet, Sweden. It is present at every stage of the students’ education. Virtual patients (VPs) may support learning processes and be a valuable complement in teaching communication skills, patient-centeredness, clinical reasoning, and reflective thinking. Current literature on virtual patients lacks reports on how to design and use virtual patients with a primary care perspective.
Objective: The objective of this study was to create a model for a virtual patient in primary care that facilitates medical students’ reflective practice and clinical reasoning. The main research question was how to design a virtual patient model with embedded process skills suitable for primary care education.
Methods: The VP model was developed using the Open Tufts University Sciences Knowledgebase (OpenTUSK) virtual patient system as a prototyping tool. Both the VP model and the case created using the developed model were validated by a group of 10 experienced primary care physicians and then further improved by a work group of faculty involved in the medical program. The students’ opinions on the VP were investigated through focus group interviews with 14 students and the results analyzed using content analysis.
Results: The VP primary care model was based on a patient-centered model of consultation modified according to the Calgary-Cambridge Guides, and the learning outcomes of the study program in medicine were taken into account. The VP primary care model is based on Kolb's learning theories and consists of several learning cycles. Each learning cycle includes a didactic inventory and then provides the student with a concrete experience (video, pictures, and other material) and preformulated feedback. The students' learning process was visualized by requiring the students to expose their clinical reasoning and reflections-in-action in every learning cycle. Content analysis of the focus group interviews showed good acceptance of the model by students. The VP was regarded as an intermediate learning activity and a complement to both the theoretical and the clinical parts of the education, filling in gaps in clinical knowledge. The content of the VP case was regarded as authentic, and the students appreciated the immediate feedback. The students found the structure of the model interactive and easy to follow. The students also reported that the VP case supported their self-directed learning and reflective ability.
Conclusions: We have built a new VP model for primary care with embedded communication training and iterated learning cycles that in pilot testing showed good acceptance by students, supporting their self-directed learning and reflective thinking.

(J Med Internet Res 2014;16(1):e3)

Research by students

Transcending Disciplinary Boundaries: Conversations about Student Research Projects

By Pete Burkholder, PhD

One of the most enjoyable aspects of running a faculty development program on teaching is seeing first-hand how much our various disciplines intersect when it comes to teaching and learning. Whereas it can be hard, if not impossible, to speak about disciplinary research with colleagues outside our fields, the common teaching problems we face allow for readily understandable dialog, no matter how far apart the discussants’ fields of expertise.

Two recent presentations by faculty in my program made this abundantly clear. Both concerned authentic research projects required of students in science fields, but the ostensible similarities ended there. The first entailed having graduate-level pharmacy students design a hypothesis-driven research project, something which only a minority of pharmacy programs require. The second took place within the context of a junior-level genetics laboratory for biology and biochemistry majors, where a multi-week experiment had students performing genetic sequencing on microorganisms.

I won’t pretend to understand the technical details behind these projects, but as a historian who’s published on undergraduates’ first experiences with archival research, I was intrigued by the teaching and learning implications. The discussions following these presentations unveiled commonalities that one might not expect between such disparate fields, particularly the problems and opportunities that transcend our disciplinary boundaries. (Publications on these three projects by Burkholder, Myers, and Vaidean et al. are cited below.)

First, all of us were far more concerned with the process of the students’ research than we were with the results. In this sense, we differed markedly from our students, who at least initially remained locked in a dualistic, “correct/incorrect answer” mindset. Genuine research is a messy process: experiments go awry, evidence doesn’t fit or is unintelligible, and dead ends are a constant hazard. Our concerns as educators were not whether students ultimately produced some expected result because such a result often didn’t exist. Rather, we were all primarily interested in how students grappled with challenges as they arose – especially whether they clung to preconceived but potentially unproductive notions of project success, or whether they embraced a new mindset allowing them to overcome the inevitable hurdles in their path. The notion of Ken Bain’s “expectation failure” was applicable here, where students could not continue in their projects without first acknowledging that extant modes of understanding stood in their way.

Second, we all saw value in introducing students to what academics actually do. How do we know what we say we know? What are the limitations of what we can know? Answering such epistemological questions doesn’t come so readily in the context of the usual sanitized research we often ask of our classes. And although the students ultimately saw real value in these projects, many concluded they had no desire to pursue research as a career path. Not that this is a bad thing: on the contrary, eliminating a line of work from consideration gives sharper focus to what genuinely interests students for their professional futures.

Finally, all of us agreed that getting our students involved in authentic research forced us to step back, to carefully examine, and to retool our classes for the challenges participants would face in the laboratory or archives. We needed to systematically deconstruct the steps that would go into such research, and to anticipate and prepare for the problems that would arise. We thus had to detach from our own levels of expertise and try to remember what a novice would know and feel as she or he entered the unfamiliar landscape of research. As Susan Ambrose et al. recently pointed out, "unpacking" and decomposing complex tasks can be especially challenging for experts, who perform research steps automatically and may be oblivious to the difficulties faced by their students.

Perhaps inevitably, we overestimated the preliminary knowledge and skills of our students, requiring us both to be patient as the students developed these faculties (my biology colleague estimated her students take four times longer to do the necessary tasks than it takes her), and to help fill in the gaps as they arose. In my own case, having run archival projects for years now, I’ve gotten better at preparing my students for the job and can more readily empathize with the problems they run into. Yet, just as there is ambiguity and messiness in the students’ projects, there are myriad challenges, some of them unforeseeable, that arise from requiring such work from novices. Just as process trumped results in our students’ research, we all concluded that it’s the process of constantly reassessing our roles and responsibilities that holds the most value to us as educators.

In pushing our students into authentic inquiry and discussing the results, none of us anticipated finding teaching and learning commonalities between our disciplines. But as we tell our own students: when you enter the unscripted realm of research, expect the unexpected.

Susan Ambrose et al., How Learning Works: Seven Research-Based Principles for Smart Teaching (San Francisco: Jossey-Bass, 2010).

Ken Bain, What the Best College Teachers Do (Cambridge: Harvard University Press, 2004).

Peter Burkholder, “Getting Medieval on American History Research: A Method to Help Students Think Historically,” The History Teacher 43/4 (2010), 545-562 (link).

Edith Myers, “Laboratory Exercise: Mapping a Mutation in Caenorhabditis elegans Using a Polymerase Chain Reaction-Based Approach,” Biochemistry and Molecular Biology Education 42/3 (2014), 246-256 (link).

Georgeta Vaidean et al., “Student Scientific Inquiry in the Core Doctor of Pharmacy Curriculum: Critical Issues in Designing and Implementing a Student Research Program,” Innovations in Pharmacy 4/4, Article 131 (2013), 16 pp. (link).

Dr. Pete Burkholder is an associate professor of history at Fairleigh Dickinson University, where he is also the founding chair of the faculty development committee on teaching.

Thursday, June 19, 2014

Incentives for Surveys

Is it Ethical to Give Incentives for Surveys?

You've probably seen a lot of those ads for "earning money taking surveys," and that's got you wondering whether giving incentives is ethical. In a previous article, we asked whether you should give incentives for surveys, and the research came back saying that incentives absolutely work. But some of our readers also wonder whether giving incentives is ethical, so I thought we should address the issue here.
When you consider the very low amounts often paid for responding to surveys, you realize that no one is going to get rich completing them. At the same time, it's not uncommon to offer larger incentives to very targeted, specific respondents, such as doctors or lawyers, who are difficult to vet as respondents and who otherwise do not take the time to respond to surveys.

So where is the balance, and what is the right incentive amount: one that increases the number (and quality) of respondents without skewing the data?
In a recent article, Austin Research makes the distinction between an "incentive" as the reason for completing the survey and a "reward" as a thank-you gesture for finishing it. I really like that, because it distinguishes the extrinsic financial value of the incentive from the intrinsic acknowledgement of the time taken to complete a survey. There is a difference, at least in the mind of the respondent.
You see, the actual value of the incentive or reward doesn't significantly increase the number or quality of responses; it's what is offered, and the way in which it's offered, that makes a difference in the mind of the respondent, and hence in your results.

3 Non-Financial Incentives that Respondents LOVE

Here’s the good news.  You don’t need to offer financial incentives to your respondents.
  1. Some respondents (especially B2B or industrial respondents) may prefer seeing the results of your research to a financial reward.
  2. Another option is to offer respondents access to special events or training.
  3. You can also offer respondents access to downloadable content, books or reports that they can use to run their businesses better.
The bottom line is to look at the incentive question from the perspective of the respondent and ask yourself: what will make them feel special and valued? If you're not sure, why not just ask them? Reach out by email or phone and simply tell them that you're doing some research and were wondering what would really be of value to them as a thank-you gift.

Sunday, June 15, 2014


Learning Objectives in MOOCs

We are learning more and more about who enrolls in Massive Open Online Courses (MOOCs) and how those students behave.  For example, Harvard and MIT recently released de-identified data from their first 16 MOOCs that ran in 2012-2013 (read more about the Harvard and MIT data sets here and access the actual data here).  The data set includes several variables relating to student activities – for example, whether students visited the course website, watched videos, or completed exams.  These types of measures can tell us a lot about what students do, but it is not clear how much they learned as a result of those actions.
We were interested to find out how much students gained in specific learning objectives as a result of participating in a MOOC.  To do this, we asked students to rate their learning gains in five areas that roughly correspond to Bloom’s taxonomy of learning outcomes.  Bloom’s taxonomy is a classification of learning outcomes that includes both lower-level outcomes (remembering, understanding) and a progression towards higher-level outcomes (evaluating, creating).

We sent a post-course survey to students in Dorian Canelas's Introduction to Chemistry course asking them to indicate the extent to which the course contributed to their progress on five different learning objectives.  The survey was completed by 382 students, and it should be noted that this does not represent a random sample of students in the course.  Rather, the findings generated from this survey are indicative of what some of the more-engaged students experienced.
Of the students who completed the survey, 62% earned a Statement of Accomplishment.  However, even those who did not earn a Statement reported having a very positive experience in the course.  As shown below, when asked to rate their overall experience with the course, the overwhelming majority of all students rated it highly.
The graph below shows students' self-reported progress on the five learning objectives we asked about.  The percentages reported are the share of students who said that the course contributed "highly" or "very highly" to their progress (the other options were "not at all", "a little", and "moderately").
As is typical in a traditional class, most students made significant progress on the lower-level outcomes of gaining knowledge and understanding basic concepts.  However, over half the students reported making progress on the higher-level outcomes of applying knowledge to other situations and synthesizing information.  Finally, 42% of students made progress learning to conduct their own inquiry, an objective at the top of the taxonomy.  This group is not limited to those who completed the course and earned a Statement: of those who did not earn a Statement, 33% said that they made significant progress learning to conduct inquiry.
These numbers highlight the need to think carefully about how we define success in a MOOC.  It is increasingly clear that students who do not earn certificates at the end and who do not meet the traditional metrics of completion are still having meaningful engagements with the course material and accomplishing learning gains.
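The tabulation described above, the share of respondents who selected "highly" or "very highly" for each objective, can be sketched as follows. The response data here is invented for illustration; the actual survey responses are not included in this post.

```python
from collections import Counter

# Hypothetical Likert-scale responses for one learning objective, using
# the scale described above: not at all / a little / moderately /
# highly / very highly.
responses = ["highly", "moderately", "very highly", "a little",
             "highly", "not at all", "very highly", "moderately"]

def share_high(responses):
    """Share of respondents who answered 'highly' or 'very highly'."""
    counts = Counter(responses)
    return (counts["highly"] + counts["very highly"]) / len(responses)

print(f"{share_high(responses):.0%}")  # 4 of 8 responses -> 50%
```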