IBM's Watson is better at diagnosing cancer than human doctors
IBM's Watson --
the language-fluent computer that beat the best human champions at
a game of the US TV show Jeopardy! -- is being
turned into a tool for medical diagnosis. Its ability to absorb and
analyse vast quantities of data is, IBM claims, better than that of
human doctors, and its deployment through the cloud could also
reduce healthcare costs.
The first stage of a planned wider deployment, IBM's business
agreement with the Memorial Sloan-Kettering Cancer Center in New
York and the American private healthcare company Wellpoint, will
see Watson made available for rent to any hospital or clinic that
wants its opinion on matters relating to oncology. Not only that,
but it'll suggest the most affordable way of paying for treatment
in America's excessively complex healthcare market. The hope is
that it will improve diagnoses while reducing their cost.
Two years ago, IBM announced that Watson had "learned" the same amount of
knowledge as the average second-year medical student. For the last
year, IBM, Sloan-Kettering and Wellpoint have been working to teach
Watson how to understand and accumulate complicated peer-reviewed
medical knowledge relating to oncology. That's just lung, prostate
and breast cancers to begin with, with others to come in the next
few years. Watson's ingestion of more than 600,000 pieces of
medical evidence and more than two million pages from medical
journals, along with its ability to search through up to 1.5
million patient records, gives it a breadth of knowledge no human
doctor can match.
According to Sloan-Kettering, only around 20 percent of the knowledge that human doctors use when
diagnosing patients and deciding on treatments relies on
trial-based evidence. It would take at least 160 hours of reading a
week just to keep up with new medical knowledge as it's published,
let alone consider its relevance or apply it practically. Watson's
ability to absorb this information faster than any human should, in
theory, fix a flaw in the current healthcare model. Wellpoint's
Samuel Nussbaum has claimed that, in tests, Watson's successful
diagnosis rate for lung cancer is 90 percent, compared to 50
percent for human doctors.
Sloan-Kettering's Dr Larry Norton said: "What Watson is
going to enable us to do is take that wisdom and put it in a
way that people who don't have that much experience in any
individual disease can have a wise counsellor at their side at
all times and use the intelligence and wisdom of the
most experienced people to help guide decisions."
The attraction for Wellpoint in all this is that Watson should
also reduce budgetary waste -- the company claims that 30 percent of the $2.3 trillion (£1.46
trillion) spent on healthcare in the United States each year is
wasted. Watson here becomes a tool for what's known as "utilisation
management" -- management-speak for "working out how to do
something the cheapest way possible".
Wellpoint's statement said: "Natural language processing
leverages unstructured data, such as text-based treatment
requests. Eighty percent of the world's total data
is unstructured, and using traditional computing
to handle it would consume a great deal of time and resources
in the utilisation management process. The project also takes an
early step into cognitive systems by enabling Watson to
co-evolve with treatment guidelines, policies and medical best
practices. The system has the ability to improve
iteratively as payers and providers use it." In other words,
Watson will get better the more it's used, both in working out how
to cure people and how to cure them more cheaply.
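To illustrate what "unstructured data" means in this context, here is a minimal, purely illustrative sketch in Python. It is not connected to Watson or Wellpoint's actual systems; the sample text, field names and extraction rules are all invented for the example, and a real system would use far more sophisticated natural language processing than simple pattern matching.

```python
import re

# Hypothetical free-text treatment request -- the kind of "unstructured data"
# the Wellpoint statement refers to. This text is invented for illustration.
request = (
    "Requesting approval for stereotactic radiotherapy for a 62-year-old "
    "patient with stage II non-small-cell lung cancer, ECOG performance status 1."
)

# Toy extraction rules; field names and patterns are assumptions for this sketch.
patterns = {
    "age": r"(\d+)-year-old",
    "stage": r"stage\s+([IV]+)",
    "diagnosis": r"with stage [IV]+\s+(.*?),",
}

structured = {}
for field, pattern in patterns.items():
    match = re.search(pattern, request, re.IGNORECASE)
    structured[field] = match.group(1) if match else None

print(structured)
# {'age': '62', 'stage': 'II', 'diagnosis': 'non-small-cell lung cancer'}
```

The point of the toy example is the gap it exposes: traditional computing needs the tidy dictionary at the bottom, while the request itself arrives as prose, which is why Wellpoint emphasises natural language processing.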
When Watson was first devised, it (or is it "he"?) ran across
several large machines at IBM's headquarters, but recently its
physical size has been reduced hugely while its processing speed
has been increased by 240 percent. The idea now is that hospitals,
clinics and individual doctors can rent time with Watson over the
cloud -- sending it information on a patient will, after seconds
(or at most minutes), return a series of suggested treatment
options. Crucially, a doctor can submit a query in standard English
-- Watson can parse natural language, and doesn't rely on
standardised inputs, giving it greater practical flexibility.
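As a rough sketch of what that workflow might look like from a clinic's side, the Python snippet below imagines a simple HTTP client posting a plain-English question plus free-text patient notes to a cloud endpoint and reading back suggested treatment options. The URL, endpoint, field names and response format are all hypothetical; the article does not describe IBM's actual service interface.

```python
import requests

# Hypothetical endpoint -- everything below (URL, fields, response shape)
# is invented for illustration, not IBM's real API.
WATSON_ONCOLOGY_URL = "https://example-cloud-host/watson/oncology/recommendations"

def ask_watson(question, patient_notes, api_key):
    """Send a plain-English question plus unstructured patient notes,
    and return the suggested treatment options (hypothetical schema)."""
    payload = {
        "question": question,            # natural-language query from the doctor
        "patient_notes": patient_notes,  # free-text patient records
    }
    response = requests.post(
        WATSON_ONCOLOGY_URL,
        json=payload,
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=120,  # the article suggests answers arrive in seconds to minutes
    )
    response.raise_for_status()
    return response.json().get("treatment_options", [])

# Example usage with invented data:
# options = ask_watson(
#     "What are the recommended treatment options for this patient?",
#     "62-year-old, stage II non-small-cell lung cancer, ECOG 1...",
#     api_key="YOUR-KEY",
# )
# for option in options:
#     print(option)
```

The detail the article does confirm is the interaction style: the doctor's query goes in as ordinary English rather than as a standardised form, which is what makes a rent-by-the-query cloud service plausible in the first place.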
Watson's previous claim to fame came from winning a special
edition of the US gameshow Jeopardy! in 2011. For those
unfamiliar, Jeopardy!'s format works like this: the
answers are revealed on the gameboard and the contestants must
phrase their responses as questions. Thus, for the clue "the
ancient Lion of Nimrod went missing from this city's national
museum in 2003" the correct reply is "what is Baghdad?". Clues are
often based on puns or other word tricks, and while it's not quite
on the level of a cryptic crossword, it's certainly the kind of
linguistic challenge that would fox most language-literate
computers.
Watson's ability to parse texts and grasp the underlying rules
has had its drawbacks, though, as revealed last
month when IBM research scientist Eric Brown admitted that he had
tried giving Watson the Urban Dictionary as
a dataset. While Watson was able to understand some of the, er,
colourful slang that fills the site's pages, it failed to
understand the difference between polite and offensive speech.
Watson's memory of the Urban Dictionary had to (regrettably) be
wiped.