Ginni Rometty keynote: CES 2016
Jan 6, 2016
Watson at work: IBM CEO Ginni Rometty talks cognitive technology, partnerships with Under Armour, Medtronic, and SoftBank (Pepper robot)
Health care is one of the areas where there’s just a tremendous amount of potential and promise. So far, what we’ve seen in health care is that it has really lagged other areas of the economy in terms of what technology has been able to do. We haven’t seen the kind of disruptive impact that we’ve seen in a lot of other areas in health care. And that’s part of the reason that the costs remain so high.
… Although these components are largely domain-independent, some tuning to the special locutions of Jeopardy! questions has been done.
… These Question Classes (QClasses) are used to tune the question-answering process by invoking different answering techniques [3], different machine learning models [4], or both.
Most of our rule-based question analysis components are implemented in Prolog [6, 7], a well-established standard for representing pattern-matching rules.
… we explain how we implemented rule-based portions of question analysis using Prolog.
a named entity recognizer (NER), a co-reference resolution component, and a relation extraction component [12].
ESG has been adapted in several ways to the special locutions of Jeopardy! questions. In place of “wh” pronouns …
In spite of these adaptations, care was taken not to degrade parsing of normal English. This is done in part by use of switches for the parser that are turned on only when parsing Jeopardy! questions.
Most of the question analysis tasks in the Watson project are implemented as rules over the PAS and various external databases such as WordNet [16].
… In all, these rule sets consist of more than 6,000 Prolog clauses.
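The rule sets described above match patterns over the predicate-argument structure (PAS). As a rough illustration only — IBM's actual rules are Prolog clauses over ESG parser output, not Python — one such pattern is finding the lexical answer type (LAT) as the noun determined by "this"/"these", since Jeopardy! clues rarely use wh-words. The toy PAS below is invented for the example:

```python
# Toy predicate-argument structure (PAS) for a clue like
# "A new play based on this canine classic opened on the London stage."
# Each entry is (head, relation, dependent) -- a simplification of ESG output.
pas = [
    ("open", "subj", "play"),
    ("play", "mod", "new"),
    ("base", "obj", "play"),
    ("base", "on", "classic"),
    ("classic", "det", "this"),
    ("classic", "mod", "canine"),
]

def lexical_answer_type(pas):
    """Return the noun determined by 'this'/'these' -- a common LAT signal
    in Jeopardy! clues, which use demonstratives instead of wh-pronouns."""
    for head, rel, dep in pas:
        if rel == "det" and dep in ("this", "these"):
            return head
    return None

print(lexical_answer_type(pas))  # -> classic
```

In Prolog, a rule like this is a single clause unifying against PAS facts, which is why the full system's 6,000-plus clauses stay compact.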
This decision can be somewhat subjective according to our definition of LAT. Examples of disagreements were “singer” versus “lead singer” and “body” versus “legislative body”.
The Jeopardy! domain includes a wide variety of kinds of questions, and we have found that a one-size-fits-all approach to answering them is not ideal. In addition, some parts of a question may play special roles and can benefit from specialized handling.
The QClasses PUZZLE, BOND, FITB (fill-in-the-blank), and MULTIPLE-CHOICE have fairly standard representations in Jeopardy! and are detected primarily by regular expressions.
The rule-based recognizer includes regular expression patterns that capture canonical ways that abbreviation questions may be expressed in Jeopardy!
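A minimal sketch of regex-based QClass detection — the patterns below are hypothetical stand-ins, not IBM's published expressions:

```python
import re

# Hypothetical patterns in the spirit of the rule-based QClass recognizers.
QCLASS_PATTERNS = {
    "FITB": re.compile(r"_{2,}"),                       # blanks like "____"
    "MULTIPLE-CHOICE": re.compile(r"\bof the following\b", re.I),
    "ABBREVIATION": re.compile(r"\bstands for\b|\bfor short\b", re.I),
}

def classify(clue):
    """Return every QClass whose pattern fires on the clue text."""
    return [qc for qc, pat in QCLASS_PATTERNS.items() if pat.search(clue)]

print(classify('Completes the song title: "Smoke Gets in Your ____"'))
print(classify("This org. stands for Naval Criminal Investigative Service"))
```

A clue can match more than one class, which is consistent with the paper's point that QClasses then route the question to different answering techniques or models.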
It is common in question-answering systems to represent a question as a graph either of syntactic relations in a parse or PAS [18–20] or of deep semantic relations in a handcrafted ontology [21–23]. Watson uses both approaches.
Most other question-answering systems use question analysis to identify a semantic answer type from a fixed ontology of known types [19, 26–29]. Because of the very broad domain that Jeopardy! questions cover, this is not practical.
CAS: common analysis structure
ESG: English Slot Grammar, a Slot Grammar parser
LATs: lexical answer types
NER: named entity recognizer
PAS: predicate-argument structure (e.g., PAS builder)
UIMA: Unstructured Information Management Architecture
—————————————
Watson is powered by 10 racks of IBM Power 750 servers running Linux, uses 15 terabytes of RAM and 2,880 processor cores, and is capable of operating at 80 teraflops.
Watson was written mostly in Java, but significant chunks of code are written in C++ and Prolog; all components are deployed and integrated using UIMA. http://www.redditblog.com/2011/02/ibm-watson-research-team-answers-your.html
At an event held at the Royal Society in London, for the first time ever, a computer passed the Turing Test, which is widely taken as the benchmark for saying a machine is engaging in intelligent thought.
But like the other much-hyped triumphs of artificial intelligence, this one wasn’t quite what it appeared.
Computers can do things that seem quintessentially human, but they usually take a different path to get there.
IBM’s Deep Blue mastered chess not by refining its intuitions but by evaluating hundreds of millions of positions per second. Watson won at Jeopardy! not by wide reading but by swallowing all of Wikipedia.
Fed up with human shortcomings, the characters in Madeleine George’s play turn to high-tech companions. Could machines be assistants, friends, and even partners? The (Curious Case of the) Watson Intelligence explores the amazing things technology can do for us…and what it can’t.
So in my story the character of Eliza is a computer scientist and she has kind of, like, lifted some of IBM’s technology from a job that she used to have there and taken it off and embedded it in a sociable robot.
GEORGE: And also how they felt about Watson. And they were quite candid about saying things like “I love Watson”. Or “Watson is just like another child to me”. That – I felt hardened by it because I know how easy it is for people who are not specialists to fall in a kind of love with our machines.
Dr. Mark Kris is among the top lung cancer specialists in the world.
As chief of thoracic oncology at Memorial Sloan-Kettering (MSK) Cancer Center in New York City, he has been diagnosing and treating patients for more than 30 years.
But even he is overwhelmed by the massive amount of information that goes into figuring out which drugs to give his patients — and the relatively crude tools he has to decipher that data.
“This is the standard for treatment today,” he says, passing me a well-worn printout of the 2013 treatment guidelines in his office.
We choose a cancer type. A paragraph of instructions says to pair two drugs from a list of 16. “Do the math,” he says. It means more than 100 possible combinations. “How do you figure out which ones are the best?”
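The arithmetic behind "more than 100" checks out: choosing an unordered pair from 16 drugs gives C(16, 2) = 120 combinations. (The drug names here are placeholders.)

```python
from itertools import combinations
from math import comb

drugs = [f"drug_{i}" for i in range(1, 17)]  # 16 candidate drugs
pairs = list(combinations(drugs, 2))         # every unordered two-drug pairing

print(len(pairs))   # -> 120
print(comb(16, 2))  # -> 120, the same count via 16 * 15 / 2
```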
It’s a huge problem. More than 230,000 Americans will be diagnosed with lung cancer this year.
Almost all of them will receive chemotherapy. As crude as the existing guidelines are, says Kris, they won’t be followed more than half the time. If we bumped up adherence by just 10% to 20%, he says, as many as 30,000 people might live longer. Never mind curing cancer — shouldn’t we be able to get the best available combinations of medications to sick people now?
That’s the question that led Kris to IBM. He saw that more information was not the answer.
What doctors needed was a better brain — one that could instantly vacuum up facts, draw deeper connections between data points, and remember everything. They needed Watson.
IBM’s Watson—the same machine that beat Ken Jennings at Jeopardy—is now churning through case histories at Memorial Sloan-Kettering, learning to make diagnoses and treatment recommendations. This is one in a series of developments suggesting that technology may be about to disrupt health care in the same way it has disrupted so many other industries. Are doctors necessary? Just how far might the automation of medicine go?
Are doctors necessary?
Just how far might the automation of medicine go?
processing up to 60 million pages of text per second, even when that text is in the form of plain old prose, or what scientists call “natural language.”
something like 80 percent of all information is “unstructured.” In medicine, it consists of physician notes dictated into medical records, long-winded sentences published in academic journals, and raw numbers stored online by public-health departments.
Watson even has the ability to convey doubt. When it makes diagnoses and recommends treatments, it usually issues a series of possibilities, each with its own level of confidence attached.
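That "series of possibilities, each with its own level of confidence" can be pictured as a ranked candidate list. The sketch below is purely illustrative — the diagnoses, raw scores, and softmax normalization are assumptions for the example, not Watson's actual evidence-merging model, which combines hundreds of features:

```python
from math import exp

# Hypothetical raw evidence scores for candidate diagnoses (illustrative only).
candidates = {
    "adenocarcinoma": 2.1,
    "squamous cell carcinoma": 1.3,
    "small cell lung cancer": 0.2,
}

def confidences(scores):
    """Softmax-normalize raw scores into confidences that sum to 1,
    so each candidate carries an explicit level of doubt."""
    exps = {c: exp(s) for c, s in scores.items()}
    total = sum(exps.values())
    return {c: e / total for c, e in exps.items()}

# Present candidates ranked from most to least confident.
for cand, conf in sorted(confidences(candidates).items(),
                         key=lambda kv: -kv[1]):
    print(f"{cand}: {conf:.2f}")
```

The point of the normalization is that no single answer is asserted outright: a low top confidence is itself a signal that the system is unsure.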