A survey posted on the preprint server arXiv predicted, with 50% probability, that high-level machine intelligence would match human performance as a surgeon in approximately 35 years. See the graph below.
A potential flaw in this study is that the surveyed individuals were all artificial intelligence researchers, who predicted that machines would not equal them at their own jobs for more than 85 years, with a 75% likelihood that it would take over 200 years.
I suspect that if surgeons were asked the same questions, we would say it would take over 85 years for machines to operate as well as we can, and 35 years until artificial intelligence researchers were replaced by their creations.
[Thanks to @EricTopol for tweeting a link to the arXiv paper.]
Part 2
Similar to the question “who is responsible if a driverless car causes an accident?” is “when artificial intelligence botches your medical diagnosis, who’s to blame?” An article on Quartz discussed the topic.
[Digression: The article matter-of-factly states “Medical error is currently the third leading cause of death in the US… ” This is untrue. See this post of mine and this one from the rapid response pages of the BMJ.]
If artificial intelligence were simply being used as a tool by a human physician, the doctor would be on the hook. However, there are indications that artificial intelligence may be more accurate than humans in diagnosing diseases and may soon be able to function independently.
If a machine makes a diagnostic error, are the designers of the software responsible? Is it the company that made the device? What about the entity that owns the system? No one knows.
The Quartz piece did not address this. Who is responsible if a nonhuman surgeon makes a mistake during an operation?
I’m sorry I won’t be around in 35 years to hear how this is settled.
2 comments:
I think the reasoning behind choosing AI researcher as the last job to be "solved" is that once computers get better at AI research than we do, everything else will pretty much be solved immediately. The AIs will make better AIs than we can, and those will make even better AIs and so on. Thus, whatever doesn't get solved before "AI researcher" gets solved, will be solved along with it.
That doesn't prove that a surgeon must be easier to build than an AI researcher, but it does suggest (if you buy the argument) that AI researcher should be the (shared) top problem.
I'm an AI researcher, and to me, surgeon definitely seems like the more challenging job for a human to learn. From an AI perspective, however, it's a more difficult question to answer. A lot of things that make surgery difficult for us (memory, motor control, vision) are things we're at least starting to figure out on a simpler scale. Other requirements, like creativity and planning, are further off. The most difficult aspect is probably the ability to respond to unexpected, never-before-seen situations.
AI research boils down to just those abstract concepts like creativity, reasoning, and planning that we understand far less well (no fine motor control required here, thank god), but the upshot is that we can work entirely in the computer. Already, almost all AI research consists of setting up programs that search for other programs that do the things we want them to do. In some sense we're already using AI to build other AI.
I think for both professions, we'll see some progress within 35 years. Any predictions about reaching human performance are fairly useless speculation; they usually turn out to be wildly optimistic and occasionally a little too pessimistic.
Peter, thanks for the thoughtful comments. I suppose AI will conquer everything eventually, but I'm glad you don't think surgery will be solved within 35 years.