A survey posted on the preprint server arXiv predicted with 50% probability that high-level machine intelligence would equal human performance as a surgeon in approximately 35 years. See graph below.
A potential flaw in this study is that the surveyed individuals were all artificial intelligence researchers, who predicted that machines would not equal them in their own field for more than 85 years, with a 75% likelihood of this occurring more than 200 years from now.
I suspect that if surgeons were asked the same questions, we would say it would take over 85 years for machines to operate as well as we can and only 35 years until artificial intelligence researchers are replaced by their creations.
[Thanks to @EricTopol for tweeting a link to the arXiv paper.]
Part 2
Similar to the question “who is responsible if a driverless car causes an accident?” is “when artificial intelligence botches your medical diagnosis, who’s to blame?” An article on Quartz discussed the topic.
[Digression: The article matter-of-factly states “Medical error is currently the third leading cause of death in the US… ” This is untrue. See this post of mine and this one from the rapid response pages of the BMJ.]
If artificial intelligence were simply being used as a tool by a human physician, the doctor would be on the hook. However, indications are that artificial intelligence may be more accurate than humans in diagnosing diseases and may soon be able to function independently.
If a machine makes a diagnostic error, are the designers of the software responsible? Is it the company that made the device? What about the entity that owns the system? No one knows.
The Quartz piece did not address a related question: who is responsible if a nonhuman surgeon makes a mistake during an operation?
I’m sorry I won’t be around in 35 years to hear how this is settled.