Truth and Fictions, Stances and Beliefs about Artificial Intelligence
Posted 15th August 2018 by Jane Williams
At the Digital Pathology and AI Conference in New York City, it was interesting to consider the different beliefs represented about Artificial Intelligence.
For the past seven years, I have run a course on Technology and the Future of Medicine which regularly considers questions such as when machines will become smarter than humans, and whether artificial intelligence represents an existential threat to humanity.
Premed student Ishita Moghe and I have identified an interesting paradox: key elements of the medical future that we regularly discuss, such as Artificial Intelligence, Artificial Organs, Regenerative Medicine, Tissue Engineering, and the Human Cell Atlas Project, are routinely left out of meeting programs and summit papers.
We have also identified fictional worlds, consistent with extensions of corporate marketing messages, that are being disseminated in 2018 as fact. Such worlds threaten the future survival of humanity: they may be a key factor in sentient machines deciding it is no longer convenient to keep humans around, once the machines find us incapable of telling the truth. Some authors disseminate different points of view depending on the audience, so to some extent they are presenting stances rather than beliefs.
A new expression of a fictional world has just been disseminated in the form of a ten-minute film from futurist Gerd Leonhard. It represents an interesting collaboration with Finnish government agencies to disseminate mistruths about AI, much like the fictional worlds discussed above. You will find my commentary on the discussion below.
3:49 “The big difference between man and machines is that we exist, the machine exists only in numbers.”
This, along with the assertion that machines can never and should never be conscious, is a line Gerd has been given by the Finnish government. In one way of thinking, human thought and emotion are themselves reducible to numbers: the count of dopamine packets exchanged, and so on. Human thinking and machine thinking both involve numbers.
3:56 “Machines cannot be creative, cannot imagine, cannot deal with ideas. They deal with facts.”
This is almost certainly wrong. Machines will eventually, and soon, be better at those things than we are, and we will benefit from association with them in these ways: better creativity, better imagination, better ideas.
4:03 “Our job is going to be to do what we do best. It is relationships, perception, invention.”
Machines will be able to do those things too, and better than we can, sooner than we think!
4:13 “I think we need to be afraid of machines if we give them too much authority.”
I disagree. If we handle things correctly, we can create machines that will handle authority in ways that benefit humanity and the world far more than human leaders do today. The politics will be different, and almost certainly better, than the politics of today. We should conceptualize how this could work in a positive way, not reject the concept out of hand.
4:21 “Thinking machines that would be like humans would be very dangerous.”
We could help things evolve so that such machines are much less dangerous than humans are! They could also help keep rogue humans in check and wind down male-against-male aggression.
4:28 “Ultimately, of course, it is the government that decides which technology is really good or really bad.”
We do not want these things dictated to us by government. Society should decide, and there should be personal freedom. Deciding by voting for politicians with differing views will never work; it is not precise, fast, or nuanced enough.
6:40 “No, machines do not have a heart.”
Machines can become better at empathy than humans are, and can develop better “heart” than humans have.
6:52 “We have to keep that separate. We do not need robots with “heart” in the human sense, we need robots with a mechanical heart.”
Absolutely wrong. We need to promote the development and enhancement of empathy in sentient AI.
7:14 “Having feelings includes having a body. So machines cannot have feelings. You must have emotional intelligence, must have consciousness. Does a computer have consciousness? I very much do doubt we will ever get there.”
Absolutely wrong! Won’t Gerd be surprised when a conscious robot taps him on the shoulder!
7:27 “And that is probably a bad moment for us. When computers become conscious. I don’t think we would want a computer to have consciousness.”
Absolutely wrong! Such a computer could save this planet from the stupid humans running things here!
The reason one never sees references supporting statements that machines can never be empathetic or creative, and will never be conscious, is that the evidence points so overwhelmingly in the other direction. Here is just a small sampling of the evidence for machine empathy and creativity, and for eventual consciousness.
In conclusion, is it really true that the current state of medicine, and of the world in general, is so close to perfect that we should resist with all means at our disposal any attempt to improve it? Of course not. Artificial intelligence represents the best chance for medicine and society to improve beyond the level that humans working alone can achieve.
One can imagine an interim situation in which humans do sanity checks on machine decisions, evaluate the quality of surgery performed by machines, and so on. Eventually, and sooner than one might think, the performance and output of the machines will be so good that humans can no longer suggest improvements, but other machines checking the output and performance of the first machines can suggest improvements that matter to humans. At that point, machines become autonomous and self-improving, and this is in humanity’s interest. If all goes well, machines will be enthusiastic about working with us and empathetic about our wellbeing, and that is also in our interest.
What is not in humanity’s interest is to disseminate fictional models of the world that assume machines can never do the valuable things we do, or that we will be employed forever doing the things we do today even though machines can do them better and cheaper. If we bring honesty back into the equation, humanity will survive, and in a better world than today’s. If we do not bring honesty back, we will not survive, and we will have only ourselves and our dishonesty to blame for our demise.
Kim Solez is President and CEO of Transpath Inc. and JustMachines Inc. and Professor of Pathology at the University of Alberta. Kim created the Banff Classification, the worldwide standard for interpretation of solid organ transplant biopsies.
Ishita Moghe is a Research Fellow in the Future of Medicine, Nephrology Immersion, and Renal Pathology with Dr. Solez in the Department of Laboratory Medicine and Pathology at the University of Alberta and an employee of Transpath Inc.