If 2018 is any indication, 2019 will see AI play an even bigger role in our lives than we imagined. It will be deployed in ways we don't realize and used to do things we didn't expect. We won't even be aware when it is being used for or against us, and there may be no way to appeal its decisions. Further, it will be everywhere, deployed by everyone, as companies democratize the technology. These are the observations of leaders from around the industry, as reported by Forbes.
“We are seeing the democratization of AI through open source algorithms, affordable computing power and AI specialized hardware,” said Roy Raanani, CEO and founder of Chorus.ai. “Google released TensorFlow as open source software to allow anyone to build on Google’s own machine learning algorithms. Also, the introduction of AI-specialized hardware by Apple, Google, Tesla, and NVIDIA is increasing AI performance by tens to hundreds of times and enabling that performance in smaller form factors.”
Santi Subotovsky, General Partner at Emergence, and Oded Gal, Head of Products at Zoom Video Communications, believe AI will reshape business meetings by increasing productivity and surfacing hidden insights. Combined with speech recognition, AI can enable automatic note-taking. It can also surface non-verbal cues that meeting participants might miss.
Expect facial recognition to become a standard part of the conference room. Much insight can be gained from knowing who used the room, when, and for what purpose.
Candace Worley, Chief Technical Strategist at McAfee, sounds a cautionary note. She believes there will be special oversight of AI usage due to the “legal, ethical, and cultural implications.” She cites the fact that “AI has demonstrated unfavorable behavior such as racial profiling, unfairly denying individuals loans, and incorrectly identifying basic information about users.”
Nick Caldwell, Chief Product Officer at Looker, offers the most optimistic endorsement of AI, suggesting we stop giving its decisions greater scrutiny than we give human ones. He uses a doctor as an example: we trust her professional judgment without forcing her to cite every study, journal, and lecture that informed her decision. He acknowledges that AI will sometimes make mistakes, but argues that for AI to do its best work, we have to get out of its way.
There are a few differences between AI and doctors, however. First, we know exactly how, where, and by whom doctors are trained, and we can audit that process to be sure it meets expected standards. Second, doctors are accountable for their mistakes. And there are certainly times when we get to question their judgment.
Legally, we still have not worked out what the training standard for AI should be, or who is liable when AI makes a mistake. Will insurance companies cover AI as they do other professionals? Though these remain open questions for now, professionals across a number of industries seem set to integrate AI even further into their processes, and into our lives.