What does the AI enterprise of the future look like? That’s a tough question that I’ve been asked to consider, along with a distinguished panel at Valley ML AI Expo 2020.  The title of the panel is, “Life, the Universe and the AI Enterprise of the Future.” Based on an initial chat with panel chair Gautam Khera, I’ve written up some possible topics we’ll be covering on the panel. Consider this a preview of the talk.

When will we achieve artificial general intelligence?  At a minimum, a system must pass the Turing Test to be considered general AI. Is it conversational with humans? Can it reason?  Chatbots are the closest thing we have today to conversational AI, but they require a lot of training data.  We've got a long way to go.  Intelligence is multi-dimensional, and the human brain is very difficult to model. My best guess is somewhere around 2060.

What is your take on model interpretability?  I am not a huge fan of explainable AI. You don't need to know how a car works to drive it.  You should use explainability techniques where they are most needed: to detect AI bias. Be careful with model-agnostic methods that do feature analysis; these techniques can be unreliable when the features are correlated. There is solid work underway on explaining these massive non-linear models.  In particular, neural-network-specific interpretability methods look promising, including saliency-based explanations, inverted feature maps, and neural-network-specific mathematical models.
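To make the correlated-features caveat concrete, here is a minimal sketch in Python with scikit-learn (my own illustration on synthetic data, not a SAS implementation) showing how permutation importance can be diluted when two features carry nearly the same signal.

```python
# Sketch: permutation importance can mislead when features are correlated.
# Illustrative only; the features and data below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5000

signal = rng.normal(size=n)                    # the true driver of the outcome
x1 = signal + rng.normal(scale=0.1, size=n)    # two near-duplicate views
x2 = signal + rng.normal(scale=0.1, size=n)    # of the same signal
x3 = rng.normal(size=n)                        # pure noise
X = np.column_stack([x1, x2, x3])
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Permute one column at a time and measure the drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["x1", "x2", "x3"], result.importances_mean):
    print(f"{name}: {imp:.3f}")

# Because x1 and x2 are nearly interchangeable, the model can fall back on
# the other copy when one is permuted, so each looks less important than
# the shared signal actually is.
```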

What do computers do better than humans? Computers are better than humans at detecting correlation changes. A co-worker of mine, Jorge Silva, puts it simply: "Computers are fantastic at counting."  More specifically, machines are good at maintaining an internal representation of the world, and you want to understand that state in order to take appropriate actions. At SAS we used reinforcement learning to determine when to open another line in the café. Previously the cashier would call for the manager when the line was too long, but by that point it's already too late. The machine is good at tracking this state and adds a new cashier before the system goes out of whack. Humans usually are not as good at keeping a running tally of these metrics.
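The café example maps naturally onto a tiny reinforcement learning problem. The sketch below is my own toy reconstruction, not the SAS system: the state is the (capped) queue length, the actions are "stay with one cashier" or "open a second line," and tabular Q-learning learns to open the line before the queue blows up. The arrival rates, service rates and costs are all made up for illustration.

```python
# Toy sketch of the cafe-queue idea: tabular Q-learning over queue length.
# Everything here (costs, arrival pattern, state buckets) is hypothetical.
import random

N_STATES = 11          # queue length 0..10 (10 means "10 or more")
ACTIONS = [0, 1]       # 0 = one cashier, 1 = open a second line
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(queue, action):
    """Simulate one time slice: arrivals, service, and a reward signal."""
    arrivals = random.choice([0, 1, 1, 2])        # bursty-ish arrivals
    served = 2 if action == 1 else 1              # second line serves more people
    queue = max(0, min(N_STATES - 1, queue + arrivals - served))
    reward = -queue - (3 if action == 1 else 0)   # waiting cost + staffing cost
    return queue, reward

queue = 0
for _ in range(50000):
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = 0 if Q[queue][0] >= Q[queue][1] else 1
    new_queue, reward = step(queue, action)
    # Standard Q-learning update.
    Q[queue][action] += alpha * (reward + gamma * max(Q[new_queue]) - Q[queue][action])
    queue = new_queue

# The learned policy typically opens the second line once the queue crosses
# a small threshold -- before the system "goes out of whack."
for s in range(N_STATES):
    print(s, "open second line" if Q[s][1] > Q[s][0] else "single cashier")
```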

Is statistics dead?  My background is in statistics, but my skill set has grown as technology has evolved. I'm still a fan of statistical inference to better understand how a set of factors affects the outcome.  I don't understand why statisticians and machine learning practitioners appreciate each other less than they should.  Check out my blog post on this topic to understand why you should add more statistical learning to your machine learning tool kit.

Do we have enough machine learning algorithms?  Generative probabilistic models (including GANs), reinforcement learning, online machine learning, and deep learning are game-changers. In fields such as recommender systems, predictive asset maintenance and peer-to-peer lending, they are already state of the art.  We need to place more emphasis on online learning: the models need to adapt to concept drift without being fully retrained. This is easy to do with neural networks because the architecture, and hence the set of weights, is fixed; you simply keep updating those weights as new data arrives.  It's very similar to transfer learning, but doing it continuously is online learning. There is research underway at SAS on how best to do online learning for random forests and gradient boosting models. If you keep adding more and more trees, the model grows without limit.  We are trying to figure out the best way to forget the least relevant trees and still control for model decay.  You can't just delete the oldest trees.
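As a concrete illustration of updating a fixed set of weights incrementally instead of retraining from scratch, here is a minimal scikit-learn sketch (my own example on synthetic data, not the SAS research mentioned above) that feeds mini-batches to a linear model via partial_fit so it can track drifting data.

```python
# Minimal online-learning sketch: update the same model on each new batch
# rather than retraining from scratch. Data and drift here are synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
model = SGDClassifier(loss="log_loss")  # fixed set of weights, updated in place

def make_batch(shift, n=500):
    """Generate a batch whose decision boundary drifts with `shift`."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

classes = np.array([0, 1])
for t in range(20):
    shift = t / 10.0                              # gradual concept drift
    X, y = make_batch(shift)
    model.partial_fit(X, y, classes=classes)      # incremental update, no full retrain
    print(f"batch {t:2d} accuracy on current concept: {model.score(X, y):.2f}")
```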

Big data is often observational.  What do you recommend for observational data?   Most of the data I get is observational and not from designed experiments.  For example, common data sets cover customer transactions, patients coming into the hospital, or insurance claims.  I often favor causal modeling over predictive (supervised) modeling for this data.  I've recently built propensity scores and causal models to determine whether a patient treatment intervention performed differently from a control group. Often when patients have variable health conditions, or when they are observed across different environments, you may not see a significant difference between the intervention and the control unless you adjust for these factors.   These same methods are applicable to any industry.  Causal analysis methods should in no way be thought of as artificial general intelligence.
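For readers who haven't used propensity scores, here is a bare-bones sketch of inverse-propensity weighting on synthetic data (again in Python, with entirely made-up variables, not the patient study described above): fit a model for the probability of treatment given the covariates, then reweight the treated and control outcomes so the confounded raw difference shrinks toward the true effect.

```python
# Bare-bones inverse-propensity-weighting sketch on synthetic data.
# All variables are made up; this is not the patient study described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 20000

severity = rng.normal(size=n)                      # confounder: sicker patients...
p_treat = 1 / (1 + np.exp(-1.5 * severity))        # ...are treated more often
treated = rng.binomial(1, p_treat)
true_effect = 2.0
outcome = 1.0 * severity + true_effect * treated + rng.normal(size=n)

# Naive comparison is confounded by severity.
naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Propensity score: P(treated | covariates), here just severity.
ps = LogisticRegression().fit(severity.reshape(-1, 1), treated).predict_proba(
    severity.reshape(-1, 1))[:, 1]

# Inverse-propensity weights give each group the covariate mix of the population.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
ipw = (np.average(outcome[treated == 1], weights=w[treated == 1])
       - np.average(outcome[treated == 0], weights=w[treated == 0]))

print(f"naive difference: {naive:.2f}   IPW estimate: {ipw:.2f}   truth: {true_effect}")
```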

You’ll have to tune in to hear what else we cover and what my fellow panelists have to say. There are so many more viable topics, including robotics, hardware advances and context-specific AI. I would love to hear your thoughts on any of these topics.  Register for the panel now or chat with me on LinkedIn.  

 


About Author

Wayne Thompson

Manager Data Science Technologies

Wayne Thompson, Chief Data Scientist at SAS, is a globally renowned presenter, teacher, practitioner and innovator in the fields of data mining and machine learning. He has worked alongside the world's biggest and most challenging organizations to help them harness analytics and become high-performing. Over the course of his 24-year tenure at SAS, Wayne has been credited with bringing to market landmark SAS analytics technologies, including SAS Text Miner, SAS Credit Scoring for Enterprise Miner, SAS Model Manager, SAS Rapid Predictive Modeler, SAS Visual Statistics and more. His current focus initiatives include easy-to-use self-service data mining tools, along with deep learning and cognitive computing toolkits.
