Deep Bayesian Learning and Mining
Jen-Tzung Chien, National Chiao Tung University, Hsinchu, Taiwan; firstname.lastname@example.org
This tutorial addresses advances in deep Bayesian learning for sequence data, which are ubiquitous in speech, music, text, image, video, web, communication and networking applications. Spatial and temporal contents are analyzed and represented to fulfill a variety of tasks, ranging from classification, synthesis, generation, segmentation, dialogue, search, recommendation, summarization, answering, captioning, mining and translation to adaptation, to name a few. Traditionally, “deep learning” is taken to be a learning process whose inference or optimization is based on a real-valued deterministic model. The “latent semantic structure” in words, sentences, images, actions, documents or videos learned from data may not be well expressed or correctly optimized in mathematical logic or computer programs. The “distribution function” in a discrete or continuous latent variable model for spatial and temporal sequences may not be properly decomposed or estimated. This tutorial addresses the fundamentals of statistical models and neural networks, and focuses on a series of advanced Bayesian models and deep models, including Bayesian nonparametrics, recurrent neural networks, sequence-to-sequence models, the variational auto-encoder (VAE), the generative adversarial network, attention mechanisms, memory-augmented neural networks, skip neural networks, the temporal difference VAE, stochastic neural networks, stochastic temporal convolutional networks, predictive state neural networks, and policy neural networks. Enhancing the prior/posterior representation is also addressed. We present how these models are connected and why they work for a variety of applications on symbolic and complex patterns in sequence data. Variational inference and sampling methods are formulated to tackle the optimization of complicated models. The embeddings, clustering or co-clustering of words, sentences or objects are merged with linguistic and semantic constraints.
A series of case studies is presented to tackle different issues in deep Bayesian data mining. Finally, we point out a number of directions and outlooks for future studies.
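To make the variational inference mentioned above concrete, the following is a minimal sketch of the evidence lower bound (ELBO) for a Gaussian VAE. The linear encoder/decoder, dimensions, and weights are hypothetical toy choices for illustration only; a real VAE would use trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions and random linear maps standing in for
# a trained encoder and decoder.
x_dim, z_dim = 8, 2
W_mu = rng.normal(scale=0.1, size=(z_dim, x_dim))   # encoder mean weights
W_lv = rng.normal(scale=0.1, size=(z_dim, x_dim))   # encoder log-variance weights
W_dec = rng.normal(scale=0.1, size=(x_dim, z_dim))  # decoder weights

x = rng.normal(size=x_dim)  # one observed sample

# Encoder: q(z|x) = N(mu, diag(exp(logvar)))
mu = W_mu @ x
logvar = W_lv @ x

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
# so the sampling step stays differentiable with respect to mu, logvar.
eps = rng.normal(size=z_dim)
z = mu + np.exp(0.5 * logvar) * eps

# Decoder: p(x|z) = N(x_hat, I); Gaussian log-likelihood up to a constant.
x_hat = W_dec @ z
recon_loglik = -0.5 * np.sum((x - x_hat) ** 2)

# Closed-form KL(q(z|x) || N(0, I)) for diagonal Gaussians.
kl = 0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0)

# ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)); training maximizes this.
elbo = recon_loglik - kl
print(f"KL = {kl:.4f}, ELBO = {elbo:.4f}")
```

Maximizing this bound with stochastic gradients is what ties the variational inference and sampling methods of the tutorial together: the reparameterized sample handles the expectation, and the KL term keeps the posterior close to the prior.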
Jen-Tzung Chien is the Chair Professor at the National Chiao Tung University, Taiwan. He held a visiting researcher position with the IBM T. J. Watson Research Center, Yorktown Heights, NY, in 2010. His research interests include machine learning, deep learning, natural language processing and computer vision. He served as an associate editor of the IEEE Signal Processing Letters in 2008-2011, a guest editor of the IEEE Transactions on Audio, Speech and Language Processing in 2012, an organization committee member of ICASSP 2009, an area coordinator of Interspeech 2012 and EUSIPCO 2017-2020, the program chair of ISCSLP 2018, and the general chair of MLSP 2017, and currently serves as an elected member of the IEEE Machine Learning for Signal Processing Technical Committee. He received the Best Paper Award of the IEEE Automatic Speech Recognition and Understanding Workshop in 2011 and the AAPM Farrington Daniels Award in 2018. Dr. Chien has published extensively, including the books “Bayesian Speech and Language Processing”, Cambridge University Press, 2015, and “Source Separation and Machine Learning”, Academic Press, 2018. He has served as a tutorial speaker at a number of top conferences, including ICASSP in 2012, 2015 and 2017, Interspeech in 2013 and 2016, COLING in 2018, and AAAI, ACL, KDD and IJCAI in 2019.
Deep Explanations in Machine Learning via Interpretable Visual Methods
Boris Kovalerchuk1, Muhammad Aurangzeb Ahmad2,3, Ankur Teredesai2,3
1Dept. of Computer Science, Central Washington University. 400 E. University Way, Ellensburg, WA, 98926, USA; email@example.com
2KenSci Inc. & Dept. of Computer Science and Systems, University of Washington Tacoma. TLB 307C, 1900 Commerce St, Tacoma, WA 98402, USA; firstname.lastname@example.org
3KenSci Inc. & Department of Computer Science & Systems, University of Washington Tacoma TLB 307C, 1900 Commerce St, Tacoma, WA 98402, USA; email@example.com
Interpretability of Machine Learning (ML) models is a major area of current research, application and debate in AI. The debates include the claim that most interpretation methods are themselves not interpretable, and a call to stop explaining black-box ML models for high-stakes decisions and to use interpretable models instead. This tutorial covers the state-of-the-art research, development, and applications in the area of Interpretable Knowledge Discovery and ML boosted by Visual Methods. The topic is interdisciplinary, bridging the efforts of research and applied communities in AI, Machine Learning, Visual Analytics, Information Visualization, and HCI. This is a novel and fast-growing area with significant potential, due to its importance in applications and to the prominence of visual channels in human cognition and perception. Recent progress in this area is evident in a major deep learning explanation approach based on visualization of salient areas, and in methods for visualizing the similarity of high-dimensional data in deep learning and other ML studies. Multiple techniques are emerging, including lossless and reversible methods for visualizing high-dimensional data, which will be presented in this tutorial to stimulate studies beyond heatmaps, t-SNE, and black-box ML models in general.
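A minimal sketch of what “lossless and reversible” means here, in contrast to lossy projections such as t-SNE: parallel coordinates map an n-D point to a polyline whose vertex on axis i has height x[i], so the original point is exactly recoverable from the drawing. The function names and the unit axis spacing are illustrative assumptions, not a specific method from the tutorial.

```python
import numpy as np

def to_polyline(x):
    """Map an n-D point to 2-D polyline vertices (axis index, value)."""
    return [(i, float(v)) for i, v in enumerate(x)]

def from_polyline(vertices):
    """Invert the mapping: read each coordinate back off its axis."""
    return np.array([v for _, v in vertices])

x = np.array([0.3, -1.2, 2.5, 0.0])
poly = to_polyline(x)            # what would be drawn on parallel axes
x_back = from_polyline(poly)     # exact reconstruction from the drawing
assert np.allclose(x, x_back)    # lossless: no information is discarded
```

A 2-D embedding such as t-SNE has no such inverse in general, which is why lossless visual representations are attractive for explanation tasks.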
Boris Kovalerchuk: Dr. Boris Kovalerchuk is a professor of Computer Science at Central Washington University, USA. His publications include three books, "Data Mining in Finance" (Springer, 2000), "Visual and Spatial Analysis" (Springer, 2005), and "Visual Knowledge Discovery and Machine Learning" (Springer, 2018), chapters in the Data Mining Handbook, and over 170 other publications. His research and teaching interests are in machine learning, visual analytics, visualization, uncertainty modeling, image and signal processing, and data fusion. Dr. Kovalerchuk has been a principal investigator of research projects in these areas, supported by US Government agencies. He has served as a senior visiting scientist at the US Air Force Research Laboratory, and as a member of expert panels at international conferences and of panels organized by US Government bodies.
Muhammad Aurangzeb Ahmad: Muhammad Aurangzeb Ahmad is an Affiliate Assistant Professor in the Department of Computer Science at the University of Washington Tacoma and the Principal Research Data Scientist at KenSci, a Seattle startup focused on artificial intelligence in healthcare. He has held academic appointments at the University of Washington, the Center for Cognitive Science at the University of Minnesota, the Minnesota Population Center, and the Indian Institute of Technology at Kanpur. Muhammad Aurangzeb has published over 50 research papers in the field of machine learning and artificial intelligence. His current research focuses on responsible AI in healthcare via explainable, fair, unbiased, and robust systems.
Ankur Teredesai: Ankur M. Teredesai is a Professor of Computer Science & Systems at the University of Washington Tacoma, and the founding director of the Center for Data Science. His research interests focus on data science applications for healthcare and their societal impact. Apart from his academic appointments at RIT and the University of Washington Tacoma, Teredesai has significant industry experience, having held various positions at C-DAC Pune, Microsoft, IBM T. J. Watson Labs, and a variety of technology startups. Prof. Teredesai has published over 75 papers on machine learning in leading venues, has managed large teams of data scientists and engineers, and has deployed numerous machine learning applications across various industries, from web advertising and social recommendations to handwriting recognition. Since 2009, his research focus has been on making AI assistive for healthcare. His research contributions have advanced our understanding of risk and utilization prediction for chronic conditions such as diabetes and heart failure.