IJCNN 2021 Plenary Speakers and Bios
 

Marios M. Polycarpou

Professor, IEEE Fellow, IFAC Fellow
Director, KIOS Research and Innovation Center of Excellence
Professor of Electrical and Computer Engineering, University of Cyprus
Honorary Professor, Imperial College London, U.K.
Founding Member of the Cyprus Academy of Sciences, Letters, and Arts

Title of the talk:  "Smart Interactive Buildings"

Abstract: Modern buildings are complex systems of structures and technology aimed at providing a safe and comfortable environment for the occupants. Recent advances in information and communication technologies have generated significant interest in developing smart buildings, which provide much greater capabilities in terms of energy efficiency, safety, security, interactivity, as well as in terms of mitigating environmental impact. New components for smart buildings, such as sensors, actuators, controllers, embedded systems and wireless communications, are becoming readily available. Moreover, Internet-of-Things (IoT) technology is already having a significant impact on developments related to smart interactive buildings. The objective of this presentation is to provide an overview of current advances in smart buildings and to present some results on monitoring, control and fault tolerance of Heating, Ventilation and Air-Conditioning (HVAC) systems, which are crucial components of smart buildings. Various estimation, learning and feedback control algorithms will be presented and illustrated, and directions for future research will be discussed.
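To give a concrete feel for the kind of feedback loop the talk builds on, here is a minimal, purely illustrative sketch of a discrete-time PI controller regulating zone temperature. The plant model, gains and constants below are invented for illustration and do not represent the algorithms presented in the talk.

```python
# Illustrative sketch only: a discrete-time PI controller regulating zone
# temperature, a minimal stand-in for an HVAC feedback loop. All constants
# are hypothetical.

def simulate_zone(setpoint=22.0, t_outside=10.0, steps=200, dt=60.0):
    kp, ki = 0.8, 0.002          # PI gains (hypothetical)
    a, b = 1e-4, 5e-3            # heat-loss and heater coefficients (hypothetical)
    temp, integral = 15.0, 0.0   # initial room temperature (deg C)
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        u = max(0.0, min(1.0, kp * error + ki * integral))  # heater duty in [0, 1]
        # First-order thermal dynamics: loss to outside plus heater input
        temp += dt * (a * (t_outside - temp) + b * u)
    return temp

print(f"Temperature after 200 steps: {simulate_zone():.2f} deg C")
```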

Bio: Marios M. Polycarpou is a Professor of Electrical and Computer Engineering and the Director of the KIOS Research and Innovation Center of Excellence at the University of Cyprus. He is also a Founding Member of the Cyprus Academy of Sciences, Letters, and Arts. He received the B.A. degree in Computer Science and the B.Sc. degree in Electrical Engineering, both from Rice University, USA, in 1987, and the M.S. and Ph.D. degrees in Electrical Engineering from the University of Southern California in 1989 and 1992, respectively. His teaching and research interests are in intelligent systems and networks, adaptive and learning control systems, fault diagnosis, machine learning, and critical infrastructure systems. Dr. Polycarpou has published more than 350 articles in refereed journals, edited books and refereed conference proceedings, and co-authored 7 books. He is also the holder of 6 patents. Prof. Polycarpou received the 2016 IEEE Neural Networks Pioneer Award. He is a Fellow of IEEE and IFAC and the recipient of the 2014 Best Paper Award for the journal Building and Environment (Elsevier). He served as the President of the IEEE Computational Intelligence Society (2012-2013), as the President of the European Control Association (2017-2019), and as the Editor-in-Chief of the IEEE Transactions on Neural Networks and Learning Systems (2004-2010). Prof. Polycarpou is an Honorary Professor of Imperial College London. His research work has been funded by several agencies and industry in Europe and the United States, including the prestigious European Research Council (ERC) Advanced Grant, the ERC Synergy Grant and the EU Teaming project.

Professor Karl J. Friston

MB, BS, MA, MRCPsych, FMedSci, FRS
Wellcome Principal Fellow
Scientific Director, Wellcome Trust Centre for Neuroimaging
Institute of Neurology, UCL

Title of the talk: “Active Inference”

Abstract: In the cognitive neurosciences and machine learning, we have formal ways of understanding and characterising perception and decision-making; however, the approaches appear very different: current formulations of perceptual synthesis call on theories like predictive coding and the Bayesian brain hypothesis. Conversely, formulations of decision-making and choice behaviour often appeal to reinforcement learning and the Bellman optimality principle. On the one hand, the brain seems to be in the game of optimising beliefs about how its sensations are caused; on the other hand, our choices and decisions appear to be governed by value functions and reward. Are these formulations irreconcilable, or is there some underlying imperative that renders perceptual inference and decision-making two sides of the same coin?
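For readers unfamiliar with the formalism, the quantity that unifies the two views in active inference is (variational) free energy. The following is one standard rendering from the active-inference literature, assuming discrete hidden states s, outcomes o and policies π; it is a textbook sketch, not notation taken from the talk itself.

```latex
% Variational free energy: perception as bounding surprise (negative log evidence)
\[
  F \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\, q(s) \,\|\, p(s \mid o) \,\right]}_{\text{approximation error}}
     \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
\]
% Expected free energy of a policy: minimising G trades off curiosity against reward
\[
  G(\pi) \;=\; -\,\underbrace{\mathbb{E}_{q(o,\,s \mid \pi)}\!\left[ \ln q(s \mid o, \pi) - \ln q(s \mid \pi) \right]}_{\text{epistemic value (information gain)}}
         \;-\; \underbrace{\mathbb{E}_{q(o \mid \pi)}\!\left[ \ln p(o) \right]}_{\text{extrinsic value (preferred outcomes)}}
\]
```

Under this reading, perception minimises F over beliefs q(s), while action selection minimises G over policies, which is how epistemic value (novelty, curiosity) and reward can enter a single objective.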

Key words: active inference ∙ insight ∙ novelty ∙ curiosity ∙ model reduction ∙ free energy ∙ epistemic value ∙ structure learning


Bio: Karl Friston is a theoretical neuroscientist and authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM) and dynamic causal modelling (DCM). These contributions were motivated by schizophrenia research and theoretical studies of value-learning, formulated as the dysconnection hypothesis of schizophrenia. His mathematical contributions include variational Laplacian procedures and generalized filtering for hierarchical Bayesian model inversion. Friston currently works on models of functional integration in the human brain and the principles that underlie neuronal interactions. His main contribution to theoretical neurobiology is a free-energy principle for action and perception (active inference). Friston received the first Young Investigators Award in Human Brain Mapping (1996) and was elected a Fellow of the Academy of Medical Sciences (1999). In 2000 he was President of the international Organization for Human Brain Mapping. In 2003 he was awarded the Minerva Golden Brain Award, and he was elected a Fellow of the Royal Society in 2006. In 2008 he received a Medal from the Collège de France, and in 2011 an Honorary Doctorate from the University of York. He became a Fellow of the Royal Society of Biology in 2012, received the Weldon Memorial Prize and Medal in 2013 for contributions to mathematical biology, and was elected a member of EMBO (excellence in the life sciences) in 2014 and of the Academia Europaea in 2015. He was the 2016 recipient of the Charles Branch Award for unparalleled breakthroughs in Brain Research and of the Glass Brain Award, a lifetime achievement award in the field of human brain mapping. He holds Honorary Doctorates from the University of Zurich and Radboud University.

Riitta Salmelin

Professor of Imaging Neuroscience at the Department of Neuroscience and Biomedical Engineering
Aalto University, Finland

Title of the talk: What neuroimaging can tell about human brain function

Abstract: Over the past few decades, real-time tracking of cortical current flow (magneto/electroencephalography, MEG/EEG) and accurate localisation of blood oxygenation changes (functional magnetic resonance imaging, fMRI) have offered windows to the functional architecture of the human brain. The neuroimaging domain has reached its first level of maturity: we now know how to measure and quantify different types of signals and, phenomenologically, we know what type of group-level functional effects to expect in a large variety of experimental conditions. Specific brain areas, networks and electrophysiological dynamics have been proposed to be linked with various perceptual, motor and cognitive functions and their disorders. To reach the next phase in human neuroscience, we need to advance from group-level descriptions to quantitative model-based individual-level predictions. These developments will be illustrated with a focus on language function, for which descriptive models, largely based on observations of patients with language disorders, are being supplemented by computationally explicit models of mechanisms and representations. Machine learning approaches are essential tools in this endeavour.
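As a concrete (and heavily simplified) illustration of such individual-level prediction, the sketch below cross-validates a linear classifier that decodes a binary stimulus category from per-trial features. The data are random stand-ins; in practice the feature matrix would come from a single subject's MEG/EEG or fMRI responses.

```python
# Minimal sketch of individual-level prediction: a cross-validated classifier
# decoding a stimulus category from per-trial neuroimaging features.
# The data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64            # e.g. trials x (sensors x time windows)
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)     # e.g. word vs. non-word trials
X[y == 1] += 0.3                          # inject a weak class difference

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5) # within-subject cross-validation
print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```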



Bio: Riitta Salmelin is Professor of Imaging Neuroscience at the Department of Neuroscience and Biomedical Engineering, Aalto University, Finland. Her research focuses on two complementary lines of investigation: uncovering the neural organization of human language function through the use and development of imaging methods and computational modelling, and examining the sensitivity of MEG and fMRI activation and network measures to different neural and cognitive processes. She has pioneered the use of MEG in language research, and applied multimodal MEG/fMRI and interareal connectivity in the study of human cognition. She is the senior editor of the first handbook on MEG (“MEG: An Introduction to Methods”, Oxford University Press, 2010) and Associate Editor of Human Brain Mapping. Honours include membership of the Academia Europaea, the Wiley Young Investigator Award of the Organization for Human Brain Mapping, and the Justine and Yves Sergent Award.
 

Peter Tino

University of Birmingham, UK 

Title of the talk: “Fascinating World of Recurrent Networks: A Personal View”

Abstract: When learning from "dynamic" data, where the order in which the data are presented matters, the key issue is how such temporal structures are represented within the learning machine. In artificial neural networks, an often-adopted strategy is to introduce feedback connections with time delay(s). This enables the neurons to form their activation patterns based on past, as well as current, neural activations. Neural networks of this kind came to be known as Recurrent Neural Networks (RNNs). Many diverse architectures fall under the umbrella of RNNs (from simple RNNs to LSTM and Transformer-XL), with application domains ranging from language processing and narrative production to bioinformatics and chaotic time series modelling and prediction. While the introduction of such a "neural memory" in RNNs seems quite straightforward, we are still far from a full understanding of their workings. I will present my personal view of the developments in this field, with emphasis on key ideas that have helped to push our understanding forward. In particular, we will view and analyze RNNs as state space models, non-autonomous dynamical systems, temporal filters and (based on our recent work) "temporal" feature spaces. I will conclude by arguing that a deep understanding of the internal workings of RNNs is not only scientifically exciting but also a necessary prerequisite for building reliable, understandable and interpretable RNN models.

Bio: Peter Tino holds a Chair position in Complex and Adaptive Systems at the School of Computer Science, University of Birmingham, UK. His interests span machine learning, neural computation, probabilistic modelling and dynamical systems. Peter is fascinated by the possibilities of cross-disciplinary blending of machine learning, mathematical modelling and domain knowledge in a variety of scientific disciplines ranging from astrophysics to bio-medical sciences. He has served on the editorial boards of a variety of journals, including IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Cybernetics, Scientific Reports, and Neural Computation, and has (co-)chaired the Task Force on Mining Complex Astronomical Data and the Neural Networks Technical Committee (TC of the IEEE Computational Intelligence Society). Peter led an EPSRC-funded consortium of six UK universities on developing new mathematics for personalised healthcare. He is a recipient of the Fulbright Fellowship (to work at NEC Research Institute, Princeton, USA, on the dynamics of recurrent neural networks), the UK–Hong Kong Fellowship for Excellence, three Outstanding Paper of the Year Awards from the IEEE Transactions on Neural Networks and the IEEE Transactions on Evolutionary Computation, and the Best Paper Award at ICANN 2002.
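To make the state-space view from the abstract above concrete, here is a minimal sketch of an RNN as a non-autonomous dynamical system: the hidden state h_t evolves as h_t = tanh(W h_{t-1} + U x_t), driven by the input sequence. The weights are random, untrained stand-ins chosen purely for illustration.

```python
# An RNN viewed as a state space model / non-autonomous dynamical system:
# the hidden state h_t is a point in R^n whose trajectory is driven by the
# input sequence. Weights are random here (an untrained, reservoir-style net).
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_input = 16, 3
W = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # recurrent weights
U = rng.normal(scale=0.5, size=(n_hidden, n_input))    # input weights

def rnn_states(xs):
    """Iterate the state map h_t = tanh(W h_{t-1} + U x_t) over a sequence."""
    h = np.zeros(n_hidden)
    states = []
    for x in xs:
        h = np.tanh(W @ h + U @ x)
        states.append(h.copy())
    return np.array(states)

xs = rng.normal(size=(20, n_input))       # a length-20 input sequence
H = rnn_states(xs)                        # trajectory through state space
print(H.shape)                            # (20, 16): one state per time step
```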

 
Dietmar Plenz
 
National Institute of Mental Health, USA
 

Title of the talk: “Self-Organized Criticality in the Brain”


Abstract: Self-organized criticality (SOC) refers to the ability of complex systems to evolve toward a 2nd-order phase transition at which interactions between system components lead to scale-invariant events that are beneficial for system performance. Over the last two decades, considerable experimental evidence has accumulated that the human brain, with its diversity in cell types, interconnectivity, and plasticity, might exhibit SOC. I will review experimental findings showing that the mammalian cortex self-organizes toward four dynamical motifs presently identified in the intact brain: up-states, oscillations, neuronal avalanches, and coherence potentials. During up-states, the synchronization observed for nested theta/gamma oscillations embeds scale-invariant neuronal avalanches, which can be identified by robust power law scaling in avalanche size with a slope of −3/2 and a critical branching parameter of 1. This precise dynamical coordination, tracked in the local synchronization of cortical neurons using microelectrode arrays or two-photon imaging, emerges autonomously in superficial layers of cortex. It is homeostatically regulated, exhibits separation of time scales, and optimizes information transfer for large, associative networks. A subclass of avalanches, the coherence potentials, exhibits an analogy to soliton-based information transfer. These exquisite dynamical features are carefully regulated by the balance of excitation and inhibition as well as the neuromodulator dopamine, which constitute control parameters within the framework of criticality. The precise interactions between up-states, oscillations, avalanches, and coherence potentials provide compelling evidence for SOC in the brain and identify brain-inspired, dynamical templates to develop novel architectures for computation. (For more details see: Plenz et al., 2021, Frontiers in Physics, https://doi.org/10.3389/fphy.2021.639389)
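The quoted statistics can be illustrated with the textbook toy model behind them: a critical branching process with branching parameter 1, whose avalanche sizes follow a power law with slope close to −3/2. The sketch below is only the generic model, not the experimental analysis pipeline described in the talk.

```python
# Toy illustration of the statistics quoted in the abstract: a critical
# branching process (branching parameter sigma = 1) generates avalanche
# sizes S with P(S) ~ S^(-3/2).
import numpy as np

rng = np.random.default_rng(42)

def avalanche_size(sigma=1.0, max_size=10_000):
    """Total number of activations started by a single seed unit."""
    active, size = 1, 1
    while active > 0 and size < max_size:
        # Each active unit activates Poisson(sigma) descendants on average.
        active = rng.poisson(sigma * active)
        size += active
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])

# Crude log-log slope estimate over a range of sizes (expect roughly -1.5).
bins = np.logspace(0, 3, 30)
hist, edges = np.histogram(sizes, bins=bins, density=True)
mask = hist > 0
slope = np.polyfit(np.log(edges[:-1][mask]), np.log(hist[mask]), 1)[0]
print(f"Estimated power-law slope: {slope:.2f}")
```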


Bio: Dr. Plenz is Chief of the Section on Critical Brain Dynamics in the Intramural Research Program at the NIMH. He attended college at the Universities of Mainz and Tuebingen, Germany. Under the supervision of Prof. Valentino Braitenberg and Ad Aertsen, he received his Ph.D. in 1993 at the Max Planck Institute of Biological Cybernetics/University of Tuebingen, where he pioneered the development of in vitro cortex networks to study the emergence of neuronal population dynamics. During his 3-year postdoctoral fellowship with Stephen T. Kitai at the University of Tennessee, Memphis, he developed advanced cortex-forebrain neuronal cultures that allowed him to identify the mechanisms of distinct activity patterns that characterize normal and abnormal population dynamics in cortex and basal ganglia. Dr. Plenz joined the NIMH as a Tenure-track Investigator in 1999 and was promoted to Senior Investigator with tenure in 2006. His laboratory combines electrophysiological and imaging techniques with neural modeling to study the self-organization of neuronal networks.

Zongben Xu

Xi’an Jiaotong University, China

Title of the talk: On Presuppositions of Machine Learning: A Best-fitting Theory

Abstract: Machine learning is applied under a set of prerequisites, or hypotheses, whose optimal setting is a ‘chicken or the egg’ problem. These hypotheses include in particular (i) the Large Capacity Hypothesis on the hypothesis space, (ii) the Independence Hypothesis on the loss function, (iii) the Completeness Hypothesis on the training data, (iv) the Prior-Determine-Regularizer Hypothesis on regularization terms, and (v) the Euclidean Hypothesis on the analysis framework. In this talk we analyze the role, effect and limitations of these hypotheses, and propose a systematic way, which could be called a best-fitting theory, to break through each of them. More specifically, we propose the model-driven deep learning approach to break through the Large Capacity Hypothesis, develop a noise modeling principle to breach the Independence Hypothesis, and suggest the axiomatic curriculum/self-paced learning approach for the Completeness Hypothesis, the implicit regularization method for the Prior-Determine-Regularizer Hypothesis, and Banach space geometry for the Euclidean Hypothesis. In each case, we show the best-fitting strategy and substantiate the value and outcome of the breakthrough. We also show that continued effort to break through the hypotheses of machine learning is needed, which is opening new and active directions of ML research.
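As one concrete instance of the implicit regularization mentioned above (a standard textbook fact, not necessarily the speaker's specific method): plain gradient descent on an underdetermined least-squares problem, started from zero, converges to the minimum-Euclidean-norm solution even though no regularization term appears in the loss.

```python
# Implicit regularization of gradient descent: on an underdetermined
# least-squares problem, GD from zero initialization converges to the
# minimum-norm solution with no explicit regularizer in the loss.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 100))            # 20 equations, 100 unknowns
b = rng.normal(size=20)

w = np.zeros(100)                         # zero initialization is essential
for _ in range(50_000):
    w -= 1e-3 * A.T @ (A @ w - b)         # gradient step on ||Aw - b||^2 / 2

w_min_norm = np.linalg.pinv(A) @ b        # explicit minimum-norm solution
print(np.allclose(w, w_min_norm, atol=1e-4))  # True: GD found the same point
```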

Bio: Dr. Xu Zongben, born in January 1955, of Han ethnicity, is an academician of the Chinese Academy of Sciences, a mathematician, and an expert in signal and information processing. In 1987, he received his Ph.D. in mathematics from Xi’an Jiaotong University, China. During 1988-1989 he carried out postdoctoral research at the University of Strathclyde in the UK. In 1990 he was promoted directly to professor, as authorized by the State Education Commission. From 1990 to 2001, he held researcher and visiting professor positions at CUHK, the University of Essex in England, and the University of Naples in Italy. He currently serves as chief scientist of “The Basic Theory and Key Technology of Intellisense for Unstructured Environment”, a National Basic Research Program of China (973 Project), and of the Beijing Center for Mathematics and Information Interdisciplinary Sciences.

 
Nikola Kasabov
 
Fellow IEEE, Fellow RSNZ, Fellow INNS College of Fellows
Auckland University of Technology, Auckland, New Zealand, [email protected]
George Moore Chair/Professor, University of Ulster, UK
Honorary Professor, Teesside University, UK, and the University of Auckland, NZ
 

Title of the talk: Transfer Learning and Knowledge Transfer Between Humans and Machines with Brain-Inspired Spiking Neural Networks for Adaptable and Explainable AI     

Abstract: The talk argues and demonstrates that brain-inspired spiking neural network (SNN) architectures can be used for transfer learning, i.e. to learn new data and new classes/tasks/categories incrementally, utilising previously learned knowledge. Similarly to how the brain manifests transfer learning, these SNN models need not be restricted in the number of layers, the number of neurons in each layer, etc., as they adopt self-organising learning principles. The newly learned knowledge can be extracted in the form of graphs and symbolic fuzzy rules, and its evolution traced over time. The presented approach is illustrated on an exemplar brain-inspired SNN architecture, NeuCube (free software and open source available from www.kedri.aut.ac.nz/neucube). The extraction of symbolic rules from NeuCube for each learning task and each subject allows for knowledge transfer between humans and machines in an adaptive, evolving, interactive way. This opens the field to building new types of open and transparent BCI and AI systems. More details can be found in: N. Kasabov, Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence, Springer, 2019, https://www.springer.com/gp/book/9783662577134.
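For readers unfamiliar with spiking neurons, the sketch below shows the generic building block of SNNs like the architecture described above: a leaky integrate-and-fire neuron that integrates input current and emits a spike when its membrane potential crosses a threshold. This is a textbook model with invented constants, not NeuCube code.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the generic building block
# of spiking neural networks. Constants are hypothetical.
import numpy as np

def lif_spikes(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return spike times (indices) for a current trace, one value per ms."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)        # leaky integration of input current
        if v >= v_thresh:                 # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset                   # membrane potential resets
    return spikes

rng = np.random.default_rng(7)
current = rng.uniform(0.0, 0.1, size=500) # 500 ms of noisy input
print(lif_spikes(current))                # an irregular spike train
```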

Bio: Professor Nikola Kasabov is a Fellow of IEEE, a Fellow of the Royal Society of New Zealand, a Fellow of the INNS College of Fellows, and a DVF of the Royal Academy of Engineering UK. He is the Founding Director of KEDRI and Professor at the School of Engineering, Computing and Mathematical Sciences at Auckland University of Technology. Kasabov is a Past President of the Asia Pacific Neural Network Society (APNNS) and of the International Neural Network Society (INNS). He is a member of several technical committees of the IEEE Computational Intelligence Society and was a Distinguished Lecturer of IEEE (2012-2014). He is Editor of the Springer Handbook of Bio-Neuroinformatics and the Springer Series of Bio- and Neurosystems, and Editor-in-Chief of the Springer journal Evolving Systems. He is Associate Editor of several international journals. Kasabov holds MSc and PhD degrees from TU Sofia, Bulgaria. His main research interests are in the areas of neural networks, intelligent information systems, soft computing, bioinformatics and neuroinformatics. He has published more than 650 publications. He has extensive academic experience at various academic and research organisations in Europe and Asia, including: George Moore Chair in Data Analytics at the University of Ulster; Professor at the University of Otago, NZ; Advisory Professor at Shanghai Jiao Tong University; Visiting Professor at ETH/University of Zurich and Robert Gordon University, UK; and Honorary Professor at the University of Auckland and Teesside University. Prof. Kasabov has received a number of awards, among them: Doctor Honoris Causa from Obuda University, Budapest; the INNS Ada Lovelace Meritorious Service Award; the NN Best Paper Award for 2016; the APNNA ‘Outstanding Achievements Award’; the INNS Gabor Award for ‘Outstanding contributions to engineering applications of neural networks’; an EU Marie Curie Fellowship; the Bayer Science Innovation Award; the APNNA Excellent Service Award; the RSNZ Science and Technology Medal; the 2015 AUT Medal; and Honorary Membership of the Bulgarian, Greek and Scottish Societies for Computer Science. More information about Prof. Kasabov can be found on his web page: https://academics.aut.ac.nz/nkasabov.