In this work, we formulate a new multi-task active learning setting in which the learner’s goal is to solve multiple matrix completion problems simultaneously. At each round, the learner chooses a matrix and receives a sample of one of its entries drawn uniformly at random. Our main practical motivation is market segmentation, where the matrices represent different regions with different customer preferences. The challenge in this setting is that the matrices may differ in size and also in rank, which is unknown. We provide and analyze a new algorithm, MAlocate, which adapts to the unknown ranks of the different matrices. We then give a lower bound showing that our strategy is minimax-optimal, and demonstrate its performance with synthetic experiments.
Roni Stern “Multi-agent pathfinding: robust and efficient solutions”
The multi-agent path-finding (MAPF) problem is the problem of finding a plan for moving a set of agents from their initial locations to their goals without collisions. This problem has numerous applications in digital entertainment, warehouse management, law enforcement, and robotics. While solving MAPF optimally is NP-hard, research on optimal and suboptimal MAPF has been flourishing across a range of algorithmic approaches, and state-of-the-art solvers can solve problems with over a hundred agents in a few minutes. In this talk, I will give an introductory overview of the field, and then focus on recent work on robust solutions to MAPF that interleave planning and execution. The talk will be partially based on a tutorial on MAPF I gave together with Prof. Roman Bartak at AAMAS 2018.
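As a minimal illustration of what "a plan without collisions" means (an assumed example, not taken from the talk): a MAPF plan assigns each agent a path, one grid cell per time step, and validating it amounts to checking for vertex conflicts (two agents in one cell) and edge conflicts (two agents swapping cells).

```python
def find_conflict(paths):
    """Return (t, (i, j)) for the first conflict between agents i and j,
    or None if the plan is collision-free. Each path is a list of cells,
    one per time step; agents that reach their goal wait there."""
    horizon = max(len(p) for p in paths)
    for t in range(horizon):
        pos = [p[min(t, len(p) - 1)] for p in paths]
        for i in range(len(paths)):
            for j in range(i + 1, len(paths)):
                # Vertex conflict: same cell at the same time step.
                if pos[i] == pos[j]:
                    return t, (i, j)
        if t > 0:
            prev = [p[min(t - 1, len(p) - 1)] for p in paths]
            for i in range(len(paths)):
                for j in range(i + 1, len(paths)):
                    # Edge conflict: agents swap cells between t-1 and t.
                    if pos[i] == prev[j] and pos[j] == prev[i]:
                        return t, (i, j)
    return None

# Two agents swapping cells (0,0) <-> (0,1) collide on the edge at t = 1.
print(find_conflict([[(0, 0), (0, 1)], [(0, 1), (0, 0)]]))  # (1, (0, 1))
```

Optimal solvers such as conflict-based search are built around exactly this kind of conflict detection, resolving one detected conflict at a time.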
Gennady Osipov, “Introduction to AI methods”
Pattern structures and pattern setups provide a natural mathematical tool for knowledge discovery in complex data, such as sequences, graphs, etc.
We show how these tools can be used for the efficient generation of taxonomies and dependencies from data in different applied domains.
Vadim Stefanuk, “Mathematical modeling of distributed memory”
To complete the description of the situation, it is worth noting that the main interests and approaches in Artificial Intelligence occasionally change. For instance, it was unexpected to observe an intensive discussion of ergodicity during one of the IJCAI conferences, the most important international AI event. In general, one can see that Artificial Intelligence as a science gradually acquires the character of a precise fundamental discipline, in which mathematical problem formulations become more and more popular among experts. Following this tendency, the present lecture includes some problems at the intersection of Mathematics, Computer Science, and Artificial Intelligence. The models to be considered reveal both the simplicity and the complexity of some tasks related to memory organization.
The purpose of this lecture is to establish the fundamental links between two important areas of artificial intelligence – fuzzy logic and deep learning. This approach will allow researchers in the field of fuzzy logic to develop application systems in the field of strong artificial intelligence, which are also of interest to specialists in the field of machine learning. The lecture also examines how neuro-fuzzy networks make it possible to establish a link between symbolic and connectionist schools of artificial intelligence.
Ricardo Gudwin “Motivational Systems in Cognitive Architectures”
Motivational Systems are specific modules of Cognitive Architectures, responsible for determining the behavior of artificial agents based on cognitive models of human motivations. In this talk we discuss how these ideas coming from psychology can be used in the field of cognitive architectures, explaining how motivational systems differ from other kinds of systems, and how they can be used to build control systems for artificial agents.
Evgeny Osipov “Computing with randomness: A new-old paradigm for energy efficient Artificial Intelligence”
The importance of machine intelligence is increasing dramatically in diverse sectors of our society: self-driving cars, genome sequencing, and disease diagnostics are only a few of the most notable applications. The reality of the currently most successful approaches to learning, e.g. Deep Learning, is that they demand several days of work on a large cluster of digital computers to perform the required function. While doing so, they consume enormous amounts of energy. In this talk I will discuss an alternative computational model called hyperdimensional (HD) computing, which can implement complex cognitive functionality with a substantially lower energy footprint. There are several flavors of HD computing, each using different data types for representing HD vectors and different mathematical operations. The best-known HD computing architectures are the Binary Spatter Code, Holographic Reduced Representation, the Multiply-Add-Permute (MAP) architecture, Random Indexing, and the Semantic Pointer Architecture (SPAUN). Collectively, the different flavors of HD computing are referred to as Vector Symbolic Architectures (VSA). In HD computing, information is represented in vectors of extremely large dimensionality (several thousand bits). Such vectors can then be mathematically manipulated not only to classify but also to bind, associate, and perform other types of cognitive operations in a straightforward manner. The source of the tremendous energy savings of HD computers lies in the combination of the mathematical properties of HD spaces on the one hand and the hardware realization of operations by in-memory computation at substantially lower voltages on the other. Through its major operations, HD computing can significantly reduce the required computation and thereby improve the energy efficiency of traditional machine learning algorithms. Interest in computing with hypervectors is currently growing rapidly.
Increasing the energy efficiency of machine learning approaches, and of Deep Learning above all, is one of the central challenges of artificial intelligence research. The current development trend in the area of learning machines is towards the binarization of the most energy- and time-consuming operations. A state-of-the-art finding is that binarized artificial neural networks in fact work due to the properties of binary hyperdimensional spaces. In a historical overview I will also demonstrate that traces of HD computing can be found in several other popular research areas. In the core of the talk I will present the mathematical apparatus behind the HD framework and focus on several illustrative use cases where HD computing can bring potential benefits.
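The bind, bundle, and associate operations mentioned above can be sketched in a few lines. This is a minimal illustration of one flavor, the Binary Spatter Code (an assumed example, not code from the talk): binding is bitwise XOR, bundling is a bitwise majority vote, and similarity is the normalized Hamming distance between hypervectors.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality: several thousand bits, as described in the talk

def hdv():
    """A random binary hypervector; two random vectors are ~0.5 apart."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """XOR binding: associates two hypervectors; the result is dissimilar to both."""
    return a ^ b

def bundle(*vs):
    """Bitwise majority vote: the result stays similar to every input."""
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)

def dist(a, b):
    """Normalized Hamming distance: ~0 for identical, ~0.5 for unrelated vectors."""
    return float(np.mean(a != b))

# Encode a tiny record {color: red, shape: round, size: small} as ONE hypervector.
color, red = hdv(), hdv()
shape, round_ = hdv(), hdv()
size, small = hdv(), hdv()
record = bundle(bind(color, red), bind(shape, round_), bind(size, small))

# Unbinding with the 'color' key recovers a noisy copy of 'red' ...
print(dist(bind(record, color), red))     # well below 0.5
# ... while an unrelated value stays at chance distance.
print(dist(bind(record, color), round_))  # close to 0.5
```

Because XOR is its own inverse, querying the record is the same operation as building it; the noise introduced by bundling is tolerated thanks to the high dimensionality.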
Medical imaging, including radiology, pathology, surgery, neuroscience, etc., is becoming more important in modern medicine. I will talk about unmet needs on the clinical side, including: efficient anonymization, curation, and smart labeling for cheap annotation; domain adaptation and image normalization to overcome differences between multi-center trials; interpretability, visualization, and causal learning to mitigate the black-box property; the uncertainty of medical data and of artificial-intelligence decisions; reproducibility studies of deep learning using repeatedly scanned images; augmentation, curriculum learning, and one-/multi-shot learning to address disease imbalance and rare or small datasets; GAN applications and novelty (anomaly) detection under supervised learning to support later human decisions; big-data PACS; content-based image retrieval, deep radiomics, and deep survival analysis; physics-induced machine learning built on well-known physical and medical laws; and robustness to adversarial attacks.
Ildar Batyrshin “Outline of the general theory of similarity, correlation and association measures and its applications to constructing measures of relationship of data for different domains”
The lecture covers a new, non-statistical approach to measures of similarity, correlation, and association. The measures are considered as functions defined on a universal domain and satisfying given properties. General techniques for designing similarity and correlation functions for different data types will be presented. The lecture is intended for students, post-graduates, and researchers interested in mathematical methods of data analysis and their applications.
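As a generic illustration of viewing correlation as a function built from a similarity function (an assumed example, not the lecture's own constructions): the Pearson correlation coefficient can be obtained by applying cosine similarity to mean-centered data.

```python
import math

def cosine(x, y):
    """Cosine similarity: a similarity function on real vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(x, y))
    nx = math.sqrt(sum(a * a for a in x))
    ny = math.sqrt(sum(b * b for b in y))
    return dot / (nx * ny)

def pearson(x, y):
    """Pearson correlation = cosine similarity of mean-centered data."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    return cosine([a - mx for a in x], [b - my for b in y])

print(pearson([1, 2, 3], [2, 4, 6]))  # ≈ 1.0  (perfect positive association)
print(pearson([1, 2, 3], [6, 4, 2]))  # ≈ -1.0 (perfect negative association)
```

The same recipe, i.e. composing a similarity function with a data transformation, yields relationship measures for other data types once a suitable similarity function on that domain is fixed.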
Vladimir Gorodetsky “Behavior-based Paradigm for Group Control of Agent Networks”
Despite the diversity of applications determining current trends in the area of modern intelligent information technologies, the majority of them share many common features fitting frameworks like the Internet of Things and/or cyber-physical systems. Indeed, these frameworks were developed as a generalization of a wide class of modern applications composed of a large number of intensively interacting heterogeneous (e.g. physical, virtual, and social) autonomous objects with embedded computing and communication capabilities, united, in the general case, in a network with dynamic connectivity. These applications operate in a group fashion based on intensive interactions of relatively simple distributed autonomous entities existing in a shared knowledge and data space, utilizing shared resources and services and thus needing distributed coordination and synchronization of their individual behaviors to achieve the group’s objectives. The emergent behavior of such systems is referred to as group (collective) behavior, and the related control problem as group control. Typical examples of such applications can be found in the military domain, collective and/or cloud robotics, distributed surveillance systems composed of a swarm of small satellites, teams of unmanned aerial vehicles used for agricultural task management, and many others constituting the basis of what is now called the digital society and, in particular, the digital economy.
In the lecture, the common properties of such applications will be analyzed. It will be shown that the traditionally used knowledge-based (KB) paradigm of Artificial Intelligence (AI) is not well suited to modeling these applications, and that it is more reasonable to use the behavior-based (BB) paradigm instead.
The focus of the lecture will be on the statement of the BB group control problem and the analysis of the basic concepts exploited for its semantic modeling. The most important of the latter are behavior pattern, group behavior scenario, situation, situation assessment, and situation awareness, which are practically ignored in ontologies representing the semantics of applications in the KB paradigm of AI. The lecture will draw attention to the paramount importance of the event ontology and its role in the implementation of the procedural semantics of distributed real-time systems. Other important aspects of the BB model, such as exceptional-situation management as the main component of adaptive group control, scenario performance planning, scenario knowledge, and the knowledge base, will be highlighted as well. The introduced concepts of the BB model and their practical usage will be illustrated by a case study implementing a group of interacting autonomous robots performing assembly production without human intervention. The formal model of this application, specified as a network of interacting state machines with internal states and implemented as a self-organizing p2p network of autonomous agents, will be introduced and discussed. It will also be shown that group control of agents united in the network can be efficiently implemented via a number of adaptive agent interaction protocols, including exceptional-situation management protocols. In conclusion, a sketch of a roadmap for research and development on BB models of group control will be outlined.
Hermann Ney “Speech Recognition and Machine Translation: From Bayes Decision Rule to Deep Learning”
During the last 8 years, the accuracy of speech recognition and machine translation systems has been improved significantly by deep learning methods such as deep MLPs, RNNs, LSTM RNNs, CTC, and attention models. We will present how these methods fit into the framework of statistical decision theory. We are working on many of these methods in comparative evaluations and will discuss the results.
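The Bayes decision rule mentioned in the title can be sketched in its standard form (a textbook formulation, not taken from the talk itself): given an observation $X$ (an acoustic signal in speech recognition, a source sentence in machine translation), the system outputs the word sequence $W$ with maximal posterior probability, which Bayes' theorem factors into a task-specific likelihood and a language-model prior:

```latex
\hat{W} \;=\; \operatorname*{arg\,max}_{W}\; p(W \mid X)
        \;=\; \operatorname*{arg\,max}_{W}\; p(X \mid W)\, p(W)
```

Deep learning methods then serve as the models for these distributions, e.g. an attention model estimating $p(W \mid X)$ directly.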
Konstantin Yakovlev, “Intelligent Robotics”
Robots and AI are meant to be together. That is perhaps the most widespread stereotype at the moment. But is it actually true? Are there any robots nowadays that can be regarded as intelligent? What are the distinctive features of such robots that make them stand out from ordinary automated machines like ATMs, coffee machines, etc.? What methods and algorithms are used to build what are called intelligent control systems, the complex pieces of software that control robots’ behavior? We will address these questions in the lecture and try to arrive at plausible answers.