Monday, November 13, Kende street 13-17 basement
9:00 Welcome addresses
László Monostori, Director, Institute for Computer Science and Control, Hungarian Research Network
9:30 Yulia Timofeeva: Introduction to the Warwick University CS Department
10:00 András Benczúr: Overview of the Artificial Intelligence National Laboratory
The Artificial Intelligence National Laboratory (MILAB) aims to strengthen Hungary's position in AI. MILAB was founded in 2020 as the coordinated national artificial intelligence umbrella for the collaboration of all major research centers, universities, and large-scale national programs. We channel business needs and international research relations by leveraging the existing relationships of our partner institutions, and we strengthen social innovation to embrace technological innovation. Research in MILAB is organized around flagship projects in medical image processing, transportation, manufacturing, and logistics, and relies on internationally renowned Hungarian mathematicians and physicists, among others in the areas of graph theory and network science.
10:30 Bálint Vanek: Overview of the National Laboratory for Autonomous Systems
11:00 János Levendovszky: AI research at the Budapest University of Technology and Economics
11:30 András Lőrincz: AI research at Eötvös University Budapest
12:00 Balázs Szegedy: AI research at the Rényi Institute
12:30 lunch break
14:00 László Lengyel, Vice president, National Research, Development and Innovation Office
14:05 Theory of computing
14:05 Artur Czumaj: On Clustering in High Dimension
In this talk we will discuss some recent advances in Euclidean Uniform Facility Location and its applications to k-median and k-means, focusing on the high-dimensional regime. We first present an algorithm for Euclidean Uniform Facility Location in the classical setting of dynamic geometric streams that relies on importance sampling from the stream, and obtains an O(1)-approximation of the optimal cost using only poly(d · log n) space. Next, we discuss how this result can be extended to the parallel setting, and then, to k-median and k-means problems with bicriteria guarantees.
Grasshoppers jumping in the plane
An interesting high school math problem asks the following: if four (point-like) grasshoppers start at the vertices of a square, and each grasshopper can jump over another grasshopper, landing at the same distance on the opposite side, can they arrive after a sequence of such jumps at the vertices of a larger square? The negative answer has an elegant proof, but it does not generalize to a regular pentagon instead of a square. It was surprising for us to notice that such a sequence of jumps does exist for the regular pentagon and indeed for every regular $n$-gon, $n>6$. The answer comes with a nice characterization of the achievable configurations but tells very little about how many steps are needed to achieve them. For example, we believe that reaching a larger regular $n$-gon can be achieved in poly$(n)$ steps, but we cannot prove this. We do not even know whether, if the grasshoppers start at the vertices of a regular dodecahedron (in a 3-dimensional meadow), they can arrive at the vertices of a larger regular dodecahedron. These results are joint work with János Pach.
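For illustration, the jump described above is a point reflection: a grasshopper at position a jumping over a grasshopper at position b lands at 2b - a. A minimal Python sketch of the move (an illustrative aside, not part of the talk material):

```python
# Minimal sketch of the grasshopper jump: a point reflection in the plane.

def jump(a, b):
    """Grasshopper at a jumps over the grasshopper at b and lands at the
    mirror image of a with respect to b, i.e. at 2*b - a."""
    return (2 * b[0] - a[0], 2 * b[1] - a[1])

# Four grasshoppers at the vertices of the unit square.
positions = [(0, 0), (1, 0), (1, 1), (0, 1)]

# Grasshopper 0 jumps over grasshopper 2 and lands at (2, 2).
positions[0] = jump(positions[0], positions[2])
print(positions)  # [(2, 2), (1, 0), (1, 1), (0, 1)]
```

Starting from the unit square, every reachable position stays on the integer lattice, which is the invariant behind the classical negative answer for the square.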
14:35 Ramanujan Sridharan (online): Fixing Knockout Tournaments
A knockout tournament or single-elimination tournament is a standard format of competition that is ubiquitous in sports, elections, and several other domains involving decision-making. A well-studied question in computational social choice centered around this concept is whether a knockout tournament can be conducted in a way that makes a specific player win. In this talk, I will sketch some of the recent advances on this topic from the perspective of parameterized complexity.
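The decision question above, whether the bracket can be seeded so that a chosen player wins, is easy to state but hard to solve at scale; the naive approach tries every seeding. The brute-force sketch below (made-up data, exponential running time) only illustrates the problem that the parameterized algorithms discussed in the talk attack far more efficiently.

```python
from itertools import permutations

def winner(bracket, beats):
    """Play a single-elimination bracket (length a power of two);
    beats[a][b] is True if player a beats player b."""
    while len(bracket) > 1:
        bracket = [a if beats[a][b] else b
                   for a, b in zip(bracket[::2], bracket[1::2])]
    return bracket[0]

def can_win(player, players, beats):
    """Naive check over all seedings; only feasible for tiny instances."""
    return any(winner(list(seed), beats) == player
               for seed in permutations(players))

# Made-up results matrix: 0 beats 1, 1 beats 2, 2 beats 0, everyone beats 3.
beats = [[False, True,  False, True],
         [False, False, True,  True],
         [True,  False, False, True],
         [False, False, False, False]]
print(can_win(2, range(4), beats))  # True: seed 0-1 and 2-3; 0 and 2 advance, then 2 beats 0
print(can_win(3, range(4), beats))  # False: player 3 loses every match
```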
14:50 Balázs Ráth (Technical University)
The forest fire model is a modification of the dynamical Erdős-Rényi random graph model: inflammable edges appear randomly between vertices, but the edge sets of large connected components are deleted due to random lightning strikes. The balance between creation and destruction creates self-organized criticality (SOC): as soon as the graph reaches its critical state, it remains critical. The aim of this talk is to highlight the connection between this phenomenon and the theory of inhomogeneous random graphs.
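A toy simulation of the dynamics described above, random edge arrivals interrupted by lightning strikes that burn whole components, might look as follows; the rates and the burning rule are illustrative choices rather than the talk's exact model.

```python
import random
from collections import deque

def component(adj, v):
    """Vertex set of the connected component containing v (BFS)."""
    seen, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def largest_component(adj):
    """Size of the largest connected component."""
    seen, best = set(), 0
    for v in range(len(adj)):
        if v not in seen:
            comp = component(adj, v)
            seen |= comp
            best = max(best, len(comp))
    return best

def forest_fire(n=200, steps=5000, lightning=0.01, seed=0):
    """Toy dynamical Erdos-Renyi graph with fires: in every step a random
    edge appears; with probability `lightning` a random vertex is struck
    and all edges of its connected component are deleted."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    history = []
    for _ in range(steps):
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v:
            adj[u].add(v)
            adj[v].add(u)
        if rng.random() < lightning:
            struck = component(adj, rng.randrange(n))
            for w in struck:    # burn the component: no edges leave it, so
                adj[w] = set()  # clearing its vertices removes exactly its edges
        history.append(largest_component(adj))
    return history

sizes = forest_fire()
print("largest component, last 5 steps:", sizes[-5:])
```

The quantity tracked here, the size of the largest component, is where the creation-destruction balance described in the abstract shows up.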
15:05 Graham Cormode (online): Federated Computation for Private Data Analysis
The federated model of computation combines distributed and private computation. I'll give a brief overview of this model, and highlight some recent results that span federated learning and federated analytics.
15:20 Sumanta Sarkar (online): Privacy Preserving Machine Learning
The past decade has seen significant progress in machine learning and its usage in applications such as health and finance. Machine learning is a data-driven process: it needs large amounts of data, and these data are often generated by users. Therefore, with the growing applications of machine learning, privacy concerns are also growing. To address the issue of privacy leakage in machine learning, the topic of privacy-preserving machine learning (PPML) has emerged, which aims to protect the privacy of both the sensitive data and the model. PPML is an active area of research in the cryptographic community; it employs privacy-enhancing technologies (PETs) such as homomorphic encryption, zero-knowledge proofs, and multiparty computation. However, there is always a trade-off between performance and the privacy guarantee. In this talk we will give a general view of PPML and how PETs have been useful in this regard.
15:35-18:00 coffee and demos
Demos are on the 6th floor of the Lágymányosi street 11 building
SZTAKI
Research Laboratory on Engineering & Management Intelligence:
János Nacsa: Bin picking robot;
Mátyás Hajós: Pictor-o-Bot;
Gábor Erdős: Laser Welding
Ákos Zarándy, Péter Földessy, Computational Optical Sensing and Processing Laboratory: Infant behavioural video monitor decision support artificial intelligence module
Péter Soós, Péter Szabó, Systems and Control Laboratory: UAV-UGV cooperative inspection
Marcell Kégl, Machine Perception Laboratory: Real-time people surveillance by Lidar and camera fusion
BME
Mátyás Szántó: Carla2NeRF
Semmelweis University
Bendegúz Sulyok: CRC detection framework
Eötvös University
András Lőrincz, Kristian Fenech: Vision-Based Dynamic Real-Time 3D Digital Twin: A Combination of NeRF and Deep Sparse Technologies
András Lőrincz, Kristian Fenech: Mobile application based 3D body pose estimation for physical rehabilitation
Tuesday, November 14, Kende street 13-17 basement
9:00 Machine learning
9:00 Long Tran-Thanh: AI and ML research at Warwick
In this talk I will give a brief overview of the AI and ML research at the University of Warwick, ranging from foundations of AI to its applications. I will also (briefly) discuss my own research on learning with strategic agents.
9:30 Márk Jelasity (Szeged University): Highlights of ML research in Szeged
The talk will discuss a number of research results in the areas of NLP, signal processing, and robust ML. The goal is to present a bird's-eye view of ML research at the Department of AI. Márk is a full professor at the University of Szeged. He is the head of the Department of AI, the PhD School of Computer Science, and the Research Group on AI of the Hungarian Research Network (HUN-REN).
10:00 Peter Triantafillou: Machine Unlearning: Courses for Horses … for Courses
Machine unlearning is a burgeoning field of research concerned with the need to remove (the effect of) specific data items from trained machine learning models. This is a notoriously difficult problem, especially for models with non-convex losses such as Deep Neural Networks (DNNs). And it is a key problem for mitigating the risks of AI, given the need to remove from models the effect of (i) biased data items, (ii) data items with (purposely or involuntarily) erroneous annotations, and/or (iii) sensitive data, for privacy reasons such as ensuring individuals' right to be forgotten à la GDPR. In this brief talk I will overview results of our recent research in machine unlearning, based on our NeurIPS 2023 and SIGMOD 2024 papers, as well as our experience gained from co-organizing the first machine unlearning competition for NeurIPS 2023, hosted by Kaggle (with currently ca. 1000 participating teams throughout the world).
10:15 Dániel Sándor (Technical University): Federated multitask learning
Using federated learning to collaborate with other parties is becoming common when conducting machine learning on high-value data. In our work, we try to expand the possibilities of existing federated models to apply them to multitask problems. Previously we presented FedMTBoost, which used boosting to enhance predictive performance in a small drug-target interaction problem. In this presentation we demonstrate the algorithm's performance on a larger scale using a cross-domain benchmark data set.
In this presentation, I will discuss my recent research endeavors, focusing on the utilization of representation learning concepts to develop trustworthy algorithms that can continuously learn across a variety of tasks. I will emphasize both the theoretical foundations of my work and the practical development of AI/ML algorithms, employing concepts such as sparsity, low-rank, and latent representations of pretrained deep generative models. To conclude, I will showcase empirical results from these approaches, highlighting their effectiveness in addressing challenges related to adversarial robustness and continual learning.
10:45 Ranko Lazic: Implicit bias of gradient-based algorithms
The tremendous empirical success of machine learning using neural networks that are so powerful as to perfectly memorise training data is largely a mystery from the standpoint of classical theory. I shall outline the exciting research direction of determining the implicit bias of gradient-based algorithms, which seeks to explain the magic by which the training of modern neural networks often converges to solutions that generalise well to unseen inputs. Joint work with Dmitry Chistikov and Matthias Englert, to be presented at NeurIPS 2023, will serve as an example.
11:00 Matthias Englert: Adversarial examples and reprogramming
Neural networks have been used with great success in many applications. However, it is a well-established phenomenon that neural networks are prone to adversarial attacks. Such attacks consist of targeted, but relatively small, manipulations of input data that result in the neural network producing incorrect outputs. More recently, Elsayed, Goodfellow, and Sohl-Dickstein (ICLR 2019) introduced the concept of adversarial reprogramming, which allows an attacker not just to induce incorrect outputs, but to repurpose a given neural network to perform a completely different task to the one it was trained on. We will discuss this and some of our own work (joint with Ranko Lazic) in this direction, which was presented at NeurIPS 2022.
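For readers new to the area, the basic attack mentioned above can be as simple as the fast gradient sign method (FGSM), sketched below; this is a generic textbook construction, not the joint work presented at NeurIPS 2022.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps=0.03):
    """Generic FGSM: nudge the input by eps in the direction of the sign of
    the loss gradient, a small change that maximally increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Tiny demonstration on a randomly initialised linear "classifier".
model = torch.nn.Linear(10, 3)
loss_fn = torch.nn.CrossEntropyLoss()
x = torch.randn(4, 10)          # a batch of 4 inputs
y = torch.tensor([0, 1, 2, 1])  # their labels
x_adv = fgsm_attack(model, loss_fn, x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by eps
```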
11:15 Fanghui Liu: Over-parameterization in neural networks: the good, the bad, the ugly
The conventional wisdom of simple models in machine learning misses the bigger picture, especially for over-parameterized neural networks (NNs), where the number of parameters is much larger than the number of training data. Our goal is to explore the mystery behind over-parameterized NNs from a theoretical side. In this talk, I will introduce my research in this direction in a high-level way to understand the good, the bad, and the ugly behind over-parameterized NNs. First, I will discuss generalization guarantees of over-parameterized NNs, e.g., benign overfitting and double descent. Then I will talk about two theory-oriented application topics in trustworthy AI, i.e., robustness and privacy. The aim is to answer a fundamental question: does over-parameterization in NNs help or hurt robustness and privacy?
11:30 László Vidács (Szeged University): Applied AI in software development
The presentation will provide a quick overview of the Department of Software Engineering's numerous projects and activities in which AI is applied.
11:45 Markus Brill (Online): From Computational Social Choice to Civic Participation
I will give a brief overview of my research in Computational Social Choice, which is concerned with aggregating preferences of agents into collective outcomes, and I will describe applications in the area of Civic Participation, including Participatory Budgeting.
12:00 Debmalya Mandal (Online): Performative Reinforcement Learning
Real-world applications of reinforcement learning (e.g. for making recommendations on online platforms) often ignore the fact that the deployed policy might change the underlying environment. In this talk, I will introduce the framework of performative reinforcement learning, where the policy deployed by the learning agent might change the underlying MDP (reward, transition, or both). An important solution concept in this framework is the performatively stable policy, i.e., the optimal policy in the stable environment. I will discuss how one can repeatedly optimize a regularized version of the standard RL problem to obtain such a stable policy. Finally, I will discuss a recent work that considers the setting where the change in the environment is gradual, and the resulting challenges in performing sample-efficient repeated optimization.
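As a toy illustration of the repeated-optimization idea, one can alternate "solve the current MDP, redeploy, let the environment respond" on a tiny synthetic MDP whose transitions drift with the deployed policy; all numbers below are made up, and the talk's regularization, which is what makes the procedure provably well behaved, is omitted.

```python
import numpy as np

# Tiny synthetic performative MDP: 2 states, 2 actions; all numbers are made up.
R = np.array([[1.0, 0.0],                   # reward R[s, a]
              [0.0, 2.0]])
P0 = np.array([[[0.9, 0.1], [0.2, 0.8]],    # base kernel P0[s, a, s']
               [[0.7, 0.3], [0.1, 0.9]]])
P1 = np.array([[[0.3, 0.7], [0.6, 0.4]],    # kernel the environment drifts toward
               [[0.4, 0.6], [0.8, 0.2]]])
GAMMA, ALPHA = 0.9, 0.5                     # discount, strength of the performative effect

def induced_kernel(policy):
    """Environment response: the more the deployed policy plays action 1,
    the further the transitions drift from P0 toward P1."""
    drift = ALPHA * policy.mean()
    return (1 - drift) * P0 + drift * P1

def solve(P, iters=500):
    """Plain value iteration; returns the greedy deterministic policy."""
    V = np.zeros(2)
    for _ in range(iters):
        Q = R + GAMMA * P @ V      # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
        V = Q.max(axis=1)
    return Q.argmax(axis=1)

# Repeated optimization: deploy, let the environment respond, re-solve.
policy = np.array([0, 0])
for t in range(20):
    new_policy = solve(induced_kernel(policy))
    if np.array_equal(new_policy, policy):
        print(f"performatively stable policy after {t} redeployments:", policy)
        break
    policy = new_policy
else:
    print("no stable policy found within 20 redeployments; last policy:", policy)
```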
12:15 Balázs Csáji (SZTAKI): Distribution-Free Uncertainty Quantification with Kernels
Kernel methods are widely used in statistics, machine learning (ML), signal processing, and related fields. Their theoretical foundations are based on reproducing kernel Hilbert spaces (RKHSs), and kernels are often interpreted in ML as similarity measures. In this talk, we study uncertainty quantification for off-line regression problems, thus we assume a finite (i.i.d.) sample of input-output pairs. First, we present a method to construct distribution-free, non-asymptotic confidence intervals for the true (noiseless) outputs of the underlying data-generating (regression) function at the observed inputs. These regions are built around chosen nominal estimates, such as the models created by KRR, SVR or kernelized LASSO. Then, by using the additional assumption that the data-generating function has a Fourier transform with compact support, i.e., the function cannot change arbitrarily fast, we extend the construction to any (unobserved) inputs. In the end, we can build non-parametric, non-asymptotic, simultaneous confidence bands for the true data-generating function for any given significance level.
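For context, the kind of nominal estimate mentioned above, for example kernel ridge regression with a Gaussian kernel, takes only a few lines; the distribution-free confidence regions themselves are the subject of the talk and are not reproduced here.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam=0.1, sigma=1.0):
    """Kernel ridge regression: solve (K + lam * n * I) alpha = y."""
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ alpha

# Noisy i.i.d. sample from an unknown regression function (toy data).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_normal(50)
predict = krr_fit(X, y)
print(predict(np.array([[0.0], [1.5]])))  # nominal estimates at two inputs
```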
12:30 lunch break
Posters are displayed in the social space next to the basement lecture hall.
List of posters at the end of the Program.
14:30 Computer vision, medical, health, bioinformatics
The human brain, comprising billions of neurons with diverse morphologies and intrinsic properties, relies on the firing of action potentials and the release of neurotransmitters to transmit information. The pattern of action potential discharges at presynaptic terminals is intricately linked to dynamic calcium signalling within the presynaptic machinery, which orchestrates the precise release of neurotransmitters. In this talk I will provide a brief overview of my recent research, demonstrating how experimentally constrained computational modelling can complement traditional laboratory techniques such as electrophysiology and imaging and offer valuable insights into the mechanisms governing neuronal dynamics across multiple scales.
14:45 Gihan Mudalige (online): High Performance and Scientific Computing Research at Warwick
In this talk I will briefly introduce the research from my group in the area of High Performance and Scientific Computing. The group's work focuses on the development of next-generation high performance computing (HPC) numerical simulation software libraries through the use of domain-specific languages and high-level abstraction frameworks. The key motivation of this research is to develop techniques to automatically parallelize HPC applications while maintaining near-optimal performance on diverse multi-core, many-core and reconfigurable parallel systems. Our current and recent research projects include work with Rolls-Royce plc, NAG, UKAEA, the Alan Turing Institute, the IBM TJ-Watson Laboratory, and a wide range of prominent UK universities. I have a close collaboration with Dr. Istvan Reguly from PPCU Hungary, who is also an Honorary Associate Professor at Warwick, linked to this work. I will detail our joint work, future plans, and opportunities in this area.
Hippocampal theta oscillations orchestrate faster beta-to-gamma oscillations facilitating the segmentation of neural representations during navigation and episodic memory. Supra-theta rhythms of hippocampal CA1 are coordinated by local interactions as well as inputs from the entorhinal cortex (EC) and CA3. However, theta-nested gamma-band activity in the medial septum (MS) suggests that the MS may control supra-theta CA1 oscillations. To address this, we performed multi-electrode recordings of MS and CA1 activity in rodents and found that MS neuron firing showed strong phase-coupling to theta-nested supra-theta episodes and predicted changes in CA1 beta-to-gamma oscillations on a cycle-by-cycle basis. Unique coupling patterns of anatomically defined MS cell types suggested that indirect MS-to-CA1 pathways via the EC and CA3 mediate distinct CA1 gamma-band oscillations. Optogenetic activation of MS parvalbumin-expressing neurons elicited theta-nested beta-to-gamma oscillations in CA1. Thus, the MS orchestrates hippocampal network activity at multiple temporal scales to mediate memory encoding and retrieval.
15:15 Márton Szemenyei (Technical University) Robust Machine Vision Systems for Mobile Robots
The Computer Vision and Machine Learning lab is located at BME's Department of Control Engineering and Information Technology. As such, our work is strongly focused on computer vision methods that can be applied to mobile robots and autonomous vehicles. In our current research we aim to develop detection systems that are robust even in the face of various input disturbances, such as rain, fog or occluded objects. To solve these tasks, we apply both active and passive vision techniques. We also research AI-based navigation methods that allow robots to map an environment and locate themselves in it, and develop realistic simulated environments to evaluate our algorithms. Naturally, our team also has to deal with tight run-time requirements, therefore we also focus on discovering efficient neural network architectures.
15:30 Shan Raza (online): Computational Tools for Deployment of AI algorithms in Pathology
Computational pathology has seen rapid growth during the past decade, fuelled mainly by swift advancements in deep learning and AI. Pathology images in practice differ from natural images in size and need to be analysed at multiple scales, which makes it challenging to write standardised machine learning pipelines that work across images from multiple modalities and different labs. At the TIA Centre, we have developed computational tools such as TIAToolbox to help standardise deep learning pipelines, make them accessible to the public, and bring reproducibility to computational pathology. We provide detailed example notebooks on how to use and extend these tools in computational pathology workflows with visualisation.
Our research group focuses on finding practical applications for new AI algorithms in the field of pharmaceutical manufacturing. We found that artificial neural networks can be used for the empirical modeling of various pharmaceutical operations. With the models we constructed, the quality of the products can be predicted from the properties of the raw materials and the settings of the operation. Our other main topic is utilizing AI-based object recognition for detecting faulty products and for segmenting particles from complex backgrounds in order to measure their size.
16:00 Alex Olar, Oz Kilim (Eötvös University): Image processing applications for cancer diagnostics
Alex and Oz present their results on X-ray image and histological slide evaluations. Alex presents his recent work, which won the first prize in a data challenge. Oz presents new insights about using AI models for understanding animal behavior, especially a possible explanation of birds' success in pattern recognition on histological slides.
Anatomically and biophysically detailed models of neurons and networks have become important tools in neuroscience. This presentation will introduce some new methods and software tools that allow feature-based analysis of morphological and electrophysiological data, the systematic construction and validation of multicompartmental model neurons, and the principled estimation of neuronal biophysical parameters from physiological recordings.
16:30-18:00 Coffee, posters
Posters are displayed in the social space next to the basement lecture hall.
SZTAKI:
Csaba Kerepesi: Machine learning in ageing research
Domokos Kelen: Theoretical Evaluation of Asymmetric Shapley Values for Root-Cause Analysis
Dániel Rácz & Bálint Daróczy: Tangent similarity gap in feedforward ReLU networks
Ferenc Béres: Urban life and wellbeing assessed by mobile usage
Budapest University of Technology and Economics:
Bence Szinyéri: The Advantages of Deep Learning in Bridge Weigh-in-Motion Systems
Balázs Pejő: Reconciling the uneasy relationship between privacy, robustness, and other aspects of Federated Learning
Szeged University:
Hamza Baniata: Distributed scalability tuning for evolutionary Blockchain sharding optimization
Vilmos Bilicki: TBC
Márk Jelasity: On the functional similarity of robust and non-robust neural representations
Richárd Farkas: TBC
András Kicsi: Machine understanding of radiological reports
Eötvös University
Kristian Fenech: Perceived group personality: a fused deep predictor for group performance
Kristian Fenech: Gaze-based estimation of memorable moments during online meetings
Bruno Melicio: Towards machine evaluation of diagnostic tests in autism
Gyöngyvér Ferencz: FaceGym: A facial game for assessment and training in autism
Ádám Fodor: BlinkLinMult: A transformer-based blink detection method
Áron Fóthi: Deep NRSFM for Multi-View Multi-body Pose Estimation
Skaf Joul: NIPGBoard: Interactive Tool for Data Visualization-based Domain Expert Cooperation
Kristian Fenech: Distribution-Free Uncertainty Quantification for the Regression Function of Binary Classification
Hungarian State Treasury
Nóra Fenyvesi, Richárd Tuhári, József Stéger: Parameter Dependency of Multi-tailed Models
Ferenc Béres, Csaba Sidló: Urban life and wellbeing assessed by mobile usage