Keynote Speakers 2025

Dr. Hrachya Astsatryan


Hrachya Astsatryan (PhD Computer Science, MS Applied Mathematics) is the Director of the Institute for Informatics and Automation Problems at the National Academy of Sciences of Armenia, where he also leads the Center for Scientific Computing. His work lies at the intersection of high-performance computing, artificial intelligence, and scientific computing. He has authored over 100 publications in leading international journals, conferences, and workshops. In 2023, he was awarded the Commemorative Medal of the Prime Minister of the Republic of Armenia, and in 2005 he received an award from the President of the Republic of Armenia for his outstanding work in Technical Sciences and Information Technologies.

 

Talk title: High-Performance Artificial Intelligence: Bridging Advanced Computing and AI

Abstract: As high-performance computing (HPC) and artificial intelligence (AI) converge, new possibilities are arising to address scientific, industrial, and societal challenges at large scale. High-Performance AI (HiPerAI) couples scalable AI algorithms with cutting-edge computing infrastructure. The talk is built on four pillars: (1) our adaptive HPC platform, (2) AI-optimized HPC workloads, (3) AI applications in scientific simulations (physics, chemistry, biology, quantum systems, and Earth Observation), where AI enhances modeling, flexibility, and predictability, and (4) emerging trends such as secure federated computing, edge-cloud-HPC convergence, hybrid quantum-classical AI, and sustainable, reproducible scientific computing.


Petra Dalunde


Petra Dalunde has a background as a political advisor at Stockholm City Hall and as a pedagogue and testbed creator. For the last three years she has worked at RISE (Research Institutes of Sweden), coordinating the CitCom TEF (Testing and Experimentation Facilities for AI in Smart and Sustainable Cities and Communities) and serving as coordinator for Formal Test and Evaluation of AI at RISE; in 2025 she became co-director of the Swedish AI Factory MIMER.

 

Talk title: Swedish AI Factory MIMER

Abstract: The presentation will explore how the Swedish AI Factory MIMER will work, in practice, towards alignment with the AI Continent Action Plan launched by the European Commission. The plan describes how the different EU AI innovation infrastructures (TEFs, EDIHs, gigafactories, AI Factories, the AI-on-Demand Platform, data spaces, and so on) relate to each other, and how they support the development of ethical and trustworthy AI as well as compliance with the AI Act.


Prof. Dr. Marjan Mernik


Marjan Mernik received the MSc and PhD degrees in Computer Science from the University of Maribor in 1994 and 1998, respectively. He is currently a professor at the University of Maribor, Faculty of Electrical Engineering and Computer Science. He was a visiting professor at the University of Alabama at Birmingham, Department of Computer and Information Sciences. His research interests include programming languages, domain-specific (modelling) languages, grammar and semantic inference, and evolutionary computation. He is the Editor-in-Chief of the Journal of Computer Languages, as well as an Associate Editor of the journals Applied Soft Computing and Swarm and Evolutionary Computation. He was named a Highly Cited Researcher for the years 2017 and 2018. More information about his work is available at https://lpm.feri.um.si/en/members/mernik/.

 

Talk title: How Evolutionary Algorithms Can Efficiently Explore and Exploit the Search Space

Abstract: Evolutionary Algorithms mimic nature with mechanisms such as selection, crossover, and mutation to solve various optimisation problems. To apply Evolutionary Algorithms properly, a deep understanding of the various selection, crossover, and mutation operators is required. However, exploration and exploitation are crucial steps and even more essential concepts for any search algorithm, yet these fundamental concepts are not well understood among practitioners using evolutionary algorithms. Furthermore, how to measure exploration and exploitation directly is an open problem in Evolutionary Computation. In this talk, I will first introduce the basic ingredients of every evolutionary algorithm, point out many problems and mistakes inexperienced users face, and survey different applications of evolutionary algorithms. In the second part of my talk, our novel direct measure of exploration and exploitation will be explained. It is based on attraction basins: parts of a search space, each with its own distinguished point, called an attractor, towards which neighbouring points tend to evolve. Each search point can be associated with a particular attraction basin. If a newly generated search point belongs to the same attraction basin as its parent, the search step is classified as exploitation; otherwise, as exploration. In the last part, I will mention some open problems regarding the computation of attraction basins for continuous problems.
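The attraction-basin idea in the abstract can be sketched in a few lines of code. The toy function, the step size, and the hill-climbing attractor finder below are illustrative choices of this page, not the speaker's actual implementation: a child point is classified as exploitation if steepest descent carries it to the same attractor (local minimum) as its parent, and as exploration otherwise.

```python
def attractor(x, f, step=0.1, max_iter=1000):
    """Follow steepest descent on a 1-D function until a local minimum
    (the attractor of x's attraction basin) is reached."""
    for _ in range(max_iter):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            break
        x = best
    return round(x, 1)  # round away floating-point drift

def classify_move(parent, child, f):
    """Exploitation if the child falls in the parent's attraction basin,
    exploration otherwise."""
    same_basin = attractor(parent, f) == attractor(child, f)
    return "exploitation" if same_basin else "exploration"

# Toy bimodal function with minima at x = 0 and x = 3.
f = lambda x: (x * (x - 3)) ** 2

print(classify_move(0.2, 0.4, f))  # child stays in the basin of x = 0
print(classify_move(0.2, 2.8, f))  # child crosses into the basin of x = 3
```

Computing attraction basins exactly for continuous problems is, as the abstract notes, an open problem; the grid-like descent above is only a cheap approximation.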


Prof. Dr. Verginica Barbu Mititelu

Verginica Barbu Mititelu is a senior researcher in the Natural Language Technology group of the Romanian Academy's Research Institute for Artificial Intelligence, the representative of the Multiword Expression Section on the SIGLEX board, and leader of the Working Group on the Lexicon-Corpus Interface of the UniDive COST Action. She has been constantly involved in the development of language resources, especially for Romanian, applying up-to-date annotation schemas and adjusting them to the characteristics of the language under study. She has also been concerned with standardizing the resources developed, especially by applying Linked Data principles of representation, and with registering their metadata in international data repositories.

 

Talk title: A Fly in the Ointment: Multiword Expressions and Their Challenges for NLP Tools and Tasks

Abstract: Commonly referred to as expressions, locutions, idioms, or constructions, multiword combinations that exhibit idiosyncratic behaviour across linguistic levels constitute a pervasive and well-recognised challenge in natural language. Within computational linguistics, such constructions have long been identified as a significant 'bottleneck' for Natural Language Processing. Despite the substantial progress brought about by large-scale language models, the robust and systematic handling of these non-compositional units remains an open problem. In this talk, I will discuss the computational challenges posed by these phenomena and present the broader efforts of the international research community dedicated to their linguistic and computational modelling.


Prof. Dr. Yaroslav D. Sergeyev


Yaroslav D. Sergeyev is Distinguished Professor at the University of Calabria, Italy, and Head of the Numerical Calculus Laboratory at the same university. He is Editor-in-Chief of Springer's journal Operations Research Forum. For several decades he was also an Affiliated Researcher at the Institute of High-Performance Computing and Networking of the Italian National Research Council, and he is Affiliated Faculty at the Center for Applied Optimization, University of Florida, Gainesville, USA. His research interests include global optimization (he was President of the International Society of Global Optimization, 2017-2021), infinity computing and calculus (a field he founded), numerical computations, scientific computing, philosophy of computations, set theory, number theory, fractals, parallel computing, and interval analysis. He has been awarded several research prizes (the International Constantin Carathéodory Prize, the International ICNAAM Research Excellence Award, and the International Prize of the city of Gioacchino da Fiore, all in 2023; the Khwarizmi International Award, 2017; the Pythagoras International Prize in Mathematics, 2010; EUROPT Fellow, 2016; the Outstanding Achievement Award from the 2015 World Congress in Computer Science, Computer Engineering, and Applied Computing, USA; Honorary Fellowship, the highest distinction of the European Society of Computational Methods in Sciences, Engineering and Technology, 2015; the 2015 Journal of Global Optimization (Springer) Best Paper Award; the Lagrange Lecture, Turin University, Italy, 2010; the MAIK Prize for the best scientific monograph published in Russian, Moscow, 2008; etc.). In 2020, he was elected a corresponding member of the Accademia Peloritana dei Pericolanti in Messina, Italy. Since 2021 he has been included in the rating "Top 2% highly cited authors in Scopus" produced by Stanford University, the list "Top Italian Scientists. Mathematics", the list of top researchers produced by Research.com, etc.
In 2022, his biography was published in Chinese by the journal Mathematical Culture. In 2023, the book "Primi Passi nell'Aritmetica dell'Infinito" by Prof. Davide Rizza of the University of East Anglia was published; the book is dedicated to teaching the Infinity Computing methodology developed by Prof. Sergeyev. His list of publications contains more than 300 items (among them 6 authored and 11 edited books and more than 130 articles in international journals). He is a member of the editorial boards of one book series (Springer), 12 international and 3 national journals, and co-editor of 14 special issues. He has delivered more than a hundred plenary/keynote lectures and tutorials at prestigious international congresses. He was Chairman of 7 and Co-Chairman of 8 international conferences and a member of the Scientific Committees of more than 110 international congresses. In 2023, the 21st International Conference of Numerical Analysis and Applied Mathematics, Crete (Greece), was dedicated to the achievements of Prof. Sergeyev and to his 60th birthday.
More information is available at https://www.yaroslavsergeyev.com and https://www.theinfinitycomputer.com

 

Talk title: Numerical Infinities and Infinitesimals in Optimization

Abstract: In this talk, a recent computational methodology is described. It has been introduced to allow one to work numerically with infinities and infinitesimals in a unified computational framework. It is based on the principle 'The part is less than the whole', applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses as a computational device the Infinity Computer (a new kind of supercomputer patented in several countries), which works numerically with infinite and infinitesimal numbers that can be written in a positional system with an infinite radix. On a number of examples (numerical differentiation, divergent series, ordinary differential equations, etc.) it is shown that the new approach can be useful from both theoretical and computational points of view. The main attention is dedicated to applications in optimization (local, global, and multi-objective). The accuracy of the obtained results is continuously compared with results obtained by traditional tools used to work with mathematical objects involving infinity. For more information, see the dedicated web page http://www.theinfinitycomputer.com. The web page developed at the University of East Anglia, UK, is dedicated to teaching the methodology: https://www.numericalinfinities.com/
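The positional system with an infinite radix mentioned in the abstract can be illustrated with a toy sketch: a number is stored as a finite sum of terms c·①^p, where ① (grossone) is the infinite radix, powers p > 0 give infinite parts, p = 0 the finite part, and p < 0 infinitesimal parts. The class below is this page's own didactic representation, not the patented Infinity Computer arithmetic:

```python
from collections import defaultdict

class GrossNumber:
    """A number written positionally in the infinite radix G (grossone):
    a finite sum of terms coeff * G**power, stored as {power: coeff}."""
    def __init__(self, terms):
        # Drop zero coefficients so representations are canonical.
        self.terms = {p: c for p, c in terms.items() if c != 0}

    def __add__(self, other):
        out = defaultdict(float)
        for p, c in self.terms.items():
            out[p] += c
        for p, c in other.terms.items():
            out[p] += c
        return GrossNumber(out)

    def __mul__(self, other):
        out = defaultdict(float)
        for p1, c1 in self.terms.items():
            for p2, c2 in other.terms.items():
                out[p1 + p2] += c1 * c2  # G**p1 * G**p2 = G**(p1 + p2)
        return GrossNumber(out)

    def __repr__(self):
        parts = [f"{c}*G^{p}" for p, c in sorted(self.terms.items(), reverse=True)]
        return " + ".join(parts) or "0"

# (3G + 2) * (G - 4G^-1): infinite, finite, and infinitesimal parts
# coexist in one number and are kept separate by their powers of G.
x = GrossNumber({1: 3, 0: 2})
y = GrossNumber({1: 1, -1: -4})
print(x * y)
```

The key point the sketch conveys is that infinite and infinitesimal parts never collapse into each other: each power of the radix keeps its own digit, just as powers of 10 do in ordinary positional notation.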


Dr. Michal Valko


Michal is the Chief Models Officer at a stealth startup, a tenured researcher at Inria, and a lecturer at the MVA master's program of ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible. That is why he works on methods and settings that are able to deal with minimal feedback, such as deep reinforcement learning, bandit algorithms, self-supervised learning, and self-play. Michal has recently worked on representation learning, world models, and deep (reinforcement) learning algorithms that have some theoretical underpinning. In the past he has also worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. Michal is now working on large language models (LLMs), in particular providing algorithmic solutions for their scalable fine-tuning and alignment. He received his Ph.D. in 2011 from the University of Pittsburgh, before getting tenure at Inria in 2012 and starting Google DeepMind Paris in 2018 with Rémi Munos. In 2024, he became the principal Llama engineer at Meta, building the online reinforcement learning stack and research for Llama 3.

 

Talk title: Gamification of Large Language Models

Abstract: Reinforcement learning from human feedback (RLHF) is the go-to solution for aligning large language models (LLMs) with human preferences; it proceeds by learning a reward model that is subsequently used to optimize the LLM's policy. However, an inherent limitation of current reward models is their inability to fully represent the richness of human preferences and their dependency on the sampling distribution. In the first part, we turn to an alternative pipeline for the fine-tuning of LLMs using pairwise human feedback. Our approach entails the initial learning of a preference model, which is conditioned on two inputs given a prompt, followed by the pursuit of a policy that consistently generates responses preferred over those generated by any competing policy, thus defining the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF) and give a new algorithmic solution, Nash-MD, founded on the principles of mirror descent. NLHF is compelling for preference learning and policy optimization, with the potential of advancing the field of aligning LLMs with human preferences. In the second part of the talk, we delve into a deeper theoretical understanding of fine-tuning approaches such as RLHF with PPO and offline fine-tuning with DPO (direct preference optimization) based on the Bradley-Terry model, and come up with a new class of LLM alignment algorithms with better practical and theoretical properties. We finish with the newest work showing links between these approaches and building on top of them.
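The Bradley-Terry model and the DPO objective mentioned in the abstract can be made concrete with a small numerical sketch. Scalar log-probabilities stand in for full sequence likelihoods, and the reward and beta values are illustrative; this is a didactic sketch of the published formulas, not the speaker's implementation:

```python
import math

def bradley_terry(r_w, r_l):
    """Bradley-Terry probability that the response with reward r_w is
    preferred over the one with reward r_l: sigmoid of the reward gap."""
    return 1.0 / (1.0 + math.exp(-(r_w - r_l)))

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one preference pair (w = chosen, l = rejected):
    -log sigmoid(beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))).
    The reward model is implicit: the policy/reference log-ratio plays
    the role of the Bradley-Terry reward."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# If the policy already favours the chosen response more than the
# reference does (positive margin), the loss is below -log(0.5) ~ 0.693;
# gradient descent pushes the margin further up.
print(dpo_loss(logp_w=-2.0, logp_l=-5.0, ref_logp_w=-3.0, ref_logp_l=-4.0))
```

NLHF replaces exactly this pairwise reward-gap view with a general preference model and looks for a policy that is un-improvable against any competitor, i.e. a Nash equilibrium rather than a maximizer of a fixed scalar reward.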