Dr. Hrachya Astsatryan
Hrachya Astsatryan (PhD Computer Science, MS Applied Mathematics) is the Director of the Institute for Informatics and Automation Problems at the National Academy of Sciences of Armenia, where he also leads the Center for Scientific Computing. His work lies at the intersection of high-performance computing, artificial intelligence, and scientific computing. He has authored over 100 publications in leading international journals, conferences, and workshops. In 2023, he was awarded the Commemorative Medal of the Prime Minister of the Republic of Armenia, and in 2005 he received an award from the President of the Republic of Armenia for his outstanding work in Technical Sciences and Information Technologies.
Talk title: High-Performance Artificial Intelligence: Bridging Advanced Computing and AI
Abstract: With high-performance computing (HPC) and artificial intelligence (AI) converging, new possibilities are arising to address scientific, industrial, and societal challenges at a large scale. High-Performance AI (HiPerAI) couples scalable AI algorithms with cutting-edge computing infrastructure. The talk is built on four pillars: (1) our adaptive HPC platform, (2) AI-optimized HPC workloads, (3) AI applications in scientific simulations (physics, chemistry, biology, quantum systems, and Earth observation), where AI enhances modeling, flexibility, and predictability, and (4) emerging trends such as secure federated computing, edge-cloud-HPC convergence, hybrid quantum-classical AI, and sustainable, reproducible scientific computing.
Petra Dalunde
Petra Dalunde has a background as a political advisor at Stockholm City Hall, an educator, and a testbed creator. For the last three years she has been working at RISE (Research Institutes of Sweden), coordinating the CitCom TEF (Testing and Experimentation Facilities for AI in Smart and Sustainable Cities and Communities) and serving as a coordinator for Formal Test and Evaluation of AI at RISE; in 2025 she became a co-director of the Swedish AI Factory MIMER.
Talk title: Swedish AI Factory MIMER
Abstract: The presentation will explore how the Swedish AI Factory MIMER will work, in practice, towards alignment with the AI Continent Action Plan launched by the European Commission. The plan describes how the different EU AI innovation infrastructures (TEFs, EDIHs, Gigafactories, AI Factories, the AI-on-Demand Platform, Data Spaces, and so on) relate to each other, and how they support the development of ethical and trustworthy AI and compliance with the AI Act.
Prof. Dr. Marjan Mernik
Marjan Mernik received the MSc and PhD degrees in Computer Science from the University of Maribor in 1994 and 1998, respectively. He is currently a professor at the University of Maribor, Faculty of Electrical Engineering and Computer Science. He was a visiting professor at the University of Alabama at Birmingham, Department of Computer and Information Sciences. His research interests include programming languages, domain-specific (modelling) languages, grammar and semantic inference, and evolutionary computation. He is the Editor-in-Chief of the Journal of Computer Languages, as well as an Associate Editor of the journals Applied Soft Computing and Swarm and Evolutionary Computation. He was named a Highly Cited Researcher in 2017 and 2018. More information about his work is available at https://lpm.feri.um.si/en/members/mernik/.
Talk title: How Evolutionary Algorithms Can Efficiently Explore and Exploit the Search Space
Abstract: Evolutionary Algorithms mimic nature, using mechanisms such as selection, crossover, and mutation to solve various optimisation problems. To apply Evolutionary Algorithms properly, a deep understanding of the various selection, crossover, and mutation operators is required. However, exploration and exploitation are even more essential concepts for any search algorithm, and yet they are not well understood among practitioners using evolutionary algorithms. Furthermore, how to measure exploration and exploitation directly is an open problem in Evolutionary Computation. In this talk, I will first introduce the basic ingredients of every evolutionary algorithm and point out many problems and mistakes inexperienced users face, as well as different applications of evolutionary algorithms. In the second part of my talk, our novel direct measure of exploration and exploitation will be explained, based on attraction basins: parts of the search space, each of which has its own point, called an attractor, to which neighbouring points tend to evolve. Each search point can be associated with a particular attraction basin. If a newly generated search point belongs to the same attraction basin as its parent, the search step is identified as exploitation; otherwise, as exploration. In the last part, I will mention some open problems regarding computing attraction basins for continuous problems.
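To make the basin-membership test concrete, here is a minimal, hypothetical Python sketch (not the measure presented in the talk): it approximates the attractor of a point with a crude local descent and labels an offspring as exploitation when parent and offspring descend to the same attractor. The fitness function, neighbourhood size, step, and tolerance are illustrative assumptions.

```python
import numpy as np

def fitness(x):
    # Illustrative multimodal test function (Rastrigin), to be minimised.
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def attractor(x, step=0.05, iters=500, rng=np.random.default_rng(1)):
    # Crude local descent: repeatedly move to the best of a few random
    # neighbours; the point it settles on approximates the attractor of the
    # basin containing x.
    x = np.array(x, dtype=float)
    for _ in range(iters):
        neighbours = x + rng.uniform(-step, step, size=(20, x.size))
        best = min(neighbours, key=fitness)
        if fitness(best) < fitness(x):
            x = best
    return x

def classify_move(parent, child, tol=0.5):
    # Same (approximate) attractor -> the move stays inside one basin, so it
    # counts as exploitation; a different attractor counts as exploration.
    same_basin = np.linalg.norm(attractor(parent) - attractor(child)) < tol
    return "exploitation" if same_basin else "exploration"

rng = np.random.default_rng(0)
parent = rng.uniform(-5.12, 5.12, size=2)
child_small = parent + rng.normal(0.0, 0.05, size=2)  # small Gaussian mutation
child_large = parent + rng.normal(0.0, 2.00, size=2)  # large Gaussian mutation
print(classify_move(parent, child_small), classify_move(parent, child_large))
```

On a multimodal landscape such as this, a small mutation typically stays inside the parent's basin and is counted as exploitation, while a large mutation tends to land in a different basin and is counted as exploration.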
Prof. Dr. Verginica BARBU MITITELU
Verginica Barbu Mititelu is a senior researcher in the Natural Language Technology group of the Romanian Academy Research Institute for Artificial Intelligence, the representative of the Multiword Expression section on the SIGLEX board, and leader of the Working Group on the Lexicon-Corpus Interface of the UniDive COST Action. She has been constantly involved in the development of language resources, especially for Romanian, applying up-to-date annotation schemas and adjusting them to the characteristics of the language under study. She has also been concerned with standardizing the resources developed, especially using Linked Data principles of representation, and with registering their metadata in international data repositories.
Talk title: A Fly in the Ointment. Multiword Expressions and Their Challenges for NLP Tools and Tasks
Abstract: Commonly referred to as expressions, locutions, idioms, or constructions, multiword combinations that exhibit idiosyncratic behaviour across linguistic levels constitute a pervasive and well-recognised challenge in natural language. Within computational linguistics, such constructions have long been identified as a significant 'bottleneck' for Natural Language Processing. Despite the substantial progress brought about by large-scale language models, the robust and systematic handling of these non-compositional units remains an open problem. In this talk, I will discuss the computational challenges posed by these phenomena and present the broader efforts of the international research community dedicated to their linguistic and computational modelling.
Prof. Dr. Panos M. Pardalos
Panos M. Pardalos serves as professor emeritus of industrial and systems engineering at the University of Florida. Additionally, he is the Paul and Heidi Brown Preeminent Professor of industrial and systems engineering. He is an affiliated faculty member of the Computer and Information Science Department, the Hellenic Studies Center, and the Biomedical Engineering program, and the director of the Center for Applied Optimization. Pardalos is a world-leading expert in global and combinatorial optimization. His recent research interests include network design problems, optimization in telecommunications, e-commerce, data mining, biomedical applications, and massive computing.
Talk title: to be announced
Abstract: to be announced
Dr. Michal Valko
Michal is the Chief Models Officer at a stealth startup, a tenured researcher at Inria, and a lecturer in the MVA master's programme at ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible. That is why he works on methods and settings that can deal with minimal feedback, such as deep reinforcement learning, bandit algorithms, self-supervised learning, and self-play. Michal has recently worked on representation learning, world models, and deep (reinforcement) learning algorithms that have some theoretical underpinning. In the past he has also worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. Michal is now working on large language models (LLMs), in particular providing algorithmic solutions for their scalable fine-tuning and alignment. He received his PhD in 2011 from the University of Pittsburgh, obtained tenure at Inria in 2012, and started Google DeepMind Paris in 2018 with Rémi Munos. In 2024, he became the principal Llama engineer at Meta, building the online reinforcement learning stack and research for Llama 3.
Talk title: Gamification of Large Language Models
Abstract: Reinforcement learning from human feedback (RLHF) is a go-to solution for aligning large language models (LLMs) with human preferences; it proceeds by learning a reward model that is then used to optimize the LLM's policy. However, an inherent limitation of current reward models is their inability to fully represent the richness of human preferences and their dependency on the sampling distribution. In the first part, we turn to an alternative pipeline for the fine-tuning of LLMs using pairwise human feedback. Our approach entails first learning a preference model, which is conditioned on two inputs given a prompt, followed by the pursuit of a policy that consistently generates responses preferred over those generated by any competing policy, thus defining the Nash equilibrium of this preference model. We term this approach Nash learning from human feedback (NLHF) and give a new algorithmic solution, Nash-MD, founded on the principles of mirror descent. NLHF is compelling for preference learning and policy optimization, with the potential of advancing the field of aligning LLMs with human preferences. In the second part of the talk, we delve into a deeper theoretical understanding of fine-tuning approaches such as RLHF with PPO and offline fine-tuning with DPO (direct preference optimization) based on the Bradley-Terry model, and come up with a new class of LLM alignment algorithms with better practical and theoretical properties. We finish with the newest work showing links between these approaches and building on top of them.
- arXiv:2312.00886
- arXiv:2310.12036
- arXiv:2402.05749
- arXiv:2402.02992
- arXiv:2310.17303
- arXiv:2405.08448
- arXiv:2410.17055
- arXiv:2503.19612
- arXiv:2505.19731
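As a rough, toy illustration of the Nash-equilibrium view described in the abstract above (a tabular caricature, not the LLM-scale algorithm from the papers listed), the Python sketch below runs a mirror-descent self-play update against a reference-regularised opponent on a made-up preference model over three candidate responses; the preference matrix, mixing weight, and step size are all invented for illustration.

```python
import numpy as np

# Toy setup: three candidate responses to one prompt, and a made-up preference
# model P[i, j] = probability that response i is preferred to response j.
# The preferences are cyclic (0 beats 1, 1 beats 2, 2 beats 0), so no single
# response wins against everything and the equilibrium is a mixture.
P = np.array([[0.5, 0.7, 0.2],
              [0.3, 0.5, 0.9],
              [0.8, 0.1, 0.5]])

ref = np.full(3, 1.0 / 3.0)   # reference policy (uniform), used for regularisation
pi = ref.copy()               # current policy over the three responses
eta, beta = 0.2, 0.2          # step size and mixing weight (illustrative values)

for _ in range(500):
    # Geometric mixture of the current policy and the reference policy,
    # renormalised: the "regularised opponent" flavour of Nash-MD.
    mix = pi ** (1.0 - beta) * ref ** beta
    mix /= mix.sum()
    # Expected preference of each response when compared against the mixture.
    advantage = P @ mix - 0.5
    # Mirror-descent (exponential-weights) step taken from the mixture.
    pi = mix * np.exp(eta * advantage)
    pi /= pi.sum()

print(np.round(pi, 3))  # mixed policy approximating the (regularised) equilibrium
```

Because the toy preferences are non-transitive, optimising a single scalar reward has no satisfactory answer, whereas the game-theoretic view naturally yields a mixed policy; the actual algorithms, analyses, and LLM-scale results are in the papers listed above.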