Temporal Graph Reading Group

TGL logo

Every Thursday from 11am to 12pm EST

Join us via Zoom

Follow us on Twitter @tempgraph_rg
Follow us on Bluesky @tempgraphrg.bsky.social
Sign up here to receive email communications about the reading group
Visit our YouTube channel for past recordings
Join the TGL Slack to discuss with the TG community: here.
Contact here if there are any issues with the invite link.
Add the reading group to your Google Calendar to follow us each week.

Upcoming Talks

The reading group will resume in Winter 2025. Have a great holiday season, everyone! See upcoming talks here.

[January 16th, 2025]

  • Interpreting Temporal Graph Neural Networks with Koopman Theory
    Presenter: Michele Guerra, UiT The Arctic University of Norway, Department of Mathematics and Statistics, Norway
    Michele Guerra is a Ph.D. student in the Department of Mathematics and Statistics at UiT The Arctic University of Norway. He received his MSc (2019) and BSc (2016) degrees in Physics from the University of Padua, Italy. Michele's research spans from uncertainty quantification to interpretability for randomised and trainable models on time series and graphs.

    Spatiotemporal graph neural networks (STGNNs) have shown promising results in many domains, from forecasting to epidemiology. However, understanding the dynamics learned by these models and explaining their behaviour is significantly more complex than for models dealing with static data. Inspired by Koopman theory, which allows a simpler description of intricate, nonlinear dynamical systems, we introduce an explainability approach for temporal graphs. We present two methods to interpret the STGNN’s decision process and identify the most relevant spatial and temporal patterns in the input for the task at hand. The first relies on dynamic mode decomposition (DMD), a Koopman-inspired dimensionality reduction method. The second relies on sparse identification of nonlinear dynamics (SINDy), a popular method for discovering governing equations, which we use for the first time as a general tool for explainability. We show how our methods can correctly identify interpretable features such as infection times and infected nodes in the context of dissemination processes.
  • Past Talks, Fall 2024

    [September 19th, 2024]

  • From Link Prediction to Forecasting: Information Loss in Batch-based Temporal Graph Learning
    Presenter: Moritz Lampert, Julius-Maximilians-Universität Würzburg, DE
    Moritz Lampert is a PhD student at the Chair of Machine Learning for Complex Networks, Center for Artificial Intelligence and Data Science (CAIDAS), Julius-Maximilians-Universität Würzburg, under the supervision of Prof. Ingo Scholtes. His research interests include different aspects of graph representation learning with a current focus on temporal graphs and applications in single-cell biology. Prior to his PhD studies, he completed his master's degree at the same institution.

    Dynamic link prediction is an important problem considered by many recent works proposing various approaches for learning temporal edge patterns. To assess their efficacy, models are evaluated on publicly available benchmark datasets involving continuous-time and discrete-time temporal graphs. However, as we show in this work, the suitability of common batch-oriented evaluation depends on the datasets' characteristics, which can cause two issues: First, for continuous-time temporal graphs, fixed-size batches create time windows with different durations, resulting in an inconsistent dynamic link prediction task. Second, for discrete-time temporal graphs, the sequence of batches can additionally introduce temporal dependencies that are not present in the data. In this work, we empirically show that this common evaluation approach leads to skewed model performance and hinders the fair comparison of methods. We mitigate this problem by reformulating dynamic link prediction as a link forecasting task that better accounts for temporal information present in the data. We provide implementations of our new evaluation method for commonly used graph learning frameworks.
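    The batching issue described above is easy to see directly: over irregularly sampled events, fixed-size batches span time windows of very different durations. A minimal sketch, using made-up timestamps (not the paper's datasets or code):

```python
# Irregularly spaced event timestamps from a hypothetical
# continuous-time temporal graph (e.g., seconds since trace start).
timestamps = [0.0, 0.1, 0.2, 0.3, 5.0, 5.1, 40.0, 41.0, 90.0]

def batch_durations(ts, batch_size):
    """Time span (max - min timestamp) covered by each fixed-size batch."""
    batches = [ts[i:i + batch_size] for i in range(0, len(ts), batch_size)]
    return [b[-1] - b[0] for b in batches]

# Three batches of three edges each cover roughly 0.2, 4.8 and 50.0 time
# units: the "same" prediction task looks very different batch to batch.
print(batch_durations(timestamps, 3))
```

    Forecasting over fixed-duration windows instead holds the prediction horizon constant across evaluation steps, at the cost of a varying number of edges per window.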
  • [September 26th, 2024]

  • Graph Deep Learning for Time Series Forecasting
    Presenter: Andrea Cini, The Swiss AI Lab IDSIA USI-SUPSI & Università della Svizzera italiana, Switzerland.
    Andrea Cini is a postdoctoral researcher at the Graph Machine Learning Group and IDSIA, at USI. He obtained his PhD from USI under the supervision of Prof. Cesare Alippi, with a thesis on graph deep learning methods for time series forecasting. Previously, he was a visiting researcher at Imperial College London and earned his MSc and BSc at Politecnico di Milano. His research focuses on relational time series processing; in particular, he introduced methods for forecasting, data reconstruction, and probabilistic graph learning. His work has been published in leading machine learning venues, including JMLR, NeurIPS, ICML, and ICLR, where he frequently serves as a reviewer. He is one of the creators of the Torch Spatiotemporal library.

    Graph-based deep learning methods have become popular tools to process collections of correlated time series. Differently from traditional multivariate forecasting methods, neural graph-based predictors take advantage of pairwise relationships by conditioning forecasts on a (possibly dynamic) graph spanning the time series collection. The conditioning can take the form of an architectural inductive bias on the neural forecasting architecture, resulting in a family of deep learning models called spatiotemporal graph neural networks. Such relational inductive biases enable the training of global forecasting models on large time-series collections, while at the same time localizing predictions w.r.t. each element in the set (i.e., graph nodes) by accounting for local correlations among them (i.e., graph edges). Indeed, recent theoretical and practical advances in graph neural networks and deep learning for time series forecasting make the adoption of such processing frameworks appealing and timely. However, most of the studies in the literature focus on proposing variations of existing neural architectures by taking advantage of modern deep learning practices, while foundational and methodological aspects have not been subject to systematic investigations. To fill the gap, this paper aims to introduce a comprehensive methodological framework that formalizes the forecasting problem and provides design principles for graph-based predictive models and methods to assess their performance. At the same time, together with an overview of the field, we provide design guidelines, recommendations, and best practices, as well as an in-depth discussion of open challenges and future research directions.
  • [October 3rd, 2024]

  • AnyGraph: Graph Foundation Model in the Wild
    Presenter: Lianghao Xia, The University of Hong Kong
    Dr. Xia Lianghao is currently a postdoctoral researcher at the Data Intelligence Laboratory of the University of Hong Kong, collaborating with Professor Huang Chao. Lianghao's research mainly focuses on the field of data science, with particular attention to foundational models, graph learning, information retrieval, and spatio-temporal modeling. He has published 14 first-author papers in top conferences in related fields. His work has received multiple awards, including Most Influential Paper Awards at WWW'2023 and SIGIR'2023 and 2022, as well as a Best Paper Nomination and Spotlight Paper Award at WWW'2023. In addition, he was included in Stanford University's list of the world's top 2% of scientists in 2024.

    The growing ubiquity of relational data structured as graphs has underscored the need for graph learning models with exceptional generalization capabilities. However, current approaches often struggle to effectively extract generalizable insights, frequently requiring extensive fine-tuning and limiting their versatility. Graph foundation models offer a transformative solution, with the potential to learn robust, generalizable representations from graph data. This enables more effective and adaptable applications across a wide spectrum of tasks and domains. In this work, we investigate a unified graph model, AnyGraph, designed to handle key challenges: i) Structure Heterogeneity. Addressing distribution shift in graph structural information; ii) Feature Heterogeneity. Handling diverse feature representation spaces across graph datasets; iii) Fast Adaptation. Efficiently adapting the model to new graph domains; iv) Scaling Law Emergence. Enabling the model to exhibit scaling law behavior, where its performance scales favorably with the amount of data and parameter sizes. To tackle these critical challenges, we build AnyGraph upon a Graph Mixture-of-Experts (MoE) architecture. This approach empowers the model to effectively manage both the in-domain and cross-domain distribution shift concerning structure-level and feature-level heterogeneity. Furthermore, a lightweight graph expert routing mechanism is proposed to facilitate AnyGraph’s fast adaptability to new data and domains. Our extensive experiments on 38 diverse graph datasets have demonstrated the strong zero-shot learning performance of AnyGraph across diverse graph domains with significant distribution shift. Furthermore, we have validated the model’s fast adaptation ability and scaling law emergence, showcasing its versatility.
  • [October 10th, 2024]

  • DTGB: A Comprehensive Benchmark for Dynamic Text-Attributed Graphs
    Presenter: Jiasheng Zhang, University of Electronic Science and Technology of China
    Jiasheng Zhang is a Ph.D. student in the Department of Computer Science at the University of Electronic Science and Technology of China (UESTC), specializing in temporal knowledge graphs and dynamic graph learning. He is currently visiting Yale University, where he is working with Rex Ying on using large language models for temporal graphs and temporal knowledge reasoning. He has been recognized as a Rising Star in Scientific Research by UESTC.

    Dynamic text-attributed graphs (DyTAGs) are prevalent in various real-world scenarios, where each node and edge are associated with text descriptions, and both the graph structure and text descriptions evolve over time. Despite their broad applicability, there is a notable scarcity of benchmark datasets tailored to DyTAGs, which hinders the potential advancement in many research fields. To address this gap, we introduce Dynamic Text-attributed Graph Benchmark (DTGB), a collection of large-scale, time-evolving graphs from diverse domains, with nodes and edges enriched by dynamically changing text attributes and categories. To facilitate the use of DTGB, we design standardized evaluation procedures based on four real-world use cases: future link prediction, destination node retrieval, edge classification, and textual relation generation. These tasks require models to understand both dynamic graph structures and natural language, highlighting the unique challenges posed by DyTAGs. Moreover, we conduct extensive benchmark experiments on DTGB, evaluating 7 popular dynamic graph learning algorithms and their variants of adapting to text attributes with LLM embeddings, along with 6 powerful large language models (LLMs). Our results show the limitations of existing models in handling DyTAGs. Our analysis also demonstrates the utility of DTGB in investigating the incorporation of structural and textual dynamics. The proposed DTGB fosters research on DyTAGs and their broad applications. It offers a comprehensive benchmark for evaluating and advancing models to handle the interplay between dynamic graph structures and natural language. The dataset and source code are available at https://github.com/zjs123/DTGB.
  • [October 17th, 2024]

  • Unifying Evolution, Explanation, and Discernment: A Generative Approach for Dynamic Graph Counterfactuals (KDD 2024)
    Presenter: Bardh Prenkaj, Technical University of Munich
    Dr. Bardh Prenkaj is a postdoc in AI at the Technical University of Munich, where he works with Gjergji Kasneci on understanding what makes us (or not) human and what machine intelligence actually is. From 10/2022 to 10/2024, he was a postdoc at Sapienza University of Rome working on anomaly detection and explainability. He received his PhD in Computer Science in 2022 and his MSc in 2018. In Rome, he worked in the AIIM (formerly IIM) group advised by Paola Velardi and Giovanni Stilo, and then as a senior researcher in the CS Department at Sapienza on anomaly detection for social isolation disorders. He built BetterScholar, an alternative to Google Scholar profiles, with Antonio Norelli.

    We present GRACIE (Graph Recalibration and Adaptive Counterfactual Inspection and Explanation), a novel approach for generative classification and counterfactual explanations of dynamically changing graph data. We study graph classification problems through the lens of generative classifiers. We propose a dynamic, self-supervised latent variable model that updates by identifying plausible counterfactuals for input graphs and recalibrating decision boundaries through contrastive optimization. Unlike prior work, we do not rely on linear separability between the learned graph representations to find plausible counterfactuals. Moreover, GRACIE eliminates the need for stochastic sampling in latent spaces and graph-matching heuristics. Our work distills the implicit link between generative classification and loss functions in the latent space, a key insight to understanding recent successes with this architecture. We further observe the inherent trade-off between validity and pulling explainee instances towards the central region of the latent space, empirically demonstrating our theoretical findings. In extensive experiments on synthetic and real-world graph data, we attain considerable improvements, reaching ∼99% validity when sampling sets of counterfactuals even in the challenging setting of dynamic data landscapes.
  • [October 24th, 2024]

  • Long Range Propagation on Continuous-Time Dynamic Graphs (ICML 2024)
    Presenter: Alessio Gravina, Department of Computer Science, University of Pisa
    Alessio Gravina is a postdoctoral researcher at the University of Pisa, Italy. He received his Ph.D. from the University of Pisa in 2024 with a thesis on graph deep learning methods inspired by dynamical systems and neural differential equations. He holds MSc (2020) and BSc (2018) degrees in computer science from the University of Pisa. He has been a visiting researcher at the Huawei Research Center, Munich in 2023, at the Swiss AI Lab IDSIA in 2022, and at Stanford University in 2019, and won the Fujitsu AI-NLP Challenge in 2018. Alessio's interests are in the area of deep learning and machine learning for graphs, with a special focus on differential equation-based graph neural networks.

    Learning Continuous-Time Dynamic Graphs (C-TDGs) requires accurately modeling spatio-temporal information on streams of irregularly sampled events. While many methods have been proposed recently, we find that most message passing-, recurrent- or self-attention-based methods perform poorly on long-range tasks. These tasks require correlating information that occurred "far" away from the current event, either spatially (higher-order node information) or along the time dimension (events that occurred in the past). To address long-range dependencies, we introduce Continuous-Time Graph Anti-Symmetric Network (CTAN). Grounded within the ordinary differential equations framework, our method is designed for efficient propagation of information. In this paper, we show how CTAN's (i) long-range modeling capabilities are substantiated by theoretical findings and how (ii) its empirical performance on synthetic long-range benchmarks and real-world benchmarks is superior to other methods. Our results motivate CTAN's ability to propagate long-range information in C-TDGs as well as the inclusion of long-range tasks as part of temporal graph model evaluation.
  • [November 7th, 2024]

  • Making Temporal Betweenness Computation Faster and Restless (KDD 2024)
    Presenter: Filippo Brunelli European Commission — JRC
    Filippo Brunelli is a scientific officer at the Joint Research Centre of the European Commission. His current work revolves around quantitative and qualitative analysis of multimodal transport networks to support European transport policies aiming at enhancing accessibility and reducing externalities. He obtained his PhD degree from Inria, while being hosted at IRIF (Paris), under the supervision of Laurent Viennot and Pierluigi Crescenzi. The focus of the thesis was on temporal graphs, specifically exploring models and algorithms with potential uses in transport networks, where efficient routing algorithms are crucial. Previously, he obtained a bachelor's and master's degree in mathematics, respectively, at the University of Trento and the University of Pisa (Italy).

    Buß et al. [KDD 2020] recently proved that the problem of computing the betweenness of all nodes of a temporal graph is computationally hard in the case of foremost and fastest paths, while it is solvable in time O(n^3 T^2) in the case of shortest and shortest foremost paths, where n is the number of nodes and T is the number of distinct time steps. A new algorithm for temporal betweenness computation is introduced in this paper. In the case of shortest and shortest foremost paths, it requires O(n + M) space and runs in time O(nM) = O(n^3 T), where M is the number of temporal edges, thus significantly improving the algorithm of Buß et al. in terms of time complexity (note that T is usually large). Experimental evidence is provided that our algorithm performs between twice and almost 250 times better than the algorithm of Buß et al. Moreover, we were able to compute the exact temporal betweenness values of several large temporal graphs with over a million temporal edges. For such sizes, only approximate computation was possible by using the algorithm of Santoro and Sarpe [WWW 2022]. Maybe more importantly, our algorithm extends to the case of restless walks (that is, walks with waiting constraints in each node), thus providing a polynomial-time algorithm (with complexity O(nM)) for computing the temporal betweenness under several different optimality criteria. Such restless computation was known only for the shortest criterion (Rymar et al. [JGAA 2023]), with complexity O(n^2 M T^2). We performed an extensive experimental validation by comparing different waiting constraints and different optimisation criteria. Moreover, as a case study, we investigate six public transit networks including Berlin, Rome, and Paris. Overall, we find a general consistency between the different variants of betweenness centrality. However, we do measure an appreciable influence of waiting constraints, and note some cases of low correlation for certain pairs of criteria in some networks.
  • [November 14th, 2024]

  • TGB 2.0: A Benchmark for Learning on Temporal Knowledge Graphs and Heterogeneous Graphs (NeurIPS 2024 Datasets and Benchmarks Track)
    Presenter: Julia Gastinger, University of Mannheim, and Shenyang Huang, McGill University and Mila
    Julia Gastinger (she/her) is a Ph.D. student at the University of Mannheim, supervised by Professor Heiner Stuckenschmidt. Previously, she was a Research Scientist in the AI Innovations group at NEC Laboratories Europe. Her research primarily focuses on graph-based machine learning; she is interested in how to incorporate the time aspect into knowledge graph representations. She served as a Reviewing Chair and Co-Organizer of the Temporal Graph Learning Workshop @ NeurIPS 2023.
    Shenyang Huang (he/him) is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection and graph representation learning. He was the Organization Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs and link prediction on temporal graphs. He also enjoys writing medium blog posts about temporal graph learning.

    Multi-relational temporal graphs are powerful tools for modeling real-world data, capturing the evolving and interconnected nature of entities over time. Recently, many novel models have been proposed for ML on such graphs, intensifying the need for robust evaluation and standardized benchmark datasets. However, the availability of such resources remains scarce and evaluation faces added complexity due to reproducibility issues in experimental protocols. To address these challenges, we introduce Temporal Graph Benchmark 2.0 (TGB 2.0), a novel benchmarking framework tailored for evaluating methods for predicting future links on Temporal Knowledge Graphs and Temporal Heterogeneous Graphs with a focus on large-scale datasets, extending the Temporal Graph Benchmark. TGB 2.0 facilitates comprehensive evaluations by presenting eight novel datasets spanning five domains with up to 53 million edges. TGB 2.0 datasets are significantly larger than existing datasets in terms of the number of nodes, edges, or timestamps. In addition, TGB 2.0 provides a reproducible and realistic evaluation pipeline for multi-relational temporal graphs. Through extensive experimentation, we observe that 1) leveraging edge-type information is crucial to obtain high performance, 2) simple heuristic baselines are often competitive with more complex methods, 3) most methods fail to run on our largest datasets, highlighting the need for research on more scalable methods.
  • [November 21st, 2024]

  • RELBENCH: A Benchmark for Deep Learning on Relational Databases (NeurIPS 2024 Datasets and Benchmarks Track)
    Presenter: Rishabh Ranjan Stanford University
    Rishabh is a second year PhD student at Stanford University. He is advised by Prof. Jure Leskovec and also works with Prof. Carlos Guestrin. He completed his undergrad in Computer Science and Engineering at IIT Delhi, where he was the President’s Gold Medalist for the class of 2022 and also received the Suresh Chandra Memorial Trust Award for the best undergrad thesis in CS. Prior to the PhD, he spent a year at Carnegie Mellon University as a visiting research scholar hosted by Prof. Zachary Lipton. His interests lie at the intersection of relational deep learning and foundation models, with the research goal of extending the success of foundation models like LLMs to structured data such as tables, time series and relational databases. Website: https://rishabh-ranjan.github.io

    Much of the world's most valued data is stored in relational databases and data warehouses, where the data is organized into many tables connected by primary-foreign key relations. However, building machine learning models using this data is both challenging and time consuming. The core problem is that no machine learning method is capable of learning on multiple tables interconnected by primary-foreign key relations. Current methods can only learn from a single table, so the data must first be manually joined and aggregated into a single training table, a process known as feature engineering. Feature engineering is slow, error-prone and leads to suboptimal models. Here we introduce an end-to-end deep representation learning approach to directly learn on data laid out across multiple tables. We name our approach Relational Deep Learning (RDL). The core idea is to view relational databases as a temporal, heterogeneous graph, with a node for each row in each table, and edges specified by primary-foreign key links. Message Passing Graph Neural Networks can then automatically learn across the graph to extract representations that leverage all input data, without any manual feature engineering. Relational Deep Learning leads to more accurate models that can be built much faster. To facilitate research in this area, we develop RelBench, a set of benchmark datasets and an implementation of Relational Deep Learning. The data covers a wide spectrum, from discussions on Stack Exchange to book reviews on the Amazon Product Catalog. Overall, we define a new research area that generalizes graph machine learning and broadens its applicability to a wide set of AI use cases.
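    The core construction, viewing a relational database as a graph with one node per row and edges given by primary-foreign key links, can be sketched in plain Python (a toy schema invented for illustration, not the RelBench API):

```python
# Two hypothetical tables; orders.customer_id is a foreign key into customers.id.
customers = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
orders = [
    {"id": 10, "customer_id": 1, "ts": "2024-01-05"},
    {"id": 11, "customer_id": 2, "ts": "2024-02-01"},
]

def tables_to_graph(customers, orders):
    """One node per row; one typed edge per primary-foreign key link."""
    nodes = [("customers", r["id"]) for r in customers]
    nodes += [("orders", r["id"]) for r in orders]
    edges = [(("orders", r["id"]), ("customers", r["customer_id"]))
             for r in orders]
    return nodes, edges

nodes, edges = tables_to_graph(customers, orders)
```

    A message-passing GNN run over this heterogeneous graph can aggregate a customer's order history without any manual join; the row timestamps make the graph temporal, so training splits can respect time.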
  • [November 28th, 2024]

  • Expressive Power of Temporal Message Passing
    Presenter: Przemysław Andrzej Wałega, University of Oxford and Queen Mary University of London, UK
    Przemysław Andrzej Wałega is an Associate Professor (Senior Lecturer) at Queen Mary University of London, Centre for Fundamental Computer Science, and a Senior Researcher at the University of Oxford, Department of Computer Science. He received a PhD in Logic from the Institute of Philosophy at the University of Warsaw, BEng and MS degrees in Mechatronics from Warsaw University of Technology, and a BS degree in Philosophy from the Institute of Philosophy, University of Warsaw. He is especially interested in methods for complex reasoning about time. His research is devoted to the temporal formalism DatalogMTL, whereas his PhD thesis is about interval temporal logics and the interplay between their computational complexity and expressive power. Most recently, he has been interested in characterising the expressive power of temporal graph neural networks with logical languages and explaining models' predictions with extracted logical rules.

    Graph neural networks (GNNs) have recently been adapted to temporal settings, often employing temporal versions of the message-passing mechanism known from GNNs. We divide temporal message passing mechanisms from the literature into two main types: global and local, and establish Weisfeiler-Leman characterisations for both. This allows us to formally analyse the expressive power of temporal message-passing models. We show that global and local temporal message-passing mechanisms have incomparable expressive power when applied to arbitrary temporal graphs. However, the local mechanism is strictly more expressive than the global mechanism when applied to colour-persistent temporal graphs, whose node colours are initially the same in all time points. Our theoretical findings are supported by experimental evidence, underlining practical implications of our analysis.
  • Past Talks, Winter 2024

    [January 18th, 2024]

  • Intensity Profile Projection: A Framework for Continuous-Time Representation Learning for Dynamic Networks NeurIPS 2023
    Presenter: Alexander Modell, Imperial College London, and Patrick Rubin-Delanchy, University of Bristol
    Patrick Rubin-Delanchy obtained his PhD in Statistics at Imperial College London in 2008, supervised by Prof. Andrew Walden. He was awarded a Heilbronn research fellowship in Data Science in 2012, which he first held at the University of Bristol and then at the University of Oxford. He became assistant professor of Statistics at the University of Bristol in July 2017 and was promoted to full professor in July 2022. As of January 2024, he is the chair of Statistical Learning at the University of Edinburgh. His research interests include data exploration, graph embedding and unsupervised learning.
    Alex Modell is a research associate in Statistics at Imperial College London. His research is in statistical modelling and large-scale inference for high-dimensional and graph-structured data, with a particular interest in unsupervised learning with temporal data. He is currently funded by the EPSRC grant “Network Time Series and Stochastic Processes”.

    We present a new representation learning framework, Intensity Profile Projection, for continuous-time dynamic network data. Given triples (i,j,t), each representing a time-stamped (t) interaction between two entities (i,j), our procedure returns a continuous-time trajectory for each node, representing its behaviour over time. The framework consists of three stages: estimating pairwise intensity functions, e.g. via kernel smoothing; learning a projection which minimises a notion of intensity reconstruction error; and constructing evolving node representations via the learned projection. The trajectories satisfy two properties, known as structural and temporal coherence, which we see as fundamental for reliable inference. Moreover, we develop estimation theory providing tight control on the error of any estimated trajectory, indicating that the representations could even be used in quite noise-sensitive follow-on analyses. The theory also elucidates the role of smoothing as a bias-variance trade-off, and shows how we can reduce the level of smoothing as the signal-to-noise ratio increases on account of the algorithm 'borrowing strength' across the network.
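    The first stage, kernel-smoothed intensity estimation for a node pair, can be sketched as follows (a minimal Gaussian-kernel illustration with invented event times, not the authors' implementation):

```python
import math

def kernel_intensity(event_times, t, bandwidth=1.0):
    """Kernel-smoothed estimate of the interaction intensity at time t,
    given the timestamps of past (i, j) interactions."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return sum(norm * math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
               for s in event_times)

# Interactions between one node pair cluster around t = 10, so the
# estimated intensity is high inside the burst and near zero long after.
events = [9.5, 9.8, 10.0, 10.2, 10.6]
assert kernel_intensity(events, 10.0) > kernel_intensity(events, 50.0)
```

    The bandwidth plays exactly the bias-variance role discussed in the abstract: wider kernels give smoother but more biased intensity estimates.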
  • [January 25th, 2024]

  • DistTGL: Distributed Memory-Based Temporal Graph Neural Network Training SC 2023
    Presenter: Hongkuan Zhou, University of Southern California
    Hongkuan Zhou is a Ph.D. student in the Ming Hsieh Department of Electrical and Computer Engineering at the University of Southern California. His research interest lies in the acceleration and application of GNNs and Temporal GNNs on general-purpose hardware.

    Memory-based Temporal Graph Neural Networks are powerful tools in dynamic graph representation learning and have demonstrated superior performance in many real-world applications. However, their node memory favors smaller batch sizes to capture more dependencies in graph events and needs to be maintained synchronously across all trainers. As a result, existing frameworks suffer from accuracy loss when scaling to multiple GPUs. Even worse, the tremendous overhead of synchronizing the node memory makes it impractical to deploy the solution in GPU clusters. In this work, we propose DistTGL --- an efficient and scalable solution to train memory-based TGNNs on distributed GPU clusters. DistTGL has three improvements over existing solutions: an enhanced TGNN model, a novel training algorithm, and an optimized system. In experiments, DistTGL achieves near-linear convergence speedup, outperforming the state-of-the-art single-machine method by 14.5% in accuracy and 10.17× in training throughput.
  • [Feb 8th, 2024]

  • Taming Local Effects in Graph-based Spatiotemporal Forecasting NeurIPS 2023
    Presenter: Ivan Marisca, The Swiss AI Lab IDSIA USI-SUPSI, Università della Svizzera italiana
    Ivan Marisca is a Ph.D. student at the Swiss AI lab IDSIA and USI - Università della Svizzera italiana (CH), under the supervision of Prof. Cesare Alippi. He received his MSc (2020) degree in Computer Science and Engineering from Politecnico di Milano (IT). Ivan's research focuses on graph-based learning from irregular spatiotemporal data, with applications in prediction, imputation, and control on sensor networks. His work has been published in top-tier conferences of the field, such as NeurIPS, ICLR, and AAAI. In addition to his research, Ivan regularly works as a teaching assistant and holds lectures in machine learning courses for MSc students at USI. He is also a co-creator and maintainer of Torch Spatiotemporal.

    Spatiotemporal graph neural networks have been shown to be effective in time series forecasting applications, achieving better performance than standard univariate predictors in several settings. These architectures take advantage of a graph structure and relational inductive biases to learn a single (global) inductive model to predict any number of the input time series, each associated with a graph node. Despite the gain achieved in computational and data efficiency w.r.t. fitting a set of local models, relying on a single global model can be a limitation whenever some of the time series are generated by a different spatiotemporal stochastic process. The main objective of this paper is to understand the interplay between globality and locality in graph-based spatiotemporal forecasting, while contextually proposing a methodological framework to rationalize the practice of including trainable node embeddings in such architectures. We ascribe to trainable node embeddings the role of amortizing the learning of specialized components. Moreover, embeddings allow for 1) effectively combining the advantages of shared message-passing layers with node-specific parameters and 2) efficiently transferring the learned model to new node sets. Supported by strong empirical evidence, we provide insights and guidelines for specializing graph-based models to the dynamics of each time series and show how this aspect plays a crucial role in obtaining accurate predictions.
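    The global-plus-local recipe, one predictor shared across all series combined with a small trainable embedding per node, can be sketched in pure Python (a toy linear predictor with invented dimensions; the paper's models are full spatiotemporal GNNs):

```python
import random

random.seed(0)
n_nodes, window, emb_dim = 4, 3, 2

# One weight vector shared by every time series (the global component)...
shared_w = [random.gauss(0.0, 0.1) for _ in range(window + emb_dim)]
# ...plus a small trainable embedding per node (the local component).
node_emb = [[random.gauss(0.0, 0.1) for _ in range(emb_dim)]
            for _ in range(n_nodes)]

def forecast(node, history):
    """One-step forecast for `node` from its last `window` observations."""
    features = history + node_emb[node]  # concatenate the local embedding
    return sum(w * x for w, x in zip(shared_w, features))

# Identical histories, different forecasts: the embeddings specialize
# the shared model to each node.
history = [0.4, 0.5, 0.6]
y0, y1 = forecast(0, history), forecast(1, history)
```

    Transferring to a new node set then amounts to keeping shared_w fixed and fitting fresh embeddings, which is the second advantage the abstract lists.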
  • [Feb 22nd, 2024]

  • Temporal Graph Analysis with TGX WSDM 2024 Demo Track
    Presenter: Shenyang Huang McGill University, Mila
    Shenyang Huang is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection and graph representation learning. He was the Organization Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs and link prediction on temporal graphs. He also enjoys writing medium blog posts about temporal graph learning.

    Real-world networks, with their evolving relations, are best captured as temporal graphs. However, existing software libraries are largely designed for static graphs where the dynamic nature of temporal graphs is ignored. Bridging this gap, we introduce TGX, a Python package specially designed for analysis of temporal networks that encompasses an automated pipeline for data loading, data processing, and analysis of evolving graphs. TGX provides access to eleven built-in datasets and eight external Temporal Graph Benchmark (TGB) datasets as well as any novel datasets in the .csv format. Beyond data loading, TGX facilitates data processing functionalities such as discretization of temporal graphs and node subsampling to accelerate working with larger datasets. For comprehensive investigation, TGX offers network analysis by providing a diverse set of measures, including average node degree and the evolving number of nodes and edges per timestamp. Additionally, the package consolidates meaningful visualization plots indicating the evolution of temporal patterns, such as Temporal Edge Appearance (TEA) and Temporal Edge Traffic (TET) plots. The TGX package is a robust tool for examining the features of temporal graphs and can be used in various areas like studying social networks, citation networks, and tracking user interactions. We plan to continuously support and update TGX based on community feedback. TGX is publicly available.
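    The discretization step mentioned above is easy to picture: a continuous-time edge stream is bucketed into snapshots of fixed width. The sketch below is a generic illustration of that idea in plain Python, not TGX's actual API.

```python
from collections import defaultdict

def discretize(edges, interval):
    """Group a continuous-time edge stream into discrete snapshots.

    `edges` is a list of (src, dst, timestamp) tuples; `interval` is the
    snapshot width in the same time unit as the timestamps.
    """
    snapshots = defaultdict(set)
    for u, v, t in edges:
        # Integer division assigns each event to its snapshot bucket.
        snapshots[int(t // interval)].add((u, v))
    return dict(snapshots)

stream = [(0, 1, 0.5), (1, 2, 1.2), (0, 2, 1.9), (2, 3, 3.4)]
snaps = discretize(stream, interval=2.0)
print(sorted(snaps[0]), snaps[1])  # [(0, 1), (0, 2), (1, 2)] {(2, 3)}
```

    Choosing the interval trades temporal resolution against the number of snapshots to process, which is exactly why a library-level utility for this is convenient on larger datasets.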
  • [Feb 29th, 2024]

  • Graph Neural Networks for Temporal Graphs: State of the Art, Open Challenges, and Opportunities TMLR
    Presenter: Veronica Lachi University of Siena, Siena, Italy
    Veronica Lachi is a PhD candidate in Engineering and Information Sciences at the University of Siena. She received a master's degree cum laude in Applied Mathematics from the University of Siena. Her research interests are centered around geometric deep learning, specifically focusing on the theoretical properties of graph neural networks, pooling in graph neural networks, and graph neural networks for temporal graphs.

    Graph Neural Networks (GNNs) have become the leading paradigm for learning on (static) graph-structured data. However, many real-world systems are dynamic in nature, since the graph and node/edge attributes change over time. In recent years, GNN-based models for temporal graphs have emerged as a promising area of research to extend the capabilities of GNNs. In this work, we provide the first comprehensive overview of the current state of the art of temporal GNNs, introducing a rigorous formalization of learning settings and tasks and a novel taxonomy categorizing existing approaches in terms of how the temporal aspect is represented and processed. We conclude the survey with a discussion of the most relevant open challenges for the field, from both research and application perspectives.
  • [March 7th, 2024]

  • Graph-based Virtual Sensing from Sparse and Partial Multivariate Observations ICLR 2024
    Presenter: Giovanni De Felice University of Liverpool
    Giovanni De Felice is a PhD student in the ‘Data Mining & Machine Learning group’ at the University of Liverpool and a visiting student at Università della Svizzera italiana (CH). His research focuses on time series and spatiotemporal analysis, with particular attention to material weathering and performance prediction. Previously, he received his MSc (2020) degree cum laude in Particle Physics from the University of Pisa (IT), conducting his thesis within the Mu2e experiment at Fermilab (US).

    Virtual sensing techniques allow for inferring signals at new unmonitored locations by exploiting spatio-temporal measurements coming from physical sensors at different locations. However, as the sensor coverage becomes sparse due to costs or other constraints, physical proximity cannot be used to support interpolation. In this paper, we overcome this challenge by leveraging dependencies between the target variable and a set of correlated variables (covariates) that can frequently be associated with each location of interest. From this viewpoint, covariates provide partial observability, and the problem consists of inferring values for unobserved channels by exploiting observations at other locations to learn how such variables can correlate. We introduce a novel graph-based methodology to exploit such relationships and design a graph deep learning architecture, named GgNet, implementing the framework. The proposed approach relies on propagating information over a nested graph structure that is used to learn dependencies between variables as well as locations. GgNet is extensively evaluated under different virtual sensing scenarios, demonstrating higher reconstruction accuracy compared to the state-of-the-art.
  • [March 21st, 2024]

  • Dynamic Graph Information Bottleneck WWW 2024 Research Track
    Presenter: Haonan Yuan Beihang University
    Haonan Yuan is currently a Ph.D. candidate at the State Key Laboratory of Software Development Environment and the Beijing Advanced Innovation Center for Big Data and Brain Computing at Beihang University, supervised by Prof. Jianxin Li and Prof. Qingyun Sun. His research interests include generalizable graph representation learning, dynamic graph learning, and graph foundation models. His work has been published in top-tier conferences of the field, such as NeurIPS, AAAI, WWW, and CIKM. He is a program committee member of the Graph Foundation Model Workshop @ WWW 2024 in Singapore.

    Dynamic Graphs widely exist in the real world, which carry complicated spatial and temporal feature patterns, challenging their representation learning. Dynamic Graph Neural Networks (DGNNs) have shown impressive predictive abilities by exploiting the intrinsic dynamics. However, DGNNs exhibit limited robustness and are prone to adversarial attacks. This paper presents the novel Dynamic Graph Information Bottleneck (DGIB) framework to learn robust and discriminative representations. Leveraging the Information Bottleneck (IB) principle, we first propose that the expected optimal representations should satisfy the Minimal-Sufficient-Consensual (MSC) Condition. To compress redundant information while conserving meritorious information in the latent representation, DGIB iteratively directs and refines the structural and feature information flow passing through graph snapshots. To meet the MSC Condition, we decompose the overall IB objective into DGIB_MS and DGIB_C, in which the DGIB_MS channel aims to learn the minimal and sufficient representations, while the DGIB_C channel guarantees the predictive consensus. Extensive experiments on real-world and synthetic dynamic graph datasets demonstrate the superior robustness of DGIB against adversarial attacks compared with state-of-the-art baselines in the link prediction task. To the best of our knowledge, DGIB is the first work to learn robust representations of dynamic graphs grounded in the information-theoretic IB principle.
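    For context, the classical Information Bottleneck principle that this work builds on seeks an encoding $Z$ of the input $X$ that is maximally compressed while remaining predictive of the label $Y$. A generic statement of the objective (not DGIB's specific decomposition) is:

```latex
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

    where $I(\cdot;\cdot)$ denotes mutual information and $\beta > 0$ balances compression against predictive sufficiency.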
  • [March 28th, 2024]

  • Anomaly Detection in Continuous-Time Temporal Provenance Graphs NeurIPS 2023 Temporal Graph Learning Workshop
    Presenter: Jakub Reha University of Amsterdam
    Jakub Reha is a PhD candidate at AMLAB at the University of Amsterdam. He holds a Bachelor's degree from the Technical University of Denmark and a Master’s degree from KTH Royal Institute of Technology. His research interests are centred around anomaly detection in temporal graphs and causality in machine learning, specifically causal discovery in time series.

    Recent advances in Graph Neural Networks (GNNs) have matured the field of learning on graphs, making GNNs essential for prediction tasks in complex, interconnected, and evolving systems. In this paper, we focus on self-supervised, inductive learning for continuous-time dynamic graphs. Without compromising generality, we propose an approach to learn representations and mine anomalies in provenance graphs, which are a form of large-scale, heterogeneous, attributed, and continuous-time dynamic graphs used in the cybersecurity domain, syntactically resembling complex temporal knowledge graphs. We modify the Temporal Graph Network (TGN) framework to heterogeneous input data and directed edges, refining it specifically for inductive learning on provenance graphs. We present and release two pioneering large-scale, continuous-time temporal, heterogeneous, attributed benchmark graph datasets. The datasets incorporate expert-labeled anomalies, promoting subsequent research on representation learning and anomaly detection on intricate real-world networks. Comprehensive experimental analyses of modules, datasets, and baselines underscore the effectiveness of TGN-based inductive learning, affirming its practical utility in identifying semantically significant anomalies in real-world systems.
  • [April 4th, 2024]

  • HGE: Embedding Temporal Knowledge Graphs in a Product Space of Heterogeneous Geometric Subspaces AAAI 2024
    Presenter: Jiaxin Pan University of Stuttgart
    Jiaxin Pan is a Ph.D. student at the University of Stuttgart, supervised by Prof. Steffen Staab. Her research interests are temporal knowledge graphs and temporal query answering.

    Temporal knowledge graphs represent temporal facts (s, p, o, τ) relating a subject s and an object o via a relation label p at time τ, where τ could be a time point or time interval. Temporal knowledge graphs may exhibit static temporal patterns at distinct points in time and dynamic temporal patterns between different timestamps. In order to learn a rich set of static and dynamic temporal patterns and apply them for inference, several embedding approaches have been suggested in the literature. However, as most of them resort to single underlying embedding spaces, their capability to model all kinds of temporal patterns was severely limited by having to adhere to the geometric property of their one embedding space. We lift this limitation by an embedding approach that maps temporal facts into a product space of several heterogeneous geometric subspaces with distinct geometric properties, i.e., Complex, Dual, and Split-complex spaces. In addition, we propose a temporal-geometric attention mechanism to integrate information from different geometric subspaces conveniently according to the captured relational and temporal information. Experimental results on standard temporal benchmark datasets favorably evaluate our approach against state-of-the-art models.
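    The three geometric subspaces differ only in what the square of the "imaginary" unit is: i² = −1 (complex), ε² = 0 (dual), and j² = +1 (split-complex). The sketch below illustrates just this algebraic distinction; it is background on the number systems, not the HGE model itself.

```python
def complex_mul(a, b, c, d):
    # (a + bi)(c + di) with i^2 = -1: rotation-like (elliptic) geometry.
    return (a * c - b * d, a * d + b * c)

def dual_mul(a, b, c, d):
    # (a + b eps)(c + d eps) with eps^2 = 0: translation-like (parabolic) geometry.
    return (a * c, a * d + b * c)

def split_complex_mul(a, b, c, d):
    # (a + bj)(c + dj) with j^2 = +1: hyperbolic geometry.
    return (a * c + b * d, a * d + b * c)

# Squaring the unit (0 + 1*u) in each system shows the defining property.
print(complex_mul(0, 1, 0, 1), dual_mul(0, 1, 0, 1), split_complex_mul(0, 1, 0, 1))
# (-1, 0) (0, 0) (1, 0)
```

    A product space of these subspaces can therefore host temporal patterns with different geometric character side by side, which is the limitation single-space embeddings run into.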
  • [April 11th, 2024]

  • Deep Temporal Graph Clustering ICLR 2024
    Presenter: Meng Liu and Ke Liang National University of Defense Technology, Changsha, China
    Meng Liu is a Ph.D. candidate at the National University of Defense Technology. His research interests include temporal graph learning, deep clustering, and knowledge graphs. He has published 7 papers as first author at venues including ICLR, SIGIR, ACM MM, and CIKM, which have received 200+ citations. He has received awards such as the Best Student Paper award of CCHI 2023, the China National Scholarship (twice), and an Excellent Master's Thesis award. He also serves as a reviewer for TKDE, TOIS, TNNLS, and TOMM, as well as for NeurIPS, ICML, ICLR, KDD, etc.
    Ke Liang is currently pursuing a Ph.D. degree at the National University of Defense Technology (NUDT). Before that, he received his BSc degree from Beihang University and his MSc degree from Penn State University. His research interests include graph learning and knowledge graphs. He has published papers and served as a reviewer in several top-tier journals and conferences, e.g., IEEE T-KDE, IEEE T-NNLS, ICML, AAAI, and SIGIR. The GitHub repo (https://github.com/LIANGKE23/Awesome-Knowledge-Graph-Reasoning) he hosts has earned more than 900 stars.

    Deep graph clustering has recently received significant attention due to its ability to enhance the representation learning capabilities of models in unsupervised scenarios. Nevertheless, deep clustering for temporal graphs, which could capture crucial dynamic interaction information, has not been fully explored. As a result, in many clustering-oriented real-world scenarios, temporal graphs can only be processed as static graphs. This not only causes the loss of dynamic information but also triggers huge computational consumption. To solve this problem, we propose a general framework for deep Temporal Graph Clustering called TGC, which introduces deep clustering techniques to suit the interaction sequence-based batch-processing pattern of temporal graphs. In addition, we discuss differences between temporal graph clustering and static graph clustering at several levels. To verify the superiority of the proposed framework TGC, we conduct extensive experiments. The experimental results show that temporal graph clustering enables more flexibility in finding a balance between time and space requirements, and our framework can effectively improve the performance of existing temporal graph learning methods. The code is released at https://github.com/MGitHubL/Deep-Temporal-Graph-Clustering.
  • [April 18th, 2024]

  • GraphPulse: Topological representations for temporal graph property prediction ICLR 2024
    Presenter: Kiarash Shamsi Department of computer science, University of Manitoba
    Kiarash Shamsi is a Ph.D. student at the University of Manitoba, under the supervision of Dr. Cuneyt Akcora. He has published as a first author in conferences such as NeurIPS, ICLR, and ICBC. His research interests are temporal graph learning, graph neural networks, topological data analysis, and blockchain data analysis and systems.

    Many real-world networks evolve over time, and predicting the evolution of such networks remains a challenging task. Graph Neural Networks (GNNs) have shown empirical success for learning on static graphs, but they lack the ability to effectively learn from nodes and edges with different timestamps. Consequently, the prediction of future properties in temporal graphs remains a relatively under-explored area. In this paper, we aim to bridge this gap by introducing a principled framework, named GraphPulse. The framework combines two important techniques for the analysis of temporal graphs within a Newtonian framework. First, we employ the Mapper method, a key tool in topological data analysis, to extract essential clustering information from graph nodes. Next, we harness the sequential modeling capabilities of Recurrent Neural Networks (RNNs) for temporal reasoning regarding the graph’s evolution. Through extensive experimentation, we demonstrate that our model enhances the ROC-AUC metric by 10.2% in comparison to the top-performing state-of-the-art method across various temporal networks. We provide the implementation of GraphPulse at https://github.com/kiarashamsi/GraphPulse.
  • [April 25th, 2024]

  • Continuous-time Graph Representation with Sequential Survival Process AAAI 2024
    Presenter: Abdulkadir Celikkanat Department of Applied Mathematics and Computer Science, Technical University of Denmark
    Abdulkadir Celikkanat recently started working as an assistant professor at Aalborg University, and he was previously a postdoctoral researcher at the Technical University of Denmark. He obtained his Ph.D. degree at CentraleSupelec, University of Paris-Saclay, where he worked on representation learning methods for graph-structured data. Before that, he earned his BSc in Mathematics and an MSc in Computer Engineering from Bogazici University. Currently, he is interested in the metagenomics binning problem and working on integrating heterogeneous environmental factors with bacterial genome data.

    Over the past two decades, there has been a tremendous increase in the growth of representation learning methods for graphs, with numerous applications across various fields, including bioinformatics, chemistry, and the social sciences. However, current dynamic network approaches focus on discrete-time networks or treat links in continuous-time networks as instantaneous events. Therefore, these approaches have limitations in capturing the persistence or absence of links that continuously emerge and disappear over time for particular durations. To address this, we propose a novel stochastic process relying on survival functions to model the durations of links and their absences over time. This forms a generic new likelihood specification explicitly accounting for intermittent edge-persistent networks, namely GRAS2P: Graph Representation with Sequential Survival Process. We apply the developed framework to a recent continuous time dynamic latent distance model characterizing network dynamics in terms of a sequence of piecewise linear movements of nodes in latent space. We quantitatively assess the developed framework in various downstream tasks, such as link prediction and network completion, demonstrating that the developed modeling framework, by accounting for link persistence and absence, tracks the intrinsic trajectories of nodes in a latent space well and captures the underlying characteristics of the evolving network structure.
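    As background, a survival function gives the probability that a duration $T$ (here, of a link's presence or absence) exceeds time $t$. With hazard rate $\lambda(s)$ it takes the standard textbook form (this is the generic identity, not the paper's specific likelihood):

```latex
S(t) \;=\; P(T > t) \;=\; \exp\!\left(-\int_0^t \lambda(s)\, ds\right)
```

    Modeling link durations through such survival functions is what lets the framework distinguish a link that persists for an interval from an instantaneous event.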
  • [May 2nd, 2024]

  • zrLLM: Zero-Shot Relational Learning on Temporal Knowledge Graphs with Large Language Models NAACL 2024
    Presenter: Zifeng Ding Database System and Data Mining group at Ludwig Maximilian University of Munich (LMU Munich)
    Zifeng Ding is a Computer Science PhD student in the Database Systems and Data Mining group at Ludwig Maximilian University of Munich (LMU Munich), supervised by Prof. Volker Tresp. He is a nominated PhD student of the European Laboratory for Learning and Intelligent Systems (ELLIS), co-supervised by Prof. Michael Bronstein at the University of Oxford. He is also a researcher at Siemens AG.

    Modeling evolving knowledge over temporal knowledge graphs (TKGs) has become a heated topic. Various methods have been proposed to forecast links on TKGs. Most of them are embedding-based, where hidden representations are learned to represent knowledge graph (KG) entities and relations based on the observed graph contexts. Although these methods show strong performance on traditional TKG forecasting (TKGF) benchmarks, they face a strong challenge in modeling the unseen zero-shot relations that have no prior graph context. In this paper, we try to mitigate this problem as follows. We first input the text descriptions of KG relations into large language models (LLMs) for generating relation representations, and then introduce them into embedding-based TKGF methods. LLM-empowered representations can capture the semantic information in the relation descriptions. This makes the relations, whether seen or unseen, with similar semantic meanings stay close in the embedding space, enabling TKGF models to recognize zero-shot relations even without any observed graph context. Experimental results show that our approach helps TKGF models to achieve much better performance in forecasting the facts with previously unseen relations, while still maintaining their ability in link forecasting regarding seen relations.
  • [May 9th, 2024]

  • The role of Egocentric Perspective in Temporal Networks
    Presenter: Antonio Longa University of Trento
    Antonio Longa is an Assistant Professor (RTD-A) at the University of Trento, actively engaged in research within the Structured Machine Learning (SML) Group led by Andrea Passerini. He earned his PhD with honours from the University of Trento and Fondazione Bruno Kessler, under the mentorship of Bruno Lepri. His research spans Temporal Graph Neural Networks (TGNNs) and the vital domain of GNN explainability. His work involves unravelling the dynamics of temporal networks while also enhancing the interpretability of GNNs. These parallel endeavors allow him to explore the temporal dimension in machine learning while ensuring that the inner workings of GNNs are transparent and understandable.

    Temporal networks play a crucial role in modeling and understanding time-dependent systems, spanning from social interactions to biological processes. Over time, network science has developed numerous measures for analyzing and comparing temporal networks. Some of these methods involve breaking down the network into smaller interaction segments, focusing on a limited number of nodes over a short time span. Along this line, one approach is to adopt an egocentric perspective, considering the temporal evolution of each node's neighborhood. In this talk, we introduce techniques based on the egocentric viewpoint, specifically focusing on the concept of Egocentric Temporal Neighborhoods (ETNs). Furthermore, we demonstrate how this concept can be extended to include labeled nodes (LETNs) and higher-order interactions (HETNs). Finally, we illustrate how these structures can be utilized to generate surrogate temporal networks (ETN-gen).
  • [May 16th, 2024]

  • History repeats Itself: A Baseline for Temporal Knowledge Graph Forecasting IJCAI 2024
    Presenter: Julia Gastinger NEC Laboratories Europe, Mannheim University
    Julia Gastinger (she/her) is a research scientist at NEC Laboratories Europe and a Ph.D. student at Mannheim University, supervised by Professor Heiner Stuckenschmidt. Currently she is doing a research internship at Mila. Her research primarily focuses on graph-based Machine Learning – she is interested in how to incorporate the time aspect in knowledge graph representations. Prior to joining NEC Laboratories Europe, Julia graduated with a master’s degree from Stuttgart University, where she studied Engineering Cybernetics with a focus on Autonomous Systems and Control Theory.

    Temporal Knowledge Graph (TKG) Forecasting aims at predicting links in Knowledge Graphs for future timesteps based on a history of Knowledge Graphs. To this day, standardized evaluation protocols and rigorous comparison across TKG models are available, but the importance of simple baselines is often neglected in the evaluation, which prevents researchers from discerning actual and fictitious progress. We propose to close this gap by designing an intuitive baseline for TKG Forecasting based on predicting recurring facts. Compared to most TKG models, it requires little hyperparameter tuning and no iterative training. Further, it can help to identify failure modes in existing approaches. The empirical findings are quite unexpected: compared to 11 methods on five datasets, our baseline ranks first or third in three of them, painting a radically different picture of the predictive quality of the state of the art.
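    The idea of a recurrence baseline is simple enough to sketch: score a candidate fact by how often, and how recently, it has appeared in the history. The exponential-decay scoring below is an illustrative choice of ours, not the paper's exact formulation, and the example facts are made up.

```python
def recurrency_score(history, query, now, decay=0.9):
    """Score a (subject, relation, object) query by its recurrence.

    `history` maps each fact triple to the list of past timesteps at which
    it was observed. Facts seen more often and more recently score higher;
    facts never seen before score 0.
    """
    return sum(decay ** (now - t) for t in history.get(query, []))

history = {
    ("berlin", "capital_of", "germany"): [1, 2, 3],  # recurs every step
    ("alice", "visits", "berlin"): [2],              # seen once
}
seen = ("berlin", "capital_of", "germany")
unseen = ("alice", "visits", "paris")
print(recurrency_score(history, seen, now=4) > recurrency_score(history, unseen, now=4))
# True: the recurring fact outranks the never-observed one
```

    Such a baseline needs no iterative training and almost no hyperparameter tuning, which is precisely what makes it useful for exposing fictitious progress in TKG forecasting evaluations.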
  • [May 30th, 2024]

  • TGLite: A Lightweight Programming Framework for Continuous-Time Temporal Graph Neural Networks ASPLOS'24
    Presenter: Wanyu Zhao, Charith Mendis and Yufeng Wang University of Illinois at Urbana-Champaign, USA
    Charith Mendis is an Assistant Professor at the University of Illinois at Urbana-Champaign. Previously, he was a visiting faculty researcher at Google and was instrumental in designing and developing the learned TPU cost model used in production. His research interests are in automating compiler construction and in building high-performance ML systems. He received his Ph.D. and Master from the Massachusetts Institute of Technology and his B.Sc. from the University of Moratuwa. He recently co-led the DARPA ISAT study on "ML Optimized Compilers for Heterogeneous Architectures (MOCHA)". He is the recipient of an NSF CAREER Award, an IEEE Micro Top Picks honorable mention, the William A. Martin outstanding master's thesis award at MIT, a best student paper award, a best paper award, and the university gold medal for his B.Sc. He has published work at both top programming languages venues such as PLDI and ASPLOS as well as at top machine learning venues such as ICML and NeurIPS.
    Wanyu Zhao is a first-year Ph.D. student in Computer Science at the University of Illinois Urbana-Champaign (UIUC), under the supervision of Professor Charith Mendis. Their current research focuses on developing efficient and scalable systems for temporal graph learning. With a strong interest in the intersection of systems and machine learning, Wanyu aims to explore novel techniques in constructing efficient AI systems through algorithmic insights and comprehensive systems understanding.
    Yufeng Wang (first author of TGLite and TGOpt) received his M.S. in Computer Science from the University of Illinois at Urbana-Champaign, where he was advised by Professor Charith Mendis. His research is focused on Temporal Graph Neural Networks, particularly on optimizing their performance and ease of implementation, with his work being published at the PPoPP and ASPLOS conferences. He graduated in 2023 with the David J. Kuck Outstanding Master’s Thesis Award. Yufeng is currently a Software Engineer at Tesla, working on deep learning compilers.

    In recent years, Temporal Graph Neural Networks (TGNNs) have achieved great success in learning tasks for graphs that change over time. Moreover, larger and more complicated temporal graphs are introduced frequently, requiring additional compute and memory resources. To meet these demands and to increase the productivity of machine learning researchers, we need a high-performance programming framework with user-friendly abstractions. In this talk, we present TGLite, a lightweight framework for programming TGNNs that achieves both goals: ease of programmability and high performance. TGLite is built on top of PyTorch, integrates seamlessly with it, and, for the first time, exposes a rich set of primitive operators for programming TGNNs. Further, it provides an array of built-in optimizations such as deduplication, memoization, pre-computation, and preloading to enable high-performance TGNN implementations. We show how these optimizations accelerate TGNN training and inference by more than 3x and 4x compared to the TGL framework. Moreover, compared to TGL, TGLite's programming interface provides first-class support for TGNN primitive operators, making it easier for ML researchers to try out new TGNN models.
  • [June 6th, 2024]

  • Time to Cite: Modeling Citation Networks using the Dynamic Impact Single-Event Embedding Model AISTATS 2024
    Presenter: Nikolaos Nakis, Technical University of Denmark
    Nikolaos Nakis is currently a postdoctoral researcher at École Polytechnique, part of the Polytechnic Institute of Paris. He holds a Ph.D. from the Technical University of Denmark. Nikolaos received his B.S. in Physics from the National and Kapodistrian University of Athens and an M.S. in Mathematical Modeling and Computation from the Technical University of Denmark. His research focuses on applying machine learning to complex systems and graph representation learning, with a primary emphasis on social network analysis.

    Understanding the structure and dynamics of scientific research, i.e., the science of science (SciSci), has become an important area of research in order to address imminent questions including how scholars interact to advance science, how disciplines are related and evolve, and how research impact can be quantified and predicted. Central to the study of SciSci has been the analysis of citation networks. Here, two prominent modeling methodologies have been employed: one is to assess the citation impact dynamics of papers using parametric distributions, and the other is to embed the citation networks in a latent space optimal for characterizing the static relations between papers in terms of their citations. Interestingly, citation networks are a prominent example of single-event dynamic networks, i.e., networks for which each dyad only has a single event (i.e., the point in time of citation). We presently propose a novel likelihood function for the characterization of such single-event networks. Using this likelihood, we propose the Dynamic Impact Single-Event Embedding model (DISEE). The DISEE model characterizes the scientific interactions in terms of a latent distance model in which random effects account for citation heterogeneity while the time-varying impact is characterized using existing parametric representations for assessment of dynamic impact. We highlight the proposed approach on several real citation networks finding that the DISEE well reconciles static latent distance network embedding approaches with classical dynamic impact assessments.
  • [June 13th, 2024]

  • HOT: Higher-Order Dynamic Graph Representation Learning with Efficient Transformers LOG 2023
    Presenter: Maciej Besta, Department of Computer Science, ETH Zurich
    Maciej Besta leads research on graph computations and large language models at the Scalable Parallel Computing Lab at ETH Zurich and the ETH Future Computing Lab; he also works on network topologies and occasionally other aspects of the high-performance computing landscape. Maciej has published more than 50 top conference and journal papers. He won, among others, the competition for the Best Student of Poland (2012), the first Google Fellowship in Parallel Computing (2013), the ACM/IEEE-CS High-Performance Computing Fellowship (2015), the IEEE TCSC Award for Excellence in Scalable Computing Early Career (2023), and the OlympusMons Award for contributions to scalable storage systems (2024). His doctoral dissertation on irregular computations received the ETH Medal for outstanding doctoral thesis (2021), and awards from IEEE (2021), SPEC (2022), and ACM (2022) for the best dissertation worldwide in - respectively - scalable computing, performance analysis, and high-performance computing. He received Best Paper awards and nominations at ACM/IEEE Supercomputing 2013, 2014, 2019 (2 papers), 2022, and 2023 (2 papers); at ACM HPDC 2015 and 2016, ACM Research Highlights 2018, and others. More detailed information at https://people.inf.ethz.ch/bestam/

    Many graph representation learning (GRL) problems are dynamic, with millions of edges added or removed per second. An example workload in this setting is dynamic link prediction: using a history of graph updates to predict whether a given pair of vertices will become connected. In this talk, we present a general pipeline for highly accurate yet scalable GRL, and apply it to the dynamic link prediction problem. The key idea underlying our pipeline is that harnessing higher-order (HO) graph structures, such as k-hop neighbors and more general subgraphs such as cliques, improves the accuracy of various GRL tasks. The pipeline consists of three general stages: extracting selected HO structures, encoding the structures into the appropriate representation, and making the prediction. However, using HO also comes with challenges. In the first stage, extracting HO structures is time-consuming and difficult to program. We address this issue with a simple yet surprisingly powerful observation: operations on sets of vertices, such as intersection or union, form a large part of many complex HO mining algorithms, and can offer rich and simple parallelism at multiple levels. In the second and third stage, the encoding of extracted HO structures into a form suitable for predictions results in increased memory pressure. We address this with HOT: a Transformer-based model that imposes hierarchy on the attention matrix, significantly reducing memory footprint. The final design offers a sweet spot between high accuracy and low memory utilization, for example achieving 9%, 7%, and 15% higher accuracy than – respectively – DyGFormer, TGN, and GraphMixer, for the MOOC dataset. Finally, we also discuss a broader problem of predicting complex subgraphs called motifs, detailing how this problem generalizes link prediction and how to use Graph Neural Networks for high accuracy.
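    The "set operations as the core of HO mining" observation can be illustrated with the textbook example of counting triangles through an edge via neighbor-set intersection. This is a generic sketch of the idea, not the talk's actual system.

```python
def triangles_through_edge(adj, u, v):
    """Count triangles containing edge (u, v).

    `adj` maps each vertex to the set of its neighbors. Every common
    neighbor of u and v closes one triangle, so the count is simply the
    size of the neighbor-set intersection -- the kind of vertex-set
    operation that parallelizes well at multiple levels.
    """
    return len(adj[u] & adj[v])

adj = {
    0: {1, 2, 3},
    1: {0, 2},
    2: {0, 1, 3},
    3: {0, 2},
}
print(triangles_through_edge(adj, 0, 2))  # 2: triangles (0,2,1) and (0,2,3)
```

    More elaborate HO structures (cliques, k-hop neighborhoods) are mined by composing the same intersection/union primitives, which is what makes them a natural target for parallelization.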
  • [June 20th, 2024]

  • On the Generalization Capability of Temporal Graph Learning Algorithms: Theoretical Insights and a Simpler Method
    Presenter: Jian Kang, University of Rochester
    Jian Kang is an Assistant Professor in the Department of Computer Science at the University of Rochester. His research lies in fair and reliable graph learning for social good. Prior to joining the University of Rochester, he received his Ph.D. in Computer Science from the University of Illinois Urbana-Champaign, advised by Dr. Hanghang Tong. He is recognized as a Rising Star in Data Science by the University of Chicago and a Mavis Future Faculty Fellow by the University of Illinois Urbana-Champaign.

    Temporal Graph Learning (TGL) has become a prevalent technique across diverse real-world applications, especially in domains where data can be represented as a graph and evolves over time. Although TGL has recently seen notable progress in algorithmic solutions, its theoretical foundations remain largely unexplored. This paper aims at bridging this gap by investigating the generalization ability of different TGL algorithms (e.g., GNN-based, RNN-based, and memory-based methods) under the finite-width over-parameterized regime. We establish the connection between the generalization error of TGL algorithms and (1) the number of layers/steps in the GNN-/RNN-based TGL methods and (2) the feature-label alignment (FLA) score, where FLA can be used as a proxy for the expressive power and explains the performance of memory-based methods. Guided by our theoretical analysis, we propose Simplified-Temporal-Graph-Network (SToNe), which enjoys a small generalization error, improved overall performance, and lower model complexity. Extensive experiments on real-world datasets demonstrate the effectiveness of SToNe. Our theoretical findings and proposed algorithm offer essential insights into TGL from a theoretical standpoint, laying the groundwork for designing practical TGL algorithms in future studies.
  • Past Talks, Fall 2023

    [September 14th, 2023]

  • Edge Directionality Improves Learning on Heterophilic Graphs 2023
    Presenter: Emanuele Rossi Imperial College London
    Emanuele Rossi is a final-year Ph.D. student at Imperial College London, working on Graph Neural Networks and supervised by Prof. Michael Bronstein. His research explores various aspects of graph neural networks, such as scalability, dynamic graphs, and learning with missing node features. Before starting his Ph.D., Emanuele spent time working at Twitter and Fabula AI, which was bought by Twitter in June 2019. He also holds an MPhil from the University of Cambridge and a BEng from Imperial College London, both in Computer Science.

    Graph Neural Networks (GNNs) have become the de-facto standard tool for modeling relational data. However, while many real-world graphs are directed, the majority of today's GNN models discard this information altogether by simply making the graph undirected. The reasons for this are historical: 1) many early variants of spectral GNNs explicitly required undirected graphs, and 2) the first benchmarks on homophilic graphs did not find significant gain from using direction. In this paper, we show that in heterophilic settings, treating the graph as directed increases the effective homophily of the graph, suggesting a potential gain from the correct use of directionality information. To this end, we introduce Directed Graph Neural Network (Dir-GNN), a novel general framework for deep learning on directed graphs. Dir-GNN can be used to extend any Message Passing Neural Network (MPNN) to account for edge directionality information by performing separate aggregations of the incoming and outgoing edges. We prove that Dir-GNN matches the expressivity of the Directed Weisfeiler-Lehman test, exceeding that of conventional MPNNs. In extensive experiments, we validate that while our framework leaves performance unchanged on homophilic datasets, it leads to large gains over base models such as GCN, GAT and GraphSAGE on heterophilic benchmarks, outperforming much more complex methods and achieving new state-of-the-art results.
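    The core mechanism, separate aggregation over incoming and outgoing edges, can be sketched in a few lines of NumPy; the mean aggregator, tanh nonlinearity, and weight names below are illustrative choices, not the paper's exact parameterization:

```python
import numpy as np

def dir_gnn_layer(X, edges, W_self, W_in, W_out):
    """One directed message-passing layer: aggregate in-neighbors and
    out-neighbors separately (mean), then combine them with the node's
    own state through separate weight matrices."""
    n = X.shape[0]
    agg_in = np.zeros_like(X, dtype=float)
    agg_out = np.zeros_like(X, dtype=float)
    deg_in = np.zeros(n)
    deg_out = np.zeros(n)
    for u, v in edges:        # directed edge u -> v
        agg_in[v] += X[u]     # v aggregates its in-neighbor u
        deg_in[v] += 1
        agg_out[u] += X[v]    # u aggregates its out-neighbor v
        deg_out[u] += 1
    agg_in /= np.maximum(deg_in, 1)[:, None]
    agg_out /= np.maximum(deg_out, 1)[:, None]
    return np.tanh(X @ W_self + agg_in @ W_in + agg_out @ W_out)

# toy usage: 3 nodes on a directed chain 0 -> 1 -> 2
H = dir_gnn_layer(np.ones((3, 2)), [(0, 1), (1, 2)],
                  np.eye(2), np.eye(2), np.eye(2))
```

    Making the graph undirected would collapse the two aggregation terms into one, which is precisely the information this design preserves.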
  • [September 21st, 2023]

  • Temporal Graph Learning for Financial World: Algorithms, Scalability, Explainability & Fairness AAAI 2022
    Presenter: Karamjit Singh, AI Garage and Mastercard
    Karamjit Singh is currently the Director of Artificial Intelligence at Mastercard, where he focuses on creating AI-driven products in the payment space. With 11 years of experience spanning various industries, Karamjit holds dual master's degrees, including an M.Tech in Computer Applications from IIT Delhi and a Master’s in Mathematics. He has authored over 20 publications in AI/ML conferences and holds more than 15 patents. Karamjit has achieved recognition through accomplishments like securing the 2nd Rank in CIKM 2020 Analytics Cup and winning the IEEE VGTC VPG International Data-Visualization Contest. He has also contributed to data science conferences and workshops as part of program committees and organized the MUFin workshop at CIKM 2021, ECML PAKDD 2022, AAAI 2023.

    The most intuitive way to model a transaction in the financial world is through a graph. Every transaction can be considered as an edge between two vertices, one of which is the paying party and the other the receiving party. Properties of these nodes and edges map directly to business problems in the financial world. The problem of detecting a fraudulent transaction can be considered a property of an edge. The problem of money laundering can be considered as path detection in the graph. The problem of a merchant going delinquent can be considered a property of a node. While there are many such examples, the above illustrate the direct mapping between graph properties and real-world financial problems. This tutorial is based on the potential of using Graph Neural Network based learning for solving business problems in the financial world.
  • [September 28th, 2023]

  • Temporal Graph Benchmark for Machine Learning on Temporal Graphs NeurIPS 2023 Datasets and Benchmarks Track
    Presenter: Shenyang Huang, Farimah Poursafaei McGill University, Mila
    Shenyang Huang is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection and graph representation learning. He was the Organization Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs and link prediction on temporal graphs. He also enjoys writing medium blog posts about temporal graph learning.
    Farimah Poursafaei is a PostDoc at McGill University and Mila. She conducts research on dynamic graph neural networks, and temporal graphs. She completed her PhD at McGill University in Computer Engineering. During her PhD, she was working on anomaly detection on cryptocurrency transactions networks. She served as the Reviewing Chair in Temporal Graph Learning Workshop @ NeurIPS 2022.

    We present the Temporal Graph Benchmark (TGB), a collection of challenging and diverse benchmark datasets for realistic, reproducible, and robust evaluation of machine learning models on temporal graphs. TGB datasets are large in scale, span years in duration, incorporate both node- and edge-level prediction tasks, and cover a diverse set of domains including social, trade, transaction, and transportation networks. For both tasks, we design evaluation protocols based on realistic use-cases. We extensively benchmark each dataset and find that the performance of common models can vary drastically across datasets. In addition, on dynamic node property prediction tasks, we show that simple methods often achieve superior performance compared to existing temporal graph models. We believe that these findings open up opportunities for future research on temporal graphs. Finally, TGB provides an automated machine learning pipeline for reproducible and accessible temporal graph research, including data loading, experiment setup and performance evaluation. TGB will be maintained and updated on a regular basis and welcomes community feedback. TGB datasets, data loaders, example codes, evaluation setup, and leaderboards are publicly available.
  • [October 5th, 2023]

  • HAGEN: Homophily-Aware Graph Convolutional Recurrent Network for Crime Forecasting AAAI 2022
    Presenter: Chenyu Wang and Zongyu Lin Tsinghua University
    Zongyu Lin is a CS Ph.D. student at UCLA, having graduated from Tsinghua University. His research interest lies broadly in AI for content generation (e.g., large language models and diffusion models), and AI for science (e.g., time series forecasting).
    Chenyu Wang is now a Ph.D student at MIT EECS, advised by Prof. Caroline Uhler and Prof. Tommi Jaakkola. Prior to that, she attained her bachelor's degree at Tsinghua University. Her research interest lies broadly in machine learning, representation learning, and AI for science (especially computational biology).

    The goal of the crime forecasting problem is to predict different types of crimes for each geographical region (such as a neighborhood or census tract) in the near future. Since nearby regions usually have similar socioeconomic characteristics which indicate similar crime patterns, recent state-of-the-art solutions constructed a distance-based region graph and utilized Graph Neural Network (GNN) techniques for crime forecasting, because the GNN techniques could effectively exploit the latent relationships between neighboring region nodes in the graph if the edges reveal high dependency or correlation. However, this distance-based pre-defined graph cannot fully capture crime correlation between regions that are far from each other but share similar crime patterns. Hence, to make a more accurate crime prediction, the main challenge is to learn a better graph that reveals the dependencies between regions in crime occurrences and meanwhile captures the temporal patterns from historical crime records. To address these challenges, we propose an end-to-end graph convolutional recurrent network called HAGEN with several novel designs for crime prediction. Specifically, our framework could jointly capture the crime correlation between regions and the temporal crime dynamics by combining an adaptive region graph learning module with the Diffusion Convolution Gated Recurrent Unit (DCGRU). Based on the homophily assumption of GNN (i.e., graph convolution works better where neighboring nodes share the same label), we propose a homophily-aware constraint to regularize the optimization of the region graph so that neighboring region nodes on the learned graph share similar crime patterns, thus fitting the mechanism of diffusion convolution. Empirical experiments and comprehensive analysis on two real-world datasets showcase the effectiveness of HAGEN.
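    As a rough illustration of such a homophily-aware constraint (the exact regularizer in HAGEN may be parameterized differently), one can penalize strong learned edge weights between regions whose historical crime patterns diverge:

```python
import numpy as np

def homophily_penalty(A, patterns):
    """Illustrative regularizer: sum of learned edge weights A[i, j]
    scaled by the squared distance between region pattern vectors, so
    heavy edges between dissimilar regions are discouraged.
    A: (n, n) learned region adjacency; patterns: (n, d) crime patterns."""
    diff = patterns[:, None, :] - patterns[None, :, :]
    dist2 = (diff ** 2).sum(-1)          # pairwise squared distances
    return float((A * dist2).sum())

patterns = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 1.0]])
A_similar = np.zeros((3, 3)); A_similar[0, 1] = 1.0      # alike regions linked
A_dissimilar = np.zeros((3, 3)); A_dissimilar[0, 2] = 1.0
# penalty is lower for A_similar than for A_dissimilar
```

    Minimizing such a term alongside the forecasting loss pushes the learned region graph toward homophily, matching the diffusion-convolution assumption described above.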
  • [October 12th, 2023]

  • A Survey on Graph Neural Networks for Time Series: Forecasting, Classification, Imputation, and Anomaly Detection
    Presenter: Ming Jin Monash University
    Ming Jin is a Ph.D. Candidate in Computer Science at Monash University, where he is currently being supervised by Prof. Shirui Pan and A/Prof. Yuan-Fang Li. He is also a research staff member at Metso Outotec and a Research Assistant at Monash University. Ming completed his Bachelor's and Master's degrees at Hebei University of Technology and The University of Melbourne in 2017 and 2019, respectively. He has published over ten top-ranked conference and journal papers and regularly served as a PC member and reviewer of major AI/ML/DM conferences and journals, including IJCAI, PAKDD, TNNLS, TKDE, and more. Ming's research interests mainly focus on graph neural networks and geometric deep learning, with a particular emphasis on temporal settings such as dynamic graph neural networks. He is particularly passionate about applying these techniques to tackle practical challenges such as time series forecasting, industrial process modeling, and anomaly detection.

    Time series are the primary data type used to record dynamic system measurements and generated in great volume by both physical sensors and online processes (virtual sensors). Time series analytics is therefore crucial to unlocking the wealth of information implicit in available data. With the recent advancements in graph neural networks (GNNs), there has been a surge in GNN-based approaches for time series analysis. These approaches can explicitly model inter-temporal and inter-variable relationships, which traditional and other deep neural network-based methods struggle to do. In this survey, we provide a comprehensive review of graph neural networks for time series analysis (GNN4TS), encompassing four fundamental dimensions: forecasting, classification, anomaly detection, and imputation. Our aim is to guide designers and practitioners to understand, build applications, and advance research of GNN4TS. At first, we provide a comprehensive task-oriented taxonomy of GNN4TS. Then, we present and discuss representative research works and introduce mainstream applications of GNN4TS. A comprehensive discussion of potential future research directions completes the survey. This survey, for the first time, brings together a vast array of knowledge on GNN-based time series research, highlighting foundations, practical applications, and opportunities of graph neural networks for time series analysis.
  • [October 19th, 2023]

  • GRAFENNE: Learning on Graphs with Heterogeneous and Dynamic Feature Sets ICML 2023
    Presenter: Sahil Manchanda Department of Computer Science at the Indian Institute of Technology Delhi
    Sahil Manchanda is a PhD scholar at the Department of Computer Science at the Indian Institute of Technology Delhi, working under the supervision of Prof. Sayan Ranu. Sahil works in the area of Learning to solve graph optimization problems with focus on Combinatorial Optimization, Graph Neural Networks, Lifelong Learning and Generative modeling. He is also interested in applications of Computer Science concepts in other fields such as VLSI Chip Design and Material Science to discover new materials. He has interned at prestigious institutes such as the University of Tokyo, Naver Labs- France and Qualcomm AI research Amsterdam. His research works have been published in conferences such as NeurIPS, AAAI, ICML and ECML-PKDD. Additionally he has one US patent granted to his name. Sahil has been the recipient of the Qualcomm Innovation Fellowship for the year 2022.

    Graph neural networks (GNNs), in general, are built on the assumption of a static set of features characterizing each node in a graph. This assumption is often violated in practice. Existing methods partly address this issue through feature imputation. However, these techniques (i) assume uniformity of feature set across nodes, (ii) are transductive by nature, and (iii) fail to work when features are added or removed over time. In this work, we address these limitations through a novel GNN framework called GRAFENNE. GRAFENNE performs a novel allotropic transformation on the original graph, wherein the nodes and features are decoupled through a bipartite encoding. Through a carefully chosen message passing framework on the allotropic transformation, we make the model parameter size independent of the number of features and thereby inductive to both unseen nodes and features. We prove that GRAFENNE is at least as expressive as any of the existing message-passing GNNs in terms of Weisfeiler-Leman tests, and therefore, the additional inductivity to unseen features does not come at the cost of expressivity. In addition, as demonstrated over four real-world graphs, GRAFENNE empowers the underlying GNN with high empirical efficacy and the ability to learn in continual fashion over streaming feature sets.
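    The bipartite decoupling of nodes and features can be sketched as follows; the function name and dictionary-based input are hypothetical illustrations, not GRAFENNE's actual API:

```python
def allotropic_transform(node_features):
    """Bipartite (allotropic) encoding sketch: every node and every
    distinct feature becomes a vertex, and an edge (node, feature,
    value) records that the node currently carries that feature.
    Adding or removing a feature is then just an edge update, so the
    parameter count need not depend on the number of features.
    node_features: dict node -> dict {feature_name: value}"""
    feature_vertices = set()
    edges = []
    for node, feats in node_features.items():
        for fname, value in feats.items():
            feature_vertices.add(fname)
            edges.append((node, fname, value))
    return feature_vertices, edges

# non-uniform feature sets across nodes are handled naturally:
fv, e = allotropic_transform({"a": {"age": 30, "deg": 2}, "b": {"deg": 5}})
# fv == {"age", "deg"}; node "b" simply has no edge to "age"
```

    Message passing on this bipartite structure is what lets the model stay inductive to both unseen nodes and unseen features.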
  • [October 26th, 2023]

  • Along the Time: Timeline-traced Embedding for Temporal Knowledge Graph Completion CIKM 2022
    Presenter: Fuwei Zhang and Zhao Zhan Institute of Computing Technology, Chinese Academy of Sciences
    Fuwei Zhang is now a PhD student from the Institute of Artificial Intelligence, Beihang University. His research interests include data mining and applied machine learning, with a special focus on the representation and application of knowledge graphs.
    Zhao Zhan is an assistant professor at the Institute of Computing Technology, Chinese Academy of Sciences. His research interest lies in data mining and machine learning, especially the representation and application of knowledge graphs.

    Recent years have witnessed remarkable progress on knowledge graph embedding (KGE) methods to learn the representations of entities and relations in static knowledge graphs (SKGs). However, knowledge changes over time. In order to represent the facts happening at a specific time, temporal knowledge graph (TKG) embedding approaches have been put forward. However, most existing models ignore the independence of semantic and temporal information. We empirically find that current models have difficulty distinguishing representations of the same entity or relation at different timestamps. In this regard, we propose a TimeLine-Traced Knowledge Graph Embedding method (TLT-KGE) for temporal knowledge graph completion. TLT-KGE aims to embed the entities and relations with timestamps as a complex vector or a quaternion vector. Specifically, TLT-KGE models semantic information and temporal information as different axes of complex number space or quaternion space. Meanwhile, two specific components carving the relationship between semantic and temporal information are devised to buoy the modeling. In this way, the proposed method can not only distinguish the independence of the semantic and temporal information, but also establish a connection between them. Experimental results on the link prediction task demonstrate that TLT-KGE achieves substantial improvements over state-of-the-art competitors. The source code will be available at https://github.com/zhangfw123/TLT-KGE.
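    The axis-splitting idea can be illustrated with a toy complex embedding (a simplified sketch of the idea, not TLT-KGE's actual scoring function): the real axis carries semantics, the imaginary axis carries time.

```python
import numpy as np

def timestamped_embedding(semantic, temporal):
    """Semantic and temporal information occupy different axes of one
    complex vector, so they stay separable yet jointly usable."""
    return semantic + 1j * temporal

sem = np.array([0.3, -0.7])                               # shared semantics
e_t1 = timestamped_embedding(sem, np.array([0.1, 0.4]))   # entity at t1
e_t2 = timestamped_embedding(sem, np.array([0.9, -0.2]))  # same entity at t2
assert np.allclose(e_t1.real, e_t2.real)   # identical semantic part
assert not np.allclose(e_t1, e_t2)         # distinguishable overall
```

    The same entity at two timestamps shares its semantic axis but remains distinguishable, which is exactly the failure mode of prior models that this design avoids.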
  • [November 2nd, 2023]

  • My journey in the land of temporal networks and dynamic graphs
    Presenter: Petter Holme Aalto University
    Petter Holme is a Professor of Network Science at the Department of Computer Science, Aalto University, Finland, and a Research Fellow at The Center of Computational Social Science, Kobe University, Japan. Formerly, Holme held faculty positions at the Tokyo Institute of Technology, Japan, Sungkyunkwan University, Korea, and Umeå University and the Royal Institute of Technology, Sweden. His research covers a broad scientific ground in the borderland between the social and formal sciences. Among many other things, Holme pioneered the study of temporal networks. Holme has over 190 publications and over 19,000 citations.

    I will cover some highlights and insights in my two-decade-long research on how to understand systems through their combined temporal and topological structure. I will discuss: The use of randomization techniques to understand diffusion dynamics on temporal networks. The profound difficulties in generalizing insights from network science and graph theory to temporal graphs. The philosophy of community structures in graphs. Why there hasn't been any discovery of ubiquitous structures that are both temporal and topological.
  • [November 9th, 2023]

  • Temporal Dynamics-Aware Adversarial Attacks on Discrete-Time Dynamic Graph Models KDD 2023
    Presenter: Kartik Sharma Georgia Institute of Technology
    Kartik Sharma is a third-year PhD student at Georgia Tech, where he is advised by Prof. Srijan Kumar. He is interested in studying the robustness and alignment of machine learning models. His works are focused on studying these models for structured data such as social networks, temporal graphs, and knowledge bases. He has published in venues such as KDD, AAAI, CIKM, and WSDM.

    Real-world graphs such as social networks, communication networks, and rating networks are constantly evolving over time. Many deep learning architectures have been developed to learn effective node representations using both graph structure and dynamics. While being crucial for practical applications, the robustness of these representation learners for dynamic graphs in the presence of adversarial attacks is highly understudied. In this work, we design a novel adversarial attack on discrete-time dynamic graph models where we desire to perturb the input graph sequence in a manner that preserves the temporal dynamics of the graph while dropping the performance of representation learners. To this end, we motivate a novel Temporal Dynamics-Aware Perturbation (TDAP) constraint, which ensures that perturbations introduced at each time step are restricted to only a small fraction of the number of changes in the graph since the previous time step. We present a theoretically-motivated Projected Gradient Descent approach for dynamic graphs to find effective perturbations under the TDAP constraint. Experiments on two tasks, dynamic link prediction and node classification, show that our approach is up to 4x more effective than the baseline methods for attacking these models. We extend our approach to a more practical online setting where graphs become available in real-time and show up to 5x superior performance over baselines. We also show that our approach successfully evades state-of-the-art neural approaches for anomaly detection, thereby promoting the need to study robustness as a part of representation-learning approaches for dynamic graphs.
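    The TDAP budget itself is easy to state in code. The sketch below (edge-set snapshots and an epsilon fraction parameter are illustrative assumptions) checks that an attack's edits stay within a fraction of the graph's natural change between snapshots:

```python
def tdap_budget(prev_edges, curr_edges, epsilon):
    """Allowed perturbations at this step: a fraction epsilon of the
    natural change (symmetric difference) between consecutive snapshots."""
    natural_change = len(set(prev_edges) ^ set(curr_edges))
    return int(epsilon * natural_change)

def satisfies_tdap(perturbed_edges, curr_edges, prev_edges, epsilon):
    """True if the perturbed snapshot differs from the clean one by at
    most the TDAP budget."""
    n_perturbed = len(set(perturbed_edges) ^ set(curr_edges))
    return n_perturbed <= tdap_budget(prev_edges, curr_edges, epsilon)

prev = [(0, 1), (1, 2)]
curr = [(0, 1), (2, 3), (3, 4)]          # 3 edges changed naturally
ok = satisfies_tdap(curr + [(0, 4)], curr, prev, epsilon=0.5)            # 1 <= 1
too_many = satisfies_tdap(curr + [(0, 4), (0, 5)], curr, prev, 0.5)      # 2 > 1
```

    Projecting candidate perturbations back onto this budget at every step is what keeps the attack aligned with the graph's own temporal dynamics.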
  • [November 16th, 2023] **cancelled, to be rescheduled**

  • Temporal Graphs and Temporal Network Characteristics for Bio-Inspired Networks during Optimization
    Presenter: Abdel Isakovic Colgate University
    Abdel Isakovic is a Visiting Assistant Professor of Physics at Colgate University.

    Temporal network analysis and time evolution of network characteristics are powerful tools in describing the changing topology of dynamic networks. This paper uses such approaches to better visualize and provide analytical measures for the changes in performance that we observed in Voronoi-type spatial coverage, particularly for the example of time evolving networks with a changing number of wireless sensors being deployed. Specifically, our analysis focuses on the role different combinations of impenetrable obstacles and environmental noise play in connectivity and overall network structure. It is shown how the use of (i) temporal network graphs, and (ii) network centrality and regularity measures illustrate the differences between various options developed for the balancing act of energy and time efficiency in network coverage. Lastly, we compare the outcome of these measures with the less abstract classification variables, such as percent area covered, and cumulative distance travelled.
  • [November 23rd, 2023]

  • ULTRA: Foundation Models for Knowledge Graph Reasoning
    Presenter: Michael Galkin Intel AI Lab
    Michael Galkin is a Research Scientist at Intel AI Lab in San Diego working on Graph Machine Learning and Geometric Deep Learning. Previously, he was a postdoc at Mila – Quebec AI Institute with Will Hamilton, Reihaneh Rabbany, and Jian Tang, focusing on many graph representation learning problems. Sometimes, Mike writes long blog posts on Medium about graph learning.

    Foundation models in language and vision have the ability to run inference on any textual and visual inputs thanks to transferable representations such as a vocabulary of tokens in language. Knowledge graphs (KGs) have different entity and relation vocabularies that generally do not overlap. The key challenge of designing foundation models on KGs is to learn such transferable representations that enable inference on any graph with arbitrary entity and relation vocabularies. In this work, we make a step towards such foundation models and present ULTRA, an approach for learning universal and transferable graph representations. ULTRA builds relational representations as a function conditioned on their interactions. Such a conditioning strategy allows a pre-trained ULTRA model to inductively generalize to any unseen KG with any relation vocabulary and to be fine-tuned on any graph. Conducting link prediction experiments on 57 different KGs, we find that the zero-shot inductive inference performance of a single pre-trained ULTRA model on unseen graphs of various sizes is often on par with or better than strong baselines trained on specific graphs. Fine-tuning further boosts the performance.
  • [November 30th, 2023]

  • Self-Supervised Dynamic Graph Representation Learning via Temporal Subgraph Contrast TKDD
    Presenter: Ke-Jia Chen Nanjing University of Posts and Telecommunications
    Ke-Jia Chen is an associate professor at Nanjing University of Posts and Telecommunications, China. She received her PhD from the Université de Technologie de Compiègne, France, and her M.S. and B.S. degrees from Nanjing University, China. Her current research focuses on graph machine learning and complex network analysis. She has published papers in TOIS, TKDD, ECML, ICDM, ASONAM, etc.

    Self-supervised learning on graphs has recently drawn a lot of attention due to its independence from labels and its robustness in representation. Current studies on this topic mainly use static information such as graph structures but cannot well capture dynamic information such as timestamps of edges. Realistic graphs are often dynamic, which means the interaction between nodes occurs at a specific time. This paper proposes a self-supervised dynamic graph representation learning framework (DySubC), which defines a temporal subgraph contrastive learning task to simultaneously learn the structural and evolutional features of a dynamic graph. Specifically, a novel temporal subgraph sampling strategy is firstly proposed, which takes each node of the dynamic graph as the central node and uses both neighborhood structures and edge timestamps to sample the corresponding temporal subgraph. The subgraph representation function is then designed according to the influence of neighborhood nodes on the central node after encoding the nodes in each subgraph. Finally, the structural and temporal contrastive loss are defined to maximize the mutual information between node representation and temporal subgraph representation. Experiments on five real-world datasets demonstrate that (1) DySubC performs better than the related baselines including two graph contrastive learning models and four dynamic graph representation learning models in the downstream link prediction task, and (2) the use of temporal information can not only sample more effective subgraphs, but also learn better representation by temporal contrastive loss.
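    A minimal sketch of the temporal subgraph sampling idea, keeping a central node's most recent interactions within a time horizon (DySubC's actual strategy also weights neighborhood structure; names and parameters here are illustrative):

```python
def sample_temporal_subgraph(edges, center, t_ref, horizon, max_neighbors):
    """Keep edges incident to `center` whose timestamps fall within
    `horizon` of the reference time, most recent first.
    edges: list of (u, v, t) interactions."""
    incident = [e for e in edges
                if center in (e[0], e[1]) and 0 <= t_ref - e[2] <= horizon]
    incident.sort(key=lambda e: e[2], reverse=True)  # recency bias
    return incident[:max_neighbors]

edges = [(0, 1, 5), (0, 2, 9), (3, 0, 2), (1, 2, 8)]
sub = sample_temporal_subgraph(edges, center=0, t_ref=10, horizon=6,
                               max_neighbors=2)
# keeps (0, 2, 9) and (0, 1, 5); (3, 0, 2) is too old, (1, 2, 8) not incident
```

    The sampled subgraph is then encoded and contrasted against the central node's representation, as the abstract describes.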
  • Past Talks, Summer 2023

    [May 4th, 2023]

  • De Bruijn Goes Neural: Causality-Aware Graph Neural Networks for Time Series Data on Dynamic Graphs LOG 2022
    Presenter: Ingo Scholtes and Lisi Qarkaxhija Center for Artificial Intelligence and Data Science of Julius-Maximilians-Universität Würzburg, Germany
    Ingo Scholtes is a Full Professor for Machine Learning in Complex Networks at the Center for Artificial Intelligence and Data Science of Julius-Maximilians-Universität Würzburg, Germany, as well as SNSF Professor for Data Analytics at the Department of Computer Science at the University of Zürich, Switzerland. He has a background in computer science and mathematics and obtained his doctorate degree from the University of Trier, Germany. At CERN, he developed a large-scale data distribution system, which is currently used to monitor particle collision data from the ATLAS detector. After finishing his doctorate degree, he was a postdoctoral researcher at the interdisciplinary Chair of Systems Design at ETH Zürich from 2011 till 2016. In 2016 he held an interim professorship for Applied Computer Science at the Karlsruhe Institute of Technology, Germany. In 2017 he returned to ETH Zürich as a senior assistant and lecturer. In 2019 he was appointed Full Professor at the University of Wuppertal. Since 2021 he holds the Chair of Computer Science XV - Machine Learning for Complex Networks at Julius-Maximilians-Universität Würzburg, Germany.
    Lisi Qarkaxhija is a Computer Science PhD candidate at the Chair of Machine Learning for Complex Networks, Center for Artificial Intelligence and Data Science (CAIDAS), Julius-Maximilians-Universität Würzburg, under the supervision of Prof. Ingo Scholtes. His research focuses on developing higher-order graph neural network models and their applications in interdisciplinary fields. Prior to his PhD studies, Lisi completed his Master's in Data Science at the University of Primorska (FAMNIT), Koper, Slovenia, and his Bachelor's in Mathematics from the same institution.

    We introduce De Bruijn Graph Neural Networks (DBGNNs), a novel time-aware graph neural network architecture for time-resolved data on dynamic graphs. Our approach accounts for temporal-topological patterns that unfold in the causal topology of dynamic graphs, which is determined by causal walks, i.e. temporally ordered sequences of links by which nodes can influence each other over time. Our architecture builds on multiple layers of higher-order De Bruijn graphs, an iterative line graph construction where nodes in a De Bruijn graph of order k represent walks of length k-1, while edges represent walks of length k. We develop a graph neural network architecture that utilizes De Bruijn graphs to implement a message passing scheme that considers non-Markovian characteristics of causal walks, which enables us to learn patterns in the causal topology of dynamic graphs. Addressing the issue that De Bruijn graphs with different orders k can be used to model the same data, we apply statistical model selection to determine the optimal graph to be used for message passing. An evaluation in synthetic and empirical data sets suggests that DBGNNs can leverage temporal patterns in dynamic graphs, which substantially improves performance in a node classification task.
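    The construction described above can be sketched directly. This toy builder (a hypothetical helper, with observed causal walks given as vertex tuples) shows how order-k nodes and edges arise:

```python
from collections import Counter

def de_bruijn_graph(walks, k):
    """Order-k De Bruijn graph: each node is a walk of length k-1
    (a tuple of k vertices); an edge connects two nodes overlapping in
    k-1 vertices and is supported by an observed walk of length k."""
    edges = Counter()
    for walk in walks:
        for i in range(len(walk) - k):
            src = tuple(walk[i:i + k])
            dst = tuple(walk[i + 1:i + k + 1])
            edges[(src, dst)] += 1       # count supporting walks
    nodes = {n for pair in edges for n in pair}
    return nodes, edges

# two causal walks; for k=2 the nodes are vertex pairs
nodes, edges = de_bruijn_graph([("a", "b", "c"), ("b", "c", "d")], k=2)
# nodes: ("a","b"), ("b","c"), ("c","d"); edge ("a","b") -> ("b","c"), etc.
```

    Building this graph for several values of k and choosing among them mirrors the statistical model selection step mentioned in the abstract.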
  • [May 11th, 2023]

  • Complex Evolutional Pattern Learning for Temporal Knowledge Graph Reasoning, ACL 2022
    Presenter: Zixuan Li, Chinese Academy of Sciences
    Zixuan Li is currently an assistant professor at the CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences. Before that, he obtained his Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences. His current research interest includes knowledge graphs and natural language processing. He regularly publishes in top-tier conferences and journals of the field, including SIGIR, ACL, EMNLP, AAAI, KIS and IEEE TKDE.

    A Temporal Knowledge Graph (TKG) is a sequence of KGs corresponding to different timestamps. TKG reasoning aims to predict potential facts in the future given the historical KG sequences. One key of this task is to mine and understand evolutional patterns of facts from these sequences. The evolutional patterns are complex in two aspects, length-diversity and time-variability. Existing models for TKG reasoning focus on modeling fact sequences of a fixed length, which cannot discover complex evolutional patterns that vary in length. Furthermore, these models are all trained offline, which cannot adapt well to subsequent changes in the evolutional patterns. Thus, we propose a new model, called Complex Evolutional Network (CEN), which uses a length-aware Convolutional Neural Network (CNN) to handle evolutional patterns of different lengths via an easy-to-difficult curriculum learning strategy. Besides, we propose to learn the model under the online setting so that it can adapt to the changes of evolutional patterns over time. Extensive experiments demonstrate that CEN obtains substantial performance improvement under both the traditional offline and the proposed online settings.
  • [May 25th, 2023]

  • Graph Kalman Filters
    Presenter: Daniele Zambon, The Swiss AI Lab IDSIA & Università della Svizzera italiana, Switzerland.
    Daniele Zambon is currently a postdoctoral researcher with the Swiss AI Lab IDSIA, Università della Svizzera italiana USI, Lugano, Switzerland. He received his Ph.D. degree from USI, with a thesis on anomaly and change detection in sequences of graphs. He holds MSc and BSc degrees in mathematics from Università degli Studi di Milano, Milan, Italy. He has been a Visiting Researcher/Intern at the University of Florida, Gainesville, FL, USA, the University of Exeter, Exeter, U.K., and STMicroelectronics, Geneva, Switzerland. His research interests include machine learning on graph-structured data, time series analysis, and learning in nonstationary environments. He regularly publishes in and is in the program committee of top-tier journals and conferences of the field, including IEEE TPAMI, IEEE TNNLS, IEEE TSP, NeurIPS, ICML, and ICLR.

    The well-known Kalman filters model dynamical systems by relying on state-space representations with the next state updated, and its uncertainty controlled, by fresh information associated with newly observed system outputs. This paper generalizes, for the first time in the literature, Kalman and extended Kalman filters to discrete-time settings where inputs, states, and outputs are represented as attributed graphs whose topology and attributes can change with time. The setup also allows the framework to be adapted to cases where the output is a vector or a scalar (node-level/graph-level tasks). Within the proposed theoretical framework, the unknown state-transition and readout functions are learned end-to-end along with the downstream prediction task.
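For readers unfamiliar with the classical recursion being generalized, a scalar predict/update cycle can be sketched as follows (toy constants; the talk's model replaces these linear maps with learned functions over attributed graphs, so this is only a refresher, not the paper's method):

```python
def kalman_step(x, p, z, a=1.0, q=0.0, h=1.0, r=1.0):
    """One predict/update cycle of a scalar Kalman filter with state
    transition a, process noise q, observation model h, and
    observation noise r."""
    # predict: propagate the state estimate and its uncertainty
    x_pred = a * x
    p_pred = a * a * p + q
    # update: fold in the fresh observation z via the Kalman gain
    k = p_pred * h / (h * h * p_pred + r)
    x_new = x_pred + k * (z - h * x_pred)
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new

# a near-noiseless observation pulls the estimate toward the measurement
x, p = kalman_step(0.0, 1.0, 5.0, r=1e-9)
```

In the graph-valued setting of the talk, the state-transition and readout maps above are no longer fixed matrices but functions learned end-to-end.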
  • [June 8th, 2023]

  • Fast and Attributed Change Detection on Dynamic Graphs with Density of States, PAKDD 2023
    Presenter: Shenyang Huang, Mila and School of Computer Science, McGill University
    Shenyang Huang is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection and graph representation learning. He was the Organization Chair for the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs and link prediction on temporal graphs.

    How can we detect traffic disturbances from international flight transportation logs or changes to collaboration dynamics in academic networks? These problems can be formulated as detecting anomalous change points in a dynamic graph. Current solutions do not scale well to large real-world graphs, lack robustness to large amounts of node additions/deletions, and overlook changes in node attributes. To address these limitations, we propose a novel spectral method: Scalable Change Point Detection (SCPD). SCPD generates an embedding for each graph snapshot by efficiently approximating the distribution of the Laplacian spectrum at each step. SCPD can also capture shifts in node attributes by tracking correlations between attributes and eigenvectors. Through extensive experiments using synthetic and real-world data, we show that SCPD (a) achieves state-of-the-art performance, (b) is significantly faster than the state-of-the-art methods and can easily process millions of edges in a few CPU minutes, (c) can effectively tackle a large quantity of node attributes, additions or deletions and (d) discovers interesting events in large real-world graphs.
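One toy way to see how a spectral summary supports change detection: the m-th moment of the Laplacian eigenvalue distribution equals tr(L^m)/n, so a short vector of moments characterizes each snapshot and can be compared across time. This dense pure-Python sketch is only illustrative; SCPD itself approximates the full spectral density with a far faster stochastic method:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def laplacian(adj):
    """Combinatorial Laplacian L = D - A of a dense adjacency list-of-lists."""
    n = len(adj)
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]

def spectral_moments(adj, num_moments=3):
    """Summarize one snapshot by the first moments of its Laplacian
    spectrum: tr(L^m)/n is the m-th moment of the eigenvalue
    distribution. Comparing moment vectors of consecutive snapshots
    flags candidate change points."""
    L = laplacian(adj)
    P, moments, n = L, [], len(adj)
    for _ in range(num_moments):
        moments.append(sum(P[i][i] for i in range(n)) / n)
        P = matmul(P, L)
    return moments

# triangle graph: Laplacian eigenvalues are 0, 3, 3
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
m = spectral_moments(tri, num_moments=3)
```

For the triangle, the moments (2, 6, 18) match (0 + 3^m + 3^m)/3 exactly, confirming the trace identity.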
  • [June 15th, 2023]

  • Chartalist: Labeled Graph Datasets for UTXO and Account-based Blockchains, NeurIPS 2022 Datasets and Benchmarks Track
    Presenter: Kiarash Shamsi and Cuneyt Gurcan Akcora, University of Manitoba, Canada
    Kiarash Shamsi is a Ph.D. student in computer science at the University of Manitoba in Canada; he earned his Master's and Bachelor's degrees in computer software engineering from the University of Science and Culture and the University of Science and Technology in Iran. Kiarash's research interests revolve around machine learning, deep learning, data analysis, and blockchain technologies, focusing on their practical applications. Kiarash is contributing to the advancement of these areas through research and publication, with his work being featured in prestigious conferences and journals such as NeurIPS, ICBC, and IEEE Access.
    Cuneyt Gurcan Akcora is an assistant professor of computer science and statistics at the University of Manitoba in Canada. He received his Ph.D. from the University of Insubria, Italy. His research interests include data science on complex networks and large-scale graph analysis, with applications in social, biological, IoT, and blockchain networks. Akcora has been awarded a Fulbright Scholarship and has published his research in leading conferences and journals, including IEEE Transactions, KDD, NeurIPS, VLDB, ICDM, SDM, IJCAI, and ICDE.

    Topic 1 Abstract: Over the last couple of years, the Bitcoin cryptocurrency and the Blockchain technology that forms the basis of Bitcoin have received unprecedented attention. Designed to facilitate a secure distributed platform without central regulation, Blockchain is heralded as a novel paradigm that will be as powerful as Big Data, Cloud Computing, and Machine Learning. The Blockchain technology garners an ever-increasing interest of researchers in various domains that benefit from scalable cooperation among trust-less parties. As Blockchain applications proliferate, so do the complexity and volume of data stored by Blockchains. Analyzing this data has emerged as an important research topic, already leading to methodological advancements in the information sciences. In this tutorial, we offer a holistic view of applied Data Science on Blockchains. Starting with the core components of Blockchain, we will detail the state of the art in Blockchain data analytics for the graph, security, and finance domains. Our examples will answer questions such as: "How to parse, extract, and clean the data stored in blockchains?", "How to store and query Blockchain data?", and "What features can be computed from blockchains?"
    Topic 2 Abstract (Topological Data Analysis on Networks, Applications and Scalability Issues): Over the last couple of years, Topological Data Analysis (TDA) has seen growing interest from Data Scientists of diverse backgrounds. TDA is an emerging field at the interface of algebraic topology, statistics, and computer science. The key rationale in TDA is that the observed data are sampled from some metric space, and the underlying unknown geometric structure of this space is lost because of sampling. TDA recovers the lost underlying topology. We aim to adapt TDA algorithms to work on networks and to overcome the scalability issues that arise while working on large networks. In this talk, I will outline our three alternative approaches to applying Persistent Homology and TDA Mapper-based Topological Data Analysis algorithms to Blockchain networks.
    see more from the following papers:
    BitcoinHeist: Topological Data Analysis for Ransomware Prediction on the Bitcoin Blockchain
    ChainNet: Learning on Blockchain Graphs with Topological Features
    Dissecting Ethereum Blockchain Analytics: What We Learn from Topology and Geometry of the Ethereum Graph?
  • [June 22nd, 2023]

  • Comparing Apples and Oranges? On the Evaluation of Methods for Temporal Knowledge Graph Forecasting, ECML 2023 and NeurIPS 2022 TGL workshop
    Presenter: Julia Gastinger, NEC Laboratories Europe and Data and Web Science Group at University of Mannheim
    Julia Gastinger is a research scientist at NEC Laboratories Europe and a second-year Ph.D. student in the Data and Web Science Group at University of Mannheim, supervised by Professor Heiner Stuckenschmidt. Her research primarily focuses on Temporal Knowledge Graphs; specifically, she is interested in Temporal Knowledge Graph Forecasting and the evaluation of methods in this research field. Julia actively contributes to the research community as a co-organizer of the temporal graph learning reading group.

    Due to its ability to incorporate and leverage time information in relational data, Temporal Knowledge Graph (TKG) learning has become an increasingly studied research field. To predict the future based on TKG, researchers have presented innovative methods for Temporal Knowledge Graph Forecasting. However, the experimental procedures employed in this research area exhibit inconsistencies that significantly impact empirical results, leading to distorted comparisons among models. This paper focuses on the evaluation of TKG Forecasting models: We examine the evaluation settings commonly used in this research area and highlight the issues that arise. To make different approaches to TKG Forecasting more comparable, we propose a unified evaluation protocol and apply it to re-evaluate state-of-the-art models on the most commonly used datasets. Ultimately, we demonstrate the significant difference in results caused by different evaluation settings. We believe this work provides a solid foundation for future evaluations of TKG Forecasting models, thereby contributing to advancing this growing research area.
  • Past Talks, Winter 2023

    [Feb. 16th, 2023]

  • Neighborhood-aware Scalable Temporal Network Representation Learning, LOG 2022 Conference Best Paper
    Presenter: Yuhong Luo, University of Massachusetts
    Yuhong Luo is a first-year master's student at UMass Amherst. Prior to that, he worked as a software engineer at Airbnb and as a research intern with Professor Pan Li of Purdue University. His research interests include machine learning, graph representation learning, and ML fairness.

    Temporal networks have been widely used to model real-world complex systems such as financial systems and e-commerce systems. In a temporal network, the joint neighborhood of a set of nodes often provides crucial structural information useful for predicting whether they may interact at a certain time. However, recent representation learning methods for temporal networks often fail to extract such information or depend on online construction of structural features, which is time-consuming. To address the issue, this work proposes the Neighborhood-Aware Temporal network model (NAT). For each node in the network, NAT abandons the commonly-used single-vector representation and instead adopts a novel dictionary-type neighborhood representation. Such a dictionary representation records a down-sampled set of the neighboring nodes as keys, and allows fast construction of structural features for a joint neighborhood of multiple nodes. We also design a dedicated data structure termed N-cache to support parallel access and update of those dictionary representations on GPUs. NAT is evaluated on seven real-world large-scale temporal networks. NAT not only outperforms all cutting-edge baselines by an average of 1.2% and 4.2% in transductive and inductive link prediction accuracy, respectively, but also remains scalable, achieving a speed-up of 4.1-76.7× against the baselines that adopt joint structural features and a speed-up of 1.6-4.0× against the baselines that cannot adopt those features. Code: https://github.com/Graph-COM/Neighborhood-Aware-Temporal-Network.
  • [Feb. 23rd, 2023]

  • Direct Embedding of Temporal Network Edges via Time-Decayed Line Graphs, ICLR 2023
    Presenter: Sudhanshu (Dan) Chanpuriya, University of Massachusetts, Amherst
    Sudhanshu Chanpuriya is a PhD student at UMass Amherst's Theoretical Computer Science Group, where he is advised by Cameron Musco. His research interests lie at the intersection of machine learning and spectral graph theory. Prior to joining UMass, he studied computer science and engineering physics at Dartmouth College.

    Temporal networks model a variety of important phenomena involving timed interactions between entities. Existing methods for machine learning on temporal networks generally exhibit at least one of two limitations. First, time is assumed to be discretized, so if the time data is continuous, the user must determine the discretization and discard precise time information. Second, edge representations can only be calculated indirectly from the nodes, which may be suboptimal for tasks like edge classification. We present a simple method that avoids both shortcomings: construct the line graph of the network, which includes a node for each interaction, and weigh the edges of this graph based on the difference in time between interactions. From this derived graph, edge representations for the original network can be computed with efficient classical methods. The simplicity of this approach facilitates explicit theoretical analysis: we can constructively show the effectiveness of our method's representations for a natural synthetic model of temporal networks. Empirical results on real-world networks demonstrate our method's efficacy and efficiency on both edge classification and temporal link prediction.
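The core construction is compact enough to sketch: each timed interaction becomes a line-graph node, and two interactions sharing an endpoint are connected with a weight that decays with their time difference. The exponential kernel, the function name, and the dense pairwise loop below are illustrative assumptions, not the paper's exact choices:

```python
from itertools import combinations
from math import exp

def time_decayed_line_graph(interactions, decay=1.0):
    """Each timed interaction (u, v, t) becomes a node of the derived
    graph; two interactions that share an endpoint are linked with a
    weight that decays in the gap between their timestamps."""
    edges = {}
    for i, j in combinations(range(len(interactions)), 2):
        u1, v1, t1 = interactions[i]
        u2, v2, t2 = interactions[j]
        if {u1, v1} & {u2, v2}:  # interactions share an endpoint
            edges[(i, j)] = exp(-decay * abs(t1 - t2))
    return edges

# three interactions; only the first two share node "b"
lg = time_decayed_line_graph([("a", "b", 0.0), ("b", "c", 1.0), ("d", "e", 2.0)])
```

Classical embedding methods can then be run on this weighted derived graph to obtain edge representations for the original network directly.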
  • [March 2nd, 2023]

  • Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs, NeurIPS 2022
    Presenter: Ming Jin, Monash University
    Ming Jin is a Ph.D. Candidate in Computer Science at Monash University, where he is currently being supervised by Prof. Shirui Pan and A/Prof. Yuan-Fang Li. He is also a Research Staff at Metso Outotec and a Research Assistant at Monash University. Ming completed his Bachelor's and Master's degrees at Hebei University of Technology and The University of Melbourne in 2017 and 2019, respectively. He has published over ten top-ranked conference and journal papers and regularly served as a PC member and reviewer of major AI/ML/DM conferences and journals, including IJCAI, PAKDD, TNNLS, TKDE, and more. Ming's research interests mainly focus on graph neural networks and geometric deep learning, with a particular emphasis on temporal settings such as dynamic graph neural networks. He is particularly passionate about applying these techniques to tackle practical challenges such as time series forecasting, industrial process modeling, and anomaly detection.

    Continuous-time dynamic graphs naturally abstract many real-world systems, such as social and transactional networks. While the research on continuous-time dynamic graph representation learning has made significant advances recently, neither graph topological properties nor temporal dependencies have been well-considered and explicitly modeled in capturing dynamic patterns. In this paper, we introduce a new approach, Neural Temporal Walks (NeurTWs), for representation learning on continuous-time dynamic graphs. By considering not only time constraints but also structural and tree traversal properties, our method conducts spatiotemporal-biased random walks to retrieve a set of representative motifs, enabling temporal nodes to be characterized effectively. With a component based on neural ordinary differential equations, the extracted motifs allow for irregularly-sampled temporal nodes to be embedded explicitly over multiple different interaction time intervals, enabling the effective capture of the underlying spatiotemporal dynamics. To enrich supervision signals, we further design a harder contrastive pretext task for model optimization. Our method demonstrates overwhelming superiority under both transductive and inductive settings on six real-world datasets.
  • [March 9th, 2023]

  • Learnable Spectral Wavelets on Dynamic Graphs to Capture Global Interactions, AAAI 2023
    Presenter: Anson Bastos, Indian Institute of Technology, Hyderabad (IITH, India) and Abhishek Nadgeri, RWTH Aachen, Germany
    Anson Bastos is a PhD student at the Indian Institute of Technology, Hyderabad (IITH, India) under the supervision of Dr Manish Singh (IITH, India) and Dr Toyotaro Suzumura (The University of Tokyo, Japan). His research interests lie in geometric machine learning, graph signal processing, topology, and knowledge graphs.
    Abhishek Nadgeri is currently pursuing his master's at RWTH Aachen, Germany. His research interests are Graph ML, time series, and NLP.

    Learning on evolving (dynamic) graphs has caught the attention of researchers, as static methods exhibit limited performance in this setting. The existing methods for dynamic graphs learn spatial features by local neighborhood aggregation, which essentially captures only low-pass signals and local interactions. In this work, we go beyond current approaches to incorporate global features for effectively learning representations of a dynamically evolving graph. We propose to do so by capturing the spectrum of the dynamic graph. Since static methods to learn the graph spectrum would not consider the history of the evolution of the spectrum as the graph evolves with time, we propose a novel approach to learn graph wavelets that capture this evolving spectrum. Further, we propose a framework that integrates the dynamically captured spectra, in the form of these learnable wavelets, into spatial features for incorporating local and global interactions. Experiments on eight standard datasets show that our method significantly outperforms related methods on various tasks for dynamic graphs.
  • [March 16th, 2023]

  • Do We Really Need Complicated Model Architectures For Temporal Networks?, ICLR 2023
    Presenter: Weilin Cong, Pennsylvania State University
    Weilin Cong is a 4th year Ph.D. candidate at Penn State University. His research focuses on fundamental problems in graph representation learning, including optimization, generalization, expressive power, and model architecture design. He has published as first author on graph representation learning at the AI conferences NeurIPS, ICLR, AISTATS, KDD, and SDM.

    Recurrent neural networks (RNNs) and the self-attention mechanism (SAM) are the de facto methods to extract spatial-temporal information for temporal graph learning. Interestingly, we found that although both RNN and SAM could lead to good performance, in practice neither of them is always necessary. In this paper, we propose GraphMixer, a conceptually and technically simple architecture that consists of three components: (1) a link-encoder that is only based on multi-layer perceptrons (MLPs) to summarize the information from temporal links, (2) a node-encoder that is only based on neighbor mean-pooling to summarize node information, and (3) an MLP-based link classifier that performs link prediction based on the outputs of the encoders. Despite its simplicity, GraphMixer attains outstanding performance on temporal link prediction benchmarks with faster convergence and better generalization performance. These results motivate us to rethink the importance of simpler model architectures. Code is in the supplementary.
  • [March 23rd, 2023]

  • Representation Learning in Continuous-Time Dynamic Signed Networks, CIKM 2023
    Presenter: Kartik Sharma, Georgia Institute of Technology
    Kartik Sharma is a 2nd year PhD student at the Georgia Institute of Technology, advised by Prof. Srijan Kumar. Before joining Georgia Tech, Kartik graduated from IIT Delhi in 2021, where he was fortunate to be advised by Prof. Sayan Ranu. His research focuses on learning and optimization for graph-structured data. Through his research, his goal is to (1) study the effectiveness of these models in difficult and practical (e.g., adversarial, constrained, dynamic) scenarios, and (2) develop new models that can circumvent these problems with theoretical grounding.

    Signed networks allow us to model conflicting relationships and interactions, such as friend/enemy and support/oppose. These signed interactions happen in real time. Modeling such dynamics of signed networks is crucial to understanding the evolution of polarization in the network and enabling effective prediction of the signed structure (i.e., link signs and signed weights) in the future. However, existing works have modeled either (static) signed networks or dynamic (unsigned) networks but not dynamic signed networks. Since both sign and dynamics inform the graph structure in different ways, it is non-trivial to model how to combine the two features. In this work, we propose a new Graph Neural Network (GNN)-based approach to model dynamic signed networks, named SEMBA: Signed link's Evolution using Memory modules and Balanced Aggregation. Here, the idea is to incorporate the signs of temporal interactions using separate modules guided by balance theory and to evolve the embeddings from a higher-order neighborhood. Experiments on 4 real-world datasets and 4 different tasks demonstrate that SEMBA consistently and significantly outperforms the baselines by up to 80% on the tasks of predicting signs of future links while matching the state-of-the-art performance on predicting existence of these links in the future. We find that this improvement is due specifically to superior performance of SEMBA on the minority negative class.
  • [March 30th, 2023]

  • Towards Better Evaluation for Dynamic Link Prediction (NeurIPS 2022 Datasets and Benchmarks Track)
    Presenter: Farimah Poursafaei, McGill University, Mila
    Farimah Poursafaei (she/her) is a postdoc at McGill University and Mila. She conducts research on dynamic graph neural networks and temporal graphs. She completed her PhD in Computer Engineering at McGill University, where she worked on anomaly detection on cryptocurrency transaction networks. She served as the Reviewing Chair of the Temporal Graph Learning Workshop @ NeurIPS 2022.

    Despite the prevalence of recent success in learning from static graphs, learning from time-evolving graphs remains an open challenge. In this work, we design new, more stringent evaluation procedures for link prediction specific to dynamic graphs, which reflect real-world considerations, to better compare the strengths and weaknesses of methods. First, we create two visualization techniques to understand the reoccurring patterns of edges over time and show that many edges reoccur at later time steps. Based on this observation, we propose a pure memorization-based baseline called EdgeBank. EdgeBank achieves surprisingly strong performance across multiple settings which highlights that the negative edges used in the current evaluation are easy. To sample more challenging negative edges, we introduce two novel negative sampling strategies that improve robustness and better match real-world applications. Lastly, we introduce six new dynamic graph datasets from a diverse set of domains missing from current benchmarks, providing new challenges and opportunities for future research. Our code repository is accessible at github
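The memorization idea behind EdgeBank is simple enough to state in code. This is a minimal sketch of the unlimited-memory variant (the paper also studies a time-window variant that only remembers recent edges); class and method names are illustrative:

```python
class EdgeBank:
    """Memorization-only baseline: predict a future link as positive
    iff that edge has been observed at some earlier time step."""

    def __init__(self):
        self.seen = set()

    def update(self, edges):
        # remember every (source, destination) pair seen so far;
        # extra fields such as timestamps are ignored
        self.seen.update((u, v) for u, v, *rest in edges)

    def predict(self, u, v):
        return 1.0 if (u, v) in self.seen else 0.0

bank = EdgeBank()
bank.update([("a", "b", 1), ("b", "c", 2)])
```

That a parameter-free baseline like this scores well under standard random negative sampling is precisely the evidence that harder negative edges are needed.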
  • [April 6th, 2023]

  • Scalable Spatiotemporal Graph Neural Networks, AAAI 2023 (also NeurIPS 2022 Temporal Graph Learning Workshop best paper)
    Presenter: Andrea Cini and Ivan Marisca, IDSIA, Università della Svizzera italiana
    Andrea Cini is a Ph.D. student at IDSIA and a visiting researcher at Imperial College London. Previously, he worked as machine learning engineer in the aerospace industry and obtained his MSc and BSc in Computer Science at Politecnico di Milano. Andrea’s research has been published in top-tier machine-learning venues such as JMLR, NeurIPS, ICLR and AAAI. His main research interests are in methods to process spatiotemporal data by exploiting graph representations and graph deep learning; applications are in time series analysis, with a focus on spatiotemporal forecasting. Personal website: https://andreacini.github.io/
    Ivan Marisca is a Ph.D. student at the Graph Machine Learning Group within the Swiss AI lab IDSIA at Università della Svizzera italiana (USI), under the supervision of Prof. Cesare Alippi. He received his BSc (2017) and MSc (2020) degrees in Computer Science and Engineering from Politecnico di Milano. Ivan's research focuses on graph-based learning from irregular spatiotemporal data, with applications in prediction, imputation, and control on sensor networks. His works have been published in top-tier conferences such as NeurIPS, ICLR, and AAAI. In addition to his research, Ivan is an enthusiastic teaching assistant and lecturer in machine learning courses for MSc students at USI. Personal website: https://marshka.github.io/

    SlidesLive video recording from the NeurIPS 2022 TGL workshop
    Neural forecasting of spatiotemporal time series drives both research and industrial innovation in several relevant application domains. Graph neural networks (GNNs) are often the core component of the forecasting architecture. However, in most spatiotemporal GNNs, the computational complexity scales up to a quadratic factor with the length of the sequence times the number of links in the graph, hence hindering the application of these models to large graphs and long temporal sequences. While methods to improve scalability have been proposed in the context of static graphs, few research efforts have been devoted to the spatiotemporal case. To fill this gap, we propose a scalable architecture that exploits an efficient encoding of both temporal and spatial dynamics. In particular, we use a randomized recurrent neural network to embed the history of the input time series into high-dimensional state representations encompassing multi-scale temporal dynamics. Such representations are then propagated along the spatial dimension using different powers of the graph adjacency matrix to generate node embeddings characterized by a rich pool of spatiotemporal features. The resulting node embeddings can be efficiently pre-computed in an unsupervised manner, before being fed to a feed-forward decoder that learns to map the multi-scale spatiotemporal representations to predictions. The training procedure can then be parallelized node-wise by sampling the node embeddings without breaking any dependency, thus enabling scalability to large networks. Empirical results on relevant datasets show that our approach achieves results competitive with the state of the art, while dramatically reducing the computational burden.
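The spatial propagation step, pushing precomputed temporal states along successive powers of the adjacency matrix and concatenating the results, can be sketched as follows (pure Python for clarity; the function name and dense representation are illustrative, and the paper's temporal states come from a randomized recurrent encoder rather than being given directly):

```python
def propagate(states, adj, powers=2):
    """Concatenate A^0 h, A^1 h, ..., A^powers h for every node: each
    node's embedding accumulates features from progressively larger
    neighborhoods, with no learned parameters involved."""
    n, d = len(states), len(states[0])
    out = [list(h) for h in states]  # start with A^0 h = h itself
    cur = states
    for _ in range(powers):
        # one step of propagation: nxt = A @ cur
        nxt = [[sum(adj[i][j] * cur[j][c] for j in range(n)) for c in range(d)]
               for i in range(n)]
        for i in range(n):
            out[i].extend(nxt[i])
        cur = nxt
    return out

# two connected nodes swap their (1-dimensional) states after one hop
emb = propagate([[1.0], [2.0]], [[0, 1], [1, 0]], powers=1)
```

Because this stage has no trainable weights, the embeddings can be precomputed once and the decoder trained on sampled nodes independently, which is what makes the approach scale.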
  • [April 13th, 2023]

  • Spatio-Temporal Meta-Graph Learning for Traffic Forecasting, AAAI 2023
    DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction, CIKM 2021 Best Resource Paper Runner-Up
    Presenter: Renhe Jiang, Center for Spatial Information Science, The University of Tokyo
    Renhe Jiang is a full-time lecturer at Center for Spatial Information Science, The University of Tokyo. He received his B.S. degree in Software Engineering from Dalian University of Technology, China, in 2012, M.S. degree in Information Science from Nagoya University, Japan, in 2015, and Ph.D. degree in Civil Engineering from The University of Tokyo, Japan, in 2019. From 2019 to 2022, he was an assistant professor at Information Technology Center, The University of Tokyo. His research interests include data mining and machine learning, especially spatiotemporal data science, spatiotemporal AI, multivariate time series, urban computing, and intelligent transportation system. You can find his website here.

    Spatio-Temporal Meta-Graph Learning for Traffic Forecasting (AAAI 2023). Traffic forecasting, as a canonical task of multivariate time series forecasting, has been a significant research topic in the AI community. To address the spatio-temporal heterogeneity and non-stationarity implied in the traffic stream, in this study, we propose Spatio-Temporal Meta-Graph Learning as a novel Graph Structure Learning mechanism on spatio-temporal data. Specifically, we implement this idea into Meta-Graph Convolutional Recurrent Network (MegaCRN) by plugging the Meta-Graph Learner powered by a Meta-Node Bank into GCRN encoder-decoder. We conduct a comprehensive evaluation on two benchmark datasets (i.e., METR-LA and PEMS-BAY) and a new large-scale traffic speed dataset called EXPY-TKY that covers 1843 expressway road links in Tokyo. Our model outperformed the state of the art on all three datasets. Moreover, through a series of qualitative evaluations, we demonstrate that our model can explicitly disentangle the road links and time slots with different patterns and be robustly adaptive to any anomalous traffic situations. Codes and datasets are available at https://github.com/deepkashiwa20/MegaCRN.
    DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction (CIKM 2021). Nowadays, with the rapid development of IoT (Internet of Things) and CPS (Cyber-Physical Systems) technologies, big spatiotemporal data are being generated from mobile phones, car navigation systems, and traffic sensors. By leveraging state-of-the-art deep learning technologies on such data, urban traffic prediction has drawn a lot of attention in the AI and Intelligent Transportation System community. The problem can be uniformly modeled with a 3D tensor (T, N, C), where T denotes the total time steps, N denotes the size of the spatial domain (i.e., mesh-grids or graph-nodes), and C denotes the channels of information. According to the specific modeling strategy, the state-of-the-art deep learning models can be divided into three categories: grid-based, graph-based, and multivariate time-series models. In this study, we first synthetically review the deep traffic models as well as the widely used datasets, then build a standard benchmark to comprehensively evaluate their performances with the same settings and metrics. Our study, named DL-Traff, is implemented with the two most popular deep learning frameworks, i.e., TensorFlow and PyTorch, and is already publicly available as two GitHub repositories: https://github.com/deepkashiwa20/DL-Traff-Grid and https://github.com/deepkashiwa20/DL-Traff-Graph. With DL-Traff, we hope to deliver a useful resource to researchers who are interested in spatiotemporal data analysis.
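The uniform (T, N, C) formulation is what lets one benchmark treat grid-based, graph-based, and multivariate time-series models alike: every model consumes windows sliced from the same tensor. A minimal sketch of that slicing (function name and window lengths are illustrative, not DL-Traff's API):

```python
def make_samples(series, in_len, out_len):
    """Slice a time-major (T, N, C) series into (input, target) pairs:
    inputs of in_len consecutive steps, targets of the following
    out_len steps."""
    X, Y = [], []
    for s in range(len(series) - in_len - out_len + 1):
        X.append(series[s:s + in_len])
        Y.append(series[s + in_len:s + in_len + out_len])
    return X, Y

# toy series: T=10 time steps, N=2 sensors, C=1 channel (e.g. speed)
series = [[[float(t)], [float(t) + 0.5]] for t in range(10)]
X, Y = make_samples(series, in_len=6, out_len=2)
```

With T=10, a 6-step input and 2-step horizon yield three overlapping samples; real benchmarks differ only in T, N, C and the chosen window lengths.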
  • [April 20th, 2023]

  • Graph Neural Networks for Link Prediction with Subgraph Sketching, ICLR 2023 Notable top 5%
    Presenter: Benjamin Paul Chamberlain, Charm Therapeutics, and Sergey Shirobokov, ShareChat
    Dr Ben Chamberlain is a Principal Scientist at Charm Therapeutics, where he works on 3D machine learning to develop cancer drugs. He was previously a staff machine learning researcher in the graph ML group at Twitter and Head of Machine Learning at ASOS.com. He did his PhD at Imperial College with Professor Marc Deisenroth on large-scale graph ML.
    Dr Sergey Shirobokov is a Senior Machine Learning Scientist at ShareChat, where he works on improving the company's recommender algorithms. He was previously a Senior Machine Learning Researcher on the Twitter Cortex team. He did his PhD at Imperial College London on simulator-based optimisation algorithms for high energy physics.

    Many Graph Neural Networks (GNNs) perform poorly compared to simple heuristics on Link Prediction (LP) tasks. This is due to limitations in expressive power such as the inability to count triangles (the backbone of most LP heuristics) and because they cannot distinguish automorphic nodes (those having identical structural roles). Both expressiveness issues can be alleviated by learning link (rather than node) representations and incorporating structural features such as triangle counts. Since explicit link representations are often prohibitively expensive, recent works have resorted to subgraph-based methods, which have achieved state-of-the-art performance for LP, but suffer from poor efficiency due to high levels of redundancy between subgraphs. We analyze the components of subgraph GNN (SGNN) methods for link prediction. Based on our analysis, we propose a novel full-graph GNN called ELPH (Efficient Link Prediction with Hashing) that passes subgraph sketches as messages to approximate the key components of SGNNs without explicit subgraph construction. ELPH is provably more expressive than Message Passing GNNs (MPNNs). It outperforms existing SGNN models on many standard LP benchmarks while being orders of magnitude faster. However, it shares the common GNN limitation that it is only efficient when the dataset fits in GPU memory. Accordingly, we develop a highly scalable model, called BUDDY, which uses feature precomputation to circumvent this limitation without sacrificing predictive performance. Our experiments show that BUDDY also outperforms SGNNs on standard LP benchmarks while being highly scalable and faster than ELPH.
  • [April 27th, 2023]

  • Temporal Knowledge Graph Reasoning with Historical Contrastive Learning AAAI 2023
    Presenter: Yi Xu, Shanghai Jiao Tong University
    Yi Xu is a 3rd-year Ph.D. candidate at Shanghai Jiao Tong University under the supervision of Prof. Luoyi Fu and Prof. Xinbing Wang. His research interests include natural language processing, knowledge graphs, and data mining. Specifically, he is devoted to research on large language models and temporal knowledge graphs.

    Temporal knowledge graphs, serving as an effective way to store and model dynamic relations, show promising prospects in event forecasting. However, most temporal knowledge graph reasoning methods are highly dependent on the recurrence or periodicity of events, which brings challenges to inferring future events related to entities that lack historical interaction. In fact, the current moment is often the combined effect of a small part of historical information and those unobserved underlying factors. To this end, we propose a new event forecasting model called Contrastive Event Network (CENET), based on a novel training framework of historical contrastive learning. CENET learns both historical and non-historical dependencies to distinguish the entities that best match the given query. Simultaneously, it trains representations of queries to investigate whether the current moment depends more on historical or non-historical events through contrastive learning. The representations further help train a binary classifier whose output is a boolean mask indicating related entities in the search space. During the inference process, CENET employs a mask-based strategy to generate the final results. We evaluate our proposed model on five benchmark graphs. The results demonstrate that CENET significantly outperforms all existing methods on most metrics, achieving at least 8.3% relative improvement in Hits@1 over previous state-of-the-art baselines on event-based datasets.
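    The mask-based inference step described above can be sketched schematically. In the toy example below, the entity scores, the boolean mask, and the number of candidate entities are all invented placeholders standing in for CENET's learned components; the point is only to show how a classifier-produced mask restricts the search space before ranking.

    ```python
    import numpy as np

    np.random.seed(0)
    num_entities = 10

    # Stand-ins for the scores produced by the historical and
    # non-historical dependency branches of the model.
    historical_scores = np.random.randn(num_entities)
    non_historical_scores = np.random.randn(num_entities)
    scores = historical_scores + non_historical_scores

    # Stand-in for the binary classifier's output: True marks entities
    # kept in the search space for this query.
    mask = np.array([True, False, True, True, False,
                     False, True, False, True, False])

    # Mask-based inference: rule out masked entities, then rank the rest.
    masked_scores = np.where(mask, scores, -np.inf)
    prediction = int(np.argmax(masked_scores))
    print(f"predicted entity: {prediction}")
    ```

    The final prediction is guaranteed to come from inside the masked search space, which is the role the boolean mask plays at inference time.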
  • Organizers

    profile picture

    Farimah Poursafaei (she/her) is a PostDoc at McGill University and Mila. She conducts research on dynamic graph neural networks and temporal graphs. She completed her PhD in Computer Engineering at McGill University, where she worked on anomaly detection in cryptocurrency transaction networks. She served as a Reviewing Chair of the Temporal Graph Learning Workshop @ NeurIPS 2022.
    Website, Google Scholar, Linkedin

    profile picture

    Julia Gastinger (she/her) is a Ph.D. student at the University of Mannheim, supervised by Professor Heiner Stuckenschmidt. Previously, she was a Research Scientist in the AI Innovations group at NEC Laboratories Europe. Her research primarily focuses on graph-based machine learning; she is interested in how to incorporate the time aspect into knowledge graph representations. She served as a Reviewing Chair and Co-Organizer of the Temporal Graph Learning Workshop @ NeurIPS 2023.
    Website, Google Scholar, LinkedIn
    Most Recent Publication: On the Evaluation of Methods for Temporal Knowledge Graph Forecasting

    profile picture

    Shenyang Huang (he/him) is a PhD student at McGill University and Mila, focusing on temporal graph learning (supervised by Prof. Reihaneh Rabbany and Prof. Guillaume Rabusseau). He is interested in representation learning on temporal graphs, anomaly detection, and graph representation learning. He was the Organization Chair of the Temporal Graph Learning Workshop @ NeurIPS 2022. His previous research includes change point detection on temporal graphs, COVID-19 disease modeling with temporal contact graphs, and link prediction on temporal graphs. He also enjoys writing Medium blog posts about temporal graph learning.
    Website, Google Scholar, Twitter, Linkedin