Graph Neural Networks for Traffic Pattern Recognition: An Overview
Elham Binshaflout, Hakim Ghazzai, and Yehia Massoud
Innovative Technologies Laboratories, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
Email: {elham.binshaflout, hakim.ghazzai, yehia.massoud}@kaust.edu.sa
Abstract—This survey aims to provide an overview of the recent developments and applications of Graph Neural Networks (GNNs) in the field of traffic pattern recognition. The focus is on the utilization of GNNs to model and analyze traffic data and their effectiveness in solving various traffic-related tasks such as traffic flow prediction, congestion detection, and forecasting.
The paper covers the latest literature on GNNs for traffic pattern recognition and provides insights into the strengths and limitations of these models. The results of this survey suggest that GNNs have the potential to significantly improve the accuracy and efficiency of traffic pattern recognition and can play a key role in revolutionizing the field of traffic management and prediction.
Index Terms—Graph neural networks, traffic pattern recognition, intelligent transportation systems, smart mobility.
I. INTRODUCTION
Developing efficient Intelligent Transportation Systems (ITS) is becoming increasingly demanding as several tasks need to be simultaneously addressed. These tasks include transportation management and control, communication improvement among road users, and enhancement of safety and privacy levels [1]. Traffic Pattern Recognition (TPR) is another critical aspect of modern ITS that requires extra research effort, as it is involved in many systems such as traffic management centers, self-driving vehicles, and traffic control systems (e.g., smart traffic lights and road lane usage management). By recognizing patterns in traffic data, such as vehicle speed, lane occupancy, vehicle trajectory, and traffic flow, transportation practitioners can gain valuable insights into the behavior and performance of the transportation system. In addition, enabling the Internet of Vehicles (IoV) is opening a new channel of communication and information sharing among vehicles and road users in general. This high volume of information can be analyzed to develop more effective traffic management strategies, optimize traffic flow, and ultimately improve the overall efficiency and safety of the transportation system [2].
With the increasing availability of real-time traffic data, various deep learning approaches have been proposed to enhance TPR tasks. These models have to capture and analyze historical traffic flow data, as well as information about the traffic participants and their relationships. This relies heavily on spatial-temporal characteristics that change dynamically.
Hence, it is challenging to accurately represent the traffic features and road network structure and to extract the topological connections and complex correlations among the connected entities. Traditional neural networks have been applied to tasks like traffic prediction [3]. However, they usually face the challenge of increasing model complexity, which affects accuracy and training time when applied to real-time data. It has also been shown that these methods lack the ability to deal with uncertainty in the captured data, which is one of the main characteristics of traffic data [4].
Graph Neural Networks (GNNs) are a category of deep learning models that has been widely applied in a variety of areas, including recommendation systems, Internet-of-Things (IoT) networks, and bioinformatics. They can represent graph-based data, i.e., non-Euclidean data, more effectively. Recently, they have become a very popular tool for modeling and recognizing traffic patterns. GNNs have the ability to model complex relationships between the different traffic elements, such as intersections, roads, and vehicles, allowing for effective representation and analysis of large-scale traffic data. Various GNN-based approaches have been proposed and applied to a number of traffic-related tasks, including traffic flow prediction [5], traffic anomaly detection [6], and traffic control optimization [7]. These applications have demonstrated the effectiveness of GNNs in handling the complexity and heterogeneity of real-world traffic data, leading to improved performance compared to traditional methods.
This paper aims to present a comprehensive overview of the recent advances in TPR applications employing GNN architectures. There are a limited number of existing reviews on GNN applications for ITS-related problems. For example, the authors of [8] reviewed the existing machine learning approaches employed to solve a range of ITS applications.
The authors of [9] provided a survey on different GNN architectures that focuses exclusively on traffic forecasting problems. In [10], the authors summarized traffic flow prediction models based on GNNs.
However, there is no existing review that investigates GNN variants and their applications to the different tasks that fall under the scope of TPR for ITS and smart mobility. In this paper, we investigate the application of GNN models in TPR.
We discuss some of the recent models and architectures that exploit the graphical data to address the main challenges of TPR problems. We also discuss some open and challenging research directions that can help improve the applications of GNN in TPR and ITS in general.
II. INTRODUCTION TO GRAPH NEURAL NETWORKS
A wide variety of common real-world problems are best illustrated as graphs, due to the unstructured representation of the data, especially when it lies in a non-Euclidean
space in which it can grow exponentially [8]. This complexity has led traditional neural networks to fail in finding optimal solutions for these problems. For instance, in e-commerce systems, the relationships between users and products can be modeled as graphs to produce highly accurate recommendations. Thus, there has been a rising interest in the research community in improving the existing deep learning methods for learning graph-based data in a variety of settings, including supervised, unsupervised, semi-supervised, self-supervised, and reinforcement learning [11].
The GNN is a paradigm that has evolved recently in response to this need. It enables neural networks to operate on any graph-based data type. GNNs not only offer new and exciting applications and generalization potential for deep learning models, but can also significantly improve the performance of several deep learning architectures. For instance, a considerable number of works have presented various GNN-based strategies to extend existing approaches such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and autoencoders to handle the complexity of graph data in a productive manner. In fact, the results of those studies suggest a promising future for this tool, especially when working with decentralized data structures [12].
A. Definition and Architecture
A graph consists of a set of nodes (entities) and edges (relationships between nodes). It can be denoted as G = (V, E), where V is the set of vertices and E is the set of edges in the graph. The graph can be of different types and scales (e.g., directed/undirected, homogeneous/heterogeneous, static/dynamic) depending on the context of the problem and the structure of the data [11] [13]. A generalized architecture of the input/output process for a GNN model is presented in Fig. 1. Regardless of the task domain, the input graph is passed into a series of neural networks, and it can be structured (e.g., maps, vehicle social connections) or unstructured (e.g., images, video streams). In the unstructured scenario, an extra step usually has to be performed prior to the execution of the model, which is to build the graph structure from the task (e.g., building a graph from an image, a graph of images, or a graph of connected devices). The GNN model consists of a number of layers, each performing a set of computational modules (sampling, CNN/RNN-based operations, pooling, skip connections). The graph structure is then transformed into either a graph embedding or a node embedding, depending on the target information that needs to be captured.
Fig. 1. A diagram presenting the general pipeline of a GNN.
This pipeline enables GNNs to keep track of node, edge, and graph features and to measure their similarity according to the problem domain. For example, a GNN model can be very powerful in extracting features about road users, vehicles, and road sensors and their relations in a complex ITS, where node embedding is applied to find the common interests and services between the different components. This flexibility in modeling the GNN architecture can empower the learning of the deep learning model and allows it to build both a general and a deep understanding of the data, which leads to more accurate and realistic performance [11].
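To make this pipeline concrete, the following minimal sketch (written in PyTorch; the toy graph, feature dimensions, and layer sizes are illustrative assumptions, not a prescribed architecture) stacks two graph-convolution layers over a dense, normalized adjacency matrix and pools the resulting node embeddings into a graph embedding.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalize A + I (Kipf-and-Welling-style propagation)."""
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

class SimpleGNN(nn.Module):
    """Two graph-convolution layers producing node- and graph-level embeddings."""
    def __init__(self, in_dim, hidden_dim, embed_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, embed_dim)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.w1(x))   # propagate neighbor features + transform
        node_emb = adj_norm @ self.w2(h)    # node-level embedding per vertex
        graph_emb = node_emb.mean(dim=0)    # simple mean pooling -> graph embedding
        return node_emb, graph_emb

# Toy graph: 4 nodes (e.g., road sensors) with 3 features each.
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
x = torch.randn(4, 3)
model = SimpleGNN(in_dim=3, hidden_dim=16, embed_dim=8)
node_emb, graph_emb = model(x, normalize_adjacency(adj))
```

Depending on the task, either the node embeddings or the pooled graph embedding would then feed a downstream prediction head.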
B. GNN General Tasks
In this section, we present the different general tasks that can be performed by GNNs, according to their targets and their general working process.
1) Network Embedding: In network embedding, the graph or network of graphs is converted into a lower-dimensional vector representation, and the model then extracts relevant information by learning how nodes, edges, and structures are correlated through propagation. The main goal of this task is to reduce the complexity of the data structure and transform the data representation into matrices, which can accelerate the computation time and increase the efficiency of the model. We distinguish two levels of embeddings:
• Graph Component Embedding: this technique embeds the nodes and edges of the graph. Node-level embedding is applied in problems where the target is to perform prediction, visualization, or classification [14]. Examples of node embedding frameworks are GraphSage [15] and LINE [16]. On the other hand, edge-level embedding has a number of applications in connected vehicles and IoV networks that require determining relationships between nodes. One well-known edge embedding model is Edge2Vec [17].
• Graph Embedding: with this technique, the learning model should be able to generate a visualization of the graph structure. This is achieved by mapping vector embeddings and comparing those mappings, looking for the vertices that are close/similar to each other and those that are different/dissimilar. Examples of graph embedding frameworks are Graph2vec [18] and GEM [19].
There are a variety of non-GNN-based embedding techniques, such as matrix factorization and shallow neural embedding [13]. However, those techniques are limited in their ability to scale to large, complex, and dynamic graph data. In contrast, GNN-based embedding models can handle dynamically changing and evolving graphs, which arise in real-world problems in wide areas such as smart cities, social networks, recommendation systems [12], VSN fake news detection [20], and mobile and vehicular crowdsourcing [14].
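As an illustration of node-level embedding, the sketch below implements a GraphSAGE-style mean aggregator in plain PyTorch. It is a simplified reading of the idea in [15], not the reference implementation, and the dense adjacency matrix is an assumption made for readability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SageMeanLayer(nn.Module):
    """GraphSAGE-style layer: combine each node's own features with the
    mean of its neighbors' features, then apply a nonlinearity."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)  # avoid division by zero
        neigh_mean = (adj @ x) / deg                        # mean over neighbors
        return F.relu(self.w_self(x) + self.w_neigh(neigh_mean))

# Stacking two such layers yields node embeddings for prediction, visualization,
# or classification; graph-level embeddings (as in Graph2vec-style approaches)
# would instead pool over all nodes.
```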
2) Node/Graph Classification: This task aims to generate a prediction/label for new unlabeled nodes or graph instances that join the graph by updating the information stored in the existing embedding representations (generated with the embedding task discussed earlier). The iterative updating and propagation process of the GNN model ensures collecting deep information about the nodes and their neighbors in the network. Afterwards, all nodes with their labels are collected to produce an accurate classification. Node classification is commonly employed in a semi-supervised learning setting, where only a small portion of the node labels is needed to infer labels for newly joined nodes. Social network analysis, such as for vehicular social networks, is one of the common applications of node classification.
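A minimal sketch of this semi-supervised setting is shown below: only the nodes selected by `labeled_mask` contribute to the loss, while message passing still uses the whole graph. The `model` is assumed to be a GNN that returns per-node class logits, e.g., the earlier two-layer sketch with the embedding size set to the number of classes.

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(model, x, adj_norm, labels, labeled_mask, optimizer):
    """One training step for semi-supervised node classification."""
    model.train()
    optimizer.zero_grad()
    logits, _ = model(x, adj_norm)            # per-node class logits
    loss = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])
    loss.backward()
    optimizer.step()
    return loss.item()

# Unlabeled nodes are then classified by taking the argmax over their logits,
# which is how newly joined nodes in, e.g., a vehicular social network would
# receive labels.
```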
3) Link Prediction: This task determines the relationship between two nodes in a graph with an incomplete adjacency matrix. The model is usually interested in learning only the general features of each connected node in the graph, and making the link prediction to determine newly established relations between participating nodes. The input here is the node embeddings with their specified features. The model should then be able to predict the links between new and unconnected nodes based on the history of similar connected nodes.
Link prediction can be used to manage complex transportation systems such as ride-hailing taxis.
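The sketch below shows one common decoder choice (an inner-product score over node embeddings); it is an illustrative pattern, not a specific model from the cited works.

```python
import torch

def link_scores(node_emb, pairs):
    """Score candidate links: sigmoid of the dot product between the two
    endpoint embeddings. `pairs` is a (num_pairs, 2) tensor of node indices."""
    src = node_emb[pairs[:, 0]]
    dst = node_emb[pairs[:, 1]]
    return torch.sigmoid((src * dst).sum(dim=-1))

# Training typically minimizes binary cross-entropy with observed edges as
# positive pairs and randomly sampled non-edges as negatives, so that new
# relations (e.g., rider-driver matches) can be predicted from the embeddings.
```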
4) Community Detection: In this task, the model uses the edge structure and graph topology to group nodes into different clusters or communities. There are a number of real-world applications where community detection can be applied using GNN models, such as social networks, social IoT, and edge computing allocation [21]. For instance, the authors of [22] designed a Spatial-Temporal Graph Social Network (STGSN) model to address time-evolving social network problems using an attention mechanism as part of the GNN architecture.
Another example is the low-complexity recruitment algorithm proposed in [14], which combines graph embedding and clustering techniques in order to solve the problem of collaborative mobile crowdsourcing.
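One simple realization, in the spirit of the embedding-plus-clustering approach of [14] but not its exact algorithm, is to cluster the learned node embeddings:

```python
import torch
from sklearn.cluster import KMeans

def detect_communities(node_emb: torch.Tensor, num_communities: int):
    """Assign each node to a community by clustering its GNN embedding."""
    emb = node_emb.detach().cpu().numpy()
    return KMeans(n_clusters=num_communities, n_init=10).fit_predict(emb)

# Nodes that end up in the same cluster are interpreted as one community,
# e.g., socially related vehicles or devices sharing similar service interests.
```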
Fig. 2. Samples of traffic pattern recognition tasks.
5) Graph Generation: In this task, the target of the GNN model is to generate a new but comparable graph structure by learning from a sample graph distribution, i.e., by learning the implicit distribution of graphs and node relations. The model receives sample graphs as input in order to learn the distributions of the graph nodes and edges and their features, in addition to the new data as vertices. Then, the model generates similar graphs for future sets of nodes. Such a deep graph generation task can more effectively capture the intricate relationships present in the graph data, resulting in more realistic graphs and precise prediction models. A few graph generative models are presented in [11], such as NetGAN (the first graph generative model using random walk theory), GraphRNN, and GraphAF. Graph generation can be applied in many sectors such as recommendation systems [12]. It can also be applied to solve some vehicular routing problems [23].
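As a toy illustration of the idea (a VGAE-style inner-product decoder with Bernoulli edge sampling, not the NetGAN, GraphRNN, or GraphAF methods cited above), a generator can map latent node vectors to edge probabilities and sample an adjacency matrix:

```python
import torch

def sample_graph(latent: torch.Tensor) -> torch.Tensor:
    """Generate an undirected graph from latent node vectors: edge
    probabilities come from an inner-product decoder, and each edge is
    then sampled independently (Bernoulli)."""
    probs = torch.sigmoid(latent @ latent.t())           # (N, N) edge probabilities
    upper = torch.triu(torch.bernoulli(probs), diagonal=1)
    return upper + upper.t()                             # symmetric, no self-loops

# latent = torch.randn(10, 16) yields a random 10-node graph; in practice the
# latent vectors are learned so that sampled graphs match the training
# distribution of nodes, edges, and their features.
```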
III. APPLICATIONS OF GNN ON TPR
Recognizing traffic patterns is considered one of the most crucial tasks in the ITS and smart mobility fields. The task includes analyzing and understanding the flow of vehicles on roads and highways in order to optimize traffic management and improve safety mechanisms. It also involves the autonomous recognition of any object existing on roads, specifically traffic lights, traffic signs, road lanes, etc. Examples of TPR tasks are presented in Fig. 2. GNNs are well-suited for these tasks due to their ability to handle graph-structured and dynamic data. Traffic data can be modeled as graphs with different variations based on the targeted tasks. This significantly improves the identification of patterns in traffic, such as congestion or bottlenecks, supports traffic management, and reduces the likelihood of accidents. The currently proposed applications of GNNs to TPR tasks can be mainly divided into three directions, as summarized in Table I. For each TPR task, we introduce the recent advances in using GNNs and present some selected recent studies.
TABLE I
SELECTED TPR TASKS ADDRESSED USING GNN MODELS

Reference | Addressed TPR Task        | Employed GNN Model
[5]       | Flow Prediction           | Attentive Attributed Recurrent GNN (AARGNN)
[24]      | Flow Prediction           | Variational Recurrent GNN combined with Convolutional GNN
[25]      | Traffic Anomaly Detection | Spatial-Temporal GNN
[26]      | Traffic Forecasting       | Federated Attention-based Spatial-Temporal GNN (FASTGNN)
[27]      | Traffic Forecasting       | Adaptive Graph Convolutional Recurrent Network (AGCRN)
[28]      | Traffic Forecasting       | Spatial-Temporal Multi-Head Graph Attention Networks (ST-MGAT)
[2]       | Traffic Forecasting       | Integration of ChebNet, GCN, and GAT (Gra-TF)
[29]      | Traffic Signal Control    | Traffic Signal Control via Probabilistic GNNs (TSC-GNN)
[30]      | Traffic Signal Control    | Spatio-Temporal Multi-Agent Reinforcement Learning (STMARL)
[31]      | Traffic Signal Control    | Graph Convolutional Neural Networks (GCNN)

A. Flow Prediction
Flow prediction aims at optimizing traffic management and safety practices, which can lead to improvements in a number of tasks such as mitigating congestion, reducing traveling time,
and monitoring the transportation network. Flow prediction can also support the development of fully and partially autonomous vehicles. With the increasing number of connected vehicles on the road and the growth of real-time data collection, designing accurate flow prediction models is becoming more important than ever. However, this task is not trivial due to the dynamically changing spatial and temporal parameters as well as other changing factors related to road commuters. Using GNNs, this type of task can be modeled by a graph with the different road entities as nodes and their connections or types of relations as edges. Relations can also be classified based on the scope of the problem domain. Regardless of the design and architecture of the GNN model, the target should always be to reduce the complexity of the generated model as much as possible and find the most generalized architecture that can capture the desired patterns.
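A minimal sketch of this modeling choice is given below: each node is a road sensor (or segment) whose features are its last few flow readings, the adjacency matrix encodes road connectivity, and a GNN encoder (such as the earlier two-layer sketch) feeds a regression head that predicts the next-interval flow. Names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FlowPredictor(nn.Module):
    """Illustrative flow regressor on a road-sensor graph."""
    def __init__(self, gnn: nn.Module, embed_dim: int):
        super().__init__()
        self.gnn = gnn                       # any encoder returning node embeddings
        self.head = nn.Linear(embed_dim, 1)  # per-node next-interval flow

    def forward(self, flow_window, adj_norm):
        # flow_window: (num_sensors, window) recent flow readings per sensor
        node_emb, _ = self.gnn(flow_window, adj_norm)
        return self.head(node_emb).squeeze(-1)

# Training would minimize, e.g., the mean squared error between the predicted
# and observed flow of the following time interval.
```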
A few studies have introduced GNN models to optimize the flow prediction task. The authors of [5] presented a new approach for traffic flow prediction using GNNs. They proposed an Attentive Attributed Recurrent GNN (AARGNN) that considers multiple dynamic factors, including spatial-temporal correlations, traffic flow dynamics, and historical traffic data. The model uses an attention mechanism to weigh the importance of different factors in the prediction and an attributed graph to model the interactions between traffic flow and various factors such as weather conditions, events, and holidays. In [24], the authors proposed a variational GNN, a road traffic prediction model that employs GNNs to learn the underlying representations of the traffic network and uses variational inference to model the uncertainty in traffic predictions. The model uses a convolutional GNN to extract the spatial-temporal features from the traffic data and a variational recurrent neural network to capture the temporal dependencies and predict the traffic's future flow. This research field is still new and requires investigating more GNN-based solutions.
Capturing the spatial and temporal factors of road traffic can greatly impact the traffic flow and pattern recognition and forecasting tasks. It has been shown to be very effective in providing detailed information about real-time traffic and improving the accuracy of GNN models. This approach has led to an emerging era in the flow prediction field [32]. A few research works have captured those factors with spatial-temporal GNN (ST-GNN) models.
For example, the study in [25] presents an ST-GNN model for detecting traffic anomalies on a road network using data from infrastructure-based traffic sensors to learn a representation of the traffic pattern. The learned representation helps in detecting anomalies in the traffic data, such as sudden changes in traffic volume or unusual congestion. In addition, the authors of [33] were able to model the interactions between different traffic nodes, and also incorporate historical traffic data to make predictions about future traffic flow. The paper [32] provided a taxonomy of ST-GNN models and a general model architecture, with a comprehensive review of the existing solutions proposed using different GNN architectures. It is shown that measuring the spatial and temporal characteristics is not a trivial task to achieve, as most of the existing solutions suffer either from high complexity or from poor generalization.
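The sketch below illustrates the generic ST-GNN pattern surveyed in [32] (a graph convolution per time step for the spatial part and a GRU over each node's sequence for the temporal part); it is a simplified template written in PyTorch, not any specific model from the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleSTGNN(nn.Module):
    """Spatio-temporal template: graph convolution per time step (spatial),
    then a GRU over each node's sequence of hidden states (temporal)."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.spatial = nn.Linear(in_dim, hidden_dim)
        self.temporal = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x_seq, adj_norm):
        # x_seq: (time_steps, num_nodes, in_dim); adj_norm: normalized adjacency
        spatial = torch.stack([F.relu(adj_norm @ self.spatial(x_t)) for x_t in x_seq])
        _, h_last = self.temporal(spatial.permute(1, 0, 2))  # (nodes, time, hidden)
        return self.head(h_last.squeeze(0)).squeeze(-1)      # next value per node
```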
B. Traffic Forecasting
The main goal is to predict the future flow of traffic on road networks. To represent road networks as graphs, road intersections can be modeled as nodes and roads as edges.
The data captured for traffic forecasting can include information about past traffic patterns, weather conditions, and road closures. Traffic forecasting can be divided into several categories, depending on the specific focus of the forecast.
For example, some traffic forecasts may focus on predicting the flow of traffic on a particular road or intersection, while others may focus on predicting the demand for transportation services in a specific area. Additionally, traffic forecasts can be short-term or long-term, depending on the time frame over which they are predicting traffic patterns. Overall, the goal of traffic forecasting is to provide valuable information that can be used to improve the efficiency and effectiveness of transportation systems.
FASTGNN is one of the novel GNN-based approaches in traffic forecasting [26]. It uses federated learning to improve the traffic speed forecasting task and ensure privacy by protecting the topological structure of the graph and data. It uses graph sampling to partition the graph data into several sub-graphs, which are then distributed to different edge devices for training. Then, a novel method called Topological Information Protection (TIP) is applied to ensure that the topological information of the graph is not leaked during the training process. In addition, federated averaging is employed to aggregate the model updates from the edge devices and update the global model accordingly. The authors of [27] used a combination of Graph Convolutional Networks (GCNs) and Recurrent Neural Networks (RNNs). The paper proposed an Adaptive Graph Convolutional Recurrent Network (AGCRN), which adapts the graph structure during the training process to improve the performance of the model. Graph Attention Networks (GATs) can also empower forecasting models significantly. For instance, the work in [28] presented Spatial-Temporal Multi-Head Graph Attention Networks (ST-MGAT), which use multiple attention heads to capture both spatial and temporal dependencies in the graph data. Another example is the model proposed by [2], which utilizes historical traffic
data and real-time sensor data to predict traffic conditions on a given road network.
One of the challenges when using GCNNs to capture the spatial aspects of the graph data is that the graph structure is fixed during the training process, which means that the nodes, edges, and parameters are not changed either [10]. This does not reflect the traffic topology in practice, which is continuously changing. This encourages researchers to adopt recurrent GNNs (RGNNs) as part of the model architecture, as they can take into account historical traffic patterns as well as the complex interactions between different traffic nodes, such as intersections and roads.
In addition, attention mechanisms can significantly improve the model by selectively weighting the importance of different components of the graph, which can be particularly useful for traffic forecasting, where traffic patterns can vary greatly between different parts of the graph.
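To illustrate the attention idea, the sketch below implements a single-head, dense graph attention layer in the style of GAT; ST-MGAT [28] concatenates several such heads, and this simplified version assumes the adjacency matrix already contains self-loops.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head, GAT-style attention over a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w = nn.Linear(in_dim, out_dim, bias=False)
        self.a_src = nn.Linear(out_dim, 1, bias=False)
        self.a_dst = nn.Linear(out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.w(x)                                   # (N, out_dim)
        scores = self.a_src(h) + self.a_dst(h).t()      # (N, N) pairwise scores
        scores = F.leaky_relu(scores, negative_slope=0.2)
        scores = scores.masked_fill(adj == 0, float("-inf"))  # attend to neighbors only
        alpha = torch.softmax(scores, dim=1)            # attention weights per node
        return F.elu(alpha @ h)
```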
C. Traffic Signal Control
This task aims to optimize traffic flow, reduce travel time, and decrease fuel consumption and emissions by dynamically controlling traffic signal lights. GNNs can be used to model the relationships between traffic signals, road networks, and traffic flow. The road network can be represented as a graph, where each traffic signal is represented as a node and the roads connecting the signals are represented as edges. Graph convolutional layers can be used to learn the relationships between the nodes and edges in the graph structure. In addition, they can process real-time traffic data and update the hidden state of each node in the graph, which represents the state of the corresponding traffic signal.
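As an illustrative sketch only (not the TSC-GNN, STMARL, or imitation-learning models discussed below), a graph convolution over the signal graph can map per-signal observations, such as queue lengths on incoming lanes, to a score for each candidate signal phase:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignalPhasePolicy(nn.Module):
    """Each node is a traffic signal; the output is one logit per phase."""
    def __init__(self, in_dim, hidden_dim, num_phases):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, num_phases)

    def forward(self, observations, adj_norm):
        # observations: (num_signals, in_dim), e.g., queue lengths per lane
        h = F.relu(adj_norm @ self.w1(observations))  # share state with neighboring signals
        return self.w2(h)                             # per-signal phase logits

# In a reinforcement-learning setup, these logits would parameterize each agent's
# policy; in a supervised or imitation setting, they would be trained against
# recorded expert signal timings.
```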
The authors of [29] and [31] presented hybrid GNN models that combine GCNNs with advanced learning approaches.
In [29], probabilistic learning algorithms are combined with GNNs to better model the complex relationships and patterns between traffic flow, road networks, and traffic signals, while the authors of [31] proposed an imitation-learning-based model that optimizes the timing of traffic signals, such as how long each signal should stay green or red. The STMARL model proposed in [30] combines reinforcement learning with a GNN-based multi-agent system to optimize the control decisions by capturing the interactions between signals and the coordination between signal timings. It is essential to assess the performance of GNN-based traffic signal control methods with real-time data to ensure that they can produce informed decisions about traffic signal timing and adapt to changes in traffic patterns due to environmental and human factors.
IV. FUTURE RESEARCH DIRECTIONS
Fig. 3. Comparison of GNN traffic speed prediction models [26]. HA is the historical average of the data, STGCN is a spatial-temporal graph convolutional network, and FASTGNN is a federated attention-based spatial-temporal GNN.
GNNs have achieved remarkable success in modeling traffic data and solving some smart mobility and transportation problems. However, these significant results do not guarantee optimal solutions applicable to real-time data. As shown in Fig. 3, the development of GNN models is promising; however, they still suffer from generalization and stability issues when working with different datasets. In this section, we summarize the main challenges and future directions that researchers may consider in future GNN-based solutions.
• Robust Detection of Anomalies: Traffic is highly diversified and suffers from an increasing level of randomness due to environmental and human factors. It remains challenging to develop GNN-based approaches that can effectively address this randomness, capture anomalies, and deal with their side effects. Thus, there is a high demand for designing generalized GNN approaches that can deal with continuously changing traffic patterns and extract valuable information and predictions.
• Hybrid Learning Approaches: One of the promising directions to address ITS and smart mobility challenges is to design learning models that combine GNN approaches with other learning models, so as to leverage the strengths of GNN models and algorithms to better model the impact of environmental and human factors on traffic patterns. One direction is to combine reinforcement learning with GNN models, which can enhance the autonomy of transportation systems such as smart traffic, autonomous vehicles, and unmanned aerial vehicles. Hybrid GNNs can be more robust to variations in the data and more adaptable to different types of transportation problems. However, this requires careful consideration of the trade-offs between different models and algorithms, especially in terms of complexity, as well as the design of appropriate training strategies to effectively combine them, considering the unique characteristics and challenges of the transportation domain.
• TPR Stack Reduction: Parallelization is a challenge when deploying multiple GNN models for TPR tasks. This is due to two reasons. First, the execution of multiple deep learning models that aim to solve different tasks in real time can create a huge challenge when trying to establish collaboration among those models, as they are working in the same environment. Second, GNNs are computationally intensive models, and the training and inference processes can be slow, especially when dealing with large graphs.
Parallelizing the computation process can significantly help speed up the training and inference processes, but it can also introduce new implementation challenges, such as the coordination and communication between different models. Additionally, parallelization may not always lead to a linear speedup, due to the high degree of interdependence between the nodes and edges in the graph structure. With existing hardware capabilities, parallelizing multiple GNN models in the same environment is not a trivial task to achieve, especially because graphics processing units (GPUs) can quickly become fully utilized by executing only a few models. As a result, careful consideration must be given to the choice of parallelization method and the way in which the models are deployed, in order to ensure that the benefits of parallelization are realized for complex smart mobility problems.
V. CONCLUSION
This paper highlighted the potential of GNNs in addressing the challenges of traditional machine learning methods for modeling complex and dynamic traffic pattern recognition problems. The results of this review indicate that GNNs have achieved impressive results in traffic flow prediction, congestion detection, and forecasting, among other tasks.
These findings demonstrate the promising future of GNNs in addressing the limitations of traditional methods and the potential of this technology in revolutionizing traffic management and prediction.
REFERENCES
[1] Z. Mahrez, E. Sabir, E. Badidi, W. Saad, and M. Sadik, "Smart urban mobility: When mobility systems meet smart data," IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 6222–6239, 2022.
[2] Q. Zhang, K. Yu, Z. Guo, S. Garg, J. J. P. C. Rodrigues, M. M. Hassan, and M. Guizani, "Graph Neural Network-Driven Traffic Forecasting for the Connected Internet of Vehicles," IEEE Trans. Netw. Sci. Eng., vol. 9, pp. 3015–3027, Sept. 2022.
[3] N. Ramakrishnan and T. Soni, "Network traffic prediction using recurrent neural networks," in 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 187–193, 2018.
[4] T. Maas and P. Bloem, "Uncertainty intervals for graph-based spatio-temporal traffic prediction," CoRR, vol. abs/2012.05207, 2020.
[5] L. Chen et al., "AARGNN: An Attentive Attributed Recurrent Graph Neural Network for Traffic Flow Prediction Considering Multiple Dynamic Factors," IEEE Trans. Intell. Transport. Syst., pp. 1–11, 2022.
[6] Y. Wu, H.-N. Dai, and H. Tang, "Graph neural networks for anomaly detection in industrial internet of things," IEEE Internet of Things Journal, vol. 9, no. 12, pp. 9214–9231, 2022.
[7] T. Zhong, Z. Xu, and F. Zhou, "Probabilistic graph neural networks for traffic signal control," in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4085–4089, 2021.
[8] Y. Tingting et al., "Machine learning for next generation intelligent transportation systems: A survey," Transactions on Emerging Telecommunications Technologies, vol. 33, no. 4, 2021.
[9] W. Jiang and J. Luo, "Graph neural network for traffic forecasting: A survey," Expert Systems with Applications, vol. 207, p. 117921, Nov. 2022.
[10] X. Wenjuan and L. Jianfeng, "Research on Traffic Flow Forecasting Method Based on Graph Neural Network," in 2022 IEEE 2nd International Conference on Software Engineering and Artificial Intelligence (SEAI), (Xiamen, China), pp. 243–247, IEEE, June 2022.
[11] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, and M. Sun, "Graph neural networks: A review of methods and applications," AI Open, vol. 1, pp. 57–81, 2020.
[12] S. Wu, F. Sun, W. Zhang, X. Xie, and B. Cui, "Graph Neural Networks in Recommender Systems: A Survey," ACM Comput. Surv., p. 3535101, May 2022.
[13] X. Liu and J. Tang, "Network representation learning: A macro and micro view," AI Open, vol. 2, pp. 43–64, 2021.
[14] A. Hamrouni et al., "Low-Complexity Recruitment for Collaborative Mobile Crowdsourcing Using Graph Neural Networks," IEEE Internet Things J., vol. 9, pp. 813–829, Jan. 2022.
[15] W. L. Hamilton, R. Ying, and J. Leskovec, "Inductive representation learning on large graphs," CoRR, vol. abs/1706.02216, 2017.
[16] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei, "LINE: Large-scale information network embedding," in Proceedings of the 24th International Conference on World Wide Web, WWW '15, (Republic and Canton of Geneva, CHE), pp. 1067–1077, International World Wide Web Conferences Steering Committee, 2015.
[17] C. Wang, C. Wang, Z. Wang, X. Ye, and P. S. Yu, "Edge2vec: Edge-based social network embedding," ACM Trans. Knowl. Discov. Data, vol. 14, May 2020.
[18] A. Narayanan, M. Chandramohan, R. Venkatesan, L. Chen, Y. Liu, and S. Jaiswal, "graph2vec: Learning distributed representations of graphs," CoRR, vol. abs/1707.05005, 2017.
[19] P. Goyal and E. Ferrara, "Graph embedding techniques, applications, and performance: A survey," Knowledge-Based Systems, vol. 151, pp. 78–94, 2018.
[20] Z. Guo, K. Yu, A. Jolfaei, G. Li, F. Ding, and A. Beheshti, "Mixed Graph Neural Network-Based Fake News Detection for Sustainable Vehicular Social Networks," IEEE Trans. Intell. Transport. Syst., pp. 1–13, 2022.
[21] A. Khanfor, A. Nammouchi, H. Ghazzai, Y. Yang, M. R. Haider, and Y. Massoud, "Graph neural networks-based clustering for social internet of things," in 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 1056–1059, 2020.
[22] S. Min, Z. Gao, J. Peng, L. Wang, K. Qin, and B. Fang, "STGSN — A Spatial–Temporal Graph Neural Network framework for time-evolving social networks," Knowledge-Based Systems, vol. 214, p. 106746, Feb. 2021.
[23] A. Nammouchi et al., "A generative graph method to solve the travelling salesman problem," in 2020 IEEE 63rd International Midwest Symposium on Circuits and Systems (MWSCAS), pp. 89–92, 2020.
[24] F. Zhou, Q. Yang, T. Zhong, D. Chen, and N. Zhang, "Variational Graph Neural Networks for Road Traffic Prediction in Intelligent Transportation Systems," IEEE Trans. Ind. Inf., vol. 17, pp. 2802–2812, Apr. 2021.
[25] H. Zhang, S. Zhao, R. Liu, W. Wang, Y. Hong, and R. Hu, "Automatic Traffic Anomaly Detection on the Road Network with Spatial-Temporal Graph Neural Network Representation Learning," Wireless Communications and Mobile Computing, vol. 2022, pp. 1–12, June 2022.
[26] C. Zhang, S. Zhang, J. J. Q. Yu, and S. Yu, "FASTGNN: A Topological Information Protected Federated Learning Approach for Traffic Speed Forecasting," IEEE Trans. Ind. Inf., vol. 17, pp. 8464–8474, Dec. 2021.
[27] L. Bai, L. Yao, and C. Li, "Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting," p. 12.
[28] K. Tian, J. Guo, K. Ye, and C.-Z. Xu, "ST-MGAT: Spatial-Temporal Multi-Head Graph Attention Networks for Traffic Forecasting," in 2020 IEEE 32nd International Conference on Tools with Artificial Intelligence (ICTAI), (Baltimore, MD, USA), pp. 714–721, IEEE, Nov. 2020.
[29] T. Zhong, Z. Xu, and F. Zhou, "Probabilistic graph neural networks for traffic signal control," in ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4085–4089, 2021.
[30] Y. Wang, T. Xu, X. Niu, C. Tan, E. Chen, and H. Xiong, "STMARL: A spatio-temporal multi-agent reinforcement learning approach for cooperative traffic light control," IEEE Transactions on Mobile Computing, vol. 21, no. 6, pp. 2228–2242, 2022.
[31] X. Li, Z. Guo, X. Dai, Y. Lin, J. Jin, F. Zhu, and F.-Y. Wang, "Deep imitation learning for traffic signal control and operations based on graph convolutional neural networks," in 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–6, 2020.
[32] K.-H. N. Bui, J. Cho, and H. Yi, "Spatial-temporal graph neural network for traffic forecasting: An overview and open research issues," Applied Intelligence, vol. 52, pp. 2763–2774, Feb. 2022.
[33] M. Li and Z. Zhu, "Spatial-Temporal Fusion Graph Neural Networks for Traffic Flow Forecasting," Mar. 2021. arXiv:2012.09641 [cs].