Machine learning typically requires large amounts of labeled data, and the test data may follow a different distribution than the training data. Transfer learning has proven to be an effective method for addressing this problem in many fields. However, achieving successful transfer on graph datasets remains challenging, as the pre-training datasets must be sufficiently large and carefully selected. This research examines the inherent challenges of data scarcity and the need for robust models in order to improve the versatility and efficiency of graph neural networks (GNNs) across application domains. By comparing the performance of pre-trained and non-pre-trained GNNs, we demonstrate the generalization ability of the pre-trained GNN strategy and the significance of transfer learning for graph data.
Research Article
Open Access