Graph Convolutional Recommendation Based on Adjacency Matrix Optimization and Negative Sampling

Introduction

Recommendation systems have become an essential tool in modern digital platforms, helping users discover relevant items based on their historical interactions and preferences. Traditional recommendation approaches primarily rely on content-based filtering and collaborative filtering. Content-based methods analyze item attributes to recommend similar items, but they often fail to capture user-specific preferences. Collaborative filtering leverages user-item interactions to identify similar users or items, providing more personalized recommendations. However, it suffers from data sparsity and cold-start problems, making it difficult to serve new users or recommend new items effectively.

Graph Convolutional Networks (GCNs) have emerged as a powerful solution to these challenges by learning low-dimensional representations of users and items. Early GCN-based methods, such as GC-MC and NGCF, demonstrated that propagating embeddings through graph structures could capture high-order user-item interactions, improving recommendation accuracy. Subsequent models like LightGCN and UltraGCN simplified GCN architectures by removing nonlinear transformations, further enhancing performance. Despite these advancements, existing GCN-based recommendation systems still face several limitations:

  1. Random Initialization of Embeddings: Most models initialize user and item embeddings randomly, leading to instability in training and suboptimal performance.
  2. Equal Treatment of Convolutional Layers: Existing methods aggregate features from different convolutional layers uniformly, ignoring the varying importance of different layers.
  3. Limited and Low-Quality Negative Samples: Traditional Bayesian Personalized Ranking (BPR) loss generates only a single negative sample per positive pair, restricting the model’s ability to learn robust user preferences.

To address these issues, this paper proposes AMONS (Adjacency Matrix Optimization and Negative Sampling), a novel GCN-based recommendation framework that enhances embedding initialization, layer aggregation, and negative sampling.

Methodology

AMONS consists of three key components:

  1. Historical Interaction Optimized Embedding (HIOE)

Instead of relying solely on random initialization, AMONS leverages the user-item adjacency matrix to refine initial embeddings. The adjacency matrix encodes historical interactions, providing valuable structural information about user-item relationships.

First, user and item embeddings are concatenated into a sparse matrix. The adjacency matrix is then split into smaller blocks to reduce computational overhead. Each block is multiplied by the embedding matrix, and the results are aggregated to produce an optimized embedding matrix. This process ensures that initial embeddings capture meaningful interaction patterns, reducing the instability caused by random initialization.
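The block-wise refinement described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `hioe_init`, the residual-style blend of the structural signal with the random embedding, and the block size are all assumptions.

```python
import numpy as np
import scipy.sparse as sp

def hioe_init(adj: sp.csr_matrix, emb: np.ndarray, block_size: int = 1024) -> np.ndarray:
    """Refine initial embeddings with the adjacency matrix, processed
    in row blocks to limit peak memory (illustrative sketch)."""
    n_rows = adj.shape[0]
    out = np.zeros((n_rows, emb.shape[1]))
    for start in range(0, n_rows, block_size):
        end = min(start + block_size, n_rows)
        out[start:end] = adj[start:end] @ emb  # one block at a time
    # Blend the structural signal with the original random embedding
    # (the exact aggregation rule is an assumption).
    return emb + out

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 6, 8
# Bipartite adjacency over the joint user+item node set.
A = sp.random(n_users + n_items, n_users + n_items, density=0.2,
              random_state=0, format="csr")
E0 = rng.standard_normal((n_users + n_items, dim))
E = hioe_init(A, E0, block_size=3)
print(E.shape)  # (10, 8)
```

Because each block multiplication covers a disjoint row range, the block-wise result is identical to the full matrix product; only the peak memory differs.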

  2. Layer Aggregation Decay (LAD)

In traditional GCNs, features from different convolutional layers are aggregated with equal weights. However, not all layers contribute equally to the final recommendation. The initial embedding (after optimization) contains direct interaction information, while deeper layers capture higher-order relationships.

AMONS introduces a decay mechanism where intermediate convolutional layers are weighted less than the initial and final layers. Specifically, the first and last layers retain full influence, while intermediate layers are scaled down by a factor inversely proportional to the number of layers. This ensures that the most informative embeddings—those derived from direct interactions and the highest-order relationships—dominate the final representation.
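A minimal sketch of the decay weighting follows. The specific 1/L scale factor for intermediate layers and the final normalization are assumptions consistent with the description above, not the paper's exact scheme.

```python
import numpy as np

def lad_aggregate(layer_embs):
    """Aggregate per-layer embeddings with decayed intermediate weights.

    layer_embs: list of (n, d) arrays, one per propagation depth
    (index 0 = the optimized initial embedding).
    """
    L = len(layer_embs) - 1  # number of convolution layers
    weights = []
    for k in range(len(layer_embs)):
        if k == 0 or k == L:
            weights.append(1.0)       # first and last layers: full influence
        else:
            weights.append(1.0 / L)   # intermediate layers scaled down
    weights = np.array(weights) / sum(weights)  # normalize to sum to 1
    return sum(w * e for w, e in zip(weights, layer_embs))
```

With three layer outputs, for example, the weights become [0.4, 0.2, 0.4] after normalization, so the initial and final embeddings dominate the aggregate.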

  3. Multi-Negative Sampling Enhancement (MNSE)

The standard BPR loss generates only one negative sample per positive user-item pair, limiting the model’s exposure to diverse negative examples. AMONS extends this by generating multiple negative samples for each positive pair.

For every user-positive item pair, AMONS randomly samples a set of negative items, ensuring none overlap with the user’s positive interactions. This enriched training set allows the model to better distinguish between preferred and non-preferred items, leading to more accurate recommendations. The modified loss function, OpBPR, optimizes over these multiple negative samples, improving the model’s ability to learn user preferences.
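The sampling and loss described above might look like the sketch below. `sample_negatives` and `opbpr_loss` are hypothetical names, and averaging the BPR term over the K negatives is one plausible reading of OpBPR, not a confirmed formulation.

```python
import numpy as np

def sample_negatives(pos_items: set, n_items: int, k: int, rng) -> list:
    """Draw k distinct item ids that avoid the user's positive set."""
    negs = []
    while len(negs) < k:
        cand = int(rng.integers(n_items))
        if cand not in pos_items and cand not in negs:
            negs.append(cand)
    return negs

def opbpr_loss(u_emb: np.ndarray, pos_emb: np.ndarray,
               neg_embs: np.ndarray) -> float:
    """Average BPR term over multiple negatives:
    -mean(log sigmoid(u·pos - u·neg_j)) over j = 1..k."""
    pos_score = u_emb @ pos_emb          # scalar
    neg_scores = neg_embs @ u_emb        # shape (k,)
    diff = pos_score - neg_scores
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-diff)))))
```

As the positive score pulls further above every negative score, the loss approaches zero, which is the intended preference-ranking behavior.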

Experiments and Results

Datasets and Setup

AMONS was evaluated on two publicly available datasets: Gowalla (a location-based social network) and Amazon-Books (an e-commerce dataset). Both datasets were preprocessed to remove users and items with fewer than 10 interactions. The remaining data was split into 80% for training and 20% for testing.

Performance Comparison

AMONS was compared against several baseline methods, including BPR, GC-MC, NGCF, LightGCN, and LR-GCCF. The evaluation metrics included HR@20, NDCG@20, and Recall@20, which measure recommendation accuracy and ranking quality.
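For reference, Recall@K and NDCG@K for a single user can be computed as below, using the standard definitions with binary relevance; this is generic metric code, not taken from the paper.

```python
import numpy as np

def recall_at_k(ranked: list, relevant: set, k: int) -> float:
    """Fraction of this user's relevant items found in the top-k.
    Assumes `relevant` is nonempty."""
    hits = len(set(ranked[:k]) & relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked: list, relevant: set, k: int) -> float:
    """Discounted gain of relevant items in the top-k, normalized by
    the ideal ranking's gain."""
    dcg = sum(1.0 / np.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / idcg
```

Dataset-level scores are obtained by averaging these per-user values over all test users.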

On the Gowalla dataset, AMONS achieved a 19.30% improvement in HR, 17.24% in NDCG, and 19.89% in Recall compared to LR-GCCF. On the Amazon-Books dataset, the improvements were even more significant: 42.52% in HR, 41.47% in NDCG, and 39.53% in Recall. These results demonstrate that AMONS effectively leverages historical interactions and high-quality negative samples to outperform existing methods.

Ablation Study

To validate the contributions of each module, an ablation study was conducted on the Gowalla dataset:

• HIOE alone improved HR by ~5%, showing that optimized embeddings enhance recommendation quality.

• LAD alone also improved HR by ~5%, confirming that weighted layer aggregation helps balance feature importance.

• MNSE alone led to a 17.26% increase in HR, highlighting the critical role of diverse negative samples.

When combined, the three modules produced the best results, demonstrating their complementary effects.

Case Study

To illustrate real-world applicability, AMONS was tested on a movie recommendation scenario. For a user who watched The Shawshank Redemption and Forrest Gump, AMONS recommended Titanic, Avatar, and Schindler’s List—films that align with the user’s preference for critically acclaimed dramas. Another user who enjoyed Inception and The Matrix received recommendations like The Dark Knight and Star Wars, reflecting their interest in sci-fi and action. These examples show that AMONS provides accurate and diverse recommendations by effectively learning from interaction patterns.

Conclusion

AMONS introduces three key innovations to GCN-based recommendation systems:

  1. Historical Interaction Optimized Embedding (HIOE): Enhances initial embeddings using adjacency matrices, reducing reliance on random initialization.
  2. Layer Aggregation Decay (LAD): Prioritizes informative layers in feature aggregation, improving recommendation stability.
  3. Multi-Negative Sampling Enhancement (MNSE): Generates multiple high-quality negative samples, enabling better preference learning.

Extensive experiments on Gowalla and Amazon-Books demonstrate that AMONS significantly outperforms existing methods. Future work could explore integrating attention mechanisms to further refine node importance during graph propagation.

doi.org/10.19734/j.issn.1001-3695.2024.04.0126
