A Novel Human Learning Optimization Algorithm for Multidimensional Knapsack Problems
Introduction
The multidimensional knapsack problem (MKP) is a classic NP-hard combinatorial optimization problem with widespread applications in capital budgeting, project selection, cutting stock, container loading, resource allocation in distributed computing, and constrained approval voting. Since its introduction by Lorie and Savage in 1955, the MKP has been studied extensively owing to its complex constraints and computational challenges. Unlike the standard 0-1 knapsack problem, the MKP involves multiple resource constraints, making it significantly more difficult to solve, especially as problem sizes grow.
Traditional approaches to solving the MKP include exact methods such as branch-and-bound and dynamic programming, which guarantee optimal solutions but scale poorly. In contrast, metaheuristic algorithms such as genetic algorithms (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) have shown promise in finding near-optimal solutions for larger instances. However, existing methods still face limitations in solution accuracy, stability, and computational efficiency, particularly as instances grow beyond roughly 100 items and 10 constraints.
To address these challenges, this paper introduces a novel human learning optimization (NHLO) algorithm inspired by cognitive psychology principles. The proposed algorithm enhances the basic human learning optimization (HLO) framework by incorporating memory mechanisms, adaptive learning operator selection, and variable neighborhood search strategies. These improvements enable NHLO to achieve higher solution quality, faster convergence, and better stability across small, medium, large, and very large MKP instances.
Background and Related Work
Multidimensional Knapsack Problem
The MKP can be formally defined as selecting a subset of items to maximize total value while respecting multiple resource constraints. Each item has a value and consumes a certain amount of each resource. The goal is to determine the optimal combination of items that maximizes value without exceeding any resource limit. Due to its NP-hard nature, exact methods become impractical for large instances, necessitating heuristic and metaheuristic approaches.
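In the usual 0-1 formulation, with n items, m resources, profit p_j for item j, consumption w_{ij} of resource i by item j, and capacity c_i, the MKP reads:

$$\max \sum_{j=1}^{n} p_j x_j \quad \text{s.t.} \quad \sum_{j=1}^{n} w_{ij} x_j \le c_i, \; i = 1, \dots, m, \qquad x_j \in \{0, 1\}, \; j = 1, \dots, n.$$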
Existing Solution Approaches
Previous research has explored various techniques for solving MKP:
• Exact Methods: Branch-and-bound and dynamic programming can find optimal solutions but are limited to small instances due to exponential time complexity.
• Metaheuristics: GA, PSO, and ACO have been widely applied, but they often suffer from premature convergence and poor stability in high-dimensional problems.
• Human Learning Optimization (HLO): Inspired by human cognitive processes, HLO mimics learning behaviors such as individual learning, social learning, and random exploration. While effective for some problems, basic HLO struggles with diversity preservation and local optima avoidance.
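As a concrete sketch of this mechanism, the bit-update rule of simple HLO is commonly written with two control probabilities, pr and pi; the Python below follows that standard formulation and is illustrative rather than this paper's exact pseudocode:

```python
import random

def shlo_update_bit(ikd_bit: int, skd_bit: int,
                    pr: float = 0.05, pi: float = 0.85) -> int:
    """One bit of a simple-HLO solution update.

    ikd_bit / skd_bit are the matching bits of the individual and social
    knowledge databases; pr and pi partition [0, 1] into random,
    individual, and social learning (the typical values shown here are
    assumptions, not this paper's settings).
    """
    r = random.random()
    if r < pr:                      # random learning: unbiased exploration
        return random.randint(0, 1)
    if r < pi:                      # individual learning: copy personal best
        return ikd_bit
    return skd_bit                  # social learning: copy population best
```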
Despite these efforts, existing algorithms still exhibit limitations in handling very large-scale MKP instances efficiently. This motivates the development of NHLO, which integrates cognitive psychology principles to enhance search capabilities.
The Proposed NHLO Algorithm
The NHLO algorithm introduces three key innovations to improve upon basic HLO:
- Memory Mechanism via Hash Functions
Human learning involves memorizing past experiences to avoid redundant efforts. NHLO simulates this by using hash functions to encode and store previously visited solutions. This prevents the algorithm from revisiting the same solutions, thereby improving search diversity and reducing computational waste. Three distinct hash functions are employed to minimize collision risks, ensuring efficient memory utilization.
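A minimal sketch of such a memory, in the spirit of a Bloom filter over binary solution vectors; the three hash functions and the table size below are illustrative assumptions, not the paper's exact choices:

```python
import hashlib

class SolutionMemory:
    """Records visited 0-1 solutions via three hash functions.

    A solution is treated as 'seen' only if all three slots are set,
    which keeps false positives (collisions) rare.
    """

    def __init__(self, size: int = 1_000_003):
        self.size = size
        self.bits = bytearray(size)

    def _slots(self, solution):
        key = bytes(solution)  # solution: sequence of 0/1 ints
        for algo in (hashlib.md5, hashlib.sha1, hashlib.sha256):
            yield int.from_bytes(algo(key).digest()[:8], "big") % self.size

    def seen(self, solution) -> bool:
        return all(self.bits[s] for s in self._slots(solution))

    def add(self, solution) -> None:
        for s in self._slots(solution):
            self.bits[s] = 1
```

During the search, a freshly generated candidate would first be checked with seen(); if it was already visited, it can be perturbed or discarded instead of being re-evaluated.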
- Adaptive Learning Operator Selection
Cognitive psychology suggests that learning strategies should adapt based on performance feedback. NHLO dynamically selects learning operators (random, individual, social, or team-based) by comparing the fitness of candidate solutions with the average fitness of knowledge repositories. This adaptive strategy balances exploration and exploitation, enhancing convergence speed and solution quality.
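A sketch of how such feedback-driven selection might look; the reweighting factors and the repository-average signal below are assumptions based on the description above, not the paper's exact rule:

```python
import random

OPERATORS = ("random", "individual", "social", "team")

def select_operator(candidate_fitness: float,
                    repo_avg_fitness: float,
                    base_probs=(0.1, 0.3, 0.3, 0.3)) -> str:
    """Adaptively pick a learning operator.

    Candidates that beat the repositories' average fitness bias the
    draw toward exploitation (individual/social/team learning); weaker
    candidates bias it toward exploration (random learning).
    """
    if candidate_fitness >= repo_avg_fitness:
        factors = (0.5, 1.2, 1.2, 1.2)   # exploit: trust stored knowledge
    else:
        factors = (1.8, 0.8, 0.8, 0.8)   # explore: inject randomness
    weights = [p * f for p, f in zip(base_probs, factors)]
    return random.choices(OPERATORS, weights=weights, k=1)[0]
```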
- Variable Neighborhood Search (VNS)
To strengthen local search capabilities, NHLO incorporates VNS with three neighborhood operators:
• Swap Operator: Exchanges selected and unselected items to explore new configurations.
• Flip Operator: Reverses the selection status of items to diversify solutions.
• Remove Operator: Deletes an item and replaces it with higher-value alternatives while respecting constraints.
These operators help NHLO escape local optima and refine solutions iteratively.
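A compact sketch of the three moves on a 0-1 solution vector x; the feasibility check and the greedy replacement rule in remove_move are illustrative assumptions:

```python
import random

def feasible(x, weights, capacities) -> bool:
    """All m constraints: sum_j weights[i][j] * x[j] <= capacities[i]."""
    return all(sum(row[j] * x[j] for j in range(len(x))) <= cap
               for row, cap in zip(weights, capacities))

def swap_move(x):
    """Exchange one selected item with one unselected item."""
    ones = [j for j, b in enumerate(x) if b]
    zeros = [j for j, b in enumerate(x) if not b]
    if not ones or not zeros:
        return x[:]
    y = x[:]
    y[random.choice(ones)], y[random.choice(zeros)] = 0, 1
    return y

def flip_move(x):
    """Reverse the selection status of one random item."""
    y = x[:]
    j = random.randrange(len(y))
    y[j] = 1 - y[j]
    return y

def remove_move(x, values, weights, capacities):
    """Drop one selected item, then greedily insert the most valuable
    unselected item that keeps the solution feasible."""
    ones = [j for j, b in enumerate(x) if b]
    if not ones:
        return x[:]
    y = x[:]
    out = random.choice(ones)
    y[out] = 0
    for j in sorted((k for k, b in enumerate(y) if not b and k != out),
                    key=lambda k: values[k], reverse=True):
        y[j] = 1
        if feasible(y, weights, capacities):
            break
        y[j] = 0
    return y
```

In VNS fashion, the search would cycle through these neighborhoods, returning to the first whenever a move improves the incumbent solution.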
Experimental Evaluation
Benchmark Datasets
The performance of NHLO was evaluated on 76 standard MKP instances categorized into four scales:
• Small-scale (Set 1): 18 instances with up to 105 items and 30 constraints.
• Medium-scale (Set 2): 30 instances with up to 90 items and 5 constraints.
• Large-scale (Set 3): 30 instances with up to 500 items and 10 constraints.
• Very large-scale (Set 4): 8 instances with up to 150,025 items and 50 constraints.
Comparative Algorithms
NHLO was compared against four state-of-the-art algorithms:
• Binary Particle Swarm Optimization (BPSO)
• Genetic Algorithm (GA)
• Simple Human Learning Optimization (SHLO)
• Human Learning Optimization with Learning Psychology (LPHLO)
Results and Analysis
Small and Medium-Scale Instances
For small and medium-scale problems, all algorithms could find optimal solutions, but NHLO demonstrated superior stability with zero standard deviation across multiple runs. In contrast, GA and BPSO exhibited higher variability, indicating less reliable performance.
Large and Very Large-Scale Instances
NHLO outperformed competitors in large-scale problems, achieving optimal solutions for all instances in Set 3. For very large-scale problems (Set 4), NHLO found optimal or near-optimal solutions where other algorithms failed, showcasing its scalability.
Optimization Speed
Convergence analysis revealed that NHLO reaches high-quality solutions faster than other methods, thanks to its adaptive operator selection and memory mechanisms. Figure 5 illustrates NHLO’s rapid convergence compared to slower, less stable alternatives.
Strategy Effectiveness
Ablation studies confirmed the contributions of each NHLO component:
• Memory Mechanism (A): Reduced redundant searches, improving solution diversity.
• Adaptive Operator Selection (B): Enhanced exploration-exploitation balance, accelerating convergence.
• Variable Neighborhood Search (C): Strengthened local search, refining solution quality.
The synergy of these strategies was critical to NHLO’s overall performance.
Conclusion
The proposed NHLO algorithm effectively addresses the limitations of existing methods for solving MKP. By integrating cognitive psychology principles—memory, adaptive learning, and neighborhood search—NHLO achieves superior accuracy, stability, and scalability across problem sizes. Experimental results demonstrate its robustness, particularly for very large-scale instances where traditional methods falter. Future work could extend NHLO to multi-objective MKP variants and other combinatorial optimization problems.
DOI: doi.org/10.19734/j.issn.1001-3695.2024.04.0127