Active Defense Method for Information Hiding Based on Self-Supervised Learning PBS-Net and Channel Purification
Introduction
Information hiding, particularly steganography, has become a primary method for covert communication by embedding secret messages into carrier images while maintaining visual imperceptibility and statistical undetectability. As the counterpart to steganography, steganalysis aims to detect and disrupt hidden communications. Traditional steganalysis operates at three levels: detecting the presence of hidden information, extracting the embedded messages, and actively disrupting or destroying the secret data. The third level, known as active steganalysis or active defense, focuses on eliminating secret information from stego images without degrading visual quality, thereby blocking covert communication.
Existing active defense methods fall into two categories: knowledge-based and deep learning-based approaches. Knowledge-based methods manipulate pixel distributions to remove hidden data but often suffer from high computational complexity and image quality degradation. Deep learning-based methods leverage neural networks to filter out secret information but require extensive paired cover-stego datasets and significant computational resources. Moreover, current methods struggle to generalize to unknown steganographic techniques and achieve only low bit error rates on extracted messages (i.e., insufficient disruption) in real-world social network scenarios.
To address these limitations, this paper proposes BRAD (Blind-spot Network and Channel Purification-based Active Defense), a novel self-supervised learning framework that eliminates secret information without relying on cover-stego pairs. BRAD integrates pixel-shuffle sampling, blind-spot network architecture, and channel purification to achieve high secret information disruption while preserving image quality.
Related Work
Information Hiding and Steganography
Information hiding techniques exploit redundancy in carrier signals to embed secret data without detection. Steganography, used for covert communication, includes standard and robust methods. Standard steganography minimizes distortion during embedding, while robust steganography ensures message survival under compression and transformations. Notable algorithms include J-UNIWARD for minimizing detectability and DMAS/GMAS for resisting JPEG compression and statistical detection.
Active Defense in Steganalysis
Active defense methods aim to disrupt hidden communications by removing embedded messages. Knowledge-based approaches modify pixel values or apply filtering but often degrade image quality. Deep learning-based methods, such as AO-Net and SC-Net, use neural networks to eliminate secret information but depend on supervised training with cover-stego pairs. Recent advancements include adversarial perturbation removal and model compression techniques to reduce computational overhead. However, these methods still face challenges in generalizing to unknown steganographic schemes and maintaining high image quality.
Proposed Method
Overview of BRAD
BRAD consists of two main components: a self-supervised PBS-Net (Pixel-shuffle sampling and Blind-Spot Network) for primary active defense and a Channel Refinement Module (CRM) for secondary purification. The framework operates without prior knowledge of steganographic algorithms or manual intervention, making it suitable for real-world social network applications.
Self-Supervised PBS-Net
Traditional active defense networks require supervised training with cover-stego pairs, limiting their practicality. BRAD introduces a self-supervised PBS-Net that trains solely on stego images, eliminating the need for paired data. The network architecture includes:
- Pixel-Shuffle Sampling (PD/PU):
  • PD (Pixel-shuffle Downsampling) rearranges pixels to break spatial correlations, disrupting hidden message patterns.
  • PU (Pixel-shuffle Upsampling) reconstructs the image to its original dimensions, enhancing local details.
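The PD/PU idea can be sketched in a few lines of numpy. This is a minimal illustration of stride-based sub-image sampling, not the paper's exact implementation; the sampling factor `s` and the function names are assumptions:

```python
import numpy as np

def pixel_shuffle_down(img, s=2):
    """PD sketch: split an HxW image into s*s sub-images by taking every
    s-th pixel, which breaks the local spatial correlations that
    embedding patterns rely on."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).transpose(1, 3, 0, 2).reshape(s * s, h // s, w // s)

def pixel_shuffle_up(subs, s=2):
    """PU sketch: inverse of pixel_shuffle_down, reassembling the
    sub-images into the original pixel layout."""
    n, hs, ws = subs.shape
    return subs.reshape(s, s, hs, ws).transpose(2, 0, 3, 1).reshape(hs * s, ws * s)
```

Round-tripping PD then PU reproduces the input exactly; the defense benefit comes from processing the shuffled sub-images in between.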
- Blind-Spot Network (BS-Net):
  • Central Masked Convolution: masks the center pixel in each receptive field to prevent the network from learning an identity mapping, forcing it to predict pixel values from surrounding context.
  • Dilated Convolution Residual Blocks: expand the receptive field to capture multi-scale image features, reducing artifacts and improving visual quality.
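A central masked convolution can be sketched directly in numpy (a single 3×3 layer for illustration; the kernel size and padding mode are assumptions, and the real network applies this inside a trained PyTorch model):

```python
import numpy as np

def central_masked_conv(img, kernel):
    """Blind-spot sketch: a 3x3 convolution whose centre weight is forced
    to zero, so the output at each pixel never sees that pixel itself and
    must be predicted from its neighbours."""
    k = kernel.astype(float).copy()
    k[1, 1] = 0.0  # mask the centre tap: this is the blind spot
    h, w = img.shape
    pad = np.pad(img, 1, mode='reflect')
    out = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out
```

Because the centre tap is always zeroed, even a kernel that would otherwise copy the input pixel produces no identity mapping, which is exactly why the network cannot shortcut the self-supervised objective.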
The self-supervised training minimizes the difference between the network's output and the input stego image. Because the blind spot prevents the network from simply copying each pixel, the reconstruction preserves image content while failing to reproduce the pixel-level embedding noise, which effectively removes the secret information.
Channel Refinement Module (CRM)
After PBS-Net processing, the generated images may contain residual artifacts. The CRM refines texture details and smooths flat regions through a non-trainable post-processing step:
- Separates secret information channels from the PBS-Net output.
- Applies additional filtering to eliminate residual hidden data.
- Averages refined features with the original output to enhance visual quality.
This module ensures high disruption of secret messages while maintaining low visual degradation.
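The refinement step above can be sketched as a non-trainable filter-and-average pass. The choice of a 3×3 box filter and the equal-weight average are assumptions for illustration; the paper's exact filtering is not specified here:

```python
import numpy as np

def box_filter3(img):
    """3x3 mean filter with reflect padding; stands in for the smoothing
    step that suppresses residual embedding artifacts in flat regions."""
    h, w = img.shape
    pad = np.pad(img, 1, mode='reflect')
    out = np.zeros((h, w), dtype=float)
    for di in range(3):
        for dj in range(3):
            out += pad[di:di + h, dj:dj + w]
    return out / 9.0

def channel_refine(pbs_output):
    """CRM sketch: filter the PBS-Net output, then average the filtered
    result with the original output so texture detail is retained while
    residual artifacts are damped."""
    return 0.5 * (pbs_output + box_filter3(pbs_output))
```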
Experimental Results
Setup
Experiments were conducted on the ALASKA V2 and BossBase 1.01 datasets, using 10,000 images (8k for training, 1k for validation, 1k for testing). Images were cropped to 256×256 patches and processed using PyTorch on an Intel Core i9-10900 CPU and NVIDIA GTX 1660 Ti GPU. Key metrics included:
• Bit Error Rate (BER): Measures secret message disruption (BER > 0.2 indicates successful removal).
• Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM): Evaluate image quality preservation.
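The BER metric is straightforward to compute; a minimal sketch (function name assumed) follows:

```python
import numpy as np

def bit_error_rate(original_bits, extracted_bits):
    """Fraction of message bits flipped after the defense. A value near
    0.5 means the extracted message is no better than random guessing;
    BER > 0.2 is the success threshold used in the experiments."""
    original_bits = np.asarray(original_bits)
    extracted_bits = np.asarray(extracted_bits)
    return float(np.mean(original_bits != extracted_bits))
```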
Performance Evaluation
- Secret Information Elimination:
  • Against DMAS: BRAD achieved an average BER of 0.487, i.e., 243.5% of the 0.2 success threshold.
  • Against GMAS: the lowest BER was 0.4885 (244.25% of the threshold), demonstrating robust performance across varying payloads (0.01–0.05 bpnac) and quality factors (65–95).
- Image Quality Preservation:
  • PSNR values ranged from 43 to 46 dB, and SSIM scores were above 0.98, indicating minimal visual degradation.
- Comparative Analysis:
  • Against AO-Net and SC-Net: BRAD outperformed both methods in BER, PSNR, and SSIM. For DMAS, PSNR improved by 9.14% (vs. AO-Net) and 43.34% (vs. SC-Net); SSIM increased by 0.95% and 8.06%, respectively. Similar gains were observed for GMAS.
Conclusion
BRAD presents a novel approach to active defense in steganography by combining self-supervised learning with channel purification. Key contributions include:
- Eliminating dependency on cover-stego pairs through pixel-shuffle sampling and blind-spot network training.
- Integrating masked and dilated convolutions to enhance secret information removal and image quality.
- Introducing CRM for secondary purification, further improving disruption rates and visual fidelity.
Experimental results confirm BRAD’s superiority over existing methods, achieving 100% defense success while maintaining high image quality. Future work will focus on deploying BRAD in social network platforms and hardware, optimizing real-time performance, and expanding its applicability to diverse steganographic techniques.
DOI: 10.19734/j.issn.1001-3695.2024.01.0108