Stabilizing Sample Similarity in Representation via Mitigating Random Consistency
Authors: Jieting Wang, Zelong Zhang, Feijiang Li, Yuhua Qian, Xinyan Liang
Abstract:
Deep learning has been widely applied due to its powerful representation ability. Intuitively, the quality of a representation can be assessed through sample similarity. Recently, an unsupervised metric was proposed to quantify the information content of similarity matrices, focusing on the discriminative power between sample similarities. However, in classification tasks, class-level discrimination is more essential than pairwise similarity. In this paper, we propose a novel loss function that measures the discriminative ability of representations by computing the Euclidean distance between the similarity matrix and the true adjacency matrix. We prove that random consistency introduces category bias and value bias into the evaluation results. To mitigate the random consistency in the Euclidean distance, we derive its expectation when the labels are permuted uniformly at random, and based on this, we provide an analytical form of the pure Euclidean distance. Theoretical analysis shows that the pure Euclidean distance exhibits heterogeneity and unbiasedness. In addition, we provide a generalization performance bound for the pure Euclidean distance based on the exponential Orlicz norm, verifying its learnability. Experimental results show that our method significantly outperforms traditional loss functions, improving accuracy, F1 score, and the model's ability to differentiate class structures.
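A minimal sketch of the quantities described above, under assumed notation (the paper's exact definitions may differ): S denotes the sample similarity matrix computed from the learned representation, A the true adjacency matrix induced by the class labels, and the "pure" variant subtracts the expectation of the distance under a uniformly random permutation of the labels.

```latex
% Assumed notation; the paper's exact formulation may differ.
% S: n x n similarity matrix from the learned representation.
% A: true adjacency matrix, A_{ij} = 1 if samples i and j share a class, else 0.
\[
  \mathrm{ED}(S, A) \;=\; \lVert S - A \rVert_F^2
\]
% "Pure" Euclidean distance: remove the random-consistency term by subtracting
% the expected distance under a uniformly random label permutation \pi.
\[
  \mathrm{PED}(S, A) \;=\; \mathrm{ED}(S, A) \;-\; \mathbb{E}_{\pi \sim \mathrm{Unif}}\bigl[\mathrm{ED}(S, A_{\pi})\bigr]
\]
```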
Keywords: