Title#
Improving Fairness in Graph Neural Networks via Counterfactual Debiasing
Abstract#
Graph Neural Networks (GNNs) have been successful in modeling graph-structured data. However, like other machine learning models, GNNs can exhibit bias in predictions based on sensitive attributes such as race and gender, and this bias can be exacerbated by the graph structure and the message-passing mechanism. Recent state-of-the-art methods mitigate bias by filtering sensitive information out of the input or the learned representations, for example through edge dropping or feature masking. We argue, however, that such strategies may unintentionally remove non-sensitive features as well, compromising the balance between predictive accuracy and fairness. To tackle this challenge, we present a novel approach that uses counterfactual data augmentation to mitigate bias. The method creates diverse neighborhoods with counterfactuals before message passing, so that unbiased node representations can be learned from the augmented graph. An adversarial discriminator is then employed to reduce bias in the predictions of a conventional GNN classifier. Our proposed technique, Fair-ICD, ensures the fairness of GNNs under mild conditions. Experiments on standard datasets with three GNN backbones demonstrate that Fair-ICD notably improves fairness metrics while preserving high predictive performance.
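The abstract only sketches the pipeline, so the following is a minimal, hypothetical PyTorch sketch of the two ingredients it names: counterfactual augmentation of node features before message passing, and an adversarial discriminator on the learned embeddings. This is not the authors' Fair-ICD implementation; the modules, losses, loss weights, and toy data below are all illustrative assumptions.

```python
# Minimal sketch (NOT the authors' code) of counterfactual augmentation plus an
# adversarial discriminator for fair GNN training. All names, architectures,
# and hyperparameters are assumptions made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """Dense GCN-style message passing: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        return F.relu(self.lin(adj_norm @ x))


class Encoder(nn.Module):
    """Two rounds of message passing over the (augmented) graph."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gc1 = GCNLayer(in_dim, hid_dim)
        self.gc2 = GCNLayer(hid_dim, hid_dim)

    def forward(self, x, adj_norm):
        return self.gc2(self.gc1(x, adj_norm), adj_norm)


def counterfactual_features(x, sens_idx):
    """Flip a binary sensitive-attribute column to build a counterfactual view."""
    x_cf = x.clone()
    x_cf[:, sens_idx] = 1.0 - x_cf[:, sens_idx]
    return x_cf


def normalize_adj(adj):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    a = adj + torch.eye(adj.size(0))
    d = a.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a * d.unsqueeze(0)


# Toy data: random graph, binary sensitive attribute stored in column 0.
n, f, sens_idx = 100, 16, 0
x = torch.rand(n, f)
x[:, sens_idx] = (x[:, sens_idx] > 0.5).float()
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()
adj_norm = normalize_adj(adj)
y = torch.randint(0, 2, (n,))

enc = Encoder(f, 32)
clf = nn.Linear(32, 2)   # conventional GNN classifier head
disc = nn.Linear(32, 1)  # adversarial sensitive-attribute discriminator
opt_main = torch.optim.Adam(list(enc.parameters()) + list(clf.parameters()), lr=1e-2)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-2)

for epoch in range(200):
    # Discriminator step: predict the sensitive attribute from frozen embeddings.
    z = enc(x, adj_norm).detach()
    d_loss = F.binary_cross_entropy_with_logits(disc(z).squeeze(), x[:, sens_idx])
    opt_disc.zero_grad()
    d_loss.backward()
    opt_disc.step()

    # Main step: classify well, stay consistent across the counterfactual view,
    # and fool the discriminator (adversarial term enters with a negative sign).
    z = enc(x, adj_norm)
    z_cf = enc(counterfactual_features(x, sens_idx), adj_norm)
    task = F.cross_entropy(clf(z), y)
    consistency = F.mse_loss(z, z_cf)  # invariance to the counterfactual view
    adv = F.binary_cross_entropy_with_logits(disc(z).squeeze(), x[:, sens_idx])
    loss = task + 0.5 * consistency - 0.5 * adv
    opt_main.zero_grad()
    loss.backward()
    opt_main.step()
```

The alternating two-optimizer min-max above is one common way to realize the adversarial term; a gradient-reversal layer would be an equivalent alternative. Which variant Fair-ICD actually uses, and how it constructs counterfactual neighborhoods beyond a simple attribute flip, is not specified in the abstract.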
Article Page#
PDF Access#