Are "manufactured faces" stronger than real ones? The day synthetic data transforms "facial recognition": The reality of fairness, privacy, and on-site implementation

Facial recognition has become a foundational technology in society, powering everything from smartphone unlocking to airport gates. Yet challenges such as privacy concerns, bias, and the difficulty of collecting training data have persisted for years. This has brought attention to synthetic data generated through GANs and 3D modeling: it allows diversity in race, age, lighting, and pose to be increased freely, while carrying lower risk to personal information. The market is expanding rapidly, with projections suggesting it will reach approximately $1.79 billion by 2030.

On the other hand, there is strong criticism on two fronts: "fidelity," the gap between synthetic and real faces, and "diversity washing," where diversity is presented only superficially. On social media, voices welcoming the "privacy-friendly" aspect coexist with concerns that synthetic data could be counterproductive if the verification infrastructure does not keep pace.

Moving forward, four points will be crucial: ① external validation with real-world data, ② control of demographic distribution, ③ auditing of artifacts originating from synthetic data, and ④ regulatory compliance and explainability.
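Points ① and ② can be made concrete with a simple check: evaluate a model trained on synthetic faces against a real-world held-out set and break accuracy down by demographic group. The sketch below is a minimal, hypothetical illustration, assuming binary match/non-match labels and group tags are available; the function name, data, and threshold are assumptions, not a specific tool's API.

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy broken down by demographic group, plus the worst-case gap.

    A large gap between the best- and worst-served groups is one signal
    that synthetic training data has not covered the real distribution.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy real-world validation results (hypothetical):
# 1 = correct match, 0 = non-match; groups "A" and "B" are placeholder labels.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

acc, gap = per_group_accuracy(y_true, y_pred, groups)
# acc -> {"A": 0.75, "B": 1.0}; gap -> 0.25
```

In practice the acceptable gap would be set by policy or regulation; the value of running this on *real* data is that it catches failures that synthetic-only evaluation, by construction, cannot reveal.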