Tea, the women-only dating safety app, has suffered a severe data breach that exposed more than 72,000 user records, including government IDs, verification selfies, and private messages. The failure, first surfaced by users on 4chan, left 59.3 GB of sensitive data in an unsecured backend database with no password protection or encryption. Tea's initial claims that the exposure was limited proved false: the leaked data included recent messages and verification selfies, undercutting the app's core promise of a safe space for women to discuss their romantic relationships.

The vulnerability has been attributed to "vibe coding," the practice of building software with AI tools such as ChatGPT without thorough security review. Experts warn that this approach, now common among tech startups, heightens the risk of data exploitation. Affected users face threats of identity theft and harassment, and the incident raises questions about Tea's compliance with data protection laws. It stands as a cautionary tale about leaning on generative AI in app development: even niche-focused apps must prioritize robust security measures.
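The report does not detail the exact misconfiguration, but the description, a backend datastore readable without passwords or encryption, matches the familiar pattern of world-readable cloud storage. The sketch below is a minimal, hypothetical illustration in Python (using the common `requests` library, with a placeholder URL rather than any real infrastructure) of the kind of basic check that flags this class of exposure: an unauthenticated GET that returns data indicates a publicly readable resource.

```python
# Minimal sketch: probe whether a backend endpoint serves data without credentials.
# The URL used below is a placeholder, not any real app's infrastructure; this only
# illustrates the class of misconfiguration described above (publicly readable storage).
import requests


def is_publicly_readable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers an unauthenticated GET with data."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False
    # A 200 response with a non-empty body to an anonymous request suggests the
    # resource is world-readable; a secure configuration should instead return
    # 401/403 or require signed, expiring URLs.
    return resp.status_code == 200 and len(resp.content) > 0


if __name__ == "__main__":
    # Hypothetical endpoint; only test assets you are authorized to probe.
    print(is_publicly_readable("https://storage.example.com/user-uploads/export.json"))
```

Checks like this are routinely part of pre-release security review; the broader lesson of the Tea incident is that AI-assisted development does not remove the need for that review.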