Bumble fights AI scammers with new reporting tool

By providing this option, Bumble aims to curb the misuse of AI technology in creating deceptive profiles and enhance user safety on the platform.

With scammers increasingly using AI-generated photos and videos on dating apps, Bumble has added a new feature that lets users report suspected AI-generated profiles. Users can now select ‘Fake profile’ and then choose ‘Using AI-generated photos or videos’, alongside existing reporting options such as inappropriate content, underage users, and scams. By letting users flag such profiles, Bumble aims to reduce the misuse of AI in creating misleading profiles.

In February this year, Bumble introduced the ‘Deception Detector’, which combines AI and human moderators to detect and remove fake profiles and scammers. Since rolling out this measure, Bumble has seen a 45% overall reduction in reported spam and scams. Another notable safety feature is Bumble’s ‘Private Detector’, an AI tool that blurs unsolicited nude photos.

Risa Stein, Bumble’s VP of Product, emphasised the importance of creating a safe space and stated, ‘We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously so our community feels confident in making connections.’