Kenyan Data Workers Blow Whistle on Meta’s Smart Glasses Content
Meta is once again at the center of a brewing legal storm in Kenya. The tech giant is facing fresh complaints over its Ray-Ban Meta AI smart glasses, with allegations ranging from mass surveillance to the mishandling of sensitive personal data. The controversy follows a series of reports detailing how the technology has been used to record individuals, particularly women, without their knowledge or consent.
The heart of the complaint lies in the data-training process. Workers at Sama (formerly Samasource), the firm Meta contracted to label data in Kenya, say they were tasked with training the AI on images and videos captured by the smart glasses. Whistleblowers allege, however, that the content they were required to review was often highly intrusive, including explicit sexual images, bank card details, and other sensitive personal information that remained unblurred during processing.
Advocacy groups like Oversight Lab have now petitioned the Office of the Data Protection Commissioner (ODPC), led by Immaculate Kassait, to launch a full-scale investigation. The petition calls for an audit of Meta’s historical data processing and raises alarms over the potential for “mass surveillance,” noting that the glasses allow wearers to record anyone, at any time, without detection. This echoes a disturbing incident from February 2024, in which a foreign national reportedly used the devices to record Kenyan women in compromising situations without their permission.
In response, Meta has defended its product, stating that the Ray-Ban Meta glasses are designed to help users interact with AI about the world around them. The company maintains that images stay on the device unless a user chooses to share them with Meta AI for processing. Meta also asserted that it uses contractors to improve the service and has measures in place to filter sensitive data, claiming that its partner, Sama, has no record of offensive or unmasked sensitive content in its systems.
As the Data Commissioner’s office begins its inquiry, the case highlights a growing tension between Kenya’s push to become a global tech hub and the need to protect its citizens from digital exploitation. Critics argue that the government has focused too heavily on job creation, pointing to the thousands of content moderation roles created, while overlooking the psychological toll on workers and the privacy risks posed to the general public.