Child abuse imagery is a real problem that even tech companies like Apple are tackling head on. A search warrant filed by Homeland Security Investigations gives an overview of how Apple detects and reports these types of images uploaded to iCloud or sent via its email servers, "while protecting the privacy of innocent customers".
The first detection step is automated, using a system common to most technology companies.
For each image of child abuse already identified by the authorities, a hash is created. This is effectively a digital signature for that image. With this hash, technology companies can have their systems automatically search for matching images.
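The matching step can be sketched as follows. This is a simplified illustration, not Apple's actual system: real deployments use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas the plain SHA-256 used here only matches byte-identical files, and the hash database shown is hypothetical.

```python
import hashlib

# Hypothetical database of hashes of known illegal images, of the
# kind maintained through an organization such as NCMEC. The value
# below is simply the SHA-256 of the bytes b"test", for demonstration.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_hash(data: bytes) -> str:
    """Compute a digital signature for an image's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def is_known_abuse_image(data: bytes) -> bool:
    """Flag an upload whose hash matches a known image."""
    return image_hash(data) in KNOWN_HASHES
```

The point of the design is that the service never needs to look at the content of ordinary uploads; it only compares signatures against a list of signatures of already-identified material.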
Forbes explains what usually happens when a match is detected.
Once the threshold is reached, the technology company contacts the appropriate authority, usually the National Center for Missing and Exploited Children (NCMEC). NCMEC is a not-for-profit organization that acts as the country's law enforcement clearinghouse for information regarding online sexual exploitation of children. It usually alerts the police after being informed of illegal content, which often triggers criminal investigations.
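The threshold logic described above can be sketched as a simple per-account counter. The actual threshold value is not public; the number used here, and the `AccountMonitor` structure itself, are assumptions for illustration only.

```python
from dataclasses import dataclass, field

REPORT_THRESHOLD = 3  # hypothetical value; the real threshold is not disclosed

@dataclass
class AccountMonitor:
    """Count hash matches per account and signal when a report is due."""
    matches: dict = field(default_factory=dict)

    def record_match(self, account_id: str) -> bool:
        """Record one hash match; return True once the account
        crosses the reporting threshold."""
        self.matches[account_id] = self.matches.get(account_id, 0) + 1
        return self.matches[account_id] >= REPORT_THRESHOLD
```

Requiring several matches before reporting reduces the chance that a single false positive from the hashing step escalates to the authorities.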
For its part, Apple pushes its investigations further with a manual verification of each image. Using this method, it can confirm whether the images are indeed suspect. The California firm then provides law enforcement agencies with the name, address and mobile phone number associated with the corresponding Apple ID.
The process came to light through the search warrant, which includes comments from an Apple employee.
The investigator quoted comments from an Apple employee describing how the company first detected multiple images of child pornography believed to have been uploaded by an iCloud user, then examined that user's emails.
When we intercept an email with suspicious images, it is not forwarded to the intended recipient. This person sent 8 emails that we intercepted. Seven of these emails contained 12 images. The 7 emails and images were identical, as was the recipient's email address. The other email contained 4 images different from the 12 previously mentioned. The recipient was the same.
I suspect that what happened was that he was sending these images and, when they did not arrive, he sent them several times. Either that or he learned from the recipient that they hadn't been received.
The Apple employee then examined each of these suspected child pornography images, according to a Special Agent for Homeland Security Investigations.
Clearly, Apple only examines images whose hash matches that of a known child abuse image, so the interception of "innocent" images is very unlikely. Apple makes the difference by combining hashing with manual verification, so that nothing, or as little as possible, escapes it.