Facebook CEO Mark Zuckerberg offered a vivid example of the technological limitations of training machines to recognize certain types of objectionable content on the sprawling social media platform.

Back in 2004, when Zuckerberg was building Facebook in his Harvard University dorm room, he outsourced content moderation to the community. He lacked the resources to hire a staff of thousands, so individual users served as content cops, flagging offensive posts, which he would review and remove if they didn’t belong.

These days, Facebook relies on a combination of people and machines to monitor for objectionable material, including hate speech, graphic violence, nudity or cruelty. This approach has been effective at proactively removing terrorist content and posts containing nudity.

“Ninety-nine percent of terrorism content we take down before anyone sees it,” said Zuckerberg. “Whereas hate speech, which is more nuanced linguistically, that’s going to take more years to do something reasonable.”

Zuckerberg seemed to express frustration over criticism that Facebook is better able to handle fleshy posts than hate speech. It all comes down to machine learning.

“It’s easier to build an AI that can detect a nipple than identify hate speech,” said Zuckerberg.
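To see why, consider a toy illustration (mine, not Facebook's actual pipeline, and the training posts below are invented): a simple bag-of-words text classifier, sketched here with scikit-learn, reduces every post to word counts. A post that uses a slur and a post that condemns the same slur end up looking nearly identical to it.

```python
# A toy sketch, not Facebook's system: a bag-of-words classifier sees
# only surface tokens, so it struggles to separate hateful use of a
# word from a post quoting or condemning that word.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data; labels: 1 = hate speech, 0 = acceptable.
posts = [
    "those people are vermin and should leave",                   # hateful
    "go back to your own country",                                # hateful
    "the documentary examines why people call migrants vermin",   # reporting
    "my grandmother left her country and never went back",        # benign
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, labels)

# Same key word ("vermin"), opposite intent. Word counts alone carry
# no signal about which is which; the model tends to assign both test
# posts the same label, because the context that separates them is
# exactly what a bag of words throws away.
tests = [
    "immigrants are vermin",                   # hateful usage
    "it is wrong to call immigrants vermin",   # condemning usage
]
for text in tests:
    print(text, "->", model.predict([text])[0])
```

Nudity detection, by contrast, keys on visual features that look much the same regardless of intent, which is why image classifiers got there first.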

So, now you know.