A Pornhub logo at the company’s booth during the 2018 AVN Adult Expo on January 25, 2018, in Las Vegas, Nevada.

Adult website Pornhub this week announced a slew of new security features and policies as it tries to get back in the world’s good graces in the wake of abuse allegations it faced late last year.

Pornhub is adding “comprehensive measures for verification, moderation, and detection” of uploaded content to verify that the videos on its platform feature consenting adults and not “potentially illegal material,” including exploitation of minors, the company said in a press release (PDF) this week.

The site—and its parent company MindGeek—found itself in the spotlight in early December as New York Times opinion columnist Nicholas Kristof published a feature alleging that Pornhub “monetizes child rapes, revenge pornography, spy cam videos of women showering, racist and misogynist content, and footage of women being asphyxiated in plastic bags.” Kristof spoke to several women who said videos of them being sexually assaulted were uploaded to Pornhub without their knowledge or consent and that having them removed was all but impossible due to the site’s upload and download policies.

Within days, Pornhub suspended uploads and downloads from all nonverified users and deleted millions of videos—nearly 80 percent of its hosted content in the end. Those actions, however, proved to be too little, too late for Visa and Mastercard, which both banned Pornhub from their payment networks.

The company’s new policies include expanded moderation—both software-based matching and “an extensive team of human moderators” who will manually review all uploads. Pornhub also created a “trusted flagger program,” which allows any of an international collection of nonprofit groups, such as the National Center for Missing and Exploited Children, to flag videos they believe contain illegal content or violate Pornhub’s terms of service. Videos flagged by those groups are disabled immediately rather than remaining visible pending review.
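The press release doesn’t describe how the flagger program is implemented, but the policy distinction—trusted flags take content down immediately, while ordinary reports only queue it for review—can be sketched roughly like this. All names here are hypothetical, not Pornhub’s actual code:

```python
# Hypothetical sketch of a "trusted flagger" workflow. Reports from
# vetted partner organizations disable a video immediately; ordinary
# user reports leave it visible until a moderator reviews it.

TRUSTED_FLAGGERS = {"NCMEC"}  # example partner organization

class Video:
    def __init__(self, video_id):
        self.video_id = video_id
        self.visible = True
        self.review_queue = []

def report(video, reporter, reason):
    # Every report goes to human moderators either way.
    video.review_queue.append((reporter, reason))
    if reporter in TRUSTED_FLAGGERS:
        # Trusted flags take the video down pending that review.
        video.visible = False

v = Video("abc123")
report(v, "ordinary_user", "possible TOS violation")
assert v.visible          # ordinary report: stays up until review
report(v, "NCMEC", "suspected illegal content")
assert not v.visible      # trusted flag: disabled immediately
```

The design choice the policy implies is that trust attaches to the reporter, not the report: the same `report` call has different consequences depending on who makes it.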

In addition to expanded moderation, Pornhub is bringing in a third party, Yoti, to verify users. Under the site’s earlier policy changes, only verified users—studios and individual members of Pornhub’s “Model Program”—are allowed to upload or download content.

“We know nothing”

London-based Yoti launched in 2014 and has been slowly expanding its presence inside the UK; it doesn’t yet have a particularly notable footprint inside the US. The company essentially works as a middleman. You provide it with a biometric identifier, such as a video or voice recording, and show it your government identity paperwork; it then tells whoever’s asking—in this case, Pornhub—that everything’s in order and you are who you say you are. The Yoti customer—here again, Pornhub—never sees the documentation at all.
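The handoff described above is essentially an attestation pattern: the identity provider checks the documents, then passes the relying party only a yes/no claim. A minimal sketch, assuming hypothetical names and fields (this is not Yoti’s actual API):

```python
# Illustrative attestation flow: the identity provider verifies a
# user's documents and biometrics, then hands the relying party
# (the site) only an attestation token -- never the raw ID.

from dataclasses import dataclass

@dataclass
class Attestation:
    user_token: str   # opaque identifier, not the person's real name
    verified: bool    # the only claim the relying party receives

class IdentityProvider:
    def verify(self, user_token, id_document, biometric):
        # Real document-authenticity and face-match checks would
        # happen here; the documents are discarded afterward.
        checks_pass = id_document is not None and biometric is not None
        return Attestation(user_token=user_token, verified=checks_pass)

class RelyingParty:
    def allow_upload(self, attestation):
        # The site sees only the attestation, never the documents.
        return attestation.verified

idp = IdentityProvider()
att = idp.verify("token-42", id_document="passport-scan", biometric="video-frame")
assert RelyingParty().allow_upload(att)
```

The point of the pattern is data minimization: the site learns that a user passed verification without ever holding the identity documents itself.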

“We have no idea if you’re using the system. If you use Yoti with another individual, we know nothing,” Yoti co-founder and CEO Robin Tombs explained to ZDNet in 2017. “We just issue receipts to the willing counterparts. If it’s with a company, we know that a name has gone to, say, Barclays Bank, but we don’t know whose name. We don’t need to know that information, so it’s best if we don’t know it. We’ve designed a system that prevents us from knowing it so that you and Barclays can trust the system.”

The company frames its approach as “ethical” and privacy-driven. Among other things, it promises that it only stores identification data for seven days while it conducts verifications, and after that, it keeps user account information encrypted and secured even from itself.

Handing identity verification off to Yoti sidesteps the problem of Pornhub—and therefore any hackers, regulators, or unethical employees—holding on to performers’ personal information and being able to match it to those performers in other contexts.