Martin Tschammer, head of security at Synthesia, a startup that creates AI-generated hyperrealistic deepfakes, says he agrees with the principle driving personhood credentials: the need to verify humans online. However, he is unsure whether they are the right solution or how practical they would be to implement, and he expresses skepticism over who would run such a scheme.
“We may end up in a world in which we centralize even more power and concentrate decision-making over our digital lives, giving large internet platforms even more ownership over who can exist online and for what purpose,” he says. “And, given the lackluster performance of some governments in adopting digital services and autocratic tendencies that are on the rise, is it practical or realistic to expect this type of technology to be adopted en masse and in a responsible way by the end of this decade?”
Rather than waiting for industry-wide collaboration, Synthesia is evaluating how to integrate other personhood-proving mechanisms into its products. Tschammer says the company already has several measures in place: for example, it requires businesses to prove that they are legitimate, registered companies, and it will ban customers who break its rules and refuse to refund them.
One thing is clear: we urgently need ways to differentiate humans from bots, and encouraging discussion between tech and policy stakeholders is a step in the right direction, says Emilio Ferrara, a professor of computer science at the University of Southern California, who was also not involved in the project.
“We’re not far from a future where, if things remain unchecked, we’re going to be essentially unable to tell apart interactions that we have online with other humans or some kind of bots. Something has to be done,” he says. “We can’t be naive as previous generations were with technologies.”