Microsoft-backed facial recognition firm rethinks its role in Hong Kong
When Microsoft’s M12 venture fund announced its investment in Israeli face recognition company AnyVision in June, the deal was temporarily held up as the startup sought to determine whether its products adhered to Microsoft’s tough AI ethics standards. After all, facial recognition technology has become a locus of controversy in recent years, raising privacy and civil rights concerns due to its use by law enforcement and governments around the world. After a few weeks, the startup said that all of its investors, including Microsoft, were satisfied that its technology was a “tool for good,” according to Bloomberg.
But in mid-July, the Israeli newspaper Haaretz reported that the Israeli army has been using AnyVision’s face recognition technology at West Bank checkpoints that control entry into Israel, as well as deeper inside Palestinian-controlled territory. Now, privacy groups are criticizing Microsoft for its support of AnyVision, which is also linked to the Hong Kong, Macau, and Russian governments.
A spokesperson for the startup told Fast Company that its face recognition technology has never been sold and is not currently in use in Hong Kong, where widespread protests against the government have incensed the police and increased tensions with China. However, AnyVision recently advertised an opening for a Hong Kong-based “Regional Sales Manager” on its website, a position that would entail establishing and managing partnerships in Hong Kong and Macau (another special administrative region of China). AnyVision has since removed the Hong Kong job posting from its company careers page. A job ad for a Technical Service Manager in Macau, a position that would help maintain existing accounts and find new ones, is still up on the website. And Global Sources, a Hong Kong-based business-to-business media company that facilitates high-volume trade between China and the world, features an AnyVision CCTV product, the Artificial Intelligence Super WDR Indoor Box Camera, on its website.
The tensions in Hong Kong have thrown a wrench into AnyVision’s business plans for the region, including Macau. While Macau, a mecca for legal gambling and coastal tourism, enjoys a high degree of autonomy, much like neighboring Hong Kong, China has exerted growing influence in the two decades since assuming administrative control of the region from Portugal. AnyVision’s spokesperson added that the company is reevaluating the sales positions in both Hong Kong and Macau. When asked about criticism over AnyVision’s role in Israel, a spokesperson for Microsoft declined to comment, referring questions to AnyVision.
The spokesperson also claims that recent reporting on the use of its facial recognition technology at West Bank border crossings has been misleading. The company doesn’t see its products, installed at biometric gates between the West Bank and Israel, as surveillance tools or as a violation of human rights.
“AnyVision’s facial recognition systems at border crossings work in the same way and for the same purposes as they do in airports, for example,” the company says. “For commuters and others who want to simply cross country borders, facial recognition drastically decreases wait times at border crossings. The other advantage is that they provide an unbiased safeguard at the border to detect and deter persons who have committed unlawful activities.”
AnyVision has clients in more than 40 countries, but the spokesperson would not name any of them, saying client privacy is the company’s top priority. They did say that the company’s facial recognition technology is installed at hundreds of sites worldwide.
But AnyVision’s technology has raised concerns among civil liberties advocates, including Shankar Narayan, director of the Technology and Liberty Project at the ACLU of Washington. He says that its face surveillance product Better Tomorrow includes affect analysis capabilities, which can detect emotions, among other things, to flag potential threats from individuals on watch lists. AnyVision also offers two other products: Insights, which can track shoppers at retail stores, and Sesame, a facial recognition product for things like mobile ID authentication and data encryption.
“Essentially, their technology is designed to turn us into a surveillance society with these AI-based judgments impacting our lives in ways that we have very little control over,” says Narayan. “The additional cause for concern is that [AnyVision products] are, I understand, being used by the Israeli government, and they have connections with the Hong Kong and Russian governments as well,” he adds, referring to connections reported by Forbes‘s Thomas Brewster.
Narayan says AnyVision’s presence in the West Bank, and its connections with Hong Kong and Macau, raise questions about the ethical use of facial recognition, and whether Microsoft’s ethics committee approved the West Bank installations.
“AnyVision agreed to comply with Microsoft’s facial recognition principles, and that commitment is backed up by verification mechanisms which we are discussing with them,” an M12 spokesperson told Fast Company.
The ethics of face recognition technology
In a July 2018 blog post, Microsoft president Brad Smith wrote publicly about concerns over facial recognition. In it, he called for government regulation, as well as a proactive tech industry approach to make sure that the technology is used responsibly and ethically.
“Facial recognition technology raises issues that go to the heart of fundamental human rights protections like privacy and freedom of expression,” Smith wrote. “These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses.”
Six months later, Smith published another post on the Microsoft blog, again championing corporate responsibility and government regulation of face recognition. He called on tech companies to address machine bias in face recognition, to protect people’s privacy, and to prevent mass surveillance by governments that would encroach on democratic freedoms. And while Smith also said the world wouldn’t be best served by a “commercial race to the bottom,” Microsoft’s investment in AnyVision appears to put it in exactly that race, choosing, to use Smith’s own words, “market success” over “social responsibility.”
But Smith’s thoughts on the ethics of face recognition didn’t just appear ex nihilo. Narayan says that Smith’s posts on face recognition were prompted in part by the ACLU and its coalition of racial justice, faith, and civil, human, and immigrants’ rights groups, which have engaged Microsoft on the issue.
“We actually had a meeting with a dozen Microsoft leaders and a dozen members of our coalition, and we engaged with them fairly consistently over the ethics of face surveillance and AI over the last year and a half, and they actually told us that the Brad Smith post was accelerated and put out there because of the meeting with them,” says Narayan. “The posts contained a lot of great rhetoric, but what this demonstrates, I think, is that there is a significant gap between action and verbiage.”
That is to say, big tech companies like Microsoft and Google realized they had to acknowledge, at least publicly, the need for tough regulation of facial recognition technology. It seemed to be a starting point, says Narayan. But behind closed doors, Microsoft’s real vision emerged, he claims. After the blog posts and the engagement with ACLU Washington and its coalition, Narayan says that Microsoft lobbied Washington State’s legislature for a far more permissive bill.
“The bill legitimized virtually all public and private face surveillance. At the same time, they put a lot of resources into killing a face surveillance moratorium bill that we and others brought forward.”
Under the bill it supported, Microsoft would maintain control over the data, as well as over how the technology is rolled out. As ACLU Washington saw it, the regulations had no teeth.
“The only thing that the bill disallowed—because it’s actually easier to talk about that—is that you couldn’t follow people over a period of time using face surveillance without a warrant,” says Narayan. “You could train it on a protest, you could use it to look out into an intersection just to see what it can see, you could set one up next to a mosque, but as long as you weren’t following a particular person over time, you were fine.”
Despite the considerable lobbying power Microsoft enlisted to advance its interests in face recognition and the control of user data, the company’s data privacy bill was defeated.
AnyVision’s other big tech investors
But Microsoft isn’t AnyVision’s only big tech partner. In 2018, chipmaker Qualcomm announced a Series A investment in AnyVision—funding that AnyVision says will further its efforts to “expand into other industries and develop new AI applications that transform how the world connects, computes and communicates.”
The German engineering and technology company Bosch also invested in AnyVision in 2018. Its 9 percent stake in the company includes a sales and technology partnership. Nvidia, which designs graphics processing units for the gaming and professional markets, as well as system-on-chip units for mobile devices, counts AnyVision as an official partner on its Metropolis intelligent video analytics platform. This techno-jargon may at first seem innocuous, but Nvidia Metropolis is being marketed as the AI-enabled “lifeblood” of future smart cities. Nvidia, which selected AnyVision as the facial recognition engine for its AI smart-city platform, declined to comment on AnyVision’s deployments in the West Bank and on its ties to Hong Kong, Macau, and Russia.
Unlike Microsoft, which has an AI ethics committee and has commented publicly on facial recognition, Bosch, Qualcomm, and Nvidia have said little about their positions on face recognition and AI ethics. Alissa Cleland, the spokesperson for Bosch in North America, told Fast Company that “person- and object-recognition software enhances the security of individuals” and is used to prevent criminal offenses. “Bosch in its values feels a strong responsibility to ensure that its products are used properly,” said Cleland, who declined to comment on AnyVision’s installations in the West Bank and its ties to Hong Kong, Macau, and Russia.
Qualcomm, which invested in Hong Kong-based face recognition company SenseTime in 2017, did not respond to requests for comment on this story. Nvidia is also a partner with SenseTime for its Metropolis smart city platform.
Affect analysis—tech that reads emotions
AnyVision’s desire to sell multiple face recognition products makes sense from a business standpoint. A company that builds facial recognition technology will likely want to get it into multiple markets, and with many big tech firms and other facial recognition companies racing for market share, AnyVision’s diversification is no great surprise. That said, AnyVision’s wide range of facial surveillance products is troubling to privacy activists.
“With AnyVision, in particular, I do think the parameters of the specific things they are building are particular causes of concern,” says Narayan. He points to AnyVision’s affect analysis technology, which can read emotions to determine whether someone is a threat, as deeply concerning. Narayan sees affect analysis as building on long-discredited pseudoscience claiming that the features of a person’s face, or other characteristics like skull size, can reveal that person’s intelligence or dangerousness. Studies have also shown that affect analysis in face recognition systems is biased: It is less accurate with people of color and other groups.
AnyVision’s Insights product can give supermarkets and other retail stores insights like identifying frequent shoppers or performing “gaze estimation,” which determines what people are looking at. In the future, products like Insights could spread to many more retail stores, leaving customers with no choice but to be tracked by this surveillance technology.
“It’s going to be more pervasive, and AnyVision is trying to accelerate that trend,” says Narayan. “All of these different aspects of what AnyVision technology is designed to do are significant concerns, and then layer over that who AnyVision is selling to—not only the Israel government but Hong Kong and Russia as well. If Microsoft were serious about technology with ethics, then perhaps these kinds of transactions would receive more scrutiny.”
The machines are watching
On August 6, AnyVision published a post on the company blog titled “Ethical and Responsible AI at Anyvision,” in which it laid out its position on privacy, ethics, algorithmic bias, and regulation. The company claims it doesn’t collect or share user data and does not retain face images. Instead, the company writes, the data it captures is rendered as “mathematical vectors that act as secure cryptography, preventing identity hacking even if data is stolen.” AnyVision also notes that it will soon announce an AI Ethics Advisory Board composed of “external experts both inside and outside” the field.
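AnyVision has not published technical details of how these vectors are produced or protected, but the description matches the embedding approach widely used in face recognition: a neural network converts a face image into a numeric vector, and the system stores and compares vectors rather than pictures. The sketch below is a purely illustrative, hypothetical example of that general technique; the embedding dimension, similarity threshold, and random stand-in vectors are assumptions for demonstration, not AnyVision’s implementation.

```python
# Illustrative sketch of generic embedding-based face matching.
# Not AnyVision's code: the 512-dimensional vectors, the cosine-similarity
# metric, and the 0.6 threshold are all assumptions for demonstration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Declare a match when similarity clears a tuned threshold."""
    return cosine_similarity(probe, enrolled) >= threshold

# In a deployed system these vectors would come from a face-embedding model
# applied to camera frames; here random stand-ins illustrate the comparison.
rng = np.random.default_rng(0)
enrolled_vector = rng.normal(size=512)                            # stored at enrollment
probe_vector = enrolled_vector + rng.normal(scale=0.1, size=512)  # same face, slight noise

print(is_match(probe_vector, enrolled_vector))  # True: the noisy probe still matches
```

In systems built this way, the match threshold is a tunable design choice: setting it higher reduces false matches but increases missed ones, and vice versa.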
“Powerful technology like facial recognition has the ability to be misused when guidelines are not put in place,” the company states. “We have invested millions of dollars to ensure our technology not only complies with, but exceeds, local laws and regulations concerning privacy (such as GDPR) and are in favor of regulation where none currently exists. We have supported the proposed facial recognition legislation in the U.S. Senate and will continue to advocate for regulations in geographies where we do business.”
In a March 14 blog post, AnyVision announced its support for a U.S. Senate bill sponsored by Senators Roy Blunt and Brian Schatz that would prohibit commercial companies using facial recognition technology from collecting and sharing user data without consent from users.
Microsoft also enthusiastically endorsed the bill. “Facial recognition technology creates many new benefits for society and should continue to be developed,” said Smith. “Its use, however, needs to be regulated to protect against acts of bias and discrimination, preserve consumer privacy, and uphold our basic democratic freedoms. Senators Blunt and Schatz’s bill has started an important conversation in Congress about the responsible use of this technology. We’re encouraged by their efforts, applaud their leadership and look forward to working with them to develop balanced policy.”
It’s uncertain if the bill will pass. But if it does, AnyVision, its investors, and other face surveillance companies will be able to get their products to market without much fuss, as long as a third party tests for machine bias and commercial clients post a sign telling customers the machines are watching.
Amid a renewal of tech activism, it’s unclear how Microsoft employees feel about the investment. A representative for the Tech Workers Coalition says that the group hasn’t heard from any Microsoft employees about AnyVision. But Narayan of ACLU Washington says that he has heard from Microsoft employees who are opposed to AnyVision’s face recognition installations in the West Bank.
“I think the people who are working on the inside in all of these large tech companies, they didn’t become tech workers to help support repressive regimes or usher in the surveillance states,” says Narayan. “That’s why one major component for tech accountability is the tech worker themselves. . . . I expect you are going to see a response to face surveillance, but I don’t know if it will be on AnyVision in particular.”
A spokesperson for the Israel Defense Forces did not return a request for comment.