Artificial intelligence in China has received more than its fair share of criticism. Many consider the country's government oppressive, and many western observers argue that it denies its citizens basic human rights. That extends to technology too, with some describing China as a surveillance state: a place where privacy does not exist and where the government knows everything about its citizens.
This can have strange and dystopian consequences, particularly when technologies such as artificial intelligence are involved, given that AI already raises serious ethical concerns even in countries with strong safeguards. Take, for instance, China's social credit system.
Nosedive
Many have compared the social credit system to the Black Mirror episode Nosedive, likening the ubiquitous system's consequences to the spiral of chaos that ensues in the episode. Others, though, have cautioned against such comparisons, arguing that we should not trivialise the consequences of China's artificial intelligence systems, which some say have led to real human rights abuses.
Wired, for instance, describes the country's social credit system as "complicated – and in some ways, worse". As the article says, we in the UK are used to systems of ranking based on data; most of us will have a credit score that goes up or down depending on how we pay our bills. That score can help or hinder our ability to get credit and access goods and services, which is about as fundamental as it gets.
On a smaller level — but no less frustrating if you get caught on the wrong side of it — are buyer and seller rating systems on platforms such as eBay and Airbnb, or passenger and driver ratings on Uber.
These systems exist for a perfectly valid reason: we want to be able to trust the person we are buying from, and they want to be able to trust us. The questions arise when we consider that we could be denied access to something we consider quite fundamental, or at the very least highly convenient, simply because an algorithm says so.
Two sides of the coin
In China, Wired says, that system is taken and applied to every aspect of life. “Caught jaywalking, don’t pay a court bill, play your music too loud on the train — you could lose certain rights, such as booking a flight or train ticket,” Wired says.
“The idea itself is not a Chinese phenomenon,” Mareike Ohlberg, research associate at the Mercator Institute for China Studies, told Wired. “But if [the Chinese system] does come together as envisioned, it would still be something very unique,” she says. “It’s both unique and part of a global trend.”
All of these questions, fears and doubts stem from one place: the collection, aggregation and analysis of huge amounts of data. Without that data, any attempt at a credit system, from China's social credit plans to credit rating in the UK, would be fruitless.
The better side
But data can — of course — be used for better purposes. The BBC recently reported on a new initiative called Tree Hole Rescue, led by Huang Zhisheng, a senior artificial intelligence researcher at the Free University Amsterdam.
The project detects information in social media feeds suggesting that the users posting it may be feeling suicidal. The information is passed to volunteers and local authorities, who then try to intervene. The BBC cites the case of 21-year-old student Li Fan, who was found unconscious and rescued by local police after posting messages on Chinese platform Weibo, which is similar to Twitter.
In the last year and a half, the programme has been used by 600 volunteers across China, who say they have rescued more than 700 people.
The system, a Java-based program, monitors 'tree-holes' (hence the name) on Weibo. These are areas where people post messages sharing how they are feeling, with some saying that they are considering suicide. Based on the content of the messages, the algorithm ranks them by the likelihood that the poster will consider harming themselves.
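The real system is a Java program whose model has not been published, so the following is only a rough illustration of the ranking idea it describes: score each post for risk signals, then sort posts so the highest-risk ones surface first. The keywords and weights here are entirely invented for the sketch.

```python
# Hypothetical sketch of ranking posts by self-harm risk.
# The real Tree Hole Rescue system is a Java program whose model is not
# public; the keywords and weights below are invented for illustration.

RISK_KEYWORDS = {
    "suicide": 10,
    "goodbye": 7,
    "hopeless": 4,
    "alone": 2,
}

def risk_score(post: str) -> int:
    """Sum the weights of risk keywords found in a post."""
    text = post.lower()
    return sum(weight for kw, weight in RISK_KEYWORDS.items() if kw in text)

def rank_posts(posts: list[str]) -> list[tuple[str, int]]:
    """Return (post, score) pairs sorted from highest to lowest risk."""
    scored = [(p, risk_score(p)) for p in posts]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

posts = [
    "Lovely weather today",
    "I feel so alone and hopeless",
    "Goodbye everyone, I can't go on",
]
ranking = rank_posts(posts)
```

A production system would of course use a trained classifier rather than a fixed keyword list, but the overall shape, scoring posts and prioritising the most urgent for human volunteers, matches what the BBC describes.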
Sad news, good data
It is, of course, sad that anyone should be driven to posting such messages or even go as far as harming themselves. But the fact remains that open information and advanced artificial intelligence systems such as these, and their use in China, are saving people.
As Wired says, these issues are complicated; the contrast neatly encapsulates the way in which technology and data can be forces for both good and bad.