3.2 Brief overview of AI’s effect on human rights

3.2.1 The current impact of AI on human rights

To date, there have been instances where AI has been found to infringe on human rights.264 In most instances, the infringement of human rights by sophisticated AI algorithms is not deliberate but rather an unintended consequence.265 AI systems operate on large amounts of data that have been fed into them as training data, and they make their predictions from this data.266 This training data can at times be insufficient,267 and this deficiency then results in the advanced machines making biased predictions. Furthermore, because AI performs a function called machine learning, it may learn existing societal prejudices from the data and thereby exacerbate them through its application.268 This is an outcome that must be avoided.
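
To illustrate the mechanism described above, the following is a minimal sketch in Python, which does not form part of any of the cited sources and uses entirely hypothetical, synthetic data. It shows how a prediction model trained on historically biased decisions can reproduce that bias: two applicants with identical skill receive different predicted outcomes purely because of a protected attribute.

# Minimal, purely illustrative sketch: a model trained on historically biased
# decisions reproduces that bias. All data and the "hiring" scenario are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical training data: a protected attribute (0 or 1) and a skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Historical hiring decisions that were biased against group 1,
# even though skill is the only legitimate criterion.
hired = (skill + rng.normal(0, 0.5, size=n) - 0.8 * group) > 0

# A model trained on these decisions learns the bias as if it were a genuine pattern.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# Two applicants with identical skill receive different predicted chances,
# differing only in the protected attribute.
same_skill = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(same_skill)[:, 1])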

260 Risse 2019 HRQ 1.

261 Agrawal, Gans and Goldfarb Prediction Machines: The Simple Economics of Artificial Intelligence 1- 5.

262 Agrawal, Gans and Goldfarb Prediction Machines: The Simple Economics of Artificial Intelligence 43-51.

263 Risse 2019 HRQ 2.

264 Latonero Governing Artificial Intelligence: Upholding Human Rights and Dignity 2.

265 See Alston 2018 https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=23881&LangID=E, where the author points out that AI systems do not in themselves pose an inherent threat to human rights; rather, it is their misuse that leads to AI systems infringing on human rights.

266 Campolo et al AI Now 2017 Report 14.

267 Google Patents Machine learning with incomplete data sets 12.

268 Agrawal, Gans and Goldfarb Prediction Machines: The Simple Economics of Artificial Intelligence 195-206.

One of the most notable of these instances, in which AI adversely affected human rights, occurred when a Facebook-owned AI algorithm was found by an independent investigative committee commissioned by the UN to have contributed to the incitement of hate speech and violence in Myanmar.269 This committee conducted a thorough investigation into the conflict in Myanmar, which had led to the death of many people.270 It found that the hate speech and incitement to violence that were promoted on Facebook, and subsequently left unchecked by the operators and administrators of the platform, were in part responsible for the horrific outcome.271

In this particular instance, AI systems had a decidedly negative impact on human rights by contributing, through the promotion of hate-speech campaigns, to the death of many innocent people in Myanmar.272 The Facebook-owned and -operated AI algorithm infringed on the right to freedom of speech and belief, to which the people of Myanmar are entitled, by perpetuating false news that negatively influenced their ability to think freely for themselves and come to their own conclusions.273

Another instance where AI systems have been found to infringe on human rights emerged from a study conducted by Sweeney.274 Sweeney's investigation into the discriminatory effects of AI systems began, surprisingly, with a Google search of her own name: Latanya Sweeney.275 The search yielded a few results, among them results suggesting that she had been arrested, when in fact she has never been arrested in her life.276 This led her to investigate whether Google searches for names that are predominantly given to black people at birth are more likely (than

269 Darusman 2018 https://www.ohchr.org/EN/HRBodies/HRC/Pages/NewsDetail.aspx?NewsID=22798&LangID=E.

270 Darusman 2018 https://www.ohchr.org/EN/HRBodies/HRC/Pages/NewsDetail.aspx?NewsID=22798&LangID=E.

271 Darusman 2018 https://www.ohchr.org/EN/HRBodies/HRC/Pages/NewsDetail.aspx?NewsID=22798&LangID=E.

272 Darusman 2018 https://www.ohchr.org/EN/HRBodies/HRC/Pages/NewsDetail.aspx?NewsID=22798&LangID=E.

273 Risse 2019 HRQ 11.

274 Sweeney 2013 https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf.

275 Sweeney 2013 https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf.

276 Sweeney 2013 https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf.

names that are predominantly given to white people at birth) to produce search results in the form of Google advertisements that suggest that a specific person has been arrested, even though that person never has.277

In this instance, Google’s AdSense algorithm infringed on article 12 of the UDHR.278 Alongside the protection of people’s privacy, article 12 also protects people against attacks upon their honour and reputation.279 It further provides that the law of the country in which people find themselves must protect their honour and reputation.280 Therefore, the findings in Sweeney’s article suggest that Google’s algorithm infringed not only on people’s right to have their honour and reputation respected in terms of article 12 of the UDHR, but also on their right to have that honour and reputation protected under the law of the United States.281

Her research found that Google’s AdSense associated search results for names that are predominantly given to black people at birth with advertisements suggesting arrest.282 The fact that a Google search for a person’s name may return a result (in the form of a Google advertisement) suggesting that the person has been arrested, when in fact that is not the case, is not only defamatory but also very likely to result in that person being treated unfairly for no real reason, for example when a prospective employer searches for their name.283 Since machine learning enables AI systems to learn what might happen from existing data sets, as in the case of Google, these algorithms can pick up existing societal prejudices and unknowingly begin perpetuating them if left unchecked.284
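
The kind of statistical question Sweeney's study raises can be illustrated with a brief, purely hypothetical sketch in Python: given counts of how often arrest-related advertisements appeared for each group of names, a chi-squared test indicates whether the association could plausibly be due to chance. The counts below are invented placeholders and are not Sweeney's actual figures.

# Hypothetical sketch (not Sweeney's code or data): test whether arrest-related
# ads are associated with one group of first names more than another.
from scipy.stats import chi2_contingency

# Rows: searches that returned an arrest-related ad vs. only neutral ads.
# Columns: black-identifying names vs. white-identifying names (invented counts).
contingency = [
    [60, 20],
    [140, 180],
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value would indicate that the association between name group and
# arrest-related ads is unlikely to be due to chance alone.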

The fact that Google’s AdSense algorithm associated the names that are predominantly given to black people with search results suggesting that they have

277 Sweeney 2013 https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf.

278 A 12 of the Universal Declaration of Human Rights (1948).

279 A 12 of the Universal Declaration of Human Rights (1948).

280 A 12 of the Universal Declaration of Human Rights (1948).

281 Sweeney 2013 https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf.

282 Sweeney 2013 https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf.

283 Sweeney 2013 https://arxiv.org/ftp/arxiv/papers/1301/1301.6822.pdf.

284 Agrawal, Gans and Goldfarb Prediction Machines: The Simple Economics of Artificial Intelligence 195-206.

been arrested is not an indictment of Google as a company.285 Rather, it should serve as a point of serious reflection on what happens when powerful AI systems are left unchecked and to their own devices. It should also prompt companies and organisations that make use of sophisticated AI algorithms to investigate their systems in order to ensure that these negative outcomes are avoided.286

In another study involving Google and Google’s AdSense algorithm, Datta, Tschantz and Datta conducted an experiment in which they set up a computer program that could simulate user behaviour on websites and present itself as either male or female.287 The goal of the experiment was to determine whether Google’s advertising delivery system discriminates on the basis of sex.288 The researchers found that, on average, the Google advertising delivery system did indeed discriminate against the simulated female users when it came to employment-related advertising.289 The algorithm showed the simulated male users advertisements for better employment opportunities than it showed the simulated female users, simply because the former appeared to be male.290 This amounts to unfair discrimination based on gender and is also a violation of the right to equality.
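
The logic of such an experiment can be illustrated with a short, hypothetical Python sketch (this is not the researchers' actual tool or data): two groups of simulated profiles are compared on how many employment-related advertisements they received, and a simple permutation test asks whether the observed difference is larger than chance would explain. All figures below are invented for illustration.

# Hypothetical sketch of the analysis step of a simulated-profile ad experiment.
# The ad counts per profile are invented and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Invented counts of employment-related ads shown to each simulated profile.
male_profiles = np.array([18, 21, 17, 25, 19, 22, 20, 24])
female_profiles = np.array([9, 12, 8, 14, 11, 10, 13, 9])

observed_diff = male_profiles.mean() - female_profiles.mean()

# Permutation test: if the declared gender were irrelevant, shuffling the labels
# should produce a difference at least this large reasonably often.
pooled = np.concatenate([male_profiles, female_profiles])
n_male = len(male_profiles)
count = 0
n_permutations = 10_000
for _ in range(n_permutations):
    rng.shuffle(pooled)
    diff = pooled[:n_male].mean() - pooled[n_male:].mean()
    if diff >= observed_diff:
        count += 1

p_value = count / n_permutations
print(f"observed difference = {observed_diff:.1f} ads, p = {p_value:.4f}")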

It is most likely that these algorithms do not set out with the intention of being discriminatory. It could, therefore, be argued that the resulting discrimination in the above-mentioned instances is not the result of acts of malice on the part of the organisations concerned, but rather the result of omissions on their part. This means that organisations that make use of sophisticated AI systems need to have evaluation and monitoring systems in place that exist exclusively to protect against the abuse of human rights by AI.291

285 Agrawal, Gans and Goldfarb Prediction Machines: The Simple Economics of Artificial Intelligence 195-206.

286 Datta, Tschantz and Datta PoPET 92.

287 Datta, Tschantz and Datta PoPET 93.

288 Datta, Tschantz and Datta PoPET 93.

289 Datta, Tschantz and Datta PoPET 93.

290 Datta, Tschantz and Datta PoPET 93.

291 Campolo et al AI Now 2017 Report 1.
