Google Builds AI To Redistribute Wealth
We do not need artificial intelligence to demonstrate that there are sustainable ways to live. But how would an AI fare if given that task? An answer is now emerging, as Google has built an AI to redistribute wealth.
It is no surprise that in the United States, the great bulk of wealth is concentrated at the very top, producing rates of poverty and inequality that far exceed those of other ostensibly “wealthy” countries. The existing political structure guarantees that this upward extraction of wealth will continue, but AI researchers have started experimenting with an interesting question: Is machine learning better suited than humans to design a society that allocates resources more fairly?
The answer appears to be yes, at least from the perspective of the study’s participants, according to a new paper from Google’s DeepMind researchers published in Nature Human Behaviour.
In a series of experiments described in the paper, a deep neural network was tasked with allocating resources in the way humans preferred. Participants played an online economic game known as a “public goods game”: in each round, they decided how much of a monetary endowment to keep and how many coins to put into a pooled fund. The fund was then returned to the players under one of three redistribution schemes modelled on existing human economic systems, or under a fourth scheme developed entirely by the AI, known as the Human Centred Redistribution Mechanism (HCRM). Afterwards, participants voted for the system they preferred.
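The round structure described above can be sketched as follows. This is a minimal illustration, not the paper’s implementation: the 1.6× multiplier (public goods games typically grow the pooled fund before redistribution), the function name, and the example values are all assumptions.

```python
def play_round(endowments, contributions, multiplier=1.6):
    """One round of a simplified public goods game.

    Each player keeps (endowment - contribution); the pooled
    contributions grow by `multiplier` before being redistributed
    under whichever scheme is in play.
    """
    kept = [e - c for e, c in zip(endowments, contributions)]
    pool = multiplier * sum(contributions)
    return kept, pool

# Three players with unequal endowments; the poorest contributes everything.
kept, pool = play_round(endowments=[10, 10, 2], contributions=[6, 0, 2])
# kept == [4, 10, 0]; pool == 1.6 * 8 == 12.8
```

The unequal endowments matter: as the study found, how the pooled fund is handed back determines whether initial disadvantages are amplified or corrected.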
It turns out that most participants preferred the distribution strategy the AI generated. The AI’s system reallocated wealth in a way that explicitly accounted for the advantages and disadvantages players had at the start of the game, and it ultimately won a majoritarian vote as the preferred method. This contrasted with the strict egalitarian system, which split the returns equally, and the libertarian system, which split them in proportion to how much each player contributed.
“Pursuing a broadly liberal egalitarian policy, [HCRM] sought to reduce pre-existing income disparities by compensating players in proportion to their contribution relative to endowment,” the paper’s authors wrote. “In other words, rather than simply maximizing efficiency, the mechanism was progressive: it promoted enfranchisement of those who began the game at a wealth disadvantage, at the expense of those with higher initial endowment.”
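The redistribution rules the study compared can be sketched as below. The function names and exact weighting are assumptions based only on the descriptions above: the liberal-egalitarian rule (the policy HCRM converged toward) pays out in proportion to contribution *relative to* endowment, rather than absolute contribution.

```python
def egalitarian(pool, contributions, endowments):
    """Strict egalitarian: split the fund equally, regardless of input."""
    n = len(contributions)
    return [pool / n] * n

def libertarian(pool, contributions, endowments):
    """Libertarian: return the fund in proportion to absolute contribution."""
    total = sum(contributions)
    if total == 0:
        return [0.0] * len(contributions)
    return [pool * c / total for c in contributions]

def liberal_egalitarian(pool, contributions, endowments):
    """Liberal egalitarian: pay out in proportion to contribution relative
    to endowment, so a poor player who stakes everything is rewarded like
    a rich player who does the same. Assumes positive endowments."""
    shares = [c / e for c, e in zip(contributions, endowments)]
    total = sum(shares)
    if total == 0:
        return [0.0] * len(shares)
    return [pool * s / total for s in shares]

pool = 12.8
contributions = [6, 0, 2]   # the poorest player (endowment 2) gave it all
endowments = [10, 10, 2]

libertarian(pool, contributions, endowments)          # ≈ [9.6, 0.0, 3.2]
liberal_egalitarian(pool, contributions, endowments)  # ≈ [4.8, 0.0, 8.0]
```

Note how the liberal-egalitarian payout rewards the poorest player most, despite the smallest absolute contribution, which is the progressive behaviour the quote describes.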
Figures from Google’s research paper illustrating an economics game where players contribute coins into a public fund
This approach is distinct from many AI initiatives, which concentrate on building a reliable “ground truth” model of reality to generate decisions—and in doing so, bake in the biases of their developers.
“In AI research, there is a growing realization that to build human-compatible systems, we need new research methods in which humans and agents interact, and an increased effort to learn values directly from humans to build value-aligned AI,” the researchers wrote. “Instead of imbuing our agents with purportedly human values a priori, and thus potentially biasing systems towards the preferences of AI researchers, we train them to maximize a democratic objective: to design policies that humans prefer and thus will vote to implement in a majoritarian election.”
Of course, we do not need artificial intelligence to demonstrate that there are sustainable ways to live. On a smaller scale, mutual aid and resource-sharing groups in communities have always existed. Scientific evidence also supports the idea that people are inherently inclined toward cooperation, sharing, and achieving wealth for all, in contrast to the dogma of hyper-competitive capitalism.
Although human evaluators favored the AI’s system, this does not necessarily mean it would serve human needs equally well at a broader scale. The researchers are careful to point out that the experiment is not a radical proposal for AI-based government; rather, it is a framework for further research into how AI might be applied to public policy.
“This is fundamental research asking questions about how an AI can be aligned with a whole group of humans and how to model and represent humans in simulations, explored in a toy domain,” Jan Balaguer, a DeepMind researcher who co-authored the paper, told Motherboard. “Many of the problems that humans face are not merely technological but require us to coordinate in society and in our economies for the greater good. For AI to be able to help, it needs to learn directly about human values.”
Source: GreatGameIndia