At Brain Bar Budapest, a large hall that was plastered in dark and leafy plants struggled to hold a sea of attendees. The crowd gathered to watch Steve Fuller, author of Humanity 2.0 and the Auguste Comte Chair in Social Epistemology at Warwick University, debate Zoltán Pogátsa, a Hungarian political economist. The topic at hand? Whether or not Universal Basic Income (UBI) will be the “social security net of the future.”
The discussion took place at the futurist-oriented festival, which celebrates science and society and seeks to push the boundaries of our understanding of what will be realistic, and what will be possible, in the world of tomorrow.
Pogátsa favored UBI, outlining both the existing problems and future challenges that it could solve. When discussing the growing concern of automation-linked job loss, he compared UBI to a more advanced form of welfare, one that might benefit citizens and elevate them into better circumstances—circumstances that, Pogátsa argues, are unattainable under the current welfare system. Fuller heartily disagreed.
In an interview with Futurism after the debate, Fuller stated that UBI is a system that once made sense, but that this no longer holds true. “Universal basic income, at the end of the day, is an idea that made sense in an older era, one where we imagined a strong state that would actually take control of the population and feel responsible for it, as a result of increasing productivity and so forth.”
Fuller elaborated, saying that UBI, in and of itself, is not flawed; it is simply not the correct solution to our modern difficulties. “It’s an old socialist welfare state idea. We do not live in an old socialist welfare state world anymore. And as a result, we need a renegotiation of the terms.”
Earlier in the debate, when Pogátsa described UBI as an advanced form of welfare, Fuller remarked that “a half-assed version isn’t the solution. That’s the point.” He elaborated by arguing that we need a system that takes automation into account and offers a permanent answer to the problems it poses, not just a quick and temporary fix.
However, Fuller didn’t just shut down the idea of UBI. While he asserted that UBI, as we have known and defined it, isn’t a correct fit for our current world, he stated that there are other, more realistic solutions—ones that truly address the issues that stem from advancing technology.
Fuller suggested that, as we continue to get farther into the data-driven technological age, one solution could be to force companies to pay for the information that they currently take from us for nothing. “We could hold Google and Facebook and all those big multinationals accountable; we could make sure that people, like those who are currently ‘voluntarily’ contributing their data to pump up companies’ profits, are given something that is adequate to support their livelihoods in exchange.”
So, instead of the government doling out standard salaries to all citizens, which is basically what UBI calls for, people would be financially compensated for the data that they give to companies by these very same companies. This could mean that social media giants and other websites that ask for your personal information would have to fairly compensate you for the information that they take from you.
It’s an interesting and novel idea, but it could make for a legislative and regulatory nightmare. For example, how much should a company have to pay for a person’s email address or phone number or clothing preferences? How do we ensure that people aren’t supplying, or companies aren’t collecting, inaccurate information? Will companies be forced to make a plethora of microtransactions to pay their millions of users and visitors for this information?
In short, this is an idea that could end up costing companies and governmental regulatory agencies an exorbitant amount of money.
Of course, if companies were charged a flat fee by the government based on (1) their number of users and visitors, (2) the kinds and amount of data they collect, and (3) what kind of information they sell, then perhaps we would have a solution. The money the companies pay could be placed in a public trust used to support the education system and other parts of the national infrastructure. This would offset some of the monetary burden currently placed on citizens while helping ensure the stability of the economy as AI and automation push us further into a data-driven era.
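To make the arithmetic of such a scheme concrete, here is a minimal sketch in Python. Every rate and company figure is invented purely for illustration; the article proposes no actual numbers, and the function and company names are hypothetical.

```python
# Hypothetical sketch of the flat-fee idea: a regulator charges each
# company based on (1) user count, (2) categories of data collected,
# and (3) categories of data sold, then pools the proceeds in a
# public trust. All rates and figures below are invented.

RATE_PER_USER = 0.05          # annual fee per user/visitor (USD), assumed
RATE_PER_COLLECTED = 10_000   # fee per category of data collected, assumed
RATE_PER_SOLD = 50_000        # higher fee per category of data resold, assumed

def annual_fee(users: int, collected: int, sold: int) -> float:
    """Flat fee owed by one company for one year under this toy model."""
    return (users * RATE_PER_USER
            + collected * RATE_PER_COLLECTED
            + sold * RATE_PER_SOLD)

# Invented example companies: (users, categories collected, categories sold)
companies = {
    "SocialCo": (2_000_000_000, 12, 4),
    "SearchCo": (1_500_000_000, 8, 2),
}

public_trust = sum(annual_fee(*figures) for figures in companies.values())
print(f"Total paid into the public trust: ${public_trust:,.0f}")
```

Even with these made-up rates, the sketch shows why a per-company flat fee is administratively simpler than millions of per-user microtransactions: the regulator needs only three audited figures per company rather than a payment channel to every individual.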
The idea is still in its infancy, and there are many kinks that still need to be ironed out; however, regardless of whether we move forward with this system or an alternative one, it makes sense for information to be properly valued in the age of big data, and for people to be properly compensated for the sale of their information.