Software engineer David Auerbach: ‘Big tech is in denial about not being in control’


David Auerbach is a writer and software engineer who has worked for Google and Microsoft. He also teaches the history of computation at the New Centre for Research & Practice in Seattle, US. His new book is Meganets: How Digital Forms Beyond Our Control Commandeer Our Daily Lives and Inner Realities. He argues that widespread concern about artificial intelligence is legitimate, but the problem is already all around us, with huge tech networks that no one – neither governments nor their owners – is able to control.

Your book is concerned with the threat to social and economic stability represented by what you call meganets. How do you define a meganet?
The definition I use for a meganet is a persistent, evolving and opaque data network that heavily influences how people see the world. It is always on and it consists both of a large server tech component as well as millions upon millions of users who are constantly active, using those services and influencing them. All these users play a small part in the collective authorship of how these algorithms run. The effect is contributing to a severe fracturing of society in which we are literally becoming unable to understand one another, as we split into like-minded self-policing groups that enforce unanimity and uniformity, and prevent any larger-scale societal consensus.

You identify social media platforms, cryptocurrency and the so-called metaverse as aspects of this distorting combination of advanced tech and mass participation. You were a software engineer at Google and Microsoft – when did you first become concerned about this phenomenon?
Well, the problem hit me sometime after the advent of social media. That’s when you saw these feedback loops, where people were acting on algorithms, which then acted on people, and that’s when public discourse seemed to be changing for the worse. But it took me quite a while to get my head around what on earth was going on, to realise that we have less control over these systems than we think, and even the people who administer them have less control than we think.

One point you emphasise is that, as much as we may wish to attribute the lack of oversight at places such as Facebook to greed or indifference, these are not the real issues.
To some extent, the tech companies do merit the blame for what they disseminate, but if you look at this as a conspiracy to make money by making our lives miserable, you’re not going to get anywhere. There’s a great Facebook memo that was leaked in which they said that the narrative they least want is to be perceived as not in control of their systems. Being evil is actually a better look for them than not being in control. But, unfortunately, the not-in-control narrative is a lot more accurate.

So do people working within these meganets recognise the problems you diagnose?
I think it depends on their position. The rank and file feel this, or certainly the ones I’ve talked to do. If you’re Mark Zuckerberg, you’re facing a tremendous amount of cognitive dissonance because you feel no matter what you do, you get flak for it. But at the same time, you don’t want to admit to a level of impotence. My suspicion is that executives are at varying degrees of denial about the sheer scale of the problem.

Is it a universal problem or really an issue that affects liberal free market societies, which can’t impose draconian control measures?
It’s universal, although the engagement differs. One of the ironies I found was that China is actually less aggressive about deploying government-driven meganets than, say, India. Because, in an authoritarian society, the danger to the party of a fallible government-run meganet is greater than in a society in which you can chalk it up to the free market or to third-party vendors.

If we think that these meganets are distorting our view of the world, making it more volatile and unstable, why can’t we just pull the plug?
These systems are too complex and diffuse to stop. It would be like shutting down the stock market, except the degree of complexity is far greater than the stock market. What we should be looking at is mitigating the dysfunctional effects of meganets and exerting some indirect influence on them. What we can’t do is control them at the fine-grained level that people are asking for.

What did you make of the recent story of Microsoft’s chatbot Sydney that went rogue and revealed its potential dark side to a New York Times reporter?
What people don’t understand is that so much of what Sydney was saying is our collective unconsciousness, our collective data being filtered through its algorithms, that it is much more a reflection of us, and it could not exist without the human component. People see it as a single detached agent because that is what it appears to be. But we can’t understand what it does or why unless we recognise that it’s a product of that relationship between programmed algorithms and the mass of our data they draw on.

You write: “The social history of computing is a story of how we have turned ourselves, our lives, our actions, our purchases, our words into data, online and offline.” Do you think we’re aware of that process and in some way we want it?
The fact is we see ourselves more and more as collections of labels. “I am this, this and this” – what people attack as identity politics. I think that’s a misleading diagnosis, because it’s really about classification and taxonomisation. In a way, we speak a more quantitative language now and the qualitative richness falls through the cracks and gets eliminated. So I think there’s an awareness of it but not necessarily an awareness of where it’s coming from. Do we want it? On some level, yes, because meganets have this potential to create an incredible sense of belonging, one in which you’re around people constantly who you feel at home with.

You discuss the GameStop case in the book, in which Wall Street firms lost billions of dollars after a subreddit group dramatically drove up the stock price of the video game company. What does that show about the power of meganets?
The meganet enables decentralised forms of association that were never possible before. And that does devolve power to a greater extent than has ever happened before, but it does it in a disorganised way, so that you’re not dealing with rationality or a consciousness; you’re dealing with a hive mind. It’s not the wisdom of crowds, it’s the chaos of crowds.

One solution you suggest to slow these fracturing social processes is to, as it were, fight chaos with chaos. Could you explain that?
Meganets like to track people demographically and pair like with like. That tends to create homogeneity and increasing doctrinairism. If you were to mess with that, simply to avoid congealing, you would at least slow things. There are various ways you could do that. I believe TikTok already has a way of injecting heterogeneous content into its algorithms, because I think the idea was that it was showing too many pro-anorexia videos in a row or something. But if you focus on one type of content, you’re going to be playing whack-a-mole. So the issue is, can you do it in such a way that you’re getting heterogeneous content more generally across the board?

Meganets by David Auerbach is published by Public Affairs on 30 March (£25). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply


