On Wednesday May 20th, The Red Button Club hosted an event and asked:
What is one thing someone working in foreign policy should know about tech?
Below is a brief note capturing some of what we heard.
Our next event is on June 10th, where we’ll be asking participants to design their own multilateral institution. For more details on the event and our wider work, press the red button below:
Three things that stood out
By design. Tech is not good and is not bad. But nor is it neutral. This is because it is designed by people for specific ends. And we need to consider that, more often than not, these are relatively powerful people (in a broad sense of the word) and the tech they design advances their power (again, broadly). So if we want tech to do good, we need to ensure that its design process is inclusive and that its purposes are ethical. This means we should focus on creating new tech for good rather than trying to retro-fit good into existing tech.
We see you, everywhere. We need to remember that the use of surveillance tech isn’t restricted to autocracies. In fact, it is just as prevalent in democracies. And all countries face the challenge of updating legislation and policy fast enough. However, this is where big differences start to show (democratic South Africa’s COVID-19 tracing app, for example, has been heavily scrutinised). Interestingly, the biggest battles over surveillance tech are being fought in illiberal democracies.
Global proliferation of AI surveillance - from Steven Feldstein.
Armed groups as startups. One of our speakers talked about their work tracking armed groups in Syria using social media. Using network analysis, they were able to show when and where armed groups formed and how they related to one another (a rough sketch of the approach is below). What was striking was how their use of social media resembled startup marketing strategies, with the Gulf States playing the role of venture capitalists, backing the armed groups with the most reach.
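For readers curious about the mechanics, here is a minimal, hypothetical sketch of the kind of network analysis described. It is not the speaker's actual method: it assumes you already have a list of social-media mentions between accounts (the account names and dates below are invented) and uses the open-source networkx library to flag when new clusters of accounts appear.

```python
# Hypothetical sketch: spotting when clusters of armed-group accounts emerge
# from social-media mention data. Account names and dates are invented.
import networkx as nx
from itertools import groupby

# Each record: (month, account_that_mentions, account_mentioned)
mentions = [
    ("2013-01", "group_a", "group_b"),
    ("2013-01", "group_b", "group_c"),
    ("2013-02", "group_c", "group_a"),
    ("2013-02", "group_d", "group_e"),
    ("2013-03", "group_e", "group_f"),
    ("2013-03", "group_d", "group_f"),
]

graph = nx.Graph()
for month, records in groupby(sorted(mentions), key=lambda r: r[0]):
    # Add this month's mentions to the cumulative graph.
    for _, source, target in records:
        graph.add_edge(source, target)
    # Connected components approximate distinct clusters of accounts;
    # a new component appearing is a rough signal of a new group forming.
    clusters = list(nx.connected_components(graph))
    print(f"{month}: {len(clusters)} cluster(s): {clusters}")
```

A real investigation would of course draw on richer signals (posting times, shared media, follower overlap) and proper community-detection methods, but the principle is the same: structure in who talks to whom reveals when groups coalesce and how they relate to one another.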
Two things to go deeper
Keyboard diplomats. We heard some pretty amazing stories of how citizen activists have been empowered by the ability to communicate quickly and at scale. We also heard about how diplomats have used the very same tools. But what does the communication revolution mean for citizen diplomacy? There are examples out there - such as Israeli and Iranian citizens setting up online groups - but we’d like to go deeper. What do we know about its impact? And should the UK government facilitate or partner with online citizen diplomats? Or does that risk doing more harm than good to soft power?
Not free but not (yet) governed. Communications tech, and social media in particular, has facilitated a global information explosion. While its scale and speed give it a life of its own, this information isn’t simply ‘out there’ and ‘free flowing’ like air in the atmosphere. Structures - including algorithms - shape its distribution. On the whole, these structures are designed and managed by companies - but only because governments have allowed them to be. For better or worse, states will seek greater control over the structures of global information flows. Will they go beyond blocking information entering their countries and make this a global governance issue?
One thing that got us thinking
We were given some clear advice on how rational and progressive governments should push back against the illiberal use of surveillance tech:
Lead at the global level, getting liberal democracies to band together in order to shape international agreements and wider norms.
Support civil and digital rights activists who are fighting the good fight on the ground.
Regulate tech companies, including through stronger human rights diligence and export controls.
This got us thinking about whether this three-pronged approach applies beyond surveillance to how we approach wider tech issues at the global level. Whatever form it takes, it does feel like the UK is far behind the curve on forming a clear global strategy for our digital world. Whether it is states gathering information on their citizens’ behaviour, citizens disseminating information on how their states behave, or the range of other tech issues we didn’t discuss, we’re going to have to get our heads around this.
For this session, we heard and learned from the brilliant:
Steven Feldstein from the Carnegie Endowment, with a focus on surveillance tech. Steve has written about AI and digital repression as part of a collection of essays on the road to digital unfreedom.
Cansu Bayrak from Bethnal Green Ventures, with a focus on tech and soft power.
Chris McNaboe from The Carter Center, with a focus on tech and conflict. Chris wrote an in-depth piece on conflict and atrocity prevention here.
Lisa Schirch from the Toda Institute, with a focus on social media and polarisation. Lisa’s recent work on social media in conflict zones is here.
Want more?
Ian Hogarth’s piece on AI nationalism - which discusses a coming AI arms race and how tech companies will become protected national assets - was also shared.
In The end of high tech war, David Kilcullen argues that there has been a technological levelling up.
We heard about the work of Forensic Architecture who undertake advanced spatial and media investigations into cases of human rights violations.