Podcast

How Data Distracts Us From Human Rights

This week’s guest is lawyer, author, and Senior Research Associate at the Institute for Ethics in AI at Oxford University, Elizabeth M. Renieris.

In this episode, we explore Elizabeth's brand new book, Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse. It takes us on a tour through the ways in which our obsession with data has failed and distracted us, and how we need to return to the pillars of basic human rights law that are already well established if we are to regulate technology appropriately. In essence, there is little need to separate the digital from the physical. Elizabeth argues that to think otherwise allows government regulations to become outdated and corporations to get away with bad behavior.

Find out more about Elizabeth and her book at hackylawyer.com and twitter.com/hackylawyer

Host: Steven Parton - LinkedIn / Twitter

Music by: Amine el Filali

Transcription

Elizabeth Renieris [00:00:01] Rather than introduce new laws and regulations, right, every time we have these technological advancements or developments, we need to look to the frameworks that have withstood the test of time, which are typically found in constitutions, in human rights law and civil rights law, and in that way are sort of agnostic to what happens in terms of technological development and have a much better shot at being sustainable and future proof.

Steven Parton [00:00:39] Hello, everyone. My name is Steven Parton and you are listening to the Feedback Loop by Singularity. Before we jump into today's episode, I am excited to share a bit of news. First, I'll be heading to South by Southwest in Austin on March 14th for an exclusive Singularity event at The Contemporary, a stunning modern art gallery in the heart of downtown Austin. This will include a full day of connections, discussions, and inspiration, with coffee and snacks throughout the day and an open bar celebration at night. So if you're heading to South by and you're interested in joining me, having some discussions, and meeting our community of experts and changemakers, you can go to the Singularity Basecamp South by Southwest page, which I will link in the episode description, so you can sign up for this free invite-only event. And just to note, it is not a marketing ploy when I say that space is genuinely limited, so if you are serious about joining, you probably want to sign up as soon as you can and get one of those reserved spots. And in other news, we have an exciting opportunity for those of you with a track record of leadership who are focused on positive impact. Specifically, we're excited to announce that for 2023, we're giving away a full-ride scholarship to each one of our five renowned executive programs, where you can get all kinds of hands-on training and experience with world-leading experts. You can find the link to that also in the episode description, and once more, time is of the essence here because the application deadline is on March 15th. And with those notes out of the way, we can get on to this week's guest, who is a lawyer, author, and senior research associate at the Institute for Ethics in AI at Oxford University, Elizabeth Renieris. In this episode, we will be exploring Elizabeth's brand new book, Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse. It takes us on a tour through the ways in which our obsession with data and data policy has failed and distracted us, and how, in order to update our modern regulations of technology, we need to instead return to the pillars of basic human rights law that are already well established. In essence, there is very little need to separate the digital from the physical. Elizabeth argues that to think about this otherwise allows government regulations to become outdated and corporations to get away with lots of bad behavior as they fall through the legal cracks. It's a strong argument and one that I enjoyed partaking in. And so without further ado, please welcome to the Feedback Loop, Elizabeth Renieris. To get us started, I would just love if you could give us a bit of background about yourself, especially as it relates to your new book, Beyond Data.

Elizabeth Renieris [00:03:44] Yeah, sure. So I can give you maybe some personal background that might be interesting. So as you noted, my book is called Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse. It is out on February 7th. And for those of you who have the book or will have the book, I begin with my preface, where I actually talk about a personal anecdote from college where a classmate of mine by the name of Mark Zuckerberg undertook this experiment of scraping photographs of female students from residential house websites and basically creating a Facemash of them on a website to have people rate these women against each other based on attractiveness. And this is an experience that really stayed with me throughout my adult life, particularly when I went on to law school and, after law school, to become a privacy and data protection lawyer. And it's an experience that sort of shaped the way that I thought about privacy and data protection over the last 15 years of working in this field. Because one of the things that I felt then, and I still feel strongly now, is that the nature of the personal violation and sort of the experience of the harm seemed really detached from the way that we think about and talk about, and I hate this term, online harms or digital harms in the tech policy space. And so when I had the idea to write this book, the first thing that sort of came to me was the memory of that experience and the sort of indignity and injustice that I felt, and the way that I feel that these types of experiences will just become more and more prevalent for people the more technology is deeply embedded into our day-to-day lives. And how, as a lawyer, I felt so frustrated that the paradigms we use to think about these harms don't reflect that reality. Mm hmm. And in particular, how an almost singular obsession with data has become this kind of rallying point for law and policy and for industry as well. And how, in my view, that has sort of led us away from really addressing what's at stake in what today is very much a cyber-physical reality, and increasingly so with new technologies.

Steven Parton [00:06:34] Yeah. I want to push on something here that I thought of when you first mentioned your background, and that is, when I think about how people portray themselves online now, basically vlogging every second of their life, the kind of culture of attention that we have, do you think there is as much concern with this generation about their privacy as maybe you had, with your value system, when you were going through school? Because it feels like it's probably switched, the zeitgeist has maybe changed, and I'm wondering what you think about that.

Elizabeth Renieris [00:07:10] Yeah. It's a great question. So I think people still share the same feelings and the same sort of intuitive sense that there's something very perverse about this behavior. I think what's changed is that it's becoming increasingly hard to opt out of it. So actually, a colleague, Tung-Hui Hu, has published a book in the MIT Press called Digital Lethargy, which is really talking about this lethargic, sort of late-capitalist experience of having to participate in society by offering ourselves up through these digital means. And so we see this very much where all of our digital infrastructure has effectively become gamified, right, to incentivize us to further share and further engage and give our attention. This is true across professional networks like LinkedIn that have really created those incentives in the professional context. Obviously, this is pervasive in more traditional social media, where it's really gamified friendships and social connections in that way. It's true now in finance, where we see this very prominently with a lot of fintechs that are trying to create these incentive structures and these behavioral patterns. It's certainly going to be increasingly true when we talk about what I call in my book metaverse technologies, like augmented and virtual reality. So I don't know that the emotional or cognitive or felt experience is necessarily different. I think we have been put in a position where the cost of participating in society means sort of learning to live with that discomfort, or even putting it aside for fear of social isolation or personal cost to reputation or professional capital. You know, it's this sort of idea of network effects, which is the more technical concept of all my friends are on Facebook, therefore how could I not be, but in a much more social and political sense of that, which is, again, this notion of it's the cost of participating in society. So it would be interesting to do some empirical research on this, right, in terms of how people actually feel, and is there this exhaustion, this digital lethargy, that in fact is there but is not maybe at the surface, at the forefront of our minds, as a result of the toll of living in this way.

Steven Parton [00:10:01] You also mentioned in your intro there that our obsession with data is basically a very unhealthy distraction and it's causing us to basically become blind to some of the key things that are actually at stake here, very important things. Can you kind of talk a little bit about what that is? 

Elizabeth Renieris [00:10:20] Sure. So our obsession with data, as I sort of outline in my book, is a relatively recent phenomenon, in the sense that before we talked about data, before we talked about data rights or data protection or data privacy, we had constitutional and human rights around privacy. Those constitutional and human rights were broad rights that envisioned an almost physical sphere or boundedness around an individual, and were concerned about interferences with, you know, your home, your family, your private life, and in many legal traditions your correspondence, which feels relevant to this discussion. But they weren't focused on this notion of data, which wouldn't emerge until we had the sort of PC revolution, as people call it, when we had computers enter the home. And what you'll see in my book, which I outline, is that it's interesting how the early data protection laws that came online very much reflect the sort of adoption of personal computing in different places. And so as populations had more personal computers, so too did they have more data protection laws emerging around the same time. And so what happened there was, as data sort of became a more tangible thing in terms of digital data, right, those binaries, zeros and ones, these computers that entered the premises, that penetrated this physical space, this physical boundedness, there was this instinctive reaction. Again, there was sort of an inherent knowing that there's some violation occurring here of this original notion of privacy. But the feeling at the time was that it wasn't quite captured in these constitutional and human rights. And so there was this desire to articulate a more specific right around the protection of this data that was now stored in these digital databases. And that's where I think we took what I call the ICT turn, right, the information communications technology turn, where privacy sort of went off track. And I was actually meant to give a talk a couple of months ago with the title Was Data Protection a Mistake?, and the title was intentionally provocative. And unfortunately, COVID got in the way. But the idea was that we were so confused by this new technology that kind of came along and entered our space that it was difficult to reflect on the powerful constitutional and human rights that we had, including around privacy. And there was a sort of, you know, very urgent reaction to have new laws. And we see this happening now; we definitely see it happening around AI, machine learning, and other technologies, without really assessing what the existing frameworks could do. And so by introducing these, we get this cluster of data protection laws, both national and eventually the emergence of international legal frameworks and principles, that really center data and look at the privacy, the security, the confidentiality of data, that speak to technical means and methods of securing and protecting data in ways that were important, but in many ways sort of moved us away from the original focus of human rights and constitutional laws, including privacy, and sort of diverted the conversation in a way that in fact became very technocratic. And this sort of became a boon to private industry, because there are myriad ways to say that one is protecting, securing, and keeping data confidential by technical means. And that's a very hard proposition to challenge, for many reasons.
There are concerns around IP and trade secrets and accountability and transparency and a whole array of things that make that challenging, that sort of allowed industry to comply with laws and regulations without really addressing the spirit of those laws and regulations and the motivations for them, which are really these fundamental harms to people that infringe upon basic constitutional and human rights. And so that sort of in turn led us down this road of proliferating laws and regulations around data, and a conversation that also began to fixate on the notion of data. And in my view, where we're getting to now, and we're actually at a very interesting point in time, is that we're starting to see the limitations of that framing; we're really starting to meet the limitations of so-called data protection or data privacy laws and regulations. This is particularly true, again, around things like AI and machine learning. And so I think it's a perfect time to start to question that framing, and particularly the technocratic nature of it, and to really think about what we need as people. Because for all the rhetoric around human-centered technology and human-centric laws and the term human-centered in general, which is just everywhere, when you really look at it, you know, when you really survey what's out there in terms of legal frameworks, the vast majority are data-centric, right? Yeah, they talk about data as the end-all, be-all and really focus on its protection and its security and its confidentiality. So that's the main premise of my book.

Steven Parton [00:16:11] Yeah. Well, I mean, one insight that I found particularly poignant while reading your book is that since we view data, as you said, kind of like a substance, rather than something that is intrinsically and intimately linked to a person, we've been much more okay with exploiting it and focusing on it, because we can push aside the human rights issues. But what are some of the issues that are being caused by this approach? What are some of the human rights harms, I guess, that you would point to that are consequential from this focus on data rather than on its connection to the individual?

Elizabeth Renieris [00:16:56] Yeah. So I think that you're absolutely right that data has this air of sort of objectivity, of neutrality, of otherness, of this thing over there in a database, and therefore is very detached from the lived human experience, and appears to be lower risk than it is in actuality, when it is a much more intrinsic and integrated property of our lives. I think that the biggest concern that I have, the biggest sort of human rights related risk that I've seen that this has created, is that we basically zeroed in on two human rights out of dozens. That is, more than 30 traditional human rights, and derivative human rights would include even more. And those two are privacy and free expression. Now, that may have been appropriate at a time when we did have these clearly delineated separations between online and offline, between digital and non-digital. It may have been appropriate in the time of these neat binaries. It's not going to work. It doesn't work now and it's not going to work going forward, because privacy, as we've already discussed, has basically been gutted to become this very narrow and technocratic notion of the privacy, security, and confidentiality of data. So it's lost its sort of intrinsic value and power of being this broad constitutional human right that's concerned with the integrity of the person and the sort of boundedness around their selves and their experiences. So the dilution of privacy into the current form that it takes in most laws and regulations means that if that is one of two rights that form the pillar of our approach to human rights in respect of what I actually call the post-digital realm, right, because again, we don't have these neat separations anymore, then that's a weak starting point. And then, of course, the other right that's commonly prominent in these conversations around technology governance, including around privacy, is this tension between privacy and free expression. And free expression also takes center stage, often because we have a lot of conversations around things like content moderation, and because much of the focus on technology governance is actually on the most prominent players that we see and feel, the social media companies, even though in reality this post-digital, cyber-physical reality is in everything and everywhere, and there is a much more complicated infrastructure at play here. But I think the narrowing down of a body of human rights to those two is the result of this sort of singular focus on data, where, of course, if we're talking about data, then it would appear that we have a tension between sort of securing it or allowing a sort of free expression of information. But that's not adequate, because what we've come to see very palpably, particularly in the last 5 to 10 years, I would argue, is that data collection and uses of data have resulted in harms that have to do with much more than privacy or free expression, that have to do with discrimination and harassment and inequitable treatment. They have to do with exclusion, exclusion being a really big concern here, and, you know, a sort of growing digital divide. They have to do with an important body of human rights that's almost never part of the conversation, which are economic, social, and cultural rights.
Now, if you think about this other body of human rights law, they speak to these concerns, and they speak to even more emotionally charged things like the right to benefit from scientific progress, right, which can include something like access to a vaccine. They include the right to participate in culture, which had implications during lockdowns in the way that people relate. And, you know, even beyond that, even in the more sort of traditional realm of human rights, the body of law in which we get things like privacy and free expression, which are the civil and political rights that people are more familiar with, there are still concerns around freedom of association and assembly. We saw this very much with protests; we saw this with Black Lives Matter in the States during the pandemic. So the impact of things like these COVID tracing apps and exposure notification tools, and again later on around vaccination passports, needs to be looked at and assessed in the context of this very broad array of human rights, rather than just in the narrow context of something like privacy, which in and of itself has already been gutted into a very weak notion of its original form.

Steven Parton [00:22:17] Well, and you mentioned there the kind of tension, I guess, between the private sector and the government. And it makes me think specifically, you know, that a lot of these issues are the domain of the government, or should be the domain of the government, and legislation, as you discussed in the book, seems to be failing, I think, lagging behind pretty dramatically. So could you kind of point us towards any legislation that has taken place, something maybe like the GDPR, and how successful it's been, or some of the new things that might be happening? And in the American government, I believe you mentioned in the book that there has recently been an attempt there, but it might not be a very strong one even at that. So I guess, just broadly speaking, how is the legislation being handled?

Elizabeth Renieris [00:23:07] Yeah, sure. So I'll start with what I do allude to in the book. The nature of academic publishing is that it's a little bit slower than people would probably appreciate, so I had to address some of this after the book went into publication. A lot happened between the sort of final manuscript and when the book was actually released. And part of what's happened is that in Europe, we have an entirely new legislative package on the table, put forward to regulate things like what the European regulators are calling digital platforms or digital services. So we have things like the Digital Services Act, the Digital Markets Act, a Data Governance Act, a proposed Artificial Intelligence or AI Act, this whole suite of laws designed to sort of update the European legal frameworks for the way that we interact with technologies today. And I think what's concerning about the approach there is, well, there are a couple of things that are concerning, and it's premature to really talk about this, because some of the proposals are now law but haven't really been tested yet, they're coming online now, and some, like the AI Act, are still in draft form. But if I had to sort of project where they're going, we see a lot of the same mistakes being made as were made with respect to data protection. So one of the first concerns is that even though these laws always tend to purport to be technology neutral, they're never actually technology neutral. They're in fact very specific about the types of technologies at play and the types of technologies that they seek to regulate. And so the concern there is that they're never sustainable or future proof. And this is where there's this perception always that the law sort of lags behind the technology, right, and how will the laws ever keep up, and by the time we have the AI Act, we need the quantum regulation, right? That's sort of the logic. I think that the workaround, the answer to that, is to leverage existing law and legal frameworks, including a much fuller body of human rights law. So rather than introduce new laws and regulations, right, every time we have these technological advancements or developments, we need to look to our arsenal of frameworks that have withstood the test of time, which are typically found in constitutions, in human rights law and civil rights law, that, again, are not starting from the perspective of data or AI or algorithms or platforms. They're starting from the perspective of people or citizens, and in that way are sort of agnostic to what happens in terms of technological development and have a much better shot at being sustainable and future proof. I think that this impulse to sort of constantly introduce new laws and regulations is really a distraction. And there are different parties at the table that have different incentives to introduce those. So obviously, from the standpoint of governments, there's the desire to appear to be doing something about something that's suddenly of deep concern to the public, right, and perhaps unfamiliar. On the part of corporations, it's often in their interest to ask for new laws and regulations rather than have to comply with existing legal frameworks. So it's this very sort of defer-and-delay tactic.
On the part of individuals, again, there can be a sort of moral panic around things, and that can sort of encourage these cycles on the part of the public and private sector. So in terms of what Europe is doing, I think it's laudable. I think a lot of people do look to Europe as sort of a moral leader on tech governance, and I think that the GDPR, the General Data Protection Regulation, has contributed to that reputation, in part because, as you know, in the United States we don't have federal privacy legislation, for example. But I think there are a lot of shortcomings, for example, in what the GDPR has done, in particular it being interpreted and sort of implemented in a way that, again, has been very technocratic, very focused on data, very easy to circumvent, creating a lot of the appearance of compliance, but in many ways not making us feel any safer and not anticipating, for example, things like generative AI technologies, right, and not protecting anyone from these sort of new use cases, which these laws never could have, because they were so narrowly focused on protecting data. And a concern there as well is that, you know, the GDPR, for example, has been replicated the world over. It's sort of been mimicked by national and regional frameworks in different parts of the world. And, you know, that's concerning to me, because what it means is that rather than be a sort of global floor, it can be a type of global ceiling, where that's the best we can do. And in reality, I think the best we could do is really, again, to look to a broader human rights based approach. So that's what's on the table in Europe. I think, in terms of what I alluded to in the book that's happening in the States, a lot of people are investing a lot of hope and faith in this federal comprehensive privacy legislation. It has gone through many iterations and has many interesting features. I don't think it's going to provide us with the kind of outcomes and safety and security that we're looking for. I do think it will help companies, in the sense that rather than having to comply with a patchwork of state laws and regulations, right, they can sort of have one standard federally. But I think that it's still largely focused on data. It's still largely built on the past sort of 50 years of data-centric laws and regulations that stem all the way back to the emergence of the Fair Information Practice Principles and subsequent frameworks, the types of laws that eventually provided the foundations for the Data Protection Directive in Europe in 1995 and now the GDPR. So it's important. I think, again, there's important moral value to the United States having federal privacy legislation. I think it's important in respect of multilateral and bilateral conversations around technology governance, and I think it's kind of table stakes, but I don't think that it's going to address the issues that are going to wreak the most havoc on people now and going forward.

Steven Parton [00:30:18] Well, in a perfect world, and obviously, you know, if you have an answer for this, great, but I know it's a big ask: what would you propose, or what are some of the ideal solutions, you know, whether it is second order and focusing on data, or whether it's first order and really focused on human rights laws instead? What are some of the things that you would ideally want to see happen to kind of constrain some of these invasions, I guess, into privacy and into free speech, and maybe even into autonomy, for that matter?

Elizabeth Renieris [00:30:54] So I want to see us stop pretending that tech companies are different. I want to see us stop pretending that we don't have laws that address these things. So, for example, I want to do away with this notion of online harms. It's not a thing. If somebody is targeted, harassed, or abused, quote unquote, online or by digital means, they are targeted, abused, or harassed, full stop. If, you know, you have, again with air quotes, a digital product or service that is defective or creates some kind of danger, bodily danger or otherwise, to someone, I want to see our existing liability frameworks apply. If you have an AI tool that results in this sort of discriminatory impact on hiring, I want to see our laws around that apply. For example, that's an area where we see places like New York City introducing new laws and regulations rather than saying the means are irrelevant, the fact that you're using an automated system or an AI tool is irrelevant, because the outcome and the impact of the harm is the same. So effectively, this exceptional treatment of technology companies has to stop. It's increasingly irrelevant, right, what the means are. And I think that this kind of constant scrambling around, okay, well, what does it look like now, now what do we do, rather than who are we, what are our rights, these are well-established pillars of law, frankly. And that's true across the board. Rather than scrambling every time there's a new shiny toy, and rather than allowing different actors, but especially the private sector, to use that as a call for, you know, we need new laws and regulations, it's, you know, hang on, no. And again, who's to say what a tech company is anymore, right? I mean, arguably everyone is a tech company, everyone's a digital company. It's, in my mind, a completely useless term. But the fact that we still hold out this notional sector as something different and exceptional is a huge part of the problem here. I don't care if you have an app that does banking and, you know, you hold yourself out as a fintech. If you're providing banking services, right, you need to be held to those standards. If you're an app that provides therapy or medical services, you need to be held to the same standards. So really, I think the biggest problem we have is this exceptional, magical thinking around tech companies. And it's deep and pervasive in the culture. It goes all the way back to the cyber libertarians of John Perry Barlow's day and the Declaration of the Independence of Cyberspace. And I'm very concerned because you have a similar rhetoric now around this idea of the metaverse. But as long as we don't, frankly, call B.S. on that view, then the laws will be ineffective. It's not to say that we don't ever need new laws or regulations, and it's not to say that we don't need to reinterpret existing rights. So, for example, you know, there's a big debate now around neurotechnologies: do we need new rights around the integrity of the brain and cognitive privacy and all this, for example? You know, I have a friend who's a scholar in the UK who is writing about freedom of thought, which again is a core, fundamental human right that is completely under-leveraged, which has largely been taken for granted because it's largely been impossible to interfere with someone's freedom of thought. However, that may not be the case going forward.
And so in my view, it makes sense to again, leverage what we have before we think about layering on and introducing new laws and regulations that work against each other and ultimately dilute their efficacy, which leaves us more vulnerable. 

Steven Parton [00:35:12] Do you think that the current laws, the non-digital laws, are largely capable of handling the digital space, you know, since you say it is a second-order kind of manifestation? How well do the laws as they have existed in the past really cover the, I guess, nuances that do arise from the technical space? Do you think that it's pretty well handled, as long as we're open to just some simple tweaks and adjustments here and there?

Elizabeth Renieris [00:35:43] Yeah, it's a good question, and it's a question I get all the time. I think that there are different sort of strata of law and legal tools. You've got sort of the highest level or, depending on your perspective, the foundational bodies of law, right? So human rights and constitutions. There are these sort of meta layers and strata of where we derive our laws from, right, and those are typically codifying some kind of normative theories, ethics, things like that. Those are sort of pillars. And I think that those, in my view, are still very powerful, if, again, as I explained before, extremely under-leveraged as a starting point for this conversation. So, for example, in the last ten years, we've seen a proliferation of principles around ethics, and these have come from everyone, from government to private sector to multi-stakeholder groups to civil society and academia. I think there are now 150 or maybe 200 sets of principles that are all slightly different permutations of each other and are all kind of dancing around things that are actually reflected in human rights law but are not articulated in that way, or are not the starting point of the conversation. Increasingly, I'm seeing a call to just use existing rights as the foundation of governance, which I think is great. I think the ten years bought private industry a lot of time and have definitely allowed the technology to develop to a point where there are more concerns around our ability to govern it. But that's just one layer. So you have these sort of foundational pillars of law. Then you get more granular, right, and you introduce laws and regulations, and then you get even more granular, because in order to implement those laws and regulations, you have to have very specific rules and very specific policies. In that granularity, I think, is where a lot of the changes and updates and sort of iterations that respond to specific developments in technology and technological tools are most relevant. So I feel like that's where, you know, I think you asked me about how you account for both changes and the emergence of new technologies, and I think where you account for them is at that level of granularity. But if you don't root that granularity in these less movable pillars, right, in these more foundational concepts, particularly when it comes to rights, then you're sort of unmoored, right? And you can't anchor governance in anything, because there's just continual distraction and noise. And I think this is sort of what's happened with data: the signal-to-noise ratio is really off here. And if we were actually starting from the perspective of the human, as we purport to do but in reality don't, I think it would be a lot easier to hold this tension between these foundational concepts, these foundational rights, and the more specific articulation at the granular level in response to new and emerging technologies.

Steven Parton [00:39:05] Yeah, well, as we come to a close here, I want to have you maybe address a few different types of people who might be listening to this: maybe the average person who's just, you know, not really involved in the big decision making, maybe an entrepreneur who is starting a business, or an executive that has a bit of sway in their company. For these individuals, what can they do to kind of push towards this world that you would like to see happen? You know, is there something that they can do to kind of move in a more human-centric direction that has that human rights focus rather than the data focus?

Elizabeth Renieris [00:39:45] Yeah. So for individuals, I think individuals have really been given the short end of the stick in the last 50 years. I think way too much of the burden has been put on people, on individuals, to sort of safeguard themselves and their rights. And while I think that it's great for people to be empowered, it is absurd to think that any individual has any modicum of control over what happens to their data in this day and age; it's just a complete fantasy. You know, I am a privacy professional. I've been working in this space. I research this stuff. I struggle, right, to make sure I have all the correct, optimal settings at all times on every device, in every context. And even then, there's no evidence or proof that those settings are actually doing what they're intended to do, and they can sort of counteract each other. And so I think to people, I would say, this is not on you. Just like there are concerns around environmental things like pollution and drinking water, and, you know, just like we have building codes and safety standards, these are collective concerns that need to be addressed in a way that takes the burden off of the individual, that doesn't ask the individual to be fully responsible for this. Because, again, that's an absurdity. And so to the individual, I guess, it's to be aware that this is not their issue, and to hold their policymakers accountable for not treating it as such. I think to, you know, the C-suite and executives at large companies, the heretofore sort of technocratic approach to securing and protecting data has probably bought you some time and has worked, but I don't think it's going to work going forward. I do think that there is an increasing sense, as we spoke about at the start of this conversation, that the way things work now is sort of not okay, and the harms are becoming more visceral. And I think that's going to create sort of a cultural shift towards embracing this broader human rights based approach. I think AI is actually doing us a huge service in surfacing some of those issues and having people recognize that privacy isn't enough, certainly privacy in the sense of protecting data isn't enough, and that there are much bigger concerns in general about the way that we relate to technology. And so while that's a double-edged sword, because we're going to feel the pain, at the same time I think it will enlighten people about what's at stake. And sorry, who are the others?

Steven Parton [00:42:39] Entrepreneurs. 

Elizabeth Renieris [00:42:40] Well, entrepreneurs. I mean, it's interesting, because, you know, I think about where we really need innovation. So traditionally, especially around tech governance, laws have been very sensitive to not quelling innovation, right? They want to be pro-innovation. This is continually a thing; it's still a thing even in the new legislative package in the EU. And I think that we're going to have to push back against that historical default, because I think what we're seeing through AI and machine learning and other technologies is that there are some things that we're not going to be able to correct after the fact, as market failures or as harms, because they're sort of irreparable. And it's actually a lot more important to have what we call ex ante mechanisms in the law, things that address harms and potential market failures before they occur. So this might mean, you know, anything from more scrutiny of mergers and acquisitions to actual safety standards around the way that certain things are built, or even outright bans on certain pieces of technology. But I think what it means for entrepreneurs is that the real innovation here will be not in any particular technology, but in offering up a better business model. Right? So what we've seen that's been very nefarious in the last 20 to 30 years has been what Shoshana Zuboff calls surveillance capitalism, and this entire economy sort of built on the harvesting and exploitation of personal data. It feels like everyone's pretty tired of that. At the same time, we all tend to feel pretty powerless. But it does seem that, you know, if and when that startup comes along that finds a better business model, it will be a breakthrough, and there will be huge appetite for it. And then they won't be the only ones, right? And so hopefully we then see this sort of systemic change occurring. And, you know, these shifts have occurred in the past. So I'm not optimistic every day, but it just sort of takes that shift in perspective to hopefully give us a glimmer that another way is possible.

Singularity

Singularity's internal thought leadership team develops resources, articles, and insights about our core areas of expertise, programs, and global community.
