Reema Moussa 00:01 From the Internet Law and Policy Foundry, this is the Tech Policy Grind podcast. Every two weeks, we discuss recent developments and exciting topics in the technology and internet law and policy space. I'm Reema Moussa, and I'm a member of the fourth cohort of Foundry Fellows. The Foundry is a collaborative organization for internet law and policy professionals who are passionate about disruptive innovation. In 2018, the Oversight Board was created to help Meta, formerly Facebook, answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up, and why. This week, Foundry Fellow Meri Baghdasaryan chatted with Julie Owono, a member of the Oversight Board, about the Board's first annual report, interesting decisions and learnings, and the Board's future.

Meri Baghdasaryan 01:05 Hi, Julie, thank you so much for joining Tech Policy Grind. How are you today?

Julie Owono 01:11 I'm doing very well. Thank you, Meri, for asking. Very happy to be here.

Meri Baghdasaryan 01:16 Thank you for taking the time to chat with us. Let's jump right into the topic of today's podcast. The focus of today's episode is the Oversight Board, which is a unique non-judicial mechanism composed of experts from around the globe and tasked with making online content moderation more human rights centric and compliant with international human rights obligations. I know a lot of people have heard of the Oversight Board, but I would love to start our discussion by really explaining what the Board is, who can apply to the Board, and how it operates. So we would love to hear your insights as a member of the Oversight Board.

Julie Owono 01:58 Sure. Well, as you rightly say, it's a non-judicial mechanism, a body of 20 experts, 23 now, from around the world, who are located in various places, including the UK, Indonesia, Taiwan, the United States, Australia, and many other places. And we all come together to make binding decisions on Facebook's and Instagram's content moderation processes and decisions. Very concretely, if your publication as a user has been taken down by the company, Facebook or Instagram, and you disagree with that decision because you think there was a wrong enforcement of the community standards, which are the policies that apply to content published by users on those two platforms, you can appeal to the Board and request a review of your case and your publication's takedown. The other way we can look at cases is for users who think that someone else's publication should not be available. Let's say you see disinformation that you did not post, someone else posted it, and you think it shouldn't be on the platform, but Facebook or Instagram hasn't taken action on it yet. You can also appeal to the Oversight Board for a review of that decision not to take down the alleged disinformation. And last but not least, the company itself can refer specific cases and specific content to us, and it can also send us policy advisory requests to help it make its community standards and content policies in general more respectful of human rights. So yes, this is what the Oversight Board does. We were launched in May 2020, but we really started taking cases in October of the same year.
So it's been, I would say, a little over a year and a half that we've been working on this, and we've just recently published our very first annual report, which covers a bit more than a full year. We have some quite interesting learnings from that, which I'm sure we'll discuss. But this is a general presentation of the Oversight Board.

Meri Baghdasaryan 04:49 Thank you so much for sharing that. I will definitely dig deeper into the first annual report, but before we move there: it's clear that the Oversight Board is, in a way, an appellate body. To get to the Oversight Board, the most commonly used path is for the user to appeal to Facebook or Instagram first and then go to the Oversight Board. And I think our audience will also be interested to hear what happens after the user applies, what the internal process at the Board is for taking a case or not, because as we know, the Board receives way more cases than it can handle. So I'm curious how the selection process works.

Julie Owono 05:33 Sure. So once you request to appeal to the Oversight Board, and of course after you have exhausted all the appeals mechanisms that exist within Facebook and Instagram, you should be provided with an appeals number, which puts your case in the queue. And we have a great case selection team that does the triage. There is an automatic filter, because some appeals simply do not fit the parameters of what you would expect from an appeal, but most of the work is done manually. That's why I wanted to mention the case selection team, which works tirelessly every day to triage these cases and surface those that we could take. The Board has set very important criteria that we look at when we decide to work on one specific case rather than another. The first is whether or not the case poses a significant question, or surfaces a significant challenge, with regard to how a Facebook or Instagram policy is being applied. The second is whether it poses a big challenge for freedom of expression online and human rights, and for society as a whole, I should say. And last but not least, we try to make sure that the decision we make, or the case we take, could potentially impact users beyond the individual concerned by that specific case. Those are our three criteria. On top of that, of course, we try to look at cases that are not only located in the United States or in Europe, but really look with a global lens at Latin America, Africa, and specifically Sub-Saharan Africa, or South Asia and Southeast Asia, where we know there are lots of challenges with regard to content moderation. Once that triage has been made, the cases are presented to a case selection committee, which is composed of four or five Board members, I probably have my math wrong here, but let's say between four and five, who basically discuss which cases will go forward. Those cases are first presented to the legal compliance team, because we have to make sure that a case does not involve significant privacy violations. We don't want to be in any way agents of privacy violations. Absolutely not.
And secondly, of course, the remit of our mandate is limited to cases that do not imply a legal obligation for Facebook. Typically, anything that relates to terrorist content will be covered by an existing law, either a national law or a regional law; for instance, the European Union has several codes of conduct on that and imposes obligations on platforms, so the Oversight Board can't look at that. The same goes for child safety content, CSAM; we usually won't look at those, because they are covered extensively by laws. What we really look at is what we usually call "lawful but awful" content: things that are not forbidden per se by any law, but that Meta's policies, the Facebook or Instagram community standards, do not allow, and how to strike the right balance between the right to expression and the safety that people want to feel when they're using those social media platforms. After that legal aspect has been resolved, the cases that can go forward are distributed to panels of five Board members, or a little more now that we're 23, so some groups will have a bit more. That panel is tasked with looking into the case in detail and coming up with a first decision, and then the rest of the group is solicited for views, opinions, and a vote on the decision.

Meri Baghdasaryan 10:58 Thank you so much for these insights. It's interesting to really understand the specific scope that the Oversight Board is tasked to handle, and that also produces interesting decisions. So without further ado, let's jump into the first annual report. This report, as you mentioned, covers the period from October 2020 to December 2021. I think it will be interesting to hear your main takeaways around some trends or statistics, and then later I would love to dig deeper into some interesting cases and also the issues with implementation. But let's start with the main takeaways.

Julie Owono 11:41 Yes. I think one of the most striking figures, one that helped me measure the amount of work that has been done, is this: some people feel frustrated that the Oversight Board, at the time the annual report was published, had made "only", and I'm using quotation marks because that's not me saying it, "only" 20 or 25 individual decisions on individual cases. But I think it is important to understand that, like I said, we try to focus on cases that could have as much impact as possible on users beyond the individual case we're working on. To do that, we use the recommendations we can make when deciding each case, recommendations that the company has to respond to afterwards. That means if we recommend XYZ, Facebook has to respond on whether or not it will take the recommendation into account and implement it. And I would say we've made more than 100 recommendations to the company, and out of those 100-plus, the company has accepted to implement or look into 86 of them, which is huge. It means potentially 86 policy changes at least, and the policy changes in turn entail other changes, including technical changes.
For instance, the company has accepted to let users know what policy they violated, what justified the takedown. Believe it or not, this was not systematic before. And for anyone who's familiar with any form of rule-of-law principles, of course, the right to know why you're being sanctioned is essential. So now the company has accepted to implement this, and that means design changes in the user experience: from now on, when your content is taken down, you will have an explanation of what policy was violated. We also insisted that the company let users know when a machine was involved in the decision making, whether your content had been taken down automatically or after human review. So all of these recommendations do imply technical and design implementations afterwards. Another very important recommendation we've made is to make sure that the community standards of the companies are available in as many languages as possible, and especially in languages that are widely spoken by users. The company has accepted to do that more consistently; it did translate them, but not all languages were translated at the same pace, and some were not translated at all. This is problematic, of course, for a company that purports to operate in a borderless manner, let's say it like that. Another recommendation that I can think of, which is extremely important for the Oversight Board, is developing a protocol for crisis situations. How do your community standards apply when there is an exceptional situation, whether it's, let's say, an insurrection at the Capitol, or a war in Ukraine, or in Ethiopia, or anywhere else there is a conflict? Do your rules apply the same way as they would normally? The company has also accepted to develop a crisis protocol, or at least crisis policies, to explain how its policies apply or do not apply in times of conflict and crisis. Hopefully that policy will be available soon; the company has committed to publish it at some point. So yes, these are some of the recommendations and main key takeaways. Another key takeaway is that we need more, and that's certainly something the Oversight Board has to work even more on: enlarging the pool of cases that we're receiving, geographically I mean. It is true that we have received many cases from what some people would call the Global South, what I increasingly call the global majority, because conceptually it speaks more to reality; there are more people in those regions, and they are not always in the southern hemisphere anyway. So we have received a lot of cases coming from those places of the world. But it's also true that many cases have come from North America, and the United States specifically, which is great, because it allows us to look at, like I said, issues that pose significant challenges with regard to policy and the public interest. But it's not reflective of the user base of the company. It is true that many users now are located in places of the world where connectivity is growing exponentially, and those places are located in what we now call the Global South.
So it's also important for us to be able to have more cases coming from those parts of the world, specifically Sub-Saharan Africa, which really accounts for a very small number of cases. I think the statistic is below 0.5%; I'm not even sure we reached 0.5% of cases coming from Sub-Saharan Africa. But we know there are significant challenges there, especially since the whistleblower Frances Haugen made revelations with regard to some of the decisions that are made in terms of content moderation investments. So it's very surprising to have so few cases coming from Sub-Saharan Africa. It means the Board will definitely continue to invest efforts in speaking to audiences and users in that part of the world, and not only there, everywhere around the world, but doubling down on places where we have fewer cases coming from.

Meri Baghdasaryan 18:48 And speaking of cases, I would love to dig deeper into some of the interesting ones from the pool that you mentioned. For instance, for me personally, the pro-Navalny protest decision was very interesting, because it actually overturned Meta's decision based on its human rights responsibilities, even though the removal was in line with Meta's rules. So I would love to hear about some of the cases that you found most interesting, based on this annual report.

Julie Owono 19:19 I think we've done a really great job on every case, so it's very difficult for me to say I have a favorite, but I would mention cases that particularly struck me with some of the systemic challenges that exist when it comes to content moderation, and especially content moderation at scale. I would definitely talk about the Abdullah Öcalan case, which involved a publication by a Facebook user who was commenting on the treatment, the solitary confinement, of Abdullah Öcalan, a leader of the PKK, a party that is banned in Turkey and that is also considered a dangerous organization under Facebook's rules. With dangerous organizations, you're not allowed to praise them; to be more specific, it's not talking about them that's prohibited, it's praising them, expressing any form of support for them. There is a caveat to that policy that had existed for a long time but that hadn't been used by the moderator who decided to have this content taken down. The carve-out was: dangerous organizations cannot be praised, but you can actually discuss the human rights of the leaders or members of the said organization. And in that case, talking about the fact that solitary confinement has been condemned internationally as a punishment is something that refers to the human rights of the leader of the PKK, so it should have been allowed on the platform. But that policy carve-out had been lost at some point. And that's something I also appreciate about this exercise we're doing, because, frankly, nothing equivalent has been done before; we're really doing a general experiment. I think Meta has been quite honest in responding to us, because when we receive a case, we receive the rationale from the company, why it took the content down, and we are also allowed to ask the company more questions. And so what we asked the company was, how come?
How come people cannot talk about the human rights of the leaders of dangerous organizations, for instance? And the company frankly admitted that something got lost in translation: the moderators did not have knowledge of this policy carve-out. So it speaks to the challenge of moderation at scale, and of a company of this size operating everywhere around the world. It's not easy. I want to acknowledge that it's very difficult to have comprehensive, consistent policies that apply everywhere around the world systematically in the same way. It's very difficult, but obviously that's the ideal. That doesn't mean we shouldn't do it. That's the ideal we should aim at, and we should strive for it as companies, as organizations, as anyone interested in making sure that online spaces remain borderless, because behind the consistent application of policy there's this idea that no matter where you are around the world, the rules apply the same to you. So for me, it was a very interesting case to put that into perspective. Another interesting and important case that I think we've worked on: following the killing of George Floyd and the demonstrations that followed in the United States, there was this awakening on racial equality and racial justice issues around the world, including at a company like Facebook. Facebook at the time modified its hate speech community standard to prevent the use of blackface. And we received a case coming from the Netherlands, where, as you might be aware, there is a tradition around Christmas called Zwarte Piet, Black Pete. Zwarte Piet is an aide of, I think, Saint Nicholas during the Christmas season, and he's usually represented by people rubbing their faces with black, doing blackface, to speak very frankly. The practice has been a subject of debate, and there have also been some changes. It was very interesting to look at this case, because it showed how important platforms like Facebook actually are to our society, whether we like it or not. Much of the public debate, much of our civic lives, almost happens on online connected platforms. So yes, I think this case was interesting in order to understand that, and also to understand how a company applies new policies as it adopts them. There are many, many cases, and all of them are very interesting, but to respond to your question, these two come to mind very quickly.

Meri Baghdasaryan 25:37 Those are definitely very interesting cases, with very interesting analysis in the decisions. But I cannot not use this opportunity to ask you about "cross-check" and the related drama, let's call it, around it.

Julie Owono 25:54 Yes, of course, I did not mention the most famous case that we've worked on, which is, of course, former President Trump's access to his Facebook and Instagram accounts. During that case there were lots and lots of learnings, including that we learned about the existence of this system, the "cross-check" mechanism or policy, though a brief summary would not do it justice.
Well, "cross-check" basically allows certain accounts, I mean, subject certain accounts to additional content review, different layers of content review, and also ultimately allows for certain passes to policy variations, to summarize, really briefly. What we found out about this mechanism, and when I say we, I really mean society as a whole, because before the Trump decision, unless you were a pundit really interested in, you know, I don't know, policy carve outs at Facebook, I don't think there are many, because first of all, the community standards have been published only, I mean, in a very comprehensive manner, since 2018. So I mean, unless you were really a pundit interested in those issues related to content policy, they were there would be very little chance that would you would be aware of this program. So we asked, basically, we asked, I think, 40 plus questions to Facebook in the Trump case, and Facebook responded to many of them and some they left left them out. And some of the questions one of the questions they responded to with we asked them whether or not the the account of the President of the United States was subjected to different policy enforcement, and they responded that they have this "cross-check" mechanism but that concerns just a few, just a few accounts, you know, not no, no, I mean, just bigger, just the bigger accounts. But what we found out a few months later, through revelations founded by The Wall Street Journal is that actually, what we thought was just a few hundreds of accounts, or even less were 1000s more than 1000s of accounts, 1000s and 1000s of accounts, who had been subjected to the "cross-check" mechanism, and not only policy, you know, public figures, like presidents, but also some influencers. I mean, those we call now influencers and, yeah, some public commentators, I mean, it was pretty much anyone who had a bit of a platform of expression on these two social media platforms, which are Facebook and Instagram. So of course, the figures were more impressive than what we thought. And that implies very serious consequences. How do you consistently apply exceptions to an exceptional regime? I mean, this is a really very, very interesting legal, legal questioning thing. And what's interesting is that so from these revelations by The Wall Street Journal, people were aware that it concerned way more people and had way more implications for society. And then we required I mean, the company agreed to request us to publish a policy advisory opinion because like I said, we do work on cases but we also work on policy advisory requests that that come from the company, and so they they sent us "cross-check" as a public policy advisory opinion request. We are currently working on on it. It's been more than 10 months when, when, in our we recently had our first in person, plenary summit with the Oversight Board and when we had a great working session on crosscheck. And when we were told by the administration that it's been more than 10 months that we'll be working on this, I was like, wow, this was this was massive, it was really, really a lot of work because the system has become so complex, and you'll find out about it when it's published but it was difficult to work on it. And I think, again, it speaks to the seriousness of the company, and commitment of the company to yes, do this, do this, this experiment to this fullest extent, and with as much as much honesty as possible. 
And I mean, nothing forced them to send it to us. Nothing in our bylaws says we can mandate Facebook to send us XYZ issue; no, we can't do that. It really depends on the company's willingness to do so, and so far they have been pretty forthcoming, I should say, in many cases. So yes, that was about the "cross-check" program, but you'll find out more about it soon when we publish the results of our policy advisory opinion.

Meri Baghdasaryan 31:19 Definitely looking forward to reading the advisory opinion. I think it will be very informative, because as you said, nobody knew about this; even the Oversight Board didn't know the full story. So thank you for the important work on this, especially. And you mentioned that Meta's, Facebook's and Instagram's management, employees, and so on are more forthcoming. A related question, I think, to the administration and operations of the Oversight Board is the implementation of the opinions and decisions. I know that in late 2021, according to the annual report, the Oversight Board established an implementation working group, and I think there is also another committee that is following up on the actual implementation of all the decisions and opinions. So I would love to hear your perspective on this. Do you think this is going well? Was it better than expected, or worse? And what would you love to improve moving forward in this regard?

Julie Owono 32:34 Yes, I think the implementation working group, which used to be a working group and has now become a committee after we modified our bylaws to provide for its creation, the story of this committee speaks to the agility of the Oversight Board, the fact that we don't shy away from reorganizing ourselves if we think that it's in the interest of the mission we have been tasked with. So the implementation committee was set up as a working group first in 2021 and now has a robust team that includes data scientists who can help us track and measure the success of our decisions and recommendations, through data analysis of how much content is potentially concerned by the decisions that we're making. We also have very regular interactions with the equivalent of the implementation team at Facebook, which has been tasked with exactly the same thing: tracking how Facebook and Instagram have been implementing the recommendations and decisions of the Oversight Board. That frequent interaction and dialogue gives us, I would say, a rather accurate picture, as accurate as possible, because of course we're talking about billions and billions of pieces of content. But we are really working hard on defining, in an even more accurate way, what success means in terms of implementation, beyond just the data, in terms of how the platform itself is transformed, and whether or not we can see that, because ultimately our aim is to make sure that users are treated way more fairly by the company. What "fairly" means goes back to what we were discussing earlier: letting people know why they're being sanctioned in the first place, responding to their requests for appeal, and many other things that come into consideration for social media platforms like Facebook and Instagram. And I would say so far, it's been very exciting work.
I'm personally very interested in implementation, so I've participated in many of the conversations and I've read a lot about the challenges. One of the challenges, of course, is again the scale and the differences in languages. Let me give an example: we made a decision about a Nazi quote. It was one of our first decisions. It was a case where a user had posted, two or three years earlier, a status quoting an alleged quote from Joseph Goebbels. It wasn't really him who said it, at least not exactly that way, but basically the quote was saying: appeal to the sentiments of the masses to be able to manipulate them. I'm paraphrasing, because that's not exactly the wording of the quote, but that was its intention. And the aim of the user was originally to criticize the Trump administration. So the person had posted this two years earlier and then was prompted to reshare it as a memory; for those who are Facebook users, you are sometimes prompted by Facebook to reshare your previous publications. So he was prompted to reshare the publication, except this time it was not okay anymore to have that quote available, so it was taken down, and the person appealed. In the statement of the user, because users are able to make a statement to the Oversight Board, they said, "I was criticizing the Trump administration, I was criticizing a politician, I wasn't supporting them", because the post had been taken down for support of the Nazis. Nazis are of course not allowed on Facebook, and praising them absolutely not. I'm mentioning this because one of the first things we tried to do was to see whether or not we could see the impact of the decision. We told the company that the user's own statement should be used to give a clear indication of whether there is support, obviously, when the user himself comments and says, "this is intended for any government that tries to manipulate the masses to secure their power, including the Trump administration"; that's the user's own assessment, of course, not the Board talking. But how do you implement this at scale, and make sure that other publications using the same quote with the same aim are not taken down unjustly? That has proved to be challenging, not only because of language, but also because not all publications are in text; some of them are images with text on top. There's a lot of automation involved, and that's very helpful, absolutely, but nevertheless the contextual information is not always easy to grasp. So that's one of the challenges faced by Facebook and Instagram that the Board is increasingly being made familiar with, because we're seeing it in the implementation working group as we ourselves try to measure the success of our recommendations and decisions. I would say one of the most exciting parts of this initiative is that it's not just an empty dialogue; there is follow-up, we ask questions, we want results, and when we don't have those results, we say so publicly.

Meri Baghdasaryan 39:14 Definitely. The Oversight Board is a great experiment, and we all know how much criticism it received in the beginning.
But I think with more work and, you know, seeing more results, maybe we'll see this experiment succeed in a way that nobody really expected. So I think my last questions will be about the future of the Oversight Board and your personal perspective on it. What is next for the Oversight Board moving forward? What can we expect: any new developments, any expansion of the mandate, or anything else that you think may be important to share?

Julie Owono 39:56 This is such an important and timely question. In terms of what the future looks like, specifically on the mandate, the Oversight Board has shown that we could be where we were not expected initially. One of the first decisions we made was related to algorithms, to algorithmic content moderation. And it is true that if you read our bylaws, you won't see AI anywhere, that's for sure, and you won't see algorithms either. But nevertheless, much of moderation now happens through algorithms, and our work is looking at moderation, so we had to look at content that had been taken down because of algorithms. That's one thing. We are keen on being helpful where society needs us, where users need us, and also where the company needs us but probably didn't know it would need us when the project was initiated, almost five years ago now. That entails, of course, looking at new products by the company. I'm thinking specifically, of course, of the metaverse. We're paying attention, reading a lot, asking questions when we can, and seeing whether our model would fit that new space, uncharted territory to say the least. But yes, one idea that I would like listeners to leave with is that the Oversight Board is really an object of its time: we are agile, and we do not shy away from responding to challenges when needed. In terms of forms of engagement, like I said, we will absolutely double down on efforts in places of the world that we have fewer appeals from, but where nevertheless there are very serious content moderation issues that probably haven't even been surfaced yet. I would be very interested in us being able to surface issues before they become very big problems for the platform, so hopefully we can do that too. In terms of workload, there is one procedural aspect that we've been exploring: bundling more cases together. When there's a big issue in society, a big debate, we usually receive appeals that are related to more or less the same issue. One example is the takeover of the Taliban in Afghanistan: we received a lot of appeals, because the Taliban are a dangerous organization on Facebook, so you're not allowed to talk about them, and we received cases related to the ability to report on what the Taliban were doing. All this to say, we're trying to explore how to bundle cases when we receive a huge influx of cases related to the same type of issue and the same geographic space, for instance. And we're also thinking about an easier process for obvious enforcement errors.
I'm thinking specifically about automation that takes down images of breasts simply because they are breasts, when we know there are carve-outs to the nudity policy that prevents showing female breasts: carve-outs related to breastfeeding, to breast cancer awareness, et cetera. So how can we treat those obvious enforcement errors, which are also acknowledged by the company itself, in a more expedited manner, so that it doesn't require us to spend 90 days on something that's so obvious? So yes, these are some of the priorities for the months and years to come, and it's a very, very exciting time for the Oversight Board.

Meri Baghdasaryan 44:35 Thank you so much, Julie. My very last question, to close us off: if you had a magic wand, what would you add or change in the mandate or operations of the Oversight Board? You personally, I mean.

Julie Owono 44:53 If I had a magic wand, I would probably add extra hours to the 24-hour day. No, no, no. Aside from that, the organization that we have is extremely beautiful in the sense that we're all colleagues around the world. But if I had a magic wand, and it's very difficult, quasi impossible, I would maybe, first of all, add extra hours to the day, and probably make the time differences less of a challenge. I have colleagues, including myself, who wake up very early some of the time, and other colleagues who stay up very late a lot for the Oversight Board. That's not a complaint, of course; I'm very happy to start my day very early. And we have an incredible administration that tries to find what they call golden hours, when it's most convenient for everyone in the group, who are located all over the place around the world. But it's a huge and challenging task for them. So yes, thankfully, and I hope that will continue to be possible, we are now allowed to have more in-person interactions, which makes the challenges of the time differences less burdensome. But yes, maybe the magic wand would be more opportunities to meet up with my colleagues, interact in person, and get things done all together in the same timezone. So that would be my magic wand, my magic wish, if I had the magic.

Meri Baghdasaryan 46:58 Yeah, that sounds wonderful. Julie, thank you so much for taking the time to chat with me. I hope our audience learned more about the Oversight Board. I personally will be following closely and looking forward to the policy advisory opinions and the new decisions, and best of luck with all the new plans for the Oversight Board. I'll be really curious to have this conversation next year when you issue the second annual report.

Julie Owono 47:27 Thank you very much, Meri, for having me.

Reema Moussa 47:31 Thanks for listening to this episode of the Tech Policy Grind Podcast. Be sure to check out the Foundry on LinkedIn and Twitter. And if you enjoyed this episode, leave us a review and give us a five-star rating; it really helps out the show. If you're interested in supporting the show, reach out to us at foundrypodcast@ILPfoundry.us. You can find our email in the show notes as well. Tech Policy Grind Podcast comes out every other Thursday. See you next time.
Tech Policy Grind Podcast was created by the Fellows at the Internet Law and Policy Foundry. It's produced and edited by me, Reema Moussa, with support from the incredible Foundry Fellows of class four. Special thanks to Meri Baghdasaryan and Allyson McReynolds for their support in bringing this episode to air. Transcribed by https://otter.ai