Plenary
Tuesday, 16th October 2018
At 2 p.m.:
OSAMA I AL‑DOSARY: Hello everyone. Welcome back. We now start our third session for today, the third slot, and our speaker is Constanze.
CONSTANZE DIETRICH: Hi. I am Constanze, and this is me pretty much exactly one‑and‑a‑half years ago. Right now you may be thinking, looking at this: well, she is back talking about the very same topic. And yes, in fact I am talking about my master thesis. However, the last time I came with very preliminary findings, a lot of questions, basically on my knees begging for help from the RIPE community. This time, I come with answers, and fun with stats. That said, for all of those who haven't seen that talk, I will give a very brief introduction to the topic and why and how we looked into this, and after that I will talk about the survey, what we found and, more importantly, what we should do about the issue.
What issue? Security misconfigurations. Last time, I drew on rather large‑scale examples; let me just say, if you want to spend the next years reading about server and database misconfigurations, just Google incidents. This time, though, I have a little anecdote which so far has been the grand final of my academic career. I went to a copy shop to have my master thesis printed, one of those with self‑service computer stations for the printers, and I had to wait around 30 minutes for 500 pages to be printed. What does one do to pass the time? How about looking into the stuff other customers had printed. The printer and the scan drive, the trash folder, all parts of the machine were accessible, and the German‑speaking folks among you may already have realised that there is a Word file named 'diary 1', so jackpot, that is exciting. This shows what a little bit of password protection could have done, and that there are a lot more ways to compromise the security of a lot more systems than is occasionally publicly communicated.
Therefore, we wanted to know how common misconfigurations are, why this is, where mistakes like that originate and how we can prevent such mistakes in the future. And as operators are usually the executing force, we asked them about it. In short, we started off with five interviews and a focus group, then this happened, and after two more interviews we created and published our anonymous online survey. With the aid of the RIPE community and other operators subscribed to one or the other mailing list we were able to gather 221 valid responses in 30 days. In total, we received 231 responses; four of them, though, produced empty lines due to, as we suppose, browser issues, four had never worked as operators before, and two outed themselves as fans of trolling surveys. However, 221 is quite a number, considering that the quite extensive survey must have taken at least 15 minutes to complete, which adds up to more than 55 hours operators spent on answering questions for science. So, thank you very much for your time and input. Here are a few of the results:
Demographics: Although this was the last category of the survey, I will start here to give you a short overview for future reference. Most of the respondents work in Europe, mostly in Germany, Switzerland, the Netherlands and the UK, and if you filled out the survey, chances are pretty high that you are between 35 and 45 years old, have a master's degree and ten to twenty years of work experience in IT operations. We asked a few more questions about their jobs, though. 19 of our operators are self‑employed, most of whom work for IT service providers and a few for non‑IT enterprises. 18% are managers and 31% consider themselves as someone having at least a say in management decisions. Over all respondents, about half of them work for ISPs and the rest is basically split in three: IT enterprises, the government and public service sector, and non‑IT enterprises.
We were also interested in what systems our respondents operate, how often, and how they perceive their expertise in the specific area. As we all know, the knowledge and experience spread among operators is quite diverse, so while we asked a lot more questions, I am going to show you one statistic here: a comparison of how frequently the respondents deal with various operating systems against their respective level of expertise. Supposing that expertise and experience come with age, this outcome is rather unsurprising. Younger folks rate their expertise lower than their actual, pretty frequent activity would suggest, and yet higher than those aged 25 to 34 and 55 and up; certainly this is a sign of ‑‑ at that age. The ratio switches with the next age range: expertise rises until they are about 45 to 54 years old, and drops when they are past that age. I wouldn't say that those have given up, but I suppose it's hard to keep track of all the new technologies.

As the first actual exercise, we asked the operators how they would assess the severity of various security incidents and issues. They were supposed to consider themselves part of a company with 1,000 employees and 100,000 users. When designing these incidents, we made them fit one aspect of security each, with one more that incorporated all three: integrity, confidentiality and availability. Furthermore, we had three that were risks which wouldn't necessarily lead to a security incident but could cause one. Confidentiality issues were perceived as most severe, closely followed by integrity issues, and for confidentiality, apparently it works like this: more leaked email addresses, more bad. 1,000 leaked email addresses got a medium to high ranking; 100,000, high to critical. Of course, it is not as bad to have a small hole in one's pants compared to not having pants at all; however, I know a few people for whom the pure thought of holes ripping all the way causes a lot of anxiety. The severity of availability issues got the lowest ratings. As one of our respondents put it, availability is the biggest issue and yet the lowest of concerns regarding security. Users losing work done, or colleagues being unable to work for a certain time, only surpassed the risk assessment of failing spam filters. So far, as a conclusion: maybe availability hasn't proven itself valuable enough to care about, which should definitely be considered when talking about the IoT. And maybe it is a matter of prioritisation or experience.

Speaking of which, one of our main questions was whether and in what way the operators came into contact with security misconfigurations and, ultimately, how common they are. And indeed, they are common. We explicitly asked for security misconfigurations the operators encountered themselves, and out of 221, 220 operators had encountered those before; only one is missing to say it was all of them. Damn it. Of course, we also asked if they themselves had misconfigured something before, and yes, they had. For at least 68 of the respondents, this led to a security incident in at least one case. Interestingly, though, not all of our respondents were actually aware of having misconfigured something before: 26 operators, after stating they couldn't remember any, eventually put a few check marks in the 'happened to me' column when we asked for different kinds of security misconfigurations, so that we end up with 89% of the operators.
As for the different kinds, the most common ones to be found, shown in the columns below the middle line, were bad or publicly known passwords, delayed or missing updates, and faulty assignment of permissions. However, not even one of the eleven different kinds of security misconfigurations was encountered by less than half of the respondents. As for what the operators perpetrated themselves: faulty scripting, broken firewalls and, yet again, updates were an issue. So if you want to decide on what processes to implement first, based on these results it should be patch management, roles and rights, and configuration management. However, we missed one kind: physical ones. Bad decisions like Post‑its under the keyboard, unshredded paper notes with sensitive data in the trash bin or unlocked server racks had simply not occurred to us as security misconfigurations, yet now that we think about it, they are. Unfortunately, we are unable to state how common this problem actually is. I have a feeling, though. In an open question we asked how the operators came across those misconfigurations. The most frequent response was that they just stumbled upon the misconfiguration, meaning they were actually doing something entirely different when spotting the mistake, and for 42% some kind of audit led up to the encounter. For 31% and 9% respectively, the impulse came from the outside or from a ticket, which for us also includes reports by users or coffee talk. Three of the categories are kind of proactive ways to deal with misconfigurations, though actually coffee talk with other employees is rather proactive as well. However, as automated tests do not necessarily spot misconfigurations and monitoring works best when there is an accident or incident, audits seem like a very effective way to spot mistakes. Three of the respondents reported that the issue had been deliberately implemented, as it was just easier to do it that way, which leads us to the reasons for security misconfigurations. Based on the previous qualitative research we provided 24 different answers, roughly divided into personal, environmental or organisational, and system‑related reasons. Personal reasons, for us, are more or less self‑induced aspects, and the most frequently checked ones were lack of knowledge, lack of experience and having other priorities. Apparently, though, five operators felt the need to add laziness or sloth, not finding themselves in the prioritisation one. The top three environmental reasons were sole responsibility, as in no peer review; insufficient quality assurance, as in not testing anything; and indeed, time pressure. The last category was system‑related reasons, and as these tend to be quite specific, we decided to only ask for very basic, rather general ones, such as bad usability or lacking manufacturer support ‑‑ sorry, I am getting nervous.
Again, here are the most common ones: the usage of defaults, meaning insecure defaults; the vast complexity of a system; and legacy support. In general, we see that the operators mostly hold personal reasons accountable, closely followed by organisational reasons, while system‑related reasons are comparatively rare. Other reasons mentioned tend to be part of the answers provided, yet some people used every single free text field for management bashing, which they could also have done on the next page. Reactions. We wanted to know how the management reacted to security misconfigurations and, in comparison, whether incidents affect the way they handle security. Given the reasons that led to security misconfigurations, 71% of the respondents reported that such mistakes lead to the management making adjustments and improving security measures. 53% reported that, compared to misconfigurations without serious consequences, the impact of incidents is even higher. That said, several respondents who chose to check 'other' remarked that the management tends to rotate around the problem without solving the actual issue, or the measures are disregarded as soon as the issue is out of sight. And well, as there were still a few questions left, we added another page. We asked for opinions on several topics, 17 statements to be precise, with yet another Likert scale for agreement, and sorry, but this may be the time you should look at the presentation on your computers. You may all know whether or not your opinion is a popular one; that said, it differs significantly when we compare opinions based on other responses. For example, operators with less than one year or even less than three years of experience trust their tools a lot more than the others. So, basically, the longer you work in IT operations, the less trust you have.
(Applause)
Freelancers, for example, are not as impressed by blameless postmortems as the employees, and what is also interesting is to compare IT‑related businesses with non‑IT organisations. The operators working in non‑IT organisations have far stronger opinions about whether their company budgets for mistakes. Their companies seem to care less about security standards, and it's not a matter of course that their direct supervisor knows what they are actually doing and the amount of work they are doing. Hence, the following recommendations for action, which I promised earlier, may not be for everybody, but they cover a lot of the issues we've just seen. The first one is literally revolutionary. Automation. Wherever possible, infrastructure and procedures should be automated, period. However, tools have to be secure by default, as insecure defaults are just the worst.
Having gotten that one off my chest, we proceed with still rather obvious measures. Documentation. The state of any system and its components must be properly documented so that anyone can fully understand it, even the newbie, if they tried. Properly also means updated immediately upon any changes and regularly verified for correctness.
Third one. Clear responsibilities. Another surprise, isn't it? Whenever responsibilities overlap, or the edges just don't quite touch each other, there should be a distinct department that is responsible for the security of each device and process, and that also has sufficient authority over the device and process to ensure it is properly secured. That said, the department is not supposed to consist of only one single person. Responsibilities must be shared among several people to allow for peer review, quality assurance and, of course, the exchange of knowledge, possibly even beyond their occupational scope.
Well, for the sake of completeness, processes and procedures. Processes must be defined and documented, changes must be planned and properly managed, you know the drill. Now, though, we get into the ones that may need a little explaining. Training dedicated to troubleshooting for evolving operators, to tackle all those experience and knowledge gaps. Less than 9% of the respondents reported that they had learned how to take over misconfigured systems in school; 55% reported they had not. Troubleshooting, as in systematically investigating a system, figuring out flaws and finding issues, is a skill one can, and probably should, learn by doing, and this is not only helpful when starting a new job. Eventually, when for whatever reason there is a security incident and they don't right away know what caused it, they may find the reasons by troubleshooting. So troubleshooting should definitely be part of any IT‑related curriculum. 71% of the respondents reported that misconfigurations led to improvements by the management. This makes us wonder whether there should be some kind of training dedicated to the management, to experience what this abstract concept of security actually is, as this is supposedly where it often lacks.
What I myself would rather consider, though, is some kind of security incident fire drill or live action incident response training for executives. I think for a lot of CEOs it would be an eye‑opener if they were shown a scenario of what would happen if something went wrong. Imagine that with props like newspapers with severe articles, slides with terrible business forecasts and lay actors as raging customers. Not that it needs all of that, but conceptualising is fun, and the more immersive the scenario, the more influence this may have on the measure's sustainability and also on future risk assessment. Probability, damage, human factors. Where we think of risk as probability times damage, this measure basically adds a third dimension to the assessment. By thinking of human factors as an actual factor, we could get a much more precise evaluation of the risk: how much manual effort is needed for installation and system maintenance, how variable is the process, how high is the awareness and how low may it get over time. Human factors can easily increase and also decrease the risk and should therefore be accounted for. But even with this knowledge, with secure defaults and extensive documentation and so on, security misconfigurations may never be completely eliminated. Therefore, as the last measure in this list, I want to discuss the role of post‑processing these mistakes. And this needs an honest error culture in companies, one that treats mistakes as something people can learn from and not something they should be blamed for, and the best bet would be blameless postmortems. Although freelancers aren't huge fans of those, 78% of respondents agreed that blameless postmortems help to detect essential issues in corporate procedures, and 43% even emphasised their opinions. It is comforting to see others fail sometimes, and for me as an outsider, it's even a fun read usually. Indeed, this approach takes time. In order to produce such an analysis, the operator has to think back and trace what actions they took at what time, what effects they observed, what expectations they had, what assumptions they made, etc. However, it encourages us to talk about mistakes and not to bury them in our backyard. And to do this without blaming anyone, and to do this over and over again, is a safe way to establish an error culture where we can accept human error, investigate what went wrong and have the respective reports, with actual data, be the basis for improvement. Not only, but also, on the management level.
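As a rough sketch of the extended assessment described here ‑‑ the symbols are illustrative and do not appear in the talk, with p the probability, d the damage and h a human‑factor term:

\[ \mathrm{risk} = p \times d \quad\longrightarrow\quad \mathrm{risk} = p \times d \times h \]

Under this reading, a process needing frequent manual intervention would get h > 1 and a fully automated one h ≤ 1, so the same probability and damage can yield noticeably different risk estimates.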
All right. That's basically it for my talk. However, this is not the end. My former supervisor Tobias, who is currently at ACM CCS presenting this very study and is therefore not here this week, is pursuing this research at the Technical University in Delft. If you want to encourage this research on human factors in IT operations, please come find me during the meeting. I'm always happy to talk.
JEN LINKOVA: Could you go to the slides?
CONSTANZE DIETRICH: Am I really not allowed to touch the screen?
JEN LINKOVA: The one before that, I think. This one. So, I happen to be involved in reviewing a large number of postmortems, and I found that it actually takes a lot of work to get to the real root cause, usually with the five whys approach, where you keep asking why, because quite often people say it was human error or a misconfiguration when, if you dig deeper, it is just a lack of automation. So I wonder, did you give these to people, so they were picking them from a predefined list, or were these answers they provided?
CONSTANZE DIETRICH: These were answers we provided. So they had exactly these 24 reasons to check. They also had a free text field to add other reasons.
JEN LINKOVA: I am just curious if you could actually reduce the number of options here by looking into the real root cause. For example, time pressure, work overload: is that really a root cause? Maybe it was a case where, under time pressure, people made some manual misconfiguration mistake, which means the real root cause is a lack of automation.
CONSTANZE DIETRICH: However, I am not talking about root causes here. I wouldn't dare to say these are the root causes.
JEN LINKOVA: So it was ‑‑
CONSTANZE DIETRICH: Reasons. These were the answers by the respondents regarding what reasons there have been, not the root causes.
JEN LINKOVA: It's more like contributing factors to an incident.
CONSTANZE DIETRICH: Right.
JEN LINKOVA: You did not really look into the root cause, because I think it would be interesting to see what needs to be fixed.
CONSTANZE DIETRICH: It's definitely a topic for future studies. As I said, it's going on.
CHAIR: Thank you. Are there any more questions for Constanze? Just a question out of curiosity while he is coming up: did you get many answers from the financial sector, from banks?
CONSTANZE DIETRICH: For the financial sector?
CHAIR: Yes.
CONSTANZE DIETRICH: I am not able to say that, unfortunately. I didn't ask about the financial sector. I asked about the four industries: ISPs, IT enterprises, non‑IT enterprises, and government and NGOs.
AUDIENCE SPEAKER: Fred ‑‑ I just want to make a comment on your cartoons, they are really brilliant, thank you.
(Applause)
CHAIR: So thank you, Constanze. And now we are going to the next presentation. It's Niels ten Oever from the University of Amsterdam, and he is going to talk about innovation and human rights in the Internet architecture. Is self‑regulation delivering on its promise?
NIELS TEN OEVER: Let's see if USB‑C is delivering on its promise. It did before and now of course it doesn't. It's not detecting a screen. Yes, detected. Yes, hello. Hi. I am a PhD researcher with the DATACTIVE research group at the University of Amsterdam, and I study Internet governance and standard setting as a governance innovation. Why is that important? Because scholars have said that if the United Nations were invented today, it would probably look like Internet governance. What does this mean? My question is: can we build a public space on privately owned and governed infrastructure? So, your work is too important to leave only to the engineers, and that's where we social scientists and political scientists come in. I'm going to look at technology through a science and technology studies lens, so I am going to do way too many things: introduce you to the theoretical frameworks and look at the work you have been doing, but please bear with me.
This is a paper that is not yet submitted so all your comments are very welcome in the form of comments, questions, emails or even rotten tomatoes. I am focusing on the Internet architecture and the actors and forces that shaped it. So that's you. And I am showing the drivers and consequences of some of the power dynamics.
So the Internet architecture is often overlooked because it's hidden, and it only actually gets visible when it's not working, but, as you know, it's the precondition of everything that happens on the Internet: the routing of the packets, addressing, the domain name system. It is also the oldest part of the Internet. And I'm specifically looking at the Internet Engineering Task Force. Studying the architecture as an infrastructure is notoriously hard, because how do you study something the size of the Internet? Do you need a telescope or a microscope or a looking glass?
I will start with my theoretical framework ‑‑ do you start with a hypothesis or with measurements, and why should you do it? Well, as Marshall McLuhan said, looking at content ‑‑ in this case data streams ‑‑ is like looking at the juicy piece of meat which is carried away by the burglar to distract the watchdog of the mind. So look at the structure that sets the preconditions, shapes and characteristics of the data streams. With nearly all processes in modern society mediated by the Internet, the architecture is setting invisible rules. Lawrence Lessig called this 'code is law', but law is also still law, so maybe code is not just law; maybe it's something else. The Internet architecture creates possibilities and impossibilities, and it inhibits behaviour. But of course, we do not want to fall into the trap of technological determinism, which says that technology determines what can be done with it, that we are ultimately defined by the technologies we use.
This is all recognisable in comments like: if it's possible, it will be made. But that would bracket the agency that you as engineers have. Then, on the other hand, we also do not want to leap over into the field of social constructivism, because that argues that people, discourse and convention completely determine how a tool is used, so that the reality of the tool doesn't matter. But that is a bit weird, because it's still easier to kill a platoon of soldiers with a bullet than with a bunch of roses. How people use it and design it matters. So I seek to understand the relations and ecology in which the Internet is co‑produced; it has to do with technology and humans, and that's where it gets messy.
So I am using concepts like materiality: something is material when it starts to make a difference, and it starts doing that in relation to other things. These relations we call affordances ‑‑ not emphasising what technology determines but rather how it invites us to do things and how it is ordering our world. For this I use the theoretical framework of sociotechnical imaginaries, which fits really well with the work you are doing, because there is this process of co‑production which happens through visions, symbols and futures, but it also produces institutions and policy making and documentation, and this all happens at the same time.
So your work, while you are writing policies and making code and protocols, also creates institutions such as the IETF, where we see the technical protocols and the inner workings and procedures of the organisation all determined in RFCs. And this is arguably because this is an engineering approach to problems: all the problems are neatly documented and nicely machine‑readable, including the mailing lists, which gives us, as discourse researchers, great stuff to look at.
So, this allows me to have a very close look at who said what and how decisions were made, and to try to understand how the Internet architecture is developing while there are different interests at work. I take into account the sociotechnical imaginary of the architecture, the affordances of the architecture and the economic drivers. To understand the dynamics, the institutions, the agents and the motivations, I did 25 interviews, a qualitative analysis of all RFCs ‑‑ if you click on the link you find there, you can see the software and the qualitative analysis of the RFCs that were mentioned ‑‑ a quantitative and qualitative analysis of mailing lists, and participant observation during four years at eleven IETF meetings. Yes, I became more familiar to you in those years. I found that there are three main characteristics of the sociotechnical Internet imaginary, starting with the end‑to‑end principle, which of course says the intelligence is at the edges rather than hidden in the network.
The network only provides data transport; I don't need to explain this to you. But this is not merely a technical concept; there are also very political conceptions of this principle, and its political conceptions are normative values which state that the network should empower the edges to communicate with each other and not enable the network as a form of control.
There is a constructive ambiguity between these technical and political interpretations of these engineering concepts, and in several discussions this is made very clear ‑‑ and also in T‑shirts, of course.
And even RFCs are very clear that the IETF and RFCs are not value‑neutral and that the IETF aims to empower the edge user. The second pillar of the imaginary is permissionless innovation, the possibility to deploy without any permission. This could be seen as a response to the telco era, but maybe even to the acceptable use policy enforced by ARPA and the RFC which said there couldn't be commercial content on the backbone. And of course, openness, the freedom to add nodes. But openness is also this really fuzzy concept, because it has technical conceptions but it also figures in the culture of open standards and open governance, where there is transparency. And transparency also has this technical notion that you should be able to follow the packets where they flow, but also of the process being transparent and having everything documented, so it's quite fuzzy and messy, and these run through each other. But this translates into very specific governance orderings, very political statements: we reject kings, presidents and voting; we believe in rough consensus and running code.
And Sandra Braman did some interesting work in which she read the first 2,000 RFCs, coded them and analysed them, and concluded that societal implications were continuously discussed and taken into account from the beginning. You can click on that link and find all her great papers there.
So I start looking at the moment of commercialisation and privatisation, when the US Government ceded direct control, also because they didn't want to pay for the infrastructure and wanted to scale the Internet; commercialisation was explicitly seen as the way to do that, and this is the story of its consequences. The first big task after commercialisation was coming up with the successor to IPv4. As some of you might know, this did not go swimmingly. It tested the governance of the IAB and IETF and introduced the short‑term solution of network address translation. Also during that period firewalls were introduced, and aside from securing, they also added control in the network. And of course, it added more advanced notions of network management. All this made the network much more than simple dumb pipes.

So what happened in this period of the rise of the middlebox? It changed the affordance structure of the network. It added more control for network operators, who were aided in that by network equipment vendors, and it took away control from the end points, as you see here in this example of TLS 1.3: because of control in the network, to design the future we needed to make it look like the past. So there you see that the network is limiting things. In that case a simple fix helped; in the case of SCTP it did not help that much. SCTP is something we have never seen widely deployed, because the middleboxes got in the way: SCTP did everything perfectly that we wanted in a lab situation, but in the wild, not so much. So SCTP did not work in the wild for the first decade; after 2013 it did, but we will get into that.

This led to frustration and contention, until something started to move, which was QUIC. It had similar ideas to SCTP and integrated some of its learnings, congestion control, etc. What was the big difference? It was developed by Google, who already had control of CDNs and a browser, so they could do a lot of A/B testing, which allowed them, together with a very skilful research team, to develop this alternative. It also built on the momentum created by the Snowden revelations and RFC 7258, so it added encryption and directly started to do that as much as possible, to get rid of any bump in the wire that might be perceived, and then went far in trying to ensure that there was no ossification possible whatsoever, no little handles. And it also offered lower latency. So all incentives came together and all stars seemed to align. Well, all's well that ends well ‑‑ so did encryption and the development of QUIC bring us back where we were? No. Again, this altered affordances; you see, it's relational again.
So, only a large effort by a transnational corporation with significant control of the network could make this evolution and change the affordance structure. And QUIC tooling for running a server is not yet readily available, and QUIC deployment will arguably also strengthen consolidation. And with ubiquitous encryption it might be harder to analyse the network for incidents, and the kind of cookie research that has been done previously will be much harder now. And I hear some of the network operators are not pleased. So, imaginaries seem to be changing.
I got this from an interview with a member of senior IETF leadership: we need to play into some of the operators' or vendors' earning models in order to get something deployed. That is a very economistic, reductionist approach to developing the Internet architecture, which seems very different from the architectural principles and the imaginary that were laid out earlier. So concentration of resources and power tussles between transnational corporations alter these affordances, and this also siphons through into the Internet architecture's sociotechnical imaginary. That's a lot of non‑technical wording, sorry.
What we also see is that IETF attendance is stabilising but fewer companies attend, which is an effect of consolidation. So what does this mean for the Internet architecture imaginary and the Internet architecture in general? The principles of the Internet architecture imaginary are still professed ‑‑ they are still present in T‑shirts and when people are asked what the principles are ‑‑ but are they actually still there? Or, rather, is individual engineering prowess and the engineering community covering for a larger order? Unfortunately, while we expected that self‑regulation and commercialisation would lead to perfect markets, free competition and decentralised structures, this might not have been the case. Some people say this is because we thought about technical distribution but did not think of economic concentration, or even did not think it was possible in the '90s. So what we have now is indeed market concentration and control and power struggles between transnational corporations. But the response to this is still: we are not the protocol police. So whereas commercialisation worked really, really well for the scaling of the Internet ‑‑ at least to the places where it was economically viable to connect; we should not forget that half the world is still unconnected ‑‑ it also led to the prioritisation of commercial interests. And the political conceptions of the architectural imaginary are fading into the background. So while rights, freedoms and societal implications have been discussed since the inception of the Internet architecture, these do not find their way into the current discussions in a structural way. Concrete calls for the integration of policy, societal impact and human rights considerations have been made since 2002, as Internet drafts and on the mailing lists, but such considerations are not built into the standardisation process. Actually, they are being resisted.
So while the importance and size of the Internet architecture have only grown, and with that its societal implications, those implications are not considered in a structured way. Sad.
So how does this sound in an academic way? By combining science and technology studies and international political economy, I foreground the drivers, changes, affordances and materiality of the Internet architecture.
I got a lot of cool pictures from Twitter because I am not very good with a camera, and I am very curious to hear whether you think this is the right way to go, whether I missed something, or whether we should integrate some of these considerations. Thank you very much.
(Applause)
OSAMA I AL‑DOSARY: We have time for questions. Please state your name and affiliation.
AUDIENCE SPEAKER: Philip, RIPE NCC, but speaking for myself. I think a lot of what you say is that the commercial interests of a few big companies dominate the IETF, right, and that has been going on, I don't know, I would say since the '90s; you have big companies that try to dominate. But I want to ask you, since you study the IETF, about two subjects. One is that it's my perception that after the Snowden revelations there was really a big political momentum within the IETF to say: okay, now everything has to be sort of privacy aware, even if that may not be that interesting from a commercial perspective. And the other thing I see is that if you, for example, take TLS 1.3, people basically said the same thing: no more middleboxes; if you want your middlebox then we will figure something out, but we are not going to do it for you. So it seems the political spirit is still very much alive, and it's not just a few companies that dominate the scene.
NIELS TEN OEVER: Great. I was hoping for this question. So, RFC 7258 really elaborates and goes to great lengths to say that pervasive surveillance is an attack, but it also then goes out of its way to say it's a technical attack. This is not political. It is an attack on the network, and it's dangerous for the authenticity and the integrity of the network. So there it really strips off any political connotation; where the political connotation was actually very present, they really went out of their way to remove it. And with TLS 1.3, I think the big thing that people tried to overcome there was not necessarily privacy for end users, even though that was a nice extra. I have also asked this of the people in QUIC ‑‑ was this a design criterion ‑‑ and it wasn't. It was overcoming ossification, and then of course there is the interest that you did not want to share the user data with other people that could ‑‑
AUDIENCE SPEAKER: I guess we should take that off‑line. That is interesting.
AUDIENCE SPEAKER: Alissa Cooper, IETF Chair. It's always nice to be studied. Could you go back to the slide with the graphs of participation, please. I should say thank you for your talk; it was good to hear the summary. So I think, when you spoke just briefly about this, you said that, you know, the participation is flattening in terms of total but consolidating in terms of the size or number of companies.
NIELS TEN OEVER: This doesn't show that.
AUDIENCE SPEAKER: I want to make sure people are clear on that.
NIELS TEN OEVER: But it wasn't pretty.
AUDIENCE SPEAKER: Also, just in terms of participation, we actually don't track overall participation by affiliation, because we kind of can't. So you consider draft authorship and RFC authorship as the main factor of participation?
NIELS TEN OEVER: For that, the mailing list analysis was super helpful, because there we had some ways to track affiliation, by using both the domain of people's email addresses and their signatures, so we have been trying to use that as a factor of analysis, as well as draft and RFC authorship.
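As a rough illustration of the kind of heuristic described here ‑‑ this is a sketch, not the study's actual code, and the domain‑to‑organisation mapping is invented:

# Minimal sketch: guess a mailing-list poster's affiliation from the
# domain of their email address. The table is invented for illustration;
# a real analysis would use a curated mapping plus signature parsing.
ORG_BY_DOMAIN = {
    "example-vendor.com": "Example Vendor",
    "example-isp.net": "Example ISP",
}

def affiliation(email: str) -> str:
    """Return a best-guess affiliation for an email address."""
    domain = email.rsplit("@", 1)[-1].lower()
    labels = domain.split(".")
    # Walk up the labels so mail.example-vendor.com also matches.
    for i in range(len(labels) - 1):
        candidate = ".".join(labels[i:])
        if candidate in ORG_BY_DOMAIN:
            return ORG_BY_DOMAIN[candidate]
    return "unknown"

print(affiliation("alice@mail.example-vendor.com"))  # Example Vendor

A real pipeline would combine a heuristic like this with signature parsing and with draft and RFC authorship, as described above.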
AUDIENCE SPEAKER: Okay. Cool. So this is in the paper.
NIELS TEN OEVER: And the code is also online, so if people want to play with it and make the Python code better.
GEORGE MICHAELSON: APNIC, but speaking for myself. As an occasional IETF participant, I think a lot of what you say rings true. I recognise familiar faces; the company on their name badge changes over time, as companies have moved in and out of engagement as corporate entities, paying the air fares and the hotel bills to do the activity. The idea that we speak as individuals and not as entities is a very strong binding force on the individual level, but there is a degree to which we all know it's fictional. And so this is not unusual. I'm a home owner, I own property and I depend on lawyers to make that legally binding in the State, but if someone stood up and said all property is theft, I would of course agree with that, because it's true. So, holding contradictory views in this space is simply not unusual. It's a normal part of life. What I like here is that you tease out something that actually has been a conversation in the RIPE region for some time. There was a very good presentation, I think by Sander Vagen from the Oxford Institute, who spoke about human rights in the context of networks, and it pays us to think about the implications for social policy and outcomes. The classic example for me is the conversation in the IETF around how we could expose a low‑level one‑bit signal from things like QUIC to allow people to understand the aggregate packet flow rates so engineering can be done ‑‑ you want to take an encrypted session and you want to start leaking information, are you crazy? But it is kind of a conversation between two different competing pressures, and in some senses both of them are commercial. It is important for some people to assert a commercial primacy of 'we respect your privacy', and for some other people to assert 'we are bound by other processes which demand certain leaks of your privacy'. We faced this when lawyers came into the IETF, and we faced this when the governance conversation moved beyond the casual into the protocol stack. It's never going to go away as a problem. So although I might not agree with everything you said, I think it's fantastic you are saying it. Good luck with your PhD.
OSAMA I AL‑DOSARY: We will cut off the lines after the people standing right now.
PETER HESSLER: With the OpenBSD project. I am active in some of the IETF Working Groups, and I have noticed that there are some differences in how, I guess the phrase I can use is, how commercial some of them are. Some of them are more so than others, and I wonder if you have compared the different Working Groups with each other, or is all this research on the IETF as a whole?
NIELS TEN OEVER: I would love to study that, but then, as the previous speaker said, what makes someone or someone's opinion commercial? You know. And that is super tough. So in the paper, I outline that IETF participants take part as craftspersons, as negotiators, but also as digital citizens. And you can't say that at this moment you put one hat on and take the other hat off, and sometimes these are conflicting with each other. In one interview I got this beautiful quote, when a self‑described public interest technologist said that he argued for one thing at the microphone, and there was someone arguing against him, and later that person came to him and said: thank you very much for arguing that point, but I had to make my point on behalf of my employer. Well, there it gets weird, right? So, it's very hard to do that with discourse analysis, and this is why I have been trying to look at the big power tussles that have been going on. If these are between big commercial groups with different commercial interests, where does the public interest come in and how do we ensure that it is safeguarded? Because we all know ‑‑ last week I was at the Dutch Internet governance forum ‑‑ the question is not 'should we regulate the Internet' but 'how should we'? And I think if we as a community do not come up with our own responses, other people will make those choices for us, and I don't think we are going to like those.
AUDIENCE SPEAKER: ‑‑ enterprises. Thank you for the talk. I am certainly sympathetic with a lot of your analysis and conclusions ‑‑ the analysis that we are centralising power, if you will. We can debate and argue the finer points about whether or not the IETF is more owned by corporations now than it was 20 years ago. I recall when we were doing URI specifications, the big question was: if Netscape won't implement it, we can't do it. But I am not sure that the converse holds true, right? If the concern is that the IETF is too concerned with whether or not companies will pursue a given direction, the answer isn't strictly: well, let's pick the right path and hope companies will implement it. I give you IPv6. So, I think that you have to be careful to separate the analysis from the conclusions, so I am not ready to jump on the 'therefore we should bring human rights into the discussion'. But I agree it's a problem, and thanks for the analysis.
RANDY BUSH: IIJ and Arrcus. So I published a broadside 12 years ago, probably 13, in CCR, on the factors underlying vendors in the IETF, and George said it in different ways: it takes money to get to Bangkok, such is life, welcome to late‑stage capitalism. I think there is a very large factor of people who actually are trying to do the right thing, and we are trapped. Let's rewind to Sara's presentation of yesterday, where to get DNS privacy we are going to talk to central servers. We are trapped in our own technology; IPv6 has created more NATs than anything else. Okay? And I haven't got magic pixie dust, and I beg anyone who does to share it. But I think we are not having this discourse in the IETF at a high enough level, and I don't mean in rank in the IETF, but just in our general discourse, seeing this as a real problem we need to attack. And so, if you have any pixie dust that can change the discourse ‑‑ I think this is a brilliant presentation, but I don't come with a recipe.
NIELS TEN OEVER: I don't have pixie dust. I talked with Dave Clark at the plenary a while before, but I also do not think that we need to think in terms of pixie dust and 'let's engineer our way out of this', right? We need to have this conversation and deliberation, but, therefore, we need to prioritise considerations that are not solely commercial, because there is this rising concern but we do not have the platform to talk about it. So I fully agree with what you said, and I think we should strive for that, and if you all agree then we can do so, in the hallways and in our sessions, to redesign the Internet architecture to serve the public interest.
OSAMA I AL‑DOSARY: Our next session: William is going to give us a report on the RIPE accountability task force.
WILLIAM SYLVESTER: Hi, I am the Chair of the accountability task force and our presentation today is intended to review the report that we recently published out to the RIPE list. We have been working on the accountability task force since RIPE 73 and for those who you haven't reviewed the report, you can find it at the URL. We look forward to anyone's feedback. Before we get started, how many people were at the BoF that we had at the RIPE meeting in Marseilles? Raise your hands. Okay. So a few.
So we had a great discussion back then and with that, we took feedback from the community and that's what brought us to our draft report.
So let's talk about why we have accountability. In the early days RIPE was a couple of guys in a room, and it was real easy for them to discuss topics and create policy. Since then, we've grown to a plenary room, we have grown to an overflow, we have multiple working groups and a lot of different topics that we discuss today. With this, it's also been very important, as we have newcomers who come to our community as well as outsiders who review our community, for them to find and understand more about what our community is about. It's also helpful for us to know what happened in the past, maybe not to repeat the same mistakes in the future, but ultimately as a reference point for how we measure ourselves.
So the task force started off at RIPE 73, as we talked about. We came together as a group and started having regular discussions. We had a document review where we considered all the different structures and processes within the community. And ultimately we looked to get input from the community at a lot of different stages. Some of this was discussions on the RIPE list, others were discussions we had in the plenary, feedback we received from the RIPE NCC, and the BoF that we had back at RIPE 76.
So the report, for those of you who haven't seen it, is broken up into a couple of parts. The first part basically describes what RIPE is and our accountability, among other elements, basing our discussion on community input. The second part is more document‑focused; it looks at the different components of accountability and transparency. Ultimately, this focuses on the structures and sort of how we do things and how we conduct business, that kind of stuff. And in the end we have some recommendations.
So how do we see accountability? The task force spent a lot of time discussing what this really meant. And what we came up with, mostly from feedback from the community, was that we are all here for coordination. We are a forum to discuss how we make our networks connect together and how we enable the Internet. But it only works if everybody in this room agrees. Not that we have to agree on every policy, but you have to agree on the process, the things that we do, the way that we conduct ourselves. And the participants have to trust in the fact that when you get up at a mic, or you give a presentation, or you make a proposal on a mailing list, your contributions are going to be judged on merit, your proposals are going to be considered honestly and, ultimately, it's going to be based on your expert opinion. Not that we are not going to have disagreements, not that we are not going to have conflict, but ultimately, in the end, the policies that are decided upon are legitimate. So this raises the question of who RIPE is accountable to. Are we accountable to third parties? What we ultimately found is that we are accountable to ourselves. And with that, we are accountable to individuals ‑‑ consider the last presentation about the role of corporations paying for travel and other aspects so we can all come together.
Ultimately, though, it's about the individuals who can come together and find consensus, and through that consensus we can define policies that we can all agree to. And so with this we are accountable to ourselves as individuals, not necessarily to constituencies or external stakeholders. And while we can't solve every problem, our community is appropriate for the problems that we attack; for other problems, there are better venues out there. And just because you participate doesn't mean that you are always going to get what you want; unfortunately, that's part of how our community works.
But accountability is important, and it's meaningful, because it preserves trust for all of us. It helps ensure that we have a venue for coordination of the Internet, a place where we can develop policies that we can all agree upon. And as we talk about capture: the task force heavily considered capture, and capture always seemed to be the boogeyman in the room ‑‑ the bad guys are going to come in and take it over, whoever they are. The task force sort of dismissed this idea, and this was later mirrored by the community, with feedback providing a greater understanding that we don't feel our community is vulnerable to capture, mostly because of the way that we conduct ourselves. And the way that we conduct ourselves is through our values. Most of our values are process‑based. As a task force we tried to identify whether we were missing substantive values; these would have been ideas about all‑encompassing ways that we are going to help or improve the Internet. But through our community feedback we found our process values are actually good enough: the substantive values are covered through our process values. And what are those values? Open, transparent, bottom‑up, consensus‑driven. The values that we have define how we get to the mic and how we speak at the mic, how we conduct ourselves in working groups, and through those values we find our normal social mores in the way that we exist.
So how do we make decisions? Well, in our community, decisions are made through consensus. But the community doesn't really want to define consensus, because that is this very special thing to our community, and it's not that the definitions of consensus outside of our community were deemed to be good ones. What we found is that, from a task force perspective, it wasn't our goal to define consensus; we didn't want to create long, bureaucratic documentation on consensus, but we wanted to capture and describe to the community what we saw as the way consensus and decisions were being made throughout the community. In no way did we want this to influence the way that we do consensus or anything along those lines. It's merely our observations. So let's first talk about what consensus is not.
Consensus is not unanimous agreement, or winning a vote, or a majority opinion or supermajority. Consensus is about how we come together and guide ourselves, and the Chairs play a crucial role in this process. The Chairs sort our feedback into different categories: statements of support, plus‑ones, 'do I support this, do I oppose this', but there are also valid and invalid contributions. So what are invalid contributions? These are pretty much anything out of scope: questions that have been asked and answered already, lack of good faith, self‑serving contributions. These are things that our community rejects overall; where an individual seeks something that's not for the greater good of the community, the community has rejected it.
So how do we deal with low engagement? What happens when you don't have a lot of voices participating? While we prefer wider participation ‑‑ it's always great when we have it ‑‑ it's not something that we can always depend upon, and that creates a challenge for Chairs. So how do you manage consensus when there are not enough statements to give you a clear direction?
What we found is that the Chairs are responsible for making those judgements. And why does this happen? Usually it's the fact that people don't disagree with those who are already speaking, or maybe it's a topic they are not interested in. Or perhaps they are just happy to let somebody else come up with the outcome, whatever outcome that may be.
So how does the task force see documentation? Documentation is vital to how we support our open and transparent core values. It demonstrates to all the external folks what our accountability is, or specifically what our community is about, which helps us be accountable, but it also helps newcomers and helps people engage. And that's always healthy for our community: having new opinions and new ideas and new contributors makes us a better community overall. It also helps us clarify what was done in the past and why we do what we do.
One of the big concerns as we started the process was that we didn't want to be bureaucratic; we didn't want to be creating lots of documentation, volumes of books or large amounts of content that nobody might read. What we found is that over‑documentation can actually be a problem. It creates its own issues. Sometimes over‑documentation can be used to game the system, or it just makes things more difficult, and so we have endorsed the tradition of keeping our documentation to the minimal amount necessary to communicate what it is that we do. And that was something that was very important to the task force.
But what happens when you have no documentation, or a lack of documentation? It's not a problem, because our community has established ways that we conduct ourselves. If we don't document something, we have many examples that we observed where the community stepped up and made a decision, through consensus or just through the open, bottom‑up processes, and found a way to continue to conduct business and carry forward. And so what we found is that missing documentation is not really missing. If missing documentation is something that gets in the way, we certainly believe that it should be documented, but it doesn't always have to be, because our community is deep and strong and has great ways that we continue to carry ourselves.
So what is it that we found? Well, we found that we have an accountable community. And what does that really mean? We are well established and have been around for quite some time. We have robust structures. What is that? That means we have our Chair, we have our plenary and we have our working groups; that is what we mean by structures. And we have an open, transparent and bottom‑up, consensus‑based system, and it's been working pretty well for us so far.
We found that there are no immediate risks to our community. We didn't find any glaring holes or big problems. But we do have a full list of recommendations in our report, and I do encourage everybody to take a look because we'd love your feedback. But to highlight a couple of the recommendations:
We found that the selection procedure for the RIPE Chair and the description of the Chair's role were not fully documented. This is something that I understand is currently being worked on, and it was something that was great for the task force to review as a way that the community handles issues like that.
It would also be helpful to align the Chair selection process. Through the bottom‑up process, each Working Group selects its own way of selecting Chairs. Some of that could use an overhaul that might standardise some of those issues. It would be helpful to explain what the Working Group Chair collective is responsible for, or what the plenary does and what the powers of the plenary are. Also, information for newcomers and newly selected Chairs: for myself, as I became a new Chair a year or so ago, it was a challenge understanding what my roles and responsibilities were, what I was supposed to do and what this thing about consensus was. Fortunately, I had lots of help from a lot of the other Chairs in other working groups, but there are definitely details we could beef up to help newcomers and others benefit from the same knowledge.
Going forward, the task force recommends that we should review this periodically. This time around, it's taken us roughly two years. We think that in the future they can benefit from some of our work, perhaps change what we focus on, look at other details, maybe dig deeper on certain things. But ultimately it's going to be up to the community to decide. As for our next steps, we have a comment period open until the middle of November for the draft report that we've put out on the RIPE list. I encourage everyone to read it, to consider what it says and to provide feedback. I saw right before this session that we got some additional feedback, and every voice is important. So we look forward to everybody's input. We will spend some time and integrate everybody's changes and requests, and then we are looking to hopefully publish a final report in early to middle December. Ultimately, it's going to be up to the community, so we will look to the community to decide: do you believe our recommendations should be implemented? And if you do, then the Chair will be responsible for figuring out how that should happen. I would like to say thank you to the secretariat ‑‑ specifically Athina and Anthony, who have been invaluable throughout this whole process, helping us keep everything together ‑‑ and also thank you to the task force members who have gotten us to this point. So, thank you. Any questions?
CHAIR: Thank you.
(Applause)
DANIEL KARRENBERG: Thank you very much for that report. It is actually an excellent piece of work. As you know, I came out very strongly during the process of chartering this task force, guarding against it rewriting our constitution. You have not done that; you have actually done a really impressive piece of work, and most of the recommendations are really solid, so I think the community should spend some effort in reading this report and discussing the recommendations.
One more remark: you mentioned it already, the Chair role definition and selection procedure is currently a matter of debate, and there is a special mailing list for it, which doesn't have that much participation. So, if you haven't known about this, it's ripe‑chair‑discuss, and I encourage people who are interested in this particular subject to join that mailing list and join that discussion.
WILLIAM SYLVESTER: Thank you.
RANDY BUSH: IIJ and Arrcus. Daniel has said what needs to be said rather well, so I can't resist this: I have a question, which is, who on the PC had the brilliant sense of irony to put these two presentations together?
CHAIR: Just to get some discussion.
NURANI NIMPUNO: Thank you for this, and thanks for the presentation and the report. It was interesting reading and I think it's very good work. I know how hard it was in the beginning to actually set the scope of this, and maybe the devil in me also thinks that it was a good thing to start this just to kind of get some of the reactions from the community and to see that we actually landed in a good place. We don't need to be afraid of having these discussions, and I think the community is robust enough to have them. And I think the RIPE NCC have done a fantastic job in supporting the task force and doing actual work, so thank you to them.
Two comments also. One is that I think I said early in the process that accountability is not something you achieve; it's something you strive for. So while I think the task force has a limited scope and mandate and should be closed down, I think we should have regular reviews and go back and look at the accountability of the community, and hopefully this process shows how we can do that.
My second comment: even though we do a bit of 'bottoms up' in this community, that's a different thing; I think the process we have here is bottom‑up. Cheers.
WILLIAM SYLVESTER: Thank you.
CHAIR: Do you have any more comments, William? Any more questions from the audience? No. Please give the task force feedback, because they have done a great job and it's good to know what the community thinks about it. If you don't have any more comments, we are done. We are going to the coffee break.
(Applause)
Thank you, and please remember to rate the talks; it's important for us to know what is good and bad about the talks. Thank you.
LIVE CAPTIONING BY AOIFE DOWNES RPR