Archives

RIPE 77
Plenary session
Monday ‑ 15 October 2018
2 p.m.

HANS PETTER HOLEN: Okay. So, welcome everybody to RIPE 77 in Amsterdam. My name is Hans Petter Holen, I am RIPE Chair and I'm here to welcome you to this meeting and give you a brief overview of what we can expect this week.

It took some time to fill up this room because we have 881 registered participants as of now for this meeting. And 514 checked in as of one o'clock. I think this is a new record.

So, this meeting is being recorded. There are microphones here if you want to interact. Please state your name and affiliation. And if you miss one of the sessions, it's streamed and stored so you can actually watch it later.

So, this meeting is open to everybody, not only those present but also remote participants. We want to bring people together from different backgrounds, cultures, nationalities, beliefs and genders. And we want to have this community as a safe, supportive and respectful environment.

So, in order to create a common understanding on that, now that we have become so many, we have a meeting code of conduct. The really short version is: please treat each other with tolerance and respect. That's the main line. You can see the details up here. And we really want to stress that we welcome everyone, from all backgrounds, here.

Now, in order to resolve any issues in case you feel that you are not treated well, we have three trusted contacts: Mirijam, Vesna and Rob. You see their pictures here, so if you need somebody to discuss with in case you don't feel treated well, please contact our trusted contacts.

We also have a group of Working Group Chairs, so if you have any questions on technical matters in any of the areas, where to go and discuss a certain point or anything else, here are the volunteers that are putting together the Working Groups for you. So, please go and talk to them; they have yellow badges, so, if all the Working Group Chairs raise their hands... you can see they are scattered around here.

We also have a Programme Committee. So the content that you will find in the Plenary is put together by these people. The Chair is sitting here on the first row, Benno, and he will introduce himself and the Programme Committee a bit later on.

As I mentioned, we have microphones here, so this is not just a one‑way show. After the presentations there is an opportunity for questions and comments, so please line up behind the microphones and state your name and affiliation. Even if you have been here as long as I have, or longer than I have, not everybody knows who you are, so state who you are ‑‑ and maybe you're not working for the same company as last year, so it's good to know who is speaking. We understand you are not speaking on behalf of any company, but it's interesting to know where you're from.

And be mindful, there is often a queue at the mics, so even if you are at the front of the queue, it doesn't mean that you can speak for the next half hour. That's usually not a problem, but we have this quote from one of our sister organisations, the IETF: "that was another statement camouflaged as a question". So really focus on the questions, to get good answers from the presenters.

So, the meeting plan, you can see up here. We have Plenary sessions today. There is a task force this evening, there are more Plenary sessions tomorrow. There is a Women in Tech lunch and there is a BoF tomorrow evening, and that's a really interesting one because it's discussing what we want with the RIPE meetings and community. We are now maybe 900 at these meetings; what do we do when we are twice that size? If you do a linear projection, that won't take that long.

Also, on Wednesday and Thursday, we have Working Groups. In the afternoon on Wednesday, we have the RIPE NCC Services Working Group, which is the place where the services provided by the RIPE NCC are discussed by the community. Normally this is something that's decided on by the general assembly in a membership organisation like the RIPE NCC, but the actual discussion is in an open forum, so you don't have to be a member to have an opinion on the services from the RIPE NCC.

The General Meeting is in the evening. That's only for members where they do formal votes on selecting board members and approving changes to the statutes and so on.

More Working Groups on Thursday, and on Friday, we have the NRO reports and the Closing Plenary.

So there are two important things on this, and then of course the most important thing, which I always forget: this is not just about work, this is also about having fun and meeting other people. So there are socials on the agenda. You can come tonight to meet the RIPE NCC Executive Board. And maybe I should mention there that the RIPE NCC Executive Board now has a new Chair, Christian... he is there, so he is now responsible for the RIPE NCC Executive Board. So if you have anything to discuss with them, you have the drinks later today.

There is a welcome reception on Monday and there is a networking party on Tuesday. And then the RIPE dinner on Thursday. And on Wednesday, that's a do‑it‑yourself event, so you actually have to spend today and tomorrow to find some friends to go out and have fun with them.

In order to increase diversity at the RIPE meetings, we have several programmes: we have the RIPE fellowship and RACI programme, bringing in new young blood to the community. You will see some of them presenting some of their work from the research community at this meeting. There is the Women in Tech Lunch with the panel and group discussion, quotas, helpful or harmful? I come from a country where, in public ‑‑ in companies, the board needs to have at least 30% of either gender, is that a good thing? I guess that's one of the topics here. Male allies, why we need them and how to spot them in the La Camelia restaurant. This is an open discussion so it's not gender restricted.

We have on‑site childcare. If you haven't registered for that, contact the staff, there may be open spots; I have heard rumours that there wasn't that big a turnout this time. So if you have your kids along and haven't registered them... maybe there is still a space?

RIPE mentoring, that's something that we have started this meeting, where we have volunteer mentors who have been part of the community for a long time, and we pair them up with newcomers who have requested the opportunity to have a mentor during the meeting.

There is also a task force working on diversity. They met today at lunch and it says here everyone is welcome to join. That was just before this, that's a bit late.

On Friday, there is a session set aside to discuss the RIPE Chair succession plan. So basically, if you want to figure out how to replace me, that's the place to go. There has been some discussion on the mailing list the last couple of weeks; there was a new draft out for the selection process. That didn't reach consensus, so we will not be able to reach consensus on that draft at this meeting. There have been suggestions that maybe we should agree on the role description of the Chair first, but that's something that we will have an opportunity to discuss in the Friday morning session. So if you care about me, come and talk to me, come and talk to Mirijam, Anna, and give your input on how you think the logistics should be at this meeting, or simply state it on the discussion list for this topic.

Accountability task force. The accountability task force has done a tremendous job in going through all of the documents describing how the RIPE community works. And you may have heard in the corridors that, yes, but RIPE doesn't really exist. RIPE is a community that's been there since the end of the '80s, and there is no formal anything. Well, that's not quite true. There are quite a lot of documents there describing different things that happened through time, and this task force has gone through it all and looked at it with a critical eye to see if there is anything missing. I mean, in all the Internet governance organisations around the world, in particular ICANN, there has been a lot of focus on reviews and accountability. So this is a bottom‑up initiative to do that. The task force has been working for quite a while now and has finally published a report, and there will be a presentation on this on Tuesday by William ‑‑ you are here somewhere ‑‑ but anyway, thank you, William, for chairing that task force, and we are looking forward to your presentation tomorrow.

There is a RIPE networking app, so if you want to know what's going on and just have your mobile phone and you live in the world of apps, you can use this one to see the agenda, to schedule meetings with others at the meeting and also to send messages and you can find it on the app store or Google Play or you can use it on your desktop.

And then, I come to a thank you to our sponsors and you can see them all here. And this time it's a bit special because our local host has been to a lot of RIPE meetings over the years, but not really as a local host. So this time the local host is the RIPE NCC, so, Axel, I welcome you to the stage to say a few words.

AXEL PAWLIK: All right. That explains it then. I was working. This looks interesting from here. Welcome, everybody, colleagues and friends and strangers and future friends and kids who have been dumped by their parents in the childcare and parents who have been abandoned by their kids. I'm Axel, I say welcome to everybody, great to have you back in Amsterdam.

Usually I say in the newcomers opening that we, as your secretariat, are responsible for the size of the cookies and the amount of coffee. We are, but I must say this time we outdid ourselves with regard to the weather. I am very, very sorry to put you into this position between inside and friends and outside and sun and some friends too. Try to make the best of it, right.

So, I haven't prepared a long speech, I am just very glad that we are all here. One thing I should say: I'm probably the only person who is allowed to speak on behalf of the organisation, right? So, we have lots of people working at home in the office and they might be following us remotely. We have a lot of people here. And one of the foremost tasks that we have as staff is to be very vigilant and have our ears to the ground and to understand what you want from us. So, apart from the size of the cookies or the spread of the coffee and tea and other beverages, if you have any ideas, any suggestions on what we should do better or not do any more, please do let us know. And there are plenty of opportunities to do that.

I have heard my first complaints, that we failed miserably in providing shorts for this week. We will make good on that and I'll try my best to have shorts at the next meeting, I don't know whether they will do any good. Right, with that I wish you lots of fun, and well... I'll be seeing you.

(Applause)

BENNO OVEREINDER: Thank you, Axel. So, I'm happy to welcome you here. My name is Benno Overeinder. I'm speaking for the RIPE Programme Committee. Hans Petter already showed the mugshots of the PC members. It's important to know that there are four appointed RIPE PC members and there are eight elected; I'll come to that later. I think you know the drill, so I can be brief.

The PC is responsible for the Plenary programme on Monday, Tuesday and Friday morning: 16 Plenary presentations. And I'm actually happy ‑‑ actually I should thank the RACI programme here, because we got a lot of good presentations from individuals sending their submissions, but also very high quality RACI presentations. And actually, this week there is no RACI workshop, because all the submitted presentations have been either scheduled for the Plenary or scheduled during one of the Working Group sessions, so thank you.

For the lightning talks I want to explain again how we work. It's a very lightweight process, for us at least. We accept presentations on a very late notice. So, for today, we have accepted three presentations, we informed the presenters Friday afternoon. For tomorrow, three lightning talks, we will inform the presenters or the persons who have submitted the lightning talk by this evening, five, six o'clock.

The good news is that for this week, you can think of interesting topics and send in a presentation proposal until Thursday afternoon, and we will make a selection and inform you.

If you don't hear from us, just give us a heads‑up or a ping, because it's a lightweight process for us, so maybe we don't inform you on time, or in time, if you are not selected. We do inform the selected presenters, but the others are pending, so we are a little bit ‑‑ you can call it sloppy ‑‑ but, if you are nervous, send us an e‑mail about the lightning talks.

I also want to mention the tutorials and workshops. Important ‑‑ well, I will repeat this and it will be repeated through the week: rate the presentations, please. It really helps us in doing a good job, or we could improve on some parts; or approach one of us and say what you liked and what you think is better or could be improved. That really helps us. And also the presenters: they put a lot of time and effort into preparing their presentations, and constructive feedback is very, very useful for presenters.

PC elections: you know it, every RIPE meeting there are two seats up for re‑election. So, please consider nominating yourself, and especially think of yourself if you're new or young; we don't look for seniority here. We look for enthusiastic people who want to contribute to the community, to the RIPE meeting. It's an excellent opportunity to network, to get to know more people. And also to bring in new presentations, if you know of academics or other network operator groups where interesting presentations have been given that should also be staged in the Plenary at the RIPE meeting.

So it's not only for people who have already been coming to RIPE meetings for ten years. Also, please, please, young people, newcomers, nominate yourself.

Nomination can be done until 3:30 tomorrow. At 4 o'clock, we'll put you on stage here, you can present yourself for 30 seconds, one minute, and then the elections are open until Thursday 4 o'clock ‑‑ 5:30, sorry for the confusion. So Thursday 5:30 we close the voting, the electronic voting, and we will announce the two new PC members on Friday morning.

That's it. I'm handing the mic over to Jan Zorz. He will open the Plenary.

JAN ZORZ: Thank you, Benno. All right. Welcome to Amsterdam. Let's start with the Plenary part of the programme.

So, next topic, let's talk about optics, we all use it, we all love it, and what's happening with 400G? Well, Thomas Weible from Flexoptix.

THOMAS WEIBLE: All right. Thank you very much for this warm welcome. I am from Flexoptix and it's also my first time doing a keynote, so hopefully you won't give me a hard time. Thanks to RIPE for enabling this.

So, this talk ‑‑ I think it fits quite well as a keynote because it's a view into the future, what will happen in the near future, basically next year. The presentation is divided into sections: the first one will be a little bit of theory, quite technical stuff, and then, later on, more practical experience, how you can use the 400G technology in the field.

As I said, I am Thomas, I am co‑founder of Flexoptix. I was formerly more in the software development part of the company, and now I have moved more and more towards the transceiver technology.

So let's talk about theory. And the theory is like this: you get a rectangle shape of a high‑speed signal, and this represents 1 bit, for example. And this 1 bit can be encoded by two events: you get a low event down there and you get a high event up there. That's pretty much what we know from school physics and how a digital signal looks. That's, I think, something we know, or we should know.

Now, when you talk to network engineers, some of them have used the term eye diagram, and the majority have seen an eye diagram, but no one can explain the background, what it is used for. That's what I want to show in the next slides.

So, the main goal is first having a beautiful eye in an eye diagram, and I am going to show now how this is done.

At the beginning we have got a sequence of 3 bits for example. 3 bits in a row we transmit on a line, so here it's 0, 1, 0. Going down up, down. And this sequence of 3 bits is seen in the two events here.

Now, you might wonder: okay, we don't have that rectangle shape any longer. We are talking about high‑speed signals, so the rectangle is squeezed a little bit on the edges and then you get a sine wave, basically. When we have a signal of 3 bits in a row, each toggling between two events, this ends up at 2 to the power of 3, so 8 patterns, or 8 statuses, basically. That's what we see here. Such a possibility would be like you got 0, 1, 0, or 0, 0, 0 in a row, and you can combine all of them, as you can imagine, down to 1, 1, 0.

Then, when you overlay all of those 8 different patterns, you get something which looks like an eye. So, these are overlaid now in the schematic here, and up there you see a full eye diagram of a 100‑gig transceiver. But what makes an eye an eye? This is the transmitter part going through those different events; it's that shape in the middle which is what matters. That shape makes the eye diagram an eye diagram, basically. So here we represent a 100‑gig transceiver with those two events. But we want to avoid the signal, going down from the high event to the low, crossing into that eye, because then we will end up with a dodgy eye, or a broken eye, and for the receiver on the other end it's going to be an issue to determine if it is a high event or a low event. So the main goal of a proper transmission is having this clearly shaped. In other words, you want to avoid dodgy eyes. This was me trying to play ice hockey. I wasn't very successful.
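As a small illustrative sketch (not from the talk itself), the eight bit patterns that get overlaid to form the eye can be enumerated like this:

```python
from itertools import product

# All 2**3 = 8 possible three-bit sequences; overlaying the signal
# traces of these patterns on top of each other is what produces
# the eye diagram described above.
patterns = list(product((0, 1), repeat=3))

print(len(patterns))              # 8
print(patterns[0], patterns[-1])  # (0, 0, 0) (1, 1, 1)
```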

Now, when we move on from that initial shape that we had at the beginning with those two events, and we are talking about 400G now, there is something new introduced. We add two intermediate steps in our rectangle shapes: so this is our low event, we get our H3 now, and there are two intermediate steps, H2 and H1. What is the benefit of that? The great thing is, we keep the frequency but we can double the amount of bits we can transfer on the same frequency, and that's a key component for 400G. An eye diagram for 400G will look like this monster here: we have got three eyes in total, having four different events representing 2 bits each. So just as a matter of the modulation type ‑‑ and that's called 4‑level pulse amplitude modulation, or the acronym is PAM4 ‑‑ we kept the frequency, but with a different modulation type we doubled the data rate.
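The doubling can be sketched in a few lines (an illustrative sanity check, assuming a 25 Gbaud lane as on today's 100 gig optics):

```python
import math

# Bits per symbol for a given number of signal levels:
# NRZ / on-off keying has 2 levels, PAM4 has 4.
def bits_per_symbol(levels: int) -> int:
    return int(math.log2(levels))

baud = 25e9  # symbol rate stays the same; only the modulation changes

nrz_gbps = baud * bits_per_symbol(2) / 1e9   # 25.0 Gbit/s per lane
pam4_gbps = baud * bits_per_symbol(4) / 1e9  # 50.0 Gbit/s per lane

print(nrz_gbps, pam4_gbps)  # 25.0 50.0
```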

That's one really important thing when we talk about 400G, to get there, for example, from 100 gig to 400 gig.

So far the theory part. Now how is that built up? And I'm more like a technical guy looking inside into that stuff.

I'll do a small recap on 100 gig because I think that's important. When we open up a transceiver like this, there are two main components: we have got an electrical side on the left and an optical side on the right. And it's straightforward: the 100 gig transceivers we have today work with four lanes. So we have got an electrical connector here to connect into our host, our switch or router. We have got four lanes, each running at a speed of 25 gig; the signalling is on/off keying, pretty much like the one I showed at the beginning, with the high event and the low event. And this is connected with four lanes to an optical engine also driving 25 gig. Now, with 400G, because we have to basically quadruple the rate, and we already learned that with PAM4 modulation we can at least double it, we have to do a little bit more.

And the trick is basically to double the amount of lanes you put in there: so a 400G transceiver on the electrical side will run 8 lanes, each running 50 gig, but those 50 gig are PAM4. Same frequency as on the 100 gig, but a data rate of 50 gig. Those 8 lanes are connected to an optical engine on the right side, also running with 8 lasers at 50 gigabit, and receivers as well. So we have, in total, 16 optical components in such a transceiver.

So this will be the first generation of 400G when we look into the future. Now, the second generation ‑‑ that's the one I think which is interesting ‑‑ is when we look at this now. So, we have still got the electrical side running at 8 times 50 gig, but on the optical side we increase the data rate up to 100 gig. The increase is done with the frequency; it's still PAM4 as modulation, but we now have got four times 100 gig on the optical side, and that's interesting in terms of the applications it can be used for later on. How is this achieved? We have a gearbox here which combines the 8 lanes together into 4 lanes running on a higher frequency.
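The lane arithmetic above can be checked in a quick sketch (illustrative only, using the round figures from the talk):

```python
# Aggregate rate of a transceiver, given its lane layout.
def aggregate_gbps(lanes: int, gbps_per_lane: int) -> int:
    return lanes * gbps_per_lane

qsfp28 = aggregate_gbps(4, 25)       # today's 100G: 4 lanes of 25G NRZ
gen1_400g = aggregate_gbps(8, 50)    # 400G gen 1: 8 x 50G PAM4, both sides
gen2_optics = aggregate_gbps(4, 100) # 400G gen 2 optics, after the gearbox

print(qsfp28, gen1_400g, gen2_optics)  # 100 400 400
```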

There might be people asking the question: hey, why do we make it so complicated and extend the electrical side to 8 lanes, and why not run the 4 lanes at a speed of 100 gig on the electrical side as well? Yes, in theory this would be possible, but no one would like to pay for that, because when we increase the frequency on the electrical side, we need different types of PCB material. The longer the traces are, the more it costs, and no one wants to pay for that. It's even cheaper to get a dedicated ASIC placed on that PCB instead of just decreasing the amount of lanes. You might have heard of Teflon or ceramic; these are two types of material which you could use for the PCB there when you go up to higher frequencies, but, as I said, very expensive.

Now, so far the transceiver; now the form factor side. This is all put into a small tiny box, a pluggable module, and you might be familiar with the current solution for 100 gigabit, the QSFP28, that's that one up there. Why do I mention the QSFP28 here? Because the one for 400G is called the QSFP‑DD, basically for double density, and the great thing about this as a form factor is that the slot itself is backwards compatible with 100 gig. So you can plug a QSFP doing 100 gig into your 400G router. This is achieved because the physical dimensions are the same, and by the rows of connectors: the QSFP28 has one row of golden connectors, and a QSFP‑DD has two rows. When you plug a 400G transceiver into a 400G slot, both rows will be connected. When you plug in one only doing 100 gig, only the first row will connect and the second won't. That's basically how it's done; pretty simple but effective.

The second one ‑‑ the proportions are pretty much the same, but not the absolute values; as you can imagine, those are tiny ‑‑ is the OSFP, and the interesting part of what we see at the end of the OSFP, as a pluggable, is the integrated heat sink. And we will see at the end that the heat and the power dissipation are a really, really tricky part for speeds beyond 100 gigabit, looking towards 400G and even 800 gig.

I mentioned here also the size, and the reason why I mention it is more a practical experience I have seen out in the field. When the first 100 gig were introduced seven years ago, six years ago, they got a long heat sink at the front portion, and quite a lot of people had an issue ‑‑ they didn't have an issue plugging it into their switch or router, but they had an issue when it came to closing the door once the fibre was installed, because that portion at the front was too big. The good news is that won't change; the bad news is it's going up to 35mm. So now the portion which sticks out of the front of the router can be up to 35mm. And it's important to mention, because you should add another 50mm for your plug, and then give the fibre a little bit of bend radius, because it's still a fibre optic, so you should give it another 50mm for the bend radius. So bear in mind when you design racks now, you need at least 130, 140mm from the first rack unit to your door, or else you can't close it any longer. And why do I mention that? Because the switches are also getting deeper and deeper, and you might find that a 1,000mm rack won't be sufficient any longer, so you might look for 1,200mm racks to get all the gear in and close the doors.
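The rack‑depth maths above, in millimetres (the round figures quoted in the talk, not exact mechanical specifications):

```python
# Front-of-rack clearance budget for a 400G pluggable, in mm.
overhang = 35     # transceiver sticking out of the front panel
plug = 50         # allowance for the fibre plug itself
bend_radius = 50  # allowance for the fibre's minimum bend radius

clearance = overhang + plug + bend_radius
print(clearance)  # 135, i.e. the "130, 140mm" to the rack door
```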

Another thing: what do we connect the 400G transceivers to? For sure, pretty straightforward, we can have connectivity of 400G to 400G. But the second one is a fan‑out mode. So ‑‑ and that's shown now with the second generation of 400G transceivers, doing 100 gig on a single lane, four times ‑‑ we can break it out. So we can run four times 100 gig to different gear, and on the 100 gig side changes will also happen: the QSFP28 will be enabled for 100 gig single Lambda. These are different modules from the ones you already have in place, so they are new as well; they will be available next year as well.

So it's still the same form factor as the QSFP28, but on the 100 gig side there will also be a microQSFP or the SFP‑DD, which runs at two times 50 gig.

From the 400G perspective, whether we are talking about QSFP‑DD or OSFP, it doesn't matter what kind of application they are; pretty much at the current stage they will support both types of fan‑out modes and so on.

Now, an important thing to mention at this point is the IEEE. That's basically what we rely on ‑‑ no one thinks of it that way, but it matters when you interconnect transceivers. Say we are running, for example, LR4 down there: you have to interconnect one LR4 to another LR4, that's pretty straightforward. And you might have seen on your IXPs these days that you only get 100 gig interfaces on LR4 and nothing else, although technology‑wise there are other 100 gig interfaces available, like CLR4. The lesson learned, which the Working Groups applied to 400G and beyond, is that we have to define a little bit more on the single mode side: instead of doing 10km on LR4, we have to define somewhat shorter distances and also, I would call it, easier‑to‑produce technology. That's where the FR4 and FR8 were introduced: that's 4 Lambdas up to 2km, but with a spacing of 20 nanometres, and this spacing is critical for the transceiver, because the wider your spacing is, the cheaper you can produce the lasers, basically. And the second one is the DR4; this mainly comes out of that application here, breaking out into four different fan‑out modes.

It is defined for up to 500 metres and was formerly known as parallel single mode.

The good thing is, I think there is a big chance that IXPs, for example, will also, for 400G, enable you to connect with FR4 technology instead of only running LR4, because those are way more expensive. Why do I mention that? LR4 is up to 10km, and in most data centres that's not needed; 2km on single mode is sufficient in the majority of data centres. And you can also see here that the spacing between the different wavelengths is much closer, compared to, when I go back, the FR4 one. So, I think that's a really hot candidate for IXPs connecting to their clients, and you have a defined IEEE specification there, because on the IXP side it's only one interface you define and the customer defines the other one.

Moving on from there, we come to the plugs. There was the hope that there are no new plugs introduced, but no, this hope is gone. There will be new plugs also introduced for the 400G ones. We know pretty much, you might know the MPO plug, and I think it's a plug which is really hard to handle in the field out there. It's known from 40 gig and 100 gig. Now, for 400G, as we saw at the beginning, we need the capability to enable 8 lasers and 8 receivers, so in total 16, so we need also a plug that is able to handle 16 fibres, and MPO‑16 was introduced, so it's a single plug holding 16 fibres and it got a different key so you can't mix it up with MPO‑12, that's handy.

On the single mode side, the LC plug is there and it won't disappear. It's still around for single mode when it comes to coupling quad Lambdas onto one single fibre, for example. Now, 400G also has the capability to break down, as I showed previously, into 4 times 100 gig or even two times 200 gig, and there the CS connector on the left side was introduced. You could call it an LC double‑density plug: it has the shape of an LC plug but it's half the size, so it's double density there. And then you can enable applications where this is a 400G transceiver and each plug is handling 200 gigabit, for example.

Finally, what I already mentioned at the beginning is power consumption. And that's where I want to dwell a little bit. Power is ‑‑ well, it's good and bad there. First of all, we want to reduce power, for sure. But for the latest technology, think about it yourself: if you want to run DWDM coherent with a 100 gig transceiver, the majority will fail and figure out we can't get those components sourced, and the reason is that those pluggables for 100 gigabit were never designed to handle that much load or power. Now, there were some lessons learned for 400G to enable those pluggable modules to handle more power, and one thing is feeding in that power: for QSFP‑DD we are talking about 12 watts, and there are goals of going up to 15 watts. The OSFP is already at 15 watts, and there are lab environments where it can do 20 watts. The problem is, one thing is getting the power into such a pluggable, but the other problem is getting the heat out again, because they generate heat. Just do the math on your own: when you have a switch with 32 ports handling 400 gig, and each one holds a transceiver doing 15 watts, then just for the optical portion this will be around 500 watts. Not taking into consideration that there is a power supply in there, memory, RAM, CPU ‑‑ they also might need a little bit of power to operate ‑‑ and you will end up with a switch in one rack unit hitting the one‑kilowatt mark on heat dissipation and power consumption.
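The power‑budget maths above, as a quick sketch (illustrative round numbers from the talk):

```python
# Optics-only power budget for the example 1U switch.
ports = 32             # 400G ports on the example switch
watts_per_module = 15  # per-transceiver power, QSFP-DD/OSFP class

optics_only = ports * watts_per_module
print(optics_only)  # 480 W: the "around 500 watts" for optics alone,
                    # before CPU, RAM and PSU losses push towards 1 kW
```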

But why is it so essential having that much power in such a pluggable? As I said already, the goal is getting beyond that 10km mark which we already have. In 100 gigabit, the 10km distance was somehow the landmark which was hit. There are ways to go up to 25, 30 kilometres, yes, but that's it, basically. And now we can do it as a pluggable module in a nice and small form factor ‑‑ I'm not talking about CFP form factors any longer, I'm talking about tiny modules.

For OSFP and QSFP‑DD, that's the great thing: we can feed in way more power and get rid of the heat. And there are goals and components ‑‑ I don't want to say they are available yet, but they will be available ‑‑ making it possible to build coherent DWDM into such a pluggable, and then you will end up with a 400 gigabit transceiver doing coherent up to 80km without any amplification. And that's pretty handy.

That's basically it about the 400G ones. I am pretty sure there will be a lot of tasks to do also for 400G on the configuration part, to change the configuration of those transceivers, because we have seen with 100 gigabit that we had to tweak the configuration of transceivers so that they are operational ‑‑ like getting a 30, or 25, 20km transceiver up and running in legacy layer 3 gear just by changing the microcontroller inside. And if you have questions, let me know. Thank you very much for listening.

(Applause)

JAN ZORZ: Thank you, Thomas. That was a very good deep tech dive to start the meeting. Yeah, any questions? I can't believe you don't have questions. Amsterdam, wake up.

RANDY BUSH: Randy Bush, IIJ. It's nice every five years to hear the same story or the similar story with the decimal point moved one more place. How long do we have to wait for terabit?

AUDIENCE SPEAKER: Hello. My name is Cyrill, I'm from a Russian data centre operator; we are also known as an infrastructure service operator.

From the future perspective your presentation was really interesting, but from the Internet operator, telecom operator point of view, what about long distance transceivers? Because right now I cannot buy even a 40G transceiver for 40 or 80 kilometres, and I have to use an active DWDM infrastructure to provide high speed long distance links.

THOMAS WEIBLE: I totally agree. We learned that there was no capability to build a DWDM solution into a transceiver, just because of the power issue. Now for 400G there are goals to get up to 80 kilometres and beyond that with DWDM coherent solutions. Yes, there is the target to build that; we are not there yet, but at least the design is made so it can be built like that. So, yes, it will be there.

AUDIENCE SPEAKER: Do you have any predictions when?

THOMAS WEIBLE: Well, I'd need a crystal ball. First of all, I mean, what I presented now is not even available on the market yet, so we will see the first switches ‑‑ I saw, a couple of weeks ago, the first 400G switch in 1U, and it still looked a little rough; even the front panel wasn't made properly yet. So the first switches will be available by the beginning of next year, and again, there is a shortage on the optical components too; they are not available yet. But up to 2km or 10km, we will see it by the beginning of next year, and as for the DWDM ones, let's see. I don't think it's before 2021.

AUDIENCE SPEAKER: Do you think any long‑reach 40G or 100G transceivers will be available in the market in the next two years, maybe ‑‑ 40G at 40km, 100G at 40km?

THOMAS WEIBLE: That's pretty much what I can see there, because when we go back to that one ‑‑ the QSFP is a pure 100 gigabit transceiver, and it will also get the technology which was made for 400G built into 100 gigabit transceivers. It's more a matter of needing DSPs of a decent size and with a decent power consumption. So, yes, as a result of the 400G development, you can use that technology also in 100 gigabit or even 40 gigabit to build those components. That's doable and I'm pretty sure it will be there. The main question is: how much will it cost, and who is going to pay for that? That's, like always, the question, but with optics we are at a scale where the quantities are the really critical question there.

AUDIENCE SPEAKER: Thank you.

JAN ZORZ: Okay. We are ten minutes ahead of the schedule. So, if there are more questions, I'm sure we can accommodate a couple more.


RANDY BUSH: I asked the question and it wasn't answered. Right. And as you say, you have got the trade‑off of the economics, and the problem is, in the core or in an at‑scale data centre, we need speeds which PCs have not made commodity demands for; therefore the cost per unit is holding back the industry. How much is that versus the technology for getting to 1,000 gig? You know, 400 gig ‑‑ as you say, we have alpha gear in‑house. When do we get the next step? I mean, we heard this story at 40 gig, right? It was the same song. Is there hope for the next step?

THOMAS WEIBLE: Yes, there is hope. But the next one will be 800 gig first, and then let's see from there. But for 800 gig we're talking the next four, five years. Does that answer the question, Randy?

RANDY BUSH: Yes. We're getting a little tired of LAGs, right. LAGs are painful. And LAGs re‑order.

JAN ZORZ: Okay. No more questions? 3, 2... and thank you, Thomas.

(Applause)

So we talked about fibre, and now let's talk about DNS. We have Sara Dickinson from Sinodun IT, so let's talk about DNS.

SARA DICKINSON: Thank you. If you were at the last RIPE, you might remember I gave a lightning talk on some of the things that were happening with doing DNS over HTTP. This is an expanded version of that talk where I want to give a bit more context about what's been happening over the past five years, in terms of how DNS is being done from end devices.

So I'll first talk about the new standards that have emerged from the IETF in terms of doing DNS over both TLS and HTTPS. I'll talk about the deployment status of both clients and servers for those technologies. And then I'll spend a bit of time talking about the recent things that have been happening with the browsers, some experiments in the last six months and what the future might hold.

A little bit about my background. First of all, why I'm up here and why I'm talking about this. I'm co‑founder of Sinodun IT, based in the UK. We do all things DNS: we work on R&D, Open Source implementations and standards. We were heavily involved in the development of DNS over TLS, in some of the specification work, early implementations and deployment. We are also a contributor to a collaboration called dnsprivacy.org, which is various organisations and individuals who are interested in advancing the cause of DNS privacy. We have a website and a Twitter feed, so if you are interested in seeing what's going on, take a look.

We are not, however, directly involved in any of the work on DNS over HTTPS. We are not authors on the standards, we haven't done implementations, we don't have links with browser vendors. So if you don't like what you hear on that topic, please don't shoot the messenger.

My goal is to raise awareness about what is happening here. I have talked about this several times recently and every time people come up and say, I had no idea this was what was going on. Please come and talk at this other conference and let that audience know about it.

And there is a whole range of things to talk about here. There is good, bad and there is ugly all involved in encrypting the DNS.

So, first up, I'll say the DNS is really showing its age. It's over 30 years old and neither privacy nor security were part of the original design. So we have spent the last 20 years trying to retrofit those two properties into this protocol.

Work on DNS over TLS started four or five years ago and it was really triggered by the Snowden revelations, which showed the extent of the pervasive monitoring that was happening in the DNS. The IETF took a strong stance on this and declared that pervasive monitoring is an attack on the Internet. They formed a brand new Working Group called DPRIVE, which is where DNS privacy comes from. The goal there initially was to look at encrypting from the stub to the recursive resolver. A lot of that work has been completed and the group has been rechartered to think about encryption beyond the recursive.

After a few years of wrangling with various solutions, the Working Group decided that the way forward was to use DNS over TLS. They got a new dedicated port, 853, for this service, and that became a standard two years ago.

Where are we with implementation status? Well, we have a whole range of implementations; I'll call out a few. One is Android Pie: you might be aware that in this release, the system resolver is actually actively probing the resolvers it gets from DHCP on port 853, and if it discovers that a resolver offers DoT on that port it will use it in preference to clear text. If it can't get it, it will do normal DNS.
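As a rough illustration of what that opportunistic probing involves ‑‑ a sketch, not Android's actual implementation ‑‑ here is the shape of it in Python: DoT (RFC 7858) is TLS on port 853 carrying DNS messages with the same two‑byte length prefix as DNS over TCP, and opportunistic mode skips certificate verification because no authentication name is configured:

```python
import ssl
import socket
import struct

def build_query(name: str, qtype: int = 1, txid: int = 0x1234) -> bytes:
    """Minimal DNS query in wire format: 12-byte header plus one question."""
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)       # QTYPE, QCLASS=IN

def frame_for_tls(query: bytes) -> bytes:
    """RFC 7858 reuses the DNS-over-TCP framing: a two-byte length prefix."""
    return struct.pack("!H", len(query)) + query

def probe_dot(resolver_ip: str, timeout: float = 2.0) -> bool:
    """Opportunistically probe port 853 on a DHCP-provided resolver."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # opportunistic mode: no auth name,
    ctx.verify_mode = ssl.CERT_NONE     # so nothing to verify against
    try:
        with socket.create_connection((resolver_ip, 853), timeout) as sock:
            with ctx.wrap_socket(sock) as tls:
                tls.sendall(frame_for_tls(build_query("example.com")))
                length = struct.unpack("!H", tls.recv(2))[0]
                return length > 0       # got a framed DNS response back
    except OSError:
        return False
```

If the probe fails, a client in this mode simply falls back to clear‑text DNS on port 53, which is exactly the behaviour described above.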

systemd, whether you love it or loathe it, is also moving in a similar direction in terms of making this opportunistic mode of DNS over TLS the default. Stubby is a desktop client which is widely used and cross‑platform; that's probably the most widely used desktop client at the moment.

It's implemented in most of the big DNS Open Source resolvers. And in terms of what servers we have out there offering this, we have 20 test servers. As of November 2017, Quad9 offered an Anycast service and in March of this year, CloudFlare also stood up a DNS over TLS service on their Quad1 address.

So I will point out that most of the implementations of DoT, as I'll call it, are at the system resolver level. We are missing native implementations in Windows, macOS and iOS, but we are very hopeful they'll appear in the next few years. If you want to stand up a DoT server, it's easy; there is lots of software that supports it, and there are guides.

So, what do you get when you actually encrypt DNS? Well, the big thing that you get first and foremost is obviously that you defeat passive surveillance; this is the good thing. With these specs you also have the option to configure the client with authentication information, which means that it can use either PKIX or DANE to authenticate the server.

You can prevent redirects: your client can choose to hard‑fail if it can't authenticate the server, and it also means that the DNS queries from the client can't be intercepted without the client being able to detect it.

Secondly, you can argue that it increases trust in the service that you have chosen to use. For example, if you have chosen to use a resolver because it does DNSSEC validation, authentication guarantees you are actually talking to that service.

Lastly, you get data integrity of the transport, which means spoofed responses can't be injected into an encrypted stream.

So, the mode I just talked about is called opportunistic DoT, and that's where you just need an IP address. The mode where you need an authentication name is called strict DoT. Opportunistic discovery is something we can't do today with DoH, but there are proposals for how that could happen.

On the bad and ugly side: well, a lot of people are saying, why are you even bothering to encrypt the DNS? The SNI still leaks. The good news is, not for long. There is proof‑of‑concept code and there is a draft in the IETF, so it looks like encrypted SNI is a reality that will appear in the very near future.

I mentioned DNS over TLS runs on a dedicated port. There is the potential that that port could be accidentally or intentionally blocked. DNS over TLS can in principle be run on 443 as a fallback, but it's not the default.

What this does do is focus the mind on the question of which resolver you are then going to choose for your queries, because the resolver is a single entity that still sees all your DNS queries. So a very important question here is: who do you trust to provide that service?

Now, one thing to say is that at the moment, as you saw, most of the servers that provide this are Anycast servers in the Cloud or run by a handful of individuals. It's not being served within networks a lot today. If you want to actively encrypt your own DNS, you have to go outside your network.

So one of the consequences is, if your DNS is encrypted, a whole bunch of things will start to have problems. One I want to mention is split‑horizon DNS. Of course, a client could have some fallback mechanism where it tries to resolve externally and falls back to resolving internally, but by that point you have already leaked the names. This happens today with some people choosing to actively configure Quad8 on their machines; you can see that as an operator, whereas with an encrypted stream you can't. One thing a lot of operators are very concerned about when they hear this is the fact that this encrypted DNS will bypass local monitoring and security policies, and this creates a conflict for them. So we have a case where providing additional privacy for users is directly producing an operational concern for operators.

Now, in the world of DNS over TLS, the goal from the Working Group was really to encrypt the DNS with the smallest possible change on top of the existing infrastructure. So nobody ever really suggested doing anything other than encrypting from the system resolver to the resolver you already used, and these cases of bypassing the network were thought to be short‑term or rare.

With DNS over HTTPS the story is different. The deployment model is going in a very different direction, and it's being led by the browsers: browsers are talking about doing DNS over HTTPS directly from the browser, potentially to a set of pre‑configured Cloud resolver services.

So, where does DNS over HTTPS fit into this picture if we already have a solution? It's a much more recent development; work only started on this about 18 months ago. A working group dedicated to just this was spun up very quickly at the IETF, the draft went through, and it's already been accepted to be published as an RFC. So, this will be a published specification shortly.

So, the specification side has been very accelerated. For the IETF, this is super fast to get a new protocol defined.

Before I go too far, I want to call out a few things at the specification level that are quite different between DoH and DoT. The first one is the use case, as I mentioned. For DoH the use case is much more ambitious: the spec says it will allow web applications to directly access DNS information using existing browser APIs. So, they were looking at using this in a very different context from the get‑go.

One of the things about discovery is that, according to the spec, you strictly must use a URI template to connect to a DoH server, and the problem is that, today, we have no standardised discovery mechanism with which you can obtain one. So, it means we don't have the option of doing opportunistic discovery ‑‑ probing what's on our network and then using that day to day. There are proposals in the works, some around DHCP, some other methods.
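For context on what that URI template is used for, the wire‑level mechanics the spec defines are simple. A sketch of building an RFC 8484 GET request URL, assuming the server's template reduces to a fixed path like CloudFlare's `https://cloudflare-dns.com/dns-query{?dns}`:

```python
import base64
import struct

def build_query(name: str, qtype: int = 1) -> bytes:
    """Minimal DNS query in wire format. RFC 8484 recommends ID=0 for GET
    requests so identical queries produce identical, cacheable URLs."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # RD=1, QDCOUNT=1
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, 1)    # QTYPE, QCLASS=IN

def doh_get_url(base_url: str, query: bytes) -> str:
    """The 'dns' parameter is base64url with the '=' padding stripped."""
    encoded = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{base_url}?dns={encoded}"

url = doh_get_url("https://cloudflare-dns.com/dns-query", build_query("example.com"))
print(url)
```

The point of the contrast with DoT is visible here: the client has to be configured with that base URL up front, because there is nothing to probe for on the local network.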

Where it gets interesting is when you think about the connection models. Obviously with DoT you are doing dedicated connections. There is a comparable model with DNS over HTTPS where you open an HTTP connection and the only thing you send on that stream is DoH queries. But there is also the potential for a mixed‑mode scenario with DoH, which means that an application like a browser could have an existing HTTP connection open, determine through some discovery mechanism that the far end has a DoH server, and start sending its DNS queries over the existing connection, completely mixed in with the HTTP traffic.

Now, there is a use case to say this could be beneficial from a privacy standpoint. If I have a browser open and a tab for, say, my bank and my bank does DoH, then I can send my DNS queries to my bank and I'm not then leaking them to a third‑party because that bank already knows all the content I'm looking up.

But where things get interesting from an operational perspective, is that, of course, if these mixed connections do contain DNS queries, it becomes impossible to block just the DNS traffic that's happening there.

Now, that could be good for users who are in oppressive environments, but for operators this is a brand new challenge to think about. The last thing I want to call out is that there is also a step change in the potential for tracking of users with DNS over HTTPS. Historically, with DNS, you have an IP for an end device and just a stream of DNS queries being emitted to a resolver. Potentially with DNS over HTTPS you might have multiple connections from an end device, each originating in a different application, and the HTTP headers in each of those streams will identify the originating application via the User‑Agent header; there are also other headers in there that can provide information about the end user. So this could potentially provide a whole other level of granularity for profiling activity on an end device.

In terms of deployment and implementation status, DoH has probably achieved in 18 months what DoT did in five years. There are multiple test services stood up, and again, CloudFlare, Google and Quad9 all offer DoH.

In terms of clients and servers that are available, the way is being led by the browsers, by Firefox and by Chrome, and I'll talk about this in a moment. But there are also some desktop applications if you are interested in doing this.

I will focus a little bit on the relationship between Mozilla Firefox and CloudFlare and in some quarters this is referred to as the 'Moziflare bromance'.

What have these guys been getting up to? Actually, some browsers have always done their own DNS; Chrome has its own DNS implementation. There are a handful of browsers out there that already use encrypted transports: Yandex uses DNSCrypt and ‑‑ uses DoT. What's interesting is, you might not have noticed it, but Firefox has had a DoH implementation since Firefox 61, so that's well over six months. It's not currently enabled by default, but they have been doing experiments in their Firefox Nightly build.

Whilst the experiments that Firefox has been doing are the ones that have been hitting the headlines, please be aware that Chrome also has a full DoH implementation under the hood. A couple of weeks ago a PR went in to add a user configuration option for it, so expect it to appear in Chrome in the very near future. And of course, I'll just say that Google has a handy recursive resolver at Quad8, which has been extended to offer DNS over other transports.

One other thing to remember here, when we are talking about how rapidly this has evolved, is that the difference between the browsers and the OSs or the ISPs is that the browsers control the clients. They can put new releases out in a matter of days and weeks, compared to the much longer time scales for OS changes.

So, why are browsers looking to use DoH, and why do they want to encrypt directly from the browser? The kind of answers you get from the browser community are along the lines of: the OSs are too slow, we're not prepared to wait around and depend on a third party to implement it, we're going to do it ourselves.

Also, it's a unique selling point: we are a privacy‑preserving browser, we care about this, we are going to do it for the good of our users.

They also see an opportunity to improve the user experience ‑‑ which is obviously the Holy Grail in the browser world ‑‑ primarily in terms of latency within the browser. So why did they choose DoH instead of DoT?

That's a link to a blog by one of the Mozilla folks laying out the reasons they went down this path. It covers various things, including that, from the browser perspective, leveraging the HTTP ecosystem makes a huge amount of sense: they understand proxying, they understand caching in that world, and they think they can accelerate the DNS this way. And you have a whole community of people saying: don't bother with this port, put it on 443, it never gets blocked, it just works.

Lastly, it's kind of an 'oh, shiny things' scenario for the browser folks, because when you start doing DNS over HTTPS, there is the potential to do all sorts of cool things. You can define new media types ‑‑ for example, I think JSON is on the horizon. You can do server push, which, in some contexts, can help with DNS: think of all the latency you save by pushing the answers before they are asked for. And there are also proposals for doing what's been described as resolverless DNS, which asks: does the thing at the end of a DoH connection really have to be what we think of as a classic caching validating resolver, or could it be something else, because it's actually delivering content in some sense? So there is a whole range of potential here that's quite unexplored at the moment.

Some people have said it's the way to do DNS 2.0 without doing DNS 2.0. I'll leave you to make up your own minds.

Back in May, Mozilla announced in a blog that they wanted to do an experiment and, in that blog, they said a couple of things that really caught people's attention.

One is that in the long term, they want to turn DoH on by default for their users. So that's their ambition.

The other thing they said is that they have this agreement with CloudFlare because CloudFlare has a very strong privacy policy in what it does with its DNS data, and that's true.

They then coined a brand new term, TRR, Trusted Recursive Resolver, and they said: because CloudFlare is our TRR, we think Firefox can ignore the resolver provided by the network and go straight to CloudFlare. So that was the proposal at the time. They went ahead and did the experiment in June and blogged about it in August; these are links to the blog posts. They did it by opting in half of their Firefox Nightly users, sending all queries in parallel to the system‑provided resolver and out to CloudFlare, and comparing the results.

What I found particularly interesting is that, when they wrote up the results, they enumerated the questions that they were trying to answer, and number one was: does the use of a Cloud DNS service perform well enough to replace traditional DNS? It wasn't about validating any nitty‑gritty details of the protocol. It was about answering this much bigger question.

The conclusion they came to was that if you use CloudFlare, you take a 6‑millisecond performance overhead, and that, given that you get encryption for it, that cost is acceptable.

So, they have decided that they are going to continue down the road of doing the experiment. They said they are committed to investigating the Cloud providers as the option that they want to look at, and just last month they announced another experiment: they are moving the experiment into Firefox beta.

I'd like to come back to this concept of the Trusted Recursive Resolver. Since those original posts, Mozilla have been very careful not to commit to saying what their config will be, and if you ask them in various places they'll say: we're still trying to figure this out. So, it's not a done deal, but we don't know what they are going to do, which basically means the DNS community is in limbo, waiting for this decision from browser vendors ‑‑ a decision which could have an enormous impact on how stub‑to‑recursive DNS works.

One of the things it will do is alter what is, today, the implicit trust model of DNS. In other words, by choosing to log on to a given network, you implicitly accept that you are going to use the resolvers provided by the network. This could change to installing an app and clicking, after reading in detail the 13 pages of Ts and Cs, 'yes, this is what I want to do' ‑‑ having missed the line in there saying the company is going to send your DNS queries to the resolver of its choice. So this is an interesting question, and one of the concerns raised in the community is: does this mean we are heading towards a potential centralisation of DNS resolvers? Given that, today, browsers have lists of CAs that they trust, can you imagine a future where they have lists of DNS providers that they trust? If the one you want isn't there, you have to go in and manually configure it.

The reactions to this news, as you might imagine, are a bit mixed.

There are some folks who are saying this is actually good, because it's provoking discussion about the fact that DNS needs to be encrypted, and it's confronting operators with the reality that this is something coming their way, whether it's DoT or DoH, and that they need to adjust to it.

There are others who are looking at this and kind of going: I don't know where this is going, I feel very uneasy, I don't know where this will lead. And then there is a part of the community who think this is the single worst idea that they have ever heard and it's the end of the DNS republic as we know it.

To dig into some of the reactions in a bit more detail: some operators are saying, if this got turned on tomorrow, I would actually be hit with a massive fine; I have legal and regulatory obligations to manage the DNS coming into my network, I'm not prepared for this, and it will cause me significant issues. Some go so far as to say that if this happens, they will be forced to either block this traffic or proxy it, man‑in‑the‑middle it, something like that, because they need to have control of their network ‑‑ historically they have had the luxury of unencrypted DNS.

Some of this slightly echoes the discussion that happened over TLS 1.3, and it's a case that is now being transferred into the DNS world.

I have also heard questions about whether this is legal. If an app does this by default ‑‑ under GDPR, users are required to give informed consent if their data is going to be transported outside the EU. How many people do you know ‑‑ sort of muggle friends from the rest of the world who don't understand technology ‑‑ who can give informed consent about what should happen to their DNS queries, when A) they don't understand DNS, B) they probably won't look at changes in terms and conditions, and C) it might be a case where the provider doesn't state clearly what jurisdiction their data falls under?

Now, one of the things this does is take you back to thinking about the fundamental threat model involved with DNS. There is no arguing with the fact that a TRR can be useful in some networks: if I'm in a coffee shop, I'd probably much rather go to somebody like Quad9, for example, so there are absolutely cases where this is useful. In terms of ISPs, in the US the lack of net neutrality means that many users there view their ISPs as a threat, because they are aware that those ISPs are directly monetising DNS queries and selling them to advertisers, for example.

The converse is true in the European Union. Because we have GDPR, our ISPs are very tightly restricted in what they can do with our DNS data and nine times out of ten your ISP is probably a better choice in terms of privacy than any Cloud based DNS resolver.

So, in terms of those questions about where you choose to send your queries, there are multiple options.

The other slightly scary thing is when you start thinking about this concept of applications controlling TRRs. What happens if some governments decide that, for apps to be available through the app store, they have to use certain government‑maintained TRRs? What if some TRR operators start providing incentives to app developers to use their TRR? Will any of this be transparent or obvious to users?

Lastly, I would encourage you to read a blog by Bert Hubert of PowerDNS, where he also goes through some of the other implications of moving to a third‑party DNS provider.

He calls out a couple of things, and one is the neutrality of those providers. Some of them are CDNs, and therefore they are providing DNS service for users to access content provided by some of their competitors. So there are numerous questions there about conflicts of interest. And as I mentioned earlier, there is, today, quite a lack of transparency in terms of: if I do go to a service, what are the authorities that are responsible for blocking or filtering or intercepting the content that goes through that resolver?

So lots of questions, not a whole lot of answers at the moment.

Lastly, I just want to call out something which comes slightly from the other end. One thing that enterprises are going to have to worry about here is what challenges they will face if they need to manage the DNS configuration of lots of end devices. Today it's straightforward because you can do it with DHCP. However, in this world, each app can have a different idea of where it's going to send the DNS data. We don't know what the other browsers are going to do, we don't know what other apps are going to do, but there is a real possibility here that we will lose a central configuration point on an end device for where you choose to send your queries, and that DNS will disappear into content delivery rather than being part of the infrastructure of the device.

What should you guys be doing? My advice here is that, if you are an operator, think about running DNS over TLS on your server. There are already devices today that will be probing on port 853; that number is small today, but it will increase as Android Pie deployment grows and as the various vendors pick up the new release. So it's the right thing to provide users the opportunity to do encrypted DNS within the network.

Also think about running a DoH server. Wherever the story with the browsers goes, it's a good thing if users have the option to use a local DoH server in a trusted network. So we think it's the right thing to start getting people running this and thinking about what the challenges will be.

I would also say: watch this space and keep an eye on all the current work. There is work on discovery, and we don't quite know where that's going to go. There is a draft that I talked about a couple of years ago in the BCOP group on best current practices for operators of DNS privacy services, because this is now an important thing: users might choose to use your service or not solely on the basis of your privacy policy. So we need to be transparent about this and accountable to our users.

If you are interested in this topic: yesterday at the DNS‑OARC workshop I gave a talk on this, and that's a link to the slides for that talk. Also, as I said, keep an eye on dnsprivacy.org and Twitter, and there is a session in the BCOP task force later today talking about doing DNS over transports other than UDP and the operational challenges of that.

So, basically, stay tuned. We really don't know where we're going to be in two months, six months, a year with this, and I think there is a lot of concern in the operator community about where this could go and the implications for them. With that, I can see a question queue already, so thank you very much for your time.

(Applause)

BENNO OVEREINDER: Thank you very much for that excellent presentation.

AUDIENCE SPEAKER: Hello, Jan Zorz, random guy from the Internet. I am running a DoT server in my lab as one of the test ones, so I think that's good and I am all good with DoT. It is DoH that is slightly bothering me. I think that, from an architectural perspective, the DNS resolving function does not belong in the browser ‑‑ and we need to have this discussion ‑‑ because, as you mentioned, if every application on our computer decides to start sending DNS queries to various different providers, in brackets, around the world, with different questionable policies on how to handle our data, I don't think this scales. I just think we are creating a mess at the edge of the network.

And secondly, you mentioned centralisation of the DNS function. Yes, I think we need to start a discussion about centralisation of the Internet. In the DNS world, we see Quad8, Quad9, Quad1, and the centralisation of the DNS function, and there are many other things that are becoming centralised. If we go back in time, we also said that building a centralised network, keeping the smarts in the core, was telco thinking, and keeping the smarts at the edge was the Internet way of doing things. So, why are we now trying to put all the smarts back in the core? Are we transforming the Internet into a telephony network? What are we doing? I think Randy, years ago, had a brilliant presentation called the revenge of the core, and I think we need to revisit that.

AUDIENCE SPEAKER: It's David from Oracle Dyn, and despite being a co‑chair in the Working Group, I will point out that I'm probably mostly in the 'I have got a bad feeling about this' category in your graphic. You already made a lot of the points that I wanted to make, but I really want to emphasise just how potentially disruptive this is. Even looking beyond the policy space, there are immediate operational challenges presented by this: things like the browser vendors definitely wanting to move in the direction of server push, which challenges the DNS security model. That's hugely disruptive. And a number of operators here already run services on top of the DNS, such as parental controls, where all of a sudden, by installing a new browser, I am circumventing my own parental controls which I have decided on for my house. And finally ‑‑ I had one more point to make. Oh, this really challenges the infrastructure that is in place for many, many different CDNs and how they rely on DNS to direct traffic. Set aside the policy issue: all of a sudden, the increased prevalence of this could be extremely disruptive to all the optimisation that has been done for delivering traffic on the Internet, and could increase transit traffic. The potential for disruption here ‑‑ at the moment, it's just the water getting a little warmer, but ‑‑

SARA DICKINSON: I think one of the problems here is the uncertainty. We don't know if it's going to happen, when it's going to happen, what the scale of it is going to be so that's a concern as well.

AUDIENCE SPEAKER: Daniel Karrenberg. I thought I was out of the DNS discussions. Following on from what Jan said and what the last speaker said, let me try to ask a question: Is the DNS community, in your opinion, even concerned that we might lose the single DNS name space this way?

SARA DICKINSON: Absolutely. I think there is a possibility, when you look at what the browsers are talking about ‑‑ I mean, there has been a whole range of proposals around resolverless DNS, and we don't even know what the browsers mean by that or where they are going, but they are certainly talking about doing contextual DNS queries within these HTTP connections to something that could customise the responses. We see geolocation‑based answers all the time, but what other dimensions could you go down to customise these? We know the browser guys are pretty inventive, so ‑‑

AUDIENCE SPEAKER: Well, will it then keep working?

SARA DICKINSON: I think it will work for the browser vendors' use cases.

At the back ‑‑

AUDIENCE SPEAKER: Karl Henderson, Verisign Labs. So, you did talk about a lot of momentum in DoH, largely driven by the browser developers, towards the recursive. Do you see any design work or momentum towards having the same transport on the other side, between the recursive and the authoritative?

SARA DICKINSON: Are you wondering about using DoH for recursive to authoritative?

AUDIENCE SPEAKER: Or DoT.

SARA DICKINSON: Yes, there is certainly interest in that. There has been a lot of discussion lately in the Working Group, and DoT is probably the favourite transport for that at the moment. I think there have been proposals about doing DNS over QUIC in the future, and that's possibly a better fit. The big ‑‑ there are certain people who are very worried about the scaling of that going to authoritatives, and there are also problems with the authentication model, and I think those two things would need to be solved before we see serious uptake between recursive and authoritative.
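For context on what these transports carry: DoH (RFC 8484) wraps an ordinary DNS wire‑format message in an HTTP exchange, with GET requests carrying the query base64url‑encoded, without padding, in a `dns` parameter. A minimal sketch of that encoding, using only the standard library (the resolver URL is a placeholder, not a real service):

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (RFC 1035).
    qtype 1 = A record. The ID is fixed at 0, as RFC 8484
    suggests for HTTP cache friendliness."""
    header = struct.pack(">HHHHHH",
                         0,        # ID
                         0x0100,   # flags: standard query, RD=1
                         1,        # QDCOUNT: one question
                         0, 0, 0)  # ANCOUNT, NSCOUNT, ARCOUNT
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QCLASS IN
    return header + question

def doh_get_url(resolver_url: str, query: bytes) -> str:
    """Encode the query for an RFC 8484 GET request:
    base64url, padding stripped, in the 'dns' parameter."""
    b64 = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"{resolver_url}?dns={b64}"

# Hypothetical resolver endpoint, for illustration only.
url = doh_get_url("https://dns.example/dns-query",
                  build_dns_query("www.ripe.net"))
```

The same wire‑format message is what DoT carries directly over a TLS connection (with a two‑byte length prefix), which is why the choice between the two is about the transport and authentication model rather than the DNS payload itself.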

AUDIENCE SPEAKER: One of the projects I'm working on is to find some indicators to measure the good health of the Internet, and one of the proposed metrics is concentration. We have some tools in place where we are trying to measure that, so there are people who are trying to measure this concentration, and I would like to talk to them this week, because I think this would be a really good indicator of where the market is going and how fast it is going. Because I do agree that the browser vendors ‑‑ I'm talking without any hat ‑‑ have been very creative, very imaginative. They have been on this path for many years: they have their own libraries and all of that, they don't depend on the operating system, and DNS is just one more step in that direction. It would be hard to stop that movement. So this is, essentially, in my opinion, something that will happen; we just need to understand at what pace.
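One common way to turn the concentration the speaker mentions into a single number is the Herfindahl‑Hirschman index over market shares. A sketch ‑‑ the resolver shares here are made‑up, illustrative figures, not measurements:

```python
def hhi(shares):
    """Herfindahl-Hirschman index: the sum of squared market
    shares. With shares as fractions summing to 1, HHI ranges
    from 1/n (evenly split among n players) up to 1.0 (monopoly),
    so a rising HHI over time indicates growing concentration."""
    return sum(s * s for s in shares)

# Hypothetical resolver market shares (illustrative only).
even = hhi([0.25, 0.25, 0.25, 0.25])    # evenly split: 0.25
skewed = hhi([0.70, 0.20, 0.05, 0.05])  # one dominant player: 0.535
```

Tracking such an index over repeated measurements would give exactly the "how fast is it going" signal the speaker is after.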

SARA DICKINSON: I agree. This is not something that's going to go away; the browser vendors are not going to shy away from it. It's a question of how they do it, how fast they do it and how far they go, in my mind.

BENNO OVEREINDER: We still have 10 seconds for this session.

AUDIENCE SPEAKER: Qrator Labs. You have been following the DoH discussions better than I have, definitely. But I remember all those suggestions about privacy: making user agents different for DoH requests and also sending fewer headers. So what is actually implemented in Firefox? Do you know what is implemented in Firefox to reduce the amount of fingerprintable data?

SARA DICKINSON: They haven't gone to great lengths to reduce it. They have just put in ‑‑ so their argument is that you need these headers for interoperability, for compatibility, and they are putting in the set that they believe needs to be there. There was a proposal to the IETF arguing that when you are doing DoH, you should absolutely minimise the headers, basically strip out everything, and only add them back in if you have to in order to interoperate, as a fallback. Part of that came out of not wanting to add identifying headers to DNS; there was also a request from the Tor community, who can see value in this but who don't want the fingerprinting. I personally can't see the browser vendors going down this road, because when that proposal was made, it was a brand new idea to them: oh, we shouldn't put these headers in? But we need them, we use them. So I am a bit sceptical about how much traction that will get in the browsers, certainly.

AUDIENCE SPEAKER: An unfortunate outcome. Thank you.

AUDIENCE SPEAKER: Friso, Rabobank. Can you say anything about DNS over QUIC?

SARA DICKINSON: It was ‑‑ there was an original specification ‑‑

BENNO OVEREINDER: DNS over QUIC? This evening, there will be a presentation on DNS over anything but UDP, and Oliver will discuss that point also. So, I can advise you to attend that session. Thank you.

SARA DICKINSON: I think the back mic is next.

AUDIENCE SPEAKER: Thanks. I already watched an earlier version of this talk, and people have already mentioned what I wanted to say. I just want to add something. We kind of identified this as centralisation of the Internet. I think maybe the browser vendors have this fear that the browser itself becomes less important as people change to app‑based web use, messengers and stuff. I think this is just another of these land‑grabbing things. I would ask these people: are you going to trust out‑of‑domain response records in your DoH? Well, I hope not. I mean, so...

SARA DICKINSON: The security model is unspecified at the moment, or highly underspecified, so that's another concern, another dimension.

AUDIENCE SPEAKER: Yeah, and lots of other things. Let's talk about that in the DNS Working Group, maybe.

AUDIENCE SPEAKER: Peter Hessler. I am part of the OpenBSD project. As an operating system, we are concerned about where DNS is going. DoT looks pretty interesting, but it has a lot of implementation details that we'll have to deal with; linking SSL libraries into your libc is rough. But with DoH specifically: Mozilla and Chrome do not ship precompiled binaries for us, we have to build them ourselves. And so we have turned off some of their defaults. We have turned off the Firefox studies for all our users, and if the browsers are going to be enabling DoH by default, we will very likely be turning it off, because we don't trust them, we don't trust their decision process. We have seen massive failures when they claim that they are trustworthy, so we think it's a lie.
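For reference, Firefox exposes its DoH behaviour (the Trusted Recursive Resolver, TRR) through `about:config` preferences, so a distribution can pre‑seed a choice like the one Peter describes. A sketch of a `user.js` fragment; the mode values are Firefox's documented TRR modes (0 = default off, 2 = DoH first with fallback to the OS resolver, 3 = DoH only, 5 = explicitly off):

```
// Explicitly disable DoH (TRR) rather than leaving it at the default.
user_pref("network.trr.mode", 5);
// The resolver endpoint that would be used if TRR were enabled.
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
```

Mode 5 differs from mode 0 in that it records an explicit user/vendor decision, which later Firefox rollouts are meant to respect.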

SARA DICKINSON: Interesting. Thank you.

BENNO OVEREINDER: Thank you, Sara.

(Applause)

So, thank you for your patience. We are running five minutes late, and sorry for being rude about the mic protocol etiquette. This evening, there is the BCOP task force, IPv6 deployment by mobile providers, and a more technical, operational presentation on DNS over anything but UDP ‑‑ so that includes DoH, well, DNS over QUIC, and you have HTTP in the middle ‑‑ it's more technical, but there is also room for this type of discussion. And for the rest of the week ‑‑ this is my favourite topic too, of course ‑‑ keep this discussion going; it's also important that operators and ISPs join this discussion. Thank you.

(Applause)

(Coffee break)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.