18 October 2018

At 2 p.m.:

JOAO DAMAS: Welcome. We are about to start the second session of the DNS Working Group at RIPE 77. It's worth mentioning that we have a few extra items for the agenda that we have filed under AOB, so we may be running a bit tight. I would like to not leave out and thank our stenographer Aoife here; she makes the experience much more enjoyable for everyone. Thanks also to Vesna from the RIPE NCC for being the chat interface, and to our minute‑taker, whose name I don't know and cannot read from here, sorry, for taking the minutes. The first talk, without any further delay, is the RIPE NCC update by Anand.

ANAND BUDDHDEV: Good afternoon everyone. I am Anand Buddhdev from the RIPE NCC and I am going to talk to you this afternoon about some of the stuff that we are busy with at the RIPE NCC. And in particular, K‑root, which is one of the root name servers we operate and something about DNSSEC.

I will start with K‑root. This is one of the 13 root name servers of the DNS, and the RIPE NCC operates this service. It's running stable and well, and there has not been much change since the last RIPE meeting. We have added two new instances, one in Vilnius and the other in Luhansk, in Ukraine. I wanted to present this graphic here showing the responses that we send out from K‑root: we average about 80,000 queries per second, and therefore 80,000 responses per second, and about a quarter of them result in RCODE zero. The rest result in NXDOMAIN responses, so I just thought this would be interesting to present. We seem to exist to answer mostly junk.

Next up I want to talk about capacity. Across all the instances of K‑root at the moment, we have about 100 gigabits per second of capacity. Most of our hosted instances are connected at one gigabit per second, and some of our core sites are connected at 10 gigabits per second. However, on average, we do about 250 megabits per second of outbound traffic, and even less on inbound, because queries are of course usually smaller than responses. We are quite heavily over‑provisioned, because we have almost 400 times the capacity of what we do on average. Of course, this usage is not evenly distributed: some sites get more traffic than others because of where they are on the Internet, and also depending on how the hosts of our hosted sites are announcing the prefixes; some of them only announce the prefixes to their customers or only to selected upstreams, so the distribution is not even.

We continue to expand and improve the service that we provide for K‑root. We are going to be upgrading the remaining core sites that we have to 10 gigabits per second, which should give us about 50 gigabits per second of capacity just at the core sites; we have five of these. Then, earlier this year, the RIPE NCC Executive Board approved budget for adding a new site and connecting it at 100 gigabits per second. So we went away and began looking at this, examining all the options and possibilities that we had, and this year we do plan on adding one new site, or one new instance, and this will happen with hardware that is capable of doing 100 gigabits per second. However, we have decided not to connect it at 100 gigabits initially, and there are reasons for that. First of all, 100 gigabit transit connections are quite expensive at the moment, especially if you connect just a single site. The transit providers prefer you to connect at more sites, and you get discounts for volume and things like that, so a single site at 100 gigabits would be quite expensive. So we are going to install hardware that is capable of doing this, but not initially connect at 100. We then plan to add more sites next year, and once we have enough of them going, we can negotiate better 100 gigabit connection contracts with upstreams. So this is the plan.

The other issue is that when you do traffic at such high speeds, there are some engineering challenges: regular servers need to be tuned better, applications like BIND, Knot and NSD that we use have to be tuned a bit better, and if you want to filter at such high speeds, you can't do this with stock software. So there is a lot of tuning and configuration required, and we need some time to adjust to this and develop the tooling to be able to handle traffic at such high speeds.

So, yeah, that is mainly our expansion plan for K‑root.

Next up, I am going to talk about DNSSEC, and in particular about migrating our DNSSEC signers. I will start with a little bit of history, for people who are new here, for example. The RIPE NCC has been signing its zones since 2005; we were the second organisation, after the Swedish registry IIS, to sign our zones. Back in the day, we had only BIND and BIND's dnssec-signzone, so that is what we used for signing, and to make this a little bit easier, Olaf Kolkman, whom some of you might know, had written some Perl‑based tools to help sign and do some of the automation; however, there was still a fair bit of manual work to be done. A few years later, we decided that we wanted to improve things, do more automation and also improve the security a little bit, and so we went for a product by a company called Secure64, which has a proprietary signing solution. We chose this because, first of all, we had very few options back then; there wasn't much else besides BIND's tools, and OpenDNSSEC was around but it was also in its infancy, and we wanted something that just worked. This solution from Secure64 runs on HP Itanium servers, and the claim is that it's more secure, because the Itanium processor has special features to do memory compartmentalisation and to keep data safe and secure. The product was FIPS‑certified, and it provided more automation for us: it was able to take zones in, sign them and send them out via XFR, and make things easier for us. We have been using this solution since 2010, so eight years on, and that's where we are now.

However, we have decided to migrate away from this solution. One of the main reasons is that our hardware has been running for eight years, and it's now old and unsupported, and we had to replace it. So we began looking at what it would cost to replace these signers, and we went back to our vendor, Secure64, and it turns out it would cost a lot of money to replace our existing signers. So we thought, well, let's look again, because things have moved on in the meantime. Open source solutions have become much better, and there are several to choose from now. The support is also quite good, and there are very, very good communities around each of these products, and therefore there is a lot more shared knowledge that we can depend upon now.

So we had some criteria that we wrote down to evaluate the various products. The first was that whatever we chose had to have good and up‑to‑date documentation; this was very important to us. We wanted a solution that offered us bump‑in‑the‑wire signing: XFR the zone in, sign it, and XFR it out. We wanted a product that had support for modern algorithms and algorithm roll‑over. We wanted it to be able to do automatic ZSK and KSK roll‑overs, and we also wanted it to be safe during a KSK roll‑over, as in not withdrawing the old key before the roll‑over was complete. This is, for example, one of the bugs we faced with Secure64, where the key was withdrawn too early. And we all like clear and verbose logging, because when things happen, you want to go back and see what happened.

And finally, we needed a solution that allowed us to import foreign ZSKs and possibly even KSKs in order to do seamless migration.

So we came up with a list of contenders. We have BIND, right at the top. It has good DNSSEC support, is essentially the reference implementation, and it's quite flexible. Then there is OpenDNSSEC, which has also been around for quite a long time; it's a dedicated signer, doesn't do any other authoritative service, and is also quite flexible. Then we looked at PowerDNS, from these fine people over in The Hague; this is also a well‑known product, used by several large hosting companies, and they also sign their zones with it. And then we looked at Knot DNS, because we already use Knot DNS for providing authoritative‑only service, for both K‑root and the RIPE NCC's Reverse‑DNS cluster. Signing is a relatively new feature in Knot DNS, so we were going to look at it. And then Secure64 themselves came back to us and said they also had a new signer product, running on x86‑64 hardware and based on Knot DNS, so we were also going to look at that one.

Now, as it turns out, all the solutions that we looked at can do the job: they can all sign, and they can do all the basic stuff that we want them to do. And when all the products can do what you need them to do, then in order to select one you have to do some nitpicking; you have to become really critical, and you have to find something that works best for you and stands a notch above the rest. So, BIND is really quite flexible and can do all the things we want it to. Initially, when we were looking at it, we noticed that we couldn't do key management very easily. BIND has all these tools to set key timing parameters and generate keys and all that. And we discovered later that it does come with a utility called dnssec-keymgr, which was, I think, developed together with Sebastian Castro some time ago. Hi, Sebastian. But then, it turns out that if you try and build BIND on a system where the requisite Python module is not installed, then the utility just doesn't get built and installed; it's just not there, and there is no indication that you are missing this rather vital utility. I have talked to the ISC people about this, and I hope they will make some improvements there. The manual makes no mention of dnssec-keymgr, so this is something that needs improvement. ISC also has documentation on their website about DNSSEC and how to do it, and this is a little bit outdated too; it makes no mention of using dnssec-keymgr to make signing and roll‑overs easy.

Next we looked at OpenDNSSEC. This is not packaged for CentOS 7, which is our Linux distribution of choice. Again, the documentation is outdated. The configuration is in XML, which makes it difficult to write, read and maintain. And OpenDNSSEC only works with a PKCS#11 library, so it always expects some kind of HSM; if you want to just keep the keys on disk, you also need to use SoftHSM with it.

We looked at PowerDNS. It's quite easy to sign zones with this; they have made it easy with a single command, you run pdnsutil. However, it has no automatic key roll‑over, so you can't define policies or schedules or anything like that, and you have to use the utilities to do the key roll‑overs yourself. If you have non‑PowerDNS slaves, you need a bit more configuration to make them interoperate well. This is documented, but it needs that little bit of extra configuration.

So, after all that nitpicking, we have a winner, and the winner turned out to be Knot DNS from CZ.NIC. It meets all the criteria that I listed earlier, and then some. It's a really flexible product. We were already using it for authoritative service, so we were familiar with the configuration, which is easy to read, write and maintain. And the DNSSEC functions have matured to a level where they work for us. So we went for Knot DNS.

I will also mention that we did look at the Secure64 signer, which is based on Knot DNS, but it costs a lot more money, and because their development cycle is different, it may lag behind the official versions, so we see no benefit there. If we can go with the stock Knot DNS from CZ.NIC, then why not?

So signing with Knot DNS is really, really easy. You add one line to the config: you say dnssec-signing, set it to on, and that's it. It just signs all your zones. It uses some default parameters: it uses the ECDSA algorithm, it sets up ZSK roll‑overs automatically, it sets up everything, CDS, CDNSKEY, key publishing. All of that is just on by default, and off you go. If you want some more control, then you can define a policy that suits your organisation and your needs; for example, here is a policy where we are using RSA‑SHA‑256 instead of ECDSA, and we are using a ZSK size of one kilobit. Changing policy is also very easy: just a couple of extra lines of config and off you go.
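As a rough sketch, the two configurations described here might look like this in a Knot DNS configuration file (the zone names and the policy id are illustrative, not the RIPE NCC's actual config):

```yaml
policy:
  - id: rsa-policy
    algorithm: rsasha256    # instead of the ECDSA default
    zsk-size: 1024          # the "one kilobit" ZSK mentioned above

zone:
  # Default parameters: ECDSA, automatic ZSK rolls, CDS/CDNSKEY publishing.
  - domain: example.net
    dnssec-signing: on
  # The same, but with the custom policy applied.
  - domain: example.org
    dnssec-signing: on
    dnssec-policy: rsa-policy
```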

So we made a migration plan to switch away from Secure64, and we thought, well, what we should do is: export the private keys from our old signer, import them into Knot DNS, sign the zones with these keys, switch XFR from the old signer to the new signer and... voila! If only it were that easy. Secure64 does not allow export of private keys, and this is obviously a good thing, because that's what it's designed to do: it's designed to safeguard the keys, and you are not supposed to be able to extract them in any way. So this is not a criticism; I am saying it's actually doing its job as intended.

So, in the absence of private key export, we had to do the migration with a key roll‑over and roll both the KSK and the ZSK. What we did was: we set up the new signer and configured the zones on it, and we let it generate new ZSKs and KSKs for each zone. Then, we exported the public ZSKs from the new signer back into the old signer, so we had a situation where the old signer was signing both its own ZSK and the new ZSK from the new signer with its own KSK.

And then we did the opposite: we exported the public ZSKs from the old signer and imported them into Knot DNS, so we again had a situation where the zones on Knot DNS had a DNSKEY RRset containing the two different ZSKs, signed with Knot DNS's own KSK. And finally, we published the DS records from the new signer, the Knot DNS signer, into the relevant parent zones. So at the end of this, we are now in a situation where we have double DS records, and the ZSKs from both the old signer and the new signer, available to us. At this point we can switch from one signer to the other: we can just change our XFR master server from one to the other and start publishing zones from the new signer, and there will be no validation failures, because validators will have previously seen both ZSKs, and they can validate the DNSKEY RRset using either KSK, because they see both DS records from the parent.

So we are actually in the middle of this right now. We started this work before the RIPE meeting; for the RIPE meeting we have frozen the work, because we don't usually like to make big engineering changes during a meeting. Here is an example of the DNSKEY RRsets: at the top is the answer from the old signer, and below that you have the response from the new signer. I have highlighted in red the old signer's ZSK in the response from the new one, and vice versa: the ZSK from the new signer is being published by the old one. So this is where we are right now. You can query this, and this is exactly what you will see yourselves.
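During this overlap phase, anyone can observe the double ZSKs with an ordinary query; something along these lines (using kdig from the Knot DNS tools, and ripe.net only as a plausible example of one of the zones involved) should show the DNSKEY RRset containing keys from both signers:

```console
$ kdig +dnssec +multiline DNSKEY ripe.net
```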

And we did the DS record update as well, so if you now query for DS records, you will see two: one with the KSK from the old signer and one with the KSK from the new signer. If you go to DNSViz and do a visualisation, this is what you see right now. Because we are pulling the zones from the new signer, you can see that DNSViz shows you the full chain of trust, and it shows what looks like an orphan key on the right‑hand side. This is the old KSK from the old signer; it's just dangling there at the moment, and all we need to do is withdraw it.

So, I want to talk about signer security. We have chosen to use a minimal CentOS installation for this and, crucially, we are not using HSMs now, so the keys are on encrypted disk partitions on these signers. We have a server security and access policy so that only DNS is allowed into and out of the server: no one can SSH to the server, there is no SMTP in or out, no HTTP; there is no protocol that allows any other kind of data into or out of the server. If an operator needs to log in, they have to log in at the console of the server using the out‑of‑band access, and that is in itself inaccessible from the outside world. However, we occasionally need to configure the server, maybe to add a new zone or something, and we therefore open up SSH access briefly; we have monitoring that will alert us if the SSH access is open for too long, so it's a kind of inverse monitoring. And HTTP access can be allowed briefly if we need to do updates, and then only to the RIPE NCC's internal server for yum package repositories.
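As an illustration only, not the RIPE NCC's actual ruleset, a "DNS in, DNS out, drop everything else" host policy of the kind described could be expressed in nftables roughly like this:

```
table inet signer {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept
    ct state established,related accept
    udp dport 53 accept     # DNS queries and NOTIFY in
    tcp dport 53 accept     # XFR in
  }
  chain output {
    type filter hook output priority 0; policy drop;
    oif "lo" accept
    ct state established,related accept
    udp dport 53 accept
    tcp dport 53 accept     # XFR out to the distribution servers
  }
}
```

SSH or HTTP would then be enabled by temporarily inserting an extra accept rule, with the inverse monitoring alerting if it stays open too long.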

I also want to talk about another aspect of DNSSEC which came up yesterday. This is not related to our signers per se; it's more to do with the RIPE database, and it's about CDS and CDNSKEY automation. Just a quick recap for those people who are not aware: if you want Reverse‑DNS delegation in the RIPE NCC region, you do that by creating a domain object in the RIPE database, and you can have nserver attributes in this for NS records, and ds-rdata attributes for DS records. These domain objects are created by users themselves, so the RIPE NCC does not normally touch them, because they are maintained by the users. Now, there is this RFC 8078 that describes a way for a child to signal to its parent that it has rolled a key, so that the parent updates the DS records in its zone. This is a well‑understood mechanism, and there is software out there that supports it; Knot DNS does it, for example. But for the RIPE NCC there are two main issues. One is that of implementation: do we scan the entire space of delegations to find CDS records? And also, as I mentioned earlier, the RIPE NCC does not normally update users' objects, so if the RIPE NCC were to do something along these lines, then we would have to have some kind of discussion about policy and adjust text and wording where necessary.
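For context, a reverse delegation with DS records lives in a user‑maintained domain object roughly like this (all names and values here are made up for illustration, and mandatory contact attributes are omitted):

```
domain:     2.0.192.in-addr.arpa
descr:      Example reverse delegation
nserver:    ns1.example.net
nserver:    ns2.example.net
ds-rdata:   12345 8 2 4104805B0E1E5856A6FBDA27FAC2FE3FBA6C6582EA5B4AEEBD8D25FAF458B1F7
mnt-by:     EXAMPLE-MNT
source:     RIPE
```

CDS automation would mean the RIPE NCC noticing a CDS record published in the child zone and updating the ds-rdata attribute of an object like this itself, which is exactly the policy question being raised.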

So I just wanted to mention that and this is something I would like to have more discussion about, maybe here and also in the mailing list, so that we can talk more about this. With that, I invite questions.


JOAO DAMAS: Thank you. Anyone got any questions at this time?

NIALL O'REILLY: This may be a premature question, but I'm wondering; essentially, the question is whether the question is premature: to ask about plans for cleaning up e164.arpa.

ANAND BUDDHDEV: What kind of clean‑ups are you referring to?

NIALL O'REILLY: I heard they were laying things out there and there was some housekeeping likely to happen, but maybe I was dreaming; maybe it's a question for some other time. If that's the answer, just tell me it's premature.

ANAND BUDDHDEV: I think it's premature, because I have not heard of any ‑‑ yes, there are lame delegations in e164.arpa, but the RIPE NCC doesn't normally touch these, and there is no policy or community discussion going on relating to this. So we will just leave them there until we hear otherwise.

NIALL O'REILLY: Then I am now on the same page as you. Thanks.

AUDIENCE SPEAKER: Vesna. This is a remote question over IRC, from Magnus in Frankfurt: why no HSM? This way the keys can be extracted.

ANAND BUDDHDEV: This is a question that other people have asked us as well. Two RIPE meetings ago, I stood up here and said to the community that we would like to switch away, and maybe we should not use HSMs, and the overwhelming response from the community members was that HSMs are mostly overkill, and that if we have good enough security and access policies, then keys on disk for the types of zones that we are signing is good enough. Having said that, we are not opposed to using HSMs, and in fact I had a chat yesterday with somebody from Diamond Key Security about looking at a product based on the open source CrypTech HSM, so we are open to the idea.

JOAO DAMAS: During that discussion, the point also came up that you should not protect the keys more than you protect the data that you are going to sign?

ANAND BUDDHDEV: Precisely. There are more people who have access to edit the zone, and they can do more damage than somebody stealing a key.

GIOVANE MOURA: Thanks for showing these results. I am very happy that K‑root has this over‑provisioning; we have done a bunch of studies over the last years on the resilience of the DNS, and one of the reasons it holds up is that the root zone is over‑provisioned by hundreds of times, and that's great. My question is: how are you going to choose the site where you are going to run your 100 gig? We have a tool for that, we already use it, Verfploeter, I am not sure if you have heard of it. Have you any thoughts on where you will choose the site?

ANAND BUDDHDEV: We don't have any concrete plans yet. I would love to talk to you a bit more about this, and my colleague Colin, who is here, would also love to talk to you about it, I think.

TONY FINCH: University of Cambridge. Thank you for talking about the CDS updates. I have started a discussion on the DNS Working Group mailing list, if anyone else would like to express opinions about it.

JOAO DAMAS: We will touch on that at the very end of the session.

AUDIENCE SPEAKER: DENIC. Thanks for the information about DNSSEC. Concerning the HSM decision, you said you had the community surveyed; is this on the blog, or ‑‑

ANAND BUDDHDEV: Two meetings ago I wrote a RIPE Labs article about this; a number of people read it, and there were responses to the article. The article is still available to view.

AUDIENCE SPEAKER: I will search it. Thanks.

JOAO DAMAS: Thank you. We are moving on, thank you Anand.


Next up Petr Spacek.

PETR SPACEK: I am in the unfortunate position of being here after Anand, because it will seem that we are paying the RIPE NCC twice as much as others to get more time for promotion. Anyway, let's have a look at the latest version of Knot DNS, which has something called geoIP, which can mean various things. In general, it's application‑level Anycast: basically giving different answers to different clients based on either their source IP address or the address sent in the EDNS‑Client‑Subnet option.

First, an important part of this presentation: don't do it if you don't have to. I mean, use proper Anycast if you can. If you can't, for any reason, okay, do it at the DNS level.

Okay. So in practice, what does it all mean? Let's say that we have a client in Norway which is sending a DNS query to an authoritative server somewhere, it doesn't matter where, and the authoritative server will have a look at the source IP address of the resolver in Norway and reply with an answer which is tailored to clients in Norway, let's say. Of course, there needs to be some default, so if somebody from Russia or I don't know where asks this authoritative server, and it doesn't have any specific value for that particular location, it will return a default. So that's geoIP, or tailored responses, if you will.

So there is a bunch of challenges with this approach. First of all, there is no standard at all for how to do this, so you can forget about the configuration being standardised across vendors, because there is no standard at all. If you decide to do actual geographic mapping, then you have the problem of how you find out where the client is from the IP address, because the quality of the databases which map IP addresses to anything varies wildly. If you overcome these configuration hurdles, you might encounter problems with performance, because if you have a bunch of different answers, it's something different from standard DNSSEC, which allows you to sign the answer once and shoot it out as quickly as you can. And with geoIP we get more complexity in general, and if something is complex, it's also fragile.

What we can do:

In Knot DNS 2.7 we have the first step of this feature, so it's a first version, and I am here to present the idea and the implementation. I very much want to hear from you whether it's okay or not: should we improve the interface, is it fast enough, etc., etc. So please take this as a first look at the feature and think about what should be improved.

Okay. In the current version of the software, we have a couple of options you can choose from. One is basically the selection criterion, which offers you the option to base the answer either on the source IP address, which is one option, or on the so‑called ECS address, the EDNS client subnet address, which is the address which, for example, Google sends you in the query, telling you to use this IP address instead of the source IP address. Okay.

That's basically the key in the database.

And the second selection criterion is whether we are tailoring the answers based on the subnet, or whether the IP address is actually mapped to some geographic location, which is then mapped to the particular answer. I will show you an example; it will be clear after that. With DNSSEC it's always fun, so we can either do online signing, which is slow, or, in Knot DNS, we have a feature with, let's say, pre‑signed answers, which means that the DNS server generates all possible answers beforehand, signs these pre‑generated answers, and shoots them out as quickly as the server can.

Again, let's have a look at how we can configure all this stuff. The first step is to have the fall‑back value, because we want to make sure that things don't break when somebody asks from a location we didn't expect, so first put the fall‑back value in the zone file as you would normally. Then, if you decide that, okay, EDNS Client Subnet is important for us because we are a big open resolver which receives queries from all over the world, then you might enable this feature, but it's not mandatory. And of course, if you decide to tailor your answers based on geographic location, you need some mapping database. We are not an IP database vendor, so take whatever you want: you can buy one, or you can use available libraries and build your own from whatever data you decide to use.

So here is a snippet of the configuration file. We have the EDNS client subnet option on top, and we have something which is called the geoip module, and in this case we are looking at a configuration which is using IP addresses, not geographic location, just the subnet of the client. So the mode is subnet; there is the TTL which will be returned in the tailored answers; and the important part is the path to the configuration file which contains the tailored responses; that will be on the next slide, it wouldn't fit in here. And the last step is to tell which zones should be affected by this module, so in the zone configuration we need to add one line which says module, and the reference to the module configured previously.
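The snippet being described might look roughly like this (the module id, path and zone name are illustrative):

```yaml
server:
  edns-client-subnet: on        # optional: honour ECS in incoming queries

mod-geoip:
  - id: tailored
    mode: subnet                # match on client subnet, no geo database
    ttl: 20                     # TTL returned with the tailored answers
    config-file: /etc/knot/geo.conf   # per-subnet answers, see next slide

zone:
  - domain: example.com
    module: mod-geoip/tailored  # the one line enabling it for this zone
```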

Okay. So, how does the configuration for tailored answers look? Like this. It's basically a super simple file which has the DNS name, and then there is a list which contains a subnet and the values for that particular subnet, and then it goes on; you can have as many of these definitions as you like, thousands of them. And of course, don't forget that there has to be some fall‑back in the zone file; there still has to be some line with the fall‑back values.
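A hypothetical geo.conf in subnet mode, with per‑subnet answers for one name (names and addresses are made up):

```yaml
foo.example.com:
  - net: 192.0.2.0/24
    A: [ 192.0.2.53 ]
  - net: 2001:db8::/32
    AAAA: [ 2001:db8::53 ]
# ...and so on, thousands of entries are fine; clients outside all
# listed subnets get the fall-back record from the zone file itself.
```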

Okay, let's assume we have it configured; how can we test this? Of course, we can change the source IP address, but that is a nightmare to do when you are testing, so if you have enabled the EDNS client subnet option, you can use kdig, which has an option +subnet where you specify the IP address of the client which you pretend to be, and the answer will then match the values you provided. So here, for example, we are asking for subnet 192.0 and so on; on the previous slide you can see there is a definition for this subnet with this IP address, and the fall‑back value is obviously totally different. So that was for testing. And this is the simple case, configuration‑wise, because you specify the subnet and it's easy to understand. We can get more fancy, but that will come next. Of course, the natural question is: if I do this and specify the configuration per subnet, and I have a bunch of different definitions and so on, what is the performance? The baseline here is an unsigned zone, and we can do 2.5 million queries per second on that particular machine for an unsigned zone. If we enable this geoIP based on the subnet of the client IP address, the performance drops to 1.8 million queries per second, which is about 72% of the original performance, on a zone which is not signed with DNSSEC. If we sign the zone but don't enable geoIP, the performance is about 2 million queries per second, which is about 80%; and once we enable subnet geoIP on the pre‑signed zone, the performance is about 68%. That's for a pre‑signed zone, as I said. With online signing it is slow as hell, like 1% of the original performance. But we didn't thoroughly optimise the online signing, because it's just, you know, too slow.
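A test query of the kind described could look like this; the +subnet option makes kdig pretend the query came from the given client network (server address and name are placeholders):

```console
$ kdig @192.0.2.1 foo.example.com A +subnet=192.0.2.0/24
```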

Okay. So, is it enough? That is a very good question: what do these numbers actually mean? This is a chart from the benchmarks on our website, and Knot DNS, not doing any geo magic, is this blue line; that is the original value, the 100% in the previous table. If you draw two lines at about a 40% drop, just to make it easy, it will drop to here, and that's basically the same performance as PowerDNS with the BIND backend, without geoIP. It needs to be said that the guys from PowerDNS didn't optimise this, and I think it's pretty good. Let's get more fancy. Now, suppose putting in subnets is not good enough, and I want to use a specific location like Czechia. This is basically mapping an IP address to a bunch of values of different kinds. In our example we are using a geoIP database which maps an IP address to cities and countries, which are specified in the database using the ISO code and an English name. But the point is that in the geoIP database you can have anything; you can make your own, with different key names and values, and you can create whatever mapping you want.

So, the configuration. Again, client subnet, that is easy; the module configuration is very similar. The thing is that for this use case we need to switch the mode to geodb, and of course to provide the path to the database with the mapping, so there is a path to the file. The important part here is the so‑called geodb key: the database is like a tree of dictionaries, if you will, mapping an IP address to values which have names, so here we provide basically a template which is used to generate the key from the IP address. The server will take the IP address, map it through the dictionaries in the database, then take pieces of information from the database and put them one next to the other, and this is how we construct the key to look up the values in the next step. The configuration for the zone is the same, so let's move on.

Of course, if we are doing geoIP, the specification of the records is a little bit different than for subnets, so in this case it's again the DNS name, and then there is the geo key from the database, so a key constructed using data from the database; in this case it's cz and Prague, and cz and asterisk. This is an example of how you configure geoIP for a case where cities are privileged, because you have a cache in that city, for example, and the second record here is basically a wildcard for the specific state or country. And of course, the fall‑back is still there, so if somebody asks from the US, they will get the fall‑back. Testing is, not very surprisingly, the same. So how does it perform when we do this geoIP magic mapping?
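In geodb mode, the module and the answer file might look roughly like this; the database path, the geodb-key template and the addresses are illustrative, and the exact key names depend on the database you use:

```yaml
# knot.conf excerpt
mod-geoip:
  - id: geo
    mode: geodb
    ttl: 20
    geodb-file: /usr/share/GeoIP/GeoLite2-City.mmdb
    geodb-key: [ country/iso_code, city/names/en ]  # builds e.g. "CZ;Prague"
    config-file: /etc/knot/geo.conf

# geo.conf excerpt
foo.example.com:
  - geo: "CZ;Prague"        # the privileged city, e.g. a cache there
    A: [ 192.0.2.10 ]
  - geo: "CZ;*"             # wildcard for the rest of the country
    A: [ 192.0.2.11 ]
```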

The answer is much the same. If we just add the geoIP mapping without DNSSEC, we are at approximately 64% of the original performance, let's say. If we sign the zone and enable the geoIP mapping at the same time, we are at 60% of the original performance. So, yeah, if we look at the benchmarks, it's basically the same performance you can get anywhere else, possibly even without geoIP.

So I was mentioning DNSSEC just very briefly, and I'm claiming that it works, so let's have a look at how we configure it. Online signing is just slow and I don't like it, so I will skip over it. And if you are going for performance and geoIP, I recommend you have a look at the pre-signed mode, where all the signatures are generated in advance. It's fast as hell. The only disadvantage at the moment is that it's not that super integrated into the rest of Knot DNS, so you need to do key roll-overs manually; that is basically the only disadvantage right now.

So, how does it look in the configuration file? This is the test zone Anand was showing in his presentation, and basically step number 1 is to enable the DNSSEC signing, which in practice means dnssec-signing: on, that's it. And at this point, the current version requires you to restart the server or reload the zone (it doesn't matter which) for the first time, so that it generates all the keys and so on. And then you can switch the DNSSEC policy to manual and enable the geoIP, because right now the integration is not perfect, so you need to first generate the keys, which Knot DNS will do for you, but you need to turn it on, and only then can you enable the geoIP. That is something which will be improved in future versions.

A nice thing is that you don't need to restart the server; you can do it basically online: you change the configuration file and then issue the command knotc zone-reload, and it will apply the change and keep answering. So, yeah. It's definitely doable.
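The two-step procedure he describes might end up looking roughly like this in knot.conf (a hedged sketch; the policy id and module id are made up for illustration):

```
# After an initial load with dnssec-signing: on (so Knot generates the keys),
# switch to a manual policy and attach the geoip module, then apply the
# change online with:
#     knotc zone-reload example.com
policy:
  - id: manual-keys
    manual: on

zone:
  - domain: example.com
    dnssec-signing: on
    dnssec-policy: manual-keys
    module: mod-geoip/geo
```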

So in short, finally, we have our implementation of tailored DNS responses; there are a couple of different configurations you can use depending on your use case. And well, for performance it's certainly better to use the pre-signed answers, because then the crypto is not the limit any more. And I will end my presentation by urging you: if you are using any DNS server, please use at least two different implementations. If you depend on geoIP, now you have one more to choose from. Thank you for your attention, and we hopefully have some time for questions.

JOAO DAMAS: Yes, indeed.


If you don't mind, I will ask the first one. This is just a generic mapping of IP addresses to any tags you want, right? You can use this as any sort of policy engine?

PETR SPACEK: Sure. You can write your own database with any mapping you want. The geoIP or geo-whatever name is a little bit confusing, because it's just tailoring answers based on who is asking.

JOAO DAMAS: You could do what BIND has in views ‑‑ mostly.

PETR SPACEK: Of course you can do that with BIND views ‑‑

JOAO DAMAS: You can use these to do that sort of behaviour?

PETR SPACEK: It depends very much on the use case. If you have a zone which is almost the same, most of the data is the same and only a couple of records are different, then I think it's better to use it this way. If you have two zones which are completely different, like an internal view and an external view, it's I think better to use something like BIND views or two instances of the server, because this feature is basically made for making little tweaks based on IP address, but BIND views are a totally different beast, because you have totally isolated zones. For example, with Knot DNS you can do dynamic updates to the zone and it will work, and the shared data will still be shared. If you have BIND views and you do a dynamic update, if I am not mistaken, it will separate the zones into two or more potentially independent zone files. So that's the crucial difference.
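For contrast, the fully isolated zones mentioned here are what BIND views provide; a minimal named.conf sketch (client ranges and file names are placeholders):

```
// Two completely separate copies of the zone, selected by client address.
// Unlike a per-record geo module, a dynamic update lands in only one view.
view "internal" {
    match-clients { 192.0.2.0/24; };
    zone "example.com" {
        type master;
        file "zones/example.com.internal";
    };
};

view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "zones/example.com.external";
    };
};
```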

WARREN: Cool. That's good. I am sure I am missing something obvious, but why is there a difference between an unsigned zone and a pre-signed zone in terms of performance? Is it just bigger responses, or is it actually the same thing you are just returning?

PETR SPACEK: That is what we have measured. I assume it's mainly the size of the responses, because the response can be 100 bytes, and if it's signed, 200; and in our benchmarks about one-third of the time is spent in the kernel, so once you start shipping bigger packets it will cost more.

JOAO DAMAS: Any other questions at this time? No. Thank you very much, then.


Next up Benno.

BENNO OVEREINDER: Welcome. DNSOP update. So, how to convey what we are doing in the DNSOP Working Group to the RIPE community and to the operators. I can mention that the DNSOP Working Group just put out, with Working Group last call, and sent to the IESG for publication, a long list of drafts; it makes the Working Group happy and makes our AD very happy. Lots of work to do. And I won't go into the different drafts, because I don't think you would get really enthusiastic, or I don't think I would get the right feedback from the group. We can discuss the process; the drafts are being reviewed by the IESG, and it takes time, and some are under discussion. You can see all of that yourself on the IETF website, the datatracker for DNSOP. You can see the same view as you have here. You can click on certain discussion points and see where the discussion is going.

But I think, to get some feedback and interaction with the RIPE community, I chose to do some signposting here. So I came up with a number of themes; it's a personal perspective on what we are doing here in the IETF with the DNSOP Working Group, it's not accredited by anyone else. But at least it shows some of the work we are doing, and we want to have some feedback from the operators.

Good. So what are we doing now? There are a number of initiatives, drafts being worked on, that I call provisioning and multi-provider. A subject of quite some discussion on the mailing list and also in the Working Group is so-called aliasing. The way it's currently implemented, with CNAME: a CNAME is a redirection and cannot coexist with other records at the same level, so it cannot be mixed with, for example, the name server records at the apex.

So, there is a little bit of conflict in the RFC with what's being implemented. There has been an ANAME draft that solves this problem, but there are other discussions going on about why we couldn't and shouldn't use CNAME. Andrei gave a presentation about his findings using CNAME and DNAME, and there has been some discussion on SRV records, which might be used or not by the web community. So actually I want to invite the different authors of the drafts to address some questions or points at the mic if they want to have some feedback from the group. The other thing I want to mention, sorry, is the multi-provider draft; the authors, including John Dickinson, are all here in the room, a perfect occasion to address questions. So, multi-provider: they are working on two models if you want to have your DNS operated by multiple providers, not one single point, for resilience. The simple model is only using secondary operators, so the owner signs the zone and pushes it to the secondary operators. But in many situations you want to have more resiliency there, so you might end up in a situation where you have multiple providers and they sign the zone in different places, so you have to distribute the ZSKs to these providers, so there is a difference in trust model here. They describe it in the draft, and I think it's good for operators to give some feedback here.

Going on. Serving stale data: the draft has been around for, I don't know, Dave, two years? There has been some discussion ‑‑ okay, yeah ‑‑ but it's certainly useful, and there are some implementations. There was some IPR, sorry, but there are also implementations, and there are already some measurements by Giovane of serve-stale in the wild, and it really helps with the resiliency of the Internet and the DNS ecosystem.

You see here some stale bread; it's from the Netherlands actually. Well, in the Netherlands we make a delicacy of it, very popular with kids; not so much today, I think, but ‑‑ good.

In last call there is the algorithm update, which gives some guidance about the implementation of algorithms, in resolvers mainly. The algorithms are constantly changing: there are new algorithms introduced and old algorithms deprecated, so this draft gives some guidelines on what should be implemented, so that at least one is known by the resolvers and you get an answer.

And back to the camel, about code complexity: this is an important discussion that goes on and on. Of course, within the IETF DNSOP Working Group there has been discussion about drafts and RFCs adding complexity to the code and to everything, the whole ecosystem, and the DNS Working Group is aware of that, and that makes people in the room ask: why do we need to extend like this, is it valuable, who wants to use it, and for which use case? But there are more, not stakeholders, but guilty persons or organisations: the DNS software implementers also add complexity. There is a need for speed. There is, for example, the Hello DNS project, and its first implementation was nice and small and exact, but then more performance was wanted, and it doubled in size. So it's also the DNS implementers adding complexity for speed. And I want to mention broken software on the Internet: broken middle boxes add complexity. That is another perspective on the complexity camel: DNS software has to take into account all kinds of corner cases, broken cases, and still give an answer back. It's not a one-dimensional but a three-dimensional problem here.

DPRIVE rechartering, very briefly. We have done most of the work from the first DPRIVE Working Group charter; mainly, time-wise, the path from the stub to the resolver has now been worked on. The next step is from the resolver to the authoritative; there has been a recharter, and one draft is working on that, to be more specific. Not much to mention there yet.

Questions and answers. Any questions? Or I want to invite all authors of drafts to come over to the mic if they want to add something.

AUDIENCE SPEAKER: From Oracle Dyn. For the people who are not aware of what it is, the intention behind this is to address resiliency by giving an answer of last resort when you can't refresh an answer from the authorities. It isn't meant to cover the case where you might have intentionally removed a host name: if an authority says it doesn't exist when you meant it to, that is still the authoritative answer, and that will be given back. But the intention is really to cover over DDoS attacks like the one that took down Dyn in October of 2016. One of the things this means, though, is that if you have some type of monitoring system that relies on detecting that somehow your delegation has gone bad, through using a resolver that you don't actually control and don't know whether it's serving stale data or not, you might find your test is not working as intended. The systems I am familiar with don't behave this way; they would still be able to detect it. But I want you to think about your own operations, and if you can think of any such software you might have running that would, in the presence of stale data, not work as it is intended to work, then we'd really appreciate hearing that view.
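The answer-of-last-resort behaviour described here can be sketched as resolver pseudologic in Python (illustrative only; the function name and the flat cache model are my own, and real serve-stale implementations add TTL clamping and retry details):

```python
from typing import Optional

def choose_answer(fresh: Optional[str], cached: Optional[str],
                  age: int, ttl: int, max_stale: int = 86400) -> Optional[str]:
    """Pick what the resolver returns for one query.

    fresh  -- answer from the authoritatives this time (NXDOMAIN counts as
              a valid fresh answer and is returned as-is), or None if they
              were unreachable, e.g. during a DDoS
    cached -- last known answer, or None; age/ttl describe that cache entry
    """
    if fresh is not None:
        return fresh                          # authoritative answer always wins
    if cached is not None and age <= ttl + max_stale:
        return cached                         # serve stale only as a last resort
    return None                               # nothing usable: SERVFAIL

# authorities unreachable, cache expired but within the stale window
print(choose_answer(None, "192.0.2.7", age=3600, ttl=300))       # 192.0.2.7
# deliberate removal: the authoritative NXDOMAIN is still returned
print(choose_answer("NXDOMAIN", "192.0.2.7", age=10, ttl=300))   # NXDOMAIN
```

The second call captures the speaker's caveat: stale data never overrides an answer the authorities are still giving.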


AUDIENCE SPEAKER: Tony Finch, University of Cambridge. Regarding the ANAME draft: we have been working on a kind of camel-sensitive simplification of the protocol, and I am hoping to get a completely revamped draft out before the IETF submission deadline on Monday. So the draft that is currently published is not the one that we are currently working on.

JOAO DAMAS: Anyone else? No. Thanks, Benno.


WARREN: Did people think this was a useful discussion? Would people like to see this again?

JOAO DAMAS: A general review and update? Raise hands to give me a bit of a sense ‑‑ that gives me a sense. Good. Thank you.

SARA DICKINSON: Thank you. So I am going to be talking today, giving a small update on some measurements around DNS privacy. This work is funded by a grant from the Open Technology Fund, so thank you to them, and although I am presenting, a lot of the work was done by John Dickinson and John ‑‑ you might remember there were a couple of presentations there where we presented the work we were doing. It was quite a small-scale study where we looked at four name servers with relatively few clients, and we concentrated on what happened when you ran very few queries over a TCP connection. We used an extension to the dnsperf tool to do this, and we used a fairly simple set-up. At the same meeting Baptiste presented, and he had a very different set-up: he was lucky enough to have access to an academic Cloud network where he could scale to hundreds of clients, and therefore he could get up to thousands and millions of connections to the name server. He just did UDP and TCP at that time, and he only looked at Unbound, and used a tool he'd written himself to do this.

So, to quickly present two of the key findings from those results: from our work we were able to show that, depending on the name server software you use, you can amortise the cost of the TCP handshake with as few as 100 queries on a given connection, and beyond that it stays amortised as well. The headline result from Baptiste's work was that in his system, as he set it up, he could have 25,000 simultaneous connections running to his name server. And he compared the UDP and the TCP performance, and the headline that came out of this was that, under the conditions of his testing, TCP was 25% of UDP. When we looked at this we were surprised; we thought it should be better, and maybe it was because Unbound doesn't concurrently process requests it gets on a TCP connection, and we noted that his set-up was constraining the name server to only run on a single thread. So, what we wanted to do was use our simple system to try and reproduce his results, and we also wanted to look at what happens when you don't have that restriction on the name server. We wanted to do it for some other name servers and see if they showed the same behaviour. And I will mention that Baptiste was at OARC doing very similar measurements, and we have largely had the same results as well; hopefully he will be publishing his in a more academic forum.

So the first thing we did was to reproduce what he had done: a single client machine and a name server machine, and we were able to spin up 24,000 connections in this case. The top line is UDP and the bottom line is TCP, and we see almost exactly what he saw: with one thread, about 160,000 queries per second for UDP, and TCP at 32% of that. Then we thought, let's give Unbound its head: 16 cores with hyper-threading, let's give it 32 threads, and that's what you see when you use the different configuration. You see that they are actually much, much closer as you increase the number of clients, and in fact, at 5,000 clients they are almost the same. So in this case you have got a much bigger throughput, 620 kiloqueries per second, and TCP is two-thirds of UDP. So we do see a distinct improvement here.

Having done it for Unbound, we wanted to look at a couple of other name server implementations, and we looked at Knot Resolver using 32 threads, and we saw something quite similar. You see it dip down a bit more as you get up to higher client counts, but again at about 5,000 they are very close; so a good throughput here, and at that upper limit TCP is about 50% in this case.

And we also have it for BIND; not quite so good there. We see this with BIND in a lot of our profiling. Not as big a throughput, and in this case with 32 threads TCP is down at 25%.

So we get to this stage and we start thinking, okay, we have got some tools, some data, we can see the weak points in the implementations, we need to optimise, and we are kind of done here, right? No, this is actually just the beginning of this work, in my opinion. And the reason is, if I show you that graph again of Unbound using 32 threads, with TCP at 67% of UDP, you look at the TCP throughput here and it's 410 kiloqueries per second in our measurement. But that is looking at it from the server angle; you need to switch this around and look at it from the client perspective, and if you do that and transform the data in this graph, you see something like this. This is telling you that for a single client firing queries at this name server under these conditions, each client, when there are 24,000 connections, is doing about 20 queries per second. Now you see UDP and TCP looking very close, and that is because the throughput is the same. Then take a step back and think: when you are doing UDP benchmarking you have the luxury of treating your population as if they are all the same, or very similar, and you can't do that with TCP if you want anything realistic. You have to consider both the latency from the client, which is what is typically done, but also, with session-based benchmarking, you have to think about the throughput on the connection. We also know that clients behave very differently; you absolutely don't have a uniform population talking to a single resolver. So we are going to have to get more sophisticated and start to simulate client populations where you have different query rates and idle times and a whole range of dimensions. You find this is a really common approach for modelling client populations, but today in the DNS we have no software that can do this, and very little data about how the clients behave when you are trying to model them in this way.
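The switch from the server view to the client view is just division; the numbers below are the ones from the talk:

```python
# Aggregate server-side throughput says little about what each client sees:
# divide by the number of concurrent connections to get the per-client rate.
server_tcp_qps = 410_000      # 410 kiloqueries/second measured at the server
connections = 24_000          # simultaneous client connections

per_client_qps = server_tcp_qps / connections
print(f"{per_client_qps:.0f} queries/second per connection")
```

This lands at roughly 17 queries per second per connection, i.e. the "about 20 qps per client" data point discussed next.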

Fortunately ‑‑ and thank you very much to him for letting me use this in the presentation ‑‑ he posted this graph, and what he did is he took a seven-minute window of data at his resolver and counted how many clients did how many queries in that time window. So you are looking at the busyness distribution of the clients. If you want this data to explain what we were seeing with our benchmarking, you need to know where that 20-queries-per-second data point is, and it's up there: there is a handful of clients that he was actually seeing in the wild doing that kind of query rate on a single connection. So, to put this in a bit of perspective, we have got this very interesting set of peaks at the lower query rates. That first peak there is actually clients doing only 0.2 queries per second, so one every five seconds, which is actually quite low. And then this other peak is at 0.03 queries per second, so, you know, my instinct here might be that it's just some piece of software doing periodic probing of the network. Possibly. We don't know. The big problem is that you look at this and you immediately realise this idea of aggregation just doesn't apply to clients; at this stage we know we can't do aggregation, but we don't understand all the factors involved here. It could be something to do with the fact that with UDP the clients have a selection algorithm for the resolvers and they jump around quite a lot, and that might not be the case when we have session-based traffic. It could be the provider doing some load balancing, or this periodic probing from software. We certainly know users are bursty: you look something up and fire off a load of DNS queries, then maybe work on something that is really rather quiet, and you put your machine down and turn it back on; so none of this is uniform.
And again, some of these could actually be forwarding resolvers with many users behind them, which therefore appear up at this upper end here with the high query throughput.
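The kind of client-population model this points towards could be sketched like this: sample each simulated client's query rate from a mixture that reflects the peaks in the graph plus a heavy tail of busy forwarders. All the weights and rates below are illustrative, not measured values:

```python
import random

def sample_client_rate(rng: random.Random) -> float:
    """Draw one client's query rate (qps) from a hypothetical mixture:
    two periodic-prober peaks plus a log-uniform heavy tail."""
    r = rng.random()
    if r < 0.45:
        return 0.2                    # probe every ~5 seconds
    if r < 0.80:
        return 0.03                   # probe every ~30 seconds
    # busy clients / forwarding resolvers: log-uniform between 1 and 100 qps
    return 10 ** rng.uniform(0, 2)

rng = random.Random(42)
population = [sample_client_rate(rng) for _ in range(10_000)]
```

A benchmarking tool would then drive one connection per sampled client at its own rate, instead of hammering the server with a uniform load.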

We did briefly have a look at whether we could repurpose any of the HTTP benchmarking software. We surveyed quite a lot. The general feeling was that they were really rather heavyweight for what we were trying to do, and virtually none of them were written in C. One of the two we looked at in any detail was k6, but we found it had prohibitive start-up times even with 1,000 clients. They tend to assume you have a farm of machines that you can use to spin this up, and most of the current DNS labs don't have that; they might have a handful, but not very many. The one that looked promising was Tsung: it supports some other protocols and did support this concept of client populations, but when we ran it on one box we could get 30,000 clients but not more than that from a single machine. We have had a go at adding DNS to it, but only synchronously; we would need to extend the framework, and the stats need some updates to give us what we need to know. So it's a potential option, but we are not 100 percent convinced it's the right way to go.

What we actually think is, if we really want to get serious about benchmarking DNS across multiple transports, we probably need to start from scratch and write a hybrid tool: one that can do the high level of DNS query throughput that we need, but combines that with this HTTP-style view of scripted client populations. The existing tools tend to be very focused on data throughput as opposed to session behaviour, and we think that is why we saw some of the problems. It may be that your typical benchmarking lab will need to have a farm of client machines.

We also need to design this tool, if we get there, so that it will be extensible to all the other protocols, and who knows what else; but we don't have anything like this today. So what we are in the process of doing is trying to create a requirements list for a tool we might design from scratch. I am going to be posting that, probably to the DNS operations list, probably the OARC list, and if we can get some input and thoughts about what this tool should look like, that would be great. The other thing that would also be great is if we could get more data about the different client populations and how they behave, in particular from anybody who is doing DoH, who will be seeing different characteristics in the traffic. If you are interested in this tool and you think it would be useful and you want to collaborate, or if you think you can contribute code or fund some of this work, we would very much like to hear from you. Thank you very much for your time. All of this work will be published on the DNS privacy project website, so please look out for it there, probably sometime next week, and I am hoping we have time for a few questions. Yes, we do. Thank you very much everyone.


AUDIENCE SPEAKER: From DNS‑OARC. This is very interesting. I have a question about data that might be useful: would more recursive data in data sets like OARC's DITL data be useful for answering questions like how recursive clients move traffic around, and that sort of thing?

SARA DICKINSON: That is one source we thought of looking at, to convert it into the data we would need out the other end; that is one of the places we will look, and maybe we can talk offline about that.

AUDIENCE SPEAKER: Oracle Dyn. I was curious what plans you might have to look at the differences between the actual platforms they are running on, because the TCP stack will make a big difference; that is one aspect. And then the other is that the client profiles shown are well-behaved, normal DNS clients, and attack traffic can take a very, very different profile, and there are a lot of different implications there for how you provision systems in order to withstand that attack. So it would be really interesting to look at what happens when things go sideways.

SARA DICKINSON: I think that is one of the things we want to understand from the requirements gathering: we think we are going to have to have very different modes when we throw the traffic at the server, and that is absolutely one of the things we can't properly model with the tools we have got.

AUDIENCE SPEAKER: Oracle Dyn. When I saw that drop-off at 5K, to me that is probably related to the TCP stack tuning ‑‑ as well as ‑‑ storms ‑‑ it applies to TCP more than UDP in general. The second thing is, we do have a tool that can do the type of things you are talking about for that distributed attack. Grafana Labs, you should look at them: HTTP and HTTPS, and with a farm of servers you can operate your own servers that could then run that software, and you can bring on very, very large floods with that.

SARA DICKINSON: That is great.

AUDIENCE SPEAKER: I will follow up with you.

SARA DICKINSON: If you could put that on the list. We didn't do anything in terms of kernel tuning or playing with the NIC or anything like that; these are vanilla, out-of-the-box numbers, hopefully you can think ‑‑

GIOVANE: Thanks for this presentation and the previous one. If you need any data on traffic between recursives, details ‑‑ I will help with that.

JOAO DAMAS: Thank you very much, Sara.


Traditionally we have this AOB slot, which is empty. Today is a bit different; we have four extra items. The first one coming up is Ondrej with a follow-up to his presentation in the plenary about automatic DS changes.

ONDREJ: I gave a talk in which I was talking about keeping DS records for the reverse zones in the RIPE database in sync, so that if you have some system automatically rolling the KSK, you can get the DS in the parent synchronised. I got some responses from people here, so I actually made a change ‑‑ Shane recommended that I have a look at the RIPE database dumps to see for how many zones this is actually the case. So: 0.41% of reverse zones are secured by DNSSEC. Not very nice. But I think this data could help us decide, in the process, whether it's worth implementing some auto-updating or full scans or not. By the way, this is what algorithms are used: you see mostly algorithm 8 is used in the DS data, and the last graph shows digest types, with 3 being some obscure algorithm nobody knows about.

This is basically the point for discussion. There are three questions about how to implement this automated maintenance. First: should we support, in the same way that .cz or .ch supports it, insecure-to-secure bootstrapping, so that if you sign your zone and put CDS records in it, it will automatically get secured? That is the first question.

The second is: if not, should we automatically scan all secure delegations for CDS records? That makes quite a lot of sense to me.

And the third one is: if not even that, how should one ask for an update? I am willing to talk about it and get some discussion, either here or on the mailing list.

SHANE KERR: This is Shane Kerr. I have a quick comment about that. So, I really support this effort; I think it's too hard to do parent and child synchronisation today, and the RIPE NCC has been a leader in the past, like with DNSSEC, and could also be a leader in this area as well. I think in order to think about this first bullet, which is the really tricky one, we probably want to do a detailed analysis of the security risks and profile, which I think we can help with as a community, but I think we also need to do that in conjunction with RIPE NCC staff and also with the database team as well.

PETER: .cz is already doing this, including bootstrapping from insecure to secure ‑‑ talk to us and we can pull out our ‑‑ it's going to be the same, right? If you are, you know, about to start screaming, "no, no, you can't bootstrap", well, the short summary is: we are basically doing a scan over the name space, and once we find that there is a new record which wasn't there before, we make a TCP connection to the authoritative server to verify that it's in there; then the scan over TCP repeats every day for one week, and if somebody is spoofing TCP connections to your server for one week, I think you have bigger problems.
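The acceptance rule described here — a new CDS record must be re-confirmed over TCP once a day for a week before the parent publishes the DS — can be sketched as a small decision function (function and variable names are illustrative, not .cz's actual implementation):

```python
from typing import List, Optional

def accept_cds(daily_scans: List[Optional[str]], required_days: int = 7) -> bool:
    """Decide whether to publish the DS for a newly observed CDS.

    daily_scans holds the CDS RRset digest seen on each consecutive daily
    TCP scan; None means the scan failed or the record was missing.
    Publication requires the same digest on every one of the last
    `required_days` scans.
    """
    if len(daily_scans) < required_days:
        return False
    window = daily_scans[-required_days:]
    first = window[0]
    return first is not None and all(scan == first for scan in window)

# a full week of consistent TCP scans -> publish the DS
print(accept_cds(["abcd1234"] * 7))              # True
# one inconsistent (or missing) day resets the process
print(accept_cds(["abcd1234"] * 6 + [None]))     # False
```

The repeated TCP confirmation is what makes sustained spoofing impractical, which is the point of the "bigger problems" remark.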

JOAO DAMAS: Thank you very much.

ONDREJ: Everything else on the mailing list.

JORDI PALET MARTINEZ: I have only one quick slide. We had a few slides this morning in the IPv6 Working Group, but I thought it was a good idea to show the DNS-related work that is under the scope of v6ops. There are two documents that I am authoring right now; one is related to DNS ‑‑ sorry, NAT64 ‑‑ 464XLAT deployment guidelines. We have a lot to tell about how DNS64 can break DNSSEC, and also how DNS privacy documents, or protocols, can also break NAT64, actually. So, the idea is just to show this slide; if you have comments, please come back to me. Take a look at the documents. The first one is already adopted as a Working Group item; the other one, which is more recent, is still not, but hopefully by the next meeting we can work towards that. I am here for the rest of the day today, but feel free to drop me an e-mail if you have any comments. Please read the documents. They are really, really important. Thank you.


JOAO DAMAS: You don't have to be present at the meeting, you can work on the list.

ONDREJ SURY: From ISC. So, I don't have any slides, I just have two questions for you. The venerable BIND is 20 years old and still has options from BIND 8, so the first question is: would you care if BIND would call home with its version? And the second question is: would you be fine if it called home with an anonymised bitmap of the options used, so we can maybe deprecate options that nobody uses but that are still in BIND? Does anybody care about the software calling home with information about the version, or not? You can either answer here, or take a little bit more time and send me an e-mail.

JOAO DAMAS: Thank you. Is this configurable?

ONDREJ SURY: Definitely. It will be configurable, and I guess the packagers will turn it off by default, but for the software as downloaded it will be on by default, with configuration options for that.

AUDIENCE SPEAKER: I want to say that I think other vendors would appreciate it as well; is the question general, should we go talk to other vendors as well, or...

BENNO OVEREINDER: Agree. But also, more on the confidence model maybe: send it to a trusted third party, like DNS‑OARC; maybe that might be part of the discussion or not. Next question to you, or maybe to the room, maybe question ‑‑

AUDIENCE SPEAKER: Just speaking as a general operator, I think it's a good idea. It would be much better for us generally to have a good idea of what's out there in the wild; it would take a long time to get all that data and get everything upgraded, but I think it's a good idea. I might consider even going one step further: notify operators in their logs, when the check is run, if they are significantly out of date or if they have known vulnerabilities, for example.

ONDREJ SURY: Yes, that is the reason for the first one: to do a version check and log "you should really upgrade" into the system.

PIETER LEXIS: As a data point, we have this in both our recursive and authoritative servers. We are not doing any kind of analysis on it, but we use it to signal out-of-date information to the process itself, and we have a couple of stages in here: there are non-critical and critical upgrades that you should do, and if there is a critical update we will scream very loudly in syslog that you should update, and we expose it as a metric. We are also exposing whether you have a critical version ‑‑
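The PowerDNS mechanism described here polls a TXT record whose payload, as I understand the security-polling format, is a status code plus a message: 1 means up to date, 2 means an upgrade is recommended, and 3 means an upgrade is critical. A minimal parser for such a payload (the format is taken from PowerDNS's secpoll documentation, but treat the details as an assumption):

```python
# Parse a PowerDNS-style security-status TXT payload: '"<code> <message>"',
# where 1 = OK, 2 = upgrade recommended, 3 = upgrade critical.
def parse_secpoll(txt: str):
    code, _, message = txt.strip('"').partition(" ")
    return int(code), message

status, msg = parse_secpoll('"1 OK"')
print(status, msg)                 # 1 OK

critical, _ = parse_secpoll('"3 Upgrade now, security vulnerability"')
print(critical)                    # 3 -> scream loudly in syslog
```

A status of 3 is what would trigger the loud syslog warning and the exposed metric mentioned above.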

ONDREJ SURY: And nobody screams at you?

PIETER LEXIS: The distributions turn it off, with PowerDNS anyway, and we have asked the distributions, "do you want your own subdomain?", and they said they don't want it at all; but we do this in our own packages, and we do get people on IRC and the mailing list being happy about this, that they are being notified there is a critical update for the software.

JOAO DAMAS: In past discussions about this there was another approach, and I don't know if you have considered it, where the server connects to a well-known server at ISC that provides my local daemon with a list of problem versions, which the daemon checks locally without having to report anything to the outside.

ONDREJ SURY: We want to do even more and gather the options in use, so we can deprecate options. For the first one, it could just report the version or fetch a version list; it doesn't actually matter which way it works. But the second one is more intrusive, because it will send what you configured by hand, so we know what we can remove from BIND. That is basically the reason for doing this.

JOAO DAMAS: Thank you.


SHANE KERR: So, I know you all love DNS and you haven't had enough DNS from only three hours here during this week. We are having a call for papers: we actually have a room at FOSDEM on February 3rd, which is a Sunday. It's a free meeting, we have the room all day, and we are looking for people who want to talk about DNS stuff, which I know you all do. If we haven't already done so, we will send an announcement and call for papers to the DNS list. Check it out.

JOAO DAMAS: That's it for today. We will hopefully see you on the mailing list, and next time around in Iceland, volcano permitting. Thank you very much.