
DNS Working Group
17 October 2018
9 a.m.

SHANE KERR: If everyone can take a seat, I'd like to get started. It's 9:01.

My name is Shane, I am one of the Working Group co‑chairs. I'll be walking us through our session here this morning. We have got a nice full agenda. We're going to start off with a review of our ‑‑ we can do some agenda bashing; we published the agenda online, and we don't have a whole lot of leeway. We'll be ending the session on Thursday with an AOB section, so if there are any announcements or questions for the Working Group there is a small amount of time, but we'll be doing that after our second session.

Today we are going to go through the presentations.

So, we don't have a lot of action items open for the DNS Working Group ‑‑ this is the DNS Working Group. If you don't want to be at the DNS Working Group, you are both in the wrong room and very wrong. So, this is the place where you want to be.

So we only had a single action item open, and I believe this was the work of updating the SOA recommended values, which has been going on since, I believe, RIPE 52 or something like that. I talked to one of the people involved, and we decided that we're just going to close the item because it doesn't seem there is enough time or interest to do it. That doesn't mean the work is necessarily not going to get done. So if you or anyone you know has interest in updating that document, then please either just do it or come and talk to me and we can help figure out a way to do that.

So, that's that. Unless there is any questions?

The next thing is we have ‑‑ we put our minutes from the previous meeting online. I don't believe we saw any feedback, so unless there are any objections we're going to declare those minutes final. Going once... going twice... all right.

The final item of administrivia on our list here is the selection of a new co‑chair. So we have, I believe, a two‑year period where all of us ‑‑ oh, three; we have a three-year period: once you are selected as one of the DNS Working Group co‑chairs, you are one of the co‑chairs for three years. And then your term ends. You can put yourself forward and other people can also step forward for that role. This time we only had a single person come forward, who is our current co‑chair Dave Knight, and we saw some nice comments of support for Dave in his role on the mailing list. So, we're going to declare a consensus for Dave as a returning co‑chair ‑‑ welcome back, Dave.

(Applause)

All right, that's it for administrivia, let's go ahead and get started with our presentations. For those of you who don't know, there was a DNS OARC workshop last weekend, and Keith Mitchell, the head of OARC, is going to come up here and talk to us a little bit about what happened there. Thanks, Keith.

KEITH MITCHELL: Thanks, Shane. Just a quick poll first of all: how many of you were at the OARC workshop? Okay. Great. So basically I get to re‑present the work of a bunch of people who already know it better than I do.

We have pretty good cooperation with the RIPE DNS Working Group, and there are various approaches we have tried to avoid too much overlap when we co‑locate with RIPE. This summary presentation is basically a format that we have tried when we have co‑located with other meetings in the past. So, it would be useful to have feedback ‑‑ both from the people who were at OARC and the people who weren't ‑‑ on whether this is actually a good approach or not.

I'm really just going to skim through this because most of you, I'm sure, are familiar with this. But basically, OARC is here to help understand the DNS better, to close the gap between operators and researchers, and basically to try and make the DNS work better.

We have global scope, which is not to say that we cover the entire planet, but we do move our workshops around on a fairly regular basis.

Also, I'm not going into a lot of detail on governance; it's basically similar to best practice within the industry for our type of body.

So, it's been a busy year for OARC. We're just about to start some GDPR-inspired work on anonymisation tools for dnscap; that's been funded and we're hoping to deliver something on that at the end of the year. We have done a lot of work this year on the drool and dnsjit tools for replay and replay-like functions. Matt has joined us. We have done data collections, and the first of these was our regular annual one, which was as big as ever. And to cope with our ongoing datasets, we have added further storage capacity to our infrastructure. We did our first workshop in the Caribbean region. And the Open Source conferencing platform that we use had a major upgrade as well.

This says 2017‑19 OARC board; we had the AGM on Sunday but the board composition didn't change. One of the changes, as you will see, is that these directors are listed without member affiliations, because they are now appointed in an individual capacity rather than the seats belonging to their organisations.

To go with that change, we added some anti-capture measures to the by‑laws. Also, the conduct policy that we introduced at OARC 20 has now been formally ratified by the members; that's just in line with industry best practice on diversity and inclusiveness.

Now that we have finished our second workshop this year, we need to get started on next year's workshops, and we're looking for candidates for our Programme Committee. So, if you are interested, please let us know.

So, onto the workshops.

We do these twice a year. In a sense, there is a very similar operational focus to RIPE, NANOG and other bodies, but we try and keep it focused tightly on all aspects of the DNS. They are up to two days long. This time we co‑located with CENTR. Basically, all the materials from that workshop are available at the URL on the slide, and it was our biggest yet: 192 registered attendees, and actually the number that registered and the number that showed up were very close as well, which was great.

Hot topics: well, some of you may have noticed that the root key signing key got rolled just under a week ago. There are always talks about abuse prevention, detection and mitigation. There were a couple of talks about DNS privacy protocols. And then the usual selection of measurement and analysis tools, and there was a bit more detail on our own systems and software.

So, I don't think there is anything I'm saying here that nobody in the room is aware of. The key got rolled. They were going to do it a year ago; there was some strange data which nobody quite understood, which meant that they were uncertain of the outcome. So, after a lot of measurement, outreach, proposals for new measurement technology, and a lot of planning, Duane and Matt and the teams at ICANN and VeriSign pushed the button on Thursday in Amsterdam, and it seems to have gone very well. There are, you know, a few bumps and anomalies and wrinkles in the data, but at the moment it seems like things have gone smoothly.

OARC's role in this: we didn't make the key roll happen ‑‑ I'm certainly not going to take any credit for that at all. But we were asked by ICANN and the root operators to gather data, one of our PCAP collections from the root servers, mostly from Root‑Ops; we still need more data from resolver operators. And that was done over an 82-hour window ‑‑ basically 36 hours either side of the actual rollover, plus 10 hours to account for time zone differences for the people in Amsterdam.

The other thing we did is that we had an hour's worth of open programming at OARC 29 so that the key players could talk about that. And we had a few interesting presentations there.

We also provided some space for all the people who were physically in Amsterdam and involved in the key roll: some of our contributors, OARC staff, various usual suspects. And we basically sat and watched it. There was a really nice progress meter from the NLnet Labs and SIDN guys. We provided a Jabber room; we actually spent more time arguing about which technology to use and how to get people signed into it than we actually spent monitoring, but we think it's part of our role to provide that kind of space so that people can coordinate when there are big things going on with the DNS.

So, the various presentations that some time was spent on. From ICANN: a very pleasant non‑event, which is a great thing to be able to say.

The graph that I just showed you: SIDN showed how they use the Atlas probes to query the root ‑‑ the RRSIG for the root ‑‑ and show the old key going down and the new one coming up. There were some minor issues reported, some browsers with DNSSEC built in, and needless to say the cryptocurrency community had their spanner to throw in the works.

But, you know, you can look at that graph and you can see that there are some strange inflection points there and also some jitter in the data. We have the data, and there are other data sources out there, so the researchers will be looking at that in more depth; hopefully we can learn some lessons, and the old key completely going away is another event we have to look forward to.

A related talk, which wasn't during that session, was nic.cz talking about how they rolled over to an ECDSA key in their TLD.

So that was the key roll. We had several talks, as usual, on abuse-related topics. There was a talk about using passive DNS to detect various abuses of IDN schemes to create domain names that looked like well-known brands but actually weren't. Another talk looked at how effective various DNS defence and mitigation measures were during DDoSes. We also had one members-only security talk from AFNIC, so I can't tell you anything about that, but it was an interesting talk.

I think one of the more commendable projects going on at the moment is Bert Hubert's initiative, recognising that understanding all of the DNS standards out there from an implementer or operator point of view is actually really difficult. So, he has driven a number of things: there is the hello-dns kind of wiki knowledge base, there is the DNS camel viewer which lets you get over the hump of looking at all the RFCs, and there is the tdns teachable sample server code. I definitely think these are things you should look at if you want to learn more about how to make the DNS work.

The other big thing that's going on at the moment is the deployment of DNS over HTTPS and DNS over TLS, the two approaches to this. We had a presentation on Cloudflare's privacy resolver, and some discussion of the two protocols, comparing them from an operational and also from a performance point of view.

And then we had a preview of Sara's talk to the Plenary on Monday about the whole issue of encrypted DNS transports: are the browser operators just going to steal all your DNS traffic and feed it back to their giant clouds, and do ISPs have to worry about this? That is certainly an important point, and we had some quite lively discussion about it.

Measurement and analysis tools: the DNSThought tool, basically an output from the hackathon last year, and how that is being used to measure caching resolver issues; and also a talk on applying machine learning techniques to some resolver data to try, again, to better understand what's out there.

Some new technologies: C‑DNS, basically a format for capturing your DNS data which is more efficient than raw PCAPs ‑‑ it can actually be quite efficient from both a space and a CPU processing point of view ‑‑ how that's migrating towards a standard within the IETF, and how the code is now becoming available as Open Source.

Andrei talked about CNM, and in a different layer-of-the-stack talk we heard how DNS is actually being integrated into LoRaWAN wireless networks in some ways that you might not completely expect.

Jerry Lundström, our other software developer, told us quite a lot about what he has been doing with the software tools; there are various tools that OARC has inherited or generated over the years, but he has also been creating some new tools as well. Most of the progress this year has been on dnsjit, which is basically an evolution of ‑‑ or inspired by ‑‑ the drool replay tool; there have been some features added to drool, and dnsjit is now a very general-purpose engine. So if you are doing anything to do with capturing or replaying ‑‑ if you want, for example, to stress test ‑‑ I strongly recommend that you look at these tools, and Jerry will be here till Friday if you want to find out more.

We have added a number of new features to DSC and dnscap; sometimes these are contributed, sometimes these are new features that we want to add.

There's been considerable movement over the past couple of years in modernising our build environment. You will find all of the Open Source tools on our GitHub site, and there is fully automated building and testing, and also generation of packages for the main Linux distributions there.

A bunch of other topics. I'm sure you are all aware DNS flag day is coming up; there was some talk about the various test and compliance tools that are being put out so that people don't get blindsided by this. Duane Wessels talked about some changes to the way that glue records are handled between the .net and .com zones. There was a talk about message digests for DNS zones. There was an update on where we are with OARC systems and, more importantly, where we're going with OARC systems now that we have a new broom on that. And we also had the PGP key signing ‑‑ we always have that at meetings ‑‑ and basically a 'don't take for granted that everybody understands how to do this' talk, which I think is a good thing to have from time to time.

We also had space for lightning talks ‑‑ a few less of these than usual, because we set aside the time for the key roll discussion. Some operational experiences of DoT and DoH at Cambridge University, some talks about using DNS for some MAC address purposes, and also a study of aggressive caching, TTLs and NSEC.

So that's really the summary of my recap. You can find all the slide decks on the event site that I mentioned earlier, and you can find all the presentations on the OARC YouTube channel. Most of the OARC folks have stayed for the RIPE meeting, so please do speak to myself or Jerry or Matt or Denesh if there is anything you want to know about what's going on.

We have next year's meetings all lined up. I am afraid we're not going back to RIPE next year, but for OARC 30 ICANN is generously hosting, and that's going to be in Bangkok. And in a year's time we will be going to Austin, Texas, to co‑locate with NANOG, which is a thing that we do very regularly as well.

It was great to share the programme and the venue with CENTR. We have wanted to co‑locate with CENTR‑Tech for a long time, and it's nice to be able to do that. It seems to have worked, so hopefully we'll get a chance to do that again in another couple of years. And it's always a pleasure to co‑locate with the RIPE meeting as well ‑‑ I mean, the RIPE NCC makes it very straightforward for us.

So that's it basically. I'm happy to take questions. If you want to ask detailed questions about the presentations, I suspect I will refer you to the person that presented who is probably still in the room, but I'll certainly do my best.

SHANE KERR: Thank you Keith.

(Applause)

So, I don't see any huge lines at the microphones. I was at the OARC meeting and I thought it was good. It's always a great meeting; if you are involved with DNS at a technical level, I really recommend it as the best place to meet the people who are really working on the DNS. Now, as DNS Working Group Chairs at RIPE, we were a bit worried that people might see the same presentations and things like that, but I think we didn't see that, so I hope we have a lot of interesting stuff that people haven't yet seen in our sessions today. And I guess that's about it. Thanks, Keith.

Up next on the schedule I think we have Roy Arends listed, but I think we are going to have Ed Lewis come and talk to us about some sort of especially privileged zone in the DNS.

EDWARD LEWIS: Ed Lewis from ICANN. I have no slides ‑‑ I won my battle over that; Dave and I were discussing it.

First of all, I should start out with: this is the DNS Working Group at RIPE ‑‑ does anyone want me to describe DNSSEC? Usually we have a standard song and dance about what it is and what the KSK is. If you are in the audience and you don't want to seem like you don't know what that is, ask me later. But assume for now that the KSK is a configuration parameter that all the validating resolvers out there have. That's all that matters for the purpose of what I'm going to say.

So, this week ‑‑ actually last Thursday ‑‑ we hit a milestone in the project, the milestone that had the most anxiety attached to it, because we were going to get rid of the old KSK signature and put in the new one. The impact is that if you didn't update whatever you had doing validation, it would just say everything in the world was down: SERVFAIL for everything. Not just signed zones but everything, because the root zone is signed, and if you're validating and can't validate it, you are not going to get anywhere. The thing that I want to get across now is that this project is going on very slowly, very methodically; we fit it into our existing institutional systems for managing the root zone, so we didn't do anything special for it, and this is mostly why it takes so long. Going over the history of this project: actually, three years ago in a few months, we had a meeting in this same building ‑‑ there was a DNS OARC and there was a RIPE meeting ‑‑ where we talked about: here is our plan for what is going to happen. We took the first step in, I believe, the fall of 2016, when we created the key, and no one saw that. It was done in the ceremonies ‑‑ you can watch them if you want, but no one on the production Internet saw it. We transferred the key around over the next three months to make sure it was replicated for operational purposes. That was done, again, without impact to the Internet. Then we began publicising what it was going to be. We started producing the records that you would see eventually, and on July 11 of 2017 we put it into the root zone, with the thought that in October of 2017 we would roll. We didn't do it in 2017; we found data that gave us some pause, and so we had a one-year delay.

So, the first noticeable effect was on July 11, 2017: the new key went into the publication slot, and people could see the public KSK for the first time in the DNS protocol ‑‑ it was available on websites and so on before that. Now, a year plus goes by, and what we did last week was just change which private key we sign with at our end. That's all we changed. It's basically a change to just one record in the entire root zone: we just changed the signer of the resource record signature over the DNSKEY set. The other changes to the root zone around October 11 ‑‑ getting rid of the old ZSK, that's been going on; we have rolled over the ZSK. Nothing else changed in the zone at that time.

Now, people are excited about this because this was the time when what you have configured in your servers matters the most. And if people didn't know about this change, they would be in trouble. We had automated updates available, but that's never really been seen to work on a large scale: we know the protocol works, but we don't have operational experience with it, and we certainly don't know how to manage it. Operationally, we are still learning what to look for. We were concerned more about failure, but we never thought about success ‑‑ many operators do that: you think about where the failure is going to come in, but not about what you should be seeing. We are learning a bit about that now.

We're not done. The old key is still in the DNS ‑‑ it's still in the DNSKEY set. The next step is in November, when we're going to have a ceremony, which you can view and ask to attend, but it will be offline, where we will produce the next set of records for the next quarter, which will show that record being revoked. The revocation is done according to the protocol in RFC 5011, Automated Updates of DNS Security Trust Anchors.
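The revocation is directly observable in the protocol. Here is a minimal sketch using the dnspython library ‑‑ not something shown in the session; the root server address is just one well-known instance ‑‑ that fetches the root DNSKEY RRset, reports the RFC 5011 REVOKE flag on each key, and shows which KSK currently signs the set:

    import dns.dnssec
    import dns.message
    import dns.query
    import dns.rdatatype

    # Sketch: fetch the root DNSKEY RRset over TCP (k.root-servers.net used
    # as an example) and report the RFC 5011 REVOKE flag (0x0080) plus the
    # key tag of the KSK that signs the set.
    query = dns.message.make_query(".", dns.rdatatype.DNSKEY, want_dnssec=True)
    response = dns.query.tcp(query, "193.0.14.129", timeout=10)

    for rrset in response.answer:
        if rrset.rdtype == dns.rdatatype.DNSKEY:
            for key in rrset:
                status = "REVOKED" if key.flags & 0x0080 else "active"
                print(f"key tag {dns.dnssec.key_id(key)}: flags={key.flags} ({status})")
        elif rrset.rdtype == dns.rdatatype.RRSIG:
            for sig in rrset:
                print(f"DNSKEY RRset signed by key tag {sig.key_tag}")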

The small issue there is that the DNSKEY response size, when you ask us for it, will be a few more bytes than it's ever been before ‑‑ about 10 or 12 more. As an operator you are always concerned about change, and that's what we're looking at, but it's only 10 or 12 bytes, so how bad could that be? Who knows?

If all plans stay as they are, January 11 is the day that would happen. Could it be delayed? Yeah, it's operations. But January 11 is when you would expect to see in the DNS the next difference related to the KSK and root zone management.

That will go out there; it will be the revocation. We will be looking then at whether we need to alter what we do. For the first quarter of next year, that revocation should be available; we may pull it out sooner if it's causing pain. By April 1st, the old KSK should be a memory ‑‑ it should be out of the DNS by then. And then we still have to clean up our institutional systems, the air-gapped systems, to no longer have it available anywhere. All in all, this project actually runs at least through August of next year, but not with the anxiety that we have seen in the last week.

So that's where we are. That gives you an idea. This is a long-range project. The reason I want to stress that is that when we hear calls for discussions of the next roll: we haven't finished the first one. We are learning lessons as we go through the process; the data we're seeing is not complete. We don't know what to say about what we have seen so far. We are observing things that we didn't quite expect and we are watching to see what the impacts are. We see feedback and other things coming in to us, but we're not sure what the signals mean right now. I have been asked a few times, and want to defer, questions about the outages ‑‑ what reports there have been. We have heard third-hand reports; very few people ‑‑ I won't say zero ‑‑ have come to us and said: we had a problem, can you help us? I mean, very rarely does anyone actually contact us about an operational issue. We learn of things through other places. We ask what happened and, for the most part, people find a minor thing that was overlooked, they fix it and move on. The fear that there would be a systematic outage throughout the world never materialised, and that was what we were expecting to have to react to, if we reacted at all. Not that we expected it to happen, but that's what we were prepared for.

But it's interesting that we really ‑‑ the group of us who have been involved with this have been looking for things, and we don't have a single place to go look. So, we're all ears right now. As much as we have said this has gone smoothly ‑‑ it has ‑‑ that's not a scientific verdict on 'smoothly' yet; we're still waiting to hear. The time when operators communicate with you might be after they have had an outage: they fix it, they reboot the system, they do root cause analysis, and then they discover what it was, and only then do we get it ‑‑ that might even be a week or two later. An engineer might reboot a system and it comes back up, because that fixes the symptom, but that's not always the cause of the problem. I have yelled at my engineers: if you say a reboot solved it, you are not trying hard enough. So we don't really have, even at this point, I think, in my opinion, enough to go on to say: here is what happened as an aftermath of that.

There are certain things out there that we have heard about, but literally, in one case we're trying to find out from an operator what happened, because we have heard reports, but we haven't gotten to the operator, even at a voice or a text level, to say: do you have someone we can talk to there? We are still trying to find out.

So, we're there.

Also, one thing that we have in our plan ‑‑ the final thing in our plan ‑‑ is to report and document what we have learned from this; and again, be mindful that I see this as going on for almost another year before we actually put down the last keyboard click on this. That report is still in progress. We have learned a lot. We have learned about communicating with the operational community ‑‑ I need to bundle up some stuff about that ‑‑ and we have things we can say so far about what we have learned. But there are a lot of things that come out of this. This is a project like I have never seen before: we don't know ‑‑ the Internet doesn't know ‑‑ all the participants; there is no roster of everyone. We have had to coordinate all this ‑‑ we being the Internet, not ICANN. We just manage the KSK; it's a small little thing, and then we're done. All the configurations throughout the world had to be updated, including working with the Open Source software developers to make sure the tooling worked and the code was there, and making sure the messages got out there.

We have so many barriers to communication, because people fear spam, that getting an operational message out there has been kind of: are we allowed to say this kind of thing?

I would say that I have spoken at RIPE on this before; the last time I spoke at RIPE one of the comments was: we have heard this before, stop coming. So... it's just interesting going through these, and we are going to have to document all this. Whether it's outside or inside the protocol, we need a more manageable system. We don't strictly need it, but it would be nice to have something more manageable.

So, that's what comes to me as what we should say about the roll today. I know that at DNS OARC there was a discussion, and one of the first things out of anyone's mouth there was: when's the next roll? I expect people here might have opinions on that, and we welcome questions on that, we'll discuss it ‑‑ but also keep in mind that we're still not done with the first one. And by the way, I would say there are only two organisations I know of that have actually done a KSK roll at the upper levels, and we're not one of those two. I won't name names. If you are curious ‑‑ I know of one, and I think the other one has done it. So with that, I'll open up for questions.

SHANE KERR: Thanks, Ed. So, we have already got someone in the queue.

AUDIENCE SPEAKER: Joao. During your talk you referred to the KSK available to a given name server as configuration. I think that's a mental model that we have to change. I would like people to think more about these as being state of the server rather than configuration, because thinking about configuration as we have been doing ‑‑ and I am guilty of that for the last ten years or whatever ‑‑ means you end up having static keys cut and pasted into the configuration file, and configuration is something that's hard to change, normally, not only because of the state of the resolver but also a lot of other things. It would probably be a good time for the implementers themselves to think more about the state of the server, and about how the configuration of the server is changed, to treat these as state, which makes everything a lot more dynamic and flexible. So, move away from this model of having static things to just having pointers. Think about the set of keys that the resolver has access to in the same way as you treat zone files. And then perhaps we'll be making some progress to avoid having the same problems we had with these next time around.

EDWARD LEWIS: Let me clarify what I mean when I say the KSK is a configuration parameter. For one thing, I have actually been working on an Internet-Draft ‑‑ which may not come out ‑‑ which talks about the trust anchor database as an adjunct to the name server's process and its input there. If you also look at the port 53 protocol, what goes back and forth there is mostly data. To date, the only time that we pass back a configuration parameter is the recursion desired / recursion available bits. That's the only thing I see in the protocol which lets us say: I'm not configured to let you do recursion. And in that sense the KSK is a configuration parameter. But you are right, we want the configurations to be more dynamic, and if you go back to a paper from 1988, Mockapetris talks about the need for more liquidity in the way the servers behave ‑‑ the paper is fascinating ‑‑ and it would be great to do that. We also have the fear that if my configuration becomes liquid then anyone can change it. And we have had this problem with control over the years: the host name ‑‑ the version.bind query ‑‑ became a hot topic. It's great to know what version you are running, to be able to deal with it; people say if you know what I'm running then you might attack a weakness in that version. This gives you an idea of the scale: over the decades we have had this debate about what we do with this.
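For reference, the version.bind query mentioned here is an ordinary TXT query in the CHAOS class; a quick dnspython sketch (the server address is a placeholder):

    import dns.message
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    # The version.bind query discussed above: a TXT query in the CHAOS
    # class.  192.0.2.53 is a placeholder server address.
    query = dns.message.make_query("version.bind", dns.rdatatype.TXT,
                                   rdclass=dns.rdataclass.CH)
    response = dns.query.udp(query, "192.0.2.53", timeout=3)
    for rrset in response.answer:
        print(rrset)  # many operators override or refuse this, as discussed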

AUDIENCE SPEAKER: Petr. I'd like to suggest treating the KSK not as state or configuration, but as a property of the software, because patch cycles are always shorter than rolling cycles.

EDWARD LEWIS: That's an interesting comment. When I went through some of the older documents talking about the roll plan ‑‑ the documents I was reading in 2015 about what was done earlier ‑‑ there was actually a statement saying that we shouldn't trust what's in software. There was a time when we believed that software was so slowly updated that what was configured in software was going to be out of date compared to operational reality. Some of the older documents said: don't do it that way. But this time around, one of the things we tried to do was engage the Open Source software development community and say: here is the key, please put it in there.

And we had an interesting thing come out of this. It's a great help ‑‑ it's a great help to have the key distributed in a way that no one cuts and pastes it.

Basically, the thing that emerged was that some people had hard-configured in their configuration files either the key or the option to not update the key, and so on. And so we found people were updating their software but not updating the configuration ‑‑ we have to get that down, is what I want to add.

SHANE KERR: It sounds like Mark has something to say.

AUDIENCE SPEAKER: Mark Andrews, ISC. I think the year's delay actually helped a lot, in that all the Open Source vendors had several maintenance releases in that time, and that really flushed out the old key. Mind you, we had a bug report five minutes ago about Debian running BIND 9.8.4, which went end of life a while ago. They didn't follow ‑‑ they didn't do the key rollover. So you are getting one secondhand now rather than third hand.

SHANE KERR: Hopefully next time you come, you'll have slideware for us ‑‑ slides and charts to see.

EDWARD LEWIS: I am a really bad drawer of things.

SHANE KERR: I have one final question. So, when IPv6 doesn't work, the thing you do is just turn it off, because who cares? Has there been any tracking of people who might have turned off DNSSEC instead of bothering to update this stuff ‑‑ 'it sounds hard and I don't care'?

EDWARD LEWIS: For the day, I wrote a little script, and my target was more to look for that than to look at the adoption of ‑‑ the graphic you saw when Keith was up here. There's always this difficulty: it's very hard to tell if someone is doing validation right now.

SHANE KERR: You could look for ZSK.

EDWARD LEWIS: Just to keep it a short answer: if you ask anyone who does DNSSEC studies, whether a given IP address is doing validation is not a very clear question ‑‑ you know, others ‑‑ Geoff Huston has talked about that. So, it's hard to even know that. What we can do is look at surveys of parts of the Internet out there and tell, but the hard part is that a well-run name server may not let you ask the question from outside. We would love to be able to estimate that, and people want to know about DNSSEC as a technology being diffused through the Internet, but the Internet is not built to be measured that way. We have said pervasive monitoring is a bad thing, but it's essential for what we're trying to manage. I would love to be able to do that.



SHANE KERR: Apparently the roll didn't break your DNS, so the vendors had to come up with a new way.

PETR SPACEK: Hello. If you were at the OARC workshop, this is a different presentation from the one you have seen there. Okay. How many people in the room have heard about the DNS flag day? Okay, so this will be easy.

Let's make it quick. The problem basically is that the Internet is full of old, unmaintained DNS servers ‑‑ or at least seemingly unmaintained, because they don't reply to queries which have been valid for the last 20 years. In 1999, the EDNS standard was, you know, put in place, and it seems that we still have servers out there which don't reply correctly or, to be precise, don't reply at all when confronted with an EDNS query.

The problem is that it's just a very minor portion of the Internet, but it forces DNS resolvers to do weird hacks just to make this minor portion of the Internet work, which means cost for everyone: everyone who is developing this software, and end users, because, as with every workaround, the code sometimes goes wrong, and then even operators who have perfectly fine, up-to-date DNS servers face consequences because of some old stuff out there.

Usually the problem is caused by old software or a weird firewall configuration, possibly a load balancer or something.

So, vendors of DNS resolvers decided that, okay, 20 years was enough space for an upgrade, so it's time to stop supporting this old software, let's say.

So, beginning in February 2019, Open Source DNS implementations will stop implementing work-arounds for weird EDNS non-compliance, and the nice thing is that some of the public DNS resolvers have joined the club as well, so it will not depend purely on the update cycles of the ISPs and so on; the big operators joined as well. So it will actually be a flag day ‑‑ or maybe a flag week or a flag month, depending on the rollout plan.

Okay. So, if a DNS server doesn't respond at all to EDNS queries, resolution will just fail and that's it. We are not going to work around it any more.

Well, the nice thing is that it's quite easy to test from the outside, because if you have an authoritative server, presumably you have it so that other people can ask it questions. So we can test that. Here is a link to the website. I encourage everyone who hasn't seen the website yet to go to the website and have a look.

Because, you know, this guy is not going to save you from the impact this time.

So, on the website there is a simple form with one field and one button. If you enter the zone name ‑‑ not a domain name, but a zone name ‑‑ and press the button, it will give you a result. In the case of ripe.net it's super easy: green, congratulations ‑‑ thank you, guys. But there are a couple more ways the test could end up.

One of them is the so-called minor problem, which means that the authoritative servers are not 100% compliant, but they will not break immediately after the flag day. They might break on the next flag day, but not this one. In general, we still encourage operators to solve these so-called minor problems, because they are still causing, you know, annoyances for other people; for example, it prevents some operators from deploying DNS cookies, and DNS cookies would be a very good measure to protect everyone from denial-of-service attacks caused by DNS reflection.

Okay. Another possible answer is so-called slow, which basically means that one or more of the DNS servers for a particular domain are going to fail after the flag day for some reason, but there is at least one DNS server which will survive. That practically means that a DNS resolver attempting to resolve a name from this domain will have to jump through a couple of hoops to eventually ‑‑ hopefully ‑‑ find the IP address which works, which means that the users will see significant latency. So if you get this result, please go and fix your DNS servers, otherwise your phone will start ringing after the flag day.

And the worst result is: it will stop working. If your domain is in this state, it basically means that it will end up like the dinosaur in the picture, because after the flag day it will be totally dead. It effectively means that no server IP address is able to respond to a perfectly valid EDNS query, and DNS resolver vendors are not going to work around it any more. 20 years was enough.

Okay. Let me go back one slide. You can see that at the very bottom of the page there is a URL, and that links to a so-called technical report, which is basically a result from the brilliant tool of Mark Andrews, in the first row ‑‑ thank you. And the technical report has a bunch of information: we do different tests, and some of the tests fail for various reasons and so on. If you are an operator and you want to debug your deployment, you can scroll down on the same page and you will see the description for every single test which failed. Besides other things, the description contains a command, which is here, and you can just copy and paste this command to the command line, run it and see what it returns.

The page contains a description of what you should expect in the output of the command. You can do the test manually to make sure there is nothing broken with the testing site or with the evaluation logic of the test and so on. So this website, or this technical report, should have all the information which is necessary for actually fixing the DNS software ‑‑ or deploying a fixed version, I should say.
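The gist of the check can also be reproduced in a few lines. Here is a simplified sketch with dnspython ‑‑ not the actual ednscomp logic; the server address and zone name are placeholders:

    import dns.exception
    import dns.message
    import dns.query
    import dns.rcode

    # Simplified EDNS compliance probe in the spirit of the flag day test
    # (not the real ednscomp tool).  Placeholders: server address and zone.
    server, zone = "192.0.2.53", "example.net"

    probes = {
        "plain DNS": dns.message.make_query(zone, "SOA"),
        "EDNS0": dns.message.make_query(zone, "SOA", use_edns=0),
    }

    for label, query in probes.items():
        try:
            response = dns.query.udp(query, server, timeout=3)
            print(label, "->", dns.rcode.to_text(response.rcode()))
        except dns.exception.Timeout:
            # No reply at all to the EDNS query is exactly the behaviour
            # that resolvers will stop working around after the flag day.
            print(label, "-> timeout")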

If anything fails, you know, step 1 is: try to upgrade to something from this century, or ‑‑ well, maybe this decade. If that doesn't help, please have a look at the load balancer, because often, you know, load balancers can do weird stuff with DNS. Or the third alternative is the firewall: if it's not the DNS server and not the load balancer, maybe the firewall has something really, you know, inventive in its rules, like 'drop queries which I don't understand'.

And that's it.

So, I hope that your DNS is ready, and if not ask questions.

SHANE KERR: Thank you. Have you ‑‑ have people come back to you, working on this project, directly with reports of problems, or do you think people are just fixing their own stuff with it?

PETR SPACEK: After the previous talk two days ago during the OARC meeting ‑‑ one day after the report ‑‑ I was asked by one of the DNS developers: please send me a bunch of domain names which are broken; I want to test whether our software behaves in the expected way, and so on. So, I just used grep, found a couple of weird domains in the logs, sent him a bunch of them, and it turned out that half of them were already fixed. So naming and shaming really works, which is surprising to me. And if you want to see the table with the shame list, it's in the previous presentation from OARC; if you are part of the shame list, talk to me, I will help you to configure this stuff.

SHANE KERR: So we're going to make a new DNS Working Group policy of shame.

PETR SPACEK: Sounds like a good plan. No, seriously, it seems that a bunch of the operators weren't aware of the problem, and, you know, they are not incompetent, just not aware of the problem. So once we tell them, you know, it works. I'm making fun of it, but it's actually about communication; it's like: come talk to us, either me or any other Open Source vendor, and we will help you.

AUDIENCE SPEAKER: Hi, John Dickinson from Sinodun. Just in case anyone in the room is running a name server behind an SRX firewall from Juniper: you might want to make sure your DNS ALG is turned off, otherwise it does break. If there is anyone from Juniper here who would like to talk to me about that, talk to me after.

AUDIENCE SPEAKER: Can you provide, along with the DNS flag day debugging, the results to operators ‑‑ which software versions are okay? If that's available, operators can evaluate before the rollout date.

PETR SPACEK: If I understand you correctly you are asking about software versions which are okay, is that right?

AUDIENCE SPEAKER: Yes.

PETR SPACEK: Okay. There is a particular reason why we don't do that explicitly, and the reason is that the DNS software is only part of the equation; it's not enough. It's not unlikely that there is a load balancer or firewall which is breaking stuff. That's the reason why we intentionally point people to the website with the tool, because the tooling is, you know, checking the full chain from the client ‑‑ or testing server ‑‑ to the authoritative server, including the firewall and including a load balancer. Having a new enough version of the DNS software is just not enough; it might break at different places in the network. So, I hope that explains why we are not doing that and will not do that.

AUDIENCE SPEAKER: Still, a list of software versions would be useful for operators.

PETR SPACEK: I can see the point, but again, we were caught out a couple of times. I was helping to upgrade a DNS server for one big Japanese operator, and they found out that it wasn't the DNS server. So...

AUDIENCE SPEAKER: Mark Andrews: Just remember, the software for doing all these tests is available. So, if you are a TLD operator, test all the delegations. If you are a registrar, test the servers that are being delegated to before they get delegated. So, the software is available, it will spit out the results, and it takes a couple of seconds.

PETR SPACEK: If you are interested in testing or doing surveys and so on, please have a look at the OARC talk, which has way more details about mass testing.

SHANE KERR: I have a quick question about that, Mark. The software is available, but in this case it's doing single tests against a single domain. Is the software optimised to, like, pull out individual IPs of name servers and test them that way? Oh, it is. Great.

Mark Andrews: The software can take an entire zone, effectively ‑‑ just name / name-server pairs, or name, name server and IP address ‑‑ and you just feed it through, and it will run several hundred queries simultaneously. So it's designed for zone operators to actually test all the delegations in bulk.

SHANE KERR: Thank you.

AUDIENCE SPEAKER: Matt, Oracle/Dyn. Zone owners are particularly interested in this because they want their data to reach the clients. This test mainly focuses on the path between one client and the authoritative name servers, which is a good start, but it doesn't tell you if all your clients can reach you, because it only shows you one vantage point, right? So I think it would be very useful to see, like, an Atlas-style or other kind of vantage point ‑‑ a looking glass ‑‑ whether it's working for clients all over the world.

PETR SPACEK: That's an excellent point, but let me reply that, well, about 10% of clients worldwide are DNSSEC-validating, and they wouldn't be able to validate if EDNS didn't work, so we have some reasonable expectation that the problem is mostly localised on the authoritative side, or near the authoritative side, I would say. But yeah, of course, we don't have all the data. I think that's a fair point.

AUDIENCE SPEAKER: To measure is to know.

AUDIENCE SPEAKER: You only need one vantage point to see that dyn.com is in effect broken.

SHANE KERR: Exactly. So, for those of you who don't know, I work for Oracle Dyn. So...

Well thank you.

(Applause)

So, for those of you ‑‑ you may have noticed that we have a lot of Petrs and Ondrejs in the DNS world, and we have a shortage of human names because we use them all in computers. So the next presentation is by Ondrej, which is hopefully something a little different and interesting. As you can see: managing DNS zones using Git.

ONDREJ CALETKA: I am continuing this Czech block of Czech people in this Working Group. I am going to talk about something completely different: something that I have deployed in our organisation, which is the Czech national research and education network operator called CESNET. It's about managing DNS zones using Git. I will try to cover why some ISPs, like us, still manage DNS zone files manually, and how to do it using Git. Then I will elaborate a little bit more about DNSSEC signing. And it's not finished yet ‑‑ there is still work to be done ‑‑ so, some future work as well.

So, why are ISPs managing DNS zones manually? Well, if you do it, you probably have your reasons. I was trying to figure out who is managing DNS zones, and I found, like, three types of enterprises. If you are a web hosting company, managing DNS zones is part of your core business, and usually you use some kind of your own solution, integrated with the control panel where you control the hosting as well.

So, you probably don't do it manually ‑‑ unless you are a very small web hoster, and then you are probably not competitive with the others.

If you are an enterprise, you either don't care about DNS names at all, or, if you care, you usually have some kind of Active Directory server or something like that taking care of all your named devices. It's usually one flat DNS space with host names, and for your chosen name there is one forward and one reverse record, and that's it.

If you are an ISP, the main thing you care about in DNS is reverse records. And with reverse records, it gets a little bit complex, especially in the post-IPv4-depletion era. You don't have a class A, class B, or class C of IP addresses; you have, like, two addresses, four addresses or eight addresses.

So this was something that was not expected when reverse DNS was set up for IPv4, so we have to use some kind of hack, which is actually Best Current Practice number 20. I guess everybody in this room knows BCP 38, so I would like you to know of BCP 20 ‑‑ because obviously, if there is a 38, there are also other numbers of BCPs. This BCP describes how to split one class-C-sized reverse DNS zone into lots of classless sub-delegations. It's something that is happening very often in IPv4 space, and it makes the number of zones quite big and managing them with a tool a pretty complex issue.
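To make the BCP 20 mechanism concrete, here is a small sketch that prints the parent-side records for a classless delegation of a /28; the child zone label and the dash separator are illustrative only, since the slash from the RFC's examples is awkward elsewhere (more on that later):

    import ipaddress

    # Sketch: parent-zone records for a BCP 20 (RFC 2317) classless
    # delegation of a /28.  The child zone label and the "-" separator
    # are illustrative; the RFC's "/" is only an example.
    net = ipaddress.ip_network("192.0.2.16/28")
    parent = "2.0.192.in-addr.arpa."
    child = f"{net.network_address.packed[3]}-{net.prefixlen}.{parent}"

    print(f"{child} IN NS ns1.customer.example.")  # delegate the sub-zone
    for host in net.hosts():
        octet = host.packed[3]
        # one CNAME per address, pointing into the delegated child zone
        print(f"{octet}.{parent} IN CNAME {octet}.{child}")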

So, how did we do it in CESNET? We are a pretty old Internet organisation ‑‑ we were established in 1996, and the Internet was there before. We started with a DNS server and zone files edited directly on the server, which is of course the first thing you can do. Don't do it. Because if you just forget to update the serial number, which is a typical problem, you will get inconsistency. And if you put in some typo, the zone will not be loaded and you will get an outage of the service. So it's not a good idea.

A much better idea is to move the primary data to some hidden master server, which we did in the next step. We also set up DNSSEC signing in 2012; back then, the only scalable Open Source solution available was OpenDNSSEC, so we put up OpenDNSSEC for the signing of our zones. And we also had a Git repository for tracking versions of the zone files, because it was quite useful to know when some record appeared in, and when it disappeared from, the zone file.

And the next step of the evolution ‑‑ the thing that I'm going to talk about now ‑‑ is the hidden master which is controlled by the Git repository, which is the source of all data. There is some tooling that prevents committing errors into the Git repository, there are some tools that make sure the servers are reloaded properly, and the DNSSEC signing is now separated into its own component on the path of the DNS data.

Also, a positive outcome of this change is that the management of some zones can be split among teams. We have lots of departments that operate pretty autonomously, and some of them have DNS zones to take care of, but they don't want to operate their own DNS server because they are not DNS experts, and we were not willing to let them access zone files on our servers, because we don't trust them enough that they won't accidentally make some typo, break it, and then run to us screaming 'fix it, please'. So we are looking forward ‑‑ it's not done yet ‑‑ to splitting out some of the zones so that domain management can actually be done by the different teams.

So, what was the motivation for this latest change? The first thing was that maintaining the zone files manually meant, like, I see five steps here, and of course if you do five steps manually, some things break. You have to make the change, you have to increase the serial, you have to re-sign the zone if it's signed, you have to reload the DNS server, and then you have to commit the changes to a Git repository. So, it was too much work, and not necessarily needed.

Also, OpenDNSSEC showed us some issues; some of them were caused by our deployment, but back in 2012 it was the case that, from time to time ‑‑ like once per month ‑‑ the SQLite database somehow deadlocked itself, and the OpenDNSSEC signer or enforcer crashed and had to be restarted; the recommendation was to switch to MySQL, which I did. But then the 2.0 version of OpenDNSSEC came out, and I found out the upgrade is non-trivial, that it would probably screw up everything; I would have to export the keys and import them into a new instance, because a simple upgrade was not working in our case.

There were also some issues with inconsistency of the KASP database. I think this was due to our setup, where we decided to share keys between zones because it looked like a better idea from an operational perspective; but it seems that is not a feature that is perfectly supported or tested on the signing side of OpenDNSSEC. This was probably causing the issues that made me actually read the source code, go directly into the database, and issue some DELETE statements there to fix OpenDNSSEC and keep it running.

So, that was also a thing that I was not happy about.

There was no support for algorithm rollover until version 2.0. And we now have this thing called CDS ‑‑ CDNSKEY ‑‑ where you can automate the update of your DS records in the parent zone; this is not supported by OpenDNSSEC right now.

So, how do you manage zone files using Git? Well, the desired state of how the system could work is that you have an operator with his workstation, and on his workstation he has the Git repository of our zones, which is synchronised with a hidden master server using the normal standard set of Git protocols, which is Git tunnelled over SSH. On the hidden master server there is a repository with a set of hooks, and this repository takes care of publishing changes in the Git repository onto the hidden master server. The zones are unsigned, and they are then transferred using normal zone transfers to a signer, which in this case is Knot DNS; the signer signs the zone and transfers it again to the public slave servers, which run two different implementations of DNS servers.

So, this is the idea of how it should work. So, how do we manage zone files using Git? Git itself is a generic version control system, or revision control system. It can be extended by means of hooks, which are simple executables that are executed at various points of the process Git is doing. The important thing is that these hooks are always local: they are not cloned when you clone a repository, and because they are local to the machine Git is running on, you cannot enforce that they will be run on the operator's workstation, because you have no control over the operator's workstation.

The way of managing zone files using Git was actually inspired by the RIPE NCC, because I asked what they were using, and they showed me some sort of shell script that they use for this exact case. So I was thinking, like, okay, I'll try to do something like that. Later I found out that there is already a project called gitzone, which is Open Source and solves an issue similar to the one I was looking at; it was implemented in parallel. Unfortunately, I discovered it only after I was looking for a name for my own piece of software. So I kept what I did, and I just said: this is 'not invented here', so I'm not going to use it.

But, you are not in my position, so I encourage you to go and give it a try.

So, how do we check that the zone files are correct? There is a utility out there which is part of the BIND server; it's called named-compilezone. It's a thing that will just read a zone file the same way BIND would read it when loading the zone. It will parse everything, and it will spit the zone out in compiled format: all records are expanded to fully qualified domain names, records are re-ordered canonically, and comments and white space are stripped. Which is great.

It also tells you if there is any error, and it also tells you the serial number of the zone. So, what you have to do is actually compile the zone every time you want to commit it into the repository. If it fails, just refuse the commit ‑‑ that's the easy part. But if it doesn't fail, you still have to check whether the serial has been increased, because if it was not, the zone would not propagate properly. So you have to compile the previous version as well, and then compare whether the zones produced by the compiler are identical or not. If they are identical, it means the change is only some unimportant change, like white space, ordering or comments, which makes no impact on the compiled zone.

However, if the content of the compiled zone changes, you check that the serial number has been increased, and if it has not, you refuse the commit. So this is what you need; this is basically what the RIPE NCC shell scripts are doing.
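A simplified sketch of that check ‑‑ not the actual dzonegit code; it assumes named-compilezone from the BIND package is on the PATH ‑‑ could look like this:

    import subprocess

    # Simplified sketch of the pre-commit check described above; not the
    # actual dzonegit code.  Requires named-compilezone from BIND.
    def compile_zone(origin, path):
        """Return the canonical compiled form of a zone, or raise."""
        result = subprocess.run(
            ["named-compilezone", "-o", "-", origin, path],
            capture_output=True, text=True)
        if result.returncode != 0:
            raise ValueError(f"zone {origin} does not load:\n{result.stderr}")
        return result.stdout

    def soa_serial(compiled):
        """Extract the serial: the third field after the SOA type."""
        for line in compiled.splitlines():
            fields = line.split()
            if "SOA" in fields:
                return int(fields[fields.index("SOA") + 3])
        raise ValueError("no SOA record found")

    def check_commit(origin, old_path, new_path):
        old = compile_zone(origin, old_path)
        new = compile_zone(origin, new_path)
        if old != new and soa_serial(new) <= soa_serial(old):
            raise ValueError("zone content changed but serial not increased")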

The thing is, in our case, things were a little bit more complicated. The named-compilezone utility actually requires not only a zone file but also a zone name, and the question is where to get the zone name. The idea is: okay, let's just name the files the same way the zones are named ‑‑ which is a good idea, I recommend it to everybody ‑‑ unless you do BCP 20. If you follow it pretty strictly, your zone names actually contain slashes, and you cannot put a slash in a file name. Even so, if you read BCP 20, you will find that they say those slashes are just examples, and you should use your own character that is somehow more appropriate, like a dash or something like that. But we had already set up our reverse zones like this, and it's quite hard to change. So, there had to be some other solution.

The other solution is that you can put an $ORIGIN directive at the beginning of the zone file, and the $ORIGIN directive will tell this set of hooks the actual zone name, if it's different from the file name.

The problem is that with this, it starts to get complicated, and then if you read something like the Google shell style guide, you will read that if you are writing a shell script and it's more than 100 lines long, you should probably be writing it in Python instead, because scripts grow, and it's better to rewrite sooner rather than later, at much lower cost. So this is why I implemented my own tool, which is named dzonegit. Don't ask me how to pronounce it or what it means ‑‑ I don't know; I was just looking for some name, and the name gitzone was already taken by the other project, so I invented this one. It's written in Python, currently 520 standard lines of code, plus tests. Even though it's written in Python, there are no Python dependencies. It's Open Source under the MIT licence, and it's made universal, so anybody could use it; there are no hard dependencies on our deployment.

How do you use it as a user, as an operator? Well, the simple installation is that you just download the one Python script and put it into your repository's Git hooks directory, which is this. Also, you can do a full installation like a normal Python package ‑‑ if you know Python, this is nothing new for you. The only dependency, obviously, is Git, and also the named-compilezone utility, which is part of the BIND package.

On the server side, we use a bare repository; this is a Git repository that doesn't have a working tree. If you know a little bit of Git, you know that you cannot simply push to a checked-out branch of a non-bare repository, so the normal setup is that on the server side there is just a bare repository. In this case it's a bare repository where only the Git objects are, but for the name server to be able to load the zones, you can have an external working tree, which is exactly what I have. A little note aside: this kind of setup also has the advantage that you don't have the .git directory in your working tree ‑‑ which is not a problem here, but if you download the slides, follow the link at the bottom and read what can happen if you deploy websites like this; it's quite a bad thing.

You can use any Git repository management software you like; I quite like the software named Gitolite. It's able to deploy custom hooks, and there are basically two hooks. The hook named pre-receive does the same zone checks as the pre-commit hook on the operator's side, because as I told you, the pre-commit hook cannot be enforced. You have to enforce the same kind of check on the server side, to make sure that nobody who overrides the pre-commit hook can push a commit that would break the system.

And once this pre-receive hook finishes, meaning the commits are okay, the post-receive hook will actually connect the Git repository to the DNS server. It checks out the external working copy, it generates DNS server configuration snippets, the part of the configuration DNS servers need in order to know which zones exist and what the zone file names are, and then it reloads or reconfigures the DNS servers on command.
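
Conceptually, the server side could be sketched like this; the paths, the knotc reload command and the function name are illustrative assumptions, not dzonegit's actual code.

    import subprocess

    GIT_DIR = "/srv/dns/zones.git"        # bare repository (assumed path)
    WORK_TREE = "/var/lib/knot/zones"     # external working tree for the server

    def post_receive():
        # Publish the pushed revision into the external working tree, so the
        # name server sees plain zone files and no .git directory.
        subprocess.run(["git", "--git-dir", GIT_DIR, "--work-tree", WORK_TREE,
                        "checkout", "-f"], check=True)
        # Regenerating the configuration snippet for all zones is described in
        # the talk but omitted here; finally, tell the server about the change.
        subprocess.run(["knotc", "reload"], check=True)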

All this is stored in the Git repository configuration, so there is no extra config file for the set of hooks.

So, to conclude this part: my tool dzonegit is a set of Git hooks, and it only runs on Git operations. It refuses anything that would break the zone files, that is, anything that would prevent the DNS server from loading a zone file.
Every time somebody pushes something into the repository, it will publish the zone files and also configuration snippets that can be included into the configuration of the name server.
It will call reload for every modified zone file, and reconfigure if a new zone is introduced or deleted. There is support for multiple repositories, with blacklists and whitelists just to avoid having a duplicate zone in two different repositories.

Okay. Let's move forward to DNSSEC signing. Because this was also quite a major change.

The first thing, which I learned the hard way, is that you should really use zone transfers for input and output to the signer. You should not use zone files, because from time to time new record types are introduced, and it can happen that the tool reading the zone file is not able to understand a resource record that the tool writing the zone file produced, so it breaks in the middle. This is not the case with zone transfers.
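
For example, with the dnspython library (my choice for illustration; the talk does not prescribe one), feeding the signing pipeline via AXFR looks roughly like this. Unknown record types survive a transfer as opaque RDATA, which is exactly why this is safer than parsing text zone files.

    import dns.query
    import dns.zone

    # Transfer the zone from the primary instead of reading its zone file
    # (the server address and zone name are illustrative).
    zone = dns.zone.from_xfr(dns.query.xfr("192.0.2.1", "example.cz"))
    print(len(zone.nodes), "names transferred")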

I also decided that we don't need an HSM for keeping the DNSSEC keys, because we don't use HSMs for SSH keys or TLS keys, so I consider this the same case. Also, you have to protect not just the zone signing keys but also the primary data, because if somebody is able to change your primary data, then it doesn't matter that they cannot access your private keys.

And I also learned the hard way that sharing keys between zones should rather be avoided, because it's not a standard feature; even in the Knot DNS signer there were slight issues with that. So if it's not a big operational issue, you should rather avoid it, and with the registry-side CDS/CDNSKEY support of .cz, where a major part of our hosted zones live, it's actually not an issue at all.

So, this is just an example of how you connect the signer to the deployment. The first thing is some Git repository configuration: you set a template for the configuration snippet of the name server, then the path where the zone files are checked out, and finally the command that is run when you want to reload the server.
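
In spirit, that repository configuration amounts to something like the following; the option names here are hypothetical stand-ins, since the real keys are on the slide rather than in the transcript.

    import subprocess

    settings = {
        "dzonegit.conffiletemplate": "templates/knot.json",  # snippet template
        "dzonegit.zonedir": "/var/lib/knot/zones",           # zone checkout path
        "dzonegit.reloadcmd": "knotc reload",                # server reload command
    }
    for key, value in settings.items():
        subprocess.run(["git", "config", key, value], check=True)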

And here is a simple template. Obviously you host many zones and you don't want to apply the same DNSSEC setup to all of them; you want some zones unsigned.

So you can put something like this in a template. Unfortunately the template is in JSON, which is not the greatest format for editing by hand, but it is the format natively supported in Python, and I was not going to add a dependency on Python packages just because of a configuration template.
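
A hypothetical template along these lines (the real dzonegit schema may differ) shows the idea: a default marks zones unsigned, and per-zone variables override it for signed zones.

    import json

    template = {
        "header": "zone:",
        # one "item" entry is emitted per zone; $zone and $template are
        # placeholder variables in this illustration
        "item": '  - domain: "$zone"\n    template: "$template"',
        "defaultvar": {"template": "unsigned"},
        "zonevars": {"example.cz": {"template": "signed-ecdsa"}},
    }
    print(json.dumps(template, indent=2))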

The produced snippet looks like this. You can see there is a list of zones with the template and the DNSSEC policy used for each zone, and that can be included into the configuration of the DNS server. You can also use any other signer you want, because everything is universal; you can adapt the templates to anything you need.

So, this is what we already have. Where are we now and what I'm going to do next.

The first thing is secure delegation automation: if you have DNSSEC, changing the DS records in the parent zone is also a chore. This works pretty well for .cz; it plays nicely with the Knot DNS signer and the CDS and CDNSKEY records. You signal in the zone what the parent delegation should be, and the registry will notice it and use it, so it's great. I am happy that it's not only .cz; .ch and .li are also getting support for the same mechanism, and hopefully other registries will join this club, because it's really great. Unfortunately we don't host any .ch or .li zone.

What about the other zones that we host? Well, for other registries we can implement something like this on our side, because the CDS record is a perfectly good way to signal that the parent delegation has to be changed. The first thing I looked into was the reverse zones delegated from the RIPE NCC, because they support DNSSEC: you have to put the DS records into the RIPE database, and the RIPE database has a REST API. So there is a way forward, and I will show you what I have right now.

The problem with other TLDs is that we would have to do it via the API of some registrar, and the question is which registrar provides such an API. From the research I did, in the Czech Republic it's quite hard to find one.

The biggest issue I'm currently looking to solve is actually: what about zones where we are the parent? Because if we are the parent, we have to put the DS records into our own zone files, and the problem is that if the zone files are in Git, you have to somehow automate the Git commits. And we still have those zone files in their original format with our own comments, which we don't want to get rid of, so it would be a little bit complicated, but I'm certainly going to look into this a little bit more.

A little bit more about this RIPE database DS updater, because I think it's something that the RIPE community could be interested in. This is what I already have as a prototype. I would say it's very early; it does something, but it's certainly not software that could be deployed anywhere yet.

It works like this: I have a special maintainer with API access in the RIPE database, and what it does is a REST query for all maintained domain objects, so it gets all the objects that can be changed by this maintainer. Once it has those, it compares the CDS records in the child zone with the ds-rdata attributes of the domain object in the database. If there is a change, if there are new DS records to be submitted to the RIPE database, it does one more check: the inception of the DNSSEC signature must be later than the last-modified attribute of the RIPE database object. This is actually a pretty nice way to protect against replay attacks: the CDS record has to be signed later than the last-modified date of the domain object in the RIPE database.
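
The replay-protection check can be sketched with dnspython as follows; the resolver address and function names are illustrative, and the real prototype may differ.

    import datetime
    import dns.message
    import dns.name
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    def cds_newer_than(zone, last_modified, server="192.0.2.53"):
        """True if the zone's CDS RRset was signed after the RIPE database
        domain object was last modified -- the rule described above."""
        q = dns.message.make_query(zone, dns.rdatatype.CDS, want_dnssec=True)
        resp = dns.query.udp(q, server)
        rrsig = resp.find_rrset(resp.answer, dns.name.from_text(zone),
                                dns.rdataclass.IN, dns.rdatatype.RRSIG,
                                dns.rdatatype.CDS)
        inception = min(sig.inception for sig in rrsig)   # seconds since epoch
        signed_at = datetime.datetime.fromtimestamp(
            inception, tz=datetime.timezone.utc)
        return signed_at > last_modified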

And if this check succeeds, there is a REST update of the domain object in the RIPE database. So, as I said, this is a prototype. I think if there is interest in the RIPE community, I can certainly share this code, preferably with the RIPE NCC, and they could run this updater. Because it's universal, all you have to do is put your first DS record into the RIPE database, add the special maintainer to the object, and then you don't have to care about anything else; someone just runs this script and the script keeps everything in sync.

So if there is demand in the RIPE community, we can probably work it out with the RIPE NCC so they could run such a script on their side and have it automatically update the DS records for all reverse zones of RIPE address space.

So, that's basically it. As I told you, this new signer and Git based management has been deployed in September this year, so pretty recently.

We have already migrated all our .cz zones from one shared RSA key to separate ECDSA keys using an algorithm rollover.

The tool is available for free as Open Source. Issues and pull requests are always welcome. And that's it.

(Applause)

SHANE KERR: It looks like we have a couple of either comments or questions here.

AUDIENCE SPEAKER: Hi. Benno Overeinder. Thank you, Ondrej, very insightful. Not a question but an update on OpenDNSSEC development. We have been very quiet; the last release was 2.1.3, in September 2017, a year ago. In the meantime there has been a lot of development refactoring the signer, things like fast updates. There have been optimisations of the signer for more concurrency and faster updates; we can accommodate five-minute updates of the zone. It's not public yet. And the CDNSKEY support, as you mentioned, will probably come in a release after 2.1. But we are actively developing. Sorry for taking microphone time for pitching this, but we're working on it, and thank you for the presentation.

ONDREJ CALETKA: I would be very happy if OpenDNSSEC developed further, because I am not biased for one implementation or against another. I just want something that works for me, and right now Knot DNS is working better for me than the old version of OpenDNSSEC did.

BENNO OVEREINDER: I completely understand. The way you work is very insightful for me also. So thank you very much for this presentation.

SHANE KERR: I'm going to cut the line because we are almost at the end of our session, but please go ahead.

AUDIENCE SPEAKER: Niall O'Reilly, loose cannon. As an SQLite fan, I am interested to know, perhaps in an offline follow-up, whether you isolated the cause of the problems you were having with SQLite.

ONDREJ CALETKA: This is a question for NLnet Labs; I read it in the documentation of OpenDNSSEC, and that was back in 2012, but again, we can discuss this offline.

NIALL O'REILLY: A closing remark: Emacs is great for looking after your serial numbers.

AUDIENCE SPEAKER: Tony Finch, University of Cambridge. I contributed the dnssec-cds tool to ISC for BIND. Using CDS records to update parent delegations is close to my heart, and I am enthusiastic and keen to see it being deployed widely, especially by RIPE, because I have a lot of DNS zones which I would like better automation for. Thank you for your presentation; you have combined lots of my favourite tools, so I was super keen on it. That's all.

ONDREJ CALETKA: Thank you for the tool. I gave it a try, and I also thank you for the nsdiff tool, which I like a lot, even though I'm not using it.

AUDIENCE SPEAKER: Matt Pounsett. Just on the subject of APIs for updating zones: if anyone is big enough to be a reseller, there are a bunch of wholesalers that provide APIs for that sort of thing, but the only registrar I am aware of that provides one directly to registrants is Gandi. There may be others, but that's the only one I have encountered so far.

AUDIENCE SPEAKER: John Dickinson. It's great to see you pushing storing zones in Git; it's a great idea. And I was interested to see that you are using Knot, and I just wonder, have you looked at the zone editing tools in knotc? The Knot server has a client programme.

ONDREJ CALETKA: You mean like direct editing of the zone using the knotc command line? I have looked into it. The thing is, we have this legacy of zone comments and things like that, so my primary goal was to keep the zone files as they are, in their order, with their comments, and not to turn them into anything dynamic. This was the most straightforward solution for me, and that's why I went this way. I was thinking about it, but it was not going to work for me.

AUDIENCE SPEAKER: Okay. So it's transactional, so you can do things like roll back. It automatically re-signs new entries or removes signatures for things you remove. And then you can always just check the zone into Git afterwards and do it that way. Well, you know...

And the other thing: when I hear talks like this, it always makes me think it would be nice if the name server vendors allowed you to specify a URL for where the zone file is, so you could have a file URL if you want to keep the old files, or a Git URL, or an HTTPS one, or whatever. It would be a nice feature, I think, but...

ONDREJ CALETKA: Interesting idea.

SHANE KERR: Thank you very much for your presentation again.

(Applause)

That's it for this morning's DNS Working Group. We have another session tomorrow afternoon at 2, and I hope I'll see you all there.

Coffee break.



LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.