Archives

Database Working Group
Thursday 18 October
2018


WILLIAM SYLVESTER: Welcome. Today we have another great agenda for the Database Working Group. Thanks all for coming. We'll ‑‑ I want to say thanks to the secretariat for helping out. Alistair and Fergal, both taking notes for us, and watching the chat group.

We'll jump right in. Ed.

EDWARD SHRYANE: Thanks William. You are all very welcome. I am a senior technical analyst; I work on the RIPE Database for the RIPE NCC.



Here we are. The team that's been working on database changes in the last six months. Most of the team are here in the room, so say hello.

Since the last RIPE meeting, RIPE 76, we have been busy. As usual, three focus points, the areas that the RIPE NCC finds important for the database. We have been focused mostly on Database Working Group changes and RIPE policies.

There have been a number of releases: four minor releases with various bug fixes. First of all, validating reserved words and prefixes. This has been in the RFC for years, but we never fully validated reserved words, and it came to a head when an AS-SET called AS-ANY was registered this year. So that validation has been added and the objects cleaned up.

Also, previously we were returning detailed error messages from Zonemaster on domain updates, but those messages were not always helpful and kind of revealed the internal workings of Zonemaster. We now return a more generic message.

Sync updates, compressed responses. I don't know if anybody was affected by this. We noticed with our external monitoring that HTTPS compression was not handled properly by sync updates.

Previously we had fixed this for mail updates: an unprintable character that worked its way into the database on update. We have now fixed it for the REST API and sync updates as well.

Finally, the major release was for NWI-5 in September, and following that we also had two small bug fix releases.



Moving on to the main work we're doing, the NWI-5 implementation. There was a lot of preparation involved for this; it has been in discussion for years. There was an extensive effort in discussing, agreeing and preparing for these changes. Earlier this year we published an impact analysis and implementation plan. There was a RIPE Labs article, thanks Denis for that, explaining the changes.

Nathalie, our product manager, was also involved in communications with the community, the Working Groups, and also the other RIRs and network operators, so we tried to reach as many people as possible. We are aware this was a huge change for the database and for the community.

Just to recap the main changes. This has all been gone over before, but just to recap it.

The changes for NWI-5 were that no new out of region route, route6 or aut-num objects could be created. So, for any space outside the RIPE region, existing objects were moved into a separate source called RIPE-NONAUTH, and we also removed the RPSL maintainer authentication.

So more details.

These are implementation details or implications of the major changes. One thing to be clear about is that out of region objects can still be updated or deleted. They have been moved to a separate source but you can still maintain them and this is especially important for maintaining your objects, you still have control over these objects, and especially for contact information it's important to keep these up to date.

By default, WHOIS queries both sources, so in general your queries will still return the same objects; only the source attribute changes on the out of region objects. So, your queries will still return the same data.
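To make that concrete, here is a minimal sketch (not an official client) of querying the database over plain whois on port 43. The default lookup searches both RIPE and RIPE-NONAUTH; the "-s" source flag shown for restricting the search is my recollection of the RIPE database query syntax and should be checked against the current query reference.

```python
import socket

def ripe_whois(query: str, host: str = "whois.ripe.net", port: int = 43) -> str:
    """Send a raw whois query and return the response text."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# Default lookup: both RIPE and RIPE-NONAUTH are searched, so existing queries
# keep returning the same objects; only the "source:" attribute differs.
print(ripe_whois("193.0.0.0/21"))

# Restrict the lookup to a single source (flag assumed from the query reference).
print(ripe_whois("-s RIPE 193.0.0.0/21"))
```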

AS numbers that don't exist in the RIPE database can still be used.

But we will add a warning to the update response.

Reserved AS numbers: we added validation to make sure that they can't be used as an origin.
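A sketch of the kind of origin check described, assuming the reserved ranges come from the IANA special-purpose ASN registries as I recall them; the list the RIPE NCC actually enforces is configurable (as Ed notes later) and may differ.

```python
# Reject route/route6 updates whose origin is an IANA-reserved AS number.
# The ranges below are my reading of the IANA registries, not the NCC's list.
RESERVED_ASN_RANGES = [
    (0, 0),                    # RFC 7607
    (23456, 23456),            # AS_TRANS, RFC 6793
    (64496, 64511),            # documentation, RFC 5398
    (64512, 65534),            # private use, RFC 6996
    (65535, 65535),            # reserved, RFC 7300
    (65536, 65551),            # documentation, RFC 5398
    (4200000000, 4294967294),  # private use, RFC 6996
    (4294967295, 4294967295),  # reserved, RFC 7300
]

def is_reserved_asn(asn: int) -> bool:
    return any(low <= asn <= high for low, high in RESERVED_ASN_RANGES)

assert is_reserved_asn(64512)       # a private-use origin would be rejected
assert not is_reserved_asn(3333)    # an ordinary allocated ASN passes
```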

One thing we were asked to do was to notify the origin AS holder: if the aut-num exists in the database, we send a notification to the AS holder when a route or route6 object is created. And you can arrange to receive that by adding a notify attribute on the aut-num object.
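Purely as an illustration of that, a made-up aut-num object carrying a notify attribute; per the behaviour described above, this is the address that would receive the notification when a route or route6 object is created with this AS as origin. All values are hypothetical.

```python
# Hypothetical RPSL fragment; AS64496 and the other values are documentation examples.
AUT_NUM_EXAMPLE = """\
aut-num:        AS64496
as-name:        EXAMPLE-AS
descr:          Example network
notify:         routing-alerts@example.net
admin-c:        EX123-RIPE
tech-c:         EX123-RIPE
mnt-by:         EXAMPLE-MNT
source:         RIPE
"""
print(AUT_NUM_EXAMPLE)
```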

Also, the mnt-routes attribute has been deprecated. It's been removed from all the aut-num objects, but we decided not to fail an update if the aut-num contains it; rather, we filter it out and add a warning to the response.

So, it's not a breaking change for updates.

And finally, the REST API: if you specify the wrong source, the REST API redirects with a 301 to the correct source. So hopefully the REST API does the right thing if you are querying for an object.
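One way to observe that behaviour is to request an object under the wrong source and look at the redirect. The URL layout (rest.db.ripe.net/{source}/{object-type}/{key}) is my recollection of the RIPE database REST API and the route key below is made up, so treat this as a sketch rather than a reference.

```python
import requests

# Hypothetical out-of-region route key; any RIPE-NONAUTH object would do.
url = "https://rest.db.ripe.net/ripe/route/192.0.2.0/24AS64496"

resp = requests.get(url, headers={"Accept": "application/json"},
                    allow_redirects=False)
if resp.status_code == 301:
    # Location should point at the same object under the RIPE-NONAUTH source.
    print("redirected to:", resp.headers["Location"])
else:
    print("status:", resp.status_code)
```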

So then, more details on the rollout. About a month in advance, we started releasing test releases to our release candidate environment and we notified all the maintainers of the affected objects. We updated the objects on the 4th of September. We ran the updates in batches during the day, and it took most of the day to get through them all.

Firstly, we removed the mnt-routes attribute and we removed the RPSL maintainer completely. We updated the source, and finally the pending route6 objects were either created or dropped according to the new rules.

We did have a roll back plan. We tested that beforehand, but in the end we didn't need it thankfully. We took away some lessons learned from that as well.

In particular, that lots of communication and testing before the release is really important. It was good to keep the community informed of the upcoming changes and also to be sure that the changes were going to work.

On the bug fix releases: we made multiple bug fix releases in the release candidate environment, but the communication could have been better around that. And finally, on the day of the rollout, the announcement that the deployment was done could have been made the same night; we finished late that evening.

And finally, there was cake. When we removed the RPSL maintainer ‑‑ it had a long run, from 2004 to 2018, but it won't be missed.

Okay, for NWI-5, the question is: where do these route and route6 objects go now? One IRR is AFRINIC, for the AFRINIC space, the AFRINIC prefixes. And they saw, after the NWI-5 implementation in our region, a large spike in route and route6 object creation in the AFRINIC IRR. They are encouraging the use of their IRR for AFRINIC prefixes, for routes in that space. And a recent change they made is that ASN origin authentication is now not necessary.

One thing to note is that most of the out of region objects have an AFRINIC prefix, so this is now the new home for AFRINIC space route and route6 objects.

I'll give you a couple of slides on what we saw operationally for updates: creates, updates and deletes for out of region objects. Firstly, creates. Of course, creates are not allowed any more, but we did see traffic continuing for out of region objects ‑‑ users trying to create these objects. They now fail with an error, and the error message explains what's happened.

We also looked at the create rate before the NWI-5 implementation to make sure there weren't massive spikes of route and route6 creation before the deadline.

Next, modifications. Again, this covers early August to mid-October. We don't see very many modifications of out of region objects. It seems like most of these objects are created and then never modified afterwards. But it is obviously important to keep these up to date.

And finally, deletes. There are some deletes; there was a spike just before September and there have been a few since then. At the current rate, the out of region source will be empty in about ten years, so I wouldn't rely on that. But it's good to see there is discussion on what to do with the NONAUTH space.


The second thing we worked on was the abuse-c validation, the 2017-02 implementation: we are going to validate the abuse-c e-mail addresses on organisations at least once a year. We'll be doing this using a nightly job. It will be a combination of a static syntax check and an online check; we'll be using a third party to do that. We have already performed trial validations of LIR abuse-c addresses earlier this year, and we found that 20 to 25% were considered invalid. Angela presented on this earlier today at the Anti-Abuse Working Group session.

The process will be that we will notify the organisation first with a ticket: they'll be notified that we consider their abuse-c address invalid. We will also send a validation mail to the abuse-c address, so they have an opportunity to validate the address themselves. If it's validated, we'll automatically close the ticket and nobody needs to do anything else. If it's not validated, we will send reminders and escalate after three weeks.

Our first phase is about to start next week, or by the end of the month. We'll have a trial run of 900 LIR abuse-c addresses. That will be to fine-tune our implementation, see how it goes in production and see what the impact is for users and also for the RIPE NCC.

And we'll follow that up with a Phase 2 in the new year. That will be a complete implementation for all of the abuse‑c addresses in the database.

So, following on from that, we found that for e-mail addresses in the RIPE database the validation rule is quite relaxed: it's basically anything-at-anything. And that was an acknowledgment that e-mail validation is hard, especially if you rely on a static check. But I think we can do better. Relaxed validation causes problems when parsing data and also when sending mail: when it comes time to contact a user, we have seen some problems with that. And this also came up during the last Database Working Group meeting, at RIPE 76; it was a topic for discussion as well.

We found that we can improve this with stricter validation, which will catch cases like a trailing "mailto", a leading "mailto", a leading question mark, a leading colon (where the user puts a colon before the address), a trailing date so it looks like a changed attribute, and a double at sign. The combination of those patterns accounts for most of the failure cases. We found that 0.5% of the existing 4 million addresses in the database would fail a stricter check. If we improve the syntax validation, we can catch most of this, and an automated cleanup can fix 95% of the invalid values.
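A sketch of what such a stricter check could look like; the exact rules the RIPE NCC would implement are not specified here, so the regular expressions below are illustrative only, built around the failure patterns listed above.

```python
import re

# Loose historical rule: basically "anything@anything".
LOOSE = re.compile(r".+@.+")

# A stricter (still simplified) rule: one local part, one @, a dotted domain,
# and nothing before or after.
STRICT = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$")

def classify(value: str) -> str:
    if STRICT.fullmatch(value):
        return "valid"
    return "invalid" if LOOSE.fullmatch(value) else "garbage"

samples = [
    "mailto:user@example.net",     # leading "mailto"
    "user@example.net 20150401",   # trailing date, looks like a changed: attribute
    ":user@example.net",           # leading colon
    "user@@example.net",           # double at sign
    "user@example.net",            # clean
]
for s in samples:
    print(f"{s!r:32} -> {classify(s)}")
```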

So, my question is whether this is something we can go ahead with, can we improve the syntax validation for e‑mail addresses in the RIPE database and/or do an automated cleanup.

I guess I can invite questions at the end for any comments on this. But this is something that would help with the abuse‑c implementation.

Something else related to abuse-c: numbered work item 7, which we have already implemented; we rolled that out a year ago. I found that there are now seven and a half thousand abuse-c references on resources, but 21% of these duplicate the org abuse-c that already applies to the resource. So should the RIPE NCC add extra validation to avoid this duplication, and should we clean up the existing duplication? That's another question for the Working Group.
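A minimal sketch of that duplication check, under an assumed, simplified data model in which a resource and its referenced organisation each carry an abuse-c value; real objects would of course be fetched from the RIPE database rather than hard-coded.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Org:
    org_id: str
    abuse_c: Optional[str]

@dataclass
class Resource:
    key: str
    org: Org
    abuse_c: Optional[str]   # abuse-c set directly on the resource (NWI-7)

def redundant_abuse_c(res: Resource) -> bool:
    """True when the resource-level abuse-c merely repeats the org's abuse-c."""
    return res.abuse_c is not None and res.abuse_c == res.org.abuse_c

org = Org("ORG-EXAMPLE1-RIPE", "AB123-RIPE")
dup = Resource("192.0.2.0 - 192.0.2.255", org, "AB123-RIPE")
distinct = Resource("198.51.100.0 - 198.51.100.255", org, "AB999-RIPE")

print(redundant_abuse_c(dup))       # True  -> candidate for cleanup
print(redundant_abuse_c(distinct))  # False -> genuinely more specific contact
```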

Okay. Lastly, just to mention a couple of things. Upcoming work between now and the next RIPE meeting: to start with, we'll be focussing on Phase 2 of abuse-c, but we'll be looking at other things including site reliability, usability, maintenance and bug fixes.



That concludes my presentation. Thanks very much. Does anyone have questions or comments?

AUDIENCE SPEAKER: I have a question or two. So, first of all, thank you very much for the work you did that relates to out of region information. I have said this in our forums, but because there is an arc of software development and database maintenance here, it really is appreciated.

If you could go back to slide 7. Where you discuss out of region and reserved AS numbers. I'm intrigued by some of the specificity there because the word "Reserved" is one of those words which actually means different things to different people. There is the strict IETF sense, it's reserved because the IANA registry marks it reserved. And there's the RIR sense, we aren't letting people use it but it actually lies in a block which is in our control. Which one did you mean or did you mean both?

EDWARD SHRYANE: It's the former, it's the IETF ‑‑

AUDIENCE SPEAKER: Which in some sense is static, which is private, in some sense. But there is additionally the large number of 32-bit ASes which simply haven't yet been released, and they are potentially something that people expose in BGP from time to time. When we are looking at BGP, we see minus 1 in 64 bits because of not having full behaviour, because someone has misconfigured, fat-fingered and used a roll-around number in 32-bit space. And it does occur to me: do you not want protection against not-yet-available resources, ones which could legitimately exist? The problem is you'd have to have a meta block and then punch more specifics into the structure to permit things to be seen, and it introduces the problem that you might not do that, and then someone can't make an object because you haven't given them permission. So I suspect it's not actually as easy as it seems, but you answered my lead question. Thank you.

EDWARD SHRYANE: For now, the focus is the IETF reserved space, and if we find operationally that it causes problems or we need to extend it, it's a configurable property.

AUDIENCE SPEAKER: George Michaelson: The only other thing I wanted to say is that in our region we had to make a decision about reaching out to the NONAUTH object holders, because the feeling was they had entered into an arrangement with you, the RIPE NCC, for the provision of a routing registry service, and it's a little spammy to knock on someone else's door to say: I see you are shopping there, would you like to shop with us? So it became a communications exercise with you to coordinate the dialogue with them. It feels like we may have moved into a problem space that needs more engagement. If you felt you wanted to say to them, we really want you to move, we want to strongly signal that their RPSL could be managed more effectively somewhere else, we probably need to have a conversation. That's not a structural thing, it's a communications thing. Maybe it's a conversation for a different day.

MARTIN LEVY: Page 13, please. And may I comment that the politeness on the part of George was amazing. My question on this slide ‑‑ and it actually goes to one of the previous slides as well ‑‑ is: is there now effort and coordination between RIPE and, predominantly, AFRINIC for the removal of the NONAUTH objects that have been successfully created over at AFRINIC and other places? So this follows on a little bit, with a different twist, from something that George mentioned. In other words, will you proactively start removing the NONAUTH objects that have clearly been put somewhere useful?

EDWARD SHRYANE: I have seen in the data that there are definitely duplicates. There are duplicate routes registered between the RIPE region and AFRINIC, so that's something that could be looked at. There is also an existing numbered work item to move the AFRINIC routes out of the RIPE database; I don't know when that was last discussed, but that is something that came up in the past as well.

MARTIN LEVY: When you say RIPE database, you mean the NONAUTH records.

EDWARD SHRYANE: Yes, the objects that are now in the NONAUTH database.

RUDIGER VOLK: Let me ‑‑ there are two points I wanted to comment on for your presentation. One thing is the communications part during a conversion. I would like to point out that you should take care that an announcement goes onto the service and security status page in advance about changes; that did not happen. And make sure that during the transitional state, you actually have up there the status: currently we are in something unusual. That would also have kind of addressed the question of the terminating message. You should take care that the status page is well maintained and has all the operational status available, because if something works strangely, people look there, and that's a much better way of getting the current status, if it's maintained, than scanning through some mailing lists.

Okay, the other part is: I was baffled when you told us last time that the RIPE database at some point in time lost the ability to do the syntax validation for RFC 822 addresses. And your page about how you are looking at the mail address validation here kind of still baffles me.

My understanding is that those addresses are well within the range of regular expressions, maybe a little bit complex ones, and that should actually be easy. I am not completely sure whether RFC 2822 makes things so much more complicated.

The question on that is, do you now have the syntax check available?

EDWARD SHRYANE: No, that was a proposal, so it's a question to the Working Group.

RUDIGER VOLK: Kind of: do you have the software that actually provides a boolean result for a string, whether it is an acceptable RFC address or not? And can you please ‑‑ and when will you ‑‑ actually put that check on all the attributes that require e-mail addresses?

EDWARD SHRYANE: So on this slide I am presenting the results. We tested with the improved syntax check ‑‑ it conforms more closely to RFC 822 to catch invalid e-mail addresses ‑‑ and these are the results of that check. So, my question for the Working Group is whether we can enforce this, or will this cause more trouble than it solves?

RUDIGER VOLK: Well, okay. My answer is please enforce on all object input, correct syntax as soon as you can do it.

EDWARD SHRYANE: From our point of view, improving this will help with the abuse‑c validation and also with parsing objects in the database as well. Thank you.

AUDIENCE SPEAKER: Hi, Alan Barrett from AFRINIC. I have a comment triggered by something George said about AS number validation. So, in AFRINIC, we have also taken away the requirement that your route and route6 objects have to be authorised by the AS holder, but we do check that the AS is valid, and our definition of valid is a little different from yours. We actually go to all the other four RIRs and we download their daily files of which resources have been allocated, and we validate against that. So if you registered an AS number at RIPE yesterday and you try to register a route object at AFRINIC today, it might fail because we might not have caught up to the latest version of that file.

RUDIGER VOLK: So Alan's question or comment brings me back. I have been doing that kind of validation for a couple of months and one thing that I observed is that the responsibility for maintaining that dataset that Alan was referring to seems to be not completely transparent and I got alerted when I did see inconsistency between that database and the IANA allocations for a week.

EDWARD SHRYANE: Thank you Rudiger, can you send me some details of that and I'll look into it.

WILLIAM SYLVESTER: Thank you.

(Applause)
Next up we have 2018-06, the RIPE-NONAUTH improvement project. Job Snijders is not here today, so filling in are Martin Levy and Erik Bais.

ERIK BAIS: Martin is here for support. Thanks. I was actually looking for a tie, because this is an RPKI talk, but you know, I couldn't.

So, the proposal that Job and Martin and I wrote ‑‑ we did this after the NONAUTH work, NWI-5, and we talked about how do we clean up the NONAUTH part of the RIPE database, specifically data that can be validated. And we thought, you know, this is a good approach to look at. And we want to use the RPKI ROAs for that, not only from RIPE, but also taking in the ROAs from the others.

So if you look at this, you know, let's be honest. We have seen the last presentation. RIPE did a fantastic job fixing this large loophole, and this was a major task; we are very happy about this. So, no new out of region objects can be created in the RIPE IRR, and I think it's an amazing achievement. We are very happy. Long overdue, very happy that this was done.

Obviously, after the last presentation, you know, this is information that's all here. And we, as a community, want to look at what is now left in RIPE-NONAUTH and see how we can clean this up.

So, how do we propose to clean up RIPE-NONAUTH? First, I would like to add that this is a policy proposal that has been submitted in the Routing Working Group, so if you have any comments, suggestions, that kind of stuff, and you want to have the discussion on the mailing list, please do so on the Routing Working Group mailing list.

So, we can leverage RPKI as another data source to scrub the RIPE-NONAUTH dataset. Basically, what that means is: if an RPKI ROA conflicts with data which is currently in RIPE-NONAUTH, we take it that the information from the ROA has preference ‑‑ it was actually created with the consent of the resource holder ‑‑ and we take that as the truth. We then take the conflicting information in the NONAUTH part of the RIPE database and delete the conflicting route object.

So this way it will actually clean up the data which is there, that was put there for historic reasons, or perhaps without the consent of the resource holder.

So the proposal is: let RPKI drown out the conflicting IRR data. RPKI can be used for BGP origin validation, but there are other things you can do with it. What about applying the RFC 6811 origin validation procedure to the IRR data and treating the IRR objects as if they were BGP announcements?

So this one actually is interesting to look at. To give you an example here, this is an object created from the space of, specifically, NTT, and this is an object in the NONAUTH part where somebody used an AS number which is not from NTT and basically punched a hole in that space by creating this route object in the RIPE database. For people that are using prefix filters, this basically punches a hole in their filtering, and they accept it.

And this was not done with consent from NTT. So, that is something which is important to understand. There was a way, not any more, but there was a way to actually create data in the RIPE‑NONAUTH database that was done without consent of the resource holder.

And the problem is the resource holder has no maintainer rights on the resource or on the route object, so they cannot delete the route object, which makes it a bit difficult, and also a bit hard for the resource holder to actually clean it up if they find it.

So, that's why we came up with this policy. If a network deploys RPKI-based origin validation with an invalid-reject policy ‑‑ which means you do route validation with RPKI, with a validator, and you drop all invalids that you see, where invalid means announcements that conflict with the ROAs ‑‑ then an announcement in this case where the /24 was originated from AS60068 would be rejected, because the ROA would be correct for NTT for the complete /16, but there would not be a valid ROA for the /24.

So if you use filtering purely based on IRR objects, those objects describe a state of the network which is in conflict with the published routing intentions of NTT. So everybody generating BGP prefix filter lists now basically punctures a hole for this /24, and the entity, NTT in this case, has no way to actually delete the object.

So the process that can be done by the NCC is: create a script to fetch the RPKI ROAs, easy enough. If a ROA covers all or part of a route object in RIPE-NONAUTH, check if any of the ROA origins matches the origin AS of the RIPE-NONAUTH object. If yes, don't do anything. If there is no covering ROA, don't do anything. And if it's invalid, then you delete the RIPE-NONAUTH IRR object. There is no need to integrate this into the WHOIS software; it can be a separate script that you just run every so many minutes if needed.
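A sketch of that script logic, treating each RIPE-NONAUTH route object as if it were a BGP announcement and applying RFC 6811-style origin validation against a set of ROAs. The data structures, prefixes and the deletion step are placeholders; a real implementation would load validated ROA payloads and talk to the database's update interface.

```python
from dataclasses import dataclass
from ipaddress import ip_network

@dataclass(frozen=True)
class Roa:
    prefix: str       # hypothetical values throughout
    max_length: int
    origin: int

@dataclass(frozen=True)
class RouteObject:
    prefix: str
    origin: int

def validity(route: RouteObject, roas: list[Roa]) -> str:
    """Return 'valid', 'invalid' or 'not-found' per the RFC 6811 procedure."""
    net = ip_network(route.prefix)
    covering = [r for r in roas if net.subnet_of(ip_network(r.prefix))]
    if not covering:
        return "not-found"          # no ROA covers it: leave the object alone
    for roa in covering:
        if roa.origin == route.origin and net.prefixlen <= roa.max_length:
            return "valid"          # a matching ROA: leave the object alone
    return "invalid"                # covered but conflicting: delete

def cleanup(nonauth: list[RouteObject], roas: list[Roa]) -> list[RouteObject]:
    """Return the RIPE-NONAUTH route objects the proposal would delete."""
    return [r for r in nonauth if validity(r, roas) == "invalid"]

# Shaped like the NTT example above: the holder has a ROA for the /16, and a
# stranger registered a more specific /24 with another origin AS.
roas = [Roa("10.0.0.0/16", 16, 64496)]
nonauth = [
    RouteObject("10.0.0.0/16", 64496),    # matches the ROA -> valid, kept
    RouteObject("10.0.1.0/24", 64511),    # covered but conflicting -> deleted
    RouteObject("172.16.0.0/12", 65000),  # no covering ROA -> not-found, kept
]
print(cleanup(nonauth, roas))   # only the conflicting /24
```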

Other industry developments. On using RPKI ROAs for constructing BGP prefix filters, there has been some discussion about it already. NTT is already doing that internally in their NTTCOM IRR database, and they are also extending IRRd so that when the IRR information is in direct conflict with the RPKI ROAs, the conflicting information will be suppressed. So, there is going to be a new version, IRRd version 4, built with community support, and that version will be implemented for their WHOIS service and perhaps also by others in the future.

So, the advantages of RPKI suppressing conflicting IRR data: it is an industry-wide common method to get rid of stale proxy route objects ‑‑ by creating a ROA, you can suppress the old garbage in the IRRs.

By creating a ROA, you will significantly decrease the chance of people being able to use IRR data to hijack your resources. So, by basically having valid information in there, even if you are not validating ROAs in your own routing policy, creating a ROA will help protect your own network, because others will actually look at this. And if we have this policy in place, then it also shows what your intended use of your resources is.

So, like I said, there is a PDP process that takes place in the Routing Working Group. So, all the discussion and suggestions and comments need to go on the routing mailing list. And if there are any questions, then ‑‑

MARTIN LEVY: For those that weren't in the routing group yesterday, it would be good to sort of review the questions that came to the microphone, the heated conversations. And in fact actually, some sort of realisations after the fact.

But before I say that, remember this is just version 1 of a policy proposal, and I don't know any policy that's actually gone through on a version 1 in this community, but it could happen one day. It won't happen with this.

So a couple of questions that got asked which have been solved in the last 24 hours. One questioner said that I like using RIPE, it has a better interface than other places. But my stuff is all NONAUTH because these aren't resources that I am ‑‑ that I have in RIPE. Well that user has now decided that they'll do an address transfer from where they are into RIPE so that they can use the RIPE database. It seemed pretty logical.

The second thing that came up was: wait a second, you are just going to delete stuff without notifying people? And, yeah, all right, so wait for Version 2 of the document, because that seems like a pretty legitimate request, although we have got to work out what that means when you either know or don't know who to notify. But in all cases, the argument was that the ROA information comes from an authenticated place where attestation of the information exists, which is very valuable and quite different from at least most IRR data. One of the other things that came up was the issue of: if you remove stuff, won't that change filtering? And I think we have to expand the text in the proposal to say: no one is filtering on NONAUTH data. Or are they? This is a question for the routing group. But the whole concept of something being marked RIPE-NONAUTH is to also say that this is not data that you should be running through your route filters.

There were a few other discussions, but there are four people at the microphone. Start with Nick.

AUDIENCE SPEAKER: Nick Hilliard. This looks like it could be a repeat from yesterday, but actually, I want to take the discussion in a slightly different direction and concentrate not on the body of what you are talking ‑‑ what you have been talking about but your last slide which is to say clean up is fun.

This is ‑‑ your current proposal concentrates very much on one very specific aspect of NONAUTH cleanup, and I think in the Database Working Group we need to look at ways of dealing with the general cleanup of the IRR DB. I think it would be really useful if there were a few people who could sit down with some of the people in the RIPE NCC database team to try and figure out what sort of data is there that we know is legitimate. Because there is a huge spectrum of data in there: a pile of it is legitimate, some of it is illegitimate, some of it is stale, some is actively intended to be malicious. And I think if we can get a better understanding of what's in there, what's relevant and what's irrelevant and how to delete all of the irrelevant data, this is actually a much better long-term way of dealing with the issue.

ERIK BAIS: May I comment on that? I do agree that this is just a small part. But we need to take this in small steps, and, not disregarding the rest of the cleanup, I would not postpone this while it is a valid option to actually do cleanup. So, yes, you need to start shoveling somewhere, and with small steps we will at least fix this, and then we'll see whatever else comes to the table ‑‑

MARTIN LEVY: Remember, this is building upon the previous work, which took quite a lot of discussion to get to the NONAUTH tagging in the source. So we're taking advantage of saying: look, this has already been segregated in some way. These routes ‑‑ I don't want to say questionable, I should come up with a politer word ‑‑ but these are routes which are more open for cleanup than the general population, and we all want to clean up the real stuff. So let's take a step, and if this works we can build upon the experience ‑‑ everybody wants to go after the big picture.

NICK HILLIARD: I completely agree with what you are saying. But there was one more point, I think, that was mentioned in your presentation, and that is that there is no way for address holders of out of region objects in the RIPE IRR to get their data removed from the IRR. And I think this is a gaping procedural hole that needs to be addressed. There needs to be some mechanism for organisations who are outside the RIPE NCC service region but who have legitimate or illegitimate objects ‑‑ there needs to be some way for them to regain control of their address space in the RIPE IRR.

MARTIN LEVY: From my experience, the same could be said for entries in ALTDB, RADB and various other DBs.

NICK HILLIARD: I completely agree, but we're dealing with the RIPE IRR here. Let's solve it here, and if it works here, maybe we can export our great ideas.

ERIK BAIS: Nick, before you leave, do you agree that the resource holder, by having a signed valid ROA, does that have enough legitimacy for, you know, getting stuff out of the RIPE‑NONAUTH database?

AUDIENCE SPEAKER: No.

ERIK BAIS: Because he can show his intent ‑‑

NICK HILLIARD: I think the answer is probably yes in most cases. But I think there are going to be some other cases where this is not the situation, and this stems from intra-company communications, where you have bits of the organisation, or people even in the same department, who simply don't talk to each other, and where there are little bits of dependencies left lying around. And I think, fundamentally, the proposal on the table here is a little bit dangerous. I think we need to tone back the current very hard-line approach it suggests and maybe go for something that allows there to be a back-out procedure.

MARTIN LEVY: So the routing group and the e‑mail list and this is only version 1 and we are open for ‑‑ we already have to do edits after yesterday so let's take it onto the mailing list and move on. Elvis.

ELVIS VELEA: I have a question, how would this work if a ROA is created for a /24, but there is a route object, say a /16 in the NONAUTH, would the RIPE NCC have to delete the whole /16 block and create...

MARTIN LEVY: No, because the ROA would only be for the /24 ‑‑ well, in that case probably an LE-24, so it wouldn't be just the /24. Yeah, the ROA is very well defined as to what its scope is. The /16 in your example would be untouched. It may be wrong, it may be right, but it's untouched.

ELVIS VELEA: So the /16 would still stay in the RIPE database as a route object.

ERIK BAIS: Well, the NONAUTH source is for non-RIPE INETNUMs. So it's for ARIN space in this case, where in the RIPE database somebody created a route object with an AS number in combination with some IP space which falls into that /16, and the example that we used was a /24. So, if an entity, under this proposal, actually has the whole /16 assigned, then it will show the /16 plus their AS number, and that does not match what's in RIPE-NONAUTH.

AUDIENCE SPEAKER: But maybe they have only a /17 that they have transferred from that /16, and they still want that route object that belongs to whoever to be gone. They would sign the /17 ‑‑

ERIK BAIS: You can have multiple ROAs.

MARTIN LEVY: They haven't finished their ROA work if they want to protect their whole slash whatever it is, 16, 17, anything like that. I think your case is covered just by, just in general, let alone in this specific cleanup case.


ELVIS VELEA: No, it's not actually. Okay, I'll make it short. Let's say there was a /16 at some point in time and a route object was created for it. Half of it got transferred to another entity. That entity got the /17 and they are creating a ROA for that whole /17, but there is a /16, a larger block, that is already in the RIPE database ‑‑ will that be disappearing?

MARTIN LEVY: There are lots of examples of this around in the IRR space. It's innocuous-ish, but it should be cleaned up because, database-wise, it just looks wrong. It won't be the first example of that, and this proposal would not clean that up; it would not touch it, because of the size.

ELVIS VELEA: I think it should.

MARTIN LEVY: I think that goes back to Nick's issue, that we need to sit down and clean this database, and others as well.

ELVIS VELEA: Let's say a route object is created by the legitimate holder in their own IRR ‑‑ by, let's say, AFRINIC. Someone has a /23 in RIPE as a route object created 20 years ago; the holder creates for the /23 in AFRINIC a route object that matches it, right, while it is still in the RIPE database. Would this also be something that we should think about in Version 2, where, if the holder of the address space decides to put the route object in their own database, the others will be deleted?

MARTIN LEVY: Right. So this proposal is based upon the authors firmly believing that ROA data, with its attestation, has more value than IRR data, and this is the first step in essentially saying one is more authoritative than the other. It in no way deals with whether one IRR object or one IRR source should be more authoritative than another. That's a whole other situation; we are absolutely not addressing that. But the policy development process is open and you should write that one, because somebody at some point is going to have to write that. But it is not part of this proposal.

ERIK BAIS: Elvis, in AFRINIC they are actually looking at a proposal, or looking at ways, to create ROAs and matching route objects in the AFRINIC database at the same time. So...

AUDIENCE SPEAKER: Gert Döring. One of the things is sort of like a call to order: this discussion has been very heated and parties get very much upset about what the other party says. But you are all not listening. And I have the feeling that in the discussion, half of the reason why people are getting so upset is lost. I can see Rudiger standing behind me; I don't have to look. People are using this stuff. For business reasons it takes like half a year, whatever, to change the way that the mechanics work, so if you take it away from them, they get upset. They should tell you why they are upset, not just that they are upset. On the other hand, Job calling people that object to this "adversaries" is not really helping the thing either. So, as a matter of: please listen more, shout less.

The other thing: what Nick said is actually an interesting approach to this. If you want your data cleaned out, one possible option for modifying this proposal ‑‑ I understand I am in the wrong room and on the wrong list and so on ‑‑ is just to keep this in mind: if I have a ROA, this entitles me to get a button on the RIPE site that says clean up RIPE-NONAUTH for all the space I have a ROA for. I have the attestation that I control the address space, so I should be entitled to click on: take this away, it's my space, I control the space, I want to decide where to put this. It's a twist on this. It's not automatic and it will not have the same impact on the number of objects ‑‑ you can't automatically clean everything ‑‑ but it might be a useful compromise. I'm taking it to the list. I just wanted to mention that.

MARTIN LEVY: There is a side note, because it doesn't affect this proposal. When the RPKI services started showing up on the LIR portal, I do explicitly remember asking the RIPE database team: why is this here? Why isn't it just part of the existing route object creation process? That was asked of staff a long time ago, and it's equivalent to what you just said. In other words, if I create ROAs, why are my route objects ‑‑ at that time there was no NONAUTH ‑‑ why isn't that kept the same? Rudiger.

RUDIGER VOLK: Well, okay. Let me just apologise for using foul language yesterday. I should have said mess in every application of foul four-letter words. I think... I think dealing with this as a policy proposal is a waste of time and effort. The time and effort we are spending should be spent on making use of the technical idea of identifying dubious records. And, as Gert's suggestion points out, yes, if we figure out and make the argument that an out of region address holder cannot remove an object, well, okay, if we actually look at it, we can present a web page where the authorised owner can put in his RPKI credentials, ROA or not, and say: please clean this up. But just saying we are going to mess around with the data that's there, and do that in an abstract manner without looking at the specific cases and doing the analysis first, is just a really bad idea.

AUDIENCE SPEAKER: John Curran. I have very little view on the proposal, but I have a question for the authors, because I am just looking at this. It doesn't affect your routing, presumably, if you are actually doing route origin validation; you are also in the situation where you are preferring, and other people doing route origin validation are likely preferring, your valid routing. But when you end up with an invalid, it could be the result of a transient condition, a brief typo. And I guess you have no hysteresis in here, so if you make a typo while you are experimenting with your ROA issuance, then you don't get an easy way to recover from that. I'm just wondering, how would you feel if, instead, an invalid had to be present for 72 hours before you delete the RIPE-NONAUTH object? That way it's pretty clear you have put the ROA out there and you like it, so let's take all the things that you didn't want and sweep them away. Otherwise you get a circumstance where someone is actually using, and actually does have ‑‑ just because of the history of the IRR ‑‑ objects here and is relying on them, but in the process of experimenting with RPKI they get the length wrong, for example; surprise, you have just invalidated some information and you now have an issue to go scramble and recover. Have you thought about maybe, if you are not going to do notification and you want to do an active deletion, having a hysteresis process or a minimum delay before the trigger?

MARTIN LEVY: Yes, this was brought up in a different way, but one of my pieces of feedback for a Version 2 of the document is that we need something delay-wise, and there are cases, yours and others, where we need to do something. We have been told to stop.
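A small sketch of the hysteresis John Curran suggests: only act on an object once it has been continuously invalid for a grace period (72 hours here), so a brief ROA mistake does not trigger deletion. The in-memory dict stands in for whatever state store a real job would use.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

GRACE = timedelta(hours=72)
first_seen_invalid: dict[str, datetime] = {}   # object key -> first time seen invalid

def should_delete(obj_key: str, is_invalid_now: bool,
                  now: Optional[datetime] = None) -> bool:
    """Delete only after the object has stayed invalid for the whole grace period."""
    now = now or datetime.now(timezone.utc)
    if not is_invalid_now:
        first_seen_invalid.pop(obj_key, None)   # recovered: reset the clock
        return False
    started = first_seen_invalid.setdefault(obj_key, now)
    return now - started >= GRACE
```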

AUDIENCE SPEAKER: One of the advantages of not having a well-integrated system in the ARIN region is that we don't have this problem ‑‑ we have it in a different way. But when we do an integrated IRR and RPKI system, which we're doing next year, we will always have alignment; we won't have the auth versus non-auth situation, because we are not going to let you bring things in ‑‑ you'd have to explicitly import them. But I do see, because RIPE has this big base of data, you might want to be a little careful, because some of it is still operational.

MARTIN LEVY: Thank you very much everybody.

(Applause)

WILLIAM SYLVESTER: Up next, country code use in extended delegated statistics, Ingrid Wijte from the RIPE NCC.

INGRID WIJTE: Good afternoon, my name is Ingrid, and I work for Registration Services at the RIPE NCC. I'm here to talk about the current situation with country codes that are registered in the RIPE database and in the extended delegated stats, and some things that we see there.

So, in general, and over the past years, we have always assumed that the country code reflects the location of the network, and that assumption was based around RFCs, RIPE documents around which we created the system.

But what does that actually mean? A network can be spread over multiple countries; that's one reason we got the EU country code, to reflect that. Theoretically, today it can be in one country and tomorrow in another ‑‑ maybe not that quick, but changes happen. And also, as the RIPE NCC, we accept and put in what the user tells us the country should be.

And to clarify, I'm only talking currently about country codes registered for resources that were directly distributed by the RIPE NCC. So allocations and assignments, I'm not talking about customer further assignments or suballocations, purely resources distributed by us.

So, until recent years, in most cases the location of the network and the legal presence of the organisation were matching ‑‑ not always, but that was the general picture. We are currently seeing a higher number of out of region members, and with that, we also see an increase in requests to update the country code in the extended delegated stats.

And another thing that we are starting to see is that people use the country code for different purposes. We're used to getting requests about geolocation and language issues, but we also see some different requests coming in, and in some cases this is leading to inaccurate data that we are asked to register.

So, to show you an example. We received a request from an LIR asking us to change the country code from one in the RIPE region ‑‑ a less than popular country code ‑‑ to a country code in the APNIC region, where the holder of the resource is legally based. So we had some questions: well, if the network moved from the RIPE region to the APNIC region, should they maybe not consider an inter-RIR transfer, as this process is now available? But the answer was: well, the network didn't change, it's still in the RIPE region; however, our customers are having issues with some of their applications based on restrictions, sanctions that are out there, tied to that country code.

So, we updated the country code as requested ‑‑ also, as I'll show you later, because we do not have documentation on which we could base another decision. It's not clearly defined what value that country code should represent.

So, we updated. And then a few weeks later, in this example, we received a request to transfer that same prefix to an organisation, an American organisation, in the ARIN region. So, what did this change actually mean? The country code that was updated ‑‑ what did that reflect?

And the information we put in ‑‑ was it accurate, was it not?

So, I mentioned the documentation. The RIPE database documentation does not specify clearly what it means: the attribute description says it identifies the country, the country where the addresses are used, but it is not a reliable way to map an address to a country.


INGRID WIJTE: We see the same pattern in the extended delegated stats. It actually has the same values: it identifies the country, but it doesn't specify if this is the country where the resources are used, or where they were first allocated or assigned. So there is nothing much to base any definition on, or what it should be.

I mentioned the extended delegated stats. For those who are not familiar with them: they are published on our FTP site, and the format was created in 2008 as a joint effort by the RIRs to create one format and to also show the reserved and free address space in the different RIRs. The format of this file was highly negotiated, and changes in that format would be very difficult as it has to remain the same across all five RIRs.

So, that was the documentation.

Then the next step: in the RIPE database, the country code is an open attribute, so once we create the object, the resource holder can update it at any time ‑‑ to multiple country codes, change it, it's a free update. The extended delegated stats are created and maintained by the RIPE NCC, so changes in the RIPE database are not reflected in the extended delegated stats. As you can see in the example below, the INETNUM object has country code US, while the extended delegated stats have country code UA, Ukraine.
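A sketch of how the two data sources could be compared for one prefix. The stats line below is made up but follows the delegated-extended format (registry|cc|type|start|count|date|status|opaque-id); the real file lives on the RIPE NCC FTP site, and the INETNUM country would come from a whois or REST lookup rather than being hard-coded.

```python
def stats_entry(line: str) -> tuple[str, str]:
    """Return (block description, country code) from a delegated-extended line."""
    registry, cc, rtype, start, count, *_ = line.split("|")
    return f"{start} (+{count} addrs, {rtype})", cc

delegated_line = "ripencc|UA|ipv4|192.0.2.0|256|20140101|allocated|abc123"
inetnum_country = "US"   # country: attribute as maintained by the resource holder

block, stats_cc = stats_entry(delegated_line)
if stats_cc != inetnum_country:
    print(f"{block}: RIPE DB says {inetnum_country}, delegated stats say {stats_cc}")
```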

I also asked the other RIRs what the status is in their data sources. In AFRINIC and LACNIC, both data sources represent the legal presence, and in-region only; in LACNIC there might be some country codes outside of the region from earlier times. And the RIR manages both country codes.

In APNIC, it can be legal or a network presence, and is managed by the RIR and at this moment they are looking at options to make some changes in how they represent the country code in their delegated stats.

And in ARIN, it's legal or network presence, user managed, but here I need to note that in ARIN there is a requirement to be legally based in the service region, and at least part of the resources must be used within the ARIN region.

So, my question to the room ‑‑ and I actually gave this presentation yesterday in Address Policy as well ‑‑ is: is this intended? Is this the information that you as a community want to see there? Or do you think change is needed here? And if so, what should the change be? Should it be left up to the resource holder to decide which country code to use and what it should represent? In that case the information could become meaningless, as it becomes unpredictable what the country code means. Or should it be defined for both the database and the extended delegated stats? And if so, should it be the legal country of the resource holder? From an RIR point of view, that can be verified; we can provide accurate information for that.

Location of network, that could also be a definition. More difficult to verify by the RIPE NCC or near to impossible.

And a second question is: should the RIPE database country code remain in sync with the extended delegated stats? As we saw before, these two can drift apart over time.

And with this, if you have any questions ‑‑ the idea is that after this meeting we move this discussion to the mailing list, either Address Policy or Database; we'll need to work out which, so that it goes to one mailing list.

WILLIAM SYLVESTER: Any questions? Thank you.

(Applause)

Up next, Denis has a discussion on person objects in the RIPE database


DENIS WALKER: I am Denis Walker, co-chair of the Database Working Group. Before I start on what I want to talk to you about today, I'll actually have to put my glasses on, because I am getting to that age where I can't see the screens all that well.


DENIS WALKER: Okay. The issue is there are more than 2 million person objects in the RIPE database, out of a total of just over 8 million objects, so we're basically saying that 23% of the entire RIPE database is potentially personal data.

There are also 104,000 role objects which may also contain personal data, but we simply don't know.

The justification for storing personal data in the RIPE database is that the terms and conditions define one of the purposes of the RIPE database as facilitating contact for operational issues. Also, the address policies do require some contact details for resource holders and for some end users who have public address space. But the address policy also acknowledges privacy concerns, so it does suggest, for example, that if the end user with a public network is a private individual, they can put in the contact details of the LIR instead.

So, who are these people? Well, there are about 1.5 million person objects that have just a single reference from an INETNUM or INET6NUM object, so these could be technical contacts, which would be justified by the terms and conditions, or they might just be customers that have been assigned address space; we simply don't know. There are also half a million person objects that have the same name and phone combination as other person objects. So it's quite likely there are actually half a million duplicate person objects sitting around in the database, because somebody needed one at the time and just created another one.

And if you look over the last few years at the rate at which person objects are being created, there's been a pretty steady increase over the last 15 years of around 100,000 person objects a year, and this graph shows the objects that currently exist in the RIPE database. So all these hundred thousand objects created each year are not transient; they don't come and go, they are still there in the RIPE database. So this 2 million plus is going up by at least 100,000 every year.

There are also about 100,000 unique e-mail addresses in the database in various notification attributes. And again, we have no idea what the mix is of personal and corporate data in these e-mail addresses. So we have got a lot of personal data here and we don't really know much about it.

So, is there a problem? From the analysis I have done so far, it's not possible to identify how much of this personal data can actually be justified according to the terms and conditions and the policies. Maybe 90% is justified, maybe only 20% of it is justified; we simply don't know. And with general EU privacy laws, and the GDPR in particular, does this give someone a legal problem? But who is actually accountable for all this personal data in the RIPE database? We know who creates it, but who is accountable for it? Who is going to stand up and say, you know, this is a problem, we have got to do something about it?

So, if the justification for this data is being questioned, which is basically what I'm doing right now, who is going to take responsibility for answering that question? Is it the community? So there are a number of issues here, and I think the answer to the title, is there a problem, is: there is a problem.

Also, what is a contact? The terms and conditions say it's justifiable to have personal data in the database for a contact, but do we really need personal data for a contact? I think the original database design intended person objects to be referenced only from role objects, with everything else referencing the role objects. But that was never enforced by the software, so historically, certainly over the last 20 years or so, people just tended to use person objects everywhere rather than have that double indirection. That's fine until that person leaves the company and you realise that they are referenced from 100,000 objects and you suddenly need to update them all.

Also, maintainer objects can only be created as a maintainer-person pair; we don't yet have the option to create a maintainer-role pair. And nowhere is it actually documented that a person object should contain business data and not personal data ‑‑ and the name itself tends to imply that it is personal data.

When we designed abuse-c, we specifically designed it to enforce the use of a role object rather than person objects, and role objects no longer need to reference any person objects at all. So, why can't most contacts be roles? I mean, let's face it, if a contact is a technical help desk, do you really need to know the list of people who are going to be sitting at that desk, any one of whom may be the one who answers your phone call or responds to your e-mail? Or do you just need to know that there is a help desk, and that if you contact it, someone will answer your question?

So, what information do we actually need to have a contact? Can we justify storing all this personal data? I think, as Hans Petter mentioned in one of the presentations yesterday, we have spent 25 years building up this WHOIS database with all this stuff in it, but is it time we actually took a step back and said is this data still what we want? Is it still fit for purpose? Do we still need it? Is it justified? Is it time to have a cleanup?

So, let's look at the distribution of these 2 million person objects. I make no apologies for the numbers but I think they need to be seen because they are quite surprising.

So, the organisations with more than a hundred thousand person objects: there are three of them, all telecom companies, and between them these three companies have 856,000 person objects. The number one on the list has 613,000 person objects. That's five times the number of person objects that the next two telecom companies have; it's ten times what most other telecom companies in the database have. So the obvious question is: why does this company have so many person objects? I don't have an answer to that.

If you then look at the organisations with between 5,000 and 50,000 ‑‑ because after those first three it drops to around 50,000 ‑‑ there are 43 organisations in this range, and they have another 704,000 person objects. So if we just summarise that: of those with more than 5,000, there are 46 organisations; 35 of them are telecom companies, 7 are service providers and 4 are other companies. Those 46 organisations control 1.56 million person objects. Are these really technical contacts? Is there any possible way we can justify these 46 companies having one and a half million people's personal data stored in this database?

Then there are 110 organisations with between 1,000 and 5,000; that adds another 230,000 person objects. And then everybody else ‑‑ everybody who has any presence or any data in the RIPE database besides these 46 and 110 ‑‑ only totals 233,000 person objects.

I think there is a slightly unfair distribution there which certainly signifies some problems.

So what do we do next? We still don't know if any of these people are actually contacts. We still don't know if these person objects contain personal or business data. And over the last 20 or so years we haven't changed what a contact is. Do we really need a postal address for a technical contact? If you have got an operational problem with some network, are you going to post them a letter? Are you going to get on a plane, fly over, knock on the door and say: hi, we have got a problem with you? So why do we have this information?

And the other thing about it: this is a public database. You don't even have to hack this database to get your hands on this personal data. All you need is a block of IPv6 addresses and you can download the entire database, including all this personal data. We are just saying to big data companies out there: here, there are 2 million people here, just come and pick up the data and do what you want with it. So, what are we going to do?

Questions...
I will say that doing nothing is probably not an option. Elvis.

ELVIS VELEA: Can you go back a few slides, the one showing that five companies have 1.5 million person objects.

About 20 years ago when I was still living in Romania, I got a small block of IP addresses from my provider, and my provider, without even asking me, just created a person object ‑‑

DENIS WALKER: It's probably in the small print of a contract you signed.

ELVIS VELEA: But I got a /28 and it got registered to my name and my home address in the RIPE database without me even knowing about it. I found out many years later that this company, every time they were making an assignment, were creating these objects for their customers ‑‑ every single customer who got an assignment, and with that assignment comes a person, a name, an address. That data is private data. Mine was private; I wouldn't want it to be in the database ‑‑ my home address, or, well, the address where I used those four IP addresses. I think this came from when the NCC was enforcing the creation of assignments in order to justify more address space, and I think this data should be cleaned up because we don't really need it there any more.

DENIS WALKER: No, and perhaps another question is: do we actually need the person object at all?

ELVIS VELEA: That's a good question.

RUDIGER VOLK: I always get nasty emotions when I feel that some people have an obsession with removing other people's stuff or messing around with it.
In the specific case of Elvis's data that was mentioned, I would say that, in theory, that data should have disappeared in May this year unless Elvis actually confirmed that the data should be there. That's GDPR. We did get statements from RIPE NCC legal about all of this ‑‑ essentially that, if personal data is GDPR relevant, then yes, the RIPE NCC thinks that everything is fine and the responsibility for all of that rests with the LIRs.

I requested that we should get a legal statement that we can actually use with our legal departments, so that a process for dealing with the legacy stuff can be set up. Unfortunately, as far as I have seen, that hasn't happened yet. I'm very much against removing data and messing around with third parties' data. What I'm seeing is, first and foremost, an opportunity for someone who is interested to write documentation and guidelines on how, with the current definition of the database, good registration practice should be done. Like, if you say person objects are not really necessary, I'm not completely sure about that. I think there are cases where I want my personal records still there, and that's valid. But, yes, if you can explain how role and the other objects that are there can be used and should be preferred, please write that. I can hand that to our LIR department and I guess they will follow it, and probably our legal department would like to have a look over their shoulder and figure out whether this makes sense and whether any clean‑up action on the old stuff should be done.

DENIS WALKER: And how many years do you think that might take?

RUDIGER VOLK: I don't know how quickly you are writing.

DENIS WALKER: I'm not at the RIPE NCC any more.

RUDIGER VOLK: And, kind of, if the legal department figures out that the company actually has big liabilities because we are not GDPR conformant, I am pretty sure that things will be moving in much less than a decade.

AUDIENCE SPEAKER: Hans Petter Holen, RIPE Chair.
Thank you very much for bringing this to our attention. I mean, 1.56 million person objects ‑‑ wow, how did this slip through the GDPR evaluation? The .nl registry did a similar evaluation; they ended up registering all their domain names to an organisational ID, so you can identify legally which organisation actually owns the domain. They also have a way to create an ID to link it to private persons, so both options exist. And they have removed all person objects from the database and required them all to be replaced by role objects. So that was their legal analysis of what to do with their database. I'm surprised that we didn't end up with the same result. I do understand that this is not the RIPE NCC as a controller, this is the LIR as the controller, so seriously, there are 17,000 LIRs that need to do that evaluation and clean this up, and yes, that's something that we need to take on here: to make a mechanism that actually works.

And from my personal experience, I have had similar experiences to Elvis. Trying to clean this up through the relevant LIRs is close to impossible. I have a couple of address blocks that used to be assigned to a company that we merged into another company five years ago. There is no existing contract with that company. There is actually no response whatsoever from the LIR. So I mean, if these had been provider independent addresses, I would have had a nice sum of money that I could take with me. Unfortunately it's PA space, so I can't really sell it, otherwise that would have been a brilliant solution. So I think this actually ties into yesterday's discussion on what information we want to have in the database. And I think that just saying that what we have there is okay isn't really working well.

DENIS WALKER: I think with GDPR we can say what we have there isn't okay.

AUDIENCE SPEAKER: Hello, Andreas from GRNET. First of all, commenting on Rudiger's comment, I would say that I understand your concern, but it is one thing to clean up a database and another thing to delete some fields or change the database, because things change, some things have stayed but are not used any more, and you need to allow a transition period.
The second comment, answering your question of whether we actually need the person objects there any more: well, I believe that all of us will find cases where having person objects actually helps us, but ‑‑

DENIS WALKER: In what way does a person object help?

AUDIENCE SPEAKER: Well, okay, I can remember a few cases where I was trying to contact a company and I couldn't really get an answer, and I was trying to find out who actually works there, and then I would try to contact them through maybe LinkedIn or something else. These are mostly marginal cases. Certainly there are some such cases, but we shouldn't design with this in mind. In concept, we do not need such data in the database. This is not GDPR compliant, this is not compliant with the Internet of 2018. So, although I might need it in some cases, I believe this is the right direction to move in.

DENIS WALKER: Thank you.

PETER KOCH: So, we have been through this in the domain industry, in various ways, with different registries coming to different conclusions. I'm not sure whether you are suggesting that person objects should be replaced by role objects, but that doesn't immediately make things GDPR compliant, because even in role objects you can have addresses that are PII and so forth. Also, while this 1.56 million looks impressive, that alone doesn't make it GDPR non‑compliant; it signals to me that there is a strong asymmetry between the practices of these particular maintainers and what the others have done. One prudent approach would be to approach these top five or so and try to get in contact with them about what their practices are and whether they know what they are doing.
But I think one important question to ask is really not so much what the role of the person object or the role object is, but what the future, meaning and necessity of the technical contact and the admin contact are, because that is what's happening in the domain industry. Now, things are definitely different there, because the vast majority of domains are registered to private individuals and the vast majority of address space obviously is not. But asking in this day and age what the purpose of the technical contact is, whether we really need it, and what the purpose of the admin contact is as opposed to, like, the legal organisation ‑‑ we had this discussion around the legal address yesterday ‑‑ which, again, and I think Hans Petter mentioned it already, suggests that we should have a bigger discussion about what the purpose of the database and of the various fields is. That is also informed by the fact that some of the database educational material is kind of waving the flag saying we can't really tell what an admin contact is and what is asked of the admin contact, so that means we need to have a discussion about this and feed it back. So thanks for bringing this up. Step one for me would be to talk to the top five person object holders, but then initiate the discussion around purposes and so on, not so much because the big numbers imply GDPR non‑compliance.

AUDIENCE SPEAKER: Hi. Beganey geoa. According to Dutch law, if you have a database that is hosted on hardware that is ‑‑ for this purpose let's say it's owned by the RIPE NCC ‑‑ and you believe that the data in it is owned by an LIR, and there is personal data in there and it's published without the consent of the person, then we are not only not compliant with GDPR, but also bound by Dutch law to make a data leak ‑‑

DENIS WALKER: I think we have just done that, since we're public.

AUDIENCE SPEAKER: We have to register the data leak.

DENIS WALKER: You can't actually say for sure that the people don't know ‑‑ they may have signed a contract to get services, and in the small print it says your information will be put in this database, so it may actually have happened ‑‑

AUDIENCE SPEAKER: You cannot guess that, because the RIPE NCC is liable at that moment.

DENIS WALKER: That's the big problem. We don't know the answers to all these questions. Anyway, can we ‑‑

AUDIENCE SPEAKER: Can I just clarify on that before we create any sort of emergency here. There has been a legal analysis on this, and in this case the RIPE NCC is the data processor, the data controller is the LIR, and the contractual relationship there specifies this. So this is not a data leak. The responsibility is with the controller to have this contractual relationship in place. So this is not something that the RIPE NCC is guessing about or not. This is something that the RIPE NCC knows it has in its service contract. So the risk here is really with the LIRs. And that's where you can start to guess or not: so if Deutsche Telekom doesn't have that in order, Rudiger has a problem, and he probably knows that, and the Germans have things in order so I'm not worried about that. But there are 17,999 other LIRs that might have a problem.

ELVIS VELEA: I think we should take something from here, and I really, really like your idea. The person plus maintainer pair creation we have right now ‑‑ let's already start discussing, bring this to the list, and make it role plus maintainer. Also, let's try and come up with a timeline in which we can actually request everyone holding resources that have persons as admin‑c to replace those with roles; let's try to push that change, from a person being an admin‑c or abuse‑c in a resource object to that being a role. And come up with a timeline to push everyone. That's going to take probably years anyway, but let's start ‑‑

DENIS WALKER: I think the law will say we'll do it a bit faster. Anyway, we'll take it to the mailing list now. Thank you.
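As a rough sketch of what that change would mean in the database ‑‑ with every name, handle and address below being a placeholder rather than a real object ‑‑ a role object can carry the shared contact details, and a resource object's admin‑c and tech‑c attributes can then reference its nic‑hdl instead of pointing at individual person objects:

    role:           Example Telecom NOC
    address:        Example Street 1, Amsterdam, NL
    e-mail:         noc@example.net
    nic-hdl:        AUTO-1          # handle assigned on creation, e.g. EXNOC1-RIPE
    mnt-by:         EXAMPLE-MNT
    source:         RIPE

    inetnum:        192.0.2.0 - 192.0.2.255
    netname:        EXAMPLE-NET
    country:        NL
    admin-c:        EXNOC1-RIPE     # role handle rather than a person handle
    tech-c:         EXNOC1-RIPE
    status:         ASSIGNED PA
    mnt-by:         EXAMPLE-MNT
    source:         RIPE

Deleting the person objects themselves would, as noted in the discussion, still be up to the maintaining LIR; the sketch only shows where the contact references would move.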

WILLIAM SYLVESTER: All right. With that, we're out of time. I guess it's break time. Thank you to everybody for coming. Thank you to the secretariat for helping out with the scribing as well as watching online. And we'll see you all next time.

(Coffee break)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.