Archives

Routing Working Group session
17 October 2018
2 p.m.

IGNAS BAGDONAS: Good afternoon, colleagues. This is the Routing Working Group; let's start the meeting session. If you could please find your seats. I do understand that after lunch that's probably not the highest priority thing you would like to do, but this time the Routing Working Group is a little bit different.

We have two slots this time, the regular time on Thursday and also an additional meeting today. This is an experiment. And the reason for that is that for several meetings, actually for quite many meetings, the situation was that whenever the discussion starts and there is a queue of people waiting to comment at the mic, we have to cut the line because there is no time left in the session. And if we look at the value of the session, the comments are very likely the more valuable part, more so than the presentation itself. You can listen to the recording, but the live discussion with your peers seems to be more valuable. Therefore, the reason we have two sessions is to have more time for the mics.

Presentations will still be there, but this time we are much more liberal and the microphone police will not approach you ‑‑ well, unless you take two hours of time at the microphone. Again, this is an experiment and it will not necessarily continue; we would like to have your feedback. The current thinking is that maybe we should do this every second meeting. It all depends on the content. The previous meeting was mostly overflowing; this time we have slightly more space.

The overall continuing topic of the meeting is routing security; both today and tomorrow we have talks related to that. Then, looking back at the Marseille IETF BoF, the consolidated feedback was that the community is interested in having a deeper look into specific areas of what the IETF is doing, not just an overview. So there is a session dedicated to that.

And again, this is a continuation of an experiment with the IETF to have bidirectional communication.

The agenda for today: best current practices on routing policy development; then a presentation from RACI, a look into how to deal with tunnels in an MPLS environment. And the main discussion part is on the policy proposal for cleaning up the state of the registries.

Tomorrow there is an overview of developments in BGP security related things, some deeper insight into segment routing, how it works and how you could use it, and then an overview of what is happening in the IETF in routing.

Plus, some lightning talks. If you still have an idea you would like to share with your community, please approach the chairs by the end of today and you might win a slot.

So, with that, let's start the presentation sessions, Job, the stage is yours.

JOB SNIJDERS: Good afternoon, RIPE. My name is Job Snijders, and today I want to share with you some insights we developed over the course of a number of years related to how to architect robust routing policies. The information in this presentation is a collection of NTT's own experience, but we also asked numerous other friends for insights and asked them what it is that makes great routing policy so great.

So, to that effect, today I will share with you a conceptual model to look at routing policy; this may make it easier to discuss routing policy with co‑workers or friends. I'll offer some terminology that perhaps can be useful, and then I'll go over various design patterns that I would encourage you to consider deploying in your own network.

The conceptual model and the terminology. There is a famous Dutch saying: "One man's eBGP out is another man's eBGP in." If I have a router on the left side, the orange box, that's my own router. I have an eBGP session with my upstream provider, and any route announcements that pass from my router to my upstream's router pass through an attachment point we call eBGP out. Correspondingly, on the other side there is eBGP in, where my upstream will do various phases of verification and perhaps add some features. Then it makes it into the local RIB, perhaps it's the best path, then out through iBGP out and, correspondingly on the other side, through iBGP in policies. The names eBGP out, eBGP in, iBGP out, iBGP in are not necessarily the names of the routing policies themselves; you should view them more as function names.

If we look at IOS, Arista, or Brocade configuration, you can see in the blue lines the place where we attach a policy to a neighbour. We call this an attachment point. Attachment points refer to a policy and the directionality of the policy: does it apply when receiving updates, or does it apply when sending routing updates?

The policy itself is the totality of, for instance, a route map. And route maps are composed of multiple terms. On Junos, they even literally use the word "term" to differentiate the different parts of a routing policy.

So, let's zoom in on the first attachment point: eBGP in. In eBGP in, in essence we want to do what I call two‑phase filtering. We want to prune what is never acceptable under any circumstances, and then we want to match what remains against our white list. So there is a block list and an allow list. We'll go through how these are constructed one by one.
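The two-phase idea can be sketched in a few lines of Python. This is only an illustration: the route records, the private-ASN check, and the prefixes are all made up, and real policies of course live in router configuration, not in code like this.

```python
# Sketch of two-phase eBGP-in filtering: first a block list of things that
# are never acceptable, then an allow list of what we explicitly expect.

def phase1_reject(route, reject_predicates):
    """Phase 1: prune what is never acceptable, regardless of other attributes."""
    return any(pred(route) for pred in reject_predicates)

def phase2_allow(route, allow_list):
    """Phase 2: match what remains against the per-neighbour allow list."""
    return route["prefix"] in allow_list

def ebgp_in(routes, reject_predicates, allow_list):
    accepted = []
    for route in routes:
        if phase1_reject(route, reject_predicates):
            continue                    # block list: drop immediately
        if phase2_allow(route, allow_list):
            accepted.append(route)      # allow list: explicitly expected
    return accepted

# Example phase-1 predicate: reject private ASNs anywhere in the AS path.
def has_private_asn(route):
    return any(64512 <= asn <= 65534 for asn in route["as_path"])

routes = [
    {"prefix": "192.0.2.0/24",     "as_path": [64500, 64496]},
    {"prefix": "198.51.100.0/24",  "as_path": [64500, 65000]},  # private ASN
    {"prefix": "203.0.113.0/24",   "as_path": [64500]},         # not expected
]
allow = {"192.0.2.0/24"}
print([r["prefix"] for r in ebgp_in(routes, [has_private_asn], allow)])
# → ['192.0.2.0/24']
```

Only the route that survives both the reject phase and the allow-list match is accepted; the other two are dropped for different reasons.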

The raw input is whatever your neighbour sends you. You cannot influence the raw input; it is what your neighbour gives you. It may be an infinite amount of routes, it may be no routes, it may be routes covering your space or their space; you don't know. We cannot influence this part. In IETF speak, this is the Adj-RIB-In, and this is the piece where we apply maximum prefix limits. Let's zoom in on maximum prefix limits.

These are a beautiful control feature that can help sanitise your routing tables. In essence, they are a fail‑safe device: we may not understand what is happening on a certain eBGP session, but we are in a position to conclude that if the number of routes goes over the threshold, something somewhere is wrong and we should self‑destruct the session. If you click on these links you'll see fascinating articles on why such fail‑safes are useful.

This is how we view maximum prefix limits. On the X axis you see time, and from left to right we see time progress towards the future. The Y axis is the amount of routes, and as the amount of routes increases, at some point we hit the number that is configured as the maximum prefix limit, the session should tear down, and both of us are safe. Maximum prefix limits are beneficial not just for me but also for you, because both of us will suffer if I inadvertently accept your route leak.

Unfortunately, our perception of how maximum prefix limits work in practice is not fully aligned with what we would want them to be. This is where a difference kicks in between applying maximum prefix limits pre‑policy or post‑policy. If you apply them post‑policy, a far more complex phenomenon can be observed. The green line is the normal amount of announcements. Now a misconfiguration has happened and a full table leak is starting. There will be a lot of route announcements that are filtered because they are not part of the white list or contain the wrong ASNs somewhere in the AS path. But there will always be a number of routes that do pass through the filters even though they shouldn't, because filters are never perfect. And we may end up in the weird situation that even though there is a route leak going on, and even though many routes are announced, the threshold does not kick in, because the threshold is applied post‑policy rather than pre‑policy.

Pre‑policy limits are very useful for a number of things. They protect against memory exhaustion. You wouldn't want your neighbour to be able to fill up your RAM, by announcing you millions of routes. But most importantly, pre‑policy limits are very effective against route leaks.
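The difference between pre-policy and post-policy counting can be sketched numerically. All numbers here are invented for illustration; the point is only that strict filters make a post-policy threshold blind to a leak.

```python
# Toy model: does a maximum prefix limit trip during a full-table leak,
# depending on whether routes are counted before or after filtering?

def session_survives(received, filter_pass_rate, limit, pre_policy):
    """True if the session stays up, False if the limit tears it down."""
    counted = received if pre_policy else int(received * filter_pass_rate)
    return counted <= limit

NORMAL, LEAK = 1_000, 800_000   # routes on the session (illustrative)
LIMIT = 10_000                  # configured threshold
PASS_RATE = 0.01                # strict filters let ~1% of a leak through

# Pre-policy: the leak is counted before filtering; the session tears down.
print(session_survives(LEAK, PASS_RATE, LIMIT, pre_policy=True))    # False
# Post-policy: only the ~8,000 leaked routes that sneak through the filters
# are counted, so the threshold never trips and the leak is accepted.
print(session_survives(LEAK, PASS_RATE, LIMIT, pre_policy=False))   # True
```

The better your filters, the lower the effective pass rate, and the less protection a post-policy limit gives you: exactly the paradox described above.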

Post‑policy limits are used to protect your RIB and your FIB from exhaustion, and perhaps to enforce contractual agreements. For instance, in a Layer 3 VPN context you may have purchased a service where you are only allowed to announce 100 routes.

Now, if we compare the various routing vendors, specifically Cisco stands out as the one that does not support pre‑policy maximum prefix limits. So on IOS, IOS XR, and IOS XE, where you configure maximum prefix limits, if you have very high quality filters, the maximum prefix limit unfortunately becomes far less effective than it should be.

On IOS, when you type in maximum prefix limits, they are always post‑policy. On Junos you can influence this, they have different types of keywords, but it's also a little bit complicated, because if you disable the adjacent RIB feature and you configure a prefix limit, then suddenly you end up with a post‑policy implementation rather than pre‑policy.

So my request to any Cisco representatives in this room would be: please create a pre‑policy maximum prefix limit on IOS XR and XE.

Another thing we should explore as a community is outbound maximum prefix limits. When I have a peering session with one of my peering partners, normally I'll announce between 200,000 and 300,000 routes. If I announce 700,000 routes, I don't know what is wrong, but something somewhere is wrong, and I should proactively tear down my BGP session to protect both myself and my neighbour. This only exists on BIRD today, but I would love to see a formal specification of this feature and more widely available implementations from the commercial routing vendors.
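The outbound idea is the same fail-safe applied in the other direction. A minimal sketch, with an invented ceiling of 350,000 routes (only BIRD implements something like this today, via its export limit):

```python
# Sketch of an outbound maximum prefix limit: before advertising, compare
# the would-be announcement count against an expected ceiling.

def safe_to_announce(adj_rib_out_size, expected_max=350_000):
    """False means: tear the session down rather than leak."""
    return adj_rib_out_size <= expected_max

assert safe_to_announce(280_000)        # a normal day: 200k-300k routes
assert not safe_to_announce(700_000)    # full-table leak: self-destruct
```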

So, this is yet another to‑do item on our list.

Let's go back to eBGP in. The pruning of the information you received in the raw input. We now pass through our filters. What do those filters look like typically?

This is where we outright reject regardless of other attributes. So we focus on a single characteristic of a route announcement, and based on that we will reject before we go on to the second phase. Good examples of things to reject would be bogon ASNs anywhere in the AS path, or bogon or private prefixes. Leaks: for instance, NTT observing Cogent behind Level 3 or Level 3 behind Cogent; we have extensive AS path filters that try to guard against this. IXP peering LAN more specifics should never be accepted. RPKI invalid announcements also go into the immediate drop list, and furthermore your own IP space, because you wouldn't want to accept your own routes from other people.

There is a study resource: the NLNOG community has spent significant time collecting all types of examples. There is a lot of material there that you can perhaps even copy/paste into your routers.

Then finally, we have the white list. We collate what we have received in the raw input that managed to sneak through the reject filters, and then we correlate that with the white list: what we expect from people in their announcements. And there are numerous study resources in this regard: how to generate white lists, what data sources to consider, how to interpret IRR or RPKI data in this context.

All right, now what? A famous traditional Belgian saying is: when in doubt, use BGP communities!

The name BGP community in and of itself I think is terrible. As a non‑native English speaker I have trouble wrapping my head around why this thing was called a community and not just a label or a marker or a classifier.

From RFC 1997, the definition is: "A community is a group of destinations which share some common property." It's too late for us to change the name, so we'll just have to deal with the fact that it's called a community. These two RFCs make for interesting reading material; they are only a few pages each and they show the power of communities.

How do we use BGP communities?

In observing BGP communities as a tool for network operators, there are basically two steps: a classification step and an execution step. Classification happens through "set community X additive", and execution is where you match on a community and perform some action. Common classifiers are, for instance, that you learned a route from a customer or from a peering partner, or that you learned it in a specific geographic region such as Europe or the Netherlands. And then, based on those classifiers, in your routing policies you can execute and make sure you are sending the right routes with the right properties to the right peers.

Common execution outcomes would be: do announce to a certain eBGP neighbour, or prepend the AS path on outbound. And RFC 8195 offers inspiration on how to use large communities following this paradigm.

So, a day in the life of a BGP announcement. My ASN, 15562, generates a route, 192.147.168, and it is my intention to announce this to my upstream. So on the upstream side, the first thing this passes through is eBGP in, where we check whether there are bogons, whether it's an RPKI invalid, perhaps, whether some other characteristics are wrong. Let's assume nothing is wrong with the announcement and it passes through that first phase of pruning.

Then the second check would be to see if it's part of the pre‑generated white list, and the white list is generated based on IRR or RPKI data and if it passes through that, we enter the classification part. So some communities are added that indicate this is part of a customer route, this is learned in the Netherlands, this is learned in Europe. Features are applied such as modification of the local preference attribute, and then it passes through iBGP out to other routers.

On the other routers within the administrative domain, it passes through iBGP in. This is yet another attachment point where features can be applied, such as selective blackholing or selective local preference modifications, which are features used by anycasters. It makes it into the local RIB, perhaps it is the best path, and then it may pass out through eBGP out.

This is an example classifier‑execution matrix. On the left side you see communities that have a certain meaning, such as learned from customer, learned from peer, learned from upstream. And on the top row we see certain attachment points where we can execute based on those communities.

So, it sounds very simple. Well if we learn a route from a customer we'll want to send it to other customers. If we learn a route from a customer, we'll want to send it to our peering partners and we'll want to send it to our upstreams because perhaps our core business is providing transit.

So most of this table is fairly straightforward, and many of you have implemented similar things in your networks. However, there is one important hint I'd like to highlight: if there is no classifier on the BGP announcement, it should never, ever pass through any of these attachment points. If there is no explicit indicator that says this route announcement should propagate to this type of neighbour, do not propagate it. And this is an incredibly powerful safety device. For instance, if you connect a BGP speaker to your network, but somehow, for some reason, you did not apply any routing policy, and routes generated by that device are accepted, this mechanism will prevent those routes from propagating to your eBGP neighbours.

So if there is no community that tells you to propagate the route, never propagate it.
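That rule, together with the matrix, can be sketched as follows. The classifier names and the matrix entries are illustrative, not NTT's actual values; the essential part is the default when no classifier matches.

```python
# Sketch of a classifier/execution matrix with a fail-closed default:
# a route may only pass an attachment point if one of its classifiers
# explicitly allows it.

MATRIX = {
    # classifier      -> attachment points it may pass through
    "from-customer": {"to-customer", "to-peer", "to-upstream"},
    "from-peer":     {"to-customer"},
    "from-upstream": {"to-customer"},
}

def may_propagate(route_classifiers, attachment_point):
    # Fail closed: with no recognised classifier, never propagate.
    return any(attachment_point in MATRIX.get(c, set())
               for c in route_classifiers)

assert may_propagate({"from-customer"}, "to-peer")
assert not may_propagate({"from-peer"}, "to-upstream")      # peer -> upstream: no
assert not may_propagate(set(), "to-customer")              # unclassified: never
```

The last assertion is the safety device from the talk: an unclassified route, for example from a misconfigured speaker, reaches no eBGP neighbour at all.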

In other words, this is a fail‑closed mechanism. And this is useful because maybe you have some DDoS scrubber that goes off the charts and generates all kinds of routes that should not propagate, or maybe there is a typo in a policy. This can really save your bacon.

And by using communities as your network grows, you don't need to adjust your prefix list on each and every router. Using BGP communities is a fantastic time saver that reduces the load on your operations.

Another hint I have for you: avoid regular expressions where possible. If you look at this beautiful regular expression: is this cursing from a comic book, or a legitimate part of my routing policy? Don't try to be clever. When you design a regular expression, you'll be like: 'Wow, I compressed all this complexity into a single line', and then a few months later, at 4 a.m., you try to debug the regular expression and you realise that you have made a grave mistake and you don't understand the work that you did months before.

So, focus on readability. Focus on maintainability. Consider your routing policy as if it is source code where you prioritise that the maintainability is more important than it being condensed.

Another thing I would recommend: do not mix address families; differently expressed, don't mix IPv4 and IPv6. They are not designed in a backwards compatible manner, and this means that we essentially have two flavours of the default‑free zone. I recommend that you have separate BGP sessions for your IPv4 routing and separate BGP sessions for your IPv6 routing. Use separate prefix lists, use separate routing policies, keep everything separate. Because the behaviour on devices that do support mixing between the AFIs is somewhat undefined: does a prefix list that contains only IPv4 entries mean you fall through and fail open when an IPv6 update passes through it, or do you fail closed? This is not portable across multiple vendors.

And similarly, don't announce IPv6 prefixes over IPv4 sessions or IPv4 prefixes over IPv6 sessions. This small trick will greatly simplify your operations and debugging.

Let's take a look at how many routing policies you generate in a standard transit network, and I'll share some numbers from NTT's side so you can try to estimate whether you are on the right trajectory or not.

For eBGP‑in style policies, you'll typically generate one policy per ASN, because per ASN you need prefix filters that form the white list, and this is where you'll end up with separate routing policies per ASN on eBGP in. But for eBGP‑out style policies, you'll see that grouping is possible. For instance, all your peering partners can share the same policy; all your customers can share the same policy.

Then iBGP out, generally speaking, is the same across your entire network. But iBGP in, generally speaking, is unique per device, because this is where we apply regionalised features such as selective blackholing or selective local preference modifications.

So eBGP in: per eBGP neighbour you'll have a policy for IPv4 and for IPv6. In NTT's network, that means we have tens of thousands. For eBGP out, sharing is possible, so per group you'll have the same policy, and this results in high hundreds in totality across the entire network. For iBGP in, the number of policies is the number of AFIs times the amount of devices in our network, so that's low hundreds. And then iBGP out is the same policy copy‑pasted on every device.

In the IETF, an interesting observation was made that "set community X" across multiple routing platforms does not do the same thing everywhere. So when we talk about this type of configuration, it may mean entirely different things on different platforms. On some BGP implementations this command deletes all BGP communities and then sets the specific community you want to associate with that route announcement. On some implementations it deletes some of the communities, but not all, and then sets the community you wanted to add.

There is an implementation that does not delete anything but just adds.

And this makes for harder-to-maintain routing policies if you are operating a multivendor environment. So what I recommend instead is that you explicitly delete the communities you want to delete, and then, as the next step, explicitly add the communities you want to add by using the equivalent of "set community X additive".
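The portable pattern can be sketched like this. Communities are modelled as (ASN, value) pairs and the values are made up, except for 65535:0, which is the well-known GRACEFUL_SHUTDOWN community mentioned below.

```python
# Sketch of the vendor-portable pattern: instead of relying on what a
# plain "set community" does on a given platform, explicitly delete what
# you mean to delete, then add additively.

def set_community_portable(communities, to_delete, to_add):
    """Delete matching communities, then the equivalent of
    'set community X additive'."""
    kept = {c for c in communities if c not in to_delete}
    return kept | to_add

before = {(64500, 100), (64500, 200), (64499, 666)}
after = set_community_portable(
    before,
    to_delete={(64500, 200)},
    to_add={(65535, 0)},        # GRACEFUL_SHUTDOWN (RFC 8326)
)
assert after == {(64500, 100), (64499, 666), (65535, 0)}
```

Because the delete set and the add set are spelled out, the result is the same on an implementation that would otherwise wipe all communities, wipe some, or wipe none.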

And to put that in context: when we implemented graceful shutdown, the well‑known BGP community to indicate that the paths will be going away shortly, we went from tens of thousands of instances of "set community" to just a few hundred.

The link below is quite interesting to read.

So, what communities to delete when you propagate routes to your eBGP neighbours? It is recommended that you scrub BGP communities that could negatively affect the operations within your own network. In different words, you want the BGP session to conform to what was contractually agreed upon. For instance, if our agreement is that I will accept your routes but not give you traffic engineering features, I should delete the BGP communities that could influence traffic engineering in my network.

In general, this means that you'll delete communities that match your ASN in the first 16 or 32 bits. Leave as many communities as you can, because somewhere down the road, somewhere in the pipeline, somebody may benefit from that additional information; maybe it facilitates them making better routing decisions. And this may be one of the very few places where regular expressions are acceptable.
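As a sketch of that scrubbing, with an invented ASN of 64500 and communities written in the usual "asn:value" text form:

```python
# Sketch: scrub inbound communities whose first 16 bits match our own ASN,
# while leaving everyone else's informational communities intact. This is
# one of the few places where a regular expression is defensible.
import re

MY_ASN = 64500
SCRUB = re.compile(rf"^{MY_ASN}:")   # matches "64500:<anything>"

def scrub_own_communities(communities):
    return [c for c in communities if not SCRUB.match(c)]

received = ["64500:666", "64500:1", "64499:10", "3356:2"]
print(scrub_own_communities(received))
# → ['64499:10', '3356:2']
```

Communities set by others survive the session; only the ones that could drive features inside our own network are removed.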

What communities to send to your neighbours? I would recommend at least sending geolocation information. Publish somewhere a list of what communities you share and distribute with others, and what they mean. If community 1 means Europe and community 2 means North America, publish this somewhere. Don't put it in your customer portal behind a login; make it a public document.

In general, I suggest you do not send more than four BGP communities of your own. Try to compress the information a little bit to save on RAM.

Then finally, somewhat related to routing policy or more specifically what happens when there is no routing policy at all, there now is an RFC that defines what the behaviour is if there is no routing policy associated with a given eBGP session.

RFC 8212 specifies that, by default, routes should be rejected both on inbound and on outbound unless you configure the device to do otherwise.

Currently IOS XR, BIRD and OpenBGPD support it natively, but there is still some work to be done. On Arista you can emulate this behaviour by copy‑pasting the following lines, but that's not the behaviour out of the box, unfortunately.

On Juniper you can use a SLAX script to emulate this behaviour. The SLAX script will sit in the commit pipeline and insert a deny‑all routing policy if no routing policy is defined, but it would be much nicer if there were native support for this on Juniper.

And Nokia is committed to releasing software that implements RFC 8212, somewhere in the 2019 or 2020 time frame.

So your homework for today is to send your account managers a short e‑mail saying 'I would like you to implement RFC 8212'. And because RFC 8212 updates the core BGP specification, if a vendor is not compliant with this RFC, technically speaking they are not compliant with BGP‑4, and that would be very embarrassing, right, if you try to sell a router that doesn't talk current BGP.

We have now arrived at the end of my presentation. With this, I would like to open up the floor for comments, questions, concerns, tomatoes, any feedback you may have.

(Applause)

RUDIGER VOLK: Thanks. I remotely saw a first version of your presentation from the Denver NANOG, and I found it very useful because it directly provided arguments for fixing certain things that we misdesigned sometime back in the past millennium. And I don't remember recently seeing a systematic approach going through all the stuff like you have done, so this is extremely useful. You will not be surprised that I find a couple of details where I would say, well, okay, call out things that are actually heuristics as being heuristics, and turn the attention to the fact that considerations for heuristics typically are not really that simple; there may be cases where you might want different choices. I don't want to go through all the slides. Maybe the first slide, where you are showing the Dutch proverb (someone was pointing out that the proverb is not gender correct): what I find kind of incorrect in the thing is that you do not show the eBGP out from your AS in the picture.

JOB SNIJDERS: EBGP out is the orange router.

RUDIGER VOLK: Yeah, kind of. As you are thinking about the stuff, I think sometimes people seem to be more obsessed with the inbound filtering, and sometimes some of the criteria and responsibility on the outbound get lost. Just having that last hop in the picture again, I think, would be a good idea, to reinforce that when you are thinking about your AS there is actually that next hop that you have to take care of. In your presentation, of course, you have been taking care of that.

JOB SNIJDERS: I agree. I will happily admit that this presentation has been, you know, suffering from my bias as a transit provider, so we only care about what we accept and the rest kind of goes automatically. But if you are a different type of network, say a small domestic ISP in Germany, then you of course have a very different view in your iBGP, with a ton of more specifics of your supernets, and careful attention must be applied to eBGP out. So this is a valid point. The different attachment points will have different importance, I think, to different types of networks. Thank you for your positive feedback.

AUDIENCE SPEAKER: Rafael. Thanks for your presentation; you explain things in a way we can explain easily to people. Would it be possible to do some kind of welcome pack for IXPs, like your presentation but with fewer slides, so that IXPs can provide the best practices about this to new customers? Something more compact that you can read to get the main points, so you spread the word about doing things correctly. Could you suggest that IXPs send it to all their new customers?

JOB SNIJDERS: Am I hearing a volunteer to help write this document? I think in our industry we so far have not taken a systematic approach to how routing policy is designed, and, yes, I would like to work towards making a best current practices document, or perhaps some software that generates a lot of these policy elements for people to easily adapt to their own environments.

AUDIENCE SPEAKER: Okay, but you will have to wear a different tie for each ‑‑

JOB SNIJDERS: What about a bow tie?

AUDIENCE SPEAKER: Hi. I am from Telecom Egypt. I am just a bit confused about how we can apply these recommendations as a multihomed operator, a multihomed provider: how we can use these communities to make the required optimisation in the prefixes we advertise to the upstream providers. If we keep in mind that we need to do load balancing between different upstream providers, I am a bit confused about how to apply this.

JOB SNIJDERS: So your question essentially is in the context of traffic engineering, what is ‑‑

AUDIENCE SPEAKER: As an example, in my company we are not using communities in what we advertise to the upstream providers. We just advertise a prefix list of IP addresses.

JOB SNIJDERS: I hear what you're saying. In this instance, I would recommend using both approaches. The foundation of the network policy is to use BGP communities to decide which announcements propagate where, because I think by default you'll want to announce all your prefixes to all your neighbours. And then, on top of that, in eBGP out you could use an explicit prefix list filter to selectively block or unblock certain announcements. So the two approaches of using prefix lists and BGP communities are not mutually exclusive at all, and if you use BGP communities as described here, with this specific classification‑execution matrix, you'll have a very safe foundation to further build your traffic engineering policies upon. So I would use this as the basis and then add prefix‑list‑based traffic engineering to that.

AUDIENCE SPEAKER: Okay. Thank you.

AUDIENCE SPEAKER: Hi. Theo from RIPE NCC. I have a comment from a remote participant. Cynthia Rebstrong, acting in a personal capacity. 'This talk was very good and explained things very well'.

JOB SNIJDERS: Very kind words. All right. If you have questions about this presentation, always feel free to e‑mail me, and thank you for your time.

(Applause)

IGNAS BAGDONAS: Thank you. Now, the next one is the talk from RACI on detecting MPLS tunnels.

SPEAKER: Hi everybody. This is my first RIPE meeting and I really enjoy it, so thanks for inviting me.

Actually, I'm here as a backup for my colleague from the University of Liège. He was the guy that prepared the slides and introduced a lot of fancy effects such that I will look like an idiot at some moments, but don't worry, I know the work quite well, so I will present it in the best terms that I can.

So, the purpose of this talk is to try to teach you a lot of tricks that we discovered by kind of reverse engineering to, in the best case, reveal the MPLS tunnels that are in the Internet or, in the worst case, at least detect them. This is joint work between my university, where I am an associate professor, and the University of Liège in Belgium.

The agenda is quite basic. I'll start with some motivation. Basically, the objective is to better understand the deployment of MPLS and the way that some ISPs use it: for example, whether they try to hide their own tunnel contents and what kind of features they use.

The second part is more of a research objective: to try to avoid the bias that Layer 2 may put on the Internet model. For example, with MPLS you will shorten most of the paths in the Internet, and the other problem is the fact that sometimes nodes look to have a very high degree, but that is not the case in practice.

So I will show you at the end of the talk, when we apply the fix that we are able to do because we have a clear view of the tunnel contents, how it changes the view that we have of the Internet.

First, I will introduce very briefly what we call network fingerprinting. It's like Nmap, but for routers: the goal here is to group routers in several groups according to their OS, to be able to know what kind of router we are facing, and to adapt our techniques according to the kind of router we face.

So the idea is to organise what we call a signature: a set, as I will show you on the next slide, of what we call initial TTLs, and it is sufficient to recognise most routers, the most classical routers.

So, the initial TTL. When you read the RFC about it, it states that it should be the classical value of 64. But unfortunately for the RFC, and fortunately for us, that's not the case in practice, and it's very rare that this value is used. In fact, it depends on the hardware: for sure there is a difference between Cisco and Juniper, and that's really helpful for us. Even for Juniper devices, it may differ between the OSes they deploy. We can also try to probe the router with different kinds of probes to be able to get more information.

So, basically, the most interesting kind of message that we use tries to trigger some error message, or some informational message, on the router. The initial TTL values that we find are between 32 and the maximum possible value of the byte. So the basic idea is to send a broad variety of different probes and try to determine what is the brand of the router. The way we infer the initial TTL value is quite simple: we take the TTL in the answers we receive and we pick the smallest candidate value above it. We could try to build a vector of as many initial TTLs as the probes that we send, but in practice we realised that two of them are enough, and if we try to send more probes, the additional probes will fall into one of the two categories. So basically we send a message with a very low TTL, such that we receive an ICMP time exceeded message, or we send some ping, so it is quite easy to understand.

The outcome of such a basic thing is that at a very coarse grain we are able to distinguish most of the routers here. For example Cisco, and quite obviously some others that fall into the same category as Cisco, and we have some difference between the two kinds of Juniper OS we analysed, so it's quite easy to get a first idea of the router that we face. If I'm presenting this to you, it's because we will use it later to retrieve the MPLS information, and in addition to that, it's quite interesting to see that for Juniper OS you have two different kinds of initial TTL. This will be very useful, because we play with many TTLs in this presentation: the tricks that I was talking about before rely mainly on a broad variety of TTLs, including the MPLS one, which I will show you just after.
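The initial-TTL inference described above can be sketched in a few lines. The candidate values 32, 64, 128 and 255 are the commonly observed initial TTLs; the example signature pairs are illustrative, not the paper's actual per-vendor values.

```python
# Sketch of initial-TTL (iTTL) inference: from the TTL seen in a reply,
# round up to the nearest plausible initial value to estimate both the
# iTTL and the hop distance to the responding router.

CANDIDATE_ITTLS = (32, 64, 128, 255)

def infer_ittl(received_ttl):
    """Pick the smallest candidate iTTL >= the received TTL."""
    for candidate in CANDIDATE_ITTLS:
        if received_ttl <= candidate:
            return candidate
    raise ValueError("TTL out of range")

def hop_distance(received_ttl):
    return infer_ittl(received_ttl) - received_ttl

# A reply arriving with TTL 247 most likely started at 255, 8 hops away.
assert infer_ittl(247) == 255 and hop_distance(247) == 8
# A reply arriving with TTL 60 most likely started at 64, 4 hops away.
assert infer_ittl(60) == 64 and hop_distance(60) == 4

# A two-probe signature <iTTL of time-exceeded reply, iTTL of echo reply>
# then separates vendors coarsely (this pair is illustrative only).
signature = (infer_ittl(250), infer_ittl(60))
assert signature == (255, 64)
```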

I will try to be very brief about the MPLS background, because I guess that most of you know quite well how it works and behaves.

So it's basically a small header that you put between Layer 2 and Layer 3, and the only field that will be very interesting to me is what we call the LSE TTL, which is used inside the tunnel. The other fields can be useful, but I don't want to talk about them in this presentation.

So let's imagine a very simple environment like this one: we have one vantage point on the left and one destination on the right. When you try to probe an MPLS network, you send a packet from the source to the destination, and the content of the internal ISP may be hidden. It's a common belief that the MPLS layer hides the content of the tunnel, but in practice that is not always the case. So I will just introduce some terms that I will use all along the talk.

So there is an entry point that is called the ingress label edge router (LER). Then there is an exit point that is called the egress LER. More interestingly, you have the first hop LSR, but you also have what is called the PH LSR, for the penultimate hop popping (PHP) function, which most of the time, or at least by default, removes the label stack entry just one hop before the end of the tunnel. Now, if we send a packet through the LSP that I just outlined here, what will happen is that the ingress router will push a label on top of the IP packet. This label will be swapped on the first LSR here, so it will turn from 4 to 5 in this example. It's exactly the same operation on the one in the middle. And here, because in this example I used the default case of penultimate hop popping, the label is not removed at the end but just one hop before.
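The label operations just described (push at the ingress, swap at each LSR, pop at the penultimate hop) can be modelled as a toy simulation; the label values mirror the 4-to-5 swap in the example, and the function is purely illustrative:

```python
# Toy model of label handling along an LSP: the ingress LER pushes the first
# label, each LSR swaps it, and with penultimate hop popping (PHP) the label
# is removed one hop before the egress LER.

def traverse_lsp(labels, php=True):
    """labels: the label pushed at the ingress followed by the swapped
    values chosen at each LSR. Returns the label stack seen on the wire
    after each router."""
    states = [[labels[0]]]             # ingress LER pushes the first label
    for i, new_label in enumerate(labels[1:]):
        penultimate = (i == len(labels) - 2)
        if php and penultimate:
            states.append([])          # PHP: pop one hop before the egress
        else:
            states.append([new_label]) # ordinary LSR swaps the top label
    return states
```

With the talk's example, `traverse_lsp([4, 5, 6])` shows the label 4 pushed, swapped to 5, then popped at the penultimate hop so the packet reaches the egress LER as plain IP; with `php=False` the last label survives until the egress.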

So, enough background; now I will enter into the real content of this talk.

So, we started this work quite long ago with a very preliminary analysis and classification of different tunnels, but actually, I have to be honest, there are a lot of flaws in that paper that we are now able to understand and correct, so I will show you the more recent advances that we made.

So just two things to know: to be able to see an MPLS tunnel natively with traceroute, you need two options. The first one is the RFC on ICMP extensions for MPLS, which is implemented in almost all common routers; there are some exotic routers that do not implement it, but that starts to be very rare.

The second one is the TTL propagation option: if at the ingress LER you do not propagate the TTL, you will skip the entire tunnel. This feature is very important, and if you do nothing on your MPLS environment, propagation is the default configuration, so the tunnel is visible.

So I will briefly show you how these options work, still with PHP enabled in this example. When the TTL propagation option is enabled, at the first hop you copy the IP TTL into the MPLS TTL, to ensure that the entire tunnel is visible. At the end of the tunnel you continue the trace normally. Then it behaves as usual: the MPLS TTL is decreased at each hop, at the penultimate hop you pop it, and then the packet continues to flow on.

But when you disable that option, it starts to be more difficult to understand what happens, because at the ingress LER you do not copy the TTL: you start with the maximum initial value, which is 255 in this example. This MPLS TTL is then decreased at each hop all along the tunnel. But at the penultimate hop, you have to choose which IP TTL the packet goes out with. Here it's quite funny: the trick that most ISPs use is documented by Cisco but not by Juniper, but actually both use the same one. They don't want to rely on any synchronisation between the entry and the exit point, so what they do is quite smart: they just pick the minimum between the two TTLs, to mitigate loops and avoid too-long traces. If I have time during the questions, I can show you a very fancy example where an implementation does not use the minimum between the two, and just after the trace you have a jump of 200 hops that skips not the tunnel but the hop after the tunnel. It's quite funny, and there are a lot of buggy implementations, as I will present later.

But theoretically, what will happen here is that you pick the minimum of the two, so at the end of the tunnel you have the same IP TTL as at the entry of the tunnel, and you skip the content.
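The min() behaviour just described can be captured in a small sketch (the initial values are the ones from the example; this is an illustration, not router code):

```python
# Sketch of the IP TTL seen after an MPLS tunnel, with and without the
# TTL propagation option. Without propagation, the MPLS TTL starts at 255
# and the egress takes min(IP TTL, MPLS TTL), so the tunnel consumes no
# visible IP hops.

def ip_ttl_after_tunnel(ip_ttl_at_ingress: int, tunnel_hops: int,
                        propagate: bool) -> int:
    if propagate:
        # IP TTL copied into the MPLS TTL: internal hops stay visible.
        return ip_ttl_at_ingress - tunnel_hops
    mpls_ttl = 255 - tunnel_hops       # MPLS TTL starts at the maximum
    ip_ttl = ip_ttl_at_ingress         # IP TTL frozen inside the tunnel
    return min(ip_ttl, mpls_ttl)       # min() applied when popping
```

With propagation disabled, a packet entering with IP TTL 60 leaves a 3-hop tunnel with TTL min(60, 252) = 60, exactly as if the tunnel had zero length.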

So here is the big picture of our classification. For sure, I won't enter into all the details here, and I'm pretty sure that some machine learning could be useful to deal with all the TTLs that I show here. In this talk I'm not interested in, let's say, the left part of the slide, because most of those cases are very easy to see. What we call an explicit tunnel is a tunnel with the TTL propagation option enabled, where you see everything, with the MPLS labels and so on. An implicit tunnel is similar, but for routers that do not implement the RFC, such that you see the internal hops but you are not able to understand that this is actually an MPLS tunnel. The right part is very interesting for us, because this is the hidden part of the MPLS tunnels. So I will enter into the details for invisible PHP and UHP. Since we tried to elaborate the most general possible solution, we emulated at least four OSs to understand what they are doing, but it's quite difficult, because each of them has its own way of implementing this, so it's quite a difficult affair of reverse engineering to understand all the bugs and, let's say, odd stuff that they do. So I will really focus on the two last boxes, on the right, that you can see here.

Just one last thing: here you can see that I use two colours, orange and red. The orange ones are what we call indicators: a way to see that, okay, here there is some tunnel evidence. The red ones are what we call triggers: hints that there is some kind of MPLS tunnel there, such that we can launch new probes to be sure to reveal the content.

What is interesting, and some of you can maybe already guess it, is that there are some very strange shifts in the TTLs that we receive. That's basically the idea here: when we see such a shift, we know that there is some kind of weird stuff going on. We don't want to break the Internet by trying to reveal every MPLS tunnel, so we rely on those values. I will start this part with the triggers.

So, invisible tunnels. To sum up, this is the case where there is no TTL propagation option. We also produced a paper about it.

So basically ‑‑ I'm not saying we tried to be fully realistic, but at least we use asymmetrical routing inside, so that the forward and return paths differ, to show a more complex case than just a basic line where the return path is the same as the forward path.

So, a lot of effects. The traceroute starts perfectly normally: you send the first packet with a TTL of 1 and you receive the answer of CE1; a second with TTL 2, and you receive the answer of PE1. Then you skip the entire content of the tunnel, such that you end up at PE2, and this is the first very funny bug that we observed; it's only on Cisco routers. When the packet arrives at the PE, the TTL is not decreased and you do not receive a message from this router: what Cisco does is just silently push the packet to the next IP here, so CE2. So what you see is that you receive an answer from CE2 instead. But then it's even more funny, because you will continue by sending a new probe with a TTL that is just one more. What happens in that case is that the router tries to do something a bit better: it analyses the TTL, this time it knows the TTL is not equal to one, it decreases it, and you receive again an answer from CE2. For those of you who are familiar with scamper, this is something that happens quite a lot: you get duplicate IPs. It's so frequent that the default scamper implementation does not try to treat this as a loop; they consider it so frequent that they do not stop the trace when they see it, and apparently that's a good idea, because it's not a loop at all, it's just a weird quirk of the Cisco PE egress point. So here you can see that the TTL shift is quite impressive and, moreover, we get a duplicate IP for free; it's a very interesting trigger, such that we have two pieces of evidence that maybe there is something hidden behind.
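A detector for this trigger might look as follows (the TTL-shift threshold is an assumption for illustration; the real scamper-based implementation is more involved):

```python
# Sketch of the Cisco trigger described above: in a traceroute, the same IP
# answering two consecutive probes, together with a large shift in the reply
# TTL, hints at an invisible tunnel.

def find_invisible_tunnel_hints(trace, ttl_shift_threshold=3):
    """trace: list of (responding_ip, reply_ttl) per probe, in probe order.
    Returns (index, reason) pairs where a duplicate IP or a suspicious
    reply-TTL shift occurs."""
    hints = []
    for i in range(1, len(trace)):
        ip, ttl = trace[i]
        prev_ip, prev_ttl = trace[i - 1]
        if ip == prev_ip:
            hints.append((i, "duplicate IP"))
        # consecutive hops normally differ by about 1 in reply TTL
        if abs((prev_ttl - 1) - ttl) >= ttl_shift_threshold:
            hints.append((i, "TTL shift"))
    return hints
```

On a trace like CE1 (TTL 254), PE1 (253), CE2 (249), CE2 (248), this flags both the TTL shift after PE1 and the duplicate CE2 answer, matching the two pieces of evidence from the talk.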

The second trigger is mainly focused on Juniper routers. If you remember well what I showed you before with the fingerprinting stuff, they use two different kinds of initial TTL. So we can play with such a router by sending two kinds of probes: normal ones that trigger a time exceeded message, and also pings, such that the two initial values are different.

So, as before, I send this and it ends up on the egress point here. You can see that it initialises the IP TTL and the LSE TTL with the same value; there is no problem, because they are the same. So you end up with an answer like this. It's time to recall that the min operation is applied on the egress, but on the return path, so it picks the smallest value among the two. Since the IP TTL is at the maximum value, it picks in this case the MPLS one, and that is a first interesting fact: while you can't see the tunnel on the forward path, you already have some kind of evidence of a tunnel on the return path. That is a trick that we will use later for the last trigger. Here, what we do is a nicer trick.

Now we send a ping to the same IP that we collected from the transit trace that we sent first. If we send a ping, the IP TTL is initialised to 64, but the MPLS TTL is initialised to its maximum value. So what happens in this case is that the same minimum operation won't select the MPLS TTL this time, but the ping's IP TTL, such that we get a great difference between the two. It's one of the most efficient triggers that we use, but it only works for Cisco routers ‑‑ for Juniper routers, sorry.

So here we can already even estimate the length of the paths by playing with the TTLs that we collect, such that we are able to see that there are two hops inside the tunnel on the return path, while the forward one has 3 internal hops.
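Under the assumptions above (time exceeded replies start at 255, echo replies at 64, and the min() operation hides the return tunnel only from the ping reply), the hidden return-path length can be estimated with a small sketch like this:

```python
# Estimating the hidden return-path tunnel length from the two reply TTLs
# (illustrative sketch). The time exceeded reply loses the tunnel hops via
# the min() operation, while the ping reply's smaller IP TTL survives it,
# so the difference in apparent distances is the tunnel length.

def return_tunnel_length(te_reply_ttl: int, ping_reply_ttl: int,
                         te_initial: int = 255,
                         ping_initial: int = 64) -> int:
    hops_seen_by_te = te_initial - te_reply_ttl        # includes tunnel hops
    hops_seen_by_ping = ping_initial - ping_reply_ttl  # tunnel hops hidden
    return hops_seen_by_te - hops_seen_by_ping
```

For example, a time exceeded reply at TTL 252 (3 hops lost) against an echo reply at TTL 63 (1 hop lost) suggests 2 hidden hops on the return path.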

The last trigger is the one that suffers from many drawbacks, because it is sensitive to BGP asymmetry, so I'm not saying it's the one that we want to use; it is the one of last resort, when the previous ones do not work. You basically try to analyse the difference between the number of hops and the initial TTL we receive. Here again, it's the minimum operation applied on the egress of the return path that selects the minimum value, such that, if we are in a perfect symmetry case, which is not exactly the case here, we see a gap. Then we can say maybe there is something there, and we can try to do something. Actually, we can do something much smarter than that: analyse the delta between subsequent TTLs, to try to mitigate that effect.

So here again, in the perfect symmetry case, you can estimate roughly the size of the tunnel, but the BGP asymmetry can increase or decrease this value, so it's not perfect, for sure.
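A minimal sketch of this last-resort estimate, assuming the reply started at 255 and rough forward/return symmetry:

```python
# Last-resort gap estimate: compare the forward hop count at which a router
# answered with the return-path length derived from its reply TTL.
# A positive gap hints at hops hidden by the tunnel, but BGP asymmetry
# can inflate or deflate it, as noted in the talk.

def rtla_gap(forward_hops: int, reply_ttl: int,
             reply_initial_ttl: int = 255) -> int:
    return_hops = reply_initial_ttl - reply_ttl
    return return_hops - forward_hops
```

A router answering the third probe (forward hop 3) with a reply TTL of 250 (5 return hops) would yield a gap of 2, the rough size of the hidden tunnel under symmetry.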

So now, maybe, the most interesting part for you. We play with triggers just because we try to calibrate our tool, to avoid launching new revelations all the time. We implemented that in scamper; it's available, so you can play with it.

So I will present two very simple techniques that are able to actually reveal the content.

The first one works for Juniper routers. By default ‑‑ I guess most of you know that ‑‑ the Juniper implementation only injects the loopback addresses of the PEs into LDP, such that the internal traffic is not MPLS encapsulated. So what happens in that case is very easy. You get a trigger with the TTL stuff that I just presented before, and you say, okay, maybe there is evidence of something here. So you just relaunch a new trace to the IP where the shift of TTL applies, here PE2, and since PE2 is reached via an internal prefix of the AS, you directly reveal all the content in one shot. So it's quite easy to reveal this content. You may say, oh, but LDP can be used with the independent mode, and with Cisco the difficulty is the fact that the whole IGP is pushed inside LDP, such that you have an LDP route for all the internal network.

Fortunately, we also have a technique for that. First we send a normal traceroute again; here it's the less efficient trigger that pops up, and we say, okay, we will target PE2. The fact is that here we won't reveal the entire content of the tunnel in one shot; we will reveal only one more hop. It's not due to PHP ‑‑ people here may say, oh, it's because of PHP that you have that. No, it's due to the locality of the prefix: since both routers perceive this prefix as their own, the one that is more on the left cuts the path one hop before, such that, in what you receive, you discover one hop of the content of the tunnel. So here it's easy. You reveal a first hop, and you introduce this first hop, P3, in the trace. You can continue like this until targeting the newly revealed hop brings nothing new. The LSP becomes shorter and shorter, up to the point where I find the three internal hops. So here is the final content of the tunnel. I will try to do it quickly, to present at least some results about that.
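The iterative revelation loop can be sketched as follows; `trace_to` is a hypothetical stand-in for actually sending a traceroute, and the whole function is an illustration of the idea, not the scamper implementation:

```python
# Sketch of the hop-by-hop revelation: each trace towards the last revealed
# internal hop exposes one more hop of the tunnel, until nothing new appears.

def reveal_tunnel(trace_to, egress_ip, known_path):
    """trace_to(target) -> list of responding IPs for a trace to target.
    known_path: IPs already seen in the original trace.
    Returns the revealed internal hops, ingress side first."""
    revealed = []
    target = egress_ip
    while True:
        path = trace_to(target)
        new_hops = [ip for ip in path if ip not in known_path + revealed]
        if not new_hops:
            break                       # nothing new: tunnel fully revealed
        revealed = new_hops + revealed  # new hop sits closer to the ingress
        target = new_hops[0]            # re-target the newly revealed hop
    return revealed
```

Against a faked topology where tracing to PE2 first exposes P3, then P3 exposes P2, and so on, the loop terminates with the three internal hops P1, P2, P3 in path order.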

Maybe just very briefly: with duplicate IPs it's a bit more difficult, because here UHP is enabled, so the last hop really is the egress router. In that case, you have a duplicate IP, so the trigger performs quite well, but you will get an IP of CE2, and if you target that IP you will reveal nothing. So what we try to do is to infer, with a simple heuristic that we call the buddy, the outgoing IP of PE2. Then, if you send a message to PE2 that makes it send an error reply, you can get the interface that I drew just there, such that it's a bit more tricky, but the end is exactly the same: we can reveal the content.
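A buddy heuristic in this spirit might be sketched like this, assuming the link is a point-to-point /30 (the actual heuristic may handle more cases, such as /31 links):

```python
import ipaddress

# Sketch of a "buddy" guess: on a point-to-point /30, the other usable
# address of the subnet is likely the interface at the far end of the link.

def buddy_ip(ip: str) -> str:
    addr = ipaddress.IPv4Address(ip)
    net = ipaddress.IPv4Network(f"{ip}/30", strict=False)
    host1, host2 = list(net.hosts())   # the two usable /30 addresses
    return str(host2 if addr == host1 else host1)
```

For instance, the buddy of 10.0.0.1 on a /30 is 10.0.0.2, and vice versa; probing the buddy can then elicit the error reply that exposes the egress interface.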

So, a brief summary of what we are able to reveal here. We performed lots of emulations to be able to reproduce all the behaviours that we observed, and we discovered that for all classical P2P circuits, whatever kind of popping function you use, you are able to reveal the tunnel with the techniques that I showed before. So a good question is: does this technique also work for more complex overlays, such as VPRN and stuff like this? It works to some extent, but we are not able to reveal the content, because it's a bit more tricky, and I don't have the time to present all the results here.

But we have some evidence, because of ‑‑ it's not a bug again, but the Cisco implementation is quite funny: when you end up with two labels in the stack, when you use UHP for example, the tunnel does not terminate normally; normally you should end a tunnel with an implicit null label or with an explicit null label. Since you have two labels in the stack, when you remove the top one you still have the last one, the one that says which VPN to choose. And then we are able to see just an MPLS indication on the last hop.

So, we built lots of emulation test beds to be sure that all these techniques work in a controlled environment, and fortunately they do. So I will be very brief with that.

I will go directly to the end, just to show you what we have done. We have installed a very small test bed in Strasbourg, so you can traceroute that IP and you will recognise exactly the behaviour that I presented during the talk. You will get something like this. And I end my presentation on this slide.

Thank you.

IGNAS BAGDONAS: Thank you. Any comments, questions?

Thank you then.

(Applause)

Going to the next part I see you are dressing up with the RPKI tie, cleaning up the routing registries.

JOB SNIJDERS: All right. Today I would like to present to you a policy proposal from myself, Martin Levy, and Erik Bais to help clean up the RIPE non-authoritative database. We are now in a post-NWI-5 world, and I would like to congratulate the RIPE NCC staff on a very successful deployment of the NWI-5 changes. Honestly, this has made a really big difference for the routing security landscape and has closed a very significant loophole, and they did so with tonnes of announcements and, you know, it was a smooth ride; there were no issues. A small applause for Nathalie Trenaman and her team.
(Applause)

Now what? The RIPE IRR database has been split into two components: one is under the label RIPE, the other is under the label RIPE-NONAUTH. The RIPE IRR database exclusively contains routing information that was created with explicit consent of the resource holder. On the other hand, the RIPE-NONAUTH database contains data for which we don't know who put it there, for what reason, whether it's still relevant, or whether the resource holder even was aware that that information was put there.

In the NWI-5 project, we purposefully left out of the discussion what to do with this pile of unvalidated data, to increase the chances of success of the NWI proposal.

So now that NWI 5 has been executed we need to come to a consensus on what to do with the pile.

We propose to use RPKI data to clean up the RIPE-NONAUTH database. The RIPE non-authoritative database contains a mixture of elements that are necessary for operations and route objects that were created perhaps up to 15 years ago; meanwhile, the resource has perhaps switched hands multiple times and there no longer is a relation between the creator of the route object and the owner of the resource.

So part of it is garbage, part of it is not garbage, and it's very hard to determine which is which. Now, we do know that with RPKI, ROAs are always created with the consent of the resource holder; the five RIRs have adhered to this model. So when I talk about quality routing data, what I mean with quality is that the owner of the resource was involved in the creation of the routing information. It doesn't need to be correct data, because the owner of the resource is of course free to misconfigure things as they deem necessary. But high-quality data only means that the owner of the resource was involved.

So the proposal is to use the RPKI data to drown out conflicting IRR information. We can use RPKI data for BGP origin validation, and we now propose to use a similar process for IRR validation. In other words, we should treat IRR route objects as if they are BGP announcements, and in this way we can figure out which of the IRR objects are in conflict with the stated intentions of the resource holder.

So let's take an example. 129.250.15.0 /24 is part of an NTT prefix. And somebody registered this /24 for illustrative purposes and NTT has no influence over this route object.

Now, if we look at this /24 and query whois.bgpmon.net, we can see that an RPKI ROA exists, and that the ROA exists exclusively for the /16: it does not allow prefix lengths up to /24 and does not allow the ASN stated in the previous route object. So the IRR object is in conflict with this ROA.

Now, imagine a world where many networks are doing RPKI-based BGP origin validation. The route object as stated describes an announcement that cannot exist, because from an origin validation perspective, that combination of the prefix length and the origin ASN will be marked as RPKI invalid, and the networks that do origin validation would reject such announcements.

So, this route object describes a state of the network that should not exist, that NTT does not want to exist, and we have clearly communicated our intent by publishing an RPKI ROA. However, because this route object exists, anybody generating a prefix-list filter based on AS60068 will end up with a filter where a hole is punched for this specific /24 without the consent of NTT. And in all of this, NTT currently has no mechanism to delete such route objects.

So, a process to get rid of such data would be that a script is written that fetches all the RPKI ROAs. A lookup is done: if a ROA covers part of a route object, we check whether the origin ASN in the ROA and the origin in the route object match, and we go through a simple decision-making process. If the ROA and the route object are not in conflict with each other, don't touch the route object; it doesn't harm anybody. If there is no ROA, don't touch the route object, because without a ROA we cannot know what the intentions of the resource holder are. And if the route object is in conflict with the RPKI ROA, the IRR route object should be deleted, because clearly the resource holder did not consent to what is stated in the RIPE non-authoritative database.

I don't think this requires any extensions to the software. This can be done in a separate script that is run every few minutes.

This is an adaptation of the RFC 6811 pseudo code, to illustrate how the RPKI ROAs and the IRR route objects could interact with each other.
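In the same spirit, the decision process could be sketched in Python (a rough analogue, not the actual pseudo code from the slide; the ROA tuple format and function name are assumptions for illustration):

```python
import ipaddress

# Sketch of the proposed decision process, in the spirit of RFC 6811
# origin validation: a route object is deleted only when some ROA covers
# its prefix and no ROA validates the (prefix, origin) pair; objects with
# no covering ROA are left alone, since the holder's intent is unknown.

def classify_route_object(prefix, origin_asn, roas):
    """roas: iterable of (roa_prefix, max_length, asn) tuples.
    Returns 'keep' or 'delete'."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.subnet_of(roa_net):
            covered = True
            if origin_asn == asn and net.prefixlen <= max_length:
                return "keep"          # not in conflict: don't touch it
    # covered but never validated -> conflict; not covered -> unknown
    return "delete" if covered else "keep"
```

With a ROA for 129.250.0.0/16, maxLength 16, AS2914, the example /24 route object with a foreign origin would classify as "delete", while the /16 with origin AS2914, or any prefix with no covering ROA, stays untouched.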

There are other industry developments where we are looking into doing similar things. NTT is already using RPKI ROAs as if they are IRR route objects; if you click the link in this presentation, you'll find more documentation on how that works. And NTT has funded a from-scratch rewrite of the IRR daemon that will, at some point, have the functionality to suppress IRR route objects, both in NRTM streams, in input into an authoritative database, and in output towards clients generating prefix filters, where we do the same trick: if an IRR object conflicts with a ROA, we pretend the IRR object never existed in the first place.

Tomorrow in the Open Source Working Group, we'll cover more of this in detail. But RIPE would not be the only entity that does this type of scrubbing on unvalidated IRR datasets.

I think there are significant advantages to embracing the concept that RPKI information trumps IRR information. Not only does it help us clean up the past, to scrub the 20 years of IRR object creation that all of us have contributed to, but by creating ROAs, for instance in the context of RADB and NTTCOM, you prevent the creation of rogue IRR route objects, because your ROAs would be blocking such object creation. So we're cleaning up the past, which is beneficial, and we're locking down the future, making it much, much harder to do IRR-based hijacking.

Now, with that, I would like to open the floor for some discussion on this topic and on how we should proceed: questions, comments, concerns. And keep in mind, cleaning up things is fun.

IGNAS BAGDONAS: Open microphone. So Martin and Erik, maybe you want to take the stage and fight back all the audience, and the microphones are open. Discussion.

AUDIENCE SPEAKER: George Michaelson from APNIC. I too would like to thank the staff of the NCC for the work they have done moving through this process. It is greatly appreciated by the APNIC help desk and Hostmasters, who frequently have to mediate conversations with resource holders within the APNIC region who didn't understand the existence of IRR objects in this region. This is a very good step towards a collaborative, mutual and respectful approach to managing the data.

The thing that I like about the specifics of your proposal is this idea that somebody has to be actively engaged in a process that is their authority to commit statements about routing. So to make a ROA, you have to be in the system. You have to be mentally engaged, and you make a positive assertion that is attestable, a cryptographically validatable assertion. If you see one of those, you have incredibly strong evidence that someone with routing authority is saying what they want. And so you have arrived at the statement: well, if that contradicts historic data that might exist from other sources, why do we keep the older data when here is an active, current assertion from someone with authority to make a statement?

I also like that you have said "where it contradicts". So you haven't said we're just going to sweep this stuff away, and that is quite important. My measurement of participation from within the APNIC region suggests that approximately 380 entities across members, non-members, historical non-members, NIR members and sub-account holders still have these objects, so this is quite a large community of people who have engaged in RIPE routing practice and have objects. And the age of these objects is not just archaic. There are huge numbers in the 2010 window, which I think is when a change was made to authority statements, so that all objects acquired a date. But there are actually a lot of objects that are dated 2014, 2016, 2017, and that's not so long ago that it's completely implausible they are still in use. So I really like your approach, because I think it's respectful of the sense that some people may still be using this technology. Thank you.

JOB SNIJDERS: Thank you for your feedback.

AUDIENCE SPEAKER: Randy Bush, IETF meeting NOC. First of all, cleaning up the IRR is an interesting task: good luck. Why don't you also remove conflicts in the RIPE data?

JOB SNIJDERS: That is a good question. The reason that in this specific proposal we limited ourselves to the RIPE non-authoritative database is that in the RIPE database any routing information is created with the explicit consent of the resource holder, and as that stands, it is far less concerning to me if those things conflict with an RPKI ROA than objects that were created without any consent of the resource holder. So yes, there may be things that are incorrect, where there is conflict between RPKI data and RIPE IRR data, but I trust that the RIPE NCC staff eventually will be able to present a cohesive user interface and address inconsistencies by nudging end users away from misconfigurations, rather than a policy proposal being needed to accomplish that.

RANDY BUSH: You don't know that the non-auth data was not created by the owner of the space. That's an unsupportable assertion. And ‑‑

JOB SNIJDERS: I never made that assertion.

RANDY BUSH: You did. You just said that you don't have the authoritative owner in there. You may. You can find examples of both, and I would say that, for instance, you should show leadership here instead of just slides and, in the NTT IRR instance, apply the RPKI data, so you don't have IETF meeting space IRR objects in the NTT database.

JOB SNIJDERS: So, you missed a slide where I highlighted that we are in fact going to apply this same method. That should address that concern. Yes, NTT has some garbage as well, and we intend to clean it up.

AUDIENCE SPEAKER: Warren Kumari, Google. I have got some personal resources that are out of region, but I stuck them in RIPE because I like RIPE and the IRR works. I am fine creating ROAs for them, but will I still be able to edit my IRR objects? And what if I want to make a new IRR object in RIPE for an out-of-region resource that I have a ROA for? That seems like it would still be reasonable: you know it's me, there is a ROA, and I can use the RIPE IRR, which is nice.

JOB SNIJDERS: In general, I would recommend that you use the IRR database associated with the RIR that gave you the space in the first place. RIPE is not chartered to service out-of-region resources to that full extent. So, yeah, move your objects to a better place.

AUDIENCE SPEAKER: Hello, Nick Hilliard. I think, in general, I like the idea here. But there is a concern that may have been mentioned on the mailing list ‑‑ the DB Working Group mailing list, unfortunately: what's happening here is that if somebody in another RIR region creates a ROA, the RIPE NCC is just going to, you know, pull the trigger and blow the non-authoritative object away. This is not a very good idea. As Randy points out, some of the data ‑‑ and in fact, I'd actually suggest quite a lot of the data ‑‑ which is tagged as NONAUTH, is actually accurate. That accuracy will diminish over time.

JOB SNIJDERS: How can it be accurate if it conflicts with a ROA? It's not accurate at that point.

NICK HILLIARD: Because of lots of reasons. If you go in and create a ROA, you can assign it any ASN you want. So you can have different, or the same, announcements coming from multiple regions with different ASNs, and essentially what you are saying here is that you are wiser than the person who put the data into the RIPE database. Now, if it's going to be deleted, okay. But please, if you are going to go down this road, please put in a grace period and please put in a notification system, so that the person whose object is affected by the ROA created on the other side of the world gets a notification that something in Amsterdam is going to be blown away. Because they are not necessarily going to be following RIPE NCC procedures, they are not going to be following RIPE policies, and they may have no idea that what they are doing in their IRR is going to affect route and route6 objects in the RIPE NCC database.

MARTIN LEVY: But this isn't how it works today. You could go and log into RADB or ALTDB and generate an object that is in complete conflict with any other IRR anywhere else or vice versa.

NICK HILLIARD: This is a parallel problem. It's not related to the problem we're talking about.

MARTIN LEVY: That's the point. That we wanted to try and get one thing done and get one thing that at least sits on solid logic.

NICK HILLIARD: Any routing filtering system which depends on IRR data needs to filter on the source tag. Okay. So if you are pulling information from a database, you need to be aware of what the source tag is. Most organisations are going to be, you know, say, pulling from RIPE and accepting everything, or else pulling from RADB and accepting just RIPE and select lists of prefixes, so it doesn't matter to them if you stick something else in, say, ALTDB or the NTT IRR; it's pretty much irrelevant.

JOB SNIJDERS: This is not my experience.

NICK HILLIARD: If they are pulling everything from every database, that's a really dumb thing to do and they shouldn't do it ‑‑ but this is a different discussion.

MARTIN LEVY: Just to take a step back without getting too controversial. The choice of which database somebody uses, let's say at an Internet Exchange or at a large network, may well be very regional or very global. However, the ROA creation, to come back to that, is something that at least carries the authority over the prefix, which is, in theory, the most important thing to control. And therefore, if we go back to the text of the proposal and the key issue here: we're dealing with RIPE-NONAUTH data, not the whole world, and we're dealing with ROAs, which by definition are always created in an authoritative manner, so we're trying to take the intersection of those and deal with that. The conversation that you are bringing in is valid, and I would love to see it happen after this: well, what about everything else? In fact, that was part of what was brought up with Randy's very accurate comment that there is a big mess in the IRR, go clean it up. No disagreement at the microphone about that. But we're trying to segment out a very specific solution here and get that into play.

NICK HILLIARD: I agree with everything you say, Martin. I'm not actually trying to discuss the bigger issue. I am trying to discuss the very specific issue that we are dealing with here: when you create a ROA in another RIR that blows away data in the RIPE NCC ‑‑ sorry, in the RIPE IRR database ‑‑ if we're going to do this, we need to notify the holder of that object. It's not a big issue.

MARTIN LEVY: So notification in either direction.

NICK HILLIARD: And a little bit of grace time. Maybe two weeks or something like that, just so that if there is an issue they can do something about it. Because otherwise, the way the proposal is currently stated, essentially the moment you create your ROA, your IRR route object is going to disappear, and there is no backing out of this, there is no way of reversing the procedure, there is no way of getting the route object back in, there is no way of stopping the process. And it may be that you're going to issue a whole bunch of ROAs and you have accidentally missed one of them or something like that, and then the next thing, it's blown out of the RIPE IRR database.

MARTIN LEVY: It should be pointed out that, independent of whatever the implementation is by anybody, which may be transactional, no end user of any of this routing data has access to a transaction-based rollback environment. So it's ‑‑ thank you. Yes, we'll take it as input. But... at the moment, we don't even have systems that do this today. You delete something, you change something in the IRR or even in a ROA, and the old data is gone.

NICK HILLIARD: These are all different situations, because if you change something in another IRR, you can change it back; if you make a config change, you can roll it back. In this particular situation there is no back-out. It's gone, that's the end of it, and that can break your router.

ERIK BAIS: Nick, this is already reality in BGP; there are networks filtering on incorrect ROAs, or incorrect announcements. So if there is a valid ROA and other stuff is not correct, it will basically just be dropped instantly already.

RANDY BUSH: But there are networks filtering based on IRR and you are going to shoot me because I created the ROA over here and that network is based on ‑‑

MARTIN LEVY: Let's go through the mic. This one we may need to come back to.

RUDIGER VOLK: First, one short remark. Every time you say the objects in the RIPE database are now authorised by the resource holders, that is not really true. The NWI change reduced that authorisation down to exactly the address holders. And kind of, that was something like throwing the baby out with the bath water. But coming back to this: I think the idea of applying the origin validation classification from RPKI data when building prefix lists and when evaluating the RPSL data that is around is a very nice idea, and I'm sorry I never came up with it myself, and I think, yes, that is really useful to be applied. But jumping from that to saying we want to have a policy that fucks around with existing data is jumping a little bit too far. What I really suggest is: postpone the policy part, and let us first work on applying that idea and finding out whether ‑‑ well, okay, maybe Job and the NTT IRR figure out their way of applying this to their database and show us what communication they do in advance of removing stuff. And let us figure out what communication we would need and want for the databases that we are responsible for as a community here, before we apply it and actually move that to a policy that means we are fucking with data that is already there, where we do not have easy access to the people who are responsible and who are relying on it.

And as a last point, when you talk about applying the RPKI classification to RPSL data, you should be aware that the dynamics of what is happening in the RPKI and what is happening in the RPSL databases are just different. Removing, say, a more specific route that someone has in the database because today the RPKI says it is invalid, while tomorrow, for three days, I may want to actually turn it on ‑‑ and I decide that synchronising the RPSL data exactly with this is not reasonable ‑‑ well, okay, take that as an example where, from a technical point of view, you should not be removing RPSL data, because you do not know what is actually behind it and what the consequences of the removal are.
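The origin validation classification discussed throughout this exchange (valid / invalid / unknown, per RFC 6811) can be sketched in a few lines. This is an illustrative simplification; the ROA set and function names are made up for the example, and real validators work from signed RPKI data, not tuples.

```python
import ipaddress

# Sketch of RFC 6811 origin validation applied to an IRR route object:
# classify its (prefix, origin AS) pair against a set of ROAs.
# The proposal under discussion would act on "invalid" objects.

def validate(prefix, origin, roas):
    """roas: iterable of (roa_prefix, max_length, asn) tuples."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # at least one ROA covers this prefix
            if origin == asn and net.prefixlen <= max_length:
                return "valid"
    # Covered but no matching ROA -> invalid; no covering ROA -> unknown.
    return "invalid" if covered else "unknown"

roas = [("192.0.2.0/24", 24, 64500)]
print(validate("192.0.2.0/24", 64500, roas))     # valid
print(validate("192.0.2.0/24", 64501, roas))     # invalid: wrong origin
print(validate("198.51.100.0/24", 64500, roas))  # unknown: no covering ROA
```

Rudiger's caution maps directly onto the third branch of this logic: an object classified "invalid" today may reflect a ROA state the holder intends to change tomorrow, which is why he argues the classification is useful for evaluation but not, by itself, grounds for deletion.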

JOB SNIJDERS: Just put it in the correct database.

IGNAS BAGDONAS: We are over the time of this meeting. From what I hear and see in the discussion, the community stays and doesn't run for the coffee break, so this is important. Would you be willing to spend time in tomorrow's session on continuing this discussion? We cannot really run into the break time.

JOB SNIJDERS: Could we give each speaker, say, 30 seconds, to at least understand the topic?

AUDIENCE SPEAKER: Marco Schmidt, policy officer from the RIPE NCC. I don't have a comment about the proposal itself, but I want to mention something about the PDP. Although I see many policy experts in the room, I just checked that the Routing Working Group had its last policy proposal discussion in July 2010, so that everybody might know. So it's important: of course, what is said here at the mic will be taken into account by the proposers, but it's important that you give your opinion on the mailing list, because only then can it be considered for the PDP. And please do it on the Routing Working Group mailing list.

AUDIENCE SPEAKER: Alexander, Qrator Labs, just a small hint. From what I learned here from several private discussions during this meeting about this proposal, I found out that there is no clear understanding of the quality of the NONAUTH database. And maybe it will be much simpler, instead of deleting, to just suppress this output. You already have control of the mirrors and the NTT or RADB databases. Just suppressing, at the moment, will give you a free way to this policy.

AUDIENCE SPEAKER: Brian Dixon, GoDaddy. Just a quick clarification question first. When you are talking about the ROA that you are referring to in this, is that only ROAs in RIPE?

JOB SNIJDERS: No.

MARTIN LEVY: Global.

AUDIENCE SPEAKER: That's an important detail. So even an outside ROA. Instead of deleting, how about changing ownership and notifying the owner, putting the object under the control of the contact for the ROA?

MARTIN LEVY: This is already only for RIPE‑NONAUTH. So we're already one step down from actual RIPE core data, if I want to use that word. So we already have one foot in the grave as far as these records are concerned.

ERIK BAIS: These records should not be in the RIPE database.

MARTIN LEVY: I am wondering is anybody using them for filtering?

AUDIENCE SPEAKER: Would there be any RIPE ROAs that would conflict with any of this data?

JOB SNIJDERS: No.

AUDIENCE SPEAKER: This is strictly out of region. Thank you.

AUDIENCE SPEAKER: John Curran, ARIN. I am just wondering: when you get an invalid match in your process, you go immediately to delete. Did you consider the same proposal with notification rather than deletion?

JOB SNIJDERS: Who do you notify? The adversary that created the route object without my permission?

AUDIENCE SPEAKER: To the extent that you can do a notification ‑‑ did you consider that as an option? That would be my only question. Because in some cases RIPE probably has the data somewhere in the database for many of them, to try to find a valid contact.

JOB SNIJDERS: No.

MARTIN LEVY: Not as many as you think.

RANDY BUSH: Could we not use dehumanising language. It is not an adversary. It is somebody who registered an object.

JOB SNIJDERS: I'll trade that if you stop referencing racially driven genocide.

RANDY BUSH: That was what I was trying to compare your use to. I am trying to tell you what you are doing; it's Trump.

JOB SNIJDERS: Whatever.

IGNAS BAGDONAS: Thank you. Mics closed. If you think that you would need to continue this discussion tomorrow ‑‑ and, audience in general, do you think this is something that needs to be discussed while we are here face‑to‑face? Would that be more efficient than trying to use a mailing list?

ERIK BAIS: We need to use the mailing list for PDP purposes anyway.

JOB SNIJDERS: From a PDP perspective, the mailing list of course has the most value. Tomorrow a presentation on data quality would be interesting, but I don't know if there is enough time to generate statistics.

RANDY BUSH: Have we learned more doing face‑to‑face?

IGNAS BAGDONAS: That's a question for you, not for me.

AUDIENCE SPEAKER: I will try to bring some data about the quality of the route objects in this database, but I cannot promise it will be ready for tomorrow.

IGNAS BAGDONAS: It seems that there is interest in the community. No decision yet; probably the interested parties will get together and then decide what to do. We have some time available, and some topics can be shuffled a little for tomorrow's session.

RUDIGER VOLK: Let me just remind you: we could actually split off the technical discussion of how to deal with this ‑‑ how to do the classification, how to apply it, and how to do notification ‑‑ since that technology is something that can be applied to any IRR database, and postpone the idea and the discussion of forcing change in a specific existing database.

IGNAS BAGDONAS: I hear what you are saying. So, the interested parties will gather and discuss, and we'll notify the mailing list by the end of today about what the plan is.

Thanks everyone for coming. Thanks everyone for participating.

(Applause)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.