
Open Source Working Group
Thursday 18 October 2018
4 p.m.

ONDREJ FILIP: Hello, good afternoon, we are just going to start. Can you please sit down and finish your conversations.

So, good afternoon. If you came to see the IoT Working Group you are in the wrong room, but please stay here, this is much better, trust me. This is the Open Source Working Group, one of the best here at this meeting, and it's perfect that you all are here. I am Ondrej Filip and here is Martin Winter, and we are both co‑chairs of the Working Group.

MARTIN WINTER: Welcome everyone. So let's start with the agenda. We have a few administrative matters first, which we will go over. Then we have a few interesting talks: first the IRRd version 4 one; Job Snijders had to leave early, so Sasha Romijn will give the talk. Then we go on talking a little bit about using Salt, from Mircea, then a little bit about Snabb, which is interesting; Andy will be talking about that. And we have a few lightning talks after that: mainly an OpenBGPD update — there is a community projects fund that paid for part of that — and then the Routinator 3000, which is only about 1,000 years ahead of its time.

So one thing first: it's the fall meeting again, so we had the elections part. We put out the call for it. If you look, the Working Group Chair selection process and the guidelines are online on the RIPE web page. We made a call for anyone else who wanted to become a chair; there was no nomination. We are both willing to continue, so that's basically unchanged there. The next elections, if you are interested in becoming a chair of this Working Group, will be in a year, and about two weeks before the next fall meeting there will be the call for nominations again.

From the agenda, I am not sure if there are any last minute changes required. I haven't heard anything yet, so I think that should be the final agenda there.

And the previous minutes, they should have been posted on the RIPE website. I'm never really sure where exactly, but you should be able to find them and read them there. I haven't heard any corrections or anything, so I assume they are approved.

ONDREJ FILIP: Before we start, just one technical issue: as we announced, we have somebody taking notes of the meeting. Do we have a Jabber monitor? We have one; thank you very much for helping us, and I think we can start.

MARTIN WINTER: There is one other thing. We have a new slot this time. Before, we usually had the morning slot, 11 to 12:30, next to the Anti‑Abuse Working Group; now we are against the IoT Working Group, and I'm curious what people think about it. If lots of people had a hard decision between going to the IoT one or this one, is that important from the conflict point of view? Can you raise hands if you don't like this overlap? Quite a few. How many had issues when we overlapped with Anti‑Abuse? Nobody. Oh, there was one person. But he is waving...

Okay. We'll look into it and we'll discuss that, so maybe next time it's moving again.

ONDREJ FILIP: I think we can invite the first speaker. Originally there were two speakers, but unfortunately Job Snijders cannot be with us; he apologised in advance and we perfectly understand that, and we are very sorry to hear it, but Sasha will do the presentation.

SASHA ROMIJN: Thank you. So, yeah, unfortunately Job couldn't be here. My name is Sasha, I work for a small company called DashCare, and I am here because NTT contracted us to develop a new version of the Internet Routing Registry daemon, IRRd, which runs rr.ntt.net and a few other things. The current version, IRRd version 3, is an organically grown thing; it's about 20 years old, and 20 years ago we had very different ideas and very different engineering resources. So, it's all C, with a plain text file back end and a custom in‑memory index; it has reliability issues currently, and it is difficult to still maintain. But it is also absolutely critical to NTT's daily work, because all prefix filters are generated every day from this code base. It's also rather big: about 61,000 lines of code.

And the reason to move forward from this is the lack of the possibility for innovation: it is very hard to extend the current version of IRRd to build extensions for things like better RPKI integration. IRRd version 3 is also not very smart. So, yeah, it can store database objects, it can return them, but basically there is a need for smarter middleware in how it processes data. Job also talked about the routing security angle a bit at LACNIC, so if you want to know more you should check out those slides.

And for IRRd 4, we had the goal of having a single modern architecture, an architecture that is designed to provide options for further extensions. A single code base that is well documented, that is consistent in style, that is maintainable, that basically meets the modern software development standards that we have. And that's basically how the contract ended up with us: we were asked to do this because we have a lot of experience doing exactly these kinds of projects, and I happen to have a history with IRR data also.

It has an extensive test suite, because it's important to get things right; otherwise you will misconfigure filters, or you may allow updates that are not supposed to happen. And one of the things we'll be doing very soon is running QA checks, with supporting software for that, against the current version. We are going to launch a whole bunch of queries and see whether we are up to spec with what IRRd currently does.

It's BSD 2‑clause licensed, it's Open Source. It's a complete rewrite: we haven't copied a single part of the original code base, everything is Python 3.6. We use Postgres as a back end. And it's been specifically planned to make it possible to expand it later: to add new query methods, new query languages, different mirroring methods, and basically things we haven't even thought of yet.

We also try to use a lot of third‑party projects; there is a lot more available now. We didn't write our own IP parser: other people have done it, and they probably did it better.

We aim to have a lot of strict requirements on data validation and consistency; we try to keep the data as high quality as possible, and this gives you some interesting challenges.
We have 100% coverage in our unit tests, so basically all the code is continuously tested by unit tests; if you forget something, that is a build failure. We have a continuous integration system. There are a few exceptions for things which are very expensive to test and very low risk, things like that.

And also very extensive documentation: how does it work, but also how do the internals work.

These are the goals we have, and it all sort of fits together. This is a sort of simplified architecture; the docs will have more. There is a database interface in front of Postgres that basically deals with all the SQL: if you have a question like "give me all the objects that are less specific of this prefix", it will translate it to the proper SQL. Functions like journal keeping happen there too, for providing both history and NRTM data. So basically, if you tell this layer "I have a new object and it needs to be put in the database", it will magically make sure that the journal is kept up to date, that the serial is updated, and that the statistics of the database are up to date.
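
To make that concrete, here is a hypothetical, in‑memory Python sketch of that layering idea; IRRd's real handler speaks SQL to Postgres, and the class and method names here are illustrative assumptions, not its actual API:

    class DatabaseHandler:
        """Toy stand-in for the database layer described above."""

        def __init__(self):
            self.objects = {}   # (source, primary key) -> object text
            self.journal = []   # (serial, operation, primary key)
            self.serial = 0     # NRTM serial

        def upsert_rpsl_object(self, source, pk, text):
            # One logical call: store the object, and as a side effect keep
            # the journal and the NRTM serial up to date, as described above.
            self.objects[(source, pk)] = text
            self.serial += 1
            self.journal.append((self.serial, "ADD_OR_UPDATE", pk))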

On top of that sits the WHOIS query parser, with a number of RIPE‑style queries and a number of IRRd‑specific queries. We have importing from NRTM or from flat files, usually both, using an RPSL parser. Mirroring is simpler, because we only have to look at each individual object and insert it in the database; we don't have to check whether it's authentic and so on. That is something we do have to do, for example, with e‑mail updates. Currently e‑mail is the only method of sending updates to your objects, and there we also check authentication and references against other objects: are you allowed to update this object, does creating or deleting this object create an invalid reference to another object? And so we focused a lot on having a single place where things are configured. If you look in the IRRd version 3 code base and ask "which attributes is this object allowed to have?", you will find different opinions in different places of the code base, some of them probably not in use.

One of the things we have: this is how we define RPSL objects in our code, in this case an as‑set; this is trimmed a little. It's basically a list of fields, and they have particular types, and when you define a field of a particular type it gives you the validation that comes with it, for example that the name starts with AS. It defines which lookup fields we have, and which fields reference other objects and of which type. And this is basically the only place where we define what our actual RPSL objects are and what their attributes are; everything else sort of derives from that, so the reference validator will use this object definition to figure out whether a reference is valid.
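
As an illustration of that declarative style, here is a minimal Python sketch of how such a definition could look; the field names and flags are assumptions modelled on the description above, not IRRd's actual source:

    from collections import OrderedDict

    class RPSLAsSet:
        fields = OrderedDict([
            # primary key field; its type enforces the AS- name prefix
            ("as-set",  {"primary_key": True, "lookup_key": True}),
            ("descr",   {"optional": True}),
            # fields that reference other objects, and which classes they may point to
            ("members", {"lookup_key": True, "multiple": True,
                         "referring": ["aut-num", "as-set"]}),
            ("mnt-by",  {"lookup_key": True, "multiple": True,
                         "referring": ["mntner"]}),
        ])

Everything else — the parser, the lookup indexes, the reference validator — can then derive its behaviour from this single table.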

In our RPSL database we store the original object text, but we also do a lot of metadata extraction. Of course the primary key, the source, basic attributes like that; then we make a cleaned version of all the relevant indexed data that it has, so in this case the maintainer or the list of maintainers, any contacts, any members of an as‑set. In cleaning all this data in advance we also, for example, upper‑case it all, because references between objects are case insensitive in RPSL and people use all sorts of case mixes.

And here, for example, for a route object, we have extracted information like the IP addresses, the ASes that are related, and the prefix size, so we can do first‑level less‑specific queries. As for the way we store this: we use Postgres JSONB storage, so our database schema is independent from the RPSL objects we store in it, and for a different object class we have different data. You can index this very efficiently, and once you have this it's easy to run queries like "all objects maintained by a particular maintainer, for a certain IP version".

So that gives us a lot of flexibility. It means you can come up with all kinds of new queries that you hadn't thought of yet when you designed the database.
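
For example, with the metadata in a JSONB column, a query like "all IPv4 objects touched by one maintainer" stays short; a sketch using psycopg2, with assumed table and column names:

    import psycopg2

    conn = psycopg2.connect("dbname=irrd")
    cur = conn.cursor()
    cur.execute(
        """
        SELECT object_class, rpsl_pk
        FROM rpsl_objects
        WHERE parsed_data -> 'mnt-by' ? %s  -- maintainer present in the JSONB list
          AND ip_version = 4
        """,
        ("EXAMPLE-MNT",),
    )
    for object_class, rpsl_pk in cur.fetchall():
        print(object_class, rpsl_pk)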

Some of the challenges we encountered: almost everyone deviates from the RFCs. We have to mirror a lot of different databases, and they all deviate in slightly different ways; I don't think there is a single IRR out there that follows the RFCs. RPSL is a bit of a strange data format: it's kind of key‑value, but not really, because there are also comments and repeated keys and ordering, so it's a bit tricky to find a good storage method that fits. A lot of the databases that we mirror also include invalid objects, sometimes in different degrees of invalid; this has gotten better because of this project. NRTM is not a very well documented protocol. It is also inconsistently implemented, so there are individual variations between every stream. Updates are tricky because people will submit multiple objects, and their validation is interdependent on the other objects in the update that you send. So you might be submitting a person and a maintainer, or you might be submitting a person and deleting a maintainer, and the deletion of the maintainer may be valid except that the person you are trying to submit references the maintainer you are just trying to delete. So you have to sort of cross‑check all these objects against each other and against your database. This is all solved; these are some of the interesting things, but we fixed all of this.

RPSL has a lot of nice features that I encountered that I didn't know about. This kind of object, trimmed, you can also write as this; the RIPE database won't let you do it. The second line is not actually empty, which would be invalid: it has a space, which makes it a non‑empty line and instead a continuation line. There are all kinds of objects that are invalid in different ways. So, like we heard about e‑mail already before, people do these kinds of things a lot. We don't usually care about this, except if it's an authoritative object. For mirrors we don't look at the notify attributes, we don't need to e‑mail them, so this is not really a problem.
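
A small Python sketch of that continuation rule — a line starting with a space, tab or plus sign continues the previous attribute instead of starting a new one:

    def rpsl_attributes(text):
        """Split an RPSL object into (key, value) pairs, honouring
        continuation lines; assumes the first line starts an attribute."""
        attributes = []
        for line in text.splitlines():
            if line.startswith((" ", "\t", "+")):
                key, value = attributes[-1]
                attributes[-1] = (key, value + "\n" + line[1:].strip())
            else:
                key, _, value = line.partition(":")
                attributes.append((key.strip(), value.strip()))
        return attributes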

We also see things like this object — I have anonymised all the objects, because this isn't about making people's databases look bad; we do have all the other public information on GitHub. So this is the kind of case we encounter. Technically we can't parse this object and we can't accept it from an NRTM source, because it has an asdot AS number, so we can't extract a valid primary key, so we can't put it in the database. You might argue maybe we should be flexible, because I know what this object means, and I could modify the parser to accept it even though you are not supposed to do this. What we have done for most of these — and I think we have encountered one or two dozen cases like this, usually with a small number of objects — is that I basically sent Job Snijders to talk to all the IRRs where we found these kinds of things and asked them to fix the objects instead. So this one is fixed now, or the original object is; there are still three asdot objects that are not fixed yet in one particular database. These kinds of objects we also don't allow: we can find a valid primary key, but we can't sufficiently index them. Members is a lookup field to us, we use it in resolving references to other objects, and basically AS1.0 is nothing to us, it is not a valid AS number or as‑set name, so currently we reject this object. We could technically allow it, but then we would have issues like the object being only partially indexed, with some data in there we have silently ignored, and if you ran certain queries you would not see some objects that you were expecting, because we ignored something we didn't understand.

There are only 3 objects in the databases NTT mirrors that have this and I think that will get fixed.

Encodings are a funny one. The RFC says you can only use ASCII text characters; you encounter all kinds of databases that use different encodings. You also encounter interesting problems like this particular bug that I encountered: an object that we couldn't validate because it has an empty line, yet I could look at the object and see that the empty line is not there. Until you try to complain about it to someone and you paste it in your Slack, and an empty line appears — because this e‑mail address in the updated object ends in a Unicode line separator character. So if you parse your text Unicode‑aware, this will be seen as a separate line; when you print it out somewhere, it depends on the implementation of that particular software whether that's a new line. We fixed this by fixing our software, so this is technically not invalid — well, it's an invalid e‑mail address, but this is a mirror, so I don't care. It is kind of weird that you have to, at the right time, use the right kind of interpretation of what a new line is, for example, to parse this.
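
The character in question is U+2028 LINE SEPARATOR, and in Python you can see the two behaviours side by side (the e‑mail address is a made‑up example):

    text = "changed: someone@example.net\u202820181018"
    print(text.split("\n"))   # one element: \u2028 is not a newline here
    print(text.splitlines())  # two elements: Unicode-aware splitting breaks on it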

Some NRTM streams only send half of an object when they are trying to delete it; we currently say that's invalid. I'm not sure why they do it, but it seems pretty prevalent, and we have the information we need there, so we're going to make this a little more flexible so that we can handle it. I don't like it, but we have to be pragmatic.

Some statistics, based on what we currently run: there are about 2.6 million objects that we put in the database; we don't mirror the RIPE inetnums. Updates come in about once every 40 seconds, and they are pretty light, though they do come in bulk, sometimes in large bulks. Average objects are about 400 bytes; the largest is about 3.5 megabytes, a telecommunications company's object which is owned by Job Snijders. Indexing all of this basically means you have about 1.6 gig of Postgres data — there are still overheads — so it's small; you could run this on your phone.

And these kinds of statistics are easy to extract now that we have a normal SQL database, because you can ask questions that you didn't know you were going to have. We have 330,000 queries that we're going to run against two deployments, which both run on the same data, to see whether they give the same answers. Then of course the interesting question will be: which IRRd was correct?

And a nice thing I remembered: version 3, for example, might give you the same AS multiple times, because someone wrote it in lower case and someone else in upper case, and it sees those as different ASes.

So we're about 80% through the project. There are about 4,000 lines of code, so a lot smaller, and 3,000 lines of tests, because we are very thorough, especially in areas like authentication: if somebody submits a PGP‑signed plain text e‑mail update and sneaks in a second bit of text which is not part of the signed text, does the authentication validator correctly reject that text or deny the whole thing? Basically, RPSL parsing and validation, the storing of objects, indexing them, keeping journals, querying them — that's all done, along with mirroring objects from others, so there is a bunch of housekeeping that we still need to do. Providing mirroring services to others is still something we are working on, as is a logo, which unfortunately was not finished in time.

What we're doing now is Phase 1, where we are implementing the existing functionality so that we can replace the deployment. Then for Phase 2, once we have this more stable base, some ideas are: using RPKI to negate conflicting IRR information, like the RIPE‑NONAUTH discussions that have been happening; and integration with the authoritative RIR databases, either suppressing objects, maybe optionally, or giving extra metadata about those objects — but it's kind of icky.

Also, in querying, providing an HTTPS API instead of this e‑mail mechanism. Perhaps new options for an NRTM version 4 — this is trickier because it requires more people to be involved. More extensive query languages; something like GraphQL may be the one. And we're also interested in your own input.

This is on GitHub already, so you can follow along with the development as we continue and with the interesting issues we encounter. I'm hoping we'll have a releasable, deployable version somewhere later this year, maybe early next year.

Thank you very much.

(Applause)

ONDREJ FILIP: Thank you very much. Are there any questions for Sasha?

AUDIENCE SPEAKER: From the RIPE NCC. I have a question from a remote participant, Peter, with no affiliation. The first question is: is there any good Python library to parse IRR data, like route6 objects and other RPSL data?

SASHA ROMIJN: No. There is the metadata extraction that IRRd does, and it is possible that we could extract that into a separate project, but if you just need to parse objects it may do a lot more than you need, because it extracts a lot of information. Maybe we'll release that as a separate project, but currently there is nothing out there that actually works.

AUDIENCE SPEAKER: And the second question, also from Peter is, will you publish SQL dumps?

SASHA ROMIJN: That is more an NTT question than a question for me, I'm not operationally involved. I suppose we could, but I'm not sure who would use them right now. Because if you import a dump from a database, you would also want to keep it up to date, and importing some SQL and then doing NRTM feels not very clean to me.

AUDIENCE SPEAKER: Hello, this is Stavros from AMS‑IX. First of all, congratulations on your good work, it looks promising. I had one question, but I think it was partly answered already: it is about the RPSL parser that you have in your design. Is it a full parser that can parse all the syntax and the strange things that you can find in a policy?

SASHA ROMIJN: No, it doesn't parse all the policy at this time. One of the things still on the table is doing more extensive validation on that. I don't know how much data we'll extract; it depends on where we're going with Phase 2, how much we want to be able to query that data. Currently that's just data we take in, to be able to send the text back.

AUDIENCE SPEAKER: Okay. But one request: if you can finish that part, put it in a separate project, because the community needs a modern parser. I tried to write one myself three years back; it's very tough. And if you want to do it in Python, where everyone can adopt it and use it easily, it's quite some work. So that's one request. The second request: you said you accept features that you can implement. Could an AS‑SET resolver be a feature that you could integrate? To resolve the AS‑SETs and keep them in PostgreSQL, for example, so I don't need to use a BGP tool or any other tool to do it.

SASHA ROMIJN: You mean as in figuring out what the members are of ‑‑

AUDIENCE SPEAKER: Yes, an AS‑SET with members. It's a really really hard ‑‑

SASHA ROMIJN: We already have this. This is also a feature of the current IRRd: it's just a query you can do, all members of an AS‑SET, optionally recursively resolved, and we do that on the fly. We don't pre‑index that, because it's pretty fast.
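
That on‑the‑fly resolution is essentially a recursive walk with loop protection; a Python sketch, with the lookup_members callback standing in for the database query:

    def resolve_as_set(name, lookup_members, seen=None):
        """Return all AS numbers in an as-set, following nested sets."""
        seen = seen if seen is not None else {name}
        asns = set()
        for member in lookup_members(name):   # e.g. ["AS64496", "AS-CUSTOMERS"]
            if member.upper().startswith("AS-"):
                if member not in seen:        # guard against circular sets
                    seen.add(member)
                    asns |= resolve_as_set(member, lookup_members, seen)
            else:
                asns.add(member)
        return asns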

AUDIENCE SPEAKER: Okay. Thanks.

ONDREJ FILIP: Thank you very much again.

(Applause)

And the next speaker is going to talk about his experience with Salt.

MIRCEA ULINIC: Hi. I am from Cloudflare, and I am happy to be speaking to you, to walk you through the last three years of automating networks using Salt.

We started this project about three years ago, just because we couldn't manage our network manually: it was continuously growing, and simply doing all of it manually didn't make sense.

Now, let's make a step back and look at what automation actually means.

I found the following definitions, which I personally find very self‑explanatory. Automation means the technique, method or system of operating or controlling a process by highly automatic means, or by electronic devices, reducing human intervention to a minimum.

Another one is the technique of making an apparatus, a process or a system operate automatically. Where automatically means having a self acting or self regulating mechanism.

At the same time, in the Internet community — and in general as well — automation is viewed as, is misunderstood as, just configuration management. Or, in simple terms, in our world it means: from a template, or whatever other kind of template, generate some config, basically just a blob of text, load it onto the device, and that's pretty much it. But what about the hundreds of other tasks you are doing? What about when, after you generate the config, you copy‑paste it from the generator into your CLI, or the same boring e‑mail you send over and over again to providers? Or the many notifications you need to act on manually, or the route changes you learn about only after minutes or hours, when the incident has already happened? The list of things going on in the background that you probably don't act on immediately can be nearly infinite.

All of this goes against the definitions of automation I mentioned. All of these things are done, most of them, manually; this is the total opposite of automation. But the good news is that all of this can be automated, and this was our goal from the very beginning.

We looked at what we had in the Open Source area, what frameworks were available then. Back then, we could choose between Ansible, Chef or Puppet, but we found the limitation that they were neither event driven nor data driven.

We looked at Salt, which had all these things and many others — the configuration management, the reactors, and all of this — but back then it didn't have any features at all for network automation. And we said: let's do it.

Another challenge was the architecture of Salt. Typically it has a master that controls minions, a minion being a process on the box which you target to manage.

But in our world there is the unfortunate situation that we can't do whatever we want on the network devices: we can't install custom software, and most of the time we just configure and pray that it's going to work.

This is why the proxy minion was born. It's a process that only needs to connect to the master and to the remote network device over whatever channel it can, be it HTTP, SSH or NETCONF.

The next step was identifying the challenges we are facing: we know there are differences in the configuration model, but also on the operational side there are big differences between operating a Cisco or a Juniper device. And we started looking into a library that was available back then, named NAPALM, which abstracts away all these things for you.

We integrated NAPALM into Salt, and it has been available since November 2016, integrated into the core; so since November 2016, when you install Salt you already have access to all this code. For example, it abstracts away how you retrieve the ARP table and many other things, and so on — a pretty long list of features. So no matter whether you are running against a Juniper or a Cisco, the output is presented in the exact same format.

The same goes for the configuration management. In the same way, you don't need to worry about what is happening behind the scenes; everything is abstracted away, and it loads the configuration you want to manage. You only say "I want to configure NTP" — you don't care what happens behind.
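
A minimal example of what that abstraction looks like with NAPALM directly; the hostname and credentials are placeholders, and the same getters work for other vendors by swapping the driver name:

    from napalm import get_network_driver

    driver = get_network_driver("junos")   # or "eos", "ios", "nxos", ...
    device = driver("router1.example.net", "admin", "secret")
    device.open()

    print(device.get_arp_table())          # same structure on every vendor

    # Declarative config: merge a candidate, inspect the diff, then commit.
    device.load_merge_candidate(config="set system ntp server 192.0.2.1")
    print(device.compare_config())
    device.commit_config()
    device.close()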

The next step was the gate opener for event‑driven automation on the network side as well. In July 2017, we had a release where we added a long list of features, one of them, specifically for event‑driven automation, being importing syslog messages in an open, vendor‑agnostic way.

This allowed us to implement a long list of nice features, for example auto‑increasing the prefix limits on our BGP neighbours. From time to time, some networks decide to announce more prefixes, and then of course, if a prefix limit is set and they are breaching it, the session is going to go down. For this reason, behind the scenes, in the background, our system automatically increases this limit and the session comes back up.
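
The logic behind that, sketched in Python; the message type and field names are modelled on napalm‑logs' structured output, and the two helper functions are hypothetical stand‑ins for what the reactor would actually do:

    def on_syslog_event(event, get_prefix_limit, set_prefix_limit):
        """React to a structured syslog event about a BGP neighbour
        breaching its prefix-limit threshold."""
        if event.get("error") != "BGP_PREFIX_THRESH_EXCEEDED":
            return
        peer = event["host"]                        # which router raised it
        current = get_prefix_limit(peer)
        set_prefix_limit(peer, int(current * 1.1))  # bump by 10%, an assumed policy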

However, we can't always have this sort of automation. For example, if the MD5 password is incorrect, then a ticket is automatically opened for us and an engineer should look into it.

Another one that brings a lot of joy on our side is when we collect some metrics about our transit providers. We don't own our own backbone; we rely on some carriers, and we know how good they are. From time to time, when our monitoring systems detect huge packet loss on their side, we start collecting some MTRs, compile an e‑mail, and we say: look, guys, this is how much you failed.

The next step, in just a few weeks, in November 2018, is going to be the next Salt major release, Fluorine, which is going to bring new features such as commit‑at or commit‑in — scheduling a commit at a specific time or after a delay — not only for Junos, but for any other platform that is managed through NAPALM and Salt. As well as other nice features, such as replacing configuration, or backing up configuration at specific intervals, or integrations with some existing features from Juniper, from Arista, from Cisco Nexus and so forth.

Besides that, there are integrations with well‑known Open Source software such as ciscoconfparse, which allows you to look into your configuration, and another integration with NetBox, if you know about it — the IPAM and data centre infrastructure management software open sourced by DigitalOcean. This brings all the integrations you probably need, or at least the basic ones, to add in your addresses, and also of course to gather what we have on the NAPALM side.

And it's not only about NAPALM; there is also a list of integrations with other Open Source tools, such as Netmiko, the Arista and Cisco Nexus ones, and the others I have already spoken about.

Also with the PeeringDB database, which allows us for example to create a different set of automatic e‑mails, such as this one, when some BGP peers are breaching their prefix limits. I mentioned that we automatically bump the limit on our side. However, if they keep announcing even more prefixes, or if they announce more than the maximum we are willing to increase to automatically, it's probably time to e‑mail these guys and ask what's going on.

And we also check what they have configured in their PeeringDB record.

And we compile another e‑mail. We put in all the locations where they are breaching the limits on the sessions they have with us, and we wait for the reply.
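
Querying PeeringDB for what a network says it will announce is a single call to its public API; a small sketch, with a placeholder ASN:

    import requests

    def peeringdb_max_prefixes(asn):
        """Return the (IPv4, IPv6) prefix counts a network lists in PeeringDB."""
        resp = requests.get("https://www.peeringdb.com/api/net",
                            params={"asn": asn})
        resp.raise_for_status()
        net = resp.json()["data"][0]
        return net["info_prefixes4"], net["info_prefixes6"]

    print(peeringdb_max_prefixes(64496))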

Today, there are a couple of big players already using Salt, not only us: there is also DigitalOcean, Bloomberg; Comcast is also looking into it, and so on.

Besides all I have mentioned, everything is Open Source and available on GitHub. You can build what I have shown, if it makes sense to you, but these are only examples. You can build whatever makes sense for your business; the list of things and features you can implement is nearly infinite. I only brought some examples that make sense for us.

Everything is documented as well. And there is also this little book; I still have some copies with me if you want one. Otherwise it's available for free on the Internet — you can download it as a PDF — and I hope it helps you to implement some nice things.

Everything is available here on GitHub. Nothing is owned by us, we are only contributors. It is under SaltStack, who maintain the Salt project, and under napalm-automation, which is a community completely separate from any company.

You can also get help or ask questions on Slack; there are two places where you can probably ask questions: the networktocode one, and the other one, which is supported by SaltStack, the company behind Salt. And do automate all the things, because, you know, we always hear the statement that "it's always the network". However, I believe that it's always the manually operated network — that is always going to be the case.

For now, if you have some questions...

ONDREJ FILIP: We are perfectly on time. So please don't hesitate to ask some questions. There are some people approaching the mic.

AUDIENCE SPEAKER: Sergey from Selectel. How do you deal, from the point of view of interruptibility of services, with, for example, reconfiguring an interface from access to trunk or vice versa? There can be a service interruption during this update on the real network hardware.

MIRCEA ULINIC: I don't really understand the question. Even if you do this from the CLI manually, you have the same issue. There is no magic.

AUDIENCE SPEAKER: I can do it very fast, so that no visible service interruption can be seen. But from the automation side, it would be done by moving to trunk, committing a configuration, and so on. How can we deal with this sort of interruption? How can we be sure that the service interruption will take as little time as possible?

MIRCEA ULINIC: I never heard of anything like this. What you can do from the CLI you can also do automated; at the minimum, you can automate exactly what you would do from the CLI. You also have the APIs, which should deal with all these things behind the scenes. For example, you upload the config and then — on Junos at least — you don't immediately see the changes reflected, only after you commit. Or on Cisco, if you are asking about Cisco: Cisco IOS doesn't have any API at all, so it is basically emulating what you do on the CLI. If you don't see any downtime from the CLI, you shouldn't see it from automation either.

AUDIENCE SPEAKER: Blake. Thank you for this work, it's very useful. I noticed that you have added the napalm‑logs module, which is pretty cool. At some point — and maybe this is better for the hallway track — there is a line between observability with regards to streaming telemetry and metrics and so forth, versus logs, which sort of fall more into the case of SIEM‑style monitoring and stuff. Do you have any clear idea of where that line might be drawn, like how far you want to take the observability side of Salt and napalm‑logs and so forth, versus the streaming telemetry that I get off the packet forwarding engine in a router?

MIRCEA ULINIC: That's a good question. There is certainly an overlap between napalm‑logs and streaming telemetry. Ideally, napalm‑logs shouldn't exist at all, but the reality is that streaming telemetry is still not ready. I tested it a few months ago: I was looking for streaming on Junos 16, and only on Junos 17 might there be something ready, and who knows — Junos 18, 19, 20, 21 — hopefully there will be something in a few years. And even so, I believe there are some specific messages from napalm‑logs that won't be completely replaced by streaming telemetry. However, others for sure will be.

ONDREJ FILIP: So there are no other questions. So thank you very much for your presentation. Thank you.

(Applause)

And the next presentation will be from Andy, about the Snabb toolkit.

ANDY WINGO: Hi. Thanks for having me. It's a pleasure to be here with you all. This is a talk about Open Source software and data planes specifically. I know we have seen a lot of discussion at this conference about Open Source, especially as it relates to configuration, but I'm going to focus in this talk a bit more on processing packets, and let's have a bit of fun.

I'd like to start my talks by taking the feel of the room. I guess everybody here is using Open Source, of course. How many of you have network functions running in your network written in software? We have a few. That's great. So, you all are processing packets in software. I guess the Linux kernel is processing the packets, yes? User space data planes, anybody? We have a few. This is going to be a great time.

Okay. So, this talk is about Snabb. We'll go a bit into why it is that Snabb was made and what the parts inside it are — I feel like it's important to discuss the composition of these things so we can really understand them. Then we'll go on from that and say: what are the pieces, what can we combine together, and what can we do with it? There is going to be a little bit of code, so if that's not your thing, the exit is right there.

Right. Have you ever had the issue that you have heard about something — maybe it's an Internet draft, maybe it's in a fresh RFC, it's some sort of architecture you'd like to deploy — but you call up your vendor and you are not really getting any engagement there, right? They are not able to sell it to you; there is no price at which it's affordable to you. And then what do you do? Ten years ago or so, there is not much you could do, because you just couldn't recreate in software what you could do in hardware. But these days, commodity servers and commodity NICs are fast enough to do a lot of useful tasks in networks.

Specifically, if we take a commodity server and put Open Source software on top of it, we can look at getting anyone to make us this software, or even writing it ourselves. So that's how Snabb came to be. It was an effort to distil this pattern into fresh, usable software, so that even someone whose main job is not software development could go and implement new RFCs and put them to use on their networks.

When I talk about network functions, I just mean something that's on the network. It's an abstraction of what could be something you would slot into a rack in the past, but now you can do it using software. And I talk about user space network functions, user space data planes, to mean that the Linux kernel is not involved in processing packets. What this gives us essentially is speed — speed and customisability. It's hard to get the Linux kernel to change what it does, and it's hard to get really high throughput out of packet processing in the Linux kernel. There are a few examples of technologies in this space: this talk is about Snabb, but most of you will also have heard about DPDK, the data plane development kit, and VPP, which is part of FD.io — which is pronounced "Fido", apparently. We're going to go a bit more into how Snabb works, but the fundamentals apply to these other technologies as well.

So, how this works from the operating system's perspective. If you log into the server and run ifconfig or whatever command will show you the interfaces, you would see a Linux device associated with your PCI NIC, your network card. But for a user space data plane, you tell Linux to forget about the device: you disassociate the kernel driver from that piece of hardware, and after that you map the PCI memory range for that device into the address space of the software. At that point, you have memory in your address space which is directly mapped to the registers on the NIC. So you read the data sheet for the NIC to figure out what kinds of memory reads and writes — what register peeks and pokes — it takes to bring up that NIC. You effectively write a driver in user space.

And Snabb is, among other things, a selection of user space drivers for network cards. At that point you have your NIC up. You configure it to have a ring buffer where packets are going to be coming in, and a transmit ring buffer: you have a receive buffer and a transmit buffer. From there you just program: you get the packets off the receive buffer and do whatever you want with them, and if you send them on, you put packets freshly on the transmit buffer. That's the essence of what a user space data plane is.
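
As a concrete illustration of "mapping the PCI memory range into your address space", here is a rough Python sketch of the Linux mechanism involved; the PCI address is a placeholder, it needs root and a device unbound from its kernel driver, and a real driver would follow the NIC's data sheet for what each offset means:

    import mmap
    import struct

    BAR0 = "/sys/bus/pci/devices/0000:01:00.0/resource0"  # device's first BAR

    with open(BAR0, "r+b") as f:
        regs = mmap.mmap(f.fileno(), 4096)              # map one page of registers
        value = struct.unpack_from("<I", regs, 0x0)[0]  # a 32-bit register read
        print(f"register 0x0 = {value:#010x}")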

The advantage of this approach is that you get the whole packet in software, so you can do anything with it. There is no question of what is a fast path and what is a slow path — where the fast path is implemented by hardware, and if I stray off what this particular hardware and firmware can do, then maybe I'm going to be slower and start dropping packets. You have it all in the CPU. You can use whatever technology you want to program it: you can program in Rust, you can program in C, you can program in Lua, which is what we do in Snabb. All of Snabb is in Lua. It's a language that makes for small programs, and we use the LuaJIT implementation of Lua, which results in fast programs as well.

And finally, you don't have to wait on anybody to provide the functionality that you need. If you need to implement a new extension, a new version of a draft, a new RFC, you can do it yourself, or you can hire one of many people that can work on this software. It offers a lot of freedom, and if in the end you produce Open Source, as we do, then you have something you can use without any licence fees, forever.

There are some limits to this approach. I feel like I should be honest and mention them.

One CPU core in a modern server can process, oh, depending on your workload, maybe up to 10 gigabits or so; on some other workloads, maybe 5 gigabits. It depends on the workload, and you need to keep these things in mind when you are sizing solutions to problems. But you can parallelise with multiple NICs: usually on one NIC you have two ports, so already there you usually have two processes associated with it, and you can devote multiple processes to one particular NIC. In servers these days you probably have 22, 32, 48 cores, so you can scale out fairly well. It's not the same as having 20 100‑gig ports in a server — you probably won't reach that bandwidth — but 20 10‑gig ports, certainly, and 40 10‑gig ports probably also. So that's the kind of size of solutions we can look at.

And I should also mention that when you talk to people in industry, this work is often mentioned in the same breath as OpenStack or Kubernetes or containerisation, because people want you to run these in fabrics which link together with complicated configuration. For me, I don't understand the need for the complexity in these problems; I can't imagine deploying that in anything I have responsibility for. And when I see people running software like this, they run it directly, without any virtualisation or anything else — it's just a program they run on their server. I advise you to think of it like this, and if you need to take the next step to the flexibility which Kubernetes might give you, definitely take a look at it then, but it's not something that you have to buy into at the beginning.

Okay. So, about Snabb. In the Snabb project, we try to write what we call rewritable software — meaning software that, if you looked at it, you could say: is that all? I could do that in a weekend. And the hard part is obviously not typing it out, but searching the space of programs, good and bad, to find the small programs that suit the use case.

So I am going to show a bit of code, a bit of the concepts in Snabb. These concepts more or less correspond to what you would see in VPP, for example.

A Snabb program, a network function, consists of apps. These apps are linked together in a directed graph, and those directional connections are called links. The Snabb program itself then processes packets in units of breaths. I will go into each of these terms.

I am going to start with a simple program; the code is on the next slide, but just to prepare you for it: it starts where we instantiate a set of apps, then we follow by declaring the links between them, and we finish with a simple busy loop that runs the breaths.

So this is the Lua code corresponding to a simple packet filter. We instantiate an app for the Intel interface — in this case we are using the standard 82599 card. We start by importing a couple of modules: the module for the Intel app and the module for a filter that takes tcpdump expressions as input; we have a compiler for that language into very efficient Lua that you can then include in your graph. We begin by making a graph, which in this slide is called config, because we have many things called config, unfortunately, anyway...

Then we follow on by actually instantiating the apps: we instantiate the NIC app and the filter app. We link the transmit side of the NIC to the input of the filter, and then the output of the filter back to the NIC's receive side, which to the world is the transmit direction.

We then hand the engine the graph, and then we have a busy loop. And you could paste this into a Lua file, run Snabb on that file, and it would run, right now.

As you can see, we specify the NIC by PCI address, not by eth0 or what have you, because this really is getting the kernel out of the way. And if you ran strace on this process — which shows you a log of the system calls a process makes — you would see nothing. It doesn't talk to the kernel at all. And that's part of how it keeps latency low, how it doesn't drop packets and gets good throughput.

So, I mentioned that Snabb is written in Lua. LuaJIT is kind of our secret sauce here; it's Lua all the way down. It's not like Lua is the configuration language with a C library underneath — everything is Lua from top to bottom. There are some small bits of assembler, and even those are generated from Lua.

So, about breaths. A breath is a couple of phases — let's go a little bit deeper into these components. We start by pulling some packets into the software system, so you pull them off the receive ring buffer of the NIC, and then you process those packets, push them through the graph. To inhale the packets, you run the pull functions on the apps that have them; and to process the packets, you run the push functions on the apps in the graph that have them, according to the links that go between them.

This is just an example — this is not something that you'd have to write yourself — but this is the pull function for the NIC. As you can see, it's just a loop that says: while there are packets available for me to pull, pull them off and transmit them on my output link.

Similarly, this is the push function for the filter app. I pull packets off the input link of the app and pass each one to the accept predicate — because the filter, written in the tcpdump language, gets compiled to a function that returns a boolean — so when I run that function on the packet it returns true or false. If it returns true I pass the packet on; if it returns false I free it.

So, just one more piece of code and then we can come up for breath: this is what a packet is and what a link is. I know many of you have probably done some programming in the Linux kernel networking stack, with sk_buffs — a bunch of linked packet descriptors, a bunch of metadata. That's not what we have here. We literally have just a length marker and the bytes. Similarly, the link is just a ring buffer. So we have really condensed things down to a mechanism where we can try to express network functions in the smallest amount of code possible. And this lets you be more agile: change things, write new things, mix up the system.
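
Snabb itself is Lua, but the concepts are small enough to sketch in a few lines of Python: a packet is a length plus the bytes, a link is a ring buffer, and a push function moves packets between links. The names here are illustrative, not Snabb's actual API:

    from collections import deque

    class Packet:
        def __init__(self, data):
            self.length = len(data)   # just a length marker and the bytes
            self.data = data

    class Link:
        def __init__(self):
            self.ring = deque()       # stand-in for a fixed-size ring buffer

    class FilterApp:
        def __init__(self, accept, input_link, output_link):
            self.accept = accept      # predicate compiled from a filter expression
            self.input, self.output = input_link, output_link

        def push(self):
            while self.input.ring:
                packet = self.input.ring.popleft()
                if self.accept(packet):
                    self.output.ring.append(packet)
                # else: drop it (Snabb would free the packet here)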

So at this point you know all of the basic concepts of Snabb, and you could rewrite it — I definitely suggest that you do so; rewrite it in Rust or C or C++ or Scheme or any of these languages. But I'd like to finish off this talk by discussing a few things that we have built in Snabb and that you can use today.

To check out Snabb you just clone the Git repository, and it builds in about a minute. In the early days of Snabb we had a code budget, and the whole thing had to build in a minute. We also had a budget of 10,000 lines, which was very interesting — it forced us to try to make smaller and smaller solutions in code — but now that we're seeing production use cases, we have gotten a bit more horizontal expansion there.

To open up the box here: there are a bunch of included apps, first of all to get packets in and out of your system. We have drivers written in Lua for a few cards. Notably, the really common cards would be these Intel 82599 cards, along with their siblings, the I210 and I350. The Mellanox drivers are nice as well; those cards go up to 100 gigabits, but they are not well sized to the PCI bus bandwidth, so it's more common to run them below that. In addition, you can talk to the kernel: you can take packets in, and send packets you don't care about or don't feel like handling back to the kernel. You can interface with virtual machines. And of course you can pull packets in from PCAP files and save them to PCAP files. We have a number of other components fleshed out to implement your standard L2 and L3 connectivity bits. Above that we have apps — these are reusable nodes in a packet processing graph — for IPFIX export, for lightweight 4over6 functionality (an IPv6 transition mechanism), for deep packet inspection, filtering, firewalling, IPsec, and all kinds of things.

So, there is a set — in this slide we have a link to the documentation. Something we have fleshed out recently is a uniform way to configure Snabb applications using YANG, and in this case, literally, the graph of apps is a function of a configuration in terms of a YANG model. Literally a function: you write a function that translates the YANG configuration to the graph of apps, and Snabb handles all the rest. Querying the state of the program — counters, state and also its configuration. A multiprocess model where you can devote many workers to one NIC. Historical statistics aggregation, including a kind of black‑box functionality where statistics are recorded into RRD files and you can go back over the last couple of hours and see where things went wrong — is it your fault or somebody else's fault? All this sort of thing, so I advise you to take a look at that documentation.

And additionally, besides apps, there is a number of library‑like functionality which you can import but which doesn't participate in the packet graph as such. So there is prefix matching, really fast hash tables that support parallel lookups, and a number of different compilers and assembly emitters.

I should mention that we don't have a full router implementation yet, and this is in contrast to VPP, for example. If you run a network function based on VPP, it includes router functionality as a kind of base graph of apps, in the ways that you expect. For various reasons we don't have this in Snabb; it's something we're working on, and we'd love to spend some more time on it.

For more examples of uses: I gave a lightning talk a couple of days ago, eight ways engineers use Snabb — you can check that out.

First of all, Snabb is fantastic for prototyping. So if you see a lot of packet flow and you would like to use something like Scapy to analyse it or to generate data, but Scapy can't deal with the packet rate, Snabb is the thing you need to be using. And I know a very large CDN that uses it for exactly this purpose.

I'd like to mention the great experience of an engineer at SWITCH, the Swiss academic ISP, that wanted a VPN technology that was still in development at that point. He just built it himself, and he is deploying it and he runs it and it's great.

Similarly we have a new IPsec VPN which has got some interesting capabilities. The author is here if you want to talk about that.

And then the largest deployment that we have is an IPv6 transition technology. There was a talk on this at RIPE 76 by an OTE engineer; this is the bit that I have been most focused on myself. Interestingly, being in software, it doesn't have very many limits on the scaling of the size of the binding table, which is the set of customers that you can support. We have tested it out with up to 40 million entries, which is pretty large.

So, do check this out. We have the GitHub page. There is also a Slack channel, and it's non‑denominational, so you can come if you are using MoonGen or DPDK or things like that. There is a join link at the bottom of the page, and my e‑mail and Twitter are right there. So happy hacking with Snabb. Thank you.

(Applause)

MARTIN WINTER: Thank you Andy. Questions?

AUDIENCE SPEAKER: Hi. Bengt Görden, Resilans, Sweden. Thank you for the presentation, it's interesting. How do you cope with the Intel cards with the queues — is all the queue handling fully implemented?

ANDY WINGO: That is a good question. We try not to depend too much on firmware features. We have implemented VMDq filtering based on VLAN and MAC address, and we additionally support the RSS features of the card. So I think that covers the standard use cases there.

AUDIENCE SPEAKER: I was specifically thinking of DDoS attacks: do you separate the control plane and the data plane from the multi‑queue aspect?

ANDY WINGO: We do not currently. For better or for worse. But I'd like to hear about your experiences in this regard.

AUDIENCE SPEAKER: We wrote a paper about ten years ago, about going towards 10‑gigabit routing, and we had to patch the kernel drivers for the Intel cards to actually do that, so we could reach the card during a DDoS attack. That was quite interesting.

ANDY WINGO: That does sound interesting. The general approach that we would like is to scale to handle small packets at high rates; I know that some systems have more of an overhead for control packets than others, and our goal is to reach a throughput where we don't have to special‑case control traffic. That might not be a complete solution, but that's where we're at right now.

MARTIN WINTER: Okay. No more questions. Okay. Thank you very much.

(Applause)

So before we get to the lightning talks we have a quick announcement from Peter van Dijk, is he around?

SPEAKER: Hi, I am Peter van Dijk, I work for PowerDNS. So, some of you might know that in February there is FOSDEM in Brussels, the big Open Source conference. We're running a DNS dev room there; the dev room will be on Sunday, February 3rd. We will send you the CFP on the mailing list this week, and we invite you all to come and talk about cool things you did with DNS Open Source and just have a good time there. That's it. Thank you.

MARTIN WINTER: So we come to the lightning talk. The first up is Peter Hessler.

PETER HESSLER: Thank you. My name is Peter Hessler, I am a developer with OpenBSD and OpenBGPD, and I'll give you a little update of what's happened. So, October 18, today, is the 23rd anniversary of OpenBSD, and coincidentally, at the very beginning of this session, we just published our 6.4 release.
(Applause)

So, OpenBSD, as you may know, is a general purpose operating system, and we have done a lot of things in this release. Major improvements on the arm64 and armv7 platforms. Many more improvements for modern laptops — most of the ones that you are running in this room. We have our own virtualisation hypervisor, and there have been a lot of improvements there for guest operating systems. We have also added a lot of security improvements: defences against ROP attacks and misbehaving applications. I believe in our arm64 libc there are now zero ROP gadgets available for attackers to use.

We are also the upstream project for quite a few pieces of software you may use — OpenSSH, tmux, LibreSSL — and they all have a large amount of new features.
The portable releases should be out in the near future and available on every other OS, hopefully soon.

On the network side, we have done quite a few new things. We have added join support in the Wi‑Fi stack; we now have administrative knobs for LACP; we have an Ethernet‑over‑IP implementation that is compatible with MikroTik. We have done a lot more work on making an SMP‑safe network stack — some sys calls are now fully unlocked; you can see the list on the slides. And for those of you who remember the beginning of IPv4, before DNS was a thing, there used to be the networks file format. We have finally removed support for that.

Sometimes improvements are deletions.

We have a number of network daemons that are built into OpenBSD. The OSPFv3 routing daemon now has routing domain support, which may be commonly understood as VRF. Our SLAAC daemon, which does SLAAC for the interfaces, is now fully pledged: if an attacker can break in and tries to make the programme do something it didn't promise to do, the kernel will kill the programme. slaacd also behaves a lot better on networks now — address detection, network roaming, etc.
We used to have the old rtadvd implementation from KAME, which was for router advertisements; we have removed that and replaced it with a new daemon which is understandable by people and can be used in the wild — it's called rad. And then OpenBGPD is also part of the project, and we have raised some money for it, thanks to the RIPE Community Projects Fund. Money also came in from quite a few other sponsors: DE‑CIX, Netnod, AMS‑IX, BCIX, LONAP, Asteroid, Namex, University of Oslo — I hope I got everyone, it was kind of a long list. I hope it gets longer.

So we raised some money, and then we spent some of that money. Claudio Jeker, who was one of the original developers of OpenBGPD, has quit his regular job and is now working full‑time on OpenBGPD. We have raised one year of funding, and we are hoping to get to two years; five months of that has now been spent, and everything that I'll be showing you has been paid for by you, the community, and has been the result of Claudio's and other people's very hard work.

The first thing that we did was the RFC 8212 default, which is a default deny policy. As part of that we removed the previous default policy, which was "announce self" — only announce prefixes that belong to my own AS. We are moving all of that into filter rules, so you need to actually positively filter instead of just assuming that the defaults are safe for you.

We have added RPKI ROA support to OpenBGPD. It is currently a static table, so you get your dump from a validator; we don't yet have RTR support. We also have prefix sets, AS sets, and origin sets, which are a combination of prefix and source AS. This allows you to place many of your filter rules within an extremely fast lookup, and it also lets you reference them in multiple places in your configs. We have also done background soft reconfig, which removes the blocking processing of updates: while you are reconfiguring your BGP router and it is processing and handling all the new filters and applying them to the RIB and so on, withdrawals and updates are still being processed with the old rule set, and then it will atomically switch over when it's ready.

There have been 154 commits since 6.3.

YYCIX, which is an Internet Exchange in Calgary in Canada, is using this in production. They have 46 members. In 6.3, there were 370,000 filter rules; in 6.4 it's fewer than 6,000. And that is for full IRR and full RPKI filtering for all their members. They have some small ones and some large ones; Hurricane Electric is there, and as you all know they have a very large IRR database.

So, all of that is what was committed in OpenBSD 6.4 and is available right now. For the future, in 6.5, the next release, which should be out by May 1 next year, we want to do better community filtering. Currently you are limited to matching on only one community in a rule; we want to be able to match more than one community so you can do more complex things. Adding and removing communities in the filters could also be a lot faster, and we want to improve that. We want to do general filter improvements. We want to refactor the RIB, and make the multi-RIB situation a lot faster for you. We also want to resurrect the portable version of OpenBGPD, so you can run it on non-OpenBSD systems.
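For context, matching on a community today looks roughly like the following one-rule sketch (the community value and localpref are invented for illustration); the planned work is about allowing several such community filters in a single rule:

```
# Deprioritise routes tagged with a (made-up) "backup path" community.
match from any community 64496:666 set localpref 50
```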

Future work beyond the 6.5 release: we would like to do multithreaded RIB support, to allow much faster processing of the RIB and updates, so you can start using more than just a single core.

We want to add FIB support for other operating systems in the portable release, so on Linux, for example, you would be able to add and remove routes in the kernel table.

We also are interested in new features. The usual suspects, ADD PATH, multipath, BMP, RTR, and anything else that looks interesting to us.

And of course, because this is Open Source, we don't have any commercial sponsors. We need funding to continue the work: general funding, sponsors for hackathons, etc. We have a foundation, the OpenBSD Foundation, which is a Canadian not-for-profit corporation, and for direct work it's best to contact the developers directly.

So that's it. That's the update.

(Applause)

MARTIN WINTER: Okay. Thank you, Peter. Are there any questions?

AUDIENCE SPEAKER: Tom Hill. I'm actually quite curious as to the changes that have been implemented in rad versus rtadvd ‑‑ does it behave properly with CARP failovers?

PETER HESSLER: Yes, it is aware of CARP now; it is able to change announcements based on that. That was at least one of the reasons why we wanted code that was usable and code that was readable by others.

AUDIENCE SPEAKER: Tom Strickx from Cloudflare. I was wondering, with your SLAAC implementation, are you doing /64s or are you using smaller assignments ‑‑

PETER HESSLER: We have removed the artificial limitation and allow any prefix size you want; it can be small or large.

AUDIENCE SPEAKER: Awesome. Thanks.

PETER HESSLER: I personally use it with a /12 in a production environment.

AUDIENCE SPEAKER: Gert Döring. I just want to mention that I like this origin-set thing. Because in BGP filtering you want to verify that this origin matches this prefix, and doing that with what we have today is cumbersome. This looks great.

MARTIN WINTER: Thank you. No more questions. Okay. Thank you, Peter.

(Applause)

Next up we have Alex Band and Martin Hofmann. Okay, just Martin. He is talking about the Routinator 3000.

MARTIN HOFMANN: Hello. A while ago NLnet Labs committed to producing a set of Open Source products for RPKI to help the uptake of that: publication server and relying party software. I talked a bit about this this morning in the Routing Working Group, so here I want to do something else and talk a bit about the decisions we made and how they panned out.

So, this story starts when spring finally made it to Amsterdam, a few months ago, and we found ourselves in the position to ask the most important question for software developers starting a new project, which is: which language to write it in?

So, should we write it in C, because we are actually a C shop? All our other stuff is in C. But then we thought: it's 2018 and we're writing a security-related product; maybe we should take advantage of the last 40 years of progress in programming languages.

Shall we do Java? Well, the community told us in no uncertain terms what they thought about that idea.

So maybe Python, Ruby, or what the kids do now, Node.js. That's not a bad idea in general; we did some development in Python and that worked out great, and certainly performance shouldn't be an issue here. But there is one thing, which is deployment, which is always kind of painful in these kinds of languages. The other is that if you do large-scale development, you maybe do want a compiler that helps you a bit with looking at all the code paths and such. So, what about the new kids in town? There is Go and there is Rust, which both brand themselves as systems languages; Go specifically also talks about being good for networking. Maybe we should try these. We did, and we started out by building a prototype, one in Go and one in Rust. To be fair, the Rust one started earlier, just for logistical reasons.

And after trying this out a bit we ended up choosing Rust, mostly because we liked it more. The reasons why, for engineers: well, there is the one thing that everyone talks about, which is memory safety and what they call fearless concurrency. The point is that you get memory safety without having to rely on a garbage collector, which is great for performance. But while almost everyone talks about this, there is a bunch of other things that are really great and tend to be neglected. The type system is probably what makes it preferable over Go. There are things like generics. There are enum types that can carry data with them, which is really about the most fantastic thing. The build system is fabulous, probably the best one I have ever seen. It just works: you write one configuration and it just goes off and does its thing. It takes care of dependencies; there is hardly ever a problem with different versions of libraries. Because we are a C shop, FFI, the foreign function interface, matters: interfacing with C, and consequently with pretty much everything, works great. There is a great, friendly community, which is very open. And as a software developer you get the challenge of a lifetime; well, you get a challenge.
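As a small sketch of why data-carrying enums matter (the type and values here are invented for illustration, not taken from the Routinator code base):

```rust
// An RPKI-flavoured example: each variant carries exactly the data it needs.
enum ValidationOutcome {
    Valid { origin_as: u32 },
    Invalid(String), // reason for rejection travels with the variant
    NotFound,
}

fn describe(outcome: &ValidationOutcome) -> String {
    // The compiler forces every variant to be handled.
    match outcome {
        ValidationOutcome::Valid { origin_as } => {
            format!("valid, origin AS{}", origin_as)
        }
        ValidationOutcome::Invalid(reason) => format!("invalid: {}", reason),
        ValidationOutcome::NotFound => "not found".to_string(),
    }
}

fn main() {
    println!("{}", describe(&ValidationOutcome::Valid { origin_as: 64496 }));
}
```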

I guess the biggest advantage of Go is that you become productive in it relatively quickly; that's certainly not something that happens with Rust. There is this thing called fighting the borrow checker, which you do for about the first two months of your career as a Rust developer. But once you are past that, the compiler actually turns into a very friendly individual.
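For readers who haven't met the borrow checker, here is a tiny invented example of the kind of program it rejects, and the reordering that satisfies it:

```rust
fn main() {
    let mut names = vec![String::from("alpha")];

    let first = &names[0];               // immutable borrow of `names` begins
    // names.push(String::from("beta")); // rejected by the borrow checker:
    //                                   // cannot borrow `names` as mutable
    //                                   // while `first` is still in use
    println!("first: {}", first);        // last use of the immutable borrow

    names.push(String::from("beta"));    // fine: no borrows outstanding
    println!("len: {}", names.len());
}
```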

The other side of it is the why-nots. Surely, Rust is a very young language, so the question you have to ask yourself when starting a project in it is: well, how often will it break? Rust by now is used in production by quite a few companies; I probably should have added that slide here. So, yes, it's still a very young language and there is stuff happening, but it does work.

There is a question of continuity. Is it just a fad that everyone jumps on now, or will it be around in five years? The way this looks right now is that more and more people are picking it up. There are more and more Open Source projects that integrate Rust code, not just Mozilla obviously; I think librsvg is also moving to Rust. A very important question is the ecosystem: do we have to write everything ourselves, or are there libraries? Well, there are, but not too many. On the other hand, not using some weird libraries that you get off the Internet is maybe not the worst idea. And then there is the big question: well, can I get people to work on this?

Some absolutely non-scientific data on this: ever since we announced our Rust projects, the amount of people who applied to work for us specifically to work in Rust has, well, skyrocketed, basically. Obviously this is always a function of demand and supply; there is hardly any demand yet, so the supply will do.

So, we wrote the thing that we called the Routinator. The performance simply worked; it's running on a tiny little thing there to validate the entire RPKI database. How does it look in practice?

We broke it up into three libraries. The first one speaks to the point about the ecosystem: if you do RPKI, or PKI in general, you need ASN.1 parsing, or BER parsing. That wasn't there, so we had to write it ourselves, which turned out to be a good thing, because we actually found that about two thirds or so of all ROAs in the RPKI repository aren't actually RFC-conforming. That ended up at about 4,000 lines of code in its own crate, crate being the Rust term for library, because you have to have fancy names. I'm guessing that was probably the bulk of the work, figuring out how to do this properly. It depends on two other things: one is for byte handling, the other is for error handling. The bulk of the RPKI-specific stuff went into its own library. That's right now at 4,000 lines of code and it is going to grow substantially. As you can see, that one has more dependencies: crypto, XML and so on. Finally, the Routinator ties it all together; I think that's the version that has RTR. It has 14 direct dependencies, for network IO and stuff like that, which once you start compiling actually turn into 113 dependencies. This is where the build system shines, because you don't need to worry. And as a sort of benchmark point, not a real benchmark: the release build is 3.4 megabytes, and if you strip it, that goes down to 2.5 megabytes. Rust distinguishes between the release build and the debug build; the debug build is not only slower but bigger.

So, if you want to run this in practice, you need to remember to build for release.
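In practice that is just the standard cargo invocation, nothing project-specific:

```
cargo build            # debug build: quick to compile, slower and larger
cargo build --release  # optimised build: this is the one to deploy
```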

As an end point: if you are interested in this, we are of course very happy for contributions etc. There are the URLs to go to. We also just got Routinator 3000 as a Twitter handle; unfortunately the Routinator is too young for Twitter, so they have blocked us, but we're going to fix that.

Thank you.

(Applause)

MARTIN WINTER: Okay, we have time for some quick questions, if there are any. I see someone from the Jabber.

AUDIENCE SPEAKER: Lia from the RIPE NCC. I have a question from a remote participant, Peter, with no affiliation. The question is: do you also plan to implement the BGP preview functionality that the RIPE RPKI Validator 3 has? If that functionality is not known: you can ask it about a prefix and AS and it tells you if all sub-prefixes are valid, invalid or unknown.

MARTIN HOFMANN: At this point we have in the Routinator a very simple thing that talks to your router. We are aware of these features and we want to build these things; we haven't figured out yet how exactly we are going to do it, but it's somewhere along the way, absolutely.

AUDIENCE SPEAKER: And further, he would like to make a comment on that. He said: I am only interested in an API, I don't need the web UI.

MARTIN HOFMANN: There is at this point no plan to have a web API. If that comes and people want that, then it's going to be something separate. If you don't want it, it won't be in the thing that you use.

MARTIN WINTER: Okay. Thank you.

MARTIN HOFMANN: Thank you.

(Applause)

MARTIN WINTER: Okay. That gets us to the end of the session. I want to remind you: please rate the talks. We can really use the feedback to hear what you liked and what you didn't like, and how we can improve for the next time.

Okay. That's the end of the session. Thank you everyone.

(Applause)

LIVE CAPTIONING BY
MARY McKEON, RMR, CRR, CBC
DUBLIN, IRELAND.