
MAT Working Group

17 October 2018

At 2 p.m.:

CHAIR: All right. If you can find your seats, please. Welcome. We will get started. We have quite a packed agenda. By the way, hi, I am Nena, I am your co‑chair for this MAT Working Group session today. I expect Brian ‑‑ hello Brian, out there on the streaming ‑‑ Brian can't be here today because of happy circumstances approaching in the family, so he sends his regards and he will be here next time. So I will be running the show today. We have announced this agenda, and I just realised that we forgot the formal part of the agenda, which is approval of the agenda and approval of the meeting minutes from last time ‑‑ are there any comments? So we are just putting that in as a point 0 right now. The minutes from the last meeting were posted on the website and the mailing list, I think yesterday; apologies for being late. Any comments on those? Or on the agenda? Awesome. We have a scribe with us today, which is Matt, we have Terence monitoring the chat channel, and as always we have our amazing stenographer doing the typing.

So thank you so much for the help getting this session running. And I think we should just move on and start with our first speaker, Ermias. Come up and give it a go.

ERMIAS WALELGNE: Hello everyone. I am a PhD student from Aalto University. This is joint work with Kim, Steffann and others at Aalto University. It was already published at the IFIP Networking 2018 conference.

A brief introduction: the vast majority of Internet traffic flows are very short; in particular, studies show that the traffic flows generated by smartphones are very short‑lived. Usually these short‑lived traffic flows are driven more by latency than by throughput; this could be due to the need to resolve multiple domain names, or the requirement to establish multiple connections before fetching the content from the web.

There are some studies that try to quantify the factors that are responsible for DNS lookup time and TCP connect time, but here in this study we want to know what the main factors affecting DNS lookup time and TCP connect time are, and we want to understand the distribution of ‑‑ the impact of DNS caches and TCP proxies on improving latency, for both DNS lookup time and connect time. We also look at how often DNS lookup failures occur in the network, and using ping tests we try to understand packet loss.

So these are the three main contributions that we provide in this paper. First, we look at the distribution of DNS lookup failures based on the response code that we get from DNS query requests; out of the total measurements that we have, we observed about 2% DNS lookup failures, for different reasons. We also analysed this based on radio technology and the impact of the device model; here we observed that LTE, for obvious reasons, outperforms all the other radio technologies. And finally, we study the impact of the presence of ISP caches and DNS server proximity on both DNS lookup time and TCP connect time.

How do we do this? We measure four websites: Google, YouTube, Facebook and elisa.net, for both DNS lookup time and TCP connect time, and we also run ping tests. When we measure lookup times, we are simply sending a DNS request to the DNS server and getting a response from the server back to the client, so the DNS lookup time is the time difference between the request and the response. TCP connect time is sending a connection request to the web server and getting back a reply from the server to the client; regarding the IP address and the port number for TCP connect time, the IP version is IPv4, on port 80. While doing the measurements of both DNS and TCP we recorded the time, what radio technology was being accessed during the measurement, what the device model was, and some response codes as well. For the ping test, as I said, we only measure ‑‑ as it's simple ‑‑ and the number of echo requests per ping test varies from five to nine, and the payload size was six to 16 bytes.
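A minimal sketch of the two timing probes described above, assuming the dnspython library is available; the hostname, resolver address and timeout values are illustrative, not the study's actual configuration:

```python
import socket
import time

import dns.resolver  # pip install dnspython


def dns_lookup_time(hostname, nameserver="8.8.8.8"):
    """Time one DNS query: the difference between request sent and response received."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    start = time.monotonic()
    answer = resolver.resolve(hostname, "A")
    elapsed_ms = (time.monotonic() - start) * 1000
    return answer[0].address, elapsed_ms


def tcp_connect_time(ip, port=80, timeout=5):
    """Time the TCP handshake to ip:port (IPv4, port 80, as in the talk)."""
    start = time.monotonic()
    with socket.create_connection((ip, port), timeout=timeout):
        elapsed_ms = (time.monotonic() - start) * 1000
    return elapsed_ms


ip, dns_ms = dns_lookup_time("elisa.net")
print(f"DNS lookup: {dns_ms:.1f} ms -> {ip}")
print(f"TCP connect: {tcp_connect_time(ip):.1f} ms")
```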

This is the total data set that we have, collected from the measurements ‑‑ the operator is Elisa ‑‑ and the distribution: the map on the left shows you the geographical distribution across Finland. It was collected from more than 25K subscribers in Finland, and the amount of data that we got per website, for both the DNS and TCP connect time measurements, is almost 1.6 million samples for each website.

Next I will show you some of the results we found. The first one is DNS lookup failure: we used the DNS response code to identify whether a failure happened or not, and out of the total measurements that we have, about 2% show DNS failures, the majority of them due to a service level implementation mismatch. When we look at it by radio technology ‑‑ I want to note that the majority of the measurements we had were conducted over LTE, so while the percentages here might show something like a 1.9% DNS lookup failure rate when the user was accessing the LTE network, that doesn't exactly mean that LTE shows fewer DNS failures, because we have many more measurements conducted over LTE than over the other radio technologies.

This table is the DNS failure rate by website. It's more or less evenly distributed throughout all the websites; even if there are some websites, like Facebook and YouTube, that show more failures, it's more or less evenly distributed.

Next, the ping test ‑‑ we only use this for testing packet loss, but here you can see how the radio technologies vary per measurement. For instance, if we consider the minimum RTT, LTE basically has a faster ping response time than all the others. Here I call the rest legacy radio technologies, by which I mean all the pre‑LTE ones ‑‑ so the message is that LTE has shorter latency during ping tests.

Packet loss by radio technology: here the plot shows the distribution of packet loss as a fraction of total pings by radio technology, and out of the total ping tests that were conducted over the LTE network, about 2.4% of them lost at least a single packet. Like I said, the number of packets we were sending per ping test varied from 5 to 9, and the number of packets we send actually matters for whether we observe packet loss or not: for instance, here you can see that once we start sending 6 or more echo requests per ping test instance, we start observing packet loss. So one good takeaway, especially for the measurement folks: if you want to analyse packet loss based on ping test measurements, it's better to set the number of echo requests to more than 5 per test, so that you will see the real packet loss.
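A small sketch of this per‑radio, per‑echo‑count loss analysis using pandas; the records, column names and values are hypothetical stand‑ins for the study's data:

```python
import pandas as pd

# Hypothetical per-ping-test records: echo requests sent vs. replies received.
pings = pd.DataFrame({
    "radio":    ["LTE", "LTE", "3G", "LTE", "3G"],
    "sent":     [5, 9, 6, 8, 5],
    "received": [5, 8, 6, 7, 5],
})

pings["lost"] = pings["sent"] - pings["received"]
pings["any_loss"] = pings["lost"] > 0

# Fraction of ping tests with at least one lost packet, per radio technology...
print(pings.groupby("radio")["any_loss"].mean())

# ...and per number of echo requests sent: loss only becomes visible
# once enough probes are sent per test (>5 in the talk's data).
print(pings.groupby("sent")["any_loss"].mean())
```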

DNS lookup time by radio technology, for the four different websites: here, obviously, LTE outperforms all the other radio technologies on all four websites. For instance, if we take YouTube and 200 milliseconds as a benchmark, for LTE more than 75% of the lookups completed within 200 milliseconds, whereas on the others only 25% finished within 200 milliseconds.

For TCP connect time, similarly, LTE outperforms all the other ones. The plot on the left is TCP connect time to YouTube and on the right to Google. If we take 100 milliseconds as a benchmark, over LTE more than 92% of the time YouTube finished below 100 milliseconds, whereas over 3G it was 28% of the time. The distribution seems similar on the other websites as well.

We also looked at whether the device model has an impact on the DNS lookup time or TCP connect time. Here we don't see a clear pattern in DNS lookup time for newer device models ‑‑ when we say new, we only take device models by release year, so the colour shows release years from 2012 up to 2016. We were wondering if the new devices have an advantage over the old devices just by year of release, but we don't see any clear pattern. One observation we did find was that some devices with larger storage capacity have shorter DNS lookup times. But in the case of TCP connect time by device model, we don't see any impact at all.

This one is DNS lookup time by website. Here, we observe a lot of cache entries for Google, and the one in blue is Elisa, which is the ISP's own network, so it has faster DNS lookup time than the others; but the Google one ‑‑ the green one, this one ‑‑ has a better advantage than the other two due to the cache entries present in the ISP network. I will come back to this. And similarly for TCP connect time: here we observed that about 90% of the time Facebook and YouTube can be reached under 100 milliseconds from the client's device, whereas for Google and Elisa it was only 90% and 70% respectively.

For instance, one interesting thing we were wondering was whether this could vary by destination ASN (autonomous system number). The plot on the left is TCP connect requests towards YouTube, on the right towards Facebook, and the colours are AS names that we resolved using a RIPE service. The takeaway from here is that for Facebook, almost all the time it was hitting exactly the same IP ‑‑ in Ireland ‑‑ whereas for YouTube, in this case, we got a number of cache entries in different ASNs. Due to that, Facebook has slower TCP connect time than, for instance, Google in this case. So the takeaway would be that caches can improve performance, especially when fetching small files.
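One way to do the IP‑to‑AS mapping the speaker alludes to is the RIPEstat Data API; a sketch, assuming its `network-info` endpoint and `asns` field behave as below (the example IP is arbitrary):

```python
import requests


def asn_for_ip(ip):
    """Map an IP address to its origin ASN(s) via the RIPEstat Data API."""
    url = "https://stat.ripe.net/data/network-info/data.json"
    resp = requests.get(url, params={"resource": ip}, timeout=10)
    resp.raise_for_status()
    data = resp.json()["data"]
    return data.get("asns", []), data.get("prefix")


asns, prefix = asn_for_ip("8.8.8.8")
print(asns, prefix)  # e.g. ['15169'], '8.8.8.0/24'
```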

We started investigating this further: we took the TCP connect time difference between the ISP cache ‑‑ in this case, Elisa ‑‑ and Google, for requests that were answered by the two networks for the same user within a one‑hour time interval. So the X axis is the time difference between when the request gets replied to by the ISP cache versus by Google, within a one‑hour interval for the same user; negative values mean the ISP cache was faster. We can see that about 70% of the time Google content has lower latency when it hits the ISP's cache entries.

In conclusion: in this work we looked at DNS lookup times and TCP connect times across different factors, and we observed that TCP connect times vary by website, by network technology, and also somewhat by device model. We studied the impact of ISP caches and DNS servers on lookup time; one takeaway is that cache entries closer to the ISP can improve TCP connect time, and the proximity of the DNS server to the subscriber can improve lookup time performance. We analysed packet loss using ping tests, and one good takeaway from that is that if the number of packets sent per ping instance is greater than five, we get a real observation of whether packet loss happens or not; otherwise ‑‑ at least in our case ‑‑ we don't see any packet loss when the number of packets we send per instance is less than 5.

So, it's better to set it to more than 5 echo requests per ping test instance.

That's it. I am happy to take questions.

(Applause)

CHAIR: The questions are lining up.

JEN LINKOVA: You correctly mentioned that when you are doing measurements towards a name, your client might actually go to completely different destinations, right?

ERMIAS WALELGNE: Yes.

JEN LINKOVA: So the addresses might be close to you or far away from you, so I am not surprised you are seeing different numbers; it's really hard, I believe, to make any conclusions out of this.

ERMIAS WALELGNE: True, the clients can be anywhere.

JEN LINKOVA: The destination could be anywhere, because the particular address you get for this name, and which you are connecting to, ideally should be close to you; but if you run five measurements for five clients from different technologies, you might go to completely different addresses, right? So it would be really interesting if you measured clients on different access technologies, in different types of network, going to the same destination; that probably would be more interesting, because it might be that one client on a slower network was just talking to a server which is far away, despite the fact that both were talking to Google.com.

ERMIAS WALELGNE: When you say destination ‑‑

JEN LINKOVA: Google server, Facebook server.

ERMIAS WALELGNE: It depends; the location might obviously affect the performance, that's one factor. But I would say, considering the number of data points that we have, and that the network coverage ‑‑ especially if we take LTE ‑‑ in Finland is more or less stable, I guess the difference will not be that significant.

JEN LINKOVA: Okay. Another question. Could you go back to the slide with DNS failure rates.

ERMIAS WALELGNE: This one?

JEN LINKOVA: The one with the different percentages of resolution issues, probably one of the earlier ones ‑‑ yes, this one. Yes. So, did you look at any correlation with the resolver you were using? Because I presume you were using the one provided by the mobile network, right?

ERMIAS WALELGNE: Yes.

JEN LINKOVA: I am surprised you see different failure percentages for different domains; it might not be about the domain you are trying to resolve. What I am saying is, maybe it does not matter what name you are trying to resolve; maybe what does matter is what DNS server you are using.

ERMIAS WALELGNE: Absolutely, absolutely. I just split this by radio technology to see whether there is an impact from the radio technology, and like I said, more than 60% of the measurements that we have are over LTE, so it doesn't mean that, for instance, users on LTE have more DNS failures than those using ‑‑

JEN LINKOVA: The last comment. You mentioned in your very first slide that most of the traffic is TCP, and you were referring to some measurements done like five years ago, right? I wonder what the data shows now, because I would expect more QUIC traffic, especially from Android devices and the websites which support QUIC, so you might want to look at how much UDP you are seeing now.

ERMIAS WALELGNE: Okay thanks.

AUDIENCE SPEAKER: Kind of similar questions about the DNS lookup: whether the clients were using the same resolver, and whether the resolver was primed; because you might be measuring different things than you think you are measuring, rather than the lookup time from the hand‑held device to the resolver, and that might be skewing your results. It probably requires ‑‑ I will approach you and ask you about the details, but keep that in mind.

CHAIR: Please, next person, state your name and affiliation when you go to the microphone ‑‑ I forgot that at the beginning ‑‑ and stay with one, maybe two comments, and not five, please.

AUDIENCE SPEAKER: Florence, MPI. I am also challenging what you have measured on the radio technology. Do you have any information on the backhaul organisation, or how the ISPs do the backhaul for LTE versus the traditional technologies? Because I know that ISPs now use fibre to connect, while before they used traditional, old copper lines with 2 Mbit.

ERMIAS WALELGNE: I asked them if they have that; I couldn't get access. The radio technology that we consider as having been accessed during the measurement is the one we get ‑‑ if it is an Android device ‑‑ from the Android operating system.

AUDIENCE SPEAKER: The question is, if you are using LTE you will connect via fibre lines, and the traditional HSPA could be organised via ‑‑ connections from ‑‑ I know they changed the technology they are connecting the cell towers with. So you are probably ‑‑ maybe not measuring LTE, but the back‑end network of the provider, which is different for LTE than for other technologies.

ERMIAS WALELGNE: Yes, but ‑‑ you are right, but I don't know how to find out what kind of backhaul network was used.

AUDIENCE SPEAKER: That is what I meant. You don't know what you are measuring there.

ERMIAS WALELGNE: The one that we get is from the Android operating system, for instance if it is an Android mobile device, or from iOS; but I'm not sure how we can infer what technology the back end was using other than that, unless we have access from the network operators.

DANIEL KARRENBERG: Interested engineer. Did you actually collect data on the public IP address that the client actually connected from once they hit the Internet?

ERMIAS WALELGNE: The data is collected from the end users' devices, so it can be from a private IP address or ‑‑

DANIEL KARRENBERG: But did you collect the information about ‑‑ basically, when it goes to the Internet, what the client's IP address was? Because if you are doing that kind of stuff, it might be interesting ‑‑ not how the backhaul actually worked, but something you could find out is where it hits the Internet, whether that is the same place for different radio technologies, and how where it actually hits the Internet affects your measurements. That would be interesting. Where I am coming from is that the backhaul might work very differently, depending on which network it went through and which radio technology. Does your data set have that? Okay. I will take it off‑line.

LEE HOWARD: This is interesting work. I was curious, when you described your methodology in the beginning slides, that you chose to use IPv4 only. I can see that at least three of the websites listed here, for instance, would have provided you an IPv6 address when you do a DNS lookup, and you decided IPv6 is different or something. There is ‑‑ I saw Geoff there ‑‑ there is a small body of work suggesting that latency over IPv6 is better than over IPv4 on mobile networks.

ERMIAS WALELGNE: Yes, but the IPs that we have were only IPv4. I don't know why it's only accessing IPv4, but one piece of information I got from the operator was that IPv6 deployment in their network is increasing; here in this data set, though, we didn't see any IPv6 at all.

AUDIENCE SPEAKER: That would be interesting future work.

CHAIR: Great. Thank you very much, that was interesting.

(Applause)

Next we have Maxime, who is going to talk about how we can learn network states from RTT measurements.

MAXIME MOUCHET: Good afternoon. I am going to present work I am doing as part of my PhD on clustering delay observations on the Internet. It's quite theoretical and mathematical work, but we thought it might have more practical applications that might be of interest to you, so I will try to show you why we think it's interesting.

When you observe the delay on the Internet, when you do a ping, what you observe is the sum of different components: mostly the propagation time, which is the time the information takes to travel through the fibre ‑‑ so this is bounded by the speed of light ‑‑ and the queuing delay, which is zero if the traffic is low; as the traffic increases, queues start to fill up and the delay increases.

So if you look at the delay on short time scales, like a few hours, you might see something like this ‑‑ these are measurements from RIPE Atlas ‑‑ so you will see the delay oscillating around a baseline, which is bounded by the propagation time, plus some kind of measurement noise or whatever. Now, if you look over a few days, you will see that this baseline changes, so the minimum value of the delay changes over time; this is due to changes in the routing in the Internet. Also, the standard deviation ‑‑ the variation of the delay ‑‑ changes over time: sometimes it's stable and sometimes it really goes up and down, because maybe there is more traffic at that time of the day.

Now, the question we want to answer is: just by looking at this, can I say, okay, this delay observation at that time of the day saw the same network path and the same traffic level as the one on another day? That's what I call here a network state, and I will also call it a hidden state, because you cannot see it: when you look at the delay, you cannot see the network path ‑‑ the IP path you take ‑‑ and you cannot see the traffic level.

So why would you want to do that? Maybe you want to detect when new network states appear, when there is a problem in your network, or change your routing based on the performance of different paths; maybe you have a lot of customer problems at one particular time of the day and you want to correlate that with particular network states; or maybe you are just interested in studying RIPE Atlas measurements and finding patterns in the measurements.

So you may ask why we are not just using traceroute, because after all with that I can get the IP path. But the problem ‑‑ so that's traceroute ‑‑ well, it doesn't explain all the changes we observe in the delay. Some of the changes it explains, so everything in blue is one IP path and everything in orange ‑‑ so there are a few ‑‑ traceroutes are more expensive to perform than pings or delay measurements, and if you want to work on historical data you are more likely to have delay than traceroutes. Routing may be asymmetric, so you need a full view of the path changes in your network, and of course you cannot see changes in the traffic level, or changes below the IP layer, in the traceroutes. It turns out that in statistics and machine learning there is clustering, which is what we want to do: just by looking at the data, trying to find data points that are similar and grouping them together; in opposition to classification, where, if you want to classify traffic, you have to say, okay, this is TLS traffic ‑‑ you have to give some examples. Here we don't give examples: we give the data, the delay observations, to the algorithm and get the clusters. So how do you do that? The approach we have is somewhat Bayesian: you write a statistical model that explains how your data is generated ‑‑ in this case, how we think the delay is generated from the hidden states we want to find. Then you use some method to find the parameters of this model, and using the model we have just learned, you assign each observation to a hidden state. So this is the idea: if you know more or less how the data is generated, and if the model is good, we can find back the true hidden states that generated the data. I am not going to go into the details of how this algorithm works, because it's quite complicated, but I am going to show you some examples. The first question is which model you choose. With what is called a mixture model, you suppose that the delay you observe at time T just depends on the network state at time T but doesn't depend on the past. What we propose is a hidden Markov model, where the state depends on the network state at time T minus one ‑‑ which kind of makes sense, because in practice the network state and the routing at one point in time depend on the routing at the point in time before.
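A sketch of this kind of clustering using the hmmlearn library's Gaussian HMM; the synthetic RTT series and the choice of two states are assumptions for illustration, not the paper's actual fitting procedure:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# rtt: one RTT sample per time step, e.g. from RIPE Atlas built-in pings.
# Synthetic stand-in here: two "hidden states" with different baselines/noise.
rng = np.random.default_rng(0)
rtt = np.concatenate([
    rng.normal(30, 0.5, 500),   # stable state: low baseline, low jitter
    rng.normal(45, 5.0, 500),   # rerouted/congested state: higher, noisier
])

X = rtt.reshape(-1, 1)          # hmmlearn expects shape (n_samples, n_features)

# Fit a Gaussian HMM; n_components is the assumed number of network states.
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100)
model.fit(X)

states = model.predict(X)       # most likely hidden state per sample
print(model.means_.ravel())     # learned per-state mean delay (the "baseline")
print(model.transmat_)          # transition matrix: P(next state | current state)
```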

I am not going to show you how it looks with a mixture model, but if you want to see that it doesn't work very well for this problem, you can look on your computer at the end of the slides, where I put some plots. I am going to show you what we obtained ‑‑ so that is what we obtained with the mixture model compared against traceroute, and it doesn't work very well: we cannot find the cluster where the traffic level increases, and we cannot detect all the path changes. And this is how it looks using a hidden Markov model; visually, at first, it looks much better. There is some kind of correlation with the traceroute ‑‑ we see that the pink states are the same as the orange states in the traceroute, so that is good ‑‑ and we can see that the model clearly separates times when the delay is stable from times when the delay is kind of noisy, when there is more traffic. So that is what you learn, and you learn, for each state, the mean and standard deviation, stuff like that.

So, yeah, it works better than a mixture model, but you will see that at the end of the slides. It tends to be correlated with traceroute; at the end of the slides I put a plot where we show the correlation between the IP path, the AS path and the states we learned. What is great with this model is that we also have information on, on average, how long we stay in the blue state and the pink state, and, being in the blue state, what the probability is of going to the pink state, and so on. So now you may be like, okay, I have this colourful plot and it's nice, but what can I do with it? Well, we think it may be interesting, and I will show an example or two. We can detect congestion in upstream networks ‑‑ networks that don't belong to you; you can of course use this to detect significant network changes just by looking at the delay; and, as I said, you can use it to diagnose some problems. You can also use it for more research‑oriented things, and I will show you some examples of that.

So here is what we did: the same path as before, the same clustering we obtained, and we took the traceroute data and grouped each state with its IP path. What is interesting is that when you look at the difference between the IP paths, the only IP that changes is the one in the Cogent autonomous system ‑‑ so in the transit AS on this path ‑‑ and you can see that all the states with high variation of the delay belong to path A. So what we can more or less infer from this is that, by correlating with the traceroutes, we can know where in the path there is congestion, or at least a problem, and the expected change in the delay, and we can know how long it lasts on average. Here the pink state is good, but the blue state has really a lot of variation in the delay. So, yes, we can know that.

Another example ‑‑ another application. Earlier this year there was an outage in Frankfurt, and some people may be interested to find which paths were affected by this outage: which paths got rerouted, which paths just lost packets, which paths were not affected. What you can do, just by looking at the delay using this model, is cluster the paths, so you obtain something like this, and you just have to look for paths which got a new state ‑‑ a state never seen before ‑‑ during the time of the outage. The outage was in the night of the 9th of April, so we can see that, just before the 10th, this path was potentially affected, because we got a new state. Of course, visually you can see that, but here you can do it in an automated way, and even if the measurements are very noisy, like this one, the clustering works quite well.
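A sketch of this automated "new state during the outage window" check; the state sequence, dates and helper name are hypothetical, with per‑path states assumed to come from something like the HMM's predict() output:

```python
from datetime import datetime


def new_states_in_window(states, times, window_start, window_end):
    """Flag a path as potentially affected if a hidden state appears during
    the outage window that was never observed before it."""
    seen_before = {s for s, t in zip(states, times) if t < window_start}
    in_window = {s for s, t in zip(states, times)
                 if window_start <= t <= window_end}
    return in_window - seen_before  # non-empty set => new behaviour


# Hypothetical per-path state sequence: state 2 first appears on April 10.
times = [datetime(2018, 4, day) for day in range(1, 15)]
states = [0, 0, 1, 0, 0, 1, 0, 0, 0, 2, 2, 0, 0, 0]

novel = new_states_in_window(states, times,
                             datetime(2018, 4, 9), datetime(2018, 4, 11))
print("path affected" if novel else "path unaffected", novel)
```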

And another example, just to show the numbers you get: when you do this on all your delay measurements, you get a bunch of numbers. So far I have just shown the clustering, the colours, but you get all these parameters: the transition matrix, which gives you the relationship between the states ‑‑ so, in a given state, the probability of going to another state ‑‑ and you get a probability distribution for each of the states, so you have some information on what the delay looks like in those states. That's an example of what you can do with these numbers: here we took all the RIPE Atlas anchoring measurements, we learned the model for each of those, and we looked at whether there was a relationship between how long a state lasts and what the variation of the delay in that state is. What we see is that, at least for the paths between anchors, stable states last longer, so it means we spend less time in states with a high variation of the delay. Which is kind of expected, but still, we get confirmation here. So the idea of this talk is that the delay you observe depends on some hidden network state that you cannot see, and using the right kind of model you can find back, more or less, the states. We don't know if they are the true states, but it matches more or less. And in comparison to other models for prediction, like neural networks, as I showed before, it's very easy for a human to interpret this; so HMMs can be used for automated things but also for manual analysis: as a human you can just look at the numbers and get the information.

So, this work ‑‑ what I presented is just the application, but of course there is the question of how to infer the model parameters and learn the model; we are publishing a paper on that, we just submitted it. And the original use case we had for this, which I didn't present, is what is called parsimonious monitoring: you can predict delay with this model ‑‑ we published a paper on that ‑‑ and in some cases you can reduce a lot the number of measurements you need to do, because the delay is stable on the Internet and with this model you can predict it. In future work we would like to do this in realtime, because now we use historical data, but in practice you may want to start from scratch. And that's it. Thanks, and I will be happy to answer any questions.

ROBERT KISTELEKI: Fascinating work, thank you very much for that. I would love to explore how much of this we can actually use, and attach it as close as possible to RIPE Atlas itself, to inform operators about state changes and all that; so I want to take that discussion off‑line with you and we should have a chat.

MAXIME MOUCHET: Yes.

ROBERT KISTELEKI: You touched at the very end on realtime monitoring, so my question is: assuming that you do get a realtime ‑‑ like, really realtime ‑‑ feed, how long does it take for your machinery to realise you are now in a different state?

MAXIME MOUCHET: Okay, well, I am just starting right now on this part, so no, I don't have the answer, but yeah, I am starting to work on it.

ROBERT KISTELEKI: I am offering you a seat for you to find it out.

AUDIENCE SPEAKER: Giovane. Thanks for the work, very interesting. In the first example you showed, the measurements of RTTs, I think you showed the numbers of the probes that you used. Are those anchors, by the way?

MAXIME MOUCHET: Sorry, which one?

AUDIENCE SPEAKER: I think the first one shows an RTT ‑‑ if you could go back to those slides, your first ‑‑ like this one. This is one particular ‑‑ is this two anchors talking, or who is talking here?

MAXIME MOUCHET: Yes it's not written but it's the delay between two anchors, yeah.

AUDIENCE SPEAKER: Two anchors, yeah. And this is like ‑‑ you are using pings right here?

MAXIME MOUCHET: Yes, we used the built‑in measurements so we can have a big data set.

AUDIENCE SPEAKER: If you could repeat this analysis with another measurement from the anchors, that is, DNSMON towards the root servers, because ICMP ping tends to get low priority in ISPs, so it may have different ‑‑

MAXIME MOUCHET: Different patterns.

AUDIENCE SPEAKER: Yes. DNS in theory would be more stable, so you may want to repeat that just to double‑check what would happen with DNS.

MAXIME MOUCHET: I think we will still see these routing changes, this change of baseline, because it depends on the network topology; but maybe the variation of the delay will be different, because ICMP is low priority.

AUDIENCE SPEAKER: I mean, it depends on the network so just to be sure, I would look. Thanks.

AUDIENCE SPEAKER: NTT Communications. Very interesting work, and it's just a comment, not so much on the theoretical portion ‑‑ which is interesting, using hidden Markov models for this ‑‑ but on the practical side: for the operators who might eventually use an application of this, adding the dimension of AS path may make this very immediately beneficial. Just a comment.

MAXIME MOUCHET: Okay. Thank you.

AUDIENCE SPEAKER: Cristel from the University of Strasbourg. So, you mentioned that you are able to determine the probability of going from one state to the other.

MAXIME MOUCHET: Yes.

AUDIENCE SPEAKER: And my question is, whether you looked at sequences of states and did you observe repetition in the sequence of states?

MAXIME MOUCHET: No, but that's a good question.

AUDIENCE SPEAKER: Because that would be something interesting to investigate.

MAXIME MOUCHET: Yes, and maybe study the periodicity of some states; but no, not yet, that's a good idea.

AUDIENCE SPEAKER: Regarding Robert's question on how fast you can detect changes, I have been using the RIPE Atlas data to do similar things, and most of it depends on the frequency of the measurements. So ‑‑

MAXIME MOUCHET: Yes, thank you.

(Applause)

CHAIR: So next up we have Tim. You are going to talk about measuring global DNS propagation using RIPE Atlas.

TIM WATTENBERG: I am Tim, glad to see you all here. I am here to talk about DNS propagation times. I did some measurement work in my bachelor's thesis; I just graduated from the University of Dusseldorf, although I am from Cologne ‑‑ if you are from Germany, you know you are not supposed to say that. Anyway. Let's talk about DNS. As you already know, I guess, we have this wonderful system to translate names into numbers, and I wanted to take a particular look at how fast changes to the DNS are globally visible, and that's what I wrote about. When I was looking into the tools I could use for this, RIPE Atlas was of course a brilliant solution, because you have all these probes in all the different networks, and so I started exploring what I could find out.

Okay, so of course the problem, as already mentioned, is that we have the structure of DNS with all the resolvers; ideally you have resolvers pretty near to you, so you can benefit from previously queried addresses that are already resolved. On the other hand, you have the time to live, which determines how long the response you get back is still valid, and if you make changes while there are still TTL caches active in some resolvers, then your customers or end users don't see the new configuration you are trying to push out. So the best way to get a sense of the state is just to measure it. Hence, RIPE Atlas.

I chose one zone where I had deployed a custom name server which was able to serve various TTLs and serials, like timestamps and stuff like that, and I chose to make a measurement with the Atlas probes from as many networks as possible ‑‑ a worldwide measurement ‑‑ and I queried one specific zone I prepared, via the Atlas platform, so the measurements were fired at nearly the same point in time. Of course there is a little bit of variation, but as most of the time the TTLs for DNS are more in the range of hours or days, I guess a few seconds don't matter that much. Then I gathered and compared the results and just looked at whether they differ, and how. I chose the SOA resource record with the serial number to identify which version of the zone I was getting back from the resolver.
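A minimal sketch of the core query, assuming dnspython; the zone name is illustrative:

```python
import dns.resolver  # pip install dnspython


def soa_serial(zone, resolver_ip=None):
    """Ask a resolver for the SOA record of `zone` and return the serial,
    which in the measured setup encodes a timestamp."""
    resolver = dns.resolver.Resolver()
    if resolver_ip:                           # else use the system resolver,
        resolver.nameservers = [resolver_ip]  # as an Atlas probe would
    answer = resolver.resolve(zone, "SOA")
    return answer[0].serial


print(soa_serial("example.com"))
```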

Yeah, okay, I already talked about this. I had RIPE Atlas as the client, my custom‑implemented name server, and some scripts for parsing and plotting all the results around it.

So in one measurement I just set the SOA serial to the current timestamp, and the TTL was set to one day, and then I looked at what came back from the resolvers. You see ‑‑ the particular data points are not relevant right now, but it's quite different what we get back. On the X axis we have the time of the measurement, and on the Y axis we have the time which is coded into the serial of the SOA record. So you see some major steps, for example here and here, and although it's hard to read, that's exactly one day later ‑‑ so that's what we expect: if we have a TTL of one day, then we have resolvers renewing the resource record after one day. But we also see some which seem to query the zone almost every time. I have a few patterns; here is one example which clearly sticks to the TTL ‑‑ I am sorry you don't see the numbers, because I split up the graphic so that the pictures are a bit bigger, but in the end I have them all in one picture. That is a resolver which sticks to the TTL and renews the record every 24 hours.

I also saw resolvers which reduce the TTL, but pretty consistently; so, yeah, they are configured to query more often, but in a very consistent way. There are also some inconsistent TTLs. If you look closely, this is from resolver 8.8.8.8, so from Google, and whenever I saw something like this I took a look at the BGP routes, and most of them were Anycast DNS servers; so, yeah, that explains why, for example, sometimes you see a new state and then it jumps back to an older state. Whenever I had a pattern like this, I checked whether it was an Anycast IP, and that worked out pretty well. Disclaimer: what I didn't do was look into NSID, so checking which name server ‑‑ or which resolver instance ‑‑ responded to me; that would be possible with the name server identification option, and that could be future work.

Oh, I skipped one. Another pattern which was clearly visible is that some resolvers just completely ignore the TTL. For the end user in this case, if you do a change, it's good, because the end user directly sees the new version; on the other hand, we have a bigger load on the name server. You see the comparison, and it's quite different what resolvers out there are doing. Of course, most of the resolvers ‑‑ or many of them, at least ‑‑ were in private IP space, so probably deployed in the local network, so I wasn't able to do any measurements directly with them; but, as I said, for the public resolvers, for example, there are some more measurements or data points which could be gathered.

Yeah. How can we work on the problem that we have an anticipated change to the DNS infrastructure ‑‑ in our company or somewhere else ‑‑ and we want to provide it to our customers as quickly as possible? If you look at the RFCs ‑‑ the original RFC, I don't know if it's 1034 or 1035 ‑‑ it suggests just setting the TTL to zero, and of course I then thought, hmm, let's see if the resolvers behave correctly and do what you want them to do. And yes, I can assure you, all the resolvers we measured really didn't stick to any TTL there: they queried the name server every time. So one takeaway, I guess, for DNS providers who can anticipate changes because their infrastructure is changing or something like that: you are advised to reduce the TTL to zero prior to the change, because then, if you change something, it's propagated pretty quickly.

So, then I had another proof‑of‑concept idea. I built a little web project, ismydnslive.com ‑‑ it's still online, so you might as well check it out. It's a proof of concept for a tool for zone administrators to use the RIPE Atlas network to measure the state of their zone and to get a little insight into how the status of their zone is viewed all around the globe. What I do is directly query the authoritative name server, get the freshest SOA serial number, create a measurement on the platform and see what the results are. Then I just check: if the serial matches, the customer or the probe already sees the latest configuration, and of course if it doesn't, then there is still data in the caches somewhere.
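A sketch of how such a worldwide SOA measurement could be created, assuming the ripe.atlas.cousteau client library; the API key, probe count and zone are placeholders:

```python
from ripe.atlas.cousteau import Dns, AtlasSource, AtlasCreateRequest

# One-off DNS measurement: each probe asks its own configured resolver
# for the zone's SOA record, so cached serials become visible.
dns_query = Dns(
    af=4,
    query_class="IN",
    query_type="SOA",
    query_argument="example.com",   # the zone under test
    use_probe_resolver=True,
    description="SOA propagation check",
)

# Worldwide selection of probes.
source = AtlasSource(type="area", value="WW", requested=100)

request = AtlasCreateRequest(
    key="YOUR_ATLAS_API_KEY",
    measurements=[dns_query],
    sources=[source],
    is_oneoff=True,
)

is_success, response = request.create()
print(is_success, response)
```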

Yeah. And the goal was to make it as end‑user‑friendly as possible. It's running on my RIPE Atlas credits, so I think it's not really an end‑user project, because if I run out of credits for my RIPE Atlas account, of course this product stops working; but maybe you have something in your infrastructure where you want to check if all your customers are seeing the most recent SOA serial, so that is a proof of concept for this.

Yeah, and with that, I come to the conclusion. What I found out was basically that it's quite possible to build infrastructure for measuring the consistency of DNS, for example with tools like RIPE Atlas, and I conducted several measurements and tried out which actions you can take to have your changes propagated as quickly as possible. As I said before, some future directions would be to expand the measurement capabilities beyond the SOA serial, so you can really see the different resource record types. I didn't look into negative response caching: if a resource record does not exist, then that non‑existence also gets cached. As I said earlier, it might also be worth evaluating the NSID option, maybe in some later work.

And with it, I'm ready and happy to take questions.

(Applause)

AUDIENCE SPEAKER: Hello, Martin from AFRINIC. If I understand correctly, you are saying simply querying for the SOA record of a zone gives you the status of that zone?

TIM WATTENBERG: Well, at least if you change something in that zone you are supposed to change the serial as well.

AUDIENCE SPEAKER: That's not the point. I can change the zone, the serial, but if in the interval I ask for some other record, the TTL will go the same way, so I can have as many statuses for that same zone as the number of resource records it contains. That is my point.

TIM WATTENBERG: Yes, that's correct, of course.

AUDIENCE SPEAKER: So I think it's a simplistic way to say, hey, we can monitor the status via the SOA ‑‑ it is important to know it ‑‑ but if you really want the status, you have to add all those records inside.

TIM WATTENBERG: Yes I totally agree. It was just the first step to implement the SOA record but I totally agree.

AUDIENCE SPEAKER: Two questions. Did you notice some of the probes going up and down?

TIM WATTENBERG: Yes, of course.

AUDIENCE SPEAKER: Do you have an explanation for that?

TIM WATTENBERG: Sorry, do you mean like the online status of the probes? Because I had of course ‑‑

AUDIENCE SPEAKER: No, serial number you got.

TIM WATTENBERG: Oh, yes, yes; that was mostly when the probes were using Anycast resolvers, because then you might hit another instance of the Anycast resolver, and that one still had an older cache entry.

AUDIENCE SPEAKER: Or it can be the probe querying the second address of the resolvers it has set up, so you are talking to a different resolver?

TIM WATTENBERG: Yes, yes. That's like the problem with Anycast. That is why I mentioned the NSID option to identify which instance you were talking to.

AUDIENCE SPEAKER: The thing is, the NSID option doesn't work very well with resolvers usually, so you see it more on the authoritative side.

TIM WATTENBERG: Okay.

AUDIENCE SPEAKER: The second question is about ‑‑ I think this is slide 16 ‑‑ you go and get the fresh SOA from the primary; how do you determine which is the primary?

TIM WATTENBERG: You have a field to enter the IP address of the primary. So, in case you change the primary, you can query the right one.

AUDIENCE SPEAKER: There are architectures where people have hidden primaries.

TIM WATTENBERG: Yes.

AUDIENCE SPEAKER: So that value is useless.

TIM WATTENBERG: Yes. I could also build just a field where you enter the last serial you want to see; as I said, this is a proof of concept. It doesn't cover all the cases you could use it for.

AUDIENCE SPEAKER: Just keep that in mind.

GIOVANE: I presented here on Monday work that is very related to yours; you might want to take a look at it. I also tried to measure caching in the wild, and it's pretty much what you tried to do. I want to make the comment that you may be oversimplifying the resolver infrastructure: there may be multiple layers of resolvers and forwarders in between, so it's very hard to know which one you are hitting, because there may be multiple hidden layers. To give an example, I saw one probe that had two local resolvers, and when I talked to the authoritatives I had eight different addresses hitting me, so I had no idea what was in between. So you can't say, just by looking at the local resolver, anything about the status of the entire network; it's way more complex.

TIM WATTENBERG: Yes, yes, and of course it's difficult because you can't talk to them directly, only through the probes, because they are in private networks.

AUDIENCE SPEAKER: Yes.

CHAIR: I love how we are getting questions in this session today. Please be brief, otherwise Robert will not have time at the end.

ROBERT KISTELEKI: I am already speaking. Thank you for your work. If you need more credits, let us know and we will give them to you ‑‑ seriously, we are offering that all the time; if people want to do this kind of stuff (well, not you, Geoff) and they run out of credits, talk to us, we will help you. The other point is about TTL 0: that is, as far as I understand, more debated than you presented it, and I seriously suggest that you do that again with way more probes and see if you get the same result.

TIM WATTENBERG: Thank you.

AUDIENCE SPEAKER: I am here to talk to you about TTL 0. If you are going to play with measurements again, try with TTL 1, 5, 10 or something like that. And operational advice: don't ever use TTL 0, because bad things happen on zones which are used by many clients. It's like ‑‑ go for 5, if you would.

TIM WATTENBERG: Very interesting. Thank you.

AUDIENCE SPEAKER: One short remark. I would really like to see other resource records queried, because I think some name servers implement different caching for them ‑‑ you might want to look into that. And don't just concentrate on Anycast; you also have DNS servers with local load balancing ‑‑ you may hit different name servers without knowing, because they are distributed ‑‑

TIM WATTENBERG: Yes, of course, yes.

CHAIR: Thank you.

(Applause)

Next up we have Trinh Viet Doan, who will talk to us about tracing the path to YouTube.

TRINH VIET DOAN: So, I am a PhD student at the Technical University of Munich, and glad to be here to present our work. I can't go into too much detail here, since we are short on time; if you are interested, the preprint of the paper is available online, as well as the slides, so take a look at those.

So let's get straight into it. In previous work we have seen that video streaming ‑‑ in this case YouTube ‑‑ performs a little bit worse over IPv6 compared to IPv4, and we were wondering why this is the case. One speculation was that maybe it's because of content caches: maybe the content caches are not dual‑stacked, or they just behave differently, so this is what we were interested in in this work. We were asking ourselves how far content caches are away from the users, how much benefit they provide, and how they compare over IPv4 and IPv6. To do that, we have deployed around 100 probes around the world; these are deployed behind dual‑stacked, fixed‑line networks, so we don't have any mobile measurements. We use these probes to do traceroute measurements, using scamper, towards YouTube media servers, over both IPv4 and IPv6, every hour. The IP addresses of these media servers we received from a previous test that we ran for the performance metrics; this test obtains the IP addresses where the actual media files are stored, and we measure towards those using traceroute. We have done that since May 2016, so roughly two‑plus years now. And these are our results, or some of the results that we have seen. If we just look at the path distributions, we see that IPv6 and IPv4 compare relatively similarly to each other; we can see some sort of tendency that the RTT, so the latency, is a bit better over IPv4, although the paths are a bit shorter over IPv6. That's a first observation that we had, but obviously you can't just compare those based on the distributions, so what we did is look into what we call destination pairs: since we have done hourly measurements over both IPv4 and IPv6, we can say, okay, this probe measured these two paths during this hour, and then we can group these measurements together and calculate the difference between the v4 and v6 metrics that we see for traceroute. If we look at that distribution, we can see that overall it's quite evenly distributed ‑‑ there are no sudden jumps ‑‑ with some numbers you can see below: in roughly one‑third of the cases IPv6 has shorter paths, in roughly one‑third it has longer paths, and in roughly one‑third it has equal paths compared to IPv4. Similar goes for RTT, the latency: in roughly 50% of the cases the latency over IPv4 is better, and in the other 50% IPv6 is better; so it's kind of okay, nothing too fatal, I would say. But the differences are mostly located around zero: roughly 90% of the destination pairs that we measured are centred between minus and plus five regarding the hops, and minus and plus 20 regarding latency. So how do caches play into all of this? Obviously, caches are deployed within ISP networks to bring content as close to the user as possible,
in order to reduce the latency. How we identify the caches is that we looked at AS numbers: if the source AS number of the traceroute and the destination AS number are the same, then we stay within the AS boundary, meaning we didn't leave the local home network, so to speak, and a local cache was hit. Secondly, we also did reverse DNS lookups for the IP addresses that we traced through, the host names, and looked for keywords such as "cache" or Google cache names; but the host name approach was not that useful, because a lot of these machines do not have host names assigned to them, so we mostly relied on the AS number matching for the cache identification.
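A sketch of this two‑step cache heuristic; the keyword list, ASN values and IP address are illustrative, and real cache hostname patterns may differ:

```python
import socket

CACHE_KEYWORDS = ("cache", "ggc")  # keyword list is an assumption


def looks_like_cache(src_asn, dst_asn, dst_ip):
    """Heuristic cache identification as described in the talk:
    1) the traceroute never leaves the source AS, or
    2) the reverse DNS name of the destination contains a cache keyword."""
    if src_asn == dst_asn:
        return True  # stayed within the AS boundary -> local cache hit
    try:
        hostname, _, _ = socket.gethostbyaddr(dst_ip)
    except OSError:
        return False  # many cache machines have no PTR record at all
    return any(kw in hostname.lower() for kw in CACHE_KEYWORDS)


print(looks_like_cache(3320, 3320, "203.0.113.10"))  # example values
```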

Now, this leads us to four specific cases ‑‑ a two‑by‑two matrix, pretty much ‑‑ where we have either a cache or not, and either IPv6 or IPv4. If we split the destination pairs into these four cases, this is what we get. Just quickly going over it: if we look at the cases where we only have an IPv4 cache, which are the blue triangles here, we can see it's heavily shifted towards the left side, meaning IPv6 performs worse, since we are going to a cache over IPv4 but over IPv6 we go to the origin server. For the IPv6‑cache case, the red squares, we see that it's also a little bit shifted towards the right side, but not as much; so we can see that the paths are most of the time shorter over IPv6, but if we look at the RTT on the right side, still roughly 30% of the measurements are slower over IPv6, even though we go to the cache over IPv6 but not necessarily over IPv4. And if we look at the purple cases, where we have caches on both sides, it's centred, which is a good indicator, because obviously if we go to caches on both sides we would expect them to perform similarly, and the difference is not that big between these cases, so that's a good thing.

So, if we look at the paths themselves again, and split those with respect to the four cases, we see that nearly 100% of the ISP caches are reachable within seven IP hops. If we compare those numbers to the ones we identified as no caches: for the majority of the measurements, we saw that 90% of the caches are reachable within six IP hops, while for no caches the same fraction takes up to 12 hops, so the path length was reduced by up to 50%.

Now, the same for the RTT: most of the caches are reachable within 20 milliseconds, as we saw, and if we look at the comparison between no cache and cache again ‑‑ in this case we looked at the 80th percentile ‑‑ the relative improvement for IPv4 was just roughly one‑third, whereas IPv6 could be improved by up to 50% using a cache.

So, concluding: caches were roughly located within six to seven hops and reachable within 20 milliseconds over both IPv4 and IPv6. Regarding the benefits of the caches on performance, we saw that the hop counts ‑‑ the IP path lengths ‑‑ were reduced by up to 50% over both IPv4 and IPv6, and the latency, depending on whether we were measuring IPv6 or IPv4, was reduced by up to one‑third or one half. Surprisingly, the IPv6 caches had shorter paths compared to the IPv4 non‑caches but still had higher latency, so there is something which doesn't really add up. Regarding the takeaways: there is obviously room for improvement regarding IPv6 content delivery for YouTube, and our suggestion would be to ensure caches are dual‑stacked, since we saw cases where we went to a cache over one address family but not the other; also to optimise delivery regarding performance, routing, forwarding, processing and whatnot; and lastly, since we had the speculation that content caches are responsible for the difference in performance between IPv4 and IPv6: caches are not the end of the story, so there is still something to look at.

The data set and the code for the analysis are online, and this is my e‑mail address; if you have any questions, please feel free to ask me. I would be happy to take questions now. Thanks.

(Applause)

JEN LINKOVA: Google. I can give you a few examples off the top of my head ‑‑ I am not saying it's what happens here ‑‑ where you can see a low number of hops but higher latency: when you have three hops within the country and one hop across a transatlantic link, or if your traffic is sent through an LSP, for example, as high‑priority traffic, or just one hop which is congested and you see high latency. So latency and hop counts are not necessarily correlated.

TRINH VIET DOAN: Exactly. That is not what we were necessarily saying; it's just an observation we made. One IP hop doesn't necessarily represent a specific distance in terms of kilometres or something, so obviously one hop could be from here to the next router, or it could be one across the ocean, for instance. Thanks.

AUDIENCE SPEAKER: Hello. Cyril. You mentioned that IPv4 latency was slightly less than IPv6, but the IPv4 paths are slightly longer than the IPv6 paths ‑‑ but have you thought about the conclusion that IPv6 infrastructure is still non‑commercial and non‑critical for most of us, and that IPv6 is also treated just as ‑‑

TRINH VIET DOAN: That is also what we were thinking: that the IPv4 topology is simply more optimised because it's been around for much longer, and IPv6 has been experimental for quite some time. So that is also what we thought; I haven't mentioned it here, but thanks.

AUDIENCE SPEAKER: Will from AS 2613. I was wondering if you spotted similar ASNs in the paths for the IPv6 destinations that were not cached? For instance, whether HE was appearing there in your traces or not, or anything else that you spotted there? Did you look at that?

TRINH VIET DOAN: At the top of my head I can't quite remember any of these but can have a look into that. Thanks.

GIOVANE: I think I have seen the presentation of the original work, I think in 2015.

TRINH VIET DOAN: I think that could be the case. I don't quite remember.

AUDIENCE SPEAKER: I don't know about YouTube, but they are announcing their addresses using Anycast, which means the same IP could be announced from various places in the globe. I have seen with DNS that, if you try to reach one root server ‑‑ let's say B root, where the same IP is available in Miami and LA ‑‑ for some probes, when you try to contact them, over IPv4 you go to Miami and over IPv6 you go to LA. And my question is whether that was factored into your measurements: is this Anycast, and if it is, how would that have impacted your results? Because you say IPv6 is slower or faster, but there is also the question whether, even if they have the same routes, they get to the same physical locations.

TRINH VIET DOAN: So the methodology for how we retrieve the IP addresses was that the probes themselves are looking up ‑‑ or mimicking ‑‑ the video streaming, so YouTube would reply to us with a specific IP address, so ‑‑

AUDIENCE SPEAKER: It could be 10, 20, 50, 100 places in the globe.

TRINH VIET DOAN: We would assume that in most of the cases it would be a local replica; obviously, we cannot really confirm this, or ‑‑

AUDIENCE SPEAKER: You may want to look at that for future work.

TRINH VIET DOAN: Thanks.

CHAIR: Thank you.

(Applause)

Next one is Kevin, are you here? There you go. You are going to talk about multilevel MDA‑lite Paris traceroute. You have 25 seconds.

KEVIN VERMEULEN: I am very happy to be here to present my work, in collaboration with Stephen from the RIPE NCC ‑‑ I am a PhD student. So, if you are not familiar with traceroute, I will just make a short recap. Paris traceroute: suppose you have this topology to discover; the 16 interfaces at this hop are due to per‑flow load balancing. Paris traceroute just gives you one path between source and destination, and the goal of the multipath detection algorithm, MDA, is to discover all the paths between the source and the destination; so here, if you run MDA, you will discover the 16 interfaces.

So, before 2018, the topologies that had been reported in the literature had a maximum of 16 interfaces at a single hop, and maybe today we are going towards more complexity. What we discovered in our study last year was that we could find this kind of topology between a single source and a single destination with just a single MDA traceroute: we found this topology in a data centre, 7 hops deep with, at this hop, 32 interfaces. And not only in data centres: also in a tier 1 AS we could find this kind of topology, 7 hops deep and here 55 interfaces.

Also, the most common cases don't show this kind of eccentricity, but in a tier 2 AS, for example, we found this kind of topology, 7 hops deep and 92 interfaces at a single hop. And here you see that this topology has fewer edges than the previous one, so we can say it is unmeshed.

So the natural question that comes to mind is, are these topologist (top /OLgies) really that complex, you have multi level, so (we build a tool that is capable of running in a single common line both IP and router level. So we are using for the LE S resolution several state‑of‑the‑art techniques such as the monitoring bound test borrowed from my DAR, fingerprinting TTLs and MPLS characterisation.

So here, for the first topology that I showed, before the resolution you get this, and after, you see it's still complex, but at the hop where there were 32 interfaces we can say there is a maximum of 6 routers. Same here: before, 55 interfaces, and after, a maximum of 37 routers. Here it doesn't change anything, but maybe that is because our techniques were not able to reveal the routers, typically if the interfaces are not responsive to alias resolution probes.

And here also no change.

So the question is: could we deploy these algorithms on Atlas to reveal all this diversity in the Internet? For example, for this topology on Atlas, if you use a single probe and run Paris traceroute, you get only this, so you miss a lot of the topology. If you are very courageous and launch 64 measurements, because you can make the Paris ID vary from 1 to 64, you can get pretty much all of the topology. But on this one you will still miss a lot of interfaces and a lot of links. And here it's the same. And here you will miss a lot of edges. So, if you want to be representative when you are doing studies, it would be really, really helpful to get all of this.
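For readers who want to try the 64-measurement sweep, a hedged sketch against the RIPE Atlas v2 measurement API follows. The traceroute definition accepts a `paris` field in the 0 to 64 range, but verify its exact semantics in the current API documentation before relying on it; the API key, target and probe ID below are placeholders.

```python
# Schedule one one-off Atlas traceroute per Paris ID value, as the
# brute-force approach in the talk describes. Placeholders throughout.
import requests

ATLAS_KEY = "YOUR-ATLAS-API-KEY"   # placeholder: create one at atlas.ripe.net
TARGET = "198.51.100.7"            # placeholder destination
PROBE_ID = "12345"                 # placeholder probe

def one_off_traceroute(paris_id):
    """Create a single one-off traceroute with a fixed paris value."""
    body = {
        "definitions": [{
            "type": "traceroute",
            "af": 4,
            "target": TARGET,
            "protocol": "UDP",
            "paris": paris_id,          # 0-64 per the v2 API schema
            "description": f"Paris ID sweep, paris={paris_id}",
        }],
        "probes": [{"requested": 1, "type": "probes", "value": PROBE_ID}],
        "is_oneoff": True,
    }
    resp = requests.post("https://atlas.ripe.net/api/v2/measurements/",
                         params={"key": ATLAS_KEY}, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()["measurements"][0]

# 64 separate measurements, as described in the talk; this costs credits.
msm_ids = [one_off_traceroute(i) for i in range(1, 65)]
print(msm_ids)
```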

But that is not as easy as it seems, because on RIPE Atlas you have to respect the resource constraints of the probes, and you do not want to swamp a home DSL line. There is also the question of NATs, which rewrite packets and can make it hard to identify the flow IDs of the packets. So the question is: is this interesting for the community? If you want to test the tool, please feel free to download it and give us feedback at this address.

So, thank you. And if you have questions, feel free to ask them.

(Applause)

AUDIENCE SPEAKER: Strasbourg University. I am also very interested in discovering multipath in ASes, so yes, it's interesting, at least for university academic people. My other question is about the way you discover the paths. Usually we change the flow ID, but I have seen some ASes that do load balancing at layer 3, where changing the flow ID won't do anything: in order to discover the different paths between an entry point and an exit point, you would need to vary the source or destination IP addresses. So I think there is still work to do in that space as well to discover the paths. Maybe you did that. That's my question.

KEVIN VERMEULEN: What you describe is, I think, per-destination load balancing, right? MDA is not crafted to discover per-destination load balancing, but in the literature there are already people who have developed extensions to the MDA to handle the per-destination load balancing ‑‑
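The distinction matters for tooling. A toy sketch contrasting the two cases, using the same illustrative hash model as before: a per-destination balancer hashes only the address pair, so the port-based flow ID that Paris traceroute and MDA manipulate never changes the chosen path.

```python
# Per-destination load balancing: only (src IP, dst IP) feed the hash.
import hashlib

def per_destination_next_hop(src_ip, dst_ip, n_paths):
    """A balancer that hashes only the address pair, ignoring ports."""
    digest = hashlib.sha256(f"{src_ip}->{dst_ip}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# Ports are not even an input here, so varying the flow ID cannot
# change the path: repeated probes always land on one next hop.
print({per_destination_next_hop("192.0.2.1", "198.51.100.7", 16)
       for _ in range(100)})

# Varying the destination address, as the questioner suggests, does
# expose the other paths.
print(sorted({per_destination_next_hop("192.0.2.1", f"198.51.100.{i}", 16)
              for i in range(1, 101)}))
```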

CHAIR: Great thank you.

(Applause)

And finally Rob. I can see the feedback coming in; I think we accepted a bit too many talks for this session.

ROBERT KISTELEKI: So here is the tools update for the MAT Working Group. I am not going to repeat everything I said last year. Version 4 probes are out now; we are working through our backlog, so we have the new batch and are working on more probes. If you have applied, we will get to you, or at least that's the plan. No promises on exactly when, but we are working on it.

About the VM anchors: I mentioned last time that we have been exploring whether we can do RIPE Atlas anchors as VMs rather than hardware. We are about to conclude this; the pilot phase was concluded in June-ish, I think, and now we are going to publish what we are going to do, but the bottom line is already on the slides. We plan to allow individuals to join the effort, so if you can run a VM in your own network we will take it, most likely. We are also cooperating with big cloud providers, Amazon in particular, to install VM anchors in their networks, so please do not nominate Amazon to run your VM anchor; that is not going to fly well. We published an article comparing physical and virtual anchors; the bottom line is that if you don't know which one is which, you cannot tell the difference. We also worked on streamlining the application process: from the user's perspective it's way easier, and on the back end we make fewer mistakes because the whole thing is streamlined.

We are about to conclude some work in the office on making the UI and the APIs faster, so hopefully we can scale up better, which will allow a whole lot of new use cases. Some of my colleagues are experimenting with putting RIPE Atlas data into BigQuery; some of the things they want to extract are way faster than what we could do ourselves, so I expect that we will come out with this and open it up in some sense, shape or form to the world. We are also looking at the feasibility of software probes, which would mean you get a software package, install it wherever you want, and then register that machine as a probe. There are a lot of things we have to look at first, so I hope to report back on that soon on the various mailing lists and so on.

Every now and then we do periodic security reviews, meaning we ask an independent company to look at how we are dealing with things. Another one is coming up this year, especially because, if we want to do software probes, we need to look at what doors are still open in that sense.

And then more support for probe selection: Emile and others have been working on figuring out whether probes are similar or dissimilar, so that if we expose this to the users, you can ask "give me as diverse a set of probes as possible for my measurement", or you can say "this one fell out, I want a replacement that is really, really like the previous one; please do not give me anything else", and some other options. We are also working on improving the real-time data feed, that is, streaming. My expectation, and I think it is a fair assumption, is that more people will want to use real-time data, more use cases will be built on it, and the interface is also using it more, so that's just a nice thing to do. And we are happy that people, you guys, are also using RIPE Atlas for various things; there was a measurement SIDN Labs did about the KSK rollover, really good stuff.
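For anyone who wants to try the real-time feed mentioned here, a minimal sketch using the third-party ripe.atlas.cousteau library; measurement 1001 is one of the long-running built-in measurements, and the channel name follows the library's documentation at the time of writing.

```python
# Subscribe to live results for one measurement via the Atlas streaming
# API, using the ripe.atlas.cousteau client (pip install ripe.atlas.cousteau).
from ripe.atlas.cousteau import AtlasStream

def on_result(*args):
    result = args[0]          # each result is a dict from the streaming API
    print(result.get("msm_id"), result.get("from"))

stream = AtlasStream()
stream.connect()
stream.bind_channel("atlas_result", on_result)
stream.start_stream(stream_type="result", msm=1001)  # built-in measurement
stream.timeout(seconds=30)    # consume events for half a minute
stream.disconnect()
```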

RIPE Stat. I mentioned this last time as well: it is operating at a really high load, some 60 million queries a day, which is a lot for a small team like ours. So we are dealing with that; we are trying to scale up the infrastructure, finding the bottlenecks and fixing them. We are also working together with the other RIRs to see how we can improve the user experience, but also what kind of other data sets we can pull in; most notably, abuse-c would be really nice to have for the other RIRs as well.
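Every RIPE Stat widget is backed by the same Data API pattern, which is also the easiest way to script against it. A small example using the abuse-contact-finder endpoint mentioned above; the response layout may evolve, so treat the parsing as illustrative.

```python
# Query the RIPE Stat Data API: every data call follows the pattern
# https://stat.ripe.net/data/<endpoint>/data.json?resource=...
import requests

resp = requests.get(
    "https://stat.ripe.net/data/abuse-contact-finder/data.json",
    params={"resource": "193.0.0.0/21"}, timeout=30)
resp.raise_for_status()
payload = resp.json()
print(payload["status"], payload["data"])  # "ok" plus endpoint-specific data
```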

The Looking Glass has been enhanced: as you may have heard, RIS has switched to a different internal infrastructure, so RIPE Stat follows that one. Upstream visibility is a feature that was released last time; it's available in RIPE Stat, please take a look. It gives you really nice graphs and visualises the changes that we observed in the routing system. And finally, we have a public feature tracker, so you have a simpler way of influencing which developments are actually done in RIPE Stat. This is in a test phase with a third-party tool; we will evaluate whether this is actually the right way of doing it, probably at the end of the year.

OpenIPMap, or RIPE IPmap as it's called now, is now fully out there. It's integrated with Atlas: Atlas is feeding data into it, and it is feeding data back to Atlas, and we are working on various other features to make it more useful for people. Research activities: we are participating in various Internet measurement venues, and these papers have been accepted at IMC. The MDA one you already heard about; "Clusters in the Expanse" will be explained tomorrow, so if you are interested, join the v6 Working Group session. As for the other research activities, I am not going to go through this list; if some of these titles catch your eye, read them. Finally, if you want to keep up to date with what we are doing, and you want to tell us things or just hear what we are about to tell you, these are the two Twitter accounts you can follow; if you don't follow them, it's a good time now. That's it, thank you.

(Applause)

CHAIR: Thank you Robert. Any questions for Robert?

AUDIENCE SPEAKER: Thanks again for the infrastructure, it's great; I use it all the time. A curiosity about your cloud deployment: is the idea to have one VM per region that they have?

ROBERT KISTELEKI: Ideally, yes. That is a discussion we are having with Amazon, and they seem to be fully engaged with that. I would love to do that with other providers, DigitalOcean and so on, and some of these are already supporting us in this; they have offered VMs so we can play around with them, so that is awesome.

AUDIENCE SPEAKER: Hi, Christian, RIPE NCC. I just wanted to mention that right after this session, in the coffee break, we are going to have a RIPE Stat Q&A session, so if you are interested in RIPE Stat you can join us at the meet-and-greet stand.

DANIEL KARRENBERG: I have a comment, no questions. It's great to see that the stuff the RIPE NCC provides is getting widespread use, and I would hope that it develops a little bit more into products, like Robert has said, that are useful to the ISPs. The vibe in this room is really, really good. If you are an ISP and a RIPE NCC member, it might be a good idea to actually go to the other side, to RIPE NCC Services and the RIPE NCC Board and so on, and let them know that you really appreciate this stuff. Because we sometimes get some undercurrent of "this is all very interesting for academics, but is this really something the RIPE NCC should be doing?", so if you feel that the RIPE NCC should be doing this, it might be a good idea to let that be known on the other side of things. Thank you.

CHAIR: Thank you. Wise words. All right. I think that concludes the day. Make sure to fill in the surveys and do everything that is needed to be a good community member and then it's time for coffee.

LIVE CAPTIONING BY AOIFE DOWNES, RPR
DUBLIN, IRELAND