#
# $Id: ripe63minutes,v 1.3 2012/03/29 16:08:56 jim Exp $
#

RIPE 63 -- DNS Working Group Minutes -- Session 1
=================================================

A. Administrative matters
-------------------------

Peter Koch opened the session by introducing himself and the co-chairs Jim Reid and Jaap Akkerhuis, then presented the agenda for the two DNS Working Group sessions.

B. Matters arising from RIPE 62 minutes and review of action items
------------------------------------------------------------------

The minutes of the previous meeting were unanimously approved. There were no open action items to review.

Jim Reid brought up an item that had been dangling over the Working Group for a long time: the task force on an interim trust anchor that had been set up five years before. In the meantime, IANA had set up a trust anchor repository, which in turn had been made obsolete now that the root had been signed. There was no longer any need for the task force, which had been idle for a long time. He asked the attendees whether it would be a good idea to close the task force. Peter Koch asked for a show of hands. Most of those present were in favour, nobody was against and there were three abstentions. He concluded that there was consensus for closing the task force and thanked Jim for bringing it up.

C1. IETF Reports -- Richard Barnes
----------------------------------

Richard Barnes presented details of the DANE Working Group. The working group was developing ways for domain holders to publish security properties such as certificates and public keys. He explained that there was an initial protocol in the works, proposing a resource record with a number of fields that allow one to make statements about TLS and how the signature is bound. The use cases document was complete; the protocol document was still being worked on, getting fairly well fleshed out, with issues starting to be closed. Richard Barnes said this would be a good time for other people to review the documents and submit comments to the working group mailing list. He also invited comments and questions from the room.

Wolfgang Nagele said there seemed to be some confusion about the current implementation in Google Chrome and asked whether the implementation that had been announced was based on the draft. Richard said it was based on one of the initial proposals that was a predecessor: conceptually similar, but with slight differences. Wolfgang clarified that he was referring to the current implementation and the fact that it required one to fold the DNSSEC chain into an extension on the service side and continuously refresh it. He asked if that was still part of the protocol. Richard answered that this had never been part of the protocol; it was a separate concept that Google had put forward. It had not been considered in DANE, but had come up a little in TLS, possibly as an extension of TLS to carry DNSSEC information. DANE, however, was focused on defining the record format that would be used in such a system.

A speaker from the audience asked about the current status of plans, and any known rumours, about browser support for use case number three. Richard answered that he knew Firefox in particular had been involved in the working group and was tracking progress, and, as Wolfgang had mentioned, Chrome had a prototype out in stable release, implementing use case number two. He thought the browser vendors were actively interested, looking at it as added security for their products and their users.
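As an aside for readers unfamiliar with the record under discussion: the format was still a draft at the time of the meeting, but the field layout described (statements about TLS and how the certificate is bound) is what was later standardised as the TLSA record. A minimal sketch of looking one up, assuming dnspython and a zone that actually publishes such a record at the hypothetical name used below:

    # A minimal sketch of a DANE-style lookup with dnspython. The field
    # layout (usage, selector, matching type, certificate data) follows
    # what was eventually standardised as TLSA; the draft discussed at
    # the meeting was still in flux.
    import dns.resolver

    def fetch_tlsa(host, port=443, proto="tcp"):
        """Query the TLSA record for a TLS service and print its fields."""
        qname = f"_{port}._{proto}.{host}"
        try:
            answer = dns.resolver.resolve(qname, "TLSA")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        for rdata in answer:
            # usage/selector/mtype say how the certificate data is matched
            print(qname, rdata.usage, rdata.selector, rdata.mtype,
                  rdata.cert.hex())
        return answer

    # Example (hypothetical name -- assumes the zone publishes TLSA):
    # fetch_tlsa("www.example.net")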
The speaker from the audience asked whether there had been any push-back or political lobbying. Richard answered that there had been active contributions from that community.

Peter Koch asked the attendees how many had been aware of the project before that morning, and about a third to a half raised their hands. He then pointed out that, apart from Richard Barnes, one of the Working Group chairs of that group was in the room and both would be available for further discussion.

C2. DNSCCM -- Sara Dickinson
----------------------------

http://ripe63.ripe.net/presentations/151-DNSCCM_RIPE63.pdf

Peter then introduced the next speaker, an extra item on the agenda: Sara Dickinson from Sinodun Internet Technologies Ltd.

Sara Dickinson thanked the Chairs for accommodating the talk and explained she would be giving a progress report on DNSCCM: DNS configuration, control and monitoring. It was a software tool designed to provide those three functions. Behind it was NSCP, a single cross-platform and cross-implementation protocol for name servers. The motivation behind the project was to ensure DNS high availability, and one way to achieve that was genetic diversity. Sara explained that the idea of NSCP was to bring together disparate nameservers in order to ease management.

Regarding the current state of development, she explained that they were still working on the implementation of NSCP, with the support of NL.net. Although NSCP was still in draft stage, they believed doing an implementation was the right thing to do in order to get more feedback and offer people something to try. At the moment it was a prototype, not production ready, but the plan was for an alpha release towards the end of the year. Sara then gave a live demo of the operation of the software. Afterwards, she discussed some areas of future improvement, such as development of a graphical interface that would allow monitoring of multiple nameservers, core visualisations with statistics, and group management. She said there was a project website and she was also available for direct contact.

D. DNS operations at RIPE NCC -- Wolfgang Nagele
------------------------------------------------

http://ripe63.ripe.net/presentations/124-RIPE63_WolfgangNagele_DNS_update.pdf

Wolfgang gave an overview of the DNS operations at the RIPE NCC.

Peter Koch noted that he wanted to clarify the removal of the "gov.il" domain. Wolfgang explained that it had been removed by the domain owners themselves. Peter Koch added that there was an action point on the Database Working Group agenda to get rid of some of the attributes that were only needed in forward but not in reverse DNS.

Robert Martin-Legene asked about the use of DS records in the reverse tree and whether there were indications that people were using it for DNSSEC in an unexpected way. Wolfgang answered that they could not definitively confirm that, but the NCC would monitor the situation. Robert continued by noting that the TCP graph looked very static in a very dynamic environment, and asked whether they might have hit a server limit. Wolfgang answered that this was not the case. The NCC had done extensive load tests before rolling out the signed root, and TCP traffic levels could not have been the bottleneck. Server limits on query rates would probably be a thousand times higher than the actual traffic level.
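To make Robert's question about DS records in the reverse tree concrete, a small illustrative sketch of checking whether a reverse-DNS delegation has DS records published at its parent. The zone name is only an example; it assumes dnspython and a resolver that can reach the in-addr.arpa tree:

    # Check for DS records on a reverse-DNS delegation (illustrative).
    import dns.resolver

    def has_ds(zone):
        """Return True if the parent publishes DS records for this zone."""
        try:
            answer = dns.resolver.resolve(zone, "DS")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        for ds in answer:
            print(zone, ds.key_tag, ds.algorithm, ds.digest_type)
        return True

    # e.g. has_ds("193.in-addr.arpa")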
Peter Koch then asked about the secondary service for developing top-level domains. As new TLDs were expected to emerge over the following months, he wanted to know the RIPE NCC's position regarding those: would the NCC provide secondary service for any of them? Wolfgang clarified that they were talking about the new TLDs approved by ICANN. He explained that this was a separate issue from the secondary service normally provided by the RIPE NCC, which was specifically for developing countries that might not have enough funding to arrange stable DNS service for their ccTLD. The NCC would continue to offer that service for any country that needed it. The new gTLDs, however, would be ruled out: first because they were not ccTLDs, and second because anyone who could afford to apply for one of those domains should also be able to fund the gTLD's DNS infrastructure.

E. Knot DNS, a new high-performance authoritative name server -- Ľuboš Slovák, CZ.NIC
--------------------------------------------------------------------------------------

http://ripe63.ripe.net/presentations/145-KNOT-20111103-LS-RIPE63.pdf

Ľuboš Slovák presented Knot, an authoritative DNS server developed by CZ.NIC Labs. It offered performance comparable to or better than the most widely used implementations, together with advanced functionality.

A member of the audience asked about the increase in memory footprint mentioned in the presentation and whether it was possible to provide some numbers. Ľuboš explained that it depended on the zone file: for example, it could be four times the amount of memory that the zone occupied on disk. Depending on the underlying OS, it could vary between three and five times. Ľuboš confirmed that Knot's quick hash algorithm and quite complex internal data structures affected the memory footprint. Ľuboš was asked when support would be available and said this was planned to begin in the following two or three weeks.

Daniel Karrenberg commented that while working on the NSD project, his role had been to test the software. To achieve that, they had built a test lab that sent the same queries to both NSD and BIND and analysed the differences. He said the Knot team could do the same in order to gain more acceptance of and confidence in the software. The code for doing that was still available -- not in a great state, since it was just a hack -- but they could use it to compare Knot, NSD and BIND. Ondřej Surý, one of the co-developers of the project, explained that CZ.NIC actually had similar code in use, and they had gathered two months of CZ.NIC traffic in order to replay it. The code was not publicly posted anywhere, but if anyone wanted to use it, CZ.NIC would make it available.

Emile Aben relayed a question from the remote participation chat. Anand Buddhdev from the RIPE NCC wanted to know what the plans were for future development models and whether development would continue within CZ.NIC or be opened up. Clarifying, he asked what CZ.NIC's long-term plans to support the software were. Ondřej responded that Knot was just one project CZ.NIC was working on. They wanted to open it up as much as possible and welcomed other people joining them. CZ.NIC wanted to support the project for the long term, fixing bugs and helping others with deployment.
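A minimal sketch of the comparison testing Daniel described: send the same query to two authoritative servers and diff the answers. The server addresses below are placeholders; it assumes dnspython and two servers authoritative for the queried zone:

    # Send one query to two servers and report any difference in the
    # response code or answer section (RRset order is not significant).
    import dns.message
    import dns.query

    def compare_servers(qname, rdtype, server_a, server_b):
        """Compare the responses of two servers to the same query."""
        query = dns.message.make_query(qname, rdtype)
        resp_a = dns.query.udp(query, server_a, timeout=2)
        resp_b = dns.query.udp(query, server_b, timeout=2)

        def summarise(resp):
            return (resp.rcode(),
                    sorted(rrset.to_text() for rrset in resp.answer))

        if summarise(resp_a) != summarise(resp_b):
            print("MISMATCH", qname, rdtype)
            print("  A:", summarise(resp_a))
            print("  B:", summarise(resp_b))

    # e.g. compare_servers("example.cz", "SOA", "192.0.2.1", "192.0.2.2")

A real test lab of the kind described would replay captured production traffic through such a loop rather than single hand-picked queries.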
F. Beyond Bind and NSD -- Peter Janssen, EURid
----------------------------------------------

http://ripe63.ripe.net/presentations/154-RIPE63-DNSWG-BeyondBindAndNSD-PeterJanssenEURID.pdf

Peter Janssen from EURid presented the new nameserver they had developed, explaining the motivation behind it and giving a performance comparison with BIND and NSD. He explained that the intention was to run some of the EURid nameservers on the new software, called YADIFA. It was a long-term project and they were committed to maintaining the software.

Daniel Karrenberg (RIPE NCC) asked if it was correct that two of the EURid nameservers were already running on the new platform, as stated in the presentation. Peter confirmed that was the case. Daniel Karrenberg welcomed the news. He renewed the offer to provide the old code developed for replaying DNS traffic when NSD was first developed. Wolfgang Nagele from the RIPE NCC commented on the K-root capacity testing that had been done before rolling out the signed root zone; that had led to a white paper being published that could provide more information. Daniel Karrenberg added that having more than two authoritative DNS server implementations would be in everyone's interest. Peter Janssen ended by explaining what YADIFA stood for: "Yet Another DNS Implementation For All".

RIPE 63 -- DNS Working Group Minutes -- Session 2
=================================================

G. What was all that traffic to the root? -- Wolfgang Nagele, RIPE NCC
----------------------------------------------------------------------

http://ripe63.ripe.net/presentations/125-RIPE63_WolfgangNagele_K-root_traffic_spike.pdf

Wolfgang Nagele presented a report on the unusually high query load on the root name servers that occurred for a brief time during the summer of 2011.

Jaap Akkerhuis asked if the other root operators Wolfgang had mentioned as involved in the investigation had seen a similar pattern. Wolfgang responded that they had confirmed seeing it, but only some of them had done an in-depth analysis. He said Duane Wessels had promised to write a report about this.

H. DNSSEC-Trigger -- Olaf Kolkman
---------------------------------

http://ripe63.ripe.net/presentations/172-RIPEWG-DNSSEC-trigger.pdf

Olaf Kolkman presented an application developed to test DNSSEC functionality in DNSSEC-hostile environments such as behind NATs, on hotel networks or in a neighbourhood coffee shop.

Roy Arends commented that, with a little help from Olaf and Jaap, he had been able to install the application the previous Monday. While he was not a power user, the software had worked very well when he tried it in the hotel. He added that he was trying to make it work with OpenVPN, as it overrode the DNS settings. Once that was done he would start pushing it within Nominet as well, as he was quite impressed with the functionality. Olaf Kolkman asked Roy to post any new insights into making it work with OpenVPN to the mailing list. Olaf said that the software was intended as an end-user tool. If end users ran into problems, they should be able to find the log files and discover what was happening in order to troubleshoot it -- something he urged all users to do. Roy also added that he had used the tool with several websites that were set up to test DNSSEC, and it had detected all of them correctly.
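A rough sketch of the kind of check Roy describes: query through a validating resolver with the DO bit set and inspect the AD flag in the reply. It assumes dnspython and a validating resolver on localhost (as dnssec-trigger arranges); the behaviour for broken zones is noted in the comments:

    # Ask a validating resolver and inspect the AD (authenticated data)
    # flag. A correctly signed zone should come back with AD set; a zone
    # with deliberately broken signatures should yield SERVFAIL instead,
    # i.e. no usable answer at all.
    import dns.flags
    import dns.message
    import dns.query

    def is_validated(qname, resolver="127.0.0.1"):
        """Return True if the resolver set the AD flag on the answer."""
        query = dns.message.make_query(qname, "A", want_dnssec=True)
        response = dns.query.udp(query, resolver, timeout=3)
        return bool(response.flags & dns.flags.AD)

    # e.g. is_validated("ripe.net")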
Richard Barnes said the presentation clearly showed the tool performed an interesting function in determining DNSSEC support on the local network. He asked how applications could access that functionality. Olaf responded that the software reconfigured the machine so that the nameserver pointed to the local interface. That meant software wanting to work with the tool would only have to look at the AD bit, assuming of course the local machine itself had not been compromised. If DNSSEC validation failed, applications would not receive an answer. Although this was not a great user experience, it was what the software was designed to do.

A speaker from the audience pointed out that the software did not work with the DNS resolvers of the hotel. Olaf confirmed that the Swisscom network used in many hotels did not work with version 0.7 of their software. It was a more complicated problem to solve because of the way Swisscom's network handled DNS packets.

I. The IDN Variant Issues Project, an Update -- Joe Abley
---------------------------------------------------------

http://ripe63.ripe.net/presentations/157-jabley-ripe63-variant-issues.pdf

Joe Abley presented a study by ICANN to gain insight into the problem of variants for the same IDNs. Six case studies had been completed in the following six scripts: Arabic, Chinese (Han script), Cyrillic, Devanagari, Greek and Latin.

K. News from the DNS-EASY -- Stéphane Bortzmeyer
------------------------------------------------

http://ripe63.ripe.net/presentations/20-for-ripe-dns-wg.pdf

Stéphane Bortzmeyer presented a report on the "DNS Health" conference that had been organised by the Global Cyber Security Center in Rome together with ICANN, as well as the workshop on "DNS Security Stability and Resiliency" that followed it.

Peter Koch commented that the SSR workshop was about the security, stability and resiliency of the DNS system, and that in ICANN circles the people in suits usually meant something different when talking about DNS than the people in T-shirts. In that context, looking at the agenda, it seemed that the "take-down" industry carried a lot of weight there. He asked Stéphane to elaborate on how much of the workshop had dealt with the DNS infrastructure itself and how much had addressed issues that happen with the DNS rather than to the DNS, together with his stance on the matter. He wondered if perhaps there was a loss of focus on the stability of the infrastructure because a couple of those "world-leading security experts" appreciated the DNS as the hammer for the nails they wanted to address.

Stéphane Bortzmeyer said that for a long time the problem had been that all the take-down requests had been directed at the registries, as in the Conficker case. Registries were asked to act or risk being regarded as bad Internet citizens, on the assumption that if a domain was deleted at the registry it would disappear from the Internet. More recently, however, the trend was for more and more requests to be directed at the resolvers, as had happened in France. Regulators for online gaming were also asking ISPs to filter illegal gambling sites at the DNS resolver level. The same could happen when dealing with a botnet or a similar threat: instead of asking the registries to remove a domain and risking a refusal, it could be done at the resolver.
Take-downs were thus no longer only a problem for registries, but also for ISPs -- a different population that was less represented in bodies like the DNS Working Group at RIPE. Not many ISPs had been represented at the SSR meeting; most attendees came from the registry industry. He was unable to identify them because the Chatham House Rule applied to the meeting: participants could not be identified or their comments quoted directly. It was not clear how ISPs would respond to take-down requests. However, in most countries an ISP running a DNS resolving service (which would be the typical case for an ISP) would be required to do some sort of filtering, due to demands from many different organisations to fight child pornography, intellectual property infringement and so on. Take-downs might therefore no longer bother domain name registries, but would go directly to ISPs. That would probably be the next big change.

L. Discussion "Domain Name Synthesis -- For Fun, Profit and Law Abiding Citizens"
---------------------------------------------------------------------------------

The panel was moderated by Peter Koch and was composed of João Damas from ISC, Matthew Pounsett from Afilias and Patrik Fältström from Cisco. The panellists introduced themselves with a short description of their affiliation and activity.

Patrik Fältström mentioned that ICANN had received a question in early spring 2011 from the Governmental Advisory Committee on its view of domain name blocking. The response was documented in document number 50, a two-page document translated into six languages, which he encouraged the attendees to read. In summary, it said that blocking was not a black-and-white issue, but more a question of what harm a certain action on the DNS flow would create. It also urged everyone trying, or interested in trying, something like blocking, synthesising or changing responses to weigh the balance between benefit and harm, because any interference with the flow of responses could affect services and anything else on the Internet. After publishing that document, they were also investigating various kinds of reputation systems: who would be impacted, as well as general implementations regarding blocking.

Peter Koch thanked Patrik for broadening the scope of the discussion. He noted that the title of the panel referred not to blocking for security purposes, but to different flavours of response rewriting, extending perhaps even to the Sitefinder example of some time ago. He then mentioned that Stéphane had seeded the ground with the filtering discussion in Rome, where one of the issues brought up was self-inflicted versus third-party-inflicted filtering, which might be looked at differently. Some of the statements read so far on blocking, rewriting and filtering gave a strong sense that all these measures were challenging the stability of the Internet or putting it at risk. It might seem contradictory for some organisations to hold up warning signs with one hand while supporting blocking for purposes such as botnet fighting with the other. He asked João if he could say something about that.
João clarified that Peter was probably referring to the fact that ISC had a very public position against government interference, for whatever reason, as a means of blocking Internet-wide access to a given set of names, while at the same time having implemented things like support for NXDOMAIN redirection in BIND. Peter Koch added RPZ to that list. João continued by explaining that in his opinion the two had nothing in common; they were separate issues. Blocking things at the registry level, where it affected everyone, was quite different from blocking at the local level, where it affected only local users (especially in enterprises, where the demand for that functionality lay). He said it was more a proactive move, since it was a fact that the DNS was used by the bad guys, and that was not going to change. Some of those bad things could be mitigated by things like RPZ. That might explain the perceived difference in ISC's position in two completely different environments.

Matthew Pounsett commented that he had something to add, even for people who saw the two as very similar. When one started rewriting or blocking DNS answers, it reduced the coherence of the DNS system and the stability of the Internet. To justify doing that, one would have to increase stability in another way. If the two were balanced out somehow, then it could be justified: for example, using RPZ to block mail domains locally would help to keep people from being compromised, particularly if they were in control of how it was done. On the other hand, it would be much harder to justify blocking further away from the user, as it had a wide range of effects, was not under the control of the user and in some cases had no benefit anyway: for example, commercial rerouting for advertising.

Patrik said that it was a complicated issue. One had to remember that if domain name blocking or rewriting was used in a country, that country was the decision-maker on the domain. Since the Internet was global, the action would affect users outside of that administrative domain. He thought it was very important to view registries and resolvers as one of the categories of intermediaries. The current trend among governments and regulators, at least in the EU, was that intermediaries were not allowed to touch the packets whatsoever when flowing through their networks. If there were to be exceptions to that rule, they would need to be based on legislation -- that was what the EU strongly suggested. One would therefore have to be careful, when implementing such a system, not to be breaking any laws. Additionally, there was a big difference between blocking domain names (such as a TLD whose name was itself illegal) and blocking because a domain was used to access services or information that was not wanted. The community needed to help the discussion clearly separate the two cases.

Jim Reid wondered if the idea of non-interference with packets extended to decrementing the hop count as packets traversed the network. Patrik replied that from a general point of view it did. There were exceptions in the directive to ensure that someone providing those services could protect the network from incidents and malicious activity; in general, however, the rule was that intermediaries were not allowed to touch anything. João asked how intermediaries were defined.
Patrik explained that it was not only ISPs; the definition included search engines, for example. João said that, looked at closely, interference with DNS traffic at the level of the resolver was not rewriting. Patrik replied that it was good to have those technical discussions; he just wanted to make it clear that while those in the room would probably welcome an agreement on which intermediaries were not allowed to touch the flowing information, there were also many parties that did not want such a rule.

Jim Reid commented that it was clear that bad guys did bad things, and in that case they needed to be stopped from their bad practices. However, the implementation of the NXDOMAIN rewriting feature in BIND sent a bad message that it was OK to do such a thing, something he found regrettable. João disagreed, saying he thought Jim was wrong. He explained that they had debated the issue internally at ISC. NXDOMAIN rewriting was not a new development and was already widely in use. Strictly speaking, it was outside how the protocol was supposed to work. One thing to remember was that there was a gap: when hitting the resolver, one was not hitting the authoritative server, and the answer was not necessarily the same answer the authoritative server had provided to begin with. There was already some data manipulation going on. João said he never liked external manipulation at any level in the DNS. What had pushed ISC in that direction was the fact that operators were already doing things like NXDOMAIN rewriting. By not having that functionality in BIND, people would actually move away from BIND to software at a much lower level of protocol compliance. The logic was: "the people who want to do NXDOMAIN rewriting are doing it anyway". If ISC did not provide the software to meet that requirement, those organisations were going to use poorer-quality DNS software that could be more harmful. In addition, there were companies hacking BIND to provide that functionality, creating the obvious risks of code forks and software maintenance problems. As for the second part, about RPZ, ISC was not providing tools for the bad guys; they already had those tools. It was a typical case where the good guys suffered because the bad guys were able to take advantage.

Olaf Kolkman mentioned that when it came to blocking, he could go a long way with taking into account locality and other aspects. He asked the panellists what the trade-offs were between blocking and rewriting in light of commercial pressure, in cases where that meant not all of the web was visible to the user. He clarified that he was trying to understand the issue because, as a Free/Open Source Software provider, they had noticed the same set of pressures. One way he could interpret it was that blocking was useful within one's own administrative domain, but if the policy implied rewriting, it was not clear they were doing the right thing. He wanted to make sure that he did the right thing as an implementer. Patrik said he had an extremely strong opinion on that matter: blocking was the only solution, while rewriting was a definite no-no. It was preferable to give back no response at all, as in the case of DNSSEC. Matthew agreed, and also said there were probably cases where rewriting could be done without too much trouble, but only a small number of them, such as in enterprises where the administrative domain was small and people were close together. He concurred that in the general case, if it needed to be done at all, blocking was the only option that would work widely enough to prevent serious problems.
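To make the distinction the panellists are drawing concrete, a toy resolver-side policy sketch. It is purely illustrative -- not the behaviour of any particular resolver, although RPZ-style policies can express both modes -- and all names and addresses in it are hypothetical:

    # "Blocking" returns NXDOMAIN (no answer); "rewriting" synthesises a
    # different answer. The comments note why the panel prefers the former.
    from typing import Optional

    BLOCKLIST = {"bad.example"}       # hypothetical policy data
    WALLED_GARDEN = "192.0.2.99"      # hypothetical landing-page address

    def lookup_upstream(qname: str) -> str:
        return "198.51.100.1"         # stand-in for real resolution

    def apply_policy(qname: str, mode: str) -> Optional[str]:
        """Return an answer, or None for NXDOMAIN-style blocking."""
        if qname not in BLOCKLIST:
            return lookup_upstream(qname)   # normal resolution
        if mode == "block":
            return None                     # honest failure: NXDOMAIN
        if mode == "rewrite":
            return WALLED_GARDEN            # synthesised answer: breaks
                                            # DNSSEC validation and any
                                            # non-web application
        raise ValueError(mode)

The comments on the "rewrite" branch anticipate the points made below: a synthesised answer cannot validate under DNSSEC, and it assumes the client was a web browser.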
João agreed as well, pointing out that when most people talked about redirection or rewriting, they were assuming that the application involved was web browsing; if another application was involved, the result would be unpredictable. Patrik added that people looking to implement blocking were usually trying to solve a policy problem, and using the DNS for that was not the right approach. Olaf Kolkman said it was good to allow people to adhere to their own policies, and Matthew added that giving more control to the user was sometimes a difficult choice.

Maria Häll from the Swedish Government said she appreciated the dialogue and understood that the issues the community was facing were not black-and-white. The committee she was part of had to give advice and prepare statements, and towards that goal she welcomed the discussion with the technical community. It was important for them to learn the difference between things like blocking and rewriting, and she urged the community to continue its dialogue and education efforts.

Patrik posed a question for the audience to reflect on regarding the new TLD process: the hypothetical case of an illegal string being proposed as a top-level domain. He asked what the consequences would be if it were approved by ICANN, and what the reaction of the Internet community should be in such a case. João said it was a hard problem, since with registry-level blocking one could always switch to a different TLD. If the blocking was at the root, that would become impossible, since there was only one root, and breaking that assumption -- making the root incoherent -- would completely change the nature and extent of the problem. Patrik added that it was also possible for such a domain to be illegal and blocked only in some jurisdictions, and for its users to discover that only after investing in marketing for the domain. Since investigating the legal restrictions around the world was an arduous prospect, that responsibility was creating a lot of sleepless nights and uncertainty for those applying for new gTLDs.

Jim Reid commented that a large ISP had been doing rewriting for a while in order to increase ad revenue, but had decided to stop the practice in order to implement DNSSEC across its network. They had probably calculated that the revenue earned by continuing the old behaviour was not worth the cost of not having a secure DNS infrastructure. He wondered if the fact that domain name rewriting precluded the use of DNSSEC could be brought up as an argument. João said he understood how the decision had been made, since the financial people always trumped the technical ones, and DNSSEC, which was often seen only from a cost perspective, was sometimes not seen in the best light. Patrik noted that Cisco did something similar with NAT64, which it was advocating on some products, implementing DNS synthesis for nodes that only had IPv6 access. Matthew mentioned that other items to be factored in were the costs of doing the rewriting, including specialised equipment, as well as the loss of customer satisfaction when customers complained about access. João commented that part of the problem was that finance people only saw the issues caused by spam and malware and were ready to use existing tools to fix the problem. He added that if the ISP Jim mentioned came forward with a cost analysis of its action, that would provide valuable business information for others to draw on.
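As an aside on the NAT64-related synthesis Patrik mentioned: a DNS64 resolver fabricates an AAAA record for IPv6-only clients by embedding the IPv4 address in a /96 prefix. A minimal sketch of that mapping, using the RFC 6052 well-known prefix (deployments may use a network-specific prefix instead):

    # Embed an IPv4 address in the DNS64 well-known prefix (RFC 6052).
    import ipaddress

    WKP = ipaddress.IPv6Address("64:ff9b::")  # well-known /96 prefix

    def synthesise_aaaa(ipv4: str) -> str:
        """Return the AAAA address a DNS64 resolver would synthesise."""
        v4 = ipaddress.IPv4Address(ipv4)
        return str(ipaddress.IPv6Address(int(WKP) | int(v4)))

    # >>> synthesise_aaaa("192.0.2.33")
    # '64:ff9b::c000:221'

Such synthesised answers are, of course, exactly the kind of rewriting that cannot validate under DNSSEC, which ties this aside back to Jim's point above.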
Maria Häll added that the discussion in the EU system was about the distinction between blocking someone and the harm that did to the Internet infrastructure, which was a matter of concern. Another concern was the harm done to consumers, who would be unable to reach information in some areas, leading to fragmentation of the Internet. All those issues had to be sorted out before 12 January 2012, the start date of the new ICANN gTLD process.

Peter Koch mentioned that he had heard many reasons for doing blocking, e.g. combating spam. However, he had not seen any discussion of the governance aspects of those efforts. There was talk about a near real-time facility that would enable certain groups to influence resolvers to block or rewrite resolution data, yet a registry had liability towards its customers. Even in the Conficker case, where blocking had been handled manually, there had been a lot of collateral damage because domain name holders had had their names put on the block/take-down list. He asked the panellists for their ideas on the subject. Matthew said that there were all sorts of different situations and no easy answer one way or another; they were handled on a case-by-case basis. Patrik noted that this was one of the reasons he worked with human rights groups on such issues and looked at what intermediaries could do. He said that many take-down or filtering actions were not based on legislation. That was a concern that had to be addressed in order to have clear rules on the process.

João added that in many cases it was a problem of conflicting legislations and jurisdictions. He gave the example of a site in Spain that had been blocked because it was deemed illegal by the US administration, even though two trials in Spain had decided the site was legal. This was a case of one state imposing its judicial system and values on another. Patrik answered that before harmonisation could happen between judicial systems in order to avoid such problems, legislation first had to be in place to do that. A current problem, however, was the amount of action being taken without suitable legislation. Matthew disagreed with Patrik, mentioning that, particularly in terms of security and malware, the rules were covered by contractual obligations, and a lot of take-downs simply addressed violations of acceptable use policies. Patrik responded that it was impractical to incorporate human rights protections into contracts. While providers were entitled to take whatever actions were necessary to protect their service, using a contract to override human rights was not a solution. João then asked what would happen if the Spanish judicial system were to order the operator of the .com registry to revert the information on the registry's servers located in Spain back to the original parties. Patrik agreed it was a good question. On this note, Peter Koch closed the discussion, offering the attendees food for thought, as the issues needed further attention.

Z. AOB (Any Other Business)
---------------------------

There was no other business. Peter Koch thanked the panel, the audience, and the RIPE NCC staff and stenographer for their assistance, ending the DNS Working Group session. He asked for feedback on the panel format and for suggestions for other panels at the next meeting.