9th NMRG meeting
Seattle, Washington
May 12-13, 2001

List of participants:
 1. (AP) Aiko Pras                    U Twente
 2. (DP) Dave Perkins                 SNMPinfo
 3. (AW) Andrea Westerinen            Cisco
 4. (BW) Bert Wijnen                  Lucent
 5. (FS) Frank Strauss                TU Braunschweig
 6. (JS) Juergen Schoenwaelder        TU Braunschweig
 7. (MB) Markus Brunner               NEC Europe
 8. (DH) Dave Harrington              Enterasys
 9. (JP) Jean-Philippe Martin-Flatin  AT&T Labs Research (Sunday)
10. (GP) George Pavlou                U Surrey (briefly on Sunday)
11. (NA) Nikos Anerousis              Voicemate (briefly on Sunday)

Note Takers: DH, JS

Agenda:
(1) MIB Item Lookup Service (AP)
(2) Recent IETF Activities and the NMRG (JS)
(3) Universal Information Models (JP)
(4) Simple Capabilities Tables / Session-Based Security (DP)
(5) MIB Testing (AP)
(6) Research on what can be done with SNMPv3 (to enhance deployment) (DH)
(7) Configuration Management (What are configuration objects? Which
    objects are good for cloning? ...) (DP)

Items (5)-(7) will be discussed if there is time available and/or at the
bar.

****************************************
12-05-2001
****************************************

MIB Item Lookup Service:

- AP starts to explain the MIB Item Lookup Service.
- DH acknowledges that there is a real problem, but there is no revenue
  potential and thus there is no vendor interest.
- AP says that vendors already support public MIB archives.
- BW asks how you deal with broken MIB modules.
- DP explains the concepts behind the MIB central web site:
  - partnering with SNMP Research, IWL, etc.
  - currently freely available as a test
  - once he gets content and tools, he will start charging
- DP asks: do managers really need access to MIB modules?
- BW doubts that mgmt apps would want to depend on the web.
- BW asks whether there are security implications, especially for
  real-life applications.
- DH talks about small applications and multiple MIB repositories in an
  administrative domain.
  - Internal use as part of an application suite, with one parser
    process, and then models are distributed.
    Reduces implementation duplication, etc.
  - External use, going to the Internet to get unknown MIBs from an
    Internet server.
- JS explains why there are actually two different problems.
- DP talks about the need for support for multiple MIB versions.
- A discussion on agent capabilities starts...
- DP explains that you cannot figure out value restrictions (e.g. size
  restrictions, enumeration subsets) by probing.
- AP explains how they handle different versions of a MIB.
- JS explains that existing MIB APIs must be supported in order to allow
  for an easy integration.
- DP asks whether there is a mechanism to return additional info, beyond
  what is in the MIB.
- JS asks whether this is a request for something that could be used for
  application-specific information.
- DP says that when multiple apps get defined, it can become obvious
  later that there is common info. I have always wanted to replace
  display-hint with a stored code fragment for developers.
- DH says that he would like to have a mapping from SNMP object to CLI
  attribute for vendor-specific use.
- AP says that there is currently no such mechanism.
- JS and AP disagree: Do we actually have three problems?
- JS says that there are a number of apps with libsmi interfaces to do
  MIB lookup. It would be helpful to allow this interface to continue to
  be used. In general, various toolkit interfaces must be mapped onto
  the MIB object lookup interface. The existing APIs should be able to
  be accommodated efficiently.
- AP says that the client may request the complete module from some
  external process, which may be on the same system or on a different
  system. This is similar to getting the file from the local system. A
  small piece of the client knows how to get the info from the external
  process. Then the client processes it the way it currently does.
- DP says that this reminds him of DNS lookup, where a proxy could cache
  info, etc.
- AP responds that caching should be done as much as possible.
- DP says that APIs vary in terms of the richness of the info they can
  provide. The schema for the meta-info would require sort of
  reverse-engineering the most common APIs.
- JS says that the protocol needs to support various lookup
  capabilities, for performance reasons. Pre-parsing the data into the
  server improves parsing performance. But a bad protocol mapping onto
  the various APIs could lose performance that would offset the gains.
- JS continues to explain that different apps may request different
  things, such as different versions of a MIB. It is important that two
  apps talking to the daemon may cause different info to be cached. AppA
  requests IF-MIB from rfc2030; AppB may want just any IF-MIB. The
  daemon needs to know which app needs what.
- BW doubts that version-specific lookup is needed.
- JS says that only things like an SMI-diff need this.
- AP summarizes that the original focus of the discussion was to get MIB
  info that is not locally known; JS wants an efficient API and keeping
  the definitions in a common local repository. Now he understands the
  need for an API.
- AP draws a better picture, which we can better agree with:
    r = remote repository
    l = local repository
  (diagram: several interconnected servers, each fed by remote and local
  repositories; a client talks to a local demon, which caches modules
  from remote and local sources and serves app1 and app2)
- The schema of the meta-info at the "demon" requires the reverse
  engineering of the APIs.
- JS: Is the latest version of a given module always good enough? This
  still requires views in the "demon".
- How many organizations put modules in encompassing documents? Cisco
  does it, Lucent for some stuff, and of course other standards
  organizations.
- A discussion about RFC publications and how to fix bugs in published
  RFCs starts...
- BW says that the IETF will handle this in the future by creating a
  central repository which contains authoritative versions of MIB
  modules.
- How do MIB updates / additions affect compliance statements?
- AP explains inter-server communication.
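JS's point about version-aware caching in the demon could be sketched as
a cache keyed by module name and revision; this is a rough sketch, and
the class name and revision dates are illustrative, not from any real
implementation:

```python
# Sketch of a version-aware MIB cache: modules are stored per
# (name, revision); a request without a revision returns the newest
# cached one, so "any IF-MIB" and "this exact IF-MIB" can coexist.

class MibCache:
    """Cache of MIB module texts, keyed by (module name, revision date)."""

    def __init__(self):
        self._store = {}                 # (name, revision) -> module text

    def put(self, name, revision, text):
        self._store[(name, revision)] = text

    def get(self, name, revision=None):
        """Exact revision if one is requested, otherwise the latest."""
        if revision is not None:
            return self._store.get((name, revision))
        revisions = [rev for (n, rev) in self._store if n == name]
        if not revisions:
            return None
        return self._store[(name, max(revisions))]  # ISO dates sort OK

cache = MibCache()
cache.put("IF-MIB", "1997-01-01", "... older revision ...")
cache.put("IF-MIB", "2000-06-14", "... newer revision ...")
# AppA wants a specific revision; AppB just wants the latest one:
assert cache.get("IF-MIB", "1997-01-01") == "... older revision ..."
assert cache.get("IF-MIB") == "... newer revision ..."
```

Keying on revision makes BW's objection concrete: the extra index only
pays off for tools like an SMI-diff that really need old revisions.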
- JS asks why HTTP? Why is HTTP considered to be simpler?
- AP says that it would be up to the client to decide which server to
  accept as authoritative.
- DP agrees that having an authoritative source for a MIB gives you
  somebody who can try to resolve the differences between the
  non-authoritative sources.
- DH asks whether we need to have an authoritative source for each
  annotation.
- JS says that maybe we just need user feedback about who they want to
  accept as the authoritative source. With Internet-available servers,
  users can determine which site they want to use.
- Side-stepping into MIB testing...
- AP has a report which shows that existing products do not implement
  some counters correctly:
  - vagueness of the object definition (intentional or not)
  - lack of testing
  - implementation constraints (chipsets)
  - costs of counter updates (slow access to hardware registers)
- JS remarks that people are not avoiding SNMP because the counters are
  inaccurate. There might be variations between vendors - but this is
  probably tolerable in most situations.
- DP asks whether applications would be more useful if definitions were
  unambiguous and implementations were correct. Perhaps the exercise is
  more academic?
- BW says that there is a demand for certification.
- Someone suggests designing a test methodology and letting a magazine
  run and publish the tests.
- AP asks which MIBs are most important to get methodologies defined
  for. There was no clear answer.
- DP says that IWL may be willing to work out an agreement with
  universities to do testing.
- DH argues that IWL will not let anybody publish the results.
- AP agrees that it will be better to remain independent.
- JS says that the effort to make it work with IWL may not be justified
  since students can just as easily write test scripts.
- DP says that there are benefits in not reinventing the wheel. IWL
  already has lots of tests: generic MIB tests, RMON tests, Printer-MIB
  tests, SNMP protocol verification, DOCSIS tests and so on.
  They are also working with Smartbits to generate traffic patterns.
- JS asks what role the NMRG could/should play here.
- AP says that he currently just wants to report what they are doing in
  Twente and that this is not an NMRG work item.
- DP talks about a MIB which can generate arbitrary notifications in
  order to test notification filtering and forwarding behavior. He wants
  to write a document which explains notification filtering and where
  you should have instrumentation. DH is interested in reading DP's
  proposal.
- Back to the lookup service. Write a document on the lookup service
  protocol between servers and potentially between clients and servers.
- Need to collect MIB lookup APIs (AdventNet, libsmi, WinMIB?, Scotty,
  NET-SNMP, ...) and try to figure out what the commonality between
  them really is.
- Focus on NET-SNMP support for the lookup protocol.

IETF Activities and the NMRG:

- JS brings up an access control issue: Does the proposal from Lauren
  open up access to targets so that applications can e.g. create /
  delete / modify targets, which can cause security problems?
- JS talks about the semantics of retries and timeout in the target
  table. The SNMP over TCP document should just say that they do not
  apply to SNMP over TCP. This is better than giving them other
  semantics (e.g. a connection establishment retry interval).
- JS asks how we want to publish the final version. Shall we submit it
  as an individual submission for Proposed Standard? AP likes the idea
  of putting it out as a Proposed Standard and seeing whether people
  implement it so that it can be moved along the standards track.
- JS will provide another revision which prepares the document to go
  onto the standards track (which implies that all NMRG prefixes should
  be removed).
****************************************
13-05-2001
****************************************

- JP gives a presentation on Universal Information Models (UIMs):
  - traditional approach:
      brainstorm -> conceptual info model -> data models (SNMP MIB, CIM
      schema) -> representation/encoding
  - problems:
    o quality (errors in data models, missing pieces in data models)
      - speed of the standardization process (too fast in the critical
        section)
      - best experts in technology often not involved
    o reinvent-the-wheel anti-pattern
      - groups work in their own church (SNMP, COPS, OSI, etc.)
      - lose time (no reusability)
      - terminological confusion
    o right level of abstraction
      - need more intermediate levels to go from brainstorming to
        concrete implementation
  - proposed approach:
      single universal model (UML classes) -> data model (SNMP MIB, CIM
      schema)
- JP argues that universal models are more attractive to technology
  experts than data models.
- DP and DH say that UML is not readily usable with industry tools
  because (a) many engineers in networking still do not know UML and
  (b) tools still do not work reliably enough.
- DH argues that UML does not produce products. Coders who program from
  UML diagrams lack information. DH notes that models are not robust:
  the real world is changing, and models usually do not reflect those
  changes or get in the way.
- AW argues that conceptual models are important, not necessarily UML.
- DP argues that the typical MIB designer usually lacks an
  object-oriented background and thus does not know how to use UML.
  They just end up doing ad-hoc MIBs.
- AP argues that vendors do not really have an interest in delivering
  open management interfaces. And customers do not ask for them enough.
  The customers' interest is to get management solutions - they do not
  care how they are realized.
- JP continues with his presentation:
  - How does the proposal address the problems identified above?
    o makes the process slower
    o UIMs increase the interest of the best experts
    o better reuse
    o reduced terminological confusion
    o partly addresses level-of-abstraction issues
  - How to address the remaining problems:
      conceptual UIM               analysis phase
           |
           v
      specification UIM            high-level design phase
           |
           v
      implementation UIMs          low-level design phase
      (CIM schema, SNMP MIB)
           |
           v
      products                     implementations
  - Iterative and incremental development process:
    (1) prototyping phase:
        brainstorming
             |
             v
        light-weight UIM
             |
             v
        data model prototypes
             |
             v
        prototype applications    (-> $$ from technology)
    (2) refinement phase:
        conceptual UIM
             |
             v
        specification model
             |
             v
        data models
             |
             v
        applications              (-> $$ from management applications)
  - Strict deadlines are needed in order to ensure that the process does
    not take forever.
- AP presents a picture from his thesis (cyclic design process):
  (diagram: user requirements feed a cycle of initial requirements ->
  implementation -> better requirements -> better implementation -> ...)
- DP talks about the difference between new and existing technologies:
  o new: build a simple CLI (can be done efficiently to get things
    rolled out and gain experience quickly), debugging
  o existing: build sophisticated configuration, status, statistics and
    error isolation
- DH wonders how cyclic design interacts with vendor requirements and
  the product life cycle. Vendors are interested in creating a niche,
  and they need to get market share (which usually implies being first).
  In the first pass, vendors create a CLI. In the second pass, to
  address ease of use, vendors move to proprietary MIBs, and competition
  prevents vendors from exchanging or aligning MIBs. In the third phase,
  to address integration, vendors move to standard MIBs.
  Vendors market that they go to standards, and they work hard on
  extending standards to provide differentiation.
- BW stands up and argues that leading vendors should go for standards
  in the second phase, to be first and leading in the standards process
  and to force competitors to follow the standards.
- DP says that some companies have the model of adding capabilities to
  differentiate, while others focus on being better (faster, more
  reliable, ...) in order to differentiate. There seems to be a
  difference between companies that try to be pioneers and companies
  that try to be better based on stable specifications.
- JP says that his new information modeling process primarily applies to
  cases where you create management interfaces for new technologies and
  you create standards in the early cycles.
- BW asks whether there is real cooperation between the DMTF and the
  IETF, or is this just some folks who try to sell the same ideas in
  multiple places? Experience tells that W3C/IETF, DMTF/IETF, ITU/IETF
  cooperations are difficult.
- BW asks why Cisco goes to the DMTF, TMF and IETF. Why not pick one and
  stick to it?
- AP says that the ATM Forum did exactly this. Why did the ATM Forum not
  have success with it?
- DP says that the original push for the DMTF was the observation that
  SNMP was too complex and not suited to managing desktops. Originally,
  only a few large companies (Intel, IBM, Microsoft) were funding the
  DMTF. What is the DMTF scope now? Is it driven by personalities? JP
  says this has completely changed. Many companies have joined in, and
  the scope now embraces enterprise management at large.
- BW thinks the UIM approach will fail again because most people who
  are there are interested in making money.
- A discussion breaks out which is hard to follow...
- MB says that there will be communication problems between technology
  experts and modelers.

Configuration Management with SNMP issues:

- DP starts to talk about configuration management. There is no generic
  configuration information in the SNMP world - we currently only have
  configuration information specific to technology areas. Part of the
  problem is that it is not possible to determine which objects are
  configuration objects; it would be useful to be able to identify
  them. CMIP also provides configuration change logging. There should
  be a configuration change log (who, what, when, etc.) which works
  across multiple management interfaces and which is capable of
  recording configuration management transactions so that some
  configuration transactions can be undone.
- JS says that SNMP does not have to do everything. Perhaps a common
  format for configuration files and a common mechanism to securely
  upload/download configuration files is all that is practically
  needed.
- DP responds that there are multiple proprietary MIBs to control the
  download of configurations. There should be a common MIB to
  standardize this.
- JS wonders why RMON probes are SNMP-configurable while most of the
  other stuff is not.
- DP says it is the technology life cycle. SNMP is a basis for RMON, so
  SNMP management made sense.
- AP remarks that RMON probes are autonomous, and you need remote
  management applications to use them. This is not true for network
  elements.
- DP says that there is a sort of requirement that SNMP management is
  needed for RMON probes. However, routers do not need SNMP to work.
- AP says that probes are a management device; routers are not.
- DP says that different vendors follow different models. Some modify
  the configuration immediately, others apply changes at some point
  later in time.
  Some vendors automatically make configuration changes persistent,
  while others distinguish between changes that affect the running
  system and changes that affect the persistent configuration of a
  device.
- There is some agreement that a common MIB to control the upload /
  download / activation of configuration files would be valuable.
- JS is not convinced that a full-blown configuration change log is
  realistic due to the complexity involved.
- DP argues that there are two levels of applications that benefit from
  a configuration change log: A configuration manager remains aware of
  the configuration and can restore it if requested. A policy-based
  manager sets a configuration and then watches the log to verify that
  it does not change.
- JS says that if a policy manager tries to enforce a configuration
  policy, and another application or a user changes something, then it
  does not make sense to have the policy manager set the configuration
  back when it detects a change, because it does not understand why the
  change was made. Operators tell us that a device gets configured and
  then it stays that way; it is never dynamically modified.
- JP argues that in real life, you must cope with interactive access.
  The approach of a single policy-based manager is not viable during
  troubleshooting and during initial configuration.
- JP asks whether a configuration manager should allow sending single
  configuration commands rather than a full configuration file. The
  time required to parse a complete file may exceed the window
  available for correcting a problem quickly enough. Being able to send
  an individual line can reduce the response time to meet
  troubleshooting needs.
- JS says that allowing different entities to modify the same
  configuration is not a good thing. A program may run just fine, but
  if another entity is allowed to peek and poke the memory image, it
  may cause the program to not run properly.
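DP's two-level change log could be sketched as a minimal in-memory
model: each entry records who changed what and when, together with the
previous value, so a configuration manager can undo a change and a
policy-based manager can watch the log for deviations. The flat
name-to-value device model and all object names here are invented for
illustration:

```python
# Hypothetical configuration change log (who, what, when, old value).
# Keeping the old value per entry is what makes undo possible.
import time

class ConfigLog:
    def __init__(self, config):
        self.config = config              # current settings: name -> value
        self.log = []                     # (when, who, name, old, new)

    def set(self, who, name, value):
        old = self.config.get(name)
        self.log.append((time.time(), who, name, old, value))
        self.config[name] = value

    def undo_last(self):
        """Restore the setting touched by the most recent log entry."""
        when, who, name, old, new = self.log.pop()
        if old is None:
            del self.config[name]         # setting did not exist before
        else:
            self.config[name] = old

device = ConfigLog({"ifAdminStatus.1": "up"})
device.set("operator-joe", "ifAdminStatus.1", "down")
device.undo_last()
assert device.config["ifAdminStatus.1"] == "up"
```

A policy-based manager would scan `device.log` for entries made by
anyone other than itself, which is exactly the verification loop DP
describes; a real log would of course have to survive reboots and span
CLI and SNMP interfaces alike.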
- DP responds that if you tracked transaction sets, you could better
  control what can be changed.
- DP says that he brings this up in the NMRG because he does not think
  it has been studied enough to be dealt with in the IETF.
- JP sees a difficulty with this approach when an inexperienced
  operator can go into 'config mode', locking out all others. Then he
  iconifies the window and goes home, keeping everybody locked out for
  a long time.
- DP says we need more discussion to identify the problems and the
  trade-offs between possible solutions.
- A discussion about the COPS-PR locking mechanism starts...
- JP expresses concerns about being able to troubleshoot while the lock
  is in place.
- JS says that COPS-PR requires an application to coordinate policy
  enforcement and manual troubleshooting. The application may allow for
  a troubleshoot mode, where it suspends the enforcement of policy on
  certain objects while an operator manually manipulates those objects.
  It is implementation-dependent how the application handles this.
- JS argues that with one or two devices, you are likely to do manual
  configuration. However, if you have 500 devices, you will likely use
  automated configuration tools.
- DP says that as a technology moves through its life cycle, it becomes
  more capable of being automated.
- DH thinks that a download MIB could be useful for configuring those
  aspects of a device which are not yet standardized enough to manage
  in a policy-driven way.
- DH says that research into what the different configuration
  management models are, together with recommendations, would be
  useful.

What will NM be like in 5 years:

- JS argues that in several years, most network elements will run
  standard operating systems, and vendors will open up devices to allow
  management software to run on the device.
- Riverstone has the concept of providing source fragments for
  management applications and of cooperating with management
  application vendors to provide management solutions.
- What will be the impact of component models? You need to decide on
  one type of middleware.
- Are there security or safety/reliability issues with programmable
  network elements?

The meeting ends with these open questions, and people meet at the bar
to have some of the local beers - which are pretty good for a coffee
city. ;-)