

NTAP Project Information

  • What is the NTAP project?
  • What is SeRIF, and what's it have to do with NTAP?
  • Why is NTAP cool?
  • How does NTAP work?
  • How do I set up a PMP (Performance Measurement Platform)?
  • How do I upgrade a PMP installation?
  • How do I build the PMP software myself?
  • What is the LDAP directory of routers and PMPs for?
  • How do I set up the LDAP directory of routers and PMPs?
  • What happens when stuff doesn't work?
  • How do I set up the webserver?
  • How do I add another testing program?
  • What "runs" the performance tests?
  • What's Walden and why is it cool?
  • What's new?
  • What's coming?
  • How to use Walden

    updated July 19, 2007
    What is the NTAP project?
    In the briefest description possible, what we're doing is using a secure, remote invocation architecture to build a network performance testing service.

    We create PMPs (Performance Measurement Platforms), which are basically commodity servers with our software installed. By placing a few PMPs around the network (each associated with a specific router), we have an infrastructure that allows us to schedule performance tests along a network path between pairs of PMPs.

    Here at the University of Michigan, we plan to place a "few" (maybe 6 or so) PMPs on our core routers and thereby get hopwise throughput/latency data. An important goal of the project is to enable inter-institutional performance tests in a simple, efficient, secure fashion.

    What is SeRIF, and what's it have to do with NTAP?
    SeRIF stands for Secure Remote Invocation Framework, and it is the formal name for our secure remote invocation architecture. NTAP is the name of a network testing and performance service we have built using the SeRIF architecture.

    The boundaries between SeRIF and NTAP are not sharply drawn. Some of the features and capabilities we are adding to NTAP, such as automated testing, are really part of SeRIF and thus available to other services built using it.

    We apologize in advance for our habit, in some of what is written here and in our other documentation, of using the term NTAP when we really mean SeRIF.

    Why is NTAP cool?
    At first blush, network performance testing is kind of esoteric, and a rubric by which to evaluate systems isn't exactly obvious, since people investigate a wide variety of problem domains. Some of the defining characteristics of our approach are:
    • Primarily: answering support calls like "My lab's Internet is slow, can you help me?" and then diagnosing the problem is a huge drain on network folks. NTAP lets you point the user at a webpage that runs a test and sends you hopwise performance results and (soon) recommendations for fixes.

    • End-to-end strong security is paramount and has been a primary consideration from the get-go. Coupled with centrally-administered, fine-grained authorization mechanisms, NTAP is flexible enough to be used by a wide user base while nevertheless preventing the PMPs from being compromised or "over-used" (2 x 1-Gigabit NICs x number of PMPs = lots of potential test-traffic on the network).

    • We want to limit the administrative overhead of NTAP as much as possible, and so derive a lot of utility from our recent integration of Walden: a lightweight, scalable grid authentication/authorization technology that, along with the rest of our software, leverages an institution's existing security infrastructure (which, here at the University of Michigan, is Kerberos).

    • Tests can be run on-demand or scheduled, either through a web interface or on the command-line.

    • There's an emphasis on having the PMPs try to accurately proxy (in a heavily-VLANed environment) the test-invoker's traffic, in order to traverse, e.g., certain QoS filters along a testpath and find bottlenecks.

    • Due to the high degree of parameterization in general network stuff, we are working on (easier) ways for the test-invokers to specify all of the test software's (for now, just iperf) options and to, e.g., capture and replay complicated tests periodically. Future work will include TCP-stack tuning on the PMPs and advice for end-users.

    • We use several grid-computing technologies to handle our need for a secure distributed infrastructure. In order to keep our work as "close to the community" as possible, we are working closely with MGRID developers and incorporating each other's work.

    How does NTAP work?
    As you might imagine, there are many levels of detail available. There are development notes, which are unpolished but readable. We also have a PowerPoint presentation that contains a good overview of the project and its architecture.

    To explain with a usage scenario, a user schedules a test through a web interface, which requires Apache and the K-PKI. Using the K-PKI, the user's Kerberos credentials are automatically translated into short-term kx509 credentials by the webserver. Software on the webserver uses the kx509 certificate to securely contact the PMPs along the requested testpath and schedules pairwise tests between them. Eventually, the tests finish and the results are returned and/or stored. This is the bird's-eye view.
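    As a rough command-line sketch of that flow (the testpilot script that drives tests is described later in this FAQ; the option names below are purely illustrative placeholders, so check the testpilot's usage output for the real flags):

      # acquire Kerberos credentials, then short-term kx509 credentials
      # (the webserver performs this translation automatically for web users)
      kinit alice@CITI.UMICH.EDU
      kx509

      # kick off a pairwise test via the testpilot (hypothetical options)
      ./ntap-testpilot.py --from 10.0.1.5 --to 10.0.2.9 --program iperf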

    Cleaner documentation is forthcoming.

    How do I set up a PMP?
    Fortunately, this step has gotten a lot easier than it used to be -- you won't have to compile any of our software (possibly just a prerequisite or two, e.g., Kerberos). Our PMP software has been tested on Fedora Core 2 and 3, but past versions of our software have worked fine from Red Hat 7.3 forward.

    Assuming you have the PMP's hardware ready (a PMP really just needs 1 or more 802.1q-capable NICs and an x86-compatible chip), the high-level install procedure is:
    1. Install the OS (e.g., Fedora Core 2).

    2. Download and install the NTAP PMP RPM (on the front page).
      • sudo rpm -ivh pmp-X.Y-Z.i386.rpm

      ... where X.Y and Z are replaced by the version and release numbers of the current RPM, respectively.

    3. Post-install configuration.
      During the install, the RPM will print out information about the post-install configuration a PMP needs. It directs you to the file /usr/local/ntap2/pmp/setup/SETUP_POST_RPM_INSTALL, which has simple ''mandatory'' and ''optional'' setup steps for the PMP's constituents. Simply follow all of the ''mandatory'' steps, although the NDT / Web100 steps MAY be omitted initially or if first-hop tests are not needed.

    4. Post-install verification.
      During the install, the RPM will also print out a message about running the NTAP post-install verifier script, located at /usr/local/ntap2/pmp/bin/ntap-postinstall-verify.sh. To run it, you must first acquire Kerberos credentials in the appropriate realm (e.g., kinit richterd@CITI.UMICH.EDU). Then, launch the verifier with the sudo command (man sudo). The verifier script does a fairly comprehensive job of ferreting out common errors while keeping user involvement to a minimum.
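      For example, on the PMP itself:

        # acquire Kerberos credentials in the appropriate realm
        kinit richterd@CITI.UMICH.EDU

        # then launch the verifier via sudo
        sudo /usr/local/ntap2/pmp/bin/ntap-postinstall-verify.sh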

    A detailed installation procedure may be found here.

    At this point, you should have a working PMP. If you don't, but followed the post-install instructions correctly, please email us.


    How do I upgrade a PMP installation?
    When using RPMs, people often upgrade existing packages with the rpm -U command; however, the PMP RPM does not support upgrading in this manner.
    To upgrade an existing installation, instead uninstall and re-install using the following commands:
    • sudo /etc/init.d/diffserv_mgr stop
    • sudo rpm -ev pmp-Xold.Yold-Zold
    • sudo rpm -ivh pmp-X.Y-Z.i386.rpm
    • sudo /usr/local/ntap2/webserver/bin/ntapctl --local restart
    ... where Xold.Yold and Zold are replaced by the version and release numbers, respectively, of the old RPM, as displayed by rpm -qai pmp.
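    As a concrete illustration, upgrading from the pmp-0.5-5 release to pmp-0.5-9 (both listed under "What's new?") would look like:
      sudo /etc/init.d/diffserv_mgr stop
      sudo rpm -ev pmp-0.5-5
      sudo rpm -ivh pmp-0.5-9.i386.rpm
      sudo /usr/local/ntap2/webserver/bin/ntapctl --local restart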

    One thing to note is that the PMP RPM's rpm -e behavior is conservative in the sense that it only erases the files it installed originally (technically, any files named the same as those originally installed). This means, e.g., that the hostkey and hostcert installed in /etc/grid-security won't be deleted.

    After re-installing, you should glance at the new post-install configuration instructions to see if anything has changed.

    How do I build the PMP software myself?
    This is somewhat complex. For the moment, we recommend that you email us and request a tarball of the software.

    NOTE: the following documents (online and in the software) are out of date, now that we've switched to an entirely Globus-2.4-based GARA. Also, GARA is (somewhat) easier to build now, but we've found that the default build tools on FC1 cannot correctly utilize the build utilities that come bundled with GARA. We will be writing up a work-around soon.

    Therein (and also in the RPM), you'll find our build notes: they aren't pretty, but they have a good amount of detail.

    What is the LDAP directory of routers and PMPs for?
    There are several stages involved in actually running a network performance test, and the help of an LDAP directory is needed throughout. When a client initiates a performance test, the NTAP software must first "locate" the client's machine in order to find a PMP "near" it on the network -- specifically, the goal is to find the PMP that is the fewest hops away from the client's machine.

    In order to do this in a reasonable amount of time, the NTAP software uses an LDAP directory ("the router and PMP directory") to locate which router the client machine is using. From there, the software sets about finding a nearby PMP. The LDAP directory consists of a hierarchy:
    • AdminRealms -- e.g., "CITI-NTAP" or "University of Foo-NTAP"
    • HostTypes -- e.g., routers or PMPs
    • Hosts -- e.g., "pmp-1.myschool.edu" or "CORE-ROUTER-1" (doesn't need to resolve)
    • Interfaces -- all of the (virtual) interfaces in each host
    So, each "institution" would probably have an AdminRealm, under which are categories of NTAP-related machines (just routers and PMPs now), each of which contains some number of hosts, which themselves contain some number of network interfaces (here, "interface" means "is uniquely identified by an IP address", which means a single physical interface might actually have several virtual interfaces, one for each IP address).

    There is schema information available (also packaged in the RPM). Due to varying needs, please contact us if some portion of the schema is lacking or incompatible with your institution's needs.

    How do I build the LDAP directory of routers and PMPs?
    NOTE: this section is incomplete. Email richterd@citi.umich.edu for quick help.

    The process of creating the router/PMP LDAP directory involves:
    1. Acquiring and familiarizing yourself with the NTAP schema
    2. Getting your institution's router data assembled and formatted for LDAP
    3. Loading the prepped router data (LDIF file(s)) into the directory
    4. "Assigning" PMPs to routers
    5. Getting your institution's PMP data assembled and formatted for LDAP
    6. Loading the prepped PMP data into the directory
    7. Trying sample PMP-discovery tests to verify that the directory is working
    Since routers and PMPs have mostly-identical types of data stored in the directory, prepping the data and loading it into the directory are essentially the same for both. However, we've found that it can be convenient to start with the routers and add the PMPs later.

    Accordingly, here at the University of Michigan, we got a copy of the core router database (as tab-delimited text, one physical router interface per line) and trimmed out the routers that would not have any PMP(s) attached. Then we used a script (included in the RPM) to convert the raw text columns to LDIF, the format used to load entries into LDAP. The LDIF file is loaded directly into the directory, but there are some prerequisites that must be met.
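    For steps 3 and 6, loading a prepped LDIF file is a standard OpenLDAP operation. A minimal sketch, assuming a directory manager DN of cn=Manager,dc=myschool,dc=edu (yours will differ):
      # bind as the directory manager and load the prepped entries
      ldapadd -x -D "cn=Manager,dc=myschool,dc=edu" -W -f routers.ldif
      ldapadd -x -D "cn=Manager,dc=myschool,dc=edu" -W -f pmps.ldif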

    What happens when stuff doesn't work?
    The NTAP architecture and its dependencies are a complex suite of software that can be somewhat inscrutable when something isn't working quite right. Many of the error messages returned by Globus (atop which NTAP is built) are... not as helpful as they might be. Therefore, here are some ways that one can troubleshoot an installation.

    Step zero: please refer to the documentation online, in the RPM, or in your source bundle. There are a lot of little tidbits sprinkled throughout. Another helpful resource is the listing of Globus error codes, which can help narrow things down even if they are often inscrutable. For instance, if you've gotten back an error message that says anything about a protocol error, a GRAM-related problem, or a failure in the GSS code, a Globus configuration problem is the likely culprit.

    If a GARA error was reported during a test, we have a listing of our error codes and their meanings. If you end up having to report a bug to us and you saw one of these messages, they help narrow the search considerably.

    To help with difficulties in general, we have a post-installation verifier script that tries to clear away the errors we most-commonly encountered. So, after you've installed the software and done the post-install configuration, check out ntap2/pmp/bin/ntap-postinstall-verify.sh. It more or less does the following (keep in mind that you'll want to have your hostcert in-hand at this point):
    • checks for the existence/permissions of some "vital" files
    • checks to make sure that hostname -f, globus-hostname, and the common name (CN) listed in the PMP's hostcert are all correctly-configured
    • checks that the Globus gatekeeper is configured and running
    • performs a basic Globus authentication test (if it works, you're well along the way!)
    • runs through some simple remote Globus jobs
    • soon, it'll test remote jobs at a higher level with a GARA client
    So, just ssh into your PMP and run the verifier script. If the verifier isn't flagging anything wrong, yet things still fail, please contact us and we'll try to see where in the chain things are breaking down.
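    If you'd rather poke around by hand first, the name-consistency and authentication checks the verifier performs can be approximated like so (the hostcert path shown is the conventional Globus location):
      # these three names should all agree
      hostname -f
      globus-hostname
      openssl x509 -in /etc/grid-security/hostcert.pem -noout -subject

      # authentication-only test against the local gatekeeper
      globusrun -a -r `hostname -f`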

    How do I set up the portal/webserver?
    We've yet to come up with a better name than "webserver" for the machine that coordinates NTAP tests in a given AdminRealm. Our webserver runs Apache with extra modules that transparently handle authentication and credential translation for the user. We also use PHP to provide the web front-end. One thing to note before beginning is that you'll need access to your KCA administrator in order to finish.

    In tandem with the httpd duties that the "webserver" host has, we also host our LDAP directory on that machine. We have a fairly-detailed installation guide covering how we do much of the setup for our machines at CITI. If you plan on using Java Web Start technology and/or digitally signing Java jar files, or on setting up the NDT/Web100 pieces manually, we have another writeup available.

    With an installation of this many pieces, integral setup steps or repairs can become habit for us and accidentally be left out of the guide. Please help us make the guide better by sending feedback. Thanks.

    How do I add another testing program?
    In its current incarnation, the NTAP software is designed to run essentially predefined tests between pairs of PMPs along a testpath. In other words, pairwise invocations of iperf were the model around which the testing software was written. However, adding another program of your own choosing is possible. The amount of effort involved depends on: how many arguments the new program takes, whether the new program runs client/server-like (like iperf) or singly (like tcpdump), and what you want to do with the data when you get it back.

    The steps are:
    1. set up a parameter configuration file (iff your program takes arguments)
      • from the webserver tarball, look at the file $DOCROOT/params/iperf.conf, for example. It defines mappings between checkboxes/textfields that the user sees on a test-scheduling webpage (e.g., "Bind iperf to which port?") and the command-line arguments involved (e.g., "-p 5001"). Its contents look like:
        option(port)
        {
          type = numeric;
          size = 6;
          default = 5001;
          description = Server port to listen on/connect to;
          context = both;
          flag = -p (#);
        }
      • a more-comprehensive explanation of constructing the parameter file is contained in $DOCROOT/README.params. However, it really is about as easy as copying one of the existing .conf files and tailoring it.

    2. add a program/paramfile entry to $DOCROOT/program-file.conf

    3. set up a PHP page on the webserver for the new program:
      • first, look at $DOCROOT/index.html; you'll see entries at the bottom that run PHP scripts (i.e. runscript.php).

      • In $DOCROOT/runscript.php, the only line that really matters much is the require(parser_functions.inc) line (or something very much like that).

      • In $DOCROOT/parser_functions-ntap_demo.inc, the key line contains shell_exec(...). The script it executes during a "normal" NTAP iperf test discovers a testpath between the two IP addresses specified. If you need that testpath-discovery functionality, you might want to define your own parser_functions-style .inc file and either edit the NTAP script or write your own. If, however, you do not want to run programs along a testpath and all you want to do is pick two PMPs and run some test between them, then you can probably just use $DOCROOT/parser_functions.inc.

    4. modify the script that runs the test(s) from the webserver
      • If you just want one "use" of the webpage to run a test on one PMP or a test between two PMPs, you likely will not need to make your own script. Instead, start with $DOCROOT/index.html and follow through how traceroute is ultimately invoked by the globus_client app.

      • If, on the other hand, you want one "use" of the webpage to do multiple things for you, you will at least need to modify the main NTAP script, ntap-testpilot.py to use a program other than iperf. You might need to launch the program in a different way, too, which might necessitate modifying or making a new version of one of the $DOCROOT/parser_functions.inc-like files.

    5. configure your PMPs with the new program. On each PMP, you'll need to:
      • copy over the program you're adding

      • edit /usr/local/gara-1.2.2/etc/diffserv_manager.conf and add your program at the very bottom. If your program runs client/server style, you'll add two entries, in the form programName-client and programName-server. If your program runs singly (like traceroute), you should only put in the client entry.
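    For instance, adding a hypothetical client/server program called "mytest" might look like the following (we don't reproduce the conf-entry format here; pattern your new entries on the existing iperf ones):
      # copy the new program onto the PMP
      scp mytest pmp-1.myschool.edu:/usr/local/bin/

      # then, on the PMP, append mytest-client and mytest-server entries
      # (modeled on the iperf-client and iperf-server entries) to:
      sudoedit /usr/local/gara-1.2.2/etc/diffserv_manager.conf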

    What "runs" a performance test?
    On the webserver host, we have a Python program that coordinates all of the NTAP performance tests; we call this program the testpilot. The testpilot can be run from the commandline and, as such, has options you can specify and whatnot. We regularly use the commandline version during the course of development and testing.

    We recently put together a helpful little "glossary" that describes the larger components in the NTAP system.

    A more-complete description of the testpilot (its duties, limitations, and design) will be forthcoming. However, its usage information is regularly updated and informative.
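    As a starting point on the command line (assuming the testpilot is the ntap-testpilot.py script mentioned elsewhere in this FAQ, and that it follows the usual --help convention):
      # print the testpilot's (regularly updated) usage information
      ./ntap-testpilot.py --help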

    What's Walden and why is it cool?
    Developed by both CITI and MGRID, Walden is a lightweight, scalable grid authentication and authorization package that we have knit into NTAP. A paper about Walden is available, but there's not a project website yet <cough>. Here are the broad strokes of why Walden is good, both for NTAP and for Globus-based grid infrastructures in general. One thing to keep in mind is that NTAP involves many players, and so pretty much every detail here that doesn't directly relate to Walden is glossed over, or is only precise enough to keep the narrative going. One last important point: I will only detail how Walden contributes to the authentication part of NTAP; the authorization portion, while roughly congruent, would expand this (already-too-long) narrative overmuch.

    We'll look at how Globus works with and without Walden. You might be wondering why I'm talking about Globus so much and not NTAP; suffice it to say (only somewhat imprecisely) that all of NTAP's authentication occurs through Globus. In other words, Walden is a handy, general-purpose framework.

    The common environment. The environment in which we'll compare the with/without-Walden scenarios is this: let's say there are three PMPs (PMP-1, PMP-2, and PMP-3), each of which is attached to a different core router on a college campus. Let's then say that the network admin in charge of the PMPs, Alice, has privileges to run network tests at any time. Now she wants to give Bob (a new hire) that access. Let's also assume that the campus uses Kerberos and all the users have principals.

    Globus without Walden. Alice wants to run a performance test directly between PMP-1 and PMP-2, so she gets her Kerberos creds (e.g., kinit && kx509). When she uses them with the testpilot (either on the command-line or over the web) to kick-off her network test, the testpilot sends her creds (along with the test info) to both PMP-1 and PMP-2. On each PMP, a Globus Gatekeeper daemon is running and receives Alice's creds and the network test parameters. To authenticate Alice, each Gatekeeper (1) verifies that the CA that signed Alice's creds is actually trusted, (2) extracts Alice's distinguished name (DN) from her creds, (3) looks in the file /etc/grid-security/grid-mapfile for Alice's DN and a corresponding local username, and (4) then looks up that local username in /etc/passwd. If all of these steps work, Alice has authenticated to the Gatekeeper.

    This pattern ultimately means that, in order to grant access to Bob, Alice has to do some setup that's pretty garish: she has to make a local user for Bob in /etc/passwd on every PMP that she wants to give him access to. Plus, she has to add his DN-to-local user mapping in the grid-mapfile on every PMP. (There's even at least one more file to edit when the authorization component is also considered!) Imagine doing this for an entire cluster (remember, Walden's rather general-purpose for grid applications). Clearly, this doesn't scale and is a big headache.
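    Concretely, the per-PMP ritual for adding Bob without Walden looks something like this (the DN is made up):
      # repeat on EACH PMP that Bob should be able to use:
      sudo useradd bob
      echo '"/O=University of Foo/OU=CITI/CN=Bob" bob' | \
          sudo tee -a /etc/grid-security/grid-mapfile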

    Globus with Walden. The upshot of Walden is that it eliminates the need to keep going back to every Globus grid resource (i.e., our PMPs) and editing multiple files every time that you want to grant or revoke privileges. Instead, you can make a pool of guest accounts on each grid resource (our PMPs) one time, and thereafter can centrally-administer all of the machines. Better yet, the way that Walden works is really pretty simple.

    Going back to the example above, assume that the Gatekeepers on all three of the PMPs have been "Walden-ified". When Alice wants to run a performance test directly between PMP-1 and PMP-2, she again will get her Kerberos creds and start a performance test with the testpilot, which again will send her creds and test info to the PMPs' Gatekeepers. The Gatekeepers again perform steps (1) and (2) above; however, in step (3), each Gatekeeper contacts a Walden policy daemon on the PMP (that is, each PMP has a Gatekeeper and a Walden policy daemon running). The policy daemon accepts three things from the Gatekeeper: Alice's distinguished name (DN), the name of the resource-host she wants to access (i.e., the hostname of the PMP itself), and the name of the action she wants to perform (i.e., "run a performance test").

    At this point, the Walden policy daemon takes those three things (user, resource, and action) and consults a policy file, which is either on the local machine (e.g., /etc/grid-security/localhost-policy.xml) or located on a central server (much more scalable, easy to administer). The policies are expressed in XACML, which is a flexible, extensible, chatty-but-not-too-obscure language ideally suited for this type of work. The policies would be awful if you had to include every username in them (this is the problem with the old grid-mapfile approach), even if the policies were stored on a central server. Instead, Walden will try to consult a central LDAP server that maps users into different groups; then, policies can be written against entire groups, which is much better. So the policy daemon will take Alice's DN and try to find it in a group; let's say her DN is in the grant-all-access group on the LDAP server. The policy daemon verifies that that group can do that action on that resource and either succeeds or fails (and logs the basis for its policy decision).

    However, the job's not done yet. Remember, the whole reason the Globus grid-mapfile exists in the first place is that it maps from an authorized DN to a valid local username in /etc/passwd; then, when the user's process is eventually forked/execed, it runs as that local user. So just having the Walden policy daemon say, "Yes, you can do that" isn't enough; you need a local user principal. This is another cool thing Walden does: at the point that the policy daemon is making its decision, it looks to see if there are any obligations that apply to the policy (obligations are neat and beyond this example). Walden can use an obligation to say, "If a user is allowed to do their action, then map it to a guest account for the duration of their work." Then, you can make, say, 10 guest accounts, named guest01, guest02, etc., and Walden will keep track of them and dole them out automagically. You can set the guest accounts' shells, for example, to /bin/false and not worry about any logins; we do.

    So! To briefly finish Alice's narrative: the Gatekeeper passes her DN, the resource name, and the action to the Walden policy daemon. The policy daemon finds her DN on the LDAP server in the grant-all-access group. Assume that the local policy states that members of that group can do anything. Then, the policy daemon simply maps Alice to a local guest account (say, guest01), replies to the Gatekeeper with the decision and the user-mapping, and the Gatekeeper kicks off the job. Alice gets her results when the tests are done. If Alice wants to add Bob in this scenario, she fires up her LDAP browser and pastes his DN into a single field in the grant-all-access group. Piece of cake. Keep in mind, you can make fabulously more-expressive policies with the Walden/XACML setup.
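    In LDIF terms, Alice's "paste his DN into the group" step is a single modify operation. The group location and membership attribute below are illustrative, since your Walden/LDAP layout defines the real ones:
      # bob-add.ldif (placeholder names throughout):
      #   dn: cn=grant-all-access,ou=Groups,dc=myschool,dc=edu
      #   changetype: modify
      #   add: member
      #   member: /O=University of Foo/OU=CITI/CN=Bob
      ldapmodify -x -D "cn=Manager,dc=myschool,dc=edu" -W -f bob-add.ldif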

    One last thing: what if you don't want to have all of your users in LDAP? Well, you can configure Walden so that it first checks the LDAP server for the group(s) the user is in and, if none manage to fulfill an appropriate policy, the Gatekeeper can fall back to reading entries from the /etc/grid-security/grid-mapfile. You can, for instance, put a backup or super account in all of your grid resources and handle the rest through LDAP.

    So there's a brief overview of how Walden impacts NTAP. Walden makes Globus-based grid authentication- and authorization-management solutions scale.

    What's new?
    October 10, 2005
    • two critical bugfixes in web100srv (Web100 daemon)
    • updated Web100 userland libraries (v1.5)
    • logrotate now handles two previously-unbounded logfiles
    • bugfix for `ntapctl --local'
    • bugfix for the CITI-modified NDT Tcpbw100 applet
    • new PMP RPM (pmp-0.5-9.i386.rpm) with these changes
    September 15, 2005
    • much newer Web100-ified kernel (2.6.10)
    • PMP default firewall setup included
    • critical bugfix in web100srv -- libpcap issues
    • new PMP RPM (pmp-0.5-6.i386.rpm) with these changes
    September 6, 2005
    • critical bugfix in Walden policy daemon (lost LDAP connection would lock process).
    • cancelled repetitive jobs' credentials are immediately deleted now.
    • a new glossary should help clarify NTAP's parts.
    • documentation overhaul: most docs are now clustered in /usr/local/ntap2/docs/ and have been updated throughout.
    • new PMP RPM (pmp-0.5-5.i386.rpm) with these changes
    August 16, 2005
    • repetitive, long-running tests are now supported (and with credential renewal)
    • bugfixes in the postinstall-verifier, the repetitive test scheduler (the copilot), and elsewhere
    • added new utilities to get at data in NTAP: show-last-NDT-results.sh, expanded ntaputil and ntapguru
    • all Web100 kernel statistics are recorded during NDT tests
    • removed the install steps of (1) installing openssl and (2) building kx509; some new docs
    • new PMP RPM (pmp-0.5-4.i386.rpm) with these changes
    June 6, 2005
    • simplified install, completely new post-install verifier, new docs
    • new updated testpilot has significantly more features
    • integrated tcptraceroute and owampd/owping
    • integrated NDT and Web100 on PMPs and the portal setup to allow for first-hop client discovery and testing
    • new PMP RPM (pmp-0.5-2.i386.rpm) with these changes
    Nov 5, 2004
    • updated default Walden policy
    • updated documentation on Walden setup and structure
    • included simple C-client for accessing (and testing!) a Walden setup
    • couple of minor bugfixes
    • new PMP RPM (pmp-0.4-3.i386.rpm) with these changes
    Oct 25, 2004
    • our Walden sources, configs, and build notes were added
    • sets up Walden policy daemon init script
    • new PMP RPM (pmp-0.4-2.i386.rpm) with these changes
    Oct 15, 2004
    • new integration of Walden significantly simplifies authentication and authorization
    • more-uniform directory layout and naming
    • important logcleaner bugfix (it could corrupt the diffserv_manager)
    • updated sample {PHP, portal} {code, graphics}
    • new Makefile for the peripheral GARA tools
    • new PMP RPM (pmp-0.4-1.i386.rpm) with these changes
    Aug 3, 2004
    • added our first output-serialization, -processing, and -display code (iperf-specific, for now)
    • more-flexible command-line options for the testpilot, bugfixes
    • better error-propagation and numerous bugfixes in the GARA code
    • updated schemas, both for Router/PMP directory and the output directory
    • updated testpilot documentation and usage information
    • updated webserver-host documentation on general setup and new PHP additions
    • new PMP RPM (pmp-0.3-1.i386.rpm) with these changes
    July 14, 2004
    • configurability and interface refinements throughout the PHP code
    • improved error-reporting in GARA and python components
    • improved stability of the GARA components
    • new PMP RPM (pmp-0.2-3.i386.rpm) with these changes
    July 8, 2004
    • the GARA components are more robust -- better resource management, fewer stray processes under error conditions.
    • smoother edges: logfile/tempfile wrangling, init.d-style startup script, more documentation, more tests included.
    • new PMP RPM (pmp-0.2-2.i386.rpm) with these changes
    June 30, 2004
    • added webserver host configuration instructions and helper tarball
    • PHP code now handles line-buffered output so the webpages load incrementally
    June 22, 2004
    • changed from a hybrid Globus 2.2.4 / Globus 2.4 setup to pure Globus 2.4
    • included all of the PHP code we use on our webserver
    • significant updates to the VIF-selection modes ("modes") available with the testpilot
    • new PMP RPM (pmp-0.2-1.i386.rpm) with these changes is available


