DirectoryMark 1.3 Run Rules

The purpose of these run rules is to:
The major difference between these run rules and those for DirectoryMark 1.2 is the elimination of directory size classes. We also made corresponding changes in the reporting requirements, and we eliminated the requirement to publish price/performance metrics, although we do specify how to compute them. While we believe that the use of directory size classes helps customers determine whether a particular directory server will support their needs now and in the future, it presented a significant burden and barrier to testing directory servers because it tripled the number of configurations that had to be tested. We have therefore specified standard directory sizes to be tested in order to facilitate comparing products.

1.0 Run Requirements

1.1 Test Environment

DirectoryMark 1.3 tests directory servers that conform to either the LDAP version 2 or the LDAP version 3 protocol, as defined in RFC 1777 (http://ds.internic.net/rfc/rfc1777.txt) and RFC 2251 (http://ds.internic.net/rfc/rfc2251.txt), respectively. In addition, both the System Under Test and the client test systems must satisfy the appropriate following RFCs:
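To illustrate the kind of LDAPv3 operation the benchmark exercises, here is a minimal sketch of a single bind and search using the third-party Python ldap3 library. The host name, bind credentials, and base DN are hypothetical placeholders, not values prescribed by these run rules.

```python
# A minimal sketch of one LDAPv3 bind and search, assuming the Python
# ldap3 library; host, credentials, and base DN are placeholders.
from ldap3 import Server, Connection

server = Server('ldap.example.com', port=389)   # the SUT (placeholder)
conn = Connection(server,
                  user='cn=Directory Manager',  # placeholder bind DN
                  password='secret',
                  auto_bind=True)

# Search for one organizationalPerson entry by common name.
conn.search(search_base='ou=People,dc=example,dc=com',
            search_filter='(cn=Jane Doe)',
            attributes=['cn', 'sn', 'telephoneNumber'])
print(conn.entries)
conn.unbind()
```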
1.2 System Under Test

The System Under Test (SUT) includes the following components:
1.2.1 Run Requirements

In order for a test run to be considered valid under these run rules, the following requirements must be met:
1.3 Load Generator Systems

DirectoryMark 1.3 uses computer systems, called load generators, to generate a load on the SUT. The load generator systems must meet the following requirements for a test run to be considered valid and for its results to be published:
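As an illustration only, and not DirectoryMark's actual client software, the sketch below shows how a load generator might drive concurrent search traffic at the SUT. It assumes the Python ldap3 library; the host, base DN, thread count, and search count are all hypothetical.

```python
# Illustrative load generator: several worker threads issuing LDAP
# searches in a loop. Assumes ldap3; all names and constants are
# hypothetical, not part of the DirectoryMark tool set.
import threading
from ldap3 import Server, Connection

HOST = 'ldap.example.com'            # the SUT (placeholder)
BASE = 'ou=People,dc=example,dc=com'
WORKERS = 8                          # concurrent clients (placeholder)
SEARCHES_PER_WORKER = 1000

def worker(worker_id):
    conn = Connection(Server(HOST, port=389), auto_bind=True)
    for i in range(SEARCHES_PER_WORKER):
        # Vary the filter so the server cannot serve every request
        # from a single cached entry.
        conn.search(BASE, f'(cn=Person{worker_id}-{i})', attributes=['cn'])
    conn.unbind()

threads = [threading.Thread(target=worker, args=(w,)) for w in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```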
1.4 Directory Sizes

The number of entries in a directory can affect performance. Therefore, pick a directory size from Table 1 that reflects your current or future needs.

Table 1: Standard Directory Sizes
1.5 Test Scenarios

The following scenarios must be tested for the selected directory size:
For the above mix of operations, the value that must be reported is the number of searches per second. For your measurements to be valid, you must use the test script generated by scriptgen.

1.5.1 Warm-Up and Measurement Run Times

The purpose of a warm-up run is to fill the LDAP directory server cache on the SUT, simulating a system that has been running for some time. A valid measurement consists of one warm-up run followed immediately by one measurement run for each test scenario, except for the Loading scenario, which has no warm-up run. Table 2 below shows the warm-up and measurement times in minutes.

Table 2: Warm-up and Measurement Times
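As a rough illustration of the warm-up/measurement structure described above (again, not the DirectoryMark client itself), the sketch below runs untimed searches for a warm-up interval and then counts the searches completed during a timed measurement interval. The intervals and connection details are placeholders, not the values from Table 2.

```python
# Sketch of a warm-up run followed immediately by a timed measurement
# run, reporting searches/second. Assumes ldap3; the intervals are
# placeholders, not the Table 2 values.
import time
from ldap3 import Server, Connection

WARMUP_SECONDS = 5 * 60        # placeholder warm-up interval
MEASURE_SECONDS = 10 * 60      # placeholder measurement interval

conn = Connection(Server('ldap.example.com', port=389), auto_bind=True)

def run_searches(duration):
    """Issue searches for `duration` seconds; return the count completed."""
    count = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        conn.search('ou=People,dc=example,dc=com',
                    f'(cn=Person{count})', attributes=['cn'])
        count += 1
    return count

run_searches(WARMUP_SECONDS)             # warm-up run: results discarded
done = run_searches(MEASURE_SECONDS)     # measurement run
print(f'{done / MEASURE_SECONDS:.1f} searches/second')
conn.unbind()
```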
For measurements to be considered valid, the LDAP directory server may not be restarted, nor may the SUT be rebooted, between test scenarios. You may restart or re-initialize the LDAP directory server and/or reboot the SUT before testing a different directory size.

1.5.2 LDAP Directory Schema

All testing shall be done using the LDAP organizationalPerson schema. At least the following attributes must be indexed for fast searching:
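To make the schema requirement concrete, here is a hedged sketch that writes LDIF for a chosen number of organizationalPerson entries. The base DN, attribute values, entry count, and file name are illustrative; DirectoryMark's own tools may format entries differently.

```python
# Sketch: generate LDIF for N organizationalPerson entries. The base DN,
# attribute values, and entry count are hypothetical.
N = 10_000
BASE = 'ou=People,dc=example,dc=com'

with open('people.ldif', 'w') as f:
    for i in range(N):
        f.write(f'dn: cn=Person{i},{BASE}\n')
        f.write('objectClass: top\n')
        f.write('objectClass: person\n')
        f.write('objectClass: organizationalPerson\n')
        f.write(f'cn: Person{i}\n')
        f.write(f'sn: Surname{i}\n')
        f.write(f'telephoneNumber: +1 555 {i:07d}\n')
        f.write('\n')   # a blank line separates LDIF entries
```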
2.0 Documentation Requirements

2.1 Report

To report DirectoryMark 1.3 results, you must show the operation rate performance reported by the DirectoryMark client(s). If you use more than one client system to test an SUT, you must aggregate the results from all client systems and report the total operation rates. You must specify enough information about the SUT and the test environment to allow a knowledgeable user to reproduce your test results.

2.2 Archiving Requirements

If you want your results published at our Web site, you must archive the following items and make them available for Mindcraft's review:
3.0 DirectoryMark Metrics

3.1 Performance Metric

The DirectoryMark performance metric is DirectoryMark Operations Per Second, or DOPS™. Performance is always reported for a specified directory size. DOPS is computed as a weighted average of the operation rates of all of the test scenarios. The weights used are:
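The actual weights belong in the list above; as a purely hypothetical illustration of the weighted-average arithmetic, the sketch below combines made-up per-scenario rates and made-up weights into a single DOPS figure.

```python
# Hypothetical illustration of the DOPS weighted average; neither the
# scenario names, the rates, nor the weights are the ones these run
# rules specify.
rates = {'scenario_a': 1200.0, 'scenario_b': 800.0, 'scenario_c': 400.0}
weights = {'scenario_a': 0.5, 'scenario_b': 0.3, 'scenario_c': 0.2}

dops = sum(rates[s] * weights[s] for s in rates) / sum(weights.values())
print(f'{dops:.0f} DOPS')   # 1200*0.5 + 800*0.3 + 400*0.2 = 920
```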
The following examples show the three acceptable ways to express the DirectoryMark performance metric in press releases and other publications (the numbers in the examples below will, of course, change to reflect the measurements and directory sizes tested):
In addition to reporting DOPS on the standard DirectoryMark Results Web page, you must also report the metrics specified in Section 1.5 for each of the test scenarios.

3.2 Price/Performance Metric

You may also report the price/performance of the SUT. It is computed using the formula:

    Price/Performance = Total SUT Price ($) / DOPS

It is expressed in terms of $/DOPS. The components that make up the SUT price include:
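As a purely hypothetical worked example of this formula: an SUT priced at $46,000 that achieves 920 DOPS would be reported at $46,000 / 920 = $50/DOPS.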
The prices used may be street prices (substantiated by a reseller, by customer invoices, or by publicly available street price information from the manufacturer) or list prices. The type of pricing used must be included in the report. You need to fill in all relevant data in the standard pricing Web page (pricingdm11.html). You may use the spreadsheet (pricingdm11.xls) to help you compute the price/performance metric.

4.0 Publishing Results

4.1 Unreviewed Results

You may publish your price/performance and DOPS results in any medium you find appropriate, as long as you followed these run rules. When publishing results, you must state which version of DirectoryMark was used for the measurements. You may publish the standard DirectoryMark Results Web page anywhere you want, as long as you also publish the associated standard DirectoryMark Pricing Web page.

Mindcraft will publish unreviewed results at its Web site. If you want your results included there, please contact us. Note that you will need to provide us a copy of the licenses for all software tested, and a release from the product vendor if their license precludes publishing benchmark results without their prior approval.

4.2 Reviewed Results

If you want your results reviewed by Mindcraft and published at Mindcraft's Web site at a location for reviewed results, please contact us. Note that you will need to provide us a copy of the licenses for all software tested, and a release from the product vendor if their license precludes publishing benchmark results without their prior approval.

4.3 Certified Results

Mindcraft will perform DirectoryMark testing for you and will certify the results. We will publish the results at our Web site at a location reserved for certified results. Contact us for more information about this service.

4.4 Contacting Mindcraft

You can contact Mindcraft at directorymark@mindcraft.com or by phone.
Copyright © 1997-2003 Mindcraft, Inc. All rights reserved. Mindcraft is a registered trademark of Mindcraft, Inc.

For more information, contact us at: info@mindcraft.com
Phone: +1 (408) 395-2404
Fax: +1 (408) 395-6324