Complexity Theory May Offer Best Benchmarks For Testing Computers

Release Date: June 1, 1995

BUFFALO, N.Y. -- A mathematical approach called complexity theory may be more effective than the current tests -- or benchmarks -- that manufacturers of computer hardware and software use to verify performance claims for their products, according to computer scientists at the University at Buffalo.

Using the methodology, the researchers recently generated a randomized test of high-precision division arithmetic that pinpointed the flaw in the Pentium computer chip.
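The release does not spell out how that test was constructed, but the underlying idea can be sketched: feed randomly drawn operands to the hardware divider and compare each result against a slow, high-precision software reference. Below is a minimal illustration in Python; the function name, operand ranges and tolerance are hypothetical, and a real test would choose operands far more cleverly than plain uniform sampling, since the flawed Pentium divisions occurred only for rare operand patterns.

    import random
    from decimal import Decimal, getcontext

    getcontext().prec = 50  # reference precision far beyond double precision

    def random_division_test(trials=100_000, tol=Decimal("1e-12")):
        """Compare hardware float division against a high-precision reference."""
        for _ in range(trials):
            x = random.uniform(1.0, 1e6)
            y = random.uniform(1.0, 1e6)
            hw = Decimal(x / y)            # result from the hardware divider
            ref = Decimal(x) / Decimal(y)  # software reference result
            if abs(hw - ref) / ref > tol:  # relative error beyond tolerance?
                return x, y, hw, ref       # report the offending operands
        return None

    if __name__ == "__main__":
        result = random_division_test()
        if result:
            print("divider disagreed with reference:", result)
        else:
            print("no discrepancy found in this sample")

A correct double-precision divider never trips a 10^-12 tolerance, since its relative error stays near 10^-16; the flawed Pentium returned relative errors as large as roughly 10^-5 on certain operand pairs, which a check like this flags whenever such a pair is drawn.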

Jin-Yi Cai, Ph.D., associate professor of computer science at UB, said that prototype benchmarks based on complexity theory, developed with his colleagues and students, have the potential to test computer hardware for a range of variables, including processor speed and memory capacity, as well as specific software packages.

"The purpose of a benchmark is to verify a manufacturer's performance claims about the speed and accuracy of a specific product," Cai explained.

Problems may get past design teams, he noted, because current benchmarks are not rigorous or objective enough. By subjecting products to a problem with a specified complexity, Cai said, manufacturers could prevent or correct design flaws before they hit the market.

Most benchmarks require that the hardware or software being tested be able to run specific sets of data in a specific amount of time, he said. But because the benchmarks are known to all manufacturers in advance, the test results are not always trustworthy.

What's missing from the current generation of benchmarks, Cai believes, is a set of objective criteria and a certain amount of randomness in generating the actual test -- features that would provide a realistic assessment of how a product will perform under the most demanding and unpredictable computational conditions.

Cai and Richard J. Lipton, Ph.D., professor of computer science at Princeton University and co-investigator on a National Science Foundation grant, argue that a benchmark based on computational complexity would be more accurate because it would safeguard against "loopholes" in current benchmarks that allow some errors to go undetected.

"Computational complexity tries to classify computational problems according to their complexity mathematically," said Cai.

"Physical laws place limits on you," he said. "For example, they tell you you have to spend a certain amount of energy to accomplish certain tasks. Computational complexity places similar limits on computational processes. There are things you cannot do in a certain amount of time and space, according to computational complexity theory."

According to Cai, this characteristic can enhance computer security.

"If the computational complexity of deciphering a secret message from scratch is extremely hard mathematically, you can be assured that your 'enemy' cannot do it without what's known as a 'trapdoor,' whereas your 'friend,' who has the 'trapdoor' can recover it easily," he said.

In the same way, if a benchmark is generated that has a certain specified computational complexity, then a manufacturer who can pass the test in a certain amount of time essentially proves his performance claim.

"The difference between benchmarking and secret encryption schemes is that you do not want a benchmark that an honest manufacturer could not complete in 1,000 years, as you would want in cryptography, but you do want manufacturers to be able to complete the test without a shortcut², said Cai. ³Without a shortcut, the amount of time it takes for a manufacturer to do the computations will be a good indicator of how good the product is."

Cai said that he and his colleagues could generate objective benchmarks, based on computational complexity and a certain amount of randomness, whose test results could be trusted for many computer products from any manufacturer.

For example, they have designed a benchmark for matrix multiplication, a computation essential to many scientific applications. The question the test would pose to the manufacturer is: How fast can your matrix multiplier multiply matrices correctly?

"To the manufacturer, the matrices generated would look like a set of totally random matrices," Cai said. "And you can trust the benchmark result, thanks to computational complexity theory."

Cai has been awarded the prestigious Alfred P. Sloan Fellowship and a Presidential Young Investigator Award for his work in complexity theory.

Media Contact Information

Ellen Goldbaum
News Content Manager
Medicine
Tel: 716-645-4605
goldbaum@buffalo.edu