Copyright 1989-2016 by Kevin G. Barkes. All rights reserved. This article may be duplicated or redistributed provided no alterations of any kind are made to this file.

This edition of DCL Dialogue is sponsored by Networking Dynamics, developers and marketers of productivity software for OpenVMS systems. Visit our website www.networkingdynamics.com to download free demos of our software and see how you will save time, money and raise productivity! Be sure to mention DCL Dialogue!

DCL DIALOGUE
Originally published August, 1989
By Kevin G. Barkes

Cold Fusion and DCL

The late entertainer Will Rogers is perhaps best known for his immortal quote, "I never met a man I didn't like." Ol' Will obviously never ran into a prospect-starved salesman on the last day of a DEXPO.

Another of his famous lines is, "It's not what we don't know that hurts, it's what we know that ain't so." I call this "The Will Rogers Syndrome" -- accepting as fact the utterings of an individual in a position of influence, without following up to verify the accuracy of the statement.

Certain groups are less prone to this weakness than others. Physicists are a particularly stubborn lot, as the recent business over "cold fusion" proves. Short of a personal visit from the Almighty (assuming the Almighty has the foresight to bring along a published paper and the necessary lab equipment), physicists will look askance at any claim which has not been independently duplicated and verified.

On the other hand, VMS users, especially newcomers to the community, have a tendency to believe everything they read in the magazines and hear in DECUS sessions. Fortunately, there are (to borrow a Jimmy Carter-ism) "moral-equivalent" physicists who are also VMS users.

A GOOD QUESTION

At the May DECUS Symposium in Atlanta, I mentioned in passing during my session that the efficiency of DCL command procedures could be increased by "pre-tokenizing" the code prior to execution. This was based on an experiment I conducted a while back, prompted by a letter from a reader.

While wandering through the microfiche, the reader had discovered a routine within DCL called "DCL_DIET". Intended to improve the interpreter's efficiency, the routine performed a number of functions. It collapsed all excess spaces in command lines, shortened commands to their smallest unique form, threw away comments and did several other things to speed up subsequent execution of the same lines of code.

To test this out, I quickly threw together a command file which looked something like this:

$ COUNT = 0
$ GOTO BOTTOM
$ TOP:
$!
    .
    .     10,000 comment lines
    .
$ BOTTOM:
$ COUNT = COUNT + 1
$ WRITE SYS$OUTPUT "At bottom - count ''COUNT'"
$ IF COUNT .EQ. 2 THEN EXIT
$ GOTO TOP

When executed, it took quite a while for DCL to chug through all the comment lines before it printed out "At bottom - count 1". However, "At bottom - count 2" appeared almost instantly, proving that DCL had indeed blown away the superfluous comment lines.

After I told this story at DECUS, someone said they had run benchmarks on a 250-line command file and had observed virtually no difference in elapsed cpu time. Another recounted how their site had appeared to have cut the execution time of a procedure in half by shortening commands to the smallest acceptable length, and later discovered the improvement was actually due to a reduction in i/o caused by the file allocation size shrinking from 6 blocks to 3 blocks.

This was serious stuff.
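For anyone who wants to repeat that sort of measurement, a minimal timing harness along the following lines will do. It is only a sketch: it assumes the looping test above has been saved as LOOPTEST.COM (an arbitrary name), and it relies on F$GETJPI's CPUTIM item, which reports the process's accumulated CPU time in 10-millisecond ticks.

$! Sketch: time one complete run of the looping test procedure.
$ T1 = F$GETJPI("","CPUTIM")    ! CPU ticks used by this process so far
$ @LOOPTEST                     ! the 10,000-comment test, saved as LOOPTEST.COM
$ T2 = F$GETJPI("","CPUTIM")
$ DELTA = T2 - T1
$ WRITE SYS$OUTPUT "CPU ticks consumed (10 ms each): ''DELTA'"

Pointing the same wrapper at two versions of a procedure gives the kind of cpu comparison reported below.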
Upon returning from DECUS, I decided to perform my own benchmarks, using a more realistic-sized command file so that inordinate amounts of useless i/o would not skew the results.

THE TEST

I decided to use the Tic-Tac-Toe game, TTT.COM, which appeared in the last issue. It seemed a good choice, since it contained code executed just once (during initialization) as well as subroutines repeatedly called throughout the procedure.

TTT1.COM was identical to the original procedure; TTT2.COM had the "pre-tokenizing" lines commented out. I played each version of the game 100 times, using the same play pattern (user moves 1, 2, 7, 6, 8, and "N" for play again). The averaged results were:

   VMS 4.7 / VAX 11/750             TTT1.COM    TTT2.COM
      Elapsed cpu (seconds):            7.99        8.17
      Page faults:                        36          36
      I/O:                               134         131

   VMS 5.1-B / VAXstation 3100
      Elapsed cpu (seconds):            2.19        2.30
      Page faults:                        37          37
      I/O:                               103         105

Aside from showing that a VAXstation 3100 is about 3.5 times faster than a 750, what can we conclude from these figures?

CONCLUSIONS

The differences in I/O and page faulting can be attributed to a number of factors beyond the immediate control of the user, such as system load, bus configurations, etc. I've seen varying figures for these parameters when I've conducted my test informally on different machines.

The elapsed cpu time, however, remained consistent in each test, regardless of processor. On the 750, the "pre-compiled" procedure ran .18 seconds, or 2.2 percent, faster. The 3100 test showed the optimized TTT1.COM ran .11 seconds, or 4.8 percent, faster.

Why the variation? Could it be differences in VMS versions? Software-emulated microcode in the VAXstation CPU? Sunspots?

I performed another set of tests in which I answered yes to the "Play again?" question, then played 99 additional "standard" games. In each case, there was virtually no difference in execution time between the procedures.

The reason for this result is obvious... you can only "optimize" DCL once. After the first pass through the procedure, DCL has performed all the optimization it can. So our 2.2 and 4.8 percent "advantages" dwindle on subsequent executions; after 100 passes using the "Play again?" option, the difference is only 1/100th of the original value.

Does this mean the "pre-compile" step is worthless? Definitely maybe. It depends upon the size, structure and execution pattern of the command file. If TTT.COM were an important procedure which could not be converted into a program written in a compilable language, and were executed 10 times daily by 10 users, we would save 18 seconds of cpu time a day on the 750, 11 seconds on the VAXstation. If, on the other hand, TTT.COM were a "captive" command file in which the users operated, only the first pass through would result in any cpu savings, and a small one at that.

Is it worth it? Only you can decide, based on your own analysis of command file usage.

It should be noted that TTT.COM was already "optimized" when it was written, using the shortest legal command abbreviations available as well as extremely brief symbol and label names. The savings could have been greater if the original file had contained lots of comments, expanded commands and verbose symbols and labels. Testing of this hypothesis is, as they say in textbooks, "left as an exercise for the reader" (a small before-and-after illustration follows below).

As for me, I'm now looking for heavy water, some palladium rods and an old car battery.
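To make that exercise a bit more concrete, here is an illustrative pair of fragments. They are not taken from TTT.COM, and the symbol names are invented; the first is written the verbose way, the second says the same thing with an abbreviated verb, a one-character symbol and no comment.

$! Verbose form: full verb, descriptive symbol, trailing comment.
$ GAME_COUNT = GAME_COUNT + 1           ! bump the number of games played
$ WRITE SYS$OUTPUT "Games played: ''GAME_COUNT'"
$
$! Compressed form: same effect, far fewer characters for DCL to scan.
$! (WRIT is an unambiguous abbreviation of the WRITE verb.)
$ C = C + 1
$ WRIT SYS$OUTPUT "Games played: ''C'"

Multiply that difference over a few hundred lines and you get the sort of shrinkage described earlier, where a file's allocation dropped from 6 blocks to 3.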
----------
Kevin G. Barkes is an independent consultant. He publishes the KGB Report newsletter, operates the www.kgbreport.com website, lurks on comp.os.vms, and can be reached at kgbarkes@gmail.com.