At the end of last week’s post I mentioned wanting to make this benchmark more modular. As I thought about it over the weekend, that goal became more and more important. There are a couple of potential issues that I think would be useful to tackle before the benchmark goes live.
First, when we test CPUs, we try to keep the rest of the platforms as similar as we can make them: they all have the same amount of RAM, same GPU, same drives, etc. This allows us to eliminate as many variables as possible. Then when we want to test GPUs, we use one platform and just swap video cards between runs. The way the script is currently written, it is just one long script, so it would run the whole suite of tests every time. It doesn’t make much sense to run the GPU Rendering benchmark on 10 systems when the GPU is the same in all of them.
So I needed a way to choose which tests would run each time. Luckily, MAXScript has some UI capabilities. What I’m thinking is a simple UI to choose CPU or GPU tests, defaulting to CPU, with a button to start the tests. I’ll already need some automation software that will install MAX, launch it, and open the script, so having it click an extra button shouldn’t be much work. And if I run the tests manually, it will give me a way to customize each run. This is my first mockup of the UI:
It is pretty basic right now, but it offers a lot of potential. Reading through the documentation, I could potentially display the results in this UI, among a number of other options. There is progress bar functionality, but I'm not sure how that will work. Maybe a simple check mark next to the completed tests, just to give a sense of how far it has gone.
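To give a rough idea of what I'm picturing, here is a minimal sketch of that kind of rollout in MAXScript. The control and variable names (chkCPU, runCPUtests, and so on) are just placeholders, and "master.ms" is a stand-in for whatever the master script ends up being called:
-- Placeholder globals the master script can read later
global runCPUtests = true
global runGPUtests = false

rollout benchmarkUI "Benchmark Options"
(
    checkbox chkCPU "Run CPU tests" checked:true
    checkbox chkGPU "Run GPU tests" checked:false
    button btnStart "Start Benchmark"

    on btnStart pressed do
    (
        -- Store the choices so the master script knows which tests to run
        runCPUtests = chkCPU.checked
        runGPUtests = chkGPU.checked
        destroyDialog benchmarkUI
        fileIn "$scenes/master.ms" -- placeholder name for the master script
    )
)
createDialog benchmarkUI 200 100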
The second issue I looked into was that adding or removing tests in this script was a little messy. I’m anticipating around 10 different tests when all is said and done, so the script was getting pretty long. Doing some research, I found a function that allows me to call one script from another. What I’ve done is move each test into its own script, so instead of one huge master script with 10 tests, I have 10 scripts, each with a single test. The “master” script now looks like this:
score = openfile "$scenes/scores.txt" mode:"w"
resetmaxFile #noPrompt
-- Modeling test
fileIn "$scenes/box.ms"
Sleep 5
-- CPU Rendering
fileIn "$scenes/CPU_render.ms"
Sleep 5
-- GPU Rendering
fileIn "$scenes/GPU_render.ms"
Sleep 5
-- Texture baking
fileIn "$scenes/tex_bake.ms"
Sleep 5
-- Fluid Sim
fileIn "$scenes/fluid.ms"
Sleep 5
-- Cloth Sim
fileIn "$scenes/cloth_sim.ms"
Sleep 5
-- loading/saving
fileIn "$scenes/load_save.ms"
Sleep 5
-- Viewport FPS
fileIn "$scenes/FPS_test1.ms"
Sleep 5
close score
Now it's only a couple of lines of code to add or remove a test. To tie this in with the UI above, I’ll have the UI set a variable for each option that was selected, and then the master script will run or skip tests based on those variables.
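As a rough example of what I mean, each block in the master script could be wrapped in a check against those globals, something like this (using the same placeholder runGPUtests variable from the UI sketch above):
-- Only run the GPU Rendering test if it was selected in the UI
if runGPUtests do
(
    fileIn "$scenes/GPU_render.ms"
    sleep 5
)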
So that is great, but one problem has come up. In the original script, I had to leave the Viewport FPS test until the end because I was never able to get the script to wait for the animation to complete before moving on. Through some sloppy coding I was able to get it to function well enough. However, once I moved it to a separate script, it broke again. Every other test properly finishes before the master script moves on to the next one, but as soon as the FPS test begins, the master script jumps ahead to the sleep and “close score” commands.
Here is the Viewport FPS script:
(
    global timeCheck
    local s = animationRange.start as integer
    local f = animationRange.end as integer
    local startFrame = s
    local endFrame = f
    startAnimTime = timeStamp()

    -- Called every time the current frame changes during playback
    fn timeCheck =
    (
        if (currentTime == 1000) do
        (
            stopAnimation()
            endAnimTime = timeStamp()
            -- 1000 frames divided by elapsed seconds gives average FPS
            format "Playback ran at % FPS\n" (1000 / ((endAnimTime - startAnimTime) / 1000.0)) to:score
            unRegisterTimeCallback timeCheck
        )
    )

    (
        playbackLoop = false
        realtimePlayback = false
        playActiveOnly = true
        sliderTime = 0
        registerTimeCallback timeCheck
        playAnimation()
    )
)
It functions correctly as long as I don't want anything else to happen afterward. If you see something I should change, or another workaround, feel free to let me know in the comments.
That's all for this week. As always, if you want to keep following along with my behind-the-scenes look into Puget Labs, be sure to subscribe.