win7+GPU: reported CPU time is too small
Message boards : Number crunching : win7+GPU: reported CPU time is too small

Author Message
Profile BeemerBiker
Avatar
Send message
Joined: 12 Aug 09
Posts: 39
Credit: 223,866,050
RAC: 1,326
Message 669 - Posted: 22 Aug 2009, 0:24:47 UTC

I am seeing CPU time under 1 sec for 9800gtx+ and opteron 270 under windows 7 as shown here

A slightly faster opteron with the same 9800gtx+ is showing a more realistic (?) 95 seconds as shown here

An even faster quad with gtx280 is showing cpu seconds of about 65 seconds as shown here

I do not know what the correct time values are, but I suspect about 95 or so seconds. These systems exist only for a boinc farm and the incorrect values are skewing my performance data as shown here

Also, it would be nice if the CUDA accelerator were identified in the task ID so one would know which accelerator was used. GPUGRID does a nice job of identifying the CUDA device that was used.

riptide
Send message
Joined: 7 Aug 09
Posts: 54
Credit: 1,060,610
RAC: 0
Message 670 - Posted: 22 Aug 2009, 10:06:17 UTC

Same with me... but I still get the proper credit regardless of the reported CPU time. WUs vary between 5 seconds and maybe 45 seconds of CPU usage, but with 14-15 min WU run times.

Profile Logan
Avatar
Send message
Joined: 2 Jul 09
Posts: 124
Credit: 37,455,338
RAC: 0
Message 671 - Posted: 22 Aug 2009, 11:03:39 UTC
Last modified: 22 Aug 2009, 11:13:24 UTC

BOINC reports your CPU time usage. Not the GPU time...

The real time is in your log...

Name collatz_1249559402_11424_0
Workunit 297331
Created 6 Aug 2009 13:44:56 UTC
Sent 22 Aug 2009 7:40:59 UTC
Received 22 Aug 2009 9:12:02 UTC
Server state Over
Outcome Success
Client state Done
Exit status 0 (0x0)
Computer ID 1853
Report deadline 21 Sep 2009 7:40:59 UTC
CPU time 0.140625
stderr out <core_client_version>6.6.36</core_client_version>
<![CDATA[
<stderr_txt>
Beginning processing...
Collatz CUDA v1.10 (GPU Optimized Application)
worker: trying boinc_get_init_data()...
Looking for checkpoint file...
No checkpoint file found. Starting at beginning.
Success in SetCUDABlockingSync for device 0
Generating result output.
2361184340892950702440
2361184340897245669736
2361184340895823172946
1661
2055358249760
Elapsed time: 1554.94 seconds<<<<<----GPU+CPU time (so 'real' time)
called boinc_finish

</stderr_txt>
]]>
____________
Logan.

BOINC FAQ Service (Ahora, también disponible en Español/Now available in Spanish)
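The distinction Logan draws can be demonstrated in a few lines of Python (a minimal sketch, not BOINC code): `time.process_time` only counts CPU cycles charged to the process, while `time.perf_counter` measures wall-clock elapsed time. Here `time.sleep` stands in for the CPU blocking while the GPU does the actual work, which is why the reported "CPU time" for a GPU task can be a fraction of a second even though the task ran for many minutes.

```python
import time

def offloaded_work(seconds: float) -> tuple[float, float]:
    """Simulate a GPU task: the CPU mostly blocks (here, sleeps)
    while the 'device' does the work, so almost no CPU time accrues."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    time.sleep(seconds)  # stands in for blocking on the GPU
    cpu = time.process_time() - cpu_start
    wall = time.perf_counter() - wall_start
    return cpu, wall

cpu, wall = offloaded_work(1.0)
# cpu will be close to zero; wall will be at least one second
print(f"CPU time: {cpu:.3f}s, elapsed: {wall:.3f}s")
```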

Profile BeemerBiker
Avatar
Send message
Joined: 12 Aug 09
Posts: 39
Credit: 223,866,050
RAC: 1,326
Message 677 - Posted: 22 Aug 2009, 17:40:35 UTC - in response to Message 671.
Last modified: 22 Aug 2009, 18:02:21 UTC

BOINC reports your CPU time usage. Not the GPU time...

The real time is in your log...


Granted, the log shows elapsed time, but ---

Collatz uses the phrase "CPU time" for user results. However, only Windows 7 is reporting these very small CPU times, so the problem seems to be in what Win7 reports. Yet when I checked SETI CUDA results for the same Windows 7 machine, I did not see the same problem.

A seti cuda result for the same win7 system

Run time 181.5781
stderr out <core_client_version>6.6.36</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 1 CUDA device(s):
Device 1 : GeForce 9800 GTX/9800 GTX+
totalGlobalMem = 536870912



Note that they use the phrase "Run time" rather than "CPU time". Maybe Collatz runs a different version of the host web software?

Here is another system (Vista 64) running SETI CUDA

Run time 113.0539
stderr out <core_client_version>6.6.36</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 1 CUDA device(s):
Device 1 : GeForce GTX 280



Note that this Vista 64 "Run time" compares with the Win7 run time of 181 seconds. One would expect a GTX 280 to be faster than a 9800 GTX+, so the SETI CUDA web page shows consistency.

The Collatz web page is not consistent in that the Windows 7 (64-bit) times are extremely small compared to my other Vista 64 systems with the same or even faster CUDA processor.

Granted, I should be using the elapsed time for performance measurements, but I am not sure whether the elapsed time includes time suspended, which it would if it is measured by wall clock.

I put together a program that analyzes my projects and compares performance, but I ran into a problem when I simply divided the average granted credit by the average CPU time. For some projects this works fine; for others that use a coprocessor there are problems.
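The calculation described above can be sketched in a few lines (the figures below are hypothetical, not taken from the actual task pages): dividing average granted credit by average reported CPU time gives a wildly inflated credits-per-second figure for GPU work units, while dividing by the elapsed times from the stderr logs gives something usable for cross-project comparison.

```python
def credit_rate(granted, times):
    """Average granted credit divided by average time, in credits/sec."""
    return (sum(granted) / len(granted)) / (sum(times) / len(times))

granted   = [148.0, 148.0, 148.0]      # hypothetical granted credit per WU
cpu_times = [0.14, 0.15, 0.13]         # reported CPU times (misleading for GPU WUs)
elapsed   = [1554.9, 1601.2, 1580.4]   # elapsed times taken from stderr logs

print(credit_rate(granted, cpu_times))  # absurdly large: ~1000 credits/sec
print(credit_rate(granted, elapsed))    # a realistic fraction of a credit/sec
```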

Profile kashi
Send message
Joined: 28 Jul 09
Posts: 164
Credit: 100,303,718
RAC: 0
Message 682 - Posted: 23 Aug 2009, 2:38:42 UTC
Last modified: 23 Aug 2009, 3:16:36 UTC

If tasks are not left waiting on the CPU for a chance to process on the GPU, the CPU time shown is sometimes inversely related to the speed at which GPU tasks are processed. In other words, short CPU time = slow Collatz GPU task, long CPU time = fast Collatz GPU task.

Therefore using the reported CPU time of Collatz GPU tasks to compare relative performance of different projects is not a valid measurement.

In some GPU projects, insufficient CPU resources allocated to support processing on a video card will often cause a reduction in performance. With the current Collatz GPU application, a faster/higher-model GPU without enough CPU resources allocated to support GPU processing may perform the same as or worse than a slower/lower-model GPU with adequate CPU resources. I do not know which other projects you are running or on how many cores, but a quick look shows that one of your 9800 GTX+ cards is completing Collatz tasks faster than your GTX 280, so perhaps this issue is affecting you.

If you wish to investigate this, as an experiment you could run a Collatz GPU task with nothing else running on the CPU cores and see what difference it makes to Collatz GPU processing speed and reported CPU time. My prediction is that your processing time will decrease and the reported CPU time will increase.

In both this project and MilkyWay ATI I always allocate 1, or sometimes 2, of my 8 cores to support processing on my video card and thus get the greatest speed/efficiency. Other than some recent Collatz tasks where an invalid parameter caused inconclusive results, this has always worked well for me.
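In later BOINC clients (7.0.40 and up), the per-core reservation kashi describes can be expressed declaratively with an app_config.xml in the project's directory. This is only a sketch, and the app name below is an assumption that must be checked against the project's client_state.xml; in 2009-era clients the equivalent was to leave a core free via the CPU-usage preferences.

```
<!-- Sketch: reserve one full CPU core for each Collatz GPU task.
     The <name> value is an assumption; verify it in client_state.xml. -->
<app_config>
  <app>
    <name>collatz</name>
    <gpu_versions>
      <gpu_usage>1.0</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```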


Copyright © 2018 Jon Sonntag; All rights reserved.