Newbie question..

KATHY
Joined: 25 Jul 09
Posts: 2
Credit: 819
RAC: 0
Message 271 - Posted: 29 Jul 2009, 13:01:28 UTC

Hi, I am new to this and do not have a video card that can do the fancy GPU stuff. But this does not seem quite fair; can someone explain this to me? Thanks in advance.

Task ID   Computer   Sent                      Time reported or deadline   Status                    CPU time (sec)   Claimed credit   Granted credit
36776     1090       28 Jul 2009 5:12:27 UTC   29 Jul 2009 10:22:35 UTC    Completed and validated   0.14             0.00             83.76
36777     997        28 Jul 2009 5:17:31 UTC   29 Jul 2009 12:55:31 UTC    Completed and validated   16,929.92        82.54            83.76

Gipsel
Volunteer moderator
Project developer
Project tester
Joined: 2 Jul 09
Posts: 279
Credit: 77,193,069
RAC: 77,543
Message 272 - Posted: 29 Jul 2009, 13:16:01 UTC - in response to Message 271.

Hi, I am new to this and do not have a video card that can do the fancy GPU stuff. But this does not seem quite fair; can someone explain this to me? Thanks in advance.

Task ID   Computer   Sent                      Time reported or deadline   Status                    CPU time (sec)   Claimed credit   Granted credit
36776     1090       28 Jul 2009 5:12:27 UTC   29 Jul 2009 10:22:35 UTC    Completed and validated   0.14             0.00             83.76
36777     997        28 Jul 2009 5:17:31 UTC   29 Jul 2009 12:55:31 UTC    Completed and validated   16,929.92        82.54            83.76

You can't compare the CPU time of the GPU application to the CPU app. As the CPU has to do virtually nothing, almost no CPU time is required while the computation runs on the video card. Actually, the WU took a lot longer than the 0.14 seconds of CPU time you see there (about 9 minutes on a GTX260).
Furthermore, you are running the 32-bit version of the application. The algorithm benefits a lot from the parallel processing capabilities of a GPU (which has hundreds of processing units!) as well as from processing larger chunks of data with a 64-bit application.

Maybe the next version will bring the 32- and 64-bit versions a bit closer together, but the 64-bit version will still be about twice as fast (and a high-end GPU quite a bit more than that).
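
To illustrate the point about CPU time versus wall-clock time, here is a minimal sketch (not the actual Collatz code; the kernel is a made-up stand-in, and POSIX behaviour of clock() is assumed). The host thread launches the kernel and then sleeps until it finishes, so it accumulates almost no CPU time even though minutes can pass on the wall clock:

    /* Minimal sketch -- hypothetical, not the Collatz app itself: why a GPU
       task can keep the card busy for minutes yet report ~0 CPU time.      */
    #include <stdio.h>
    #include <time.h>
    #include <cuda_runtime.h>

    /* Made-up stand-in for the real search kernel: just burns GPU cycles. */
    __global__ void busy_kernel(long long iters)
    {
        volatile long long x = 0;
        for (long long i = 0; i < iters; ++i) x += i;
    }

    int main(void)
    {
        /* Ask the runtime to block (sleep) instead of spin-waiting on the CPU. */
        cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);

        clock_t cpu_start  = clock();      /* process CPU time (POSIX behaviour assumed) */
        time_t  wall_start = time(NULL);   /* elapsed wall-clock time */

        busy_kernel<<<216, 128>>>(50000000LL);  /* grid loosely sized after a GTX260-216 */
        cudaDeviceSynchronize();                /* host sleeps here while the GPU crunches */

        printf("CPU time : %.2f s\n", (double)(clock() - cpu_start) / CLOCKS_PER_SEC);
        printf("Wall time: %.0f s\n", difftime(time(NULL), wall_start));
        return 0;
    }

With a spin-waiting synchronisation mode, the same program would instead burn a full CPU core while waiting, which is why different GPU apps can report very different CPU times for the same amount of GPU work.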

KATHY
Joined: 25 Jul 09
Posts: 2
Credit: 819
RAC: 0
Message 278 - Posted: 29 Jul 2009, 18:57:15 UTC - in response to Message 272.

OK, thanks, I understand now... still do not think it is quite fair though! lol

TomaszPawel
Joined: 13 Jul 09
Posts: 29
Credit: 23,946,954
RAC: 0
Message 286 - Posted: 29 Jul 2009, 23:24:36 UTC - in response to Message 272.
Last modified: 29 Jul 2009, 23:56:34 UTC

Actually, the WU took a lot longer than the 0.14 seconds of CPU time you see there (about 9 minutes on a GTX260).
Furthermore, you are running the 32-bit version of the application. The algorithm benefits a lot from the parallel processing capabilities of a GPU (which has hundreds of processing units!) as well as from processing larger chunks of data with a 64-bit application.


LOL....

Why does the 64-bit Collatz CUDA app 1.10 run in 32-bit mode on 64-bit Vista?!



On 32-bit XP on a GTX260-216, the 32-bit CUDA app 1.10 runs in 6m25s.
On 64-bit Vista on a GTX260-216, the 64-bit CUDA app 1.10 runs as 32-bit!!! in 6m24s...

OMG....

Also,

On 32-bit XP on a GTX260-216, the 32-bit CUDA app 1.10 runs in 6m25s, but when MilkyWay CPU SSE4.1 tasks run on 4 cores alongside Collatz, a Collatz WU takes 9m...

On 64-bit Vista on a GTX260-216, the 64-bit CUDA app 1.10 (running as 32-bit) takes 13m!!! when 64-bit Collatz CPU tasks are also running on 4 cores...
____________
POLISH NATIONAL TEAM - Join! Crunch! Win!

Gipsel
Volunteer moderator
Project developer
Project tester
Joined: 2 Jul 09
Posts: 279
Credit: 77,193,069
RAC: 77,543
Message 288 - Posted: 30 Jul 2009, 0:26:53 UTC - in response to Message 286.
Last modified: 30 Jul 2009, 0:34:51 UTC

Why does the 64-bit Collatz CUDA app 1.10 run in 32-bit mode on 64-bit Vista?!

For a GPU application it does not matter how the host side of the app is compiled, as the actual computation is done on the GPU. It makes exactly zero difference to the performance.

Regarding the variation of the computation times depending on the CPU load, I have a trick up my sleeve (actually I've just read the BOINC documentation) that will rectify it in the next versions ;)
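
One common way to make a GPU feeder app less sensitive to CPU load (just a sketch of a general technique; whether this is the trick referred to above is not stated in the thread) is to raise the priority of the process that feeds the GPU, so that CPU-bound tasks on the other cores cannot starve it:

    /* Hypothetical sketch -- not necessarily the trick hinted at above.
       Raising the process/thread priority on Windows so that CPU-bound
       tasks on the same box cannot starve the thread feeding the GPU.  */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* ABOVE_NORMAL is enough to win against normal-priority CPU crunchers
           without making the desktop unresponsive (unlike HIGH/REALTIME).    */
        if (!SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS))
            printf("SetPriorityClass failed: %lu\n", GetLastError());
        if (!SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL))
            printf("SetThreadPriority failed: %lu\n", GetLastError());

        /* ... launch kernels / poll the GPU here ... */
        return 0;
    }

Keeping the bump modest (ABOVE_NORMAL rather than HIGH or REALTIME) lets the feeder thread preempt ordinary BOINC CPU tasks while leaving the machine responsive.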

Gipsel
Volunteer moderator
Project developer
Project tester
Joined: 2 Jul 09
Posts: 279
Credit: 77,193,069
RAC: 77,543
Message 289 - Posted: 30 Jul 2009, 0:34:02 UTC - in response to Message 278.

still do not think it is quite fair though! lol

Just to make up some numbers: a GTX260-216 has 216 compute elements running at 1.24 GHz. Your Core2 (specifically, one core of it) is just a single compute element (albeit roughly twice as powerful, or a bit more, in 32-bit mode) running at 2.66 GHz. You do the math.
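
Doing that math as a rough back-of-envelope estimate (treating each compute element as one unit of work per clock, which is a simplification):

    GPU: 216 elements × 1.24 GHz ≈ 268 G element-cycles/s
    CPU:   1 core     × 2.66 GHz ≈ 2.7 G element-cycles/s

Even crediting the Core2 core with two to three times as much useful work per cycle, the GPU still ends up somewhere around 30-50× ahead in raw throughput, which matches the run times above: roughly 9 minutes (about 540 s) on the GTX260 versus 16,929.92 s of CPU time, a factor of about 31.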

