Long crunchtime?

Message boards : Number crunching : Long crunchtime?

Profile DoctorNow
Joined: 12 Jul 09
Posts: 30
Credit: 102,805,175
RAC: 0
Message 73 - Posted: 12 Jul 2009, 9:58:11 UTC
Last modified: 12 Jul 2009, 10:47:37 UTC

Hi!

I just got my first WU under 64-bit Linux.
The estimated time is 35 hours, but after 16 minutes it has only crunched 0.5%, so if it keeps going at that rate it will take about 55 hours!
Is that right?
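
Here is the simple extrapolation I'm doing, as a quick Python sketch (just a straight proportional estimate, not anything BOINC itself calculates):

```python
# Extrapolate total run time from elapsed time and progress so far,
# assuming the WU keeps crunching at the same average speed.
def extrapolated_hours(elapsed_minutes: float, progress_fraction: float) -> float:
    total_minutes = elapsed_minutes / progress_fraction
    return total_minutes / 60.0

# 16 minutes for 0.5% done -> roughly 53 hours in total.
print(round(extrapolated_hours(16, 0.005), 1))  # 53.3
```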


Edit:
OK, looks like I can answer my own question: it is correct. I have another long one on another computer, and a teammate confirmed it. ;-)

Profile DoctorNow
Joined: 12 Jul 09
Posts: 30
Credit: 102,805,175
RAC: 0
Message 81 - Posted: 12 Jul 2009, 13:45:33 UTC
Last modified: 12 Jul 2009, 13:47:10 UTC

Heck, I don't get it.
Looking at my results now, I see the WU I have under 64-bit Linux has already been crunched by a 64-bit Windows machine in 224 seconds, and it's not CUDA!
And I'm now at 2.6% after 1.5 hours!
And I've read in the other thread that the credits would be low anyway, even with long run times.
It looks weird...
My machine is an AMD X2 5200.

Profile Cori
Joined: 12 Jul 09
Posts: 304
Credit: 6,246,688
RAC: 0
Message 83 - Posted: 12 Jul 2009, 13:54:04 UTC - in response to Message 81.

> Heck, I don't get it.
> Looking at my results now, I see the WU I have under 64-bit Linux has already been crunched by a 64-bit Windows machine in 224 seconds, and it's not CUDA!
> And I'm now at 2.6% after 1.5 hours!
> And I've read in the other thread that the credits would be low anyway, even with long run times.
> It looks weird...
> My machine is an AMD X2 5200.

My Win x64 laptop also has a very long unit... guessing from here, it will have a final run time of about 33 hours.

P.S. This WU is not the one where I noticed that checkpointing resets the progress bar; I aborted that WU.
I hope I can continue my current WUs without having to restart BOINC; otherwise the run times would be even worse if the progress bar started from scratch after a BOINC restart. :-(
____________
Grrrreetings from the Lazy Cat

Liuqyn
Joined: 8 Jul 09
Posts: 26
Credit: 164,516,656
RAC: 0
Message 90 - Posted: 12 Jul 2009, 16:17:51 UTC - in response to Message 81.

> Heck, I don't get it.
> Looking at my results now, I see the WU I have under 64-bit Linux has already been crunched by a 64-bit Windows machine in 224 seconds, and it's not CUDA!
> And I'm now at 2.6% after 1.5 hours!
> And I've read in the other thread that the credits would be low anyway, even with long run times.
> It looks weird...
> My machine is an AMD X2 5200.



The Windows box was probably running an ATI card; those don't show up in the results (as far as I'm aware).

Profile Slicker
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 11 Jun 09
Posts: 2525
Credit: 740,580,099
RAC: 1
Message 108 - Posted: 13 Jul 2009, 5:34:22 UTC - in response to Message 81.

> Heck, I don't get it.
> Looking at my results now, I see the WU I have under 64-bit Linux has already been crunched by a 64-bit Windows machine in 224 seconds, and it's not CUDA!
> And I'm now at 2.6% after 1.5 hours!
> And I've read in the other thread that the credits would be low anyway, even with long run times.
> It looks weird...
> My machine is an AMD X2 5200.


If you take a closer look at the result you linked to, you will see that it definitely has an ATI card, which is like having several hundred CPUs in a single box. A high-end GPU can process hundreds of numbers in parallel, versus one at a time on a CPU (or maybe 4 at a time with SSE2-specific code). A $200 nVidia or ATI GPU can outproduce a $2,000 Intel Core i7 CPU, at least on a project that can take advantage of the parallel nature of GPU processing. That is not the case for all projects; a scalpel and a machete each have their own specific uses.
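
Just to picture the idea (a Python/NumPy toy, not the project's actual code), here is one Collatz step applied to a whole batch of numbers at once instead of one at a time; a GPU does this kind of thing across hundreds of values per clock:

```python
import numpy as np

def collatz_step_batch(n: np.ndarray) -> np.ndarray:
    # One Collatz step for every number in the batch at once:
    # n/2 if n is even, 3n+1 if n is odd.
    return np.where(n % 2 == 0, n // 2, 3 * n + 1)

batch = np.arange(1, 9, dtype=np.int64)
print(collatz_step_batch(batch))  # [ 4  1 10  2 16  3 22  4]
```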

That having been said, the CPU apps will get a major update to speed them up; expect an order of magnitude improvement. It is on the to-do list, but there's only one of me and I also work for a living (thankfully), so time is limited, and other things like checkpointing and getting the assimilator output into human-readable form need to happen first.

Profile DoctorNow
Joined: 12 Jul 09
Posts: 30
Credit: 102,805,175
RAC: 0
Message 109 - Posted: 13 Jul 2009, 7:38:44 UTC - in response to Message 108.

> If you take a closer look at the result you linked to, you will see that it definitely has an ATI card, which is like having several hundred CPUs in a single box.

Well, nVidia cards are listed when you look at the computers, but ATI cards are not. That's why I first thought it could be something weird. A look over the results from MilkyWay finally gave me the hint, so you two are right. ;-)

> That having been said, the CPU apps will get a major update to speed them up; expect an order of magnitude improvement.

That sounds good. :-)
I still have to check how the CUDA app runs. I have a suitable card but haven't had the chance to try it out yet; I'm sure it runs faster. ;-)

Profile STE\/E
Joined: 12 Jul 09
Posts: 581
Credit: 761,710,729
RAC: 0
Message 381 - Posted: 4 Aug 2009, 11:13:06 UTC

It seems the WU run times have doubled for the NVIDIA cards, but the credits have stayed the same??? FYI, the credits here were running neck and neck with the GPUGrid project; if they stay as they are now, you will effectively be cutting them to half of what GPUGrid gives for the NVIDIA cards.

I'm not complaining, just bringing up an observation I made this morning after the server came back up...

Profile Logan
Joined: 2 Jul 09
Posts: 124
Credit: 37,455,338
RAC: 0
Message 382 - Posted: 4 Aug 2009, 11:31:53 UTC - in response to Message 381.

> It seems the WU run times have doubled for the NVIDIA cards, but the credits have stayed the same??? FYI, the credits here were running neck and neck with the GPUGrid project; if they stay as they are now, you will effectively be cutting them to half of what GPUGrid gives for the NVIDIA cards.
>
> I'm not complaining, just bringing up an observation I made this morning after the server came back up...


Credits are doubled too...;)
____________
Logan.

BOINC FAQ Service (now also available in Spanish)

Profile STE\/E
Joined: 12 Jul 09
Posts: 581
Credit: 761,710,729
RAC: 0
Message 384 - Posted: 4 Aug 2009, 11:48:02 UTC - in response to Message 382.

> It seems the WU run times have doubled for the NVIDIA cards, but the credits have stayed the same??? FYI, the credits here were running neck and neck with the GPUGrid project; if they stay as they are now, you will effectively be cutting them to half of what GPUGrid gives for the NVIDIA cards.
>
> I'm not complaining, just bringing up an observation I made this morning after the server came back up...

> Credits are doubled too...;)


I don't think they were at first, but I see that they are now; somebody may have jogged their memory... :) I like the longer run times; they give me at least 6-8 hours of run time now, depending on the box, in case the server goes KAABOOM... ;)

Profile Slicker
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 11 Jun 09
Posts: 2525
Credit: 740,580,099
RAC: 1
Message 385 - Posted: 4 Aug 2009, 12:32:00 UTC - in response to Message 384.

> It seems the WU run times have doubled for the NVIDIA cards, but the credits have stayed the same??? FYI, the credits here were running neck and neck with the GPUGrid project; if they stay as they are now, you will effectively be cutting them to half of what GPUGrid gives for the NVIDIA cards.
>
> I'm not complaining, just bringing up an observation I made this morning after the server came back up...

> Credits are doubled too...;)

> I don't think they were at first, but I see that they are now; somebody may have jogged their memory... :) I like the longer run times; they give me at least 6-8 hours of run time now, depending on the box, in case the server goes KAABOOM... ;)


Yes, I doubled the size of the WUs just about the time Comcast went down. Once Gipsel's new client is released, the size will go up about 10 times so as not to overload the server. Credit is awarded according to the total number of steps calculated, so if the size doubles and therefore the number of steps calculated doubles, the credit also doubles.
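
In other words, credit is simply proportional to the work done. A toy sketch (the per-step constant here is made up for illustration, not the project's real value):

```python
CREDIT_PER_STEP = 1e-6  # illustrative constant only, not the project's real value

def credit_for(steps_calculated: int) -> float:
    # Credit scales linearly with the number of steps calculated.
    return steps_calculated * CREDIT_PER_STEP

base_steps = 50_000_000
print(credit_for(base_steps))      # credit for the old WU size
print(credit_for(2 * base_steps))  # doubled WU -> exactly double the credit
```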

Profile Logan
Joined: 2 Jul 09
Posts: 124
Credit: 37,455,338
RAC: 0
Message 386 - Posted: 4 Aug 2009, 12:40:29 UTC - in response to Message 385.

> It seems the WU run times have doubled for the NVIDIA cards, but the credits have stayed the same??? FYI, the credits here were running neck and neck with the GPUGrid project; if they stay as they are now, you will effectively be cutting them to half of what GPUGrid gives for the NVIDIA cards.
>
> I'm not complaining, just bringing up an observation I made this morning after the server came back up...

> Credits are doubled too...;)

> I don't think they were at first, but I see that they are now; somebody may have jogged their memory... :) I like the longer run times; they give me at least 6-8 hours of run time now, depending on the box, in case the server goes KAABOOM... ;)

> Yes, I doubled the size of the WUs just about the time Comcast went down. Once Gipsel's new client is released, the size will go up about 10 times so as not to overload the server. Credit is awarded according to the total number of steps calculated, so if the size doubles and therefore the number of steps calculated doubles, the credit also doubles.


Gipsel's new client will be ATI, CUDA, or both...?
____________
Logan.

BOINC FAQ Service (now also available in Spanish)

Profile Gipsel
Volunteer moderator
Project developer
Project tester
Joined: 2 Jul 09
Posts: 279
Credit: 77,476,758
RAC: 76,461
Message 388 - Posted: 4 Aug 2009, 13:51:12 UTC - in response to Message 386.
Last modified: 4 Aug 2009, 13:53:03 UTC

> Gipsel's new client will be ATI, CUDA, or both...?

It will be CPU and ATI, but the new algorithm is easily ported to CUDA, too. Shouldn't be a problem for Slicker.
In fact, the nVidia GPUs may even be faster than ATI this time, as the CAL compiler appears to have some problems generating good code from the Brook+ source. Furthermore, the SFUs of the GTX 2xx series can be used very well in parallel with the normal ALUs (with earlier cards this does not work so well, which likely translates into quite an advantage for the GT200-based GPUs). But maybe I will create a faster version later (resorting to IL assembly, as for MW).

I will send him the whole suite of applications for testing in an hour if nothing goes wrong (I tried to send it last night already, but the server was down). What is still missing is the encryption for the output result, but that should be a matter of minutes to add (at least I hope so).

From my preliminary results we badly need longer WUs, with about 6 minutes per WU on a 2.5 GHz Phenom (64-bit) and 30 seconds or so on an HD3870. So ten (or 16, to stay at a power of two) times longer would be good ;)
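
That sizing argument as a quick sketch (the 5-minute target is just an assumption to illustrate the rounding up to a power of two):

```python
import math

def wu_scale_factor(current_seconds: float, target_seconds: float) -> int:
    # Smallest power-of-two multiplier that stretches a WU to at least the target run time.
    return 2 ** math.ceil(math.log2(target_seconds / current_seconds))

# ~30 s per WU on an HD3870 today, aiming for roughly 5 minutes per WU.
print(wu_scale_factor(30, 300))  # 16
```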

Profile Logan
Joined: 2 Jul 09
Posts: 124
Credit: 37,455,338
RAC: 0
Message 389 - Posted: 4 Aug 2009, 14:00:04 UTC - in response to Message 388.
Last modified: 4 Aug 2009, 14:00:48 UTC

> Gipsel's new client will be ATI, CUDA, or both...?

> It will be CPU and ATI, but the new algorithm is easily ported to CUDA, too. Shouldn't be a problem for Slicker.
> In fact, the nVidia GPUs may even be faster than ATI this time, as the CAL compiler appears to have some problems generating good code from the Brook+ source. Furthermore, the SFUs of the GTX 2xx series can be used very well in parallel with the normal ALUs (with earlier cards this does not work so well, which likely translates into quite an advantage for the GT200-based GPUs). But maybe I will create a faster version later (resorting to IL assembly, as for MW).
>
> I will send him the whole suite of applications for testing in an hour if nothing goes wrong (I tried to send it last night already, but the server was down). What is still missing is the encryption for the output result, but that should be a matter of minutes to add (at least I hope so).
>
> From my preliminary results we badly need longer WUs, with about 6 minutes per WU on a 2.5 GHz Phenom (64-bit) and 30 seconds or so on an HD3870. So ten (or 16, to stay at a power of two) times longer would be good ;)


Thanks for the info, Gipsel!!!

I hope Slicker will port your code to the CUDA app and optimize it for the GTX 2xx series... ;)

That would be great!!!
____________
Logan.

BOINC FAQ Service (now also available in Spanish)

Profile Slicker
Volunteer moderator
Project administrator
Project developer
Project tester
Project scientist
Joined: 11 Jun 09
Posts: 2525
Credit: 740,580,099
RAC: 1
Message 390 - Posted: 4 Aug 2009, 14:07:41 UTC

I'm planning on using CUDA 2.3 to compile the new version, but I have to read up on all the new and improved methods for performance and async calls in the new version. CUDA 2.3 should give the 200-series cards better performance, but it will require the 190.xx drivers in order to work, which means making sure that the majority of users don't have issues with those drivers. Either that, or I need to create versions for both and change the scheduler code on the server to check the driver version and send the appropriate app: the CUDA 2.1 version to anyone using pre-190.xx drivers and the CUDA 2.3 version to anyone with the latest drivers.
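
Roughly this idea, sketched in Python (the app names and the version cutoff are illustrative; the real BOINC scheduler is C++ and the plan may still change):

```python
MIN_DRIVER_FOR_CUDA23 = 190.00  # CUDA 2.3 runtime needs the 190.xx driver series

def pick_cuda_app(driver_version: float) -> str:
    # Send the CUDA 2.3 build only to hosts whose driver is new enough;
    # everyone else gets the CUDA 2.1 build.
    if driver_version >= MIN_DRIVER_FOR_CUDA23:
        return "collatz_cuda23"  # hypothetical app name
    return "collatz_cuda21"      # hypothetical app name

print(pick_cuda_app(190.38))  # collatz_cuda23
print(pick_cuda_app(185.85))  # collatz_cuda21
```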

Profile Logan
Joined: 2 Jul 09
Posts: 124
Credit: 37,455,338
RAC: 0
Message 392 - Posted: 4 Aug 2009, 14:13:35 UTC - in response to Message 390.

> I'm planning on using CUDA 2.3 to compile the new version, but I have to read up on all the new and improved methods for performance and async calls in the new version. CUDA 2.3 should give the 200-series cards better performance, but it will require the 190.xx drivers in order to work, which means making sure that the majority of users don't have issues with those drivers. Either that, or I need to create versions for both and change the scheduler code on the server to check the driver version and send the appropriate app: the CUDA 2.1 version to anyone using pre-190.xx drivers and the CUDA 2.3 version to anyone with the latest drivers.


I'm using 190.38 drivers on a GTX260 without any problems...;)
____________
Logan.

BOINC FAQ Service (now also available in Spanish)

Profile Logan
Joined: 2 Jul 09
Posts: 124
Credit: 37,455,338
RAC: 0
Message 393 - Posted: 4 Aug 2009, 14:18:08 UTC - in response to Message 390.

> I'm planning on using CUDA 2.3 to compile the new version, but I have to read up on all the new and improved methods for performance and async calls in the new version. CUDA 2.3 should give the 200-series cards better performance, but it will require the 190.xx drivers in order to work, which means making sure that the majority of users don't have issues with those drivers. Either that, or I need to create versions for both and change the scheduler code on the server to check the driver version and send the appropriate app: the CUDA 2.1 version to anyone using pre-190.xx drivers and the CUDA 2.3 version to anyone with the latest drivers.


Or have the CUDA 2.1 version as the stock app and the CUDA 2.3 version as the optimized app...;)
____________
Logan.

BOINC FAQ Service (now also available in Spanish)

LookAS
Joined: 28 Jul 09
Posts: 1
Credit: 875,019
RAC: 0
Message 394 - Posted: 4 Aug 2009, 14:18:35 UTC

No problem here with 190.38 + GTX 285 either. Looking forward to the CUDA 2.3 app.

Profile Gipsel
Volunteer moderator
Project developer
Project tester
Joined: 2 Jul 09
Posts: 279
Credit: 77,476,758
RAC: 76,461
Message 395 - Posted: 4 Aug 2009, 14:30:54 UTC - in response to Message 390.

> CUDA 2.3 should give the 200-series cards better performance

From what I've heard, only some things get faster with CUDA 2.3. The main improvement for the GT200 cards comes from the CUFFT library, so if you don't use FFTs, no significant performance jumps are expected (though some smaller improvements are still possible). AFAIK SETI got some nice gains, but GPUGrid shows virtually nothing. As Collatz doesn't use FFTs, I would not expect anything special from the CUDA 2.3 release. But the GT200-based cards in general should be quite powerful for the new algorithm (as long as the compiler doesn't do stupid things).

Rabinovitch
Joined: 8 Aug 09
Posts: 20
Credit: 22,902,469
RAC: 39,881
Message 713 - Posted: 26 Aug 2009, 6:25:50 UTC - in response to Message 392.

> I'm using 190.38 drivers on a GTX260 without any problems...;)


So am I. 8-)

