Posts by Gator 1-3
1) Message boards : Number crunching : PCI-e Bandwidth Usage (Message 22614)
Posted 576 days ago by Gator 1-3
The most "high end" cards I have are GTX 580s. They're a bit old, but on a few projects they're still among the fastest cards for crunching. Up until a few days ago, they were the 2nd fastest card for Milkyway@Home, doing WUs in 120 seconds versus GTX 980s doing them in 90 seconds. When I had two hooked up to one computer (one directly in the PCIe x16 slot, and the other attached to a PCIe x1 slot with a 1x-to-16x ribbon-style riser), they both knocked out tasks in the same amount of time. I'm no computer expert (and I've never played one on TV), but from my understanding, the only time you need to worry about bandwidth is when you're using your video card for what it's actually made for... rendering graphics for output to a display. I will admit my experience is limited to the few GPU projects that I crunch, so it could be different for other projects.

Of course, since you're planning on buying these cards anyway, you could buy the mobo/CPU/other necessities and start with just two cards. Make sure the mobo has its PCIe slots spaced out so that you can attach both cards directly if the experiment fails. Buy two 1080s, attach one directly to a slot and the other via a riser, and see if the riser is bottlenecking performance. If it is, switch the cards around to make sure it isn't the card itself doing it. If the output is still bottlenecked, you can always put the mobo into a normal case and use it like a normal computer with a pair of 1080s SLI'd. If the performance isn't affected, buy the rest of the cards and crunch on!

Finally, I think you're going to be waiting a long time to get the information you desire if you want it to be specific to 1080s. This is partially due to them being fairly new, so a lot of people don't have multiple ones yet, and those that do attach them directly to PCIe slots and use them for gaming. The other reason is that most miners use AMD cards rather than NVIDIA in their rigs. Since risers cost about $3 for the ribbon style and $6 to $10 for the powered style, you'd probably be better off trying it yourself instead of waiting to see if others have done it. If you do that, please let us know how it turns out. Personally, I'm always interested in learning new things.

Good luck!
2) Message boards : Number crunching : PCI-e Bandwidth Usage (Message 22607)
Posted 577 days ago by Gator 1-3
I, too, have looked into making one of those mining rigs for GPU crunching. I've accumulated most of the parts and am about to buy the pieces that I don't have, and will hopefully get it up and running by the first week of July, depending on shipping speeds. You'll probably only be interested in #1 below, but for others thinking about making a similar rig, I'll share some info I've found out along the way...

1) With the exception of one of my computers, on every machine I have with more than one GPU, the additional GPU(s) are attached by risers. A few are powered risers using the USB cable setup you mentioned, but most are the simple ribbon-style x1-to-x16 risers. As far as I can tell, every task that gets crunched on those GPUs takes the same amount of time as ones being crunched on the GPU plugged directly into the PCIe x16 slot. (That one machine I mentioned above has two GPUs plugged directly into x16 slots, and one plugged in with a powered riser. All crunch at the same speed.) I have checked this on several different GPU projects, such as Milkyway@Home, SETI@Home and this project. If you look at my results and see one every now and then that took longer, it's because a couple of my machines have different types of GPUs... which brings me to...

2) If you use more than one GPU by the same manufacturer, but different model numbers, you'll have a few problems. The first is that none of your projects will accurately show your GPUs when you look at your computer setup in the "My Computers" links. For example, one of my machines has a GTX 470 and a GTX 275, but on the sites it's listed as having two GTX 470s. The biggest problem, however, is if you're using models that require different drivers, as in the setup I just mentioned (the 470 uses 368.xx, while the 275 uses 341.xx). Try to avoid this if possible. I've had to reboot that computer over and over, plugging one card in, updating a driver, then rebooting and unplugging/replugging to update another, etc. It's a pain, so if you want to use all of your slots but don't have all the same models, at least try to get cards that use the same driver.

3) Be prepared for LOTS of electricity usage. Those mining rigs eat up electricity like you wouldn't believe. I watched a video of a guy running six Toxic Sapphire R7 290s, which required two 1200 watt and one 600 watt PSUs. That's 3000 watts, or the equivalent of running thirty 100 watt light bulbs all day, every day. Of course, you can use GPUs that draw less power... I've found a decent website where you enter the specifics about your rig and it'll tell you how much wattage it will consume, and then you can make changes to things (like which GPUs you'll be running) to make comparisons. Try http://outervision.com/power-supply-calculator
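The wattage math above works out like this (a quick sketch; the electricity rate is an assumption I'm throwing in for illustration, not something from that video):

```python
# Back-of-envelope power math for the six-card example above.
psu_watts = [1200, 1200, 600]          # the three PSUs from that video
total_watts = sum(psu_watts)           # 3000 W total, worst case

kwh_per_day = total_watts / 1000 * 24  # crunching 24/7
rate_usd_per_kwh = 0.13                # assumed rate; check your own bill
cost_per_month = kwh_per_day * 30 * rate_usd_per_kwh

print(f"{total_watts} W -> {kwh_per_day} kWh/day, ~${cost_per_month:.2f}/month")
```

That's the absolute worst case (PSUs at full load); real draw will be lower, which is exactly what a calculator like the one linked above estimates for you.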

4) Also be prepared for LOTS of heat. That first computer that I mentioned in #1 earlier has three GTX 285s in it, and it cranked out so much heat I had to remove it from my bedroom. After just one hour the room was over 90 degrees, and that's with a window mounted AC unit going. I'm not looking forward to how much heat is going to be produced when that gets upped to six cards. I'll probably have to keep it in my garage.

5) Build a rig for your super GPU cruncher. You can find the specifics for them on any number of sites. There's a really great design on a site called Highoncoins, but it has a big problem... it's made of wood. The guy who designed it says he saw other rigs using angle aluminum for the frame, but he wanted something cheaper, and since aluminum conducts electricity he was worried about a short. Of course, he forgot that those cards are almost always mounted in computer cases, which are made of metal, so I think that's a rather foolish reason to use wood... which will catch on fire if it's heated up too much. Do yourself a favor and use aluminum. It's more expensive, but has to hit about 1200 degrees Fahrenheit before it even melts. I'm trying to redesign his rig for aluminum angle, but I may just give up and go with a slightly less impressive setup that's already been designed.

6) Last, but probably most importantly, you'll need to make sure your CPU can handle six cards (if that's how many you're going to use). By that I mean, most people forget that their GPU still uses some CPU time while crunching BOINC tasks, as shown in the task window of your manager. Every GPU task will have something like "Running (0.733 CPUs and 1 NVIDIA GPU)" in the status column. As far as I can tell, the amount of CPU used is determined by both the project and the type of GPU you have... some cards use more CPU, some less. The problem comes when you attach another GPU. In this example you're then using 0.733 CPUs for each card, for a total of 1.466 CPUs. If the total CPU reservation exceeds the cores you have, some GPU tasks won't crunch. Using the number above, if you plan on using six GPUs, multiplying 0.733 by 6 gives 4.398. If you only have a dual core, most of your GPUs will be idle. If you have a quad core, you're still going to have at least one doing nothing if you have six attached. Coin miners don't have to worry about this, so they usually buy the cheapest CPU they can get. A cruncher using a mining rig setup won't have that option. Personally, I plan on getting a hex core processor, but may opt for an 8 core if I can find it cheap enough on Ebay.
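The CPU-budget arithmetic above can be sketched out like this (the 0.733 figure is just the example number from the status column; it varies by project and card, and this assumes tasks only run while the total reservation fits within your cores):

```python
import math

# How many GPU tasks fit in a given CPU budget, per the reasoning above.
cpu_per_gpu_task = 0.733   # per-task CPU reservation shown in the BOINC manager
num_gpus = 6

cpu_needed = cpu_per_gpu_task * num_gpus   # ~4.4 cores to keep all six busy

def gpus_that_fit(cores, per_task=cpu_per_gpu_task):
    """GPU tasks that can run before the CPU reservation budget is exhausted."""
    return math.floor(cores / per_task)

print(gpus_that_fit(2))   # dual core: only 2 of the 6 GPUs crunch
print(gpus_that_fit(4))   # quad core: 5 of 6, one card sits idle
print(gpus_that_fit(6))   # hex core: enough headroom for all six
```

That's why the hex core is the floor for a six-card rig with these numbers.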

I know all you really wanted to know about was the risers, so I'll leave you with this nugget... If you mount your GPUs in a rig and attach all of them with risers, try to get a ribbon-style x16-to-x16 riser for the card that will actually have the video cable attached. This essentially eliminates the bandwidth issue for at least that card, since a riser is really just an extension cord. I've plugged my monitor into cards attached with x1-to-x16 risers just to see if they still worked... they did, but I didn't experiment to see how much of a graphics load they could handle. As I mentioned earlier, though, they all crunch at the same speed as similar cards plugged directly into a x16 slot.
3) Message boards : News : Sieve in Production; Large and Solo deprecated (Message 21374)
Posted 843 days ago by Gator 1-3
I had to update a program I use on my 32-bit system, which required a reboot. I don't know if rebooting is what did the trick, but I can now process a sieve workunit without getting an error message. It may be that updating the C++ runtime needed a reboot to take effect and I missed it.
4) Message boards : News : Sieve in Production; Large and Solo deprecated (Message 21350)
Posted 845 days ago by Gator 1-3
I don't know if this is true of all 32-bit machines, but mine will not let me update the 64-bit C++ runtime. I can update the 32-bit version with no problem, but the 64-bit installer throws an error message and closes when I try to run it on my 32-bit systems.

On the 32-bit systems, when I open them up to download sieve tasks, all of the tasks fail as soon as they start. I'm assuming that's because the tasks are all 64-bit tasks.

Two questions: is there a way to get the 64-bit C++ runtime to install on 32-bit systems? If not, is there a way to get only 32-bit tasks for my 32-bit machines, or am I just going to have to deal with a ton of erroring tasks to get the occasional 32-bit one?

Thanks






Copyright © 2018 Jon Sonntag; All rights reserved.