

Best GPU Hosting That Isn't $1000+/mo




Posted by TomorrowHosting, 11-11-2016, 07:04 PM
I'm looking for good dedicated server GPU hosting that can provide a solid graphics card with 4-12GB of GPU memory and isn't $1000/mo. The only thing I could really find is an offering from OVH (https://www.ovh.com/us/dedicated-servers/gpu/) with a fairly well equipped server with 2 x GeForce GTX 970 for $195.99/mo, but it says it is "coming soon." Are there any other server providers currently out there with similar specs/prices for GPU servers?

Posted by madRoosterTony, 11-11-2016, 09:15 PM
Many companies can do GPU servers; most just don't advertise them because there is no typical "setup" that works in advertising. Your request is a prime example: you are asking for a 4-12GB video card, but which one do you prefer? Even then, will the GeForce series work for you, or do you really need something from the Quadro series? Reach out to a few companies and ask if they can do custom configurations. Most can and will.

Posted by PCTechMe, 11-11-2016, 09:18 PM
I saw an announcement a few days ago that Hivelocity was offering new GPU instances. I don't need one right now so I didn't investigate further, but maybe you can. https://hivelocity.net

Posted by TomorrowHosting, 11-11-2016, 10:02 PM
We likely need, at the very minimum (just to load everything needed into memory), 60-80GB of GPU memory, which can be broken up into chunks of 4 or 8GB. Our total server budget for this is in the $1000-2000/mo range. Ideally we would get extremely high performance, but I understand that a whole bunch of Tesla cards would probably be unrealistic from a cost perspective, so we are looking to maximize performance for every dollar spent. High-end GeForce cards would likely be acceptable, though something higher performance would obviously be preferred. I guess I could contact companies individually, but it's unclear who would offer this and who wouldn't, and the server companies we tend to use that are the most cost effective also tend to have the least flexible customization options (Hetzner being a prime example). I was worried that any company willing to spend the time working out a custom deal would likely charge a premium for doing so, so if possible I'd like to work with a server company that is already used to selling these types of servers, so there is less cost involved in setting up something specifically for me.
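
As a rough sketch of the card-count arithmetic implied above (Python; the helper below, and the assumption that the data shards cleanly per card, are illustrative, not from the thread):

import math

def cards_needed(total_gb, vram_per_card_gb):
    # Minimum number of GPUs whose combined VRAM covers the working set,
    # assuming the data can be split cleanly into per-card chunks.
    return math.ceil(total_gb / vram_per_card_gb)

for total in (60, 80):          # working-set sizes quoted above (GB)
    for vram in (4, 8):         # per-card chunk sizes quoted above (GB)
        print(f"{total} GB in {vram} GB chunks -> {cards_needed(total, vram)} cards")
# 80 GB in 8 GB chunks works out to 10 cards; 60 GB in 4 GB chunks to 15.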

Posted by hivelocitygm, 11-11-2016, 10:15 PM
We did just begin offering GPUs with our servers. Currently we offer PNY Quadro and NVIDIA Tesla cards. You won't find any page on our site mentioning it yet, but a GPU can be added to a server right from within our cart.

Posted by 24x7group, 11-12-2016, 05:18 PM
Most providers should be able to offer it for you. What GPU cards were you looking at? Or doesn't it matter much?

Posted by madRoosterTony, 11-13-2016, 12:22 AM
I can understand that you might think some companies would charge a premium for custom builds, but that is not the case in most instances. Also, with custom builds, companies often give you the option to pay a setup fee instead of a higher monthly fee for "add-on" products such as GPUs. So the base server might be $xxx; adding a GPU would be + $yyy a month, making your monthly total $zzz. Or you could pay $xxx a month plus a one-time fee for the GPU, therefore reducing your costs over time, or pay a lower setup fee that only adds a much smaller amount to your monthly costs. As I mentioned before, the thing with GPUs is that there is no ideal configuration for everyone. One person might be running an application that uses CUDA cores to process data that has nothing to do with graphics at all, while the very next person is using the GPU to process video, in which case more RAM on the card makes more sense. When it comes to custom dedicated servers, there are thousands of different options out there, and putting them all on an online ordering page just confuses the general public, so companies have to choose which options they offer by default and which are considered custom. As GPUs are not yet common in the "leased" server market, most companies leave them off the order form rather than confuse their general customers.
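
A minimal sketch of the buy-down trade-off described above (Python; the $150/$60/$700 figures stand in for the $xxx/$yyy placeholders and are purely hypothetical):

def total_cost(months, base_monthly, gpu_monthly=0.0, gpu_setup=0.0):
    # Total spend over a term for a base server plus a GPU add-on,
    # billed either as a recurring charge or as a one-time setup fee.
    return gpu_setup + months * (base_monthly + gpu_monthly)

term = 24  # months
recurring = total_cost(term, base_monthly=150, gpu_monthly=60)
buydown   = total_cost(term, base_monthly=150, gpu_setup=700)
print(f"recurring add-on: ${recurring:,.0f}, one-time buy-down: ${buydown:,.0f}")
# Past the break-even point (setup fee / monthly add-on, about 12 months here),
# the buy-down works out cheaper, which is the trade-off described above.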

Posted by NortheBridge, 11-13-2016, 02:29 AM
@TomorrowHosting - are you absolutely certain that a GeForce series card will actually work for you? When companies (or even people) opt for a GPU server they tend to do so for faster data processing and parallelization, and while GeForce cards are 'acceptable', one of the reasons Quadros and Teslas are so common (as well as FirePros) is a specification called double floating point precision (DFP). GeForce cards after the 600 series aren't very good at DFP, and only now with the 1000 series has DFP really begun to return in force to the GeForce lineup. This is why a 1080 can compete with and outperform most Quadros except for one, which is essentially the 1080's counterpart in the professional series. You should be absolutely certain about which GPU will suit your needs. I do not know what workloads you are planning aside from "processing 60-80GB of memory broken up into 4-8GB chunks", but these are considerations you should take into account. Honestly, when someone goes with a GPU server, I strongly recommend colocation and buying the server outright. Say you lease the server for $2000 a month; that's $24,000 a year. Depending on your needs, if you only need the one server or a quarter cab, within 1-2 years you'll pay the hardware off in savings by colocating at a reputable datacenter. This is also the reason you don't really see GPU offerings: the price/benefit ratio is non-existent over extended periods with a leased GPU server, and there hasn't really been a company that makes renting a GPU server for more than a month or two make sense (usually render farms).
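
A hedged sketch of the lease-vs-colocation break-even arithmetic above (Python; only the $2,000/mo lease figure comes from the post, the hardware and colo prices are illustrative assumptions):

def breakeven_months(hardware_cost, colo_monthly, lease_monthly):
    # Month after which buying + colocating is cheaper than leasing.
    # Ignores remote hands, spares and depreciation for simplicity.
    return hardware_cost / (lease_monthly - colo_monthly)

# Assumed: ~$15,000 of purchased hardware and ~$500/mo for a quarter cab.
months = breakeven_months(hardware_cost=15_000, colo_monthly=500, lease_monthly=2_000)
print(f"break-even after ~{months:.0f} months")
# ~10 months with these numbers; pricier hardware pushes it toward the
# 1-2 year horizon mentioned above.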

Posted by Purevoltage, 11-13-2016, 03:24 AM
Most providers are able to offer this type of setup; however, a lot don't advertise those options as they often require custom builds. As Tony said above, buy-downs and the like are normally used for systems such as this to help lower monthly fees. The other great option NortheBridge mentioned is buying the equipment and using colocation; many people go this route due to the costs it can save. I'd also highly suggest not using gaming cards, as their performance lags quite a bit and they tend to heat up a lot more than other cards, which we have found in the past with some different builds.

Posted by WootHosting - Jason, 11-13-2016, 05:55 PM
Most providers can offer this, but won't advertise it as a product since it's not in high demand. I'd suggest reaching out to some companies instead of looking for companies who advertise this particular product. Maybe send an email to HostDime and their sales guys can configure something special for you.

Posted by TomorrowHosting, 02-08-2017, 11:21 PM
As an update, we are now much more certain about what we need: some quantity (4-10?) of servers with 2x GeForce GTX 1070/1080 graphics cards each. Colocation is definitely an option, although I was really hoping there would be a very competitively priced alternative. We don't colocate most of our other servers since Hetzner is nearly as cheap and offers far more flexibility, and we'd like that to be the case for our GPU servers as well.

Posted by madRoosterTony, 02-09-2017, 01:07 AM
To clarify, you are looking for 2 GTX cards in each server in SLI mode? You are going to pay a premium for this, as the cards are going to require a 4U case at minimum and, due to airflow, would be better placed in a 6U case, plus the extra power required to run everything. Not saying it cannot be done; it can very easily be done by many hosting companies. But you are going to be better off colocating in this case, simply because of what a hosting company will charge you not only for the hardware but also for the rack space and power usage.

Posted by TomorrowHosting, 02-09-2017, 01:43 AM
Yes, that is what we are looking for. Alternatively, we are willing to go with another card that has similar performance and the same amount of memory (16GB) if it costs less to put in a server. For colocation, wouldn't we be paying the exact same premium, since presumably a similar-size case would be needed in both instances?

Posted by madRoosterTony, 02-09-2017, 02:21 AM
While the case would be the same, there are two completely different business models between leasing servers and colocation. With colocation, pricing is fixed per U: a 1/4 rack is $xxx, a 1/2 rack is $yyy and a full rack is $zzzz. When it comes to leasing servers, the goal is to maximize profit per U. This is where blade servers are ideal in some cases: in 3U of space a company can fit 14 servers, as long as they have no more than 2 drives and do not need any external cards (i.e. RAID, GPU, etc.). From there you go to a less efficient blade that allows 4 drives, then to a 1U system that allows up to 10 drives and 1 RAID card, and so on. Each time you step up, the cost per server per U goes up and is added to the monthly lease cost. In your case, you are talking about taking up the same space in which a hosting company could put 28 servers if you use a 6U case as recommended for best airflow. While they cannot charge you 28 times the cost of their average server, you will have to pay a premium. With colocation, however, the datacenter does not care how you use the rack space you lease; as long as you do not go over the power you pay for, you can use every U you lease however you see fit. So you could fit 10 servers in a full rack if you drop to 4U cases, probably for less than you would pay for a few leased servers with the setup needed.
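
To put numbers on the space economics described above, a small sketch (Python; the only inputs are the 14-servers-per-3U blade density and the case sizes from this thread):

def displaced_servers(case_height_u, servers_per_u):
    # How many dense leased servers the same rack space could have held.
    return case_height_u * servers_per_u

blades_per_u = 14 / 3   # dense blades: ~14 servers in 3U
for case_u in (3, 4, 6):
    print(f"a {case_u}U GPU chassis displaces ~{displaced_servers(case_u, blades_per_u):.0f} blade servers")
# A 6U chassis displaces ~28 servers, which is why a leased GPU box carries a
# space premium while a colo provider simply bills the U and the power.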

Posted by madRoosterTony, 02-09-2017, 02:29 AM
Reviewing the cards requested a little more in depth, it would be possible to use a 3U case as well. But as these cards generate a lot of heat, the more space you can give them, the better for the lifetime of the servers.

Posted by swiftnoc, 02-09-2017, 04:39 AM
We have set up multiple 4-card deployments in a 1U chassis specially built and cooled for GPU deployments. While these are custom builds, deploying this in 1U is not a problem at all with the right design of chassis and cooling. The limitation we found is that you need a dual-CPU setup to activate all 4 GPU card slots; while a 2-GPU setup works on a single CPU, a 4-GPU setup needs 2 CPUs. No idea why they go for a 1060 in that offer; it's a far better idea to offer the GTX 1080, which has 2560 CUDA cores, the same as two 1060s. Last edited by swiftnoc; 02-09-2017 at 04:46 AM.

Posted by madRoosterTony, 02-09-2017, 01:13 PM
Would personally be interested in that chassis and how it manages the heat load. We have seen very minor concerns in the higher end Supermicro chassis with certain RAID cards and 8 fans. I understand you can use PCI Express cable extenders instead of riser cards, but that must be some crazy setup.

Posted by swiftnoc, 02-09-2017, 01:35 PM
The main issue is heat. The heat inside the chassis is perfectly manageable, but you do not want too many of these servers together in one rack; the key is to distribute them wisely. Each server has a 2500-watt redundant Platinum+ level PSU. We see peaks of 400 watts per card and average loads of 200 watts per card, so you can safely assume these servers use between 1kW and 2kW each continuously, taking into account the active coolers, CPU and so on. One such server with 4 cards uses ~$160/month in power alone. That is cost price if 1 kWh = $0.11, which is about our average cost price per kWh, for full disclosure. For those used to amps instead of kW, that is about 4 to 9 amps at 230 volts per server, or 9 to 18 amps at 110 volts per server. That is pretty serious power usage. Last edited by swiftnoc; 02-09-2017 at 01:39 PM.
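
The power figures above can be sanity-checked with a short sketch (Python; the $0.11/kWh rate and the 1-2 kW range are from the post, the ~730 hours/month and the helpers are assumptions):

HOURS_PER_MONTH = 730

def monthly_power_cost(avg_kw, price_per_kwh=0.11):
    # Electricity cost for a continuous load at a flat per-kWh rate.
    return avg_kw * HOURS_PER_MONTH * price_per_kwh

def amps(kw, volts):
    # Current drawn by a given load at a given voltage (power factor ignored).
    return kw * 1000 / volts

for load_kw in (1.0, 2.0):   # the 1-2 kW per-server range quoted above
    print(f"{load_kw:.0f} kW: ~${monthly_power_cost(load_kw):.0f}/mo, "
          f"{amps(load_kw, 230):.1f} A @ 230 V, {amps(load_kw, 110):.1f} A @ 110 V")
# 2 kW comes to ~$160/mo at $0.11/kWh and ~8.7 A at 230 V / ~18 A at 110 V,
# matching the ranges given above.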

Posted by madRoosterTony, 02-09-2017, 01:43 PM
Sounds like you still might be cooking some bacon on these things if you are not careful. We might have a much tighter standard on acceptable heat load, and I will admit we are very OCD about it, but we learned early on years ago that heat will kill a server prematurely if not managed. Because of that we attempt to keep our overall system temperature below 20 degrees Celsius, with a max of 25C. Needless to say, we invest in a lot of fans.

Posted by swiftnoc, 02-09-2017, 01:58 PM
You cool these cards to below 25 degrees Celsius continuously? That is really tough and not needed. We aim to keep the GPU itself from running hotter than ~80 degrees Celsius; one of our competitors, who shall remain unnamed, has good results at ~100 degrees Celsius. We are not *that* adventurous. We do deploy hot/cold aisles with our racks, and that helps enormously in keeping the servers cool enough to stay at good, safe levels. Last edited by swiftnoc; 02-09-2017 at 02:01 PM.

Posted by NortheBridge, 02-10-2017, 12:42 AM
Since we don't offer GPU hosting to clients, with the exception of those using an in-development platform, we are a bit more adventurous in cooling GPU systems that handle the heavy number crunching the regular processors alone would take far longer to do. Ours run off a riser system (technically PCI cable extenders to a daughter board) built so that the GPU cooling pipes don't get yanked out by the rack auto-ejectors (custom-built racks) that we invested in for this particular purpose. As you noticed, I did say pipes, because these purpose-built racks have rather large radiators for liquid cooling; the heat generation threshold of the cards was being exceeded too frequently and they would burn out (as in scorch marks), so we had to take drastic measures to advance the platform in testing. Even with such cooling, our observed temperatures average between 55C and 65C. Running these cards at 80C-85C (or higher) is just asking them to die too early. However, this has allowed the standard cooling units to maintain the rest of the server at more nominal temperatures, such as the CPU at 0C (+/- 7C deviation). Utilizing the purpose-built cooling system does result in a 1U minimum loss (usually 2U-4U) per server installation running the platform. We've learned, from burning out servers in their entirety to just burning out GPUs and CPUs, that pushing the cards to 80C over conventional active air will shorten the life of the cards and the servers (cascading radiated heat will still heat up the rest of the server by a lot compared to conventional active air). Obviously, deploying a system such as ours requires a lot of logistics that simply isn't feasible for a hosting company serving regular clients, and I don't think regular clients usually run these cards at 100% 24/7 either, but I have to ask: what is your failure rate with such a high upper threshold on GPU servers?

Posted by swiftnoc, 02-10-2017, 05:11 AM
Maybe you misunderstand. The server does not get to 80 degrees Celsius; the GPU itself reports that it reaches 80 degrees Celsius at maximum load. Obviously we keep air flowing from the cold aisle to the warm aisle continuously, so the temperature inside the server itself stays well under 30 degrees Celsius at all times.

Posted by NortheBridge, 02-11-2017, 02:54 AM
No, I am talking about the GPUs at 80C. Most GPUs now, whether professional or consumer, start throttling at 75C. In our experience, 80C 24/7 on GPUs leads to early burnouts of the cards, and sometimes reduced performance if they are throttling-type GPUs. That's why I would be surprised not to see a higher-than-normal failure rate on those cards.
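
For anyone wanting to watch for the throttling behaviour discussed here, a minimal monitoring sketch (Python; it assumes NVIDIA drivers with nvidia-smi installed, and the 75C/85C thresholds follow this discussion rather than any vendor spec):

import subprocess

def gpu_temps_c():
    # Read each GPU's reported core temperature via nvidia-smi.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader,nounits"],
        text=True,
    )
    return [int(line) for line in out.splitlines() if line.strip()]

THROTTLE_C = 75   # rough consumer-card throttle point mentioned above
ALARM_C    = 85   # arbitrary alert threshold for this sketch

for idx, temp in enumerate(gpu_temps_c()):
    status = "OK" if temp < THROTTLE_C else ("likely throttling" if temp < ALARM_C else "ALERT")
    print(f"GPU {idx}: {temp} C ({status})")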

Posted by magas, 02-11-2017, 03:53 AM
You can put 4 cards in 1U: http://www.pny.eu/professional/explo...-4-x-tesla-gpu

Posted by swiftnoc, 02-11-2017, 02:30 PM
You might have over-engineered your solution. With over 1000 cards in production, we have yet to see one fail. Our setup and temperatures have been discussed with the manufacturers of the cards; they advised keeping the GPU temperature under 100 degrees Celsius, and that we do by a wide margin.

Posted by HumaneHostingOwner, 02-11-2017, 05:03 PM
@OP If you're unable to find a hosting solution at a reasonable price point, you might want to consider colocating your GPU builds instead.

Posted by NortheBridge, 02-11-2017, 08:47 PM
Fair enough @swiftnoc. Different workloads require different envelopes. We have NVIDIA cards deployed, and after encountering the first few scorched cards we spoke with the vendors to come up with a solution for our particular use; 80C was just too hot for the workload they were doing, because other parts of the card far exceeded 80C away from the thermal diode. That's why we have such an engineered solution to maintain equal cooling across everything from the VRMs to the GPU die itself. Of course, we would never deploy this solution for regular clients; it is enterprise-only, plus a few partners. The max thermal limit of most modern GPUs is 105C, at which point they will shut off (or should), but anything above 80-85C is too hot whether in a DC environment or not, so it's good to hear that you are staying at or under that target. In my experience, most get concerned at 80C and treat 85C as a red flag, though in a DC environment an allowance should be made. I guess you have been fortunate that the other areas of the GPU aren't heating up beyond their thermal maximum, but as I said, each workload is different and will heat up different parts of the GPU. @TomorrowHosting - Your best option is truly colocation. There's no "Hetzner" of GPUs, and most GPU facilities are render farms. Any provider here can likely offer you GPU hosting, but the reality is that the most cost-effective solution is to buy the servers and GPUs and deploy them in a datacenter. There will be a lot more capital expense at the beginning, but you'll make up for it through lower operational expense if this is a long-term project (1-2 years).

Posted by swiftnoc, 02-12-2017, 07:29 AM
I strongly disagree with you there. Looking at our own operation, the break-even point on a server comes a little after 2 years of operation; we buy in bulk, have our own staff onsite and can source power cheaper than it can be purchased with a single colo package. Most customers do not calculate the cost of keeping spare parts onsite or of remote hands; those especially often kill the dedicated vs. colocation calculation.

IMO for a project of 2 years or more:
- If you can fill a rack with 100+ servers and CAPEX is not an issue, go for colocation.
- Single server? Renting is better, unless uptime is no issue at all.
- Anything in between? Needs careful calculation.

IMO for a project of 2 years or less:
- Renting dedicated servers is usually the best way to go, that or an operational lease (still a rental, basically).
- When you need a very unique and custom build (like e.g. NortheBridge has), then you can consider building it yourself.

Just my $0.02 on the matter.

Posted by funkywizard, 02-20-2017, 06:18 PM
At that kind of power use, I don't see a downside to using a 4U case. 1A at 208V per 1U is fairly dense already, requiring 2x 30A 208V circuits per rack. At "4-9A at 230V", the lower end of that range is still above 1A at 208V per 1U, and the upper end of the range is well over 2x that figure. From the point of view that "wasting the rackspace will cost extra money", in this example I have to disagree: leaving rackspace empty because there's no more power/cooling available is not much better than using the space on a larger chassis.
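
The circuit arithmetic behind that density claim, as a quick sketch (Python; the 42U rack height and the 80% continuous-load derating are common assumptions, not figures from the post):

def circuit_capacity_w(amps, volts, derate=0.8):
    # Usable capacity of one circuit with the usual 80% continuous-load derating.
    return amps * volts * derate

rack_u = 42
density_w_per_u = 1 * 208                 # the "1A 208V per 1U" figure above
rack_load_w = rack_u * density_w_per_u    # ~8.7 kW for a full rack
per_circuit_w = circuit_capacity_w(30, 208)
print(f"rack load ~{rack_load_w/1000:.1f} kW, "
      f"needs {rack_load_w / per_circuit_w:.1f} x 30A/208V circuits")
# ~8.7 kW against ~5 kW usable per circuit, i.e. the 2x 30A 208V circuits per rack.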

Posted by swiftnoc, 02-21-2017, 04:34 AM
Obviously you should not stuff a rack full of these servers; they are perfect for balancing the power/space ratio where needed. In our US locations we use standard 32A/208V feeds per rack, sometimes two active feeds (in addition to the B feeds), and in Europe 32A/230V. 208/230 volts is more efficient than 110/120 volts, so it makes sense to deploy 208/230V circuits.

Posted by RackService, 02-21-2017, 11:46 AM
Perhaps you can take a look at Ikoula. They offer some dedicated servers with GPUs, but I'm not sure whether those are powerful enough for your needs.


