




  • Found these two items:
    http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/ethernet-x520.html
    http://www.zyxel.com/us/en/products_services/xgs1910_gs1910_series.shtml?t=p

Together they are expensive, but using them with an accelerator SSD on each node will still be cheaper than using 2 TB drives on every node. Of course, if you can streamline your OS and apps, you can probably get away with just your garden-variety GbE peripherals. Just install the OS and the render software on the SSDs and have a centralized runtime. I still have more than 20 GB of free space on my 80 GB boot/system drive.


  • administrators

    @'matthacker':

For network and multi-user/node use, the golden measurement is IOPS, not bandwidth. And forgive me for correcting you, but where in my last post did I suggest using USB3? MicroSATA is basically a full SATA port and definitely not USB. Here's an example - http://www.techspot.com/review/571-crucial-m4-msata-ssd/

For network connections, I wrote that saving renders hardly saturates even a 1 GbE connection. If you look again at my post, I recommend using 10 GbE peripherals - as in 10 Gigabit Ethernet. It should offer quite a boost compared to 1 GbE.

    If you need fast response time for applications, place them on an SSD stripe array in the file server/NAS box. Even with SSDs striped together, read and write will still be bottlenecked by the SSD and not the 10 GbE connections.

Well, yes, I actually had a quick look but couldn't find any NASes on Newegg or similar places that support 10Gig; that's why I thought you must have been talking about 1Gb. I'm sure there are some out there, but not much availability at this point, so yeah, this must be more of a dream setup :)
But once 10Gig finds more support, then yes, this setup is something to consider. I do like the idea in theory.

    I'm sure you'd run into problems with some software but generally speaking centralizing storage is nice

    @'matthacker':

Instancing and texture management can help, but what if your scene no longer fits in local memory? The fastest hardware won't do you much good when it can't render at all. But that's just one of many problems I cited. I still can't stand the noise most GPU renders have. If you don't mind it, then good for you.

To fill a 3 GB GPU you'd need at least 16-24 GB of local RAM for voxelation - another thing that needs to be considered - but all this stuff can be addressed. I run a few GTX580s perfectly fine; they make a bit of noise and generate heat, but nothing too major :)

    @'Alpensepp':

I'm not trying to turn you into an Apple acolyte or something^^ People's OS choice is none of my concern; I just wanted to point out that the system is not significantly more restrictive than Windows (and has a UNIX shell).
The fact that your favorite software is not available does not make it a closed system :P With Homebrew as package manager it feels a lot like a pretty Linux, if you ask me (minus the option to compile the kernel yourself…)
I don't approve of Apple's strategy for mobile devices either, but Macs are not affected (yet...)

As I said, when I was referring to "closed" I was referring to their App Store and iPhone, not Mac OS, but that correlation is enough for me not to go with a Mac. I know it's Unix-based, and if I were using it mainly for programming I'd probably even consider it, but the type of software I run would cause too much headache on a Mac, with even less of a community and less software support in general.

    @'Alpensepp':

    Erm… why waste so much thought on application load times? I really do not see how this is performance critical in this scenario. I mean you load the required application once per node and then they keep running... it's not like you constantly start and stop them...
Even if you had to start up the applications anew for every rendering process, and asset loading over the network is slower... the required time is still fairly low compared to the actual time to render the image, isn't it? 10Gbit Ethernet stuff comes with a pretty hefty price tag compared to 1Gbit peripherals afaik. I don't think it's worth the investment.

    You have to run Windows 7 on all Nodes, don't you? Not exactly lightweight imo... since all you really need is the render engine and some sort of server... ( I'm thinking of something like a minimal Debian/Ubuntu not Mac OSX :P )
Why micro SATA? The only difference is a smaller connector to connect 1.8-inch drives, no? Are they cheaper in comparison?
And how the fuck do you install an OS in a RAM drive?!? I thought you need specific software to "abuse" your RAM as a regular drive... and to run programs you need an OS. Are there tools to access a remote machine's RAM? That's the only solution I can think of...

Yeah, I think with the nodes you can be a lot more flexible: possibly go Linux and hook them up to a NAS. I don't see it for my main machine for my next computer purchase, but maybe for the one after that it's an option. But who knows, by then we could have gotten rid of local computing altogether and be accessing all software via the Internet cloud and video streaming :)



  • @'Alpensepp':

    Erm… why waste so much thought on application load times? I really do not see how this is performance critical in this scenario. I mean you load the required application once per node and then they keep running... it's not like you constantly start and stop them...
    Even if you had to start up the applications anew for every rendering process and asset loading over the network is slower... the required time is still fairly low compared to the actual time to render the image, isn't it? 10Gbit Ethernet stuff comes with a pretty hefty price tag compared to a 1Gbit peripherals afaik. I don't think it's worth the investment.

Well, we are talking about dream hardware. :) If cost effectiveness is a concern, you could mix and match: 1 GbE adapters for the nodes (most motherboards come with one integrated nowadays) and a dual-port 10 GbE adapter for the file server. Obviously, you'll need a switch with dual 10 GbE uplink ports and load balancing on the file server. The uplink connections should be able to handle the load of the concurrent 1 GbE connections.
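As a trivial sanity check in Python (assuming ideal link aggregation and every node pushing full line rate, which real traffic never does):

```python
# How many fully loaded 1 GbE nodes can a dual 10 GbE uplink feed?
NODE_LINK_GBIT = 1        # integrated 1 GbE adapter per node
UPLINK_GBIT = 2 * 10      # dual-port 10 GbE adapter on the file server

max_nodes_at_line_rate = UPLINK_GBIT // NODE_LINK_GBIT
print(max_nodes_at_line_rate)  # 20
```

In practice render nodes read in bursts, so the uplinks could serve well more than 20 nodes before becoming the bottleneck.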

    @'Alpensepp':

    You have to run Windows 7 on all Nodes, don't you? Not exactly lightweight imo… since all you really need is the render engine and some sort of server... ( I'm thinking of something like a minimal Debian/Ubuntu not Mac OSX :P )

Not quite. It depends on your renderer. For example, the 3Delight standalone renderer (similar to the one integrated into DS) works on Windows, OSX and Linux. Many popular renderers (Octane, Luxrender) are also available on OSX and Linux.

    @'Alpensepp':

    Why micro SATA? The only difference is a smaller connector to connect 1,8 inch drives, no? Are they cheaper in comparison?
    And how the fuck do you install an OS in a RAM drive?!? I thought you need specific software to "abuse" your RAM as a regular drive… and to run programs you need an OS. Are there tools to access a remote machines RAM? That's the only solution I can think of...

Basically, to save space and cables. The smaller, slimmer, integrated profile will allow better airflow as well because of the lack of cables. You don't have to use microSATA though. Even those 20-40 GB SSDs (the ones targeted as caching drives) will do the job just fine and cost less than a 2 TB drive.

Installing an OS onto a RAM drive is not an easy process. Check this - http://www.disklessangel.com/en
I actually tried it with a streamlined Windows XP Pro. It works great for dumb terminals. To avoid hassle with power outages, you can still install a USB drive or SSD to host the image (and have some space for temporary working files, configs and pagefiles).

    @'Alpensepp':

    I'd probably go for a less fancy solution to be honest. But I must admit hosting the OS of each node in RAM and pulling all required files including the applications from a remote file server sounds neat xD
    You just have to cross your fingers that the system never crashes and no one pulls the plug…

    Yeah. It really is my dream machine.



  • @'miro':

The OS is not a hugely important decision for me, so my reasoning isn't too comprehensive.
The main consideration for me is that I have certain software that must run with full support, and that I can get community support for as well. That's just not possible on the Mac; you'd have to be very specific about what you'd like to run. There's still lots of software that doesn't even run on the Mac, like 3DS Max.
Other than that, from the little I know about Steve Jobs, I can't stand the corporate culture he's created and his iron-fist approach: the closed App Store, the closed nature of the iPhone, won't allow Flash (wtf??). Yes, I get that their hardware and software are nice, but the gap isn't big enough to warrant a switch. And regardless of how innovative you are, you should never think that you're so smart that everything should be measured by your standard only.

I'm not trying to turn you into an Apple acolyte or something^^ People's OS choice is none of my concern; I just wanted to point out that the system is not significantly more restrictive than Windows (and has a UNIX shell).
The fact that your favorite software is not available does not make it a closed system :P With Homebrew as package manager it feels a lot like a pretty Linux, if you ask me (minus the option to compile the kernel yourself…)
I don't approve of Apple's strategy for mobile devices either, but Macs are not affected (yet...)

    Erm... why waste so much thought on application load times? I really do not see how this is performance critical in this scenario. I mean you load the required application once per node and then they keep running... it's not like you constantly start and stop them...
Even if you had to start up the applications anew for every rendering process, and asset loading over the network is slower... the required time is still fairly low compared to the actual time to render the image, isn't it? 10Gbit Ethernet stuff comes with a pretty hefty price tag compared to 1Gbit peripherals afaik. I don't think it's worth the investment.

    You have to run Windows 7 on all Nodes, don't you? Not exactly lightweight imo... since all you really need is the render engine and some sort of server... ( I'm thinking of something like a minimal Debian/Ubuntu not Mac OSX :P )
Why micro SATA? The only difference is a smaller connector to connect 1.8-inch drives, no? Are they cheaper in comparison?
And how the fuck do you install an OS in a RAM drive?!? I thought you need specific software to "abuse" your RAM as a regular drive... and to run programs you need an OS. Are there tools to access a remote machine's RAM? That's the only solution I can think of...

    I'd probably go for a less fancy solution to be honest. But I must admit hosting the OS of each node in RAM and pulling all required files including the applications from a remote file server sounds neat xD
    You just have to cross your fingers that the system never crashes and no one pulls the plug...



  • @'miro':

I think the idea is great, but am I missing something? Aren't you suggesting to run this via gigabit ethernet?

theoretical limits:
external bandwidth of gigabit ethernet is 125 MB/s
internal bandwidth of SATA3 is 6 Gbit/s
bandwidth of a 7200 rpm HDD is 125 MB/s
bandwidth of an SSD is 500 MB/s

By these numbers gigabit ethernet could barely handle one standard HDD?
OK for transfer of general files, but running applications? Even if connected via USB3 you'd barely manage SSD speeds.

For network and multi-user/node use, the golden measurement is IOPS, not bandwidth. And forgive me for correcting you, but where in my last post did I suggest using USB3? MicroSATA is basically a full SATA port and definitely not USB. Here's an example - http://www.techspot.com/review/571-crucial-m4-msata-ssd/

For network connections, I wrote that saving renders hardly saturates even a 1 GbE connection. If you look again at my post, I recommend using 10 GbE peripherals - as in 10 Gigabit Ethernet. It should offer quite a boost compared to 1 GbE.

    If you need fast response time for applications, place them on an SSD stripe array in the file server/NAS box. Even with SSDs striped together, read and write will still be bottlenecked by the SSD and not the 10 GbE connections.
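Using the rough peak figures quoted in this thread (125 MB/s for 1 GbE and a 7200 rpm HDD, 500 MB/s per SATA SSD, 1250 MB/s for 10 GbE), a quick Python sanity check bears this out, assuming ideal sequential transfers and perfect RAID 0 scaling:

```python
# Approximate peak sequential throughput, MB/s (figures from the thread).
GBE_1 = 125     # 1 Gbit/s Ethernet = 1000 Mbit/s / 8
GBE_10 = 1250   # 10 Gbit/s Ethernet
HDD = 125       # single 7200 rpm hard drive
SSD = 500       # single SATA SSD

# A single HDD already saturates a 1 GbE link:
print(HDD >= GBE_1)         # True

# Two SSDs striped (RAID 0) roughly double sequential throughput,
# yet still sit below a 10 GbE link, so the SSDs stay the bottleneck:
stripe = 2 * SSD
print(stripe, GBE_10)       # 1000 1250
print(stripe < GBE_10)      # True
```

And for random small-file access, the IOPS gap between disks and the network is even wider, which is why IOPS is the number to watch here.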

    @'miro':

Sure, GPU has teething problems, but even today 3 GB is enough to fit a mid-size scene with several characters with decent capacity management, and once you've got the scene in the GPU the advantages of parallel computing kick in.

Instancing and texture management can help, but what if your scene no longer fits in local memory? The fastest hardware won't do you much good when it can't render at all. But that's just one of many problems I cited. I still can't stand the noise most GPU renders have. If you don't mind it, then good for you.


  • administrators

    @'matthacker':

Well, you can, actually. I know people who use a RAM drive to store a complete installation of a streamlined Windows (including necessary software) and remove disk-based storage altogether.

But let's say that you still want to install an OS on the main machine. You could still run applications from a NAS. We are living in the age of portable and cloud apps that don't need installing. Just mount the folder(s) where your applications are and run them. The only thing you need to do is install the shortcuts.

Data can be read/written to a file server/NAS as well. You can easily set up the file server/NAS with full redundancy AND performance-enhancing tricks (like RAID and such). For the applications, just set up two SSDs in a striped RAID array, and for data, you can use a mirrored RAID array or a RAID stripe with parity. Another alternative would be to use the SSD array as temporary disks and configure backup software to make daily backups to the disk array(s).

I think the idea is great, but am I missing something? Aren't you suggesting to run this via gigabit ethernet?

theoretical limits:
external bandwidth of gigabit ethernet is 125 MB/s
internal bandwidth of SATA3 is 6 Gbit/s
bandwidth of a 7200 rpm HDD is 125 MB/s
bandwidth of an SSD is 500 MB/s

By these numbers gigabit ethernet could barely handle one standard HDD?
OK for transfer of general files, but running applications? Even if connected via USB3 you'd barely manage SSD speeds.

    @'matthacker':

I disagree. For me, GPU rendering is still too cumbersome, with a fractured ecosystem (CUDA, OpenCL, DirectCompute, x86), poor/proprietary implementations, and pitfalls (memory capacity) that are too serious.

Sure, GPU has teething problems, but even today 3 GB is enough to fit a mid-size scene with several characters with decent capacity management, and once you've got the scene in the GPU the advantages of parallel computing kick in.



  • @'miro':

OK, the external storage thing could be an idea re the nodes, but by "main" I mean the main computer running the OS & software; surely you can't run the OS and software via the network?

    Also the nodes still need some software installed on them afaik, you may not need 3TB drives though.

Well, you can, actually. I know people who use a RAM drive to store a complete installation of a streamlined Windows (including necessary software) and remove disk-based storage altogether.

But let's say that you still want to install an OS on the main machine. You could still run applications from a NAS. We are living in the age of portable and cloud apps that don't need installing. Just mount the folder(s) where your applications are and run them. The only thing you need to do is install the shortcuts.

Data can be read/written to a file server/NAS as well. You can easily set up the file server/NAS with full redundancy AND performance-enhancing tricks (like RAID and such). For the applications, just set up two SSDs in a striped RAID array, and for data, you can use a mirrored RAID array or a RAID stripe with parity. Another alternative would be to use the SSD array as temporary disks and configure backup software to make daily backups to the disk array(s).
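The trade-off between those RAID layouts comes down to usable capacity versus how many drive failures you can survive. A minimal Python sketch of the math (simplified: identical drives, ignoring controller overhead):

```python
def raid_usable(level, n_drives, drive_gb):
    """Usable capacity (GB) and drive-failure tolerance for the
    common RAID levels mentioned above. Simplified model."""
    if level == 0:    # stripe: all capacity, zero redundancy
        return n_drives * drive_gb, 0
    if level == 1:    # mirror: one drive's worth, n-1 copies can fail
        return drive_gb, n_drives - 1
    if level == 5:    # stripe with parity: one drive's worth lost to parity
        return (n_drives - 1) * drive_gb, 1
    raise ValueError("unsupported RAID level: %s" % level)

print(raid_usable(0, 2, 500))   # two 500 GB SSDs striped  -> (1000, 0)
print(raid_usable(1, 2, 3000))  # two 3 TB drives mirrored -> (3000, 1)
print(raid_usable(5, 4, 3000))  # four 3 TB drives, parity -> (9000, 1)
```

So the striped SSD pair is pure speed with no safety net, which is why it suits applications (reinstallable) rather than data.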

    @'miro':

    well yes, the shift from CPU to GPU rendering is underway

I disagree. For me, GPU rendering is still too cumbersome, with a fractured ecosystem (CUDA, OpenCL, DirectCompute, x86), poor/proprietary implementations, and pitfalls (memory capacity) that are too serious.


  • administrators

    @'Alpensepp':

How is Mac OSX a closed system, or more closed than a Windows system? You can already do anything with sudo, and if you really have to, you can also get real root access… which is just a bad idea in 99.99999% of all scenarios...
    Even with sudo you can lock yourself out with a single line.... believe me :P

    The OS is not a hugely important decision for me so my reasoning isn't too comprehensive.
    The main consideration for me is that I have certain software that must run with full support and that I can get community support for as well. That's just not possible on the Mac, you'd have to be very specific about what you'd like to run. Still lots of software that doesn't even run on the Mac like 3DS Max.
    Other than that from the little I know about Steve Jobbs, I can't stand the corporate culture he's created and his iron fist approach, the closed app store, the closed nature of the iphone, won't allow flash (wtf??).Yes, I get that their hardware and software is nice, but the gap isn't big enough to warrant a switch. And regardless of how innovative you are you should never think that you're so smart that everything should be measured by your standard only.

    @'matthacker':

I say if the target use is rendering, be it CPU and/or GPU, that hardware is overkill. Why would you want 2 SSDs and 4 HDDs in the main machine, and drives in the node machines? Even at full HD resolution, saving renders won't tax a 1 GbE connection. Just get yourself 10 GbE peripherals and have all storage centralized.

OK, the external storage thing could be an idea re the nodes, but by "main" I mean the main computer running the OS & software; surely you can't run the OS and software via the network?

    Also the nodes still need some software installed on them afaik, you may not need 3TB drives though.

    @'matthacker':

The ratio of CPU:GPU also worries me. With such a configuration, you ultimately lock yourself into more GPU-focused software. Plus, I shudder at the thought of cooling all those GPUs and the awful noise 4 of them generate.

    well yes, the shift from CPU to GPU rendering is underway



  • @'Alpensepp':

    Well I guess no sane person would allow you to install a Windows OS, so we'll never know.

Well, there's always virtualization. :)
I would say Titan is able to raytrace Crysis in realtime, but that is pure guesswork on my part.

    @'Alpensepp':

How is Mac OSX a closed system, or more closed than a Windows system? You can already do anything with sudo, and if you really have to, you can also get real root access… which is just a bad idea in 99.99999% of all scenarios...
    Even with sudo you can lock yourself out with a single line.... believe me :P

Can't say that Apple bothered me much while installing software... the only thing I ever installed through the App Store is XCode... which is an Apple product... you can probably get your C compiler from somewhere else, if you must.

And since having a UNIX shell is ultimately better than not having a UNIX shell, I prefer my company's Mac over my Windows laptop for development purposes. I would still not buy anything from Apple with my own money, but I still think the Mac is a quality product... other i-stuff, however... I'd rather throw my money at Samsung or Google...

    I agree with this. Let's not turn this into an Apple/OS X bashing thread. I mean seriously, OS X is the best GUI UNIX distribution out there. :)

    @'miro':

    Failing that I'd go for a render farm of something semi attainable in the real world, that would be currently something like, x number of nodes, each made up of:
    main: i7 3930K, 4 x GTX580s 3GB, 2 x 500 GB SSD, 4 x 3TB HD, 3 x 27" monitors
    node: i7 3930K, 4 x GTX580s 3GB, 2 x 3TB HD

I say if the target use is rendering, be it CPU and/or GPU, that hardware is overkill. Why would you want 2 SSDs and 4 HDDs in the main machine, and drives in the node machines? Even at full HD resolution, saving renders won't tax a 1 GbE connection. Just get yourself 10 GbE peripherals and have all storage centralized.

If you really want to have local storage, go with boards that have microSATA ports. You could still fit a barebones OS and have some space left for caching data. Smaller, less power, less heat, fewer chances of mechanical failure. If you take out the GPUs, you can get away with using miniITX boards and power supplies, so less space per node as well.

The ratio of CPU:GPU also worries me. With such a configuration, you ultimately lock yourself into more GPU-focused software. Plus, I shudder at the thought of cooling all those GPUs and the awful noise 4 of them generate.

As for the concept of accelerators (which is what a GPU is in the HPC world), I actually prefer Intel's Xeon Phi approach. Unfortunately, it doesn't address the main drawback of accelerators - non-unified memory addressing. So the main drawback of GPU rendering - memory capacity - still exists.

I wish AMD would release a G34 version of Trinity. A 4-way system with such CPUs would be quite capable, be it with pure CPU software or hybrid GPU solutions. Memory capacity would never be a problem, since you can install more than what your average software needs. Memory addressing and latency (even with node hops) will still be much lower than with add-in cards/accelerators. With 8 GPUs tied together, it could probably be a decent gaming rig as well (I'm thinking at the very least similar to two 7850 cards).



The "problem" with the Mac, as many Windows users see it, is a lack of hardware and software options. Of course, they are looking at it from their perspective. Here are a couple of things that most people on the Windows side don't realize:

1. Limited hardware also means limited hardware incompatibilities, and less software needed to make that hardware run. Less software needed means a slimmer OS and more efficient use of resources.
2. Modern Mac OS is built on BSD, a UNIX-based system, which is rock solid and has been tested for years in systems that are more mission-critical than the average Windows system. 90+% of the internet's backbone is built on UNIX-based systems.
3. While there are fewer software options, that means the pieces out there had to go through a VERY tough crucible to survive. As someone once said, 90% of what is available for Windows is crap, and crap won't survive in the Mac market. So you know the software, at least if it's been around a while, is going to be good, because otherwise it wouldn't BE around.
4. Because it is based on a solid core, because the architecture is closed, and because Apple is VERY good at patching holes (hell, the core of their OS is open source, and they do watch where it goes), there are fewer OS holes for malicious software writers; as such, there are fewer viruses and trojans out there.
5. It may not seem as customizable out of the box, but that is there for good reason. Too much customization leads to problems that the uninformed user can create, things that make the computer slow or worse. Bonzi Buddy. Need I say more?

    And Alpensepp is right, I have XCode installed on mine, and actually do coding in gcc from time to time, dropping into shell.



  • @'matthacker':

Of course, the obvious question is - but can it play Crysis 1, 2 and 3? :)

    Well I guess no sane person would allow you to install a Windows OS, so we'll never know.

    @'miro':

    Don't think I'd ever buy Apple! Really don't like the idea of a closed system and autocratic rule.

How is Mac OSX a closed system, or more closed than a Windows system? You can already do anything with sudo, and if you really have to, you can also get real root access… which is just a bad idea in 99.99999% of all scenarios...
    Even with sudo you can lock yourself out with a single line.... believe me :P

Can't say that Apple bothered me much while installing software... the only thing I ever installed through the App Store is XCode... which is an Apple product... you can probably get your C compiler from somewhere else, if you must.

And since having a UNIX shell is ultimately better than not having a UNIX shell, I prefer my company's Mac over my Windows laptop for development purposes. I would still not buy anything from Apple with my own money, but I still think the Mac is a quality product... other i-stuff, however... I'd rather throw my money at Samsung or Google...


  • administrators

    Don't think I'd ever buy Apple! Really don't like the idea of a closed system and autocratic rule.

    I'd like a handful of 'Titans' thank you :D

    Failing that I'd go for a render farm of something semi attainable in the real world, that would be currently something like, x number of nodes, each made up of:
    main: i7 3930K, 4 x GTX580s 3GB, 2 x 500 GB SSD, 4 x 3TB HD, 3 x 27" monitors
    node: i7 3930K, 4 x GTX580s 3GB, 2 x 3TB HD



  • If there are no constraints, I'd probably like something like this -

    Titan supercomputer - http://en.wikipedia.org/wiki/Titan_(supercomputer)

Titan has 18,688 nodes (4 nodes per blade, 24 blades per cabinet), each containing a 16-core AMD Opteron 6274 CPU with 32 GB of DDR3 ECC memory and an Nvidia Tesla K20X GPU with 6 GB of GDDR5 ECC memory. The total number of processor cores is 299,008 and the total amount of RAM is over 710 TB. 10 PB of storage (made up of 13,400 7200 rpm 1 TB hard drives) is available with a transfer speed of 240 GB/s.

Titan draws 8.2 megawatts, 1.2 megawatts more than Jaguar did, but it is almost ten times as fast in terms of floating-point calculations.
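For what it's worth, the quoted totals follow directly from the per-node specs, as a couple of lines of Python confirm:

```python
# Cross-checking the Titan figures quoted above.
nodes = 18688
cores_per_node = 16         # 16-core Opteron 6274 per node
ram_per_node_gb = 32 + 6    # 32 GB DDR3 + 6 GB GDDR5 per node

total_cores = nodes * cores_per_node
total_ram_gb = nodes * ram_per_node_gb

print(total_cores)   # 299008
print(total_ram_gb)  # 710144 GB, i.e. just over 710 TB
```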

Of course, the obvious question is - but can it play Crysis 1, 2 and 3? :)



  • @'Alpensepp':

Apple's prices are hilarious… $2000 for 8x8 GB RAM... what does an 8 GB DDR3 stick with ECC cost? Like $60?

    How else do you expect them to pay for Steve Jobs "iTomb" :P

My ultimate PC is whatever PC can play the latest games with all the graphics options maxed out, and can be bought easily without me having to take the time to order a custom-built machine.

    Of course my Dream PC will always be whatever Starfleet installs on their Galaxy Class Starships :) (yea, I'm nerdy like that)



  • @'Alpensepp':

Apple's prices are hilarious… $2000 for 8x8 GB RAM... what does an 8 GB DDR3 stick with ECC cost? Like $60?
And of course everything is ~30% more expensive in the EU store...

    It's not that expensive here, but it's still pretty fucking expensive compared to other computers.



  • my dream pc? let me customize one in alienware :P



Apple's prices are hilarious… $2000 for 8x8 GB RAM... what does an 8 GB DDR3 stick with ECC cost? Like $60?
And of course everything is ~30% more expensive in the EU store...

I'd still like to have one of those new MacBook Pros with Retina display... just because. Never thought I'd say this, but I've started to like my MacBook xD



Why would you want dual HD 5770s when you can have an HD 5870? A single card consumes less power and space, and actually costs less too. If multiple monitors are the goal, since you're just doing two displays, the card has outputs for both.

Almost twice the frame rate, handy for Luxrender too. You can easily build a custom box that offers the same performance for much less. The only caveat is that you have to use Windows.

