


  • This topic is interesting



  • I've gone and done it. I am now the proud owner of a brand-new 6GB GTX 980 Ti, and it arrived today. It's my starter graphics card while I get my bearings. :D

    http://www.newegg.com/Product/Product.aspx?Item=N82E16814127910&cm_re=gtx980_msi--14-127-910--Product

    There was an even faster version, yes, but I really didn't have another $100 or so.

    I'd post my own pic of it, but I don't see an album upload option for this forum. I know, I'll have to get an account somewhere to show off my pr0n. (Any suggestions? lol)

    You've all probably read that the die for an 8GB version of the GTX 980 didn't pan out, so that leaves just the 6GB version (or maybe they just didn't want a lower-series card doing better than their first Titan). The absolute best ones approach the $900 mark.

    Gotta say, this thing is fuckin' huge compared to any graphics card I have ever come across personally. Got-dayum. I'm kind of worried it won't fit in my starter computer case. That 580 I sold off was pretty big.

    While I know it probably won't hit the marks of a professional GPU, I expect it's better than nothing! 6GB of VRAM is at least something to play with.





  • That I have, Fred… and I checked on an upcoming 6GB version... Hoooooo boy, 6GB? We may be in store for 8GB, though I am only cautiously optimistic about that. Just rumors at this point... or......

    http://www.game-debate.com/hardware/?gid=2490&graphics=GeForce GTX 980 Ti 8GB

    http://www.gamespot.com/forums/pc-mac-discussion-1000004/gtx-980-8gb-gddr5-allegedly-inbound-landing-in-nov-31672983/

    As for a 980 Ti... Wait, what's this, a Titan 2 is ALSO coming?!
    http://wccftech.com/tsmc-buys-14b-worth-equipment-16nm-volume-prediction-begins-q2q3-2015/


    In the words of Tourettes guy during the Head'n'shoulders incident:
    HOOOO-LY PISS!



  • Has anyone googled "gtx980ti" recently?

    If the rumoured specs are true it sounds great!

    But will no doubt have a "great" price tag :D



  • ^^^Or you could try your local CC library. They'd have some means of digitally accessing textbooks or whatever through the library, I'm sure of it. Or an inter-library loan between colleges for hard copies.

    Though you could probably also go to your local, err, nearest major bookstore and see if they happen to have it on the shelves.

    @'fredfred5150':

    Actually, the majority of the heavy lifting in ZBrush is done by your hard drive ;)

    "ZBrush does not actually use the Open Graphics Library (OpenGL) specification when it displays 3D objects on the screen. Pixologic has developed its own protocols for 2D and 3D images based on the pixol. This means that ZBrush is free from the polygon limits imposed by the OpenGL standard.

    It also means that ZBrush is not dependent on the power of your machine's graphics card. Instead, ZBrush requires a fair amount of RAM (a gigabyte or more) and lots of free hard disk space. For this reason ZBrush runs quite well even on a decent laptop."

    Source: Introducing ZBrush 4, E. Keller, ISBN-13: 978-0470527641

    Does no-one buy textbooks anymore? :D

    Well, why else do you think you're here? :P

    In all seriousness, that's interesting to note, and suddenly a few things make sense. If the hard drive does the heavy lifting, I'd better make sure it has adequate ventilation then. (A quick way to sanity-check the RAM and scratch-disk headroom mentioned above is sketched below.)
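
    A minimal sketch of that sanity check, assuming Python 3 with the third-party psutil package for the RAM figure (the disk check uses only the standard library); the 1 GB RAM and 20 GB scratch-space thresholds are illustrative guesses based on the quote above, not official ZBrush requirements:

    ```python
    # check_zbrush_headroom.py -- rough pre-flight check before a heavy ZBrush session.
    # Assumes Python 3 and `pip install psutil`; thresholds are illustrative only.
    import shutil

    import psutil

    MIN_RAM_GB = 1.0       # "a gigabyte or more", per the quoted book
    MIN_SCRATCH_GB = 20.0  # arbitrary stand-in for "lots of free hard disk space"
    SCRATCH_PATH = "C:\\"  # wherever ZBrush keeps its scratch files on your machine

    ram_gb = psutil.virtual_memory().total / 1024 ** 3
    free_gb = shutil.disk_usage(SCRATCH_PATH).free / 1024 ** 3

    print(f"Installed RAM: {ram_gb:.1f} GB (want >= {MIN_RAM_GB} GB)")
    print(f"Free space on {SCRATCH_PATH}: {free_gb:.1f} GB (want >= {MIN_SCRATCH_GB} GB)")
    if ram_gb < MIN_RAM_GB or free_gb < MIN_SCRATCH_GB:
        print("Headroom looks tight for heavy sculpts.")
    ```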



  • I'm not talking about piracy, I'm talking about online tutes and such.



  • Because piracy is wrong and I would never do it, honest ;)



  • Why buy books when that information can be found for free online?



  • Actually, the majority of the heavy lifting in ZBrush is done by your hard drive ;)

    "ZBrush does not actually use the Open Graphics Library (OpenGL) specification when it displays 3D objects on the screen. Pixologic has developed its own protocols for 2D and 3D images based on the pixol. This means that ZBrush is free from the polygon limits imposed by the OpenGL standard.

    It also means that ZBrush is not dependent on the power of your machine's graphics card. Instead, ZBrush requires a fair amount of RAM (a gigabyte or more) and lots of free hard disk space. For this reason ZBrush runs quite well even on a decent laptop."

    Source: Introducing ZBrush 4, E. Keller, ISBN-13: 978-0470527641

    Does no-one buy textbooks anymore? :D



  • @'Morfium':

    @'~ArgonCyanide777':

    @'Morfium':

    I've already worked with ZBrush since I got my two 980s, and had no problems, but does ZBrush use the graphics cards that intensely? I mean, isn't it overall only the preview they calculate?

    Sorry I didn't see this until just now… :s OK...I know you've said you have some issues with English so let me see if I understand you...

    You've had no problems with your graphics cards in ZBrush. However, using them with ZBrush is "intense" (perhaps you mean 'excessive'?).

    No, I mean that there isn't much for the graphics cards to calculate, except the previews of the polygons, and that only starts to become a problem when you have a massive amount of them, I guess. So I think ZBrush is not the best way to test graphics card power. Hope that is more understandable.

    Ah okay, so that will overload the graphics cards. Gotcha. What is your best setup recommendation or method for ZBrush? Just the CPU + RAM to get the job done?

    @'miroforpres':

    Sorry, I haven't read all of this thread so I may repeat something that's been said, but here are some facts that may be interesting.

    Gaming and workstation video cards have the same GPUs in them from generation to generation. In this example I'll take Nvidia cards. The Kepler-architecture GPU was used for GeForce GTX cards and for Quadros (GTX 780/TITAN and Quadro K series); the major differences are the amount of RAM, the instruction sets, and the hardware surrounding the GPU itself. Quadros do indeed have better voltage control units and a longer lifespan when used 24/7. The Quadro has instructions that are ''locked'' on its GeForce sister card, and those instructions are used to render and calculate a ton of data. If it were not for those locks, the GeForce would do the exact same job as the Quadro. Clock speed is also sometimes lowered on Quadros to reduce the chance of errors and to lower the overall wattage and heat. The TITAN is a very exceptional card in that it was somewhat of a hybrid: it is not crippled by instruction locks for rendering, and at the same time it was priced in a way that a gamer could consider it. It was still a very expensive card, but some gamers believed it had enough bang for the buck to buy it for gaming. An instruction-crippled AND cheaper 780 Ti is still a better gaming performer, but it won't render nearly as fast as a TITAN or K-series Quadro of the same speed.

    Everything you said made total sense to me. You reinforced the meager tidbits I already knew, and confirmed the stuff I suspected, but did not know.

    You know, I have always heard that these companies already have more or less the best one in-house, but then make revisions to create "inferior" versions for that generation; like dropping a trail of breadcrumbs, they market the lower-end stuff first to keep people coming back for more, since they basically already have the best one. This would seem to support that assertion, I think.

    Can you play 3D games with a Quadro card? Yes, very well. But the price-for-performance is very poor: at exactly the same clock speed in the same generation, it's going to play just as well as a GeForce but will cost you a lot more.

    Should you buy a Quadro card if you only do ''some'' rendering? No; the price does not justify the time you save when you do a little rendering. It's not until you do HUGE projects that it becomes viable. Saving 32 minutes every couple of days isn't worth it. Saving HOURS every day is worth it.

    Right. That's the reason computer magazines talk about the point of diminishing returns on a performance-per-$$$ basis when building a system. (A rough back-of-the-envelope payback calculation along those lines is sketched at the end of this post.)
    As for myself… yeah, I basically have no dream of going that far in for a workstation card.

    Can you mix video card brands and specs in SLI? Well, yes, but only to a certain extent. Since both cards work together, one cannot work harder than the other: you are likely to slow the faster card down to the speed of the slower one. If one card has 2GB and the other has 4GB, only 2GB will be used on both cards AND it will not add up; video memory is NOT shared between cards, it's 2 copies of the same data, and each GPU accesses that data in its own RAM. You can mix brands SOMETIMES; an EVGA GTX 970 and an MSI GTX 970 may SLI just fine, but it's not guaranteed at all unless they are ''reference'' models. As soon as a company like MSI fiddles with the hardware, the BIOS may be affected and prevent SLI with other non-reference BIOSes. You may look further into this by researching reference design versus custom design.

    Actually that is not unlike semiconductors and trying to get different brands to work together. Again, 'reference design' is what to look for cuz small changes can make a big difference.

    Should you SLI at all? Money is the answer here (using the GeForce 700 series as an example). If you can buy a 780 Ti, then just buy it. If you can only buy a 760 today and another 760 in 2 months, that could be a good case for SLI. But for a given budget, you should always buy the single big card instead of 2 cheaper cards: 2 cards means more heat, more chances of failure, and more possible problems. On the other hand, if you have a HUGE budget, then go ahead and buy 2 or 3 780 Tis, because no single card can beat that. Always aim for the strongest single card your budget can buy. If you opt for a single 760 today and plan to buy another to SLI in the future, do not wait too long, as availability decreases with time.

    A Titan surely would be ideal, no doubts there… a 6GB GTX 780 is what I'd prefer overall... and a 4GB GTX 980 is the grudging compromise I'm checking out here; it may be faster, but it has less memory.

    If my living situation had a bit more certainty, I'd probably just buy the single 780 straight away. I'm hoping the GTX 980 will get a 6GB version like the 780 did in 2014, but to be realistic, I somehow doubt it.

    What would I be doing?
    Generally: I'll be learning 3DX and rendering part time, multimedia part time, vidya part time (SWTOR, emulators from 8-bit to 360, Steam stuff, Left 4 Dead and other Valve games, etc.), and probably using everything else for either trade school or eventually business/bookkeeping. So I'd turn off the GPU for those long slogs of time when I'm not using it. (Can't figure out why some folks burn out their cards like that.)

    Specifically: I'd prefer enough VRAM to make a render of something like this, with the same number of people but different characters:
    http://aimg.rule34.xxx//samples/8/sample_b9af2cfa793ef42dbfb7a661eee9ed919dd6678f.jpg?7989

    AMD or Nvidia? AMD makes Radeon for gaming and FirePro for rendering. Nvidia makes GeForce for gaming and Quadro for rendering. Both companies have very good solutions for every problem. Depending on the program used, sometimes Quadro will do better, sometimes FirePro will do better. Most of the time the price for performance will be quite similar, and it will come down to user preference. Same for games: some games will run better on Radeon, some better on GeForce, and again the price for performance will be relatively similar. There may be some exceptions, but it comes down to user preference again.

    All in all, it comes down to price, preference, usage and bling bling. Do not spend too much on something you will not need; do some research to select the best solution for your needs and choose wisely. Hardware forums will often be filled with information on particular hardware before it even reaches the shelves of your local dealer. Also, buying the latest generation of hardware might not yield the best bang for the buck either: it's wise to check whether you can buy a better-performing 780 Ti instead of a 970, if the price is worth it. Oh, and BTW, there was no desktop GeForce 800 series; they skipped straight to the 900, like they did with the 300 series.

    Lots of info, use it or skip it but it may be useful to some.

    Thank you. I appreciate it and I'm sure others in a similar sitch appreciate it too. :)

    I suppose I could do some research to see if render monkey could work, and whether a comparable AMD card might be out there, though I'm under the impression this is more for professionals than hobbyists like myself. And the price is probably similar…
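
    Since the point keeps coming back to time saved versus price premium, here is a rough back-of-the-envelope sketch in Python; every figure in it (the premium, the time saved, the value of an hour) is a made-up placeholder to show the arithmetic, not a real benchmark or price:

    ```python
    # payback_sketch.py -- toy estimate of when a pricier workstation card pays for itself.
    # All figures are hypothetical placeholders; plug in your own numbers.

    card_premium_usd = 2000.0     # extra cost of the workstation card over the gaming card
    hours_saved_per_day = 0.5     # render time saved per working day (e.g. 30 minutes)
    working_days_per_month = 20
    value_of_hour_usd = 25.0      # what an hour of saved time is worth to you

    saved_per_month = hours_saved_per_day * working_days_per_month * value_of_hour_usd
    months_to_break_even = card_premium_usd / saved_per_month

    print(f"Value recovered per month: ${saved_per_month:.2f}")
    print(f"Months until the premium pays for itself: {months_to_break_even:.1f}")
    # With these placeholders: $250.00/month, so 8.0 months to break even.
    ```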



  • Thanks. I admit I fell out of the loop for about 5 years regarding computer shtuff. I've built my own systems since 1998, but only four of them, one about every 4 or 5 years, mostly adding/upgrading hard drives and audio cards/systems in between. I didn't really get into the whole video card thing until the end of 2013 when I built my latest system. I opted for the 4GB 760 since it was far enough above the specs for Skyrim (which was really my only concern at the time, what with the Creation Kit and all).



  • Sorry, I haven't read all of this thread so I may repeat something that's been said, but here are some facts that may be interesting.

    Gaming and workstation video cards have the same GPUs in them from generation to generation. In this example I'll take Nvidia cards. The Kepler-architecture GPU was used for GeForce GTX cards and for Quadros (GTX 780/TITAN and Quadro K series); the major differences are the amount of RAM, the instruction sets, and the hardware surrounding the GPU itself. Quadros do indeed have better voltage control units and a longer lifespan when used 24/7. The Quadro has instructions that are ''locked'' on its GeForce sister card, and those instructions are used to render and calculate a ton of data. If it were not for those locks, the GeForce would do the exact same job as the Quadro. Clock speed is also sometimes lowered on Quadros to reduce the chance of errors and to lower the overall wattage and heat. The TITAN is a very exceptional card in that it was somewhat of a hybrid: it is not crippled by instruction locks for rendering, and at the same time it was priced in a way that a gamer could consider it. It was still a very expensive card, but some gamers believed it had enough bang for the buck to buy it for gaming. An instruction-crippled AND cheaper 780 Ti is still a better gaming performer, but it won't render nearly as fast as a TITAN or K-series Quadro of the same speed.

    Can you play 3D games with a Quadro card? Yes, very well. But the price-for-performance is very poor: at exactly the same clock speed in the same generation, it's going to play just as well as a GeForce but will cost you a lot more.

    Should you buy a Quadro card if you only do ''some'' rendering? No; the price does not justify the time you save when you do a little rendering. It's not until you do HUGE projects that it becomes viable. Saving 32 minutes every couple of days isn't worth it. Saving HOURS every day is worth it.

    Can you mix video card brands and specs in SLI? Well, yes, but only to a certain extent. Since both cards work together, one cannot work harder than the other: you are likely to slow the faster card down to the speed of the slower one. If one card has 2GB and the other has 4GB, only 2GB will be used on both cards AND it will not add up; video memory is NOT shared between cards, it's 2 copies of the same data, and each GPU accesses that data in its own RAM. You can mix brands SOMETIMES; an EVGA GTX 970 and an MSI GTX 970 may SLI just fine, but it's not guaranteed at all unless they are ''reference'' models. As soon as a company like MSI fiddles with the hardware, the BIOS may be affected and prevent SLI with other non-reference BIOSes. You may look further into this by researching reference design versus custom design.

    Should you SLI at all? Money is the answer here (using the GeForce 700 series as an example). If you can buy a 780 Ti, then just buy it. If you can only buy a 760 today and another 760 in 2 months, that could be a good case for SLI. But for a given budget, you should always buy the single big card instead of 2 cheaper cards: 2 cards means more heat, more chances of failure, and more possible problems. On the other hand, if you have a HUGE budget, then go ahead and buy 2 or 3 780 Tis, because no single card can beat that. Always aim for the strongest single card your budget can buy. If you opt for a single 760 today and plan to buy another to SLI in the future, do not wait too long, as availability decreases with time.

    AMD or Nvidia? AMD makes Radeon for gaming and FirePro for rendering. Nvidia makes GeForce for gaming and Quadro for rendering. Both companies have very good solutions for every problem. Depending on the program used, sometimes Quadro will do better, sometimes FirePro will do better. Most of the time the price for performance will be quite similar, and it will come down to user preference. Same for games: some games will run better on Radeon, some better on GeForce, and again the price for performance will be relatively similar. There may be some exceptions, but it comes down to user preference again.

    All in all, it comes down to price, preference, usage and bling bling. Do not spend too much on something you will not need; do some research to select the best solution for your needs and choose wisely. Hardware forums will often be filled with information on particular hardware before it even reaches the shelves of your local dealer. Also, buying the latest generation of hardware might not yield the best bang for the buck either: it's wise to check whether you can buy a better-performing 780 Ti instead of a 970, if the price is worth it. Oh, and BTW, there was no desktop GeForce 800 series; they skipped straight to the 900, like they did with the 300 series. (A tiny performance-per-dollar comparison along those lines is sketched below.)

    Lots of info, use it or skip it but it may be useful to some.
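
    A minimal sketch of that price-for-performance check in Python; the benchmark scores and prices are invented placeholders purely to show the arithmetic, not real figures for any particular card:

    ```python
    # perf_per_dollar.py -- toy comparison of two cards on a score-per-dollar basis.
    # Scores and prices are placeholders; substitute current benchmarks and street prices.

    cards = {
        "Card A (older flagship)": {"score": 100.0, "price_usd": 450.0},
        "Card B (newer midrange)": {"score": 90.0, "price_usd": 330.0},
    }

    for name, c in cards.items():
        ratio = c["score"] / c["price_usd"]
        print(f"{name}: {c['score']:.0f} pts / ${c['price_usd']:.0f} = {ratio:.3f} pts per dollar")

    best = max(cards, key=lambda n: cards[n]["score"] / cards[n]["price_usd"])
    print(f"Better value with these placeholder numbers: {best}")
    ```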



  • @'Nuke':

    Regarding Nvidia cards:

    2. I also read somewhere that one of the major differences between Gaming and Workstation cards was the Clock Speed - faster for games, slower for greater stability for long/complex renders (and another point of difference being the level of driver and hardware support for workstation cards). My card is an EVGA-branded model, and came with an app that lets me adjust Clock Speed. I haven't used LuxRender that much, but in the two renders I tested it with, when the card was running at "default/gaming" speed, LuxRender crashed after some time, yet when I backed the speed down to -40, it was more stable. Has anyone else tried this with their cards? Do other "brandings" have such an app (PNY, etc.)?

    3. Since my card is EVGA branded, should I get a 2nd EVGA GTX760 or can I use, say, a PNY GTX760? What about the VRAM? Since mine is the 4GB model, will a 2GB model slow things down or give me 6GB total (in SLI)?

    Thanks :D

    To #2
    Workstation cards (like Quadro, Tesla) are for 24/7 usage.
    They have a way longer lifetime when used 24/7 for rendering compared to consumer cards (GeForce).
    Their cooling architecture for deflecting hot air is different and better. They can also handle way more OpenGL data, so your viewport while modelling runs way faster if the scene is quite full.

    These are things a consumer or gaming card doesn't need as much. For gaming you don't need that much OpenGL data, and gaming cards usually don't run at full load for long stretches.

    I ran a comparison test between an older Quadro 4000 and a GTX 780 with 3DEqualizer (camera tracking software).
    The 780 was still noticeably slower at calculating the scene than the old Quadro.

    So it really depends on what you are doing or what you want to do. The prices for the Quadro series are high, but if you are going to build up a 24/7 GPU render farm, it is the better choice. But even Hollywood stays with CPU render farms, because it's way easier (and cheaper!) to build multi-core server CPU workstations, where one machine has 28 cores or something.

    About #3:
    As long as it is another Nvidia card which uses the same driver, it should be no problem.
    It is even possible to have an AMD and an Nvidia card in one setup, but for that you might ask Gazukull, he has done something like this :)
    Having a Quadro and a GeForce card together is also possible, but you need some workarounds for it.



  • @'Nuke':

    Ah, I see. Thanks. Shame that VRAM isn't doubled :(

    On the subject of gaming vs workstation cards, there was an interview with an AMD guy - VP of marketing, I think - where he was asked "what's the difference?" and he evaded the question by saying "if you put the chip from a workstation card onto a gaming card, it'll behave differently", and then went on to say that with the higher prices for WS cards you get better service and customized drivers if the client needs them.

    However, as you say, if gaming cards are geared more toward processing low-poly game models and such, hi-res models would choke them. In that case, clock speed would be a factor (as well as bus speed on the card, cores, etc., as I've read elsewhere in comparisons between WS cards). With the ability to slow down the clock, I have noticed more stable rendering of demanding images - Lux will actually near completion instead of hanging and crashing.

    I'm a relative newcomer so I don't know too much more.

    You also have to factor in not only clock speed but architecture, and whether or not the program is optimized for it, be it GPU or CPU. I don't know about GPU differences between Radeon and nVidia, but the CPUs for sure have some major differences. Radeon is supposedly better for multitasking and computation/number-crunching analogous to production (which really makes me scratch my head when I see that some rendering programs are nVidia-only), and nVidia is meant for single processes & applications.

    I think what it comes down to is that while one chipset is more efficient at processing info, the other is run faster to make up the difference. (shrug) I can't tell for sure, since this is only what's been passed on to me through online forums and magazines. (If you want to see what clocks and temperatures your card actually holds during a long render after downclocking, a small monitoring sketch is at the end of this post.)

    Sadly, Poser uses the CPU for rendering, so the point is relatively moot anyway :lol:

    Thanks for the assist :D

    You're welcome. And thank you for that tidbit, I didn't know.
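
    If anyone wants to try that, here is a minimal monitoring sketch, assuming Python 3.7+ on a machine where the Nvidia driver's nvidia-smi utility is on the PATH; depending on the driver and card, some query fields may report N/A, and nothing here is specific to EVGA's own tuning app:

    ```python
    # gpu_watch.py -- log GPU clock, temperature and power during a long render.
    # Assumes Python 3.7+ and that the Nvidia driver's `nvidia-smi` tool is on the PATH.
    import subprocess
    import time

    QUERY = "timestamp,clocks.gr,clocks.mem,temperature.gpu,power.draw"

    def sample() -> str:
        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        # Sample every 5 seconds; stop with Ctrl+C once the render finishes (or crashes).
        try:
            while True:
                print(sample())
                time.sleep(5)
        except KeyboardInterrupt:
            pass
    ```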



  • Ah, I see. Thanks. Shame that VRAM isn't doubled :(

    On the subject of gaming vs workstation cards, there was an interview with an AMD guy - VP of marketing, I think - where he was asked "what's the difference?" and he evaded the question by saying "if you put the chip from a workstation card onto a gaming card, it'll behave differently", and then went on to say that with the higher prices for WS cards you get better service and customized drivers if the client needs them.

    However, as you say, if gaming cards are geared more toward processing low-poly game models and such, hi-res models would choke them. In that case, clock speed would be a factor (as well as bus speed on the card, cores, etc., as I've read elsewhere in comparisons between WS cards). With the ability to slow down the clock, I have noticed more stable rendering of demanding images - Lux will actually near completion instead of hanging and crashing.

    Sadly, Poser uses the CPU for rendering, so the point is relatively moot anyway :lol:

    Thanks for the assist :D



  • @'Nuke':

    Regarding Nvidia cards:

    1. I read somewhere (Tom's Hardware I think) that the chipset on the GTX 760 can only handle 2GB, and that the 4GB 760 (which I have) was merely "snake oil" - can anyone confirm this either way?

    Greetings… Ehh, well, the fact you HAVE one should answer your question, shouldn't it? Whoever called it snake oil obviously either didn't know what s/he was talking about, or it was posted prior to any news about the release of the very card you have... or they possibly were just full of shit...

    2. I also read somewhere that one of the major differences between Gaming and Workstation cards was the Clock Speed - faster for games, slower for greater stability for long/complex renders (and another point of difference being the level of driver and hardware support for workstation cards). My card is an EVGA-branded model, and came with an app that lets me adjust Clock Speed. I haven't used LuxRender that much, but in the two renders I tested it with, when the card was running at "default/gaming" speed, LuxRender crashed after some time, yet when I backed the speed down to -40, it was more stable. Has anyone else tried this with their cards? Do other "brandings" have such an app (PNY, etc.)?

    Nope. No card of my own yet, sorry.

    I would say gaming cards are more for game 'physics' and their animations, while workstation cards are better at calculations for productions and editing.

    Maybe I have my facts backwards, but I was under the impression that games and animation require a lower poly count and resolution than single images do. Makes logical sense to me; otherwise I would think it'd cause a crash by putting more strain on the hardware for resolution and poly count AND animating. At the very least it'd be glitchy and jerky due to lag.

    Sorry I can't be more helpful.

    3. Since my card is EVGA branded, should I get a 2nd EVGA GTX760 or can I use, say, a PNY GTX760? What about the VRAM? Since mine is the 4GB model, will a 2GB model slow things down or give me 6GB total (in SLI)?

    Thanks :D

    OK, as to your first question: yes, stick with the same brand. Actually, nVidia cards will only work together with the same make and model!!! Radeon is a bit more flexible, on the other hand: you still need the same make (i.e. Sapphire, generic name brand, etc.) but you can use different models.

    Second question: you're merely adding together computational power to increase speed. As to the VRAM, you are limited to the smallest card. So if you have a 2GB and a 4GB card, you'd still be limited to only 2GB, which means wasted space on the 4GB card, though it'd run smoother and faster for all things 2GB and under. If you want to utilize that full 4GB, your other card needs to be 4GB as well. (A tiny numeric illustration of this is at the end of this post.)

    Good luck.
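
    A tiny numeric illustration of that rule as a Python sketch; the card specs and the ~1.8x SLI scaling factor are made-up placeholders just to show that compute pools (imperfectly) while VRAM does not:

    ```python
    # sli_memory_sketch.py -- show that SLI pools compute (imperfectly) but not VRAM.
    # Card specs and the ~1.8x scaling factor are illustrative placeholders, not measurements.

    cards = [
        {"name": "Card 1", "vram_gb": 4, "relative_speed": 1.0},
        {"name": "Card 2", "vram_gb": 2, "relative_speed": 1.0},
    ]

    # Usable VRAM is the smallest card's pool: each GPU keeps its own copy of the scene data.
    usable_vram_gb = min(c["vram_gb"] for c in cards)

    # Compute roughly adds up, but never perfectly, and the pair runs at the slower card's pace.
    SLI_SCALING = 1.8
    combined_speed = min(c["relative_speed"] for c in cards) * SLI_SCALING

    total_vram = sum(c["vram_gb"] for c in cards)
    print(f"Usable VRAM in SLI: {usable_vram_gb} GB (not {total_vram} GB)")
    print(f"Rough combined throughput: {combined_speed:.1f}x a single card")
    ```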



  • Regarding Nvidia cards:

    1. I read somewhere (Tom's Hardware I think) that the chipset on the GTX 760 can only handle 2GB, and that the 4GB 760 (which I have) was merely "snake oil" - can anyone confirm this either way?

    2. I also read somewhere that one of the major differences between Gaming and Workstation cards was the Clock Speed - faster for games, slower for greater stability for long/complex renders (and another point of difference being the level of driver and hardware support for workstation cards). My card is an EVGA-branded model, and came with an app that lets me adjust Clock Speed. I haven't used LuxRender that much, but in the two renders I tested it with, when the card was running at "default/gaming" speed, LuxRender crashed after some time, yet when I backed the speed down to -40, it was more stable. Has anyone else tried this with their cards? Do other "brandings" have such an app (PNY, etc.)?

    3. Since my card is EVGA branded, should I get a 2nd EVGA GTX760 or can I use, say, a PNY GTX760? What about the VRAM? Since mine is the 4GB model, will a 2GB model slow things down or give me 6GB total (in SLI)?

    Thanks :D



  • @'~ArgonCyanide777':

    @'Morfium':

    I've already worked with ZBrush since I got my two 980s, and had no problems, but does ZBrush use the graphics cards that intensely? I mean, isn't it overall only the preview they calculate?

    Sorry I didn't see this until just now… :s OK...I know you've said you have some issues with English so let me see if I understand you...

    You've had no problems with your graphics cards in ZBrush. However, using them with ZBrush is "intense" (perhaps you mean 'excessive'?).

    No, I mean that there isn't much for the graphics cards to calculate, except the previews of the polygons, and that only starts to become a problem when you have a massive amount of them, I guess. So I think ZBrush is not the best way to test graphics card power. Hope that is more understandable.

