  • 2.5k for no grain? I need to hit 6k minimum for that to be the case. I often go as far as 20k, though I did notice that after 6k you suffer massive diminishing returns. No way I could stop at 2.5k, even with the amount of postwork I do. I'm not a wizard, Harry.

    Yeah, for some reason RAM never moved quite as fast as graphics cards and CPUs. I guess because it wasn't really necessary for anything, so even a 5-7 year old rig shouldn't suffer any speed loss. Not sure exactly how out-of-core (OOC) rendering works, though.

    As for Pascal, I don't buy hardware (or software) until the end of a cycle, so it's going to be a year, if not more, until I consider touching that.

    Anyway, thank you for taking the time to test all that, hibbli.



  • @hibbli

    I think a better comparison between a pair of 980 Ti's and a pair of 780's would be to run the same scene on both machines.

    If you are running your slave over WiFi, that makes a huge difference in performance.

    But based on OctaneBench numbers, a 980 Ti is roughly 33% faster than a 780. What I'm getting at is: you can get a used 780 for about 225 USD these days, which means a 980 Ti would have to cost about 310 USD to match it on performance per dollar.

    Don't get me wrong, if I were buying new right now I would look hard at 980 Ti's… but I'd probably cheap out and get a used 780 Ti off eBay. As it is, I think I'm going to scale back one render machine, so I have to sell at least 3-4 780 3GB cards... I want to get myself ready for Pascal!
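The price-to-performance break-even in that post can be sanity-checked in a few lines. Note the ~33% speedup and the 225 USD used price are the poster's rough figures, not measured data:

```python
# Break-even price for a 980 Ti vs. a used 780, assuming price should
# scale linearly with relative OctaneBench performance.
used_780_price = 225.0   # USD, quoted used price for a 780 (poster's figure)
speedup = 1.33           # 980 Ti ~33% faster than a 780 (poster's estimate)

break_even_price = used_780_price * speedup
print(f"980 Ti break-even: ~{break_even_price:.0f} USD")
```

That lands around 299 USD, in the same ballpark as the ~310 USD figure in the post; the gap is just rounding in the speedup estimate.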



  • 2500 samples :) my standard value to be sure no grain shows up when I use blackbody emission lights, as usual. I have no idea if running this without speed loss is possible for everyone, but I guess the differences between motherboards and RAM shouldn't be that big nowadays. Honestly, I don't know :)

    rendered image:
    http://hibbli3d.com/lineup.jpg

    just with very slight contrast postwork



  • @hibbli:

    @nox: did the same scene (without network rendering, just the main rig). At first I rendered it with the usual 4 GB of textures. Then I put in some additional Gen 3 figures to push the textures up to 7 GB… so I had to enable the out-of-core (OOC) memory option, set to 16 GB. The result: practically zero speed loss... 20 or 30 seconds longer, that's it. So it seems VRAM isn't as critical as it was a year ago :)

    I didn't even know OOC was a thing. I tend not to update unless it's a whole version number (v1 to v2, for example), so I guess I missed that. Nice to hear that problem is no longer present, hopefully. Thank you for bringing it to my attention.

    I still don't know why I have to manually update a program that requires an always-on internet connection in the first place.

    By the way, how many samples? The numbers don't mean anything without that.



  • I know it's not about GPU rendering, but I hope these statistics will be interesting for someone :)

    Here is my current badass monster; full config info is on its CPU-Z Validation Page :)

    And three render minions:
    one: http://valid.x86.fr/9k27k8
    two: http://valid.x86.fr/sf1qfs
    three: http://valid.x86.fr/4tmq58

    For example, THIS 4k Frame was rendered:

    without render nodes: ~26 minutes
    with only the 2x i7-5820K render nodes: ~14 minutes
    using all render nodes: ~10 minutes

    The scene contains ~3,800,000 triangles and all 4K textures, with unoptimized geometry and textures; with decent optimization it might render about 50% faster.
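Those timings translate into the following speedups, a quick sketch using the approximate minutes from the post:

```python
# Speedup from adding render nodes, using the ~minutes quoted above.
times_min = {
    "main rig only": 26.0,
    "+ 2x i7-5820K nodes": 14.0,
    "all render nodes": 10.0,
}
base = times_min["main rig only"]
for setup, t in times_min.items():
    print(f"{setup}: {t:.0f} min ({base / t:.2f}x)")
```

So the two 5820K nodes nearly double throughput, and the full farm gets ~2.6x; scaling is sublinear, which is expected once scene sync and network overhead kick in.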




  • I know it's not about GPU rendering, but I hope these statistics will be interesting for someone :)

    Here is my current badass monster; full config info is on its CPU-Z Validation Page :)

    It also has three render slaves:

    one: http://valid.x86.fr/9k27k8
    two: http://valid.x86.fr/sf1qfs
    three: http://valid.x86.fr/4tmq58

    Rendering THIS 4K frame:

    without render slaves it takes ~26 minutes
    with the 2x i7-5820K slaves it takes ~14 minutes
    with all slaves it takes ~10 minutes

    The scene contains ~3,800,000 triangles and all 4K textures, with really poor optimization of textures and geometry; it could be much faster with decent optimization. It's also running an old V-Ray core without Intel Embree. I hope to get the new version soon, which supports Embree and can speed up rendering by 30-300%, depending on the type of calculation.



  • OK… did some testing today... maybe interesting :)

    3000x2250 pixel render, main rig with 2x 980 Ti 6GB and one slave rig with 2x 780 6GB

    1x 980 Ti: 2:21 h
    2x 980 Ti: 1:10 h
    1x 980 Ti + 2x 780 (network): 1:00 h
    2x 980 Ti + 2x 780 (network): 0:42 h
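Converted to minutes, those runs give the following scaling. Times are as quoted, so the ratios carry some rounding:

```python
# Scaling across GPU configurations, using the times quoted above
# converted to minutes (2:21 h -> 141 min, and so on).
runs_min = {
    "1x 980 Ti": 141,
    "2x 980 Ti": 70,
    "1x 980 Ti + 2x 780 (network)": 60,
    "2x 980 Ti + 2x 780 (network)": 42,
}
base = runs_min["1x 980 Ti"]
for setup, t in runs_min.items():
    print(f"{setup}: {t} min ({base / t:.2f}x)")
```

Two 980 Tis in one box scale almost perfectly (about 2.01x), while the networked 780s add less than their OctaneBench numbers would suggest, consistent with the network-overhead suspicion in the post.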

    @gazu: do the math :) It seems the 980 Ti really is superior to the 780... the two of them over network rendering were just 10 minutes faster than a second 980 Ti in the main rig. Or maybe it's the network rendering itself that slows things down; not sure yet. Let's see after the next OCDS plugin patch, which should show up this weekend.

    @nox: did the same scene (without network rendering, just the main rig). At first I rendered it with the usual 4 GB of textures. Then I put in some additional Gen 3 figures to push the textures up to 7 GB... so I had to enable the out-of-core (OOC) memory option, set to 16 GB. The result: practically zero speed loss... 20 or 30 seconds longer, that's it. So it seems VRAM isn't as critical as it was a year ago :)
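The OOC behavior makes sense once you sketch the memory math. A minimal check, where the VRAM and texture sizes come from the post but the geometry/framebuffer overhead is a made-up illustrative number:

```python
# Whether a scene fits in VRAM is what decides if out-of-core (OOC)
# mode is needed. VRAM and texture sizes are from the post; the
# overhead figure is hypothetical.
vram_gb = 6.0        # one 980 Ti
textures_gb = 7.0    # after adding the extra Gen 3 figures
overhead_gb = 0.5    # hypothetical geometry + framebuffer overhead

needed_gb = textures_gb + overhead_gb
fits = needed_gb <= vram_gb
print(f"need {needed_gb:.1f} GB vs {vram_gb:.1f} GB VRAM -> "
      f"{'fits' if fits else 'OOC required'}")
```

With 7 GB of textures against 6 GB of VRAM, OOC has to kick in; the post suggests the penalty for paging the overflow over PCIe was only 20-30 seconds on that scene.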



  • Again… the appeal of the Titan is not the ~200 MHz of extra clock speed but the extra 6 GB of VRAM it has over the 980.

    You want big scenes? Titan.
    You don't? 980.

    Speed is not the relevant stat in the 980 vs Titan discussion.



  • At the moment it's… 2x 980 Ti, 2x 780 :) Upgraded a bit... but speed is fast enough now anyway... I could maybe set up a third network slave with the remaining 2x 780s, but let's see when the new developer of that DAZ plugin gets network rendering fixed... THEN I'll think about it...



  • Results for Octane - but Iray results are near the same in terms of relative performance.

    As you can see, the 980 Ti costs half as much as a Titan X but delivers basically the same performance…

    So yeah! A pair of 980 Ti would be an excellent choice.

    @Hibbli - if I recall your setup... you are on 4x 780s? I would probably not make the jump to 980 Ti's... it's a LOT of money for not that much of a performance increase. Especially since you have the super cool 6 GB 780s.

    -G



  • Cool, thanks a lot for the advice. I'll probably get the 980 Ti for the time being and save the extra money for a new monitor.



  • What about 2x GTX 980 Ti instead of one Titan? Those cards are seriously fast.



  • If I needed one THIS moment, I would look at the Ti. Nearly double the cost for nearly the same performance makes the Titan X hard to justify, I think. That said, I am probably going to wait until Pascal hits before another GPU upgrade.



  • I currently run a GTX 970 and I'm thinking of upgrading. Should I invest in a 980 Ti or go all in and get a Titan X? I mainly use DAZ and Iray for my comic series.



  • Are you using just the GPU to render or adding the CPU as well? Try letting the graphics card do it all, as the CPU may be the problem. I have a 980 and it renders fine, but the older 760 Ti is much better as it has almost twice as many CUDA cores. Running two 760 Ti cards really cuts the time down.



  • If your issue is related to memory (as it seems it is from the info you have provided) and Iray, you might want to try the new DS 4.9 Beta, as it includes updated Iray with a number of bugs fixed. Worth a try before you spend big bucks on a new system, unless you want to do that anyway of course.



  • @SinCyprine - Iray does all that on the fly for you as it loads into memory. It's really fucking great! Especially coming from Octane, where we had to do it manually.

    Side note: I would go with the Skylake build. I mean, you might as well get the new hotness :)



  • I don't know how Iray works, but you could try resizing the textures on your models. Who needs a 4000x4000 pixel iris map in a scene where the eye is 50x50 px in the final render?
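The memory argument is easy to quantify. Here is a rough footprint calculation for an uncompressed RGBA8 texture, using a power-of-two 4096 map as a stand-in for the 4000x4000 example and the standard approximation that a full mip chain adds about a third on top of the base level:

```python
# Approximate GPU memory footprint of a square, uncompressed RGBA8
# texture; a full mip chain adds roughly 1/3 on top of the base level.
def texture_mb(side_px, channels=4, mipmaps=True):
    base_bytes = side_px * side_px * channels
    total = base_bytes * 4 / 3 if mipmaps else base_bytes
    return total / (1024 * 1024)

print(f"4096x4096: {texture_mb(4096):.1f} MB")
print(f"  512x512: {texture_mb(512):.1f} MB")
```

Dropping one oversized iris map from 4096 to 512 per side frees roughly 84 MB of VRAM; across a whole character's texture set, the savings add up quickly.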

