5870 unleashed

Elder III

Furnace Stoker

Join Date: Jan 2007

Ohio

I Will Never Join Your Guild (NTY)

R/

^^^looks like the Tesla Computing GPU..... or is it a prototype of the GT300?

Burst Cancel

Desert Nomad

Join Date: Dec 2006

Domain of Broken Game Mechanics

Projections on availability and performance are worthless for computer hardware. Until the retail parts are in our machines undergoing real-world performance testing, I'm really not interested in anything manufacturers have to say about their products. I've had more than enough of bullshit paper launches, underperforming products, and shockingly arrogant pricing - oh, and unwarranted forum hype.

And while video card upgrades are nice and all, they're only relevant if we actually have quality games to play. Until developers decide to get off their collective asses and make something worth playing instead of more terrible casual-gamer shovelware, me-too MMOs, and Generic FPS v10.4, it's going to be hard justifying more video card investment. After all, it doesn't take cutting-edge hardware to play Starcraft.

moriz

Über têk-nîsh'ün

Join Date: Jan 2006

Canada

R/

Quote:
Originally Posted by Rahja the Thief View Post

And what is this DX11 doesn't matter business? GT300 and all mainstream plans for the chip are DX11 capable, and you will find that prices will be far better than the GT200 launch.
http://www.tomshardware.com/news/Nvi...-ATI,8687.html

not that i fault nvidia for making this kind of statement. after all, they don't have a retail product using DX11.

i also agree with Burst Cancel: as i've said in a previous post, GPU speed has already made a core i7 965EE the CPU bottleneck at 2560x1600. if nvidia's cards were to drop right now, we'd see exactly the same thing. unless you have old hardware and are doing a full system upgrade right now, these cards are not worth upgrading to.

riktw

Wilds Pathfinder

Join Date: Jul 2008

netherlands

Mo/E

hello GT300
mind sending me one?
it's small compared to the HD5870 and HD5870 X2 leaked pictures.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

Quote:
Originally Posted by Elder III View Post
^^^looks like the Tesla Computing GPU..... or is it a prototype of the GT300?
GT300 is a Tesla cGPU solution for general compute. G300 is the GPU version, but they are very similar.

The version pictured above is the Tesla unit, but suffice it to say, the final retail rendering card will be extremely similar in dimensions and cooling solution (that doesn't mean all manufacturers will use the same model though).


What is important to understand about GT300 is that it isn't just another GPU. nVidia's primary focus isn't 3D gaming anymore, since the PC gaming market has shrunk and is trending downward. GT300 addresses many issues with GT200 in terms of general compute, and is a definitive move towards a multi-purpose card that not only serves as a very powerful GPU, but as a powerful CPU-like solution. OpenCL and Windows 7 DirectX Compute will be the single biggest boon that GT300 brings to the table.

GT300 is meant to be a one-time purchase for Windows 7, at least until CPU technology advances enough to surpass nVidia cGPU compute (which I don't see happening until AMD Bulldozer).

The sheer power of Fermi coupled with the amazing advancement of hardware driven code manipulation and caching is going to really show the muscle of the large die design.

Here is what you can expect from GT300 vs GT200 (I can't give you hard numbers, but I can reiterate what you will find in the full white paper release and press releases):

  • 4x the anti-aliasing and anisotropic filtering performance at resolutions of 1920x1200 or higher.
  • 10x faster CUDA parallel processing with active switching into a pure C++ environment.
  • Low power consumption (225W or less depending on model).
  • Full DX11 and tessellation support for Windows 7.
  • Fully parallel CPU/GPU transfers for 5-6x performance gains.
  • Improved efficiency (140%+) thanks to the MIMD structure and OP lineup (double-precision FP has been improved over 4x, and is 2.5x faster than even the HD5870 at its peak).

Bear in mind, though, that any press release material you will find is for GT300 (not G300, our purely graphics-oriented solution lineup).

I'm happy to answer any questions as they pertain to the press release material you can find online.

I've had 5 sections of my NDA lifted, but only pertaining to direct questions, not unlimited disclosure. Make questions specific, and I'll do my best.

Burst Cancel

Desert Nomad

Join Date: Dec 2006

Domain of Broken Game Mechanics

HPC isn't really relevant to home users either. nV has been pushing CUDA and GPGPU for a while, but Joe Sixpack still doesn't give a shit.

Honestly, about the only real benefit that mainstream consumers have gotten from recent advances in computing hardware is lower power consumption (and related noise/heat). In terms of actual raw computing strength, we've already passed the point of "good enough" for the vast majority of non-business users.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

Quote:
Originally Posted by Burst Cancel View Post
HPC isn't really relevant to home users either. nV has been pushing CUDA and GPGPU for a while, but Joe Sixpack still doesn't give a shit.

Honestly, about the only real benefit that mainstream consumers have gotten from recent advances in computing hardware is lower power consumption (and related noise/heat). In terms of actual raw computing strength, we've already passed the point of "good enough" for the vast majority of non-business users.

CUDA will come into its own with Nexus. GT300 has full native, hardware-based support for C++.
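
For anyone wondering what "C++ on the GPU" actually looks like, here is a minimal CUDA C++ sketch of my own (purely illustrative, not NVIDIA sample code or press material): a templated kernel operating on a small user-defined type, the kind of object-oriented device code the C++/Nexus push is aimed at.

Code:
// Illustrative sketch only -- assumes a CUDA toolkit and an NVIDIA GPU are available.
#include <cstdio>
#include <cuda_runtime.h>

// A small value type usable directly in device code.
struct Complex {
    float re, im;
    __host__ __device__ Complex(float r = 0.0f, float i = 0.0f) : re(r), im(i) {}
    __host__ __device__ Complex operator*(const Complex& o) const {
        return Complex(re * o.re - im * o.im, re * o.im + im * o.re);
    }
};

// Templated kernel: squares every element of an array of T, one thread per element.
template <typename T>
__global__ void squareAll(T* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main() {
    const int n = 1024;
    Complex* d = 0;
    cudaMalloc(&d, n * sizeof(Complex));                 // device allocation
    cudaMemset(d, 0, n * sizeof(Complex));               // zero-fill so the demo is deterministic
    squareAll<Complex><<<(n + 255) / 256, 256>>>(d, n);  // launch the templated kernel
    cudaDeviceSynchronize();                             // wait for the kernel to finish
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d);
    return 0;
}

(Templates and simple classes like this already compile under CUDA; what the GT300-era material claims is the hardware plumbing - unified addressing, function pointers, and so on - to cover more of the C++ language natively.)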

Improvavel

Desert Nomad

Join Date: Apr 2007

Quote:
Originally Posted by Rahja the Thief View Post
GT300 is meant to be a one time purchase for Windows7, at least until CPU technology advances enough to surpass nVidia cGPU compute (which I don't see happening until AMD Bulldozer)
Sounds expensive...

Elder III

Furnace Stoker

Join Date: Jan 2007

Ohio

I Will Never Join Your Guild (NTY)

R/

It sounds like NVIDIA will be the way to go for workstations and research behemoth computers. For the home gamer, though, I would say it will be a matter of whichever manufacturer gives the most bang per buck... which is the way it should be - competition galore = better deals for the consumer. And since I build computers on the side, the more $$$ I can save for Joe Sixpack, the more I can potentially earn myself. So the more competition the better.

*Rahjah - do you feel that G300 (graphics oriented card) will compete with or surpass the HD 5870 in terms of gaming performance at the current price level? If that's too close to any NDA terms I won't be offended by a non-answer either.

riktw

Wilds Pathfinder

Join Date: Jul 2008

netherlands

Mo/E

rahja, are these real pictures of the GT300?
http://www.hardware.info/nl-NL/news/...rt_op_de_foto/
it's a dutch site, don't try to understand what they say.

they look like someone grabbed a saw and shortened the card.
if those are fake pictures, can you post good ones? i would like to know how long the card will be.
i can't imagine it's longer than my HD4870 X2, but well.

anyways
keep the shiny chrome

moriz

Über têk-nîsh'ün

Join Date: Jan 2006

Canada

R/

hmm, 6+8 pin PCI-E connectors... this does not bode well for power consumption.

EDIT: according to semiaccurate.com, that card is a fake.
http://www.semiaccurate.com/2009/10/...mi-boards-gtc/
keep in mind that this IS semiaccurate, so take what charlie said with a huge grain of salt. nevertheless, his take on it is quite compelling.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

Quote:
Originally Posted by moriz View Post
hmm, 6+8 pin PCI-E connectors... this does not bode well for power consumption.

EDIT: according to semiaccurate.com, that card is a fake.
http://www.semiaccurate.com/2009/10/...mi-boards-gtc/
keep in mind that this IS semiaccurate, so take what charlie said with a huge grain of salt. nevertheless, his take on it is quite compelling.

He's a nutcase... no really. The card is a Tesla unit, made for rack implementation, not for standard installation as a desktop GPU unit. The final GPU (G300) unit will look slightly different from its GT300 brother.

As for the 6+8 pin connectors, the explanation is simple. When you plan a GPU's power delivery, you never aim close to peak power consumption, but rather plan for headroom above it. GT300 has a ~225W TDP, but the connectors allow for a 300W budget (75W from the PCIe slot, plus 75W from the 6-pin and 150W from the 8-pin). It really is quite power efficient, and has many hardware-based power saving features. Load balancing has been drastically improved as well, so if the card is pushed to full load for long periods of time (which is most unlikely given the sheer computing power), the total power consumption won't be off the charts.

As for the validity of those Dutch photos, it's hard to tell. The card layout and design for the Tesla unit is exactly like that, yes, but whether those are real cards is, to be frank, unknown. But I'll put it this way: if those are fake, they look almost exactly like the release Tesla unit (not the GPU unit, G300).


Does that ^ help?

Quote:
Originally Posted by Elder III
*Rahjah - do you feel that G300 (graphics oriented card) will compete with or surpass the HD 5870 in terms of gaming performance at the current price level? If that's too close to any NDA terms I won't be offended by a non-answer either.
No, I'm happy to comment. The G300 GPU will surpass the HD5870 in terms of gaming performance based on ATi's pricing model at this time. That doesn't mean ATi won't change their pricing model once their lineup is fully released, or once nVidia (getting out of the habit of saying "we" is tough...) releases their G300 GPU variant lineup. G300 is cheaper to produce than GT200 was by a significant margin, and as a result, you can expect a large chunk of those savings to be passed on to you, the consumer. GT200's release pricing was also horribly wrong, so take the past with a grain of salt.

moriz

Über têk-nîsh'ün

Join Date: Jan 2006

Canada

R/

um... why does the card have... wood screws on the heat exhaust? i thought the heat shroud should be hollow on that end... that kinda implies the entire shroud is made out of a single block of plastic.

no rahja, this is almost definitely a fake.

EDIT: in case you still don't trust charlie, here's techpowerup's take:
http://www.techpowerup.com/105052/NV..._Unveiled.html

kinda goes against what you said about "many functioning samples" available, doesn't it? if this were the case, why would nvidia show a dummy board? and a badly made dummy to boot.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

Quote:
Originally Posted by moriz View Post
um... why does the card have... wood screws on the heat exhaust? i thought the heat shroud should be hollow on that end... that kinda implies the entire shroud is made out of a single block of plastic.

no rahja, this is almost definitely a fake.

EDIT: in case you still don't trust charlie, here's techpowerup's take:
http://www.techpowerup.com/105052/NV..._Unveiled.html

kinda goes against what you said about "many functioning samples" available, doesn't it? if this were the case, why would nvidia show a dummy board? and a badly made dummy to boot.

Sigh... the board he held up isn't a "fake". Because showcased samples are most likely destroyed due to EMD, SD, etc., they are rarely working boards, and are at times altered for press events to prevent damage. GPU boards are not meant to be manhandled by hundreds of people and expected to survive. They are made to be installed in a stationary case (carefully, I might add) and stay there until they are removed to be cleaned or they die. The demo board isn't a fully functional board, but it is an accurate mockup of the release boards. Taking first-series production boards away from the debugging and testing units would be foolish just so you could have 100% real "eye candy". The demo shown is running on working GT300 silicon, and this entire "it's fake" business is getting rather silly.

We are still several months away from a retail launch, and people are complaining that we aren't passing them out as party favors or in nicely organized celeb gift sacks. Sorry, this isn't the Emmys... get over it. In reality, GT300 is alive and well, not fake, and 100% working. nVidia doesn't have many of the final release boards floating around (I certainly won't be getting one) at this point in time, since they have only received about 3 bins' worth. Suffice it to say, just stop. It's real, the performance is real, and trying to debunk reality is rather dumb.

I get it, you like ATi, but enough already. This nVidia bashing is absolutely out of hand. ATi has the upper hand with a launch months ahead of ours. It's fine and dandy they got out ahead of us, and they will reap early adopter benefits this time around. However, that doesn't mean that GT300 isn't going to outperform ATi, and isn't going to really shake up the concept of what a GPU can do.

If I could choose a card for Windows 7, I'd choose GT300 over RV870 any day of the week, and that is said without bias. More PC-centric features are found on GT300, and Eyefinity, while very cool, isn't practical. So, that's the bottom line. Which one will make your PC run faster, cooler, and more stable? GT300, hands down.

moriz

Über têk-nîsh'ün

Join Date: Jan 2006

Canada

R/

i am not bashing nvidia, and i don't particularly like one brand over the other. i'll always put my money towards whatever happens to be the best buy at my time of purchase.

i am not doubting that the GT300 exists. i also have no doubt that it will be better than the RV870. i was just saying that the card put on display is most definitely not a working sample, or even a disabled sample. it's a mockup, and a poorly done one at that. i'd have no problem with it if it was actually presented as a mockup. unfortunately, it wasn't. your CEO clearly held that thing up and said: "this puppy here, is Fermi." no, it's not.

i have great respect for nvidia and what it's trying to accomplish... but pulling this kind of stunt is just stupid. i'd rather they show nothing at all.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

The cooling system and PCB specs are accurate, and there was a sealed GPU inside of that sample. It's a bit beyond a mockup, but not a working card.

Bear in mind, this is months prior to launch, so this isn't abnormal for any company.

Quaker

Hell's Protector

Join Date: Aug 2005

Canada

Brothers Disgruntled

My question is - does nVidia (or ATI) seriously think that GPGPU functions are going to be the "driving force" in graphics card purchases? Can they be that dumb? Do they really think that Joe Average, who doesn't use anywhere near the full potential of their computer now, is going to care if their GPU can do anything else?
Will someone who actually has a use for GPGPU apps actually buy a GPU to do it, or simply buy another additional PC? Will those few people who would actually use/want/need to do GPGPU functions actually be a significant enough user base to be a "driving force"? I seriously doubt it.

Elder III

Furnace Stoker

Join Date: Jan 2007

Ohio

I Will Never Join Your Guild (NTY)

R/

Quote:
Originally Posted by Quaker View Post
My question is - does nVidia (or ATI) seriously think that GPGPU functions are going to be the "driving force" in graphics card purchases? Can they be that dumb? Do they really think that Joe Average, who doesn't use anywhere near the full potential of their computer now, is going to care if their GPU can do anything else?
Will someone who actually has a use for GPGPU apps actually buy a GPU to do it, or simply buy another additional PC? Will those few people who would actually use/want/need to do GPGPU functions actually be a significant enough user base to be a "driving force"? I seriously doubt it.

My thoughts are running parallel to Quaker's... I recently have been reading up on the TESLA cGPU (GT300 right?) and if the next video card out from NVIDIA (G300 if I have my cookies straight) has a fraction of the computing power (not 3D graphics power) then it will be amazing. Cuda and ATI's equivalent (sorta) are both awesome and truly amazing when I think of what could hypothetically be accomplished with these new(ish) technologies. ***However, John & Jane Smith buy new video cards to play video games and sometimes to work with 3D design programs. Other than the workstation user, scientist and researcher (and they are a small % of the market no doubt)... who will use these wonderful features, or take them into consideration when buying?

Improvavel

Desert Nomad

Join Date: Apr 2007

Quote:
Originally Posted by Elder III View Post
My thoughts are running parallel to Quaker's... I recently have been reading up on the TESLA cGPU (GT300 right?) and if the next video card out from NVIDIA (G300 if I have my cookies straight) has a fraction of the computing power (not 3D graphics power) then it will be amazing. Cuda and ATI's equivalent (sorta) are both awesome and truly amazing when I think of what could hypothetically be accomplished with these new(ish) technologies. ***However, John & Jane Smith buy new video cards to play video games and sometimes to work with 3D design programs. Other than the workstation user, scientist and researcher (and they are a small % of the market no doubt)... who will use these wonderful features, or take them into consideration when buying?
They don't have a CPU.

What do you think will happen to nVidia when it is time for Intel and AMD (if it survives - I hope so; monopolies suck money out of my wallet, see for example GT200 pricing back when nVidia thought ATI was lame) to integrate the CPU and GPU on the same die?

nVidia needs to come out with something to survive in the long term.

On the other hand, I won't be paying for extra GPGPU features I won't utilize. The company that gives me the most GAME performance around my budget, at the cheaper price, will get my money.

moriz

Über têk-nîsh'ün

Join Date: Jan 2006

Canada

R/

improvavel is correct. nvidia is pushing GPGPU because they don't have a CPU. not that they don't have the expertise to make one, though; it's because they don't have an x86 license. there was a push some time ago by nvidia to buy VIA, except their x86 license is non-transferable, so nvidia wouldn't be able to use it to make a CPU.

as for myself, i'm going to skip this generation entirely. my HD4890 is more than powerful enough for whatever i throw at it. my next upgrade would be a platform one, since my G33+E7200 platform is getting seriously long in the tooth. i'm also vaguely suspecting that it's beginning to bottleneck my graphics card's performance. not having DX11 does not detract from the windows 7 experience at all for me, since i've been using it for months now and haven't missed DX11 one bit.

my next graphics card upgrade will likely fall around july of next year. here's hoping both AMD and NVIDIA can put out some compelling products around that time.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

OpenCL + Windows7 + your PC = screaming performance. DirectX Compute = godly too. G300 won't just make games run at their highest settings, but also Windows7 and many many related apps.

Improvavel

Desert Nomad

Join Date: Apr 2007

Quote:
Originally Posted by Rahja the Thief View Post
OpenCL + Windows7 + your PC = screaming performance. DirectX Compute = godly too. G300 won't just make games run at their highest settings, but also Windows7 and many many related apps.
Isn't the RV870 capable of that as well?

KZaske

Jungle Guide

Join Date: Jun 2006

Boise Idaho

Druids Of Old (DOO)

R/Mo

Quote:
Originally Posted by Rahja the Thief View Post
OpenCL + Windows7 + your PC = screaming performance. DirectX Compute = godly too. G300 won't just make games run at their highest settings, but also Windows7 and many many related apps.
Sounds like I will be staying with nVidia when I replace this computer next summer.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

Quote:
Originally Posted by Improvavel View Post
Isn't the RV870 capable of that as well?
Not to the levels that GT300 (and G300) are capable of. You are probably looking at a 1000%+ performance difference due to the nature of the GT300 chip and its hardware-bound technologies versus the software-driven parts of ATi's RV870. Mind you, that number is pulled out of my ass, but I am trying to suggest the difference will be very noticeable.

Also remember, GT300 can natively run C++, and CUDA will see a much larger section of the market lean towards its use with Nexus.

If you are a programmer/software junkie, you might find this bit of info interesting, to say the least (though you won't be able to access the really juicy developer logs of Nexus and its abilities, you will find some very cool stuff here):
http://developer.nvidia.com/object/nexus.html

Enjoy.

Improvavel

Desert Nomad

Join Date: Apr 2007

Quote:
Originally Posted by Rahja the Thief View Post

Also remember, GT300 can natively run C++, and CUDA will see a much larger section of the market lean towards its use with Nexus.
"Natively". That would be interesting to see. It has a new unified to access memory (from what I get from the white paper). So apparently it can use OOP language. It still needs to be compiled to Femri, though.

From the Fermi white paper:

Quote:
Fermi is the first architecture to support the new Parallel Thread eXecution (PTX) 2.0 instruction set. PTX is a low level virtual machine and ISA designed to support the operations of a parallel thread processor. At program install time, PTX instructions are translated to machine instructions by the GPU driver.
Quote:
Fermi and the PTX 2.0 ISA implement a unified address space that unifies the three separate address spaces (thread private local, block shared, and global) for load and store operations. In PTX 1.0, load/store instructions were specific to one of the three address spaces; programs could load or store values in a specific target address space known at compile time. It was difficult to fully implement C and C++ pointers since a pointer’s target address space may not be known at compile time, and may only be determined dynamically at run time.

With PTX 2.0, a unified address space unifies all three address spaces into a single, continuous address space. A single set of unified load/store instructions operate on this address space, augmenting the three separate sets of load/store instructions for local, shared and global. The 40-bit unified address space supports a Terabyte of addressable memory, and the load/store ISA supports 64-bit addressing for future growth.
Quote:
Conclusion

The importance of data locality is recognized through Fermi’s two level cache hierarchy and its combined load/store memory path. Double precision performance is elevated to supercomputing levels, while atomic operations execute up to twenty times faster. Lastly, Fermi’s comprehensive ECC support strongly demonstrates our commitment to the high-performance computing market.

On the software side, the architecture brings forward support for C++, the world’s most ubiquitous object-orientated programming language, and Nexus, the world’s first integrated development environment designed for massively parallel GPU computing applications.
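
To make the unified address space point concrete, here is a small sketch of my own (an illustration under my own assumptions, not code from the white paper): a single __device__ helper that takes a plain float* and is handed a pointer into shared memory in one call and into global memory in the next. This is exactly the situation the quoted passage describes - pre-2.0 PTX had to know a pointer's target address space at compile time, while unified load/store lets one generic pointer path serve both.

Code:
// Illustrative sketch only -- assumes a CUDA toolkit and an NVIDIA GPU are available.
#include <cstdio>
#include <cuda_runtime.h>

// Generic helper: scales whatever the pointer refers to.
// With a unified address space it does not matter whether p
// points into shared or global memory.
__device__ void scaleInPlace(float* p, float factor) {
    *p *= factor;
}

__global__ void demo(float* gdata, int n) {
    __shared__ float tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    tile[threadIdx.x] = gdata[i];            // stage the value in shared memory
    scaleInPlace(&tile[threadIdx.x], 2.0f);  // pointer into shared memory
    scaleInPlace(&gdata[i], 0.5f);           // pointer into global memory
    gdata[i] += tile[threadIdx.x];           // combine and write back
}

int main() {
    const int n = 256;
    float h[256];
    for (int i = 0; i < n; ++i) h[i] = (float)i;

    float* d = 0;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    demo<<<1, 256>>>(d, n);
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);

    printf("h[1] = %.1f (expected 2.5)\n", h[1]);  // 1*2 + 1*0.5
    return 0;
}

Older CUDA could often compile code like this too, but the compiler had to resolve the address space of each pointer statically; the white paper's point is that with PTX 2.0 the generic, C++-style case becomes the normal case in hardware.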

moriz

Über têk-nîsh'ün

Join Date: Jan 2006

Canada

R/

sounds like something my dad would be interested in. he does ocean current modelling, and currently relies on expensive mainframes/computing clusters for his simulations, which also take a lot of time. a single $2000 fermi tesla might potentially reduce the computational time significantly while cutting back on costs.

Improvavel

Desert Nomad

Join Date: Apr 2007

http://www.xbitlabs.com/news/video/d...cs_Cards.html#

Quote:
Nvidia Corp. has confirmed in a brief media interview that it will be able to cut-down the Fermi-G300 graphics processors in order to address certain specific markets and price-points. The move is natural for all graphics chips designers, but this time Nvidia openly admits that many of implemented capabilities will hardly offer benefits for the consumer right away.
Quote:
“We're not talking about other (chips) at this point in time but you can imagine that we can scale this part by having fewer than the 512 cores and by having these cores have fewer of the features, for example less double-precision,” said Mr. Dally, who did not explain how it is possible to reduce double-precision floating point performance without decreasing single-precision point speed, something which is needed by video games. In fact, Mr. Dally’s comment may imply that non-flagship Fermi derivatives will have not only be slower in terms of performance, but will be seriously different in terms of implementation.
Now, this, if true, seems much more realistic.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

Yes, Fermi (GT300) is a Tesla unit. G300 will be the graphics derivative, but will still have many of the cGPU features needed for OpenCL and DirectX Compute. Double precision isn't needed for much in relation to gaming (minus a few new forms of AA and AF that haven't seen major markets yet), and ECC isn't needed for GPU-driven work in a graphics environment.

Fermi's scaling allows us to be much more mobile in marketplace segments than we were with GT200. You will see a much larger and more robust lineup of GPUs spanning many customer segments and price points, which is a great thing for the consumer in the end in regards to performance per dollar.

Fermi is what GT200 should have been, but we needed to see what a very large chip design could do in real world environments first. GT200 was successful where it needed to be, and G300 will fill in the areas of GT200's weakness.

Improvavel

Desert Nomad

Join Date: Apr 2007

Quote:
Originally Posted by Rahja the Thief View Post
Fermi's scaling allows us to be much more mobile in marketplace segments than we were with GT200. You will see a much larger and more robust lineup of GPUs spanning many customer segments and price points, which is a great thing for the consumer in the end in regards to performance per dollar.
I hope so. I don't like having to pay more than €250 for a graphics card. Gotta love the Ti4200 and the ATI 9500 Pro that would unlock to a 9700 Pro.

And before the 4850, the last good midrange card was the 6600 GT (although the 7600 GT was quite decent too). I was starting to miss those times.

KZaske

Jungle Guide

Join Date: Jun 2006

Boise Idaho

Druids Of Old (DOO)

R/Mo

Quote:
Originally Posted by Improvavel View Post
I hope so. I don't like to have to pay more than €250 for a graphics card Gotta love the Ti4200 and the ATI 9500 Pro that would unlock to 9700 Pro

And before the 4850, the last good midrange card was the 6600 GT (although the 7600 GT was quite decent too). I was starting to miss those times.
I really miss the old drivers (pre-100). Small, clean, and they actually worked well. These new drivers have too many dependencies and are a prime example of broken bloatware.
Why can't they break the driver down into chunks, allowing you to install only the parts you want?

Zomgyogi

Desert Nomad

Join Date: Apr 2007

In a park

Quote:
Originally Posted by Rahja the Thief View Post
Though, the HD5850 is a great card for the money, hands down. An excellent buy, especially if you plan to run 2 in Crossfire! Big kudos to AMD for the 5850, and a golf clap for the 5870.
So buying a 5850 is a good buy over nVidia at the price point?

Also how many "gaming" DX11 cards will nVidia be releasing?

Ec]-[oMaN

Desert Nomad

Join Date: May 2005

Toronto, Ont.

[DT][pT][jT][Grim][Nion]

W/

Quote:
Originally Posted by Zomgyogi View Post
So buying a 5850 is a good buy over nVidia at the price point?

Also how many "gaming" DX11 cards will nVidia be releasing?
Based on previous releases, I'm skeptical Nvidia will or can offer a part below the $300 mark that outperforms anything ATI is offering or will offer for the next 6 months.

The 5850 doesn't seem like a bad choice at all. You can also buy the 5850 now, whereas we don't know when we can even get our hands on a sub-$300 DX11 Nvidia part. If you stay just a tad up to date with upcoming releases, you can always sell the 5850 just before the competition releases something comparable or better at that price point.

Zomgyogi

Desert Nomad

Join Date: Apr 2007

In a park

Well, I already bought one and they are beasts! They run very cool, but the fan is still pretty loud above 45%. It's running load temps about 5°C higher than my 4890's idle temps.

Highest so far... 850/1200

moriz

Über têk-nîsh'ün

Join Date: Jan 2006

Canada

R/

wow, very impressive overclock there. that should give you almost 5870 performance except in shader-heavy instances.

Elder III

Furnace Stoker

Join Date: Jan 2007

Ohio

I Will Never Join Your Guild (NTY)

R/

Nice OC and nice temps. My stock-cooled 4850 often hits 80 degrees, and is about half as powerful.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

The 5850 is a great buy, no doubts there. G300 will be for higher end computers, to be perfectly honest. The release will be intended to smash the 5870 and 5870X2 into the ground. Since, by current numbers, G300 should be able to at least match the 5870X2, the dual derivative card should be the absolute pinnacle of performance, but as usual, don't expect a low price on a dual card solution. If you are thinking about buying a 5870, wait. If your budget only allows a 5850, invest now and get one. Hope that helps.

Elder III

Furnace Stoker

Join Date: Jan 2007

Ohio

I Will Never Join Your Guild (NTY)

R/

Quote:
Originally Posted by Rahja the Thief View Post
The 5850 is a great buy, no doubts there. G300 will be for higher end computers, to be perfectly honest. The release will be intended to smash the 5870 and 5870X2 into the ground. Since, by current numbers, G300 should be able to at least match the 5870X2, the dual derivative card should be the absolute pinnacle of performance, but as usual, don't expect a low price on a dual card solution. If you are thinking about buying a 5870, wait. If your budget only allows a 5850, invest now and get one. Hope that helps.
That sounds about like what I expected... frankly though, with the games on the market now or likely to come out this fall, I don't see any need to get a 5870 over a 5850 - there's nothing out there that the cheaper card can't handle, and if you have a Crossfire-capable mobo you are well set for 2-3 years of gaming, in my ever so humble opinion.

I am curious to see how much the G300x2 (or whatever it will be called) can beat a 5870x2 by, and what the price difference is... time will tell.

Improvavel

Desert Nomad

Join Date: Apr 2007

Quote:
Originally Posted by Rahja the Thief View Post
The 5850 is a great buy, no doubts there. G300 will be for higher end computers, to be perfectly honest. The release will be intended to smash the 5870 and 5870X2 into the ground. Since, by current numbers, G300 should be able to at least match the 5870X2, the dual derivative card should be the absolute pinnacle of performance, but as usual, don't expect a low price on a dual card solution. If you are thinking about buying a 5870, wait. If your budget only allows a 5850, invest now and get one. Hope that helps.
So shouldn't we expect smaller G300 variants?



By the time G300 arrives, the 5850 should be sub-$200 and the 5870 around $250, with the 5870X2 in the $400 market.

And then rumours have the 6xxx series out by the end of 2010, so if nVidia takes as long to get a 2xG300 out as it did the GTX295, it might have to compete with the 6xxx series.

Grabbing popcorn to enjoy watching the engineering teams fight it out.

Lord Sojar

The Fallen One

Join Date: Dec 2005

Oblivion

Irrelevant

Mo/Me

I doubt ATi will release the 6000 series by the end of 2010. That is assuming it is a new architecture, and it better be, or they have no chance of keeping this momentum going. They will need a brand new architecture to continue this.

The 5770 is a great card too, btw, though the GTX260 is a better choice. The 5750 certainly takes the cake on budget cards, seeing as how it is priced the same as the GTS250 and kicks its ass. However, the GTX260 can be had for the same cost as a 5770, and it beats it, hands down.

KZaske

Jungle Guide

Join Date: Jun 2006

Boise Idaho

Druids Of Old (DOO)

R/Mo

This is interesting, not a video card but the first Tesla on the market? What do you think of this thing Rahja?
http://www.tigerdirect.com/applicati...7BBTkwCjCECjCE