AMD's next-gen GPU powers Crysis on an iPhone
AMD unveiled its next-generation GPU architecture at an event today aboard the USS Hornet: 2.5 teraFLOPS of floating-point power, well over twice that of the company's current high-end cards. The company also had hardware and software partners on hand to demonstrate their own applications of AMD's technology, and one of these partners was OTOY. While AMD gave a number of very impressive demos of its next-generation DirectX 11 part (detailed technical discussion to follow later this month), OTOY's demo of Crysis running on an iPhone was probably the most intriguing use of AMD's upcoming GPU that I saw all evening.

OK, I know that 90 percent of you just did a double-take: Crysis, the standard gaming benchmark for high-end 3D hardware, running on a next-gen GPU on an iPhone? Let me explain.
The return of the thin client model

I haven't covered OTOY's remote gaming service in the past because I've been skeptical of it—the firm's combination of secrecy and big talk raised red flags, and I've had plenty of company in my skepticism here at Ars. In a nutshell, OTOY claims to be able to deliver 3D games in real-time over the Internet, so that you can play, say, Crysis by using a remote render farm as a kind of terminal server that pushes out frames to a thin client that just does display and user input.

This sounds a little bit wacky, and when Ars Gaming Editor Ben Kuchera saw a working demo at the recent GDC he was impressed, yet voiced some suspicion that maybe it was all faked. After a live demo and a brief chat with two OTOY engineers, I was able to get a much better handle on how it works and what its prospects actually are.

First, the game is rendered like normal on the server machine, where frames from it are grabbed by the OTOY server-side software. Next, these frames are compressed and sent out over the network to the client, which decompresses them using a very small chunk of code (about 780K, hence the iPhone demo) and displays them in a window. User input is sent back to the server over UDP because it's tolerant of packet loss, so you don't add to latency by resending dropped packets.
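
To make that pipeline concrete, here's a minimal sketch of what the client side might look like. Everything specific in it is my own assumption: OTOY hasn't disclosed its codec or wire format, so zlib stands in for the frame codec, and the host names, ports, and length-prefix framing are invented. Only the shape matches the description above: a reliable stream for incoming frames, fire-and-forget UDP for outgoing input.

    # Hypothetical thin client modeled on the flow described above. OTOY's
    # actual codec and wire format are undisclosed; zlib stands in for the
    # video codec, and the host, ports, and framing are guesses.
    import socket
    import struct
    import zlib

    RENDER_FARM = ("render-farm.example.com", 9000)  # assumed frame stream
    INPUT_ADDR = ("render-farm.example.com", 9001)   # assumed input channel

    def recv_exact(sock, n):
        """Read exactly n bytes from a TCP stream."""
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("server closed the frame stream")
            buf += chunk
        return buf

    def poll_input_events():
        """Stub: a real client would drain the OS input queue here."""
        return []

    def run_client(display):
        # Reliable stream for frames: a torn frame is worse than a late one.
        frames = socket.create_connection(RENDER_FARM)
        # Fire-and-forget UDP for input: a lost keypress costs less than the
        # latency of retransmitting it, exactly as described above.
        inputs = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            # Assumed framing: 4-byte big-endian length, then compressed pixels.
            (length,) = struct.unpack("!I", recv_exact(frames, 4))
            display(zlib.decompress(recv_exact(frames, length)))
            for event in poll_input_events():
                inputs.sendto(event, INPUT_ADDR)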

The demos of BioShock, Grand Theft Auto, and World of Warcraft were surprisingly responsive, despite the fact that the games were being served up by machines in Los Angeles. There was some discernible lag, but not much worse than what I was comfortable with in my Quake deathmatch days. The main problem with these demos was that I could easily see a ton of compression artifacts on the large monitors that AMD was using, to the point where text wasn't very readable. Smaller monitors and fewer demo stations (for more bandwidth per station) would've put OTOY in a better light, but unless the visual experience is very significantly improved, I couldn't see many PC or console gamers settling for this in its present state.

But for casual/handheld gaming, this tech has immediate potential, as Crysis running on the iPhone demonstrated. The iPhone's screen was small enough that I couldn't discern any compression artifacts, and the gameplay was smooth and responsive. Aside from the half-baked control scheme, which was apparently hacked together at the last minute, this really was Crysis running on an iPhone.

As for OTOY's prospects for eventually reaching the hardcore, my chat with their engineer proved instructive.
Wrong side of Moore's Law?

When I was talking to the OTOY engineer, I suggested to him that his company was on the wrong side of Moore's Law, in the sense that the number of transistors that you can cheaply pack into a client machine is rising much faster than available network bandwidth. I took the implication to be that at some point, when the gap between CPU and GPU horsepower and network bandwidth is a few times greater than it is now, you'd rather host that power yourself to get a better gaming experience than deliver it over the network and deal with latency and compression artifacts.
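
To put a rough number on that gap, here's a toy compound-growth calculation. Both rates below are illustrative assumptions on my part (transistor budgets doubling roughly every 18 months, home bandwidth growing somewhere around 30 percent a year), not measurements:

    # Toy compound-growth comparison; both rates are my assumptions,
    # chosen only to show how quickly such a gap widens.
    YEARS = 10
    transistor_growth = 2 ** (YEARS / 1.5)  # 2x every ~18 months: ~100x
    bandwidth_growth = 1.30 ** YEARS        # ~30% per year: ~14x
    print(f"gap after {YEARS} years: {transistor_growth / bandwidth_growth:.1f}x")
    # -> gap after 10 years: 7.4x

Under those assumed rates, local hardware pulls several times further ahead of the pipe within a decade, which is exactly the dynamic I was pushing the engineer on.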

His response was good, and sounded plausible to me. He suggested that there's a network bandwidth threshold beyond which OTOY's technology is "good enough" to compete with a locally run game, and that this threshold is at the 20Mbit mark, the point at which OTOY can push 1080p frames across the network. As long as latency is under control, if you're playing at full 1080p then you'll be just as happy gaming remotely via, say, a set-top box as you would be playing locally on very expensive hardware.
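
That 20Mbit figure invites a quick sanity check: uncompressed 1080p video runs to well over a gigabit per second, so the codec is doing nearly all the work. The back-of-the-envelope math, with the frame rate and color depth being my own assumptions:

    # Rough bitrate math behind the 20Mbit/1080p claim. The 30fps frame
    # rate and 24-bit color are assumptions; OTOY's codec is undisclosed.
    width, height = 1920, 1080
    fps, bits_per_pixel = 30, 24
    raw_bps = width * height * fps * bits_per_pixel
    print(f"raw 1080p: {raw_bps / 1e6:.0f} Mbit/s")       # ~1493 Mbit/s
    print(f"compression needed: {raw_bps / 20e6:.0f}:1")  # ~75:1
    # The iPhone demo above is a far easier case: at 480x320 there are
    # ~13.5x fewer pixels, so comparable quality fits in under 2Mbit/s.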

The OTOY + lightweight client value proposition gets even better as server-side real-time rendering prowess increases to cinematic levels. In other words, if you can game on way more parallel computing power at 1080p over the network than you could ever afford to buy locally, then even a hardcore gamer may be willing to tolerate a little latency and loss of sharpness, at least for certain types of games.

I can see why game developers would love this idea: it solves their piracy problem in one whack, eliminates support costs, and gives them a recurring revenue stream via subscriptions. I'm still having a bit of trouble getting my head around why AMD/ATI thinks this is a good long-term idea, though, because it seems like AMD could sell a lot more silicon to gamers than it could to render farms. It's probably the case that AMD is happy to sell GPUs to both clients and servers for a while, especially if it opens up a new market like iPhone gaming to high-end GPUs.

Source: http://arstechnica.com

Category: News | Added by: CrytekMan (11.09.2009)