VOGONS


Reply 21 of 46, by vetz

User metadata
Rank l33t
F2bnp wrote:

Ok cool, keep us posted 😁
It'd be awesome to learn a bit more about the NV2 as well.

Will do. Btw, if you want to learn something about the NV2, check out this article: http://www.firingsquad.com/features/nv2/default.asp

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes

Reply 22 of 46, by Stiletto

User metadata
Rank l33t++
F2bnp wrote:

Oh yes, that would definitely be interesting. I think I can contact hxc and ask him about his contact info, would you like me to?

Nice, I know hxc through some collaboration related to the Sega Genesis Virtua Racing cart microcontroller years ago.

"I see a little silhouette-o of a man, Scaramouche, Scaramouche, will you
do the Fandango!" - Queen

Stiletto

Reply 23 of 46, by vetz

User metadata
Rank l33t

Diamond Edge 3D 1.0 driver CD now available on Vogonsdrivers:

http://www.vogonsdrivers.com/getfile.php?fileid=402

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes

Reply 24 of 46, by vetz

User metadata
Rank l33t

Ok, I've gotten in touch with him. Any suggestions for questions about the NV1 or NV2?

FYI: I'm not sure if I can make any answers public though, will have to clear that up first.

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes

Reply 25 of 46, by F2bnp

User metadata
Rank l33t

Awesome vetz! Sure, here are a few things off the top of my head:

-Did he ever work on or get to see a game done on the NV1 that never got released? I figure he must have while at SEGA. For example, were there ever any versions of Virtua Fighter 2 or other Arcade/Sega Saturn games for the NV1?

-Did he ever see the NV2 in action?

-Does he own any development tools for the NV1 and is he willing to release any of those?

-Does he still have any of his work on NV1, such as the UFO game? Does he want to release these?

Finally, on a completely unrelated matter:
-Does he have any of his work on the Sega 32x and is he willing to release any of it?

Why can't you release the conversations to the public btw?

Reply 26 of 46, by vetz

User metadata
Rank l33t
F2bnp wrote:

-Did he ever work on or get to see a game done on the NV1 that never got released? I figure he must have while at SEGA. For example, were there ever any versions of Virtua Fighter 2 or other Arcade/Sega Saturn games for the NV1?

-Does he have any of his work on the Sega 32x and is he willing to release any of it?

-Does he own any development tools for the NV1 and is he willing to release any of those?

I did ask around about the UFO game, but he already said he didn't want to talk about or share much of that and other non-public stuff from SEGA and nVidia unless he had gotten legal advice. If we can help him with this then he seems interested in talking about it.

F2bnp wrote:

Why can't you release the conversations to the public btw?

I don't think it should be any problem, but I just want to ask him about his permission first.

Last edited by vetz on 2012-10-31, 16:07. Edited 1 time in total.

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes

Reply 27 of 46, by Putas

User metadata
Rank Oldbie

I would ask about the native alpha blending capabilities of the NV1 - how much performance it loses, or which stages go unused, when rendering triangles - and for details about the lighting. Also whether it is true that Sega gave Nvidia a meaningless job after the NV2's death, and whether the AM2 team really pushed for triangles and against the NV1. Does he agree with any of the shortcomings attributed to quadrilaterals, and how would he overcome them? Could Nvidia have made programming as easy as on chips with a classic pipeline?

Reply 28 of 46, by [GPUT]Carsten

User metadata
Rank Newbie

Great stuff here as usual! Thanks for digging this (guy) up vetz! 😀

One thing unrelated to Vogons caught my attention though: seems like Gary McTaggart, who eventually wasn't hired at Sega, made his way in the industry:
http://www.mobygames.com/developer/sheet/view … veloperId,8270/

What I'd be interested in about the NV1 - still having one myself - would be whether it is possible to tell from first-hand visual experience how much of its potential was lost by not programming it natively. IOW: how much better* did the native demos look compared to what's available to the public?

* I know, very naive... but maybe he could compare their looks to games that came out later - say, something equivalent to Half-Life graphics or Morrowind or whatever.

Reply 29 of 46, by vetz

User metadata
Rank l33t

Background info for some of the questions:

On the topic of techdemos:
I came to Sega right after that card was released, because they gave us the commercial product to install in our machines and use as a placeholder until the NV2 was ready. I loved that little bastard child of a card, and the SH2 was probably my favorite chipset I ever did Assembly programming on. There were a number of demos that came with the card or that nVidia gave us separately (can't remember exactly off the top of my head) that were very misleading. They were really trying to show off the QTM (curved polys to me) technology by having each demo render as polygons and then as QTM's (quadratic texture maps… basically 9 point polygons - 4 corners, 4 sides, 1 center equally spaced) and with colored lighting. The demos were so impressive they looked like seeing an XBox 360! But they were almost all illusions, and I, personally, scolded the tech department at nVidia for trying to show off what looked like millions of polygons but were actually illusions.

For example, one demo showed you in the center of a room of colored pipes going all the way down a hallway and back. Any direction you looked there were tons of pipes, you could pivot around at 60fps or greater, and your jaw just dropped. Then there was a button to toggle polygon vs QTM, and the difference was barely noticeable, so we thought this was 1000 times faster than software rendering. Then I noticed you couldn't move from the center of the room, you could only pivot. Well, it turned out it was what we call a 'cube map': a really high resolution pre-rendered image on all six sides, which is only 12 polygons! Then if you pressed the button to use QTM's instead it was a spherical map (12 to 32 polygons) that just made the corners of the cube map a little cleaner. The illusion only works if you can't change your position in space and only let the user pivot. You basically create a very high poly scene in an art package, render perspective images on all sides of a cube and save those out as textures, which get placed on an inverted cube. So instead of millions of polys we were tricked into seeing about as few polys as you could have.

There were a number of other demos showing off the technology for rendering triangles, quadrangles, and QTM's. I don't remember them exactly, but they were all fairly simple tech demos.
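To make the "9 point polygon" idea a bit more concrete, here is a minimal CPU-side sketch of evaluating a biquadratic patch from a 3x3 grid of control points (corners, edge midpoints, center). Whether the NV1's QTMs were exactly biquadratic Bezier patches is an assumption here; this is illustrative math only, not NVLIB code or anything the hardware executed.

```c
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

/* Quadratic Bernstein basis: B0=(1-t)^2, B1=2t(1-t), B2=t^2 */
static float b2(int i, float t) {
    float u = 1.0f - t;
    if (i == 0) return u * u;
    if (i == 1) return 2.0f * t * u;
    return t * t;
}

/* Evaluate a 3x3 (9 control point) biquadratic patch at (u,v) in [0,1]^2.
 * p[row][col] holds the 4 corners, 4 edge points and 1 center point. */
static Vec3 eval_patch(Vec3 p[3][3], float u, float v) {
    Vec3 out = {0, 0, 0};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            float w = b2(i, v) * b2(j, u);
            out.x += w * p[i][j].x;
            out.y += w * p[i][j].y;
            out.z += w * p[i][j].z;
        }
    return out;
}

int main(void) {
    /* A flat 2x2 quad, bulged upward by the center control point. */
    Vec3 p[3][3] = {
        {{0,0,0}, {1,0,0}, {2,0,0}},
        {{0,1,0}, {1,1,1}, {2,1,0}},
        {{0,2,0}, {1,2,0}, {2,2,0}},
    };
    /* Tessellate the patch into a small grid of points. */
    for (int iv = 0; iv <= 4; iv++)
        for (int iu = 0; iu <= 4; iu++) {
            Vec3 q = eval_patch(p, iu / 4.0f, iv / 4.0f);
            printf("(%g, %g, %g)\n", q.x, q.y, q.z);
        }
    return 0;
}
```

Tessellating a patch like this into a handful of quads is, roughly speaking, how a smooth "curved poly" gets turned into something a rasterizer can draw.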

On the topic of Direct3D support:
I was aware of the Direct3D drivers but so many of the drivers sucked that I never tried it out. I heard the DirectX drivers were very poor. John Carmack of id fame was a primary reason for the death of NV1. Here's an article detailing his public thoughts years after the NV1: http://firingsquad.com/news/viewcomment.asp?r … l=&orig_pnum=81

Unfortunately, I have to disagree with him, as what he said was inaccurate and those things worked fine on the NV1. I was able to do some pretty amazing stuff with QTM's on an NV1 that looked as good as a PS2 game 10 years later. The mantra at the time was 'more triangles', and we had inside friends at 3DFX that we knew personally who honestly believed that. Carmack also convinced Microsoft to have Direct3D use triangles, so that pretty much killed any other innovative methods because Microsoft was so dominant. Only the PowerVR chipset went on to do some completely different hardware, which went into the Dreamcast. There was another forum post at the time of the NV1 where Carmack talked about the NV1 and it was wrong as well (claimed you couldn't do sliding textures, a very rare and often cheesy effect, or other features). Again, he was wrong.

The NV2 had some really amazing tech that we personally saw demonstrated just for our team by Curtis Priem himself (the NV chip designer and founder of nVidia). Its most amazing feature to me was the ability to do on-chip programmable decompression; practically microcoding the chip directly. You could even change the code behavior on the fly. For example, say you had four players with different colored uniforms on the screen. You could give it microcode to decompress the same image but to different colors based on the team you were rendering. He also showed how to get 100 to 1 compression and had a tool to compress an image with 300 different algorithms. Crazy stuff. Just the fact that the NV1 had on-board audio, expandable memory and a joypad expansion card demonstrates how advanced nVidia was at the time.

On the topic of NV2 failure:
I interviewed at nVidia a few years after leaving Sega, and they said the main reason the NV2 and QTM's failed is that programmers weren't ready for it and couldn't do the math. I have to agree with that, but it could have been remedied. I made a direct contact with a brilliant Russian guy in their company, Yakov Kamen, who helped me solve the math I needed for collision detection. Once I had that and figured out how to program the graphics pipeline (oh yeah, did you know it had a programmable graphics pipeline?) I was able to make some very interesting stuff with it at high frame rates (60fps or better): colored lighting, up to 32 dynamic lights, dynamic modification of textures, dynamic terrain modification, light maps, and all that without even optimizing it with any LOD models or mipmapped textures.
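For readers wondering what "32 dynamic lights" boils down to, here is a minimal, hypothetical CPU-side sketch of accumulating colored point lights at a vertex. This is not Don's NV1 code and makes no claim about how the NV1 pipeline did lighting; the linear falloff model and all names here are assumptions for illustration only.

```c
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { float r, g, b; } Color;
typedef struct { Vec3 pos; Color color; float radius; } PointLight;

#define MAX_LIGHTS 32

/* Accumulate simple distance-attenuated colored point lights at a vertex.
 * No normals or specular term -- just additive colored lighting, clamped. */
static Color light_vertex(Vec3 v, const PointLight *lights, int count) {
    Color out = {0.1f, 0.1f, 0.1f};              /* small ambient term */
    for (int i = 0; i < count && i < MAX_LIGHTS; i++) {
        float dx = lights[i].pos.x - v.x;
        float dy = lights[i].pos.y - v.y;
        float dz = lights[i].pos.z - v.z;
        float d  = sqrtf(dx*dx + dy*dy + dz*dz);
        float att = 1.0f - d / lights[i].radius; /* linear falloff */
        if (att <= 0.0f) continue;
        out.r += att * lights[i].color.r;
        out.g += att * lights[i].color.g;
        out.b += att * lights[i].color.b;
    }
    if (out.r > 1.0f) out.r = 1.0f;
    if (out.g > 1.0f) out.g = 1.0f;
    if (out.b > 1.0f) out.b = 1.0f;
    return out;
}

int main(void) {
    PointLight lights[2] = {
        {{0, 5, 0}, {1.0f, 0.2f, 0.2f}, 10.0f},  /* red light overhead   */
        {{3, 0, 0}, {0.2f, 0.2f, 1.0f},  6.0f},  /* blue light off-axis  */
    };
    Vec3 v = {0, 0, 0};
    Color c = light_vertex(v, lights, 2);
    printf("vertex lit: %.2f %.2f %.2f\n", c.r, c.g, c.b);
    return 0;
}
```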


Questions:

1. The Mechwarrior 2 port for the NV1 was cancelled while in development, but Nvidia still made a press release that it had been released. The same happened with another game, Daytona USA, with nVidia putting out a press release about its NV1 support on the 6th of May 1996 (though the game didn't retail until October the same year). The only thing that is left of NV1 support in the retail release is a mention in the readme that the game supports the Saturn gamepad on an NV1 card (which doesn't work for me). Do you know what happened with the port of this game? Everything indicates that the game had support for the NV1 while in development, but it was removed sometime before release.
Mechwarrior 2, what a brilliant game! I think this has to be one of my favorite giant robot games of all time (other great ones: Iron Soldier for Jaguar, Battletech: The Crescent Hawks' Revenge for Atari ST, MicroProse's extremely rare B.O.T.S.S., the BattleTech Centers, and Steel Battalion and MechAssault for the XBox). I almost worked on that, as Activision wanted me to do a port of it to the Sony PS1 in 4 months - and the game wasn't even complete for the PC yet! I think you can see why I turned that down. That project went through tons of people. The team had a lot of turnover. If you look at the credits for any game (i.e.: check Mobygames.com or the game manual) you'll see who worked on the game, but that's not accurate - that's who survived working on the game. Anyone who wasn't there the day it shipped was likely left off of the game, and others get forgotten or ignored. Two guys I know who worked on it were Ed Kilham, brought in to fix a lot of broken stuff I think, and Denny Thorley. Denny did a lot of mech related stuff and was CEO of FASA Interactive and later, Day 1 Studios, who did the brilliant MechAssault. I can't tell you any more than that, so you should either track down one of the programmers, or Yakov who did some ports to NV1 while at nVidia, or anyone in a Producer or higher level position who would know about any alternate SKU's that were in production (SKU is a reference to any unique platform like PC, XBox, NV1…).

2. Do you know about any other games that were going to feature NV1 support, but ended up with just software rendering in the retail release? Specifically mentioned in press releases are Absolute Zero, Destruction Derby and US Navy Fighters. What about the other SEGA games at the time, like arcade games (Virtua Fighter 2, Sega Rally Championship (came to the PC in 97 btw), etc) - were there specific plans to port these to the PC and the NV1?
When I came in to Sega (STI - Sega Technical Institute) and found out about the NV1 they had already put out the NV1 with several games. Sega said they had other 3rd parties interested and working with the NV1 but weren't sure about hardware accelerated games at all. Hardware acceleration sounded fantastic, but faster CPU's were coming out and software rendering, if heavily optimized, could be done better in many cases. Every tech guy I knew was floored when demos of Trespasser: Jurassic Park came out in 1998 (2 to 3 years after the NV1) and it was doing bump mapping, tons of trees, physics and lots of other stuff all in software. In this 1994 to 1996 period hardware acceleration was very dubious. There's lots more on that if you want more info.

To commit to a platform for any company is a very big deal! You don't say you're going to make a game for the Sega Saturn if you can't do it or you're already doing the game for Sony, as it would really tick them off. To commit to 3D accelerated hardware was sort of a half commitment. You could say to a 3D card manufacturer you'd try, and if it didn't take too much time or resources then you'd support it. Much like how audio cards or various input controllers would be supported. Many of these would be supported after the main game was done and out the door.

Many companies were interested in the NV1 and rumors would fly about as to who was supporting it. Not surprisingly, all of the game developers just wanted faster polygons and didn't want to learn about QTM's or quads, because they'd have to redo all of their art and it just wasn't worth the effort for a port. Because the NV1 was unproven, nobody wanted to do an original title on it. You also have to remember that nearly every company wrote their own rendering engine and none of them matched what a hardware library used. It's nearly a whole rewrite of a rendering engine to support a hardware card. Now while some of you younger folk think that it's no big deal to hook up polygons by using such and such calls to DirectX or OpenGL, you may not realize that neither of those rendering engines existed yet and they were in constant discussion for years before and after. On the NV1 you even had to feed it every pixel of every polygon texture every game loop! There were no buffers that auto streamed like audio or today's cards. Well, more on that some other time…

There were a number of projects we would hear about that were going to try using the NV1, but very few commitments. Any commitments were also not guaranteed and with other cards from 3DFX coming out that strained the resources of any developer even more. 3DFX was the dominant player at the time and I personally knew Gary McTaggart who wrote most of the Glide library (the equivalent of NVLib for the NV1, but for 3DFX cards). Gary's library was hands down a brilliant little piece of work that most developers loved. Even the Atari/Midway arcade games like San Francisco rush ended up using Glide and 3DFX cards because it was so clean and reliable. The NV1, on the other hand, had very cool specs but the triangle rendering performance was below that of the 3DFX cards so many developers worked on 3DFX first and NV1 second. Also, 3DFX and nVidia did a lot of the porting for other studios at the time to get their engines working. This is why Gary left 3DFX. I'm not sure what Yakov ported for nVidia or if they had other guys to help out.

So, depending on the resources nVidia expended on helping other studios port their games, that would be very telling in determining what actually was ported. Michael Hara at 3DFX was Sega's main point of contact and a great guy to work with. He, Jen-Hsun Huang (nVidia CEO), Curtis Priem (nVidia CTO then), or Yakov would know any and all NV1 or NV2 games in development or released.

3. When the NV2 tech was demonstrated before you, was it demonstrated on the NV1 or a prototype of the NV2?
It was not in hardware at that time. NVidia stated that all engineering was simulated as if in hardware and 3DFX stated this with their development as well. 3DFX did one thing NVidia didn't which probably led to the downfall of the NV2 more than anything, but that's a long and amazing story for another day.

The prototypes we saw were what they referred to as 'Fractal Compression'. It's a brilliant way to get highly compressed textures into a tiny little cache (there's more on that if you want to know). If I remember correctly, the idea was to get textures down to the equivalent space of 16 x 16 pixels such that the NV2 could pass these efficiently to the card instead of feeding every pixel in software. Yet, I digress again, ah the potential of that NV2 chip!

They demonstrated a highly rendered image of Sonic the Hedgehog as a full screen image and then zoomed in on it to show it was all made of tiny little rough renderings of the same Sonic image at something like 16 x 16 pixels. This is purely going off of memory, but it was impressive nonetheless. It reminded me of old Atari ST fractal demos using their DSP chip. Curtis Priem did the demo and explanation.
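For anyone curious what "fractal compression" means in practice, here is a toy sketch of the block-matching idea behind PIFS-style fractal image compression: every small range block of an image is stored as a reference to a larger domain block in the same image plus a brightness scale and offset. This is only meant to illustrate the general concept behind the demo described above; it is certainly not how the NV2 hardware would have implemented it, and all sizes and names are assumptions.

```c
#include <stdio.h>
#include <float.h>

#define N   32   /* image is N x N, grayscale in [0,1] */
#define RB   4   /* range block size  */
#define DB   8   /* domain block size (downsampled 2:1 to match RB) */

typedef struct { int dx, dy; float scale, offset; } RangeCode;

/* Average a 2x2 cell so an 8x8 domain block compares against a 4x4 range block. */
static float domain_px(const float img[N][N], int dx, int dy, int i, int j) {
    return (img[dy + 2*i][dx + 2*j]     + img[dy + 2*i][dx + 2*j + 1] +
            img[dy + 2*i + 1][dx + 2*j] + img[dy + 2*i + 1][dx + 2*j + 1]) * 0.25f;
}

/* For one range block at (rx, ry), find the domain block and affine intensity
 * transform (scale, offset) that best reproduces it, by least squares. */
static RangeCode encode_block(const float img[N][N], int rx, int ry) {
    RangeCode best = {0, 0, 0.0f, 0.0f};
    float best_err = FLT_MAX;
    const int n = RB * RB;

    for (int dy = 0; dy + DB <= N; dy += RB)
        for (int dx = 0; dx + DB <= N; dx += RB) {
            float sd = 0, sr = 0, sdd = 0, sdr = 0;
            for (int i = 0; i < RB; i++)
                for (int j = 0; j < RB; j++) {
                    float d = domain_px(img, dx, dy, i, j);
                    float r = img[ry + i][rx + j];
                    sd += d; sr += r; sdd += d * d; sdr += d * r;
                }
            float denom  = n * sdd - sd * sd;
            float scale  = (denom != 0.0f) ? (n * sdr - sd * sr) / denom : 0.0f;
            float offset = (sr - scale * sd) / n;

            float err = 0;
            for (int i = 0; i < RB; i++)
                for (int j = 0; j < RB; j++) {
                    float e = scale * domain_px(img, dx, dy, i, j) + offset
                              - img[ry + i][rx + j];
                    err += e * e;
                }
            if (err < best_err) {
                best_err = err;
                best.dx = dx; best.dy = dy;
                best.scale = scale; best.offset = offset;
            }
        }
    return best;
}

int main(void) {
    float img[N][N];
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            img[y][x] = (float)(x + y) / (2 * N);   /* simple gradient */
    RangeCode c = encode_block(img, 0, 0);
    printf("domain=(%d,%d) scale=%.3f offset=%.3f\n", c.dx, c.dy, c.scale, c.offset);
    return 0;
}
```

Decompression then starts from any image and repeatedly applies these block transforms until it converges, which is why the compressed form can be so tiny relative to the source.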

I don't know what other demos we saw, as there were other slides presenting what the chip would do, and I think these were all theoretical or pre-rendered. Yakov may have shown me some demos, but I can't recall. It was truly a blast to meet with them and talk about all of the potential of this chip and making it into Sega's next game console for that year. I think this was all around March or April of 1995, or perhaps as early as January of that year. This was truly the fusion of two great companies with their best minds working together, and it would all crash and burn within a couple of months! (more on that some other time).

Others at this meeting were probably Robert Morgan (Tech Director, STI), Mark Kupper (Tools Programmer), Dave Sanner (programmer), Russell Bornschlegel (programmer) and possibly some others though it was a rather small group on both sides.

I recall that Yakov sent me some of his demos either before or after to review and look at the source code. If I saw any demos you might have I'd definitely recognize them.

4. How were the alpha blending capabilities of the NV1?
Boy! That's tough to remember because I was always fighting the graphics pipeline. This amazing little card had a fully programmable pipeline, but if one connection was not correct it would totally fail or give weird results. I spent a long time just getting a stable pipeline to do what I wanted. I know I did transparent (color keyed) textures and I think there were demos of it doing blending quite well. The thing could do beautiful colored lighting and I think it even had colored blending. The well known problem with alpha blending was that it was easily the most performance-costly thing for the GPU to do. Full screen blending was a bad idea! I also remember the metrics for any video card at the time would claim fantastic results and say x millions of triangles per second, and then when I asked about the test it was actually measuring the smallest renderable triangles with the least effects and no textures in the most efficient way possible. Lots of those metrics were heavily optimized to get a maximum number, so none of them were reliable for real game use. I know it was always a battle between number of polygons versus pixel fill rate as to the performance killer. The pixel fill rate, for me, was definitely the killer on the NV1 because I wanted to texture everything. For most games they were concerned about number of polygons and polygon throughput and used a lot of flat or Gouraud shading.

I vaguely recall something about being able to alpha blend the whole screen 7 times per frame at 60fps or something like that. At the end of the day, the 'capabilities' seemed very good and powerful, the 'performance' was regrettably slow but still useful. I do remember fighting with all of the variations of source to dest blending trying to figure them out so I know it had quite capable features. I'd have to look at some code or a tech manual to know what it could fully do with alpha. I believe it had per pixel alpha, and colored vertex alpha.
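As a reference point for the "source to dest blending variations" mentioned above, this is the generic alpha blend equation that hardware of that era exposed through different source/destination factor combinations. A minimal CPU sketch, not anything NV1-specific:

```c
#include <stdint.h>
#include <stdio.h>

/* Classic "source over destination" blend for one 8-bit channel:
 * out = src*alpha + dst*(1 - alpha), with rounding. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha) {
    return (uint8_t)((src * alpha + dst * (255 - alpha) + 127) / 255);
}

/* Blend one RGBA8888 source pixel over an RGB destination pixel. */
static void blend_pixel(const uint8_t src[4], uint8_t dst[3]) {
    uint8_t a = src[3];
    dst[0] = blend_channel(src[0], dst[0], a);
    dst[1] = blend_channel(src[1], dst[1], a);
    dst[2] = blend_channel(src[2], dst[2], a);
}

int main(void) {
    uint8_t src[4] = {255, 0, 0, 128};   /* half-transparent red   */
    uint8_t dst[3] = {0, 0, 255};        /* opaque blue background */
    blend_pixel(src, dst);
    printf("blended: %d %d %d\n", dst[0], dst[1], dst[2]);
    return 0;
}
```

The performance cost he describes comes from the read-modify-write: every blended pixel needs the destination read back from the framebuffer, which is why full screen blending was such a bad idea on cards of that generation.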

5. Do you believe in any of the shortcomings of QTMs? You mention that after some help you were able to program very neat stuff for it. Would it be surmountable in the end for the majority of programmers and developers? Could nvidia have made programming as easy as chips with a classic pipeline?
You'd have to specify the shortcomings of QTMs, because there could be many things considered shortcomings, including things I personally experienced that most other programmers, including Carmack, wouldn't have thought about. A lot of the shortcomings would be gotten around just like the early shortcomings of 3D triangle rendering (i.e.: the PS1 couldn't clip triangles with perspective so you would subdivide the triangles on the fly to reduce the clipping issue, the Saturn couldn't render triangles so you would duplicate a point on the rectangle and use a pre-stretched texture such that it would look correct when displayed as a triangle, lots of other examples can be provided if you're interested).

The ones Carmack or others might point out were really unjustified:

  • Can't do scrolling textures - Baloney. You fed in every pixel so you just start from an x, y offset and feed the pixels yourself
  • Can't clip efficiently - Not true. The QTM's used 'forward texture mapping' instead of the far more common 'inverse texture mapping'. Inverse is easy to clip. With forward you just allow extra buffer space on the sides, top and bottom of the screen to reduce clipping checks. I believe nVidia did state they used extra buffer space to clip the edges and it worked very well for me. Note, the PS2 even had clipping issues with 'long polygons' as they were often called; the solution was to break them into smaller ones as well.
  • Not efficient to render - What Carmack and others forget is that you get a monstrous efficiency from forward rendering because your texture cache is so much more effective than with inverse rendering (see the sketch after this list). In forward mapping you read the texture in from left to right and top to bottom; ideal for caching. In inverse you read the texture at any angle and scale you need to and write from left to right and top to bottom; the worst case for caching! I even wrote a forward texture mapping engine for Sonic 32X using the SH2 and the speed was amazing. The worst case rendering (without clipping) took up to a maximum of twice the time it would take to just copy any chunk of memory from one location to another. I had this sprite engine (sans clipping) rendering thousands of transparent, scaled and rotated sprite rings and Sonic all in software on a single SH2 CPU (it was faster than the hardware sprite engine in the Genesis).
  • Textures took too long to render - No, they took exactly as long as the detail you provide. Mip-mapping would massively improve this (and became standard in the industry), as would polygon triangulation (either precomputed or done dynamically).
  • QTM's would invert or show the wrong side - This was minor. I didn't even account for this and only rarely would a polygon flip inside out. NVidia even pointed out that all you had to do was give the engine a 'hint' to tell it which way to render. Mainly, this is only an issue if you twist/distort the polygon too much, and all you have to do is subdivide. It's the same issue as resolving cusps.
  • QTM's produce cusps - Because of the curve rendering technique there wasn't a sophisticated way built in to realize that a curve was continuous between multiple polygons. Again, just subdivide the polygons a little bit so there aren't extreme curves. The distortion was really quite low. If you render a six sided sphere (yes, a cube with the center positions pulled out) then it would show cusps that were obvious when about 25% of the screen or greater. If you used a 32 polygon sphere you could not see a single deformation / cusp even when blown up to the full size of the screen!
  • No 3D packages modeled QTM's / curves - this was true and I had to do all of my art by hand! Literally, plotting it on a graph and adjusting values in text, crazy. The Rhino 3D editor had just started to support curves. Most packages had some sort of bezier or other spline support but these were curves that relied on the curve going through the points you adjusted. Most of us likened them to NURBS without the N, or URBS. Mathematically, I don't think that is accurate but the concept that they were 'uniform rational splines' was what they were best described as. NURBS still took time to get 3D packages correct for but they could have built reasonable tools to support QTM's or post modify triangle or quad models.
  • Not enough memory for textures - Not even close to true. Not only did we have the weakest cards doing 1mb (or was it 512k?), but you had to feed it textures all the time anyways as did later consoles. This would cause 'texture thrashing' which was expensive performance, but it's what let us break any texture limitations. Only the Nintendo 64 had severe limits with textures. If you see what I did with it you'd see that texturing had no detail or quantity limits, just performance (or pixel/texel fill) limits.
  • Blurriness or scaling problems with textures - Again, untrue. Subdivide polys and don't extrude them too much. Mip-mapping solves a lot of this. Really, this is a lame excuse for laziness and has logical workarounds.
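To illustrate the forward vs inverse distinction referenced in the list above, here is a deliberately simplified, hypothetical C sketch of the two inner loops. It ignores perspective, clipping and filtering, and is not NV1 or 32X code; it only shows why the texture read pattern (and therefore the cache behaviour) differs between the two approaches.

```c
#include <stdint.h>
#include <stdio.h>

/* Inverse mapping: walk destination pixels and compute where each one samples
 * the texture. Writes are sequential, but texture READS jump around when the
 * polygon is rotated or scaled -- the worst case for a small texture cache. */
static void inverse_map(const uint8_t *tex, int tw, int th,
                        uint8_t *screen, int sw, int sh,
                        float du_dx, float dv_dx, float du_dy, float dv_dy) {
    for (int y = 0; y < sh; y++)
        for (int x = 0; x < sw; x++) {
            int u = (int)(x * du_dx + y * du_dy);
            int v = (int)(x * dv_dx + y * dv_dy);
            if (u >= 0 && u < tw && v >= 0 && v < th)
                screen[y * sw + x] = tex[v * tw + u];
        }
}

/* Forward mapping: walk the texture in memory order (ideal for caching) and
 * compute where each texel lands on screen. The cost is holes/overlap, which
 * real hardware handles by writing a small footprint per texel. */
static void forward_map(const uint8_t *tex, int tw, int th,
                        uint8_t *screen, int sw, int sh,
                        float dx_du, float dy_du, float dx_dv, float dy_dv) {
    for (int v = 0; v < th; v++)
        for (int u = 0; u < tw; u++) {
            int x = (int)(u * dx_du + v * dx_dv);
            int y = (int)(u * dy_du + v * dy_dv);
            if (x >= 0 && x < sw && y >= 0 && y < sh)
                screen[y * sw + x] = tex[v * tw + u];
        }
}

int main(void) {
    uint8_t tex[8 * 8], screen[16 * 16] = {0};
    for (int i = 0; i < 8 * 8; i++) tex[i] = (uint8_t)i;
    /* Draw the texture at 2x scale with both mappers (no rotation). */
    inverse_map(tex, 8, 8, screen, 16, 16, 0.5f, 0.0f, 0.0f, 0.5f);
    forward_map(tex, 8, 8, screen, 16, 16, 2.0f, 0.0f, 0.0f, 2.0f);
    printf("screen[0]=%d\n", screen[0]);
    return 0;
}
```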

What other limits are often attributed to QTM's?

As for the other part, 'would it be surmountable for other programmers'… definitely! I've seen much more difficult stuff done in shaders. The issue would be to make it really easy for the average programmer so they didn't have to figure out all of the math dealing with parametric surfaces. I guarantee you that if I could do it then others could as well. We wouldn't get a decent 'middleware' package that provided these abilities until Renderware for the PS2, which is what the average game programmer would need. We wouldn't have this made easy for non-game programmers until Unity and iOS software. So, yes, definitely, if the library had been much more developed and given more time. The expectation, at that time, was that programmers at game companies should be able to figure it out and, no, that was too much time and sophistication then (and probably now).

NVidia could have made it as easy as the classic programming pipeline and better! They could have built a variety of pipelines that already worked that you just selected. Especially powerful was the fact that you could switch the pipeline on the fly. It's like shaders, where there was a steep learning curve (in the early days) because everything is in assembly language: you can do anything the instructions allow, and anything you do wrong will easily break the pipeline. When Nintendo built the GameCube they optimized and simplified this by providing 'shader combiners', so it was more like mixing and matching pipelines and, while much less flexible, it was way easier to get pipelines rendering very cool stuff. Just look at Transworld SURF: Next Wave for the GameCube to see what we did with dynamic ocean water (most say it's better than the XBox version)!

6. Adding audio, wavetable, joypad, 2D Windows acceleration and 3D acceleration to the same card must have made it more expensive for the consumer and to produce than if Nvidia had gone the way 3DFX did with the Voodoo, which featured just 3D acceleration. Do you think it was the right strategy in the end?
That's a philosophical discussion with many political and not just technical views. The right strategy would probably have been to produce both types, one for the audience who just wants rendering, the other for the one who wants an all in one package. Here are some points of view on why both are good solutions:

Everything on one card (or in one package)

  • You don't have to fiddle with tons of INI file settings! - Remember, this is before 'Plug and Play' so you had to hand configure everything on a PC. Lots of conflicts between devices. Lots of non-technical people who don't know how to configure these or make them work without bugs
  • It just works! - You know that all of the tech works together and has whatever limits it has. Also, much easier for programmers and companies to verify their stuff works completely (video, audio, input). No cross contamination
  • Limited to its capabilities. - I'm not sure if you could bypass the audio, but you could use any other input if the game could find it. Like any 'on board' solution it may not provide the best video or audio or input but you at least have something fully working.
  • Power efficiency - All of these devices on one card can use much less electricity than multiple devices.
  • Easy to install - One card in and done. Okay, in this case, two but not hard for a novice PC user!

3D accelerated video card exclusively

  • Video only - If you do only one thing you will likely do it to the best of its ability. 3DFX even took this to only doing triangles so they had better performance as well.
  • The near term future - Obviously, this was the route the cards went so it won as the solution. But now you are seeing phones and tablets and other tech as a single integrated solution and it's all the rage.
  • Overclocking and power - You can let your users overclock the chip and the manufacturers mess with power to give even more performance that the card was not originally designed for. This can compensate for a lack of performance as needed.
  • Low profile and single slot - You can reduce the size of the chips and card as a generation gets older. You also have more room in the future to use more space for more powerful processors
  • Consistent scalability (and profit) - Chip makers love following Moore's law by doubling performance on a regular basis because it improves sales. They can literally map out when you should be buying a new card based on demand of games and what a consumer expects. Note, that by being able to take up more physical card space, provide more power or use other bulking up techniques they don't actually have to innovate to keep up with Moore's law. Thus, they don't rely on innovation, but scalability. This alone is the main winning solution to the argument.

If you look at any technology it generally follows a pattern of success: do as much as you can, scale down to doing the single best thing you can, optimize and increase performance, scale, add in other optimal solutions to own a larger piece of the technical pie, rinse and repeat. So it's a fish eating bigger fish methodology. It's natural and it works. The trick is to not swallow a fish that will choke you to death which is what the NV1 / NV2 nearly did to nVidia. One generation and nine months later, the NV3 came out and was clearly dominant and nVidia has been in the lead ever since with a couple of challenges by ATI. Now, if the NV2 came out as a complete console for Sega that year, they would have dominated and probably taken over the graphics industry years earlier. They certainly would have dominated the game consoles until the PS3 or XBox. There are many other chip manufacturers who failed completely both on the fully integrated solution side (see some of Sega's arcade technology manufacturers) and on the video card only side (3DFX bought by nVidia, ATI merged with AMD)

7. The stuff that was done in the tech demos and that you programmed yourself, is there any newer released game that you can compare it to in terms of graphics?
In terms of graphics only, I'd say it compares to most PS2 games up through 2005. Now this is going off of my memory and memory tends to glorify things, so I can guarantee you that there are many PS2 games it couldn't come close to doing. You have to remember this was demo stuff I did, so there is a lot of stuff not going on such as physics, AI or other systems. From a purely graphics perspective you could get PS2 quality graphics due to using a lot fewer polys, smooth curves, really nice colored lighting and other effects. Its limits would primarily be throughput of polygons and the texel fill limits. Having programmed on the PS2 quite extensively I can say the PS2's strength is in pumping through masses of polygons, while it has difficulties with dynamic polygons, colored blending, and anti-aliasing. Now this is just comparing an NV1 to modern platforms; the NV2 would have blown the PS2 out of the water and would have been second to none until the XBox, only because an XBox can do shaders, which are extremely versatile and powerful. Then again, I could be living in a retro fantasy world and be completely wrong, so it's up for you or others to judge and not me.

8. You say the 3400XL with 4MB gave huge performance benefits, do you mean in potential performance when coding directly for the card and not the ported triangle games?
Yes, in my demos it should, and I will know soon enough! I only have the 3240 and just ordered a 3400 (can't believe I found one). What we had at Sega was the 1mb or less version, and even the technical docs stated that the performance of the low end card was about 30% less. I usually take these technical measurements with a huge grain of salt, so I was very shocked when I used a 3240 and saw double the frame rate performance on heavily optimized code. It's not an illusion, as I clearly recall this demo doing 20 to 30fps consistently, and when run on the 3240 it was 30-60fps. Likely, this was due to pushing too many pixels to the tiny 1mb video card buffer, but it could also be that the PC CPU used with the 3240 was more powerful than the one I had at Sega.

Comment on previous question: I own a 2200 (2mb with DRAM) and a 3240 (2mb with VRAM) and there is no noticeable difference in the supported games in terms of performance. I also made a quick video just for this purpose, available here (unlisted). I've also talked to the guy behind http://www.vintage3d.org/nv1.php and he confirms the same on his 3400XL when he compares it to a 3240 (which is possible by just removing the memory expansion). The 2 to 4mb increase in memory doesn't seem to affect performance in the ported games that came out, and makes very little difference in the Direct3D support (as you can see from the chart on the webpage). To me it looks like none of the software and drivers that came out took any benefit from the increased memory.
I don't think you would see a difference between the two unless the game used well over 2mb of data and used it efficiently. Normally, the advantage of more memory is storing polygons, textures, and nowadays animations up in the video card's memory so it has immediate access to them; we even did this in the SNES/Genesis days for massive performance boosts even though the cartridge memory was quite fast. Since these NV1 games are all ported games using the weakest technology in the NV1 (triangles), they are unlikely to show improvement. QTM's and nVidia tech demos should show very specific performance metrics. I think between 2mb and 4mb it would have to be a texture hungry demo. I'd have to actually do some programming on real NV1 hardware or review tech docs to know for sure.

9. Is it true that AM2 within SEGA really did push for triangles in the end as the Firingsquad article mentions?
Yes. We were told directly that SOJ just wanted a triangle based rendering engine and nothing else. They literally said 'We want Virtua Fighter 3 to run on this platform and be available by Christmas (1995)'. Being a VF fan I asked when VF3 would be available in America. They said it wasn't even out in Japan, but should be by the end of the year. If you look at the Wiki you'll see they didn't even have a demo till a year later (March 1996)!

STI was directly told by SOJ they just wanted triangles and not to have any quads (massive Saturn fail) or QTM's. Even SOA didn't want anything but triangles as figuring out QTM's seemed like stepping into another 'quad rendering' disaster. STI was allowed to explore this directly with nVidia after six months or more because SOA was tired of dealing with nVidia. SOA and SOJ just wanted a good triangle rendering engine and 3DFX was clearly better at doing that in early 1995 so there was massive pressure on nVidia to just put something out and fast.

What's interesting is the Sega Saturn wasn't even released in America yet, and Sega knew it was going to fail against the Sony Playstation - Sony had never made a game console before, but their demos were amazing - so Sega was seriously trying to get a new platform done before the end of the year! Sega had already pre-emptively killed the 32x by announcing it as the lesser bastard child of the Saturn and, if this NV1 info had gotten out, the Saturn would have been a lesser child of the NV1. Sega was going through very difficult times, as Nintendo was out of the game in 1995 (the N64 came out in 1996), Sega's great Tom Kalinske was leaving and later to be replaced by Bernie Stolar, and everyone's expectations for a 3D Sonic game were so high it was ridiculous. Those are all stories for another time though.

As for other tech at the time, there was the PowerVR chipset, and this used intersecting planes, which was bizarre to most programmers. No programmers liked that until the PowerVR let you submit everything as triangles and it would convert it to planes. A huge benefit with PowerVR is that you'd get volumetric shadows or reflections of polygons on glass/mirrors for free. The Wiki doesn't seem to describe this, so I could be wrong, but I recall the PowerVR guys telling me this is how the data was organized so it could do super efficient ray casting. Having written a volumetric ray-tracing engine in software in college I would have to say I'm probably right, as that's how I had to define my shapes. Depending on how you define volumetric shapes this could have done spheres, ovals and other primitives without any polygonal edges, but I don't think the PowerVR did this, as it is described as a form of ray-casting and not ray-tracing.

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes

Reply 33 of 46, by subhuman@xgtx

User metadata
Rank Oldbie

WOW! you are the time machine vetz 😀 never thought my topic would end up with such an exciting read like this one!

Perhaps if we manage to contact Don (AGAIN!!), we can ask him if he could share with us some of his work on the NV1.. Techdemos he made, I said? 😁

Last edited by subhuman@xgtx on 2012-11-03, 00:28. Edited 4 times in total.

Reply 34 of 46, by vetz

User metadata
Rank l33t
Davros wrote:

vetz mind if I post a link to this on beyond3d.com ?

Go ahead 😀

btw, don't only thank me. Don has been very forthcoming and I'm impressed by the amount of time he must have taken to answer each of the questions. It sure is quite a wall of text 😜

I also think he misunderstood a bit on the first question. I was asking about Daytona USA and only mentioned Mechwarrior 2 since it had the same "mysterious treatment" from nVidia.

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes

Reply 36 of 46, by Putas

User metadata
Rank Oldbie

Lots of stuff, thanks Vetz.
I don't think Carmack would have pushed Microsoft to use triangles only (it would be kinda silly no matter how famous he was); afaik at that time he just wanted no immature corporate API to compete with OpenGL.

So the NV1 uses forward texture mapping and yet lacks filtering; I would expect filtering to be easier to implement with it. He calls for mip-mapping, but doesn't it need some special workaround exactly because of forward texturing?

At the end I guess he talks about the arcade PowerVR chipset. IIRC Sega chose Real3D for the new console; only later did it turn out that 3dfx/PowerVR would be a better choice for a domestic device.

Reply 37 of 46, by vetz

User metadata
Rank l33t

I just installed and tested the NV1 (3240) on a P120 with 32MB of RAM. I was shocked to see NASCAR Racing running a lot better; it's comparable with the Matrox version and certainly better than software mode!

Can anyone theorize what the hell is going on? On our Pentium III systems we get significantly worse performance than on a P120..

Last edited by vetz on 2012-11-12, 19:15. Edited 1 time in total.

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes

Reply 38 of 46, by Putas

User metadata
Rank Oldbie

My CPU is a Duron 650, so I'm not sure what it can be; just one thing came to my mind. There was some readme saying that for optimal performance the card should be on the primary PCI bus or something like that. Diamond had an executable checking that the Edge 3D is not behind a PCI-to-PCI bridge, which always happens on an AGP motherboard.
The Millennium of course cannot help the CPU much.

Reply 39 of 46, by vetz

User metadata
Rank l33t
Putas wrote:

There was some readme saying that for optimal performance the card should be on the primary PCI bus or something like that. Diamond had an executable checking that the Edge 3D is not behind a PCI-to-PCI bridge, which always happens on an AGP motherboard.

I'm running it on a socket 7 ATX board with AGP. Just tested with a P90, and even with that CPU NASCAR runs better.

3D Accelerated Games List (Proprietary APIs - No 3DFX/Direct3D)
3D Acceleration Comparison Episodes