
Hi Grant,

We don't have the common high-end consumer desktop graphics cards from the past year in our lab right now. We have either consumer cards more than a year old (Radeon 9800 Pro, GeForce 4 Ti vintage) or rare workstation cards (Quadro FX 3400, FireGL X1). This is essentially what you'll see on our Chimera volume data rendering benchmarks web page:

http://www.cgl.ucsf.edu/chimera/benchmarks.html

In terms of handling large systems, say 50,000+ atoms, Chimera is slow at many simple manipulations: selecting, coloring, changing display style. This is because Chimera loops over the atoms in Python code instead of a compiled language like C++, C, or Fortran. Working with large models involves a lot of these simple actions, and I find them to be the main performance problem when using Chimera on large models. Other molecular graphics programs are faster.

With Chimera, a fast CPU (rather than GPU) can ameliorate this. A dual-CPU system will also help by allowing other tasks (like X windows) to run on the second processor; Chimera itself can only use one CPU.

Tom
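P.S. To make the per-atom loop cost concrete, here is a hypothetical sketch (not Chimera's actual code, and assuming NumPy is available) comparing an interpreted per-atom Python loop with the same coloring operation done as a single call into compiled code:

```python
import time
import numpy as np  # assumption for illustration; Chimera's internals differ

n_atoms = 50000  # the "large model" size mentioned above

# Per-atom loop in interpreted Python (the pattern described above):
colors = np.zeros((n_atoms, 3))
t0 = time.perf_counter()
for i in range(n_atoms):
    colors[i] = (1.0, 0.0, 0.0)  # color each atom red, one at a time
loop_seconds = time.perf_counter() - t0

# The same work as one vectorized assignment (the loop runs in NumPy's C code):
t0 = time.perf_counter()
colors[:] = (1.0, 0.0, 0.0)
vector_seconds = time.perf_counter() - t0

# On typical hardware the interpreted loop is orders of magnitude slower,
# which is why simple select/color/display-style actions feel sluggish
# when they are written as Python-level loops over every atom.
```

The GPU never enters into it: both versions do the same trivial arithmetic, so the difference is purely interpreter overhead per atom, which is why a faster CPU helps more than a faster graphics card here.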