Hi Carlos,

That all sounds about right. You've got 1.3 million residues and 1.3 million atoms, and Chimera is very inefficient with memory use for molecules. You are using about 7 Kbytes per atom to get to 10 Gbytes of memory use. I'm used to seeing about 2 Kbytes per atom, but you have the unusual case of as many residues as atoms, which could double the memory use compared to the many-atoms-per-residue scenario, and I also recently measured 1.9x the memory use for the 64-bit Mac Chimera versus the 32-bit Mac Chimera. So it all adds up. Your final "combine" command is trying to make another copy, which would take 20 Gbytes, and that may not be feasible due to paging with your 16 Gbytes of memory. You should be able to do it in nogui mode on a machine with 32 Gbytes.

I don't think you are seeing a memory leak. A program does not always shrink its memory footprint, as seen by the operating system, when it frees memory. This depends on how the memory allocator works -- it may hold on to the process's over-sized memory to handle future allocation requests by the program. This seemed more common with the 64-bit Mac Chimera than with the 32-bit Mac Chimera.

The question is what you will do if you can combine the 1.3 million atoms into a single model. You won't be able to write it to a PDB file -- those are limited to 100,000 atoms and 62 chain identifiers (a-z, A-Z, 0-9). Almost everything you can do with a single model in Chimera you'll be able to do with the 60 models. If your goal really is to get a PDB file out of this, you may need to settle for writing out 60 PDB files, one per asymmetric unit. The mmCIF format could handle the large model size, but Chimera only reads that format -- it doesn't write it. I don't know whether it is easy to apply the icosahedral symmetry in PyMOL or VMD; those programs use about 10 times less memory per atom than Chimera.

Tom
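The memory arithmetic above can be sanity-checked in a few lines (all figures are taken from the message; nothing here is measured):

```python
# Sanity check of the memory figures quoted above -- just arithmetic,
# using the numbers from the message.
atoms = 1_300_000              # ~1.3 million alpha-carbon atoms for 60 copies
kb_per_atom = 7                # observed ~7 Kbytes per atom in this case

open_gb = atoms * kb_per_atom * 1024 / 1024**3
print(f"all 60 models open: ~{open_gb:.1f} GB")

# "combine" builds a second copy of every atom, so peak use roughly doubles:
peak_gb = 2 * open_gb
print(f"peak during combine: ~{peak_gb:.1f} GB (vs. 16 GB of RAM -> paging)")
```

This lands close to the ~10 GB observed with the models open and explains why the combine step pushes a 16 GB machine into swap.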
Hello,
I am currently trying to make a pseudo-atomic resolution model of a virus capsid, but I've run into memory problems whenever I try to combine all the asymmetric units together into one large PDB file composed of 60 copies of the asymmetric unit arranged using icosahedral symmetry. My problem is that I keep running out of memory, and I was wondering whether this is normal for the dataset I'm working on. The method I use has worked fine for smaller models, so I'm sure I'm running into the memory problems because this model is particularly large, but I still wanted to check with you guys to see what else I could try.
I first start out with an asymmetric unit composed of 52 copies of protein gp24 (PDB ID: 1YUE [monomer] and 1Z1U [hexamer]). That is, I use 8 hexamers and 4 monomers. The individual chains in the unit are stripped to their alpha-carbon backbone (all side-chain atoms and solvent removed) to minimize how much memory they take up. The monomer weighs 47.274 kDa (with all residues), so that gives about 22,347 total residues for the asymmetric unit. After stripping off everything except the backbone, I ended up with a 6.4 MB PDB file. After positioning the model, I then run the command sym #1 group i,222r. This creates the 59 other copies in the expected locations.
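The stripping step described above can also be done as plain PDB text filtering before the file ever reaches Chimera. A minimal sketch (the function name and the sample records are illustrative; column positions follow the fixed-width PDB format, where the atom name occupies columns 13-16):

```python
# Sketch of stripping a PDB to its alpha-carbon trace by text filtering.
# Keeps only ATOM records whose atom name is CA; side chains, HETATM
# records (including solvent), and everything else are dropped.
def strip_to_ca(pdb_text: str) -> str:
    kept = [line for line in pdb_text.splitlines()
            if line.startswith("ATOM") and line[12:16].strip() == "CA"]
    return "\n".join(kept) + ("\n" if kept else "")

# Tiny illustrative fragment: backbone N, CA, a side-chain CB, and a water.
example = "\n".join([
    "ATOM      1  N   ALA A   1      11.104   6.134  -6.504  1.00  0.00           N",
    "ATOM      2  CA  ALA A   1      12.560   6.351  -7.000  1.00  0.00           C",
    "ATOM      3  CB  ALA A   1      13.000   7.000  -8.000  1.00  0.00           C",
    "HETATM    4  O   HOH A   2       0.000   0.000   0.000  1.00  0.00           O",
])
print(strip_to_ca(example))   # only the CA record survives
```

Pre-stripping the input file this way keeps the per-model footprint down regardless of how the viewer stores atoms.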
The problem arises when I try to combine all the units together into one model. Having all the models open takes up about 10 GB (out of my machine's 16 GB) of memory, though there is still some inactive and free memory left. When I then use the copy/combine command, the memory required rises steadily until it starts to page out to the swap space and brings the machine to a grinding halt. I tried leaving it to run overnight (18+ hours), but even then it still wasn't finished.
To recap: I'm trying to do a memory intensive reconstruction. I've already:
1) closed all the other models I'm not working on
2) closed all other programs on the machine
3) removed all the side chains so I'm only working with the alpha-carbon backbone
4) tried the chimera --nogui option to send the commands without having to render the graphics
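For reference, the nogui run in step 4 looks something like this (the file names and the exact model numbers/options on the combine line are assumptions on my part; the sym line is the command from above):

```shell
# Sketch of a batch (nogui) Chimera run of the steps described above.
# File names and the combine model range/options are assumptions.
cat > combine.cmd <<'EOF'
open capsid_asym_unit.pdb
sym #1 group i,222r
combine #1-60 name capsid
EOF
# On the remote machine (no graphics needed in nogui mode):
#   chimera --nogui combine.cmd
echo "combine.cmd written"
```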
It's still having issues, so I was wondering if you had any additional ways I can cut down on the memory usage enough to combine the units into one model. Here are some of the things I'm still thinking of checking on:
1) Running the job on a computer with no graphics card but more RAM than my desktop. I am able to SSH to the machine and send instructions through the console and the chimera --nogui option. Is it a problem that it doesn't have a graphics card?
2) Checking for a memory leak in Chimera -- I've noticed that sometimes not all the memory is given back when I manually close models in the model panel. How can I check for this, or fix it, from within a session? I've been able to deal with it by closing my session and opening it again, but it'd be best to handle it from within a session.
3) My system keeps saying I still have 4 GB of inactive RAM, though it's never used. I don't know if this is an issue with Chimera or with my machine, but I'll try to find out.
If you need any more info, please let me know.
Thank you for your help!
~Carlos Lopez
My system specs:
Mac Pro 4,1
Mac OS X 10.6.3
Quad-Core Intel Xeon, 2.66 GHz (1 processor, 4 cores)
16 GB memory
Graphics card: NVIDIA GeForce GTX 285, VRAM total: 1024 MB
Chimera: alpha version 1.5, build 30286 (2010-03-23), platform: darwin 64bit, windowing system: aqua