
I am struggling a bit and would be interested in any pointers from the knowledgeable folk here. It seems that I cannot use the mouse to get arbitrarily close to a surface in orthographic mode, but I can in perspective mode. In orthographic mode (Viewing -> Camera -> Projection), if I try to zoom in, the camera instead moves in until the front surface clips, but without much actual zooming. (This is true for the different "Center of Rotation" methods.) In the Viewing -> Side View panel, the width of the field is represented by red lines that I cannot control but which I really want to move close together.

According to the help for the Viewing -> Camera panel:

horizontal field of view (allowed range 1.0-179.0°) - how much perspective is used; does not affect the orthographic projection. The vertical field of view will change along with the horizontal field of view, but their exact relationship depends on the aspect ratio of the graphics window.

In fact, with the nightly builds of 1.4 this parameter *does* affect the orthographic projection, and does zoom in (or out) without clipping. Is there an updated explanation of how this works, and whether it has changed since 1.3 as I seem to find? Am I looking in the wrong place for help on nightly builds?

Thanks,
James Conway

Hi James,

I see that sometimes zooming in close in orthographic projection is not possible in the current daily build, while it works fine in perspective. That is a bug and I'll create a bug report.

Zooming behavior has changed between Chimera 1.3 and 1.4. Here are some details that may help.

Zooming moves your view closer to a point at a certain depth in the center of the screen. What depth? In Chimera 1.3 the point to zoom in on was a hidden parameter (called "center") not shown in the Camera panel. It was set when you used the focus, window, or center commands, and when you first opened a model it was set to the center of the bounding sphere. In 1.4 I changed this to zoom in toward the center of rotation.

The default location for the center of rotation also changed from 1.3 to 1.4. In 1.3 it was the center of the bounding sphere of the displayed models. In 1.4, if you are zoomed in, it rotates about the closest visible object in the center of the screen, and if you are zoomed out, it uses the older center of the bounding sphere. By "zoomed in" I mean that the bounding sphere of the displayed models is at least twice as big as the window width.

The result is that in the 1.4 daily builds you should zoom in on the closest visible object in the center of the window. In 1.3 you would zoom in on a point that might be in front of the closest visible object (so you could never get close) or behind it (so you would collide with the object when zooming).

What I've described seems to be working correctly for perspective projection (the default projection mode). Something seems slightly wrong (but not too far wrong) for orthographic projection, where you can't get as close as you want.

Last week I fixed a bug that may be causing your problem: the center of rotation point was not calculated correctly and was put in front of the closest visible object in the center. That would prevent zooming in (in both perspective and orthographic projection). It only occurred when multiple models were shown in certain spatial arrangements (a model in front had no object in the center of the window, but the center of its bounding box was in front of the closest visible object). That was fixed September 8.

Tom
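The center-of-rotation rule described above can be sketched in a few lines. This is purely illustrative pseudologic of the described 1.4 behavior, not actual Chimera code; all names here are made up.

```python
def choose_zoom_target(bounding_sphere_diameter, window_width,
                       front_center_point, bounding_sphere_center):
    """Pick the point that mouse zooming moves toward (1.4 rule as described).

    "Zoomed in" means the displayed models' bounding sphere is at least
    twice as big as the window width; in that case zoom toward the closest
    visible object at the center of the screen (if any), otherwise fall
    back to the center of the bounding sphere (the 1.3 behavior).
    """
    zoomed_in = bounding_sphere_diameter >= 2 * window_width
    if zoomed_in and front_center_point is not None:
        return front_center_point
    return bounding_sphere_center

# Zoomed in with an object under the screen center: head for that object.
print(choose_zoom_target(10.0, 3.0, (1.0, 2.0, 3.0), (0.0, 0.0, 0.0)))
# Zoomed out: head for the bounding-sphere center.
print(choose_zoom_target(4.0, 3.0, (1.0, 2.0, 3.0), (0.0, 0.0, 0.0)))
```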

Hello all,

I have recently solved the structure of a novel membrane protein and have been using Chimera quite a bit for my figures and analysis. In my free time I have been playing around with the 3D animation package Blender, which I must say is a total blast and makes fantastic movies and animations. However, exporting from Chimera to Blender is a little tricky: when using the VRML2 or X3D formats, the actual 3D mesh vertices import quite well, but the colors are not imported, and imported items such as cartoon helices seem to have totally random materials assigned to nonsensical segments (instead of materials assigned by chain, for example). Ideally I would like to perform the coloring operations in Chimera and export them, because it's obviously an order of magnitude easier to select residues, chains, and monomers with Chimera. Or at least have the grouping of objects imported correctly. This is really just for fun, so I thought I'd ask if anyone here has any experience with importing Chimera models into Blender. Of course I have no idea how VRML2 actually works. Thanks all.

Andrew Waight
Wang Lab
Skirball Institute of Biomedical Sciences
NYU School of Medicine

P.S. Extra Credit: Does anyone have any idea how to export the calculated electrostatic surface into UV mapping for a Blender object (the surface mesh, obviously)?

Hi Andrew,

Chimera exports the surface vertex colors and the individual atom and ribbon residue colors in the X3D and VRML output. If you raytrace a scene with Chimera's File / Save Image... it first exports X3D and then converts that X3D to input for POV-Ray, which does get all the colors right. So I think your question calls for some expertise with Blender, to find out what color information it can handle when importing those files. I don't know anything about Blender.

Tom
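For anyone debugging a lossy import, it can help to know roughly how per-vertex colors appear in an X3D file: per the X3D specification, an IndexedFaceSet carries a child Color node with colorPerVertex="true". The snippet below parses a minimal hand-written example (not actual Chimera output) to show where the color data lives.

```python
import xml.etree.ElementTree as ET

# A tiny hand-written X3D fragment: one triangle with a red, a green,
# and a blue vertex. Real exported files are larger but use the same nodes.
x3d = """<X3D><Scene><Shape>
  <IndexedFaceSet coordIndex="0 1 2 -1" colorPerVertex="true">
    <Coordinate point="0 0 0 1 0 0 0 1 0"/>
    <Color color="1 0 0 0 1 0 0 0 1"/>
  </IndexedFaceSet>
</Shape></Scene></X3D>"""

root = ET.fromstring(x3d)
ifs = root.find(".//IndexedFaceSet")
# The Color node holds a flat list of RGB floats, three per vertex.
colors = [float(v) for v in ifs.find("Color").get("color").split()]
vertex_colors = [tuple(colors[i:i + 3]) for i in range(0, len(colors), 3)]
print(vertex_colors)
```

If an importer drops this Color node (or ignores colorPerVertex), the mesh arrives with whatever default material the importer assigns.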

On Tue, 15 Sep 2009, Waight, Andrew wrote:
This is more of a Blender question than a Chimera question, in the sense that it fails when you import the file into Blender, rather than when it is exported from Chimera. So normally I would recommend asking in a Blender discussion group something like, "Why does Blender fail to import this VRML2/X3D file when it displays just fine in my browser?" But the X3D importing that Blender 2.46b does is really bad: it only tries to handle the subset of X3D that corresponds to VRML97. I can't tell why the surface colors are lost. Matthieu Delanoe has a much better X3D importer at <http://matthieu.delanoe.googlepages.com/blenderX3D.html>. It needs to be updated for Python 2.6 by adding "# -*- coding: latin-1 -*-" as the second line to get it to work. And again, the colors are lost even though there is code for them in the import script.
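The Python 2.6 fix mentioned above is just a source-encoding declaration (PEP 263), which Python only honors on the first or second line of a file. A small helper like the following (my own sketch, not part of the importer) shows where the line goes; the "#!BPY" marker is the standard first line of a Blender 2.4x script.

```python
CODING_LINE = "# -*- coding: latin-1 -*-"

def add_coding_line(source_text, coding=CODING_LINE):
    """Insert a PEP 263 coding declaration as the second line, if absent."""
    lines = source_text.splitlines()
    if len(lines) < 2 or lines[1] != coding:
        lines.insert(1, coding)
    return "\n".join(lines)

script = "#!BPY\nimport Blender\n"
print(add_coding_line(script))
```

Running it twice is harmless: the helper leaves an already-patched script unchanged.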
P.S. Extra Credit: Does anyone have any idea how to export the calculated electrostatic surface into UV mapping for a Blender Object (the surface mesh obviously)?
If the surface has no holes (not true in general for molecular surfaces), then the mapping would be possible. Not knowing Blender, can you say why you want the UV mapping? -- Greg
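One workaround that avoids a geometric unwrap entirely (my own suggestion, not a Chimera or Blender feature): since the electrostatic potential is a single scalar per vertex, it can be encoded directly as the U coordinate into a one-dimensional color-ramp texture, with V held constant. No hole-free surface is required because no unwrapping happens.

```python
def potential_to_uv(potentials, vmin=-10.0, vmax=10.0):
    """Map per-vertex scalar potentials to (u, v) texture coordinates.

    Potentials are clamped to [vmin, vmax] and rescaled to u in [0, 1];
    v is fixed at 0.5 so every vertex samples the middle row of a
    1-D color-ramp texture (e.g. red-white-blue).
    """
    uvs = []
    for p in potentials:
        u = (min(max(p, vmin), vmax) - vmin) / (vmax - vmin)
        uvs.append((u, 0.5))
    return uvs

# Example: a strongly negative, a neutral, and a positive vertex.
print(potential_to_uv([-15.0, 0.0, 10.0]))
```

The vmin/vmax range here is an arbitrary example; in practice it would be chosen to match the potential range of the surface being exported.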

It turns out Matthieu Delanoe's X3D importer for Blender does get the correct colors from Chimera. Not being a Blender user, I didn't realize I had to switch the Viewport Shading mode from Solid to Textured to see the colors.

- Greg
participants (5)
- Greg Couch
- James Conway
- Thomas Goddard
- Tom Goddard
- Waight, Andrew