
Hi David,
Other than feedback from other ChimeraX users (anybody?)... maybe you can ask the AlphaFold-Multimer developers, or maybe their publication has more statistics on this kind of thing:

Protein complex prediction with AlphaFold-Multimer.
Evans R, O'Neill M, Pritzel A, et al. bioRxiv 2021.
<https://www.biorxiv.org/content/10.1101/2021.10.04.463034v2>

Other than that, all I can suggest is to take a look at the following links for some discussion and ChimeraX tools for AlphaFold dimer prediction. However, we are not the authorities on this topic.

<https://www.rbvi.ucsf.edu/chimerax/data/alphapairs-oct-2023/alphapairs.html>
<https://www.rbvi.ucsf.edu/chimerax/data/afbatch-jan2024/rim_dimers.html>

I hope this helps,
Elaine
-----
Elaine C. Meng, Ph.D.
UCSF Chimera(X) team
Resource for Biocomputing, Visualization, and Informatics
Department of Pharmaceutical Chemistry
University of California, San Francisco
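P.S. If you want to quantify "converged vs. scattered" rather than eyeballing it, below is a minimal Python sketch, just an illustration and not a ChimeraX tool. It assumes the standard AlphaFold-Multimer output layout, where ranking_debug.json holds an "iptm+ptm" mapping of model name to ranking confidence (0.8*ipTM + 0.2*pTM, so it tracks ipTM closely); the file path and the spread threshold are placeholders you would adjust.

# Minimal sketch: summarize the per-model score spread from an
# AlphaFold-Multimer run. Assumes the standard ranking_debug.json output,
# whose "iptm+ptm" field maps model names to the ranking confidence
# (0.8*ipTM + 0.2*pTM); adjust the key if your pipeline writes raw ipTM.
import json
import statistics

def summarize_iptm(ranking_json_path, spread_threshold=0.1):
    # spread_threshold is an arbitrary placeholder for "consistent enough"
    with open(ranking_json_path) as f:
        ranking = json.load(f)
    scores = sorted(ranking["iptm+ptm"].values(), reverse=True)
    mean = statistics.mean(scores)
    spread = max(scores) - min(scores)
    converged = spread <= spread_threshold
    print(f"models: {len(scores)}")
    print(f"scores (high to low): {[round(s, 2) for s in scores]}")
    print(f"mean {mean:.2f}, spread {spread:.2f} -> "
          f"{'consistent' if converged else 'scattered; inspect top model'}")
    return scores, converged

# Example (hypothetical path):
# summarize_iptm("my_complex/ranking_debug.json")

Run over a directory of pairwise predictions, something like this makes the one-high-score-among-low cases easy to flag for closer inspection, e.g. by opening the top-ranked model in ChimeraX.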
On Mar 14, 2024, at 5:55 AM, David S. Fay via ChimeraX-users <chimerax-users@cgl.ucsf.edu> wrote:
Hi,
This question isn't specifically about ChimeraX, but I hope it's in the ballpark. For most of the protein pairs I've tested using multimer, the ipTM scores for the 5–10 models are relatively consistent: for example, all 0.2–0.3, or 0.75–0.85, or 0.4–0.6. But occasionally I'll get one or two very high scores (>0.7) with the rest being low or somewhat randomly scattered. My instinct, of course, is to put more faith in scores when the models converge. At the same time, I don't want to ignore a potentially real interaction. So, I'm wondering how people interpret ipTMs when the numbers vary between models. Why do some protein pairs show this pattern? Are the good scores indicative of something real, or are they false positives? Are the bad scores true negatives or false negatives?
Any thoughts, insights, and possible follow-up approaches would be greatly appreciated!
David