Specify GPU id for ChimeraX

Hi,
Is there any way to specify which GPU device ChimeraX runs on? Currently it uses the default GPU 0, which can disturb the existing jobs. Thanks.
Best, Shasha

Hi Shasha,
ChimeraX has no way to select which GPU it uses; the operating system or OpenGL driver decides. You didn't mention which operating system you are using. Here is an example of how to set the default OpenGL GPU on Windows:
https://www.techadvisor.co.uk/how-to/pc-components/how-set-default-graphics-...
Tom

Hi Tom,
Sorry about not clarifying my operating system. I am using Ubuntu 20.04 with two NVIDIA GPU cards. Do I need to change an OpenGL setting or reconfigure the NVIDIA settings?
Thanks, Shasha

Hi Shasha,
I don't know whether you have multiple displays driven by different GPUs. I think ChimeraX will use the default GPU for the display it is started on. This is a question about how to configure Ubuntu and the NVIDIA drivers; there is nothing in ChimeraX to control which GPU is used. ChimeraX simply asks for an OpenGL context through the Qt window toolkit and is given one. You'll need to search online for how Ubuntu 20.04 decides which GPU to use and how to control that. Please post a message describing how it is done if you figure it out.
Tom

Hello,
You can restrict which of your GPUs ChimeraX will be able to detect by starting it from the shell like so:

CUDA_VISIBLE_DEVICES=1 chimerax

Replace 1 with the device number you want; it is the same number reported by nvidia-smi. This only applies until you close ChimeraX; the next time you run it, you again need to put the environment variable before the "chimerax" command.

You can also make this environment variable persist until you close the shell session:

export CUDA_VISIBLE_DEVICES=1

Then you can open ChimeraX from the same shell session, close it, and reopen it with just the "chimerax" command, and it should still only see the GPU you indicated. When you close and restart your shell, you will have to export the environment variable again.

I don't recommend adding the export to your ~/.bashrc or other shell initialization script, because then all your shell sessions will have this environment variable set, so every command you run will only see this GPU, which is probably not what you want. It is less likely to get in your way down the road if you only set this variable for the duration of a shell opened specifically to run ChimeraX.

I hope this helps,
Guillaume
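A quick, generic sanity check (assuming the running process is visible to pgrep under the name "chimerax", which may differ on some installs) is to inspect the process environment under /proc to confirm the variable actually reached ChimeraX:

    # Show CUDA_VISIBLE_DEVICES as seen by the newest process matching "chimerax".
    # /proc/PID/environ is NUL-separated, so tr converts it to one entry per line.
    tr '\0' '\n' < /proc/"$(pgrep -n chimerax)"/environ | grep CUDA_VISIBLE_DEVICES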

To supplement Guillaume's very helpful answer, you could make an alias to reduce the typing involved, and you could put the alias in your shell startup file. For the bash shell, the syntax for making an alias named 'cx' for the command would be:

alias cx="CUDA_VISIBLE_DEVICES=1 chimerax"

Other shells have similar (but not necessarily identical) syntax.
--Eric

Eric Pettersen
UCSF Computer Graphics Lab
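If you would rather choose the GPU at launch time instead of hard-coding it, a small bash function works as well. This is only a sketch; the name cxgpu and the default index of 1 are arbitrary choices, not part of anyone's setup:

    # Run ChimeraX restricted to a chosen nvidia-smi device index (default 1).
    cxgpu() {
        local gpu="${1:-1}"      # first argument selects the GPU index
        [ $# -gt 0 ] && shift    # remaining arguments are passed on to chimerax
        CUDA_VISIBLE_DEVICES="$gpu" chimerax "$@"
    }
    # Usage: cxgpu 1          or: cxgpu 0 model.pdb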

Hi Guillaume and Eric,
Thanks for the tip. A temporary assignment of visible GPU devices is exactly what I want. However, the recipe of using 'CUDA_VISIBLE_DEVICES=1' does not work, at least on my Ubuntu 20.04 with ChimeraX 1.0. I also tried Eric's suggestion just now:

sf@sf-MS-7C35:~$ echo $CUDA_VISIBLE_DEVICES

sf@sf-MS-7C35:~$ export CUDA_VISIBLE_DEVICES=1
sf@sf-MS-7C35:~$ echo $CUDA_VISIBLE_DEVICES
1
sf@sf-MS-7C35:~$ chimerax &
[1] 673010
sf@sf-MS-7C35:~$ nvidia-smi
Tue Nov 24 12:09:28 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.66       Driver Version: 450.66       CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 207...  Off  | 00000000:2D:00.0  On |                  N/A |
| 60%   74C    P2   191W / 215W |    763MiB /  7974MiB |     99%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 207...  Off  | 00000000:2E:00.0 Off |                  N/A |
|  0%   34C    P8    14W / 215W |     14MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1343      G   /usr/lib/xorg/Xorg                 35MiB |
|    0   N/A  N/A      2338      G   /usr/lib/xorg/Xorg                174MiB |
|    0   N/A  N/A      2463      G   /usr/bin/gnome-shell              233MiB |
|    0   N/A  N/A    671633      G   ...AAAAAAAAA= --shared-files       45MiB |
|    0   N/A  N/A    672504      C   /opt/conda/bin/python             229MiB |
|    0   N/A  N/A    673010      G   chimerax                           33MiB |
|    1   N/A  N/A      1343      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A      2338      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+

After setting the environment variable and running chimerax in the same session, it still runs on GPU 0. I also tried a recipe that sets "export NVIDIA_VISIBLE_DEVICES=1, export CUDA_VISIBLE_DEVICES=0", shared here [https://stackoverflow.com/a/58445444]. It does not work either.

To the ChimeraX developers: I wonder how ChimeraX is exposed to CUDA. I have some background in CUDA computing and in using CUDA from Python; if you can give some clues, that would be great.

Best, Shasha
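A generic way to see which GPU actually provides the OpenGL rendering for the current display (assuming the mesa-utils package, which provides glxinfo, is installed; this is a standard Linux check, not specific to ChimeraX) is:

    # Print the OpenGL renderer string for the current DISPLAY;
    # it names the GPU the X screen uses for GL rendering.
    glxinfo | grep "OpenGL renderer"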

Hi Shasha,
ChimeraX does not use CUDA. It only uses the graphics card through OpenGL for graphics rendering, not for non-graphical calculations. There is one exception: the ISOLDE plugin to ChimeraX can use CUDA if you tell it to.

So I think the environment variable you would need is NVIDIA_VISIBLE_DEVICES. I don't know why that would not work. ChimeraX uses Qt to create a QOpenGLContext. That Python code is in your distribution in the file

chimera/lib/python3.7/site-packages/chimerax/graphics/opengl.py

# Create context
from PyQt5.QtGui import QOpenGLContext
qc = QOpenGLContext()
qc.setScreen(self._screen)

The Qt window toolkit has no capability to choose the GPU as far as I know. I don't have a multi-GPU NVIDIA system to test on, but I tried starting ChimeraX from a bash shell with

NVIDIA_VISIBLE_DEVICES=1 chimerax

and added code to print the environment variables just before the QOpenGLContext is created; the variable is present in the environment as expected. I was worried that ChimeraX might remove some environment variables, but that does not happen. So I cannot explain why the environment variable does not work.

I know nothing about nvidia-smi, but I am surprised that it could choose between different graphics cards while rendering to the same screen. I am more familiar with macOS with an external GPU and two displays. On that operating system, if I run ChimeraX on the iMac or MacBook laptop display it uses the computer's built-in graphics, and if I run ChimeraX on an external display attached to the external GPU it uses the external GPU -- in other words, the display you run on controls which GPU is used. In fact, on macOS it remarkably switches which GPU is being used if I simply drag the ChimeraX window from one display to the other. Of course Ubuntu is entirely different, and it seems like NVIDIA_VISIBLE_DEVICES=1 should work.

Tom

Hi Tom,
Thanks for the tips. Tristan, the ISOLDE developer, also mentioned to me that ISOLDE's GPU selection can be set with "isolde set gpuDeviceIndex {n}" on the ChimeraX command line.

After digging into QOpenGLContext and your description of GPU switching on macOS, I realize that this has something to do with how OpenGL interacts with nvidia-settings. As shown in the screenshots in this thread [https://askubuntu.com/questions/280972/how-to-understand-nvidia-settings-sav...], Ubuntu has an 'NVIDIA X Server Settings' utility. OpenGL is bound to the X server/screen, which here is a Samsung monitor attached to GPU 0.

So it looks like I would need a second screen; when I drag the ChimeraX window there, the job would automatically move to the second GPU. Not an elegant solution, but still a solution... For the time being, I will shift the existing jobs to the second GPU.

Thanks, Shasha
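If the X server were configured with a separate X screen on each GPU (an assumption about the local X setup, not something established in this thread), ChimeraX could in principle be started directly on the second screen rather than dragged there:

    # Assuming X screens :0.0 (on GPU 0) and :0.1 (on GPU 1) exist,
    # start ChimeraX on the second screen, and therefore on its GPU.
    DISPLAY=:0.1 chimerax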

Hi Shasha,
That Ubuntu forum post from 7-8 years ago may not be recent enough to describe how it works now. Here is another post, still fairly old (4-5 years), discussing how to tell the X server which GPU to use for a screen:

https://askubuntu.com/questions/787030/setting-the-default-gpu

I'm a bit puzzled by what configurations are possible. If your display is physically plugged into GPU 0, say with a DisplayPort cable, then that GPU is definitely doing part of the job of rendering to the display, since it is sending the electrical signal. If it is possible to configure things so that ChimeraX renders on GPU 1 and the result appears on your GPU 0 screen, the rendering has to be sent from GPU 1 to GPU 0 (probably by way of the CPU). That could be slow, and you end up interrupting both GPUs to render graphics.

If the X server uses just one GPU for a screen, say GPU 0, I think it would make much more sense to run compute jobs on GPU 1, leave all graphics rendering on GPU 0, and not try to change which GPU ChimeraX uses. It is probably much more common, and easier, to choose the GPU for a compute job.

Tom
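To see which GPUs the running X server exposes at all (a generic check using the standard xrandr tool; the output format varies with the driver), one can run:

    # List the render/output providers known to the X server for this DISPLAY.
    xrandr --listproviders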

Hi Tom,
I also checked out the "NVIDIA X Server Settings" utility on my Ubuntu 20.04 and spent some time tweaking the settings. The binding between the X screen used for rendering and the OpenGL service makes them difficult to separate, unless I attach a second screen to GPU 1 and run ChimeraX there. I was not sure how an attachment would come through in the mailing list archive, so I used the old forum post to refer to those settings.

I initially intended to hack this through, since being able to control which GPU ChimeraX runs on was something I really wanted. Now that I see how ChimeraX works with OpenGL, I agree that choosing the GPU for a compute job is much easier.

Thanks for the discussion and for sharing your insights.

Best, Shasha

Hello Shasha,
My bad -- I suggested this solution because I knew it worked for compute jobs, but I had not tried it with ChimeraX when I sent my email. I just tried now, and I observe the same behavior you describe: ChimeraX uses the GPU connected to the monitor, no matter what environment variable I define before starting it (with ChimeraX 1.1 on CentOS 8.1).

I hope the workaround of choosing which GPU your compute jobs use will be enough for you.

Cheers,
Guillaume