
Hi Guillaume and Eric,

Thanks for the tip. The temporary assignment of visible GPU devices is exactly what I want. However, the recipe of using 'CUDA_VISIBLE_DEVICES=1' does not work, at least on my Ubuntu 20.04 with ChimeraX 1.0. I also tried Eric's suggestion just now:

sf@sf-MS-7C35:~$ echo $CUDA_VISIBLE_DEVICES

sf@sf-MS-7C35:~$ export CUDA_VISIBLE_DEVICES=1
sf@sf-MS-7C35:~$ echo $CUDA_VISIBLE_DEVICES
1
sf@sf-MS-7C35:~$ chimerax &
[1] 673010
sf@sf-MS-7C35:~$ nvidia-smi
Tue Nov 24 12:09:28 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.66       Driver Version: 450.66       CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce RTX 207...  Off  | 00000000:2D:00.0  On |                  N/A |
| 60%   74C    P2   191W / 215W |    763MiB /  7974MiB |     99%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 207...  Off  | 00000000:2E:00.0 Off |                  N/A |
|  0%   34C    P8    14W / 215W |     14MiB /  7982MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1343      G   /usr/lib/xorg/Xorg                 35MiB |
|    0   N/A  N/A      2338      G   /usr/lib/xorg/Xorg                174MiB |
|    0   N/A  N/A      2463      G   /usr/bin/gnome-shell              233MiB |
|    0   N/A  N/A    671633      G   ...AAAAAAAAA= --shared-files       45MiB |
|    0   N/A  N/A    672504      C   /opt/conda/bin/python             229MiB |
|    0   N/A  N/A    673010      G   chimerax                           33MiB |
|    1   N/A  N/A      1343      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A      2338      G   /usr/lib/xorg/Xorg                  4MiB |
+-----------------------------------------------------------------------------+

After setting the environment variable and running ChimeraX in the same session, it still runs on GPU 0. I also tried a recipe that sets "export NVIDIA_VISIBLE_DEVICES=1, export CUDA_VISIBLE_DEVICES=0", shared here: https://stackoverflow.com/a/58445444. It does not work either.

To the ChimeraX developers: I wonder how ChimeraX is exposed to CUDA. I have some background in CUDA computing and in using CUDA from Python. If you can give some clues, that would be great.

Best,
Shasha

On Tue, Nov 24, 2020 at 12:18 PM Eric Pettersen <pett@cgl.ucsf.edu> wrote:
To supplement Guillaume's very helpful answer: you could make an alias to reduce the typing involved, and you could put the alias in your shell startup file. For the bash shell, the syntax for making an alias named 'cx' for the command would be:
alias cx="CUDA_VISIBLE_DEVICES=1 chimerax"
Other shells have similar (but not necessarily identical) syntaxes.
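As a sketch of how this looks in practice (a generic bash example, not ChimeraX-specific): define the alias and then ask bash how it recorded it.

```shell
# Define the alias in the current bash session...
alias cx="CUDA_VISIBLE_DEVICES=1 chimerax"
# ...and confirm how bash recorded it (prints the alias definition):
alias cx
```

To make it persist, the same `alias` line can go in your bash startup file (commonly ~/.bashrc for interactive shells).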
--Eric
Eric Pettersen
UCSF Computer Graphics Lab
On Nov 24, 2020, at 12:09 AM, Guillaume Gaullier <guillaume@gaullier.org> wrote:
Hello,
You can restrict which of your GPUs ChimeraX will be able to detect by starting it from the shell like so:
CUDA_VISIBLE_DEVICES=1 chimerax
Replace 1 with the device number you want; this is the same numbering as reported by nvidia-smi. This only lasts until you close ChimeraX; the next time you run it, you again need to add the environment variable before the "chimerax" command.
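The scoping behavior described above can be demonstrated without ChimeraX at all (a plain `echo` in a child shell stands in for the `chimerax` command here):

```shell
# The VAR=value prefix exports the variable only into the environment
# of the single command that follows it:
CUDA_VISIBLE_DEVICES=1 sh -c 'echo "child sees: $CUDA_VISIBLE_DEVICES"'
# The invoking shell itself is unchanged afterwards:
echo "parent sees: ${CUDA_VISIBLE_DEVICES:-unset}"
```

This is why the variable has to be repeated before each `chimerax` invocation unless you export it.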
You can also make this environment variable stay around until you close the shell session like so:
export CUDA_VISIBLE_DEVICES=1
then you can open ChimeraX from the same shell session, close it, and reopen with only the "chimerax" command and it should still only see the GPU you indicated.
When you close and restart your shell, you will have to export the environment variable again. I don't recommend adding the export to your ~/.bashrc or other shell initialization script, because then all your shell sessions will have this environment variable set, and every command you run will see only this GPU, which is probably not what you want. It is less likely to get in your way later if you set this variable only for the duration of a shell you opened specifically to run ChimeraX.
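One way to get this limited lifetime without keeping a dedicated shell open is a subshell: the export lives only between the parentheses (a generic bash sketch, with `echo` standing in for `chimerax`):

```shell
# Export inside a subshell; it disappears when the subshell exits.
( export CUDA_VISIBLE_DEVICES=1; echo "inside subshell: $CUDA_VISIBLE_DEVICES" )
# The parent shell's environment is untouched:
echo "after subshell: ${CUDA_VISIBLE_DEVICES:-unset}"
```

In real use you would run `chimerax` inside the parentheses instead of the echo.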
I hope this helps,
Guillaume
On 24 Nov 2020, at 01:51, Shasha Feng <shaalltime@gmail.com> wrote:
Hi Tom,
Sorry about not clarifying my operating system. I am using Ubuntu 20.04 with two NVIDIA GPU cards. Do I need to change an OpenGL setting or reconfigure the NVIDIA settings?
Thanks, Shasha
On Mon, Nov 23, 2020 at 6:58 PM Tom Goddard <goddard@sonic.net> wrote:
Hi Shasha,
ChimeraX has no way to select which GPU it uses; the operating system or OpenGL driver decides. You didn't mention which operating system you are using. Here is an example of how to set the default OpenGL GPU on Windows:
https://www.techadvisor.co.uk/how-to/pc-components/how-set-default-graphics-...
Tom
On Nov 23, 2020, at 2:38 PM, Shasha Feng <shaalltime@gmail.com> wrote:
Hi,
Is there any way to specify which GPU device ChimeraX runs on? Currently it uses the default GPU 0, which can disturb existing jobs. Thanks.
Best, Shasha
_______________________________________________ ChimeraX-users mailing list ChimeraX-users@cgl.ucsf.edu Manage subscription: https://www.rbvi.ucsf.edu/mailman/listinfo/chimerax-users