Simple dotfile repo for my short-lived installations, mainly in WSL2 + Ubuntu.
That said, this repo is more for misc install instructions: a pure dotfile repo, where I simply copy the same dotfiles to new installs, does not always work that well for me due to the particulars of any given system and use case.
Install and customization:
- Base installation
- How to customize .bashrc
- How to manage ssh
- Graphics and CUDA support
- Docker
- Compacting to free up space
- Mount external and network drives
- WSL and GUI apps
- Serial over USB
- Permissions for devices under `/dev/tty*`
- Setting permissions after copying dirs between distros
- Fix corrupt VHD
- Troubleshooting WSL | Microsoft
- Kill WSL
Misc
Fix PATH in .profile
In Ubuntu 22.04 they messed up the order of things so that required paths are not set until after `.bashrc` is executed.

```bash
nano ~/.profile
```
- Find the block that starts with `# if running bash`
- Move that block so that it is the last thing executed in the file (we want any PATH stuff to be above it), as in the sketch below
- Save and reload the shell
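For reference, a minimal sketch of what the tail of the reordered `~/.profile` might look like (the PATH line is illustrative, keep whatever your file already has):

```bash
# ...PATH tweaks and other exports stay up here...
export PATH="$HOME/.local/bin:$PATH"

# if running bash
if [ -n "$BASH_VERSION" ]; then
    # include .bashrc if it exists
    if [ -f "$HOME/.bashrc" ]; then
        . "$HOME/.bashrc"
    fi
fi
```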
Configure ssh key for github and clone repo
- Create the dir for all ssh keys: `mkdir ~/.ssh`
- Generating a new SSH key and adding it to the ssh-agent | GitHub Docs
- Start the ssh agent: `eval $(ssh-agent -s)`
- Add the private ssh key to the agent, example: `ssh-add ~/.ssh/id_github_home`
- Create development dir: `mkdir ~/dev && cd ~/dev`
- Finally clone this repo; the final path should be `~/dev/dotfiles` (see the sketch below)
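Put together, the whole flow looks roughly like this (the key filename matches the example above; the clone URL is a placeholder for this repo):

```bash
# Generate a key for GitHub (filename and comment are examples)
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/id_github_home

# Start the agent and load the key
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_github_home

# Clone into ~/dev/dotfiles (replace <user> with the actual GitHub user)
mkdir -p ~/dev && cd ~/dev
git clone git@github.com:<user>/dotfiles.git
```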
Update and install defaults
- Update to latest, run script: `. ~/dev/dotfiles/scripts/update-all.sh`
- Install default tools, run script: `. ~/dev/dotfiles/scripts/install-defaults.sh` (a condensed sketch of the scripted steps follows after this list)
- Add custom settings to `~/.bashrc`, see instructions
- Add ssh keys
- (WSL) Install pretty shell
- Install tmux
- Then symlink config file `~/dev/dotfiles/configs/.tmux.conf` to `~/.tmux.conf`: `ln -sf ~/dev/dotfiles/configs/.tmux.conf ~/.tmux.conf`
- Connect USB device to WSL2 via network
- How to manage WSL disk space | Microsoft. A very nice doc for handling some of the weird errors you see once in a lifetime.
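Condensed, the scripted parts of the list above boil down to this (assuming the repo is already cloned to `~/dev/dotfiles`):

```bash
# Update everything, install default tools, link the tmux config
. ~/dev/dotfiles/scripts/update-all.sh
. ~/dev/dotfiles/scripts/install-defaults.sh
ln -sf ~/dev/dotfiles/configs/.tmux.conf ~/.tmux.conf
```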
All custom settings are stored in a separate file `.bashrc_dotfiles` in order to keep `.bashrc` as clean and default as possible (quite handy when handling lots of installs).
- Open `.bashrc` in an editor
- At the bottom of the file, add the following and save
NB! Make sure the path is correct for your system.

```bash
### dotfiles: shared settings
source "$HOME/dev/dotfiles/configs/.bashrc_dotfiles"
###
```
- Reload shell
- Done!
I use keychain for easier management of ssh keys.
Installation and configuration of keychain are handled by the default install script and the bashrc extras config file.
All you need to do is prepare the ssh keys for usage:
- Copy keys to `~/.ssh`
- Set required permissions, run script: `. ./scripts/set-ssh-permissions.sh` (a sketch of the typical permission fixes follows after this list)
- Done!
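For reference, a minimal sketch of the permission fixes such a script typically applies (this is the standard ssh layout; the actual script in this repo may do more):

```bash
# The ssh directory must be accessible only by the owner
chmod 700 ~/.ssh

# Private keys: owner read/write only; public keys can stay world-readable
chmod 600 ~/.ssh/id_*
chmod 644 ~/.ssh/*.pub
```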
- Run Linux GUI apps on the Windows Subsystem for Linux

Graphics are enabled via WSLg by default on Windows 11. It uses a vGPU behind the scenes and then opens a window on the host via FreeRDP.
Install `x11-xserver-utils` so you can use things like `xhost`:

```bash
sudo apt install x11-xserver-utils
```
Go to Nvidia for updated instructions (see below).
The main challenge with CUDA on WSL is that the default Ubuntu package for the CUDA Toolkit comes with a driver. This driver will overwrite the link to the Windows driver. The solution is to install a WSL-Ubuntu specific package for the CUDA Toolkit directly from Nvidia.
- Follow the manual installation steps in nvidia doc CUDA on WSL
- Remove old key as instructed
- Then run "Option 1: Installation of Linux x86 CUDA Toolkit using WSL-Ubuntu Package – Recommended"
If `gpg` hangs then modify the command to say `gpg --no-use-agent`, as it is most likely WSL/Ubuntu waiting for the agent to start up.
- Done! (a quick verification sketch follows below)
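A quick way to sanity-check the result (assuming the toolkit landed in the default `/usr/local/cuda` prefix):

```bash
# The Windows driver should expose the GPU inside WSL
nvidia-smi

# The WSL-Ubuntu toolkit package should provide nvcc
/usr/local/cuda/bin/nvcc --version
```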
Updating the Nvidia driver in Windows can sometimes mess with the symbolic links that WSL depends on.
The fix is to recreate the links in Windows, then update the links in WSL like so:
- Run CMD in Windows (as Administrator)
```
C:
cd \Windows\System32\lxss\lib
del libcuda.so
del libcuda.so.1
mklink libcuda.so libcuda.so.1.1
mklink libcuda.so.1 libcuda.so.1.1

REM If you run into permission trouble then first make sure you are using CMD in Administrator mode.
REM If still no-go, open explorer.exe and attempt to delete the file there, as you will get a different error message.

REM Open explorer at current location
explorer.exe .

REM Usually it turns out to be TrustedInstaller that is the owner.
REM First take ownership of the file
takeown /f libcuda.so
REM Then give admins the rights to change it
icacls libcuda.so /grant Administrators:F /T
REM ...And now you should be able to delete and recreate the links as described above
```
- Open WSL bash and update the link cache:

```bash
sudo ldconfig
```

If this last command fails then restart WSL and run it again.
- Install nvidia container toolkit
- Docker should work with CUDA out of the box with the latest version of Docker installed on Windows 11, as described in WSL 2 GPU Support for Docker Desktop on NVIDIA GPUs
- You can verify that CUDA is available in Docker by (a) running the examples found in the Docker page above, or (b) simply running a CUDA benchmark like so (run both to be sure):

```bash
# Option: Use local dockerfiles (easy to use in k8s as well if you need to test gpu on a node)
docker build -t cuda-test -f docker/cuda-test.dockerfile docker/.
docker build -t nvidia-smi -f docker/nvidia-smi.dockerfile docker/.

# nvidia-smi will dump some gpu info to stdout
docker run --rm --gpus=all nvidia-smi

# cuda-test will run some simple calculations using the cuda cores and dump the result to stdout
docker run -it --rm --gpus=all cuda-test

# Option: Use external images
docker run --rm --gpus=all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi
docker run -it --gpus=all --rm nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -benchmark
```
Note that the parameter `--gpus=all` is how you tell Docker to use the gpu; otherwise it will just use the cpu.
To enable gpus in docker-compose, define gpu as a capability under the `deploy` tag (docs):
```yaml
# Example docker-compose.yaml
demo:
  build: something
  command: something
  deploy:
    resources:
      reservations:
        devices:
          - capabilities: [gpu]
```
Sometimes installs in WSL (like nvidia-container-tools) can mess up the docker wsl distro, which will put Docker in a bad state.
Fix:
- Close docker desktop
- Unregister the docker wsl distro:
- In Windows, open a cmd as admin and run `wslconfig.exe /u docker-desktop`
- Start docker desktop
WSL2 lives inside a virtual disk which is usually stored by Windows in an `ext4.vhdx` file.
This file will grow over time even if contents inside WSL2 are deleted. This "feature" is one of the small quirks of virtual file systems: they tend to eat up more and more space on the host system as usage goes up over time.
You can reclaim this space by trimming the `.vhdx` file.
Sources:
- See post from iuriguilherme
- See post from MS
Update 2024:
WSL can automatically trim drives as long as the distro/volume has been set to `set-sparse=true`. The default setting is `false`.
How to turn it on: `wsl --manage <distro> --set-sparse true`
If you want to trim then do the following (see the sketch after this list):
- Shutdown WSL
- Set sparse to false for the target distro, as trim cannot run when set to true
- Trim the target `*.vhdx` file(s)
- Set sparse to true again for the distro(s)
- Done!
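As console commands, the round trip looks roughly like this (Ubuntu is just an example distro name; the trim itself is the `Optimize-VHD` or `diskpart` step described further down):

```powershell
wsl --shutdown
wsl --manage Ubuntu --set-sparse false
# ...trim the ext4.vhdx with Optimize-VHD or diskpart (see below)...
wsl --manage Ubuntu --set-sparse true
```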
Update 2023:
MS has updated WSL2 to automatically trim itself, and so far it seems to work on my system.
The exception is the docker vhd, which sometimes seems not to shrink and has to be handled manually:
- Prune docker: `docker system prune --all`
- Shutdown docker
- Open Windows Powershell in admin mode
- Shut down wsl: `wsl --shutdown`
- Trim docker vhd: `Optimize-VHD -Path "$env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" -Mode Full`
- Trim ubuntu vhd: `Optimize-VHD -Path "$env:LOCALAPPDATA\Packages\CanonicalGroupLimited.Ubuntu22.04LTS_79rhkp1fndgsc\LocalState\ext4.vhdx" -Mode Full`
`Optimize-VHD` requires the Windows feature "virtual platform" to be installed:

```powershell
Optimize-VHD -Path "$env:LOCALAPPDATA\Docker\wsl\data\ext4.vhdx" -Mode Full
```
`diskpart` is available in all Windows editions, and can be used to shrink virtual drives like this:
```
diskpart
select vdisk file="C:\Users\<user>\AppData\local\Docker\wsl\data\ext4.vhdx"
attach vdisk readonly
compact vdisk
detach vdisk
exit
```
Note: mounting works for drives. To access USB devices like microcontrollers etc., use usbipd.
Example where the drive is available under `f:` in Windows:
```bash
# WSL
sudo mkdir /mnt/f
sudo mount -t drvfs f: /mnt/f
```
Example where networked storage is already showing in Windows under `\\server\share`:
```bash
# WSL
sudo mkdir /mnt/share
sudo mount -t drvfs '\\server\share' /mnt/share
```
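To detach cleanly, the usual umount applies (same mount point as above):

```bash
sudo umount /mnt/share
```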
Access control seems to be disabled in WSL, as `xhost` will print the following:

```
access control disabled, clients can connect from any host
SI:localuser:wslg
```

This means there is no point fiddling with `xhost +something` in WSL.
Example use case: read serial output from an Arduino Nano.
This works quite well when using `usbipd-win`.
You can inspect the traffic in a Linux terminal using the tool `minicom`.
Install and configure minicom for usb:
- Install package: `sudo apt install minicom`
- First check with `dmesg | grep tty` that the system recognizes your adapter
- Then try to run minicom with `sudo minicom -s`, go to "Serial port setup" and change the first line to `/dev/ttyUSB0`
- Finally save the config as default with "Save setup as dfl"
Connect minicom to the device: `sudo minicom --device /dev/ttyUSB0`
- Select "Comm Parameters" and set the speed to whatever the device is using
- You should now see output from the device (a minicom-free alternative is sketched below)
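If you just want to dump the stream without minicom, plain `stty` + `cat` also works (9600 baud and `ttyUSB0` are examples, match your device):

```bash
# Set the line speed, then read the serial stream
stty -F /dev/ttyUSB0 9600
cat /dev/ttyUSB0
```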
This comes into play when you want to access misc USB stuff or other devices.
The `/dev` directory is recreated at every boot, so any settings via `chmod` will vanish.
Normally the group `dialout` should be the owner for serial devices.
Unfortunately there is a bug in WSL2 where group `root` is the only owner, ref microsoft/WSL/issues/9247.
Use the `chmod` option until this is fixed in WSL, to avoid breaking stuff.
This setting will not survive a reboot.

```bash
sudo chmod a+rw /dev/ttyACM0
```
This setting is permanent.
```bash
# Find the owner group. If this is root then we should not add users to it...
ls -l /dev/ttyACM0

# Add user to the group. BEWARE: check the group first, do not join root!
sudo adduser $USER $(stat --format="%G" /dev/ttyACM0)
```
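Note that group membership is only picked up on a new login; run `wsl --shutdown` from Windows and reopen the terminal for the change to take effect.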
Aka "why do the directory have a green background?"
Apart from coloring files based on their type (turquoise for audio files, bright red for Archives and compressed files, and purple for images and videos), ls also colors files and directories based on their attributes:
- Black text with green background indicates that a directory is writable by others apart from the owning user and group, and has the sticky bit set (o+w, +t).
- Blue text with green background indicates that a directory is writable by others apart from the owning user and group, and does not have the sticky bit set (o+w, -t).
A "de-greener" command to get back the rights,
chmod -R a-x,o-w,+X thatGreenFolderWithSubfolders/
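Roughly what that mode string does (my reading of it, not from the source post): `a-x` strips execute from everything, `o-w` removes the others-writable bit that causes the green background, and `+X` then re-adds execute only where it makes sense, i.e. for directories.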
Sources:
- 1 - What causes this green background in ls output? | stackexchange
- Open CMD with admin access
- Run the following commands:
```
taskkill /F /FI "IMAGENAME eq wsl.exe"
taskkill /F /FI "IMAGENAME eq wslhost.exe"
taskkill /F /FI "IMAGENAME eq wslservice.exe"
```
Once in a blue moon you will see a system error message containing "Read-only file system" when running normal operations that should work.
This is usually down to a corrupted VHD, and for Ubuntu in particular that is `/dev/sdb`.
The linux tool `e2fsck` can fix this as long as it can work outside the VHD, meaning you cannot run it properly from inside the distro that has the bad VHD.
The workaround is to use another distro for this task.
Example:
I have two distros installed:
- Ubuntu
- docker-desktop
"Ubuntu" is the distro that has a bad VHD. I can then use "docker-desktop" to fix that VHD.
- Shutdown all wsl distros: `wsl --shutdown`
- Mount the VHD: `wsl --mount %LOCALAPPDATA%\Packages\CanonicalGroupLimited.Ubuntu24.04LTS_79rhkp1fndgsc\LocalState\ext4.vhdx --vhd --bare`
- Fix the VHD from the other distro (the device letter may differ, see the note below): `wsl -d docker-desktop -u root e2fsck -f -y /dev/sdc`
- Unmount when done: `wsl --unmount %LOCALAPPDATA%\Packages\CanonicalGroupLimited.Ubuntu24.04LTS_79rhkp1fndgsc\LocalState\ext4.vhdx`
- Done!
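The device name (`/dev/sdc` above, `/dev/sdb` earlier) depends on the attach order, so check what actually showed up before running `e2fsck` (docker-desktop is just the helper distro from the example):

```powershell
# List the block devices visible from the helper distro
wsl -d docker-desktop -u root sh -c "ls /dev/sd*"
```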
Source: "how-to-repair-a-vhd-mounting-error? the tutorial not work at all!" | WSL GitHub issue