ubuntu problems ! written by mahdic200
note
build V1.1.1
in this series, I've written a bunch of notes which are really handy! especially for beginners like us who are new to ubuntu and not comfortable without a GUI.
there are answers to many questions like:
- how do I set up a proxy in my ubuntu ?
- how do I use clipboard ?
- how do I solve network issue with apt ?
- what is permission denied message ?
- why can't I login as root ?
- how to reset my password ?
- how do I reset root password ?
these answers are scattered throughout the web, and finding them can be overwhelming and annoying for starters and beginners, because many users use windows as their desktop operating system, which is very user-friendly, while ubuntu and other linux distributions have mostly been aimed at professionals and expert programmers, despite the fact that ubuntu and other linux distributions have become more user-friendly in recent years.
have a great day and good luck ! .
written by mahdic200 .
debian 12 post installation
there are a few steps you need to take after installing debian 12 with the GNOME GUI.
add sources list
enter the command sudo nano /etc/apt/sources.list
then paste these lines into it:
# deb cdrom:[Debian GNU/Linux 12.7.0 _Bookworm_ - Official amd64 DVD Binary-1 with firmware 20240831-10:40]/ bookworm contrib main non-free-firmware
deb https://deb.debian.org/debian bookworm main non-free-firmware
deb-src https://deb.debian.org/debian bookworm main non-free-firmware
deb https://security.debian.org/debian-security bookworm-security main non-free-firmware
deb-src https://security.debian.org/debian-security bookworm-security main non-free-firmware
deb https://deb.debian.org/debian bookworm-updates main non-free-firmware
deb-src https://deb.debian.org/debian bookworm-updates main non-free-firmware
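after saving the file, refresh the package lists so apt picks up the new sources:
sudo apt update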
appindicator
install this on your debian system :
sudo apt install task-gnome-flashback-desktop
sudo apt install gnome-shell-extension-appindicator
desktop icons ng
after adding the sources list to /etc/apt/sources.list , enter this command in bash:
sudo apt install gnome-shell-extension-desktop-icons-ng
then restart the pc. after restarting, go to the Extensions application in debian and enable Desktop Icons (NG).
Graphics card driver switching
in Debian 12, when you install the NVIDIA driver, your initramfs is updated via the /etc/modprobe.d/nvidia-blacklists-nouveau.conf file, so you won't be able to use the nouveau driver anymore. such a problem, isn't it? but here's where my solution comes in!
Selecting Nouveau
open the /etc/modprobe.d/ folder with VSCode or any editor you want (one that can save files as root). open the nvidia.conf file and comment out all of it: select everything with Ctrl + A , comment it with Ctrl + / , and then save.
now open the nvidia-blacklists-nouveau.conf file and do the same. you may notice a comment line at the top of the file telling you something important: you have to update the initramfs with this command:
sudo update-initramfs -u
this step really matters, because the initramfs is the boot image that loads the kernel, and kernel modules are loaded at boot time, so you must update it after any major change or driver switch like this.
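if you prefer doing this from the terminal, here is a minimal sketch that comments out every active line in both files and updates the initramfs (it assumes the default debian file names used above):
# comment out every line that doesn't already start with '#'
sudo sed -i 's/^\([^#]\)/# \1/' /etc/modprobe.d/nvidia.conf
sudo sed -i 's/^\([^#]\)/# \1/' /etc/modprobe.d/nvidia-blacklists-nouveau.conf
sudo update-initramfs -u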
now you can close any application and reboot your system safely :
sudo systemctl reboot
Checking the Loaded Driver
after a complete reboot you can check whether your driver is loaded or not :
sudo lspci -k | grep -A 3 -i "VGA"
and you should see something like this :
00:02.0 VGA compatible controller: Intel Corporation Alder Lake-P GT1 [UHD Graphics] (rev 0c)
Subsystem: Lenovo Alder Lake-P GT1 [UHD Graphics]
Kernel driver in use: i915
Kernel modules: i915
--
01:00.0 VGA compatible controller: NVIDIA Corporation GA107M [GeForce RTX 3050 Ti Mobile] (rev a1)
Subsystem: Lenovo GA107M [GeForce RTX 3050 Ti Mobile]
Kernel driver in use: nouveau
Kernel modules: nouveau, nvidia_current_drm, nvidia_current
Selecting nvidia
there is nothing special about reversing those operations :) just open the /etc/modprobe.d/ directory with VSCode or any desired editor and uncomment the contents of nvidia-blacklists-nouveau.conf and nvidia.conf . then enter this command again:
sudo update-initramfs -u
and then reboot :) .
hiding grub bootloader
open up your /etc/default/grub :
sudo nano /etc/default/grub
and change GRUB_TIMEOUT to 0 and add GRUB_HIDDEN_TIMEOUT_QUIET=true after it.
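the relevant lines in /etc/default/grub should end up looking like this:
GRUB_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true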
now update the boot-loader :
sudo update-grub
now after rebooting you won't see the grub menu, just a brief boot message indicating which kernel version is being loaded.
mysql installation for xampp
to install xampp on ubuntu and debian 12 you need to install this package first:
sudo apt install net-tools
and also mysql-server (this is the full reference on digital ocean):
wget https://dev.mysql.com/get/mysql-apt-config_0.8.30-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.30-1_all.deb
sudo apt update
sudo apt install mysql-server
night light issue
open a terminal then enter these in it :
gsettings set org.gnome.settings-daemon.plugins.color night-light-enabled true
gsettings set org.gnome.settings-daemon.plugins.color night-light-schedule-from 0
gsettings set org.gnome.settings-daemon.plugins.color night-light-schedule-to 24
gsettings set org.gnome.settings-daemon.plugins.color night-light-temperature 4500
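to double-check that a value was applied, you can read it back with gsettings get, for example:
gsettings get org.gnome.settings-daemon.plugins.color night-light-temperature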
Source
this debian wiki is the reference and you MUST read it! also, if you want to use CUDA for Tensorflow, you MUST read its documentation.
Nvidia driver installation
1. Ensure Non-Free Repositories Are Enabled
Your /etc/apt/sources.list
already includes contrib
, non-free
, and non-free-firmware
. No changes are needed.
2. Update the System
Update the package lists:
sudo apt update
3. Install Prerequisites
Install the required tools:
sudo apt install dkms build-essential linux-headers-$(uname -r)
4. Install the NVIDIA Driver
Install the driver package:
sudo apt install nvidia-driver
5. Verify the Installation
Reboot the system:
sudo systemctl reboot
After reboot, check the GPU status:
nvidia-smi
6. Optional: Install CUDA Toolkit
If you need CUDA support (e.g., for AI development):
sudo apt install nvidia-cuda-toolkit
7. Blacklist Nouveau (if necessary)
If you encounter issues with the Nouveau driver, create a configuration file:
sudo nano /etc/modprobe.d/blacklist-nouveau.conf
Add the following:
blacklist nouveau
options nouveau modeset=0
Update the initramfs and reboot:
sudo update-initramfs -u
sudo reboot
shortcut customization
go to settings > keyboard > customize shortcuts :
then you have to add Ctrl + Alt + T for gnome terminal.
and you have to go to the Navigation section of the shortcuts, change the Switch applications shortcut to Super + Tab , and change Switch windows to Alt + Tab .
XDM configurations
XDM javasharedresources folder
The javasharedresources
folder is typically created by the Java Runtime Environment (JRE) or Java Virtual Machine (JVM). It's used to store class data sharing (CDS) caches, which improve the performance of Java applications by reducing startup time and memory usage. This folder is not specific to Xtreme Download Manager (XDMan) but is likely created because XDMan uses Java.
The JVM can be instructed to avoid creating the javasharedresources
folder by overriding its default behavior:
Modify JVM Options
Find XDMan's Executable Script: Locate the script or command that launches XDMan. It might be in /usr/bin/xdman
or similar locations.
Edit the Launch Script: Add the following JVM option to disable the use of javasharedresources
:
-Xshare:off
this is the bash script located in /usr/bin/xdman , which I changed according to the explanation above:
#!/bin/bash
if [ $EUID -eq 0 ];then
echo "It's not recomended to run XDM as root, as it can cause problems"
fi
/opt/xdman/jre/bin/java -Dsun.java2d.xrender=false -Xmx1024m -Xshare:off -jar /opt/xdman/xdman.jar
I added -Xshare:off before -jar /opt/xdman/xdman.jar . I also changed /usr/share/applications/xdman.desktop to use the /usr/bin/xdman script as its executable.
and it seems it did the trick for me.
XDM tray Icon
go to XDM settings and scroll down to Advanced Settings and then check the Show the Tray icon (needs restart)
.
go to the system monitor, search for xdman , kill the processes, and open XDM again.
development issues
in this section I'll provide some solutions for problems I've encountered.
hardware control
in this section I've provided some solutions for hardware control and monitoring.
CPU temperature
sudo apt install lm-sensors
sudo sensors-detect
answer yes to all .
sudo service kmod start
then :
sensors
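if you want the readings to refresh continuously, watch (installed by default on most systems) can rerun sensors every couple of seconds:
watch -n 2 sensors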
command line system monitoring
Latex usage
latex installation and usage instructions
latex installation
installation
sudo apt install chktex latexmk texlive texlive-base texlive-binaries texlive-extra-utils texlive-fonts-recommended texlive-latex-base texlive-latex-recommended texlive-luatex texlive-plain-generic
to generate a pdf from .tex
files :
pdflatex file.tex
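since latexmk is included in the package list above, you can also let it run pdflatex the right number of times (and handle bibliographies) for you:
latexmk -pdf file.tex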
Network Issues
in this chapter I will explain a bunch of network problems in ubuntu.
clear DNS cache:
sudo resolvectl flush-caches
problems installing packages:
sudo apt update
sudo apt-get update
if the network is not working for fetching apt packages:
sudo nano /etc/resolv.conf
and add these lines before nameserver ... :
nameserver 8.8.8.8
nameserver 8.8.4.4
solve apt update network issue:
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf > /dev/null
nothing works for the apt update Ign error!
if nothing else works for your problem, just open a terminal and run this command:
sudo nano /etc/apt/apt.conf
and write this line inside the apt.conf file:
Acquire::http::Proxy "http://yourproxyaddress:proxyport";
go to Software & Updates, click on the "download from" select box, click on "other ...", and click "select best server" in the popped-up window. wait for the test to complete.
enter this command:
code /etc/apt/sources.list.d/
and open ubuntu.sources
file .
comment this section:
# Types: deb
# URIs: http://security.ubuntu.com/ubuntu/
# Suites: noble-security
# Components: main restricted universe multiverse
# Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
Global network proxy
since we are Iranian people we have a lot of network issues (limits, 403 errors etc.), so we need a proxy that works for all of our applications on Linux. Linux systems do not provide such a thing out of the box, but I have a big boom solution!!
use your phone's hotspot !
the only thing you need is to install the VPN Hotspot application, whose icon is a green key (for now of course). your phone has to be rooted (if that's not an option, please use USB tethering on your android phone). with this trick we get a decent global proxy system. and don't forget to keep these things in mind:
- turn on your android VPN
- use New York (United States) IP address and timezone
- don't forget to use your network when Internet connection is strong
- make sure your Linux system is using your phone's VPN (you can check your IP address); that should do the trick.
weak alternative with sshuttle
This is a very weak alternative but may work for you .
sudo apt install sshuttle
sudo sshuttle --dns -x <server_ip> -r <username>@<server_ip>:<port> 0/0
verify your proxy settings are applied :
curl ip-api.com
just use the -x or --proxy option followed by the proxy server URL:
curl -x http://proxy-url.com:8080 https://target-url.com
or
curl --proxy http://proxy-url.com:8080 https://target-url.com
xray-core
download this from this link:
extract this file in ~/.local/bin/
.
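then give the extracted binary executable permission (assuming the binary inside the archive is named xray):
chmod +x ~/.local/bin/xray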
Qv2ray gui
download from this link:
extract it into the ~/.local/bin/ folder and give it executable permission.
open the Qv2ray gui. open the preferences tab and navigate to the kernel tab. there are two file selectors: for the first one, give it the xray-core executable you extracted; for the second one, give it the parent folder path of the xray-core executable.
docker.ir see this page
for iranian people: you must see this page and use its mirror and proxy server. thank you :) none of the previous steps are required.
just add the docker.ir proxy server in ~/.docker/config.json and add the mirror to /etc/docker/daemon.json . FUCK IRAN AND FUCK DOCKER .
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo touch /etc/systemd/system/docker.service.d/http-proxy.conf
sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf
inside the .../http-proxy.conf:
[!WARNING] HTTPS is not always HTTPS! in some cases you have to put http:// instead of https:// inside the HTTPS_PROXY variable. if the proxy is not working, try this.
[Service]
Environment="HTTP_PROXY=http://example.com:3129"
Environment="HTTPS_PROXY=https://example.com:3129"
# Environment="NO_PROXY=localhost,127.0.0.1
[!IMPORTANT] reload docker: in cases like this you have to reload docker for the new settings to be loaded and take effect.
sudo systemctl daemon-reload && \
sudo systemctl restart docker
verify everything completed fine:
sudo systemctl show --property=Environment docker
you should see something like this:
Environment=HTTP_PROXY=http://example.com:3129 HTTPS_PROXY=http://example.com:3129
Telegram MTProxy
this chapter is a copy of this article.
How to set up a Telegram Proxy (MTProxy)
This tutorial will teach you how to set up a Telegram MTProxy on an Ubuntu 22.04 server using AWS Lightsail, although you can use any other Linux distribution and cloud provider.
Using a Telegram proxy is a safe, easy and effective way of overcoming Telegram bans. It's useful, for example, to keep using Telegram under tyrannical regimes, or to circumvent judges' decisions to block Telegram.
Telegram proxies are a built-in feature in all Telegram apps (both mobile and desktop). It allows Telegram users to connect to a proxy in just one or two clicks / taps.
Telegram proxies are safe: Telegram sends messages using their own MTProto secure protocol, and the proxy can only see encrypted traffic – there's no way for a proxy to decrypt the traffic and read the messages. The proxy does not even know which Telegram users are using the proxy, all the proxy sees is just a list of IPs.
This guide exists because the README in the official TelegramMessenger/MTProxy repo is both incomplete and hasn't been updated for the past 5 years, and fails at multiple points if you try following the steps described. This is an updated version as of March 2024. There's also an official Docker image, but it's also outdated.
I've used these exact steps to set multiple proxies myself, including my public Telegram proxy that you can use if you don't want to go through the hassle of setting up your own.
Instructions
To begin, launch a clean Ubuntu 22.04 instance. I'm using AWS Lightsail. I chose a $3.5/mo instance (512MB, 2vCPUs, 1TB transfer), which should be enough for non-intensive usage. You can choose a bigger instance, or set up your server in any other cloud provider (DigitalOcean, Linode, Hetzner, etc) based on your needs and preferences. Note that the physical location you choose for the instance has to be a place where Telegram is not banned / restricted for the proxy to work.
ssh
into the machine you've just launched:
ssh ubuntu@ip
- Update apt:
sudo apt-get update
- Install dependencies:
sudo apt install git curl build-essential libssl-dev zlib1g-dev
- Clone the repo:
We're going to use an unofficial MTProxy community fork instead of the official one. Why? The official Telegram MTProxy repo is considered abandoned: it hasn't been updated for many years. Many problems that need fixing haven't been fixed, and if you try using the official repo code, MTProxy will unexpectedly break in production. This fork is a community effort to keep things up to date so you can keep running MTProxy in production without surprises.
git clone https://github.com/GetPageSpeed/MTProxy
cd MTProxy
- Change the Makefile and add the -fcommon flag to CFLAGS and LDFLAGS as per this PR
nano Makefile
Save and exit
- Build the binaries
make
Make sure it compiles without errors.
- Move the binary to
/opt/MTProxy
for ease of running:
sudo mkdir /opt/MTProxy
sudo cp objs/bin/mtproto-proxy /opt/MTProxy/
- Go to the new directory:
cd /opt/MTProxy
- Obtain the Telegram secret:
sudo curl -s https://core.telegram.org/getProxySecret -o proxy-secret
- Obtain the Telegram configuration:
sudo curl -s https://core.telegram.org/getProxyConfig -o proxy-multi.conf
- Generate a proxy secret. This will output a string of random numbers and letters. Keep the result at hand, you will need it in a few steps:
head -c 16 /dev/urandom | xxd -ps
- Create a
mtproxy
user to run the proxy:
sudo useradd -m -s /bin/false mtproxy
- Update the ownership of the MTProxy directory to the new user
sudo chown -R mtproxy:mtproxy /opt/MTProxy
- Allow traffic on port 8443 by opening the ports in the AWS Lightsail instance:
- Navigate to your AWS Lightsail instance
- In the Networking tab, under "IPv4 Firewall", click "Add rule"
- Add a rule for a "Custom" TCP protocol on 8443. Make sure "Duplicate rule for IPv6" is active
- Click "create"
- Since you're at it: close port 80, which is open by default, because we're not going to use it
If your instance uses the ufw
firewall, you will also need to do:
sudo ufw allow 8443/tcp
- Now we need to know our AWS instance's private and public IP to pass them to MTProxy.
All AWS instances are behind a NAT, and this causes the RPC protocol handshake to fail if a private-to-public network address translation is not passed explicitly to MTProxy as the --nat-info
param. If you don't do this, the proxy will look like it's running normally, but Telegram clients will not be able to connect, and the app will show a message like "Proxy unavailable" or an infinite "Connecting..." message.
If you don't know how to look up your AWS instance's public and private IPs, follow these steps
Once you have your private and public IP, which should look something like 170.10.0.200
and 18.180.0.1
, keep them at hand because you'll need them in a moment and continue.
- Set up a
systemd
service to run the proxy:
sudo nano /etc/systemd/system/MTProxy.service
Copy the following config, and make sure you edit it with your own params:
[Unit]
Description=MTProxy
After=network.target
[Service]
Type=simple
WorkingDirectory=/opt/MTProxy
ExecStart=/opt/MTProxy/mtproto-proxy -u mtproxy -p 8888 -H 8443 -S <YOUR_SECRET_FROM_STEP_11> --aes-pwd proxy-secret proxy-multi.conf -M 1 --http-stats --nat-info <YOUR_PRIVATE_IP>:<YOUR_PUBLIC_IP>
Restart=on-failure
[Install]
WantedBy=multi-user.target
Save and exit
- Reload the systemd daemons:
sudo systemctl daemon-reload
- Test the MTProxy service and verify it started just fine:
sudo systemctl restart MTProxy.service
After that check the proxy status, it should be active:
sudo systemctl status MTProxy.service
The proxy is ready!
You should now be able to connect to it inside Telegram by using a link like:
tg://proxy?server=<YOUR_PUBLIC_IP>&port=8443&secret=<YOUR_SECRET_FROM_STEP_11>
Or share it literally anywhere by using an HTTP link:
https://t.me/proxy?server=<YOUR_PUBLIC_IP>&port=8443&secret=<YOUR_SECRET_FROM_STEP_11>
If you want to enable random padding in your client, add dd
at the beginning of your secret (read more in the official MTProxy README)
Example of a proxy link with random padding enabled client-side:
https://t.me/proxy?server=<YOUR_PUBLIC_IP>&port=8443&secret=dd<YOUR_SECRET_FROM_STEP_11>
- Make sure you enable the service so the MTProxy starts even if you reboot the machine:
sudo systemctl enable MTProxy.service
- Set up a cron job to update
proxy-multi.conf
on a daily basis. Telegram recommends that proxies update their Telegram config information at least once a day, since it may change. To do that with the right permissions, we need to set up a cron job as the root user:
- Switch to the root user:
sudo su
- Open the root user's crontab file for editing:
crontab -e
If prompted, choose an editor (e.g., nano) to edit the crontab file.
- In the crontab file, add the following line at the end. This will set up the update task to be run every day at 4am. Since the proxy might experience a very short downtime while restarting the MTProxy service, choose a time (usually at night) that minimizes the effects of downtime.
0 4 * * * curl -s https://core.telegram.org/getProxyConfig -o /opt/MTProxy/proxy-multi.conf && chown -R mtproxy:mtproxy /opt/MTProxy && systemctl restart MTProxy.service
Save and exit the root session.
✨ Congrats!
Your proxy is now all set and will continue to be updated automatically!
You can leave it here – but there's more you can do, if you want.
For example, Telegram rewards people that set up proxies by allowing them to promote a channel of their choice to all users connected to the proxy. This channel shows up in the top of their chat list labeled as "Proxy Sponsor".
If you want to register your proxy to get usage statistics and set a promoted channel to proxy users, talk with Telegram's official MTProxybot – it will give you a tag
you can pass to mtproto-proxy
with the flag -P
in the systemd config from step 16, like this:
ExecStart=/opt/MTProxy/mtproto-proxy -u mtproxy -p 8888 -H 8443 -S <YOUR_SECRET_FROM_STEP_11> -P <YOUR_MTPROXYBOT_TAG> --aes-pwd proxy-secret proxy-multi.conf -M 1 --http-stats --nat-info <YOUR_PRIVATE_IP>:<YOUR_PUBLIC_IP>
This way MTProxyBot can keep track of your proxy's requests to compile stats and set the promoted channel.
As a finishing touch, you can also use your own domain/subdomain instead of your public instance's IP, for more readable proxy names and URLs, like what I did with my proxy telegramproxy.rameerez.com
:
https://t.me/proxy?server=telegramproxy.rameerez.com&port=8443&secret=dca82c6c73f2dbc3ca8e9045ee760283
To do this, just set up a DNS "A" record pointing to your proxy's IP. If you're using Cloudflare, make sure to turn off the proxy feature for that particular record. Oddly enough, the domain/subdomain you use can't contain hyphens; just stick to alphanumeric characters.
Thanks for reading and happy proxying!
personalizing
in this chapter I'll answer a few question about "how to personalize my ubuntu system ?" .
changing system / host name
to change the system name or host-name you can simply enter this command:
sudo hostnamectl set-hostname <my_new_hostname>
example :
sudo hostnamectl set-hostname boyshostname
sudo error after changing host-name
error after changing the hostname :
$ sudo su -
> sudo: unable to resolve host boyshostname: Name or service not known
Two things to check: /etc/hostname and /etc/hosts . they should have the same name for your system.
/etc/hostname
file contains just the machine name .
/etc/hosts
has an entry for localhost
. It should have something like
127.0.0.1 localhost
127.0.0.1 my-machine
go to settings > Accessibility > seeing > turn on the large text option .
desktop icons for applications
navigate to ~/.local/share/applications . there should be some files with the .desktop suffix. if there aren't, you can create one. this is an example of how to do it:
nano dev.zed.Zed-Preview.desktop
then put these lines in it :
[Desktop Entry]
Version=1.0
Type=Application
Name=Zed
GenericName=Text Editor
Comment=A high-performance, multiplayer code editor.
TryExec=/home/mahdi/.local/zed-preview.app/libexec/zed-editor
StartupNotify=true
Exec=/home/mahdi/.local/zed-preview.app/libexec/zed-editor %U
Icon=zed
Categories=Utility;TextEditor;Development;IDE;
Keywords=zed;
MimeType=text/plain;inode/directory;x-scheme-handler/zed;
Actions=NewWorkspace;
[Desktop Action NewWorkspace]
Exec=/home/mahdi/.local/zed-preview.app/libexec/zed-editor --new %U
Name=Open a new workspace
Icon=zed
you can specify the executable binary, the icon, the path, etc.
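if the new launcher doesn't show up right away, you can check the file for mistakes and refresh the database (this assumes the desktop-file-utils package is installed):
desktop-file-validate ~/.local/share/applications/dev.zed.Zed-Preview.desktop
update-desktop-database ~/.local/share/applications/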
Terminal Customization
I like my terminal to be like this :
uname@hostname:~
$
rather than this :
uname@hostname:~$
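one way to get the two-line prompt (a minimal sketch assuming the default bash shell; adjust to taste) is to set PS1 at the end of your ~/.bashrc :
PS1='\u@\h:\w\n\$ '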
you can also back up your terminal settings with this command:
dconf dump /org/gnome/terminal/ > ~/gnome_terminal_settings_backup.txt
Reset (wipe out) the settings before loading a new one (probably not really required):
dconf reset -f /org/gnome/terminal/
Load the saved settings:
dconf load /org/gnome/terminal/ < gnome_terminal_settings_backup.txt
your entire dconf database
it is stored in the single file ~/.config/dconf/user
.
My Setup
[legacy/profiles:]
default='b1dcc9dd-5262-4d8d-a863-c897e6d979b9'
list=['b1dcc9dd-5262-4d8d-a863-c897e6d979b9', 'adad0998-8247-48d8-a62b-6b5eb8f27dc9']
[legacy/profiles:/:adad0998-8247-48d8-a62b-6b5eb8f27dc9]
palette=['rgb(23,20,33)', 'rgb(192,28,40)', 'rgb(38,162,105)', 'rgb(162,115,76)', 'rgb(18,72,139)', 'rgb(163,71,186)', 'rgb(42,161,179)', 'rgb(208,207,204)', 'rgb(94,92,100)', 'rgb(246,97,81)', 'rgb(51,209,122)', 'rgb(233,173,12)', 'rgb(42,123,222)', 'rgb(192,97,203)', 'rgb(51,199,222)', 'rgb(255,255,255)']
use-theme-colors=true
visible-name='normal'
[legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9]
audible-bell=true
background-color='rgb(0,1,26)'
background-transparency-percent=51
font='Monospace 14'
foreground-color='rgb(0,172,161)'
highlight-colors-set=false
palette=['rgb(23,20,33)', 'rgb(192,28,40)', 'rgb(38,162,105)', 'rgb(162,115,76)', 'rgb(0,85,190)', 'rgb(163,71,186)', 'rgb(42,161,179)', 'rgb(208,207,204)', 'rgb(94,92,100)', 'rgb(246,97,81)', 'rgb(51,209,122)', 'rgb(233,173,12)', 'rgb(42,123,222)', 'rgb(192,97,203)', 'rgb(51,199,222)', 'rgb(255,255,255)']
use-system-font=false
use-theme-colors=false
use-theme-transparency=true
use-transparent-background=false
visible-name='mahdic200'
here is a shell script to import my color palette for yourself automatically :
#!/bin/bash
# Function to convert rgb(r, g, b) to hex format
rgb_to_hex() {
# Extract numbers from rgb(r, g, b) format
rgb=$1
numbers=$(echo $rgb | grep -o '[0-9]\+')
r=$(echo "$numbers" | sed -n '1p')
g=$(echo "$numbers" | sed -n '2p')
b=$(echo "$numbers" | sed -n '3p')
# Convert to hex
printf "#%02X%02X%02X" "$r" "$g" "$b"
}
# Function to get the profile UUID
get_profile_uuid() {
# First try the Mint/Cinnamon specific path
local uuid=$(gsettings get org.gnome.Terminal.ProfilesList default | tr -d "'")
if [ -z "$uuid" ]; then
# If that fails, try getting list of profiles
local profile_list=$(gsettings get org.gnome.Terminal.ProfilesList list)
# Get the first profile from the list if it exists
uuid=$(echo "$profile_list" | grep -o "'[^']*'" | head -1 | tr -d "'")
fi
if [ -z "$uuid" ]; then
# If still no UUID found, check alternative location
uuid=$(dconf list /org/gnome/terminal/legacy/profiles:/ | grep '^:' | head -1 | tr -d '/:')
fi
echo "$uuid"
}
# Function to update color setting
update_color_setting() {
local profile_uuid=$1
local setting_name=$2
local color_value=$3
# Try gsettings first
gsettings set "org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:$profile_uuid/" "$setting_name" "$color_value"
# If gsettings fails, try dconf
if [ $? -ne 0 ]; then
dconf write "/org/gnome/terminal/legacy/profiles:/:$profile_uuid/$setting_name" "'$color_value'"
fi
}
# Get the profile UUID
profile_uuid=$(get_profile_uuid)
if [ -z "$profile_uuid" ]; then
echo "Error: Could not find terminal profile UUID"
exit 1
fi
echo "Found profile UUID: $profile_uuid"
colors=("rgb(0,172,161)" "rgb(0,1,26)" "rgb(23,20,33)" "rgb(192,28,40)" "rgb(38,162,105)" "rgb(162,115,76)" "rgb(0,85,190)" "rgb(163,71,186)" "rgb(42,161,179)" "rgb(208,207,204)" "rgb(94,92,100)" "rgb(246,97,81)" "rgb(51,209,122)" "rgb(233,173,12)" "rgb(42,123,222)" "rgb(192,97,203)" "rgb(51,199,222)" "rgb(255,255,255)")
# Check if we have exactly 18 colors
if [ ${#colors[@]} -ne 18 ]; then
echo "Error: Expected 18 colors, but found ${#colors[@]}"
echo "Format should be: foreground, background, followed by 16 palette colors"
exit 1
fi
# Convert and set foreground color
foreground=$(rgb_to_hex "${colors[0]}")
echo "Setting foreground color to $foreground"
update_color_setting "$profile_uuid" "foreground-color" "$foreground"
# Convert and set background color
background=$(rgb_to_hex "${colors[1]}")
echo "Setting background color to $background"
update_color_setting "$profile_uuid" "background-color" "$background"
# Process palette colors (remaining 16 colors)
palette_colors=()
for i in {2..17}; do
hex=$(rgb_to_hex "${colors[$i]}")
palette_colors+=("'$hex'")
done
# Set the palette
color_array="[$(IFS=,; echo "${palette_colors[*]}")]"
echo "Setting color palette"
update_color_setting "$profile_uuid" "palette" "$color_array"
echo "Color settings updated successfully!"
here is an AI-generated bash script that reads a color.txt palette dynamically:
#!/bin/bash
# Function to convert rgb(r, g, b) to hex format
rgb_to_hex() {
# Extract numbers from rgb(r, g, b) format
rgb=$1
numbers=$(echo $rgb | grep -o '[0-9]\+')
r=$(echo "$numbers" | sed -n '1p')
g=$(echo "$numbers" | sed -n '2p')
b=$(echo "$numbers" | sed -n '3p')
# Convert to hex
printf "#%02X%02X%02X" "$r" "$g" "$b"
}
# Function to get the profile UUID
get_profile_uuid() {
# First try the Mint/Cinnamon specific path
local uuid=$(gsettings get org.gnome.Terminal.ProfilesList default | tr -d "'")
if [ -z "$uuid" ]; then
# If that fails, try getting list of profiles
local profile_list=$(gsettings get org.gnome.Terminal.ProfilesList list)
# Get the first profile from the list if it exists
uuid=$(echo "$profile_list" | grep -o "'[^']*'" | head -1 | tr -d "'")
fi
if [ -z "$uuid" ]; then
# If still no UUID found, check alternative location
uuid=$(dconf list /org/gnome/terminal/legacy/profiles:/ | grep '^:' | head -1 | tr -d '/:')
fi
echo "$uuid"
}
# Function to update color setting
update_color_setting() {
local profile_uuid=$1
local setting_name=$2
local color_value=$3
# Try gsettings first
gsettings set "org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:$profile_uuid/" "$setting_name" "$color_value"
# If gsettings fails, try dconf
if [ $? -ne 0 ]; then
dconf write "/org/gnome/terminal/legacy/profiles:/:$profile_uuid/$setting_name" "'$color_value'"
fi
}
# Check if file is provided
if [ $# -ne 1 ]; then
echo "Usage: $0 <color_file>"
echo "File should contain 18 colors in rgb(r,g,b) format, one per line:"
echo "Line 1: foreground color"
echo "Line 2: background color"
echo "Lines 3-18: palette colors"
exit 1
fi
# Check if file exists
if [ ! -f "$1" ]; then
echo "Error: File $1 does not exist"
exit 1
fi
# Get the profile UUID
profile_uuid=$(get_profile_uuid)
if [ -z "$profile_uuid" ]; then
echo "Error: Could not find terminal profile UUID"
exit 1
fi
echo "Found profile UUID: $profile_uuid"
# Read all colors
mapfile -t colors < "$1"
# Check if we have exactly 18 colors
if [ ${#colors[@]} -ne 18 ]; then
echo "Error: Expected 18 colors, but found ${#colors[@]}"
echo "Format should be: foreground, background, followed by 16 palette colors"
exit 1
fi
# Convert and set foreground color
foreground=$(rgb_to_hex "${colors[0]}")
echo "Setting foreground color to $foreground"
update_color_setting "$profile_uuid" "foreground-color" "$foreground"
# Convert and set background color
background=$(rgb_to_hex "${colors[1]}")
echo "Setting background color to $background"
update_color_setting "$profile_uuid" "background-color" "$background"
# Process palette colors (remaining 16 colors)
palette_colors=()
for i in {2..17}; do
hex=$(rgb_to_hex "${colors[$i]}")
palette_colors+=("'$hex'")
done
# Set the palette
color_array="[$(IFS=,; echo "${palette_colors[*]}")]"
echo "Setting color palette"
update_color_setting "$profile_uuid" "palette" "$color_array"
echo "Color settings updated successfully!"
timezone clock
to set your timezone from the command line, execute this command:
sudo timedatectl set-timezone <timezone>
for example :
sudo timedatectl set-timezone Europe/Amsterdam
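if you are not sure about the exact timezone name, you can search the full list and then check the current settings:
timedatectl list-timezones | grep -i amsterdam
timedatectl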
remote server connection
anything you need to know about remote connections .
SCP file transportation
scp -4 -P 9011 <host_file_path> [username]@[host_ip]:<remote_upload_path>
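for example (hypothetical paths and IP), to upload a local archive to a server whose SSH listens on port 9011, using IPv4 only:
scp -4 -P 9011 ./backup.tar.gz root@203.0.113.10:/var/backups/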
tutorials
in this section I've put a bunch of tutorials for Linux users that I wish somebody had told me when I was a beginner.
Command line shortcuts
Navigate without the arrow keys
Ctrl+A
: you can move the cursor to the beginning of the line .
Ctrl+E
: to move the cursor to the end of the line .
Alt+F
: moves one word forward .
Alt+B
: moves one word back .
Don't use the backspace or delete keys
Ctrl+U
: instead of Backspace . to erase everything from the current cursor position to the beginning of the line .
Ctrl+K
: instead of Delete . to erase everything from the current cursor position to the end of the line .
this section is related to web development. that's because I'm a web developer, and this section will be very handy for other web developers too.
deploy laravel on nginx
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository ppa:ondrej/nginx-mainline
sudo apt update
if anything goes wrong, just install ppa-purge , remove ppa:ondrej/nginx-mainline , and re-enter the above commands.
sudo apt install php8.1 php8.1-fpm php8.1-cli php8.1-common php8.1-mysql php8.1-zip php8.1-gd php8.1-mbstring php8.1-curl php8.1-xml php8.1-bcmath
general command :
sudo apt install php php-fpm php-cli php-common php-mysql php-zip php-gd php-mbstring php-curl php-xml php-bcmath
now configure php for your site nginx settings :
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
here is a full example :
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
root /path/to/your/laravel/project/public;
index index.php index.html index.htm;
access_log /var/log/nginx/yourdomain.com_access.log;
error_log /var/log/nginx/yourdomain.com_error.log;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
location ~ /\.ht {
deny all;
}
# Increase file upload limit
client_max_body_size 100M;
}
I'm so confused and tired at this point that I'm writing this note for myself. after a lot of hard work I configured my website mahdic200.ir with SSL and DNS, all configured by hand by myself.
run these commands in shell :
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx
you'll be asked to answer a few questions . don't worry answer all of them .
sudo certbot renew --dry-run
now go to your nginx domain conf file or the default file. for example:
sudo nano /etc/nginx/sites-available/domain.ir
there should be some code like this :
server {
listen 80;
listen [::]:80;
listen 443 ssl;
listen [::]:443 ssl;
server_name domain.ir www.domain.ir;
ssl_certificate /etc/letsencrypt/live/mahdic200.ir/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mahdic200.ir/privkey.pem;
}
now save the file and exit. first test the configuration, then restart nginx:
sudo nginx -t
if everything is ok :
sudo systemctl restart nginx
Nginx HTTPS redirection
just separate the listening for ports 80 and 443 into two server blocks in the domain's config file.
server {
listen 80;
listen [::]:80;
server_name domain www.domain;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name domain www.domain;
ssl_certificate /path/to/fullchain.pem;
ssl_certificate_key /path/to/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
}
introduction to Linux Local data storage system
in linux we have a few solutions for git authorization. note that all your personal data is stored in your home folder (~) in hidden folders (folders whose names start with a . ). for example all your vscode data, git data, snap data, ssh keys etc. are stored in this format.
the rest of this page follows this article tutorial as its reference.
Personal Access Token (PAT) based authentication
I won't explain how to create a Personal Access Token (PAT). I'll explain how to store it on your local Linux desktop machine (the creation instructions are in the reference, don't worry, just read the reference).
to temporarily cache the token in Linux use the following command :
git config --global credential.helper cache
Note, however, that the token is only cached for 15 minutes by default. If you want the token to be cached for a longer period of time, add the cache --timeout=XX option, where XX is the time in seconds. For example, to cache the token for 24 hours (86,400 seconds), type:
git config --global credential.helper 'cache --timeout=86400'
to permanently cache the token on Linux, type:
git config credential.helper store
The next time you are prompted for your GitHub user name and token, the information will be stored permanently in a .git-credentials
file under your home
folder. Note, however, that this file is not encrypted. For a more secure permanent solution, you might want to check the SSH based authentication option.
SSH based authentication
[!WARNING] you can't use the usual remote anymore! NOTE: if you adopt an SSH based approach to authentication, you will need to connect to your repo via SSH. For example, if user someguy 's repo name is someproject , you would connect to it via:
git@github.com:someguy/someproject.git
warning
this differs from the HTTPS option adopted in this workshop:
https://github.com/someguy/someproject.git
checking for existing SSH keys
you might or might not already have public keys under the ~/.ssh folder.
ls -al ~/.ssh
If you do, look for the files ending with .pub
. The contents of these public keys are used to link your local repos to your GitHub account. By default, the filenames of the public keys are one of the following: id_ed25519.pub
or id_rsa.pub
. If these files exist in your ~/.ssh
folder, you can jump to the Adding SSH key to your GitHub account section of this tutorial.
Generating a new SSH key
follow these instructions
ssh-keygen -t <key_type> -C "<your_github_email_address>"
example:
ssh-keygen -t rsa -b 4096 -C "someguy@gmail.com"
example :
ssh-keygen -t ed25519 -C "someguy@gmail.com"
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/user/.ssh/id_ed25519):
This creates a new SSH key, using the provided email as a label. Accept the default file location and press the Enter
key.
At the prompt, type a secure passphrase. This passphrase will be used instead of your password when performing a Git/GitHub transaction from your computer, so don’t forget it!
> Enter passphrase (empty for no passphrase): [Type a passphrase]
> Enter same passphrase again: [Type passphrase again]
You will then see an output similar to this:
Your identification has been saved in /home/user/.ssh/id_ed25519
Your public key has been saved in /home/user/.ssh/id_ed25519.pub
The key fingerprint is:
SHA256:AuErG+8I8YUkRbNn1iNiGB/T3P6p4oWtmHA821i3bPO user@my_college.edu
The key's randomart image is:
+--[ED25519 256]--+
| ..o.o |
|.o . o o o |
|o O o . o . . |
|.* * o . . . o |
|*++ + . S . E |
|**oo . . . |
|*++. . o |
|o=o. . .o. |
|+o. ..o.. |
+----[SHA256]-----+
Adding SSH key to the ssh-agent process
Next, you need to add the previously generated key to a process called ssh-agent
.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_<the_id_picked>
for example, without any suffix like .pub :
ssh-add ~/.ssh/id_ed25519 # or ~/.ssh/id_rsa
You’ll then be prompted to enter the passphrase used in the earlier step.
Enter passphrase for /home/user/.ssh/id_ed25519:
Identity added: /home/user/.ssh/id_ed25519 (someguy@gmail.com)
Adding SSH key to your GitHub account
In ~/.ssh
you should see a file ending with .pub
such as id_ed25519.pub
. This is the public key generated earlier in this tutorial that you will share with your GitHub account.
Open the contents of the ~/.ssh/id_ed25519.pub
file in your home folder.
cat ~/.ssh/id_ed25519.pub
Copy all its contents. It should start with ssh-ed25519 ...
and end with your email address.
- On GitHub, click on your avatar, then select Settings.
- On the left sidebar, click on SSH and GPG keys.
- Click on New SSH key.
- Assign a Title for this key (this is only used for your reference but it should be descriptive enough for you to know which client computer it is referencing).
- Paste the key from the
.ssh/id_ed25519.pub
file in the Key field (be careful not to add empty spaces).
- Click Add SSH key. You might be prompted to type your GitHub password.
Next, you can test your connection from your Bash environment.
In Bash, type the following (do not substitute the git@github.com
address).
ssh -T git@github.com
You might see the following warning (including the public key as shown below):
The authenticity of host 'github.com (140.82.114.3)' can't be established.
RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
Are you sure you want to continue connecting (yes/no/[fingerprint])?
Type yes
to continue.
The next warning should list your account name (e.g. user
in this working example).
Warning: Permanently added 'github.com' (RSA) to the list of known hosts.
Hi user! You've successfully authenticated, but GitHub does not provide shell access.
Cloning a GitHub repo using SSH
At this point, you should be all set. As mentioned at the beginning of this page, when using SSH to connect to your GitHub repo, you need to use the SSH protocol. For example, to clone repo1
, you would type:
git clone git@github.com:someguy/someproject.git
You might be prompted for the passphrase that was used in an earlier step when you created the SSH key.
Converting an existing HTTPS local repo to SSH
If you’ve already cloned a repo using HTTPS on your local computer, you will need to make a few changes to your local repo before benefitting from the SSH system.
First, check that you are indeed using HTTPS:
git remote -v
origin https://github.com/user/repo1.git (fetch)
origin https://github.com/user/repo1.git (push)
To change from a HTTPS URL to a SSH URL, type:
git remote set-url origin git@github.com:user/repo1.git
there is a problem when you open your git repository: all files seem to be changed, but when you inspect them, git tells you that only the file permissions have changed.
to solve that issue go to that repository's folder and run this command :
git config core.fileMode false
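if you want this behaviour for every repository on your machine, you can optionally set it globally:
git config --global core.fileMode false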
import sql from command line
open a shell :
mysql -u <username> -p <database_name> < /path/filename.sql
example :
mysql -u mahdi -p admin_db < /home/mahdi/backup.sql
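for the reverse direction (creating such a .sql backup in the first place), mysqldump follows the same pattern:
mysqldump -u mahdi -p admin_db > /home/mahdi/backup.sql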
laravel broadcasting instructions
first of all configure laravel queue workers; the easiest way is to use the database driver.
php artisan queue:table
php artisan migrate
then go to the .env
file and change the QUEUE_CONNECTION=
line to QUEUE_CONNECTION=database
when you want a job to actually get done, you have to run a worker. you can read about this in the documentation, but first learn this command:
php artisan queue:work
if you have priority you can set priority like this :
php artisan queue:work --queue=high,default
in this example , the high
queue has more importance than default
queue .
you have to use supervisor for managing the above command, because it might fail at some point and then your workers won't work anymore.
so the next thing is to install supervisor
(debian) :
sudo apt install supervisor
now we have to configure it :
sudo nano /etc/supervisor/conf.d/laravel-worker.conf
and the content might be :
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/forge/app.com/artisan queue:work sqs --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=forge
numprocs=8
redirect_stderr=true
stdout_logfile=/home/forge/app.com/worker.log
stopwaitsecs=3600
you have to change the forge user and the command according to your own user and project path.
activating :
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start "laravel-worker:*"
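you can check that the workers are actually running with:
sudo supervisorctl status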
you have to apply broadcasting for your event :
<?php
namespace App\Events;
use App\Models\Order;
use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Queue\SerializesModels;
class OrderShipmentStatusUpdated implements ShouldBroadcast
{
/**
* The order instance.
*
* @var \App\Models\Order
*/
public $order;
}
and this is queued by default, which is why you needed the worker setup mentioned above.
and the last thing , you have to use soketi
websocket server and pusher
driver for your laravel application .
first of all, go to api.php and, above all routes, write this:
<?php
use Illuminate\Support\Facades\Broadcast;
Broadcast::routes(['middleware' => ['auth:sanctum']]);
I assumed you are using sanctum as your api authentication package .
and then :
composer require pusher/pusher-php-server
and change the BROADCAST_DRIVER=
to BROADCAST_DRIVER=pusher
in .env
file .
you have to change the broadcasting.php
file too :
'connections' => [
// snip
'pusher' => [
'driver' => 'pusher',
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'app_id' => env('PUSHER_APP_ID'),
'options' => [
'cluster' => env('PUSHER_APP_CLUSTER'),
'host' => env('PUSHER_HOST') ?: 'api-'.env('PUSHER_APP_CLUSTER', 'mt1').'.pusher.com',
'port' => env('PUSHER_PORT', 443),
'scheme' => env('PUSHER_SCHEME', 'https'),
'encrypted' => false,
'useTLS' => env('PUSHER_SCHEME', 'https') === 'https',
],
],
// snip
],
and on your server:
$ node --version
v18.20.6
you must have nodejs version 14, 16 or 18 LTS.
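soketi itself is installed globally through npm (the package name, as far as I remember from the soketi docs, is @soketi/soketi):
npm install -g @soketi/soketi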
and you may start it via supervisor
. make a configuration file like /etc/supervisor/conf.d/soketi.conf
and a config file for soketi /etc/soketi/config.json
:
[program:soketi]
process_name=%(program_name)s_%(process_num)02d
command=soketi start --config=/path/to/config.json
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
user=mahdi
numprocs=1
redirect_stderr=true
stdout_logfile=/var/log/soketi-supervisor.log
stopwaitsecs=60
stopsignal=sigint
minfds=10240
and contents of the config.json
file :
{
"debug": true,
"port": 6001,
"appManager.array.apps": [
{
"id": "id",
"key": "api-key",
"secret": "secret",
"webhooks": [
{
"url": "http://127.0.0.1",
"event_types": ["channel_occupied"]
}
]
}
]
}
you can generate the id , api-key and secret with the openssl tool in Linux, or by manually mashing your keyboard. but let me show you how you can do it:
the code below produces a random 32-character string in base64 :
$ openssl rand -base64 24
d0wU1tBWSwdApGSkR/A3VdtPHGv6xSTTZKB1zCKqS24=
notice that base64 encodes every 3 bytes as 4 characters, so the 24 random bytes in this example become a 32-character string (24 × 4 / 3 = 32).
done , good luck :) !
web development issues
go to dash.cloudflare.com. register a new website from the left sidebar. cloudflare will give you two DNS nameservers. set them in your domain registrar's panel. this process needs to be validated, which can take up to 4 hours. after that, go to your websites list in the cloudflare dashboard and click on the website.
then just go to the DNS records section of your website. add two records: one A record with value @ pointing to your VPS IPv4 address (with the proxy checkbox disabled), and another A record with value www pointing to the same IP address (proxy disabled as well).
that should work, BUT, PLEASE, I'm begging you: BE CAREFUL when entering that GOD DAMN IPv4 address. I put in the wrong IP address and went into complete chaos for 3 god damn days. copy the IP address from your vps login session and do not trust your eyes or memory! good luck :)
just use pm2
npm install -g pm2
starting a nextjs server:
navigate to your project folder and enter this :
sudo nano eco.config.js
module.exports = {
apps: [
{
name: "nextjs-app", // Name of the application
script: "npx", // The script to execute
args: "next start", // Arguments for the script
cwd: "./", // Current working directory
env: {
NODE_ENV: "production" // Environment variables
},
// Set the maximum memory usage before restarting the application
max_memory_restart: "300M" // Restart the application if it uses more than 300MB of RAM
}
]
}
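then start the app with that config file (using the file name chosen above):
pm2 start eco.config.js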
listing processes :
pm2 ls
saving processes to execute in boot time :
pm2 save
monitoring :
pm2 monit
connecting nodejs server to domain
replace the 3000 port with your nodejs server port :
server {
# ... config
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
# ... config
}
mysql installation
digital ocean is the best : https://www.digitalocean.com/community/tutorials/how-to-install-mysql-on-ubuntu-20-04
sudo apt update
sudo apt install mysql-server
sudo systemctl start mysql.service
sudo mysql
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
exit mysql :
exit
mysql -u root -p
ALTER USER 'root'@'localhost' IDENTIFIED WITH auth_socket;
sudo mysql_secure_installation
CREATE USER 'username'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
to grant a single privilege to a user for a database's table :
GRANT PRIVILEGE ON database.table TO 'username'@'localhost';
to grant all privileges for all databases to a user :
GRANT ALL PRIVILEGES ON *.* TO 'username'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Reference
see this page if you want the full reference.
Installation using apt
adding docker to apt repository:
# Add Docker's official GPG key:
sudo apt-get update # do this if its necessary
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
install :
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
you have to set a proxy for docker (if you are in Iran like me :) read this page: [[docker proxy]]). sign up to docker and copy your password, for logging into your account from the CLI:
create a file and paste your password WITHOUT a trailing \n character, OK? or simply paste your password into the file and delete the next line. just your password.
nano ~/my_password.txt
cat ~/my_password.txt | docker login --username <YOUR_USERNAME> --password-stdin
just ignore the warning .
run this command to verify your installation:
sudo docker pull hello-world && \
sudo docker run hello-world
Configure Docker to start on boot with systemd
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
to disable this, just run:
sudo systemctl disable docker.service
sudo systemctl disable containerd.service
/etc/docker/daemon.json # daemon config file
/etc/systemd/system/docker.service.d/ # drop-in folder
~/.docker/config.json # for proxy
docker run -d --cpus="0.5" --memory="512m" your_container_image
optimize docker objects :
docker system prune -a
xampp on ubuntu
to install xampp on ubuntu and debian 12 you need to install this package first:
sudo apt install net-tools
and also mysql-server (this is the full reference on digital ocean):
wget https://dev.mysql.com/get/mysql-apt-config_0.8.30-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.30-1_all.deb
sudo apt update
sudo apt install mysql-server
and then you need to download the installer run file from the apache friends website and get the latest version for linux.
then you need to make the installer executable and run it:
sudo chmod 755 [package_name]
then sudo ./[package_name]
for running xampp's graphical manager you can create a small sh file that runs this command for you:
sudo /opt/lampp/manager-linux-x64.run
linking XAMPP's PHP into ubuntu
sudo ln -s /opt/lampp/bin/php /usr/bin/php
starting xampp in terminal
for starting xampp on ubuntu you have to enter a command like this (from the /opt/lampp directory):
sudo ./xampp start
for stopping xampp you have to do something like this:
sudo ./xampp stop
creating a systemd service for xampp
or you can create a service for xampp, create a new file :
sudo nano /etc/systemd/system/xampp.service
[Unit]
Description=XAMPP Service
After=network.target
[Service]
Type=forking
ExecStart=/opt/lampp/lampp start
ExecStop=/opt/lampp/lampp stop
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
save the file in nano ctrl + X
and Y
then Enter .
reload the systemd manager :
sudo systemctl daemon-reload
enable the service to start on boot: sudo systemctl enable xampp.service
start the service: sudo systemctl start xampp.service
stop the service: sudo systemctl stop xampp.service
check the status: sudo systemctl status xampp.service
accessing android interface
use scrcpy . please don't forget to read the README file for the prerequisites and the to-dos for your specific distribution and system (Linux, Windows, macOS etc.).
extra details
I won't provide any extra details as their documentation is so complete and doesn't need any extra explanations :) .
add user to group
sudo usermod -aG <group_name> <username>
adding or removing users
sudo adduser <username>
making it a sudoer
sudo usermod -aG sudo <username>
verifying operation :
groups <username>
removing a user
if you wanna completely remove that poor user :
sudo deluser --remove-home <username>
if you wanna keep files :
sudo deluser <username>
when you run sudo apt update and you see a repository name that belongs to an application or package you have already uninstalled from your ubuntu machine, you have a few options.
/etc/apt/sources.list.d/ directory
navigate to this directory :
cd /etc/apt/sources.list.d
then find the package name with find :
sudo find ./ -name '*package_name*'
and after this :
sudo grep -r 'package_name'
and just BE CAREFUL! and CAREFULLY remove those files.
bash auto completion usage
when you have an auto completion file and you want to use it , you have to put the file in the /usr/share/bash-completion/completions/
folder .
then you have to edit the ~/.bashrc file and add these lines at the end of it. for example, I have an mdbook bash auto completion file I want to use: first I put the file in the /usr/share/bash-completion/completions folder, and then I edit the .bashrc file.
// snip
# mdbook auto completion file loading
source /usr/share/bash-completion/completions/mdbook
then save the file and exit the terminal. now enter this command in the shell:
. ~/.profile
enter command :
history -c
best clipboards for ubuntu :
- Clipit
- Diodon
- Gpaste-2
installation guide :
# for clipit
sudo apt install clipit
# for diodon
sudo apt install diodon
# for gpaste-2
sudo apt install gpaste-2
Guide
if apt is not WORKING CORRECTLY, please consider reading these pages I wrote for apt proxy settings, but take care of these things before going to those pages:
- first set up a proxy for your ubuntu (explained on the first page)
- connect apt to your local proxy settings in ubuntu (127.0.0.1)
pages :
- [[proxy in ubuntu]]
- [[apt network issues]]
compression in linux
compression
tar -czvf archive_name.tar.gz /path/to/directory
decompression
tar -xzvf archive_name.tar.gz
list archive
if you want to take a quick look at the archive contents you can use the simple -t option, which is an alias for the --list option. let me give you an example:
tar -tf archive.tar.xz
exclude
sometimes you just want to compress your folders and files but with one or more exceptions . you may simply use --exclude
. let me give you an example :
tar --exclude=pattern -cJvf archive.tar.xz /folder1 /folder2 /folder3
Handy Linux Commands
1. Compressing and Decompressing in Linux
Compressing a Directory
Using Gzip
tar -czvf archive_name.tar.gz /path/to/directory
Using Bzip2
tar -cjvf archive_name.tar.bz2 /path/to/directory
Using XZ
tar -cJvf archive_name.tar.xz /path/to/directory
decompressing
Decompressing Gzip Archive (.tar.gz)
tar -xzvf archive_name.tar.gz
Decompressing Bzip2 Archive (.tar.bz2)
tar -xjvf archive_name.tar.bz2
Decompressing XZ Archive (.tar.xz)
tar -xJvf archive_name.tar.xz
2. Changing Permissions
Change the Permissions of All Folders Inside a Directory
find /path/to/parent_directory -type d -exec chmod 755 {} \;
Change the Permissions of All Files Inside a Directory
find /path/to/parent_directory -type f -exec chmod 644 {} \;
Compressing and Decompressing with 7z
Installing 7zip
sudo apt install p7zip-full
Compressing
usage example :
7z a archive_name.7z /path/to/directory
Decompressing
7z x archive_name.7z
it happens that ubuntu sometimes can't play a specific audio or video format. there is no problem with Rhythmbox or other players, but you have to install a bunch of libraries on your ubuntu desktop.
GStreamer library
it is a powerful framework for supporting audio and video formats . some of its components are installed by default , but for wider support you need to install all of its parts . this is the reference link , but if you don't have time , this is for you :
sudo apt install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstreamer-plugins-bad1.0-dev gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
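to verify the installation , gst-inspect-1.0 (from the gstreamer1.0-tools package above) can show what is available :
# print the gstreamer version
gst-inspect-1.0 --version
# list every installed plugin and element
gst-inspect-1.0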
unable to mount PATH to /media/USER/PATH
mount manually
find your drive name
sudo fdisk -l
sudo mkdir /media/USER/pick_a_name
then mount the drive
sudo mount FULL_PATH_OF_DRIVE /media/USER/picked_name
unmount with umount
sudo umount /media/USER/picked_name
done :)
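a worked (hypothetical) example , assuming the drive showed up as /dev/sdb1 and your username is mahdi :
sudo fdisk -l
sudo mkdir -p /media/mahdi/usb
sudo mount /dev/sdb1 /media/mahdi/usb
# and when you are done :
sudo umount /media/mahdi/usb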
Read only file system error
ntfs file systems in a dual-boot setup can be left in a dirty state (for example by windows fast startup or hibernation) , and linux then mounts them read-only . you can fix them like this :
sudo apt install ntfs-3g
sudo ntfsfix -b -d /path/to/drive
debian suspend crashing
there is a problem with debian : when you suspend your laptop , you have to close and open the lid to wake it , or the laptop just tries to turn the monitor on but refuses to , and after hard-resetting the system you will see thousands of error logs with the journalctl | grep -i suspend
command . how to fix it ? follow this note .
remove nvidia drivers
there is nothing wrong with debian ; you have probably installed a bunch of nvidia drivers , which is a really bad move . you have to uninstall all of them .
sudo apt remove --purge 'nvidia*'
for me , it fixed the issue .
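to confirm nothing is left behind and to clean up unused dependencies , you can run :
dpkg -l | grep -i nvidia
sudo apt autoremove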
navigate to your ~/.local/bin
folder (create it first if it does not exist) :
mkdir -p ~/.local/bin && cd ~/.local/bin
then make two files named nson
and nsoff
respectively :
touch nson nsoff
then give them permission to become executable :
chmod +x nson nsoff
nson file content
#!/bin/bash
# Variables for DNS addresses
DNS1="185.51.200.2"
DNS2="178.22.122.100"
# Get the name of the WiFi interface
WIFI_INTERFACE=$(nmcli device status | grep wifi | awk '{print $1}')
# Get the name of the active WiFi connection
WIFI_CONNECTION=$(nmcli -t -f active,ssid dev wifi | grep '^yes' | cut -d: -f2)
# Set the DNS for the WiFi connection
nmcli con mod "$WIFI_CONNECTION" ipv4.dns "$DNS1 $DNS2"
nmcli con mod "$WIFI_CONNECTION" ipv4.ignore-auto-dns yes
# Restart the network connection to apply changes
nmcli con down "$WIFI_CONNECTION" && nmcli con up "$WIFI_CONNECTION"
echo "DNS for WiFi connection '$WIFI_CONNECTION' set to $DNS1 and $DNS2."
nsoff file content
#!/bin/bash
# Get the name of the WiFi interface
WIFI_INTERFACE=$(nmcli device status | grep wifi | awk '{print $1}')
# Get the name of the active WiFi connection
WIFI_CONNECTION=$(nmcli -t -f active,ssid dev wifi | grep '^yes' | cut -d: -f2)
# Clear the DNS settings for the WiFi connection
nmcli con mod "$WIFI_CONNECTION" ipv4.dns ""
nmcli con mod "$WIFI_CONNECTION" ipv4.ignore-auto-dns no
# Restart the network connection to apply changes
nmcli con down "$WIFI_CONNECTION" && nmcli con up "$WIFI_CONNECTION"
echo "DNS settings for WiFi connection '$WIFI_CONNECTION' have been cleared."
go into the folder where you want to download the file . forcing IPv4 with --inet4-only can be much faster when IPv6 is misconfigured :
wget --inet4-only <URL>
firewall tutorial
digital ocean : https://www.digitalocean.com/community/tutorials/ufw-essentials-common-firewall-rules-and-commands
inspection
sudo ufw status
enable
sudo ufw enable
disable
sudo ufw disable
Block an IP Address
sudo ufw deny from 203.0.113.100
Delete a rule
sudo ufw delete allow from 203.0.113.101
list available application profiles
sudo ufw app list
enable application profile
sudo ufw allow "OpenSSH"
disable application profile
to disable a profile , delete the allow rule you created . example :
sudo ufw delete allow 22/tcp
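a couple of extra rule patterns in the same style (the addresses are just documentation examples) :
# allow SSH only from a specific subnet
sudo ufw allow from 203.0.113.0/24 to any port 22
# allow HTTPS from anywhere
sudo ufw allow 443/tcp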
font config not found
I was trying to compile my Rust application on Ubuntu Linux , and I encountered an error :
library fontconfig for development not found !
and I searched a bit through the web ; it seems that linux systems ship some libraries which are important and essential for development , and I didn't have one of them , called fontconfig
, whose development package is libfontconfig-dev
. I searched how to install it and a page came up for the ubuntu distribution ; it was listed on that page .
then I simply installed it via :
sudo apt install libfontconfig-dev
you can simply verify installation via pkg-config --modversion fontconfig
digital ocean : https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-20-04
setting up an SSH key instead of a password
digital ocean : https://www.digitalocean.com/community/tutorials/how-to-set-up-ssh-keys-on-ubuntu-20-04
digital ocean : https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-laravel-with-nginx-on-ubuntu-20-04
uname
dpkg --print-architecture
sudo chown [username] [filename]
sudo apt install python3 python3-venv
python3 -m venv <PATH>
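a minimal usage sketch (replace <PATH> with the folder you passed to venv above) :
# activate the virtual environment
source <PATH>/bin/activate
# install packages , run python , etc. , then leave it again
deactivate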
rustup docs opens telegram
https://users.rust-lang.org/t/rustup-doc-opens-signal-instead-of-the-browser/45352/7
see this link https://github.com/lucasresck/gnome-shell-extension-alt-tab-scroll-workaround
it will help .
for files :
ln -s [FILE] [LINK]
for directories (hard links to directories are not allowed , so use a symbolic link here too) :
ln -s [DIR] [LINK]
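a small hypothetical example : link a notes folder onto the desktop , then check it with ls -l :
ln -s ~/projects/notes ~/Desktop/notes
ls -l ~/Desktop/notes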
"startup applications" app in ubuntu shows the list of start up apps . there is a professional way to edit this list . if you are willing to be a professional .
in your home directory which is indicated by ~
symbol in documentations , there is a hidden folder named ".config" and has a child folder named "autostart" which has files with postfix "desktop" .
these files are list of config files for each program which should be run when a user logs in . inside of them are some config lines .
you can create , delete and edit these files . there is an attribute called Hidden
and the value is boolean (true , false) .
Hidden=false
if this attribute has the value true , the entry is ignored at login and hidden from the startup apps list ; change it to false if you want to see it there again .
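to see what one of these files looks like , here is a minimal sketch of a hypothetical entry , say ~/.config/autostart/myapp.desktop (the Name and Exec values are just examples) :
[Desktop Entry]
Type=Application
Name=My App
Exec=/usr/bin/myapp
Hidden=false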
systemctl suspend
symlink issue on NTFS
when you have a partition with the NTFS filesystem type , you may encounter errors when trying to create symlinks on it . to avoid this problem and fix the issue , you may want to do these steps . by default , the linux kernel mounts these partitions using the ntfs3 driver , but you can use the ntfs-3g driver , which is a more mature driver , to mount this type of file system .
find the partition UUID :
sudo blkid
it looks something like this :
/dev/nvme0n1p5 UUID="XXXX-YYYY" TYPE="ntfs"
make a note of UUID .
edit the /etc/fstab
sudo cp /etc/fstab /etc/fstab.backup
sudo nano /etc/fstab
add the following to the /etc/fstab
file :
change the E_DRIVE path to your own path , and the uuid to your own uuid .
UUID=XXXX-YYYY /media/mahdi/E_DRIVE ntfs-3g uid=1000,gid=1000,umask=022,permissions 0 0
then mount the partition with your gui (gnome , in this case) .
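if you would rather test it from the terminal instead of the gui , a quick sketch :
sudo mount /media/mahdi/E_DRIVE   # or : sudo mount -a
findmnt /media/mahdi/E_DRIVE      # ntfs-3g mounts usually show up as fuseblk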
use the jobs
command to see which job(s) you have suspended .
you can bring a job back to the foreground with the fg
command .
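for example :
# suspend a running program with Ctrl + Z , then :
jobs      # e.g.  [1]+  Stopped   nano notes.txt
fg %1     # bring job number 1 back to the foreground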
ubuntu boot problems
ubuntu destroys / deletes the windows boot loader
solving problem using boot repair
boot a light (live) ubuntu system on the pc and then :
- install boot repair following this documentation : askubuntu question
- run boot repair
- go to Advanced Options -> Other Options tab -> Repair Windows boot files.
- The boot flag should be placed on the same partition on which Ubuntu is installed
- The partition on which Ubuntu is installed can be identified from the Disks application which is built-in in Ubuntu
- If you're unable to select the Repair Windows boot files option because it's grayed out
sudo add-apt-repository ppa:yannubuntu/boot-repair
sudo apt-get update
sudo apt-get install -y boot-repair
sudo boot-repair
- Open the Boot Repair application, try the recommended repair first, then restart
- If Windows still doesn't show in the Grub menu, boot into Ubuntu and open Boot Repair again
- click advanced options, and in the second tab from the left (Grub location), select the dropdown menu for the option: OS to boot by default and select Windows (via the sda5 menu)
- Restart the computer. In the Grub menu, quickly tap down to highlight Windows and press Enter; tapping down also cancels the 5 s timeout before it automatically boots the highlighted entry.
solving problem using Rescatux
Rescatux is a free bootable live CD/USB that can repair GRUB and the Windows bootloader. Rescatux has a graphical interface with a menu of operating system rescue tasks. If your hard disk has the MBR partitioning format, you can select the Restore Windows MBR (BETA) option to repair the Windows bootloader. If your computer has UEFI firmware, you can select among the UEFI boot options.
expert tools
- boot repair
- GParted
- OS-Uninstaller
- Clean-Ubiquity
- PhotoRec
- TestDisk
Alternative options
Create a file, /boot/grub/custom.cfg (by running sudo -H gedit /boot/grub/custom.cfg) with these contents:
#This entry should work for any version of Windows installed for UEFI booting
menuentry "Windows (UEFI)" {
search --set=root --file /EFI/Microsoft/Boot/bootmgfw.efi
chainloader /EFI/Microsoft/Boot/bootmgfw.efi
}
go to settings and search : workbench: external browser