CLI EDITING SHORTCUTS
Ctrl-A: go to the beginning of the line
Ctrl-E: go to the end of the line
Alt-B: skip one word backward
Alt-F: skip one word forward
Ctrl-U: delete to the beginning of the line
Ctrl-K: delete to the end of the line
Alt-D: delete to the end of the word
----
BASH LOCAL VARIABLES
Variable names autocomplete (type $fru and press Tab).
To make the 'fruit' variable available in subshells:
export fruit
To make that automatic:
set -o allexport
echo $SHELLOPTS
To show all the variables:
env
To show only the user defined variables:
declare | grep '^[[:lower:]]' | grep -v '^colors' | sort
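A minimal sketch of the export behaviour, reusing the 'fruit' variable from above:
fruit=apple
bash -c 'echo "fruit is: $fruit"'   # prints only 'fruit is: ', plain variables are not inherited by subshells
export fruit
bash -c 'echo "fruit is: $fruit"'   # prints 'fruit is: apple' once exported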
TO SHORTEN LINUX PROMPT
http://askubuntu.com/questions/145618/how-can-i-shorten-my-command-line-bash-prompt
PS1='\u:\W\$ '
PACK AND UNPACK FILES (compress and uncompress)
Note that tar by itself just packs; it does not compress. We need a flag such as z to compress. Also, tar is oriented to packing folders; if we want to compress a single file we just use gzip <file>.
tar -czvf vmx.tar.gz /root/vmx-test/   # pack a whole folder
tar -xzvf ansiblepi.tar.gz             # unpack that same folder
tar -xvf docs.tar                      # extract; the folder tree is recreated relative to where we are
tar -xvf cvd.tar.gz
tar xvzf cvd.tar.gz -C /path/to/parent/dir   # untar into a different directory
We specify the destination file (note the .gz, due to the z flag) and then the files/dirs:
tar -vzcf file.tar.gz filedir1 filedir2 filedir2...
If we also want to encrypt, we have to use zip:
zip --password MY_SECRET secure.zip doc.pdf doc2.pdf doc3.pdf
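To extract it again non-interactively, a sketch (-P supplies the same password on the command line; note that classic zip encryption is weak):
unzip -P MY_SECRET secure.zip -d extracted/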
PACK KEEPING FILE TREE
tar -cjvf files.tar.bz2 -C directory/contents/to/be/compressed .
UNPACK
tar -xjvf files.tar.bz2 -C /destination/dir
extract on a different destination:
tar -xzvf data.tar.gz ; tar -xzvf ShInstUtil.exe.tar.gz -C testunzip/
We specify destination folder:
tar -vxf file.tar <-- to uncompress (add z when dealing with gzipped files)
unzip algorithmicx.zip -d test <-- for zip files to a predetermined folder
gzip -dc firewall.log-20150530.gz > /var/tmp/firewall.log
unzip redistribution.case.1.initial.configs.zip -d redistribution.case.1.initial.configs
- Before packing a log (or any kind of file), verify that the file is not being used by another process: lsof | grep name_of_the_file
Encrypt files (without packaging)
# Disable the gpg cache by creating ~/.gnupg/gpg-agent.conf with:
default-cache-ttl 1
max-cache-ttl 1
# Restart the agent:
echo RELOADAGENT | gpg-connect-agent
# Encrypt:
gpg -c texto.txt
# Decrypt:
gpg texto.txt.gpg
PACKAGE/RPM MANAGEMENT
See chap. 12.03 of the Michael Jang book.
RPM queries
See: http://www.rpm.org/max-rpm/s1-rpm-query-parts.html
rpm -qa # LISTS ALL installed packages
To install packages: rpm -ivh whatever.rpm
To remove packages: rpm -evh whatever.rpm
If it still doesn't remove:
rpm -e --noscripts `rpm -qa | grep ICA`
List the contents of a package (-p queries uninstalled package):
rpm -qlp yum-2.4.0-0.fc4
To see the binaries inside an installed rpm:
rpm -ql yum-2.4.0-0.fc4 | grep bin
RPM associated with a given program:
rpm -qf /bin/ls
LOG SANITATION
Find files according to name or creation time and rm or zip them:
find . -name "*.c" | xargs rmmmm -rf
A more robust variant that handles even file names with whitespace:
find . -name "*.c" -print0 | xargs -0 rmmmm -rf
Delete files (logs) older than 18250 days. The final \; is important:
find /path/to/files* -mtime +18250 -exec rm {} \;
Compress files older than 10 days:
find /path/to/files* -type f -name "*" -mtime +10 -print -exec gzip {} \;
Find files bigger than ~100 MB and print name and size:
find . -type f -size +100000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
Find files and grep their content (find and grep):
find . -type f -name "*.md" | xargs egrep -i last
find . -iname '*.c' | xargs grep 'stdlib.h' <-- Combining xargs with grep. 'i' in iname is for case insensitive.
find . -mtime -1 -type f | xargs grep initialized
find . -maxdepth 1 -iregex ".*\.*00[4-9].*mkv.*" -type f -exec cp {} /home/jaime/Downloads \; # **FIND AND COPY** complete example
Find and copy:
find -iname '*.mp3' -exec cp {} /home/sk/test2/ \;
find files by date YYYY-MM-DD
touch --date "2016-01-03" /tmp/start touch --date "2016-01-05" /tmp/end find . -type f -newer /tmp/start -not -newer /tmp/end
Find and sort By Date # note how date is presented in a way that is simply sortable.
find . -type f -printf "%T+\t%p\n" | sort
Find files (applying grep to the ls output) and act on them. Note that cp's -t flag takes the destination first, so xargs can append the file names at the end. This moves the files matched by grep to the 'dest' folder:
ls . | grep aaa | xargs cp -t dest
find . -mtime -30 -ls <-- find files and list them
find . -mtime -30 -ls | awk '{print $7}'
And add all the columns
find . -mtime -30 -ls | awk '{s+=$7} END {print "Total size: " s/1000000}'
Find a pattern in multiple files in multiple folders, as a one-liner.
List all folders with absolute paths, then select where we want to search:
for file in $(ls -1 /base/var/log/syslog*.gz); do zcat $file | grep "#FM-WRITE"; done # generic way to run a command on a set of files
# We use localmessages if we just want to search the most recent log in each folder. If we omit it, it searches all the files in all the folders listed in syslog_folders.
cd /home/jsantos/syslog
ls -ld $PWD/* | awk '{print $9 "/localmessages"}' > /home/jsantos/tmp/syslog_folders
# Between single quotes, the pattern we want to find in all the files. In this case 'ftp'
while read line ; do find $line | xargs egrep -Hi 'ftp' ; done < /home/jsantos/tmp/syslog_folders
AWK NOTES AND RELATED
ORDER BY COLUMN:
Sorts numerically by the 5th column (-nk 5,5) and formats:
ssh core01 "show interfaces terse | match ae" | egrep "^xe-2" | sed 's/-->.*ae//g' | sed 's/\.0//g' | sort -nk 5,5
Removes lines with a duplicated value in the 1st column:
ssh core01 "show interfaces descriptions" | egrep "^ae" | sed 's/\.0//g' | awk '!seen[$1]++'
INSERT VARIABLE VALUE IN A COLUMN:
awk -v var="$line" 'BEGIN{FS=OFS=" "}{print var,$0}' ./tmp
This inserts a variable with an awk custom defined separator (ip manipulation):
echo $ip | awk -F. -v var="$IVOCTECT" '{print $1"."$2"." $3"."var}'
EXTRACT IPS FROM CONFIG FILES RANCID - ORDER BY IP
egrep "ip address" * | grep -v "no" | grep -v "match" | awk '{for(i=1;i<=NF;i++) if($i~/10\.0\.34/) {print $i} }' | cut -f1 -d"/" | sort -t . -k 3,3n -k 4,4n
* awk '{for(i=1;i<=NF;i++) if($i~/10\.0\.34/) {print $i} }'  <-- extract any column starting with 10.0.34, and no other column
* cut -f1 -d"/"  <-- remove the mask from those IPs in / notation
* sort -t . -k 3,3n -k 4,4n  <-- order the resulting IPs
LOGS WITH JOURNALCTL
With systemd, logs are no longer plain files we can tail -f. Each process creates/stores its own logs, which are accessible via:
journalctl --lines 0 --follow _SYSTEMD_UNIT=myservice.service   # equivalent to tail -f
journalctl -fu NetworkManager.service                           # REALLY equivalent to tail -f
journalctl -u NetworkManager.service --since "1 hour ago"       # to see all logs for that service
EXEC AND XARGS
find . -name '*.pdf' -exec sh -c 'pdftotext "{}" - | grep -B 5 -A 5 --with-filename --label="{}" --color "schedul"' \; <-- this command finds patterns in different pdfs
To empty a log file currently in use:
cat /dev/null > logfile (if we don't want to completely empty it, we can use truncate)
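For the truncate variant, a sketch:
truncate -s 0 logfile    # empties it, same effect as the redirect above
truncate -s 10M logfile  # or shrink it to its first 10 MB instead of wiping it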
GREP
To show line numbers:
grep -n
To grep multiple elements in a given text (introducing bash arrays):
We define the array like this; then print its contents; then iterate over a text and find all the occurrences:
find=(crash the but)
printf "%s\n" "${find[@]}"
for i in "${find[@]}"; do grep -n $i ./sampletext ; done
This is to find file in a time range (from the file name) and apply grep to the results:
find . -regex '.*localmessages.2015101[8-9].*' -exec bzgrep -H UNIC {} \;
Find pattern in subfolders, print file where found and occurrences:
find . -type f -name '*ROUTEFW-dmz-internet*' -print -exec egrep -i "mits" {} \;
REGEX
Regular expressions in Perl
egrep "[^0-9][0-9]{2}[^0-9]" text <-- note that the caret here is not 'first character of the line' but 'non-numeric characters'!
grep -n <-- to show line numbers
PS
ps -xaf
The ps command displays a list of currently running processes; -xaf are options that format the output (f draws the process tree):
ps aux
Shows all processes for all users (a = all processes with a terminal, x = also those without a controlling terminal, u = user-oriented output format).
SED NOTES
See this page for regexps and sed: https://regexr.com/
sed 's%\(AV:N[^)]*\)%%g'   <-- [^)] is a negated set
sed 's/.*FIX/8=FIX/'
sed "s/\${schema_name}/parfxdbmon/g" ins24102.sql| psql -d parfxsb -Uparfxuser-W
g means 'replace every occurrence on the line', not just the first.
sed -e 's/^.*8=FIX/8=FIX/g' -e 's/8=FIX/\n&/g'
for i in `ls /etc/sysconfig/network-scripts/ifcfg-eth* | sed 's/^.*-//'`; do echo $i ; ethtool -i $i; done
-e is a way of separating commands. Commands in sed can be things like i (insert), a (append) and c (change).
Sample file1 (placeholder text with the keywords fred, foo and harry embedded):
There are thousands of reasons why the fred may behave one way, but it doesn't matter which they are if the consequences are the foo. This is a simple harry that tells the harry of a loner who writes down every story that happens around him.
$ sed -e '/foo/i\' -e 'bar' -e '/fred/a\' -e 'barny' -e '/harry/c\' -e 'potter' file1
bar <<< insert (before the matching line)
There are thousands of reasons why the fred may behave one way, but it doesn't matter which they are if the consequences are the foo.
barny <<< append (after the matching line)
potter <<< change (replaces the matching line)
\n means newline; & means 'the pattern that was found'. One-liner with an INLINE VARIABLE (back-reference), identified by \1:
sed -i 's/\(\/[0-9][0-9]\) /\1", /g' outputs.tf # this appends ", to all lines by matching a mask (e.g.:/21) and 'storing' it in a variable identified by \1
find . -name '*.tf' -type f -exec sed -i 's%\(git::ssh://git@gitlab.mycompany2datacloud.com/odc-terraform/tf_83_modules/v2_instance.git?ref\)=0.6%\1=1.0.1%g' {} \;
rename '' 2020-01-16- * # quickly prefixes all files with 2020-01-16- (replaces the empty string with the prefix)
SED TO REPLACE FIRST OCCURRENCE (NO /g) BETWEEN PARENTHESES:
egrep "%ASA-6-106100: access-list 101 permitted" firewall.log | sed "s/[(][^)]*[)]//"
It matches a literal open paren [(] followed by any number of non-) characters [^)]* followed by a literal close paren [)].
Find and sed (find and replace):
find . -name "*.java" -exec sed -i '' "s/foo/bar/g" {} +
find test2/ -name "*R2.txt" -exec sed 's/192.168.2.1/192.168.2.2/g' {} \;
find . -type f -print0 | xargs -0 sed -i 's/GigabitEthernet1/Ethernet1\/0/g'
find . -exec bash -c 'mv $0 ${0/site1/gyron}' {} \; # to replace file name pattern
Add string at the end of a file:
sed -i -e "\$aTEXTTOEND" R1.txt
Act on only the first occurrence (here deleting the first matching line; note the 0,/regex/ range!):
sed '0,/tat/{/tat/d;}' inputfile
Remove Non-Ascii characters (better with perl):
perl -pi -e 's/[^[:ascii:]]//g' filename
IP MANIPULATION WITH SED:
Extract the last octet:
IVOCTECT=`echo $IP | sed 's/.*\.//'`
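A pure-bash alternative sketch using parameter expansion, no sed needed (the sample IP value is assumed):
IP=10.0.34.7                # sample value
IVOCTECT=${IP##*.}          # last octet: 7
PREFIX=${IP%.*}             # everything before it: 10.0.34
echo "$PREFIX.99"           # rewrite the last octet: 10.0.34.99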
MKDIR
(no error if existing, make parent directories as needed)
mkdir -p /one/two
Creates three directories on the same level:
mkdir ./{one,two,three}
FIELD EXTRACTION
Extract values from a certain field in a string:
Extracts 4.5.3 from the string "pycharm-community-4.5.3/bin/pycharm.sh":
sed 's/^.*community\-//' text | sed 's/\/bin.*$//'
Extracts the value between '.49=' and the following '.':
echo $temp | sed 's/`//g' | sed -n -e 's/^.*\.49\=//p' | sed -n -e 's/\..*//p'
CUT
Besides sed, cut can be used to extract values from a certain field in a string. See the sed example above.
XML EXTRACTION
Best option is python parsers like:
https://docs.python.org/2/library/xml.etree.elementtree.html
http://lxml.de/
If we want to extract fields from unstructured xml we can just use bash:
<senderCompID>bnpdc3_pfx</senderCompID>
<senderCompID>bnpdc4_pfx</senderCompID>
grep -oPm1 "(?<=<senderCompID>)[^<]+" bnp
-o Show only the part of a matching line that matches PATTERN.
-P use Perl regex
-m1 only first occurrence
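If the file is well-formed XML and xmllint (libxml2) is installed, an alternative sketch:
xmllint --xpath '//senderCompID/text()' bnp   # prints the text of every senderCompID node (concatenated, no separator)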
WATCHDOG SCRIPT MODEL
#!/bin/bash
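A minimal sketch of what such a watchdog could look like, continuing the shebang above (the service name 'myservice' and the log path are hypothetical):
while true; do
    if ! pgrep -x myservice > /dev/null; then
        echo "$(date): myservice down, restarting" >> /var/log/watchdog.log   # hypothetical log path
        systemctl restart myservice
    fi
    sleep 30
done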
RSYNC
Backup wiki and laptop:
rsync -vaz root@192.168.0.111:/var/www/html/dokuwiki/ /home/jaime/Documents/dokuwiki/
rsync -av -R --delete --exclude /home/jaime/Downloads --exclude /home/jaime/.local/share/Trash/ /home/jaime/ /run/media/jaime/backups/
Backup local HD to external ssd:
cryptsetup luksFormat /dev/sdc
cryptsetup luksOpen /dev/sdb1 encrypted-external-drive2
mke2fs /dev/mapper/encrypted-external-drive2
chown -R jaime /run/media/jaime/9ef3199d-1d75-49b1-91f2-def8196f8572/
One whole command:
rsync --delete --ignore-errors -avH --progress --exclude=/mnt/ --exclude=/media/ --exclude=/run/media/ --exclude=/proc/ --exclude=/sys/ --exclude=/dev/ --exclude=/tmp/ --exclude=/var/run/ --exclude=/lost+found/ --exclude=/boot/lost+found/ --exclude=/home/lost+found/ --exclude=/root/.vagrant.d/ / /run/media/jaime/9ef3199d-1d75-49b1-91f2-def8196f85722
This could be a script, or I could just do one myself in udev:
udisksctl unlock -b /dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx && \
mount /media/crypt_backup && \
rsync -avP --delete-after /home/your-user/ /media/crypt_backup/name-of-your-backup/ && \
umount /media/crypt_backup && \
udisksctl lock -b /dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
UPGRADE DOKUWIKI: External Link. STEPS BELOW:
tar zxvf dokuwiki-8a269cc015a64b40e4c918699f1e1142.tgz dokuwiki
'cp' -af dokuwiki/* /var/www/html/dokuwiki/
chown -R www-data.www-data dokuwiki
systemctl restart apache2.service
sed -i 's/__site_width__ = "75em"/__site_width__ = "100%"/g' /var/www/html/dokuwiki/lib/tpl/dokuwiki/style.ini
RESET PASSWORD IN DOKUWIKI CLI
https://www.dokuwiki.org/faq:forgotten_password
To permit different file types to upload:
"Upload denied. This file extension is forbidden!"
vim /var/www/html/dokuwiki/conf/mime.conf
rsync -av --delete --dry-run /parfxdb/ /mnt/parfxdb_listen1_backup
screen -r
COMPARE FILES.
diff -u file1 file2   # file2 minus file1
vimdiff /path/to/file scp://remotehost//path/to/file   <-- between servers
To unpack an rpm without installing it:
rpm2cpio php-5.1.4-1.esp1.x86_64.rpm | cpio -idmv
# mc <-- or use midnight commander!!
RUNLEVEL INFORMATION
RHEL: notes
chkconfig --list
LOGICAL VOLUMES
lvs
pvs
lvextend -L6G /dev/mapper/rootvg-optvol   << extend the logical volume
resize2fs /dev/mapper/rootvg-optvol       << resize the file system
df -h
GIT NOTES
Staging (git add); after commit, files go 'back to unchanged status'.
Import new project from existing folder: External Link
https://bitbucket.org/jsantos23/
For my day to day work:
cd /home/jaime/sysadmin-tools/ansible-template-for-junos
git pull
## I work on a file and save it, then I commit it (no real need to add it; remember commit is still local but allows me to see diffs) ##
git commit -m 'test' all.pb.yaml   # commit this file
git log | less                     # to see the log
git log --all --decorate --oneline --graph   # more graphical view
git diff -c c3f748a4f0bd7a9cf13817d2f9e27a72de7e249a   # to see the difference with a certain id
git diff -r HEAD^1 all.pb.yaml     # I can also narrow it down to the file we are talking about
# And if we want to revert (roll back) to a previous committed/pushed state:
git show
git reset --hard d25489afd37da638cef098a0c6d9f74e6c287d2e
git push
This is to know where the remote repo (local, github, gitlab..):
git config --get remote.origin.url # or 'git remote show origin' if you are currently connected
BRANCHING AND MERGING AND CHECKOUT
1. Create branch
2. Commit to the branch
3. Push the branch
4. Create a merge request
5. Get approval for the merge request (the theory being that some approver other than yourself actually does a code review)
6. Merge the branch to master
7. Delete the branch
git status                           # to check you're on master, and where your HEAD is
git pull                             # to pull all the latest changes
git checkout -b <your-branch-name>   # to create my working branch
# make all the changes you need
git add <your-new-files>
git rm <my-file>                     # deletes the file from the repo and the filesystem (commit and push after it, of course)
git mv old_filename new_filename
git commit -m "sys-XXXX_messages"
# At this point I can see the effect of my change in the platform, with 'terraform init', 'terraform apply' and the like.
# "Your changes will be there in odcinfradev. Terraform ensures that. But we won't have access to your changes in terraform until you push your changes up."
git push -u origin <your-branch-name>
# Until you push, your changes aren't applied; it means you can test with the noop option etc. At this point you haven't created a merge request.
# Then the approval process: "Create merge request" and "Merge" are done in gitlab.
git checkout master
git pull
MERGE-IN-FOUR-STEPS , the correct way!
Branch, Head (pointing to a branch)
http://christoph.ruegg.name/blog/git-howto-revert-a-commit-already-pushed-to-a-remote-reposit.html http://rogerdudler.github.io/git-guide/
git add path   <-- only needed when creating new files, not when modifying existing ones
git commit
git push
git log
git status
REVERT CHANGES:
git revert HEAD     # this is what we want to use whenever possible
git log --oneline
git add .           # note: once reverted, nothing changes until we add and commit
git commit
REVERT
For multiple steps see this link : https://stackoverflow.com/questions/1463340/how-to-revert-multiple-git-commits
To revert a change. For multiple reverts (rollbacks), do this on the local host and then push: git revert --no-commit HEAD~3..HEAD . More info HERE.
For just one step:
git revert HEAD~1   # and then commit
git commit --amend
git reset HEAD~3
git reset --hard    # less drastic options in https://docs.gitlab.com/ee/topics/git/numerous_undo_possibilities_in_git/
git checkout is for going back to a previous state. If we are not on a branch, we create a set of 'phantom' hash states (i.e. a detached HEAD) which will be lost once we check out something else.
Select the corresponding (last if no other changes) commit id
git revert <commit id>
git push
git reset --hard origin/master   <-- throw away all my staged and unstaged changes, forget everything on my current local branch and make it exactly the same as origin/master
If we want to make changes on a branch which has been pushed (as my branch) but hasn't been merged:
If I push my branch but, before it’s been merged, I regret:
git checkout master
git pull
git checkout <remote branch name>   # note we need the name here; hashes only identify COMMITS
** CHANGES HERE? **
git merge master
git push
git remote -v   # to see which remote servers you have configured. If you've cloned your repository, you should at least see origin (the default name git gives to the server you cloned from)
git remote add origin git@...   # creates a new remote called origin located at git@...
If in master branch and work gets badly, we can put the current work in a branch and move the head backwards (reset) to a good state. See dubious-experiment in this link
GITLAB INITIALIZE:
External Link New project empty folder, existing folder or cloning from remote repo.
git config --global user.name "JAIME"
git config --global user.email "jaimesantos23@gmail.com"
# START a GIT project
git init   # most of the time local; maybe not even going to push, pull or fetch
# Before any code, add the .gitignore and the pre-commit hooks for code sanity. Scroll down to the 'Gitignore and pre-commit hooks' section.
# Checkout a new branch
git checkout -b 'my-new-branch'
# Commit all your stuff
git commit -pm "A well crafted commit message"
# Push to the remote
git push -u origin my-new-branch   # -u sets origin as the upstream remote in your git config, so you don't have to specify the remote every time you run git push
# Fetch remote changes
# git fetch <remote> <branch>
git fetch origin master
# Diff with my local
git diff origin/master
# Choose to merge or rebase (I prefer rebase)
git checkout feature
git rebase main
! The above rebases 'feature' onto main. But IMPORTANT, I am still on feature and need to merge it:
git checkout main
git merge feature
! Or a PR in github
# And if either I or somebody else makes a commit that breaks something:
# Find the commit SHA with git blame or git log
git log | grep whatever
# and revert it with
git revert <SHA>
# Of course you should check that the committed change only reverts what you expect (can't be too safe when people commit multiple things per commit)
git diff HEAD~ HEAD   # compares the HEAD one commit ago with the HEAD now
# then push
git push
If you accidentally edited master instead of branch:
git checkout master git fetch origin git reset --hard origin/master
GITHUB NOTES
https://www.youtube.com/watch?v=8A4TsoXJOs8
GITLAB NOTES:
If we have merged in gitlab but there's a bit of a mess in my local repo, we do this. Also recommended as regular HOUSEKEEPING. The reason is that git fetch --prune deletes the references to branches that no longer exist on the remote. It is really a macro equal to git fetch --all && git remote prune.
git checkout master; git pull; git fetch --prune
GITLAB REVERT (gitlab rollback): https://docs.gitlab.com/ee/user/project/merge_requests/revert_changes.html
SUBVERSION - SVN
Example (note that we need to move to the file's folder before checking the status (history, commits)):
cd configs/
svn log sw-d10.dc.mycompany1.co.uk | less              # to see the logs and the revision indexes, needed for the below
svn cat -r r1526 core01.dc.mycompany1.co.uk | less     # to see the WHOLE old revision
svn diff -r441:456 sw-d10.dc.mycompany1.co.uk | less   # to see the DIFF
svn diff -r441:HEAD sw-d10.dc.mycompany1.co.uk | less  # to see the DIFF with the latest
svn diff -r441:459 sw-d10.dc.mycompany1.co.uk | less
Check out a copy of the configuration data files (Hiera data) as your user:
svn checkout http://web.mycompany3.int/svn/mycompany3/tradx_hiera hieradata
Make your changes to the required file, then check back in:
svn commit -m "description of the change"
sudo -i
cd /etc/puppet/hieradata
svn up
PUPPET NOTES
Puppet agent connects to puppet server on port 8140.
Draft notes:
MANAGEMENT:
Naming: Class ~= module & classes 'config group' ~= class group
All in foreman (puppet itself requires foreman for configuration, not just monitoring). Foreman determines the site.pp.
This is to check groups of classes (aka 'config group'): Configure > Classes, type "config_group ~".
To see if something is under puppet: add a comment, see it disappear, and check the reports in foreman:
https://foreman.mycompany1.com/config_reports/10030463
It should report output from the class, e.g.: keepalived::config
To check if a class is applied to a host:
Hosts > All hosts > name ~ dns1 > select the host > Edit button (top right) > then 'Puppet Classes'
FILES STRUCTURE:
host manifests are managed in slingshot/manifests/client/hosts.pp and slingshot/manifests/client/hosts/<region>.pp
modules/mycompany1/manifests/networking/dmz.pp # the .pp file calls templates (erb), which are under modules/mycompany1/templates/networking/. These templates have some logic and are fed by data taken from facts in dmz.pp.
Here the CC static routes:
./modules/mycompany1/files/networking/static_routes_dmz
INITIAL CONFIGURATION (NEW NODE CONNECT IT TO PM):
MEMORY MANAGEMENT (TOP NOTES)
semaphore arrays
ipcs -s
ipcs -u <-- usage summary
ipcs -lm <-- to see all shared memory limits
echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf <-- To change shared mem:
HOW TO USE sysctl
/sbin/sysctl -a # to show all configurable parameters
/sbin/sysctl -w kernel.sysrq="1" # to change params. # the file we are modifying is: /etc/sysctl.conf
SUDOER
root ALL=(ALL) ALL
This line means: the root user can execute from ALL terminals, acting as ALL users, and run ALL commands.
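A more restrictive sketch along the same lines (the user and command are hypothetical; always edit with visudo):
jaime ALL=(root) NOPASSWD: /usr/bin/systemctl   # 'jaime' may run only systemctl, as root, without a password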
YUM
To rebuild my rpm db:
rm -f /var/lib/rpm/__db*
rpm --rebuilddb
To install only using a certain repolist:
yum install xxxxx --disablerepo=* --enablerepo=blablabla
yum with wildcards
yum list tsc\*
New RPM. Check which libraries are needed:
rpm -qp --requires ICAClient_12.1.0-0.x86_64.rpm
Now check which package provides those libraries:
yum -d0 whatprovides */libICE.so.6
LSOF
lsof /var/log/syslog   <-- processes which opened a specific file
lsof /home             <-- using a mount point
lsof -u lakshmanan     <-- files opened by a specific user
lsof -p 1753           <-- files opened by a specific process
lsof -i                <-- all network connections
lsof -i :25            <-- processes listening on a particular port
SCREEN LINUX WINDOW MANAGER
START
screen
LIST
screen -ls <-- to get the names
ATTACH
screen -r {name} <-- to attach to a running screen
screen -S "parfxdb_clear" <-- create a screen and give it the name "parfxdb_clear" screen -ls <-- list existing running screens
TINY LINUX
To search for applications:
tce-fetch.sh {...}
To install applications:
tce-load -w -i openssh.tcz
LINUX NETWORKING
UBUNTU NETPLAN
netplan try [--timeout TIMEOUT]   # automatic rollback (default 120 sec)
netplan apply
network:
version: 2
ethernets:
ens3:
dhcp4: false
addresses: [ 172.18.61.137/24 ]
routes:
- to: 10.252.0.0/16
via: 172.18.61.129
ens4:
dhcp4: false
addresses: [ 10.65.36.6/24 ]
routes:
- to: 0.0.0.0/0
via: 10.65.36.3
ens5:
dhcp4: false
addresses: [ 10.65.36.6/24 ]
routes:
- to: 10.0.0.0/8
via: 10.65.37.5
CENTOS SYSCONFIG:
IFCFG SAMPLE:
DEVICE=eth1
BOOTPROTO="none"
HWADDR=00:50:56:92:16:03
ONBOOT="yes"
TYPE=Ethernet
IPADDR=10.40.101.10
NETMASK=255.255.255.0
IPV6INIT=no
To create an (ephemeral) static route (the default route is handled by the GATEWAY entry in ifcfg-ethX):
ip route add 192.168.55.0/24 via 192.168.1.254 dev eth1
To create a permanent static route
Open /etc/sysconfig/network-scripts/route-eth0:
vi /etc/sysconfig/network-scripts/route-eth0
Append following line:
10.0.0.0/8 via 10.9.38.65
802.1q IN LINUX
Check dot1q kernel module is loaded:
modinfo 8021q
Configure the interfaces:
/etc/sysconfig/network-scripts/ifcfg-ethX:
DEVICE=ethX
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-eth0.192:
DEVICE=eth0.192
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.1
PREFIX=24
NETWORK=192.168.1.0
VLAN=yes
BOND INTERFACES
url documentation HERE: https://www.kernel.org/doc/Documentation/networking/bonding.txt
http://www.crucial.com.au/blog/2012/11/01/linux-bonding-on-centos-5/
All others are at default.
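A minimal CentOS-style sketch of an active-backup bond, per the linked docs (device names and IP are hypothetical):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.40.101.20
NETMASK=255.255.255.0
BONDING_OPTS="mode=active-backup miimon=100"
# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes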
NETWORK PERFORMANCE / INTERFACE THROUGHPUT
sar -n DEV 1 3
iftop
iostat
PROCESS MANAGEMENT / SYSTEMD
systemctl list-unit-files | grep -E 'enabled|running'
This link helps to mentally migrate from chkconfig to systemctl: External Link
systemctl status -l nfs <-- will give me a trace of what went wrong
systemctl status xinetd.service   <-- to check the xinetd service
systemctl enable/disable xxxxxx   <-- for startup (to check: systemctl is-enabled foo.service; echo $?) (= chkconfig [service] on)
systemctl start/stop/status <-- immediately
Instead of runlevels, now targets:
To move between runlevels
systemctl isolate multi-user.target/graphical.target
systemctl restart ntpd.service   (= the deprecated 'service ntpd restart')
FOR DEBIAN/RASPBIAN/JESSY:
systemctl list-units --type service --all | grep ..   # to list services
systemctl disable application.service
JOURNALD
External Link
Works in conjunction with systemctl . Eventually will replace rsyslogd.
journalctl        # shows the whole journal
journalctl -b     # boot info
journalctl -f     # this is like tail -f
journalctl -u servicename --since "1 hour ago"   # use 'systemctl list-unit-files' to find the service/process
# FILTERING BY TIME
journalctl --since "2016-11-14 19:00:00" --until "2016-11-14 19:05:00"
journalctl --since "5 minutes ago"
# FILTERING BY PROCESS (name or id):
journalctl -u fprintd.service
SYSLOG (rsyslog)
Configuration normally in:
Client:
/etc/rsyslog.d/client.conf
Server:
/etc/rsyslog.conf
Also in:
/etc/rsyslog.d/00-server.conf
Inside the server configuration we can add conditionals on different properties, like:
if $fromhost-ip != '127.0.0.1' and $syslogfacility-text == 'info' then /var/log/HOSTS/messages
Check property replacer in: http://www.rsyslog.com/doc/property_replacer.html
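A minimal server-side sketch tying this together (it assumes reception over UDP 514; the conditional is the one above):
# /etc/rsyslog.conf (server)
module(load="imudp")
input(type="imudp" port="514")
if $fromhost-ip != '127.0.0.1' and $syslogfacility-text == 'info' then /var/log/HOSTS/messages
& stop   # don't process the message any further once written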
VIM NOTES - vi notes
:e    # if we know the file we want to open (:Ex to browse)
:bn   # move to the next opened file (:bp for the previous)
:!    # to issue a bash command from inside the editor
:%s/foo/bar/g # find and replace
Redo: ctrl + r
For case insensitive
:set ic
(For the less pager we use "-i".) To show hidden characters:
:set list
w Move forward to the beginning of a word.
b Move backward to the beginning of a word.
To show line numbers:
:set number
:set nonumber
COLORS ( BLUE ANNOYING )
If fonts (especially blue ones) are difficult to read on a black background:
:set background=dark
* For ansible: https://unix.stackexchange.com/questions/295170/how-can-i-change-the-color-of-this-blue-text-on-my-command-line/295281#295281
* For blue prompts: remove anything like this in .bashrc: xterm-color|*-256color) color_prompt=yes;;
* Remove search highlighting: "There's nothing faster to type than /skldafjlksjdf"
For ansible, change colors section in /etc/ansible/ansible.cfg:
[colors] verbose = cyan error = white
For the ls and similar:
$ cat ~/.dircolors DIR 01;36
For the prompt:
controlled by the variable 'color_prompt' in .bashrc
DISABLE VISUAL MODE IN VIM:
vim ~/.vimrc
set mouse-=a
syntax on
NANO NOTES: THE CARET MEANS THE CTRL KEY
VISUAL STUDIO CODE NOTES VSCODE NOTES
Remote ssh deployment: see this link.
mac cheatsheet: External Link
editor.copyWithSyntaxHighlighting
Ctrl+Shift+P : uppercase, lowercase, titlecase (just the 1st letter capitalized)
Ansible and vscode:
pip install ansible-lint
Python and vscode:
! Remote-SSH: Connect to Host... from the Command Palette (Ctrl+Shift+P), then open the remote folder
! Open a terminal in vscode: Ctrl+`
cd /home/pi/python-net
python3 -m venv /home/pi/python-net   # creates the virtual environment (only once)
! If vscode doesn't detect the venv automatically, select it with the option at the bottom right in the blue bar (only once)
! Install any python packages you need (only once)
! Be sure git has a rule in .gitignore so the venv folder is not uploaded
deactivate   # to exit the virtual environment
//find binary search in javascript. << stop just after the dot and wait for the suggestions
Use Alt+] and Alt+[ to move through suggestions
Oreilly lessons https://github.com/timothywarner/chatgptclass
# Example: input: hsgdsh  output: shdgd
Gitignore and pre-commit hooks (MUST DO)
Add this file to the repo:
cat .gitignore
vault.key
.Python
[Bb]in
[Ii]nclude
[Ll]ib
[Ll]ib64
[Ll]ocal
[Ss]cripts
pyvenv.cfg
.venv
.env
pip-selfcheck.json
Now install a pre-commit hook to detect secrets, URLs, IPs or any sensitive data (before it goes into the commit history). Link
pip install pre-commit
# Install rust (required for the hook that we will add below)
curl https://sh.rustup.rs -sSf | sh # from https://www.geeksforgeeks.org/how-to-install-rust-on-raspberry-pi/
Add this pre-commit configuration:
cat << EOF > .pre-commit-config.yaml
repos:
- repo: https://github.com/sirwart/ripsecrets.git
  # Set your version; be sure to use the latest and update regularly, or use 'main'
  rev: v0.1.5
  hooks:
  # use this one when you have a rust environment available
  - id: ripsecrets
  # use this one when you will install ripsecrets with a package manager
  # - id: ripsecrets-system
EOF
!
! The below (like .pre-commit-config.yaml) needs to be run in every new repo where we want pre-commit to be active!
pre-commit install # <<< Don't forget this one so the pre-commit scripts are Installed!
pre-commit autoupdate
And be sure both .gitignore and .pre-commit-config.yaml are added to the repo:
git add .
git commit -m initial-commit
KATE EDITOR
Block editing:
CRON EXAMPLES
MIN HOUR DOM MON DOW CMD
From 9am to 6pm:
00 09-18 * * * /home/ramesh/bin/check-db-status
At 9am and 10am:
00 09,10 * * * /home/ramesh/bin/check-db-status
Every minute (one minute)
* * * * * /home/ramesh/bin/check-db-status
Every 5 minutes
*/5 * * * * /home/ramesh/bin/check-db-status
Every 5 minutes with sshpass and environmental variable
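A sketch of what that crontab entry could look like (host, user and variable name are hypothetical; cron allows VAR=value lines at the top of the crontab):
SSH_PASS=mysecret
*/5 * * * * sshpass -p "$SSH_PASS" ssh -o StrictHostKeyChecking=no user@host uptime >> /tmp/uptime.log 2>&1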
LINUX KERNEL TUNING
(How to avoid) NUMA SYSTEMS (aka Process affinity) See link
https://docs.google.com/document/d/1OEn3DUOPLOywGInr4jkN5ZF8U6AaYCc3x9Clh9_ISrI/edit#bookmark=id.74ottgwa710s
Memory banks are owned by a processor (a processor (socket) has a set of cores).
taskset -pc PID <-- shows the affinity list (which CPUs the process is allowed to run on)
numactl -H       # shows the available NUMA cells in the hardware; in 'node X cpus' the cpus are physical cores
numactl --show   # show the NUMA policy settings of the current process
# The physcpubind option of numactl is an interface to the sched_setaffinity system call, which sets the cpuset (set of allowed CPUs) of the process when it starts
HUGE PAGES:
Use the “mount | grep boot” command to determine the boot device name. The number of Huge Pages must be at least (16G * number-of-numa-sockets):
grubby --update-kernel=ALL --args="default_hugepagesz=huge-pages-size hugepagesz=huge-pages-size hugepages=number-of-huge-pages"
grub2-install /dev/boot-device-name
reboot
echo "vm.nr_hugepages = 128" >> /etc/sysctl.conf
sysctl -p   # reload sysctl
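To verify the huge pages actually took effect after the reboot, a quick check:
grep -i hugepages /proc/meminfo   # HugePages_Total, HugePages_Free, Hugepagesize
cat /proc/sys/vm/nr_hugepages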
USER LEVEL NETWORKING / NIC TUNING:
General concepts:
SOLARFLARE TUNING:
See commands and more explanation here. Interesting parameters are:
Interrupt moderation (interrupt coalescing):
Interrupt moderation controls the number of interrupts generated by the adapter by adjusting the extent to which receive packet processing events are coalesced.
Interrupt moderation may coalesce more than one packet-reception or transmit-completion event into a single interrupt.
openonload-201405-u1 latency profile settings:
onload_set EF_POLL_USEC 100000
onload_set EF_TCP_FASTSTART_INIT 0
onload_set EF_TCP_FASTSTART_IDLE 0
Summary of the rest of the settings can be found in document: slf_settings and SF-104474-CD-16_Onload_User_Guide.pdf
Checksum offloading; TCP segmentation offloading.
Large receive offload (LRO) is a feature whereby the adapter coalesces multiple packets received on a TCP connection into a single call to the operating system TCP stack. This reduces CPU utilization, and so improves peak throughput when the CPU is fully utilized.
Used this to check the firmware version:
ethtool -i eth4
driver: sfc
version: 4.1.0.6734
firmware-version: 3.2.1.6122
bus-info: 0000:04:00.0
Used to update the sfc firmware:
sfupdate --write
[root@l52ldtsrv-oe2 ~]# sfupdate --write
Solarstorm firmware update utility [v4.1.2]
Copyright Solarflare Communications 2006-2013, Level 5 Networks 2002-2005
[....][100%] Complete
Reload the module:
rmmod sfc_aoe onload sfc_char sfc_resource sfc_affinity sfc   # unload the modules
modprobe sfc   # load the module
ethtool -i eth4
driver: sfc
version: 4.1.0.6734
firmware-version: 3.3.0.6298
bus-info: 0000:04:00.0
Enable spinning by adding the following to /etc/profile:
export EF_POLL_USEC=100000
[...]
export EF_SOCK_LOCK_BUZZ=1
Disable interrupt moderation. This needs adding to ifcfg-eth? as options, as it is not persistent across reboots.
Ran this on me:
ethtool -C eth4 rx-usecs 0 adaptive-rx off
ethtool -C eth5 rx-usecs 0 adaptive-rx off
Ran this on oe/mds:
ethtool -C eth4 rx-usecs 0 adaptive-rx off
ethtool -C eth5 rx-usecs 0 adaptive-rx off
ethtool -C eth6 rx-usecs 0 adaptive-rx off
ethtool -C eth7 rx-usecs 0 adaptive-rx off
Ran this on SS:
ethtool -C eth0 rx-usecs 0 adaptive-rx off
ethtool -C eth1 rx-usecs 0 adaptive-rx off
MANAGE USB STICKS IN LINUX WINDOWS
Manually mount it:
[root@localhost ~]# mount /dev/sdb1 /media/
[root@localhost ~]# fdisk -l /dev/sdb
Disk /dev/sdb: 8100 MB, 8100773888 bytes, 15821824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000e1be5
Device Boot   Start   End        Blocks    Id  System
/dev/sdb1     2048    15820799   7909376   b   W95 FAT32
Files to be copied as root with permissions:
F2FS FILE SYSTEMS:
dnf install util-linux-2.22.2-6.fc18.x86_64
dnf install f2fs-tools-1.0.0-3.fc18.x86_64
mkfs.f2fs /dev/sdc1
mount -t f2fs /dev/sdc1 /mnt/
ENCRYPT WITH LUKS:
https://docs.fedoraproject.org/en-US/quick-docs/encrypting-drives-using-LUKS/
DEBIAN-UBUNTU CHEATSHEET
External Link
FTP ACTIVE PASSIVE
Comments from http://slacksite.com/other/ftp.html
Active mode uses ports 20 and 21 and has two connections, one initiated from each end, while passive mode uses only port 21 plus a data connection, both initiated from the client: one to the aforementioned port 21 and the other to a random port chosen by the server and announced to the client in the PASV reply. A firewall in the middle must intercept the PASV message, learn the data port and open an ephemeral firewall/ACL entry.
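A quick way to exercise both modes from a client, sketched with curl (the host name is hypothetical):
curl ftp://ftp.example.com/file.txt        # passive mode, curl's default (client opens both connections)
curl -P - ftp://ftp.example.com/file.txt   # active mode: -P/--ftp-port makes curl send PORT; '-' reuses the control connection's address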
TERMINATOR NOTES
https://readthedocs.org/projects/terminator-gtk2/downloads/pdf/lates
To modify existing layout (logging plugin needs to be installed beforehand)
PRINTERS:
CUPS via web:
http://localhost:631/admin
IRC (IRSSI):
http://tech-newbie.blogspot.com/2008/04/how-to-install-and-use-irssi-linux-irc.html
/CONNECT irc.oftc.net /JOIN observium /quit
KERNEL MODULES
MOTIONEYE NOTES
Before changing anything:
mount -o remount,rw / mount -o remount,rw /boot
Interesting files:
/data/etc/wpa_supplicant.conf   # ssid and wpa password
/data/etc/static_ip.conf        # ip settings and dns
mount -o remount,rw /
mount -o remount,rw /boot
/data/etc/ntp.conf
/etc/ntp.conf
ntpd   # all cams working with 111 as server, but we need to run ntpd manually. TODO: on startup
SSH server:
cat > /data/etc/ssh_authorized_keys # paste the public key
For ntp to work, do this:
crontab -e
*/20 * * * * /usr/bin/ntpdate 192.168.0.111 >/dev/null
Burn new sd card:
Partition card and add filesystem Link
xz --decompress motioneyeos-raspberrypi-20190904.img.xz
./writeimage.sh -d /dev/mmcblk0 -i motioneyeos-raspberrypi-20190904.img -n panda:Mojete6666   # be careful, don't do it on the partition!
CLI password reset ('passwd root' doesn't work):
MEYE_USERNAME=admin MEYE_PASSWORD=password /usr/libexec/meyepasswd
ssh root@192.168.0.12 "/usr/bin/meyectl startserver -b -c /etc/motioneye.conf"
Current versions:
MOTION DETECTION AND FTP STORAGE:
mkdir -p /home/picam/FTP/camX
----
KOLOURPAINT:
Change initial settings (eg: font size):
vim /home/jaime/.config/kolourpaintrc
LET'S ENCRYPT:
CERTIFICATE RENEWAL
Open port 80 in router : http://192.168.0.1/cgi-bin/luci/admin/network/firewall/forwards
/var/certbot/certbot renew --verbose
/usr/bin/certbot renew --verbose
DNS NOTES
Russ slides: External Link
dig +trace google.es # TO SEE THIS IN ACTION
- local-HOST
- RECURSIVE server
- ROOT server
- ROOT to RECUR. with TLD-dns-server ip
- RECUR. asks TLD, which points to the Authoritative s
- then RECUR. it asks the AUTHORITATIVE SERVER
- AUTH. SERVER replies to RECUR. with the sought IP
- RECUR. informs the local-HOST of sought ip
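The same chain can be walked manually with dig; a sketch (the concrete server names will vary, these are illustrative):
dig @a.root-servers.net google.es NS +norecurse   # root replies with the .es TLD servers
dig @a.nic.es google.es NS +norecurse             # the TLD replies with the authoritative servers
dig @ns1.google.com google.es A +norecurse        # the authoritative server returns the final answer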
The recursive server resolving the CNAME itself is called the 'flattening' process.
Cache: 'server not found' is often not stored (better); RFC 8198 and DNS wildcards are also useful here.
Glue record
Record types: slide16
- PTR: Pointer to a canonical name, such as an A, AAAA, or CNAME
set type=ptr # how to do this in nslookup (with dig: dig -x <ip>)
DNSSEC: chain of signatures. The public key is divided in two (see KSK below).
Read this site on how to make failover faster
gTLD and ccTLD
POWERDNS
crypto nomenclature:
KSK: key signing key
/usr/bin/pdns_control status
PLEX NOTES
Installation
vcgencmd measure_temp   # temperature
systemctl status plexmediaserver
TOR NOTES ONION
Force exit country: https://2019.www.torproject.org/docs/faq.html.en#ChooseEntryExit
vim /home/raquel/Desktop/tor-browser_en-US/Browser/TorBrowser/Data/Tor/torrc
ExitNodes {ch},{se},{es} # Switzerland, Sweden, Spain
FEDORA NOTES:
Wayland issues:
How to check whether i'm running wayland:
loginctl                                     # obtain the session ID
loginctl show-session <SESSION_ID> -p Type   # X11 or Wayland here
Screenshot select-area not working:
TODO
Right-click paste not working
TODO
multicolumn select not working
FFMPEG NOTES:
This is to extract meta-information from Any photo or video : http://xahlee.info/img/metadata_in_image_files.html
Examples:
ffmpeg -i my-video.mov -vcodec h264 -acodec mp2 my-video.mp4 # mov (iphone) to mp4 (it gets smaller)
ZOOM NOTES - ZOOM TROUBLESHOOTING:
apt list --installed | grep -i zoom
Upgrade zoom in linux: just download the .deb from the zoom site and apt-get install ./local-file
NFS NOTES:
192.168.0.112:/mnt/ssd1 /home/jaime/share nfs4 hard,intr,retrans=1,timeo=1,user,exec # In client to minimize hangouts
That happens here too. Unfortunately, with nfs it seems to be a little, um, unreliable as far as caring about the timeouts. On linux the default is 600s. timeo= is the key (I couldn't find it), but it depends on how it is mounted. NFS has this weird soft/hard concept: if you soft mount it, things can be a bit unstable and you use a different timeout value, whereas hard causes it to give up. You might find that dropping the retrans value to 1 helps, because it won't hang as much. retrans causes it to retry, which is normally what you want, but it takes time, so if you would rather it just gave up then set it to 1 (I don't know if you can set it to 0, possibly).

Client side you can run fs-cached to help. I've started doing that locally at home because I've been getting lots of I/O timeout errors. I don't understand why, since the files are readable on the remote machine but not over the network. I started fs-cached and then remounted with the fsc option, but I still get those at the moment, so perhaps it doesn't help. In mycompany1 we tried it for a while and it helped, but then because of the caching some machines got out of sync, so we removed it again. Ultimately we haven't found an answer for mycompany1 machines. If there's a hiccup on the network between machines, then sometimes you can't do anything but force unmount. We haven't seen it lately because things have been much more stable (having a network engineer :slightly_smiling_face:), but in the past we've even seen situations where even a forced unmount isn't enough and you have to do a forced reboot :disappointed:. NFS is lovely when it works, but horrible to work out why it is broken.

So, as I understand it, NFSv4 is better than NFSv3 because it uses TCP, so packets don't get lost as easily and locking works much better; NFSv3 is horrible because when packets get lost, locking breaks badly. So use NFSv4 because that's great and better than UDP, except when a TCP packet gets lost and it spends forever trying to retransmit the lost packet, and then everything gets out of sync and catches fire. On the other hand, NFSv4.1 is not an improvement: although it is only v4.1, it isn't a tiny bit better, it is entirely different and should really have been called NFSv5, and as such seems to be badly implemented in many places and so simply doesn't work half of the time, for random reasons... sigh.

So the soft/hard thing is that soft seems better, but usually what you want is hard, which is why it is the default. Soft causes errors; hard breaks the link. Hard results in the software reading the file seeing an unexpected EOF and getting an error, and you usually want that more than a missing chunk of data. So usually stick with hard. But it is an option, and at home it may be a better option. I'd probably stick with hard though...
RAID NOTES:
VSFTP NOTES:
cat > /etc/vsftpd.conf
listen=NO
listen_ipv6=YES
anonymous_enable=YES
anon_root=/home/pi/ftp
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
ssl_enable=NO
force_dot_files=YES
----
NETFLIX KEYSTROKES AND PERFORMANCE MENUS:
CENTOS STREAM (2020-12-09):
Means no more CentOS as a free RHEL rebuild. Fedora loses relevance.
In theory Red Hat is giving more openness and allows more free contributions by means of CentOS Stream.
JAVA NOTES & ICEDTEA NOTES
To permit MD5:
In /etc/java/java-1.8.0-openjdk/*/lib/security/java.security, remove MD5 from:
# jdk.certpath.disabledAlgorithms
# jdk.jar.disabledAlgorithms
https://linuxize.com/post/install-java-on-centos-8/
dnf install java-1.8.0-openjdk-devel
dnf install icedtea*   # this is the only way to get javaws
javaws jnlpgenerator2-serial   # to open the kvm from the mycompany2 ilom (serial redirection)
ALPINE LINUX:
apk add openrc --no-cache
apk add bonding
rc-status
! Configure bond interface
/etc/network/interfaces file:
auto bond0
iface bond0 inet static
address 192.168.0.2
netmask 255.255.255.0
gateway 192.168.0.1
# specify the ethernet interfaces that should be bonded
bond-slaves eth0 eth1 eth2 eth3
https://docs.alpinelinux.org/user-handbook/0.1a/Working/openrc.html
rc-service networking stop
WEB PROXY FOR APT GET:
export HTTP_PROXY=http://cache.mydomain.net:port
export HTTPS_PROXY=http://cache.mydomain.net:port
vim /etc/apt/apt.conf.d/proxy.conf
Acquire::http::Proxy "http://cache.mydomain.net:port/";
Acquire::https::Proxy "http://cache.mydomain.net:port/";
https://linuxiac.com/how-to-use-apt-with-proxy/
Then install FRR (be sure there's nothing in netplan; ideally do this via console).
SNMP AGENT IN LINUX
- Configure snmp agent as below. Source https://kifarunix.com/quick-way-to-install-and-configure-snmp-on-ubuntu-20-04/
...
# SECTION: Agent Operating Mode
# This section defines how the agent will operate when it is running.
#agentaddress 127.0.0.1,[::1]
agentAddress udp:161,udp6:[::1]:161
...
# SECTION: Access Control Setup
# This section defines who is allowed to talk to your running snmp agent.
rocommunity community 10.100.11.143
!
systemctl restart snmpd
netstat -nlpu | grep snmp
! Now, from the manager:
snmpwalk -v2c -c community 10.100.11.129
SNMP BROWSER:
sudo apt-get install snmp
sudo apt-get install snmp-mibs-downloader
snmptranslate -Tz -m ./arbornet-smi.mib:./arbornet-pdos.mib:./arbornet-sp.mib:./arbornet-tms.mib 2>&1 | less
AUTOSSH
(in cron (TBC)): @reboot autossh -N -f -i -R 8081:localhost:22 pi@panda314159.duckdns.org &
Important: during installation, select an appropriate uid and gid, not in use on the nfs server. In our case 1010 and 1010 will work. Example: santosj uid: 1010; gid: 1010.
For the installation (server and client), follow this link: External Link
In the server, add a new line like this to /etc/exports, replacing the IP with the one for the new client:
/home/pi 192.168.0.11(rw,async,no_root_squash,no_subtree_check)
systemctl restart nfs-server
exportfs -av
In the server as well, we create the same user as on the client (with the same uid). Then we add that user to the pi group. Finally we change permissions on the folder where we want write privileges:
useradd -u 1010 santosj # creates the user with the matching uid. no need to have a home folder
usermod -a -G pi santosj # adds ''santosj'' to the server's ''pi'' group
chmod -R ug+rw /home/pi/Downloads # this allows group members (''santosj'' among them) to read/write in the folder
In the client we create the mount folder, add an fstab entry for automount, and can also try mounting it manually:
sudo cat /etc/fstab
192.168.0.112:/home/pi /home/jaimenfs/nfs nfs defaults,user,noauto,relatime,rw 0 0
# As user santosj:
mkdir nfs
mount 192.168.0.112:/home/pi /home/jaimenfs/nfs   # the IP is the one of the nfs server
If headphones connect but no audio, remove pulseaudio and install new pipewire External Link