Thursday, August 28, 2008
Yet another sweet iptables rule
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
Assuming that your Squid proxy is configured to use port 3128, add the above rule to your iptables firewall.
or, if your Squid proxy runs on a different server, say 192.168.1.10, then REDIRECT (which can only redirect to the local machine) won't do the job; use DNAT instead:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:3128
if you need to insert the rule at position 5 of an existing chain then
iptables -t nat -I PREROUTING 5 -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128
The rule will forward all standard port 80 HTTP traffic to your Squid proxy on port 3128 .......sweet
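To check where a rule has landed in the chain, you can list the nat table with rule numbers:
iptables -t nat -L PREROUTING -n --line-numbers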
Tuesday, August 26, 2008
Making files undeletable, updatable only and unbackupable
1: Undeletable (Even by root)
2: Updatable only (Even by root)
3: Unbackupable :) (Even by root)
1: to make a file undeletable type
chattr +i filename
even if root tries to delete the file they will not be allowed to. To make the file deletable again you will need to type
chattr -i filename
2: to make a file updatable only, which means that you will be able to append content to the file but you will not be allowed to delete it or remove existing content from it,
type
chattr +a filename
3: to prevent a file from being backed up by admins using the dump command
type
chattr +d filename
to list the set attributes of a file type
lsattr filename
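A quick demonstration with a throwaway file (test.txt here is just an example name):
touch test.txt
chattr +i test.txt
rm test.txt           # fails with "Operation not permitted", even as root
lsattr test.txt       # the i attribute shows up in the listing
chattr -i test.txt    # clear the attribute, after which the file can be removed
rm test.txt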
Tuesday, August 19, 2008
tip to extract only the configured settings from a config file
Sometimes you just want to look at the settings that are actually configured, without wading through all the comments.
to do this try the following.
cat filename.conf |grep -v "^#" | grep -v "^$" | less
will list the config file without the comment lines and without the blank lines,
giving you information on your configured settings only.
The -v switch tells grep to print only the lines that do NOT match the given pattern.
The ^ (caret) matches the beginning of a line and the $ matches the end of a line; piping the output
to less allows you to page through the result using your arrow keys.
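If your grep supports extended regular expressions (-E), you can do it with a single grep; this variant also skips indented comments:
grep -vE '^[[:space:]]*(#|$)' filename.conf | less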
Thursday, August 14, 2008
Centralised Logging host
Log files are essential for monitoring your system and for restoring it to a working state after a failure or after being compromised. It is a very good idea to have your log files stored on a central server, for both convenience and security reasons. In a large network it is also more convenient to have all your log files accessible in one central place.
Decide on the server that will accept log messages from the other servers. On that server, edit your
/etc/sysconfig/syslog file
and find the stanza SYSLOGD_OPTIONS="-m 0"
add a -r like so
SYSLOGD_OPTIONS="-r -m 0"
-r = receive log messages from remote hosts
restart syslogd by typing /etc/init.d/syslog restart
now your server is ready to accept logging messages from your other servers/machines
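A quick check to confirm that syslogd is now listening for remote messages on UDP port 514 (assuming netstat is available on your system):
netstat -anu | grep 514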
On the machine that you want to send the log messages from, edit your /etc/syslog.conf file
and add the following line
user.* @192.168.1.60
(or *.* @192.168.1.60 if you want to forward messages from every facility, not just user.*)
where 192.168.1.60 is the IP address of the server you set up to receive the log messages; you can substitute the IP address with the hostname of the server if you want.
restart syslogd by typing /etc/init.d/syslog restart
your server at 192.168.1.60 will now receive and store the log messages from the machines you have set up to send them.
you can test this new setup by using the logger command to create a log message
logger -i -t Clive "Testing centralised logging"
The message should appear in your centralised logging server's /var/log/messages file
Tuesday, August 12, 2008
Automounting Home Directories from a Centralized NFS server
Right, so NFS is set up and running on your server. You must export the /home folder on your NFS server by adding it to your /etc/exports file and then typing exportfs -a
NIS should also be configured and running on the same server as described in my previous post.
On the clients computer you need to edit your /etc/auto.master file and add the following entry
/home /etc/auto.home
then create a file called /etc/auto.home and put the following line in your new /etc/auto.home file
* -rw,nosuid,soft servername:/home/&
where servername is the name or IP address of your NFS/NIS server
This entry in your /etc/auto.home file ensures that any directory a user tries to access under the local /home directory (thanks to the "*" wildcard) triggers an NFS mount of the matching directory from the server's exported /home filesystem.
you could also add the following mount options to further improve matters
rsize=8192,wsize=8192, which speeds up NFS reads (rsize) and writes (wsize) by transferring larger data blocks at a time. Do not set this option on older Linux kernels or with older network cards, as some cards do not cope well with the larger block sizes.
to add this option the file would look like so.
* -rw,nosuid,soft,rsize=8192,wsize=8192 servername:/home/&
The ampersand (&) takes the value of the user name in each line.
Done, your users' home directories are now all centralized on your NFS/NIS server
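To test it, access a home directory on the client as an NIS user (jenny here is just a hypothetical account name):
su - jenny                  # logging in triggers the automount of servername:/home/jenny
mount | grep /home/jenny    # shows the NFS mount created by the automounter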
Monday, August 11, 2008
Centralized user authentication with NIS
You can have multiple NIS servers on the same domain acting as Master and Slaves all managing one central user database.
The server acts as the central repository for all user names, passwords, and groups. The data is replicated from the /etc/passwd file to NIS databases.
On the server, you need to install a package called ypserv.
type apt-get install ypserv if you are using a debian based distribution or type
yum install ypserv if you are using a Red Hat derivative one.
After installing ypserv you need to set up a domain name that will be used by both the server and the clients.
to setup your domain name type
domainname example
to make it persistent, edit the /etc/sysconfig/network file and add the following line
NISDOMAIN=example
where example is the name of your domain. Then build the initial NIS database by running
/usr/lib/yp/ypinit -m
From now on, every time you add or delete a user, you have to update the NIS database. You can do this using the command:
make -C /var/yp
you should set up a cron job to run every hour or so to update the database for you automatically; do this by typing crontab -e
and then adding the following line to your crontab file
0 * * * * make -C /var/yp &> /dev/null
this will build your nis database at the top of every hour
save the file
start the NIS server by typing /etc/init.d/ypserv start
The server is now ready to handle authentication requests from the clients.
On the client, you need to install the yp-tools package: apt-get install yp-tools for Debian-based distros and yum install yp-tools for Red Hat derivative ones
then type
system-config-authentication
which will open your gui configuration program
click on Enable NIS
and then click on Configure NIS
enter the domain name, i.e. example,
and the IP address of your NIS server. If you don't have a GUI, you can alternatively edit your /etc/yp.conf file and point it to the appropriate server and domain name by adding the following line
domain example server servers_ip_address
The /etc/nsswitch.conf file lists the order in which lookups for various things are done, such as DNS lookups, user authentication, etc. To make NIS authentication faster, change the following lines in your /etc/nsswitch.conf file from:
passwd: files nisplus nis
shadow: files nisplus nis
group: files nisplus nis
To the following:
passwd: nis files nisplus
shadow: nis files nisplus
group: nis files nisplus
then start ypbind by typing /etc/init.d/ypbind start
You will now be able to log in to your client machine using the usernames that are stored on your NIS server. You will get an error about not being able to mount your home directory, but my next post on automounting home directories centrally addresses that problem.
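A quick way to confirm that the client has bound to the server and can see the central user database (assuming yp-tools is installed, as above):
ypwhich          # prints the NIS server the client is bound to
ypcat passwd     # dumps the passwd map served by the NIS server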
Sunday, August 10, 2008
Auto mount for Red Hat Derivative Distributions
Auto mounter to the rescue.
The automounter will mount your shares on a temporary basis, as and when you need them. It will also umount your shares automatically after an interval of inactivity (60 seconds by default).
to setup automounter edit your /etc/auto.misc file and add the following
name_of_share -fstype=nfs 192.168.0.160:/share
where 192.168.0.160:/share is your NFS share on the server
name_of_share can be any name you choose; it is the directory name you will need to change into to automount the share.
save your /etc/auto.misc
Your /etc/auto.misc file is driven by the information in your /etc/auto.master file (you shouldn't have to change anything in there, but to understand how automount works it is a good idea to look inside the file).
if you look inside your /etc/auto.master file you will see 2 entries that look like so
/misc /etc/auto.misc
/net -hosts
this tells the automounter to temporarily mount your share under /misc
and your share will also be browsable under /net
you will need to ls /net/192.168.0.160 to see the shares on your server,
or cd into /misc/name_of_share to access the share. Even though you don't see the server or the share under /misc and /net, they appear as soon as you cd into them.
The mount stays mounted while the directory is in use and for 60 seconds after the last activity; then it disappears and you will need to cd or ls the share name again to access it.
If you type cd .. so that you are in the /misc folder and then type ls, you will see your share, but only for 60 seconds, after which it will automatically umount and disappear.
If you want to change the place the automounts happen from /misc to some other directory of your choice, edit your /etc/auto.master file and change /misc to whatever you like. Once saved, restart the automounter service
/etc/init.d/autofs restart
for your changes to take effect.
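A quick walk-through of the behaviour described above (name_of_share is whatever you put in your /etc/auto.misc file):
cd /misc/name_of_share    # changing into the directory triggers the automount
df -h .                   # shows the NFS share mounted at this path
cd /                      # leave the directory; after about 60 seconds of inactivity the share is unmounted again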
Debian-based distributions use different tools to accomplish the same; I will cover the Debian tools in a future post.
NFS Server
first make sure that NFS is installed and running as a service
/etc/init.d/nfs status
if it is not running, type /etc/init.d/nfs start
and to make it start automatically at boot time type
chkconfig nfs on
(if your distribution doesn't come with chkconfig read this previous post on how to install it CHKCONFIG on Ubuntu)
NFS also relies on some other processes to work: portmap, rpc.mountd, nfsd and rpc.rquotad. Except for portmap, the nfs init script will start the others up; however, if you have problems with NFS you can check whether these services are running by typing
rpcinfo -p
NFS is very straightforward to set up; there is only one file you have to edit and that is /etc/exports
edit it and add the following
/directory_name_to_be_shared 192.168.0.0/24(ro,sync,insecure)
make sure there are no spaces between the allowed network and the first (
where 192.168.0.0/24 is the network that you want to make the share available to; this could also be a single IP address, a few IP addresses separated by commas, a domain name, etc.
ro = read only (you can change this to rw for read / write access)
insecure = will allow access on ports above 1024
save your file then type
exportfs -a to activate your nfs shares
If you add more shares to your /etc/exports file, just add them underneath one another and then type
exportfs -ua and then
exportfs -a
this will re-read any modifications that you may have made (exportfs -r does the same in a single step).
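You can verify what is currently being exported, and what clients will see, with:
exportfs -v               # lists the exported directories and the options in force
showmount -e localhost    # shows the export list as a client would see it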
NFS Client
showmount -e [server ip address or server name]
eg
showmount -e 192.168.0.60
This will show you what shares are available.
to temporarily connect to the share you can mount it manually, e.g.
mkdir /mnt/sharename
mount -t nfs 192.168.0.60:/share /mnt/sharename
to make the mount persistent across reboots, add the following to your /etc/fstab file
192.168.0.60:/share /mnt/sharename nfs soft,intr,timeo=100 0 0
-soft = errors out if the share is not available
-intr = allows NFS operations to be interrupted (killed) if the server is unreachable
-timeo=100 = very important; without a timeout, if the share hangs you will not be able to log in to your system.
save your /etc/fstab file and you are done.
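To test the new fstab entry without rebooting:
mount -a                  # mounts everything in /etc/fstab that is not already mounted
df -h /mnt/sharename      # confirms the NFS share is mounted and shows its size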
Swap Space
If your kernel needs more memory than your physical RAM can provide, it will write data to your swap space and use it as if it were RAM. People will argue about how much swap space you should make available; some seem to think that if you add enough RAM you don't need swap space at all. That is not a good idea, since swap space gets used no matter how much RAM your system has: Linux will move infrequently used programs and data to swap even if you have gigabytes of RAM. Since hard disk space costs so little, I always stick to the rule of thumb: allocate double the amount of your physical RAM to swap space, up to a maximum of 4GB (I never make swap space bigger than 4GB, no matter how much RAM I have). If you have 1GB of physical RAM, then you should allocate 2GB of your hard drive to swap space.
Swap space is configured during installation but you can easily add more at anytime.
There are 2 methods of adding swap space to your system the one method is to create a partition of the swap space type and allocate that partition to swap space. The other way is to create a file of the required size (like a paging file) and then allocate that file to swap space.
To use the partition method you will need to create the partition using fdisk.
Create the partition at the size you want to allocate as swap space, and set the partition id to type 82 (Linux swap).
once the partition is created issue the following command
mkswap -L SWAP-hda7 /dev/hda7
where hda7 is the partition that you created
edit your /etc/fstab file and add the following entry
LABEL=SWAP-hda7 swap swap defaults 0 0
Save your /etc/fstab file and then issue the following command to read it into memory and turn it on
swapon -a
Done
Option 2: to use a file instead of a partition. Let's say you want to add a 2GB swap file to your system,
type
dd if=/dev/zero of=/swapfile bs=1024M count=2
this will create a 2GB file called /swapfile (it is a good idea to run chmod 600 /swapfile so that only root can read it)
then type
mkswap -L SWAPFILE /swapfile
then edit your /etc/fstab file and add the following entry
LABEL=SWAPFILE swap swap defaults 0 0
save your /etc/fstab file
and then type
swapon -a to read the fstab entries and turn on all swap entries
you can check your swap status by typing
swapon -s
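You can also see the new swap space alongside your RAM totals by typing
free -m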
Friday, August 8, 2008
Fuser
You want to umount a device, but you can't, as you keep getting a "Device is busy" error.
You cannot umount a file system that has open files or file handles, or that is otherwise in use, and not knowing what is using the device or what is keeping it busy can be extremely frustrating.
fuser to the rescue.
fuser will tell you what processes are using a file system and keeping it busy; it will also allow you to kill the processes that are preventing you from umounting the file system or device.
Let's say it is your USB memory stick on /dev/sda1 that you cannot umount.
Type
fuser -v /dev/sda1
will show you what and who is locking your device.
Then type
fuser -km /dev/sda1
to kill all the processes that are locking up and keeping your device busy.
then you will be able to umount your device without any errors.
fuser will also tell you what process or user is accessing a specific file.
Type fuser -v /filename eg
fuser -v /home/cgerada/filename.txt
and if you wanted to kill the process that is locking up the file, simply type
fuser -km /home/cgerada/filename.txt
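Putting it together for the USB stick example (assuming the stick is mounted at /mnt/usb; adjust the path to match your system):
fuser -vm /mnt/usb    # -m lists every process using any file on that mounted file system
fuser -km /mnt/usb    # kill those processes
umount /mnt/usb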
Wednesday, August 6, 2008
Working with Groups and Shared Directories.
There are three special permission bits: the sticky bit, 1770 (+t), the set group ID (sgid), 2770 (g+s), and the set user ID (suid), 4770 (u+s).
The sticky bit and the sgid permissions are useful on folders that are shared and accessed by a group of users.
The suid permission I will explain in another post, since it is not relevant to this topic.
Sticky Bit:
The sticky bit, set on a directory, means that only the owner of a file in that directory may delete it.
When the sticky bit is set, users are only allowed to delete files that they themselves created.
to set the sticky bit onto a folder simply type
chmod 1770 /folder_name
or
chmod +t /folder_name
eg chmod 1770 /marketing
or chmod +t /marketing
this means that every file created in the /marketing folder can only be deleted by the user who created it.
Set Group ID (Sgid):
The sgid permission, set on a directory, ensures that every file created inside that directory inherits its group from the directory rather than from the primary group of the person who created the file. This is essential for shared directories, as it allows everyone in the group to access the files in the directory. An FTP shared directory, for example, would have the sgid permission set so that all files uploaded into the FTP folder inherit the group's permissions rather than those of the person who uploaded them.
to set the sgid onto a directory simply type
chmod 2770 /folder_name
or
chmod g+s /folder_name
eg chmod 2770 /marketing
or chmod g+s /marketing
it makes sense to set both the sticky bit and the sgid on a group directory. To set both permissions on the same directory, type
chmod 3770 /folder_name
eg chmod 3770 /marketing
let's demonstrate this in a real-life scenario. We need to set up a group and a shared folder called marketing, and we want clive, jenny, ian and anthony to all have access to the /marketing folder.
We need them all to be able to save files into the folder and to edit their own files, but we do not want them to be able to delete each other's files.
First we need to create the marketing folder
mkdir /marketing
next we need to create the marketing group that clive, jenny, ian and anthony will all be part of. A quick way to do this is to type
groupadd marketing
then edit your /etc/group file, find the line that reads marketing:x:501: (the GID may differ on your system), and add the users you want to have access to the group, separated by commas (the users must already exist on the system),
like so
marketing:x:501:clive,jenny,ian,anthony
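If you prefer not to edit /etc/group by hand, the same result can be achieved with usermod (the accounts must still exist first):
usermod -aG marketing clive
usermod -aG marketing jenny
usermod -aG marketing ian
usermod -aG marketing anthony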
next, give ownership of the folder to no specific user (nobody) and to the marketing group. Type
chown nobody.marketing /marketing
next we want to set the sticky bit as well as the sgid on the /marketing folder, so that all files created in the /marketing folder are accessible by everybody who is part of the marketing group,
but only the user who created a file is able to delete it.
type chmod 3770 /marketing
Done.
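A quick sanity check once it is all in place:
ls -ld /marketing    # with mode 3770 the listing should read drwxrws--T, owned by nobody and group marketing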
Find
The Unix philosophy says:
- All programs do only one thing, but they do it really well.
- All programs must work together, i.e. the output of one program must be able to be passed to the input of another.
- Programs must handle text streams, because that is the universal interface.
This philosophy is what makes every command in Linux so feature-rich, and the find command is no exception. Find does one thing, finding files on your system, and it does it really, really well.
the basics of the find command are
find -name "file1.txt"
this will search all directories from the directory that you are in and will look for the file named file1.txt
find -iname "file1.txt"
will search all directories from the directory that you are in and will perform a case-insensitive search for the file named file1.txt, i.e. it will match File1.txt, FiLe1.TxT, FILE1.TXT and file1.txt if they exist
find / -name "file1.txt"
will search your entire file system for file1.txt
find / -name "*.txt"
will search your entire file system for all files ending in .txt
find / -name "*able*"
will search your entire file system for all files that have able in their name.
right, now that we've got the basics out of the way.
multiple search criteria can be passed to find eg
find / -name "*.txt" -user cgerada
will search for and find all .txt files that are owned by user cgerada only.
find / -name "*.txt" -not -user cgerada
will search for and find all .txt files that are NOT owned by cgerada (so all other users' .txt files will be listed)
Right ... Moving on ................
find / -size +10M
will search your file system, finding all files that are greater than 10MB in size.
This is extremely useful for cleaning your hard drive of large zip archives that are taking up space and are no longer needed, but whose location you can't remember.
you can substitute the M for a G for gigabyte ie
find / -size +1G
will find all files on your file system that are larger than 1 GB
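To make that cleanup easier, you can pipe the results through du and sort to see the biggest offenders first (a sketch assuming GNU find and GNU sort, which understands human-readable sizes with -h):
find / -size +100M -exec du -h {} + | sort -rh | head -20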
moving on...................
let's say you need to find a document on your system that you were working on in the last hour; you cannot remember the name of the document, and the only criterion you know is that you worked on it less than an hour ago.
find / -amin -60
will list all files that were accessed less than 60 Minutes ago
-amin = when file was last read (accessed)
-mmin = when file was last changed (modified)
-cmin = when file data or meta data last changed
if you want to search by days rather than minutes, use -atime instead of -amin, e.g.
find / -atime -5
will list all files that were accessed less than 5 days ago
you can narrow down your search to find all .txt files that were accessed less than 5 days ago and are less than 1MB in size
find / -name "*.txt" -size -1M -atime -5
moving on ..................
find is able to find files that are newer or older than a known file.
lets say you want to find a file that you know was created after the time that you created file1.txt but you cannot remember the name of the file.
find / -newer file1.txt
will list all files that are newer than file1.txt. You can combine search criteria to narrow down your search, e.g.
find / -name "*.txt" -newer file1.txt will search for all .txt files that were created after file1.txt, and of course you can negate your search criteria so that you can search for all files that are older than file1.txt, e.g.
find / -name "*.txt" -not -newer file1.txt
will find all .txt files that were created before file1.txt
moving on.......................
Find is able to execute other commands on files that it has found.
to execute other commands on the results of your search we use the
-ok or the -exec switches
lets say for example your hard disk has become full and you want to move all files that end in .zip that are bigger than 100Mb into a directory called /largefiles
find / -name "*.zip" -size +100M -ok mv {} /largefiles/ \;
the -ok switch lets you confirm each file before it is moved
if you substitute -ok for -exec then you will not be asked and the move will just take place.
I advise you to first run the command without the -ok or -exec option, which will just list the files; then, if you are happy that the correct files are listed, re-run the command with the -exec option added
The {} is a placeholder: find substitutes each file it finds into the {} and runs the command on that file name.
The command ends in a \; because find uses ; to terminate the -exec command, but the bash shell also treats ; as a special character; putting a \ in front of it stops bash from interpreting it, so the ; is passed through to find.
other examples: let's say you want to rename all .zip files on your system by appending .old to their names
find / -name "*.zip" -exec mv {} {}.old \;
(GNU find substitutes {} even when it is part of a larger argument such as {}.old)
or lets say you want to find all files on your system that end in .sh and make them all executable
find / -name "*/sh" -exec chmod 755 {} \;
remember whenever you use the -ok or -exec option your statement must end in a \;
lets say you want to be prompted before removing all of cgerada's tmp files that are over 5 days old
find /tmp -ctime +5 -user cgerada -ok rm {} \;
to do the same but not be prompted you would use the -exec option like so
find /tmp -ctime +5 -user cgerada -exec rm {} \;
this will remove all empty directories/folders
find -depth -type d -empty -exec rmdir {} \;
flatten directory tree
This will move all the files with extension JPG in the directory tree starting at the current location to directory /home/cgerada/Pictures/2012-05-13.
find -name "*.JPG" -exec mv {} /home/cgerada/Pictures/2012-05-13 \;
This will list the filesizes of all the .zip files starting from the current directory.
find -name "*.zip" -exec du -h {} \;
This will tell you the total size, in kilobytes, of all files created or modified since 2013-12-01 (the -newermt test requires GNU find)
find . -type f -newermt 2013-12-01 -exec du -k {} + | awk '{ total += $1 } END { print total " KB" }'
Saturday, August 2, 2008
iptables rules to secure against brute force attacks
Read those tips first and then apply these rules.
The following two iptables commands will limit ssh logins to your server to only 4 per minute from the same IP address,
as opposed to the default of unlimited attempts. You can change the numbers to whatever limits you wish.
The reason you would want to do this is to protect against scripts that are written to gain access to your system via brute force attacks on your server.
Look at your /var/log/secure log file to see just how often a dictionary of usernames and passwords has been tried against your server.
By putting the following two iptables rules in place you secure yourself against this type of brute force login attack.
Change eth0 to whichever Ethernet interface connects your server to the outside world.
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROP
you must then save your rules
/etc/init.d/iptables save
The --state switch takes a comma-separated list of connection states as an argument; by using "--state NEW" we make sure that only new connections are matched.
The --set parameter in the first rule adds the IP address of the host that initiated the connection to the "recent" list, where it is checked again by our second rule.
The --update switch in the second rule checks whether the connecting IP address is already on the recent list; it will be, because the first rule's --set added it when the host first connected to port 22.
Once it has confirmed that the host's IP address has indeed connected before, the --seconds switch ensures the IP address is only flagged if the last connection was within the specified time frame, and the --hitcount switch checks whether the number of connection attempts is greater than or equal to the number given.
In short, the second rule drops a connection if the IP address that initiated it is already on the list, has sent a packet in the past 60 seconds, and has made 4 or more connection attempts in that time.
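To confirm the rules are in place, and to watch the DROP counters climb when someone hammers your ssh port, list the INPUT chain with counters:
iptables -L INPUT -n -v --line-numbers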