Thursday, October 30, 2008

How to change the Time Zone on your Red Hat / CentOS system

cp /usr/share/zoneinfo/Africa/Johannesburg /etc/localtime

will set your time zone to SAST

you can simply use Tab auto-completion to find the exact time zone you want by choosing the continent and city of your choice. For example, to change your time zone to London's time

cp /usr/share/zoneinfo/Europe/London /etc/localtime

to change to universal time type

cp /usr/share/zoneinfo/Universal /etc/localtime

to change to GMT time type

cp /usr/share/zoneinfo/GMT /etc/localtime

if asked if you want to overwrite the localtime file answer yes.

Tab auto completion is very useful in finding the exact time zone you want.
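if you just want to preview a zone before copying anything, the TZ environment variable overrides the system zone for a single command. A quick sketch (nothing here touches /etc/localtime):

```shell
# Preview a zone without changing the system setting:
TZ=Africa/Johannesburg date
TZ=Europe/London date

# date +%Z prints just the zone abbreviation:
TZ=UTC date +%Z
```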

Tuesday, October 14, 2008

How to back up and restore a MySQL database

To back up your MySQL database, type the following.

mysqldump -u username -ppassword database_name > dump.sql

make sure not to leave a space between the -p and the password; otherwise mysqldump will prompt you for a password and will assume that the word you typed after -p is the database name.

the entire database will be backed up into the dump.sql file

then to restore the database, type the following

mysql -u username -ppassword database_name < dump.sql
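for regular backups it helps to date-stamp and compress the dump. A sketch of a wrapper script (the database name, user and password are placeholders; the mysqldump line is commented out so you can dry-run it safely first):

```shell
DB=database_name
USER=username
PASS=password
OUT="dump-$(date +%Y-%m-%d).sql.gz"

# Uncomment to run against a real server; gzip keeps large dumps small:
# mysqldump -u "$USER" -p"$PASS" "$DB" | gzip > "$OUT"
echo "would write $OUT"
```

restore a gzipped dump with: gunzip < dump-2008-10-14.sql.gz | mysql -u username -ppassword database_name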

Thursday, September 25, 2008

How to reset a Forgotten Password in MySQL

Ever forgotten the root password for your MySQL database, or any other user's password for that matter? No problem here,
simply stop MySQL by typing
/etc/init.d/mysqld stop

then start MySQL in safe mode, skipping the grant tables, with the following command

mysqld_safe --skip-grant-tables &

(the trailing & puts it in the background so you get your shell back)

You should see mysqld start up successfully. Now you should be able to connect to mysql without a password.

mysql --user=root mysql

update user set Password=PASSWORD('new-password') where User='root';
flush privileges;
exit;

(the where clause is important: without it, every user's password would be reset)

once done, stop the safe-mode instance and restart MySQL normally

/etc/init.d/mysqld restart

Shaaawiiing

Wednesday, September 17, 2008

How to add multiple users and passwords to your system

for names in user1 user2 user3 user4 user5
do
useradd $names
echo "anypassword" | passwd --stdin $names
done

or you can type it all in one line and hit enter like so

for names in user1 user2 user3 user4 user5;do useradd $names;echo "anypassword" |passwd --stdin $names;done



the above will add user1, user2, user3, user4 and user5 to your system.
All users will have the password "anypassword".
You could instead keep your list of users in a text file. Presuming your text file is called users.txt, and in it you just list the user names underneath one another, eg

user1
user2
user3

etc

for names in `cat users.txt`
do
echo $names
useradd $names
done

If you want to add usernames and passwords from a list, then make a text file like so

user1:password1
user2:password2
user3:password3

etc
save the file; in this example we'll save it as userlist.txt.
then

for names in `cat userlist.txt`
do
user=`echo $names | cut -f1 -d:`
pass=`echo $names | cut -f2 -d:`
useradd $user
echo "$pass" | passwd --stdin $user
done
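the same parsing can also be done with the shell's own read builtin, which splits on the colon directly and avoids the word-splitting surprises of `cat` in backticks. A dry-run sketch (useradd and passwd are only echoed here, so it is safe to try as a normal user):

```shell
cat > userlist.txt <<'EOF'
user1:password1
user2:password2
user3:password3
EOF

created=""
while IFS=: read -r user pass; do
    # echo instead of executing, so this is a harmless dry run:
    echo "useradd $user"
    echo "echo $pass | passwd --stdin $user"
    created="$created $user"
done < userlist.txt
```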

Monday, September 1, 2008

How to remove @#$ Annoying Console beeps

To Remove all console beeps whilst running X type
xset b off

this will disable the annoying console beeps for all programs.

To remove the console beep when running in runlevel 1, 2 or 3 without X, or on one of the virtual consoles (Ctrl-Alt-F1 to F6), type

setterm -blength 0


Thursday, August 28, 2008

Another sweet iptables rule

This one will force your users through your Squid proxy server, even if your users are configured to access the net directly

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128

this assumes that your Squid proxy is listening on port 3128 and running on the same machine as your iptables firewall.

or if your squid proxy server is on a different server, say 192.168.1.10, then use DNAT instead, since REDIRECT can only redirect to the local machine:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.10:3128

if you need to insert the rule at position 5 of an existing chain then
iptables -t nat -I PREROUTING 5 -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128


this will forward all standard port 80 HTTP traffic to your Squid proxy server on port 3128 .......sweet

Tuesday, August 26, 2008

Making files undeletable, updatable only and unbackupable

Attributes on a file help you control what people can do with different files. You can make files:
1: Undeletable (Even by root)
2: Updatable only (Even by root)
3: Unbackupable :) (Even by root)

1: to make a file undeletable type
chattr +i filename
even if root tries to delete the file they will not be allowed to. To make the file deletable again you will need to type
chattr -i filename

2: to make a file updatable only, which means that you will be able to append content to the file but not delete it or remove content from it,
type

chattr +a filename

3: to prevent a file from being backed up by admins using the dump command

type

chattr +d filename

to list the set attributes of a file type

lsattr filename

Tuesday, August 19, 2008

tip to extract configured settings only in a config file

some config files are huge, and have more comments "#" and blank space than they do actual configured settings.
sometimes you need to just have a look at the settings that are configured and not all the comments.
to do this try the following.

cat filename.conf |grep -v "^#" | grep -v "^$" | less

will list the config file without all the lines with comments and will also leave out all the blank space
giving you information on your configured settings only.

the -v switch tells grep to list lines that do NOT match the given string.
the ^ (caret) means the beginning of a line and the $ means the end of a line, so "^#" matches comment lines and "^$" matches blank lines; passing the output
to less allows you to page through the file using your arrow keys
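the two greps can also be rolled into one with an extended regular expression. A quick demonstration on a throwaway file (the file name and contents here are just examples):

```shell
cat > sample.conf <<'EOF'
# This is a comment
Port 22

Protocol 2
# another comment
EOF

# -E enables extended regex; lines starting with # and empty lines are dropped:
grep -Ev '^(#|$)' sample.conf
# prints:
# Port 22
# Protocol 2
```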

Thursday, August 14, 2008

Centralised Logging host

The last thing a cracker does after compromising your system is try to remove all traces of what they have done; they do this by altering or deleting your log files.

Log files are essential in monitoring your system and recovering it back to a working state after a failure or after being compromised. It is a very good idea to have your Log files stored on a central server, both for convenience and security reasons. In a large network it is also more convenient to have all your log files accessible in one central place.

decide on the server that will accept log messages from the other servers. On that server edit your
/etc/sysconfig/syslog file

and edit the stanza SYSLOGD_OPTIONS="-m 0"
add a -r like so
SYSLOGD_OPTIONS="-r -m 0"
-r =(receive log files)

restart syslogd by typing /etc/init.d/syslog restart
now your server is ready to accept logging messages from your other servers/machines

On the machine that you want to send the log files from, edit your /etc/syslog.conf file
and add the following line
user.* @192.168.1.60

(user.* forwards only the user facility, which is what the logger command uses by default; use *.* to forward all messages)

where 192.168.1.60 is the IP address of the server that you set up to receive the log files; you can substitute the IP address with the hostname of the server if you want.

restart syslogd by typing /etc/init.d/syslog restart
your server @ 192.168.1.60 will now receive and store all the log messages from the machine that you have set up to send them.

you can test this new setup by using the logger command to create a log message
logger -i -t Clive "Testing centralised logging"
The message should appear in your centralised logging server's /var/log/messages file




Tuesday, August 12, 2008

Automounting Home Directories from a Centralized NFS server

In previous posts we configured the NFS client and server, and discussed auto-mounting NFS shares and centralised user management. What's left to do is to centralise our users' home directories. This makes backups easier to carry out, as all user data is kept in one place. It also means that our users can log in from any machine on the network and have immediate access to all of their files, which is pretty cool.

Right, so NFS is set up and running on your server. You must export the /home folder on your NFS server by adding it to your /etc/exports file and then typing exportfs -a.
NIS should also be configured and running on the same server as described in my previous post.

On the clients computer you need to edit your /etc/auto.master file and add the following entry

/home /etc/auto.home

then create a file called /etc/auto.home and put the following line in your new /etc/auto.home file

* -rw,nosuid,soft servername:/home/&

where servername is the name or IP address of your NFS/NIS server

This entry in your /etc/auto.home file will ensure that any directory a user tries to access under the local /home directory (due to the "*" character) will cause an NFS mount from the server's exported /home filesystem.
you could also add the following mount options to further improve matters

rsize=8192,wsize=8192, which speeds up NFS communication for reads (rsize) and writes (wsize) by setting a larger data block size to be transferred at one time. Do not set this option on older Linux kernels with older network cards, as some network cards do not work well with the larger block sizes.
to add this option the file would look like so.


* -rw,nosuid,soft,rsize=8192,wsize=8192 servername:/home/&

The ampersand (&) takes the value of the user name in each line.

Done, your users' home directories are now all centralised on your NFS/NIS server

Monday, August 11, 2008

Centralized user authentication with NIS

On a network with lots of users you will need to centralise your /etc/passwd file and your user database so that you can manage all your users in one place. You can centralise user management using NIS, so that all users are added and deleted on one machine only. Users can log in from any other client machine without the need for a local user account on their own machines. (Together with autofs, described in my previous post, and my next post on automounting /home directories over NFS, user databases can be managed in one place.)
You can have multiple NIS servers on the same domain acting as Master and Slaves all managing one central user database.
The server acts as the central repository for all user names, passwords, and groups. The data is replicated from the /etc/passwd file to NIS databases.

On the server, you need to install a package called ypserv.

type apt-get install ypserv if you are using a Debian-based distribution, or type

yum install ypserv if you are using a Red Hat derivative one.
After installing ypserv you need to set up a domain name that is used by both server and client.

to setup your domain name type

domainname example

to make it persistent edit /etc/sysconfig/network file and add the following line

NISDOMAIN=example

where example is the name of your domain (note there are no spaces around the = sign).

Next you need to convert the existing passwd, group and shadow files that contain user information and passwords to the NIS database format. You can do this using the following command:
/usr/lib/yp/ypinit -m

From now on, every time you add or delete a user, you have to update the NIS database. You can do this using the command:

make -C /var/yp

you should set up a cron job to run every hour or so to update the database for you automatically. Do this by typing crontab -e

and then adding the following line to your crontab file

0 * * * * make -C /var/yp > /dev/null 2>&1

(cron runs jobs with /bin/sh, which does not understand bash's &> shorthand, hence the long-form redirection)

this will build your nis database at the top of every hour

save the file

start the NIS server by typing
/etc/init.d/ypserv start

The server is now ready to handle authentication requests from the clients.

On the client, you need to install the yp-tools package, apt-get install yp-tools
for Debian-based distros, and yum install yp-tools for Red Hat derivative ones,
then type
system-config-authentication
which will open your gui configuration program
click on enable NIS
and then click on configure NIS
enter the domain name ie example
and the IP address of your NIS server. If you don't have a GUI, you can alternatively edit your /etc/yp.conf file and point it to the appropriate server and domain name by adding the following line
domain example server servers_ip_address

The /etc/nsswitch.conf file lists the order in which lookups for various things are done, such as DNS lookups, user authentication, etc. To have NIS consulted first for authentication, change the following in your /etc/nsswitch.conf file from:

passwd: files nisplus nis
shadow: files nisplus nis
group: files nisplus nis

To the following:

passwd: nis files nisplus
shadow: nis files nisplus
group: nis files nisplus

start the NIS client service by typing
/etc/init.d/ypbind start

you will now be able to log in to your client machine using the usernames that are stored on your NIS server. You will get an error about not being able to mount your home directory, but my next post on automounting home directories centrally addresses that problem.

Sunday, August 10, 2008

Auto mount for Red Hat Derivative Distributions

In my previous 2 posts we set up the NFS client and NFS server. There is another way of setting up the client so that the mount is temporary and only made if and when the share is required. This speeds up your boot process, as the NFS share will only mount when you need it and not during boot time. A mounted file system or share stays mounted until you umount it; this can cause problems with NFS, especially if the connection to your server is lost while your machine still has the share mounted.
Auto mounter to the rescue.
The automounter will mount your shares on a temporary basis, as and when you need them. It will also umount your shares automatically after an interval of inactivity (60 seconds by default).
to set up the automounter, edit your /etc/auto.misc file and add the following

name_of_share -fstype=nfs 192.168.0.160:/share

where 192.168.0.160:/share is your NFS share on the server.
name_of_share can be any name you choose; it specifies the directory name you cd into to automount the share.

save your /etc/auto.misc

now, your /etc/auto.misc file is referenced by your /etc/auto.master file (you shouldn't have to change anything in there, but to understand how automount works it is a good idea to look inside the file).
if you look inside your /etc/auto.master file you will see 2 entries that look like so
/misc /etc/auto.misc
/net -hosts

this tells the automounter to temporarily mount your share under /misc
and your share will also be browsable under /net

you will need to ls /net/192.168.0.160 to see the shares on your server

or cd into /misc/name_of_share to access the share (even though you don't see the server or the share under /misc and /net, they will appear only when you cd into them).

the mount will stay mounted while the directory is in use, and for 60 seconds after the last activity; then the mount will disappear and you will need to cd or ls the share name to access it again.

if you type cd .. so that you are in the /misc folder and then type ls again, you will now see your share, but it will only be there for 60 seconds, after which it will automatically umount and disappear

if you want to change the place that the automounts take place from /misc to some other directory of your choice, then edit your /etc/auto.master file and change /misc to whatever you like. Once saved, restart the automounter service
/etc/init.d/autofs restart
for your changes to take effect.

Debian-based distributions use different tools to accomplish the same; I will cover the Debian tools in a future post.

NFS Server

To share a directory on your computer and make it available to other computers on your network,
first make sure that NFS is installed and running as a service
/etc/init.d/nfs status

if it is not running type /etc/init.d/nfs start

and to make it start automatically at boot time type
chkconfig nfs on
(if your distribution doesn't come with chkconfig read this previous post on how to install it CHKCONFIG on Ubuntu)

NFS is also reliant on some other processes: portmap, rpc.mountd, nfsd and rpc.rquotad. Except for portmap, the nfs script will start the others up; however, if you have problems with NFS you can check whether these services are running by typing
rpcinfo -p

NFS is very straightforward to set up; there is only one file you have to edit, and that is /etc/exports.
edit it and add the following
/directory_name_to_be_shared 192.168.0.0/24(ro,sync,insecure)
make sure there are no spaces between the allowed network and the first (

where 192.168.0.0/24 is the network that you want to make the share available to; this could be a single IP address, a few IP addresses separated by commas, a domain name, etc.
ro = read only (you can change this to rw for read / write access)
insecure = will allow access on ports above 1024

save your file then type
exportfs -a to activate your nfs shares

If you add more shares to your /etc/exports file just add them underneath one another and then type
exportfs -ua and then
exportfs -a

this will re-read any modifications that you may have made.

NFS Client

You are able to connect to NFS shares that servers on your network have exported. To see what NFS shares a server is making available, simply type
showmount -e [server ip address or server name]

eg

showmount -e 192.168.1.60

This will show you what shares are available.
to temporarily connect to the share you can mount it, eg
mkdir /mnt/sharename
mount -t nfs 192.168.0.60:/share /mnt/sharename

to make the mount persistent after reboots add the following to your /etc/fstab file
192.168.0.60:/share /mnt/sharename nfs soft,intr,timeo=100 0 0

soft = will error out if the share is not available
intr = allows NFS operations to be interrupted if the server is unreachable
timeo=100 = very important, as without a timeout, if the share hangs you will not be able to log in to your system.

save your /etc/fstab file and you are done.




Swap Space

During the life of your PC you will most probably add more physical RAM to it, in which case you may also want to increase the amount of swap space. Swap space is disk space that is reserved for memory usage, the same as a paging file in Windows.

If your kernel needs more memory than your physical RAM can provide, it will write data to your swap space and use it as RAM. People argue about how much swap space you should make available; some think that if you add enough RAM you don't need swap space at all. However, this is not a good idea, since swap space is used no matter how much RAM your system has: Linux will move infrequently used programs and data to swap even if you have gigabytes of RAM. Since hard disk costs are so low, I always stick to the rule of thumb: no matter how much physical RAM I add to my system, I increase my swap space accordingly, up to a maximum of 4GB (I never make swap space bigger than 4GB no matter how much RAM I have). The rule of thumb is that you should have double the amount of swap space as the physical RAM in your computer. If you have 1GB of physical RAM then you should allocate 2GB of your hard drive to swap space.
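that rule of thumb is easy to express as a tiny shell function (sizes in MB; the 4GB cap is my own habit, as described above):

```shell
# Double the RAM, but never allocate more than 4096 MB of swap:
swap_size_mb() {
    swap=$(( $1 * 2 ))
    [ "$swap" -gt 4096 ] && swap=4096
    echo "$swap"
}

swap_size_mb 1024    # 1 GB RAM  -> prints 2048
swap_size_mb 4096    # 4 GB RAM  -> prints 4096 (capped)
```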

Swap space is configured during installation but you can easily add more at anytime.

There are 2 methods of adding swap space to your system: one is to create a partition of the swap type and allocate it as swap space; the other is to create a file of the required size (like a paging file) and allocate that file as swap space.

To use the partition method you will need to create the partition using fdisk.
Create the partition at the size that you want to allocate as swap space and set the partition id to type 82 (Linux swap).
once the partition is created issue the following command
mkswap -L SWAP-hda7 /dev/hda7

where hda7 is the partition that you created

edit your /etc/fstab file and add the following entry

LABEL=SWAP-hda7 swap swap defaults 0 0

Save your /etc/fstab file and then issue the following command to enable the new swap space
swapon -a

Done

Option 2: to use a file instead of a partition. Let's say you want to add a 2GB swap file to your system;
type
dd if=/dev/zero of=/swapfile bs=1M count=2048

this will create a 2GB sized file called /swapfile
then type
mkswap -L SWAPFILE /swapfile
then edit your /etc/fstab file and add the following entry

LABEL=SWAPFILE swap swap defaults 0 0
save your /etc/fstab file
and then type
swapon -a to enable all configured swap entries

you can check your swap status by typing
swapon -s



Friday, August 8, 2008

Fuser

Ever needed to umount a device or file system, or to unplug your portable USB drive,
but you can't, as you keep getting a "device is busy" error?

You cannot umount a file system that has open files or file handles, or that is otherwise in use. Not knowing what is using the device or what is keeping it busy can be extremely frustrating.
fuser to the rescue.
fuser will tell you what processes are using a file system and keeping it busy, fuser will also allow you to kill the processes that are preventing you from umounting the filesystem or device.
Lets say it is your usb memory stick on /dev/sda1 that you cannot umount.
Type
fuser -v /dev/sda1

will show you what and who is locking your device.
Then type
fuser -km /dev/sda1
to kill all the processes that are locking up and keeping your device busy.
then you will be able to umount your device without any errors.

fuser will also tell you what process or user is accessing a specific file.
Type fuser -v /filename eg
fuser -v /home/cgerada/filename.txt
and if you wanted to kill the process that is locking up the file, simply type
fuser -km /home/cgerada/filename.txt

Wednesday, August 6, 2008

Working with Groups and Shared Directories.

There are three special permissions that can be set on files and folders,
namely the sticky bit 1770 (+t), the set group ID (sgid) 2770 (g+s) and the set user ID (suid) 4770 (u+s).

The Sticky bit and the sgid permissions are useful to set onto folders that are shared and accessed by a group of users.

the suid permission I will explain in another post, since it is not relevant to this topic.

Sticky Bit:

The sticky bit sets permissions on a directory such that only the owner of a file is able to delete it.
When the sticky bit is set, users are only allowed to delete files that they created.
to set the sticky bit onto a folder simply type
chmod 1770 /folder_name
or
chmod +t /folder_name

eg chmod 1770 /marketing
or chmod +t /marketing

this would mean that every file created in the /marketing folder can only be deleted by the user who created that file.

Set Group ID (Sgid):

The sgid permission set on a directory ensures that every file created inside that directory inherits its group from the directory, and not from the person who created the file. This is essential in shared directories, as it allows all users who are part of the group to have access to the files in the directory. An FTP shared directory, for example, would have the sgid permission set so that all files uploaded into the FTP folder inherit the group's permissions and not those of the person who uploaded the file.

to set the sgid onto a directory simply type

chmod 2770 /folder_name
or
chmod g+s /folder_name

eg chmod 2770 /marketing
or chmod g+s /marketing

it makes sense to set both the sticky bit and the sgid onto a group directory. To set both permissions onto the same directory type
chmod 3770 /folder_name
eg chmod 3770 /marketing
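you can verify that both bits took with stat (the -c %a format, which prints the octal mode including the special-permission digit, is GNU stat; fine on Red Hat). Trying it on a scratch directory:

```shell
mkdir -p /tmp/marketing-demo
chmod 3770 /tmp/marketing-demo

# %a prints the octal mode, including the leading special-bits digit:
mode=$(stat -c %a /tmp/marketing-demo)
echo "$mode"    # prints 3770

rmdir /tmp/marketing-demo
```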

Let's demonstrate this in a real-life scenario. We need to set up a group and a shared folder called marketing, and we want clive, jenny, ian and anthony to all have access to the marketing folder.
We need them all to be able to save files into the folder, and to be able to edit their own files, but we do not want them to be able to delete each other's files.

First we need to create the marketing folder

mkdir /marketing

next we need to create the marketing group that clive, jenny, ian and anthony are all part of. a quick way to do this is to type
groupadd marketing
and then edit your /etc/group file; where you find marketing:x:501: add the users, separated by commas, that you want to have access to the group (the users must exist on the system)
like so
marketing:x:501:clive,jenny,ian,anthony
next, assign ownership of the folder to no specific user (nobody) and to the marketing group. type
chown nobody.marketing /marketing
next we want to assign the sticky bit as well as the sgid to the /marketing folder, so that all files created in the /marketing folder are accessible by everybody who is part of the marketing group
but only users who created the files are able to delete them.
type chmod 3770 /marketing

Done.

Find

The fundamentals behind the Unix Philosophy are :
  • All Programs do only one thing but they do it really well.
  • All programs must work together i.e. The output of one program must be able to be passed to the input of another.
  • Programs must handle text streams because that is the universal interface.

This philosophy is what makes every command in Linux so feature-rich, and the find command is no exception. Find does one thing, and that is to find files on your system, and it does it really, really well.

the basics of the find command are
find -name "file1.txt"

this will search all directories from the directory that you are in and will look for the file named file1.txt

find -iname "file1.txt"

will search all directories from the directory that you are in and will perform a case-insensitive search for the file named file1.txt, ie it will find File1.txt, FiLe1.TxT, FILE1.TXT and file1.txt if they exist

find / -name "file1.txt"

will search your entire file system for file1.txt

find / -name "*.txt"

will search your entire file system for all files ending in .txt

find / -name "*able*"

will search your entire file system for all files that have able in their name.
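these patterns are easy to try safely on a scratch tree rather than searching / (the paths below are just examples):

```shell
mkdir -p /tmp/find-demo/sub
touch /tmp/find-demo/file1.txt \
      /tmp/find-demo/sub/notes.txt \
      /tmp/find-demo/sub/cable.doc

find /tmp/find-demo -name "*.txt"     # lists both .txt files
find /tmp/find-demo -name "*able*"    # lists cable.doc
```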

right, now that we've got the basics out of the way.

multiple search criteria can be passed to find eg

find / -name "*.txt" -user cgerada

will search for and find all .txt files that are owned by user cgerada only.

find / -name "*.txt" -not -user cgerada

will search for and find all .txt files that are NOT owned by user cgerada (so all other users' .txt files will be listed)

Right ... Moving on ................

find / -size +10M

will search your file system, finding all files that are greater than 10MB in size.
this is extremely useful for cleaning your hard drive of large zip archives that are taking up space and are no longer needed, but you don't remember where they are.

you can substitute the M for a G for gigabyte ie
find / -size +1G

will find all files on your file system that are larger than 1 GB
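-size compares the file's apparent size, so sparse files made with truncate (a GNU coreutils tool) are a cheap way to try it out without filling your disk:

```shell
mkdir -p /tmp/size-demo
truncate -s 11M /tmp/size-demo/big.zip     # 11 MB apparent size, almost no disk used
truncate -s 1M  /tmp/size-demo/small.zip

find /tmp/size-demo -size +10M             # lists only big.zip
```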

moving on...................

Let's say you need to find a document that you were working on in the last hour. You cannot remember its name; the only criterion you know is that you worked on it less than an hour ago.

find / -amin -60

will list all files that were accessed less than 60 Minutes ago

-amin = when file was last read (accessed)
-mmin = when file was last changed (modified)
-cmin = when file data or meta data last changed

if you want to look for files that were accessed more than 1 day ago use -atime instead of -amin eg

find / -atime -5
will list all files that were accessed less than 5 days ago

you can narrow down your search to find all .txt files that were accessed less than 5 days ago and are less than 1MB in size

find / -name "*.txt" -size -1M -atime -5


moving on ..................

find is able to find files that are newer or older than a known file.
Let's say you want to find a file that you know was created after file1.txt, but you cannot remember its name.

find / -newer file1.txt
will list all files that are newer than file1.txt. You can combine search criteria to narrow down your search, eg

find / -name "*.txt" -newer file1.txt

will search for all .txt files that were created after file1.txt. And of course you can negate your search criteria so that you can search for all files that are older than file1.txt, eg
find / -name "*.txt" -not -newer file1.txt

will find all .txt files that were created before file1.txt
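a quick way to see -newer in action is to backdate one file with touch -d (the -d "2 hours ago" form is GNU touch) in a scratch directory:

```shell
mkdir -p /tmp/newer-demo
touch -d "2 hours ago" /tmp/newer-demo/old.txt
touch /tmp/newer-demo/new.txt

find /tmp/newer-demo -name "*.txt" -newer /tmp/newer-demo/old.txt
# lists new.txt only
```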

moving on.......................

Find is able to execute other commands on files that it has found.
to execute other commands on the results of your search we use the
-ok or the -exec switches
lets say for example your hard disk has become full and you want to move all files that end in .zip that are bigger than 100Mb into a directory called /largefiles

find / -name "*.zip" -size +100M -ok mv {} /largefiles/ \;

the -ok switch lets you confirm each file before it is moved
if you substitute -ok for -exec then you will not be asked and the move will just take place.

I advise you to first run the command without the -ok or -exec option, which will just list the files; then, if you are happy that the correct files have been listed, re-run the command and add in the -exec option.
the {} is a placeholder: find substitutes every file name it finds into the placeholder and runs the command on that file.
The reason the command ends in a \; is that find uses ; as its delimiting character, and so does the bash shell; to prevent bash from interpreting the ; we put a \ in front of it, so that the ; delimiter is passed through to find.

other examples: let's say you want to rename all .zip files on your system to .old files

find / -name "*.zip" -exec sh -c 'mv "$1" "${1%.zip}.old"' _ {} \;

(a plain mv {}.old would not work, since mv needs both a source and a destination; wrapping the rename in sh -c handles building the new name)

or lets say you want to find all files on your system that end in .sh and make them all executable

find / -name "*.sh" -exec chmod 755 {} \;

remember whenever you use the -ok or -exec option your statement must end in a \;

lets say you want to be prompted before removing all of cgerada's tmp files that are over 5 days old

find /tmp -ctime +5 -user cgerada -ok rm {} \;

to do the same but not be prompted you would use the -exec option, like so
find /tmp -ctime +5 -user cgerada -exec rm {} \;


 this will remove all empty directories/folders

find -depth -type d -empty -exec rmdir {} \;
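this one is safe to try on a scratch tree first; only genuinely empty directories are removed, and -depth makes find visit children before parents, so nested empty directories go too:

```shell
mkdir -p /tmp/empty-demo/a/b /tmp/empty-demo/keep
touch /tmp/empty-demo/keep/file.txt

# b is removed first, which leaves a empty, so a is removed too;
# keep still contains a file and survives:
find /tmp/empty-demo -depth -type d -empty -exec rmdir {} \;

ls /tmp/empty-demo    # prints: keep
```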


 flatten directory tree

This will move all the files with extension JPG in the directory tree starting at the current location to directory /home/cgerada/Pictures/2012-05-13.

find -name "*.JPG" -exec mv {} /home/cgerada/Pictures/2012-05-13 \;

This will list the filesizes of all the .zip files starting from the current directory.

find -name "*.zip" -exec du -h {} \;

This will tell you the total size, in kilobytes, of all files modified from 2013-12-01 until now

find -newermt 2013-12-01 -type f -exec du -k {} \; | awk '{ total += $1 } END { print total }'

(du -k prints sizes in plain kilobytes so awk can sum them; with du -h the human-readable units would make the sum meaningless)

Saturday, August 2, 2008

IP tables rule to secure against Brute force Attacks

make sure you read my previous post "How to further secure your server for SSH" and apply
those tips first, before applying these rules.

The following 2 iptables commands will limit the number of SSH logins to your server to only 4 per minute from the same IP address,
as compared to the default unlimited setting. You can change the numbers to any limits that you wish.

The reason why you would want to do this is to protect against scripts that are written to gain access to your system via brute force attacks to your server.
look at your /var/log/secure log file to see just how often dictionaries of usernames and passwords are tried against your server.
by putting the following 2 iptables rules in place you secure yourself against these types of brute-force login attacks.

change eth0 to whatever Ethernet port your server is connected to the outside world through.

iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROP

You must then save your rules:

/etc/init.d/iptables save

The --state switch receives a comma separated list of connection states as an argument, by using "--state NEW" this makes sure that only new connections are managed.

The --set parameter in the first rule also insures that the IP address of the host which initiated the connection will be added to the "recent list", where it is then checked if used again in our second rule.

The --update switch in the second rule checks whether the IP address is in the list of recent connections to port 22; it will be in the list, because the first rule's --set switch added it.

Once it has confirmed that the IP address of the host has indeed connected before, the --seconds switch is used to ensure that the IP address is only going to be flagged if the last connection was within the time frame specified. The --hitcount switch checks whether the count of connection attempts is greater than or equal to the number given.

This rule will drop a connection if the IP address which initiated it has previously been added to the list, has sent a packet in the past 60 seconds, and has sent 4 or more packets in total.



1000th Visit


Today 02 August 2008 @ 06:20.00 GMT

Thursday, July 31, 2008

Security with IPTABLES

Another way to secure your server is by using iptables. You can use iptables together with tcp_wrappers or on its own; the choice is yours. The advantage of iptables is that it can be used to ACCEPT, DROP or REJECT packets of data, to FORWARD specific data on to different destinations, and to configure NAT (Network Address Translation), also known as masquerading.

The usage of iptables is as follows:

iptables -t type (action) (direction) (type of packet) -j (what to do)

There are two types that you can choose (the -t switch):
filter = sets a rule for filtering packets
nat = configures Network Address Translation, also known as masquerading
The default type is filter; if you don't specify a -t type, the iptables command will assume that you are setting up a filtering rule, so you can leave out the -t switch when creating a firewall rule.
Next is the (action). You can either:
-A append a rule
-D delete a rule
-L list the currently configured rules
-F flush the rules

Next you need to specify which packets the rules apply to (direction of packet):
INPUT = all incoming packets
OUTPUT = all outgoing packets
FORWARD = all packets that are being forwarded to another computer.

Next you need to specify the source or destination address of the packet:
-s ipaddress
-d ipaddress

Next you need to specify the protocol of the packet using the -p switch,
eg:
-p tcp and then the port using the --dport switch, eg:
-p tcp --dport 80
And then finally what needs to be done with the packet, which is the -j switch:
DROP = the packet is dropped (no message is sent to the requesting host)
REJECT = the packet is rejected and an error message is sent to the requesting host
ACCEPT = the packet is accepted
An ACCEPTED packet can be forwarded by using the -A switch with the FORWARD chain.
Let's set up an iptables chain.
The first step is always to see what rules are already configured. Type:
iptables -L
This reads the rules from your /etc/sysconfig/iptables file. (We do not edit this file directly; it is best to use the iptables command with the relevant switches to configure your chains.)
iptables -L will return your rules in three different categories: INPUT, FORWARD and OUTPUT.
The following command will set a rule that denies all traffic from the 192.168.0.0 network:
iptables -A INPUT -s 192.168.0.0/24 -j REJECT
The following rule will make your server un-ping-able, as it drops all ICMP (ping) packets. Assume that your network is 192.168.0.0; the (!) inverts the meaning, so the rule applies to all IP addresses except those on the 192.168.0.0 network:

iptables -A INPUT ! -s 192.168.0.0/24 -p icmp -j DROP

If you need to insert a rule at line number 3 of the chain, then type:

iptables -I INPUT 3 -s 192.168.0.0/24 -j REJECT

To delete any of the above rules, simply retype them and change the -A to a -D, eg:

iptables -D INPUT ! -s 192.168.0.0/24 -p icmp -j DROP

will remove the previous rule.
You can check your progress by typing iptables -L at any time.
Once you have added the rules that you want, you need to save your configuration. This is done with the following command:
/etc/init.d/iptables save
This will save your configuration into the /etc/sysconfig/iptables file.
You also need to ensure that iptables starts up on run levels 2, 3, 4 and 5 so that it is persistent after a reboot.
To do this, type the following:
chkconfig iptables on

Security with TCP wrappers

TCP_wrappers is on by default and you do not need to start any service for it to work.
TCP_wrappers is configured by editing 2 files /etc/hosts.allow and /etc/hosts.deny

When your system receives a network request for a service, the request is passed on to tcp_wrappers.
tcp_wrappers is very straightforward and easy to set up.
Users and clients that are listed in /etc/hosts.allow are allowed access to the listed services,
and users and clients that are listed in the /etc/hosts.deny file are denied access to the listed services.
It's important to know the order in which your system makes its decisions. When a request is made of your system, it first reads your /etc/hosts.allow file, and if it finds a rule there for the requested service, the rule is obeyed and no additional searches take place. If there are no rules in /etc/hosts.allow for the requested service, your system then looks in /etc/hosts.deny, and if it sees a rule there for the service, the service is denied. If your system sees no rules in either /etc/hosts.allow or /etc/hosts.deny, the service is automatically granted access.
The syntax of your access rules is as follows:

(SERVICES to allow or block, separated by commas) : clients or source addresses

So let's set up some rules. Edit /etc/hosts.deny using your favourite text editor and add the following line:

ALL : ALL

This will make your server air tight, as every service from every host is denied.
However, we can allow the clients and services that we want by adding them to the /etc/hosts.allow file. Remember, your system will first check your /etc/hosts.allow file, and if it finds any matching rules in there then those rules are obeyed and no further checking takes place. So add the following line to /etc/hosts.allow:

ALL : 192.168.0.0/24

Substitute 192.168.0.0/24 with the address of the network that you want to allow access to your server from.


This will allow all services from the 192.168.0.0 network access to your server, but all other networks will be denied, since the rule in your /etc/hosts.deny file will block them.

Your access rules can be very flexible. For example, you could add a rule like this to your /etc/hosts.allow file:

ALL : 192.168.0.0/24 EXCEPT 192.168.0.10

This would allow all hosts on the 192.168.0.0 network access to all services on your server, except for host 192.168.0.10, which will be denied.

You can also allow access to specific services only, eg:

sshd, ftpd, telnetd, http : 192.168.0.20

would allow host 192.168.0.20 to ssh, ftp, telnet and access your server over http.

Likewise, you could also deny access to specific services and specific users by adding the rules to your /etc/hosts.deny file,

eg :
ALL EXCEPT sshd : 192.168.0.0/24
added to your /etc/hosts.deny file would deny all services except ssh to all hosts on the 192.168.0.0/24 network.

As you can see, tcp_wrappers is extremely flexible and straightforward to use.
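Putting the two files together, a minimal setup for a server that should accept SSH only from the local LAN might look like this (the addresses are illustrative):

```
# /etc/hosts.allow  -- checked first; first match wins
sshd : 192.168.0.0/24

# /etc/hosts.deny   -- checked only if hosts.allow did not match
ALL : ALL
```

With this pair in place, an SSH connection from a 192.168.0.x host is allowed by the first file, and everything else falls through to the deny-all rule.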

Other recognised patterns that you can put into your /etc/hosts.allow and /etc/hosts.deny files are:
.hostname.com (will block or allow clients from the specified domain), eg:
ALL : .hostname.com in your /etc/hosts.allow file will allow all clients from the hostname.com domain access to all services on your server.

user@machine_name.hostname.com will apply to the specific user from a given computer

192.168. (since the address ends with a dot, it specifies all hosts whose IP address starts with 192.168.)

To see the exact names of all the services that you can allow or deny, take a look at your /etc/services file.

Running Commands Conditionally

In Linux, every command you run produces and records an exit status indicating whether the command was successful or not. If a command is successful, the exit status is recorded as "0"; if not, the exit status is a number between "1" and "255". The exit status of the last command that you ran is stored in the $? variable.
To see the exit status of your previous command, simply type echo $?
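For example, using the built-in true and false commands, which do nothing except succeed and fail respectively:

```shell
true
echo $?    # prints 0: the previous command succeeded

false
echo $?    # prints 1: the previous command failed
```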
Where this is useful is that it allows you to run commands conditionally, based on a command's exit status. When you type two commands separated by &&, the second command will only run if the first command produced an exit status of "0", ie was successful.
If you separate the commands with ||, the second command will only run if the first command produced an exit status of "1"-"255", ie was not successful.
eg
ping 192.168.0.1 -c1 -w2 && echo "host is up"

-c1 = send 1 ping packet
-w2 = wait 2 seconds for a response

will display "host is up" if the ping command was successful at reaching the host at 192.168.0.1 and
ping 192.168.0.1 -c1 -w2 || echo "host is down"

will display "host is down" if the ping command was unsuccessful in reaching the host at 192.168.0.1

To get the desired result, you should combine the commands like so:

ping 192.168.0.1 -c1 -w2 && echo "host is up" || echo "host is down"

Since you don't care about the actual output of the ping command (you are only interested in whether it was successful or not), you can redirect stdout and stderr to /dev/null, so your final command would be something like:
ping 192.168.0.1 -c1 -w2 &> /dev/null && echo "Host is up" || echo "Host is Down"
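The same pattern scales to a quick reachability sweep over several hosts; the addresses below are illustrative, so substitute your own. (Note the POSIX redirection > /dev/null 2>&1 rather than bash's &> shorthand, since the script declares #!/bin/sh):

```shell
#!/bin/sh
# Report up/down for a list of hosts using the && / || pattern
for host in 192.168.0.1 192.168.0.2 192.168.0.254; do
    ping -c1 -w2 "$host" > /dev/null 2>&1 && echo "$host is up" || echo "$host is down"
done
```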

Sunday, July 27, 2008

Save, Convert and join Youtube movies for playback on your Blackberry

Let's say you see 4 movie clips on YouTube that you want to join together into 1 movie, save it, and play it on your Blackberry.
To start, first download the mencoder tool. It should be available in your distribution's repository.
Type apt-get install mencoder if you have a Debian-derived distro, or
type yum install mencoder if you have a Red Hat one.


When you watch a movie clip on YouTube, the .flv file is automatically saved into your /tmp folder, so once you have watched a clip that you want, you can simply copy it from your /tmp folder and save it somewhere safe. YouTube saves its files with a name that starts with Flash followed by some arbitrary characters, like FlashFgna. Copy these files to another folder so that they do not get deleted, as your system will delete all files in your /tmp folder when you log out.

So, for my example, watch the four movies you want on YouTube, each one in a different tab of your browser. Once you have watched all 4 movies, open up a terminal window.
Type cd ~ and then mkdir videos, which will create the directory we are going to be working in.

cd videos to cd into the folder.
Type cp /tmp/Flash* .
This will copy the four .flv files to your /home/username/videos/ folder. Don't worry if you don't see the .flv extension; they are .flv files, and Linux doesn't care about extensions.

Sometimes you can skip the following step and try to join and convert your Flash files from .flv to .mp4 directly with the mencoder tool, but I have been much more successful first encoding the files to the .avi format, and then, if I want to put the file on my Blackberry, encoding from .avi to .mp4.



To encode from Flash to .avi we need to create the tool that will convert .flv to .avi files. Here is a script that can do this.

Copy the following lines into your clipboard by highlighting them directly from this post and hitting Ctrl-C.

#!/bin/sh

if [ -z "$1" ]; then
    echo "Usage: $0 {-divx|-xvid} list_of_flv_files"
    exit 1
fi

# video encoding bit rate
V_BITRATE=1000

while [ "$1" ]; do
    case "$1" in
        -divx)
            MENC_OPTS="-ovc lavc -lavcopts \
            vcodec=mpeg4:vbitrate=$V_BITRATE:mbd=2:v4mv:autoaspect"
            ;;
        -xvid)
            MENC_OPTS="-ovc xvid -xvidencopts bitrate=$V_BITRATE:autoaspect"
            ;;
        *)
            if file "$1" | grep -q "Macromedia Flash Video"; then
                mencoder "$1" $MENC_OPTS -vf pp=lb -oac mp3lame \
                    -lameopts fast:preset=standard -o \
                    "`basename $1 .flv`.avi"
            else
                echo "$1 is not Flash Video. Skipping"
            fi
            ;;
    esac
    shift
done


Type vi /usr/local/bin/flv2avi.sh
(/usr/local/bin is a good place to save scripts, as it is part of your PATH environment, which means you will be able to execute your script from anywhere on your system.)

This will open up the vi editor.
Once open, type "i" to go into insert mode,
then click Edit, Paste to paste the code into your script.

Type :wq to save your script and exit out of vi.

Type chmod 755 /usr/local/bin/flv2avi.sh to make your script executable.

To convert all the .flv files to .avi, we can do them all in one command. Type flv2avi.sh -divx Flashfile1 Flashfile2 Flashfile3 Flashfile4

where Flashfile1 Flashfile2 Flashfile3 Flashfile4 are the .flv files you want to convert.
Once done, you will have 4 additional files in your /home/username/videos folder, all with .avi extensions.

Now, to join these files together and convert them to one .mp4 file (the format that works best on your Blackberry), we also do it all in one command:
mencoder file1.avi file2.avi file3.avi file4.avi -o newfilename.mp4 -ovc lavc -oac lavc
This will join all 4 files into one and convert it to .mp4.
Next, just copy the file onto your Blackberry and you will be able to play it using your Blackberry's media player.

Saturday, July 26, 2008

Multiple Terminals in one Terminal Window with Terminator.

After a very short while of working on my desktop, I often find myself with a mass of open terminal windows, and it sometimes becomes difficult to work as I struggle to find the terminal window that I want. I don't particularly like using tabbed terminals to open multiple terminals, as I often need to see all of my open terminals side by side at the same time. Terminator to the rescue!
Terminator is a virtual terminal program that lets you split multiple terminals within the same window. Terminator should be available in your Linux distribution's repository: apt-get install terminator if you use a Debian-flavoured distribution, or yum install terminator if you use a Red Hat one.
Once installed, you will find the Terminator icon to start the program under Applications, Accessories, or you can start it at the command line by typing terminator &
When it starts, a terminal window will open which will allow you to open new terminals within it in a split-screen environment. What I like about this is that when you minimize your terminal and later want to get back to it, just by maximizing one window you have all the terminals you were working on immediately accessible.
Once open, try the following:
Ctrl+Shift+O
Split terminals horizontally.
Ctrl+Shift+E
Split terminals vertically.
Ctrl+Shift+N
Move to next terminal.
Ctrl+Shift+P
Move to previous terminal.
Ctrl+Shift+W
Close the current terminal.
Ctrl+Shift+Q
Quit Terminator.
F11
Toggle full screen.
You can also use your mouse to switch between terminals and to resize them by dragging their borders to the required size.

Friday, July 25, 2008

Session Management with Screen

You log into your remote server via SSH and are busy downloading and installing a new program. In the middle of the download you lose your connection to your server. "Connection closed". You have just lost your session! Screen to the rescue.

Screen is a window manager for your SSH terminal sessions. Screen is an absolute life saver when working over SSH, as it allows you to reconnect to your sessions and continue working exactly where you left off. Screen allows you to re-attach to your session.

Screen is available in your Linux distribution's repository. To install screen type

apt-get install screen if you are using a Debian-based distro, or yum install screen if you are using a Red Hat derivative distribution. You will want to install and run screen on the machine that you are connecting to. Once installed, start screen by typing screen.

If you are presented with a text message just hit enter. If nothing happens don't worry it just means that you are now inside a window within screen and it is running and working.

Screen uses the command "Ctrl-A" to send commands to screen instead of the shell. To get help, just type "Ctrl-A" then "?"

Screen supports multiple windows. This is useful for doing simultaneous tasks on the same machine over SSH without opening new sessions. Sometimes I need to run multiple tasks on the same remote machine, or whilst one task is busy running I need to start up another task. Without screen I would need to make a new connection to the same machine, or even multiple connections, and if any of the connections drop then I am screwed. With screen you connect to your remote machine only once, can run multiple tasks on the same connection, and if your connection breaks, no problem: you simply reconnect and re-attach your session.

To open a new window, you just use "Ctrl-A" "c"

run your task, eg mtr www.google.com

Now open a new window with "Ctrl-A" "c" again and start another task. This time let's start top: type top

To get back to your previous screen (the mtr www.google.com task), use "Ctrl-A" "n"

You can create multiple windows and toggle through them with "Ctrl-A" "n" for the next screen or "Ctrl-A" "p" for the previous one .

If you want to close your session but want to return to it later then you must detach from your session instead of closing it. This will leave your process running and will allow you to re-attach to the same process later. "Ctrl-A" "d". This will drop you into your shell. All screen windows are still there and you can re-attach to them later.

So you are using screen and busy downloading a new program, and suddenly your connection drops. Don't worry, screen will keep the download going. Log in to your system and type

screen -R to re-attach to your session, and then use "Ctrl-A" "n" and "p" to toggle between all the sessions you were running on the remote system before your connection was lost.

Another useful feature of screen is its ability to monitor a window for activity or for silence.

Let's say you are downloading a file and you want to know when the download is finished; you will need to monitor for silence on that screen. To do that, type "Ctrl-A" "_". When your download is complete you will get an alert at the bottom with the window number. To quickly go to that window, use "Ctrl-A" followed by the double-quote key ("), then type in the number of the window and press enter. To stop monitoring, go to that window and undo the monitor with the same command. To monitor for activity, type "Ctrl-A" "M"; this will alert you when something new appears on the session that you wanted to monitor.

Screen can also be used to share a terminal session with another user. This is very useful if you need to show someone how to do something.

The host starts screen in a local xterm, using the command screen -S SessionName. The -S switch gives the session a name, which makes multiple screen sessions easier to manage.
Type:
screen -S screendemo
The remote user (bwayne) uses SSH to connect to the host computer (cgerada).
Type:
ssh bwayne@cgerada.computer.ip.address
The host (cgerada) then has to allow multiuser access in the screen session via the command CTRL-A :multiuser on .
Type:
CTRL-A
:multiuser on
Next, the host (cgerada) must grant permission to the remote user (bwayne) to access the screen session using the command CTRL-A :acladd user_name where user_name is the remote user's login ID.
Type:
CTRL-A
:acladd bwayne
The remote user can now connect to the host's screen session. The syntax to connect to another user's screen session is screen -x host_username/sessionname.
Type:
screen -x cgerada/screendemo

Voila, both users will now share the same terminal session.