Details are provided at http://www.europe.redhat.com/products/rhel/server/details/
Some important ones are:
1. Red Hat Enterprise Linux 6 supports more sockets, more cores, more threads, and more memory.
2. Memory pages with errors can be declared as "poisoned", and will be avoided.
3. The new default file system, ext4, is faster, more robust, and scales to 16TB.
4. Red Hat cluster nodes can re-enable themselves after failure, without administrative intervention, using unfencing.
5. iSCSI partitions may be used as either root or boot filesystems.
6. The new System Security Services Daemon (SSSD) provides centralized access to identity and authentication resources, and enables caching and offline support.
7. Dracut has been introduced as a replacement for mkinitrd.
8. Xen has been replaced with KVM.
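The dracut tool mentioned in point 7 can be exercised directly; a minimal sketch, assuming a RHEL 6 system with dracut installed:

```shell
# Rebuild the initramfs for the running kernel with dracut (replaces mkinitrd)
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)

# Inspect the contents of the generated image
lsinitrd /boot/initramfs-$(uname -r).img
```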
Thursday, June 23, 2011
Sunday, March 15, 2009
How to install a Red Hat cluster
1. yum groupinstall -y "Cluster Storage" "Clustering"
2. yum install -y iscsi-initiator-utils isns-utils
3. chkconfig iscsi on
chkconfig iscsid on
4. The two "rhel-cluster-nodeX" systems each have two NICs: one for production traffic and one for the high-availability heartbeat.
rhel-cluster-node1
192.168.234.201
10.10.10.1
rhel-cluster-node2
192.168.234.202
10.10.10.2
5. Add hostnames of both hosts to /etc/hosts file on all nodes.
6. Running lsscsi should show the shared storage device; if not, rescan the SCSI bus:
echo "- - -" > /sys/class/scsi_host/host0/scan
7. Suppose the device is /dev/sdb
# pvcreate /dev/sdb
# vgcreate vg1 /dev/sdb
# lvcreate -l 10239 -n lv0 vg1
8. Create the GFS file system with dlm lock for max 8 hosts
# gfs_mkfs -p lock_dlm -t rhel-cluster:storage1 -j 8 /dev/vg1/lv0
9. To administer Red Hat clusters with Conga, start luci and ricci as follows:
service luci start
service ricci start
10. On both systems, initialize the luci server using the luci_admin init command.
service luci stop
luci_admin init
This command creates the 'admin' user and sets its password; follow the on-screen instructions and check for output like the following:
The admin password has been successfully set.
Generating SSL certificates…
The luci server has been successfully initialized
You must restart the luci server for the changes to take effect; run the following to do so:
11. # service luci restart
12. Now start the following cluster services:
# service rgmanager start
# service cman start
13. Edit the fstab file as below.
/dev/vg1/lv0 /data gfs defaults,acl 0 0
14. Point your web browser to https://rhel-cluster-node1:8084 to access luci
15. As administrator of luci, select the cluster tab.
Click Create a New Cluster.
At the Cluster Name text box, enter the cluster name "rhel-cluster".
Add the node name and password for each cluster node.
Click Submit. Clicking Submit causes the following actions:
a. Cluster software packages to be downloaded onto each cluster node.
b. Cluster software to be installed onto each cluster node.
c. Cluster configuration file to be created and propagated to each node in the cluster.
d. The cluster to be started.
A progress page shows the progress of those actions for each node in the cluster.
When the process of creating a new cluster is complete, a page is displayed providing a
configuration interface for the newly created cluster.
16. From the management page of the newly created cluster you can add resources.
Add a resource, choose IP Address, and use 192.168.234.200.
17. Create a service named "cluster" and add the "IP Address" resource created above; then:
check “Automatically start this service”
check “Run exclusive”
choose “Recovery policy” as “Relocate”
Save the service.
18. If the service was created without errors, enable it and try to start it on one cluster node.
19. The cluster configuration file is /etc/cluster/cluster.conf.
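Once the cluster is up, its health can be checked from any node; a sketch using the standard cluster-suite tools (cman_tool and clustat are installed with the "Clustering" group from step 1):

```shell
# Show quorum state and member nodes
cman_tool status
cman_tool nodes

# Show cluster services and where they are running
clustat
```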
Friday, March 6, 2009
Configure iSCSI storage with RHEL5
In iSCSI (block level protocol on IP) parlance, the device where data is stored is called the target. This is usually a SAN or NAS device. The program or device on the server that handles communication with the iSCSI target is called the initiator. Red Hat ships a software-based initiator with RHEL.
1. Install iscsi-initiator-utils package.
# yum install iscsi-initiator-utils
2. Configure iSCSI by editing /etc/iscsi/iscsid.conf
node.session.auth.username = My_ISCSI_USR_NAME
node.session.auth.password = MyPassword
discovery.sendtargets.auth.username = My_ISCSI_USR_NAME
discovery.sendtargets.auth.password = MyPassword
3. Start the iscsi service
# /etc/init.d/iscsi start
4. Discover Target storage (suppose with IP address 192.168.0.10)
# iscsiadm -m discovery -t sendtargets -p 192.168.0.10
#/etc/init.d/iscsi restart
5. The new device should now be visible to the system; verify with:
# fdisk -l
6. Partition the disk
# fdisk /dev/sdd
7. Create file system.
# mkfs.ext3 /dev/sdd1
8. Modify /etc/fstab for auto mounting of file system
/dev/sdd1 /mnt/iscsi ext3 _netdev 0 0
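The iSCSI setup above can be sanity-checked before rebooting; a sketch, assuming the /dev/sdd1 and /mnt/iscsi names used in steps 6-8:

```shell
# Confirm the iSCSI session is logged in
iscsiadm -m session

# Create the mount point and test the fstab entry
mkdir -p /mnt/iscsi
mount /mnt/iscsi
df -h /mnt/iscsi
```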
Friday, January 2, 2009
How to create Raw Device in RHEL4 and RHEL5
On RHEL4, raw devices were set up easily using the simple and coherent file /etc/sysconfig/rawdevices, which included an internal example.
On RHEL5 this is no longer the case; instead the udev subsystem must be customized, a rather less documented method.
1. Add to /etc/udev/rules.d/60-raw.rules:
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
2. To set permission (optional, but required for Oracle RAC!), create a new /etc/udev/rules.d/99-raw-perms.rules containing lines such as:
KERNEL=="raw[1-2]", MODE="0640", GROUP="oinstall", OWNER="oracle"
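After adding the rules, udev has to pick them up; a sketch for RHEL5 (start_udev and raw are part of the stock tooling):

```shell
# Re-run udev so the new rules take effect
start_udev

# Verify the raw binding and its permissions
raw -qa
ls -l /dev/raw/raw1
```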
Monday, October 8, 2007
Tuesday, May 15, 2007
How to rescan devices in different Operating Systems
In Solaris
#devfsadm
In Linux
#echo "- - -" > /sys/class/scsi_host/host0/scan
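On a Linux system with several HBAs, rescanning host0 alone is not enough; a sketch that rescans every SCSI host, assuming a 2.6 kernel with sysfs mounted:

```shell
# Trigger a rescan on all SCSI hosts, not just host0
for h in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$h"
done
```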
Sunday, April 29, 2007
How to detect LUNs in RHEL
If a Red Hat system does not recognize any LUN other than LUN 0, do the following.
1. Open the /etc/modules.conf file.
2. For Linux 2.4 kernels, add the following line:
options scsi_mod max_scsi_luns=128
For Linux 2.6 kernels, add the following line:
options scsi_mod max_luns=256
3. Save the file.
4. Rebuild the RAM disk associated with the current kernel:
cd /boot
mkinitrd -v initrd-kernel.img kernel
5. Restart the host.
*******************************************************************
Example modules.conf (using vxfs and the QLogic driver included with Red Hat)
options scsi_mod max_scsi_luns=512
alias scsi_hostadapter megaraid2
alias scsi_hostadapter1 qla2300
alias usb-controller usb-uhci
alias usb-controller1 ehci-hcd
alias eth0 tg3
alias eth1 tg3
alias eth2 e1000
alias eth3 e1000
alias eth4 e1000
alias eth5 e1000
insmod_opt=-N # required for vxportal and vxfs
above vxfs fdd vxportal
post-install vxportal (/usr/lib/fs/vxfs/vxenablef -e full;) >/dev/null 2>&1
alias char-major-10-32 vxportal
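Whether the parameter took effect can be verified after the reboot; a sketch for 2.6 kernels (the sysfs path is the standard location for module parameters):

```shell
# Show the active max_luns value for the scsi_mod module
cat /sys/module/scsi_mod/parameters/max_luns

# Count the LUN entries the kernel currently reports
grep -c 'Lun:' /proc/scsi/scsi
```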
Saturday, February 18, 2006
How to create a LVM2 logical volume and the ext2 or ext3 filesystem
1. Create a new partition using parted, mkpart
# parted /dev/hda "mkpart primary 101.976 2500"
2. Create new physical volume on new partition using pvcreate.
# lvm pvcreate /dev/hda2
3. Create a new volume group using lvm vgcreate for above physical volume.
# lvm vgcreate TestVG /dev/hda2
4. Activate new volume group using lvm vgchange.
# lvm vgchange -a y TestVG
5. Create a test logical volume in this volume group using lvm lvcreate.
# lvm lvcreate -l598 TestVG -nTestLV
6. Create a file system on logical volume using mkfs.ext3.
# mkfs.ext3 /dev/TestVG/TestLV
7. mount the file system.
# mount /dev/TestVG/TestLV /mnt/test
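The steps above can be rounded off by verifying the volume and making the mount persistent; a sketch, assuming the TestVG/TestLV and /mnt/test names used above:

```shell
# Display the new logical volume's attributes
lvm lvdisplay /dev/TestVG/TestLV

# Mount it automatically at boot
echo "/dev/TestVG/TestLV /mnt/test ext3 defaults 1 2" >> /etc/fstab
```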
Wednesday, January 4, 2006
How to find the WWN for an HBA in Linux
Get the fibreutils RPM package, install it, and issue the identify command. This will give you the WWN of the QLogic HBA.
OR
Check the /proc/scsi/(HBATYPE)/XXXXX file, replacing HBATYPE with your card type and XXXXX with your instance number.
Sunday, September 4, 2005
How can I save the coredump (vmcore) onto the hard disk when I get a system OOPS or a kernel panic on Red Hat Enterprise Linux 4?
Make sure the diskdump package is installed and the service is running. Then take a dump using Alt-SysRq-C or "echo c > /proc/sysrq-trigger".
After completing the dump, a vmcore file will be created during the next reboot sequence, and saved in a directory of the name format:
/var/crash/127.0.0.1-
The vmcore file's format is the same as that created by the netdump facility, so you can use the crash command to analyze it.
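Once the vmcore has been written, it is analyzed with crash; a sketch in which both the dump directory name and the debuginfo path are illustrative, assuming the matching kernel-debuginfo package is installed:

```shell
# Analyze the saved dump against the matching debug vmlinux
# (hypothetical paths -- substitute your own dump directory)
crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux \
      /var/crash/127.0.0.1-2005-09-04-12:00/vmcore
```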
Sunday, January 9, 2005
How do I view my initrd file in Red Hat Enterprise Linux 4?
The initrd file is a compressed cpio archive of a temporary root file system. To view the contents of the file copy it to a directory and name it with the .gz file extension:
# cp /boot/initrd-.img /tmp/initrd.gz
Then use the gunzip program to decompress the file. Renaming the file to have a .gz extension allows it to be decompressed with gzip.
# cd /tmp
# gunzip initrd.gz
To view the contents, extract the archive using the cpio command:
# mkdir initrd
# cd initrd
# cpio -cid -I ../initrd
Use the tree command to see the directory structure, then cat the init script, which lists the modules loaded into the kernel.
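The copy/rename/gunzip sequence above can also be collapsed into a single pipeline; a sketch that streams the image straight into cpio:

```shell
# Extract the initrd contents in one step, without renaming
mkdir -p /tmp/initrd && cd /tmp/initrd
zcat /boot/initrd-$(uname -r).img | cpio -idmv
```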
Friday, September 24, 2004
Friday, September 10, 2004
Why does my new kernel stop at boot up with the error message 'Kernel panic: VFS: Unable to mount root fs on ...'?
Red Hat uses loadable kernel modules to dynamically add or remove capabilities in a running kernel. To be able to read an ext3 filesystem for example, the kernel must load the ext3 kernel module.
At boot time the kernel is loaded by a boot loader (GRUB or LILO for example). An initrd image that contains kernel modules needed at boot time is also loaded. If the root ( / ) directory is on an ext3 formatted partition, in order to be able to read from that filesystem the boot kernel must load an initrd image that contains the ext3 kernel module.
If an initrd image is missing or that image does not include suitable kernel modules to access the filesystem on the partition, an error message similar to the following will be seen:
Kernel panic: VFS: Unable to mount root fs on ...
When a new kernel package supplied by Red Hat is installed, a new kernel initrd image is usually created automatically by a post-install script included in the kernel RPM package.
Under some circumstances, an initrd may fail to be created, usually because there is a problem with the loopback device or a temporary filesystem is mounted with tmpfs.
To manually create an initrd image, use the mkinitrd command.
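A manual rebuild for the running kernel looks like this; a sketch (mkinitrd's -f overwrites an existing image, -v shows the modules being included):

```shell
# Rebuild the initrd for the currently running kernel
mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)
```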
Friday, August 20, 2004
How to boot a Linux system in single-user mode
If you are using GRUB, use the following steps to boot into single-user mode:
1. If you have a GRUB password configured, type p and enter the password.
2. Select Red Hat Linux with the version of the kernel that you wish to boot and type e for edit. You will be presented with a list of items in the configuration file for the title you just selected.
3. Select the line that starts with kernel and type e to edit the line.
4. Go to the end of the line and type single as a separate word (press the [Spacebar] and then type single). Press [Enter] to exit edit mode.
5. Back at the GRUB screen, type b to boot into single user mode.
For LILO, type the following at the boot prompt:
linux single
Thursday, May 13, 2004
How Do I Add Temporary Swap Space in Linux?
In addition to a swap partition, Linux can also use a swap file. Some programs, like g++, can use huge amounts of virtual memory, requiring the temporary creation of extra space. To install an extra 64 MB of swap space, for example, use the following shell commands:
# dd if=/dev/zero of=/swap bs=1024 count=65535
# mkswap /swap
# swapon /swap
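Two finishing touches are worth adding to the swap-file recipe above; a sketch (mode 600 keeps other users from reading swapped-out memory, and the fstab line makes the file permanent):

```shell
# Lock down the swap file and enable it at every boot
chmod 600 /swap
echo "/swap swap swap defaults 0 0" >> /etc/fstab

# Confirm the swap space is active
swapon -s
```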