<WRAP center round info 65%>
====Creating VMs with Xen====
[[http://smpfr.mesdiscussions.net/smpfr/Software/Linux/tuto-creation-vm-sujet_205_1.htm]]
[[http://blog.cheramy.name/2010/01/20/deployer-des-machines-virtuelles-avec-les-xen-tools/]]
[[http://www.chicoree.fr/w/Premiers_pas_avec_Xen_sous_Debian]]
[[http://www.404blog.net/?p=15]]
</WRAP>

======Installing Xen on a virtual Debian machine======
-----
<WRAP center round important 90%>
**Running Xen inside a virtual machine is only possible on a 32-bit system, so be sure to use a Debian i386 image rather than amd64.**
</WRAP>
==1. Install the xen meta-package:==
<code=bash>
root@afpi:~# aptitude -P install xen-linux-system
</code>
==2. Swap the script priority in GRUB (so the Xen entry generated by 20_linux_xen comes first) and update:==
<code=bash>
root@afpi:~# mv -i /etc/grub.d/10_linux /etc/grub.d/21_linux
root@afpi:~# update-grub
</code>
==From then on, our GRUB menu will look like this:==
{{:cours:activite1:xen1.png|}}
==3. To avoid getting a boot entry for each virtual machine installed on a volume group, disable the GRUB OS prober. Edit /etc/default/grub and add:==
<code=bash>
# Disable OS prober to prevent virtual machines on logical volumes from appearing in the boot menu.
GRUB_DISABLE_OS_PROBER=true
</code>
After editing GRUB configuration, you must apply it by running:
<code=bash>
root@afpi:~# update-grub
</code>

By default, when the Xen dom0 shuts down or reboots, it tries to save the state of the domUs. This sometimes causes problems, and since it is also cleaner to simply have the VMs shut down when the host does, you can make sure they are shut down normally by setting these parameters in /etc/default/xendomains:
<code=bash>
XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=""
</code>
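
If you prefer to stop a guest by hand before shutting down the host, the standard xm tool can do it; a minimal sketch (xen1VM0 is the domU created later in this tutorial):
<code=bash>
root@afpi:~# xm shutdown xen1VM0
</code>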
==4. Configuring xend: the /etc/xen/xend-config.sxp file==
<code=bash>

##komo##
(logfile /var/log/xen/xend.log)
(xend-http-server yes)
(xend-unix-server yes)
(xend-tcp-xmlrpc-server yes)
(xend-unix-xmlrpc-server yes)
(xend-tcp-xmlrpc-server-address '0.0.0.0')
(xend-tcp-xmlrpc-server-port 8006)
(xend-port 8000)
(xend-address '')
(network-script 'network-bridge netdev=eth0')
(vif-script vif-bridge)
(vnc-listen '0.0.0.0')
(vncpasswd '')
(keymap 'fr')
##komo##
</code>
==5. This config file also has options to set the memory and CPU usage of your dom0, which you may want to change. To reduce dom0 memory usage at boot, use the dom0_mem kernel option in the GRUB_CMDLINE_XEN variable of /etc/default/grub. The Xen wiki also advises disabling dom0 memory ballooning and setting a minimum amount of memory in /etc/xen/xend-config.sxp (1024M is an example):==
<code=bash>
(dom0-min-mem 1024)
(enable-dom0-ballooning no)
</code>
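For reference, a minimal sketch of the matching kernel option in /etc/default/grub (the 1024M value is illustrative; run update-grub afterwards):
<code=bash>
# /etc/default/grub -- cap dom0 memory at boot (illustrative value)
GRUB_CMDLINE_XEN="dom0_mem=1024M"
</code>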
<WRAP center round important 90%>
After any change to /etc/xen/xend-config.sxp, the daemon must be restarted: /etc/init.d/xend restart
</WRAP>
===DomU (guests)===

If you want, you can also use tools that make setting up virtual machines easy, such as:
<code=bash>
xen-tools - apt-get install xen-tools
</code>
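xen-tools provides, among others, the xen-create-image and xen-delete-image helpers used later in this tutorial. A quick sanity check after installing:
<code=bash>
root@afpi:~# xen-create-image --version
</code>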
======File Server (NFS)======
----

===1. Installing the NFS server===
<code=bash>
# aptitude install nfs-kernel-server portmap
</code>
===2. Configuring the NFS server===

==2.a Creating the shared directories:==
<code=bash>
# mkdir /mnt/{xen,iso}
</code>
==2.b Configuring the shares==
<code=bash>
# vim /etc/exports
</code>
==The /etc/exports file==
<code=bash>
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#
/mnt/xen 192.168.202.112(rw,sync,no_root_squash,no_subtree_check)
/mnt/xen 192.168.202.168(rw,sync,no_root_squash,no_subtree_check)
/mnt/iso 192.168.202.112(rw,sync,no_root_squash,no_subtree_check)
/mnt/iso 192.168.202.168(rw,sync,no_root_squash,no_subtree_check)
</code>
==The /etc/hosts.allow file==
<code=bash>

# /etc/hosts.allow: list of hosts that are allowed to access the system.
# See the manual pages hosts_access(5) and hosts_options(5).
#
# Example: ALL: LOCAL @some_netgroup
# ALL: .foobar.edu EXCEPT terminalserver.foobar.edu
#
# If you're going to protect the portmapper use the name "portmap" for the
# daemon name. Remember that you can only use the keyword "ALL" and IP
# addresses (NOT host or domain names) for the portmapper, as well as for
# rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8)
# for further information.
#

#komo : Configuration nfs
portmap:192.168.202.112,192.168.202.168
nfsd:192.168.202.112,192.168.202.168
mountd:192.168.202.112,192.168.202.168
#komo : Configuration nfs
</code>
==The /etc/hosts.deny file==
<code=bash>
# /etc/hosts.deny: list of hosts that are _not_ allowed to access the system.
# See the manual pages hosts_access(5) and hosts_options(5).
#
# Example: ALL: some.host.name, .some.domain
# ALL EXCEPT in.fingerd: other.host.name, .other.domain
#
# If you're going to protect the portmapper use the name "portmap" for the
# daemon name. Remember that you can only use the keyword "ALL" and IP
# addresses (NOT host or domain names) for the portmapper, as well as for
# rpc.mountd (the NFS mount daemon). See portmap(8) and rpc.mountd(8)
# for further information.
#
# The PARANOID wildcard matches any host whose name does not match its
# address.

# You may wish to enable this to ensure any programs that don't
# validate looked up hostnames still leave understandable logs. In past
# versions of Debian this has been the default.
# ALL: PARANOID

#komo : configuration nfs
portmap:ALL
nfsd:ALL
mountd:ALL
</code>
==Apply the exports with:==
<code=bash>
# exportfs -a
</code>
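You can check what the server actually exports with showmount (shipped with the NFS packages):
<code=bash>
# showmount -e localhost
</code>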
======Xen Servers======
----
===1. Installing the NFS client===
<code=bash>
# aptitude install nfs-common portmap
</code>
===2. Creating the mount points===
<code=bash>
# mkdir /mnt/{xen,iso}
</code>
===3. Mounting the NFS filesystems===
<code=bash>
root@afpi:~# mount -t nfs 192.168.202.15:/mnt/xen /mnt/xen/
root@afpi:~# mount -t nfs 192.168.202.15:/mnt/iso /mnt/iso/
</code>
===4. Verification===
<code=bash>
root@afpi:~# mount
...
192.168.202.15:/mnt/iso on /mnt/iso type nfs (rw,addr=192.168.202.15)
192.168.202.15:/mnt/xen on /mnt/xen type nfs (rw,addr=192.168.202.15)
</code>
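
To make these mounts persistent across reboots, they can also be declared in /etc/fstab; a minimal sketch (mount options are illustrative):
<code=bash>
# /etc/fstab -- persistent NFS mounts (illustrative options)
192.168.202.15:/mnt/xen  /mnt/xen  nfs  rw,hard,intr  0  0
192.168.202.15:/mnt/iso  /mnt/iso  nfs  rw,hard,intr  0  0
</code>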

======Creating a VM with xen-tools======
----
===1. Mount the NFS shares===

See the Xen Servers section above, parts 1, 2, 3 and 4.

===2. Creating a domU===

The disk images of our VMs will be plain files (not what is recommended in production, but we may come back to that…) located on the file server in the /mnt/xen directory.

We will use the <wrap em>xen-tools package</wrap> installed earlier (install it now if you have not already).

==a. The ''xen-tools'' configuration file==

The <wrap em>xen-tools utility automates the creation of paravirtualized domUs</wrap> for certain distributions (which means you can also do without it! cf. LM No. 134, page 30). Usage is simple: fill in /etc/xen-tools/xen-tools.conf correctly and everything follows, including the creation of the VM's configuration file.
The file is amply documented and anyone can find what they need in it. Here is nonetheless a working example for a Debian Squeeze:
<code=bash>
root@xen1:~# vim /etc/xen-tools/xen-tools.conf
</code>
<code=bash>
# Directory where the different domains live
dir = /mnt/xen

# Choose the distribution to install
size = 4Gb # Disk image size.
memory = 128Mb # Memory size
swap = 128Mb # Swap size
# noswap = 1 # Don't use swap at all for the new system.
fs = ext3 # use the EXT3 filesystem for the disk image.
#dist = `xt-guess-suite-and-mirror --suite` # Default distribution to install.
dist = squeeze
image = sparse # Specify sparse vs. full disk images.

# Network settings for the virtual interface
gateway = 192.168.254.2
dhcp = 1
nameserver = 192.168.254.2
bridge = eth0

# Prompt for the root password once the image is created
passwd = 1

# Kernel and ramdisk to use for the virtual machines
kernel = /boot/vmlinuz-`uname -r`
initrd = /boot/initrd.img-`uname -r`
# note that we use the ones provided by the host system.

# Mirror from which to download the distribution to install
mirror = `xt-guess-suite-and-mirror --mirror` # same mirror as used to install the host machine
</code>
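
Most of these settings can also be overridden on the xen-create-image command line, which is handy for one-off machines; a hedged example using the same values as above:
<code=bash>
root@xen1:~# xen-create-image --hostname=xen1VM0 --dist=squeeze --dir=/mnt/xen --memory=128Mb
</code>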

===b. Create the VM image===
----
Creating the image works even on a machine whose processor has no hardware virtualization support, since xen-tools builds paravirtualized guests:
<code=bash>
root@xen1:~# xen-create-image --hostname=xen1VM0
</code>
<code=bash>
General Information
--------------------
Hostname : xen1VM0
Distribution : squeeze
Mirror : http://debian.ens-cachan.fr/ftp/debian/
Partitions : swap 128Mb (swap)
/ 4Gb (ext3)
Image type : sparse
Memory size : 128Mb
Kernel path : /boot/vmlinuz-2.6.32-5-xen-686
Initrd path : /boot/initrd.img-2.6.32-5-xen-686

Networking Information
----------------------
IP Address : DHCP [MAC: 00:16:3E:7F:1B:1C]
Gateway : 192.168.254.2
Nameserver : 192.168.254.2


Creating partition image: /mnt/xen/domains/xen1VM0/swap.img
Done

Creating swap on /mnt/xen/domains/xen1VM0/swap.img
Done

Creating partition image: /mnt/xen/domains/xen1VM0/disk.img
Done

Creating ext3 filesystem on /mnt/xen/domains/xen1VM0/disk.img
Done
Installation method: debootstrap
Done

Running hooks
Done

No role scripts were specified. Skipping

Creating Xen configuration file
Done
All done


Logfile produced at:
/var/log/xen-tools/xen1VM0.log

Installation Summary
---------------------
Hostname : xen1VM0
Distribution : squeeze
IP-Address(es) : dynamic
RSA Fingerprint : 9b:f9:36:87:51:67:92:b4:f6:a4:3e:e5:37:15:c2:da
Root Password : N/A

root@xen1:~#
</code>
===c. Launch our freshly created domU (-c attaches its console)===
----
<code=bash>
root@xen1:~# xm create -c xen1VM0.cfg
</code>
<code=bash>
Using config file "/etc/xen/xen1VM0.cfg".
Started domain xen1VM0 (id=5)
[ 0.000000] Reserving virtual address space above 0xf5800000
[ 0.000000] Initializing cgroup subsys cpuset
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 2.6.32-5-xen-686 (Debian 2.6.32-39) (dannf@debian.org) (gcc version 4.3.5 (Debian 4.3.5-4) ) #1 SMP Thu Nov 3 09:08:23 UTC 2011
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] AMD AuthenticAMD
...
...
...
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7
DHCPOFFER from 192.168.254.254
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 192.168.254.254
bound to 192.168.254.130 -- renewal in 801 seconds.
...
...
...
Debian GNU/Linux 6.0 xen1VM0 hvc0

xen1VM0 login: root
Password:
Last login: Sun Dec 4 16:52:38 CET 2011 on hvc0
Linux xen1VM0 2.6.32-5-xen-686 #1 SMP Thu Nov 3 09:08:23 UTC 2011 i686

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@xen1VM0:~#
</code>

======Cloning a VM======
----
===1. With xen-tools===

==a. Copy the image files:==
<code=bash>
root@xen1:~# cp -R /mnt/xen/domains/xen1VM0/ /mnt/xen/domains/xen1VM0-Clone
root@xen1:~# cp /etc/xen/xen1VM0.cfg /etc/xen/xen1VM0-clone.cfg
</code>
==b. Edit /etc/xen/xen1VM0-clone.cfg:==
<code=bash>
disk = [
'file:/mnt/xen/domains/xen1VM0-Clone/disk.img,xvda2,w',
'file:/mnt/xen/domains/xen1VM0-Clone/swap.img,xvda1,w',
]

vif = [ 'mac=00:16:3E:7F:1B:1D,bridge=eth0' ]
name = 'xen1VM0-clone'
</code>
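Note that the vif line uses a MAC address that differs from the original VM's (00:16:3E:7F:1B:1C): two running domUs must not share the same MAC address on the bridge.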
==c. You can mount the new image to check the fstab:==
<code=bash>
root@xen1:~# mount -o loop /mnt/xen/domains/xen1VM0-Clone/disk.img mntTemp/
root@xen1:~# less mntTemp/etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/xvda1 none swap sw 0 0
/dev/xvda2 / ext3 noatime,nodiratime,errors=remount-ro 0 1
</code>
Do not forget to unmount the directory once the check is done!
<code=bash>
root@xen1:~# umount mntTemp/
</code>
==d. Start the new machine==
<code=bash>
root@xen1:~# xm create xen1VM0-clone.cfg
</code>
<code=bash>
Using config file "/etc/xen/xen1VM0-clone.cfg".
Started domain xen1VM0-clone (id=3)
</code>
Check that our VM is running:
<code=bash>
root@xen1:~# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 875 1 r----- 339.4
xen1VM0 128 1 4.0
xen1VM0-clone 3 128 1 -b---- 8.3
</code>
==e. You can reattach the console with:==
<code=bash>
root@xen1:~# xm console 3
</code>
Or
<code=bash>
root@xen1:~# xm console xen1VM0-clone
</code>
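To detach from the console again without stopping the VM, press Ctrl+] (Ctrl+( in PuTTY).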

======Creating a snapshot with xen-tools======
----
==a. Create a snapshot==

It is possible to save the state of a guest machine so that it can be restored later, after a problem or during tests.
For this we use the xm save command; the ID (6 here) is the domain ID reported by xm list, and the -c flag leaves the domain running after the save:
<code=bash>
root@xen1:~# mkdir /mnt/xen/snap
root@xen1:~# xm save -c 6 /mnt/xen/snap/snap-xen1VM0
root@xen1:~# ls -lh /mnt/xen/snap
total 125M
-rwxr-xr-x 1 root root 125M 4 déc. 22:16 snap-xen1VM0
</code>
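xm also accepts the domain name instead of the numeric ID; an equivalent call, assuming the clone is the running guest:
<code=bash>
root@xen1:~# xm save -c xen1VM0-clone /mnt/xen/snap/snap-xen1VM0
</code>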
==b. Restore a snapshot==

For the restore test, we change the VM's hostname (xen1VM0-Clone → xen1VM0-Clone-mod):
<code=bash>
root@xen1VM0-Clone:~# cat > /etc/hostname
xen1VM0-Clone-mod
root@xen1VM0-Clone:~# hostname xen1VM0-Clone-mod
root@xen1VM0-Clone:~# logout

Debian GNU/Linux 6.0 xen1VM0-Clone-mod hvc0

xen1VM0-Clone-mod login:
</code>
Shut the machine down:
<code=bash>
root@xen1VM0-Clone-mod:~# poweroff
</code>
To restore the machine, simply run:
<code=bash>
root@xen1:~# xm restore /mnt/xen/snap/snap-xen1VM0
root@xen1:~# xm console 8
[ 201.071370] Setting capacity to 8388608
[ 201.077355] Setting capacity to 8388608
[ 201.155122] Setting capacity to 262144

Debian GNU/Linux 6.0 xen1VM0-Clone hvc0

xen1VM0-Clone login: root
Password:
Last login: Sun Dec 4 22:43:10 CET 2011 on hvc0
Linux xen1VM0-Clone 2.6.32-5-xen-686 #1 SMP Thu Nov 3 09:08:23 UTC 2011 i686

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@xen1VM0-Clone:~#
</code>
We have restored the VM's state from before the name change!

======Migrating a VM======
----
===TOPOLOGY===

{{:cours:activite1:xen2.png|}}

==1. With xen-tools==

[[http://www.virtuatopia.com/index.php/Migrating_Xen_domainU_Guests_Between_Host_Systems]]

==a. Requirements for Xen domainU Migration==

Before a Xen guest system can be migrated from one host to another, a number of requirements must be met:

  * Both the source and destination hosts must have access to the root filesystem (and swap, if specified) via the same path name. For example, if the root filesystem is contained in a disk image at /xen/xenguest.img, that image file must be accessible at the same location on the target host. This is most commonly achieved by placing the image files on a file server and mounting them via NFS.
  * Both systems must be running compatible processors.
  * The target host must have sufficient memory to accommodate the migrated guest domain.
  * The source and destination machines must reside on the same subnet.
  * The two systems must be running compatible versions of Xen.
  * Firewall settings (and SELinux, if enabled) must be configured to permit communication between the source and destination hosts.
  * Both systems must be configured to allow migration of virtual machines.
==b. Enabling Xen Guest Migration==

By default, guest domain migration is disabled in the Xen configuration, so a number of changes are necessary before performing a migration. The required settings are located in the /etc/xen/xend-config.sxp configuration file, and the changes must be made on both the source and target host systems. The first modification is to set the xend-relocation-server value to yes:
<code=bash>
(xend-relocation-server yes)
</code>
Secondly, the xend-relocation-hosts-allow value must be changed to define the hosts from which relocation requests will be accepted. This can be a list of hostnames or IP addresses, including wildcards. An empty value may also be specified (somewhat insecurely) to accept connections from any host.

In our example, we add the following to /etc/xen/xend-config.sxp on server xen1:
<code=bash>
(xend-relocation-hosts-allow '192.168.254.131')
</code>
Finally, the xend-relocation-address and xend-relocation-port settings on the source and destination systems must match. Leaving these values commented out with '#' so that the defaults are used is also a viable option, as shown below:
<code=bash>
#(xend-port 8000)
#(xend-relocation-port 8002)
</code>
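As noted earlier, xend must be restarted on both hosts for these changes to take effect:
<code=bash>
root@xen1:~# /etc/init.d/xend restart
</code>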
==c. Preparing the Xen Migration Environment==

The directory holding the disk images of the VM to be migrated must be <wrap em>readable and writable by both Xen servers</wrap>. If you have followed this lab, the disk images of our VMs live on the NFS server (fileServer in the figure). We therefore just need to mount the NFS shares on both Xen servers (see the procedure above).

==d. Running the DomainU Guest==

Before attempting a live migration of the Xen guest, it is worth first checking that the guest runs successfully on both the source and target hosts. This verifies, for example, that both systems are configured with enough memory to execute the guest (not verified in what follows).
<code=bash>
root@xen1:~# xm create xen1VM0-clone.cfg -c
</code>
Assuming the guest boots successfully, execute the appropriate shutdown command and wait for the guest to exit. Once you are certain the guest has exited, repeat the above steps on the target system. If any problems occur on either system, rectify them before attempting the migration. Be sure to shut down the guest on the target system before proceeding.

==e. Performing the Migration==

The first step in performing the migration is to start up the guest on the source host:
<code=bash>
root@xen1:~# xm create xen1VM0-clone.cfg -c
</code>
Once the system has booted, exit the console by pressing Ctrl+] (Ctrl+( in PuTTY). We can now view the list of guests running on the host:
<code=bash>
root@xen1:~# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 875 1 r----- 51.7
xen1VM0 128 1 0.0
xen1VM0-clone 4 128 1 -b---- 1.8
</code>
As shown above, our guest domain has been assigned an <wrap em>ID of 4</wrap>. To perform the live migration we need to use the xm migrate command, the syntax of which is as follows:
<code=bash>
xm migrate <domain-id> <target-host> -l
</code>
In the syntax outline above, <domain-id> is the ID of the domain to be migrated (obtainable with xm list), <target-host> is the host name or IP address of the destination server, and the -l flag indicates that a live migration is being performed.

In order to migrate our guest (domain ID 4) <wrap em>to our target host (IP address 192.168.254.131)</wrap>, we therefore need to execute the following command:
<code=bash>
root@xen1:~# xm migrate 4 192.168.254.131 -l
</code>
After a short period of time the guest domain <wrap em>no longer appears in the list of guests on the source host and is now running on the target host:</wrap>
<code=bash>
root@xen2:~# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 875 1 r----- 76.5
xen1VM0-clone 1 128 1 -b---- 0.0
</code>
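Back on the source host, xm list should no longer show the migrated guest:
<code=bash>
root@xen1:~# xm list
</code>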

 