VPS Management
== Moving a VE to another virt (migrate/migrateonline) ==

This will take a while to complete, so it is best to do it at night when the load is light on both machines. There are different methods, depending on which version of Virtuozzo is installed on the src and dst virt. To check which version is running:

<pre>[root@virt12 private]# cat /etc/virtuozzo-release
Virtuozzo release 2.6.0</pre>

OK, let's say that the VE is 1212, and its vital stats are:

<pre>[root@virt12 sbin]# vc 1212
VE_ROOT="/vz1/root/1212"
VE_PRIVATE="/vz1/private/1212"
OSTEMPLATE="fedora-core-2/20040903"
IP_ADDRESS="69.55.229.84"
TEMPLATES="devel-fc2/20040903 php-fc2/20040813 mysql-fc2/20040812 postgresql-fc2/20040813 mod_perl-fc2/20040812 mod_ssl-fc2/20040811 jre-fc2/20040823 jdk-fc2/20040823 mailman-fc2/20040823 analog-fc2/20040824 proftpd-fc2/20040818 tomcat-fc2/20040823 usermin-fc2/20040909 webmin-fc2/20040909 uw-imap-fc2/20040830 phpBB-fc2/20040831 spamassassin-fc2/20040910 PostNuke-fc2/20040824 sl-webalizer-fc2/20040818"

[root@virt12 sbin]# vzctl exec 1212 df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vzfs             4.0G  405M  3.7G  10% /</pre>

From this you can see that he's using (and will minimally need free on the dst server) ~400MB, and that he's running on a Fedora Core 2 template, version 20040903. He's also got a bunch of other templates installed. It is '''vital''' that '''all''' of these templates exist on the dst system.
To confirm that, on the dst system run:

For < 3.0:

<pre>[root@virt14 private]# vzpkgls | grep fc2
devel-fc2 20040903
PostNuke-fc2 20040824
analog-fc2 20040824
awstats-fc2 20040824
bbClone-fc2 20040824
jdk-fc2 20040823
jre-fc2 20040823
mailman-fc2 20040823
mod_frontpage-fc2 20040816
mod_perl-fc2 20040812
mod_ssl-fc2 20040811
mysql-fc2 20040812
openwebmail-fc2 20040817
php-fc2 20040813
phpBB-fc2 20040831
postgresql-fc2 20040813
proftpd-fc2 20040818
sl-webalizer-fc2 20040818
spamassassin-fc2 20040910
tomcat-fc2 20040823
usermin-fc2 20040909
uw-imap-fc2 20040830
webmin-fc2 20040909
[root@virt14 private]# vzpkgls | grep fedora
fedora-core-1 20040121 20040818
fedora-core-devel-1 20040121 20040818
fedora-core-2 20040903
[root@virt14 private]#</pre>

For these older systems, you can simply match up the date on the template.

For >= 3.0:

<pre>[root@virt19 /vz2/private]# vzpkg list
centos-5-x86        2008-01-07 22:05:57
centos-5-x86 devel
centos-5-x86 jre
centos-5-x86 jsdk
centos-5-x86 mod_perl
centos-5-x86 mod_ssl
centos-5-x86 mysql
centos-5-x86 php
centos-5-x86 plesk9
centos-5-x86 plesk9-antivirus
centos-5-x86 plesk9-api
centos-5-x86 plesk9-atmail
centos-5-x86 plesk9-backup
centos-5-x86 plesk9-horde
centos-5-x86 plesk9-mailman
centos-5-x86 plesk9-mod-bw
centos-5-x86 plesk9-postfix
centos-5-x86 plesk9-ppwse
centos-5-x86 plesk9-psa-firewall
centos-5-x86 plesk9-psa-vpn
centos-5-x86 plesk9-psa-fileserver
centos-5-x86 plesk9-qmail
centos-5-x86 plesk9-sb-publish
centos-5-x86 plesk9-vault
centos-5-x86 plesk9-vault-most-popular
centos-5-x86 plesk9-watchdog</pre>

On these newer systems, it's difficult to tell whether the template on the dst exactly matches the src. Just because a centos-5-x86 is listed on both servers doesn't mean all the same packages are on the dst. To truly know, you must perform a sample rsync dry run:

<pre>rsync -avn /vz/template/centos/5/x86/ root@10.1.4.61:/vz/template/centos/5/x86/</pre>

If you see a ton of output from the dry run, then clearly there are some differences.
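Eyeballing two long template listings is error-prone. As a minimal sketch (the listing contents below are made-up samples; on real virts you would capture the actual <tt>vzpkgls</tt> output from the src and dst into the two files), <tt>comm</tt> from coreutils prints exactly which templates the dst is missing:

```shell
# Hypothetical example: src.txt and dst.txt stand in for `vzpkgls` output
# captured on the source and destination virts. Both must be sorted for comm.
printf '%s\n' 'devel-fc2 20040903' 'php-fc2 20040813' 'mysql-fc2 20040812' | sort > /tmp/src.txt
printf '%s\n' 'php-fc2 20040813' 'mysql-fc2 20040812' | sort > /tmp/dst.txt

# comm -23 prints lines unique to the first file, i.e. templates
# present on the src but missing on the dst.
comm -23 /tmp/src.txt /tmp/dst.txt   # prints: devel-fc2 20040903
```

An empty result means every src template (at that name/date) also appears on the dst; anything printed must be installed or rsynced over before migrating.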
You may opt to let the rsync complete (without dry run mode). The only downside is that you've now used up more space on the dst, and the centos template will be a mess of old and new data- it will be difficult if not impossible to undo (if someday we wanted to reclaim the space). If you choose to merge templates, you should closely inspect the dry run output. You should also take care to exclude anything in the /config directory. For example:

<pre>rsync -av -e ssh --stats --exclude=x86/config /vz/template/ubuntu/10.04/ root@10.1.4.62:/vz/template/ubuntu/10.04/</pre>

Which will avoid this directory and contents:

<pre>[root@virt11 /vz2/private]# ls /vz/template/ubuntu/10.04/x86/config*
app  os</pre>

This is important to avoid since the config may differ on the destination, and we are really only interested in making sure the packages are there, not in overwriting a newer config with an older one.

If the dst system is missing a template, you have 2 choices:
# Put the missing template on the dst system. 2 choices here:
## Install the template from rpm (found under backup2: /mnt/data4/vzrpms/distro/), or
## rsync over the template (found under /vz/template) - see above.
# Put the VE on a system which has all the proper templates.

=== pre-seeding a migration ===

When migrating a customer (or when migrating many), the transfer can take some time depending on how much data there is, and it can be difficult to gauge when a migration will complete or how long it will take. To help speed up the process and get a better idea of how long it will take, you can pre-transfer a customer's data to the destination server. If done correctly, vzmigrate will see the pre-transferred data and pick up where you left off, having much less to transfer (just changed/new files). We believe vzmigrate uses rsync to do its transfer. Therefore not only can you use rsync to do a pre-seed, you can also run rsync to see what is causing a repeatedly-failing vzmigrate to fail.
There's no magic to a pre-seed; you just need to make sure it's named correctly. Given source /vz1/private/1234, and assuming you want to migrate to /vz2 on the target system, your rsync would look like:

<pre>rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/</pre>

After that rsync succeeds, the ensuing migrateonline (or migrate) will take much less time to complete- depending on the number of files to be analyzed and the number of changed files. In any case, it'll be much faster than had you started the migration from scratch. Further, as we discuss elsewhere in this topic, a failed migration can be moved from <tt>/vz/private/1234</tt> to <tt>/vz/private/1234.migrated</tt> on the destination if you want to restart a failed migration. This should '''only''' be done if the migration failed and the CT is not running on the destination HN.

=== migrateonline instructions: src >=3.x -> dst >=3.x ===

A script called [[#migrateonline|migrateonline]] was written to handle this kind of move. It is basically a wrapper for <tt>vzmigrate</tt> - a util to seamlessly move a VE from one host to another, with no reboot of the VE necessary. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the VE's IP(s) on the src system were not properly removed from the arp/route tables, causing problems when the VE was started up on the dst system. [[#migrate|migrate]] mitigates that. Since it makes multiple ssh connections to the dst virt, it's a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrateonline emails VE owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the VE will be moved to the same private/root location it had on the src virt.
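The naming rule is the only part that matters, so it can help to build the command from its parts before running it. A sketch, using only the placeholder values from the example above (<tt>x.x.x.x</tt> stands for the destination IP, as in the original):

```shell
# Placeholder values from the example: VEID 1234, source area /vz1,
# target area /vz2. The destination path is always
# <target area>/private/<veid>.migrated so vzmigrate can find the seed.
veid=1234
src_area=/vz1
dst_area=/vz2
dst_ip=x.x.x.x
cmd="rsync -av ${src_area}/private/${veid}/ root@${dst_ip}:${dst_area}/private/${veid}.migrated/"
echo "$cmd"
# prints: rsync -av /vz1/private/1234/ root@x.x.x.x:/vz2/private/1234.migrated/
```

Printing the command first is a cheap sanity check that the <tt>.migrated</tt> suffix and the target area are right before you move gigabytes of data.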
Note: <tt>migrate</tt> is equivalent to <tt>migrateonline</tt>, but <tt>migrate</tt> will restart the VE in the process.

<pre>[root@virt12 sbin]# migrateonline
usage: /usr/local/sbin/migrateonline <ip of node migrating to> <veid> [target dir: vz | vz1 | vz2]
[root@virt12 sbin]# migrateonline 10.1.4.64 1212 vz
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005
Turning off offline_management
Saved parameters for VE 1212
migrating with no start on 10.1.4.64
Connection to destination HN (10.1.4.64) is successfully established
Moving/copying VE#1212 -> VE#1212, [/vz/private/1212], [/vz/root/1212] ...
Syncing private area '/vz1/private/1212'
- 100% |*************************************************|
done
Successfully completed
clearing the arp cache
now going to 10.1.4.64 and clear cache and starting it
Delete port redirection
Adding port redirection to VE(1): 4643 8443
Adding IP address(es) to pool: 69.55.229.84
Saved parameters for VE 1212
Starting VE ...
VE is mounted
Adding port redirection to VE(1): 4643 8443
Adding IP address(es): 69.55.229.84
Hostname for VE set: fourmajor.com
File resolv.conf was modified
VE start in progress...
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005
[root@virt12 sbin]#</pre>

Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine. If they had backups, use the mvbackups command to move their backups to the new server:

<pre>mvbackups 1212 virt14 vz</pre>

Rename the VE:

<pre>[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/migrated-1212
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/old-1212-migrated-20120404-noarchive</pre>

Update the customer's systems in mgmt to reflect the new path and server. If migrateonline does not work, you can try again using plain migrate- this will result in a brief reboot for the VE.
Before you try again, make sure of a few things. Depending on where the migration died, there may be partial data on the dst system in one of two places (given the example above): /vz/private/1212 or /vz/private/1212.migrated. Before you run migrate again, you'll want to rename so that all the data is in 1212.migrated:

<pre>mv /vz/private/1212 /vz/private/1212.migrated</pre>

This way it will pick up where it left off and transfer only new files. Likewise, if you want to speed up a migration, you can pre-seed the dst as follows:

<pre>[root@virt12 sbin]# rsync -avSH /vz/private/1212/ root@10.1.4.64:/vz/private/1212.migrated/</pre>

Then when you run migrate or migrateonline, it will only need to move the changed files- the migration will complete quickly.

=== migrateonline/migrate failures (migrate manually) ===

Let's say the migration fails for whatever reason. If it fails with [[#migrateonline|migrateonline]], you should try [[#migrate|migrate]] (which will reboot the customer, so notify them ahead of time). You may want to run a [[#pre-seeding_a_migration|pre-seed]] rsync to see if you can find the problem. On older virts, we've seen failures caused by a large logfile (which you can find and encourage the customer to remove/compress):

<pre>find / -size +1048576k -exec ls -lh {} \;</pre>

You may also see a migration fail due to quota issues. You can try to resolve this by copying any quota file into the file you need:

<pre>cp /var/vzquota/quota.1 /var/vzquota/quota.xxx</pre>

If it complains about quota running, you should then be able to stop it:

<pre>vzquota off xxxx</pre>

If all else fails, migrate to a new VEID, i.e. 1234 becomes 12341.

If the rsync or [[#migrate|migrate]] fails, you can always move someone manually:

1. Stop the VE:<br>
<pre>v stop 1234</pre>
2. Copy over the data:<br>
<pre>rsync -avSH /vz/private/1234/ root@1.1.1.1:/vzX/private/1234/</pre>
NOTE: if you've previously seeded the data (run rsync while the VE was up/running) and this is a subsequent rsync, make sure the last rsync you do (while the VE is not running) has the --delete option.
3. Copy over the conf:<br>
<pre>scp /vzconf/1234.conf root@1.1.1.1:/vzconf</pre>
4. On dst, edit the conf to reflect the right vzX dir:<br>
<pre>vi /vzconf/1234.conf</pre>
5. On src, remove the IPs:<br>
<pre>ipdel 1234 2.2.2.2 3.3.3.3</pre>
6. On dst, add the IPs:<br>
<pre>ipadd 1234 2.2.2.2 3.3.3.3</pre>
7. On dst, start the VE:<br>
<pre>v start 1234</pre>
8. Cancel, then archive the VE on src per the instructions above.

=== migrate src=2.6.0 -> dst>=2.6.0, or mass-migration with customer notify ===

A script called <tt>migrate</tt> was written to handle this kind of move. It is basically a wrapper for vzmigrate - a util to seamlessly move a VE from one host to another. This wrapper was initially written because Virtuozzo version 2.6.0 has a bug where the VE's IP(s) on the src system were not properly removed from the arp/route tables, causing problems when the VE was started up on the dst system. migrate mitigates that. Since it makes multiple ssh connections to the dst virt, it's a good idea to put the pub key for the src virt in the authorized_keys file on the dst virt. In addition, migrate emails VE owners when their migration starts and stops. For this to happen they need to put email addresses (on a single line, space delimited) in a file on their system: /migrate_notify. If the optional target dir is not specified, the VE will be moved to the same private/root location it had on the src virt. Note: migrateonline is equivalent to migrate, but will migrate a VE from one 2.6 '''kernel''' machine to another 2.6 kernel machine without restarting the VE.
<pre>[root@virt12 sbin]# migrate
usage: /usr/local/sbin/migrate <ip of node migrating to> <veid> [target dir: vz | vz1 | vz2]
[root@virt12 sbin]# migrate 10.1.4.64 1212 vz
starting to migrate 1212 at Sat Mar 26 22:40:38 PST 2005
Turning off offline_management
Saved parameters for VE 1212
migrating with no start on 10.1.4.64
Connection to destination HN (10.1.4.64) is successfully established
Moving/copying VE#1212 -> VE#1212, [/vz/private/1212], [/vz/root/1212] ...
Syncing private area '/vz1/private/1212'
- 100% |*************************************************|
done
Successfully completed
clearing the arp cache
now going to 10.1.4.64 and clear cache and starting it
Delete port redirection
Adding port redirection to VE(1): 4643 8443
Adding IP address(es) to pool: 69.55.229.84
Saved parameters for VE 1212
Starting VE ...
VE is mounted
Adding port redirection to VE(1): 4643 8443
Adding IP address(es): 69.55.229.84
Hostname for VE set: fourmajor.com
File resolv.conf was modified
VE start in progress...
finished migrating 1212 at Sat Mar 26 22:52:01 PST 2005
[root@virt12 sbin]#</pre>

Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail). Cancel the VE (first we have to rename things which migrate changed so cancelve will find them):

<pre>[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf</pre>

On 2.6.1 you'll also have to move the private area:

<pre>[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212</pre>

<pre>[root@virt12 sbin]# cancelve 1212
v stop 1212
v set 1212 --offline_management=no --save
Delete port redirection
Deleting IP address(es) from pool: 69.55.229.84
Saved parameters for VE 1212
mv /vzconf/1212.conf /vzconf/deprecated-1212
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414
don't forget to remove firewall rules and domains!
[root@virt12 sbin]#</pre>

Note: if the system had backups, [[#cancelve|cancelve]] would offer to remove them.
You want to say '''no''' to this option - doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (note the path changes in this example) from backup.config on the src virt to backup.config on the dst virt. Then go to backup2 and move the dirs. So you'd do something like this:

<pre>mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/</pre>

We don't bother with the other dirs since there's no harm in leaving them and eventually they'll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. If moving to the same drive, you can safely preserve hardlinks and move all files with:

<pre>for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212 /mnt/data1/virt14/$f/vz/private/; done</pre>

To move everyone off a system, you'd do:

<pre>for f in `vl`; do migrate <ip> $f; done</pre>

Update the customer's systems by clicking the "move" link on the moved system, then update the system, template (should be pre-selected as the same), and the shut down date.

=== vzmigrate: src=2.6.1 -> dst>=2.6.0 ===

This version of vzmigrate works properly with regard to handling IPs. It will not notify VE owners of moves as in the above example. Other than that it's essentially the same.

<pre>[root@virt12 sbin]# vzmigrate 10.1.4.64 -r no 1212:1212:/vz/private/1212:/vz/root/1212
migrating on 10.1.4.64
Connection to destination HN (10.1.4.64) is successfully established
Moving/copying VE#1212 -> VE#1212, [/vz/private/1212], [/vz/root/1212] ...
Syncing private area '/vz1/private/1212'
- 100% |*************************************************|
done
Successfully completed
Adding port redirection to VE(1): 4643 8443
Adding IP address(es) to pool: 69.55.229.84
Saved parameters for VE 1212
Starting VE ...
VE is mounted
Adding port redirection to VE(1): 4643 8443
Adding IP address(es): 69.55.229.84
Hostname for VE set: fourmajor.com
File resolv.conf was modified
VE start in progress...
[root@virt12 sbin]#</pre>

Confirm that the system is up and running on the dst virt. Try to ssh to it from another machine (backup2 or mail). Cancel the VE (first we have to rename things which vzmigrate changed so cancelve will find them):

<pre>[root@virt12 sbin]# mv /vzconf/1212.conf.migrated /vzconf/1212.conf
[root@virt12 sbin]# mv /vz1/private/1212.migrated /vz1/private/1212
[root@virt12 sbin]# cancelve 1212
v stop 1212
v set 1212 --offline_management=no --save
Delete port redirection
Deleting IP address(es) from pool: 69.55.229.84
Saved parameters for VE 1212
mv /vzconf/1212.conf /vzconf/deprecated-1212
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414
don't forget to remove firewall rules and domains!
[root@virt12 sbin]#</pre>

Note: if the system had backups, <tt>cancelve</tt> would offer to remove them. You want to say no to this option - doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (note the path changes in this example) from backup.config on the src virt to backup.config on the dst virt. Then go to backup2 and move the dirs. So you'd do something like this:

<pre>mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/</pre>

We don't bother with the other dirs since there's no harm in leaving them and eventually they'll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive.
If moving to the same drive, you can safely preserve hardlinks and move all files with:

<pre>for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212 /mnt/data1/virt14/$f/vz/private/; done</pre>

Update the customer's systems by clicking the "move" link on the moved system, then update the system, template (should be pre-selected as the same), and the shut down date.

=== src=2.5.x ===

First, go to the private dir:

<pre>cd /vz1/private/</pre>

Stop the VE - make sure it stops totally cleanly:

<pre>vzctl stop 1212</pre>

Then you'd use vemove - a script written to copy over the config, create tarballs of the VE's data on the destination virt, and cancel the VE on the source system (in this example we're going to put a VE that was in /vz1/private on the src virt into /vz/private on the dst virt):

<pre>[root@virt12 sbin]# vemove
ERROR: Usage: vemove veid target_ip target_path_dir
[root@virt12 sbin]# vemove 1212 10.1.4.64 /vz/private/1212
tar cfpP - 1212 --ignore-failed-read | (ssh -2 -c arcfour 10.1.4.64 "split - -b 1024m /vz/private/1212.tar" )
scp /vzconf/1212.conf 10.1.4.64:/vzconf
cancelve 1212
v stop 1212
v set 1212 --offline_management=no --save
Delete port redirection
Deleting IP address(es) from pool: 69.55.229.84
Saved parameters for VE 1212
mv /vzconf/1212.conf /vzconf/deprecated-1212
mv /vz1/private/1212 /vz1/private/old-1212-cxld-20050414
don't forget to remove firewall rules and domains!
[root@virt12 sbin]#</pre>

Note: if the system had backups, cancelve would offer to remove them. You want to say no to this option - doing so would mean that the backups would have to be recreated on the dst virt. Instead, copy over the backup configs (note the path changes in this example) from backup.config on the src virt to backup.config on the dst virt. Then go to backup2 and move the dirs.
So you'd do something like this:

<pre>mv /mnt/data1/virt12/0/vz1/private/1212 /mnt/data3/virt14/0/vz/private/</pre>

We don't bother with the other dirs since there's no harm in leaving them and eventually they'll drop out. Besides, moving hardlinked files across a filesystem as in the example above will create actual files and consume lots more space on the target drive. If moving to the same drive, you can safely preserve hardlinks and move all files with:

<pre>for f in 0 1 2 3 4 5 6; do mv /mnt/data1/virt12/$f/vz1/private/1212 /mnt/data1/virt14/$f/vz/private/; done</pre>

When you are done, go to /vz/private on the dst virt; you will have files like this:

<pre>1212.taraa
1212.tarab
1212.tarac</pre>

Each one 1024m (or less, for the last one) in size. On the dst server, run:

<pre>cat 1212.tar?? | tar xpPBf -</pre>

and after 20 mins or so it will be totally untarred. Since the conf file is already there, you can now go ahead and start the system:

<pre>vzctl start 1212</pre>

Update the customer's systems by clicking the "move" link on the moved system, then update the system, template (should be pre-selected as the same), and the shut down date.

NOTE: you MUST tar the system up using the Virtuozzo version of tar that is on all the virt systems, and further you MUST untar the tarball with the Virtuozzo tar, using these options: <tt>tar xpPBf -</tt>. If you tar up an entire VE and move it to a non-virtuozzo machine, that is OK, and you can untar it there with normal tar commands, but do not untar it and then repack it with a normal tar and expect it to work - you need to use Virtuozzo tar commands on Virtuozzo tarballs to make it work. The backups are sort of an exception, since we are (usually) just restoring user data that was created after we gave them the system, and therefore has nothing to do with magic symlinks or vz-rpms, etc.
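The split/reassemble pattern vemove relies on can be tried safely at small scale before trusting it with a real VE. This is just an illustration with a throwaway file and 5-byte chunks in place of the 1024m pieces; nothing here is Virtuozzo-specific:

```shell
# Make a small stand-in for the VE tarball.
printf 'The quick brown fox' > /tmp/ve.tar

# Split into 5-byte pieces: /tmp/ve.tar.aa, /tmp/ve.tar.ab, ...
# (vemove does the same with `split - -b 1024m`, reading from stdin.)
split -b 5 /tmp/ve.tar /tmp/ve.tar.

# Reassemble in suffix order - the same idea as `cat 1212.tar?? | tar xpPBf -`.
cat /tmp/ve.tar.?? > /tmp/ve.rejoined

# Verify the roundtrip is byte-identical.
cmp /tmp/ve.tar /tmp/ve.rejoined && echo 'reassembly OK'
```

The `??` glob works because split names its pieces aa, ab, ac, ... so shell sort order is also concatenation order; that is why the doc's `cat 1212.tar?? | tar xpPBf -` reconstructs the stream correctly.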